Initial commit: Turbo Mothership bare metal management cluster
- k0s bootstrap with Cilium and OpenEBS
- ArgoCD apps for infra, CAPI, Tinkerbell, and Netris
- Ansible playbooks for virtual baremetal lab and Netris switches
- CAPI provider manifests for k0smotron and Tinkerbell
ansible/netris-switches/README.md (new file, 159 lines)
@@ -0,0 +1,159 @@
# Ansible Virtual Switch Lab

Creates a virtual Cumulus Linux switch lab using libvirt/KVM with UDP tunnels for inter-switch links.

## Prerequisites

On the hypervisor (Debian/Ubuntu):

```bash
# Install required packages
apt-get update
apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients \
  bridge-utils python3-libvirt python3-lxml genisoimage ansible sshpass

# Download Cumulus Linux image
curl -L -o /var/lib/libvirt/images/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2 \
  https://networkingdownloads.nvidia.com/custhelp/Non_Monetized_Products/Software/CumulusSoftware/CumulusVX/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2

# Ensure libvirt is running
systemctl enable --now libvirtd
```

## Quick Start

```bash
# 1. Create the lab (VMs + links)
ansible-playbook -i inventory.yml playbook.yml

# 2. Wait 2-3 minutes for the switches to boot, then configure them
ansible-playbook -i inventory.yml configure-switches.yml

# 3. SSH to a switch
ssh -p 2200 cumulus@127.0.0.1   # spine-0
ssh -p 2201 cumulus@127.0.0.1   # leaf-0
ssh -p 2202 cumulus@127.0.0.1   # leaf-1
ssh -p 2203 cumulus@127.0.0.1   # leaf-2

# Default credentials: cumulus / cumulus

# 4. Destroy the lab
ansible-playbook -i inventory.yml destroy.yml
```

## Topology (Default)

```
              ┌──────────┐
              │ spine-0  │
              │ AS 65000 │
              └────┬─────┘
                   │
       ┌───────────┼───────────┐
       │           │           │
      swp1        swp2        swp3
       │           │           │
     swp31       swp31       swp31
  ┌────┴───┐  ┌────┴───┐  ┌────┴───┐
  │ leaf-0 │  │ leaf-1 │  │ leaf-2 │
  │AS 65001│  │AS 65002│  │AS 65003│
  └────┬───┘  └────┬───┘  └────┬───┘
      swp1        swp1        swp1
       │           │           │
   server-0    server-1    server-2
```

## Customizing the Topology

Edit `group_vars/all.yml` to modify:

### Add more switches

```yaml
topology:
  spines:
    - name: spine-0
    - name: spine-1   # Add second spine

  leaves:
    - name: leaf-0
    - name: leaf-1
    - name: leaf-2
    - name: leaf-3    # Add more leaves
```

### Add more links (dual uplinks)

```yaml
topology:
  links:
    # First set of uplinks
    - { local: "spine-0", local_port: "swp1", remote: "leaf-0", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp2", remote: "leaf-1", remote_port: "swp31" }
    # Second set of uplinks (redundancy)
    - { local: "spine-1", local_port: "swp1", remote: "leaf-0", remote_port: "swp32" }
    - { local: "spine-1", local_port: "swp2", remote: "leaf-1", remote_port: "swp32" }
```

### Adjust VM resources

```yaml
switch_vcpus: 2
switch_memory_mb: 2048   # 2GB per switch
```

## How It Works

### UDP Tunnels for Switch Links

Each link between switches uses a pair of UDP ports:

```
spine-0:swp1 <--UDP--> leaf-0:swp31

  spine-0 VM                   leaf-0 VM
┌─────────────┐              ┌─────────────┐
│  swp1 NIC   │──────────────│  swp31 NIC  │
│ local:10000 │    UDP/IP    │ local:10001 │
│remote:10001 │<────────────>│remote:10000 │
└─────────────┘              └─────────────┘
```

This is handled by QEMU's `-netdev socket,udp=...` option.
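
For example, the spine-0:swp1 <--> leaf-0:swp31 link above corresponds to a flag pair like the following on each VM (an illustrative sketch of what the playbook emits via `--qemu-commandline`; the MAC addresses here are placeholders):

```bash
# spine-0 side: bind UDP 10000 locally, send frames to leaf-0's port 10001
-netdev socket,id=swp1,udp=127.0.0.1:10001,localaddr=127.0.0.1:10000
-device virtio-net-pci,netdev=swp1,mac=52:54:00:00:01:00

# leaf-0 side: the mirror image of the same port pair
-netdev socket,id=swp31,udp=127.0.0.1:10000,localaddr=127.0.0.1:10001
-device virtio-net-pci,netdev=swp31,mac=52:54:01:01:01:01
```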

### Management Access

Each VM gets a management NIC using QEMU user-mode networking with SSH port forwarding:

- spine-0: localhost:2200 → VM:22
- leaf-0: localhost:2201 → VM:22
- etc.
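
Under the hood each management NIC is a QEMU user-mode network with a `hostfwd` rule; for spine-0 the flags generated by `tasks/create-switch-vm.yml` are equivalent to this sketch:

```bash
-netdev user,id=mgmt,net=192.168.0.0/24,hostfwd=tcp::2200-:22
-device virtio-net-pci,netdev=mgmt,mac=00:01:00:00:00:00
```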

## Useful Commands

```bash
# List running VMs
virsh list

# Console access (escape: Ctrl+])
virsh console leaf-0

# Check switch interfaces
ssh -p 2201 cumulus@127.0.0.1 "nv show interface"

# Check LLDP neighbors
ssh -p 2201 cumulus@127.0.0.1 "nv show service lldp neighbor"

# Check BGP status
ssh -p 2201 cumulus@127.0.0.1 "nv show router bgp neighbor"
```

## Memory Requirements

| Switches    | RAM per Switch | Total |
|-------------|----------------|-------|
| 4 (default) | 2GB            | 8GB   |
| 8           | 2GB            | 16GB  |
| 16          | 2GB            | 32GB  |

With 64GB of RAM you can comfortably run ~25-30 switches.
ansible/netris-switches/configure-switches.yml (new file, 34 lines)
@@ -0,0 +1,34 @@
---
# Configure Cumulus switches after boot
# Run this after the VMs have fully booted (give them ~2-3 minutes)

- name: Configure Cumulus Switches
  hosts: localhost
  gather_facts: no

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

  tasks:
    - name: Wait for switches to be reachable
      wait_for:
        host: 127.0.0.1
        port: "{{ mgmt_ssh_base_port + idx }}"
        delay: 10
        timeout: 300
      loop: "{{ all_switches }}"
      loop_control:
        index_var: idx
        label: "{{ item.name }}"

    - name: Configure each switch
      include_tasks: tasks/configure-switch.yml
      vars:
        switch_name: "{{ item.0.name }}"
        switch_ssh_port: "{{ mgmt_ssh_base_port + item.1 }}"
        switch_type: "{{ 'spine' if item.0.name.startswith('spine') else 'leaf' }}"
        switch_id: "{{ item.1 }}"
      loop: "{{ all_switches | zip(range(all_switches | length)) | list }}"
      loop_control:
        label: "{{ item.0.name }}"
ansible/netris-switches/destroy.yml (new file, 46 lines)
@@ -0,0 +1,46 @@
---
# Destroy Virtual Switch Lab

- name: Destroy Virtual Switch Lab
  hosts: localhost
  become: yes
  gather_facts: no

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"

  tasks:
    - name: Stop VMs
      command: virsh destroy {{ item.name }}
      register: result
      failed_when: false
      changed_when: result.rc == 0
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Undefine VMs
      command: virsh undefine {{ item.name }}
      register: result
      failed_when: false
      changed_when: result.rc == 0
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Remove VM disk images
      file:
        path: "{{ vm_disk_path }}/{{ item.name }}.qcow2"
        state: absent
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Clean up XML definitions
      file:
        path: /tmp/switch-lab-xml
        state: absent

    - name: Lab destroyed
      debug:
        msg: "Virtual Switch Lab has been destroyed"
ansible/netris-switches/group_vars/all.yml (new file, 61 lines)
@@ -0,0 +1,61 @@
# Virtual Switch Lab Configuration
# Adjust these values based on your available RAM

# Base images
cumulus_image: "/var/lib/libvirt/images/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2"
cumulus_image_url: "https://networkingdownloads.nvidia.com/custhelp/Non_Monetized_Products/Software/CumulusSoftware/CumulusVX/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2"
ubuntu_image: "/var/lib/libvirt/images/ubuntu-24.04-server-cloudimg-amd64.img"
vm_disk_path: "/var/lib/libvirt/images"

# VM Resources
switch_vcpus: 2
switch_memory_mb: 2048
server_vcpus: 1
server_memory_mb: 1024

# Management network - SSH access to VMs via port forwarding
# Ports follow spines + leaves order: spine-0 = 2200, leaf-0 = 2201, etc.
mgmt_ssh_base_port: 2200

# UDP tunnel base port for inter-switch links
udp_base_port: 10000

# Topology Definition
# Simple leaf-spine topology for testing
topology:
  spines:
    - name: spine-0
      mgmt_mac: "52:54:00:0a:00:00"  # MAC octets must be hex digits

  leaves:
    - name: leaf-0
      mgmt_mac: "52:54:00:0b:00:00"
    - name: leaf-1
      mgmt_mac: "52:54:00:0b:01:00"
    - name: leaf-2
      mgmt_mac: "52:54:00:0b:02:00"

  # Links format: [local_switch, local_port, remote_switch, remote_port]
  # Each link will get a unique UDP port pair
  links:
    - { local: "spine-0", local_port: "swp1", remote: "leaf-0", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp2", remote: "leaf-1", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp3", remote: "leaf-2", remote_port: "swp31" }
    # Add more links as needed, e.g., dual uplinks:
    # - { local: "spine-0", local_port: "swp4", remote: "leaf-0", remote_port: "swp32" }

# Optional: Simulated servers connected to leaves
servers:
  - name: server-0
    connected_to: leaf-0
    switch_port: swp1
  - name: server-1
    connected_to: leaf-1
    switch_port: swp1
  - name: server-2
    connected_to: leaf-2
    switch_port: swp1

# Cumulus default credentials
cumulus_user: cumulus
cumulus_default_password: cumulus
cumulus_new_password: "CumulusLinux!"
ansible/netris-switches/inventory.yml (new file, 5 lines)
@@ -0,0 +1,5 @@
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
ansible/netris-switches/playbook.yml (new file, 100 lines)
@@ -0,0 +1,100 @@
---
# Ansible Playbook for Virtual Cumulus Switch Lab
# Creates a leaf-spine topology using libvirt/KVM with UDP tunnels for links

- name: Setup Virtual Switch Lab
  hosts: localhost
  become: yes
  gather_facts: yes

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"

  tasks:
    # ===========================================
    # Prerequisites
    # ===========================================
    - name: Ensure required packages are installed
      apt:
        name:
          - qemu-kvm
          - qemu-utils
          - libvirt-daemon-system
          - libvirt-clients
          - virtinst
          - bridge-utils
        state: present
        update_cache: no

    - name: Ensure libvirtd is running
      service:
        name: libvirtd
        state: started
        enabled: yes

    - name: Check if Cumulus image exists
      stat:
        path: "{{ cumulus_image }}"
      register: cumulus_img

    - name: Download Cumulus image if not present
      get_url:
        url: "{{ cumulus_image_url }}"
        dest: "{{ cumulus_image }}"
        mode: '0644'
      when: not cumulus_img.stat.exists

    # ===========================================
    # Create switch VM disks (copy-on-write)
    # ===========================================
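    # qemu-img's -b flag makes each switch disk a thin qcow2 overlay on the
    # shared Cumulus base image, so per-VM disks only store their own writes.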
    - name: Create switch disk images (CoW backed by base)
      command: >
        qemu-img create -f qcow2 -F qcow2
        -b {{ cumulus_image }}
        {{ vm_disk_path }}/{{ item.name }}.qcow2
      args:
        creates: "{{ vm_disk_path }}/{{ item.name }}.qcow2"
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    # ===========================================
    # Build link information for each switch
    # ===========================================
    - name: Create VMs with virt-install
      include_tasks: tasks/create-switch-vm.yml
      vars:
        switch_name: "{{ item.0.name }}"
        switch_index: "{{ item.1 }}"
        ssh_port: "{{ mgmt_ssh_base_port + item.1 }}"
      loop: "{{ all_switches | zip(range(all_switches | length)) | list }}"
      loop_control:
        label: "{{ item.0.name }}"

    # ===========================================
    # Display connection info
    # ===========================================
    - name: Display VM access information
      debug:
        msg: |
          ============================================
          Virtual Switch Lab is running!
          ============================================

          SSH Access (user: cumulus, password: cumulus):
          {% for switch in all_switches %}
          {{ switch.name }}: ssh -p {{ mgmt_ssh_base_port + loop.index0 }} cumulus@127.0.0.1
          {% endfor %}

          Console Access:
          {% for switch in all_switches %}
          {{ switch.name }}: virsh console {{ switch.name }}
          {% endfor %}

          Topology:
          {% for link in topology.links %}
          {{ link.local }}:{{ link.local_port }} <--> {{ link.remote }}:{{ link.remote_port }}
          {% endfor %}

          To destroy the lab: ansible-playbook -i inventory.yml destroy.yml
          ============================================
ansible/netris-switches/tasks/configure-switch.yml (new file, 59 lines)
@@ -0,0 +1,59 @@
---
# Configure a single Cumulus switch via SSH
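# Each switch is reached through its forwarded SSH port on localhost, so these
# tasks run from the control host with sshpass rather than a per-switch
# inventory entry (see mgmt_ssh_base_port in group_vars/all.yml).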

- name: "{{ switch_name }} - Set hostname"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo hostnamectl set-hostname {{ switch_name }}"
  register: result
  retries: 3
  delay: 10
  until: result.rc == 0

# switch_id arrives as a string, so cast it before doing arithmetic
- name: "{{ switch_name }} - Configure loopback IP"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set interface lo ip address 10.0.0.{{ (switch_id | int) + 1 }}/32 && sudo nv config apply -y"
  register: result
  retries: 3
  delay: 5
  until: result.rc == 0

- name: "{{ switch_name }} - Enable LLDP"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set service lldp && sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Bring up all switch ports"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "for i in \$(seq 1 48); do sudo nv set interface swp\$i link state up 2>/dev/null; done && sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Configure BGP ASN"
  delegate_to: 127.0.0.1
  vars:
    bgp_asn: "{{ 65000 + (switch_id | int) }}"
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set router bgp autonomous-system {{ bgp_asn }} && \
       sudo nv set router bgp router-id 10.0.0.{{ (switch_id | int) + 1 }} && \
       sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Configuration complete"
  debug:
    msg: "{{ switch_name }} configured with loopback 10.0.0.{{ (switch_id | int) + 1 }}/32, ASN {{ 65000 + (switch_id | int) }}"
ansible/netris-switches/tasks/create-switch-vm.yml (new file, 83 lines)
@@ -0,0 +1,83 @@
---
# Create a single switch VM using virt-install

- name: "{{ switch_name }} - Check if VM exists"
  command: virsh dominfo {{ switch_name }}
  register: vm_exists
  failed_when: false
  changed_when: false

- name: "{{ switch_name }} - Build link list"
  set_fact:
    switch_links: []
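
# Each inter-switch link is assigned a consecutive pair of UDP ports starting at
# udp_base_port: the "local" end of link N binds udp_base_port + 2N and the
# "remote" end binds udp_base_port + 2N + 1, so the two VMs' netdevs cross-connect.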
- name: "{{ switch_name }} - Add links where this switch is local"
  set_fact:
    switch_links: "{{ switch_links + [{'port': link_item.local_port, 'udp_local': udp_base_port + (link_idx * 2), 'udp_remote': udp_base_port + (link_idx * 2) + 1}] }}"
  loop: "{{ topology.links }}"
  loop_control:
    loop_var: link_item
    index_var: link_idx
    label: "{{ link_item.local }}:{{ link_item.local_port }}"
  when: link_item.local == switch_name

- name: "{{ switch_name }} - Add links where this switch is remote"
  set_fact:
    switch_links: "{{ switch_links + [{'port': link_item.remote_port, 'udp_local': udp_base_port + (link_idx * 2) + 1, 'udp_remote': udp_base_port + (link_idx * 2)}] }}"
  loop: "{{ topology.links }}"
  loop_control:
    loop_var: link_item
    index_var: link_idx
    label: "{{ link_item.remote }}:{{ link_item.remote_port }}"
  when: link_item.remote == switch_name

- name: "{{ switch_name }} - Add server links"
  set_fact:
    switch_links: "{{ switch_links + [{'port': srv_item.switch_port, 'udp_local': udp_base_port + ((topology.links | length + srv_idx) * 2), 'udp_remote': udp_base_port + ((topology.links | length + srv_idx) * 2) + 1}] }}"
  loop: "{{ servers | default([]) }}"
  loop_control:
    loop_var: srv_item
    index_var: srv_idx
    label: "{{ srv_item.name }}"
  when: srv_item.connected_to == switch_name

- name: "{{ switch_name }} - Debug links"
  debug:
    msg: "Links for {{ switch_name }}: {{ switch_links }}"

- name: "{{ switch_name }} - Build virt-install command"
  set_fact:
    virt_install_cmd: >-
      virt-install
      --name {{ switch_name }}
      --vcpus {{ switch_vcpus }}
      --memory {{ switch_memory_mb }}
      --import
      --disk path={{ vm_disk_path }}/{{ switch_name }}.qcow2,bus=sata
      --graphics none
      --video none
      --osinfo detect=on,require=off
      --network none
      --controller usb,model=none
      --noautoconsole
      --qemu-commandline='-netdev user,id=mgmt,net=192.168.0.0/24,hostfwd=tcp::{{ ssh_port }}-:22'
      --qemu-commandline='-device virtio-net-pci,netdev=mgmt,mac=00:01:00:00:{{ "%02x" | format(switch_index | int) }}:00,bus=pci.0,addr=0x10'
      {% for link in switch_links %}
      --qemu-commandline='-netdev socket,udp=127.0.0.1:{{ link.udp_remote }},localaddr=127.0.0.1:{{ link.udp_local }},id={{ link.port }}'
      --qemu-commandline='-device virtio-net-pci,mac=00:02:00:{{ "%02x" | format(switch_index | int) }}:{{ "%02x" | format(loop.index) }}:{{ "%02x" | format(link.udp_local % 256) }},netdev={{ link.port }},bus=pci.0,addr=0x{{ "%x" | format(17 + loop.index0) }}'
      {% endfor %}

- name: "{{ switch_name }} - Create VM with virt-install"
  shell: "{{ virt_install_cmd }}"
  when: vm_exists.rc != 0

- name: "{{ switch_name }} - Start VM if not running"
  command: virsh start {{ switch_name }}
  register: start_result
  failed_when: start_result.rc != 0 and 'already active' not in start_result.stderr
  changed_when: start_result.rc == 0
  when: vm_exists.rc == 0

- name: "{{ switch_name }} - Set autostart"
  command: virsh autostart {{ switch_name }}
  changed_when: false
ansible/netris-switches/templates/switch-vm.xml.j2 (new file, 77 lines)
@@ -0,0 +1,77 @@
{# Build list of links for this switch #}
{% set switch_links = [] %}
{% set link_idx = namespace(value=0) %}
{% for link in topology.links %}
{% if link.local == vm_name %}
{% set _ = switch_links.append({'port': link.local_port, 'remote': link.remote, 'remote_port': link.remote_port, 'udp_local': udp_base_port + (link_idx.value * 2), 'udp_remote': udp_base_port + (link_idx.value * 2) + 1}) %}
{% endif %}
{% if link.remote == vm_name %}
{% set _ = switch_links.append({'port': link.remote_port, 'remote': link.local, 'remote_port': link.local_port, 'udp_local': udp_base_port + (link_idx.value * 2) + 1, 'udp_remote': udp_base_port + (link_idx.value * 2)}) %}
{% endif %}
{% set link_idx.value = link_idx.value + 1 %}
{% endfor %}
{# Add server links #}
{% set server_link_start = topology.links | length %}
{% for server in servers | default([]) %}
{% if server.connected_to == vm_name %}
{% set _ = switch_links.append({'port': server.switch_port, 'remote': server.name, 'remote_port': 'eth1', 'udp_local': udp_base_port + ((server_link_start + loop.index0) * 2), 'udp_remote': udp_base_port + ((server_link_start + loop.index0) * 2) + 1}) %}
{% endif %}
{% endfor %}
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ switch_memory_mb }}</memory>
  <vcpu placement='static'>{{ switch_vcpus }}</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-8.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ vm_disk_path }}/{{ vm_name }}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <controller type='usb' model='none'/>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='none'/>
  </devices>

  <!-- QEMU command line arguments for networking -->
  <qemu:commandline>
    <!-- Management interface with SSH port forwarding -->
    <qemu:arg value='-netdev'/>
    <qemu:arg value='user,id=mgmt,net=192.168.100.0/24,hostfwd=tcp::{{ mgmt_ssh_base_port + (vm_index | int) }}-:22'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='virtio-net-pci,netdev=mgmt,mac=52:54:00:00:{{ "%02x" | format(vm_index | int) }}:00'/>

{% for link in switch_links %}
    <!-- {{ link.port }} <-> {{ link.remote }}:{{ link.remote_port }} -->
    <qemu:arg value='-netdev'/>
    <qemu:arg value='socket,id={{ link.port | replace("/", "_") }},udp=127.0.0.1:{{ link.udp_remote }},localaddr=127.0.0.1:{{ link.udp_local }}'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='virtio-net-pci,netdev={{ link.port | replace("/", "_") }},mac=52:54:{{ "%02x" | format(vm_index | int) }}:{{ "%02x" | format(loop.index) }}:{{ "%02x" | format((link.udp_local // 256) % 256) }}:{{ "%02x" | format(link.udp_local % 256) }}'/>
{% endfor %}
  </qemu:commandline>
</domain>