- k0s bootstrap with Cilium and OpenEBS
- ArgoCD apps for infra, CAPI, Tinkerbell, and Netris
- Ansible playbooks for virtual baremetal lab and Netris switches
- CAPI provider manifests for k0smotron and Tinkerbell
# Ansible Virtual Switch Lab
Creates a virtual Cumulus Linux switch lab using libvirt/KVM with UDP tunnels for inter-switch links.
## Prerequisites
On the hypervisor (Debian/Ubuntu):
```bash
# Install required packages
apt-get update
apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients \
  bridge-utils python3-libvirt python3-lxml genisoimage ansible sshpass

# Download the Cumulus Linux VX image
curl -L -o /var/lib/libvirt/images/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2 \
  https://networkingdownloads.nvidia.com/custhelp/Non_Monetized_Products/Software/CumulusSoftware/CumulusVX/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2

# Ensure libvirt is running
systemctl enable --now libvirtd
```
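
Before creating the lab, it can help to confirm that KVM and libvirt are actually usable on the host. The check below is optional and not part of the playbooks:

```bash
# Optional sanity check (not part of the playbooks): confirm KVM and libvirt are usable
virt-host-validate qemu     # flags missing /dev/kvm, cgroup, or IOMMU support
virsh list --all            # should print an (empty) domain table without errors
```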
## Quick Start
```bash
# 1. Create the lab (VMs + links)
ansible-playbook -i inventory.yml playbook.yml

# 2. Wait 2-3 minutes for the switches to boot, then configure them
ansible-playbook -i inventory.yml configure-switches.yml

# 3. SSH to a switch
ssh -p 2200 cumulus@127.0.0.1   # spine-0
ssh -p 2201 cumulus@127.0.0.1   # leaf-0
ssh -p 2202 cumulus@127.0.0.1   # leaf-1
ssh -p 2203 cumulus@127.0.0.1   # leaf-2
# Default credentials: cumulus / cumulus

# 4. Destroy the lab
ansible-playbook -i inventory.yml destroy.yml
```
## Topology (Default)
```
            ┌──────────┐
            │ spine-0  │
            │ AS 65000 │
            └────┬─────┘
                 │
     ┌───────────┼───────────┐
 swp1│       swp2│       swp3│
     │           │           │
swp31│      swp31│      swp31│
┌────┴───┐  ┌────┴───┐  ┌────┴───┐
│ leaf-0 │  │ leaf-1 │  │ leaf-2 │
│AS 65001│  │AS 65002│  │AS 65003│
└────┬───┘  └────┬───┘  └────┬───┘
 swp1│       swp1│       swp1│
     │           │           │
 server-0    server-1    server-2
```
## Customizing the Topology
Edit `group_vars/all.yml` to adjust the topology and VM resources:
### Add more switches
```yaml
topology:
  spines:
    - name: spine-0
    - name: spine-1   # Add a second spine
  leaves:
    - name: leaf-0
    - name: leaf-1
    - name: leaf-2
    - name: leaf-3    # Add more leaves
```
### Add more links (dual uplinks)
```yaml
topology:
  links:
    # First set of uplinks
    - { local: "spine-0", local_port: "swp1", remote: "leaf-0", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp2", remote: "leaf-1", remote_port: "swp31" }
    # Second set of uplinks (redundancy)
    - { local: "spine-1", local_port: "swp1", remote: "leaf-0", remote_port: "swp32" }
    - { local: "spine-1", local_port: "swp2", remote: "leaf-1", remote_port: "swp32" }
```
### Adjust VM resources
```yaml
switch_vcpus: 2
switch_memory_mb: 2048   # 2 GB per switch
```
## How It Works
### UDP Tunnels for Switch Links
Each link between switches uses a pair of UDP ports:
```
spine-0:swp1 <--UDP--> leaf-0:swp31

  spine-0 VM                    leaf-0 VM
┌─────────────┐              ┌─────────────┐
│  swp1 NIC   │              │  swp31 NIC  │
│ local:10000 │    UDP/IP    │ local:10001 │
│remote:10001 │<────────────>│remote:10000 │
└─────────────┘              └─────────────┘
```
This is handled by QEMU's `-netdev socket,udp=...` option.
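
For illustration, one end of such a link corresponds to QEMU arguments roughly like the following; the port numbers, MAC address, and NIC model are placeholders, and the playbook templates the real values per link:

```bash
# Sketch of the spine-0 side of the spine-0:swp1 <-> leaf-0:swp31 link (values illustrative)
qemu-system-x86_64 ... \
  -netdev socket,id=swp1,udp=127.0.0.1:10001,localaddr=127.0.0.1:10000 \
  -device virtio-net-pci,netdev=swp1,mac=52:54:00:00:01:01
```

Every Ethernet frame the guest emits on swp1 is wrapped in a UDP datagram sent to 127.0.0.1:10001, where the leaf-0 VM has bound its swp31 NIC; the reverse direction mirrors the port pair.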
### Management Access
Each VM gets a management NIC using QEMU user-mode networking with SSH port forwarding:
- spine-0: localhost:2200 → VM:22
- leaf-0: localhost:2201 → VM:22
- etc.
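
In QEMU terms, the management NIC corresponds roughly to the options below; the bind address and port are examples, and the actual values are generated per VM:

```bash
# Sketch of spine-0's management NIC: user-mode NAT plus a host-port forward to guest SSH
qemu-system-x86_64 ... \
  -netdev user,id=mgmt,hostfwd=tcp:127.0.0.1:2200-:22 \
  -device virtio-net-pci,netdev=mgmt
```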
## Useful Commands
```bash
# List running VMs
virsh list

# Console access (escape: Ctrl+])
virsh console leaf-0

# Check switch interfaces
ssh -p 2201 cumulus@127.0.0.1 "nv show interface"

# Check LLDP neighbors
ssh -p 2201 cumulus@127.0.0.1 "nv show service lldp neighbor"

# Check BGP status
ssh -p 2201 cumulus@127.0.0.1 "nv show router bgp neighbor"
```
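
To see the UDP tunnel endpoints from the hypervisor side, you can inspect the sockets held by the QEMU processes; the exact port range depends on what the playbook allocated:

```bash
# Show UDP sockets owned by the QEMU switch VMs (the per-link tunnel endpoints)
ss -uanp | grep qemu
```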
## Memory Requirements
| Switches | RAM per Switch | Total |
|---|---|---|
| 4 (default) | 2GB | 8GB |
| 8 | 2GB | 16GB |
| 16 | 2GB | 32GB |
On a host with 64 GB of RAM you can comfortably run roughly 25-30 switches, leaving headroom for the hypervisor itself.
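
As a rough sizing sketch (the 4 GB of host overhead is an assumption; adjust for your own base load):

```bash
# Back-of-the-envelope RAM estimate: 2 GB per switch plus ~4 GB for the host OS and QEMU overhead
SWITCHES=25
echo "$(( SWITCHES * 2 + 4 )) GB total"   # 54 GB, leaving headroom on a 64 GB host
```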