Initial commit: Turbo Mothership bare metal management cluster
- k0s bootstrap with Cilium and OpenEBS
- ArgoCD apps for infra, CAPI, Tinkerbell, and Netris
- Ansible playbooks for virtual baremetal lab and Netris switches
- CAPI provider manifests for k0smotron and Tinkerbell
.gitignore (new file, 2 lines, vendored)
@@ -0,0 +1,2 @@
.claude/
*.tgz
README.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# Turbo Mothership

Bare metal Kubernetes management cluster for provisioning infrastructure via Cluster API and Tinkerbell.

## Deployment Flow

1. **Deploy k0s** with Cilium CNI and OpenEBS storage
2. **Bootstrap** ArgoCD, cert-manager, ingress-nginx, and sealed-secrets via Helm (condensed commands below the list)
3. **Pivot to ArgoCD** for GitOps-managed applications
4. **Install Tinkerbell** for bare metal provisioning (PXE, DHCP, workflows)
5. **Install CAPI Operator** (Cluster API lifecycle manager)
6. **Install CAPI Providers** for infrastructure provisioning
7. **Install Netris controller and operator** for fabric management
8. **Spin up virtual baremetals and switches** to use as cluster resources
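
In practice, steps 1 and 2 condense to the commands documented in `bootstrap/README.md`:

```bash
# Step 1: k0s single-node controller+worker
curl -sSf https://get.k0s.sh | sudo sh
sudo k0s install controller --enable-worker --no-taints --config ./k0s.yaml
sudo k0s start

# Step 2: bootstrap ArgoCD, cert-manager, ingress-nginx, and sealed-secrets
cd charts/bootstrap
helm dependency update
helm upgrade -i bootstrap . --namespace bootstrap --create-namespace
```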

## Directory Structure

```
├── bootstrap/           # Helm chart for initial cluster bootstrap
├── apps/                # ArgoCD Application manifests
│   ├── infra/           # Infrastructure apps (cert-manager, ingress-nginx, etc.)
│   ├── bm/              # Bare metal apps (tinkerbell)
│   ├── capi/            # Cluster API operator and providers
│   └── netris/          # Netris controller and operator
├── manifests/
│   └── capi-stack/      # CAPI provider manifests (k0smotron, tinkerbell)
└── ansible/
    ├── virtual-bm/      # Ansible playbooks for virtual baremetal lab
    └── netris-switches/ # Ansible for Netris switch VMs
```

## Virtual Baremetal Lab

The `ansible/virtual-bm/` directory contains playbooks for setting up a virtual bare metal environment for testing (example invocations follow the list):

- `br-mgmt-nat.yml` - Creates the br-mgmt bridge (172.16.81.0/24) with NAT
- `create-vms.yml` - Creates libvirt VMs with VirtualBMC for IPMI simulation
- `destroy-vms.yml` - Tears down the virtual environment
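
The playbooks use the local-connection inventory in the same directory, so a typical cycle looks like:

```bash
ansible-playbook -i inventory.yml br-mgmt-nat.yml   # bridge + NAT
ansible-playbook -i inventory.yml create-vms.yml    # VMs + VirtualBMC
ansible-playbook -i inventory.yml destroy-vms.yml   # tear everything down
```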

### Virtual BM Summary

| VM  | MAC Address       | VBMC Port |
|-----|-------------------|-----------|
| vm1 | 52:54:00:12:34:01 | 6231      |
| vm2 | 52:54:00:12:34:02 | 6232      |
| vm3 | 52:54:00:12:34:03 | 6233      |
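
All three BMCs listen on the bridge IP (172.16.81.254) and differ only by port. For example, to query vm1's power state (the same command `create-vms.yml` prints in its summary):

```bash
ipmitool -I lanplus -U admin -P password -H 172.16.81.254 -p 6231 power status
```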

## Netris

Netris provides network automation for bare metal infrastructure.

- `apps/netris/netris-controller.yaml` - Netris Controller for the network management UI
- `apps/netris/netris-operator.yaml` - Kubernetes operator for Netris resources
- `ansible/netris-switches/` - Playbooks to create virtual Netris switch VMs

### Default Credentials

netris-controller web UI:

- Login: `netris`
- Password: `newNet0ps`

Change these after first login.
ansible/netris-switches/README.md (new file, 159 lines)
@@ -0,0 +1,159 @@
# Ansible Virtual Switch Lab

Creates a virtual Cumulus Linux switch lab using libvirt/KVM with UDP tunnels for inter-switch links.

## Prerequisites

On the hypervisor (Debian/Ubuntu):

```bash
# Install required packages
apt-get update
apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients \
  bridge-utils python3-libvirt python3-lxml genisoimage ansible sshpass

# Download Cumulus Linux image
curl -L -o /var/lib/libvirt/images/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2 \
  https://networkingdownloads.nvidia.com/custhelp/Non_Monetized_Products/Software/CumulusSoftware/CumulusVX/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2

# Ensure libvirt is running
systemctl enable --now libvirtd
```

## Quick Start

```bash
# 1. Create the lab (VMs + links)
ansible-playbook -i inventory.yml playbook.yml

# 2. Wait 2-3 minutes for the switches to boot, then configure them
ansible-playbook -i inventory.yml configure-switches.yml

# 3. SSH to a switch
ssh -p 2200 cumulus@127.0.0.1  # spine-0
ssh -p 2201 cumulus@127.0.0.1  # leaf-0
ssh -p 2202 cumulus@127.0.0.1  # leaf-1
ssh -p 2203 cumulus@127.0.0.1  # leaf-2

# Default credentials: cumulus / cumulus

# 4. Destroy the lab
ansible-playbook -i inventory.yml destroy.yml
```

## Topology (Default)

```
            ┌──────────┐
            │ spine-0  │
            │ AS 65000 │
            └────┬─────┘
                 │
     ┌───────────┼───────────┐
     │           │           │
 swp1│       swp2│       swp3│
     │           │           │
swp31│      swp31│      swp31│
┌────┴───┐  ┌────┴───┐  ┌────┴───┐
│ leaf-0 │  │ leaf-1 │  │ leaf-2 │
│AS 65001│  │AS 65002│  │AS 65003│
└────┬───┘  └────┬───┘  └────┬───┘
 swp1│       swp1│       swp1│
     │           │           │
 server-0    server-1    server-2
```

## Customizing the Topology

Edit `group_vars/all.yml` to modify:

### Add more switches

```yaml
topology:
  spines:
    - name: spine-0
    - name: spine-1  # Add second spine

  leaves:
    - name: leaf-0
    - name: leaf-1
    - name: leaf-2
    - name: leaf-3  # Add more leaves
```

### Add more links (dual uplinks)

```yaml
topology:
  links:
    # First set of uplinks
    - { local: "spine-0", local_port: "swp1", remote: "leaf-0", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp2", remote: "leaf-1", remote_port: "swp31" }
    # Second set of uplinks (redundancy)
    - { local: "spine-1", local_port: "swp1", remote: "leaf-0", remote_port: "swp32" }
    - { local: "spine-1", local_port: "swp2", remote: "leaf-1", remote_port: "swp32" }
```

### Adjust VM resources

```yaml
switch_vcpus: 2
switch_memory_mb: 2048  # 2GB per switch
```

## How It Works

### UDP Tunnels for Switch Links

Each link between switches uses a pair of UDP ports:

```
spine-0:swp1 <--UDP--> leaf-0:swp31

  spine-0 VM                    leaf-0 VM
┌─────────────┐              ┌─────────────┐
│  swp1 NIC   │──────────────│  swp31 NIC  │
│ local:10000 │    UDP/IP    │ local:10001 │
│remote:10001 │<────────────>│remote:10000 │
└─────────────┘              └─────────────┘
```

This is handled by QEMU's `-netdev socket,udp=...` option.
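
Concretely, for the spine-0:swp1 <--> leaf-0:swp31 link above, each VM is started with an argument pair like the following (values as computed by `tasks/create-switch-vm.yml` for the default topology):

```bash
# spine-0 side: listen on 10000, send to 10001
-netdev socket,udp=127.0.0.1:10001,localaddr=127.0.0.1:10000,id=swp1
-device virtio-net-pci,mac=00:02:00:00:01:10,netdev=swp1,bus=pci.0,addr=0x11

# leaf-0 side: the mirror image
-netdev socket,udp=127.0.0.1:10000,localaddr=127.0.0.1:10001,id=swp31
-device virtio-net-pci,mac=00:02:00:01:01:11,netdev=swp31,bus=pci.0,addr=0x11
```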

### Management Access

Each VM gets a management NIC using QEMU user-mode networking with SSH port forwarding:

- spine-0: localhost:2200 → VM:22
- leaf-0: localhost:2201 → VM:22
- etc.
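
The forwarding comes from one user-mode netdev per VM; for spine-0 (index 0) the generated arguments look like:

```bash
-netdev user,id=mgmt,net=192.168.0.0/24,hostfwd=tcp::2200-:22
-device virtio-net-pci,netdev=mgmt,mac=00:01:00:00:00:00,bus=pci.0,addr=0x10
```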

## Useful Commands

```bash
# List running VMs
virsh list

# Console access (escape: Ctrl+])
virsh console leaf-0

# Check switch interfaces
ssh -p 2201 cumulus@127.0.0.1 "nv show interface"

# Check LLDP neighbors
ssh -p 2201 cumulus@127.0.0.1 "nv show service lldp neighbor"

# Check BGP status
ssh -p 2201 cumulus@127.0.0.1 "nv show router bgp neighbor"
```

## Memory Requirements

| Switches    | RAM per Switch | Total |
|-------------|----------------|-------|
| 4 (default) | 2GB            | 8GB   |
| 8           | 2GB            | 16GB  |
| 16          | 2GB            | 32GB  |

On a 64GB host, ~25-30 switches (50-60GB) run comfortably while leaving headroom for the hypervisor.
ansible/netris-switches/configure-switches.yml (new file, 34 lines)
@@ -0,0 +1,34 @@
---
# Configure Cumulus switches after boot
# Run this after the VMs have fully booted (give them ~2-3 minutes)

- name: Configure Cumulus Switches
  hosts: localhost
  gather_facts: no

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

  tasks:
    - name: Wait for switches to be reachable
      wait_for:
        host: 127.0.0.1
        port: "{{ mgmt_ssh_base_port + idx }}"
        delay: 10
        timeout: 300
      loop: "{{ all_switches }}"
      loop_control:
        index_var: idx
        label: "{{ item.name }}"

    - name: Configure each switch
      include_tasks: tasks/configure-switch.yml
      vars:
        switch_name: "{{ item.0.name }}"
        switch_ssh_port: "{{ mgmt_ssh_base_port + item.1 }}"
        switch_type: "{{ 'spine' if item.0.name.startswith('spine') else 'leaf' }}"
        switch_id: "{{ item.1 }}"
      loop: "{{ all_switches | zip(range(all_switches | length)) | list }}"
      loop_control:
        label: "{{ item.0.name }}"
ansible/netris-switches/destroy.yml (new file, 46 lines)
@@ -0,0 +1,46 @@
---
# Destroy Virtual Switch Lab

- name: Destroy Virtual Switch Lab
  hosts: localhost
  become: yes
  gather_facts: no

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"

  tasks:
    - name: Stop VMs
      command: virsh destroy {{ item.name }}
      register: result
      failed_when: false
      changed_when: result.rc == 0
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Undefine VMs
      command: virsh undefine {{ item.name }}
      register: result
      failed_when: false
      changed_when: result.rc == 0
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Remove VM disk images
      file:
        path: "{{ vm_disk_path }}/{{ item.name }}.qcow2"
        state: absent
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    - name: Clean up XML definitions
      file:
        path: /tmp/switch-lab-xml
        state: absent

    - name: Lab destroyed
      debug:
        msg: "Virtual Switch Lab has been destroyed"
ansible/netris-switches/group_vars/all.yml (new file, 61 lines)
@@ -0,0 +1,61 @@
# Virtual Switch Lab Configuration
# Adjust these values based on your available RAM

# Base images
cumulus_image: "/var/lib/libvirt/images/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2"
cumulus_image_url: "https://networkingdownloads.nvidia.com/custhelp/Non_Monetized_Products/Software/CumulusSoftware/CumulusVX/cumulus-linux-5.11.1-vx-amd64-qemu.qcow2"
ubuntu_image: "/var/lib/libvirt/images/ubuntu-24.04-server-cloudimg-amd64.img"
vm_disk_path: "/var/lib/libvirt/images"

# VM Resources
switch_vcpus: 2
switch_memory_mb: 2048
server_vcpus: 1
server_memory_mb: 1024

# Management network - SSH access to VMs via port forwarding
mgmt_ssh_base_port: 2200  # spine-0 = 2200, leaf-0 = 2201, etc.

# UDP tunnel base port for inter-switch links
udp_base_port: 10000

# Topology Definition
# Simple leaf-spine topology for testing
topology:
  spines:
    # mgmt_mac values are placeholders (must be valid hex); the playbooks
    # currently generate their own management MACs and do not read these
    - name: spine-0
      mgmt_mac: "52:54:00:01:00:00"

  leaves:
    - name: leaf-0
      mgmt_mac: "52:54:00:02:00:00"
    - name: leaf-1
      mgmt_mac: "52:54:00:02:01:00"
    - name: leaf-2
      mgmt_mac: "52:54:00:02:02:00"

  # Links format: [local_switch, local_port, remote_switch, remote_port]
  # Each link will get a unique UDP port pair
  links:
    - { local: "spine-0", local_port: "swp1", remote: "leaf-0", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp2", remote: "leaf-1", remote_port: "swp31" }
    - { local: "spine-0", local_port: "swp3", remote: "leaf-2", remote_port: "swp31" }
    # Add more links as needed, e.g., dual uplinks:
    # - { local: "spine-0", local_port: "swp4", remote: "leaf-0", remote_port: "swp32" }

# Optional: Simulated servers connected to leaves
servers:
  - name: server-0
    connected_to: leaf-0
    switch_port: swp1
  - name: server-1
    connected_to: leaf-1
    switch_port: swp1
  - name: server-2
    connected_to: leaf-2
    switch_port: swp1

# Cumulus default credentials
cumulus_user: cumulus
cumulus_default_password: cumulus
cumulus_new_password: "CumulusLinux!"
ansible/netris-switches/inventory.yml (new file, 5 lines)
@@ -0,0 +1,5 @@
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
ansible/netris-switches/playbook.yml (new file, 100 lines)
@@ -0,0 +1,100 @@
---
# Ansible Playbook for Virtual Cumulus Switch Lab
# Creates a leaf-spine topology using libvirt/KVM with UDP tunnels for links

- name: Setup Virtual Switch Lab
  hosts: localhost
  become: yes
  gather_facts: yes

  vars:
    all_switches: "{{ topology.spines + topology.leaves }}"

  tasks:
    # ===========================================
    # Prerequisites
    # ===========================================
    - name: Ensure required packages are installed
      apt:
        name:
          - qemu-kvm
          - qemu-utils
          - libvirt-daemon-system
          - libvirt-clients
          - virtinst
          - bridge-utils
        state: present
        update_cache: no

    - name: Ensure libvirtd is running
      service:
        name: libvirtd
        state: started
        enabled: yes

    - name: Check if Cumulus image exists
      stat:
        path: "{{ cumulus_image }}"
      register: cumulus_img

    - name: Download Cumulus image if not present
      get_url:
        url: "{{ cumulus_image_url }}"
        dest: "{{ cumulus_image }}"
        mode: '0644'
      when: not cumulus_img.stat.exists

    # ===========================================
    # Create switch VM disks (copy-on-write)
    # ===========================================
    - name: Create switch disk images (CoW backed by base)
      command: >
        qemu-img create -f qcow2 -F qcow2
        -b {{ cumulus_image }}
        {{ vm_disk_path }}/{{ item.name }}.qcow2
      args:
        creates: "{{ vm_disk_path }}/{{ item.name }}.qcow2"
      loop: "{{ all_switches }}"
      loop_control:
        label: "{{ item.name }}"

    # ===========================================
    # Build link information for each switch
    # ===========================================
    - name: Create VMs with virt-install
      include_tasks: tasks/create-switch-vm.yml
      vars:
        switch_name: "{{ item.0.name }}"
        switch_index: "{{ item.1 }}"
        ssh_port: "{{ mgmt_ssh_base_port + item.1 }}"
      loop: "{{ all_switches | zip(range(all_switches | length)) | list }}"
      loop_control:
        label: "{{ item.0.name }}"

    # ===========================================
    # Display connection info
    # ===========================================
    - name: Display VM access information
      debug:
        msg: |
          ============================================
          Virtual Switch Lab is running!
          ============================================

          SSH Access (user: cumulus, password: cumulus):
          {% for switch in all_switches %}
            {{ switch.name }}: ssh -p {{ mgmt_ssh_base_port + loop.index0 }} cumulus@127.0.0.1
          {% endfor %}

          Console Access:
          {% for switch in all_switches %}
            {{ switch.name }}: virsh console {{ switch.name }}
          {% endfor %}

          Topology:
          {% for link in topology.links %}
            {{ link.local }}:{{ link.local_port }} <--> {{ link.remote }}:{{ link.remote_port }}
          {% endfor %}

          To destroy the lab: ansible-playbook -i inventory.yml destroy.yml
          ============================================
ansible/netris-switches/tasks/configure-switch.yml (new file, 59 lines)
@@ -0,0 +1,59 @@
---
# Configure a single Cumulus switch via SSH

- name: "{{ switch_name }} - Set hostname"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo hostnamectl set-hostname {{ switch_name }}"
  register: result
  retries: 3
  delay: 10
  until: result.rc == 0

- name: "{{ switch_name }} - Configure loopback IP"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set interface lo ip address 10.0.0.{{ switch_id + 1 }}/32 && sudo nv config apply -y"
  register: result
  retries: 3
  delay: 5
  until: result.rc == 0

- name: "{{ switch_name }} - Enable LLDP"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set service lldp && sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Bring up all switch ports"
  delegate_to: 127.0.0.1
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "for i in \$(seq 1 48); do sudo nv set interface swp\$i link state up 2>/dev/null; done && sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Configure BGP ASN"
  delegate_to: 127.0.0.1
  vars:
    bgp_asn: "{{ 65000 + switch_id }}"
  shell: |
    sshpass -p 'cumulus' ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -p {{ switch_ssh_port }} cumulus@127.0.0.1 \
      "sudo nv set router bgp autonomous-system {{ bgp_asn }} && \
       sudo nv set router bgp router-id 10.0.0.{{ switch_id + 1 }} && \
       sudo nv config apply -y"
  register: result
  failed_when: false

- name: "{{ switch_name }} - Configuration complete"
  debug:
    msg: "{{ switch_name }} configured with loopback 10.0.0.{{ switch_id + 1 }}/32, ASN {{ 65000 + switch_id }}"
ansible/netris-switches/tasks/create-switch-vm.yml (new file, 83 lines)
@@ -0,0 +1,83 @@
---
# Create a single switch VM using virt-install

- name: "{{ switch_name }} - Check if VM exists"
  command: virsh dominfo {{ switch_name }}
  register: vm_exists
  failed_when: false
  changed_when: false

- name: "{{ switch_name }} - Build link list"
  set_fact:
    switch_links: []

- name: "{{ switch_name }} - Add links where this switch is local"
  set_fact:
    switch_links: "{{ switch_links + [{'port': link_item.local_port, 'udp_local': udp_base_port + (link_idx * 2), 'udp_remote': udp_base_port + (link_idx * 2) + 1}] }}"
  loop: "{{ topology.links }}"
  loop_control:
    loop_var: link_item
    index_var: link_idx
    label: "{{ link_item.local }}:{{ link_item.local_port }}"
  when: link_item.local == switch_name

- name: "{{ switch_name }} - Add links where this switch is remote"
  set_fact:
    switch_links: "{{ switch_links + [{'port': link_item.remote_port, 'udp_local': udp_base_port + (link_idx * 2) + 1, 'udp_remote': udp_base_port + (link_idx * 2)}] }}"
  loop: "{{ topology.links }}"
  loop_control:
    loop_var: link_item
    index_var: link_idx
    label: "{{ link_item.remote }}:{{ link_item.remote_port }}"
  when: link_item.remote == switch_name

- name: "{{ switch_name }} - Add server links"
  set_fact:
    switch_links: "{{ switch_links + [{'port': srv_item.switch_port, 'udp_local': udp_base_port + ((topology.links | length + srv_idx) * 2), 'udp_remote': udp_base_port + ((topology.links | length + srv_idx) * 2) + 1}] }}"
  loop: "{{ servers | default([]) }}"
  loop_control:
    loop_var: srv_item
    index_var: srv_idx
    label: "{{ srv_item.name }}"
  when: srv_item.connected_to == switch_name

- name: "{{ switch_name }} - Debug links"
  debug:
    msg: "Links for {{ switch_name }}: {{ switch_links }}"

- name: "{{ switch_name }} - Build virt-install command"
  set_fact:
    virt_install_cmd: >-
      virt-install
      --name {{ switch_name }}
      --vcpus {{ switch_vcpus }}
      --memory {{ switch_memory_mb }}
      --import
      --disk path={{ vm_disk_path }}/{{ switch_name }}.qcow2,bus=sata
      --graphics none
      --video none
      --osinfo detect=on,require=off
      --network none
      --controller usb,model=none
      --noautoconsole
      --qemu-commandline='-netdev user,id=mgmt,net=192.168.0.0/24,hostfwd=tcp::{{ ssh_port }}-:22'
      --qemu-commandline='-device virtio-net-pci,netdev=mgmt,mac=00:01:00:00:{{ "%02x" | format(switch_index | int) }}:00,bus=pci.0,addr=0x10'
      {% for link in switch_links %}
      --qemu-commandline='-netdev socket,udp=127.0.0.1:{{ link.udp_remote }},localaddr=127.0.0.1:{{ link.udp_local }},id={{ link.port }}'
      --qemu-commandline='-device virtio-net-pci,mac=00:02:00:{{ "%02x" | format(switch_index | int) }}:{{ "%02x" | format(loop.index) }}:{{ "%02x" | format(link.udp_local % 256) }},netdev={{ link.port }},bus=pci.0,addr=0x{{ "%x" | format(17 + loop.index0) }}'
      {% endfor %}

- name: "{{ switch_name }} - Create VM with virt-install"
  shell: "{{ virt_install_cmd }}"
  when: vm_exists.rc != 0

- name: "{{ switch_name }} - Start VM if not running"
  command: virsh start {{ switch_name }}
  register: start_result
  failed_when: start_result.rc != 0 and 'already active' not in start_result.stderr
  changed_when: start_result.rc == 0
  when: vm_exists.rc == 0

- name: "{{ switch_name }} - Set autostart"
  command: virsh autostart {{ switch_name }}
  changed_when: false
ansible/netris-switches/templates/switch-vm.xml.j2 (new file, 77 lines)
@@ -0,0 +1,77 @@
{# Build list of links for this switch #}
{% set switch_links = [] %}
{% set link_idx = namespace(value=0) %}
{% for link in topology.links %}
{% if link.local == vm_name %}
{% set _ = switch_links.append({'port': link.local_port, 'remote': link.remote, 'remote_port': link.remote_port, 'udp_local': udp_base_port + (link_idx.value * 2), 'udp_remote': udp_base_port + (link_idx.value * 2) + 1}) %}
{% endif %}
{% if link.remote == vm_name %}
{% set _ = switch_links.append({'port': link.remote_port, 'remote': link.local, 'remote_port': link.local_port, 'udp_local': udp_base_port + (link_idx.value * 2) + 1, 'udp_remote': udp_base_port + (link_idx.value * 2)}) %}
{% endif %}
{% set link_idx.value = link_idx.value + 1 %}
{% endfor %}
{# Add server links #}
{% set server_link_start = topology.links | length %}
{% for server in servers | default([]) %}
{% if server.connected_to == vm_name %}
{% set _ = switch_links.append({'port': server.switch_port, 'remote': server.name, 'remote_port': 'eth1', 'udp_local': udp_base_port + ((server_link_start + loop.index0) * 2), 'udp_remote': udp_base_port + ((server_link_start + loop.index0) * 2) + 1}) %}
{% endif %}
{% endfor %}
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ switch_memory_mb }}</memory>
  <vcpu placement='static'>{{ switch_vcpus }}</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-8.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ vm_disk_path }}/{{ vm_name }}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <controller type='usb' model='none'/>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='none'/>
  </devices>

  <!-- QEMU command line arguments for networking -->
  <qemu:commandline>
    <!-- Management interface with SSH port forwarding -->
    <qemu:arg value='-netdev'/>
    <qemu:arg value='user,id=mgmt,net=192.168.100.0/24,hostfwd=tcp::{{ mgmt_ssh_base_port + vm_index | int }}-:22'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='virtio-net-pci,netdev=mgmt,mac=52:54:00:00:{{ "%02x" | format(vm_index | int) }}:00'/>

{% for link in switch_links %}
    <!-- {{ link.port }} <-> {{ link.remote }}:{{ link.remote_port }} -->
    <qemu:arg value='-netdev'/>
    <qemu:arg value='socket,id={{ link.port | replace("/", "_") }},udp=127.0.0.1:{{ link.udp_remote }},localaddr=127.0.0.1:{{ link.udp_local }}'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='virtio-net-pci,netdev={{ link.port | replace("/", "_") }},mac=52:54:{{ "%02x" | format(vm_index | int) }}:{{ "%02x" | format(loop.index) }}:{{ "%02x" | format((link.udp_local // 256) % 256) }}:{{ "%02x" | format(link.udp_local % 256) }}'/>

{% endfor %}
  </qemu:commandline>
</domain>
ansible/virtual-bm/br-mgmt-nat.yml (new file, 120 lines)
@@ -0,0 +1,120 @@
---
- name: Configure br-mgmt bridge for libvirt and Tinkerbell
  hosts: local
  become: true
  gather_facts: false

  vars:
    br_mgmt_name: br-mgmt
    br_mgmt_ip: 172.16.81.254
    br_mgmt_cidr: 24
    br_mgmt_netmask: 255.255.255.0

  tasks:
    - name: Ensure bridge-utils is installed
      ansible.builtin.apt:
        name: bridge-utils
        state: present
        update_cache: false

    - name: Create bridge interface configuration file
      ansible.builtin.copy:
        dest: /etc/network/interfaces.d/br-mgmt
        owner: root
        group: root
        mode: "0644"
        content: |
          # Management bridge for libvirt VMs and Tinkerbell
          # This bridge provides the 172.16.81.0/24 network for bare metal provisioning

          auto {{ br_mgmt_name }}
          iface {{ br_mgmt_name }} inet static
              address {{ br_mgmt_ip }}
              netmask {{ br_mgmt_netmask }}
              bridge_ports none
              bridge_stp off
              bridge_fd 0
              bridge_maxwait 0

    - name: Check if bridge already exists
      ansible.builtin.command: ip link show {{ br_mgmt_name }}
      register: bridge_exists
      changed_when: false
      failed_when: false

    - name: Create bridge interface
      ansible.builtin.command: ip link add name {{ br_mgmt_name }} type bridge
      when: bridge_exists.rc != 0

    - name: Set bridge interface up
      ansible.builtin.command: ip link set {{ br_mgmt_name }} up
      when: bridge_exists.rc != 0

    - name: Check current IP on bridge
      ansible.builtin.shell: ip addr show {{ br_mgmt_name }} | grep -q '{{ br_mgmt_ip }}'
      register: ip_configured
      changed_when: false
      failed_when: false

    - name: Assign IP address to bridge
      ansible.builtin.command: ip addr add {{ br_mgmt_ip }}/{{ br_mgmt_cidr }} dev {{ br_mgmt_name }}
      when: ip_configured.rc != 0

    - name: Enable IP forwarding
      ansible.posix.sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        sysctl_set: true
        state: present
        reload: true

    - name: Install iptables-persistent
      ansible.builtin.apt:
        name: iptables-persistent
        state: present
        update_cache: false
      environment:
        DEBIAN_FRONTEND: noninteractive

    - name: Configure NAT masquerade for br-mgmt network
      ansible.builtin.iptables:
        table: nat
        chain: POSTROUTING
        source: 172.16.81.0/24
        out_interface: enp41s0
        jump: MASQUERADE
        comment: "NAT for br-mgmt network"

    - name: Allow forwarding from br-mgmt to external
      ansible.builtin.iptables:
        chain: FORWARD
        in_interface: "{{ br_mgmt_name }}"
        out_interface: enp41s0
        jump: ACCEPT
        comment: "Forward br-mgmt to internet"

    - name: Allow forwarding return traffic to br-mgmt
      ansible.builtin.iptables:
        chain: FORWARD
        in_interface: enp41s0
        out_interface: "{{ br_mgmt_name }}"
        ctstate: ESTABLISHED,RELATED
        jump: ACCEPT
        comment: "Return traffic to br-mgmt"

    - name: Save iptables rules
      ansible.builtin.shell: iptables-save > /etc/iptables/rules.v4

    - name: Display bridge status
      ansible.builtin.shell: |
        echo "=== Bridge Status ==="
        ip addr show {{ br_mgmt_name }}
        echo ""
        echo "=== Bridge Details ==="
        brctl show {{ br_mgmt_name }}
      register: bridge_status
      changed_when: false

    - name: Show bridge status
      ansible.builtin.debug:
        var: bridge_status.stdout_lines
ansible/virtual-bm/create-vms.yml (new file, 159 lines)
@@ -0,0 +1,159 @@
---
- name: Create virtual baremetal VMs with VirtualBMC
  hosts: local
  become: true
  gather_facts: false

  vars:
    vbmc_user: admin
    vbmc_password: password
    bridge_name: br-mgmt
    vm_vcpus: 6
    vm_ram: 6144
    vm_disk_size: 60
    disk_path: /var/lib/libvirt/images

    vms:
      - name: vm1
        mac: "52:54:00:12:34:01"
        vbmc_port: 6231
      - name: vm2
        mac: "52:54:00:12:34:02"
        vbmc_port: 6232
      - name: vm3
        mac: "52:54:00:12:34:03"
        vbmc_port: 6233

  tasks:
    - name: Install required packages
      ansible.builtin.apt:
        name:
          - python3-pip
          - ovmf
        state: present
        update_cache: false

    - name: Install virtualbmc
      ansible.builtin.pip:
        name: virtualbmc
        state: present
        break_system_packages: true

    - name: Ensure vbmcd service file exists
      ansible.builtin.copy:
        dest: /etc/systemd/system/vbmcd.service
        owner: root
        group: root
        mode: "0644"
        content: |
          [Unit]
          Description=Virtual BMC daemon
          After=network.target libvirtd.service

          [Service]
          Type=simple
          ExecStart=/usr/local/bin/vbmcd --foreground
          Restart=always
          RestartSec=5

          [Install]
          WantedBy=multi-user.target

    - name: Enable and start vbmcd service
      ansible.builtin.systemd:
        name: vbmcd
        daemon_reload: true
        enabled: true
        state: started

    - name: Wait for vbmcd to be ready
      ansible.builtin.pause:
        seconds: 3

    - name: Check if VMs already exist
      ansible.builtin.command: virsh dominfo {{ item.name }}
      loop: "{{ vms }}"
      register: vm_exists
      changed_when: false
      failed_when: false

    - name: Create VMs with virt-install
      ansible.builtin.command: >
        virt-install
        --name "{{ item.item.name }}"
        --vcpus "{{ vm_vcpus }}"
        --ram "{{ vm_ram }}"
        --os-variant "debian12"
        --connect "qemu:///system"
        --disk "path={{ disk_path }}/{{ item.item.name }}-disk.img,bus=virtio,size={{ vm_disk_size }},sparse=yes"
        --disk "device=cdrom,bus=sata"
        --network "bridge:{{ bridge_name }},mac={{ item.item.mac }}"
        --console "pty,target.type=virtio"
        --serial "pty"
        --graphics "vnc,listen=0.0.0.0"
        --import
        --noautoconsole
        --noreboot
        --boot "uefi,firmware.feature0.name=enrolled-keys,firmware.feature0.enabled=no,firmware.feature1.name=secure-boot,firmware.feature1.enabled=yes,bootmenu.enable=on,network,hd"
      loop: "{{ vm_exists.results }}"
      when: item.rc != 0

    - name: Check existing VBMC entries
      ansible.builtin.command: vbmc list
      register: vbmc_list
      changed_when: false

    - name: Add VMs to VirtualBMC
      ansible.builtin.command: >
        vbmc add {{ item.name }}
        --port {{ item.vbmc_port }}
        --username {{ vbmc_user }}
        --password {{ vbmc_password }}
        --address 0.0.0.0
      loop: "{{ vms }}"
      when: item.name not in vbmc_list.stdout

    - name: Start VBMC for each VM
      ansible.builtin.command: vbmc start {{ item.name }}
      loop: "{{ vms }}"
      register: vbmc_start
      changed_when: "'started' in vbmc_start.stdout or vbmc_start.rc == 0"
      failed_when: false

    - name: Get VBMC status
      ansible.builtin.command: vbmc list
      register: vbmc_status
      changed_when: false

    - name: Display VBMC status
      ansible.builtin.debug:
        var: vbmc_status.stdout_lines

    - name: Get VM list
      ansible.builtin.command: virsh list --all
      register: vm_list
      changed_when: false

    - name: Display VM list
      ansible.builtin.debug:
        var: vm_list.stdout_lines

    - name: Display summary
      ansible.builtin.debug:
        msg: |
          Virtual Baremetal VMs created!

          | VM  | MAC Address       | VBMC Port | VBMC Address  |
          |-----|-------------------|-----------|---------------|
          | vm1 | 52:54:00:12:34:01 | 6231      | 172.16.81.254 |
          | vm2 | 52:54:00:12:34:02 | 6232      | 172.16.81.254 |
          | vm3 | 52:54:00:12:34:03 | 6233      | 172.16.81.254 |

          Test IPMI with:
            ipmitool -I lanplus -U admin -P password -H 172.16.81.254 -p 6231 power status

          Start a VM:
            ipmitool -I lanplus -U admin -P password -H 172.16.81.254 -p 6231 power on

          Set PXE boot:
            ipmitool -I lanplus -U admin -P password -H 172.16.81.254 -p 6231 chassis bootdev pxe
ansible/virtual-bm/destroy-vms.yml (new file, 52 lines)
@@ -0,0 +1,52 @@
---
- name: Destroy virtual baremetal VMs and VirtualBMC
  hosts: local
  become: true
  gather_facts: false

  vars:
    disk_path: /var/lib/libvirt/images
    vms:
      - name: vm1
      - name: vm2
      - name: vm3

  tasks:
    - name: Stop VBMC for each VM
      ansible.builtin.command: vbmc stop {{ item.name }}
      loop: "{{ vms }}"
      failed_when: false
      changed_when: true

    - name: Delete VBMC entries
      ansible.builtin.command: vbmc delete {{ item.name }}
      loop: "{{ vms }}"
      failed_when: false
      changed_when: true

    - name: Destroy VMs
      ansible.builtin.command: virsh destroy {{ item.name }}
      loop: "{{ vms }}"
      failed_when: false
      changed_when: true

    - name: Undefine VMs with NVRAM
      ansible.builtin.command: virsh undefine {{ item.name }} --nvram
      loop: "{{ vms }}"
      failed_when: false
      changed_when: true

    - name: Remove VM disk images
      ansible.builtin.file:
        path: "{{ disk_path }}/{{ item.name }}-disk.img"
        state: absent
      loop: "{{ vms }}"

    - name: Get remaining VM list
      ansible.builtin.command: virsh list --all
      register: vm_list
      changed_when: false

    - name: Display VM list
      ansible.builtin.debug:
        var: vm_list.stdout_lines
ansible/virtual-bm/inventory.yml (new file, 4 lines)
@@ -0,0 +1,4 @@
all:
  hosts:
    local:
      ansible_connection: local
apps/bm/tinkerbell.yaml (new file, 54 lines)
@@ -0,0 +1,54 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tinkerbell
  namespace: argo
spec:
  project: default
  source:
    repoURL: ghcr.io/tinkerbell/charts
    targetRevision: v0.22.0
    chart: tinkerbell
    helm:
      values: |
        publicIP: 172.16.81.254
        artifactsFileServer: http://172.16.81.254:7173
        trustedProxies:
          - 10.244.0.0/24
        deployment:
          init:
            sourceInterface: br-mgmt
          hostNetwork: true
          strategy:
            type: Recreate
            rollingUpdate: null
          envs:
            rufio:
              metricsAddr: 172.16.81.254:9090
              probeAddr: 172.16.81.254:9091
            smee:
              dhcpBindInterface: br-mgmt
              ipxeHttpScriptBindAddr: 172.16.81.254
              syslogBindAddr: 172.16.81.254
              tftpServerBindAddr: 172.16.81.254
            tinkController:
              metricsAddr: 172.16.81.254:9092
              probeAddr: 172.16.81.254:9093
            tinkServer:
              bindAddr: 172.16.81.254
              metricsAddr: 172.16.81.254:9094
              probeAddr: 172.16.81.254:9095
            tootles:
              bindAddr: 172.16.81.254
            secondstar:
              bindAddr: 172.16.81.254
  destination:
    server: https://kubernetes.default.svc
    namespace: tinkerbell
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/capi/cluster-api-operator.yaml (new file, 25 lines)
@@ -0,0 +1,25 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-api-operator
  namespace: argo
spec:
  project: default
  source:
    repoURL: https://kubernetes-sigs.github.io/cluster-api-operator
    targetRevision: 0.15.1
    chart: cluster-api-operator
    helm:
      values: |
        cert-manager:
          enabled: false
  destination:
    server: https://kubernetes.default.svc
    namespace: capi
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/capi/cluster-api-providers.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capi-providers
  namespace: argo
spec:
  project: default
  source:
    path: manifests/capi-stack
    repoURL: "ssh://git@git.weystrom.dev:2222/pbhv/apps.git"
    targetRevision: HEAD
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: capi
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/infra-apps.yaml (new file, 24 lines)
@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infra
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://git.weystrom.dev/argodent/turbo-mothership.git
    path: apps/infra
    targetRevision: HEAD
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
apps/infra/argocd.yaml (new file, 57 lines)
@@ -0,0 +1,57 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: "9.1.6"
    helm:
      releaseName: turbo
      valuesObject:
        fullnameOverride: turbo-argocd
        global:
          domain: argo.turbo.weystrom.dev
        configs:
          params:
            server.insecure: true
          cm:
            admin.enabled: true
        server:
          ingress:
            enabled: true
            ingressClassName: nginx
            annotations:
              nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
              nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
              cert-manager.io/cluster-issuer: "letsencrypt-prod"
            extraTls:
              - hosts:
                  - argo.turbo.weystrom.dev
                secretName: argocd-ingress-http
          ingressGrpc:
            enabled: true
            ingressClassName: nginx
            hostname: argo-grpc.turbo.weystrom.dev
            annotations:
              nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
              cert-manager.io/cluster-issuer: "letsencrypt-prod"
            extraTls:
              - hosts:
                  - argo-grpc.turbo.weystrom.dev
                secretName: argocd-ingress-grpc
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/infra/cert-manager.yaml (new file, 33 lines)
@@ -0,0 +1,33 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: "1.19.2"
    helm:
      releaseName: turbo
      valuesObject:
        fullnameOverride: turbo-certmgr
        crds:
          enabled: true
        ingressShim:
          defaultIssuerName: letsencrypt-prod
          defaultIssuerKind: ClusterIssuer
          defaultIssuerGroup: cert-manager.io
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/infra/cilium.yaml (new file, 35 lines)
@@ -0,0 +1,35 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cilium
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://helm.cilium.io/
    chart: cilium
    targetRevision: "1.18.4"
    helm:
      releaseName: cilium
      valuesObject:
        cluster:
          name: local
        k8sServiceHost: 65.109.94.180
        k8sServicePort: 6443
        kubeProxyReplacement: true
        operator:
          replicas: 1
        routingMode: tunnel
        tunnelProtocol: vxlan
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - ServerSideApply=true
apps/infra/ingress-nginx.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://kubernetes.github.io/ingress-nginx
    chart: ingress-nginx
    targetRevision: "4.14.1"
    helm:
      releaseName: turbo
      valuesObject:
        fullnameOverride: turbo-ingress
        controller:
          admissionWebhooks:
            enabled: false
          service:
            externalIPs:
              - 65.109.94.180
            type: ClusterIP
  destination:
    server: https://kubernetes.default.svc
    namespace: ingress-nginx
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/infra/openebs.yaml (new file, 48 lines)
@@ -0,0 +1,48 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openebs
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://openebs.github.io/openebs
    chart: openebs
    targetRevision: "4.2.0"
    helm:
      releaseName: openebs
      valuesObject:
        preUpgradeHook:
          enabled: false
        localpv-provisioner:
          localpv:
            basePath: /var/openebs/local
        engines:
          replicated:
            mayastor:
              enabled: false
          local:
            zfs:
              enabled: false
            rawfile:
              enabled: false
            lvm:
              enabled: false
        loki:
          enabled: false
        minio:
          enabled: false
        alloy:
          enabled: false
  destination:
    server: https://kubernetes.default.svc
    namespace: openebs
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/infra/sealed-secrets.yaml (new file, 27 lines)
@@ -0,0 +1,27 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://bitnami-labs.github.io/sealed-secrets
    chart: sealed-secrets
    targetRevision: "2.17.9"
    helm:
      releaseName: turbo
      valuesObject:
        fullnameOverride: turbo-sealedsecrets
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - ServerSideApply=true
apps/netris-apps.yaml (new file, 24 lines)
@@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netris
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://git.weystrom.dev/argodent/turbo-mothership.git
    path: apps/netris
    targetRevision: HEAD
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
apps/netris/netris-controller.yaml (new file, 37 lines)
@@ -0,0 +1,37 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netris-controller
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://netrisai.github.io/charts
    chart: netris-controller
    targetRevision: "*"
    helm:
      releaseName: netris-controller
      valuesObject:
        ingress:
          hosts:
            - netris.turbo.weystrom.dev
          tls:
            - secretName: netris-tls
              hosts:
                - netris.turbo.weystrom.dev
          annotations:
            cert-manager.io/cluster-issuer: letsencrypt-prod
        haproxy:
          enabled: false
  destination:
    server: https://kubernetes.default.svc
    namespace: netris-controller
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
apps/netris/netris-operator.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netris-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://netrisai.github.io/charts
    chart: netris-operator
    targetRevision: "*"
    helm:
      releaseName: netris-operator
      valuesObject:
        controller:
          host: https://netris.turbo.weystrom.dev
          login: netris
          password: newNet0ps
          insecure: false
  destination:
    server: https://kubernetes.default.svc
    namespace: netris-operator
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
113
bootstrap/README.md
Normal file
@@ -0,0 +1,113 @@
# Bootstrap

Bootstrap chart for cluster initialization. Deploys all required infrastructure components in a single Helm release.

## Requirements

- 1-3 nodes
- External DNS for ingress access
- Internet access

## Components

The bootstrap stack (the umbrella chart in `charts/turbo-mothership-bootstrap/`, plus Helm extensions in `k0s.yaml`) includes:

| Component | Description |
|-----------|-------------|
| Cilium | CNI for networking (installed via the k0s config, `k0s.yaml`) |
| ingress-nginx | Ingress controller |
| cert-manager | TLS certificate management |
| sealed-secrets | Encrypted secrets for GitOps |
| ArgoCD | GitOps continuous delivery |
| OpenEBS | Container storage, hostpath (installed via the k0s config, `k0s.yaml`) |

Additional resources created:
- ClusterIssuer (Let's Encrypt)
- StorageClass (local-storage)

## Kubernetes

Install [k0s](https://k0sproject.io/) as the Kubernetes distribution:

```sh
curl -sSf https://get.k0s.sh | sudo sh
sudo k0s install controller --enable-worker --no-taints --config ./k0s.yaml
sudo k0s start
```

Verify and get kubeconfig:

```sh
sudo k0s status
sudo k0s kubeconfig admin > ~/.kube/config
```

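The requirements above allow up to three nodes. Extra workers join through the standard k0s token flow; a minimal sketch (not part of the original README), assuming the controller from the previous step is up:

```sh
# On the controller: issue a worker join token
sudo k0s token create --role=worker > worker-token

# On the new worker node: install the binary and join with the token
curl -sSf https://get.k0s.sh | sudo sh
sudo k0s install worker --token-file ./worker-token
sudo k0s start
```
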
## Bootstrap Installation

```sh
cd charts/turbo-mothership-bootstrap

# Download dependencies
helm dependency update

# Review what will be installed
helm template bootstrap . --namespace bootstrap | less

# Install
helm upgrade -i bootstrap . --namespace bootstrap --create-namespace
```

## Sealed Secrets

[Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) enables GitOps management of secrets using asymmetric encryption.

### Usage

```sh
# Create a secret (do not commit)
kubectl create secret generic my-secret \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml > plaintext.yaml

# Seal it
kubeseal < plaintext.yaml > sealed-secret.yaml

# Delete plaintext
rm plaintext.yaml

# Apply sealed secret
kubectl apply -f sealed-secret.yaml
```

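Note that `kubeseal` defaults to a controller named `sealed-secrets-controller` in `kube-system`, while this chart renames it via `fullnameOverride: turbo-sealedsecrets`. A sketch of sealing against the renamed controller, assuming the override maps directly to the controller's service name:

```sh
# Fetch the public sealing certificate once, then seal offline against it
kubeseal --fetch-cert \
  --controller-name turbo-sealedsecrets \
  --controller-namespace kube-system > sealed-secrets-pub.pem

kubeseal --cert sealed-secrets-pub.pem < plaintext.yaml > sealed-secret.yaml
```
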
## ArgoCD

Available at https://argo.turbo.weystrom.dev

Get initial admin password:

```sh
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

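For CLI access, the dedicated gRPC ingress from the chart values can be used. A sketch, assuming the `argocd` CLI is installed and `argo-grpc.turbo.weystrom.dev` resolves:

```sh
argocd login argo-grpc.turbo.weystrom.dev \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret \
      -o jsonpath='{.data.password}' | base64 -d)"
```
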
## Configuration

Edit `charts/turbo-mothership-bootstrap/values.yaml` to customize components. Each subchart is configured under its own key:

```yaml
ingress-nginx:
  enabled: true
  controller:
    service:
      externalIPs:
        - 1.2.3.4

cert-manager:
  enabled: true

sealed-secrets:
  enabled: true

# ... etc
```

To disable a component, set `enabled: false`.
18
bootstrap/charts/turbo-mothership-bootstrap/.helmignore
Normal file
@@ -0,0 +1,18 @@
# Patterns to ignore when building packages.
.DS_Store
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
*.swp
*.bak
*.tmp
*.orig
*~
.project
.idea/
*.tmproj
.vscode/
15
bootstrap/charts/turbo-mothership-bootstrap/Chart.lock
Normal file
@@ -0,0 +1,15 @@
dependencies:
- name: ingress-nginx
  repository: https://kubernetes.github.io/ingress-nginx
  version: 4.14.1
- name: cert-manager
  repository: https://charts.jetstack.io
  version: v1.19.2
- name: sealed-secrets
  repository: https://bitnami-labs.github.io/sealed-secrets
  version: 2.17.9
- name: argo-cd
  repository: https://argoproj.github.io/argo-helm
  version: 9.1.6
digest: sha256:dae2d89a79210646255cfbd3d045717b9db40aaf0c9e180d64829b1306659cd0
generated: "2025-12-15T17:29:55.279201521+01:00"
27
bootstrap/charts/turbo-mothership-bootstrap/Chart.yaml
Normal file
@@ -0,0 +1,27 @@
apiVersion: v2
name: turbo-mothership-bootstrap
description: Umbrella chart for cluster bootstrap components
type: application
version: 0.1.0
appVersion: "1.0.0"

dependencies:
  - name: ingress-nginx
    version: "4.14.1"
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled

  - name: cert-manager
    version: "1.19.2"
    repository: https://charts.jetstack.io
    condition: cert-manager.enabled

  - name: sealed-secrets
    version: "2.17.9"
    repository: https://bitnami-labs.github.io/sealed-secrets
    condition: sealed-secrets.enabled

  - name: argo-cd
    version: "9.1.6"
    repository: https://argoproj.github.io/argo-helm
    condition: argo-cd.enabled
@@ -0,0 +1,19 @@
{{- if .Values.clusterIssuer.enabled }}
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: {{ .Values.clusterIssuer.name }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "10"
spec:
  acme:
    server: {{ .Values.clusterIssuer.server }}
    email: {{ .Values.clusterIssuer.email }}
    privateKeySecretRef:
      name: {{ .Values.clusterIssuer.name }}
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
{{- end }}
@@ -0,0 +1,41 @@
{{- if index .Values "ingress-nginx" "enabled" }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ index .Values "ingress-nginx" "namespaceOverride" | default "ingress-nginx" }}
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: {{ .Release.Name }}
    meta.helm.sh/release-namespace: {{ .Release.Namespace }}
    "helm.sh/resource-policy": keep
{{- end }}

{{- if index .Values "cert-manager" "enabled" }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ index .Values "cert-manager" "namespace" | default "cert-manager" }}
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: {{ .Release.Name }}
    meta.helm.sh/release-namespace: {{ .Release.Namespace }}
    "helm.sh/resource-policy": keep
{{- end }}

{{- if index .Values "argo-cd" "enabled" }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ index .Values "argo-cd" "namespaceOverride" | default "argocd" }}
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: {{ .Release.Name }}
    meta.helm.sh/release-namespace: {{ .Release.Namespace }}
    "helm.sh/resource-policy": keep
{{- end }}
87
bootstrap/charts/turbo-mothership-bootstrap/values.yaml
Normal file
@@ -0,0 +1,87 @@
# Bootstrap umbrella chart values
# Each subchart is configured under its own key
# NOTE: Cilium CNI is installed via k0s config (k0s.yaml)

# ------------------------------------------------------------------------------
# Ingress NGINX (ingress-nginx)
# ------------------------------------------------------------------------------
ingress-nginx:
  enabled: true
  fullnameOverride: turbo-ingress
  namespaceOverride: ingress-nginx
  controller:
    admissionWebhooks:
      enabled: false
    service:
      externalIPs:
        - 65.109.94.180
      type: ClusterIP

# ------------------------------------------------------------------------------
# Cert Manager (cert-manager)
# ------------------------------------------------------------------------------
cert-manager:
  enabled: true
  fullnameOverride: turbo-certmgr
  namespace: cert-manager
  crds:
    enabled: true
  ingressShim:
    defaultIssuerName: letsencrypt-prod
    defaultIssuerKind: ClusterIssuer
    defaultIssuerGroup: cert-manager.io

# ------------------------------------------------------------------------------
# Sealed Secrets (kube-system)
# ------------------------------------------------------------------------------
sealed-secrets:
  enabled: true
  fullnameOverride: turbo-sealedsecrets

# ------------------------------------------------------------------------------
# Argo CD (argocd)
# ------------------------------------------------------------------------------
argo-cd:
  enabled: true
  fullnameOverride: turbo-argocd
  global:
    domain: argo.turbo.weystrom.dev
  namespaceOverride: argocd
  configs:
    params:
      server.insecure: true
    cm:
      admin.enabled: true
  server:
    ingress:
      enabled: true
      ingressClassName: nginx
      annotations:
        nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      extraTls:
        - hosts:
            - argo.turbo.weystrom.dev
          secretName: argocd-ingress-http
    ingressGrpc:
      enabled: true
      ingressClassName: nginx
      hostname: argo-grpc.turbo.weystrom.dev
      annotations:
        nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      extraTls:
        - hosts:
            - argo-grpc.turbo.weystrom.dev
          secretName: argocd-ingress-grpc

# ------------------------------------------------------------------------------
# Raw manifests configuration
# ------------------------------------------------------------------------------
# NOTE: OpenEBS is installed via k0s config (k0s.yaml)
clusterIssuer:
  enabled: true
  name: letsencrypt-prod
  email: mail@weystrom.dev
  server: https://acme-v02.api.letsencrypt.org/directory
101
bootstrap/k0s.yaml
Normal file
@@ -0,0 +1,101 @@
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
  namespace: kube-system
spec:
  api:
    address: 65.109.94.180
    ca:
      certificatesExpireAfter: 8760h0m0s
      expiresAfter: 87600h0m0s
    k0sApiPort: 9443
    port: 6443
    sans:
      - 65.109.94.180
      - 2a01:4f9:3051:48ca::2
  controllerManager: {}
  extensions:
    helm:
      concurrencyLevel: 5
      repositories:
        - name: cilium
          url: https://helm.cilium.io/
        - name: openebs
          url: https://openebs.github.io/openebs
      charts:
        - name: cilium
          chartname: cilium/cilium
          version: "1.18.4"
          namespace: kube-system
          order: 1
          values: |
            cluster:
              name: local
            k8sServiceHost: 65.109.94.180
            k8sServicePort: 6443
            kubeProxyReplacement: true
            operator:
              replicas: 1
            routingMode: tunnel
            tunnelProtocol: vxlan
        - name: openebs
          chartname: openebs/openebs
          version: "4.2.0"
          namespace: openebs
          order: 2
          values: |
            localpv-provisioner:
              localpv:
                basePath: /var/openebs/local
            engines:
              replicated:
                mayastor:
                  enabled: false
              local:
                zfs:
                  enabled: false
                rawfile:
                  enabled: false
                lvm:
                  enabled: false
            loki:
              enabled: false
            minio:
              enabled: false
            alloy:
              enabled: false
  installConfig:
    users:
      etcdUser: etcd
      kineUser: kube-apiserver
      konnectivityUser: konnectivity-server
      kubeAPIserverUser: kube-apiserver
      kubeSchedulerUser: kube-scheduler
  network:
    clusterDomain: cluster.local
    dualStack:
      enabled: true
      IPv6podCIDR: fd00::/108
      IPv6serviceCIDR: fd01::/108
    kubeProxy:
      disabled: true
    nodeLocalLoadBalancing:
      enabled: false
      envoyProxy:
        apiServerBindPort: 7443
        konnectivityServerBindPort: 7132
      type: EnvoyProxy
    podCIDR: 10.240.0.0/16
    provider: custom
    serviceCIDR: 10.99.0.0/12
  scheduler: {}
  storage:
    etcd:
      ca:
        certificatesExpireAfter: 8760h0m0s
        expiresAfter: 87600h0m0s
      peerAddress: 127.0.0.1
    type: etcd
  telemetry:
    enabled: false
5
manifests/capi-stack/core-provider.yaml
Normal file
@@ -0,0 +1,5 @@
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi
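With no `spec`, the CAPI Operator installs the latest published core provider release. A hypothetical pinned variant for reproducible installs (the version shown is an example, not one prescribed by this repo):

```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi
spec:
  version: v1.9.4  # example pin, not from this repo
```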
42
manifests/capi-stack/k0smotron-providers.yaml
Normal file
@@ -0,0 +1,42 @@
---
# k0smotron Bootstrap Provider
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: BootstrapProvider
metadata:
  name: k0sproject-k0smotron
  namespace: capi
spec:
  version: v1.10.1
  manager:
    health: {}
    metrics: {}
    verbosity: 1
    webhook: {}
---
# k0smotron ControlPlane Provider
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: ControlPlaneProvider
metadata:
  name: k0sproject-k0smotron
  namespace: capi
spec:
  version: v1.10.1
  manager:
    health: {}
    metrics: {}
    verbosity: 1
    webhook: {}
---
# k0smotron Infrastructure Provider (for hosted control planes)
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: k0sproject-k0smotron
  namespace: capi
spec:
  version: v1.10.1
  manager:
    health: {}
    metrics: {}
    verbosity: 1
    webhook: {}
18
manifests/capi-stack/tinkerbell-infraprovider.yaml
Normal file
@@ -0,0 +1,18 @@
---
# Tinkerbell Infrastructure Provider for CAPI
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: tinkerbell
  namespace: capi
spec:
  version: v0.6.7
  configSecret:
    name: tinkerbell-provider-credentials
  fetchConfig:
    url: https://github.com/tinkerbell/cluster-api-provider-tinkerbell/releases/download/v0.6.7/infrastructure-components.yaml
  manager:
    health: {}
    metrics: {}
    verbosity: 1
    webhook: {}
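The provider references a `tinkerbell-provider-credentials` Secret that is not part of this commit. A minimal sketch of its likely shape; the exact keys depend on the provider release (`TINKERBELL_IP` is the variable the Tinkerbell provider has historically required), and the address is a hypothetical one on the lab's br-mgmt bridge (172.16.81.0/24):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tinkerbell-provider-credentials
  namespace: capi
stringData:
  TINKERBELL_IP: "172.16.81.1"  # hypothetical; set to the real Tinkerbell endpoint
```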