Hello! A few days ago I built a k3s (lightweight Kubernetes) cluster on Alpine Linux VMs running on OpenBSD with vmm(4).
I chose Alpine because it's really small, it's close to OpenBSD in spirit, and installing it is painless. I will "install" 3 VMs: one master and 2 workers.
I'm already kind of a fan of veb(4)/vport(4), so we'll build the VM network on top of it. The network will be 100.65.0.0/24, but you can use something else; I just suggest keeping it separate from your home or work network to avoid problems.
Let's start by downloading the .iso, creating the vm.conf file and setting up the network.
$ pwd
/home/gonzalo
$ mkdir VMs && cd VMs
$ ftp -V https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86_64/alpine-virt-3.16.2-x86_64.iso
--- INTERNET MAGIC ---
$ vmctl create -s 20G alpine01.qcow2
vmctl: qcow2 imagefile created
$ doas cat /etc/vm.conf
switch "vms_65" {
interface veb65
}
vm alpine01 {
disable
owner gonzalo
memory 1G
cdrom "/home/gonzalo/VMs/alpine-virt-3.16.2-x86_64.iso"
disk "/home/gonzalo/VMs/alpine01.qcow2"
interface tap0 { switch "vms_65" }
}
#vm alpine02 {
# enable
# owner gonzalo
# memory 1G
# disk "/home/gonzalo/VMs/alpine02.qcow2"
# interface tap1 { switch "vms_65" }
#}
#vm alpine03 {
# enable
# owner gonzalo
# memory 1G
# disk "/home/gonzalo/VMs/alpine03.qcow2"
# interface tap2 { switch "vms_65" }
#}
Yes, alpine02 and alpine03 are commented out: we will install and set up just alpine01, then copy its qcow2 twice and modify a few files to make the clones work. Of course, you could use an Ansible playbook, install each one by hand, and so on, but for this article we will keep it simple and easy.
Time for networking. As I said, we are using 100.65.0.0/24, so we need to create the veb(4)/vport(4) interfaces and a pf entry to take care of the VMs' traffic. As you can see in the vm.conf, we are using veb65 and our switch on vmd(8) is "vms_65".
$ doas cat /etc/hostname.veb65
add vport65
add tap0 # alpine01
add tap1 # alpine02
add tap2 # alpine03
up
$ doas cat /etc/hostname.vport65
inet 100.65.0.1 255.255.255.0
up
$ doas grep vport65 /etc/pf.conf
match out log on egress from vport65:network to any nat-to (egress)
$ doas sh /etc/netstart vport65
-- MAYBE SOME NOISE --
$ doas sh /etc/netstart veb65
-- MAYBE SOME NOISE --
$ doas pfctl -f /etc/pf.conf
-- PROBABLY NOTHING ;) --
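One thing that is easy to forget, and an assumption on my part about your setup: for that nat-to rule to be useful, the OpenBSD host has to forward packets between the VMs and egress, so make sure IP forwarding is on (and in /etc/sysctl.conf so it survives reboots):
$ doas sysctl net.inet.ip.forwarding=1
net.inet.ip.forwarding: 0 -> 1
$ grep forwarding /etc/sysctl.conf
net.inet.ip.forwarding=1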
$ doas /etc/rc.d/vmd start
vmd(ok)
$ vmctl status
ID PID VCPUS MAXMEM CURMEM TTY OWNER STATE NAME
1 - 1 1.0G - - gonzalo stopped alpine01
2 - 1 1.0G - - gonzalo stopped alpine02
3 - 1 1.0G - - gonzalo stopped alpine03
$ vmctl start 1 && vmctl console 1
Now you are in Loonix land and you should install it. I kept it simple, pretty much a next-next-finish affair, but you can take your time and make it nice; check the Alpine documentation to get it right (if you would rather script the installer, there is a sketch after the list below). Remember your network; in my case I used:
alpine01 - 100.65.0.101
alpine02 - 100.65.0.102
alpine03 - 100.65.0.103
Gateway - 100.65.0.1
Nameserver - 9.9.9.9 (you can set up unbound or something else but... k.i.s.s.)
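By the way, if you prefer scripting the installer over answering the prompts, setup-alpine can take an answer file with -f. What follows is only a sketch from memory, so treat every variable name as an assumption and generate a fresh template with setup-alpine -c answers for your release; for alpine01 it would look roughly like this:
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine01"
INTERFACESOPTS="auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 100.65.0.101
netmask 255.255.255.0
gateway 100.65.0.1
"
DNSOPTS="9.9.9.9"
TIMEZONEOPTS="-z UTC"
APKREPOSOPTS="-1"
SSHDOPTS="-c openssh"
NTPOPTS="-c chrony"
DISKOPTS="-m sys /dev/vda"
Then a single setup-alpine -f answers runs the whole install.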
A typical network interfaces file on Alpine looks like this; this one is from alpine01:
# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 100.65.0.101
netmask 255.255.255.0
gateway 100.65.0.1
Now that alpine01 is installed and the network is working, we need virtio_vmmci & vmm_clock from the great voutilad. I will not go deep into them, because he already explains everything in the repos, but the short version is: without them your machine can go nuts with high CPU usage and weird clock behaviour while running Alpine (or other Linux) and Docker. At least that was my experience; on other machines nothing happened, but it is a good idea to install those modules anyway.
Inside alpine01 we should update everything, clone the repositories, compile the modules and install them.
# apk update && apk upgrade
-- SOME STUFF --
/home/gonzalo # apk add doas git gcc make linux-virt-dev
-- SOME MORE LOONIX STUFF --
/home/gonzalo # git clone https://github.com/voutilad/virtio_vmmci
-- GIT MAGIC --
/home/gonzalo # cd virtio_vmmci && make && make install
-- SOME voutilad MAGIC --
/home/gonzalo # cd .. && git clone https://github.com/voutilad/vmm_clock
-- GIT MAGIC --
/home/gonzalo # cd vmm_clock && make && make install
-- SOME voutilad MAGIC --
/home/gonzalo # ls -l /lib/modules/$(uname -r)/extra
total 20
-rw-r--r-- 1 root root 5471 Sep 20 09:23 virtio_pci_obsd.ko.gz
-rw-r--r-- 1 root root 6157 Sep 20 09:23 virtio_vmmci.ko.gz
-rw-r--r-- 1 root root 4090 Sep 20 09:18 vmm_clock.ko.gz
/home/gonzalo # modprobe vmm_clock && modprobe virtio_vmmci
/home/gonzalo # lsmod | grep 'vmm_clock\|virtio_vmmci'
vmm_clock 28672 0
virtio_vmmci 16384 0
/home/gonzalo # cat /etc/modules-load.d/vmm_clock.conf
vmm_clock
/home/gonzalo # cat /etc/modules-load.d/virtio_vmmci.conf
virtio_vmmci
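(In case you wonder where those two files came from: they don't exist by default, I just created them so both modules get loaded at boot.)
/home/gonzalo # echo vmm_clock > /etc/modules-load.d/vmm_clock.conf
/home/gonzalo # echo virtio_vmmci > /etc/modules-load.d/virtio_vmmci.conf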
To be able to run k3s on Alpine we need to add some cgroup parameters to the kernel command line:
/home/gonzalo # grep default_kernel_opts /etc/update-extlinux.conf
default_kernel_opts="quiet rootfstype=ext4 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
/home/gonzalo # update-extlinux
/home/gonzalo # reboot
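Once it comes back, it doesn't hurt to confirm the kernel picked up those parameters and that the modules autoloaded (your output will differ a bit):
/home/gonzalo # cat /proc/cmdline
-- YOUR BOOT/ROOT OPTIONS -- quiet rootfstype=ext4 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
/home/gonzalo # lsmod | grep 'vmm_clock\|virtio_vmmci'
vmm_clock 28672 0
virtio_vmmci 16384 0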
With alpine01 back up, I assume you are super cool and already put your SSH key in the VM, so it's time to create alpine02 and alpine03: let's shut down alpine01, copy the image, uncomment the entries in vm.conf and tweak the "not so new" machines so they look new. I will only show alpine02; you must do the same for alpine03:
$ pwd
/home/gonzalo/VMs
$ vmctl status
ID PID VCPUS MAXMEM CURMEM TTY OWNER STATE NAME
1 - 1 1.0G - - gonzalo stopped alpine01
2 - 1 1.0G - - gonzalo stopped alpine02
3 - 1 1.0G - - gonzalo stopped alpine03
$ cp alpine01.qcow2 alpine02.qcow2
$ cp alpine01.qcow2 alpine03.qcow2
$ doas cat /etc/vm.conf
switch "vms_65" {
interface veb65
}
vm alpine01 {
enable
owner gonzalo
memory 1G
cdrom "/home/gonzalo/VMs/alpine-virt-3.16.0-x86_64.iso"
disk "/home/gonzalo/VMs/alpine01.qcow2"
interface tap { switch "vms_65" }
}
vm alpine02 {
enable
owner gonzalo
memory 1G
disk "/home/gonzalo/VMs/alpine02.qcow2"
interface tap { switch "vms_65" }
}
vm alpine03 {
enable
owner gonzalo
memory 1G
disk "/home/gonzalo/VMs/alpine03.qcow2"
interface tap { switch "vms_65" }
}
$ doas /etc/rc.d/vmd restart
vmd(ok)
vmd(ok)
Now you will have the 3 VMs up and running, but alpine02/03 are still clones of alpine01, so I will show you what to modify on alpine02 and then you can do the same on alpine03:
$ vmctl console 2
-- NOW WE ARE OVER SERIAL ON ALPINE02 --
alpine01:~$ doas su
/home/gonzalo # cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 100.65.0.102
netmask 255.255.255.0
gateway 100.65.0.1
/home/gonzalo # cat /etc/hostname
alpine02
/home/gonzalo # rm /etc/ssh/ssh_host_*
/home/gonzalo # dbus-uuidgen > /etc/machine-id
/home/gonzalo # reboot
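If you do not feel like editing those files by hand over the serial console, the same changes can be scripted; a rough sketch, adjust the IP and hostname for each clone:
/home/gonzalo # sed -i 's/100\.65\.0\.101/100.65.0.102/' /etc/network/interfaces
/home/gonzalo # echo alpine02 > /etc/hostname
/home/gonzalo # rm /etc/ssh/ssh_host_*
/home/gonzalo # dbus-uuidgen > /etc/machine-id
/home/gonzalo # reboot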
Look at that! All the VMs are running with unique IPs and SSH host keys, and the modules loaded. Let's double-check it with a little Ansible, because we are cool:
$ cat hosts-vmd
[alpines:vars]
ansible_python_interpreter=/usr/bin/python3.10
[alpines]
100.65.0.101
100.65.0.102
100.65.0.103
$ ansible -i hosts-vmd alpines -m shell -a "uname -a"
100.65.0.102 | CHANGED | rc=0 >>
Linux alpine02 5.15.68-0-virt #1-Alpine SMP Fri, 16 Sep 2022 06:29:31 +0000 x86_64 GNU/Linux
100.65.0.103 | CHANGED | rc=0 >>
Linux alpine03 5.15.68-0-virt #1-Alpine SMP Fri, 16 Sep 2022 06:29:31 +0000 x86_64 GNU/Linux
100.65.0.101 | CHANGED | rc=0 >>
Linux alpine01 5.15.68-0-virt #1-Alpine SMP Fri, 16 Sep 2022 06:29:31 +0000 x86_64 GNU/Linux
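While we are at it, the same trick confirms the modules made it onto every node (output trimmed):
$ ansible -i hosts-vmd alpines -m shell -a "lsmod | grep 'vmm_clock\|virtio_vmmci'"
-- vmm_clock AND virtio_vmmci, THREE TIMES, HOPEFULLY --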
Log into alpine01 and let's install k3s. I will not go into much detail about it; the documentation is good and you probably know a lot about Kubernetes already ;).
alpine01:~$ doas su
/home/gonzalo # curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/rancher/k3s/k3s.env
[INFO] openrc: Creating service file /etc/init.d/k3s
[INFO] openrc: Enabling k3s service for default runlevel
[INFO] openrc: Starting k3s
* Caching service dependencies ... [ ok ]
* Starting k3s ... [ ok ]
But is it running?
/home/gonzalo # ps auwx | grep k3s
7653 root 0:00 supervise-daemon k3s --start --stdout /var/log/k3s.log --stderr /var/log/k3s.log --pidfile /var/run/k3s.pid --respawn-delay 5 --respawn-max 0 /usr/local/bin/k3s -- server
7655 root 0:22 {k3s-server} /usr/local/bin/k3s server
7672 root 0:09 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
8314 root 0:05 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1e1baffa6c3634ba534f44cc83462c90f96400222c8658643206fb33863fd9f1 -address /run/k3s/containerd/containerd.sock
8330 root 0:04 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id cdf30cd5981fd42161a3418dc0c212255450c0ed54c86f879cd06ab7ff183bf9 -address /run/k3s/containerd/containerd.sock
8355 root 0:03 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 88ddbb0b4437aa96a15b30927b87312c5cf8686fa73ffc15e695839863c7ba7c -address /run/k3s/containerd/containerd.sock
9370 root 0:03 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7ad84c3c1eabbe43a6a773717a28eed8a48a6ed757c294b1cb54d2d8a01dbe7f -address /run/k3s/containerd/containerd.sock
9408 root 0:02 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2a15d7ccaf18c14575b2a545c4c881fd8d12a00b593ca17178b72f8193618a95 -address /run/k3s/containerd/containerd.sock
It is! We need to adjust our Kubernetes config and then use our token to add the nodes to this cluster:
/home/gonzalo # vim /etc/rancher/k3s/k3s.yaml
-- SOME ADJUSTMENTS --
/home/gonzalo # service k3s status
* status: started
/home/gonzalo # service k3s restart
* Stopping k3s ... [ ok ]
* Starting k3s ... [ ok ]
/home/gonzalo # cat /var/lib/rancher/k3s/server/node-token
K10d3c66c80442424224beefd54353562d7339da578d831c3803bff2b1a2114c7eaea1::server:4ce50ab638839137eec4317efee58bb7
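By the way, k3s bundles kubectl, so even before touching the workers you can check from alpine01 itself that the control plane registered (ages and such will obviously be your own):
/home/gonzalo # k3s kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
alpine01   Ready    control-plane,master   5m    v1.24.4+k3s1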
Time to add alpine02 to the cluster; for that we need K3S_URL & K3S_TOKEN. Then you can do the same for alpine03:
alpine02:~$ doas su
/home/gonzalo # curl -sfL https://get.k3s.io | K3S_URL=https://100.65.0.101:6443 K3S_TOKEN=K10d3c66c80442424224beefd54353562d7339da578d831c3803bff2b1a2114c7eaea1::server:4ce50ab638839137eec4317efee58bb7 sh -
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Skipping binary downloaded, installed k3s matches hash
[INFO] Skipping installation of SELinux RPM
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/rancher/k3s/k3s-agent.env
[INFO] openrc: Creating service file /etc/init.d/k3s-agent
[INFO] openrc: Enabling k3s-agent service for default runlevel
[INFO] openrc: Starting k3s-agent
* Caching service dependencies ... [ ok ]
* Starting k3s ... [ ok ]
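A quick check that the agent service is actually up on the worker:
/home/gonzalo # service k3s-agent status
 * status: started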
Thanks to rsadowski@ we have kubectl in ports, so we can install it, grab /etc/rancher/k3s/k3s.yaml from alpine01 and manage our cluster from OpenBSD:
$ doas pkg_add kubectl
-- ESPIE MAGIC --
$ mkdir ~/.kube
## Place /etc/rancher/k3s/k3s.yaml from 100.65.0.101 into ~/.kube/config
## and be sure the server IP in it isn't 127.0.0.1 so you can actually reach it.
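Something like this does the trick, assuming you can read the file over SSH somehow (it is root-only on the VM), and then you point it at the master's IP; OpenBSD's sed supports -i:
$ scp root@100.65.0.101:/etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sed -i 's/127.0.0.1/100.65.0.101/' ~/.kube/config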
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
alpine01 Ready control-plane,master 71m v1.24.4+k3s1
alpine03 Ready <none> 2m3s v1.24.4+k3s1
alpine02 Ready <none> 6m4s v1.24.4+k3s1
$ kubectl label node alpine02 node-role.kubernetes.io/worker=worker
$ kubectl label node alpine03 node-role.kubernetes.io/worker=worker
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
alpine02 Ready worker 18m v1.24.4+k3s1
alpine01 Ready control-plane,master 83m v1.24.4+k3s1
alpine03 Ready worker 14m v1.24.4+k3s1
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://100.65.0.101:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
$ kubectl cluster-info
Kubernetes control plane is running at https://100.65.0.101:6443
CoreDNS is running at https://100.65.0.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://100.65.0.101:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You can run 'kubectl cluster-info dump' yourself, since the output is huge. But as you can see, the cluster is up and running. The next step is to play with it, but I will not do that now, since this already got too long; you can get your hands dirty yourself or wait for the next article. Thanks for reading, and sorry for the typos.