TL;DR: Steps to install CRI-O and stand up a Kubeadm-provisioned cluster on Ubuntu 22.04.
Hard for me to believe, but it has been nearly five years since my first (and only) post on Kubeadm. That was circa Kubernetes 1.9, and I was more or less writing up what I had learned about Kubeadm as I went through it.
I recently hit the expiration date on my latest home lab vSphere license. This annual event always causes me to reconsider how I use my server and whether handing out more cash is the best option. This year, I’ve decided to let go of the automation niceties and see how I do without. You may have read some of my past posts where I tried Proxmox/KVM for a bit. I never really got the network and storage performing well enough, but I may give it another try at some point.
For now, I’ve gone with the free ESXi license and hand-cobbled OVFs. No more Cluster API CAPV (though that setup was a little more work than I cared for in my home lab anyway). On the upside, I get back all the compute vCenter was consuming. Which brings us to the point of this post: I’m back to using OVF templates with Kubeadm ready to init or join. With the advent of CRI, I decided to go with CRI-O for my container runtime.
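In case “ready to init or join” sounds vague: the template boots with CRI-O and the kubelet already in place, so bringing a node into a cluster is basically one kubeadm command. A rough sketch of what that looks like — the socket path is CRI-O’s default, kubeadm will usually auto-detect it when only one runtime is installed, and the token, hash, and CIDR values are placeholders, not anything from my setup:

```bash
# First control-plane node: CRI-O's default socket shown explicitly,
# though kubeadm normally detects it when only one runtime is present.
sudo kubeadm init \
  --cri-socket unix:///var/run/crio/crio.sock \
  --pod-network-cidr 10.244.0.0/16    # match the CNI you plan to deploy

# Each worker node: values come from the output of the init above.
sudo kubeadm join <control-plane-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket unix:///var/run/crio/crio.sock
```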
CRI-O documents the install for all Debian-based Linux flavors as one set of steps, and as documented, it doesn’t quite work on Ubuntu. So I thought I’d take this opportunity to share what I learned for anyone else who would like to follow the same pattern in their lab. As before, the workflow is to create a Linux VM OVF (Ubuntu in this case) with CRI-O installed and configured for Kubernetes, and Kubeadm ready to go.
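To give a feel for that workflow before sending you off to the repo, here is a rough sketch of the host prep on Ubuntu 22.04. The kernel module, sysctl, and swap bits are the standard kubeadm prerequisites from the upstream Kubernetes docs; the apt repository step is deliberately left as a placeholder, because the repo URL and version stream are exactly where the generic Debian instructions fall over on Ubuntu, and the working entries live in the linked directions.

```bash
# Kernel modules and sysctls kubeadm expects (standard Kubernetes prereqs)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# kubeadm preflight checks fail with swap enabled
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab

# Add the CRI-O apt source and signing key for this release, then install.
# NOTE: placeholder values -- use the working repo entries from the linked
# directions (or the current CRI-O docs) rather than these variables.
export OS=xUbuntu_22.04
export CRIO_VERSION=1.25        # track your target Kubernetes minor version
#   ...add the CRI-O apt repo + key for $OS / $CRIO_VERSION here...
sudo apt-get update
sudo apt-get install -y cri-o   # some repo streams also want cri-o-runc
sudo systemctl enable --now crio
```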
I should probably just add what I’m about to say to the banner of my site: because my WordPress editor consistently destroys any and all console or YAML formatting, I will link to my GitHub repo for the directions. Hope they are useful for at least one person out there.