Thought I’d make a quick writeup on how I set up my home lab with Kubernetes. I tend to use two different models.
For light work, I really like KinD (K8s in Docker). KinD is incredibly easy to install and use (use the pre-compiled binaries to bypass compiling with Go). You can spin up multiple clusters, tear clusters down, and start fresh (this is a blessing and a curse in the long run: it cultivates bad habits of not troubleshooting cluster issues, but it’s a big time saver), and it behaves very much like a host-backed K8s cluster. There are some nuances to networking that can become blockers, but for 99% of light-load work, it’s great. And because it runs on your laptop, you always have a K8s environment at the ready.
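For reference, the day-to-day KinD workflow is just a couple of commands (the cluster name ‘lab’ is my own placeholder; pick whatever you like):

```bash
# Spin up a named cluster (pulls the node image on first run)
kind create cluster --name lab

# See what's running, then tear it down and start fresh
kind get clusters
kind delete cluster --name lab
```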
For more complex and/or heavier-load use, I run VM-backed K8s clusters on a Dell R720 with a hypervisor. Both ESXi (VMware’s hypervisor) and KVM are free and provide the functionality you need for a home lab. KVM pairs well with Proxmox, which gives you vCenter-like management without the licensing requirements. Whichever you choose is up to you.
Basics: find a good server on eBay and load it up with storage, CPU, and RAM. Google how to configure it. Install whatever hypervisor you prefer (make sure the hypervisor is compatible with the server before you purchase). Put it as far away from your living space as possible, because it’s loud. You should be able to find a decent used server for around $800 USD. I do fine with a 32-core/256Gi RAM/3TB disk setup; my total cost after the server, additional RAM, and some additional disks came in around $950. I have no arrangement or relationship with this company (outside of being a customer) and get nothing if you shop with them, but I was happy with the purchases and their customer service: Orange Computers.
The base elements we want for a K8s lab cluster are:
- K8s nodes, joined into a functional cluster
- Pod networking
- Persistent storage
- Load balancer to expose services to routable IPs
- Ingress controller to expand service exposure options
- Bonus: a proxy to allow for multiple (HA) control plane nodes
To achieve the above, I’m using as much of an open-source shopping list as possible. This is not meant to be an endorsement or recommendation for any production use. These are simply what I use today, because they’re the best balance of function, fidelity to production, and ease of use for my home lab. The biggest advantage of this combination of projects is that it will work on any infrastructure. So if you deploy this stack to your home lab, it doesn’t matter which hypervisor you’re using. If you deploy it to a cloud service provider, it doesn’t matter which one it is.
The list of projects/products used here is:
- Ubuntu server (Base linux distro)
- Docker (Container runtime)
- You could elect to install containerd or CRI-O for the container runtime. I am still using Docker because I always have. You can read more about the various runtime options here.
- kubeadm (K8s cluster bootstrapping)
- Contour (Ingress)
- MetalLB (Load balancer to expose services outside the cluster)
- HAProxy (Control plane load balancer)
- Longhorn (Replicated persistent storage) or OpenEBS
- Cilium (Pod networking)
For this post, I am skipping the HA control plane. It’s something you can come back to later. The gist is to install and configure HAProxy, then bootstrap your K8s cluster with kubeadm, passing a few extra parameters to tee it up. For now, I’ll cover a single-node control plane. You can get more information on HA control planes here. I’ll probably do a post dedicated to just this topic.
Create a VM and install your preferred Linux distro. I recommend a 200Gi disk, 4-8 CPUs, and 16Gi RAM. The ways you can create templates, use startup scripts, and so on vary widely between hypervisors, so I won’t go into infrastructure-as-code here. What is consistent between hypervisors is the ability to create VMs with snapshots, so I’ll take that approach. The downside is that you won’t be able to create multiple clusters; the upside is that I can more easily explain a base K8s home lab that you can easily reset.
I’ll start from the point after a Linux VM is created. Rather than repeat each step here, I am providing links to the individual install/config pages. Many of the steps after bootstrapping the cluster are single kubectl apply commands. Docker is easy to install, kubeadm init is pretty straightforward, MetalLB requires some minor config info, and the rest are single-command installs. (Example command sketches for each step follow the list below.)
- Install Docker
- Create a K8s cluster with kubeadm
- Once you’ve gotten to the step where you’re ready to run ‘kubeadm init’, stop and make the VM a template. You will create as many of these as you need for your control plane and worker nodes.
- Create your VMs from the above template and use them to bootstrap the cluster per the directions.
- Use the --pod-network-cidr switch with kubeadm init and provide a non-routable/non-overlapping IP CIDR (e.g. 10.30.0.0/16)
- Install Cilium
- Install MetalLB
- MetalLB will require you to configure a range of IPs that are routable on your home network. These are the IPs that will be used when you expose a service with type LoadBalancer.
- For most cases, use the Layer 2 configuration
- Install Longhorn
- When you kubectl apply the manifest, you may need to do it twice; often on the first run it will miss one or two of the CRDs. There is a Helm installer that is more reliable. Take your pick. I typically use Helm but keep the manifest around to remove the CRDs after a helm delete.
- OpenEBS is another option for this component, and it is equally simple to install. Both are CNCF Sandbox projects.
- Install Contour
- Now that you have your base cluster (e.g. one control plane node, three worker nodes), power the VMs down and take a snapshot of each.
- This allows you to ‘reset’ your cluster anytime
- This is not the best model. It is better to create automation that builds everything from a base image. But that’s too much to cover here.
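As promised, here are example sketches of the commands behind the steps above; treat them as rough guides, not replacements for the linked install pages. First, installing Docker and the kubeadm toolchain on each node (Docker’s convenience script is the quick path; the kubeadm install page covers adding the Kubernetes package repo, which I’m eliding here):

```bash
# Quick Docker install via Docker's convenience script (fine for a lab)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# kubelet, kubeadm, and kubectl come from the Kubernetes package repo;
# follow the kubeadm install page to add the repo first, then:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```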
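Next, bootstrapping the cluster with the pod CIDR from the list above (10.30.0.0/16 is just the example value; kubeadm prints the exact join command for your cluster when init finishes):

```bash
# On the control plane node
sudo kubeadm init --pod-network-cidr=10.30.0.0/16

# init prints a 'kubeadm join' command with a real token and CA hash;
# run that printed command on each worker node. It looks like:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```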
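For Cilium, the Cilium CLI reduces the install to a single command (installing the CLI itself is covered in the Cilium docs):

```bash
# Install Cilium into the cluster and wait until it reports healthy
cilium install
cilium status --wait
```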
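For the MetalLB Layer 2 configuration, recent releases use CRDs (older releases used a ConfigMap instead); after installing MetalLB per its docs, something like the following sets up the address pool. The 192.168.1.240-250 range is an assumption; substitute a free, routable range on your home network:

```bash
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder range on your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - lab-pool
EOF
```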
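The Helm route for Longhorn mentioned above looks roughly like this:

```bash
# Add the Longhorn chart repo and install into its own namespace
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace
```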
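And Contour is one of the single-command installs, via its quickstart manifest:

```bash
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl get pods -n projectcontour   # wait for the contour and envoy pods to be ready
```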
Once you have the above configured, your cluster should look like the output below (you won’t see the minio pod in yours; it’s running for S3 storage, which wasn’t covered in the above steps). You have pod networking capable of applying network policy, persistent storage capable of RWX, an ingress controller, and a load balancer to expose services on routable IPs.
From here, you can install kubectl on your workstation, copy the /etc/kubernetes/admin.conf file from the control plane node to {home directory}/.kube/ on your local workstation, rename it config, and you’re ready to deploy whatever you’d like to the new cluster. For example:
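A sketch of that last step, assuming SSH access to the control plane node (the hostname and user are placeholders; note that admin.conf is root-owned, so you may need to sudo cat it on the node rather than scp it directly):

```bash
# Pull the admin kubeconfig down to your workstation
mkdir -p ~/.kube
scp user@control-plane-node:/etc/kubernetes/admin.conf ~/.kube/config

# Verify the cluster is reachable from your workstation
kubectl get nodes
```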