Deploy multiple vCluster instances with minimal required input, preconfigured behind an Ingress Controller for access and TLS.
In my previous post, I covered deploying vClusters with OIDC preconfigured. Two manual edits to the kubeconfig file remained in that write-up. First, we needed to replace the identity/user certificate and key with our OIDC client config. Second, we needed to update the server URL with the IP address provisioned by our LoadBalancer service.
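To make that second edit concrete, here is a minimal sketch of the kubeconfig cluster entry it touched. The names and the IP are placeholders, not values from the previous post; the point is that the `server` field pointed at whatever address the LoadBalancer happened to assign, which is exactly what we want to replace with a stable DNS name.

```yaml
# Sketch only: placeholder cluster name, CA data, and IP.
clusters:
- cluster:
    certificate-authority-data: <vcluster-ca-bundle>
    server: https://203.0.113.10:443   # IP assigned by the LoadBalancer service
  name: my-vcluster
```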
In this post I’ll cover automating that second piece by replacing the LoadBalancer with an Ingress Controller config. I have long used Contour for Ingress in K8s, but I’ll be using NGINX in this case (because I don’t know how to configure SSL passthrough with Contour, and we can’t do this without SSL passthrough).
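SSL passthrough is disabled by default in ingress-nginx, so the controller has to be started with the `--enable-ssl-passthrough` flag. A minimal sketch of doing that with the upstream ingress-nginx Helm chart (release and namespace names are just illustrative) looks something like this:

```bash
# Install ingress-nginx with SSL passthrough enabled.
# The chart's default Service type is LoadBalancer, which MetalLB will back.
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.extraArgs.enable-ssl-passthrough=true
```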
We’ll use a wildcard DNS host record to point at our Ingress Controller. In my case, I created a *.vrelevant.lab record. I won’t be covering the DNS setup itself, though.
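With the wildcard in place, each vCluster just needs an Ingress rule for a host under that domain. The sketch below assumes a hypothetical vCluster named `team-a` whose API service is `team-a` on port 443 in namespace `team-a`; the annotations tell NGINX to pass the TLS connection straight through to the vCluster instead of terminating it at the controller.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a-vcluster
  namespace: team-a
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: team-a.vrelevant.lab   # resolves via the *.vrelevant.lab wildcard record
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: team-a
            port:
              number: 443
```

The kubeconfig server URL then becomes `https://team-a.vrelevant.lab`, with no per-cluster IP to look up.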
So, the minimum requirement is a K8s cluster with a LoadBalancer available to expose the Ingress Controller. This combination will vary from one K8s implementation to another. I’m using MetalLB and NGINX on a K8s cluster installed on VMs. Refer to your K8s platform docs for configuring an Ingress Controller; the overall config steps here will generally apply in any case.
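Whatever the platform, the quick sanity check is that the controller’s Service has picked up an external IP, because that is the address the wildcard DNS record should point at. Assuming the default release and Service names from the chart install above:

```bash
# The EXTERNAL-IP column should show the address MetalLB (or your cloud LB) assigned.
kubectl get svc -n ingress-nginx ingress-nginx-controller
```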
I started down this path a few posts back, when covering how to deploy tenanted Crossplane instances. Then I pumped the brakes because I felt like I was just publishing a recipe without explaining the moving parts. Hopefully the past few posts on OIDC and vCluster have laid out a better understanding.
With my past posts on Argo CD, Crossplane, K8s, and now OIDC/vCluster, I believe you have enough to assemble a fairly robust Crossplane platform for multiple teams.
Let’s switch over to a GitHub repo for the steps…