Loft for vCluster with Crossplane

I’ve covered vCluster with Crossplane in a few previous posts. In this post, I’m taking a quick tour of the commercial vCluster offering, Loft, combined with Crossplane. The combination of Loft and Crossplane results in what I describe in this post, without the need to go through a lot of contortions. Loft brings many of the features I’ve covered for vCluster (e.g. SSO, ingress config, etc.), as well as app integration and more, under a single pane of glass.

I was planning to show a complete self-service workflow of Crossplane provisioned on truly tenanted K8s. Loft provides ‘Project’ level secrets that address the tenanting of Crossplane Provider credentials; in Loft, a Project is essentially a tenant boundary. Loft also provides SSO with Argo CD and integration of provisioned vClusters with Argo CD. Basically, you add Loft to Crossplane and you have a true turnkey multi-tenant, self-service, pipeline-capable Crossplane platform.
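
To make the Project-secret idea concrete, here is a minimal sketch of the Crossplane side, assuming the project-level secret lands in the tenant’s vCluster as an ordinary Kubernetes Secret named aws-creds in crossplane-system. The Secret name, namespace, and use of the AWS provider are my assumptions for illustration, not Loft’s specific mechanics:

```yaml
# Hedged sketch: assumes a per-tenant Secret named "aws-creds" (e.g. synced from a
# Loft project-level secret) already exists in the tenant's crossplane-system namespace.
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
  namespace: crossplane-system
type: Opaque
stringData:
  creds: |
    [default]
    aws_access_key_id = <tenant-access-key>
    aws_secret_access_key = <tenant-secret-key>
---
# The tenant's ProviderConfig points at that Secret, keeping Provider credentials
# scoped to the Project rather than shared across all tenants.
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-creds
      key: creds
```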

As I said, I was planning to show that. Unfortunately, Loft doesn’t want to provide me with the license key needed to enable the SSO pieces between the IdP and Argo CD.

So, I won’t show it, because I can’t. But if you’re looking to use Crossplane on-prem or in your own VPC, and want SSO, Argo CD integration, self-service, auditing, resource quotas, etc., check out Loft. I’m guessing they’d be happy to float a POC license for you (https://loft.sh/).

Looking at the commercial offerings behind Crossplane and vCluster (Upbound and Loft, respectively), I believe Loft adds more value to the combination of the two OSS projects than Upbound does.

vCluster with Automated Ingress Config

Deploy multiple vCluster instances with minimal required input, preconfigured behind an Ingress Controller for access and TLS.

In my previous post, I covered deploying vClusters with OIDC preconfigured. Two manual kubeconfig steps remained in that write-up. First, we needed to replace the identity/user certificate and key with our OIDC client config. Second, we needed to update the server URL with the IP address provisioned by our LoadBalancer service.
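
For reference, here is roughly what the finished kubeconfig looks like once both pieces are in place. This is a sketch assuming the kubelogin (kubectl oidc-login) plugin and hypothetical host, issuer, and client ID values; the OIDC client config from the previous post may differ:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-vcluster
  cluster:
    # Server URL points at an Ingress host rather than a LoadBalancer IP (hypothetical host).
    server: https://my-vcluster.vrelevant.lab
    certificate-authority-data: <base64-encoded vCluster CA>
contexts:
- name: my-vcluster
  context:
    cluster: my-vcluster
    user: oidc
current-context: my-vcluster
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://dex.vrelevant.lab   # placeholder issuer
      - --oidc-client-id=kubectl                      # placeholder client ID
```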

In this post I’ll cover automating the second piece by replacing the LoadBalancer with an Ingress Controller config. I have long used Contour for Ingress in K8s, but I’ll be using NGINX in this case (because I don’t know how to configure SSL passthrough with Contour, and we can’t do this without SSL passthrough).

We’ll use a DNS wildcard host record to point to our Ingress Controller. In my case, I created a *.vrelevant.lab record. I won’t be covering the DNS setup itself, though.

So, the minimum requirement is a K8s cluster with a LoadBalancer to expose the Ingress Controller. This combination will vary from one K8s implementation to another; I’m using MetalLB and NGINX on a K8s cluster installed on VMs. Refer to your K8s platform docs for configuring the Ingress Controller; the overall config steps here will generally apply in any case.
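
To show where this is headed, here is a hedged sketch of the per-vCluster Ingress, assuming a vCluster release named my-vcluster whose Service exposes the API server on port 443 in namespace my-vcluster, plus the wildcard DNS record above. Note that the NGINX Ingress Controller must be started with --enable-ssl-passthrough for the passthrough annotation to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-vcluster            # hypothetical vCluster release name
  namespace: my-vcluster
  annotations:
    # Pass TLS straight through to the vCluster API server; terminating TLS at
    # NGINX would break certificate/OIDC auth against the virtual API server.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: my-vcluster.vrelevant.lab   # resolves via the *.vrelevant.lab wildcard
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-vcluster
            port:
              number: 443
```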

I started to cover this setup a few posts back, when covering how to deploy tenanted Crossplane instances, then pumped the brakes because I felt like I was just publishing a recipe without explaining the moving parts. Hopefully the past few posts on OIDC and vCluster have laid the groundwork for a better understanding.

With my past posts on Argo CD, Crossplane, K8s, and now OIDC/vCluster, I believe you have enough to assemble a fairly robust Crossplane platform for multiple teams.

If you implemented Loft Labs’ enterprise vCluster suite, you’d have everything ready out of the box. Loft enables SSO, pre-populated secrets, ingress config, additional package installs (e.g. Argo CD, Crossplane, etc.), and more. Basically, you end up with a one-click (or pipeline-generated) Crossplane instance that is truly isolated from other teams/tenants.

Let’s switch over to a GitHub repo for the steps…

vCluster with OIDC

This post covers the use of vCluster with the addition of OIDC. The outcome will be a namespaced control plane (vCluster), preconfigured for OIDC with a specified group added to the cluster-admin role. This is a stepping stone to fully automated vCluster deployment. With this setup, we can use email, Slack, or whatever we’d like to distribute the kubeconfig files. Continue reading
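
As a taste of where that post lands, the “specified group added to the cluster-admin role” piece boils down to a single ClusterRoleBinding applied inside the vCluster. A minimal sketch, with a placeholder group name:

```yaml
# Applied inside the vCluster: grants cluster-admin to an OIDC group.
# "platform-admins" is a placeholder for whatever group claim your IdP issues.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: platform-admins
```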

OIDC – DevOps and SRE Level – Part 3

Putting Our OIDC Knowledge to Use

In this final OIDC post, I’ll cover setting up an Identity Provider (IdP) with OAuth + OIDC + a directory service, configuring an OAuth/OIDC application definition, configuring kube-apiserver to trust the OAuth/OIDC service, and configuring kubectl as an OAuth client. We’ll rely heavily on the concepts from the first two posts, so if you haven’t already, go over the first and second posts before diving into this one. Continue reading
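
For context, the kube-apiserver side of that trust is just a handful of flags. Here is a sketch of the relevant excerpt from a kubeadm-style static pod manifest, with placeholder issuer, client ID, claim, and CA path values (the post covers the full setup):

```yaml
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout).
# Issuer URL, client ID, claim names, and CA path are placeholders for your IdP's values.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://dex.vrelevant.lab
    - --oidc-client-id=kubectl
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.crt
```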

OIDC – DevOps and SRE Level – Part 2

OIDC and the Circle of Trust

In the first post, I spent time describing what a JWT is and how it is signed with an x509 certificate. That will be useful in understanding this second piece. In this second part of OIDC at the DevOps and SRE level, I’ll cover the primary entities involved in identity authentication and the relationships between them. As I mentioned in the first post, OIDC is an extension of OAuth. Without OAuth, there is no OIDC. OIDC is actually a very small addition to the OAuth specification, so a lot of the explanation here requires explaining OAuth. I will get to OIDC by the end. Continue reading

OIDC – DevOps and SRE Level – Part 1

I know what I’d be thinking if I were you: “What, another OIDC explanation, why?” There is so much written on OIDC, much of which makes it seem way more complicated to me than it is. So I’m writing a series that explains it in a way I think will make more sense to Ops/SREs. In my previous post, I started out planning to lay out an entire recipe for setting up an automated Crossplane cluster tenant provisioning process, including automated OIDC configuration so that only the proper identities could access the clusters. Then I pumped the brakes. Continue reading

Crossplane – Automated, Multi-Tenant, with DR, OIDC Auth, and GitOps

I teased this post earlier last year, then got distracted with life. I figure it’s time to knock it out. In this one, I’ll provide an overview of setting up a truly multi-tenant Crossplane platform, with automation provided by Crossplane.

Using 100% open source software (Crossplane, Argo CD, vCluster, Dex, OpenLDAP, Pinniped, and Velero), the result is a base architecture with GitOps, OIDC auth, Velero DR, and vCluster-based tenanting. You could use this recipe for just about any K8s-based service you wanted to automate for multiple teams. Continue reading
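
To give a flavor of the GitOps piece, here is a hedged sketch of an Argo CD Application that syncs one tenant’s resources from Git; the repo URL, path, and names are placeholders, not the actual layout from the post:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-tenant            # placeholder tenant name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-tenants.git   # placeholder repo
    targetRevision: main
    path: tenants/team-a
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```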

K8s – Stacked etcd to External – Zero Downtime

Because sometimes you start off with stacked etcd nodes, and then decide you really wanted external.

First and foremost, my blog disclaimer is in full effect, especially on this one. Use this info at your own risk! In this post, I’ll cover the steps to convert a kubeadm-deployed stacked etcd cluster into one consuming external etcd nodes, with no downtime.

While this guide is based on kubeadm clusters, the process can be applied to any cluster if you have access to the etcd config and certs/keys. Note: this will split the etcd service away from the kubeadm upgrade process, so you will need to manage etcd upgrades and version compatibility manually. You will also need to update your kubeadm-config ConfigMap to tell kubeadm that etcd is external. Continue reading
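
For reference, the external-etcd stanza in the kubeadm-config ConfigMap ends up looking something like the following; the endpoints and file paths are placeholders for your environment:

```yaml
# ClusterConfiguration fragment stored in the kube-system/kubeadm-config ConfigMap.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```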

Install etcd Cluster with TLS

This topic has been written up multiple times, so it’s not exactly cutting-edge info here. But I’ve found so many of the tutorials to be dated and/or lacking specific detail for key steps, so I decided to post this no-nonsense, just-works guide. This post will cover installing an etcd cluster secured with TLS. In my previous post, I covered some basics on creating self-signed certs with openssl. The one additional openssl detail in this post will be the openssl config file used to configure generated CSRs. Continue reading
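
The post walks through generating the certs with openssl; as a rough sketch of the end state, one node’s etcd config file could look like the following, with hostnames, IPs, and file paths as placeholders:

```yaml
# /etc/etcd/etcd.yaml -- one node of a three-node cluster (placeholder names/IPs/paths).
name: etcd-1
data-dir: /var/lib/etcd
listen-peer-urls: https://10.0.0.11:2380
listen-client-urls: https://10.0.0.11:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://10.0.0.11:2380
advertise-client-urls: https://10.0.0.11:2379
initial-cluster: etcd-1=https://10.0.0.11:2380,etcd-2=https://10.0.0.12:2380,etcd-3=https://10.0.0.13:2380
initial-cluster-state: new
initial-cluster-token: etcd-cluster-1
# TLS for client connections (kube-apiserver, etcdctl)
client-transport-security:
  cert-file: /etc/etcd/pki/server.crt
  key-file: /etc/etcd/pki/server.key
  trusted-ca-file: /etc/etcd/pki/ca.crt
  client-cert-auth: true
# TLS for peer (member-to-member) connections
peer-transport-security:
  cert-file: /etc/etcd/pki/peer.crt
  key-file: /etc/etcd/pki/peer.key
  trusted-ca-file: /etc/etcd/pki/ca.crt
  client-cert-auth: true
```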