PSA – This Blog is Evolving

The focus of my blog has largely mirrored that of my professional career. It has covered infrastructure automation/virtualization, containerized refactoring, cloud computing, CI/CD, and DevOps.

As I’ve recently accepted a position with LatticeFlow.AI, the blog will evolve to cover machine learning, artificial intelligence, and MLOps. MLOps is really just a specialized type of DevOps, so it should dovetail nicely with the past focus on DevOps and CI/CD.

I’m really excited to start my next journey in tech and share what I learn with everyone here. Cheers.

vCluster with Automated Ingress Config

Deploy multiple vCluster instances with minimal required input, preconfigured with an Ingress Controller for access and TLS.

In my previous post, I covered deploying vClusters with OIDC preconfigured. Two manual kubeconfig steps remained in that writeup. First, we needed to replace the identity/user certificate and key with our OIDC client config. Second, we needed to update the server URL with the IP address provisioned by our LoadBalancer service.

In this post, I’ll cover automating the second piece by replacing the LoadBalancer with an Ingress Controller config. I have long used Contour for Ingress in K8s, but will be using NGINX in this case (because I don’t know how to configure SSL passthrough with Contour, and we can’t do this without SSL passthrough).

We’ll use a DNS wildcard host record to point to our Ingress Controller. In my case, I created a *.vrelevant.lab record. I won’t be covering the DNS setup here, though.

So, the minimum requirement is a K8s cluster with a LoadBalancer service to expose the Ingress Controller. This combination will vary from one implementation of K8s to another. I’m using MetalLB and NGINX on a K8s cluster installed on VMs. Refer to your K8s platform docs for configuring an Ingress Controller; the overall config steps here will generally apply in any case.
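To give a sense of what the automation will generate per tenant, here is a sketch of an Ingress for a vCluster named team1 (the host, namespace, and service names are placeholders, and SSL passthrough assumes the NGINX controller was started with its --enable-ssl-passthrough flag):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team1-vcluster
  namespace: team1
  annotations:
    # Pass TLS straight through to the vCluster's kube-apiserver,
    # which terminates TLS itself (required for cert/OIDC auth)
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: team1.vrelevant.lab   # resolved by the *.vrelevant.lab wildcard record
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: team1          # the vCluster's ClusterIP service
            port:
              number: 443
```

With passthrough, NGINX routes on the TLS SNI name rather than terminating the connection, which is why the kubeconfig server URL can be a stable DNS name instead of a per-cluster LoadBalancer IP.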

I started to cover this setup a few posts back, in the post on deploying tenanted Crossplane instances. Then I pumped the brakes, because I felt like I was just publishing a recipe without explaining the moving parts. Hopefully the past few posts on OIDC and vCluster have laid out a better understanding.

With my past posts on Argo CD, Crossplane, K8s, and now OIDC/vCluster, I believe you have enough to assemble a fairly robust Crossplane platform for multiple teams.

If you implemented the Loft Labs enterprise vCluster suite, you’d have everything ready out of the box. Loft enables SSO, pre-populated secrets, ingress config, additional package installs (e.g., Argo CD and Crossplane), and more. Basically, you end up with a one-click (or pipeline-generated) Crossplane instance that is truly isolated from other teams/tenants.

Let’s switch over to a GitHub repo for the steps…

vCluster with OIDC

This post covers the use of vCluster with the addition of OIDC. The outcome will be a namespaced control plane (vCluster), preconfigured for OIDC with a specified group added to the cluster-admin role. This is a stepping stone to fully automated vCluster deployment. With this setup, we can use email, Slack, or whatever we’d like to distribute the kubeconfig files. Continue reading
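As a rough sketch of what "preconfigured for OIDC" means here (assuming vCluster's default k3s distro and Helm chart values; the issuer URL, client ID, and claim names are placeholders for your IdP), the OIDC flags get passed through to the virtual cluster's API server at deploy time:

```yaml
# values.yaml sketch for a vCluster Helm install
vcluster:
  extraArgs:
    - --kube-apiserver-arg=oidc-issuer-url=https://idp.vrelevant.lab/realms/lab
    - --kube-apiserver-arg=oidc-client-id=kube-login
    - --kube-apiserver-arg=oidc-username-claim=email
    - --kube-apiserver-arg=oidc-groups-claim=groups
```

Inside the vCluster, a ClusterRoleBinding then maps an IdP group (here the hypothetical platform-admins) to cluster-admin:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: platform-admins   # group claim value issued by the IdP
```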

OIDC – DevOps and SRE Level – Part 3

Putting Our OIDC Knowledge to Use

In this final OIDC post, I’ll cover setting up an Identity Provider (IdP) with OAuth + OIDC + a directory service, configuring an OAuth/OIDC application definition, configuring kube-apiserver to trust the OAuth/OIDC service, and configuring kubectl as an OAuth client. We’ll rely heavily on the understanding built in the first two posts, so if you haven’t already, go over the first and second posts before diving into this one. Continue reading
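For the kube-apiserver piece, the trust configuration boils down to a handful of flags. Here is a sketch via a kubeadm ClusterConfiguration excerpt (the issuer URL, client ID, and claim names are placeholders for whatever your IdP issues):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: https://idp.vrelevant.lab/realms/lab  # must serve the OIDC discovery doc
    oidc-client-id: kube-login
    oidc-username-claim: email
    oidc-groups-claim: groups
```

The API server never talks to the IdP during a request; it only fetches the issuer's public keys and validates the JWT signature and claims locally.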

OIDC – DevOps and SRE Level – Part 2

OIDC and the Circle of Trust

In the first post, I spent time describing what a JWT is and how it is signed with an X.509 certificate. That will be useful in understanding this second piece. In this second part of OIDC at the DevOps and SRE level, I’ll cover the primary entities involved in identity authentication and their relationships. As I mentioned in the first post, OIDC is an extension of OAuth; without OAuth, there is no OIDC. OIDC is actually a very small addition to the OAuth specification, so a lot of the explanation here requires explaining OAuth. I will get to OIDC by the end. Continue reading

OIDC – DevOps and SRE Level – Part 1

I know what I’d be thinking if I were you: “What, another OIDC explanation, why?” There is so much written on OIDC, much of which makes it seem far more complicated than it is. So I’m writing a series that explains it in a way I think will make more sense to Ops/SREs. In my previous post, I started out planning to lay out an entire recipe for setting up an automated Crossplane cluster tenant provisioning process, including automated OIDC configuration so that only the proper identities could access the clusters. Then I pumped the brakes. Continue reading
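As a quick taste of the JWT piece: a token is just three dot-separated base64url segments (header.payload.signature), so you can peek at the claims with standard tools. The token below is fabricated for illustration; its signature is not valid:

```shell
# Decode the middle (payload) segment of a sample JWT
JWT='eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJqYW5lIiwiYWRtaW4iOnRydWV9.c2lnbmF0dXJl'
echo "$JWT" | cut -d. -f2 | base64 -d
# → {"sub":"jane","admin":true}
```

Real-world tokens often need `tr '_-' '/+'` and `=` padding before `base64 -d`, since JWTs use the URL-safe alphabet without padding; this sample happens to decode cleanly as-is.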

K8s – Stacked etcd to External – Zero Downtime

Because sometimes you start off with stacked etcd nodes, and then decide you really wanted external.

First and foremost, my blog disclaimer is in full effect, especially on this one. Use this info at your own risk! In this post, I’ll cover the steps to convert a Kubeadm deployed stacked etcd cluster into one consuming external etcd nodes, with no downtime.

While this guide is based on Kubeadm clusters, the process can be applied to any cluster if you have access to the etcd config and certs/keys. Note: this will split the etcd service away from the Kubeadm upgrade process. You will need to ensure etcd upgrades and version compatibility manually, and you will also need to update your kubeadm-config ConfigMap to tell Kubeadm that etcd is external. Continue reading
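For reference, the kubeadm-config change amounts to swapping the `etcd.local` stanza for an `etcd.external` one. A sketch of the end state (endpoints and cert paths are placeholders for your environment):

```yaml
# kubeadm-config ConfigMap excerpt after the move
etcd:
  external:
    endpoints:
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
      - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```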

Install etcd Cluster with TLS

This topic has been written up multiple times, so this isn’t exactly cutting-edge info. But I’ve found many of the tutorials to be dated and/or lacking specific detail for key steps, so I decided to post this no-nonsense, just-works guide. This post will cover installing an etcd cluster secured with TLS. In my previous post, I covered some basics on creating self-signed certs with openssl. The one additional openssl detail in this post will be the openssl config file used to configure generated CSRs. Continue reading
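The CSR config file is mostly about getting the Subject Alternative Names right, since etcd peers and clients verify each node by hostname and IP. A sketch of such a file (the CN, hostnames, and IPs are placeholders):

```ini
# etcd-node-1-csr.cnf — used with: openssl req -new -config etcd-node-1-csr.cnf ...
[ req ]
distinguished_name = dn
req_extensions     = v3_req
prompt             = no

[ dn ]
CN = etcd-node-1

[ v3_req ]
keyUsage         = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = etcd-node-1.vrelevant.lab
IP.1  = 10.0.0.11
IP.2  = 127.0.0.1
```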

Openssl CLI Self-Signed Certs – 2023

I set out this morning to write a post on configuring external etcd for Kubernetes with openssl self-signed certs (you know, the kind you use for your home lab). I got sidetracked on openssl and all of its ever-changing/deprecated options. So, this will be a preamble to that original intent.

Other than key sizes and algorithms, only a few command options have changed in openssl v3. I won’t dive into key size and algorithms (too much to cover there). In this post, I will cover the why and how of creating self-signed certificates with openssl, along with up to date commands for v3.  I think the first time I grappled with understanding SSL (now TLS) was 1997. Although SSL has now become TLS, not a lot has changed with the underlying basic dynamics of public key encryption. Continue reading