This post covers using vCluster with OIDC added. The outcome is a namespaced control plane (vCluster), preconfigured for OIDC with a specified group bound to the cluster-admin role. This is a stepping stone toward fully automated vCluster deployment. With this setup, we can use email, Slack, or whatever we’d like to distribute the kubeconfig files.
There are additional automation requirements to satisfy before we reach a completely turnkey control-plane provisioning workflow. This is just the first step in that direction.
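For reference, the group-to-role mapping boils down to a single ClusterRoleBinding applied inside the virtual cluster. Here’s a minimal sketch; the group name platform-admins is a placeholder and has to match whatever the IdP puts in the groups claim:

```yaml
# Hypothetical binding: grants cluster-admin inside the vCluster to members
# of the "platform-admins" group asserted by the OIDC provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-platform-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: platform-admins   # must match a value carried in the configured groups claim
```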
I’m using the default vCluster Helm chart with some added values. The vCluster Helm templates and values files would need additional work for complete automation, primarily because of the way vCluster constructs the kubeconfig files. Beyond the chart, vCluster uses some ‘helper’ code running in a pod to construct them, which is likely another area we’d have to dive into. Short of that, we can modify the kubeconfig files manually (which is what I’m doing in this post).
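To give a sense of the “added values”, here is roughly the shape of what I’m layering onto the chart. Treat it as a sketch: the vcluster.extraArgs key assumes the k3s-based chart, which passes --kube-apiserver-arg through to the embedded API server, and the issuer URL and client ID are placeholders.

```yaml
# Sketch of added Helm values (key names assume the k3s-based vCluster chart;
# the issuer URL and client ID are placeholders).
vcluster:
  extraArgs:
    - --kube-apiserver-arg=oidc-issuer-url=https://idp.example.com
    - --kube-apiserver-arg=oidc-client-id=vcluster
    - --kube-apiserver-arg=oidc-username-claim=email
    - --kube-apiserver-arg=oidc-groups-claim=groups
```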
One additional part of the automation we’d want to expand on is an Ingress definition for each cluster (which would also need to be wrapped back into the kubeconfig file). I’m shortcutting here and using a LoadBalancer service, sketched below.
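The LB shortcut is just a plain Service of type LoadBalancer pointed at the vCluster pod. A sketch, with the selector labels and target port as assumptions you’d need to match to what the chart actually renders:

```yaml
# Hypothetical LoadBalancer Service exposing the vCluster API endpoint.
# Selector labels and targetPort are assumptions; match them to the labels
# and port the chart puts on the vCluster pod.
apiVersion: v1
kind: Service
metadata:
  name: my-vcluster-lb
  namespace: my-vcluster
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: vcluster
    release: my-vcluster
```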
As always, I’ll transition to a GitHub repo for the rest of this…
In this final OIDC post, I’ll cover setting up an Identity Provider (IdP) with OAuth + OIDC + a directory service, configuring an OAuth/OIDC application definition, configuring kube-apiserver to trust the OAuth/OIDC service, and configuring kubectl as an OAuth client. We’ll rely heavily on the groundwork from the first two posts, so if you haven’t already, read the first and second posts before diving into this one.
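As a preview of the kube-apiserver side, the “trust the IdP” piece comes down to a handful of OIDC flags. Here’s a sketch as a kubeadm ClusterConfiguration fragment, with the issuer URL, client ID, and claim names standing in for whatever your OAuth/OIDC application was registered with:

```yaml
# kubeadm ClusterConfiguration fragment: the flags that make kube-apiserver
# accept ID tokens issued by the IdP. Values are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: https://idp.example.com
    oidc-client-id: kubernetes
    oidc-username-claim: email
    oidc-groups-claim: groups
```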
In the first post, I spent time describing what a JWT is and how it is signed with an x509 certificate. That will be useful in understanding this second piece. In this second part of OIDC for SREs and DevOps, I’ll cover the primary entities involved in identity authentication and their relationships. As I mentioned in the first post, OIDC is an extension of OAuth: without OAuth, there is no OIDC. OIDC is actually a very small addition to the OAuth specification, so much of the explanation here is really an explanation of OAuth. I will get to OIDC by the end.
I know what I’d be thinking if I were you: “What, another OIDC explanation, why?” There is so much written on OIDC, and much of it makes OIDC seem far more complicated than it is. So I’m writing a series that explains it in a way I think will make more sense to Ops/SRE folks. In my previous post, I started out planning to lay out an entire recipe for an automated Crossplane cluster-tenant provisioning process, including automated OIDC configuration so that only the proper identities could access the clusters. Then I pumped the brakes.
Choosing whether to use the client-side load balancing built into the kube-apiserver’s etcd client or to use an external LB/proxy is not as simple as one might expect. I’ve discussed this with a lot of people, and it seems there are no well-defended opinions. I know there is one out there; I just haven’t found it yet.
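To make the trade-off concrete, here’s the shape of the two options as they would appear in the kube-apiserver static pod manifest (addresses are placeholders):

```yaml
# Sketch of the --etcd-servers flag in /etc/kubernetes/manifests/kube-apiserver.yaml.
# Option A: list every etcd member; the apiserver's built-in etcd client
# balances and fails over across them itself.
- --etcd-servers=https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379
# Option B: point at a single address owned by an external LB/proxy.
- --etcd-servers=https://etcd-vip.example.internal:2379
```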
First and foremost, my blog disclaimer is in full effect, especially on this one. Use this info at your own risk! In this post, I’ll cover the steps to convert a kubeadm-deployed stacked etcd cluster into one consuming external etcd nodes, with no downtime.
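The end state looks like this in the kubeadm configuration; the endpoint addresses are placeholders, and the cert paths shown are the usual kubeadm locations for the apiserver’s etcd client pair:

```yaml
# Target state: kube-apiserver talks to etcd members running outside the
# control-plane nodes. Endpoint addresses are placeholders.
etcd:
  external:
    endpoints:
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```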
This topic has been written up multiple times, so this isn’t exactly cutting-edge info. But I’ve found many of the tutorials to be dated, lacking in specific detail, or built on tools I have no interest in (e.g., cfssl). So I decided to post this no-nonsense, just-works guide. This post will cover installing an etcd cluster secured with TLS. In my previous post, I covered some basics of creating self-signed certs with openssl. The one additional openssl detail in this post is the openssl config file used to shape the generated CSRs.
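For a preview of that detail, the CSR config file for one etcd node ends up looking something like this; the CN, DNS names, and IPs are placeholders for each node’s actual identity:

```ini
# Minimal openssl CSR config sketch for a single etcd node.
# Names and addresses are placeholders.
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[ dn ]
CN = etcd-1

[ v3_req ]
keyUsage         = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = etcd-1.example.internal
IP.1  = 10.0.0.11
IP.2  = 127.0.0.1
```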
I set out this morning to write a post on configuring external etcd for Kubernetes with openssl self-signed certs (you know, the kind you use for your home lab). I got sidetracked by openssl and all of its ever-changing/deprecated options. So this will be a preamble to that original intent.
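To set expectations, the kind of invocation this preamble builds toward looks roughly like the following, using current option spellings (OpenSSL 1.1.1 or newer); the file names, subject, and SAN entries are placeholders:

```sh
# Self-signed cert with SANs in one shot; names and addresses are placeholders.
# -addext requires OpenSSL 1.1.1+; older releases need a config file instead.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout tls.key -out tls.crt \
  -subj "/CN=etcd-1" \
  -addext "subjectAltName=DNS:etcd-1.example.internal,IP:10.0.0.11"
```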
Short post, to hopefully save someone from trashing their R720 home lab server too soon. TL;DR, ESXi 8.0 works on an R720 with a PERC H710 controller.