OIDC – DevOps and SRE Level – Part 3

Putting Our OIDC Knowledge to Use

In this final OIDC post, I’ll cover setting up an Identity Provider/IdP with OAuth + OIDC + Directory Service, configuring an OAuth/OIDC application definition, configuring kube-apiserver to trust the OAuth/OIDC service, and configuring kubectl as an OAuth Client. We’ll lean heavily on the concepts from the first two posts, so if you haven’t already, go over the first and second posts before diving into this one.

One of the easiest ways to get an IdP up and running for a lab is to use an Okta developer account. For free (nice price), Okta gives you a developer account where you can host a minimal number of OIDC application definitions, Directory Services, Auth Servers, etc. It’s a great asset that I’ll use for this writeup.

While I use Okta here, the concepts and steps translate to any other IdP combination you might choose.

The first thing to understand is that kube-apiserver is not an OAuth Client. We don’t configure it to receive JWTs from the Auth Server. Instead, we configure kube-apiserver to trust ID Tokens minted and signed by the Auth Server/IdP. A service configured to trust the Identity Claims from an IdP is referred to as the Relying Party/RP.
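As a sketch, establishing that trust amounts to passing OIDC flags to kube-apiserver. The issuer URL and client ID below are placeholders (assuming an Okta org with the default authorization server), so substitute your own values:

```shell
# Hypothetical issuer/client values -- replace with your IdP's details.
kube-apiserver \
  --oidc-issuer-url=https://dev-12345.okta.com/oauth2/default \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups \
  --oidc-username-prefix="oidc:" \
  --oidc-groups-prefix="oidc:"
```

The prefix flags are optional but useful: they keep OIDC-sourced users/groups from colliding with names already used in your RBAC bindings.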

The Client in this case will be kubectl. We use kubectl to send requests to kube-apiserver. Rather than presenting a certificate from our kubeconfig file, we’ll append an OIDC JWT that kube-apiserver can validate/verify, and that includes our user ID as well as group memberships (requested via Scopes).
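In the kubeconfig, this swap from certificate to token typically looks like a user entry that delegates token acquisition to an exec credential plugin. A minimal sketch (issuer and client ID are placeholders, assuming the kubelogin plugin discussed below):

```yaml
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://dev-12345.okta.com/oauth2/default
          - --oidc-client-id=kubernetes
          - --oidc-extra-scope=email
          - --oidc-extra-scope=groups
```

Each kubectl invocation runs the exec plugin, which returns a cached ID Token (or launches a browser login when the cache is empty/expired) that kubectl then sends in the Authorization header.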

There are a handful of solutions for getting the OAuth flow to work in this scenario. I’ll use the kubelogin plugin for kubectl. From the previous posts, we should understand that this kubectl scenario requires a Public Client grant flow. And as kubectl is a localhost process, we’ll redirect to localhost; in the kubelogin plugin’s case, this defaults to port 8000. And since we’re using a Public Client app running with a localhost redirect, we’ll use PKCE to reduce the risk of a malicious app intercepting our Access/ID Tokens.
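kubelogin handles PKCE for us, but it’s worth seeing how little there is to it. A minimal sketch of the verifier/challenge derivation from RFC 7636 (the Client keeps the verifier secret and sends only the challenge on the initial authorize request):

```shell
# Generate a high-entropy code verifier (43-128 chars from the unreserved set).
code_verifier=$(openssl rand -base64 48 | tr -d '=+/\n' | cut -c1-43)

# Challenge = BASE64URL( SHA-256( verifier ) ), unpadded, per RFC 7636.
code_challenge=$(printf '%s' "$code_verifier" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A | tr '+/' '-_' | tr -d '=')

echo "verifier:  $code_verifier"
echo "challenge: $code_challenge"
```

When the Client later exchanges the authorization code for tokens, it presents the original verifier; the Auth Server re-hashes it and compares against the challenge, so an app that only intercepted the redirect can’t redeem the code.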

If you combine this with my posts on ArgoCD, Crossplane, and vCluster (plus one additional post after this to automate the ingress path), you should be able to work out how to set up a platform that lets you deploy clusters preconfigured with OIDC: clusters delivered on demand, configured with any service desired, and secured without having to distribute certificates to users. You could throw cluster-api into the mix and end up with a complete platform covering on-prem through multi-cloud.

This is going to be heavy on console and command documentation, so I’ll transition to a GitHub repo with markdown.

Continue reading here…