Blast from the Past – Installing Kubernetes with Kubeadm – 2023

TL;DR: Steps to install a CRI-O and Kubeadm provisioned cluster on Ubuntu 22.04.

Hard for me to believe, but it has been nearly five years since my first (and only) post on Kubeadm. That was circa Kubernetes version 1.9, and I was more or less posting what I had learned about Kubeadm as I went through it.

My home lab vSphere license recently came up for its annual renewal. This event always causes me to reconsider how I use my server and whether handing over more cash is the best option. This year, I’ve decided to let go of the automation niceties and see how I do without them. You may have read some of my past posts where I tried Proxmox/KVM for a bit. I never got the network and storage performing well enough, but I may give it another try at some point.

For now, I’ve gone with the free ESXi licensing and cobbled-together OVFs. No more Cluster API CAPV (although that setup was a little more work than I cared for in my home lab anyway). On the upside, I get back all of the compute that vCenter was consuming. Which brings us to the point of this post: I’m back to using OVF templates with Kubeadm ready to init or join. And with the advent of the CRI, I decided to go with CRI-O for my container runtime.

CRI-O documents the install for all Debian-based Linux flavors as one set, and as documented it doesn’t really work on Ubuntu. So I thought I’d take this opportunity to share what I learned for anyone else who would like to follow the same pattern in their lab. As before, the workflow is to create a Linux VM OVF (Ubuntu in this case) that has CRI-O installed and configured for K8s, and Kubeadm ready to go.
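Strictly as an illustration of the pattern (the repo URLs, key locations, and version pin below reflect the upstream CRI-O packaging for Ubuntu 22.04 circa 2023, not my tested steps — treat the linked directions below as the source of truth), the gist looks something like this:

```bash
# Assumed OBS repos for CRI-O on Ubuntu 22.04 (verify against the linked directions)
export OS=xUbuntu_22.04
export VERSION=1.26   # pick the CRI-O minor that matches your Kubernetes minor

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" \
  | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key" \
  | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/libcontainers.gpg
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key" \
  | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg

sudo apt-get update
sudo apt-get install -y cri-o cri-o-runc

sudo systemctl daemon-reload
sudo systemctl enable --now crio
```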

I should probably just add what I’m about to say to the banner of my site… Because my WordPress editor consistently destroys any and all console or YAML formatting, I will link to my GitHub repo for the full directions. I hope they are useful for at least one person out there.

Link to Directions

Crossplane Troubleshooting

This post is a published draft. I will be adding to it as I can. If you have items you think should be added, I’ve enabled the comments section for it. Happy to add your troubleshooting scenarios.

This post will cover basic troubleshooting of Crossplane. I have other posts that describe the components of Crossplane, so I won’t rehash all of that here. This is certainly not all-encompassing; it is simply a list of common issues and how to resolve them.

Additional tips can be found here: link to Crossplane.io troubleshooting tips.

Everything that happens before the Managed Resource is created is controlled by the Crossplane core. If you are experiencing issues with Claims, XRs, or Compositions being created, then troubleshoot at the Crossplane core.

Issues that involve MRs not being created are almost always due to an issue with the Composition. MRs not provisioning to the remote API are almost always going to be Provider related, or due to the Provider not getting the MR data in the format it expects. In these cases, focus on the Composition and Provider.

The majority of Crossplane issues can be investigated through K8s events. kubectl describe and kubectl get events will satisfy 90% of the troubleshooting needs. For cluster-scoped resources, Crossplane streams events to the default namespace. In cases where you absolutely can’t get the hints you need from events, you can hope for something from the logs.

For Provider log review, you can enable more verbosity for providers with a ControllerConfig (link to docs). Crossplane typically sends more to events than to logs, but this may help in some niche situations. You can also kubectl exec into the provider and core Crossplane containers to evaluate what’s happening at a more granular level.
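As a minimal sketch of what that looks like (names here are placeholders, and --debug is the flag most providers accept — check your provider’s docs), a ControllerConfig referenced from the Provider might be:

```yaml
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: debug-config            # placeholder name
spec:
  args:
    - --debug                   # verbose provider logging
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws            # placeholder provider
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.39.0   # assumption: use your provider/version
  controllerConfigRef:
    name: debug-config
```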

Before getting into the basics: if you see the following error with a ‘jet’-based provider, ignore it: “the object has been modified; please apply your changes to the latest version and try again”.

Issue 1: A Crossplane package won’t build, or it builds but won’t load and become healthy, and little to no error output is emitted.

Resolution: kubectl describe lock and look for errors. Then kubectl create each XRD and Composition manually until you find the one with the error.
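For example, something along these lines (the file paths and names are hypothetical, just to show the flow):

```bash
kubectl describe lock                          # look for dependency/parse errors on the package lock
kubectl get providers,configurations           # check the INSTALLED / HEALTHY columns
# Then apply the package contents by hand until one of them errors:
kubectl create -f apis/definition.yaml         # hypothetical XRD from your package
kubectl create -f apis/composition.yaml        # hypothetical Composition from your package
```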

Issue 2: Creating a Claim or XR does not result in the expected provisioning

Resolution: Begin by checking the XRD (kubectl get xrd) and verify that it is established (and offered, if you’re using a Claim). Then kubectl describe the Claim, and then the XR, looking for errors in the events that might lead to a resolution. There are often errors that are meaningless, like “can’t update, try again”; you will have to use Crossplane enough to figure out which you can ignore and which are more meaningful (until those errors are suppressed). Next, kubectl describe your Composition and look for meaningful errors there. Finally, kubectl describe your Managed Resources to look for errors. If all of those are good, troubleshoot the Provider and ProviderConfig (see Issue 3).
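The sequence looks roughly like this (resource names are placeholders; claim, composite, and managed are Crossplane’s kubectl categories):

```bash
kubectl get xrd                                     # ESTABLISHED and OFFERED should be True
kubectl describe claim my-claim -n my-namespace     # hypothetical Claim name/namespace
kubectl describe composite my-claim-x7kjq           # XR name is in the Claim's spec.resourceRef
kubectl describe composition my-composition         # hypothetical Composition name
kubectl get managed                                 # all Managed Resources, with READY / SYNCED columns
kubectl describe <managed resource kind> <name>     # drill into any that aren't ready
```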

Issue 3: Provider not provisioning Managed Resources

Resolution: kubectl get providers to check that your provider is installed and healthy. kubectl get providerconfigs.<your provider name> to check that your ProviderConfig is present. kubectl get po -n <your crossplane system namespace> to ensure the provider pod is running. kubectl describe <provider instance> and look for errors. kubectl describe providerconfigs.<your provider> and look for errors. Ensure your ProviderConfig credentials are configured correctly.
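For example, with classic provider-aws (substitute your provider’s name, API group, namespace, and secret — these are placeholders):

```bash
kubectl get providers                                        # INSTALLED and HEALTHY should be True
kubectl get providerconfigs.aws.crossplane.io                # example API group for classic provider-aws
kubectl get po -n crossplane-system                          # assumes the default crossplane-system namespace
kubectl describe provider.pkg.crossplane.io provider-aws     # placeholder provider name
kubectl describe providerconfig.aws.crossplane.io default    # placeholder ProviderConfig name
kubectl get secret aws-creds -n crossplane-system -o yaml    # hypothetical credentials Secret referenced by the ProviderConfig
```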

Issue 4: The Composition Resource Template isn’t provisioning the MR

Resolution: Check to make sure your resource template has a name field if all of the others do. kubectl describe the <managed resource> and evaluate the errors. It’s possible that you, or the API docs for the resource, are incorrect. Check the API docs again, and if that doesn’t help, kubectl get <provider crd in question> -o yaml and make sure you aren’t using a list where you should be using an object (this will require you to understand OpenAPI v3 schema).
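A couple of commands that help here (the Bucket resource is just a hypothetical example):

```bash
kubectl describe bucket my-bucket                              # hypothetical MR; check Events and status conditions
kubectl get crd buckets.s3.aws.crossplane.io -o yaml | less    # hypothetical provider CRD; inspect spec.versions[].schema.openAPIV3Schema
# In the schema, confirm whether the field is "type: array" (a list) or "type: object" before setting it in your resource template.
```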

Catchall:

Use emitted events to track from the Claim through to the MR, and understand the way Crossplane works along that path. If you’re following the docs to a tee and it’s not working, dig a bit deeper into the CRD definitions and investigate the remote side of your provisioning (e.g., look in the cloud provider console to see what’s happening).

If none of that resolves the issue, post it in the Crossplane Slack and/or open an issue in the appropriate Provider repo.

Crossplane ProviderConfig and Argo CD

From two recent Slack threads, I was reminded that I’ve set up some things between Argo CD and Crossplane that I haven’t posted here.

This setting is required whether you use Argo CD or not, but it came up in a question about why the Argo CD custom health check wasn’t working. When creating a ProviderConfig for provider-kubernetes or provider-helm in a Composition, you must give that resource a readinessChecks entry with type: None (and because WordPress destroys all of my YAML formatting, see an example of it here, at lines 291-292):
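WordPress aside, here’s a minimal sketch of the shape inside a Composition’s resources list (the names and the provider-helm ProviderConfig spec are illustrative; see the linked example for the real thing):

```yaml
resources:
  - name: helm-providerconfig             # placeholder entry name
    base:
      apiVersion: helm.crossplane.io/v1beta1
      kind: ProviderConfig
      metadata:
        name: in-cluster
      spec:
        credentials:
          source: InjectedIdentity
    readinessChecks:
      - type: None                        # ProviderConfigs report no status, so skip readiness checking
```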

The other recent issue that came up was Argo CD complaining that a Crossplane ProviderConfig CRD doesn’t exist before the Provider is fully initialized. The way around that is a simple annotation.

For that annotation, see line #7 of this ProviderConfig:
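For reference, the annotation in question is most likely Argo CD’s SkipDryRunOnMissingResource sync option (assuming that is what the linked example shows). A sketch, using provider-kubernetes as the example:

```yaml
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: in-cluster
  annotations:
    # Tells Argo CD to skip the dry run for this resource until its CRD exists
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  credentials:
    source: InjectedIdentity
```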

Hopefully this single post’s title shows up in searches for both issues.

Crossplane – All the Patches with AWS IRSA Config

TL;DR: Repo = https://github.com/n8sOrganization/cp-aws-irsa

The previous three posts were overviews of the six types of patches available to Crossplane Compositions. In this post, I’ll walk through a configuration that uses all of them. I’ll also touch on transforms when they come up. While the configuration is for EKS IRSA, explaining IRSA is not the intent of this post. In a nutshell, IRSA enables a K8s ServiceAccount to attain the privileges of an AWS IAM Role. In the case of this config, we are using IRSA so that the EBS CSI containers have the appropriate privileges to provision EBS volumes.

The config consists of four XRs: XCluster, XK8s, XNetwork, and XChart. XCluster exposes a claim that accepts basic input of spec.id, spec.cloud, spec.parameters.nodes.size, and spec.parameters.nodes.count. XCluster selects a Composition that has XK8s and XNetwork nested. XNetwork provisions the VPC and all of the networking required for our EKS cluster. XK8s provisions the EKS cluster and IAM Roles, creates a ProviderConfig for provider-helm, and creates an instance of XChart to deploy the AWS EBS CSI Helm chart. The XChart also configures IAM resources for IRSA. Continue reading
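To make the claim shape concrete, it might look roughly like this (the apiVersion, kind, and values are illustrative guesses, not copied from the repo — see the repo’s XRD for the real API):

```yaml
apiVersion: example.org/v1alpha1     # assumed group/version
kind: Cluster                        # assumed claim kind offered by XCluster
metadata:
  name: dev-cluster
  namespace: default
spec:
  id: dev-cluster
  cloud: aws
  parameters:
    nodes:
      size: t3.medium
      count: 3
```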

Crossplane Composition Patches – Combine Patches

In the previous two posts, I covered FromCompositeFieldPath and ToCompositeFieldPath. In this post, I’ll cover CombineFromComposite and CombineToComposite.

The Combine patches enable us to combine multiple values into a single patched string. For this example, I’ll focus on CombineFromComposite; you can infer the CombineToComposite patch from the previous ToCompositeFieldPath post. Continue reading
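As a quick taste of what’s behind the link, a CombineFromComposite patch on a Composition resource entry looks roughly like this (the field paths are illustrative):

```yaml
patches:
  - type: CombineFromComposite
    combine:
      variables:
        - fromFieldPath: spec.id                    # value from the XR
        - fromFieldPath: spec.parameters.region     # another XR value (illustrative path)
      strategy: string
      string:
        fmt: "cluster-%s-%s"                        # both values formatted into one string
    toFieldPath: spec.forProvider.description       # illustrative target field on the composed MR
```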

Crossplane Composition Patches – ToCompositeFieldPath

By the end of this post, combined with my previous Crossplane intro posts, you should be capable of implementing a fairly robust Crossplane configuration. There is a lot more to be familiar with, but you can get a long way just knowing what I’ve covered so far.

Continuing with the series on Crossplane Composition patches, I’m covering ToCompositeFieldPath here. As the name implies, this is the inverse of FromCompositeFieldPath. It’s used when we need a value created in one resource template (Managed Resource) supplied to another resource template or nested XR. We can even use combinations of ToCompositeFieldPath and FromCompositeFieldPath to pass values from one nested XR to another.

Continue reading
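For a quick preview of what’s behind the link, the shape of a ToCompositeFieldPath patch on a resource entry is (paths illustrative, and the target field must exist in the XRD’s status schema):

```yaml
patches:
  - type: ToCompositeFieldPath
    fromFieldPath: status.atProvider.vpcId     # value produced on this Managed Resource
    toFieldPath: status.vpcId                  # written up to the XR, where other resources can read it
```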

Crossplane Composition Patches – FromCompositeFieldPath

I’ve covered Composite Resources (XRs) and Compositions in previous posts. I’ll provide a brief summary here, but you can review them in more detail if needed. In this next series of posts, I’ll cover how we use Composition patches to pass values between/to related resources (all patch methods are documented here).

It is important to keep the concepts of Composite Resource and Composition clearly separated in your understanding. A Composite Resource (XR) defines a custom API, exposed via kube-apiserver. It selects a Composition that composes it into a collection of Managed Resources. The Composition is simply a ‘recipe’ for composing the Composite Resource(s). We patch to, from, and within Composite Resources via a Composition. We do not patch Compositions.

Continue reading
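As a preview of the simplest case, a FromCompositeFieldPath patch copies a value from the XR down into a composed resource template (paths illustrative):

```yaml
patches:
  - type: FromCompositeFieldPath
    fromFieldPath: spec.parameters.region      # field on the Composite Resource (XR)
    toFieldPath: spec.forProvider.region       # field on the composed Managed Resource
```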

Update – Configuring Argo CD with Crossplane

As an update to my previous config of Crossplane with Argo CD, we can now configure Argo CD to use annotations for sync tracking. This means we no longer have to configure resource ignore/deny rules per Composite Resource, as we did with label-based tracking.

That said, I’m still undecided on the XRC or the XR as the starting point for Crossplane in a GitOps pattern. Remove the namespace boundary (which is a daydream at best for K8s multi-tenancy) by using vcluster, and you don’t need the XRC. I’d personally throw the XRC out the window and just use the XR with vcluster, which makes the rest of this sort of irrelevant.

To configure Argo CD for annotation resource tracking, edit the argocd-cm ConfigMap in the argocd Namespace (Argo CD version 2.4.8 or greater is recommended). Add application.resourceTrackingMethod: annotation to the data section as below: Continue reading
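The relevant bit of the ConfigMap (trimmed to just the key in question) looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  application.resourceTrackingMethod: annotation
```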

Crossplane and Provider-Kubernetes – Propagate Data/Secrets

External Secret Store is currently a beta feature of Crossplane. This feature enables us to publish connection details (Crossplane currently outputs these to a Kubernetes Secret in the Crossplane cluster) to an external vault. This then enables us to reference those secrets/connection details from a context outside of the Crossplane cluster. See this post for more on External Secret Store. But there is another interesting option when we simply need secrets from the Crossplane cluster to be available to Pods in a remote cluster. Continue reading
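The option covered after the break builds on provider-kubernetes. Very roughly (and assuming the post uses provider-kubernetes’ references/patchesFrom capability), an Object resource can copy data from a Secret in the Crossplane cluster into a manifest it manages in the remote cluster. A sketch, with all names hypothetical:

```yaml
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: propagate-db-creds                 # hypothetical
spec:
  references:
    - patchesFrom:
        apiVersion: v1
        kind: Secret
        name: db-conn                      # Secret in the Crossplane cluster (hypothetical)
        namespace: crossplane-system
        fieldPath: data.password
      toFieldPath: data.password           # patched into the managed manifest below
  forProvider:
    manifest:
      apiVersion: v1
      kind: Secret
      metadata:
        name: db-conn
        namespace: default                 # Namespace in the remote cluster
  providerConfigRef:
    name: remote-cluster                   # ProviderConfig pointing at the remote cluster's kubeconfig
```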

Upbound Cloud and Argo CD

Update: Upbound Cloud has been retired, so the basic premise of this post is no longer relevant. I believe it still has merit for similar use cases, so I’ll leave it active.

Disclaimer: Commercial product post. Ok, disclaimer out of the way, this topic comes up a lot in my day-to-day at work and I figured it would be helpful to provide some detail here.

Upbound Cloud is Upbound’s commercial SaaS offering backed by Crossplane. Argo CD is a popular GitOps platform that benefits greatly from Crossplane. So why don’t they work nicely together ‘out of the box’? I have a series of previous posts that cover Crossplane and Argo CD in general. In this post, I’ll cover Upbound Cloud with Argo CD. I’ll try to keep it short and to the point.

To follow along with this post, you’ll need an Upbound Cloud control plane and an instance of Argo CD running in a separate cluster. You can follow the directions here for an Upbound Cloud trial. In that process, you’ll find directions to download a kubeconfig for your UBC instance, which will be used in the subsequent steps. To install Argo CD, refer to the Argo CD project page here. Continue reading
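Once you have the UBC kubeconfig, one common way to register it with Argo CD (this may differ from the exact steps behind the link; the path and host below are made up) is:

```bash
# Point kubectl/argocd at the downloaded UBC kubeconfig, then register its context
export KUBECONFIG=$HOME/Downloads/ubc-kubeconfig.yaml        # hypothetical download path
argocd login argocd.example.com                              # authenticate the argocd CLI against your Argo CD instance
argocd cluster add "$(kubectl config current-context)"       # registers the UBC control plane as a deploy target
```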