Crossplane – Beyond the Basics – Nested XRs and Composition Selectors

Now that we’ve covered the basics, let’s go a bit deeper (if you missed my previous 100-level series, start here). In this post I’ll move into some 200-level concepts, beginning with nested Composite Resources and Composition Selectors.

A nested XR is one created by a higher-order XR/Composition pair. So far, we’ve looked at a virtual network created by directly instantiating an XR/Composition. While network infrastructure is required in nearly all cases, it isn’t necessarily something we want the creator of ‘things’ to be directly engaged with. Continue reading
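
To ground the idea of a Composition Selector before going further, here’s a minimal sketch; the XNetwork kind, the example.org group, and the label values are hypothetical placeholders rather than anything from the post:

# Hypothetical XR that selects its Composition by label rather than by name.
apiVersion: example.org/v1alpha1
kind: XNetwork
metadata:
  name: dev-network
spec:
  compositionSelector:
    matchLabels:
      provider: azure        # Crossplane picks a Composition carrying these labels
      environment: dev
---
# A Composition labeled so the selector above can find it.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xnetworks-azure-dev
  labels:
    provider: azure
    environment: dev
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XNetwork
  resources: []              # composed Managed Resources (or nested XRs) go here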

Crossplane – Bringing the Basics Together

My previous three posts introduced Crossplane and two of its key components (XRs and Compositions). This post will be primarily video-based, demonstrating those concepts. But first I’ll take a moment to cover XR Claims, which I left out of the XR introduction to avoid potential confusion. If you missed the post on Composite Resources, you can find it here.

As we now know, XRs call on Compositions to get the Crossplane machine doing something new for us. We looked at how an XRD is converted to a CRD in our cluster. To recap that flow: we define a CompositeResourceDefinition and create it, Crossplane creates a K8s CRD from the XRD with the schema we defined, and we then create an instance of that CRD, which becomes our XR. Continue reading
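
As a quick visual of that flow, here’s a minimal XRD sketch, including the optional claimNames that enable XR Claims; the group, kinds, and schema fields are hypothetical, chosen only to show the shape of the definition:

# A minimal CompositeResourceDefinition (XRD). Creating this causes Crossplane
# to generate a matching CRD, after which instances of XNetwork become our XRs.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xnetworks.example.org     # must be <plural>.<group>
spec:
  group: example.org
  names:
    kind: XNetwork
    plural: xnetworks
  claimNames:                     # optional: enables namespaced XR Claims
    kind: NetworkClaim
    plural: networkclaims
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                addressSpace:
                  type: string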

The Universal Control Plane – Crossplane 103 – Composition

In the previous posts of this series, I introduced core Crossplane and went into more detail on Composite Resources (XRs). In this post, I’ll dig deeper into Composition. This is where we get to see how Crossplane does something with the data we feed it.

In review, Managed Resources (MRs) are high-fidelity Crossplane representations of external API resources (i.e., their values are essentially a 1:1 match for the external resources they represent). They are the most discrete entity in the Crossplane machine. Similar to a K8s Pod, we could create MRs directly. But as with a Pod, we’re generally better served by an abstraction that does a bit more than just create something and forget about it. So let’s spend a few seconds looking at why the XR -> Composition -> MR pattern is more desirable. Continue reading
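
As a rough sketch of the Composition side of that pattern, here’s a Composition that composes a single MR and patches one value through from the XR. The XNetwork type, the VirtualNetwork MR, and its apiVersion are illustrative placeholders, not any specific provider’s real schema:

# Sketch: an XR of kind XNetwork is satisfied by one composed Managed Resource.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xnetworks-example
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XNetwork
  resources:
    - name: vnet
      base:
        # Placeholder MR; substitute the kind/apiVersion of your provider's resource.
        apiVersion: network.example.crossplane.io/v1alpha3
        kind: VirtualNetwork
        spec:
          forProvider:
            location: eastus
      patches:
        # Copy a value from the XR's spec onto the composed MR.
        - fromFieldPath: spec.addressSpace
          toFieldPath: spec.forProvider.addressSpace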

The Universal Control Plane – Crossplane 102 – XRDs

In the previous post, I introduced the basic components of the Crossplane machine. In this post, I’ll dig a little deeper into Compositions and Composite Resource Definitions (XRDs).

We know Kubernetes allows us to extend the functionality of a K8s control plane with Custom Controllers and Custom Resources. The Custom Resource Definition (CRD) API provides us with the interface to define our custom resources and register them with the K8s API server. Continue reading
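
For comparison with what Crossplane will later generate for us, this is roughly what a hand-written CRD looks like; the Widget kind and example.org group are hypothetical:

# A plain Kubernetes CRD: registers a new Widget resource type with the API server.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.org       # must be <plural>.<group>
spec:
  group: example.org
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer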

The Universal Control Plane – Crossplane 101

In this next series of posts, I’m going to cover Crossplane in increasing detail.

The most basic, simplified description I can think of for Crossplane is: a platform that leverages the Kubernetes API/controller model to deliver continuously reconciled infrastructure as code. It is really much more than that, but let’s start there for the sake of simplicity.

Some K8s principles up front. By now, most in our industry know what Kubernetes is, or at minimum that it orchestrates containers. But K8s offers much more than just container orchestration these days… Continue reading

Argo Rollouts – Shift Up, a Better Deployment/ReplicaSet Manager

The K8s Deployment resource lets us manage ReplicaSet resources at a higher level. The Deployment controller tracks image update history (versioning) and manages the sequence of Pod replacement during updates. This enables us to roll back to previous versions and to control how many ‘new’ members of a ReplicaSet are deployed at a time. However, the Deployment controller hasn’t proven to deliver safe, intelligent updates in the real world. Continue reading
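
For context, this is roughly the level of control a stock Deployment gives us: a revision history for rollback and a rolling-update strategy that caps how many new Pods appear at once (the names and numbers below are illustrative):

# A stock Deployment: revision history enables rollback, and the rolling-update
# strategy caps how many 'new' Pods are introduced at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 4
  revisionHistoryLimit: 10        # how many old ReplicaSets to retain for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # at most one extra Pod above the desired count
      maxUnavailable: 0           # never drop below the desired count
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example/demo-app:1.2.3

Argo Rollouts keeps this familiar shape but swaps the Deployment for a Rollout resource that adds progressive delivery strategies such as canary and blue-green.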

Armory + OPA = CD Governed Admission Policy

In this demonstration, I’ve configured an OPA policy on the K8s cluster that allows only the Spinnaker service account to create a Deployment via the kube-apiserver. Additionally, I’ve defined a policy within the pipeline via the Armory Policy Engine, requiring that all images be specified with a tag other than ‘latest’. This demonstrates centralization and consolidation of admission policy for K8s.

Safe Continuous Deployment/Delivery is a key focus for Armory. Establishing policy-driven governance as a first-class citizen in CD pipelines is one way Armory advances that goal. Through policy, elements of a CD pipeline can be controlled at a much finer grain than RBAC alone allows. Continue reading

Deploying K8s Clusters with Spinnaker and Cluster API – The Setup

My last post was a bit on the dry/theory/boring side, but it was necessary to tee up where I’m going now. This post, and a good number of those that follow, will focus on implementing a CI/CD pipeline that spans on-prem and public cloud.

I’ll use Spinnaker for CD, Travis CI for CI, Docker Hub as the image registry, and GitHub for the Git repo. I’ll also make use of Cluster API, as posted previously. Full disclosure: I have no bias regarding the CI, image registry, or Git repo choices; as with the CD component, there are many options. I currently work at Armory.io, so I do have a bias toward Spinnaker.

Let’s get to it…

Continue reading

Multi-Cloud Continuous Delivery – Infrastructure Economics

In the world of multi/hybrid-cloud, where are CD patterns best aligned and applied?

Some housekeeping up front. Continuous integration (the CI of CI/CD) is a discipline all its own. While CI and CD are commonly conflated, they are two very different stages of the software development lifecycle. As a Detroiter, I see CI as the automobile assembly, while CD is the iterative testing of the car’s components as they’re integrated, all the way through to the proving grounds; the car goes back to engineering for tweaks until it’s cleared for production.

CI can be done anywhere: take a specification and build it, tell me if there are errors in the code, and so on. CD takes the build that passed CI, measures it for function and performance, and determines whether to deploy or roll back. (Overly simplified, but that’s the gist of each.)

Continue reading

Cluster API vSphere Provider – Customize Image

TL;DR: Steps 1-9 are at the bottom. If you read my last post, you’ll recall I was in the middle of troubleshooting provisioning issues related to disk I/O latency. In the end, I returned my lab server to its prior form: a vSphere ESXi host with vCenter self-managing it.

With the disk I/O issues resolved, I got back to my original task of configuring a Cluster API control plane to provision K8s clusters. The setup is fairly simple (these steps vary by infrastructure provider; here I’m covering the vSphere provider). Link to the docs: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md

  1. Download the clusterctl CLI binary
  2. Download an OVA template and load it into vCenter (you can use the image URLs in the docs to deploy directly via the vSphere web client; if you choose to download the images locally first, change the URLs from http to https)
  3. Take a snapshot and convert to template
  4. Set the values needed for control plane initialization (e.g., vCenter address, admin user, password; see the sketch after this list)
  5. Create a single-node KinD cluster and use clusterctl to initialize it with the required control plane pods
  6. Deploy a managed cluster to vSphere, then use clusterctl to initialize it as a control plane cluster (i.e., bootstrap and pivot)
  7. Delete the KinD cluster
  8. Use the pivoted control plane to provision managed clusters from there on
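
As a rough sketch of step 4: the vSphere provider reads its settings from a clusterctl configuration file (commonly ~/.cluster-api/clusterctl.yaml). The variable names below are from memory of the provider’s getting-started docs and may differ between versions, and the values are placeholders for my lab:

# ~/.cluster-api/clusterctl.yaml (sketch) -- values consumed when clusterctl
# initializes the management cluster and generates cluster manifests.
VSPHERE_SERVER: "vcenter.lab.local"          # vCenter address
VSPHERE_USERNAME: "administrator@vsphere.local"
VSPHERE_PASSWORD: "changeme"                 # consider a secrets manager instead
VSPHERE_DATACENTER: "Datacenter"
VSPHERE_DATASTORE: "datastore1"
VSPHERE_NETWORK: "VM Network"
VSPHERE_RESOURCE_POOL: "*/Resources"
VSPHERE_FOLDER: "vm"
VSPHERE_TEMPLATE: "ubuntu-1804-kube-v1.17.3" # the template created in step 3
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAA..."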

Continue reading