The Universal Control Plane – Crossplane 101

In this next series of posts, I’m going to cover Crossplane in increasing detail.

The most basic description I can think of for Crossplane is: a platform that leverages the Kubernetes API/controller model to deliver continuously reconciled infrastructure as code. It is really much more than that, but let’s start there for the sake of simplicity.
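
To make that description a bit more concrete, here’s a minimal sketch of what “infrastructure as a Kubernetes resource” looks like from the API side, using the Python kubernetes client. The group, kind, and field names are hypothetical placeholders, not a specific Crossplane provider’s schema.

    # Minimal sketch: declaring a hypothetical Crossplane-style claim as a
    # Kubernetes custom resource. The group/version/kind/fields are placeholders.
    # The point is that infrastructure is declared like any other K8s object and
    # then continuously reconciled by a controller, not by this script.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    claim = {
        "apiVersion": "example.org/v1alpha1",          # hypothetical group/version
        "kind": "PostgreSQLInstance",                  # hypothetical claim kind
        "metadata": {"name": "app-db", "namespace": "default"},
        "spec": {
            "parameters": {"storageGB": 20},           # desired state only
            "writeConnectionSecretToRef": {"name": "app-db-conn"},
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="example.org",
        version="v1alpha1",
        namespace="default",
        plural="postgresqlinstances",
        body=claim,
    )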

Some K8s principles upfront. By now, most in our industry know what Kubernetes is and, at a minimum, that it orchestrates containers. But K8s offers much more than just container orchestration these days… Continue reading

Argo Rollouts – Shift Up, A better Deployment/ReplicaSet manager

The K8s Deployment resource enables us to manage ReplicaSet resources at a higher level. The Deployment controller tracks image update history (versioning) and manages the sequence of Pod replacement during updates. This enables us to roll back to previous versions and to control how many ‘new’ members of a ReplicaSet are deployed at a time. However, the Deployment controller hasn’t proven able to deliver the safe, intelligent updates real-world releases demand. Continue reading
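
As a point of reference for what the stock controller does offer, here’s a minimal sketch of the Deployment-level knobs mentioned above (revision history for rollback, and how many ‘new’ Pods roll in at a time), using the Python kubernetes client. The name, image, and numbers are just examples.

    # Minimal sketch of the Deployment-level controls referenced above: revision
    # history (for rollback) and how many new Pods roll in at a time.
    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo"),
        spec=client.V1DeploymentSpec(
            replicas=4,
            revision_history_limit=5,          # old ReplicaSets kept around for rollback
            strategy=client.V1DeploymentStrategy(
                type="RollingUpdate",
                rolling_update=client.V1RollingUpdateDeployment(
                    max_surge=1,               # at most 1 extra Pod during a rollout
                    max_unavailable=0,         # never drop below the desired count
                ),
            ),
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="demo", image="nginx:1.25")]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)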

Armory + OPA = CD Governed Admission Policy

In this demonstration, I’ve configured an OPA policy at the K8s cluster level, allowing only the Spinnaker service account to create a Deployment via the kube-apiserver. Additionally, I’ve defined a policy within the pipeline via the Armory Policy Engine, requiring that all images be specified with a tag other than ‘latest’. This demonstrates centralizing and consolidating admission policy for K8s.
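
To illustrate the second rule, here is a rough stand-in, written in Python rather than Rego, for the tag check being enforced. The real policy lives in OPA / the Armory Policy Engine; this function and the sample spec are only illustrative.

    # Stand-in for the tag policy described above: every container image in a Pod
    # spec must carry an explicit tag other than 'latest'.
    def image_tag_violations(pod_spec: dict) -> list[str]:
        violations = []
        for container in pod_spec.get("containers", []):
            image = container["image"]
            # image may look like: repo, repo:tag, or registry:port/repo[:tag]
            name, _, tag = image.rpartition(":")
            if not name or "/" in tag:          # the ':' belonged to a port, or there was none
                violations.append(f"{image}: no tag specified (defaults to 'latest')")
            elif tag == "latest":
                violations.append(f"{image}: 'latest' tag is not allowed")
        return violations


    if __name__ == "__main__":
        spec = {"containers": [{"name": "web", "image": "myrepo/web:latest"}]}
        for v in image_tag_violations(spec):
            print("DENY:", v)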

Safe Continuous Deployment/Delivery is a key focus for Armory. Establishing policy-driven governance as a first-class citizen in CD pipelines is one way Armory advances that goal. Through policy, elements of a CD pipeline are controlled at a much finer grain than RBAC alone allows. Continue reading

Deploying K8s Clusters with Spinnaker and Cluster API – The Setup

My last post was a bit on the dry/theory/boring side, but it was necessary to tee up where I’m going now. This post, and a good number of the following ones, will focus on implementing a CI/CD pipeline that spans on-prem and public cloud.

I’ll use Spinnaker for CD, Travis CI for CI, Docker Hub as the image registry, and GitHub for the git repo. I’ll also make use of Cluster API, as covered in previous posts. Full disclosure: I have no bias regarding the CI, image registry, or git repo choices; as with the CD component, there are many options. I currently work at Armory.io, so I do have a bias for Spinnaker.

Let’s get to it…

Continue reading

Multi-Cloud Continuous Delivery – Infrastructure Economics

In the world of multi/hybrid-cloud, where are CD patterns best aligned and applied?

Some housekeeping up front. Continuous integration (the CI of CI/CD) is a discipline all its own. While CI and CD are commonly conflated, they are two very different stages of the software development lifecycle. As a Detroiter, I see CI as the automobile assembly line, while CD is the iterative testing of the car’s many components as they’re integrated, all the way through to the proving grounds; the car goes back to engineering for tweaks until it’s cleared for production.

CI can be done anywhere: take a specification and build it, tell me if there are errors in the code, and so on. CD takes the passing CI build, measures it for function and performance, and determines whether to deploy or roll back. (Overly simplified, but that’s the gist of each.)

Continue reading

Cluster API vSphere Provider – Customize Image

TL;DR: the numbered steps are at the bottom. If you read my last post, you’ll recall I was in the middle of troubleshooting provisioning issues related to disk I/O latency. In the end, I returned my lab server to its prior form: a vSphere ESXi host with vCenter self-managing it.

With the disk I/O issues resolved, I got back to my original task of configuring a Cluster API control plane to provision K8s clusters. The setup is fairly simple (the steps vary by infrastructure provider; here I’m covering the vSphere provider), and a rough command-level sketch follows the list below. Link to the docs: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md

  1. Download the clusterctl CLI binary
  2. Download an OVA template and load it in vCenter (you can use the image URLs from the docs directly in the vSphere web client; if you choose to download them locally first, change the URLs from http to https)
  3. Take a snapshot and convert it to a template
  4. Set values needed for the control plane initialization (e.g. vCenter address, admin user, password, etc.)
  5. Create a single node KinD cluster and use clusterctl to initialize it with the required control plane pods
  6. Deploy a managed cluster to vSphere and use clusterctl to initialize it as the control plane cluster (i.e., bootstrap and pivot)
  7. Delete the KinD cluster
  8. Use the pivoted control plane to provision managed clusters from then on
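
Here is the command-level sketch promised above, covering steps 5 through 8, wrapped in Python. The kind/clusterctl subcommands and flags reflect a recent Cluster API workflow and assume the vSphere settings from step 4 are already in clusterctl’s config or environment; they may differ from the exact versions used when this was written, so treat this as illustrative rather than canonical.

    # Rough sketch of steps 5-8. Assumes kind, clusterctl, and kubectl are installed
    # and the vSphere connection settings from step 4 are already configured.
    import subprocess

    def run(cmd, **kwargs):
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, **kwargs)

    # 5. Temporary single-node bootstrap cluster, initialized with the CAPV provider.
    run(["kind", "create", "cluster", "--name", "capi-bootstrap"])
    run(["clusterctl", "init", "--infrastructure", "vsphere"])

    # 6. Render a managed-cluster manifest for vSphere, apply it, and (once the
    #    cluster reports ready) grab its kubeconfig; then pivot the Cluster API
    #    objects into it so it becomes the long-lived control plane.
    manifest = run(
        ["clusterctl", "generate", "cluster", "mgmt", "--kubernetes-version", "v1.27.3"],
        capture_output=True, text=True,
    ).stdout
    run(["kubectl", "apply", "-f", "-"], input=manifest, text=True)

    kubeconfig = run(["clusterctl", "get", "kubeconfig", "mgmt"],
                     capture_output=True, text=True).stdout
    with open("mgmt.kubeconfig", "w") as f:
        f.write(kubeconfig)

    run(["clusterctl", "init", "--infrastructure", "vsphere", "--kubeconfig", "mgmt.kubeconfig"])
    run(["clusterctl", "move", "--to-kubeconfig", "mgmt.kubeconfig"])

    # 7. The KinD bootstrap cluster has served its purpose.
    run(["kind", "delete", "cluster", "--name", "capi-bootstrap"])

    # 8. From here, point clusterctl/kubectl at the pivoted control plane
    #    (mgmt.kubeconfig) to provision further managed clusters.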

Continue reading

Home Lab Config Update – Proxmox vs. vSphere – Update 2

Proxmox had a lot of promise for my home lab. Most compelling were not having to keep up with license renewals, good support for cloud-init, and naturally good performance for Linux workloads. But I had hoped to nest vSphere for those times when I want to test vSphere workloads, and that turned out to be a no-go. Of course, this is not an intended use case for Proxmox, just as nesting Proxmox in vSphere isn’t, so I am not faulting it for that.

But I need to test vSphere workloads, and I can’t get sufficient performance out of a nested setup to do so. So it’s back to vSphere. I’m not bummed about it; I know and like vSphere. It just means taking on some extra management tasks and overhead during the times I’m not specifically working with vSphere.

What I ran into was incredibly poor disk IOPS and latency when running VMs in a nested ESXi host. So much so that a K8s etcd service was unable to complete a single read or write before the default 100ms timeout was hit. A rudimentary IOZONE test showed one-tenth the performance of a Linux host running directly at the base Proxmox level.
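
For anyone wanting to sanity-check storage before blaming etcd, here’s a crude fsync-latency probe in Python, loosely analogous to etcd committing its write-ahead log. It is not a substitute for a proper IOZONE or fio run, just a quick smell test.

    # Crude fsync-latency probe: etcd is very sensitive to how long a small
    # write+fsync takes, so timing a few hundred of them gives a rough feel for
    # whether the backing storage is in trouble. Run it from the directory that
    # would host etcd's data.
    import os, statistics, tempfile, time

    samples_ms = []
    with tempfile.NamedTemporaryFile(dir=".") as f:
        for _ in range(200):
            start = time.perf_counter()
            f.write(os.urandom(4096))        # one 4 KiB record, roughly WAL-sized
            f.flush()
            os.fsync(f.fileno())
            samples_ms.append((time.perf_counter() - start) * 1000)

    samples_ms.sort()
    print(f"median={statistics.median(samples_ms):.2f}ms  max={samples_ms[-1]:.2f}ms")
    # Values creeping toward the 100ms range mentioned above mean etcd (and the
    # cluster depending on it) will struggle.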

Farewell Proxmox, hello again vSphere. Once I get enough patience-reserve built back up, I’ll reinstall vSphere on the home server, reconfigure everything, and then set up a demo for my next post covering Cluster API with the CAPV and CAPA providers to incorporate on-prem with AWS in a CI/CD workflow with canary analysis.

Building Your K8s Home Lab – Key Components

Thought I’d make a quick writeup on how I set up my home lab with Kubernetes. I tend to use two different models.

For light work, I really like KinD (K8s in Docker). KinD is incredibly easy to install and use (use the pre-compiled binaries to bypass compiling with Go). You can spin up multiple clusters, tear clusters down and start fresh (this is a blessing and a curse in the long run: it cultivates the bad habit of not troubleshooting cluster issues, but it’s a big time saver), and it behaves very much like a host-backed K8s cluster. There are some nuances to networking that can become blockers, but for 99% of light-load work, it’s great. And because it runs on your laptop, you always have a K8s environment at the ready.
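
For the curious, the spin-up/tear-down workflow really is just a couple of commands; here they are wrapped in Python for consistency with the other examples (cluster names are arbitrary).

    # The disposable-cluster workflow described above. Cluster names are arbitrary.
    import subprocess

    for name in ("dev", "scratch"):
        subprocess.run(["kind", "create", "cluster", "--name", name], check=True)

    subprocess.run(["kind", "get", "clusters"], check=True)                         # see what's running
    subprocess.run(["kind", "delete", "cluster", "--name", "scratch"], check=True)  # start fresh any time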

For more complex and/or heavier-load use, I run VM-backed K8s clusters on a Dell R720 with a hypervisor. Both ESXi (the VMware hypervisor) and KVM are free and provide the functionality you need for a home lab. KVM offers the option of Proxmox, which is similar to vCenter but without the licensing requirements. Whichever you choose is up to you.

Continue reading

Home Lab Config Update – Proxmox vs. vSphere

I recently decided to try out Proxmox/KVM for my home lab hypervisor. So far, I’m impressed with it for my use case. It is relatively easy to install, and configuration is pretty straightforward (basic Linux know-how helps). It could be used for production with a support contract, but I believe vSphere still has the edge by a wide margin for more mainstream consumers.

There’s no disagreement that it makes more sense to run vSphere in your home lab if your goal is to understand and build competency on the hypervisor you’ll most likely encounter at an employer. But I just need to run workloads on a home server, and I don’t care what runs them, as long as they run (I actually do care about a few things, but Proxmox checks the list). 85% of my VMs are identical (Linux serving as a K8s node). Continue reading

GitOps – An intro to git repo-centric CI/CD for beginners

I’m admittedly a ‘weekend warrior’ when it comes to coding. That said, you don’t need to be a developer, or even know how to write a single line of code, to understand this topic, participate in the process, or delve into it on your own. A key benefit of cloud-native architectures (i.e., infrastructure as code coupled with orchestration and containerization) is the pattern of updating a component of an application and committing the change to a git repository, which in turn triggers an automated rebuild, test, and redeployment of the definition. Continue reading
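
To give a feel for that loop, here is a toy, heavily simplified sketch of a repo-centric trigger: watch a git branch and re-apply whatever manifests it contains when a new commit lands. Real GitOps tooling (Argo CD, Flux, or a CI/CD pipeline) does this far more robustly; the repo URL and paths are placeholders.

    # Toy illustration of the repo-centric loop: poll a git branch, and when a new
    # commit appears, re-apply the manifests it contains.
    import subprocess, time

    REPO = "https://github.com/example/app-config.git"   # placeholder repo
    CLONE_DIR = "/tmp/app-config"
    last_seen = None

    subprocess.run(["git", "clone", REPO, CLONE_DIR], check=True)

    while True:
        subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)
        head = subprocess.run(["git", "-C", CLONE_DIR, "rev-parse", "HEAD"],
                              check=True, capture_output=True, text=True).stdout.strip()
        if head != last_seen:
            # A new commit is the trigger: redeploy from what's now in the repo.
            subprocess.run(["kubectl", "apply", "-f", f"{CLONE_DIR}/manifests/"], check=True)
            last_seen = head
        time.sleep(30)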