Deploying K8s Clusters with Spinnaker and Cluster API – The Setup

My last post was a bit on the dry/theory/boring side, but it was necessary to tee up where I’m going now. This post, and a good number of those that follow, will focus on implementing a CI/CD pipeline that spans on-prem and public cloud.

I’ll use Spinnaker for CD, Travis CI for CI, Docker Hub for the image registry, and GitHub for the Git repo. I’ll also make use of Cluster API as posted previously. Full disclosure: I have no particular bias on the CI, image registry, or Git repo choices; as with the CD component, there are many options. I currently work at Armory.io, so I do have a bias for Spinnaker.

Let’s get to it…

Continue reading

Multi-Cloud Continuous Delivery – Infrastructure Economics

In the world of multi/hybrid-cloud, where are CD patterns best aligned and applied?

Some housekeeping up front. Continuous integration (the CI of CI/CD) is a discipline all its own. While CI and CD are commonly conflated, they are two very different stages of the software development lifecycle. As a Detroiter, I see CI as the automobile assembly line, while CD is the iterative testing of the car’s many components as they’re integrated, all the way through to the proving grounds; the car goes back to engineering for tweaks until it’s cleared for production.

CI can be done anywhere: take a specification and build it, tell me if there are errors in the code, and so on. CD takes the passing CI build, measures it for function and performance, and determines whether to deploy or roll back. (Overly simplified, but that’s the gist of each.)
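
To make the split concrete, here is a rough sketch of what a CI stage often boils down to for a containerized app. Everything named here is a placeholder (image repo, test entrypoint, GIT_COMMIT variable); it’s an illustration of the handoff, not anyone’s actual pipeline.

```sh
#!/usr/bin/env bash
# Hypothetical CI stage: build the artifact, test it, publish it.
set -euo pipefail

# Placeholder image name; GIT_COMMIT is assumed to be provided by the CI system.
IMAGE="docker.io/example/cart-service:${GIT_COMMIT:-dev}"

docker build -t "$IMAGE" .                 # build from the spec (Dockerfile)
docker run --rm "$IMAGE" ./run-tests.sh    # surface errors in the code
docker push "$IMAGE"                       # hand the passing build to the registry

# From here, CD (Spinnaker in my case) deploys the new tag to a test
# environment, measures function and performance, and promotes or rolls back.
```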

Continue reading

Cluster API vSphere Provider – Customize Image

TL;DR: the setup steps are listed below. If you read my last post, you’ll recall I was in the middle of troubleshooting provisioning issues related to disk I/O latency. In the end, I returned my lab server to its prior form: a vSphere ESXi host with vCenter self-managing it.

With the disk I/O issues resolved, I got back to my original task of configuring a Cluster API control plane to provision K8s clusters. The setup is fairly simple (the steps vary by infrastructure provider; here I’m covering the vSphere provider), and a rough command sketch follows the steps. Link to the docs: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md

  1. Download the clusterctl CLI binary
  2. Download an OVA template and load it into vCenter (you can deploy directly from the image URLs in the docs via the vSphere web client; if you choose to download them locally first, change the URLs from http to https)
  3. Take a snapshot and convert to template
  4. Set values needed for the control plane initialization (e.g. vCenter address, admin user, password, etc.)
  5. Create a single node KinD cluster and use clusterctl to initialize it with the required control plane pods
  6. Deploy a managed cluster to vSphere and use clusterctl to initialize it as a control plane cluster (i.e., bootstrap and pivot)
  7. Delete the KinD cluster
  8. Use the pivoted control plane to provision managed clusters from there on
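
For reference, here is roughly what steps 4 through 7 look like on the command line. Treat it as a sketch rather than a recipe: clusterctl subcommand names and the required variables have shifted across releases, and the vSphere values shown are placeholders.

```sh
# Step 4: connection values the vSphere provider reads at init
# (variable names differ by clusterctl/provider version; these are illustrative)
export VSPHERE_SERVER="vcenter.example.local"
export VSPHERE_USERNAME="administrator@vsphere.local"
export VSPHERE_PASSWORD="changeme"

# Step 5: single-node KinD bootstrap cluster, initialized with the CAPI/CAPV pods
kind create cluster --name bootstrap
clusterctl init --infrastructure vsphere

# Step 6: deploy a managed cluster to vSphere, then pivot management into it
clusterctl generate cluster mgmt --infrastructure vsphere > mgmt.yaml
kubectl apply -f mgmt.yaml
# ...wait for the new cluster to come up, then...
clusterctl get kubeconfig mgmt > mgmt.kubeconfig
clusterctl init --kubeconfig mgmt.kubeconfig --infrastructure vsphere
clusterctl move --to-kubeconfig mgmt.kubeconfig

# Step 7: the KinD cluster has served its purpose
kind delete cluster --name bootstrap
```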

Continue reading

Home Lab Config Update – Proxmox vs. vSphere – Update 2

Proxmox had a lot of promise for my home lab. Most compelling were not having to keep licenses updated, good support for cloud-init, and naturally good performance for Linux workloads. But I had hoped to nest vSphere for those times when I want to test vSphere workloads, and that turned out to be a no-go. Of course, this is not an intended use case for it, just as nesting Proxmox in vSphere isn’t, so I’m not faulting it for that.

But I need to test vSphere workloads, and I can’t get sufficient performance out of it to do that. So it’s back to vSphere. I’m not bummed about it; I know and like vSphere. It just means I take on some extra management tasks and overhead during the times I’m not specifically working with vSphere.

What I ran into was incredibly poor disk IOPS and latency when running VMs in a nested ESXi host. So much so that a K8s etcd service was unable to complete a single read or write before the default 100ms timeout was hit. A rudimentary test via IOzone showed about 1/10th the performance of a Linux host running at the base Proxmox level.
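
If you want to reproduce that kind of comparison yourself, an invocation along these lines (file and record sizes are arbitrary), run once inside the nested setup and once at the base Proxmox level, makes the gap obvious:

```sh
# Sequential and random read/write tests; -I requests O_DIRECT so the page
# cache doesn't mask the underlying storage latency.
iozone -I -s 1g -r 4k -i 0 -i 1 -i 2
```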

Farewell Proxmox, hello again vSphere. Once I get enough patience-reserve built back up, I’ll reinstall vSphere on the home server, reconfigure everything, and then set up a demo for my next post covering Cluster API with the CAPV and CAPA providers to incorporate on-prem with AWS in a CI/CD workflow with canary analysis.

Building Your K8s Home Lab – Key Components

Thought I’d make a quick write-up on how I set up my home lab with Kubernetes. I tend to use two different models.

For light work, I really like KinD (K8s in Docker). KinD is incredibly easy to install and use (use the pre-compiled binaries to bypass compiling with Go). You can spin up multiple clusters, tear clusters down and start fresh (a blessing and a curse in the long run: it cultivates the bad habit of not troubleshooting cluster issues, but it’s a big time saver), and it behaves very much like a host-backed K8s cluster. There are some nuances to networking that can become blockers, but for 99% of light-load work, it’s great. And because it runs on your laptop, you always have a K8s environment at the ready.
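
If you haven’t tried it, the day-to-day loop is about as short as it gets (the cluster name here is arbitrary):

```sh
# Create a throwaway cluster, point kubectl at it, delete it when finished.
kind create cluster --name scratch
kubectl cluster-info --context kind-scratch
# ...do your work...
kind delete cluster --name scratch
```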

For more complex and/or heavier-load use, I run VM-backed K8s clusters on a Dell R720 with a hypervisor. Both ESXi (VMware’s hypervisor) and KVM are free and provide the functionality you need for a home lab. KVM offers the option of Proxmox, which is similar to vCenter but without licensing requirements. Whichever you choose is up to you.

Continue reading

Home Lab Config Update – Proxmox vs. vSphere

I recently decided to try out Proxmox/KVM for my home lab hypervisor. So far, I’m impressed with it for my use case. It is relatively easy to install, and configuration is pretty straightforward (basic Linux know-how helps). It could be used for production with a support contract, but I believe vSphere still has the edge by a wide margin for more mainstream consumers.

No disagreement: it makes more sense to run vSphere in your home lab if your goal is to understand and build competency on the hypervisor you’ll most likely encounter at an employer. But I just need to run workloads on a home server, and I don’t care what runs them, as long as they run (I actually do care about a few things, but Proxmox checks the list). 85% of my VMs are identical (Linux serving as a K8s node).

Continue reading

GitOps – An intro to git repo-centric CI/CD for beginners

I’m admittedly a ‘weekend warrior’ when it comes to coding. That said, you don’t need to be a developer, or even know how to write a single line of code, to understand this topic, participate in the process, or delve into it on your own. A key benefit of cloud-native architectures (i.e., infrastructure as code coupled with orchestration and containerization) is the pattern of updating components of an application and committing the changes to a Git repository, in turn triggering an automated rebuild/test of the definition and a redeployment.

Continue reading
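
To make that loop concrete: the developer-facing half is just ordinary Git, and the automation watching the repo does the rest. Repo and file names below are hypothetical.

```sh
# Change the application definition and let the pipeline react to the commit.
git clone https://github.com/example/shop-app.git && cd shop-app
vi k8s/deployment.yaml                       # e.g. bump the container image tag
git commit -am "Bump cart service image to v1.4"
git push origin main
# A webhook (or poll) triggers CI to rebuild/test the definition,
# and CD redeploys the result automatically.
```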

Beyond the K8s Downward API – Expanding runtime cluster values for containers

In this post, I’m taking on the task of getting cluster object properties to a container at initialization; specifically, values not exposed by the downward API. I’m addressing a very specific need to add region- and AZ-specific values to a replicated database instance, but this pattern can be used for many other cases.

I am working with a replicated YugaByte Redis database in a K8s cluster spanning AWS and GCP, with a microservice app leveraging it for the shopping cart service. If you’ve worked with YugaByte, or other replicated databases, you’ll likely be familiar with conveying the AZ, rack, and host the workload is running on via metadata. In this case, I wanted to dynamically add that metadata to each pod at scheduling time to convey the CSP, region, and availability zone it was scheduled to.

Continue reading
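
For context on the gap: the downward API can hand a pod its own name, namespace, node name, labels, and so on, but not labels that live on the node object, which is where region and zone usually sit. One way to bridge it (a sketch of the general pattern, not necessarily what the full post lands on) is an init step that looks the node labels up through the API. This assumes kubectl is available in the image, the pod has RBAC to read nodes, and NODE_NAME was injected via the downward API’s spec.nodeName.

```sh
# Init-container sketch: resolve the node's topology labels and write them
# to a shared volume for the database container to read at startup.
# Label keys vary by K8s version (older clusters used the
# failure-domain.beta.kubernetes.io/* labels).
REGION=$(kubectl get node "$NODE_NAME" \
  -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/region}')
ZONE=$(kubectl get node "$NODE_NAME" \
  -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
printf 'placement_region=%s\nplacement_zone=%s\n' "$REGION" "$ZONE" \
  > /shared/placement.env
```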

kubectl create -f K8s-cluster.yaml

A couple of years back, I endeavored to create a proof of concept for predictive auto-scaling of K8s clusters. At the time (K8s 1.10), there really wasn’t a way to easily and ‘automatedly’ scale a cluster out and back in. Flash forward to today, and I can now leverage the work of the Cluster API project to solve that problem.

Cluster API (CAPI) delivers the capability of one K8s cluster managing the lifecycle of other K8s clusters. It’s a bit of a chicken-and-egg at first glance, but if you utilize a low-friction cluster bootstrapper (e.g., kubeadm) for the first cluster, you can then deploy and manage the rest of your clusters from that first instance. Better yet, we could leverage a cluster that was pre-built for us by a K8s-as-a-service provider.

Continue reading
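
That is also why the title of this post isn’t a joke: once the CAPI controllers are running, a cluster is just another object you feed to kubectl. Below is a heavily trimmed sketch; the API versions and the provider objects it references vary by release and infrastructure, and a real definition also needs matching control-plane, machine, and bootstrap resources.

```sh
cat <<'EOF' | kubectl create -f -
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-01                     # hypothetical cluster name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: workload-01-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: workload-01
EOF
```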

K8s Stateful – Storage Basics

In this post, I was going to cover two controller kinds (Deployment and StatefulSet) and the resource kinds PersistentVolume, PersistentVolumeClaim, and StorageClass, and go over examples of non-persistent and persistent service delivery. Then I realized it was going to be way too long. So I’ll cover the basics of stateless/stateful and storage in this post. Next post, I’ll cover the Deployment and StatefulSet controllers and provide some examples of their use.

Continue reading
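
As a small preview of where the next posts are headed, the claim side of persistent storage is only a few lines of YAML. The StorageClass name below is an assumption; use whatever classes your cluster actually offers.

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard     # assumes a StorageClass named "standard" exists
  resources:
    requests:
      storage: 5Gi
EOF
```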