In the previous post, I wrote the function to scale a K8s deployment with a REST API call. In this post, I’ll write the other function required to codify the scaling of a K8s cluster across physical resources.
The function here will power on a K8s node VM that resides on an ESXi host, making that node available to the K8s cluster as additional compute. I will use the vSphere REST API and then combine it with the previous function to complete the scale-out operation.
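To make that concrete, here is a rough sketch of what the two calls might look like in Python with the requests library. This is illustrative only, not the code from this series: the vCenter and API-server hostnames, credentials, VM identifier, service-account token, namespace, and deployment name are all placeholders, the function names are my own, and the previous post's scale function is approximated here by PATCHing the deployment's scale subresource.

```python
import requests

# Placeholder values -- replace with your own environment details.
VCENTER = "vcenter.example.com"                 # assumed vCenter hostname
VC_USER, VC_PASS = "administrator@vsphere.local", "changeme"
K8S_API = "https://k8s-master.example.com:6443" # assumed K8s API server
K8S_TOKEN = "REPLACE_WITH_SERVICEACCOUNT_TOKEN"


def power_on_node(vm_id):
    """Power on a K8s node VM through the vSphere Automation REST API."""
    s = requests.Session()
    s.verify = False  # lab environment with self-signed certs

    # Create an API session (vSphere 6.5+ REST endpoint) and keep the session id.
    resp = s.post(f"https://{VCENTER}/rest/com/vmware/cis/session",
                  auth=(VC_USER, VC_PASS))
    resp.raise_for_status()
    s.headers["vmware-api-session-id"] = resp.json()["value"]

    # Issue the power-on operation against the VM.
    resp = s.post(f"https://{VCENTER}/rest/vcenter/vm/{vm_id}/power/start")
    resp.raise_for_status()


def scale_deployment(namespace, name, replicas):
    """Scale a deployment by PATCHing its scale subresource (the previous post's function)."""
    url = (f"{K8S_API}/apis/apps/v1/namespaces/{namespace}"
           f"/deployments/{name}/scale")
    headers = {
        "Authorization": f"Bearer {K8S_TOKEN}",
        "Content-Type": "application/merge-patch+json",
    }
    body = {"spec": {"replicas": replicas}}
    resp = requests.patch(url, json=body, headers=headers, verify=False)
    resp.raise_for_status()


def scale_out(vm_id, namespace, name, replicas):
    """Combine both functions: add node compute first, then scale the deployment."""
    power_on_node(vm_id)
    scale_deployment(namespace, name, replicas)


if __name__ == "__main__":
    # Hypothetical VM moref and deployment name, for illustration only.
    scale_out("vm-42", "default", "web-frontend", 5)
```

In practice you would also want to wait for the new node to register as Ready before scaling the deployment, but the sketch above captures the basic plumbing.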
To keep this exercise pragmatic, I’ll begin by focusing on the plumbing required to scale both components in parallel. I’ll follow up in another post with predictive automation based on some primitive analytics.
This post is the second in a series that will consider the topic of predictive auto-scaling of vSphere VMs and containerized services. The purpose of this post is to describe how to build the first components: installing and configuring Docker and Kubernetes on VMs. I will be using Docker 1.13, Kubernetes 1.10, and CentOS 7 x64 for the implementation.