This post is the second in a series on predictive auto-scaling of vSphere VMs and containerized services. Its purpose is to describe how to build the first components: the installation and configuration of Docker and Kubernetes on VMs. I will be using Docker 1.13, Kubernetes 1.10, and CentOS 7 x64 for the implementation.
You will need either a vSphere environment or VMware Workstation/Fusion, CentOS install media, and an internet connection. There are a handful of ways to implement a test-bed Kubernetes cluster: running on a cloud provider's CaaS platform, orchestrating a Docker-in-Docker deployment, building manually from scratch, or building from scratch with a helper tool. Continue reading
Reactive and predictive auto-scaling have existed for some time already. Predictive auto-scaling has been custom developed and leveraged by large service providers for almost a decade.
To define a few terms: scaling is the process of adjusting resources to satisfactorily serve demand. Auto-scaling replaces that manual process with automated IT processes that react to current-state metrics. Predictive auto-scaling relies on metrics observed over time to predict an upcoming change in demand and auto-scale ahead of actual demand. This is advantageous because the resources required to scale will not contend with the services demanding more of them. Continue reading
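To make the idea concrete, here is a minimal sketch of "scaling ahead of actual demand" in Python. This is purely illustrative and not any vendor's algorithm: the function names, the linear-trend forecast, the look-ahead window, and the per-instance capacity figure are all hypothetical.

```python
# Hypothetical predictive auto-scaling sketch: fit a simple linear trend to
# recent demand samples, then size the instance pool for the forecast value
# rather than the current one.

def forecast_demand(samples, steps_ahead):
    """Least-squares linear extrapolation over equally spaced samples."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return slope * (n - 1 + steps_ahead) + intercept

def instances_needed(samples, capacity_per_instance, steps_ahead=3):
    """Scale ahead of demand: size the pool for the forecast, not the present."""
    predicted = forecast_demand(samples, steps_ahead)
    return max(1, -(-int(predicted) // capacity_per_instance))  # ceiling division

# Demand rising ~10 units per interval; each instance serves 50 units.
history = [100, 110, 120, 130, 140]
print(instances_needed(history, capacity_per_instance=50))  # sizes for ~170, i.e. 4
```

Because the pool is resized before demand actually reaches the forecast level, the scale-out work happens while spare capacity still exists, which is the advantage described above.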
Professional football coach Paul Brown once said, “The key to winning is poise under stress.” This is also the key to optimal performance of your vSphere data center resources. We want our resources stressed just enough to take full advantage of our capacity investment, without delivering substandard performance.
One of the vROps questions I am often asked is, “How does vROps come up with this ‘right-size’ recommendation?” It’s often followed with, “Why should I trust this?”
The answer is found in an understanding of two factors. The first is how vROps is set to analyze CPU and memory (configured by policy or the Monitoring Goals wizard). The default is CPU Demand | Memory Consumed.
There are three options, ranging from what is considered conservative to aggressive. ‘Allocation’, ‘Consumed’, and ‘Demand’ are terms with specific meanings for vROps capacity analysis in this regard. Within this post, I will use the term ‘demand’ in a general sense and will not delve into the differences between them.
The second is the stress policy and calculation. I’ll be focusing on this factor, as it performs the heavy lifting here.
Stress is an evaluation of how much something has in relation to how much is demanded of it. It can be derived for anything that provides resources, such as virtual machines, hosts, and clusters. Continue reading
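As a rough illustration of that definition (not the exact vROps formula, which is policy-driven), here is a minimal Python sketch. It treats stress as the share of observed intervals in which demand exceeded a threshold fraction of capacity; the 70% threshold and the sample values are hypothetical.

```python
# Illustrative stress-style calculation (not the actual vROps algorithm):
# the percentage of observed samples where demand exceeded a threshold
# fraction of the resource's capacity.

def stress_percentage(demand_samples, capacity, threshold=0.7):
    """Percentage of samples where demand exceeded threshold * capacity."""
    limit = threshold * capacity
    breaches = sum(1 for d in demand_samples if d > limit)
    return 100.0 * breaches / len(demand_samples)

# A VM with 8 GHz of CPU capacity, sampled CPU demand in GHz:
demand = [3.0, 6.5, 7.2, 5.9, 6.1, 2.4]
print(stress_percentage(demand, capacity=8.0))
```

The same ratio-of-demand-to-capacity idea applies at any level of the hierarchy, a VM, a host, or a cluster, which is why stress can be derived for anything that provides resources.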
This has been in my drafts folder for three months now. I figure it’s time to get back to it.
Previous posts have delved into the benefits of leveraging virtualization to provide automation, elasticity, governance, and ‘day 2’ management to a container-centric DevOps architecture. For VMware, VIC has become the platform for repackaged applications. There isn’t much focus on orchestration with VIC. There is some, but it’s proprietary to vSphere. That’s not to say it’s inferior to a more widely adopted K8s model; it’s just purpose-oriented. Continue reading
vSphere Integrated Containers
In this post I’ll attempt to cover vSphere Integrated Containers (aka VIC). I’ve mulled over how much to cover vs. how much I should link to. There are many great posts on this topic. I will attempt to frame out VIC and then provide links to posts that fill in some of the details. Continue reading
Containers in Virtual Machines – The Best of Both Worlds
My plan was to follow the previous post on Harbor with a rundown of vSphere Integrated Containers. But as I began to write, I realized just the topic of the relationship between containers and VMs was fairly lengthy. I’ve decided to cover just that in this post. In the next post, I will cover VIC. Continue reading
Harbor – The Enterprise-Grade Open Source Container Registry
Ok, we have a basic understanding of containers, a few of the architectural and operational challenges they come with, and the DevOps practice that aims to improve software release and life-cycle management outcomes. Continue reading
Containers, DevOps, Orchestration, and Challenges
In the first post of this series, I covered the basics of containers. In this post, I’ll cover why containers are attractive in DevOps initiatives and some of the challenges organizations face when trying to leverage them for such. This will lay the foundation for the next post, where I will cover VMware’s vision and initiatives to address these challenges. Continue reading
In the coming series of posts, I will share some thoughts around Docker containers and how VMware has embraced and enhanced the benefits of container based application delivery. This is a big topic, with many alleys and winding roads. I will try to focus on some of the basics. Continue reading
In this post, I’ll try to clear up the myths and untruths of licensing Oracle on vSphere, and provide a method for accurately demonstrating to Oracle which processors actually need to be licensed.
I’ve noticed a resurgence in Oracle licensing questions from customers and partners. Most people seem to believe they need to license every host in a cluster due to DRS. As of vSphere 6.x with shared-nothing vMotion, many are being told they need to license every host in the vCenter data center. Continue reading