vSphere Integrated Containers
In this post I’ll attempt to cover vSphere Integrated Containers (aka VIC). I’ve mulled over how much to cover vs. how much I should link to. There are many great posts on this topic. I will attempt to frame out VIC and then provide links to posts that fill in some of the details.
VIC takes the construct of a single Docker host and scales it across multiple ESXi hosts. It is similar to orchestration architectures such as Swarm, Kubernetes (K8s), and Mesos, but it has its own nuances, and it doesn't aim to replace them. It provides container networking, container name resolution, role-based access control, an HTML5 management plane (Admiral), an enterprise-grade registry (Harbor), parity with the majority of Docker commands, and integration with vSphere tooling.
VIC will schedule containers across hosts and ensure they have network connectivity with vSphere constructs. This removes much of the complexity involved in achieving just these two attributes on a non-VIC container implementation. VIC also places boundaries around container resource utilization with vSphere constructs.
VIC is not Platform as a Service (PaaS) or even a full Containers as a Service (CaaS) platform. It is a good platform to provide RBAC access to container services for a DevOps initiative where developers desire a Docker experience and operations desires a vSphere administration experience.
I see VIC as a platform for organizations that are in application repackaging mode. For organizations implementing 12-factor apps and seeking CI/CD integration, VIC is probably not the way to go right now. That said, VIC can be used as a container host tool as part of a larger CaaS architecture. This would entail developers and operations utilizing the RBAC plane of VIC to allow developers to scale Docker hosts on demand, with vSphere operations tooling on the back end. According to the latest VIC documentation, integration of this type is not supported. I have seen it configured in a lab though.
If you need supported Kubernetes orchestration and/or CI/CD capabilities backed by VMware based IaaS, you should look into VMware PKS and/or Pivotal Cloud Foundry. These are CaaS and PaaS solutions. More on these in coming posts.
I think the following video, from our Cloud Native Apps channel, does a good job of walking through how to install VIC and what it does at a base level. What you will learn in this segment is that there are three primary components installed with VIC, and there is a specific tool with its own syntax to set up a vSphere "container cluster" (VMware calls this a vSphere Container Host, or VCH). It can be a bit confusing. A VCH is both a boundary for resources and a VM that serves as a Docker endpoint that you submit your Docker client commands to. The VCH VM handles scheduling and container networking with vSphere tooling.
The install process adds the aforementioned three components. We install and run an OVA-packaged VM that provides the Admiral management portal, the Harbor registry, and the file repository used to extend vCenter for VIC. We use vic-machine to create a vSphere Container Host (VCH). It may not have been clear in the video: you target a cluster and/or resource pool when creating a VCH. You can target a standalone ESXi host, but this makes zero sense, so I won't explain that option.
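To make the vic-machine step concrete, here is a sketch of a VCH create command. The flag names come from the VIC 1.x vic-machine tool; every hostname, credential, datastore, and network name below is a placeholder you would swap for your own environment.

```shell
# Create a VCH targeting a cluster (or resource pool) in vCenter.
# All values below are lab placeholders, not defaults.
vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name vch-01 \
  --compute-resource /Datacenter/host/Cluster \
  --image-store datastore1 \
  --bridge-network vic-bridge \
  --public-network vm-network \
  --no-tlsverify
```

The `--compute-resource` value is what makes the cluster/resource-pool targeting explicit, and `--no-tlsverify` corresponds to the self-signed-certificate path discussed below; with proper trusted certs you would supply them instead.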
All of the configuration performed during the appliance installation is specific to the Admiral and Harbor services. If you have a proper certificate publishing process, you would use that to provide trusted certs during the appliance configuration process. If not, you will do what is conveyed in the recording, and go with self-signed certificates. If that is your path, you saw later in the video where you would verify and accept those self-signed certs.
Once Admiral, Harbor, and the file service on port 9443 are running, you download the archive from the file service to obtain vic-machine and use the binary appropriate for your OS. We use vic-machine to create a VCH on vSphere infrastructure. Then Admiral and Harbor come into play. You can deploy a VCH from the Admiral portal today, but that path is intended for limited testing only. Harbor, Admiral, and the VCH are the three components I alluded to earlier.
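Once the VCH is up, a standard Docker client can talk to it. A minimal sketch, assuming a VCH deployed with self-signed certs at a placeholder address (vic-machine prints the real endpoint when the create completes):

```shell
# Point an ordinary Docker client at the VCH's Docker API endpoint.
# 10.0.0.10:2376 is a placeholder; use the endpoint vic-machine reported.
export DOCKER_HOST=10.0.0.10:2376

docker --tls info                       # confirms the VCH answers as a Docker host
docker --tls run -d --name web nginx    # each container becomes a containerVM in vSphere
docker --tls ps
```

This is the developer-facing half of the story: the commands are plain Docker, while scheduling and networking happen through vSphere behind the endpoint.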
Once a VCH is provisioned, we configure tenants and RBAC rights in the management plane and registry. Beyond the registry and vSphere operations, this controls how developers can interact with vSphere resources when deploying images in containers.
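On the registry side, Harbor is consumed with the standard Docker registry workflow. A sketch, assuming a placeholder Harbor hostname and a project named devteam that an administrator has already created and granted push rights on:

```shell
# Standard Docker registry workflow against a Harbor instance.
# harbor.example.com and devteam are placeholders for this sketch.
docker login harbor.example.com
docker tag nginx:latest harbor.example.com/devteam/nginx:latest
docker push harbor.example.com/devteam/nginx:latest
```

Harbor's project membership and roles are what enforce the RBAC described above: who can push, who can pull, and which tenants see which projects.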
VIC goes much deeper than this as we get into integration with the Dev side of the house. We need to configure vSphere constructs to provide the services that Docker constructs expect; networking is one example. For developers to leverage Docker on vSphere, we need to understand what VIC creates on the vSphere side so we can properly provision and manage it with vSphere tooling.
The following two videos cover networking. You need to have an understanding of Docker networking to fully grasp this detail, so the first video covers that. The second covers how things work on VIC.
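One networking detail worth a sketch: a vSphere port group can be exposed to developers as a named Docker network by mapping it when the VCH is created. The `--container-network` flag is part of vic-machine; the port group and network names below are placeholders.

```shell
# At VCH creation time, map an existing vSphere port group to a
# Docker network name (routable-pg and routable are placeholders):
#   vic-machine-linux create ... \
#     --container-network routable-pg:routable

# Developers then consume it like any other Docker network:
docker --tls network ls
docker --tls run -d --net routable nginx
```

Containers attached this way sit directly on the mapped port group, which is why the vSphere side can manage and inspect their traffic with existing VM networking tools.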
VIC is not complex, but it is very robust. Docker at enterprise-grade is complex. We use VIC to allow developers to consume vSphere capacity for containers and to manage these workloads with the same vSphere tooling our infrastructure operation admins are familiar with.
For example, because the networking provided to VIC container instances is traditional vSphere networking, they are managed the same as existing VM networks. We can provide true L7 firewall rules between each container with robust introspection into network flows. Because they are 1:1 with a VM, we can monitor container capacity demand with vRealize Operations. And the list goes on. This is an Operations benefit that AppDev and application owners should be happy to receive.
For more information, continue reading here.