In the coming series of posts, I will share some thoughts on Docker containers and how VMware has embraced and enhanced the benefits of container-based application delivery. This is a big topic, with many alleys and winding roads. I will try to focus on some of the basics.
What are containers and why use them? Containers are simply isolated (contained) instances of user space within an OS. Think of two VMs running on a hypervisor: each believes it is running directly, and alone, on the hardware. The hypervisor performs many functions to virtualize the physical hardware and hide the shared environment from the two VMs. A container engine running on a Linux host performs a similar trick, making each user space think it is the only one on an OS image.
This is important to understand. Hypervisors perform virtualization. Containers perform NO virtualization. Containers isolate code running in individual spaces. Containers on the same host rely on a single OS kernel. Scheduling and resource management are handled by that shared kernel; the container engine uses kernel features such as namespaces and cgroups to isolate each container and give it an equitable share of host resources.
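A quick way to see the single-kernel point for yourself (a sketch, assuming a Linux host; the Docker command is shown as a comment since it needs a running daemon and the public `alpine` image):

```shell
# Containers share the host kernel: `uname -r` reports the same kernel
# string whether it is run on the host or inside any container on that host.

# The host's kernel:
uname -r

# Inside a container (sketch -- requires a running Docker daemon):
#   docker run --rm alpine uname -r   # reports the same kernel version
```

Contrast this with a VM, where `uname -r` would report whatever kernel the guest OS booted.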
So the primary difference between a VM and a container is that the isolation occurs at a different layer. Why would we choose one approach over the other? Actually, it doesn’t have to be an either/or at all. In future posts within this series, I will explain why running containers in VMs is the best decision.
There is a definite benefit to OS isolation when it comes to application development and deployment. A developer writes her code and tests it on her machine. It runs fine, so she sends it to the test environment. Then things start to go wrong.
It won’t run on the same Linux host as another application it was written to integrate with, because her code depends on a different version of a library than the one the existing application requires.
This is only one example of a conflict that can occur when we mix code on a host. But it should be clear why a container or VM approach would prevent this from happening.
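This is exactly the problem a container image solves: the application ships with its own copy of its dependencies, pinned to the versions it was tested against. A minimal sketch of such an image definition (the base image, application, and pinning scheme here are hypothetical):

```dockerfile
# Hypothetical Python application packaged with its own dependencies.
FROM python:3.11-slim

WORKDIR /app

# Pin the exact library versions this app was developed against.
# Another app on the same host can pin different versions in its own
# image, and the two never conflict.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Each image carries its own user space, so two applications with incompatible library requirements can run side by side on one host.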
So why not just use a VM for each app and move on? Glad you asked. You could. But if you want to ship your code as a VM template, the receiver needs to be able to run that VM image. There are other reasons that can be debated, but that is the primary argument in favor of containers, and why Docker uses the shipping container analogy.
So, we use a container to package our applications with all required supporting files, we post them to a repository, IT creates a Linux host for us, we run a Docker engine on it and load the containers we want from the repository, and we’re set. Perhaps, but probably not.
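That workflow maps to a handful of Docker CLI commands. A sketch, assuming a hypothetical registry `registry.example.com` and application `myapp` (the docker commands are shown as comments since they need a running daemon and a real registry):

```shell
# Hypothetical coordinates -- substitute your own registry and image name.
REGISTRY="registry.example.com"
IMAGE="myapp"
TAG="1.0"
REF="${REGISTRY}/${IMAGE}:${TAG}"

# 1. Package the app and all supporting files into an image:
#      docker build -t "${REF}" .
# 2. Post it to the repository:
#      docker push "${REF}"
# 3. On the Linux host IT provisioned, pull and run it:
#      docker pull "${REF}"
#      docker run -d --name "${IMAGE}" "${REF}"

echo "${REF}"
```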
What happens when one physical Linux host isn’t enough? How do we consume clusters of physical hosts? How do we provide network connectivity between containers? How do we decide where to place containers, and where not to? How do we monitor container performance and security? These questions only scratch the surface of the “what if/how/when”.
For some of the questions above, this is where container orchestration comes into play. And if you’ve had any exposure to container solutions, you’re familiar with some of the popular options: Kubernetes (k8s), Swarm, and Mesos. I won’t go into each of these (there is plenty out there already), but we need to know what they do and how they do it, what they don’t do, and where we can add improvement.
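To make "what they do" a bit more concrete: an orchestrator consumes a declarative description of desired state and continuously works to realize it across a cluster of hosts, deciding placement and restarting failed containers. A minimal Kubernetes Deployment sketch (the image name and replica count are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # run three copies; the scheduler decides placement
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Note what this manifest does not address: storage, network policy, monitoring, and security all still need answers, which is the point of the next paragraph.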
And then there are the other components that need to be added to address the remaining questions above: a network overlay, for example, to allow communication between containers. All told, there are many moving parts that need to be brought together to achieve enterprise-class container adoption. This is where things get complex and where VMware adds tremendous value. In my next post, I’ll cover a few of the innovations we’ve introduced to make container adoption enterprise-ready.