Containers, VMs, and VMware – Containers, DevOps, Orchestration, and Challenges


In the first post of this series, I covered the basics of containers. In this post, I'll cover why containers are attractive in DevOps initiatives and some of the challenges organizations face when trying to leverage them for that purpose. This will lay the foundation for the next post, where I will cover VMware's vision and initiatives to address these challenges.

DevOps is the practice of tightly integrating application development with service delivery and operations for improved outcomes. Moving from legacy application development models (e.g. Waterfall) to Agile is a key component of this.

The Agile development model aims to build applications as subcomponents, using short sprints with continuous feedback from stakeholders and continuous testing. In doing so, we develop applications that are checked many times along the way and built from compartmentalized code bases. This significantly increases the likelihood that the final product meets users' needs, and it improves the organization's ability to alter code without breaking the entire application.

So, if we're going to accelerate delivery schedules (from perhaps six months to two weeks, with testing multiple times per day), and we're going to repeat this every two weeks in a continuous loop, we need to accelerate every process involved. Some of this is people process (for example, people simply need to be available more often than before). You also need to improve the processes that involve infrastructure, network, and storage engineers, QA testers and script writers, DR and HA architects, and the list goes on.

The processes of provisioning network, storage, and compute are a natural target for improvement: taking a largely manual set of processes and automating them.

If we can take the 'human hand' out of configuring network, storage, and compute, we improve both quality and time to delivery. This, as most readers who have stuck with me this far already know, is automation and orchestration. That much is simple to comprehend. But there is an additional benefit that derives from how this automation and orchestration is enabled. This is where infrastructure as code and RESTful APIs come in, and it's where the approach to container adoption for DevOps becomes more interesting.
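To make the infrastructure-as-code idea concrete, here is a minimal Python sketch: the desired environment is described declaratively as data, then rendered into the JSON body a RESTful provisioning endpoint might accept. Every field name and the endpoint path are hypothetical, not taken from any real product API; the point is only that the spec is data, so it can be versioned, reviewed, and resubmitted idempotently.

```python
import json

# A hypothetical desired-state spec for an application environment.
# All names and fields are illustrative, not tied to any real product.
desired_state = {
    "network": {"subnet": "10.0.1.0/24", "firewall": ["allow tcp/443"]},
    "storage": {"class": "ssd", "size_gb": 100},
    "compute": {"hosts": 3, "cpus_per_host": 4},
}

def render_request(spec):
    """Render the spec into the JSON body a RESTful provisioning
    endpoint (e.g. POST /api/v1/environments) might accept."""
    return json.dumps({"apiVersion": "v1", "spec": spec}, sort_keys=True)

body = render_request(desired_state)
# Because the spec is declarative, submitting the same body twice
# should be idempotent: the system converges to the described state
# rather than performing the same imperative steps again.
```

The design point is the declarative shape: the API consumer states *what* it wants, and the orchestration layer works out *how* to get there.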

As mentioned in the first post, portability of applications by means of targeting a standard, self-contained operating system image is a key benefit of containers. As containers evolved, developers realized they could target a container host in their own data center and deploy applications without having to wait for a host to be provisioned.

This seemed promising as a way to decrease time to delivery and the burden on operations. But it would never work without a way to run multiple hosts to meet capacity demand, and to automate the configuration and management of the distributed containers. Kubernetes (k8s) is an example of an orchestrator that takes care of this challenge. k8s uses a master node that manages many worker nodes. A developer targets the master node when publishing code; the master node determines which worker node each container should be placed on (scheduling), registers it with DNS, handles service discovery, enforces desired state, load balances, and so on.
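The scheduling step above can be sketched in a few lines of Python. This is a toy illustration of the decision only: pick the worker with the most free capacity that fits the request. The real k8s scheduler is far more sophisticated (resource requests and limits, affinity rules, taints and tolerations), and the node names and CPU figures here are invented.

```python
# Free CPU capacity per worker node (illustrative values).
workers = {"worker-1": 4.0, "worker-2": 2.5, "worker-3": 3.0}

def schedule(container_cpu, nodes):
    """Toy scheduler: return the node with the most free CPU that can
    fit the requested amount, or None if nothing fits. Deducts the
    request from the chosen node to record the placement."""
    candidates = {n: free for n, free in nodes.items() if free >= container_cpu}
    if not candidates:
        return None
    chosen = max(candidates, key=candidates.get)
    nodes[chosen] -= container_cpu
    return chosen

placement = schedule(1.0, workers)  # lands on the node with most headroom
```

In real k8s the same loop runs continuously against desired state, so if a worker dies, its containers are rescheduled onto the remaining nodes.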

Networking is a challenge as well. Containers need a way to address each other when they reside on the same host while avoiding conflicts in address space; they need a way to discover other containers and their addresses on other hosts; and we need to be able to enforce firewall rules between containers, and so on. Some of these challenges have been resolved with orchestration and container overlay networks such as Flannel and Calico, but some remain.
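A sketch of how address-space conflicts are avoided: overlay networks like Flannel carve one cluster-wide CIDR into disjoint per-host subnets, so containers on different hosts can never be handed overlapping IPs. The Python below uses the standard-library `ipaddress` module; the cluster CIDR and host names are just example values.

```python
import ipaddress

# One cluster-wide address block, split into per-host slices.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")

def allocate_subnets(hosts, prefix=24):
    """Assign each host its own disjoint /24 slice of the cluster CIDR,
    so container IPs allocated on different hosts cannot collide."""
    slices = cluster_cidr.subnets(new_prefix=prefix)
    return {host: next(slices) for host in hosts}

subnets = allocate_subnets(["host-a", "host-b", "host-c"])
# Each host hands out container IPs only from its own slice.
# Cross-host traffic is then encapsulated (e.g. Flannel's VXLAN/UDP
# backends) or routed natively (Calico distributes routes via BGP).
```

Discovery and firewalling between containers are separate problems, handled by the orchestrator (service discovery, network policy) rather than by the subnet scheme itself.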

A glaring challenge is provisioning the initial container infrastructure and services. Container hosts can run on bare metal (non-virtualized) or as VMs. On bare metal, they do not maximize use of purchased capacity and cannot share resources with other types of workloads. More on that in the next post. For now, it is important to understand that speed to delivery and exact parity between environments are critical attributes of a successful DevOps practice. There are a lot of moving parts in a container service, and implementing and managing them can become a bottleneck.

That's it for now; I have to get back to my day job. In the next post, I will dive into VMware's current and future technologies that address these challenges.