Just back from our annual kickoff training. It’s an opportunity to connect with teams from all over the world, product managers, and senior leadership to learn about what’s relevant to our customers and industry across the globe.
I invested a fair amount of time focusing on the ever-evolving “Cloud” topic. I define Cloud as IT as a Service, irrespective of the location and ownership of the physical infrastructure: providing capacity to the business, while assuring governance and meeting an agreed-upon SLA, faster, more economically, and more reliably than business units can achieve on their own. Granted, there are application architectures that would expand this definition.
The facets of meeting SLA, assuring reliability & governance, and achieving economic efficiency are very important. These details generate the majority of effort in connecting an enterprise data center to a public cloud.
The move toward public/hybrid cloud has been a paradigm shift playing out over many years, from many perspectives. The need to understand capacity requirements is, in my opinion, the number one overlooked aspect, and it has the largest impact on long-term cost benefit.
Many (most) organizations tend to initially focus on which application would be better served from the cloud. This is understandable as most organizations still do not consider the capacity they manage internally as a cloud for their business units. To them, Cloud means an entirely different construct, owned and operated by a third party, and with an entirely different architecture than their own.
They are right in many cases. Amazon Web Services has application-specific requirements that differ from a typical internal implementation. Application architectures are also evolving: with the advent of containers (essentially OS-level virtualization) and microservices, capacity needs to be modeled differently. But this new architecture will eventually become the norm within the internal data center as well. These use cases do require consideration of application architecture.
In other cases, the application-to-capacity architecture is the same on either side. For example, VMware has brought to market its vCloud Air Network with partners like IBM, and soon AWS. In this case the underlying constructs are identical on both sides, which eliminates the need to consider an application’s architecture when consuming capacity.
In either of the above cases, we still need to consider how an application’s architecture and behavior will respond to differences in the network conduit.
The next focal point for most organizations tends to be automation and service catalog integration. This begins a long journey, with lots of meetings, and lots of opinions (Layer 8). More on this in another post.
With all of the above considered, if the IT organization doesn’t clearly understand the capacity required to meet SLA, all the benefit of agility, economy, and governance achieved will be outweighed by the cost of over-provisioning. Understanding how an application consumes capacity in a given cloud needs to be a priority for organizations. This knowledge is the only way we can provision correctly to achieve the best performance at the right cost.
This might be a familiar routine in your organization: the application team tells the IT business unit, “software vendor ‘x’ says I need this much ‘y’,” so IT provisions y + 20% to be sure, when y – 20% would have been the right level. The result is underutilized capacity and poorer performance (i.e., CapEx and OpEx down the drain).
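To make the cost of that guesswork concrete, here is a minimal sketch of the arithmetic. The vendor request size and per-vCPU rate are hypothetical figures chosen for illustration:

```python
# Hypothetical sizing exercise: vendor asks for y vCPUs, IT pads by 20%,
# while observed demand shows y - 20% would have met the SLA.
vendor_request_vcpus = 32                   # "software vendor x says I need this much y"
provisioned = vendor_request_vcpus * 1.20   # IT adds 20% "to be sure"
right_sized = vendor_request_vcpus * 0.80   # what utilization analysis would have shown

cost_per_vcpu_month = 25.0                  # assumed blended rate, USD

# The gap between the padded allocation and the right-sized one is pure overspend.
overspend = (provisioned - right_sized) * cost_per_vcpu_month
print(f"Provisioned: {provisioned:.0f} vCPUs, right-sized: {right_sized:.0f} vCPUs")
print(f"Monthly overspend: ${overspend:.2f}")
```

A 40% gap on a single VM looks small; multiplied across hundreds of workloads, it is the cost that outweighs the agility benefit described above.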
The first step is to leverage analytics within your private cloud (i.e. your data center) to understand if there is contention between workloads, how those workloads trend on demand and consumption, and how they should be configured on a given resource pool to operate most efficiently (Cost to SLA).
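One common way to turn that utilization data into a configuration is to size against a high percentile of observed demand plus headroom, rather than the vendor's worst-case guess. This is a sketch of the idea, not any particular vendor's analytics; the percentile, headroom, and sample values are assumptions:

```python
def right_size(cpu_samples_pct, allocated_vcpus, target_percentile=95, headroom=1.2):
    """Recommend a vCPU allocation from historical CPU utilization samples.

    Sizes to the Nth-percentile observed demand times a headroom multiplier,
    so the workload keeps its SLA margin without idle over-allocation.
    """
    samples = sorted(cpu_samples_pct)
    idx = min(len(samples) - 1, int(len(samples) * target_percentile / 100))
    peak_pct = samples[idx]                          # demand at the target percentile
    demand_vcpus = allocated_vcpus * peak_pct / 100  # convert % busy to vCPUs used
    return max(1, round(demand_vcpus * headroom))

# A workload allocated 16 vCPUs that rarely exceeds 40% utilization:
samples = [22, 25, 30, 28, 35, 40, 33, 27, 31, 38]
print(right_size(samples, allocated_vcpus=16))  # -> 8
```

The same trend data also reveals contention: if two workloads peak on the same resource pool at the same time, their combined percentile demand, not their individual averages, is what the pool must cover.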
This task is least difficult within your own data center. You own the hardware, the hypervisor, the storage, and the network. You can build a profile for a workload in short order that allows for right-sizing. However, that profile won’t necessarily hold in a public cloud (e.g., when the cloud provider uses a different underlying architecture).
With vCloud Air Network partners, you can trust that the right-sized VM configuration you have internally will also be the right size in the public cloud. This is a tremendous benefit of the model. In other cases (e.g., a native AMI on AWS), we need additional tools to compare workload performance from one cloud to another. Not an impossible task, just additional effort.
Simply put: Make workload utilization analysis a priority in your cloud initiative. Deliver services from any cloud with the right performance at the right cost.