Connecting K8s Pods – Services – clusterIP

Photo by Tj Holowaychuk on Unsplash

In the previous two posts, I went over the basic networking components of K8s and how a packet flows within the cluster. I discussed the node and pod CIDRs, but purposely avoided the clusterIP/Service CIDR (--service-cluster-ip-range). The reason: we need to cover the concepts of endpoints and services first.

While all pods in a cluster can communicate with each other via their unique pod IP addresses, addressing pods directly isn’t ideal for pod-to-pod communication. As we know, a pod is ephemeral and potentially short lived. There is no guarantee the IP address will remain the same across recreation; in practice, a recreated pod will almost always receive a new one. K8s does not automatically create DNS records for pods and their corresponding IP addresses because DNS and DNS resolvers are not designed to deal with frequent changes to an A record. Finally, we often replicate pods, which complicates how a requesting pod would address a ReplicaSet of identical pods. Enter the service object.

A service provides an IP address that load balances to a back-end set of pod endpoint addresses. The address is cluster-wide and remains stable for as long as the service exists, irrespective of pod life cycles. A service address is created as an A record in K8s DNS and enables requests to connect to a back-end ReplicaSet of pods (i.e. endpoints) without needing to be concerned with the endpoints’ addresses. By directing pods and external ingress traffic to a service address, we avoid the issue of pod/endpoint IPs changing. K8s typically utilizes NAT and iptables to redirect and load balance a service address to the endpoint pod address(es). The process that orchestrates all of this is kube-proxy, which runs as a DaemonSet in the cluster (a DaemonSet is a controller that ensures a copy of a pod runs on every node). If a pod dies and is recreated with a new IP address, kube-proxy updates the service’s endpoints accordingly.
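As a quick illustration, the service-to-endpoint relationship is easy to inspect with kubectl. This is just a sketch; the service name web and the default namespace are hypothetical:

    # List a service and the endpoints kube-proxy load balances it to.
    # "web" in the "default" namespace is a hypothetical service for illustration.
    kubectl get service web -n default
    kubectl get endpoints web -n default

    # The same relationship shows up in the service description, where the
    # Endpoints field lists the current pod IP:port pairs behind the service.
    kubectl describe service web -n default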

So, if you want something to connect to a pod, you expose it with a service and connect to the service IP via the service DNS name. There are three service types commonly in use: clusterIP, NodePort, and LoadBalancer. HostNetwork and HostPort are two other exposure methods you might encounter, but they are not recommended unless absolutely required (I’ll cover those two in another post).

ClusterIP is the base form of a service in K8s; any and all services will have a clusterIP assigned. It is only advertised within the cluster (i.e. only nodes and pods in the cluster can reach the address) and provides a stable IP for other pods in the cluster to address. For any pod-to-pod communication requirement, we should expose a pod/deployment/ReplicaSet via a clusterIP.
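A minimal sketch of such a service, with hypothetical names (not taken from any manifest in this post): the selector ties the service to pods carrying a matching label, and the type defaults to ClusterIP if omitted.

    # web-svc.yaml - a hypothetical ClusterIP service definition
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: ClusterIP          # the default when no type is given
      selector:
        app: web               # load balance to pods labeled app=web
      ports:
      - port: 80               # port exposed on the service IP
        targetPort: 8080       # port the pods actually listen on

Applying it with kubectl apply -f web-svc.yaml creates the service and an A record of the form web.<namespace>.svc.cluster.local in cluster DNS.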

Here’s an example of creating a clusterIP for a ReplicaSet and then accessing it via its service address. You’ll see that I’m creating the ReplicaSet via the Deployment controller object. A Deployment is the preferred way to create a ReplicaSet, due to the controller’s ability to manage updates and rollbacks of ReplicaSets. First, I’ll create the deployment, access it via one of the pod addresses, and demonstrate why accessing via pod IP is not advisable. Then I’ll expose it via a clusterIP, access it via the service address, and demonstrate how that abstraction prevents issues created by pod life cycle events.
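The commands below are a rough reconstruction of those steps, assuming a simple nginx deployment; the names are illustrative only, not the exact objects from my lab:

    # 1. Create a deployment (which in turn creates a ReplicaSet and pods).
    kubectl create deployment web --image=nginx
    kubectl scale deployment web --replicas=3

    # 2. Note a pod IP, then curl it directly from another pod in the cluster.
    kubectl get pods -l app=web -o wide

    # 3. Delete that pod; the ReplicaSet recreates it with a *different* IP,
    #    so anything pointed at the old pod IP now breaks.
    kubectl delete pod <pod-name>
    kubectl get pods -l app=web -o wide

    # 4. Expose the deployment as a ClusterIP service instead.
    kubectl expose deployment web --port=80 --target-port=80

    # 5. The service IP and DNS name stay stable across pod recreation.
    kubectl get service web
    # From any pod in the cluster:
    #   curl http://web.default.svc.cluster.local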

Ok, so that’s the first service type. We know:

  • Services front-end and load balance pods with a stable IP address.
  • A pod becomes an endpoint of a service when it is load balanced behind that service’s IP.
  • By default, K8s uses a process called kube-proxy, which programs iptables/netfilter rules and serves as the cluster’s load balancer to endpoints (see the sketch after this list).
  • The clusterIP service type is for pod-to-pod communication and does not expose its address outside the cluster.
  • If a pod only needs to accept requests from within the cluster, it should be exposed as a ClusterIP type of service.
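To see the machinery kube-proxy programs, you can inspect the NAT table on any node. A rough sketch, assuming kube-proxy is running in its default iptables mode (the chain names are the ones kube-proxy creates; the service name is hypothetical):

    # On a worker node: service traffic first hits the KUBE-SERVICES chain...
    sudo iptables -t nat -L KUBE-SERVICES -n | grep web

    # ...which jumps to a KUBE-SVC-* chain for the service, and from there to
    # KUBE-SEP-* (service endpoint) chains that DNAT to individual pod IPs.
    sudo iptables -t nat -L -n | grep KUBE-SEP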

Next up, I’ll cover NodePort, LoadBalancer, HostPort, and HostNetwork as methods to expose a pod to the outside world.

Kubernetes Networking – Nodes and Pods – Sample Packet Walk

In my previous post, I covered the basics of Kubernetes networking. I thought it would be useful to follow up with a visual walk-through of a packet’s path from a pod to a remote node. In this example, I’m using K8s deployed with one master, three worker nodes, and NSX-T networking at both the node and pod levels.

This cluster has three worker nodes with IP addresses 172.15.0.3 – 172.15.0.5; node IP addressing is configured using the 172.15.0.0/24 CIDR.
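The node list can be pulled with a single kubectl command (a quick sketch; the output will reflect your own cluster):

    # List the worker nodes and their internal IP addresses (172.15.0.3-5 here).
    kubectl get nodes -o wide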

Next, let’s look at one of the pods I have running. This is the kube-dns pod. It’s running in the kube-system namespace on worker node 172.15.0.3 (one of the worker nodes listed above), has been assigned the IP address 172.16.1.2, and hosts a container called kubedns. Continue reading

Kubernetes Networking – Nodes and Pods

I’ve been procrastinating on preparing for my CNCF Certified Kubernetes Administrator certification. I figured it was time to get to it, and thought a series of blog posts on the various topics I’m digging into would be of interest to others.

Beginning with K8s networking, I’ll go into the details of the various layers of networking, how they come together, and how K8s leverages them to provide us with functioning container services.

This is a big topic, so I’ve decided to take a multi-post approach with it. I’ll start with basic networking of nodes and pods. I’ll cover network policies in another post. And then I’ll cover services, deployments, service discovery, etc. in a final post. Continue reading

VMworld K8s Announcements

If you work with VMware tech at all, you’ve almost certainly heard the buzz around the K8s announcements by now. I’ve been waiting a long time to be able to discuss them openly. Gladly, the day has finally arrived.

Tanzu (pronounced tahn-zu) is a brand name announcement. Not too exciting but conveys a direction to bring the various cloud native product lines together. Moving forward, we’ll see all products related to cloud native branded as Tanzu ‘something’. Continue reading

Automating Kubernetes Operations with Enterprise PKS

In building toward a K8s predictive auto-scale capability, I’ve built some simple constructs with virtual machines running on vSphere and kubeadm-installed K8s. If you’ve followed the series, you know I reached the anticipated point where manually installed K8s became too inefficient to operate, and I decided to implement VMware PKS to address that pain point.

Over the past year, many blogs have posted instructions on how to install and configure PKS. I suggest following the VMware and/or Pivotal online documentation. PKS is rapidly developed and released, and it doesn’t take long for a blog post on installation to become out of date. So I won’t post installation instructions here. Continue reading

Predictive Auto-scaling of vSphere VMs and Container Services – Shifting Gears

In the previous posts, I detailed the two main functions for performing the auto-scaling procedure: one to scale the VM-backed K8s cluster across additional physical hosts, and one to scale the K8s pod deployment across the added hosts/nodes. The predictive trigger for these functions was to be the focus of this post.

As time passed and work took me away from this project, I no longer have my former lab setup to test with. I could rebuild the lab and finish the final bit of code that predicts a pattern in past CPU demand and calls the functions, but for my sanity’s sake I’m going to pass on that for now and move on to the next logical progression in this series. Continue reading

Predictive Auto-scaling of vSphere VMs and Container Services – The Plumbing (2 of 2)

In the previous post, I wrote the function to scale a K8s deployment with a REST API call. In this post, I’ll write the other function required to codify the scaling of a K8s cluster across physical resources.

The function here will power on a K8s node VM that resides on an ESXi host, making it available to the K8s cluster as additional compute. I will use the vSphere REST API and then combine this with the previous function to complete the scale-out operation. Continue reading
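As a rough sketch of what that vSphere REST call can look like (placeholder vCenter address, credentials, and VM ID; not the actual code from this series), the API exposes a session endpoint and a power-on action:

    # Authenticate to vCenter and capture the session ID (placeholder host/creds).
    curl -sk -X POST -u 'administrator@vsphere.local:password' \
      https://vcenter.example.com/rest/com/vmware/cis/session
    # -> {"value":"<session-id>"}

    # Power on the K8s node VM (vm-123 is a placeholder managed object ID).
    curl -sk -X POST \
      -H "vmware-api-session-id: <session-id>" \
      https://vcenter.example.com/rest/vcenter/vm/vm-123/power/start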

Predictive Auto-scaling of vSphere VMs and Container Services – The Plumbing (1 of 2)

Photo by chuttersnap on Unsplash

To be pragmatic with this exercise, I’ll begin with a focus on the plumbing required to scale both components in parallel. I’ll follow up in another post with predictive automation based on some primitive analytics.

As discussed in the first post of this series, I intend to rely on APIs exposed by VMware and the Kubernetes/Docker distributions to enable predictive orchestration of scaling. For my proof of concept, I will focus on scale-out and scale-in use cases. Scale-up and down will be excluded. Continue reading

Installing a VM Backed K8s 1.10 Cluster with Kubeadm

This post is the second in a series that considers the topic of predictive auto-scaling of vSphere VMs and containerized services. The purpose of this post is to describe how to build the first components: the installation and configuration of Docker and Kubernetes on VMs. I will be using Docker 1.13, Kubernetes 1.10, and CentOS 7 x64 for the implementation.

You will need either a vSphere environment or VMware Workstation/Fusion, CentOS install media, and an internet connection. There are a handful of ways to implement a test-bed Kubernetes cluster: running on a cloud provider CaaS platform, orchestrating a Docker-in-Docker deployment, building manually from scratch, or building from scratch with a helper tool. Continue reading
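For reference, a minimal sketch of the helper-tool route (kubeadm), assuming CentOS 7 hosts with Docker and the kubeadm/kubelet/kubectl packages already installed; the pod CIDR and placeholders are illustrative:

    # On the master: initialize the control plane with a pod network CIDR.
    sudo kubeadm init --pod-network-cidr=172.16.0.0/16

    # Make kubectl usable for the current user.
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # On each worker: join the cluster using the token printed by kubeadm init.
    sudo kubeadm join <master-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>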

Predictive Auto-scaling of vSphere VMs and Container Services

Photo by Samuel Zeller on Unsplash

Reactive and predictive auto-scaling have existed for some time already. Predictive auto-scaling has been custom developed and leveraged by large service providers for almost a decade.

To define a few terms: scaling is the process of adjusting resources to satisfactorily serve demand. Auto-scaling replaces manual steps with automated IT processes that react to current-state metrics. Predictive auto-scaling relies on metrics observed over time to predict an upcoming change in demand and auto-scale ahead of actual demand. This is advantageous because the resources required to scale are added before the services demanding more must contend for them. Continue reading