In the previous post, I covered the concept of a K8s Service and the ClusterIP service. ClusterIP creates a stable IP address with a DNS A record for the service, load balances requests across the service's endpoint pod replicas (endpoints), and is only reachable from within the cluster. In this post, I'll cover the remaining service kinds: headless, NodePort, and LoadBalancer. I'll also cover non-service methods of exposing a pod: the Ingress Controller and hostNetwork/hostPort.
The headless service, like ClusterIP, is an in-cluster-only addressing component. I'll go into it in more detail when I cover StatefulSet in a future post. A headless service is created by setting the clusterIP field to None in a Service spec.
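As a minimal sketch, a headless Service for a hypothetical set of nginx pods might look like this (the names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless   # illustrative name
spec:
  clusterIP: None        # None makes this a headless service
  selector:
    app: nginx           # assumes pods labeled app=nginx
  ports:
  - port: 80
    targetPort: 80
```

The only difference from an ordinary ClusterIP Service is `clusterIP: None`; everything else (selector, ports) works the same way.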
Instead of a single A record pointing at a cluster IP, DNS for a headless service returns an A record for each ready pod endpoint (and, for named ports, an SRV record per endpoint). Name resolution queries against the headless service name therefore return the set of endpoint pod IPs, typically in rotating order. No load balancing is configured; clients connect directly to pod IPs.
Deployment and ClusterIP resources address the needs of stateless applications well. But with stateful applications (e.g., applications that mount storage and expect changes to persist across restarts, or applications that are replicated but need to run in a quorum with similar peers), you do not want to use a Deployment, due to limitations in how a Deployment is scaled, how its storage is allocated, and how its service will randomly bounce you from one pod to another.
For now, the key concept to understand is that a ClusterIP service uses a single DNS A record to provide the IP address of the service, and that service dynamically/randomly routes requests to the back-end pods (endpoints) where the application is running. A headless service configures no load balancing, resolves to per-endpoint A records (with SRV records for named ports), and, when combined with a StatefulSet, allows you to explicitly connect to specific endpoints.
What about the outside world? For the majority of our needs to expose a pod outside the cluster, we will rely on LoadBalancer and/or NodePort. (An Ingress Controller, which is not a service type, can be used as well, but it typically leverages NodePort on the back end.)
A NodePort service tells Kubernetes to select an unused port number between 30000 and 32767 and serve it at the IP address of every node in the cluster. So, if we created a NodePort service for our nginx pod, we'd define the target port as 80 and K8s would assign a node port like 31234. If we curl'd a worker node's IP at port 31234, we'd retrieve the index.html. We can now access the pod's service from any IP address that can reach a node IP address. But this isn't that great overall: ports are randomly assigned and quickly become next to impossible to keep straight.
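The nginx example above could be sketched as the following manifest (names are illustrative; the nodePort line shows where you could pin a port instead of letting K8s pick one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx           # assumes pods labeled app=nginx
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 80       # container port on the pod
    # nodePort: 31234    # optional; omit to let K8s pick from 30000-32767
```

Once applied, `curl http://<any-node-ip>:<assigned-node-port>/` would reach the nginx pod.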
This is where a LoadBalancer comes in. In K8s, a LoadBalancer service creates a NodePort, assigns a routable IP address on a predefined exposed port, maps that exposed IP and port to the NodePort, and finally configures a forwarding rule on an external (L4) load balancer. With this, we can expose our nginx on port 80 via a predefined routable IP, then access it via that IP and a known port (e.g., port 80).
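A sketch of the manifest (illustrative names; the external IP comes from whatever load balancer integration your cluster has):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb         # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: nginx           # assumes pods labeled app=nginx
  ports:
  - port: 80             # exposed on the load balancer's routable IP
    targetPort: 80       # container port on the pod
```

After the provider assigns an address, it appears under EXTERNAL-IP in `kubectl get svc nginx-lb`, and clients reach nginx at that IP on port 80.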
Alternatively, you could expose a pod via NodePort and then configure your own external load balancer to front-end it. The automated method available via the LoadBalancer kind requires a load balancer integration that monitors and interacts with the K8s apiserver to auto-create the networking on the load balancer. This is typically only found in cloud provider K8s offerings like EKS, GKE, etc., or in some on-prem packaged K8s like VMware Enterprise PKS or Red Hat OpenShift.
Another option for exposing pods to the outside world is an Ingress Controller. Unlike the LoadBalancer service's 1:1 relationship between service and unique routable IP, an Ingress Controller serves at a single routable IP address and matches patterns within the incoming request to dynamically route traffic to the correct service. Ingress Controllers are layer 7 load balancers, most often implemented as HTTP/HTTPS proxies.
It's fairly straightforward to implement an Ingress Controller in your cluster. There are a handful of Ingress Controllers to select from in the open source community; a list of popular ones can be found at https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/.
Ingress is not a service kind; it is its own kind in K8s. K8s comes ready for programming by ingress controllers written to interface with it, so there is no need to develop your own automation as with the external load balancer method. We use ingress rules to map requests to services. For example, if we had app1 and app2 services, we could write a rule that says a request arriving at the Ingress Controller IP for www.mycorp.lab/app1 is routed to service app1, while a request for www.mycorp.lab/app2 is routed to service app2.
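The app1/app2 example above could be expressed as an Ingress resource roughly like this (service names and the host are the illustrative ones from the text; assumes both services listen on port 80):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mycorp-ingress   # illustrative name
spec:
  rules:
  - host: www.mycorp.lab
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1   # existing Service for app1
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2   # existing Service for app2
            port:
              number: 80
```

The Ingress Controller watches for resources like this and reprograms its proxy configuration accordingly; no per-app routable IP is consumed.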
An Ingress Controller simplifies routable IP management, but is limited to the L7 protocols it supports. A LoadBalancer requires additional IP address management, but is more flexible across many different service types since it operates at L4.
So for in-cluster communication we rely on the ClusterIP or headless service kinds; for external access we rely on the LoadBalancer and/or NodePort service kinds, or the Ingress kind.
There are two last ways we can expose a pod IP address outside of the cluster, hostPort and hostNetwork.
hostPort and hostNetwork are akin to constructs used with standalone Docker host container networking. These will most likely be encountered with a K8s DaemonSet (as mentioned in a previous post, a DaemonSet is simply a pod that is deployed by the scheduler to every worker node). If we prefer a pod be exposed at the worker node's IP and a predetermined port, we might consider this option. Outside of that, kubernetes.io recommends avoiding their use.
Kubernetes.io on hostPort and hostNetwork:
- Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
- If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
- If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
- Avoid using hostNetwork, for the same reasons as hostPort.
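For completeness, a hostPort looks like this in a pod spec (a hypothetical nginx pod; the host port number is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport   # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080     # pod reachable at <node IP>:8080
```

hostNetwork is the blunter tool: setting `hostNetwork: true` at the pod spec level puts the pod directly on the node's network namespace, so every port the pod listens on is a port on the node.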
So that's all the service kinds, plus three additional ways to expose a pod to the outside world. We've covered ClusterIP, headless, NodePort, LoadBalancer, Ingress Controller, hostNetwork, and hostPort. ClusterIP and headless are methods of managing intra-cluster addressing and serving; the rest are methods to expose pods to the outside world.
In the next post, I'll cover StatefulSet and the details of running stateful applications on Kubernetes (including a revisit of the headless service kind).