Kubernetes Services: ClusterIP | NodePort | Load Balancer | Multi-Port | Headless
Managing and Scaling Applications with Kubernetes Services
Prerequisite
To fully understand the concept of Kubernetes services, it is recommended that you have a basic understanding of containerization and Kubernetes architecture. If you are new to Kubernetes, you can read my previous article: Get Hands-On with Kubernetes: Running it Locally with Minikube
Introduction
In a Kubernetes cluster, pods are the smallest deployable units that run containerized applications. It is possible to access pods directly using their IP addresses, but this approach is not reliable because pods are frequently destroyed and recreated. Each new pod is assigned a new IP address, so any address you recorded quickly becomes stale, which makes it difficult to manage access to the pods over time. To solve this problem, we need a way to reliably reach all of these pods through a single, stable IP address.
A Service is an abstraction layer that provides a stable IP address and DNS name for the pods behind a deployment. Services allow you to access a group of pods through a single, static IP address and DNS name, even as the pods are destroyed and recreated frequently.
In this article, we will explore the concept of Kubernetes service and how it can be used to reliably access the pods associated with a deployment, even as they are destroyed and recreated.
Features that Kubernetes Service offers
- Stable IP address: A Service provides a stable IP address in front of a deployment, so clients can always connect through that single address regardless of which pods are currently running.
- Load balancing: The Service distributes incoming traffic across the pods behind it. If a pod is not responding, requests are automatically routed to the other pods so that no single pod is overwhelmed.
- Loose coupling: Clients make a single request to the Service instead of calling individual pods; the Service takes care of routing each request to an appropriate pod. This keeps clients decoupled from the pod lifecycle and makes the deployment more flexible and scalable.
Types of services in Kubernetes
ClusterIP
This is the default type of Kubernetes service. If you don't specify a type when creating a Service, Kubernetes automatically creates a ClusterIP service.
Let's create a deployment with 4 replicas using the following manifest file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pod
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myApp
  template:
    metadata:
      labels:
        app: myApp
    spec:
      containers:
        - image: nginx
          name: nginx-container
          ports:
            - containerPort: 80
$ kubectl create -f nginx.yaml
deployment.apps/nginx-pod created
Now, let's create a service using the following configuration.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 8083
      targetPort: 80
  selector:
    app: myApp
$ kubectl create -f nginx-svc.yaml
service/nginx-service created
Check the newly created service by using kubectl get svc
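The output should look roughly like this (the ClusterIP value will differ in your cluster; the one below matches the address used in the rest of this article):

$ kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP    20m
nginx-service   ClusterIP   10.111.94.38   <none>        8083/TCP   1m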
Since we didn't specify the type of service, Kubernetes automatically created a ClusterIP service and assigned it a stable IP address, 10.111.94.38.
In the service manifest, I have specified both port and targetPort. The port (8083) is the port on which the service itself listens for requests, while the targetPort (80) is the port the application is listening on inside the pods. Remember, a ClusterIP service cannot be accessed from outside the cluster. If you run curl 10.111.94.38:8083 from your local machine, it will not work.
But this service is accessible from any pod inside the cluster. Let's look at the pods' IPs and get into one of them.
- List the pods and their IPs
$ kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
nginx-pod-58594f9dd9-62r4m   1/1     Running   0          16m   10.244.0.82   minikube   <none>           <none>
nginx-pod-58594f9dd9-fc2j5   1/1     Running   0          16m   10.244.0.80   minikube   <none>           <none>
nginx-pod-58594f9dd9-kdhcz   1/1     Running   0          16m   10.244.0.81   minikube   <none>           <none>
nginx-pod-58594f9dd9-r5qv2   1/1     Running   0          16m   10.244.0.83   minikube   <none>           <none>
- Get into any one of the pods
$ kubectl exec -it <POD_NAME> -- bash
root@nginx-pod-58594f9dd9-62r4m:/#
- Call the service using either its IP address or its DNS name.
curl 10.111.94.38:8083
# OR
curl nginx-service:8083
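You should get back the default nginx welcome page, which starts roughly like this:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...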
Now you can see the response from nginx. Here 8083 is the port on which the service is receiving requests.
Port Forwarding
Kubernetes ClusterIP services are not accessible from outside the cluster; however, you can forward a port to reach the service from your local machine while debugging.
- Use kubectl port-forward: for example, the ClusterIP service is listening on port 8083, and we can forward it to port 8888 on our local machine.
$ kubectl port-forward svc/nginx-service 8888:8083
Open your browser and visit the forwarded port at localhost:8888.
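If you prefer the terminal, you can verify the forward with curl while the port-forward command above is still running; it should return the same nginx welcome page:

$ curl localhost:8888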
How does ClusterIP figure out which pods to route the traffic to?
When we created the service, we specified a label selector. Here, I instructed K8s to route the incoming traffic to pods labelled app: myApp.
When a client sends a request to the ClusterIP service, the service looks at the label selector you specified and forwards the request to one of the pods that match it. To make sure that each pod gets a fair share of traffic, the service spreads requests across the matching pods, typically in a round-robin fashion (or randomly, depending on the kube-proxy mode). So if you have three pods that match the label selector, successive requests are distributed across all three rather than always hitting the same pod.
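You can see exactly which pods the service has matched by inspecting its Endpoints object. The output should look roughly like this, with the addresses corresponding to the pod IPs listed earlier and the targetPort 80:

$ kubectl get endpoints nginx-service
NAME            ENDPOINTS                                                      AGE
nginx-service   10.244.0.80:80,10.244.0.81:80,10.244.0.82:80,10.244.0.83:80   16m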
NodePort
This service type provides a way to expose a service on a specific port on each worker node in the cluster. This allows you to access the service from outside the cluster using the IP address of any worker node.
Now add the type: NodePort field under the spec section of the nginx-svc.yaml file we created earlier. Here is the modified version.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - port: 8083
      targetPort: 80
  selector:
    app: myApp
Choosing your node port
When you create a NodePort service in Kubernetes, you have the option to specify the port number that the service should listen on for incoming traffic from outside the cluster. If you don't specify a port number, the Kubernetes control plane will automatically assign a random port number between 30000 and 32767. Otherwise, you can set nodePort: 31212 under the ports entry.
Here is the modified Service configuration with nodePort.
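A sketch of the full spec with the example value from above (31212 is just an illustration; any free port in the 30000-32767 range works):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
    - port: 8083
      targetPort: 80
      nodePort: 31212
  selector:
    app: myApp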
Now let's use the node's IP address and the node port to access the pods through the NodePort service.
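With minikube, you can get the node's IP with minikube ip and then hit the node port directly (assuming the example nodePort 31212 from the sketch above; depending on your minikube driver, you may need minikube service nginx-service instead):

$ curl $(minikube ip):31212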
Load Balancer
Both NodePort and LoadBalancer services can be used to route external traffic to Kubernetes pods, but they serve different purposes. A NodePort service exposes a port on each worker node in the cluster and routes traffic from that port to the pods. A LoadBalancer service builds on top of this and asks the underlying platform (typically a cloud provider) to provision an external load balancer, which can offer more advanced traffic control such as SSL termination, firewalls, connection throttling, and access controls.
Create LoadBalancer Service
All you have to do is change the type: NodePort
to type: LoadBalancer
. Then apply it using kubectl apply -f <FILE_NAME>
and list the services to see the changes.
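On minikube there is no cloud provider to hand out an external IP, so the EXTERNAL-IP column will stay in <pending> unless you run minikube tunnel in a separate terminal. The output should look roughly like this:

$ kubectl get svc nginx-service
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
nginx-service   LoadBalancer   10.111.94.38   <pending>     8083:31212/TCP   30m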
Now again use curl <NODE_IP>:<NODE_PORT> to access the pods. Before doing that, let's do the following things step by step to simply test the load balancing:
- List all the pods using kubectl get pods
- Follow the logs of each pod in a separate terminal window using kubectl logs <POD_NAME> -f. I have 4 replicas, hence I will open 4 terminal windows.
- Now open the service at <NODE_IP>:<NODE_PORT> in the browser and keep refreshing it using the keyboard shortcut Ctrl+R so that you can send multiple requests within a short period (or use the small loop shown below). Then watch the terminals: you will see the requests being spread across the pods, and if one pod is unable to respond, traffic is automatically routed to the other pods. This helps to ensure that the service remains available and responsive, even if one or more of the pods encounter issues.
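A quick-and-dirty alternative to refreshing the browser by hand is a small shell loop (a sketch; substitute your node IP and node port, and adjust the request count as needed):

$ for i in $(seq 1 50); do curl -s <NODE_IP>:<NODE_PORT> > /dev/null; done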
Alternatively, you can use Grafana k6 for better load testing.
Multi-Port Service
When more than one container is running inside a single pod, define a multi-port service like this to expose more than one port. Note that when a service exposes several ports, each port entry must be given a name.
apiVersion: v1
kind: Service
metadata:
  name: multiport-service
spec:
  type: LoadBalancer
  ports:
    - name: proxy
      port: 8083
      targetPort: 80
    - name: application
      port: 8084
      targetPort: 8000
  selector:
    app: myApp
Headless Service
Imagine a client wants to communicate directly with a specific pod, bypassing the load balancing of a regular ClusterIP service. In that case, we can create a headless service by setting clusterIP: None in the service spec.
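A minimal sketch of such a headless service for the same deployment (the name headless-service is just an example):

apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  clusterIP: None
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: myApp

Instead of a single ClusterIP, a DNS lookup of this service returns the IP addresses of the individual pods, so a client can pick one specific pod to talk to.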
Headless services are commonly used with stateful applications, where each pod has its own unique identity and you need to talk to one specific pod to get its data.
Conclusion
In conclusion, Kubernetes provides different types of services, each serving a specific use case. Services in Kubernetes provide a way to expose and manage a set of pods and allow clients to reach those pods through a stable network address.
If you enjoyed reading this article, please give it a like and consider sharing it with your colleagues and friends on social media. Additionally, you can follow me on Twitter and subscribe to my Newsletter for more updates. Thank you for reading!