Introduction
Kubernetes is a container orchestration platform that runs application containers inside pods, the smallest deployable objects in Kubernetes. These pods need to be able to communicate with each other, both within the Kubernetes cluster and with external users outside it. Kubernetes uses Services to let users and other applications connect to the running instances of an application (the containers inside pods); a Service creates an entry gateway for incoming traffic into the pods. Services also ensure loose coupling between the microservices in our cluster. In this blog we will discuss the different types of Services along with their practical implementation.
Services usually use selectors to attach themselves to pods based on the labels assigned to those pods. Labels and selectors work hand in hand and must match for traffic to be routed; a pod can carry one or many labels, and a Service selector can list one or many labels, depending on the use case. The labels decide which pods receive traffic from a Service, so we can move a pod to a different Service simply by updating the labels in its pod definition file. In some cases we do not want to attach a Service through a selector; in that scenario we omit the selector from the Service definition file and instead create the endpoints ourselves using an EndpointSlice (the recommended replacement for Endpoints), as sketched after the list below. Common scenarios for a Service without a selector include:
- Migrating a workload to Kubernetes.
- Pointing the Service to a Service in a different namespace or in another cluster.
- Using a different database in production than in the test environment.
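For illustration, here is a minimal sketch of a Service without a selector paired with an EndpointSlice; the names, ports, and the 192.0.2.10 address are placeholders standing in for an external database.

# Sketch: a Service with no selector, backed by a manually managed EndpointSlice
apiVersion: v1
kind: Service
metadata:
  name: external-db            # placeholder name
spec:
  ports:                       # no selector, so no endpoints are created automatically
    - name: db
      protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-db-1
  labels:
    kubernetes.io/service-name: external-db   # ties this slice to the Service above
addressType: IPv4
ports:
  - name: db                   # must match the name of the Service port
    protocol: TCP
    port: 5432
endpoints:
  - addresses:
      - "192.0.2.10"           # example address of the external database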
By using a Service we make our pods available on the network so that we can interact with them both internally and externally. We create a Service with an object definition file, just as we do for other Kubernetes resources; the definition file is written in YAML, holds all the attributes, and defines what resource will be created. A Service can be created, viewed, and modified through the Kubernetes API (for example with kubectl).
Below is the basic structure of a Service definition file, followed by a minimal sketch. Note that everything stays the same across the different service types except the spec section, which varies based on the service type.
---
- apiVersion: Kubernetes apiVersion for this resource, which is v1.
- kind: kind of the resource, which is Service.
- metadata: the metadata field includes the name and labels.
- spec: the spec includes the specification of the resource.
- spec.type: denotes the type of the Service.
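Putting those fields together, a minimal Service skeleton looks roughly like this (the name, labels, and selector are placeholders; the type-specific spec fields are covered in the sections that follow):

apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
  labels:
    app: my-app             # placeholder label
spec:
  type: ClusterIP           # ClusterIP, NodePort, LoadBalancer or ExternalName
  selector:
    app: my-app             # labels of the pods to attach to
  ports:
    - port: 80              # the rest of spec varies with the service type
      targetPort: 8080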
Kubernetes has four types of Services that behave differently and serve different purposes; some expose applications only to traffic inside the cluster, while others also accept external traffic. Each service type is discussed in detail below.
ClusterIP Service
ClusterIP is used to establish connectivity between different applications internally, within the Kubernetes cluster. It is the default service type: if no type is defined in the definition file, the Service is assumed to be ClusterIP. Several other service types are also built on top of it as a foundation.
To understand how ClusterIP works, consider a scenario with two microservices, a frontend and a backend, each running in a pod in the same Kubernetes cluster; the frontend needs to fetch data from the backend through an API call. At first glance the obvious option is to use the IP of the backend pod together with its exposed port, but this does not work reliably without a Service, because both the IP and the port have limitations: the IP address of a pod is not constant and changes whenever the pod is recreated, and the port of the containerised application running inside a pod cannot be accessed directly from outside the pod.
To make this internal connectivity possible, Kubernetes assigns the Service an IP address (the cluster IP) from a pool of IP addresses, and that IP is then used to reach the pod. The cluster IP can also be assigned manually by setting the field spec.clusterIP, or it can be set to "None" for a headless Service. We also map a Service port to the port of the application running inside the pod; using the cluster IP together with that port, we can communicate with the backend application. The same approach of creating a ClusterIP Service can be followed by other applications to talk to each other within the cluster.
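As a quick sketch of the two variants just mentioned, the relevant spec fragments would look like this (the IP is only an example and must fall inside the cluster's service IP range):

spec:
  clusterIP: 10.0.171.239   # variant 1: manually assigned cluster IP
---
spec:
  clusterIP: None           # variant 2: headless service, no cluster IP is allocated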
By creating a ClusterIP Service we create an interface through which applications running inside pods can reach each other internally; the pods are grouped together based on the labels defined in their pod definition files. To enable communication we attach the Service to the pods through labels and selectors, just as with other service types such as NodePort: define the selector in the Service definition file, and if the Service finds pods whose labels match, it attaches to them and provides a single interface with one dedicated IP and port.
How to configure ClusterIP: Below is the structure of a Service definition file with type ClusterIP, followed by a minimal example. Note that the basic structure discussed in the introduction remains the same.
---
- spec.ports: array of ports which includes all the port properties.
- spec.ports.name: name of the port; a port name should only contain lowercase alphanumeric characters and “-”.
- spec.ports.targetPort: port of the application running inside the pod.
- spec.ports.protocol: protocol through which communication is established; the default protocol for a Service is TCP.
- spec.ports.port: the Service port number, which may be the same as the target port.
- spec.selector: defines the labels (for example app and type) that the Service uses to select pods.
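Based on these fields, a complete ClusterIP definition could look like the following sketch; the ports mirror the redis commands used below, while the selector labels are assumptions for the example.

apiVersion: v1
kind: Service
metadata:
  name: redis-clusterip-service
spec:
  type: ClusterIP              # optional, since ClusterIP is the default
  selector:
    app: redis                 # assumed pod labels
    type: backend
  ports:
    - name: redis-tcp          # lowercase alphanumeric characters and "-"
      protocol: TCP
      port: 9344               # port exposed by the Service
      targetPort: 6379         # port of the application inside the pod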
Commands: All of the commands below stay the same for the other service types except the creation command, which differs based on the service type.
Below is the command to create a ClusterIP Service with a name, protocol, and <port>:<targetPort>. The best practice is to create any Kubernetes resource from a definition file (YAML), so we first do a dry run and save the output in YAML format. Once the file is generated we can edit it according to our needs and then create the resource.
kubectl create service clusterip redis-clusterip-service --tcp=9344:6379 --dry-run=client -o yaml > redis-ci-service.yaml
To create a ClusterIP Service with a manually defined cluster IP, use the command below.
kubectl create service clusterip redis-clusterip-service --tcp=9344:6379 --clusterip=10.0.171.239 --dry-run=client -o yaml > redis-ci-service.yaml
After saving the output we can create the resource by using the command below.
kubectl apply -f redis-ci-service.yaml
To list all Services in the current namespace, use the command below.
kubectl get svc
To list all Services in a different namespace, use the command below.
kubectl get svc -n=my-ns
To get details about a specific Service in the current namespace, use the command below.
kubectl describe svc redis-clusterip-service
To get details about a specific Service in a different namespace, use the command below.
kubectl describe svc redis-clusterip-service -n=my-ns
To delete a Service in the current namespace, use the command below.
kubectl delete svc redis-clusterip-service
NodePort Service
NodePort is the service type that is attached to every node in the cluster, so the application can be accessed from outside the cluster using the IP of any node. A NodePort Service spans all nodes in the cluster regardless of whether a selected pod is actually running on a given node. If we dissect the word NodePort it consists of two words, node and port: a node is a machine in the Kubernetes cluster (a cluster can consist of one to many nodes), and the port is the gateway through which traffic enters. For a NodePort Service we select a port from the defined range, 30000 to 32767, and each node proxies that same port number across the cluster, so we can reach the same application on the same port using different node IPs.
If multiple pods are running with the same labels, the NodePort Service automatically spans all pods whose labels match, and it uses internal load balancing (a random algorithm in kube-proxy) to route traffic between them. This scenario occurs with a ReplicaSet or Deployment, where multiple pod instances run with the same labels, as the sketch below shows.
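As a sketch of that scenario, the Deployment below runs three replicas that all carry the same label, so a Service whose selector is app: redis would attach to every one of them (the names, label, and image are assumptions for the example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 3                  # three pod instances, all with the same label
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis             # a Service selecting app: redis spans all three pods
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379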
To specify the nodePort in our Service definition file we use any unused port from that range and place it under the key .spec.ports[*].nodePort; if we don't specify a nodePort, one is assigned automatically from the range.
So far we have covered both the node and the port in the NodePort service type, but how does this nodePort route external traffic to the application running inside a pod? For that we map the nodePort to the exposed port of the application, known as the targetPort, which is specified as .spec.ports[*].targetPort inside the Service definition file.
To map these ports we need a bridge through which external traffic can be routed from the nodePort to the targetPort; this is exactly what the Service provides. The Service has its own dedicated IP address, called the CLUSTER-IP, as well as its own PORT, which we define in the Service definition file as .spec.ports[*].port. If we don't specify a targetPort, it is assumed to be the same as the port.
How to configure NodePort: Below is the structure of a Service definition file with type NodePort, followed by a minimal sketch. The basic structure discussed in the introduction and the ClusterIP section remains the same; we only have to edit the nodePort value in the YAML file before creating the resource.
---
- spec.ports.nodePort: the nodePort value, which must fall within the default range of 30000 to 32767.
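Putting it together, a NodePort definition could look like this sketch; the nodePort value and the selector label are chosen only for illustration, and omitting nodePort would let Kubernetes pick one from the range automatically.

apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  selector:
    app: redis                 # assumed pod label
  ports:
    - name: redis-tcp
      protocol: TCP
      port: 6379               # service port
      targetPort: 6379         # port of the application inside the pod
      nodePort: 30079          # must be unused and within 30000-32767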
Command: Below is the command to create a NodePort Service; once the file is created, edit the name and selector.
kubectl create service nodeport redis-service --tcp=6379 --dry-run=client -o yaml > redis-np-service.yaml
After saving the output we can create the resource by using the command below.
kubectl apply -f redis-np-service.yaml
LoadBalancer Service
The LoadBalancer service type uses a load-balancing component to route traffic to our pods based on the request load. It is similar to the NodePort Service; the main difference, apart from the load balancer component itself, is that with NodePort we access the application using any worker node's IP address, whereas with a LoadBalancer Service we access it through a single IP address, with a load balancer in front that routes traffic to the pods based on the load.
Kubernetes does not natively provide the load balancer component, so to use this service type we either have to set up our own load balancer, such as HAProxy or Nginx, on a separate virtual machine that routes traffic to the pods in our cluster, or, if we are running on a supported cloud provider (such as AWS, GCP, or Azure), we can leverage its native load-balancing support for Kubernetes. When we define the service type LoadBalancer in our Service definition file and the cluster is in a supported cloud environment, a load balancer is created asynchronously and its public IP is assigned to our Service (before v1.24 the IP could be assigned manually under .spec.loadBalancerIP, but that field is deprecated as of that version). If the cluster is not in a supported cloud environment, the LoadBalancer type behaves like a NodePort unless load balancer software is configured manually to route traffic to our pods.
LoadBalancer types: There are primarily two kinds of load balancer, external and internal. An internal LB is used for communication between applications inside the Kubernetes cluster and restricts external traffic, whereas an external LB routes external traffic to our applications, exposing them to external users and to applications outside the cluster. Different load balancers offer different load-balancing strategies, depending entirely on which LB we are using; strategies commonly offered by providers include round robin, source IP affinity, session persistence, port-based load balancing, least connections, and custom load balancing.
How to configure LoadBalancer: Below is the structure of a Service definition file with type LoadBalancer, followed by a sketch. The basic structure discussed in the introduction and the ClusterIP section remains the same.
- spec.externalTrafficPolicy: there are two values for externalTrafficPolicy, Local and Cluster. With Cluster, which is the default for the LoadBalancer type, Kubernetes performs an extra load-balancing step by forwarding requests to pods on other nodes, which spreads the load more evenly; with Local, when a request arrives on a node, kube-proxy does not spread the load to other nodes in the cluster.
To use load balancing for internal traffic we have to use a provider-specific annotation; for AWS, for example, we set the key .metadata.annotations.service.beta.kubernetes.io/aws-load-balancer-internal to the value "true". The keys and values for other providers are listed in the official Kubernetes documentation (https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer).
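As a sketch, assuming a cluster running on AWS, a LoadBalancer Service combining the internal-LB annotation and an explicit externalTrafficPolicy could look like this (the selector label is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: redis-lb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # AWS-specific: make the LB internal
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # default; Local keeps traffic on the node that received it
  selector:
    app: redis                     # assumed pod label
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379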
Command: Below is the command to create a LoadBalancer Service; once the file is created, edit the name and selector.
kubectl create service loadbalancer redis-lb-service --tcp=6379:6379 --dry-run=client -o yaml > redis-lb-service.yaml
After saving the output we can create the resource by using the command below.
kubectl apply -f redis-lb-service.yaml
Note: if the cluster is not a managed cluster using a cloud Kubernetes service such as AKS, EKS, or GKE, no load balancer is provisioned and the EXTERNAL-IP of the LoadBalancer Service remains in the pending state.
ExternalName Service
With other service types we use selectors and labels to attach a Service to specific pods, whereas with ExternalName we map the Service to an external domain using its DNS name: the Service is mapped to the content of the externalName field, which the cluster DNS returns as a CNAME record. Once the Service is created, the externalName value shows up as the EXTERNAL-IP, and traffic addressed to the Service is redirected to the external component outside the cluster. The main difference between this service type and the others is that the redirection happens at the DNS level rather than via proxying or forwarding.
A good use case for this service type is connecting to components such as a database running outside of the Kubernetes cluster; another is allowing a pod in one namespace to communicate with a service in another namespace. The main purpose of this service type is to connect with elements outside the cluster.
How to configure ExternalName: When we create an ExternalName Service using a kubectl command and apply it to the cluster, a record is created in the internal DNS system. Below is the structure of a Service definition file with type ExternalName, followed by a minimal example.
- spec.externalName: the DNS name of the external component.
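Putting it together, an ExternalName definition is only a few lines; this sketch matches the command shown next.

apiVersion: v1
kind: Service
metadata:
  name: db-en-service
spec:
  type: ExternalName
  externalName: my.database.example.com   # returned by cluster DNS as a CNAME record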
Command: Below is the command to create an ExternalName Service; once the file is created, edit it according to your needs.
kubectl create service externalname db-en-service --external-name=my.database.example.com --dry-run=client -o yaml > db-en-service.yaml
After saving the output we can create the resource by using the command below.
kubectl apply -f db-en-service.yaml
Conclusion
In Kubernetes, Services play a vital role in making communication possible both internally and externally, ensuring loose coupling between microservices and providing a stable way to reach the running instances of an application. ClusterIP Services provide internal connectivity between applications within the cluster through a stable IP address; NodePort Services expose applications externally by attaching to every node in the cluster and routing traffic through a dedicated port; LoadBalancer Services distribute traffic across pod instances based on load and expose the application through a single IP address with a load balancer in front; and ExternalName Services map a Service to an external domain via DNS, enabling communication with components outside the cluster.
Understanding the different service types and their configuration is crucial for managing communication in Kubernetes environments. LoadBalancer Services are generally used in production, NodePort Services are often used during the testing phase, and ExternalName Services are used for connecting to services outside the cluster. By leveraging Kubernetes Services effectively, we can ensure seamless communication and robust connectivity for our applications.