Kubernetes
Containers
Linux

Kubernetes Ingress: Efficient Strategies for Service Routing

Kubernetes Ingress, in short, is an API object that manages external access to the services in the cluster. It supports load balancing, SSL termination, and name-based virtual hosting.
April 25, 2024

Introduction

Kubernetes Ingress, in short, is an API object that manages external access to the services in the cluster. It supports load balancing, SSL termination, and name-based virtual hosting. It is also the most popular routing management method on Kubernetes.

An Ingress provides a single entry point into supported resources in the cluster, such as a Service. It receives incoming traffic, typically HTTP or HTTPS, and routes it to the destination according to the defined routing rules. Compared to a Kubernetes Service, an Ingress acts as a dedicated load balancer placed in front of one or more Services.

Centralizing routing in an Ingress adds flexibility to application management and simplifies troubleshooting of routing issues. If you use Ingress on a cloud-managed Kubernetes cluster, its capabilities can be extended further, for example by provisioning cloud-native load balancers, auto-applying SSL certificates, and integrating with monitoring.

Kubernetes Ingress Controller

The Ingress controller is what makes Ingress work in the cluster. It implements the Kubernetes Ingress API and acts as the load balancer: the Ingress defines the routing rules, while the Ingress controller enforces them. Running as a Pod within the cluster, it watches for the creation, change, and deletion of Ingress resources and configures the underlying load balancer accordingly. It is also the component that routes traffic based on those rules.

There are many popular Ingress controllers to choose from, such as the NGINX Ingress Controller, Traefik, HAProxy Ingress, and Contour.

If you are using a cloud-managed Kubernetes cluster like EKS, AKS, or GKE, you may also use these Ingress controllers.

  • AWS ALB Ingress Controller
  • Azure Application Gateway Ingress Controller
  • GKE Ingress Controller

As there are many choices for an Ingress controller, you should choose the one that best fits your needs.

Types of Ingress

Single Service Ingress

A single-service Ingress is used to expose one specific Service externally. To do that, you set the `defaultBackend` field in the Ingress specification to point at the Service and one of its port numbers; no routing rules are required.

Figure 1 - Single Service Ingress | Example, Source: Kubernetes Docs

An example of a single-service Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: single-service-ingress
spec:
  defaultBackend:
    service:
      name: single-svc
      port:
        number: 80

The above example defines a single-service Ingress that routes all incoming HTTP traffic to port 80 of a Service named `single-svc`.

This is the most straightforward way to use Ingress, but rarely the best one. As mentioned earlier, Ingress supports multiple routing rules and serves as a single point of entry, like a load balancer. Setting up an Ingress controller and an Ingress just to expose a single Service is usually overkill.

In today's DevOps world, with the popularity of microservices, real business use cases for a single-service Ingress are rare.

Simple Fanout Ingress

The simple fanout Ingress can route traffic to more than one Service, based on the HTTP URL in the request. When you have multiple Services and want to serve them under a single domain name, it is a great option.

Figure 2 - Simple Fanout Ingress | Example, Source: Kubernetes Docs

An example of a simple fanout Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-ingress
spec:
  rules:
  - host: yukccy.com
    http:
      paths:
      - path: /home
        pathType: Prefix
        backend:
          service:
            name: home-svc
            port:
              number: 80
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog-svc
            port:
              number: 8080

The above example shows a simple fanout Ingress definition that routes traffic to two Services, `home-svc` and `blog-svc`, depending on the path in the request URL. Both routes share the same hostname.

In this case, you need only one IP address to expose multiple services. This is much closer to a real-world use case: reducing the total number of IP addresses and load balancers is an important task for a DevOps engineer, especially since public cloud providers recently started charging for public IP addresses.
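To see how the `Prefix` rules above dispatch requests, here is a small shell sketch. It is not part of Kubernetes at all, just an illustration of the matching logic; the service names mirror the manifest above.

```shell
# Illustrative only: mimics how the fanout rules above pick a backend by path prefix.
# A Prefix rule for /home matches /home and /home/..., but not /homepage.
route() {
  case "$1" in
    /home|/home/*) echo "home-svc:80" ;;
    /blog|/blog/*) echo "blog-svc:8080" ;;
    *) echo "no rule matched" ;;
  esac
}

route /home/index.html   # -> home-svc:80
route /blog/post-1       # -> blog-svc:8080
route /homepage          # -> no rule matched
```

Note that `Prefix` matching works on whole path elements, which is why `/homepage` does not match the `/home` rule.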

Name-Based Virtual Hosting

Name-based virtual hosting is similar to the simple fanout Ingress, as both route HTTP traffic to different services behind a single IP address. The biggest difference is that it supports multiple hostnames, while a simple fanout Ingress serves a single hostname. With this Ingress type, the controller first matches the request's hostname against the defined hosts, then evaluates that host's routing rules.

Figure 3 - Name-Based Virtual Hosting | Example, Source: Kubernetes Docs

An example of a name-based virtual hosting Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-virtual-host-ingress
spec:
  rules:
  - host: home.yukccy.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: home-svc
            port:
              number: 80
  - host: blog.yukccy.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: blog-svc
            port:
              number: 8080

The above example is a name-based virtual hosting Ingress: incoming traffic is routed to `home-svc` or `blog-svc` only if the hostname matches. If you want to catch the remaining traffic, you can add a rule without any hostname specified; requests that match none of the host rules will fall through to it.

In this case, a single Ingress routes HTTP traffic arriving under different hostnames to different services. Pointing the DNS records of each subdomain at the IP address of the Ingress makes everything work.
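As a minimal sketch, such a catch-all rule is simply an extra entry in the rules list with no `host` field. The `default-svc` backend here is hypothetical; `home.yukccy.com` and `home-svc` follow the earlier example.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catch-all-ingress
spec:
  rules:
  - host: home.yukccy.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: home-svc
            port:
              number: 80
  - http:   # no host: receives requests whose hostname matches no other rule
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: default-svc   # hypothetical catch-all Service
            port:
              number: 80
```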

In my opinion, it is the most used type of Ingress, as it covers all the capabilities of the previously mentioned types and adds powerful hostname routing. With a well-designed set of routing rules, it minimizes the use of IP addresses and keeps everything in the same place.

Different Traffic Control on Kubernetes

There are several ways to manage traffic on the Kubernetes cluster.

  • Ingress. It allows you to consolidate multiple routing rules in a single resource. On a cloud-managed service, you can also get a cloud-native load balancer attached to the Ingress.
  • ClusterIP. The default Service type; it exposes a Service only inside the cluster, which makes it the best option for internal services. Usually used for internal-only applications, debugging, and development.
  • NodePort. It exposes a Service on a static port of every node, by default in the range 30000-32767. Not recommended for any production environment, as it provides no load balancing or multi-service routing.
  • LoadBalancer. This type creates an external load balancer to expose the Service. It is a bit similar to a single-service Ingress, but the latter provides more flexibility in routing control.
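As a point of comparison with a single-service Ingress, exposing one service directly with a `LoadBalancer` Service looks like the sketch below. The name `single-svc` follows the earlier example; the selector label and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: single-svc
spec:
  type: LoadBalancer   # provisions an external load balancer on cloud providers
  selector:
    app: single-app    # hypothetical Pod label
  ports:
  - port: 80           # port exposed by the load balancer
    targetPort: 8080   # port the Pods listen on
```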

When to Use

Ingress is always recommended when you have complex traffic routing, whether for a single service with multiple paths or for multiple services. Likewise, if more than one application or system is hosted on your Kubernetes cluster, you should manage their routing with Ingress.

Moreover, if you are using a cloud provider-managed Kubernetes cluster, you may consider Ingress as one of the options for cost optimization. For example, when using the Kubernetes service provided by AWS (EKS) together with the AWS Load Balancer Controller, each Ingress provisions an AWS load balancer. Since AWS charges per load balancer and per public IP address, you should minimize their number, and Ingress helps with exactly that.

Ingress with Nginx Ingress Controller

Before we can create an Ingress resource on the cluster, we have to set up an Ingress controller. We will use the Nginx Ingress Controller as an example.

The steps are as follows.

Step 1 - Install the Nginx Ingress Controller using Helm.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Figure 4 - Ingress with Nginx Ingress Controller | Install by Helm

Step 2 - Check if the Pod of the Nginx Ingress Controller is ready

kubectl get pods --namespace=ingress-nginx
Figure 5 - Ingress with Nginx Ingress Controller | Pod status

Step 3 - Create a Deployment for a simple web server

kubectl create deployment demo --image=httpd --port=80
Figure 6 - Ingress with Nginx Ingress Controller | Sample web server Deployment

Step 4 - Expose the Deployment

kubectl expose deployment demo
Figure 7 - Ingress with Nginx Ingress Controller | Expose the web server Deployment

Step 5 - Create an Ingress resource that uses a host mapped to `localhost`

kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.yukccy.local/*=demo:80"
Figure 8 - Ingress with Nginx Ingress Controller | Create Ingress resource

Step 6 - Forward a local port to the ingress controller

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
Figure 9 - Ingress with Nginx Ingress Controller | Forward a local port for Ingress Controller

Step 7 - Test if the Ingress works fine

curl --resolve demo.yukccy.local:8080:127.0.0.1 http://demo.yukccy.local:8080
Figure 10 - Ingress with Nginx Ingress Controller | Test the Ingress

This is an example of deploying the Nginx Ingress Controller on your local Kubernetes cluster and accessing the deployment using the hostname defined in the Ingress rules.

Conclusion & Extra Tips

A Kubernetes cluster can run more than one Ingress controller at the same time, and each Ingress resource can be implemented by a different controller. To support this, Kubernetes provides a resource named `IngressClass`, which records which controller it refers to along with additional configuration for that controller.

As a better practice, every Ingress resource should specify a class, even if you have configured only one controller. Since IngressClass parameters can be namespace-scoped, you can take advantage of that to scope the usage of an Ingress controller. Separate controllers for development, staging, and production environments can be useful, as each environment may need its own network design and controls.

Here is an example of `IngressClass` definition.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
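An Ingress then selects this class through the `ingressClassName` field of its spec. The sketch below reuses the `single-svc` backend from the earlier example and the `external-lb` class defined above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: classed-ingress
spec:
  ingressClassName: external-lb   # refers to the IngressClass above
  defaultBackend:
    service:
      name: single-svc
      port:
        number: 80
```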