
Kubernetes Ingress — Route External Traffic Like a Pro

Goel Academy · DevOps & Cloud Learning Hub · 6 min read

You have a ClusterIP Service. You have a NodePort Service. You even have a LoadBalancer Service. But the moment you need path-based routing, virtual hosts, or TLS termination for multiple apps behind a single IP, Services alone fall apart. That is where Ingress takes over.

Why Services Are Not Enough

Consider a scenario: you have three microservices — api, dashboard, and docs — and you want all three served behind app.example.com. With plain Services, you would need three separate LoadBalancers, three public IPs, and three DNS records. That is expensive and ugly.

Ingress solves this by acting as a single entry point that routes traffic based on hostnames and URL paths. One LoadBalancer, one IP, unlimited routing rules.

# Without Ingress: 3 LoadBalancer Services = 3 external IPs
kubectl get svc
# NAME        TYPE           EXTERNAL-IP
# api         LoadBalancer   34.120.10.1
# dashboard   LoadBalancer   34.120.10.2
# docs        LoadBalancer   34.120.10.3

# With Ingress: 1 LoadBalancer (ingress controller) = 1 external IP
# app.example.com/api -> api service
# app.example.com/dashboard -> dashboard service
# docs.example.com -> docs service

Ingress Controllers — The Engine Behind Ingress

An Ingress resource by itself does nothing. You need an Ingress Controller — a reverse proxy that reads Ingress objects and configures itself accordingly.

Controller      | Maintained By          | Protocols                    | Best For
----------------|------------------------|------------------------------|------------------------------------------
NGINX Ingress   | Kubernetes community   | HTTP/HTTPS, gRPC, WebSocket  | General purpose, most widely adopted
Traefik         | Traefik Labs           | HTTP/HTTPS, TCP, UDP, gRPC   | Automatic Let's Encrypt, middleware chains
HAProxy         | HAProxy Technologies   | HTTP/HTTPS, TCP              | High performance, connection-heavy workloads
AWS ALB Ingress | AWS                    | HTTP/HTTPS                   | Native AWS ALB integration, EKS clusters
Istio Gateway   | Istio                  | HTTP/HTTPS, gRPC, TCP        | Service mesh environments
Contour         | VMware/Project Contour | HTTP/HTTPS, gRPC             | Envoy-based, multi-team IngressRoute CRD

Installing NGINX Ingress Controller with Helm

The NGINX Ingress Controller is the most common choice. Install it with Helm:

# Add the ingress-nginx Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install into its own namespace
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.replicaCount=2 \
  --set controller.metrics.enabled=true

# Verify the controller is running
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
# The LoadBalancer service gets an external IP — that is your cluster's front door
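Once the LoadBalancer has an address, it helps to capture it in a variable for the curl tests that follow. A quick sketch, assuming the release name and namespace from the Helm install above, and that your provider assigns an IP (on AWS the field is `.hostname` instead of `.ip`):

```shell
# Capture the controller's external IP (provisioning can take a minute or two)
INGRESS_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$INGRESS_IP"

# With no Ingress rules defined yet, the controller answers with its own 404 page
curl -i "http://$INGRESS_IP/"
```

Point your DNS records (app.example.com, docs.example.com, and so on) at this one address; every Ingress rule in the cluster is served behind it.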

Path-Based Routing

Route different URL paths to different backend Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /dashboard(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: dashboard-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80

# Apply and verify
kubectl apply -f app-ingress.yaml
kubectl get ingress -n production
kubectl describe ingress app-ingress -n production
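Before DNS exists, you can exercise each path by pinning the Host header. A sketch, assuming the controller's external IP is in $INGRESS_IP:

```shell
# /api/users is stripped to /users by the rewrite-target annotation
# before it reaches api-service
curl -H "Host: app.example.com" "http://$INGRESS_IP/api/users"

# /dashboard/... routes to dashboard-service
curl -H "Host: app.example.com" "http://$INGRESS_IP/dashboard/"

# Everything else falls through to frontend-service via the / Prefix rule
curl -H "Host: app.example.com" "http://$INGRESS_IP/"
```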

Host-Based Routing

Route traffic based on the hostname in the request:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
  namespace: production
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  - host: dashboard.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dashboard-service
            port:
              number: 80
  - host: docs.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docs-service
            port:
              number: 3000
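You can verify host-based routing without editing /etc/hosts by using curl's --resolve flag, which maps a hostname to an address for a single request. A sketch, again assuming $INGRESS_IP holds the controller's external IP:

```shell
# --resolve pins the hostname to the ingress IP, bypassing DNS entirely
curl --resolve "api.example.com:80:$INGRESS_IP"       http://api.example.com/
curl --resolve "dashboard.example.com:80:$INGRESS_IP" http://dashboard.example.com/
curl --resolve "docs.example.com:80:$INGRESS_IP"      http://docs.example.com/
```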

TLS/SSL Termination with cert-manager

No production Ingress is complete without HTTPS. cert-manager automates certificate provisioning from Let's Encrypt.

# Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

Create a ClusterIssuer for Let's Encrypt:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
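It is worth confirming the issuer registered with Let's Encrypt before wiring it into an Ingress. A quick check, assuming the resource name above:

```shell
# READY should report True once the ACME account is registered
kubectl get clusterissuer letsencrypt-prod

# If READY is False, the Status conditions explain what failed
kubectl describe clusterissuer letsencrypt-prod
```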

Now add TLS to your Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    - api.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80

cert-manager will automatically request certificates from Let's Encrypt, store them in app-tls-secret, and renew them before expiry.
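You can watch issuance happen. cert-manager creates a Certificate resource (named after the secretName) for each tls block; a sketch of the checks, assuming the names above:

```shell
# READY flips to True once the certificate is issued and stored in the Secret
kubectl get certificate -n production
kubectl describe certificate app-tls-secret -n production

# While issuance is in flight, the intermediate ACME objects show progress
kubectl get certificaterequest,order,challenge -n production
```

If a challenge stays pending, the usual culprit is that Let's Encrypt cannot reach the HTTP-01 solver because DNS does not yet point at the ingress IP.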

IngressClass Resource

Kubernetes 1.18+ introduced IngressClass to support multiple ingress controllers in the same cluster:

# List available IngressClasses
kubectl get ingressclass

# NAME    CONTROLLER             PARAMETERS   AGE
# nginx   k8s.io/ingress-nginx   <none>       5d

Set a default IngressClass so you do not have to specify ingressClassName on every Ingress:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx

Default Backend

Handle requests that do not match any Ingress rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catch-all-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: default-backend-service
      port:
        number: 80
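A quick way to confirm the catch-all works is to send a request for a hostname no other Ingress rule claims; it should land on the default backend instead of the controller's built-in 404. A sketch, assuming $INGRESS_IP from earlier:

```shell
# No rule matches this host, so the request falls through to default-backend-service
curl -i -H "Host: unknown.example.com" "http://$INGRESS_IP/"
```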

Useful Annotations for NGINX Ingress

Annotations let you customize behavior without touching the controller's global config:

metadata:
  annotations:
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"

    # Timeouts
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"

    # Body size (file uploads)
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"

    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"

    # Custom headers
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Content-Type-Options: nosniff";

    # Redirect HTTP to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"

    # Sticky sessions (cookie-based)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "SERVERID"

Debugging Ingress Issues

When traffic is not reaching your backend, check these in order:

# 1. Is the ingress controller running?
kubectl get pods -n ingress-nginx

# 2. Does the Ingress resource look correct?
kubectl describe ingress app-ingress -n production

# 3. Is the backend service healthy?
kubectl get endpoints api-service -n production

# 4. Check ingress controller logs for errors
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50

# 5. Test connectivity from inside the cluster
kubectl run curl-test --image=curlimages/curl --rm -it -- \
  curl -H "Host: app.example.com" http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/health

Nine times out of ten, the problem is either a missing Endpoints object (Service selector does not match pod labels) or a wrong port number in the Ingress backend.
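For the selector-mismatch case specifically, comparing the Service's selector against the actual pod labels usually settles it in seconds. A sketch, using the api-service example from earlier:

```shell
# What labels does the Service select on?
kubectl get svc api-service -n production -o jsonpath='{.spec.selector}'

# Do any pods actually carry those labels?
kubectl get pods -n production --show-labels

# And does the Service port match what the Ingress backend references?
kubectl get svc api-service -n production -o jsonpath='{.spec.ports}'
```

If the selector output and the pod labels disagree on even one key, the Endpoints object will be empty and the ingress controller has nowhere to send traffic.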


Next up, we will explore Helm Charts — the package manager that makes deploying and managing Ingress controllers (and everything else) repeatable and version-controlled.