
Kubernetes Services — ClusterIP, NodePort, LoadBalancer, and ExternalName

Goel Academy · DevOps & Cloud Learning Hub · 6 min read

Your deployment has 5 pods running happily. But pods die and get replaced constantly — each time with a new IP address. So how does your frontend talk to your backend if the backend's IP keeps changing? The answer is Services, and getting them right is the difference between a working cluster and a networking nightmare.

Why Services Exist

Here is the fundamental problem:

# Pod IPs are ephemeral — they change on every restart
kubectl get pods -o wide
# NAME READY IP NODE
# api-abc123 1/1 10.244.1.15 node-1
# api-def456 1/1 10.244.2.22 node-2

# Delete a pod — the replacement gets a NEW IP
kubectl delete pod api-abc123

kubectl get pods -o wide
# NAME READY IP NODE
# api-xyz789 1/1 10.244.1.31 node-1 <-- different IP!
# api-def456 1/1 10.244.2.22 node-2

A Service gives you a stable IP address and DNS name that automatically routes traffic to healthy pods matching a label selector. Think of it as a permanent phone number that forwards calls to whoever is on duty.
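To make the selector link concrete, here is a minimal sketch (the `api` Deployment name, image, and labels are illustrative) showing that a Service finds its pods purely through labels:

```yaml
# Hypothetical Deployment — a Service selects its pods by label only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api                   # <-- the label a Service selects on
    spec:
      containers:
        - name: api
          image: example/api:1.0   # illustrative image
          ports:
            - containerPort: 8080
```

Pods come and go, but as long as each replacement carries `app: api`, the Service keeps routing to them.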

Service Types — The Complete Comparison

| Type | Accessible From | Gets External IP | Use Case | Port Range |
|---|---|---|---|---|
| ClusterIP | Inside cluster only | No | Internal service-to-service communication | Any |
| NodePort | Outside via `<NodeIP>:<NodePort>` | No | Dev/test, on-prem without LB | 30000-32767 |
| LoadBalancer | Internet via cloud LB | Yes | Production external access (AWS/GCP/Azure) | Any |
| ExternalName | Inside cluster (DNS alias) | No | Proxy to external services (RDS, SaaS APIs) | N/A |

ClusterIP — The Default

ClusterIP is the most common service type. It creates a virtual IP inside the cluster that routes to your pods.

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP        # This is the default, you can omit it
  selector:
    app: api             # Routes to pods with label app=api
  ports:
    - name: http
      port: 80           # The port the Service listens on
      targetPort: 8080   # The port the container listens on
      protocol: TCP

# Create and verify
kubectl apply -f api-service.yaml
kubectl get svc api-service

# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
# api-service ClusterIP 10.96.45.123 <none> 80/TCP

# Test from inside the cluster
kubectl run curl --image=curlimages/curl --rm -it -- curl http://api-service/health

Understanding port vs targetPort vs nodePort

This trips up almost everyone:

ports:
  - port: 80           # Service listens on this port (what other pods use)
    targetPort: 8080   # Container listens on this port (where traffic goes)
    nodePort: 30080    # Node listens on this port (NodePort/LB only)

Client → NodePort (30080) → Service Port (80) → Container targetPort (8080)

If you omit targetPort, it defaults to the same value as port.
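One detail worth knowing: `targetPort` can also be a port *name* defined in the pod spec, so the Service stays valid even if the container's port number changes later. A sketch (the `http-api` name and image are illustrative):

```yaml
# In the pod template: give the container port a name
containers:
  - name: api
    image: example/api:1.0
    ports:
      - name: http-api         # named port
        containerPort: 8080
---
# In the Service: reference the name instead of the number
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: http-api     # resolves to whatever containerPort has this name
```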

NodePort — Expose on Every Node

NodePort opens a specific port on every node in the cluster. Traffic hitting any node on that port gets routed to the Service.

apiVersion: v1
kind: Service
metadata:
  name: api-nodeport
spec:
  type: NodePort
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # Optional: Kubernetes picks one if omitted (30000-32767)

# Access from outside the cluster
curl http://<any-node-ip>:30080/health

# Find node IPs
kubectl get nodes -o wide

NodePort works for quick testing but is not ideal for production. You have to manage port conflicts across services, and the port range (30000-32767) is awkward for users.

LoadBalancer — Production External Access

In cloud environments (AWS, GCP, Azure), LoadBalancer provisions an actual cloud load balancer that routes internet traffic to your service.

apiVersion: v1
kind: Service
metadata:
  name: api-loadbalancer
  annotations:
    # AWS-specific annotations
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443

# Wait for the external IP to be provisioned
kubectl get svc api-loadbalancer -w

# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
# api-loadbalancer LoadBalancer 10.96.78.200 a1b2c3.elb.aws 80:31234/TCP,443:31567/TCP

# Access from the internet
curl http://a1b2c3.elb.amazonaws.com/health

Each LoadBalancer service creates a separate cloud LB, which costs money. For multiple services, use an Ingress controller instead.
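As a rough sketch of the Ingress alternative (the hostname and ingress class are illustrative, and an ingress controller such as ingress-nginx must already be installed), one load balancer can then front many services:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx          # assumes an nginx ingress controller
  rules:
    - host: api.example.com        # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service  # routes to the ClusterIP Service
                port:
                  number: 80
```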

ExternalName — DNS Alias to External Services

ExternalName does not proxy traffic. It creates a CNAME DNS record that points to an external service.

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com

# Now pods can connect to the RDS database using the Service name
# Inside a pod:
# psql -h external-db -U admin -d myapp
# DNS resolves external-db → mydb.abc123.us-east-1.rds.amazonaws.com

This is useful for abstracting external dependencies. If you migrate from RDS to a self-hosted database, you just change the Service definition — no application code changes.
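For instance, if the database later moves into the cluster, the same `external-db` name can become an ordinary selector-based Service, and callers keep connecting to `external-db` unchanged. A hypothetical sketch (the `app: postgres` label is an assumption about the in-cluster pods):

```yaml
# Same Service name, now pointing at in-cluster postgres pods
apiVersion: v1
kind: Service
metadata:
  name: external-db        # name stays the same for clients
spec:
  type: ClusterIP
  selector:
    app: postgres          # assumes the migrated pods carry this label
  ports:
    - port: 5432
      targetPort: 5432
```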

Service Discovery

Kubernetes provides two mechanisms for discovering services:

DNS-Based Discovery (Preferred)

Every service gets a DNS entry automatically:

# Format: <service-name>.<namespace>.svc.cluster.local
# Examples:
api-service.default.svc.cluster.local
api-service.default.svc
api-service.default
api-service # Works within the same namespace

# SRV records for port discovery
_http._tcp.api-service.default.svc.cluster.local

Environment Variable Discovery

Kubernetes injects environment variables for services that exist before the pod starts:

# Inside a pod, for a service named "api-service" on port 80:
echo $API_SERVICE_SERVICE_HOST # 10.96.45.123
echo $API_SERVICE_SERVICE_PORT # 80

DNS is preferred because it works regardless of creation order.
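In practice this means application config can hard-code the DNS name rather than any IP. A hypothetical Deployment fragment (names and image are illustrative) wiring a backend URL through service DNS:

```yaml
# Fragment of a Deployment pod template
containers:
  - name: frontend
    image: example/frontend:1.0
    env:
      - name: BACKEND_URL
        # The fully-qualified form works from any namespace;
        # plain "api-service" works only within the same namespace
        value: "http://api-service.default.svc.cluster.local:80"
```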

Endpoints — What Services Actually Route To

A Service does not magically know about pods. It uses an Endpoints object (newer clusters also maintain EndpointSlice objects, which serve the same role) that lists the IPs of pods matching the selector.

# View the endpoints for a service
kubectl get endpoints api-service

# NAME ENDPOINTS AGE
# api-service 10.244.1.15:8080,10.244.2.22:8080 5m

# If endpoints are empty, your selector doesn't match any pods!
kubectl describe svc api-service | grep Selector
kubectl get pods -l app=api
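The selector-to-Endpoints link also works in reverse: a Service defined *without* a selector gets no automatic Endpoints, and you can manage them by hand to route the Service to arbitrary IPs, such as a VM outside the cluster. A sketch with an illustrative name and IP:

```yaml
# Service with no selector — Kubernetes creates no Endpoints for it
apiVersion: v1
kind: Service
metadata:
  name: legacy-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
---
# Manually managed Endpoints; must share the Service's name
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-backend
subsets:
  - addresses:
      - ip: 192.0.2.10     # illustrative external VM IP
    ports:
      - port: 8080
```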

Headless Services — For StatefulSets

A headless service has clusterIP: None. Instead of a single virtual IP, DNS returns the IPs of individual pods. This is essential for StatefulSets where each pod has a unique identity.

apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None   # Makes it headless
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

# DNS returns individual pod IPs instead of a virtual IP
# nslookup postgres-headless
# Returns:
# postgres-0.postgres-headless.default.svc.cluster.local → 10.244.1.5
# postgres-1.postgres-headless.default.svc.cluster.local → 10.244.2.8
# postgres-2.postgres-headless.default.svc.cluster.local → 10.244.3.3

# Connect to a specific replica
psql -h postgres-0.postgres-headless -U admin

Session Affinity

By default, Services distribute traffic randomly. If you need a client to always reach the same pod (for example, sticky sessions), use session affinity:

apiVersion: v1
kind: Service
metadata:
  name: api-sticky
spec:
  selector:
    app: api
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1800   # Stick for 30 minutes
  ports:
    - port: 80
      targetPort: 8080

Multi-Port Services

A single service can expose multiple ports:

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: metrics
      port: 9090
      targetPort: 9090
    - name: grpc
      port: 50051
      targetPort: 50051

When defining multiple ports, you must give each one a name.


Next up: Kubernetes Namespaces — how to organize resources, isolate teams, and enforce resource quotas in multi-tenant clusters.