
Kubernetes Pods — Init Containers, Sidecars, and Pod Lifecycle

· 7 min read
Goel Academy
DevOps & Cloud Learning Hub

You created your first pod, it ran nginx, and you felt like a Kubernetes wizard. But then production happened — your app needed a config file fetched before startup, logs shipped to a central collector, and a graceful shutdown that did not drop in-flight requests. Welcome to the real world of Kubernetes pods.

Pod Internals — More Than Just a Container

A pod is not a container. It is a group of one or more containers that share the same network namespace, IPC namespace, and storage volumes. Every container inside a pod sees localhost as the same machine, and they can communicate over shared volumes without any network overhead.

# Create a simple pod and inspect its internals
kubectl run debug-pod --image=busybox --command -- sleep 3600

# List the containers the API knows about — only your app containers appear here
kubectl get pod debug-pod -o jsonpath='{.status.containerStatuses[*].name}'

# The hidden "pause" container holds the shared network namespace alive.
# It never shows up in the API; you can only see it at the runtime level,
# e.g. with crictl ps on the node itself
kubectl describe pod debug-pod | grep -A 5 "Containers:"
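
To see the shared network namespace concretely, here is a minimal sketch (pod name, container names, and the probe command are illustrative): the busybox container reaches nginx over localhost because both containers share one network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
  - name: prober
    image: busybox:1.36
    # "localhost" resolves inside the pod's shared network namespace,
    # so this request lands on the nginx container's port 80
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80 > /dev/null && echo "reached nginx via localhost"; sleep 10; done']
```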

Multi-Container Pod Patterns

The Sidecar Pattern

A sidecar runs alongside your main container and extends its functionality. The classic example: shipping logs.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
  labels:
    app: web
spec:
  containers:
  - name: app
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx

  - name: log-shipper
    image: fluent/fluent-bit:2.1
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/input
    - name: fluent-config
      mountPath: /fluent-bit/etc/
  volumes:
  - name: shared-logs
    emptyDir: {}
  - name: fluent-config
    configMap:
      name: fluent-bit-config
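
The pod above references a fluent-bit-config ConfigMap that is not shown. A minimal sketch of what it might contain, assuming a tail input over the shared volume and stdout as a stand-in output (in practice you would point the output at your log collector):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [INPUT]
        Name   tail
        Path   /var/log/input/*.log
    [OUTPUT]
        # stdout is a placeholder; swap in es, loki, forward, etc.
        Name   stdout
        Match  *
```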

The Ambassador Pattern

An ambassador proxies network traffic from your main container to external services. Think of it as a local proxy that handles connection pooling, retries, or TLS termination.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: app
    image: myapp:1.0
    env:
    - name: DB_HOST
      value: "localhost"   # Talks to ambassador on localhost
    - name: DB_PORT
      value: "5432"

  - name: db-ambassador
    image: haproxy:2.8
    ports:
    - containerPort: 5432
    volumeMounts:
    - name: haproxy-config
      mountPath: /usr/local/etc/haproxy/
  volumes:
  - name: haproxy-config
    configMap:
      name: haproxy-cfg
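
A sketch of the haproxy-cfg ConfigMap the pod references; the backend hostname prod-db.example.com is a placeholder for your real database endpoint:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-cfg
data:
  haproxy.cfg: |
    defaults
        mode tcp
        timeout connect 5s
        timeout client  1m
        timeout server  1m
    frontend postgres_in
        bind *:5432
        default_backend postgres_out
    backend postgres_out
        # Placeholder endpoint; swap in your actual database host
        server db1 prod-db.example.com:5432 check
```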

The Adapter Pattern

An adapter transforms the output of your main container into a format that external systems expect. A common use case: converting custom metrics into Prometheus format.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9113"
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-status-conf
      mountPath: /etc/nginx/conf.d/

  - name: prometheus-adapter
    image: nginx/nginx-prometheus-exporter:0.11
    args:
    - "-nginx.scrape-uri=http://localhost:80/stub_status"
    ports:
    - containerPort: 9113
  volumes:
  - name: nginx-status-conf
    configMap:
      name: nginx-status
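
For the exporter to have something to scrape, the nginx-status ConfigMap must enable nginx's stub_status endpoint. A minimal sketch (file name and allow-list are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-status
data:
  status.conf: |
    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
        }
        location /stub_status {
            stub_status;
            # Only the adapter, which shares localhost with nginx, may scrape
            allow 127.0.0.1;
            deny all;
        }
    }
```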

Init Containers — Run Setup Before Your App Starts

Init containers run to completion before any app containers start. They are perfect for setup tasks: waiting for a database, downloading config files, or running migrations.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z postgres-service 5432; do echo "Waiting for DB..."; sleep 2; done']

  - name: run-migrations
    image: myapp:1.0
    command: ['python', 'manage.py', 'migrate']
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: url

  containers:
  - name: app
    image: myapp:1.0
    ports:
    - containerPort: 8000
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: url

Init containers run sequentially: the second will not start until the first completes successfully. If an init container fails, the kubelet retries it according to the pod's restartPolicy; with restartPolicy: Never, the whole pod is marked Failed.

Pod Lifecycle Phases

Every pod goes through a lifecycle. Understanding it helps you debug why pods get stuck.

| Phase | Description | Common Causes |
|---|---|---|
| Pending | Pod accepted but not yet scheduled, or images still pulling | Insufficient resources, image pull errors, node affinity mismatch |
| Running | Pod bound to a node; at least one container is running or starting | Normal operation |
| Succeeded | All containers terminated successfully (exit 0) | Jobs and batch workloads |
| Failed | All containers terminated, at least one with a non-zero exit code | Application crash, OOMKilled |
| Unknown | Pod state cannot be determined | Node communication failure |

# Check pod phase
kubectl get pod myapp -o jsonpath='{.status.phase}'

# Check pod conditions for detailed status
kubectl get pod myapp -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

# Common debugging flow
kubectl describe pod myapp | grep -A 10 "Events:"

Pod Conditions

Beyond the phase, pods have granular conditions:

| Condition | Meaning |
|---|---|
| PodScheduled | Pod has been scheduled to a node |
| Initialized | All init containers have completed |
| ContainersReady | All containers in the pod are ready |
| Ready | Pod is ready to serve traffic (passes readiness probes) |

Restart Policies

The restartPolicy field controls what happens when a container exits.

| Policy | Behavior | Use Case |
|---|---|---|
| Always | Restart container regardless of exit code (default) | Long-running services (web servers, APIs) |
| OnFailure | Restart only on non-zero exit code | Jobs that should retry on failure |
| Never | Never restart | One-shot tasks, debugging |

apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  restartPolicy: OnFailure
  containers:
  - name: processor
    image: myapp:1.0
    command: ['python', 'process_data.py']

Graceful Shutdown — Termination Grace Period and preStop Hooks

When Kubernetes wants to terminate a pod (scaling down, node drain, rolling update), it does not just kill it. Here is the sequence:

  1. Pod is set to Terminating and removed from Service endpoints
  2. preStop hook runs (if defined)
  3. SIGTERM is sent to the main process in each container
  4. Kubernetes waits up to terminationGracePeriodSeconds (default: 30s); the clock starts at step 1, so it includes the time spent in preStop
  5. SIGKILL is sent if any process is still running

apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: myapp:1.0
    ports:
    - containerPort: 8080
    lifecycle:
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - |
            echo "Starting graceful shutdown..."
            # Deregister from service discovery
            curl -X POST http://localhost:8080/admin/drain
            # Wait for in-flight requests to complete
            sleep 15

This is critical for production. Without a proper preStop hook, your users will see 502 errors during deployments because the load balancer still sends traffic to a pod that is shutting down.
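
The preStop hook alone is not enough if the application itself ignores SIGTERM. As a rough sketch, here is a hypothetical busybox-compatible entrypoint that traps SIGTERM and drains before exiting; the self-kill near the end only simulates the kubelet for local testing and would not appear in a real entrypoint:

```shell
#!/bin/sh
# Hypothetical entrypoint: trap SIGTERM and drain before exiting
cleanup() {
  echo "SIGTERM received, draining..."
  sleep 1          # stand-in for finishing in-flight requests
  echo "drained"
  exit 0
}
trap cleanup TERM

echo "app started"
# Simulate the kubelet sending SIGTERM after one second (local testing only)
( sleep 1; kill -TERM $$ ) &
# Main loop with short sleeps so the trap fires promptly
while :; do sleep 1; done
```

The same idea applies in any language: register a SIGTERM handler, stop accepting new work, finish what is in flight, then exit 0 before the grace period runs out.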

Pod Disruption Budgets

A Pod Disruption Budget (PDB) tells Kubernetes how many pods from a set can be unavailable during voluntary disruptions like node drains or cluster upgrades.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2   # At least 2 pods must remain running
  # OR use maxUnavailable: 1 — at most 1 pod can be down
  selector:
    matchLabels:
      app: web

# Check PDB status
kubectl get pdb app-pdb

# Try draining a node — PDB will block if it would violate the budget
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

Without a PDB, a cluster autoscaler or admin running kubectl drain can take down all your replicas at once. In production, always define a PDB.
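
Both minAvailable and maxUnavailable also accept percentages, which scale automatically as you change the replica count. A sketch (PDB name is illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb-percent
spec:
  maxUnavailable: 25%   # relative to the desired number of replicas
  selector:
    matchLabels:
      app: web
```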

Complete Multi-Pattern Pod YAML

Here is a production-ready pod spec that combines init containers, sidecars, lifecycle hooks, and resource limits:

apiVersion: v1
kind: Pod
metadata:
  name: production-pod
  labels:
    app: api
    version: v2
spec:
  terminationGracePeriodSeconds: 45
  initContainers:
  - name: config-loader
    image: busybox:1.36
    command: ['sh', '-c', 'wget -O /config/app.conf http://config-service/api-config']
    volumeMounts:
    - name: config-vol
      mountPath: /config

  containers:
  - name: api
    image: myapi:2.0
    ports:
    - containerPort: 8080
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
    volumeMounts:
    - name: config-vol
      mountPath: /app/config
    - name: shared-logs      # the app writes its logs here for the forwarder
      mountPath: /var/log/app

  - name: log-forwarder
    image: fluent/fluent-bit:2.1
    resources:
      requests:
        cpu: "50m"
        memory: "64Mi"
      limits:
        cpu: "100m"
        memory: "128Mi"
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/input

  volumes:
  - name: config-vol
    emptyDir: {}
  - name: shared-logs
    emptyDir: {}

Next up: we will build on pods and explore Kubernetes Deployments — rolling updates, rollbacks, and scaling strategies that keep your app online during changes.