From Docker Compose to Kubernetes — A Migration Guide

· 8 min read
Goel Academy
DevOps & Cloud Learning Hub

Docker Compose is excellent for running multi-container applications on a single machine. But when your application needs to run across multiple nodes, survive hardware failures, scale to thousands of replicas, or serve traffic globally, Compose cannot follow. Kubernetes was built for exactly these problems. This post maps every Compose concept to its Kubernetes equivalent, walks through both automated and manual migration, and shows the hybrid workflow most teams actually use in practice.

When to Migrate

Not every application needs Kubernetes. Here are the signals that tell you it is time.

Migrate when you need:

  • Horizontal scaling beyond a single host
  • High availability (automatic failover across nodes)
  • Zero-downtime deployments with rollback
  • Multi-team service ownership with namespace isolation
  • Service mesh, advanced traffic routing, or canary deployments
  • Centralized secrets management and RBAC

Stay with Compose when:

  • Your application runs on a single server
  • You have fewer than 5 services
  • Your team is small and manages everything
  • You do not need automatic failover
  • A few seconds of downtime during deploys is acceptable

Compose to Kubernetes Mapping

Every Compose concept has a Kubernetes equivalent, but the mapping is rarely one-to-one. One Compose service becomes multiple Kubernetes resources.

| Docker Compose | Kubernetes | Notes |
| --- | --- | --- |
| service | Deployment + Service | Deployment manages pods, Service handles networking |
| ports | Service (ClusterIP/NodePort) + Ingress | Internal vs external exposure split |
| volumes (named) | PersistentVolumeClaim + PersistentVolume | Storage provisioned separately |
| volumes (bind mount) | hostPath or ConfigMap | hostPath for dev, ConfigMap for config files |
| environment | env in Pod spec or ConfigMap | ConfigMaps for non-secret config |
| env_file | ConfigMap (from file) | Created with kubectl create configmap |
| secrets | Secret | Base64-encoded; can use external secret managers |
| networks | NetworkPolicy | K8s pods communicate by default; policies restrict |
| depends_on | initContainers or readiness probes | No direct equivalent; design for independent startup |
| restart: always | restartPolicy: Always (default) | K8s default is already Always |
| deploy.replicas | spec.replicas in Deployment | Same concept, different syntax |
| healthcheck | livenessProbe + readinessProbe | K8s splits into two probe types |
| docker-compose.yml | Multiple YAML files or Helm chart | One file becomes many resources |

Automated Conversion With Kompose

Kompose is a tool that converts Docker Compose files to Kubernetes manifests. It handles the mechanical translation, though you will need to refine the output.

# Install Kompose
# macOS
brew install kompose

# Linux
curl -L https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 -o kompose
chmod +x kompose && sudo mv kompose /usr/local/bin/

# Convert a Compose file
kompose convert -f docker-compose.yml
# INFO Kubernetes file "api-service.yaml" created
# INFO Kubernetes file "api-deployment.yaml" created
# INFO Kubernetes file "db-deployment.yaml" created
# INFO Kubernetes file "db-service.yaml" created
# INFO Kubernetes file "redis-deployment.yaml" created
# INFO Kubernetes file "redis-service.yaml" created
# INFO Kubernetes file "pgdata-persistentvolumeclaim.yaml" created

# Convert and apply directly to a cluster (--stdout prints manifests instead of writing files)
kompose convert -f docker-compose.yml --stdout | kubectl apply -f -

# Generate Helm chart instead of raw manifests
kompose convert -f docker-compose.yml --chart

# Generate manifests for specific controller type
kompose convert -f docker-compose.yml --controller deployment # default
kompose convert -f docker-compose.yml --controller statefulset # for databases

Kompose gets you 70% of the way there. The remaining 30% requires manual adjustment — resource limits, proper probe configuration, ingress rules, and production-grade storage classes.
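
You can also steer Kompose from inside the Compose file itself. A minimal sketch, assuming the documented kompose.service.type and kompose.volume.size conversion labels (verify against your Kompose version, since supported labels have changed over releases):

```yaml
# Illustrative: Kompose reads these labels during conversion.
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    labels:
      kompose.service.type: LoadBalancer  # generate a LoadBalancer Service instead of ClusterIP
  db:
    image: postgres:16-alpine
    labels:
      kompose.volume.size: 10Gi           # size of the generated PersistentVolumeClaim
```

Labels like these let you keep conversion hints next to the services they describe, rather than patching the generated manifests by hand after every run.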

Manual Migration Step by Step

Let us take a real Compose file and migrate it to Kubernetes manually.

# Original docker-compose.yml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://app:secret@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=my-jwt-secret
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret

  redis:
    image: redis:7-alpine

volumes:
  pgdata:
Step 1 — ConfigMap for non-secret environment variables:

# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  # Password omitted from the URL; the app reads it separately from the Secret.
  DATABASE_URL: "postgresql://app@db:5432/myapp"
  REDIS_URL: "redis://redis:6379"
  POSTGRES_DB: "myapp"
  POSTGRES_USER: "app"

Step 2 — Secret for sensitive values:

# Create secrets from the command line (never commit secret YAMLs to git)
kubectl create secret generic api-secrets \
  --from-literal=JWT_SECRET=my-jwt-secret \
  --from-literal=POSTGRES_PASSWORD=secret
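
Keep in mind that Kubernetes Secrets are base64-encoded, not encrypted, and the encoding is trivially reversible. You can see this with the base64 tool, using the placeholder value from this guide:

```shell
# Base64 is an encoding, not encryption: anyone with read access
# to the Secret object can recover the original value.
echo -n 'my-jwt-secret' | base64
# bXktand0LXNlY3JldA==

echo 'bXktand0LXNlY3JldA==' | base64 -d
# my-jwt-secret
```

This is exactly what kubectl get secret api-secrets -o yaml would show, which is why Secret manifests stay out of git and why etcd encryption at rest or an external secret manager is worth configuring for production.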

Step 3 — Deployment and Service for the API:

# k8s/api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry.io/myapp-api:1.0.0
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: api-config
          env:
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: JWT_SECRET
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP

Step 4 — StatefulSet for the database:

# k8s/db-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: api-config
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: POSTGRES_PASSWORD
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - port: 5432
  clusterIP: None # Headless service for StatefulSet

Health Checks to Probes

Compose has a single healthcheck. Kubernetes splits this into three probe types, each serving a different purpose.

# Compose healthcheck:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
  interval: 30s
  timeout: 5s
  retries: 3

# Kubernetes equivalent — three separate probes:
livenessProbe:   # "Is the process alive?" — restart the container if it fails
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:  # "Can it handle traffic?" — remove from the Service if it fails
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 2

startupProbe:    # "Has it finished starting?" — gives slow apps time to start
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 0
  periodSeconds: 5
  failureThreshold: 30 # 30 * 5s = 150s max startup time

The critical difference: liveness probe failures cause restarts. Readiness probe failures remove the pod from the Service load balancer without restarting it. Startup probes delay liveness and readiness checks until the application is done initializing.

What Compose Features Do Not Translate

Some Compose features have no direct Kubernetes equivalent and require rethinking your approach.

| Compose Feature | K8s Challenge | Solution |
| --- | --- | --- |
| depends_on | No startup ordering | Use init containers or retry logic |
| build: | K8s does not build images | Build in CI/CD, push to a registry |
| volumes (host bind) | No host paths in production | Use PVCs or ConfigMaps |
| links | Deprecated even in Compose | Use DNS service discovery |
| network_mode: host | Security risk | Use hostNetwork: true (rarely) |
| profiles | No equivalent | Use Kustomize overlays or Helm values |

# depends_on replacement: an init container that waits for the database
initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        until nc -z db 5432; do
          echo "Waiting for database..."
          sleep 2
        done
        echo "Database is ready!"
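
The other option from the table above is retry logic in the process itself. A minimal sketch of an entrypoint wrapper; the retry helper and the db:5432 target are illustrative, not from the original post:

```shell
#!/bin/sh
# retry: run a command until it succeeds, up to N attempts with a 2s pause.
retry() {
  attempts=$1
  shift
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    echo "attempt $i failed, retrying in 2s..." >&2
    i=$((i + 1))
    sleep 2
  done
}

# Wait for the database, then hand off to the real server process:
# retry 30 nc -z db 5432 && exec node server.js
```

The advantage over an init container is that the same image behaves identically under Compose and Kubernetes, which fits the hybrid workflow described next.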

The Hybrid Approach

Most teams do not go all-in on Kubernetes for every environment. The pragmatic approach: Compose for local development, Kubernetes for staging and production.

# Project structure for hybrid workflow
myapp/
  docker-compose.yml            # Local development
  docker-compose.override.yml
  k8s/
    base/                       # Shared K8s manifests (Kustomize)
      deployment.yaml
      service.yaml
      configmap.yaml
      kustomization.yaml
    overlays/
      staging/
        kustomization.yaml      # Staging-specific patches
      production/
        kustomization.yaml      # Production-specific patches
  Dockerfile                    # Same image for all environments

# Development workflow
docker compose up               # Fast, local, with volume mounts

# Staging deployment
docker build -t myregistry.io/api:$(git rev-parse --short HEAD) .
docker push myregistry.io/api:$(git rev-parse --short HEAD)
kubectl apply -k k8s/overlays/staging

# Production deployment
kubectl apply -k k8s/overlays/production
kubectl rollout status deployment/api

The same Dockerfile produces the same image. The only difference is where and how it runs. Compose provides the fast feedback loop developers need. Kubernetes provides the reliability and scalability production demands.
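
The overlay directories stay small. A minimal sketch of what a production overlay's kustomization.yaml might contain; the replica count and image tag shown here are illustrative, using the standard Kustomize replicas and images transformers:

```yaml
# k8s/overlays/production/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: api          # Deployment to patch
    count: 5
images:
  - name: myregistry.io/myapp-api
    newTag: "1.0.0"    # pinned tag for production
```

Everything shared lives in base/; each environment only declares what differs, so staging and production cannot silently drift apart.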

Wrapping Up

Migrating from Docker Compose to Kubernetes is not about replacing one tool with another — it is about moving from single-host simplicity to distributed-system resilience. Start with Kompose for the mechanical translation, then refine: replace environment with ConfigMaps and Secrets, convert health checks to liveness and readiness probes, use StatefulSets for databases, and add resource limits to every container. Keep Compose for local development — it is faster and simpler for the inner development loop. Use Kubernetes for staging and production where you need scaling, self-healing, and rolling updates. The hybrid approach gives you the best of both worlds: developer productivity where it matters and operational reliability where it counts.