# Docker Compose in Production — Profiles, Depends-On, and Restart Policies
"Docker Compose is only for development." You hear this constantly, but it is not universally true. Compose is not the right choice for a 200-service microservices platform, but for a team running 5-15 services on a single server or small cluster, Compose provides everything you need: restart policies, health-based dependency ordering, resource limits, logging, and deployment configuration. The question is not whether Compose can run in production — it is whether your use case fits.
## When Compose in Production Is OK
Compose works well for:
- Small to medium applications (2-15 services) on a single host or small cluster.
- Internal tools that do not need multi-region HA or auto-scaling.
- Staging environments that mirror production topology.
- Side projects and small SaaS products where operational simplicity matters more than infinite scalability.
Compose is the wrong choice when you need:
- Automatic horizontal scaling based on CPU/memory/custom metrics.
- Multi-host orchestration with automatic failover (use Swarm or Kubernetes).
- Advanced deployment strategies (canary, blue-green with traffic splitting).
- Service mesh features (mutual TLS, circuit breaking, observability).
## Restart Policies
Restart policies determine what happens when a container exits. In production, every service should have one.
```yaml
services:
  api:
    image: myapp:latest
    restart: unless-stopped   # Recommended for most services
  worker:
    image: myworker:latest
    restart: on-failure       # Only restart if the process exits with non-zero
  db:
    image: postgres:16-alpine
    restart: always           # Always restart, even after manual stop + reboot
  migration:
    image: myapp:latest
    command: ["python", "manage.py", "migrate"]
    restart: "no"             # Run once and exit — do not restart
```
| Policy | Behavior | Use Case |
|---|---|---|
| no | Never restart (default) | One-shot tasks, migrations, backups |
| on-failure | Restart only on non-zero exit code | Workers that should not restart on graceful shutdown |
| always | Always restart, even after manual docker stop | Critical services that must survive host reboot |
| unless-stopped | Like always, but respects manual docker stop | Most production services |
```bash
# Check the restart policy of a running container
docker inspect api --format '{{.HostConfig.RestartPolicy.Name}}'
# unless-stopped

# Check how many times a container has been restarted
docker inspect api --format '{{.RestartCount}}'
# 3
```
The difference between always and unless-stopped matters. With always, if you manually stop a container and then reboot the host, Docker restarts it. With unless-stopped, it stays stopped. Use unless-stopped so you can manually stop a broken service without it fighting you on restart.
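A related trick: a restart policy can be changed on a running container without recreating it, using docker update (shown here with the same container name as the inspect examples above):

```shell
# Switch a running container to unless-stopped in place
docker update --restart unless-stopped api

# Confirm the change
docker inspect api --format '{{.HostConfig.RestartPolicy.Name}}'
```

This is handy when a service was started with the default policy and you want it to survive a reboot without downtime now.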
## depends_on with Health Conditions
Basic depends_on only guarantees start order — it does not wait for the dependency to be ready. The service_healthy condition fixes this.
```yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 10s
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy   # Wait for DB to be READY, not just started
      redis:
        condition: service_healthy   # Wait for Redis to accept connections
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/myapp
      REDIS_URL: redis://redis:6379
  worker:
    image: myworker:latest
    depends_on:
      api:
        condition: service_started   # Just wait for api to start (no health check)
      redis:
        condition: service_healthy
  migration:
    image: myapp:latest
    command: ["python", "manage.py", "migrate"]
    depends_on:
      db:
        condition: service_healthy
    restart: "no"
```
```bash
# Watch the startup order
docker compose up
# ✔ Container db         Healthy
# ✔ Container redis      Healthy
# ✔ Container migration  Exited (0)  ← ran migration, exited successfully
# ✔ Container api        Started     ← started after db and redis were healthy
# ✔ Container worker     Started
```
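The postgres and redis images ship convenient CLI probes (pg_isready, redis-cli ping); for your own HTTP service you have to define the probe yourself. A minimal sketch, assuming the myapp image contains curl and serves a /health endpoint on port 3000 (both are assumptions, not established above):

```yaml
services:
  api:
    image: myapp:latest
    healthcheck:
      # curl -f exits non-zero on HTTP >= 400, marking the container unhealthy
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 15s
```

With a healthcheck like this in place, dependents such as worker can use condition: service_healthy on api instead of settling for service_started.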
## Profiles for Environment-Specific Services
Profiles let you define services that only start when a specific profile is activated. This avoids separate Compose files for development and production.
```yaml
services:
  api:
    image: myapp:latest
    ports:
      - "3000:3000"
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data

  # Only in development
  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
    profiles:
      - dev
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "1025:1025"
      - "8025:8025"
    profiles:
      - dev

  # Only in production
  monitoring:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    profiles:
      - prod
  backup:
    image: postgres:16-alpine
    command: >
      sh -c "while true; do
      pg_dump -h db -U postgres myapp > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql;
      sleep 86400;
      done"
    volumes:
      - ./backups:/backups
    profiles:
      - prod

volumes:
  pgdata:
```
```bash
# Start without profiles — only api and db start
docker compose up -d

# Start with the dev profile — adds adminer and mailhog
docker compose --profile dev up -d

# Start with the prod profile — adds monitoring and backup
docker compose --profile prod up -d

# Start with multiple profiles
docker compose --profile dev --profile debug up -d
```
Services without a profile are always started. Services with a profile only start when that profile is explicitly activated.
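The --profile flag also has an environment-variable equivalent, COMPOSE_PROFILES, which takes a comma-separated list and can live in the host's environment or its .env file, so the machine itself decides which profile runs:

```shell
# Equivalent to: docker compose --profile dev --profile debug up -d
COMPOSE_PROFILES=dev,debug docker compose up -d
```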
## Resource Limits in Compose
```yaml
services:
  api:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
  worker:
    image: myworker:latest
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M
  db:
    image: postgres:16-alpine
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 2G
        reservations:
          cpus: "1.0"
          memory: 512M
    shm_size: 256m   # Shared memory for PostgreSQL
```
In Compose v2 (the current standard), deploy.resources works without Swarm mode. Limits are enforced by cgroups — exceeding the memory limit triggers an OOM kill.
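To verify the limits actually landed, inspect the container's HostConfig, where Docker records memory in bytes and CPU in nano-CPUs (container name api as in the earlier inspect examples):

```shell
# memory: 512M → 536870912 bytes; cpus: "1.0" → 1000000000 NanoCpus
docker inspect api --format 'mem={{.HostConfig.Memory}} cpu={{.HostConfig.NanoCpus}}'
```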
## Logging Configuration
Production containers should not dump logs to Docker's JSON file driver indefinitely. Configure log rotation and, optionally, centralized logging.
```yaml
services:
  api:
    image: myapp:latest
    logging:
      driver: json-file
      options:
        max-size: "50m"    # Rotate after 50 MB
        max-file: "5"      # Keep 5 rotated files
        compress: "true"   # Compress rotated files
  worker:
    image: myworker:latest
    logging:
      driver: json-file
      options:
        max-size: "20m"
        max-file: "3"

  # Send logs to a centralized system
  log-collector:
    image: myapp:latest
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logserver.example.com:514"
        tag: "myapp-{{.Name}}"
```
Without max-size and max-file, Docker's JSON log driver writes unbounded log files. A busy service can fill a disk in days.
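With rotation configured, the worst-case footprint is simple arithmetic: each container keeps at most max-file files of max-size each (less in practice once compress kicks in). For the api service above:

```shell
# Worst-case JSON-file log footprint per container:
# max-size (MB) x max-file (count), ignoring compression savings.
max_size_mb=50
max_file=5
echo "$((max_size_mb * max_file)) MB"
# 250 MB
```

At roughly 250 MB per api container (and 60 MB per worker), you can size the disk deliberately instead of discovering the limit during an outage.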
## Deploy Section — Replicas, Updates, and Rollbacks
The deploy section configures replication, update strategy, and rollback behavior. With Compose v2 on a single host, replicas and resources work out of the box. Be aware, though, that update_config and rollback_config are honored only by Swarm (docker stack deploy); plain docker compose ignores them and simply recreates containers on update.
```yaml
services:
  api:
    image: myapp:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1           # Update one replica at a time
        delay: 10s               # Wait 10 seconds between updates
        failure_action: rollback
        order: start-first       # Start new before stopping old (zero downtime)
      rollback_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```
```bash
# Scale services with Compose
docker compose up -d --scale api=5
# This creates api-1 through api-5
```

One caveat: multiple replicas cannot all bind the same published host port, so a ports: mapping like "3000:3000" will fail with a port conflict when scaled. Drop the published port (or publish a range) and put a reverse proxy in front; on the internal network, traffic to the service name is spread across replicas by Docker's DNS round-robin.
## Named Volumes with External
For production data, use named volumes and mark critical ones as external so docker compose down -v does not accidentally delete them.
```yaml
services:
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

volumes:
  pgdata:
    external: true        # Must be created manually — docker compose down -v won't delete it
    name: production-pgdata
  redis-data:             # Managed by Compose — docker compose down -v WILL delete it
```
```bash
# Create the external volume before starting
docker volume create production-pgdata

# Now docker compose up will use the existing volume
docker compose up -d

# docker compose down -v removes redis-data but NOT production-pgdata
docker compose down -v
```
This protects your database from accidental deletion during cleanup.
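Marking the volume external protects it from Compose, but not from disk failure; it still needs backups. One common pattern (a sketch; the archive name and paths are arbitrary) is to tar the volume's contents from a throwaway container. For a live Postgres, prefer pg_dump as in the backup service earlier; the tar approach suits stopped services or non-database data:

```shell
# Archive the external volume into the current directory
docker run --rm \
  -v production-pgdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata-backup.tgz -C /data .
```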
## env_file for Configuration Management
Separate configuration from your Compose file using env_file.
```env
# .env.production
DATABASE_URL=postgres://app:password@db:5432/production
REDIS_URL=redis://redis:6379
API_SECRET=production-secret-key
LOG_LEVEL=warn
```
```yaml
services:
  api:
    image: myapp:latest
    env_file:
      - .env.production
    environment:
      # These override values from env_file
      NODE_ENV: production
  worker:
    image: myworker:latest
    env_file:
      - .env.production
    environment:
      WORKER_CONCURRENCY: 10
```
```bash
# Use different env files per environment. Note the distinction:
# --env-file selects the file used to interpolate ${VARS} in the Compose
# file itself (default: .env); env_file: in a service definition injects
# variables into that container's environment.
docker compose --env-file .env.production up -d
docker compose --env-file .env.staging up -d

# Check the resolved configuration
docker compose config
```
Keep .env.production out of version control. Add it to .gitignore and manage it through a secrets manager or deployment tooling.
## Docker Compose vs Docker Stack
docker compose runs on a single host. docker stack deploy runs on a Swarm cluster. They use the same Compose file format but differ in capabilities.
| Feature | docker compose | docker stack deploy |
|---|---|---|
| Hosts | Single | Multi-host (Swarm) |
| Build | Yes (build: directive) | No (images must be pre-built) |
| depends_on | Yes (with health conditions) | No (use health checks + restart) |
| Profiles | Yes | No |
| env_file | Yes | No (use environment + secrets) |
| Secrets | File-based only | Swarm encrypted secrets |
| Networking | Bridge (default) | Overlay (default) |
| Scaling | --scale flag | deploy.replicas |
| Rolling updates | Recreate | True rolling updates |
| Load balancing | Port-based | Routing mesh |
```bash
# Deploy with Compose (single host)
docker compose up -d

# Deploy as a Swarm stack (multi-host)
docker stack deploy -c docker-compose.yml myapp

# The same Compose file works for both,
# but stack deploy ignores build, depends_on, and profiles
```
## Wrapping Up
Docker Compose in production is not a sin — it is a pragmatic choice for the right use case. Set restart policies so services survive crashes and reboots. Use health-check-based depends_on so your API does not start before the database is ready. Use profiles to keep development tools out of production. Set resource limits so one service cannot starve the others. Configure log rotation so your disk does not fill up. And mark critical volumes as external so docker compose down -v does not destroy your database. These configurations turn Compose from a development convenience into a production-ready deployment tool for small-to-medium applications.
In the next post, we will cover container runtime alternatives — Podman, containerd, and CRI-O — and help you decide whether Docker is still the right runtime for your workload.
