Docker Development Workflow — Hot Reload, Debugging, and Dev Containers
Most developers experience Docker as a production tool — build an image, push to a registry, deploy. But the development experience is equally important. Waiting 60 seconds for a rebuild every time you change a line of code is a productivity killer. This post covers the tools and patterns that make Docker development feel as fast as local development: volume mounts, compose watch, remote debugging, and Dev Containers that give every team member an identical environment in seconds.
Development vs Production Dockerfiles
The first rule of Docker development: do not use your production Dockerfile for local development. Production images are optimized for size and security. Development images are optimized for speed and feedback.
# Dockerfile (production)
FROM node:20-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# The build step needs devDependencies; prune them afterwards so the
# final image only receives production dependencies
RUN npm run build && npm prune --omit=dev
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["dist/server.js"]
# Dockerfile.dev (development)
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
# Development server with hot reload
CMD ["npm", "run", "dev"]
The development Dockerfile skips multi-stage builds, installs dev dependencies, and runs the hot-reloading development server instead of the compiled build output. The trade-off is intentional — a 1 GB image that rebuilds in 2 seconds beats a 100 MB image that takes 90 seconds.
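Rebuild speed also depends on how much context gets sent to the Docker daemon on every build. A minimal .dockerignore (entries here are illustrative, adjust to your project) keeps heavyweight directories out of the build context so COPY . . stays fast:

```
# .dockerignore: keep the build context small
node_modules
dist
.git
*.log
.env
```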
Volume Mounts for Hot Reload
Volume mounts are the simplest way to get live code changes into a running container. Your local file changes appear instantly inside the container without rebuilding.
# docker-compose.dev.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      # Mount source code — changes appear instantly
      - ./src:/app/src
      # Mount config files
      - ./tsconfig.json:/app/tsconfig.json
      # Anonymous volume to preserve node_modules from the image
      - /app/node_modules
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DEBUG=app:*

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    volumes:
      - ./frontend/src:/app/src
      - ./frontend/public:/app/public
      - /app/node_modules
    ports:
      - "5173:5173"
    environment:
      - VITE_API_URL=http://localhost:3000
# Start the dev environment
docker compose -f docker-compose.dev.yml up --build
# The anonymous volume trick explained:
# If you bind-mount the whole project with "- .:/app":
#   the host directory shadows /app inside the container, including
#   the node_modules installed during the image build, and the app
#   crashes with missing dependencies
# Adding "- /app/node_modules" as an anonymous volume:
#   preserves the image's node_modules in a Docker-managed volume
#   while the host mount covers everything else in /app
The anonymous volume pattern (- /app/node_modules) is critical whenever a bind mount covers /app. Mounting only ./src:/app/src, as above, leaves node_modules untouched, but the moment you mount the project root with .:/app, the host directory shadows the node_modules installed during the image build; the anonymous volume keeps it available.
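For contrast, here is a hypothetical service that bind-mounts the whole project root, the case where the anonymous volume is strictly required:

```yaml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      # Whole-project bind mount: host files shadow everything in /app...
      - .:/app
      # ...except this path, which stays on a Docker-managed volume
      - /app/node_modules
```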
Docker Compose Watch
Docker Compose watch (introduced in Compose 2.22) goes beyond simple volume mounts. It watches for file changes and can sync files, rebuild images, or restart services depending on the change.
# docker-compose.yml with watch configuration
services:
  api:
    build:
      context: ./api
    ports:
      - "3000:3000"
    develop:
      watch:
        # Sync source files — instant hot reload
        - action: sync
          path: ./api/src
          target: /app/src
        # Rebuild when dependencies change
        - action: rebuild
          path: ./api/package.json
        # Restart when config changes
        - action: sync+restart
          path: ./api/config
          target: /app/config

  frontend:
    build:
      context: ./frontend
    ports:
      - "5173:5173"
    develop:
      watch:
        - action: sync
          path: ./frontend/src
          target: /app/src
        - action: rebuild
          path: ./frontend/package.json
# Start with watch mode
docker compose watch
# What each action does:
# sync — copies changed files into the container (like volume mount)
# rebuild — rebuilds the image and recreates the container
# sync+restart — copies files then restarts the container process
# Watch is better than volume mounts because:
# 1. No file permission issues (copies, not mounts)
# 2. Works across all platforms (no Docker Desktop file sharing overhead)
# 3. Selective rebuild/restart based on what changed
# 4. No anonymous volume hacks needed
Compose watch is particularly useful on macOS and Windows where Docker volume mounts go through a file sharing layer that adds latency. The sync action copies files directly into the container filesystem, avoiding that overhead entirely.
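Watch rules also accept an ignore list, which is useful when you sync a broad directory but want to skip heavyweight paths. A sketch (paths are illustrative):

```yaml
develop:
  watch:
    - action: sync
      path: ./api
      target: /app
      ignore:
        # Skip paths that are large or owned by the container
        - node_modules/
        - dist/
```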
Debugging in Containers
Attaching a debugger to a containerized process requires exposing the debug port and configuring your IDE to connect.
# docker-compose.debug.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - ./src:/app/src
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"  # Node.js debug port
    command: ["node", "--inspect=0.0.0.0:9229", "src/server.js"]
    # For Python:
    # command: ["python", "-m", "debugpy", "--listen", "0.0.0.0:5678", "app.py"]
// .vscode/launch.json — attach to Node.js in Docker
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}/src",
      "remoteRoot": "/app/src",
      "restart": true,
      "skipFiles": ["<node_internals>/**"]
    },
    {
      "name": "Docker: Attach to Python",
      "type": "debugpy",
      "request": "attach",
      "connect": { "host": "localhost", "port": 5678 },
      "pathMappings": [
        { "localRoot": "${workspaceFolder}", "remoteRoot": "/app" }
      ]
    }
  ]
}
The localRoot/remoteRoot mapping is what tells the debugger how to translate file paths between your machine and the container. Set breakpoints in VS Code, start the container, attach the debugger, and you get the full debugging experience — breakpoints, variable inspection, step-through — all running inside the container.
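One caveat: with --inspect the process starts running immediately, so breakpoints in startup code are missed. To debug initialization, use --inspect-brk instead, which pauses the process on the first line until a debugger attaches (same compose file as above, only the command changes):

```yaml
# Pause on startup until VS Code attaches to port 9229
command: ["node", "--inspect-brk=0.0.0.0:9229", "src/server.js"]
```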
Dev Containers Specification
Dev Containers take the Docker development experience further. Instead of running your code in a container while your editor runs on the host, Dev Containers put everything — editor, terminal, extensions, tools — inside the container.
// .devcontainer/devcontainer.json
{
  "name": "My Project Dev Container",
  "dockerComposeFile": "../docker-compose.dev.yml",
  "service": "api",
  "workspaceFolder": "/app",
  "customizations": {
    "vscode": {
      "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "ms-python.python",
        "bradlc.vscode-tailwindcss"
      ],
      "settings": {
        "editor.formatOnSave": true,
        "editor.defaultFormatter": "esbenp.prettier-vscode",
        "terminal.integrated.defaultProfile.linux": "bash"
      }
    }
  },
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/git:1": {},
    "ghcr.io/devcontainers/features/github-cli:1": {}
  },
  "forwardPorts": [3000, 5173, 5432],
  "postCreateCommand": "npm install && npm run db:migrate",
  "remoteUser": "node"
}
# .devcontainer/Dockerfile
FROM mcr.microsoft.com/devcontainers/typescript-node:20
# Install additional tools your team needs
RUN apt-get update && apt-get install -y \
      postgresql-client \
      redis-tools \
      jq \
    && rm -rf /var/lib/apt/lists/*
# Install global npm tools
RUN npm install -g @nestjs/cli prisma tsx
When a new developer clones the repo and opens it in VS Code, they get a prompt: "Reopen in Container." One click, and they have the complete development environment — correct Node.js version, all extensions configured, database running, ports forwarded. No "works on my machine" — it works in everyone's container.
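The same configuration also works outside VS Code through the Dev Container CLI (the @devcontainers/cli npm package), which is handy for CI pipelines or terminal-only workflows:

```shell
# Install the CLI once
npm install -g @devcontainers/cli
# Build and start the container defined in .devcontainer/
devcontainer up --workspace-folder .
# Run a command inside the running dev container
devcontainer exec --workspace-folder . npm test
```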
Multi-Service Development
Real applications are rarely a single service. Development environments need databases, caches, message queues, and multiple application services running together.
# docker-compose.dev.yml — full development stack
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api/src:/app/src
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      - DATABASE_URL=postgresql://dev:dev@db:5432/myapp
      - REDIS_URL=redis://redis:6379

  worker:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api/src:/app/src
      - /app/node_modules
    command: ["npm", "run", "worker:dev"]
    depends_on:
      - db
      - redis
    environment:
      - DATABASE_URL=postgresql://dev:dev@db:5432/myapp
      - REDIS_URL=redis://redis:6379

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./scripts/seed.sql:/docker-entrypoint-initdb.d/seed.sql
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

volumes:
  pgdata:
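With the stack defined, day-to-day interaction goes through compose subcommands. Service names below match the file above:

```shell
# Start everything in the background
docker compose -f docker-compose.dev.yml up -d
# Follow logs for one service
docker compose -f docker-compose.dev.yml logs -f api
# Open a psql shell against the dev database
docker compose -f docker-compose.dev.yml exec db psql -U dev myapp
# Rebuild and recreate a single service after a Dockerfile change
docker compose -f docker-compose.dev.yml up -d --build api
```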
Database Seeding in Development
Every developer needs consistent test data. Docker makes this straightforward with initialization scripts and health checks.
-- scripts/seed.sql — automatically runs on first container start
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO users (email, name) VALUES
    ('alice@example.com', 'Alice Developer'),
    ('bob@example.com', 'Bob Tester'),
    ('carol@example.com', 'Carol Admin')
ON CONFLICT (email) DO NOTHING;
# Reset and reseed the database
docker compose -f docker-compose.dev.yml down -v # -v removes volumes
docker compose -f docker-compose.dev.yml up -d # Fresh start with seed data
# Or run migrations without destroying the volume
docker compose -f docker-compose.dev.yml exec api npm run db:migrate
docker compose -f docker-compose.dev.yml exec api npm run db:seed
The docker-entrypoint-initdb.d directory in Postgres (and similar mechanisms in MySQL and MongoDB) runs its scripts automatically the first time the container initializes an empty data directory — not on every start. Combined with named volumes, your data persists across container restarts but can be wiped clean with down -v.
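The same directory also accepts shell scripts, which run with a psql client on hand — useful when seeding needs more than plain SQL. A sketch, assuming a hypothetical init-extra.sh mounted alongside seed.sql:

```shell
#!/bin/bash
# scripts/init-extra.sh — mount into /docker-entrypoint-initdb.d/
# Runs once, when the Postgres data directory is first initialized
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-SQL
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SQL
```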
Environment Parity — Dev Equals Production
The most dangerous gap in any deployment pipeline is the difference between development and production environments. Docker closes this gap, but only if you design for it.
# docker-compose.yml — base configuration (shared)
services:
  api:
    image: myapp/api
    environment:
      - DATABASE_URL
      - REDIS_URL
    healthcheck:
      # Note: this requires curl inside the image; a distroless image
      # needs a different check (for example a small Node script)
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3

# docker-compose.override.yml — development overrides (auto-loaded)
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api/src:/app/src
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - DATABASE_URL=postgresql://dev:dev@db:5432/myapp
      - NODE_ENV=development
# Development: automatically merges docker-compose.yml + docker-compose.override.yml
docker compose up
# Production: explicitly specify only the production file
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# The principle:
# Same Dockerfile base, same OS, same runtime versions
# Different: build mode, debug tools, volume mounts, port exposure
The key principle is that your development container should run the same OS, the same language version, and the same system libraries as production. The only differences should be debug tools, volume mounts, and configuration values. When your development database is Postgres 16 and production is Postgres 16, you catch compatibility issues before they reach users.
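The docker-compose.prod.yml referenced above isn't shown in this post; a minimal sketch (image tag and values are illustrative) would pin versions and add production-only settings, with no bind mounts and no debug ports:

```yaml
# docker-compose.prod.yml — production overrides (must be passed explicitly)
services:
  api:
    image: myapp/api:1.4.2   # hypothetical pinned tag
    restart: always
    environment:
      - NODE_ENV=production
```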
Wrapping Up
A well-designed Docker development workflow eliminates the "works on my machine" problem without sacrificing developer productivity. Use separate development Dockerfiles that prioritize rebuild speed. Use volume mounts or compose watch for instant file synchronization. Expose debug ports so you can set breakpoints and step through code inside containers. Adopt Dev Containers to give every team member an identical, reproducible environment with a single click. And keep development and production environments as close as possible — same base images, same dependency versions, same database engines. The few hours you invest in setting this up will save hundreds of hours of "but it worked locally" debugging across your team.
