Docker Volumes — Your Data Survives Container Restarts
You spin up a Postgres container, create tables, load data, and everything works. Then you stop and remove the container, and all your data vanishes. This is not a bug; it is by design. Containers are ephemeral. But your data should not be.
Why Containers Are Ephemeral
Every container gets its own writable layer on top of the image's read-only layers. When the container is removed, that writable layer is deleted. This is the fundamental design: containers are disposable, replaceable, and should be treated like cattle, not pets.
# Prove it: write data, remove container, data is gone
docker run --name temp-box alpine sh -c "echo 'important data' > /data.txt && cat /data.txt"
docker rm temp-box
docker run --name temp-box alpine cat /data.txt
# Error: No such file or directory
docker rm temp-box
This is where volumes come in.
The Three Types of Docker Storage
| Type | Managed By | Location on Host | Use Case |
|---|---|---|---|
| Named Volume | Docker | /var/lib/docker/volumes/ | Databases, persistent app data |
| Anonymous Volume | Docker | /var/lib/docker/volumes/ | Temporary data; removed automatically only if the container was started with --rm |
| Bind Mount | You | Anywhere on host | Development, config files |
Named Volumes: The Right Default
Named volumes are Docker's recommended approach for persistent data. Docker manages the filesystem location, and they survive container removal.
# Create a named volume
docker volume create app-data
# Run a container with the volume mounted
docker run -d --name db \
  -v app-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:16-alpine
# Insert some data
docker exec -it db psql -U postgres -c "CREATE TABLE test (id serial, name text);"
docker exec -it db psql -U postgres -c "INSERT INTO test (name) VALUES ('survived');"
# Destroy the container
docker stop db && docker rm db
# Start a new container with the same volume — data is still there
docker run -d --name db2 \
  -v app-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:16-alpine
docker exec -it db2 psql -U postgres -c "SELECT * FROM test;"
# id | name
# ----+----------
# 1 | survived
The container is gone. The data lives on.
Managing Volumes
# List all volumes
docker volume ls
# Inspect a volume (see mountpoint, driver, labels)
docker volume inspect app-data
# Remove a specific volume
docker volume rm app-data
# Remove all unused volumes (destructive; be careful)
# Since Docker Engine 23.0 this removes only anonymous unused volumes;
# add -a/--all to also remove unused named volumes
docker volume prune
# Remove all unused volumes without confirmation
docker volume prune -f
Bind Mounts: Your Code, Live in the Container
Bind mounts map a directory on your host directly into the container. They are perfect for development — you edit code on your laptop, and the container sees the changes instantly.
# Mount your current project directory into the container
docker run -d --name dev-server \
  -v "$(pwd)/src:/app/src" \
  -v "$(pwd)/package.json:/app/package.json" \
  -p 3000:3000 \
  node:20-alpine sh -c "cd /app && npm install && npm run dev"
Key difference from named volumes: you control where the data lives on the host. But this also means Docker cannot manage it for you — no easy backup, no portability across machines.
| Feature | Named Volume | Bind Mount |
|---|---|---|
| Managed by Docker | Yes | No |
| Portable across hosts | Yes (with export) | No |
| Pre-populated with image data | Yes | No (host dir hides image content) |
| Works in CI/CD | Yes | Fragile (depends on host paths) |
| Best for | Production data | Development code |
Volume Permissions: The Silent Killer
One of the most common Docker headaches is permission errors with volumes. The container process runs as a specific user (often root or a service user), but the volume files may be owned by a different UID.
# Problem: App runs as UID 1000 but volume was created by root
docker run -d --name app \
  -v app-data:/app/data \
  myapp:latest
# Error: EACCES: permission denied, open '/app/data/config.json'
# Solution 1: Set ownership in Dockerfile
# In your Dockerfile:
# RUN mkdir -p /app/data && chown -R 1000:1000 /app/data
# USER 1000
# Solution 2: Init container pattern — fix permissions before app starts
docker run --rm -v app-data:/data alpine chown -R 1000:1000 /data
Volume Backup and Restore
Named volumes live inside Docker's internal storage (/var/lib/docker/volumes/ on Linux). Copying that directory from the host is unreliable and non-portable, so use a temporary container to access the volume data instead.
# Backup: Mount volume + host directory, tar the data
docker run --rm \
  -v app-data:/source:ro \
  -v "$(pwd)/backups:/backup" \
  alpine tar czf /backup/app-data-backup.tar.gz -C /source .
# Restore: Extract tar into a new volume
docker volume create app-data-restored
docker run --rm \
  -v app-data-restored:/target \
  -v "$(pwd)/backups:/backup" \
  alpine tar xzf /backup/app-data-backup.tar.gz -C /target
This pattern works for any volume — databases, file uploads, configuration stores.
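The tar mechanics here are independent of Docker, so you can sanity-check the round trip locally. This sketch uses throwaway temp directories standing in for the source and restored volumes, and shows why `-C` matters: the archive stores relative paths, not absolute host paths.

```shell
# Create a throwaway "volume" directory with some data
src=$(mktemp -d)
dest=$(mktemp -d)
echo "survived" > "$src/test.txt"

# Backup: -C changes into the directory first, so the archive
# holds relative paths (./test.txt), not /tmp/...
tar czf backup.tar.gz -C "$src" .

# Restore: extract into the new location
tar xzf backup.tar.gz -C "$dest"

cat "$dest/test.txt"   # prints: survived
```

Because the paths inside the archive are relative, the restore target can be any directory, including a freshly created named volume mounted at a different path.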
Volumes in Docker Compose
Compose makes volume management declarative. Define volumes once, reference them across services.
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes

  app:
    build: .
    volumes:
      - ./src:/app/src             # Bind mount for dev
      - app-uploads:/app/uploads   # Named volume for user uploads
    depends_on:
      - postgres
      - redis

volumes:
  pg-data:       # Docker manages this
  redis-data:    # Docker manages this
  app-uploads:   # Docker manages this
# Start everything
docker compose up -d
# Check volume status (Compose prefixes volume names with the project name;
# by default that is the directory name, assumed here to be "myproject")
docker volume ls | grep myproject
# Tear down containers but KEEP data
docker compose down
# Tear down containers AND DELETE volumes (destructive!)
docker compose down -v
Notice the difference between docker compose down and docker compose down -v. The -v flag destroys your volumes. In production, that flag can ruin your day.
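One common safeguard, sketched below, is declaring the critical volume as external so Compose never creates or deletes it; `docker compose down -v` then leaves it untouched. The volume name `pg-data-prod` is hypothetical and must be created beforehand with `docker volume create pg-data-prod`.

```yaml
services:
  postgres:
    image: postgres:16-alpine
    volumes:
      - pg-data:/var/lib/postgresql/data

volumes:
  pg-data:
    external: true        # Compose will not create or remove this volume
    name: pg-data-prod    # hypothetical pre-created volume on the host
```

The trade-off is that Compose fails to start if the external volume does not exist, which is usually exactly what you want in production.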
Real-World Pattern: Database with Automated Backups
Here is a production-ready pattern — Postgres with a sidecar container that runs nightly backups.
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: production
    volumes:
      - pg-data:/var/lib/postgresql/data
    secrets:
      - db_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  backup:
    image: postgres:16-alpine
    volumes:
      - pg-backups:/backups
    environment:
      PGHOST: postgres
      PGUSER: postgres
    secrets:
      - db_password
    # libpq does not read a PGPASSWORD_FILE variable (only the postgres
    # image entrypoint supports *_FILE), so export PGPASSWORD from the
    # secret inside the script. $$ escapes a literal $ for Compose.
    entrypoint: >
      sh -c "export PGPASSWORD=$$(cat /run/secrets/db_password);
      while true; do
      pg_dump production > /backups/backup_$$(date +%Y%m%d_%H%M%S).sql;
      echo 'Backup completed';
      sleep 86400;
      done"
    depends_on:
      postgres:
        condition: service_healthy

secrets:
  db_password:
    file: ./secrets/db_password.txt

volumes:
  pg-data:
  pg-backups:
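Note the doubled `$$` in the entrypoint: Compose treats a single `$` as the start of its own variable interpolation, so `$$` is how a literal `$` reaches the shell inside the container. The backup filename format itself can be checked in plain shell:

```shell
# Same date format as the backup loop: 8-digit date, 6-digit time
name="backup_$(date +%Y%m%d_%H%M%S).sql"
echo "$name"   # e.g. backup_20240115_030000.sql
```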
Wrapping Up
Volumes are the bridge between Docker's ephemeral world and the persistent data your applications need. Named volumes for production data, bind mounts for development, and always have a backup strategy before you need one.
In the next post, we will explore Docker Networking — how containers talk to each other, the difference between bridge, host, and overlay networks, and how DNS resolution works inside Docker.
