
Bind Mounts vs Volumes vs tmpfs — Docker Storage Deep Dive

Goel Academy · DevOps & Cloud Learning Hub · 10 min read

Containers are ephemeral. When a container is removed, everything inside it — application data, uploaded files, database tables — is gone. Docker offers three storage mechanisms to persist data beyond the container lifecycle, and choosing the wrong one causes problems ranging from poor performance to data loss. Here is when and why to use each.

The Three Storage Types

Docker has three ways to give a container access to storage outside its writable layer:

# 1. Bind mount — maps a host path to a container path
docker run -v /host/path:/container/path myapp:latest

# 2. Named volume — Docker-managed storage
docker run -v mydata:/container/path myapp:latest

# 3. tmpfs mount — memory-only, no disk persistence
docker run --tmpfs /container/path myapp:latest

They look similar in the docker run command but behave very differently under the hood.
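One quick way to see which mechanism a container is actually using is `docker inspect` — each entry in its `Mounts` array reports a `Type` of `bind`, `volume`, or `tmpfs`. A minimal sketch (the container name `m1` is arbitrary):

```shell
# Start a container with one bind mount and one tmpfs mount
docker run -d --name m1 -v /tmp:/host-tmp --tmpfs /scratch nginx:alpine

# Inspect the mounts — each entry includes Type, Source, and Destination
docker inspect -f '{{ json .Mounts }}' m1

# Clean up
docker rm -f m1
```

This is also the fastest way to debug "where is this container's data actually going?" on an unfamiliar host.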

Bind Mounts: Host Path to Container

A bind mount takes a file or directory on the host machine and makes it available inside the container. The container sees the host's actual files.

# Mount current directory into the container
docker run -v $(pwd):/app myapp:latest

# Mount a specific host directory (read-write)
docker run -v /var/data/uploads:/app/uploads myapp:latest

# Mount as read-only
docker run -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx:alpine

# Using the --mount syntax (more explicit)
docker run --mount type=bind,source=/var/data,target=/app/data myapp:latest

# Read-only with --mount
docker run --mount type=bind,source=/var/data,target=/app/data,readonly myapp:latest

Bind mounts have a critical behavior that trips people up: the container can modify host files. If you mount /etc into a container running as root, the container can modify your host's system configuration.

# DANGEROUS — container has write access to host files
docker run -v /etc:/host-etc ubuntu:latest bash -c "echo 'hacked' >> /host-etc/motd"

# SAFE — read-only mount
docker run -v /etc:/host-etc:ro ubuntu:latest

When to Use Bind Mounts

  • Local development. Mount source code into the container for hot-reloading.
  • Configuration files. Mount nginx.conf, prometheus.yml, etc.
  • Sharing host data. Logs, certificates, socket files.
# docker-compose.yml — development with bind mounts
services:
  api:
    build: .
    volumes:
      - ./src:/app/src                                # Source code for hot reload
      - ./config:/app/config:ro                       # Config files (read-only)
      - /var/run/docker.sock:/var/run/docker.sock:ro  # Docker socket

Named Volumes: Docker-Managed Storage

Named volumes are managed by Docker. Docker chooses where to store the data on the host (typically /var/lib/docker/volumes/), and you reference volumes by name.

# Create a named volume
docker volume create pgdata

# Use a named volume
docker run -d --name db \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

# Using --mount syntax
docker run -d --name db \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data \
  postgres:16-alpine

# Anonymous volume (auto-generated name, harder to manage)
docker run -d -v /var/lib/postgresql/data postgres:16-alpine

# List all volumes
docker volume ls

# Inspect a volume
docker volume inspect pgdata
# [
#   {
#     "Name": "pgdata",
#     "Driver": "local",
#     "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
#     "Labels": {},
#     "Scope": "local"
#   }
# ]

# The data lives here on the host
ls /var/lib/docker/volumes/pgdata/_data/

When to Use Named Volumes

  • Database storage. PostgreSQL, MySQL, MongoDB, Redis.
  • Persistent application data. User uploads, generated files.
  • Data that should survive container replacement. Upgrade the image, keep the data.
# docker-compose.yml — production with named volumes
services:
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  api:
    image: myapp:latest
    volumes:
      - uploads:/app/uploads

volumes:
  pgdata:
  redis-data:
  uploads:
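Named volumes live under Docker's own directory tree rather than a path you chose, so the usual backup pattern is a throwaway container that tars the volume's contents. A sketch using the `pgdata` volume from above (the archive name and host paths are placeholders):

```shell
# Back up the pgdata volume into the current directory
# via a throwaway Alpine container
docker run --rm \
  -v pgdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata-backup.tar.gz -C /data .

# Restore into a fresh volume
docker volume create pgdata-restored
docker run --rm \
  -v pgdata-restored:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/pgdata-backup.tar.gz -C /data
```

For a database, stop the container (or use the database's own dump tool) before taking the tar, so the files are in a consistent state.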

tmpfs Mounts: Memory-Only Storage

tmpfs mounts exist in the host's memory (RAM) only. They are never written to disk, and they disappear when the container stops. This makes them perfect for sensitive data that should not persist.

# Basic tmpfs mount
docker run -d --tmpfs /app/tmp myapp:latest

# With size limit and options
docker run -d \
  --tmpfs /app/tmp:rw,noexec,nosuid,size=100m \
  myapp:latest

# Using --mount syntax
docker run -d \
  --mount type=tmpfs,target=/app/tmp,tmpfs-size=100m \
  myapp:latest

When to Use tmpfs

  • Sensitive temporary data. Session tokens, encryption keys in transit.
  • High-performance scratch space. Processing temporary files at RAM speed instead of disk speed.
  • Read-only containers. Use --read-only with tmpfs for write paths.
# Read-only container with tmpfs for necessary write paths
docker run -d --name secure-api \
  --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=50m \
  --tmpfs /var/run:rw,noexec,nosuid \
  myapp:latest

# docker-compose.yml
services:
  api:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp:size=100m
      - /var/run

Comparison Table

| Feature | Bind Mount | Named Volume | tmpfs |
|---|---|---|---|
| Managed by | User (host path) | Docker | Kernel (RAM) |
| Persists after container removal | Yes (on host) | Yes (in Docker) | No |
| Survives host reboot | Yes | Yes | No |
| Host path required | Yes | No | No |
| Portable across hosts | No (path-dependent) | Yes (by name) | N/A |
| Performance | Host filesystem speed (slow on Docker Desktop file sharing) | Native on Linux | Fastest (RAM speed) |
| Pre-populated from image | No (host path shadows it) | Yes (on first use) | No |
| Works in Swarm services | Risky (path must exist on every node) | Yes | Yes |
| Backup ease | Direct file access | tar via a helper container | N/A |
| Best for | Dev, config files | Databases, persistent data | Temp files, secrets |

The "pre-populated from image" row is important. When you mount a named volume to a container path that already contains files in the image, Docker copies those files into the volume — but only on first use, while the volume is still empty. Bind mounts never do this: the host path completely shadows whatever the image had at that path.
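The difference is easy to see with a quick experiment. A sketch using nginx:alpine, whose image ships config files in /etc/nginx (the volume and directory names are arbitrary):

```shell
# Named volume: on first use, Docker copies the image's files into it
docker volume create nginx-conf
docker run --rm -v nginx-conf:/etc/nginx nginx:alpine ls /etc/nginx
# Lists the image's config files, which now also live in the volume

# Bind mount: an empty host directory shadows the image's files entirely
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/etc/nginx nginx:alpine ls /etc/nginx
# Lists nothing — and nginx itself would fail to start without its config
```

This is why mounting an empty host directory over a path the application needs is such a common source of "it works without the mount" bugs.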

Storage Drivers

Storage drivers manage the container's writable layer — the filesystem that is internal to the container and lost when it is removed. This is separate from volumes and bind mounts.

# Check which storage driver Docker is using
docker info | grep "Storage Driver"
# Storage Driver: overlay2

| Driver | Backing Filesystem | Performance | Recommended? |
|---|---|---|---|
| overlay2 | ext4, xfs | Excellent | Yes (default, best choice) |
| btrfs | btrfs | Good | For btrfs hosts only |
| zfs | zfs | Good | For zfs hosts only |
| devicemapper | Direct LVM | Fair | Legacy, avoid |
| vfs | Any | Poor (no CoW) | Testing only |

overlay2 is the default and recommended driver for almost all use cases. It uses a union filesystem with copy-on-write, meaning layers are shared between images and containers efficiently.
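You can see the shared and private layers for a running container via `docker inspect` — with overlay2, the `GraphDriver.Data` field exposes the layer directories. A sketch (the container name `demo` is arbitrary, and the actual paths vary by host):

```shell
# Start a container and look at its overlay2 layer directories
docker run -d --name demo nginx:alpine
docker inspect -f '{{ json .GraphDriver.Data }}' demo
# LowerDir:  the read-only image layers, shared between containers
# UpperDir:  this container's private writable layer
# MergedDir: the unified view the container actually sees as /

docker rm -f demo
```

Two containers from the same image share the same LowerDir entries, which is why starting a second container costs almost no extra disk space.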

# Check overlay2 details
docker info | grep -A 5 "Storage Driver"
# Storage Driver: overlay2
# Backing Filesystem: extfs
# Supports d_type: true
# Using metacopy: false
# Native Overlay Diff: true

Disk Usage and Cleanup

Docker can consume enormous amounts of disk space over time — images, containers, volumes, and build cache all accumulate.

# See what Docker is using
docker system df
# TYPE            TOTAL   ACTIVE  SIZE    RECLAIMABLE
# Images          45      12      8.5GB   5.2GB (61%)
# Containers      15      8       250MB   50MB (20%)
# Local Volumes   23      10      12GB    4GB (33%)
# Build Cache     -       -       3.5GB   3.5GB (100%)

# Detailed view — shows each image, container, volume
docker system df -v

Volume Cleanup

# List all volumes
docker volume ls

# Find dangling volumes (not attached to any container)
docker volume ls --filter dangling=true

# Remove a specific volume
docker volume rm pgdata

# Remove ALL dangling volumes
docker volume prune

# Nuclear option: remove ALL unused volumes (even named ones not in use)
docker volume prune --all
# WARNING: This deletes data! Only do this if you are sure.

Full Cleanup

# Remove unused images, containers, networks, and build cache
docker system prune

# Include volumes in the cleanup
docker system prune --volumes

# Remove everything unused, no confirmation prompt
docker system prune -a --volumes -f
# WARNING: This removes all stopped containers, all unused images,
# all unused volumes, and all build cache. Use with extreme caution.
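If cleanup keeps getting forgotten, it can be scheduled. A sketch of a cron entry (the 03:00 schedule, log path, and 72-hour threshold are assumptions to adapt) that prunes containers, images, and build cache older than the threshold, but deliberately leaves volumes alone so no data is ever deleted automatically:

```shell
# /etc/cron.d/docker-prune — nightly cleanup, volumes excluded on purpose
0 3 * * * root docker system prune -af --filter "until=72h" >> /var/log/docker-prune.log 2>&1
```

The `until` filter applies to containers, images, and build cache; volume pruning has no age filter, so it is safer to leave volumes to a manual, reviewed `docker volume prune`.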

NFS Volumes

For multi-host deployments, NFS volumes allow containers on different hosts to share the same storage.

# Create an NFS volume
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw,nfsvers=4 \
  --opt device=:/shared/data \
  nfs-data

# Use the NFS volume
docker run -d -v nfs-data:/app/data myapp:latest

# docker-compose.yml with NFS volume
services:
  api:
    image: myapp:latest
    volumes:
      - nfs-data:/app/shared

volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.100,rw,nfsvers=4
      device: ":/shared/data"

Volume Plugins

Docker supports third-party volume plugins for cloud storage, distributed filesystems, and enterprise storage systems.

# Install a volume plugin (example: REX-Ray for AWS EBS)
docker plugin install rexray/ebs

# Create a volume backed by AWS EBS
docker volume create --driver rexray/ebs --opt size=100 ebs-data

# Use it like any other volume
docker run -d -v ebs-data:/app/data myapp:latest

Popular volume plugins:

| Plugin | Backend | Use Case |
|---|---|---|
| rexray/ebs | AWS EBS | Persistent block storage on AWS |
| rexray/efs | AWS EFS | Shared filesystem on AWS |
| azure/azurefile | Azure Files | Shared storage on Azure |
| vieux/sshfs | SSH/SFTP | Remote storage via SSH |
| local-persist | Local | Named volumes at custom host paths |

Performance Considerations

Storage type has a significant impact on I/O performance, especially for databases and write-heavy workloads.

# Benchmark write performance with dd
# Bind mount
docker run --rm -v /tmp/bench:/data alpine \
  sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=1000 2>&1 | tail -1"

# Named volume
docker run --rm -v bench-vol:/data alpine \
  sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=1000 2>&1 | tail -1"

# tmpfs
docker run --rm --tmpfs /data:size=2g alpine \
  sh -c "dd if=/dev/zero of=/data/testfile bs=1M count=1000 2>&1 | tail -1"

# Container writable layer
docker run --rm alpine \
  sh -c "dd if=/dev/zero of=/tmp/testfile bs=1M count=1000 2>&1 | tail -1"

Typical relative performance (varies by host):

| Storage Type | Sequential Write | Random I/O | Latency |
|---|---|---|---|
| tmpfs | Fastest (RAM) | Fastest | Lowest |
| Named volume | Fast | Fast | Low |
| Bind mount | Fast (host speed) | Fast | Low |
| Container layer (overlay2) | Slower (CoW overhead) | Slowest | Higher |

The container's writable layer (overlay2) is the slowest because of copy-on-write overhead. Every time a file is modified for the first time, the entire file is copied up from the lower layer. For database workloads, always use a named volume — never store database files in the container layer.
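The copy-up cost is easy to observe directly. A sketch that bakes a large file into a throwaway image and then times the first one-byte append, which forces overlay2 to copy the entire file up from the image layer first (the image name and 512MB size are arbitrary; numbers vary by host):

```shell
# Build a throwaway image containing one large file
cat > Dockerfile.cow <<'EOF'
FROM alpine
RUN dd if=/dev/zero of=/big.img bs=1M count=512
EOF
docker build -f Dockerfile.cow -t cow-demo .

# Appending a single byte triggers a full 512MB copy-up in the
# writable layer before the write can land
docker run --rm cow-demo sh -c "time sh -c 'echo x >> /big.img'"

# The same append against a named volume has no copy-up at all
docker run --rm -v cow-bench:/data cow-demo \
  sh -c "cp /big.img /data/ && time sh -c 'echo x >> /data/big.img'"
```

The second write is dramatically faster because the volume is an ordinary host filesystem, not a union mount.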

When to Use Each Type

Decision tree:

Is the data temporary and sensitive?
→ tmpfs

Is it development source code or config files?
→ Bind mount

Is it a database or persistent application data?
→ Named volume

Does it need to be shared across hosts?
→ NFS volume or volume plugin

Is it a high-performance scratch space?
→ tmpfs (if it fits in RAM)
→ Named volume (if it does not)

Wrapping Up

Docker storage is not one-size-fits-all. Use bind mounts for development workflows where you need host file access. Use named volumes for any data that needs to survive container replacement — databases, uploads, application state. Use tmpfs for sensitive temporary data or when you need RAM-speed I/O. And always monitor disk usage with docker system df and clean up with docker volume prune before your disk fills up at 3 AM.

This post wraps up the advanced Docker topics. From multi-stage builds to security scanning, health checks, environment variables, CI/CD pipelines, logging, resource limits, and storage — you now have the knowledge to run Docker in production with confidence. The next step is to take these containerized applications and orchestrate them at scale with Kubernetes.