Docker Networking — Bridge, Host, Overlay, and When to Use Each

· 7 min read
Goel Academy
DevOps & Cloud Learning Hub

Two containers are running on the same machine. One is your API server, the other is your database. They need to talk to each other. But how? They are isolated processes — they do not share localhost. Understanding Docker networking is the difference between containers that communicate effortlessly and hours of debugging "connection refused" errors.

The Default Bridge Network

When you install Docker, it creates a network called bridge. Every container you start without specifying a network joins this default bridge.

# Start two containers on the default bridge
docker run -d --name web alpine sleep 3600
docker run -d --name api alpine sleep 3600

# They can ping each other by IP...
docker inspect web --format '{{.NetworkSettings.IPAddress}}'
# 172.17.0.2

docker exec api ping -c 2 172.17.0.2
# PING 172.17.0.2: 64 bytes from 172.17.0.2

# ...but NOT by container name
docker exec api ping -c 2 web
# ping: bad address 'web'

docker stop web api && docker rm web api

The default bridge network does not provide DNS resolution between containers. You must use IP addresses, which change every time a container restarts. This is why the default bridge is essentially useless for real applications.

User-Defined Bridge Networks (Use These)

User-defined bridge networks solve every problem the default bridge has. Containers can reach each other by name, and you get network isolation between different application stacks.

# Create a custom network
docker network create app-network

# Start containers on the custom network
docker run -d --name postgres --network app-network \
-e POSTGRES_PASSWORD=secret postgres:16-alpine

docker run -d --name api --network app-network \
-e DATABASE_URL=postgresql://postgres:secret@postgres:5432/postgres \
my-api:latest

# DNS resolution works — 'postgres' resolves to the container's IP
docker exec api ping -c 2 postgres
# PING postgres (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2

Feature                       Default Bridge                 User-Defined Bridge
DNS resolution by name        No                             Yes
Automatic isolation           No (all containers share it)   Yes (per-network)
Connect/disconnect live       No                             Yes
Legacy --link support         Yes                            Not needed
Recommended for production    No                             Yes

Host Network Mode

Host networking removes network isolation entirely. The container shares the host's network stack — no NAT, no port mapping, no virtual bridge.

# Container binds directly to host port 80
docker run -d --name nginx --network host nginx:alpine

# No -p flag needed — nginx listens on host's port 80 directly
curl http://localhost:80

When to use host networking:

  • Performance-critical applications where NAT overhead matters (high-throughput, low-latency)
  • Monitoring tools that need to see all host network traffic
  • Applications binding to many ports where mapping each one is impractical

When to avoid it:

  • You lose port isolation — two containers cannot bind the same port
  • Only works on Linux (Docker Desktop on Mac/Windows uses a VM, so "host" is the VM's network)

Overlay Networks (Swarm and Multi-Host)

Overlay networks span multiple Docker hosts. They are the networking backbone of Docker Swarm and enable containers on different physical machines to communicate as if they were on the same network.

# Initialize Swarm mode (required for overlay)
docker swarm init

# Create an overlay network
docker network create --driver overlay --attachable my-overlay

# Deploy a service across multiple nodes
docker service create --name web \
--network my-overlay \
--replicas 3 \
nginx:alpine

# Containers on any node can reach each other by service name

Overlay networks use VXLAN tunneling under the hood — they encapsulate container traffic inside UDP packets that travel between hosts. The performance overhead is minimal for most workloads.
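You can peek at this machinery yourself. Swarm's inter-node traffic uses a fixed set of ports (2377/tcp for cluster management, 7946/tcp+udp for node gossip, 4789/udp for the VXLAN data plane), and the overlay's VXLAN segment ID appears in the network's driver options. A quick check, assuming the my-overlay network created above:

```shell
# Swarm inter-node ports (open these in your firewall):
#   2377/tcp      — cluster management
#   7946/tcp+udp  — node discovery (gossip)
#   4789/udp      — VXLAN data plane
# The overlay's VXLAN ID is visible in the network's driver options
docker network inspect my-overlay --format '{{json .Options}}'
```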

None Network

The none network gives a container no network access at all. It only has a loopback interface.

docker run --rm --network none alpine ip addr
# 1: lo: <LOOPBACK,UP> ...
# inet 127.0.0.1/8 scope host lo
# That's it. No eth0.

Use cases: batch processing jobs that should never make network calls, security-sensitive workloads, or containers that communicate only through shared volumes.
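A quick way to confirm the isolation is to attempt an outbound call — with no interfaces beyond loopback, there is no DNS and no route anywhere:

```shell
# Outbound attempts fail immediately — no DNS, no routes
docker run --rm --network none alpine wget -T 2 -q -O- http://example.com
# wget: bad address 'example.com'
```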

Macvlan: Direct Physical Network Access

Macvlan assigns a real MAC address to a container, making it appear as a physical device on your network. Your router sees it as another machine.

# Create a macvlan network tied to your physical interface
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
physical-net

# Container gets a real IP on your LAN
docker run -d --name legacy-app \
--network physical-net \
--ip 192.168.1.100 \
my-legacy-app:latest

This is useful for legacy applications that expect to be directly on the network, or IoT scenarios where devices need to discover the container via mDNS/broadcast.

Port Mapping Deep Dive

Port mapping (-p) creates NAT rules that forward traffic from the host to a container. The syntax is more flexible than most people realize.

# Basic: host port 8080 → container port 80
docker run -d -p 8080:80 nginx:alpine

# Bind to specific interface only (not all interfaces)
docker run -d -p 127.0.0.1:8080:80 nginx:alpine

# Random host port → container port 80
docker run -d -p 80 nginx:alpine
docker port $(docker ps -lq) 80   # -l targets the most recently created container
# 0.0.0.0:32768

# UDP port mapping
docker run -d -p 5353:53/udp my-dns-server

# Multiple ports
docker run -d -p 80:80 -p 443:443 nginx:alpine
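Those mappings are not magic — on Linux, Docker implements them as DNAT rules in the host's iptables nat table. You can inspect them directly (requires root):

```shell
# List the DNAT rules Docker created for published ports
sudo iptables -t nat -L DOCKER -n
# DNAT entries look like: tcp dpt:8080 to:172.17.0.2:80
```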

DNS Resolution in User-Defined Networks

Docker runs an embedded DNS server at 127.0.0.11 for user-defined networks. Every container is registered by its name and any network aliases.

# Create network and containers with aliases
docker network create backend

docker run -d --name postgres-primary \
--network backend \
--network-alias db \
--network-alias database \
-e POSTGRES_PASSWORD=secret \
postgres:16-alpine

# All of these resolve to the same container
docker run --rm --network backend alpine nslookup postgres-primary
docker run --rm --network backend alpine nslookup db
docker run --rm --network backend alpine nslookup database
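You can see the embedded resolver from inside any container on a user-defined network — its /etc/resolv.conf points straight at it:

```shell
# The container's resolver is Docker's embedded DNS server
docker run --rm --network backend alpine cat /etc/resolv.conf
# nameserver 127.0.0.11
```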

Network aliases are powerful for blue-green deployments — point the alias at the new container without changing application configuration.
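A minimal sketch of that swap, using a hypothetical replacement container named postgres-standby (while both containers hold the alias, DNS for 'db' returns both IPs, so disconnect the old one promptly):

```shell
# Bring up the replacement with the same 'db' alias
docker run -d --name postgres-standby \
--network backend --network-alias db \
-e POSTGRES_PASSWORD=secret postgres:16-alpine

# Retire the old container from the network; 'db' now resolves
# only to postgres-standby — no application config changed
docker network disconnect backend postgres-primary
```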

Inspecting and Troubleshooting Networks

# List all networks
docker network ls

# Inspect a network — see connected containers and their IPs
docker network inspect app-network

# See which networks a container is connected to
docker inspect api --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool

# Connect a running container to an additional network
docker network connect frontend-network api

# Disconnect a container from a network
docker network disconnect app-network api

# Remove unused networks
docker network prune
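One more trick worth knowing: attach a throwaway container to the network you are debugging and probe from the inside — often faster than reasoning about it from the host (the names and ports below are illustrative):

```shell
# Drop into a shell inside the network, then probe:
#   ping api, nslookup postgres, wget -qO- http://api:3000, ...
docker run --rm -it --network app-network alpine sh
```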

Network Driver Comparison

Driver                 Scope        DNS          Multi-Host  Performance            Use Case
bridge (default)       Single host  No           No          Good                   Quick testing only
bridge (user-defined)  Single host  Yes          No          Good                   Most single-host apps
host                   Single host  Host's DNS   No          Best                   Performance-critical
overlay                Multi-host   Yes          Yes         Good (VXLAN overhead)  Swarm / cluster
macvlan                Single host  No           No          Best                   Legacy apps, LAN integration
none                   Single host  No           No          N/A                    Isolated batch jobs

Practical Example: Isolated Microservices

Here is a common pattern — frontend and backend on separate networks, with the API server bridging both.

# Create isolated networks
docker network create frontend
docker network create backend

# Database — only on backend
docker run -d --name db --network backend \
-e POSTGRES_PASSWORD=secret postgres:16-alpine

# API — connected to both networks
docker run -d --name api --network backend my-api:latest
docker network connect frontend api

# Web server — only on frontend, talks to API by name
docker run -d --name web --network frontend \
-p 80:80 my-frontend:latest

# web can reach api, but web CANNOT reach db directly

This network segmentation is a simple but effective security boundary.
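You can verify the boundary directly, assuming the containers above are running and their images include ping:

```shell
# Resolves and responds — web and api share the 'frontend' network
docker exec web ping -c 1 api

# Fails — 'db' is not resolvable from any network web is attached to
docker exec web ping -c 1 db
```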

Wrapping Up

Docker networking is not one-size-fits-all. Use user-defined bridge networks for single-host applications, host mode for performance-critical workloads, overlay for multi-host clusters, and macvlan when you need containers to live on your physical network. Always create custom networks — never rely on the default bridge.

In the next post, we will bring everything together with Docker Compose — defining multi-container applications with networking, volumes, and service dependencies in a single declarative file.