Docker Overlay Networks — Multi-Host Container Communication
Bridge networks work on a single host. Your containers can talk to each other when they are on the same machine. But when you have three application servers across three hosts, a bridge network does nothing for you. Overlay networks solve this — they create a virtual network that spans multiple Docker hosts, letting containers communicate as if they were on the same LAN, regardless of which physical machine they are running on.
Why Overlay Networks
Consider a typical microservices deployment: an API server on host A, a worker on host B, and a database on host C. Without overlay networks, you have two options — expose every service on a host port and use IP addresses (fragile, no service discovery), or set up a third-party networking tool. Overlay networks give you:
- Automatic DNS-based service discovery across hosts.
- Encryption of traffic between hosts.
- Network isolation — services on different overlay networks cannot communicate.
- Load balancing across service replicas.
- No port mapping required for inter-service communication.
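These properties map naturally onto a stack file. A minimal sketch of the three-host deployment described above, written out as a heredoc (the service names, image names, and file name are hypothetical):

```shell
# Hypothetical stack file: three services sharing one overlay network.
# Image names (myapp, myapp-worker) are placeholders for your own builds.
cat > stack.yml <<'EOF'
version: "3.8"
services:
  api:
    image: myapp:latest
    networks: [app-net]
  worker:
    image: myapp-worker:latest
    networks: [app-net]
  db:
    image: postgres:16-alpine
    networks: [app-net]
networks:
  app-net:
    driver: overlay
EOF

# Deploy with: docker stack deploy -c stack.yml myapp
```

Each service can reach the others by name (api, worker, db) with no published ports and no IP addresses anywhere in the file.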
VXLAN Under the Hood
Overlay networks use VXLAN (Virtual Extensible LAN) to encapsulate Layer 2 Ethernet frames inside Layer 3 UDP packets. This means container-to-container traffic is tunneled through the host network without any special router configuration.
# The overlay network packet flow:
#
# Container A (Host 1) → sends packet to Container B (Host 2)
# 1. Packet leaves Container A on the overlay network (10.0.1.5)
# 2. Docker encapsulates it in a VXLAN frame
# 3. VXLAN frame is wrapped in a UDP packet (port 4789)
# 4. UDP packet is sent from Host 1 (192.168.1.10) to Host 2 (192.168.1.11)
# 5. Host 2 receives the UDP packet, decapsulates the VXLAN frame
# 6. Original packet is delivered to Container B (10.0.1.8)
#
# Containers see: 10.0.1.5 → 10.0.1.8 (simple L2 communication)
# Hosts see: 192.168.1.10:random → 192.168.1.11:4789 (UDP tunnel)
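Each layer of encapsulation adds a fixed header, and the sum is where the overlay MTU penalty comes from. A quick sanity check of the arithmetic, assuming standard header sizes with no IP options:

```shell
# VXLAN encapsulation overhead, standard header sizes (IPv4, no options)
outer_eth=14   # outer Ethernet header
outer_ip=20    # outer IPv4 header
outer_udp=8    # outer UDP header (dst port 4789)
vxlan=8        # VXLAN header (flags + 24-bit VNI)

overhead=$((outer_eth + outer_ip + outer_udp + vxlan))
echo "total overhead: $overhead bytes"      # total overhead: 50 bytes

# With a standard 1500-byte host MTU, the overlay is left with:
echo "overlay MTU: $((1500 - overhead))"    # overlay MTU: 1450
```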
# Verify VXLAN port is open between hosts
# On each host, Docker uses UDP port 4789 for VXLAN
# and TCP/UDP port 7946 for control plane gossip
# Check from Host 1 (note: UDP checks can report success even when the
# port is filtered, since nc only fails on an ICMP port-unreachable reply)
nc -zvu 192.168.1.11 4789
# Connection to 192.168.1.11 4789 port [udp/*] succeeded!
# Firewall rules needed for overlay networking
sudo ufw allow 4789/udp # VXLAN data plane
sudo ufw allow 7946/tcp # Control plane
sudo ufw allow 7946/udp # Control plane
sudo ufw allow 2377/tcp # Swarm cluster management
Creating Overlay Networks
Overlay networks require Swarm mode to be initialized (even for standalone containers with --attachable).
# Initialize Swarm if not already done
docker swarm init
# Create a basic overlay network
docker network create --driver overlay app-net
# Create with a specific subnet
docker network create --driver overlay \
--subnet 10.10.0.0/24 \
--gateway 10.10.0.1 \
app-net
# List overlay networks
docker network ls --filter driver=overlay
# NETWORK ID NAME DRIVER SCOPE
# abc123 ingress overlay swarm
# def456 app-net overlay swarm
# Inspect the overlay network
docker network inspect app-net
# [
# {
# "Name": "app-net",
# "Driver": "overlay",
# "Scope": "swarm",
# "IPAM": {
# "Config": [{ "Subnet": "10.10.0.0/24", "Gateway": "10.10.0.1" }]
# },
# "Peers": [
# { "Name": "manager-1", "IP": "192.168.1.10" },
# { "Name": "worker-1", "IP": "192.168.1.11" },
# { "Name": "worker-2", "IP": "192.168.1.12" }
# ]
# }
# ]
Encrypted Overlay Networks
By default, overlay traffic between hosts is not encrypted. VXLAN packets travel as plain UDP. For sensitive traffic, enable encryption.
# Create an encrypted overlay network
docker network create --driver overlay \
--opt encrypted \
secure-net
# IPsec tunnels are created between all nodes using the network
# This adds CPU overhead for encryption/decryption
# Verify encryption is enabled
docker network inspect secure-net --format '{{.Options}}'
# map[encrypted:]
# Under the hood, Docker uses IPsec ESP (Encapsulating Security Payload)
# to encrypt VXLAN traffic. Keys are managed automatically by Swarm.
| Network Type | Encrypted | Performance | Use Case |
|---|---|---|---|
| Overlay (default) | No | Best | Internal services, trusted network |
| Overlay (--opt encrypted) | Yes (IPsec) | ~20-30% overhead | Sensitive data, compliance requirements |
| Bridge | N/A (single host) | Best | Single-host development |
Service Discovery Across Hosts
Every service deployed on an overlay network gets a DNS entry. Containers resolve service names to virtual IPs that load-balance across all replicas.
# Deploy services on the same overlay network
docker service create --name api --network app-net --replicas 3 myapp:latest
docker service create --name db --network app-net --replicas 1 postgres:16-alpine
docker service create --name cache --network app-net --replicas 2 redis:7-alpine
# From inside any container on app-net, DNS resolution works:
docker exec -it $(docker ps -q -f name=api) sh
# Resolve service names
nslookup db
# Name: db
# Address: 10.10.0.5 (Virtual IP — load balanced)
nslookup api
# Name: api
# Address: 10.10.0.10 (Virtual IP — balances across 3 replicas)
# See individual task IPs (round-robin DNS)
nslookup tasks.api
# Name: tasks.api
# Address: 10.10.0.11 (replica 1)
# Address: 10.10.0.12 (replica 2)
# Address: 10.10.0.13 (replica 3)
The VIP (Virtual IP) approach means your application connects to db:5432 and Docker handles routing to the correct container, even if it is on a different physical host. No configuration files with IP addresses. No Consul or etcd for service discovery.
The Ingress Network
The ingress network is a special overlay network created automatically when you initialize Swarm. It handles external traffic coming into published ports.
# When you publish a port, it uses the ingress network
docker service create --name web --publish 80:80 --replicas 3 nginx:alpine
# The ingress network routes external traffic:
# Client → any_swarm_node:80 → ingress network → container running nginx
#
# This is the "routing mesh" — port 80 works on ALL nodes, even those
# not running nginx replicas
# Inspect the ingress network
docker network inspect ingress
# You can see the VIPs and load balancing entries
docker service inspect web --format '{{.Endpoint.VirtualIPs}}'
# [{ingress 10.0.0.5/24} {app-net 10.10.0.10/24}]
# Bypass the routing mesh — publish directly on the host running the task
# (a new service name is needed, since "web" already exists from above)
docker service create --name web-host \
--publish mode=host,target=80,published=80 \
--mode global \
nginx:alpine
# Each node publishes port 80 and routes only to its local container
# Pair with --mode global (one task per node) so two replicas never
# compete for the same host port
Load Balancing in Overlay Networks
Swarm provides two levels of load balancing on overlay networks.
# Level 1: Internal load balancing (service VIP)
# When container A calls "api:3000", the request is balanced
# across all replicas of the api service using IPVS
# Check the IPVS rules (on any Swarm node). Note: Swarm programs IPVS
# inside dedicated network namespaces, so a plain ipvsadm on the host may
# show nothing; look under /var/run/docker/netns/ and use nsenter --net=...
sudo ipvsadm -L -n
# TCP 10.10.0.10:3000 rr
# -> 10.10.0.11:3000 Masq 1 0 0
# -> 10.10.0.12:3000 Masq 1 0 0
# -> 10.10.0.13:3000 Masq 1 0 0
# Level 2: Ingress load balancing (published ports)
# External traffic to any node's published port is balanced
# across all replicas via the ingress network
# DNS round-robin (alternative to VIP)
docker service create --name api \
--network app-net \
--endpoint-mode dnsrr \
--replicas 3 \
myapp:latest
# With dnsrr, resolving "api" returns all container IPs
# The client is responsible for load balancing
# Useful when you need client-side load balancing or sticky sessions
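With dnsrr, the balancing logic moves into the client. A minimal client-side round-robin sketch; the IP list is hard-coded here for illustration, where a real client would take it from the DNS answer for the service name:

```shell
# Client-side round-robin over a resolved task IP list (hard-coded here;
# with dnsrr, a real client would refresh this list from DNS)
ips=("10.10.0.11" "10.10.0.12" "10.10.0.13")
i=0

next_backend() {
  # Pick the next IP, wrapping around with modular indexing
  local ip="${ips[$((i % ${#ips[@]}))]}"
  i=$((i + 1))
  echo "$ip"
}

next_backend   # 10.10.0.11
next_backend   # 10.10.0.12
next_backend   # 10.10.0.13
next_backend   # 10.10.0.11 (wraps around)
```

Sticky sessions would replace the counter with a hash of a client identifier, so the same client always lands on the same task.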
Debugging Overlay Networks
When overlay networking breaks, containers on different hosts cannot communicate. Here is a systematic debugging approach.
# Step 1: Verify the network exists on all nodes
docker network ls --filter driver=overlay
# Step 2: Check that services are attached to the network
docker service inspect api --format '{{.Spec.TaskTemplate.Networks}}'
# Step 3: Ping between containers on different hosts
docker exec -it api_container ping db
# If this fails, the overlay network has a problem
# Step 4: Check VXLAN connectivity between hosts
# On Host 1:
sudo tcpdump -i eth0 udp port 4789 -c 5
# You should see VXLAN traffic when containers communicate
# Step 5: Verify Docker networking internals
docker run --rm --net host nicolaka/netshoot \
tcpdump -i any udp port 4789 -c 10
# Step 6: Check for network namespace issues
# Find the container's network namespace
docker inspect mycontainer --format '{{.NetworkSettings.SandboxKey}}'
# /var/run/docker/netns/abc123
# Enter the namespace and check interfaces
sudo nsenter --net=/var/run/docker/netns/abc123 ip addr
# Look for the overlay interface (vxlan0 or similar)
# Step 7: Nuclear option — recreate the overlay network
docker service update --network-rm app-net api
docker network rm app-net
docker network create --driver overlay app-net
docker service update --network-add app-net api
Overlay vs Bridge Performance
Overlay networks add encapsulation overhead. Here is what to expect.
| Metric | Bridge Network | Overlay Network | Encrypted Overlay |
|---|---|---|---|
| Latency | ~0.05 ms | ~0.2-0.5 ms | ~0.5-1.0 ms |
| Throughput | Near line rate | ~90-95% of line rate | ~70-80% of line rate |
| CPU overhead | Minimal | Low (VXLAN encap) | Moderate (IPsec) |
| MTU | 1500 | 1450 (50 bytes of VXLAN encapsulation) | ~1400 |
# Check the MTU on an overlay network
docker exec mycontainer cat /sys/class/net/eth0/mtu
# 1450
# If you see fragmentation issues or poor performance,
# ensure your host network MTU supports the VXLAN overhead
# Host MTU should be at least 1550 for overlay networks
ip link show eth0 | grep mtu
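That MTU requirement can be checked mechanically. A small guard function you could drop into a provisioning script, assuming the standard 50-byte VXLAN encapsulation overhead:

```shell
# Sanity-check a host MTU before joining the host to a swarm.
# Assumes the standard 50-byte VXLAN encapsulation overhead;
# jumbo-frame networks (MTU 9000) pass trivially.
check_overlay_mtu() {
  local host_mtu=$1
  local overlay_mtu=$((host_mtu - 50))
  if [ "$overlay_mtu" -ge 1500 ]; then
    echo "OK: host MTU $host_mtu leaves overlay MTU $overlay_mtu (>= 1500)"
  else
    echo "WARN: host MTU $host_mtu leaves overlay MTU $overlay_mtu (< 1500)"
  fi
}

check_overlay_mtu 1500   # WARN: host MTU 1500 leaves overlay MTU 1450 (< 1500)
check_overlay_mtu 1550   # OK: host MTU 1550 leaves overlay MTU 1500 (>= 1500)
check_overlay_mtu 9000   # OK: host MTU 9000 leaves overlay MTU 8950 (>= 1500)
```

A 1450-byte overlay MTU usually works fine; the WARN case only matters if your workload assumes full 1500-byte frames end to end.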
Attachable Overlay Networks
By default, overlay networks only work with Swarm services. The --attachable flag lets standalone containers (started with docker run) join an overlay network.
# Create an attachable overlay network
docker network create --driver overlay --attachable shared-net
# Swarm services can use it
docker service create --name api --network shared-net myapp:latest
# Standalone containers can ALSO use it
docker run -d --name debug-tools --network shared-net nicolaka/netshoot sleep infinity
# The standalone container can now reach Swarm services by name
docker exec debug-tools ping api
This is useful for running one-off debugging containers, database migrations, or admin tools that need to reach Swarm services.
Wrapping Up
Overlay networks are what make multi-host Docker deployments practical. Without them, you are left with host port mappings and manual IP management — a fragile setup that breaks the moment a container moves to a different host. With overlay networks, containers get automatic DNS-based service discovery, load balancing, and network isolation across any number of hosts. The VXLAN overhead is small, and for sensitive workloads, encrypted overlays add IPsec protection with manageable CPU cost. If you are running anything beyond a single-host development setup, overlay networks are not optional — they are the foundation.
In the next post, we will cover Docker Monitoring — setting up cAdvisor, Prometheus, and Grafana to get real visibility into container resource usage, performance trends, and health alerting.
