Advanced Linux Networking — VLANs, Bridges, and Network Namespaces
Kubernetes networking makes no sense until you understand these Linux primitives. Every pod-to-pod connection, every service load balancer, and every network policy maps directly to Linux networking concepts that have existed for decades. Master these fundamentals and container networking becomes transparent.
The Building Blocks
Linux networking is built from composable primitives. Each one does one thing well.
| Primitive | What It Does | Container Equivalent |
|---|---|---|
| Network namespace | Isolated network stack | Pod network |
| veth pair | Virtual ethernet cable | Pod-to-host connection |
| Bridge | L2 switch | Docker bridge, CNI bridge |
| VLAN | Network segmentation | Network policies (conceptually) |
| iptables/nftables | Packet filtering/NAT | Service routing, network policies |
| tc (traffic control) | Bandwidth shaping | QoS, rate limiting |
| Bond | Link aggregation | High availability NICs |
Network Namespaces
A network namespace is a complete, isolated copy of the network stack: its own interfaces, routing table, iptables rules, and ARP table.
# Create two network namespaces (simulating two containers)
sudo ip netns add container1
sudo ip netns add container2
# List namespaces
ip netns list
# Run a command inside a namespace
sudo ip netns exec container1 ip addr
# Only loopback exists, and it's DOWN
# Bring up loopback inside the namespace
sudo ip netns exec container1 ip link set lo up
# Each namespace has its own routing table
sudo ip netns exec container1 ip route
# Empty — no routes at all
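The create-and-bring-up-loopback boilerplate repeats for every namespace, so it's worth wrapping in a helper. This sketch (the `ns_bootstrap` name is our own, not a standard tool) only prints the commands — a dry run you can review and then pipe to `sudo sh` to apply:

```shell
#!/bin/sh
# ns_bootstrap NAME — print the commands that create namespace NAME with
# loopback up. Dry-run by design: review first, then pipe to `sudo sh`.
ns_bootstrap() {
    echo "ip netns add $1"
    echo "ip netns exec $1 ip link set lo up"
}

ns_bootstrap container3
# ns_bootstrap container3 | sudo sh   # actually create it
```

Printing commands instead of executing them keeps the sketch safe to run unprivileged and makes the plan auditable before anything touches the host.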
veth Pairs: Connecting Namespaces
A veth (virtual ethernet) pair is like a virtual network cable with two ends. Put one end in a namespace and the other on the host (or another namespace).
# Create a veth pair
sudo ip link add veth-c1 type veth peer name veth-c1-br
# Move one end into container1's namespace
sudo ip link set veth-c1 netns container1
# Configure the container side
sudo ip netns exec container1 ip addr add 10.244.1.2/24 dev veth-c1
sudo ip netns exec container1 ip link set veth-c1 up
# Configure the host side
sudo ip addr add 10.244.1.1/24 dev veth-c1-br
sudo ip link set veth-c1-br up
# Test connectivity
sudo ip netns exec container1 ping -c 2 10.244.1.1
This is exactly how Docker connects containers to the host. When you run docker run, Docker creates a veth pair, moves one end into the container's namespace, and attaches the other end to the docker0 bridge.
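A practical question follows: given a container, which host-side veth is its peer? Each end of a veth pair records the other end's interface index, readable as `/sys/class/net/<iface>/iflink` inside the namespace, and `ip -o link` on the host prefixes every interface with its index. The sample output and `peer_of` helper below are an illustrative sketch, not a standard tool:

```shell
#!/bin/sh
# Given a peer ifindex (from `ip netns exec <ns> cat /sys/class/net/<if>/iflink`),
# find the matching host interface in `ip -o link` output.
# This captured output is an illustrative sample:
ip_o_link_sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...
7: veth-c1-br@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...'

peer_of() {   # peer_of IFINDEX — print the interface whose index is IFINDEX
    echo "$ip_o_link_sample" | awk -F': ' -v idx="$1" \
        '$1 == idx { sub(/@.*/, "", $2); print $2 }'
}

peer_of 7    # prints: veth-c1-br
```

On a live system you would replace the sample variable with the real `ip -o link` output; the `@if6` suffix awk strips off is the kernel's own notation for "peer is ifindex 6".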
Linux Bridges: Building a Virtual Switch
A bridge is a Layer 2 switch in software. Multiple veth pairs connect to the bridge, and the bridge forwards frames between them.
# Create a bridge (this is what docker0 is)
sudo ip link add br0 type bridge
sudo ip addr add 10.244.0.1/16 dev br0
sudo ip link set br0 up
# Create veth pairs for two containers (delete the pair from the previous
# section first — the names collide)
sudo ip link del veth-c1-br 2>/dev/null
sudo ip link add veth-c1 type veth peer name veth-c1-br
sudo ip link add veth-c2 type veth peer name veth-c2-br
# Move container ends into their namespaces
sudo ip link set veth-c1 netns container1
sudo ip link set veth-c2 netns container2
# Attach host ends to the bridge
sudo ip link set veth-c1-br master br0
sudo ip link set veth-c2-br master br0
sudo ip link set veth-c1-br up
sudo ip link set veth-c2-br up
# Configure container interfaces
sudo ip netns exec container1 ip addr add 10.244.1.2/16 dev veth-c1
sudo ip netns exec container1 ip link set veth-c1 up
sudo ip netns exec container1 ip link set lo up
sudo ip netns exec container2 ip addr add 10.244.2.2/16 dev veth-c2
sudo ip netns exec container2 ip link set veth-c2 up
sudo ip netns exec container2 ip link set lo up
# Test container-to-container connectivity through the bridge
sudo ip netns exec container1 ping -c 2 10.244.2.2
Inspecting Bridge State
# Show bridge details
bridge link show
# Show MAC address table (which MACs are on which port)
bridge fdb show br br0
# Show spanning tree state (there is no `bridge stp` subcommand; use ip -d)
ip -d link show br0
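The fdb output maps learned MAC addresses to bridge ports, which answers "which container owns this MAC?". A short awk pass turns it into a readable table; the captured lines below are an illustrative sample of `bridge fdb show br br0` output, not live data:

```shell
#!/bin/sh
# fdb_table — print "MAC -> port" for learned entries from
# `bridge fdb show br br0` output on stdin, skipping permanent entries.
fdb_table() {
    awk '!/permanent/ { print $1, "->", $3 }'
}

# Illustrative sample of `bridge fdb show br br0` output:
sample='aa:bb:cc:00:00:01 dev veth-c1-br master br0
aa:bb:cc:00:00:02 dev veth-c2-br master br0
33:33:00:00:00:01 dev br0 self permanent'

echo "$sample" | fdb_table
# On a live system: bridge fdb show br br0 | fdb_table
```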
VLANs: Network Segmentation
VLANs (802.1Q) tag traffic with an ID, allowing a single physical interface to carry multiple isolated networks.
# Load the 802.1Q module
sudo modprobe 8021q
# Create VLAN interfaces on eth0
sudo ip link add link eth0 name eth0.100 type vlan id 100
sudo ip link add link eth0 name eth0.200 type vlan id 200
# Assign IP addresses to each VLAN
sudo ip addr add 10.100.0.1/24 dev eth0.100
sudo ip addr add 10.200.0.1/24 dev eth0.200
# Bring up VLAN interfaces
sudo ip link set eth0.100 up
sudo ip link set eth0.200 up
# Verify VLAN configuration
ip -d link show eth0.100
cat /proc/net/vlan/config
| VLAN ID | Subnet | Purpose |
|---|---|---|
| 100 | 10.100.0.0/24 | Application servers |
| 200 | 10.200.0.0/24 | Database servers |
| 300 | 10.30.0.0/24 | Management network |
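Since every VLAN subinterface needs the same three commands, a plan like the table above can drive a small generator. This is a dry-run sketch (the `vlan_cmds` name is our own, and `eth0` is assumed as the trunk interface); pipe its output to `sudo sh` to apply:

```shell
#!/bin/sh
# vlan_cmds TRUNK ID ADDR — print the commands that create tagged
# subinterface TRUNK.ID with address ADDR. Dry-run: pipe to `sudo sh`.
vlan_cmds() {
    trunk=$1; vid=$2; addr=$3
    echo "ip link add link ${trunk} name ${trunk}.${vid} type vlan id ${vid}"
    echo "ip addr add ${addr} dev ${trunk}.${vid}"
    echo "ip link set ${trunk}.${vid} up"
}

vlan_cmds eth0 100 10.100.0.1/24
vlan_cmds eth0 200 10.200.0.1/24
```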
Network Bonding: Link Aggregation
Bonding combines multiple physical NICs into one logical interface for redundancy and bandwidth aggregation.
# Load the bonding module
sudo modprobe bonding
# Create a bond interface with LACP (802.3ad), link monitoring, and fast LACP rate
sudo ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
# Add physical interfaces to the bond
sudo ip link set eth0 down
sudo ip link set eth1 down
sudo ip link set eth0 master bond0
sudo ip link set eth1 master bond0
# Configure and bring up
sudo ip addr add 10.0.0.10/24 dev bond0
sudo ip link set bond0 up
# Verify bond status
cat /proc/net/bonding/bond0
| Bond Mode | Name | Use Case |
|---|---|---|
| 0 | balance-rr | Round-robin, requires switch config |
| 1 | active-backup | One active, others standby (simplest HA) |
| 2 | balance-xor | Hash-based distribution |
| 4 | 802.3ad | LACP, requires switch support (best throughput) |
| 6 | balance-alb | Adaptive load balancing, no switch config needed |
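When troubleshooting a bond, the questions are usually "what mode is it in?" and "which slaves are up?". This sketch pulls both out of the `/proc/net/bonding/bond0` format; the heredoc sample below is illustrative, standing in for the real file:

```shell
#!/bin/sh
# bond_summary — summarize mode and per-slave link state from
# /proc/net/bonding/<bond> content on stdin.
bond_summary() {
    awk -F': ' '
        /^Bonding Mode/    { print "mode:", $2 }
        /^Slave Interface/ { slave = $2 }
        /^MII Status/ && slave { print slave ":", $2; slave = "" }'
}

# Illustrative sample standing in for /proc/net/bonding/bond0:
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: down'

echo "$sample" | bond_summary
# On a live system: bond_summary < /proc/net/bonding/bond0
```

The first `MII Status` line (the bond's own) is skipped because no slave has been seen yet; each later one is attributed to the most recent `Slave Interface`.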
Traffic Control (tc): Bandwidth Shaping
tc controls how packets are queued and transmitted. It's used for rate limiting, traffic shaping, and simulating network conditions.
Rate Limiting
# Limit outgoing bandwidth on eth0 to 100Mbit
sudo tc qdisc add dev eth0 root tbf rate 100mbit burst 32kbit latency 400ms
# Verify
tc qdisc show dev eth0
# Remove the limit
sudo tc qdisc del dev eth0 root
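The `burst` value is not arbitrary: per tc-tbf(8), the bucket must hold at least one scheduler tick's worth of bytes (rate / HZ), or the configured rate can never be reached. A quick sanity check (HZ=1000 is assumed here; check your kernel's CONFIG_HZ):

```shell
#!/bin/sh
# Minimum tbf burst: one timer tick's worth of bytes (rate / HZ),
# per tc-tbf(8). HZ=1000 assumed; adjust for your kernel.
rate_bits=100000000                     # 100mbit
hz=1000
min_burst=$(( rate_bits / 8 / hz ))     # bytes per tick
echo "minimum burst for 100mbit at HZ=${hz}: ${min_burst} bytes"
```

So for 100mbit at HZ=1000 the burst must be at least 12500 bytes; undersizing it silently caps throughput below the stated rate.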
Simulating Network Conditions
This is invaluable for testing how your application behaves under adverse network conditions.
# Add 100ms latency with 10ms jitter
sudo tc qdisc add dev eth0 root netem delay 100ms 10ms
# Add 1% packet loss
sudo tc qdisc change dev eth0 root netem delay 100ms 10ms loss 1%
# Add packet corruption (useful for testing TCP resilience)
sudo tc qdisc change dev eth0 root netem delay 50ms corrupt 0.1%
# Remove all tc rules
sudo tc qdisc del dev eth0 root
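For repeatable testing it helps to name the conditions you simulate. The profile names below are our own convention, not anything tc defines; the function is another dry-run generator, so pipe its output to `sudo sh` to apply:

```shell
#!/bin/sh
# netem_cmd DEV PROFILE — print the tc command for a named test profile.
# Profile names are our own convention. Dry-run: pipe to `sudo sh`.
netem_cmd() {
    case $2 in
        wan)   echo "tc qdisc add dev $1 root netem delay 80ms 20ms" ;;
        lossy) echo "tc qdisc add dev $1 root netem loss 5%" ;;
        reset) echo "tc qdisc del dev $1 root" ;;
    esac
}

netem_cmd eth0 wan
```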
Per-IP Rate Limiting
# Create a class-based queue for rate limiting specific traffic
sudo tc qdisc add dev eth0 root handle 1: htb default 10
sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit
sudo tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 10mbit
# Send traffic from a specific IP to the limited class
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip dst 10.0.0.50/32 flowid 1:20
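Each additional rate-limited IP needs the same class-plus-filter pair, so the pattern generalizes. This sketch (assuming the `handle 1:` htb root from above already exists; `htb_limit` is our own name) prints the commands rather than running them:

```shell
#!/bin/sh
# htb_limit DEV IP RATE CLASSID — print the class + filter commands that
# cap traffic to IP at RATE under an existing `handle 1:` htb root.
# Dry-run generator: pipe to `sudo sh` to apply.
htb_limit() {
    dev=$1; ip=$2; rate=$3; cid=$4
    echo "tc class add dev ${dev} parent 1: classid 1:${cid} htb rate ${rate} ceil ${rate}"
    echo "tc filter add dev ${dev} parent 1: protocol ip prio 1 u32 match ip dst ${ip}/32 flowid 1:${cid}"
}

htb_limit eth0 10.0.0.50 10mbit 20
```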
Routing Tables and Policy Routing
Linux supports multiple routing tables, allowing different routing decisions based on source IP, protocol, or other criteria.
# View the main routing table
ip route show
# View all routing rules (policy routing)
ip rule show
# Add a custom routing table for a specific source
echo "100 custom" | sudo tee -a /etc/iproute2/rt_tables
sudo ip route add default via 10.0.0.1 table custom
sudo ip rule add from 10.244.0.0/16 table custom
# View routes in the custom table
ip route show table custom
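One pitfall with the `tee -a` approach: re-running it appends a duplicate line to rt_tables. The sketch below guards against that and bundles the three steps; it prints the commands (pipe to `sudo sh` to apply), and `policy_route_cmds` is our own name for the helper:

```shell
#!/bin/sh
# policy_route_cmds NUM NAME GW SRC — print the commands that route traffic
# from SRC via GW using table NUM/NAME. The rt_tables line is guarded so
# re-running does not append duplicates. Dry-run: pipe to `sudo sh`.
policy_route_cmds() {
    num=$1; name=$2; gw=$3; src=$4
    echo "grep -q '^${num} ${name}\$' /etc/iproute2/rt_tables || echo '${num} ${name}' >> /etc/iproute2/rt_tables"
    echo "ip route add default via ${gw} table ${name}"
    echo "ip rule add from ${src} table ${name}"
}

policy_route_cmds 100 custom 10.0.0.1 10.244.0.0/16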
Putting It All Together: Container Networking Lab
Let's build a complete multi-container network with internet access — essentially recreating what Docker does.
#!/bin/bash
# container-network-lab.sh
# Creates 2 namespaces connected via bridge with NAT internet access
# Create namespaces
ip netns add web
ip netns add api
# Create bridge
ip link add cni0 type bridge
ip addr add 10.244.0.1/24 dev cni0
ip link set cni0 up
# Create veth pairs and connect
for ns in web api; do
ip link add veth-${ns} type veth peer name veth-${ns}-br
ip link set veth-${ns} netns ${ns}
ip link set veth-${ns}-br master cni0
ip link set veth-${ns}-br up
done
# Configure IPs
ip netns exec web ip addr add 10.244.0.2/24 dev veth-web
ip netns exec web ip link set veth-web up
ip netns exec web ip link set lo up
ip netns exec web ip route add default via 10.244.0.1
ip netns exec api ip addr add 10.244.0.3/24 dev veth-api
ip netns exec api ip link set veth-api up
ip netns exec api ip link set lo up
ip netns exec api ip route add default via 10.244.0.1
# Enable NAT for internet access (eth0 = uplink interface; adjust to match your host)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.244.0.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -i cni0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o cni0 -m state --state RELATED,ESTABLISHED -j ACCEPT
echo "Lab ready. Test with: ip netns exec web ping 10.244.0.3"
# Test the lab
sudo ip netns exec web ping -c 2 10.244.0.3 # web -> api
sudo ip netns exec api ping -c 2 10.244.0.2 # api -> web
sudo ip netns exec web ping -c 2 8.8.8.8 # web -> internet (if NAT is configured)
Cleanup
# Remove everything from the lab, including the forwarding rules it added
sudo ip netns del web
sudo ip netns del api
sudo ip link del cni0
sudo iptables -t nat -D POSTROUTING -s 10.244.0.0/24 -o eth0 -j MASQUERADE
sudo iptables -D FORWARD -i cni0 -o eth0 -j ACCEPT
sudo iptables -D FORWARD -i eth0 -o cni0 -m state --state RELATED,ESTABLISHED -j ACCEPT
Now that you understand network isolation, the next logical step is process isolation — specifically, Mandatory Access Control. We'll compare SELinux and AppArmor, the two MAC frameworks that prevent processes from doing things they shouldn't, even when running as root.
