GuideDevOps
Lesson 22 of 28

Container Networking Fundamentals

Part of the Networking Basics tutorial series.

Container networking is fundamentally different from traditional networking. Understanding how containers discover each other, communicate, and connect to external networks is essential for DevOps.

Containers and Network Isolation

Traditional Host Networking

Host A: IP 192.168.1.100
 Can reach: 192.168.1.0/24 network

Container Networking

Host: IP 192.168.1.100
├── Container 1: IP 172.17.0.2 (isolated)
├── Container 2: IP 172.17.0.3 (isolated)
└── Container 3: IP 172.17.0.4 (isolated)

Each container has its own:
- Network namespace (isolated network stack)
- Loopback interface (127.0.0.1)
- Virtual Ethernet interface

Container Network Modes

1. Bridge Network (Default)

┌───────────────────────────────────────┐
│ Host (192.168.1.100)                  │
│  ┌─────────────────────────────────┐  │
│  │ Docker Bridge (172.17.0.1)      │  │
│  │ ├── Container 1: 172.17.0.2     │  │
│  │ ├── Container 2: 172.17.0.3     │  │
│  │ └── Container 3: 172.17.0.4     │  │
│  └─────────────────────────────────┘  │
└───────────────────────────────────────┘

Containers can reach each other (172.17.0.0/16)
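
The reachability rule above is just subnet membership, which a few lines of Python (using the standard `ipaddress` module) can verify:

```python
import ipaddress

# Docker's default bridge subnet (from the diagram above)
bridge = ipaddress.ip_network("172.17.0.0/16")

containers = ["172.17.0.2", "172.17.0.3", "172.17.0.4"]

# Every container IP falls inside the bridge subnet, so traffic
# between them is switched locally with no routing hop
for ip in containers:
    assert ipaddress.ip_address(ip) in bridge

# The host's LAN address is NOT in the container subnet -- traffic
# leaving the host must be NATed (next section)
assert ipaddress.ip_address("192.168.1.100") not in bridge
```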

Port Mapping:

External: 0.0.0.0:8080
↓ (NAT by Docker)
Container: 172.17.0.2:80
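
The mapping is a destination-NAT rewrite. A toy lookup table (illustrative only; real Docker programs iptables/nftables rules) captures the idea:

```python
# Toy model of the DNAT rule Docker installs for `-p 8080:80`
port_map = {
    ("0.0.0.0", 8080): ("172.17.0.2", 80),
}

def dnat(dst_ip: str, dst_port: int):
    """Rewrite a packet's destination the way Docker's NAT rule would."""
    return port_map.get(("0.0.0.0", dst_port), (dst_ip, dst_port))

assert dnat("192.168.1.100", 8080) == ("172.17.0.2", 80)       # mapped
assert dnat("192.168.1.100", 2222) == ("192.168.1.100", 2222)  # untouched
```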

2. Host Network

The container shares the host's network stack:

Container uses: 192.168.1.100 (same as host)
No network isolation; container ports can conflict with host ports
Best performance, weakest isolation

3. Overlay Network (Multi-Host)

Host 1 (10.0.1.0)              Host 2 (10.0.2.0)
├── Container A: 172.18.0.2    ├── Container B: 172.18.0.3
└── docker0: 172.17.0.1        └── docker0: 172.17.0.1

Container A (172.18.0.2) → Tunnel → Host 2 → Container B (172.18.0.3)
All traffic encapsulated (VXLAN, IP-in-IP, etc.)
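
Encapsulation isn't free: every tunnelled frame carries extra headers, which shrinks the MTU available to containers. The arithmetic for the common IPv4 VXLAN case:

```python
# Rough VXLAN overhead arithmetic (IPv4 figures)
OUTER_IP, OUTER_UDP, VXLAN_HDR, INNER_ETH = 20, 8, 8, 14
overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH  # 50 bytes

physical_mtu = 1500
container_mtu = physical_mtu - overhead
assert overhead == 50
assert container_mtu == 1450  # why overlay MTUs commonly show up as 1450
```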

4. None Network

No networking; the container gets only a loopback interface:

Container is completely isolated from the network
Use when the container doesn't need network access

Docker Networking

Default docker0 Bridge

# View docker networks
docker network ls
 
# Inspect bridge
docker network inspect bridge
 
# Output shows:
# - Network ID
# - Driver: bridge
# - Subnet: 172.17.0.0/16
# - Containers connected

Creating Custom Networks

# Create a custom bridge network
docker network create \
  --driver bridge \
  --subnet 192.168.0.0/24 \
  mynetwork

# Start a container on the network
docker run -d \
  --network mynetwork \
  --name web \
  nginx

# A second container on the same network reaches the first by name
docker run -d --network mynetwork --name app nginx
docker exec -it app ping web
# Docker DNS resolves 'web' → container IP

DNS in Docker

Container: "Reach web service"
Docker DNS (127.0.0.11:53) resolves:
  web → 192.168.0.2 (container IP)

Service Discovery by name!
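
Conceptually, the embedded DNS server is a name-to-IP table that Docker keeps up to date as containers start and stop. A minimal sketch (IPs from the example above; real Docker also forwards unknown names to upstream resolvers):

```python
# Sketch of what Docker's embedded DNS (127.0.0.11) does on a
# user-defined network: container name -> current container IP
records = {"web": "192.168.0.2", "db": "192.168.0.3"}

def resolve(name: str) -> str:
    # This sketch only handles local container names; real Docker
    # falls back to the host's resolvers for everything else.
    return records[name]

assert resolve("web") == "192.168.0.2"
```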

Kubernetes Networking Model

Requirements

Every pod gets its own IP:

Pod A: 10.244.1.50
Pod B: 10.244.1.51
Pod C: 10.244.2.50 (different node)

All can reach each other directly
No NAT between pods
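
In a typical CNI setup, each node is handed a slice (often a /24) of a cluster-wide pod CIDR, so a pod's IP identifies its node and traffic can be plainly routed without NAT. A sketch with Python's `ipaddress` module (10.244.0.0/16 is Flannel's default pod CIDR):

```python
import ipaddress

# Cluster-wide pod CIDR, carved into one /24 per node
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_cidrs = list(cluster_cidr.subnets(new_prefix=24))

node1, node2 = node_cidrs[1], node_cidrs[2]  # 10.244.1.0/24, 10.244.2.0/24
assert ipaddress.ip_address("10.244.1.50") in node1  # Pods A and B
assert ipaddress.ip_address("10.244.2.50") in node2  # Pod C (other node)

# Routing, not NAT: a packet from Pod A keeps src 10.244.1.50 end to end
```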

CNI (Container Network Interface)

Kubernetes delegates networking to plugins:

  • Calico — IP-in-IP encapsulation
  • Flannel — Simple VXLAN overlay
  • Weave — Mesh networking
  • Cilium — eBPF-based, advanced

Example: Pod-to-Pod Communication

Node 1:                          Node 2:
┌─────────────┐                  ┌─────────────┐
│ Pod A       │                  │ Pod C       │
│ 10.244.1.50 │                  │ 10.244.2.50 │
│ eth0        │                  │ eth0        │
└──────┬──────┘                  └──────┬──────┘
       │ veth                           │ veth
┌──────▼──────┐   VXLAN Tunnel   ┌──────▼──────┐
│ CNI Bridge  │══════════════════│ CNI Bridge  │
└─────────────┘  (encapsulation) └─────────────┘
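
The encapsulation step can be pictured as one packet nested inside another: the underlay routes on the outer (node) header while the pod addresses ride untouched inside. A toy model (node IPs are hypothetical; 4789 is the standard VXLAN UDP port):

```python
# Pod-to-pod packet (inner) wrapped in a node-to-node packet (outer),
# which is all the physical network ever sees
inner = {"src": "10.244.1.50", "dst": "10.244.2.50", "payload": b"hello"}
outer = {"src": "192.168.1.11", "dst": "192.168.1.12",  # node addresses
         "proto": "UDP/4789",                           # VXLAN port
         "payload": inner}

# The underlay only routes on the outer header ...
assert outer["dst"] == "192.168.1.12"
# ... while pod addresses survive intact inside
assert outer["payload"]["dst"] == "10.244.2.50"
```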

Service Discovery in Containers

DNS-Based Discovery

In Docker:

Services registered with names
Container DNS queries by name
Automatic IP resolution

Example (Docker Compose):

services:
  web:
    image: nginx
  db:
    image: postgres
 
# Container web reaches database:
# Hostname: db
# DNS resolves to container IP
# Can connect: postgres://db:5432

Kubernetes Service Discovery:

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
  - port: 8080

Automatic DNS:

Service name: backend
Namespace: default
FQDN: backend.default.svc.cluster.local

Pod resolves: backend → 10.x.x.x (virtual IP)
kube-proxy redirects to actual pod IPs
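
The FQDN is built mechanically from the Service name, its namespace, and the cluster domain:

```python
# How the cluster DNS name for a Service is composed
def service_fqdn(name: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    return f"{name}.{namespace}.svc.{cluster_domain}"

assert service_fqdn("backend") == "backend.default.svc.cluster.local"
# Pods in the same namespace can use the short name "backend";
# the full form is unambiguous across namespaces.
```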

Network Policies in Containers

Limit which containers can communicate:

Docker (requires an external plugin):

Default: all containers on a network can reach each other
Policies need a third-party network plugin to enforce

Kubernetes (built-in):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
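
The enforcement logic boils down to label matching: a connection is admitted only if the source pod's labels satisfy the `matchLabels` selector. A sketch of that check:

```python
# Sketch of the selection the CNI enforces for the policy above:
# ingress allowed only from pods labelled app=frontend (same namespace)
def ingress_allowed(source_labels: dict) -> bool:
    required = {"app": "frontend"}  # from podSelector.matchLabels
    return all(source_labels.get(k) == v for k, v in required.items())

assert ingress_allowed({"app": "frontend", "tier": "web"}) is True
assert ingress_allowed({"app": "payments"}) is False  # dropped by policy
```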

Multi-Host Networking Challenges

Problem 1: How do containers on different hosts reach each other?

Solution: Overlay Networks

Option 1: VXLAN
- Encapsulate container traffic in UDP
- Virtual network on top of physical network
- Simple, standard

Option 2: IP-in-IP
- Wrap IP packets in IP packets
- Lower overhead than VXLAN
- Less standard

Option 3: Host Routes
- No encapsulation, just routing
- Requires routing in underlying network
- Most performant
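
The performance ranking follows directly from per-packet header cost: less encapsulation leaves more room for payload in every packet. Rough IPv4 figures:

```python
# Approximate per-packet header cost of each option (IPv4 figures)
overhead_bytes = {
    "vxlan":       20 + 8 + 8 + 14,  # outer IP + UDP + VXLAN + inner Ethernet
    "ip-in-ip":    20,               # one extra IP header
    "host-routes": 0,                # plain routing, no encapsulation
}

mtu = 1500
payload = {k: mtu - v for k, v in overhead_bytes.items()}
# Host routes leave the most room per packet -- hence "most performant"
assert payload["host-routes"] > payload["ip-in-ip"] > payload["vxlan"]
```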

Problem 2: Load Balancing to Multiple Containers

Application Load Balancing

External traffic → Load Balancer
                  ├─ Round-robin
                  ├─ Health checks
                  └─ Container A, B, C

Container port mapping:
Load balancer: port 8080
├─ Container A: 172.17.0.2:8080
├─ Container B: 172.17.0.3:8080
└─ Container C: 172.17.0.4:8080
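
A load balancer's core loop is simple: skip backends that failed their health check and rotate through the rest. A minimal sketch with hypothetical container addresses:

```python
from itertools import cycle

# Round-robin over healthy backends only
backends = ["172.17.0.2:8080", "172.17.0.3:8080", "172.17.0.4:8080"]
healthy = {"172.17.0.2:8080": True,
           "172.17.0.3:8080": False,  # failed its health check
           "172.17.0.4:8080": True}

pool = cycle([b for b in backends if healthy[b]])
picks = [next(pool) for _ in range(4)]
assert picks == ["172.17.0.2:8080", "172.17.0.4:8080",
                 "172.17.0.2:8080", "172.17.0.4:8080"]
```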

Networking Best Practices

Docker Best Practices

✓ Use custom networks, not default bridge
✓ Enable container DNS (automatic)
✓ Use service names, not IP addresses
✓ Expose necessary ports only
✗ Don't use host networking (it sacrifices isolation)
✗ Don't hardcode container IPs

Kubernetes Best Practices

✓ Use Services for stable endpoints
✓ Apply default-deny NetworkPolicies
✓ Use DNS names (FQDN) consistently
✓ Monitor inter-pod communication
✓ Design pods per application purpose
✗ Don't expose raw pod IPs
✗ Don't assume pod-to-pod reachability once NetworkPolicies are in place

Troubleshooting Container Networking

Container can't reach service:

# Check DNS resolution
kubectl exec -it pod-name -- nslookup service-name
 
# Check network policy
kubectl get networkpolicy
 
# Verify service endpoints
kubectl get endpoints service-name
 
# Test connectivity
kubectl exec -it pod-name -- curl http://service-name:8080

Service DNS not working:

# Check CoreDNS pods
kubectl get pods -n kube-system | grep coredns
 
# Check logs
kubectl logs -n kube-system deployment/coredns
 
# Verify DNS config
kubectl exec -it pod-name -- cat /etc/resolv.conf

No inter-pod connectivity:

# Verify CNI plugin
kubectl get daemonset -A | grep cni
 
# Check pod IPs assigned
kubectl get pods -o wide
 
# Verify routing table on node
ip route show

Container Networking Architecture

Layers:

Application Layer (HTTP, gRPC)
      ↓
Service Discovery Layer (DNS)
      ↓
Transport Layer (TCP, UDP)
      ↓
Container Networking (Bridge/Overlay)
      ↓
Host Networking (Physical or Cloud Network)

Key Concepts

  • Containers isolated in network namespaces
  • Bridge network connects containers on same host
  • Overlay network connects containers across hosts
  • Service discovery automatic with DNS
  • Service abstractions provide load balancing
  • Network policies control container communication
  • CNI plugins implement networking for Kubernetes
  • Performance vs isolation tradeoff
  • Design networks for security (default-deny)