# Docker Networking
Docker networking modes, custom networks, DNS resolution, and multi-host connectivity patterns
You are an expert in Docker networking for containerized application development and deployment.
## Overview
Docker networking controls how containers communicate with each other, the host, and external systems. Understanding the available network drivers and their trade-offs is essential for building secure, performant containerized architectures. Docker provides bridge, host, overlay, macvlan, and none network modes.
## Core Concepts
### Network Drivers
| Driver | Scope | Use Case |
|---|---|---|
| bridge | Single host | Default; isolated container-to-container communication |
| host | Single host | Container shares host network stack; no isolation |
| overlay | Multi-host | Swarm/multi-node communication across hosts |
| macvlan | Single host | Container gets its own MAC address on the physical network |
| none | Single host | No networking; fully isolated |
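Most of these drivers are selected with `-d`/`--driver` when the network is created. As a sketch, a macvlan network needs the physical subnet, gateway, and parent interface; `create_macvlan` below is a hypothetical helper, and the subnet, gateway, and NIC name in the example are placeholders for your LAN's values:

```bash
# Hypothetical helper wrapping macvlan creation; the addressing and
# interface name are assumptions -- substitute your LAN's values.
create_macvlan() {
  # $1=subnet  $2=gateway  $3=parent interface  $4=network name
  docker network create -d macvlan \
    --subnet="$1" --gateway="$2" \
    -o parent="$3" "$4"
}

# e.g. create_macvlan 192.168.1.0/24 192.168.1.1 eth0 lan-net
```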
### Bridge Networks
The default bridge network provides basic connectivity but lacks DNS resolution between containers. Always create user-defined bridge networks:
```bash
# Create a custom bridge network
docker network create --driver bridge app-network

# Run containers on the custom network
docker run -d --name api --network app-network myapp/api:latest
docker run -d --name db --network app-network postgres:16-alpine

# The api container can now reach db by hostname:
# postgres://user:pass@db:5432/app
```
User-defined bridges provide:
- Automatic DNS resolution by container name
- Better isolation from containers on other networks
- Containers can be connected/disconnected at runtime
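The last point is worth seeing in action: a container can be moved between networks while it runs, with no restart. A sketch, where `move_container` is a hypothetical helper and the container and network names come from the example above:

```bash
# Hypothetical helper: attach a container to a new network, then detach
# it from the old one -- the container keeps running throughout.
move_container() {
  # $1=container  $2=network to leave  $3=network to join
  docker network connect "$3" "$1" &&
    docker network disconnect "$2" "$1"
}

# e.g. move the running api container from app-network to a new network:
#   docker network create staging
#   move_container api app-network staging
```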
### DNS Resolution

On user-defined networks, Docker's embedded DNS server (`127.0.0.11`) resolves container names:
```bash
# From inside a container on the same network
dig db    # resolves to the db container's IP
ping api  # resolves to the api container's IP

# Custom aliases (run on the host)
docker network connect --alias cache app-network redis-container
```
### Port Publishing
```bash
# Map host port 8080 to container port 3000
docker run -p 8080:3000 myapp:latest

# Bind to a specific interface
docker run -p 127.0.0.1:8080:3000 myapp:latest

# Publish all exposed ports to random host ports
docker run -P myapp:latest

# UDP port
docker run -p 5353:53/udp dns-server:latest
```
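The loopback-only prefix is easy to forget, so one option is to generate it with a tiny helper. This is a sketch; `publish_local` is plain shell, not a Docker feature:

```bash
# Build a -p argument that binds only to the loopback interface, so the
# service is reachable from the host but not from the local network.
publish_local() {
  # $1=host port  $2=container port
  printf -- '-p 127.0.0.1:%s:%s' "$1" "$2"
}

# e.g. docker run $(publish_local 8080 3000) myapp:latest
# (verify what actually got published with: docker port <container>)
publish_local 8080 3000   # -> -p 127.0.0.1:8080:3000
```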
## Implementation Patterns
### Multi-Network Isolation
Separate frontend and backend traffic so the web tier cannot directly reach the database:
```bash
docker network create frontend
docker network create backend

# Web proxy: only on frontend
docker run -d --name nginx --network frontend -p 80:80 nginx:latest

# API: bridges both networks
docker run -d --name api --network frontend myapp/api:latest
docker network connect backend api

# Database: only on backend
docker run -d --name db --network backend postgres:16-alpine
```
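To confirm the topology actually enforces the boundary, a throwaway container can probe each network. A sketch: `check_reach` is a hypothetical helper, and busybox's `nslookup` stands in as the reachability test:

```bash
# Exit 0 if a hostname is resolvable from a given network.
check_reach() {
  # $1=network  $2=target container name
  docker run --rm --network "$1" busybox nslookup "$2" >/dev/null 2>&1
}

# Expected with the topology above:
#   check_reach frontend db  -> fails (db is hidden from the web tier)
#   check_reach backend db   -> succeeds
```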
With Compose:
```yaml
services:
  nginx:
    image: nginx:latest
    networks:
      - frontend
    ports:
      - "80:80"
  api:
    build: ./api
    networks:
      - frontend
      - backend
  db:
    image: postgres:16-alpine
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true # no external access
```
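A quick way to confirm `internal: true` is doing its job is to try to reach an outside address from a backend service. A sketch, where `has_internet` is a hypothetical helper and `8.8.8.8` is just a well-known external IP:

```bash
# Exit 0 if a Compose service can reach the public internet.
has_internet() {
  # $1=compose service name
  docker compose exec "$1" ping -c 1 -W 2 8.8.8.8 >/dev/null 2>&1
}

# Expected with the topology above:
#   has_internet db     -> fails (backend is internal)
#   has_internet nginx  -> succeeds (frontend is a normal bridge)
```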
### Host Networking
Eliminates NAT overhead for latency-sensitive workloads:
```bash
docker run -d --network host myapp/low-latency:latest
```
The container's processes bind directly to host ports; there is no mapping layer. The `-p` flag is unnecessary here, and Docker ignores published ports in host mode (it prints a warning). Note that host networking is natively supported only on Linux hosts.
### Inspecting and Debugging
```bash
# List networks
docker network ls

# Inspect a network and see connected containers
docker network inspect app-network

# Debug DNS from inside a container
docker run --rm --network app-network busybox nslookup api

# Capture traffic
docker run --rm --net=container:api nicolaka/netshoot tcpdump -i eth0 port 8080
```
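`docker network inspect` also accepts Go templates via `-f`, which gives a quick name-to-IP map of a network. A sketch using the standard template syntax:

```bash
# Print "name ip" for every container attached to a network.
list_ips() {
  # $1=network name
  docker network inspect \
    -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}' \
    "$1"
}

# e.g. list_ips app-network
```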
## Best Practices
- Always use user-defined bridge networks instead of the default bridge to get automatic DNS and better isolation.
- Use `internal: true` on backend networks in Compose to prevent containers from reaching the internet when they do not need to.
- Bind published ports to `127.0.0.1` on development machines to avoid exposing services to the local network.
## Core Philosophy
Docker networking is the foundation of container communication, and getting it right is essential for both security and functionality. The core principle is explicit network segmentation: containers should only be able to reach the services they need, and nothing more. The default bridge network's permissive, DNS-less connectivity is a starting point to outgrow, not a model to follow.
User-defined networks are always the right choice. They provide automatic DNS resolution by container name, better isolation between groups of containers, and the ability to connect and disconnect containers at runtime. Creating a user-defined bridge network takes one command and eliminates an entire class of "why can't container A reach container B" debugging sessions. There is no good reason to use the default bridge network in any real workflow.
Network architecture should mirror your application's trust boundaries. A web proxy that accepts public traffic should not be on the same network as the database that stores sensitive data. Multi-network topologies, where a middle-tier API service bridges a frontend network and a backend network, enforce these boundaries at the infrastructure level rather than relying on application-level access control alone.
## Anti-Patterns
- **Using the default bridge network for everything.** The default bridge network does not provide DNS resolution between containers, forcing you to use IP addresses or `--link` (deprecated). User-defined networks solve this and provide better isolation.
- **Publishing ports on all interfaces in production.** The default `-p 8080:3000` binds to `0.0.0.0`, exposing the service to every network interface on the host, including public ones. In production, bind to specific interfaces or use an internal network with a reverse proxy handling external traffic.
- **Connecting all containers to a single flat network.** Putting every container on one network means any compromised container can reach every other container. Segment networks by trust level: frontend, backend, and database networks with explicit bridging only where needed.
- **Using `--link` for container communication.** The `--link` flag is a legacy feature that only works on the default bridge network and does not support dynamic discovery. It has been functionally replaced by user-defined networks and DNS-based service discovery.
- **Ignoring DNS caching behavior.** Docker's embedded DNS server caches resolutions, which can cause stale entries when containers are recreated with new IPs. Be aware of TTL behavior and use health checks rather than assuming DNS always reflects the current state.
## Common Pitfalls
- Relying on the default bridge network and then wondering why containers cannot resolve each other by name; only user-defined networks have built-in DNS.
- Publishing ports on `0.0.0.0` (the default) in production without a firewall, which exposes the service to all network interfaces including public ones.
## Related Skills
- **Container Registries**: Container registry setup, authentication, and image management for ECR, GCR, GHCR, and Docker Hub
- **Container Security**: Container image scanning, runtime hardening, and security best practices for production workloads
- **Docker Compose**: Docker Compose configuration for multi-service development, testing, and local orchestration
- **Dockerfile Optimization**: Multi-stage builds, layer caching, and image size optimization for production Docker images
- **Helm Charts**: Helm chart creation, templating, dependency management, and release lifecycle for Kubernetes
- **Kubernetes Autoscaling**: Kubernetes autoscaling with HPA, VPA, Cluster Autoscaler, and event-driven scaling with KEDA