
Reverse Proxy

Reverse proxy configuration with Nginx and Caddy for routing, TLS termination, and request handling


Reverse Proxy — Networking & Infrastructure

You are an expert in reverse proxy configuration with Nginx and Caddy for building reliable networked systems.

Overview

A reverse proxy sits between clients and backend servers, forwarding client requests to the appropriate backend and returning responses. It provides TLS termination, request routing, caching, compression, rate limiting, and security filtering. Nginx and Caddy are the two most widely used reverse proxies for modern deployments.

Core Concepts

Reverse Proxy Architecture

Client ──HTTPS──→ Reverse Proxy ──HTTP──→ Backend App (:3000)
                       │
                       ├──HTTP──→ API Service (:8080)
                       │
                       └──HTTP──→ Static Files (/var/www)

What a Reverse Proxy Handles

  • TLS termination — decrypt HTTPS at the proxy, forward plain HTTP to backends
  • Virtual hosting — serve multiple domains from one IP address
  • Path-based routing — send /api to one service, /app to another
  • Static file serving — serve assets directly without hitting the application
  • Request buffering — absorb slow client uploads before forwarding to the backend
  • Response compression — gzip/brotli compress responses at the edge
  • Rate limiting — protect backends from excessive request rates

Implementation Patterns

Nginx Reverse Proxy

# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;  # nginx >= 1.25.1 prefers "listen 443 ssl;" plus a separate "http2 on;"
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml;
    gzip_min_length 1000;

    # Static files
    location /assets/ {
        root /var/www/example.com;
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # API proxy
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        proxy_buffering on;
        proxy_buffer_size 8k;
        proxy_buffers 8 8k;
    }

    # WebSocket proxy
    location /ws/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400s;  # keep WS connections open
    }

    # SPA fallback
    location / {
        root /var/www/example.com/dist;
        try_files $uri $uri/ /index.html;
    }
}
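
For busier deployments, the proxied backend above can be declared as a named upstream with connection keepalive, so nginx reuses TCP connections instead of opening one per request. A sketch (the upstream name api_backend is illustrative):

```
# In the http block
upstream api_backend {
    server 127.0.0.1:8080;
    keepalive 32;                        # idle connections kept open to the backend
}

server {
    location /api/ {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear "close" so connections are reused
    }
}
```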

Nginx Rate Limiting

# Define rate limit zones in http block
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

    server {
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://127.0.0.1:8080;
        }

        location /api/auth/login {
            limit_req zone=login_limit burst=5;
            limit_req_status 429;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
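
Trusted internal traffic (health checks, office networks) can be exempted from these limits with the geo/map pattern: requests whose key maps to an empty string are never counted. A sketch, where the 10.0.0.0/8 range is a placeholder for your trusted network:

```
geo $limit {
    default     1;
    10.0.0.0/8  0;           # trusted network: do not rate limit
}

map $limit $limit_key {
    0  "";                   # empty key means the request is not counted
    1  $binary_remote_addr;
}

limit_req_zone $limit_key zone=api_limit:10m rate=10r/s;
```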

Caddy Reverse Proxy

# Caddyfile — automatic HTTPS via Let's Encrypt
example.com {
    # Compression
    encode gzip zstd

    # Static files
    handle /assets/* {
        root * /var/www/example.com
        file_server
        header Cache-Control "public, max-age=31536000, immutable"
    }

    # API proxy
    handle /api/* {
        reverse_proxy localhost:8080 {
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-Proto {scheme}

            transport http {
                keepalive 30s
                keepalive_idle_conns 10
            }

            # Health checks
            health_uri /health
            health_interval 10s
            health_timeout 5s

            # Load balancing multiple backends
            # to localhost:8081 localhost:8082
            # lb_policy least_conn
        }
    }

    # WebSocket
    handle /ws/* {
        reverse_proxy localhost:8080
    }

    # SPA fallback
    handle {
        root * /var/www/example.com/dist
        try_files {path} /index.html
        file_server
    }

    # Security headers
    header {
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        -Server
    }

    # Rate limiting: requires the caddy-ratelimit plugin (not built into Caddy)
    rate_limit {
        zone api_zone {
            key {remote_host}
            events 10
            window 1s
        }
    }
}

# Multiple domains
api.example.com {
    reverse_proxy localhost:8080
}

staging.example.com {
    # "basic_auth" in Caddy >= 2.8; older releases spell it "basicauth"
    basic_auth * {
        admin $2a$14$hashhere
    }
    reverse_proxy localhost:3001
}
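
Caddy also accepts a global options block at the top of the Caddyfile; setting an ACME account email there is useful for certificate expiry notices (the address below is a placeholder):

```
{
    email ops@example.com
}
```

Before reloading, `caddy validate --config /etc/caddy/Caddyfile` checks the file for errors, and `caddy reload` applies it without dropping connections.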

Docker Compose with Nginx Proxy

version: "3.8"  # obsolete in Compose V2; may be removed
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app
      - api
    networks:
      - frontend

  app:
    build: ./app
    expose:
      - "3000"
    networks:
      - frontend

  api:
    build: ./api
    expose:
      - "8080"
    networks:
      - frontend
      - backend

networks:
  frontend:
  backend:
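
Inside this Compose network, the nginx container reaches the other services by their service names through Docker's embedded DNS, so the proxied nginx.conf refers to app and api rather than 127.0.0.1. A sketch of the relevant fragment (the variable indirection is one common way to force re-resolution when services restart or scale):

```
# Static service names are resolved once at nginx startup; referencing
# them through a variable makes nginx re-resolve via Docker's DNS.
resolver 127.0.0.11 valid=10s;

location /api/ {
    set $api_upstream http://api:8080;   # "api" is the Compose service name
    proxy_pass $api_upstream;
}

location / {
    set $app_upstream http://app:3000;
    proxy_pass $app_upstream;
}
```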

Best Practices

  • Always forward client identity headers (X-Real-IP, X-Forwarded-For, X-Forwarded-Proto) so backend applications can correctly identify client IPs and protocol for logging, rate limiting, and redirect generation.
  • Use Caddy for simpler deployments where automatic HTTPS is valuable; use Nginx for high-traffic or complex routing scenarios where fine-grained control over buffering, caching, and connection handling is needed.
  • Set appropriate timeouts for each proxy location — API endpoints may need 30–60s, WebSockets need long read timeouts (hours), and static files can use short timeouts.
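
On the receiving side of those headers, a backend nginx (or an nginx sitting behind another load balancer) can restore the real client address with the realip module; only the addresses of trusted proxies should be listed. A sketch, where 10.0.0.1 is a placeholder for your proxy's address:

```
# On a backend nginx behind the reverse proxy
set_real_ip_from 10.0.0.1;        # trust only the proxy's address
real_ip_header   X-Forwarded-For;
real_ip_recursive on;             # walk past multiple trusted hops
```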

Common Pitfalls

  • Missing WebSocket upgrade headers: Without proxy_http_version 1.1 and explicit Upgrade and Connection headers, the WebSocket handshake fails; clients such as Socket.IO then silently fall back to long polling. Always configure WebSocket locations separately.
  • Proxy buffering with streaming responses: Nginx buffers responses by default, which breaks Server-Sent Events (SSE) and streaming APIs. Disable buffering (proxy_buffering off) or use X-Accel-Buffering: no for streaming endpoints.
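
For the streaming case, a dedicated Nginx location that disables buffering looks like this (the /events/ path is illustrative):

```
location /events/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;          # stream chunks to the client immediately
    proxy_cache off;
    proxy_read_timeout 1h;        # SSE connections stay open a long time
}
```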

Anti-Patterns

Over-engineering for hypothetical requirements. Building for scenarios that may never materialize adds complexity without value. Solve the problem in front of you first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide wastes time and introduces risk.

Premature abstraction. Creating elaborate frameworks before having enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at system boundaries. Internal code can trust its inputs, but boundaries with external systems require defensive validation.

Skipping documentation. What is obvious to you today will not be obvious to your colleague next month or to you next year.
