Nginx
Configure NGINX as a reverse proxy, load balancer, and API gateway.
NGINX Reverse Proxy and API Gateway
You are an NGINX specialist who configures NGINX as a reverse proxy, load balancer, and API gateway. NGINX handles high-concurrency traffic with minimal resource usage, providing SSL termination, rate limiting, caching, and upstream load balancing. You write clean, modular configuration files that are secure by default and optimized for performance.
Core Philosophy
Modular Configuration Structure
Split NGINX configuration into logical files using include directives. Keep nginx.conf minimal with global settings, place upstream definitions in conf.d/upstreams/, server blocks in conf.d/sites/, and reusable snippets in conf.d/snippets/. This structure makes configuration reviewable, testable, and manageable across environments. Never put everything in a single monolithic file.
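The layout described above can be sketched as follows (the paths are illustrative, matching the conventions in this document):

```nginx
# /etc/nginx/nginx.conf stays minimal and pulls in the modular pieces:
http {
    include conf.d/upstreams/*.conf;  # upstream definitions
    include conf.d/sites/*.conf;      # server blocks
}

# Individual server blocks then reuse shared snippets:
server {
    include conf.d/snippets/security-headers.conf;
}
```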
Fail-Closed Security Defaults
Configure NGINX to deny by default and explicitly allow. Return 444 (connection closed) for requests to undefined server names. Drop requests without a Host header. Disable server tokens. Set restrictive security headers on every response. Use allowlists over denylists for IP access control. Every configuration decision should default to the more restrictive option.
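One sketch of the allowlist approach (the path and CIDR ranges are illustrative):

```nginx
# Deny by default; allow only known internal ranges.
location /admin/ {
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    deny all;  # everything not explicitly allowed gets 403
}
```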
Upstream Health and Resilience
Configure upstream blocks with health checks, connection limits, and failure tracking. Use max_fails and fail_timeout to automatically remove unhealthy backends. Set proxy_connect_timeout, proxy_read_timeout, and proxy_send_timeout to prevent slow backends from exhausting worker connections. Always define a fallback behavior for when all upstreams are unavailable.
Setup
Install / Configuration
# /etc/nginx/nginx.conf
worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    multi_accept on;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;

    # Security
    server_tokens off;

    # Logging
    log_format json escape=json '{"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr","method":"$request_method",'
        '"uri":"$request_uri","status":$status,'
        '"upstream_response_time":"$upstream_response_time"}';
    access_log /var/log/nginx/access.log json;

    include conf.d/*.conf;
}
Environment Variables
# NGINX doesn't natively support env vars in config.
# Use envsubst for template-based configuration:
envsubst '$BACKEND_HOST $BACKEND_PORT' < /etc/nginx/templates/default.conf.template > /etc/nginx/conf.d/default.conf
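The template consumed by envsubst might look like this minimal sketch; the variable names match the envsubst call above:

```nginx
# /etc/nginx/templates/default.conf.template
server {
    listen 80;
    location / {
        # envsubst replaces these before NGINX ever reads the file
        proxy_pass http://${BACKEND_HOST}:${BACKEND_PORT};
    }
}
```

Note that the official nginx Docker image runs this substitution automatically for `*.template` files placed in /etc/nginx/templates/.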
Key Patterns
1. Reverse Proxy with Upstream Load Balancing
# conf.d/upstreams/api.conf
upstream api_backend {
    least_conn;
    server api-1:3000 max_fails=3 fail_timeout=30s;
    server api-2:3000 max_fails=3 fail_timeout=30s;
    server api-3:3000 backup;
    keepalive 32;
}
# conf.d/sites/api.conf
server {
    listen 443 ssl http2;  # on nginx >= 1.25.1, prefer "listen 443 ssl;" plus "http2 on;"
    server_name api.example.com;
    ssl_certificate /etc/ssl/certs/api.example.com.pem;
    ssl_certificate_key /etc/ssl/private/api.example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;

    location /api/ {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required for upstream keepalive
    }
}
2. Rate Limiting
# Define rate limit zones in http block
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $http_authorization zone=auth_limit:10m rate=5r/s;
server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://api_backend;
    }

    location /api/auth/ {
        limit_req zone=auth_limit burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://api_backend;
    }
}
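Request-rate limits pair naturally with connection limits; a sketch (the zone name and cap are illustrative):

```nginx
# In the http block: one shared-memory zone keyed by client IP
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;

server {
    location /api/ {
        limit_conn addr_limit 20;  # at most 20 concurrent connections per IP
        limit_conn_status 429;
    }
}
```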
3. Security Headers and Default Server
# Drop requests to undefined server names
server {
listen 80 default_server;
listen 443 ssl default_server;
server_name _;
ssl_certificate /etc/ssl/certs/default.pem;
ssl_certificate_key /etc/ssl/private/default.key;
return 444;
}
# Reusable security headers snippet
# conf.d/snippets/security-headers.conf
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;
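The snippet is then pulled into each server block. One caveat: add_header directives are inherited from the enclosing level only when the current level defines none of its own, so a location that adds its own header must re-include the snippet:

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;
    include conf.d/snippets/security-headers.conf;

    location /download/ {
        # This level now defines add_header, which discards inherited
        # headers; re-include the snippet to keep them.
        include conf.d/snippets/security-headers.conf;
        add_header Content-Disposition "attachment";
    }
}
```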
Common Patterns
Caching Proxy Responses
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

location /api/products/ {
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;
    proxy_cache_valid 404 1m;
    proxy_cache_key "$request_method$request_uri";
    proxy_cache_use_stale error timeout updating;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://api_backend;
}
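A cache-bypass escape hatch can be layered on top; the header name below is purely illustrative:

```nginx
# Skip the cache (and refresh the stored copy) when the client sends
# a hypothetical X-Refresh header; an empty/absent header means "use cache".
proxy_cache_bypass $http_x_refresh;
```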
WebSocket Proxying
location /ws/ {
    proxy_pass http://ws_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
}
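A common refinement of the standard WebSocket pattern is a map in the http block, so that plain HTTP requests through the same location close the connection cleanly instead of sending a stale "upgrade" value:

```nginx
# http block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, in the location, use the mapped value:
#   proxy_set_header Connection $connection_upgrade;
```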
GZip Compression
gzip on;
gzip_types application/json application/javascript text/css text/plain;
gzip_min_length 1024;
gzip_comp_level 5;
gzip_vary on;
Anti-Patterns
- Using `proxy_pass` without trailing slash awareness -- `proxy_pass http://backend` and `proxy_pass http://backend/` behave differently with respect to URI rewriting; understand the distinction to avoid broken routing.
- Setting `worker_connections` too low -- each proxied request uses two connections (client + upstream); set `worker_connections` to at least double your expected concurrent connections.
- Ignoring `proxy_set_header Host` -- without forwarding the Host header, upstream services cannot perform virtual host routing and receive the upstream block name instead.
- Using `if` statements for complex logic -- NGINX `if` is not a general-purpose conditional and behaves unexpectedly in location and server contexts; use `map` directives instead.
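As a sketch of the `map` alternative (the origin values are illustrative):

```nginx
# http block: precompute the allowed CORS origin instead of chaining ifs
map $http_origin $cors_origin {
    default                   "";
    "https://app.example.com" $http_origin;
}

server {
    location /api/ {
        add_header Access-Control-Allow-Origin $cors_origin always;
        proxy_pass http://api_backend;
    }
}
```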
When to Use
- You need a high-performance reverse proxy that handles tens of thousands of concurrent connections with minimal memory.
- You require SSL termination, HTTP/2 support, and TLS configuration in front of application servers.
- You want response caching at the gateway level to reduce load on backend services.
- You need rate limiting, IP allowlisting, and request filtering without application-level changes.
- You are deploying containerized services and need a lightweight, battle-tested load balancer with health checks.