What Is a Reverse Proxy?
A reverse proxy sits in front of your application servers, accepting client requests and forwarding them to one or more backend instances. Unlike a forward proxy (which represents clients), a reverse proxy represents servers.
Reverse proxies provide four core capabilities:
- Request routing — send traffic to different backends based on path, host, or headers
- TLS termination — handle HTTPS at the proxy layer so backends speak plain HTTP
- Load distribution — spread traffic across multiple backend instances
- Caching and buffering — absorb slow clients and serve cached responses
Nearly every production web deployment includes at least one reverse proxy layer. Understanding each tool's configuration model lets you choose the right one and troubleshoot production issues efficiently.
Nginx Configuration
Nginx is the most widely deployed reverse proxy. Its configuration is declarative and organized around upstream blocks (backend pools) and server blocks (virtual hosts).
Upstream Blocks
```nginx
upstream app_servers {
    # Default: round-robin load balancing
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001 weight=1;
    server 127.0.0.1:8002 backup;  # Only used if the others fail

    # Keep persistent connections to backends
    # (pair with proxy_http_version 1.1 and an empty Connection
    # header in the proxying location for keepalive to take effect)
    keepalive 32;
}
```
Load balancing methods available in Nginx:
| Method | Directive | Use Case |
|---|---|---|
| Round-robin | (default) | Stateless APIs |
| Least connections | `least_conn` | Variable request duration |
| IP hash | `ip_hash` | Session stickiness |
| Random with least conn | `random two least_conn` | Large upstream pools |
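The default round-robin honors the `weight=` values using a smooth weighted scheme, which spreads a heavy server's turns evenly rather than bursting them. A minimal Python sketch of that selection logic (server names and weights are illustrative; this mirrors the algorithm, not Nginx's code):

```python
def smooth_wrr(servers):
    """Yield server names in smooth weighted round-robin order.

    servers: dict mapping server name -> integer weight.
    Each round, every server's current score grows by its weight;
    the highest scorer is picked and pays back the total weight.
    """
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    while True:
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=current.get)
        current[best] -= total
        yield best
```

With weights 3 and 1 (as in the upstream block above), the heavy server gets three of every four picks, interleaved rather than three in a row.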
proxy_pass and Header Forwarding
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_servers;

        # Forward real client information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Buffering and timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }
}
```
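The `$proxy_add_x_forwarded_for` variable appends the connecting client's address to whatever X-Forwarded-For chain the request already carried. Its behavior can be sketched in a few lines of Python (a hypothetical helper illustrating the semantics, not Nginx code):

```python
def proxy_add_x_forwarded_for(headers: dict, remote_addr: str) -> str:
    """Mimic nginx's $proxy_add_x_forwarded_for: append the directly
    connected peer's address to any existing X-Forwarded-For chain,
    or start a new chain if none was sent."""
    existing = headers.get("X-Forwarded-For")
    if existing:
        return f"{existing}, {remote_addr}"
    return remote_addr
```

This is why the header grows by one address per proxy hop: each layer appends the peer it actually saw.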
WebSocket Proxying
WebSocket connections begin as an HTTP/1.1 Upgrade handshake, so Nginx needs HTTP/1.1 to the backend plus two extra headers:
```nginx
location /ws/ {
    proxy_pass http://app_servers;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;  # Keep WS connections alive
}
```
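The handshake the backend must recognize can be sketched as a small Python predicate over request headers (a hypothetical helper; it assumes header names are already normalized to this capitalization):

```python
def is_websocket_upgrade(headers: dict) -> bool:
    """Detect a WebSocket upgrade handshake (RFC 6455): the Connection
    header must contain the 'upgrade' token (it may be a comma-separated
    list, e.g. 'keep-alive, Upgrade') and Upgrade must be 'websocket'.
    Header values are compared case-insensitively."""
    connection = headers.get("Connection", "")
    upgrade = headers.get("Upgrade", "")
    tokens = [t.strip().lower() for t in connection.split(",")]
    return "upgrade" in tokens and upgrade.lower() == "websocket"
```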
gRPC Proxying
```nginx
server {
    listen 50051 http2;

    location / {
        grpc_pass grpc://127.0.0.1:9090;
        error_page 502 = /error502grpc;
    }

    # Translate a plain-HTTP 502 into a well-formed gRPC error,
    # which clients expect in trailers rather than a status line
    location = /error502grpc {
        internal;
        default_type application/grpc;
        add_header grpc-status 14;  # 14 = UNAVAILABLE
        add_header content-length 0;
        return 204;
    }
}
```
Caddy Configuration
Caddy's killer feature is automatic HTTPS — it provisions and renews Let's Encrypt certificates without any manual configuration. Its Caddyfile syntax is terse and readable.
Basic Reverse Proxy
```caddyfile
example.com {
    reverse_proxy localhost:8000
}
```
This single block gives you automatic TLS, HTTP/2, and proxying with sensible defaults. Caddy handles certificate issuance and renewal automatically via ACME.
Load Balancing Policies
```caddyfile
example.com {
    reverse_proxy {
        to localhost:8000 localhost:8001 localhost:8002

        # Load balancing policy
        lb_policy least_conn

        # Active health checks
        health_uri /healthz
        health_interval 10s
        health_timeout 5s
        health_status 200

        # Header forwarding
        header_up Host {upstream_hostport}
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-Proto {scheme}
    }
}
```
Multiple Upstreams with Path Routing
```caddyfile
api.example.com {
    # Route /v2/* to the new backend
    handle /v2/* {
        reverse_proxy localhost:9000
    }

    # Everything else to the legacy backend
    handle {
        reverse_proxy localhost:8000
    }
}
```
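The routing decision above can be mirrored as a small Python function (backend addresses copied from the config; treating a bare `/v2` as part of the new API is an assumption of this sketch):

```python
def choose_backend(path: str) -> str:
    """Mirror the Caddyfile above: send /v2 paths to the new backend
    on :9000, everything else to the legacy backend on :8000."""
    if path == "/v2" or path.startswith("/v2/"):
        return "localhost:9000"
    return "localhost:8000"
```

Note the trailing-slash check: a plain prefix test would also capture unrelated paths like `/v2x`.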
HAProxy Configuration
HAProxy is purpose-built for high-availability load balancing. It uses a frontend/backend model and supports sophisticated ACL-based routing.
Frontend/Backend Model
```haproxy
global
    maxconn 50000
    log /dev/log local0

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httplog
    option forwardfor

frontend http_in
    bind *:80
    bind *:443 ssl crt /etc/ssl/example.pem

    # ACL routing rules
    acl is_api path_beg /api/
    acl is_ws hdr(Upgrade) -i websocket
    use_backend api_servers if is_api
    use_backend ws_servers if is_ws
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk GET /healthz HTTP/1.1\r\nHost:\ example.com
    http-check expect status 200
    server app1 127.0.0.1:8000 check inter 10s rise 2 fall 3
    server app2 127.0.0.1:8001 check inter 10s rise 2 fall 3
    server app3 127.0.0.1:8002 check inter 10s rise 2 fall 3 backup

backend api_servers
    balance leastconn
    option httpchk GET /api/healthz
    server api1 127.0.0.1:9000 check
    server api2 127.0.0.1:9001 check
```
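The `rise 2 fall 3` check parameters implement a small state machine: an up server needs three consecutive failed checks to be marked down, and a down server needs two consecutive passes to come back. A Python sketch of those semantics (an illustration, not HAProxy's code):

```python
class HealthState:
    """Track a server's up/down state like HAProxy's 'rise N fall M'."""

    def __init__(self, rise: int = 2, fall: int = 3, up: bool = True):
        self.rise, self.fall = rise, fall
        self.up = up
        self.streak = 0  # consecutive checks contradicting current state

    def record(self, passed: bool) -> bool:
        """Record one health-check result; return the new up/down state."""
        if passed == self.up:
            self.streak = 0  # result agrees with current state
        else:
            self.streak += 1
            threshold = self.fall if self.up else self.rise
            if self.streak >= threshold:
                self.up = not self.up
                self.streak = 0
        return self.up
```

Requiring consecutive results in both directions keeps a single flaky check from flapping the server in and out of rotation.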
Connection Draining in HAProxy
```haproxy
backend app_servers
    # timeout tunnel governs upgraded (WebSocket) connections:
    # close a tunnel after 30s of inactivity
    timeout tunnel 30s
    # Close backend connections after each response so idle
    # keepalive connections don't hold up a drain
    option http-server-close

# Drain a server without stopping it, via the runtime socket:
#   echo 'set server app_servers/app1 state drain' | socat stdio /run/haproxy.sock
# Or reload gracefully; the old process finishes in-flight requests:
#   haproxy -sf $(cat /run/haproxy.pid)
```
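The drain semantics (new requests avoid the server while existing connections finish) can be sketched as a tiny round-robin pool class; this is an illustration of the behavior, not HAProxy internals:

```python
import itertools

class Pool:
    """Round-robin server pool with drain support: a draining server
    receives no new requests but is not removed, so its in-flight
    connections can complete."""

    def __init__(self, servers):
        self.state = {s: "ready" for s in servers}
        self._rr = itertools.cycle(servers)

    def drain(self, server):
        self.state[server] = "drain"

    def pick(self):
        # Advance round-robin, skipping draining servers;
        # try each server at most once per pick
        for _ in range(len(self.state)):
            s = next(self._rr)
            if self.state[s] == "ready":
                return s
        raise RuntimeError("no ready servers")
```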
Common Patterns Across All Proxies
Real Client IP Extraction
When your proxy sits behind a CDN or another proxy, X-Forwarded-For carries a comma-separated chain of addresses, with each hop appending the peer address it actually saw. The leftmost entry is the claimed original client, but it is client-supplied and trivially spoofed; the value you can trust is found by walking from the right and skipping your own known proxies:
```python
# Django settings: trust the proxy's X-Forwarded-Proto and Host headers
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
USE_X_FORWARDED_HOST = True

# Number of trusted proxies appending to X-Forwarded-For ahead of
# this app (e.g. one CDN layer)
TRUSTED_PROXY_COUNT = 1

def get_client_ip(request):
    x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        chain = [ip.strip() for ip in x_forwarded_for.split(',')]
        # Skip entries appended by trusted proxies, then take the
        # rightmost remaining address
        if len(chain) > TRUSTED_PROXY_COUNT:
            return chain[-(TRUSTED_PROXY_COUNT + 1)]
        return chain[0]
    return request.META.get('REMOTE_ADDR')
```
Choosing Between Nginx, Caddy, and HAProxy
| Use Case | Recommended |
|---|---|
| General-purpose web + auto TLS | Caddy |
| Static file serving + proxy combo | Nginx |
| High-throughput TCP/HTTP load balancing | HAProxy |
| Kubernetes ingress | Nginx Ingress or Traefik |
| Complex ACL routing | HAProxy |
All three tools are production-proven. The right choice depends on your operational familiarity and specific feature requirements — not raw performance differences, which are negligible for most workloads.