Why Compress?
Text-based HTTP responses — HTML, JSON, CSS, JavaScript — compress exceptionally well. A typical JSON API response compresses to 20–30% of its original size. For a 100 KB payload that leaves 20–30 KB on the wire, which means:
- Faster transfer — less data over the wire
- Lower bandwidth costs — especially significant at scale
- Improved Core Web Vitals — smaller transfers directly speed up FCP and LCP
Modern CPUs decompress gzip at hundreds of megabytes per second — the CPU cost is negligible compared to the network savings.
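You can reproduce the ratio claim with Python's standard gzip module. The payload below is invented for illustration — a repetitive list response, the shape most APIs actually serve:

```python
import gzip
import json

# A repetitive JSON payload, like a typical API list response (sample data).
payload = json.dumps(
    [{"id": i, "status": "active", "created_at": "2024-01-01T00:00:00Z"}
     for i in range(500)]
).encode("utf-8")

compressed = gzip.compress(payload, compresslevel=6)
print(f"{len(payload)} B -> {len(compressed)} B "
      f"({len(compressed) / len(payload):.0%} of original)")
```

Highly repetitive responses like this one compress well past the 20–30% figure; the quoted range is a conservative average for mixed real-world JSON.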
Accept-Encoding and Content-Encoding
HTTP compression is a two-step negotiation:
1. Client advertises supported algorithms in Accept-Encoding:
```http
GET /api/data HTTP/1.1
Accept-Encoding: gzip, deflate, br, zstd
```
2. Server compresses and declares the algorithm in Content-Encoding:
```http
HTTP/1.1 200 OK
Content-Encoding: br
Content-Type: application/json; charset=utf-8
Vary: Accept-Encoding
```
The Vary: Accept-Encoding header is critical — it tells CDNs to cache separate copies for each encoding variant.
gzip: The Universal Standard
gzip (RFC 1952) is the most widely supported compression algorithm. Every HTTP client and server supports it; it is the safe default when nothing else is negotiated.
| Property | Value |
|---|---|
| Compression ratio | ~65–70% reduction on JSON |
| Decompression speed | ~500 MB/s |
| Levels | 1 (fastest) to 9 (best) |
| Default level | 6 (good balance) |
```nginx
# Nginx gzip configuration
gzip on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1024;  # Skip tiny responses
```
Use level 1 for dynamic responses where CPU is precious; use level 9 only for pre-compressed static assets where you compress once and serve many times.
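The level trade-off is easy to measure with stdlib gzip. The data below is a repetitive sample; absolute sizes and timings will vary by machine and payload:

```python
import gzip
import time

# Repetitive JSON-ish sample data, standing in for a real API response.
data = b'{"user": "example", "active": true}\n' * 2000

for level in (1, 6, 9):
    start = time.perf_counter()
    out = gzip.compress(data, compresslevel=level)
    elapsed = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(out)} B in {elapsed:.2f} ms")
```

Level 9 never produces a larger file than level 1, but its marginal gain over level 6 is typically small relative to the extra CPU time — which is why 6 is the default and 9 is reserved for compress-once assets.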
Brotli: Better Compression
Brotli (br, RFC 7932) was developed by Google and achieves ~15–25% better compression than gzip at equivalent CPU cost. It uses a combination of LZ77, Huffman coding, and a pre-built static dictionary of common web patterns.
| Property | Value |
|---|---|
| Compression ratio | ~80–85% reduction on JSON |
| Browser support | All modern browsers (95%+ global) |
| Level range | 0 (fastest) to 11 (best) |
| Levels 1–4 | Comparable speed to gzip |
| Level 11 | 10–100× slower, for pre-compression only |
Important: Browsers only send br in Accept-Encoding over HTTPS. Brotli over HTTP is not supported by browsers (though libraries and CLI clients do support it).
```nginx
# Nginx with the ngx_brotli module
brotli on;
brotli_comp_level 4;
brotli_types text/plain application/json application/javascript text/css;
```
For static assets, pre-compress at level 11 at build time and serve the .br file directly — avoid runtime compression overhead entirely.
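One way that setup can look in Nginx — a sketch assuming the third-party ngx_brotli module is compiled in and the build step emits .br/.gz siblings next to each asset:

```nginx
# Serve pre-built .br / .gz files if present instead of compressing at runtime.
# brotli_static comes from ngx_brotli; gzip_static is a stock Nginx module.
location /assets/ {
    brotli_static on;   # serves app.js.br when the client accepts br
    gzip_static on;     # serves app.js.gz as the gzip fallback
}
```

With both directives on, Nginx picks the pre-compressed variant that matches Accept-Encoding and falls back to the uncompressed file otherwise.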
zstd: The Fastest Newcomer
Zstandard (zstd, RFC 8878) is Facebook's compression algorithm optimized for real-time compression. It achieves gzip-comparable ratios at 3–5× the speed, making it ideal for dynamic API responses.
| Property | Value |
|---|---|
| Compression ratio | ~65–75% reduction on JSON |
| Decompression speed | ~1,500 MB/s |
| Level range | -7 (ultrafast) to 22 (ultramax) |
| Default level | 3 |
As of 2024, Chrome and Firefox send zstd in Accept-Encoding; Safari has not yet shipped support. Server support is growing: Nginx requires a third-party module, but Caddy, HAProxy, and many CDNs support it natively.
Compression Levels and Trade-offs
| Algorithm | Level | Ratio | Speed | Use case |
|---|---|---|---|---|
| gzip | 1 | 60% | Very fast | Dynamic responses, CPU-bound |
| gzip | 6 | 68% | Fast | General purpose (default) |
| gzip | 9 | 70% | Slow | Pre-compressed static files |
| brotli | 4 | 78% | Fast | Dynamic responses over HTTPS |
| brotli | 11 | 85% | Very slow | Pre-compressed static files |
| zstd | 3 | 72% | Very fast | Dynamic API responses |
| zstd | 19 | 78% | Slow | Pre-compressed storage |
Recommended strategy:
- Static assets: pre-compress with Brotli 11 + gzip 9, serve pre-compressed
- Dynamic API responses: Brotli 4 or zstd 3 (fast algorithms, good ratio)
- Fallback: always support gzip for compatibility
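The static-asset half of that strategy is a build-time loop. A minimal sketch using only stdlib gzip — a Brotli pass at quality 11 would follow the same pattern via the third-party brotli package, omitted here:

```python
import gzip
import shutil
import tempfile
from pathlib import Path

TEXT_SUFFIXES = {".html", ".css", ".js", ".json", ".svg"}

def precompress(root: Path) -> None:
    """Build-time step: write a .gz sibling next to each text asset."""
    for path in root.rglob("*"):
        if path.suffix in TEXT_SUFFIXES:
            with open(path, "rb") as src, \
                 gzip.open(f"{path}.gz", "wb", compresslevel=9) as dst:
                shutil.copyfileobj(src, dst)

# Demo on a throwaway asset directory with one repetitive CSS file.
root = Path(tempfile.mkdtemp())
asset = root / "app.css"
asset.write_bytes(b".btn { color: #fff; padding: 4px; }\n" * 500)
precompress(root)
gz = root / "app.css.gz"
print(gz.stat().st_size, "B compressed vs", asset.stat().st_size, "B original")
```

Run this once per deploy, then let the server pick up the .gz (and .br) siblings directly — the level-9 cost is paid once instead of on every request.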
When NOT to Compress
Compression adds CPU overhead and can actually *increase* response size for already-compressed content:
- Already-compressed formats: JPEG, PNG, WebP, AVIF, MP4, ZIP, PDF — compressing these again wastes CPU with no benefit
- Tiny responses under ~1 KB — compression headers and overhead may exceed the savings; use gzip_min_length 1024 in Nginx
- High-frequency streaming — per-chunk compression on SSE or streaming JSON can starve the CPU; disable it for byte-stream endpoints
- Encrypted payloads — already randomized data does not compress
Check your Content-Type header before enabling compression — configure your server to only compress text-based types (text/*, application/json, application/javascript, image/svg+xml).
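The "no benefit" claim is easy to verify: stdlib gzip on random bytes (a stand-in for a JPEG or ZIP payload, since their contents are already near-random) comes out slightly larger than the input, while repetitive text shrinks dramatically:

```python
import gzip
import os

random_bytes = os.urandom(100_000)    # stands in for a JPEG/ZIP payload
text = b'{"status": "ok"}\n' * 5_000  # repetitive text for contrast

# Incompressible input grows by the gzip header/trailer and block overhead.
print(len(gzip.compress(random_bytes)), "vs original", len(random_bytes))
print(len(gzip.compress(text)), "vs original", len(text))
```

This is exactly why content-type allowlists (rather than compress-everything defaults) are the standard server configuration.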