Networking & Protocols • HTTP/HTTPS & Protocol Evolution · Easy · ⏱️ ~2 min
HTTP Protocol Evolution: From Sequential HTTP/1.1 to Multiplexed HTTP/2
HTTP/1.1 fundamentally constrained web performance through its sequential request model. Browsers limited concurrent TCP connections to 6 to 8 per hostname on desktop and 4 to 6 on mobile, forcing sites to employ domain sharding across 3 to 4 hostnames to achieve 18 to 24 parallel connections. Each connection required its own TCP slow start phase and handshake overhead. HTTP/2 revolutionized this by switching to binary framing and multiplexing many logical streams over a single TCP connection, eliminating the need for domain sharding.
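The sharding arithmetic above can be sketched directly. This is a toy back-of-envelope calculation, not a browser measurement: the per-host connection limit and handshake round-trip count are assumptions (6 connections per host, 1 RTT for TCP plus 1 RTT for TLS 1.3).

```python
# Back-of-envelope sketch of the parallelism vs. setup-cost trade-off
# behind HTTP/1.1 domain sharding. Illustrative numbers only.

def sharding_profile(hosts: int, conns_per_host: int = 6,
                     handshake_rtts: int = 2) -> dict:
    """Connections and setup cost for a site sharded across `hosts` hostnames.

    handshake_rtts = 2 assumes 1 RTT for TCP plus 1 RTT for TLS 1.3;
    TLS 1.2 would add another round trip.
    """
    conns = hosts * conns_per_host
    return {
        "parallel_connections": conns,
        "dns_lookups": hosts,                        # one lookup per hostname
        "setup_round_trips": conns * handshake_rtts, # paid before any request
    }

print(sharding_profile(hosts=1))  # unsharded HTTP/1.1 origin
print(sharding_profile(hosts=3))  # sharded across 3 CDN hostnames
```

Sharding across 3 hosts triples parallelism (6 → 18 connections) but also triples DNS lookups and multiplies handshake round trips, which is exactly the overhead HTTP/2's single connection removes.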
HTTP/2 also introduced HPACK header compression, which typically shrinks header sizes by more than 50%, along with per-stream flow control and request prioritization. The single-connection model reduces handshake overhead and lets congestion control operate on one coherent flow. However, this creates a critical weakness: HTTP/2 multiplexing only solves head-of-line blocking at the application layer. At the transport layer, TCP still enforces strict in-order delivery, so a single dropped packet stalls all streams on the connection until the retransmission completes. On pristine corporate networks with loss rates under 0.1%, HTTP/2 dramatically outperforms HTTP/1.1, but on lossy mobile networks with 1 to 2% packet loss, transport-layer head-of-line blocking can degrade throughput and significantly inflate tail latencies.
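The loss-domain difference can be made concrete with a small analytic model. This is an illustration under simplifying assumptions (packets spread evenly across connections, independent losses), not a network measurement:

```python
# Why a lost TCP packet hurts HTTP/2 more: one connection is a single loss
# domain, so any retransmission stalls every multiplexed stream, while
# HTTP/1.1's parallel connections stall independently of each other.

def p_conn_sees_loss(loss_rate: float, packets: int) -> float:
    """Probability a connection loses at least one of its packets."""
    return 1.0 - (1.0 - loss_rate) ** packets

def expected_stalled_fraction(loss_rate: float, total_packets: int,
                              connections: int) -> float:
    """Expected fraction of streams stalled by transport-layer HOL blocking.

    Packets split evenly across connections; a loss on a connection stalls
    every stream on that connection (TCP's in-order delivery guarantee).
    """
    per_conn = total_packets // connections
    return p_conn_sees_loss(loss_rate, per_conn)  # identical per connection

# A page worth 600 packets over a 2%-loss cellular link:
h2 = expected_stalled_fraction(0.02, 600, connections=1)  # HTTP/2: near 1.0
h1 = expected_stalled_fraction(0.02, 600, connections=6)  # HTTP/1.1: lower
print(f"HTTP/2, 1 connection:   {h2:.3f} of streams hit a stall")
print(f"HTTP/1.1, 6 connections: {h1:.3f} of streams hit a stall")
```

With one connection almost every page load hits a stall, because any of the 600 packets being lost blocks everything; with six connections each loss domain carries only 100 packets, so a stall is both less likely per connection and confined to that connection's streams.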
💡 Key Takeaways
• HTTP/1.1 browsers enforce 6 to 8 concurrent connections per hostname on desktop and 4 to 6 on mobile, leading to domain sharding across multiple hostnames to increase parallelism at the cost of additional DNS lookups and handshakes
• HTTP/2 binary framing multiplexes many concurrent streams (bounded only by the peer's advertised SETTINGS_MAX_CONCURRENT_STREAMS limit) over a single TCP connection, eliminating domain sharding and reducing connection overhead by 75% or more in typical deployments
• HPACK header compression in HTTP/2 reduces header sizes by approximately 50%, which matters most on mobile networks where headers can dominate small API responses
• Transport-layer head-of-line blocking remains in HTTP/2 because TCP enforces strict in-order delivery; a single lost packet blocks all multiplexed streams until retransmission completes
• On networks with 1 to 2% packet loss, HTTP/2 tail latencies (p95/p99) can exceed those of HTTP/1.1 with multiple connections, because separate TCP connections form independent loss domains
• In production, prefer HTTP/2 on stable networks with loss under 0.5% and reasonable RTT; keep an HTTP/1.1 fallback for legacy clients and for loss-dominated failure scenarios
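The fallback recommendation needs no separate listener in practice, because ALPN negotiation during the TLS handshake selects the protocol per client. A minimal deployment sketch in nginx syntax (server name and certificate paths are placeholders; the `listen ... http2` form applies to nginx before 1.25.1, which introduced a separate `http2 on;` directive):

```nginx
server {
    # TLS listener with HTTP/2 enabled on the same socket.
    listen 443 ssl http2;
    server_name www.example.com;

    # Placeholder certificate paths -- substitute your own.
    ssl_certificate     /etc/ssl/certs/example.pem;
    ssl_certificate_key /etc/ssl/private/example.key;

    # Clients that do not offer "h2" during ALPN negotiation
    # transparently get HTTP/1.1 on this same port.
}
```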
📌 Examples
Large ecommerce sites historically sharded assets across cdn1.example.com, cdn2.example.com, cdn3.example.com under HTTP/1.1 to achieve 18 to 24 parallel downloads, but this tripled DNS lookups and TCP handshakes
After migrating to HTTP/2, Amazon and similar retailers consolidated to single origin domains, reducing TLS handshake CPU by approximately 60% and eliminating DNS lookup overhead for sharded domains
Netflix measured HTTP/2 performing 15 to 20% better than HTTP/1.1 on wired broadband but seeing degraded performance on cellular networks with over 1% loss, due to transport-layer head-of-line blocking affecting video manifest fetches