Networking & Protocols › HTTP/HTTPS & Protocol Evolution · Easy · ⏱️ ~2 min

HTTP Protocol Evolution: From Sequential HTTP/1.1 to Multiplexed HTTP/2

Definition
HTTP protocol evolution addresses a fundamental performance problem: how do you efficiently send many requests over a network connection? HTTP/1.1 sends requests sequentially, HTTP/2 multiplexes many requests over one TCP connection, and HTTP/3 moves to UDP-based QUIC to solve remaining bottlenecks. Each version trades complexity for performance.

The HTTP/1.1 Bottleneck

HTTP/1.1 fundamentally constrains web performance through its sequential request model: each TCP connection can only process one request at a time. After sending a request, the client must wait for the complete response before sending the next request. Browsers work around this by opening multiple parallel connections, but limit themselves to 6-8 connections per hostname on desktop and 4-6 on mobile. Sites needing more parallelism use domain sharding (spreading assets across multiple hostnames like cdn1.example.com, cdn2.example.com) to achieve 18-24 parallel connections. Each connection pays its own TCP handshake cost (1 RTT for the three-way handshake) plus TCP slow start (an algorithm that gradually increases sending rate, meaning new connections start slower than established ones).
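The arithmetic behind these limits is worth internalizing. A minimal sketch of the round-count model (the RTT value and per-round cost are illustrative assumptions, not measurements):

```python
import math

def http1_rounds(num_assets: int, connections: int) -> int:
    """Sequential rounds needed when each HTTP/1.1 connection
    serves exactly one request at a time."""
    return math.ceil(num_assets / connections)

def http1_load_floor_ms(num_assets: int, connections: int,
                        rtt_ms: float = 50.0) -> float:
    """Crude lower bound: one RTT for the TCP handshakes (paid in
    parallel across connections) plus one RTT per request round.
    Ignores slow start, which makes real numbers worse."""
    return rtt_ms + http1_rounds(num_assets, connections) * rtt_ms

print(http1_rounds(60, 6))   # 10 rounds with the 6-connection browser limit
print(http1_rounds(60, 18))  # 4 rounds after sharding across 3 hostnames
```

Domain sharding cuts the round count but multiplies handshake and slow-start costs, which is exactly the tension HTTP/2 resolves.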

HTTP/2 Multiplexing Solution

HTTP/2 revolutionized the protocol by introducing binary framing and multiplexing. Instead of text-based headers, HTTP/2 uses binary-encoded frames that machines parse more efficiently. Multiplexing allows many logical streams (independent request/response pairs) to flow over a single TCP connection simultaneously, interleaved at the frame level. This eliminates the need for domain sharding because one connection handles all requests. HTTP/2 also introduced HPACK header compression (a scheme that uses static and dynamic tables to compress repeated header values), typically reducing header sizes by over 50%. On a mobile connection where headers can dominate small API responses, this matters significantly.
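The multiplexing idea can be sketched in a few lines: each response is chopped into frames tagged with a stream ID, and frames from different streams are interleaved on one connection. The frame size and stream IDs below are illustrative (real HTTP/2 DATA frames default to 16 KiB, and client-initiated streams use odd IDs):

```python
from itertools import zip_longest

FRAME_SIZE = 4  # bytes per DATA frame; toy value for illustration

def to_frames(stream_id: int, payload: bytes):
    """Split one response body into (stream_id, chunk) DATA frames."""
    return [(stream_id, payload[i:i + FRAME_SIZE])
            for i in range(0, len(payload), FRAME_SIZE)]

def multiplex(*streams):
    """Interleave frames from all streams round-robin onto one 'wire'."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(frame for frame in group if frame is not None)
    return wire

wire = multiplex(to_frames(1, b"large-response-body"),
                 to_frames(3, b"tiny"))
print([sid for sid, _ in wire])  # [1, 3, 1, 1, 1, 1]
```

Note how stream 3's single frame rides between stream 1's frames: the small response finishes early instead of queuing behind the large one, which is the application-layer win over HTTP/1.1.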

HTTP/2 Hidden Weakness

HTTP/2's single-connection model creates a critical weakness: head-of-line blocking at the transport layer. TCP (Transmission Control Protocol) enforces strict in-order delivery. When packet N is lost, TCP cannot deliver packets N+1, N+2, etc. to the application until packet N is retransmitted, even if those later packets arrived successfully. In HTTP/2, where many streams share one TCP connection, losing a packet carrying data for stream A blocks delivery of all streams B, C, D until retransmission completes. On pristine networks with loss rates under 0.1%, HTTP/2 dramatically outperforms HTTP/1.1. But on lossy mobile networks with 1-2% packet loss, transport-layer head-of-line blocking can inflate tail latencies significantly.
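The blocking behavior described above can be modeled with a toy in-order delivery buffer. The packet numbering and stream labels are illustrative assumptions; the point is that TCP only releases the contiguous prefix of the byte stream:

```python
def deliverable(received_seqs: set, next_seq: int = 1) -> list:
    """Return the contiguous prefix of sequence numbers TCP may hand
    to the application; everything past the first gap stays buffered."""
    delivered = []
    while next_seq in received_seqs:
        delivered.append(next_seq)
        next_seq += 1
    return delivered

# Packet 2 (carrying stream A's data) is lost in transit;
# packets 3-5 carry streams B, C, and D and arrived fine.
arrived = {1, 3, 4, 5}
print(deliverable(arrived))        # [1] -- B, C, D all stall behind the gap

# Once packet 2 is retransmitted, the whole buffer drains at once.
print(deliverable(arrived | {2}))  # [1, 2, 3, 4, 5]
```

This is precisely what QUIC avoids: by tracking loss per stream rather than per connection, a hole in stream A's data no longer blocks streams B, C, and D.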

Key Trade-off: HTTP/2 trades connection efficiency (one connection instead of 6-8) for head-of-line blocking risk under packet loss. The right choice depends on network quality: HTTP/2 excels on stable connections, but its single-connection model becomes a liability when packets are lost.
💡 Key Takeaways
HTTP/1.1 processes requests sequentially; browsers open 6-8 connections per hostname, forcing domain sharding to achieve parallelism
HTTP/2 multiplexing interleaves many concurrent streams over one TCP connection via binary framing (servers typically cap concurrency around 100 streams via SETTINGS_MAX_CONCURRENT_STREAMS), eliminating domain sharding and its per-connection handshake and slow-start costs
HPACK header compression reduces header sizes by 50%+, critical for mobile where headers can dominate small API responses
Transport-layer head-of-line blocking: when TCP loses a packet, all HTTP/2 streams stall until retransmission completes (1 RTT minimum)
📌 Interview Tips
1. Explain the parallelism problem: HTTP/1.1 with 6 connections can only process 6 requests in parallel; loading 60 assets takes 10 sequential rounds
2. Describe the multiplexing benefit: HTTP/2 interleaves frames from all streams, so slow response A does not block fast response B at the application layer
3. Common follow-up: why does HTTP/2 struggle on lossy networks? Answer: TCP's strict ordering means one lost packet blocks ALL streams, not just the affected one