Networking & Protocols • TCP vs UDP Trade-offs
Production Use Cases: When to Choose TCP- vs UDP-Based Transports
Real-world deployment decisions between TCP and UDP center on latency tolerance, loss behavior, and engineering complexity. TCP excels at bulk data transfer and transactional workloads: database replication streams that require strict ordering semantics, file and object transfer, and transactional APIs. TCP's mature kernel implementations benefit from decades of tuning, NIC offloads for checksums and segmentation, and nearly universal middlebox compatibility. For workloads that tolerate 50 to 200 ms of added latency and require strong delivery guarantees, TCP minimizes development effort.
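As a minimal sketch of why TCP fits this class of workload, the snippet below streams a file over a single connection and leans entirely on the kernel for ordering, retransmission, and congestion control; the host, port, and chunk size are illustrative assumptions, not values from the text.

```python
import socket

def send_file_over_tcp(path: str, host: str, port: int) -> None:
    # TCP presents an ordered, reliable byte stream, so a bulk transfer is
    # just "write everything"; the kernel handles segmentation, retransmission,
    # and congestion control on our behalf.
    with socket.create_connection((host, port)) as conn, open(path, "rb") as f:
        while chunk := f.read(64 * 1024):   # 64 KiB application chunks (arbitrary)
            conn.sendall(chunk)             # sendall() retries partial writes

# Hypothetical usage: stream a backup to a replica host.
# send_file_over_tcp("/var/backups/db.dump", "replica.example.internal", 9000)
```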
UDP-based custom transports dominate interactive and real-time systems. Online games running at tick rates of 20 to 128 Hz send packets of 50 to 300 bytes and tolerate 0.5 to 2% loss; UDP allows dead reckoning, delta updates, and selective reliability, where key events are delivered reliably but transient position updates can simply be dropped. Xbox Live and Riot Games report that moving this traffic onto TCP introduces rubber-banding from head-of-line blocking, adding tens of milliseconds of interactive latency.
Real-time conferencing on Google Meet, Microsoft Teams, and Zoom faces 0.5 to 3% last-mile loss and 30 to 200 ms of variable RTT on Wi-Fi and cellular. UDP with jitter buffers, forward error correction, and selective retransmission keeps one-way media latency at 100 to 200 ms; forcing the media onto TCP pushes latency and jitter up by 100 to 300 ms and increases freeze and rebuffer events.
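To make the selective-reliability idea concrete, here is a minimal sketch over a plain UDP socket; the peer address, JSON message format, and 100 ms retransmission timeout are assumptions for illustration, not details from any of the products named above.

```python
import json
import socket
import time

class GameChannel:
    """Sketch of selective reliability: position updates are fire-and-forget,
    while key events are resent until the peer acknowledges their sequence number."""

    def __init__(self, peer=("203.0.113.10", 27015)):   # hypothetical peer address
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.peer = peer
        self.seq = 0
        self.unacked = {}          # seq -> (payload, last_send_time)

    def send_unreliable(self, state: dict) -> None:
        # Transient position/delta updates: loss is tolerated, never retransmitted.
        self.sock.sendto(json.dumps({"kind": "state", **state}).encode(), self.peer)

    def send_reliable(self, event: dict) -> None:
        # Key events (e.g. an item pickup): tracked until an ack arrives.
        self.seq += 1
        payload = json.dumps({"kind": "event", "seq": self.seq, **event}).encode()
        self.unacked[self.seq] = (payload, time.monotonic())
        self.sock.sendto(payload, self.peer)

    def on_ack(self, seq: int) -> None:
        # Peer confirmed receipt; stop retransmitting this event.
        self.unacked.pop(seq, None)

    def resend_overdue(self, rto: float = 0.1) -> None:
        # Called every tick: retransmit only reliable messages older than rto.
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at > rto:
                self.sock.sendto(payload, self.peer)
                self.unacked[seq] = (payload, now)
```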
High-frequency trading represents an extreme case: market data is disseminated via UDP multicast to thousands of recipients at an aggregate rate of millions of packets per second in colocation facilities. Loss recovery happens via gap-fill requests over separate TCP channels or redundant multicast paths. Orders use TCP or custom reliable protocols where absolute delivery is required. Latency budgets are microseconds to low milliseconds; avoiding TCP's per-connection overhead and head-of-line blocking on fanout paths is critical.
CDNs adopting HTTP/3 see consistent tail-latency improvements on lossy mobile and Wi-Fi networks, with 0-RTT resumption yielding tangible wins for short-lived connections such as API calls and small object fetches.
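A minimal sketch of the multicast-plus-gap-fill pattern follows, assuming a hypothetical multicast group, an 8-byte big-endian sequence prefix on each datagram, and a placeholder recovery call; real feed handlers define their own framing and recovery protocols.

```python
import socket
import struct

MCAST_GRP, MCAST_PORT = "239.1.1.1", 30001   # hypothetical feed group/port

def request_gap_fill(first: int, last: int) -> None:
    # In production this would ask a recovery service over TCP for the missing
    # range; here it is only a placeholder.
    print(f"gap detected: missing sequences {first}..{last}")

def receive_market_data() -> None:
    # Join the multicast group and track the expected sequence number; any jump
    # forward means datagrams were lost and must be recovered out of band.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    expected = None
    while True:
        data, _ = sock.recvfrom(2048)
        seq = struct.unpack_from("!Q", data)[0]      # assumed 8-byte sequence prefix
        if expected is not None and seq > expected:
            request_gap_fill(expected, seq - 1)      # recover the gap separately
        expected = seq + 1
```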
💡 Key Takeaways
• Online gaming at 20 to 128 Hz tick rates with 50 to 300 byte packets uses UDP to avoid the tens of milliseconds of rubber-banding that TCP head-of-line blocking would cause at 0.5 to 2% loss
• Real-time conferencing keeps one-way latency at 100 to 200 ms with UDP and selective retransmission; TCP would add 100 to 300 ms of latency and increase rebuffer events under typical 0.5 to 3% Wi-Fi loss
• High-frequency trading uses UDP multicast for market data fanout at millions of packets per second, achieving microsecond to low-millisecond latency budgets that TCP cannot meet
• CDN HTTP/3 deployments show tail-latency improvements on mobile networks, where 0-RTT resumption saves 100 to 200 ms per connection compared to TCP plus TLS
• Database replication and bulk file transfer prefer TCP because strict in-order semantics are required and the development complexity of custom reliability is not justified
• UDP paths may be blocked or rate limited in 10 to 20% of enterprise networks, requiring TCP fallback paths and increasing operational test-matrix complexity
📌 Examples
Netflix tested QUIC for video streaming and found that 0-RTT resumption reduced startup time by 100 to 150 ms on mobile, improving user engagement metrics
Financial exchanges send market data snapshots via UDP multicast; a single packet reaches thousands of subscribers simultaneously, impossible to achieve efficiently with TCP fanout
Microsoft Teams switches to a TCP fallback when UDP is blocked, accepting higher latency and jitter rather than failing calls entirely (a probe-then-fallback sketch follows this list)
PostgreSQL and MySQL replication use TCP because the cost of out-of-order or duplicate transactions would require complex application-level reconciliation
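As a minimal sketch of the probe-then-fallback behavior mentioned above, the snippet below sends a UDP probe and, if nothing comes back within the timeout, assumes UDP is blocked and opens a TCP connection instead; the relay address, port, and probe message are hypothetical, not Teams' actual protocol.

```python
import socket

def open_media_transport(host="relay.example.net", port=3478, timeout=0.5):
    # Probe UDP reachability first; only a reply proves the path works end to end.
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    probe.settimeout(timeout)
    try:
        probe.sendto(b"probe", (host, port))
        probe.recvfrom(64)
        return ("udp", probe)                        # preferred low-latency path
    except socket.timeout:
        probe.close()
        tcp = socket.create_connection((host, port), timeout=2.0)
        return ("tcp", tcp)                          # higher latency/jitter, but the call connects
```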