Design Fundamentals: Back-of-the-envelope Calculations

Essential Conversion Factors and Mental Models for Quick Calculations

Mastering a small set of conversion factors and heuristics enables rapid mental math during design discussions. The most fundamental conversion is that 1 million events per day equals approximately 11.6 per second, which rounds to 12 per second for estimation purposes. By extension, 100 million daily events translate to roughly 1,160 per second, and 1 billion daily events approach 11,600 per second. Combined with the approximation that 86,400 seconds per day is close enough to 100,000 (1e5), you can convert any daily rate to a per-second rate by dividing by roughly 100,000.

Bandwidth conversions are equally critical. One gigabit per second equals 125 megabytes per second (dividing by 8 bits per byte), so 10 Gbps becomes 1.25 GB per second and 100 Gbps reaches 12.5 GB per second. For video streaming, memorize that HD quality (1080p) typically consumes 3 to 5 Mbps, while 4K requires 15 to 25 Mbps depending on codec. These numbers let you instantly calculate that 10,000 concurrent HD streams at 5 Mbps demand 50 Gbps of egress capacity. Adding 20 percent overhead for headers, TLS handshakes, and adaptive-bitrate variant requests brings the total to 60 Gbps, which translates to 7.5 GB per second or 27 TB per hour.

Caching and working-set estimation rely on empirical patterns. Real-world access patterns typically follow Zipfian distributions where 20 to 30 percent of data accounts for 70 to 80 percent of reads (the 80/20 rule). For cache sizing, start by estimating the working set at 20 to 30 percent of total dataset size, then multiply by the replication factor. If your application has 1 TB of data and you expect 25 percent to be hot, allocate roughly 250 GB of cache capacity per replica. Properly sized caches improve hit rates for read-heavy workloads by 20 to 30 percentage points, but always verify with production traces, since write-heavy workloads or high access skew can drastically reduce effectiveness.
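These conversions can be sketched in a few lines of Python. The function names here are illustrative, chosen for this sketch rather than taken from any library:

```python
SECONDS_PER_DAY = 86_400  # often rounded to 1e5 for mental math


def per_day_to_per_second(daily: float) -> float:
    """Convert a daily event count to an average per-second rate."""
    return daily / SECONDS_PER_DAY


def gbps_to_gb_per_second(gbps: float) -> float:
    """Divide bits by 8 to get bytes: 1 Gbps = 0.125 GB/s."""
    return gbps / 8


# 1 million/day ~ 11.6/s; 100 million/day ~ 1,160/s
print(round(per_day_to_per_second(1_000_000), 1))   # 11.6
print(round(per_day_to_per_second(100_000_000)))    # 1157

# 10,000 concurrent HD streams at 5 Mbps, plus 20% overhead
egress_gbps = 10_000 * 5 / 1_000                    # 50 Gbps
with_overhead = egress_gbps * 1.2                   # 60 Gbps
print(with_overhead, gbps_to_gb_per_second(with_overhead))  # 60.0 7.5
```

Note that the exact per-second figure for 100 million per day is about 1,157; the text's 1,160 comes from the mental-math shortcut of dividing by 100,000 and scaling.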
💡 Key Takeaways
Core time conversion: 1 million per day approximately equals 12 per second, 100 million per day equals 1,160 per second, using 86,400 seconds per day simplified to 100,000 for quick division
Bandwidth math: 1 Gbps equals 125 MB per second, 10 Gbps equals 1.25 GB per second, 100 Gbps equals 12.5 GB per second by dividing bits by 8
Video streaming rates: HD (1080p) uses 3 to 5 Mbps, 4K uses 15 to 25 Mbps, so 10,000 concurrent HD streams at 5 Mbps require 50 Gbps plus 20 percent overhead totaling 60 Gbps or 7.5 GB per second
Working set heuristic: 20 to 30 percent of data typically accounts for 70 to 80 percent of reads in Zipfian access patterns, forming basis for cache sizing
Cache hit improvements: properly sized caches increase hit rates by 20 to 30 percentage points for read-heavy workloads, but write-heavy or highly skewed access degrades effectiveness
Safety margins: plan for 30 to 50 percent headroom per tier to handle diurnal peaks and zone failures without SLO violations, keeping steady state utilization below 70 percent
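The working-set and headroom heuristics above can be captured as two small helpers. This is a minimal sketch with made-up names, assuming a 25 percent hot fraction and a 70 percent utilization target as in the takeaways:

```python
def cache_size_gb(total_gb: float, hot_fraction: float = 0.25) -> float:
    """Working-set heuristic: cache the hot fraction of the dataset,
    sized per replica (multiply by replica count for fleet total)."""
    return total_gb * hot_fraction


def provisioned_capacity(peak_load: float, target_utilization: float = 0.7) -> float:
    """Keep steady-state utilization below ~70% so diurnal peaks and
    zone failures fit within the 30-50% headroom."""
    return peak_load / target_utilization


print(cache_size_gb(1_000))               # 1 TB dataset -> 250.0 GB per replica
print(round(provisioned_capacity(700)))   # 700 rps peak -> provision ~1000 rps
```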
📌 Examples
Converting upload rate: If a platform receives 50 million photo uploads per day at 2 MB average size, that is 50M times 2 MB equals 100 TB per day raw. Dividing by 86,400 gives roughly 1.16 GB per second ingest rate. With 3x replication, storage grows at 300 TB per day or approximately 110 PB per year.
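The upload-rate arithmetic above, written out step by step with the example's numbers:

```python
# Photo-upload example: 50 million uploads/day at 2 MB average size.
uploads_per_day = 50_000_000
avg_size_mb = 2

raw_tb_per_day = uploads_per_day * avg_size_mb / 1_000_000  # 100 TB/day raw
ingest_gb_per_s = raw_tb_per_day * 1_000 / 86_400           # ~1.16 GB/s ingest
replicated_tb_per_day = raw_tb_per_day * 3                  # 300 TB/day with 3x replication
pb_per_year = replicated_tb_per_day * 365 / 1_000           # ~110 PB/year

print(raw_tb_per_day, round(ingest_gb_per_s, 2), round(pb_per_year))
```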
Calculating concurrent capacity: For 5 million DAU with average session length of 1,800 seconds (30 minutes), concurrent users approximate 5M times 1,800 divided by 86,400 equals roughly 104,000 concurrent sessions. If each session requires 10 MB of application server memory, total memory footprint is about 1 TB across the fleet.
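This concurrency estimate is an application of Little's law (average concurrency equals arrival rate times residence time), which here reduces to DAU times session length divided by seconds per day:

```python
# Concurrency example: 5 million DAU, 30-minute average sessions.
dau = 5_000_000
session_seconds = 1_800

concurrent = dau * session_seconds / 86_400  # ~104,000 concurrent sessions
memory_tb = concurrent * 10 / 1_000_000      # 10 MB per session -> ~1 TB fleet-wide

print(round(concurrent), round(memory_tb, 2))
```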
Database load from cache misses: Application serves 100 million reads per day (1,160 reads per second average) with 70 percent cache hit rate. Database handles 30 percent of reads, which is 348 reads per second average. With 5x peak factor, database must sustain 1,740 reads per second during peak hours.
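The same calculation in code. The example's rounded figures (348 and 1,740 rps) come from using the 1,160 rps shortcut; exact division gives slightly lower numbers, which is fine at this precision:

```python
# Cache-miss example: 100M reads/day, 70% cache hit rate, 5x peak factor.
reads_per_day = 100_000_000
avg_rps = reads_per_day / 86_400  # ~1,157 rps average (often rounded to 1,160)
miss_rate = 0.30                  # 1 - 0.70 hit rate reaches the database

db_avg_rps = avg_rps * miss_rate  # ~347 rps at the database on average
db_peak_rps = db_avg_rps * 5      # 5x peak factor -> ~1,736 rps sustained peak

print(round(db_avg_rps), round(db_peak_rps))
```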