
Chunk and Part Sizing Trade-offs

Choosing the right chunk or part size is a critical algorithmic decision that balances request overhead, retry efficiency, throughput, and cost. Larger parts mean fewer requests but more wasted bandwidth on failures; smaller parts enable finer-grained retries but increase per-request costs and server load.

Amazon S3's 10,000-part limit creates a hard constraint: for a 1 TB object, parts must be at least about 100 MB (1 TB ÷ 10,000 parts). If you naively chose 64 MB parts, you would need 16,384 parts and exceed the limit. The formula is: minimum part size = ceiling(total object size ÷ maximum allowed parts). For objects near S3's 5 TB maximum size, the minimum part size rises to roughly 500 MB, and object stores that permit tens of terabytes per object force multi-gigabyte parts.

Cost scales linearly with part count. Uploading 10 TB with 8 MB parts requires approximately 1.3 million PUT requests at $0.005 per 1,000 requests, totaling around $6.55. Using 128 MB parts drops this to 81,920 requests and $0.41, a 16x cost reduction. However, on a lossy mobile network, smaller parts reduce wasted retransmission: if a 128 MB part fails at 99% completion, roughly 127 MB of already-sent data must be retransmitted; with 8 MB parts, you lose at most 8 MB.

Production guidelines vary by environment. Mobile or high-latency links typically use 4 to 16 MB chunks for resilience and smooth progress reporting. Data-center uploads on stable 10+ Gbps links prefer 64 to 256 MB parts to minimize overhead and maximize throughput. Google Cloud Storage clients commonly default to 8 to 256 MB chunks depending on file size and expected reliability.
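The sizing formula and the cost arithmetic above are easy to check directly. Below is a minimal sketch in Python, assuming S3's documented 10,000-part and 5 MB-minimum limits and the $0.005-per-1,000-PUTs example price used in this section (binary units, so "1 TB" here is computed as 1024^4 bytes):

import math

# Constraints from the section: at most 10,000 parts per upload,
# and every part except the last must be at least 5 MB.
MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * 1024**2  # 5 MiB

def minimum_part_size(object_size: int) -> int:
    """Smallest part size (bytes) that keeps an object within MAX_PARTS."""
    return max(MIN_PART_SIZE, math.ceil(object_size / MAX_PARTS))

def put_request_cost(total_bytes: int, part_size: int,
                     price_per_1k_puts: float = 0.005) -> tuple[int, float]:
    """PUT-request count and dollar cost for uploading total_bytes in parts."""
    requests = math.ceil(total_bytes / part_size)
    return requests, requests / 1_000 * price_per_1k_puts

# 1 TiB object: 64 MiB parts would need 16,384 parts, over the 10,000 limit;
# the minimum viable part size works out to ~105 MiB.
print(minimum_part_size(1 * 1024**4) / 1024**2)

# 10 TiB of data: 8 MiB parts vs. 128 MiB parts (~$6.55 vs. ~$0.41).
print(put_request_cost(10 * 1024**4, 8 * 1024**2))
print(put_request_cost(10 * 1024**4, 128 * 1024**2))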
💡 Key Takeaways
Amazon S3 enforces minimum 5 MB per part (except last) and maximum 10,000 parts, requiring part size >= ceiling(object size ÷ 10,000)
Request cost example: 10 TB with 8 MB parts = 1.3M PUTs ($6.55) vs 128 MB parts = 81,920 PUTs ($0.41), a 16x difference
Larger parts (128 to 512 MB) reduce request overhead and cost but waste more bandwidth on retry: a failed 512 MB part at 99% still requires full 512 MB retransmission
Smaller parts (4 to 16 MB) provide smoother progress on flaky networks and reduce retry waste but increase per-request latency and risk server throttling
Production heuristic: mobile/flaky networks use 4 to 16 MB; stable data center links use 64 to 256 MB to balance efficiency and resilience
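The production heuristic in the last takeaway can be wired into an upload client. The sketch below uses boto3's transfer manager; TransferConfig and upload_file are real boto3 APIs, but the network classification, chunk sizes, concurrency values, and the bucket/file names are illustrative assumptions rather than a recommended production configuration:

import boto3
from boto3.s3.transfer import TransferConfig

def transfer_config_for(network: str) -> TransferConfig:
    """Map a coarse network profile to a multipart part size (assumed values)."""
    if network == "mobile":
        chunk = 8 * 1024**2        # 8 MiB: cheap retries, smooth progress on flaky links
        concurrency = 4
    else:  # "datacenter"
        chunk = 128 * 1024**2      # 128 MiB: fewer requests on stable 10+ Gbps links
        concurrency = 10
    return TransferConfig(
        multipart_threshold=chunk,  # switch to multipart above this object size
        multipart_chunksize=chunk,  # size of each uploaded part
        max_concurrency=concurrency,
    )

# Hypothetical bucket, key, and local file, for illustration only.
s3 = boto3.client("s3")
s3.upload_file("asset.bin", "example-bucket", "uploads/asset.bin",
               Config=transfer_config_for("datacenter"))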
📌 Examples
1 TB object with 64 MB parts = 16,384 parts, exceeds S3 10,000 limit; must use >= 100 MB parts to stay within bounds
Dropbox historically uses approximately 4 MB blocks for client uploads, optimizing for resume granularity and deduplication alignment
Netflix's encoding pipelines upload large video assets with 128 to 256 MB parts on stable AWS links to minimize request overhead while maintaining reasonable retry granularity