
Chunk and Part Sizing Trade-offs

Part Size Trade-offs

Smaller parts mean less work wasted per retry but more HTTP overhead. Each part requires a separate request with headers, authentication, and connection setup. With 1MB parts, a 10GB file needs 10,000 requests, and each request adds 10-50ms of overhead. At 100MB parts, the same file needs only 100 requests, far less overhead.

Larger parts risk more wasted work on failure. A 500MB part at 100 Mbps takes 40 seconds. Network failure at 39 seconds wastes 39 seconds. The optimal part size depends on network reliability: stable datacenter links tolerate larger parts, flaky mobile connections need smaller parts.
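The two costs above can be put into one back-of-envelope model. The sketch below is illustrative: the function name, overhead figure, and failure rate are assumptions, not measurements from any real service.

```python
# Rough model: expected upload time = (transfer + per-request overhead)
# per part, times the expected number of attempts per part.
# All numbers here are illustrative assumptions.

def expected_upload_seconds(file_bytes, part_bytes, bandwidth_bps,
                            per_request_overhead_s, part_failure_rate):
    """Expected total seconds, counting retried parts as wasted transfer."""
    parts = -(-file_bytes // part_bytes)           # ceiling division
    part_time = part_bytes * 8 / bandwidth_bps     # seconds to send one part
    # With failure probability p per attempt, expected attempts = 1 / (1 - p).
    attempts_per_part = 1 / (1 - part_failure_rate)
    return parts * attempts_per_part * (part_time + per_request_overhead_s)

GB = 10**9
MB = 10**6
for size_mb in (1, 25, 100, 500):
    t = expected_upload_seconds(10 * GB, size_mb * MB,
                                bandwidth_bps=100 * 10**6,    # 100 Mbps
                                per_request_overhead_s=0.03,  # ~30 ms/request
                                part_failure_rate=0.02)
    print(f"{size_mb:>4} MB parts: ~{t:,.0f} s expected")
```

Running this with a 2% failure rate shows tiny parts losing to request overhead while very large parts barely improve on 100MB; raising the failure rate flips the ranking toward smaller parts.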

Platform Limits Shape Design

Cloud storage services impose limits. Common constraints: minimum part size 5MB (except last part), maximum part size 5GB, maximum 10,000 parts per upload. These limits constrain file sizes: 5GB * 10,000 = 50TB maximum with largest parts, 5MB * 10,000 = 50GB maximum with smallest parts.

For files approaching limits, compute part size dynamically. A 100GB file needs at least 100GB / 10,000 = 10MB parts. A 1TB file needs at least 100MB parts. Clients should calculate minimum part size from file size and part limit.
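That calculation is a few lines of code. The sketch below assumes the limits stated above (5MB minimum, 5GB maximum, 10,000 parts); the function name and the 100MB preferred default are illustrative choices, not a standard API.

```python
# Sketch: derive a legal part size from file size and platform limits.
# Constants mirror the limits described above; adjust per provider.

MIN_PART = 5 * 1024**2        # 5 MiB minimum (except the last part)
MAX_PART = 5 * 1024**3        # 5 GiB maximum
MAX_PARTS = 10_000

def pick_part_size(file_size, preferred=100 * 1024**2):
    """Smallest part size >= preferred that keeps the upload under MAX_PARTS."""
    floor = -(-file_size // MAX_PARTS)    # ceil(file_size / MAX_PARTS)
    size = max(MIN_PART, preferred, floor)
    if size > MAX_PART:
        raise ValueError("file too large for this platform's part limits")
    return size

print(pick_part_size(100 * 1024**3))   # 100 GiB file: preferred 100 MiB is enough
print(pick_part_size(2 * 1024**4))     # 2 TiB file: floor forces a larger part
```

Clients that compute the floor up front avoid the failure mode where an upload hits part 10,000 with data still remaining.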

Adaptive Part Sizing

Smart clients adjust part size based on observed conditions. Start with moderate parts (50-100MB). If failures are frequent, shrink part size to reduce retry cost. If uploads succeed reliably, grow part size to reduce overhead. This adapts to network conditions without manual configuration.

Track success rate per part size. If 100MB parts fail 20% of the time but 25MB parts fail 2%, the smaller parts waste less total bandwidth despite higher overhead. The math: 20% * 100MB = 20MB wasted per attempt versus 2% * 25MB = 0.5MB.
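The shrink-on-failure, grow-on-success loop can be sketched as a small stateful helper. The class name, window size, and thresholds below are illustrative assumptions, not a standard algorithm.

```python
# Sketch of adaptive part sizing: halve the part size when the observed
# failure rate in a window is high, double it when uploads are reliably
# succeeding. Thresholds and step factors are illustrative.

class AdaptivePartSizer:
    MIN_SIZE = 5 * 1024**2        # stay above the 5 MB platform minimum
    MAX_SIZE = 512 * 1024**2

    def __init__(self, initial=64 * 1024**2, window=20):
        self.size = initial
        self.window = window       # re-evaluate after this many attempts
        self.failures = 0
        self.attempts = 0

    def record(self, success):
        self.attempts += 1
        if not success:
            self.failures += 1
        if self.attempts < self.window:
            return
        rate = self.failures / self.attempts
        if rate > 0.10:                        # flaky link: cut retry cost
            self.size = max(self.MIN_SIZE, self.size // 2)
        elif rate < 0.02:                      # stable link: cut HTTP overhead
            self.size = min(self.MAX_SIZE, self.size * 2)
        self.failures = self.attempts = 0      # start a fresh window

sizer = AdaptivePartSizer()
for _ in range(20):
    sizer.record(success=False)                # simulated flaky network
print(sizer.size // 1024**2, "MB")             # halved from the 64 MB start
```

Resetting the counters each window keeps the sizer responsive to changing conditions instead of averaging over the whole session.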

Memory and Buffering Constraints

The client must buffer at least one part in memory (or on disk) before uploading. A 500MB part size on a device with 256MB of available memory fails outright. Mobile apps typically use 5-20MB parts; server-side batch processing can use 100MB+ parts. Match part size to client capabilities, not just network conditions.
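One way to honor this constraint is to cap the part size at a fraction of the client's memory budget. The function name, budget figures, and 25% fraction below are hypothetical device profiles, not recommendations from any platform.

```python
# Sketch: bound part size by the client's memory budget as well as the
# platform floor. Budget figures are hypothetical device profiles.

MIN_PART = 5 * 1024**2                 # platform minimum part size

def part_size_for_device(memory_budget, fraction=0.25):
    """Spend at most a fraction of available memory buffering one part."""
    return max(MIN_PART, int(memory_budget * fraction))

print(part_size_for_device(64 * 1024**2) // 1024**2, "MB")   # mobile-ish budget
print(part_size_for_device(2 * 1024**3) // 1024**2, "MB")    # server budget
```

Exposing part size as an API parameter, rather than hard-coding it, lets each client apply a cap like this for itself.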

🎯 When To Use: Start with 50-100MB parts for server workloads, 5-10MB for mobile. Shrink if retry rate exceeds 10%. Grow if network is stable and parts complete consistently.
💡 Key Takeaways
- Smaller parts (5MB) reduce retry cost but add HTTP overhead; larger parts (500MB) risk more wasted work on failure
- Platform limits constrain design: a 5MB minimum, 5GB maximum, and 10,000-part cap set a 50TB file-size ceiling
- Calculate minimum part size dynamically: a 100GB file needs at least 10MB parts to stay under the 10,000-part limit
- Adaptive sizing: start moderate (50-100MB), shrink on frequent failures, grow on consistent success
- Memory constraints matter: mobile apps buffer 5-20MB parts; server workloads can use 100MB+
📌 Interview Tips
1. Calculate part size on the spot. The interviewer gives a 500GB file with a 10,000-part limit; the answer is a 50MB minimum part size. Then explain you would use 100MB for margin, giving 5,000 parts.
2. Present the trade-off quantitatively. 100MB parts with a 20% failure rate waste 20MB per attempt; 25MB parts with a 2% rate waste 0.5MB. The smaller parts win despite 4x the request overhead.
3. Mention client memory constraints. A mobile app with a 100MB memory budget cannot buffer 500MB parts. Design APIs to accept part size as a parameter so clients can match their capabilities.