Multipart Uploads & Resumable Transfers
What Are Multipart Uploads and Resumable Transfers?
Multipart uploads and resumable transfers are two complementary patterns for reliably moving large objects over unreliable networks. Both solve the same problem: uploading multi-gigabyte or terabyte-scale files without restarting from zero after a network failure.
Multipart upload splits a large object into N independent parts (think slicing a 10 GB file into 128 MB chunks). Each part can be uploaded in parallel, and the server atomically assembles them after all parts arrive. Amazon S3 allows parts between 5 MB and 5 GB (the final part may be smaller) with a maximum of 10,000 parts per object, and caps the assembled object at 5 TB. For example, uploading a 1 TB file with 128 MB parts creates 8,192 independent PUT requests that can run concurrently, saturating multi-gigabit network links.
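As a concrete sketch of that flow using boto3 (bucket, key, and path are placeholders; retries and abort-on-failure cleanup are omitted for brevity):

```python
# Minimal sketch of a parallel multipart upload with boto3. The bucket,
# key, and file path are placeholders; retry logic and calling
# abort_multipart_upload on failure are omitted for brevity.
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

PART_SIZE = 128 * 1024 * 1024  # 128 MB parts (S3 minimum is 5 MB)

s3 = boto3.client("s3")
bucket, key, path = "my-bucket", "big-file.bin", "/data/big-file.bin"

# Control plane: open the upload session and get an UploadId.
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

def upload_part(part_number: int) -> dict:
    """Data plane: read one slice of the file and PUT it as a part."""
    with open(path, "rb") as f:  # each thread gets its own handle
        f.seek((part_number - 1) * PART_SIZE)
        body = f.read(PART_SIZE)
    resp = s3.upload_part(
        Bucket=bucket, Key=key, UploadId=upload_id,
        PartNumber=part_number, Body=body,
    )
    # The ETag is the per-part receipt S3 needs to assemble the object.
    return {"PartNumber": part_number, "ETag": resp["ETag"]}

num_parts = -(-os.path.getsize(path) // PART_SIZE)  # ceiling division
with ThreadPoolExecutor(max_workers=16) as pool:
    parts = list(pool.map(upload_part, range(1, num_parts + 1)))

# Control plane: atomically assemble all parts into the final object.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
```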
Resumable transfer maintains a single logical byte stream but allows clients to resume from a known byte offset after interruptions. Google Cloud Storage implements this with HTTP 308 status codes: the server reports how much it has persisted ("I have bytes 0 through 524,287,999", i.e. the first 500 MB) and the client resumes at byte 524,288,000. This works exceptionally well on mobile or flaky networks where connections drop frequently but you want a simpler mental model than managing thousands of parts.
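A rough sketch of that resume flow against GCS's resumable-upload protocol, assuming a session URI was already obtained when the upload session was created (the file path and URI here are placeholders):

```python
# Sketch of resuming a Google Cloud Storage upload after a dropped
# connection. Assumes `session_uri` came from the initial resumable
# session creation; the path and URI are placeholders.
import os
import requests

path = "/data/big-file.bin"
total = os.path.getsize(path)
session_uri = "https://storage.googleapis.com/upload/..."  # from session start

# Ask the server how much it already has: an empty PUT with a */total range.
status = requests.put(
    session_uri,
    headers={"Content-Range": f"bytes */{total}", "Content-Length": "0"},
)

if status.status_code == 308:  # 308: upload incomplete, resume possible
    # Range header (e.g. "bytes=0-524287999") names the last committed
    # byte; it is absent if the server has persisted nothing yet.
    rng = status.headers.get("Range")
    offset = int(rng.split("-")[1]) + 1 if rng else 0
elif status.status_code in (200, 201):
    offset = total  # upload already completed
else:
    raise RuntimeError(f"unexpected status {status.status_code}")

if offset < total:
    with open(path, "rb") as f:
        f.seek(offset)  # skip the bytes the server confirmed
        requests.put(
            session_uri,
            data=f,  # stream only the remaining bytes
            headers={"Content-Range": f"bytes {offset}-{total - 1}/{total}"},
        )
```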
Both patterns require a control plane (to create upload sessions and finalize objects) and a data plane (to stream bytes and return progress receipts). The fundamental difference is parallelism versus simplicity: multipart maximizes throughput through concurrency, while resumable provides easier progress tracking with a single offset.
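To make that split concrete, here is an illustrative sketch of the two planes as interfaces; the names are hypothetical, not any vendor's actual API:

```python
# Illustrative interfaces for the two planes; these names are
# hypothetical, not any vendor's actual API.
from typing import Protocol


class ControlPlane(Protocol):
    def create_session(self, key: str, total_size: int) -> str:
        """Start an upload session and return its session/upload ID."""
        ...

    def finalize(self, session_id: str, receipts: list[str]) -> None:
        """Commit the object, atomically assembling it from its receipts."""
        ...


class DataPlane(Protocol):
    def put_bytes(self, session_id: str, offset: int, chunk: bytes) -> str:
        """Stream one chunk and return a progress receipt
        (an ETag for a part, or a committed-offset acknowledgement)."""
        ...
```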
💡 Key Takeaways
• Multipart upload splits objects into independent parts (S3: 5 MB to 5 GB each, max 10,000 parts) enabling parallel upload for maximum throughput
• Resumable transfer uses a single byte stream with a server-reported offset (Google Cloud Storage uses HTTP 308 with the last committed byte)
• Both require a control plane for session management and a data plane for byte streaming with progress receipts
• Production throughput: S3 multipart with 16 concurrent 128 MB parts can saturate 10+ Gbps links from EC2 instances
• Pattern choice: use multipart for maximum parallelism on stable links, resumable for simpler offset tracking on flaky mobile networks
📌 Examples
Amazon S3 multipart: 1 TB file with 128 MB parts = 8,192 PUT requests at $0.005 per 1,000 = $0.041 in request costs (arithmetic spelled out in the sketch after these examples)
Google Cloud Storage resumable: Client uploads 500 MB, connection drops, queries server status and receives "last committed: byte 524,287,999", then resumes from offset 524,288,000
Dropbox client sync: Uses 4 MB chunks in upload sessions, allowing process restarts to resume from the last acknowledged chunk boundary
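For reference, the S3 multipart example's arithmetic, spelled out (the $0.005 per 1,000 PUT price is the figure used in the example above):

```python
# Reproducing the S3 example's arithmetic: part count and PUT-request
# cost for a 1 TB object in 128 MB parts at $0.005 per 1,000 requests.
file_size = 1 * 1024**4         # 1 TB (binary) in bytes
part_size = 128 * 1024**2       # 128 MB (binary) in bytes
parts = file_size // part_size  # 8,192 parts
cost = parts / 1_000 * 0.005    # $0.04096
print(parts, round(cost, 3))    # -> 8192 0.041
```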