Amazon SQS FIFO vs Standard: Throughput and Guarantee Trade-offs
Amazon SQS offers two queue types with fundamentally different trade-offs. Standard queues provide at-least-once delivery with best-effort ordering and nearly unlimited throughput per queue; messages can be delivered more than once and may arrive out of order, so consumers must implement idempotency. FIFO (First-In-First-Out) queues provide exactly-once processing with strict ordering via content-based deduplication windows and message group identifiers. However, throughput is capped at roughly 300 messages per second per queue without batching and up to approximately 3,000 messages per second with batching (10 messages per batch).
The deduplication mechanism in FIFO queues works by hashing the message body or using a client-provided deduplication identifier. SQS tracks these identifiers over a 5-minute deduplication window; a message whose identifier duplicates one seen within the window is accepted but not delivered, which yields exactly-once processing semantics. Message groups enable ordered processing: messages within the same group are processed in order, while different groups can be processed in parallel.
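The sketch below shows what this looks like on the producer side with boto3. The queue URL, event payload, and deduplication-key scheme are illustrative assumptions, not prescribed by SQS; the relevant parameters are MessageGroupId and MessageDeduplicationId.

```python
import json
import boto3

sqs = boto3.client("sqs")
# Hypothetical FIFO queue URL; FIFO queue names must end in ".fifo".
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"

def publish_order_event(order_id: str, customer_id: str, event: dict) -> None:
    """Send one event; SQS accepts but does not deliver any later message
    that reuses the same MessageDeduplicationId within the 5-minute window."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event),
        # All messages for one customer share a group -> strict ordering per customer.
        MessageGroupId=customer_id,
        # Explicit dedup key (assumed format); alternatively enable content-based
        # deduplication on the queue and omit this field.
        MessageDeduplicationId=f"{order_id}:{event['type']}",
    )
```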
The choice between Standard and FIFO is a trade-off between throughput requirements and tolerance for duplicates and ordering violations. If you need more than 3,000 messages per second, you must either shard across multiple FIFO queues (partitioning by message group, as sketched below) or use Standard queues and implement application-level idempotency. Financial systems, inventory management, and ledgers typically choose FIFO for critical paths, while analytics, logging, and monitoring pipelines use Standard queues for higher throughput.
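A hedged sketch of that sharding approach, assuming a fixed set of pre-created FIFO queues and a stable hash of the message group key (queue names and shard count are hypothetical):

```python
import hashlib

# Hypothetical set of pre-created FIFO queue shards.
SHARD_URLS = [
    f"https://sqs.us-east-1.amazonaws.com/123456789012/orders-{i}.fifo"
    for i in range(10)
]

def queue_for_group(group_id: str) -> str:
    """Map a message group key to one shard via a stable hash, so a given
    group's messages always land on the same FIFO queue and stay ordered."""
    digest = hashlib.sha256(group_id.encode()).hexdigest()
    return SHARD_URLS[int(digest, 16) % len(SHARD_URLS)]
```

Stable hashing is what preserves per-group ordering across shards; changing the shard count reassigns groups, so resharding needs care around in-flight messages.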
💡 Key Takeaways
•Standard queues scale to nearly unlimited throughput per queue with at-least-once delivery, while FIFO queues cap at 300 messages per second (3,000 with 10-message batches) for exactly-once processing.
•FIFO deduplication uses a 5-minute rolling window keyed on a hash of the message body or a client-provided deduplication identifier. Duplicate messages within the window are accepted but not delivered to consumers.
•Message groups enable parallel processing in FIFO queues: messages in the same group are strictly ordered, while different groups process concurrently. Choose high-cardinality group identifiers to maximize parallelism.
•To exceed 3,000 messages per second with FIFO guarantees, shard across multiple FIFO queues partitioned by message group or business key, at the cost of added complexity in consumer coordination and monitoring.
•Standard queues work well for analytics aggregation, search indexing, and cache warming where duplicates are tolerable. FIFO queues suit financial charges, inventory decrements, and entitlement grants where duplicates are costly.
📌 Examples
A payment processing system uses SQS FIFO queues with customer_id as the message group identifier. This ensures all operations for a given customer are processed in order (charge, then fulfill, then notify). Each customer's messages are processed sequentially within its message group, but parallelism across customers scales horizontally up to the queue's overall FIFO throughput cap (roughly 300 messages per second without batching).
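A minimal consumer sketch for such a FIFO queue using boto3 long polling; the queue URL and handle_payment_event are hypothetical placeholders, not part of the original example.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo"

def handle_payment_event(event: dict) -> None:
    # Hypothetical business logic: charge, fulfill, notify, etc.
    print("processing", event)

def poll_once() -> None:
    """Long-poll one batch; SQS withholds further messages from a message
    group while earlier ones from that group are still in flight."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        handle_payment_event(json.loads(msg["Body"]))
        # Delete only after successful processing; otherwise the message
        # becomes visible again after the visibility timeout and is retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```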
A clickstream analytics pipeline uses SQS Standard queues to ingest millions of events per second. Downstream aggregation jobs implement idempotent upserts to handle duplicates, prioritizing throughput and availability over strict deduplication.
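One way such idempotent upserts could be implemented is sketched below, using SQLite with a processed-event table keyed by a per-event identifier; the table names, schema, and event fields are assumptions for illustration.

```python
import sqlite3

# Hypothetical local store standing in for the aggregation database.
db = sqlite3.connect("clickstream.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS page_counts (
           page TEXT NOT NULL,
           day  TEXT NOT NULL,
           hits INTEGER NOT NULL,
           PRIMARY KEY (page, day)
       )"""
)
db.execute(
    "CREATE TABLE IF NOT EXISTS processed_events (event_id TEXT PRIMARY KEY)"
)

def apply_event(event_id: str, page: str, day: str) -> None:
    """Count each event at most once, even if SQS delivers it twice."""
    with db:  # one transaction: dedup check and upsert commit together
        # INSERT is a no-op if this event_id was already recorded.
        cur = db.execute(
            "INSERT OR IGNORE INTO processed_events (event_id) VALUES (?)",
            (event_id,),
        )
        if cur.rowcount == 0:
            return  # duplicate delivery; skip
        db.execute(
            """INSERT INTO page_counts (page, day, hits) VALUES (?, ?, 1)
               ON CONFLICT(page, day) DO UPDATE SET hits = hits + 1""",
            (page, day),
        )
```

Recording the event ID and applying the aggregate update in the same transaction is what makes the operation safe to replay on duplicate delivery.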
An inventory management system initially used SQS Standard queues but experienced duplicate decrements that caused negative stock levels. After migrating to FIFO queues with product_id as the message group identifier (sharded across 10 queues), the system sustained 2,500 messages per second while eliminating duplicates.