

# Amazon SQS message deduplication and grouping
<a name="best-practices-message-deduplication"></a>

This topic provides best practices for ensuring consistent message processing in Amazon SQS. It explains how to use:
+ [Message deduplication IDs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html#API_SendMessage_RequestSyntax) to prevent duplicate messages in FIFO queues.
+ [Message group IDs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) to manage message ordering within distinct message groups.

**Topics**
+ [Avoiding inconsistent message processing in Amazon SQS](avoiding-inconsistent-message-processing.md)
+ [Using the message deduplication ID](using-messagededuplicationid-property.md)
+ [Using the message group ID](using-messagegroupid-property.md)
+ [Using the receive request attempt ID](using-receiverequestattemptid-request-parameter.md)

# Avoiding inconsistent message processing in Amazon SQS
<a name="avoiding-inconsistent-message-processing"></a>

Because Amazon SQS is a distributed system, a consumer might not receive a message even though Amazon SQS marks the message as delivered while returning successfully from a [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API call. In this case, Amazon SQS records the message as delivered at least once, although the consumer has never received it. Because no additional attempts to deliver messages are made under these conditions, we don't recommend setting the maximum receive count to 1 for a [dead-letter queue](sqs-dead-letter-queues.md).
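To keep such a message recoverable, configure the dead-letter queue's redrive policy with a `maxReceiveCount` of at least 2. The following sketch builds that policy document; the queue ARN is a placeholder, and the commented-out `set_queue_attributes` call assumes a boto3 SQS client.

```python
import json

def build_redrive_policy(dlq_arn: str, max_receive_count: int = 3) -> str:
    """Build the RedrivePolicy JSON for an SQS queue.

    A maxReceiveCount of 1 would send a message that was marked delivered
    but never actually received straight to the dead-letter queue, so we
    require at least one extra delivery attempt.
    """
    if max_receive_count < 2:
        raise ValueError("maxReceiveCount should be at least 2")
    return json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": max_receive_count,
    })

# With boto3 (assumed), you would apply the policy like this:
# sqs.set_queue_attributes(
#     QueueUrl=queue_url,
#     Attributes={"RedrivePolicy": build_redrive_policy(dlq_arn)},
# )
policy = build_redrive_policy("arn:aws:sqs:us-east-1:123456789012:my-dlq")
```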

# Using the message deduplication ID in Amazon SQS
<a name="using-messagededuplicationid-property"></a>

The [message deduplication ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) is a token used only in Amazon SQS FIFO queues to prevent duplicate message delivery. It ensures that within a 5-minute deduplication window, only one instance of a message with the same deduplication ID is processed and delivered.

If Amazon SQS has already accepted a message with a specific deduplication ID, any subsequent messages with the same ID will be acknowledged but not delivered to consumers.

**Note**  
Amazon SQS continues tracking the deduplication ID even after the message has been received and deleted.
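The behavior above can be sketched as follows. Both sends carry the same `MessageDeduplicationId`, so within the 5-minute window SQS acknowledges the retry but delivers only one copy. The queue URL is a placeholder, and the boto3 call is shown commented out.

```python
import uuid

def send_params(queue_url: str, body: str, dedup_id: str, group_id: str) -> dict:
    """Build keyword arguments for SendMessage on a FIFO queue."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageDeduplicationId": dedup_id,
        "MessageGroupId": group_id,
    }

dedup_id = str(uuid.uuid4())
first = send_params("https://sqs.example/queue.fifo", "order-42 created", dedup_id, "orders")
retry = send_params("https://sqs.example/queue.fifo", "order-42 created", dedup_id, "orders")
# With boto3 (assumed): sqs.send_message(**first). Re-sending with **retry
# inside the deduplication window is acknowledged but not delivered again.
```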

**Topics**
+ [When to provide a message deduplication ID in Amazon SQS](providing-message-deduplication-id.md)
+ [Enabling deduplication for a single-producer/consumer system in Amazon SQS](single-producer-single-consumer.md)
+ [Outage recovery scenarios in Amazon SQS](designing-for-outage-recovery-scenarios.md)
+ [Configuring visibility timeouts in Amazon SQS](working-with-visibility-timeouts.md)

# When to provide a message deduplication ID in Amazon SQS
<a name="providing-message-deduplication-id"></a>

A producer should specify a message deduplication ID in the following scenarios:
+ When sending identical message bodies that must be treated as unique.
+ When sending messages with the same content but different message attributes, ensuring each message is processed separately.
+ When sending messages with different content (for example, a retry counter in the message body) but requiring Amazon SQS to recognize them as duplicates.
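The last scenario can be sketched like this: the message body differs between attempts (a retry counter), but an explicit deduplication ID keyed on the order, not the body, makes SQS treat the retry as a duplicate. The names and queue URL are illustrative.

```python
import json

def attempt_params(queue_url: str, order_id: int, attempt: int) -> dict:
    """Build SendMessage arguments for one delivery attempt of an order event."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps({"order_id": order_id, "attempt": attempt}),
        # Keyed on the order, not the body, so retries deduplicate even
        # though the retry counter changes the content.
        "MessageDeduplicationId": f"order-{order_id}",
        "MessageGroupId": "orders",
    }

p1 = attempt_params("https://sqs.example/queue.fifo", 42, 1)
p2 = attempt_params("https://sqs.example/queue.fifo", 42, 2)
```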

# Enabling deduplication for a single-producer/consumer system in Amazon SQS
<a name="single-producer-single-consumer"></a>

If you have a single producer and a single consumer, and messages are unique because they include an application-specific message ID in the body, follow these best practices:
+ Enable content-based deduplication for the queue (each of your messages has a unique body). The producer can omit the message deduplication ID.
+ When content-based deduplication is enabled for an Amazon SQS FIFO queue and a message is sent with an explicit deduplication ID, the provided [message deduplication ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) overrides the generated content-based deduplication ID.
+ Although the consumer isn't required to provide a receive request attempt ID for each request, it's a best practice because it allows fail-retry sequences to execute faster.
+ You can retry send or receive requests because they don't interfere with the ordering of messages in FIFO queues.
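Content-based deduplication generates the deduplication ID as a SHA-256 hash of the message body (message attributes are not included). The sketch below reproduces that computation locally to show why identical bodies collapse to a single message; the example payloads are illustrative.

```python
import hashlib

def content_based_dedup_id(body: str) -> str:
    """Mimic the server-side content-based deduplication ID:
    a SHA-256 hash of the message body (attributes are excluded)."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

a = content_based_dedup_id('{"event": "signup", "user": 7}')
b = content_based_dedup_id('{"event": "signup", "user": 7}')
c = content_based_dedup_id('{"event": "signup", "user": 8}')
# Identical bodies produce the same ID and deduplicate; different bodies don't.
```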

# Outage recovery scenarios in Amazon SQS
<a name="designing-for-outage-recovery-scenarios"></a>

The deduplication process in FIFO queues is time-sensitive. When designing your application, ensure that both the producer and consumer can recover from client or network outages without introducing duplicates or processing failures.

**Producer considerations**
+ Amazon SQS enforces a 5-minute deduplication interval. If a producer retries a [`SendMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) request after the interval expires, Amazon SQS treats it as a new message, potentially creating duplicates. For example, a mobile device in a car sends messages whose order is important. If the car loses cellular connectivity before receiving an acknowledgement, retrying the request after regaining connectivity can create a duplicate.

**Consumer considerations**
+ If a consumer fails to process a message before the visibility timeout expires, another consumer can receive and process it, leading to duplicate processing. Adjust the visibility timeout based on your application's processing time.
+ Use the [`ChangeMessageVisibility`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html) API to extend the timeout while a message is still being processed. However, once the visibility timeout expires, another consumer can immediately begin to process the message, causing it to be processed multiple times.
+ If a message repeatedly fails to process, route it to a [dead-letter queue](sqs-dead-letter-queues.md) instead of allowing it to be reprocessed indefinitely.
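One way to extend the visibility timeout during processing is a heartbeat thread that periodically calls `ChangeMessageVisibility`. The helper below is a sketch: the `sqs` client is injected (an assumed boto3 client in production, a stub in tests), and the intervals are illustrative.

```python
import threading

def keep_invisible(sqs, queue_url, receipt_handle,
                   extend_secs=60, every_secs=45):
    """Periodically extend a message's visibility timeout while it is
    being processed. Returns an Event; call .set() after deleting the
    message to stop the heartbeat."""
    stop = threading.Event()

    def heartbeat():
        # wait() returns False on timeout, True once stop is set.
        while not stop.wait(every_secs):
            sqs.change_message_visibility(
                QueueUrl=queue_url,
                ReceiptHandle=receipt_handle,
                VisibilityTimeout=extend_secs,
            )

    threading.Thread(target=heartbeat, daemon=True).start()
    return stop
```

After the consumer finishes and deletes the message, it calls `stop.set()` so the heartbeat ends instead of extending visibility on a stale receipt handle.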

# Configuring visibility timeouts in Amazon SQS
<a name="working-with-visibility-timeouts"></a>

To ensure reliable message processing, set the visibility timeout to be longer than the AWS SDK read timeout. This applies when using the [https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API with both short polling and long polling. A longer visibility timeout prevents messages from becoming available to other consumers before the original request completes, reducing the risk of duplicate processing.
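The relationship between these timeouts can be made explicit in configuration. The values below are illustrative, not recommendations: the SDK read timeout must exceed the long-poll wait time, and the visibility timeout should exceed the read timeout. The boto3/botocore wiring is shown commented out as an assumed setup.

```python
# Illustrative values: long-poll wait < SDK read timeout < visibility timeout.
WAIT_TIME_SECS = 20            # maximum long-poll duration SQS allows
READ_TIMEOUT_SECS = 65         # botocore read_timeout; must exceed the wait time
VISIBILITY_TIMEOUT_SECS = 120  # longer than the read timeout

assert WAIT_TIME_SECS < READ_TIMEOUT_SECS < VISIBILITY_TIMEOUT_SECS

# With boto3/botocore (assumed):
# from botocore.config import Config
# sqs = boto3.client("sqs", config=Config(read_timeout=READ_TIMEOUT_SECS))
# sqs.receive_message(QueueUrl=queue_url,
#                     WaitTimeSeconds=WAIT_TIME_SECS,
#                     VisibilityTimeout=VISIBILITY_TIMEOUT_SECS)
```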

# Using the message group ID with Amazon SQS FIFO queues
<a name="using-messagegroupid-property"></a>

In FIFO (First-In-First-Out) queues, the [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) is an attribute that organizes messages into distinct groups. Messages within the same message group are always processed one at a time, in strict order, ensuring that no two messages from the same group are processed simultaneously. In standard queues, using `MessageGroupId` enables [fair queues](sqs-fair-queues.md). If strict ordering is required, use a FIFO queue. 

**Topics**
+ [Interleaving multiple ordered message groups in Amazon SQS](interleaving-multiple-ordered-message-groups.md)
+ [Preventing duplicate processing in a multiple-producer/consumer system in Amazon SQS](avoding-processing-duplicates-in-multiple-producer-consumer-system.md)
+ [Avoid large message backlogs with the same message group ID in Amazon SQS](avoid-backlog-with-the-same-message-group-id.md)
+ [Avoid reusing the same message group ID with virtual queues in Amazon SQS](avoiding-reusing-message-group-id-with-virtual-queues.md)

# Interleaving multiple ordered message groups in Amazon SQS
<a name="interleaving-multiple-ordered-message-groups"></a>

To interleave multiple ordered message groups within a single FIFO queue, assign a unique [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) to each group (for example, session data for different users). This allows multiple consumers to read from the queue simultaneously while ensuring that messages within the same group are processed in order.

When a message with a specific `MessageGroupId` is being processed and is invisible, no other consumer can process messages from that same group until the visibility timeout expires or the message is deleted.
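A sketch of the interleaving pattern: each message is tagged with its session's `MessageGroupId`, so ordering holds per session while different sessions can be consumed in parallel. The session names and queue URL are illustrative.

```python
def session_message(queue_url: str, session_id: str, seq: int, payload: str) -> dict:
    """Build SendMessage arguments for one event in an ordered session stream."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": payload,
        "MessageGroupId": f"session-{session_id}",          # ordering boundary
        "MessageDeduplicationId": f"session-{session_id}-{seq}",
    }

# Two interleaved sessions in one FIFO queue; alice's events stay ordered
# relative to each other, and likewise for bob's.
batch = [
    session_message("https://sqs.example/q.fifo", "alice", 1, "login"),
    session_message("https://sqs.example/q.fifo", "bob", 1, "login"),
    session_message("https://sqs.example/q.fifo", "alice", 2, "click"),
]
groups = {m["MessageGroupId"] for m in batch}
```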

# Preventing duplicate processing in a multiple-producer/consumer system in Amazon SQS
<a name="avoding-processing-duplicates-in-multiple-producer-consumer-system"></a>

In a high-throughput, low-latency system where message ordering is not a priority, producers can assign a unique [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) to each message. This ensures that Amazon SQS FIFO queues eliminate duplicates, even in a multiple-producer/multiple-consumer setup. While this approach prevents duplicate messages, it does not guarantee message ordering since each message is treated as its own independent group.

In any system with multiple producers and consumers, there is always a risk of duplicate delivery. If a consumer fails to process a message before the visibility timeout expires, Amazon SQS makes the message available again, potentially allowing another consumer to pick it up. To mitigate this, ensure proper message acknowledgment and visibility timeout settings based on processing time.
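The one-group-per-message pattern can be sketched as below: every message gets a fresh `MessageGroupId`, so consumers can work in parallel while FIFO deduplication still applies. The queue URL is a placeholder.

```python
import uuid

def unordered_params(queue_url: str, body: str) -> dict:
    """SendMessage arguments for a FIFO queue when ordering doesn't matter:
    a unique group per message allows parallel consumption."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": str(uuid.uuid4()),        # one group per message
        "MessageDeduplicationId": str(uuid.uuid4()),
    }

p1 = unordered_params("https://sqs.example/q.fifo", "event-a")
p2 = unordered_params("https://sqs.example/q.fifo", "event-b")
```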

# Avoid large message backlogs with the same message group ID in Amazon SQS
<a name="avoid-backlog-with-the-same-message-group-id"></a>

FIFO queues support a maximum of 120,000 in-flight messages (messages received by a consumer but not yet deleted). If this limit is reached, Amazon SQS does not return an error, but processing may be impacted. You can request an increase beyond this limit by contacting [AWS Support](https://docs.aws.amazon.com/awssupport/latest/user/create-service-quota-increase.html).

FIFO queues scan the first 120,000 messages to determine available message groups. If a large backlog builds up in a single message group, messages from other groups sent later will remain blocked until the backlog is processed.

**Note**  
A message backlog can occur when a consumer repeatedly fails to process a message. This could be due to message content issues or consumer-side failures. To prevent message processing delays, configure a [dead-letter queue](sqs-dead-letter-queues.md) to move unprocessed messages after multiple failed attempts. This ensures that other messages in the same message group can be processed, preventing system bottlenecks.

# Avoid reusing the same message group ID with virtual queues in Amazon SQS
<a name="avoiding-reusing-message-group-id-with-virtual-queues"></a>

When using virtual queues with a shared host queue, avoid reusing the same [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) across different virtual queues. If multiple virtual queues share the same host queue and contain messages with the same `MessageGroupId`, those messages can block each other, preventing efficient processing. To ensure smooth message processing, assign unique `MessageGroupId` values for messages in different virtual queues.

# Using the Amazon SQS receive request attempt ID
<a name="using-receiverequestattemptid-request-parameter"></a>

The receive request attempt ID is a unique token used to deduplicate [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) calls in Amazon SQS. During a network outage or connectivity issue between your application and Amazon SQS, it is a best practice to:
+ Provide a receive request attempt ID when making a `ReceiveMessage` call.
+ Retry using the same receive request attempt ID if the operation fails.
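A retry loop following these two practices might look like the sketch below. The same `ReceiveRequestAttemptId` is reused across retries so SQS can return the same batch instead of treating each retry as a new receive; `sqs` is an assumed boto3 client (a stub works for testing), and catching `ConnectionError` stands in for whatever network exception your SDK raises.

```python
import uuid

def receive_with_retry(sqs, queue_url: str, attempts: int = 3):
    """Call ReceiveMessage, retrying network failures with the SAME
    receive request attempt ID for the whole sequence."""
    attempt_id = str(uuid.uuid4())  # one ID for all retries of this receive
    for i in range(attempts):
        try:
            return sqs.receive_message(
                QueueUrl=queue_url,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=20,
                ReceiveRequestAttemptId=attempt_id,
            )
        except ConnectionError:
            if i == attempts - 1:
                raise  # out of retries; surface the failure
```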