

# Using the message deduplication ID in Amazon SQS
<a name="using-messagededuplicationid-property"></a>

The message deduplication ID, a parameter of the [`SendMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) API action, is a token used only in Amazon SQS FIFO queues to prevent duplicate message delivery. Within the 5-minute deduplication window, only one instance of a message with a given deduplication ID is processed and delivered.

If Amazon SQS has already accepted a message with a specific deduplication ID, any subsequent messages with the same ID will be acknowledged but not delivered to consumers.

**Note**  
Amazon SQS continues tracking the deduplication ID even after the message has been received and deleted.
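This acknowledge-but-drop behavior can be illustrated with a minimal in-memory sketch. This is a toy model of the deduplication window, not how Amazon SQS is implemented, and `DedupWindowSketch` is a name invented here:

```python
import time

DEDUP_WINDOW_SECONDS = 5 * 60  # Amazon SQS uses a 5-minute deduplication window


class DedupWindowSketch:
    """Toy model of FIFO-queue deduplication: a second send with the same
    deduplication ID inside the window is acknowledged but not delivered."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._seen = {}  # deduplication ID -> time the first send was accepted

    def send(self, dedup_id, body):
        now = self._clock()
        first_seen = self._seen.get(dedup_id)
        if first_seen is not None and now - first_seen < DEDUP_WINDOW_SECONDS:
            return "acknowledged-duplicate"  # dropped, never reaches consumers
        self._seen[dedup_id] = now  # window starts at the first accepted send
        return "delivered"
```

Note that the window is measured from the first accepted send; duplicate sends do not reset it, and once it expires the same ID is treated as a brand-new message.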

**Topics**
+ [When to provide a message deduplication ID in Amazon SQS](providing-message-deduplication-id.md)
+ [Enabling deduplication for a single-producer/consumer system in Amazon SQS](single-producer-single-consumer.md)
+ [Outage recovery scenarios in Amazon SQS](designing-for-outage-recovery-scenarios.md)
+ [Configuring visibility timeouts in Amazon SQS](working-with-visibility-timeouts.md)

# When to provide a message deduplication ID in Amazon SQS
<a name="providing-message-deduplication-id"></a>

A producer should specify a message deduplication ID in the following scenarios:
+ When sending identical message bodies that must be treated as unique.
+ When sending messages with the same content but different message attributes, ensuring each message is processed separately.
+ When sending messages with different content (for example, a retry counter in the message body) but requiring Amazon SQS to recognize them as duplicates.
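The choice in each scenario comes down to whether the deduplication ID should track the message body or an application-level identity. A sketch of that decision, assuming a helper invented here (`dedup_id_for` is not part of the SQS API):

```python
import hashlib
from typing import Optional


def dedup_id_for(body: str, request_id: Optional[str] = None) -> str:
    """Choose a message deduplication ID for a FIFO send (illustrative helper).

    - Pass request_id when messages with *different* bodies (for example, a
      retry counter in the body) must be treated as the same logical message.
    - Without request_id, the ID is an SHA-256 hash of the body, mirroring
      what content-based deduplication computes; identical bodies that must
      be treated as unique therefore need distinct request_id values instead.
    """
    if request_id is not None:
        return request_id
    return hashlib.sha256(body.encode("utf-8")).hexdigest()
```

For example, two retries of the same order with different retry counters get the same explicit ID and deduplicate, while two distinct orders with identical bodies get distinct explicit IDs and are both delivered.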

# Enabling deduplication for a single-producer/consumer system in Amazon SQS
<a name="single-producer-single-consumer"></a>

If you have a single producer and a single consumer, and messages are unique because they include an application-specific message ID in the body, follow these best practices:
+ Enable content-based deduplication for the queue (each of your messages has a unique body). The producer can omit the message deduplication ID.
+ When content-based deduplication is enabled for an Amazon SQS FIFO queue and a message is sent with an explicit deduplication ID, the ID provided in the [`SendMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) request overrides the generated content-based deduplication ID.
+ Although the consumer isn't required to provide a receive request attempt ID for each request, it's a best practice because it allows fail-retry sequences to execute faster.
+ You can retry send or receive requests because they don't interfere with the ordering of messages in FIFO queues.
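The precedence rule in the second bullet can be modeled as a small function. This is a simplified sketch, not the SQS implementation; content-based deduplication generates an SHA-256 hash of the message body:

```python
import hashlib
from typing import Optional


def effective_dedup_id(body: str, explicit_id: Optional[str],
                       content_based_enabled: bool) -> Optional[str]:
    """Which deduplication ID a FIFO send ends up using (simplified model)."""
    if explicit_id is not None:
        return explicit_id  # an explicit ID always takes precedence
    if content_based_enabled:
        # Content-based deduplication hashes the message body.
        return hashlib.sha256(body.encode("utf-8")).hexdigest()
    return None  # a FIFO send without either ID is rejected
```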

# Outage recovery scenarios in Amazon SQS
<a name="designing-for-outage-recovery-scenarios"></a>

The deduplication process in FIFO queues is time-sensitive. When designing your application, ensure that both the producer and consumer can recover from client or network outages without introducing duplicates or processing failures.

**Producer considerations**
+ Amazon SQS enforces a 5-minute deduplication window.
+ If a producer retries a [`SendMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) request after 5 minutes, Amazon SQS treats it as a new message, potentially creating duplicates. For example, suppose a mobile device in a car sends messages whose order is important. If the car loses cellular connectivity before receiving an acknowledgement, retrying the request after regaining connectivity can create a duplicate.

**Consumer considerations**
+ If a consumer fails to process a message before the visibility timeout expires, another consumer may immediately receive and process it, leading to duplicate processing.
+ Adjust the visibility timeout based on your application's processing time, and use the [`ChangeMessageVisibility`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html) action to extend the timeout while a message is still being processed.
+ If a message repeatedly fails to process, route it to a [dead-letter queue (DLQ)](sqs-dead-letter-queues.md) instead of allowing it to be reprocessed indefinitely.
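Routing repeatedly failing messages to a dead-letter queue is configured through the `RedrivePolicy` queue attribute on the source queue. A minimal sketch of building that attribute (the ARN in the usage example is a placeholder; you would pass the result to `SetQueueAttributes`):

```python
import json


def redrive_policy(dlq_arn: str, max_receive_count: int) -> dict:
    """Build the RedrivePolicy queue attribute that moves a message to the
    dead-letter queue after it has been received max_receive_count times
    without being deleted."""
    return {
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receive_count),
        })
    }


# Placeholder ARN; for a FIFO source queue the DLQ must also be FIFO.
attrs = redrive_policy("arn:aws:sqs:us-east-1:123456789012:MyDLQ.fifo", 5)
```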

# Configuring visibility timeouts in Amazon SQS
<a name="working-with-visibility-timeouts"></a>

To ensure reliable message processing, set the visibility timeout to be longer than the AWS SDK read timeout. This applies when using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API with either short polling or long polling. A longer visibility timeout prevents messages from becoming available to other consumers before the original request completes, reducing the risk of duplicate processing.
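The relationship can be expressed as a simple invariant check. The values below are illustrative, not recommendations, and the function doesn't call AWS; it only verifies the ordering the text describes (with long polling, the SDK read timeout must in turn exceed the `WaitTimeSeconds` poll duration):

```python
# Illustrative values, in seconds; tune for your application.
SDK_READ_TIMEOUT = 70    # e.g. the HTTP read timeout configured on the SQS client
WAIT_TIME_SECONDS = 20   # long-poll duration passed to ReceiveMessage
VISIBILITY_TIMEOUT = 120 # VisibilityTimeout on the queue or the receive call


def receive_settings_are_safe(visibility_timeout: int, read_timeout: int,
                              wait_time: int) -> bool:
    """The visibility timeout should outlive the longest a ReceiveMessage
    call can take, so a message can't reappear for other consumers while
    the original request is still in flight."""
    return visibility_timeout > read_timeout > wait_time
```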