fix: allow message processing of duplicate message after 60 seconds #2074
Fixes a bug affecting AWS SQS queues with a dead-letter queue configured. If SQS.DeleteMessage calls unexpectedly failed (or exhausted all retries) for a message, the same message would be redelivered to the same consumer over and over, and each redelivery would be ignored as a duplicate SQS message until the message was eventually moved to the DLQ. Even after the message was redriven back to the queue, the same consumer would keep receiving it and still ignore it as a duplicate, leaving the message stuck in a loop. A sketch of the new behaviour follows below.
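For illustration only, here is a minimal Go sketch of the intended behaviour (the type and function names are hypothetical, not the project's actual code): duplicate message IDs are remembered together with a timestamp, and an entry effectively expires after 60 seconds, so a message that keeps being redelivered because the delete failed becomes processable again instead of being dropped forever.

```go
package dedup

import (
	"sync"
	"time"
)

// dedupWindow is the duration after which a previously seen message ID is
// allowed to be processed again (matches the 60 seconds in this PR's title).
const dedupWindow = 60 * time.Second

// seenWindow is a hypothetical in-memory tracker of recently processed
// message IDs; it is only meant to illustrate the expiring-window idea.
type seenWindow struct {
	mu   sync.Mutex
	seen map[string]time.Time // message ID -> time it was first processed
}

func newSeenWindow() *seenWindow {
	return &seenWindow{seen: make(map[string]time.Time)}
}

// shouldProcess reports whether a message with the given ID should be handled.
// It returns false only if the same ID was already processed within the last
// 60 seconds; older entries are overwritten, restarting the window.
func (w *seenWindow) shouldProcess(messageID string, now time.Time) bool {
	w.mu.Lock()
	defer w.mu.Unlock()

	if first, ok := w.seen[messageID]; ok && now.Sub(first) < dedupWindow {
		return false // still within the dedup window: treat as a duplicate
	}
	w.seen[messageID] = now // (re)start the window for this ID
	return true
}
```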
Note that with this change, the consumer's handler function runs again when the same message is received more than 60 seconds after it was first seen. In a setup where a service has a single receiving consumer, this places stricter responsibility on the handler to be idempotent, in the same way it already must be in setups with multiple consumers.
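As a rough sketch of what that stricter idempotency can look like (all names here are hypothetical and not part of this PR), the handler can key its side effects by message ID, so a redelivery after the 60-second window becomes a no-op rather than a double application:

```go
package consumer

// ProcessedStore is a hypothetical interface for whatever durable store the
// service uses to remember which message IDs it has already handled.
type ProcessedStore interface {
	// MarkProcessed records the ID and reports whether it was already recorded.
	MarkProcessed(messageID string) (alreadyDone bool, err error)
}

// handleMessage is a sketch of a handler written to tolerate redeliveries:
// the business logic runs at most once per message ID, even if the consumer
// invokes the handler again after the 60-second dedup window has passed.
func handleMessage(messageID string, body []byte, store ProcessedStore, apply func([]byte) error) error {
	alreadyDone, err := store.MarkProcessed(messageID)
	if err != nil {
		return err
	}
	if alreadyDone {
		return nil // already handled once: safe to acknowledge and move on
	}
	return apply(body) // actual business logic
}
```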