Is message consumption from AWS SQS atomic?

Simple question:
I want to run an Auto Scaling group on Amazon that fires up multiple instances, each processing messages from an SQS queue. But how do I know that the instances aren't processing the same messages?
I can delete a message from the queue once it's processed. But if it hasn't been deleted yet and is still being processed by one instance, another instance could, in my opinion, download that same message and process it as well.

Aside from the fairly remote possibility of SQS incorrectly delivering the same message more than once (which you still need to account for, even though it is unlikely), I suspect your question stems from a lack of familiarity with SQS's concept of "visibility timeout."
Immediately after the component receives the message, the message is still in the queue. However, you don't want other components in the system receiving and processing the message again. Therefore, Amazon SQS blocks them with a visibility timeout, which is a period of time during which Amazon SQS prevents other consuming components from receiving and processing that message.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
This is what keeps multiple queue runners from seeing the same message. Once the visibility timeout expires, the message will be delivered again to a queue consumer, unless you delete it, or it exceeds the maximum configured number of deliveries (at which point it's deleted or goes into a separate dead letter queue if you have configured one). If a job will take longer than the configured visibility timeout, your consumer can also send a request to SQS to change the visibility timeout for that individual message.
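For illustration, here is a minimal sketch of that last point, assuming boto3 and a placeholder queue URL: the consumer extends the visibility timeout of a message it is still working on.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Receive one message; it becomes invisible to other consumers
# for the queue's default visibility timeout.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

for msg in resp.get("Messages", []):
    # Job is taking longer than expected: ask SQS to keep the message
    # hidden for another 120 seconds (measured from this call).
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=120,
    )
```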
Update:
Since this answer was originally written, SQS has introduced FIFO Queues in some of the AWS regions. These operate with the same logic described above, but with guaranteed in-order delivery and additional safeguards to ensure that the occasional duplicate delivery possible with standard queues cannot occur.
FIFO (First-In-First-Out) queues are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates can't be tolerated. FIFO queues also provide exactly-once processing but are limited to 300 transactions per second (TPS).
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
Switching an application to a FIFO queue does require some code changes, and requires that a new queue be created -- existing queues can't be changed over to FIFO.
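As a sketch of what creating the new queue looks like (boto3 assumed; the queue name is a placeholder), note the mandatory .fifo suffix:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues must be created as such; an existing standard queue
# cannot be converted. The name must end in ".fifo".
resp = sqs.create_queue(
    QueueName="orders.fifo",  # placeholder name
    Attributes={
        "FifoQueue": "true",
        # Optional: derive the deduplication ID from a SHA-256 of the body.
        "ContentBasedDeduplication": "true",
    },
)
print(resp["QueueUrl"])
```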

You can receive duplicate messages, but only "on rare occasions". And so you should aim for idempotency.

An instance can receive a duplicate message only once the SQS visibility timeout has expired. By default the visibility timeout is 30 seconds, so you have 30 seconds to make sure your processing is done; otherwise other instances may receive the same message again.
See AWS SQS Timeout for timeout details.
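To make the timing concrete, here is a minimal consumer loop, assuming boto3; the queue URL and process_message are placeholders. The message is deleted only after successful processing:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def process_message(body):
    ...  # placeholder: your own, ideally idempotent, business logic

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        VisibilityTimeout=60,  # override the 30-second default for this receive
    )
    for msg in resp.get("Messages", []):
        process_message(msg["Body"])
        # Delete only after successful processing; if we crash before this,
        # the message reappears once the visibility timeout expires.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```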

Related

How does AWS Lambda determine if messages are still in SQS queue?

When using AWS Lambda with an SQS queue (as an event source), the documentation says:
If messages are still available, Lambda increases the number of processes that are reading batches by up to 60 more instances per minute.
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
My question here is how does the Lambda service determine "If messages are still available" ?
Answering the "how" question in a slightly different way:
Behind the scenes, Lambda operates a "State Manager" control-plane service that discovers work from the queue. State Manager also manages scaling of the fleet of "Poller" workers that do the actual retrieving, batching, invoking, and deleting.
These implementation details are from the Event Source Mapping section of the re:Invent 2022 video A closer look at AWS Lambda (SVS404-R).
One of the calls in the SQS API is GetQueueAttributes (Java API; other SDKs are similar). The response includes an attribute named ApproximateNumberOfMessages, which tells you, or AWS, roughly how many messages are in the queue.
From this, AWS can determine if it's worth spinning up additional instances. You too can get this information from the queue.
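A sketch of that call with boto3 (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=[
        "ApproximateNumberOfMessages",           # visible (available) messages
        "ApproximateNumberOfMessagesNotVisible", # in flight, within visibility timeout
    ],
)
print(resp["Attributes"])
```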
I imagine it uses the ApproximateNumberOfMessagesVisible metric on the SQS queue to check how many messages are available, and uses that number, plus your batch size configuration, to determine how many more Lambda instances your function needs to be scaled out to.
I believe the documentation refers to Lambda polling the queue to know whether there are still messages. Read more about it here.
Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.
Event Source Mapping:
Lambda only sees messages that are currently visible, as governed by the visibility timeout setting on the SQS queue; this prevents other queue consumers from processing the same message. As an event source, I believe Lambda receives messages from the SQS queue by being mapped to it.
As per the documentation you shared, for standard queues long polling is in effect. Long polling waits up to a configured amount of time to see whether there is a message in the queue. Refer to the following docs:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/confirm-queue-is-empty.html
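For illustration, a minimal long-polling receive with boto3 (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# WaitTimeSeconds > 0 enables long polling: the call blocks up to
# 20 seconds (the maximum) until at least one message is available,
# reducing empty responses compared with short polling.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
print(resp.get("Messages", []))
```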

Why can't an SQS FIFO queue with a Lambda trigger guarantee only-once delivery?

I came across an AWS article which mentions that only-once delivery of a message is not guaranteed when a FIFO queue is used with a Lambda trigger.
Amazon SQS FIFO queues ensure that the order of processing follows the message order within a message group. However, it does not guarantee only once delivery when used as a Lambda trigger. If only once delivery is important in your serverless application, it’s recommended to make your function idempotent. You could achieve this by tracking a unique attribute of the message using a scalable, low-latency control database like Amazon DynamoDB.
I am more interested in knowing the reason behind this behaviour when it comes to the Lambda trigger. I assume that with standard queues only-once delivery is not guaranteed, since SQS stores messages on multiple servers for redundancy and high availability, and there is a chance of the same message being delivered again while multiple Lambdas poll the queue.
Can someone please explain the reason for the same behaviour in a FIFO queue with a Lambda trigger, or how it works internally?
By default, Lambda polls SQS synchronously. When Lambda processes messages from the queue they become invisible (i.e. the visibility timeout kicks in) until the Lambda either finishes processing and eventually deletes them from the queue, or fails and leaves them to be retried.
That's why Lambda cannot guarantee exactly-once delivery: there can be a retry in Lambda because of a timeout (15 minutes max) or other code or dependency errors.
To prevent this, either make your processing idempotent or use a partial batch response so that successfully processed messages are deleted even when other messages in the batch fail.
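As a sketch of the partial batch response approach (assuming ReportBatchItemFailures is enabled on the event source mapping; process is a placeholder):

```python
def process(body):
    ...  # placeholder: your business logic; raise on failure

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process(record["body"])
        except Exception:
            # Report only this message as failed; SQS retries it alone,
            # while the rest of the batch is deleted as successful.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```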

AWS SQS redrive policy, which end of the queue do messages go to

In an AWS SQS standard queue you can set a redrive policy, which will cause messages to be retried if there is a failure whereby the message is not deleted from the queue.
In my case I have > 1,000,000 messages on the queue, which take a couple of hours to process. When a message fails and is put back on the queue, will it be put at the end of the queue or the front? Will the messages get retried in a minute or two, or in two or three hours when all the other messages have been processed?
There is no guarantee of the order in which messages are returned, so once you return a message it could be retried immediately, after all the others are processed, or anywhere in between. There may be some undocumented common patterns for when retries happen, but it's not something you can count on or design around.
Q: Does Amazon SQS provide message ordering?
Yes. FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received. If you use a FIFO queue, you don't have to place sequencing information in your messages. For more information, see FIFO Queue Logic in the Amazon SQS Developer Guide.
Standard queues provide a loose-FIFO capability that attempts to preserve the order of messages. However, because standard queues are designed to be massively scalable using a highly distributed architecture, receiving messages in the exact order they are sent is not guaranteed.
https://aws.amazon.com/sqs/faqs/

How to prevent AWS SQS from deleting a message when Lambda function triggered fails to process that message?

I have deployed an AWS Lambda function that triggers when an SQS queue receives a message. The function makes a request to a REST API and if the response is not OK the SQS message needs to be processed again.
That's why I need to resend the message to the queue, but I would prefer to delete the SQS messages programmatically, although I can't find how to configure SQS for that. I have tried message retention, but it seems the trigger event causes the message to be deleted anyway.
Other possible options could be back up the message in S3 or persisting it in DynamoDB but I wonder if there's a better option.
Any insights on this question would be very helpful.
From AWS Lambda Retry Behavior - AWS Lambda:
If you configure an Amazon SQS queue as an event source, AWS Lambda will poll a batch of records in the queue and invoke your Lambda function. If the invocation fails or times out, every message in the batch will be returned to the queue, and each will be available for processing once the Visibility Timeout period expires. (Visibility timeouts are a period of time during which Amazon Simple Queue Service prevents other consumers from receiving and processing the message).
Once an invocation successfully processes a batch, each message in that batch will be removed from the queue. When a message is not successfully processed, it is either discarded or if you have configured an Amazon SQS Dead Letter Queue, the failure information will be directed there for you to analyze.
So, it seems (from reading this) that a simple option would be to set a high visibility timeout on the queue and then raise an error if the function cannot process the message. The message will remain invisible for the configured timeout period, then reappear on the queue for processing. If it exceeds the permitted number of retries, it will be deleted or moved to a Dead Letter Queue (if configured).
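A minimal sketch of that approach (the endpoint is a placeholder, and the requests library is assumed to be packaged with the function):

```python
import requests  # assumed to be bundled in the deployment package

def handler(event, context):
    for record in event["Records"]:
        # Placeholder endpoint; replace with your own REST API call.
        resp = requests.post("https://api.example.com/process",
                             data=record["body"])
        if not resp.ok:
            # Raising makes the whole invocation fail, so Lambda does not
            # delete the batch; the messages become visible again after the
            # visibility timeout and are retried (or eventually go to a DLQ).
            raise RuntimeError(f"API returned {resp.status_code}")
```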
There is a lambda-powertools library created and maintained by AWSLabs, and one of its features is batch processing.
The batch processing utility handles partial failures when processing batches from Amazon SQS, Amazon Kinesis Data Streams, and Amazon DynamoDB Streams.
Check out the documentation here. This is the Python version, but there are versions for other environments.
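A sketch based on the library's documented SQS batch processing utility (Python version; record_handler's body is a placeholder):

```python
from aws_lambda_powertools.utilities.batch import (
    BatchProcessor,
    EventType,
    process_partial_response,
)
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord

processor = BatchProcessor(event_type=EventType.SQS)

def record_handler(record: SQSRecord):
    ...  # placeholder: process record.body; raise to mark this record failed

def handler(event, context):
    # Collects per-record failures and returns the partial batch response,
    # so only the failed messages are retried.
    return process_partial_response(
        event=event,
        record_handler=record_handler,
        processor=processor,
        context=context,
    )
```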
So after some research I found the following:
Frankly, there were workaround options for selectively filtering out the successfully processed messages from a batch before AWS implemented this natively.
Kindly refer to approaches 1-3 demonstrated here.
As for AWS's native implementation, use approach No. 4.

AWS SQS standard queue or FIFO queue when messages must not be duplicated?

We plan to use the AWS SQS service to queue events created from a web service and then use several workers to process those events. One event can only be processed one time. According to the AWS SQS documentation, an SQS standard queue can "occasionally" produce duplicate messages but offers unlimited throughput. An SQS FIFO queue will not produce duplicate messages, but throughput is limited to 300 API calls per second (with batchSize=10, equivalent to 3,000 messages per second). Our current peak traffic is only 80 messages per second, so both are fine in terms of throughput. But when I started to use the SQS FIFO queue, I found that I need to do extra work like providing the extra parameters
"MessageGroupId" and "MessageDeduplicationId", or enabling the "ContentBasedDeduplication" setting. So I am not sure which is the better solution. We just need the messages not to be duplicated. We don't need them to be FIFO.
Solution #1:
Use an AWS SQS FIFO queue. For each message, generate a UUID for the "MessageGroupId" and "MessageDeduplicationId" parameters.
Solution #2:
Use an AWS SQS FIFO queue with "ContentBasedDeduplication" enabled. For each message, generate a UUID for the "MessageGroupId" parameter.
Solution #3:
Use an AWS SQS standard queue with AWS ElastiCache (either Redis or Memcached). For each message, the "MessageId" field will be saved in the cache server and checked for duplication later on. Existence means the message has already been processed. (By the way, how long should the "MessageId" exist in the cache server? The AWS SQS documentation does not mention how far back a message could be duplicated.)
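For reference, a minimal sketch of sending with Solutions #1/#2 (boto3 assumed; the queue URL and payload are placeholders):

```python
import boto3
import uuid

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/events.fifo"  # placeholder

# Solution #1 style: explicit group and deduplication IDs.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"event": "example"}',       # placeholder payload
    MessageGroupId=str(uuid.uuid4()),          # random group: no ordering needed
    MessageDeduplicationId=str(uuid.uuid4()),  # omit if ContentBasedDeduplication is on
)
```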
You are making your systems complicated with SQS.
We have moved to Kinesis Streams and it works flawlessly. Here are the benefits we have seen:
Order of events
Trigger an event when data appears in the stream
Delivery in batches
Responsibility for handling errors lies with the receiver
Ability to go back in time in case of issues
Buggier implementation of the process
Higher performance than SQS
Hope it helps.
My first question would be: why is it even so important that you don't get duplicate messages? An ideal solution would be to use a standard queue and design your workers to be idempotent. For example, if the messages contain something like a task ID and you store the completed task's result in a database, ignore messages whose task ID already exists in the DB.
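A sketch of that idempotent-worker pattern (the DynamoDB table name and key schema here are assumptions):

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def is_first_time(task_id):
    # Conditional put: succeeds only if this task_id has never been seen.
    # Table "processed_tasks" with string key "task_id" is an assumption.
    try:
        dynamodb.put_item(
            TableName="processed_tasks",
            Item={"task_id": {"S": task_id}},
            ConditionExpression="attribute_not_exists(task_id)",
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # duplicate: already processed, skip it
        raise
```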
Don't use receipt handles for application-side deduplication, because they change every time a message is received. In other words, SQS doesn't guarantee the same receipt handle for duplicate messages.
If you insist on deduplication, then you have to use a FIFO queue.