AWS SQS Batch size and messages - amazon-web-services

I am working with AWS SQS and Lambda. If the batch size is 5 and only 3 messages are left in the queue, will the Lambda be triggered with a batch of 3 messages, or will SQS wait for the message count to reach 5?

From the docs:
Batch size – The number of items to read from the queue in each batch, up to 10. The event might contain fewer items if the batch that Lambda read from the queue had fewer items.
Based on this, you should get 3 messages; Lambda will not wait for 5.
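As an illustration, here is a minimal handler sketch (the processing logic is hypothetical) that simply iterates over whatever records arrive; with a batch size of 5 and only 3 messages available, event["Records"] would contain 3 entries:

    # Minimal Lambda handler sketch: processes however many records arrive in the batch.
    # The configured batch size is only an upper bound; len(event["Records"]) can be smaller.
    def lambda_handler(event, context):
        records = event.get("Records", [])
        print(f"Received {len(records)} message(s) in this batch")

        for record in records:
            body = record["body"]               # raw SQS message body
            message_id = record["messageId"]
            print(f"Processing message {message_id}: {body}")

        # Returning normally lets Lambda delete the whole batch from the queue.
        return {"processed": len(records)}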

Related

How do AWS Lambda's internal pollers manage SQS API calls?

In the AWS docs it is written:
Lambda reads up to five batches and sends them to your function.
(https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-scaling)
I am a bit confused about the part "reads up to five batches". Does it mean:
5 SQS ReceiveMessage API calls are made in parallel at the same time?
5 SQS ReceiveMessage API calls are made one by one (each one creating a new Lambda environment)?
Lambda polls 5 batches in parallel?
AWS Lambda (in Python, for example, via the queue.receive_messages function) receives messages in batches: a single request can fetch several messages from an SQS queue.
The default batch size is 10 messages and can be set up to 10,000 for standard queues. But there is a limit on simultaneous batches: Lambda initially reads up to 5 batches and sends them to your function.
If there are still messages in the queue, Lambda adds up to 60 more instances per minute to consume them.
Finally, the event source mapping (Lambda's link to the SQS queue) can handle up to 1,000 batches of messages simultaneously.
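To make the polling model concrete, here is a rough boto3 sketch of what a single ReceiveMessage call looks like (the queue name is a placeholder); behind an event source mapping, Lambda's own pollers make calls like this for you:

    import boto3

    # Placeholder queue name; with an event source mapping, Lambda does this polling for you.
    sqs = boto3.resource("sqs")
    queue = sqs.get_queue_by_name(QueueName="my-example-queue")

    # One ReceiveMessage call returns up to 10 messages (one "batch" from SQS's point of view).
    messages = queue.receive_messages(
        MaxNumberOfMessages=10,   # upper bound per request; fewer may be returned
        WaitTimeSeconds=20,       # long polling
    )

    for message in messages:
        print(message.body)
        message.delete()          # with an event source mapping, Lambda deletes messages for you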

Is it possible to calculate the total time taken by each message group to process all messages in a FIFO queue?

I have created a FIFO queue into which 10,000 messages are pushed, belonging to 100 message groups and consumed by Lambda.
I want to see metrics for the following questions:
What is the total time taken to process all messages in each message group in the FIFO queue?
What is the total time taken to process all messages in the FIFO queue?
The Lambda Invocations and Concurrent executions metrics are already available.
SQS is just a data store for messages. How long a message takes to process or execute is not SQS's concern, so you can't get what you're looking for from the default SQS metrics. You have to write custom logic in your message processor service (Lambda here) to publish the values to CloudWatch and create a metric from there.
The default SQS metrics are listed here: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-available-cloudwatch-metrics.html
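For example, a minimal sketch of that custom logic (the namespace and metric name are made up) could record per message group how long each message took from send to completion, using the SentTimestamp and MessageGroupId attributes that SQS attaches to FIFO records:

    import time
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def lambda_handler(event, context):
        for record in event["Records"]:
            attrs = record["attributes"]
            group_id = attrs.get("MessageGroupId", "unknown")   # present for FIFO queues
            sent_ms = int(attrs["SentTimestamp"])                # epoch millis when the message was sent

            # ... process the message body here ...

            elapsed_ms = time.time() * 1000 - sent_ms            # send-to-completion latency

            # Publish a custom metric; "MyApp/Queue" and "GroupProcessingTime" are hypothetical names.
            cloudwatch.put_metric_data(
                Namespace="MyApp/Queue",
                MetricData=[{
                    "MetricName": "GroupProcessingTime",
                    "Dimensions": [{"Name": "MessageGroupId", "Value": group_id}],
                    "Value": elapsed_ms,
                    "Unit": "Milliseconds",
                }],
            )

You can then aggregate this metric per message group (and across all groups) in CloudWatch to answer both questions.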

Trigger AWS Lambda once SQS FIFO queue is not empty

I have an SQS FIFO queue, and I want to know if there is a way to trigger an AWS Lambda once the queue becomes non-empty.
For example, if my queue is empty and a new message arrives, trigger the Lambda; but if the queue already contains at least one message and a new message arrives, no Lambda should be triggered.
Is this possible?
There is an Amazon CloudWatch metric called ApproximateNumberOfMessagesVisible that shows the number of messages in the queue. The documentation says that "For FIFO queues, the result is exact."
You could create a CloudWatch Alarm that triggers when the number of messages drops to zero for a period of time. The alarm can send a message to an Amazon SNS topic. If you subscribe your AWS Lambda function to this topic, it will be triggered when the queue has been empty for the specified duration (e.g. over a period of 5 minutes). It will only be triggered when the alarm enters the 'Alarm' state, and it will not trigger again until the alarm exits that state and enters it again.
Important: When configuring the alarm, go to Additional configuration and set Missing data treatment to "Treat missing data as bad (breaching threshold)". This is required because SQS will not send metrics if the queue is empty. (Many queues are empty, so this saves a lot of metric storage!)
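As a rough boto3 sketch of that alarm, mirroring the answer's logic (queue name, alarm name, and SNS topic ARN are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # All names and ARNs below are placeholders for illustration.
    cloudwatch.put_metric_alarm(
        AlarmName="my-queue-empty",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "my-queue.fifo"}],
        Statistic="Maximum",
        Period=300,                          # evaluate over 5-minute periods
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="LessThanOrEqualToThreshold",
        TreatMissingData="breaching",        # empty queues may publish no data points
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-topic"],  # SNS topic with the Lambda subscribed
    )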
Unusual pattern.
You could perhaps set the Lambda function's concurrency to 1, meaning that only one invocation can happen at a time, and have your Lambda function kick off your workflow and then remove the SQS event trigger that caused the Lambda to be invoked in the first place. That should prevent further invocations. Add the SQS event trigger back when you're done, to get ready for the next batch of messages.
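A minimal sketch of the "remove the trigger and add it back" step, assuming you look up the event source mapping's UUID beforehand (the UUID below is a placeholder; list_event_source_mappings returns the real one):

    import boto3

    lambda_client = boto3.client("lambda")

    # Placeholder UUID; in practice, find it via list_event_source_mappings(FunctionName=...).
    MAPPING_UUID = "11111111-2222-3333-4444-555555555555"

    def pause_trigger():
        # Disabling the event source mapping stops Lambda from polling the queue.
        lambda_client.update_event_source_mapping(UUID=MAPPING_UUID, Enabled=False)

    def resume_trigger():
        # Re-enable polling once the workflow has finished.
        lambda_client.update_event_source_mapping(UUID=MAPPING_UUID, Enabled=True)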
You can set a concurrent execution limit of 1 to make sure only one Lambda instance reads from the queue, but I'm not sure this is something you want to do: Lambda can read at most 10 messages per execution, and if your queue receives too many incoming messages your consumption process may take too long.

Draining an SQS queue using Lambda after a certain number of days

I am putting entries into an SQS queue that is set as an event source for a Lambda, and this flow is working fine: as soon as an entry arrives in the queue, the Lambda processes it. So far so good.
But I have a situation where I want the entries to stay in SQS for 3-4 days and then have a Lambda process them.
So basically, if I see that I have 100 entries in my SQS queue and it's been 4 days, I want the Lambda to drain them and run some logic. Is this possible? Kindly guide me.
I think disabling the Lambda is not the way to fulfil the requirement, as you will miss other messages too.
SQS is a messaging service, and when it is integrated with Lambda you can only configure retries and process the messages; keeping a message in SQS is not under your control, because Lambda handles that by design:
Lambda polls the queue and invokes your function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.
One solution that can work for your query:
But I have a situation where I want the entries to stay in SQS for 3-4 days and then have a Lambda process them.
Decide which SQS messages should not be processed immediately, push those messages to DynamoDB, and then process them after 4 or 5 days based on a DynamoDB TTL added during insertion. You can follow the steps below (a handler sketch follows the list):
Add a property such as is_dynamodb to the SQS message to identify messages that should not be processed immediately
Push such messages to DynamoDB
Add a TTL attribute during insertion
In the Lambda function that consumes the DynamoDB stream, check whether the event is a removal, not an insertion
Process messages only if the event is REMOVE
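A minimal sketch of the last two steps, assuming the Lambda is subscribed to the table's stream (the attribute name is hypothetical); TTL expirations arrive as REMOVE records, and the userIdentity principal distinguishes a TTL expiry from a manual delete:

    def lambda_handler(event, context):
        for record in event["Records"]:
            # Only act on removals, not inserts or updates.
            if record["eventName"] != "REMOVE":
                continue

            # TTL expirations are performed by the DynamoDB service principal.
            principal = record.get("userIdentity", {}).get("principalId")
            if principal != "dynamodb.amazonaws.com":
                continue

            # OldImage holds the item as it was before the TTL delete
            # (requires a stream view type that includes old images).
            old_item = record["dynamodb"].get("OldImage", {})
            body = old_item.get("message_body", {}).get("S")   # hypothetical attribute name
            print(f"Processing delayed message after TTL expiry: {body}")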

AWS Lambda missing some SQS events, leading to messages stuck in flight

My Lambda configuration is as below:
Lambda concurrency is set to 50
SQS trigger batch size is set to 1
Issue:
When my queue is flooded with 200+ messages, some of the SQS triggers are missed and messages from the queue go to the in-flight state without even triggering the Lambda. This adds processing latency equal to the timeout value set for the Lambda, as I need to wait for the message to come out of flight before it can be reprocessed.
Any inputs will be highly appreciated.
SQS is integrated with Lambda through event source mappings.
Thanks to the mapping, the Lambda service long-polls the SQS queue and invokes your function on your behalf. What's more, it automatically removes messages from the queue if your Lambda successfully processes them.
Since you want to process 200+ messages and you have set concurrency to 50 with a batch size of 1, you can process only 50 messages in parallel. The rest will be throttled. When this happens:
If your function is throttled, returns an error, or doesn't respond, the message becomes visible again. All messages in a failed batch return to the queue, so your function code must be able to process the same message multiple times without side effects.
To rectify the issue, the following two immediate actions can be considered (a sketch of both changes follows the list):
Increase the concurrency of your function to 200 or more.
Increase the batch size to 10. With a batch size of 10 and concurrency of 50, you can process 500 (10 x 50) messages concurrently.
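As a rough boto3 sketch of those two changes (the function name and mapping UUID are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    # Placeholders for illustration.
    FUNCTION_NAME = "my-consumer-function"
    MAPPING_UUID = "11111111-2222-3333-4444-555555555555"   # from list_event_source_mappings

    # Raise the reserved concurrency so more batches can be processed in parallel.
    lambda_client.put_function_concurrency(
        FunctionName=FUNCTION_NAME,
        ReservedConcurrentExecutions=200,
    )

    # Raise the batch size on the SQS event source mapping.
    lambda_client.update_event_source_mapping(
        UUID=MAPPING_UUID,
        BatchSize=10,
    )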
Also, since you are heavily throttled, setting up a dead-letter queue (DLQ) can be useful. The DLQ captures problematic or missed messages from the queue so that you can inspect or process them later:
If a message fails to be processed multiple times, Amazon SQS can send it to a dead-letter queue. When your function returns an error, Lambda leaves it in the queue. After the visibility timeout occurs, Lambda receives the message again. To send messages to a second queue after a number of receives, configure a dead-letter queue on your source queue.
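A minimal sketch of attaching a DLQ to the source queue via a redrive policy (the queue URL, DLQ ARN, and maxReceiveCount of 5 are placeholders):

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Placeholders for illustration.
    SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue"
    DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:my-dead-letter-queue"

    # After 5 unsuccessful receives, SQS moves the message to the dead-letter queue.
    sqs.set_queue_attributes(
        QueueUrl=SOURCE_QUEUE_URL,
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": DLQ_ARN,
                "maxReceiveCount": "5",
            })
        },
    )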