I have a Lambda with an SQS trigger, configured with:
Batch size: 10
Batch window: 60 seconds
Lambda timeout: 5 minutes
Visibility timeout: 6 minutes
Lambda retries: 0
After sending 30 messages to the queue, all of them become in flight; the Lambda processes 15 to 16 of them, and once the visibility timeout expires, the remaining messages go to the DLQ.
I also tried batch sizes of 100 and 1; processing stops unexpectedly after some messages, and the rest are dropped to the DLQ.
I am not able to understand why it stops processing after a certain time.
Any help would be highly appreciated.
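To see the whole setup in one place, the configuration described above could be expressed with boto3 roughly as follows (the queue URL/ARN and function name are hypothetical placeholders):

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Hypothetical identifiers, for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:my-queue"
FUNCTION_NAME = "my-function"

# Visibility timeout: 6 minutes (360 seconds).
sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"VisibilityTimeout": "360"})

# Lambda timeout: 5 minutes (300 seconds).
lambda_client.update_function_configuration(FunctionName=FUNCTION_NAME, Timeout=300)

# SQS trigger: batch size 10, batch window 60 seconds.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=FUNCTION_NAME,
    BatchSize=10,
    MaximumBatchingWindowInSeconds=60,
)
```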
I have a FIFO queue in AWS SQS which triggers a Lambda function.
I want to process each message in the Lambda function without parallel execution (one message at a time).
For example, if I have messages A, B, and C in the queue, my Lambda should complete A, then start B, and so on.
My current FIFO queue configuration is:
Message retention period: 4 days
Default visibility timeout: 1 hour 30 minutes
Delivery delay: 0 seconds
Receive message wait time: 0 seconds
Set up another Lambda and add a trigger that invokes it every minute using an Amazon EventBridge rule.
In that Lambda function, use the ReceiveMessage API call (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) to fetch the desired number of messages from the SQS queue (at most 10 per call) and delete each message after it has been processed.
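A minimal sketch of that polling Lambda in Python with boto3, assuming a hypothetical queue URL and a placeholder process_message function:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"  # hypothetical

def process_message(body):
    # Placeholder for the real business logic.
    print(body)

def handler(event, context):
    # Fetch up to 10 messages in a single ReceiveMessage call.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=0,
    )
    for msg in resp.get("Messages", []):
        process_message(msg["Body"])
        # Delete only after successful processing, so a failure leaves
        # the message on the queue for the next scheduled run.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Because the loop is strictly sequential, messages are handled one at a time even when up to 10 arrive in a single call.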
I have 10 message groups in a FIFO queue with 2 messages per group, and reserved Lambda concurrency set to 5. (The Lambda completes execution in 1 minute, and the SQS visibility timeout is set to 2 minutes.)
When all 20 messages are pushed to the queue, the SQS in-flight count goes to 10; then, after the execution time, 5 messages are processed successfully and the other 5 move to the DLQ.
On the next run, the in-flight count goes to 5 (matching the reserved Lambda concurrency of 5) and everything processes as expected. (This should be the expected behaviour, right?)
Any particular reason why this is happening?
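For context, the test load described above could be generated along these lines (the queue URL is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"  # hypothetical

# 10 message groups with 2 messages per group: 20 messages total.
for group in range(10):
    for seq in range(2):
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=f"group-{group}-message-{seq}",
            MessageGroupId=f"group-{group}",
            # Required unless content-based deduplication is enabled on the queue.
            MessageDeduplicationId=f"group-{group}-message-{seq}",
        )
```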
My requirement is to process files that get created in S3 and stream the content of each file to an SQS queue, which is consumed by other processes.
When a new file is created in the S3 bucket, a notification is published to an SQS queue, which triggers a Lambda. The Lambda, written in Python, processes the file and publishes its content to another SQS queue. A file is at most 100 MB, so it might yield 300K messages, but processing is very slow. I am not sure where the problem is: I have set the Lambda memory limit to 10 GB and the timeout to 15 minutes, and I have set the concurrency limit to 100.
S3 ---> SQS ---> Lambda ---> SQS
I have set the visibility timeout for the message to 30 minutes; the processing is so slow that the file-creation message ends up in the dead-letter queue.
It will take somewhere between 10 and 50 milliseconds to write a single message to SQS. If you have 300,000 messages that you're trying to write in a single Lambda invocation, that's 3,000 seconds in the best case, which is well over the maximum Lambda timeout of 900 seconds (15 minutes).
Once the Lambda times out, any SQS messages that it was processing will go back on the queue and will be delivered again once their visibility timeout expires.
You can try multi-threading the code that writes messages to SQS. Since it's mostly network IO, you should be able to scale linearly up to a dozen or so threads. That may, however, just mean that your downstream message handlers get overloaded.
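A minimal sketch of such a threaded writer, assuming a hypothetical OUTPUT_QUEUE_URL and an in-memory list of message bodies; it also uses SendMessageBatch (up to 10 messages per call) to cut the number of API round trips:

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

# boto3 low-level clients are safe to share across threads.
sqs = boto3.client("sqs")
OUTPUT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/output-queue"  # hypothetical

def send_batch(bodies):
    # SendMessageBatch accepts at most 10 entries per call.
    entries = [{"Id": str(i), "MessageBody": body} for i, body in enumerate(bodies)]
    resp = sqs.send_message_batch(QueueUrl=OUTPUT_QUEUE_URL, Entries=entries)
    # Surface partial failures instead of silently dropping messages.
    if resp.get("Failed"):
        raise RuntimeError(f"{len(resp['Failed'])} messages failed to send")

def send_all(bodies, workers=12):
    # Chunk into batches of 10 and send them from a thread pool.
    batches = [bodies[i:i + 10] for i in range(0, len(bodies), 10)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for future in [pool.submit(send_batch, batch) for batch in batches]:
            future.result()  # re-raise any worker exception
```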
Also, reduce your batch size to 1; the Lambda service will run multiple concurrent invocations if there are more messages in the queue.
I have an AWS SQS standard queue that is subscribed to a third-party SNS topic. I have a Lambda with an SQS trigger configured with a batch size of 10000 and a batch window of 300 seconds. My queue receives approximately 150 messages at a time, but the Lambda gets triggered with batches of only 20-30 messages, even though I configured a batch size of 10000. I don't understand why this is happening: even though the queue has enough messages and enough time (a 300-second batch window) to fill the batch, it isn't doing so.
I googled the issue and found that the maximum payload size for Lambda is 6 MB. I checked my messages: they are approximately 2.5 KB each, so 30 × 2.5 = 75 KB, nowhere near the 6 MB limit.
Additionally, I suspected Lambda concurrency, so I set it to 1 so that there are no parallel Lambda instances.
Can somebody help me understand where the problem is, please?
Lambda uses five parallel long-polling connections to check your queue. So if you have 150 messages, each connection gets about 30 of them, which exactly explains what you see.
Sadly, you can't change the number of these connections. There are always five.
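What you can do is verify the batching settings the trigger actually has; a quick boto3 check (the function name is hypothetical):

```python
import boto3

client = boto3.client("lambda")

# Print the batching configuration of every event source mapping on the function.
mappings = client.list_event_source_mappings(FunctionName="my-function")
for m in mappings["EventSourceMappings"]:
    print(m["UUID"], m.get("BatchSize"), m.get("MaximumBatchingWindowInSeconds"))
```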
In the AWS SQS console, I have created a standard SQS queue configured as follows:
Message retention period: 4 hours
Default visibility timeout: 1 hour
Receive message wait time: 0 seconds
Delivery delay: 0 seconds
Poll settings are as follows:
Polling Duration: 60 seconds
Maximum message count: 500
But what if 1,500 messages are sent to the queue?
There's a Lambda that processes the messages every half hour and deletes the (read) SQS messages.
Will the other 1,000 messages get lost, or will they become available as other messages in SQS are deleted?
From the docs:
A single Amazon SQS message queue can contain an unlimited number of messages.
So they will not be dropped from your SQS queue. Instead, they will be sent to your Lambda in subsequent batches.
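A minimal sketch of such a half-hourly consumer, assuming a hypothetical queue URL and a placeholder process function; since ReceiveMessage returns at most 10 messages per call, it loops until the queue is drained:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

def process(body):
    # Placeholder for the real processing logic.
    print(body)

def handler(event, context):
    while True:
        # At most 10 messages per ReceiveMessage call, regardless of queue depth.
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained (or remaining messages are still in flight)
        for msg in messages:
            process(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```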