AWS Lambda not processing S3 file fast enough

My requirement is to process files that get created in S3 and stream the contents of each file to an SQS queue, which is consumed by other processes.
When a new file is created in the S3 bucket, a notification is published to an SQS queue, which triggers a Lambda written in Python; the Lambda processes the file and publishes its contents to another SQS queue. A file is at most 100 MB, so it may yield around 300K messages, but processing is very slow. I am not sure where the problem is: I have set the Lambda memory limit to 10 GB and the timeout to 15 minutes, and I have set the concurrency limit to 100.
S3 --> SQS --> Lambda --> SQS
I have set the visibility timeout to 30 minutes for the message; the processing is so slow that the file-creation message ends up in the dead-letter queue.

It takes somewhere between 10 and 50 milliseconds to write a single message to SQS. If you're trying to write 300,000 messages in a single Lambda invocation, that's 3,000 seconds in the best case, well beyond the 15-minute (900-second) maximum Lambda timeout.
Once the Lambda times out, any SQS messages that it was processing go back on the queue and are delivered again once their visibility timeout expires.
You can try multi-threading the code that writes messages to SQS. Since it's mostly network I/O, you should be able to scale roughly linearly up to a dozen or so threads. That may, however, just mean that your downstream message handlers get overloaded.
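One way to combine that with batching: boto3's send_message_batch accepts up to 10 messages per request, which alone cuts 300,000 sends down to 30,000 API calls, and a dozen threads on top of that brings the best case well under the timeout. A minimal sketch, assuming a placeholder output queue URL (the send_batch/send_all helpers are illustrative, not from the original code):

import boto3
from concurrent.futures import ThreadPoolExecutor

sqs = boto3.client("sqs")  # boto3 clients are thread-safe
OUTPUT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/output-queue"  # placeholder

def send_batch(bodies):
    # send_message_batch accepts at most 10 entries per request
    entries = [{"Id": str(i), "MessageBody": b} for i, b in enumerate(bodies)]
    resp = sqs.send_message_batch(QueueUrl=OUTPUT_QUEUE_URL, Entries=entries)
    return resp.get("Failed", [])

def send_all(bodies, workers=12):
    # chunk into batches of 10, then write the batches concurrently
    batches = [bodies[i:i + 10] for i in range(0, len(bodies), 10)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for failed in pool.map(send_batch, batches):
            if failed:
                print(f"{len(failed)} entries failed to send")  # real code should retry these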
Also, reduce your batch size to 1. SQS will then invoke multiple Lambdas in parallel if there are more messages in the queue.

Related

AWS Lambda read from SQS without concurrency

My requirement is as follows:
1. Read from an SQS queue every 2 hours, take all the messages available, and then process them.
2. Processing involves creating a file with the details from the SQS messages and sending it to an SFTP server.
I implemented an AWS Lambda to achieve point 1. I have a Lambda with an SQS trigger. I set the batch size to 50 and the batch window to 2 hours. My assumption was that the Lambda would be triggered every 2 hours, 50 messages would be delivered to the function in one go, and I would create one file for every 50 records.
But I observed that my Lambda function is triggered with a varying number of messages (sometimes 50, sometimes 20, sometimes 5, etc.) even though I configured a batch size of 50.
After reading some documentation I gathered (I am not sure) that Lambda spawns 5 long-polling connections to read from SQS, and that this is what causes the function to be triggered with a varying number of messages.
My questions are:
1. Is my assumption about 5 parallel connections being established correct? If yes, is there a way I can control it? I want this to happen in a single thread / connection.
2. If 1 is not possible, what other alternative do I have? I do not want one file created for every few records; I want one file generated every two hours with all the messages in SQS.
An "SQS trigger" for Lambda is implemented with the so-called Event Source Mapping integration, which polls, batches, and deletes messages from the queue on your behalf. It's designed for continuous polling, although you can disable it. You can set a batch size of up to 10,000 records per function invocation (BatchSize) and a long-polling window of at most 300 seconds (MaximumBatchingWindowInSeconds). That doesn't meet your once-every-two-hours requirement.
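For illustration, here is how those two settings appear in the Event Source Mapping API; a minimal boto3 sketch, with a placeholder queue ARN and function name:

import boto3

lambda_client = boto3.client("lambda")
# placeholders: substitute your own queue ARN and function name
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-function",
    BatchSize=10000,                     # up to 10,000 for standard queues
    MaximumBatchingWindowInSeconds=300,  # the maximum batching window
)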
Two alternatives:
Remove the Event Source Mapping. Instead, trigger the Lambda every two hours on a schedule with an EventBridge rule. Your Lambda is responsible for the SQS ReceiveMessage and DeleteMessageBatch operations. This approach ensures your Lambda will be invoked only once per cron event.
Keep the Event Source Mapping. Process messages as they arrive, accumulating the partial results in S3. Once every two hours, run a second, EventBridge-triggered Lambda, which bundles the partial results from S3 and sends them to the SFTP server. You don't control the number of Lambda invocations.
Note on scaling:
<Edit (mid-Jan 2023): AWS Lambda now supports SQS Maximum Concurrency>
AWS Lambda now supports setting Maximum Concurrency to the Amazon SQS event source, a more direct and less fiddly way to control concurrency than with reserved concurrency. The Maximum Concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. The valid range is 2-1000 concurrent instances.
The create and update Event Source Mapping APIs now have a ScalingConfig option for SQS:
aws lambda update-event-source-mapping \
--uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
--scaling-config '{"MaximumConcurrency":2}' # valid range is 2-1000
</Edit>
With the SQS Event Source Mapping integration you can tweak the batch settings, but ultimately the Lambda service is in charge of Lambda scaling. As the AWS Blog Understanding how AWS Lambda scales with Amazon SQS standard queues says:
Lambda consumes messages in batches, starting at five concurrent batches with five functions at a time. If there are more messages in the queue, Lambda adds up to 60 functions per minute, up to 1,000 functions, to consume those messages.
You could theoretically restrict the number of concurrent Lambda executions with reserved concurrency, but you would risk dropped messages due to throttling errors.
You could try to set the ReservedConcurrency of the function to 1. That may help. See the docs for reference.
A simple solution would be to create a CloudWatch Events trigger (similar to a cron job) that invokes your Lambda function every two hours. In the Lambda function, you call ReceiveMessage on the queue until you get all the messages, process them, and afterwards delete them from the queue. The drawback is that there may be too many messages to process within the 15-minute limit, so that's something you'd have to manage.
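A minimal sketch of that handler, assuming a standard queue with a placeholder URL; the process helper is a hypothetical stand-in for the file-building logic:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def handler(event, context):
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,  # API maximum per request
            WaitTimeSeconds=5,       # short long-poll; an empty reply means the queue is drained
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        process(messages)  # hypothetical: accumulate details into the file / send to SFTP
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                     for m in messages],
        )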

How do AWS Lambda's internal pollers manage SQS API calls?

In the AWS docs it is written:
Lambda reads up to five batches and sends them to your function.
(https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#events-sqs-scaling)
I am a bit confused about the part "reads up to five batches". Does it mean:
1. 5 SQS ReceiveMessage API calls are made in parallel at the same time?
2. 5 SQS ReceiveMessage API calls are made one by one (each one creating a new Lambda environment)?
Lambda polls 5 batches in parallel.
AWS Lambda, in Python for example, uses the queue.receive_messages function to receive messages. This function can receive a batch of messages in a single request from an SQS queue. The default is 10 messages per batch, and the batch size can range up to 10,000 for standard queues. But there is a limit on simultaneous batches: 5 batches sent to the same Lambda.
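For reference, a sketch of that resource-level call (the queue URL is a placeholder). Note that a single ReceiveMessage request returns at most 10 messages, so larger Lambda batches are assembled by the pollers across multiple requests:

import boto3

queue = boto3.resource("sqs").Queue(
    "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder
)
# one ReceiveMessage request returns at most 10 messages
for message in queue.receive_messages(MaxNumberOfMessages=10, WaitTimeSeconds=20):
    print(message.body)
    message.delete()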
If there are still messages in the queue, Lambda adds up to 60 more instances per minute to consume them.
Finally, the event source mapping (Lambda's link to the SQS queue) can handle up to 1,000 batches of messages simultaneously.

Why does Lambda never reach (or come close to) the reserved concurrency? I want SQS to trigger more Lambda functions to process messages concurrently

I've set up a Lambda trigger with an SQS queue. The Lambda's reserved concurrency is set to 1000. However, there are millions of messages waiting in the queue to be processed, and it only invokes around 50 Lambdas at the same time. Ideally, I want SQS to trigger 1000 (or close to 1000) Lambda functions concurrently. Am I missing any configuration in SQS or Lambda? Thank you for any suggestions.
As stated in the AWS Lambda developer guide:
...Lambda increases the number of processes that are reading batches by up to 60 more instances per minute. The maximum number of batches that an event source mapping can process simultaneously is 1,000.
So the behavior you encountered (only around 50 Lambdas invoked at the same time) is actually expected.
If you are not doing so already, I would suggest batch processing in your Lambda, so you can process up to 10 messages per invocation (a sketch follows below). If that is still not enough, you can create more queues and Lambdas to divide your load (assuming order is not relevant in your case), or move away from this design and poll the queue directly with EC2/ECS (which can increase your costs considerably, however).
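A sketch of what batch processing looks like in the handler; process is a hypothetical stand-in for the actual business logic:

import json

def handler(event, context):
    # an SQS-triggered Lambda receives up to BatchSize records per invocation
    for record in event["Records"]:
        payload = json.loads(record["body"])
        process(payload)  # hypothetical business logic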

Lambda SQS Trigger Batch window and Batch size not working as expected

I have an AWS SQS standard queue which is subscribed to a third-party SNS topic. I have a Lambda set up with an SQS trigger, with batch size 10,000 and batch window 300. My queue receives approx. 150 messages at a time, but the Lambda gets triggered with batches of only 20-30 messages, even though I configured a batch size of 10,000. I don't understand why this is happening: the queue has enough messages and enough time (300-second batch window) to fill the batch, but it's not doing so.
I googled the issue and found that the maximum payload size for Lambda is 6 MB. I checked my messages and they are approx. 2.5 KB each, so 30 × 2.5 = 75 KB, nowhere near the 6 MB limit.
Additionally, I suspected Lambda concurrency, so I set it to 1, meaning no parallel Lambda instances.
Can somebody help me understand where the problem is, please?
Lambda uses five parallel long-polling connections to check your queue. So if you have 150 messages, each connection gets about 30 of them, which exactly explains what you see.
Sadly, you can't change the number of these connections. There are always five.

AWS SQS Lambda Processing n files at once

I have set up an SQS queue to which S3 paths are pushed whenever there is a file upload.
I have also set up a Lambda with an SQS trigger and a batch size of 1.
In my scenario, I have to process n files at a time, let's say n = 10.
Say there are 100 messages in the queue. In my current implementation I do the following:
1. Whenever there is a message in the input queue, the Lambda is triggered.
2. First I check the number of active concurrent executions. If I am already running 10 executions, the code simply returns without doing anything. If fewer than 10, it reads one message from the queue and starts processing it.
3. Once the processing is done, the message is manually deleted from the queue.
With the above approach, I'm able to process n files at a time. However, say 100 files land in S3 at the same time.
That leads to 100 Lambda invocations. Because of the condition check in the Lambda, the first 10 messages go for processing and the remaining 90 messages become in flight.
Now, when some of my processing is done (say 3 of the 10 have finished), the main queue is still empty, since the remaining messages are still in flight.
As I understand it, if processing a file takes x minutes, the visibility timeout of the messages in the queue should be less than x, so that the messages become available in the queue again.
But that leads to another problem: say a batch takes longer to complete; the message comes back to the queue, the Lambda is triggered, and the message goes in flight once again.
Is there any way I can control the number of Lambda invocations? For example, only the first 10 messages should be processed, while the remaining 90 stay visible in the queue. Or is there any other way I can simplify this design?
I don't want to wait until there are 10 messages; even if there are only 5 messages, they should be processed. And I don't want to invoke the Lambda on a schedule (e.g., every 5 minutes).
There is a setting in Lambda called reserved concurrency. I'm going to quote from the docs (emphasis mine):
Reserved concurrency – Reserved concurrency creates a pool of requests that can only be used by its function, and also prevents its function from using unreserved concurrency.
[...]
To ensure that a function can always reach a certain level of concurrency, configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency. Reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole, including versions and aliases.
For a deeper dive, check out this article from the documentation.
You can use this to limit how many Lambdas can be triggered in parallel - if no Lambda execution contexts are available, SQS invocations will wait.
This is only necessary if you want to limit how many files can be processed in parallel. If there is no actual need to limit this, it won't cost you more to let Lambda scale out for you.
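For illustration, reserved concurrency is a single API call; a minimal boto3 sketch, where the function name and the limit of 10 are placeholders matching the scenario above:

import boto3

boto3.client("lambda").put_function_concurrency(
    FunctionName="my-file-processor",  # placeholder
    ReservedConcurrentExecutions=10,   # caps the function at 10 concurrent executions
)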
You don't have to limit your concurrent Lambda executions; AWS already handles that for you. Here are the per-Region limits from this document: https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
Burst concurrency quotas
3000 – US West (Oregon), US East (N. Virginia), Europe (Ireland)
1000 – Asia Pacific (Tokyo), Europe (Frankfurt), US East (Ohio)
500 – Other Regions
In this document: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
Scaling and processing
For standard queues, Lambda uses long polling to poll a queue until it becomes active. When messages are available, Lambda reads up to 5 batches and sends them to your function. If messages are still available, Lambda increases the number of processes that are reading batches by up to 60 more instances per minute. The maximum number of batches that can be processed simultaneously by an event source mapping is 1000.
For FIFO queues, Lambda sends messages to your function in the order that it receives them. When you send a message to a FIFO queue, you specify a message group ID. Amazon SQS ensures that messages in the same group are delivered to Lambda in order. Lambda sorts the messages into groups and sends only one batch at a time for a group. If the function returns an error, all retries are attempted on the affected messages before Lambda receives additional messages from the same group.
Your function can scale in concurrency to the number of active message groups. For more information, see SQS FIFO as an event source on the AWS Compute Blog.
You can see that Lambda handles the scaling up automatically. There is no need to artificially limit the number of Lambdas running to 10.
The idea of Lambda is to run as many tasks as possible in parallel, so that you achieve the shortest overall execution time.