AWS SQS triggered lambda suddenly stalling and not deleting messages

I have a Python Lambda function that is connected to an SQS queue trigger with a batch size of 1. The SQS messages contain a file location on S3, along with a few metadata values.
When a message becomes available, the function reads some metadata from the file on S3 referenced in the message, creates a YAML file with more metadata, dumps that file to S3, and records a reference to it in an RDS database.
After I submit a load of messages to the queue (~1.7k), all seems to go well initially, with the number of messages available dropping and the lambda executions ramping up.
But after some time, the execution time increases significantly to the point where the functions time out (the timeout is set at 90 secs). I don't see any errors in the logs, and the executions are still successful (if they don't time out).
All of this can be seen in the monitoring:
Here in the lambda monitoring, you can see the sudden increase in duration, coinciding with a drop in concurrent executions and the sudden appearance of errors (at worst there are two errors and a 60% success rate). The gap you see is where I disabled and re-enabled the trigger, hoping for a change.
Here's the SQS monitoring for the same period:
You can see the number of messages visible leveling out at 192, and the number of messages received at 5. More puzzling to me: even though there are successful executions, the number of messages deleted drops to 0.
I really can't figure out why this issue is appearing now; I've been using this configuration without issues or changes.
Could it be that the SQS trigger configuration blocks the queue when there's a timeout reading from S3? Any clues?
Thanks!
Edit:
The RDS cluster metrics:

If the lambda successfully processes messages but the SQS queue does not delete any messages, that most likely indicates a mismatch between the queue's visibility timeout and the lambda timeout. You should make sure that the lambda service that picks up the message has enough time to finish processing it and to tell SQS to delete it. If the lambda takes 70 seconds but the queue only has a visibility timeout of 60s, the DeleteMessage request by the lambda service will be silently rejected, the message will remain in the queue, and it will be re-processed again at a later time, potentially with the exact same outcome.
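A minimal sketch of that fix with boto3, assuming a hypothetical queue name, raising the visibility timeout so it comfortably exceeds the 90s function timeout:

import boto3

sqs = boto3.client("sqs")

LAMBDA_TIMEOUT = 90  # seconds, matching the function's timeout
queue_url = sqs.get_queue_url(QueueName="my-ingest-queue")["QueueUrl"]  # hypothetical name

# Keep the message invisible long enough for processing plus the
# DeleteMessage call (a multiple of the lambda timeout, see the note below).
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": str(6 * LAMBDA_TIMEOUT)},
)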
First note: If you have a concurrency limit set for the lambda, the visibility timeout for the queue should not merely be equal to the lambda timeout but a multiple of it, 5 or 6 times the lambda timeout. The reason is that the lambda service may pick up the message and try to invoke a lambda, but the invocation gets throttled; the lambda service then waits (one lambda timeout) before retrying the message. During all of that, the lambda service does not return the message to the queue; it keeps it in memory and does not extend the visibility timeout or anything like that. It retries a couple of times (5 or 6) before the message is actually discarded / returned to SQS. You should be able to try this out by creating a lambda with a timeout of e.g. 10 seconds that simply sleeps for 9 seconds, giving it a concurrency limit of 1, and then putting 1000 messages into the queue.
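A sketch of a handler for that experiment (the deployment settings in the comment follow the description above; this is not code from the question):

import time

def handler(event, context):
    # Sleep for 9 of the function's 10 seconds to simulate slow work.
    time.sleep(9)
    return True

# Deploy with: timeout = 10s, reserved concurrency = 1, an SQS trigger with
# batch size 1, then enqueue ~1000 messages and watch the in-flight count.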
Second note: these kinds of sudden bulk operations can cause all sorts of throttling issues that don't occur normally, either in your own downstream services or even in AWS services. E.g., if your lambda performs an assume-role call or retrieves some config object from S3, firing 500 requests the instant the messages hit the queue will often get you into trouble. The underlying database may become slow / unresponsive while buffering all the incoming requests, etc.
An easy solution to that problem is to throttle the lambda by setting its concurrency limit. At that point, make sure the queue has a proper visibility timeout as detailed in the previous section. And to be alerted of an actual increase in requests, watch the ApproximateAgeOfOldestMessage metric of the queue so you know when a backlog is building up.
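A sketch of such an alarm with boto3 (the alarm name, queue name, and SNS topic ARN are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when the oldest message has been waiting more than 10 minutes,
# evaluated over two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-age",  # hypothetical
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "my-ingest-queue"}],  # hypothetical
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=600,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # hypothetical
)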
Third note: if the lambda only misbehaves when a lot of requests are coming in, one potential reason is a memory leak in the lambda. Since the execution contexts of a lambda are reused between invocations, a memory leak survives across invocations as well. If few requests are coming in, you may always get a fresh execution context, meaning the lambda starts with fresh memory each time; but if a lot of requests are coming in, execution contexts are certainly getting reused, which might let the leak grow so big that the lambda basically freezes up as garbage collection kicks in. The same goes for the /tmp directory in the lambda.
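A contrived Python illustration of such a leak surviving warm invocations:

# Module-level state is reused whenever the execution context is reused.
_cache = []

def handler(event, context):
    # BUG: unbounded growth; under heavy traffic the warm context keeps
    # accumulating until memory pressure and GC stall the function.
    _cache.append(event)
    return {"cached_items": len(_cache)}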

Related

AWS Lambda read from SQS without concurrency

My requirement is like this:
1. Read from an SQS queue every 2 hours, take all the messages available, and then process them.
2. Processing includes creating a file with details from the SQS messages and sending it to an SFTP server.
I implemented an AWS Lambda to achieve point 1. I have a Lambda with an SQS trigger, with the batch size set to 50 and the batch window set to 2 hours. My assumption was that the Lambda would be triggered every 2 hours, that 50 messages would be delivered to the function in one go, and that I would create a file for every 50 records.
But I observed that my lambda function is triggered with a varying number of messages (sometimes 50, sometimes 20, sometimes 5, etc.) even though I have configured the batch size as 50.
After reading some documentation I got the impression (I am not sure) that Lambda spawns 5 long-polling connections to read from SQS, and that this is what causes the function to be triggered with a varying number of messages.
My questions are:
1. Is my assumption about 5 parallel connections being established correct? If yes, is there a way I can control it? I want this to happen in a single thread / connection.
2. If 1 is not possible, what other alternative do I have? I do not want one file created for every few records. I want one file to be generated every two hours with all the messages in SQS.
An "SQS trigger" for Lambda is implemented with the so-called Event Source Mapping integration, which polls, batches, and deletes messages from the queue on your behalf. It's designed for continuous polling, although you can disable it. You can set a maximum batch size of up to 10,000 records per function invocation (BatchSize) and a maximum batching window of 300s (MaximumBatchingWindowInSeconds). That doesn't meet your once-every-two-hours requirement.
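For reference, a sketch of setting those two options with boto3 (the queue ARN and function name are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Deliver up to 50 records per invocation, waiting at most the 300s maximum
# batching window to fill a batch.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:eu-west-1:123456789012:my-queue",  # hypothetical
    FunctionName="my-function",  # hypothetical
    BatchSize=50,
    MaximumBatchingWindowInSeconds=300,
)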
Two alternatives:
Remove the Event Source Mapping. Instead, trigger the Lambda every two hours on a schedule with an EventBridge rule. Your Lambda is then responsible for the SQS ReceiveMessage and DeleteMessageBatch operations (a sketch follows below). This approach ensures your Lambda will be invoked only once per cron event.
Keep the Event Source Mapping. Process messages as they arrive, accumulating the partial results in S3. Once every two hours, run a second, EventBridge-triggered Lambda, which bundles the partial results from S3 and sends them to the SFTP server. You don't control the number of Lambda invocations.
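A minimal sketch of the first alternative, assuming a hypothetical queue URL and a hypothetical build_and_upload_file helper for the SFTP step:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue"  # hypothetical

def handler(event, context):
    # Invoked every two hours by an EventBridge rule; drains the queue in one run.
    bodies = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,  # SQS hard maximum per request
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue is drained
        bodies.extend(m["Body"] for m in messages)
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                for m in messages
            ],
        )
    # build_and_upload_file(bodies)  # hypothetical: one file per run, sent to SFTP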
Note on scaling:
<Edit (mid-Jan 2023): AWS Lambda now supports SQS Maximum Concurrency>
AWS Lambda now supports setting Maximum Concurrency on the Amazon SQS event source, a more direct and less fiddly way to control concurrency than reserved concurrency. The Maximum Concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. The valid range is 2-1000 concurrent instances.
The create and update Event Source Mapping APIs now have a ScalingConfig option for SQS:
aws lambda update-event-source-mapping \
--uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
--scaling-config '{"MaximumConcurrency":2}' # valid range is 2-1000
</Edit>
With the SQS Event Source Mapping integration you can tweak the batch settings, but ultimately the Lambda service is in charge of Lambda scaling. As the AWS Blog Understanding how AWS Lambda scales with Amazon SQS standard queues says:
Lambda consumes messages in batches, starting at five concurrent batches with five functions at a time. If there are more messages in the queue, Lambda adds up to 60 functions per minute, up to 1,000 functions, to consume those messages.
You could theoretically restrict the number of concurrent Lambda executions with reserved concurrency, but you would risk dropped messages due to throttling errors.
You could try to set the ReservedConcurrency of the function to 1. That may help. See the docs for reference.
A simple solution would be to create a CloudWatch Events trigger (similar to a cron job) that triggers your Lambda function every two hours. In the Lambda function, you call ReceiveMessage on the queue until you get all messages, process them, and afterwards delete them from the queue. The drawback is that there may be too many messages to process within the 15-minute Lambda limit, so that's something you'd have to manage.

AWS SQS Lambda Processing n files at once

I have setup an SQS queue where S3 paths are being pushed whenever there is a file upload.
I have also set up a Lambda with an SQS trigger and a batch size of 1.
In my scenario, I have to process n files at a time. Let's say n = 10.
Say there are 100 messages in the queue. In my current implementation I'm doing the following steps:
1. Whenever there is a message in the input queue, the Lambda is triggered.
2. First I check the number of active concurrent executions. If I am already running 10 executions, the code simply returns without doing anything. If it is less than 10, it reads one message from the queue and calls for processing.
3. Once the processing is done, the message is manually deleted from the queue.
With the above-mentioned approach, I'm able to process n files at a time. However, say 100 files land in S3 at the same time.
That leads to 100 lambda invocations. Since we have a condition check in the Lambda, the first 10 messages go for processing and the remaining 90 messages become in-flight.
Now, when some of my processing is done (say 3 of 10 have finished), the main queue is still empty since the remaining messages are in flight.
As per my understanding, if processing a file takes x minutes, the visibility timeout of the messages in the queue should be less than x, so that the message becomes available in the queue once again.
But that leads to another problem: say the batch takes some more time to complete; the message comes back to the queue, the Lambda is triggered, and once again the message goes into in-flight mode.
Is there any way I can control the number of Lambda triggers? For example: only the first 10 messages should be processed, while the remaining 90 messages remain visible in the queue. Or is there any other way I can make this design simpler?
I don't want to wait until there are 10 messages; even if there are only 5 messages, those files should be processed. And I don't want to invoke the Lambda on a schedule (e.g. calling it every 5 minutes).
There is a setting in Lambda called Reserved Concurrency; I'm going to quote from the docs (emphasis mine):
Reserved concurrency – Reserved concurrency creates a pool of requests that can only be used by its function, and also prevents its function from using unreserved concurrency.
[...]
To ensure that a function can always reach a certain level of concurrency, configure the function with reserved concurrency. When a function has reserved concurrency, no other function can use that concurrency. Reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole, including versions and aliases.
For a deeper dive, check out this article from the documentation.
You can use this to limit how many Lambdas can be triggered in parallel; if no Lambda execution contexts are available, SQS invocations will wait.
This is only necessary if you want to limit how many files can be processed in parallel. If there is no actual need to limit this, it won't cost you more to let Lambda scale out for you.
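A sketch of setting that limit with boto3 (the function name is hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 10 concurrent executions; invocations beyond that
# are throttled and retried by the SQS event source.
lambda_client.put_function_concurrency(
    FunctionName="process-s3-file",  # hypothetical
    ReservedConcurrentExecutions=10,
)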
You don't have to limit your concurrent Lambda executions. AWS already handles that for you. Here are the per-region maximums from this document: https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
Burst concurrency quotas
3000 – US West (Oregon), US East (N. Virginia), Europe (Ireland)
1000 – Asia Pacific (Tokyo), Europe (Frankfurt), US East (Ohio)
500 – Other Regions
In this document: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
Scaling and processing
For standard queues, Lambda uses long polling to poll a queue until it becomes active. When messages are available, Lambda reads up to 5 batches and sends them to your function. If messages are still available, Lambda increases the number of processes that are reading batches by up to 60 more instances per minute. The maximum number of batches that can be processed simultaneously by an event source mapping is 1000.
For FIFO queues, Lambda sends messages to your function in the order that it receives them. When you send a message to a FIFO queue, you specify a message group ID. Amazon SQS ensures that messages in the same group are delivered to Lambda in order. Lambda sorts the messages into groups and sends only one batch at a time for a group. If the function returns an error, all retries are attempted on the affected messages before Lambda receives additional messages from the same group.
Your function can scale in concurrency to the number of active message groups. For more information, see SQS FIFO as an event source on the AWS Compute Blog.
You can see that Lambda handles the scaling up automatically. There is no need to artificially limit the number of Lambdas running to 10.
The idea of Lambda is that you want to run as many tasks as possible so that you can achieve parallel execution in the shortest time.

AWS Lambda is seemingly not highly available when invoked from SNS

I am invoking a data-processing lambda in bulk fashion by submitting ~5k SNS requests asynchronously. This causes all the requests to hit SNS in a very short time. What I am noticing is that my lambda seems to have exactly 5k errors, and then seems to "wake up" and handle the load.
Am I doing something largely out of the ordinary use case here?
Is there any way to combat this?
I suspect it's a combination of concurrency limits and the way Lambda connects to SNS.
Lambda is only so good at automatically scaling up to deal with spikes in load.
Full details are here (https://docs.aws.amazon.com/lambda/latest/dg/scaling.html), but the key points to note are:
There's an account-wide concurrency limit, which you can ask to be raised. By default it's much less than 5k, so that will limit how concurrent your lambda could ever become.
There's a hard scaling limit (+1000 instances/minute), which means even if you've managed to convince AWS to let you have a concurrency limit of 30k, you'll have to be under sustained load for 30 minutes before you'll have that many lambdas going at once.
SNS is a non-stream-based asynchronous invocation (https://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-sns), so what you see is a lot of errors as SNS attempts to invoke 5k lambdas but only the first X (say 1k) get through; the rest keep retrying. The backlog then clears at your initial burst capacity (typically 1k, depending on your region), plus 1k a minute, until you reach maximum capacity.
Note that SNS only retries three times at intervals (AWS is a bit vague about the intervals, but they are probably based on the retry delay the service returns, so should be approximately intelligent); I suggest you set up a DLQ to make sure you're not dropping messages because of the time it takes for the queue to clear.
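A sketch of attaching a DLQ for failed asynchronous invocations with boto3 (the function name and queue ARN are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Events that still fail after all async retries are routed to this queue
# instead of being dropped.
lambda_client.update_function_configuration(
    FunctionName="data-processor",  # hypothetical
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:data-processor-dlq"  # hypothetical
    },
)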
While your pattern is not a bad one, it seems like you're very exposed to the concurrency issues that surround lambda.
An alternative is to use a stream-based event source (like Kinesis), which processes in batches at a set concurrency (e.g. 500 records per lambda, with concurrency set by shard count rather than 1:1 with SNS), and waits for each batch to finish before processing the next.
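A sketch of wiring up such a Kinesis event source with boto3 (the stream ARN and function name are hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# One batch of up to 500 records per invocation, one concurrent invocation
# per shard, processed in order within each shard.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/data-stream",  # hypothetical
    FunctionName="data-processor",  # hypothetical
    BatchSize=500,
    StartingPosition="LATEST",
)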

AWS Lambda Polling from SQS: in-flight messages count

I have 20K messages in an SQS queue. I also have a lambda that will process the SQS messages and put data into an ElasticSearch server.
I have configured SQS as the lambda's trigger and limited the lambda's SQS batch size to 10. I have also limited the lambda so that only one instance can run at a given time.
However, sometimes I see over 10K in-flight messages in the AWS console. Shouldn't it max out at 10 in-flight messages?
Because of this, the lambda is only able to process 9K of the SQS messages properly.
Below is a screen capture showing that I have limited the lambda to only 1 instance running at a given time.
I've been doing some testing and contacting AWS tech support at the same time.
What I do believe at the moment is that:
Amazon Simple Queue Service supports an initial burst of 5 concurrent function invocations and increases concurrency by 60 concurrent invocations per minute. Doc
1/ The thing that does the polling is a separate entity. It is most likely a lambda function that long-polls the SQS queue and then invokes our lambda functions.
2/ That polling Lambda does not take our Receiver-Lambda into account at all. It does not care whether the function is running at max capacity, or how much concurrency is available to the Receiver-Lambda.
3/ Due to that combination, the behavior is not what we expect from the Lambda-SQS integration. Worse, if millions of messages suddenly burst into your queue, the Receiver-Lambda concurrency can never catch up with the amount of messages the polling Lambda is sending, resulting in lost work.
The test:
Create one Lambda function that takes 30 seconds to return true;
Set that function's concurrency to 50;
Push 300 messages into the queue (visibility timeout: 10 minutes, batch message count: 1, no redrive)
The result:
The amount of messages available just increases gradually
At first, there are few enough messages for the Receiver-Lambda to process
After half a minute, there are more messages available than the Receiver-Lambda can handle
These messages get discarded to the dead-letter queue, because the polling Lambda is unable to invoke the Receiver-Lambda
I will update this answer as soon as I get confirmation from AWS support.
Support answer, as of Q1 2019, TL;DR version:
1/ The assumption was correct; there is a "Poller"
2/ That Poller does not take reserved concurrency into consideration as part of its algorithm
3/ That Poller has a hard limit of 1000
Q2 2019: The above information needs to be updated. Support said that the poller does correctly consider reserved concurrency, but it should be at least 5. The SQS-Lambda integration is still being updated and this answer will not be, so please consult AWS if you run into weird issues.

AWS Lambda > what happens when you reach the concurrent limit

Lambda has a 100-function limit.
What happens when you submit a 101st function while 100 are already running?
Will it:
fail with an error
queue up
If you are talking about concurrent executions, there isn't a limit of 100. The limit depends on the region, but by default it's 1000 concurrent executions.
To answer your question: as soon as the concurrent executions limit is reached, the next execution gets throttled. Each throttled invocation increases the Amazon CloudWatch Throttles metric for the function.
If your AWS Lambda is invoked asynchronously, AWS Lambda automatically retries the throttled event for up to six hours, with delays between retries. If you didn't set up a Dead Letter Queue (DLQ) for your AWS Lambda, the event is lost as soon as all retries fail.
For more information, please check AWS Lambda - Throttling Behavior:
If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors (429) and system errors (500-series), Lambda returns the event to the queue and attempts to run the function again for up to 6 hours. The retry interval increases exponentially from 1 second after the first attempt to a maximum of 5 minutes. However, it might be longer if the queue is backed up. Lambda also reduces the rate at which it reads events from the queue.
As mentioned here.