Rate limiting / scheduling AWS Cognito operations to avoid TooManyRequestsException - amazon-web-services

AWS Cognito UserUpdate-related operations have a quota of 25 requests per second (a hard limit which can't be increased).
I have a Lambda function which gets 1000 simultaneous requests and is responsible for calling Cognito's AdminUpdateUserAttributes operation. As a result, some requests pass and some fail due to TooManyRequestsException.
It's important to note that these 1000 requests happen once a day, in the morning; there are no requests at all during the rest of the day.
Our stack is completely serverless and managed by CloudFormation (with the Serverless Framework), and we tend to avoid using EC2 if possible.
What is the best way to handle these daily 1000 requests so that they are handled as soon as I get them, while avoiding failures due to TooManyRequestsException?
A solution I tried:
A Lambda that receives the requests and sends them to SQS, plus another Lambda with a reserved concurrency of 1 that is triggered by events from the queue and calls Cognito's AdminUpdateUserAttributes operation.
This partially worked: I no longer get TooManyRequestsException, but it looks like some of the messages got lost along the way (I think SQS got throttled).
Thanks!

AWS recommends exponential backoff with jitter for any API operations that are rate-limited or produce retryable failures.
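One way to apply this to the Cognito calls is a small retry wrapper with full jitter. This is a sketch, not a drop-in implementation; the helper is generic, and only the TooManyRequestsException check ties it to Cognito:

```javascript
// Generic full-jitter exponential backoff around an async operation.
// Only the error-code check ties this to Cognito's throttling error.
async function withBackoff(operation, maxRetries = 5, baseMs = 100) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      const retryable = err.code === 'TooManyRequestsException';
      if (!retryable || attempt >= maxRetries) throw err;
      // Full jitter: sleep a random duration in [0, baseMs * 2^attempt)
      const delayMs = Math.random() * baseMs * Math.pow(2, attempt);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

With the AWS SDK v2 this would wrap the call as `withBackoff(() => cognito.adminUpdateUserAttributes(params).promise())`. Note the SDK also has built-in retry configuration (`maxRetries` on the client), so check whether tuning that is enough before rolling your own.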

Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage).
Are you sure SQS got throttled?
Another option is to increase the number of retries for failed Lambda invocations.

Related

Autoscale AWS Lambda concurrency based off throttling errors

I have an AWS Lambda function using an AWS SQS trigger to pull messages, process them with an AWS Comprehend endpoint, and put the output in AWS S3. The AWS Comprehend endpoint has a rate limit which goes up and down throughout the day based on something I can control. The fastest way to process my data, which also optimizes the cost I am paying for the AWS Comprehend endpoint to be up, is to set concurrency high enough that I get throttling errors returned from the API. This, however, comes with the caveat that I am paying for more AWS Lambda invocations; the flip side being that, to optimize the cost I am paying for AWS Lambda, I want 0 throttling errors.
Is it possible to set up autoscaling for the concurrency limit of the lambda such that it will increase if it isn't getting any throttling errors, but decrease if it is getting too many?
Very interesting use case.
Let me start by pointing out something that I found out the hard way, in an almost 4-hour-long call with AWS Tech Support, after being puzzled for a couple of days.
With SQS acting as a trigger for AWS Lambda, the concurrency cannot go beyond 1K, even if the concurrency of the Lambda is set at a higher limit.
There is now a detailed post on this over at Knowledge Center.
With that out of the way, and assuming you are under the 1K limit at any given point in time and so only have to use one SQS queue, here is what I feel can be explored:
Either use an existing CloudWatch metric (via Comprehend) or publish a new metric that is indicative of the load you can handle at any given point in time. You can then use this to set an appropriate concurrency limit for the Lambda function. This ensures that even if the SQS queue is flooded with messages, Lambda picks them up at the rate at which they can actually be processed.
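As a sketch of that idea, the metric can be reduced to a reserved-concurrency value with a small pure function, which you would then apply via the Lambda API. The capacity numbers and function name below are hypothetical; tune them to your own pipeline:

```javascript
// Sketch: map a load metric (e.g. what Comprehend can absorb right now)
// to a Lambda reserved-concurrency value. The numbers are illustrative.
function targetConcurrency(requestsPerSecondCapacity, avgSecondsPerMessage, hardCap = 900) {
  // Little's law: workers needed ≈ sustainable arrival rate * time per item
  const needed = Math.ceil(requestsPerSecondCapacity * avgSecondsPerMessage);
  return Math.max(1, Math.min(needed, hardCap)); // stay under the SQS-trigger/account limits
}

// Applying it would look something like this (AWS SDK v2):
// const lambda = new AWS.Lambda();
// await lambda.putFunctionConcurrency({
//   FunctionName: 'comprehend-worker',            // hypothetical name
//   ReservedConcurrentExecutions: targetConcurrency(50, 2),
// }).promise();
```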
Please note: this comes out of my own philosophy of being proactive vs. reactive. I would not wait for something to fail (e.g. invocation errors in this case) before triggering the processes that adjust concurrency. System failures should be rare and should actually raise an alarm (if not panic!), rather than being a normal occurrence a couple of times a day.
To build on that, if possible I would suggest approaching this the other way around, i.e. scale the Comprehend processing limit and AWS Lambda concurrency based on the messages in the SQS queue (the backlog), or a combination of this backlog and the time of day, etc. This way, if every part of your pipeline is a function of the amount of backlog in the queue, you can rest assured that you are not spending more than you have to at any given point in time.
More importantly, you always have capacity in place should the need arise or something out of the ordinary happen.

Which service has a lower response time? SQS vs DynamoDB

Well, I built a serverless application a while ago using AWS Lambda. My current application flow is as follows:
API Gateway (1) → Lambda Function (2) → SQS (3) → Lambda Function (4) → DynamoDB (5)
Now, there are some considerations:
The client is going to send a request to my API (1).
There's a lambda function (2) attached to the API event, so it's going to receive the API request and process it.
Now, here is where the question comes in. After processing the request, the result of this processing MUST be inserted into DynamoDB (5). Currently, I send it to SQS (3) and return the response of the HTTP request sent by the client.
Although the request is finished and responded to, the SQS (3) messages are then pulled by another Lambda function (4), which inserts the processed message into DynamoDB (5).
When I first prototyped this flow I had a presumption: that sending a message to SQS was faster than inserting it into DynamoDB. However, I never did a real benchmark or anything like it, so my presumption was merely arbitrary.
The question is, finally: which of the actions is faster? Sending the processed request to SQS, or directly to DynamoDB?
Consider that, in both cases, it's going to be executed from within a Lambda function (2), so, theoretically, as it's in the same context as AWS itself, it won't have the same response time as requesting it from another machine.
If the answer to this question is:
Inserting directly on DynamoDB is faster
Inserting directly on DynamoDB is not faster but the difference is negligible
I may remove both SQS (3) and the second lambda function (4), resulting in a simpler and more direct flow.
However, if sending first to SQS gives a faster response to the client, I may keep this flow.
You're asking if SQS is faster than DynamoDB, but in your flow you're using both... it will of course be cheaper to just do API Gateway (1) → Lambda Function (2) → DynamoDB (3).
Performance wise, DynamoDB is known to be fast for small, frequent writes, so I wouldn't worry much about that.
The response times of SQS and DynamoDB should be very similar, unless your DynamoDB capacity isn't provisioned properly, in which case you could run into throttling. If provisioned capacity isn't a concern for you, then I suggest testing both SQS and DynamoDB with timers inside your Lambda function (or using AWS X-Ray) and deciding whether or not the performance difference is worth the cost of adding an SQS queue and an extra Lambda function.
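A rough in-handler timer is enough for a first comparison. This is just a sketch; the real SDK calls are left as comments:

```javascript
// Minimal in-function timer for comparing the two write paths.
// Logs the elapsed time and passes the wrapped result through unchanged.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return result;
}

// e.g. inside the handler (AWS SDK v2):
// await timed('sqs.sendMessage', () => sqs.sendMessage(params).promise());
// await timed('dynamodb.put', () => ddb.put(params).promise());
```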
If you keep the connection open between invocations I've seen DynamoDB response times below 10ms. I don't have data on SQS latency.
Regarding cost, you are basically doubling your lambda cost, and adding whatever SQS costs you. SQS costs about 33% more than DynamoDB if you are using on-demand writes.
+1 to Deiv's and cementblocks' responses.
Let me share the following additional perspectives to help you evolve your proposed design.
If you need to strictly abide by async processing, i.e., decouple request processing from response, then stick with your SQS based solution.
If the request-processing latency is consistent and acceptable for the consumers of the API endpoint, then I'd recommend the solution Deiv recommended: process the request, persist to DynamoDB, and return the response to the client. As a bonus, you will have a lower AWS bill (as pointed out above).
DynamoDB is designed to offer "consistent" P99 (i.e., 99th percentile) latency of < 10 ms for single item reads and < 20 ms for single item writes.
Hope this helps!

How do SQS messages behave when using AWS Lambda with an SQS event source?

Above is my serverless config for my Lambda. We want only a limited number (10) of Lambdas running in parallel, since the function performs DB operations. With this configuration, we were expecting Lambda to pick up only 10 messages at a time (reserved concurrency) and only 1 message per invocation (batchSize).
However, as soon as I publish messages in bulk, many messages are InFlight. I was expecting only 10 messages to be InFlight.
Based on the monitoring below, it seems Lambda gets invoked many times but gets throttled, and the concurrent executions are always 10.
Questions: What is the concept behind this behavior? Are the throttled Lambda instances waiting for others to finish? Does this impact other Lambdas running under the same account? The AWS documentation doesn't give much information about this behavior.

AWS Lambda is seemingly not highly available when invoked from SNS

I am invoking a data-processing Lambda in bulk fashion by submitting ~5k SNS requests asynchronously. This causes all the requests to hit SNS in a very short time. What I am noticing is that my Lambda seems to produce exactly 5k errors, and then seems to "wake up" and handle the load.
Am I doing something largely out of the ordinary use case here?
Is there any way to combat this?
I suspect it's a combination of concurrency limits and the way Lambda connects to SNS.
Lambda is only so good at automatically scaling up to deal with spikes in load.
Full details are here (https://docs.aws.amazon.com/lambda/latest/dg/scaling.html), but the key points to note are:
There's an account-wide concurrency limit, which you can ask to be raised. By default it's much less than 5k, so that will limit how concurrent your Lambda could ever become.
There's a hard scaling limit (+1000 instances/minute), which means that even if you've managed to convince AWS to let you have a concurrency limit of 30k, you'll have to be under sustained load for 30 minutes before you have that many Lambdas going at once.
SNS is a non-stream-based asynchronous invocation (https://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-sns), so what you see is a lot of errors as SNS attempts to invoke 5k Lambdas but only the first X (say 1k) get through; the rest keep retrying. The queue then clears at your initial burst capacity (typically 1k, depending on your region), plus 1k a minute, until you reach maximum capacity.
Note that SNS only retries three times at intervals (AWS is a bit sketchy about the intervals, but they are probably based on the retry delay the service returns, so should be approximately intelligent). I suggest you set up a DLQ to make sure you're not dropping messages because of the time it takes the queue to clear.
While your pattern is not a bad one, it seems like you're very exposed to the concurrency issues that surround lambda.
An alternative is to use a stream-based event source (like Kinesis), which processes in batches at a set concurrency (e.g. 500 records per Lambda invocation, concurrent by shard count rather than 1:1 with SNS) and waits for each batch to finish before processing the next.
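For illustration, a stream-style batch handler looks roughly like this; `processRecord` is a hypothetical stand-in for the real per-record work:

```javascript
// Sketch of a stream-style batch handler: one invocation receives a batch
// of records and works through them sequentially, so concurrency is bounded
// by shard count instead of message count.
async function batchHandler(event, processRecord) {
  const results = [];
  for (const record of event.Records) {
    // Kinesis record payloads arrive base64-encoded
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
    results.push(await processRecord(payload));
  }
  return results;
}
```

Because the next batch from a shard isn't delivered until this invocation returns, back-pressure is built in; a failed record blocks its shard, so per-record error handling matters more here than with SNS.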

Is AWS Lambda good for real-time API Rest?

I'm learning about AWS Lambda and I'm worried about synchronous real-time requests.
The fact that Lambda has a "cold start" doesn't sound good for handling GET requests.
Imagine a user is using the application and makes a GET HTTP request to fetch a product or a list of products; if the Lambda is sleeping, it could take 10 seconds to respond, which I don't see as an acceptable response time.
Is it good or bad practice to use AWS Lambda for a classic (synchronous) REST API?
Like most things, I think you should measure before deciding. A lot of AWS customers use Lambda as the back-end for their webapps quite successfully.
There's a lot of discussion out there on Lambda latency, for example:
2017-04 comparing Lambda performance using Node.js, Java, C# or Python
2018-03 Lambda call latency
2019-09 improved VPC networking for AWS Lambda
2019-10 you're thinking about cold starts all wrong
In December 2019, AWS Lambda introduced Provisioned Concurrency, which improves things. See:
2019-12 AWS Lambda announces Provisioned Concurrency
2020-09 AWS Lambda Cold Starts: Solving the Problem
You should measure latency for an environment that's representative of your app and its use.
A few things that are important factors related to request latency:
cold starts => higher latency
request patterns are important factors in cold starts
if you need to deploy in VPC (attachment of ENI => higher cold start latency)
using CloudFront --> API Gateway --> Lambda (more layers => higher latency)
choice of programming language (Java likely highest cold-start latency, Go lowest)
size of Lambda environment (more RAM => more CPU => faster)
Lambda account and concurrency limits
pre-warming strategy
Update 2019-12: see Predictable start-up times with Provisioned Concurrency.
Update 2021-08: see Increasing performance of Java AWS Lambda functions using tiered compilation.
As an AWS Lambda + API Gateway user (with Serverless Framework) I had to deal with this too.
The problem I faced:
Few requests per day per lambda (not enough to keep lambdas warm)
Time critical application (the user is on the phone, waiting for text-to-speech to answer)
How I worked around that:
The idea was to find a way to call the critical lambdas often enough that they don't get cold.
If you use the Serverless Framework, you can use the serverless-plugin-warmup plugin that does exactly that.
If not, you can copy its behavior by creating a worker that invokes the Lambdas every few minutes to keep them warm. To do this, create a Lambda that invokes your other Lambdas, and schedule a CloudWatch Events rule to trigger it every 5 minutes or so. Make sure to call your to-keep-warm Lambdas with a custom event.source so you can exit them early, without running any actual business code, by putting the following code at the very beginning of the function:
if (event.source === 'just-keeping-warm') {
  console.log('WarmUP - Lambda is warm!');
  return callback(null, 'Lambda is warm!');
}
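The invoking side of that worker could look something like this sketch (AWS SDK v2 assumed; the function names are placeholders, and the client is passed in to keep the logic testable):

```javascript
// Scheduled "warmer" function: fires a fire-and-forget ('Event') invocation
// at each function to keep warm, with the custom source the receivers check.
async function warmUp(lambdaClient, functionNames) {
  const payload = JSON.stringify({ source: 'just-keeping-warm' });
  await Promise.all(functionNames.map((name) =>
    lambdaClient.invoke({
      FunctionName: name,
      InvocationType: 'Event', // async: don't wait for the target to run
      Payload: payload,
    }).promise()
  ));
}

// In the real handler:
// const AWS = require('aws-sdk');
// exports.handler = () => warmUp(new AWS.Lambda(), ['my-critical-fn']);
```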
Depending on the number of Lambdas you have to keep warm, this can be a lot of "warming" calls. AWS offers 1,000,000 free Lambda calls every month, though.
We have used AWS Lambda quite successfully with reasonable and acceptable response times (REST/JSON-based API + AWS Lambda + DynamoDB access).
In the latency we measured, the least amount of time was spent invoking the function, and the largest amount in the application logic.
There are also the warm-up techniques mentioned in the answers above.