My Lambda function runs in less than 1300 ms when I click the test button on the Lambda page. (Lambda page: https://eu-central-1.console.aws.amazon.com/lambda/home?region=eu-central-1#/functions/myfunc?tab=graph)
When I send a request to the same Lambda via API Gateway, I have to wait about 4300 ms.
HTTP requests that reach Lambda through API Gateway are 3-4 times slower.
I saw some similar forum posts. However, I couldn't find a solution for this issue.
How can I reduce the latency?
API Gateway is known to introduce a lot of latency.
Lambda is not ideal for synchronous request/response workloads like the one you seem to have. It is better suited to asynchronous processes, where the latency between invocation and execution is not as critical.
You should probably think about whether your system needs to be synchronous, and if it does, whether Lambda is the best answer.
Related
I have an AWS Lambda function that uploads a file to Dropbox using its API.
It works well under normal load, but when there are hundreds to thousands of requests in a short amount of time (within a few minutes, for example), the Dropbox upload requests are throttled and penalized with retry delays of 30/60/300 seconds. This obviously causes timeouts in the Lambda functions (which is not efficient in pricing terms either).
Can someone help me control such a spike in Lambda invocations? I tried setting reserved concurrency for that function, but it doesn't seem reliable, because Lambda invocations are then throttled and retried at seemingly random intervals.
Or is this something I need to resolve on the Dropbox side, so that they don't limit my API calls?
I am trying to use API Gateway as the API interface between my frontend and Lambda functions. Since API Gateway has a maximum timeout of 30 seconds and my Lambda functions take much longer to do their computation, can I use API Gateway WebSockets to make this possible?
I currently create RESTful APIs on API Gateway and just found out about WebSockets on API Gateway.
Does anyone have suggestions on how to make this possible?
Depending on what your Lambda function is doing, it may be worthwhile to increase the Lambda memory configuration. Copied from the Lambda Developer Guide, emphasis mine:
Memory – The amount of memory available to the function during execution. [...] Lambda allocates CPU power linearly in proportion to the amount of memory configured.
Thus, by increasing the amount of Lambda memory, you also increase Lambda CPU performance. For computationally intensive operations, this configuration can significantly decrease the response time, hopefully to below the API Gateway 30-second timeout.
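Before raising the memory setting, it is worth confirming the handler is actually CPU-bound. A minimal Node.js sketch for timing the heavy part of a handler; the `heavyComputation` workload here is a stand-in for your own logic, not anything from the question:

```javascript
// Time a CPU-bound workload. If this section dominates your Lambda's
// reported duration, raising the memory setting (and thus the CPU
// share) should shrink it.
function heavyComputation(n) {
  // Stand-in workload: sum of square roots from 1 to n.
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += Math.sqrt(i);
  }
  return total;
}

const start = process.hrtime.bigint();
const result = heavyComputation(5_000_000);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`computed ${result.toFixed(0)} in ${elapsedMs.toFixed(1)} ms`);
```

If the timed section accounts for most of the function's duration, more memory should help; if the time is spent waiting on network I/O instead, more memory will not.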
I'm learning about AWS Lambda and I'm worried about synchronized real-time requests.
The fact that Lambda has a "cold start" doesn't sound good for handling GET requests.
Imagine a user is using the application and makes a GET HTTP request to fetch a product or a list of products. If the Lambda is cold, it could take 10 seconds to respond, which I don't see as an acceptable response time.
Is it good or bad practice to use AWS Lambda for classic (sync responses) API Rest?
Like most things, I think you should measure before deciding. A lot of AWS customers use Lambda as the back-end for their webapps quite successfully.
There's a lot of discussion out there on Lambda latency, for example:
2017-04 comparing Lambda performance using Node.js, Java, C# or Python
2018-03 Lambda call latency
2019-09 improved VPC networking for AWS Lambda
2019-10 you're thinking about cold starts all wrong
In December 2019, AWS Lambda introduced Provisioned Concurrency, which improves things. See:
2019-12 AWS Lambda announces Provisioned Concurrency
2020-09 AWS Lambda Cold Starts: Solving the Problem
You should measure latency for an environment that's representative of your app and its use.
A few things that are important factors related to request latency:
cold starts => higher latency
request patterns are important factors in cold starts
if you need to deploy in VPC (attachment of ENI => higher cold start latency)
using CloudFront --> API Gateway --> Lambda (more layers => higher latency)
choice of programming language (Java likely highest cold-start latency, Go lowest)
size of Lambda environment (more RAM => more CPU => faster)
Lambda account and concurrency limits
pre-warming strategy
Update 2019-12: see Predictable start-up times with Provisioned Concurrency.
Update 2021-08: see Increasing performance of Java AWS Lambda functions using tiered compilation.
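Provisioned Concurrency, mentioned in the updates above, keeps a fixed number of execution environments initialized so the cold-start factors in the list no longer apply to requests served by that pool. A hedged sketch of the request parameters for the AWS SDK for JavaScript's `putProvisionedConcurrencyConfig` call; the function name, alias, and count are placeholder values:

```javascript
// Build parameters for Lambda's PutProvisionedConcurrencyConfig API.
// Provisioned concurrency keeps `count` execution environments
// initialized, so requests they serve never pay a cold start.
function provisionedConcurrencyParams(functionName, alias, count) {
  return {
    FunctionName: functionName,             // function to pre-initialize
    Qualifier: alias,                       // must target a version or alias
    ProvisionedConcurrentExecutions: count, // environments kept warm
  };
}

const params = provisionedConcurrencyParams('myfunc', 'live', 10);
// With the AWS SDK v2 this would be passed as:
//   new AWS.Lambda().putProvisionedConcurrencyConfig(params).promise()
console.log(JSON.stringify(params));
```

Note that provisioned concurrency is billed for the time it is configured, whether or not requests arrive, so it is a trade of cost for latency.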
As an AWS Lambda + API Gateway user (with Serverless Framework) I had to deal with this too.
The problem I faced:
Few requests per day per lambda (not enough to keep lambdas warm)
Time critical application (the user is on the phone, waiting for text-to-speech to answer)
How I worked around that:
The idea was to find a way to call the critical lambdas often enough that they don't get cold.
If you use the Serverless Framework, you can use the serverless-plugin-warmup plugin that does exactly that.
If not, you can copy its behavior by creating a worker that invokes the Lambdas every few minutes to keep them warm. To do this, create a Lambda that invokes your other Lambdas, and schedule a CloudWatch Events rule to trigger it every 5 minutes or so. Make sure to call your to-keep-warm Lambdas with a custom event.source so you can exit them early, without running any actual business code, by putting the following code at the very beginning of the function:
if (event.source === 'just-keeping-warm') {
  console.log('WarmUP - Lambda is warm!');
  return callback(null, 'Lambda is warm!');
}
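On the other side, the scheduled warmer only needs to fire asynchronous (Event-type) invocations at each target, so it doesn't wait for them to finish. A sketch using the AWS SDK for JavaScript; the function names are placeholders:

```javascript
// Build the Invoke parameters for one keep-warm call.
// InvocationType 'Event' makes the invocation asynchronous, so the
// warmer doesn't block on each target.
function warmupParams(functionName) {
  return {
    FunctionName: functionName,
    InvocationType: 'Event',
    // The target checks event.source and exits early (see above).
    Payload: JSON.stringify({ source: 'just-keeping-warm' }),
  };
}

// Inside the scheduled warmer Lambda (AWS SDK v2), something like:
//   const lambda = new AWS.Lambda();
//   for (const name of ['myfunc-a', 'myfunc-b']) {
//     await lambda.invoke(warmupParams(name)).promise();
//   }
console.log(warmupParams('myfunc-a').InvocationType);
```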
Depending on the number of Lambdas you have to keep warm, this can be a lot of "warming" calls. AWS offers 1,000,000 free Lambda invocations every month, though.
We have used AWS Lambda quite successfully with reasonable and acceptable response times. (REST/JSON based API + AWS Lambda + Dynamo DB Access).
In the latency we measured, the smallest share of time was always spent invoking the function and the largest share in application logic.
There are warm up techniques as mentioned in the above posts.
Has anyone found a solution to API Gateway latency issues?
With a simple function testing API Gateway -> Lambda interaction, I regularly see cold starts in the 2.5s range, and once "warmed," response times in the 900ms - 1.1s range are typical.
I understand the TLS handshake has its own overhead, but testing similar resources (AWS-based or general sites that I believe are not geo-distributed) from my location shows results that are half that, ~500ms.
Is good news coming soon from AWS?
(I've read everything I could find before posting.)
Engineer with the API Gateway team here.
You said you've read "everything", but for context for others I want to link to a number of threads on our forums where I've documented publicly where a lot of this perceived latency when executing a single API call comes from:
Forum Post 1
Forum Post 2
In general, as you increase your call rates, your average latency will shrink as connection reuse mechanisms between your clients and CloudFront as well as between CloudFront and API Gateway can be leveraged. Additionally, a higher call rate will ensure your Lambda is "warm" and ready to serve requests.
That being said, we are painfully aware that we are not meeting the performance bar for a lot of our customers and are making strides towards improving this:
The Lambda team is constantly working on improving cold start times as well as attempting to remove them for functions that are seeing continuous load.
On API Gateway, we are currently in the process of rolling out improved connection reuse between CloudFront and API Gateway, where customers will be able to benefit from connections established via other APIs. This should mean that the percentage of requests that need to do a full TLS handshake between CloudFront and API Gateway should be reduced.
I've been playing with Lambda recently and am working on creating an API using API Gateway and Lambda. I have a lambda function in place that returns a JSON and an API Gateway endpoint that invokes the function. Everything works well with this simple setup.
I tried load testing the API Gateway endpoint with the loadtest npm module. Lambda handles the concurrent requests (albeit with an increase in mean latency over the course of the run), but when I send it 40 requests per second or so, it starts throwing errors and only partially completes the requests.
I read in the documentation that, by default, Lambda invocation is of type RequestResponse (which is what the API uses right now), which is synchronous in nature. For asynchronous invocation, the invocation type is Event, but Lambda discards the return value for async invocations, so the API would return nothing.
Is there something I am missing either with the sync, async or concurrency definitions in regards to AWS? Is there a better way to approach this problem? Any insight is helpful. Thank you!
You will have to use synchronous execution if you want to get a response back through API Gateway; it doesn't make sense to use async execution in this scenario. I think what you are missing is that while each Lambda execution is blocking and single-threaded, there will be multiple instances of your function running in multiple Lambda server environments.
The default number of concurrent Lambda executions is fairly low, for safety reasons. This is to prevent you from accidentally writing a run-away Lambda process that would cost lots of money while you are still learning about Lambda. You need to request an increase in the Lambda concurrent execution limit on your account.
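To see roughly why 40 requests per second can hit a concurrency limit: required concurrency is approximately arrival rate times average duration (Little's Law). A quick sketch with illustrative numbers; the 1.1 s duration is borrowed from latencies mentioned elsewhere on this page, and your own figures will differ:

```javascript
// Estimate concurrent Lambda executions needed via Little's Law:
// concurrency ≈ requests per second × average duration in seconds.
function requiredConcurrency(requestsPerSecond, avgDurationSeconds) {
  return Math.round(requestsPerSecond * avgDurationSeconds);
}

console.log(requiredConcurrency(40, 1.1)); // ≈ 44 concurrent executions
```

If that estimate exceeds your account's concurrent execution limit, the excess invocations are throttled, which would explain the partially failing load test.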