I'm trying to warm a Lambda function (inside a VPC, accessing a private RDS instance) with a CloudWatch scheduled rule. The rate is 5 minutes (only for experimenting); I intend to change it to 35 minutes later on.
After I saw the CloudWatch logs indicating that the function had been called (and completed; I set up a condition so that if no input is given, it returns an API Gateway response immediately), I called the function through the API Gateway URL.
However, I'm still getting what looks like a cold start: the first call returns a response in 2 s. If I do it again, I get the response in 200 ms.
So my question is:
What did I do wrong? Can I actually warm a Lambda function with a CloudWatch schedule?
Does dropping the request immediately affect this behaviour? The DB connection is not established if the request comes from CloudWatch.
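To illustrate what I mean by dropping the request, the handler looks roughly like this (a simplified sketch, not my exact code; the connection details are placeholders and pymysql is just an example client):

    import pymysql  # example MySQL client; placeholder connection details below

    connection = None

    def handler(event, context):
        global connection
        # The CloudWatch schedule sends an event with no API Gateway input,
        # so the function returns immediately without touching the database.
        if not event.get("queryStringParameters") and not event.get("body"):
            return {"statusCode": 200, "body": "warm-up ping, nothing done"}

        # Real requests from API Gateway establish (and reuse) the RDS connection.
        if connection is None:
            connection = pymysql.connect(host="my-rds-endpoint", user="app",
                                         password="***", database="appdb",
                                         connect_timeout=5)
        # ... query the database and build the real response ...
        return {"statusCode": 200, "body": "real response"}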
Thanks!
****EDIT****
I tried connecting to the DB before dropping out of the function when it is called by CloudWatch, but it doesn't change anything. The first request through the API is still around 2 s and the next ones are around 200 ms.
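Concretely, the warm-up branch now opens the connection before returning (again a simplified sketch with placeholder details):

    def handler(event, context):
        global connection
        if not event.get("queryStringParameters") and not event.get("body"):
            # Warm-up ping from CloudWatch: open the RDS connection now so a
            # later real request finds it already established.
            if connection is None:
                connection = pymysql.connect(host="my-rds-endpoint", user="app",
                                             password="***", database="appdb",
                                             connect_timeout=5)
            return {"statusCode": 200, "body": "warm-up ping, connection opened"}
        # ... handle real requests as before ...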
****EDIT 2****
I tried removing the schedule entirely and the cold start reached 9 s. So I guess the 2 s figure already excludes the cold start. Is it possible that the problem lies in another service, such as API Gateway?
Related
I am using multiple Lambdas in Step Functions. The client hits the API Gateway, which invokes a Lambda, and that Lambda invokes the Step Function's Lambdas to get the response.
The response time is 3-7 seconds when the Lambdas are already warmed up.
But when there is a cold start, it takes more than 30 seconds for some requests, depending on the input.
As the API Gateway has a timeout of 30 seconds, I get a "503 Service Unavailable" error for some requests even though the execution is still running, although AWS says it should return 504 for a request timeout.
Then I enabled provisioned concurrency to address the latency, expecting it would also solve the timeout issue. But still, sometimes, I get a Service Unavailable error.
I checked the CloudWatch trace. Interestingly, each Lambda takes an initialization time of 3 seconds even after provisioning. Now I doubt whether provisioned concurrency is working, or whether I misunderstand it.
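For reference, this is roughly how I set it up with boto3 (function and alias names are placeholders; one thing I am not sure about is whether my integrations actually invoke the alias rather than $LATEST, since provisioned concurrency only applies to the version or alias it is configured on):

    import boto3

    lam = boto3.client("lambda")
    FUNCTION = "my-step-function-worker"  # placeholder name

    # Provisioned concurrency attaches to a published version or an alias,
    # never to $LATEST.
    version = lam.publish_version(FunctionName=FUNCTION)["Version"]
    lam.create_alias(FunctionName=FUNCTION, Name="live", FunctionVersion=version)

    lam.put_provisioned_concurrency_config(
        FunctionName=FUNCTION,
        Qualifier="live",
        ProvisionedConcurrentExecutions=1,
    )

    # The config reports IN_PROGRESS for a while; only READY instances skip the init.
    status = lam.get_provisioned_concurrency_config(FunctionName=FUNCTION, Qualifier="live")
    print(status["Status"])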
Any advice?
I have a Python Lambda that forwards requests to an external API. The Lambda is part of a target group that an ALB targets. The Lambda goes through surges where it has to handle hundreds of invocations per second.
Everything works well for the most part, except when we hit some odd issue where it takes upwards of 20 seconds or so to retrieve a SecureString parameter from Parameter Store. When that 20-second delay occurs, the system calling our ALB times out and throws an error.
I was thinking that I could do the SSM parameter retrieval in an init method of the Lambda and then keep the Lambda always warm, but that seems like a waste of resources just to manage the SSM parameter reading issue.
Are there any suggestions on how this should be done or configured (or am I perhaps overlooking something that I should be doing)?
Every AWS API has a request limit - https://aws.amazon.com/premiumsupport/knowledge-center/ssm-parameter-store-rate-exceeded/
So, yes, you should cache your parameters - see How do I cache multiple AWS Parameter Store values in an AWS Lambda?
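A minimal sketch of caching at module scope with boto3 (the parameter name and TTL here are placeholders, adjust them to your setup):

    import os
    import time
    import boto3

    ssm = boto3.client("ssm")

    # The cache lives at module scope, so it survives across invocations of a
    # warm execution environment; the parameter is fetched at most once per TTL.
    _cached_value = None
    _fetched_at = 0.0
    _TTL_SECONDS = 300

    def get_secret_param():
        global _cached_value, _fetched_at
        if _cached_value is None or time.time() - _fetched_at > _TTL_SECONDS:
            resp = ssm.get_parameter(
                Name=os.environ.get("PARAM_NAME", "/myapp/api-key"),  # placeholder
                WithDecryption=True,
            )
            _cached_value = resp["Parameter"]["Value"]
            _fetched_at = time.time()
        return _cached_value

    def handler(event, context):
        api_key = get_secret_param()
        # ... forward the request to the external API using api_key ...
        return {"statusCode": 200, "body": "ok"}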
I've got an API Gateway V2 (protocol: HTTP) endpoint; it simply makes a request to my Lambda function and returns the response. I've noticed that if I make no request for about 10 minutes or so, then on a new request it's going WAY slower than the requests afterwards. It's the same function doing the same thing every time, so I'm not sure why this is happening. Has anyone else ever had this and found a solution?
The reason for this is that your Lambda function has to be started before it can handle requests.
This is also called cold start.
Starting a new instance of your Lambda does take some time. Once it is started, it will serve several requests. At some point the AWS Lambda service is going to shut down your Lambda function, for example when there has not been any traffic for a while.
That's where your observation is coming from:
I've noticed that if I make no request for about 10 minutes or so, that on a new request it's going WAY slower than the requests afterwards.
When there are no instances of your Lambda running and a new request comes in, the AWS Lambda service needs to instantiate a "fresh" instance of your Lambda.
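A rough sketch of where that startup cost lives: everything at module scope runs once when a fresh instance is created, while the handler body runs for every request, so expensive setup is usually placed at module scope (assuming a Python runtime here):

    import time

    # Module scope: executed once per "fresh" instance, i.e. only on a cold start.
    _init_started = time.time()
    # ... import heavy libraries, open connections, load configuration here ...
    print(f"init finished in {time.time() - _init_started:.3f}s")

    def handler(event, context):
        # Handler: executed for every request served by this instance.
        return {"statusCode": 200, "body": "ok"}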
You could read this blog post, which touches on this topic:
https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
I have two Lambda functions in my AWS account. One acts as a custom authorizer and the other acts as a notification service which calls the Firebase FCM notification service.
When a request is made to the notification Lambda for the first time in a day, there is no response. The Lambda does not do anything and hence does not call the Firebase service.
It seemed like a cold start problem to me, so I set provisioned concurrency to 1 for both the auth and the notification Lambda in the hope that it would help. But the problem persists.
CloudWatch logs are of no help at all, since nothing gets printed that I could use to figure out the issue. Either the authorizer Lambda goes cold and does not respond, or the primary notification Lambda goes cold and does not respond, or both of them have issues.
After the first call to the Lambda fails, any subsequent calls work smoothly like a charm.
I do not want to install any plugin that keeps the Lambda warm (not an option for the client), so is there some other way I can diagnose this problem and fix it?
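The only instrumentation I could think of adding is unconditional logging at module scope and at the very top of each handler, so CloudWatch would at least show whether the function is being invoked at all and whether the first call is a cold start (a sketch; I have not confirmed it catches the failure):

    import logging
    import time

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    # Runs once per cold start; if this never appears, the function was not invoked at all.
    logger.info("module loaded (cold start) at %s", time.time())

    def handler(event, context):
        # Log before any Firebase/FCM work, so a hang later in the handler can be
        # told apart from the function never being called.
        logger.info("handler invoked, request id %s, %d ms remaining",
                    context.aws_request_id, context.get_remaining_time_in_millis())
        # ... existing notification logic ...
        return {"statusCode": 200, "body": "ok"}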
I've built an API with API Gateway and Lambda. I've noticed that when it is left idle (usually a few hours), it will fail on the first call. Has anyone else encountered this issue?
Should I implement retries on my API calls, or is there some configuration for Lambda that I am missing?
[INFO] 2019-04-15T03:18:58.263Z SUCCESS: Connection to RDS MySQL instance succeeded
This is the only line that was logged in CloudWatch for my Lambda function.
I have found that AWS Lambda takes longer than usual to invoke a function that has been left idle, due to cold starts.
The error that I received was caused by the Lambda taking longer than the timeout I had defined for my HTTP requests to return a response.
I removed the VPC from my Lambda function, as suggested, to lower the cold start time, and I have not experienced any cold start issues with Lambda since.
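As for retries: retrying on the caller's side (or simply allowing a longer read timeout) also absorbs the occasional slow first call. A minimal sketch, assuming the client is Python and uses the requests library (the URL is a placeholder):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/items"  # placeholder

    session = requests.Session()
    # Retry a couple of times with backoff so a cold start on the first call
    # does not surface as a hard failure to the caller.
    session.mount("https://", HTTPAdapter(
        max_retries=Retry(total=2, backoff_factor=1,
                          status_forcelist=[502, 503, 504])))

    # A generous read timeout gives a cold-starting Lambda time to respond.
    response = session.get(API_URL, timeout=(3, 30))
    response.raise_for_status()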