AWS API Gateway times out after around 30 seconds. I connected API Gateway to a Lambda function that runs far longer than 30 seconds.
So the API response times out after around 30 seconds and gives back something like a timeout response.
How do I solve this problem and get my response back from Lambda?
Thank you.
API Gateway has a maximum integration timeout of 29 seconds (API Gateway Limits), so there is nothing you can do to increase it.
What you could do instead is accept the request, create an ID, and put it in a queue.
Then you send an HTTP 202 (Accepted) response with the request ID back to the client.
A Lambda function can then be triggered asynchronously from the queue to perform the work.
It later persists the result of the query somewhere under the request ID (maybe only for a period of time).
The client can then use the request ID to poll a second API Gateway endpoint for the status, which is able to return the response once it's present.
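A minimal sketch of the accept-and-enqueue step described above (the function names and queue URL are assumptions; with boto3, `sqs_client` would be `boto3.client("sqs")`):

```python
import json
import uuid

def make_accept_handler(sqs_client, queue_url):
    """Build a Lambda-style handler that enqueues the work and replies 202.

    sqs_client: anything exposing send_message(QueueUrl=..., MessageBody=...),
    e.g. boto3.client("sqs"). queue_url: the SQS queue the worker reads from.
    """
    def handler(event, context=None):
        request_id = str(uuid.uuid4())  # the ID the client will poll with later
        sqs_client.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps(
                {"requestId": request_id, "body": event.get("body")}
            ),
        )
        # 202 Accepted: the work is queued, not finished
        return {"statusCode": 202, "body": json.dumps({"requestId": request_id})}

    return handler
```

The worker Lambda consuming the queue would write its result under the same `requestId`, and the status endpoint looks that key up.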
You cannot do anything about the API Gateway timeout; it has a hard limit of 29 seconds.
One solution is to put the Lambda function behind an Application Load Balancer (ALB) instead: an ALB's idle timeout is configurable (default 60 seconds, up to 4,000 seconds), so it does not impose the same hard limit.
Another option is the polling-based approach suggested in the previous answer by @Maurice.
I am using multiple Lambdas in Step Functions. The client hits API Gateway, which invokes a Lambda, and that Lambda invokes the Step Function's Lambdas to get the response.
The response time is 3-7 seconds when lambdas are already warmed up.
But when there is a cold start, it takes more than 30 seconds for some requests, depending on the input.
As API Gateway has a timeout of 30 seconds, I get a "503 Service Unavailable" error for some requests even though the execution is still running, although AWS says it should return a 504 for a request timeout.
I then enabled provisioned concurrency to address the latency, which should also solve the timeout issue. But I still sometimes get the Service Unavailable error.
I checked the CloudWatch traces. Interestingly, each Lambda takes an initialization time of 3 seconds even with provisioned concurrency. Now I doubt whether provisioned concurrency is working or whether I misunderstand it.
Any advice?
We need an endpoint to receive HTTP POST requests and send them to SQS with both headers and payload. API Gateway with the REGIONAL type and SQS integration works great and satisfies our needs. However, we are slightly worried about the limit of 600 requests per second, as it might not be enough for our case. Do we correctly understand that an API Gateway HTTP API (as opposed to a REST API with REGIONAL or EDGE types) can receive 10,000 requests per second, but in that case we would need to "build" our own integration to SQS (e.g. using Lambdas)?
A bit late, but I believe the quota of 600 is for the number of regional APIs per region, not the request rate. That means you can create 600 APIs, each of them able to handle up to 10,000 requests per second. The 10,000 requests per second quota is, however, shared across these APIs, so if you have 100 APIs, each of them can, on average, receive 1,000 requests per second. However, if 99 of them sit idle, the one remaining API can get 10,000 requests per second.
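If you do go the "build your own integration" route from the question, a hedged sketch of a Lambda forwarding both headers and payload to SQS could look like this (the queue URL and event shape are assumptions; note SQS allows at most 10 message attributes per message, so real code should whitelist headers):

```python
import json

def forward_to_sqs(sqs_client, queue_url, event):
    """Forward an API Gateway proxy event's body and headers to SQS.

    Headers travel as SQS message attributes (String type), the payload
    as the message body. sqs_client can be boto3.client("sqs").
    """
    headers = event.get("headers") or {}
    attributes = {
        name: {"DataType": "String", "StringValue": value}
        for name, value in headers.items()
        if value  # SQS rejects empty attribute values
    }
    sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=event.get("body") or "",
        MessageAttributes=attributes,
    )
    return {"statusCode": 200, "body": json.dumps({"queued": True})}
```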
I've got an API Gateway V2 (protocol: HTTP) endpoint; it simply makes a request to my Lambda function and gives me the response. I've noticed that if I make no request for about 10 minutes or so, the next request is WAY slower than the requests afterwards. It's the same function doing the same thing every time, so I'm not sure why this is happening. Has anyone else ever had this and found a solution?
The reason for this is that your Lambda function has to be started before it can handle requests.
This is also called cold start.
Starting a new instance of your Lambda does take some time. Once it is started, it will serve several requests. At some point the AWS Lambda service is going to shut down your Lambda function, for example when there has not been any traffic for a while.
That's where your observation is coming from:
I've noticed that if I make no request for about 10 minutes or so, that on a new request it's going WAY slower than the requests afterwards.
When there are no instances of your Lambda running and a new request comes in, the AWS Lambda service needs to instantiate a "fresh" instance of your Lambda.
You could read this blog post, which touches on this topic:
https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/
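To make the cold start visible in your own logs, you can time whatever runs at module scope (a sketch; the heavy initialization is simulated here with a short sleep):

```python
import time

# Anything at module scope runs once per container: this is the cold start cost.
_init_start = time.time()
time.sleep(0.1)  # stand-in for heavy imports, SDK clients, config loading, ...
INIT_DURATION = time.time() - _init_start

def handler(event, context=None):
    # Warm invocations reuse the container and skip the module-scope work,
    # so INIT_DURATION is paid only on the first request to a new instance.
    return {"statusCode": 200, "body": f"init took {INIT_DURATION:.3f}s"}
```

Moving work out of the handler and into module scope does not remove the cost, but it ensures warm invocations never pay it.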
We are trying to use serverless Websocket feature of API Gateway on AWS platform.
During initial observations, we have seen that the idle connection timeout for such a WebSocket is 10 minutes. We have a requirement to increase this to 30 minutes so that the WebSocket connection does not close.
Is there any setting or alternate way of increasing this default idle time?
If you take a look at the table under API Gateway quotas for configuring and running a WebSocket API in this AWS documentation, you can see that increasing the idle timeout is currently not supported.
My solution was to send a heartbeat (without any data, since we just need some interaction through the WebSocket to let API Gateway know that the connection is not idle) every 5 minutes, and it has been working well.
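A sketch of that heartbeat loop (the `send` callable and interval are placeholders; with the `websockets` library you would pass the connection's `ws.send`):

```python
import asyncio

HEARTBEAT_INTERVAL = 5 * 60  # seconds; safely under the 10-minute idle timeout

async def heartbeat(send, interval=HEARTBEAT_INTERVAL, max_beats=None):
    """Periodically send an empty frame so the connection never counts as idle.

    send: an awaitable callable, e.g. ws.send from the websockets library.
    max_beats: stop after this many beats (None = run forever).
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        await asyncio.sleep(interval)
        await send("")  # no payload needed; any traffic resets the idle timer
        beats += 1
```

Run it as a background task alongside your real message handling, e.g. `asyncio.create_task(heartbeat(ws.send))`.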
I'm trying to warm a Lambda function (inside a VPC, with access to a private RDS instance) with a CloudWatch schedule. The rate is 5 minutes (only for experimenting); I intend to make it 35 minutes later on.
After I saw in the CloudWatch logs that the function had been called (and completed; I set up a condition that returns an API Gateway response immediately if no input was given), I called the function through the API Gateway URL.
However, I'm still getting what looks like a cold start: the first response takes 2 s. If I do it again, I get the response in 200 ms.
So my question is:
What did I do wrong? Can I actually warm a Lambda function with a CloudWatch schedule?
Does dropping the request immediately affect this behaviour? The DB connection is not established when the request comes from CloudWatch.
Thanks!
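For reference, the early-return check described above might look something like this (the event shape for a scheduled trigger is an assumption; scheduled CloudWatch/EventBridge events carry `"source": "aws.events"`):

```python
import json

def handler(event, context=None):
    # Treat scheduled CloudWatch/EventBridge events as warm-up pings and
    # return before touching the database.
    if event.get("source") == "aws.events":
        return {"statusCode": 200, "body": json.dumps({"warmed": True})}

    # ... real work: connect to RDS, run the query, build the response ...
    return {"statusCode": 200, "body": json.dumps({"result": "real response"})}
```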
****EDIT****
I tried connecting to the DB before returning early when the function is called by CloudWatch, but it doesn't change anything: the first request through the API still takes around 2 s and the next ones around 200 ms.
****EDIT 2****
I tried removing the schedule entirely, and the cold start takes 9 s. So I guess the 2 s does not include the cold start itself. Is it possible that the problem lies in another service, such as API Gateway?