Workaround AWS API Gateway timeout with Lambda - asynchronous processing

I have a serverless backend running on Lambda. The runtime usually varies between 40 and 250 s, which is over API Gateway's maximum allowed integration time (29 s). As such, I think my only option is to resort to asynchronous processing. I get the idea behind it, but help online seems sparse and I'd like to know if there are any best practices out there. Or what would be the simplest way for me to get around this timeout problem, whether with asynchronous processing or something else?

It really depends on your use case, but an asynchronous approach is probably the best fit for this scenario, given that it's usually a bad idea for the calling side of your API to wait 250 seconds for a reply (which is probably why API Gateway has the 29 s limit in the first place).
Asynchronous simply means that Lambda replies immediately, saying that it received the request and will work on it, but that the result will only be available later.
You will then also change the logic on the client side to check back after some time, or to poll in a loop until the requested resource is ready.
Depending on the work to be done, you could create an S3 bucket on the fly and reply to the client with an S3 presigned URL. Your worker then uploads its results to the S3 bucket, and the client polls that URL until the result is present.
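The client side of that pattern can be sketched roughly like this; `result_ready` is a hypothetical stub standing in for an HTTP GET against the presigned URL (a 404 would mean the worker hasn't uploaded the result yet):

```python
import time

# Stub standing in for an HTTP GET against the presigned S3 URL; here it
# simply pretends the worker finishes on the 3rd poll.
def result_ready(presigned_url, attempt, _ready_after=3):
    return attempt >= _ready_after

def poll_for_result(presigned_url, interval_s=5, max_attempts=60):
    """Poll the presigned URL until the result object exists."""
    for attempt in range(1, max_attempts + 1):
        if result_ready(presigned_url, attempt):
            return attempt  # in practice: return the downloaded body
        time.sleep(interval_s)
    raise TimeoutError("worker did not finish in time")
```

In practice you would pick the poll interval and attempt budget from your known job duration (40-250 s here), or avoid polling entirely by having the worker push a notification (e.g. via a webhook or WebSocket) when it finishes.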

Related

Overload Lambda Concurrent

I have a use case involving Lambda concurrency.
My system uses API Gateway + Lambda. I requested a concurrency limit increase from AWS and got 10,000, but my system needs to handle about 30,000 concurrent users.
First, I tried asynchronous Lambda integration following this guide: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-integration-async.html
But my application uses a REST API, and the response came back with a 500 status code. The documentation says:
"In this case, the backend Lambda function is invoked asynchronously,
and the front-end REST API method doesn't return the result"
Then I looked into another solution using asynchronous invocation: invoking a Lambda asynchronously from another, synchronous Lambda. But this doesn't work for GET methods that require validation of the result.
Now we are back to the solution that uses provisioned concurrency, but this may not help either; I am concerned the concurrency still won't be enough.
Please help. Sorry about my English.

How to reject the same POST request sent twice in a short gap of time

I am wondering if there is a standard way to reject requests with the same body sent within a few seconds of each other, at the API Gateway level itself.
For example, Reddit rejects my post if I try to submit the same content to a different group within a few seconds. Similarly, if I submit the same credit card payment twice, the second attempt is automatically rejected.
I am wondering if there is a way to get the same behavior in AWS API Gateway itself, so that we don't have to handle it in Lambda functions with DynamoDB and the like.
Looking forward to efficient ways of doing this.
API Gateway currently doesn't offer a feature like that; you'd have to implement it yourself.
If I were to implement this, I'd probably use an in-memory cache like ElastiCache for Redis or Memcached as the storage backend for deduplication.
For each incoming request, I'd determine what makes it unique and compute a hash from that.
Then I'd check whether that hash is already in the cache. If it is, the request is a duplicate and I reject it. If it isn't, I add it to the cache with a time to live of n seconds (the interval within which I want to deduplicate).
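A minimal sketch of that check, with a plain dict standing in for the Redis/Memcached backend (with real Redis, the check-and-set would be a single atomic `SET key 1 NX EX n` instead):

```python
import hashlib
import json
import time

_seen = {}  # stand-in for the cache; maps request hash -> expiry timestamp

def request_fingerprint(method, path, body):
    """Hash the parts of the request that make it 'the same'."""
    payload = json.dumps({"m": method, "p": path, "b": body}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def is_duplicate(method, path, body, ttl_s=5, now=None):
    """True if an identical request was already seen within ttl_s seconds."""
    now = time.time() if now is None else now
    key = request_fingerprint(method, path, body)
    expiry = _seen.get(key)
    if expiry is not None and expiry > now:
        return True  # reject: same request inside the dedup window
    _seen[key] = now + ttl_s
    return False
```

Note that an in-process dict only works within one Lambda container; a shared cache like ElastiCache is what makes the window hold across concurrent instances.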

Fallback for DynamoDB with SQS

We have a synchronous REST endpoint that does other processing in addition to saving an item to DynamoDB for later use.
The requirement is to not return an error if the database save fails for any reason.
How do we handle the case where DynamoDB is down in the entire region (rare, but possible)? Is it the right pattern to publish to SQS and have a separate process consume the messages and save them to DynamoDB once it is reachable again (checking availability via ListTables or a similar ping)?
Should we fall back to another region, or publish to SQS? Is it worth using the resilience4j circuit breaker pattern?
It is a common pattern to have the API simply enqueue the request to SQS. This has many benefits, such as higher throughput, decoupling of producer and consumer, and better fault tolerance.
This would be a fine design, but your REST API will no longer be synchronous and the caller won't quite know whether the operation was successfully processed, so you may need to add another endpoint for querying the status of a request.
I am not super familiar with resilience4j circuit breakers, but they may not be necessary: the AWS SDKs already have built-in retries, if that is the main benefit you are seeking.
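The fallback the question describes might be sketched like this, with simple stubs standing in for DynamoDB and SQS (in practice, boto3's `put_item` and `send_message` calls):

```python
import json

class DynamoDown(Exception):
    """Stand-in for a DynamoDB service/availability exception."""
    pass

def save_item(table, item, dynamo_available=True):
    # Stub for dynamodb.put_item; `table` is a dict here.
    if not dynamo_available:
        raise DynamoDown("region outage")
    table[item["id"]] = item

def save_with_fallback(table, queue, item, dynamo_available=True):
    """Try DynamoDB first; on failure, enqueue to SQS so a separate
    consumer can retry the write later. The caller still gets success."""
    try:
        save_item(table, item, dynamo_available)
        return "saved"
    except DynamoDown:
        queue.append(json.dumps(item))  # SQS: send_message(MessageBody=...)
        return "queued"
```

The trade-off is eventual consistency: items that went through the queue are only readable after the consumer drains it, so any read path that expects the item immediately needs to tolerate the delay.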

Manage idle time in an API request from an AWS Lambda function

I'm trying to build a chatbot on AWS Lambda.
However, 90% of my Lambda duration is lost waiting on requests.
For each interaction a user has with my chatbot, I send approximately 3 requests (1 to Dialogflow and 2 to Messenger). I have to wait until those requests are completed because:
for Dialogflow, I need the answer
for Messenger, I need to make sure the previous message has been sent before sending the next one
Requests take approximately 400 ms each, so for every API call to my Lambda function I "lose" most of the billed duration waiting...
Do you have any hints on how I can avoid waiting 4000 ms each time?
Maybe I should move to a plain EC2 instance instead.
I was initially drawn to Lambda and the stateless model because I thought it would make sense for a chatbot, but the more features I add to my project, the more problems I run into (establishing a database connection takes really long...)
It sounds like you're mostly stuck, but one thing you could try is making as many of the calls as you can in parallel. It sounds like your flow is currently:
Event -> Dialogflow -> Messenger -> Messenger -> Finish
You could try and combine some of these calls and execute them in parallel:
Event -+-> Messenger  --+-> Messenger -> Finish
       +-> Dialogflow --+
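With Python as an example runtime, that parallel flow could look like the sketch below; the two stub functions simulate the Dialogflow and Messenger calls (~400 ms each in the question, shortened here):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stubs simulating the real HTTP calls.
def call_dialogflow(text):
    time.sleep(0.1)
    return "reply for: " + text

def send_messenger(message):
    time.sleep(0.1)
    return "sent"

def handle_event(text):
    with ThreadPoolExecutor() as pool:
        # Fire Dialogflow and the first Messenger message at the same time.
        intent = pool.submit(call_dialogflow, text)
        first = pool.submit(send_messenger, "typing...")
        first.result()           # keep Messenger ordering: 1st before 2nd
        reply = intent.result()  # need Dialogflow's answer for the 2nd message
        return send_messenger(reply)
```

Only the second Messenger call truly depends on Dialogflow's answer, so overlapping the other two shaves roughly one request's latency off each invocation.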
AWS Lambda may not be cost-effective in cases like that.
To optimize the cost you can consider:
Use asynchronous requests as much as possible.
Reduce the Lambda's memory size. This will also make it run slower, so the optimal value is usually found by trial and error; in your case, reducing it to the minimum possible may be the best fit. Check this example.
Batch multiple events into a single invocation and process them asynchronously. For example, in your case you could aggregate interactions from different users using a service such as Kinesis Data Streams or SQS, handle them in the same invocation, and send a separate response for each of them.

Is it possible to make an HTTP request from one Lambda function, and handle the response in another?

AWS Lambda functions are supposed to respond quickly to events. I would like to create a function that fires off a quick request to a slow API, and then terminates without waiting for a response. Later, when a response comes back, I would like a different Lambda function to handle the response. I know this sounds kind of crazy, when you think about what AWS would have to do to hang on to an open connection from one Lambda function and then send the response to another, but this seems to be very much in the spirit of how Lambda was designed to be used.
Ideas:
Send messages to an SQS queue that represent a request to be made. Have some kind of message/HTTP proxy type service on an EC2 / EB cluster listen to the queue and actually make the HTTP requests. It would put response objects on another queue, tagged to identify the associated request, if necessary. This feels like a lot of complexity for something that would be trivial for a traditional service.
Just live with it. Lambda functions are allowed to run for 60 seconds, and the API calls I make don't generally take longer than 10 seconds. Not sure how costly it would be to have LFs spend 95% of their running time waiting on a response, but "waiting" isn't what LFs are for.
Don't use Lambda for anything that interacts with 3rd party APIs that aren't lightning fast :( That is what most of my projects do these days, though.
It depends on how many calls this Lambda will execute monthly and how much memory you allocate to it. The new timeout for Lambda is 5 minutes, which should (hopefully :p) be more than enough for an API to respond. I think you should let Lambda deal with all of it so as not to overcomplicate the workflow. Lambda pricing is generally really cheap.
E.g., a Lambda executed 1 million times with 128 MB allocated, running 10 seconds per invocation, would cost approximately $20 - and that's without considering the potential free tier.
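For reference, the rough arithmetic behind that figure, using the classic ~$0.00001667 per GB-second compute rate and $0.20 per million requests (rates change over time, so check current AWS pricing before relying on this):

```python
invocations = 1_000_000
duration_s = 10
memory_gb = 128 / 1024  # 128 MB = 0.125 GB

gb_seconds = invocations * duration_s * memory_gb  # 1,250,000 GB-s
compute_cost = gb_seconds * 0.00001667             # ~ $20.84
request_cost = (invocations / 1_000_000) * 0.20    # $0.20
total = compute_cost + request_cost
print(round(total, 2))  # prints 21.04
```

Note the cost scales linearly with duration, so time spent idle waiting on a slow API is billed the same as time spent computing; that is exactly why the memory-reduction and batching suggestions above matter.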