How well suited are Lambda functions to hitting a REST API after a fixed amount of time? - amazon-web-services

I am using the distributed scheduler 'Chronos' (a distributed crontab) to hit a REST API a few minutes after a job is added (example: add a job at time T to schedule it at T+5 minutes). Chronos runs on a sizeable infrastructure and takes care of fault tolerance and data loss, but it has a significant cost, and I am looking for an alternative that meets the same requirement. Please advise whether this can be done using a Lambda function.

It's possible to invoke a Lambda function, have it block/wait for X seconds and then continue execution, but it's not recommended. You cannot wait for more than 300 seconds, though, as that is the maximum timeout allowed for Lambda functions.
Moreover, you will hit AWS's concurrent execution limits and will need to keep asking AWS Support to increase them.
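A minimal sketch of that block-and-wait approach (again, not recommended), assuming a placeholder REST endpoint and a delay passed in on the event:

```python
import json
import time
import urllib.request

# Placeholder for the REST endpoint you want to hit after the delay.
TARGET_URL = "https://example.com/api/job-callback"

def handler(event, context):
    # Delay passed in by the caller; capped well below the Lambda timeout.
    delay_seconds = min(event.get("delay_seconds", 60), 280)

    # The function simply sleeps; every second here is billed execution time
    # and holds one concurrent execution slot.
    time.sleep(delay_seconds)

    req = urllib.request.Request(
        TARGET_URL,
        data=json.dumps(event.get("payload", {})).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status}
```

Every pending job ties up one concurrent execution for the whole sleep, which is why this does not scale.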
Another approach to this problem could be to use an actor-based system such as Akka, creating an actor for each job that performs the delayed call.

Related

Timeout in lambda function after 15 minutes

I have written a function which queries data, then processes that data and calls two external APIs. My function works fine if the number of records is 2000, but more than that causes a timeout error after 900 seconds. I have allocated 4 GB to this function.
What else can be done in this case?
If you have a monolithic application that you need to run serverless and requires an execution time greater than 15 minutes, you could consider using ECS instead:
Create a Docker image with your function
Upload the Docker image to ECR
Create an ECS Task Definition to run the container image
Run an ECS task
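Steps 3 and 4 can be scripted with boto3; a rough sketch with placeholder names, ARNs, and networking (steps 1 and 2 are the usual docker build and ECR push):

```python
import boto3

ecs = boto3.client("ecs")

# Step 3: register a Fargate task definition for the image pushed to ECR.
# All names, ARNs, and sizes below are placeholders.
task_def = ecs.register_task_definition(
    family="long-running-function",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="4096",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "long-running-function",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/long-running-function:latest",
            "essential": True,
        }
    ],
)

# Step 4: run the task on Fargate.
ecs.run_task(
    cluster="default",
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```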
Lambda is great and super easy to use, but you have a time limit of 15 minutes that you cannot increase in any way. You also have a limit of 10 GB of memory (CPU is scaled accordingly), so keep this in mind if you are thinking of increasing performance. I had the same issue and I am moving to Fargate, where you can define a task that runs a Docker container uploaded to ECR. You have no timeout, you can have multi-CPU environments, and you can invoke the task with a Lambda. It's a similar approach to what @Paolo described; look here for the differences between the two services.
Looks like the maximum time limit for Lambda is 15 minutes; see AWS Lambda Time limit.
Try to redesign your solution to be more efficient: you can make the two API calls concurrently and use batch gets or parallel scans. Here is a good guide: Best Practices for Querying and Scanning Data.
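For the "make the two API calls concurrently" part, a rough sketch using a thread pool; the endpoint URLs are placeholders:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoints; substitute the two external APIs the function calls.
API_ONE = "https://api-one.example.com/process"
API_TWO = "https://api-two.example.com/process"

def call_api(url, payload):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def process_record(record):
    # Fire both API calls at the same time instead of one after the other.
    with ThreadPoolExecutor(max_workers=2) as pool:
        first = pool.submit(call_api, API_ONE, record)
        second = pool.submit(call_api, API_TWO, record)
        return first.result(), second.result()
```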
You could use the initial lambda execution to trigger other asynchronous lambda calls. You would loop through your 2000 records and for each one trigger another lambda, passing in details about the record to be processed. Each asynchronously-triggered lambda would process just the single record it got sent. That way you essentially process records in parallel instead of in a serial fashion.
These resources explain things a bit more:
https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
With async invocation, your initial Lambda does little more than loop through the records and trigger an async Lambda call for each one. You will need to think about concurrency to ensure you don't get throttled by having too many Lambdas executing concurrently.
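A minimal sketch of that fan-out, assuming a hypothetical worker function name and records passed in on the event:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Placeholder name of the worker Lambda that processes a single record.
WORKER_FUNCTION = "process-single-record"

def handler(event, context):
    # Assume the records to process arrive on the event (or are queried here).
    records = event.get("records", [])
    for record in records:
        # InvocationType="Event" queues the call and returns immediately,
        # so the workers run in parallel instead of serially.
        lambda_client.invoke(
            FunctionName=WORKER_FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"record": record}),
        )
    return {"dispatched": len(records)}
```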

Need to run an AWS Lambda function which takes more than 15 minutes to complete?

My Lambda function has a 15-minute limit, which was 5 minutes earlier. The Lambda process is automatically terminated after 15 minutes, but my process takes longer than that. How can I manage this?
There is no way around this. If you're doing some sort of long-running processing, then your other option may be to run the task on an EC2 instance. If this long-running process can be broken down into multiple steps, then you could look into Step Functions.
15 minutes is the max, and this max cannot be extended.
EDIT:
Recently I started running some long-running tasks that are variable in length (anywhere from a couple of minutes to several hours). To accomplish this I've been using AWS Fargate, and my task is a Node.js script stored as a Docker container in ECR. Doing this was fairly easy and also fairly cheap (I think we spent a little over $1 in a month running this task daily). This may be something worth looking into for others who come across this answer.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/scheduled_tasks.html
Typically you would use a Fat Lambda strategy or a Step Function.
Fat Lambda Strategy
A Fat Lambda strategy is used when your task is singular but has a long-running execution time and/or heavy hardware requirements. The idea is that you create a script that executes your long process and put it into a Docker container that's hosted in Fargate, meaning no limits on execution time and access to powerful hardware (How to create a Fat Lambda: https://youtu.be/XUp9SHIHU8w).
Step Function strategy
A Step Function strategy is used to break down your entire process into smaller steps. Usually, a Step Function strategy would work for you if your process could have lots of miniature stages linked together instead of a big colossal job attempting to do everything simultaneously. Bear in mind that a "Fat Lambda" can also be triggered by a Step Function (How to create a Step Function: https://www.youtube.com/watch?v=s0XFX3WHg0w).
Also, another note, remember lambdas can also trigger other lambdas. So you might even be able to have different lambdas run bits of your lambda code. For example, a FOR loop sends off a lot of mini lambdas to run small tasks. You might not even need a Step Function or a Fat Lambda.
If you're stuck on what to choose, follow the below. It will help you reason with your problem.
Singular Lambda >> Lambda invoke another Lambda? >> Step Functions? >> Fargate (Fat Lambda)?
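If the Step Function route fits, a rough sketch of creating a two-step state machine with boto3; all ARNs and names are placeholders:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Both Lambda ARNs and the IAM role ARN below are placeholders.
definition = {
    "Comment": "Split one colossal job into smaller Lambda-sized steps",
    "StartAt": "StepOne",
    "States": {
        "StepOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-one",
            "Next": "StepTwo",
        },
        "StepTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:step-two",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="long-job-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-execution-role",
)
```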
If you can checkpoint the task, then you can check getRemainingTimeInMillis (docs) and, if time is running out, invoke the same Lambda again with a parameter telling it where to continue.
Something like this flow:
start working (0% done)
time is running low (40% done) => start a new lambda telling it to start from 40%
old lambda is terminated, new lambda starts working (40%)
when its time is running low, start a new lambda again (80%)
the third lambda finishes the job
But it requires a very specific type of task to support this. If you require a single execution from start to finish, then Lambda is not a good choice for this.
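A minimal sketch of that checkpoint-and-reinvoke flow in Python, where context.get_remaining_time_in_millis() is the Python counterpart of getRemainingTimeInMillis and do_one_chunk stands in for a checkpointable slice of the real work:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Resume from the checkpoint passed in by the previous invocation, if any.
    progress = event.get("progress", 0)

    while progress < 100:
        progress = do_one_chunk(progress)

        # Leave a safety margin: if less than ~2 minutes remain, hand off.
        if context.get_remaining_time_in_millis() < 120_000 and progress < 100:
            lambda_client.invoke(
                FunctionName=context.function_name,  # re-invoke this same function
                InvocationType="Event",
                Payload=json.dumps({"progress": progress}),
            )
            return {"status": "handed off", "progress": progress}

    return {"status": "done"}

def do_one_chunk(progress):
    # Placeholder for a checkpointable slice of the real job.
    return progress + 10
```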
What do you think about using a lambda to trigger an ECS task? An ECS task just runs a containerized application for as long as it needs to run.
This blog post is relevant: https://www.gravitywell.co.uk/insights/using-ecs-tasks-on-aws-fargate-to-replace-lambda-functions/.
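A rough sketch of a Lambda handler kicking off such a task, with placeholder cluster, task definition, and networking values:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # All names/ARNs below are placeholders for your own cluster, task
    # definition, and networking configuration.
    ecs.run_task(
        cluster="long-jobs-cluster",
        launchType="FARGATE",
        taskDefinition="long-job-task:1",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {
                    "name": "long-job-container",
                    "environment": [
                        {"name": "JOB_ID", "value": str(event.get("job_id", ""))}
                    ],
                }
            ]
        },
    )
    return {"status": "task started"}
```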
AWS Lambda is meant for quick processing. If your task takes this long, it is better to choose some other way to build that functionality. Although you can define the timeout property for AWS Lambda, it cannot exceed 15 minutes.
For your use case it would be better to deploy your application on EC2 and terminate the instance when processing is done or it has been idle longer than a threshold time.
Refer to the AWS Lambda documentation - https://docs.aws.amazon.com/lambda/index.html
To add to the Step function answer - here's a very simple playbook:
Work for 10 minutes
Write progress to S3
kick off another lambda to consume your progress
terminate
Once you're done, output the result. Voilà: an effectively infinite-runtime Lambda with very little overhead.
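A minimal sketch of that playbook, with a placeholder bucket and key: write progress to S3, kick off the next invocation, terminate.

```python
import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Hypothetical bucket/key used to pass progress between invocations.
BUCKET = "my-progress-bucket"
KEY = "job-progress.json"

def handler(event, context):
    # Load progress left behind by the previous invocation, if any.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        progress = json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        progress = {"done": 0}

    progress = work_for_a_while(progress)  # placeholder for ~10 minutes of work

    # Persist progress so the next invocation can pick it up.
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=json.dumps(progress))

    if progress["done"] < 100:
        # Kick off another lambda to consume the progress, then terminate.
        lambda_client.invoke(
            FunctionName=context.function_name,
            InvocationType="Event",
            Payload=json.dumps({}),
        )
    return progress

def work_for_a_while(progress):
    # Placeholder for one chunk of the real job.
    progress["done"] += 20
    return progress
```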
No, you cannot run a lambda for more than 15 mins!
But yes, you can manage this using signals.
Basically, this lets you start plan B when plan A cannot finish within 15 minutes. If you can decouple the tasks in your process and add checkpoints, then in plan B the next Lambda invocation can pick up from the checkpoint, or plan B can create entries in a database for the unprocessed parts and reprocess them as part of another run.
Framework here -
https://gist.github.com/kuharan/c2bfddac7bd8dc5702f6eec31729fb48

Lambda call speed changes

I've created a simple lambda that reads data from dynamodb.
First time I call the lambda it takes about 1500ms to complete, but then after I run the lambda again it takes about 150ms. How is it possible?
What type of response caching does AWS perform to achieve this?
On your first call, AWS Lambda provisions infrastructure, which takes time; AWS also needs to start a JVM with the code before it can call the function. Starting the JVM takes time and thus incurs some overhead.
Another issue is the cold start, when there is no idle container available waiting to run the code. This is all invisible to the user, and AWS has full control over when to kill containers.
All of the above happens during the first call, which is why you see ~1500 ms.
On the next call everything is already in place, so Lambda responds in 150 ms or less.
This is by design in serverless, to save infrastructure cost: infrastructure is only provisioned when needed, starting with the first call.
I would suggest reading the documentation:
- https://aws.amazon.com/lambda/
This happens due to a cold start. It occurs mainly when we invoke the Lambda for the first time after deployment or when a Lambda function has been idle for some time.
These articles explain how the language, memory, and package size of the Lambda affect the cold start:
https://read.acloud.guru/does-coding-language-memory-or-package-size-affect-cold-starts-of-aws-lambda-a15e26d12c76
https://mikhail.io/serverless/coldstarts/aws/
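As an illustration of the container reuse behind this: anything defined at module level (clients, connections) runs only during the cold start and is reused on warm invocations. A minimal sketch with a placeholder table name:

```python
import boto3

# Code at module level runs once per container, during the cold start.
# On warm invocations the same client and table handle are reused,
# which is part of why later calls are much faster.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")  # placeholder table name

def handler(event, context):
    # Only the code inside the handler runs on every invocation.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item")
```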

Warming Lambda via API Gateway

I have some Lambda functions that are utilized with API Gateway. I'm attempting to find a good warming strategy to help minimize cold starts. Currently, I'm invoking these functions from another lambda every 2 minutes. I can see in logs that this is correctly invoking the functions.
Consistently, however, if I wait 15 or 30 minutes and hit the API, it appears to be a cold start (1-3 s response time); the typical response is around 100 ms ± 50 ms.
I understand that this warming method warms a single concurrent instance, and that to warm effectively at higher concurrency I would need to invoke them as such. But this is not an active API, just my own invokes/requests, so concurrency issues shouldn't be coming into play.
My question is: when using API Gateway, does invoking your Lambdas outside of API Gateway actually "warm" them? It just doesn't seem to in practice, or perhaps 2-minute increments aren't enough? Would I be better off actually hitting the endpoints themselves vs. invoking the Lambdas?
I'd appreciate any suggestions; it just seems that warming every 2 minutes using invoke isn't really doing anything.
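For reference, the kind of warmer described above might look roughly like the sketch below (function names are placeholders); the target handlers would need to short-circuit on the warm-up payload:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Placeholder names of the API-backing functions to keep warm.
TARGET_FUNCTIONS = ["api-get-users", "api-create-order"]

def handler(event, context):
    for name in TARGET_FUNCTIONS:
        # A synchronous invoke with a marker payload; the target handler can
        # detect {"warmup": true} and return immediately without doing work.
        lambda_client.invoke(
            FunctionName=name,
            InvocationType="RequestResponse",
            Payload=json.dumps({"warmup": True}),
        )
    return {"warmed": len(TARGET_FUNCTIONS)}
```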

Is it possible to detect an AWS account is nearing the Lambda concurrency limit?

Lambda has some concurrency limits that when hit, cause subsequent invocations to get throttled.
This makes sense, but is it possible to detect this situation ahead of time and start applying backpressure?
The problem is that (according to the docs) the concurrency limit is per-account, which means a single runaway microservice can block ALL unrelated services.
For example: a lambda fn with an s3 event source could easily lead to API Gateway handlers being throttled and unhappy API users.
Is there any QoS for lambda functions? It'd be great to be able to give public-facing functions priority. (I know the answer is no, but I wish there were.)
Short of that, is it possible to detect that you're nearing this concurrency limit and build backpressure in?
I'm not seeing anything, and the only solution I can think of at this moment is to create a metric that watches for Throttles and as soon as one happens, toggle some flag somewhere? This adds significant complexity though...
Any ideas?
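One possible hand-rolled way to detect that you're approaching the limit, along the lines hinted at above: compare the account limit from Lambda's GetAccountSettings API with the recent account-wide ConcurrentExecutions metric in CloudWatch, and flip your backpressure flag when usage crosses a threshold. The 80% threshold and the polling approach are assumptions, not an AWS-provided QoS mechanism:

```python
import datetime
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# Assumed threshold: start applying backpressure at 80% of the account limit.
HEADROOM_THRESHOLD = 0.8

def near_concurrency_limit():
    # Account-wide concurrent execution limit.
    settings = lambda_client.get_account_settings()
    limit = settings["AccountLimit"]["ConcurrentExecutions"]

    # Peak concurrent executions over the last few minutes, account-wide.
    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="ConcurrentExecutions",
        StartTime=now - datetime.timedelta(minutes=5),
        EndTime=now,
        Period=60,
        Statistics=["Maximum"],
    )
    datapoints = stats.get("Datapoints", [])
    current = max((dp["Maximum"] for dp in datapoints), default=0)

    return current >= HEADROOM_THRESHOLD * limit
```

A scheduled function could run this check and set the flag somewhere shared (parameter store, cache, etc.), which is essentially the metric-watching idea from the question, just ahead of the first throttle rather than after it.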