Is there a way to set a maximum running time for AWS Batch jobs (or queues)? This is a standard setting in most batch managers, which avoids wasting resources when a job hangs for whatever reason.
As of April 2018, AWS Batch supports setting a job timeout when submitting a job, or in the job definition.
https://aws.amazon.com/about-aws/whats-new/2018/04/aws-batch-adds-support-for-automatic-termination-with-job-execution-timeout/
You specify an attemptDurationSeconds parameter, which must be at least 60 seconds, either in your job definition, or when you submit the job. When this number of seconds has passed following the job attempt's startedAt timestamp, AWS Batch terminates the job. On the compute resource, your job's container receives a SIGTERM signal to give your application a chance to shut down gracefully; if the container is still running after 30 seconds, a SIGKILL signal is sent to forcefully shut down the container.
Source: https://docs.aws.amazon.com/batch/latest/userguide/job_timeouts.html
POST /v1/submitjob HTTP/1.1
Content-type: application/json

{
   ...
   "timeout": {
      "attemptDurationSeconds": number
   }
}
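For reference, the same request can be made through the AWS SDK; here is a minimal boto3 sketch, where the job name, queue, and definition are placeholders:

import boto3

batch = boto3.client("batch")

# Submit a job that AWS Batch will terminate after one hour.
response = batch.submit_job(
    jobName="my-job",                    # placeholder
    jobQueue="my-queue",                 # placeholder
    jobDefinition="my-job-definition",   # placeholder
    timeout={"attemptDurationSeconds": 3600},  # must be at least 60
)
print(response["jobId"])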
AFAIK there is no feature to do this. However, a workaround was suggested in the forum for a similar question.
One idea is to call Batch as an Activity from Step Functions, and ping back on a schedule (e.g. every minute) from that job. If it stops responding, you can detect that situation as a timeout in the activity and act accordingly (terminate the job, etc.). Not an ideal solution (especially if the job continues to ping back as a "zombie"), but it's a start. You'd also likely have to store activity tokens in a database to trace them to the Batch job ID.
Alternatively, you can split that setup into two steps: schedule a Batch job from a Lambda in the first state, then pass the Batch job ID to the second step, which polls Batch (from another Lambda) for its state, with Retry and IntervalSeconds (e.g. once every minute, or even with exponential backoff) and MaxAttempts calculated based on your timeout. This way, you don't need any external state storage mechanism, long polling, or even a "ping back" from the job (it CAN be a zombie), but the downside is more steps.
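A rough sketch of that polling Lambda in Python; the state machine's Retry configuration re-runs it until the job reaches a terminal status or MaxAttempts is exhausted (the event shape here is illustrative):

import boto3

batch = boto3.client("batch")

def handler(event, context):
    # event["jobId"] is passed along from the state that submitted the job.
    job = batch.describe_jobs(jobs=[event["jobId"]])["jobs"][0]
    status = job["status"]  # SUBMITTED, RUNNABLE, RUNNING, SUCCEEDED, FAILED, ...
    if status in ("SUCCEEDED", "FAILED"):
        return {"jobId": event["jobId"], "status": status}
    # Raising makes the Step Functions Retry (IntervalSeconds / MaxAttempts)
    # configuration poll again; once MaxAttempts is exceeded, a Catch branch
    # can call terminate_job to enforce the timeout.
    raise Exception("JobStillRunning")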
There is no option to set a timeout on a Batch job, but you can set up a Lambda function that triggers every hour or so and terminates jobs created more than, say, 24 hours earlier.
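A minimal sketch of such a scheduled Lambda with boto3, assuming a single queue name and a 24-hour cutoff (Batch reports createdAt in milliseconds since the epoch; pagination is omitted for brevity):

import time
import boto3

batch = boto3.client("batch")
CUTOFF_MS = 24 * 60 * 60 * 1000  # 24 hours

def handler(event, context):
    now_ms = int(time.time() * 1000)
    for status in ("RUNNABLE", "STARTING", "RUNNING"):
        summaries = batch.list_jobs(jobQueue="my-queue", jobStatus=status)
        for job in summaries["jobSummaryList"]:
            if now_ms - job["createdAt"] > CUTOFF_MS:
                batch.terminate_job(jobId=job["jobId"], reason="Exceeded 24h limit")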
I've been working with AWS for some time now and could not find a way to set a maximum running time for Batch jobs.
However, there are some alternative ways you could utilize.
AWS Forum
Sadly, there is no way to set an execution time limit on AWS Batch.
One solution may be to edit the Docker image's entry point so that it enforces the execution time limit itself, as sketched below.
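A sketch of such an entry point in Python, where job.py and the one-hour limit are placeholders:

import subprocess
import sys

# Entry point wrapper: run the actual workload, but kill it after 3600 seconds.
try:
    subprocess.run(["python", "job.py"], timeout=3600, check=True)
except subprocess.TimeoutExpired:
    print("Job exceeded the time limit, exiting.", file=sys.stderr)
    sys.exit(1)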
I have written a function which queries data, then processes that data and calls two external APIs. My function works fine if the number of records is 2,000, but more than that causes a timeout error after 900 seconds. I have allocated 4 GB for this function.
What else can be done in this case?
If you have a monolithic application that you need to run serverless and requires an execution time greater than 15 minutes, you could consider using ECS instead:
Create a Docker image with your function
Upload the Docker image to ECR
Create an ECS Task Definition to run the container image
Run an ECS task
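The last step can be scripted; a minimal boto3 sketch, assuming a Fargate launch type (cluster, task definition, and subnet are placeholders):

import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task-definition",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-12345678"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])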
Lambda is great and super easy to use, but you have a time limit of 15 minutes that you cannot increase in any way. You also have a limit of 10 GB of memory (CPU is scaled accordingly), so if you are thinking of increasing performance, keep this in mind. I had the same issue and I am moving to Fargate, where you can define a task which runs a Docker container uploaded to ECR. You have no timeout, you can have multi-CPU environments, and you can invoke the task with a Lambda. It's a similar approach to what @Paolo described; look here for differences between the two services.
It looks like the maximum time limit for Lambda is 15 minutes; see AWS Lambda Time limit.
Try to redesign your solution to be more efficient. You can make the two API calls concurrent and use BatchGetItem or parallel scans; here is a good guide: Best Practices for Querying and Scanning Data.
You could use the initial lambda execution to trigger other asynchronous lambda calls. You would loop through your 2000 records and for each one trigger another lambda, passing in details about the record to be processed. Each asynchronously-triggered lambda would process just the single record it got sent. That way you essentially process records in parallel instead of in a serial fashion.
These resources explain things a bit more:
https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
With async invocation, your initial Lambda does little more than loop through records and trigger async Lambda calls for each record. You will need to think about concurrency to ensure you don't get throttled by having too many Lambdas executing concurrently.
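A sketch of that fan-out, assuming a separate worker function named process-record that handles one record per invocation (fetch_records stands in for your existing query):

import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    records = fetch_records()  # placeholder for your existing query (~2000 records)
    for record in records:
        # InvocationType="Event" returns immediately (fire and forget),
        # so all records are processed in parallel by the worker function.
        lambda_client.invoke(
            FunctionName="process-record",
            InvocationType="Event",
            Payload=json.dumps(record),
        )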
My Lambda function has a limit of 15 minutes, which was 5 minutes earlier. The Lambda process is automatically terminated after 15 minutes, but my process takes more than 15 minutes. How can I manage this?
There is no way around this. If you're doing some sort of long-running processing, then your other option may be to run this task on an EC2 instance. If this long-running process can be broken down into multiple steps, then you could look into Lambda Step Functions.
15 minutes is the max, and this max cannot be extended.
EDIT:
Recently I started running some long-running tasks that are variable in length (anywhere from a couple of minutes to several hours). To accomplish this I've been using AWS Fargate, and my task is a Node.js script that is stored as a Docker container in ECR. Doing this was fairly easy and also fairly cheap (I think we spent a little over $1 in a month running this task daily). This may be worth looking into for others who come across this answer.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/scheduled_tasks.html
Typically you would use a Fat Lambda strategy or a Step Function strategy.
Fat Lambda Strategy
A Fat Lambda strategy is used when your task is singular but has a long-running execution time and/or heavy hardware requirements. The idea is that you create a script that executes your long process and put it into a Docker container hosted in Fargate. That means no limits on execution time and access to powerful hardware (How to create a Fat Lambda: https://youtu.be/XUp9SHIHU8w).
Step Function Strategy
A Step Function strategy is used to break down your entire process into smaller steps. Usually, a Step Function strategy works well if your process has lots of miniature stages linked together instead of one colossal job attempting to do everything simultaneously. Bear in mind that a "Fat Lambda" can also be triggered by a Step Function (How to create a Step Function: https://www.youtube.com/watch?v=s0XFX3WHg0w).
Also, another note: remember that Lambdas can trigger other Lambdas. So you might even be able to have different Lambdas run bits of your Lambda code; for example, a FOR loop sends off a lot of mini Lambdas to run small tasks. You might not even need a Step Function or a Fat Lambda.
If you're stuck on what to choose, follow the flow below. It will help you reason about your problem.
Singular Lambda >> Lambda invoke another Lambda? >> Step Functions? >> Fargate (Fat Lambda)?
If you can checkpoint the task, then you can check getRemainingTimeInMillis (docs), and if time is running out, invoke the same Lambda with a parameter telling it where to continue.
Something like this flow:
start working (0% done)
time is running low (40% done) => start a new lambda telling it to start from 40%
old lambda is terminated, new lambda starts working (40%)
when its time is running low, start a new lambda again (80%)
the third lambda finishes the job
But it requires a very specific type of task to support this. If you require a single execution from start to finish, then Lambda is not a good choice for this.
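A minimal sketch of that pattern; do_chunk stands in for one slice of your processing, and the Python equivalent of getRemainingTimeInMillis is context.get_remaining_time_in_millis():

import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    progress = event.get("progress", 0)  # percent done, carried between invocations
    while progress < 100:
        # Keep a safety margin: hand off when under ~2 minutes remain.
        if context.get_remaining_time_in_millis() < 120_000:
            lambda_client.invoke(
                FunctionName=context.function_name,  # re-invoke this same Lambda
                InvocationType="Event",
                Payload=json.dumps({"progress": progress}),
            )
            return
        progress = do_chunk(progress)  # placeholder: process one slice of work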
What do you think about using a lambda to trigger an ECS task? An ECS task just runs a containerized application for as long as it needs to run.
This blog post is relevant: https://www.gravitywell.co.uk/insights/using-ecs-tasks-on-aws-fargate-to-replace-lambda-functions/.
AWS Lambda is meant to be used for quick processing. If your task is this long, then it is better to choose some other way to implement that functionality. Although you can define the timeout property for AWS Lambda, it cannot exceed 15 minutes.
For your use case it is better to deploy your application on EC2, and then terminate the EC2 instance when the processing is done or when it has been idle for longer than a threshold time.
Refer to the AWS Lambda documentation: https://docs.aws.amazon.com/lambda/index.html
To add to the Step Function answer, here's a very simple playbook:
Work for 10 minutes
Write progress to S3
Kick off another Lambda to consume your progress
Terminate
Once you're done, output. Voilà: an infinite-runtime Lambda with very little effective overhead.
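Writing the checkpoint in step 2 could be as simple as the following sketch (bucket and key are placeholders):

import json
import boto3

s3 = boto3.client("s3")

# Persist progress so the next invocation can pick up where this one left off.
s3.put_object(
    Bucket="my-progress-bucket",
    Key="jobs/job-123/progress.json",
    Body=json.dumps({"done_until": 40}),
)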
No, you cannot run a Lambda for more than 15 minutes!
But yes, you can manage this using signals.
Basically, this lets you start plan B when plan A is not enough within the 15 minutes. If you can decouple the tasks in your process and add checkpoints, then the next Lambda invocation can pick up from the checkpoint in plan B, or plan B can create entries in a DB for the unprocessed parts and reprocess them as part of another run.
Framework here -
https://gist.github.com/kuharan/c2bfddac7bd8dc5702f6eec31729fb48
I have a use case where I read times from the DB every 30 minutes, and if a task is due to be executed within the next 30 minutes, I put it in AWS SQS.
For example, I run a cron every 30 minutes with a Lambda that reads schedule_at from the DB, finds tasks which need to execute in the next 30 minutes, and puts them in an AWS SQS queue.
Say the cron run time is 11:30 and a task is scheduled at 11:16 (schedule_at differs for every task); I want to add it to the queue but only execute it at its schedule_at time.
Here I want to set the time at which the message executes, or becomes visible, so that at the schedule_at time it triggers another Lambda to deal with the business logic.
I am not sure which attributes of AWS SQS can solve this. Can anyone help me with this?
You can delay message visibility for up to 15 minutes.
Since you don't specify what SDK you're using, here's the API doc; look at DelaySeconds. And note that it only works for a "standard" SQS queue, not a FIFO queue.
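For example, with boto3 (the queue URL is a placeholder):

import boto3

sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    MessageBody='{"task_id": 42}',
    DelaySeconds=840,  # hide the message for 14 minutes (max is 900 seconds)
)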
I'm a developer at a startup, and right now we are using around 30 cron jobs; some of them run every minute, others run once per day, while others run on specific days. The problem is the ones that run every minute, since most of the time they are not necessary.
This somewhat increases our expenses, because during the night they still run, even though most of the time our services have nobody online (and they don't need to run).
We have been talking about using AWS to replace those cron jobs with something event-based, yet I cannot find a solution. Here's an example of one of our cron jobs:
A customer starts a registration and has 8 minutes to complete it. Right now, we have a cron job that runs every minute to check whether he has completed it, and if not, to "delete" it.
I thought I could replace this with an SNS + Lambda event: basically, when a user starts registration, send a message to SNS, which would trigger a Lambda function. Yet, it should only run after 8 minutes, not instantly.
I've seen that SNS can delay up to 15 minutes, but we have another service that sends an email after a few hours, for which that would not work.
Anyone have a clue on how I can do this?
Thanks
You can use AWS Step Functions to implement the workflow, with a Wait state that delays before invoking the Lambda function.
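A sketch of such a state machine, created here with boto3; the role ARN and Lambda ARN are placeholders, and the Wait state covers the 8-minute registration window:

import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "WaitEightMinutes",
    "States": {
        # Wait out the registration window, then check whether it completed.
        "WaitEightMinutes": {"Type": "Wait", "Seconds": 480, "Next": "CheckRegistration"},
        "CheckRegistration": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-registration",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="registration-check",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-role",
)

Each new registration would then call start_execution on this state machine, passing the user's ID as input so the Lambda knows which registration to validate.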
After starting a new batch geocoding job (step one here), does the amount of time it takes to get a response (step two here) depend on the number of individual geocode requests? (i.e. does it take longer to get a response for 10,000 locations vs. 10 locations?)
On a similar note, what are the different possible statuses that can be returned in the response? (for instance, "accepted" in step two here)
I tried looking for these answers in the HERE batch geocoding documentation, but couldn't find anything.
The HERE API FAQ page directed me here for any technical support.
Although there is a bigger infrastructure in the backend of the Batch Geocoder (BGC) and job items are processed in parallel to some degree, not all items of a batch job can be processed in parallel at the same time. So yes, it takes longer for bigger jobs.
A batch job can be in one of the following states:
submitted – The batch job was submitted to the batch system and is ready to be started. It can be started by the user by sending an HTTP PUT "action=run" request.
accepted – The batch job has been verified for correctness and validity and has been added to the queue, waiting to be scheduled for execution.
running – The job is being processed now.
complete – Job processing has completed.
cancelled – The job has been cancelled by the user with an HTTP PUT "action=cancel" request.
deleted – The job was deleted by the user with an HTTP DELETE request.
failed – The job failed while running. This is unusual and caused by an internal error. You can try to restart the job with a PUT request and action=run, or delete the job.
State transition graph (diagram omitted).
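If you want to poll for these states programmatically, here is a sketch with the Python requests library; the endpoint shape and query parameters are assumptions based on the v6.2 Batch Geocoder API, so check them against the current HERE documentation:

import requests

# Assumed endpoint shape for the HERE Batch Geocoder v6.2 API;
# job_id and the API key are placeholders.
job_id = "abc123"
resp = requests.get(
    f"https://batch.geocoder.ls.hereapi.com/6.2/jobs/{job_id}",
    params={"action": "status", "apiKey": "YOUR_API_KEY"},
)
resp.raise_for_status()
print(resp.text)  # the response body includes the job's current status, e.g. "running"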