Cloud Tasks deadline longer than 30 sec - google-cloud-platform

I use Cloud Tasks without a problem, but I have some tasks that run longer than the 30 s they get by default. Is it possible to extend this? Some of my tasks are quite long.

When you create a task, you can specify the dispatchDeadline parameter. It sets the timeout for the task.
By default it's 10 minutes, so I'm surprised by your 30 seconds. If you need more help, share pieces of your code!
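For illustration, with the google-cloud-tasks Python client it might look roughly like this (a minimal sketch; the project, location, queue, and handler URL are placeholders):

from google.cloud import tasks_v2
from google.protobuf import duration_pb2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

# dispatchDeadline controls how long Cloud Tasks waits for the handler
# to respond before the attempt is considered failed; raise it to 15 minutes.
task = tasks_v2.Task(
    http_request=tasks_v2.HttpRequest(
        http_method=tasks_v2.HttpMethod.POST,
        url="https://example.com/handle-task",
    ),
    dispatch_deadline=duration_pb2.Duration(seconds=15 * 60),
)

response = client.create_task(parent=parent, task=task)
print("Created task:", response.name)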

Related

Why does AWS Glue take so long to start execution?

Once execution starts, it completes in just a few seconds, but the pending phase itself takes 10 to 15 minutes. I understand that it's setting up the environment for running the job, but in my case I need to run this job (transforming JSON) every 15 minutes. Will this work out, or is there another option? Am I missing any configuration?

AWS Lambda function timeout

I have 2 microservices. Each microservice's Lambda function timeout is set to 15 minutes, but I see a timeout after 5 minutes when I monitor the logs on logz.io. Any idea why this is the case? I increased the limit from 5 minutes to 15 minutes, but it looks like this has no effect. Please help!
There is a chance that your function simply completes within 5 minutes. Try adding a sleep of 10*60*1000 ms (i.e. 10 minutes) to the code and check the total execution time.
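A minimal sketch of that test, assuming a Python runtime (the handler name is a placeholder for whatever your function is configured with):

import time

def handler(event, context):
    # Sleep for 10 minutes so the real timeout shows up in the logs.
    time.sleep(10 * 60)
    return {"statusCode": 200, "body": "finished after 10 minutes"}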
Historically, all calls made to AWS Lambda had to complete execution within 300 seconds. The default timeout is 3 seconds, and the maximum has since been raised to 900 seconds (15 minutes).
It seems like your function is not picking up the timeout you set.
https://lumigo.io/blog/aws-lambda-timeout-best-practices/
Make the change in the serverless.yaml file by adding "timeout" as one of the parameters and setting its value. With this, every new deployment will keep this value as the timeout.
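For example, a fragment along these lines (a sketch; the function and handler names are placeholders, and the value is in seconds):

functions:
  myMicroservice:
    handler: handler.main
    timeout: 900  # 15 minutes, the current Lambda maximum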

Do Google Cloud background functions have max timeout?

We have been using Google Cloud Functions with http-triggers, but ran into the limitation of a maximum timeout of 540 s.
Our jobs are background jobs, typically data pipelines, with processing times often longer than 9 minutes.
Do background functions have this limit, too? It is not clear to me from the documentation.
All functions have a maximum configurable timeout of 540 seconds.
If you need something to run longer than that, consider delegating that work to run on another product, such as Compute Engine or App Engine.
2nd generation Cloud Functions that are triggered by HTTPS can have a maximum timeout of 1 hour instead of the 10-minute limit.
See also: https://cloud.google.com/functions/docs/2nd-gen/overview
You can then trigger this 2nd gen Cloud Function with, for example, Cloud Scheduler.
When creating the job in Cloud Scheduler, you can set the 'Attempt deadline' config to up to 30 minutes. This is the deadline for a job attempt; if the attempt exceeds it, it is cancelled and considered a failed job.
See also: https://cloud.google.com/scheduler/docs/reference/rest/v1/projects.locations.jobs#Job
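In gcloud terms, the two pieces above might look roughly like this (a sketch; the function name, region, runtime, schedule, and URL are placeholders, and flags may vary by gcloud version):

# Deploy a 2nd gen HTTP function with a 1-hour timeout.
gcloud functions deploy my-long-job --gen2 --region=us-central1 \
    --runtime=nodejs18 --trigger-http --timeout=3600s

# Trigger it hourly; the attempt deadline is capped at 30 minutes for HTTP jobs.
gcloud scheduler jobs create http my-long-job-trigger \
    --schedule="0 * * * *" \
    --uri="https://us-central1-my-project.cloudfunctions.net/my-long-job" \
    --attempt-deadline=1800s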
The maximum run time of 540 seconds applies to all Cloud Functions, no matter how they're triggered. If you want to run something longer you will have to either chop it into multiple parts, or run it on a different platform.

Google Cloud Function Timeout Setting doesn't work

I can't get a Google Cloud Function to run for more than 60 seconds, even when the timeout is set to 540 seconds! Any suggestions?
I set the timeout flag on deployment to --timeout=540, and I know the setting goes through, because the 540 second timeout appears in the GCP web UI. I have also tried manually editing the timeout to 540 through the GCP web UI. But in any case I still get DEADLINE_EXCEEDED after just ~62,000 ms.
I have tried both the pub/sub and HTTPS methods as the function trigger, but still get the premature function timeout at ~60 s.
I'm running the latest CLI, with these function settings:
trigger: http/pubsub (both tested, same result)
availableMemoryMb: 2048
runtime: nodejs6
status: ACTIVE
timeout: 540s
Thanks for any inputs!
Br Markus
I used the delay code from the documentation and executed a Cloud Function with the same specifications as yours. In the documentation, the execution is delayed by 120,000 ms (2 minutes). I edited that and set it to 500,000 ms. This, plus the normal time the CF takes to execute, gets close to the desired execution time (around 9 minutes). If you set 540,000 ms to test the code, it will fail with a timeout error at ~540,025 ms, because the delay itself exceeds the function's configured timeout, which is also the maximum timeout of a Cloud Function: 9 minutes.
I also tried creating the function using this command:
gcloud functions deploy [FUNCTION_NAME] --trigger-http --timeout=540.
After successful deployment, I updated the code manually in the GCP Cloud Functions UI as follows:
exports.timeoutTest = (req, res) => {
  // Respond only after 500,000 ms (~8.3 minutes) to exercise the timeout.
  setTimeout(() => {
    let message = req.query.message || req.body.message || 'Hello World today!';
    res.status(200).send(message); // send() also ends the response
  }, 500000);
};
Both times the Cloud Function executed and returned with status code 200. This means that you can set a timeout of more than 60 seconds, which is the default value.
If you have reviewed everything correctly and still have this issue, I recommend starting afresh: create a new CF and use the documentation link I provided.
The 60-second timeout may not be coming from the GCP Cloud Function setting at all. For instance, if this is a Django/Gunicorn app, the timeout comes from the gunicorn timeout set in app.yaml:
entrypoint: gunicorn -t 3600 -b :$PORT project_name.wsgi
This sets a timeout of 3600 seconds for gunicorn.
I believe I'm some years late, but here is my suggestion.
If you're using the "Test the function" button in the "Testing" tab of the Cloud Function (in the GCP Cloud Console), it says right next to the button that:
Testing in the Cloud Console has a 60s timeout. Note that this is different from the limit set in the function configuration.
I hope you fixed it and this answer can help someone in the future.
Update: the second try ("Test the function") took precisely 9 minutes
From: 23:15:38
Till: 23:24:38
And that is exactly the 9 minutes, although the message again mentioned only 60 seconds and popped up much earlier than the actual stop.
Function execution took 540004 ms, finished with status: 'timeout'
This time, with plenty of memory (2 GB), the timeout is clearly what made it stop. The message perhaps just pops up early because it has not been programmed in detail; that is my guess. You should always look at the logs to see what is actually happening.
I guess the core of your question is outdated then: at least as of 01/2022, you do get the timeout you asked for regardless of what the messages may say, and you should simply not worry about them.
First try ("Test the function") 8 minutes after reached memory limit
A screenshot of how it looks like in 2022/01 if you get over the 60 seconds (with 540s maximum timeout for this example function set in the "Edit" menu of the CF):
Function being tested has exceeded the 60s timeout imposed by the Cloud Functions testing utility.
Yet in reality, when using just the "Testing" tab, the timeout happens only after at least 300 s / 5 minutes, as can be seen next to the "Test the function" button:
Testing in the Cloud Console has a 5 minute timeout. Note that this is different from the limit set in the function configuration.
But it is even more than that. I know from testing (started from the "Testing" tab -> "Test the function" in the Cloud Function) that you have at least 8 minutes:
From: 22:31:43
Till: 22:39:53
And this run was stopped first by the 256 MB memory limit, and only secondly by time (it is a bit unclear why both messages appeared).
Therefore, the question of why you get only a 60-second timeout might rather be why these messages are wrong (as in my case). Perhaps GCP did not make the effort to parameterize the messages for each function.
Perhaps you get even slightly more time when you start with gcloud from the terminal, but that is not very likely, since 9 minutes is the maximum anyway.

Django Celery application with timer

I am having a problem with my Django Celery application. Let's assume that I have two related models - Transaction and Account. I am pushing transactions over my API. What I want to achieve is to calculate the balance for a specified account.
I have a Celery task which calculates the balance. The problem is that I need a timer per account, set to e.g. 60 seconds. When a transaction comes in for the same account, the timer is reset to 60 again. I want it this way because I don't want to run the same task many times. Only when no transactions have come in for a given account for 60 seconds should the task be executed.
Any architectural suggestions on how to achieve this? In fact, I simply have no idea how to set up these "timers".
Thanks for answers!
You can follow the approach of django-celery-transactions. It subclasses the Task class so that the execution logic can be customized.
For your case, you could customize apply_async to check whether there is already a queued task to be executed during the following 60 seconds (for this you can use the Celery API for inspecting workers). If a scheduled task already exists, you can skip executing the current one; if not, set an execution time 60 seconds in the future and pass it to the super().apply_async(...) call.
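A rough sketch of that skip-if-pending idea, assuming a calculate_balance task keyed by account id; the names here are illustrative, and the exact shape of the inspect() output depends on your Celery version and serializer:

from celery import Celery, Task

app = Celery("transactions", broker="redis://localhost:6379/0")

class DebouncedTask(Task):
    def apply_async(self, args=None, kwargs=None, **options):
        account_id = (args or [None])[0]
        # Ask the workers whether a run for this account is already scheduled.
        scheduled = self.app.control.inspect().scheduled() or {}
        for entries in scheduled.values():
            for entry in entries:
                if entry.get("request", {}).get("args") == [account_id]:
                    return None  # a run is already pending; skip this one
        # No pending run: schedule the balance calculation 60 seconds from now.
        options.setdefault("countdown", 60)
        return super().apply_async(args=args, kwargs=kwargs, **options)

@app.task(base=DebouncedTask)
def calculate_balance(account_id):
    # Sum the account's transactions and store the result on the Account.
    pass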