How to stop a running GCP Cloud Task? - google-cloud-platform

Is there some way to stop a running Cloud Task? I accidentally started a task which is processing a lot of data and don't see a way to stop it.

At the top of the queue's page in the Cloud Tasks console you'll see two choices: Pause Queue and Delete Queue.

According to the official docs, you cannot pause a task itself, only the queue. If you pause the queue, that should pause the task as well; alternatively, you can just delete the task.
And, as noted in a comment on the other answer, this is all managed through the UI; the command line is still in Alpha.
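If you would rather script it than click through the console, the same operations are exposed by the Python client library; here is a minimal sketch (project, location, queue and task IDs are placeholders):

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

# Pause the whole queue so no further tasks are dispatched.
queue_path = client.queue_path("my-project", "us-central1", "my-queue")
client.pause_queue(name=queue_path)

# Or delete just the task that was started by accident.
task_path = client.task_path("my-project", "us-central1", "my-queue", "my-task-id")
client.delete_task(name=task_path)

Note that, as the question below about deleting tasks illustrates, neither call will abort a request that has already been dispatched to a worker.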

Related

Is it possible to trigger/call another program when the Kafka HdfsSinkConnector finishes

I want to trigger an Impala refresh job when the Kafka HdfsSinkConnector task finishes. Is it possible to get a notification when the task completes, or is there any other way to trigger/call my other program?
HDFS has an inotify feature which essentially translates the NameNode's edit-log entries into events that can be consumed.
https://issues.apache.org/jira/browse/HDFS-6634
Here's a Java based example: https://github.com/onefoursix/hdfs-inotify-example
Alternatively, rather than having Oozie monitor many directories and waste resources, a script can execute something like 'hdfs dfs -ls -R /folder | grep | sed' every minute or so (a rough sketch of that polling approach is below). That's still not event based, though, so it depends how fast a reaction you need versus how easily you can implement/use the inotify API.
https://community.cloudera.com/t5/Support-Questions/HDFS-Best-way-to-trigger-execution-at-File-arrival/td-p/163423
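For illustration, here is a rough Python sketch of that polling idea; the HDFS path, table name and the refresh command are placeholders, and it is a periodic diff of the directory listing rather than anything event based:

import subprocess
import time

WATCH_DIR = "/topics/my-topic"   # placeholder: HDFS directory the connector writes to
POLL_SECONDS = 60                # "every minute or so", as suggested above

def list_files():
    # Recursively list the directory; lines starting with '-' are files, last field is the path.
    out = subprocess.run(["hdfs", "dfs", "-ls", "-R", WATCH_DIR],
                         capture_output=True, text=True, check=True).stdout
    return {line.split()[-1] for line in out.splitlines() if line.startswith("-")}

seen = list_files()
while True:
    time.sleep(POLL_SECONDS)
    current = list_files()
    if current - seen:
        # Placeholder: trigger the Impala refresh however you normally would,
        # e.g. via impala-shell or your own program.
        subprocess.run(["impala-shell", "-q", "REFRESH my_db.my_table"], check=False)
    seen = current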

Deleting a Google Cloud Task does not stop task running

I have a task queue onto which users can push tasks; only one task can run at a time, enforced by the concurrency setting for the queue. In some cases (e.g. a long-running task) they may wish to cancel a running task in order to free up the queue to process the next task.
To achieve this I have been running the task queue as a Flask application, should a user wish to cancel a task I call the delete_task method of the python client library for a given queue & task.
However, I am seeing that the underlying task continues to be processed even after the task has been deleted. I have been trying to find documentation on how Cloud Tasks handles a task being deleted, but haven't found anything concrete.
I was hoping that I'd be able to listen for a signal of some sort in order to gracefully shut down the process if a deletion is received, or that the underlying process would be killed if the parent task is deleted.
Has anyone worked with the Cloud Tasks API before? Is it correct to assume that a deleted task will cleanup any processes that are running?
I don't see how a worker would be able to find out that the task it is working on has been deleted.
In the eyes of the worker, a task is an incoming HTTP request. I don't know how the queue could tell that specific process to stop. I'm fairly certain that "deleting" a task just removes it from the queue.
You'd have to build a custom 'cancel' function that would be able to reach out to this worker.
Or this worker would have to periodically check with the Queue to see if its task still exists.
https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/get
https://googleapis.dev/python/cloudtasks/latest/gapic/v2/api.html#google.cloud.tasks_v2.CloudTasksClient.get_task
I'm not actually sure what the queue will return if you call 'get task' on a deleted task, since I don't see a 'status' property on a task. Maybe it will return an error like 'task does not exist'.
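For what it's worth, a minimal sketch of that periodic check from a Python worker, assuming the worker knows its own task ID (for example from the request headers Cloud Tasks adds) and the queue details; get_task raises NotFound once the task has been deleted:

from google.api_core.exceptions import NotFound
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

def task_still_exists(project, location, queue, task_id):
    # Returns False once the task has been deleted from the queue.
    task_path = client.task_path(project, location, queue, task_id)
    try:
        client.get_task(name=task_path)
        return True
    except NotFound:
        return False

# Inside the long-running handler, check periodically and bail out early:
# if not task_still_exists("my-project", "us-central1", "my-queue", task_id):
#     cleanup_and_exit()   # hypothetical graceful-shutdown helper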

Temporarily pause an SQS worker tier app

I have an SQS Worker Tier beanstalk application listening to a queue. If we encounter any issues, for example a database crash, is there a way for us to temporarily stop the worker tier from working that queue without having to terminate the environment and rebuilding it again when we want to resume?
One hack I guess would be for us to point it to an empty queue, but I'd rather avoid that type of thing.
Thanks
For anybody who is in the same boat as me, I just want to post my own, inelegant solution.
We have created another SQS Queue, and whenever we want to turn off the processing of messages, we just update the worker tier app to point to this new queue. It isn't clean, but it does what we need.
Another option is to just leave it as is. In case of a database crash, or any other error, your application will return, for example, a 500 instead of a 200, and the message will be returned to the queue for future processing.
Not sure if this helps, but you can add a delivery delay to the SQS queue: right-click the queue -> Configure Queue -> set Delivery Delay up to 15 minutes. Any message will be received after this delay. This allows me to "pause" the queue for up to 15 minutes.
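The same delivery-delay change can be scripted with boto3 if you'd rather not click through the console; a sketch (the queue URL is a placeholder, and 900 seconds is the 15-minute maximum mentioned above):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue"  # placeholder

# "Pause" delivery for up to 15 minutes (900 seconds is the maximum DelaySeconds).
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"DelaySeconds": "900"})

# Resume normal delivery.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"DelaySeconds": "0"})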
You can terminate the environment and recreate it. In case you do not have a way to recreate same environment via just one command, take a look at: https://github.com/ThoughtWorksStudios/eb_deployer

AWS AutoScaling, downscale - wait for processes termination

I want to use AWS Auto Scaling to scale down a group of instances when the SQS queue is short.
These instances do some heavy work that sometimes requires 5-10 minutes to complete, and I want this work to be completed before the instance termination.
I imagine a lot of people have faced the same problem. Is it possible on EC2 to handle the AWS termination request and complete all my running processes before the instance is actually terminated? What is the best approach to this?
You could also use Lifecycle hooks. You would need a way to control a specific worker remotely, because AWS will select a particular instance to put in Terminating:Wait state and you need to manage that instance. You would want to take the following actions:
Instruct the worker process running on the instance to not accept any more work.
Wait for the worker to finish the work it is already handling.
Call the complete-lifecycle action (a sketch of this step follows below).
AWS will take care of the rest for you.
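A rough boto3 sketch of that last step, assuming you know which instance was put into the Terminating:Wait state; the hook and group names here are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Called once the worker has drained its in-flight work, so AWS can proceed
# with terminating the instance it put into Terminating:Wait.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="drain-before-terminate",   # placeholder hook name
    AutoScalingGroupName="my-worker-asg",         # placeholder group name
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",             # the instance being terminated
)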
P.S. If you are using Celery to power your workers, you can remotely ask a worker to shut down gracefully. It won't shut down until it finishes the tasks it had already started executing.
Assuming you are using Linux, you can create a pre-baked AMI that you use in the Launch Configuration attached to your Auto Scaling Group.
In the AMI you can put a script under /etc/init.d, say /etc/init.d/servicesdown. This script would execute anything that you need to shut down, which could be scripts under /usr/share/services for example.
Here's the gist of it:
servicesdown
It would always get executed when doing a graceful shutdown.
Then say on Ubuntu/Debian you would do something like this to add it to your shutdown sequence:
/usr/sbin/update-rc.d servicesdown stop 25 0 1 6 .
On CentOS/RedHat you can use the chkconfig command to add it to the right shutdown runlevel.
I stumbled onto this problem because I didn't want to terminate an instance that was doing work. Thought I'd share my findings here. There are two ways to look at this, though:
I need to terminate a worker, but I only want to terminate one that's not working
I need to terminate a SPECIFIC worker and I want that specific worker to wait until it's done with the work.
If your goal is #1, Amazon's new "Instance Protection" looks like it was designed to resolve this.
See the link below for an example; they give this code snippet:
https://aws.amazon.com/blogs/aws/new-instance-protection-for-auto-scaling/
while (true)
{
    SetInstanceProtection(False);
    Work = GetNextWorkUnit();
    SetInstanceProtection(True);
    ProcessWorkUnit(Work);
    SetInstanceProtection(False);
}
I haven't tested this myself, but I see API calls related to setting the protection, so it appears that this could be integrated into the EC2 worker app code-base. Then, when scaling in, instances shouldn't be terminated while they are protected (currently working).
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/autoscaling/AmazonAutoScaling.html
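Since the snippet above is pseudocode, here is roughly what the protection toggle could look like with boto3 (the group name and instance ID are placeholders, and I haven't run this against a live group either):

import boto3

autoscaling = boto3.client("autoscaling")

def set_instance_protection(protected, instance_id="i-0123456789abcdef0",
                            group_name="my-worker-asg"):
    # Toggle scale-in protection for this instance in its Auto Scaling group.
    autoscaling.set_instance_protection(
        InstanceIds=[instance_id],
        AutoScalingGroupName=group_name,
        ProtectedFromScaleIn=protected,
    )

# Same loop as the pseudocode above:
# set_instance_protection(False); work = get_next_work_unit()
# set_instance_protection(True);  process_work_unit(work)
# set_instance_protection(False)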
As far as I know, there is currently no option to terminate an instance with a graceful shutdown that lets the process complete its work.
I suggest you take a look at http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-configure-healthcheck.html.
We implemented this for Resque workers: we move the instance to an unhealthy state and then downsize the Auto Scaling group. A script constantly checks the health state on each instance; once the instance is moved to an unhealthy state, it stops all services gracefully and sends a terminate signal to EC2.
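For reference, a rough boto3 sketch of the "mark unhealthy" step, assuming the health-check script runs on the instance and can supply its own instance ID (the ID below is a placeholder):

import boto3

autoscaling = boto3.client("autoscaling")

# Once the worker has gracefully stopped its services, mark the instance
# unhealthy so the Auto Scaling group replaces/terminates it.
autoscaling.set_instance_health(
    InstanceId="i-0123456789abcdef0",   # placeholder; usually read from instance metadata
    HealthStatus="Unhealthy",
    ShouldRespectGracePeriod=False,
)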
Hope it helps you.

Make celery stop consuming tasks

Preconditions: There is a small Celery cluster processing some tasks. Each Celery instance has a few workers running. Everything is running under Flask.
Task: I need the ability to pause/resume consuming of tasks on a particular node from the code, i.e. a task can decide whether the current Celery instance and all of its workers should pause or resume consuming tasks.
I didn't find any straightforward way to solve this. Any suggestions?
Thanks in advance!
Control.cancel_consumer(queue, **kwargs) (reference) is probably all that you need for your use case; a sketch follows below.
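A minimal sketch, assuming your Celery app object is importable from the Flask code and you know the node's worker hostname (both placeholders here); cancel_consumer stops consumption from a queue on that node, and add_consumer resumes it:

from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")   # placeholder broker URL

NODE = "worker1@example.com"   # placeholder worker hostname

# Stop this node (and all its worker processes) from consuming the queue.
app.control.cancel_consumer("celery", destination=[NODE])

# Later, resume consumption on the same node.
app.control.add_consumer("celery", destination=[NODE])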
Perhaps a better strategy would be to divide the work across several queues.
Have a default queue where all tasks start. The workers watching the default queue can, according to your logic, add subtasks to the other active queues. You may not need this extra queue if you can add tasks to the active queues directly from flask.
That way, each node does not have to worry about whether it's paused or active. It just consumes everything that's been added to its queue. These location-specific queues will be empty (and thus paused) unless the default workers have added subtasks.
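A small sketch of that layout, with a default queue plus hypothetical per-node queues; the queue names, broker URL and routing decision are placeholders for your own logic:

from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")   # placeholder broker URL

@app.task
def process_item(item):
    ...  # the actual work

@app.task
def dispatch(item, node):
    # Runs on the default queue; forwards work to the chosen node's queue.
    process_item.apply_async(args=[item], queue=f"node-{node}")

# Each node runs a worker bound only to its own queue, e.g.:
#   celery -A myapp worker -Q node-1
# so it consumes nothing unless work has been routed to it.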