Heroku/Celery: Simultaneous tasks on one worker? - django

Is it possible to use Celery to set up multiple updating tasks to run simultaneously on Django/Heroku with just ONE worker? If I schedule certain functions to run every 5 minutes, will they automatically overlap in terms of when they start running, or will they wait until all other tasks are finished? I'm new to Celery and frankly very confused about what it can do.

By default, Celery uses multiprocessing to execute tasks concurrently. The Celery worker launches a pool of processes to consume tasks. The number of processes in the pool is set by the --concurrency argument and defaults to the number of CPUs available on the machine.
So if the concurrency level is greater than one, tasks will be processed in parallel.
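As a rough sketch of what this means in practice on a single worker (the app name, broker URL, and pool size below are illustrative assumptions, using the Celery 4+ lowercase setting names):

from celery import Celery

# Hypothetical project setup; the broker URL is a placeholder.
app = Celery("myproject", broker="redis://localhost:6379/0")

# Run four pool processes on the one worker dyno - equivalent to starting
# it with: celery -A myproject worker --concurrency=4
app.conf.worker_concurrency = 4

With a pool size greater than one, two tasks scheduled for the same 5-minute mark can run at the same time on that single worker; with --concurrency=1 they run one after another.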

Related

Celery: Numerous small tasks or one long running task?

I have a Django app and using Celery to process long running tasks.
Let's say I need to generate a file (takes 5 seconds), attach it to an email and send it to 1000 users; which of these methods is the preferred way?
Method 1: For loop outside the task - generates numerous background tasks, each running for a couple of seconds
from celery import shared_task

@shared_task
def my_task(usr):
    ...  # gen file + send email

def send_to_all_users(users):  # called to start task
    for usr in users:
        my_task.delay(usr)
Method 2: For loop inside the task - generates one background task that could be running for hours
from celery import shared_task

@shared_task
def my_task(users):
    for usr in users:
        ...  # gen file + send email

def send_to_all_users(users):  # called to start task
    my_task.delay(users)
With method 1, I can scale up the number of workers to complete the entire job quicker, but creating all those tasks might take a while, and I'm not sure whether my task queue can fill up and jobs then get discarded.
Method 2 seems simpler, but it might run a very long time and I can't scale up the number of workers.
Not sure if it matters, but my app is running on Heroku and I'm using Redis as the message broker. I'm currently using a single background worker.
Celery docs on Task Granularity:
The task granularity is the amount of computation needed by each
subtask. In general it is better to split the problem up into many
small tasks rather than have a few long running tasks.
With smaller tasks you can process more tasks in parallel and the
tasks won’t run long enough to block the worker from processing other
waiting tasks.
However, executing a task does have overhead. A message needs to be
sent, data may not be local, etc. So if the tasks are too fine-grained
the overhead added probably removes any benefit.
So the first method should be preferred in general, but you have to benchmark your particular case to assess the overhead.
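As a rough sketch of the first method using Celery's group primitive (the task name and per-user work below are illustrative placeholders, not the asker's actual code):

from celery import group, shared_task

@shared_task
def send_report(user_id):
    ...  # generate the file and email it to one user (~5 seconds)

def send_to_all_users(user_ids):
    # One small task per user; a single worker drains them as fast as its
    # pool allows, and more workers can be added to drain them faster.
    group(send_report.s(uid) for uid in user_ids).apply_async()

Enqueueing 1000 small messages is cheap relative to the work itself, and a broker such as Redis holds a backlog of that size without trouble.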

AWS SWF Simple Workflow - Best Way to Keep Activity Worker Scripts Running?

The maximum amount of time the pollForActivityTask method stays open polling for requests is 60 seconds. I am currently scheduling a cron job every minute to call my activity worker file so that my activity worker machine is constantly polling for jobs.
Is this the correct way to have continuous queue coverage?
The way that the Java Flow SDK does it is that you create an ActivityWorker, give it a task list, domain, activity implementations, and a few other settings. You set both setPollThreadCount and setTaskExecutorSize. The polling threads long-poll and then hand work over to the executor threads to avoid blocking further polling. You call start on the ActivityWorker to boot it up, and when you want to shut the workers down you can call one of the shutdown methods (usually best to call shutdownAndAwaitTermination).
Essentially your workers are long lived and need to deal with a few factors:
New versions of Activities
Various tasklists
Scaling independently on tasklist, activity implementations, workflow workers, host sizes, etc.
Handle error cases and deal with polling
Handle shutdowns (in case of deployments and new versions)
I ended up using a solution where another script file is called by a cron job every minute. This file checks whether an activity worker is already running in the background (if so, I assume a workflow execution is already being processed on the current server).
If no activity worker is there, then the previous long poll has completed and we launch the activity worker script again. If there is an activity worker already present, then the previous poll found a workflow execution and started processing so we refrain from launching another activity worker.
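For comparison, here is a rough sketch of a long-lived polling loop written in Python against boto3's SWF client, rather than restarting a script from cron; the domain, task list, region, and handle_activity helper are illustrative assumptions:

import boto3

swf = boto3.client("swf", region_name="us-east-1")  # region is a placeholder

def handle_activity(task_input):
    ...  # placeholder for the real activity implementation
    return "done"

def run_worker():
    while True:
        # Long poll: returns after at most roughly 60 seconds even if no task arrived.
        task = swf.poll_for_activity_task(
            domain="my-domain",
            taskList={"name": "my-task-list"},
            identity="worker-1",
        )
        token = task.get("taskToken")
        if not token:
            continue  # the poll timed out with no work, so poll again
        result = handle_activity(task.get("input", ""))
        swf.respond_activity_task_completed(taskToken=token, result=result)

if __name__ == "__main__":
    run_worker()

Run under a process supervisor (or as a worker dyno), this gives continuous coverage with no gap between cron invocations.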

Workflow of celery

I am a beginner with Django, and I have Celery installed.
I am confused about how Celery works: is queued work handled synchronously or asynchronously? Can other work be queued while a queued item is already being processed?
Celery is a task queuing system backed by a message queuing system. Celery allows you to invoke tasks asynchronously, in a way that won't block your process while the task finishes; you can wait for the task to finish using AsyncResult.get.
Other tasks can be queued while a task is being processed, and if Celery is running more than one process/thread (which is the default case), tasks will be executed in parallel with each other.
It is your responsibility to make sure that related tasks are executed in the correct order, e.g. if the output of task A is an input to task B, then you should make sure that you get the result from task A before you start task B.
Read Avoid launching synchronous subtasks from Celery documentation.
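A small sketch of ordering two dependent tasks with a Celery chain, instead of blocking on AsyncResult.get inside a task (the task names, bodies, and broker URL are placeholders):

from celery import Celery, chain

app = Celery("myproject", broker="redis://localhost:6379/0")

@app.task
def task_a(x):
    return x * 2  # placeholder work

@app.task
def task_b(result):
    print("task_a produced", result)

# task_a runs first and its return value is passed on to task_b,
# so nothing blocks waiting for intermediate results.
chain(task_a.s(21), task_b.s()).delay()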
I think you're possibly a bit confused about what Celery does.
Celery isn't really responsible for queueing at all. That is taken care of by the queue itself - RabbitMQ, Redis, or whatever. The only way Celery gets involved at this end is as a library that you call inside your app to serialize the task into something suitable for putting onto the queue. Since that is done by your web app, it is exactly as synchronous or asynchronous as your app itself: usually, in production, you'd have multiple processes running your site, and each of those could put things onto the queue simultaneously, but each queueing action is done in-process.
The main point of Celery is the separate worker processes. This is where the asynchronous bit comes from: the workers run completely separately from your web app, and pick tasks off the queue as necessary. They are not at all involved in the process of putting tasks onto the queue in the first place.

Celery all generated tasks status

Django produces multiple Celery tasks through chains in one script run (e.g. if / is opened in a browser, 1000 tasks are called via the delay method).
I need something that will restrict new task generation if tasks queued in a previous script run are still running.
You need a distributed lock for this, which Celery doesn't offer natively.
For these kinds of locks I've found redis.Lock useful in most cases. If you need a semaphore, you can use Redis' atomic incr/decr commands along with some kind of watchdog mechanism to ensure your processes are still running.
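A rough sketch of the redis.Lock approach with redis-py; the lock name, timeout, and enqueue_batch helper are hypothetical:

import redis

r = redis.Redis()  # assumes a reachable Redis instance

def enqueue_batch():
    ...  # hypothetical: loop over objects and call my_task.delay(...) for each

# The timeout makes the lock self-expire even if it is never released.
lock = r.lock("task-batch", timeout=3600)
if lock.acquire(blocking=False):
    enqueue_batch()  # no previous batch holds the lock, so generate new tasks
    # In practice you would delete the lock key from the batch's final task,
    # or simply let the timeout expire once the batch is done.
else:
    pass  # a previous batch is still running; skip task generation this time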
You can restrict the number of tasks of one type running at the same time by setting:
rate_limit = "1000/m"
=> only 1000 tasks of this type can run per minute.
(see http://docs.celeryproject.org/en/latest/userguide/tasks.html#list-of-options)
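For example, a minimal sketch (the task name and body are placeholders; note that Celery enforces rate limits per worker instance rather than globally):

from celery import shared_task

@shared_task(rate_limit="1000/m")
def my_task(obj_id):
    ...  # each worker will start at most 1000 of these per minute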

Heroku Scheduler - why enqueue long-running jobs

The Heroku Scheduler documentation says:
Scheduled jobs are meant to execute short running tasks or enqueue longer running tasks into a background job queue. Anything that takes longer than a couple of minutes to complete should use a worker process to run
If the Scheduler starts a new dyno for these jobs and the cost is the same for a dyno vs. a worker, what is the advantage to adding a task to the queue and having a worker process run it?
It is an architectural best practice to only schedule, and not execute, interval tasks on the scheduler task (or your own custom clock process). The motivation for this is explained in the scheduled jobs article but, to summarize, you want your scheduler process/task to be as light-weight as possible since there should only be one of them. When you start overloading scheduling with execution you often run into schedule conflicts and erratic behavior.
Imagine that one interval job hangs, or takes much longer than expected. If your intervals are tight enough, this will start causing a backlog and future intervals could be pushed back or skipped altogether.
Also, it is just wise to keep component responsibilities as separated as possible - not having a single component be responsible for orthogonal tasks. This is a common design practice which is reflected in the scheduled job use-case by keeping scheduling and execution independent.
Best practices aside, if you're in development or bootstrap mode and understand the consequences stated above, you can certainly choose to ignore such advice and run everything within the scheduler task. Just watch out for hard-to-debug job conflicts or apparent duplication.
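A tiny sketch of the recommended split, assuming a hypothetical rebuild_reports task (in a real app it would live in your tasks module): the Scheduler runs this script, which only enqueues and exits; the worker dyno does the actual work.

from celery import Celery

app = Celery("myproject", broker="redis://localhost:6379/0")  # placeholder broker

@app.task
def rebuild_reports():
    ...  # the long-running work, executed on the worker dyno, not here

if __name__ == "__main__":
    # The Scheduler runs this; it returns almost immediately.
    rebuild_reports.delay()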
Well, I think this is just a recommendation. If you have a task that is run by the Scheduler and you run that same task manually (in the Heroku administration), you'll get an error - the error is caused by a timeout (because each task has a 30-second limit). In fact, though, the task will not be interrupted; it will still finish correctly.
If you have one dyno, that single dyno serves your application on Heroku. If you run a scheduled job, that dyno is taken up by the Scheduler, so if you have a long-running task, your page will be "idle" (not working correctly until the scheduled job has finished).