I am running (Django) Celery to schedule tasks to remote workers w1, w2, w3. Each of these workers has its own queue from which it consumes tasks placed by a "scheduler", which is another Celery task run by beat on the master server:
w1: q1
w2: q2
w3: q3
The scheduler schedules tasks based on a db check, i.e. it will reschedule a task with the same parameters if the db hasn't been updated to reflect that the task ran. So if one or more of the queues are piling up, multiple tasks with the same parameters ("duplicates" from my app's perspective) may be sitting in multiple queues at the same time.
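To make the setup concrete, here is a rough sketch of the kind of configuration I mean (the module/task names and the 60-second interval are placeholders, not my real code):

from celery import Celery

app = Celery('proj', broker='amqp://localhost')

# one dedicated queue per remote worker
app.conf.task_routes = {
    'proj.tasks.work_for_w1': {'queue': 'q1'},
    'proj.tasks.work_for_w2': {'queue': 'q2'},
    'proj.tasks.work_for_w3': {'queue': 'q3'},
}

# the "scheduler" task runs on beat on the master and enqueues work based on a db check
app.conf.beat_schedule = {
    'schedule-pending-work': {
        'task': 'proj.tasks.scheduler',
        'schedule': 60.0,
    },
}

Each remote worker is then started against its own queue, e.g. celery -A proj worker -Q q1 on w1.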
I'm seeing some strange behavior with this: when there are duplicate tasks in multiple queues and one of the workers runs its instance of the task, the other queued-up "duplicates" get executed just a few milliseconds after it. So all of a sudden all the tasks execute at essentially the same time, even though they were enqueued minutes apart from each other.
Is there any documentation or other reference that explains this behavior? Is this known behavior, and if so, how do I turn it off? I only want one instance of this task to run.
Related
Every day I have a set of tasks (a few hundred) that need to be performed at random times. So a periodic task is probably not what I want. It seems like I cannot dynamically change the crontab.
Should I:
Dynamically schedule a task whenever one is received from a user, and let Celery "wake up" at the scheduled time to perform the task? If so, how is it done?
OR
Create a Celery task that wakes up every 60 seconds and looks in the database for tasks scheduled for the current time, so the database acts as a queue (rough sketch below). I am wondering if this would put too much load on the server?
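Rough sketch of what I mean by option 2 (the ScheduledTask model and perform_user_task task are made-up names, just to illustrate using the database as a queue):

from celery import shared_task
from django.utils import timezone

from myapp.models import ScheduledTask      # hypothetical model with run_at / dispatched fields
from myapp.tasks import perform_user_task   # hypothetical task doing the real work

@shared_task
def dispatch_due_tasks():
    # runs every 60 seconds via beat and picks up anything whose time has come
    due = ScheduledTask.objects.filter(run_at__lte=timezone.now(), dispatched=False)
    for entry in due:
        perform_user_task.delay(entry.pk)
        entry.dispatched = True
        entry.save(update_fields=['dispatched'])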
I am a beginner with Django, and I have Celery installed.
I am confused about how Celery works: are queued tasks handled synchronously or asynchronously? Can other tasks be queued while a queued task is already being processed?
Celery is a task queuing system that is backed by a message queuing system. Celery allows you to invoke tasks asynchronously, in a way that won't block your process while the task runs; you can wait for the task to finish using AsyncResult.get.
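For example, a minimal sketch (the add task and the broker/backend URLs are just an illustration):

from celery import Celery

app = Celery('proj', broker='amqp://localhost', backend='rpc://')

@app.task
def add(x, y):
    return x + y

result = add.delay(2, 3)        # returns an AsyncResult immediately, the caller is not blocked
# ... do other work ...
print(result.get(timeout=10))   # blocks until the worker has finished, then prints 5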
Other tasks can be queued while a task is being processed, and if Celery is running more than one process/thread (which is the default case), tasks will be executed in parallel with each other.
It is your responsibility to make sure that related tasks are executed in the correct order, e.g. if the output of task A is an input to another task B, then you should make sure that you get the result from task A before you start task B.
Read Avoid launching synchronous subtasks in the Celery documentation.
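One way to express that ordering without blocking inside a task is to chain them; as a sketch (task_a and task_b are hypothetical names):

from celery import chain

from proj.tasks import task_a, task_b   # hypothetical: task_b takes task_a's result as its first argument

# the worker passes task_a's return value into task_b, so the order is guaranteed
workflow = chain(task_a.s(42), task_b.s())
workflow.delay()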
I think you're possibly a bit confused about what Celery does.
Celery isn't really responsible for queueing at all. That is taken care of by the queue itself - RabbitMQ, Redis, or whatever. The only way Celery gets involved at this end is as a library that you call inside your app to serialize the task into something suitable for putting onto the queue. Since that is done by your web app, it is exactly as synchronous or asynchronous as your app itself: usually, in production, you'd have multiple processes running your site; each of those could put things onto the queue simultaneously, but each queueing action is done in-process.
The main point of Celery is the separate worker processes. This is where the asynchronous bit comes from: the workers run completely separately from your web app, and pick tasks off the queue as necessary. They are not at all involved in the process of putting tasks onto the queue in the first place.
1) I am currently working on a web application that exposes a REST api and uses Django and Celery to handle requests and solve them. For a request to get solved, a set of Celery tasks has to be submitted to an amqp queue, so that they get executed on workers (situated on other machines). Each task is very CPU intensive and takes very long (hours) to finish.
I have configured Celery to also use amqp as the results backend, and I am using RabbitMQ as Celery's broker.
Each task returns a result that needs to be stored afterwards in a DB, but not by the workers directly. Only the "central node" - the machine running django-celery and publishing tasks in the RabbitMQ queue - has access to this storage DB, so the results from the workers have to return somehow on this machine.
The question is: how can I process the results of the task executions afterwards? So after a worker finishes, its result gets stored in the configured results backend (amqp), but now I don't know what would be the best way to get the results from there and process them.
All I could find in the documentation is that you can either check on the result's status from time to time with:
result.state
which means that basically I need a dedicated piece of code that periodically runs this command, and therefore keeps a whole thread/process busy with just this, or to block everything with:
result.get()
until a task finishes, which is not what I wish.
The only solution I can think of is to have on the "central node" an extra thread that periodically runs a function that checks on the async_results returned by each task at its submission, and takes action if a task has a finished status.
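Something roughly like this (store_in_db and the 30-second interval are just placeholders):

import threading
import time

pending = []   # AsyncResult objects appended at task submission time

def store_in_db(value):
    ...   # placeholder for writing the result to the storage DB

def poll_results(interval=30):
    while True:
        for result in list(pending):
            if result.ready():
                store_in_db(result.get())
                pending.remove(result)
        time.sleep(interval)

threading.Thread(target=poll_results, daemon=True).start()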
Does anyone have any other suggestion?
Also, since the processing of the results backend takes place on the "central node", what I aim for is to minimize the impact of this operation on this machine.
What would be the best way to do that?
2) How do people usually solve the problem of dealing with the results returned from the workers and put into the results backend? (assuming that a results backend has been configured)
I'm not sure if I fully understand your question, but take into account that each task has a task id. If tasks are being sent by users, you can store the ids and then check on the results using JSON as follows:
# urls.py
from django.conf.urls import patterns, url
from djcelery.views import is_task_successful

urlpatterns += patterns('',
    url(r'(?P<task_id>[\w\d\-\.]+)/done/?$', is_task_successful,
        name='celery-is_task_successful'),
)
Another related concept is that of signals: each finished task emits a signal. A finished task will emit a task_success signal. More can be found under real-time processing in the Celery docs.
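As a small sketch of the signal approach (the handler body is just an illustration):

from celery.signals import task_success

@task_success.connect
def on_task_success(sender=None, result=None, **kwargs):
    # sender is the task object, result is the task's return value
    print('%s finished with %r' % (sender.name, result))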
Is it possible to use Celery to set up multiple updating tasks to run simultaneously on Django/Heroku with just ONE worker? If I schedule certain functions to run every 5 minutes, will they automatically overlap in terms of when they start running, or will they wait till all other tasks are finished? I'm new to Celery and frankly very confused about what it can do ):
By default Celery uses multiprocessing to perform concurrent execution of tasks. The Celery worker launches a pool of processes to consume tasks. The number of processes in the pool is set by the --concurrency argument and defaults to the number of CPUs available on the machine.
So if the concurrency level is greater than one then the tasks will be processed in parallel.
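If you prefer configuration over the command line, in recent Celery versions the pool size can also be set in the app config, e.g. (the value 4 is an arbitrary example):

from celery import Celery

app = Celery('proj', broker='amqp://localhost')
app.conf.worker_concurrency = 4   # same effect as passing --concurrency=4 to the worker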
The Heroku Scheduler documentation says:
Scheduled jobs are meant to execute short running tasks or enqueue longer running tasks into a background job queue. Anything that takes longer than a couple of minutes to complete should use a worker process to run
If the Scheduler starts a new dyno for these jobs and the cost is the same for a dyno vs. a worker, what is the advantage to adding a task to the queue and having a worker process run it?
It is an architectural best practice to only schedule, and not execute, interval tasks from the scheduler task (or your own custom clock process). The motivation for this is explained in the scheduled jobs article but, to summarize, you want your scheduler process/task to be as lightweight as possible, since there should only be one of them. When you start overloading scheduling with execution, you often run into schedule conflicts and erratic behavior.
Imagine that one interval job hangs, or takes much longer than expected. If your intervals are tight enough, this will start causing a backlog, and future intervals could be pushed back or skipped altogether.
Also, it is just wise to keep component responsibilities as separated as possible - not having a single component be responsible for orthogonal tasks. This is a common design practice which is reflected in the scheduled job use-case by keeping scheduling and execution independent.
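A sketch of that separation (heavy_report stands in for whatever the long-running job is): the scheduled entry only calls a tiny task that enqueues the real work.

from celery import shared_task

@shared_task
def heavy_report():
    ...   # the long-running work happens here, on a worker process

@shared_task
def tick():
    # the scheduler/clock only ever runs this: it stays lightweight and just enqueues
    heavy_report.delay()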
Best practices aside, if you're in development or bootstrap mode and understand the consequences stated above, you can certainly choose to ignore such advice and run everything within the scheduler task. Just watch out for hard-to-debug job conflicts or apparent duplication.
Well, I think this is just a recommendation. If you have a task which is run by the Scheduler and you run that task manually (in the Heroku administration), you'll get an error - this error is caused by a timeout (because of the 30s limit). But in fact, the task will not be interrupted - it will finish correctly.
If you have 1 dyno, that one dyno serves your application on Heroku. If you run some scheduled job, that dyno gets taken by the Scheduler, so if you have a long-running task, your page will be "idle" (not working correctly until the scheduled job has finished).