Preconditions: There is a small Celery cluster processing some tasks. Each Celery instance has a few workers running. Everything runs under Flask.
Task: I need the ability to pause/resume the consuming of tasks on a particular node from code. I.e. a task can decide whether the current Celery instance and all its workers should pause or resume consuming tasks.
I didn't find any straightforward way to solve this. Any suggestions?
Thanks in advance!
Control.cancel_consumer(queue, **kwargs) (reference) is probably all you need for your use case.
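A minimal sketch of how a task (or any other application code) could drive this, assuming the Celery app instance is importable; the module path, queue name and node name below are placeholders:

    # Hypothetical module path holding the Celery() instance.
    from proj.celery import app

    def pause_node(node_name, queue="celery"):
        # Tell only this worker node to stop consuming from the queue;
        # the rest of the cluster keeps working.
        app.control.cancel_consumer(queue, destination=[node_name], reply=True)

    def resume_node(node_name, queue="celery"):
        # Re-attach the node to the queue.
        app.control.add_consumer(queue, destination=[node_name], reply=True)

    # e.g. from inside a task: pause_node("celery@worker1")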
Perhaps a better strategy would be to divide the work across several queues.
Have a default queue where all tasks start. The workers watching the default queue can, according to your logic, add subtasks to the other active queues. You may not need this extra queue if you can add tasks to the active queues directly from flask.
That way, each node does not have to worry about whether it's paused or active. It just consumes everything that's been added to its queue. These location-specific queues will be empty (and thus paused) unless the default workers have added subtasks.
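As a rough illustration of that layout (queue and task names below are made up), the task consumed from the default queue simply routes the real work to the node-specific queues:

    from celery import Celery

    app = Celery("proj", broker="redis://localhost:6379/0")

    @app.task
    def dispatch(payload):
        # Consumed from the default queue; decides which node-specific
        # queue should receive the real work.
        target = "node-eu" if payload.get("region") == "eu" else "node-us"
        do_work.apply_async(args=[payload], queue=target)

    @app.task
    def do_work(payload):
        ...  # only the worker started with -Q node-eu (or -Q node-us) consumes this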
So I'm struggling to figure out the optimal way to schedule some events to happen at some point in the future using celery. An example of this is when a new user has registered, we want to send them an email the next day.
We have Celery set up, and some of you may point to the eta parameter of apply_async. However, that won't work for us: we use SQS, which has a visibility timeout that would conflict, and in general the eta param shouldn't be used for lengthy periods.
One solution we've implemented at this point is to create events and store them in the database with a 'to-process' timestamp (i.e. when the event should be processed). We use the celery beat scheduler with a task that runs literally every second to see whether any events are ready to process. If there are, we carry out the subsequent tasks.
This solution works, although it doesn't feel great, since we're queueing a task every second on SQS. Any thoughts or ideas on this would be great.
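For reference, a stripped-down sketch of what we do now (model, field and task names are simplified):

    # models.py
    from django.db import models

    class ScheduledEvent(models.Model):
        process_at = models.DateTimeField(db_index=True)   # when to process the event
        processed = models.BooleanField(default=False)

    # tasks.py -- process_due_events is the beat task that runs every second
    from celery import shared_task
    from django.utils import timezone

    from .models import ScheduledEvent

    @shared_task
    def process_due_events():
        due = ScheduledEvent.objects.filter(processed=False,
                                            process_at__lte=timezone.now())
        for event in due:
            handle_event.delay(event.pk)
            event.processed = True
            event.save(update_fields=["processed"])

    @shared_task
    def handle_event(event_id):
        ...  # e.g. send the day-after registration email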
I am a beginner with Django, and I have Celery installed.
I am confused about how Celery works: are queued tasks handled synchronously or asynchronously? Can other tasks be queued while an earlier task is still being processed?
Celery is a task queuing system backed by a message queuing system. It lets you invoke tasks asynchronously, in a way that doesn't block your process while the task runs; if you need the result, you can wait for the task to finish using AsyncResult.get().
Other tasks can be queued while a task is being processed, and if Celery is running more than one process/thread (which is the default case), tasks will be executed in parallel with each other.
It is your responsibility to make sure that related tasks are executed in the correct order, e.g. if the output of task A is an input to task B, then you should make sure you get the result of task A before starting task B.
Read Avoid launching synchronous subtasks from Celery documentation.
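To make that concrete, here is a small sketch; the broker/backend URLs and task names are only placeholders:

    from celery import Celery, chain

    app = Celery("proj",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task
    def task_a(x):
        return x * 2

    @app.task
    def task_b(y):
        return y + 1

    result = task_a.delay(10)        # returns an AsyncResult immediately, does not block
    print(result.get(timeout=10))    # blocks until a worker has finished -> 20

    # Running B after A without blocking inside a task: chain them instead.
    print(chain(task_a.s(10), task_b.s())().get(timeout=10))  # -> 21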
I think you're possibly a bit confused about what Celery does.
Celery isn't really responsible for queueing at all. That is taken care of by the queue itself - RabbitMQ, Redis, or whatever. The only way Celery gets involved at this end is as a library that you call inside your app to serialize the task into something suitable for putting onto the queue. Since that is done by your web app, it is exactly as synchronous or asynchronous as your app itself: usually, in production, you'd have multiple processes running your site; each of those could put things onto the queue simultaneously, but each queueing action is done in-process.
The main point of Celery is the separate worker processes. This is where the asynchronous bit comes from: the workers run completely separately from your web app, and pick tasks off the queue as necessary. They are not at all involved in the process of putting tasks onto the queue in the first place.
Django produces multiple Celery tasks through chains in one script run (e.g. if / is opened in the browser, 1000 tasks are called via the delay method).
I need something that will restrict new task generation if the tasks queued in a previous script run are still running.
You need a distributed lock for this, which Celery doesn't offer natively.
For these kinds of locks I've found redis.Lock useful in most cases. If you need a semaphore, you can use Redis' atomic incr/decr commands along with some kind of watchdog mechanism to ensure your processes are still running.
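A minimal sketch of such a lock around the dispatch step (the key name, timeout and my_task are placeholders); note it only prevents a new batch from being enqueued while the lock is held, so something else has to release the lock, e.g. the last task of the batch or the timeout:

    import redis

    r = redis.Redis()

    def enqueue_batch(items):
        # Auto-expiring lock so a crashed producer can't block future runs forever.
        lock = r.lock("task-batch-lock", timeout=600)
        if not lock.acquire(blocking=False):
            return False                    # a previous run still holds the lock: skip
        for item in items:
            my_task.delay(item)             # my_task is whatever task you fan out
        # The lock is intentionally not released here; the last task of the batch
        # (or a chord callback) would delete "task-batch-lock", or the timeout expires it.
        return True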
You can restrict the number of tasks of one type running at the same time by setting:
rate_limit = "1000/m"
=> only 1000 tasks of this type can run per minute.
(see http://docs.celeryproject.org/en/latest/userguide/tasks.html#list-of-options)
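For reference, the option goes directly on the task definition (the task name here is just a placeholder); note that this is a per-worker limit, not a cluster-wide one:

    from celery import shared_task

    @shared_task(rate_limit="1000/m")   # at most 1000 runs of this task per minute, per worker
    def generate_report(report_id):
        ...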
My Django web app's logic is heavily geared towards background task execution (both periodic and standalone, synchronous and asynchronous). All the research seems to point to Celery as the most recommended approach. I plan to eventually deploy on Heroku, and the fact that it supports Celery + Redis (what I'm using for local development) is a big plus for me.
However, I need more extensive scheduling capabilities than Celery provides. Some of my periodic tasks need to run on schedules like 'run on the last Sunday of the month', etc. So I've implemented my own models in Django to store a recurrence rule and the other needed parameters.
Now I'm stumped about how to interface my tables with Celery. Ideally what I'd like is my own Job model which has the schedule, the task which should be run when it becomes due, and the parameters for the task. Sort of like a function pointer in C++. Then I would run a daemon which keeps checking the job queue for jobs that have become due; if a job is periodic, it creates the next job instance and pushes it onto the queue, then runs the associated task with its parameters using Celery's delay method or similar.
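A rough sketch of the Job model and dispatch loop I'm imagining (compute_next_run and all the names are placeholders, and task_kwargs assumes a Django version with a JSONField):

    from django.db import models

    class Job(models.Model):
        rrule = models.CharField(max_length=255)        # recurrence rule, e.g. iCal RRULE text
        task_name = models.CharField(max_length=255)    # registered name of the celery task
        task_kwargs = models.JSONField(default=dict)    # parameters for the task
        next_run = models.DateTimeField(db_index=True)

    # The "daemon" part: find due jobs, fire them via celery, schedule the next occurrence.
    from celery import current_app
    from django.utils import timezone

    def run_due_jobs():
        for job in Job.objects.filter(next_run__lte=timezone.now()):
            current_app.send_task(job.task_name, kwargs=job.task_kwargs)
            job.next_run = compute_next_run(job.rrule)   # e.g. computed with dateutil.rrule
            job.save(update_fields=["next_run"])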
questions:
Does this approach even make sense?
If not, what other alternative approach(es) can I use?
If yes, how do I go about designing that Job/Event queue...
I'd love to hear a better approach to doing this, or whether there's an existing implementation of a job queue that might be suitable, or a way to use Celery's own job queue for this...
Thanks heaps..
Periodic tasks in Celery work pretty much like this: there's a dedicated scheduler process (celery beat) which simply sends off tasks when they are due.
You can also create new schedulers to use with beat by subclassing the celery.beat.Scheduler class, and you can create custom schedules too (like the crontab schedule that is already built-in) by subclassing celery.schedules.schedule.
There's a database-backed scheduler implementation in the django-celery extension (djcelery.schedulers.DatabaseScheduler), which uses a number of tricks to avoid polling the database too frequently, and so on (sadly it's not well commented).
Scheduler: https://github.com/celery/celery/tree/master/celery/beat.py
schedules: https://github.com/celery/celery/tree/master/celery/schedules.py
DatabaseScheduler: https://github.com/celery/django-celery/tree/master/djcelery/schedulers.py
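For example, a custom schedule for the 'run on the last Sunday of the month' case from the question might look roughly like this; is_last_sunday is a hypothetical helper, and the class still has to be referenced from a beat schedule entry:

    from celery.schedules import schedule, schedstate

    class LastSundaySchedule(schedule):
        """Due (at most once per day) on the last Sunday of the month."""

        def is_due(self, last_run_at):
            now = self.now()
            ran_today = last_run_at is not None and last_run_at.date() == now.date()
            if is_last_sunday(now.date()) and not ran_today:
                return schedstate(is_due=True, next=3600)    # run now, re-check in an hour
            return schedstate(is_due=False, next=3600)       # not due, re-check in an hour

    # Used as the 'schedule' value of a beat schedule entry, e.g.
    # {'task': 'reports.monthly', 'schedule': LastSundaySchedule()}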
SQS expects your application to be idempotent, and I've got multiple consumers/producers, so (even if SQS had a deliver-once mechanism) I would have race conditions creating duplicates, as well as race conditions when consuming, because my consumers run via cron jobs.
My current plan is to use Django 1.4's select_for_update, which should block other consumers on the same row, doing something like:
    from django.db import transaction
    from django.utils import timezone

    with transaction.atomic():  # select_for_update() must run inside a transaction
        reminder = EmailReminder.objects.select_for_update().get(id=some_id)
        if not reminder.finished:
            reminder.send()
            reminder.finished = timezone.now()
            reminder.save(update_fields=["finished"])
            # Delete job.
Are there better ways of dealing with this?
Hook up django-celery to SQS and have it define a periodic job using celerybeat. Then have celeryd worker(s) running on the same queue anywhere you want. Only one will pick up a job at a time and execute it. No need to introduce DB locking at any level.
As long as your worker is guaranteed to finish its current task before celerybeat fires a new one, you will never need a lock. Now, if you think there is a chance they may overlap, you can introduce states for your notifications:
Any reminder starts in the "unsent" state.
Your celerybeat sends a request to the queue to process unsent emails.
Some worker picks it up and grabs all of them.
The worker immediately transitions all of them to the "sending" state.
It then proceeds to send them one at a time (or in bulk).
If sending fails for any of them, revert their state back to "unsent".
For all that succeeded, transition to "sent".
This way, if celerybeat fires another job while your original job is not done with the initial batch, you won't have duplicate emails sent. As an added bonus, you can scale the solution and distribute the load.
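A compressed sketch of those steps, assuming the EmailReminder model from the question with a state field added (the app path is a placeholder):

    from celery import shared_task
    from django.db import transaction

    from myapp.models import EmailReminder   # hypothetical app path

    @shared_task
    def send_unsent_reminders():
        with transaction.atomic():
            # Grab the unsent batch and flip it to "sending" in one locked step,
            # so an overlapping beat run won't pick up the same rows.
            batch = list(EmailReminder.objects.select_for_update().filter(state="unsent"))
            EmailReminder.objects.filter(pk__in=[r.pk for r in batch]).update(state="sending")
        for reminder in batch:
            try:
                reminder.send()
                reminder.state = "sent"
            except Exception:
                reminder.state = "unsent"    # revert so the next run retries it
            reminder.save(update_fields=["state"])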