Django Celery register task from module - django

I'm confused about how I can register only a subset of tasks from one Django app.
For example, we have 2 apps, each with a set of tasks, but we need to register all tasks from the first app and only a subset of tasks from the second. How can I achieve that?
Put another way: we have 2 different projects that use a reusable app containing some tasks, and we need to import one part of the tasks in the first project and another part in the second. How can we achieve that?
Right now I use celery.autodiscover, but this also imports tasks that I don't need. Thanks.

In your celery.py file, do the configuration like this:
from celery import Celery
from django.conf import settings

app = Celery('redington')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(settings.INSTALLED_APPS, related_name='tasks')
In each of your apps, create a tasks.py file and define your tasks there; autodiscover will pick up every app's tasks module.

I haven't tested it, but it should work.
If you disable autodiscover_tasks you can register a specific task with
app.register_task(your_task)
from this issue https://github.com/celery/celery/issues/4112#issuecomment-313215784
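For illustration, here is a rough sketch of a project-level celery.py that skips autodiscovery and only pulls in the tasks it wants; the module and class names (shared_app.tasks_subset_one, MyTask) are made up for the example:
from celery import Celery

app = Celery('project_one')
app.config_from_object('django.conf:settings', namespace='CELERY')

# no app.autodiscover_tasks() call; import only the task modules you want
import shared_app.tasks_subset_one  # hypothetical module holding just that subset

# class-based tasks can be registered explicitly, as in the linked issue
from shared_app.tasks import MyTask  # hypothetical celery.Task subclass
app.register_task(MyTask())
The second project would do the same with a different set of imports.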

Related

Django, how to trigger functions at a specific time? [duplicate]

I've been working on a web app using Django, and I'm curious if there is a way to schedule a job to run periodically.
Basically I just want to run through the database and make some calculations/updates on an automatic, regular basis, but I can't seem to find any documentation on doing this.
Does anyone know how to set this up?
To clarify: I know I can set up a cron job to do this, but I'm curious if there is some feature in Django that provides this functionality. I'd like people to be able to deploy this app themselves without having to do much config (preferably zero).
I've considered triggering these actions "retroactively" by simply checking if a job should have been run since the last time a request was sent to the site, but I'm hoping for something a bit cleaner.
One solution that I have employed is to do this:
1) Create a custom management command, e.g.
python manage.py my_cool_command
2) Use cron (on Linux) or at (on Windows) to run my command at the required times.
This is a simple solution that doesn't require installing a heavy AMQP stack. However, there are nice advantages to using something like Celery, mentioned in the other answers. In particular, with Celery it is nice not to have to spread your application logic out into crontab files. That said, the cron solution works quite nicely for a small to medium sized application where you don't want a lot of external dependencies.
EDIT:
In later versions of Windows (Windows 8, Server 2012 and above) the at command is deprecated. You can use schtasks.exe for the same purpose.
**** UPDATE ****
This is the updated link to the Django documentation for writing a custom management command.
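As a sketch, a minimal custom management command could look like this (the file path and command name follow Django's convention; the body is a placeholder):
# myapp/management/commands/my_cool_command.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Runs the periodic calculations/updates"

    def handle(self, *args, **options):
        # walk the database and do the calculations/updates here
        self.stdout.write("Done")
A matching crontab entry would then be something like 0 * * * * python /path/to/project/manage.py my_cool_command.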
Celery is a distributed task queue, built on AMQP (RabbitMQ). It also handles periodic tasks in a cron-like fashion (see periodic tasks). Depending on your app, it might be worth a gander.
Celery is pretty easy to set up with django (docs), and periodic tasks will actually skip missed tasks in case of a downtime. Celery also has built-in retry mechanisms, in case a task fails.
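For reference, a minimal periodic task with Celery 4+ could be sketched like this (the project name proj and the task body are placeholders):
# proj/celery.py
from celery import Celery
from celery.schedules import crontab

app = Celery('proj', broker='amqp://localhost')

@app.task
def nightly_update():
    pass  # database calculations/updates go here

app.conf.beat_schedule = {
    'nightly-update': {
        'task': 'proj.celery.nightly_update',
        'schedule': crontab(hour=0, minute=0),
    },
}
Running the worker with celery -A proj worker -B (or a separate beat process) makes the schedule tick.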
We've open-sourced a small structured app that implements what Brian's solution above alludes to. We would love any/all feedback!
https://github.com/tivix/django-cron
It comes with one management command:
./manage.py runcrons
That does the job. Each cron is modeled as a class (so it's all OO), each cron can run at a different frequency, and we make sure the same cron type doesn't run in parallel (in case a cron takes longer to run than its frequency!).
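For context, a django-cron job is roughly a class like the following; the attribute names follow the django-cron README as I recall it, so treat this as an untested sketch:
# myapp/cron.py
from django_cron import CronJobBase, Schedule

class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 60  # run once an hour
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'myapp.my_cron_job'  # unique identifier

    def do(self):
        pass  # the actual work goes here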
If you're using a standard POSIX OS, you use cron.
If you're using Windows, you use at.
Write a Django management command that:
figures out what platform it is running on, and
either executes the appropriate "at" command for your users, or updates the crontab for your users.
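A hedged sketch of such a command, using schtasks on Windows and the user's crontab elsewhere (the command name, schedule and paths are placeholders):
# myapp/management/commands/install_schedule.py
import platform
import subprocess

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Installs the hourly job for the current platform"

    def handle(self, *args, **options):
        job = 'python /path/to/project/manage.py my_cool_command'
        if platform.system() == 'Windows':
            subprocess.run(['schtasks', '/Create', '/SC', 'HOURLY',
                            '/TN', 'my_cool_command', '/TR', job])
        else:
            current = subprocess.run(['crontab', '-l'], capture_output=True,
                                     text=True).stdout
            subprocess.run(['crontab', '-'], text=True,
                           input=current + '0 * * * * ' + job + '\n')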
Interesting new pluggable Django app: django-chronograph
You only have to add one cron entry which acts as a timer, and you have a very nice Django admin interface into the scripts to run.
Look at Django Poor Man's Cron, which is a Django app that makes use of spambots, search engine indexing robots and the like to run scheduled tasks at approximately regular intervals.
See: http://code.google.com/p/django-poormanscron/
I had exactly the same requirement a while ago, and ended up solving it using APScheduler (User Guide)
It makes scheduling jobs super simple and keeps them independent from request-based execution of code. A simple example follows.
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
job = None

def tick():
    print('One tick!')

def start_job():
    global job
    job = scheduler.add_job(tick, 'interval', seconds=3600)
    try:
        scheduler.start()
    except Exception:
        pass
Hope this helps somebody!
Django APScheduler for Scheduler Jobs. Advanced Python Scheduler (APScheduler) is a Python library that lets you schedule your Python code to be executed later, either just once or periodically. You can add new jobs or remove old ones on the fly as you please.
note: I'm the author of this library
Install APScheduler
pip install apscheduler
Write the function you want the scheduler to call.
File name: scheduler_jobs.py
def FirstCronTest():
    print("")
    print("I am executed..!")
Configuring the scheduler
Make an execute.py file and add the code below:
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

# your functions go here; the scheduler functions are written in scheduler_jobs
import scheduler_jobs

scheduler.add_job(scheduler_jobs.FirstCronTest, 'interval', seconds=10)
scheduler.start()
Link the file for execution
Now, add the line below at the bottom of your urls.py file:
import execute
You can check the full code here:
https://github.com/devchandansh/django-apscheduler
Brian Neal's suggestion of running management commands via cron works well, but if you're looking for something a little more robust (yet not as elaborate as Celery) I'd look into a library like Kronos:
# app/cron.py
import kronos
@kronos.register('0 * * * *')
def task():
    pass
RabbitMQ and Celery have more features and task handling capabilities than Cron. If task failure isn't an issue, and you think you will handle broken tasks in the next call, then Cron is sufficient.
Celery & AMQP will let you handle the broken task, and it will get executed again by another worker (Celery workers listen for the next task to work on), until the task's max_retries attribute is reached. You can even invoke tasks on failure, like logging the failure, or sending an email to the admin once the max_retries has been reached.
And you can distribute Celery and AMQP servers when you need to scale your application.
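For illustration, a task with retries could be sketched like this; the app import, helper and task names are assumptions, not something from the answer:
from celery.utils.log import get_task_logger

from proj.celery import app  # hypothetical: your project's Celery app

logger = get_task_logger(__name__)

def do_the_work():
    pass  # placeholder for the actual database updates

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def update_records(self):
    try:
        do_the_work()
    except Exception as exc:
        logger.error("update_records failed, retrying: %s", exc)
        raise self.retry(exc=exc)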
I personally use cron, but the Jobs Scheduling parts of django-extensions look interesting.
Although not part of Django, Airflow is a more recent project (as of 2016) that is useful for task management.
Airflow is a workflow automation and scheduling system that can be used to author and manage data pipelines. A web-based UI provides the developer with a range of options for managing and viewing these pipelines.
Airflow is written in Python and is built using Flask.
Airflow was created by Maxime Beauchemin at Airbnb and open sourced in the spring of 2015. It joined the Apache Software Foundation's incubation program in the winter of 2016. Here is the Git project page and some additional background information.
Put the following at the top of your cron.py file:
#!/usr/bin/python
import os, sys
sys.path.append('/path/to/') # the parent directory of the project
sys.path.append('/path/to/project') # these lines only needed if not on path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproj.settings'
# imports and code below
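For what it's worth, on Django 1.7+ such a script also needs django.setup() before the ORM can be used; a completed sketch (the paths and model name are placeholders) could look like:
#!/usr/bin/python
import os
import sys

sys.path.append('/path/to/')         # the parent directory of the project
sys.path.append('/path/to/project')  # only needed if not already on the path
os.environ['DJANGO_SETTINGS_MODULE'] = 'myproj.settings'

import django
django.setup()  # required on Django 1.7 and later

from myapp.models import MyModel  # hypothetical model

for obj in MyModel.objects.all():
    pass  # calculations/updates here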
I just thought about this rather simple solution:
Define a view function do_work(req, param) like you would with any other view, with a URL mapping, returning an HttpResponse, and so on.
Set up a cron job with your timing preferences (or using AT or Scheduled Tasks in Windows) which runs curl http://localhost/your/mapped/url?param=value.
You can pass parameters by just adding them to the URL.
Tell me what you guys think.
[Update] I'm now using runjob command from django-extensions instead of curl.
My cron looks something like this:
@hourly python /path/to/project/manage.py runjobs hourly
... and so on for daily, monthly, etc. You can also set it up to run a specific job.
I find it more manageable and cleaner. It doesn't require mapping a URL to a view. Just define your job class and crontab entry and you're set.
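A rough sketch of such a job class, based on the django-extensions jobs layout (the file location and names follow its docs as I recall them, so verify against the current documentation):
# myapp/jobs/hourly/update_stats.py
from django_extensions.management.jobs import HourlyJob

class Job(HourlyJob):
    help = "Recalculate cached statistics"

    def execute(self):
        pass  # the work that `manage.py runjobs hourly` triggers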
After the block of code below, I can write anything just like in my views.py :)
#######################################
import os,sys
sys.path.append('/home/administrator/development/store')
os.environ['DJANGO_SETTINGS_MODULE']='store.settings'
from django.core.management import setup_environ
from store import settings
setup_environ(settings)
#######################################
from
http://www.cotellese.net/2007/09/27/running-external-scripts-against-django-models/
You should definitely check out django-q!
It requires no additional configuration and has quite possibly everything needed to handle any production issues on commercial projects.
It's actively developed and integrates very well with django, django ORM, mongo, redis. Here is my configuration:
# django-q
# -------------------------------------------------------------------------
# See: http://django-q.readthedocs.io/en/latest/configure.html
Q_CLUSTER = {
    # Match recommended settings from docs.
    'name': 'DjangoORM',
    'workers': 4,
    'queue_limit': 50,
    'bulk': 10,
    'orm': 'default',

    # Custom Settings
    # ---------------
    # Limit the amount of successful tasks saved to Django.
    'save_limit': 10000,

    # See https://github.com/Koed00/django-q/issues/110.
    'catch_up': False,

    # Number of seconds a worker can spend on a task before it's terminated.
    'timeout': 60 * 5,

    # Number of seconds a broker will wait for a cluster to finish a task before presenting it again.
    # This needs to be longer than `timeout`, otherwise the same task will be processed multiple times.
    'retry': 60 * 6,

    # Whether to force all async() calls to be run with sync=True (making them synchronous).
    'sync': False,

    # Redirect worker exceptions directly to Sentry error reporter.
    'error_reporter': {
        'sentry': RAVEN_CONFIG,
    },
}
Yes, the methods above are great, and I tried some of them. In the end, I found a method like this:
from threading import Timer

interval = 3600  # seconds between runs

def sync():
    # do something...
    sync_timer = Timer(interval, sync)
    sync_timer.start()

sync()  # kick off the first run
It just calls itself recursively.
OK, I hope this method can meet your requirements. :)
A more modern solution (compared to Celery) is Django Q:
https://django-q.readthedocs.io/en/latest/index.html
It has great documentation and is easy to grok. Windows support is lacking, because Windows does not support process forking. But it works fine if you create your dev environment using the Windows Subsystem for Linux.
I ran into something similar to your problem today.
I didn't want it handled by the server through cron (and most of the libs were just cron helpers in the end).
So I created a scheduling module and attached it to the __init__.
It's not the best approach, but it helps me to have all the code in a single place and with its execution related to the main app.
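One way to wire that up, roughly sketched (whether you hook it from __init__.py or an AppConfig is up to you; the module and class names are illustrative):
# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # start the scheduling module once the app registry is loaded
        from . import scheduler  # hypothetical module that defines start()
        scheduler.start()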
I use celery to create my periodical tasks. First you need to install it as follows:
pip install django-celery
Don't forget to register django-celery in your settings and then you could do something like this:
from celery import task
from celery.decorators import periodic_task
from celery.task.schedules import crontab
from celery.utils.log import get_task_logger
@periodic_task(run_every=crontab(minute="0", hour="23"))
def do_every_midnight():
    # your code
    ...
I am not sure whether this will be useful for anyone, but since I had to let other users of the system schedule jobs without giving them access to the actual server's (Windows) Task Scheduler, I created this reusable app.
Please note that users have access to one shared folder on the server where they can create the required command/task/.bat file. That task can then be scheduled using this app.
The app name is Django_Windows_Scheduler.
If you want something more reliable than Celery, try TaskHawk which is built on top of AWS SQS/SNS.
Refer: http://taskhawk.readthedocs.io
For simple dockerized projects, I could not really see any existing answer that fit.
So I wrote a very barebones solution that needs no external libraries or triggers and runs on its own. No external OS cron is needed; it should work in every environment.
It works by adding a middleware: middleware.py
import threading

def should_run(name, seconds_interval):
    from application.models import CronJob
    from django.utils.timezone import now

    try:
        c = CronJob.objects.get(name=name)
    except CronJob.DoesNotExist:
        CronJob(name=name, last_ran=now()).save()
        return True

    if (now() - c.last_ran).total_seconds() >= seconds_interval:
        c.last_ran = now()
        c.save()
        return True

    return False

class CronTask:
    def __init__(self, name, seconds_interval, function):
        self.name = name
        self.seconds_interval = seconds_interval
        self.function = function

def cron_worker(*_):
    if not should_run("main", 60):
        return

    # customize this part:
    from application.models import Event

    tasks = [
        CronTask("events", 60 * 30, Event.clean_stale_objects),
        # ...
    ]

    for task in tasks:
        if should_run(task.name, task.seconds_interval):
            task.function()

def cron_middleware(get_response):
    def middleware(request):
        response = get_response(request)
        threading.Thread(target=cron_worker).start()
        return response

    return middleware
models/cron.py:
from django.db import models

class CronJob(models.Model):
    name = models.CharField(max_length=10, primary_key=True)
    last_ran = models.DateTimeField()
settings.py:
MIDDLEWARE = [
    ...
    'application.middleware.cron_middleware',
    ...
]
A simple way is to write a custom management command (see the Django documentation) and execute it using a cron job on Linux. However, I would highly recommend using a message broker like RabbitMQ coupled with Celery. Maybe you can have a look at
this tutorial
One alternative is to use Rocketry:
from rocketry import Rocketry
from rocketry.conds import daily, after_success
app = Rocketry()
#app.task(daily.at("10:00"))
def do_daily():
...
#app.task(after_success(do_daily))
def do_after_another():
...
if __name__ == "__main__":
app.run()
It also supports custom conditions:
from pathlib import Path
@app.cond()
def file_exists(file):
    return Path(file).exists()

@app.task(daily & file_exists("myfile.csv"))
def do_custom():
    ...
And it also supports Cron:
from rocketry.conds import cron
@app.task(cron('*/2 12-18 * Oct Fri'))
def do_cron():
    ...
It can be integrated quite nicely with FastAPI, and I think it could be integrated with Django as well, since Rocketry is essentially just a sophisticated loop that can spawn async tasks, threads and processes.
Disclaimer: I'm the author.
Another option, similar to Brian Neal's answer, is to use RunScripts.
Then you don't need to set up management commands. This has the advantage of a more flexible or cleaner folder structure.
This file must implement a run() function. This is what gets called when you run the script. You can import any models or other parts of your django project to use in these scripts.
And then, just
python manage.py runscript path.to.script
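A minimal sketch of such a script, assuming django-extensions is installed and the file lives in an app's scripts/ package (the model is made up):
# myapp/scripts/update_stats.py
from myapp.models import Stat  # hypothetical model

def run():
    # this is what `python manage.py runscript update_stats` calls
    for stat in Stat.objects.all():
        pass  # recalculate and save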

How to know when the database is ready in Django?

I need to do stuff as soon as the database is ready in Django. Specifically, I need to perform some calculations on values from db and fill the results into cache.
Since django 1.7, the application registry makes it easy to know when an app or models are ready to be used. You can write:
from django.apps import apps
if apps.ready:
    do_some_stuff()
But I found out that the models being ready does not mean the database can be queried. Django's docs say:
Although you can access model classes as described above, avoid
interacting with the database in your ready() implementation
I tried to hook up to the post_migrate event. It works if I'm rebuilding the database (e.g launching the test suite), but does not if I'm just using an existing db (e.g using runserver).
Is there a way to know if the database is fully available in Django >= 1.7?
I also use the post_migrate signal (as in: https://github.com/mrjmad/django_badgificator/blob/master/badgificator/apps.py).
I realize from reading your question that it does not work with 'runserver'...
You can try hooking up a receiver for the connection_created signal.
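A minimal sketch of what that could look like; the run-once guard is my own addition, since the signal fires for every new connection:
from django.db.backends.signals import connection_created
from django.dispatch import receiver

_cache_filled = False

@receiver(connection_created)
def fill_cache_when_db_ready(sender, connection, **kwargs):
    global _cache_filled
    if _cache_filled:
        return
    _cache_filled = True
    # the database is reachable here; run the calculations and fill the cache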
If I correctly understand what you are trying to do, you want to fill the cache with data from the DB when you start runserver. Since the server won't reload in production, you will fill your cache only once until you restart the server (and I'm not even sure that gunicorn would behave the same way as runserver for this).
So you probably have another way to update your cache using celery or something similar after the startup? Why not just use the same way to perform the first run?
You could setup your code in the wsgi.py file after the application is imported and called, like so:
from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()

# Set up your startup code here, since you already have access to your models
from myapp.models import MyModel  # hypothetical import; use one of your own models
print(MyModel.objects.all()[:5])
I found this answer based on this link:
Entry point hook for Django projects
Any code in project/__init__.py will run on startup after the database is ready but before any views/urls can be accessed, so just put some code in __init__.py and it will run as you expect. post_migrate might be redundant because, as far as I'm aware, you can't run migrations while the app is running. If you absolutely need it, just have a function that runs both on startup and when the signal is fired.

Celery + Django best practice

I have been reading about Celery and Django in these posts (here and here), and all the logic/tasks live in celery.py, but in the official documentation they are separated into two files: celery.py and tasks.py. So which is the best practice? Does this affect performance?
The location of the tasks shouldn't have any noticeable effect on performance. The suggestion to use a separate tasks.py is for better organization.
From the Celery docs:
Note that this example project layout [a separate tasks.py for each app] is suitable for larger projects, for simple projects you may use a single contained module that defines both the app and tasks, like in the First Steps with Celery tutorial.
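For reference, the per-app layout just means each app ships a tasks.py along these lines, which autodiscover_tasks() in the project's celery.py then picks up automatically:
# app1/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    return x + y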

celery save task to database

In my previous project I used django-celery, but in my current project I am using plain Celery, because djcelery says "THIS PROJECT IS NO LONGER REQUIRED".
I'm using Redis as my backend, and I would like to keep track of all the tasks that have executed so that in the future I can do some comparisons, for instance the average time a task took to complete 6 months ago versus now.
I know there are apps like Flower, but I find it a little buggy and I'm not sure whether the tasks are saved or not. I need something a bit more reliable, even if that means creating my own model, forcing Celery to save the task and the parameters I want, and reading it in the admin.
Is this approach correct or is there a built in celery way to keep track of that information?
cheers
I'm still using django-celery with recent versions of Celery, because it works well (Django 1.6 and 1.7). It's not needed anymore but you can still use it. The celerycam management command is simple to use, and enables admin monitoring of tasks.
If you're after the official way of doing things, I cannot see anything better than the Celery documentation.
django-celery celerycam seems to use the celery console event viewer to do its stuff (code below). The admin integration, however, seems to be the sole work of djcelery. Have a look at djcelery/admin.py
from __future__ import absolute_import, unicode_literals
from celery.bin import events
from djcelery.app import app
from djcelery.management.base import CeleryCommand
ev = events.events(app=app)
class Command(CeleryCommand):
    """Run the celery curses event viewer."""
    options = (CeleryCommand.options
               + ev.get_options()
               + ev.preload_options)
    help = 'Takes snapshots of the clusters state to the database.'

    def handle(self, *args, **options):
        """Handle the management command."""
        options['camera'] = 'djcelery.snapshot.Camera'
        ev.run(*args, **options)
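If you do go with the custom-model approach mentioned in the question, a hedged sketch using Celery's task_postrun signal might look like this (the TaskRecord model and its fields are made up):
from celery.signals import task_postrun

@task_postrun.connect
def record_task(sender=None, task_id=None, task=None, args=None,
                kwargs=None, retval=None, state=None, **extras):
    from myapp.models import TaskRecord  # hypothetical model with these fields
    TaskRecord.objects.create(
        task_id=task_id,
        name=sender.name if sender else '',
        state=state or '',
    )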

Django import loop between celery tasks and my models

I am using celery with my django project.
In the celery tasks file, I need to import my models, in order to trigger model methods.
However, I would also like my model to be able to trigger certain celery tasks.
Right now I am importing my models in the Celery tasks file; however, trying to import the Celery tasks into my models file results in an import loop and an ImportError.
What is the correct way to go about this?
What I ended up doing is using imports within methods instead of a general import at the top of the models file. Obviously, I didn't really need circular imports: my problem was that I was importing the model at the top of the Celery tasks file and importing the Celery tasks at the top of the models file, which wasn't really necessary. By compartmentalizing the imports I was able to avoid the circular import problem.
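In code, that compartmentalizing looks roughly like this (the model and task names are invented for the example):
# models.py
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)

    def publish(self):
        # import here instead of at module level to avoid the circular import
        from myapp.tasks import notify_subscribers
        notify_subscribers.delay(self.pk)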
Celery provides the send_task() method, which allows you to send a task by name, thus eliminating the need to import it - for example:
# models.py
from celery import current_app
# no need to import do_stuff from tasks because it will be sent by name
current_app.send_task('myapp.tasks.do_stuff', args=(1, 'two'), kwargs={'foo': 'bar'})
More in the documentation.
The general approach to solve these seeming circular dependency problems, is to factor out code that can be imported by both the models and the tasks. For example, you could factor out the model methods that you mention. Your models would import this factored out code, and so would the tasks.
How about not using a tasks.py file and just applying the task decorators to methods in models.py?