Django and Celery - re-loading code into Celery after a change - django

If I make a change to tasks.py while celery is running, is there a mechanism by which it can re-load the updated code? Or do I have to shut Celery down and re-load it?
I read celery had an --autoreload argument in older versions, but I can't find it in the current version:
celery: error: unrecognized arguments: --autoreload

Unfortunately --autoreload doesn't work; it is deprecated.
You can use Watchdog, which provides watchmedo, a shell utility that performs actions based on file events.
pip install watchdog
You can start the worker with
watchmedo auto-restart -- celery worker -l info -A foo
By default it watches all files in the current directory. This can be changed by passing the corresponding parameters.
watchmedo auto-restart -d . -p '*.py' -- celery worker -l info -A foo
Add the -R option to watch files recursively.
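For example, to watch only Python files under the current directory tree, the combined command would look like this:

watchmedo auto-restart -d . -R -p '*.py' -- celery worker -l info -A foo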
If you are using Django and don't want to depend on watchdog, there is a simple trick to achieve this. Django has an autoreload utility which is used by runserver to restart the WSGI server when code changes.
The same functionality can be used to reload celery workers. Create a separate management command called celery, write a function to kill the existing worker and start a new worker, and hook this function into autoreload as follows. For Django >= 2.2:
import sys
import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


class Command(BaseCommand):
    def handle(self, *args, **options):
        autoreload.run_with_reloader(self._restart_celery)

    @classmethod
    def _restart_celery(cls):
        if sys.platform == "win32":
            cls.run('taskkill /f /t /im celery.exe')
            cls.run('celery -A phoenix worker --loglevel=info --pool=solo')
        else:  # probably ok for linux2, cygwin and darwin. Not sure about os2, os2emx, riscos and atheos
            cls.run('pkill celery')
            cls.run('celery worker -l info -A foo')

    @staticmethod
    def run(cmd):
        subprocess.call(shlex.split(cmd))
For Django < 2.2:
import sys
import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


class Command(BaseCommand):
    def handle(self, *args, **options):
        autoreload.main(self._restart_celery)

    @classmethod
    def _restart_celery(cls):
        if sys.platform == "win32":
            cls.run('taskkill /f /t /im celery.exe')
            cls.run('celery -A phoenix worker --loglevel=info --pool=solo')
        else:  # probably ok for linux2, cygwin and darwin. Not sure about os2, os2emx, riscos and atheos
            cls.run('pkill celery')
            cls.run('celery worker -l info -A foo')

    @staticmethod
    def run(cmd):
        subprocess.call(shlex.split(cmd))
Now you can run the celery worker with python manage.py celery, which will autoreload when the codebase changes.
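As a reminder, the management command file lives in the standard Django location inside any app listed in INSTALLED_APPS (the app name here is only an example):

your_app/
    management/
        __init__.py
        commands/
            __init__.py
            celery.py    # the Command class above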
This is only for development purposes; do not use it in production.

You could try sending SIGHUP to the parent worker process; it restarts the worker, but I'm not sure if it picks up new tasks. Worth a shot, though :)
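For example, something along these lines (the placeholder is whatever PID your parent celery worker process has):

ps aux | grep 'celery worker'    # find the parent worker PID
kill -HUP <parent-worker-pid>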

FYI, for anyone using Docker, I couldn't find an easy way to make the above options work, but I found (along with others) another little script here which does use watchdog and works perfectly.
Save it as a some_name.py file in your main directory, add psutil and watchdog to requirements.txt, update the path/cmdline variables at the top, then in the worker container of your docker-compose.yml insert:
command: python ./some_name.py

Watchmedo doesn't work for me inside a docker container.
This is the way I made it work with Django:
# worker_dev.py (put it next to manage.py)
from django.utils import autoreload


def run_celery():
    from projectname import celery_app

    celery_app.worker_main(["-Aprojectname", "-linfo", "-Psolo"])


print("Starting celery worker with autoreload...")
autoreload.run_with_reloader(run_celery)
Then run python worker_dev.py or set it as your Dockerfile CMD or docker-compose command.
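For example, assuming the file is named worker_dev.py as above, the entry point might look like this:

# docker-compose.yml, worker service
command: python worker_dev.py

# or equivalently in the Dockerfile
CMD ["python", "worker_dev.py"]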

Related

Celery + Django not working at the same time

I have a Django 2.0 project that is working fine; it's integrated with Celery 4.1.0. I am using jQuery to send an ajax request to the backend, but I just realized it's loading endlessly due to some issues with celery.
Celery Settings (celery.py)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'converter.settings')

app = Celery('converter', backend='amqp', broker='amqp://guest@localhost//')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Celery Tasks (tasks.py)
from __future__ import absolute_import, unicode_literals
from celery import shared_task


@shared_task(time_limit=300)
def add(number1, number2):
    return number1 + number2
Django View (views.py)
class AddAjaxView(JSONResponseMixin, AjaxResponseMixin, View):
    def post_ajax(self, request, *args, **kwargs):
        url = request.POST.get('number', '')
        task = tasks.convert.delay(url, client_ip)
        result = AsyncResult(task.id)
        data = {
            'result': result.get(),
            'is_ready': True,
        }
        if result.successful():
            return self.render_json_response(data, status=200)
When I send an ajax request to the Django app it loads endlessly, but when I terminate the Django server and run celery -A demoproject worker --loglevel=info, that's when my tasks run.
Question
How do I automate this so that when I run the Django project, my celery tasks work automatically when I send an ajax request?
In a development environment you have to run the celery worker manually, as it does not run automatically in the background to process the jobs in the queue. So if you want a smooth workflow, you need both the Django development server and the celery worker running. As stated in the documentation:
In a production environment you’ll want to run the worker in the background as a daemon - see Daemonization - but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you’d use Django’s manage.py runserver:
celery -A proj worker -l info
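In practice that means keeping two processes running during development; assuming the project package is converter, as in the celery.py above, something like:

python manage.py runserver
celery -A converter worker -l info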
You can read their documentation for daemonization.
http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

Can't start the worker for Running celery with Flask

I am following the example given in the following url to run celery with Flask:
http://flask.pocoo.org/docs/0.12/patterns/celery/
I followed everything word for word. The only difference is that my make_celery function is created under the following hierarchy:
package1/
|------ CeleryObjCreator.py

CeleryObjCreator.py has the make_celery function in the CeleryObjectHelper class, as follows:
from celery import Celery


class CeleryObjectHelper:
    def make_celery(self, app):
        celery = Celery(app.import_name,
                        backend=app.config['CELERY_RESULT_BACKEND'],
                        broker=app.config['CELERY_BROKER_URL'])
        celery.conf.update(app.config)
        TaskBase = celery.Task

        class ContextTask(TaskBase):
            abstract = True

            def __call__(self, *args, **kwargs):
                with app.app_context():
                    return TaskBase.__call__(self, *args, **kwargs)

        celery.Task = ContextTask
        return celery
Now, I am facing problems with starting the celery worker.
At the end of the article, it suggests starting the celery worker as follows:
$ celery -A your_application.celery worker
In my case, I am using <> for the your_application string, which doesn't work and gives the following error:
ImportError: No module named 'package1.celery'
So I am not sure what the value of the your_application string should be here to start the celery worker.
EDIT
As suggested by Nour Chawich, I did try running the Flask app from the command line. My server does come up successfully.
Also, since app is a directory in my project structure where app.py lives, in the app.py code I replaced app = Flask(__name__) with flask_app = Flask(__name__) to separate out the variable names.
But when I try to start the celery worker using the command
celery -A app.celery -loglevel=info
it is not able to recognize the following imports that I have in my code
import app.myPackage as myPackage
it throws the following error
ImportError: No module named 'app'
So I am really not sure what is going on here. Any ideas?

Django-Celery: importing from another app into project's celery.py file

I am following the tutorial here to get periodic tasks defined in my Django project working.
The article suggests having a celery.py file of the form:
from celery import Celery
from celery.schedules import crontab

app = Celery()


@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls my_task('hello') every 10 seconds.
    sender.add_periodic_task(10.0, my_task.s('hello'), name='add every 10')


@app.task
def my_task(arg):
    print(arg)
which works. Now this is good, but I don't want to define my tasks locally. My question is: how can I add tasks from other apps?
I have created a blank project called my_proj and it has two apps: my_proj and app_with_tasks. The celery.py file above is at the root level in the my_proj app's directory, and I want to add periodic tasks from app_with_tasks's tasks.py file.
I do have app_with_tasks listed in INSTALLED_APPS in the my_proj settings file, but I still can't import anything from one app into another.
My understanding is that I should use:
from app_with_tasks.tasks import task1
but my_proj will then show up as an unresolved reference in PyCharm.
I'll tell you what I'm using; maybe it helps you.
my_proj/celery.py
import os

import celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_proj.settings')

app = celery.Celery('app_django')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Then in app_with_tasks, add a file tasks.py:
from my_proj.celery import app
from django.apps import apps


@app.task(bind=False)
def your_task(some_arg):
    A_Model = apps.get_model('my_proj', 'A_Model')
    ....
Command to start the celery worker (restart this every time you change a task to reload the tasks.py files):
/path/to/virtualenv/bin/celery --app=my_proj.celery:app --loglevel=INFO --concurrency=4 -n default_worker worker
To call the task (here you should use your add_periodic_task code)
from app_with_tasks.tasks import your_task
your_task.apply_async(args=[123], kwargs=None)
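To tie this back to the periodic scheduling in the question, one rough (untested) sketch is to register the imported task inside the on_after_configure handler shown earlier:

# in my_proj/celery.py, below the app definition (hypothetical sketch)
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # imported here to avoid a circular import, since app_with_tasks.tasks
    # itself does `from my_proj.celery import app`
    from app_with_tasks.tasks import your_task

    # run your_task(123) every 10 seconds
    sender.add_periodic_task(10.0, your_task.s(123), name='your_task every 10 seconds')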

How to execute multiple django celery tasks with a single worker?

Here is my celery file:
from __future__ import absolute_import
import os

from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ets.settings')

from django.conf import settings  # noqa

app = Celery('proj',
             broker='redis://myredishost:6379/0',
             backend='redis://myredishost:6379/0',
             include=['tracking.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
)

if __name__ == '__main__':
    app.start()
Here is my task file:
@app.task
def escalate_to_sup(id, group):
    escalation_email, created = EscalationEmail.objects.get_or_create()
    escalation_email.send()
    return 'sup email sent to: ' + str(group)


@app.task
def escalate_to_fm(id, group):
    escalation_email, created = EscalationEmail.objects.get_or_create()
    escalation_email.send()
    return 'fm email sent to: ' + str(group)
I start the worker like this:
celery -A ets worker -l info
I have also tried to add concurrency like this:
celery -A ets worker -l info --concurrency=10
I attempt to call the tasks above with the following:
from tracking.tasks import escalate_to_fm, escalate_to_sup


def status_change(equipment):
    r1 = escalate_to_sup.apply_async((equipment.id, [1, 2]), countdown=10)
    r2 = escalate_to_fm.apply_async((equipment.id, [3, 4]), countdown=20)
    print r1.id
    print r2.id
This prints:
c2098768-61fb-41a7-80a2-f79a73570966
23959fa3-7f80-4e20-a42f-eef75e9bedeb
The escalate_to_sup and escalate_to_fm functions log to the worker intermittently. At least one executes, but never both.
I have tried spinning up more workers, and then both tasks execute. I do this like:
celery -A ets worker -l info --concurrency=10 -n worker1.%h
celery -A ets worker -l info --concurrency=10 -n worker2.%h
The problem is I don't know how many tasks might execute concurrently, so spinning up a worker for every possible task is not feasible.
Does celery expect a worker for every active task?
How do I execute multiple tasks with a single worker?

celery: error: unrecognized arguments: -A, Flask, argparse

In a Flask-based web application, I take two command line arguments (an ini filename and a port number) using argparse, and the celery app is also defined in the same file. But while running the celery application I'm getting the error above.
import argparse

from flask import Flask
from celery import Celery
from kombu import Queue, Exchange

app = Flask(__name__)

parser = argparse.ArgumentParser(prog="testpgm")
parser.add_argument('-c', '--cfgfile', default='domain.ini', help="provide ini file path")
parser.add_argument('-p', '--port', default=5000, help="-p port number eg - 'python run.py -p <port>, default to 5000")
args = parser.parse_args()
ini_path = args.cfgfile
port = args.port

# ------- CELERY CONFIGS -------
app.config["CELERY_QUEUES"] = (
    Queue('queue1', Exchange('queue1'), routing_key='queue1')
)


def make_celery(flaskapp):
    # getting celery broker uri
    celery_broker_uri = CeleryBrokerWrapper().get_broker_uri(broker, username, password, host, port, vhost)
    celeryinit = Celery(flaskapp.import_name, broker=celery_broker_uri)
    celeryinit.conf.update(flaskapp.config)
    taskbase = celeryinit.Task

    class ContextTask(taskbase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return taskbase.__call__(self, *args, **kwargs)

    celeryinit.Task = ContextTask
    return celeryinit


celery = make_celery(app)
But when I'm running celery using
celery -A testpgm.celery worker --loglevel=info --concurrency=5 -Q queue1
I'm getting an error like
testpgm: error: unrecognized arguments: -A testpgm.celery worker --loglevel=info --concurrency=5 -Q queue1
It looks like an argparse error. How can I customise argparse for my application without it clashing with celery's command line arguments?
Had a similar issue, argparse also complained for me.
Quick Fix: use parse_known_args, as opposed to parse_args (a worked sketch follows the three options below)
args, unknown = parser.parse_known_args()
source:
Python argparse ignore unrecognised arguments
Ugly Fix:
define the celery worker args as part of the argparse your main app has
"Do it right" Fix:
Consider using argparse in your main function so that celery does not clash with it
Handling argparse conflicts
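Applied to the snippet in the question, the quick fix would look roughly like this; only the parsing line changes, and anything celery puts on the command line ends up in unknown and is simply ignored:

parser = argparse.ArgumentParser(prog="testpgm")
parser.add_argument('-c', '--cfgfile', default='domain.ini', help="provide ini file path")
parser.add_argument('-p', '--port', default=5000, help="port number, defaults to 5000")

# parse_known_args() returns (namespace, leftover_args) instead of erroring
# out on arguments it does not recognise, such as celery's -A/-Q flags
args, unknown = parser.parse_known_args()
ini_path = args.cfgfile
port = args.port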
You need to re-order the args:
celery worker -A testpgm.celery --loglevel=info --concurrency=5 -Q queue1