Use Django's call_command to start a Celery worker with celerybeat

I'd like to start a Django celery worker from a Python script, with celerybeat. On the command line, I would do:
python manage.py celery worker --beat --schedule celerybeat-schedule.db
I tried this from a script, but it threw an exception:
from django.core.management import call_command
call_command("celery", "worker", "--beat", "--schedule", "celerybeat-schedule.db")

I worked around it by doing this:
from djcelery.management.commands import celery

args = ['manage.py', 'celery', 'worker', '--beat', '--schedule',
        'celerybeat-schedule']
command = celery.Command()
command.run_from_argv(args)
But if it's possible to use call_command, I'd like to know how.
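If call_command keeps rejecting the arguments, one workaround (a sketch only, not djcelery's documented API) is to shell out to manage.py exactly as on the command line:

```python
import subprocess
import sys

# Sketch: reproduce the working CLI invocation from a script by spawning
# manage.py as a child process instead of going through call_command.
cmd = [
    sys.executable, "manage.py", "celery", "worker",
    "--beat",                                # embed the beat scheduler
    "--schedule", "celerybeat-schedule.db",  # beat's schedule file
]
# subprocess.run(cmd) blocks until the worker exits; use subprocess.Popen(cmd)
# to start it in the background and keep a handle to terminate it later.
```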

I tend to do this:
./manage.py celeryd --event --beat --loglevel=INFO --logfile=./celeryd.log
Then, to run the camera:
./manage.py celeryev --camera=djcelery.snapshot.Camera --logfile=./celeryev.log
Hope this helps.


Celery and Django on production machine

/opt/fubar/fubar/celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'fubar.settings')
app = Celery('fubar')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
/opt/fubar/fubar/__init__.py
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app', )
/etc/supervisor/conf.d/celeryworker.conf
[program:fubar-celery]
command=/opt/fubar/env/bin/celery worker -A fubar --loglevel=INFO
directory=/opt/fubar
user=www-data
numprocs=1
stdout_logfile=/var/log/celery/fubar/worker.log
stderr_logfile=/var/log/celery/fubar/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
$ service rabbitmq-server status
● rabbitmq-server.service - RabbitMQ Messaging Server
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-07-26 06:19:47 UTC; 19h ago
Main PID: 21884 (rabbitmq-server)
Tasks: 92 (limit: 4915)
CGroup: /system.slice/rabbitmq-server.service
├─21884 /bin/sh /usr/sbin/rabbitmq-server
├─21905 /bin/sh /usr/lib/rabbitmq/bin/rabbitmq-server
├─22073 /usr/lib/erlang/erts-9.2/bin/epmd -daemon
├─22185 /usr/lib/erlang/erts-9.2/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 32000 -K true -B i
├─22299 erl_child_setup 65536
├─22372 inet_gethost 4
└─22373 inet_gethost 4
Running $ celery worker -A fubar --loglevel=INFO on localhost returns:
[tasks]
. fubar.celery.debug_task
. apps.raptor.tasks.launchraptor
. apps.raptor.tasks.nolaunch
while I see no tasks in the log file in production
Apache error log shows:
mod_wsgi (pid=26854): Exception occurred processing WSGI script '/var/www/fubar/fubar.wsgi'.
...
celery.exceptions.NotRegistered: 'apps.raptor.tasks.launchraptor'
I installed supervisor with pip install supervisor to get v4.2.0
What can I run to test whether things are configured properly?
Why does the celery worker started by supervisor not find the tasks that show up when it is run manually as celery worker?
I got rid of RabbitMQ and moved to Redis; I had more success installing and configuring it. (This doesn't answer the question as asked.)
The observation was that supervisor installed with pip install supervisor doesn't work, while the package from apt install supervisor does. I don't know why.
tail -f /var/log/celery/fubar/worker.log is the best way to see what's going on with the worker.
celery inspect ... also works for a snapshot of what's happening.
I had to change command= in the conf.d/fubar*.conf files to point to a shell script, which worked better than calling celery from the conf file itself. Also, shell scripts must be owned by the user= value and set to +x.
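One more debugging angle on the NotRegistered error, sketched in plain Python under the assumption that Celery's default task naming is in play: Celery registers each task under its dotted module path, so the worker and the WSGI process must import the tasks module under the same path.

```python
# Illustration only: Celery's default task name is "<module>.<function>",
# so the same function imported under two different module paths gets two
# different names, and a worker that registered one will reject the other.
def default_task_name(module_path, func_name):
    return "{0}.{1}".format(module_path, func_name)

# Worker started from /opt/fubar with "apps" directly on sys.path:
worker_name = default_task_name("apps.raptor.tasks", "launchraptor")
# WSGI process that imported the same module as "fubar.apps.raptor.tasks":
caller_name = default_task_name("fubar.apps.raptor.tasks", "launchraptor")
print(worker_name)                 # apps.raptor.tasks.launchraptor
print(worker_name == caller_name)  # False -> NotRegistered on one side
```

Comparing the [tasks] banner printed at worker startup against the name in the traceback is the quickest way to spot this kind of mismatch.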

raise ConnectionError(self._error_message(e)) kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused

A minimal django/celery/redis setup runs locally, but when deployed to Heroku it gives me the following error when I run it from python:
raise ConnectionError(self._error_message(e))
kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection refused.
This is my tasks.py file in my application directory:
from celery import Celery
import os

app = Celery('tasks', broker='redis://localhost:6379/0')
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])

@app.task
def add(x, y):
    return x + y
requirements.txt:
django
gunicorn
django-heroku
celery
redis
celery-with-redis
django-celery
kombu
I have set worker dyno to 1.
Funny thing is, I could have sworn it was working before; now it doesn't work for some reason.
Once you have a minimal django-celery-redis project set up locally, here is how you deploy it on heroku:
Add to your tasks.py:
import os
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
Make sure your requirements.txt is like this:
django
gunicorn
django-heroku
celery
redis
Add to your Procfile: "worker: celery worker --app=hello.tasks.app"
Make sure it still runs on local
enter into terminal: "export REDIS_URL=redis://"
run "heroku local&"
run python
import hello.tasks
hello.tasks.add.delay(1,2)
Should return something like:
<AsyncResult: e1debb39-b61c-47bc-bda3-ee037d34a6c4>
"heroku apps:create minimal-django-celery-redis"
"heroku addons:create heroku-redis -a minimal-django-celery-redis"
"git add ."
"git commit -m "Demo""
"git push heroku master"
"heroku open&"
"heroku ps:scale worker=1"
"heroku run python"
import hello.tasks
hello.tasks.add.delay(1, 2)
You should see the task running in the application logs: "heroku logs -t -p worker"
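A small guard worth considering for the tasks.py step above (my own suggestion, not part of the original recipe): read REDIS_URL with a localhost fallback, so the same file works on Heroku and on a dev machine that hasn't exported the variable.

```python
import os

# Prefer the platform-provided REDIS_URL (set by the heroku-redis addon),
# falling back to a local Redis for development.
broker_url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
# app.conf.update(BROKER_URL=broker_url, CELERY_RESULT_BACKEND=broker_url)
```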
This solved it for me: I forgot to import celery in project/__init__.py, like so:
from .celery import app as celery_app
__all__ = ("celery_app",)

Django call_command permissions nginx+gunicorn+supervisord

I have installed this https://github.com/andybak/django-backup
The app just provides a backup management command.
What I do:
1. $ python manage.py backup
Everything is fine. Backup created!
2. $ python manage.py shell
from django.core.management import call_command
call_command('backup')
Everything is fine. Backup created!
3. I've created a view:
from django.core.management import call_command
from django.shortcuts import redirect

def backup(request):  # /admin/backup/
    call_command('backup')
    return redirect(request.META.get('HTTP_REFERER'))
4. $ python manage.py runserver 0.0.0.0:9999
Go to browser mywebsite.com:9999/admin/backup/
Everything is fine. Backup created!
5. But when I run my website through nginx+gunicorn+supervisord and go in the browser to mywebsite.com/admin/backup/, the backup file is empty.
Maybe it is all about permissions? Please help.
Django 1.7
EDIT:
6. /var/env/project/bin/gunicorn core.wsgi -b 0.0.0.0:9999 --user=root --group=root and go to browser mywebsite.com:9999/admin/backup/
Everything is fine. Backup created!
/etc/supervisord.conf:
[program:project]
command=/var/env/project/bin/gunicorn core.wsgi -b 0.0.0.0:8000 --user=root --group=root
directory=/var/www/project/
environment=PATH="/var/env/project/bin/activate",DJANGO_SETTINGS_MODULE="core.settings_prod"
user=root
autostart=true
autorestart=true
redirect_stderr=true
Got it! The mistake was in supervisord.conf: environment=PATH= should be environment=PYTHONPATH=
environment=PYTHONPATH="/var/env/project/bin/activate",DJANGO_SETTINGS_MODULE="core.settings_prod"
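For comparison, a stanza along these lines (a sketch reflecting common practice, not the poster's verified config) sidesteps the PATH/PYTHONPATH confusion by invoking the virtualenv's gunicorn binary directly, which makes sourcing the activate script unnecessary:

```ini
[program:project]
; The venv's bin/gunicorn already resolves its own interpreter, so no
; activate script is needed in environment=.
command=/var/env/project/bin/gunicorn core.wsgi -b 0.0.0.0:8000
directory=/var/www/project/
environment=DJANGO_SETTINGS_MODULE="core.settings_prod"
user=root
autostart=true
autorestart=true
redirect_stderr=true
```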

How to call task properly?

I configured django-celery in my application. This is my task:
from celery.decorators import task
import simplejson as json
import requests
import logging

logger = logging.getLogger(__name__)

@task
def call_api(sid):
    try:
        results = requests.put(
            'http://localhost:8000/api/v1/sids/' + str(sid) + "/",
            data={'active': '1'}
        )
        json_response = json.loads(results.text)
    except Exception, e:
        print e
    logger.info('Finished call_api')
When I add in my view:
call_api.apply_async(
    (instance.service.id,),
    eta=instance.date
)
celeryd shows me:
Got task from broker: my_app.tasks.call_api[755d50fd-0f0f-4861-9a18-7f4e4563290a]
Task my_app.tasks.call_api[755d50fd-0f0f-4861-9a18-7f4e4563290a] succeeded in 0.00513911247253s: None
so it should be good, but nothing happens... There is no call to, for example:
http://localhost:8000/api/v1/sids/1/
What am I doing wrong?
Are you running celery as a separate process?
For example in Ubuntu run using the command
sudo python manage.py celeryd
Until you run celery (or django-celery) as a separate process, the jobs will be stored in the database (or the queue, or whatever persistence mechanism you have configured, generally in settings.py).
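The eta=instance.date call in the question is also worth double-checking (an aside on my part, not something the answer above raised): if instance.date is a naive datetime while Celery is configured for UTC, the task can fire at an unexpected time. Building a timezone-aware datetime avoids the ambiguity:

```python
from datetime import datetime, timedelta, timezone

# An aware datetime carries its UTC offset, so the scheduler cannot
# misinterpret the intended wall-clock moment.
eta = datetime.now(timezone.utc) + timedelta(minutes=5)
# call_api.apply_async((sid,), eta=eta)  # task from the question above
```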

Celery: why does my task work ONLY if I run it manually in the shell (manage.py shell)?

>>> from app.tasks import SendSomething
>>> eager_result = SendSomething().apply()
Why does my task work ONLY if I run it manually in the shell (manage.py shell)?
settings.py
from datetime import timedelta
CELERYBEAT_SCHEDULE = {
    'send-something': {
        'task': 'app.tasks.SendSomething',
        'schedule': timedelta(seconds=300),
    },
}
I run:
python manage.py celeryd
and I have:
[Tasks]
. app.tasks.SendSomething
[2013-05-01 18:44:22,895: WARNING/MainProcess] celery@aaa ready.
but it is not working.
celeryd is the worker process. By default it does not schedule the periodic tasks. You can either run it with the -B option, which runs the beat process along with the worker:
python manage.py celeryd -B
or you can run an additional celerybeat process
python manage.py celerybeat
See http://celery.readthedocs.org/en/latest/userguide/periodic-tasks.html#starting-the-scheduler