Daemonizing celery with supervisor issues (Django Rest Framework)

I cannot seem to get Celery working in production using supervisor. I normally run it in local development from the command line without problems. However, when I set it up under supervisor, it does not work. Reading the logs, I am getting the error:
Unable to load celery application.
The module inp_proj was not found.
My .conf file for supervisor is:
[program:inp_proj]
directory=/www/[project_directory]/inp_proj
command=/www/[project_directory]/venv/bin/celery -A inp_proj worker --loglevel=INFO
user=jan
numprocs=1
stdout_logfile=/var/log/celery/inp_proj.log
stderr_logfile=/var/log/celery/inp_proj.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
This is my celery.py file:
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'inp_proj.settings')
app = Celery('inp_proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
It is located inside the project directory, inside inp_proj.
I tried changing the celery command in the config file and adding an environment path to the supervisor config file, but nothing seems to work. If I manually activate the venv with
source venv/bin/activate
and then start the worker manually, it works normally. But when using supervisor, it doesn't work.
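For readers hitting the same error: a frequent cause of "module not found" under supervisor is that the worker processes never see the virtualenv on their PATH. A minimal sketch of such an environment line for the config above, assuming the venv at /www/[project_directory]/venv as in the paths shown:
; a sketch, added to the [program:inp_proj] section above;
; %(ENV_PATH)s keeps the existing PATH (supervisor 3.0+)
environment=PATH="/www/[project_directory]/venv/bin:%(ENV_PATH)s"
With the venv's bin directory first on PATH, the worker resolves the same Python environment that source venv/bin/activate provides in a manual session.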

Related

Celery is not working as a daemon

I am trying to use Django and Celery to automate tasks and want to run Celery as a daemon. I copied the code for celeryd and celerybeat from the official documentation and put it inside the /etc/init.d/ folder.
Below is the code of my celeryd configuration file, which I put inside the /etc/default folder. Lines beginning with # are commented out.
CELERY_BIN="/home/user/.local/bin/celery"
# Name of nodes to start, here we have a single node
CELERYD_NODES="worker"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Where to chdir at start.
CELERYD_CHDIR="/home/user/django_project"
CELERY_APP="django_project"
#How to call "manage.py celeryd_multi"
#CELERYD_MULTI="$CELERYD_CHDIR/django_project/manage.py celeryd_multi"
# Extra arguments to celeryd
#CELERYD_OPTS="--time-limit 300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="user"
CELERYD_GROUP="user"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
# beat settings
CELERYBEAT_OPTS="--scheduler django_celery_beat.schedulers:DatabaseScheduler"
CELERYBEAT_LOG_FILE="/var/log/celery/celeryBeat.log"
CELERYBEAT_PID_FILE="/var/run/celery/celeryBeat.pid"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I started celeryd using the "sudo /etc/init.d/celeryd start" command.
I started celerybeat using the "sudo /etc/init.d/celerybeat start" command.
When I check the status of celery and celerybeat, it shows:
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
Please let me know how to run Celery as a daemon.
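One thing worth checking with "celeryd down: no pidfiles found" (a hedged suggestion, assuming the generic init scripts from the Celery docs): the worker may be failing before it can write its pidfile, often because the pid/log directories are missing or not writable by CELERYD_USER, even with CELERY_CREATE_DIRS=1. A sketch, using the user/group from the config above:
# create the directories the config points at, owned by the worker user
sudo mkdir -p /var/run/celery /var/log/celery
sudo chown user:user /var/run/celery /var/log/celery
# trace the init script line by line to see where startup fails
sudo sh -x /etc/init.d/celeryd start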

django: uwsgi not running with supervisor

This is my uwsgi ini file:
[uwsgi]
chdir=/root/projects/cbapis/cbapis
module=cbAPIs.wsgi:application
env = DJANGO_SETTINGS_MODULE=cbAPIs.settings.production
http=0.0.0.0:8002
workers=1
home=/root/projects/cbapis/cbapis/env
This is the Django wsgi file:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cbAPIs.settings.production")
application = get_wsgi_application()
When I run the server with uwsgi directly, it runs fine:
uwsgi --ini cbapi_uwsgi_config.ini
Below is my supervisor conf for this project:
[program:djangocbapis]
command=uwsgi --ini /root/cbapi_uwsgi_config.ini
environment =
DJ_DEV_SERVER_DB_NAME="****",
DJ_DEV_SERVER_DB_USER="****",
DJ_DEV_SERVER_DB_HOST="*****",
DJ_DEV_SERVER_DB_PASSWORD="*****",
autostart=true
autorestart=true
user=root
priority=400
stderr_logfile=/var/app/cbapis/log/cbapis.log
When I run this uwsgi server via supervisor, it does not run. I get the following error in the log file:
--- no python application found, check your startup logs for errors ---
I have a similar supervisor and uwsgi configuration for my other Django projects on the same server, and they run fine with supervisor and uwsgi.
But I can't figure out why it cannot find the Python application when running with supervisor.
So, please help me with this.
Try giving the full path to uwsgi, like /usr/bin/uwsgi or /usr/local/bin/uwsgi. If you are on Ubuntu/Linux, type the command which uwsgi to get the full path. It will work.
[program:djangocbapis]
command=/usr/bin/uwsgi --ini /root/cbapi_uwsgi_config.ini
autostart=true
autorestart=true
user=root
priority=400
stderr_logfile=/var/app/cbapis/log/cbapis.log
Set up the environment variables in the uwsgi .ini file:
env = DJ_DEV_SERVER_DB_NAME="****"
env = DJ_DEV_SERVER_DB_USER="****"
env = DJ_DEV_SERVER_DB_HOST="*****"
env = DJ_DEV_SERVER_DB_PASSWORD="*****"

how to start worker in djcelery

How can I start a worker in djcelery using Django? I am new to Django and djcelery. I installed Django and djcelery, but I don't know how to start the worker or how to add workers and tasks. Sorry for my bad English.
You generally start a celery worker using the following command:
$ celery -A project worker -l info
Where -A is your application / project.
Since you are using djcelery, you can run it using manage.py too:
$ python manage.py celery worker
Read the Celery docs.
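To make the "how to add the task" part concrete, here is a minimal sketch of a task module (myapp is a hypothetical app name; @shared_task is the plain Celery decorator, which works alongside djcelery's task discovery):
# myapp/tasks.py -- "myapp" is a hypothetical app name
from celery import shared_task

@shared_task
def add(x, y):
    # executed by the worker started with one of the commands above
    return x + y
From Django code, add.delay(2, 3) then queues the task for the running worker.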

Django on Heroku - how can I get a celery worker to run correctly?

I am trying to deploy the simplest possible "hello world" Celery configuration on Heroku for my Django app. My Procfile is as follows:
web: gunicorn myapp.wsgi
worker: celery -A myapp worker -l info -B -b amqp://XXXXX:XXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXX
This is the RABBITMQ_BIGWIG_RX_URL that I'm giving to the celery worker. I have the corresponding RABBITMQ_BIGWIG_TX_URL in my settings file as the BROKER_URL.
If I use these broker URLs in my local dev environment, everything works fine and I can actually use the Heroku RabbitMQ system. However, when I deploy my app to Heroku it isn't working.
This Procfile seems to work (although Celery is giving me memory leak issues).
web: gunicorn my_app.wsgi
celery: celery worker -A my_app -l info --beat -b amqp://XXXXXXXX:XXXXXXXXXXXXXXXXXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXXXXXX
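For completeness, a sketch of the settings side described in the question (hedged; the variable names are the ones the question and the Bigwig add-on use, read from the environment Heroku injects):
# settings.py -- a sketch; Heroku provides the Bigwig URLs as env vars
import os
BROKER_URL = os.environ.get('RABBITMQ_BIGWIG_TX_URL')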

Celery and Django, Logging Celery

I'm running Celery with Django and it works great in development. But now I want to make it live on my production server, and I am running into some issues.
My setup is as follows:
Ubuntu
Nginx
Virtualenv
Upstart
Gunicorn
Django
I'm not sure how to start Celery with Django when everything is started with Upstart, and where does it log to?
I'm starting Django here:
~$ cd /var/www/webapps/minamobime_app
~$ source ../bin/activate
exec /var/www/webapps/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --bind=$IP:$PORT --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
How do I start Celery?
exec python manage.py celeryd -E -l info -c 2
Consider configuring celery as a daemon. For logging, specify:
CELERYD_LOG_FILE="/var/log/celery/%n.log"
where %n will be replaced by the node name.
You can install supervisor using apt-get, and then add the following to a file named celeryd.conf (or any name you wish) in the /etc/supervisor/conf.d folder (create the conf.d folder if it is not present):
; ==================================
; celery worker supervisor example
; ==================================
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/<path to env>/env/bin/celery -A <appname> worker -l info
;enter the directory in which you want to run the app
directory=/home/<path to the app>
user=nobody
numprocs=1
stdout_logfile=/home/<path to the log file>/worker.log
stderr_logfile=/home/<path to the log file>/worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 1000
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Also add the following lines to /etc/supervisor/supervisord.conf:
[include]
files = /etc/supervisor/conf.d/*.conf
Now start supervisor by typing supervisord in a terminal, and celery will start automatically according to the settings you made above.
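If supervisord is already running, you can load the new config without restarting it, using the standard supervisorctl commands:
sudo supervisorctl reread    # detect the new celeryd.conf
sudo supervisorctl update    # start any added or changed programs
sudo supervisorctl status    # confirm the celery program is RUNNING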
You can run:
python manage.py celery worker
This will work if you have djcelery in your INSTALLED_APPS.
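For reference, a minimal sketch of that setting (the surrounding apps are placeholders):
# settings.py
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    # ... your apps ...
    'djcelery',  # enables the manage.py celery commands
)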