Celery workers under Supervisord are not isolated - Django

.conf file
[program:task1]
directory=/home/ubuntu/proj1
command=/usr/bin/python3 /usr/local/bin/celery -A proj1 worker -l info --concurrency=10 -n proj1_worker@%%h
user=ubuntu
numprocs=1
stdout_logfile=/var/log/proj1_celeryd.log
stderr_logfile=/var/log/proj1_celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
priority=998
[program:task2]
directory=/home/ubuntu/proj2/
command=/usr/bin/python3 /usr/local/bin/celery -A proj2 worker -l info --concurrency=10 -n proj2_worker@%%h
user=ubuntu
numprocs=1
stdout_logfile=/var/log/proj2_celeryd.log
stderr_logfile=/var/log/proj2_celeryd.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs=600
priority=998
[group:celeryworkers]
programs=task1,task2
proj1_worker and proj2_worker are not isolated from each other.
proj1_worker always gets invoked, even when I call proj2_worker.
I don't know where I am going wrong. Kindly assist.
Thank you in advance.

First of all, I really recommend using a separate virtualenv for each project. Create 2 virtualenvs (you can choose your own location), see https://docs.python.org/3/library/venv.html.
python3 -m venv /home/ubuntu/virtualenvs/proj1
python3 -m venv /home/ubuntu/virtualenvs/proj2
Activate each virtualenv and install Celery:
source /home/ubuntu/virtualenvs/proj1/bin/activate
pip install --upgrade celery
source /home/ubuntu/virtualenvs/proj2/bin/activate
pip install --upgrade celery
Your supervisor configuration should then look like this:
[program:task1]
directory=/home/ubuntu/proj1
command=/home/ubuntu/virtualenvs/proj1/bin/celery worker -A proj1 -l info --concurrency=10 -n proj1_worker@%%h
# ...
[program:task2]
directory=/home/ubuntu/proj2
command=/home/ubuntu/virtualenvs/proj2/bin/celery worker -A proj2 -l info --concurrency=10 -n proj2_worker@%%h
# ...
Next, create 2 separate virtual hosts for your projects:
rabbitmqctl add_user proj_1 <PASSWORD>
rabbitmqctl add_vhost proj_1_vhost
rabbitmqctl set_permissions -p proj_1_vhost proj_1 ".*" ".*" ".*"
rabbitmqctl add_user proj_2 <PASSWORD>
rabbitmqctl add_vhost proj_2_vhost
rabbitmqctl set_permissions -p proj_2_vhost proj_2 ".*" ".*" ".*"
Finally, modify the Celery configuration to use the newly created virtual hosts:
app = Celery('proj1_celery_app')
app.conf.update(
# ...
broker_url='amqp://proj_1:<PASSWORD>@localhost:5672/proj_1_vhost'
# ...
)
app = Celery('proj2_celery_app')
app.conf.update(
# ...
broker_url='amqp://proj_2:<PASSWORD>@localhost:5672/proj_2_vhost'
# ...
)
For more info about rabbit vhosts see this SO post: Running multiple instances of celery on the same server.
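Besides separate vhosts, you can also make the isolation explicit by giving each project its own queue, so one worker never consumes the other project's tasks. A minimal sketch, assuming hypothetical queue names that are not part of the original setup:
# proj1/proj1/celery.py (sketch)
from celery import Celery

app = Celery('proj1_celery_app')
app.conf.update(
    broker_url='amqp://proj_1:<PASSWORD>@localhost:5672/proj_1_vhost',
    # send and consume only this project's tasks on a dedicated queue
    task_default_queue='proj1_queue',
    task_routes={'proj1.*': {'queue': 'proj1_queue'}},
)
The worker is then started with -Q proj1_queue added to its supervisor command, and proj2 gets the same treatment with its own queue name.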

Related

django-celery-beat doesn't start within supervisor, only manually

I've installed Celery[sqs] and django-celery-beat in my Django 1.10 project.
I'm trying to run them both (worker and beat) using Supervisor on an Elastic Beanstalk instance.
The Supervisor config is being created dynamically with the following script:
#!/usr/bin/env bash
# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# create celery beat config script
celerybeatconf="[program:celery-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A phsite --loglevel=DEBUG --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# create celery worker config script
celeryworkerconf="[program:celery-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A phsite --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
After that, the following ebextension runs:
container_commands:
01_create_celery_beat_configuration_file:
command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
02_chmod_supervisor_sock:
command: "chmod 777 /opt/python/run/supervisor.sock"
03_create_logs:
command: "touch /var/log/celery-beat.log /var/log/celery-worker.log"
04_chmod_logs:
command: "chmod 777 /var/log/celery-beat.log /var/log/celery-worker.log"
05_start_celery_worker:
command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celery-worker"
06_start_celery_beat:
command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf start celery-beat"
When logging in to the instance and running
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status
celery-beat shows as not started (with an empty log file) while celery-worker is running.
The weirdest part is that if I run it manually, e.g.
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf start celery-beat
it runs without errors.
Does anyone have any idea how to debug this?
Why would it not start within the ebextension while it does start later?
Maybe it has to do with the fact that Django is not up yet and I am using the django_celery_beat.schedulers:DatabaseScheduler configuration?
So the simple reason is: the shell script created in the ebextension:
container_commands:
01_create_celery_beat_configuration_file:
command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
is created in the appdeploy/post directory and therefore runs post-deployment, i.e. after the container_commands that follow it have executed.
The start/restart commands don't do anything because the shell script hasn't registered those services with Supervisor yet. 🤦‍♂️

How to configure Supervisor with Django Channels and the Daphne server

I have a problem with my Supervisor configuration. My file is in /etc/supervisor/conf.d/realtimecolonybit.conf.
When I run the command supervisorctl reread, it shows me "No config updates to processes", and when I try another command like this
supervisorctl status realtimecolonybit
it shows me this error:
realtimecolonybit FATAL can't find command '/home/ubuntu/realtimecolonybit/bin/start.sh;'
And when I try supervisorctl start realtimecolonybit
it shows me this error:
realtimecolonybit: ERROR (no such file)
My configuration in the file realtimecolonybit.conf is below:
[program:realtimecolonybit]
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log;
redirect_strderr = true;
My configuration from my file start.sh is below
#!/bin/bash
NAME="realtimecolonybit"
DJANGODIR=/home/ubuntu/realtimecolonybit/colonybit
SOCKFILE=/home/ubuntu/realtimecolonybit/run/gunicorn.sock
USER=root
GROUP=root
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=colonybit.settings
echo "Starting $NAME as `whoami`"
cd $DJANGODIR
source /home/ubuntu/realtimecolonybit/bin/activate
# workon realtimecolonybit
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPAHT=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec daphne -b 0.0.0.0 -p 8001 colonybit.asgi:application
When I run it without Supervisor, like this:
(realtimecolonybit)realtimecolonybit/#/ ./bin/start.sh
it runs fine and works well, but sometimes the server goes down.
I am trying to run Django 1.11 and django-channels with Supervisor; my app is on AWS.
I solved my problem. The error was in the .conf file: I removed the trailing ; and the .sh extension, changing start.sh to start.
wrong command
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
correct command
command = /home/ubuntu/realtimecolonybit/bin/start
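For reference, the daphne target colonybit.asgi:application in start.sh has to resolve to an ASGI application object. A minimal sketch of colonybit/asgi.py, assuming Channels 2 (adjust to your installed version):
# colonybit/asgi.py
import os

import django
from channels.routing import get_default_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "colonybit.settings")
django.setup()
application = get_default_application()
If this module does not import cleanly, daphne will fail in the same way whether it is started by hand or by Supervisor.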

How to run a celery worker with a Django app scalable by AWS Elastic Beanstalk?

How do you use Django with AWS Elastic Beanstalk so that it also runs Celery tasks on the main node only?
This is how I set up Celery with Django on Elastic Beanstalk with scalability working fine.
Please keep in mind that the 'leader_only' option for container_commands works only on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
Add a bash script for the celery worker and beat configuration.
Add file root_folder/.ebextensions/files/celery_configuration.txt:
#!/usr/bin/env bash
# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
Take care of script execution during deployment, but only on the main node (leader_only: true).
Add file root_folder/.ebextensions/02-python.config:
container_commands:
04_celery_tasks:
command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
leader_only: true
05_celery_tasks_run:
command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
leader_only: true
Beat schedules are configurable without redeployment, via a separate Django application: https://pypi.python.org/pypi/django_celery_beat.
Storing task results is a good idea too: https://pypi.python.org/pypi/django_celery_results
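For example, with django_celery_beat installed and beat started with -S django (as in the script above), a periodic task can be added from Django code or the admin instead of redeploying. A minimal sketch; the task path is a placeholder:
# register a periodic task against the database scheduler
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=30, period=IntervalSchedule.SECONDS,
)
PeriodicTask.objects.get_or_create(
    name='sample periodic task',
    task='django_app.tasks.sample_task',
    interval=schedule,
)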
File requirements.txt
celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"
Configure celery for Amazon SQS broker
(Get your desired endpoint from list: http://docs.aws.amazon.com/general/latest/gr/rande.html)
root_folder/django_app/settings.py:
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Due to an error in the lib, the N. Virginia region is used temporarily. Please set it to Ireland ("eu-west-1") after the fix.
CELERY_BROKER_TRANSPORT_OPTIONS = {
"region": "eu-west-1",
'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
'visibility_timeout': 360,
'polling_interval': 1
}
...
Celery configuration for the django_app Django app
Add file root_folder/django_app/celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')
app = Celery('django_app')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
Modify file root_folder/django_app/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app
__all__ = ['celery_app']
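With the app wired up like this, autodiscover_tasks() will pick up tasks defined in each installed app's tasks.py. A minimal hypothetical example:
# root_folder/django_app/tasks.py (hypothetical example task)
from __future__ import absolute_import, unicode_literals

from celery import shared_task

@shared_task
def add(x, y):
    return x + y
Such a task can then be queued from Django code with add.delay(2, 3), and the SQS-backed worker configured above will execute it.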
Check also:
How do you run a worker with AWS Elastic Beanstalk? (solution without scalability)
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized" (solution for problems coming from obsolete pip on Elastic Beanstalk that cannot deal with the global options needed to properly solve the pycurl dependency)
This is how I extended the answer by @smentek to allow for multiple worker instances and a single beat instance - the same caveat applies where you have to protect your leader (I don't have an automated solution for that yet).
Please note that environment variable updates to EB via the EB CLI or the web interface are not reflected by celery beat or the workers until an app server restart has taken place. This caught me off guard once.
A single celery_configuration.sh file outputs two scripts for supervisord; note that celery-beat has autostart=false, otherwise you end up with many beats after an instance restart:
# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
Then in container_commands we only restart beat on leader:
container_commands:
# create the celery configuration file
01_create_celery_beat_configuration_file:
command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
# restart celery beat if leader
02_start_celery_beat:
command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
leader_only: true
# restart celery worker
03_start_celery_worker:
command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"
If someone is following smentek's answer and getting the error:
05_celery_tasks_run: /usr/bin/env bash does not exist.
know that, if you are using Windows, your problem might be that the "celery_configuration.txt" file has Windows line endings (CRLF) when it should have Unix line endings (LF). If using Notepad++, open the file and click "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and the error is gone.
Also, a couple of warnings for really-amateur people like me:
Be sure to include "django_celery_beat" and "django_celery_results" in INSTALLED_APPS in your settings.py file (see the snippet below).
To check Celery errors, connect to your instance with "eb ssh" and then run "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where 40 is the number of lines you want to read from the end of the file).
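A minimal sketch of that settings.py addition (existing apps elided):
# settings.py
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_celery_beat',
    'django_celery_results',
]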
Hope this helps someone, it would've saved me some hours!

Gunicorn settings for virtualenvwrapper

I am trying to deploy a test site that I made using Django and virtualenvwrapper. I want to use nginx for requests. I followed Taskbuster's tutorial, so my project layout looks like this:
--abctasarim **main folder
--manage.py **django manage file
----/yogavidya ** project folder
----/yogavidya/wsgi.py **wsgi file
----/yogavidya/settings/base.py ***settings
I prepared a script to use with gunicorn and pointed the virtualenv path at my virtualenvwrapper envs:
#!/bin/bash
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/ytsejam/.virtualenvs/yv_dev/bin/activate
#export /home/ytsejam/.virtualenvs/yv_dev/bin/postactivate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/ytsejam/public_html/abctasarim/gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--env DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE \
--pythonpath $DJANGODIR \
--user $USER \
--bind=unix:$SOCKFILE yogavidya.wsgi:application
When I try to run it, I get an error for my service file:
...
ImportError: No module named ' '
...
How can I fix my script to serve the site correctly?
Thanks
virtualenvwrapper is meant to be what you use in development. You want to deploy using the package that virtualenvwrapper is built on, the virtualenv package. The best suggestion I have for making this work is to try the steps you would normally use to start your virtualenvwrapper environment, namely sourcing the shell script and then using workon:
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /path/to/virtualenvwrapper.sh
workon yv_dev
You should also just try invoking gunicorn from the command line after activating your virtualenv.
Here’s how you can do it with virtualenv:
cd /home/ytsejam/public_html/abctasarim
sudo pip install virtualenv
virtualenv .
. bin/activate
pip install -r requirements.txt
pip install gunicorn
gunicorn script:
NAME="yogavidya" #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
cd $DJANGODIR
. bin/activate
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec /home/ytsejam/public_html/abctasarim/bin/gunicorn \
--name $NAME \
--workers $NUM_WORKERS \
--user $USER \
--bind=unix:$SOCKFILE yogavidya.wsgi:application
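For reference, the yogavidya.wsgi:application target that gunicorn loads corresponds to a standard Django wsgi.py module; a minimal sketch, assuming the settings path used in the script above:
# yogavidya/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yogavidya.settings.base")
application = get_wsgi_application()
A quick sanity check for the ImportError above is to activate the virtualenv and run python -c "import yogavidya.wsgi" from the project directory.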

How to run a daemon of djcelery with celerycam

I'm working with django-celery. I have a Celery daemon running under Supervisor, but I have a problem: in the Django admin I can't see the state of the tasks.
I can only see the state of my tasks in the Django admin when I run python manage.py celerycam in the console.
How do I run celerycam as a daemon?
You can start your celerycam daemon together with your app and Celery, all managed through Supervisor.
Example config file (/etc/supervisor/conf.d/app_name.conf):
# app config
[program:app_name]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python /var/www/app_name/bin/gunicorn agora.wsgi_server:application --bind 127.0.0.1:8022 -t 90 --workers 4 --settings='app_name.settings.production'
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name.log
stderr_logfile = /var/log/supervisor/app_name_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart = true
stopsignal=KILL
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"
stopasgroup=true
killasgroup=true
# celerycam config
[program:app_name_celerycam]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python manage.py celerycam --settings='app_name.settings.production'
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name_celerycam.log
stderr_logfile = /var/log/supervisor/app_name_celerycam_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart = true
stopwaitsecs=5
# celery config
[program:app_name_celery]
user = www-data
directory = /var/www/app_name
command = /var/www/app_name/bin/python manage.py celeryd -l INFO -E -B --settings='app_name.settings.production' --concurrency=1 --pidfile=/var/run/celery/app_name_celery.pid
redirect_stderr = true
autorestart=true
stdout_logfile = /var/log/supervisor/app_name_celery.log
stderr_logfile = /var/log/supervisor/app_name_celery_err.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=warn
autostart=true
stopwaitsecs=5
environment=C_FORCE_ROOT=1
stopasgroup=true
killasgroup=true
# group of our daemons
[group:app_name]
programs=app_name,app_name_celerycam,app_name_celery
priority=999
Reload our configuration:
supervisorctl reread
Now we can manage all daemons of our application with simple commands:
supervisorctl start app_name:*
supervisorctl stop app_name:*
supervisorctl restart app_name:*
supervisorctl status app_name:*
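Note that the celeryd command above includes the -E flag, which makes the worker send task events; celerycam snapshots those events so the Django admin can display task states. If you prefer to enable events from settings instead of the command line, a hedged sketch of the old-style (Celery 3.x / django-celery) setting names:
# settings.py (assumes old-style Celery 3.x / django-celery settings names)
CELERY_SEND_EVENTS = True            # worker sends task events
CELERY_SEND_TASK_SENT_EVENT = True   # also emit task-sent events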