Upstart - Django and Celeryd

I have my Ubuntu server set up so my Django project is started by Upstart like this:
#!/bin/bash
set -e
LOGFILE=/var/log/gunicorn/foo.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=3
# user/group to run as
USER=django
GROUP=django
cd /var/www/webapps/foo
source ../env/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec ../env/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE && celeryd -l info -B
As you can see, I also added celeryd at the end. But it's not started; I'm sure it does not start, as my tasks are not getting done. When I start it manually on the server with:
manage.py celeryd -l info -B
it does start and I can see the tasks being done.
How am I supposed to start it with Django?

You should create a separate upstart script for starting celeryd. In your current script, `exec` replaces the shell with the gunicorn process, so the `&& celeryd -l info -B` part after it is never reached. This should get you started.
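For example, a minimal /etc/init/celeryd.conf along these lines would let Upstart manage the worker on its own (the job name, runlevels and log path here are illustrative; the project path and user are taken from the script above):

```
# /etc/init/celeryd.conf -- sketch only, adjust to your project
description "celery worker for foo"

start on runlevel [2345]
stop on runlevel [016]
respawn

chdir /var/www/webapps/foo
exec su -s /bin/sh -c 'exec "$0" "$@"' django -- \
    ../env/bin/python manage.py celeryd -l info -B \
    --logfile=/var/log/celery/foo.log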

Related

Why is my Django code not updating automatically when using supervisorctl and Gunicorn?

Using Supervisor, I deployed Django. My Django code is not updated automatically; instead I must restart Nginx or Supervisor. If you could help me with this issue, that would be great.
Supervisor configuration
[program:program_name]
directory=/home/user/django/dir/django_dir/
command=/home/user/django/dir/venv/bin/gunicorn --workers 3 --bind unix:/home/user/django/dir/name.sock ingen>
#command=/home/user/django/dir/venv/bin/ssh_filename.sh
numprocs=3
process_name=name%(process_num)d
autostart=true
autorestart=true
stopasgroup=true
user=user
Group=www-data
stderr_logfile=/home/user/django/dir/logs/supervisor_err.log
stdout_logfile=/home/user/django/dir/logs/supervisor.log
redirect_stderr=true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
I am trying to restart the supervisor process with the command below in order to pick up the updated Django code, but it doesn't always work.
sudo supervisorctl restart program_name:*
# 'program_name' is the program name from the supervisor configuration file
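When a full restart is unreliable, note that gunicorn also reloads its workers (and therefore the application code) on SIGHUP without dropping its socket. Assuming supervisor 3.2 or newer, which added the `signal` action to supervisorctl, you could try:

```
sudo supervisorctl signal HUP program_name:*
```

This avoids the brief outage of a restart, since Nginx keeps talking to the same socket throughout.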
METHOD 2
A second trial uses an ssh_filename.sh file; the second (commented-out) command in the supervisor configuration above runs the ssh_filename.sh script.
#!/bin/bash
NAME="django_dir" #Django application name
DIR_PARENT=/home/user/django/dir
DIR=${DIR_PARENT}/django_dir #Directory where project is located
USER=user #User to run this script as
GROUP=www-data #Group to run this script as
WORKERS=3 #Number of workers that Gunicorn should spawn
SOCKFILE=unix:${DIR_PARENT}/dir.sock #This socket file will communicate with Nginx
DJANGO_SETTINGS_MODULE=django_dir.settings #Which Django settings file to use
DJANGO_WSGI_MODULE=django_dir.wsgi #Which WSGI module to use
LOG_LEVEL=debug
cd $DIR
source ${DIR_PARENT}/venv/bin/activate #Activate the virtual environment
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR_PARENT:$PYTHONPATH
#Command to run the progam under supervisor
exec ${DIR_PARENT}/venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$SOCKFILE \
--log-level=$LOG_LEVEL \
--log-file=-
The Nginx web server throws a "BAD GATEWAY" error when the second method is used. If the ssh_filename.sh script is run manually, the website works. These are the commands I use to run it manually:
cd /home/user/django/dir
source venv/bin/activate
sudo venv/bin/ssh_filename.sh # It works only if run with sudo
Questions
I activated the virtual environment in the above ssh_filename.sh script. Is it a smart idea?
What is the best way to update Django codes automatically without restarting Supervisor?
How to prevent the "BAD GATEWAY" error using the second method?

Deploy django app with gunicorn and nginx. ERROR supervisor: child process was not spawned

I'm trying to deploy my django app on a DigitalOcean droplet. I'm using nginx and gunicorn, following this tutorial.
This is what gunicorn_start looks like:
#!/bin/sh
NAME="PlantArte"
DIR=/home/paulauzca/PlantArte
USER=paulauzca
GROUP=paulauzca
WORKERS=3
BIND=unix:/home/paulauzca/run/gunicorn.sock
DJANGO_SETTINGS_MODULE=project.settings
DJANGO_WSGI_MODULE=project.wsgi
LOG_LEVEL=error
cd $DIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR:$PYTHONPATH
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$BIND \
--log-level=$LOG_LEVEL \
--log-file=-
This is PlantArte.conf file
#!/bin/sh
[program:PlantArte]
command=/home/paulauzca/bin/gunicorn_start --daemon
user=paulauzca
startsecs=0
autostart=true
autorestart=true
redirect_stderr=true
environment=PYTHONPATH=$PYTHONPATH:/home/paulauzca/bin/python
stdout_logfile=/home/paulauzca/logs/gunicorn-error.log
Following the tutorial I run sudo supervisorctl status PlantArte
And I always get
PlantArte RUNNING pid 3566, uptime 0:00:00
with zero seconds of uptime, which is weird. If I go into gunicorn-error.log I get
supervisor: couldn't exec /home/paulauzca/bin/gunicorn_start: EACCES
supervisor: child process was not spawned
I've tried the "solutions" to this problem I've found on the internet but there is not much information about this...
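For what it's worth, EACCES from exec usually means supervisor lacks permission to execute the file, most often because the execute bit is missing (so `chmod u+x /home/paulauzca/bin/gunicorn_start` is the usual fix, and `--daemon` should also be dropped, since supervisor expects its children to stay in the foreground). A throwaway reproduction of the permission symptom, using temporary paths rather than the ones above:

```shell
# A script without the execute bit cannot be exec'd -- the same
# "permission denied" (EACCES) failure supervisor reports above.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho started\n' > "$tmpdir/gunicorn_start"
before=$("$tmpdir/gunicorn_start" 2>/dev/null || echo "permission denied")
chmod u+x "$tmpdir/gunicorn_start"   # grant the execute bit
after=$("$tmpdir/gunicorn_start")    # now it runs
echo "$before -> $after"
rm -rf "$tmpdir"
```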

Difference between Celery and Gunicorn workers?

I'm deploying a Django app with gunicorn, nginx and supervisor.
I currently run the background workers using celery:
$ python manage.py celery worker
This is my gunicorn configuration:
#!/bin/bash
NAME="hello_app" # Name of the application
DJANGODIR=/webapps/hello_django/hello # Django project directory
SOCKFILE=/webapps/hello_django/run/gunicorn.sock # we will communicate using this unix socket
USER=hello # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=hello.settings # which settings file should Django use
DJANGO_WSGI_MODULE=hello.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
Is there a way to run celery background workers under gunicorn? Is it even referring to the same thing?
Celery and gunicorn are different things. Celery is an asynchronous task queue, and gunicorn is a web server. You can run both of them as background processes (celeryd daemonizes celery); just point them at your django project.
A common way to run them is using supervisor, which will make sure they stay running after you log out from the server. The celery github repo has some sample scripts for using celery with supervisor.
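For instance, a supervisor program block for the worker might look like the following, assuming the same /webapps/hello_django layout as the gunicorn script above (the log path is made up):

```
[program:hello_celery]
directory=/webapps/hello_django/hello
command=/webapps/hello_django/bin/python manage.py celery worker --loglevel=info
user=hello
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/webapps/hello_django/logs/celery.log
```

As with gunicorn, the worker is started in the foreground here on purpose: programs run under supervisor should not daemonize themselves.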

gunicorn not starting workers

When I run this command
[jenia@arch app]$ ../bin/gunicorn zones.wsgi:application --bind localhost:8000
the gunicorn server runs at localhost:8000. It doesn't return anything to the console, as I assume it should; it just runs silently.
When I run my script in bin/gunicorn_start, the server still runs silently and behaves oddly. If I request an address that Django can't resolve, it gives me an internal server error and that's it: no stack trace, nothing.
This is the bin/gunicorn_start script:
#!/bin/bash
NAME="hello_app" # Name of the application
DJANGODIR=/srv/http/proj05/app # Django project directory
SOCKFILE=/srv/http/proj05/app/run/gunicorn.sock # we will communicate using this unix socket
USER=jenia # the user to run as
GROUP=jenia # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=zones.settings # which settings file should Django use
DJANGO_WSGI_MODULE=zones.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
echo "about to exec exec is" $DJANGO_WSGI_MODULE
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
By the way, I created a virtualenv by doing:
cd proj05
virtualenv .
source bin/activate
pip install django
pip install gunicorn
...
Can anyone tell me how to make gunicorn output the debug information instead of just internal server error?
Thanks in advance.
gunicorn no longer logs to the console by default. Use the option --log-file=- to restore that behaviour.
Also the error should be fixed in https://github.com/benoitc/gunicorn/issues/785 .
I will make a release tomorrow.
I was able to fix this problem by reverting back to Gunicorn 18.0.0.
pip uninstall gunicorn
pip install gunicorn==18.0.0
Not the ideal solution. Perhaps it's worth making a bug ticket about this problem. My concern is that I can't actually identify what the problem is...so how do I make a proper bug ticket? haha
You should use the --log-file=- option.
For more information see: http://gunicorn-docs.readthedocs.org/en/latest/faq.html#why-i-don-t-see-any-logs-in-the-console
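Applied to the command from the question, that would be something like:

```
../bin/gunicorn zones.wsgi:application --bind localhost:8000 --log-level=debug --log-file=-
```

With --log-file=- the error log goes to stdout, so tracebacks show up in the console instead of a silent internal server error.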

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

I really enjoy using upstart. I currently have upstart jobs to run different gunicorn instances in a number of virtualenvs. However, the 2-3 examples I found for Celery upstart scripts on the interwebs don't work for me.
So, with the following variables, how would I write an Upstart job to run django-celery in a virtualenv?
Path to Django Project:
/srv/projects/django_project
Path to this project's virtualenv:
/srv/environments/django_project
Path to celery settings is the Django project settings file (django-celery):
/srv/projects/django_project/settings.py
Path to the log file for this Celery instance:
/srv/logs/celery.log
For this virtual env, the user:
iamtheuser
and the group:
www-data
I want to run the Celery Daemon with celerybeat, so, the command I want to pass to the django-admin.py (or manage.py) is:
python manage.py celeryd -B
It'll be even better if the script starts after the gunicorn job starts, and stops when the gunicorn job stops. Let's say the file for that is:
/etc/init/gunicorn.conf
You may need to add some more configuration, but this is an upstart script I wrote for starting django-celery as a particular user in a virtualenv:
start on started mysql
stop on stopping mysql
exec su -s /bin/sh -c 'exec "$0" "$@"' user -- /home/user/project/venv/bin/python /home/user/project/django_project/manage.py celeryd
respawn
It works great for me.
I know that it looks ugly, but it appears to be the current 'proper' technique for running upstart jobs as unprivileged users, based on this superuser answer.
I thought that I would have had to do more to get it to work inside of the virtualenv, but calling the python binary inside the virtualenv is all it takes.
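Adapted to the paths from the question, and tied to the gunicorn job's lifecycle as requested, a sketch of /etc/init/celery.conf could look like this (untested against that exact setup):

```
# /etc/init/celery.conf
description "django-celery worker with celerybeat"

start on started gunicorn
stop on stopping gunicorn
respawn

chdir /srv/projects/django_project
exec su -s /bin/sh -c 'exec "$0" "$@"' iamtheuser -- \
    /srv/environments/django_project/bin/python manage.py celeryd -B \
    --logfile=/srv/logs/celery.log
```

The `start on started gunicorn` / `stop on stopping gunicorn` stanzas key this job to the gunicorn job defined in /etc/init/gunicorn.conf, so celery follows it up and down.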
Here is my working config using the newer systemd running on Ubuntu 16.04 LTS. Celery is in a virtualenv; the app is Python/Flask.
Systemd file: /etc/systemd/system/celery.service
You'll want to change the user and paths.
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=nick
Group=nick
EnvironmentFile=-/home/nick/myapp/server_configs/celery_env.conf
WorkingDirectory=/home/nick/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Environment file (referenced above): /home/nick/myapp/server_configs/celery_env.conf
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/nick/myapp/venv/bin/celery"
# App instance to use
CELERY_APP="myapp.tasks"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
To automatically create the log and run folder with the correct permissions for your user, create a file in /usr/lib/tmpfiles.d. I was having trouble with the /var/run/celery folder being deleted on rebooting and then celery could not start properly.
My /usr/lib/tmpfiles.d/celery.conf file:
d /var/log/celery 2775 nick nick -
d /var/run/celery 2775 nick nick -
To enable: sudo systemctl enable celery.service
Now you'll need to reboot your system for the /var/log/celery and /var/run/celery folders to be created. You can check to see if celery started after rebooting by checking the logs in /var/log/celery.
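Alternatively, a reboot can usually be avoided: systemd can process the tmpfiles.d entries on demand and then start the service straight away (assuming a standard systemd installation):

```
sudo systemd-tmpfiles --create
sudo systemctl start celery.service
```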
To restart celery: sudo systemctl restart celery.service
Debugging: tail -f /var/log/syslog and try restarting celery to see what the error is. It could be related to the backend or other things.
Hope this helps!