django nginx gunicorn: start: Job failed to start

Everyone, I am trying to deploy a Django project using nginx and gunicorn, but when I run
sudo service gunicorn start
it says:
start: Job failed to start
Here is the relevant setup.
The gunicorn.conf file, at /etc/init/gunicorn.conf:
description "Gunicorn application server handling myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid user
setgid www-data
chdir /home/parallels/books/mysite
exec myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/parallels/books/mysite/mysite.sock mysite.wsgi:application
when I run
gunicorn --bind 0.0.0.0:8000 mysite.wsgi:application
I can see mysite running
but when I try
sudo service gunicorn start
it fails.
I was just following the instructions here.
Thank you very much to anyone who can reply.

I know this is too late, but in any case:
In your gunicorn.conf I see setuid user, but there is a user parallels in your prompt (it depends on your shell settings, but it is probably the name of the current user).
There is chdir /home/parallels/books/mysite and then exec myprojectenv/bin/gunicorn, and I guess your gunicorn location is not
/home/parallels/books/mysite/myprojectenv/bin/gunicorn, so you need to use the appropriate path to run gunicorn.
Check the gunicorn log file to find out the problem. It should be here: /var/log/upstart/gunicorn.log
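A quick way to check both points from a shell (paths taken from the question):
ls -l /home/parallels/books/mysite/myprojectenv/bin/gunicorn  # does the binary exist at the path exec resolves to?
tail -n 50 /var/log/upstart/gunicorn.log                      # most recent Upstart output for the job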

You need to update setuid user to your user name, and also fix the path.
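A minimal sketch of the two lines to change, assuming the user is parallels (as the prompt suggests) and using an absolute path to wherever gunicorn actually lives (the venv path here is a placeholder):
setuid parallels
exec /path/to/virtualenv/bin/gunicorn --workers 3 --bind unix:/home/parallels/books/mysite/mysite.sock mysite.wsgi:application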

Check the exec command and its parameters; if there is any problem there, the job will fail.
In my case I missed bin after the venv (virtualenv) path,
i.e. exec myprojectenv/bin/gunicorn --workers 3 --bind ...

Related

Why does supervisor show a FATAL "can't find command" error for gunicorn?

When I check the status (sudo supervisorctl status) it shows this:
guni:gunicorn FATAL can't find command
'/home/ubuntu/myvenv/js/bin/gunicorn'
and my gunicorn conf is like this:
[program:gunicorn]
directory=/home/ubuntu/js/main_jntu
command=/home/ubuntu/myvenv/js/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/js/app.sock main_jntu.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/var/log/gunicorn/gunicorn.err.log
stdout_logfile=/var/log/gunicorn/gunicorn.out.log
[group:guni]
programs:gunicorn
After this, when I check the status, it shows this error:
guni:gunicorn FATAL can't find command
'/home/ubuntu/myvenv/js/bin/gunicorn'
After installing gunicorn, use the command below:
whereis gunicorn
It will give the exact path where gunicorn is located; use that path in your gunicorn conf file.
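Note that whereis only searches standard system locations, so it may not find a copy installed inside a virtualenv; in that case, activating the env and using which should work (paths assumed from the question):
source /home/ubuntu/myvenv/js/bin/activate
which gunicorn   # prints the absolute path to put after command= in the conf file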
Happy coding!!
I think the issue is with the third line of your configuration file. The error message indicates that the gunicorn executable is not available in the /home/ubuntu/myvenv/js/bin directory. If you run ls /home/ubuntu/myvenv/js/bin/gunicorn you will probably get an error message.
You should check that gunicorn is correctly installed in your virtual environment, e.g. using pip install gunicorn. Have a look at this article; I think it would be helpful.
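A minimal sketch of that check, using the paths from the question:
/home/ubuntu/myvenv/js/bin/pip install gunicorn
ls /home/ubuntu/myvenv/js/bin/gunicorn   # should now exist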

Gunicorn and Nginx hosting two websites - Gunicorn creates a sock file for only one

I have an Ubuntu server on which I'm trying to host two Django applications using Gunicorn and Nginx.
When I have only one website hosted, all works fine: Gunicorn creates a .sock file and nginx redirects to it.
The problem is that the gunicorn config file I have for the second site doesn't create a .sock file, so I get a bad gateway error.
Here are my files:
Gunicorn.conf
description "David Bien com application"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/davidbien
exec venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/davidbien/davidbien.sock davidbiencom.wsgi:application
The above file works fine. The second one:
Gunicorn1.conf
description "Projects"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
setgid www-data
chdir /home/user/dbprojects
exec virtual/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/dbprojects/dbproject.sock dbproject.wsgi:application
This one doesn't create the .sock file. I tried restarting gunicorn, and even tried starting just the second job by running sudo service gunicorn1 start, but I get:
start: Job failed to start
In the logs there's nothing mentioning the second file running.
What am I doing wrong?
OK, I just noticed a typo in my config file:
chdir /home/user/dbprojects
It should be ubuntu, not user. This can be closed.
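A quick way to catch this class of typo before wiring paths into an Upstart job is to try them by hand (paths from the config above):
cd /home/user/dbprojects     # fails immediately if the chdir path is wrong
ls virtual/bin/gunicorn      # fails if the exec path is wrong relative to chdir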

Correct gunicorn.conf to get environment variables for Django use

I am deploying a Django app on my VPS using Nginx as the web server and Gunicorn installed in a virtualenv. (I am using virtualenv with virtualenvwrapper.)
When I run Gunicorn like this, environment variables (such as the database password and name) can be found:
workon virtual_env_name
# from my_project's root path
gunicorn my_project.wsgi --bind localhost:9000 --daemon
# this works
I exported the environment variables this way:
# /home/user_name/Envs/virtual_env_name/bin/postactivate
export DATABASE_PASSWORD="secret_password"
However, the way below does not work (whether or not virtual_env_name is activated):
sudo service gunicorn start
# env variables can't be found - KeyError thrown
This is what my gunicorn.conf script looks like:
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly, trigger a respawn
respawn
setuid user_name
setgid www-data
chdir /home/user_name/my_project
exec /home/user_name/Envs/virtual_env_name/bin/gunicorn \
--name=my_project \
--pythonpath=/home/user_name/my_project \
--bind=127.0.0.1:9000 \
my_project.wsgi:application
I can confirm this gunicorn.conf works if I hard-code all the passwords and keys into my Django settings.py instead of using os.environ[..].
What do I need to do to make my environment variables visible when I start Gunicorn with sudo service gunicorn start? What's the difference between the first and the second way? Thanks.
You need to define those variables inside gunicorn.conf.
env DATABASE_PASSWORD="secret_password"
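A sketch of where that line goes; the rest of the job from the question stays unchanged:
# /etc/init/gunicorn.conf (sketch)
env DATABASE_PASSWORD="secret_password"
# ...start on, setuid, chdir, exec as before...
Upstart exports each env entry into the environment of the exec'd process, which is how os.environ can see it.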
You aren't running the code in the virtualenv.
Instead of exec, use:
pre-start script
workon virtual_env_name
end script
script
gunicorn \
--name=my_project \
--pythonpath=/home/user_name/my_project \
--bind=127.0.0.1:9000 \
my_project.wsgi:application
end script
workon updates your $PATH environment variable.

How to start gunicorn via Django manage.py after server reboot

I am starting gunicorn by calling python manage.py run_gunicorn (inside a virtualenv).
How can I make gunicorn restart after my Ubuntu 12.04 server reboots?
You can use supervisor to launch your app on startup and restart on crashes.
Install supervisor and create a config file /etc/supervisor/conf.d/your_app.conf with content like this:
[program:your_app]
directory=/path/to/app/working/dir
command=/path/to/virtualenv_dir/bin/python /path/to/manage_py/manage.py run_gunicorn
user=your_app_user
autostart=true
autorestart=true
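After saving the file, the standard supervisorctl commands pick up the new program and show its state:
sudo supervisorctl reread    # scan for new/changed config files
sudo supervisorctl update    # start programs added by the reread
sudo supervisorctl status your_app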
Since I'm on Ubuntu and like to work with tools already included in the distro, I used Upstart to start gunicorn after booting the machine.
I put the following code into /etc/init/django_sf_flavour.conf :
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/USER/bin/start_gunicorn.sh
This executes the following file (/home/USER/bin/start_gunicorn.sh) after booting:
#!/bin/bash
set -e
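# NOTE: MY_PROJ_ROOT, MY_PROJ_PATH and LOGDIR are placeholders; set them for your project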
cd MY_PROJ_ROOT
source venv/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec python MY_PROJ_PATH/manage.py run_gunicorn
This is based on j7nn7k's answer, but with a few changes.
1 - Create a .conf file in the /etc/init dir to run with Upstart:
cd /etc/init
nano django_sf_flavour.conf
2 - Put the lines below in the django_sf_flavour file and save it:
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/start_gunicorn.sh
3 - Create a start_gunicorn.sh file in the home dir with the following lines:
cd /home
nano start_gunicorn.sh
4 - Put this code in it and save it:
#!/bin/bash
set -e
cd mysite
source myenv/bin/activate
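# NOTE: LOGDIR is not defined in this script; set it first (e.g. LOGDIR=/var/log/gunicorn, an assumed path) or drop the next line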
test -d $LOGDIR || mkdir -p $LOGDIR
exec gunicorn --bind 0.0.0.0:80 mysite.wsgi:application
5 - Make start_gunicorn.sh executable:
cd /home
chmod 775 start_gunicorn.sh
** extra **
Check the syntax of django_sf_flavour.conf with this command:
init-checkconf /etc/init/django_sf_flavour.conf
and the answer should look like this:
File /etc/init/django_sf_flavour.conf: syntax ok
You can check the Upstart log file for problems if needed:
cat /var/log/upstart/django_sf_flavour.log
Test your django_sf_flavour.conf file like this, without rebooting:
service django_sf_flavour start
Test your bash file start_gunicorn.sh like this:
cd /home
bash start_gunicorn.sh
Check the state of django_sf_flavour:
initctl list | grep django_sf_flavour

How to write an Ubuntu Upstart job for Celery (django-celery) in a virtualenv

I really enjoy using Upstart. I currently have Upstart jobs running different gunicorn instances in a number of virtualenvs. However, the 2-3 examples of Celery Upstart scripts I found on the interwebs don't work for me.
So, given the following variables, how would I write an Upstart job to run django-celery in a virtualenv?
Path to Django Project:
/srv/projects/django_project
Path to this project's virtualenv:
/srv/environments/django_project
Path to celery settings is the Django project settings file (django-celery):
/srv/projects/django_project/settings.py
Path to the log file for this Celery instance:
/srv/logs/celery.log
For this virtual env, the user:
iamtheuser
and the group:
www-data
I want to run the Celery daemon with celerybeat, so the command I want to pass to django-admin.py (or manage.py) is:
python manage.py celeryd -B
It would be even better if the script started after the gunicorn job starts and stopped when the gunicorn job stops. Let's say the file for that is:
/etc/init/gunicorn.conf
You may need to add some more configuration, but this is an upstart script I wrote for starting django-celery as a particular user in a virtualenv:
start on started mysql
stop on stopping mysql
exec su -s /bin/sh -c 'exec "$0" "$@"' user -- /home/user/project/venv/bin/python /home/user/project/django_project/manage.py celeryd
respawn
It works great for me.
I know that it looks ugly, but it appears to be the current 'proper' technique for running upstart jobs as unprivileged users, based on this superuser answer.
I thought that I would have had to do more to get it to work inside of the virtualenv, but calling the python binary inside the virtualenv is all it takes.
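For the celerybeat and gunicorn-ordering requirements in the question, a sketch built on the same technique (untested; the paths, user, and job name are taken from the question, and -B runs the beat scheduler inside the worker):
description "django-celery worker with celerybeat"
start on started gunicorn
stop on stopping gunicorn
respawn
exec su -s /bin/sh -c 'exec "$0" "$@"' iamtheuser -- /srv/environments/django_project/bin/python /srv/projects/django_project/manage.py celeryd -B --logfile=/srv/logs/celery.log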
Here is my working config using the newer systemd, running on Ubuntu 16.04 LTS. Celery is in a virtualenv; the app is Python/Flask.
Systemd file: /etc/systemd/system/celery.service
You'll want to change the user and paths.
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=nick
Group=nick
EnvironmentFile=-/home/nick/myapp/server_configs/celery_env.conf
WorkingDirectory=/home/nick/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Environment file (referenced above): /home/nick/myapp/server_configs/celery_env.conf
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/nick/myapp/venv/bin/celery"
# App instance to use
CELERY_APP="myapp.tasks"
# How to call celery multi
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
To automatically create the log and run folders with the correct permissions for your user, create a file in /usr/lib/tmpfiles.d. I was having trouble with the /var/run/celery folder being deleted on reboot, after which celery could not start properly.
My /usr/lib/tmpfiles.d/celery.conf file:
d /var/log/celery 2775 nick nick -
d /var/run/celery 2775 nick nick -
To enable: sudo systemctl enable celery.service
Now you'll need to reboot your system for the /var/log/celery and /var/run/celery folders to be created. You can check to see if celery started after rebooting by checking the logs in /var/log/celery.
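If you'd rather not reboot, applying the tmpfiles configuration by hand creates the same folders immediately:
sudo systemd-tmpfiles --create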
To restart celery: sudo systemctl restart celery.service
Debugging: tail -f /var/log/syslog and try restarting celery to see what the error is. It could be related to the backend or other things.
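The unit's own output is also available through the standard systemd tools:
sudo systemctl status celery.service   # process state plus the last few log lines
journalctl -u celery.service -e        # full journal for the unit, jumping to the end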
Hope this helps!