How the socket file is created in Gunicorn - Django

I am deploying a Django project using Gunicorn as the application server and Nginx as the web server, following this tutorial: http://tutos.readthedocs.io/en/latest/source/ndg.html. Here is the code for gunicorn_start.sh (kept in the folder where the project resides):
#!/bin/bash
NAME="visualization"
DJANGODIR=/var/www/dist/adc # Django project directory
SOCKFILE=/var/www/dist/run/gunicorn.sock
USER=barun
GROUP=barun
NUM_WORKERS=1
DJANGO_SETTINGS_MODULE=adc.settings
DJANGO_WSGI_MODULE=adc.wsgi
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/barun/anaconda3/envs/adcenv/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# the bind address is passed to gunicorn via --bind below
exec /home/barun/anaconda3/envs/adcenv/bin/gunicorn \
${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user $USER \
--bind=unix:$SOCKFILE
Code for gunicorn.service (/lib/systemd/system/gunicorn.service):
[Unit]
Description=visualization gunicorn daemon
[Service]
Type=simple
User=barun
WorkingDirectory=/var/www/dist
ExecStart=/home/barun/anaconda3/envs/adcenv/bin/gunicorn --workers 3 --bind unix:/var/www/dist/run/gunicorn.sock adc.wsgi:application
[Install]
WantedBy=multi-user.target
Everything else is working fine. The socket file location is given in the gunicorn_start.sh file; the run folder is created, but the gunicorn.sock file is not. There is no permission-related problem.
Apart from this, the error I am getting in the Nginx error.log file is:
4783#4783: *1 connect() to unix:/var/www/dist/run/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 172.16.1.213, server: 192.16.1.213, request: "GET / HTTP/1.1", upstream: "http://unix:/var/www/dist/run/gunicorn.sock:/", host: "172.16.1.213:8080"
When I execute ./gunicorn_start.sh, the socket file should be created, but it is not happening.

Solved!
I redid everything from scratch: uninstalled and reinstalled Gunicorn and Nginx, deleted all the configuration files, and wrote them again.
Now it's working well.
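Before reinstalling everything, a quicker check is usually to verify the socket from the shell. A minimal sketch, using the path from the question's gunicorn_start.sh:

```shell
#!/bin/bash
# Path taken from gunicorn_start.sh in the question
SOCKFILE=/var/www/dist/run/gunicorn.sock

# -S is true only for a unix domain socket, so a missing file and a
# leftover regular file both show up as "missing"
if [ -S "$SOCKFILE" ]; then
    STATUS=present
else
    STATUS=missing
fi
echo "socket is $STATUS"
```

If the socket is missing, running the start script in the foreground with tracing (`bash -x ./gunicorn_start.sh`) usually shows where it stops. One pitfall worth checking: every continued line of the `exec gunicorn ...` command must end in a bare backslash with no trailing comment; otherwise the command ends early, gunicorn never receives its `--bind` argument, and no socket is created.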

Related

Why django codes not updating automatically using supervisorctl and Gunicorn?

I deployed Django using Supervisor. The Django code is not updated after a deploy; instead I must restart Nginx or Supervisor. If you could help me with this issue, that would be great.
Supervisor configuration
[program:program_name]
directory=/home/user/django/dir/django_dir/
command=/home/user/django/dir/venv/bin/gunicorn --workers 3 --bind unix:/home/user/django/dir/name.sock ingen>
#command=/home/user/django/dir/venv/bin/ssh_filename.sh
numprocs=3
process_name=name%(process_num)d
autostart=true
autorestart=true
stopasgroup=true
user=user
group=www-data
stderr_logfile=/home/user/django/dir/logs/supervisor_err.log
stdout_logfile=/home/user/django/dir/logs/supervisor.log
redirect_stderr=true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
I am trying to restart the supervisor process with the command below in order to update the Django backend code, but it doesn't always work.
sudo supervisorctl restart program_name:*
//'program_name' refers to the name of the program in the supervisor configuration file
METHOD 2
A second trial uses an ssh_filename.sh file; the second (commented-out) command in the supervisor configuration above runs this script.
#!/bin/bash
NAME="django_dir" #Django application name
DIR_PARENT=/home/user/django/dir
DIR=${DIR_PARENT}/django_dir #Directory where project is located
USER=user #User to run this script as
GROUP=www-data #Group to run this script as
WORKERS=3 #Number of workers that Gunicorn should spawn
SOCKFILE=unix:${DIR_PARENT}/dir.sock #This socket file will communicate with Nginx
DJANGO_SETTINGS_MODULE=django_dir.settings #Which Django setting file should use
DJANGO_WSGI_MODULE=django_dir.wsgi #Which WSGI file should use
LOG_LEVEL=debug
cd $DIR
source ${DIR_PARENT}/venv/bin/activate #Activate the virtual environment
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR_PARENT:$PYTHONPATH
#Command to run the progam under supervisor
exec ${DIR_PARENT}/venv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$SOCKFILE \
--log-level=$LOG_LEVEL \
--log-file=-
The Nginx web server throws a "BAD GATEWAY" error when using the second method. If the ssh_filename.sh script is run manually, the website works. The following are the commands I use to run it manually:
cd /home/user/django/dir
source venv/bin/activate
sudo venv/bin/ssh_filename.sh # it works only if run with sudo
Questions
I activated the virtual environment in the ssh_filename.sh script above. Is that a good idea?
What is the best way to update the Django code automatically without restarting Supervisor?
How can I prevent the "BAD GATEWAY" error when using the second method?
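On the second question, Gunicorn's master process replaces its workers on SIGHUP and re-imports the application code, so a full supervisor restart is often unnecessary. A sketch of the operational command (supervisor 3.2+ supports the `signal` action; `program_name` is the name from the configuration above):

```shell
sudo supervisorctl signal HUP "program_name:*"
```

Because only the workers are replaced, the socket stays bound and Nginx keeps serving requests during the reload.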

How to configure gunicorn service file

I'm deploying a Django project on DigitalOcean. My project has the following structure:
$/
flo-log/
    logistics/
        settings.py
        wsgi.py
        ...
    manage.py
    ...
My gunicorn service file is shown below:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=flolog
Group=flolog
WorkingDirectory=/home/flolog/flo-log/logistics
ExecStart=/home/flolog/flologenv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
logistics.wsgi:application
[Install]
WantedBy=multi-user.target
I created a user with sudo privileges. Whenever I run sudo systemctl status gunicorn, I encounter ModuleNotFoundError: No module named 'logistics'.
From the details of the error, it is caused by the logistics.wsgi:application component of the gunicorn service file.
The error states that there is no module called logistics; however, I have a directory called logistics which contains the wsgi file. How do I fix this error? Why am I being told logistics is not a module? Better still, how do I set the myproject.wsgi section of the gunicorn service file properly, based on my project structure?
I would say just look at your project structure. Remove the outer flo-log folder and put logistics and manage.py directly inside the home directory of your server, alongside your virtual env. The project folder logistics should be in the same directory as manage.py, inside home.
Hope that helps.
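Alternatively, a fix that keeps the current layout is to point the service at the directory that actually contains manage.py and the logistics/ package, since Gunicorn imports logistics.wsgi relative to its working directory. A sketch using the paths from the question:

```ini
[Service]
User=flolog
Group=flolog
# flo-log is the directory that holds manage.py and the logistics/ package
WorkingDirectory=/home/flolog/flo-log
ExecStart=/home/flolog/flologenv/bin/gunicorn \
    --access-logfile - \
    --workers 3 \
    --bind unix:/run/gunicorn.sock \
    logistics.wsgi:application
```

With the working directory one level up, `logistics` becomes importable and the ModuleNotFoundError should disappear.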

Django virtualenvwrapper environment variables in production

I'm setting up a Django project on a server, following this guide on DigitalOcean. However, I'm very unfamiliar with Gunicorn and nginx, so I'm running into issues.
The server is running Ubuntu 18.04, Python 3.6.7, Django 2.1.3, virtualenvwrapper 4.8.4, gunicorn 19.9.0.
Running curl --unix-socket /run/gunicorn.sock localhost returns curl: (56) Recv failure: Connection reset by peer.
I suspect it is the way I handle the SECRET_KEY of Django.
I'm using virtualenvwrapper to handle my virtual environments and set environment variables in the postactivate and unset them in predeactivate in the following manner:
postactivate:
if [[ -n $SECRET_KEY ]]
then
export SECRET_KEY_BACKUP=$SECRET_KEY
fi
export SECRET_KEY='my-secret-key'
predeactivate:
if [[ -n $SECRET_KEY_BACKUP ]]
then
export SECRET_KEY=$SECRET_KEY_BACKUP
unset SECRET_KEY_BACKUP
else
unset SECRET_KEY
fi
It's clear that the variables are not available unless the virtual env is activated. I retrieve the variables in settings.py with os.environ.get('SECRET_KEY').
How does gunicorn work with virtual environments? Is there a way to have gunicorn activate the environment?
Otherwise, what is best practice when it comes to the SECRET_KEY?
Thank you very much!
Edit: Added variable retrieval method.
Edit2: systemd service-file:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=<name>
Group=www-data
WorkingDirectory=/home/<name>/projects/<projectname>/<appname>
ExecStart=/home/<name>/.virtualenvs/<appname>/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
<appname>.wsgi:application
[Install]
WantedBy=multi-user.target
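Note that systemd never runs virtualenvwrapper's postactivate hook, so a SECRET_KEY exported there is invisible to the service. One common pattern is to hand the variables to systemd directly. A minimal sketch, assuming a .env file path of your choosing (keep it readable only by the service user, e.g. chmod 600):

```ini
[Service]
# ... existing User/Group/WorkingDirectory/ExecStart lines ...
# Loads KEY=value lines before ExecStart runs, e.g. a line:
# SECRET_KEY=my-secret-key
EnvironmentFile=/home/<name>/projects/<projectname>/.env
```

Pointing ExecStart at the virtualenv's bin/gunicorn (as the service file already does) is enough to use the environment's packages; no activation step is needed, only the environment variables must be supplied separately.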

How to run a django channels app with supervisor and gunicorn or daphne

I have a problem with my supervisor configuration. My app uses django_channels and works well when I run it with either of the two commands below:
(myenv) /colonybit/colonybitbasics $ python manage.py runserver 0.0.0.0:8000
or
(myenv) /colonybit/colonybitbasics $ daphne -b 0.0.0.0 -p 8000
I also have another app in Vue.js, and with the commands above it works. But when I try to run my app with this command:
(myenv) /colonybit $ ./bin/start.sh
my file start.sh
NAME="colony_app"
DJANGODIR=/home/ubuntu/colonybit # Django project directory
SOCKFILE=/home/ubuntu/colonybit/run/gunicorn.sock
USER=ubuntu # the user to run as
GROUP=ubuntu # the group to run as
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=colonybit.settings
DJANGO_WSGI_MODULE=colonybit.asgi # ASGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/ubuntu/colonybit/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec colonybit ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=0.0.0.0:8000 \
--log-level=debug \
--log-file=-
the server runs fine, but my Vue.js app shows a 500 error and can't consume my django_channels app.
Please tell me how to configure my start.sh file so it works using ASGI.
Thanks for your time.
Your Django app works with the development server because that server handles both HTTP and WebSocket requests for you. Your problem is with production: gunicorn cannot handle both kinds of requests, so daphne comes into play.
A brute-force way to solve it is to start the daphne ASGI server from another file containing exec daphne -b 0.0.0.0 -p 8001 $DJANGO_ASGI_MODULE:application (note the different port used here); the other parts of the two files should be quite similar. Later, you can refer to this issue for more insight, or see whether abandoning unix sockets is necessary (it worked for me): https://github.com/django/channels/issues/919#issuecomment-422346729
Once you've done this, integrate with supervisor to make your setup simple and stable.
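Putting that together, the daphne process could be wired into supervisor along these lines (the program name, daphne path, and log file are assumptions; the port matches the answer above):

```ini
[program:colonybit_daphne]
directory=/home/ubuntu/colonybit
; daphne speaks ASGI, so it serves both HTTP and WebSocket traffic
command=/home/ubuntu/colonybit/bin/daphne -b 0.0.0.0 -p 8001 colonybit.asgi:application
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/ubuntu/colonybit/logs/daphne.log
```

Nginx (or the Vue.js app) would then be pointed at port 8001 for the WebSocket routes while gunicorn keeps serving plain HTTP, or daphne can serve everything on its own.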

uWSGI is not creating Unix socket at specified path

I've worked with Nginx and uWSGI numerous times, but I've never seen socket not being created even when specified explicitly.
My uWSGI config (/etc/uwsgi/sites/my_site.ini) is quite simple:
[uwsgi]
project = my_site
base = /root
chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application
master = true
processes = 5
socket = /run/uwsgi/%(project).sock
chmod-socket = 664
vacuum = true
As visible, uWSGI must create the socket with 664 permissions (although some suggest it should be 666).
Nginx server configuration (/etc/nginx/nginx.conf) is quite simple as well:
server {
listen 80;
server_name mysite.com www.mysite.com;
location /static/ {
root /root/my_site;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/my_site.sock;
}
}
uwsgi.service (startup script, /etc/systemd/system/uwsgi.service on CentOS):
[Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/usr/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
After making these configurations, I executed the following commands:
sudo systemctl daemon-reload
sudo systemctl restart uwsgi
sudo systemctl restart nginx
sudo systemctl status uwsgi
sudo systemctl status nginx
According to the commands above everything was fine, but Nginx returned a 502 Bad Gateway error when visiting the website.
So I checked /var/log/nginx/error.log:
2018/08/25 19:33:10 [crit] 14920#0: *3 connect() to unix:/run/uwsgi/my_site.sock failed (2: No such file or directory) while connecting to upstream, client: xx.xxx.xxx.xx, server: my_site.com, request: "GET / HTTP/1.1", upstream: $
As you can see above, uWSGI doesn't create the socket in the /run/uwsgi/ directory for some reason...
I'm sure the socket is not being created at /run/uwsgi/, since ls /run/uwsgi/ returns nothing.
Permissions:
sudo usermod -a -G root nginx;chmod 710 /root
Main problem:
uWSGI is not creating a unix socket at the path specified in the configuration above. What could be the problem? Is there any possible cause?
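One cause worth ruling out: /run is a tmpfs that is emptied on every boot, so even a manually created /run/uwsgi vanishes, and the emperor cannot create a socket inside a missing directory. Letting systemd manage the directory avoids this; a sketch of lines to add to the [Service] section of uwsgi.service (the mode is an assumption):

```ini
[Service]
# systemd creates /run/uwsgi before start and removes it after stop,
# owned by the service's User/Group
RuntimeDirectory=uwsgi
RuntimeDirectoryMode=0775
```

Running the emperor in the foreground (`uwsgi --emperor /etc/uwsgi/sites`) or checking `journalctl -u uwsgi` will also surface startup errors that `systemctl status` may truncate.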