Gunicorn-Supervisor Django Setup Issues

I am attempting to set up a DigitalOcean Droplet to hold a Django application, following this overview: https://simpleisbetterthancomplex.com/tutorial/2016/10/14/how-to-deploy-to-digital-ocean.html
The application runs fine when I execute it via: python manage.py runserver 0.0.0.0:8000
However, once the application is started via sudo supervisorctl restart all and I run sudo supervisorctl status, I get the output below, but the app doesn't work when I go to the correct URL:
site@SiteDroplet:~$ sudo supervisorctl status Site
Site RUNNING pid 3071, uptime 0:00:19
Can someone help?
Here is my directory structure:
site@SiteDroplet:~$ cd ../
site@SiteDroplet:/home$ cd ../
site@SiteDroplet:/$ ls
bin dev home initrd.img.old lib64 media opt root sbin srv tmp var vmlinuz.old
boot etc initrd.img lib lost+found mnt proc run snap sys usr vmlinuz
site@SiteDroplet:/$ ^C
site@SiteDroplet:/$ cd home
site@SiteDroplet:/home$ ls
site
site@SiteDroplet:/home$ cd site
site@SiteDroplet:~$ ls
Site bin include lib local logs run share
site@SiteDroplet:~$ cd bin
site@SiteDroplet:~/bin$ ls
activate activate.fish easy_install gunicorn_start pip2 python python2 wheel
activate.csh activate_this.py easy_install-2.7 pip pip2.7 python-config python2.7
site@SiteDroplet:~/bin$ cd ../
site@SiteDroplet:~$ cd Site
site@SiteDroplet:~/Site$ ls
Site app.yaml forms interface_login interface_management interface_resident interface_security objects requirements.txt utils
README.md cron.yaml interface_admin interface_maintenance interface_onsite interface_root manage.py objects_client templates
site@SiteDroplet:~/Site$
Site is my Django project directory.
Here is my gunicorn_start file
#!/bin/bash
NAME="Site"
DIR=/home/site/Site
USER=site
GROUP=site
WORKERS=3
BIND=unix:/home/site/run/gunicorn.sock
DJANGO_SETTINGS_MODULE=Site.settings
DJANGO_WSGI_MODULE=Site.wsgi
LOG_LEVEL=error
cd $DIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR:$PYTHONPATH
exec ../bin/gunicorn_start ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$BIND \
--log-level=$LOG_LEVEL \
--log-file=-
Here is my supervisor config
[program:Site]
command=/home/site/bin/gunicorn_start
user=site
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/site/logs/gunicorn-error.log
Nginx config
upstream app_server {
server unix:/home/site/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
server_name 157.230.230.54;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/site/logs/nginx-access.log;
error_log /home/site/logs/nginx-error.log;
location /static/ {
alias /home/site/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
Latest gunicorn-error.log
/home/site/Site/../bin/gunicorn_start: line 16:
/home/site/Site/../bin/gunicorn_start: Argument list too long
/home/site/Site/../bin/gunicorn_start: line 16: /
/home/site/Site/../bin/gunicorn_start: Success

I don't know why you have such a complex setup. Here is my accounting website conf; it works like a charm:
[group:accounting]
programs=accounting_web
[program:accounting_web]
command = /home/web/.virtualenvs/accounting/bin/gunicorn accounting.wsgi --workers=1 --threads=4 -b unix:/tmp/gunicorn_accounting.sock --log-level=DEBUG --timeout=120
directory = /home/web/accounting
user = web
environment=PATH="/home/web/.virtualenvs/accounting/bin"

You probably have a syntax error in your gunicorn_start script. In fact, the script execs itself (exec ../bin/gunicorn_start instead of exec ../bin/gunicorn), which matches the recursive errors in your log. My suggestion is to use a shorter command line directly in the supervisor .conf.
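For example, a minimal sketch adapted to the question's own paths (assuming the virtualenv's gunicorn binary is at /home/site/bin/gunicorn):
[program:Site]
; run gunicorn directly instead of going through gunicorn_start
command=/home/site/bin/gunicorn Site.wsgi:application --workers 3 --bind unix:/home/site/run/gunicorn.sock --pythonpath=/home/site/Site --log-level=error
directory=/home/site/Site
user=site
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/site/logs/gunicorn-error.log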
In nginx.conf:
1) try changing the server_name parameter to a valid domain, for example:
exactestate.com
2) change the sock configuration to TCP:
upstream gunicorn_panel {
# For a TCP configuration:
server 127.0.0.1:9000 fail_timeout=0; }
server {
listen 8080;
In supervisor.conf, instead of using the gunicorn_start file, try writing the whole command directly in the command variable:
[program:powerpanel]
command=/home/web/.virtualenvs/accounting/bin/gunicorn powerpanel.wsgi -b 127.0.0.1:9000 -w1 --pythonpath=/home/exactestate/ExactEstate --error-logfile=/home/exactestate/logs/gunicorn-error.log
user=webapp
autostart=true
autorestart=unexpected
startsecs=1
startretries=3
redirect_stderr=true
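After editing the supervisor config, reload it so the new program definition is picked up:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status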

Related

Best way to config gunicorn and nginx with django?

I am trying to deploy Django with Gunicorn and Nginx on Heroku, and I'm kind of confused about the way to configure them. When I searched the internet, tutorials usually create gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
and gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myprojectdir
ExecStart=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
myproject.wsgi:application
[Install]
WantedBy=multi-user.target
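They then enable the socket unit, and systemd starts the service on the first connection; a typical sequence, assuming the unit names above:
sudo systemctl enable --now gunicorn.socket
curl --unix-socket /run/gunicorn.sock http://localhost/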
but when I go to the Gunicorn docs (https://docs.gunicorn.org/en/stable/deploy.html), Nginx has a config file like this:
worker_processes 1;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
include mime.types;
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /var/log/nginx/access.log combined;
sendfile on;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
# server 192.168.0.7:8000 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name example.com www.example.com;
keepalive_timeout 5;
# path for static files
root /path/to/app/current/public;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /path/to/app/current/public;
}
}
}
So I wonder what the difference between these is, and which is the best way to set up Gunicorn and Nginx.
Thanks
You can try the following steps to deploy a Django project using Nginx, Supervisor and Gunicorn.
1- Create a new gunicorn script in /myprojectenv/bin/, named e.g. gunicorn_start
#!/bin/bash
NAME="myproject"
DJANGODIR=/home/sammy/myprojectdir/myproject
SOCKFILE=/home/sammy/myprojectdir/myproject/run/proj_name.sock
USER=sammy
GROUP=www-data
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=myproject.settings
DJANGO_WSGI_MODULE=myproject.wsgi
echo "Starting $NAME as whoami"
cd $DJANGODIR
source ../myprojectenv/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec ../myprojectenv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
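One step this assumes: the script has to be executable for supervisor to launch it, e.g.:
chmod +x /home/sammy/myprojectdir/myprojectenv/bin/gunicorn_start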
2- Install Supervisor
pip install supervisor or yum install supervisor
3- Create a conf file under /etc/supervisor.d
Example config file
[program:myproject]
directory=/home/sammy/myprojectdir/myproject
command=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn_start --workers 3 --bind unix:/home/sammy/myprojectdir/myproject/run/proj_name.sock myproject.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/sammy/myprojectdir/myproject/Logs/gunicorn_supervisor.log
stdout_logfile=/home/sammy/myprojectdir/myproject/Logs/gunicorn_supervisor.log
user=sammy
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
4- supervisorctl reread && supervisorctl update
5- nano /etc/nginx/sites-available/app.conf
6- ln -s /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled
7- systemctl restart nginx
Please change folder names and paths according to your project.
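For step 5, a minimal app.conf sketch matching the socket path above (the server_name and static alias are assumptions; substitute your own):
upstream app_server {
server unix:/home/sammy/myprojectdir/myproject/run/proj_name.sock fail_timeout=0;
}
server {
listen 80;
server_name example.com;
location /static/ {
alias /home/sammy/myprojectdir/myproject/static/;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}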

Nginx not serving static files and user uploaded files in Django Kubernetes

Hi, I am working on Kubernetes and can't get static files or user documents. Everything runs behind an nginx ingress; all queries work and Django behaves as expected, but I am unable to figure out why images and other documents can't be fetched via their URL. The application is installed using Helm charts and functionality seems to be okay other than serving files.
FROM python:3.8.12-alpine3.15
ADD ./requirements.txt /app/requirements.txt
RUN set -ex \
&& apk add --no-cache --virtual .build-deps postgresql-dev build-base linux-headers jpeg-dev zlib-dev\
&& python -m venv /env \
&& /env/bin/pip install --upgrade pip \
&& /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
&& runDeps="$(scanelf --needed --nobanner --recursive /env \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u)" \
&& apk add --virtual rundeps $runDeps \
&& apk del .build-deps
ADD ./ /app
WORKDIR /app
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN apk add nginx
RUN rm /etc/nginx/http.d/default.conf
COPY helm/nginx.conf /etc/nginx/nginx.conf
ENTRYPOINT ["sh", "helm/run.sh"]
run.sh
nginx -g 'daemon on;'
gunicorn main_app.wsgi:application --bind 0.0.0.0:8080 --workers 3
nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
# default_type application/octet-stream;
access_log /var/log/nginx/access.log;
upstream django {
server 0.0.0.0:8080;
}
server {
include /etc/nginx/mime.types;
listen 8000;
location /static {
autoindex on;
alias /app/static/;
}
location / {
proxy_pass http://django;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
}
location /uploads {
autoindex on;
alias /app/uploads/;
}
# location ~* ^/[^/]+\.(?:gif|jpg|jpeg|pdf)$ {
# root /app/;
# try_files $uri =404;;;;
# }
}
}
Django settings file is like this:
STATIC_ROOT = BASE_DIR / 'static'
STATIC_URL = '/static/'
# media folder setting
MEDIA_URL = '/uploads/'
MEDIA_ROOT = BASE_DIR / 'uploads'
If i go to /admin url files are served but i get this error in a js file:
Uncaught SyntaxError: Unexpected token '<'
This was one hell of a ride to figure out. The following change in nginx.conf worked:
location your_ingresspath_here/static {
autoindex on;
alias /app/static/;
}
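If the ingress serves the app under a path prefix, Django also has to generate URLs with that prefix, or the browser requests paths nginx never matches (which is why a JS file can come back as an HTML 404 page and throw Unexpected token '<'). A hedged sketch of the matching settings change, using the same placeholder as the location block above:
STATIC_URL = 'your_ingresspath_here/static/'  # placeholder, same prefix as the nginx location
MEDIA_URL = 'your_ingresspath_here/uploads/'  # hypothetical, only if uploads sit under the prefix too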

Why can't I see my NGINX logs when my app is deployed to Azure App Services, but it works fine locally?

I have a Dockerized Django application, orchestrated with Supervisor, which is not optimal but needed when hosting on Azure App Services, as their multi-app support with docker-compose is still in preview mode (aka beta).
According to best practices, I have configured each application within supervisord to emit its logs to STDOUT. It works fine when I build the Docker image locally, run it and check the docker logs. However, when I have deployed it to Azure App Services and check the logs, my web application (Gunicorn) is logging as expected, but the logs from NGINX don't appear at all.
I have tried different configurations in my Dockerfile for linking the log files generated by NGINX (linking to both /dev/stdout and /dev/fd/1, for example), and I have also gone into nginx.conf and tried logging directly to /dev/stdout. Whatever I do, it works fine locally, but on Azure the logs don't show any NGINX logs. I've pasted the relevant configuration files, where you can see the commented lines with the options I've tried. Hope someone can help me figure this one out.
EDIT:
I've also tried logging the NGINX app to a log file in the system, which also works fine locally, but not in Azure App Services. I tried deactivating the "user nginx" part in nginx.conf, as I thought it could have something to do with permissions, but that didn't help either.
EDIT 2:
I also tried creating the log files in my home directory in the web app at Azure, thinking it may have had to do with not being able to create logs in other directories. Again, it works locally, but the logs in Azure are empty.
Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
###################
# PACKAGE INSTALLS
###################
RUN apt-get update
RUN apt-get install -y pgbouncer
RUN apt-get update && apt-get install -y supervisor
RUN apt-get install nano
RUN apt-get install -y git
RUN apt-get install curl
# Supervisor-stdout for consolidating logs
RUN pip install git+https://github.com/coderanger/supervisor-stdout
###################
# AZURE SSH SETUP
###################
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
RUN apt-get install -y --no-install-recommends openssh-server \
&& echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY ./bin/staging/sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY ./bin/staging/ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
##############
# NGINX SETUP
##############
ENV NGINX_VERSION 1.15.12-1~stretch
ENV NJS_VERSION 1.15.12.0.3.1-1~stretch
RUN set -x \
&& \
NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
found=''; \
for server in \
hkp://keyserver.ubuntu.com:80 \
hkp://p80.pool.sks-keyservers.net:80 \
pgp.mit.edu \
; do \
echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
apt-key adv --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
done; \
test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
apt-get remove --purge --auto-remove -y gnupg1 && rm -rf /var/lib/apt/lists/* \
&& dpkgArch="$(dpkg --print-architecture)" \
&& nginxPackages=" \
nginx=${NGINX_VERSION} \
nginx-module-xslt=${NGINX_VERSION} \
nginx-module-geoip=${NGINX_VERSION} \
nginx-module-image-filter=${NGINX_VERSION} \
nginx-module-njs=${NJS_VERSION} \
" \
&& echo "deb https://nginx.org/packages/mainline/debian/ stretch nginx" >> /etc/apt/sources.list.d/nginx.list \
&& apt-get update \
&& apt-get install --no-install-recommends --no-install-suggests -y \
$nginxPackages \
gettext-base \
&& rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/nginx.list
COPY ./conf/nginx/staging.conf /etc/nginx/conf.d/default.conf
COPY ./conf/nginx/nginx.conf /etc/nginx/nginx.conf
# Linking logs to be able to print errors and logs to STDOUT
#RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
RUN ln -sf /dev/fd/1 /var/log/nginx/access.log && ln -sf /dev/fd/2 /var/log/nginx/error.log
##########################
# DJANGO APPLICATION SETUP
##########################
# install app
RUN mkdir /var/app && chown www-data:www-data /var/app
WORKDIR /var/app
COPY ./requirements.txt /var/app/
RUN pip install -r requirements.txt
COPY . /var/app/
#############
# SUPERVISORD
#############
COPY ./bin/staging/supervisord_main.conf /etc/supervisor/conf.d/supervisord_main.conf
COPY ./bin/staging/prefix-log /usr/local/bin/prefix-log
##########
# VOLUMES
##########
VOLUME /var/logs
########
# PORTS
########
# Expose ports (Added from previous dockerfile)
EXPOSE 80 2222
#########################
# SUPERCRONIC (CRON-TABS)
#########################
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=048b95b48b708983effb2e5c935a1ef8483d9e3e
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
#############
# PERMISSIONS
#############
RUN ["chmod", "+x", "/var/app/bin/staging/entrypoint_main.sh"]
RUN ["chmod", "+x", "/usr/local/bin/prefix-log"]
############
# ENTRYPOINT
############
ENTRYPOINT ["/var/app/bin/staging/entrypoint_main.sh"]
supervisord_main.conf
[supervisord]
logfile=/var/logs/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/var/logs/supervisord.pid
nodaemon=true ; Run interactively instead of daemonizing
# user=www-data
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[inet_http_server]
port = 127.0.0.1:9001
[supervisorctl]
serverurl = http://127.0.0.1:9001
#serverurl=unix:///var/run/supervisor.sock
[program:nginx]
#command=/usr/local/bin/prefix-log /usr/sbin/nginx -g "daemon off;"
command=/usr/sbin/nginx -g "daemon off;"
directory=./projectile/
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
[program:ssh]
command=/usr/local/bin/prefix-log /usr/sbin/sshd -D
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
[program:web]
user=www-data
command=/usr/local/bin/prefix-log gunicorn --bind 0.0.0.0:8000 projectile.wsgi:application # Run each app through a sh script to prepend logs with the application name
#command=gunicorn --workers=%(ENV_WORKER_COUNT)s --bind 0.0.0.0:8000 myapp_project.wsgi:application
directory=./projectile/
autostart=true
autorestart=true
stdout_events_enabled=true
stderr_events_enabled=true
stdout_logfile = /dev/fd/1
stdout_logfile_maxbytes=0
stderr_logfile = /dev/fd/2
stderr_logfile_maxbytes=0
nginx.conf
user nginx;
worker_processes 2; # Set to number of CPU cores, 2 cores under Azure plan P1v3
error_log /var/log/nginx/error.log warn;
#error_log /dev/stdout warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#access_log /dev/stdout main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
staging.conf
server {
listen 80 default_server;
error_log /dev/stdout info;
access_log /dev/stdout;
client_max_body_size 100M;
location /static {
root /var/app/ui/build;
}
location /site-static {
root /var;
}
location /media {
root /var;
}
location / {
root /var/app/ui/build; # try react build directory first, if file doesn't exist, route requests to django app
try_files $uri $uri/index.html $uri.html #app;
}
location #app {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https"; # assumes https already terminated by the load balancer in front of us
proxy_pass http://127.0.0.1:8000;
proxy_read_timeout 300;
proxy_buffering off;
}
}
Solved it. The issue was that the Azure App Service had the configuration setting WEBSITES_PORT=8000 set, which made traffic go straight to Gunicorn, bypassing NGINX and thus not creating any logs. Simply removing the setting fixed the issue.
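For reference, the app setting can also be removed from the CLI; a sketch assuming your own app and resource group names:
az webapp config appsettings delete --name <app-name> --resource-group <resource-group> --setting-names WEBSITES_PORT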

Linode Django uwsgi Nginx

The following setup is only letting me see the default Nginx html page. How can I get to Django?
I've been following Linode's documentation on how to set this up (and numerous other tutorials), but they don't use systemd, so things are a bit different.
https://www.linode.com/docs/websites/nginx/deploy-django-applications-using-uwsgi-and-nginx-on-ubuntu-14-04
I am using Linode with Fedora 24. I have installed my virtualenv at
/home/ofey/djangoenv and activated it,
Django is installed using pip at
/home/ofey/qqiProject
Into the virtualenv I've installed uwsgi.
Firstly,
/etc/systemd/system/uwsgi.service
[Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/home/ofey/djangoenv/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
This executes,
/etc/uwsgi/sites/qqiProject.ini
[uwsgi]
project = qqiProject
base = /home/ofey
chdir = %(base)/%(project)
home = %(base)/djangoenv
module = %(project).wsgi:application
master = true
processes = 2
socket = %(base)/%(project)/%(project).sock
chmod-socket = 664
vacuum = true
Also,
/etc/nginx/sites-available/qqiProject
server {
listen 80;
server_name qqiresources.com www.qqiresources.com;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/django/qqiProject;
}
location / {
include uwsgi_params;
uwsgi_pass unix:/home/django/qqiProject/qqiProject.sock;
}
}
The file /etc/nginx/nginx.conf has not been changed.
The user is ofey, I've used,
$ sudo systemctl daemon-reload
$ sudo systemctl restart nginx
$ sudo systemctl start uwsgi.service
Started Django with,
$ python manage.py runserver
In Django's settings.py I turned off debugging and added an allowed host:
DEBUG = False
ALLOWED_HOSTS = ['qqiresources.com']
I have also created a symbolic link,
sudo ln -s /etc/nginx/sites-available/qqiProject /etc/nginx/sites-enabled
Any help would be greatly appreciated,
Thanks
The following is working for me; however, I am not sure if this is the correct or most efficient way to do it, especially where the sockets are concerned.
systemd runs uwsgi.service, which can be started with,
$ sudo systemctl start uwsgi.service
Sometimes it is necessary to reload systemd using,
$ sudo systemctl daemon-reload
/etc/systemd/system/uwsgi.service
[Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/home/ofey/djangoenv/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
This calls the binary inside my django virtual environment directory with,
ExecStart=/home/ofey/djangoenv/bin/uwsgi
and also takes us to /etc/uwsgi/sites, where the uwsgi configuration file is called djangoForum.ini
/etc/uwsgi/sites/djangoForum.ini
[uwsgi]
project = djangoForum
base = /home/ofey
chdir = %(base)/%(project)
home = %(base)/djangoenv
module = crudProject.wsgi:application
master = true
processes = 2
socket = 127.0.0.1:3031
chmod-socket = 664
vacuum = true
Django is at /home/ofey/djangoForum and my django project is at /home/ofey/djangoForum/crudProject
/etc/nginx/nginx.conf
events {
worker_connections 1024;
}
http {
upstream django {
# connect to this socket
# server unix:///tmp/uwsgi.sock; # for a file socket
server 127.0.0.1:3031; # for a web port socket
}
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name example.com; # substitute your machine's IP address or FQDN
charset utf-8;
#Max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /home/ofey/djangoForum/fileuploader/uploaded_files; # your Django project's media files
}
location /static {
alias /home/ofey/djangoForum/noAppBoundStaticDirectory; # your Django project's static files
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
# uwsgi_pass 127.0.0.1:3031;
include /etc/nginx/uwsgi_params; # or the uwsgi_params you installed manually
}
}
}
nginx can be turned on with,
$ sudo systemctl start nginx.service
'start' can be replaced with 'restart' or 'stop'.
These configuration files for uwsgi and nginx worked for me.
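If you later want to go back to a file socket instead of the TCP port, the one thing that has to line up is the socket path in the ini and the server unix:/// address in nginx; a minimal sketch, assuming the paths above:
# /etc/uwsgi/sites/djangoForum.ini
socket = /home/ofey/djangoForum/djangoForum.sock
chmod-socket = 664
# in the nginx upstream block
upstream django {
server unix:///home/ofey/djangoForum/djangoForum.sock; # for a file socket
}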
Useful links:
https://gist.github.com/evildmp/3094281
https://www.linode.com/docs/websites/nginx/deploy-django-applications-using-uwsgi-and-nginx-on-ubuntu-14-04

Gunicorn is not binding my domain when using a ".sock" file

I am trying to host multiple sites on a VPS using a sock file, but I can't see the website up and running with the Gunicorn sock. I need to know how to change the following setup so that my app binds to a particular port instead of a sock file, or, if it has to be a sock file, why I can't see the site in the browser at mydomain.com.
My Gunicorn startup script is as follows:
#!/bin/bash
NAME="dressika" # Name of the application
DJANGODIR=/django/mydomain # Django project directory
SOCKFILE=/django/mydomain/run/gunicorn.sock # we will communicte using this unix socket
USER=django # the user to run as
GROUP=django # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=mydomain.settings # which settings file should Django use
DJANGO_WSGI_MODULE=mydomain.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--bind=unix:$SOCKFILE \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--log-file=-
With the above settings the Gunicorn startup script runs fine, but I can't see my site live in the browser or on the client end. I guess I need to bind it to some port; I am not sure if my assumption is correct. My app's settings.py has ALLOWED_HOSTS = ['mydomain.com', 'www.mydomain.com']. Still the URL isn't working.
My Nginx settings are:
upstream mydomain_server {
server 127.0.0.1:9500 fail_timeout=0;
}
server {
listen 80;
listen [::]:80;
root /home/django/mydomain;
index index.html index.htm;
client_max_body_size 4G;
server_name mydomain.com www.mydomain.com;
keepalive_timeout 5;
location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2|woff|ttf)$ {
expires 365d;
}
# Your Django project's media files - amend as required
location /media {
alias /home/django/mydomain/media/;
}
# your Django project's static files - amend as required
location static/static-only {
alias /home/django/mydomain/static-only/;
}
# Django static images
location /static/mydomain/images {
alias /home/django/mydomain/static-only/images/;
}
# Proxy the static assests for the Django Admin panel
location /static/admin {
alias /usr/lib/python2.7/dist-packages/django/contrib/admin/static/admin;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://mydomain_server;
proxy_connect_timeout 60s;
}
}
I've also tried binding /home/django/mydomain/run/gunicorn.sock in the upstream server instead of IP:Port, but still couldn't see the site up and running.
I had the same problem: the .sock file wasn't being created. This method helped me.
Prerequisites:
1. Nginx is installed: when you type 127.0.0.1 in the browser, you get "Welcome to nginx...".
2. Python 2 or 3 (no matter which) and the other stuff are installed: pip, django, gunicorn...
3. virtualenv is installed and set up. (In my case I use virtualenvwrapper; good stuff, it keeps all your envs in one folder: /home/user/.virtualenvs/.)
4. You created a Django project, and python manage.py runserver gives you "It works..." - good news.
5. When you type gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application you get the same result as in step 4.
Next step for wiring your Django project through gunicorn to nginx:
Create a file /etc/systemd/system/any_file_name.service - you can name this file whatever you want; at DO it is named gunicorn.service.
My method:
$ cd /etc/systemd/system
$ sudo touch gunicorn.service
and open it in your favorite text editor:
$ sudo subl gunicorn.service
Inside it you write:
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=vetal
Group=www-data
WorkingDirectory=/var/www/apple.net
ExecStart=/home/vetal/.virtualenvs/univ/bin/gunicorn --workers 3 --bind unix:/var/www/apple.net/mysite/mysite.sock mysite.wsgi:application
[Install]
WantedBy=multi-user.target
ExecStart is what systemd will start, even when your virtualenv is turned off. (Remember, gunicorn was installed through pip while your env was turned on.)
--bind unix:... is the address WHERE your .sock file will be created. Pay attention to this!
CHECK EVERY LETTER! TWICE!!! (with your own paths, of course)
Type:
$ ls -l
If the attributes of your gunicorn.service look like this:
-rw-r--r-- 1 root root 0 Jan 12 11:48 gunicorn.service
the file is not executable, and your .sock file will never be created! Do the following:
$ sudo chmod 755 gunicorn.service
and check:
$ ls -l
If you get:
-rwxr-xr-x 1 root root 305 Jan 11 19:48 gunicorn.service
that's good! Everything is all right!
Then create the nginx server block in /etc/nginx/sites-available/; it looks like this:
server {
listen 80;
root /var/www/apple.net;
server_name apple.net;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
alias /var/www/apple.net/static/;
}
location / {
include proxy_params;
proxy_pass http://unix:/var/www/apple.net/mysite/mysite.sock;
}
}
Notice: proxy_pass must match exactly the folder where the .sock file is created in gunicorn.service!
Copy this file to /etc/nginx/sites-enabled:
$ sudo cp /etc/nginx/sites-available/apple.net /etc/nginx/sites-enabled
I don't have any domain, so I modified my /etc/hosts file and added the row:
127.0.0.10 apple.net
Very important steps!!!
$ pkill gunicorn - this kills any daemon you may have started before. gunicorn here means the name of the file you created earlier with the .service extension in the /etc/systemd/system folder.
Start the gunicorn.service daemon:
$ sudo systemctl start gunicorn
$ sudo systemctl enable gunicorn
Start (or restart) nginx:
$ sudo /etc/init.d/nginx (re)start
Check your domain name in the browser.
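To confirm the socket really was created (the original problem here), check something like:
$ ls -l /var/www/apple.net/mysite/mysite.sock
$ sudo systemctl status gunicorn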
Since gunicorn is running on a socket, you need to bind to that socket, not to a port, in the upstream section.
upstream mydomain_server {
server unix:/home/django/mydomain/run/gunicorn.sock fail_timeout=0;
}
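After changing the upstream, validate and reload nginx, and you can poke the socket directly with curl (7.40+ supports --unix-socket); a sketch using the question's paths:
sudo nginx -t && sudo systemctl reload nginx
curl --unix-socket /home/django/mydomain/run/gunicorn.sock http://localhost/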
I have nginx proxying to a .sock file from gunicorn. My typical gunicorn call looks like this:
exec gunicorn \
--pid /web/gunicorn.pid \
--workers '4' \
--name myapp \
--chdir /src/myapp \
--bind unix:/web/.sock \
--log-file=- \
myapp.wsgi:application
My nginx conf for / looks like this; the main difference seems to be that your proxy_pass statement doesn't point to the .sock file:
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://unix:/web/.sock;
}