When I try to access my site, all I see is a 502 error.
Here's my nginx configuration:
upstream pzw_server {
# server unix:/home/pzw/pzw/run/gunicorn.sock fail_timeout=0;
server 127.0.0.1:8000 fail_timeout=0;
}
server {
listen 80;
server_name my_server_ip_addr;
client_max_body_size 4G;
access_log /home/pzw/pzw/log/nginx-access.log;
error_log /home/pzw/pzw/log/nginx-error.log;
location /static/ {
alias /home/pzw/pzw/static/;
}
location /media/ {
alias /home/pzw/pzw/media/;
}
location / {
try_files $uri @proxy;
}
location @proxy {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://my_server_ip_addr;
}
}
The Gunicorn startup script I'm using:
#!/bin/bash
NAME='app_name'
DJANGODIR=/home/pzw/pzw
SOCKFILE=/home/pzw/pzw/run/gunicorn.sock
USER=pzw
GROUP=pzw
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=app_name.settings
VIRTENVDIR=/home/pzw/.virtualenvs/pzw
echo "STARTING $NAME"
cd $DJANGODIR
source "${VIRTENVDIR}/bin/activate"
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec "${VIRTENVDIR}/bin/gunicorn_django" \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--debug \
--log-level debug #\
# --bind=unix:$SOCKFILE
Nginx logs the following error:
2013/08/03 23:26:04 [error] 8582#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: my_ip, server: my_server_ip, request: "GET / HTTP/1.1", upstream: "http://my_server_ip:80/", host: "my_server_ip"
When I try to connect to 127.0.0.1:8000 on my server using lynx, everything seems fine. Initially I tried to use a unix socket, but since that didn't work (same error), I switched to TCP. Gunicorn logs nothing about the connection with nginx.
The proxy_pass directive in your nginx configuration should point at the upstream block you defined, not at the server's IP:
proxy_pass http://pzw_server;
http://wiki.nginx.org/HttpUpstreamModule
xaxes,
Whenever you set DEBUG to False, you also need to set ALLOWED_HOSTS so that Django knows which hosts/domains to serve. localhost only works implicitly with an empty ALLOWED_HOSTS while DEBUG is True.
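For example, a minimal sketch of the relevant settings (the hostnames and IP below are placeholders, not taken from your setup):
# settings.py -- placeholder values; list every hostname/IP the site is reached through
DEBUG = False
ALLOWED_HOSTS = ["example.com", "www.example.com", "203.0.113.10"]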
I hope this helps!
Related
I am trying to deploy Django with Gunicorn and Nginx on Heroku, and I'm a bit confused about how to configure them. When I searched the internet, the guides usually create a gunicorn.socket
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
and gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myprojectdir
ExecStart=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
myproject.wsgi:application
[Install]
WantedBy=multi-user.target
But when I go to the Gunicorn docs (https://docs.gunicorn.org/en/stable/deploy.html), Nginx has a config file like this:
worker_processes 1;
user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
include mime.types;
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /var/log/nginx/access.log combined;
sendfile on;
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
server unix:/tmp/gunicorn.sock fail_timeout=0;
# for a TCP configuration
# server 192.168.0.7:8000 fail_timeout=0;
}
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name example.com www.example.com;
keepalive_timeout 5;
# path for static files
root /path/to/app/current/public;
location / {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /path/to/app/current/public;
}
}
}
So I wonder what the difference between these is, and which is the best way to set up Gunicorn and Nginx.
Thanks
You can try the following steps to deploy a Django project using Nginx, Supervisor, and Gunicorn:
1- Create a new Gunicorn script in /myprojectenv/bin/, e.g. gunicorn_start:
#!/bin/bash
NAME="myproject"
DJANGODIR=/home/sammy/myprojectdir/myproject
SOCKFILE=/home/sammy/myprojectdir/myproject/run/proj_name.sock
USER=sammy
GROUP=www-data
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=myproject.settings
DJANGO_WSGI_MODULE=myproject.wsgi
echo "Starting $NAME as whoami"
cd $DJANGODIR
source ../myprojectenv/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec ../myprojectenv/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
2- Install Supervisor:
pip install supervisor  # or: yum install supervisor
3- Create a conf file under /etc/supervisor.d
Example config file:
[program:myproject]
directory=/home/sammy/myprojectdir/myproject
command=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn_start --workers 3 --bind unix:/home/sammy/myprojectdir/myproject/run/proj_name.sock myproject.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/sammy/myprojectdir/myproject/Logs/gunicorn_supervisor.log
stdout_logfile=/home/sammy/myprojectdir/myproject/Logs/gunicorn_supervisor.log
user=sammy
group=www-data
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
4- Run supervisorctl reread && supervisorctl update
5- nano /etc/nginx/sites-available/app.conf
6- ln -s /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled
7- systemctl restart nginx
Please change the folder names and paths according to your project.
I am trying to deploy a Django project but I get a 502 Bad Gateway error.
I used this tutorial.
I used Supervisor, Gunicorn, and Nginx.
./virtualenvs/legaland_env/bin/gunicorn
#!/bin/bash
NAME="django_project"
DIR=/home/django/django_project
USER=django
GROUP=django
WORKERS=3
BIND=unix:/home/django/run/gunicorn.sock
DJANGO_SETTINGS_MODULE=django_project.settings
DJANGO_WSGI_MODULE=django_project.wsgi
LOG_LEVEL=error
cd $DIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DIR:$PYTHONPATH
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $WORKERS \
--user=$USER \
--group=$GROUP \
--bind=$BIND \
--log-level=$LOG_LEVEL \
--log-file=-
/etc/supervisor/conf.d/sqh.conf
[program:sqh]
startsecs=0
command=/home/admin/legaland/virtualenvs/legaland_env/bin/gunicorn
user=admin
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/admin/legaland/gunicorn-error.log
/etc/nginx/sites-available/sqh
upstream app_server {
server unix:/home/admin/legaland/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
server_name ;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/admin/legaland/logs/nginx-access.log;
error_log /home/admin/legaland/logs/nginx-error.log;
location /static/ {
alias /home/admin/legaland/Legaland/src/static_root/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
I had a similar issue on the hosted platform Heroku. I also got a 502 Bad Gateway, and the fix was to run the web application locally. You have to specify settings such as these in the nginx config file:
Using TCP/IP Connection:
upstream localhost {
# server unix:/tmp/nginx.socket fail_timeout=0;
server 127.0.0.1:8000;
}
location / {
# Uncomment this if statement to force SSL/redirect http -> https
# if ($http_x_forwarded_proto != "https") {
# return 301 https://$host$request_uri;
# }
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host 127.0.0.1:8000;
proxy_redirect off;
proxy_pass http://localhost;
}
Then in your gunicorn config file:
command='/myapp/django_env/bin/gunicorn'
pythonpath='/myapp'
bind='localhost'
workers=1
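For reference, a standalone Gunicorn config file is just Python, so a minimal sketch using only documented settings (the paths and values below are placeholders, not taken from the setup above) could look like this:
# conf/gunicorn_config.py -- placeholder values, adjust to your project
bind = "127.0.0.1:8000"  # or "unix:/tmp/gunicorn.sock" for a unix-socket setup
workers = 1
pythonpath = "/myapp"
loglevel = "info"
accesslog = "-"  # write the access log to stdout so Heroku captures it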
The Procfile:
web: gunicorn -c conf/gunicorn_config.py myproject.wsgi
worker: service nginx start
Lastly, run the following command to start the Heroku app locally:
heroku local
I've set up an application to run with Gunicorn behind Nginx. Here's my Gunicorn setup:
#!/bin/bash
NAME="davidbien"
DJANGODIR=/home/ec2-user/davidbien
SOCKFILE=/home/ec2-user/davidbien/davidbien.sock
USER=ec2-user
GROUP=ec2-user
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=davidbien.settings
DJANGO_WSGI_MODULE=davidbien.wsgi
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../virtual/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
Here's my nginx setup:
upstream app_server_djangoapp {
server unix:/home/ec2-user/davidbien/davidbien.sock fail_timeout=0;
}
server {
listen 80;
server_name 52.56.193.57;
access_log /var/log/nginx/guni-access.log;
error_log /var/log/nginx/guni-error.log info;
keepalive_timeout 5;
# path for static files
root /home/ec2-user/davidbien/davidbien/static;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
}
When running nginx I keep getting the following error in the logs:
[crit] 23465#0: *1 connect() to unix:/home/ec2-user/davidbien/davidbien.sock failed (13: Permission denied) while connecting to upstream, client: 2.96.149.96, server: 52.56.193.57, request: "GET /faviconn.ico HTTP/1.1", upstream: "http://unix:/home/ec2-user/davidbien/davidbieen.sock:/favicon.ico", host: "52.56.193.57", referrer: "http://52.56.193.57/"
What is wrong with this? I believe the user I'm running as owns the folders inside its own home directory.
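One thing worth checking: the nginx worker user needs execute (traverse) permission on every parent directory of the socket, not just access to the .sock file itself, and home directories such as /home/ec2-user are often mode 700. Here is a rough sketch (roughly what namei -l does) that prints the permissions of each component of the socket path from the question:
# check_sock_path.py -- walk each component of the socket path and show its permissions
import os
import stat

sock = "/home/ec2-user/davidbien/davidbien.sock"
path = "/"
for part in sock.strip("/").split("/"):
    path = os.path.join(path, part)
    st = os.stat(path)
    print(f"{stat.filemode(st.st_mode)}  uid={st.st_uid}  gid={st.st_gid}  {path}")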
I'm deploying my first attempt at using django+gunicorn+nginx.
I have Django working (curl -XGET http://127.0.0.1:8000 works fine if I run the development server).
I have nginx working for static content (for example I can retrieve http://example.com/static/my_pic.png in my browser).
I'm not getting any wsgi content from my website, and I haven't been able to find a good troubleshooting guide (does it just work for everyone else?!). I start gunicorn using supervisor, which reports that it is indeed running:
(in shell:)
supervisorctl status my_app
my_app RUNNING pid 1002, uptime 0:29:51
Here's the boilerplate script I used to start it:
#!/bin/bash
#script variables
NAME="gunicorn_myapp" # Name of process
DJANGODIR=/webapps/www/my_project # Django project directory
SOCKFILE=/webapps/www/run/gunicorn.sock # communicate using this socket
USER=app_user # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=my_project.settings # settings file
DJANGO_WSGI_MODULE=my_project.wsgi # WSGI module name
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE
Here's the (condensed) nginx config file:
upstream my_server {
server unix:/webapps/www/run/gunicorn.sock fail_timeout=10s;
}
server {
listen 80;
server_name www.example.com;
return 301 $scheme://example.com$request_uri;
}
server {
listen 80;
server_name example.com;
client_max_body_size 4G;
access_log /webapps/www/logs/nginx-access.log;
error_log /webapps/www/logs/nginx-error.log;
location /favicon.ico { access_log off; log_not_found off; }
location /static/ {
autoindex on;
alias /webapps/www/my_project/my_app/static/;
}
location /media/ {
autoindex on;
alias /webapps/www/my_project/my_app/media/;
}
location / {
proxy_pass http://my_server;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://example.com;
break;
}
}
location /robots.txt {
alias /webapps/www/my_project/my_app/static/robots.txt ;
}
# Error pages
error_page 500 502 503 504 /500.html;
location = /500.html {
root /webapps/www/my_project/my_app/static/;
}
}
So: gunicorn is running, nginx is running ... what tests (and how?) should I perform to determine if gunicorn is doing the wsgi stuff properly (and if nginx is proxying the said stuff through correctly)?
Edit: I've narrowed the problem down to the communication between gunicorn and nginx via the unix socket. If I change the $SOCKFILE to be bound to 0.0.0.0:80 and stop nginx, then the app's pages are served from my website. The bad news is that the socket file strings are exactly the same between the two conf files, so I don't know why they aren't communicating. I suppose this means nginx isn't correctly fetching and passing the data through then?
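One way to test the gunicorn side in isolation is to speak HTTP to the unix socket directly, bypassing nginx. A rough standard-library sketch (socket path and Host value copied from the configs above), to be run as a user that can read the socket:
# socket_check.py -- send one request straight to gunicorn's unix socket
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTPConnection variant that connects to a unix domain socket instead of TCP
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/webapps/www/run/gunicorn.sock")
conn.request("GET", "/", headers={"Host": "example.com"})
resp = conn.getresponse()
print(resp.status, resp.reason)
If this prints a normal response, gunicorn is serving WSGI over the socket and the problem is on the nginx side (configuration or socket permissions).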
Go to the project directory and run Gunicorn in the foreground so any errors are printed directly:
cd projectname
gunicorn --log-file=- projectname.wsgi:application
Then check the service status:
sudo systemctl status gunicorn
I'm using a unix socket instead of a TCP port for gunicorn to serve my Django app from. However, when debug is off I get a 400 response unless I set ALLOWED_HOSTS = ['*']. What is a safer option than '*' in this scenario?
Here's my Gunicorn startup script (/opt/example.com/bin/gunicorn_start):
#!/bin/bash
NAME="myapp" # Name of the application
DJANGODIR=/opt/example.com/myapp # Django project directory
SOCKFILE=/opt/example.com/run/gunicorn.sock # we will communicate using this unix socket
USER=myuser # the user to run as
GROUP=mygroup # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myapp.settings # which settings file should Django use
DJANGO_WSGI_MODULE=myapp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
It turns out I just needed to add my server's hostname. I had been using ['localhost', '127.0.0.1'], but since I had also added the following nginx config, the app needed to allow the website's hostname.
upstream blog_app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response (in case the Unicorn master nukes a
# single worker for timing out).
server unix:/opt/example.com/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name www.example.com example.com;
server_tokens off;
access_log /opt/example.com/logs/nginx-access.log;
error_log /opt/example.com/logs/nginx-error.log;
location /static/ {
alias /opt/example.com/static/;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://blog_app_server;
break;
}
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Specifically, I think it was the line proxy_set_header Host $http_host; that meant I needed to add the site's name to ALLOWED_HOSTS.
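In other words, with proxy_set_header Host $http_host nginx forwards whatever hostname the visitor used, so ALLOWED_HOSTS has to mirror the server_name values above, for example:
# settings.py -- hostnames mirror the nginx server_name line above
ALLOWED_HOSTS = ["example.com", "www.example.com"]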