I'm building a web server with Django.
Now I want to deploy it behind uWSGI and Nginx, so I'm following the official tutorial (http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html).
While following that document, I ran into an error.
When I connect to mydomain.com:8000, Nginx returns a 502 Bad Gateway error.
(In my actual setup, mydomain.com is replaced with the real domain I own.)
After the error, /var/log/nginx/error.log contains the following:
2018/02/20 14:56:15 [error] 7548#7548: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.30.1.254, server: mydomain.com, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8001", host: "mydomain.com:8000"
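For reference, "(111: Connection refused)" means nothing accepted the TCP connection on the upstream address. A quick way to confirm that, sketched in Python (host and port are taken from the log line above; this is a generic check, not part of the tutorial):

```python
import socket

def upstream_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 127.0.0.1:8001 is the upstream from the error log; if this prints
# False, nothing is listening there, which is exactly what
# "(111: Connection refused)" means: uWSGI is not running, or it is
# bound to a different address than nginx expects.
print(upstream_reachable("127.0.0.1", 8001))
```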
These are my configuration files.
[project_rest.conf]
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name .mydomain.com; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/app/project_rest/media;
    }

    location /static {
        alias /home/app/project_rest/static;
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/app/project_rest/uwsgi_params; # the uwsgi_params file you installed
    }
}
(I created that conf file in my Django project's folder and symlinked it into /etc/nginx/sites-enabled.)
How can I connect to my server? I can't find where the error occurs.
Thanks.
Your Nginx configuration is correct, so let's take a look at your uWSGI configuration.
First of all, I assume you have installed uWSGI system-wide via apt-get, yum, etc.
The next thing to install (system-wide) is uwsgi-plugin-python3 (or uwsgi-plugin-python if you plan to run Django on Python 2.7, which I don't recommend).
Then you can create an ini file with all the uWSGI configuration:
[uwsgi]
socket = 127.0.0.1:8001
uid = execuser
; Normally nginx, www-data
gid = nginx
chdir = /absolute/path/to/your/project
; Assuming your wsgi module is in chdir/yourmainapp/wsgi.py
module = yourmainapp.wsgi
; Path to your virtualenv. If you are not using virtualenv,
; you should.
home = /absolute/path/to/your/virtualenv
; Enables plugins: python
plugins = python
; Enable the master process
master = true
; Pass django settings module as environment variable
; (it is expected by Django).
; Assuming your settings is in chdir/yourmainapp/settings.py
env = DJANGO_SETTINGS_MODULE=yourmainapp.settings
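Because uWSGI silently ignores options it does not recognise, a typo in the ini can be hard to spot. A small sanity check, sketched in Python with configparser (the set of "required" keys below is my assumption for this particular setup, not something uWSGI itself mandates):

```python
import configparser

# Keys this deployment relies on; a typo like "moduel" would silently
# drop one of them, so we check the parsed section against this set.
REQUIRED = {"socket", "chdir", "module", "home"}

def missing_keys(ini_text: str) -> set:
    """Return required uWSGI keys absent from the [uwsgi] section."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    return REQUIRED - set(cp["uwsgi"])

sample = """
[uwsgi]
socket = 127.0.0.1:8001
chdir = /absolute/path/to/your/project
module = yourmainapp.wsgi
home = /absolute/path/to/your/virtualenv
env = DJANGO_SETTINGS_MODULE=yourmainapp.settings
"""
print(missing_keys(sample))  # set() -> nothing missing
```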
Then, execute uwsgi:
# /path/to/uwsgi --ini /path/to/your/config.ini --daemonize /path/to/your/logs
If you have installed uWSGI via apt-get or yum, create the ini file as /etc/uwsgi/apps-enabled/yourproject.ini instead and simply run:
# service uwsgi start|restart
Finally, there are many more options for configuring uWSGI: number of processes, threads, logging, and a lot of very interesting (and poorly documented) features.
I hope it helps ;)
At /etc/nginx/default.d/xxxx:
upstream django {
    server 127.0.0.1:9000; # for a web port socket (we'll use this first)
    keepalive 32;
}
Then at /etc/nginx/nginx.conf:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/default.d/*;

    # Settings for a TLS enabled server.
    #
    server {
        listen 80;
        listen [::]:80 default_server;
        server_name ip;
        root /path_prj/;

        server_tokens off;

        error_log /var/log/bill_error.log;
        access_log /var/log/bill_access.log;

        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;

        location / {
            uwsgi_read_timeout 100;
            uwsgi_pass django;
            include /var/www/html/uwsgi_params; # the uwsgi_params file you installed
        }

        location /media/ {
            internal;
            root /path_proj/;
        }

        location /static/ {
            root /path_proj/;
        }
    }
}
Then try this command:
$ sudo uwsgi -s :9000 -M --env DJANGO_SETTINGS_MODULE=sharing.settings --chdir /path_proj/ -w "django.core.wsgi:get_wsgi_application()" --chmod-socket=666 --enable-threads --thunder-lock --daemonize /tmp/uwsgi.log --workers 10 -b 32768
The Django project is deployed using uWSGI as the application server; it also serves static files from a specified directory (as shown in the command below), and Nginx is used as a reverse proxy. Everything is deployed using Docker.
The uwsgi command to run the server is as follows:
uwsgi -b 65535 --socket :4000 --workers 100 --cpu-affinity 1 --module wui.wsgi --py-autoreload 1 --static-map /static=/project/static;
The application is working fine at this point. I would like to cache the static files in the Nginx server, so I referred to the blog post https://www.nginx.com/blog/maximizing-python-performance-with-nginx-parti-web-serving-and-caching and added the following configuration to my nginx.conf:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
After adding this to my Nginx conf, the Nginx server container exits with the following error:
[emerg] 1#1: invalid number of arguments in "location" directive in /etc/nginx/nginx.conf:43
Is this the right way to cache uWSGI-served static files in Nginx? If so, please point out what has gone wrong here.
My complete nginx.conf is as follows:
events {
    worker_connections 1024; ## Default: 1024
}

http {
    include conf/mime.types;

    # the upstream component nginx needs to connect to
    upstream uwsgi {
        server backend:4000; # for a web port socket (we'll use this first)
    }

    # configuration of the server
    server {
        # the port your site will be served on
        listen 8443 ssl http2 default_server;
        # the domain name it will serve for
        server_name _; # substitute your machine's IP address or FQDN
        charset utf-8;

        ssl_certificate /secrets/server.crt;
        ssl_certificate_key /secrets/server.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        add_header Strict-Transport-Security "max-age=31536000" always;

        # Redirect HTTP to HTTPS
        error_page 497 https://$http_host$request_uri;

        # max upload size
        client_max_body_size 75M; # adjust to taste
        uwsgi_read_timeout 600s;

        # Finally, send all non-media requests to the Django server.
        location / {
            uwsgi_pass uwsgi;
            include /config/uwsgi_params; # the uwsgi_params file you installed
        }

        location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg
        |jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid
        |midi|wav|bmp|rtf)$ {
            expires max;
            log_not_found off;
            access_log off;
        }
    }
}
Nginx version: 1.16
The problem with your config is that the location block contains newlines inside the list of file extensions. I tried nginx -t -c <filename> with a modified version of your location block:
location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
expires max;
log_not_found off;
access_log off;
}
... and this passes the test! (Strictly speaking, the leading . should also be escaped as \. so that it matches a literal dot rather than any character, but that is not what breaks the config.)
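To avoid reintroducing the newline problem the next time the extension list changes, the one-line pattern can be generated instead of edited by hand. A sketch (extension list taken from the question, with the dot escaped):

```python
# Extensions copied from the question's location block.
exts = ["ogg", "ogv", "svg", "svgz", "eot", "otf", "woff", "mp4", "ttf",
        "css", "rss", "atom", "js", "jpg", "jpeg", "gif", "png", "ico",
        "zip", "tgz", "gz", "rar", "bz2", "doc", "xls", "exe", "ppt",
        "tar", "mid", "midi", "wav", "bmp", "rtf"]

# Joining on "|" guarantees the pattern comes out as a single line,
# which is what the nginx "location" directive requires.
pattern = r"location ~* \.(%s)$ {" % "|".join(exts)
print(pattern)
```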
Hi there, I'm trying to put a Django app into production using Nginx + Gunicorn + Supervisor.
Following this guide I was able to reproduce all the steps successfully, but for some reason I can't make it work. I believe the problem is in the Nginx part of the setup, since I'm not even able to serve a static file for testing. It's my first time using all these tools.
The config files are as follows:
nginx.conf:
worker_processes 1;

user nobody nogroup;
# 'user nobody nobody;' for systems with 'nobody' as a group instead
error_log /home/seba94/log/nginx/nginx.error.log warn;
#pid /run/nginx.pid;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # set to 'on' if nginx worker_processes > 1
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    include /etc/nginx/mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;

    access_log /home/seba94/log/nginx/nginx.access.log combined;
    sendfile on;

    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response

        # for UNIX domain socket setups
        server unix:/tmp/gunicorn.sock fail_timeout=10s;

        # for a TCP configuration
        #server 127.0.0.1:8000 fail_timeout=0;
    }

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    server {
        # use 'listen 80 deferred;' for Linux
        # use 'listen 80 accept_filter=httpready;' for FreeBSD
        listen 80;
        client_max_body_size 4G;

        # set the correct host(s) for your site
        server_name reg.rocstar.tv;

        keepalive_timeout 5;

        # path for static files
        root /home/seba94/static;

        location /register/ {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        location /media/ {
            # path for Django media files
            alias /home/seba94/register-page/register_page/media;
        }

        location /static/ {
            # path for Django static files
            alias /home/seba94/register-page/register_page/static;
        }

        location /todd-logo.png {
            alias /home/seba94/static/todd-logo.png;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /home/seba94/static;
        }
    }
}
supervisord.conf:
[supervisord]
logfile=/home/seba94/log/supervisord/supervisord.log
[inet_http_server]
port=127.0.0.1:9001
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
[program:register-page-django]
command=/home/seba94/.local/share/virtualenvs/register-page-jYLn8mRO/bin/gunicorn register_page.wsgi -c /home/seba94/conf/gunicorn.conf.py
directory=/home/seba94/register-page/register_page
user=seba94
autostart=true
autorestart=true
stdout_logfile=/home/seba94/log/supervisord/register_page.log
stderr_logfile=/home/seba94/log/supervisord/register_page.err.log
[supervisorctl]
gunicorn.conf.py:
import multiprocessing
#Server socket config
bind = "unix:/tmp/gunicorn.sock"
backlog = 2048
#Workers config. Eventlet is an asynchronous worker
workers = multiprocessing.cpu_count() * 2
worker_class = "eventlet"
worker_connections = 1000
#access-logfile = "/home/seba94/log/gunicorn/gunicorn.log"
#error-logfile = "/home/seba94/log/gunicorn/gunicorn.error.log"
name = "register-page-gunicorn"
#Server mechanics
#daemon = True
I'm able to run all processes successfully, with no errors from the command line, using the following commands:
sudo service nginx start
sudo supervisord -c /home/seba94/conf/supervisord.conf
sudo supervisorctl start register-page-django
Nginx status is the following:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-10-23 20:38:58 UTC; 31min ago
Docs: man:nginx(8)
Process: 5552 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 5599 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 5594 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 5601 (nginx)
Tasks: 2 (limit: 1108)
CGroup: /system.slice/nginx.service
├─5601 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─5760 nginx: worker process
oct 23 20:38:58 register-page-server systemd[1]: Starting A high performance web server and a reverse proxy server...
oct 23 20:38:58 register-page-server systemd[1]: Started A high performance web server and a reverse proxy server.
Honestly, I can't find any errors, not even in the log files, so I don't know why I can't even see my static todd-logo.png file, nor the Django app running. Any help is more than welcome.
Edit:
It seems all the config files and commands in this question are fine; the problem turned out to be a firewall rule left over from a previous project. This setup can therefore serve as an example of these tools working together.
Please share the output of curl -v http://domain-name in the question as well.
I have Django, Nginx, Gunicorn, and MySQL on AWS.
Running a postback from Django that calls a stored procedure taking longer than 30 seconds to complete returns a "502 Bad Gateway" from nginx/1.4.6 (Ubuntu).
It sure looks like a timeout issue and that this post should resolve it.
But alas, it doesn't seem to be working.
Here is my gunicorn.conf file:
description "Gunicorn application server handling formManagement django app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/AARC-ServiceManager/ServerSide/formManagement
exec ve/bin/gunicorn --timeout 300 --workers 3 --bind unix:/home/ubuntu/AARC-ServiceManager/ServerSide/formManagement/formManagement.sock formManagement.wsgi:application
And my nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    # set client body size (max http request size) #
    client_max_body_size 50M;

    # upping the timeouts to allow time for the DB to return from a long-running sproc
    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Any thoughts?
UPDATE:
This is the error in the nginx error log:
[error] 14316#0: *13 upstream prematurely closed connection while reading response header from upstream ...
I found the resolution!
I was updating the wrong gunicorn.conf file.
I had saved the config file to source control, and while on the server I was editing that copy.
However, the file I actually needed to change was:
/etc/init/gunicorn.conf
... and I learned a lesson about having more than one config file on the server.
Thanks all who were offering help.
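For anyone debugging the same mistake, one way to see which config file the running Gunicorn actually loaded is to inspect its command line in /proc (Linux-only sketch; it simply greps process command lines for "gunicorn", and if the server was started with -c, the config path shows up in the output):

```python
import glob

def gunicorn_cmdlines():
    """Return the command lines of all processes mentioning gunicorn."""
    lines = []
    for path in glob.glob("/proc/[0-9]*/cmdline"):
        try:
            with open(path, "rb") as f:
                argv = f.read().split(b"\0")
        except OSError:
            continue  # process exited between glob and open
        if any(b"gunicorn" in arg for arg in argv):
            lines.append(b" ".join(argv).decode(errors="replace"))
    return lines

# Each printed line shows the exact flags (and -c config path, if any)
# the running server was started with.
for line in gunicorn_cmdlines():
    print(line)
```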
I have set up a Django project on CentOS 6.5 with Nginx and uWSGI.
I am getting the following error while accessing static content (/var/log/nginx/error.log):
2015/11/02 19:05:37 [error] 29701#0: *52 open() "/home/amar/workspace/myproj/config/static/rest_framework/js/default.js" failed (13: Permission denied), client: 172.29.100.104, server: myapi.dev, request: "GET /static/rest_framework/js/default.js HTTP/1.1", host: "myapi.dev", referrer: "http://myapi.dev/api/v1/datasets/"
My /etc/nginx/conf.d/virtual.conf is shown below:
# mysite_nginx.conf

# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
#
# API
#
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name myapi.dev; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    location /static {
        autoindex on;
        alias /home/amar/workspace/myproj/config/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
Here is my uwsgi.ini file :
[uwsgi]
chdir = /home/amar/workspace/myproj
#home = %(base)/.virtualenvs/myproj
module = config.wsgi:application
home = /home/amar/.virtualenvs/myproj
master = true
processes = 3
socket = /tmp/uwsgi.sock
chmod-socket = 777
vacuum = true
Could someone point me in the right direction?
It took time, but I fixed the problem myself: I changed the user from amar to root and set the static folder permission to 666. Hope it helps someone in the future.
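A note for anyone hitting the same (13: Permission denied): directories need the execute (search) bit to be traversed, which mode 666 does not include, so 755 on directories is the usual choice. The sketch below walks a path and reports which components "other" users (such as the nginx worker) cannot pass (pure Python; it ignores ACLs and SELinux, which can deny access independently of these bits):

```python
import os
import stat

def world_blocked(path: str) -> list:
    """Return components of `path` that "other" users cannot pass:
    directories missing o+x, or a final file missing o+r."""
    blocked = []
    cur = os.sep
    for part in os.path.abspath(path).split(os.sep)[1:]:
        cur = os.path.join(cur, part)
        try:
            mode = os.stat(cur).st_mode
        except FileNotFoundError:
            break  # stop at the first missing component
        if stat.S_ISDIR(mode):
            if not mode & stat.S_IXOTH:
                blocked.append(cur)
        elif not mode & stat.S_IROTH:
            blocked.append(cur)
    return blocked

# Path from the error log; on setups like the one in the question,
# /home/amar typically shows up here because home directories often
# default to mode 700.
print(world_blocked("/home/amar/workspace/myproj/config/static"))
```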
This is probably related to SELinux. You will need to allow HTTPD scripts and modules to connect to the network:
setsebool httpd_can_network_connect on -P
It's my first time setting up Nginx and Unicorn.
My Capistrano deployment went through and everything succeeded.
Here is my unicorn.rb:
#app_dir = File.expand_path('../../', __FILE__)
#shared_dir = File.expand_path('../../../shared/', __FILE__)
preload_app true
worker_processes 4
timeout 30
working_directory "/home/deploy/appname"
shared_dir = "/home/deploy/appname/shared"
# Set up socket location
# by default unicorn listens on 8080
listen "#{shared_dir}/tmp/sockets/unicorn.sock", :backlog => 64
# Logging
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
# Set master PID location
pid "#{shared_dir}/tmp/pids/unicorn.pid"
#must set preload app true to use before/after fork
before_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!

  # before forking, this is supposed to kill the master process that belongs to the oldbin
  # enables 0-downtime deploys
  old_pid = "#{shared_dir}/tmp/pids/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
# before_exec do |server|
# ENV['BUNDLE_GEMFILE'] = "#{app_dir}/Gemfile"
# end
My nginx conf at /etc/nginx/nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
My default file at /etc/nginx/sites-enabled/default:
upstream app_server {
    # path to unicorn sock file, as defined previously
    server unix:/home/deploy/appname/shared/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80;

    root /home/deploy/appname;
    try_files $uri/index.html $uri @app;

    # click tracking
    access_log /var/log/nginx/appname_access.log combined;
    error_log /var/log/nginx/appname_error.log;

    location @app {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
When I do this:
deploy#localhost:~$ sudo nginx -s reload
nginx: [emerg] host not found in upstream "app" in /etc/nginx/sites-enabled/default:46
When I head into
/shared/tmp/sockets
there is no file in there. I don't think I should create it manually (I am using Capistrano 3). Am I supposed to generate this file?
I am using this in my Capfile:
require 'capistrano3/unicorn'
In deploy.rb:
# symlinked files and directories
set :linked_files, %w{config/database.yml config/secrets.yml}
set :linked_dirs, %w{tmp/pids tmp/cache tmp/sockets log bin vendor/bundle public/system}
#just pointing to our unicorn.rb
set :unicorn_config_path, "config/unicorn.rb"
#capistrano tasks and processes
after "deploy", "deploy:cleanup"
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      invoke 'unicorn:restart'
    end
  end

  after :finishing, "deploy:cleanup"
end
I put my Capfile here because I noticed there is no log line for a Unicorn restart in my cap production deploy output. I am not sure if this helps.
I made sure the working_directory matches the root in the nginx default file.
I made sure the listen socket in unicorn.rb matches the upstream app server unix: path in the default file.
I made sure the nginx.conf file includes the default config in sites-enabled.
Well, this is 6 months old, but I'm going to answer it anyway. The issue is the proxy_pass in @app in sites-enabled/default. It's trying to pass to the upstream http://app, but you don't have an upstream with that name; you named it app_server.
You need to rename:
proxy_pass http://app;
to:
proxy_pass http://app_server;
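This kind of name mismatch between an upstream block and a proxy_pass target is easy to lint for mechanically. A rough sketch (regex-based, not a real nginx parser, so it only handles simple configs like the one in the question):

```python
import re

def undefined_upstreams(conf: str) -> set:
    """Return proxy_pass target names with no matching upstream block."""
    upstreams = set(re.findall(r"\bupstream\s+(\S+)\s*{", conf))
    passes = set(re.findall(r"\bproxy_pass\s+https?://([\w.-]+)", conf))
    # Names containing a dot are treated as real hostnames, not upstreams.
    return {p for p in passes if "." not in p and p not in upstreams}

conf = """
upstream app_server { server unix:/tmp/unicorn.sock; }
server { location @app { proxy_pass http://app; } }
"""
print(undefined_upstreams(conf))  # the question's bug: {'app'}
```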