The issue I am having currently is that after I launch my project with Gunicorn and create a symbolic link from Nginx's sites-available directory to sites-enabled, followed by the shell command ... sudo service nginx reload ..., I get an HTML error page that says "Server Error (500)" when I attempt to connect locally, and the project isn't talking to the outside world at all, since my domain is a blank canvas. I'm at a loss as to whether the error lies in Nginx, Gunicorn, or Django. Any help, tips, or constructive criticism will be gladly welcomed.
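Before touching any config, it usually helps to establish which layer returns the 500. A minimal set of checks, using the port and log paths that appear later in this question (the commands themselves are standard, not taken from the original post):
curl -i http://127.0.0.1:8002/          # hit Gunicorn directly, bypassing Nginx
tail -n 50 /home/workarea/gart/gartp/gartp.log /home/workarea/gart/gartp/error.log
sudo tail -n 50 /var/log/nginx/guni-error.log
If the direct request to port 8002 already returns 500, the problem is on the Django/Gunicorn side (with DEBUG = False the traceback only shows up in the Gunicorn log); if it only fails through port 80, the problem is on the Nginx side.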
Project specifics:
This is being launched off a small PC I built at home as a way to test the waters for different web ideas I have; by testing their potential locally first, I can decide whether or not they're worth pursuing.
Python 3.4
Virtualenv latest
Django 1.8
Gunicorn latest
Nginx latest
Ubuntu server 14.04.4 LTS
DB type: SQLite3 (the project is very HTML-based)
settings.py
DEBUG = False
ALLOWED_HOSTS = ["*",]
wsgi.py
import os
from django.core.wsgi import get_wsgi_application
os.environ['DJANGO_SETTINGS_MODULE'] = "gartp.settings"
application = get_wsgi_application()
start_gunicorn.bash
This file has been made executable with ... chmod +x ... and is launched with ./start_gunicorn.bash
#!/bin/bash
set -e
LOGFILE=/home/workarea/gart/gartp/gartp.log
ERRORFILE=/home/workarea/gart/gartp/error.log
LOGDIR=$(dirname $LOGFILE)
NUM_WORKERS=4
#The below address:port info will be used later to configure Nginx with Gunicorn
ADDRESS=127.0.0.1:8002
# user/group to run as
#USER=your_unix_user
#GROUP=your_unix_group
cd /home/workarea/gart/gartp/
source /home/workarea/gart/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec /home/workarea/gart/bin/gunicorn -w $NUM_WORKERS --bind=$ADDRESS gartp.wsgi \
--log-level=debug \
--log-file=$LOGFILE 1>>$LOGFILE 2>>$ERRORFILE &
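A quick way to confirm the script actually left a Gunicorn master listening on the configured address (standard commands, not part of the original script):
./start_gunicorn.bash
ss -ltnp | grep 8002                    # is anything bound to 127.0.0.1:8002?
curl -i http://127.0.0.1:8002/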
nginx sites-available file ‘gartp’
linked with … sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled...
upstream app_server_djangoapp {
server localhost:8002 fail_timeout=0;
}
server {
#EC2 instance security group must be configured to accept http connections over Port 80
listen 80;
server_name mydomain.com; # changed for a touch of privacy
#server_name ec2;
access_log /var/log/nginx/guni-access.log;
error_log /var/log/nginx/guni-error.log info;
keepalive_timeout 5;
# path for static files
root /home/workarea/gart/gartp/task/static;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
}
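After linking the site file, it is worth syntax-checking the config and testing the proxy path end to end. A sketch, using the mydomain.com server_name from the block above:
sudo nginx -t                           # validate the configuration before reloading
sudo service nginx reload
curl -i -H "Host: mydomain.com" http://127.0.0.1/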
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 128;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/gartp;
}
Static IP configuration; the values are not my own but come from the example I followed. I should also state that I use a dedicated NIC on this PC.
/etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.1.128
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
/etc/resolvconf/resolv.conf.d/base
search (domain name)
nameserver 8.8.8.8
nameserver 8.8.4.4
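Once the interface is up, confirming the box is reachable on the LAN rules out the network layer. A sketch using the 192.168.1.128 address from the interfaces file above:
ip addr show eth0                       # is the static address actually assigned?
# from another machine on the same LAN:
curl -i http://192.168.1.128/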
Related
I have attached the two config files below; I am serving files from /var/www/html and proxying to localhost:3000.
Please help me figure out why Nginx is not serving anything when I hit the server's IP.
If there is a solution to this, let me know what changes I should make so that it works.
I have configured port 81 for this application.
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
sites-enabled file
server {
listen 81 default_server;
listen [::]:81 default_server;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.php;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?$args;
}
location /front/ {
proxy_pass http://localhost:3000/;
}
# pass PHP scripts to FastCGI server
#
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
# # With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
location ~ /\.ht {
deny all;
}
}
In your NGINX config you've set port 81 but you're trying to hit port 3000?
Other than that, verify:
Do the files work from within the server itself? Test with curl or wget (see the example commands below).
Make sure you've configured the security group to open the correct port for incoming traffic.
Make sure you're using the PUBLIC IP of your instance.
Make sure your instance is reachable from outside (it should be in a public subnet).
If it still doesn't work after all this, update the question with more details, including the exact error message.
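For example, a rough set of checks for the curl/wget suggestion above (PUBLIC_IP is a placeholder for the instance's public address; the paths are illustrative):
# on the server itself
curl -i http://127.0.0.1:81/
curl -i http://127.0.0.1:3000/          # is the app behind /front/ even up?
# from outside
curl -i http://PUBLIC_IP:81/
curl -i http://PUBLIC_IP:81/front/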
I was tasked with creating a Django-Gunicorn demo app. In this task, I need to be able to handle 500 concurrent login requests in 1 second.
I have to deploy the app in a VM with 2 GB RAM and 2 CPU cores (using Vagrant and VirtualBox, Ubuntu 16.04). I have already tried the following for deployment.
gunicorn --workers 5 --bind "0.0.0.0:8000" --worker-class "gevent" --keep-alive 5 project.wsgi
Using a JMeter test from the host machine, the run always takes around 7-10 seconds. Even if the login endpoint only returns an empty response without any database access, the time is almost the same. Can you tell me what's wrong with this?
I use the default settings at /etc/nginx/nginx.conf.
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And here is my reverse proxy settings I put in sites-available folder.
server {
listen 80;
location /static {
autoindex on;
alias /vagrant/static/;
}
location /media {
autoindex on;
alias /vagrant/uploads/;
}
location / {
proxy_redirect http://127.0.0.1:8000/ http://127.0.0.1:8080/;
proxy_pass http://127.0.0.1:8000;
}
}
Thanks
The short answer is that you are missing worker connections in Gunicorn, so it cannot handle more concurrent requests.
For 500 concurrent login requests, the number of concurrent connections the database can handle is also important. If the database cannot handle the load, you are going to fail too. If you're using PostgreSQL, you have to raise max_connections and use a connection pool.
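A minimal sketch of the kind of invocation being suggested; the connection count is illustrative, not a value from the question (--worker-connections only affects async worker classes such as gevent):
gunicorn project.wsgi \
--workers 5 \
--worker-class gevent \
--worker-connections 1000 \
--keep-alive 5 \
--bind 0.0.0.0:8000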
I have a Django, Nginx, Gunicorn, and MySQL on AWS.
Running a postback from Django that calls a stored procedure taking longer than 30 seconds to complete causes a return of "502 Bad Gateway" from nginx/1.4.6 (Ubuntu).
It sure looks like a timeout issue and that this post should resolve it.
But alas, it doesn't seem to be working.
Here is my gunicorn.conf file:
description "Gunicorn application server handling formManagement django app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid ubuntu
setgid www-data
chdir /home/ubuntu/AARC-ServiceManager/ServerSide/formManagement
exec ve/bin/gunicorn --timeout 300 --workers 3 --bind unix:/home/ubuntu/AARC-ServiceManager/ServerSide/formManagement/formManagement.sock formManagement.wsgi:application
And my Nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
# set client body size (max http request size) #
client_max_body_size 50M;
#upping the timeouts to allow time for the DB to return from a long running sproc
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Any thoughts?
UPDATE:
This is the error in the nginx error log:
[error] 14316#0: *13 upstream prematurely closed connection while reading response header from upstream ...
I found the resolution!
I was updating the wrong gunicorn.conf file.
I had saved the config file to my source control, and while on the server I was updating that copy.
However, the file I actually needed to change is at:
/etc/init/gunicorn.conf
... and I learned a lesson about having more than one config file on the server.
Thanks all who were offering help.
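For anyone hitting the same thing: after editing the Upstart job, the change only takes effect once the job is restarted, and the running command line confirms whether the new flag was picked up. A sketch, assuming the job is named gunicorn as /etc/init/gunicorn.conf implies:
sudo service gunicorn restart           # or: sudo restart gunicorn (Upstart)
ps aux | grep [g]unicorn                # check that --timeout 300 appears in the running process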
It's my first time setting up Nginx and Unicorn.
My capistrano deployment went through and everything succeeded.
here is my unicorn.rb
#app_dir = File.expand_path('../../', __FILE__)
#shared_dir = File.expand_path('../../../shared/', __FILE__)
preload_app true
worker_processes 4
timeout 30
working_directory "home/deploy/appname"
shared_dir = "home/deploy/appname/shared"
# Set up socket location
# by default unicorn listens on 8080
listen "#{shared_dir}/tmp/sockets/unicorn.sock", :backlog => 64
# Logging
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
# Set master PID location
pid "#{shared_dir}/tmp/pids/unicorn.pid"
#must set preload app true to use before/after fork
before_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
#before forking, this is supposed to kill the master process that belongs to the oldbin
#enables 0 downtime to deploy
old_pid = "#{shared_dir}/tmp/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end
# before_exec do |server|
# ENV['BUNDLE_GEMFILE'] = "#{app_dir}/Gemfile"
# end
my nginx conf at /etc/nginx/nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
my default file at /etc/nginx/sites-enabled/default
upstream app_server {
#path to unicorn sock file, as defined previously
server unix:/home/deploy/appname/shared/tmp/sockets/unicorn.sock fail_timeout=0;
}
server {
listen 80;
root /home/deploy/appname;
try_files $uri/index.html $uri @app;
#click tracking
access_log /var/log/nginx/appname_access.log combined;
error_log /var/log/nginx/appname_error.log;
location @app {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
when I do this
deploy#localhost:~$ sudo nginx -s reload
nginx: [emerg] host not found in upstream "app" in /etc/nginx/sites-enabled/default:46
When I head into
/shared/tmp/sockets
I don't have a file in there. I don't think I should create it manually. I am using Capistrano 3. Am I supposed to generate this file?
I am using
require 'capistrano3/unicorn' #in capfile
in deploy.rb
symbolic files and directories
set :linked_files, %w{config/database.yml config/secrets.yml}
set :linked_dirs, %w{tmp/pids tmp/cache tmp/sockets log bin vendor/bundle public/system}
#just pointing to our unicorn.rb
set :unicorn_config_path, "config/unicorn.rb"
#capistrano tasks and processes
after "deploy", "deploy:cleanup"
namespace :deploy do
desc 'Restart application'
task :restart do
on roles(:app), in: :sequence, wait: 5 do
invoke 'unicorn:restart'
end
end
after :finishing, "deploy:cleanup"
end
I put my Capfile settings here because I noticed there is no log output for a Unicorn restart in my cap production deploy log. I am not sure if this helps.
I made sure the working_directory matches the root in the default Nginx site file.
I made sure the listen socket in unicorn.rb matches the upstream app server unix: path in the default file.
I made sure the nginx.conf file includes the default config file in sites-enabled.
Well, this is 6 months old, but I'm going to answer it anyway. The issue is the proxy_pass inside @app in sites-enabled/default. It's trying to pass to the upstream server http://app, but you don't have an upstream with that name; you have it named app_server.
You need to rename:
proxy_pass http://app
to:
proxy_pass http://app_server
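After renaming it, validating and reloading Nginx confirms the fix (standard commands, not from the original answer):
sudo nginx -t && sudo nginx -s reload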
We are having trouble with uploads to our site running Django and Gunicorn behind Nginx. We also have a GlusterFS mount on the app server where the uploaded files are distributed and replicated across several servers. (All tiers are on AWS.)
When we go to upload a file (~15 MB), we get a 502 Bad Gateway. We also checked the Nginx logs, which show upstream prematurely closed connection while reading response header from upstream, client .... Our upload speeds to the site are extremely slow (< 5 KB/s), yet we can upload to other sites just fine, and our connection manages around 10 MB upstream with anything else.
Is there any configuration setting we are missing that would allow uploads of a file this size through Gunicorn or Nginx?
nginx.conf
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_names_hash_bucket_size 256;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
conf.d files (two separate files under /etc/nginx/conf.d/):
client_max_body_size 256m;

proxy_read_timeout 10m;
proxy_buffering off;
send_timeout 5m;
We have a feeling that it may be either nginx or the gluster mount. We have been working on this for days, and have looked all through the timeout* variables in nginx and gunicorn and haven't made any progress.
Any help would be appreciated, Thank you!
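One way to split a problem like this, sketched with illustrative paths and an assumed upload endpoint (neither comes from the question): push a file straight to Gunicorn to take Nginx out of the picture, and measure raw write throughput on the Gluster mount.
# upload directly to Gunicorn, bypassing Nginx (URL and bind port are assumptions)
curl -o /dev/null -w "%{time_total}s\n" -F "file=@test-15mb.bin" http://127.0.0.1:8000/upload/
# raw write speed on the Gluster mount (mount path is an assumption)
dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=100 conv=fsync
If the direct upload is fast and the dd write is fast, the bottleneck is in front of the app server rather than in Gunicorn or Gluster.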
So, we solved the problem. It had nothing to do with any of our code, our server setup, or Amazon. We narrowed it down to the fact that only Linux machines uploading from our network were affected: there was a bug with TCP window scaling in the firewall that was resetting the upload once it reached a certain size.
Thanks to everyone who tried to help.