I have a Django + Gunicorn + Nginx setup that is working without errors, but the nginx access log contains the following line every 5 seconds:
10.112.113.1 - - [09/Jan/2019:05:02:21 +0100] "HEAD / HTTP/1.1" 302 0 "-" "-"
There isn't much information in this log entry, but a 302 every 5 seconds has to be related to the nginx configuration, right?
My nginx configuration is as follows:
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
upstream app_server {
server unix:/path_to/gunicorn.sock fail_timeout=0;
}
server {
server_name example.com;
listen 80;
return 301 https://example.com$request_uri;
}
server {
listen 443;
listen [::]:443;
server_name example.com;
ssl on;
ssl_certificate /path/cert.crt;
ssl_certificate_key /path/cert.key;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /var/log/nginx/nginx-access.log;
error_log /var/log/nginx/nginx-error.log;
location /static/ {
alias /path_to/static/;
}
location /media/ {
alias /path_to/media/;
}
include /etc/nginx/mime.types;
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
}
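One way to narrow this down (my suggestion, not part of the original setup) is to temporarily log a few extra details for these probes: which host/port matched, whether the request reached the upstream, and where the 302 points. A sketch using standard nginx variables:

log_format probe_debug '$remote_addr [$time_local] "$request" $status '
                       'host=$host port=$server_port upstream=$upstream_addr '
                       'location=$sent_http_location';

# Enable alongside the existing access_log directives, then watch this file.
access_log /var/log/nginx/probe-debug.log probe_debug;

If upstream= stays empty (logged as "-"), the 302 is produced by nginx itself; otherwise it comes from the Django app behind Gunicorn.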
Related
Hi, I deployed my website and everything worked fine. Then I changed from http to https, and now I get a blank white page or a 502 Bad Gateway. I think the problem is in my nginx.conf.
I deploy my frontend and backend in the same task, in the same service, on AWS ECS.
Here are my ports:
HTTP -> 80
HTTPS -> 443
Client port -> 8080
Backend port -> 4000
This is my original nginx.conf, before I changed to https (this one worked):
worker_processes auto;
events {
worker_connections 60000;
multi_accept on;
use epoll;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
gzip_http_version 1.0;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $upstream_addr '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
server {
listen 8080;
server_name mydomain.com;
access_log /var/log/nginx/access.log compression;
root /usr/share/nginx/html;
index index.html index.htm;
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
}
location / {
try_files $uri $uri/ /index.html;
}
location /graphql {
proxy_pass http://localhost:4000/graphql;
}
location /subscriptions {
proxy_pass http://localhost:4000/subscriptions;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /refresh_token {
proxy_pass http://localhost:4000/refresh_token;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
}
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
location ~* \.(?:css|js)$ {
try_files $uri =404;
expires 1y;
access_log off;
add_header Cache-Control "public";
}
location ~ ^.+\..+$ {
try_files $uri =404;
}
location /static/ {
root /var/www;
}
}
}
I made many changes to my nginx.conf and nothing worked. I verified my domain with AWS.
Here are some of the changes I made:
server {
listen 80;
listen [::]:80;
server_name mydomain.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443;
listen [::]:443;
server_name mydomain.com;
access_log /var/log/nginx/access.log compression;
root /usr/share/nginx/html;
index index.html index.htm;
location / {
proxy_pass http://localhost:8080;
try_files $uri $uri/ /index.html;
}
}
I'm using a load balancer to terminate SSL.
I focused on the nginx.conf, but the conf was fine. The problem was the websocket URL I was using:
const host = window.location.host;
`ws://${host}/subscriptions`
so I added an s:
`wss://${host}/subscriptions`
For two days I tried everything in the nginx.conf, and all I had to do was add an s. I feel stupid for not checking the console for errors.
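As a side note on the configuration attempts above: with SSL terminated at the load balancer, the nginx inside the container does not need its own 80/443 listeners at all. A minimal sketch of that pattern, assuming the load balancer keeps forwarding to port 8080 as in the original, working config (the X-Forwarded-Proto pass-through is my assumption about what the ALB sends):

server {
    listen 8080;                 # the load balancer terminates TLS and forwards plain HTTP here
    server_name mydomain.com;

    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /graphql {
        proxy_pass http://localhost:4000/graphql;
        proxy_set_header Host $host;
        # tell the backend which scheme the client actually used
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}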
I want to serve HTTP/2 with the nginx image, but no matter how long I try, the protocol is still HTTP/1.1.
Dockerfile for nginx:
FROM nginx
COPY ./docker/nginx/etc/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/etc/nginx/conf.d/default.conf.https /etc/nginx/conf.d/default.conf
/etc/nginx/nginx.conf is:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
# run ulimit -n to check
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
# Buffer size for post submission
client_body_buffer_size 10k;
client_max_body_size 8m;
# Buffer size for header
client_header_buffer_size 1k;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf is:
# Expires map
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name 0.0.0.0;
ssl_certificate /etc/nginx/certs/server.crt;
ssl_certificate_key /etc/nginx/certs/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
expires $expires;
location = /favicon.ico {
log_not_found off;
}
location /static/ {
alias /static_files/;
}
location / {
access_log /var/log/nginx/wsgi.access.log;
error_log /var/log/nginx/wsgi.error_log warn;
proxy_pass http://app_wsgi:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /ws/ {
try_files $uri @proxy_to_ws;
}
location @proxy_to_ws {
access_log /var/log/nginx/asgi.access.log;
error_log /var/log/nginx/asgi.error_log warn;
proxy_pass http://app_asgi:8001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
}
Docker-compose file for nginx part:
nginx:
restart: always
build:
context: .
dockerfile: docker/nginx/Dockerfile.https
ports:
- 80:80
- 443:443
volumes:
- ./app/static:/static_files
- ./ssl/certs:/etc/nginx/certs
depends_on:
- app_wsgi
- app_asgi
Going inside the nginx container and running the nginx -V command:
root@0a15f404bf1d:/# nginx -V
nginx version: nginx/1.17.9
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.17.9/debian/debuild-base/nginx-1.17.9=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
Is there anything wrong with my settings? I checked in Chrome dev tools and saw that all the requests are still sent over HTTP/1.1.
My architecture is:
Nginx <-> gunicorn <-> Django application
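Two things worth checking here (my notes, not from the original post): nginx always speaks HTTP/1.x to the upstream it proxy_passes to, so the Nginx <-> gunicorn leg will never be HTTP/2; and the protocol the browser actually negotiated can be confirmed from the access log. A sketch extending the existing log_format with two standard variables:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                'proto=$server_protocol http2=$http2';   # $http2 is "h2" when HTTP/2 was negotiated, empty otherwise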
I had a similar issue: I was implementing a proxy pass and calling the nginx server, and I kept receiving status 426 until I set up the following configuration:
upstream mservername {
server my.example.domain:443;
keepalive 20;
}
server {
listen 8443 ssl http2;
server_name my.example.domain;
access_log /opt/bitnami/nginx/logs/access_my_example_domain.log;
error_log /opt/bitnami/nginx/logs/error_my_example_domain.log;
ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
ssl_protocols TLSv1.3 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location /resource {
http2_push_preload on;
proxy_ssl_session_reuse off;
proxy_ssl_server_name on;
proxy_ssl_name my.example.domain;
proxy_ssl_trusted_certificate /opt/bitnami/nginx/conf/bitnami/certs/my_example_domain/my_domain_cert.crt;
proxy_set_header content-type "application/xml";
proxy_set_header accept "application/xml";
proxy_hide_header X-Frame-Options;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass https://my.example.domain/resource;
}
}
Hope this helps. In my case it solved the issue.
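For readers who just want the essential part: the directives above that usually matter for a 426 (Upgrade Required) from an upstream are the proxied HTTP version and the Upgrade/Connection headers. A stripped-down sketch of only that piece:

location /resource {
    proxy_http_version 1.1;                    # proxy_pass defaults to HTTP/1.0
    proxy_set_header Upgrade $http_upgrade;    # forward protocol-upgrade requests
    proxy_set_header Connection "upgrade";
    proxy_pass https://my.example.domain/resource;
}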
I'm currently trying to deploy a Django app on a RHEL 7.4 server using Nginx. I've followed these tutorials:
https://simpleisbetterthancomplex.com/tutorial/2017/05/23/how-to-deploy-a-django-application-on-rhel.html
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
The virtualenv and the nginx server seem to be all right. However, I'm struggling with two errors:
Either I get a 500 error because of the worker_connections parameter value (logs below):
13494#0: *1021 1024 worker_connections are not enough while connecting to upstream, client: 192.168.1.33, server: 192.168.1.33, request: "GET /Syc/login HTTP/1.0", upstream: "http://192.168.1.33:80/Syc/login", host: "192.168.1.33"
Or I increase the worker_connections value to > 4096 and I get a 400 error, as in this thread: 400 Bad Request - request header or cookie too large.
Below are my nginx.conf and app.conf. Please let me know if there are configuration mistakes, and thanks in advance for any help.
nginx.conf:
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# set open fd limit to 30000
worker_rlimit_nofile 30000;
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
large_client_header_buffers 4 32k;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
app.conf:
upstream app_server {
server unix:/opt/sycoma/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
server_name 192.168.1.33; # <- insert the IP address/domain name here
large_client_header_buffers 4 16k;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /opt/sycoma/logs/nginx-access.log;
error_log /opt/sycoma/logs/nginx-error.log;
location /static/ {
alias /opt/sycoma/venv/Sycoma/Syc/static/;
}
location /media/ {
alias /opt/sycoma/venv/Sycoma/media/;
}
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://192.168.1.33;
}
}
Try to remove or comment out the line:
proxy_set_header Host $http_host;
or increase large_client_header_buffers.
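Applied to app.conf above, that suggestion would look roughly like this (a sketch that keeps everything else as posted):

server {
    listen 80;
    server_name 192.168.1.33;
    large_client_header_buffers 4 32k;        # raised from 16k, the alternative suggestion

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header Host $http_host;   # removed/commented out as suggested
        proxy_redirect off;
        proxy_pass http://192.168.1.33;
    }
}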
I upgraded Ubuntu, and after the upgrade example.com (name changed) stopped working and started giving the error below. I am using nginx (nginx/1.10.0) and Ubuntu (release 16.04). The whole setup is on AWS EC2.
502 Bad Gateway
Below are the contents of the nginx error log:
[error] 1201#1201: *27 connect() failed (111: Connection refused) while connecting to upstream, client: 171.76.106.51, server: example.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:3000/favicon.ico", host: "public IP", referrer: "https://publicIP/"
Below is the nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
nginx sites-enabled/default is a symlink to sites-available/default; below is its content:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
}
Another nginx conf file:
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
root /usr/share/nginx/html;
index index.html index.htm;
# Make site accessible from http://localhost/
server_name localhost;
location / {
proxy_pass http://localhost:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forward-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
When I try http://localhost:3000/ it says connection refused:
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:3000... failed: Connection refused.
When I do "wget www.google.com" it says connected.
Can you please help here?
It's my first time using django + nginx + gunicorn, and I can't make server_name work. With the following configs, I am able to see the django admin panel at localhost/admin. But should I be able to see the admin panel when I access local-example/admin as well?
Starting gunicorn:
gunicorn web_wsgi_local:application
2012-10-14 19:45:50 [16532] [INFO] Starting gunicorn 0.14.6
2012-10-14 19:45:50 [16532] [INFO] Listening at: http://127.0.0.1:8000 (16532)
2012-10-14 19:45:50 [16532] [INFO] Using worker: sync
2012-10-14 19:45:50 [16533] [INFO] Booting worker with pid: 16533
nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /Users/ruixia/www/x/project/logs/nginx_access.log main;
error_log /Users/ruixia/www/x/project/logs/nginx_error.log debug;
autoindex on;
sendfile on;
tcp_nopush on;
tcp_nodelay off;
gzip on;
include /usr/local/etc/nginx/sites-enabled/*;
}
sites-enabled/x config:
server {
listen 80;
server_name local-example;
root /Users/ruixia/www/x/project;
location /static/ {
alias /Users/ruixia/www/x/project/static/;
expires 30d;
}
location /media/ {
alias /Users/ruixia/www/x/project/media/;
}
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 10;
proxy_read_timeout 10;
proxy_pass http://localhost:8000/;
}
}
I solved it by adding local-example to /etc/hosts.
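For context: server_name is matched against the Host header of the incoming request, so the name also has to resolve to the machine running nginx before the browser ever sends it. A minimal sketch of the working pairing (the /etc/hosts entry, shown as a comment, is the piece that was missing):

# On the client machine, /etc/hosts:
#   127.0.0.1   local-example

server {
    listen 80;
    server_name local-example;              # matched against the request's Host header

    location / {
        proxy_set_header Host $http_host;   # keep the original Host for the app
        proxy_pass http://127.0.0.1:8000;   # gunicorn, per the startup log above
    }
}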