nginx not loading my project from another server - django

I have three servers: 10.77.12.54, 10.77.12.55, and 10.77.12.56. My application runs on .54, Postgres on .55, and Nginx on .56. I created a gunicorn.socket file and a gunicorn.service file, added the app's IP to the Nginx config, and created the sites-available and sites-enabled entries, but Nginx is not fetching my application. Is there anything else I have to do?
server {
listen 80;
server_name 10.77.12.54;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
proxy_cookie_path / "/; Secure";
access_log /var/log/nginx/access_rkvy.log main buffer=32k;
location / {
access_log /var/log/nginx/app54log.log main;
proxy_pass http://loadblncer;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
proxy_set_header X-Forwarded-For $http_x_forwarded_for;
}
}
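Note that proxy_pass http://loadblncer; refers to an upstream group that is not shown in the config above; nginx will try to resolve "loadblncer" as a hostname and refuse to start unless a matching upstream block (or resolvable host) exists. A minimal sketch of what that block might look like, assuming gunicorn on 10.77.12.54 is bound to a TCP port such as 8000 that is reachable from the Nginx server (a unix socket on .54 would not be reachable from nginx running on .56); the port is an assumption, adjust it to whatever gunicorn actually binds:
# Hypothetical upstream for the proxy_pass above; the name must match exactly.
# The address/port of the Django/gunicorn app on 10.77.12.54 is assumed.
upstream loadblncer {
    server 10.77.12.54:8000;
}
After editing, the site config still has to be linked into sites-enabled and nginx reloaded (nginx -t, then a reload) for the change to take effect.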

Related

NGINX filter requests to lan IP

I have received several requests via NGINX that appear to be to my LAN IP 192.168.0.1 as follows:
nginx.vhost.access.log:
192.227.134.73 - - [29/Jul/2021:10:33:47 +0000] "POST /GponForm/diag_Form?style/ HTTP/1.1" 400 154 "-" "curl/7.3.2"
and from Django:
Invalid HTTP_HOST header: '192.168.0.1:443'. You may need to add '192.168.0.1' to ALLOWED_HOSTS.
My NGINX configuration is as follows:
upstream django_server {
server 127.0.0.1:8000;
}
# Catch all requests with an invalid HOST header
server {
server_name "";
listen 80;
return 301 https://backoffice.example.com$request_uri;
}
server {
listen 80;
# Redirect www to https
server_name www.backoffice.example.com;
modsecurity on;
modsecurity_rules_file /some_directory/nginx/modsec/modsec_includes.conf;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains;" always;
add_header X-Frame-Options "deny" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
#add_header Content-Security-Policy "script-src 'self' https://example.com https://backoffice.example.com https://fonts.gstatic.com https://code.jquery.com";
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
return 301 https://backoffice.example.com$request_uri;
}
server {
listen 443 ssl http2;
server_name www.backoffice.example.com backoffice.example.com;
modsecurity on;
modsecurity_rules_file /some_directory/nginx/modsec/modsec_includes.conf;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains;" always;
add_header X-Frame-Options "deny" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
#add_header Content-Security-Policy "script-src 'self' https://example.com https://backoffice.example.com https://fonts.gstatic.com https://code.jquery.com";
add_header Referrer-Policy "strict-origin-when-cross-origin";
ssl_certificate /etc/ssl/nginx-ssl/backofficebundle.crt;
ssl_certificate_key /etc/ssl/nginx-ssl/backoffice.key;
access_log /some_directory/nginx/nginx.vhost.access.log;
error_log /some_directory/nginx/nginx.vhost.error.log;
location / {
proxy_pass http://localhost:8000;
proxy_pass_header Server;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header REMOTE_ADDR $remote_addr;
}
location /media/ {
alias /some_directory/backoffice/media/;
}
location /static/ {
alias /some_directory/backoffice/static/;
}
}
My questions:
Is there any way of configuring NGINX to block requests to all LAN IP's?
Can this be done better by ModSecurity?
Is there any way of configuring NGINX to block requests to all LAN IP's?
There is, just make nginx listen only on the public IP (e.g. listen backoffice.example.com:443 ssl http2;). Although I have no idea why you'd want this…
Because if it's an internal IP, it cannot be accessed externally by definition – otherwise you wouldn't call it internal. If it could be, you'd have a problem with your network/firewall rather than with nginx.
Regarding the nginx access log, I cannot spot any problem. 192.227.134.73 is not a private IP.
Regarding the Django log, curl -H "Host: 192.168.0.1:443" https://backoffice.example.com would have caused such a request. The "Host" header is just a header after all that can contain anything.
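If you do want nginx itself to drop requests that arrive with an arbitrary Host header (like the 192.168.0.1:443 one Django logged), the usual approach is a catch-all default server on port 443 as well; the existing catch-all only covers port 80. A minimal sketch, reusing the certificate paths from the config above (nginx needs a certificate to complete the TLS handshake before it can see the Host header); return 444 closes the connection without sending a response:
# Catch-all for HTTPS requests whose Host matches no other server_name.
# The certificate paths are reused from the server block above.
server {
    listen 443 ssl default_server;
    server_name "";
    ssl_certificate /etc/ssl/nginx-ssl/backofficebundle.crt;
    ssl_certificate_key /etc/ssl/nginx-ssl/backoffice.key;
    return 444;
}
On newer nginx versions (1.19.4+) you could use ssl_reject_handshake on; in this block instead of supplying a certificate.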

Nginx/Daphne/WebSocket connection failed: Error during WebSocket handshake: Unexpected response code: 404

I am using Ubuntu 18.04.3 LTS, Django 3.0.0, Python 3.6, Nginx, Daphne, Docker, and Channels for a chat project on an AWS EC2 instance.
I started the project following the Channels tutorial, and I'm trying to establish a WebSocket connection over the wss protocol. Everything works great when the site is served over http and the WebSocket uses ws, but it fails when the site is served over https and the WebSocket uses wss.
The error is below.
(index):16 WebSocket connection to 'wss://mydomain/ws/chat/1/' failed: Error during WebSocket handshake: Unexpected response code: 404
I'm running my Django ASGI app with Daphne, using channels-redis as the channel layer, and running Redis via Docker. Here's how I run my app server:
daphne -u /home/ubuntu/virtualenvs/src/werewolves/werewolves.sock werewolves.asgi:application
sudo docker run --restart unless-stopped -p 6379:6379 -d redis:2.8
My CHANNEL_LAYERS setting in the Django project's settings.py:
#settings.py
...
CHANNEL_LAYERS = {
'default': {
'BACKEND': 'channels_redis.core.RedisChannelLayer',
'CONFIG': {
"hosts": ["127.0.0.1",6379],
"symmetric_encryption_keys": ["mykey"],
},
}
}
...
Here's my nginx setting:
upstream websocket {
server unix:/home/ubuntu/virtualenvs/src/werewolves/werewolves.sock;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name mydomain;
location = /favicon.ico { access_log off; log_not_found off; }
location ^~ /static/ {
autoindex on;
alias /home/ubuntu/virtualenvs/static/static-only/; #STATIC_ROOT
}
# SSL settings
ssl on;
listen 443 ssl http2;
listen [::]:443 ssl http2 default_server;
ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:1m;
location / {
# proxy setting for django using wsgi
include proxy_params;
#proxy_pass 0.0.0.0:8000;
proxy_pass http://websocket;
# CORS config. settings for using AWS S3 serving static file
set $origin '*'; #origin url;
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' $origin;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
#
# Custom headers and headers various browsers *should* be OK
# with but aren't
#
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Co$
#
# Tell client that this pre-flight info is valid for 20 days
#
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' $origin;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Co$
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' $origin;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Co$
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
}
location ^~ /ws/ {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_pass http://websocket;
}
location ~* \.(js|css)$ {
expires -1;
}
}
Here's my routing.py.
# mysite/routing.py
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import gameroom.routing
application = ProtocolTypeRouter({
# (http->django views is added by default)
'websocket': AuthMiddlewareStack(
URLRouter(
gameroom.routing.websocket_urlpatterns
)
),
})
# chat/routing.py
from django.urls import re_path
from . import consumers
websocket_urlpatterns = [
re_path(r'^ws/chat/(?P<room_name>\w+)/$', consumers.ChatConsumer),
]
And here's how I connect from the app.
This JS WebSocket call works when the page is loaded from http://mydomain/chat/1/:
var chatSocket = new WebSocket('ws://' + window.location.host + '/ws/chat/1/');
The problem is that this JS WebSocket call doesn't work when the page is loaded from https://mydomain/chat/1/:
var chatSocket = new WebSocket('wss://' + window.location.host + '/ws/chat/1/');
The error message from browser is:
(index):16 WebSocket connection to 'wss://mydomain/ws/chat/1/' failed: Error during WebSocket handshake: Unexpected response code: 404
Daphne logs the following:
None - - [23/Dec/2019:10:07:05] "GET /chat/1/" 200 1413
Not Found: /ws/chat/1/
2019-12-23 10:07:06,504 WARNING Not Found: /ws/chat/1/
None - - [23/Dec/2019:10:07:06] "GET /ws/chat/1/" 404 2083
How should I modify my Nginx setting?
By the way, I don't have an ELB (load balancer) in front of my AWS EC2 instance.
Example nginx config:
upstream channels-backend {
server 0.0.0.0:8099;
}
server {
listen 80;
server_name {domain};
return 301 https://$host$request_uri;
}
server {
proxy_connect_timeout 220s;
proxy_read_timeout 220s;
client_max_body_size 4G;
listen 443 ssl;
server_name {domain};
ssl_certificate {cert-path};
ssl_certificate_key {cert-path};
access_log /{log-path};
error_log /{log-path};
location /sockets {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_pass http://channels-backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
Daphne:
daphne -b 0.0.0.0 -p 8099 {your-app}.asgi:application
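Note that this example serves WebSockets under /sockets, while the question's routing.py and page JavaScript use the /ws/ prefix; presumably the location would be adjusted to match, for example:
# Adjusted to the /ws/ prefix used in routing.py and the page's JavaScript;
# assumes Daphne is listening on port 8099 as in the example above.
location /ws/ {
    proxy_pass http://channels-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}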

Django returns 404 when HTTPS with NGINX

I'm deploying a Django app on AWS. It worked fine over http, so I used Let's Encrypt to enable https. The site works and I can see my index page, but when I try to log in (a POST request), it returns a 404 error. I have no clue why.
This is my nginx configuration file:
upstream sample_project_server {
server unix:/home/ubuntu/django_env/run/gunicorn.sock fail_timeout=0;
}
server {
listen 443 ssl;
server_name mydomain.es www.mydomain.es;
ssl_certificate /etc/letsencrypt/live/mydomain.es/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.es/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
root /var/www/html;
index index.html index.htm;
server_name localhost;
client_max_body_size 4G;
access_log /home/ubuntu/logs/nginx-access.log;
error_log /home/ubuntu/logs/nginx-error.log;
location ~* \.(eot|otf|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}
location /static/ {
alias /home/ubuntu/static/;
}
location /media/ {
alias /home/ubuntu/media/;
}
location / {
proxy_pass http://localhost:3000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://sample_project_server;
break;
}
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
}
error_page 500 502 503 504 /500.html;
location = /500.html {
root /home/ubuntu/static/;
}
}
server {
listen 80;
server_name mydomain.es www.mydomain.es;
return 301 https://$host$request_uri;
}
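One detail worth flagging in the location / block of the HTTPS server above: it hard-codes proxy_set_header X-Forward-Proto http; even though that server block terminates TLS. When Django runs behind a TLS-terminating proxy, the original scheme is usually forwarded as X-Forwarded-Proto and paired with SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') in settings.py. This is a sketch of that common convention, not a confirmed fix for the 404:
# Inside the location / block: pass the real scheme so Django, configured with
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https'), treats the
# request as HTTPS (relevant for the secure cookies set in settings below).
proxy_set_header X-Forwarded-Proto $scheme;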
And this is part of my settings file in django:
CORS_ORIGIN_ALLOW_ALL = True
CORS_ALLOW_CREDENTIALS = True
CORS_ORIGIN_WHITELIST = (
'localhost:8080','localhost:8000',
)
CORS_ORIGIN_REGEX_WHITELIST = (
'localhost:8080','localhost:8000',
)
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
Many thanks!

Access-Control-Allow-Origin is not allowed by Access-Control-Allow-Headers

I have two separate servers: one runs nginx with Node, and the other runs Django with django-rest-framework for building the REST API. Nginx is responsible for the REST API requests, Node takes care of the client requests, and I use Polymer for the frontend. Below is a brief description:
machine one:
nginx: 192.168.239.149:8888 (API listening address), forwarding to 192.168.239.147:8080
node: 192.168.239.149:80 (client listening address)
machine two:
gunicorn: 192.168.239.147:8080 (listening address)
The process: when a request comes in, the Node server (192.168.239.149:80) responds with HTML; in that HTML an AJAX request calls the API server (nginx at 192.168.239.149:8888, which forwards to gunicorn at 192.168.239.147:8080), and gunicorn returns the result.
But there is a CORS problem. I have read a lot of articles and many people have hit the same question; I tried many methods, but nothing helped and I still get the error.
What I get is:
XMLHttpRequest cannot load http://192.168.239.149:8888/article/. Request header field Access-Control-Allow-Origin is not allowed by Access-Control-Allow-Headers.
What I did is:
core-ajax
<core-ajax auto headers='{"Access-Control-Allow-Origin":"*","X-Requested-With": "XMLHttpRequest"}' url="http://192.168.239.149:8888/article/" handleAs="json" response="{{response}}"></core-ajax>
nginx:
http {
include mime.types;
default_type application/octet-stream;
access_log /tmp/nginx.access.log;
sendfile on;
upstream realservers{
#server 192.168.239.140:8080;
#server 192.168.239.138:8000;
server 192.168.239.147:8080;
}
server {
listen 8888 default;
server_name example.com;
client_max_body_size 4G;
keepalive_timeout 5;
location / {
add_header Access-Control-Allow-Origin *;
try_files $uri $uri/index.html $uri.html @proxy_to_app;
}
location @proxy_to_app {
add_header Access-Control-Allow-Origin *;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
#proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
proxy_pass http://realservers;
}
}
}
node:
app.listen(80, function() {
console.log('server.js running');
});
gunicorn (Django REST framework view):
return Response(serializer.data,headers={'Access-Control-Allow-Origin':'*',
'Access-Control-Allow-Methods':'GET',
'Access-Control-Allow-Headers':'Access-Control-Allow-Origin, x-requested-with, content-type',
})
Because I don't have much experience with CORS and I want to understand it thoroughly, can anyone point out what I was doing wrong here? Thank you very much!
Wow, so excited, I solved this all by myself. What I did wrong here is that the request headers I sent were not included in the nginx config's add_header 'Access-Control-Allow-Headers' directive.
Complete nginx config:
http {
include mime.types;
default_type application/octet-stream;
access_log /tmp/nginx.access.log;
sendfile on;
upstream realservers{
#server 192.168.239.140:8080;
#server 192.168.239.138:8000;
server 192.168.239.147:8080;
}
server {
listen 8888 default;
server_name example.com;
client_max_body_size 4G;
keepalive_timeout 5;
location / {
add_header Access-Control-Allow-Origin *;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Access-Control-Allow-Origin,XMLHttpRequest,Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With';
try_files $uri $uri/index.html $uri.html @proxy_to_app;
}
location @proxy_to_app {
add_header Access-Control-Allow-Origin *;
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Access-Control-Allow-Origin,XMLHttpRequest,Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Mx-ReqToken,X-Requested-With';
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
#proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
proxy_pass http://realservers;
}
}
}
Because my request is:
<core-ajax auto headers='{"Access-Control-Allow-Origin":"*","X-Requested-With": "XMLHttpRequest"}' url="http://192.168.239.149:8888/article/" handleAs="json" response="{{response}}"></core-ajax>
I didn't include the Access-Control-Allow-Origin and XMLHttpRequest headers in the nginx config's Access-Control-Allow-Headers, so that was the problem.
I hope it's useful to anyone who has the same problem!
You do not have to include CORS headers in the request manually. The browser takes care of that; you just need to allow them on the API server.
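To make that concrete, here is a stripped-down sketch of the server-side allowance, using the realservers upstream from the config above (the header lists are illustrative, not a prescription): the client only sends ordinary headers such as X-Requested-With, and nginx advertises what it accepts.
location / {
    # Plain responses only need the origin allowance.
    add_header Access-Control-Allow-Origin *;

    # Preflight: advertise the accepted methods and request headers.
    if ($request_method = 'OPTIONS') {
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Methods 'GET, OPTIONS';
        add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
        return 204;
    }

    proxy_pass http://realservers;
}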

Getting error 502 when processing an excel file with many rows

Getting error 502 when processing an excel file with many rows.
Using Django / Nginx
The problem is not the size of the file; it is less than 1 MB.
The page works correctly with files of around 200 rows; the problem starts when the file has more rows and the page takes too long to process it.
This is the error:
2012/07/28 14:29:54 [error] 18515#0: *34 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "POST /import/ HTTP/1.1", upstream: "http://127.0.0.1:9000/import/", host: "localhost:8080", referrer: "http://localhost:8080/import/"
I am using very large values for the variables, but I keep getting the same error.
This is the configuration of the site:
upstream app_server {
server 127.0.0.1:9000 fail_timeout=3600s;
keepalive 3600s;
}
server {
listen 8080;
client_max_body_size 4G;
server_name localhost;
keepalive_timeout 3600s;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
location /static/ {
root /my path/;
autoindex on;
expires 7d;
}
location /media/ {
root /my path/;
autoindex on;
expires 7d;
}
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
if (!-f $request_filename) {
proxy_pass http://app_server;
break;
}
}
}
And this is the global configuration:
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 3600s;
tcp_nodelay on;
client_header_timeout 3600s;
client_body_timeout 3600s;
send_timeout 3600s;
proxy_connect_timeout 3600s;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
client_max_body_size 200m;
client_body_buffer_size 128k;
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Can you give me some help?
Best regards
The best option is to rewrite the routine to use django-celery, but if you want a quick solution you could try increasing the timeout for your proxy_pass in Nginx by adding:
proxy_connect_timeout 300s;
proxy_read_timeout 300s;
You should add this config in /etc/nginx/sites-available/[site-config] for a specific site, or in /etc/nginx/nginx.conf if you want to increase the timeout on all sites served by nginx.
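For example, scoped to just the import endpoint that appears in the error log above (the path and values here are illustrative):
# Longer upstream timeouts only for the slow import view.
location /import/ {
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;
    proxy_pass http://app_server;
}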
If you are using gunicorn, you must add --timeout 300 as well. Example:
gunicorn_django -D -b 127.0.0.1:8901 --workers=2 --pid=/var/webapp/campus.pid --settings=settings.production --timeout 300 --pythonpath=/var/webapp/campus/
References:
http://wiki.nginx.org/HttpProxyModule#proxy_connect_timeout
http://reinout.vanrees.org/weblog/2011/11/24/apache-nginx-gunicorn-timeout.html
Gunicorn Nginx timeout problem