ERROR: Invalid HTTP_HOST header: '/webapps/../gunicorn.sock' - django

After I deployed my Django app last night, I got tons of strange emails saying:
ERROR: Invalid HTTP_HOST header: '/webapps/example_com/run/gunicorn.sock'
I'm sure this is somehow related to the following nginx config:
upstream example_app_server {
    server unix:/webapps/example_com/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    client_max_body_size 4G;

    access_log /webapps/example_com/logs/nginx-access.log;
    error_log /webapps/example_com/logs/nginx-error.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://example_app_server;
            break;
        }
    }
}

I found the answer to my question in a Django bug report.
proxy_set_header Host $http_host;
has to be replaced with:
proxy_set_header Host $host;
so that nginx passes the correct Host header on to Django; from then on, the Django alerts show the page that was actually requested instead of the gunicorn socket path.

Someone else explains in a bit more detail what is going on, based on this very same post. Here's their explanation:
...when a request is made to the server and the HTTP Host header is empty, nginx sets the Host to the gunicorn socket path.
I can generate this error using curl:
curl -H "HOST:" MY_DOMAIN_NAME -0 -v
This sends a request without an HTTP Host header. The -0 causes curl to use HTTP version 1.0. If you do not set this, the request will use HTTP version 1.1, which will cause the request to be rejected immediately and not generate the error.
The solution is to replace $http_host with $host (as pointed out on Stack Overflow). When the HTTP Host header is missing, $host takes on the value of the server_name directive. This is a valid domain name and is the one that should be used.
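Putting the fix into the location block from the config above, the result looks something like this (a sketch based on the original config; only the Host line changes):
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # $host falls back to the server_name when the request carries no Host header,
    # so Django no longer sees the gunicorn socket path as the host
    proxy_set_header Host $host;
    proxy_redirect off;

    if (!-f $request_filename) {
        proxy_pass http://example_app_server;
        break;
    }
}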

Alternatively, add this to your settings.py file (it monkey-patches get_host so Django skips host header validation entirely):
# bypass Django's Host header validation
from django.http.request import HttpRequest
HttpRequest.get_host = HttpRequest._get_raw_host

Related

docker + nginx http requests not working in browsers

I have an AWS EC2 instance running Linux, with Docker containers running gunicorn/Django and an nginx reverse proxy.
I don't want it to redirect to https at the moment.
When I try to reach the url by typing out http://url.com in the browser it seems to automatically change to https://url.com and gives me ERR_CONNECTION_REFUSED. The request doesn't show up at all in the nginx access_log.
But when I try to reach it with curl I get a normal response and it does show up in the nginx access_log.
I have ascertained that the Django security middleware is not the cause, as the HSTS options are disabled.
I've tried clearing the browser cache and deleting the domain from Chrome's security policies.
nginx config:
upstream django_server {
    server app:8001 fail_timeout=0;
}

server {
    listen 80;
    server_name url.com www.url.com;
    client_max_body_size 4G;
    charset utf-8;
    keepalive_timeout 5;

    location /static/ {
        root /usr/share/nginx/sdev/;
        expires 30d;
    }

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_pass http://django_server;
    }
}
What am I overlooking?

Does Django Channels use the ws:// protocol prefix to route between Django views and the Channels app?

I am running a Django + Channels server using Daphne. The Daphne server is behind nginx. My nginx config is given at the end.
When I try to connect to ws://example.com/ws/endpoint I get a NOT FOUND /ws/endpoint error.
To me, it looks like Daphne is using the protocol to route to either a Django view or the Channels app: if it sees http it routes to a Django view, and when it sees ws it routes to the Channels app.
With the following nginx proxy_pass configuration the URL always has the http protocol prefix, so I am getting 404 / NOT FOUND in the logs. If I change the proxy_pass prefix to ws, the nginx config fails.
What is the ideal way to set up Channels in this scenario?
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        # prevents 502 bad gateway error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;

        # redirect all HTTP traffic to localhost:8000
        proxy_pass http://0.0.0.0:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-NginX-Proxy true;

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 999999999;
    }
}
Yes, as suspected in the question, Channels detects the route based on the protocol, ws or http/https.
Using a ws prefix in proxy_pass http://0.0.0.0:8000/; is not possible. To forward the protocol information, the following directive should be included:
proxy_set_header X-Forwarded-Proto $scheme;
This forwards the scheme/protocol (ws) information to the Channels app, and Channels routes according to it.
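For reference, a location block that keeps the question's settings and only adds the forwarded protocol might look like this (a sketch, still assuming Daphne listens on 0.0.0.0:8000 as above):
location / {
    proxy_pass http://0.0.0.0:8000/;

    # forward the original scheme to the Channels app, as described above
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # WebSocket upgrade support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}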

Nginx reverse proxy configuration for subdomain with multiple paths

I have a situation here with my nginx reverse proxy configuration. My distribution is Ubuntu 14.04.
I have a domain, let's call it foo.bar.net, and I want the /grafana endpoint to proxy to my Grafana server (localhost:3000), the /sentry endpoint to proxy to my Sentry server (localhost:9000), and finally the /private endpoint to proxy to my Django server (localhost:8001). I am using gunicorn for the tunneling between Django and nginx.
Here is what I tried:
server {
    # listen on port 80
    listen 80 default_server;
    # for requests to these domains
    server_name foo.bar.net;

    location /sentry {
        # keep logs in these files
        access_log /var/log/nginx/sentry.access.log;
        error_log /var/log/nginx/sentry.error.log;

        # You need this to allow users to upload large files
        # See http://wiki.nginx.org/HttpCoreModule#client_max_body_size
        # I'm not sure where it goes, so I put it in twice. It works.
        client_max_body_size 0;

        proxy_pass http://localhost:9000;
        proxy_redirect off;
        proxy_read_timeout 5m;
        allow 0.0.0.0;

        # make sure these HTTP headers are set properly
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /grafana {
        proxy_pass http://localhost:3000;
        proxy_redirect off;
        proxy_read_timeout 5m;
        allow 0.0.0.0;

        # make sure these HTTP headers are set properly
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /private {
        proxy_pass http://127.0.0.1:8001;
    }

    location /private/static/ {
        autoindex on;
        alias /home/user/folder/private/static/;
    }
}
The server won't even start correctly; the config is not loading.
I would also like the / path to redirect to the private endpoint, if possible.
Additionally, I am not even sure where to put this configuration (sites-available/??).
Can anyone help me with that?
Thanks a lot,
There are some missing semicolons and other syntax errors. Look at the main nginx error log for details and fix them one by one.
Where to put that config file depends on your distribution. On some distributions it belongs in the sites-available directory, with a symlink to it inside the sites-enabled directory for quick enabling and disabling of sites; if you don't have sites-available and sites-enabled directories, put it into the conf.d directory of your distribution.
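As for sending the bare / path to the private endpoint, one option (a sketch, assuming the Django app really is the one serving /private on 127.0.0.1:8001 as above) is an extra exact-match location that issues a redirect:
location = / {
    # redirect the bare root to the app mounted under /private
    return 302 /private/;
}
On Ubuntu 14.04 the file itself usually lives under /etc/nginx/sites-available/ and is enabled with a symlink from /etc/nginx/sites-enabled/, as mentioned above.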

Django @login_required dropping https

I'm trying to test my Django app locally using SSL. I have a view with the @login_required decorator. So when I hit /locker, I get redirected to /locker/login?next=/locker. This works fine with http.
However, whenever I use https, the redirect somehow drops the secure connection, so I get something like https://cumulus.dev/locker -> http://cumulus.dev/locker/login?next=/locker
If I go directly to https://cumulus.dev/locker/login?next=locker the page opens fine over a secure connection. But once I enter the username and password, I go back to http://cumulus.dev/locker.
I'm using nginx to handle the SSL, which then talks to runserver. My nginx config is:
upstream app_server_djangoapp {
    server localhost:8000 fail_timeout=0;
}

server {
    listen 80;
    server_name cumulus.dev;

    access_log /var/log/nginx/cumulus-dev-access.log;
    error_log /var/log/nginx/cumulus-dev-error.log info;

    keepalive_timeout 5;

    # path for static files
    root /home/gaurav/www/Cumulus/cumulus_lightbox/static;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server_djangoapp;
            break;
        }
    }
}

server {
    listen 443;
    server_name cumulus.dev;

    ssl on;
    ssl_certificate /etc/ssl/cacert-cumulus.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;

    access_log /var/log/nginx/cumulus-dev-access.log;
    error_log /var/log/nginx/cumulus-dev-error.log info;

    keepalive_timeout 5;

    # path for static files
    root /home/gaurav/www/Cumulus/cumulus_lightbox/static;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server_djangoapp;
            break;
        }
    }
}
Django is running on plain HTTP only behind the proxy, so it will always use that to construct absolute URLs (such as redirects), unless you tell it how to detect that the proxied request was originally made over HTTPS.
As of Django 1.4, you can do this using the SECURE_PROXY_SSL_HEADER setting. When Django sees the configured header, it will treat the request as HTTPS instead of HTTP: request.is_secure() will return True, https:// URLs will be generated, and so on.
However, note the security warnings in the documentation: you must ensure that the proxy replaces or strips the trusted header from all incoming client requests, both HTTP and HTTPS. Your nginx configuration above does not do that with X-Forwarded-Ssl, making it spoofable.
A conventional solution to this is to set X-Forwarded-Protocol to http or https, as appropriate, in each of your proxy configurations. Then, you can configure Django to look for it using:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
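On the nginx side that could look like the following sketch: one line added inside each of the two location / blocks above, so the header is always set by the proxy and cannot be spoofed by clients (the header name is arbitrary; it just has to match what SECURE_PROXY_SSL_HEADER expects):
# in the listen 80 server block: the proxied request came in over plain HTTP
proxy_set_header X-Forwarded-Protocol http;

# in the listen 443 server block: the proxied request came in over HTTPS
proxy_set_header X-Forwarded-Protocol https;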

Nginx + Gunicorn POST request error

I'm using nginx as a proxy for a Django application that uses gunicorn. The problem is that at some point I receive a POST request from another site.
The problem seems to be that nginx does not forward the POST request properly to the gunicorn daemon.
What can I do to fix this? What I need is to be able to send the POST request, as it arrives, to the gunicorn daemon for my Django app to process it... thank you...
This is my nginx config:
server {
    server_name www.rinconcolombia.com;
    access_log /var/log/nginx/rinconcolombia.log;

    location / {
        ssi on;
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        autoindex on;
        root /home/rincon/sites/rinconcolombia/checkouts/rinconcolombia/;
    }

    location /static/admin_media/ {
        autoindex on;
        root /home/rincon/sites/rinconcolombia/checkouts/rinconcolombia/;
    }
}

server {
    server_name www.rinconcolombia.com;
    rewrite ^(.*) http://www.rinconcolombia.com$1;
}
UPDATE: The app sending the POST is receiving a BAD REQUEST error... if I manually make a POST with resty or curl, it does pass the POST message to my server...
Your nginx configuration is slightly wrong, as you're missing the fail_timeout bits. See the gunicorn/nginx example here: https://github.com/benoitc/gunicorn/blob/master/examples/nginx.conf
Specifically lines 58 and 115.
If that doesn't help, do you get anything in the nginx error.log?
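For illustration, a sketch of that pattern applied to the config above (assuming gunicorn really is listening on 127.0.0.1:8888) declares an upstream with fail_timeout and proxies to it by name:
upstream app_server {
    # fail_timeout=0 means nginx keeps retrying this backend rather than
    # marking it as unavailable, as in the gunicorn example config
    server 127.0.0.1:8888 fail_timeout=0;
}

# and inside the location / block:
proxy_pass http://app_server;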