My NGINX proxy is passing plain HTTP GET requests instead of WebSocket handshakes to my Django application.
Facts:
The rest of the (non-WebSocket) proxying to the Django app works great.
I can get WebSockets to work if I connect to the Django application container directly. (Relevant log entries below.)
The nginx configuration works on localhost on my development machine (no containers). (Log example below.)
Relevant Logs:
Daphne log when connecting through containerized nginx proxy:
`xxx.xxx.xxx.xxx:40214 - - [24/May/2017:19:16:03] "GET /flight/all_flight_updates" 404 99`
Daphne log when bypassing the containerized proxy and connecting directly to the server:
`xxx.xxx.xxx.xxx:6566 - - [24/May/2017:19:17:02] "WSCONNECTING /flight/all_flight_updates" - -`
`xxx.xxx.xxx.xxx:6566 - - [24/May/2017:19:17:02] "WSCONNECT /flight/all_flight_updates" - -`
Localhost testing of the non-containerized nginx configuration works:
`[2017/05/24 14:24:19] WebSocket HANDSHAKING /flight/all_flight_updates [127.0.0.1:65100]`
`[2017/05/24 14:24:19] WebSocket CONNECT /flight/all_flight_updates [127.0.0.1:65100]`
Configuration files:
My docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
  redis:
    image: redis:alpine
  web:
    image: nginx
    ports:
      - '80:80'
    volumes:
      - ./deploy/proxy.template:/etc/nginx/conf.d/proxy.template
    links:
      - cdn
      - app
    command: /bin/bash -c "envsubst '' < /etc/nginx/conf.d/proxy.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
  cdn:
    image: nginx
    volumes:
      - ./cdn_static:/usr/share/nginx/static
      - ./deploy/cdn.template:/etc/nginx/conf.d/cdn.template
    command: /bin/bash -c "envsubst '' < /etc/nginx/conf.d/cdn.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
  app:
    build: .
    image: app
    ports:
      - '8000:8000'
    links:
      - redis
      - db
    volumes:
      - ./cdn_static:/var/static
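A note on the envsubst '' idiom in the commands above: given an empty variable list, envsubst substitutes nothing and simply copies the template, which conveniently leaves nginx's own variables like $host untouched. A quick shell illustration (the variable name is made up):

export NGINX_PORT=80
echo 'listen $NGINX_PORT; proxy_set_header Host $host;' | envsubst ''              # substitutes nothing, plain copy
echo 'listen $NGINX_PORT; proxy_set_header Host $host;' | envsubst '$NGINX_PORT'   # replaces only $NGINX_PORT, keeps $host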
My proxy.template NGINX configuration template:
upstream cdn_proxy {
    server cdn:80;
}

upstream daphne {
    server app:8000;
    keepalive 100;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    location /static {
        proxy_pass http://cdn_proxy;
    }

    location / {
        proxy_buffering off;
        proxy_pass http://daphne;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
UPDATE
I have built a more compact reproduction of the problem using the tutorial on the NGINX website and put it on GitHub at https://github.com/c0yote/nginx-websocket-issue.
You get a 426 instead of a 404, but I believe that's because the simple server doesn't know how to handle the plain GET that NGINX is sending. I'm reinforced in this view by the fact that if you issue a GET directly against port 8000 (from a browser, for example) you get the same 426.
Therefore the core problem is still that NGINX is forwarding a plain GET without the WebSocket upgrade headers.
MORE INFO:
tcpdump shows that the GET sent directly to the WebSocket server has an Upgrade header, but the GET that arrives via NGINX does not. This is confusing, since the wscat command is identical except for the target port.
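For anyone retracing this, the comparison that surfaced the difference looks roughly like the following (hypothetical hostname; wscat is the small WebSocket client from npm):

# watch the handshake headers arriving at each port
sudo tcpdump -A -s 0 'tcp port 80'    # traffic reaching the nginx proxy
sudo tcpdump -A -s 0 'tcp port 8000'  # traffic reaching daphne directly

# identical client commands except for the target port
wscat -c ws://example.com/flight/all_flight_updates
wscat -c ws://example.com:8000/flight/all_flight_updates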
GIGA UPDATE:
If I move the NGINX proxy off port 80 to, say, 8080, it works. My only guess was that the JS client makes some assumption about port 80. If anyone knows why this is the case, I'd love to know.
It was my organization's firewall.
It was stripping the Connection/Upgrade headers out of the GET request on port 80. When I changed to a different port, it worked fine.
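In hindsight, a quick way to check for this kind of header-stripping middlebox is to hand-roll the handshake with curl (hypothetical hostname; the key is just any base64-encoded 16-byte nonce):

curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://example.com/flight/all_flight_updates
# a clean path answers "HTTP/1.1 101 Switching Protocols";
# if an intermediary strips the Upgrade header you get a plain HTTP error instead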
Related
I can access the Django admin by redirecting traffic from nginx port 80 to Django port 8000. However, when I change the nginx listen port to 81, after signing in to the Django admin I receive:
Forbidden (403)
CSRF verification failed. Request aborted.
nginx.conf
server {
    listen 81;
    server_name localhost;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        include /etc/nginx/mime.types;
        alias /static/;
    }

    location / {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
docker-compose file
version: '3.9'
services:
  backend:
    image: thequy/resume_builder_django:2.0
    build:
      context: ./backend
      dockerfile: ./docker/django/Dockerfile
    env_file:
      - .env
    command: gunicorn resume_builder.wsgi -w ${GUNICORN_WORKER_COUNT} -b 0.0.0.0:${DJANGO_PORT}
    networks:
      - resume_builder_network
  backend_nginx:
    image: thequy/resume_builder_django_nginx:1.0
    build: ./backend/docker/nginx
    ports:
      - "${BACKEND_DJANGO_PORT}:${BACKEND_DJANGO_PORT}"
    depends_on:
      - backend
    networks:
      - resume_builder_network
networks:
  resume_builder_network:
I have changed BACKEND_DJANGO_PORT=81.
I tried adding CORS_ALLOW_ALL_ORIGINS=True and CSRF_TRUSTED_ORIGINS=["http://backend_nginx:81"], but it doesn't help.
Edit: I tried changing the ports of backend_nginx to different values and realized that the host port must be 80; the nginx port doesn't matter.
Since Django 4.0, origin checking has been added to the CSRF middleware, as mentioned here: https://docs.djangoproject.com/en/4.1/ref/csrf/.
So, if a request's origin doesn't match any of the trusted origins, Django raises Forbidden (403): CSRF verification failed.
In your case, you need to set the following in settings.py (I assume you are running this locally):
CSRF_TRUSTED_ORIGINS = ["http://localhost:81"]
Now the question arises why it works on port 80 without setting CSRF_TRUSTED_ORIGINS. I assume the default port 80 is always trusted; however, I can't find any documentation for it. (A plausible mechanism: on port 80 the browser's origin is just http://localhost, with no explicit port, so it matches the host Django sees, while http://localhost:81 does not.)
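A related nginx detail worth checking (a sketch against the config above, not tested in this exact setup): $host never carries the port, so with proxy_set_header Host $host the Django container only ever sees "localhost"; $http_host forwards the client's Host header verbatim, port included:

location / {
    proxy_pass http://backend:8000;
    # $http_host preserves "localhost:81"; $host would reduce it to "localhost"
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}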
I had a perfectly fine Django CMS 3.4.1 setup running behind Nginx as an edge server with SSL termination. The complete chain was:
nginx (SSL) → nginx (django server) → gunicorn → django
All I did was replace the first nginx reverse proxy with traefik, for better integration of other services. Everything runs with docker (compose).
The issue is that Django now wants to redirect HTTPS calls for admin files (and ajax calls) to HTTP, breaking functionality because those files are blocked by the browser.
I did not change anything in the Django installation. In fact, it is even the same docker image as before.
Because it worked with the old setup, I don't think the issue is Django CMS code using a hardcoded http://. SSL was terminated before the Django reverse proxy in the old setup as well.
Does anyone see something I am missing?
Here are some configuration files, from top to bottom:
traefik.yml:
global:
  sendAnonymousUsage: false

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    exposedByDefault: false

log:
  level: INFO
  format: common

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: ***
      storage: /etc/acme/acme.json
      httpChallenge:
        entryPoint: http
relevant parts of django-server docker-compose file:
# ...
services:
  cms-nginx:
    build: "./nginx"
    depends_on:
      - postgres
    networks:
      - proxy
      - cms
    volumes:
      - cms_static:/usr/src/app/static
      - cms_media:/usr/src/app/media
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=proxy"
      - "traefik.http.routers.cms.rule=Host(`***`)"
      - "traefik.http.routers.cms.tls=true"
      - "traefik.http.routers.cms.tls.certresolver=letsencrypt"
  cms:
    restart: always
    build: ./cms
    links:
      - postgres:postgres
      - static:static
    expose:
      - "8000"
    volumes:
      - ./cms:/usr/src/app
      - static_out:/usr/src/app/data/generated
      - cms_static:/usr/src/app/data/static
      - cms_media:/usr/src/app/data/media
    depends_on:
      - static
    env_file:
      - .env
      - ./cms/.env
    command: /bin/sh -c "./docker-init.sh"
    networks:
      - cms
django server nginx conf:
server {
    listen 80;
    server_name *** default_server;
    charset utf-8;
    client_max_body_size 75M;

    location ^~ /static/ {
        alias /usr/src/app/static/;
    }

    location ^~ /media/ {
        alias /usr/src/app/media/;
    }

    location / {
        proxy_pass http://cms:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }

    error_log /var/log/nginx/deckel_error.log;
}
gunicorn start command:
/usr/local/bin/gunicorn cms.wsgi:application -w 2 -b :8000
django settings part:
SESSION_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True
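For context on how these settings interact (a simplified sketch of Django's documented behavior, not code from this project):

# settings.py (sketch)
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
# With this set, request.is_secure() is True only when the request arrives
# carrying "X-Forwarded-Protocol: https". The nginx hop above sets that
# header from its own $scheme, so if that hop receives plain HTTP from the
# edge, Django sees "http" and SECURE_SSL_REDIRECT = True answers with a
# redirect back to https://, which is the loop described above.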
I've been setting up a simple docker-compose for a Django application, with 3 containers: the Django app, a Postgres container, and NGINX. I successfully set up both Django and Postgres and tested connecting directly to their containers, so the only thing left was to set up NGINX in the docker-compose file. I used the following NGINX default.conf, from another template repository:
upstream django {
    server app:8000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_pass http://django;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /static/ {
        autoindex on;
        alias /static/;
    }

    location /media/ {
        autoindex on;
        alias /media/;
    }
}
And this was my docker-compose file:
version: "2"
services:
nginx:
image: nginx:latest
container_name: NGINX
ports:
- "80:80"
- "443:443"
volumes:
- ./test:/djangoapp/test
- ./config/nginx:/etc/nginx/conf.d
- ./test/static:/static
depends_on:
- app
app:
build: .
container_name: DJANGO
command: bash -c "./wait-for-it.sh db:5432 && python manage.py makemigrations && python manage.py migrate && gunicorn test.wsgi -b 0.0.0.0:8000"
depends_on:
- db
volumes:
- ./djangoapp/test:/djangoapp/test
- ./test/static:/static
expose:
- "8000"
env_file:
- ./config/djangoapp.env
db:
image: postgres:latest
container_name: POSTGRES
env_file:
- ./config/database.env
But for some reason I wasn't able to connect to the Django app at all via localhost:80 (the browser always threw a 502 error, and the container logged nothing when I tried). After a lot of troubleshooting, I found that the offending line was proxy_set_header Host $host;, and commenting it out let me connect to the Django app via localhost. So the fix was for my NGINX configuration to fall back to the $proxy_host value instead.
The problem is that I have no idea why that happened in the first place: according to this other question (Nginx: when to use proxy_set_header Host $host vs $proxy_host), I was supposed to use $host to proxy to my Django application, and other NGINX configuration examples also set the Host header that way.
I may be missing something, as NGINX is a tad confusing to me, but I don't understand why I couldn't connect, or why NGINX logged nothing, before I commented that line out.
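For anyone comparing the two variables, a minimal sketch (same upstream django as above):

location / {
    proxy_pass http://django;
    # Default behavior (no proxy_set_header Host ...): nginx sends
    # Host: $proxy_host, i.e. the host part of the proxy_pass target ("django").
    # proxy_set_header Host $host;  # would instead send the requested hostname
    #                               # ("localhost" when browsing http://localhost)
}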
I'm deploying my Django/Nginx/Gunicorn web app to an EC2 instance using docker-compose. The EC2 instance has a static IP that mywebapp.com / www.mywebapp.com point to, and I've completed the certbot verification (the site works on port 80 over HTTP), but now I'm trying to get it working over SSL.
Right now, HTTP (including static files) works, and HTTPS dynamic content (from Django) works, but HTTPS static files do not. I think my nginx configuration is wonky.
I tried copying the location /static/ block into the SSL server context in the nginx conf file, but that caused SSL to stop working altogether, not just static files over SSL.
Here's the final docker-compose.yml:
services:
  certbot:
    entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'
    image: certbot/certbot
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  nginx:
    command: /bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'
    depends_on:
      - web
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    ports:
      - 80:80/tcp
      - 443:443/tcp
    volumes:
      - /home/ec2-user/certbot/conf:/etc/letsencrypt:rw
      - static_volume:/usr/src/app/public:rw
      - /home/ec2-user/certbot/www:/var/www/certbot:rw
  web:
    entrypoint: gunicorn mywebapp.wsgi:application --bind 0.0.0.0:7000
    image: xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest
    volumes:
      - static_volume:/usr/src/app/public:rw
version: '3.0'
volumes:
  static_volume: {}
nginx.prod.conf:
upstream mywebapp {
    # web is the name of the service in the docker-compose.yml
    # 7000 is the port that gunicorn listens on
    server web:7000;
}

server {
    listen 80;
    server_name mywebapp;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /usr/src/app/public/;
    }

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}

server {
    # https://github.com/wmnnd/nginx-certbot/blob/master/data/nginx/app.conf
    listen 443 ssl;
    server_name mywebapp;
    server_tokens off;

    location / {
        proxy_pass http://mywebapp;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # generated with help of certbot
    ssl_certificate /etc/letsencrypt/live/mywebapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mywebapp.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
and finally the nginx service Dockerfile:
FROM nginx:1.15.12-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx.prod.conf /etc/nginx/conf.d
I simply build and push to ECR on my local machine, then docker-compose pull and run with docker-compose up -d on the EC2 instance.
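Roughly, that deploy loop is (registry URL kept as the placeholder above):

# local machine
docker build -t xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest .
docker push xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxxxxx:latest

# EC2 instance
docker-compose pull
docker-compose up -d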
The error I see in docker-compose logs is:
nginx_1 | 2019/05/09 02:30:34 [error] 8#8: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: mywebapp, request: "GET / HTTP/1.1", upstream: "http://192.168.111.3:7000/", host: "ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com"
And I'm not sure what's going wrong. I'm trying to get both dynamic content (gunicorn) and static content (from /usr/src/app/public) served correctly over HTTPS, using the certs I've generated and verified.
Anyone know what I might be doing wrong?
Check your configuration file with nginx -T: are you seeing the correct configuration? Is your build process pulling in the correct conf?
It's helpful to debug this directly on the remote machine: docker-compose exec nginx sh to get inside, tweak the conf from there, and nginx -s reload. This will speed up your iteration cycles while debugging an SSL issue.
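Concretely, an iteration loop along these lines (the conf path inside the container comes from the Dockerfile above):

docker-compose exec nginx sh          # shell into the running proxy container
nginx -T                              # dump the full effective configuration
vi /etc/nginx/conf.d/nginx.prod.conf  # tweak the conf in place
nginx -t                              # validate the edit
nginx -s reload                       # apply without restarting the container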
I'm using docker-compose to build a project with Django and nginx as services. When I launch the daphne server and a client tries to connect to the WebSocket server, I get this error:
*1 recv() failed (104: Connection reset by peer) while reading response header from upstream
The client side shows this:
failed: Error during WebSocket handshake: Unexpected response code: 502
Here is my docker-compose.yml
version: '3'
services:
  nginx:
    image: nginx
    command: nginx -g 'daemon off;'
    ports:
      - "1010:80"
    volumes:
      - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf
      - .:/makeup
    links:
      - web
  web:
    build: .
    command: /usr/local/bin/circusd /makeup/config/circus/web.ini
    environment:
      DJANGO_SETTINGS_MODULE: MakeUp.settings
      DEBUG_MODE: 1
    volumes:
      - .:/makeup
    expose:
      - '8000'
      - '8001'
    links:
      - cache
    extra_hosts:
      "postgre": 100.73.138.65
Nginx:
server {
    listen 80;
    server_name thelab518.cloudapp.net;
    keepalive_timeout 15;
    root /makeup/;
    access_log /dev/stdout;
    error_log /dev/stderr;

    location /api/stream {
        proxy_pass http://web:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://web:8000;
    }
}
And circusd's web.ini file:
[watcher:web]
cmd = /usr/local/bin/gunicorn MakeUp.wsgi:application -c config/gunicorn.py
working_dir = /makeup/
copy_env = True
user = www-data

[watcher:daphne]
cmd = /usr/local/bin/daphne -b 0.0.0.0 -p 8001 MakeUp.asgi:channel_layer
working_dir = /makeup/
copy_env = True
user = root

[watcher:worker]
cmd = /usr/bin/python3 manage.py runworker
working_dir = /makeup/
copy_env = True
user = www-data
As quite explicitly stated in the fine manual, to successfully run Channels you need a dedicated application server implementing the ASGI protocol, such as the supplied daphne.
The entire Django execution model changes with Channels: there are separate "interface servers" that take care of receiving and sending messages over, for example, WebSockets or HTTP or SMS, and "worker servers" that run the actual code (potentially on a different server or VM or container or...). The two are connected by a "channel layer" that carries messages and replies back and forth.
The current implementation supplies three channel layers that talk ASGI between an interface server and a worker server:
An in-memory channel layer, used mainly for running the test server (it's single-process).
An IPC-based channel layer, usable to run different workers on the same server.
A Redis-based channel layer, which should be used for heavy production sites and can connect interface servers to multiple worker servers.
You configure them like you do DATABASES:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "my_project.routing.channel_routing",
        "CONFIG": {
            "hosts": [("redis-channel-1", 6379), ("redis-channel-2", 6379)],
        },
    },
}
Of course this means that your docker config has to change to add one or more interface servers instead of, or in addition to, nginx (even if, in that case, you'll need to accept WebSocket connections on a different port, with all the attendant problems) and, quite likely, an instance of Redis connecting them all.
This in turn means that until circus and nginx support ASGI, it won't be possible to use them with django-channels, or that support will cover only the regular HTTP part of your system.
You can find more info in the Deploying section of the official documentation.
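Put concretely, with the Redis layer above, the deployment splits into at least two kinds of processes (a sketch reusing the module paths from this question's circus config):

# interface server: accepts HTTP/WebSocket connections and speaks ASGI
daphne -b 0.0.0.0 -p 8001 MakeUp.asgi:channel_layer

# worker server(s): run the actual consumer code; scale by starting more
python3 manage.py runworker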
It looks like you started daphne on port 8001, while trying to expose ports 8000 and 8001 in docker-compose. Port 8000 is not pointing at any server (daphne is on 8001). In your nginx config, set the proxy to port 8001 and expose only port 8001 in docker-compose.
I have created a simple example on GitHub of how it can be set up, where I proxy to both ASGI and WSGI servers, but you can go with only the ASGI server:
The nginx:
upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;
    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
The docker-compose.yml:
version: '2'
services:
  nginx:
    extends:
      file: docker-common.yml
      service: nginx
    ports:
      - 8000:8000
    volumes:
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - asgiserver
  asgiserver:
    extends:
      file: docker-common.yml
      service: backend
    entrypoint: /app/docker/backend/asgi-entrypoint.sh
    links:
      - postgres
      - redis
      - rabbitmq
    expose:
      - 9000
  wsgiserver:
    extends:
      file: docker-common.yml
      service: backend
    entrypoint: /app/docker/backend/wsgi-entrypoint.sh
    links:
      - postgres
      - redis
      - rabbitmq
    expose:
      - 8000
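The two entrypoint scripts live in the linked repo; hypothetical minimal versions would look something like this (my_project is a placeholder module path, not the repo's actual code):

#!/bin/sh
# asgi-entrypoint.sh (hypothetical sketch): interface server on the exposed port 9000
daphne -b 0.0.0.0 -p 9000 my_project.asgi:channel_layer

#!/bin/sh
# wsgi-entrypoint.sh (hypothetical sketch): plain-HTTP WSGI server on port 8000
gunicorn my_project.wsgi:application -b 0.0.0.0:8000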