Django Channels doesn't detect WebSocket request with NGINX - django

I am deploying a website on AWS. Everything works fine for HTTP and HTTPS. I am passing all requests to Daphne. However, incoming WebSocket connections are treated as HTTP requests by Django. I am guessing there is some header that isn't set in Nginx, but I have copied a lot of my Nginx config from tutorials.
Nginx Config:
upstream django {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name 18.130.130.126;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name 18.130.130.126;

    ssl_certificate /etc/nginx/certificate/certificate.crt;
    ssl_certificate_key /etc/nginx/certificate/private.key;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;

    location / {
        include proxy_params;
        proxy_pass http://django;
    }
}
Daphne is bound to 0.0.0.0:9000. Channels has a very basic setup: a ProtocolTypeRouter with AuthMiddlewareStack and then a URLRouter, as shown in the Channels tutorial, and then a Consumer class. I am using Redis for the channel layer, but that doesn't seem to be the problem. Here is some data about the request and response from Fiddler: the request headers say Upgrade to websocket, but the response comes back as a 404 HTTP response, so it isn't being treated as a WebSocket request.
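For reference, the routing follows the tutorial layout; roughly this (the app name, URL pattern, and consumer name are placeholders, not my exact code):

# routing.py, per the Channels tutorial (Channels 2 style; on Channels 3 this
# lives in asgi.py and the consumer is registered as MyConsumer.as_asgi())
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import re_path

from myapp import consumers  # placeholder app

application = ProtocolTypeRouter({
    # Plain HTTP keeps going to the normal Django views;
    # WebSocket connections are handed to the consumer.
    "websocket": AuthMiddlewareStack(
        URLRouter([
            re_path(r"^ws/some_path/$", consumers.MyConsumer),
        ])
    ),
})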
Thanks for any help.

include proxy_params was the problem. It was overwriting headers: proxy_set_header directives are only inherited from the server level when a location defines none of its own, so including proxy_params inside location / silently dropped the Upgrade and Connection headers set above.
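A sketch of one way to fix it, with the include removed and the headers it used to provide set explicitly inside the location (same upstream and paths as in the question):

location / {
    proxy_pass http://django;
    proxy_http_version 1.1;
    # set everything explicitly here instead of include proxy_params,
    # so the upgrade headers are no longer lost
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}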

Related

HTTPS SSL certificate does not work on NGINX

I have two Docker containers running on AWS Elastic Beanstalk. One container has my web application (Django) and the other has my Nginx server. I have a PositiveSSL certificate verified for my domain name. After configuring Nginx to accept HTTPS, the website refuses to connect over HTTPS and only works over HTTP.
My AWS security groups are open to traffic on port 443 and my certificate is valid, so I can only assume I am not setting up Nginx correctly.
upstream app {
    server app:8000;
}

server {
    listen 443 ssl;
    server_name mysite.com www.mysite.com;

    ssl_certificate /app/ssl/mysite_chain.crt;
    ssl_certificate_key /app/ssl/mysite.key;

    location / {
        proxy_pass http://app;
        proxy_ssl_session_reuse on;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}
Everything works fine over plain HTTP, and for some reason I don't get any logs from Nginx for HTTPS. The only message I get is from my browser, saying the site can't be reached and that the website refused the connection. Is there something obvious here that I am missing?

Possible to serve Django Channels app only using Nginx and Daphne?

I was under the assumption that I could run a Django Channels app using only Daphne (ASGI) and Nginx as a proxy for my Django app to begin with.
The application would be running with Daphne on 127.0.0.1:8001
However, I am running into a 403 Forbidden error.
2019/03/06 17:45:40 [error] *1 directory index of "/home/user1/app/src/app/" is forbidden
And when I posted about that, another user mentioned:
There is no directive to pass http request to django app in your nginx config
and suggested looking into fastcgi_pass, uwsgi_pass, or Gunicorn.
Obviously, Django Channels runs on ASGI, and I am passing all requests through that right now (not to uWSGI and then on to ASGI depending on the request).
Can I serve my Django app with only Nginx and Daphne? The Django Channels docs seem to think so as they don't mention needing Gunicorn or something similar.
My Nginx config:
upstream socket {
    ip_hash;
    server 127.0.0.1:8001 fail_timeout=0;
}

server {
    listen 80;
    #listen [::]:80 ipv6only=on;
    server_name your.server.com;
    access_log /etc/nginx/access.log;
    root /var/www/html/someroot;

    location / {
        #autoindex on;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        # try_files $uri =404;
        #proxy_set_header X-Real-IP $remote_addr;
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header Host $http_host;
        #proxy_set_header X-NginX-Proxy true;
        #proxy_pass http://socket;
        #proxy_redirect off;
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection "upgrade";
        #proxy_redirect off;
        #proxy_set_header X-Forwarded-Proto $scheme;
        #proxy_cache one;
        #proxy_cache_key sfs$request_uri$scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/some/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/some/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }
}
Yes, it's possible. Try this config:
upstream socket {
    ip_hash;
    server $DAPHNE_IP_ADDRESS$ fail_timeout=0;
}

server {
    ...
    location / {
        proxy_pass http://socket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    ...
}
Where $DAPHNE_IP_ADDRESS$ is your Daphne IP and port without a scheme (e.g. 127.0.0.1:8001).
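As a usage note (assuming the project's ASGI module is yourproject/asgi.py; adjust the module path to your own layout), Daphne alone then serves both the plain HTTP and the WebSocket traffic behind that proxy_pass, with no need for Gunicorn or uWSGI:

daphne -b 127.0.0.1 -p 8001 yourproject.asgi:application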

Does Django Channels use the ws:// protocol prefix to route between Django views and the Channels app?

I am running a Django + Channels server using Daphne. The Daphne server sits behind Nginx. My Nginx config is given at the end.
When I try to connect to ws://example.com/ws/endpoint I get a NOT FOUND /ws/endpoint error.
To me it looks like Daphne uses the protocol to route to either a Django view or the Channels app: if it sees http it routes to a Django view, and when it sees ws it routes to the Channels app.
With the following Nginx proxy_pass configuration the URL always has the http protocol prefix, so I get 404 / NOT FOUND in the logs. If I change the proxy_pass prefix to ws, the Nginx config fails.
What is the ideal way to set up Channels in this scenario?
server {
    listen 443 ssl;
    server_name example.com;

    location / {
        # prevents 502 bad gateway error
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;

        # redirect all HTTP traffic to localhost:8088;
        proxy_pass http://0.0.0.0:8000/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-NginX-Proxy true;

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 999999999;
    }
}
Yes, as suspected in the question, Channels detects the route based on the protocol (ws or http/https).
Using a ws prefix in proxy_pass http://0.0.0.0:8000/; is not possible. To forward the protocol information, the following directive should be included:
proxy_set_header X-Forwarded-Proto $scheme;
This forwards the scheme/protocol (ws) information to the Channels app, and Channels routes according to that information.
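For context, that directive would sit alongside the existing upgrade headers in the location block from the question, roughly like this (everything else unchanged):

location / {
    proxy_pass http://0.0.0.0:8000/;
    # enables WS support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # forward the original scheme so the app can tell ws/wss from http/https
    proxy_set_header X-Forwarded-Proto $scheme;
}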

Nginx Proxy uploading to s3?

I am using an Nginx proxy to force all traffic through HTTPS. However, I have a page (/upload) which posts to /upload-downloadable, which then uploads the user's files as a stream to AWS (bucketname.s3.eu-west-1.amazonaws.com).
The upload works, as I can see the file in the S3 bucket, but the response never makes it back to tell the user. It works perfectly without the proxy, but not with my current config.
So Client -> AWS works, but AWS -> Server/Client doesn't.
Any ideas?
upstream site {
    server 127.0.0.1:1337;
}

upstream project {
    server localhost:27017;
}

# HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

# HTTPS — proxy all requests to the Node app
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name tryhackme.com;

    error_page 502 /down.html;
    location /down.html {
        root /var/www/html;
    }
    #error_page 500 502 503 504 /var/www/html/down.html;

    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    location / {
        #proxy_pass http://127.0.0.1:28017;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_read_timeout 3600;
        proxy_pass http://localhost:1337/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Rails 4 + Websocket-rails + Passenger + Nginx + Load balancer

I've added some features to a couple of our web apps that need websocket-rails. Everything works fine in development, but I am not sure how to deploy all this in our production environment, since it's a bit more complex.
The production setup:
1 server used as a Load balancer (Nginx).
2 servers used as web servers, where our rails apps run using Nginx and Passenger (both servers are identical).
Several other servers used by the app servers, but I believe they are irrelevant to this question.
All sites are running on HTTPS.
Load balancer configs
Here's an example for one of the sites, the others have similar configs:
upstream example {
    ip_hash;
    server xx.xx.xx.xx:443;
    server xx.xx.xx.xx:443;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    server_name example.com;
    rewrite ^(.*) https://www.example.com$1 permanent;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    server_name www.example.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 2000M;

    location /css { root /home/myuser/maintenance; }
    location /js { root /home/myuser/maintenance; }
    location /img { root /home/myuser/maintenance; }
    location /fonts { root /home/myuser/maintenance; }

    error_page 502 503 @maintenance;
    location @maintenance {
        root /home/myuser;
        if ($uri !~ ^/maintenance/) {
            rewrite ^(.*)$ /maintenance/example.html break;
        }
    }

    location / {
        proxy_pass https://example;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Web server configs
Again, here's an example for one of the sites, the others have similar configs:
server {
    server_name example.com;
    rewrite ^(.*) https://www.example.com$1 permanent;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    root /var/www/example/public;
    server_name www.example.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 2000M;

    passenger_enabled on;
    rails_env production;
    passenger_env_var SECRET_KEY_BASE "SOME_SECRET";
}
What I've gathered so far:
I'll need to enable passenger sticky sessions
I'll need to create a location in the site's server section for where the websocket server listens.
I'll need to override Passenger's concurrent request limit for the websocket location and set it to unlimited.
My Questions:
Do I have to enable the passenger sticky sessions also in the load balancer's configs? I am guessing this is only for the web servers.
What would the location section for the websocket server look like?
Do I have to create the websocket location section also on the load balancer?
Are sticky sessions enough to keep the various apps and servers in sync?
I have various apps running on each server and they should all receive the same notifications (socket messages), so they should all connect to the same websocket server (I'm guessing). Now that websocket-rails is part of their gemsets, won't each app try to spawn its own websocket server? If so, how do I prevent that and make them spawn only one if none is running yet?
As you can see I am quite confused about how websocket-rails works with passenger and nginx in production so even if you don't have all the answers, any input is greatly appreciated!
UPDATE
I've tried the following on the load balancer:
upstream websocket {
    server xx.xx.xx.xx:443;
    server xx.xx.xx.xx:443;
}

location /websocket {
    proxy_pass https://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # also tried with this:
    #proxy_set_header Upgrade $http_upgrade;
    #proxy_set_header Connection "upgrade";
}
and on the app servers:
location /websocket {
    proxy_pass https://www.example.com/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # also tried with this:
    #proxy_set_header Upgrade $http_upgrade;
    #proxy_set_header Connection "upgrade";
}
On the client side I connect with WebSocketRails('www.example.com/websocket'); and I get the following error:
WebSocket connection to 'wss://www.example.com/websocket' failed: Error during WebSocket handshake: Unexpected response code: 404
Any ideas?
I don't think you'll need Passenger sticky sessions on the load balancer.
This blog covers the relevant WebSocket config for Nginx. You need the WebSocket config on the load balancer, and also on the web servers if you want to pass the Upgrade and Connection headers through to the Rails app.
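In outline, the /websocket location on the load balancer would pass the client's own upgrade headers through rather than hard-coding them, something like this (a sketch based on the config in the update, not a tested drop-in):

location /websocket {
    proxy_pass https://websocket;
    proxy_http_version 1.1;
    # pass through whatever the client sent instead of hard-coding "websocket"
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

The same Upgrade/Connection pair would then be repeated in the web servers' /websocket location if those headers need to reach the Rails app.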