I'm getting a 400 handshake error on POST requests to my Flask app running socket.io, even though I've added the NGINX config according to the docs and posts I read online. I'm using an Application Load Balancer in AWS and have set up a :80 Target Group and a :443 listener which forwards to the Target Group. I have also added a rule for the route /socket.io to forward requests to the target group on :80 and have enabled sticky sessions within the target group. I'm also using a Route 53 domain name and enforcing SSL; everything works fine except the socket communication.
NGINX conf file:
server {
    listen [::]:80;
    listen 80;

    server_name _domain_name_;

    access_log /var/log/nginx/access.log;

    location / {
        proxy_pass http://127.0.0.1:8000;
        include proxy_params;
    }

    location /socket.io {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        include proxy_params;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://127.0.0.1:8000/socket.io;
    }
}
And the socket.io connection in my JS file:
var socket = io();
socket.on('connect', () => {
    console.log(socket.connected); // true
});
Connection returns true.
[Screenshot: ALB listener rule]
UPDATE
Switched to an NLB and am still having the same issues; however, now in my NGINX logs I am seeing:
connect() failed (111: Connection refused) while connecting to upstream
request: "GET /socket.io/?EIO=3&transport=polling&t=MvDPJhb HTTP/1.1",
upstream: "http://127.0.0.1:8000/socket.io/?EIO=3&transport=polling&t=MvDPJhb"
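For what it's worth, error 111 means nothing was accepting connections on 127.0.0.1:8000 when NGINX tried to proxy the polling request, so at this point the problem looks like the upstream WSGI server rather than the load balancer. A minimal sketch of how the upstream might be run, assuming a Flask-SocketIO app object named app in app.py and gunicorn with the eventlet worker (the module and object names are assumptions, adjust to your project):

# hypothetical module:object name; requires the eventlet package to be installed
gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:8000 app:app

If the server is already running, it may be bound to a different address or port than the one NGINX proxies to.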
I have been trying to set up my Django backend for two days but I can't get it to work with my domain name. I have the Next frontend on Nginx (port 80) too, and it works fine with the domain name. I did the same setup for the backend on port 8000, but I can't access it using the domain name; it works fine with the IP. I have tried everything I found on the internet but nothing seems to work.
Config for the frontend (Working with domain)
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    include snippets/snakeoil.conf;

    server_name {domainName};

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        # reverse proxy for next server
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Config for backend (Not working with domain name)
server {
    listen 8000;
    listen [::]:8000;

    server_name dev.liqd.fi ipaddress;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /root/backend/lithexBackEnd;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
Allowed Hosts
ALLOWED_HOSTS = ['localhost','127.0.0.1','ip address','*.domain.com','domain.com']
Gunicorn has been set up and tested and seems to be working fine.
The Nginx error log gives the following error when I try to access the backend via the domain name.
/var/log/nginx/error.log
2023/01/22 10:43:58 [error] 33980#33980: *3 connect() failed (111: Connection refused) while connecting to upstream, client:server: dev.domain, request: "GET /app/login HTTP/1.1", upstream: "http://[::1]:3000/app/login", host:domain
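Note that the upstream in that log line is http://[::1]:3000: NGINX resolved localhost to the IPv6 loopback and the connection was refused, which usually means the upstream (here the Next server) only listens on IPv4. A minimal sketch of the usual workaround, assuming the Next server is bound to 127.0.0.1:3000 (only the proxy_pass line differs from the frontend config above):

location / {
    # use the IPv4 loopback explicitly so NGINX does not try [::1]:3000 first
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}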
I have a simple vue.js and django (as REST API) application that I want to combine with nginx. Currently the frontend is working, but the backend is not. Here's my nginx config:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html/;
        index index.html index.htm;
    }

    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:8000/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
Visiting localhost works for the static files, but localhost/api leads to a bad gateway error:
[error] 29#29: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /api HTTP/1.1", upstream: "http://127.0.0.1:8000/", host: "localhost"
Also, trying to visit localhost/api via the frontend (axios) just returns the 'You need javascript to display this page' page, which is part of the frontend.
Running the backend separately, outside of Docker and nginx, works fine on localhost:8000.
What can I do to make it work? It doesn't necessarily have to be done this way, as long as the frontend and backend can communicate.
You said you're running Docker? Then you need to change localhost to the name of the container that is running your backend.
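For example, a minimal sketch of the /api block, assuming the backend service is called backend in docker-compose.yml (the name is an assumption; use whatever your compose file calls it):

location /api {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # "backend" is the compose service name; it resolves on the shared Docker network
    proxy_pass http://backend:8000/;
}

Both containers need to be on the same Docker network for the name to resolve; Compose puts services on a shared default network automatically.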
I am deploying a website on AWS. Everything works fine for HTTP and HTTPS. I am passing all requests to Daphne. However, incoming WebSocket connections are treated as HTTP requests by Django. I am guessing there is some header that isn't set in Nginx, but I have copied a lot of my Nginx config from tutorials.
Nginx Config:
upstream django {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    server_name 18.130.130.126;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name 18.130.130.126;

    ssl_certificate /etc/nginx/certificate/certificate.crt;
    ssl_certificate_key /etc/nginx/certificate/private.key;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;

    location / {
        include proxy_params;
        proxy_pass http://django;
    }
}
Daphne is bound to 0.0.0.0:9000. Channels has a very basic setup: a ProtocolTypeRouter with AuthMiddlewareStack and then a URLRouter, as shown in the Channels tutorial, and then a Consumer class. I am using Redis for the channel layer, but that doesn't seem to be the problem. Here is some data about the request and response from Fiddler: the request headers say Upgrade to WebSocket, but it gets a 404 HTTP response, as Django doesn't see it as a WebSocket request.
Thanks for any help.
include proxy_params was the problem. It was overwriting the headers.
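For anyone hitting the same thing: proxy_set_header directives are only inherited from the server block when a location defines none of its own, so the proxy_set_header lines pulled in by include proxy_params silently dropped the server-level Upgrade and Connection headers. A minimal sketch of the idea, setting everything explicitly in the location instead of using the include (upstream name as in the config above):

location / {
    proxy_pass http://django;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # no "include proxy_params;" here, so nothing disables the headers above
}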
I have two Docker containers running on AWS Elastic Beanstalk. One container has my web application (Django) and the other has my NGINX server. I have a PositiveSSL certificate verified for my domain name. After configuring my NGINX to accept HTTPS, the website refuses to connect over HTTPS and only works over HTTP.
I have my AWS security groups open to accept traffic on port 443, and my certificate is valid, so I can only assume I am not setting up my NGINX correctly.
upstream app {
    server app:8000;
}

server {
    listen 443 ssl;
    server_name mysite.com www.mysite.com;

    ssl_certificate /app/ssl/mysite_chain.crt;
    ssl_certificate_key /app/ssl/mysite.key;

    location / {
        proxy_pass http://app;
        proxy_ssl_session_reuse on;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /app/staticfiles/;
    }
}
Everything works fine when I use plain HTTP, and for some reason I don't get any logs from NGINX for HTTPS at all. The only message I get is from my browser saying the 'site can't be reached' and that the 'website refused the connection'. Is there something obvious here I am missing?
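One thing worth checking, given that NGINX logs nothing at all for HTTPS: the connection may be refused before it ever reaches the NGINX container, which would happen if the container only publishes port 80. A hedged sketch of the relevant part of a docker-compose.yml, assuming the NGINX service is named nginx (names and file are assumptions; on Elastic Beanstalk the same mapping may live in Dockerrun.aws.json instead):

# hypothetical excerpt; service name is an assumption
services:
  nginx:
    ports:
      - "80:80"
      - "443:443"   # without this mapping, HTTPS is refused before reaching NGINX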
I've added some features to a couple of our web apps that need websocket-rails. Everything works fine in development, but I am not sure how to deploy all of this in our production environment since it's a bit more complex.
The production setup:
1 server used as a Load balancer (Nginx).
2 servers used as web servers, where our rails apps run using Nginx and Passenger (both servers are identical).
Several other servers used by the app servers, but I believe they are irrelevant to this question.
All sites are running on HTTPS.
Load balancer configs
Here's an example for one of the sites, the others have similar configs:
upstream example {
    ip_hash;
    server xx.xx.xx.xx:443;
    server xx.xx.xx.xx:443;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    server_name example.com;
    rewrite ^(.*) https://www.example.com$1 permanent;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    server_name www.example.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 2000M;

    location /css { root /home/myuser/maintenance; }
    location /js { root /home/myuser/maintenance; }
    location /img { root /home/myuser/maintenance; }
    location /fonts { root /home/myuser/maintenance; }

    error_page 502 503 @maintenance;
    location @maintenance {
        root /home/myuser;
        if ($uri !~ ^/maintenance/) {
            rewrite ^(.*)$ /maintenance/example.html break;
        }
    }

    location / {
        proxy_pass https://example;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Web server configs
Again, here's an example for one of the sites, the others have similar configs:
server {
    server_name example.com;
    rewrite ^(.*) https://www.example.com$1 permanent;
}

server {
    listen 80;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    root /var/www/example/public;
    server_name www.example.com;

    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    client_max_body_size 2000M;

    passenger_enabled on;
    rails_env production;
    passenger_env_var SECRET_KEY_BASE "SOME_SECRET";
}
What I've gathered so far:
I'll need to enable Passenger sticky sessions.
I'll need to create a location in each site's server section that points to where the websocket server is listening.
I'll need to override Passenger's concurrent request limit to unlimited for the websocket location.
My Questions:
Do I have to enable the Passenger sticky sessions in the load balancer's configs too? I am guessing this is only for the web servers.
What would the location section for the websocket server look like?
Do I have to create the websocket location section on the load balancer as well?
Are sticky sessions enough to keep the various apps and servers in sync?
I have various apps running on each server and they should all receive the same notifications (socket messages), so they should all connect to the same websocket server (I'm guessing). Now that websocket-rails is part of their gemsets, won't each app try to spawn its own websocket server? If so, how do I prevent that and make them spawn only one if none is running yet?
As you can see, I am quite confused about how websocket-rails works with Passenger and Nginx in production, so even if you don't have all the answers, any input is greatly appreciated!
UPDATE
I've tried the following on the load balancer:
upstream websocket {
    server xx.xx.xx.xx:443;
    server xx.xx.xx.xx:443;
}

location /websocket {
    proxy_pass https://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # also tried with this:
    # proxy_set_header Upgrade $http_upgrade;
    # proxy_set_header Connection "upgrade";
}
and on the app servers:
location /websocket {
    proxy_pass https://www.example.com/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade websocket;
    proxy_set_header Connection Upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # also tried with this:
    # proxy_set_header Upgrade $http_upgrade;
    # proxy_set_header Connection "upgrade";
}
On the client side I connect with WebSocketRails('www.example.com/websocket'); and I get the following error:
WebSocket connection to 'wss://www.example.com/websocket' failed: Error during WebSocket handshake: Unexpected response code: 404
Any ideas?
I don't think you'll need Passenger sticky sessions on the load balancer.
This blog covers the relevant WebSocket config for NGINX. You need the WebSocket config on the load balancer, and also on the web servers if you want to pass the Upgrade and Connection headers through to the Rails app.
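A rough sketch of what that WebSocket location might look like on the load balancer, assuming the endpoint is mounted at /websocket as in the update above and reusing the upstream defined there (the map goes in the http block; this is a sketch, not a drop-in config):

# http block: forward the client's Upgrade header, close the connection if there isn't one
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# inside the www.example.com server block
location /websocket {
    proxy_pass https://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 86400;   # keep idle websocket connections open
}

The 404 in the handshake would then come from the Rails app itself if nothing is actually listening on the /websocket path there, so it is also worth confirming where websocket-rails mounts its endpoint.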