So I have been banging my head against the wall for the better part of two days, please help.
I am attempting to establish a WebSocket connection using this django-websocket-redis configuration.
There are two instances of uWSGI running, one for the website and one for the WebSocket communication.
I used Wireshark heavily to find out exactly what is happening, and apparently nginx is eating the headers "Connection: Upgrade" and "Upgrade: websocket".
Here is the critical part of the nginx config:
upstream websocket {
    server 127.0.0.1:9868;
}

location /ws/ {
    proxy_pass_request_headers on;
    access_log off;
    proxy_http_version 1.1;
    proxy_pass http://websocket;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Upgrade websocket;
}
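For comparison, the pattern recommended in the nginx documentation derives both headers from the client's request, so the Upgrade header is forwarded only when the client actually sent one (the map block goes in the http context; the upstream name below is taken from the config above):

```nginx
# In the http context: translate the client's Upgrade header into the
# Connection header value -- "upgrade" during a handshake, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the server context:
location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_pass http://websocket;
}
```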
As you can see in those two screenshots, a tcpdump of the internal communication shows that the handshake works nicely, but in my browser (second image) the headers are missing.
Any ideas are greatly appreciated. I am truly stuck here :(
Versions:
nginx - 1.7.4
uwsgi - 2.0.7
pip freeze:
Django==1.7
MySQL-python==1.2.5
django-redis-sessions==0.4.0
django-websocket-redis==0.4.2
gevent==1.0.1
greenlet==0.4.4
redis==2.10.3
six==1.8.0
uWSGI==2.0.7
wsgiref==0.1.2
I would use Gunicorn for deploying a Django application, but anyway.
I remembered seeing this in the Gunicorn docs:
If you want to be able to handle streaming request/responses or other
fancy features like Comet, Long polling, or Web sockets, you need to
turn off the proxy buffering. When you do this you must run with one
of the async worker classes.
To turn off buffering, you only need to add proxy_buffering off; to
your location block:
Your location block would then be:
location /ws/ {
    proxy_pass_request_headers on;
    access_log off;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_pass http://websocket;
    proxy_set_header Connection "upgrade";
    proxy_set_header Upgrade websocket;
}
Link to the Gunicorn guide for deploying behind nginx:
http://docs.gunicorn.org/en/latest/deploy.html?highlight=header
Hope this helps.
I've been struggling with an nginx (1.18.0) configuration for a forward proxy. We use specific EC2 boxes as forward proxies, which allows us to send their EIPs for whitelisting purposes. Nginx has been used for several cases, including mTLS, and it has always worked fine. But this time the partner is using AWS API Gateway, and this doesn't seem to work. When I use curl and openssl with the client cert and key it works fine, but as soon as I go through nginx it throws an HTTP 400 error.
Below is the configuration I am currently using.
server {
    listen 11013;
    server_name localhost;

    access_log /var/log/nginx/forward_partner_x_nginx_proxy_access.log;
    error_log /var/log/nginx/forward_partner_x_nginx_proxy_error.log warn;

    proxy_ssl_certificate /root/partner_x_ssl/uat_chain.crt;
    proxy_ssl_certificate_key /root/partner_x_ssl/uat.key;

    resolver 8.8.8.8;
    set $partner_x_upstream https://api.partner_x.app;

    location /test {
        access_log off;
        return 200;
    }

    location / {
        proxy_set_header Host $proxy_host;
        proxy_pass $partner_x_upstream;
        #proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_server_name on;
        proxy_buffering off;
        proxy_ssl_name xx.xx.xx.xx;
        #add_header Content-Type application/json;
    }
}
I have tried several iterations of this config and it still doesn't work. Because of the client certificate, I am not able to see anything significant in a captured tcpdump.
Has anyone been able to get a similar config working? Grateful if you could shed some light on what I am doing wrong here. Thanks in advance.
Best Regards,
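One detail worth double-checking in the config above (an assumption on my part, since a 400 can have several causes): API Gateway matches requests by TLS SNI and the Host header, so presenting a raw IP as the SNI name via proxy_ssl_name may make the gateway reject the request. A sketch that presents the hostname instead:

```nginx
location / {
    proxy_pass $partner_x_upstream;
    # Present the upstream hostname (not an IP) for SNI and Host so
    # API Gateway can match the request to its domain.
    proxy_ssl_server_name on;
    proxy_ssl_name api.partner_x.app;
    proxy_set_header Host api.partner_x.app;
    proxy_ssl_protocols TLSv1.2;
}
```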
I have been trying to get my app to run on HTTPS. It is a single-instance, single-container Docker app that runs Dart code and serves on 8080. So far, the app runs on HTTP perfectly. I do not have, nor want, a load balancer.
I have followed the directions here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html and here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-httpredirect.html. I also have it configured to connect to my site at "server.mysite.com". I am getting a "refused to connect" error. I am sort of a noob at this, so if you need more information let me know.
The issue was that the instance was not listening on 443. It turns out that since I deployed on Amazon Linux 2, there is a different way of configuring the location of the https.conf file that the docs have you create.
Here is a reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html. Essentially, I made a folder in the root (next to .ebextensions) and added a file at the path .platform/nginx/conf.d/https.conf with the contents the docs wanted, e.g.
server {
    listen 443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
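As a side note, ssl on; has been deprecated since nginx 1.15.0; on newer platform versions the same file can enable TLS via the listen directive instead (same certificate paths as above):

```nginx
server {
    # The "ssl" parameter on listen replaces the deprecated "ssl on;".
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_prefer_server_ciphers on;
}
```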
I'm trying to configure my app to run on EC2, with some difficulty.
It's a multi-container app built with docker-compose, consisting of Django, DRF, Channels, Redis, Gunicorn, Celery, and Nuxt for the frontend.
I have an instance running and can SSH into it and install the relevant packages: Docker, nginx, docker-compose, etc.
What I can't do is edit my app.conf nginx file to use the public IP 33.455.234.23 (example IP) to route the backend, REST API, and frontend.
I've created an app.conf nginx file which works fine locally, but when I edit the nginx files after installation to point my app at the public IPs, I run into errors.
The error I get when testing my config is:
2020/11/13 01:59:17 [emerg] 13935#0: "http" directive is not allowed here in /etc/nginx/default.d/app.conf:3
nginx: configuration file /etc/nginx/nginx.conf test failed
This is my nginx config
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    client_max_body_size 100m;

    upstream asgiserver {
        server asgiserver:8000;
    }

    upstream nuxt {
        ip_hash;
        server nuxt:3000;
    }

    server {
        listen 80 default_server;
        server_name localhost;

        location ~ /(api|admin|static)/ {
            proxy_pass http://asgiserver;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $host;
        }

        location /ws/ {
            proxy_pass http://asgiserver;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            # proxy_redirect off;
        }

        location / {
            proxy_pass http://nuxt;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $host;
        }
    }
}
What am I doing wrong here? What do I need to do to have my app running via a reverse proxy on my EC2 public address?
It looks like this configuration file isn't treated as the main configuration file by nginx, but is instead included from the main configuration file /etc/nginx/nginx.conf, which in turn already contains the http block with an include directive inside it, something like:
include /etc/nginx/default.d/*.conf;
Check this; if it is true, remove everything except the upstream and server configuration blocks from the /etc/nginx/default.d/app.conf file, and move the client_max_body_size directive inside the server block.
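Under that assumption, /etc/nginx/default.d/app.conf would reduce to something like this (the directives are copied from the config in the question, minus the http wrapper):

```nginx
# /etc/nginx/default.d/app.conf -- no http {} wrapper; nginx.conf supplies it.
upstream asgiserver {
    server asgiserver:8000;
}

upstream nuxt {
    ip_hash;
    server nuxt:3000;
}

server {
    listen 80 default_server;
    server_name localhost;
    client_max_body_size 100m;

    location ~ /(api|admin|static)/ {
        proxy_pass http://asgiserver;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
    }

    location /ws/ {
        proxy_pass http://asgiserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://nuxt;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
    }
}
```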
I am trying to launch my web app with Django, Angular, and nginx. During the development phase I made services within Angular that send requests to 127.0.0.1:8000. I was able to get my Angular project to display over my domain name; however, when I try to log into my app over another network, it won't work. Is this because I am pointing at 127.0.0.1:8000? Do I need to configure a web server gateway or API gateway for Django? Do I need to point the services in Angular to a different address? Or did I configure something wrong within nginx? If anyone can help me, I would greatly appreciate it.
upstream django_server {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate C:/Certbot/live/example.com/fullchain.pem;
    ssl_certificate_key C:/Certbot/live/example.com/privkey.pem;

    root /nginx_test/www1/example.com;
    index index.html;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
        return 204;
    }

    location /api-token/ {
        proxy_pass http://django_server/api-token/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the reason is in your Angular service configuration. Instead of 127.0.0.1, try changing it to your REST API server's IP address.
As I understand it, when you open your app in the browser, you load all the static files into your PC/laptop's browser. Because of that, every time your frontend service fires, it tries to get a response from your laptop/PC instead of your backend server.
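Alternatively, you can keep the Angular services on relative URLs and let nginx forward the API traffic, so the same build works from any network. A sketch reusing the django_server upstream from the question (the /api/ prefix is an assumption about how the endpoints are grouped):

```nginx
# With this in the server block, the Angular app can call /api/...
# instead of a hard-coded 127.0.0.1:8000.
location /api/ {
    proxy_pass http://django_server;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```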
It's probably related to this question: How to run more than one app on one instance of EC2
But that question only seemed to be talking about multiple node.js apps.
I am trying to learn several different things, so I'm building different websites to learn Ruby on Rails, LAMP, and node.js, along with my personal website and blog.
Is there any way to run all these on the same EC2 instance?
First, there's nothing EC2-specific about setting up multiple web apps on one box. You'll want to use nginx (or Apache) in "reverse proxy" mode. This way, the web server listens on ports 80 (and 443), and your apps run on various other ports. For each incoming request, the web server reads the "Host" header to map the request to a backend. So different DNS names/domains show different content.
Here is how to setup nginx in reverse proxy mode: http://www.cyberciti.biz/tips/using-nginx-as-reverse-proxy.html
For each "back-end" app, you'll want to:
1) Allocate a port (3000 in this example)
2) Write an upstream stanza that tells nginx where your app is
3) Write a (virtual) server stanza that maps the server name to the upstream location
For example:
upstream app1 {
    server 127.0.0.1:3000; # App1's port
}

server {
    listen *:80;
    server_name app1.example.com;

    # You can put access_log / error_log sections here to break them out of the common log.

    ## Send request to backend
    location / {
        proxy_pass http://app1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I prefer to have Nginx in front of Apache for two reasons: 1) nginx can serve static files with much less memory, and 2) nginx buffers data to/from the client, so people on slow internet connections don't clog your back-ends.
When testing your config, use nginx -s reload to reload the config, and curl -v -H "Host: app1.example.com" http://localhost/ to test a specific domain from your config.
Adding to @Brave's answer, I would like to share my nginx configuration for those who are looking for the exact syntax.
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
    }
}

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:4500;
    }
}
Just create two server blocks with unique server names and port addresses.
Mind the proxy_pass in each block.
Thank you.
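One more detail worth knowing: if a request's Host header matches neither server_name, nginx serves it from the first server block listening on that port. An explicit catch-all makes that behavior deliberate (444 is nginx's special "close the connection without responding" code):

```nginx
# Fallback for requests whose Host matches no server_name.
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
```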