AWS load balancer for frontend vs backend servers - amazon-web-services

I am trying to load balance frontend (public) and backend (private) servers in AWS. I got nginx working when the backend was a single server addressed by IP, but the load balancer DNS name doesn't seem to work. Below is my nginx.conf for the frontend server. In the listener section of the load balancer, the load balancer port is 443 and the instance port is 9000. Any suggestions greatly appreciated.
WORKING...
server {
    listen 80;
    rewrite ^(.*) https://example.com$request_uri;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/chain.crt;
    ssl_certificate_key /etc/ssl/key.crt;
    listen localhost:443;
    server_tokens off;
    client_max_body_size 300M;
    location / {
        root /var/www/html;
        index index.html index.htm;
    }
    location /api/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://<BackendIP>:9000/api/;
        proxy_set_header Host $http_host;
    }
}
NOT WORKING...
server {
    listen 80;
    rewrite ^(.*) https://example.com$request_uri;
}
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/chain.crt;
    ssl_certificate_key /etc/ssl/key.crt;
    listen localhost:443;
    server_tokens off;
    client_max_body_size 300M;
    location / {
        root /var/www/html;
        index index.html index.htm;
    }
    location /api/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://<LOADBALANCER-DNS>:9000/api/;
        proxy_set_header Host $http_host;
    }
}
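One detail worth checking: the listener is described as load balancer port 443 and instance port 9000, which means the load balancer itself accepts connections on 443 and forwards them to instance port 9000. Proxying to `https://<LOADBALANCER-DNS>:9000` therefore targets a port the load balancer is not listening on. A minimal sketch of the /api/ block under that assumption (the DNS name below is hypothetical; for a private backend it would need to be an *internal* load balancer reachable from the frontend instance, with security groups allowing 443):

```nginx
location /api/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    # No explicit port: https:// implies 443, the load balancer's
    # listener port. The LB forwards to instance port 9000 itself.
    proxy_pass https://internal-backend-123456.us-east-1.elb.amazonaws.com/api/;
}
```

If the backend listener is HTTPS, the certificate installed on it also has to be one nginx will accept when connecting to that DNS name.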

Related

This site can’t be reached domain.de refused to connect after changing http to https in Nginx

I have a project with a Flutter frontend and a Django backend. It was working fine until I changed HTTP to HTTPS; now I am getting the error "This site can’t be reached: domain.de refused to connect".
The Nginx file for the Frontend:
server {
    server_name visoon.de;
    root /home/visoon_frontend/build/web;
    index index.html;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/visoon.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/visoon.de/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    if ($host = visoon.de) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    server_name visoon.de;
    return 404;
}
And Nginx file for the Backend:
upstream visoon_app_server {
    server unix:/home/visoon_backend/run/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name visoon.de;
    client_max_body_size 4G;
    proxy_read_timeout 1200s;
    access_log /home/visoon_backend/logs/nginx-access.log;
    error_log /home/visoon_backend/logs/nginx-error.log;
    location /static/ {
        alias /home/visoon_backend/visoon_backend/static/;
        expires -1;
    }
    location /media/ {
        alias /home/visoon_backend/visoon_backend/static/media/;
        expires -1;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # proxy_buffering off;
        if (!-f $request_filename) {
            proxy_pass http://visoon_app_server;
            break;
        }
    }
    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/visoon_backend/visoon_backend/static/;
    }
}
Does anyone know why I am getting this error?
After searching for a couple of hours, I discovered that port 443 wasn't accessible on the server.
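Besides checking the cloud firewall / security group, a quick way to confirm whether a port is reachable is a plain TCP connect test. A small standard-library sketch (the hostname in the example comment is just the one from this question):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("domain.de", 443) should return True once nginx
# is listening on 443 and the firewall allows the connection.
```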

Nginx Reverse Proxy with Gunicorn Treats Site Names Differently

We have a Django project that is served in production using Nginx and Gunicorn reverse-proxy setup. Everything seems to work except for one small detail. Somehow, the browser "sees" the following addresses as different sessions.
Suppose I log into the site using the example.com address.
Then, if I visit https://www.example.com, the browser does not see that the user has logged in.
When I visit www.example.com, I get a 404 error in the browser from Nginx.
My suspicion is that this has something to do with the way Nginx or Gunicorn is set up. Any help on how to resolve this discrepancy is appreciated.
Nginx config:
server {
    root /home/example/mysite;
    # Add index.php to the list if you are using PHP
    index index.html index.htm;
    server_name example.com www.example.com;
    client_max_body_size 512M;
    location /static/ {
        alias /home/example/mysite/static/;
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
    location /media {
        alias /home/example/mysite/media/;
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
    location / {
        # try_files $uri $uri/ =404;
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_connect_timeout 6000;
        proxy_send_timeout 6000;
        proxy_read_timeout 6000;
        send_timeout 6000;
    }
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /home/ubuntu/ssl/example_com_chain.crt;
    ssl_certificate_key /home/ubuntu/ssl/server.key;
    #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
To redirect
http://www.example.com
http://example.com
https://www.example.com
to
https://example.com
you need to make changes in your nginx vhost config file like so:
# Redirect 'http www' and 'http non-www' traffic to 'https non-www'
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}
# Redirect 'https www' traffic to 'https non-www'
server {
    listen 443 ssl;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
# https://example.com
server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name example.com;
    root /home/example/mysite;
    # Add index.php to the list if you are using PHP
    index index.html index.htm;
    client_max_body_size 512M;
    location /static/ {
        alias /home/example/mysite/static/;
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
    location /media {
        alias /home/example/mysite/media/;
        expires 30d;
        add_header Vary Accept-Encoding;
        access_log off;
    }
    location / {
        # try_files $uri $uri/ =404;
        proxy_pass http://127.0.0.1:8080; # HERE review this line: it should be the server IP, not localhost
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_connect_timeout 6000;
        proxy_send_timeout 6000;
        proxy_read_timeout 6000;
        send_timeout 6000;
    }
    ssl_certificate /home/ubuntu/ssl/example_com_chain.crt;
    ssl_certificate_key /home/ubuntu/ssl/server.key;
    # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
This thread, which my answer is based on, may help you: https://www.digitalocean.com/community/questions/redirecting-https-www-domain-to-non-www-domain-with-nginx
and in your settings.py:
ALLOWED_HOSTS = [
    'example.com',  # https non-www
]
# SESSION_COOKIE_SECURE = True
# CSRF_COOKIE_SECURE = True
for more details see
https://docs.djangoproject.com/en/3.1/topics/security/#ssl-https
https://security.stackexchange.com/questions/8964/trying-to-make-a-django-based-site-use-https-only-not-sure-if-its-secure?newreg=bf8583d7f6d34236b7c6cbfb0fe315b4
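One wrinkle when enabling those secure-cookie settings: Django sits behind nginx here, so it only ever sees plain-HTTP requests from the proxy and needs to be told when the original request was HTTPS. A sketch of the relevant settings.py lines; note the nginx config above forwards the scheme as X-Forwarded-Protocol, while the conventional header Django's setting expects is X-Forwarded-Proto, so the nginx header name and this tuple have to agree:

```python
# settings.py sketch -- only safe when nginx reliably sets/overwrites
# the forwarded-scheme header, otherwise clients could spoof it.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
```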

Server running with daphne starts responding with 504 to every HTTP request after an indeterminate amount of time

I'm using django-channels 2 + daphne in production.
After an indeterminate amount of time I got this error twice (after 2 and after 6 hours respectively), after which every HTTP request received a 504 response. I have no idea how I should debug the problem. Using nginx, django-channels 2, daphne.
Application instance <Task pending coro=<AsgiHandler.__call__() running at /usr/local/lib/python3.7/site-packages/channels/http.py:202> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.7/asyncio/futures.py:348, <TaskWakeupMethWrapper object at 0x7ff116ef9708>()]>> for connection <WebRequest at 0x7ff116a86d30 method=GET uri=/api/v1/feed/?page_size=10&distance=-1000&not_reviewed=1 clientproto=HTTP/1.1> took too long to shut down and was killed
Here is my nginx config:
server {
    server_name www.example.com example.com;
    return 301 https://example.com$request_uri;
}
server {
    server_name www.lvh.me lvh.me;
    return 301 https://lvh.me$request_uri;
}
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;
    server_name www.lvh.me;
    return 301 https://lvh.me$request_uri;
}
server {
    server_name example.com lvh.me;
    charset UTF-8;
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/private/server.key;
    access_log /var/log/nginx/mini.access.log;
    error_log /var/log/nginx/mini.error.log;
    location /static/ {
        autoindex on;
        root /data/django;
    }
    location /media/ {
        autoindex on;
        root /data/django;
    }
    location / {
        proxy_pass http://django:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Starting daphne with:
daphne -b 0.0.0.0 -p 8000 project.asgi:application
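This doesn't address the root cause, but a daphne process started by hand like this stays down (or hung) with no supervision. A minimal systemd unit sketch that at least restarts it on failure; the paths, user, and project module below are assumptions to adapt:

```ini
# /etc/systemd/system/daphne.service -- sketch, adjust paths/user/module
[Unit]
Description=Daphne ASGI server
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/project
ExecStart=/srv/project/venv/bin/daphne -b 0.0.0.0 -p 8000 project.asgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target
```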

Serving multiple Django applications with Nginx and Gunicorn under same domain

Now I have one Django project on one domain. I want to serve three Django projects under one domain, separated by path. For example: www.domain.com/firstone/, www.domain.com/secondone/, etc. How do I configure Nginx to serve multiple Django projects under one domain? How do I configure static-file serving in this case?
My current Nginx config is:
server {
    listen 80;
    listen [::]:80;
    server_name domain.com www.domain.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name domain.com www.domain.com;
    ssl_certificate /etc/nginx/ssl/Certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/Certificate.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    root /home/admin/web/project;
    location /static {
        alias /home/admin/web/project/static;
    }
    location /media {
        alias /home/admin/web/project/media;
    }
    location /assets {
        alias /home/admin/web/project/assets;
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
        proxy_pass http://127.0.0.1:8000/;
        client_max_body_size 100M;
    }
    # Proxies
    # location /first {
    #     proxy_pass http://127.0.0.1:8001/;
    # }
    #
    # location /second {
    #     proxy_pass http://127.0.0.1:8002/;
    # }
    error_page 500 502 503 504 /media/50x.html;
}
You have to run your projects on different ports, like firstone on 8000 and secondone on 8001.
Then in the nginx conf, in place of location /, write location /firstone/ and proxy-pass it to port 8000, and write the same location block for the second one as location /secondone/ and proxy-pass it to port 8001.
For static files and media, you have to make them available as /firstone/static, and the same for secondone.
Another way is to specify the same MEDIA_ROOT and STATIC_ROOT for both projects.
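On the Django side, each project also needs to know it lives under a path prefix; otherwise its generated redirects and static URLs point at the domain root. A sketch for the project served at /firstone/ (the setting names are standard Django settings; the values are assumptions matching the example paths above):

```python
# settings.py sketch for the project mounted at /firstone/
FORCE_SCRIPT_NAME = "/firstone"    # prefix prepended to URLs Django generates
STATIC_URL = "/firstone/static/"   # must match the nginx location/alias
MEDIA_URL = "/firstone/media/"
```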
As @prof.phython correctly states, you'll need to run a separate gunicorn process for each of the apps. This results in each app running on a separate port.
Next create a separate upstream block, under http for each of these app servers:
upstream app1 {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response
    # for a UNIX domain socket setup:
    #server unix:/tmp/gunicorn.sock fail_timeout=0;
    # for a TCP configuration:
    server 127.0.0.1:9000 fail_timeout=0;
}
Obviously change the title, and port number for each upstream block accordingly.
Then, under your http->server block define the following for each:
location @app1_proxy {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    # we don't want nginx trying to do something clever with
    # redirects; we set the Host: header above already.
    proxy_redirect off;
    proxy_pass http://app1;
}
Make sure the last line there points at what you called the upstream block (app1), and @app1_proxy should be specific to that app also.
Finally within the http->server block, use the following code to map a URL to the app server:
location /any/subpath {
    # checks for a static file; if not found, proxy to the app
    try_files $uri @app1_proxy;
}
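Putting the pieces above together for two apps, the relevant parts of the config might look like this sketch (the ports, paths, and upstream names are assumptions for illustration):

```nginx
upstream app1 { server 127.0.0.1:9000 fail_timeout=0; }
upstream app2 { server 127.0.0.1:9001 fail_timeout=0; }

server {
    listen 443 ssl;
    server_name www.domain.com;

    # map each URL prefix to its app server
    location /firstone/  { try_files $uri @app1_proxy; }
    location /secondone/ { try_files $uri @app2_proxy; }

    location @app1_proxy {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app1;
    }
    location @app2_proxy {
        # same headers as @app1_proxy, different upstream
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app2;
    }
}
```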
What @prof.phython said should be correct. I'm not an expert on this, but I saw a similar situation with our server as well. Hope the shared nginx.conf file helps!
server {
    listen 80;
    listen [::]:80;
    server_name alicebot.tech;
    return 301 https://web.alicebot.tech$request_uri;
}
server {
    listen 80;
    listen [::]:80;
    server_name web.alicebot.tech;
    return 301 https://web.alicebot.tech$request_uri;
}
server {
    listen 443 ssl;
    server_name alicebot.tech;
    ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
    ssl_certificate_key /etc/ssl/alicebot.key;
    location /static/ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
        proxy_ignore_headers "Set-Cookie";
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/html/alice/alice.sock;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
server {
    listen 443 ssl;
    server_name web.alicebot.tech;
    ssl_certificate /etc/letsencrypt/live/web.alicebot.tech/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/web.alicebot.tech/privkey.pem; # managed by Certbot
    location /static/ {
        autoindex on;
        alias /var/www/html/static/;
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
        proxy_ignore_headers "Set-Cookie";
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }
}
server {
    listen 8000 ssl;
    listen [::]:8000 ssl;
    server_name alicebot.tech;
    ssl_certificate /etc/ssl/alicebot_tech_cert_chain.crt;
    ssl_certificate_key /etc/ssl/alicebot.key;
    location /static/ {
        autoindex on;
        alias /var/www/alice_v2/static/;
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
        proxy_ignore_headers "Set-Cookie";
    }
    location / {
        include proxy_params;
        proxy_pass http://unix:/var/www/alice_v2/alice/alice.sock;
    }
}
As you can see we had different domain names here, which you wouldn't be needing. So you'll need to change the server names inside the server {...}

Nginx subdomain too many redirects

I currently have a working Django + Gunicorn + Nginx setup for https://www.example.com and http://sub.example.com. Note the main domain has ssl whereas the subdomain does not.
This is working correctly with the following two nginx configs. First is www.example.com:
upstream example_app_server {
    server unix:/path/to/example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name www.example.com;
    if ($host = 'example.com') {
        return 301 https://www.example.com$request_uri;
    }
    ssl_certificate /etc/nginx/example/cert_chain.crt;
    ssl_certificate_key /etc/nginx/example/example.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'ciphers removed to save space in post';
    ssl_prefer_server_ciphers on;
    client_max_body_size 4G;
    access_log /var/log/nginx/www.example.com.access.log;
    error_log /var/log/nginx/www.example.com.error.log info;
    location /static {
        autoindex on;
        alias /path/to/example/static;
    }
    location /media {
        autoindex on;
        alias /path/to/example/media;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://example_app_server;
            break;
        }
    }
}
Next is sub.example.com:
upstream sub_example_app_server {
    server unix:/path/to/sub_example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name sub.example.com;
    client_max_body_size 4G;
    access_log /var/log/nginx/sub.example.com.access.log;
    error_log /var/log/nginx/sub.example.com.error.log info;
    location /static {
        autoindex on;
        alias /path/to/sub_example/static;
    }
    location /media {
        autoindex on;
        alias /path/to/sub_example/media;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://sub_example_app_server;
            break;
        }
    }
}
As mentioned, this is all working. What I am trying to do now is to use ssl on the subdomain as well. I have a second ssl certificate for this purpose which has been activated with the domain register for this subdomain.
I have updated the original nginx config from above for sub.example.com to have exactly the same format as example.com, but pointing to the relevant ssl cert/key etc:
upstream sub_example_app_server {
    server unix:/path/to/sub_example/gunicorn/gunicorn.sock fail_timeout=0;
}
server {
    listen 80;
    server_name sub.example.com;
    return 301 https://sub.example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name sub.example.com;
    if ($host = 'sub.example.com') {
        return 301 https://sub.example.com$request_uri;
    }
    ssl_certificate /etc/nginx/sub_example/cert_chain.crt;
    ssl_certificate_key /etc/nginx/sub_example/example.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'ciphers removed to save space in post';
    ssl_prefer_server_ciphers on;
    client_max_body_size 4G;
    access_log /var/log/nginx/sub.example.com.access.log;
    error_log /var/log/nginx/sub.example.com.error.log info;
    location /static {
        autoindex on;
        alias /path/to/sub_example/static;
    }
    location /media {
        autoindex on;
        alias /path/to/sub_example/media;
    }
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        if (!-f $request_filename) {
            proxy_pass http://sub_example_app_server;
            break;
        }
    }
}
I haven't changed anything with my domain register / dns because everything was already working correctly before adding the ssl for the subdomain. Not sure if there is something I need to change?
When browsing to http://sub.example.com I am redirected to https://sub.example.com, so that part appears to be working. However the site does not load and the browser error is: This page isn't working. sub.example.com redirected you too many times. ERR_TOO_MANY_REDIRECTS
https://www.example.com is still working.
I don't have any errors in my nginx or gunicorn logs. I can only guess I have configured something in the sub.example.com nginx config incorrectly.
The problem is this section of the SSL server configuration:
if ($host = 'sub.example.com') {
    return 301 https://sub.example.com$request_uri;
}
Because that server block only ever handles requests whose Host is sub.example.com, the condition always matches, so every HTTPS request is redirected back to itself. Removing the if block should eliminate the too-many-redirects error.
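For reference, the top of the sub.example.com HTTPS server block without the self-redirect would look like this (the rest of the block stays as posted):

```nginx
server {
    listen 443 ssl;
    server_name sub.example.com;
    # no host-based redirect here: this block only matches
    # sub.example.com, so redirecting to itself would loop forever
    ssl_certificate /etc/nginx/sub_example/cert_chain.crt;
    ssl_certificate_key /etc/nginx/sub_example/example.key;
    # ... remaining ssl_*, log, location blocks unchanged ...
}
```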