Elastic Beanstalk - Force HTTPS on Docker container with Nginx - amazon-web-services

I have a single-container Docker environment running a React app on Elastic Beanstalk with Nginx. I pointed a subdomain at the ELB URL and want to force an HTTPS redirect when the subdomain is visited (i.e. typing subdomain.domain.com should redirect you to HTTPS).
Right now, if I visit the default ELB URL (something.eu-central-1.elasticbeanstalk.com), it is redirected to HTTPS. But my custom domain (which is parked somewhere else but points to something.eu-centralblabla with a CNAME) is not forced to HTTPS; it still accepts plain HTTP requests.
I've tried several guides and followed the AWS documentation, but I cannot seem to force the redirect to HTTPS on my custom subdomain.
These are my files:
/.ebextensions folder
http-instance.config
files:
  /etc/nginx/conf.d/https.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      # HTTPS Server
      server {
        listen 443;
        server_name localhost;
        ssl on;
        ssl_certificate /etc/pki/tls/certs/server.crt;
        ssl_certificate_key /etc/pki/tls/certs/server.key;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_prefer_server_ciphers on;
        location / {
          proxy_pass http://docker;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
  # SSL CRT and KEY below
https-instance-single.config
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0
/nginx folder
default.conf
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html?/$request_uri;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
error_page 500 504 /500.html;
error_page 502 /502.html;
error_page 503 /503.html;
client_max_body_size 4G;
keepalive_timeout 10;
location ~ ^/(favicon|static)/ {
gzip_static on;
expires max;
add_header Cache-Control public;
# add_header Last-Modified "";
# add_header ETag "";
open_file_cache max=1000 inactive=500s;
open_file_cache_valid 600s;
open_file_cache_errors on;
break;
}
}
What am I doing wrong? Thanks for your help!

You should be able to manage this in your nginx config by adding this within the server context:
set $redirect_to_https 0;
if ($http_x_forwarded_proto != 'https') {
set $redirect_to_https 1;
}
if ($redirect_to_https = 1) {
rewrite ^ https://$host$request_uri? permanent;
}
Or something to that effect.
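If HTTPS terminates at the load balancer rather than on the instance, the instance only ever sees plain HTTP, so the check has to use the X-Forwarded-Proto header the ELB adds. A minimal sketch of how that could sit in the instance's port-80 server block, assuming the ELB forwards traffic to instance port 80 and reusing the docker upstream name from the Elastic Beanstalk-generated config:
# Sketch only; the "docker" upstream is assumed to come from the EB-generated nginx config.
server {
  listen 80;
  server_name _;
  # The ELB sets X-Forwarded-Proto to the scheme the client used.
  # Redirect only when the original request was not HTTPS.
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
  location / {
    proxy_pass http://docker;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
One caveat: the ELB health check hits the instance directly without that header, so it will receive the 301 as well; you may need to point the health check at an HTTPS target or exclude its path from the redirect.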

Route all http traffic to https:
server {
listen 80;
return 301 https://$host$request_uri;
}
Then handle the proxying in the 443 block.
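For the single-container setup in the question, that could look roughly like the sketch below, assuming TLS terminates on the instance and reusing the certificate paths and the docker upstream name from the question's https.conf:
# Plain HTTP: redirect everything to HTTPS
server {
  listen 80;
  server_name _;
  return 301 https://$host$request_uri;
}
# HTTPS: terminate TLS on the instance and proxy to the container
server {
  listen 443 ssl;
  server_name _;
  ssl_certificate /etc/pki/tls/certs/server.crt;
  ssl_certificate_key /etc/pki/tls/certs/server.key;
  location / {
    proxy_pass http://docker;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
Note that the Elastic Beanstalk-generated config already defines a port-80 server that proxies to the container, so in practice the redirect usually goes into that existing block rather than into a second one.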

Related

CSRF token verification error in django admin using SSL, nginx

I get a CSRF token error when trying to log in to the Django admin in production after adding SSL.
If I use the configuration below without SSL, everything works fine:
upstream app_server {
server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}
server {
listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
server_name 107.***.28.***;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/app/logs/nginx-access.log;
error_log /home/app/logs/nginx-error.log;
location /static/ {
alias /home/app/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
But if I change the configuration to listen on SSL, I get the csrf_token error when submitting any form on the page. My nginx configuration using SSL:
upstream app_server {
server unix:/home/app/run/gunicorn.sock fail_timeout=0;
}
server {
#listen 80;
# add here the ip address of your server
# or a domain pointing to that ip (like example.com or www.example.com)
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
keepalive_timeout 5;
client_max_body_size 4G;
access_log /home/app/logs/nginx-access.log;
error_log /home/app/logs/nginx-error.log;
# Compression config
gzip on;
gzip_min_length 1000;
gzip_buffers 4 32k;
gzip_proxied any;
gzip_types text/plain application/javascript application/x-javascript text/javascript text/xml text/css;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
location /static/ {
alias /home/app/static/;
}
# checks for static file, if not found proxy to app
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
}
server {
listen 80;
server_name example.com;
return 301 https://$host$request_uri;
}
server {
listen 80;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
How can I fix the error, or where should I look for the bug? I have tried clearing cookies, using different browsers, and restarting the server and reloading the server configuration, all without result.
In Django ≥ 4 it is now necessary to specify CSRF_TRUSTED_ORIGINS in settings.py
CSRF_TRUSTED_ORIGINS = [
'https://your-domain.com',
'https://www.your-domain.com'
]
See the documentation.
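Separately, when Django sits behind an nginx TLS terminator it only sees plain HTTP on its socket, so it is common to forward the original scheme from nginx and mirror it in settings.py with SECURE_PROXY_SSL_HEADER. A sketch of the nginx side, based on the proxy block from the question (the settings.py line is shown as a comment and is an assumption about the Django side):
location @proxy_to_app {
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  # Tell Django whether the original request used HTTPS
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://app_server;
}
# Django side, in settings.py:
# SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")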

AWS Application Load Balancer - https not working properly

I have a web application developed with React JS; for server-side rendering I am using NodeJS. The overall architecture is as follows:
Deployed React JS app on EC2 - Ubuntu 18.04 with Nginx
Obtained SSL from AWS ACM
Attached ALB to EC2 instance, added 2 listeners - PORT 80, PORT 443 (Forwarding request to target group on PORT 80)
Added A record on Godaddy with EC2 elastic IP, added CNAME record www pointing to ALB
Following is my nginx config file -
server {
server_name mydomain.ai;
return 301 https://www.mydomain.ai$request_uri;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
#server_name www.mydomain.ai;
if ($host !~ ^www\.) {
rewrite ^ https://$host$request_uri permanent;
}
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location /error {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://127.0.0.1:8000;
}
location /aws/ {
try_files $uri $uri/ /aws/aws.html;
}
}
server {
listen *:443 default_server;
server_name mydomain.ai www.mydomain.ai;
if ($host !~ ^www\.) {
rewrite ^ https://$host$request_uri permanent;
}
location / {
proxy_hide_header 'Access-Control-Allow-Origin';
add_header 'Access-Control-Allow-Origin' "*" always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
proxy_pass https://localhost:8000;
proxy_http_version 1.1;
}
}
When I type https://mydomain.ai it throws "ERR_SSL_PROTOCOL_ERROR"; however, the following cases work fine:
mydomain.ai //redirected to https://www.mydomain.ai
http://mydomain.ai //redirected to https://www.mydomain.ai
http://www.mydomain.ai //redirected to https://www.mydomain.ai
Can anyone please help me?
I think you forgot to attach the procured certificate to the ALB.
It can be done from the AWS console by following the steps described here:
https://aws.amazon.com/premiumsupport/knowledge-center/associate-acm-certificate-alb-nlb/

NGINX Reverse Proxy WSS to Amazon API Gateway

I'm trying to set up a reverse WSS proxy with nginx to an Amazon API Gateway WebSocket API, but I have had no luck with the nginx configuration, so I would be glad if you could help me sort this out.
Let me give you some details:
I have an EC2 instance running nginx with an Elastic IP address attached to it.
I also have DNS records pointing connect.example.com to that IP address.
I have set up nginx as a reverse proxy to pass the traffic from connect.example.com to app.example.com on port 443 with SSL (I have generated the relevant certificates).
On app.example.com lies a WebSocket API on Amazon's API Gateway service.
I can see from nginx's access logs that my requests reach the EC2 instance, but I always get error responses (400, 403, 500, 502, etc.) no matter how I change the nginx config file.
I don't understand where the problem lies, even though I have searched around and tried various configurations.
I'm attaching my config files below for reference:
nginx.conf
# Based on https://www.nginx.com/resources/wiki/start/topics/examples/full/#nginx-conf
user daemon daemon;
worker_processes auto;
error_log "/opt/bitnami/nginx/logs/error.log";
pid "/opt/bitnami/nginx/tmp/nginx.pid";
events {
worker_connections 1024;
}
http {
#include mime.types;
#default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log "/opt/bitnami/nginx/logs/access.log";
#add_header X-Frame-Options SAMEORIGIN;
client_body_temp_path "/opt/bitnami/nginx/tmp/client_body" 1 2;
proxy_temp_path "/opt/bitnami/nginx/tmp/proxy" 1 2;
fastcgi_temp_path "/opt/bitnami/nginx/tmp/fastcgi" 1 2;
scgi_temp_path "/opt/bitnami/nginx/tmp/scgi" 1 2;
uwsgi_temp_path "/opt/bitnami/nginx/tmp/uwsgi" 1 2;
#connection_pool_size 112;
#sendfile on;
#tcp_nopush on;
#tcp_nodelay on;
#gzip on;
#gzip_http_version 1.0;
#gzip_comp_level 2;
#gzip_proxied any;
#gzip_types text/plain text/css application/javascript text/xml application/xml+rss;
#keepalive_timeout 65;
#ssl_protocols TLSv1.2 TLSv1.3;
#ssl_ciphers HIGH:!aNULL:!MD5;
client_max_body_size 80M;
#server_tokens on;
#include "/opt/bitnami/nginx/conf/server_blocks/*.conf";
# HTTP Server
#server {
# Port to listen on, can also be set in IP:PORT format
# listen 80;
# include "/opt/bitnami/nginx/conf/bitnami/*.conf";
# include "/opt/bitnami/nginx/conf/ssl/ssl-redirect.conf";
# location /status {
# stub_status on;
# access_log off;
# allow 127.0.0.1;
# deny all;
# }
# }
include "/opt/bitnami/nginx/conf/ssl/ssl.conf";
}
ssl.conf
resolver app.example.com;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl;
#listen [::]:443 ssl;
server_name connect.example.com;
#ssl on;
ssl_certificate /opt/bitnami/nginx/conf/bitnami/certs/server.crt;
ssl_certificate_key /opt/bitnami/nginx/conf/bitnami/certs/server.key;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
root /usr/share/nginx/html;
underscores_in_headers on;
location / {
proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass https://ws-backend$uri$is_args$args;
proxy_read_timeout 9000;
proxy_pass_request_headers on;
#Websocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;
proxy_set_header Sec-WebSocket-Accept $http_sec_websocket_accept;
}
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
upstream ws-backend {
server app.example.com:443;
}
When I connect directly to app.example.com I have no problem, and the response is the following:
expected response
But when I connect through connect.example.com I get the following response:
actual response

How to redirect any API request from HTTP to HTTPS in Django?

After migrating my Django web app from HTTP to HTTPS, when I type, for example,
r = requests.get('http://xxxx.com')
it gives me this error:
requests.exceptions.SSLError: HTTPSConnectionPool(host=my_host_name,port:443) Max retries exceeded with url:http://xxxx.com (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)'),))
But I set up the nginx config for the redirection; for example, when I enter any HTTP address in my browser it redirects me to the correct HTTPS address.
I would like the same thing to happen for API requests.
I don't want to change the request addresses in my backend code; I just want to redirect the HTTP requests to HTTPS, if that is possible.
My nginx config:
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
proxy_cache_path /path/cache keys_zone=cache:10m levels=1:2 inactive=600s
max_size=100m;
default_type application/octet-stream;
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
access_log /path/access.log;
error_log /Path/error.log error;
gzip on;
gzip_disable "msie6";
gzip_types text/xml application/xml application/xml+rss text/javascript;
upstream app_servers {
server 127.0.0.1:8080;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl on;
ssl_certificate /PATH/certificate.crt;
ssl_certificate_key /PATH/certificate.key;
proxy_cache cache;
proxy_cache_valid 200 1s;
#ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
server_name my_host_name;
access_log /path/nginx-access.log compression;
location /static/ {
alias /path/static/;
}
location /nginx_status {
stub_status on;
allow all;
deny all;
}
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_pass_request_headers on;
proxy_read_timeout 1200;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
server {
listen 9999 ;
server_name my_host_name ;
return 307 https://my_domain.com$request_uri;
}
}
The error kind of puzzles me, but in your Nginx config file I see that you're not listening on the default HTTP port. You should add a server block that listens on the HTTP port (80) and redirects to HTTPS (443) from there.
Add this block inside your http context:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name my_host_name;
return 301 https://$host$request_uri;
}

Redirect HTTP to HTTPS with AWS LB configuration

Here is my nginx configuration:
server {
listen 80;
location / {
if ($http_x_forwarded_proto != 'https') {
rewrite ^ https://test.com$request_uri?;
}
}
}
server {
listen 443;
ssl on;
ssl_certificate /etc/ssl/chain.crt;
ssl_certificate_key /etc/ssl/key.crt;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_protocols TLSv1.2;
server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
client_max_body_size 300M;
location / {
root /var/www/html;
index index.html index.htm;
}
}
I configured both the HTTP and HTTPS listeners to forward to instance port 80, and attached the certificate.
When I try to hit the website, the redirect works fine but it takes me to the nginx landing page; it does not seem to read the config for port 443.