NGINX on AWS EC2 to forward HTTPS to http://localhost - amazon-web-services

I have some Docker containers deployed on AWS EC2 that listen on HTTP.
My idea is to use nginx as a reverse proxy, to pass traffic from HTTPS to http://localhost.
Each container listens on a specific HTTP port. The EC2 instance will accept traffic only on ports 80 and 443, and I will use the subdomain to choose the right port.
So I should have:
https://backend.my_ec2instance.com --> http://localhost:4000
https://frontend.my_ec2instance.com --> http://localhost:5000
I've got my free TLS certificate from Let's Encrypt (it's just one file containing the public and private keys), and put it in
/etc/nginx/ssl/letsencrypt.pem
Then I have configured nginx in this way
sudo nano /etc/nginx/sites-enabled/custom.conf
and wrote
server {
    listen 443 ssl;
    server_name backend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;

    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:4000;
    }
}
server {
    listen 443 ssl;
    server_name frontend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;

    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:5000;
    }
}
then
sudo ln -s /etc/nginx/sites-available/custom.conf /etc/nginx/sites-enabled/
Anyway, if I open my browser on https://backend.my_ec2instance it's not reachable.
http://localhost:80 instead correctly shows the nginx page.

HTTPS's default port is 443. HTTP's default port is 80. So https://localhost:80 makes no sense: you are trying to use the HTTPS protocol on an HTTP port.
In any case, I don't understand why you are entering localhost in your web browser at all. You should be trying to open https://frontend.my_ec2instance.com in your web browser. The localhost address in your web browser would refer to your local laptop, not the EC2 server.
Per the discussion in the comments you also need to include your custom Nginx config in the base configuration file.
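To illustrate that last point: on Debian/Ubuntu-style installs, the stock /etc/nginx/nginx.conf usually pulls in per-site files with an include directive inside the http block (a sketch; paths assume the Debian layout, adjust for yours):

```nginx
http {
    # ... existing settings ...

    # Without this line, files like /etc/nginx/sites-enabled/custom.conf
    # are never read by nginx.
    include /etc/nginx/sites-enabled/*;
}
```

Also note that relative paths like `ssl_certificate letsencrypt.pem;` are resolved against the nginx prefix directory, not the file's location, so an absolute path such as /etc/nginx/ssl/letsencrypt.pem is safer.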

Related

How to set up a reverse proxy server with alb on AWS?

I created a service publishing structure as below.
I don't know why I can't access the domain successfully. Where might the issue be?
The public LB's listener rules are:
You can add the HTTP-to-HTTPS redirect in the Load Balancer rule itself.
To proxy your HTTPS requests to the application server:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://<internal_loadbalancer_dns_name>;
    }
}

Enable more ports to public traffic on AWS? Nginx/Docker multiple applications setup

I currently have an EC2 instance running on AWS, with two applications (a frontend app and an API, standard stuff).
By default I have ports 80 and 443 enabled for public traffic, by my company's devops team.
I'm trying to use Nginx to redirect requests incoming on port 80 to the different app/API ports set up on Docker containers.
Example: my app is currently running on port 8080; I expect that when a user hits my domain, nginx redirects them to the app running on 8080.
nginx.conf
events {}

http {
    server {
        listen 80;
        server_name company.com;

        location / {
            proxy_pass http://0.0.0.0:8080;
        }
    }

    server {
        listen 443 ssl;
        server_name company.locallabs.com;

        ssl_certificate /etc/ssl/wildcard.company.com.crt;
        ssl_certificate_key /etc/ssl/wildcard.company.com.key;

        location /api {
            rewrite /api/(.*) /$1 break;
            proxy_pass http://0.0.0.0:8081;
        }
    }
}
Question: port 8080 is accessible only from within the server (when I SSH in and run a curl inside the server, it works fine).
Should 8080 be opened to public traffic, even though I set up the redirect in Nginx?
The nginx logs give me:
2021/04/14 04:01:07 [error] 6#6: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 3.90.2.223, server: company.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "company.com"
No, you don't.
I recommend that you consider deploying the docker container jc21/nginx-proxy-manager. It works beautifully and has the added benefit that it manages Let's Encrypt SSL certs for each service it proxies.
Have the nginx container bind host ports 80 and 443, and use the UI to define the domains and ports for each docker container your devs are working on.
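The "connection refused" in the log is also a hint: if nginx itself runs in a container, 0.0.0.0/127.0.0.1 inside that container is the container's own loopback, not the host. A common pattern, sketched here assuming both containers sit on a shared user-defined Docker network and the app's service is named `app` (a hypothetical name), is to proxy to the container name instead:

```nginx
server {
    listen 80;
    server_name company.com;

    location / {
        # "app" resolves via Docker's embedded DNS on the shared network;
        # 8080 is the port the app listens on inside its container.
        proxy_pass http://app:8080;
    }
}
```

With this setup, 8080 never needs to be exposed to the public internet at all.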

Redirecting http to https using nginx behind AWS load balancer

I've searched high and low and I feel this is the final hurdle for my project. I'm trying to redirect all http traffic to https.
Currently, when typing domain.info, it redirects to https://domain.info:80 and returns ERR_SSL_PROTOCOL_ERROR.
But replacing the 80 with 443 gives me my webpage just fine.
My server is behind a load balancer; I have my certificates in the LB and none on my server. I'm using NGINX as my webserver. Basically this is my setup:
user > https > load balancer > http > server
Thanks so much in advance!!
Firstly, catch all incoming HTTP:
server {
    listen 80 default_server;
    server_name _;
    return ....;
}
then redirect to HTTPS permanently:
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
or, serving HTTPS directly on the instance:
server {
    listen 443 ssl default_server;
    server_name foo.com;
}
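One caveat when TLS terminates at the load balancer: the LB forwards every request, HTTP and HTTPS alike, to the backend as plain HTTP on port 80, so a blanket redirect on port 80 loops forever. A common workaround (a sketch, assuming the AWS load balancer sets the standard X-Forwarded-Proto header) is to redirect only when the original request was plain HTTP:

```nginx
server {
    listen 80 default_server;
    server_name _;

    # Redirect only if the client originally spoke plain HTTP to the LB
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    # ... normal proxying/serving continues here ...
}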
Directly in the load balancer you can add a redirection rule; did you try playing with that?
In the load balancer listener, update the default rule for HTTP:80 and configure it like this :
HTTP 80: default action
IF
Requests otherwise not routed
THEN
Redirect to https://#{host}:443/#{path}?#{query}
Status code:HTTP_301

How to solve 502 Bad Gateway errors with Elastic Load Balancer and EC2/Nginx for HTTPS requests?

I'm running into '502 Bad Gateway' issues for HTTPS requests when using AWS Elastic Load Balancer (Application type) in front of EC2 instances running Nginx. Nginx is acting as a reverse proxy on each instance for a waitress server serving up a python app (Pyramid framework). I'm trying to use TLS termination at the ELB so that the EC2 instances are only dealing with HTTP. Here's the rough setup:
Client HTTPS request > ELB (listening on 443, forwarding to 80 on backend) > Nginx listening on port 80 (on Ec2 instance) > forwarded to waitress/Pyramid (on same ec2 instance)
When I make requests on HTTPS I get the 502 error. However, when I make regular HTTP requests I get a response as expected (same setup as above except ELB is listening on port 80).
Some additional info:
ELB health checks are working.
All VPC/Security groups are configured correctly (I believe).
I'm using an AWS certificate on the ELB using the standard setup/walkthrough on AWS.
I SSH'd into the EC2 instance, and in the Nginx access log it looks like the HTTPS requests are still encrypted? Or some encoding issue?
And here's nginx.conf on the EC2 instance:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /etc/nginx/access.log;
    sendfile on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:6543;
    }

    server {
        listen 80;
        server_name [MY-EC2-SERVER-NAME];

        # Proxy connections to the application servers
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Ok I figured it out (I'm a dummy). I had two listeners set up on the ELB, one for 80 and one for 443, which was correct. The listener for 80 was set up correctly to forward to backend (Nginx) port 80 over HTTP as expected. The 443 listener was INCORRECTLY configured to send to port 80 on the backend over HTTPS. I updated the 443 listener to use the same rule as the 80 listener (i.e. listen on 443 but send to backend 80 over HTTP) and it worked. Disregard y'all.

CNAME to a secure url

I have an AWS Elastic Load Balancer with the following secure url:
https://example.us-west-2.elasticbeanstalk.com/
I also have the following 1&1 registered domain name:
example.com
I then in 1&1 config add a subdomain of www resulting in www.example.com.
I would like to add a CNAME alias to route traffic from the domain name to the ELB.
www.example.com -> https://example.us-west-2.elasticbeanstalk.com/
So I try add the CNAME:
As you can see, it is not accepting the url, as it is an invalid host name.
I need the alias to point to the secure (https) url. However, I think this may be the reason for the error.
Question
How do I set up a CNAME to point to a secure url?
Thanks
UPDATE
My Elastic Load Balanacer does have a secure listener.
You have to specify HTTPS in your NGINX config using a redirect, or in Apache using mod_rewrite. If you want a slightly higher-level HTTP-to-HTTPS rollover, you can do this (most of the time) in your application by specifying where your certs are located and listening on port 80 with a redirect/relocate to port 443.
At the DNS level you only specify the location. In your application, or somewhere on your server, you specify the HTTP/HTTPS protocol. DNS, being a protocol itself, cannot specify other protocols in its response. HTTPS is a processor-intensive encryption operation done on your server.
I would highly recommend using AWS Certificate Manager to assign a certificate to your domain. If you'd rather have it in your beanstalk application, check out letsencrypt. It's a wonderful CLI tool for this stuff.
Here is a helpful resource
Ubuntu + NGINX + letsencrypt
Configuring HTTP to HTTPS on Ubuntu. Yes, only one operating system specific example, but letsencrypt should work anywhere with anything, anytime.
sudo apt-get update
sudo apt-get install letsencrypt
sudo apt-get install nginx
sudo systemctl stop nginx #if it starts by default...
sudo letsencrypt certonly --standalone -n -m richard@thewhozoo.com -d thewhozoo.com -d cname.thewhozoo.com --agree-tos
sudo ls -l /etc/letsencrypt/live/thewhozoo.com/ #you should see your stuff in this folder
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048 #make yo'self a diffie
sudo vim /etc/nginx/sites-available/default
In your default file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name thewhozoo.com www.thewhozoo.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name thewhozoo.com www.thewhozoo.com;

    ssl_certificate /etc/letsencrypt/live/thewhozoo.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/thewhozoo.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;
}
Now that your NGINX file has your certs/keys/pems/whatever listed, you have to double check your firewall.
For Ubuntu and ufw, you can allow access via:
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
sudo ufw allow 'OpenSSH'
sudo ufw enable
sudo ufw status
And you should see Nginx HTTPS enabled.
Whatever your flavor of HTTPS (SSL, TLSvXX, etc.), you'll still want port 22 open at the firewall level so you can administer the server over SSH, hence the 'OpenSSH' rule.
BE SURE TO RUN allow 'OpenSSH' BEFORE ufw enable. If you do not... your SSH session will be terminated and... good luck.
Now your firewall is good to go, restart nginx and you should be set:
sudo systemctl start nginx
Helpful tips for the future:
Let's Encrypt certificates are valid for 90 days (roughly 3 months) by default; that's Let's Encrypt policy, not an NGINX setting. The add-on for renewing your certs is:
Add this to your crontab:
sudo systemctl stop nginx
sudo letsencrypt renew
sudo systemctl start nginx
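A crontab entry needs a schedule field in front of the command; for example (a sketch, assuming root's crontab, with an illustrative monthly schedule of 3 a.m. on the 1st):

```
# min hour day month weekday  command
0 3 1 * * systemctl stop nginx && letsencrypt renew && systemctl start nginx
```

Note there is no sudo here because root's crontab already runs as root; chaining with && stops the sequence if any step fails.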
HELPFUL NOTES:
You must have the domain name pointed at the server of choice BEFORE running letsencrypt. It validates that the domain actually resolves to your server, to make sure you are the owner/admin of the domain.
You do not need the giant list of encryption types, but I would highly recommend keeping most of them. Elliptic Curve Diffie-Hellman is a must for the type of key used above, but you can probably cut it down to ECDHE, AES, GCM, and RSA or SHA depending on how many cipher suites you want to support. If you aren't going to support SSLvX and only do TLSvX, you only need to support (and restrict) the following: ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
AWS Certificate Manager (ACM) + Elastic Load Balancer
Go to your Load Balancer in the EC2 Resource Console
Select your listener
Should probably say: HTTPS: 443 in bold letters
Check it and click Actions => Edit
Double check that your Protocol is HTTPS on Port 443 and your target group is good
At the bottom of the pop-up, select "Choose an existing certificate from AWS Certificate Manager (ACM)"
Then select your ACM Certificate
Save it
SSH into your instance/application on EBS/whatever
Write an NGINX policy for redirecting HTTP traffic to HTTPS:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name thewhozoo.com www.thewhozoo.com;
    return 301 https://$host$request_uri;
}
Restart NGINX
For Elastic Beanstalk environment check THIS INFO.
Wait about 5 minutes for everything to sink in and you should be good to go!
Check this for help if needed
Drop the 'https://' from the CNAME and just use:
example.us-west-2.elasticbeanstalk.com
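In zone-file terms (using the hostnames from the question; the trailing dots mark fully-qualified names), the record would look something like:

```
www.example.com.    IN    CNAME    example.us-west-2.elasticbeanstalk.com.
```

The record carries no protocol at all; whether the browser then speaks HTTP or HTTPS to that host is decided by the URL the user types and by the listeners on the load balancer.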