I have an AWS Elastic Load Balancer with the following secure url:
https://example.us-west-2.elasticbeanstalk.com/
I also have the following 1&1 registered domain name:
example.com
In the 1&1 configuration I then add a subdomain of www, resulting in www.example.com.
I would like to add a CNAME alias to route traffic from the domain name to the ELB.
www.example.com -> https://example.us-west-2.elasticbeanstalk.com/
So I try to add the CNAME:
The 1&1 control panel does not accept the URL, rejecting it as an "Invalid host name".
I need the alias to point to the secure (https) URL. However, I think this may be the reason for the error.
Question
How do I set up a CNAME to point to a secure url?
Thanks
UPDATE
My Elastic Load Balancer does have a secure listener.
You specify HTTPS at the web-server level: in NGINX with a redirect, or in Apache with mod_rewrite. If you want the HTTP-to-HTTPS switch handled at a slightly higher level, you can (most of the time) do it in your application by specifying where your certs are located and listening on port 80 with a redirect/relocate to port 443.
On the DNS level you only specify the location. The protocol (HTTP/HTTPS) is specified in your application, or on your server somewhere. DNS only maps names to locations; it cannot specify a protocol in its response. HTTPS itself is a processor-intensive encryption operation done on your server.
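In zone-file terms, the record being created looks like the following (the TTL is illustrative); notice there is simply no field in which a scheme like https:// could appear:

```
www.example.com.  3600  IN  CNAME  example.us-west-2.elasticbeanstalk.com.
```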
I would highly recommend using AWS Certificate Manager to assign a certificate to your domain. If you'd rather have it in your beanstalk application, check out letsencrypt. It's a wonderful CLI tool for this stuff.
Here is a helpful resource
Ubuntu + NGINX + letsencrypt
Configuring HTTP to HTTPS on Ubuntu. Yes, only one operating system specific example, but letsencrypt should work anywhere with anything, anytime.
sudo apt-get update
sudo apt-get install letsencrypt
sudo apt-get install nginx
sudo systemctl stop nginx #if it starts by default...
sudo letsencrypt certonly --standalone -n -m richard@thewhozoo.com -d thewhozoo.com -d cname.thewhozoo.com --agree-tos
sudo ls -l /etc/letsencrypt/live/thewhozoo.com/ #you should see your stuff in this folder
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048 #generate Diffie-Hellman parameters
sudo vim /etc/nginx/sites-available/default
In your default file:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name thewhozoo.com www.thewhozoo.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name thewhozoo.com www.thewhozoo.com;
ssl_certificate /etc/letsencrypt/live/thewhozoo.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/thewhozoo.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
}
Now that your NGINX file has your certs/keys/pems/whatever listed, you have to double check your firewall.
For Ubuntu and ufw, you can allow access via:
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
sudo ufw allow 'OpenSSH'
sudo ufw enable
sudo ufw status
And you should see Nginx HTTPS enabled.
Whatever your flavor of HTTPS (SSL, TLSvXX, etc.), it runs over port 443, which 'Nginx Full' covers. Port 22 is for SSH itself, hence the 'OpenSSH' rule.
BE SURE TO RUN allow 'OpenSSH' BEFORE ufw enable. If you do not... your SSH session will be terminated and... good luck.
Now your firewall is good to go, restart nginx and you should be set:
sudo systemctl start nginx
Helpful tips for the future:
Let's Encrypt issues certificates valid for 90 days (roughly 3 months); that limit comes from the CA, not from NGINX. To renew your certs:
Add this to your crontab:
sudo systemctl stop nginx
sudo letsencrypt renew
sudo systemctl start nginx
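One way to wire the three commands above into cron is to wrap them in a script. This is a rough sketch: the path under /tmp and the monthly schedule are illustrative choices, and it assumes systemd plus the letsencrypt CLI.

```shell
# Illustrative renewal script; /tmp is used here only for the sketch --
# a real deployment would live somewhere like /usr/local/bin.
cat <<'EOF' > /tmp/renew-certs.sh
#!/bin/sh
systemctl stop nginx
letsencrypt renew
systemctl start nginx
EOF
chmod +x /tmp/renew-certs.sh
# Example crontab entry: run at 03:00 on the 1st of each month.
# 0 3 1 * * /tmp/renew-certs.sh
```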
HELPFUL NOTES:
You must have the domain name pointing at the server of choice BEFORE running letsencrypt. The CA validates the domain by connecting to it, so the DNS record must already resolve to the machine you run letsencrypt on; this is how it checks that you are the owner/admin of the domain.
You do not need the giant list of encryption types, but I would highly recommend keeping most of them. Elliptic Curve Diffie-Hellman is a must for the type of key used above, but you can probably cut it down to ECDHE, AES, GCM, and RSA or SHA depending on how many cipher suites you want to support. If you aren't going to support SSLvX and only do TLSvX, you only need to support (and restrict to) the following: ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
AWS Certificate Manager (ACM) + Elastic Load Balancer
Go to your Load Balancer in the EC2 Resource Console
Select your listener
Should probably say: HTTPS: 443 in bold letters
Check it and click Actions => Edit
Double check that your Protocol is HTTPS on Port 443 and your target group is good
At the bottom of the pop-up, select "Choose an existing certificate from AWS Certificate Manager (ACM)"
Then select your ACM Certificate
Save it
SSH into your instance/application on EBS/whatever
Write an NGINX policy for redirecting HTTP traffic to HTTPS:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name thewhozoo.com www.thewhozoo.com;
return 301 https://$host$request_uri;
}
Restart NGINX
For Elastic Beanstalk environment check THIS INFO.
Wait about 5 minutes for everything to sink in and you should be good to go!
Check this for help if needed
Drop the 'http://' from the CNAME and just use:
example.us-west-2.elasticbeanstalk.com
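A quick way to see why the panel rejects one form and accepts the other. The check below is only a rough sketch of the "bare hostname" rule, not 1&1's actual validator:

```shell
# A CNAME target must be a bare hostname: no scheme, no path, no trailing slash.
check_target() {
  case "$1" in
    *://*|*/*) echo invalid ;;
    *)         echo valid ;;
  esac
}
check_target "https://example.us-west-2.elasticbeanstalk.com/"  # invalid
check_target "example.us-west-2.elasticbeanstalk.com"           # valid
```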
Related
I have some Docker containers deployed on AWS EC2 that listen on HTTP.
My idea is to use nginx as a reverse proxy, to pass traffic from https to http://localhost.
Each container listens on a specific HTTP port. The EC2 instance will accept traffic only on ports 80 and 443, and I will use the subdomain to choose the right port.
So I should have:
https://backend.my_ec2instance.com --> http://localhost:4000
https://frontend.my_ec2instance.com --> http://localhost:5000
I've got my free TLS certificate from Let's Encrypt (it's just one file containing the public and private keys), and put it in
/etc/nginx/ssl/letsencrypt.pem
Then I have configured nginx in this way
sudo nano /etc/nginx/sites-enabled/custom.conf
and wrote
server {
listen 443 ssl;
server_name backend.my_ec2instance;
# Certificate
ssl_certificate letsencrypt.pem;
# Private Key
ssl_certificate_key letsencrypt.pem;
# Forward
location / {
proxy_pass http://localhost:4000;
}
}
server {
listen 443 ssl;
server_name frontend.my_ec2instance;
# Certificate
ssl_certificate letsencrypt.pem;
# Private Key
ssl_certificate_key letsencrypt.pem;
# Forward
location / {
proxy_pass http://localhost:5000;
}
}
then
sudo ln -s /etc/nginx/sites-available/custom.conf /etc/nginx/sites-enbled/
Anyway, if I open my browser on https://backend.my_ec2instance it's not reachable.
http://localhost:80 instead correctly shows the nginx page.
HTTPS default port is port 443. HTTP default port is port 80. So this: https://localhost:80 makes no sense. You are trying to use the HTTPS protocol on an HTTP port.
In either case, I don't understand why you are entering localhost in your web browser at all. You should be trying to open https://frontend.my_ec2instance.com in your web browser. The localhost address in your web browser would refer to your local laptop, not an EC2 server.
Per the discussion in the comments you also need to include your custom Nginx config in the base configuration file.
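For reference, a stock nginx.conf picks up extra configuration through include directives inside its http block, so the custom file must match one of those patterns (the paths shown are common Debian/Ubuntu defaults, not necessarily this server's):

```
http {
    # ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```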
I have a requirement similar to this post,
Google cloud load balancer port 80, to VM instances serving port 9000
I like one of the answers (not the accepted one), but how do I do it? Or is there an alternative way?
" If your app is HTTP-based (looks like it), then please have a look
at the new HTTP load balancing announced in June. It can take incoming
traffic at port 80 and forward to a user-specified port (eg. port
9000) on the backend. The doc link for the command is here:
https://developers.google.com/compute/docs/load-balancing/http/backend-service#creating_a_backend_service"
I don't want to create static IP after static IP and lose track
Scenario:
A Compute Engine instance with an application running on port 8080 or 8443 (firewall open for 8080 and 8443; has a static IP)
Now I want to hook it a domain.
Problem:
I have to specify port number - like http://mywebsite:8080
Goal: Allow use like http://mywebsite
Please can you explain how Cloud DNS and the Load Balancer work; are both needed for my scenario? Help me connect the dots.
Thanks
Note: the application works on 8080 only (won't run on 80)
DNS knows nothing about port numbers (except for special record types). DNS is a hostname to IP address translation service.
You can either use a proxy/load-balancer to proxy port 80 to 8080 or configure your app to run on port 80.
Port 80 is a privileged port. On Linux this means configuring your application to run with system privileges or starting the application with sudo.
Most applications that run on non-standard ports have a web server in front of them such as Apache or Nginx. The web server proxies port 80 to 8080 and provides a more resilient Internet facing service.
I don't want to create static IP after static IP and lose track
Unfortunately, you will need to manage your services and their resources. If you deploy a load balancer, then you can usually use private IP addresses for the compute instances. Only the load balancer requires a public IP address. The load balancer will proxy port 80 to 8080.
However, assuming that your requirements are small, you can assign a public IP address to the instance, install Apache or Nginx, and run your application on port 8080.
Today, it is rare that Internet-facing web services do not support HTTPS (port 443). Using a load balancer simplifies configuring TLS and certificate management. You can also configure TLS in Apache/Nginx and Let's Encrypt. That removes the requirement that your app supports TLS directly on something like port 8443.
I found this article and it works - https://eladnava.com/binding-nodejs-port-80-using-nginx/
Steps (run sudo apt-get update first):
sudo apt-get install nginx
Remove the default site:
sudo rm /etc/nginx/sites-enabled/default
Create a new site file called node:
sudo nano /etc/nginx/sites-available/node
Add this to the file, updating the domain name and the port of your app (8080 or other):
server {
listen 80;
server_name somedomain.co.uk;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://127.0.0.1:8080";
}
}
Create a symbolic link:
sudo ln -s /etc/nginx/sites-available/node /etc/nginx/sites-enabled/node
Restart nginx
sudo service nginx restart
Credits to original author - Elad Nava
I have a FastAPI app hosted on an EC2 instance using docker-compose.yml. Currently, the app is not secured (HTTP, not HTTPS). I am trying to secure the app via a self-signed cert by following the tutorial Deploy your FastAPI API to AWS EC2 using Nginx.
I have the following in the fastapi_nginx file in the /etc/nginx/sites-enabled/
server {
listen 80;
listen 443 ssl;
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
server_name x.xx.xxx.xxx;
location / {
proxy_pass http://0.0.0.0:8000/docs;
}
}
But it doesn't seem to work. When I do https://x.xx.xxx.xxx, I get the error:
This page isn’t working
x.xx.xxx.xxx didn’t send any data.
ERR_EMPTY_RESPONSE
But http://x.xx.xxx.xxx is working like before.
I am not sure if I am missing anything or making any mistakes.
P.S.: I also tried doing the steps mentioned in the article here and still it wasn't working.
I have also checked the inbound rules in my security groups.
You are proxying HTTPS traffic to /docs; have you tried proxy_pass http://localhost:8000; instead?
Also, 0.0.0.0 is not always a good choice as a proxy target: it refers to all IP addresses on the local machine. Try 127.0.0.1 or localhost.
You can check any errors in /var/log/nginx/error.log.
Finally, see if your security group and route table allow the traffic.
Since you make use of docker-compose.yml, you can probably configure it as follows:
Extend your docker-compose.yml to include nginx as well.
In the mounts below, nginx.conf is the file you have defined locally and certs holds your certificates. It is also best to keep nginx in the same network as the fastapi app so that they can communicate.
Modify nginx.conf to point at the Docker service name of the fastapi app:
location / {
proxy_pass http://my-fastapi-app:8000/docs;
}
An example snippet below:
...
networks:
app_net:
services:
my-fastapi-app:
...
networks:
- app_net
nginx:
image: 'bitnami/nginx:1.14.2'
ports:
- '80:8080'
- '443:8443'
volumes:
- ./nginx.conf:/opt/bitnami/nginx/conf/nginx.conf:ro
- ./certs:/opt/bitnami/nginx/certs/:ro
- ./tmp/:/opt/bitnami/nginx/tmp/:rw
networks:
- app_net
Additionally, I would suggest looking into Caddy. Certificate issuance and renewal are handled automatically.
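For comparison, a minimal Caddyfile for the two subdomains might look like the following (a hypothetical sketch; Caddy obtains and renews the certificates itself):

```
backend.my_ec2instance.com {
    reverse_proxy localhost:4000
}

frontend.my_ec2instance.com {
    reverse_proxy localhost:5000
}
```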
I want to encrypt traffic between my load balancer and web servers in an Elastic Beanstalk environment. Amazon has a guide here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html but it involves manually generating a certificate for your servers. Is there a fully automatic alternative?
If you have your servers generate their own self-signed certificate as part of the deployment container commands, then each server will get an updated certificate every time you deploy and when a new server is started.
The best command I have found for this is the following, which creates certificates valid for 10 years:
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/pki/tls/certs/server.key -out /etc/pki/tls/certs/server.crt -days 3650 -nodes -subj "/CN=example.com"
Using this approach, as long as you deploy (including upgrading your EB container version) at least once a decade, your servers will stay up.
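To sanity-check what that openssl invocation produces, you can run a variant against throwaway paths (using /tmp and a smaller 2048-bit key purely to keep the illustration fast; the deployed command writes to /etc/pki/tls/certs):

```shell
# Generate a 10-year self-signed certificate into /tmp ...
openssl req -x509 -newkey rsa:2048 -keyout /tmp/server.key -out /tmp/server.crt \
  -days 3650 -nodes -subj "/CN=example.com" 2>/dev/null
# ... then confirm the subject and expiry date that ended up in the certificate.
openssl x509 -in /tmp/server.crt -noout -subject -enddate
```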
This drastically simplifies the setup for this as well. Now all you need to do is the following:
Add a config file to your elastic beanstalk project which generates self-signed certificates and adds HTTPS settings to the web server.
Have the web server security group accept port 443 connections from the load balancer security group.
Set your load balancer to forward traffic from port 443 to port 443.
Below is an example of a full HTTPS elastic beanstalk config file for python. This is a slight modification of AWS's suggested config file for python. I've added the generate certificate command to the beginning of container commands and removed the two file statements for /etc/pki/tls/certs/server.crt and /etc/pki/tls/certs/server.key as they are now auto generated. AWS examples for other languages can be found here.
AWS Linux 2, Apache-based deployment
Put the following in .ebextensions/ssl.config:
container_commands:
01_create_certs:
command: |
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/pki/tls/certs/server.key -out /etc/pki/tls/certs/server.crt -days 3650 -nodes -subj "/CN=example.com"
02_restart_httpd:
command: |
# Condition on whether httpd is running for compatibility with EB worker environments
sudo systemctl status httpd && sudo systemctl restart httpd || echo "httpd not running"
03_wait_for_httpd_restart:
command: "sleep 3"
Put the following in .platform/httpd/conf.d/ssl.conf:
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
# Limit requests to 100 MB
LimitRequestBody 104857600
<Proxy *>
Require all granted
</Proxy>
ProxyPass / http://localhost:8000/ retry=0
ProxyPassReverse / http://localhost:8000/
ProxyPreserveHost on
</VirtualHost>
AWS Linux 1, Apache-based deployment
Put the following in .ebextensions/ssl.config:
packages:
yum:
mod24_ssl : []
files:
/etc/httpd/conf.d/ssl.conf:
mode: "000644"
owner: root
group: root
content: |
LoadModule wsgi_module modules/mod_wsgi.so
WSGIPythonHome /opt/python/run/baselinenv
WSGISocketPrefix run/wsgi
WSGIRestrictEmbedded On
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile "/etc/pki/tls/certs/server.crt"
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
Alias /static/ /opt/python/current/app/static/
<Directory /opt/python/current/app/static>
Order allow,deny
Allow from all
</Directory>
WSGIScriptAlias / /opt/python/current/app/application.py
<Directory /opt/python/current/app>
Require all granted
</Directory>
WSGIDaemonProcess wsgi-ssl processes=1 threads=15 display-name=%{GROUP} \
python-path=/opt/python/current/app \
python-home=/opt/python/run/venv \
home=/opt/python/current/app \
user=wsgi \
group=wsgi
WSGIProcessGroup wsgi-ssl
</VirtualHost>
container_commands:
01_create_certs:
command: |
sudo openssl req -x509 -newkey rsa:4096 -keyout /etc/pki/tls/certs/server.key -out /etc/pki/tls/certs/server.crt -days 3650 -nodes -subj "/CN=example.com"
02_kill_httpd:
command: "sudo restart supervisord"
03_wait_for_httpd_death:
command: "sleep 3"
If you are doing this for a Classic Load Balancer in Apache-based AWS Linux 2 as opposed to a shinier, newer Application Load Balancer, in addition to adding the config files listed here, you will also want to perform the following steps:
Put the following in .platform/httpd/conf.d/keepalive.conf:
# Enable TCP keepalive
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 300
<IfModule mod_reqtimeout.c>
RequestReadTimeout header=300 body=300
</IfModule>
Ensure that your ELB is configured to route HTTP traffic to port 80 via HTTP and HTTPS traffic to port 443 via HTTPS.
This will prevent ELB pre-connections from being wrongfully terminated, which if left unchecked will pollute your health-check data (for more on this, see Mysterious Http 408 errors in AWS elasticbeanstalk-access_log). In my testing, not performing this step resulted in a 25% false negative rate for automated health checks.
You may also wish to consider adding an SSL rewrite file to the config to reroute all incoming insecure traffic to the secure port, e.g. .platform/httpd/conf.d/ssl_rewrite.conf with the contents:
RewriteEngine On
<If "-n '%{HTTP:X-Forwarded-Proto}' && %{HTTP:X-Forwarded-Proto} != 'https'">
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</If>
https://www.petercuret.com/how-ssl-encrypt-your-django-heroku-projects-free-lets-encrypt/
This article about encrypting a Django app is a great tutorial. I did most of the process except the last step, "Adding the security certificate to Heroku"; mine is a cloud server with Ubuntu 16.04, so that part is not adapted to my server.
I googled around for "Nginx ssl encrypt" and found this tutorial (https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04).
I configured the nginx server as in that tutorial. When all was finished, I tested it with "curl https://example.com" (my domain), and it returned "Failed to connect to example.com port 443: Connection refused".
PS: nginx runs on the host, and the Django app runs in a Docker container
Some results of my server:
root@i-atbxncfv:~# sudo ufw status
Status: active
To Action From
-- ------ ----
Nginx Full ALLOW Anywhere
443/tcp ALLOW Anywhere
443 ALLOW Anywhere
22/tcp ALLOW Anywhere
Nginx Full (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
22/tcp (v6) ALLOW Anywhere (v6)
root@i-atbxncfv:~# sudo ufw app list
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
root@i-atbxncfv:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b082bf17c218 e5b11bf09f49 "/usr/sbin/sshd -D" 2 days ago Up 2 days 0.0.0.0:21->22/tcp, 0.0.0.0:32789->80/tcp, 0.0.0.0:32788->5000/tcp django_app_1
root@i-atbxncfv:/etc/nginx/sites-enabled# cat example-com.conf
server {
listen 80;
server_name example.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://0.0.0.0:32788;
}
}
root@i-atbxncfv:/etc/nginx/sites-available# cat default
server {
listen 80;
listen [::]:80;
server_name test.doask.net;
return 301 https://$server_name$request_uri;
# SSL configuration
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# include snippets/snakeoil.conf;
}
server {
# SSL configuration
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
include snippets/ssl-test.doask.net.conf;
include snippets/ssl-params.conf;