I am using an AWS Lightsail load balancer in conjunction with a Lightsail instance so that I can use the free certificate manager built into the load balancer. The load balancer seems to automatically forward all traffic to my instance's Nginx server on port 80, so the following config also supports HTTPS connections:
server {
    listen 80;
    server_name mountainviewwebtech.ca www.mountainviewwebtech.ca;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
However, when I add the following lines to redirect HTTP to HTTPS, I get the error ERR_TOO_MANY_REDIRECTS. My instance only ever receives traffic on port 80, even when the client is using a secure connection, so it just keeps redirecting over and over again.
if ($scheme != https) {
    return 301 https://$host$request_uri;
}
Is there any way to obtain the original $scheme before the request was forwarded to my instance?
If you want a free SSL cert, Let's Encrypt's Certbot will get you there (and save you money). I have had good success with this option.
Using the load balancer as the SSL terminator ends the secure connection at the load balancer, so unencrypted traffic is sent from the balancer to the compute instances on port 80. If you want to forward SSL traffic to the compute instances instead, see if the load balancer will do port forwarding; this being Lightsail, I doubt it will be possible.
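Since TLS ends at the load balancer, the client's original scheme only survives in the X-Forwarded-Proto header the balancer adds, which Nginx exposes as $http_x_forwarded_proto. Assuming the Lightsail load balancer sets that header (it is ALB-based, so it should), a minimal sketch of the redirect keyed on that header instead of $scheme would look roughly like this:

server {
    listen 80;
    server_name mountainviewwebtech.ca www.mountainviewwebtech.ca;

    # Redirect only when the client originally spoke plain HTTP to the
    # load balancer; $scheme on this hop is always "http" because TLS
    # is terminated upstream.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Pass the balancer's value through rather than $scheme so the
        # app sees the client's original protocol.
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}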
Related
I have an AWS ELB load balancer with three dynamic IPs and the domain example.com, on port 443.
Our client wants to access the API, but he has outbound firewall rules that require whitelisting the dynamic IPs every time they change.
To resolve this, we created a subdomain (api.example.com) with an Elastic IP and an Nginx reverse proxy, so every request that comes in on api.example.com is forwarded to example.com.
The issue is that if the client whitelists the load balancer IPs and makes a request to the proxy server (api.example.com), everything works fine, but if the ELB IPs are not whitelisted he gets a timeout error.
# Server configuration for api.example.com
server {
    server_name api.example.com;

    resolver 8.8.8.8 valid=10s;
    resolver_timeout 10s;

    set $upstream_endpoint https://example.com;

    location / {
        proxy_redirect off;
        proxy_read_timeout 3600;
        proxy_connect_timeout 1m;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_pass $upstream_endpoint;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    }
}
Please help. Thanks in advance.
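The timeout when the ELB IPs are not whitelisted sounds like a network/firewall issue rather than an Nginx one, but one directive that is commonly needed when proxying to an HTTPS endpoint behind an AWS load balancer is proxy_ssl_server_name, which is off by default and controls whether Nginx sends SNI in the upstream TLS handshake. A minimal sketch of the location block with it added, keeping the resolver and $upstream_endpoint setup from the question unchanged:

location / {
    proxy_redirect off;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Send SNI (taken from the proxy_pass host, here example.com) in the
    # TLS handshake to the upstream; this is off by default and some
    # AWS-hosted HTTPS endpoints reject handshakes without it.
    proxy_ssl_server_name on;

    proxy_pass $upstream_endpoint;
}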
I'm running Django on Gunicorn behind an NGINX reverse proxy and an AWS Application Load Balancer. The ALB has two listeners: the HTTPS listener forwards to a Target Group on port 80, and the HTTP listener redirects to HTTPS.
The ALB thus connects with an AWS ECS Cluster, which runs a task with 2 containers: one for the Django app, and another for the NGINX that acts as a reverse proxy for the Django app. Here's the configuration for the NGINX reverse proxy:
upstream django {
    server web:8000;
}

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://django;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_ssl_server_name on;
        proxy_redirect off;
    }
}
This configuration ensures that whenever the client hits the app over HTTP, they get redirected to HTTPS, and everything works fine with one exception: in Django, request.is_secure() returns False instead of the expected True, and request.build_absolute_uri() returns http://mywebsite.com rather than https://mywebsite.com.
I already tried adding the following lines to settings.py:
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
as explained in the documentation, but it doesn't work. Whenever I inspect request.META (or the raw request.headers), I'm seeing 'HTTP_X_FORWARDED_PROTO': 'http' (and the equivalent raw 'X-Forwarded-Proto': 'http') instead of https as expected. The stack is correctly forwarding 'HTTP_X_FORWARDED_HOST': 'mywebsite.com' from the client, but the scheme is being ignored.
Can anyone help me identify what I'm doing wrong and how to fix it? Thanks
With a Classic ELB you specify the "instance port" (see the Listeners tab), and that controls the protocol sent downstream to nginx. In that scenario it is common to attach an SSL cert to the 443 listener but send HTTP down port 80 to nginx; the port 80 listener also sends HTTP. In that setup, where only HTTP comes in from the load balancer, it is your job to inspect the X-Forwarded-Proto header and perform a permanent redirect to HTTPS, because the Classic ELB could not redirect HTTP to HTTPS itself. With the Application Load Balancer (ALB), I believe you can have the load balancer redirect HTTP to HTTPS, and you can also speak 443 to nginx if you want.
In your specific case it seems like you were only listening on port 80, so you were probably only sending HTTP from the load balancer. Check your instance port and protocol.
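One thing worth noting about the header itself: in the Nginx config above, proxy_set_header X-Forwarded-Proto $scheme replaces whatever X-Forwarded-Proto value the ALB sent, and since the ALB-to-Nginx hop is plain HTTP, $scheme is always http by the time Django sees it. A minimal sketch that passes the load balancer's value through instead (assuming the ALB sets X-Forwarded-Proto, which it does):

location / {
    proxy_pass http://django;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Forward the ALB's original header instead of $scheme, which is
    # always "http" on this hop because TLS ends at the load balancer.
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_redirect off;
}

With that in place, Django's SECURE_PROXY_SSL_HEADER setting should see 'https' for requests that reached the ALB over TLS.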
I have a load balancer and an Nginx server that sits behind the LB.
Below is the nginx config.
upstream app {
    server service_discovery_name.local:5005;
}

# Reverse proxy to make the VPC Elasticsearch domain publicly available
server {
    listen 80;

    location / {
        proxy_pass vpc-es-domain-url;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Reverse proxy for the Django app
server {
    listen 8005;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I have a listener attached to the ALB, listening on port 80, that forwards traffic to the target group, which contains the private IP of the Nginx container. I use ECS with Fargate.
Now when I go to ALB_url:80, it opens up Elasticsearch. However, when I go to ALB_url:8005, it fails to load anything. The Django app is running on port 5005; I verified this by browsing explicitly to the container's IP on port 5005.
I believe the nginx config is right. I want my traffic to be routed via ALB -> Nginx -> apps. What exactly am I missing?
When you configure an ALB you must create a listener and specify the port and the action (forward the request to a Target Group, or redirect). You can create multiple listeners on different ports; for example, you can have a listener on port 80 that redirects to HTTPS and another listener on port 443 that forwards requests to a Target Group.
Based on that, I understand that your configuration is:
- ALB listening on port 80 and sending requests to the Target Group.
- Target Group listening on port 80 and sending requests to the Fargate task (Nginx server).
When you go to ALB_URL:80, the request is forwarded to the Target Group on port 80 and then sent to the Fargate task. But when you go to ALB_URL:8005, it will not work because the ALB doesn't have a listener for that port.
You can create a listener on port 8005 that forwards requests to a Target Group listening on port 8005. With this configuration, when you go to ALB_url:8005 the request will be sent to the new Target Group, then on to the Fargate task, where it will be handled by the corresponding server block in the Nginx config.
ALB ---> listener 80 ----> Target Group port 80 ----> ECS Task Nginx
ALB ---> listener 8005 ---> Target Group port 8005 ---> ECS Task Nginx
Don't forget to check that the security groups allow port 8005.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-routing-configuration
I want to add TLS to my AWS Elastic Beanstalk application. It is a Node.js app running behind an Nginx proxy server.
Here are the steps I've completed:
- Get a wildcard certificate from AWS Certificate Manager.
- Add the certificate in the load balancer configuration section of my EB environment.
The relevant part of my Nginx config is:
files:
  /etc/nginx/conf.d/proxy.conf:
    mode: "000644"
    content: |
      upstream nodejs {
        server 127.0.0.1:8081;
        keepalive 256;
      }

      server {
        listen 8080;

        proxy_set_header X-Forwarded-Proto $scheme;

        if ( $http_x_forwarded_proto != 'https' ) {
          return 301 https://$host$request_uri;
        }

        location / {
          proxy_pass http://nodejs;
          proxy_http_version 1.1;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
When I try to access my application using https, I get a 408: Request Timed Out.
It is my understanding that to enable SSL directly on Nginx we would need the certificate and key files and a listener on port 443, but since I'm using an ACM certificate, I don't have those files.
What am I missing in my nginx.conf for this to work?
Thanks
In the load balancer listener configuration, for the port 443 listener, the "Instance Port" setting should be 80.
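For contrast, terminating TLS on the instance itself would need certificate and key files that Nginx can read, roughly like the hypothetical sketch below (the paths are placeholders); ACM-managed certificates generally cannot be exported as files, which is why the usual Elastic Beanstalk setup keeps the certificate on the load balancer and sends plain HTTP to the instance port:

# Hypothetical: only workable with exportable cert/key files
# (e.g. from Let's Encrypt), not with a certificate that lives in ACM.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key; # placeholder path

    location / {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}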
I'm running into '502 Bad Gateway' issues for HTTPS requests when using an AWS Elastic Load Balancer (Application type) in front of EC2 instances running Nginx. Nginx acts as a reverse proxy on each instance for a Waitress server serving a Python app (Pyramid framework). I'm trying to use TLS termination at the ELB so that the EC2 instances only deal with HTTP. Here's the rough setup:
Client HTTPS request > ELB (listening on 443, forwarding to 80 on the backend) > Nginx listening on port 80 (on the EC2 instance) > forwarded to Waitress/Pyramid (on the same EC2 instance)
When I make requests over HTTPS I get the 502 error. However, when I make regular HTTP requests I get a response as expected (same setup as above, except the ELB is listening on port 80).
Some additional info:
- ELB health checks are working.
- All VPC/security groups are configured correctly (I believe).
- I'm using an AWS certificate on the ELB, set up following the standard AWS walkthrough.
- I SSH'd into the EC2 instance, and in the Nginx access log it looks like the HTTPS requests are still encrypted, or there is some encoding issue.
And here's nginx.conf on the EC2 instance:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    access_log /etc/nginx/access.log;

    sendfile on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:6543;
    }

    server {
        listen 80;
        server_name [MY-EC2-SERVER-NAME];

        # Proxy connections to the application servers (app_servers)
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
OK, I figured it out (I'm a dummy). I had two listeners set up on the ELB, one for 80 and one for 443, which was correct. The port 80 listener was correctly set up to forward to the backend (Nginx) on port 80 over HTTP. The 443 listener was INCORRECTLY configured to send to port 80 on the backend over HTTPS. I updated the 443 listener to use the same rule as the 80 listener (i.e. listen on 443 but send to backend port 80 over HTTP) and it worked. Disregard, y'all.