How to set up a reverse proxy server with an ALB on AWS? - amazon-web-services

I created a service publishing structure as shown below.
I don't know why I can't access the domain successfully. Where might the issue be?
The public LB's listener rules are:

You can add the HTTP-to-HTTPS redirect in the load balancer listener rule itself.
To proxy your HTTPS requests to the application servers, point NGINX at the internal load balancer:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://<internal_loadbalancer_dns_name>;
    }
}
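One caveat (not from the original answer): the DNS name of an internal load balancer resolves to IP addresses that can change over time, and nginx normally resolves a proxy_pass hostname only at startup. A common pattern, sketched below assuming the Amazon-provided VPC resolver at 169.254.169.253, is to use a resolver plus a variable so the name is re-resolved at request time:

server {
    listen 80 default_server;
    server_name _;

    # Re-resolve the internal ALB name periodically instead of caching it at startup
    resolver 169.254.169.253 valid=10s;

    location / {
        set $internal_alb http://<internal_loadbalancer_dns_name>;
        proxy_pass $internal_alb;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}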

Related

NGINX on AWS EC2 to forward HTTPS to HTTP://localhost

I have some Docker containers deployed on AWS EC2 that listen on HTTP.
My idea is to use nginx as a reverse proxy to pass traffic from HTTPS to http://localhost.
Each container listens on a specific HTTP port. The EC2 instance will accept traffic only on ports 80 and 443, and I will use the subdomain to choose the right port.
So I should have:
https://backend.my_ec2instance.com --> http://localhost:4000
https://frontend.my_ec2instance.com --> http://localhost:5000
I've got my free TLS certificate from Let's Encrypt (it's just one file containing the public and private keys) and put it in
/etc/nginx/ssl/letsencrypt.pem
Then I have configured nginx in this way
sudo nano /etc/nginx/sites-enabled/custom.conf
and wrote
server {
    listen 443 ssl;
    server_name backend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;
    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:4000;
    }
}

server {
    listen 443 ssl;
    server_name frontend.my_ec2instance;

    # Certificate
    ssl_certificate letsencrypt.pem;
    # Private Key
    ssl_certificate_key letsencrypt.pem;

    # Forward
    location / {
        proxy_pass http://localhost:5000;
    }
}
then
sudo ln -s /etc/nginx/sites-available/custom.conf /etc/nginx/sites-enbled/
Anyway, if I open https://backend.my_ec2instance in my browser, it's not reachable.
http://localhost:80 instead correctly shows the nginx page.
The default HTTPS port is 443 and the default HTTP port is 80, so https://localhost:80 makes no sense: you are trying to use the HTTPS protocol on an HTTP port.
In any case, I don't understand why you are entering localhost in your web browser at all. You should be trying to open https://frontend.my_ec2instance.com in your web browser. The localhost address in your web browser would refer to your local laptop, not an EC2 server.
Per the discussion in the comments, you also need to include your custom Nginx config in the base configuration file.
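For example (a sketch of the typical Debian/Ubuntu layout, not taken from the question), the http block in /etc/nginx/nginx.conf must pull in the per-site files, otherwise anything in sites-enabled is ignored:

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load the per-site server blocks, including custom.conf
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}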

Enable more ports to public traffic on AWS? Nginx/Docker multiple applications setup

I currently have an EC2 instance running on AWS, with two applications (a frontend app and an API, standard stuff).
By default I have ports 80 and 443 enabled for public traffic by my company's DevOps team.
I'm trying to use Nginx to route requests incoming on port 80 to the different app/API ports set up in Docker containers.
Example: my app is currently running on port 8080; I expect that when a user hits my domain, nginx proxies the request to the app running on :8080.
nginx.conf
events {}

http {
    server {
        listen 80;
        server_name company.com;

        location / {
            proxy_pass http://0.0.0.0:8080;
        }
    }

    server {
        listen 443 ssl;
        server_name company.locallabs.com;

        ssl_certificate /etc/ssl/wildcard.company.com.crt;
        ssl_certificate_key /etc/ssl/wildcard.company.com.key;

        location /api {
            rewrite /api/(.*) /$1 break;
            proxy_pass http://0.0.0.0:8081;
        }
    }
}
Question: port 8080 is accessible only from within the server (when I SSH in and run a curl inside the server it works fine).
Should :8080 be opened to public traffic, even though I set up the proxying in Nginx?
nginx logs gives me
2021/04/14 04:01:07 [error] 6#6: *7 connect() failed (111: Connection refused) while connecting to upstream, client: 3.90.2.223, server: company.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "company.com"
No, you don't need to open :8080 to public traffic.
I recommend that you consider deploying the docker container jc21/nginx-proxy-manager. It works beautifully and has the added benefit that it manages Let's Encrypt SSL certs for each service it proxies.
Have the nginx container bind host ports 80 and 443, and use its UI to define the domains and ports for each Docker container your devs are working on.

LoadBalancer in EC2 Instance showing status OutOfService

Here is My Scenario.
I'm using ACM to generate 2 SSL certificates. example.com and *.example.com
I have 2 load balancers linked to the same EC2 instance.
1) Linked to my wordpress website - example.com
2) Linked to my app - *.example.com
Checklist I followed to troubleshoot the OutOfService error:
1) Instance State - Running
2) Status Checks - 2/2
3) Security Group Setting - Port 80/443/22 are open
4) Below are my Health Check settings
Ping Target - HTTP:80/
Timeout - 5 seconds
Interval - 30 seconds
Unhealthy threshold - 2
Healthy threshold - 10
I'm using the NGINX web server. I have checked the status and it shows as active.
Here is my config file for example.com:
server {
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    listen 443;
    server_name example.com;
    root /opt/bitnami/apps/wordpress/htdocs;
}

server {
    listen 80;
    listen 443;
    server_name ~^(.*)\.example\.com$;
    root /opt/bitnami/apps/example_app;
}
What could be the problem here? Is the problem related to NGINX config settings or is it related to Load Balancer settings?
Your nginx settings are definitely not correct. There is no SSL at the instance level; instead, the ELB terminates the SSL connection from the client. The only connections the EC2 instance should accept are on port 80, and only from the ELB. I suggest you remove the SSL (port 443) references from the server blocks in nginx and make sure it's configured as described above.
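A sketch of what that could look like, based on the paths from the question (not part of the original answer), with all TLS removed so the instance only serves plain HTTP to the ELB:

# The ELB terminates SSL; the instance listens on port 80 only.
server {
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    root /opt/bitnami/apps/wordpress/htdocs;
}

server {
    listen 80;
    server_name ~^(.*)\.example\.com$;
    root /opt/bitnami/apps/example_app;
}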
You'll also definitely need to configure nginx to run PHP scripts. Since PHP is a preprocessing engine, your nginx configuration needs to know how to handle PHP files.
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
reference: https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-in-ubuntu-16-04
Since you mentioned this is a WordPress website, you'll need to set up a LEMP stack.

Redirecting http to https using nginx behind AWS load balancer

I've searched high and low and I feel this is the final hurdle for my project. I'm trying to redirect all http traffic to https.
Currently, when typing domain.info, it redirects to https://domain.info:80 and returns ERR_SSL_PROTOCOL_ERROR.
But replacing the 80 with 443 gives me my webpage just fine.
My server is behind a load balancer; I have my certificates in the LB and none on my server. I'm using NGINX as my web server. Basically this is my setup:
user>https>load balancer>http>server
Thanks so much in advance!!
Firstly, catch all incoming HTTP:

server {
    listen 80 default_server;
    server_name _;
    return ....;
}
then redirect to HTTPS permanently:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
or...

server {
    listen 443 ssl default_server;
    server_name foo.com;
}
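Note that with TLS terminated at the load balancer, every request reaches nginx over port 80, so a blanket 301 on port 80 would also redirect requests that were already HTTPS at the client and cause a redirect loop. A common pattern in that setup (a sketch, not from the original answers) is to key the redirect off the X-Forwarded-Proto header the load balancer adds:

server {
    listen 80 default_server;
    server_name _;

    # The ELB/ALB sets X-Forwarded-Proto to the protocol the client used
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        # hypothetical backend port; adjust to your application
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}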
Directly in the load balancer, you can add a redirection rule; have you tried that?
In the load balancer listener, update the default rule for HTTP:80 and configure it like this:
HTTP 80: default action
IF
Requests otherwise not routed
THEN
Redirect to https://#{host}:443/#{path}?#{query}
Status code:HTTP_301

How to solve 502 Bad Gateway errors with Elastic Load Balancer and EC2/Nginx for HTTPS requests?

I'm running into '502 Bad Gateway' issues for HTTPS requests when using AWS Elastic Load Balancer (Application type) in front of EC2 instances running Nginx. Nginx is acting as a reverse proxy on each instance for a waitress server serving up a python app (Pyramid framework). I'm trying to use TLS termination at the ELB so that the EC2 instances are only dealing with HTTP. Here's the rough setup:
Client HTTPS request > ELB (listening on 443, forwarding to 80 on backend) > Nginx listening on port 80 (on Ec2 instance) > forwarded to waitress/Pyramid (on same ec2 instance)
When I make requests on HTTPS I get the 502 error. However, when I make regular HTTP requests I get a response as expected (same setup as above except ELB is listening on port 80).
Some additional info:
ELB health checks are working.
All VPC/Security groups are configured correctly (I believe).
I'm using an AWS certificate on the ELB using the standard setup/walkthrough on AWS.
I SSH'd into the EC2 instance, and in the Nginx access log it looks like the HTTPS requests are still encrypted? Or is it some encoding issue?
And here's nginx.conf on the EC2 instance:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    access_log /etc/nginx/access.log;
    sendfile on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:6543;
    }

    server {
        listen 80;
        server_name [MY-EC2-SERVER-NAME];

        # Proxy connections to the application servers
        # app_servers
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Ok I figured it out (I'm a dummy). I had two listeners set up on the ELB, one for 80 and one for 443, which was correct. The listener for 80 was set up correctly to forward to backend (Nginx) port 80 over HTTP as expected. The 443 listener was INCORRECTLY configured to send to port 80 on the backend over HTTPS. I updated the 443 listener to use the same rule as the 80 listener (i.e. listen on 443 but send to backend 80 over HTTP) and it worked. Disregard y'all.