I am running a few microservices on ECS with Fargate, and I need the Docker containers to be able to communicate with each other. Right now both containers simply run NGINX to get a proof of concept up and running.
I have set up a private DNS namespace and created a service for each Docker container, e.g. service1.local and service2.local.
I then created the ECS services and linked the service discovery; each ECS service now has the .local namespace attached.
Right now both ECS services are in a public subnet with public IPs and are reachable individually. However, reaching service B from service A is not possible, i.e. the two containers cannot communicate with each other.
I get the following error: service1.local could not be resolved (5: Operation refused).
I attached an EC2 instance to the same VPC and subnets and was able to ping each service via service1.local and service2.local, curl them, etc., so service discovery is working as expected in this situation.
Right now security groups allow traffic on all ports from all IPs (just while testing).
Below is a barebones copy of my NGINX config for service1:
server {
    listen 80;
    server_name localhost;

    # Using a variable in proxy_pass makes nginx re-resolve service2.local
    # at runtime via this resolver, instead of caching the IP at startup.
    resolver ${PRIVATE_DNS_RESOLVER} valid=10s;
    set $serverHost http://service2.local:8000;

    location /status {
        access_log off;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"OK"}';
    }

    # Proxy to server container
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $serverHost$request_uri;
    }

    # Route for client
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Route all errors back to index.html for react router to handle
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Hitting this service's public IP followed by /api should proxy to the second service via service2.local.
tl;dr
- Both containers are accessible via public IPs.
- Both containers are accessible from within the VPC using the service discovery private namespace.
- Container 1 cannot resolve the hostname of container 2.
- The VPC has DNS hostnames enabled.
- The resolver here is one of the NS records from the private namespace; setting it to 127.0.0.1 or 127.0.0.11 gave the same error (see the sketch below).
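One thing worth checking: the NS records of a Route 53 private hosted zone are not directly queryable from inside the VPC; resolution normally goes through the Amazon-provided VPC DNS server, which sits at the VPC CIDR base plus two. A minimal sketch of the resolver setting under that assumption (a VPC CIDR of 10.0.0.0/16 is assumed here, which would put the DNS server at 10.0.0.2):

server {
    listen 80;
    server_name localhost;

    # Amazon-provided VPC DNS: CIDR base + 2 (10.0.0.2 for an assumed 10.0.0.0/16 VPC)
    resolver 10.0.0.2 valid=10s;
    set $serverHost http://service2.local:8000;

    location /api {
        proxy_pass $serverHost$request_uri;
    }
}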
Related
I have two services on my cluster: myapp-service and an nginx-service. I'm using service discovery to connect the two, and everything works just fine.
The problem happens when I deploy a new version of myapp-service and it comes up with a new private (and public) IP address. After the deploy I can see that the IP is correctly updated in Route 53, but when I try to access myapp through nginx it returns a bad gateway. Looking at the nginx logs in CloudWatch, I can see that nginx is trying to connect to the old private IP address of myapp-service.
Currently I'm not using any load balancer or auto-scaling configuration.
There aren't any health checks for my containers in the task definition.
"Enable ECS task health propagation" is on.
This is my nginx configuration (default.conf); marketplace-service.local is my registry in Route 53.
upstream channels-backend {
    server marketplace-service.local:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://channels-backend;
    }
}
Can anybody help me figure out what I'm missing here?
Thanks.
I don't know about the upstream part:

upstream channels-backend {
    server marketplace-service.local:8000;
}

But I'm sure about the server part. For Fargate, localhost is used, so add server_name localhost;. Then add the four proxy_set_header lines shown below to the location block.
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://channels-backend;
    }
}
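One more thing that may explain the stale IP specifically: nginx resolves a hostname in an upstream block once, at startup, and caches the result, so a redeployed task's new IP is never picked up. A variant that re-resolves the service discovery name periodically (a sketch, assuming the Amazon-provided VPC DNS at 10.0.0.2 for a 10.0.0.0/16 VPC; adjust to your CIDR):

server {
    listen 80;
    server_name localhost;

    # Using a variable in proxy_pass forces nginx to re-resolve
    # marketplace-service.local via the resolver every 10s.
    resolver 10.0.0.2 valid=10s;
    set $backend http://marketplace-service.local:8000;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass $backend;
    }
}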
I have a load balancer and an Nginx that sits behind the LB.
Below is the nginx config.
upstream app {
    server service_discovery_name.local:5005;
}

# Reverse proxy for VPC ES to be available on public
server {
    listen 80;
    location / {
        proxy_pass vpc-es-domain-url;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Reverse proxy for django app
server {
    listen 8005;
    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
I have a listener attached to the ALB, listening on port 80, that forwards traffic to the target IP. The target group has the private IP of the Nginx container. I use a Fargate ECS container.
Now when I browse to ALB_url:80, it opens up Elasticsearch. However, when I browse to ALB_url:8005, it fails to load anything. The Django app is running on port 5005, verified by explicitly browsing to IP:5005.
I believe the nginx config is right. I want my traffic to be routed via ALB -> Nginx -> apps. What exactly am I missing?
When you configure an ALB you must create a listener, specifying the port and the action (forward the request to a Target Group, or redirect). You can create multiple listeners on different ports; for example, one listener on port 80 doing redirects to HTTPS and another on port 443 forwarding requests to a Target Group.
According to that, I understand that your configuration is:
- ALB listening on port 80 and sending requests to a Target Group.
- Target Group listening on port 80 and sending requests to the Fargate task (nginx server).
When you browse to ALB_URL:80 the request is forwarded to the Target Group on port 80 and sent on to the Fargate task. But when you browse to ALB_URL:8005 it will not work, because the ALB doesn't have a listener for that port.
You can create a listener on port 8005 that forwards requests to a Target Group listening on 8005. With this configuration, a request to ALB_url:8005 will be sent to the new Target Group, then to the Fargate task, where the port-8005 server block in the Nginx config takes over.
ALB ---> listener 80 ----> Target Group port 80 ----> ECS Task Nginx
ALB ---> listener 8005 ---> Target Group port 8005 ----> ECS Task Nginx
Don't forget to check that the security groups allow traffic on port 8005.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#target-group-routing-configuration
I am using an AWS Lightsail load balancer in front of a Lightsail EC2 instance so that I can use the free certificate manager built into the Lightsail load balancer. This seems to automatically forward all traffic from the load balancer to my EC2 nginx server on port 80, so the following config also supports HTTPS connections:
server {
    listen 80;
    server_name mountainviewwebtech.ca www.mountainviewwebtech.ca;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
However, when I add the following lines to ensure that HTTP redirects to HTTPS, I get ERR_TOO_MANY_REDIRECTS: my EC2 instance only ever receives traffic on port 80, even for secure connections, so it just keeps redirecting over and over again.
if ($scheme != https) {
    return 301 https://$host$request_uri;
}
Is there any way to obtain the original $scheme before the request was forwarded to my EC2 instance?
If you are wanting a 'free' SSL cert, using Let's Encrypt's Certbot will get you there (and save you money). I have had good success with this option.
Using the load balancer as the SSL terminator ends the secure connection at the load balancer, so unencrypted traffic is sent from the balancer to the compute instances (port 80). If you want to forward SSL traffic to the compute instances, see if the load balancer will do port forwarding. This being Lightsail, I doubt it will be possible.
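Alternatively, if the balancer passes the client's original scheme along in an X-Forwarded-Proto header (AWS load balancers generally do; worth confirming for Lightsail), the redirect can key off that header instead of $scheme. A sketch under that assumption:

server {
    listen 80;
    server_name mountainviewwebtech.ca www.mountainviewwebtech.ca;

    # $scheme is always http here because the balancer terminates TLS;
    # the client's original scheme is assumed to arrive in X-Forwarded-Proto.
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Note that the balancer's own health checks may not send the header and would then also get the 301; if health checks start failing, exempt the health check path from the redirect.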
I'm running into '502 Bad Gateway' issues for HTTPS requests when using an AWS Elastic Load Balancer (Application type) in front of EC2 instances running Nginx. Nginx acts as a reverse proxy on each instance for a Waitress server serving a Python app (Pyramid framework). I'm trying to use TLS termination at the ELB so that the EC2 instances only deal with HTTP. Here's the rough setup:
Client HTTPS request > ELB (listening on 443, forwarding to 80 on the backend) > Nginx listening on port 80 (on the EC2 instance) > forwarded to Waitress/Pyramid (on the same EC2 instance)
When I make requests over HTTPS I get the 502 error. However, regular HTTP requests get a response as expected (same setup as above, except the ELB listens on port 80).
Some additional info:
- ELB health checks are working.
- All VPC/security groups are configured correctly (I believe).
- I'm using an AWS certificate on the ELB, following the standard setup/walkthrough on AWS.
- I SSH'd into the EC2 instance, and in the Nginx access log it looks like the HTTPS requests are still encrypted? Or there's some encoding issue?
And here's nginx.conf on the EC2 instance:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /etc/nginx/access.log;
    sendfile on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:6543;
    }

    server {
        listen 80;
        server_name [MY-EC2-SERVER-NAME];

        # Proxy connections to the application servers (app_servers)
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Ok, I figured it out (I'm a dummy). I had two listeners set up on the ELB, one for 80 and one for 443, which was correct. The listener for 80 was correctly set up to forward to backend (Nginx) port 80 over HTTP. The 443 listener was INCORRECTLY configured to send to port 80 on the backend over HTTPS. I updated the 443 listener to use the same rule as the 80 listener (i.e. listen on 443 but send to backend port 80 over HTTP) and it worked. That also explains the 'encrypted' entries in the access log: raw TLS bytes were being sent to Nginx's plain HTTP port. Disregard, y'all.
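In other words, both listeners should terminate on the ELB and speak plain HTTP to the backend:

ELB ---> listener 80 (HTTP) ----> backend port 80 (HTTP) ----> Nginx
ELB ---> listener 443 (HTTPS, TLS terminated at ELB) ----> backend port 80 (HTTP) ----> Nginx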
Ok, so I'm a website newbie who just finished the Django tutorial and decided to try to publish my polls app on the net. So far I have a GoDaddy domain name which I'm trying to point to my Amazon EC2 instance's Elastic IP, which is currently hosting my polls website.
Currently what I have set up is:
- Amazon Route 53: hosted zone for mydomain.com with record sets named mydomain.com & www.mydomain.com and value xx.xxx.xx.x.
- GoDaddy DNS zone file: A (host) record pointing to my Amazon Elastic IP xx.xxx.xx.x, and nameservers set to the four Amazon Route 53 hosted zone nameservers.
- EC2 instance: running nginx and gunicorn to host the app.
My issue is that I can reach the website with Amazon's Elastic IP, but not with the domain name (I get a bold "Welcome to nginx!" page whether I go to the home page or the /polls/1 page).
Looks about right. Have you followed the standard gunicorn configs with nginx?
http://docs.gunicorn.org/en/latest/deploy.html
You probably want something like this in your nginx config:
http {
    include mime.types;
    default_type application/octet-stream;
    access_log /tmp/nginx.access.log combined;
    sendfile on;

    upstream app_server {
        server unix:/tmp/gunicorn.sock fail_timeout=0;
        # For a TCP configuration:
        # server 192.168.0.7:8000 fail_timeout=0;
    }

    server {
        listen 443 default;
        client_max_body_size 4G;
        server_name _;

        ssl on;
        ssl_certificate /usr/local/nginx/conf/cert.pem;
        ssl_certificate_key /usr/local/nginx/conf/cert.key;

        keepalive_timeout 5;

        # path for static files
        root /path/to/app/current/public;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        error_page 500 502 503 504 /500.html;
        location = /500.html {
            root /path/to/app/current/public;
        }
    }
}
You want to point to the right SSL cert and key paths (instead of /usr/local/nginx/conf/cert.pem and /usr/local/nginx/conf/cert.key). Also, you should point root at your specific Django static files instead of /path/to/app/current/public.
Ok, I figured it out.
Nginx was proxying to 127.0.0.1:8000 while gunicorn was bound to 127.0.0.1:8001, hence the 502 error.
To fix the DNS issue, I had to go into my Amazon EC2 control panel and open up port 8000.
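For anyone hitting the same 502: the nginx upstream address has to match exactly the address gunicorn binds to. A minimal sketch with the ports above (the WSGI module name is a placeholder):

# nginx side: must point at the address gunicorn actually binds
upstream app_server {
    server 127.0.0.1:8000 fail_timeout=0;
}

# gunicorn side, run on the same host (shown as a comment to keep this
# self-contained; "myproject.wsgi" stands in for your WSGI module):
#   gunicorn myproject.wsgi --bind 127.0.0.1:8000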