ECS with ALB makes requests to itself but times out? - amazon-web-services

I have a PHP + Apache application running in ECS with an Application Load Balancer sitting in front of it. Everything works fine except when the application makes a request to itself; that request times out.
Let's say the URL to reach the application is www.app.com. In PHP I use Guzzle to send requests to www.app.com, but that request always times out.
I suspect it is a networking issue with ALB but I do not know how I can go about fixing it. Any help please?
Thanks.

As you're using ECS, I would recommend replacing calls to a public load balancer with a service mesh, so your application keeps all HTTP(S) traffic internal to the network. This improves both security and performance (latency is reduced). AWS has an existing product that integrates with ECS to provide this functionality, named App Mesh.
Alternatively, if you want to stick with your current setup, you will need to check the following:
If the ECS hosts are private, they will need to connect outbound via a NAT gateway/NAT instance in the routing table for the 0.0.0.0/0 route. For Fargate this depends on whether the container is public or private.
If the host/container is public, it will need the internet gateway added to its route table for the 0.0.0.0/0 route. Even if inbound access from the ALB to the host is private, the host always speaks outbound to the internet via an internet gateway.
Ensure that inbound/outbound security group rules allow access on HTTP or HTTPS as appropriate.
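For the security-group check, here is a minimal CloudFormation sketch (all resource references such as Vpc and AlbSecurityGroup are placeholders for your own template) of a task/host security group that accepts HTTPS only from the ALB and can speak HTTPS outbound:

```yaml
# Hypothetical sketch - the VPC and ALB security group references are placeholders.
Resources:
  EcsHostSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS in from the ALB and HTTPS out to the internet
      VpcId: !Ref Vpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          SourceSecurityGroupId: !Ref AlbSecurityGroup   # only the ALB may connect in
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0   # outbound HTTPS, reached via NAT gateway or IGW
```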

Related

No response when passing outbound traffic through NAT

I have a setup with a couple of services running in ECS (separate frontends and backends). Now I have the requirement that outbound requests from the backends to some third-party APIs need a static (Elastic) IP.
As I'm quite the novice with networking, I've been following this guide for basically routing requests to given IP addresses through the NAT.
Setup:
One VPC
3 subnets (2 for ECS services, the third for the NAT) - All public(?)
Application load balancers for the services.
Routing to the load balancers through Route53.
The way I've been testing it is to route either all traffic, or just traffic to my local IP, through the NAT gateway in the main routing table instead of directly through the internet gateway. In both cases, when I try to access either a frontend or server it never responds, and I don't see any traffic in the monitoring tab for the NAT either. If I just route the traffic directly to the IGW from the main routing table, it obviously still works.
So I'd really appreciate some help here, since I'm not sure whether my setup is incompatible with the above solution, I'm doing something wrong, or I'm just overlooking something.
Edit: Did the sensible thing, as pointed out, and placed the services in private subnets.
If all your ECS tasks are in public subnets, you can't mask them behind the NAT: a NAT gateway only handles traffic coming from private subnets, so the services need to be moved into private subnets whose route table points at the NAT.
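The fix noted in the edit ("placed the services in private subnets") corresponds roughly to this CloudFormation shape (all IDs are placeholders): the service subnets get a route table whose default route points at the NAT gateway, so all their outbound traffic leaves via the NAT's Elastic IP.

```yaml
# Hypothetical sketch: private service subnets routed through a NAT gateway.
Resources:
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway        # outbound traffic exits via the NAT's EIP
  ServiceSubnetAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref ServiceSubnetA        # ECS tasks live here (private subnet)
      RouteTableId: !Ref PrivateRouteTable
```

The NAT gateway itself stays in a public subnet whose route table points at the internet gateway.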

Do I need to configure certs on nginx itself if nginx is inside an EC2 instance behind a load balancer which can only be accessed using HTTPS?

I have the cert applied on the load balancer, and HTTPS works fine, but I am wondering if I need to add the certs to nginx itself, which seems like overkill, but I am not sure.
No. One of the benefits of using a load balancer is that you can hide your EC2 instance from the public internet, making it less exposed and more secure.
Therefore, it is normal practice to use HTTP between your EC2 instances and the load balancer, since that traffic stays within the same VPC (a trusted internal environment).
By doing this you also gain performance, because TLS termination happens only once, at the load balancer, instead of twice. Your EC2 instance can spend its CPU on application logic instead.
The load balancer is also highly available and can be configured to work with CloudFront and WAF for security and anti-DDoS controls.
No, you don't have to do this. The reason is that your load balancer (LB) is going to terminate the HTTPS connection, decrypt it using an SSL certificate you've deployed on it, and then forward a plain HTTP connection to your EC2 instance(s).
Therefore, typical connections for LB with HTTPS have the following form:
client ---(HTTPS)---->LB---(HTTP)--->EC2 instance
This configuration suits most use cases, as the HTTP traffic happens within the AWS private network, not over the internet.
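The client ---(HTTPS)---> LB ---(HTTP)---> EC2 pattern above maps to roughly this listener/target-group pair in CloudFormation (the ALB, certificate, and VPC references are placeholders):

```yaml
# Hypothetical sketch: TLS terminates at the ALB, plain HTTP to the instances.
Resources:
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref PublicAlb
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref AcmCertificateArn   # the cert lives on the LB only
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref InstanceTargetGroup
  InstanceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref Vpc
      Port: 80            # nginx on the instances listens on plain HTTP
      Protocol: HTTP
      HealthCheckPath: /
```

Note that nothing in the target group references a certificate: the instances never see TLS.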

AWS - Private static IP address for Fargate task

I have the following situation. I have a VPC on AWS. In this VPC, I have an ECS Fargate cluster with multiple different tasks running. Additionally, I have a Site-to-Site VPN to one of my partners set up in this VPC.
Now, this partner has to send HTTP POST (SOAP, in fact) requests to one of my Fargate tasks. This should be possible only through the VPN, so the task can't be public-facing. For reasons outside my control, this partner requires a static IP to send requests to, so an ALB is not an option. So I need a way to assign a private (within the VPC) static IP to the Fargate task.
I've tried to achieve it with an NLB, but I'm not sure I can send HTTP requests to an NLB, since it's L4 rather than L7. Now my only option seems to be an EC2 instance with NGINX that simply forwards all requests to the task's ALB. I don't like this option because I don't have much experience with NGINX configuration.
Do you think there are any other options for me to achieve what I need?
Thanks in advance
I've tried to achieve it with NLB, but not sure if I can send HTTP requests to NLB since it's L4 vs L7.
NLB operates at L4 (TCP/UDP), but of course you can use it for HTTP or HTTPS. The only difference is that you won't be able to set up HTTP-aware listener rules, because NLB listeners speak TCP/UDP. That does not stop you from using it to distribute HTTP/HTTPS traffic among your Fargate tasks.
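An internal NLB can also solve the static-IP requirement directly: SubnetMappings lets you pin a fixed private IPv4 address per subnet. A CloudFormation sketch (subnet, VPC, and IP values are placeholders), with a plain TCP listener carrying the HTTP traffic:

```yaml
# Hypothetical sketch: internal NLB with a fixed private IP, TCP listener for HTTP.
Resources:
  InternalNlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internal
      SubnetMappings:
        - SubnetId: !Ref PrivateSubnetA
          PrivateIPv4Address: 10.0.1.50   # the static IP the partner would target
  TcpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref InternalNlb
      Port: 80
      Protocol: TCP                       # L4: no host/path rules, HTTP passes through
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref FargateTargetGroup
  FargateTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref Vpc
      Port: 80
      Protocol: TCP
      TargetType: ip   # required for Fargate (awsvpc networking) tasks
```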

AWS ECS how to call services from one container to another one

I have an ECS cluster which has an Auto Scaling group with 2 EC2 instances. I also have 3 services, and each service has its own task definition. Each EC2 instance runs one Docker container per service, so I have 3 Docker containers on each EC2 instance.
Each Docker container simply runs a Spring Boot app. Since I have 3 services, I have 3 Spring Boot apps; one container only ever runs one of these 3 apps. Each app exposes a RESTful API, with methods like POST and GET, under URLs like /service1 or /service1/resource1. One important point here is that I'm using dynamic port mapping on the container's host.
I have an external (internet-facing) ALB on port 443, which has 3 target group. Depending on the URL, the request will go to one of the 3 apps (or containers).
My problem is that sometimes app A needs to make an HTTP request to app B. My EC2 instances live in a private subnet, while my ALB lives in a public subnet. So if I use my ALB to make HTTP requests from inside one container to another, the request goes out through a NAT, and since the public IP of the NAT is not part of the ALB's security group, it can't connect on port 443. I have 2 ways to make this work:
In the security groups of the ALB, whitelist 0.0.0.0/0. I don't want to do that, since the entire world would have access.
In the security group of the ALB, whitelist the public IP of the NAT. I'm not sure about this approach. Is it recommended?
A third option I'm trying to implement is a third load balancer, an internal one. But I'm lost here: as per the AWS docs you can only assign 1 load balancer to your service, and since we are using dynamic port mapping I don't see a way to manually create an ALB and use the dynamically assigned port.
How do you guys achieve this kind of connectivity between containers, where one container consumes a service that another provides?
As a final comment, I use CloudFormation for everything. No manual setup from the console.
Thanks,
You may try to whitelist your NAT gateway's public IP as a host with a /32 mask. It is a fairly normal approach, because you have already exposed endpoints to the public internet via the ALB. You only need to take care to update the security rules if you destroy or replace the NAT gateway, because its IP may change.
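Since the question mentions everything is in CloudFormation, the /32 whitelist might look like this (the ALB security group and EIP references are placeholders; NatGatewayEip is assumed to be an AWS::EC2::EIP resource, whose !Ref yields the public IP):

```yaml
# Hypothetical sketch: allow only the NAT gateway's public IP into the ALB.
Resources:
  AlbIngressFromNat:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref AlbSecurityGroup
      IpProtocol: tcp
      FromPort: 443
      ToPort: 443
      CidrIp: !Sub "${NatGatewayEip}/32"   # NAT gateway's Elastic IP, /32 mask
```

Keeping the rule as a separate SecurityGroupIngress resource means replacing the NAT gateway only touches this one resource on the next stack update.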

How to configure AWS internet facing LB SecurityGroup for internal and external requests

I'm having a hard time figuring out how to set the correct SecurityGroup rules for my LoadBalancer. I have made a diagram to try and illustrate this problem, please take a look at the image below:
I have an internet facing LoadBalancer ("Service A LoadBalancer" in the diagram) that is requested from "inhouse" and from one of our ECS services ("Task B" in the diagram). For the inhouse requests, i can configure a SecurityGroup rule for "Service A LoadBalancer" that allows incoming request to the LoadBalancer on port 80 from the CIDR for our inhouse IP's. No problem there. But for the other ECS service, Task B, how would i go about adding a rule (for "Service A SecurityGroup" in the diagram) that only allows requests from Task B? (or only from tasks in the ECS cluster). Since it is an internet facing loadbalancer, requests are made from public ip of the machine EC2, not the private (as far as i can tell?).
I can obviously make a rule that allow requests on port 80 from 0.0.0.0/0, and that would work, but that's far from being restrictive enough. And since it is an internet facing LoadBalancer, adding a rule that allows requests from the "Cluster SecurityGroup" (in the diagram) will not cut it. I assume it is because the LB cannot infer from which SecurityGroup the request originated, as it is internet-facing - and that this would work if it was an internal LoadBalancer. But i cannot use an internal LoadBalancer, as it is also requested from outside AWS (Inhouse).
Any help would be appriciated.
Thanks
Frederik
We solve this by running separate internet-facing and internal load balancers. You can have multiple ELBs or ALBs (ELBv2) for the same cluster. Assuming your ECS cluster runs on an IP range such as 10.X.X.X, you can open 10.X.0.0/16 for internal access on the internal ELB. Just make sure the ECS cluster SG is also open to the ELB. Task B can reach Task A over the internal ELB, as long as you use the DNS name of the internal ELB when making the request. If you hit the IP behind a public DNS name, it will always be a public request.
However, you may want to think long term about whether you really need a public ELB at all. Instead of IP restrictions, the next step is usually to run a VPN such as OpenVPN, so you can connect into the VPC and access everything on the private network. We generally only run internet-facing ELBs for things we truly want on the internet, such as services for external customers.
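The two-load-balancer setup described above can be sketched in CloudFormation roughly as follows (subnet, VPC, and CIDR values are placeholders): a second, internal ALB sits alongside the existing internet-facing one, and only the cluster's private range is opened on it.

```yaml
# Hypothetical sketch: a second, internal ALB for task-to-task traffic.
Resources:
  InternalAlb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internal           # DNS name resolves to private IPs inside the VPC
      Type: application
      Subnets:
        - !Ref PrivateSubnetA
        - !Ref PrivateSubnetB
      SecurityGroups:
        - !Ref InternalAlbSecurityGroup
  InternalAlbSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP from the cluster's private range only
      VpcId: !Ref Vpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 10.0.0.0/16   # the ECS cluster's VPC range
```

Tasks then call each other via the internal ALB's DNS name, so the traffic never leaves the VPC.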