I have the following situation: I have a VPC on AWS, and in this VPC an ECS Fargate cluster with multiple different tasks running. Additionally, I have a Site-to-Site VPN set up in this VPC for one of my partners.
Now, this partner has to send HTTP POST (SOAP, in fact) requests to one of my Fargate tasks. This should be possible only through the VPN, so the task can't be public-facing. For reasons I can't control, this partner requires a static IP to send requests to, so an ALB is not an option. So I need a way to assign a static private (within-VPC) IP to the Fargate task.
I've tried to achieve this with an NLB, but I'm not sure I can send HTTP requests to an NLB, since it's L4 rather than L7. Now my only option seems to be an EC2 instance with NGINX that simply forwards all requests to the task's ALB. I don't like this option because I don't have much experience with NGINX configuration.
Do you think there are any other options for me to achieve what I need?
Thanks in advance
I've tried to achieve this with an NLB, but I'm not sure I can send HTTP requests to an NLB, since it's L4 rather than L7.
NLB operates at L4, but of course you can use it for HTTP or HTTPS traffic. The only difference is that you won't be able to set up HTTP-type listener rules, because NLB listeners are TCP/UDP. That does not stop you from using it to distribute HTTP/HTTPS traffic among your Fargate tasks.
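An internal NLB can also pin a static private IP per subnet via subnet mappings, which would satisfy the partner's static-IP requirement directly. Below is a minimal boto3 sketch of that idea; the names, subnet/VPC IDs, and the 10.0.1.10 address are all placeholders, not values from your setup:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internal NLB with a fixed private IP in one subnet (placeholder IDs).
# PrivateIPv4Address pins the static within-VPC address the partner will use.
lb = elbv2.create_load_balancer(
    Name="partner-nlb",  # hypothetical name
    Type="network",
    Scheme="internal",   # reachable over the Site-to-Site VPN, not the internet
    SubnetMappings=[{"SubnetId": "subnet-0aaa", "PrivateIPv4Address": "10.0.1.10"}],
)["LoadBalancers"][0]

# Fargate tasks register by IP, so the target group must use TargetType="ip".
tg = elbv2.create_target_group(
    Name="soap-task-tg",  # hypothetical name
    Protocol="TCP",       # NLB listeners are TCP/UDP (L4)
    Port=80,
    VpcId="vpc-0bbb",     # placeholder
    TargetType="ip",
)["TargetGroups"][0]

# The listener forwards raw TCP; the HTTP/SOAP payload passes through untouched.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```

You would then point the ECS service at this target group, and ECS keeps the task IPs registered as tasks come and go.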
I have a setup with a couple of services running in ECS (separate frontends and backends). Now I have the requirement that outbound requests from the backends to some third-party APIs need to come from a static (Elastic) IP.
As I'm quite the novice with networking, I've been following this guide for routing requests to given IP addresses through the NAT.
Setup:
One VPC
3 subnets (2 for ECS services, the third for the NAT) - All public(?)
Application load balancers for the services.
Routing to the load balancers through Route53.
The way I've been testing this is to route either all traffic, or just traffic to my local IP, through the NAT gateway in the main route table instead of directly through the internet gateway. In both cases, when I try to access a frontend or backend, it never responds, and I don't see any traffic in the NAT's monitoring tab either. If I route the traffic directly to the IGW from the main route table, it obviously still works.
So I'd really appreciate some help here, since I'm not sure whether my setup is incompatible with the above solution, I'm doing something wrong, or I'm just overlooking something.
Edit: Did the sensible thing, as pointed out, and placed the services in private subnets.
If you have all your ECS tasks in public subnets, you can't mask them behind the NAT: a public subnet's default route points at the internet gateway, so each task talks to the internet with its own public IP and never touches the NAT gateway. Move the tasks into private subnets whose default route points at a NAT gateway sitting in a public subnet; the NAT gateway's Elastic IP then becomes the single static outbound address.
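For concreteness, here is a minimal boto3 sketch of the private-subnet/NAT wiring described above; every ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a *public* subnet and gets an Elastic IP. This EIP
# becomes the single static outbound address the third-party APIs will see.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public",     # placeholder public subnet
    AllocationId="eipalloc-123",  # placeholder Elastic IP allocation
)["NatGateway"]

# The gateway takes a minute to come up; wait before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

# The *private* subnets' route table sends all outbound traffic to the NAT.
ec2.create_route(
    RouteTableId="rtb-private",   # placeholder route table of the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGatewayId"],
)
```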
I have a PHP + Apache application running in ECS with an Application Load Balancer sitting in front of it. Everything works fine except when the application makes requests to itself; those requests time out.
Let's say the URL to reach the application is www.app.com, and in PHP I use Guzzle to send requests to www.app.com, but those requests always time out.
I suspect it is a networking issue with the ALB, but I do not know how to go about fixing it. Any help, please?
Thanks.
As you're using ECS, I would recommend replacing calls to the public load balancer with a service mesh, to keep all HTTP(S) traffic internal to the network. This will improve both security and performance (latency is reduced). AWS has an existing product that integrates with ECS to provide this functionality, named App Mesh.
Alternatively, if you want to stick with your current setup, you will need to check the following (a small diagnostic sketch follows this list):
If the ECS hosts are private, they will need to connect outbound via a NAT Gateway/NAT instance on the 0.0.0.0/0 route in their route table. For Fargate this depends on whether the task runs in a public or a private subnet.
If the host/container is public, it needs the internet gateway on the 0.0.0.0/0 route in its route table. Even if inbound access from the ALB to the host is private, the host always speaks outbound to the internet via an internet gateway.
Ensure that the inbound/outbound security groups allow access on HTTP or HTTPS.
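Here is the diagnostic sketch in boto3, assuming a placeholder subnet ID: it prints where a subnet's default route points, which tells you whether traffic leaves via a NAT gateway or an internet gateway.

```python
import boto3

SUBNET_ID = "subnet-0123456789abcdef0"  # placeholder: the subnet the tasks run in

ec2 = boto3.client("ec2")

# Route tables explicitly associated with the subnet (if none, the VPC's
# main route table applies instead).
tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [SUBNET_ID]}]
)["RouteTables"]

for table in tables:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            # NAT gateways appear as NatGatewayId, internet gateways as GatewayId.
            target = route.get("NatGatewayId") or route.get("GatewayId")
            print(f"default route -> {target}")
```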
I have configured and passed the health check for my AWS ELB (load balancer), but when I try to ping or send a packet to TCP port 9300, I can't find an IP address for the ELB.
I have an EC2 instance behind the ELB which has Elasticsearch running on it.
The ELB I configured is an internal ELB, so it doesn't have a public IP address.
I was wondering if there is a way I can SSH in, or do something to ping the ELB?
I am pretty new to AWS and have read all the troubleshooting guides on the official AWS website, but couldn't find a solution.
The goal I am trying to achieve is to test whether my internal Amazon ELB (load balancer) is working properly.
I got the internal ELB's IP address with the ping command; however, I am not able to ping or curl that IP address.
I want to know what I am doing wrong.
Is the way I am trying to access a private network incorrect?
An Elastic Load Balancer is presented as a single service, but actually consists of several Load Balancing servers spread across the subnets and Availability Zones you nominate.
When connecting to an Elastic Load Balancer, you should always use the DNS Name of the Elastic Load Balancer. This will then resolve into one of the several servers that are providing the load balancing service.
Load Balancers are designed to pass requests and return responses. The next time a user sends a request, it might be sent to a different back-end service. Thus, it is good for web-type traffic but not suitable for situations requiring a permanent connection, such as SSH. You can configure sticky sessions for HTTP connections that will use cookies to send the user to the same back-end server if required.
The classic Elastic Load Balancer also supports TCP protocol, but these requests are distributed in a round-robin fashion to the back-end servers so they are also not suitable for long-lasting sessions.
Bottom line: They are great for request/response traffic that needs to be distributed across multiple back-end servers. They are not suitable for SSH.
Side note: using PING to test services often isn't a good idea. ICMP is blocked by Security Groups by default, since it can expose services and isn't good from a security perspective. You should test connectivity by connecting via the expected protocols (e.g., HTTP requests) rather than using ping. This applies to testing EC2 connectivity, too.
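For example, a short Python check that exercises the ELB the way a real client would (run from an instance inside the VPC, since the ELB is internal; the DNS name is a placeholder):

```python
import socket

# Placeholder: use the DNS name from the ELB console, never a resolved IP.
ELB_DNS = "internal-my-elb-1234567890.us-east-1.elb.amazonaws.com"

# The name resolves to one of several load-balancer nodes and can change,
# which is why you should never hard-code one of the resolved IPs.
print(socket.gethostbyname_ex(ELB_DNS))

# Test the actual service port (9300 for the Elasticsearch transport here)
# instead of pinging: a successful TCP connect proves the path works.
with socket.create_connection((ELB_DNS, 9300), timeout=5):
    print("TCP connect to port 9300 succeeded")
```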
I'm having a hard time figuring out how to set the correct SecurityGroup rules for my LoadBalancer. I have made a diagram to try to illustrate the problem; please take a look at the image below:
I have an internet-facing LoadBalancer ("Service A LoadBalancer" in the diagram) that is requested from "inhouse" and from one of our ECS services ("Task B" in the diagram). For the inhouse requests, I can configure a SecurityGroup rule for "Service A LoadBalancer" that allows incoming requests to the LoadBalancer on port 80 from the CIDR of our inhouse IPs. No problem there. But for the other ECS service, Task B, how would I go about adding a rule (to "Service A SecurityGroup" in the diagram) that only allows requests from Task B (or only from tasks in the ECS cluster)? Since it is an internet-facing LoadBalancer, requests are made from the public IP of the EC2 machine, not the private one (as far as I can tell?).
I can obviously make a rule that allows requests on port 80 from 0.0.0.0/0, and that would work, but that's far from restrictive enough. And since it is an internet-facing LoadBalancer, adding a rule that allows requests from the "Cluster SecurityGroup" (in the diagram) will not cut it. I assume this is because the LB cannot infer which SecurityGroup the request originated from, as it is internet-facing - and that it would work if it were an internal LoadBalancer. But I cannot use an internal LoadBalancer, as it is also requested from outside AWS (inhouse).
Any help would be appreciated.
Thanks
Frederik
We solve this by running separate internet-facing and internal load balancers. You can have multiple ELBs or ALBs (ELBv2) for the same cluster. Assuming your ECS cluster runs on an IP range such as 10.X.X.X, you can open 10.X.0.0/16 for internal access on the internal ELB. Just make sure the ECS cluster SG is also open to the ELB. Task B can reach Task A over the internal ELB, assuming you use the DNS name of the internal ELB when making the request. If you hit the public DNS name, it will always be a public request.
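Sketched with boto3 (security group IDs are placeholders): one rule opens the internal ELB to the VPC range, and a second lets the ELB reach the cluster by referencing the ELB's security group rather than a CIDR.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow VPC-internal callers (e.g. Task B) to hit the internal ELB on port 80.
ec2.authorize_security_group_ingress(
    GroupId="sg-internal-elb",  # placeholder: SG attached to the internal ELB
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "VPC-internal callers"}],
    }],
)

# Allow the internal ELB into the cluster by SG reference instead of a CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-ecs-cluster",   # placeholder: the cluster SecurityGroup
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-internal-elb"}],
    }],
)
```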
However, you may want to think long term about whether you really need a public ELB at all. Instead of IP restrictions, the next step is usually to run a VPN such as OpenVPN so you can connect into the VPC and access everything on the private network. We generally only run internet-facing ELBs if we truly want something on the internet, such as for external customers.
Kind of an unusual setup here:
We have an SMTP service running on Tomcat / Elastic Beanstalk on AWS in an auto-scaling group behind an ELB load-balancer.
Now, for one of our clients, we need a static IP for the SMTP service. Since this is not possible with the out-of-the-box load balancer on AWS, we have a separate HAProxy instance transparently routing the :25 traffic through the AWS load balancer.
For some reason, HAProxy chokes after exactly 3 SMTP calls. After that, connections either time out or take minutes to go through.
The interesting part is that the following configurations work perfectly fine:
Calling the SMTP service on the AWS load balancer directly
Load-balancing the Elastic Beanstalk nodes through HAProxy directly
The same setup with HTTP calls on port 80, instead of SMTP on port 25
Help is really appreciated
That sounds like EC2 rate limiting what appears -- to the system -- to be "outbound" SMTP from your HAProxy instance.
You're accessing the ELB from HAProxy by one of its outside addresses, and this causes your traffic to be treated as Internet-bound.
In order to maintain the quality of Amazon EC2 addresses for sending email, we enforce default limits on the amount of email that can be sent from EC2 accounts. If you wish to send larger amounts of email from EC2, you can apply to have these limits removed from your account by filling out this form.
https://portal.aws.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request
One solution is to have those limits removed, but consider your next step carefully -- you'd be better served by load-balancing the EB nodes through HAProxy directly, using the nodes' private IP addresses -- because there is a charge for traffic to your ELB from within EC2 on the public IP.
Data Transfer OUT From Amazon EC2 To ... Amazon Elastic Load Balancing ... in the same Availability Zone ... Using a public or Elastic IP address ... $0.01/GB.
http://aws.amazon.com/ec2/pricing/
Not a massive charge, perhaps, but it should be an avoidable charge nonetheless.
Additionally, there's no way to configure HAProxy to look up the IP address behind the hostname you've configured for the ELB on each request. HAProxy resolves hostnames at startup, and if the ELB's IP address changes, HAProxy will not detect the change.
On the flip side, you can't reliably configure HAProxy to connect directly to the EB instances, since they're dynamically addressed as well.
The simplest way to prove this diagnosis correct is to set the ELB's TCP listener on another port, such as 587 (or 2025, or whatever), mapped to port 25 on the EB instances. Then have HAProxy target the traffic at port 587. That should eliminate the EC2 rate limiting on SMTP, although you still have the issue of the ELB's external IP changing to deal with.
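That diagnostic step could look like this with boto3 against the classic ELB API (the load balancer name is a placeholder):

```python
import boto3

elb = boto3.client("elb")  # classic ELB API, which Elastic Beanstalk uses here

# Add a TCP listener on 587 that maps to SMTP on port 25 on the EB instances;
# HAProxy then targets port 587, sidestepping EC2's outbound-SMTP throttling.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-eb-elb",  # placeholder
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 587,   # what HAProxy will now connect to
        "InstanceProtocol": "TCP",
        "InstancePort": 25,        # SMTP on the Elastic Beanstalk nodes
    }],
)
```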