Cannot call API with ALB's DNS - amazon-web-services

I have an API on AWS ECS, behind an Application Load Balancer. It has two target groups for blue/green deployment with CodeDeploy. The deployment works and the targets are healthy, so I assume the app runs and the ports are configured correctly. The app uses port 3000, and the listener is set to HTTP:3000 as well.
The load balancer is assigned to the default VPC security group, and for testing purposes I added an inbound rule that accepts all traffic from 0.0.0.0/0, so in theory it should be accessible to anyone. When I try to call the health check endpoint at {alb_dns}/rest/health (the same path the health checker probes, and that works), I get an ECONNREFUSED error. Why can't I access it?
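One thing worth double-checking on the client side: with the only listener on HTTP:3000, the port has to appear in the URL. A plain http:// request goes to port 80, where no listener exists, and a refused TCP connection surfaces as ECONNREFUSED. A minimal sketch, with a placeholder DNS name:

```
# Hits port 80 by default - no listener there, so the connection is refused
curl http://my-alb-1234567890.eu-west-1.elb.amazonaws.com/rest/health

# Targets the HTTP:3000 listener explicitly
curl http://my-alb-1234567890.eu-west-1.elb.amazonaws.com:3000/rest/health
```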

Related

Which port does the GKE HTTPS load balancer use for health checks?

I want to know which port GKE uses when performing the health checks of the backend services.
Does it use the service port declared in the service YAML, or other specific ports? I ask because I'm having trouble getting the backend services healthy.
Google Cloud has special routes for the load balancers and their associated health checks.
Routes that facilitate communication between Google Cloud health check probe systems and your backend VMs exist outside your VPC network, and cannot be removed. However, your VPC network must have ingress allow firewall rules to permit traffic from these systems.
For health checks to work you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends. You can refer to this documentation.
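As a concrete sketch of such a rule: the documented source ranges for Google Cloud's health check probers are 130.211.0.0/22 and 35.191.0.0/16. The rule name, network, and backend port below are placeholder assumptions, not values from the question:

```
# Allow Google Cloud health check probers to reach backends on the serving port
# (8080 is a placeholder - use the port your backends actually serve on)
gcloud compute firewall-rules create allow-gcp-health-checks \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:8080 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```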

AWS Network Load Balancer failed to connect with EC2 instance if EC2 instance is not open to public

I have been struggling with this problem for two days and can't get it working.
I have this flow:
external world --> AWS API Gateway --> VPC Link --> Network Load Balancer --> my single EC2 instance
Before introducing the API Gateway, I want to first make sure the Network Load Balancer --> my single EC2 instance part works.
I have set up the EC2 instance correctly. There is a TypeScript / Express.js API service running on port 3001.
I have also set up a Network Load Balancer and a Target Group, the NLB is listening and forwarding requests to port 3001 of the target group (which contains the EC2 instance).
Here is the NLB (screenshot omitted). Note that the NLB has a VPC! This raises the question below, and I find it confusing.
The listener forwards requests to docloud-backend-service, the target group described above, and you can see that its health check has passed.
I have configured the security group of my EC2 instance with this rule:
1. Allow All protocol traffic on All ports from my VPC
(specified using CIDR notation `171.23.0.0/16`);
Now, when I do curl docloud-backend-xxxxx.elb.ap-northeast-1.amazonaws.com:3001/api/user, the command fails with a timeout.
Then, after I add this rule:
2. Allow All protocol traffic on All ports from ANY source (`0.0.0.0/0`);
Now, when I do curl docloud-backend-xxxxx.elb.ap-northeast-1.amazonaws.com:3001/api/user, the API service gets the request and I can see logs generated on the EC2 instance.
Question:
The second rule opens up the EC2 instance to public, which is dangerous.
I want to limit access to my EC2 instance port 3001 such that only the AWS API Gateway, or the NLB can access it.
The NLB has no security group to be configured. It has a VPC though. If I limit the EC2 instance such that only its own VPC can access it, it should be fine, right?
The first rule does exactly that. Why does it fail?
The NLB has a VPC. Requests go from API Gateway to NLB, then from NLB to EC2 instance. So from the EC2 instance's perspective, the requests come from an entity in the VPC. So the first rule should work, right?
Otherwise, why would AWS assign a VPC to the NLB at all, and why would I see the VPC on the NLB's description console?
I want to limit access to my EC2 instance port 3001 such that only the AWS API Gateway, or the NLB can access it.
For both instance-based and IP-based target groups, we can enable or disable whether the requester's IP address is preserved:
This setting can be found by going to the target group -> Actions -> Edit target attributes.
What does this mean from the perspective of our application's security group?
If we enable it (which is the default for instance-type target groups), the application will see the traffic as if it came directly from the end client. This means we have to allow inbound traffic from 0.0.0.0/0 on port 3001.
If we disable it, the application will see the source of the traffic as the private IP address of the Network Load Balancer. In this case, we can limit the inbound traffic to the private IP address of the NLB, or to the CIDR range of the subnet in which the NLB is placed.
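For reference, this is the NLB target group's client IP preservation attribute, and it can also be changed from the CLI. A minimal sketch, with a placeholder target group ARN:

```
# Disable client IP preservation so the instance sees the NLB's private IP
# as the source; the security group can then be scoped to the NLB's subnet
aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:ap-northeast-1:123456789012:targetgroup/docloud-backend-service/0123456789abcdef \
    --attributes Key=preserve_client_ip.enabled,Value=false
```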

ALB configuration to Rasa server getting unhealthy checks

We are trying to configure an ALB for a Dockerized Rasa server running on AWS EC2 on port 5005.
We have attached the Rasa server to the ALB, but the health checks come back unhealthy with a 504 Gateway Timeout, even though we get a response from the Rasa server's IP address.
We are not able to get healthy checks from the ALB after configuring the '/' path, yet in the browser we get a healthy response if we use the Rasa server's IP address instead of the ALB DNS name.
The private subnets, security groups, and VPC are configured the same for the ALB and the Rasa server.
Can you help us here?
The 504 timeout indicates that the load balancer is unable to talk to the target.
As you're able to talk to the container directly (which indicates it is running), the most likely reason is that one of the security groups on the host is not allowing inbound access from the load balancer.
Ensure that it allows inbound access from your load balancer on the specific port.
Other than this, check that the target group is configured to use the correct port; it is easy to accidentally leave it set to port 80, which would lead to the ALB attempting to health check or route traffic via port 80.
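A common way to grant that inbound access is a rule on the instance's security group that references the ALB's security group as the source, rather than a CIDR range. A sketch, with both group IDs as placeholders:

```
# Allow the ALB's security group to reach the Rasa container's port (5005)
# sg-0123456789abcdef0 = the instance's SG, sg-0fedcba9876543210 = the ALB's SG
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5005 \
    --source-group sg-0fedcba9876543210
```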

Attempting to reach AWS Network Load Balancer leads to timeout

I have an internal (not Internet-facing) NLB set up in a VPC. The NLB routes to a target group containing only one target, and health checks are succeeding.
However, I am unable to make JDBC calls using the NLB's DNS. The NLB has a listener on port 10000 and I have EC2 instances running an application in the same VPC. When these EC2 instances attempt to make a JDBC call to jdbc:hive2://nlb-dns-name.com:10000/orchard, they time out trying to connect. I've logged into the EC2 instances and attempted to ping the NLB DNS record, which also times out.
Please let me know if there is something obvious I'm overlooking here. Thank you!
Edit: The EC2 instances' Security Groups allow all outbound traffic to the same VPC. The SGs of the NLB's target allow inbound traffic and the health check is passing. The NLB listens on port 10000 and routes to a target group containing the master node of one EMR cluster, which listens to JDBC connections on port 10000.
However, I'm reasonably sure the error is not in the NLB -> target routing, since the health checks pass. I believe the error is in the instance -> NLB leg, given the timeout, and I'm not sure I'm doing that part correctly.
I realized I never posted an update: the issue was that I was adding my targets by instance ID. For some reason, when I added them by IP address instead, connections started succeeding.
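For anyone reproducing that switch: a target group's target type is fixed at creation, so moving from instance to IP targets means creating a new target group with --target-type ip and registering the target's private IP. A sketch with placeholder names, IDs, and addresses:

```
# Create a TCP target group that registers targets by IP rather than instance ID
aws elbv2 create-target-group \
    --name jdbc-targets-ip \
    --protocol TCP \
    --port 10000 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Register the EMR master node by its private IP address
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/jdbc-targets-ip/0123456789abcdef \
    --targets Id=10.0.1.25
```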

AWS - how to access OpsWorks app with ELB?

My app was easily deployed on 3 instances using OpsWorks. I can access it fine using the instance IPs.
My question is: how can I access it using load balancer?
The ELB says all 3 instances are InService, but typing the public DNS into a browser, it loads forever and shows nothing.
Testing the ELB public DNS on http://whatsmydns.com shows IPs that aren't from my instances.
Am I doing something wrong?
I have added the public DNS to my app as the hostname.
There are a couple of things to check:
1. Check that your load balancer listeners are configured to listen on, and pass traffic to, the same port the instances are listening on (for example, HTTP 80 => HTTP 80, HTTPS 443 => HTTPS 443).
2. Check that the security group of the web servers allows traffic from the load balancer. Though if you can access your instances directly via browser, I'm guessing they are open to 0.0.0.0/0, so that shouldn't be an issue here.
3. Check that the security group of the load balancer allows public access on all needed ports (typically 80 and 443).
4. Check that the ELB health check is not failing (under ELB instances you can see whether the instances are in service or not). If it says "Out of service", that's the problem: you need to make sure the health check URL is accessible and returns 200.
The DNS name of your load balancer is different from your instances: it resolves to the IP addresses of the nodes the load balancer itself runs on, and AWS usually has at least 3 of those behind the scenes.
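You can confirm this from any machine with dig (or nslookup); the hostname below is a placeholder:

```
# The addresses returned belong to the ELB's own nodes, not to your instances
dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com
```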