I set up a load balancer in an Availability Zone and added some EC2 instances in the same zone. The health check works fine. Then I tried to access the load balancer from outside using its host name. Even though I can access the individual hosts behind the load balancer without any issue, I get a connection timed-out error when I try to connect to the load balancer:
$ wget -O test "http://xxxx.us-west-1.elb.amazonaws.com:8080/"
--2014-04-01 21:26:59-- http://xxxx.us-west-1.elb.amazonaws.com:8080/
Resolving xxxx.us-west-1.elb.amazonaws.com... 11.111.111.11
Connecting to xxxx.us-west-1.elb.amazonaws.com|11.111.111.11|:8080... failed: Connection timed out.
Listener configuration is like this (I don't know how to format this better):
Load Balancer Protocol | Load Balancer Port | Instance Protocol | Instance Port | Cipher | SSL Certificate
HTTP                   | 8080               | HTTP              | 8080          | N/A    | N/A
Any insight/comment would be appreciated.
It turned out that it was because I had set it up as a VPC load balancer. In that case I have to access it through a private IP address :)
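A quick way to check which scheme a load balancer was created with (a rough sketch using the classic ELB CLI; the load balancer name is a placeholder):

$ aws elb describe-load-balancers --load-balancer-names my-elb \
    --query 'LoadBalancerDescriptions[].Scheme'
# "internal"        -> resolves to private IPs, only reachable from inside the VPC
# "internet-facing" -> reachable from the public internet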
Related
I have created an Ubuntu EC2 instance and a load balancer that points to it. The rules on the listener for the load balancer look OK (ports 80 and 443). I can access the Apache2 HTTPD server on the EC2 instance in a browser using the instance's IP address and domain name (only port 80 is working, no HTTPS).
The inbound rules for the security group look OK, i.e. port 80 and port 443.
The health check is checking the server every 30 seconds, and is showing as healthy every time.
The main problem is that when I try to connect to the web server in a browser using the DNS name of the load balancer, the page times out and I do not see the request hit the Apache2 server logs. However, I can connect using the EC2 instance's domain name, and then I also see the request in the Apache2 server logs.
I wondered if anyone else has had the same issue, with the load balancer DNS name not routing traffic to the EC2 instance?
Many thanks,
Martin
EDIT: This was resolved by setting the correct security group.
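That typically means allowing the load balancer's security group to reach the instance on the listener ports. A rough sketch with the AWS CLI (the security group IDs are placeholders):

# Allow HTTP from the load balancer's security group into the instance's
# security group; repeat for port 443 if HTTPS is used.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0instance00000000 \
    --protocol tcp --port 80 \
    --source-group sg-0loadbalancer0000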
My ELB health check fails all the time, but I cannot figure out why (502 Bad Gateway).
I have a cluster (ECS) with a service that runs at least one task (Fargate), which is a Node API listening on ports 3000 and 3001 (3000 for HTTP and 3001 for HTTPS, since I cannot use ports below 1024).
I have an Elastic Load Balancer (application) that listens on port 80 and forwards the traffic to a target group with port 3000.
This target group has target type "ip address", since I use Fargate and not EC2 for my tasks.
So when a task starts up, I correctly see the private IP of the task being registered in the target group.
My health route is server_ip_address/health and it returns a plain 200 status code. This route works: I tried it directly against the public IP address of the task (quickly, before it got stopped because of the failing health check) and it returned a 200. I also tried it through the ELB DNS name (so my-elb.eu-west-1.elb.amazonaws.com/health) and that worked as well, so I don't understand why the health check fails.
Does anyone know what I missed?
In the screenshot of your targets in the target group, the port is shown as 80, which means that the load balancer (and the health check) will attempt to connect to the Fargate container on port 80.
You mentioned that it should be served from port 3000, so you will need to make sure the target group targets port 3000 instead. Once that is in place, and assuming the security group of the host allows inbound access, the 502 error should go away.
To be clear: the listener port is the port the client connects to, whereas the target port is the port the load balancer uses to connect to your target.
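As a rough sketch of that wiring (all names, IDs, and ARNs here are placeholders), the target group is created on port 3000 and the ECS service registers the container on that same port:

# Target group the ALB forwards to, using the container port and /health
$ aws elbv2 create-target-group --name api-tg --protocol HTTP --port 3000 \
    --target-type ip --vpc-id vpc-00000000 --health-check-path /health

# ECS service wiring: containerPort must match the port the API listens on
$ aws ecs create-service --cluster my-cluster --service-name api \
    --task-definition api-task --desired-count 1 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-00000000],securityGroups=[sg-00000000],assignPublicIp=ENABLED}" \
    --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/api-tg/abc123,containerName=api,containerPort=3000"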
I have a Network Load Balancer and an Application Load Balancer, they work just fine, but as I need fixed IPs/hostnames I decided to create a Global Accelerator for each one.
Global Accelerator with the Application Load Balancer works, but with the Network Load Balancer it doesn't respond...
Example:
ALB:
$ nc -zv <application-load-balancer>.awsglobalaccelerator.com 80
Connection to <application-load-balancer>.awsglobalaccelerator.com 80 port [tcp/*] succeeded!
NLB:
$ nc -zv <network-load-balancer>.awsglobalaccelerator.com 1883
nc: connect to <network-load-balancer>.awsglobalaccelerator.com port 1883 (tcp) failed: Connection timed out
I have changed the health check port configuration for the NLB to 1883, and the Global Accelerator is shown as "All healthy".
And as I said, the Network Load Balancer itself works:
$ nc -zv <network-load-balancer>.elb.sa-east-1.amazonaws.com 1883
Connection to <network-load-balancer>.elb.sa-east-1.amazonaws.com 1883 port [tcp/*] succeeded!
Both load balancers are very similar (similar instances, same VPC, subnets, etc).
AWS docs say I can use Global Accelerator with both types of Load Balancers.
I don't know why the NLB Global Accelerator doesn't respond.
What am I missing?
More info:
- I'm testing in sa-east-1 region (South America)
- I need Global Accelerator because the LBs are created by Terraform as part of every deployment, so the LB hostnames change with every build
- I could use Elastic IPs for the NLB, but to do that I'd need to change my existing subnets (and as far as I know I can't use Elastic IPs for ALBs)...
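One thing worth double-checking here (a diagnostic sketch; the ARNs are placeholders, and the Global Accelerator API is served from us-west-2): that the accelerator's listener for the NLB actually includes port 1883 in its port ranges, and that the endpoint group points at the NLB and reports it healthy.

$ aws globalaccelerator list-listeners --region us-west-2 \
    --accelerator-arn <accelerator-arn> --query 'Listeners[].PortRanges'
$ aws globalaccelerator list-endpoint-groups --region us-west-2 \
    --listener-arn <listener-arn> \
    --query 'EndpointGroups[].EndpointDescriptions[].[EndpointId,HealthState]'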
If a static IP is the only thing you need, I don't see the point of using Global Accelerator and an NLB together, because both provide static IPs.
For static IPs there are two options:
- Use Global Accelerator on top of an ALB (easy configuration, higher cost)
- Use an NLB and forward your requests to an ALB (more complex configuration, more cost-effective)
For the second option, you can use the link below as a reference.
https://www.bluematador.com/blog/static-ips-for-aws-application-load-balancer
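As an alternative sketch of the second option (this assumes the newer ALB-type target group support; ARNs and names are placeholders), the ALB can be registered directly as a target of the NLB:

# TCP target group whose target type is an ALB; the ALB needs a listener on the same port
$ aws elbv2 create-target-group --name alb-as-target --protocol TCP --port 80 \
    --target-type alb --vpc-id vpc-00000000

# Register the ALB as the target, then point an NLB listener at this group
$ aws elbv2 register-targets --target-group-arn <alb-as-target-tg-arn> \
    --targets Id=<alb-arn>
$ aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<alb-as-target-tg-arn>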
I created a load balancer and assigned one of my running EC2 instances to it. After creation, I navigated to the Target Groups section of the AWS console under Load Balancing, and when I selected the target group assigned to the load balancer, it showed the registered instance status as "Unhealthy", with a message above the registered instances pane saying "None of these Availability Zones contains a healthy target. Requests are being routed to all targets". While creating the load balancer, I selected all the subnets (Availability Zones).
The settings I used for the health check are below:
Protocol: HTTP
Path: /healthcheck.html
Port: traffic port
Healthy threshold: 3
Unhealthy threshold: 2
Timeout: 5
Interval: 10
Success codes: 200
So why is my registered instance status "Unhealthy", and how can I fix that so the status changes to "In service"?
Unhealthy indicates that the health check is failing for the instance.
Things to check:
- Check that the instance is running a web server
- Check that the page at /healthcheck.html responds with a valid 200 response
- Check that the instance has a security group that permits access on port 80 (HTTP)
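To narrow down which of these it is, the target group can report the exact failure reason (a rough sketch; the ARN is a placeholder):

$ aws elbv2 describe-target-health --target-group-arn <target-group-arn> \
    --query 'TargetHealthDescriptions[].TargetHealth'
# Reason codes such as Target.Timeout (security group / routing problem) or
# Target.ResponseCodeMismatch (page not returning 200) point at the cause.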
In my case, the health check configuration on the ALB is "/" over HTTPS.
I resolved it with the steps below.
Check the security groups: make sure the required ports are open from the ALB security group to the EC2 security group.
Log in to the server and check whether the IIS server's default site has port 443 bound, if your health check is on 443 (or whatever port you are using for health checks).
Use the curl command to troubleshoot the issue.
If you would like to check over HTTPS, use the command below to check the response. Use -k or --insecure to ignore SSL certificate issues.
curl https://[serverIP] -k
For an HTTP test, use the command below.
curl http://[serverIP]
If you are sharing the load balancer among several EC2 instances that run similar services, make sure each of your services runs on a different port; otherwise your service won't be reachable and the health check won't pass.
I have an Amazon Elastic Load Balancer which has a health checker. It attempts to connect to my Logstash instance running on some_ec2_instance:5000.
The ELB health check attempts to open a TCP connection to some_ec2_instance:5000. However, it never passes this health test. I can manually connect to the EC2 instance and confirm that Logstash is running and operational. I can also telnet localhost 5000 from the instance without any problems.
In addition, my security group allows inbound/outbound traffic on port 5000, so I don't think that is the problem.
Does anyone have suggestions for how to get the ELB health check to pass? Is there a /ping path, or a plugin that exposes such a path?
Assuming you are running Elasticsearch and Logstash on the same hosts, open port 9200 to your ELB and use an HTTP health check on that port.
Running ELB health checks against port 5000 (Logstash itself) overwhelms that port.
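A rough sketch of that change using the classic ELB CLI (the load balancer name and threshold values are assumptions):

# Switch the health check from TCP:5000 to an HTTP check against Elasticsearch;
# the ELB's security group also needs access to port 9200 on the hosts.
$ aws elb configure-health-check --load-balancer-name my-elb \
    --health-check Target=HTTP:9200/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3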