I have an AWS load balancer with 2 EC2 instances serving an API in Python.
If 10K requests come in at the same time and the AWS health check comes in, the health check fails and there is a 502/504 gateway error, because the instances restart due to the failed health check.
I checked the instances' CPU usage, which maxed out at 30%, and memory, which maxed out at 25%.
What's the best option to fix this?
A few things to consider here:
Keep the health check API fairly light, but ensure that the health check API/URL indeed returns correct responses based on the health of the app.
You can configure the health check to mark the instance as failed only after X failed checks. You can tune this parameter and the health check frequency to match your needs.
You can disable the EC2 restart caused by failed health checks by setting your Auto Scaling group's health check type to EC2. This will prevent instances from being terminated due to a failed ELB health check.
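As a rough illustration of the last two points, here is a boto3 sketch. The target group ARN and Auto Scaling group name are placeholders, and the threshold values are examples to tune, not recommendations:

```python
import boto3

# Placeholder identifiers for illustration.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-api/abc123"
ASG_NAME = "my-api-asg"

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Require several consecutive failed checks before a busy instance
# is marked unhealthy, and space the checks out.
elbv2.modify_target_group(
    TargetGroupArn=TARGET_GROUP_ARN,
    HealthCheckIntervalSeconds=30,
    HealthCheckTimeoutSeconds=10,
    UnhealthyThresholdCount=5,  # the "X failed checks" above
    HealthyThresholdCount=2,
)

# Switch the Auto Scaling group to EC2 health checks so a failed
# ELB health check no longer triggers instance replacement.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    HealthCheckType="EC2",
)
```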
Related
I have an ELB (a Network Load Balancer with a couple of Auto Scaling Groups as target groups) that has periodic health check failures (i.e. some instances are marked as unhealthy and then recover after a few minutes). The health check is a simple static page (i.e. /health_check).
The timing seems to coincide with periods of heavy network load on the host (downloading large files from S3), but I want more information (e.g. whether they are failing the active health check or the passive health check, as described in https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html).
However, I am not able to find any health check history or logs from the ELB. All my searches turn up the access logs for ELB (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-access-logs.html), which cover the actual user requests.
Is this health check history / log accessible anywhere?
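As far as I know, ELB does not expose a health-check history, but you can approximate one by polling the current target health and logging transitions yourself; the Reason and Description fields hint at why a target is failing. A minimal boto3 sketch, with a placeholder target group ARN:

```python
import time

import boto3

# Placeholder ARN for illustration.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123"

elbv2 = boto3.client("elbv2")

# Poll target health once a minute and print each target's state;
# redirect this to a file to build your own health check history.
while True:
    resp = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
    for desc in resp["TargetHealthDescriptions"]:
        health = desc["TargetHealth"]
        print(
            time.strftime("%Y-%m-%dT%H:%M:%S"),
            desc["Target"]["Id"],
            health["State"],
            health.get("Reason", ""),
            health.get("Description", ""),
        )
    time.sleep(60)
```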
I've been using AWS CodeDeploy to push our applications live, but it always takes ages on the BlockTraffic and AllowTraffic steps. Currently, I have an Application Load Balancer (ALB) with three EC2 nodes initially (behind an Auto Scaling group). So, if I do a CodeDeploy OneAtATime, the whole process takes up to 25 minutes.
The load balancer had connection draining set to 300 s. I thought that was the reason for the delay. However, I disabled connection draining and got the same results. I then enabled connection draining with a timeout of 5 seconds and still got the same results.
Further, I found out that CodeDeploy depends on the ALB health check settings, according to the AWS documentation:
After an instance is bound to the ALB, CodeDeploy waits for the status of the instance to be healthy ("inService") behind the load balancer. This health check is done by ALB and depends on the health check configuration.
So I tried setting low timeouts and thresholds in the health check settings. Even those changes didn't reduce the deployment time much.
Can someone direct me to a proper solution to speed up the process?
The issue is the deregistration of instances from the AWS target group. You want to change the target group's deregistration_delay.timeout_seconds attribute; by default it is 300 s, which is 5 minutes (see the target group attributes section of the Elastic Load Balancing docs).
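A minimal boto3 sketch of that change, with a placeholder target group ARN and an illustrative 30-second delay:

```python
import boto3

# Placeholder ARN for illustration.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123"

elbv2 = boto3.client("elbv2")

# Lower the deregistration delay from the 300 s default so the
# BlockTraffic step finishes faster.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "deregistration_delay.timeout_seconds", "Value": "30"},
    ],
)
```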
A WordPress application is deployed on AWS Elastic Beanstalk behind a load balancer, and I sometimes see an ELB 5XX error. To require a higher number of failed checks before an instance is marked OutOfService, I set the Unhealthy Threshold to 10. But sometimes the health check fails and the environment health is Severe, and I get the error "% of the requests to the ELB are failing with HTTP 5xx". I checked the ELB access logs: some requests get a timeout (504) error, and after a consecutive number of 504s, the ELB marks the instance OutOfService. I am trying to work out which request is failing.
What I don't know is whether it is possible to bring the instance back InService as quickly as possible, because sometimes an instance is OutOfService for 2-3 hours, which is really bad. Is there any good way to handle this situation? I am really in trouble here; it looks like once the instance is out of service, there is nothing I can do. I am relatively new to AWS. Please help.
To solve this issue:
1) HTTP 504 means timeout. The resource that the load balancer is accessing on your backend is failing to respond. Determine the health check path from the AWS console.
2) In your browser, verify that you can access the health check path while going around the load balancer. This may mean temporarily assigning an EIP to the EC2 instance. If the load balancer health check is "/test/myhealthpage.php", then use "http://REPLACE_WITH_EIP/test/myhealthpage.php". For HTTPS listeners, use https in your path.
3) Debug why the path that you specified is timing out and fix it.
Note: Health check paths should not point to pages that perform complicated tests or operations. A health check should be a quick and simple GO / NO GO type of page.
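For instance, a bare-bones GO / NO GO endpoint might look like the following; Flask and the /health_check path are assumptions for illustration, not details from the question:

```python
from flask import Flask

app = Flask(__name__)

# Match whatever path the load balancer health check is configured with.
@app.route("/health_check")
def health_check():
    # Keep this fast: no database queries, no external calls.
    return "OK", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```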
I am using an AWS ELB to report the status of my instances to an Auto Scaling group, so that a non-functional instance is terminated and replaced by a new one. The ELB is configured to ping TCP:3000 every 60 seconds, with a timeout of 10 seconds before considering it a health check failure. The unhealthy threshold is 5 consecutive checks.
However, the ELB always reports my instances as healthy and InService, even though I periodically come across an instance that is timing out, which I then have to terminate manually and replace with a new one, despite the ELB reporting it as InService the whole time.
Why does this happen?
After investigating a little, I found the cause:
I was trying to assess the health of the app through an API call to a web app running on the instance, waiting for the response to time out to declare the instance faulty. But a TCP health check only verifies that the port accepts a connection; it says nothing about the application's responses. I needed to use HTTP as the protocol, calling port 3000 with a custom path through the load balancer, instead of TCP.
Note: The API needs to return a status code of 200 for the load balancer to consider it healthy. It now works perfectly.
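For a Classic Load Balancer, that fix could be applied with boto3 roughly as follows; the load balancer name and the /health path are placeholders:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Replace the bare TCP connect check with an HTTP GET on a custom
# path, keeping the original interval, timeout, and threshold.
elb.configure_health_check(
    LoadBalancerName="my-elb",
    HealthCheck={
        "Target": "HTTP:3000/health",  # was "TCP:3000"
        "Interval": 60,
        "Timeout": 10,
        "UnhealthyThreshold": 5,
        "HealthyThreshold": 2,
    },
)
```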
I'm running servers in an AWS Auto Scaling group. The running servers are behind a load balancer, and I'm using the ELB to manage the Auto Scaling group's health checks. When servers start and join the Auto Scaling group, they currently join the load balancer immediately.
How much time (i.e. what health check grace period) do I need to wait before letting them join the load balancer?
Should it be only after the servers are in the running state?
Should it be only after the servers have passed the system and instance status checks?
There are two types of Health Check available for Auto Scaling groups:
EC2 Health Check: This uses the EC2 status check to determine whether the instance is healthy. It only operates at the hypervisor level and cannot see the health of an application running on an instance.
Elastic Load Balancer (ELB) Health Check: This causes the Auto Scaling group to delegate the health check to the Elastic Load Balancer, which is capable of checking a specific HTTP(S) URL. This means it can check that an application is correctly running on an instance.
Given that your system is using an ELB health check, Auto Scaling will trust the results of the ELB health check when determining the health of each EC2 instance. This can be slightly dangerous because, if the instance takes a while to start, the health check could incorrectly mark the instance as Unhealthy. This, in turn, would cause Auto Scaling to terminate the instance and launch a replacement.
To avoid this situation, there is a Health Check Grace Period setting (in seconds) in the Auto Scaling group configuration. This indicates how long Auto Scaling should wait until it starts using the ELB health check (which, in turn, has settings for how often to check and how many checks are required to mark an instance as Healthy/Unhealthy).
So, if your application takes 3 minutes to start, set the Health Check Grace Period to a minimum of 180 seconds (3 minutes). The documentation does not state whether the timing starts from the moment that an instance is marked as "Running" or whether it is when the Status Checks complete, so perform some timing tests to avoid any "bounce" situations.
In fact, I would recommend setting the Health Check Grace Period to a significantly higher value (e.g. double the amount of time required). This will not impact the operation of your system, since a healthy instance will start serving traffic as soon as the ELB health check is satisfied, which happens sooner than the Auto Scaling grace period expires. The worst case is that a genuinely unhealthy instance will be terminated a few minutes later, but this should be a rare occurrence.
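A boto3 sketch of that recommendation, assuming a 3-minute startup time; the group name is a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Application takes ~180 s to start; use double that as the grace
# period, per the recommendation above.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=360,  # 2 x 180 s startup time
)
```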
The documentation (now) states: "The grace period starts after the instance passes the EC2 system status check and instance status check."
So, at least according to the mid-2015 AWS documentation, the answer is "after the servers have passed the system and the instance status checks." This is how we've set up our environment, and although I haven't done precise timings, it appears to be correct.
If you closely monitor your CloudFormation stack events, you will see a success signal once your ASG has been updated.
The time difference between when the ASG starts updating and when it receives the success signal is the health check grace period.
It is generally recommended to set the health check grace period to double the application startup time. For example, if your application takes 10 minutes to start, set the health check grace period to 20 minutes.
The reason is that you never know: your application might throw some kind of error and go through several retries.
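A rough boto3 sketch of measuring that time difference from stack events; the stack name and the ASG's logical resource ID are placeholders:

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Fetch recent events (returned newest first) and measure how long the
# ASG resource took to go from UPDATE_IN_PROGRESS to UPDATE_COMPLETE.
events = cloudformation.describe_stack_events(StackName="my-stack")["StackEvents"]
asg_events = [
    e for e in events
    if e["LogicalResourceId"] == "AppAutoScalingGroup"
    and e["ResourceStatus"] in ("UPDATE_IN_PROGRESS", "UPDATE_COMPLETE")
]
started = next(e["Timestamp"] for e in reversed(asg_events)
               if e["ResourceStatus"] == "UPDATE_IN_PROGRESS")
finished = next(e["Timestamp"] for e in asg_events
                if e["ResourceStatus"] == "UPDATE_COMPLETE")
print("ASG update took", finished - started)
```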