How does ALB request health checks during ECS rolling update?

I deployed my service on ECS Fargate containers using the rolling deployment method. It has an ALB associated with the task. During the deployment process, ECS deploys a new container and marks the current one as Inactive, then destroys the current one after 300 seconds, as defined in the ALB Deregistration delay field.
What I don't understand is how many health checks the ALB sends to the new instance. When both the old and new tasks are running under the service, do both of them respond to requests from the load balancer, or only the new task? If there is one unhealthy response, will the ALB roll back to the previous one?

Related

AWS ALB health check delay

How can I delay AWS ALB health check until all services have been started on the newly created EC2 instance by autoscaling?
My current health check points to a login page on the app server, but some services are not fully up when the health check starts returning success. Is there a way I can add a 2 minute delay before the LB starts the health check, allowing the newly created instance to load all services?
I don't think there is a direct way to do this. You can configure the interval time and healthy threshold to meet your requirements.
For example, you can set the interval to 30s and the healthy threshold to 5, so that your target will only be marked healthy about 30s x 5 = 150s after it is registered.
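A minimal sketch of that configuration with the AWS CLI (the target group ARN is a placeholder):

# Require 5 consecutive successful checks, 30 seconds apart, before the
# target is considered healthy (roughly 150s after registration).
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123 \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 5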

Application Load Balancer doesn't hold requests until new instances are launched by an autoscaling group

On AWS, I created an auto-scaling group with an automated scaling policy that adds a new instance when the Application Load Balancer metric Average Request Count Per Target goes above 5.
The metric is measured on the target group as the number of HTTP requests sent to the Load Balancer per target.
The ASG is set to min 1, max 10 and desired 1.
I tried sending 200 requests to the ELB and recording the IP of the instance that received each request in a database. I found that most of the requests were sent to the same instance, some of them received a Gateway Timeout (504), and a few of them received nothing.
The ASG launches new instances, but only after the requests have already been sent, so the new instances receive nothing from the load balancer.
I think the reason is that CloudWatch reports the average number of requests per instance no more often than once a minute, and launching a new instance perhaps takes longer than the timeout of the request.
Q: Is there a method to keep the requests in a queue or increase their timeout until the new instances exist, and then distribute these requests across all instances instead of losing them?
Q: If the user sends many requests at the same time, I want the ASG to start scaling immediately and to distribute these requests uniformly across the instances, keeping a specific average number of requests per instance.
The solution was to use Amazon Simple Queue Service. We forwarded the messages from API Gateway to the queue. Then a CloudWatch alarm was used to launch ECS Fargate tasks when the queue size was > 1; the tasks read messages from the queue and process them. When the queue is empty, another alarm was used to set the number of tasks in the ECS service to 0.
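A rough sketch of the scale-out alarm with the AWS CLI (the queue name, alarm name, and scaling policy ARN are placeholders; the alarm action would invoke an Application Auto Scaling policy that raises the ECS service's desired count):

# Fire when the queue has at least one visible message.
aws cloudwatch put-metric-alarm \
  --alarm-name queue-has-work \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=my-queue \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example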

How does ALB distribute requests to Fargate service during rolling update deployment?

I deploy a Fargate service in a cluster and use rolling update deployment. I configured an ALB in front of my service, and it performs a health check as well. During the upgrade, I can see that my current task is marked as INACTIVE and the new task is deployed. Both of the two tasks are in the running state.
I understand that the ALB is doing a health check on the newly deployed tasks, but it keeps the two tasks running for 5 minutes.
I have a few questions about this deployment period of time.
Does the ALB distribute user requests to my new tasks before they pass the health check?
If the answer to the first question is no, does the ALB distribute user requests to the new service after it passes the health check but before the old service is down?
If the second answer is yes, then there will be two versions of tasks running inside my service to serve user requests for 5 minutes. Is this true? How can I make sure it only sends requests to one version at a time?
I don't want to change the deployment method to BLUE/GREEN. I want to keep the rolling update at the moment.
The ALB will not send traffic to a task that is not yet passing health checks, so no to #1. The ALB will send traffic to both old and new tasks while deploying, so yes to #2. As soon as a replacement task is available, the ALB will start to drain the task it is replacing; the default time for that is 5 minutes, and a draining task will not receive traffic, so sort of no to #3. The "sort of" part is that you will have some time during which versions A and B of your service are both deployed. How long that lasts depends on the number of tasks and how long it takes for them to start receiving traffic.
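If the 5-minute overlap is driven mainly by the deregistration delay, it can be shortened on the target group. A sketch with the AWS CLI (the target group ARN is a placeholder):

# Drain deregistering targets for 60 seconds instead of the 300-second default.
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123 \
  --attributes Key=deregistration_delay.timeout_seconds,Value=60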
The only way I can think of to send all traffic to one version and then hard cut over to the other is to create a completely new target group each time, keeping the old one active. Then, once the new target group is healthy, switch to it. You'd have to change the listener rules in the ALB as you do that.
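A sketch of that cut-over with the AWS CLI (both ARNs are placeholders):

# Point the listener's default rule at the new target group in a single step.
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-new-tg/xyz789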
By the way, what is happening now is what I would call a rolling deployment.

What should the health check path be in the target group created for a Fargate service

I deployed a Docker image using AWS Fargate. When I created a service out of the task definition, the logs show that Tomcat has no errors and the app is up and running, but new tasks are constantly being spun up because the health check is failing.
Health Checks (On target group tied to the service)
Protocol: HTTP
Path: /Sampler/data/ping
Port: traffic port
What is the right path for health check?
I tried giving the service name too, but it did not work,
for example: /servicename/data/ping
Can you please suggest what I am missing?
I have deployed the same war file locally by running docker run -p 8080:8080 sampler:latest (the same image pushed from local to ECR), and when I hit the URL http://localhost:8080/Sampler/data/ping, I get a 200 status code.
Dockerfile
FROM tomcat:9.0-jre8-alpine
COPY target/Sampler-*.war $CATALINA_HOME/webapps/Sampler.war
EXPOSE 80
The path for the health check depends on your application. Based on the information you have provided, I suspect the issue is related to healthCheckGracePeriodSeconds:
healthCheckGracePeriodSeconds
The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started.
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_Service.html
When ECS tasks take a long time to start, Elastic Load Balancing health checks can mark the task as unhealthy, and the service scheduler will shut the task down.
You can specify a health check grace period as an ECS service definition parameter. This instructs the service scheduler to ignore ELB health checks for a predefined time period after a task has been instantiated.
https://aws.amazon.com/about-aws/whats-new/2017/12/amazon-ecs-adds-elb-health-check-grace-period/
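A sketch of setting that grace period on an existing service with the AWS CLI (the cluster and service names are placeholders):

# Ignore ELB health check results for the first 120 seconds of a task's life.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --health-check-grace-period-seconds 120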

Grace Period? - AWS EC2 Container Service and Elastic Load Balancers

When an elastic load balancer (ELB) is associated with an auto-scaling group, it is possible to specify a grace period during which new EC2 instances will not be terminated even if they are marked as unhealthy by the ELB. Is it possible to specify a similar grace period, during which new ECS tasks will not be killed and restarted by their associated ECS service, even if the ECS instance on which a task is running has been marked unhealthy by the ELB?
Update:
In our current use case, the docker container being run as an ECS task contains a JBoss instance that loads a number of caches on startup. These caches can take several minutes to load. However, the ECS service registers the container instance with the ELB, as soon as the container has started. This means that traffic can be routed to the new container before it is ready to accept it. We could increase the health check interval and the "healthy/unhealthy thresholds" on the ELB to prevent the ELB from routing traffic to the instance and the ECS service from restarting the container until the caches have been loaded. However, increasing the health check interval and thresholds is not desirable, because if an instance is marked as unhealthy after the caches have been loaded, the ECS service should restart the container as soon as possible (which necessitates a shorter health check interval and smaller thresholds).
Thus, is it possible to apply a grace period during which traffic will not be routed to a new container by the ELB and the ECS service will not restart the container (even if it fails the health checks)? Or failing that, are there any suggestions regarding a solution for our use case?
In case anyone else finds themselves here via Google: as noted in the linked support thread, this has since been added to AWS; it is called healthCheckGracePeriodSeconds. https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html#ECS-CreateService-request-healthCheckGracePeriodSeconds
After a discussion with the support team, it turns out that ECS cannot support our current use case.
There is a workaround that solves one of the issues we are facing. That workaround is to create a separate, essential, health-check container in the same ECS task as the actual application container. The purpose of the health-check container is to monitor the application container and determine when the application has started completely. If it detects that the application has failed to start, it exits, causing the ECS service to cycle the task. The ELB is then configured to perform its health checks against the health-check container, which will always report that it is up via the relevant port. This workaround prevents the ECS service from cycling the ECS task due to failed health checks.
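A rough sketch of such a task definition, registered with the AWS CLI (the family, image names, and ports are all hypothetical; the ELB health check would target the sidecar's port, 8081 here):

# Register a task with the application container plus an essential
# health-check sidecar that the load balancer probes instead of the app.
cat > task-def.json <<'EOF'
{
  "family": "app-with-healthcheck",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-app:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [{"containerPort": 8080, "hostPort": 8080}]
    },
    {
      "name": "health-check",
      "image": "my-health-check:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [{"containerPort": 8081, "hostPort": 8081}]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://task-def.json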
However, the ELB will begin routing traffic to the application container immediately. It will do so even if the application container is not yet ready to receive traffic (for example, because it is still waiting for a cache to load). Currently, there is no way to delay the ELB from sending traffic to the application container, as the ECS service provides no support for a grace period. We have managed to work around this issue by delivering messages to our application containers via SQS and only having them pull from the queue once their caches are fully loaded. However, we have future use cases (such as serving web requests) where this is not a feasible option. To this end, I intend to raise a feature request for the grace period.
As an aside, both Kubernetes (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#application-health-checking) and Marathon (https://mesosphere.github.io/marathon/docs/health-checks.html) already support this option for health checking, if someone reading this is happy not to use a managed service.
Use the ECS agent environment variable ECS_CONTAINER_STOP_TIMEOUT.
See https://github.com/aws/amazon-ecs-agent/issues/126
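A sketch of setting that variable on an ECS container instance (the 2m value is just an example; on the Amazon ECS-optimized AMI the agent reads /etc/ecs/ecs.config):

# Give containers up to 2 minutes to stop before the agent kills them.
echo "ECS_CONTAINER_STOP_TIMEOUT=2m" >> /etc/ecs/ecs.config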