Replace ECS tasks in cluster using AWS cli - amazon-web-services

I'm trying to replace the current tasks in an ECS cluster.
Context:
I have 2 tasks (and a maximum of 4)
Every time I make a change to the Docker image, the image is built, tagged, and pushed to ECR (through Jenkins). I want to add a timer so that after x minutes the current tasks are replaced with new ones (also in the CI/CD).
I tried
aws ecs update-service --cluster myCluster --service myService --task-definition myTaskDef
but it didn't work.
I also tried several suggestions I found on Stack Overflow and in forums, but in the best case I ended up with 4 tasks, when I just want to replace the current ones with new ones.
Is this possible using the CLI?

First, as mentioned by @Marcin, when --force-new-deployment is not specified and the task definition revision has not changed, ECS will ignore the deployment.
Second, the extra replicas you are seeing after a deployment come from minimumHealthyPercent and maximumPercent, the parameters the service scheduler uses to determine the deployment strategy.
minimumHealthyPercent
If minimumHealthyPercent is below 100%, the scheduler can ignore
desiredCount temporarily during a deployment. For example, if
desiredCount is four tasks, a minimum of 50% allows the scheduler to
stop two existing tasks before starting two new tasks. Tasks for
services that do not use a load balancer are considered healthy if
they are in the RUNNING state. Tasks for services that use a load
balancer are considered healthy if they are in the RUNNING state and
the container instance they are hosted on is reported as healthy by
the load balancer.
maximumPercent
The maximumPercent parameter represents an upper limit on the number of running tasks during a deployment, which enables you to define the deployment batch size. For example, if desiredCount is four tasks, a maximum of 200% starts four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available).
Modifies the parameters of a service
So with minimumHealthyPercent set to 50%, the scheduler will stop one existing task before starting a new one. If you set it to 0%, you may see bad gateway errors from the load balancer, because the scheduler is then allowed to stop both existing tasks before starting the two new ones.
If you are still not able to control the flow, pass --desired-count as well:
aws ecs update-service --cluster test --service test --task-definition test --force-new-deployment --desired-count 2
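To see why you can end up with 4 tasks, it helps to work out the bounds the scheduler respects. This is a minimal sketch (plain shell arithmetic, not an AWS call) of how desiredCount, minimumHealthyPercent, and maximumPercent translate into the task-count window during a deployment:

```shell
# Task-count window the ECS scheduler stays within during a deployment:
# floor(desired * min% / 100) .. floor(desired * max% / 100)
desired=2
min_pct=100   # default minimumHealthyPercent
max_pct=200   # default maximumPercent

min_tasks=$(( desired * min_pct / 100 ))
max_tasks=$(( desired * max_pct / 100 ))

echo "during deployment: between $min_tasks and $max_tasks tasks may run"
# With the defaults and desiredCount=2, ECS may briefly run 4 tasks:
# it starts 2 new tasks before stopping the 2 old ones.
```

Lowering maximumPercent to 100 (and minimumHealthyPercent below 100) is what forces the scheduler to stop old tasks before starting new ones instead of running both sets side by side.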

Usually you would use --force-new-deployment parameter of update-service:
Whether to force a new deployment of the service. Deployments are not forced by default. You can use this option to trigger a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest ) or to roll Fargate tasks onto a newer platform version.
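Putting that into the CI/CD step described in the question, a sketch could look like the function below (cluster/service names are the hypothetical ones from the question; both aws subcommands shown are standard AWS CLI commands):

```shell
# deploy_new_tasks: force ECS to replace the running tasks with fresh ones
# pulled from the same image tag (e.g. my_image:latest) after a push to ECR.
deploy_new_tasks() {
  cluster=$1
  service=$2
  aws ecs update-service \
    --cluster "$cluster" \
    --service "$service" \
    --force-new-deployment >/dev/null
  # Block until the rollout settles, so the pipeline stage fails fast
  # if the new tasks never become healthy.
  aws ecs wait services-stable --cluster "$cluster" --services "$service"
}

# In the Jenkins stage, after the image push:
# deploy_new_tasks myCluster myService
```

The `wait services-stable` call is optional but useful in a pipeline: it turns a fire-and-forget update into a step whose exit code reflects whether the deployment actually converged.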

Related

AWS ECS Fargate deployment optimization not working

My situation right now is that I have a CI/CD pipeline set up in GitHub Actions, this workflow does the job of deploying my app container into ECS Fargate with a set of configs needed to work. To manage my infrastructure I use Terraform to set up an Application Load Balancer and the service inside my ECS app Cluster among a lot of other things that I use in my stack.
So before I started doing some optimization the pipeline took around 15 minutes (this is way too much for hotfixes, which is the main reason I'm doing this). After some changes in the Dockerfile and Docker build stage I managed to take this down to around 8 minutes, of which 3 minutes are used by the GitHub release tag and the Docker build and push of the image to ECR, and the remaining 5 minutes are used by the ECS deploy.
The thing is I found this documentation from AWS in Best Practices - Speeding up deployments for ECS and decided to do some changes in this stage too. After reading Load balancer health check parameters, Load balancer connection draining and Task deployment I changed these configs:
(Terraform) In the Application Load Balancer
deregistration_delay from 100 to 70
health_check interval from 30 to 5
health_check healthy_threshold from 5 to 3
health_check timeout to 4
(Terraform) In the ECS Service
health_check_grace_period_seconds from 100 to 20
(task-definition) In the containerDefinitions:
stopTimeout = 10
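For reference, the changes above would look roughly like this in Terraform (resource names and the elided arguments are placeholders, not the question's actual config):

```hcl
resource "aws_lb_target_group" "app" {
  # ... name, port, protocol, vpc_id ...

  deregistration_delay = 70

  health_check {
    interval          = 5
    healthy_threshold = 3
    timeout           = 4
    # ... path, matcher ...
  }
}

resource "aws_ecs_service" "app" {
  # ... name, cluster, task_definition, desired_count ...

  health_check_grace_period_seconds = 20
}
```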
So I was expecting to go down from 150 to 15 seconds just from health_check changes and even more because of the other settings but at the time of forcing a new deploy to check the results I got almost the exact same deploy time with the same 5 minutes used in the ECS stage.
So I would like to know what setting or process I am missing to make the changes work. I looked around in my AWS console and the values were changed, so the Terraform apply did work, but the ECS stage is definitely still taking the same time.
I find that basic ECS Fargate deployments are much slower than ECS EC2 deployments. This makes sense, as Fargate has more work to do: it needs to provision a host, etc., whereas EC2 hosts are already there, running, and may have some of the required Docker layers already downloaded.
I generally find Fargate deployments take 2.5-4mins (eu-west-1) so you really need to identify where the lag is.
Some things worth checking, which might help point you in the correct direction:
When do health checks start on the new task? If they start at 4mins then the deployment is only taking 1 minute.
The overall deployment time includes time to stop + deregister the old task(s) - how long is that taking?
How long does it take for you to start your application on an empty docker service?
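As a sanity check on where the time goes: the time-to-healthy contributed by the health check parameters alone is roughly interval × healthy_threshold. A quick sketch with the before/after numbers from the question:

```shell
# Approximate time for a new target to be marked healthy by the ALB:
# it must pass healthy_threshold consecutive checks, one every interval seconds.
old_time=$(( 30 * 5 ))   # interval=30s, healthy_threshold=5
new_time=$((  5 * 3 ))   # interval=5s,  healthy_threshold=3
echo "old: ${old_time}s, new: ${new_time}s"
# The saving here is ~135s at most. The remaining minutes are spent elsewhere:
# Fargate provisioning, image pull, app startup, and draining old tasks.
```

So if the overall stage time barely moved, the bottleneck is almost certainly in one of the other phases listed above, not the health checks.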

ECS deployment and matching number of running tasks

Scenario:
ECS Fargate.
Say I have a “desired count” of 2 tasks.
The system takes on some load and auto scales to 6 tasks.
If I deploy during this time, ECS seems to kill off my actual running capacity back down to 2 tasks. This causes service failures b/c the system can no longer handle the actual load and must now scale back up.
All the docs I’ve come across indicate using “minimum healthy percent” and “maximum percent” to help control deployment sizes, but these refer back to the DESIRED count of tasks, not the actual number running on the system being deployed to.
Any idea if there is a way to say: “please just match the number of tasks running, or some percentage of such when spinning up new tasks from deploy”?
Deploy is Cloudformation via CodePipeline.
The DesiredCount parameter in CFN is now optional. See this issue for background.
From the issue:
We are making following improvements to ECS integration with Cloudformation:
- DesiredCount becomes an optional field in CFN CreateService and DesiredCount=1 would be used as the default value if it is missing
- UpdateService will also omit DesiredCount when it is missing in CFN template
Customers who expect the current behavior (i.e. UpdateService will use the DesiredCount set in the CFN template) can just add DesiredCount to their CFN templates. Existing customers wanting the new behavior can get it by removing DesiredCount from their CFN templates. The changes will be released soon.
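In practice this means omitting the property from the service resource keeps the autoscaled count intact across deployments. A minimal CloudFormation sketch (other required properties elided, logical names hypothetical):

```yaml
Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    TaskDefinition: !Ref TaskDef
    # DesiredCount intentionally omitted: UpdateService then leaves the
    # current (possibly autoscaled) task count unchanged during a deployment.
    DeploymentConfiguration:
      MinimumHealthyPercent: 100
      MaximumPercent: 200
```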

Autoscaling Kubernetes based on number of Jobs on AWS EKS

My cluster sometimes gets a "burst" of information and generates a large number of Kubernetes Jobs at once. And in other times I have ~0 active jobs.
I'm wondering how can I make it autoscale the number of nodes to continuously be able to process all these jobs in a reasonable time-frame.
I specifically use AWS EKS and each job takes a few minutes to complete.
EKS allows you to deploy the cluster autoscaler, so when a new Job cannot be scheduled due to a lack of available CPU/memory, an extra node will be added to the cluster.
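The key part of the cluster autoscaler's Deployment on EKS is the auto-discovery flag, which tells it which Auto Scaling groups it may grow and shrink. A minimal sketch of the container spec (image tag and cluster name are placeholders to adjust):

```yaml
# Fragment of the cluster-autoscaler Deployment's pod spec.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.2
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # Discover node groups tagged for this (hypothetical) cluster name:
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
```

With this in place, Jobs that stay Pending for lack of CPU/memory trigger a scale-up, and idle nodes are removed again once the burst of Jobs has drained.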

Deploying new docker image with AWS ECS

I have an ECS cluster with a service in it that is running a task I have defined. It's just a simple flask server as I'm learning how to use ECS. Now I'm trying to understand how to update my app and have it seamlessly deploy.
I start with the flask server returning Hello, World! (rev=1).
I modify my app.py locally to say Hello, World! (rev=2)
I rebuild the docker image, and push to ECR
Since my image is still named image_name:latest, I can simply update the service and force a new deployment with: aws ecs update-service --force-new-deployment --cluster hello-cluster --service hello-service
My minimum percent is set to 100% and my maximum is set to 200% (using rolling updates), so I'm assuming that a new EC2 instance should be set up while the old one is being shut down. What I observe (continually refreshing the ELB HTTP endpoint) is that the rev=? in the message alternates back and forth: (rev=1) then (rev=2) without fail (round robin, not randomly).
Then after a little bit (maybe 30 secs?) the flipping stops and the new message appears: Hello, World! (rev=2)
Throughout this process I've noticed that no more EC2 instances have been started. So all this must have been happening on the same instance.
What is going on here? Is this the correct way to update an application in ECS?
This is the normal behavior and it's linked to how you configured your minimum and maximum healthy percent.
A minimum healthy percent of 100% means that at every moment there must be at least 1 task running (for a service that should run 1 instance of your task). A maximum healthy percent of 200% means that you don't allow more than 2 tasks running at the same time (again for a service that should run 1 instance of your task). This means that during a service update ECS will first launch a new task (reaching the maximum of 200% and avoiding dropping below 100%) and when this new task is considered healthy it will remove the old one (back to 100%). This explains why both tasks are running at the same time for a short period of time (and are load balanced).
This kind of configuration ensures maximum availability. If you want to avoid this, and can allow a small downtime, you can configure your minimum to 0% and maximum to 100%.
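If you want to try the small-downtime variant, the deployment configuration can be changed from the CLI as well; this sketch wraps the real `--deployment-configuration` parameter of update-service, using the cluster/service names from this question:

```shell
# Switch the service to "stop old task first, then start the new one".
# Trade-off: a brief window with zero running tasks during each deployment.
allow_brief_downtime() {
  aws ecs update-service \
    --cluster hello-cluster \
    --service hello-service \
    --deployment-configuration "minimumHealthyPercent=0,maximumPercent=100"
}

# allow_brief_downtime   # then force a new deployment as usual
```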
About your EC2 instances: they represent your "cluster" = the hardware that your service use to launch tasks. The process described above happens on this "fixed" hardware.

Updating ECS service with Terraform fails to place a new task

After pushing a new image of my container I use Terraform apply to update the task definition. This seems to work fine but in the ECS service list of tasks I can see the task as inactive and I have an event:
service blahblah was unable to place a task because no container instance met all of its requirements. The closest matching container-instance [guid here] is already using a port required by your task.
The thing is, the site is still active and working.
This is more of an ECS issue than a Terraform issue: Terraform is updating your task definition and pointing the service at the new task definition, but ECS is unable to schedule new tasks onto the container instances because you are (presumably) defining a specific port that the container must run on and mapping it directly to the host, or using host networking instead of bridge (or the newer awsvpc networking mode).
ECS has a couple of parameters to control the behaviour of an update to the service: minimum healthy percent and maximum healthy percent. By default these are set to 100% and 200% respectively meaning that ECS will attempt to deploy a new task matching the new task definition and wait for it to be considered healthy (such as passing ELB health checks) before terminating the old tasks.
In your case you have as many tasks as you have container instances in your cluster and so when it attempts to schedule a new task on to the cluster it is unable to place it because the port is already bound to by the old task. You could also find yourself in this position if you had placement constraints on your task/service.
Because the minimum healthy percent is set to 100% it is unable to schedule the removal of any of the old tasks that would then free up a placement option for a new task.
You could have more container instances in the cluster than you have instances of the task running which would allow ECS to deploy new tasks before removing old tasks from the other instances or you could change the minimum healthy percent (deployment_minimum_healthy_percent in Terraform's ECS service resource) to a number less than 100 that allows deployments to happen.
For example, if you normally deploy 3 instances of the task in the service then setting the minimum healthy percent to 50% would allow ECS to remove one task from the service before scheduling a new task matching the new task definition. It would then proceed with a rolling upgrade, making sure the new task is healthy before replacing the old task.
Setting the minimum healthy percent to 0% would mean that ECS can stop all of the tasks running before starting new tasks but this would obviously lead to a potential (but not guaranteed) service interruption.
Alternatively you could remove the placement constraint by switching away from host networking if that is viable for your service.
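In Terraform, the relaxed minimum described above is set directly on the service resource; a sketch (other required arguments elided, resource name hypothetical):

```hcl
resource "aws_ecs_service" "app" {
  # ... name, cluster, task_definition, desired_count = 3 ...

  # Allow ECS to stop one old task before placing its replacement,
  # freeing the host port for the task built from the new task definition.
  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 100
}
```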