ECS Task Definition Update Not Triggering Deployment - amazon-web-services

When I update the task definition for one of my ECS services, the service does not re-deploy with the new task definition until I run aws ecs update-service ... --force-new-deployment. Is there some kind of configuration option that controls this behaviour? I've noticed that some of my services do re-deploy when I update the Task Definition, but others don't and I'm not sure why that is.
I've provisioned my infrastructure with terraform.

You need to set force_new_deployment = true on the aws_ecs_service resource in your Terraform code.
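A minimal sketch of what that might look like (the resource names and the referenced cluster/task definition are placeholders, not taken from your configuration):

resource "aws_ecs_service" "example" {
  name            = "example-service"
  cluster         = aws_ecs_cluster.example.id          # assumes an existing cluster resource
  task_definition = aws_ecs_task_definition.example.arn # assumes an existing task definition resource
  desired_count   = 2

  # Forces a fresh deployment on every apply, even when nothing else
  # about the service has changed.
  force_new_deployment = true
}

Note that if the service's task_definition reference itself changes (i.e. it points at a new revision ARN), ECS starts a new deployment on its own; force_new_deployment mainly matters when the reference stays the same, for example when an image is re-pushed under an unchanged tag. That difference may be why some of your services redeploy on a task definition change and others don't.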

Related

What's the proper way of updating ECS in terraform with blue/green deployment?

I have declared my ECS modules with the required IAM roles, application load balancer, auto scaling, and codedeploy with blue/green deployment.
Every time I update my task definition, or anything related to the ECS service, I get the error:
Error: updating ECS Service (arn:aws:ecs...): InvalidParameterException: Unable to update task definition on services with a CODE_DEPLOY deployment controller. Use AWS CodeDeploy to trigger a new deployment.
I get it: I should use CodeDeploy to deploy. But the deployment controller was declared when I created the service and set everything up, so I guess changing the code to declare the task definition as a data source instead of a resource would just be a dirty quick fix.
What's the proper way of updating task definitions to prevent this error from happening?
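For reference, a rough sketch of the "declare it as a data source" idea mentioned above (names are illustrative, and this is only the workaround the asker describes, not necessarily the recommended fix):

# Assumes a task definition managed elsewhere in the configuration as
# aws_ecs_task_definition.app; the data source looks up the latest ACTIVE
# revision of that family, including revisions registered outside Terraform.
data "aws_ecs_task_definition" "app" {
  task_definition = aws_ecs_task_definition.app.family
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id   # assumes an existing cluster resource
  desired_count   = 1
  task_definition = "${aws_ecs_task_definition.app.family}:${data.aws_ecs_task_definition.app.revision}"

  deployment_controller {
    type = "CODE_DEPLOY"
  }
}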

When is the Fargate container reloaded after the ECR repository is updated?

I set up ECS Fargate with a container.
My pipeline is like this:
CodeCommit -> CodePipeline (CodeBuild) -> ECR repository.
Fargate uses the image from the ECR repository.
When I push code to CodeCommit, CodePipeline runs and the ECR repository is updated.
However, the container in Fargate is not updated.
If I stop the tasks manually, the tasks restart, but the container itself is not updated.
What should I do about this?
Do I need to make another pipeline (again?) from the ECR repository to Fargate?
You need to register a new task definition revision so your container starts using the updated image, and then stop the old task and start a new one.
Or, if this is an ECS service, update the service to use the new task definition revision.

Task revisions are being created for an ECS Fargate service, even though I am not changing any task definitions

Task revisions are being created for my ECS Fargate service, even though I am not changing any task definitions.
I use AWS CodePipeline for CI/CD (AWS CodePipeline + ECS Fargate service).
CI/CD works perfectly, but every time a new deployment occurs it creates a new task definition revision.
Unfortunately, it is not possible to prevent CodePipeline's ECS job worker from creating a new task definition revision every time it runs; this behaviour is by design.

How to update AWS ECS cluster instances with Terraform?

I have an existing ECS cluster (EC2) created using Terraform. I want to install some software on those EC2 instances using Terraform. One of our business requirements is that we cannot destroy and re-create the instances; we have to do it on the existing instances.
How should I approach this?
It sounds like your organization is experimenting with running its services in Docker and ECS. I also assume you are using AWS ECR to host your Docker images (although technically it doesn't matter).
When you create an ECS cluster it is initially empty. If you were to re-run your terraform template it should show that there are no updates to apply. To take the next step you will need to define an ecs-service and an ecs-task-definition. This can either be done in your existing terraform template, in a brand new template, or manually (AWS web console or awscli). Since you are already using terraform I assume you will continue to use it. Personally, I would keep everything in one template, but again it is up to you.
An ecs-service is essentially the runtime configuration for your ecs-tasks.
An ecs-task-definition is a set of docker containers to run. In the simplest case it is a single docker container. This is where you specify the docker image(s) you will use, how much CPU and RAM the container gets, etc.
In order for your running ecs service(s) to be updated without your EC2 nodes ever going down, you just need to update the docker image within the ecs-task-definition portion of your terraform template (and of course run terraform).
With all this background info, you can now add a Terraform ecs-service and a Terraform ecs-task-definition to your terraform template.
Since you did not provide your template I cannot say exactly how this should be set up, but an example terraform template of a complete ECS cluster running nginx can be found below:
Complete Terraform ECS example
more examples can be found at
Official terraform ECS github examples
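As a rough sketch (deliberately minimal; names, image, and sizes are placeholders rather than something tailored to your template), the two resources might look like this:

resource "aws_ecs_task_definition" "app" {
  family = "app"
  cpu    = "256"
  memory = "512"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:1.25"   # changing this tag is what rolls out a new image
      essential = true
      portMappings = [{ containerPort = 80 }]
    }
  ])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id   # assumes your existing cluster resource
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
}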
You could run a provisioner attached to an always-triggered null_resource to run some process against the instances on every apply, but I'd strongly recommend you rethink your processes.
Your ECS cluster should be considered completely ephemeral, as with the containers running on them. When you want to update the ECS instances then destroying and replacing the instances (ideally in an autoscaling group) is what you want to do as it greatly simplifies things. You can read more about the benefits of immutable infrastructure elsewhere.
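If it helps to make that concrete, here is a rough sketch of EC2 container instances driven by an autoscaling group (the AMI filter, instance type, cluster name, and subnet variable are all assumptions, not taken from the question):

# Latest ECS-optimized Amazon Linux 2 AMI
data "aws_ami" "ecs_optimized" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"]
  }
}

resource "aws_launch_template" "ecs" {
  name_prefix   = "ecs-"
  image_id      = data.aws_ami.ecs_optimized.id
  instance_type = "t3.medium"

  # Register the instance with the cluster on boot
  user_data = base64encode(<<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
  EOF
  )
}

resource "aws_autoscaling_group" "ecs" {
  name_prefix         = "ecs-"
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.private_subnet_ids   # placeholder variable

  launch_template {
    id      = aws_launch_template.ecs.id
    version = "$Latest"
  }
}

Rotating the instances then becomes a matter of changing the launch template (for example, pointing at a new AMI) and letting the autoscaling group replace them, rather than mutating software on live hosts.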
If you absolutely couldn't do this then you'd most likely be best off using another tool, such as Ansible, entirely. You could choose to launch this via Terraform using a null_resource provisioner as mentioned above which would look something like the following:
resource "null_resource" "on_demand_provisioning" {
triggers {
always = "${uuid()}"
}
provisioner "local-exec" {
command = "ansible-playbook -i inventory.yml playbook.yml --ssh-common-args='-o StrictHostKeyChecking=no'"
}
}

AWS ECS: Monitoring the status of a service update

I am trying to migrate a set of microservices from Docker Swarm, to AWS ECS using Fargate.
I have created an ECS cluster. Moreover, I have initialized repositories using the ECR, each of which contains an image of a microservice.
I have successfully come up with a way to create new images and push them into the ECR. In fact, with each change in the code, a new docker image is built, tagged, and pushed.
Moreover, I have created a task definition that is linked to a service. This task definition contains one container and all the necessary information. Its service specifies that the task will run in a VPC, is linked to a load balancer, and has a target group. I am assuming that every new deployment uses the image with the "latest" tag.
So far with what I have explained, everything is clear and is working well.
Below is the part that is confusing me. After every new build, I would like to update the service so that new tasks with the updated image get deployed. I am using the CLI to do so with the following command:
aws ecs update-service --cluster <cluster-name> --service <service-name>
Typically, after running the command, I monitor the deployment logs under the Events tab and check the state of the service using the following command:
aws ecs describe-services --cluster <cluster-name> --service <service-name>
Finally, I tried to simulate a case where the newly built image contains bad code, so the new tasks cannot be deployed successfully. What I witnessed is that Fargate keeps trying (without stopping) to deploy the new tasks. Moreover, aside from the event logs, the describe-services output does not contain relevant information other than what Fargate is doing (e.g., registering/deregistering tasks). I am surprised that I could not find any mechanism that instructs Fargate, or the service, to stop the deployment and roll back to the already existing one.
I found this article (https://aws.amazon.com/blogs/compute/automating-rollback-of-failed-amazon-ecs-deployments/), which provides a solution. However, it is fairly complicated and assumes that each new deployment is triggered by a new task definition, which is not what I want.
Therefore, considering what I have described above, I hope you can answer the following questions:
1) Using CLI commands (for automation purposes), is there a way to instruct Fargate to automatically stop the current deployment after it fails to deploy new tasks a few times?
2) Using CLI commands, is there a way to monitor the current status of the deployment? For instance, when performing a service update on Docker Swarm, the terminal generates live logs on the update process.
3) After a failed deployment, is there a way for Fargate to signal an error code, flag, or message?
At the moment, ECS does not offer deployment status directly. Once you issue a deployment, there is no way to determine its status other than to continually poll for updates until you have enough information to infer from them. In addition, unexpected container exits are not logged anywhere; you have to search through the failed tasks. The way I capture them is with a CloudWatch Events rule that triggers a Lambda upon task state change.
I recommend you read: https://medium.com/@aaron.kaz.music/monitoring-the-health-of-ecs-service-deployments-baeea41ae737
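For what it's worth, a rough Terraform sketch of that pattern (the Lambda function is assumed to exist elsewhere as aws_lambda_function.task_watcher; this only wires up the rule and target):

resource "aws_cloudwatch_event_rule" "ecs_task_state" {
  name = "ecs-task-state-change"

  # Fire whenever an ECS task stops, so the Lambda can inspect the stopped reason
  event_pattern = jsonencode({
    source        = ["aws.ecs"]
    "detail-type" = ["ECS Task State Change"]
    detail = {
      lastStatus = ["STOPPED"]
    }
  })
}

resource "aws_cloudwatch_event_target" "task_watcher" {
  rule = aws_cloudwatch_event_rule.ecs_task_state.name
  arn  = aws_lambda_function.task_watcher.arn
}

# An aws_lambda_permission allowing events.amazonaws.com to invoke the
# function is also required; it is omitted here for brevity.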
As of now, you have a way to do this:
aws ecs wait services-stable --cluster MyCluster --services MyService
The previous example pauses and returns only after it can confirm that the service running on the cluster is stable. It will exit with a return code of 255 after 40 failed checks.
To have a failing deployment stopped and rolled back automatically, enable the ECS deployment circuit breaker when creating your service:
aws ecs create-service \
--service-name MyService \
--deployment-configuration "deploymentCircuitBreaker={enable=true,rollback=true}" \
{...}
References:
Service deployment check.
Circuit Breaker
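If the service is managed with Terraform, the same circuit breaker can be enabled on the aws_ecs_service resource (a minimal sketch; the names and the task definition reference are placeholders):

resource "aws_ecs_service" "example" {
  name            = "MyService"
  cluster         = "MyCluster"          # cluster name or ARN
  task_definition = "MyTaskDefinition"   # family, family:revision, or full ARN
  desired_count   = 1

  # Stop a failing deployment after repeated task failures and roll back
  # to the last steady state.
  deployment_circuit_breaker {
    enable   = true
    rollback = true
  }
}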