AWS ECS: Force redeployment on new latest image in ECR

I know that there are already countless questions in this direction, but unfortunately I was not able to find the right answer yet. If a post already exists, please just share the link here.
I have several GitLab CI/CD pipelines. The first pipeline uses Terraform to build the complete infrastructure for an ECS cluster based on Fargate. The second and third pipelines create nightly builds of the frontend and the backend and push the Docker images with the tag "latest" into the ECR of the (staging) AWS account.
What I now want to achieve is that the corresponding ECS tasks are redeployed so that the latest Docker images are used. I assumed there would be a way to do this via CloudWatch Events or similar, but I can't find a good starting point. A workaround would be to install the AWS CLI in the CI/CD pipeline and then do a service update with "force new deployment", but that doesn't seem very elegant to me. Is there a better way?
Conditions:
The solution must be fully automated (either in AWS or in gitlab CI / CD)
Switching to AWS CodePipeline is out of discussion
Ideally as close as possible to AWS standards. I would like to avoid extensive lambda functions that perform numerous actions due to their maintainability.
Thanks a lot!

OK, for everybody who is interested in an answer, I solved it this way:
I execute the following AWS CLI command in the CI/CD pipeline:
aws ecs update-service --cluster <<cluster-name>> --service <<service-name>> --force-new-deployment --region <<region>>
Not the solution I was looking for but it works.
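For reference, this is roughly what such a job could look like in `.gitlab-ci.yml` — a minimal sketch, assuming AWS credentials and region are provided as protected CI/CD variables; the job, cluster, and service names are placeholders:

```yaml
# .gitlab-ci.yml (sketch; job, cluster, and service names are hypothetical)
redeploy-staging:
  stage: deploy
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  script:
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION
    # are assumed to be set as protected CI/CD variables
    - aws ecs update-service --cluster staging-cluster --service backend-service --force-new-deployment
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```

The `rules` clause restricts the job to scheduled (nightly) pipelines, matching the nightly-build setup described in the question.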

As a general comment, it is not recommended to always push the same container tag, because rolling back to a previous version in case of failure then becomes really difficult.
One suitable option would be to use git tags.
Let's say you are deploying version v0.0.1
You can create a file app-version.tf containing a variable backend_version = "v0.0.1" that you reference in the task definition of the ECS service.
The same can be done for the container image build, using git describe to derive the tag.
So, you get a new task definition for every git tag and the possibility of rolling back just by changing a value in the terraform configuration.
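A minimal sketch of what this could look like in Terraform — all names, the account ID, and the region are hypothetical:

```hcl
# app-version.tf -- bumped on every release (hypothetical names)
variable "backend_version" {
  default = "v0.0.1"
}

# Task definition referencing the tagged image; a new git tag
# produces a new revision of this task definition.
resource "aws_ecs_task_definition" "backend" {
  family                   = "backend"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([{
    name      = "backend"
    image     = "123456789012.dkr.ecr.eu-central-1.amazonaws.com/backend:${var.backend_version}"
    essential = true
  }])
}
```

Rolling back is then just a matter of changing `backend_version` and running `terraform apply`.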

It is beneficial to refer to images using either digests or unique immutable tags. After the pipeline pushes the image, it could:
Grab the image's digest/unique tag
Create a new revision of the task definition
Trigger an ECS deployment with the new task definition.
As sgramo93 mentions, the big benefit is that rolling back your application can be done by deploying an older revision of the task definition.
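The three steps above could be sketched as a CI job like the following — a hedged example, assuming `jq` and the AWS CLI are available in the job image; the repository, cluster, service, account ID, and region are all placeholders:

```yaml
deploy:
  stage: deploy
  image:
    name: amazon/aws-cli:latest  # assumes jq is also available in the image
    entrypoint: [""]
  script:
    # 1. grab the digest of the image that was just pushed
    - DIGEST=$(aws ecr describe-images --repository-name backend --image-ids imageTag=latest --query 'imageDetails[0].imageDigest' --output text)
    # 2. create a new task definition revision that pins that digest
    - aws ecs describe-task-definition --task-definition backend --query 'taskDefinition' > taskdef.json
    - |
      jq --arg img "123456789012.dkr.ecr.eu-central-1.amazonaws.com/backend@$DIGEST" \
        '.containerDefinitions[0].image = $img
         | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
               .compatibilities, .registeredAt, .registeredBy)' \
        taskdef.json > new-taskdef.json
    - aws ecs register-task-definition --cli-input-json file://new-taskdef.json
    # 3. trigger a deployment; when no revision number is given,
    #    ECS uses the latest ACTIVE revision of the family
    - aws ecs update-service --cluster staging-cluster --service backend-service --task-definition backend
```

The `del(...)` filter strips the read-only fields that `describe-task-definition` returns but `register-task-definition` does not accept.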

Related

Deployment to AWS ECS

I am trying to automate the deployment of AWS ECS and couldn't find much information on how to do that, so I would like to see if there is any advice on what I can explore. Currently, we have an Azure DevOps pipeline that pushes the containerized image to ECR, and we manually create the task definition in ECS and update the service afterwards. Is there any way I can automate this with an Azure DevOps release?
A bit open-ended for a Stack Overflow-style question, but the short answer is that there are a lot of AWS-native alternatives to this. This is an example that implements the blue/green pattern (it can be simplified with a more generic rolling update deployment). If you are new to ECS, you probably want to consider using Copilot. This is an entry-level blog that hints at how to deploy an application and build a pipeline for it.

AWS ECS run latest task definition

I am trying to run the latest task definition image built from my GitHub deployment (CD). It seems that AWS creates task definition revisions, for example "task-api:1", "task-api:2", but my cluster is still running task-api:1 even though a new image has been built and a newer revision exists. So far I have to manually stop the old task and start a new one. How can I automate this?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service, when a new service deployment is started the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
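Since Terraform comes up elsewhere in this thread: a service using the default rolling update (ECS) deployment controller could look roughly like this sketch — the names, counts, and referenced resources are placeholders:

```hcl
resource "aws_ecs_service" "api" {
  name            = "task-api"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.api.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  # Rolling update behaviour: how far ECS may scale the service
  # above/below desired_count during a deployment.
  deployment_minimum_healthy_percent = 50
  deployment_maximum_percent         = 200

  deployment_controller {
    type = "ECS"
  }

  # network_configuration (subnets, security groups) omitted for brevity
}
```

With the tasks wrapped in a service like this, pointing the service at a new task definition revision triggers the rolling replacement automatically.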
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy and CodePipeline (and CodeCommit if you integrate your code repository in AWS with your CI/CD)
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
Judging from what you have said in the comments:
I created my task via the AWS console, I am running just the task definition on its own without service plus service with task definition launched via the EC2 not target both of them, so in the task definition JSON file on my Github both repositories they are tied to a revision of a task (could that be a problem?).
It's difficult to understand exactly how you have this set up and it'd probably be a good idea for you to go back and understand the services you are using a little better using the guide you are following or AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html).
Through the IaC like the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
This can be automated, but you would need to set up a process to do it.
I would recommend reading some guides on how to automate deployment and updates using the CDK. Amazon provides a good guide to get you started: https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use aws_codepipeline which will create new pipeline in AWS.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources to terraform.
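A heavily abbreviated sketch of the aws_codepipeline resource — the IAM role, artifact bucket, CodeStar connection, and repository names are assumed to be defined elsewhere and are purely illustrative:

```hcl
resource "aws_codepipeline" "ci" {
  name     = "my-pipeline"
  role_arn = aws_iam_role.codepipeline.arn

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "my-org/my-repo"
        BranchName       = "main"
      }
    }
  }

  # Build / Deploy stages omitted for brevity
}
```

An existing pipeline created through the console can be brought under management with `terraform import aws_codepipeline.ci <pipeline-name>` after writing a matching resource block.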
I see you submitted this eight months ago, so I am pretty sure you have your answer, but here are my thoughts for those who come across this question while searching.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere: either locally or inside a pipeline. A pipeline typically runs jobs in Docker containers that emulate a local environment and run commands to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So, to the main question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, or others), you need a code repository to put that code into, so a pipeline can consume it when deploying. There are other reasons to use a code repository, but your question is specifically about Terraform and its usage with a pipeline.
Now the classic chicken-or-egg argument: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This is ideal for saving time and leveraging automation. Newcomers: you will have to research Terraform state files, which you need to back up in some form once the pipeline is deployed.
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can configure it easily to hook your pipeline into Github to run jobs.
In both scenarios, you must set up Terraform and AWS either locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. Newcomers using a pipeline can leverage some of the pipeline links to get started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Also search YouTube for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.
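For the local-setup part, a minimal starting configuration might look like the following sketch — the bucket name, state key, and region are placeholders, and credentials are assumed to come from `aws configure` locally or an IAM role in the pipeline:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote state so the pipeline and your machine share one state file;
  # this addresses the state-backup point mentioned above.
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "ci/terraform.tfstate"
    region = "eu-central-1"
  }
}

provider "aws" {
  region = "eu-central-1"
}
```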

Continuous Deployment and Delivery on ECS Fargate with Circleci and Terraform

My goal is simple. Commit a change to the application and have it running live on AWS.
I am using Circleci and I have built all my infra with Terraform, so I want to only use Terraform to make changes to AWS. The question is, how do I update my ECS Service and keep it in sync with Terraform. The solution I came up with, not sure if I’m reinventing the wheel here, is the following:
Use circleci/aws-ecr to push the newly built image to ECR.
Use circleci/aws-ecs -> update-task-definition-from-json to update the task-definition.json
Since I have an updated task-definition.json in the Terraform dir holding the newly built image, it’s only a matter of a terraform apply. I have set it with backend so it should be possible to run it from circleci with circleci/terraform.
I should have the latest container up and running, so the first goal is achieved.
Now I need to keep things in sync. CircleCI made changes to the task-definition.json and also to the Terraform state. I could give the CI an access key for my GitHub account so it can commit the changes.
Is there an easier way to do this?

AWS CD with CodeDeploy for Docker Images

I have a scenario and am looking for feedback and best approaches. We create and build our Docker images using Azure DevOps (VSTS) and push those images to our AWS repository. I can deploy those images just fine manually, but I would like to automate the process in a continuous deployment model. Is there an approach to use CodePipeline with a build step to just create and zip the imagedefinitions.json file before it goes to the deploy step?
Or is there an better alternative that I am overlooking.
Thanks!
You can definitely use a build step (e.g. CodeBuild) to automate generating your imagedefinitions.json file; there's an example here.
You might also want to look at the recently announced CodeDeploy ECS deployment option. It works a little differently to the ECS deployment action but allows blue/green deployments via CodeDeploy. There's more information in the announcement and blog post.