Continuous Deployment and Delivery on ECS Fargate with CircleCI and Terraform - amazon-web-services

My goal is simple. Commit a change to the application and have it running live on AWS.
I am using CircleCI and I have built all my infra with Terraform, so I want to use only Terraform to make changes to AWS. The question is: how do I update my ECS service and keep it in sync with Terraform? The solution I came up with (not sure if I'm reinventing the wheel here) is the following:
Use the circleci/aws-ecr orb to push the newly built image to ECR.
Use the update-task-definition-from-json job from the circleci/aws-ecs orb to update task-definition.json.
Since I now have an updated task-definition.json in the Terraform directory referencing the newly built image, it's only a matter of running terraform apply. I have set it up with a remote backend, so it should be possible to run it from CircleCI with the circleci/terraform orb (a sketch of the backend block is below).
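For reference, a remote backend block along these lines is what lets the CI job share state (S3 is shown only as an example; the bucket, key, and table names are placeholders):

terraform {
  backend "s3" {
    # Placeholder names; any remote backend reachable from the CI job works.
    bucket         = "example-terraform-state"
    key            = "ecs-fargate/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"   # optional state locking
    encrypt        = true
  }
}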
I should have the latest container up and running, so the first goal is achieved.
Now I need to keep everything in sync. CircleCI made changes to task-definition.json and also to the Terraform state. I could give the CI an access key for my GitHub account so it can commit the changes back.
Is there an easier way to do this?

Related

Deploy new container revision to Cloud Run without changing Terraform

I am setting up a CI/CD environment for a GCP project that involves Cloud Run. While setting up everything via Terraform is pretty straightforward, I cannot figure out how to update the environment when the code changes.
The documentation says:
Make a change to the configuration file.
But that couples the application deployment to the Terraform configuration, which should be responsible only for infrastructure deployment.
Ideally, I would use Terraform to provision the infrastructure and a separate CI step to build and deploy the container.
Is there a best-practice here?
I ended up separating Cloud Run service creation (which is still done in Terraform) and code deployment into two different workflows.
The key component was to make Terraform ignore the actual deployed image, so that when the code deployment workflow is done, Terraform won't complain that the Cloud Run image is different from the one it manages. I achieved this by setting ignore_changes = [template[0].spec[0].containers[0].image] on the google_cloud_run_service resource.
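In context, the relevant part of the resource looks roughly like this (service name, location, and image are illustrative):

resource "google_cloud_run_service" "app" {
  name     = "app"
  location = "europe-west1"

  template {
    spec {
      containers {
        # Terraform only sets the initial image; the deployment workflow updates it later.
        image = "gcr.io/my-project/app:initial"
      }
    }
  }

  lifecycle {
    # Keep terraform plan clean after the code deployment workflow has pushed a new image.
    ignore_changes = [template[0].spec[0].containers[0].image]
  }
}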

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform with terraform import.
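As a rough sketch (the pipeline name, GitHub repo, IAM role, artifact bucket, and CodeBuild project are all placeholders assumed to be defined elsewhere, and the version 1 GitHub source provider is just one way to wire up the source stage), the resource looks something like this:

resource "aws_codepipeline" "example" {
  name     = "example-pipeline"
  role_arn = aws_iam_role.codepipeline.arn          # IAM role defined elsewhere

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket       # artifact bucket defined elsewhere
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        Owner      = "my-org"                       # placeholder
        Repo       = "my-repo"                      # placeholder
        Branch     = "main"
        OAuthToken = var.github_token
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = aws_codebuild_project.example.name   # CodeBuild project defined elsewhere
      }
    }
  }
}

An existing, manually created pipeline can likewise be attached to a resource block like this using terraform import.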
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for anyone who comes across this question while searching, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC it needs to be executed somewhere, which means you either run it locally or inside a pipeline. A pipeline consists of Docker containers that emulate a local environment and run commands to deploy your code. There is more to it than that, but the premise of how Terraform runs remains the same.
So, to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all your code into; in this case, a repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons why you should use a code repository, but your question is directed at Terraform and its usage with the pipeline.
Now the magnificent argument, the chicken or the egg: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and run Terraform locally to create your pipeline. This is ideal for saving time and leveraging automation. Newcomers will have to research Terraform state files, which are something you need to back up in some shape or form once the pipeline has been deployed for you.
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure it to hook your pipeline into GitHub to run jobs.
In both scenarios, you must set up Terraform and AWS, either locally on your machine or within the pipeline, to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For those of you who are new and using a pipeline, you can leverage some of the pipeline links to get started. Remember one thing: within AWS CodePipeline you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.

AWS ECS: Force redeployment on new latest image in ECR

I know there are already countless questions along these lines, but unfortunately I have not been able to find the right answer yet. If such a post already exists, please just share the link here.
I have several GitLab CI/CD pipelines. The first pipeline uses Terraform to build the complete infrastructure for an ECS cluster based on Fargate. The second and third pipelines create nightly builds of the frontend and the backend and push the Docker image with the tag "latest" into the ECR of the (staging) AWS account.
What I now want to achieve is that the corresponding ECS tasks are redeployed so that the latest Docker images are used. I actually thought there would be a way to do this via CloudWatch Events or similar, but I can't find a good starting point. A workaround would be to install the AWS CLI in the CI/CD pipeline and then do a service update with "force new deployment", but that doesn't seem very elegant to me. Is there a better way?
Conditions:
The solution must be fully automated (either in AWS or in GitLab CI/CD).
Switching to AWS CodePipeline is out of the question.
Ideally it should stay as close as possible to AWS standards. I would like to avoid extensive Lambda functions that perform numerous actions, for the sake of maintainability.
Thanks a lot!
OK, for everybody who is interested in an answer: I solved it this way.
I execute the following AWS CLI command in the CI/CD pipeline:
aws ecs update-service --cluster <<cluster-name>> --service <<service-name>> --force-new-deployment --region <<region>>
Not the solution I was looking for but it works.
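For anyone who prefers to stay inside Terraform instead of calling the CLI, newer versions of the AWS provider expose a similar switch directly on the service resource (a sketch; names are placeholders and plantimestamp() requires Terraform 1.5 or later):

resource "aws_ecs_service" "app" {
  name            = "app"                            # placeholder
  cluster         = aws_ecs_cluster.main.id          # cluster defined elsewhere
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.app.id]
  }

  # Make every terraform apply roll the service so the mutable "latest" tag is pulled again.
  force_new_deployment = true
  triggers = {
    redeployment = plantimestamp()
  }
}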
As a general comment, it is not recommended to always push the same container tag, because rolling back to a previous version in case of failure becomes really difficult.
One suitable option would be to use git tags.
Let's say you are deploying version v0.0.1.
You can create a file app-version.tf containing a variable backend-version = v0.0.1 that you reference in the task definition of the ECS service.
The same thing can be done for the container build, tagging the image using git describe.
That way you get a new task definition for every git tag and the possibility of rolling back just by changing a value in the Terraform configuration.
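A sketch of what that could look like (I am writing the variable with an underscore, and the repository and resource names are made up for illustration; execution role, ports, and so on are omitted):

# app-version.tf
variable "backend_version" {
  default = "v0.0.1"
}

resource "aws_ecs_task_definition" "backend" {
  family                   = "backend"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([{
    name      = "backend"
    # The image tag follows the git tag, so bumping the variable rolls the service forward or back.
    image     = "${aws_ecr_repository.backend.repository_url}:${var.backend_version}"
    essential = true
  }])
}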
It is beneficial to refer to images using either digests or unique immutable tags. After the pipeline pushes the image, it could:
Grab the image's digest/unique tag
Create a new revision of the task definition
Trigger an ECS deployment with the new task definition.
As sgramo93 mentions, the big benefit is that rolling back your application can be done by deploying an older revision of the task definition.
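One way to express the digest idea in Terraform (the repository name and tag are illustrative, and the ECR repository resource is assumed to exist elsewhere) is to look the pushed image up and pin the task definition to its digest:

# Resolve the image the pipeline just pushed to an immutable digest.
data "aws_ecr_image" "backend" {
  repository_name = "backend"     # illustrative
  image_tag       = "v0.0.1"      # the unique tag the pipeline just pushed
}

locals {
  # Reference the image by digest (...@sha256:...) instead of a mutable tag.
  backend_image = "${aws_ecr_repository.backend.repository_url}@${data.aws_ecr_image.backend.image_digest}"
}

local.backend_image can then be used as the container image in the task definition, so every deployment (and rollback) points at exactly one image.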

ECS auto deploy with ECR

I'm using GitHub, Jenkins, AWS ECR, AWS ECS.
I want to deploy automatically when GitHub has a new commit.
When GitHub has a new commit, GitHub sends a webhook to Jenkins, and Jenkins builds the image and pushes it to ECR with the tag 'latest'.
I wonder how I can make my ECS service restart its tasks and redeploy the image automatically when the ECR image changes?
Don't use latest in this setup. Have Jenkins pick a tag for the image (maybe based off a source control commit ID, a source control tag name, or a timestamp). Give it the ability to update the ECS tasks, and then (once a build has happened and gone through appropriate pre-launch testing) have Jenkins change the image tag in the task to what it's just built. ECS will see that the image has changed, pull the new image, and launch containers accordingly.
Two other good reasons to do things this way: if you have explicit versions, you can have a pre-production cluster, deploy things there, run tests, and then deploy the same version to production; and if a deploy goes bad, you can straightforwardly roll back by manually setting the tag back to yesterday's build, which is impossible if the only version you have is latest.

Is it possible to use AWS CodePipeline with Lightsail?

I've been working on this all day and couldn't find the answer. So I'm asking you guys: is it possible to use AWS CodePipeline with AWS Lightsail?
My objective is to store the code inside CodeCommit and use CodeBuild, CodeDeploy, CodePipeline and S3 to create a Continuous Deployment inside a Lightsail instance.
Those are the steps I think I have to follow to accomplish the task:
[x] setup a Lightsail instance
[x] create an IAM user and set permissions
[x] transfer my repository to CodeCommit
[x] create an S3 bucket to hold the build artifacts
[x] create a CodeBuild project to build the artifacts
[x] create a buildspec.yml file with my build steps
[ ] create a CodeDeploy project to deploy my application
[ ] create a CodePipeline project to trigger the build when I commit to a certain branch
As you can see, I'm almost there. But I couldn't find any way to use my Lightsail instance with CodeDeploy. So, my question is: is it possible? Is there some limitation? Did I miss something really basic? Is there any other way to do CD with Lightsail? Sorry, I'm getting a little crazy here haha.
Today, 08/16/2017, it's not possible to integrate them.
I asked the same question on AWS forums and they replied that those technologies are not integrated yet since they are separated from each other.
Well I guess I'll have to find another way.
I'm not a total expert here, but I think the way to do it would be with a custom script in CodeBuild, rather than with CodeDeploy.
CodeDeploy has a lot of custom stuff going on to support rollbacks and that sort of advanced functionality (which means you have to install the agent on your target server, etc.).
CodeBuild is just made for running scripts, so I think it would be reasonable to add a deploy script (that runs after your tests) that connects to your Lightsail instance via SSH and deploys any changed files (similar to how you'd do it in open source using Travis CI, etc.).
Specifically I’ve used the dploy package on npm to do the actual SFTP upload before. It’s Git-aware so it only uploads changes since the last revision (but you could just rsync if you didn’t care about that).
I recently had the same challenge and got it working.
It is necessary to register the Lightsail instance as an on-premises instance with CodeDeploy. On the instance itself, the CodeDeploy agent needs to be installed and configured.
I have written a post about how to set this up on my blog.
https://scratchpad.blog/howto/how-to-use-codedeploy-with-aws-lightsail/
Following those steps, you can register the Lightsail instance as an on-premises instance and configure CodeDeploy to deploy to it.
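If, as elsewhere in this thread, the CodeDeploy side is managed with Terraform, the application and deployment group that target the registered on-premises instance might look roughly like this (names, tag key, and tag value are illustrative):

resource "aws_codedeploy_app" "web" {
  name             = "lightsail-app"
  compute_platform = "Server"
}

resource "aws_codedeploy_deployment_group" "web" {
  app_name              = aws_codedeploy_app.web.name
  deployment_group_name = "lightsail-group"
  service_role_arn      = aws_iam_role.codedeploy.arn   # service role defined elsewhere

  # Match the tag given to the Lightsail instance when registering it
  # with CodeDeploy as an on-premises instance.
  on_premises_instance_tag_filter {
    key   = "Name"
    type  = "KEY_AND_VALUE"
    value = "lightsail-web"
  }
}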