I recently started getting into the DevOps side of things and am currently working with Terraform and AWS ECS to set up a simple web server to host my web applications.
Using my current Terraform config, I can see my cluster being created with a service that has my task definition, but I can't figure out how to run the tasks required to launch the web server from Terraform. I can only see the capability to create task definitions and services, not to run them. I am very new to both of these technologies, so I fear I might be missing something simple.
The setup I used is from an example I found online that I tried to follow.
TL;DR: I can create services using Terraform but can't seem to figure out how to run them.
You need to define an "aws_ecs_service" resource in Terraform and, in there, define how many instances of your task you want running. In the example you link, that is done in the main.tf file.
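A minimal sketch of such a service, assuming the cluster and task definition resources are already defined elsewhere in your template (the "app" names here are placeholders, not from the original example):

```hcl
# A service keeps the desired number of copies of a task running.
resource "aws_ecs_service" "app" {
  name            = "app-service"                    # placeholder name
  cluster         = aws_ecs_cluster.app.id           # assumed existing cluster resource
  task_definition = aws_ecs_task_definition.app.arn  # assumed existing task definition

  # ECS starts (and keeps restarting) this many tasks for you,
  # so you never "run" the task manually from Terraform.
  desired_count = 1
}
```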
There is no way to run a task directly from Terraform, other than invoking an external script or the aws-cli from a local-exec provisioner.
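If you really do need a one-off task run from Terraform, a rough sketch of that local-exec approach might look like this (the cluster and task definition names are placeholders, and it assumes the aws CLI is installed and configured where Terraform runs):

```hcl
resource "null_resource" "run_task_once" {
  provisioner "local-exec" {
    # "my-cluster" and "my-task:1" are placeholder names; the task runs
    # once at create time, not on every apply.
    command = "aws ecs run-task --cluster my-cluster --task-definition my-task:1"
  }
}
```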
I am setting up a CI/CD environment for a GCP project that involves Cloud Run. While setting everything up via Terraform is pretty much straightforward, I cannot figure out how to update the environment when the code changes.
The documentation says:
Make a change to the configuration file.
But that couples the application deployment to terraform configuration, which should be responsible only for infrastructure deployment.
Ideally, I use terraform to provision the infrastructure, and another CI step to build and deploy the container.
Is there a best-practice here?
I ended up separating Cloud Run service creation (which is still done in Terraform) and deployment to two different workflows.
The key component was to make Terraform ignore the actually deployed image, so that when the code deployment workflow is done, Terraform won't complain that the Cloud Run image is different from the one it manages. I achieved this by setting ignore_changes = [template[0].spec[0].containers[0].image] on the google_cloud_run_service resource.
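A sketch of what that looks like in context (the resource name, location, and image are placeholder values; only the lifecycle block is from the answer above):

```hcl
resource "google_cloud_run_service" "app" {
  name     = "app"          # placeholder
  location = "us-central1"  # placeholder

  template {
    spec {
      containers {
        # Terraform sets only the initial image; later CD runs overwrite it.
        image = "gcr.io/my-project/app:initial"  # placeholder
      }
    }
  }

  lifecycle {
    # Let the code deployment workflow own the deployed image.
    ignore_changes = [template[0].spec[0].containers[0].image]
  }
}
```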
I am trying to run the latest task definition image built from a GitHub deployment (CD). On AWS, each build creates a new task definition revision, for example "task-api:1", "task-api:2", but my cluster is still running "task-api:1" even though a new image has been built. So far I have had to manually stop the old task and start a new one. How can I automate this?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service, when a new service deployment is started the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
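The rolling-update behaviour is configured on the service itself. A hedged sketch (resource names and percentages are illustrative, not from the question):

```hcl
resource "aws_ecs_service" "api" {
  name            = "api"                           # placeholder
  cluster         = aws_ecs_cluster.api.id          # assumed existing cluster
  task_definition = aws_ecs_task_definition.api.arn # pointing at the new revision
  desired_count   = 2

  # Rolling update: ECS may start extra tasks (up to 200% of desired)
  # before stopping old ones, and must keep at least 50% healthy.
  deployment_maximum_percent         = 200
  deployment_minimum_healthy_percent = 50
}
```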
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy, and CodePipeline (and CodeCommit if you want to host your code repository in AWS alongside your CI/CD).
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
Judging from what you have said in the comments:
I created my task via the AWS console, I am running just the task definition on its own without service plus service with task definition launched via the EC2 not target both of them, so in the task definition JSON file on my Github both repositories they are tied to a revision of a task (could that be a problem?).
It's difficult to understand exactly how you have this set up, and it'd probably be a good idea for you to go back and build a better understanding of the services you are using, via the guide you are following or the AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html).
Through IaC tools like the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
This can be automated but you would need to set up a process to automate it.
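If you stay in Terraform, one option worth knowing about is the force_new_deployment argument on aws_ecs_service, which starts a fresh rolling deployment on apply; this is a sketch under the assumption that your image tag is reused (e.g. :latest), and the names are placeholders:

```hcl
resource "aws_ecs_service" "api" {
  name            = "api"        # placeholder
  cluster         = "my-cluster" # placeholder
  # Referencing only the family resolves to its latest ACTIVE revision.
  task_definition = "task-api"
  desired_count   = 1

  # Kick off a new rolling deployment on every apply, so a re-pushed
  # image tag gets pulled again.
  force_new_deployment = true
}
```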
I would recommend reading some guides on how you could automate deployment and updates using the CDK. Amazon provides a good guide to get you started: https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.
I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
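A minimal sketch of such a pipeline with a GitHub source and a CodeBuild stage; the IAM role, S3 bucket, CodeStar connection, and CodeBuild project are all assumed to be defined elsewhere, and the repository ID is a placeholder:

```hcl
resource "aws_codepipeline" "ci" {
  name     = "ci-pipeline"                 # placeholder
  role_arn = aws_iam_role.codepipeline.arn # assumed IAM role resource

  artifact_store {
    type     = "S3"
    location = aws_s3_bucket.artifacts.bucket # assumed S3 bucket resource
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn # assumed
        FullRepositoryId = "my-org/my-repo" # placeholder
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source"]
      configuration = {
        ProjectName = aws_codebuild_project.ci.name # assumed CodeBuild project
      }
    }
  }
}
```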
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform.
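As an illustration, Terraform 1.5+ supports declarative import blocks (the older CLI alternative is terraform import aws_codepipeline.ci my-pipeline); the pipeline name here is a placeholder:

```hcl
# Assumes a matching "aws_codepipeline" "ci" resource block exists
# in the configuration for the imported pipeline to map onto.
import {
  to = aws_codepipeline.ci
  id = "my-pipeline" # the existing pipeline's name in AWS
}
```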
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for those who come across this question while searching, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere: either locally or inside a pipeline. A pipeline typically consists of Docker containers that emulate a local environment and run commands for you to deploy your code. There is more to it than that, but the premise of understanding how Terraform runs remains the same.
So, to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all your code into. In this case, that means a repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons why you should use a code repository, but your question is directed at Terraform and its usage with the pipeline.
Now the magnificent argument, the chicken or the egg: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (which I recommend), clone it down, and locally run Terraform to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies: you will have to research Terraform state files, which are something you need to back up in some form or shape once the pipeline is deployed for you.
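One common way to keep state backed up is a remote backend; a minimal S3 sketch, where the bucket name, key, and region are placeholders and the bucket is assumed to already exist:

```hcl
terraform {
  backend "s3" {
    # State is stored in S3 instead of a local terraform.tfstate file,
    # so both your laptop and the pipeline share the same state.
    bucket = "my-terraform-state"          # placeholder; must already exist
    key    = "pipeline/terraform.tfstate"  # placeholder path within the bucket
    region = "us-east-1"                   # placeholder
  }
}
```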
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure your pipeline to hook into GitHub and run jobs.
In both scenarios, you must set up Terraform and AWS credentials on your local machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For newbies using a pipeline, you can leverage some of the pipeline links to get you started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.
I have an existing ECS cluster (EC2) created using Terraform. I want to install some software on those EC2 instances using Terraform. One of our business requirements is that we will not be able to destroy and re-create the instances and we have to do it on existing instances.
How should I approach this?
It sounds like your organization is experimenting with running its services in Docker and ECS. I also assume you are using AWS ECR to host your Docker images (although technically it doesn't matter).
When you create an ECS cluster, it is initially empty. If you were to re-run your Terraform template, it should show you that there are no updates to apply. In order to take the next step, you will need to define an ecs-service and an ecs-task-definition. This can be done in your existing Terraform template, in a brand new template, or manually (AWS web console or awscli). Since you are already using Terraform, I assume you will continue to use it. Personally, I would keep everything in one template, but again it is up to you.
An ecs-service is essentially the runtime configuration for your ecs-tasks.
An ecs-task-definition is a set of docker containers to run. In the simplest case it is 1 single docker container. Here is where you will specify the docker image(s) you will use, how much CPU+RAM for the docker container, etc...
In order for your running ECS service(s) to be updated without your EC2 nodes ever going down, you just need to update the Docker image within the ecs-task-definition portion of your Terraform template (and, of course, run terraform).
With all this background info, you can now add a Terraform ecs-service and ecs-task-definition to your Terraform template.
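As a rough sketch of those two resources together (the family, image, CPU/memory values, and the reference to your existing cluster are all placeholders):

```hcl
resource "aws_ecs_task_definition" "web" {
  family = "web" # placeholder
  # One nginx container; values are illustrative.
  container_definitions = jsonencode([
    {
      name         = "nginx"
      image        = "nginx:latest" # placeholder; typically an ECR image
      cpu          = 256
      memory       = 512
      essential    = true
      portMappings = [{ containerPort = 80, hostPort = 80 }]
    }
  ])
}

resource "aws_ecs_service" "web" {
  name            = "web"
  cluster         = aws_ecs_cluster.existing.id # placeholder for your cluster
  task_definition = aws_ecs_task_definition.web.arn
  desired_count   = 1
}
```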
Since you did not provide your template, I cannot say exactly how this should be set up, but an example Terraform template of a complete ECS cluster running nginx can be found below:
Complete Terraform ECS example
More examples can be found at:
Official terraform ECS github examples
You could run a provisioner attached to an always-triggered null_resource to run some process against things on every apply, but I'd strongly recommend you rethink your processes.
Your ECS cluster should be considered completely ephemeral, as should the containers running on it. When you want to update the ECS instances, destroying and replacing the instances (ideally in an autoscaling group) is what you want to do, as it greatly simplifies things. You can read more about the benefits of immutable infrastructure elsewhere.
If you absolutely couldn't do this, then you'd most likely be best off using another tool entirely, such as Ansible. You could choose to launch this via Terraform using a null_resource provisioner as mentioned above, which would look something like the following:
resource "null_resource" "on_demand_provisioning" {
  # uuid() produces a new value on every plan, so this resource is
  # replaced (and the provisioner re-run) on every apply.
  triggers = {
    always = uuid()
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i inventory.yml playbook.yml --ssh-common-args='-o StrictHostKeyChecking=no'"
  }
}
I'm trying to synchronize AWS IoT and LoRaWAN. Following the instructions in the official documentation, everything works out well.
But the deployment uses Elastic Beanstalk, which brings up the EC2 instance where the synchronization script is executed.
But I do not need EB; I want to run only the script.
I downloaded the script separately and am trying to run it, but this causes an error.
Based on the CloudFormation template, this script uses these variables. I do not understand where Elastic Beanstalk stores all these variables. Can someone suggest where they are, or point to another solution for running the script?