I have a number of ECS services. Is it possible to pass a variable to ecs-container-definition.json to deploy to a specific one?
e.g. terraform apply -var 'deploy=aws-ecs-backend'
Firstly, there are a few overlapping concepts here. You don't specify what version of Terraform you are using so I'll assume it's relatively recent.
The Terraform resource you are likely referring to is an ecs_task_definition. This resource takes the following format (from the docs):
resource "aws_ecs_task_definition" "service" {
family = "service"
container_definitions = jsonencode([
// omitted for brevity
])
}
It appears you are instead using the file or templatefile functions to embed the contents of a file called ecs-container-definition.json into your resource. From Terraform's perspective, you do not apply changes to this file; you apply changes to the resource that refers to it (the aws_ecs_task_definition).
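For reference, here is a minimal sketch of how templatefile can pass a value into such a file (the image variable and the templates path here are hypothetical):
resource "aws_ecs_task_definition" "myservice" {
  family = "service"
  # templatefile renders the JSON template, substituting ${image} inside it
  container_definitions = templatefile("${path.module}/templates/ecs-container-definition.json", {
    image = var.image
  })
}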
The process you are trying to undertake is resource targeting, and it is invoked (for example) by specifying the target resource in the terraform apply command. Here is an example: if your aws_ecs_task_definition resource is called myservice, you will have the below:
resource "aws_ecs_task_definition" "myservice" {
family = "service"
container_definitions = file("./templates/aws_ecs_task_definition")
.....
}
You would then apply changes to this resource using the below commands (run from the same directory as the containing .tf file):
terraform plan -target="aws_ecs_task_definition.myservice" (to see planned changes)
terraform apply -target="aws_ecs_task_definition.myservice" (to apply changes)
Be aware that this is something of a Terraform anti-pattern, as the documentation states in the opening paragraph:
Occasionally you may want to only apply part of a plan, such as
situations where Terraform's state has become out of sync with your
resources due to a network failure, a problem with the upstream cloud
platform, or a bug in Terraform or its providers.
Generally, best practice is to have Terraform "cleanly apply", meaning that all of the state is in sync with your repository content.
Related
Well, I have some resources in AWS that were created via a Terraform module. Now I have to change the module source to an identical module except for a few things, like the names of some resources, and I need to switch to that other module while avoiding replacement. At the moment I only have problems with the names of 4 resources. Here is an example:
KMS alias: BEFORE: kms-alias-s3bucket, AFTER THE MODULE CHANGE: kms-alias-s3bucket-dev. How can I avoid replacement without changing the resource names? I heard about terraform state mv but I don't really know how to use it properly.
Changing the Terraform state to add the -dev suffix to the resource names will make Terraform see a diff against your cloud environment; any update to those resources afterwards will force a replacement, unless you never touch those resources again.
If your cloud environment has a bucket named xyz, you want your state to have the bucket name xyz as well. So whether you can change those names depends on what those 4 resources are; a bucket name change forces a replacement, for instance. If you really want the environment as a suffix, you can create another bucket with the desired name <bucket-name>-dev, move everything from the old bucket to the new one, and then bring the new bucket into your state with terraform import; Terraform will then no longer force a replacement.
terraform import aws_s3_bucket.bucket <new-bucket-name>-dev
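Note that terraform import will refuse to import into an address that is already tracked in your state, so if the old bucket still occupies that address you may first need to drop it from state (the old bucket itself is left untouched in AWS):
terraform state rm aws_s3_bucket.bucket
terraform import aws_s3_bucket.bucket <new-bucket-name>-dev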
Additional Info
Modifying your state directly is usually for structural changes, and the resource's local name could be among those potential changes.
resource "aws_s3_bucket" "bucket" { #bucket = resource local name
bucket = "my-tf-test-bucket" # my-tf-test-bucket = bucket name itself, it is unique and could not be changed without creating another bucket. AWS api does not allow that. That's why always depends on which resource.
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
In summary, I would say that if Terraform tries to replace a resource when you change some argument (such as the bucket name), then that replacement really does need to be applied in your remote system (cloud environment). If the AWS API / AWS Console allows you to change it without recreating the resource (even if Terraform says you need to; it can happen sometimes), you can then import the resource into your state instead of editing the state.
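For completeness, since the question mentions terraform state mv: if the only thing that changes is a resource's address in your configuration (for example its module path or local name) rather than a real argument like the bucket or alias name, terraform state mv can rename the address in the state without touching the real resource. A sketch with hypothetical addresses:
terraform state mv 'module.old_module.aws_kms_alias.this' 'module.new_module.aws_kms_alias.this'
After that, terraform plan should show no changes for that resource, as long as its arguments are identical in the new module.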
resource "aws_cloudwatch_event_target" "sns" {
rule = aws_cloudwatch_event_rule.console.name
target_id = "SendToSNS"
arn = aws_sns_topic.aws_logins.arn
}
I would like to use Open Policy Agent to ensure the arn of the target above belongs to an allowed list of AWS services (like Lambda, SQS, SNS etc).
How can I verify this using OPA?
I can check whether the arn starts with aws::sns::, but the arn is generated only at runtime during terraform apply and is not yet known at plan time during terraform plan.
So how can I verify the arn during plan?
The common way of working with OPA and Terraform is to write policy that is run against the JSON representation of the Terraform plan (and not the .tf files themselves), so that should not be a problem:
terraform plan -out tfplan.binary
terraform show -json tfplan.binary > tfplan.json
You'll now have a tfplan.json that will represent the planned changes in JSON format, and will include any values resolved at the time the plan was made. You can now run e.g.
opa eval --input tfplan.json --data policy.rego data.policy.allow
This runs the policy stored in the policy.rego file against the Terraform plan provided as input, querying the allow rule in the policy package (as a simple example).
I wrote an introduction to using OPA for Terraform policy some time back, covering this topic and a few others useful for getting started.
I'm setting up some Terraform to manage a lambda and s3 bucket with versioning on the contents of the s3. Creating the first version of the infrastructure is fine. When releasing a second version, terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the s3 bucket in terraform configuration and moving the api-version to a variable string.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "main.js"
output_path = "main.zip"
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket = "s3-bucket-for-tft-project"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
key = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
source = "${data.archive_file.lambda_zip.output_path}"
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_zip_file.key}"
function_name = "lambda_test_with_s3_version"
role = "${aws_iam_role.lambda_exec.arn}"
handler = "main.handler"
runtime = "nodejs8.10"
}
I would expect the output to be another zip file but with the lambda now pointing at the new version, with the ability to change back to the old version if var.api-version was changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
name = "lambda_artifact"
}
locals {
# Let's assume that this SSM parameter contains a JSON
# string describing which artifact to use, like this
# {
# "bucket": "s3-bucket-for-tft-project",
# "key": "v2.0.0/example.zip"
# }
lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact.value)
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = local.lambda_artifact.bucket
s3_key = local.lambda_artifact.key
function_name = "lambda_test_with_s3_version"
role = aws_iam_role.lambda_exec.arn
handler = "main.handler"
runtime = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter (as sketched below), and then trigger a Terraform run to deploy it.
To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
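For illustration, a CI job might upload the artifact and update the parameter with something like the following (the bucket, key, and parameter name simply reuse the hypothetical values from the example above):
# upload the new artifact to S3
aws s3 cp main.zip s3://s3-bucket-for-tft-project/v2.0.0/example.zip
# record it as the current version in the SSM parameter
aws ssm put-parameter --name lambda_artifact --type String --overwrite \
  --value '{"bucket":"s3-bucket-for-tft-project","key":"v2.0.0/example.zip"}'
Terraform then picks up the new key on its next run via the aws_ssm_parameter data source.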
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.
Is there a way to export Google Cloud configuration for an object, such as for the load balancer, in the same way as one would use to set it up via the API?
I can quickly configure what I need in the console site, but I am spending tons of time trying to replicate that with Terraform. It would be great if I can generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something already created outside of Terraform and want to have Terraform manage it, or want to work out how best to configure it with Terraform, you could use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
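Concretely, with the hypothetical terraform-test rule from above, that sequence is just:
terraform import google_compute_forwarding_rule.default terraform-test
terraform plan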
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
And if you run the plan again, Terraform will then tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set the description on the load balancer, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
~ google_compute_forwarding_rule.default
description: "This is the description I added via the console" => ""
Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
I want to share a terraform script that will be used across different projects. I know how to create and share modules, but this setup has a big annoyance: when I reference a module in a script and perform a terraform apply, if the module resource does not exist it will be created, but also if I perform a terraform destroy this resource will be destroyed.
If I have two projects dependent on the same module, and in one of them I call a terraform destroy, it may lead to an inconsistent state, since the module is being used by another project. The script can either fail because it cannot destroy the resource, or it will destroy the resource and affect the other project.
In my scenario, I want to share network scripts between two projects, and I want the network resources to never be destroyed. I cannot create a project only for this resource, because I need to reference it somehow in my projects and the only way to do that is via its ID, and I have no idea what that ID is going to be.
prevent_destroy is also not an option, since I do need to destroy the other resources, just not the shared one, and that configuration makes terraform destroy fail.
Is there any way to reference the resource, like by its name, or is there any other better approach to accomplish what I want?
If I understand you correctly, you have some resource R that is a "singleton". That is, only one instance of R can ever exist in your AWS account. For example, you can only ever have one aws_route53_zone with the name "foo.com". If you include R as a module in two different places, then either one may create it when you run terraform apply and either one may delete it when you run terraform destroy. You'd like to avoid that, but you still need some way to get an output attribute from R (e.g. the zone_id for an aws_route53_zone resource is generated by AWS, so you can't guess it).
If that's the case, then instead of using R as a module, you should:
Create R by itself in its own set of Terraform templates. Let's say those are under /terraform/R.
Configure /terraform/R to use Remote State. For example, here is how you can configure those templates to store their remote state in an S3 bucket (you'll need to fill in the bucket name/region as indicated):
terraform remote config \
-backend=s3 \
-backend-config="bucket=(YOUR BUCKET NAME)" \
-backend-config="key=terraform.tfstate" \
-backend-config="region=(YOUR BUCKET REGION)" \
-backend-config="encrypt=true"
Define any output attributes you need from R as output variables. For example:
output "zone_id" {
value = "${aws_route_53.example.zone_id}"
}
When you run terraform apply in /terraform/R, it will store its Terraform state, including that output, in an S3 bucket.
Now, in all other Terraform templates that need that output attribute from R, you can pull it in from the S3 bucket using the terraform_remote_state data source. For example, let's say you had some template /terraform/foo that needed that zone_id parameter to create an aws_route53_record (you'll need to fill in the bucket name/region as indicated):
data "terraform_remote_state" "r" {
backend = "s3"
config {
bucket = "(YOUR BUCKET NAME)"
key = "terraform.tfstate"
region = "(YOUR BUCKET REGION)"
}
}
resource "aws_route53_record" "www" {
zone_id = "${data.terraform_remote_state.r.zone_id}"
name = "www.foo.com"
type = "A"
ttl = "300"
records = ["${aws_eip.lb.public_ip}"]
}
Note that terraform_remote_state is a read-only data source. That means when you run terraform apply or terraform destroy on any templates that use that resource, they will not have any effect in R.
For more info, check out How to manage terraform state and Terraform: Up & Running.