I have two workspaces (dev and prd) and I have to create a single resource to use in both of them.
My example is to create AWS ECR repository:
resource "aws_ecr_repository" "example" {
  name = "example"
}
I applied it in the prd workspace, and after switching to the dev workspace, Terraform wants to create the same repository, but it already exists.
After some consideration I used count to create it only in prd, like this:
resource "aws_ecr_repository" "example" {
  count = local.stage == "prd" ? 1 : 0
  name  = "example"
}
and in the prd workspace I use it like this:
aws_ecr_repository.example[0].repository_url
but then there is the problem of how to reference it from the dev workspace.
What is a better way to solve this?
Since I'm not able to add a comment (I don't have enough rep), I'm adding this as an answer.
As Jens mentioned, it is best to avoid this approach, but you can import a remote state with something like this:
data "terraform_remote_state" "my_remote_state" {
  backend = "local" # could also be a remote backend such as s3
  config = {
    path = "path/to/prd/terraform.tfstate" # the local backend takes "path"; s3 takes "key"
  }
  workspace = "prd"
}
In your prd workspace you have to define an output for your repo:
output "ecr_repo_url" {
  value = aws_ecr_repository.example[0].repository_url
}
In your dev workspace, you can access the value with:
data.terraform_remote_state.my_remote_state.outputs.ecr_repo_url
In some cases this may be useful, but be aware of what Jens said: if you destroy your prd environment, you can't apply or change your dev environment!
I want to build an S3 bucket only when we are using our prod account. We are using Terraform 0.11.7, which I don't think supports null in a conditional expression.
Does anyone know if something like this can be done in Terraform 0.11:
resource "aws_s3_bucket" "feedback_service_bucket" {
  bucket = "${var.account_name == "prod" ? "ces-${var.environment}-QA-compendex" : null}"
  acl    = "private"
}
I will upgrade soon to Terraform 0.13, but as this is required in this sprint I don't have time. We have a prod.tfvars file which is used when the environment being applied is prod (obviously), so if the above is not possible, is there a way to put an entire resource into a tfvars file? I know you can use variables, but I'm not sure about resources. Any help would be great, thanks.
Does anyone know if something like this can be done in Terraform 0.11:
You can use count for such a scenario:
resource "aws_s3_bucket" "feedback_service_bucket" {
  count  = "${var.account_name == "prod" ? 1 : 0}"
  bucket = "ces-${var.environment}-QA-compendex"
  acl    = "private"
}
You can refer to this: https://www.terraform.io/docs/configuration-0-11/interpolation.html#conditionals
I will upgrade soon to Terraform 0.13, but as this is required in this sprint I don't have time.
For info, Terraform 0.13 won't give you much advantage for this specific use case; you would still use count. What changes in 0.13 regarding conditional resources is that for_each (introduced in 0.12) gives you an alternative with which you can achieve the same behavior.
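As a sketch of that for_each alternative (0.12+ syntax; the set key "prod" is an arbitrary choice):

```hcl
resource "aws_s3_bucket" "feedback_service_bucket" {
  # An empty set yields zero instances; a one-element set yields exactly one
  for_each = var.account_name == "prod" ? toset(["prod"]) : toset([])

  bucket = "ces-${var.environment}-QA-compendex"
  acl    = "private"
}
```

The bucket would then be addressed as aws_s3_bucket.feedback_service_bucket["prod"] rather than with a [0] index.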
I have a secret stored in AWS Secrets Manager and am trying to integrate it with Terraform at runtime. We are using Terraform 0.11.13; updating to the latest Terraform is on the roadmap.
We would all like to use jsondecode(), available in newer Terraform, but we need to get a few things integrated before we upgrade.
We tried the helper external data program suggested in https://github.com/terraform-providers/terraform-provider-aws/issues/4789:
data "external" "helper" {
  program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error now.
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in Terraform create new infrastructure; data sources reference the values of existing resources.
data "aws_secretsmanager_secret" "example" {
  arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}

data "aws_secretsmanager_secret_version" "example" {
  secret_id     = data.aws_secretsmanager_secret.example.id
  version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
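As a sketch of that (0.12+ syntax; the key names username and password are hypothetical and depend on what the secret actually contains):

```hcl
locals {
  # Parse the JSON secret string into a Terraform object
  secret = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

# Individual fields can then be read as local.secret["username"],
# local.secret["password"], and so on.
```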
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This trusts that nothing other than Terraform is updating the secret. The best answer is still to upgrade your Terraform; this could be a useful stopgap until then.
As a recommendation, you can pin the Terraform version per module rather than for your whole organization. I do this with Docker containers that run specific versions of the terraform binary. A script in the root of every module wraps the terraform commands so they run under the version of Terraform meant for that project. Just a tip.
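Such a wrapper might look something like this (a sketch; the image tag and mount path are assumptions, and credentials/backend configuration would still need to be passed through):

```shell
#!/bin/sh
# Hypothetical terraform.sh: pins this module to Terraform 0.11.13 by
# delegating every command to the official Docker image.
exec docker run --rm -it \
  -v "$(pwd)":/workspace \
  -w /workspace \
  hashicorp/terraform:0.11.13 "$@"
```

Usage would then be ./terraform.sh plan or ./terraform.sh apply from the module root.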
I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads fine. But I have two problems:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply, creating a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "my_files.zip"
}

output "my_bucket_file_version" {
  value = "${aws_s3_bucket_object.file_upload.version_id}"
}
Terraform only makes changes to remote objects when it detects a difference between the configuration and the remote object's attributes. The configuration as you've written it includes only the filename; it says nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The final of these seems closest to what you want in this case. To do that, add the etag argument and set it to be an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "my_bucket"
  key    = "my_bucket_key"
  source = "${path.module}/my_files.zip"
  etag   = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
The preferred solution is now to use the source_hash property. Note that aws_s3_bucket_object has been replaced by aws_s3_object.
locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
Note that etag can have issues when encryption is used.
You shouldn't be using Terraform for this. Terraform is meant to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files; unless you change their names, Terraform will not update the state.
Also, it is better to use local-exec for this. Something like:
resource "aws_s3_bucket" "my-bucket" {
  # ...

  provisioner "local-exec" {
    command = "aws s3 cp path_to_my_file s3://${aws_s3_bucket.my-bucket.id}/"
  }
}
I have used Terragrunt to orchestrate the creation of a non-default AWS VPC.
I've got S3/DynamoDB state management, and the VPC code is a module. I have the 'VPC environment' terraform.tfvars code checked into a second repo, as per the Terragrunt README.md.
I created a second module which will eventually create hosts in this VPC, but for now just aims to output its ID. I have created a separate 'hosts environment' terraform.tfvars for the instantiation of this module.
1. I run terragrunt apply in the VPC environment directory: the VPC is created.
2. I run terragrunt apply a second time in the hosts environment directory: the output directive doesn't work (no error, but the value is incorrect, see below).
This is a precursor to one day running terragrunt apply-all in the parent directory of the VPC/hosts environment directories. My reading of the docs suggests using a terraform_remote_state data source to expose the VPC ID, so I specified access like this in the data.tf file of the hosts module:
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "myBucket"
    key    = "keyToMy/vpcEnvironment.tfstate"
    region = "stateRegion"
  }
}
Then, in the hosts module's outputs.tf, I specified an output to check the assignment:
output "mon_vpc" {
  value = "${data.terraform_remote_state.vpc.id}"
}
When I run (2) above, it exits with:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
mon_vpc = 2018-06-02 23:14:42.958848954 +0000 UTC
Questions:
Where am I going wrong in setting up the code so that the hosts environment correctly acquires the VPC ID from the already-existing VPC (Terraform state file)? Any advice on what to change here would be appreciated.
It looks like I've managed to acquire the date when the VPC was created rather than its ID, which given the code is perplexing. Does anyone know why?
I'm not using community modules - all hand rolled.
EDIT: In response to Brandon Miller, here is a bit more detail. In my VPC module, I have an outputs.tf containing, among other outputs:
output "aws_vpc.mv.id-op" {
  value = "${aws_vpc.mv.id}"
}
and the vpc.tf contains
resource "aws_vpc" "mv" {
  cidr_block           = "${var.vpcCidr}"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "mv-vpc-${var.aws_region}"
  }
}
As this config results in a VPC being created, and as most of the parameters are &lt;computed&gt;, I assumed state would contain sufficient data for other modules to refer to by consulting state (I assumed at first that Terraform used the AWS API for this under the bonnet, rather than consulting a different state key).
EDIT 2: Read all of Brendan Miller's answer and the following comments first.
Using periods in the output name causes a problem, as it confuses Terraform (see Brendan's answer below for the reference format):
Error: output 'mon_vpc': unknown resource 'data.aws_vpc.mv-ds' referenced in variable data.aws_vpc.mv-ds.vpc.id
You named your output aws_vpc.mv.id-op, but when you retrieve it you are retrieving just id. You could try:
data.terraform_remote_state.vpc.aws_vpc.mv.id
but I'm not sure whether Terraform will complain about the additional periods. In any case, the format should always be:
data.terraform_remote_state.<name of the remote state module>.<name of the output>
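For example, renaming the output to avoid periods should make the remote state lookup unambiguous (a sketch in 0.11-style syntax; vpc_id is just an illustrative name):

```hcl
# In the VPC module's outputs.tf:
output "vpc_id" {
  value = "${aws_vpc.mv.id}"
}

# In the hosts module, after the VPC environment has been applied:
output "mon_vpc" {
  value = "${data.terraform_remote_state.vpc.vpc_id}"
}
```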
You mentioned wanting to get this info via the AWS API. That is also possible using the aws_vpc data source. Their example uses id, but you can also use any tag you put on your VPC.
Like this:
data "aws_vpc" "default" {
  filter {
    name   = "tag:Name"
    values = ["example-vpc-name"]
  }
}
Then you can use this for the ID:
${data.aws_vpc.default.id}
In addition, this data source retrieves all of the tags that are set, for example:
${data.aws_vpc.default.tags.Name}
And the CIDR block:
${data.aws_vpc.default.cidr_block}
As well as some other info. This can be very useful for storing and retrieving things about your VPC.
I attempted to manage my application versions in my Terraform template by parameterising the name. The intent was for our CI process to create a new application version whenever the contents of the application changed. That way, Elastic Beanstalk would keep a list of historic application versions so that I could roll back, etc. This didn't work: the same application version was constantly updated, and in effect I lost the history of all application versions.
resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${var.eb-app-name}-${var.build-number}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
I then tried to parameterise the logical resource name, but this isn't supported by Terraform.
resource "aws_elastic_beanstalk_application_version" "${var.build-number}" {
  name        = "${var.eb-app-name}-${var.build-number}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
Currently my solution is to manage my application versions outside of Terraform, which is disappointing, as there are other associated resources to worry about, such as the S3 bucket and permissions.
Am I missing something?
As far as Terraform is concerned, you are just updating a single EB application version resource there. If you want to keep the previous versions around, you need to increase the number of resources that Terraform is managing.
Off the top of my head, you could try something like this:
variable "builds" {
  type = "list"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  count       = "${length(var.builds)}"
  name        = "${var.eb-app-name}-${element(var.builds, count.index)}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
Then, if you have a list of builds, it should create a new application version for each build.
Of course, that list could be dynamic: the variable could instead be a data source that returns a list of all your builds. If a data source doesn't already exist for this, you could write a small script and use it as an external data source.
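As a sketch of the external data source route (0.11-era syntax; the script name and the builds key are hypothetical): the external data source can only return a flat map of strings, so a list of builds can be encoded as a delimited string and split on the Terraform side.

```hcl
# Hypothetical helper script that prints JSON such as {"builds": "101,102,103"}
data "external" "build_list" {
  program = ["bash", "${path.module}/list_builds.sh"]
}

locals {
  builds = "${split(",", data.external.build_list.result["builds"])}"
}

# The resource above could then use count = "${length(local.builds)}"
# instead of a manually maintained variable.
```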