I'm using Terraform to build our AWS infrastructure projects. I need to be able to output multiple variables to a file and then to load that file back into another Terraform script.
Right now I'm able to output the variables, but the values come out unquoted:
variable = value
However, when loading a variable file into Terraform, it expects all values to be quoted, like this:
variable = "value"
So I can't understand why the hell Terraform doesn't just export the variables that way in the first place.
Is there any way to have it do this without requiring additional work on my part?
EDIT: I'm using Terraform v0.11.13 and cannot upgrade due to security restrictions
Output in JSON and use jq to transform it into whatever format you need.
terraform output -json
main.tf
output "hogehoge" {
value = "hogehoge"
}
Execution
$ terraform apply
Outputs:
hogehoge = hogehoge
$ terraform output -json
{
  "hogehoge": {
    "sensitive": false,
    "type": "string",
    "value": "hogehoge"
  }
}
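For example, to turn that JSON into quoted key = "value" lines suitable for a .tfvars file, you could pipe it through jq (a sketch that assumes all outputs are plain strings; outputs.tfvars is a made-up file name):

$ terraform output -json | jq -r 'to_entries[] | "\(.key) = \"\(.value.value)\""' > outputs.tfvars
$ cat outputs.tfvars
hogehoge = "hogehoge"

The resulting file can then be loaded into the other configuration with terraform apply -var-file=outputs.tfvars.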
However, as @ydaetskcoR commented, why not use data.terraform_remote_state?
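That would avoid the file round-trip entirely. A minimal sketch of that approach on Terraform 0.11, assuming the first configuration stores its state in S3 (the bucket and key names here are hypothetical):

data "terraform_remote_state" "base" {
  backend = "s3"
  config {
    bucket = "my-tfstate-bucket"      # hypothetical bucket
    key    = "base/terraform.tfstate" # hypothetical key
    region = "us-east-1"
  }
}

# On 0.11, the other configuration's outputs appear as top-level attributes:
# data.terraform_remote_state.base.hogehoge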
I'm having an issue upgrading from Terraform 0.13 to the latest version: in Terraform 0.13 there were no quotes around output values.
In 0.13, the following code:
output "bucket-name-output" {
value = "${aws_s3_bucket.logos-bucket.id}"
}
produced this value:
logo-horizon-bucket
In the latest version of Terraform (v1.3.7), the same code gives this value:
"logo-horizon-bucket"
So you can see that quotes are now being added around the string.
This then causes my Golang application code, which tries to upload an object to an S3 bucket, to fail, because it sets the bucket name to the output value (still surrounded by the quotes):
bucketName := terraform.Output(t, terraformOptions, "bucket-name-output")
params := &s3manager.UploadInput{
Bucket: aws.String(bucketName),
Key: aws.String("imageFileName"),
Body: getImage(),
}
I get the following error, as S3 bucket names cannot contain quotes:
InvalidBucketName: The specified bucket is not valid.
status code: 400, request id:
So my question is: how do I remove those quotes on the Terraform side? I could add some logic to strip them in the Golang application code, but that's pretty messy, so I would like to avoid it if possible...
Thanks!
I assume that something in your system is running terraform output bucket-name-output to get the value of this output value.
The terraform output command was always intended primarily for human consumption. At some point since v0.13, the human-oriented output changed to show values using syntax similar to how they would appear inside the Terraform language itself, because that helps the reader understand what type of value they have exported.
There are two different ways to tell terraform output that it should produce data in a form suitable for consumption by external software rather than by humans:
terraform output -json bucket-name-output produces a JSON description of the output value. This option will work for values of any type, by following the same encoding decisions as Terraform's jsonencode function.
terraform output -raw bucket-name-output produces just a raw string version of the output value. Technically what it is doing is converting the output value to a string with the same meaning as the tostring function and then printing the resulting string. This means it only works with output values of types that tostring can convert: strings, numbers, and boolean values.
Since your program was originally trying to use the human-readable output as if it were a raw string value, I think the second of these options would be the easiest for you to retrofit into your system. Your program already seems to be expecting a raw string version of the output value, and the -raw option is designed for exactly that purpose.
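For example, using the bucket name from the question (the exact human-readable formatting varies by Terraform version):

$ terraform output bucket-name-output
"logo-horizon-bucket"
$ terraform output -raw bucket-name-output
logo-horizon-bucket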
I have been trying to create a step function with a choice step that acts as a rule engine. I would like to compare a date variable (from the stale input JSON) to another date variable that I generate with a lambda function.
The AWS documentation does not go into detail about the Timestamp comparison operators, but I assumed they could handle two input variables. Here is the relevant part of the code:
{
  "Variable": "$.staleInputVariable",
  "TimestampEquals": "$.generatedTimestampUsingLambda"
}
Here is the error I get when trying to update (!) the step function. I would like to highlight that I don't even get as far as invoking the step function; it fails while updating the definition.
Resource handler returned message: "Invalid State Machine Definition: 'SCHEMA_VALIDATION_FAILED: String does not match RFC3339 timestamp at ..... (Service: AWSStepFunctions; Status Code: 400; Error Code: InvalidDefinition; Request ID: 97df9775-7d2d-4dd2-929b-470c8s741eaf; Proxy: null)" (RequestToken: 030aa97d-35a5-a6a5-0ac5-5698a8662bc2, HandlerErrorCode: InvalidRequest)
The step function updates fine without the timestamp comparison, so I suspect this bit of code. Any guesses?
EDIT (08.Jun.2021):
A comparison – Two fields that specify an input variable to compare, the type of comparison, and the value to compare the variable to. Choice Rules support comparison between two variables. Within a Choice Rule, the value of Variable can be compared with another value from the state input by appending Path to name of supported comparison operators.
Source: AWS docs
It clearly states that two variables can be compared, but to no avail. Still trying :)
When I explained the problem to one of my peers, I realised that the AWS documentation mentions a Path postfix (which I had confused with the $.). This Path needs to be appended to the operator name.
The following code works:
{
  "Variable": "$.staleInputVariable",
  "TimestampEqualsPath": "$.generatedTimestampUsingLambda"
}
Again, I would like to draw your attention to the word "Path". That is what makes the magic happen!
Looks like you indeed found a way around your initial challenge from the thread linked below.
Using an amazon step function, how do I write a choice operator that references current time?
However, I thought you wanted to compare $.staleInputVariable to the current timestamp and I wince to think you had to configure a lambda function (and test it!) to do only that.
If so, you could have achieved that simply by using the Context Object ($$.):
{
  "Variable": "$$.State.EnteredTime",
  "TimestampEqualsPath": "$.staleInputVariable"
}
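For completeness, a full Choice state built around this rule might look like the following sketch (the state names are hypothetical, and a staleness check would more likely use TimestampGreaterThanPath than strict equality):

"CheckStaleness": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$$.State.EnteredTime",
      "TimestampGreaterThanPath": "$.staleInputVariable",
      "Next": "InputIsStale"
    }
  ],
  "Default": "InputIsFresh"
}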
In Terraform I have two data sources:
data "aws_instances" "daas_resolver_ip_1" {
instance_tags = {
Name = "${var.env_type}.${var.environment}.ns1.${var.aws_region}.a."
}
}
data "aws_instances" "daas_resolver_ip_2" {
instance_tags = {
Name = "${var.env_type}.${var.environment}.ns2.${var.aws_region}.b."
}
}
I want to get the private_ip from each of those, combine them into a list, and use it as follows:
dhcp_options_domain_name_servers = ["${data.aws_instances.daas_resolver_ip_1.private_ip}", "${data.aws_instances.daas_resolver_ip_2.private_ip}"]
How can I achieve this? At the moment this is the error I get:
Error: module.pmc_environment.module.pmc_vpc.aws_vpc_dhcp_options.vpc: domain_name_servers: should be a list
I believe what you've encountered here is a common limitation of Terraform 0.11. If this is a new configuration then starting with Terraform 0.12 should avoid the problem entirely, as this limitation was addressed in the Terraform 0.12 major release.
The underlying problem here is that the private_ip value of at least one of these resources is unknown during planning (it will be selected by the remote system during apply), but Terraform 0.11's type checker then fails because it cannot prove that these unknown values will eventually produce the list of strings that dhcp_options_domain_name_servers requires.
Terraform 0.12 addresses this by tracking type information for unknown values and propagating types through expressions so that e.g. in this case it could know that the result is a list of two strings but the strings themselves are not known yet. From Terraform 0.11's perspective, this is just an unknown value with no type information at all, and is therefore not considered to be a list, causing this error message.
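On 0.12, for example, the expression from the question could be written directly as a list and would type-check even while the values are unknown (a sketch that keeps the attribute names from the question as-is):

dhcp_options_domain_name_servers = [
  data.aws_instances.daas_resolver_ip_1.private_ip,
  data.aws_instances.daas_resolver_ip_2.private_ip,
]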
A workaround for Terraform 0.11 is to use the -target argument to ask Terraform to deal with the operations it needs to learn the private_ip values first, and then run Terraform again as normal once those values are known:
terraform apply -target=module.pmc_environment.module.pmc_vpc.data.aws_instances.daas_resolver_ip_1 -target=module.pmc_environment.module.pmc_vpc.data.aws_instances.daas_resolver_ip_2
terraform apply
The first terraform apply with -target set should deal with the two data resources, and then the subsequent terraform apply with no arguments should then be able to see what the two IP addresses are.
This will work only if all of the values contributing to the data resource configurations remain stable after the initial creation step. You'd need to repeat this two-step process on subsequent changes if any of var.env_type, var.environment, or var.aws_region become unknown as a result of other planned actions.
I'm using Terraform with AWS as a provider.
I want to use a ternary operator in my availability zones local variable.
The logic is simple:
If the variable exists, use it.
If not, fall back to the availability zones data source.
The following code:
data "aws_availability_zones" "available" {}
locals {
azs = "${length(var.azs) > 0 ? var.azs : data.aws_availability_zones.available.names}"
}
variable "azs" {
description = "A list of Availability zones in the region"
default = []
type = "list"
}
Generates the following error:
conditional operator cannot be used with list values.
Although it's quite a simple operation, it turns out to be a familiar issue. I followed the workarounds in the linked thread, but they looked quite complicated (using the compact, split, and join functions together).
Any suggestions for more simple solution?
Thank you.
You are close to the answer. I'm not sure how you define the variable var.azs; I'd guess the zones are given as a single string joined with commas. So you need to adjust the code to join the data source's list into a string as well:
locals {
  azs = "${length(var.azs) > 0 ? var.azs : join(",", data.aws_availability_zones.available.names)}"
}
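If some other part of the configuration needs local.azs as a list again, the usual 0.11 workaround is to join both branches and split the result back out; a sketch, assuming var.azs is a list as declared in the question:

locals {
  azs_string = "${length(var.azs) > 0 ? join(",", var.azs) : join(",", data.aws_availability_zones.available.names)}"
  azs        = "${split(",", local.azs_string)}"
}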
I create an AWS RDS instance with different KMS CMKs depending on whether the environment is Production or Non-Production, so I have two resources that use Terraform's count as a conditional:
count = "${var.bluegreen == "nonprod" ? 1 : 0}"
This spins up an RDS instance with a different KMS key and a different address depending on the environment. I need to capture that endpoint (which I currently do with terraform show after the build finishes), so why doesn't this work in Terraform?
output "rds_endpoint" {
value = "${var.bluegreen == "nonprod" ? aws_db_instance.rds_nonprod.address : aws_db_instance.rds_prod.address}"
}
It is an error to access attributes of a resource that has count = 0, and unfortunately Terraform currently checks both "sides" of a conditional during its check step, so expressions like this can fail. Along with this, there is a current behavior that errors in outputs are not explicitly shown since outputs can get populated when the state isn't yet complete (e.g. as a result of using -target). These annoyances all sum up to a lot of confusion in this case.
Instead of using a conditional expression in this case, it works better to use "splat expressions", which evaluate to an empty list in the case where count = 0. That would look something like the following:
output "rds_endpoint" {
value = "${element(concat(aws_db_instance.rds_nonprod.*.address, aws_db_instance.rds_prod.*.address), 0)}"
}
This takes the first element of a list created by concatenating together all of the nonprod addresses and all of the prod addresses. Due to how you've configured count on these resource blocks, the resulting list will only ever have one element and so it will just take that element.
In general, to debug issues with outputs it can be helpful to evaluate the expressions in terraform console, or somewhere else in a config, to bypass the limitation that errors are silently ignored on outputs.
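For example, you could paste the expression from the output above into an interactive session (illustrative; the printed address depends on your actual instances):

$ terraform console
> element(concat(aws_db_instance.rds_nonprod.*.address, aws_db_instance.rds_prod.*.address), 0)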