CloudFormation Parameters support AllowedValues, which restricts a parameter to a fixed set of possible values. How can I achieve this with Terraform variables? The list variable type does not provide this functionality. So, if I want my variable to accept only one of two possible values, how can I achieve this with Terraform? The CloudFormation snippet that I want to replicate is:
"ParameterName": {
"Description": "desc",
"Type": "String",
"Default": true,
"AllowedValues": [
"true",
"false"
]
}
I don't know of an official way, but there's an interesting technique described in a Terraform issue:
variable "values_list" {
description = "acceptable values"
type = "list"
default = ["true", "false"]
}
variable "somevar" {
description = "must be true or false"
}
resource "null_resource" "is_variable_value_valid" {
count = "${contains(var.values_list, var.somevar) == true ? 0 : 1}"
"ERROR: The somevar value can only be: true or false" = true
}
Update:
Terraform 0.13 introduced custom validation rules for input variables:
variable "somevar" {
type = string
description = "must be true or false"
validation {
condition = can(regex("^(true|false)$", var.somevar))
error_message = "Must be true or false."
}
}
Custom validation rules are definitely the way to go. If you want to keep things simple and check the provided value against a list of valid ones, you can use the following in your variables.tf config:
variable "environment" {
type = string
description = "Deployment environment"
validation {
condition = contains(["dev", "prod"], var.environment)
error_message = "Valid value is one of the following: dev, prod."
}
}
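With this in place, running, for example, terraform plan -var="environment=staging" fails validation with that error message before Terraform evaluates any resources.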
A variation on the above answer that validates the value against an array/list:
variable "appservice_sku" {
type = string
description = "AppService Plan SKU code"
default = "P1v3"
validation {
error_message = "Please use a valid AppService SKU."
condition = can(regex(join("", concat(["^("], [join("|", [
"B1", "B2", "B3", "D1", "F1",
"FREE", "I1", "I1v2", "I2", "I2v2",
"I3", "I3v2", "P1V2", "P1V3", "P2V2",
"P2V3", "P3V2", "P3V3", "PC2",
"PC3", "PC4", "S1", "S2", "S3",
"SHARED", "WS1", "WS2", "WS3"
])], [")$"])), var.appservice_sku))
}
}
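For what it's worth, the same check can be expressed without a regex by handing the allowed SKU codes straight to contains(); a sketch of the equivalent validation:
variable "appservice_sku" {
  type        = string
  description = "AppService Plan SKU code"
  default     = "P1v3"

  validation {
    # Same SKU list as the regex above, checked directly against the value.
    condition = contains([
      "B1", "B2", "B3", "D1", "F1",
      "FREE", "I1", "I1v2", "I2", "I2v2",
      "I3", "I3v2", "P1V2", "P1V3", "P2V2",
      "P2V3", "P3V2", "P3V3", "PC2",
      "PC3", "PC4", "S1", "S2", "S3",
      "SHARED", "WS1", "WS2", "WS3"
    ], var.appservice_sku)
    error_message = "Please use a valid AppService SKU."
  }
}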
Related
I'm trying to create an option group that requires an option with option settings that add multiple values.
See the following, scrubbed for sensitivity:
option {
option_name = "VALID_OPTION_NAME"
option_settings = [
{
name = "foobar1"
value = "foobar1"
},
{
name = "foobar2"
value = "foobar2"
},
{
name = "foobar3"
value = "foobar3"
},
{
name = "foobar4"
value = "foobar4"
},
{
name = "foobar5"
value = "foobar5"
}
]
}
terraform validate gives the following error:
on rds.tf line 112, in resource "aws_db_option_group" "rds-option-group":
 112:   option_settings = [

An argument named "option_settings" is not expected here. Did you mean to
define a block of type "option_settings"?
I've tried numerous variations of this syntax to no avail. The AWS console offers multiple option settings for this option by default, so there should be a way to do it in Terraform as well.
The Option Group docs for Terraform unfortunately don't include an example where one option has multiple settings.
Among other things, I also checked out this thread which didn't help me, I believe because I'm not using that module.
Any recommendations?
Answer, for any future viewers: option_settings is a nested block rather than an argument, so repeat the block once per setting:
option {
option_name = "xx"
option_settings {
name = "xx"
value = "xx"
}
option_settings {
name = "xx"
value = "xx"
}
option_settings {
name = "xx"
value = "xx"
}
option_settings {
name = "xx"
value = "xx"
}
}
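If the settings come from data rather than being hard-coded, a dynamic block can generate the repeated option_settings blocks. A sketch, assuming a hypothetical local map of name/value pairs:
locals {
  # hypothetical example data
  db_option_settings = {
    foobar1 = "foobar1"
    foobar2 = "foobar2"
    foobar3 = "foobar3"
  }
}

# inside the aws_db_option_group resource:
option {
  option_name = "VALID_OPTION_NAME"

  dynamic "option_settings" {
    for_each = local.db_option_settings
    content {
      name  = option_settings.key
      value = option_settings.value
    }
  }
}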
longtime lurker first time poster
Looking for some guidance from you all. I'm trying to replicate the AWS CLI workflow of fetching parameters by path (ssm get-parameters-by-path), looping through them to read each one, and then writing each one back out as a new parameter (ssm put-parameter).
I understand there's a for expression in Terraform, but for the life of me I couldn't put together how to achieve this.
So, thanks to the wonderful breakdown below, I've gotten closer! But I have this one issue. Code below:
provider "aws" {
region = "us-east-1"
}
data "aws_ssm_parameters_by_path" "parameters" {
path = "/${var.old_env}"
recursive = true
}
output "old_params_by_path" {
value = data.aws_ssm_parameters_by_path.parameters
sensitive = true
}
locals {
names = toset(data.aws_ssm_parameters_by_path.parameters.names)
}
data "aws_ssm_parameter" "old_param_name" {
for_each = local.names
name = each.key
}
output "old_params_names" {
value = data.aws_ssm_parameter.old_param_name
sensitive = true
}
resource "aws_ssm_parameter" "new_params" {
for_each = local.names
name = replace(data.aws_ssm_parameter.old_param_name[each.key].name, var.old_env, var.new_env)
type = data.aws_ssm_parameter.old_param_name[each.key].type
value = data.aws_ssm_parameter.old_param_name[each.key].value
}
I have another file, as the helpful poster mentioned below, that creates the initial dataset. But what's interesting is that when the second set is created, it overwrites the first set! The idea is to tell Terraform: here is my current set of SSM parameters; copy their values and types into a brand-new set of parameters, without destroying anything that's already there.
Any and all help would be appreciated!
I understand, it's not easy at the beginning. I will try to explain step by step how I achieved that.
By the way, it helps to include any code you have already tried, even if it doesn't work.
So, firstly I create some example parameters:
# create_parameters.tf
resource "aws_ssm_parameter" "p" {
count = 3
name = "/test/${count.index}/p${count.index}"
type = "String"
value = "test-${count.index}"
}
Then I try to view them:
# example.tf
data "aws_ssm_parameters_by_path" "parameters" {
path = "/test/"
recursive = true
}
output "params_by_path" {
value = data.aws_ssm_parameters_by_path.parameters
sensitive = true
}
As an output I received:
terraform output params_by_path
{
"arns" = tolist([
"arn:aws:ssm:eu-central-1:999999999999:parameter/test/0/p0",
"arn:aws:ssm:eu-central-1:999999999999:parameter/test/1/p1",
"arn:aws:ssm:eu-central-1:999999999999:parameter/test/2/p2",
])
"id" = "/test/"
"names" = tolist([
"/test/0/p0",
"/test/1/p1",
"/test/2/p2",
])
"path" = "/test/"
"recursive" = true
"types" = tolist([
"String",
"String",
"String",
])
"values" = tolist([
"test-0",
"test-1",
"test-2",
])
"with_decryption" = true
}
aws_ssm_parameters_by_path is unusable without additional processing, so we need another data source to get an object suitable for copying the parameters. In the documentation I found aws_ssm_parameter. However, to use it, I need the full name of each parameter.
The list of parameter names was retrieved in the previous stage, so all that remains is to iterate through them:
# example.tf
locals {
names = toset(data.aws_ssm_parameters_by_path.parameters.names)
}
data "aws_ssm_parameter" "param" {
for_each = local.names
name = each.key
}
output "params" {
value = data.aws_ssm_parameter.param
sensitive = true
}
And as a result, I get:
terraform output params
{
"/test/0/p0" = {
"arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/0/p0"
"id" = "/test/0/p0"
"name" = "/test/0/p0"
"type" = "String"
"value" = "test-0"
"version" = 1
"with_decryption" = true
}
"/test/1/p1" = {
"arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/1/p1"
"id" = "/test/1/p1"
"name" = "/test/1/p1"
"type" = "String"
"value" = "test-1"
"version" = 1
"with_decryption" = true
}
"/test/2/p2" = {
"arn" = "arn:aws:ssm:eu-central-1:999999999999:parameter/test/2/p2"
"id" = "/test/2/p2"
"name" = "/test/2/p2"
"type" = "String"
"value" = "test-2"
"version" = 1
"with_decryption" = true
}
}
Each parameter object has been retrieved, so now it is possible to create new parameters - which can be done like this:
# example.tf
resource "aws_ssm_parameter" "new_param" {
for_each = local.names
name = "/new_path${data.aws_ssm_parameter.param[each.key].name}"
type = data.aws_ssm_parameter.param[each.key].type
value = data.aws_ssm_parameter.param[each.key].value
}
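To map this back onto the question's var.old_env / var.new_env naming, the only missing pieces would be the two variable declarations (names assumed from the question's snippet):
# variables.tf
variable "old_env" {
  type        = string
  description = "Path prefix of the existing parameters, e.g. dev"
}

variable "new_env" {
  type        = string
  description = "Path prefix for the copied parameters, e.g. staging"
}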
I need to pass the database host name (which is dynamically generated) as an environment variable into my task definition. I thought I could set locals and have the variable map refer to a local, but it does not work; I receive this error: error="failed to check table existence: dial tcp: lookup local.grafana-db-address on 10.0.0.2:53: no such host". terraform plan executes without issues, and the code works when I hard-code the database host name, but that is not optimal.
My Variables and Locals
//MySql Database Grafana Username (Stored as ENV Var in Terraform Cloud)
variable "username_grafana" {
description = "The username for the DB grafana user"
type = string
sensitive = true
}
//MySql Database Grafana Password (Stored as ENV Var in Terraform Cloud)
variable "password_grafana" {
description = "The password for the DB grafana password"
type = string
sensitive = true
}
variable "db-port" {
description = "Port for the sql db"
type = string
default = "3306"
}
locals {
gra-db-user = var.username_grafana
}
locals {
gra-db-password = var.password_grafana
}
locals {
db-address = aws_db_instance.grafana-db.address
}
locals {
grafana-db-address = "${local.db-address}.${var.db-port}"
}
variable "app_environments_vars" {
type = list(map(string))
description = "Database environment variables needed by Grafana"
default = [
{
"name" = "GF_DATABASE_TYPE",
"value" = "mysql"
},
{
"name" = "GF_DATABASE_HOST",
"value" = "local.grafana-db-address"
},
{
"name" = "GF_DATABASE_USER",
"value" = "local.gra-db-user"
},
{
"name" = "GF_DATABASE_PASSWORD",
"value" = "local.gra-db-password"
}
]
}
Task Definition Variable reference
"environment": ${jsonencode(var.app_environments_vars)},
Thank you to everyone who has helped me with this project. I am new to all of this and could not have done it without help from this community.
You can't use dynamic references in a variable's default value. So your default "value" = "local.grafana-db-address" will never get resolved by Terraform. It will just be the literal string "local.grafana-db-address".
You have to modify your code so that all these dynamic references in app_environments_vars get populated in locals.
UPDATE
Your app_environments_vars should be a local value for it to be resolved:
locals {
app_environments_vars = [
{
"name" = "GF_DATABASE_TYPE",
"value" = "mysql"
},
{
"name" = "GF_DATABASE_HOST",
"value" = local.grafana-db-address
},
{
"name" = "GF_DATABASE_USER",
"value" = local.gra-db-user
},
{
"name" = "GF_DATABASE_PASSWORD",
"value" = local.gra-db-password
}
]
}
Then you pass that local to your template for the task definition.
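For example (assuming the container definition is rendered with templatefile; the exact wiring isn't shown in the question), the rendering call supplies the local and the template drops the var. prefix:
# rendering the task definition template (file name assumed)
container_definitions = templatefile("${path.module}/task-definition.json.tpl", {
  app_environments_vars = local.app_environments_vars
})

# and inside task-definition.json.tpl the reference becomes:
#   "environment": ${jsonencode(app_environments_vars)},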
I have this code, which works if I remove version from the msr code block, but if I add it, this error pops up. So far I have tried interpolating the conditional and changing the types of the variables. No luck.
locals {
  mke_launchpad_tmpl = {
apiVersion = "API"
kind = "mke"
spec = {
mke = {
version: var.mke_version
adminUsername = "admin"
adminPassword = var.admin_password
installFlags : [
"--default-node-orchestrator=kubernetes",
"--san=${module.masters.lb_dns_name}",
]
licenseFilePath: var.license_file_path
upgradeFlags: [
"--force-minimums",
"--force-recent-backups",
]
}
mcr = {
version: var.mcr_version
}
msr = {}
hosts = concat(local.managers, local.workers, local.windows_workers)
}
}
msr_launchpad_tmpl = {
apiVersion = "API"
kind = "mke+msr"
spec = {
mke = {
version: var.mke_version
adminUsername = "admin"
adminPassword = var.admin_password
installFlags : [
"--default-node-orchestrator=kubernetes",
"--san=${module.masters.lb_dns_name}",
]
licenseFilePath: var.license_file_path
upgradeFlags: [
"--force-minimums",
"--force-recent-backups",
]
}
mcr = {
version: var.mcr_version
}
msr = {
version: var.msr_version
installFlags : [
"--ucp-insecure-tls",
"--dtr-external-url ${module.msrs.lb_dns_name}",
]
}
hosts = concat(local.managers, local.msrs, local.workers, local.windows_workers)
}
}
launchpad_tmpl = var.msr_count > 0 ? local.msr_launchpad_tmpl : local.mke_launchpad_tmpl
}
Expected behaviour:
To normally run plan and apply it and get the output at the end to change it for the launchpad and install everything by versions from this output which I can pass in terraform.tfvars
Actual behaviour:
Error: Inconsistent conditional result types
on main.tf line 179, in locals:
179: launchpad_tmpl = var.msr_count > 0 ? local.msr_launchpad_tmpl : local.mke_launchpad_tmpl
|----------------
| local.mke_launchpad_tmpl is object with 3 attributes
| local.msr_launchpad_tmpl is object with 3 attributes
The true and false result expressions must have consistent types. The given
expressions are object and object, respectively.
Unfortunately this is a situation where Terraform doesn't really know how to explain the problem fully because the difference between your two result types is in some details in deeply nested attributes.
However, what Terraform is referring to here is that your local.msr_launchpad_tmpl and local.mke_launchpad_tmpl values have different object types, because an object type in Terraform is defined by the attribute names and associated types, and your msr attributes are not consistent across both objects.
One way you could make this work is to explicitly add the msr attributes to local.msr_launchpad_tmpl but set them to null, so that the object types will be compatible but the unneeded attributes will still be left without a specific value:
msr = {
version = null
installFlags = null
}
This difference in msr's type was the only type difference I noticed between the two expressions, although I might have missed another example. If so, the general idea here is to make sure that both of the values have the same object structure, so that their types will be compatible with one another.
Terraform requires the true and false expressions in a conditional to have compatible types because it uses the common type as the return type for the conditional during type checking. However, in situations like this where you might intentionally want to use a different type for each case, you can use other language constructs that will allow Terraform to successfully complete type checking in other ways.
For example, if you combine both of those object values into a single object container then Terraform will be able to see that each of the two top-level attributes has a different type and see exactly what type each one has:
locals {
launchpad_tmpls = {
mke = {
apiVersion = "API"
kind = "mke"
spec = {
mke = {
version: var.mke_version
adminUsername = "admin"
adminPassword = var.admin_password
installFlags : [
"--default-node-orchestrator=kubernetes",
"--san=${module.masters.lb_dns_name}",
]
licenseFilePath: var.license_file_path
upgradeFlags: [
"--force-minimums",
"--force-recent-backups",
]
}
mcr = {
version: var.mcr_version
}
msr = {}
hosts = concat(local.managers, local.workers, local.windows_workers)
}
}
msr = {
apiVersion = "API"
kind = "mke+msr"
spec = {
mke = {
version: var.mke_version
adminUsername = "admin"
adminPassword = var.admin_password
installFlags : [
"--default-node-orchestrator=kubernetes",
"--san=${module.masters.lb_dns_name}",
]
licenseFilePath: var.license_file_path
upgradeFlags: [
"--force-minimums",
"--force-recent-backups",
]
}
mcr = {
version: var.mcr_version
}
msr = {
version: var.msr_version
installFlags : [
"--ucp-insecure-tls",
"--dtr-external-url ${module.msrs.lb_dns_name}",
]
}
hosts = concat(local.managers, local.msrs, local.workers, local.windows_workers)
}
}
}
launchpad_tmpl = local.launchpad_tmpls[var.msr_count > 0 ? "msr" : "mke"]
}
Because Terraform can see the exact types of both local.launchpad_tmpls["msr"] and local.launchpad_tmpls["mke"], it will be able to determine the exact object type for local.launchpad_tmpl in each case, even though the two have different types.
There is one exception to this: if var.msr_count is unknown during planning (that is, if you've computed it based on a resource attribute that won't be known until the apply step) then Terraform will be left in a situation where it can't infer a specific type for local.launchpad_tmpl, and so Terraform will treat it as an "unknown value of unknown type", which effectively means that any uses you make of it elsewhere in the configuration won't be type checked during planning and so might fail at apply time. However, this caveat won't apply as long as var.msr_count is set to a static value you've specified directly in your configuration.
I ran into this issue with TF 0.14 while trying to conditionally set replication_configuration in a call to aws_s3_bucket:
replication_configuration = var.replication ? local.replication_configuration : {}
var.replication was defined as a bool, and local.replication_configuration looked something like this:
replication_configuration = {
role = "arn:aws:iam::${account}:role/${name}-s3-replication"
rules = [
{
id = "everything-without-filters"
status = "Enabled" # Enabled or Disabled
priority = 10
delete_marker_replication_status = "Enabled"
destination = {
bucket = "arn:aws:s3:::${name}-delete8-dr"
storage_class = "STANDARD_IA"
}
}
]
}
Note: The configuration above is not real working code; it is provided only to illustrate the points below.
An empty {} was not a close enough type match to local.replication_configuration as it was defined, so the conditional failed; and the aws_s3_bucket module errored when passed null, so it was not possible to approach it that way either.
Ultimately, I solved this by writing a conditional without using conditionals:
locals {
repl_bool = {
true = local.replication_configuration
false = {}
}
}
...
module "s3-bucket" {
...
replication_configuration = local.repl_bool[var.replication]
...
}
Writing code like the above really doesn't leave me with a good feeling. It looks awkward to me, and definitely has a hacky feel to it. But we needed to be able to write TF that only used one module, with or without replication, and this was a way to do that.
I ran into a similar error (The given expressions are list and list).
It took quite a bit of trial and error to figure out another hacky workaround.
Here is a simplified non-working example.
output "wont_work" {
value = false ? {
foo: "foo",
bar: {
baz: "foo",
},
} : {}
}
And here is my workaround
output "works" {
value = try(false ? {
foo: "foo",
bar: {
baz: "foo",
},
} : throw_error(), {})
}
When the condition is false, the expression reduces step by step:

value = try(false ? { ... } : throw_error(), {})
  ==>  value = try(throw_error(), {})
  ==>  value = {}

Because throw_error() is not a real function, evaluating it raises an error, and try() falls back to the last argument, {}.
I'm using a module that references a central module used to build a Puppet server in Terraform. There is one variable in the root module that allows additional tags to be added to the ASG, however I can't seem to get the syntax right. This is the information in the core repository:
variable "additional_asg_tags" {
description = "A map of additional tags to add to the puppet server ASG."
type = list(object({ key = string, value = string, propagate_at_launch = bool }))
default = []
}
I've tried everything I can think of to call this but it always errors with messages like "incorrect list element type: string required." or "This default value is not compatible with the variable's type constraint: list of object required."
I'm trying to call the above with something like:
variable "additional_asg_tags" {
description = "A map of additional tags to add to ASG."
type = list(object({ key = string, value = string, propagate_at_launch = bool }))
default = { key = "Name", value = "Puppet-nonprod", propagate_at_launch = "true"
}
}
I've removed the square brackets around this, as they were also causing errors, but I may need to add them back in.
Can someone please help with the correct way to reference a list of objects with these values?
The correct default value for your additional_asg_tags is a list:
variable "additional_asg_tags" {
description = "A map of additional tags to add to ASG."
type = list(object({
key = string,
value = string,
propagate_at_launch = bool
}))
default = [{
key = "Name",
value = "Puppet-nonprod",
propagate_at_launch = "true"
}]
}
You can reference individual elements as follows (some examples):
var.additional_asg_tags[0]["key"]
var.additional_asg_tags[0].value
# to get a list of all propagate_at_launch values
var.additional_asg_tags[*].propagate_at_launch
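These objects map directly onto the ASG's tag blocks; a minimal sketch assuming an aws_autoscaling_group named "example" (not part of the original module):
resource "aws_autoscaling_group" "example" {
  # ...required arguments such as min_size, max_size and a launch template omitted...

  dynamic "tag" {
    for_each = var.additional_asg_tags
    content {
      key                 = tag.value.key
      value               = tag.value.value
      propagate_at_launch = tag.value.propagate_at_launch
    }
  }
}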