I am writing some Terraform code that looks up an AMI based on a variable called fortios_version. I can't seem to figure out how to have a map return another map as its value. Here is my code...
variable "fortios_version" {
type = string
}
variable "fortios_map" {
type = map
default = {
"7.2.0" = "fgtvmbyolami-7-2-0"
"6.4.8" = "fgtvmbyolami-6-4-8"
}
}
variable "fgtvmbyolami-7-2-0" {
type = map
default = {
us-east-1 = "ami-08a9244de2d3b3cfa"
us-east-2 = "ami-0b07d15df1781b3d8"
}
}
My aws_instance code:
ami = lookup(lookup(var.fortios_map[var.fortios_version]), var.region)
My variable:
fortios_version: "7.2.0"
I hope I am making sense. I have played with different variations all day with no luck.
You can't dynamically refer to the variable fgtvmbyolami-7-2-0 based on the output of fortios_map. It would be best to reorganize your variables:
variable "fortios_version" {
type = string
}
variable "fortios_map" {
type = map
default = {
"7.2.0" = "fgtvmbyolami-7-2-0"
"6.4.8" = "fgtvmbyolami-6-4-8"
}
}
variable "amis" {
type = map
default = {
"fgtvmbyolami-7-2-0" = {
us-east-1 = "ami-08a9244de2d3b3cfa"
us-east-2 = "ami-0b07d15df1781b3d8"
},
"fgtvmbyolami-6-4-8" = {
us-east-1 = "ami-08a92333cfa"
us-east-2 = "ami-0b07dgggg781b3d8"
}
}
}
then
ami = var.amis[var.fortios_map[var.fortios_version]][var.region]
You can also expand this with lookup() so that each map falls back to a default value; see the sketch below.
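For example, a minimal sketch of that idea (the fallback version key and the fallback AMI ID are hypothetical placeholders, not values from your environment):

ami = lookup(
  var.amis[lookup(var.fortios_map, var.fortios_version, "fgtvmbyolami-7-2-0")],
  var.region,
  "ami-00000000000000000" # hypothetical fallback; substitute a sensible default AMI
)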
I've the following variable:
variable "mymap" {
type = map(string)
default = {
"key1" = "val1"
"key2" = "val2"
}
}
I am trying to expand this to create individual parameters in this resource:
resource "aws_elasticache_parameter_group" "default" {
name = "cache-params"
family = "redis2.8"
parameter {
name = "activerehashing"
value = "yes"
}
parameter {
name = "min-slaves-to-write"
value = "2"
}
}
My desired state for this example would be:
resource "aws_elasticache_parameter_group" "default" {
name = "cache-params"
family = "redis2.8"
parameter {
name = "key1"
value = "val1"
}
parameter {
name = "key2"
value = "val2"
}
}
I don't see this supported explicitly in the docs; am I even taking the correct approach to doing this?
(I'm mainly looking at leveraging the 'dynamic' and 'for_each' keywords, but haven't had any success.)
To achieve the desired state, you would have to do a couple of things. One option is to use a dynamic block [1] together with for_each [2]. The code would have to be changed to the following:
resource "aws_elasticache_parameter_group" "default" {
name = "cache-params"
family = "redis2.8"
dynamic "parameter" {
for_each = var.mymap
content {
name = parameter.value.name
value = parameter.value.value
}
}
}
However, you would also have to adjust the variable:
variable "mymap" {
type = map(map(string))
description = "Map of parameters for Elasticache."
default = {
"parameter1" = {
"value" = "value1"
"name" = "name1"
}
}
}
Then, you can define the values for the variable mymap in a tfvars file (e.g., terraform.tfvars) like this:
mymap = {
  "parameter1" = {
    "name"  = "activerehashing"
    "value" = "yes"
  }
  "parameter2" = {
    "name"  = "min-slaves-to-write"
    "value" = "2"
  }
}
[1] https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
You can use a dynamic block to dynamically declare zero or more nested configuration blocks based on a collection.
resource "aws_elasticache_parameter_group" "default" {
name = "cache-params"
family = "redis2.8"
dynamic "parameter" {
for_each = var.mymap
content {
name = parameter.key
value = parameter.value
}
}
}
The above tells Terraform to generate one parameter block for each element of var.mymap, and to populate the name and value arguments of each generated block from the key and value of each map element respectively.
The parameter symbol inside the content block represents the current element of the collection. This symbol is by default named after the block type being generated, which is why it was named parameter in this case. It's possible to override that generated name using an additional iterator argument in the dynamic block, but that's necessary only if you are generating multiple levels of nesting where a nested block type has the same name as its container.
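For illustration, a small sketch of that iterator argument using the same var.mymap (the iterator name param is arbitrary):

dynamic "parameter" {
  for_each = var.mymap
  iterator = param

  content {
    name  = param.key
    value = param.value
  }
}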
I am trying to build a reusable module that creates multiple S3 buckets. Based on a condition, some buckets have lifecycle rules and others do not. I am using a for expression in the lifecycle rule resource and have it mostly working, but not completely.
My var:
variable "bucket_details" {
type = map(object({
bucket_name = string
enable_lifecycle = bool
glacier_ir_days = number
glacier_days = number
}))
}
How I iterate over the map in the lifecycle resource:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = { for bucket, values in var.bucket_details : bucket => values if values.enable_lifecycle }
depends_on = [aws_s3_bucket_versioning.compliant_s3_bucket_versioning]
bucket = aws_s3_bucket.compliant_s3_bucket[each.key].bucket
rule {
id = "basic_config"
status = "Enabled"
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
transition {
days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
transition {
days = each.value["glacier_days"]
storage_class = "GLACIER"
}
expiration {
days = 2555
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_days"]
storage_class = "GLACIER"
}
noncurrent_version_expiration {
noncurrent_days = 2555
}
}
}
How I WOULD love to reference it in the root module:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
enable_lifecycle = true
glacier_ir_days = 555
glacier_days = 888
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
enable_lifecycle = false
}
}
}
So when I reference it like that, it cannot validate, because I am not setting values for both glacier_ir_days and glacier_days, which is understandable.
My question is: is there a way, when enable_lifecycle is set to false, not to require values for these?
Currently, as a workaround, I am just setting zeroes for those, and since the resource is not created when enable_lifecycle is false it does not matter, but I would love it to be cleaner.
Thank you in advance.
The forthcoming Terraform v1.3 release will include a new feature for declaring optional attributes in an object type constraint, with the option of declaring a default value to use when the attribute isn't set.
At the time I'm writing this the v1.3 release is still under development and so not available for general use, but I'm going to answer this with an example that should work with Terraform v1.3 once it's released. If you wish to try it in the meantime you can experiment with the most recent v1.3 alpha release which includes this feature, though of course I would not recommend using it in production until it's in a final release.
It seems that your glacier_ir_days and glacier_days attributes are, from a modeling perspective, attributes that are required when the lifecycle is enabled and not required when it is disabled.
I would suggest modelling that by placing these attributes in a nested object called lifecycle and implementing it such that the lifecycle resource is enabled when that attribute is set, and disabled when it is left unset.
The declaration would therefore look like this:
variable "s3_buckets" {
type = map(object({
bucket_name = string
lifecycle = optional(object({
glacier_ir_days = number
glacier_days = number
}))
}))
}
When an attribute is marked as optional(...) like this, Terraform will allow omitting it in the calling module block and then will quietly set the attribute to null when it performs the type conversion to make the given value match the type constraint. This particular declaration doesn't have a default value, but it's also possible to pass a second argument in the optional(...) syntax which Terraform will then use instead of null as the placeholder value when the attribute isn't specified.
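For example, a hedged variant of the declaration above in which the nested numbers fall back to defaults when the caller sets lifecycle but omits them (the 180 and 365 day values are made-up placeholders, and this form also requires Terraform v1.3 or later):

variable "bucket_details" {
  type = map(object({
    bucket_name = string
    lifecycle = optional(object({
      glacier_ir_days = optional(number, 180) # placeholder default
      glacier_days    = optional(number, 365) # placeholder default
    }))
  }))
}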
The calling module block would therefore look like this:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
lifecycle = {
glacier_ir_days = 555
glacier_days = 888
}
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
}
}
}
Your resource block inside the module will remain similar to what you showed, but the if clause of the for expression will test if the lifecycle object is non-null instead:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = {
for bucket, values in var.bucket_details : bucket => values
if values.lifecycle != null
}
# ...
}
Finally, the references to the attributes would be slightly different to traverse through the lifecycle object:
transition {
  days          = each.value.lifecycle.glacier_days
  storage_class = "GLACIER"
}
I want to create a map of maps as a local variable. I am using this:
locals {
  region_map = {
    mumbai = {
      is_second_execution = true
      cg_ip_address       = "ip.add.re.ss"
    }
  }
}
Now I am referencing it as
module "mumbai" {
source = "./site-to-site-vpn-setup"
providers = { aws = aws.mumbai }
is_second_execution = lookup(local.region_map, local.region_map["mumbai"]["is_second_execution"], false)
cg_ip_address = lookup(local.region_map, local.region_map["mumbai"]["cg_ip_address"], "")
}
but upon running terraform plan, cg_ip_address is being set to null.
Also, if I add another module, say "saopaulo", and I need to pass default values for is_second_execution and cg_ip_address without adding saopaulo to the map, how do I do that?
The lookup built-in function [1] has the following syntax:
lookup(map, key, default)
Since you have a map of maps, the first argument is the inner map (local.region_map.mumbai), the second is the key you are looking for (e.g., "cg_ip_address"), and the third is the default value. So in your case you have to change the lookups to this:
module "mumbai" {
source = "./site-to-site-vpn-setup"
providers = { aws = aws.mumbai }
is_second_execution = lookup(local.region_map.mumbai, "is_second_execution", false)
cg_ip_address = lookup(local.region_map.mumbai, "cg_ip_address", "")
}
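For the second part of the question, a minimal sketch (assuming a provider alias aws.saopaulo is configured, and with "saopaulo" deliberately absent from region_map): look up the region's sub-map with an empty-map default first, so the inner lookups fall back to their defaults:

module "saopaulo" {
  source    = "./site-to-site-vpn-setup"
  providers = { aws = aws.saopaulo } # assumes this provider alias exists

  is_second_execution = lookup(lookup(local.region_map, "saopaulo", {}), "is_second_execution", false)
  cg_ip_address       = lookup(lookup(local.region_map, "saopaulo", {}), "cg_ip_address", "")
}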
[1] https://www.terraform.io/language/functions/lookup
I need to change, using Terraform, the default project_id in my Composer environment so that I can access secrets from another project. To do so, according to the Terraform documentation, I need the airflow_config_overrides argument. I guess I should have something like this:
resource "google_composer_environment" "test" {
# ...
config {
software_config {
airflow_config_overrides = {
secrets-backend = "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend",
secrets-backend_kwargs = {"project_id":"9999999999999"}
}
}
}
}
The secrets-backend section-key seems to be working. On the other hand, secrets-backend_kwargs is returning the following error:
Inappropriate value for attribute "airflow_config_overrides": element "secrets-backend_kwargs": string required
It seems that the problem is that GCP expects a JSON format and Terraform requires a string. How can I get Terraform to provide it in the format needed?
You can convert a map such as {"project_id":"9999999999999"} into a JSON encoded string by using the jsonencode function.
So merging the example given in the google_composer_environment resource documentation with your config in the question you can do something like this:
resource "google_composer_environment" "test" {
name = "mycomposer"
region = "us-central1"
config {
software_config {
airflow_config_overrides = {
secrets-backend = "airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend",
secrets-backend_kwargs = jsonencode({"project_id":"9999999999999"})
}
pypi_packages = {
numpy = ""
scipy = "==1.1.0"
}
env_variables = {
FOO = "bar"
}
}
}
}
I am building a very basic Systems Manager association in Terraform, but I do not understand what the sourceInfo field is asking for. It requires a string, but even simple strings like "test" cause it to reject the input.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{"owner":"awslabs","repository":"amazon-ssm","path":"Compliance/InSpec/PortCheck","getOptions":"branch:master"}"
#^this line doesn't work
#sourceInfo = "test"
#^this line doesn't work either
}
}
Instead of escaping all of your strings you could also use the jsonencode function to turn a map into the JSON you want:
locals {
  source_info = {
    owner      = "awslabs"
    repository = "amazon-ssm"
    path       = "Compliance/InSpec/PortCheck"
    getOptions = "branch:master"
  }
}

resource "aws_ssm_association" "sslscanssm" {
  name             = "AWS-RunInspecChecks"
  association_name = "test"

  targets = {
    key    = "tag:os"
    values = ["linux"]
  }

  parameters {
    sourceType = "GitHub"
    sourceInfo = jsonencode(local.source_info)
  }
}
I wasn't aware that sourceInfo expects the whole JSON object as a quoted string with all inner double quotes escaped, or it won't work.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{\"owner\":\"awslabs\",\"repository\":\"amazon-ssm\",\"path\":\"Compliance/InSpec/PortCheck\",\"getOptions\":\"branch:master\"}"
}
}
There is a mistake in the code shared above: targets is a block, so it takes no equals sign, while parameters is an argument and needs one. The correct syntax for the resource is:
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets {
key = "tag:os"
values = ["linux"]
}
parameters = {
sourceType = "GitHub"
sourceInfo = "${jsonencode(local.source_info)}"
}
}