Terraform loop over list of objects in dynamic block issue - google-cloud-platform

I am trying to create a storage bucket in GCP using Terraform. Please see the implementation and the .tfvars snippet for it below.
Implementation logic
resource "google_storage_bucket" "cloud_storage" {
  for_each      = { for gcs in var.storage_buckets : gcs.name => gcs }
  name          = each.value.name
  location      = lookup(each.value, "location", "AUSTRALIA-SOUTHEAST1")
  project       = data.google_project.existing_projects[each.value.project].project_id
  force_destroy = lookup(each.value, "force_destroy", false)
  storage_class = lookup(each.value, "storage_class", "STANDARD")
  labels = merge(
    lookup(each.value, "labels", {}),
    {
      managed_by = "terraform"
    }
  )
  dynamic "versioning" {
    for_each = [for version in [lookup(each.value, "versioning", null)] : version if version != null]
    content {
      enabled = lookup(versioning.value, "enabled", true)
    }
  }
  dynamic "lifecycle_rule" {
    for_each = [for rule in [lookup(each.value, "lifecycle_rule", toset([]))] : rule if length(rule) != 0]
    content {
      action {
        type          = lifecycle_rule.value.action.type
        storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
      }
      condition {
        # matches_suffix = lookup(lifecycle_rule.value["condition"], "matches_suffix", null)
        age = lookup(lifecycle_rule.value.condition, "age", null)
      }
    }
  }
  uniform_bucket_level_access = lookup(each.value, "uniform_bucket_level_access", false)
  depends_on = [
    data.google_project.existing_projects
  ]
}
.tfvars snippet
storage_buckets = [
  # this 1st bucket is only defined in DEV tfvars. reason: this bucket is a one-time creation for all DWH cloud artifacts under the ecx-cicd-tools project.
  {
    name          = "ecx-dwh-artefacts"
    location      = "AUSTRALIA-SOUTHEAST1"
    force_destroy = false
    project       = "ecx-cicd-tools"
    storage_class = "STANDARD"
    versioning = {
      enabled = false
    }
    labels = {
      app     = "alation"
      project = "resetx"
      team    = "dwh"
    }
    uniform_bucket_level_access = false
    folders = [
      "alation/", "alation/packages/", "alation/packages/archive/",
      "alation/backups/", "alation/backups/data/", "alation/backups/data/DEV/", "alation/backups/data/PROD/"
    ]
    lifecycle_rule = [
      {
        action = {
          type = "Delete"
        }
        condition = {
          age = "10"
        }
      },
    ]
  },
  {
    name          = "eclipx-dwh-dev"
    location      = "AUSTRALIA-SOUTHEAST1"
    force_destroy = false
    project       = "eclipx-dwh-dev"
    storage_class = "STANDARD"
    versioning    = {}
    labels = {
      app     = "dataflow"
      project = "resetx"
      team    = "dwh"
    }
    uniform_bucket_level_access = false
    folders = ["Data/", "Data/stagingCustomDataFlow/", "Data/temp/", "Data/templatesCustomDataFlow/"]
    lifecycle_rule = []
  }
]
Somehow I am unable to make the dynamic block work in the bucket provisioning logic for the lifecycle_rule section. I am passing a list of objects from the .tfvars because I need to be able to add many rules to the same bucket.
It looks like the for_each loop is not iterating over the list of objects in lifecycle_rule of the .tfvars.
Below are the errors it is throwing. Can someone please assist?
Error: Unsupported attribute
│
│ on storage.tf line 56, in resource "google_storage_bucket" "cloud_storage":
│ 56: type = lifecycle_rule.value.action.type
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "action" for a specific element of the list, or across all elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on storage.tf line 57, in resource "google_storage_bucket" "cloud_storage":
│ 57: storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "action" for a specific element of the list, or across all elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on storage.tf line 61, in resource "google_storage_bucket" "cloud_storage":
│ 61: age = lookup(lifecycle_rule.value.condition, "age", null)
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "condition" for a specific element of the list, or across all elements of the list?
Thank you.
I am expecting the dynamic block to loop over lifecycle_rule.

Your for_each is incorrect: the original expression wraps the entire lifecycle_rule list inside another list, so lifecycle_rule.value becomes the whole list instead of a single rule. It should be:
dynamic "lifecycle_rule" {
  for_each = length(each.value["lifecycle_rule"]) != 0 ? each.value["lifecycle_rule"] : []
  content {
    action {
      type          = lifecycle_rule.value.action.type
      storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
    }
    condition {
      # matches_suffix = lookup(lifecycle_rule.value["condition"], "matches_suffix", null)
      age = lookup(lifecycle_rule.value.condition, "age", null)
    }
  }
}
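If some buckets may omit the lifecycle_rule key entirely, the same dynamic block can fall back to an empty list with lookup. A minimal sketch, not the asker's exact code:

```hcl
dynamic "lifecycle_rule" {
  # lookup() returns [] when the bucket object has no lifecycle_rule key,
  # so the dynamic block simply produces zero nested blocks.
  for_each = lookup(each.value, "lifecycle_rule", [])
  content {
    action {
      type          = lifecycle_rule.value.action.type
      storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
    }
    condition {
      age = lookup(lifecycle_rule.value.condition, "age", null)
    }
  }
}
```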

Related

Add tag to launch template for ec2 nodes

I am trying to add tags to a launch template so that the EC2 nodes are tagged and named.
When I add hardcoded tags inside the module it works, but the idea is to have dynamic tags and be able to merge them with the local ones.
module
resource "aws_autoscaling_group" "ecs_asg" {
  name     = var.name_asg
  max_size = var.max_size
  min_size = var.min_size
  .
  .
  .
  service_linked_role_arn = var.service_linked_role_arn
  tags                    = var.asg_tags
  launch_template {
    id      = aws_launch_template.launch_template.id
    version = "$Latest"
  }
}
variables.tf
variable "asg_tags" {
  type    = map(string)
  default = {}
}
main.tf
name_asg = "leo-nombre-asg"
max_size = var.max_size
min_size = var.min_size
.
.
.
asg_tags = merge(
  local.tags,
  {
    propagate_at_launch = true,
  },
)
locals.tf
locals {
  tags = {
    "Accountable" = "business"
    "Deploy"      = "terraform"
    "Role"        = "services"
  }
}
terraform validate
│ Error: Incorrect attribute value type
│
│ on modules\ecs\main.tf line 38, in resource "aws_autoscaling_group" "ecs_asg":
│ 38: tags = var.asg_tags
│ ├────────────────
│ │ var.asg_tags is a map of string
│
│ Inappropriate value for attribute "tags": set of map of string required.
The two fixes necessary here are both for the type of the asg_tags argument value:
asg_tags = [merge(local.tags, { "propagate_at_launch" = "true" })]
Here we use the list/set constructor to cast the value to set(map(string)). Terraform will coerce the constructor value to a set instead of a list as long as the declared type is set. Since we need to fix the type declaration anyway to be compatible with the resource attribute schema, it is convenient to do:
variable "asg_tags" {
  type    = set(map(string))
  default = []
}
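Note that the autoscaling group's tags attribute expects each element of the set to carry key, value, and propagate_at_launch entries. If that shape is needed, a plain tag map can be reshaped with a for expression. A sketch, assuming local.tags as defined above; the local name asg_tag_maps is illustrative:

```hcl
locals {
  # Reshape {"Deploy" = "terraform", ...} into the list-of-maps form
  # [{key = "Deploy", value = "terraform", propagate_at_launch = "true"}, ...]
  asg_tag_maps = [
    for k, v in local.tags : {
      key                 = k
      value               = v
      propagate_at_launch = "true"
    }
  ]
}
```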

Facing Issue with same variable name "node_config" for two different resource while creating a module

My module is giving "Reference to undeclared resource" when I create a Google Container cluster using two resources that both have a dynamic block named "node_config".
This is my main.tf file
resource "google_container_cluster" "primary" {
  name        = var.name
  location    = var.location
  description = var.description
  project     = var.project
  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  # default_max_pods_per_node = var.default_max_pods_per_node
  enable_tpu                  = var.enable_tpu
  enable_shielded_nodes       = var.enable_shielded_nodes
  enable_legacy_abac          = var.enable_legacy_abac
  enable_kubernetes_alpha     = var.enable_kubernetes_alpha
  enable_intranode_visibility = var.enable_intranode_visibility
  node_locations              = var.node_locations
  resource_labels             = var.resource_labels
  remove_default_node_pool    = var.remove_default_node_pool
  initial_node_count          = var.initial_node_count
  dynamic "node_config" {
    for_each = var.node_config
    content {
      disk_size_gb     = node_config.value["disk_size_gb"]
      disk_type        = node_config.value["disk_type"]
      image_type       = node_config.value["image_type"]
      labels           = node_config.value["labels"]
      local_ssd_count  = node_config.value["local_ssd_count"]
      machine_type     = node_config.value["machine_type"]
      metadata         = node_config.value["metadata"]
      min_cpu_platform = node_config.value["min_cpu_platform"]
      oauth_scopes     = node_config.value["oauth_scopes"]
      preemptible      = node_config.value["preemptible"]
      dynamic "shielded_instance_config" {
        for_each = node_config.value.shielded_instance_config
        content {
          enable_integrity_monitoring = shielded_instance_config.value["enable_integrity_monitoring"]
          enable_secure_boot          = shielded_instance_config.value["enable_secure_boot"]
        }
      }
      tags = node_config.value["tags"]
    }
  }
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
  name           = var.nodepool_name
  location       = var.location
  project        = var.project
  cluster        = google_container_cluster.primary.name
  node_count     = var.nodepool_node_count
  node_locations = var.node_locations
  dynamic "node_config" {
    for_each = var.ndpool_node_config
    content {
      disk_size_gb = ndpool_node_config.value["disk_size_gb"]
      disk_type    = ndpool_node_config.value["disk_type"]
      preemptible  = ndpool_node_config.value["preemptible"]
      image_type   = ndpool_node_config.value["image_type"]
      machine_type = ndpool_node_config.value["machine_type"]
      oauth_scopes = ndpool_node_config.value["oauth_scopes"]
    }
  }
}
This is my module definition inside the module folder:
module "google_container_cluster" {
  source      = "../"
  name        = var.name
  location    = var.location
  description = var.description
  project     = var.project
  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  # default_max_pods_per_node = var.default_max_pods_per_node
  enable_tpu                  = var.enable_tpu
  enable_shielded_nodes       = var.enable_shielded_nodes
  enable_legacy_abac          = var.enable_legacy_abac
  enable_kubernetes_alpha     = var.enable_kubernetes_alpha
  enable_intranode_visibility = var.enable_intranode_visibility
  node_locations              = var.node_locations
  resource_labels             = var.resource_labels
  remove_default_node_pool    = var.remove_default_node_pool
  initial_node_count          = var.initial_node_count
  node_config                 = var.node_config
  nodepool_name               = var.nodepool_name
  nodepool_node_count         = var.nodepool_node_count
}
My terraform.tfvars are as follows
name                        = "tf-gcp-cluster"
location                    = "us-central1-c"
description                 = "Cluster Creation using TF"
project                     = "gcp-terraform-prjt"
#default_max_pods_per_node = ""
enable_tpu                  = false
enable_shielded_nodes       = false
enable_legacy_abac          = false
enable_kubernetes_alpha     = false
enable_intranode_visibility = false
node_locations              = []
resource_labels = {
  "test" = "tftestgcp"
}
remove_default_node_pool = true
initial_node_count       = 1
node_config = [{
  disk_size_gb = "10"
  disk_type    = "pd-standard"
  image_type   = "cos_containerd"
  labels = {
    "test" = "tf-container-cluster"
  }
  local_ssd_count = "0"
  machine_type    = "e2-micro"
  metadata = {
    "key" = "value"
  }
  min_cpu_platform = ""
  oauth_scopes = [
    "https://www.googleapis.com/auth/cloud-platform"
  ]
  preemptible = true
  shielded_instance_config = [{
    enable_integrity_monitoring = false
    enable_secure_boot          = false
  }]
  tags = ["value"]
}]
nodepool_name       = "tf-nodepool"
nodepool_node_count = 1
ndpool_node_config = [{
  disk_size_gb = 10
  disk_type    = "pd-standard"
  image_type   = "cos_container"
  machine_type = "e2-micro"
  oauth_scopes = [
    "https://www.googleapis.com/auth/cloud-platform"
  ]
  preemptible = false
}]
I have supplied values for both "node_config" and "ndpool_node_config", but for some reason it gives me the following error when I run the terraform plan command:
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 57, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 57: disk_size_gb = ndpool_node_config.value["disk_size_gb"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
╵
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 58, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 58: disk_type = ndpool_node_config.value["disk_type"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
╵
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 59, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 59: preemptible = ndpool_node_config.value["preemptible"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
╵
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 60, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 60: image_type = ndpool_node_config.value["image_type"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
╵
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 61, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 61: machine_type = ndpool_node_config.value["machine_type"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
╵
╷
│ Error: Reference to undeclared resource
│
│ on ../main.tf line 62, in resource "google_container_node_pool" "primary_preemptible_nodes":
│ 62: oauth_scopes = ndpool_node_config.value["oauth_scopes"]
│
│ A managed resource "ndpool_node_config" "value" has not been declared in module.google_container_cluster.
I want the values to be passed down as I have defined them; if I remove the block, it creates the node pool with default values rather than the values I defined.
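The errors arise because inside a dynamic block the iterator symbol defaults to the block's label ("node_config" here), not the name of the variable used in for_each. A sketch of the fix: either change the references to node_config.value, or declare an explicit iterator so the existing references resolve:

```hcl
dynamic "node_config" {
  for_each = var.ndpool_node_config
  # Rename the iterator so ndpool_node_config.value resolves;
  # without this line the iterator would be called node_config.
  iterator = ndpool_node_config
  content {
    disk_size_gb = ndpool_node_config.value["disk_size_gb"]
    disk_type    = ndpool_node_config.value["disk_type"]
    preemptible  = ndpool_node_config.value["preemptible"]
    image_type   = ndpool_node_config.value["image_type"]
    machine_type = ndpool_node_config.value["machine_type"]
    oauth_scopes = ndpool_node_config.value["oauth_scopes"]
  }
}
```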

Terraform --Create multiple hosted zoned and assign different records for each zones

I am new to Terraform and trying to change an existing script where we used to create one Route 53 zone and a corresponding Route 53 record. The requirement now is to add one more zone and record (correspondingly). I am trying a multi-level map; I need your help to correct my code.
tf.vars
variable "facade_hostname" = {
  type = "map"
  default = {
    old_mobile_facade_hostname = "xxx.morgen.nl"
    new_mobile_facade_hostname = "xxx.test.nl"
  }
}
dns_config = {
  old_dns_records = {
    mobile_facade = {
      name    = "xxx.morgen.nl",
      ttl     = "5",
      type    = "A",
      records = ["1.2.3.4"]
    }
  },
  new_dns_records = {
    mobile_facade = {
      name    = "xxx.test.nl",
      ttl     = "5",
      type    = "A",
      records = ["5.6.7.8"]
    }
  }
}
variables.tf
variable "dns_config" {
  type = map(object({
    name    = string
    ttl     = string
    type    = string
    records = string
  }))
  default = {}
}
variable "facade_hostname" {
  type = map(object({
    old_mobile_facade_hostname = string
    new_mobile_facade_hostname = string
  }))
  default = {}
}
And finally my resource creation:
resource "aws_route53_zone" "private" {
  for_each      = var.facade_hostname
  count         = var.dns_config != "" && var.facade_hostname != "" ? 1 : 0
  name          = var.facade_hostname
  force_destroy = true
  vpc {
    vpc_id = module.vpc_private.vpc_id
  }
}
resource "aws_route53_record" "A" {
  for_each        = var.facade_hostname
  count           = var.dns_config != "" && var.facade_hostname != "" ? 1 : 0
  zone_id         = aws_route53_zone.private[count.index].zone_id
  name            = var.dns_config.facade_hostname.name
  ttl             = var.dns_config.facade_hostname.ttl
  type            = var.dns_config.facade_hostname.type
  records         = var.dns_config.facade_hostname.records
  allow_overwrite = true
}
The error I am encountering when running terraform init:
╷
│ Error: Invalid combination of "count" and "for_each"
│
│ on route53.tf line 2, in resource "aws_route53_zone" "private":
│ 2: for_each = var.facade_hostname
│
│ The "count" and "for_each" meta-arguments are mutually-exclusive, only one
│ should be used to be explicit about the number of resources to be created.
╵
╷
│ Error: Invalid combination of "count" and "for_each"
│
│ on route53.tf line 12, in resource "aws_route53_record" "A":
│ 12: for_each = var.facade_hostname
│
│ The "count" and "for_each" meta-arguments are mutually-exclusive, only one
│ should be used to be explicit about the number of resources to be created.
╵
aws-vault: error: exec: Failed to wait for command termination: exit status 1
Thanks
Finally, after spending some time, this seems to be a working solution, in case it helps anyone in the future, for creating a couple of hosted zones and a different A record for each zone:
resource "aws_route53_zone" "private" {
  for_each      = var.mobile_facade_hostname
  name          = each.key
  force_destroy = true
  vpc {
    vpc_id = module.vpc_private.vpc_id
  }
}
resource "aws_route53_record" "A" {
  for_each = aws_route53_zone.private
  zone_id  = each.value["zone_id"]
  name     = trimsuffix(each.value["name"], ".")
  type     = "A"
  ttl      = "5"
  records  = [var.mobile_facade_hostname[trimsuffix(each.value["name"], ".")]]
}
My tfvars
mobile_facade_hostname = { "x.y.nl" = "1.2.3.4", "a.b.nl" = "5.6.7.8" }
variables.tf
variable "mobile_facade_hostname" {
  type    = map(string)
  default = {}
}
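Chaining for_each over the zone resource works because aws_route53_zone.private is itself a map keyed by hostname. If each zone later needs its own TTL or record type, the value can be promoted from a plain string to an object. A sketch, with hypothetical field names (ip, ttl) that are not part of the original solution:

```hcl
variable "mobile_facade_hostname" {
  type = map(object({
    ip  = string # record value; these field names are illustrative
    ttl = string
  }))
  default = {
    "x.y.nl" = { ip = "1.2.3.4", ttl = "5" }
    "a.b.nl" = { ip = "5.6.7.8", ttl = "5" }
  }
}
```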

Cloud Function Module Terraform

I am comparatively new to Terraform and trying to create a working module which can spin up multiple Cloud Functions at once. The part that is throwing an error for me is where I am dynamically building the event trigger. I have written rough code below.
Can someone please suggest what I am doing wrong?
Main.tf
resource "google_cloudfunctions_function" "event-function" {
  for_each    = var.cloudfunctions
  project     = local.test_project
  region      = lookup(local.regions, "use1")
  name        = format("clf-%s-%s-use1-%s-%s", var.domain, var.env, var.use_case, each.key)
  description = format("clf-%s-%s-use1-%s-%s", var.domain, var.env, var.use_case, each.key)
  #source_directory = "${path.module}/${each.value}}
  #bucket_force_destroy = var.bucket_force_destroy
  entry_point = each.value.entry_point
  runtime     = each.value.runtime
  #vpc_connector = "projects/${var.host_project}/locations/${var.region}/connectors/${var.vpc_connector_prefix}-${var.environment}-test"
  dynamic "event_trigger" {
    for_each = [for i in each.value.event_trigger : lookup(local.event_trigger, i.event_name, i.resource)]
    content {
      event_type = event_trigger.value.event_type
      resource   = event_trigger.value.resource
    }
  }
}
Variables.tf
variable "cloudfunctions" {
  type = map(object({
    runtime = string
    event_trigger = list(object({
      event_type = string
      resource   = string
    }))
  }))
  default = {}
}
Locals.tf
42. event_trigger = flatten([
43.   for i, n in var.cloudfunctions : [
44.     for event in n.event_trigger : {
45.       event_type = event_type
46.       resource   = resource
        }
      ]
    ])
}
Error
on locals.tf line 44, in locals:
│ 44: event_type = event_type
│
│ A reference to a resource type must be followed by at least one attribute
│ access, specifying the resource name.
╵
╷
│ Error: Invalid reference
│
│ on locals.tf line 45, in locals:
│ 45: resource = resource
│
│ The "resource" object must be followed by two attribute names: the resource
│ type and the resource name.
Your event_trigger is inside n, so each attribute must be read from the loop variable event. Thus, your locals should be:
event_trigger = flatten([
  for i, n in var.cloudfunctions : [
    for event in n.event_trigger : {
      event_type = event.event_type
      resource   = event.resource
    }
  ]
])
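Since var.cloudfunctions already types event_trigger as a list of objects, the dynamic block in the resource can also iterate each function's own list directly, without going through a flattened local at all. A sketch:

```hcl
dynamic "event_trigger" {
  # each.value is one entry of var.cloudfunctions, so this iterates
  # only the triggers that belong to the current function.
  for_each = each.value.event_trigger
  content {
    event_type = event_trigger.value.event_type
    resource   = event_trigger.value.resource
  }
}
```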

Terraform workspaces creation

I am trying to write Terraform code for creating WorkSpaces, and will be using it for future creation as well. I am facing an issue while referencing the bundle_ids, since there are multiple bundles available and the mapping changes with the requirements each time. Could someone suggest a better approach to this?
resource "aws_workspaces_workspace" "this" {
  directory_id                   = var.directory_id
  for_each                       = var.workspace_user_names
  user_name                      = each.key
  bundle_id                      = [local.bundle_ids["${each.value}"]]
  root_volume_encryption_enabled = true
  user_volume_encryption_enabled = true
  volume_encryption_key          = var.volume_encryption_key
  workspace_properties {
    user_volume_size_gib                      = 50
    root_volume_size_gib                      = 80
    running_mode                              = "AUTO_STOP"
    running_mode_auto_stop_timeout_in_minutes = 60
  }
  tags = var.tags
}
terraform.tfvars
directory_id = "d-xxxxxxx"
## Add the Workspace Username & bundle_id
workspace_user_names = {
  "User1" = "n"
  "User2" = "y"
  "User3" = "k"
}
locals.tf
locals {
  bundle_ids = {
    "n" = "wsb-nn"
    "y" = "wsb-yy"
    "k" = "wsb-kk"
  }
}
Terraform plan
Error: Incorrect attribute value type
│
│ on r_aws_workspaces.tf line 8, in resource "aws_workspaces_workspace" "this":
│ 8: bundle_id = [local.bundle_ids["${each.value}"]]
│ ├────────────────
│ │ each.value will be known only after apply
│ │ local.bundle_ids is object with 3 attributes
│
│ Inappropriate value for attribute "bundle_id": string required.
At the moment you have a list, but it should be a string. Assuming everything else is correct, the following should address your error:
bundle_id = local.bundle_ids[each.value]
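To fail fast when a user is mapped to an alias missing from local.bundle_ids, the lookup can be guarded with a precondition (available since Terraform 1.2). A sketch, not part of the original answer:

```hcl
resource "aws_workspaces_workspace" "this" {
  for_each  = var.workspace_user_names
  user_name = each.key
  bundle_id = local.bundle_ids[each.value]
  # ... remaining arguments as in the question ...
  lifecycle {
    precondition {
      condition     = contains(keys(local.bundle_ids), each.value)
      error_message = "Unknown bundle alias '${each.value}' for user '${each.key}'."
    }
  }
}
```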