Terraform: Invalid dynamic for_each value: "Cannot use a set of object value in for_each. An iterable collection is required"

In Terraform version 1.1.9, I am facing the issue below when running terraform apply.
How can this for_each be written without the error?
A sample of the rke_nodes value is produced by this output:
# Outputs
output "rancher_nodes" {
  value = [
    for instance in flatten([[aws_instance.node_all], [aws_instance.node_master], [aws_instance.node_worker]]) : {
      public_ip  = instance.public_ip
      private_ip = instance.private_ip
      hostname   = instance.id
      user       = var.node_username
      roles      = split(",", instance.tags.K8sRoles)
      ssh_key    = file(var.ssh_key_file)
    }
  ]
  sensitive = true
}
I have this in variable.tf:
variable "rke_nodes" {
  type = list(object({
    public_ip  = string
    private_ip = string
    hostname   = string
    roles      = list(string)
    user       = string
    ssh_key    = string
  }))
  description = "Node info to install RKE cluster"
}
main.tf:
# Provision RKE cluster on provided infrastructure
resource "rke_cluster" "rancher_cluster" {
  cluster_name = var.rke.cluster_name

  dynamic "nodes" {
    for_each = var.rke_nodes
    content {
      address           = nodes.value.public_ip
      internal_address  = nodes.value.private_ip
      hostname_override = nodes.value.hostname
      user              = nodes.value.user
      role              = nodes.value.roles
      ssh_key           = nodes.value.ssh_key
    }
  }

  upgrade_strategy {
    drain                        = false
    max_unavailable_controlplane = "1"
    max_unavailable_worker       = "10%"
  }

  kubernetes_version = var.rke.kubernetes_version
}
I got this error when running terraform apply:
╷
│ Error: Invalid dynamic for_each value
│
│ on .terraform/modules/rke-cluster/main.tf line 6, in resource "rke_cluster" "rancher_cluster":
│ 6: for_each = var.rke_nodes
│ ├────────────────
│ │ var.rke_nodes has a sensitive value
│
│ Cannot use a list of object value in for_each. An iterable collection is required.
The actual value at apply time (which can sometimes be a list) looks like this:
- nodes {
    - address           = "65.2.140.68" -> null
    - hostname_override = "i-0d5bf5f22fb84f5d4" -> null
    - internal_address  = "10.30.8.120" -> null
    - labels            = {} -> null
    - role              = [
        - "controlplane",
        - "etcd",
        - "worker",
      ] -> null
    - ssh_agent_auth    = false -> null
    - ssh_key           = (sensitive value)
    - user              = (sensitive value)
  }

You don't need an index. It should just be:
for_each = var.rke_nodes
Note: this works only for dynamic blocks. If you use for_each on a resource block, this form of for_each (a list of objects) will not work, because resource-level for_each requires a map or a set of strings.
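For completeness, a resource-level for_each would first need the list projected into a map keyed by something unique. A minimal sketch, assuming hostname is unique per node and the collection is not marked sensitive (the resource type below is purely illustrative):
resource "null_resource" "per_node" {
  # One instance per node, keyed by hostname
  for_each = { for node in var.rke_nodes : node.hostname => node }
  # each.value.public_ip, each.value.roles, etc. are available here
}
Inside a dynamic block, passing the list directly is fine.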

Related

(AWS) Terraform: "no matching Route53Zone found"

I'm currently trying to set up an AWS EC2 instance and an integrated API Gateway with Terraform.
I watched the tutorial by Anton Putra: https://www.youtube.com/watch?v=XhS2JbPg8jA&t=287s
and also cloned his code: https://github.com/antonputra/tutorials/tree/main/lessons/118
I simply wanted to rename some of the resources and apply the Terraform.
"terraform init" works, but when I run "terraform apply", I get the "no matching Route53Zone found" error message from the title.
This is the code from the file it's complaining about:
resource "aws_acm_certificate" "gradebook" {
    domain_name          = "gradebook.bmeisn.com"
    validation_method = "DNS"
} 
data "aws_route53_zone" "gradebook-r53z" {
    name              = "bmeisn.com"
    private_zone      = false
} 
resource "aws_route53_record" "gradebook-r53r" {
    for_each = {
        for dvo in aws_acm_certificate.gradebook.domain_validation_options : dvo.domain_name => {
            name    = dvo.resource_record_name
            record    = dvo.resource_record_value
            type    = dvo.resource_record_type
        }
    }    
allow_overwrite = true
    name            = each.value.name
    records            = [each.value.record]
    ttl                = 60
    type            = each.value.type
    zone_id            = data.aws_route53_zone.gradebook-r53z.zone_id
} 
resource "aws_acm_certificate_validation" "gradebook" {
    certificate_arn            = aws_acm_certificate.gradebook.arn
    validation_record_fqdns    = [for record in aws_route53_record.gradebook-r53r : record.fqdn ]
}
I read that it might be because of the domain, so here's the .tf file for that as well:
resource "aws_apigatewayv2_domain_name" "gradebook" {
  domain_name = "gradebook.bmeisn.com"   domain_name_configuration {
    certificate_arn = aws_acm_certificate.gradebook.arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }  
depends_on = [aws_acm_certificate_validation.gradebook]
} 
resource "aws_route53_record" "gradebook-r53r-02" {
  name    = aws_apigatewayv2_domain_name.gradebook.domain_name
  type    = "A"
  zone_id = data.aws_route53_zone.gradebook-r53z.zone_id   alias {
    name                   = aws_apigatewayv2_domain_name.gradebook.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.gradebook.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = false
  }
} 
resource "aws_apigatewayv2_api_mapping" "gradebook-map" {
  api_id      = aws_apigatewayv2_api.gradebook-agw.id
  domain_name = aws_apigatewayv2_domain_name.gradebook.id
  stage       = aws_apigatewayv2_stage.dev.id
} 
output "custom_domain_api-v2" {
  value = "https://${aws_apigatewayv2_api_mapping.gradebook-map.domain_name}/health"
}
The whole setup around it seems to work, so I'm assuming I did something wrong here; I just can't figure out what exactly, as I'm not very experienced with this technology.
Also, if this question is missing any important info, let me know.
As pointed out in the comments, you aren't actually creating your Route 53 zone anywhere. If you're committed to doing it via Terraform (I'd personally advise against it, but it's your choice to make), the aws_route53_zone resource is what you seek; its documentation also has an example of how to reference a zone you create.
In case you still get messages about the zone being absent AFTER referencing the zone resource you are creating (i.e., Terraform getting the resource creation order wrong), just add depends_on and call it a day.
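A minimal sketch of that approach, assuming the data source is replaced by a managed zone resource:
resource "aws_route53_zone" "gradebook-r53z" {
  name = "bmeisn.com"
}
Then point the records at aws_route53_zone.gradebook-r53z.zone_id instead of data.aws_route53_zone.gradebook-r53z.zone_id. The zone_id reference alone already creates the dependency; depends_on = [aws_route53_zone.gradebook-r53z] on the dependent record is only a fallback if the ordering still misbehaves.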

Terraform loop over list of objects in dynamic block issue

I am trying to create a storage bucket in GCP using Terraform. Please see the implementation below and the .tfvars snippet for the same.
Implementation logic:
resource "google_storage_bucket" "cloud_storage" {
for_each = {for gcs in var.storage_buckets : gcs.name => gcs}
name = each.value.name
location = lookup(each.value, "location", "AUSTRALIA-SOUTHEAST1")
project = data.google_project.existing_projects[each.value.project].project_id
force_destroy = lookup(each.value, "force_destroy", false)
storage_class = lookup(each.value, "storage_class", "STANDARD")
labels = merge(
lookup(each.value, "labels", {}),
{
managed_by = "terraform"
}
)
dynamic "versioning" {
for_each = [for version in [lookup(each.value, "versioning", null)] : version if version != null]
content {
enabled = lookup(versioning.value, "enabled", true)
}
}
dynamic "lifecycle_rule" {
for_each = [for rule in [lookup(each.value, "lifecycle_rule", toset([]))] : rule if length(rule) != 0]
content {
action {
type = lifecycle_rule.value.action.type
storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
}
condition {
# matches_suffix = lookup(lifecycle_rule.value["condition"], "matches_suffix", null)
age = lookup(lifecycle_rule.value.condition, "age", null)
}
}
}
uniform_bucket_level_access = lookup(each.value, "uniform_bucket_level_access", false)
depends_on = [
data.google_project.existing_projects
]
}
.tfvars snippet:
storage_buckets = [
  # This 1st bucket is only defined in the DEV tfvars. Reason: this bucket is a one-time creation for all DWH cloud artifacts under the ecx-cicd-tools project.
  {
    name          = "ecx-dwh-artefacts"
    location      = "AUSTRALIA-SOUTHEAST1"
    force_destroy = false
    project       = "ecx-cicd-tools"
    storage_class = "STANDARD"
    versioning = {
      enabled = false
    }
    labels = {
      app     = "alation"
      project = "resetx"
      team    = "dwh"
    }
    uniform_bucket_level_access = false
    folders = [
      "alation/", "alation/packages/", "alation/packages/archive/",
      "alation/backups/", "alation/backups/data/", "alation/backups/data/DEV/", "alation/backups/data/PROD/"
    ]
    lifecycle_rule = [
      {
        action = {
          type = "Delete"
        }
        condition = {
          age = "10"
        }
      },
    ]
  },
  {
    name          = "eclipx-dwh-dev"
    location      = "AUSTRALIA-SOUTHEAST1"
    force_destroy = false
    project       = "eclipx-dwh-dev"
    storage_class = "STANDARD"
    versioning    = {}
    labels = {
      app     = "dataflow"
      project = "resetx"
      team    = "dwh"
    }
    uniform_bucket_level_access = false
    folders = ["Data/", "Data/stagingCustomDataFlow/", "Data/temp/", "Data/templatesCustomDataFlow/"]
    lifecycle_rule = []
  }
]
Somehow I am unable to make the dynamic block work in the bucket provisioning logic for the lifecycle_rule section. I am passing a list of objects from .tfvars, as I need to be able to add many rules to the same bucket.
It looks like the for_each loop is not iterating over the list of objects in lifecycle_rule from .tfvars.
Below are the errors it is throwing. Can someone please assist?
Error: Unsupported attribute
│
│ on storage.tf line 56, in resource "google_storage_bucket" "cloud_storage":
│ 56: type = lifecycle_rule.value.action.type
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "action" for a specific element of the list, or across all elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on storage.tf line 57, in resource "google_storage_bucket" "cloud_storage":
│ 57: storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "action" for a specific element of the list, or across all elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on storage.tf line 61, in resource "google_storage_bucket" "cloud_storage":
│ 61: age = lookup(lifecycle_rule.value.condition, "age", null)
│ ├────────────────
│ │ lifecycle_rule.value is list of object with 1 element
│
│ Can't access attributes on a list of objects. Did you mean to access attribute "condition" for a specific element of the list, or across all elements of the list?
Thank you.
I am expecting the dynamic block to loop over lifecycle_rule.
Your for_each is incorrect: it wraps the lifecycle_rule list in another list, so lifecycle_rule.value ends up being the whole list rather than a single rule. It should be:
dynamic "lifecycle_rule" {
  for_each = length(each.value["lifecycle_rule"]) != 0 ? each.value["lifecycle_rule"] : []
  content {
    action {
      type          = lifecycle_rule.value.action.type
      storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
    }
    condition {
      # matches_suffix = lookup(lifecycle_rule.value["condition"], "matches_suffix", null)
      age = lookup(lifecycle_rule.value.condition, "age", null)
    }
  }
}
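If some buckets may omit the lifecycle_rule attribute entirely, a lookup with a default keeps the block optional; a sketch, assuming the attribute can be absent from some entries:
for_each = lookup(each.value, "lifecycle_rule", [])
An empty list simply produces zero lifecycle_rule blocks, so the length check is not strictly needed.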

Terraform - ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I have Terraform code to SSH into an EC2 instance, and I keep getting the error below. I am able to SSH into the instance from my local machine.
timeout - last error: SSH authentication failed (kali#:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Here is my code:
resource "aws_key_pair" "public_key" {
  key_name   = "public_key”
  public_key = "ssh-rsa xxxxxxxxxxxxx"
}
data "template_file" "user_data" {
  template = file("../kali_linux_aws/payload.sh")
}
resource "aws_default_subnet" "default" {
    availability_zone = var.availability_zone
}
resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC"
  }
}
resource "aws_security_group" "kali_security_group" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_default_vpc.default.id
  ingress {
    description      = "ssh"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
  }
  ingress {
    description      = "rdp"
    from_port        = 3389
    to_port          = 3389
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
  }
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  tags = {
    Name = "kali_security_group"
  }
}
resource "aws_instance" "kali_linux" {
  ami                         = "ami-0f226738ik68873d1"
  instance_type               = var.instance_type
  availability_zone           = var.availability_zone
  associate_public_ip_address = true
  key_name                    = aws_key_pair.public_key.key_name
  user_data                   = data.template_file.user_data.rendered
  subnet_id                   = var.subnet_id == null ? aws_default_subnet.default.id : var.subnet_id
  vpc_security_group_ids      = [aws_security_group.kali_security_group.id]
 
  root_block_device {
    volume_size = var.volume_size
  }
}
resource "null_resource" "provision"{
  connection {
    type = "ssh"
    user = "kali"
    private_key = "${file("/Users/path/to/id_rsa")}"
    host = aws_instance.kali_linux.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update"
    ]
}
}
All I am trying to do is create a Kali Linux EC2 instance on AWS and run some remote-exec commands. Can someone please help? If there are any workarounds, please suggest them as well. Thank you in advance.

Terraform workspaces creation

I am trying to write Terraform code for creating WorkSpaces, and I will be using the same code for future creation as well. I am facing an issue while referencing the bundle_ids, since there are multiple bundles available and the bundle changes according to the requirements each time. Can someone suggest a better approach to this?
resource "aws_workspaces_workspace" "this" {
directory_id = var.directory_id
for_each = var.workspace_user_names
user_name = each.key
bundle_id = [local.bundle_ids["${each.value}"]]
root_volume_encryption_enabled = true
user_volume_encryption_enabled = true
volume_encryption_key = var.volume_encryption_key
workspace_properties {
user_volume_size_gib = 50
root_volume_size_gib = 80
running_mode = "AUTO_STOP"
running_mode_auto_stop_timeout_in_minutes = 60
}
tags = var.tags
}
terraform.tfvars:
directory_id = "d-xxxxxxx"

## Add the WorkSpace username & bundle_id
workspace_user_names = {
  "User1" = "n"
  "User2" = "y"
  "User3" = "k"
}
locals.tf:
locals {
  bundle_ids = {
    "n" = "wsb-nn"
    "y" = "wsb-yy"
    "k" = "wsb-kk"
  }
}
Terraform plan
Error: Incorrect attribute value type
│
│ on r_aws_workspaces.tf line 8, in resource "aws_workspaces_workspace" "this":
│ 8: bundle_id = [local.bundle_ids["${each.value}"]]
│ ├────────────────
│ │ each.value will be known only after apply
│ │ local.bundle_ids is object with 3 attributes
│
│ Inappropriate value for attribute "bundle_id": string required.
At the moment you have a list, but it should be a string. Assuming everything else is correct, the following should address your error:
bundle_id = local.bundle_ids[each.value]
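With the sample terraform.tfvars above, each.value for "User1" is "n", so local.bundle_ids[each.value] resolves to the plain string "wsb-nn", which is the type bundle_id expects; the square brackets in the original turned that string into a single-element list.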

Terraform Multilevel Maps with list throwing error

I am getting the error below when I execute terraform plan. I don't see any error when I use a single volume name in volume_name; I only face this error when I specify multiple volume names, e.g. volume_name = ["test-terraform-0", "test-terraform-1", "test-terraform-2"]. Please help me correct my issue or suggest an alternative idea/solution to achieve my goal.
terraform plan -var-file=customers/nike.tfvars
│ Error: Incorrect attribute value type
│
│ on gcp_compute_disk/gcp_compute_disk.tf line 4, in resource "google_compute_disk" "disk":
│ 4: name = each.value.volume_name
│ ├────────────────
│ │ each.value.volume_name is list of string with 3 elements
│
│ Inappropriate value for attribute "name": string required.
╵
│ Error: Incorrect attribute value type
│
│ on gcp_compute_disk/gcp_compute_disk.tf line 5, in resource "google_compute_disk" "disk":
│ 5: size = each.value.volume_size
│ ├────────────────
│ │ each.value.volume_size is list of number with 3 elements
│
│ Inappropriate value for attribute "size": number required.
╷
│ Error: Incorrect attribute value type
│
│ on gcp_compute_disk/gcp_compute_disk.tf line 6, in resource "google_compute_disk" "disk":
│ 6: type = each.value.volume_type
│ ├────────────────
│ │ each.value.volume_type is list of string with 3 elements
│
│ Inappropriate value for attribute "type": string required.
folder structure
├── gcp_compute_disk
│   ├── gcp_compute_disk.tf
│   └── variables.tf
├── gcp_instance
│   ├── gcp_instance.tf
│   └── variables.tf
├── main.tf
├── customers
│   └── nike.tfvars
└── variables.tf
variables.tf:
variable "instance_config" {
type = map(object({
name = string
image = string
type = string
tags = list(string)
deletion_protection = bool
startup_script = string
hostname = string
volume_name = list(string)
volume_size = list(number)
volume_type = list(string)
}))
default = {
test_vm = {
name = "test_vm"
image = "debian-cloud/debian-9"
type = "n1-standard-4"
tags = ["test_vm"]
deletion_protection = false
startup_script = "start-up.sh"
hostname = "test_vm"
volume_name = ["test-terraform-0", "test-terraform-1", "test-terraform-2"]
volume_size = [50, 50, 50]
volume_type = ["pd-standard", "pd-standard", "pd-standard", ]
}
}
}
.tfvars:
instance_config = {
  testvm1 = {
    name                = "solr"
    image               = "debian-cloud/debian-9"
    type                = "n1-standard-4"
    tags                = ["testvm1"]
    deletion_protection = false
    startup_script      = "../scripts/start-up.sh"
    hostname            = "testvm1.terraform.test"
    volume_name         = ["testvm1-test-terraform-0", "testvm1-test-terraform-1", "testvm1-test-terraform-2"]
    volume_size         = [50, 50, 50]
    volume_type         = ["pd-standard", "pd-standard", "pd-standard"]
  },
  testvm2 = {
    name                = "testvm2"
    image               = "debian-cloud/debian-9"
    type                = "f1-micro"
    tags                = ["testvm2"]
    deletion_protection = false
    startup_script      = "../scripts/start-up.sh"
    hostname            = "testvm2.terraform.test"
    volume_name         = ["testvm2-test-terraform-0", "testvm2-test-terraform-1", "testvm2-test-terraform-2"]
    volume_size         = [50, 50, 50]
    volume_type         = ["pd-standard", "pd-standard", "pd-standard"]
  }
}
gcp_compute_disk.tf:
resource "google_compute_disk" "disk" {
  for_each = var.instance_config
  name     = each.value.volume_name
  size     = each.value.volume_size
  type     = each.value.volume_type
}
gcp_instance.tf:
resource "google_compute_instance" "vm_instance" {
  for_each            = var.instance_config
  name                = each.value.name
  machine_type        = each.value.type
  tags                = each.value.tags
  deletion_protection = each.value.deletion_protection
  hostname            = each.value.hostname

  boot_disk {
    initialize_params {
      image = each.value.image
    }
  }

  metadata_startup_script = file(each.value.startup_script)

  attached_disk {
    source = each.value.volume_name
  }

  network_interface {
    network = var.gcp_network
  }
}
You can check the following:
locals {
  instance_volume_map = merge([for key, val in var.instance_config :
    {
      for idx in range(length(val.volume_size)) :
      "${key}-${idx}" => {
        volume_name = val.volume_name[idx]
        volume_size = val.volume_size[idx]
        volume_type = val.volume_type[idx]
      }
    }
  ]...)
}

resource "google_compute_disk" "disk" {
  for_each = local.instance_volume_map
  name     = each.value.volume_name
  size     = each.value.volume_size
  type     = each.value.volume_type
}

resource "google_compute_instance" "vm_instance" {
  for_each            = var.instance_config
  name                = each.value.name
  machine_type        = each.value.type
  tags                = each.value.tags
  deletion_protection = each.value.deletion_protection
  hostname            = each.value.hostname

  boot_disk {
    initialize_params {
      image = each.value.image
    }
  }

  metadata_startup_script = file(each.value.startup_script)

  dynamic "attached_disk" {
    for_each = { for idx, val in range(length(each.value.volume_name)) : idx => val }
    content {
      source = local.instance_volume_map["${each.key}-${attached_disk.key}"].volume_name
    }
  }

  network_interface {
    network = var.gcp_network
  }
}
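The merge([...]...) expression with the expansion operator flattens the per-instance maps into a single map keyed by "<instance key>-<index>", so with the .tfvars above local.instance_volume_map would look like (sketching only the first entry):
{
  "testvm1-0" = {
    volume_name = "testvm1-test-terraform-0"
    volume_size = 50
    volume_type = "pd-standard"
  }
  # "testvm1-1", "testvm1-2", "testvm2-0", ... follow the same shape
}
Each disk then becomes its own google_compute_disk instance, and the dynamic attached_disk block looks the matching entries back up by that same key convention.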