Accessing values in list for modules in terraform - amazon-web-services

I am trying to refactor some Terraform code while doing an upgrade.
I'm using an S3 module that takes some lifecycle configuration rules:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id      = "id_name"
      enabled = true
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = 7
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
Here is what the resource inside the module looks like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id      = lookup(lifecycle_rule.value, "id", null)
      enabled = lookup(lifecycle_rule.value, "enabled", null)
      abort_incomplete_multipart_upload = lookup(lifecycle_rule.value, "abort_incomplete_multipart_upload", null)
      filter {
        and {
          prefix = lookup(lifecycle_rule.value, "prefix", null)
          tags   = lookup(lifecycle_rule.value, "tags", null)
        }
      }
    }
  }
}
Running plan gives me the following error:
on ../modules/s3/main.tf line 73, in resource "aws_s3_bucket_lifecycle_configuration" "main":
73: id = lookup(lifecycle_rule.id, null)
A managed resource "lifecycle_rule" "id" has not been declared in
module.s3_bucket.
2 questions:
1 - It looks like I'm not reaching the lifecycle_rule.value attribute in the list passed to the module; any help with the syntax?
2 - How do I also access the nested expiration.days value inside the module?
Thanks!

For the first part of your question: you need to use rule and not lifecycle_rule [1]. Make sure you understand this part:
The iterator argument (optional) sets the name of a temporary variable that represents the current element of the complex value. If omitted, the name of the variable defaults to the label of the dynamic block.
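As a minimal sketch of that quote (using the names from the question), the two forms below are equivalent; only the second one would make lifecycle_rule.value valid:

```hcl
# Default: the iterator is named after the block label, "rule".
dynamic "rule" {
  for_each = var.lifecycle_rule
  content {
    id = lookup(rule.value, "id", null)
  }
}

# Explicit iterator: only with this would "lifecycle_rule.value" resolve.
dynamic "rule" {
  for_each = var.lifecycle_rule
  iterator = lifecycle_rule
  content {
    id = lookup(lifecycle_rule.value, "id", null)
  }
}
```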
To complete the answer, accessing expiration.days is possible if you define a corresponding argument in the module. In other words, you need to add expiration block to the module code [2].
There are a couple more issues with the code you currently have:
abort_incomplete_multipart_upload is a configuration block, the same as expiration
If you want objects to expire on a specific date rather than after a number of days, the date must be given in RFC3339 format [3]
The enabled value cannot be a bool; it has to be either Enabled or Disabled (mind the capital first letter), and the argument is named status [4], not enabled
To sum up, here's what the code in the module should look like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id     = lookup(rule.value, "id", null)
      status = lookup(rule.value, "enabled", null)
      abort_incomplete_multipart_upload {
        days_after_initiation = lookup(rule.value, "abort_incomplete_multipart_upload", null)
      }
      filter {
        and {
          prefix = lookup(rule.value, "prefix", null)
          tags   = lookup(rule.value, "tags", null)
        }
      }
      expiration {
        date = lookup(rule.value.expiration, "days", null)
      }
    }
  }
}
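One gap worth noting: the input also defines noncurrent_version_expiration, which the resource above never consumes. If you want it applied, a corresponding block could be added inside the rule content; this is a sketch following the same lookup pattern, and it assumes noncurrent_version_expiration is always present in the input, as in the example:

```hcl
# Inside the dynamic "rule" content block:
noncurrent_version_expiration {
  # noncurrent_days is the argument name in aws_s3_bucket_lifecycle_configuration
  noncurrent_days = lookup(rule.value.noncurrent_version_expiration, "days", null)
}
```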
The module should be called with the following variable values:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id      = "id_name"
      enabled = "Enabled" # Mind the value
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = "2022-08-28T15:04:05Z" # RFC3339 format
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
[1] https://www.terraform.io/language/expressions/dynamic-blocks
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#expiration
[3] https://www.rfc-editor.org/rfc/rfc3339#section-5.8
[4] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#status

Terraform AWS: SQS destination for Lambda doesn't get added

I have a working AWS project that I'm trying to implement in Terraform.
One of the steps requires a lambda function to query athena and return results to SQS (I am using this module for lambda instead of the original resource). Here is the code:
data "archive_file" "go_package" {
type = "zip"
source_file = "./report_to_SQS_go/main"
output_path = "./report_to_SQS_go/main.zip"
}
resource "aws_sqs_queue" "emails_queue" {
name = "sendEmails_tf"
}
module "lambda_report_to_sqs" {
source = "terraform-aws-modules/lambda/aws"
function_name = "report_to_SQS_Go_tf"
handler = "main"
runtime = "go1.x"
create_package = false
local_existing_package = "./report_to_SQS_go/main.zip"
attach_policy_json = true
policy_json = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect : "Allow"
Action : [
"dynamodb:*",
"lambda:*",
"logs:*",
"athena:*",
"cloudwatch:*",
"s3:*",
"sqs:*"
]
Resource : ["*"]
}
]
})
destination_on_success = aws_sqs_queue.emails_queue.arn
timeout = 200
memory_size = 1024
}
The code works fine and produces the desired output; however, SQS doesn't show up as a destination (although the queue shows up in SQS normally and can send/receive messages).
I don't think permissions are the problem because I can add SQS destinations manually from the console successfully.
The destination_on_success variable is only used if you also set create_async_event_config to true. Below is an extract from https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master
variables.tf
############################
# Lambda Async Event Config
############################
variable "create_async_event_config" {
description = "Controls whether async event configuration for Lambda Function/Alias should be created"
type = bool
default = false
}
variable "create_current_version_async_event_config" {
description = "Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources)"
type = bool
default = true
}
.....
variable "destination_on_failure" {
description = "Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations"
type = string
default = null
}
variable "destination_on_success" {
description = "Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations"
type = string
default = null
}
main.tf
resource "aws_lambda_function_event_invoke_config" "this" {
for_each = { for k, v in local.qualifiers : k => v if v != null && local.create && var.create_function && !var.create_layer && var.create_async_event_config }
function_name = aws_lambda_function.this[0].function_name
qualifier = each.key == "current_version" ? aws_lambda_function.this[0].version : null
maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
maximum_retry_attempts = var.maximum_retry_attempts
dynamic "destination_config" {
for_each = var.destination_on_failure != null || var.destination_on_success != null ? [true] : []
content {
dynamic "on_failure" {
for_each = var.destination_on_failure != null ? [true] : []
content {
destination = var.destination_on_failure
}
}
dynamic "on_success" {
for_each = var.destination_on_success != null ? [true] : []
content {
destination = var.destination_on_success
}
}
}
}
}
So destination_on_success is only used in this resource, and this resource is only created if several conditions are met, the key one being that var.create_async_event_config must be true.
You can see the example for this here https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/be6cf9701071bf807cd7864fbcc751ed2552e434/examples/async/main.tf
module "lambda_function" {
source = "../../"
function_name = "${random_pet.this.id}-lambda-async"
handler = "index.lambda_handler"
runtime = "python3.8"
architectures = ["arm64"]
source_path = "${path.module}/../fixtures/python3.8-app1"
create_async_event_config = true
attach_async_event_policy = true
maximum_event_age_in_seconds = 100
maximum_retry_attempts = 1
destination_on_failure = aws_sns_topic.async.arn
destination_on_success = aws_sqs_queue.async.arn
}
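Applied to the module call from the question, that means adding the async-config flag (and, optionally, the policy that lets the function publish to the destination) alongside the existing destination. A sketch; only the first two arguments below are new, the rest abbreviates the original call:

```hcl
module "lambda_report_to_sqs" {
  source        = "terraform-aws-modules/lambda/aws"
  function_name = "report_to_SQS_Go_tf"
  # ... existing arguments unchanged ...

  create_async_event_config = true  # required for destination_on_success to take effect
  attach_async_event_policy = true  # grants permission to send to the destination
  destination_on_success    = aws_sqs_queue.emails_queue.arn
}
```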

Updated stages variable which is working

UPDATE
I got the variable working, and it now passes terraform plan with flying colors. That said, when I run terraform apply I get a new error:
creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2
validation errors detected: Value at
'pipeline.stages.1.member.actions.1.member.configuration' failed to
satisfy constraint: Map value must satisfy constraint: [Member must
have length less than or equal to 50000, Member must have length
greater than or equal to 1]; Value at
'pipeline.stages.2.member.actions.1.member.configuration' failed to
satisfy constraint: Map value must satisfy constraint: [Member must
have length less than or equal to 50000, Member must have a length
greater than or equal to 1]
I don't believe this is a CodePipeline limit, since I have built this pipeline manually without dynamic stages and it works fine. Not sure if this is a Terraform hard limit. Looking for some help here. Also, I have updated the code with the working variable for those looking for the syntax.
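One plausible source of that ValidationException (an assumption, since the failing configuration isn't shown): the lookup(..., null) defaults leave null entries in the configuration map, and nulls can be serialized as empty strings, violating the "length greater than or equal to 1" constraint. A common workaround is to strip null values with a for expression, using the same keys as the question's code:

```hcl
# Keep only the configuration keys that actually have a value.
configuration = { for k, v in {
  RepositoryName       = lookup(action.value, "repository_name", null)
  ProjectName          = lookup(action.value, "ProjectName", null)
  BranchName           = lookup(action.value, "branch_name", null)
  PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
  OutputArtifactFormat = lookup(action.value, "ouput_format", null)
} : k => v if v != null }
```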
OLD POST
================================================================
I am taking my first stab at creating dynamic stages and really struggling with the documentation out there. What I have put together so far is based on articles found here on Stack Overflow and a few resources online. I think the syntax is sound, but the value I am passing in from my main.tf produces this error:
The given value is not suitable for module.test_code.var.stages declared at ../dynamic_pipeline/variables.tf:60,1-18: all list elements must have the same type.
Part 1
All I am trying to do basically is pass in dynamic stages into the pipeline. Once I get the stages working, I will add the new dynamic variables. I am providing the dynamic module, variables.tf for the module, and then my test run along with variables.
dynamic_pipeline.tf
resource "aws_codepipeline" "cp_plan_pipeline" {
name = "${local.cp_name}-cp"
role_arn = var.cp_run_role
artifact_store {
type = var.cp_artifact_type
location = var.cp_artifact_bucketname
}
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = action.value["name"]
owner = action.value["owner"]
version = action.value["version"]
category = action.value["category"]
provider = action.value["provider"]
run_order = lookup(action.value, "run_order", null)
namespace = lookup(action.value, "namespace", null)
region = lookup(action.value, "region", data.aws_region.current.name)
input_artifacts = lookup(action.value, "input_artifacts", [])
output_artifacts = lookup(action.value, "output_artifacts", [])
configuration = {
RepositoryName = lookup(action.value, "repository_name", null)
ProjectName = lookup(action.value, "ProjectName", null)
BranchName = lookup(action.value, "branch_name", null)
PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
OutputArtifactFormat = lookup(action.value, "ouput_format", null)
}
}
}
}
}
}
variables.tf
#---------------------------------------------------------------------------------------------------
# General
#---------------------------------------------------------------------------------------------------
variable "region" {
type = string
description = "The AWS Region to be used when deploying region-specific resources (Default: us-east-1)"
default = "us-east-1"
}
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_name" {
type = string
description = "The name of the codepipline"
}
variable "cp_repo_name" {
type = string
description = "Then name of the repo that will be used as a source repo to trigger builds"
}
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_artifact_bucketname" {
type = string
description = "name of the artifact bucket where articacts are stored."
default = "Codepipeline-artifacts-s3"
}
variable "cp_run_role" {
type = string
description = "S3 artifact bucket name."
}
variable "cp_artifact_type" {
type = string
description = ""
default = "S3"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs"
default = "CODE_ZIP"
}
variable "stages" {
type = list(object({
name = string
action = list(object({
name = string
owner = string
version = string
category = string
provider = string
run_order = number
namespace = string
region = string
input_artifacts = list(string)
output_artifacts = list(string)
repository_name = string
ProjectName = string
branch_name = string
poll_for_sourcechanges = bool
output_format = string
}))
}))
description = "This list describes each stage of the build"
}
#---------------------------------------------------------------------------------------------------
# ENVIORNMENT VARIABLES
#---------------------------------------------------------------------------------------------------
variable "env" {
type = string
description = "The environment to deploy resources (dev | test | prod | sbx)"
default = "dev"
}
variable "tenant" {
type = string
description = "The Service Tenant in which the IaC is being deployed to"
default = "dummytenant"
}
variable "project" {
type = string
description = "The Project Name or Acronym. (Note: You should consider setting this in your Enviornment Variables.)"
}
#---------------------------------------------------------------------------------------------------
# Parameter Store Variables
#---------------------------------------------------------------------------------------------------
variable "bucketlocation" {
type = string
description = "location within the S3 bucket where the State file resides"
}
Part 2
That is the main makeup of the pipeline. Below is the test module call I created to make sure it works; this is where I am getting the error.
main.tf
module test_code {
source = "../dynamic_pipeline"
cp_name = "dynamic-actions"
project = "my_project"
bucketlocation = var.backend_bucket_target_name
cp_run_role = "arn:aws:iam::xxxxxxxxx:role/cp-deploy-service-role"
cp_repo_name = var.repo
stages = [{
name = "part 1"
action = [{
name = "Source"
owner = "AWS"
version = "1"
category = "Source"
provider = "CodeCommit"
run_order = 1
repository_name = "my_target_repo"
branch_name = "main"
poll_for_sourcechanges = true
output_artifacts = ["CodeWorkspace"]
ouput_format = var.cp_ouput_format
}]
},
{
name = "part 2"
action = [{
run_order = 1
name = "Combine_Binaries"
owner = "AWS"
version = "1"
category = "Build"
provider = "CodeBuild"
namespace = "BIN"
input_artifacts = ["CodeWorkspace"]
output_artifacts = ["CodeSource"]
ProjectName = "test_runner"
}]
}]
}
variables files associated with the run book:
variables.tf
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs. Values can be CODEBUILD_CLONE_REF or CODE_ZIP"
default = "CODE_ZIP"
}
variable "backend_bucket_target_name" {
type = string
description = "The folder name where the state file is stored for the pipeline"
default = "dynamic-test-pl"
}
variable "repo" {
type = string
description = "name of the repo the pipeine is managing"
default = "my_target_repo"
}
I know this is my first attempt at this, and I'm not very good with lists and maps in Terraform, but I am certain it has to do with the way I am passing the value in. Any help or guidance would be appreciated.
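About the "all list elements must have the same type" error specifically: every object in a list(object(...)) must carry the same set of attributes, and the two stage objects above omit different ones. On Terraform 1.3+ one common way to make such input type-check, independent of any other restructuring, is to mark the sometimes-absent attributes with optional() in the variable declaration. A sketch using the attribute names from the question:

```hcl
variable "stages" {
  type = list(object({
    name = string
    action = list(object({
      name                   = string
      owner                  = string
      version                = string
      category               = string
      provider               = string
      run_order              = optional(number)
      namespace              = optional(string)
      region                 = optional(string)
      input_artifacts        = optional(list(string), [])
      output_artifacts       = optional(list(string), [])
      repository_name        = optional(string)
      ProjectName            = optional(string)
      branch_name            = optional(string)
      poll_for_sourcechanges = optional(bool)
      output_format          = optional(string)
    }))
  }))
  description = "This list describes each stage of the build"
}
```

With optional(), Terraform fills the missing attributes with null (or the given default), so heterogeneous stage objects unify to one type.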
After some time, I finally found the answer to this issue. Special thanks to this thread on GitHub, which put me in the right direction. A couple of takeaways: variable declaration is the essential part of a dynamic pipeline. I worked through several examples that yielded great results for stages and actions, but they all crashed once the configuration environment variables came in. The root problem, as far as I could conclude, is that you cannot build dynamic actions with environment variables and expect Terraform to perform the JSON translation for you. In some cases it would work, but it required every action to contain similar elements, which led to the character-constraint errors my post called out.
My best guess is that Terraform has hard limits on variables and their lengths. The solution: declare the whole resource dynamically, which seems to tolerate different limits than traditional variables inside a resource. This approach makes the entire Terraform resource a dynamic attribute, which I suspect Terraform treats differently, with fewer limits (an assumption). I say that because I tried four methods of dynamic stages and actions; those methods worked until I introduced the environment variables (which force a JSON conversion on a specific resource type), after which I would get various errors, all pointing at either an unsupported or missing attribute, or a variable exceeding Terraform's character limits.
What worked was creating the entire resource as a dynamic resource that I could pass in as a map attribute which includes the EnvironmentVariables. See the examples below.
Final Dynamic Pipeline
resource "aws_codepipeline" "codepipeline" {
for_each = var.code_pipeline
name = "${local.name_prefix}-${var.AppName}"
role_arn = each.value["code_pipeline_role_arn"]
tags = {
Pipeline_Key = each.key
}
artifact_store {
type = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
}
dynamic "stage" {
for_each = lookup(each.value, "stages", {})
iterator = stage
content {
name = lookup(stage.value, "name")
dynamic "action" {
for_each = lookup(stage.value, "actions", {}) //[stage.key]
iterator = action
content {
name = action.value["name"]
category = action.value["category"]
owner = action.value["owner"]
provider = action.value["provider"]
version = action.value["version"]
run_order = action.value["run_order"]
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = action.value["configuration"]
namespace = lookup(action.value, "namespace", null)
}
}
}
}
}
Calling Dynamic Pipeline
module "code_pipeline" {
source = "../module-aws-codepipeline" #using module locally
#source = "your-github-repository/aws-codepipeline" #using github repository
AppName = "My_new_pipeline"
code_pipeline = local.code_pipeline
}
Sample local pipeline variable
locals {
/*
Declare environment variables. Note: not every action requires environment variables.
*/
action_second_stage_variables = [
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "NamespaceVariable"
type = "PLAINTEXT"
value = "some_value"
},
]
action_third_stage_variables = [
{
name = "PL_VARIABLE_1"
type = "PLAINTEXT"
value = "VALUE1"
},
{
name = "PL_VARIABLE 2"
type = "PLAINTEXT"
value = "VALUE2"
},
{
name = "PL_VARIABLE_3"
type = "PLAINTEXT"
value = "VAUE3"
},
{
name = "PL_VARIABLE_4"
type = "PLAINTEXT"
value = "#{BLD.NamespaceVariable}"
},
]
/*
BUILD YOUR STAGES
*/
code_pipeline = {
codepipeline-configs = {
code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"
artifact_store = {
type = "S3"
artifact_bucket = "your-aws-bucket-name"
}
stages = {
stage_1 = {
name = "Download"
actions = {
action_1 = {
run_order = 1
category = "Source"
name = "First_Stage"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["download_ouput"]
configuration = {
RepositoryName = "Codecommit_target_repo"
BranchName = "main"
PollForSourceChanges = true
OutputArtifactFormat = "CODE_ZIP"
}
}
}
}
stage_2 = {
name = "Build"
actions = {
action_1 = {
run_order = 2
category = "Build"
name = "Second_Stage"
owner = "AWS"
provider = "CodeBuild"
version = "1"
namespace = "BLD"
input_artifacts = ["Download_ouput"]
output_artifacts = ["build_outputs"]
configuration = {
ProjectName = "codebuild_project_name_for_second_stage"
EnvironmentVariables = jsonencode(local.action_second_stage_variables)
}
}
}
}
stage_3 = {
name = "Validation"
actions = {
action_1 = {
run_order = 1
name = "Third_Stage"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["build_outputs"]
output_artifacts = ["validation_outputs"]
configuration = {
ProjectName = "codebuild_project_name_for_third_stage"
EnvironmentVariables = jsonencode(local.action_third_stage_variables)
}
}
}
}
}
}
}
}
The trick is to build your CodePipeline resource, with its stages, actions, and EnvironmentVariables, at the locals level. In your locals.tf you build out the pipeline variable: all your stages, actions, and EnvironmentVariables. The EnvironmentVariables are jsonencode()d and passed in as part of the configuration map, so the whole pipeline arrives as a single variable. A sample explaining this approach can be found within this GitHub repository. I took the findings, consolidated them, and documented them so others could leverage this method.

Create waf rule if environment is nonprod Terraform

I'm trying to create an IP whitelist in nonprod for load testing. The WAF is created in both prod and nonprod, keyed off envname/envtype:
resource "aws_waf_ipset" "pwa_cloudfront_ip_restricted" {
name = "${var.envname}-pwa-cloudfront-whitelist"
dynamic "ip_set_descriptors" {
for_each = var.cloudfront_ip_restricted_waf_cidr_whitelist
content {
type = ip_set_descriptors.value.type
value = ip_set_descriptors.value.value
}
}
}
resource "aws_waf_rule" "pwa_cloudfront_ip_restricted" {
depends_on = [aws_waf_ipset.pwa_cloudfront_ip_restricted]
name = "${var.envname}-pwa-cloudfront-whitelist"
metric_name = "${var.envname}PWACloudfrontWhitelist"
predicates {
data_id = aws_waf_ipset.pwa_cloudfront_ip_restricted.id
negated = false
type = "IPMatch"
}
}
resource "aws_waf_ipset" "pwa_cloudfront_ip_restricted_load_testing" {
name = "${var.envname}-pwa-cloudfront-whitelist_load_testing"
count = var.envtype == "nonprod" ? 1 : 0
dynamic "ip_set_descriptors" {
for_each = var.cloudfront_ip_restricted_waf_cidr_whitelist_load_testing
content {
type = ip_set_descriptors.value.type
value = ip_set_descriptors.value.value
}
}
}
resource "aws_waf_rule" "pwa_cloudfront_ip_restricted_load_testing" {
depends_on = [aws_waf_ipset.pwa_cloudfront_ip_restricted_load_testing]
count = var.envtype == "nonprod" ? 1 : 0
name = "${var.envname}-pwa-cloudfront-whitelist-load_testing"
metric_name = "${var.envname}PWACloudfrontWhitelistload_testing"
predicates {
data_id = aws_waf_ipset.pwa_cloudfront_ip_restricted_load_testing[count.index].id
negated = false
type = "IPMatch"
}
}
resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted" {
name = "${var.envname}-pwa-cloudfront-whitelist"
metric_name = "${var.envname}PWACloudfrontWhitelist"
default_action {
type = "BLOCK"
}
rules {
action {
type = "ALLOW"
}
priority = 1
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted.id
type = "REGULAR"
}
rules {
action {
type = "ALLOW"
}
priority = 2
rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing.id
type = "REGULAR"
}
}
The second rules block throws an error during terraform plan:
Error: Missing resource instance key
on waf.tf line 73, in resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted":
73: rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing.id
Because aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing has "count" set,
its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[count.index]
However if I add [count.index] :
Error: Reference to "count" in non-counted context
on waf.tf line 73, in resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted":
73: rule_id = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[count.index].id
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
Is there a way to do this that doesn't use the count param? Or am I missing something in the way that I am using it?
Since there is a difference between the prod and non-prod environments, this should be tackled with a dynamic block [1] whose for_each [2] is empty outside of nonprod:
resource "aws_waf_web_acl" "pwa_cloudfront_ip_restricted" {
  name        = "${var.envname}-pwa-cloudfront-whitelist"
  metric_name = "${var.envname}PWACloudfrontWhitelist"
  default_action {
    type = "BLOCK"
  }
  # This rule exists in every environment, so it stays a static block.
  # Its resource has no count, so it must not be indexed with [0].
  rules {
    action {
      type = "ALLOW"
    }
    priority = 1
    rule_id  = aws_waf_rule.pwa_cloudfront_ip_restricted.id
    type     = "REGULAR"
  }
  # The load-testing rule only exists when envtype is nonprod.
  dynamic "rules" {
    for_each = var.envtype == "nonprod" ? [1] : []
    content {
      action {
        type = "ALLOW"
      }
      priority = 2
      rule_id  = aws_waf_rule.pwa_cloudfront_ip_restricted_load_testing[0].id
      type     = "REGULAR"
    }
  }
}
[1] https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each

for_each value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created

I am trying to create Route 53 records using the module concept, but I am getting the error below:
"The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created."
for_each = var.create-route53 ? local.recordsets : tomap({})
local.recordsets will be known only after apply; var.create-route53 is true.
Can someone guide me on this?
actual code:
module "route53" {
...
...
records = [
{
name = "test-name"
type = "A"
ttl = 300
records = [for instance in module.ec2: instance.ec2-IP
]
},
]
vpc_id = "${module.vpc.vpc_id}"
}
Inside the modules folder, the route53 folder contains the following code:
locals {
records = try(jsondecode(var.records), var.records)
recordsets = {
for rs in local.records :
join(" ", compact(["${rs.name} ${rs.type}", lookup(rs, "set_identifier", "")])) => merge(rs, {
records = jsonencode(try(rs.records, null))
})
}
}
resource "aws_route53_record" "this" {
for_each = var.create-route53 ? local.recordsets : tomap({})
zone_id = aws_route53_zone.private.zone_id
name = each.value.name != "" ? "${each.value.name}" : "test"
type = each.value.type
ttl = lookup(each.value, "ttl", null)
records = jsondecode(each.value.records)
set_identifier = lookup(each.value, "set_identifier", null)
health_check_id = lookup(each.value, "health_check_id", null)
multivalue_answer_routing_policy = lookup(each.value, "multivalue_answer_routing_policy", null)
allow_overwrite = lookup(each.value, "allow_overwrite", false)
}
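The usual remedy for this error is to make sure every key of the for_each map is known at plan time. A hedged sketch (names taken from the question): keep the unknown values, such as the instance IPs from module.ec2, strictly in the map values, and avoid passing the whole structure through functions like try(jsondecode(...)) that can make the entire local unknown:

```hcl
locals {
  # Keys are built only from literal name/type strings, so they are
  # known at plan time; only the values reference module.ec2 outputs.
  recordsets = {
    "test-name A" = {
      name    = "test-name"
      type    = "A"
      ttl     = 300
      records = [for instance in module.ec2 : instance.ec2-IP]
    }
  }
}
```

If that restructuring is not possible, a two-step apply (terraform apply -target=module.ec2 first, then a full apply) is the documented workaround.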

Terraform Unsupported block type error for "condition_monitoring_query_language"

I am trying to deploy an alert policy in Terraform but came across an error saying that this block is unsupported. I find this confusing because I have used another block called condition_absent and the policy works fine. Here is the link to the policy I am trying to create: google_monitoring_alert_policy
Error: Unsupported block type
on terraform/modules/google-monitoring-mql-alert-policy/main.tf line 29, in resource "google_monitoring_alert_policy" "default":
29: condition_monitoring_query_language {
Blocks of type "condition_monitoring_query_language" are not expected here.
This is the code so far. It was a simple change from condition_absent to condition_monitoring_query_language:
resource "google_monitoring_alert_policy" "default" {
depends_on = [
null_resource.is_ready
]
display_name = each.key
project = var.gcp_project
enabled = lookup(each.value, "enabled", true)
combiner = lookup(each.value, "combiner", "OR")
notification_channels = lookup(each.value, "notification_channels", null) == null ? null : matchkeys(values(var.notificationlist), keys(var.notificationlist), lookup(each.value, "notification_channels", []))
dynamic "conditions" {
for_each = each.value["conditions"]
content {
display_name = conditions.key
condition_monitoring_query_language {
query = lookup(conditions.value, "query", null)
duration = lookup(conditions.value, "duration", null)
dynamic "trigger" {
for_each = lookup(conditions.value, "trigger", [])
content {
count = lookup(trigger.value, "count", null)
percent = lookup(trigger.value, "percent", null)
}
}
}
}
}
dynamic "documentation" {
for_each = lookup(each.value, "documentation", [])
content {
content = lookup(documentation.value, "documentation_content", null)
mime_type = lookup(documentation.value, "documentation_mime_type", null)
}
}
}
What should I do to successfully run my "terraform plan" ? Thank you in advance
Monitoring Query Language-based alerting was added in v3.46.0 of the google provider. The error message suggests that you are using an older version, so you have to upgrade your GCP provider.
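If you manage provider versions explicitly, a minimal constraint (Terraform 0.13+ required_providers syntax) would look like this:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.46.0" # first release with condition_monitoring_query_language
    }
  }
}
```

followed by terraform init -upgrade to pull the newer provider.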