I use the module https://github.com/cloudposse/terraform-aws-elasticache-redis to provision ElastiCache Redis. Below are the errors I get when I run terraform apply; I have no idea what is causing them.
Terraform version: v0.13.5
module.redis.aws_elasticache_parameter_group.default[0]: Creating...
module.redis.aws_elasticache_subnet_group.default[0]: Creating...

Error: Error creating CacheSubnetGroup: InvalidParameterValue: The parameter CacheSubnetGroupName must be provided and must not be blank.
  status code: 400, request id: a1ab57b1-fa23-491c-aa7b-a2d3804014c9

Error: Error creating Cache Parameter Group: InvalidParameterValue: The parameter CacheParameterGroupName must be provided and must not be blank.
  status code: 400, request id: 9abc80b6-bd3b-46fd-8b9e-9bf14d1913eb
redis.tf:
module "redis" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.25.0"
availability_zones = var.azs
vpc_id = module.vpc.vpc_id
allowed_security_groups = [data.aws_security_group.default.id]
subnets = module.vpc.private_subnets
cluster_size = var.redis_cluster_size #number_cache_clusters
instance_type = var.redis_instance_type
apply_immediately = true
automatic_failover_enabled = false
engine_version = var.engine_version
family = var.family
replication_group_id = var.replication_group_id
elasticache_subnet_group_name = var.elasticache_subnet_group_name
#enabled = true
enabled = var.enabled
#at-rest encryption is to increase data security by encrypting on-disk data.
at_rest_encryption_enabled = var.at_rest_encryption_enabled
#in-transit encryption protects data when it is moving from one location to another.
transit_encryption_enabled = var.transit_encryption_enabled
cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
parameter = [
{
#Keyspace notifications send events for every operation affecting the Redis data space.
name = "notify-keyspace-events"
value = "lK"
}
]
context = module.this.context
}
context.tf:
module "this" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
variable "context" {
type = object({
enabled = bool
namespace = string
environment = string
stage = string
name = string
delimiter = string
attributes = list(string)
tags = map(string)
additional_tag_map = map(string)
regex_replace_chars = string
label_order = list(string)
id_length_limit = number
})
default = {
enabled = true
namespace = null
environment = null
stage = null
name = null
delimiter = null
attributes = []
tags = {}
additional_tag_map = {}
regex_replace_chars = null
label_order = []
id_length_limit = null
}
description = <<-EOT
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
EOT
}
variable "enabled" {
type = bool
default = true
description = "Set to false to prevent the module from creating any resources"
}
variable "namespace" {
type = string
default = null
description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}
variable "environment" {
type = string
default = null
description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}
variable "stage" {
type = string
default = null
description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}
variable "name" {
type = string
default = null
description = "Solution name, e.g. 'app' or 'jenkins'"
}
variable "delimiter" {
type = string
default = null
description = <<-EOT
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
EOT
}
variable "attributes" {
type = list(string)
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = map(string)
default = {}
description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}
variable "additional_tag_map" {
type = map(string)
default = {}
description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}
variable "label_order" {
type = list(string)
default = null
description = <<-EOT
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
EOT
}
variable "regex_replace_chars" {
type = string
default = null
description = <<-EOT
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
EOT
}
variable "id_length_limit" {
type = number
default = null
description = <<-EOT
Limit `id` to this many characters.
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
EOT
}
Open up context.tf and set the variable "enabled" to true if you want the module to create resources for you, including the subnet group.
Otherwise, you have to create all prerequisite resources yourself, which includes the elasticache_subnet_group_name.
I found the problem. The default values of namespace/environment/stage/name in context.tf are null, so the label module generates an empty id and the redis module ends up with blank resource names. Setting these values solved the problem.
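For example, in your .tfvars (the values below are placeholders):

namespace   = "eg"
environment = "use1"
stage       = "dev"
name        = "redis"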
You need to provide the module inputs, which are documented at the link you provided. For instance, for this error:
Error: Error creating CacheSubnetGroup: InvalidParameterValue: The parameter CacheSubnetGroupName must be provided and must not be blank. status code: 400, request id: a1ab57b1-fa23-491c-aa7b-a2d3804014c9
you need to set elasticache_subnet_group_name, and so on:
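A sketch of what that input looks like (the group name below is a placeholder):

module "redis" {
  # ... other inputs as above ...
  elasticache_subnet_group_name = "my-redis-subnet-group"
}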
UPDATE
I got the variable working, and it passes terraform plan with flying colors. That said, when I run terraform apply I get a new error:
creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2 validation errors detected:
Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have length greater than or equal to 1];
Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have length greater than or equal to 1]
I don't believe this is a CodePipeline limit, since I have built this pipeline manually without dynamic stages and it works fine. I am not sure if this is a Terraform hard limit. Looking for some help here. I have also updated the code below with the working variable for those looking for the syntax.
OLD POST
================================================================
I am giving my first stab at creating dynamic stages and really struggling with the documentation out there. What I have put together so far is based on articles found here on Stack Overflow and a few resources online. So far I think I have good syntax, but the value I am passing in from my main.tf is getting an error:
Error: The given value is not suitable for module.test_code.var.stages declared at ../dynamic_pipeline/variables.tf:60,1-18: all list elements must have the same type.
Part 1
All I am trying to do is pass dynamic stages into the pipeline. Once I get the stages working, I will add the new dynamic variables. I am providing the dynamic module, the variables.tf for the module, and then my test run along with its variables.
dynamic_pipeline.tf
resource "aws_codepipeline" "cp_plan_pipeline" {
name = "${local.cp_name}-cp"
role_arn = var.cp_run_role
artifact_store {
type = var.cp_artifact_type
location = var.cp_artifact_bucketname
}
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = action.value["name"]
owner = action.value["owner"]
version = action.value["version"]
category = action.value["category"]
provider = action.value["provider"]
run_order = lookup(action.value, "run_order", null)
namespace = lookup(action.value, "namespace", null)
region = lookup(action.value, "region", data.aws_region.current.name)
input_artifacts = lookup(action.value, "input_artifacts", [])
output_artifacts = lookup(action.value, "output_artifacts", [])
configuration = {
RepositoryName = lookup(action.value, "repository_name", null)
ProjectName = lookup(action.value, "ProjectName", null)
BranchName = lookup(action.value, "branch_name", null)
PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
OutputArtifactFormat = lookup(action.value, "ouput_format", null)
}
}
}
}
}
}
variables.tf
#---------------------------------------------------------------------------------------------------
# General
#---------------------------------------------------------------------------------------------------
variable "region" {
  type        = string
  description = "The AWS Region to be used when deploying region-specific resources (Default: us-east-1)"
  default     = "us-east-1"
}

#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_name" {
  type        = string
  description = "The name of the CodePipeline"
}

variable "cp_repo_name" {
  type        = string
  description = "The name of the repo that will be used as a source repo to trigger builds"
}

variable "cp_branch_name" {
  type        = string
  description = "The branch of the repo that will be watched and used to trigger deployment"
  default     = "development"
}

variable "cp_artifact_bucketname" {
  type        = string
  description = "Name of the artifact bucket where artifacts are stored."
  default     = "Codepipeline-artifacts-s3"
}

variable "cp_run_role" {
  type        = string
  description = "The ARN of the IAM role that CodePipeline runs as."
}

variable "cp_artifact_type" {
  type        = string
  description = ""
  default     = "S3"
}

variable "cp_poll_sources" {
  description = "Trigger that lets CodePipeline know that it needs to trigger a build on change"
  type        = bool
  default     = false
}

variable "cp_ouput_format" {
  type        = string
  description = "Output artifact format that is used to save the outputs"
  default     = "CODE_ZIP"
}

variable "stages" {
  type = list(object({
    name = string
    action = list(object({
      name                   = string
      owner                  = string
      version                = string
      category               = string
      provider               = string
      run_order              = number
      namespace              = string
      region                 = string
      input_artifacts        = list(string)
      output_artifacts       = list(string)
      repository_name        = string
      ProjectName            = string
      branch_name            = string
      poll_for_sourcechanges = bool
      output_format          = string
    }))
  }))
  description = "This list describes each stage of the build"
}

#---------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
#---------------------------------------------------------------------------------------------------
variable "env" {
  type        = string
  description = "The environment to deploy resources (dev | test | prod | sbx)"
  default     = "dev"
}

variable "tenant" {
  type        = string
  description = "The Service Tenant in which the IaC is being deployed"
  default     = "dummytenant"
}

variable "project" {
  type        = string
  description = "The Project Name or Acronym. (Note: You should consider setting this in your Environment Variables.)"
}

#---------------------------------------------------------------------------------------------------
# Parameter Store Variables
#---------------------------------------------------------------------------------------------------
variable "bucketlocation" {
  type        = string
  description = "Location within the S3 bucket where the state file resides"
}
Part 2
That is the main makeup of the pipeline. Below is the test module I created to make sure it works. This is where I am getting the error.
main.tf
module "test_code" {
  source = "../dynamic_pipeline"

  cp_name        = "dynamic-actions"
  project        = "my_project"
  bucketlocation = var.backend_bucket_target_name
  cp_run_role    = "arn:aws:iam::xxxxxxxxx:role/cp-deploy-service-role"
  cp_repo_name   = var.repo

  stages = [{
    name = "part 1"
    action = [{
      name                   = "Source"
      owner                  = "AWS"
      version                = "1"
      category               = "Source"
      provider               = "CodeCommit"
      run_order              = 1
      repository_name        = "my_target_repo"
      branch_name            = "main"
      poll_for_sourcechanges = true
      output_artifacts       = ["CodeWorkspace"]
      ouput_format           = var.cp_ouput_format
    }]
    },
    {
      name = "part 2"
      action = [{
        run_order        = 1
        name             = "Combine_Binaries"
        owner            = "AWS"
        version          = "1"
        category         = "Build"
        provider         = "CodeBuild"
        namespace        = "BIN"
        input_artifacts  = ["CodeWorkspace"]
        output_artifacts = ["CodeSource"]
        ProjectName      = "test_runner"
      }]
  }]
}
The variables file associated with the run book:
variables.tf
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_branch_name" {
  type        = string
  description = "The branch of the repo that will be watched and used to trigger deployment"
  default     = "development"
}

variable "cp_poll_sources" {
  description = "Trigger that lets CodePipeline know that it needs to trigger a build on change"
  type        = bool
  default     = false
}

variable "cp_ouput_format" {
  type        = string
  description = "Output artifact format that is used to save the outputs. Values can be CODEBUILD_CLONE_REF or CODE_ZIP"
  default     = "CODE_ZIP"
}

variable "backend_bucket_target_name" {
  type        = string
  description = "The folder name where the state file is stored for the pipeline"
  default     = "dynamic-test-pl"
}

variable "repo" {
  type        = string
  description = "Name of the repo the pipeline is managing"
  default     = "my_target_repo"
}
I know this is my first time trying this, and I am not very good with lists and maps in Terraform, but I am certain it has to do with the way I am passing the value in. Any help or guidance would be appreciated.
After some time, I finally found the answer to this issue. Special thanks to this thread on GitHub; it put me in the right direction. A couple of things to take away from this: variable declaration is the essential part of a dynamic pipeline. I worked with several examples that yielded great results for stages and actions, but when it came to the configuration environment variables, they all crashed. The root problem, I concluded, was that you cannot perform dynamic actions with environment variables and hope for Terraform to perform the JSON translation for you. In some cases it would work, but it required that every action contain similar elements, which led to character constraints and errors like the one my post called out.
My best guess is that Terraform has a hard limit on variables and their character limits. The solution: declare the whole resource dynamically, which seems to be subject to different limits than traditional variables within a resource. The approach taken here makes the entire Terraform resource a dynamic attribute, which I believe Terraform treats differently in its entirety, with fewer limits (an assumption). I say that because I tried four methods of dynamic stages and actions. Those methods worked up until I introduced the environment variables (which force a JSON conversion on a specific resource type), and then I would get various errors, all pointing at either a variable not being supported, missing attributes, or a variable exceeding Terraform's character limits.
What worked was creating the entire resource as a dynamic resource, which I could pass in as a map attribute that includes the EnvironmentVariables. See the examples below.
Final Dynamic Pipeline
resource "aws_codepipeline" "codepipeline" {
for_each = var.code_pipeline
name = "${local.name_prefix}-${var.AppName}"
role_arn = each.value["code_pipeline_role_arn"]
tags = {
Pipeline_Key = each.key
}
artifact_store {
type = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
}
dynamic "stage" {
for_each = lookup(each.value, "stages", {})
iterator = stage
content {
name = lookup(stage.value, "name")
dynamic "action" {
for_each = lookup(stage.value, "actions", {}) //[stage.key]
iterator = action
content {
name = action.value["name"]
category = action.value["category"]
owner = action.value["owner"]
provider = action.value["provider"]
version = action.value["version"]
run_order = action.value["run_order"]
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = action.value["configuration"]
namespace = lookup(action.value, "namespace", null)
}
}
}
}
}
Calling Dynamic Pipeline
module "code_pipeline" {
source = "../module-aws-codepipeline" #using module locally
#source = "your-github-repository/aws-codepipeline" #using github repository
AppName = "My_new_pipeline"
code_pipeline = local.code_pipeline
}
Sample local pipeline variable
locals {
  /*
  DECLARE environment variables. Note: not every action requires environment variables.
  */
  action_second_stage_variables = [
    {
      name  = "PIPELINE_EXECUTION_ID"
      type  = "PLAINTEXT"
      value = "#{codepipeline.PipelineExecutionId}"
    },
    {
      name  = "NamespaceVariable"
      type  = "PLAINTEXT"
      value = "some_value"
    },
  ]

  action_third_stage_variables = [
    {
      name  = "PL_VARIABLE_1"
      type  = "PLAINTEXT"
      value = "VALUE1"
    },
    {
      name  = "PL_VARIABLE_2"
      type  = "PLAINTEXT"
      value = "VALUE2"
    },
    {
      name  = "PL_VARIABLE_3"
      type  = "PLAINTEXT"
      value = "VALUE3"
    },
    {
      name  = "PL_VARIABLE_4"
      type  = "PLAINTEXT"
      value = "#{BLD.NamespaceVariable}"
    },
  ]

  /*
  BUILD YOUR STAGES
  */
  code_pipeline = {
    codepipeline-configs = {
      code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"

      artifact_store = {
        type            = "S3"
        artifact_bucket = "your-aws-bucket-name"
      }

      stages = {
        stage_1 = {
          name = "Download"
          actions = {
            action_1 = {
              run_order        = 1
              category         = "Source"
              name             = "First_Stage"
              owner            = "AWS"
              provider         = "CodeCommit"
              version          = "1"
              output_artifacts = ["download_output"]
              configuration = {
                RepositoryName       = "Codecommit_target_repo"
                BranchName           = "main"
                PollForSourceChanges = true
                OutputArtifactFormat = "CODE_ZIP"
              }
            }
          }
        }

        stage_2 = {
          name = "Build"
          actions = {
            action_1 = {
              run_order        = 2
              category         = "Build"
              name             = "Second_Stage"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              namespace        = "BLD"
              input_artifacts  = ["download_output"]
              output_artifacts = ["build_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_second_stage"
                EnvironmentVariables = jsonencode(local.action_second_stage_variables)
              }
            }
          }
        }

        stage_3 = {
          name = "Validation"
          actions = {
            action_1 = {
              run_order        = 1
              name             = "Third_Stage"
              category         = "Build"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["build_outputs"]
              output_artifacts = ["validation_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_third_stage"
                EnvironmentVariables = jsonencode(local.action_third_stage_variables)
              }
            }
          }
        }
      }
    }
  }
}
The trick is building your CodePipeline resource, its stages, and its actions at the local level. You take your locals.tf and build out the pipeline variable there: all your stages, actions, and EnvironmentVariables. The EnvironmentVariables are JSON-encoded in place, so the whole pipeline definition passes in as a single variable. A sample explaining this approach can be found within this GitHub repository. I took the findings, consolidated them, and documented them so others could leverage this method.
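For reference, a minimal sketch of how the module might declare its inputs to accept this structure (the loose `any` type here is an assumption; it is what lets each action carry a different attribute set, including the pre-encoded EnvironmentVariables):

variable "AppName" {
  type        = string
  description = "Name suffix for the pipeline"
}

variable "code_pipeline" {
  # Loosely typed on purpose: each entry carries arbitrary nested
  # stages/actions maps with differing attribute sets.
  type        = any
  description = "Map of pipeline definitions keyed by pipeline name"
}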
main.tf
module "iam_assumable_role" {
for_each = var.service_accounts
source = "../../../../../../modules/iam-assumable-role-with-oidc/"
create_role = true
role_name = each.value.name
provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
// role_policy_arns = [for i in each.value.policies : "aws_iam_policy.${i}.arn"]
oidc_fully_qualified_subjects = each.value.wildcard == "" ? ["system:serviceaccount:${each.value.namespace}:${each.value.name}"] : []
oidc_subjects_with_wildcards = each.value.wildcard != "" ? ["system:serviceaccount:${each.value.namespace}:${each.value.wildcard}"] : []
tags = var.tags
}
resource "aws_iam_policy" "dev-policy1" {
name_prefix = "dev-policy"
description = "some description"
policy = data.aws_iam_policy_document.dev-policy1.json
}
variable "service_accounts" {
type = map(object({
name = string
namespace = string
wildcard = string
policies = list(any)
}))
}
tfvars
service_accounts = {
  "dev-sa" = {
    "name"      = "dev-sa",
    "namespace" = "dev",
    "wildcard"  = "*",
    "policies"  = ["dev-policy1", "dev-policy2"]
  },
  "qa-sa" = {
    "name"      = "qa-sa",
    "namespace" = "qa",
    "wildcard"  = "*",
    "policies"  = ["qa-policy1", "qa-policy2"]
  }
}
My code iterates over the service_accounts variable and creates the appropriate resources. The problem is that in the commented line I cannot get the list of aws_iam_policy ARNs for the provided policy names (the policy names are provided through the service_accounts variable). My current code returns the string "aws_iam_policy.PolicyName.arn" rather than the actual value. Note that the dev-policy1 resource is just one of all the policy resources, and all the policy documents exist as well. The module itself works correctly when I provide the policy list directly instead of through a variable.
Is it possible to achieve this in Terraform at all?
You have to use for_each to create your policies, as you can't dynamically reference individual resources the way you are trying to do:
# get all policy names; the names are unique, so the flattened list
# can safely be converted to a set for use with for_each
locals {
  policy_names = toset(flatten(values(var.service_accounts)[*]["policies"]))
}

# create a policy for each name in `policy_names`
resource "aws_iam_policy" "policy" {
  for_each    = local.policy_names
  name_prefix = "dev-policy"
  description = "some description"

  # the same per-name lookup must be done for the policy documents
  # policy = data.aws_iam_policy_document.dev-policy1.json
}
Then you refer to them as:
role_policy_arns = [for i in each.value.policies : aws_iam_policy.policy[i].arn]
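Putting it together, the commented-out line in the module call from the question becomes the lookup below (a sketch using the same inputs as the question; only the role_policy_arns line changes):

module "iam_assumable_role" {
  for_each = var.service_accounts
  source   = "../../../../../../modules/iam-assumable-role-with-oidc/"

  create_role  = true
  role_name    = each.value.name
  provider_url = replace(module.eks.cluster_oidc_issuer_url, "https://", "")

  # index the policies created with for_each by name to collect their ARNs
  role_policy_arns = [for i in each.value.policies : aws_iam_policy.policy[i].arn]

  oidc_fully_qualified_subjects = each.value.wildcard == "" ? ["system:serviceaccount:${each.value.namespace}:${each.value.name}"] : []
  oidc_subjects_with_wildcards  = each.value.wildcard != "" ? ["system:serviceaccount:${each.value.namespace}:${each.value.wildcard}"] : []

  tags = var.tags
}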
I'm getting an error when I try to set default values for an object inside a list:
variable "routes" {
type = list(object({
destination_cidr_block = string
blackhole = bool})
default = {
blackhole = "false"
destination_cidr_block = ""
})
description = "a list of objects containing the cidr blocks of the dest and whether the cidr is a blackhole or not."
default = null
}
When I run this, I get the below error:
Error: Missing argument separator
on variables.tf line 21, in variable "routes":
18: type = list(object({
19: destination_cidr_block = string
20: blackhole = bool})
21: default = {
A comma is required to separate each function argument from the next.
Line 21 "default" is underlined in the error.
Setting defaults in this way works fine when it's just an object by itself. I don't know why it complains when the variable is a list of objects.
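For comparison, this is the object-by-itself form that works (a minimal sketch of the same variable without the list wrapper):

variable "route" {
  type = object({
    destination_cidr_block = string
    blackhole              = bool
  })
  default = {
    destination_cidr_block = ""
    blackhole              = false
  }
}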
You may want to have it like this:
variable "routes" {
type = list(object({
destination_cidr_block = string
blackhole = bool
}))
default = [{
blackhole = "false"
destination_cidr_block = ""
}]
description = "a list of objects containing the cidr blocks of the dest and whether the cidr is a blackhole or not."
}
You cannot really have a default inside a type.
I requested an object-type default feature, and optional attributes for object type constraints are now being released in an upcoming build of Terraform!
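With that feature (optional attribute defaults eventually stabilized in Terraform 1.3), the variable could look roughly like this sketch, which assumes the optional(type, default) syntax:

variable "routes" {
  type = list(object({
    destination_cidr_block = optional(string, "")
    blackhole              = optional(bool, false)
  }))
  default     = []
  description = "a list of objects containing the cidr blocks of the dest and whether the cidr is a blackhole or not."
}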
I'm using Terraform with GCP. I have a groups variable that I have not been able to get to work. Here are the definitions:
resource "google_compute_instance_group" "vm_group" {
name = "vm-group"
zone = "us-central1-c"
project = "myproject-dev"
instances = [google_compute_instance.east_vm.id, google_compute_instance.west_vm.id]
named_port {
name = "http"
port = "8080"
}
named_port {
name = "https"
port = "8443"
}
lifecycle {
create_before_destroy = true
}
}
data "google_compute_image" "debian_image" {
family = "debian-9"
project = "debian-cloud"
}
resource "google_compute_instance" "west_vm" {
name = "west-vm"
project = "myproject-dev"
machine_type = "e2-micro"
zone = "us-central1-c"
boot_disk {
initialize_params {
image = data.google_compute_image.debian_image.self_link
}
}
network_interface {
network = "default"
}
}
resource "google_compute_instance" "east_vm" {
name = "east-vm"
project = "myproject-dev"
machine_type = "e2-micro"
zone = "us-central1-c"
boot_disk {
initialize_params {
image = data.google_compute_image.debian_image.self_link
}
}
network_interface {
network = "default"
}
}
And here are the variables:
http_forward   = true
https_redirect = true
create_address = true
project        = "myproject-dev"

backends = {
  "yobaby" = {
    description             = "my app"
    enable_cdn              = false
    security_policy         = ""
    custom_request_headers  = null
    custom_response_headers = null

    iap_config = {
      enable               = false
      oauth2_client_id     = ""
      oauth2_client_secret = ""
    }
    log_config = {
      enable      = false
      sample_rate = 0
    }

    groups = [{ group = "google_compute_instance_group.vm_group.id" }]
  }
}
... this is my latest attempt to get a group value that works, but this one won't work for me either; I still get
Error 400: Invalid value for field 'resource.backends[0].group': 'google_compute_instance_group.vm_group.id'. The URL is malformed., invalid
I've tried this with DNS FQDNs and variations on the syntax above; still no go.
Thanks much for any advice whatsoever!
There are a couple of clues that lead in this direction, based on the error message reported by Terraform (Error 400: Invalid value for field 'resource.backends[0].group': 'google_compute_instance_group.vm_group.id'. The URL is malformed., invalid):
Error code 400 means the request was actually sent to the server, which rejected it as malformed (HTTP 400 is for client-side errors). This implies that Terraform itself has no problem with the syntax, i.e., the configuration file is correct and actionable from TF's point of view.
The value of the field resource.backends[0].group is reported as being literally 'google_compute_instance_group.vm_group.id', which strongly suggests that a variable substitution did not take place.
The quotes around the code block make it a literal value instead of a variable reference. The solution is to change this:
groups = [{group = "google_compute_instance_group.vm_group.id"}]
To this:
groups = [{group = google_compute_instance_group.vm_group.id}]
I gave up on Terraform and used gcloud scripts to do what I needed to do, based on this posting.
I'm working on a module, provided below, to manage AWS KMS keys via Terraform. I'm using the flatten function, but the output I get is empty when I call this module.
Any thoughts on why the output is empty?
module
main.tf
locals {
  kms_keys = flatten([
    for key, kms_key in var.kms_key_list : [
      for index in range(kms_key.key_id) : {
        key_id                  = index
        aws_kms_alias           = kms_key.alias
        is_rotating             = kms_key.enable_key_rotation
        deletion_window_in_days = kms_key.deletion_window_in_days
        is_enabled              = kms_key.is_enabled
        description             = kms_key.description
        policy                  = kms_key.policy
      }
    ]
  ])
}

resource "aws_kms_key" "main" {
  for_each = {
    for k, v in local.kms_keys : k => v if v.key_id > 0
  }

  deletion_window_in_days = each.value.deletion_window_in_days
  is_enabled              = each.value.is_enabled
  enable_key_rotation     = each.value.enable_key_rotation
  description             = each.value.description
  policy                  = each.value.policy

  tags = merge({
    Name = each.value.aws_kms_alias
  }, var.common_tags)
}

resource "aws_kms_alias" "alias" {
  for_each      = aws_kms_key.main
  name          = "alias/${each.value.tags.Name}"
  target_key_id = each.value.key_id
}
variables.tf
variable "kms_key_list" {
type = map(object({
key_id = number
deletion_window_in_days = number
is_enabled = bool
enable_key_rotation = bool
description = string
policy = string
key_usage = string
customer_master_key_spec = string
alias = string
}))
}
calling the module in main.tf
module "kms_keys" {
source = "../module/kms"
kms_key_list = local.kms_keys
}
kms_keys.tf
locals {
  kms_keys = {
    name_1 = {
      key_id                   = 1
      deletion_window_in_days  = 7
      is_enabled               = true
      enable_key_rotation      = true
      description              = "description_1"
      policy                   = ""
      key_usage                = "ENCRYPT_DECRYPT"
      customer_master_key_spec = "SYMMETRIC_DEFAULT"
      alias                    = "alias_1"
    }
  }
}
TF Plan Output looks like this:
Changes to Outputs:
  + kms_info = {
      + kms_key = {}
    }
This seems odd:
for index in range(kms_key.key_id)
This loops through all values from 0 to key_id - 1; is that really what you want, an entry in kms_keys for each value in that range? (Note that with key_id = 1, range(1) yields only index 0, and your filter v.key_id > 0 then discards that entry, which would explain why the plan creates nothing.)
I doubt it, because the way you have this coded, if your var.kms_key_list contains a key config with key_id = 10, it's going to create 10 different KMS keys, all with the same configuration values.
Essentially, I'm not understanding the purpose of the nested for loop.
If you can provide samples of:
The input variable, but with a key_id > 1
The output that you expect to see
Then we might be able to help. Also, I don't see any output declared either in the module or in the parent file, so those must be missing; please include them.
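For what it's worth, if the intent is simply one KMS key per entry in kms_key_list, a minimal sketch without the flatten/range step could look like this (an assumption about the intent, not a confirmed fix):

resource "aws_kms_key" "main" {
  # iterate the input map directly; each map key becomes the instance key
  for_each = { for k, v in var.kms_key_list : k => v if v.key_id > 0 }

  deletion_window_in_days = each.value.deletion_window_in_days
  is_enabled              = each.value.is_enabled
  enable_key_rotation     = each.value.enable_key_rotation
  description             = each.value.description
  policy                  = each.value.policy

  tags = merge({
    Name = each.value.alias
  }, var.common_tags)
}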