Create Terraform resource only if a variable is not null

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_object_lock_configuration
So basically I want to make the creation of a resource optional, happening only if the variable object_lock_enabled is set. It's an optional variable; when it is set, recreation of the bucket is forced, and I only want that for production, not for the other environments.
prod.tfvars
object_lock_enabled = true
main.tf
module "voucher_s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.4.0"

  bucket              = local.voucher_bucket_name
  object_lock_enabled = var.object_lock_enabled
}
...
resource "aws_s3_bucket_object_lock_configuration" "example" {
  bucket = "mybucket"

  rule {
    default_retention {
      mode = "COMPLIANCE"
      days = 5
    }
  }
}
variables.tf
variable "object_lock_enabled" {
  description = "Enable object lock on bucket"
  type        = bool
  default     = null
}
but TF_VAR_env=platform terragrunt plan returns: Error during operation: argument must not be null.
I tried adding this line to the configuration resource block:
count = var.object_lock_enabled == null ? 0 : 1
But I still get the same error.

You can just use false instead of null as a default value:
variable "object_lock_enabled" {
  description = "Enable object lock on bucket"
  type        = bool
  default     = false # <----
}
and keep:
object_lock_enabled = var.object_lock_enabled
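If you do want to keep null and create the lock configuration resource only for production, count on the resource also works, provided the bucket reference is valid. A minimal sketch under the assumption that the terraform-aws-modules/s3-bucket module exposes an s3_bucket_id output (adapt the reference to your module version):

```hcl
resource "aws_s3_bucket_object_lock_configuration" "example" {
  # Create this resource only when object lock is explicitly enabled.
  count  = var.object_lock_enabled == true ? 1 : 0
  bucket = module.voucher_s3_bucket.s3_bucket_id

  rule {
    default_retention {
      mode = "COMPLIANCE"
      days = 5
    }
  }
}
```

Note that with count, any reference to this resource elsewhere must be indexed, e.g. aws_s3_bucket_object_lock_configuration.example[0]. The "argument must not be null" error in the question comes from the module call itself, which receives var.object_lock_enabled regardless of the count on this separate resource, which is why defaulting the variable to false is the simpler fix.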


Terraform AWS: SQS destination for Lambda doesn't get added

I have a working AWS project that I'm trying to implement in Terraform.
One of the steps requires a Lambda function to query Athena and return results to SQS (I am using this module for Lambda instead of the original resource). Here is the code:
data "archive_file" "go_package" {
  type        = "zip"
  source_file = "./report_to_SQS_go/main"
  output_path = "./report_to_SQS_go/main.zip"
}

resource "aws_sqs_queue" "emails_queue" {
  name = "sendEmails_tf"
}

module "lambda_report_to_sqs" {
  source = "terraform-aws-modules/lambda/aws"

  function_name          = "report_to_SQS_Go_tf"
  handler                = "main"
  runtime                = "go1.x"
  create_package         = false
  local_existing_package = "./report_to_SQS_go/main.zip"

  attach_policy_json = true
  policy_json = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect : "Allow"
        Action : [
          "dynamodb:*",
          "lambda:*",
          "logs:*",
          "athena:*",
          "cloudwatch:*",
          "s3:*",
          "sqs:*"
        ]
        Resource : ["*"]
      }
    ]
  })

  destination_on_success = aws_sqs_queue.emails_queue.arn
  timeout                = 200
  memory_size            = 1024
}
The code works fine and produces the desired output; however, SQS doesn't show up as a destination (although the queue shows up in SQS normally and can send/receive messages).
I don't think permissions are the problem, because I can add SQS destinations manually from the console successfully.
The variable destination_on_success is only used if you also set create_async_event_config to true. Below is an extract from https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master
variables.tf
############################
# Lambda Async Event Config
############################
variable "create_async_event_config" {
  description = "Controls whether async event configuration for Lambda Function/Alias should be created"
  type        = bool
  default     = false
}

variable "create_current_version_async_event_config" {
  description = "Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources)"
  type        = bool
  default     = true
}

.....

variable "destination_on_failure" {
  description = "Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations"
  type        = string
  default     = null
}

variable "destination_on_success" {
  description = "Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations"
  type        = string
  default     = null
}
main.tf
resource "aws_lambda_function_event_invoke_config" "this" {
  for_each = { for k, v in local.qualifiers : k => v if v != null && local.create && var.create_function && !var.create_layer && var.create_async_event_config }

  function_name = aws_lambda_function.this[0].function_name
  qualifier     = each.key == "current_version" ? aws_lambda_function.this[0].version : null

  maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
  maximum_retry_attempts       = var.maximum_retry_attempts

  dynamic "destination_config" {
    for_each = var.destination_on_failure != null || var.destination_on_success != null ? [true] : []
    content {
      dynamic "on_failure" {
        for_each = var.destination_on_failure != null ? [true] : []
        content {
          destination = var.destination_on_failure
        }
      }
      dynamic "on_success" {
        for_each = var.destination_on_success != null ? [true] : []
        content {
          destination = var.destination_on_success
        }
      }
    }
  }
}
So destination_on_success is only used in this resource, and this resource is only created if several conditions are met, the key one being that var.create_async_event_config must be true.
You can see the example for this here https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/be6cf9701071bf807cd7864fbcc751ed2552e434/examples/async/main.tf
module "lambda_function" {
  source = "../../"

  function_name = "${random_pet.this.id}-lambda-async"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  architectures = ["arm64"]
  source_path   = "${path.module}/../fixtures/python3.8-app1"

  create_async_event_config    = true
  attach_async_event_policy    = true
  maximum_event_age_in_seconds = 100
  maximum_retry_attempts       = 1
  destination_on_failure       = aws_sns_topic.async.arn
  destination_on_success       = aws_sqs_queue.async.arn
}
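Applied to the module call from the question, a minimal sketch of the fix (only the added lines are shown; all other arguments stay as in the question):

```hcl
module "lambda_report_to_sqs" {
  source = "terraform-aws-modules/lambda/aws"

  # ... existing arguments from the question ...

  # Without this flag, the aws_lambda_function_event_invoke_config
  # resource is never created, so the destination is silently ignored.
  create_async_event_config = true
  # As in the module's async example, this attaches the policy that
  # allows Lambda to deliver to the destination.
  attach_async_event_policy = true

  destination_on_success = aws_sqs_queue.emails_queue.arn
}
```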

Updated stages variable, which is now working

UPDATE
I got the variable working, and it passes terraform plan with flying colors. That said, when I run terraform apply I get a new error:
creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2 validation errors detected:
Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint: [Member must have length less than or equal to 50000,
Member must have length greater than or equal to 1];
Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint: [Member must have length less than or equal to 50000,
Member must have length greater than or equal to 1]
I don't believe this is a CodePipeline limit, since I have built this pipeline manually without dynamic stages and it works fine. I'm not sure whether this is a Terraform hard limit. Looking for some help here. Also, I have updated the code with the working variable for those looking for the syntax.
OLD POST
================================================================
I am taking my first stab at creating dynamic stages and really struggling with the documentation out there. What I have put together so far is based on articles found here on Stack Overflow and a few resources online. So far I think I have good syntax, but the value I am passing from my main.tf is getting an error:
The given value is not suitable for the module.test_code.var.stages declared at ../dynamic_pipeline/variables.tf:60,1-18: all list elements must have the same type.
Part 1
All I am trying to do is pass dynamic stages into the pipeline. Once I get the stages working, I will add the new dynamic variables. I am providing the dynamic module, the variables.tf for the module, and then my test run along with its variables.
dynamic_pipeline.tf
resource "aws_codepipeline" "cp_plan_pipeline" {
  name     = "${local.cp_name}-cp"
  role_arn = var.cp_run_role

  artifact_store {
    type     = var.cp_artifact_type
    location = var.cp_artifact_bucketname
  }

  dynamic "stage" {
    for_each = [for s in var.stages : {
      name   = s.name
      action = s.action
    } if lookup(s, "enabled", true)]

    content {
      name = stage.value.name

      dynamic "action" {
        for_each = stage.value.action
        content {
          name             = action.value["name"]
          owner            = action.value["owner"]
          version          = action.value["version"]
          category         = action.value["category"]
          provider         = action.value["provider"]
          run_order        = lookup(action.value, "run_order", null)
          namespace        = lookup(action.value, "namespace", null)
          region           = lookup(action.value, "region", data.aws_region.current.name)
          input_artifacts  = lookup(action.value, "input_artifacts", [])
          output_artifacts = lookup(action.value, "output_artifacts", [])

          configuration = {
            RepositoryName       = lookup(action.value, "repository_name", null)
            ProjectName          = lookup(action.value, "ProjectName", null)
            BranchName           = lookup(action.value, "branch_name", null)
            PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
            OutputArtifactFormat = lookup(action.value, "ouput_format", null)
          }
        }
      }
    }
  }
}
variables.tf
#---------------------------------------------------------------------------------------------------
# General
#---------------------------------------------------------------------------------------------------
variable "region" {
  type        = string
  description = "The AWS Region to be used when deploying region-specific resources (Default: us-east-1)"
  default     = "us-east-1"
}

#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_name" {
  type        = string
  description = "The name of the codepipeline"
}

variable "cp_repo_name" {
  type        = string
  description = "The name of the repo that will be used as a source repo to trigger builds"
}

variable "cp_branch_name" {
  type        = string
  description = "The branch of the repo that will be watched and used to trigger deployment"
  default     = "development"
}

variable "cp_artifact_bucketname" {
  type        = string
  description = "Name of the artifact bucket where artifacts are stored."
  default     = "Codepipeline-artifacts-s3"
}

variable "cp_run_role" {
  type        = string
  description = "S3 artifact bucket name."
}

variable "cp_artifact_type" {
  type        = string
  description = ""
  default     = "S3"
}

variable "cp_poll_sources" {
  description = "Trigger that lets codepipeline know that it needs to trigger build on change"
  type        = bool
  default     = false
}

variable "cp_ouput_format" {
  type        = string
  description = "Output artifacts format that is used to save the outputs"
  default     = "CODE_ZIP"
}

variable "stages" {
  type = list(object({
    name = string
    action = list(object({
      name                   = string
      owner                  = string
      version                = string
      category               = string
      provider               = string
      run_order              = number
      namespace              = string
      region                 = string
      input_artifacts        = list(string)
      output_artifacts       = list(string)
      repository_name        = string
      ProjectName            = string
      branch_name            = string
      poll_for_sourcechanges = bool
      output_format          = string
    }))
  }))
  description = "This list describes each stage of the build"
}

#---------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
#---------------------------------------------------------------------------------------------------
variable "env" {
  type        = string
  description = "The environment to deploy resources (dev | test | prod | sbx)"
  default     = "dev"
}

variable "tenant" {
  type        = string
  description = "The Service Tenant in which the IaC is being deployed to"
  default     = "dummytenant"
}

variable "project" {
  type        = string
  description = "The Project Name or Acronym. (Note: You should consider setting this in your Environment Variables.)"
}

#---------------------------------------------------------------------------------------------------
# Parameter Store Variables
#---------------------------------------------------------------------------------------------------
variable "bucketlocation" {
  type        = string
  description = "Location within the S3 bucket where the state file resides"
}
Part 2
That is the main makeup of the pipeline. Below is the module I created to execute as a test to ensure it works. This is where I am getting the error.
main.tf
module "test_code" {
  source = "../dynamic_pipeline"

  cp_name        = "dynamic-actions"
  project        = "my_project"
  bucketlocation = var.backend_bucket_target_name
  cp_run_role    = "arn:aws:iam::xxxxxxxxx:role/cp-deploy-service-role"
  cp_repo_name   = var.repo

  stages = [{
    name = "part 1"
    action = [{
      name                   = "Source"
      owner                  = "AWS"
      version                = "1"
      category               = "Source"
      provider               = "CodeCommit"
      run_order              = 1
      repository_name        = "my_target_repo"
      branch_name            = "main"
      poll_for_sourcechanges = true
      output_artifacts       = ["CodeWorkspace"]
      ouput_format           = var.cp_ouput_format
    }]
    },
    {
      name = "part 2"
      action = [{
        run_order        = 1
        name             = "Combine_Binaries"
        owner            = "AWS"
        version          = "1"
        category         = "Build"
        provider         = "CodeBuild"
        namespace        = "BIN"
        input_artifacts  = ["CodeWorkspace"]
        output_artifacts = ["CodeSource"]
        ProjectName      = "test_runner"
      }]
  }]
}
Variables file associated with the run book:
variables.tf
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_branch_name" {
  type        = string
  description = "The branch of the repo that will be watched and used to trigger deployment"
  default     = "development"
}

variable "cp_poll_sources" {
  description = "Trigger that lets codepipeline know that it needs to trigger build on change"
  type        = bool
  default     = false
}

variable "cp_ouput_format" {
  type        = string
  description = "Output artifacts format that is used to save the outputs. Values can be CODEBUILD_CLONE_REF or CODE_ZIP"
  default     = "CODE_ZIP"
}

variable "backend_bucket_target_name" {
  type        = string
  description = "The folder name where the state file is stored for the pipeline"
  default     = "dynamic-test-pl"
}

variable "repo" {
  type        = string
  description = "Name of the repo the pipeline is managing"
  default     = "my_target_repo"
}
I know this is my first time trying this, and I'm not very good with lists and maps in Terraform, but I am certain it has to do with the way I am passing it in. Any help or guidance would be appreciated.
After some time, I finally found the answer to this issue. Special thanks to this thread on GitHub, which put me in the right direction. A couple of things to take away from this: variable declaration is the essential part of a dynamic pipeline. I worked with several examples that yielded great results for stages and actions, but when it came to the configuration environment variables, they all crashed. The root problem, I concluded, was that you cannot build dynamic actions with environment variables and hope for Terraform to perform the JSON translation for you. In some cases it would work, but it required that every action contain similar elements, which led to character constraints and errors like the one my post called out.
My best guess is that Terraform has a hard limit on variables and their character lengths. The solution: declare the input driving the resource as a single free-form map, which seems to support different shapes per action, versus strictly typed variables within a resource. This approach makes the entire Terraform resource dynamic, which I feel Terraform treats differently, with fewer limits (an assumption). I say that because I tried four methods of dynamic stages and actions; those methods worked up until I introduced the environment variables (which force JSON conversion for a specific resource type), and then I would get various errors, all pointing at either an unsupported variable, missing attributes, or a variable exceeding Terraform character limits.
What worked was creating the entire resource as a dynamic resource which I could pass in as a map attribute that includes the EnvironmentVariables. See the examples below.
Final Dynamic Pipeline
resource "aws_codepipeline" "codepipeline" {
  for_each = var.code_pipeline
  name     = "${local.name_prefix}-${var.AppName}"
  role_arn = each.value["code_pipeline_role_arn"]

  tags = {
    Pipeline_Key = each.key
  }

  artifact_store {
    type     = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
    location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
  }

  dynamic "stage" {
    for_each = lookup(each.value, "stages", {})
    iterator = stage
    content {
      name = lookup(stage.value, "name")
      dynamic "action" {
        for_each = lookup(stage.value, "actions", {}) //[stage.key]
        iterator = action
        content {
          name             = action.value["name"]
          category         = action.value["category"]
          owner            = action.value["owner"]
          provider         = action.value["provider"]
          version          = action.value["version"]
          run_order        = action.value["run_order"]
          input_artifacts  = lookup(action.value, "input_artifacts", null)
          output_artifacts = lookup(action.value, "output_artifacts", null)
          configuration    = action.value["configuration"]
          namespace        = lookup(action.value, "namespace", null)
        }
      }
    }
  }
}
Calling Dynamic Pipeline
module "code_pipeline" {
  source = "../module-aws-codepipeline" # using the module locally
  #source = "your-github-repository/aws-codepipeline" # using a GitHub repository

  AppName       = "My_new_pipeline"
  code_pipeline = local.code_pipeline
}
Sample local pipeline variable
locals {
  /*
  DECLARE environment variables. Note: not every action requires environment variables.
  */
  action_second_stage_variables = [
    {
      name  = "PIPELINE_EXECUTION_ID"
      type  = "PLAINTEXT"
      value = "#{codepipeline.PipelineExecutionId}"
    },
    {
      name  = "NamespaceVariable"
      type  = "PLAINTEXT"
      value = "some_value"
    },
  ]

  action_third_stage_variables = [
    {
      name  = "PL_VARIABLE_1"
      type  = "PLAINTEXT"
      value = "VALUE1"
    },
    {
      name  = "PL_VARIABLE_2"
      type  = "PLAINTEXT"
      value = "VALUE2"
    },
    {
      name  = "PL_VARIABLE_3"
      type  = "PLAINTEXT"
      value = "VALUE3"
    },
    {
      name  = "PL_VARIABLE_4"
      type  = "PLAINTEXT"
      value = "#{BLD.NamespaceVariable}"
    },
  ]

  /*
  BUILD YOUR STAGES
  */
  code_pipeline = {
    codepipeline-configs = {
      code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"
      artifact_store = {
        type            = "S3"
        artifact_bucket = "your-aws-bucket-name"
      }
      stages = {
        stage_1 = {
          name = "Download"
          actions = {
            action_1 = {
              run_order        = 1
              category         = "Source"
              name             = "First_Stage"
              owner            = "AWS"
              provider         = "CodeCommit"
              version          = "1"
              output_artifacts = ["download_output"]
              configuration = {
                RepositoryName       = "Codecommit_target_repo"
                BranchName           = "main"
                PollForSourceChanges = true
                OutputArtifactFormat = "CODE_ZIP"
              }
            }
          }
        }
        stage_2 = {
          name = "Build"
          actions = {
            action_1 = {
              run_order        = 2
              category         = "Build"
              name             = "Second_Stage"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              namespace        = "BLD"
              input_artifacts  = ["download_output"]
              output_artifacts = ["build_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_second_stage"
                EnvironmentVariables = jsonencode(local.action_second_stage_variables)
              }
            }
          }
        }
        stage_3 = {
          name = "Validation"
          actions = {
            action_1 = {
              run_order        = 1
              name             = "Third_Stage"
              category         = "Build"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["build_outputs"]
              output_artifacts = ["validation_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_third_stage"
                EnvironmentVariables = jsonencode(local.action_third_stage_variables)
              }
            }
          }
        }
      }
    }
  }
}
The trick is to build your CodePipeline resource, with its stages and actions, at the locals level. In locals.tf you build out the pipeline variable with all your stages, actions, and EnvironmentVariables. The EnvironmentVariables are JSON-encoded and passed through as part of a single variable. A sample explaining this approach can be found in this GitHub repository. I took the findings, consolidated them, and documented them so others could leverage this method.
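As an aside on the original "all list elements must have the same type" error: since Terraform 1.3 there is another way to let list elements omit attributes, without switching to a free-form map, by using optional() in the variable's type constraint. A hedged sketch borrowing the attribute names from the question:

```hcl
variable "stages" {
  type = list(object({
    name = string
    action = list(object({
      name     = string
      owner    = string
      version  = string
      category = string
      provider = string
      # optional() lets different list elements omit these attributes
      # while still satisfying "all list elements must have the same type".
      run_order        = optional(number)
      namespace        = optional(string)
      input_artifacts  = optional(list(string), [])
      output_artifacts = optional(list(string), [])
      configuration    = optional(map(string), {})
    }))
  }))
}
```

With strict object types and no optional(), every list element must supply every attribute, which is exactly why the original plan failed when the two actions had different shapes.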

Terraform "Unsupported Block type" error when using aws_s3_bucket_logging resource

We are currently in the process of editing our core S3 module to adapt to the new v4.0 changes released for the Terraform AWS provider.
Existing main.tf
resource "aws_s3_bucket" "bucket" {
  bucket        = local.bucket_name
  tags          = local.tags
  force_destroy = var.force_destroy

  dynamic "logging" {
    for_each = local.logging
    content {
      target_bucket = logging.value["target_bucket"]
      target_prefix = logging.value["target_prefix"]
    }
  }
  ....
}
I am trying to convert this to use the resource aws_s3_bucket_logging as below
resource "aws_s3_bucket" "bucket" {
  bucket         = local.bucket_name
  tags           = local.tags
  force_destroy  = var.force_destroy
  hosted_zone_id = var.hosted_zone_id
}

resource "aws_s3_bucket_logging" "logging" {
  bucket = aws_s3_bucket.bucket.id

  dynamic "logging" {
    for_each = local.logging
    content {
      target_bucket = logging.value["target_bucket"]
      target_prefix = logging.value["target_prefix"]
    }
  }
}
locals.tf
locals {
  logging = var.log_bucket == null ? [] : [
    {
      target_bucket = var.log_bucket
      target_prefix = var.log_prefix
    }
  ]
  ....
}
variables.tf
variable "log_bucket" {
  type        = string
  default     = null
  description = "The name of the bucket that will receive the log objects."
}

variable "log_prefix" {
  type        = string
  default     = null
  description = "To specify a key prefix for log objects."
}
And I receive the error
Error: Unsupported block type
Blocks of type "logging" are not expected here.
Any help is greatly appreciated. TA
If you check the TF docs for aws_s3_bucket_logging, you will find that aws_s3_bucket_logging does not have any block or attribute called logging. Please have a look at the linked docs and follow the examples and the documentation.
Although, is it the right usage of the resource if I remove the locals and simplify it as below? All I am trying to do is pass null as the default value.
resource "aws_s3_bucket_logging" "logging" {
  bucket        = aws_s3_bucket.bucket.id
  target_bucket = var.log_bucket
  target_prefix = var.log_prefix
}
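Note that target_bucket and target_prefix are required arguments on aws_s3_bucket_logging, so passing null there will fail at plan or apply time. A hedged sketch of the usual pattern, creating the logging resource only when a log bucket is actually configured (names follow the question's code):

```hcl
resource "aws_s3_bucket_logging" "logging" {
  # Skip the whole resource when logging is not configured,
  # instead of passing null into required arguments.
  count = var.log_bucket == null ? 0 : 1

  bucket        = aws_s3_bucket.bucket.id
  target_bucket = var.log_bucket
  target_prefix = var.log_prefix
}
```

This mirrors the conditional the original locals expressed with an empty list, but at the resource level, where the v4-style split resources expect it.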

The parameter CacheSubnetGroupName must be provided and must not be blank

I use the module https://github.com/cloudposse/terraform-aws-elasticache-redis to provision ElastiCache Redis. Below are the errors I get when I run terraform apply. I have no clue about these errors.
Terraform version: v0.13.5
module.redis.aws_elasticache_parameter_group.default[0]: Creating...
module.redis.aws_elasticache_subnet_group.default[0]: Creating...
Error: Error creating CacheSubnetGroup: InvalidParameterValue: The parameter CacheSubnetGroupName must be provided and must not be blank.
status code: 400, request id: a1ab57b1-fa23-491c-aa7b-a2d3804014c9
Error: Error creating Cache Parameter Group: InvalidParameterValue: The parameter CacheParameterGroupName must be provided and must not be blank.
status code: 400, request id: 9abc80b6-bd3b-46fd-8b9e-9bf14d1913eb
redis.tf:
module "redis" {
  source = "git::https://github.com/cloudposse/terraform-aws-elasticache-redis.git?ref=tags/0.25.0"

  availability_zones         = var.azs
  vpc_id                     = module.vpc.vpc_id
  allowed_security_groups    = [data.aws_security_group.default.id]
  subnets                    = module.vpc.private_subnets
  cluster_size               = var.redis_cluster_size # number_cache_clusters
  instance_type              = var.redis_instance_type
  apply_immediately          = true
  automatic_failover_enabled = false
  engine_version             = var.engine_version
  family                     = var.family

  replication_group_id          = var.replication_group_id
  elasticache_subnet_group_name = var.elasticache_subnet_group_name

  #enabled = true
  enabled = var.enabled

  # At-rest encryption increases data security by encrypting on-disk data.
  at_rest_encryption_enabled = var.at_rest_encryption_enabled
  # In-transit encryption protects data when it is moving from one location to another.
  transit_encryption_enabled = var.transit_encryption_enabled

  cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled

  parameter = [
    {
      # Keyspace notifications send events for every operation affecting the Redis data space.
      name  = "notify-keyspace-events"
      value = "lK"
    }
  ]

  context = module.this.context
}
context.tf:
module "this" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  context             = var.context
}
variable "context" {
  type = object({
    enabled             = bool
    namespace           = string
    environment         = string
    stage               = string
    name                = string
    delimiter           = string
    attributes          = list(string)
    tags                = map(string)
    additional_tag_map  = map(string)
    regex_replace_chars = string
    label_order         = list(string)
    id_length_limit     = number
  })
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT
}
variable "enabled" {
  type        = bool
  default     = true
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = "Solution name, e.g. 'app' or 'jenkins'"
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`)"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters.
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
}
Open up context.tf and set the variable "enabled" to true if you want the module to create resources for you, including the subnet group. Otherwise, you have to create all prerequisite resources yourself, which includes the elasticache_subnet_group_name.
I found the problem. The default values of namespace/environment/stage/name in context.tf are null, and that causes the problem. Setting these values solved it.
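To illustrate: the cloudposse null-label module builds an `id` from those label inputs, and the redis module uses that `id` to name resources such as the cache subnet group, so all-null labels yield a blank name and the InvalidParameterValue error. A minimal hedged sketch with example values (adapt the names to your own scheme):

```hcl
module "this" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"

  enabled = true
  # These feed the generated `id`; leaving them all null
  # produces an empty name for the cache subnet group.
  namespace = "eg"
  stage     = "prod"
  name      = "redis"
}
```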
You need to provide the module inputs, which can be found on the link you provided.
For instance, for:
Error: Error creating CacheSubnetGroup: InvalidParameterValue: The parameter CacheSubnetGroupName must be provided and must not be blank. status code: 400, request id: a1ab57b1-fa23-491c-aa7b-a2d3804014c9
you need to set elasticache_subnet_group_name, and so on.

Terraform - creating multiple buckets

Creating a bucket is pretty simple:
resource "aws_s3_bucket" "henrys_bucket" {
  bucket        = "${var.s3_bucket_name}"
  acl           = "private"
  force_destroy = "true"
}
Initially I thought I could create a list for the s3_bucket_name variable, but I get an error:
Error: bucket must be a single value, not a list
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
How can I create multiple buckets without duplicating code?
You can use a combination of count and element like so:
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${element(var.s3_bucket_name, count.index)}"
  acl           = "private"
  force_destroy = "true"
}
Edit: as suggested by @ydaetskcoR, you can use the list[index] pattern rather than element:
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${var.s3_bucket_name[count.index]}"
  acl           = "private"
  force_destroy = "true"
}