Passing a variable to a Terraform dynamic block (v0.12) - amazon-web-services

I am trying to use the code in this repo https://github.com/jmgreg31/terraform-aws-cloudfront/
but I'm having a hard time setting the variables.
My variables.tf has this value, but somehow it's not working:
variable "dynamic_s3_origin_config" {
default =
[
{
domain_name = "domain.s3.amazonaws.com"
origin_id = "S3-domain-cert"
origin_access_identity = "origin-access-identity/cloudfront/1234"
},
{
domain_name = "domain2.s3.amazonaws.com"
origin_id = "S3-domain2-cert"
origin_access_identity = "origin-access-identity/cloudfront/1234"
origin_path = ""
}
]
}
The variable definition in the module looks like:
variable dynamic_s3_origin_config {
  description = "Configuration for the s3 origin config to be used in dynamic block"
  type        = list(map(string))
  default     = []
}
Can someone help me understand what I am doing wrong here?
terraform plan
Error: Invalid expression
on variables.tf line 65, in variable "dynamic_s3_origin_config":
65:
66:
Expected the start of an expression, but found an invalid expression token.

You can't have a newline between default = and the start of the expression. Try changing your block to:
variable "dynamic_s3_origin_config" {
default = [
{
domain_name = "domain.s3.amazonaws.com"
origin_id = "S3-domain-cert"
origin_access_identity = "origin-access-identity/cloudfront/1234"
},
{
domain_name = "domain2.s3.amazonaws.com"
origin_id = "S3-domain2-cert"
origin_access_identity = "origin-access-identity/cloudfront/1234"
origin_path = ""
}
]
}


Convert locals to string with literal curly braces

I'm trying to convert a locals variable to a string.
locals {
  env_vars = {
    STAGE              = "${var.environment}"
    ENVIRONMENT        = "${var.environment}"
    REGION             = "${var.region}"
    AWS_DEFAULT_REGION = "${var.region}"
  }
}
container_definitions = templatefile("${path.module}/task-definitions/ecs-web-app.json", {
  # This works just fine:
  #ENVIRONMENT_VARS = <<EOT
  # {"name":"STAGE","value":"${var.environment}"},
  # {"name":"ENVIRONMENT","value":"${var.environment}"},
  # {"name":"REGION","value":"${var.region}"},
  # {"name":"AWS_DEFAULT_REGION","value":"${var.region}"}
  # EOT
  # But I'm trying to get this to work instead:
  ENVIRONMENT_VARS = join(",", [for key, value in local.env_vars : "{{\"name\":\"${key}\",\"value\":\"${value}}}\""])
})
Where foo.json is:
[
  {
    "environment": [
      ${ENVIRONMENT_VARS}
    ]
  }
]
The error I get is:
Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: invalid character '{' looking for the beginning of object key string
So in a nutshell, I'm trying to convert:
locals {
  env_vars = {
    STAGE              = "${var.environment}"
    ENVIRONMENT        = "${var.environment}"
    REGION             = "${var.region}"
    AWS_DEFAULT_REGION = "${var.region}"
  }
}
To a string that looks like this:
{"name":"STAGE","value":"dev"},
{"name":"ENVIRONMENT","value":"dev"},
{"name":"REGION","value":"us-west-2"},
{"name":"AWS_DEFAULT_REGION","value":"us-west-2"}
I'm almost 100% sure the curly braces are throwing me off, but I can't figure out what I'm doing wrong.
Any help or pointers would be greatly appreciated.
You have one too many curly brackets, and the closing brace needs to sit outside the escaped quote. It should be:
container_definitions = templatefile("foo.json", {
  ENVIRONMENT_VARS = join(",", [for key, value in local.env_vars :
    "{\"name\":\"${key}\",\"value\":\"${value}\"}"
  ])
})
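As a side note, a sketch of an alternative that avoids the manual escaping is to let jsonencode build each fragment (assuming the same local.env_vars; the key order in the output happens to match the desired string because jsonencode sorts object keys):
ENVIRONMENT_VARS = join(",", [for key, value in local.env_vars : jsonencode({ name = key, value = value })])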

Updated stages variable which is now working

UPDATE
I got the variable working, and it passes terraform plan with flying colors. That said, when I run terraform apply I get a new error:
creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2 validation errors detected:
Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have length greater than or equal to 1];
Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 50000, Member must have length greater than or equal to 1]
I don't believe this is a limit of CodePipeline itself, since I have built this pipeline manually without dynamic stages and it works fine. I'm not sure if this is a Terraform hard limit. Looking for some help here. Also, I have updated the code with the working variable for those looking for the syntax.
OLD POST
================================================================
I am taking my first stab at creating dynamic stages and really struggling with the documentation out there. What I have put together so far is based on articles found here on Stack Overflow and a few resources online. So far I think I have good syntax, but the value I am passing from my main.tf is getting an error:
The given value is not suitable for module.test_code.var.stages declared at ../dynamic_pipeline/variables.tf:60,1-18: all list elements must have the same type.
Part 1
All I am trying to do, basically, is pass dynamic stages into the pipeline. Once I get the stages working, I will add the new dynamic variables. I am providing the dynamic module, the variables.tf for the module, and then my test run along with its variables.
dynamic_pipeline.tf
resource "aws_codepipeline" "cp_plan_pipeline" {
name = "${local.cp_name}-cp"
role_arn = var.cp_run_role
artifact_store {
type = var.cp_artifact_type
location = var.cp_artifact_bucketname
}
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = action.value["name"]
owner = action.value["owner"]
version = action.value["version"]
category = action.value["category"]
provider = action.value["provider"]
run_order = lookup(action.value, "run_order", null)
namespace = lookup(action.value, "namespace", null)
region = lookup(action.value, "region", data.aws_region.current.name)
input_artifacts = lookup(action.value, "input_artifacts", [])
output_artifacts = lookup(action.value, "output_artifacts", [])
configuration = {
RepositoryName = lookup(action.value, "repository_name", null)
ProjectName = lookup(action.value, "ProjectName", null)
BranchName = lookup(action.value, "branch_name", null)
PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
OutputArtifactFormat = lookup(action.value, "ouput_format", null)
}
}
}
}
}
}
variables.tf
#---------------------------------------------------------------------------------------------------
# General
#---------------------------------------------------------------------------------------------------
variable "region" {
type = string
description = "The AWS Region to be used when deploying region-specific resources (Default: us-east-1)"
default = "us-east-1"
}
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_name" {
type = string
description = "The name of the codepipline"
}
variable "cp_repo_name" {
type = string
description = "Then name of the repo that will be used as a source repo to trigger builds"
}
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_artifact_bucketname" {
type = string
description = "name of the artifact bucket where articacts are stored."
default = "Codepipeline-artifacts-s3"
}
variable "cp_run_role" {
type = string
description = "S3 artifact bucket name."
}
variable "cp_artifact_type" {
type = string
description = ""
default = "S3"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs"
default = "CODE_ZIP"
}
variable "stages" {
type = list(object({
name = string
action = list(object({
name = string
owner = string
version = string
category = string
provider = string
run_order = number
namespace = string
region = string
input_artifacts = list(string)
output_artifacts = list(string)
repository_name = string
ProjectName = string
branch_name = string
poll_for_sourcechanges = bool
output_format = string
}))
}))
description = "This list describes each stage of the build"
}
#---------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
#---------------------------------------------------------------------------------------------------
variable "env" {
type = string
description = "The environment to deploy resources (dev | test | prod | sbx)"
default = "dev"
}
variable "tenant" {
type = string
description = "The Service Tenant in which the IaC is being deployed to"
default = "dummytenant"
}
variable "project" {
type = string
description = "The Project Name or Acronym. (Note: You should consider setting this in your Enviornment Variables.)"
}
#---------------------------------------------------------------------------------------------------
# Parameter Store Variables
#---------------------------------------------------------------------------------------------------
variable "bucketlocation" {
type = string
description = "location within the S3 bucket where the State file resides"
}
Part 2
That is the main makeup of the pipeline. Below is the module I created to execute as a test to ensure it works. This is where I am getting the error.
main.tf
module test_code {
source = "../dynamic_pipeline"
cp_name = "dynamic-actions"
project = "my_project"
bucketlocation = var.backend_bucket_target_name
cp_run_role = "arn:aws:iam::xxxxxxxxx:role/cp-deploy-service-role"
cp_repo_name = var.repo
stages = [{
name = "part 1"
action = [{
name = "Source"
owner = "AWS"
version = "1"
category = "Source"
provider = "CodeCommit"
run_order = 1
repository_name = "my_target_repo"
branch_name = "main"
poll_for_sourcechanges = true
output_artifacts = ["CodeWorkspace"]
ouput_format = var.cp_ouput_format
}]
},
{
name = "part 2"
action = [{
run_order = 1
name = "Combine_Binaries"
owner = "AWS"
version = "1"
category = "Build"
provider = "CodeBuild"
namespace = "BIN"
input_artifacts = ["CodeWorkspace"]
output_artifacts = ["CodeSource"]
ProjectName = "test_runner"
}]
}]
}
Variables file associated with the run book:
variables.tf
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs. Values can be CODEBUILD_CLONE_REF or CODE_ZIP"
default = "CODE_ZIP"
}
variable "backend_bucket_target_name" {
type = string
description = "The folder name where the state file is stored for the pipeline"
default = "dynamic-test-pl"
}
variable "repo" {
type = string
description = "name of the repo the pipeine is managing"
default = "my_target_repo"
}
I know this is my first time trying this, and I'm not very good with lists and maps in Terraform, but I am certain it has to do with the way I am passing it in. Any help or guidance would be appreciated.
After some time, I finally found the answer to this issue. Special thanks to this thread on GitHub, which put me in the right direction. A couple of things to take away from this: variable declaration is the essential part of a dynamic pipeline. I worked with several working examples that yielded great results for stages and actions, but when it came to the configuration environment variables, they all crashed. The root problem, I concluded, is that you cannot build dynamic actions with environment variables and hope for Terraform to perform the JSON translation for you. In some cases it would work, but it required every action to contain similar elements, which led to the character constraints and errors my post called out.
My best guess is that Terraform has hard character limits on individually declared variables. The solution was to drive the resource from a single dynamic input, which seems to tolerate different limits than traditional per-attribute variables within a resource. The approach taken here makes the entire Terraform resource dynamic, which I feel Terraform treats differently in its entirety, with fewer limits (an assumption). I say that because I tried four methods of dynamic stages and actions; those methods worked up until I introduced the environment variables (which force a JSON conversion for that specific resource type), and then I would get various errors pointing either at an unsupported or missing attribute, or at a variable exceeding Terraform's character limits.
What worked was driving the entire resource from a single map attribute that includes the EnvironmentVariables. See the examples below.
Final Dynamic Pipeline
resource "aws_codepipeline" "codepipeline" {
for_each = var.code_pipeline
name = "${local.name_prefix}-${var.AppName}"
role_arn = each.value["code_pipeline_role_arn"]
tags = {
Pipeline_Key = each.key
}
artifact_store {
type = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
}
dynamic "stage" {
for_each = lookup(each.value, "stages", {})
iterator = stage
content {
name = lookup(stage.value, "name")
dynamic "action" {
for_each = lookup(stage.value, "actions", {}) //[stage.key]
iterator = action
content {
name = action.value["name"]
category = action.value["category"]
owner = action.value["owner"]
provider = action.value["provider"]
version = action.value["version"]
run_order = action.value["run_order"]
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = action.value["configuration"]
namespace = lookup(action.value, "namespace", null)
}
}
}
}
}
Calling Dynamic Pipeline
module "code_pipeline" {
source = "../module-aws-codepipeline" #using module locally
#source = "your-github-repository/aws-codepipeline" #using github repository
AppName = "My_new_pipeline"
code_pipeline = local.code_pipeline
}
Sample local pipeline variable
locals {
/*
DECLARE environment variables. Note: not every action requires environment variables.
*/
action_second_stage_variables = [
{
name = "PIPELINE_EXECUTION_ID"
type = "PLAINTEXT"
value = "#{codepipeline.PipelineExecutionId}"
},
{
name = "NamespaceVariable"
type = "PLAINTEXT"
value = "some_value"
},
]
action_third_stage_variables = [
{
name = "PL_VARIABLE_1"
type = "PLAINTEXT"
value = "VALUE1"
},
{
name = "PL_VARIABLE 2"
type = "PLAINTEXT"
value = "VALUE2"
},
{
name = "PL_VARIABLE_3"
type = "PLAINTEXT"
value = "VAUE3"
},
{
name = "PL_VARIABLE_4"
type = "PLAINTEXT"
value = "#{BLD.NamespaceVariable}"
},
]
/*
BUILD YOUR STAGES
*/
code_pipeline = {
codepipeline-configs = {
code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"
artifact_store = {
type = "S3"
artifact_bucket = "your-aws-bucket-name"
}
stages = {
stage_1 = {
name = "Download"
actions = {
action_1 = {
run_order = 1
category = "Source"
name = "First_Stage"
owner = "AWS"
provider = "CodeCommit"
version = "1"
output_artifacts = ["download_ouput"]
configuration = {
RepositoryName = "Codecommit_target_repo"
BranchName = "main"
PollForSourceChanges = true
OutputArtifactFormat = "CODE_ZIP"
}
}
}
}
stage_2 = {
name = "Build"
actions = {
action_1 = {
run_order = 2
category = "Build"
name = "Second_Stage"
owner = "AWS"
provider = "CodeBuild"
version = "1"
namespace = "BLD"
input_artifacts = ["Download_ouput"]
output_artifacts = ["build_outputs"]
configuration = {
ProjectName = "codebuild_project_name_for_second_stage"
EnvironmentVariables = jsonencode(local.action_second_stage_variables)
}
}
}
}
stage_3 = {
name = "Validation"
actions = {
action_1 = {
run_order = 1
name = "Third_Stage"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["build_outputs"]
output_artifacts = ["validation_outputs"]
configuration = {
ProjectName = "codebuild_project_name_for_third_stage"
EnvironmentVariables = jsonencode(local.action_third_stage_variables)
}
}
}
}
}
}
}
}
The trick becomes building your CodePipeline resource, with its stages and actions, at the local level. In your local.tf you build out the pipeline variable, including all of your stages, actions, and EnvironmentVariables. The EnvironmentVariables are converted with jsonencode and carried inside that same map, so everything is passed in as a single variable. A sample explaining this approach can be found within this GitHub repository. I took the findings, consolidated them, and documented them so others could leverage this method.
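For reference, the module side only needs a loosely typed input to accept the nested map above; a minimal sketch of that declaration (the variable names match the code above, but the any type and the empty defaults are assumptions, not something confirmed by the repository):
variable "code_pipeline" {
  description = "Map describing the pipeline: artifact store, stages, actions, and configuration (including jsonencoded EnvironmentVariables)"
  type        = any # assumption: a loose type so actions can omit optional keys like namespace or input_artifacts
  default     = {}
}

variable "AppName" {
  type    = string
  default = ""
}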

How to call default list variables in terraform module

I want to call list variables from the code below, but it is throwing an error even after setting the default value in variables.tf.
Terraform Service Folder (/root/terraform-ukg-smtp).
main.tf
module "google_uig" {
source = "/root/terraform-google-vm/modules/compute_engine_uig"
depends_on = [
module.google_vm
]
project = var.project
count = var.num_instances
zone = var.zone == null ? data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)] : var.zone
name = "apoc-uig-${random_integer.integer[count.index].result}"
instances = element((module.google_vm[*].google_instance_id), count.index)
named_ports = var.named_ports
}
variables.tf
variable "named_ports" {
description = "Named name and named port"
type = list(object({
port_name = string
port_number = number
}))
default = [{
port_name = "smtp"
port_number = "33"
}]
}
Terraform Core Folder (/root/terraform-google-vm/modules/compute_engine_uig).
main.tf
# Instance Group
resource "google_compute_instance_group" "google_uig" {
  count     = var.num_instances
  project   = var.project
  zone      = var.zone
  name      = var.name
  instances = var.instances
  dynamic "named_port" {
    for_each = var.named_ports != null ? toset([1]) : toset([])
    content {
      name = named_port.value.port_name
      port = named_port.value.port_number
    }
  }
}
variables.tf
variable "named_ports" {
description = "Named name and named port"
type = list(object({
port_name = string
port_number = number
}))
default = null
}
ERROR
╷
│ Error: Unsupported argument
│
│ on main.tf line 66, in module "google_uig":
│ 66: port_number = each.value["port_number"]
│
│ An argument named "port_number" is not expected here.
The error actually lies in the file /root/terraform-google-vm/modules/compute_engine_uig/main.tf, which you have not added to your question. But from the error message, I think I know what is wrong.
The resource google_compute_instance_group.google_uig in compute_engine_uig/main.tf should look like this:
resource "google_compute_instance_group" "google_uig" {
other_keys = other_values
dynamic "named_port" {
for_each = var.named_ports
content {
name = named_port.value.name
port = named_port.value.port
}
}
}
From the error message, it seems that you have written
name = named_ports.value.name
i.e., with a plural s instead of
name = named_port.value.name
in the content block.
If this doesn't solve it, please add the file that throws the error.
Edit from 30.05.2022:
Two more problems are now visible:
You set for_each = var.named_ports != null ? toset([1]) : toset([]), which is not correct. You have to iterate over var.named_ports (as I have written above), not over a set containing the number 1. Just copy it word by word from the code above.
Additionally, you have defined the type of port_number in your variable named_ports as number, but you have given it the string "33". This may be fine for Terraform since it does a lot of conversion in the background, but it's better to change it too (see the combined sketch below).
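Putting both fixes together, a minimal sketch of the core module's resource (using the port_name/port_number attribute names from the question's variable; the null check is one reasonable way to keep the variable optional):
resource "google_compute_instance_group" "google_uig" {
  count     = var.num_instances
  project   = var.project
  zone      = var.zone
  name      = var.name
  instances = var.instances

  dynamic "named_port" {
    # iterate over the list itself, guarding against a null default
    for_each = var.named_ports == null ? [] : var.named_ports
    content {
      name = named_port.value.port_name
      port = named_port.value.port_number
    }
  }
}
In the service folder's variables.tf, the default would then be written with port_number = 33 as a number, without quotes.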

Why is terraform saying my required variable is not defined?

TF project:
main.tf
inputs.tf
The contents are:
main.tf
locals {
common_tags = {
SECRET_MGR_HOST = "${var.SECRET_MGR_HOST}",
SECRET_MGR_SAFE = "${var.SECRET_MGR_SAFE}",
SECRET_MGR_SECRET_KEY_NAME = "${var.SECRET_MGR_SECRET_KEY_NAME}",
SECRET_MGR_USER_NAME = "${var.SECRET_MGR_USER_NAME}",
LOGON_URL = "${var.LOGON_URL}",
PLATFORM_SECRET_NAME = "${var.PLATFORM_SECRET_NAME}"
}
vpc_config_vars = {
subnet_ids = "${var.SUBNET_IDS}",
security_group_ids = "${var.SECURITY_GROUP_IDS}"
}
}
module "lambda" {
source = "git::https://corpsource.io/corp-cloud-platform-team/corpcloudv2/terraform/lambda-modules.git?ref=dev"
lambda_name = var.name
lambda_role = "arn:aws:iam::${var.ACCOUNT}:role/${var.lambda_role}"
lambda_handler = var.handler
lambda_runtime = var.runtime
default_lambda_timeout = var.timeout
ACCOUNT = var.ACCOUNT
vpc_config_vars = merge(
local.vpc_config_vars
)
env = merge(
local.common_tags,
{ DEFAULT_ROLE = "corp-platform" }
)
}
module "lambda_iam" {
source = "git::https://corpsource.io/corp-cloud-platform-team/corpcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev"
lambda_policy = var.lambda_policy
ACCOUNT = var.ACCOUNT
lambda_role = var.lambda_role
}
and inputs.tf
variable "handler" {
type = string
default = "handler.lambda_handler"
}
variable "runtime" {
type = string
default = "python3.8"
}
variable "name" {
type = string
default = "create-SECRET_MGR-entry"
}
variable "timeout"{
type = string
default = "120"
}
variable "lambda_role" {
type = string
default = "create-SECRET_MGR-entry-role"
}
variable "ACCOUNT" {
type = string
default = ""
}
variable "SECRET_MGR_HOST" {
type = string
default = ""
}
variable "SECRET_MGR_SAFE" {
type = string
default = ""
}
variable "SUBNET_IDS" {
type = string
default = ""
}
variable "subnet_ids" {
type = string
default = ""
}
variable "security_group_ids" {
type = string
default = ""
}
variable "SECURITY_GROUP_IDS" {
type = string
default = ""
}
variable "SECRET_MGR_SECRET_KEY_NAME" {
type = string
default = ""
}
variable "SECRET_MGR_USER_NAME" {
type = string
default = ""
}
variable "LOGON_URL" {
type = string
default = ""
}
variable "PLATFORM_SECRET_NAME" {
type = string
default = ""
}
variable "lambda_policy" {
default = "{\"Version\": \"2012-10-17\",\"Statement\": [{\"Sid\":\"VisualEditor0\",\"Effect\":\"Allow\",\"Action\":[\"logs:CreateLogStream\",\"logs:CreateLogGroup\"],\"Resource\":\"*\"},{\"Sid\":\"UseKMSKey\",\"Effect\":\"Allow\",\"Action\":\"kms:Decrypt\",\"Resource\":\"*\"},{\"Sid\":\"GetSecret\",\"Effect\":\"Allow\",\"Action\":\"secretsmanager:GetSecretValue\",\"Resource\":\"*\"},{\"Sid\":\"ConnectToVPC\",\"Effect\":\"Allow\",\"Action\":[\"ec2:CreateNetworkInterface\",\"ec2:DescribeNetworkInterfaces\",\"ec2:DeleteNetworkInterface\"],\"Resource\":\"*\"},{\"Sid\":\"VisualEditor1\",\"Effect\":\"Allow\",\"Action\":\"logs:PutLogEvents\",\"Resource\":\"*\"},{\"Effect\": \"Allow\",\"Action\": [\"logs:*\"],\"Resource\": \"arn:aws:logs:*:*:*\"},{\"Effect\": \"Allow\",\"Action\": [\"s3:GetObject\",\"s3:PutObject\"],\"Resource\": \"arn:aws:s3:::*\"}]}"
}
As you can see, main.tf references a module in another project via the source argument. The structure of the module project is also:
main.tf
inputs.tf
main.tf
data "archive_file" "lambda_handler" {
type = "zip"
output_path = "lambda_package.zip"
source_dir = "lambda_code/"
}
resource "aws_lambda_function" "lambda_function" {
filename = "lambda_package.zip"
function_name = var.lambda_name
role = var.lambda_role
handler = var.lambda_handler
runtime = var.lambda_runtime
memory_size = 256
timeout = var.default_lambda_timeout
source_code_hash = filebase64sha256("lambda_code/lambda_package.zip")
dynamic "vpc_config" {
for_each = length(keys(var.vpc_config_vars)) == 0 ? [] : [true]
content {
variables = var.vpc_config_vars
}
}
dynamic "environment" {
for_each = length(keys(var.env)) == 0 ? [] : [true]
content {
variables = var.env
}
}
}
inputs.tf
variable "lambda_name" {
type = string
}
variable "lambda_runtime" {
type = string
}
variable "lambda_role" {
type = string
}
variable "default_lambda_timeout" {
type = string
}
variable "lambda_handler" {
type = string
}
variable "vpc_config_vars" {
type = map(string)
default = {}
}
variable "env" {
type = map(string)
default = {}
}
variable "tags" {
default = {
blc = "1539"
costcenter = "54111"
itemid = "obfuscated"
owner = "cloudengineer#company.com"
}
}
variable "ACCOUNT" {
type = string
}
Error when my pipeline runs the project:
Error: Missing required argument
(and 7 more similar warnings elsewhere)
on .terraform/modules/lambda/main.tf line 18, in resource "aws_lambda_function" "lambda_function":
18: content {
The argument "subnet_ids" is required, but no definition was found.
Error: Missing required argument
on .terraform/modules/lambda/main.tf line 18, in resource "aws_lambda_function" "lambda_function":
18: content {
The argument "security_group_ids" is required, but no definition was found.
Error: Unsupported argument
on .terraform/modules/lambda/main.tf line 19, in resource "aws_lambda_function" "lambda_function":
19: variables = var.vpc_config_vars
An argument named "variables" is not expected here.
Oh, and I'm passing in the values for subnet_ids and security_group_ids as environment variables using my GitLab CI file, and log statements confirm that those values are defined.
What is wrong? Thank you.
You need to pass the required arguments for the vpc_config child block, which are subnet_ids and security_group_ids. You cannot pass the entire map variable as-is inside the nested content block; each argument must be assigned individually with an equals sign "=".
Try the below code snippet
###################
# Root Module
###################
locals {
vpc_config_vars = {
vpc_config = {
subnet_ids = ["subnet-072297c000a32e200"],
security_group_ids = ["sg-05d06431bd25870b4"]
}
}
}
module "lambda" {
source = "./modules"
...
......
vpc_config_vars = local.vpc_config_vars
}
###################
# Child Module
###################
variable "vpc_config_vars" {
default = {}
}
resource "aws_lambda_function" "lambda_function" {
filename = "lambda_package.zip"
function_name = var.lambda_name
role = var.lambda_role
handler = var.lambda_handler
runtime = var.lambda_runtime
memory_size = 256
timeout = var.default_lambda_timeout
source_code_hash = filebase64sha256("lambda_code/lambda_package.zip")
dynamic "vpc_config" {
for_each = var.vpc_config_vars != {} ? var.vpc_config_vars : {}
content {
subnet_ids = vpc_config.value["subnet_ids"]
security_group_ids = vpc_config.value["security_group_ids"]
}
}
}

Why does terraform fail with "An argument named "flow_log_destination_type" is not expected here"?

"While I am using terraform to create vpc flow log module to s3 bucket then its throwing errors like:
An argument named "flow_log_destination_type" is not expected here.
An argument named "flow_log_destination_arn" is not expected here.
In the Terraform docs, I can see the arguments to be filled in, like log_destination_type & log_destination_arn, and I found some docs on GitHub with exactly the same code, but when I try it, it's not working for me.
The following error is produced:
Error: Unsupported argument
on main.tf line 52, in module "vpc_with_flow_logs_s3_bucket":
52: flow_log_destination_type = "s3"
An argument named "flow_log_destination_type" is not expected here.
Error: Unsupported argument
on main.tf line 53, in module "vpc_with_flow_logs_s3_bucket":
53: flow_log_destination_arn = "${aws_s3_bucket.terra-test2-lifecycle.arn}"
An argument named "flow_log_destination_arn" is not expected here.
Error: Unsupported argument
on main.tf line 55, in module "vpc_with_flow_logs_s3_bucket":
55: vpc_flow_log_tags = {
An argument named "vpc_flow_log_tags" is not expected here.
Where am I going wrong?
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.33.0"
# Interpolated from the workspace
name = "${terraform.workspace}"
cidr = var.vpc_cidr
azs = var.vpc_azs
private_subnets = var.vpc_private_subnets
public_subnets = var.vpc_public_subnets
enable_nat_gateway = var.vpc_enable_nat_gw
single_nat_gateway = var.vpc_single_nat_gw
public_subnet_tags = {
Name = "${terraform.workspace}-public"
}
private_subnet_tags = {
Name = "${terraform.workspace}-private"
}
tags = {
Name = "${terraform.workspace}"
}
vpc_tags = {
owner = "PEDevOps"
environment = "${terraform.workspace}"
version = "0.0.1"
managedby = "Terraform"
}
}
module "vpc_with_flow_logs_s3_bucket" {
source = "../../"
log_destination_type = "s3"
log_destination_arn = "${aws_s3_bucket.terra-test2-lifecycle.arn}"
vpc_flow_log_tags = {
Name = "vpc-flow-logs-s3-bucket"
}
}
resource "aws_s3_bucket" "terra-test-lifecycle" {
bucket = "terra-test-lifecycle"
acl = "private"
lifecycle_rule {
id = "log"
enabled = true
prefix = "log/"
tags = {
"rule" = "log"
"autoclean" = "true"
}
transition {
days = 30
storage_class = "STANDARD_IA" # or "ONEZONE_IA"
}
expiration {
days = 60
}
}
lifecycle_rule {
id = "tmp"
prefix = "tmp/"
enabled = true
expiration {
date = "2020-06-06"
}
}
}
Why does terraform fail with "An argument named "flow_log_destination_type" is not expected here"?
The module at "../../" does not declare any of the log_destination_type, log_destination_arn, or vpc_flow_log_tags variables and Terraform considers it an error to assign to undeclared variables in a module block like this:
module "vpc_with_flow_logs_s3_bucket" {
source = "../../"
log_destination_type = "s3"
log_destination_arn = "${flow_log_destination_arn}"
vpc_flow_log_tags = {
Name = "vpc-flow-logs-s3-bucket"
}
}
It's most likely that "../../" is the wrong source path for the vpc_with_flow_logs_s3_bucket module and you should fix that. If you are in the directory where this module block is declared and you run cd ../../, do you end up in the directory with the vpc_with_flow_logs_s3_bucket Terraform code? If not, then source is set incorrectly and you need to fix it.
If "../../" is the correct path, then you should add the missing variable declarations.
variable "log_destination_type" {
type = string
}
variable "log_destination_arb" {
type = string
}
variable "vpc_flow_log_tags" {
type = map(string)
}
This error occurs if you are passing a variable that the module is not expecting.
For example:
module "vpc_with_flow_logs_s3_bucket" {
source = "../../"
log_destination_type = "s3"
log_destination_arn = "${flow_log_destination_arn}"
vpc_flow_log_tags = {
Name = "vpc-flow-logs-s3-bucket"
}
}
If you specify this, it will throw an error when the variable flow_log_destination_arn is defined in main.tf but not in variables.tf.
source: ../../vpc_with_flow_logs_s3_bucket/main.tf
resource "aws_flow_log" "example" {
iam_role_arn = "${aws_iam_role.example.arn}"
log_destination = "${aws_cloudwatch_log_group.example.arn}"
traffic_type = "ALL"
vpc_id = "${aws_vpc.example.id}"
}
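For the module call above to be accepted, the module would also need to declare those inputs in its variables.tf and wire them into the resource. A minimal sketch under that assumption (argument names taken from the question; the "s3" default and the empty tags default are illustrative, not from the original module):
variable "log_destination_type" {
  type    = string
  default = "s3"
}

variable "log_destination_arn" {
  type = string
}

variable "vpc_flow_log_tags" {
  type    = map(string)
  default = {}
}

resource "aws_flow_log" "example" {
  # for an S3 destination, iam_role_arn is not required
  log_destination      = var.log_destination_arn
  log_destination_type = var.log_destination_type
  traffic_type         = "ALL"
  vpc_id               = "${aws_vpc.example.id}"
  tags                 = var.vpc_flow_log_tags
}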
I'll share another possible reason for this error.
Writing a configuration block like this:
scaling_config = {
  desired_size = 2
  max_size     = 2
  min_size     = 2
}
Instead of like this (notice the "=" equals sign is gone):
scaling_config {
  desired_size = 2
  max_size     = 2
  min_size     = 2
}
Will give an error of An argument named "scaling_config" is not expected here.
(*) Notice that after the change, if the block type is really not supported, the error title will change from:
Error: Unsupported argument
To:
Error: Unsupported block type
With error message of:
Blocks of type "scaling_config" are not expected here.