I'm trying to assign multiple roles to different members using Terraform, but I'm running into an error. This is for assigning IAM permissions in GCP.
I used a combination of nested maps, but the nesting became complex since I'm using two different variables and combining them to create the resources.
main.tf looks like this:
locals {
  data_access = flatten([
    for bkt_key, bkt_value in var.buckets_data : [
      for user, roles in var.data_access : [
        for role in roles : {
          member = user
          bkt    = bkt_key
          role   = roles
        }
      ]
    ]
  ])
}

resource "google_storage_bucket_iam_member" "buckets_data_access" {
  for_each = { for access in local.data_access : "${access.bkt}_${access.member}" => access... }
  bucket   = google_storage_bucket.tf_buckets_data[each.value.bkt].name
  role     = each.value.role
  member   = each.value.member
}
terraform.tfvars looks like this. Please note I'm using two different variables in the nested map of main.tf:
buckets_data = {
  "landing" = {
    region          = "nane1",
    storage_class   = "COLDLINE",
    versioning      = "false",
    data_tier       = "raw",
    lifecycle_rules = ["retention-2years"],
    external_access = []
  },
  "dftemp" = {
    region        = "nane1",
    storage_class = "STANDARD"
  },
  "curated" = {
    region        = "nane1",
    storage_class = "STANDARD"
  }
}

data_access = {
  "group:GCP-npe#bell.ca" = ["roles/storage.objectViewer", "roles/Browser"]
}
This is the error I received in my terminal:
$ terraform plan
╷
│ Error: Unsupported attribute
│
│ on main.tf line 29, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 29: bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 29, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 29: bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 30, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 30: role = each.value.role
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 30, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 30: role = each.value.role
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 31, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 31: member = each.value.member
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 31, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 31: member = each.value.member
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
If my understanding of what you are trying to do is correct, the following flattening is better. (Two things go wrong in your version: the ... after access in the for_each switches Terraform into grouping mode, so each.value becomes a tuple of objects instead of a single object, which is exactly what the errors report; and role = roles assigns the whole list of roles instead of a single role.)
locals {
  data_access = merge(flatten([
    for bkt_key, bkt_value in var.buckets_data : [
      for user, roles in var.data_access : {
        for role in roles :
        "${bkt_key}-${user}-${role}" => {
          member = user
          bkt    = bkt_key
          role   = role
        }
      }
    ]
  ])...) # please do NOT remove the dots
}
The ... expands the flattened list of maps into separate arguments, since merge() expects merge(map1, map2, ...) rather than a single list of maps. Then:
resource "google_storage_bucket_iam_member" "buckets_data_access" {
for_each = local.data_access
bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
role = each.value.role
member = each.value.member
}
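With the sample tfvars above, local.data_access comes out as one map entry per bucket/member/role combination, so each.value is always a single object rather than a tuple. Roughly like this (a hand-written sketch of two of the six entries, not actual terraform console output):

{
  "landing-group:GCP-npe#bell.ca-roles/storage.objectViewer" = {
    bkt    = "landing"
    member = "group:GCP-npe#bell.ca"
    role   = "roles/storage.objectViewer"
  }
  "landing-group:GCP-npe#bell.ca-roles/Browser" = {
    bkt    = "landing"
    member = "group:GCP-npe#bell.ca"
    role   = "roles/Browser"
  }
  # ...plus four more entries for the dftemp and curated buckets
}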
I'm trying to upgrade the AWS provider to version 4, but I'm getting the following error in the RDS module:
Error: Conflicting configuration arguments
│
│ with module.my-instance-mysql-eu[0].module.rds.module.db_instance.aws_db_instance.this[0],
│ on .terraform/modules/my-instance-mysql-eu.rds/modules/db_instance/main.tf line 47, in resource "aws_db_instance" "this":
│ 47: db_name = var.db_name
│
│ "db_name": conflicts with replicate_source_db
The error is stating that the db_name attribute conflicts with the replicate_source_db attribute; you cannot specify both attributes, it must be one or the other. This is also mentioned in the Terraform documentation.
If you are replicating an existing RDS database, the database name will be the same as the name of the source. If this is a new database, do not set the replicate_source_db attribute at all.
I encountered a similar issue with the engine & engine_version variables:
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 36, in resource "aws_db_instance" "db":
│ 36: engine = var.engine
│
│ "engine": conflicts with replicate_source_db
╵
╷
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 37, in resource "aws_db_instance" "db":
│ 37: engine_version = var.engine_version
│
│ "engine_version": conflicts with replicate_source_db
╵
I found a good example of a solution here: https://github.com/terraform-aws-modules/terraform-aws-rds/blob/v5.2.2/modules/db_instance/main.tf
And I managed to solve this with the below conditions:
# Replicas will use source metadata
username = var.replicate_source_db != null ? null : var.username
password = var.replicate_source_db != null ? null : var.password
engine = var.replicate_source_db != null ? null : var.engine
engine_version = var.replicate_source_db != null ? null : var.engine_version
If var.replicate_source_db is not null, then username/password/engine/engine_version will be set to null (which is what we need, as these attributes cannot be specified for a replica). And if it is not a replica, the variables will be set as provided :)
You can add the same for the db_name parameter:
db_name = var.replicate_source_db != null ? null : var.db_name
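Putting this together, a replica-aware resource might look something like the sketch below; identifier and instance_class are assumed variable names, and only the conditional lines come from the actual fix:

resource "aws_db_instance" "db" {
  identifier          = var.identifier     # assumed variable name
  instance_class      = var.instance_class # assumed variable name
  replicate_source_db = var.replicate_source_db

  # Replicas inherit all of the following from the source, so null them out
  db_name        = var.replicate_source_db != null ? null : var.db_name
  username       = var.replicate_source_db != null ? null : var.username
  password       = var.replicate_source_db != null ? null : var.password
  engine         = var.replicate_source_db != null ? null : var.engine
  engine_version = var.replicate_source_db != null ? null : var.engine_version
}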
I am trying to build a module in Terraform that can be passed a variable map from a workspace file and look up the variables to build an AWS CodePipeline. For those not aware, CodePipeline has a series of stages, and inside those stages are actions. My module needs to handle building a pipeline with any supported number of stages and actions within those stages.
Below is the part of the module I've written that is concerned with the creation of the pipeline itself:
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = lookup(action.value, "name", null)
owner = lookup(action.value, "owner", null)
version = lookup(action.value, "version", null)
category = lookup(action.value, "category", null)
provider = lookup(action.value, "provider", null)
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = {
BranchName = lookup(action.value, "BranchName", null)
PollForSourceChanges = lookup(action.value, "PollForSourceChanges", null)
RepositoryName = lookup(action.value, "RepositoryName", null)
ProjectName = lookup(action.value, "ProjectName", null)
}
role_arn = lookup(action.value, "role_arn", null)
run_order = lookup(action.value, "run_order", null)
region = lookup(action.value, "region", null)
}
}
}
}
tags = merge(
{ "Name" = "${var.prefix}-${var.name}" },
var.tags,
)
}
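For reference, the stages variable declaration isn't shown; for lookup() to work over action maps that carry different keys per stage, it presumably needs a loose type, something like this (an assumption on my part, not taken from the post):

variable "stages" {
  description = "List of pipeline stages, each holding a list of action maps"
  type        = any
  default     = []
}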
And here is an excerpt from the workspace yaml, where I pass the variables for two stages, each with an associated action:
test_pipeline_prefix: "test"
test_pipeline_name: "test-pipeline"
test_pipeline_description: "this is a POC pipeline"
test_pipeline_s3: "<<REDACTED ARN>>"
test_pipeline_codestar_arn: "<<REDACTED ARN>>"
test_pipeline_tags: [""]
test_pipeline_stages: [{
    name: "Source",
    action: [{
      name = "Source",
      category = "Source",
      owner = "AWS"
      version = "1",
      provider = "CodeStarSourceConnection",
      output_artifacts = "source_output",
      BranchName = "main",
      PollForSourceChanges = "false",
      RepositoryName = "<<REDACTED REPO>>",
      region = "eu-west-2",
      run_order = 1
    }]
  },
  {
    name: "Plan",
    action: [{
      name = "Plan",
      category = "Build",
      owner = "AWS"
      version = "1",
      provider = "CodeBuild",
      input_artifacts = "source_output",
      output_artifacts = "build_output",
      ProjectName = "pipeline-plan",
      run_order = 2
    }]
  }]
Finally I call it with:
module "codepipeline" {
source = "./modules/codepipeline"
s3 = local.vars.test_pipeline_s3
description = local.vars.test_pipeline_description
prefix = local.vars.test_pipeline_prefix
name = local.vars.test_pipeline_name
stages = local.vars.test_pipeline_stages
codestar_arn = local.vars.test_pipeline_codestar_arn
tags = {
Environment = local.vars.env
Terraform = "true"
}
}
What I am hoping it will do is loop through the stages and actions in the supplied test_pipeline_stages variable, creating a stage for the Source with an action configured to connect to the pre-existing CodeStar connection, and another stage called "Plan" that runs the CodeBuild job as its action.
The result I'm actually getting is:
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.owner" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.provider" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.category" is required, but no definition was
│ found.
│ Error: Missing required argument
│
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.version" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.version" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.category" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.owner" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.provider" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 2, in resource "aws_codepipeline" "this":
│ 2: name = "${var.prefix}-${var.name}"
│
│ The argument "stage.0.action.0.name" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 2, in resource "aws_codepipeline" "this":
│ 2: name = "${var.prefix}-${var.name}"
│
│ The argument "stage.1.action.0.name" is required, but no definition was
│ found.
This suggests to me that it's not indexing the variables properly, but I can't really figure out the best way to proceed. Anyone got any ideas?
Thanks to the responders so far - the actual fix is detailed below, but all the responses up to this point helped me get there. I also agree that supplying null values as a fallback is not sensible; I'll look to review that.
The actual issue was simply that my workspace yaml as posted above is... not valid yaml. Once I replaced it with the below, the module began to read out the values correctly.
test_pipeline_stages: [{
    name: "Source",
    action: [{
      ActionName: "Source",
      ActionCategory: "Source",
      ActionOwner: "AWS",
      ActionVersion: "1",
      ActionProvider: "CodeStarSourceConnection",
      output_artifacts: ["source_output"],
      BranchName: "main",
      PollForSourceChanges: "false",
      RepositoryName: "REDACTED",
      ConnectionArn: "REDACTED",
      region: "eu-west-2",
      run_order: 1
    }]
  },
  {
    name: "Plan",
    action: [{
      ActionName: "Plan",
      ActionCategory: "Build",
      ActionOwner: "AWS",
      ActionVersion: "1",
      ActionProvider: "CodeBuild",
      input_artifacts: ["source_output"],
      output_artifacts: ["build_output"],
      ProjectName: "pipeline-plan",
      test_pipeline_ans_tags: "",
      run_order: 2
    }]
  }]
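Note that because the action keys were renamed along the way (name became ActionName, and so on), the lookup() calls in the module have to match the new keys; presumably something along these lines (a sketch based on the renamed keys above, not my final module code):

dynamic "action" {
  for_each = stage.value.action
  content {
    name     = lookup(action.value, "ActionName", null)
    category = lookup(action.value, "ActionCategory", null)
    owner    = lookup(action.value, "ActionOwner", null)
    provider = lookup(action.value, "ActionProvider", null)
    version  = lookup(action.value, "ActionVersion", null)
    # ...remaining arguments unchanged from the original module...
  }
}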
A comma is missing after owner = "AWS"; put the comma in the workspace file.
I used this module to create a security group inside a VPC. One of the outputs is the security_group_id, but I'm getting this error:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = [module.app_security_group.security_group_id]
│ ├────────────────
│ │ module.app_security_group is a object, known only after apply
│
│ This object does not have an attribute named "security_group_id".
I need the security group for an ECS service:
resource "aws_ecs_service" "hello_world" {
name = "hello-world-service"
cluster = aws_ecs_cluster.container_service_cluster.id
task_definition = aws_ecs_task_definition.hello_world.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.app_security_group.security_group_id]
subnets = module.vpc.private_subnets
}
load_balancer {
target_group_arn = aws_lb_target_group.loadbalancer_target_group.id
container_name = "hello-world-app"
container_port = 3000
}
depends_on = [aws_lb_listener.loadbalancer_listener, module.app_security_group]
}
I understand that I can only know the security group ID after it is created. That's why I added the depends_on part to the ECS stanza, but it kept returning the same error.
Update
I specified count = 1 on the app_security_group module, and this is the error I'm getting now:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = module.app_security_group.security_group_id
│ ├────────────────
│ │ module.app_security_group is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Update II
This is the module declaration:
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "${var.project}-web-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
# ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
ingress_cidr_blocks = ["0.0.0.0/0"]
}
I took a look at that module. The problem is that version 3.17.0 of the module simply does not have the security_group_id output. You are using a really old version.
The latest version is 4.7.0; you would want to upgrade to that. In fact, any version from 4.0.0 upwards has the security_group_id output, so you need at least 4.0.0.
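In other words, the fix should just be bumping the pinned version; everything else in your declaration can stay as it is:

module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "4.7.0" # anything >= 4.0.0 exposes security_group_id

  name        = "${var.project}-web-sg"
  description = "Security group for web-servers with HTTP ports open within VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}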
As you are using count, please try the below:
network_configuration {
  security_groups = [module.app_security_group[0].security_group_id]
  subnets         = module.vpc.private_subnets
}
I have the following project structure to build Lambda functions on AWS using Terraform:
.
├── aws.tf
├── dev.tfvars
├── global_variables.tf -> ../shared/global_variables.tf
├── main.tf
├── module
│ ├── data_source.tf
│ ├── main.tf
│ ├── output.tf
│ ├── role.tf
│ ├── security_groups.tf
│ ├── sources
│ │ ├── function1.zip
│ │ └── function2.zip
│ └── variables.tf
└── vars.tf
In the main.tf file I have this code that will create two different Lambda functions:
module "function1" {
source = "./module"
function_name = "function1"
source_code = "function1.zip"
runtime = "${var.runtime}"
memory_size = "${var.memory_size}"
timeout = "${var.timeout}"
aws_region = "${var.aws_region}"
vpc_id = "${var.vpc_id}"
}
module "function2" {
source = "./module"
function_name = "function2"
source_code = "function2.zip"
runtime = "${var.runtime}"
memory_size = "${var.memory_size}"
timeout = "${var.timeout}"
aws_region = "${var.aws_region}"
vpc_id = "${var.vpc_id}"
}
The problem is that on deployment Terraform creates all the resources twice. For the Lambda functions that's OK, that's the purpose, but for the security groups and roles it's not what I want.
For example, this security group is created twice:
resource "aws_security_group" "lambda-sg" {
vpc_id = "${data.aws_vpc.main_vpc.id}"
name = "sacem-${var.project}-sg-lambda-${var.function_name}-${var.environment}"
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = "${var.authorized_ip}"
}
# To solve dependcies error when updating the security groups
lifecycle {
create_before_destroy = true
ignore_changes = ["tags.DateTimeTag"]
}
tags = "${merge(var.resource_tagging, map("Name", "${var.project}-sg-lambda-${var.function_name}-${var.environment}"))}"
}
So it's clear that the problem is the structure of the project. Could you help me solve that?
Thanks.
If you create the security group within the module, it'll be created once per module inclusion.
I believe some of the variable values in the sg name change between the two module inclusions, right? Therefore the sg name will be unique for both modules and it can be created twice without errors.
If you chose a static name instead, Terraform would throw an error when creating the sg from module 2, as the resource would already exist (created by module 1).
You could thus define the sg resource outside of the module itself to create it only once.
You can then pass the id of the created sg as variable to the module inclusion and use it there for other resources.
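For example (a sketch only; the resource and variable names here are assumed, not taken from the original project):

# Root main.tf: create the shared security group once
resource "aws_security_group" "lambda_sg" {
  vpc_id = data.aws_vpc.main_vpc.id
  name   = "${var.project}-sg-lambda-${var.environment}"
  # ...same egress/ingress rules as before...
}

module "function1" {
  source            = "./module"
  function_name     = "function1"
  source_code       = "function1.zip"
  security_group_id = aws_security_group.lambda_sg.id # new module variable (assumed name)
  # ...other variables as before...
}

Inside the module, you would then remove the aws_security_group resource, declare a variable "security_group_id", and reference var.security_group_id wherever the sg's id was used.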