Terraform AWS provider upgrade issue with RDS - amazon-web-services

Trying to upgrade AWS provider to version 4, but getting the following error in RDS module:
Error: Conflicting configuration arguments
│
│ with module.my-instance-mysql-eu[0].module.rds.module.db_instance.aws_db_instance.this[0],
│ on .terraform/modules/my-instance-mysql-eu.rds/modules/db_instance/main.tf line 47, in resource "aws_db_instance" "this":
│ 47: db_name = var.db_name
│
│ "db_name": conflicts with replicate_source_db

The error states that the db_name argument conflicts with the replicate_source_db argument; you cannot specify both, it must be one or the other. This is also mentioned in the Terraform documentation.
If you are replicating an existing RDS database, the database name will be the same as the name of the source. If this is a new database, do not set the replicate_source_db attribute at all.
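For a module that serves both cases, a minimal sketch (assuming a replicate_source_db variable that defaults to null) is to set only one of the two arguments:

# Standalone instance: db_name is set, replicate_source_db stays null
# Replica: replicate_source_db is set, db_name is left null
replicate_source_db = var.replicate_source_db
db_name             = var.replicate_source_db == null ? var.db_name : null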

I encountered a similar issue with the engine & engine_version variables:
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 36, in resource "aws_db_instance" "db":
│ 36: engine = var.engine
│
│ "engine": conflicts with replicate_source_db
╵
╷
│ Error: Conflicting configuration arguments
│
│ with module.production.module.replica_app_db_production.aws_db_instance.db,
│ on modules/rds/postgres/main.tf line 37, in resource "aws_db_instance" "db":
│ 37: engine_version = var.engine_version
│
│ "engine_version": conflicts with replicate_source_db
╵
I found a good example of a solution here: https://github.com/terraform-aws-modules/terraform-aws-rds/blob/v5.2.2/modules/db_instance/main.tf
And I managed to solve this with the below conditions:
# Replicas will use source metadata
username       = var.replicate_source_db != null ? null : var.username
password       = var.replicate_source_db != null ? null : var.password
engine         = var.replicate_source_db != null ? null : var.engine
engine_version = var.replicate_source_db != null ? null : var.engine_version
If var.replicate_source_db is not null, then username/password/engine/engine_version are set to null, which is what we need since these arguments cannot be specified for a replica. And if it is not a replica, the variables are passed through as usual :)
You can add the same for the db_name parameter:
db_name = var.replicate_source_db != null ? null : var.db_name
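For these conditions to work, the replicate_source_db variable has to default to null rather than an empty string. A minimal sketch of the declaration (the variable name matches the conditions above; the rest is assumed):

variable "replicate_source_db" {
  description = "Identifier or ARN of the source DB instance to replicate; leave null for a standalone instance"
  type        = string
  default     = null
}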

Related

Error in assigning gcs IAM permissions using nested map in terraform

I'm trying to assign multiple roles to different members using Terraform, but I'm running into an error. This is for assigning IAM permissions in GCP.
I use a combination of nested maps, but the nesting became complex since I'm using two different variables and combining them when creating resources.
main.tf looks like this
locals {
  data_access = flatten([
    for bkt_key, bkt_value in var.buckets_data : [
      for user, roles in var.data_access : [
        for role in roles : {
          member = user
          bkt    = bkt_key
          role   = roles
        }
      ]
    ]
  ])
}
resource "google_storage_bucket_iam_member" "buckets_data_access" {
for_each = { for access in local.data_access : "${access.bkt}_${access.member}" => access... }
bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
role = each.value.role
member = each.value.member
}
terraform.tfvars looks like this. Please note I'm using two different variables in the nested map of main.tf.
buckets_data = {
  "landing" = {
    region          = "nane1",
    storage_class   = "COLDLINE",
    versioning      = "false",
    data_tier       = "raw",
    lifecycle_rules = ["retention-2years"],
    external_access = []
  },
  "dftemp" = {
    region        = "nane1",
    storage_class = "STANDARD"
  },
  "curated" = {
    region        = "nane1",
    storage_class = "STANDARD"
  }
}
data_access = {
  "group:GCP-npe#bell.ca" = ["roles/storage.objectViewer", "roles/Browser"]
}
The error I received in my terminal:
$ terraform plan
╷
│ Error: Unsupported attribute
│
│ on main.tf line 29, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 29: bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 29, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 29: bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 30, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 30: role = each.value.role
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 30, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 30: role = each.value.role
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 31, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 31: member = each.value.member
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 31, in resource "google_storage_bucket_iam_member" "buckets_data_access":
│ 31: member = each.value.member
│ ├────────────────
│ │ each.value is tuple with 2 elements
│
│ This value does not have any attributes.
If my understanding of what you are trying to do is correct, the following flattening is better:
locals {
  data_access = merge(flatten([
    for bkt_key, bkt_value in var.buckets_data : [
      for user, roles in var.data_access : {
        for role in roles :
        "${bkt_key}-${user}-${role}" => {
          member = user
          bkt    = bkt_key
          role   = role
        }
      }
    ]
  ])...) # please do NOT remove the dots
}
then
resource "google_storage_bucket_iam_member" "buckets_data_access" {
for_each = local.data_access
bucket = google_storage_bucket.tf_buckets_data[each.value.bkt].name
role = each.value.role
member = each.value.member
}
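With the tfvars above, local.data_access should flatten to one keyed entry per bucket/role pair (six entries for three buckets and two roles), roughly:

# Illustrative result; only the "landing" entries are shown, "dftemp" and "curated" get the same pair
{
  "landing-group:GCP-npe#bell.ca-roles/storage.objectViewer" = {
    member = "group:GCP-npe#bell.ca"
    bkt    = "landing"
    role   = "roles/storage.objectViewer"
  }
  "landing-group:GCP-npe#bell.ca-roles/Browser" = {
    member = "group:GCP-npe#bell.ca"
    bkt    = "landing"
    role   = "roles/Browser"
  }
}

Because every element is keyed and carries a single role, for_each receives a map of objects instead of grouped tuples, which is why each.value.bkt and each.value.role now resolve.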

Passing Variables into a custom terraform module

I am trying to build a module in terraform that can be passed a variable map from a workspace file and look up the variables to build an AWS CodePipeline. For those not aware, CodePipeline has a series of stages, and inside those stages are actions. My module needs to handle building a pipeline with any supported number of stages and actions within those stages.
Below is the part of the module I've written that is concerned with the creation of the pipeline itself:
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = lookup(action.value, "name", null)
owner = lookup(action.value, "owner", null)
version = lookup(action.value, "version", null)
category = lookup(action.value, "category", null)
provider = lookup(action.value, "provider", null)
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = {
BranchName = lookup(action.value, "BranchName", null)
PollForSourceChanges = lookup(action.value, "PollForSourceChanges", null)
RepositoryName = lookup(action.value, "RepositoryName", null)
ProjectName = lookup(action.value, "ProjectName", null)
}
role_arn = lookup(action.value, "role_arn", null)
run_order = lookup(action.value, "run_order", null)
region = lookup(action.value, "region", null)
}
}
}
}
tags = merge(
{ "Name" = "${var.prefix}-${var.name}" },
var.tags,
)
}
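For context, lookup() with fallbacks like this is typically paired with a loosely typed input. A minimal sketch of such a declaration (everything except the variable name is assumed):

variable "stages" {
  description = "List of pipeline stages; each stage carries a name and a list of action maps"
  type        = any
  default     = []
}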
And here is an excerpt from the workspace yaml, where I pass the variables for two stages, each with an associated action:
test_pipeline_prefix: "test"
test_pipeline_name: "test-pipeline"
test_pipeline_description: "this is a POC pipeline"
test_pipeline_s3: "<<REDACTED ARN>>"
test_pipeline_codestar_arn: "<<REDACTED ARN>>"
test_pipeline_tags: [""]
test_pipeline_stages: [{
name: "Source",
action: [{
name = "Source",
category = "Source",
owner = "AWS"
version = "1",
provider = "CodeStarSourceConnection",
output_artifacts = "source_output",
BranchName = "main",
PollForSourceChanges = "false",
RepositoryName = "<<REDACTED REPO>>",
region = "eu-west-2",
run_order = 1
}]
},
{
name: "Plan",
action: [{
name = "Plan",
category = "Build",
owner = "AWS"
version = "1",
provider = "CodeBuild",
input_artifacts = "source_output",
output_artifacts = "build_output",
ProjectName = "pipeline-plan",
run_order = 2
}]
}]
Finally I call it with:
module "codepipeline" {
source = "./modules/codepipeline"
s3 = local.vars.test_pipeline_s3
description = local.vars.test_pipeline_description
prefix = local.vars.test_pipeline_prefix
name = local.vars.test_pipeline_name
stages = local.vars.test_pipeline_stages
codestar_arn = local.vars.test_pipeline_codestar_arn
tags = {
Environment = local.vars.env
Terraform = "true"
}
}
What I am hoping for it to do is loop through the stages and actions in the "test_pipeline_stages" variable and create a stage for the Source, with an action configured to connect to the pre-existing CodeStar connection, and another stage called "Plan" that runs the CodeBuild job as its action.
The result I'm actually getting is:
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.owner" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.provider" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.category" is required, but no definition was
│ found.
│ Error: Missing required argument
│
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.0.action.0.version" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.version" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.category" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.owner" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 1, in resource "aws_codepipeline" "this":
│ 1: resource "aws_codepipeline" "this" {
│
│ The argument "stage.1.action.0.provider" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 2, in resource "aws_codepipeline" "this":
│ 2: name = "${var.prefix}-${var.name}"
│
│ The argument "stage.0.action.0.name" is required, but no definition was
│ found.
│ Error: Missing required argument
│ with module.codepipeline.aws_codepipeline.this,
│ on modules/codepipeline/main.tf line 2, in resource "aws_codepipeline" "this":
│ 2: name = "${var.prefix}-${var.name}"
│
│ The argument "stage.1.action.0.name" is required, but no definition was
│ found.
This suggests to me that it's not indexing the variables properly, but I can't really figure out the best way to proceed. Anyone got any ideas?
Thanks to the responders so far. I'll detail the actual fix below, but all responses up to this point helped me get there. I also agree that supplying null values as a fallback is not sensible; I'll look to review that.
The actual issue was simply that my workspace yaml as posted above is... not valid YAML (the action maps use = instead of :, and a comma is missing after owner "AWS"). Once I replaced it with the below, the module began to read the values correctly.
test_pipeline_stages: [{
name: "Source",
action: [{
ActionName: "Source",
ActionCategory: "Source",
ActionOwner: "AWS",
ActionVersion: "1",
ActionProvider: "CodeStarSourceConnection",
output_artifacts: ["source_output"],
BranchName: "main",
PollForSourceChanges: "false",
RepositoryName: "REDACTED",
ConnectionArn: "REDACTED",
region: "eu-west-2",
run_order: 1
}]
},
{
name: "Plan",
action: [{
ActionName: "Plan",
ActionCategory: "Build",
ActionOwner: "AWS",
ActionVersion: "1",
ActionProvider: "CodeBuild",
input_artifacts: ["source_output"],
output_artifacts: ["build_output"],
ProjectName: "pipeline-plan",
test_pipeline_ans_tags: "",
run_order: 2
}]
}]
A comma is missing after owner = "AWS"; put the comma in the workspace file.

DataBricks Sample Terraform Code causes error in AWS VPC module

I'm completely new to DataBricks and trying to deploy an E2 workspace using the sample Terraform code provided by DataBricks. I've just started with the VPC part:
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
# version = "3.2.0"
name = local.prefix
cidr = var.cidr_block
azs = data.aws_availability_zones.available.names
enable_dns_hostnames = true
enable_nat_gateway = true
single_nat_gateway = true
create_igw = true
private_subnets = [cidrsubnet(var.cidr_block, 3, 1),
cidrsubnet(var.cidr_block, 3, 2)]
manage_default_security_group = true
default_security_group_name = "${local.prefix}-sg"
default_security_group_egress = [{
cidr_blocks = "0.0.0.0/0"
}]
default_security_group_ingress = [{
description = "Allow all internal TCP and UDP"
self = true
}]
}
When I run terraform plan I get this error:
│ Error: Error in function call
│
│ on .terraform/modules/vpc/main.tf line 1090, in resource "aws_nat_gateway" "this":
│ 1090: subnet_id = element(
│ 1091: aws_subnet.public.*.id,
│ 1092: var.single_nat_gateway ? 0 : count.index,
│ 1093: )
│ ├────────────────
│ │ aws_subnet.public is empty tuple
│ │ count.index is 0
│ │ var.single_nat_gateway is true
│
│ Call to function "element" failed: cannot use element function with an empty list.
Would really appreciate any pointers on what is going wrong here.
You set create_igw = true to request an internet gateway, but you haven't specified any public_subnets. You must have public_subnets if you have an IGW, and the single NAT gateway also needs a public subnet to be placed in, which is why aws_subnet.public is an empty tuple in the error.
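A minimal sketch of the fix, assuming you want to keep the /3 subnetting scheme from the question and pick netnums that don't overlap the private subnets:

module "vpc" {
  # ...all the arguments from the question stay the same...

  # Public subnets: the NAT gateway is placed here and routes out via the IGW
  public_subnets = [cidrsubnet(var.cidr_block, 3, 3),
                    cidrsubnet(var.cidr_block, 3, 4)]
}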

Get ID of AWS Security Group Terraform Module

I used this module to create a security group inside a VPC. One of the outputs is the security_group_id, but I'm getting this error:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = [module.app_security_group.security_group_id]
│ ├────────────────
│ │ module.app_security_group is a object, known only after apply
│
│ This object does not have an attribute named "security_group_id".
I need the security group for an ECS service:
resource "aws_ecs_service" "hello_world" {
name = "hello-world-service"
cluster = aws_ecs_cluster.container_service_cluster.id
task_definition = aws_ecs_task_definition.hello_world.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.app_security_group.security_group_id]
subnets = module.vpc.private_subnets
}
load_balancer {
target_group_arn = aws_lb_target_group.loadbalancer_target_group.id
container_name = "hello-world-app"
container_port = 3000
}
depends_on = [aws_lb_listener.loadbalancer_listener, module.app_security_group]
}
I understand that I can only know the security group ID after it is created. That's why I added the depends_on part to the ECS stanza, but it kept returning the same error.
Update
I specified count as 1 on the app_security_group module and this is the error I'm getting now.
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = module.app_security_group.security_group_id
│ ├────────────────
│ │ module.app_security_group is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Update II
This is the module declaration:
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "${var.project}-web-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
# ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
ingress_cidr_blocks = ["0.0.0.0/0"]
}
I took a look at that module. The problem is that version 3.17.0 of the module simply does not have the security_group_id output; you are using a really old version.
The latest version on the registry is 4.7.0, and you would want to upgrade to that. In fact, any version from 4.0.0 upwards has security_group_id, so you need at least 4.0.0.
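A minimal sketch of the updated declaration, assuming the remaining arguments from your module block stay the same:

module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "4.7.0" # any 4.x release exposes the security_group_id output

  name        = "${var.project}-web-sg"
  description = "Security group for web-servers with HTTP ports open within VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}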
As you are using count on the module, please try the below:
network_configuration {
  security_groups = [module.app_security_group[0].security_group_id]
  subnets         = module.vpc.private_subnets
}
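Alternatively, if you only added count = 1 while debugging and don't actually need it, you can drop count after upgrading the module version and keep the original, unindexed reference:

network_configuration {
  security_groups = [module.app_security_group.security_group_id]
  subnets         = module.vpc.private_subnets
}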

Terraform - how to access the tuple and extract the invoke_arn and function_name

I have written terraform code which:
Creates an IAM role
Creates Lambda functions and attaches the above role
Creates a DynamoDB table
Creates an API Gateway with resources and adds a POST method with Lambda integration
The first 3 steps work well. However, while creating and configuring the API Gateway, I am encountering the below errors in the aws_api_gateway_integration and aws_lambda_permission resources, where I am trying to attach the Lambda function "save_course" to the POST method under the "courses" resource:
╷
│ Error: Incorrect attribute value type
│
│ on main.tf line 117, in resource "aws_api_gateway_integration" "apigateway84f0f20":
│ 117: uri = module.awsLambda["save_course.py"].lambda_invoke_urn
│ ├────────────────
│ │ module.awsLambda["save_course.py"].lambda_invoke_urn is tuple with 1 element
│
│ Inappropriate value for attribute "uri": string required.
╵
╷
│ Error: Incorrect attribute value type
│
│ on main.tf line 141, in resource "aws_lambda_permission" "lambda_permission":
│ 141: function_name = module.awsLambda["save_course.py"].function_name
│ ├────────────────
│ │ module.awsLambda["save_course.py"].function_name is tuple with 1 element
│
│ Inappropriate value for attribute "function_name": string required.
I am not sure how to access the tuple and extract the invoke_arn and function_name. After going through the generated terraform.tfstate file, I have tried different combinations to extract the required value, but I am not sure where I am going wrong.
The terraform code along with the generated terraform.tfstate file is available in my repository:
https://github.com/myanees284/lambda_website
git clone https://github.com/myanees284/lambda_website.git
terraform init
terraform apply -auto-approve
Change your locals from
lambda_invoke_urn = aws_lambda_function.lambda.*.invoke_arn
lambda_name       = aws_lambda_function.lambda.*.function_name
to
lambda_invoke_urn = aws_lambda_function.lambda.invoke_arn
lambda_name       = aws_lambda_function.lambda.function_name
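As the error output shows, the splat expression turns each value into a one-element tuple, while uri and function_name expect plain strings. If aws_lambda_function.lambda is actually created with count (an assumption; the linked repository would confirm either way), keep count and index a single instance in the locals instead, for example:

lambda_invoke_urn = aws_lambda_function.lambda[0].invoke_arn
lambda_name       = aws_lambda_function.lambda[0].function_name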