Terraform does not keep state if resources are generated dynamically

I have a variables.tf file that has the following contents:
variable "thing_configuration_set" {
default = {
"name" = "customer1"
"projects" = [
{
"name" = "project1"
"things" = [
{
"name" = "device1"
"fw_version" = "1.0"
"fw_type" = "generic_device"
"thing_type" = "default_device"
}
]
}
]
}
}
variable "iot_policy" {
type = string
sensitive = true
}
locals {
  customer_list = distinct(flatten([for idx, customer in var.thing_configuration_set :
    {
      "customer" : customer.name
    }
  ]))
  project_list = distinct(flatten([for idx, customer in var.thing_configuration_set :
    flatten([for project_idx, project in customer.projects :
      {
        "customer" = customer.name
        "project"  = project.name
      }
    ])
  ]))
  thing_list = flatten([for idx, customer in var.thing_configuration_set :
    flatten([for project_idx, project in customer.projects :
      flatten([for thing in project.things :
        {
          "customer" = customer.name
          "project"  = project.name
          "thing"    = thing
        }
      ])
    ])
  ])
  thing_types = distinct(flatten([for idx, record in local.thing_list :
    {
      "thing_type" = record.thing.thing_type
    }]))
  iot_policy_json = base64decode(var.iot_policy)
}
And then another .tf file that defines all the resources needed to set up an IoT thing in AWS:
resource "aws_iot_thing_group" "customer" {
for_each = { for idx, record in local.customer_list : idx => record }
name = each.value.customer
}
resource "aws_iot_thing_group" "project" {
for_each = { for idx, record in local.project_list : idx => record }
name = each.value.project
parent_group_name = each.value.customer
}
resource "aws_iot_thing" "thing" {
for_each = { for idx, record in local.thing_list : idx => record }
name = "${each.value.customer}_${each.value.project}_${each.value.thing.name}"
attributes = {
bms_fw_version = each.value.thing.bms_fw_version
bms_type = each.value.thing.bms_fw_type
}
thing_type_name = each.value.thing.thing_type
}
resource "aws_iot_thing_group_membership" "thing_group_membership" {
for_each = { for idx, record in local.thing_list : idx => record }
thing_name = "${each.value.customer}_${each.value.project}_${each.value.thing.name}"
thing_group_name = each.value.project
}
resource "aws_iot_thing_type" "thing_type" {
for_each = { for idx, record in local.thing_types : idx => record }
name = "${each.value.thing_type}"
}
resource "aws_iot_certificate" "things_cert" {
active = true
}
resource "aws_iot_thing_principal_attachment" "cert_attachment" {
for_each = { for idx, record in local.thing_list : idx => record }
principal = aws_iot_certificate.things_cert.arn
thing = aws_iot_thing.thing[each.key].name
}
resource "aws_iot_policy" "policy" {
name = "connect_subscribe_publish_any"
policy = local.iot_policy_json
}
resource "aws_iot_policy_attachment" "thing_policy_attachment" {
policy = aws_iot_policy.tf_policy.name
target = aws_iot_certificate.things_cert.arn
}
Since we already have quite a few resources in AWS, I tried importing them. But when I run terraform plan it still wants to create these 'successfully' imported resources.
For example:
terraform import aws_iot_thing_group.customer Customer1
Would return:
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
If I then run terraform plan it will still list that it will create this customer:
# aws_iot_thing_group.customer["0"] will be created
+ resource "aws_iot_thing_group" "customer" {
+ arn = (known after apply)
+ id = (known after apply)
+ metadata = (known after apply)
+ name = "Customer1"
+ tags_all = (known after apply)
+ version = (known after apply)
}
What am I doing wrong? Is this a bug in Terraform?
From what I've seen (I'm very new to Terraform), state only seems to work when you define the resource directly, without anything generated (like for_each etc.).

As per @luk2302's comment (h/t) and the documentation [1], the correct import command (since it is being run in PowerShell) is:
terraform import 'aws_iot_thing_group.customer[\"0\"]' Customer1
[1] https://developer.hashicorp.com/terraform/cli/commands/import#example-import-into-resource-configured-with-for_each
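For completeness: the backslash-escaped quotes are only needed because of PowerShell's quoting rules. In a POSIX shell such as bash, single quotes alone protect the index, and each instance created by for_each must be imported individually under its own key ("0", "1", ...):
terraform import 'aws_iot_thing_group.customer["0"]' Customer1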

Related

Terraform trying to replace existing resource when new user is added

I have this YAML file:
team1:
  test#gmail.com: ${live_user_role}
  some#gmail.com: ${live_user_role}
  tem#gmail.com: ${live_admin_role}
Terraform code:
locals {
  render_membership = templatefile("${path.module}/teammembers.yaml",
    {
      live_user_role  = var.liveteam_user_role_id
      live_admin_role = var.liveteam_admin_role_id
    }
  )
  membership_nested = yamldecode(local.render_membership)
  team_names        = keys(local.membership_nested)
  membership_flat = flatten(
    [
      for team_name in local.team_names : [
        for user_email, roles in local.membership_nested[team_name] : {
          team_name = team_name
          user_name = user_email
          roles     = [roles]
        }
      ]
    ]
  )
}
resource "squadcast_team_member" "membership" {
for_each = { for i, v in local.membership_flat : i => v }
team_id = data.squadcast_team.teams[each.key].id
user_id = data.squadcast_user.users[each.key].id
role_ids = each.value.roles
lifecycle {
create_before_destroy = true
}
}
data "squadcast_team" "teams" {
for_each = { for i, v in local.membership_flat : i => v }
name = each.value.team_name
}
data "squadcast_user" "users" {
for_each = { for i, v in local.membership_flat : i => v }
email = each.value.user_name
}
Now when I add a new member to the list, say like this:
team1:
  test#gmail.com: ${live_user_role}
  some#gmail.com: ${live_user_role}
  tem#gmail.com: ${live_admin_role}
  roy#gmail.com: ${live_admin_role}
Terraform deletes the previous users and recreates all of them again:
squadcast_team_member.membership["1"] must be replaced
+/- resource "squadcast_team_member" "membership" {
    ~ id       = "62ed115ab4b4017fa2a4b786" -> (known after apply)
    ~ role_ids = [
        - "61b08676e4466d68c4866db0",
        + "61b08676e4466d68c4866db1",
      ]
    ~ user_id  = "62ed115ab4b4017fa2a4b786" -> "61b7d915a14c2569ea9edde6" # forces replacement
      # (1 unchanged attribute hidden)
  }
How can I modify the code so that it only creates the new member and leaves the existing members untouched?
This happens because membership_flat results in a list of maps, so the resources are keyed by list index. In a list, order is important: adding a user can shift the indices of the existing ones. Thus it's better to flatten your data into a map keyed by a stable string instead:
membership_flat = merge([
  for team_name in local.team_names : {
    for user_email, roles in local.membership_nested[team_name] :
    "${team_name}-${user_email}" => {
      team_name = team_name
      user_name = user_email
      roles     = [roles]
    }
  }
]...) # the dots are important: they expand the list of per-team maps into separate arguments for merge(). Do not remove them.
then
data "squadcast_team" "teams" {
for_each = local.membership_flat
name = each.value.team_name
}
data "squadcast_user" "users" {
for_each = local.membership_flat
email = each.value.user_name
}
resource "squadcast_team_member" "membership" {
for_each = local.membership_flat
team_id = data.squadcast_team.teams[each.key].id
user_id = data.squadcast_user.users[each.key].id
role_ids = each.value.roles
lifecycle {
create_before_destroy = true
}
}
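For illustration, with the YAML above membership_flat now looks roughly like this (role IDs elided); the string keys stay stable when a new user is added, so existing resources are left alone:
{
  "team1-some#gmail.com" = {
    team_name = "team1"
    user_name = "some#gmail.com"
    roles     = ["<live_user_role_id>"]
  }
  "team1-tem#gmail.com" = { ... }
  ...
}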

How to enable a dynamic block conditionally for creating GCS buckets

I'm trying to add a retention policy, but I want to enable it conditionally, as you can see from the code.
buckets.tf
locals {
  team_buckets = {
    arc = { app_id = "20390", num_buckets = 2, retention_period = null }
    ana = { app_id = "25402", num_buckets = 2, retention_period = 631139040 }
    cha = { app_id = "20391", num_buckets = 2, retention_period = 631139040 } # 20 years
  }
}

module "team_bucket" {
  source = "../../../../modules/gcs_bucket"
  for_each = {
    for bucket in flatten([
      for product_name, bucket_info in local.team_buckets : [
        for i in range(bucket_info.num_buckets) : {
          name             = format("%s-%02d", product_name, i + 1)
          team             = "ei_${product_name}"
          app_id           = bucket_info.app_id
          retention_period = bucket_info.retention_period
        }
      ]
    ]) : bucket.name => bucket
  }
  project_id       = var.project
  name             = "teambucket-${each.value.name}"
  app_id           = each.value.app_id
  team             = each.value.team
  retention_period = each.value.retention_period
}
The gcs_bucket module is defined as follows:
main.tf
resource "google_storage_bucket" "bucket" {
project = var.project_id
name = "${var.project_id}-${var.name}"
location = var.location
labels = {
app_id = var.app_id
ei_team = var.team
cost_center = var.cost_center
}
uniform_bucket_level_access = var.uniform_bucket_level_access
dynamic "retention_policy" {
for_each = var.retention_policy == null ? [] : [var.retention_period]
content {
retention_period = var.retention_period
}
}
}
but I can't seem to make the code pick up the value; for example, as you can see below, the existing retention policy is removed instead of being applied:
~ resource "google_storage_bucket" "bucket" {
id = "teambucket-cha-02"
name = "teambucket-cha-02"
# (11 unchanged attributes hidden)
- retention_policy {
- is_locked = false -> null
- retention_period = 3155760000 -> null
}
}
The variables.tf for the retention policy is as follows:
variable "retention_policy" {
description = "Configuation of the bucket's data retention policy for how long objects in the bucket should be retained"
type = any
default = null
}
variable "retention_period" {
default = null
}
Your var.retention_policy is always null, its default value; you are not changing that default anywhere. Probably you wanted the following:
for_each = var.retention_period == null ? [] : [var.retention_period]
instead of
for_each = var.retention_policy == null ? [] : [var.retention_period]
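Putting it together, a minimal sketch of the corrected dynamic block (also reading the period from the iterator value rather than from the variable a second time) would be:
dynamic "retention_policy" {
  for_each = var.retention_period == null ? [] : [var.retention_period]
  content {
    retention_period = retention_policy.value
  }
}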

Terraform aws_s3_bucket_replication_configuration can't generate multiple rules with for_each

I have an S3 bucket with the following "folder" structure:
Bucket1 ----> /Partner1 ----> /Client1 ----> /User1
   |               |               |--> /User2
   |               |
   |               |--> /Client2 ----> /User1
   |
   |--> /Partner2 ----> /Client1 ----> /User1
and so on.
I'm trying to set up replication from this bucket to another such that a file placed in
Bucket1/Partner1/client1/User1/
should replicate to
Bucket2/Partner1/client1/User1/,
Bucket1/Partner2/client1/User2/
should replicate to
Bucket2/Partner2/client1/User2/,
and so on.
I'm trying to achieve this with the following Terraform code:
locals {
  s3_input_folders = [
    "Partner1/client1/User1",
    "Partner1/client1/User2",
    "Partner1/client1/User3",
    "Partner1/client1/User4",
    "Partner1/client1/User5",
    "Partner1/client2/User1",
    "Partner1/client3/User1",
    "Partner2/client1/User1",
    "Partner3/client1/User1"
  ]
}
resource "aws_s3_bucket_replication_configuration" "replication" {
for_each = local.s3_input_folders
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
rule {
id = each.value
filter {
prefix = each.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
This does not loop and create a separate rule for each folder; rather, it overwrites the same rule on every iteration and I only get one rule as a result.
You should use a dynamic block:
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = toset(local.s3_input_folders)
content {
id = rule.value
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
Thanks, Marcin. The dynamic block construct you mentioned works to create the content blocks, but it fails to apply because AWS requires multiple replication rules to be differentiated by priority. Some slight modifications achieve this:
locals {
  s3_input_folders_list_counter = tolist([
    for i in range(length(local.s3_input_folders)) : i
  ])
  s3_input_folders_count_map = zipmap(local.s3_input_folders_list_counter, tolist(local.s3_input_folders))
}
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = local.s3_input_folders_count_map
content {
id = rule.key
priority = rule.key
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
which creates rules like these:
+ rule {
    + id       = "0"
    + priority = 0
    + status   = "Enabled"
    ...
  }
+ rule {
    + id       = "1"
    + priority = 1
    + status   = "Enabled"
    ...
  }
and so on...
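As an aside, the counter list and zipmap are not strictly necessary; a single for expression over the list produces the same index => folder map. A sketch equivalent to the locals above:
locals {
  s3_input_folders_count_map = { for i, v in local.s3_input_folders : i => v }
}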

Terragrunt for_each value, can't retrieve data in other resources

I have an issue with my Terragrunt/Terraform code below. I don't know the right way to reference both crawlers created by my for_each loop; normally I would create them with for and count. I can't retrieve the correct values in my trigger actions (main.tf).
Terragrunt file (inputs):
inputs = {
  glue_crawler = {
    crawler = {
      crawler_name = "test",
      description  = "test crawler"
    },
    crawler1 = {
      crawler_name = "test2",
      description  = "test2 crawler"
    }
  }
}
main.tf
# crawler declaration
resource "aws_glue_crawler" "default" {
  for_each      = var.glue_crawler
  database_name = aws_glue_catalog_database.database.name
  name          = "Crawler_${each.value.crawler_name}"
  description   = each.value.description
  role          = aws_iam_role.svc-glue-crawler.id
  table_prefix  = "raw_"
  tags          = var.tags
  s3_target {
    path = "${var.s3_glue_name}/${each.value.crawler_name}"
  }
  configuration = jsonencode(var.crawler_configuration)
}
...
# trigger
resource "aws_glue_trigger" "my_trigger" {
  name     = var.trigger_name
  schedule = "cron(00 01 * * ? *)"
  type     = "SCHEDULED"
  enabled  = "false"
  tags     = var.tags
  actions {
    job_name = aws_glue_crawler.default[0].name
  }
  actions {
    job_name = aws_glue_crawler.default[1].name
  }
}
variable.tf
variable "glue_crawler" {
type = map(object({
crawler_name = string
description = string
}))
default = {}
description = "glue crawler definitions."
}
When I run this code I get the following errors:
Error: Invalid index

  on main.tf line 294, in resource "aws_glue_trigger" "my_trigger":
 294:   job_name = aws_glue_crawler.default[0].name
    |----------------
    | aws_glue_crawler.default is object with 2 attributes

The given key does not identify an element in this collection value.

Error: Invalid index

  on main.tf line 298, in resource "aws_glue_trigger" "my_trigger":
 298:   job_name = aws_glue_crawler.default[1].name
    |----------------
    | aws_glue_crawler.default is object with 2 attributes

The given key does not identify an element in this collection value.
When you use for_each instead of count, you need to access each element by its key rather than by index. In your example the keys are crawler and crawler1 instead of 0 and 1:
resource "aws_glue_crawler" "default" {
for_each = var.glue_crawler
database_name = aws_glue_catalog_database.database.name
name = "Crawler_${each.value.crawler_name}"
description = each.value.description
role = aws_iam_role.svc-glue-crawler.id
table_prefix = "raw_"
tags = var.tags
s3_target {
path = "${var.s3_glue_name}/${each.value.crawler_name}"
}
configuration = jsonencode(var.crawler_configuration)
}
...
#trigger
resource "aws_glue_trigger" "my_trigger" {
name = var.trigger_name
schedule = "cron(00 01 * * ? *)"
type = "SCHEDULED"
enabled = "false"
tags = var.tags
actions {
job_name = aws_glue_crawler.default["crawler"].name
}
actions {
job_name = aws_glue_crawler.default["crawler1"].name
}
}
But of course that only works for this specific input. Instead, you should consider making the actions parameter dynamic and using for_each over the crawlers here too:
resource "aws_glue_trigger" "my_trigger" {
name = var.trigger_name
schedule = "cron(00 01 * * ? *)"
type = "SCHEDULED"
enabled = "false"
tags = var.tags
dynamic "actions" {
for_each = aws_glue_crawler.default
content {
job_name = actions.name
}
}
}
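Note that inside dynamic "actions" the iterator is named after the block label: actions.key is the map key ("crawler" or "crawler1") and actions.value is the corresponding aws_glue_crawler resource, which is why job_name must reference actions.value.name rather than actions.name.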

Terraform count within for_each loop

I'm trying to create GCP SQL databases by iterating over a list of strings with Terraform's count parameter, while the outer for_each loop iterates over the map keys (maindb & replicadb).
Unfortunately, I get the error that appears below.
Is it possible to do this in Terraform?
variables.tf
variable "sql_var" {
default = {
"maindb" = {
"db_list" = ["firstdb", "secondsdb", "thirddb"],
"disk_size" = "20",
},
"replicadb" = {
"db_list" = ["firstdb"],
"disk_size" = "",
}
}
}
main.tf
resource "google_sql_database_instance" "master_sql_instance" {
...
}
resource "google_sql_database" "database" {
for_each = var.sql_var
name = "${element(each.value.db_list, count.index)}"
instance = "${google_sql_database_instance.master_sql_instance[each.key].name}"
count = "${length(each.value.db_list)}"
}
Error Message
Error: Invalid combination of "count" and "for_each"

  on ../main.tf line 43, in resource "google_sql_database" "database":
  43:   for_each = var.sql_var

The "count" and "for_each" meta-arguments are mutually-exclusive, only one
should be used to be explicit about the number of resources to be created.
What the error message tells you is that you cannot use count and for_each together. It looks like you are trying to create 3 main databases and 1 replica database, am I correct? What I would do is create your 2 instances and then transform your map variable to create the databases.
terraform {
  required_version = ">=0.13.3"
  required_providers {
    google = ">=3.36.0"
  }
}

variable "sql_instances" {
  default = {
    "main_instance" = {
      "db_list"   = ["first_db", "second_db", "third_db"],
      "disk_size" = "20",
    },
    "replica_instance" = {
      "db_list"   = ["first_db"],
      "disk_size" = "20",
    }
  }
}

locals {
  databases = flatten([
    for key, value in var.sql_instances : [
      for item in value.db_list : {
        name     = item
        instance = key
      }
    ]
  ])
  sql_databases = {
    for item in local.databases :
    uuid() => item
  }
}

resource "google_sql_database_instance" "sql_instance" {
  for_each = var.sql_instances
  name     = each.key
  settings {
    disk_size = each.value.disk_size
    tier      = "db-f1-micro"
  }
}

resource "google_sql_database" "sql_database" {
  for_each = local.sql_databases
  name     = each.value.name
  instance = each.value.instance
  depends_on = [
    google_sql_database_instance.sql_instance,
  ]
}
Then, first run terraform apply -target=google_sql_database_instance.sql_instance, and after this run terraform apply.
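One caveat with this approach: uuid() returns a new value on every run, so the keys of local.sql_databases change between plans and Terraform will propose to destroy and recreate every database each time. A sketch of a stable alternative, keying each entry by instance and database name instead:
sql_databases = {
  for item in local.databases :
  "${item.instance}-${item.name}" => item
}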