How to iterate over a list of objects in Terraform locals

Basically, we are trying to create CloudWatch dashboards using Terraform 0.13.5, and our requirement is to pass two variables to the widget block, i.e. ${function_name} and ${title}. These will be passed as an object variable.
Error : Invalid template interpolation value
Cannot include the given value in a string template: string required.
Here is the code:
locals {
  lambda = [
    {
      function_name = "lambda1"
      title         = "Error"
    },
    {
      function_name = "lambda1"
      title         = "Error1"
    }
  ]

  widget_defination = <<EOT
%{ for function_name, title in local.lambda }
[
  {
    "type": "metric",
    "x": 0,
    "y": 0,
    "width": 12,
    "height": 6,
    "properties": {
      "metrics": [
        [
          "AWS/EC2",
          "CPUUtilization",
          "FunctionName",
          "${funtion_name}"
        ]
      ],
      "period": 300,
      "stat": "Average",
      "region": "us-east-1",
      "title": "${title}"
    }
  }
]
%{ endfor }
EOT
}

Gotcha. When iterating a list of objects, we need to reference the attributes through the loop variable in the widgets, like
${function_name.function_name} and
${function_name.title}
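To spell that out: a two-symbol %{ for } over a list yields the index and the element, and interpolating the element object as a string is what triggers "string required". A minimal corrected sketch (fn is an arbitrary loop-variable name, widget body trimmed; a real dashboard body would also need to handle the trailing comma between widgets):

```hcl
locals {
  widget_defination = <<EOT
[
%{ for fn in local.lambda }
  {
    "type": "metric",
    "properties": {
      "metrics": [["AWS/EC2", "CPUUtilization", "FunctionName", "${fn.function_name}"]],
      "title": "${fn.title}"
    }
  },
%{ endfor }
]
EOT
}
```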

As far as I know, that is not the way to work with variables in Terraform. You have to declare the variables and their types in one file and assign their values in a different file, or as the result of a resource creation.
You are talking about widgets, so I'm not sure if you already know that, because I've never used widgets before. But if you need some help ASAP, I don't mind trying.
variables.tf
variable "project_name" {
  type = string
}
variable "vpc_id" {}
...
terraform.tfvars
project_name = "my-project"
vpc_id = "vpc-10101010"
...
The way you put that into a template is up to you. I would suggest a simple approach like bash, but who knows, maybe the widgets are fun.
And a little edit here, because I only saw your "Gotcha" late: yes, do not mix variables and strings. You should vote for your own answer :D

Related

Create multiple IAM roles with different policies in Terraform

Good day everyone. I am new to Terraform, trying to solve a problem, and stuck on it. I want to create multiple AWS IAM roles using input variables and then assign different policies to those roles. I am trying to do this in a for_each loop. However, I cannot solve the riddle of how to provide different policies. I am trying to solve this using a map variable. Here is my test code:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region                  = "region"
  shared_credentials_file = "path_to_creds"
  profile                 = "profile_name"
}

variable "roles" {
  type = map
  default = {
    # These bucket names should be different in each policy
    "TestRole1" = "module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn"
    "TestRole2" = "module.s3_bucket_bronze.s3_bucket_arn, module.s3_bucket_silver.s3_bucket_arn"
  }
}
resource "aws_iam_role" "example" {
  for_each = var.roles
  name     = "${each.key}"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:GetBucketAcl",
          "s3:DeleteObject",
          "s3:GetBucketLocation"
        ]
        Effect = "Allow"
        Resource = [
          each.value,
          "${each.value}/*"
        ]
      }
    ]
  })
}
As you can see, in the Resource block of the policy the bucket name will be each.value, which will be
module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn
This is fine; however, I also want to produce
module.s3_bucket_raw.s3_bucket_arn/*, module.s3_bucket_bronze.s3_bucket_arn/*
which is not possible with my approach, because "${each.value}/*" translates into
module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn/*
I hope some expert can spare a few minutes for me; thank you all in anticipation.
I would first either update the variable definition to store the buckets as a list, or, if you can't change the variable definition, add a local to convert your variable to a nicer object before creating the resource. This variable would be nicer to work with:
variable "roles" {
  type = map
  default = {
    "TestRole1" = ["module.s3_bucket_raw.s3_bucket_arn", "module.s3_bucket_bronze.s3_bucket_arn"]
    "TestRole2" = ["module.s3_bucket_bronze.s3_bucket_arn", "module.s3_bucket_silver.s3_bucket_arn"]
  }
}
Then you can use a for expression in the resource like this:
resource "aws_iam_role" "example" {
  for_each = var.roles
  name     = "${each.key}"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:GetBucketAcl",
          "s3:DeleteObject",
          "s3:GetBucketLocation"
        ]
        Effect   = "Allow"
        Resource = flatten([for bucket in each.value : [bucket, "${bucket}/*"]])
      }
    ]
  })
}
I prefer the aws_iam_policy_document data source for complex policies but I'll leave that to you.
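For completeness, a sketch of that data-source approach under the same variable as above (the resource names here are illustrative, and note that the S3 actions really belong in a role policy; assume_role_policy is the trust policy):

```hcl
data "aws_iam_policy_document" "bucket_access" {
  for_each = var.roles

  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
    resources = flatten([for bucket in each.value : [bucket, "${bucket}/*"]])
  }
}

# Attach the generated policy document to the matching role
resource "aws_iam_role_policy" "example" {
  for_each = var.roles
  name     = "${each.key}-s3"
  role     = aws_iam_role.example[each.key].id
  policy   = data.aws_iam_policy_document.bucket_access[each.key].json
}
```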

Terraform Error: Provider produced inconsistent final plan - value "known after apply" causes empty list on plan

The following contrived example causes "Error: Provider produced inconsistent final plan" because of the local.project_id used in the list of rrdatas on the google_dns_record_set.cdn_dns_txt_record_firebase resource. The project_id value is known only after apply, and I do not know how to manage this for the rrdatas list. When I come to apply the plan, the value changes and causes the error mentioned. Your help would be really appreciated.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.24.0"
    }
    random = {
      version = ">= 3.3.2"
    }
  }
}

locals {
  project_id = random_id.project_id.hex
}

resource "random_id" "project_id" {
  keepers = {
    project_id = "my-project-id"
  }
  byte_length = 8
  prefix      = "project-"
}

resource "google_project" "my_project" {
  name       = "A Great Project"
  project_id = random_id.project_id.hex
}

resource "google_dns_record_set" "cdn_dns_txt_record_firebase" {
  name         = "www.bob.com"
  project      = google_project.my_project.project_id
  managed_zone = "bob.com."
  type         = "TXT"
  ttl          = 300
  rrdatas = [
    "\"v=spf1 include:_spf.firebasemail.com ~all\"",
    "firebase=${local.project_id}"
  ]
}
The plan for the google_dns_record_set.cdn_dns_txt_record_firebase resource looks like this:
# google_dns_record_set.cdn_dns_txt_record_firebase will be created
+ resource "google_dns_record_set" "cdn_dns_txt_record_firebase" {
    + id           = (known after apply)
    + managed_zone = "bob.com."
    + name         = "www.bob.com"
    + project      = (known after apply)
    + ttl          = 300
    + type         = "TXT"
  }
But I would expect something more like:
# google_dns_record_set.cdn_dns_txt_record_firebase will be created
+ resource "google_dns_record_set" "cdn_dns_txt_record_firebase" {
    + id           = (known after apply)
    + managed_zone = "bob.com."
    + name         = "www.bob.com"
    + project      = (known after apply)
    + rrdatas = [
        + "\"v=spf1 include:_spf.firebasemail.com ~all\"",
        + "firebase=(known after apply)",
      ]
    + ttl          = 300
    + type         = "TXT"
  }
Ran into this issue with the AWS provider. I know it's not quite the same, but my solution was to modify the infrastructure directly via the AWS CLI. From there I removed the state record for the specific resource in our state store (Terraform Cloud) and then imported the resource from AWS.
If you are managing remote state yourself, you could likely run terraform plan, discard the run, and modify the remote state directly with the changes Terraform detected. It's definitely a provider bug, but that might be a workaround.
We can see from the plan information that the provider has indeed generated an invalid plan in this case, for the reason you observed: you set rrdatas in the configuration, and so the provider ought to have generated an initial plan to set those values.
As is being discussed over in the bug report you filed about this, the provider seems to be mishandling the unknown value you passed here, and returning a plan that has it set to null instead of to unknown as expected.
Until that bug is fixed in the provider, I think the main workaround would be to find some way to ensure that Terraform Core already knows the value of local.project_id before asking the provider to plan resource "google_dns_record_set" "cdn_dns_txt_record_firebase".
One way to achieve that would be to create a targeted plan that tells Terraform to focus only on generating that random ID in its first operation, and then once that's succeeded you can use normal Terraform applies moving forward as long as you avoid regenerating that random ID:
terraform apply -target=random_id.project_id to generate just that random ID, without planning anything else.
terraform apply to converge everything else. This should succeed because random_id.project_id.hex will already be known from the previous run, and so local.project_id will be known too. The provider then won't run into this buggy behavior, because it will see rrdatas as a known list of strings rather than an unknown value.
You have to check the tfstate file to find the inconsistency between your Terraform code and the tfstate attributes. This is what helped me fix the issue:
tfstate
{
  "module": "module.gitlab_cloud_sql",
  "mode": "managed",
  "type": "random_string",
  "name": "random",
  "provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
  "instances": [
    {
      "schema_version": 2,
      "attributes": {
        "id": "pemz",
        "keepers": null,
        "length": null,
        "lower": null,
        "min_lower": null,
        "min_numeric": null,
        "min_special": null,
        "min_upper": null,
        "number": null,
        "numeric": null,
        "override_special": null,
        "result": "pemz",
        "special": null,
        "upper": null
      },
      "private": "eyJzY2hlbWFfdmVyc2lvbiI6IjIifQ=="
    }
  ]
},
Terraform code:
resource "random_string" "random" {
  length    = 4
  special   = false
  lower     = true
  upper     = false
  numeric   = false
  min_upper = 0
  lifecycle {
    ignore_changes = all
  }
}
After fixing the tfstate, it works
{
  "module": "module.gitlab_cloud_sql",
  "mode": "managed",
  "type": "random_string",
  "name": "random",
  "provider": "provider[\"registry.terraform.io/hashicorp/random\"]",
  "instances": [
    {
      "schema_version": 2,
      "attributes": {
        "id": "pemz",
        "keepers": null,
        "length": 4,
        "lower": true,
        "min_lower": 0,
        "min_numeric": 0,
        "min_special": 0,
        "min_upper": 0,
        "number": false,
        "numeric": false,
        "override_special": null,
        "result": "pemz",
        "special": false,
        "upper": false
      }
    }
  ]
},

Adding multiple DynamoDB set items through terraform

Terraform version: 0.12.20
I wanted to add multiple items with the Set data type in Terraform. I went through the link where an example shows how to add a simple data type such as String, but it fails with Set.
Below is the code that I am testing
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = {
      test  = ["114717", "2"],
      test1 = []
    },
    "25" = {
      test  = ["114717"],
      test1 = []
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": "${each.value.test}"},
  "test1": {"SS": "${each.value.test1}"}
}
EOF
}
However, it fails with Cannot include the given value in a string template: string required.
I tried something like
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = {
      test  = "114717,2",
      test1 = ""
    },
    "25" = {
      test  = "114717",
      test1 = ""
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": ["${each.value.test}"]},
  "test1": {"SS": ["${each.value.test1}"]}
}
EOF
}
This fails to differentiate "114717,2" into two different items.
In the second example, I have even tried the section below:
{
  "key": {"S": "${each.key}"},
  "test": {"SS": "${split(",", each.value.test)}"},
  "test1": {"SS": "${split(",", each.value.test1)}"}
}
This also fails with Cannot include the given value in a string template: string required.
I expect to be able to split the values into the array ["114717","2"], which would let me store the values as a Set in DynamoDB.
Your item should be valid JSON. To achieve that, you can use jsonencode:
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "GameScores"
  for_each = {
    "72" = {
      test  = ["114717", "2"],
      test1 = []
    },
    "25" = {
      test  = ["114717"],
      test1 = []
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": ${jsonencode(each.value.test)}},
  "test1": {"SS": ${length(each.value.test1) > 0 ? jsonencode(each.value.test1) : jsonencode([""])}}
}
EOF
}
Also, an SS set can't be empty, so you have to check for that and fall back to a [""] array, or reconsider what to do when your test1 is [].
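As a variant, the whole item can be built with jsonencode instead of a heredoc, which sidesteps the quoting entirely (a sketch using the same hypothetical table and values as above; the empty-set caveat still applies):

```hcl
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = { test = ["114717", "2"], test1 = [] }
    "25" = { test = ["114717"], test1 = [] }
  }

  # jsonencode handles all quoting; SS values must be non-empty lists of strings
  item = jsonencode({
    key   = { S = each.key }
    test  = { SS = each.value.test }
    test1 = { SS = length(each.value.test1) > 0 ? each.value.test1 : [""] }
  })
}
```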

Pass command as variable to ECS task definition

Is there a way to pass a Docker command as a Terraform variable to the ECS task definition that is defined in Terraform?
According to the aws_ecs_task_definition documentation, the container_definitions property is an unparsed JSON string containing an array of container definitions, as you'd pass directly to the AWS APIs. One of the properties of that object is command.
Paraphrasing the documentation somewhat, you'd come up with a sample task definition like:
resource "aws_ecs_task_definition" "service" {
  family = "service"
  container_definitions = <<DEFINITIONS
[
  {
    "name": "first",
    "image": "service-first",
    "command": ["httpd", "-f", "-p", "8080"],
    "cpu": 10,
    "memory": 512,
    "essential": true
  }
]
DEFINITIONS
}
You can try the method below to take the command as a variable, with a template condition in case nothing is passed from the root module.
service.json
[
  {
    ...
    ],
%{ if command != "" }
    "command": [${command}],
%{ endif ~}
    ...
  }
]
container.tf
data "template_file" "container_def" {
  count    = 1
  template = file("${path.module}/service.json")
  vars = {
    command = var.command != "" ? join(",", formatlist("\"%s\"", var.command)) : ""
  }
}
main.tf
module "example" {
  ...
  command = ["httpd", "-f", "-p", "8080"]
  ...
}
variables.tf
variable "command" {
  default = ""
}
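On Terraform 0.12+ you could also drop the template_file data source (it lives in the deprecated template provider) and render the same service.json with the built-in templatefile function; a sketch assuming the variables above:

```hcl
resource "aws_ecs_task_definition" "service" {
  family = "service"
  # templatefile renders service.json with the same vars the data source used
  container_definitions = templatefile("${path.module}/service.json", {
    command = var.command != "" ? join(",", formatlist("\"%s\"", var.command)) : ""
  })
}
```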

AWS Terraform: Filter specific subnets by matching substring in tag name

I have 6 subnets and want to filter out the 3 of them whose Name tag contains the substring internal, for use with RDS.
Could anyone please help me?
data "aws_vpc" "vpc_nonprod-sctransportationops-vpc" {
  tags {
    Name = "vpc_nonprod-sctransportationops-vpc"
  }
}

data "aws_subnet_ids" "all" {
  vpc_id = "${data.aws_vpc.vpc_nonprod-sctransportationops-vpc.id}"
}

output "aws_subnet_ids" {
  value = "${data.aws_subnet_ids.all.ids}"
}

# 6 subnets
# Now look up details for each subnet
data "aws_subnet" "filtered_subnets" {
  count = "${length(data.aws_subnet_ids.all.ids)}"
  id    = "${data.aws_subnet_ids.all.ids[count.index]}"
  filter {
    name   = "tag:Name"
    values = ["*internal*"]
  }
}
Some of the Name tags contain the internal substring, and I need to grab the IDs of all subnets whose Name tag contains it.
values = ["*"] returns all 6 IDs; however, values = ["*internal*"] (or any other word) doesn't work.
These are the errors:
Error: Error refreshing state: 1 error(s) occurred:
* data.aws_subnet.publicb: 3 error(s) occurred:
* data.aws_subnet.publicb[1]: data.aws_subnet.publicb.1: no matching subnet found
* data.aws_subnet.publicb[4]: data.aws_subnet.publicb.4: no matching subnet found
* data.aws_subnet.publicb[0]: data.aws_subnet.publicb.0: no matching subnet found
All 6 lookups run, but only 3 fail, so something is partially working: the 3 failing subnets are exactly the ones without the internal substring in their Name tag, which means the filter is being parsed. aws_subnet_ids doesn't have a filter option, so something else is needed instead. For one match it would be simple, but I need multiple matches. My guess is that the error comes from the loop, which runs 6 times.
Here is the same output without the filter:
"data.aws_subnet.filtered_subnets.2": {
  "type": "aws_subnet",
  "depends_on": [
    "data.aws_subnet_ids.all"
  ],
  "primary": {
    "id": "subnet-14058972",
    "attributes": {
      "assign_ipv6_address_on_creation": "false",
      "availability_zone": "us-west-2a",
      "cidr_block": "172.18.201.0/29",
      "default_for_az": "false",
      "id": "subnet-14038772",
      "map_public_ip_on_launch": "false",
      "state": "available",
      "tags.%": "4",
      "tags.Designation": "internal",
      "tags.Name": "subnet_nonprod-sctransportationops-vpc_internal_az2",
      "tags.Permissions": "f00000",
      "tags.PhysicalLocation": "us-west-2a",
      "vpc_id": "vpc-a47k07c2"
    },
    "meta": {},
    "tainted": false
  },
  "deposed": [],
  "provider": "provider.aws"
}
It turns out aws_subnet_ids does have this feature, just in a different way. This solved my problem:
data "aws_subnet_ids" "all" {
  vpc_id = "${data.aws_vpc.vpc_nonprod-sctransportationops-vpc.id}"
  tags = {
    Name = "*internal*"
  }
}
Thanks for reviewing :D
According to the Terraform documentation, the aws_subnet_ids data source has been deprecated and will be removed in a future version (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/subnet_ids).
You can use aws_subnets instead.
Example:
# Private Subnets (db_subnet)
data "aws_subnets" "private_db_subnet" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main_vpc.id]
  }
  tags = {
    Name = "{YOUR_FILTER}"
  }
}
Its output is a list of subnet IDs: data.aws_subnets.private_db_subnet.ids
Use case example:
resource "aws_lambda_function" "lambda_json_documentdb" {
  ...
  vpc_config {
    subnet_ids         = data.aws_subnets.private_db_subnet.ids
    security_group_ids = [aws_security_group.lambda_sg.id]
  }
  ...
}
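Applied back to the original question, the same pattern would look something like this (a sketch reusing the VPC data source and the *internal* wildcard from the question above):

```hcl
data "aws_subnets" "internal" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc_nonprod-sctransportationops-vpc.id]
  }
  tags = {
    Name = "*internal*"
  }
}

# data.aws_subnets.internal.ids can then feed an RDS subnet group.
```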