read key value from a json file for terraform variables - amazon-web-services

I have the below sample .json file:
[
  {
    "ParameterKey": "key1",
    "ParameterValue": "valueofthekey1"
  },
  {
    "ParameterKey": "key2",
    "ParameterValue": "valueofthekey2"
  }
]
resource tf file:
locals {
  local_data = jsondecode(file("./modules/path/file.json"))
}

resource "aws_ssm_parameter" "testing1" {
  type  = "String"
  name  = "test_name1"
  value = local.local_data.valueofthekey1
}

resource "aws_ssm_parameter" "testing2" {
  type  = "String"
  name  = "test_name2"
  value = local.local_data.valueofthekey2
}
Any leads on how I can read the json file and pass the value of key1 to the first resource and key2 to the second resource?
I tried using locals, but got the below error:
12: value = local.local_data.testing1
|----------------
| local.local_data is tuple with 2 elements

If you want ParameterKey to be the name of the parameter, you can do:
resource "aws_ssm_parameter" "testing" {
count = length(local.local_data)
type = "String"
name = local.local_data[count.index].ParameterKey
value = local.local_data[count.index].ParameterValue
}
But if you want the entire json element to be the value, then you can do:
resource "aws_ssm_parameter" "testing" {
count = length(local.local_data)
type = "String"
name = "test_name${count.index}"
value = jsonencode(local.local_data[count.index])
}
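If you still want to look individual values up by their key, as in the original attempt, one option (a sketch, assuming the ParameterKey values are unique) is to first project the list into a map:

locals {
  local_data    = jsondecode(file("./modules/path/file.json"))
  # Map each ParameterKey to its ParameterValue, e.g. { key1 = "valueofthekey1", ... }
  params_by_key = { for p in local.local_data : p.ParameterKey => p.ParameterValue }
}

resource "aws_ssm_parameter" "testing1" {
  type  = "String"
  name  = "test_name1"
  value = local.params_by_key["key1"]
}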

Related

Terraform: how to create resources by looping through objects with an inner list

I am working with Terraform and I need to create a Glue Workflow. My target is the following schema (image not included here).
I don't understand how I can use "nested loops" to create the resource from a variable object with a list of strings.
My main.tf file is:
provider "aws" {
region = "eu-west-1"
profile = "<MY-STAGE>"
}
locals {
workflow_name = "my_example"
first_job = "Job_start"
my_map = [
{
flow = ["JOB-A1", "JOB-A2", "JOB-A3"]
},
{
flow = ["JOB-B1", "JOB-B2", "JOB-B3"]
}
]
}
resource "aws_glue_workflow" "example" {
name = "example"
}
resource "aws_glue_trigger" "example-start" {
name = "trigger-start"
type = "ON_DEMAND"
workflow_name = local.workflow_name
actions {
job_name = "${replace(lower(local.first_job), "_", "-")}"
}
}
resource "aws_glue_trigger" "this" {
for_each = toset(local.my_map)
name = "trigger-inner--${lower(element(each.key, index(local.my_map, each.key)))}"
type = "CONDITIONAL"
workflow_name = aws_glue_workflow.example.name
predicate {
conditions {
job_name = "${replace(lower(element(each.key, index(local.my_map, each.key))), "_", "-")}"
state = "SUCCEEDED"
}
}
actions {
job_name = "${replace(lower(element(each.key, index(local.my_map, each.key) + 1)), "_", "-")}"
}
}
When I try to run terraform plan I get this error:
| each.key is a string, known only after apply
| local.my_map is tuple with 2 elements
| Call to function "element" failed: cannot read elements from string.
So, how can I get all the rows of the object and iterate through the list elements?
Any help or pointers would be much appreciated!
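One way to approach this (a sketch of my own, not taken from the thread, assuming the goal is to trigger each job in a flow after the previous one succeeds) is to flatten the nested structure into one element per consecutive pair of jobs and feed that to for_each:

locals {
  # One element per "previous job -> next job" pair, across all flows.
  job_pairs = flatten([
    for flow_idx, entry in local.my_map : [
      for job_idx in range(length(entry.flow) - 1) : {
        key     = "${flow_idx}-${job_idx}"
        current = entry.flow[job_idx]
        next    = entry.flow[job_idx + 1]
      }
    ]
  ])
}

resource "aws_glue_trigger" "this" {
  for_each      = { for pair in local.job_pairs : pair.key => pair }
  name          = "trigger-inner-${replace(lower(each.value.current), "_", "-")}"
  type          = "CONDITIONAL"
  workflow_name = aws_glue_workflow.example.name

  predicate {
    conditions {
      job_name = replace(lower(each.value.current), "_", "-")
      state    = "SUCCEEDED"
    }
  }

  actions {
    job_name = replace(lower(each.value.next), "_", "-")
  }
}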

terraform plan 'string required' dynamodb_table_item

I need to add a set of strings to a dynamodb_table_item resource.
# my.tfvars
client_days = [
  "2021-05-08",               # May
  "2021-06-12", "2021-06-26", # June
]

# main.tf
variable "client_days" {
  type        = set(string)
  description = "Client days."
}

resource "aws_dynamodb_table_item" "client_days" {
  table_name = aws_dynamodb_table.periods.name
  hash_key   = "name"

  item = <<EOF
{
  "name": { "S": "client-days" },
  "days": {
    "SS" : "${var.client_days}"
  }
}
EOF
}
The resulting error looks like this:
32: item = <<EOF
33: {
34: "name": { "S": "client-days" },
35: "days": {
36: "SS" : "${var.client_days}"
37: }
38: }
39: EOF
|----------------
| var.client_days is set of string with 11 elements
Cannot include the given value in a string template: string required.
I have no clue how to solve this.
I also tried converting that list into a string with join().
You have to use jsonencode:
resource "aws_dynamodb_table_item" "client_days" {
table_name = "testdb"
hash_key = "name"
item = <<EOF
{
"name": { "S": "client-days" },
"days": {
"SS" : ${jsonencode(var.client_days)}
}
}
EOF
}
For an argument that expects entirely JSON, it's typically best to produce the entire value with jsonencode, which then avoids various JSON syntax and Terraform templating issues:
resource "aws_dynamodb_table_item" "client_days" {
table_name = "testdb"
hash_key = "name"
item = jsonencode({
name = {
S = "client-days"
}
days = {
SS = var.client_days
}
})
}
This way you can mix static data with references in your argument to jsonencode, without any need for string templating.
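As a quick sanity check (illustrative, not part of the original answer), terraform console shows that jsonencode turns the set into a JSON array, which is what the DynamoDB SS type expects:

> jsonencode(toset(["2021-05-08", "2021-06-12", "2021-06-26"]))
"[\"2021-05-08\",\"2021-06-12\",\"2021-06-26\"]"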

Terraform dynamodb error - all attributes must be indexed

I am trying to create a simple dynamodb table using the following resource modules of terraform.
I get the following error while running terraform:
All attributes must be indexed. Unused attributes: ["pactitle" "ipadress" "Timestamp"].
Why do we need to index all attributes?
How can I solve this?
resource "aws_dynamodb_table" "this" {
count = var.create_table ? 1 : 0
name = var.name
billing_mode = var.billing_mode
hash_key = var.hash_key
range_key = var.range_key
read_capacity = var.read_capacity
write_capacity = var.write_capacity
//stream_enabled = var.stream_enabled
//stream_view_type = var.stream_view_type
dynamic "attribute" {
for_each = var.attributes
content {
name = attribute.value.name
type = attribute.value.type
}
}
server_side_encryption {
enabled = var.server_side_encryption_enabled
kms_key_arn = var.server_side_encryption_kms_key_arn
}
tags = merge(
var.tags,
{
"Name" = format("%s", var.name)
},
)
timeouts {
create = lookup(var.timeouts, "create", null)
delete = lookup(var.timeouts, "delete", null)
update = lookup(var.timeouts, "update", null)
}
}
calling module
module "dynamodb_table" {
source = "./../../../modules/dynamodb"
name = "pack-audit-cert"
hash_key = "id"
create_table= true
read_capacity=5
write_capacity=5
billing_mode = "PROVISIONED"
range_key = "pacid"
attributes = [
{
name = "id"
type = "N"
},
{
name = "pacid"
type = "S"
},
{
name = "pactitle"
type = "S"
},
{
name = "ipadress"
type = "S"
},
{
name = "Timestamp"
type = "S"
}
]
}
Thank you
That error message is a bit misleading. You should only define the indexed attributes when you are creating the table. Since DynamoDB is a schemaless database, it doesn't care about the other attributes at table creation time.
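For example (a sketch based on the calling module above), declaring only the key attributes makes the error go away; pactitle, ipadress and Timestamp can still be written on items without being declared here:

module "dynamodb_table" {
  source = "./../../../modules/dynamodb"

  name           = "pack-audit-cert"
  hash_key       = "id"
  range_key      = "pacid"
  create_table   = true
  read_capacity  = 5
  write_capacity = 5
  billing_mode   = "PROVISIONED"

  # Only the attributes used as keys (or in indexes) need to be declared.
  attributes = [
    {
      name = "id"
      type = "N"
    },
    {
      name = "pacid"
      type = "S"
    }
  ]
}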

Terraform count within for_each loop

I'm trying to create GCP SQL DBs by iterating over a list of strings using Terraform's count parameter, while for_each loops over the map keys (maindb & replicadb).
Unfortunately, I get the error that appears below.
Is it possible to do this in Terraform?
variables.tf
variable "sql_var" {
default = {
"maindb" = {
"db_list" = ["firstdb", "secondsdb", "thirddb"],
"disk_size" = "20",
},
"replicadb" = {
"db_list" = ["firstdb"],
"disk_size" = "",
}
}
}
main.tf
resource "google_sql_database_instance" "master_sql_instance" {
...
}
resource "google_sql_database" "database" {
for_each = var.sql_var
name = "${element(each.value.db_list, count.index)}"
instance = "${google_sql_database_instance.master_sql_instance[each.key].name}"
count = "${length(each.value.db_list)}"
}
Error Message
Error: Invalid combination of "count" and "for_each"

  on ../main.tf line 43, in resource "google_sql_database" "database":
  43:   for_each = var.sql_var

The "count" and "for_each" meta-arguments are mutually-exclusive, only one
should be used to be explicit about the number of resources to be created.
What the error message tells you is that you cannot use count and for_each together. It looks like you are trying to create 3 main databases and 1 replica database, am I correct? What I would do is create your 2 master instances and then transform your map variable to create the databases.
terraform {
  required_version = ">=0.13.3"
  required_providers {
    google = ">=3.36.0"
  }
}

variable "sql_instances" {
  default = {
    "main_instance" = {
      "db_list"   = ["first_db", "second_db", "third_db"],
      "disk_size" = "20",
    },
    "replica_instance" = {
      "db_list"   = ["first_db"],
      "disk_size" = "20",
    }
  }
}

locals {
  databases = flatten([
    for key, value in var.sql_instances : [
      for item in value.db_list : {
        name     = item
        instance = key
      }
    ]
  ])

  sql_databases = {
    for item in local.databases :
    uuid() => item
  }
}

resource "google_sql_database_instance" "sql_instance" {
  for_each = var.sql_instances
  name     = each.key

  settings {
    disk_size = each.value.disk_size
    tier      = "db-f1-micro"
  }
}

resource "google_sql_database" "sql_database" {
  for_each = local.sql_databases
  name     = each.value.name
  instance = each.value.instance

  depends_on = [
    google_sql_database_instance.sql_instance,
  ]
}
Then, first run terraform apply -target=google_sql_database_instance.sql_instance and after this run terraform apply.
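A small variant (my own suggestion, not from the original answer): using a stable key built from the instance and database names instead of uuid() keeps the for_each keys identical between plans, so the databases are not destroyed and recreated on every apply:

locals {
  sql_databases = {
    for item in local.databases :
    # Stable, unique key per instance/database combination (assumes names are unique per instance).
    "${item.instance}-${item.name}" => item
  }
}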

Creating dynamic resources with Terraform for_each

I would like to create AWS SSM Parameters using Terraform, with the parameters being passed in as input variables.
I see there is a for_each feature, but how can it be applied to top-level properties within a Terraform resource? From the documentation, for_each appears not to work on top-level properties of a resource; am I misunderstanding?
This is what I am trying to accomplish:
main.tf
resource "aws_ssm_parameter" "ssm_parameters" {
for_each = var.params
content {
name = name.value
type = "String"
overwrite = true
value = paramValue.value
tags = var.tags
lifecycle {
ignore_changes = [
tags,
value
]
}
}
}
variables.tf
variable "params" {
default = [
{
name = "albUrl"
paramValue = "testa"
},
{
name = "rdsUrl1"
paramValue = "testb"
},
{
name = "rdsUrl2"
valparamValueue = "testc"
},
]
}
You can use for_each, but you need to modify its syntax and fix the syntax in your var.params:
variable "params" {
default = [
{
name = "albUrl"
paramValue = "testa"
},
{
name = "rdsUrl1"
paramValue = "testb"
},
{
name = "rdsUrl2"
paramValue = "testc"
},
]
}
Then to use for_each and create 3 SSM parameters:
resource "aws_ssm_parameter" "ssm_parameters" {
for_each = {for v in var.params: v.name => v.paramValue}
type = "String"
name = each.key
value = each.value
overwrite = true
}
In the above you have to project your list of maps to a map, as required by for_each.
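For reference (illustrative, not from the original answer), that for expression turns the list into a map like the following, so each.key is the parameter name and each.value is its value:

{
  albUrl  = "testa"
  rdsUrl1 = "testb"
  rdsUrl2 = "testc"
}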