Adding multiple DynamoDB set items through terraform - amazon-web-services

terraform version: 0.12.20
I want to add multiple items with the set data type in Terraform. I went through the linked example, which shows how to add a simple data type such as String, but it fails when adding a Set.
Below is the code that I am testing:
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = {
      test  = ["114717", "2"],
      test1 = []
    },
    "25" = {
      test  = ["114717"],
      test1 = []
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": "${each.value.test}"},
  "test1": {"SS": "${each.value.test1}"}
}
EOF
}
However, it fails with Cannot include the given value in a string template: string required.
I tried something like
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = {
      test  = "114717,2",
      test1 = ""
    },
    "25" = {
      test  = "114717",
      test1 = ""
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": ["${each.value.test}"]},
  "test1": {"SS": ["${each.value.test1}"]}
}
EOF
}
This fails to treat "114717,2" as two separate items.
With the second example, I also tried the following item template:
{
  "key": {"S": "${each.key}"},
  "test": {"SS": "${split(",", each.value.test)}"},
  "test1": {"SS": "${split(",", each.value.test1)}"}
}
This also fails with Cannot include the given value in a string template: string required.
I expect to be able to split the values into an array such as ["114717","2"], so that I can store them as a Set in DynamoDB.

Your item must be valid JSON. To achieve that, you can use jsonencode:
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "GameScores"
  for_each = {
    "72" = {
      test  = ["114717", "2"],
      test1 = []
    },
    "25" = {
      test  = ["114717"],
      test1 = []
    }
  }
  item = <<EOF
{
  "key": {"S": "${each.key}"},
  "test": {"SS": ${jsonencode(each.value.test)}},
  "test1": {"SS": ${length(each.value.test1) > 0 ? jsonencode(each.value.test1) : jsonencode([""])}}
}
EOF
}
Also, an SS set can't be empty, so you have to account for that: either substitute a placeholder [""] array, as above, or reconsider what to do when test1 is [].
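Another option (a sketch of my own, not from the original answer) is to build the whole item with jsonencode and merge, omitting the set attribute entirely when its list is empty, since DynamoDB simply stores the item without that attribute:
```hcl
resource "aws_dynamodb_table_item" "items" {
  hash_key   = "key"
  table_name = "test"
  for_each = {
    "72" = { test = ["114717", "2"], test1 = [] },
    "25" = { test = ["114717"], test1 = [] }
  }

  # merge() drops the test1 attribute entirely when its list is empty,
  # avoiding the [""] placeholder set
  item = jsonencode(merge(
    {
      key  = { S = each.key }
      test = { SS = each.value.test }
    },
    length(each.value.test1) > 0 ? { test1 = { SS = each.value.test1 } } : {}
  ))
}
```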

Related

Terraform: how to create resource from loop through a objects with inner list

I am working with Terraform and I need to create a Glue Workflow. My target is the following schema:
I don't understand how I can use "nested loops" to create the resource from a variable object containing a list of strings.
My main.tf file is:
provider "aws" {
  region  = "eu-west-1"
  profile = "<MY-STAGE>"
}
locals {
  workflow_name = "my_example"
  first_job     = "Job_start"
  my_map = [
    {
      flow = ["JOB-A1", "JOB-A2", "JOB-A3"]
    },
    {
      flow = ["JOB-B1", "JOB-B2", "JOB-B3"]
    }
  ]
}
resource "aws_glue_workflow" "example" {
  name = "example"
}
resource "aws_glue_trigger" "example-start" {
  name          = "trigger-start"
  type          = "ON_DEMAND"
  workflow_name = local.workflow_name
  actions {
    job_name = replace(lower(local.first_job), "_", "-")
  }
}
resource "aws_glue_trigger" "this" {
  for_each      = toset(local.my_map)
  name          = "trigger-inner--${lower(element(each.key, index(local.my_map, each.key)))}"
  type          = "CONDITIONAL"
  workflow_name = aws_glue_workflow.example.name
  predicate {
    conditions {
      job_name = replace(lower(element(each.key, index(local.my_map, each.key))), "_", "-")
      state    = "SUCCEEDED"
    }
  }
  actions {
    job_name = replace(lower(element(each.key, index(local.my_map, each.key) + 1)), "_", "-")
  }
}
When I try to do the "plan" I get this error:
| each.key is a string, known only after apply
| local.my_map is tuple with 2 elements
| Call to function "element" failed: cannot read elements from string.
So, how can I get all the rows of the object and step through each list's elements?
Any help or pointers would be much appreciated!
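One common approach (a sketch under my own assumptions, not a verified answer) is to flatten each flow into (predecessor, successor) pairs, then use for_each over a map keyed by those pairs:
```hcl
locals {
  # build a (prev, next) pair for every consecutive job in every flow
  job_pairs = flatten([
    for obj in local.my_map : [
      for i in range(length(obj.flow) - 1) : {
        prev = obj.flow[i]
        next = obj.flow[i + 1]
      }
    ]
  ])
}

resource "aws_glue_trigger" "this" {
  # for_each needs a map, so key each pair by "prev--next"
  for_each = { for p in local.job_pairs : "${p.prev}--${p.next}" => p }

  name          = "trigger-inner--${lower(each.value.next)}"
  type          = "CONDITIONAL"
  workflow_name = aws_glue_workflow.example.name
  predicate {
    conditions {
      job_name = replace(lower(each.value.prev), "_", "-")
      state    = "SUCCEEDED"
    }
  }
  actions {
    job_name = replace(lower(each.value.next), "_", "-")
  }
}
```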

read key value from a json file for terraform variables

I have the below sample JSON file:
[
  {
    "ParameterKey": "key1",
    "ParameterValue": "valueofthekey1"
  },
  {
    "ParameterKey": "key2",
    "ParameterValue": "valueofthekey2"
  }
]
Resource .tf file:
locals {
  local_data = jsondecode(file("./modules/path/file.json"))
}
resource "aws_ssm_parameter" "testing1" {
  type  = "String"
  name  = "test_name1"
  value = local.local_data.valueofthekey1
}
resource "aws_ssm_parameter" "testing2" {
  type  = "String"
  name  = "test_name2"
  value = local.local_data.valueofthekey2
}
Any leads on how I can read the JSON file and pass the value for key1 in the first resource, followed by key2 for the second resource?
I tried using locals, but got the below error:
12: value = local.local_data.testing1
|----------------
| local.local_data is tuple with 2 elements
If you want ParameterKey to be the name of the parameter, you can do:
resource "aws_ssm_parameter" "testing" {
  count = length(local.local_data)
  type  = "String"
  name  = local.local_data[count.index].ParameterKey
  value = local.local_data[count.index].ParameterValue
}
But if you want the entire json element to be value, then you can do:
resource "aws_ssm_parameter" "testing" {
  count = length(local.local_data)
  type  = "String"
  name  = "test_name${count.index}"
  value = jsonencode(local.local_data[count.index])
}
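As a variation (my own sketch, not part of the original answer), for_each keyed by ParameterKey avoids resources being destroyed and recreated when the list order in the JSON changes, which can happen with count:
```hcl
locals {
  local_data = jsondecode(file("./modules/path/file.json"))
}

resource "aws_ssm_parameter" "testing" {
  # key each instance by ParameterKey so reordering the JSON
  # does not destroy and recreate parameters
  for_each = { for p in local.local_data : p.ParameterKey => p.ParameterValue }

  type  = "String"
  name  = each.key
  value = each.value
}
```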

terraform plan 'string required' dynamodb_table_item

I need to add a set of strings to a dynamodb_table_item resource.
# my.tfvars
client_days = [
  "2021-05-08",               # May
  "2021-06-12", "2021-06-26", # June
]
# main.tf
variable "client_days" {
  type        = set(string)
  description = "Client days."
}
resource "aws_dynamodb_table_item" "client_days" {
  table_name = aws_dynamodb_table.periods.name
  hash_key   = "name"
  item = <<EOF
{
  "name": { "S": "client-days" },
  "days": {
    "SS" : "${var.client_days}"
  }
}
EOF
}
The resulting error looks like this:
32: item = <<EOF
33: {
34: "name": { "S": "client-days" },
35: "days": {
36: "SS" : "${var.client_days}"
37: }
38: }
39: EOF
|----------------
| var.client_days is set of string with 11 elements
Cannot include the given value in a string template: string required.
I have no clue how to solve this.
I also tried converting that list into a string with join().
You have to use jsonencode:
resource "aws_dynamodb_table_item" "client_days" {
  table_name = "testdb"
  hash_key   = "name"
  item = <<EOF
{
  "name": { "S": "client-days" },
  "days": {
    "SS" : ${jsonencode(var.client_days)}
  }
}
EOF
}
For an argument that expects entirely JSON, it's typically best to produce the entire value with jsonencode, which then avoids various JSON syntax and Terraform templating issues:
resource "aws_dynamodb_table_item" "client_days" {
  table_name = "testdb"
  hash_key   = "name"
  item = jsonencode({
    name = {
      S = "client-days"
    }
    days = {
      SS = var.client_days
    }
  })
}
This way you can mix static data with references in your argument to jsonencode, without any need for string templating.

Terraform create weird for_each

I want to create a for_each loop that iterates only over the objects in an array that have a specific key/value pair.
My input variables are:
inputs = {
  names = ["first", "second"]
  lifecycle_rules = [
    {
      name = "first"
      condition = {
        age = "1"
      }
      action = {
        type = "Delete"
      }
    },
    {
      condition = {
        age = "2"
      }
      action = {
        type = "Delete"
      }
    },
    {
      name = "second"
      condition = {
        age = "3"
      }
      action = {
        type = "Delete"
      }
    },
    {
      condition = {
        age = "4"
      }
      action = {
        type = "Delete"
      }
    }
  ]
}
In my main.tf (for deploying GCP buckets), I want to separate the lifecycle rules per bucket and apply only the rules that contain the bucket's name.
If anyone has an idea how to modify the for_each code below to work, I would highly appreciate it. I believe only the for_each needs to change so that it loops over the right elements from the var.lifecycle_rules set (say, only the objects in that list that have name = first).
resource "google_storage_bucket" "buckets" {
  count = length(var.names)
  name  = lower(element(var.names, count.index))
  ...
  dynamic "lifecycle_rule" {
    #for_each = length(lookup(lifecycle_rules[lookup(element(var.names, count.index))])
    for_each = lifecycle_rules
    content {
      action {
        type          = lifecycle_rule.value.action.type
        storage_class = lookup(lifecycle_rule.value.action, "storage_class", null)
      }
      condition {
        #age = lifecycle_rule.value.name == element(var.names, count.index) ? lookup(lifecycle_rule.value.condition, "age", null) : null
        age = lookup(lifecycle_rule.value.condition, "age", null)
        ...
I think this "weird" loop can be implemented in two stages.
Reorganize lifecycle_rules into a map based on names
variable "input" {
  default = {
    names = ["first", "second"],
    lifecycle_rules = [
      {
        name = "first",
        condition = {
          age = "1"
        },
        action = {
          type = "Delete"
        }
      },
      {
        condition = {
          age = "2"
        },
        action = {
          type = "Delete"
        }
      },
      {
        name = "second",
        condition = {
          age = "3"
        },
        action = {
          type = "Delete"
        }
      },
      {
        condition = {
          age = "4"
        },
        action = {
          type = "Delete"
        }
      }
    ]
  }
}
locals {
  new = {
    for name in var.input.names :
    name => [
      for rule in var.input.lifecycle_rules :
      contains(keys(rule), "name") ? (rule.name == name ? rule : null) : null
    ]
  }
}
which will give local.new in the form of:
{
  "first" = [
    {
      "action" = {
        "type" = "Delete"
      }
      "condition" = {
        "age" = "1"
      }
      "name" = "first"
    },
    null,
    null,
    null,
  ]
  "second" = [
    null,
    null,
    {
      "action" = {
        "type" = "Delete"
      }
      "condition" = {
        "age" = "3"
      }
      "name" = "second"
    },
    null,
  ]
}
Perform the for_each
resource "google_storage_bucket" "buckets" {
  for_each = toset(var.input.names)
  name     = each.key
  dynamic "lifecycle_rule" {
    # iterate for each name, skipping null values
    for_each = [for v in local.new[each.key] : v if v != null]
    content {
      action {
        type          = lifecycle_rule.value["action"].type
        storage_class = lookup(lifecycle_rule.value["action"], "storage_class", null)
      }
      condition {
        age = lookup(lifecycle_rule.value["condition"], "age", null)
      }
    }
  }
}
I could only verify the first stage and part of the second (using aws_autoscaling_group and its multiple tag components). I don't have access to Google Cloud to fully test the code.
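As a side note (my own sketch, not from the original answer), the null placeholders can be avoided entirely with an if clause in the for expression, which makes the later filtering step unnecessary:
```hcl
locals {
  new = {
    for name in var.input.names :
    name => [
      for rule in var.input.lifecycle_rules :
      # try() (Terraform >= 0.13) yields null when the rule has no "name"
      rule if try(rule.name, null) == name
    ]
  }
}
```
With this shape, the dynamic block can iterate local.new[each.key] directly, without the `v if v != null` filter.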

How to add Glue Table with struct type column using Terraform?

I am enabling Athena to query on Cloudtrail s3 logs using Terraform.
To do this, I need to create database and tables in Glue Catalog.
I am following this link.
In Terraform I am using aws_glue_catalog_table resource.
How can I define columns with type struct and Array in terraform file?
I tried defining it the below way, but it did not work.
resource "aws_glue_catalog_database" "cloud_logs" {
  name = "trail_logs_db"
}
resource "aws_glue_catalog_table" "cloud_table" {
  name          = "trail_logs"
  database_name = aws_glue_catalog_database.cloud_logs.name
  table_type    = "EXTERNAL_TABLE"
  parameters = {
    EXTERNAL = "TRUE"
  }
  storage_descriptor {
    location      = "s3://<BUCKET NAME>/AWSLogs/<AWS ACCOUNT ID>/"
    input_format  = "com.amazon.emr.cloudtrail.CloudTrailInputFormat"
    output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
    ser_de_info {
      name                  = "trail-logs"
      serialization_library = "com.amazon.emr.hive.serde.CloudTrailSerde"
      parameters {
        serialization.format = 1
      }
    }
    columns = [
      {
        name = "useridentity"
        type = "struct<type:string,
        principalid:string,
        arn:string,
        accountid:string,
        invokedby:string,
        accesskeyid:string,
        userName:string,>"
        comment = ""
      },
      {
        name = "resources"
        type = "array<STRUCT<ARN:string,
        accountId:string,
        type:string>>"
        comment = ""
      },
    ]
  }
}
When I run terraform init, it throws the below error:
Error: Error parsing test.tf: At 33:27: illegal char
I found a workaround, though it does not look good. It works when I format the type on one line:
{
  name    = "useridentity"
  type    = "struct<type:string, principalid:string, arn:string, accountid:string, invokedby:string, accesskeyid:string, userName:string,>"
  comment = ""
},
{
  name    = "resources"
  type    = "array<STRUCT<ARN:string, accountId:string, type:string>>"
  comment = ""
},
What you want is a heredoc:
{
  name = "useridentity"
  type = <<-EOT
    struct<type:string,
    principalid:string,
    arn:string,
    accountid:string,
    invokedby:string,
    accesskeyid:string,
    userName:string,>
  EOT
  comment = ""
},
{
  name = "resources"
  type = <<-EOT
    array<STRUCT<ARN:string,
    accountId:string,
    type:string>>
  EOT
  comment = ""
},
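One caveat (my own addition, not part of the original answer): a heredoc keeps the embedded newlines inside the type string, and Glue may expect the type on a single line. If that turns out to be a problem, the whitespace can be collapsed with replace() while keeping the readable source layout:
```hcl
locals {
  # hypothetical helper: write the struct readably, then strip newlines
  # and spaces so Glue receives a single-line type string
  useridentity_type = replace(replace(<<-EOT
    struct<type:string,
    principalid:string,
    arn:string,
    accountid:string,
    invokedby:string,
    accesskeyid:string,
    userName:string>
  EOT
  , "\n", ""), " ", "")
}
```
The column entry then becomes `type = local.useridentity_type`.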