Applying varying lifecycle policies to list of s3 buckets - amazon-web-services

I have a list of S3 buckets to which I want to apply different lifecycle policies.
variables.tf
variable "bucket_name" {
  type    = list(any)
  default = ["in", "out", "in-archive", "out-archive"]
}
For the first two buckets in the list I want their contents deleted after 180 days. The remaining two buckets should move their contents to the GLACIER storage class and then delete them after 600 days.
I have declared two different resource blocks for the varying policies, but the problem is: how do I make Terraform start counting the index from the 3rd element instead of the 1st?
resource block
resource "aws_s3_bucket" "bucket" {
  count  = length(var.bucket_name)
  bucket = var.bucket_name[count.index]
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
  count  = length(aws_s3_bucket.bucket)
  bucket = aws_s3_bucket.bucket[count.index].id # Want this index to stop at the 2nd element
  rule {
    status = "Enabled"
    id     = "bucket-lifecycle-rule"
    expiration {
      days = 180
    }
  }
}
resource "aws_s3_bucket_lifecycle_configuration" "archive_bucket_lifecycle_rule" {
  count  = length(aws_s3_bucket.bucket)
  bucket = aws_s3_bucket.bucket[count.index + 2].id # Want this index to begin at the 3rd
  rule {                                            # element and end at the 4th
    status = "Enabled"
    id     = "archive-bucket-lifecycle-rule"
    transition {
      days          = 181
      storage_class = "GLACIER"
    }
    expiration {
      days = 600
    }
  }
}
When I apply this, I get an error:
in resource "aws_s3_bucket_lifecycle_configuration" "archive_bucket_lifecycle_rule":
31: bucket = aws_s3_bucket.bucket[count.index + 2].id
├────────────────
│ aws_s3_bucket.bucket is tuple with 4 elements
│ count.index is 2
The given key does not identify an element in this collection value.

How about making the input variable a bit more complex to accommodate what you need? Here is a quick example:
provider "aws" {
  region = "us-east-1"
}
variable "buckets" {
  type = map(any)
  default = {
    "in"          = { expiration = 180, transition = 0 },
    "out"         = { expiration = 120, transition = 0 },
    "in-archive"  = { expiration = 200, transition = 180 },
    "out-archive" = { expiration = 360, transition = 180 }
  }
}
resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets
  bucket   = each.key
}
resource "aws_s3_bucket_lifecycle_configuration" "lifecycle" {
  for_each = var.buckets
  bucket   = aws_s3_bucket.bucket[each.key].id
  rule {
    status = "Enabled"
    id     = "bucket-lifecycle-rule"
    expiration {
      days = each.value.expiration
    }
  }
  rule {
    status = each.value.transition > 0 ? "Enabled" : "Disabled"
    id     = "archive-bucket-lifecycle-rule"
    transition {
      days          = each.value.transition
      storage_class = "GLACIER"
    }
  }
}
Now that the variable is type = map(any), we can pass a more complex object carrying each bucket's lifecycle settings, and you can make that object as complex as you need to fit more elaborate rules.
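If you want stricter validation than map(any), the same idea can be expressed with a typed object variable; a sketch, assuming Terraform >= 1.3 for optional() defaults:

```hcl
variable "buckets" {
  type = map(object({
    expiration = number
    transition = optional(number, 0) # defaults to 0, i.e. no GLACIER transition
  }))
  default = {
    "in"          = { expiration = 180 }
    "out"         = { expiration = 120 }
    "in-archive"  = { expiration = 200, transition = 180 }
    "out-archive" = { expiration = 360, transition = 180 }
  }
}
```

With this type, a non-numeric expiration or a misspelled attribute fails at plan time instead of deep inside the resource.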


I have two buckets, A and B, which require lifecycle policies with different expiration days.
Since both are in the same root module directory, they share a common variables.tf file.
Lifecycle policy code for both A and B
Note: the two buckets are defined in different files because they have different configurations.
Bucket A file's code
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket # for bucket B: aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days}, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
B bucket's file code
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days}, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
variables.tf
Bucket A needs objects to expire after 30 days (current versions) and 3 days (noncurrent versions), whereas bucket B needs 0 (current) and 90 (noncurrent) respectively.
How do I achieve this?
Note: I do not want to hardcode the values for either bucket.
variable "s3_expiration_days" {
  type        = number
  description = "S3 bucket objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#days"
  default     = 30
}
variable "s3_noncurrent_version_expiration_days" {
  type        = number
  description = "S3 bucket noncurrent version objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#noncurrent_days"
  default     = 3
}
I think the easiest and most scalable way would be to do it through a single variable map:
variable "buckets_config" {
  default = {
    "bucket-name-a" = {
      s3_expiration_days                    = 30
      s3_noncurrent_version_expiration_days = 3
    }
    "bucket-name-b" = {
      s3_expiration_days                    = 0
      s3_noncurrent_version_expiration_days = 90
    }
  }
}
# then
resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets_config
  bucket   = each.key
}
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  for_each = var.buckets_config
  bucket   = aws_s3_bucket.bucket[each.key].bucket
  rule {
    id     = "expire current version after ${each.value.s3_expiration_days}, noncurrent version after ${each.value.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = each.value.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = each.value.s3_noncurrent_version_expiration_days
    }
  }
}
UPDATE
For two different buckets:
# for bucket A
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days}, noncurrent version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days
    }
  }
}
# for bucket B
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days}, noncurrent version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days
    }
  }
}

Accessing values in list for modules in terraform

I am trying to refactor some Terraform code during an upgrade.
I'm using an S3 module that takes some lifecycle configuration rules:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id                                = "id_name"
      enabled                           = true
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = 7
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
Here is what the resource inside the module looks like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id                                = lookup(lifecycle_rule.value, "id", null)
      enabled                           = lookup(lifecycle_rule.value, "enabled", null)
      abort_incomplete_multipart_upload = lookup(lifecycle_rule.value, "abort_incomplete_multipart_upload", null)
      filter {
        and {
          prefix = lookup(lifecycle_rule.value, "prefix", null)
          tags   = lookup(lifecycle_rule.value, "tags", null)
        }
      }
    }
  }
}
Running plan gives me the following error:
on ../modules/s3/main.tf line 73, in resource "aws_s3_bucket_lifecycle_configuration" "main":
73: id = lookup(lifecycle_rule.id, null)
A managed resource "lifecycle_rule" "id" has not been declared in
module.s3_bucket.
2 questions:
1 - Looks like I'm not reaching the lifecycle_rule.value attribute in the list for the module, any help with the syntax?
2 - How to access the nested expiration.days value inside the module also?
Thanks!
The first part of your question: you need to use rule and not lifecycle_rule [1]. Make sure you understand this part:
The iterator argument (optional) sets the name of a temporary variable that represents the current element of the complex value. If omitted, the name of the variable defaults to the label of the dynamic block.
To complete the answer, accessing expiration.days is possible if you define a corresponding argument in the module. In other words, you need to add an expiration block to the module code [2].
There are a couple more issues with the code you currently have:
abort_incomplete_multipart_upload is a configuration block, the same as expiration
The expiration block accepts either a relative number of days (days) or an absolute date in RFC3339 format (date) [3]; the example below uses date
The enabled value cannot be a bool; it has to be either Enabled or Disabled (mind the capital first letter), and the name of the argument is status [4], not enabled
To sum up, here's what the code in the module should look like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id     = lookup(rule.value, "id", null)
      status = lookup(rule.value, "enabled", null)
      abort_incomplete_multipart_upload {
        days_after_initiation = lookup(rule.value, "abort_incomplete_multipart_upload", null)
      }
      filter {
        and {
          prefix = lookup(rule.value, "prefix", null)
          tags   = lookup(rule.value, "tags", null)
        }
      }
      expiration {
        date = lookup(rule.value.expiration, "days", null)
      }
    }
  }
}
The module should be called with the following variable values:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id                                = "id_name"
      enabled                           = "Enabled" # Mind the value
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = "2022-08-28T15:04:05Z" # RFC3339 format
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
[1] https://www.terraform.io/language/expressions/dynamic-blocks
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#expiration
[3] https://www.rfc-editor.org/rfc/rfc3339#section-5.8
[4] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#status
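Worth noting: since the expiration block also supports a relative days argument (a number), the day-based input from the original module call could be kept as-is. A sketch of just that block, using the same rule iterator as the module code above:

```hcl
expiration {
  days = lookup(rule.value.expiration, "days", null)
}
```

With this variant, the module call can keep passing expiration = { days = 7 } unchanged instead of an RFC3339 date string.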

Terraform aws_s3_bucket_replication_configuration can't generate multiple rules with for_each

I have an S3 bucket with the following "folder" structure:
Bucket1
├── Partner1
│   ├── Client1
│   │   ├── User1
│   │   └── User2
│   └── Client2
│       └── User1
└── Partner2
    └── Client1
        └── User1
and so on.
I'm trying to set up replication from this bucket to another such that a file placed in
Bucket1/Partner1/client1/User1/
replicates to
Bucket2/Partner1/client1/User1/,
a file placed in
Bucket1/Partner2/client1/User2/
replicates to
Bucket2/Partner2/client1/User2/,
and so on.
I'm trying to achieve this with the following terraform code:
locals {
  s3_input_folders = [
    "Partner1/client1/User1",
    "Partner1/client1/User2",
    "Partner1/client1/User3",
    "Partner1/client1/User4",
    "Partner1/client1/User5",
    "Partner1/client2/User1",
    "Partner1/client3/User1",
    "Partner2/client1/User1",
    "Partner3/client1/User1"
  ]
}
resource "aws_s3_bucket_replication_configuration" "replication" {
  for_each   = local.s3_input_folders
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  rule {
    id = each.value
    filter {
      prefix = each.value
    }
    status = "Enabled"
    destination {
      bucket        = "arn:aws:s3:::${var.app}-dev"
      storage_class = "ONEZONE_IA"
      access_control_translation {
        owner = "Destination"
      }
      account = var.dev_account_id
    }
    delete_marker_replication {
      status = "Enabled"
    }
  }
}
This does not loop and create a separate rule per folder; instead it overwrites the same rule on every run, and I end up with only one rule.
You should use a dynamic block:
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  dynamic "rule" {
    for_each = toset(local.s3_input_folders)
    content {
      id = rule.value
      filter {
        prefix = rule.value
      }
      status = "Enabled"
      destination {
        bucket        = "arn:aws:s3:::${var.app}-dev"
        storage_class = "ONEZONE_IA"
        access_control_translation {
          owner = "Destination"
        }
        account = var.dev_account_id
      }
      delete_marker_replication {
        status = "Enabled"
      }
    }
  }
}
Thanks, Marcin. The dynamic block construct you mentioned works to create the content blocks, but it fails to apply because AWS needs multiple replication rules to be differentiated by priority. Some slight modifications achieve this:
locals {
  s3_input_folders_list_counter = tolist([
    for i in range(length(local.s3_input_folders)) : i
  ])
  s3_input_folders_count_map = zipmap(local.s3_input_folders_list_counter, tolist(local.s3_input_folders))
}
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  dynamic "rule" {
    for_each = local.s3_input_folders_count_map
    content {
      id       = rule.key
      priority = rule.key
      filter {
        prefix = rule.value
      }
      status = "Enabled"
      destination {
        bucket        = "arn:aws:s3:::${var.app}-dev"
        storage_class = "ONEZONE_IA"
        access_control_translation {
          owner = "Destination"
        }
        account = var.dev_account_id
      }
      delete_marker_replication {
        status = "Enabled"
      }
    }
  }
}
which creates rules like these:
  + rule {
      + id       = "0"
      + priority = 0
      + status   = "Enabled"
      ...
    }
  + rule {
      + id       = "1"
      + priority = 1
      + status   = "Enabled"
      ...
    }
and so on...
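As an aside, the two locals above can be collapsed into a single for expression, since iterating a list yields the index and the value directly; an equivalent sketch:

```hcl
locals {
  s3_input_folders_count_map = {
    for idx, folder in local.s3_input_folders : idx => folder
  }
}
```

This produces the same index-to-prefix map consumed by the dynamic "rule" block, without the intermediate counter list and zipmap.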

Terraform count within for_each loop

I'm trying to create GCP SQL databases by iterating with both Terraform's for_each and count: one loop over the map keys (maindb and replicadb), and the other over each db_list of strings.
Unfortunately, I get the error that appears below.
Is it possible to do this in Terraform?
variables.tf
variable "sql_var" {
  default = {
    "maindb" = {
      "db_list"   = ["firstdb", "secondsdb", "thirddb"],
      "disk_size" = "20",
    },
    "replicadb" = {
      "db_list"   = ["firstdb"],
      "disk_size" = "",
    }
  }
}
main.tf
resource "google_sql_database_instance" "master_sql_instance" {
  ...
}
resource "google_sql_database" "database" {
  for_each = var.sql_var
  name     = element(each.value.db_list, count.index)
  instance = google_sql_database_instance.master_sql_instance[each.key].name
  count    = length(each.value.db_list)
}
Error Message
Error: Invalid combination of "count" and "for_each"
  on ../main.tf line 43, in resource "google_sql_database" "database":
  43: for_each = var.sql_var
The "count" and "for_each" meta-arguments are mutually-exclusive, only one should be used to be explicit about the number of resources to be created.
What the error message tells you is that you cannot use count and for_each together. It looks like you are trying to create three main databases and one replica database, am I correct? What I would do is create your two instances and then transform your map variable to create the databases.
terraform {
  required_version = ">=0.13.3"
  required_providers {
    google = ">=3.36.0"
  }
}
variable "sql_instances" {
  default = {
    "main_instance" = {
      "db_list"   = ["first_db", "second_db", "third_db"],
      "disk_size" = "20",
    },
    "replica_instance" = {
      "db_list"   = ["first_db"],
      "disk_size" = "20",
    }
  }
}
locals {
  databases = flatten([
    for key, value in var.sql_instances : [
      for item in value.db_list : {
        name     = item
        instance = key
      }
    ]
  ])
  sql_databases = {
    for item in local.databases :
    uuid() => item
  }
}
resource "google_sql_database_instance" "sql_instance" {
  for_each = var.sql_instances
  name     = each.key
  settings {
    disk_size = each.value.disk_size
    tier      = "db-f1-micro"
  }
}
resource "google_sql_database" "sql_database" {
  for_each = local.sql_databases
  name     = each.value.name
  instance = each.value.instance
  depends_on = [
    google_sql_database_instance.sql_instance,
  ]
}
Then, first run terraform apply -target=google_sql_database_instance.sql_instance and after this run terraform apply.
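The two-step, -target apply is needed because uuid() returns a new value on every run, so the for_each keys are unknown at plan time and would also churn between runs. A sketch of an alternative that derives stable keys from the data itself:

```hcl
locals {
  sql_databases = {
    for item in local.databases :
    "${item.instance}-${item.name}" => item
  }
}
```

With stable keys like these, a plain terraform apply works in one pass and the database resources keep the same addresses across runs.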

Terraform create a Cloudwatch rule with multiple targets

From the Terraform docs - https://www.terraform.io/docs/providers/aws/r/cloudwatch_event_target.html
I don't see an option to map multiple targets to the same Cloudwatch rule. It only takes an arn field which accepts one resource. I'm trying to map 5 Lambdas to the same Cloudwatch rule. Does Terraform support this?
EDIT: How can I attach only 5 lambdas? If I've created 15 lambdas, I want to attach 5 each to 3 cloudwatch rules.
Got it working! I had to divide the count of the rules by 5 when I assigned targets to rules. This is roughly what it looks like:
resource "aws_cloudwatch_event_rule" "arule" {
  count      = "${ceil(length(var.lambda_arns) / 5.0)}" # Needs to be 5.0 to force float computation
  name       = "${var.rule_name}${format("-%d", count.index)}"
  is_enabled = true
}
resource "aws_cloudwatch_event_target" "atarget" {
  depends_on = ["aws_cloudwatch_event_rule.arule"]
  count      = "${length(var.lambda_arns)}"
  rule       = "${aws_cloudwatch_event_rule.arule.*.name[count.index / 5]}"
  arn        = "${var.lambda_arns[count.index]}"
}
I created the event rules based on the number of lambdas (i.e., if there are 10 lambdas, 2 rules are created).
I created the targets based on number of lambdas (i.e., if there are 10 lambdas, 10 targets are created).
I assigned the targets proportionally among the rules by dividing the count.index by 5 (the same logic used to determine the count of rules).
Assuming you created all your lambdas using the same Terraform resource with count, you can use count on this as well:
resource "aws_cloudwatch_event_target" "cw_target" {
  count = length(aws_lambda_function.my_lambdas)
  rule  = "${aws_cloudwatch_event_rule.my_rule.name}"
  arn   = "${aws_lambda_function.my_lambdas.*.arn[count.index]}"
}
Here is what I did. I omit the target_id of the aws_cloudwatch_event_target resource (very important), define local variables (in this example, targets), then loop over them to create multiple aws_cloudwatch_event_target resources and multiple assessment templates.
locals {
  stack_name_prefix     = "Inspector"
  rules_package_arn_cis = "arn:aws:inspector:ap-southeast-2:454640832652:rulespackage/0-Vkd2Vxjq"
  default_target = {
    rules    = [local.rules_package_arn_cis],
    duration = 3600
  }
  targets = [
    merge(local.default_target, {
      name = "data_indexer",
      tags = {
        Function = "LR - DX"
      },
    }),
    merge(local.default_target, {
      name = "ai_engine",
      tags = {
        Function = "LR - AIE"
      },
    }),
    merge(local.default_target, {
      name = "data_processor",
      tags = {
        Function = "LR - Data Processor"
      },
    }),
    merge(local.default_target, {
      name = "platform_manager",
      tags = {
        Function = "LR - PM"
      },
    })
  ]
}
resource "aws_inspector_assessment_template" "assessment_template" {
  count              = length(local.targets)
  name               = "${local.stack_name_prefix}_${local.targets[count.index]["name"]}_assessment_template"
  target_arn         = aws_inspector_assessment_target.assessment[count.index].arn
  duration           = local.default_target.duration
  rules_package_arns = [local.rules_package_arn_cis]
}
resource "aws_cloudwatch_event_target" "event_target_for_inspector_assessment_template" {
  count = length(local.targets)
  rule  = aws_cloudwatch_event_rule.event_rule_for_inspector_assessment_template.name
  # target_id = "amazon_inspector_assessment" ## Don't use target_id; it will mess up the CloudWatch event targets and generate only one target instead of 4
  arn      = aws_inspector_assessment_template.assessment_template[count.index].arn
  role_arn = aws_iam_role.inspector_assessment_template.arn
}
Another option is a target module called with for_each:
module "eventbridgetarget" {
  for_each  = var.rule_and_target_details
  source    = "git::ssh://git#bitbucket.int.ally.com/tf/terraform-modules-aws-eventbridge.git//modules/target?ref=v1"
  rule_arn  = module.eventbridgerule.rule.arn
  name      = each.value.name
  namespace = module.namespace.lower_short_name
  tags      = module.namespace.tags
  #arn = module.lambda.arn
  arn = each.value.arn
}
Now pass the values in tfvars like below:
rule_and_target_details = {
  "firsttarget" = {
    name = "getentities"
    arn  = "arn:aws:execute-api:us-east-1:2XXXXXXX:92mchkioeh/api/GET/getEntities"
  }
  "secondtarget" = {
    name = "getactivitylog"
    arn  = "arn:aws:execute-api:us-east-1:2XXXXXX:92mchkioeh/api/GET/getActivityLog"
  }
  "thirdtarget" = {
    name = "searchactivitylog"
    arn  = "arn:aws:execute-api:us-east-1:XXXXXX:92mchkioeh/api/GET/searchActivityLog"
  }
}