terraform: Overriding same variables in different tf files

I have two buckets, A and B, which require lifecycle policies with different expiration days.
Since they are both in the same root module directory, they share a common variables.tf file.
The lifecycle policy code for both A and B is below.
Note: the two buckets are defined in separate files, as each has a different configuration.
Bucket A file's code:
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket # for bucket B: aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days} days, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
Bucket B file's code:
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days} days, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
variables.tf:
variable "s3_expiration_days" {
  type        = number
  description = "S3 bucket objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#days"
  default     = 30
}
variable "s3_noncurrent_version_expiration_days" {
  type        = number
  description = "S3 bucket noncurrent version objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#noncurrent_days"
  default     = 3
}
Bucket A needs 30 days (current) and 3 days (noncurrent versions) to expire, whereas bucket B needs 0 days (current) and 90 days (noncurrent versions), respectively.
How do I achieve this?
Note: I do not want to hardcode values for either bucket.

I think the easiest and most scalable way is to do it through a single map variable:
variable "buckets_config" {
  default = {
    "bucket-name-a" = {
      s3_expiration_days                    = 30
      s3_noncurrent_version_expiration_days = 3
    }
    "bucket-name-b" = {
      s3_expiration_days                    = 0
      s3_noncurrent_version_expiration_days = 90
    }
  }
}
# then
resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets_config
  bucket   = each.key
}
resource "aws_s3_bucket_lifecycle_configuration" "log" {
  for_each = var.buckets_config
  bucket   = aws_s3_bucket.bucket[each.key].bucket
  rule {
    id     = "expire current version after ${each.value.s3_expiration_days} days, noncurrent version after ${each.value.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = each.value.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = each.value.s3_noncurrent_version_expiration_days
    }
  }
}
UPDATE
For two different buckets:
# for bucket A
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days} days, noncurrent version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days
    }
  }
}
# for bucket B
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days} days, noncurrent version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days
    }
  }
}
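If the per-bucket numbers should stay out of the code entirely, the map can also be supplied through a terraform.tfvars file instead of the variable's default; a sketch (the bucket names here are placeholders):

```hcl
# terraform.tfvars -- values supplied at plan/apply time, not hardcoded in .tf files
buckets_config = {
  "bucket-name-a" = {
    s3_expiration_days                    = 30
    s3_noncurrent_version_expiration_days = 3
  }
  "bucket-name-b" = {
    s3_expiration_days                    = 0
    s3_noncurrent_version_expiration_days = 90
  }
}
```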

Related

Applying varying lifecycle policies to list of s3 buckets

I have a list of S3 buckets on which I want to apply different lifecycle policies.
variables.tf
variable "bucket_name" {
  type    = list(any)
  default = ["in", "out", "in-archive", "out-archive"]
}
For the first two buckets in the list I want their contents deleted after 180 days. The remaining two buckets should move their contents to the GLACIER storage class and then remove them after 600 days.
I have declared two different resource blocks for the varying policies, but the problem is how to make Terraform start counting the index from the 3rd element instead of the 1st.
resource block
resource "aws_s3_bucket" "bucket" {
  count  = length(var.bucket_name)
  bucket = var.bucket_name[count.index]
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
  count  = length(aws_s3_bucket.bucket)
  bucket = aws_s3_bucket.bucket[count.index].id # want this index to stop at the 2nd element
  rule {
    status = "Enabled"
    id     = "bucket-lifecycle-rule"
    expiration {
      days = 180
    }
  }
}
resource "aws_s3_bucket_lifecycle_configuration" "archive_bucket_lifecycle_rule" {
  count  = length(aws_s3_bucket.bucket)
  bucket = aws_s3_bucket.bucket[count.index + 2].id # want this index to run from the 3rd to the 4th element
  rule {
    status = "Enabled"
    id     = "archive-bucket-lifecycle-rule"
    transition {
      days          = 181
      storage_class = "GLACIER"
    }
    expiration {
      days = 600
    }
  }
}
When I apply this, I get an error:
in resource "aws_s3_bucket_lifecycle_configuration" "archive_bucket_lifecycle_rule":
31: bucket = aws_s3_bucket.bucket[count.index + 2].id
├────────────────
│ aws_s3_bucket.bucket is tuple with 4 elements
│ count.index is 2
The given key does not identify an element in this collection value.
How about making the input variable a bit more complex to accommodate what you need?
Here is a quick example:
provider "aws" { region = "us-east-1" }
variable "buckets" {
  type = map(any)
  default = {
    "in" : { expiration : 180, transition : 0 },
    "out" : { expiration : 120, transition : 0 },
    "in-archive" : { expiration : 200, transition : 180 },
    "out-archive" : { expiration : 360, transition : 180 }
  }
}
resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets
  bucket   = each.key
}
resource "aws_s3_bucket_lifecycle_configuration" "lifecycle" {
  for_each = var.buckets
  bucket   = aws_s3_bucket.bucket[each.key].id
  rule {
    status = "Enabled"
    id     = "bucket-lifecycle-rule"
    expiration {
      days = each.value.expiration
    }
  }
  rule {
    status = each.value.transition > 0 ? "Enabled" : "Disabled"
    id     = "archive-bucket-lifecycle-rule"
    transition {
      days          = each.value.transition
      storage_class = "GLACIER"
    }
  }
}
Now that the variable is type = map(any), we can pass a more complex object carrying the per-bucket lifecycle settings; you can make it as complex as you need to accommodate more elaborate rules.
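As a side note (my addition, not part of the original answer): map(any) leaves the values unchecked, so a typo in a key only surfaces when the rule is evaluated. A stricter object type catches that at plan time; a possible sketch:

```hcl
variable "buckets" {
  # explicit object type: every bucket entry must supply both numbers
  type = map(object({
    expiration = number
    transition = number
  }))
  default = {
    "in"          = { expiration = 180, transition = 0 }
    "out"         = { expiration = 120, transition = 0 }
    "in-archive"  = { expiration = 200, transition = 180 }
    "out-archive" = { expiration = 360, transition = 180 }
  }
}
```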

Accessing values in list for modules in terraform

I am trying to refactor some terraform code when doing an upgrade.
I'm using some S3 module that takes some lifecycle configuration rules:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id                                = "id_name"
      enabled                           = true
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = 7
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
Here is what the resource inside the module looks like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id                                = lookup(lifecycle_rule.value, "id", null)
      enabled                           = lookup(lifecycle_rule.value, "enabled", null)
      abort_incomplete_multipart_upload = lookup(lifecycle_rule.value, "abort_incomplete_multipart_upload", null)
      filter {
        and {
          prefix = lookup(lifecycle_rule.value, "prefix", null)
          tags   = lookup(lifecycle_rule.value, "tags", null)
        }
      }
    }
  }
}
Running plan gives me the following error:
on ../modules/s3/main.tf line 73, in resource "aws_s3_bucket_lifecycle_configuration" "main":
73: id = lookup(lifecycle_rule.id, null)
A managed resource "lifecycle_rule" "id" has not been declared in
module.s3_bucket.
2 questions:
1 - Looks like I'm not reaching the lifecycle_rule.value attribute in the list for the module, any help with the syntax?
2 - How to access the nested expiration.days value inside the module also?
Thanks!
The first part of your question: you need to use the rule and not lifecycle_rule [1]. Make sure you understand this part:
The iterator argument (optional) sets the name of a temporary variable that represents the current element of the complex value. If omitted, the name of the variable defaults to the label of the dynamic block.
To complete the answer, accessing expiration.days is possible if you define a corresponding argument in the module. In other words, you need to add expiration block to the module code [2].
There are a couple more issues with the code you currently have:
The abort_incomplete_multipart_upload is a configuration block, the same as expiration
If you set the expiration by date, the value must be an RFC3339 timestamp, not a number of days (the provider also accepts a separate numeric days argument) [3]
The enabled value cannot be a bool; it has to be either Enabled or Disabled (mind the capital first letter), and the name of the argument is status [4], not enabled
To sum up, here's what the code in the module should look like:
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  count  = length(var.lifecycle_rule) > 0 ? 1 : 0
  bucket = aws_s3_bucket.main.id
  dynamic "rule" {
    for_each = var.lifecycle_rule
    content {
      id     = lookup(rule.value, "id", null)
      status = lookup(rule.value, "enabled", null)
      abort_incomplete_multipart_upload {
        days_after_initiation = lookup(rule.value, "abort_incomplete_multipart_upload", null)
      }
      filter {
        and {
          prefix = lookup(rule.value, "prefix", null)
          tags   = lookup(rule.value, "tags", null)
        }
      }
      expiration {
        date = lookup(rule.value.expiration, "days", null)
      }
    }
  }
}
The module should be called with the following variable values:
module "s3_bucket" {
  source = "../modules/s3"
  lifecycle_rule = [
    {
      id                                = "id_name"
      enabled                           = "Enabled" # mind the value
      abort_incomplete_multipart_upload = 7
      expiration = {
        days = "2022-08-28T15:04:05Z" # RFC3339 format
      }
      noncurrent_version_expiration = {
        days = 7
      }
    }
  ]
}
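For completeness (this declaration is not shown in the original answer and is a sketch): the lifecycle_rule variable inside the module needs to be declared along these lines, and a loose element type keeps the lookup(..., null) defaults working for keys that are omitted:

```hcl
variable "lifecycle_rule" {
  description = "List of lifecycle rule objects consumed by the dynamic \"rule\" block"
  type        = list(any)
  default     = []
}
```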
[1] https://www.terraform.io/language/expressions/dynamic-blocks
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#expiration
[3] https://www.rfc-editor.org/rfc/rfc3339#section-5.8
[4] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#status

Terraform aws_s3_bucket_replication_configuration can't generate multiple rules with for_each

I have an S3 bucket with the following "folder" structure:
Bucket1 ----> /Partner1 ----> /Client1 ----> /User1
         |              |              |---> /User2
         |              |
         |              |---> /Client2 ----> /User1
         |
         |---> /Partner2 ----> /Client1 ----> /User1
and so on.
I'm trying to setup replication from this bucket to another such that a file placed in
Bucket1/Partner1/client1/User1/
should replicate to
Bucket2/Partner1/client1/User1/,
Bucket1/Partner2/client1/User2/
should replicate to
Bucket2/Partner2/client1/User2/,
and so on.
I'm trying to achieve this with the following terraform code:
locals {
  s3_input_folders = [
    "Partner1/client1/User1",
    "Partner1/client1/User2",
    "Partner1/client1/User3",
    "Partner1/client1/User4",
    "Partner1/client1/User5",
    "Partner1/client2/User1",
    "Partner1/client3/User1",
    "Partner2/client1/User1",
    "Partner3/client1/User1"
  ]
}
resource "aws_s3_bucket_replication_configuration" "replication" {
  for_each   = local.s3_input_folders
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  rule {
    id = each.value
    filter {
      prefix = each.value
    }
    status = "Enabled"
    destination {
      bucket        = "arn:aws:s3:::${var.app}-dev"
      storage_class = "ONEZONE_IA"
      access_control_translation {
        owner = "Destination"
      }
      account = var.dev_account_id
    }
    delete_marker_replication {
      status = "Enabled"
    }
  }
}
This does not loop and create a separate rule per folder; rather, it overwrites the same rule on every run, and I only get one rule as a result.
You should use a dynamic block:
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  dynamic "rule" {
    for_each = toset(local.s3_input_folders)
    content {
      id = rule.value
      filter {
        prefix = rule.value
      }
      status = "Enabled"
      destination {
        bucket        = "arn:aws:s3:::${var.app}-dev"
        storage_class = "ONEZONE_IA"
        access_control_translation {
          owner = "Destination"
        }
        account = var.dev_account_id
      }
      delete_marker_replication {
        status = "Enabled"
      }
    }
  }
}
Thanks, Marcin. The dynamic block construct you mentioned creates the content blocks, but it fails to apply because AWS requires multiple replication rules to be differentiated by priority. Some slight modifications achieve this:
locals {
  s3_input_folders_list_counter = tolist([
    for i in range(length(local.s3_input_folders)) : i
  ])
  s3_input_folders_count_map = zipmap(local.s3_input_folders_list_counter, tolist(local.s3_input_folders))
}
resource "aws_s3_bucket_replication_configuration" "replication" {
  depends_on = [aws_s3_bucket_versioning.source_bucket]
  role       = aws_iam_role.s3-replication-prod[0].arn
  bucket     = aws_s3_bucket.source_bucket.id
  dynamic "rule" {
    for_each = local.s3_input_folders_count_map
    content {
      id       = rule.key
      priority = rule.key
      filter {
        prefix = rule.value
      }
      status = "Enabled"
      destination {
        bucket        = "arn:aws:s3:::${var.app}-dev"
        storage_class = "ONEZONE_IA"
        access_control_translation {
          owner = "Destination"
        }
        account = var.dev_account_id
      }
      delete_marker_replication {
        status = "Enabled"
      }
    }
  }
}
which creates rules like these:
+ rule {
    + id       = "0"
    + priority = 0
    + status   = "Enabled"
    ...
  }
+ rule {
    + id       = "1"
    + priority = 1
    + status   = "Enabled"
    ...
  }
and so on...
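As an aside (my suggestion, not part of the original answer): the two helper locals above can be collapsed into a single for expression that builds the same index => folder map, since Terraform for expressions over a list expose the element index directly:

```hcl
locals {
  # equivalent to the range/zipmap construction above:
  # { 0 = "Partner1/client1/User1", 1 = "Partner1/client1/User2", ... }
  s3_input_folders_count_map = { for i, v in local.s3_input_folders : i => v }
}
```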

How can I pass the rule argument of aws_s3_bucket_lifecycle_configuration as a variable using Terraform?

Using Terraform version 1.0.10 and AWS provider 4, my code deploys an AWS S3 bucket using a module that has aws_s3_bucket_lifecycle_configuration as required, and it works fine:
main.tf code:
module "s3_bucket" { # module name assumed; the opening line is missing from the original snippet
  source                = "./modules/aws_plain_resource/s3/"
  bucket_name           = var.bucket_name_cloud_events
  sse_algorithm         = var.sse_algorithm
  logging_target_bucket = var.logging_target_bucket
  lifecycle_status      = "Enabled"
  first_period          = 30
  fp_storage_class      = "STANDARD_IA"
  second_period         = 0
  sp_storage_class      = "GLACIER"
  object_expiration     = 1060
  common_tags = {
    data_classification = "confidential"
    risk_classification = "high"
    tyro_team           = "operations_team"
  }
}
Module code:
resource "aws_s3_bucket" "this" { # opening line restored from the later references to aws_s3_bucket.this
  bucket = var.bucket_name
  tags = merge(var.common_tags, {
    "Name" = var.bucket_name
  })
}
resource "aws_s3_bucket_acl" "this" {
  bucket = aws_s3_bucket.this.id
  acl    = "private"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = var.kms_master_key_arn
      sse_algorithm     = var.sse_algorithm
    }
  }
}
resource "aws_s3_bucket_lifecycle_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    id     = "iac_lifecycle_rule"
    status = var.lifecycle_status
    transition {
      days          = var.first_period
      storage_class = var.fp_storage_class
    }
    transition {
      days          = var.second_period
      storage_class = var.sp_storage_class
    }
    expiration {
      days = var.object_expiration
    }
  }
}
output "bucket-id" {
  value = aws_s3_bucket.this.id
}
However, I would like the rule argument to be configurable from the main.tf file. Is there a way to pass the rule configuration as a variable to the Terraform S3 module?
The rule is:
rule {
  id     = "iac_lifecycle_rule"
  status = var.lifecycle_status
  transition {
    days          = var.first_period
    storage_class = var.fp_storage_class
  }
  transition {
    days          = var.second_period
    storage_class = var.sp_storage_class
  }
  expiration {
    days = var.object_expiration
  }
}
You have to define a rule map variable in your module and then pass it in. Note that an object cannot contain two keys both named transition, so the two transitions get distinct keys here:
module "bucket" {
  source = "./modules/aws_plain_resource/s3/"
  rule = {
    id     = "iac_lifecycle_rule"
    status = var.lifecycle_status
    first_transition = {
      days          = var.first_period
      storage_class = var.fp_storage_class
    }
    second_transition = {
      days          = var.second_period
      storage_class = var.sp_storage_class
    }
    expiration = {
      days = var.object_expiration
    }
  }
}
and then you have to explicitly set all the values in aws_s3_bucket_lifecycle_configuration:
resource "aws_s3_bucket_lifecycle_configuration" "this" {
  bucket = aws_s3_bucket.this.id
  rule {
    id     = var.rule["id"]
    status = var.rule["status"]
    transition {
      days          = var.rule["first_transition"]["days"]
      storage_class = var.rule["first_transition"]["storage_class"]
    }
    # and so on for the rest
  }
}

Terraform s3 bucket lifecycle_rule overlapping prefixes

What will happen if I apply multiple overlapping lifecycle rules to my S3 bucket?
I want to keep 7 days of old versions by default, but for a specific prefix I want it to be different (1 day).
Can I override the lifecycle rule for a sub-prefix just by adding a rule below it?
resource "aws_s3_bucket" "my-bucket" {
  bucket = "my-bucket"
  acl    = "private"
  versioning {
    enabled = true
  }
  lifecycle_rule {
    id      = "clean_old_versions"
    prefix  = ""
    enabled = true
    noncurrent_version_expiration {
      days = 7
    }
  }
  lifecycle_rule {
    id      = "clean_old_versions_playground"
    prefix  = "playground/"
    enabled = true
    noncurrent_version_expiration {
      days = 1
    }
  }
}
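Note that with AWS provider v4 and later, the inline lifecycle_rule block on aws_s3_bucket is deprecated in favor of the standalone aws_s3_bucket_lifecycle_configuration resource. The same pair of rules could be sketched like this (resource name and attribute mapping are my assumptions, not from the question):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "my_bucket" {
  bucket = aws_s3_bucket.my-bucket.id

  rule {
    id     = "clean_old_versions"
    status = "Enabled"
    filter {} # empty filter: applies to every object in the bucket
    noncurrent_version_expiration {
      noncurrent_days = 7
    }
  }

  rule {
    id     = "clean_old_versions_playground"
    status = "Enabled"
    filter {
      prefix = "playground/"
    }
    noncurrent_version_expiration {
      noncurrent_days = 1
    }
  }
}
```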