Terraform module S3 lifecycle rule not working - amazon-web-services

I have an S3 lifecycle rule that should abort failed multipart uploads after n days. I want to use lookup instead of try.
resource "aws_s3_bucket_lifecycle_configuration" "default" {
count = length(var.lifecycle_rule) != 0 ? 1 : 0
bucket = aws_s3_bucket.bucket.bucket
dynamic "rule" {
for_each = try(jsondecode(var.lifecycle_rule), var.lifecycle_rule)
content {
id = lookup(rule.value, "id", "default")
status = lookup(rule.value, "status", "Enabled")
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload", null) != null ? [rule.value.abort_incomplete_multipart_upload] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}
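For context, the module presumably declares the lifecycle_rule variable along these lines (an assumption; the original post doesn't show it):
variable "lifecycle_rule" {
  # Accepts either a list of rule objects or a JSON-encoded string,
  # which is why the resource wraps it in try(jsondecode(...), ...)
  type    = any
  default = []
}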
When I call this module from my root configuration, the abort_incomplete_multipart_upload block is not created:
module "test" {
source = "./s3"
bucket_name = "test"
lifecycle_rule = [
{
expiration = {
days = 7
}
},
{
id = "abort-incomplete-multipart-upload-lifecyle-rule"
abort_incomplete_multipart_upload_days = {
days_after_initiation = 6
}
}
]
}
terraform plan gives me
+ rule {
    + id     = "abort-incomplete-multipart-upload-lifecyle-rule"
    + status = "Enabled"

    + filter {
      }
  }
expected output:
+ rule {
    + id     = "abort-incomplete-multipart-upload-lifecyle-rule"
    + status = "Enabled"

    + abort_incomplete_multipart_upload {
        + days_after_initiation = 8
      }

    + filter {
      }
  }

Here's the code that works:
resource "aws_s3_bucket_lifecycle_configuration" "default" {
count = length(var.lifecycle_rule) != 0 ? 1 : 0
bucket = aws_s3_bucket.bucket.bucket
dynamic "rule" {
for_each = try(jsondecode(var.lifecycle_rule), var.lifecycle_rule)
content {
id = lookup(rule.value, "id", "default")
status = lookup(rule.value, "status", "Enabled")
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload_days", null) != null ? [rule.value.abort_incomplete_multipart_upload_days] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}
There are basically two issues:
1. The lookup was looking for a non-existent key in your map, abort_incomplete_multipart_upload, instead of abort_incomplete_multipart_upload_days.
2. Because of the first error, the value reference was also wrong: rule.value.abort_incomplete_multipart_upload instead of rule.value.abort_incomplete_multipart_upload_days.
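For a quick illustration of why the block was never rendered, here is roughly what lookup returns when the key is missing versus present (simplified to a flat map in terraform console; the principle is the same for the nested objects above):
> lookup({ abort_incomplete_multipart_upload_days = 6 }, "abort_incomplete_multipart_upload", null)
null
> lookup({ abort_incomplete_multipart_upload_days = 6 }, "abort_incomplete_multipart_upload_days", null)
6
With null as the result, the for_each condition produces an empty list, so the dynamic block is skipped.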
This code yields the following output:
# aws_s3_bucket_lifecycle_configuration.default[0] will be created
+ resource "aws_s3_bucket_lifecycle_configuration" "default" {
    + bucket = (known after apply)
    + id     = (known after apply)

    + rule {
        + id     = "default"
        + status = "Enabled"
      }

    + rule {
        + id     = "abort-incomplete-multipart-upload-lifecyle-rule"
        + status = "Enabled"

        + abort_incomplete_multipart_upload {
            + days_after_initiation = 6
          }
      }
  }
However, if you want a single rule (i.e., the expected output you posted), you need to change your lifecycle_rule value:
lifecycle_rule = [
  {
    expiration = {
      days = 7
    }
    id = "abort-incomplete-multipart-upload-lifecyle-rule"
    abort_incomplete_multipart_upload_days = {
      days_after_initiation = 6
    }
  }
]
This gives:
+ resource "aws_s3_bucket_lifecycle_configuration" "default" {
+ bucket = (known after apply)
+ id = (known after apply)
+ rule {
+ id = "abort-incomplete-multipart-upload-lifecyle-rule"
+ status = "Enabled"
+ abort_incomplete_multipart_upload {
+ days_after_initiation = 6
}
}
}

Related

How to enable a dynamic block conditionally for creating GCS buckets

I'm trying to add a retention policy, but I want to enable it conditionally, as you can see from the code below.
buckets.tf
locals {
  team_buckets = {
    arc = { app_id = "20390", num_buckets = 2, retention_period = null }
    ana = { app_id = "25402", num_buckets = 2, retention_period = 631139040 }
    cha = { app_id = "20391", num_buckets = 2, retention_period = 631139040 } #20 year
  }
}

module "team_bucket" {
  source = "../../../../modules/gcs_bucket"
  for_each = {
    for bucket in flatten([
      for product_name, bucket_info in local.team_buckets : [
        for i in range(bucket_info.num_buckets) : {
          name             = format("%s-%02d", product_name, i + 1)
          team             = "ei_${product_name}"
          app_id           = bucket_info.app_id
          retention_period = bucket_info.retention_period
        }
      ]
    ]) : bucket.name => bucket
  }
  project_id       = var.project
  name             = "teambucket-${each.value.name}"
  app_id           = each.value.app_id
  team             = each.value.team
  retention_period = each.value.retention_period
}
The gcs_bucket module itself is defined as follows:
main.tf
resource "google_storage_bucket" "bucket" {
project = var.project_id
name = "${var.project_id}-${var.name}"
location = var.location
labels = {
app_id = var.app_id
ei_team = var.team
cost_center = var.cost_center
}
uniform_bucket_level_access = var.uniform_bucket_level_access
dynamic "retention_policy" {
for_each = var.retention_policy == null ? [] : [var.retention_period]
content {
retention_period = var.retention_period
}
}
}
but I can't seem to make the code pick up the value; for example, as you can see below, the retention policy is removed rather than applied:
~ resource "google_storage_bucket" "bucket" {
id = "teambucket-cha-02"
name = "teambucket-cha-02"
# (11 unchanged attributes hidden)
- retention_policy {
- is_locked = false -> null
- retention_period = 3155760000 -> null
}
}
variables.tf for the retention policy is as follows:
variable "retention_policy" {
description = "Configuation of the bucket's data retention policy for how long objects in the bucket should be retained"
type = any
default = null
}
variable "retention_period" {
default = null
}
Your var.retention_policy is always null, its default value; it is never set anywhere. You probably wanted the following:
for_each = var.retention_period == null ? [] : [var.retention_period]
instead of
for_each = var.retention_policy == null ? [] : [var.retention_period]
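Putting that together, a minimal sketch of the corrected dynamic block (same resource as above, keyed on var.retention_period and using the iterator value inside the content block):
  dynamic "retention_policy" {
    # Only render the block when a retention period was actually supplied
    for_each = var.retention_period == null ? [] : [var.retention_period]
    content {
      retention_period = retention_policy.value
    }
  }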

How to create an iam-user module in Terraform to cover 3 types of IAM user scenarios

Can you please help with how to create an iam-user module in Terraform that covers 3 types of IAM user scenarios?
PS: I don't want to create nested directories under modules/iam/iam-user/ to handle each IAM user case separately.
Following are the scenarios:
// Type 1
resource "aws_iam_user" "aws_iam_user_000" {
  name                 = "user-000"
  permissions_boundary = data.aws_iam_policy.permission_boundary.arn
}

resource "aws_iam_user_policy_attachment" "aws_iam_user_000" {
  policy_arn = aws_iam_policy.s3_iam_policy.arn
  user       = aws_iam_user.aws_iam_user_000.name
}

// Type 2
resource "aws_iam_user" "aws_iam_user_001" {
  path                 = "/"
  for_each             = toset(var.user_lists)
  name                 = each.value
  force_destroy        = true
  permissions_boundary = data.aws_iam_policy.permission_boundary.arn
}

resource "aws_iam_group" "aws_iam_group_001" {
  name = "group-0001"
}

resource "aws_iam_user_group_membership" "group-membership" {
  for_each = toset(var.user_lists)
  user     = aws_iam_user.aws_iam_user_001[each.value].name
  groups   = [aws_iam_group.aws_iam_group_001.name]
}

// Type 3
resource "aws_iam_user" "aws_iam_user_0002" {
  name                 = "user-002"
  tags                 = { "user_type" = "admin_account" }
  permissions_boundary = data.aws_iam_policy.permission_boundary.arn
}
If I understand you correctly, you should be able to accomplish this using count and for_each with variables as below.
variables.tf
variable "is_admin" {
type = bool
default = false
}
variable "user_lists" {
type = list(any)
default = null
}
main.tf
// Type 1 and Type 3
resource "aws_iam_user" "this" {
  count                = var.user_lists == null ? 1 : 0
  name                 = var.is_admin ? "user-002" : "user-000"
  permissions_boundary = data.aws_iam_policy.permission_boundary.arn
  tags                 = var.is_admin ? { "user_type" = "admin_account" } : null
}

resource "aws_iam_user_policy_attachment" "this" {
  count      = var.user_lists == null ? 1 : 0
  policy_arn = aws_iam_policy.s3_iam_policy.arn
  user       = aws_iam_user.this[0].name
}

// Type 2
resource "aws_iam_user" "from_list" {
  for_each             = var.user_lists != null ? toset(var.user_lists) : []
  path                 = "/"
  name                 = each.value
  force_destroy        = true
  permissions_boundary = data.aws_iam_policy.permission_boundary.arn
}

resource "aws_iam_group" "from_list" {
  count = var.user_lists != null ? 1 : 0
  name  = "group-0001"
}

resource "aws_iam_user_group_membership" "this" {
  for_each = var.user_lists != null ? toset(var.user_lists) : []
  user     = aws_iam_user.from_list[each.value].name
  groups   = [aws_iam_group.from_list[0].name]
}

Terraform aws_s3_bucket_replication_configuration can't generate multiple rules with for_each

I have an S3 bucket with the following "folder" structure:
Bucket1----> /Partner1 ----> /Client1 ----> /User1
       |     |               |--> /User2
       |     |
       |     |--> /Client2 ----> /User1
       |
       |--> /Partner2 ----> /Client1 ----> /User1
and so on.
I'm trying to set up replication from this bucket to another such that a file placed in
Bucket1/Partner1/client1/User1/
should replicate to
Bucket2/Partner1/client1/User1/,
Bucket1/Partner2/client1/User2/
should replicate to
Bucket2/Partner2/client1/User2/,
and so on.
I'm trying to achieve this with the following terraform code:
locals {
  s3_input_folders = [
    "Partner1/client1/User1",
    "Partner1/client1/User2",
    "Partner1/client1/User3",
    "Partner1/client1/User4",
    "Partner1/client1/User5",
    "Partner1/client2/User1",
    "Partner1/client3/User1",
    "Partner2/client1/User1",
    "Partner3/client1/User1"
  ]
}
resource "aws_s3_bucket_replication_configuration" "replication" {
for_each = local.s3_input_folders
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
rule {
id = each.value
filter {
prefix = each.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
This does not loop and create a separate rule for each folder; instead it overwrites the same rule on every run, so I only get one rule as a result.
You should use a dynamic block:
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = toset(local.s3_input_folders)
content {
id = rule.value
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
Thanks, Marcin. The dynamic block construct you mentioned works to create the rule blocks, but it fails to apply because AWS requires multiple replication rules to be differentiated by priority. Some slight modifications achieve this:
locals {
  s3_input_folders_list_counter = tolist([
    for i in range(length(local.s3_input_folders)) : i
  ])
  s3_input_folders_count_map = zipmap(local.s3_input_folders_list_counter, tolist(local.s3_input_folders))
}
resource "aws_s3_bucket_replication_configuration" "replication" {
depends_on = [aws_s3_bucket_versioning.source_bucket]
role = aws_iam_role.s3-replication-prod[0].arn
bucket = aws_s3_bucket.source_bucket.id
dynamic "rule" {
for_each = local.s3_input_folders_count_map
content {
id = rule.key
priority = rule.key
filter {
prefix = rule.value
}
status = "Enabled"
destination {
bucket = "arn:aws:s3:::${var.app}-dev"
storage_class = "ONEZONE_IA"
access_control_translation {
owner = "Destination"
}
account = var.dev_account_id
}
delete_marker_replication {
status = "Enabled"
}
}
}
}
which creates rules like these:
+ rule {
    + id       = "0"
    + priority = 0
    + status   = "Enabled"
    ...
  }

+ rule {
    + id       = "1"
    + priority = 1
    + status   = "Enabled"
    ...
  }
and so on...
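As a side note, the same index-to-prefix map can be built with a single for expression instead of the tolist/zipmap pair (a sketch using the same local name):
locals {
  # idx is the element index, folder the prefix; equivalent to the zipmap above
  s3_input_folders_count_map = { for idx, folder in local.s3_input_folders : idx => folder }
}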

Adding Extra Routes to route table using terraform module

I am adding routes to a route table using a module. Below is my code. It runs successfully, but the routes don't get added.
module.tf (this checks whether publicRoute and privateRoute have more than one item; if so, it adds that many routes to the route table):
resource "aws_route" "public_routes" {
count = length(var.ExtraRoutes.publicRoute) > 1 ? length(var.ExtraRoutes.publicRoute) : 0
route_table_id = aws_route_table.VPCPublicSubnetRouteTable[0].id
destination_cidr_block = length(regexall("^[0-9].*.[0-9].*",var.ExtraRoutes.publicRoute[count.index].destination)) != 0 ? var.ExtraRoutes.publicRoute[count.index].destination : null
gateway_id = length(regexall("^igw-.*",var.ExtraRoutes.publicRoute[count.index].target)) != 0 ? var.ExtraRoutes.publicRoute[count.index].target : null
}
resource "aws_route" "private_routes" {
count = length(var.ExtraRoutes.privateRoute) > 1 ? length(var.ExtraRoutes.privateRoute) : 0
route_table_id = aws_route_table.VPCPrivateSubnetRouteTable[0].id
destination_cidr_block = length(regexall("^[0-9].*.[0-9].*",var.ExtraRoutes.privateRoute[count.index].destination)) != 0 ? var.ExtraRoutes.privateRoute[count.index].destination : null
gateway_id = length(regexall("^igw-.*",var.ExtraRoutes.privateRoute[count.index].target)) != 0 ? var.ExtraRoutes.privateRoute[count.index].target : null
}
module_var.tf (I am keeping it as just a map):
variable "ExtraRoutes" {
type = map
default = {
publicRoute = []
privateRoute = []
}
}
main.tf (as I need the first item in ExtraRoutes for something else, I start from count.index + 1):
module "ExtraVPCs" {
source = "./modules/VPC"
count = length(var.ExtraRoutes)
ExtraRoutes = {
publicRoute = var.ExtraRoutes[count.index + 1].publicRoute
privateRoute = var.ExtraRoutes[count.index + 1].privateRoute
}
}
main_var.tf
variable "ExtraRoutes" {
type = list(object({
publicRoute = list(object({
destination = string
target = string
})
)
privateRoute = list(object({
destination = string
target = string
}))
}))
}
init.tfvars (there are 2 items in ExtraRoutes; it should add the 2nd item to the route table, but it's not working as expected):
ExtraRoutes = [
  {
    publicRoute = [
      {
        destination = "10.0.0.0/32"
        target      = "igw-092aba6c187183f48"
      }
    ]
    privateRoute = [
      {
        destination = "10.0.0.0/32"
        target      = "igw-092aba6c187183f48"
      }
    ]
  },
  {
    publicRoute = [
      {
        destination = "10.0.0.0/32"
        target      = "igw-0acf4f7ac1e7eba47"
      }
    ]
    privateRoute = [
      {
        destination = "10.0.0.0/32"
        target      = "igw-0acf4f7ac1e7eba47"
      }
    ]
  }
]
You should check the length of a list with > 0, not > 1:
count = length(var.ExtraRoutes.publicRoute) > 0 ? length(var.ExtraRoutes.publicRoute) : 0
length() returns the number of elements, so a list containing a single route has length 1. With > 1 the condition is false for such lists, you end up with count = 0, and no routes are created.
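Since length() can never be negative, the conditional can also be dropped entirely; a sketch of the simplified public-route resource (the private one is analogous):
resource "aws_route" "public_routes" {
  # One route per entry in publicRoute; an empty list simply yields count = 0
  count                  = length(var.ExtraRoutes.publicRoute)
  route_table_id         = aws_route_table.VPCPublicSubnetRouteTable[0].id
  destination_cidr_block = length(regexall("^[0-9].*.[0-9].*", var.ExtraRoutes.publicRoute[count.index].destination)) != 0 ? var.ExtraRoutes.publicRoute[count.index].destination : null
  gateway_id             = length(regexall("^igw-.*", var.ExtraRoutes.publicRoute[count.index].target)) != 0 ? var.ExtraRoutes.publicRoute[count.index].target : null
}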

Terraform plan does not include all of my .tf changes

I am using the AWS provider. I've added transition blocks to my lifecycle_rule block with the appropriate days and storage_class properties. Besides that change, I've also increased expiry_days from 30 to 180.
The variable looks like this:
variable "bucket_details" {
type = map(object({
bucket_name = string
purpose = string
infrequent_transition_days = number
infrequent_transition_storage = string
archive_transition_days = number
archive_transition_storage = string
expiry_days = number
versioning = bool
}))
}
The resource looks like this (I've removed unrelated config):
resource "aws_s3_bucket" "bucket-s3" {
for_each = var.bucket_details
bucket = "${each.key}-${var.region}-${var.environment}"
lifecycle_rule {
id = "clear"
enabled = true
transition {
days = each.value.infrequent_transition_days
storage_class = each.value.infrequent_transition_storage
}
transition {
days = each.value.archive_transition_days
storage_class = each.value.archive_transition_storage
}
expiration {
days = each.value.expiry_days
}
}
}
I've followed this transition example for reference.
When I run terraform plan I get the following output:
~ lifecycle_rule {
      abort_incomplete_multipart_upload_days = 0
      enabled                                = true
      id                                     = "clear"
      tags                                   = {}

    + expiration {
        + days = 180
      }
    - expiration {
        - days                         = 30 -> null
        - expired_object_delete_marker = false -> null
      }
  }
No transition changes listed. Could it be because transition is AWS-specific and thus Terraform does not catch it?
I tried your code as is and here is the response:
provider "aws" {
region = "us-west-2"
}
variable "region" {
default = "us-west-2"
}
variable "environment" {
default = "dev"
}
variable "bucket_details" {
type = map(object({
bucket_name = string
infrequent_transition_days = number
infrequent_transition_storage = string
archive_transition_days = number
archive_transition_storage = string
expiry_days = number
}))
default = {
hello_world = {
bucket_name: "demo-001",
infrequent_transition_days: 10,
infrequent_transition_storage: "STANDARD_IA",
archive_transition_days: 10,
archive_transition_storage: "GLACIER",
expiry_days = 30
}}
}
resource "aws_s3_bucket" "bucket-s3" {
for_each = var.bucket_details
bucket = "${each.key}-${var.region}-${var.environment}"
lifecycle_rule {
id = "clear"
enabled = true
transition {
days = each.value.infrequent_transition_days
storage_class = each.value.infrequent_transition_storage
}
transition {
days = each.value.archive_transition_days
storage_class = each.value.archive_transition_storage
}
expiration {
days = each.value.expiry_days
}
}
}
Response of Terraform plan:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.bucket-s3["hello_world"] will be created
  + resource "aws_s3_bucket" "bucket-s3" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "hello_world-us-west-2-dev"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + lifecycle_rule {
          + enabled = true
          + id      = "clear"

          + expiration {
              + days = 30
            }

          + transition {
              + days          = 10
              + storage_class = "GLACIER"
            }

          + transition {
              + days          = 10
              + storage_class = "STANDARD_IA"
            }
        }

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
As you can see, the transition blocks do show up in the plan. Can you try setting default values for your variables and check the response?
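For instance, a terraform.tfvars entry matching the bucket_details type from your question would look roughly like this (values are illustrative):
bucket_details = {
  "app-logs" = {
    bucket_name                   = "app-logs"
    purpose                       = "application logs"
    infrequent_transition_days    = 30
    infrequent_transition_storage = "STANDARD_IA"
    archive_transition_days       = 90
    archive_transition_storage    = "GLACIER"
    expiry_days                   = 180
    versioning                    = true
  }
}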