Terraform s3 bucket lifecycle_rule overlapping prefixes - amazon-web-services

What happens if I apply multiple overlapping lifecycle rules to my S3 bucket?
I want to keep 7 days of old versions by default, but for one specific prefix I want a different retention (1 day).
Can I override the lifecycle rule for a sub-prefix just by adding a rule below it?
resource "aws_s3_bucket" "my-bucket" {
  bucket = "my-bucket"
  acl    = "private"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "clean_old_versions"
    prefix  = ""
    enabled = true

    noncurrent_version_expiration {
      days = 7
    }
  }

  lifecycle_rule {
    id      = "clean_old_versions_playground"
    prefix  = "playground/"
    enabled = true

    noncurrent_version_expiration {
      days = 1
    }
  }
}


Adding S3 bucket policy to multiple buckets with for_each Terraform module

I have the following module which works fine.
Module
resource "aws_s3_bucket" "buckets" {
  bucket = var.s3_buckets
}

resource "aws_s3_bucket_acl" "buckets" {
  bucket = var.s3_buckets
  acl    = "private"
}
Root module
module "s3_buckets" {
  source = "./modules/s3"
  for_each = toset([
    "bucket-test1-${var.my_env}",
    "bucket-test2-${var.my_env}",
  ])
  s3_buckets = each.value
}
I would like to add the following policy to all the buckets in the list. Obviously, the count option below does not work.
data "aws_iam_policy_document" "buckets" {
  count = length(var.s3_buckets)
  statement {
    sid     = "AllowSSlRequestsOnly"
    actions = ["s3:*"]
    effect  = "Deny"
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
    principals {
      type        = "*"
      identifiers = ["*"]
    }
    resources = ["arn:aws:s3:::${var.s3_buckets}"]
  }
}

resource "aws_s3_bucket_policy" "buckets" {
  bucket = var.s3_buckets
  policy = data.aws_iam_policy_document.buckets[count.index].json
}
I'm thinking that I need another for_each and to use a resource for the IAM policy. I have seen an example such as the one below, but in its current form I'm providing a string instead of a set of strings. Any ideas?
resource "aws_s3_bucket_policy" "buckets" {
  for_each = var.s3_buckets
  bucket   = each.key
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "AllowSSlRequestsOnly"
    Statement = [
      {
        Sid       = "AllowSSlRequestsOnly"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource  = each.value.arn
        Condition = {
          Bool = {
            "aws:SecureTransport" : "false"
          }
        }
      }
    ]
  })
}
If you are adding the policy inside the module (which you probably are, since otherwise it doesn't make much sense to attach the policies outside when you have full control), then why do you need to bother with count at all?
Create the policy and attach to the bucket like:
data "aws_iam_policy_document" "buckets" {
  statement {
    sid     = "AllowSSlRequestsOnly"
    actions = ["s3:*"]
    effect  = "Deny"
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
    principals {
      type        = "*"
      identifiers = ["*"]
    }
    resources = ["arn:aws:s3:::${var.s3_buckets}"]
  }
}

resource "aws_s3_bucket_policy" "buckets" {
  bucket = var.s3_buckets
  policy = data.aws_iam_policy_document.buckets.json
}
A couple more points:
It's a single bucket that you are passing to the module, yet the variable is named s3_buckets, which is confusing.
Using var.s3_buckets for all dependent resources is not best practice. Create the bucket with var.s3_buckets, then use the outputs of that resource. This, along with policy examples, is documented here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
good luck ☺️
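As a sketch of that last point, here is what referencing the bucket resource's outputs (instead of var.s3_buckets) can look like inside the module. The resource names follow the module above; the extra "/*" object-level ARN is my assumption, not part of the original policy:

```hcl
resource "aws_s3_bucket" "buckets" {
  bucket = var.s3_buckets
}

# Downstream resources reference the bucket resource's own outputs,
# not the input variable, so dependencies and ARNs stay correct.
data "aws_iam_policy_document" "buckets" {
  statement {
    sid     = "AllowSSlRequestsOnly"
    effect  = "Deny"
    actions = ["s3:*"]
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
    principals {
      type        = "*"
      identifiers = ["*"]
    }
    resources = [
      aws_s3_bucket.buckets.arn,        # the bucket itself
      "${aws_s3_bucket.buckets.arn}/*", # objects within it (assumption)
    ]
  }
}

resource "aws_s3_bucket_policy" "buckets" {
  bucket = aws_s3_bucket.buckets.id
  policy = data.aws_iam_policy_document.buckets.json
}
```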

terraform: Overriding same variables in different tf files

I have 2 buckets, A and B, which require lifecycle policies with different expiration days.
Since both are in the same root module directory, they share a common variables.tf file.
Lifecycle policy code for both A and B
Note: the two buckets' code lives in different files, as the buckets have different configurations.
Bucket A file's code
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket # for B bucket: aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days}, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
Bucket B file's code
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.s3_expiration_days}, noncurrent version after ${var.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.s3_noncurrent_version_expiration_days
    }
  }
}
variables.tf
Bucket A needs current versions to expire after 30 days and noncurrent versions after 3 days; bucket B needs 0 and 90 days respectively.
How do I achieve this?
Note: I do not want to hardcode values for either bucket.
variable "s3_expiration_days" {
  type        = number
  description = "S3 bucket objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#days"
  default     = 30
}

variable "s3_noncurrent_version_expiration_days" {
  type        = number
  description = "S3 bucket noncurrent version objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#noncurrent_days"
  default     = 3
}
I think the easiest and most scalable way would be to do it through a single variable map:
variable "buckets_config" {
  default = {
    "bucket-name-a" = {
      s3_expiration_days                    = 30
      s3_noncurrent_version_expiration_days = 3
    }
    "bucket-name-b" = {
      s3_expiration_days                    = 0
      s3_noncurrent_version_expiration_days = 90
    }
  }
}

# then

resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets_config
  bucket   = each.key
}

resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  for_each = var.buckets_config
  bucket   = aws_s3_bucket.bucket[each.key].bucket
  rule {
    id     = "expire current version after ${each.value.s3_expiration_days}, noncurrent version after ${each.value.s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = each.value.s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = each.value.s3_noncurrent_version_expiration_days
    }
  }
}
UPDATE
For two different buckets:
# for bucket A
resource "aws_s3_bucket_lifecycle_configuration" "A_log" {
  bucket = aws_s3_bucket.A.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days}, noncurrent version after ${var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.A.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.A.bucket].s3_noncurrent_version_expiration_days
    }
  }
}
# for bucket B
resource "aws_s3_bucket_lifecycle_configuration" "B_log" {
  bucket = aws_s3_bucket.B.bucket
  rule {
    id     = "expire current version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days}, noncurrent version after ${var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days} days"
    status = "Enabled"
    expiration {
      days = var.buckets_config[aws_s3_bucket.B.bucket].s3_expiration_days
    }
    noncurrent_version_expiration {
      noncurrent_days = var.buckets_config[aws_s3_bucket.B.bucket].s3_noncurrent_version_expiration_days
    }
  }
}

Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Check the permissions for your role

Summarize the problem
My goal is to create an Organizational Trail for an origin account in AWS.
Include details about your goal
The implementation for this trail uses Terraform. I plan to use the organization id from an aws_organizations_organization data source, plus an aws_cloudtrail resource, an aws_iam_policy_document data source, an aws_iam_policy resource, an aws_s3_bucket resource, an aws_iam_role resource for CloudWatch Logs, and an aws_cloudwatch_log_group resource.
The Terraform for the organizational trail should create a master trail in the origin account which collects from the individual trails in member accounts. This trail should then feed the logs to an S3 bucket in the origin account.
Describe expected and actual results
I expected that a targeted apply of the specific resources for the origin account would work.
terraform apply \
-target=aws_cloudtrail.cloudtrail \
-target=aws_s3_bucket_policy.bucket_policy \
-target=aws_s3_bucket.s3_bucket_cloudtrail \
-target=aws_s3_bucket.s3_bucket_log_bucket \
-target=aws_iam_role.iam_role_cloudwatch \
-target=aws_iam_policy.iam_policy_cloudtrail_cloudwatch_logs \
-target=aws_cloudwatch_log_group.cloudwatch_log_group
Include any error messages
However, I get the following error
Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Check the permissions for your role
Describe what you’ve tried
I reviewed the documentation for creating an Amazon S3 bucket policy for CloudTrail and Sending events to CloudWatch Logs
Show some code
aws_cloudtrail.cloudtrail
resource "aws_cloudtrail" "cloudtrail" {
  name                          = "cloudtrail"
  s3_bucket_name                = aws_s3_bucket.s3_bucket_cloudtrail.id
  enable_logging                = true
  enable_log_file_validation    = true
  is_multi_region_trail         = true
  include_global_service_events = true
  is_organization_trail         = true
  kms_key_id                    = aws_kms_key.kms_key.arn
  depends_on                    = [aws_s3_bucket.s3_bucket_cloudtrail]
  cloud_watch_logs_role_arn     = aws_iam_role.iam_role_cloudwatch.arn
  cloud_watch_logs_group_arn    = "${aws_cloudwatch_log_group.cloudwatch_log_group.arn}:*"
  event_selector {
    read_write_type           = "All"
    include_management_events = true
  }
}
-target=aws_s3_bucket_policy.bucket_policy \
data "aws_iam_policy_document" "s3" {
  version    = "2012-10-17"
  depends_on = [aws_s3_bucket.s3_bucket_cloudtrail]
  statement {
    sid     = "AclCheck"
    actions = ["s3:GetBucketAcl", "s3:*"]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    resources = [aws_s3_bucket.s3_bucket_cloudtrail.arn]
  }
  statement {
    sid     = "AWSConfigBucketDelivery"
    actions = ["s3:PutObject"]
    principals {
      type        = "Service"
      identifiers = ["config.amazonaws.com"]
    }
    resources = [
      "${aws_s3_bucket.s3_bucket_cloudtrail.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
      "${aws_s3_bucket.s3_bucket_cloudtrail.arn}/AWSLogs/${data.aws_organizations_organization.org.id}/*",
    ]
    condition {
      test     = "StringLike"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
  statement {
    sid     = "AWSCloudTrailWrite"
    actions = ["s3:PutObject", "s3:*"]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    resources = [
      "${aws_s3_bucket.s3_bucket_cloudtrail.arn}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
      "${aws_s3_bucket.s3_bucket_cloudtrail.arn}/AWSLogs/${data.aws_organizations_organization.org.id}/*",
    ]
    condition {
      test     = "StringLike"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}
-target=aws_s3_bucket.s3_bucket_cloudtrail \
resource "aws_s3_bucket" "s3_bucket_cloudtrail" {
  bucket        = "${local.name}-cloudtrail"
  acl           = "private"
  force_destroy = true
  # 3.6 Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated)
  logging {
    target_bucket = aws_s3_bucket.s3_bucket_log_bucket.id
    target_prefix = "log/"
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.nfcisbenchmark.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
}
-target=aws_s3_bucket.s3_bucket_log_bucket \
resource "aws_s3_bucket" "s3_bucket_log_bucket" {
  bucket = "${local.name}-log"
  acl    = "log-delivery-write"
}
-target=aws_iam_role.iam_role_cloudwatch \
resource "aws_iam_role" "iam_role_cloudwatch" {
  name               = "${local.name}-${terraform.workspace}-cloudwatch"
  assume_role_policy = data.aws_iam_policy_document.cloudtrail_assume_role_policy.json
}

data "aws_iam_policy_document" "cloudtrail_assume_role_policy" {
  statement {
    sid     = "CloudtrailToCloudWatch"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
  }
}
-target=aws_iam_policy.iam_policy_cloudtrail_cloudwatch_logs \
resource "aws_iam_policy" "iam_policy_cloudtrail_cloudwatch_logs" {
  name   = "${local.name}-${terraform.workspace}-cloudtrail-cloudwatch-logs"
  policy = data.aws_iam_policy_document.cloudwatch.json
}

data "aws_iam_policy_document" "cloudwatch" {
  version = "2012-10-17"
  statement {
    sid     = "AWSCloudTrailCreateLogStream"
    actions = ["logs:CreateLogStream"]
    resources = [
      "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:CloudTrail/log_group:${aws_cloudwatch_log_group.cloudwatch_log_group.name}:log-stream:${data.aws_caller_identity.current.account_id}_CloudTrail_${data.aws_region.current.name}*",
      "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:CloudTrail/log_group:${aws_cloudwatch_log_group.cloudwatch_log_group.name}:log-stream:${data.aws_organizations_organization.org.id}",
    ]
  }
  statement {
    sid     = "AWSCloudTrailPutLogEvents"
    actions = ["logs:PutLogEvents"]
    resources = [
      "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:CloudTrail/log_group:${aws_cloudwatch_log_group.cloudwatch_log_group.name}:log-stream:${data.aws_caller_identity.current.account_id}_CloudTrail_${data.aws_region.current.name}*",
      "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:CloudTrail/log_group:${aws_cloudwatch_log_group.cloudwatch_log_group.name}:log-stream:${data.aws_organizations_organization.org.id}",
    ]
  }
}
-target=aws_cloudwatch_log_group.cloudwatch_log_group
resource "aws_cloudwatch_log_group" "cloudwatch_log_group" {
  name              = local.name
  kms_key_id        = aws_kms_key.cloudwatch_log_group.arn
  retention_in_days = 90
}
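For reference, the log-stream ARNs in the role policy above must match what CloudTrail actually writes to. A common simplification (a sketch I'm adding, not the author's code) is to build the resource ARN from the log group's own arn attribute with a trailing wildcard, which avoids hand-assembling the log-group name:

```hcl
data "aws_iam_policy_document" "cloudtrail_to_cloudwatch" {
  statement {
    sid = "CloudTrailToCloudWatchLogs"
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    # aws_cloudwatch_log_group.arn carries no ":*" suffix, so append the
    # log-stream wildcard explicitly to cover every stream in the group.
    resources = [
      "${aws_cloudwatch_log_group.cloudwatch_log_group.arn}:log-stream:*",
    ]
  }
}
```

If the log-group portion of the hand-built ARNs above does not match the group's actual name, CloudTrail reports exactly this Access denied error, so comparing against this form is a useful sanity check.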

Create multiple folders within multiple S3 buckets with Terraform

I am looking to create one S3 Terraform module which can take a list of bucket names and a list of folder names to be created inside all those buckets.
For example, in my S3 module's main.tf I have:
resource "aws_s3_bucket_object" "folders" {
  count  = var.create_folders ? length(var.s3_folder_names) : 0
  bucket = element(aws_s3_bucket.s3bucket.*.id, count.index)
  acl    = "private"
  key    = format("%s/", var.s3_folder_names[count.index])
  source = "/dev/null"
}
I am calling this module as given below:
variable "s3_bucket_name" {
  type        = list
  description = "List of S3 bucket names"
  default     = ["bucket1", "bucket-2"]
}

variable "s3_folder_names" {
  type        = list
  description = "The list of S3 folders to be created inside S3 bucket"
  default     = ["folder1/dev", "folder2/qa"]
}

module "s3" {
  source                     = "../../../gce-nextgen-cloud-terraform-modules/modules/s3"
  create_folders             = true
  s3_folder_names            = var.s3_folder_names
  environment                = var.environment
  s3_bucket_name             = var.s3_bucket_name
  force_destroy              = true
  bucket_replication_enabled = true
  tags                       = local.common_tags
  providers = {
    aws.main_region      = aws.main_region
    aws.secondary_region = aws.secondary_region
  }
}
I am facing a problem because count can only be set once in a resource block. Here is the scenario that is causing problems:
If length(var.s3_folder_names) is less than length(aws_s3_bucket.s3bucket.*.id),
then I will not be able to access all the elements of the S3 bucket list, as shown below:
resource "aws_s3_bucket_object" "folders" {
  count  = var.create_folders ? length(var.s3_folder_names) : 0
  bucket = element(aws_s3_bucket.s3bucket.*.id, count.index)
  acl    = "private"
  key    = format("%s/", var.s3_folder_names[count.index])
  source = "/dev/null"
}
Hence I will not be able to create these folders inside all of the buckets. The only goal is to create the same set of folders within every bucket.
Any help would be truly appreciated. Thanks in advance!
You can create a combined data structure, e.g.:
locals {
  buckets_and_folders = merge([
    for bucket in var.s3_bucket_name :
    {
      for folder in var.s3_folder_names :
      "${bucket}-${folder}" => {
        bucket = bucket
        folder = folder
      }
    }
  ]...)
}
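An equivalent way to build the same map, if preferred, is Terraform's setproduct function, which computes the cross product of the two lists directly (a sketch using the same variable names):

```hcl
locals {
  buckets_and_folders = {
    for pair in setproduct(var.s3_bucket_name, var.s3_folder_names) :
    "${pair[0]}-${pair[1]}" => {
      bucket = pair[0] # first element of each pair is the bucket name
      folder = pair[1] # second element is the folder name
    }
  }
}
```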
Then you would iterate over this structure using for_each:
resource "aws_s3_bucket_object" "folders" {
  for_each = var.create_folders ? local.buckets_and_folders : {}
  bucket   = each.value.bucket
  acl      = "private"
  key      = format("%s/", each.value.folder)
  source   = "/dev/null"
}

Terraform - creating multiple buckets

Creating a bucket is pretty simple.
resource "aws_s3_bucket" "henrys_bucket" {
  bucket        = "${var.s3_bucket_name}"
  acl           = "private"
  force_destroy = "true"
}
Initially I thought I could create a list for the s3_bucket_name variable, but I get an error:
Error: bucket must be a single value, not a list
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
How can I create multiple buckets without duplicating code?
You can use a combination of count & element like so:
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${element(var.s3_bucket_name, count.index)}"
  acl           = "private"
  force_destroy = "true"
}
Edit: as suggested by @ydaetskcoR, you can use the list[index] pattern rather than element.
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${var.s3_bucket_name[count.index]}"
  acl           = "private"
  force_destroy = "true"
}
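On Terraform 0.12.6 and later, for_each over a set is another option worth noting (a sketch, not part of the original answer). It keys each bucket by name instead of list position, so removing a name from the middle of the list no longer forces the remaining buckets to be recreated:

```hcl
resource "aws_s3_bucket" "henrys_bucket" {
  for_each      = toset(var.s3_bucket_name) # one instance per bucket name
  bucket        = each.value
  acl           = "private"
  force_destroy = true
}
```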