Is there a way in Terraform to have multiple lifecycle configuration blocks for a single AWS S3 bucket?

I am using a module to create an AWS S3 bucket via Terraform. This module creates the bucket with a lot of default policies/configuration, as mandated by my company. Along with that, it sets some lifecycle rules using aws_s3_bucket_lifecycle_configuration.
I don't want to use those rules; they can be disabled via the inputs to the said module. But the problem is that when I try to add my custom lifecycle configuration, I get a different result on each run: sometimes my rules are applied, while at other times they are missing from the configuration.
Even the documentation says that:
NOTE: S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.
What can be done to work around this issue?
I can't set enable_private_bucket to false, but here is the code for the lifecycle configuration resource in the module:
resource "aws_s3_bucket_lifecycle_configuration" "pca_private_bucket_infrequent_access" {
count = var.enable_private_bucket ? 1 : 0
bucket = aws_s3_bucket.pca_private_bucket[0].id
}

You need to fall back to the v3-style inline lifecycle_rule blocks, which are deprecated, but this seems to be the only way of doing it.
Here's how I have it set up, with the extra lifecycle rules added via a dynamic block:
resource "aws_s3_bucket" "cache" {
bucket = local.cache_bucket_name
force_destroy = false
tags = {
Name = "${var.vpc_name} cache"
}
lifecycle_rule {
id = "${local.cache_bucket_name} lifecycle rule"
abort_incomplete_multipart_upload_days = 1
enabled = true
noncurrent_version_expiration {
days = 1
}
transition {
days = 1
storage_class = "INTELLIGENT_TIERING"
}
}
dynamic "lifecycle_rule" {
for_each = var.cache_expiration_rules
content {
id = "${lifecycle_rule.value["prefix"]} expiration in ${lifecycle_rule.value["days"]} days"
enabled = true
prefix = lifecycle_rule.value["prefix"]
expiration {
days = lifecycle_rule.value["days"]
}
}
}
lifecycle {
prevent_destroy = true
}
}
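For reference, the variable feeding the dynamic block is just a list of objects with prefix and days keys. A hypothetical definition and assignment (names illustrative, not from the original setup) might look like:

variable "cache_expiration_rules" {
  # Each entry becomes one lifecycle_rule block via the dynamic block above.
  type = list(object({
    prefix = string
    days   = number
  }))
  default = []
}

# e.g. in terraform.tfvars:
cache_expiration_rules = [
  { prefix = "tmp/", days = 7 },
  { prefix = "reports/", days = 30 },
]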

Related

How to sync the lifecycle_rule of an S3 bucket's actual configuration with a Terraform script

The rule was set up manually in the AWS console. I want to sync it into my Terraform script.
I have the following defined in the Terraform script:
resource "aws_s3_bucket" "bucketname" {
bucket = "${local.bucket_name}"
acl = "private"
force_destroy = "false"
acceleration_status = "Enabled"
lifecycle_rule {
enabled = true,
transition {
days = 30
storage_class = "INTELLIGENT_TIERING"
}
}
lifecycle_rule {
enabled = true,
expiration {
days = 30
}
}
}
However, this always gives me the following output when applying:
lifecycle_rule.0.transition.1300905083.date: "" => ""
lifecycle_rule.0.transition.1300905083.days: "" => "30"
lifecycle_rule.0.transition.1300905083.storage_class: "" => "INTELLIGENT_TIERING"
lifecycle_rule.0.transition.3021102259.date: "" => ""
lifecycle_rule.0.transition.3021102259.days: "0" => "0"
lifecycle_rule.0.transition.3021102259.storage_class: "INTELLIGENT_TIERING" => ""
I'm not sure what the behavior is. Is it trying to delete the existing rule and recreate it?
is it trying to delete the existing and recreate it?
Yes. If the rules were created outside of TF then, as far as TF is concerned, they don't exist. Thus TF is going to replace the existing ones, as it is not aware of them. The TF docs state:
It [TF] does not generate configuration.
Since your bucket does not have lifecycle rules in TF, TF treats them as non-existent.
When you are managing your infrastructure with an IaC tool (TF, CloudFormation, ...), it's bad practice to modify resources "manually", outside of these tools. This leads to so-called resource drift, which in turn can lead to more issues in the future.
In your case, you either have to re-create the rules in TF, which means the manually created ones will be replaced, or import them. However, you can't import individual attributes of a resource, so you would have to import the bucket.
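For example, assuming the resource address from the snippet above, the import would look something like this (the bucket name placeholder is yours to fill in):

terraform import aws_s3_bucket.bucketname <actual-bucket-name>

After the import, terraform plan will show any remaining differences between your configuration and the real bucket.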
It looks like I just made a silly mistake by putting a value in the days parameter. The correct config, which matches the manual update, is:
resource "aws_s3_bucket" "bucketname" {
bucket = "${local.bucket_name}"
acl = "private"
force_destroy = "false"
acceleration_status = "Enabled"
lifecycle_rule {
enabled = true,
transition {
storage_class = "INTELLIGENT_TIERING"
}
}
lifecycle_rule {
enabled = true,
expiration {
days = 30
}
}
}

How do I apply a lifecycle rule to an EXISTING s3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, then it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google-fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into the Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
Then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and you can now make changes or add new things to your bucket using Terraform.
As the last step, just run terraform apply and the lifecycle rule will be added.

How to associate new "aws_wafregional_rule" with existing WAF ACL

I have created a WAF ACL using the AWS console. Now I need to create a WAF rule using Terraform, so I have implemented the rule below.
resource "aws_wafregional_byte_match_set" "blocked_path_match_set" {
name = format("%s-%s-blocked-path", local.name, var.module)
dynamic "byte_match_tuples" {
for_each = length(var.blocked_path_prefixes) > 0 ? var.blocked_path_prefixes : []
content {
field_to_match {
type = lookup(byte_match_tuples.value, "type", null)
}
target_string = lookup(byte_match_tuples.value, "target_string", null)
positional_constraint = lookup(byte_match_tuples.value, "positional_constraint", null)
text_transformation = lookup(byte_match_tuples.value, "text_transformation", null)
}
}
}
resource "aws_wafregional_rule" "blocked_path_allowed_ipaccess" {
metric_name = format("%s%s%sBlockedPathIpaccess", var.application, var.environment, var.module)
name = format("%s%s%sBlockedPathIpaccessRule", var.application, var.environment, var.module)
predicate {
type = "ByteMatch"
data_id = aws_wafregional_byte_match_set.blocked_path_match_set.id
negated = false
}
}
But how do I map this new rule to the existing web ACL that was created through the AWS console? As per the documentation I can use aws_wafregional_web_acl to create a new web ACL, but is there a way to associate a rule created through Terraform with an existing WAF ACL? I have a GitLab pipeline which deploys the Terraform code to AWS, so eventually I will pass the id/arn of the existing web ACL and, through the pipeline, just add/update the new rule without impacting the existing rules that were created through the console.
Please share your valuable feedback.
Thank you.
As per the WAF documentation, you associate the rule via an AWS WAF resource; see the code snippet below for an example.
resource "aws_wafregional_web_acl" "foo" {
name = "foo"
metric_name = "foo"
default_action {
type = "ALLOW"
}
rule {
action {
type = "BLOCK"
}
priority = 1
rule_id = aws_wafregional_rule.blocked_path_allowed_ipaccess.id
}
}
However, as you said, you have already created the resource in the AWS console. Terraform does support importing an AWS resource, so you would need to go with that method if you would like to manage it via Terraform.
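For example, assuming the resource label foo from the snippet above, the import would look something like this (the web ACL ID comes from the AWS console):

terraform import aws_wafregional_web_acl.foo <web-acl-id>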

Renaming an S3 bucket in Terraform (but not in S3) causes create then destroy?

I want to refactor my Terraform scripts a bit.
Before:
resource "aws_s3_bucket" "abc" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
After:
resource "aws_s3_bucket" "def" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
As you can see, only the name in Terraform has changed (abc -> def).
However, this causes a create/destroy of the bucket in terraform plan.
I expected Terraform to recognize the buckets as the same (they have the same attributes, including bucket).
Questions:
Why is this?
Is there a way to refactor Terraform scripts without destroying infrastructure?
You can use terraform state mv to reflect this change in the state.
In your case, this would be:
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
From my own experience, this works well, and I recommend doing it instead of living with bad names.
Terraform does not recognize such changes, no :-)
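If you are on Terraform 1.1 or later, the same refactor can also be recorded declaratively with a moved block next to the renamed resource, so collaborators get the state move automatically on their next apply:

moved {
  # Tells Terraform the resource was renamed, not replaced.
  from = aws_s3_bucket.abc
  to   = aws_s3_bucket.def
}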

How to add lifecycle rules to an S3 bucket using Terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, i.e. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, i.e. after bucket creation.
All good...
But when I try to add lifecycle rules AFTER I've created the bucket, I get an error telling me the bucket already exists. (Actually, I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
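For instance, here is a sketch combining the two rules above into a single bucket, which is what the question was ultimately after (bucket name illustrative):

resource "aws_s3_bucket" "bucket" {
  bucket = "bucket_name"
  acl    = "private"

  # Expire quarterly/ objects after ~3 months.
  lifecycle_rule {
    id      = "quarterly_retention"
    prefix  = "quarterly/"
    enabled = true

    expiration {
      days = 92
    }
  }

  # Move permanent/ objects to Glacier after a day.
  lifecycle_rule {
    id      = "permanent_retention"
    prefix  = "permanent/"
    enabled = true

    transition {
      days          = 1
      storage_class = "GLACIER"
    }
  }
}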
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true
  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like:
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but it is the only way. You pretty much trick Terraform into accepting a list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform should have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
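A rough sketch of that dynamic-block approach, assuming the same var.lifecycle_rules shape as above (Terraform 0.12+ syntax):

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  # One lifecycle_rule block is generated per element of var.lifecycle_rules.
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      id      = lifecycle_rule.value["id"]
      prefix  = lifecycle_rule.value["prefix"]
      enabled = lifecycle_rule.value["enabled"]

      expiration {
        days = lifecycle_rule.value["expiration"][0]["days"]
      }
    }
  }
}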
As far as I am aware, you cannot manage a lifecycle policy separately.
Someone raised an issue asking for a resource to be created to allow you to do so, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe you're getting it because:
resource "aws_s3_bucket" "bucket" creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly" references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done, as bucket names are unique).
resource "aws_s3_bucket" "bucket_permanent" similarly references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done, as bucket names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your code above you are attempting to create 3 separate S3 buckets (one without a lifecycle, and 2 with a lifecycle) and two objects (folders) that are placed into the bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and run terraform apply; then you can add/amend/remove lifecycle_rule items and just run terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}