How do I apply a lifecycle rule to an EXISTING S3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? My Google-fu has yielded little in the way of results. Thanks!

The best way to solve this is to import your bucket into Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
And then run on the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and you can now make changes or add new things to your bucket using Terraform.
As a last step, just run terraform apply and the lifecycle rule will be added.
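Note that on version 4 and later of the AWS provider, the inline lifecycle_rule block is deprecated in favor of the standalone aws_s3_bucket_lifecycle_configuration resource. A sketch of the same setup in that style (same bucket name as above; both resources import using the bucket name):

resource "aws_s3_bucket" "test-bucket" {
  bucket = "bucket_name"
}

resource "aws_s3_bucket_lifecycle_configuration" "test-bucket" {
  bucket = aws_s3_bucket.test-bucket.id

  rule {
    id     = "Expiration Rule"
    status = "Enabled"

    # in the v4+ schema, the prefix moves into a filter block
    filter {
      prefix = "reports/"
    }

    expiration {
      days = 30
    }
  }
}

terraform import aws_s3_bucket.test-bucket bucket_name
terraform import aws_s3_bucket_lifecycle_configuration.test-bucket bucket_name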

Related

how to import multiple s3 bucket resources to single terraform resource name

I am trying to import existing S3 buckets into my Terraform code. I have a lot of buckets in S3, so I want to collect them under a single resource name. For example, consider 3 buckets in S3, 2 of which were created with Terraform and 1 of which was not:
terraformed-bucket
terraformed-bucket-2
nonterraformed-bucket
I have one resource name for those two buckets. I want to import nonterraformed-bucket into the existing resource name used for the terraformed buckets when migrating to Terraform code, but I can't :/
resource "aws_s3_bucket" "tfer--buckets" {
count = "${length(var.bucket_names)}"
bucket = "${element(var.bucket_names, count.index)}"
# count = length(local.bucket_names)
# bucket = local.bucket_names[count.index]
force_destroy = "false"
grant {
id = "674f4d195ff567a2eeb7ee328c84410b02484f646c5f1f595f83ecaf5cfbf"
permissions = ["FULL_CONTROL"]
type = "CanonicalUser"
}
object_lock_enabled = "false"
request_payer = "BucketOwner"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
bucket_key_enabled = "true"
}
}
versioning {
enabled = "false"
mfa_delete = "false"
}
}
and my variables:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2"]
}
These are the states in my Terraform code:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I tried to import nonterraformed-bucket into this existing resource:
resource "aws_s3_bucket" "tfer--buckets" {}
with this command:
terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
but the output of terraform state list is still the same; nothing changed:
mek-bash#%: terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I don't want to use separate resources for each bucket, so I want to import each outside bucket under the same resource name as the others, i.e. include it as [2] in the same resource, just like:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
aws_s3_bucket.tfer--buckets[2] (should represent nonterraformed-bucket)
Do you have any suggestions for this? Or is there a way to import non-terraformed resources into a single resource name?
You have to add your nonterraformed-bucket to bucket_names:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2", "nonterraformed-bucket"]
}
and then import it as [2] (third bucket):
terraform import aws_s3_bucket.tfer--buckets[2] nonterraformed-bucket
It worked with:
terraform import 'aws_s3_bucket.tfer--buckets[2]' nonterraformed-bucket
It was fixed by quoting the address as 'aws_s3_bucket.tfer--buckets[2]'; without the quotes, the shell interprets the square brackets.
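As a side note, on Terraform 1.5 and later the same import can be expressed declaratively with an import block, which sidesteps the shell-quoting issue entirely (a sketch using the names from above):

import {
  to = aws_s3_bucket.tfer--buckets[2]
  id = "nonterraformed-bucket"
}

Running terraform plan will then show the bucket being imported into index [2].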

Is there a way in terraform to have multiple lifecycle configuration blocks for a single AWS S3 bucket?

I am using a module to create an AWS S3 bucket via Terraform. This module creates a bucket with a lot of default policies/configuration, as mandated by my company. Along with that, it sets some lifecycle rules using aws_s3_bucket_lifecycle_configuration.
I don't want to use those rules, and they can be disabled via the inputs to the said module. But the problem is that when I try to add my custom lifecycle configuration, I get a different result each time: sometimes my rules are applied, while at other times they are not present in the configuration.
Even the documentation says that:
NOTE: S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.
What can be the way around this issue?
I can't set enable_private_bucket to false, but here is the code for the configuration resource in the module:
resource "aws_s3_bucket_lifecycle_configuration" "pca_private_bucket_infrequent_access" {
count = var.enable_private_bucket ? 1 : 0
bucket = aws_s3_bucket.pca_private_bucket[0].id
}
You need to use the v3-style inline lifecycle_rule blocks, which are deprecated, but it seems to be the only way of doing it.
Here's how I have it set up, with the extra lifecycle rules added via a dynamic block:
resource "aws_s3_bucket" "cache" {
bucket = local.cache_bucket_name
force_destroy = false
tags = {
Name = "${var.vpc_name} cache"
}
lifecycle_rule {
id = "${local.cache_bucket_name} lifecycle rule"
abort_incomplete_multipart_upload_days = 1
enabled = true
noncurrent_version_expiration {
days = 1
}
transition {
days = 1
storage_class = "INTELLIGENT_TIERING"
}
}
dynamic "lifecycle_rule" {
for_each = var.cache_expiration_rules
content {
id = "${lifecycle_rule.value["prefix"]} expiration in ${lifecycle_rule.value["days"]} days"
enabled = true
prefix = lifecycle_rule.value["prefix"]
expiration {
days = lifecycle_rule.value["days"]
}
}
}
lifecycle {
prevent_destroy = true
}
}
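For reference, the cache_expiration_rules variable consumed by the dynamic block above would be shaped something like this (a sketch: the prefix/days keys come from the snippet, while the default values are made up for illustration):

variable "cache_expiration_rules" {
  type = list(object({
    prefix = string
    days   = number
  }))
  # hypothetical example values
  default = [
    { prefix = "tmp/",  days = 7 },
    { prefix = "logs/", days = 30 },
  ]
}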

How do I get list of all S3 Buckets with given prefix using terraform?

I am writing a Terraform script to set up an event notification on multiple S3 buckets that start with a given prefix.
For example, I want to set up notifications for buckets starting with finance-data. With the help of the aws_s3_bucket data source, we can reference multiple S3 buckets that already exist and later use them in an aws_s3_bucket_notification resource. Example:
data "aws_s3_bucket" "source_bucket" {
# set of buckets on which event notification will be set
# finance-data-1 and finance-data-2 are actual bucket id
for_each = toset(["finance-data-1", "finance-data-2"])
bucket = each.value
}
resource "aws_s3_bucket_notification" "bucket_notification_to_lambda" {
for_each = data.aws_s3_bucket.source_bucket
bucket = each.value.id
lambda_function {
lambda_function_arn = aws_lambda_function.s3_event_lambda.arn
events = [
"s3:ObjectCreated:*",
"s3:ObjectRemoved:*"
]
}
}
In the aws_s3_bucket data source, I am not able to find an option to give a prefix for the bucket name; instead, I have to enter the bucket ID for all the buckets. Is there any way to achieve this?
Is there any way to achieve this?
No, there is not. You have to explicitly specify the buckets that you want.
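That said, if the list really must be dynamic, one workaround (a sketch, not part of the original answer; it assumes the AWS CLI and jq are available wherever Terraform runs) is to shell out through the external data source and feed the result into for_each:

data "external" "finance_buckets" {
  # lists bucket names starting with finance-data and returns them as a
  # single comma-separated string, since the external data source only
  # accepts a flat map of strings
  program = ["bash", "-c", <<-EOT
    aws s3api list-buckets \
      --query "Buckets[?starts_with(Name, 'finance-data')].Name" \
      --output json | jq -c '{names: join(",")}'
  EOT
  ]
}

resource "aws_s3_bucket_notification" "bucket_notification_to_lambda" {
  for_each = toset(split(",", data.external.finance_buckets.result.names))
  bucket   = each.value

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_event_lambda.arn
    events              = ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
  }
}

Keep in mind that the bucket list is only refreshed when Terraform runs, so newly created buckets show up as plan changes on the next run.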

Renaming s3 bucket in Terraform (but not S3) causes create then destroy?

I want to refactor my Terraform scripts a bit.
Before:
resource "aws_s3_bucket" "abc" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
After:
resource "aws_s3_bucket" "def" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
As you can see, only the resource name in Terraform has changed (abc -> def).
However, this causes a create / destroy of the bucket in terraform plan.
I expected Terraform to recognize the buckets as the same (they have the same attributes, including bucket).
Questions:
Why is this?
Is there a way to refactor Terraform scripts without destroying infrastructure?
You can use terraform state mv to reflect this change in the state.
In your case, this would be:
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
From my own experience, this works well and I recommend doing it instead of working with bad names.
Terraform does not recognize such changes, no :-)
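On Terraform 1.1 and later, a moved block achieves the same rename declaratively, so the refactor is recorded in code rather than done by hand (a sketch using the names from the question):

moved {
  from = aws_s3_bucket.abc
  to   = aws_s3_bucket.def
}

After adding the block, terraform plan should report the move instead of a create / destroy.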

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, i.e. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, i.e. after bucket creation.
All good...
But I want to be able to add lifecycle rules AFTER I've created the bucket, and I get an error telling me the bucket already exists. (Actually I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true

  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but it is the only way. You pretty much trick Terraform into accepting the list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform should have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
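For completeness, here is what the dynamic-block approach could look like on Terraform 0.12+ (a sketch; it assumes the lifecycle_rules variable is reshaped so each rule carries a flat expiration_days number instead of a nested list):

variable "lifecycle_rules" {
  type = list(object({
    id              = string
    prefix          = string
    enabled         = bool
    expiration_days = number
  }))
  default = []
}

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  # one lifecycle_rule block is generated per entry in var.lifecycle_rules
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}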
As far as I am aware, you cannot manage a lifecycle policy separately.
Someone raised an issue asking for a resource to be created to allow you to do so, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe you're getting it because:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done as names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done as names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your above code you are creating 3 separate S3 buckets (one without a lifecycle rule and 2 with one) and two objects (folders) that are being placed into the S3 bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and run terraform apply; then you can edit the resource, add/amend/remove lifecycle_rule items, and just run terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}