Renaming S3 bucket in Terraform (but not S3) causes create then destroy?

I want to refactor my Terraform scripts a bit.
Before:
resource "aws_s3_bucket" "abc" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
After:
resource "aws_s3_bucket" "def" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
As you can see, only the name in Terraform has changed (abc -> def).
However, this causes a create / destroy of the bucket in terraform plan.
I expected Terraform to recognize the buckets as the same (they have the same attributes, including bucket).
Questions:
Why is this?
Is there a way to refactor Terraform scripts without destroying infrastructure?

You can use terraform state mv to reflect this change in the state.
In your case, this would be:
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
From my own experience, this works well and I recommend doing it instead of working with bad names.
Terraform does not recognize such changes, no :-)
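If you are on Terraform 1.1 or later, a moved block is an alternative to terraform state mv: it records the rename in the configuration itself, so every workspace that uses the code gets the same state migration. A minimal sketch, using the resource addresses from the question:

moved {
  from = aws_s3_bucket.abc
  to   = aws_s3_bucket.def
}

resource "aws_s3_bucket" "def" {
  bucket = "my-bucket"
  # ... rest of the configuration unchanged
}

On the next terraform plan Terraform should report the address change instead of a destroy/create; the moved block can be deleted once all states have been migrated.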

Related

Terraform: difference between referencing using local and resource

I have the following piece of code which creates a bucket and enables versioning.
Here I am referencing the bucket using locals:
resource "aws_s3_bucket_versioning" "sample" {
bucket = local.bucket_name
versioning_configuration {
status = "Enabled"
}
}
In this code I am referencing the bucket via the resource:
resource "aws_s3_bucket_versioning" "sample" {
bucket = aws_s3_bucket.sample.bucket
versioning_configuration {
status = "Enabled"
}
}
I think both do the same work, but my mentor said that referencing the resource is better code because it decreases the chance of errors: with locals, Terraform won't understand the dependencies and might provision versioning before provisioning the bucket.
My point is that I think Terraform is smart enough to resolve dependencies; is that still true when we reference using locals?
There is an article about this (in Japanese): https://dev.classmethod.jp/articles/dependency-in-terraform/
Your mentor is correct. aws_s3_bucket.sample.bucket is better than local.bucket_name. This is because it makes the code easier to maintain and modify. For example, if you have code as follows:
resource "aws_s3_bucket" "sample" {
bucket = local.bucket_name
}
resource "aws_s3_bucket_versioning" "sample" {
bucket = local.bucket_name
}
Then any change in aws_s3_bucket, to, let's say,
resource "aws_s3_bucket" "sample" {
bucket = var.bucket_name
}
will require you to manually change aws_s3_bucket_versioning as well. That's easy if you have a small codebase, but if not, it can be quite troublesome.
In contrast, if you have
resource "aws_s3_bucket" "sample" {
bucket = local.bucket_name
}
resource "aws_s3_bucket_versioning" "sample" {
bucket = aws_s3_bucket.sample.bucket
}
then changing aws_s3_bucket to
resource "aws_s3_bucket" "sample" {
bucket = var.bucket_name
}
will automatically propagate through the rest of the code. You do not have to manually change aws_s3_bucket_versioning and fix the bucket name.
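For completeness: if you do keep local.bucket_name in aws_s3_bucket_versioning, the ordering concern can be addressed with an explicit depends_on. This is a minimal sketch of that workaround (the implicit dependency via aws_s3_bucket.sample.bucket remains the cleaner option):

resource "aws_s3_bucket_versioning" "sample" {
  bucket = local.bucket_name

  versioning_configuration {
    status = "Enabled"
  }

  # local.bucket_name does not reference the bucket resource,
  # so the dependency has to be declared explicitly
  depends_on = [aws_s3_bucket.sample]
}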

Is there a way in terraform to have multiple lifecycle configuration blocks for a single AWS S3 bucket?

I am using a module to create an AWS S3 bucket via Terraform. This module creates a bucket with a lot of default policies/configuration as mandated by my company. Along with that, it sets some lifecycle rules using aws_s3_bucket_lifecycle_configuration.
I don't want to use those rules, and they can be disabled via the inputs to the said module. But the problem is that when I try to add my custom lifecycle configuration, I get a different result each time: sometimes my rules are applied, while at other times they are not present in the configuration.
Even the documentation says that:
NOTE: S3 Buckets only support a single lifecycle configuration. Declaring multiple aws_s3_bucket_lifecycle_configuration resources to the same S3 Bucket will cause a perpetual difference in configuration.
What could be a way around this issue?
I can't set enable_private_bucket to false, but here is the code for the configuration resource in the module.
resource "aws_s3_bucket_lifecycle_configuration" "pca_private_bucket_infrequent_access" {
count = var.enable_private_bucket ? 1 : 0
bucket = aws_s3_bucket.pca_private_bucket[0].id
}
You need to use the v3-style inline lifecycle_rule, which is deprecated, but it seems to be the only way of doing it.
Here's how I have it set up, with extra lifecycle rules added via a dynamic block (a sketch of the variable it iterates over follows the code):
resource "aws_s3_bucket" "cache" {
bucket = local.cache_bucket_name
force_destroy = false
tags = {
Name = "${var.vpc_name} cache"
}
lifecycle_rule {
id = "${local.cache_bucket_name} lifecycle rule"
abort_incomplete_multipart_upload_days = 1
enabled = true
noncurrent_version_expiration {
days = 1
}
transition {
days = 1
storage_class = "INTELLIGENT_TIERING"
}
}
dynamic "lifecycle_rule" {
for_each = var.cache_expiration_rules
content {
id = "${lifecycle_rule.value["prefix"]} expiration in ${lifecycle_rule.value["days"]} days"
enabled = true
prefix = lifecycle_rule.value["prefix"]
expiration {
days = lifecycle_rule.value["days"]
}
}
}
lifecycle {
prevent_destroy = true
}
}
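The question does not show how var.cache_expiration_rules is declared; a minimal sketch of a variable that would satisfy the dynamic block above (the prefix and days attribute names come from the block, everything else is an assumption):

variable "cache_expiration_rules" {
  type = list(object({
    prefix = string
    days   = number
  }))
  # example entry: expire everything under tmp/ after 7 days
  default = [
    { prefix = "tmp/", days = 7 },
  ]
}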

How to ensure an S3 bucket name is not used with Terraform?

I know the aws_s3_bucket data source can be used to get a reference to an existing bucket, but how would it be used to ensure that a new potential bucket name is unique?
I'm thinking of a loop using random numbers, but how can that be used to search for a bucket name which has not been used?
As discussed in the comments, this behaviour can be achieved with the bucket_prefix functionality.
This code:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket_prefix = "my-stackoverflow-bucket-"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
This produces a bucket with a unique name: the prefix plus a generated suffix.
Another solution is to use bucket together with random_uuid instead of bucket_prefix, for example:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket = "my-s3-bucket-${random_uuid.uuid.result}"
}
resource "random_uuid" "uuid" {}
This will give you a name like this:
my-s3-bucket-ebb92011-3cd9-503f-0977-7371102405f5
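If the 36-character UUID makes the name too long (bucket names are limited to 63 characters), a shorter suffix from random_id works the same way; a sketch under that assumption:

resource "random_id" "suffix" {
  byte_length = 4 # yields 8 hex characters
}

resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "my-s3-bucket-${random_id.suffix.hex}"
}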

How do I apply a lifecycle rule to an EXISTING s3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source - it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, then it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google-fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into the Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
And then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and now you can make changes or add new things to your bucket using Terraform.
As a last step, just run terraform apply and the lifecycle rule will be added.
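On Terraform 1.5 or later you can also declare the import in configuration instead of running the CLI command; a minimal sketch, assuming the same bucket name:

import {
  to = aws_s3_bucket.test-bucket
  id = "bucket_name"
}

terraform plan then shows the planned import, terraform apply performs it, and the import block can be removed afterwards.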

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, ie. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, ie. after bucket creation.
All good...
But I want to be able to add lifecycle rules AFTER I've created the bucket; instead, I get an error telling me the bucket already exists. (Actually I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true

  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but it is the only way. You pretty much trick Terraform into accepting the list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform should have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
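A minimal sketch of that dynamic-block version on the (v3-style) inline lifecycle_rule, assuming a flattened variable shape (expiration_days instead of the nested expiration list above):

variable "lifecycle_rules" {
  type = list(object({
    id              = string
    prefix          = string
    enabled         = bool
    expiration_days = number
  }))
  default = []
}

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  # one lifecycle_rule block is generated per entry in var.lifecycle_rules
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}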
As far as I am aware, you cannot make a lifecycle policy separately.
Someone raised a feature request for a separate resource to allow you to do so, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe the reason you're getting the error is because:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done as names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done as names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your above code you are creating 3 separate S3 buckets (one without a lifecycle, and 2 with a lifecycle) and two objects (folders) that are placed into the S3 bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and do terraform apply; then you can edit it and add/amend/remove lifecycle_rule items, and just do terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}