Version 4 of the AWS Provider introduces significant changes to the aws_s3_bucket:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-4-upgrade#changes-to-s3-bucket-drift-detection
From this version on, many of the bucket's parameters must be configured in their own standalone resources, separate from the aws_s3_bucket resource configuration, and then re-imported into the Terraform state.
Example:
Before:
resource "aws_s3_bucket" "example" {
bucket = "yournamehere"
# ... other configuration ...
acceleration_status = "Enabled"
}
After:
resource "aws_s3_bucket" "example" {
bucket = "yournamehere"
# ... other configuration ...
}
resource "aws_s3_bucket_accelerate_configuration" "example" {
bucket = aws_s3_bucket.example.id
status = "Enabled"
}
The number of buckets I need to reconfigure exceeds 100, which would take many days of manual work.
Is there a solution or a tool to make the configuration conversion faster?
S3 Bucket Accelerate can be configured in either the standalone resource "aws_s3_bucket_accelerate_configuration" or with the deprecated parameter "acceleration_status" in the resource aws_s3_bucket. Configuring with both will cause inconsistencies and may overwrite configuration.
Resource: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
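As far as I know there is no built-in converter in the provider, so for each bucket the migration is: move the argument into its standalone resource, then import that resource into state using the bucket name as the import ID. A minimal sketch for the example resource names above (for many buckets, the import commands can be generated with a shell loop over the bucket names):
# after adding the aws_s3_bucket_accelerate_configuration.example resource,
# import it into state; the import ID is the bucket name
terraform import aws_s3_bucket_accelerate_configuration.example yournamehere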
Related
I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code re-usability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hard-code the name of the bucket that I created, but I would like to reference the bucket the same way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
#configure terraform to use s3 as the backend
terraform {
backend "s3" {
bucket = "aws_s3_bucket.my-bucket.id"
key = "terraform/terraform.tfstate"
region = "ca-central-1"
}
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when the initialization step happens (terraform init), meaning that you must have an existing bucket before you can provision any resources (before the first terraform apply).
If you run terraform init with a bucket name that does not exist, you get this error:
│ The referenced S3 bucket must have been previously created. If the S3 bucket
│ was created within the last minute, please wait for a minute or two and try
│ again.
This is self-explanatory. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. Also note that the backend block cannot contain interpolations or resource references, so bucket = aws_s3_bucket.my-bucket.id will never work there; the value has to be a literal string (or be supplied at init time). While you can certainly use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
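If the goal is just to avoid hard-coding the name in every configuration, one option is Terraform's partial backend configuration: leave the values out of the backend block and pass them at init time. The bucket name below is only a placeholder:
terraform init \
  -backend-config="bucket=my-org-terraform-state" \
  -backend-config="key=terraform/terraform.tfstate" \
  -backend-config="region=ca-central-1"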
I want to delete my non-empty S3 bucket created with Terraform. I used the force_destroy=true option as well, but I still get
BucketNotEmpty: The bucket you tried to delete is not empty
status code: 409, request id: xxxx, host id: xxxxxxx
The bucket was also created with the force_destroy option:
resource "aws_s3_bucket" "pipelineartifactstore" {
bucket = "${var.prefix}-${var.namespace}-${var.stage}-pipeline-artifactstore"
acl = "private"
force_destroy = true
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
tags = var.default_tags
}
There is a force_destroy option for buckets. When the value is true, destroying the bucket will first remove all objects in it and then remove the bucket itself.
Once it is set and applied, you can remove the resource from your configuration and run terraform apply (or run terraform destroy) and the bucket will be removed, with no need for any additional CLI steps.
Documentation Link
Some objects may have been added before encryption or versioning was enabled, so try emptying the bucket with the AWS CLI:
aws s3 rm s3://bucket-name --recursive
Then run terraform apply again.
Since this appears to be for CodePipeline, the best option here is to follow these steps:
Add force_destroy = true to your S3 configuration
Apply the change with terraform apply
Then run terraform destroy. Even if you have data in the bucket, the objects will be removed and then the bucket too.
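A minimal sketch of that sequence, assuming force_destroy = true is already present in the bucket resource:
terraform apply    # pushes force_destroy = true into the state first
terraform destroy  # then empties the bucket and deletes it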
I have imported an existing S3 bucket into my Terraform state.
I am now trying to reverse-engineer its configuration and capture it in the .tf file.
Here is my file
resource "aws_s3_bucket" "my-bucket" {
provider = "aws.eu_west_1"
bucket = "my-bucket"
grant {
type = "Group"
permissions = ["READ_ACP", "WRITE"]
uri = "http://acs.amazonaws.com/groups/s3/LogDelivery"
}
grant {
id = "my-account-id"
type = "CanonicalUser"
permissions = ["FULL_CONTROL"]
}
}
Here is my terraform plan output
~ aws_s3_bucket.my-bucket
acl: "" => "private"
No matter what value I use for acl, I always fail to align my .tf with the existing ACL configuration on the S3 bucket, e.g.
resource "aws_s3_bucket" "my-bucket" {
provider = "aws.eu_west_1"
bucket = "my-bucket"
acl = "private"
}
corresponding plan output:
Error: aws_s3_bucket.my-bucket: "acl": conflicts with grant
Error: aws_s3_bucket.my-bucket: "grant": conflicts with acl
and another:
resource "aws_s3_bucket" "my-bucket" {
provider = "aws.eu_west_1"
bucket = "my-bucket"
acl = ""
}
So if I use no value for acl, Terraform shows the acl will change from unset to "private".
If I use any value whatsoever, I get an error.
Why is that?
This is an observation on 0.13 but still might help:
If I create a bucket using your original code (i.e. with no acl line), the resulting TF state file still includes an "acl": "private" attribute for the bucket. If I then add an acl = "private" definition in the TF code, I also get "acl": conflicts with grant when trying to apply.
But what's really odd is that if I delete the acl = "private" definition (i.e. revert to your original code) and also delete the "acl": "private" attribute line from the state file, then the plan (including a refresh) shows that the bucket will be updated in place with + acl = "private". Applying this seems to work fine, but a second apply then shows that the grants have been lost and need to be reapplied.
So it seems to me that there's a bug in the S3 state refresh that might also affect the import; in addition, removing the acl attribute from state clearly makes Terraform apply it as a default that overrides any grants. It might be worth using your code to create a new bucket and then comparing the two state definitions to bring over any bits the original import missed.
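One way to do that comparison is terraform state show on both resources; the second resource name below is hypothetical, for a test bucket created from your code:
terraform state show aws_s3_bucket.my-bucket        # the imported bucket
terraform state show aws_s3_bucket.my-bucket-test   # a new bucket created from the same code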
I would like to deploy a local zip file to Elastic Beanstalk using Terraform. I would also like to keep old versions of the application in S3, with some retention policy, such as keep for 90 days. If I rebuild the bundle, I would like Terraform to detect this and deploy the new version. If the hash of the bundle hasn't changed then Terraform should not change anything.
Here is (some of) my config:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
}
locals {
user_interface_bundle_path = "${path.module}/../../build.zip"
}
resource "aws_s3_bucket_object" "user_interface_latest" {
bucket = aws_s3_bucket.application.id
key = "user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
source = local.user_interface_bundle_path
}
resource "aws_elastic_beanstalk_application" "user_interface" {
name = "${var.environment}-user-interface-app"
}
resource "aws_elastic_beanstalk_application_version" "user_interface_latest" {
name = "user-interface-${filesha256(local.user_interface_bundle_path)}"
application = aws_elastic_beanstalk_application.user_interface.name
bucket = aws_s3_bucket_object.user_interface_latest.bucket
key = aws_s3_bucket_object.user_interface_latest.key
}
resource "aws_elastic_beanstalk_environment" "user_interface" {
name = "${var.environment}-user-interface-env"
application = aws_elastic_beanstalk_application.user_interface.name
solution_stack_name = "64bit Amazon Linux 2018.03 v4.15.0 running Node.js"
version_label = aws_elastic_beanstalk_application_version.user_interface_latest.name
}
The problem with this is that each time the hash of the bundle changes, it deletes the old object in S3.
How can I get Terraform to create a new aws_s3_bucket_object and not delete the old one?
This question is related, but I don't want to maintain build numbers: Elastic Beanstalk Application Version in Terraform
Expanding on @Marcin's comment...
You should enable bucket versioning and add a lifecycle rule to expire noncurrent versions after 90 days.
Here is an example:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
versioning {
enabled = true
}
lifecycle_rule {
id = "retention"
noncurrent_version_expiration {
days = 90
}
}
}
You can see more examples in the documentation:
https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-object-lifecycle
Then I would simplify your aws_s3_bucket_object: since we have versioning, we don't really need the filesha256 in the key; just use the original name build.zip and you're good to go.
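For example, a minimal sketch of the simplified object; the key name is just an illustration, and the etag argument is optional but makes Terraform re-upload the object when the file content changes (it does not play well with KMS-encrypted objects):
resource "aws_s3_bucket_object" "user_interface_latest" {
  bucket = aws_s3_bucket.application.id
  key    = "user-interface/build.zip"
  source = local.user_interface_bundle_path
  etag   = filemd5(local.user_interface_bundle_path) # re-upload when the bundle changes
}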
If you don't want to enable bucket versioning, another way would be to use the AWS CLI to upload the file before you call Terraform, or to do it in a local-exec provisioner from a null_resource. Here are a couple of examples:
https://www.terraform.io/docs/provisioners/local-exec.html#interpreter-examples
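A rough sketch of the null_resource approach, reusing local.user_interface_bundle_path and the bucket from above (the resource name is just an illustration); because the object is uploaded outside of Terraform's state, Terraform never deletes the old ones:
resource "null_resource" "upload_user_interface_bundle" {
  # re-run the upload whenever the bundle content changes
  triggers = {
    bundle_hash = filesha256(local.user_interface_bundle_path)
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${local.user_interface_bundle_path} s3://${aws_s3_bucket.application.id}/user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
  }
}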
I was trying to create an S3 remote backend for my Terraform state.
provider "aws" {
version = "1.36.0"
profile = "tasdik"
region = "ap-south-1"
}
terraform {
backend "s3" {
bucket = "ops-bucket"
key = "aws/ap-south-1/homelab/s3/terraform.tfstate"
region = "ap-south-1"
}
}
resource "aws_s3_bucket" "ops-bucket" {
bucket = "ops-bucket"
acl = "private"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
tags {
Name = "ops-bucket"
Environment = "devel"
}
}
I haven't applied anything yet; the bucket is not present as of now. So Terraform asks me to do an init. But when I try to do so, I get:
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: BucketRegionError: incorrect region, the bucket is not in 'ap-south-1' region
status code: 301, request id: , host id:
Terraform will initialise the state backend before any other action such as a plan or apply. Thus you can't have the creation of the S3 bucket that your state will be stored in defined in the same configuration run that first configures it as the state backend.
Terraform also won't create the S3 backend bucket for you; you must create it ahead of time.
You can either do this outside of Terraform such as with the AWS CLI:
aws s3api create-bucket --bucket "${BUCKET_NAME}" --region "${BUCKET_REGION}" \
--create-bucket-configuration LocationConstraint="${BUCKET_REGION}"
or you could create it via Terraform, as you are trying to do, but use local state for creating the bucket on the first apply, then add the backend configuration and re-run terraform init so Terraform migrates the state to your new S3 bucket.
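Roughly, that second approach looks like this:
# 1. leave the backend "s3" block out (or commented), create the bucket with local state
terraform init
terraform apply
# 2. add the backend "s3" block, then re-init; Terraform detects the existing local
#    state and offers to copy it into the new S3 backend
terraform init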
As for the error message, S3 bucket names are globally unique across all regions and all AWS accounts. The error message is telling you that it ran the GetBucketLocation call but couldn't find a bucket in ap-south-1. When creating your buckets I recommend making sure they are likely to be unique by doing something such as concatenating the account ID and possibly the region name into the bucket name.
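For example, a hypothetical naming scheme using the account ID and region:
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "ops-bucket" {
  bucket = "ops-bucket-${data.aws_caller_identity.current.account_id}-ap-south-1"
  # ... rest of the configuration as above ...
}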