Is there a way I can use terraform import aws_s3_bucket to import an S3 bucket AND its policy? I'm trying to import my existing S3 buckets into Terraform so that I can apply replication configuration to them. For example:
#Importing an existing bucket
terraform import aws_s3_bucket.replication myexistingbucket
#Maybe be able to grab s3 bucket policy from the import
data "aws_s3_bucket_policy" "replication" {
bucket = "myexistingbucket"
}
#Terraform configuration to add replication to the existing bucket
resource "aws_s3_bucket" "replication" {
bucket = "myexistingbucket"
policy = data.aws_s3_bucket_policy.replication.policy
versioning {
enabled = true
}
replication_configuration {
####configuration code here
}
lifecycle {
prevent_destroy = true
}
}
My code looks like what is above, minus the aws_s3_bucket_policy data block. The issue is that when I run this configuration, it applies the replication configuration but leaves the bucket policy blank afterwards, since I don't define a bucket policy. I'm hoping I don't have to import every single bucket policy (there are multiple buckets being imported, and each has a very long policy).
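A minimal sketch of one way around this, assuming AWS provider v4 or later (see the drift-detection changes under Related below): the policy and replication become standalone resources, and the aws_s3_bucket resource no longer detects drift on settings you leave out, so importing the bucket and adding only versioning and replication leaves the existing policy untouched. The role ARN and destination bucket below are placeholders:
# Sketch, assuming AWS provider v4+: nothing here manages the bucket policy,
# so the existing policy is left as-is after the import
resource "aws_s3_bucket" "replication" {
  bucket = "myexistingbucket"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "replication" {
  bucket = aws_s3_bucket.replication.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_replication_configuration" "replication" {
  # Versioning must be enabled before replication can be configured
  depends_on = [aws_s3_bucket_versioning.replication]

  bucket = aws_s3_bucket.replication.id
  role   = "arn:aws:iam::111111111111:role/replication-role" # placeholder role ARN

  rule {
    id     = "replicate-all"
    status = "Enabled"

    destination {
      bucket = "arn:aws:s3:::destination-bucket" # placeholder destination ARN
    }
  }
}
The bucket itself is still imported the same way as before (terraform import aws_s3_bucket.replication myexistingbucket).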
Related
Version 4 of the AWS Provider introduces significant changes to the aws_s3_bucket resource:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-4-upgrade#changes-to-s3-bucket-drift-detection
From this version on, each parameter of the S3 bucket should be configured in its own standalone resource, separate from the aws_s3_bucket resource configuration, and re-imported into the Terraform state.
Example:
Before:
resource "aws_s3_bucket" "example" {
bucket = "yournamehere"
# ... other configuration ...
acceleration_status = "Enabled"
}
After:
resource "aws_s3_bucket" "example" {
bucket = "yournamehere"
# ... other configuration ...
}
resource "aws_s3_bucket_accelerate_configuration" "example" {
bucket = aws_s3_bucket.example.id
status = "Enabled"
}
The number of buckets I need to reconfigure exceeds 100, which would take many days of work.
Is there a solution or a tool to make the configuration conversion faster?
S3 Bucket Accelerate can be configured in either the standalone resource "aws_s3_bucket_accelerate_configuration" or with the deprecated parameter "acceleration_status" in the resource aws_s3_bucket. Configuring with both will cause inconsistencies and may overwrite configuration.
Resource: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
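As far as I know there is no official conversion tool, but Terraform 1.5 and later can take over some of the work with import blocks and configuration generation. A sketch, assuming Terraform 1.5+ and using the same placeholder bucket name as above:
# Sketch, assuming Terraform 1.5+: declare import blocks for the standalone
# resources instead of running terraform import once per bucket and resource
import {
  to = aws_s3_bucket_versioning.example
  id = "yournamehere"
}

import {
  to = aws_s3_bucket_accelerate_configuration.example
  id = "yournamehere"
}

# Then let Terraform write matching resource blocks for every import target
# that has no configuration yet:
#   terraform plan -generate-config-out=generated.tf
The generated file usually still needs review, but it avoids hand-writing the new standalone resources for every bucket.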
I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code reusability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hard-code the name of the bucket that I created, but I would like to reference the bucket the same way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
#configure terraform to use s3 as the backend
terraform {
backend "s3" {
bucket = "aws_s3_bucket.my-bucket.id"
key = "terraform/terraform.tfstate"
region = "ca-central-1"
}
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when the initialization step happens (terraform init), meaning that you have to have an existing bucket before being able to provision any resources (before the first terraform apply).
If you do a terraform init with a bucket name which does not exist, you get this error:
│ The referenced S3 bucket must have been previously created. If the S3 bucket
│ was created within the last minute, please wait for a minute or two and try
│ again.
This is self-explanatory. It is not really possible to have the S3 bucket that is used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to import an existing bucket into the state, I would NOT recommend importing the backend bucket.
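Also note that backend blocks cannot use interpolation, so a reference like aws_s3_bucket.my-bucket.id will never resolve there. If the goal is mainly to avoid hard-coding the bucket name in every configuration, a partial backend configuration is one option; a sketch (the bucket still has to exist before terraform init):
# Sketch: leave the bucket out of the backend block entirely
terraform {
  backend "s3" {
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}

# and supply it when initialising, for example:
#   terraform init -backend-config="bucket=<your-existing-bucket-name>"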
Is there a way for Terraform to make changes to an existing S3 bucket without affecting the creation or deletion of the bucket?
For example, I want to use Terraform to enable S3 replication across several AWS accounts. The S3 buckets already exist, and I simply want to enable a replication rule (via a pipeline) without recreating, deleting, or emptying the bucket.
My code looks like this:
data "aws_s3_bucket" "test" {
bucket = "example_bucket"
}
data "aws_iam_role" "s3_replication" {
name = "example_role"
}
resource "aws_s3_bucket" "source" {
bucket = data.aws_s3_bucket.test.id
versioning {
enabled = true
}
replication_configuration {
role = data.aws_iam_role.s3_replication.arn
rules {
id = "test"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest1"
}
}
rules {
id = "test2"
status = "Enabled"
destination {
bucket = "arn:aws:s3:::dest2"
}
}
}
}
When I try to do it this way, Terraform apply tries to delete the existing bucket and create a new one instead of just updating the configuration. I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well. I would like to simply apply and destroy the replication configuration, not the already existing bucket.
I would like to simply apply and destroy the replication configuration, not the already existing bucket.
Sadly, you can't do this. Your bucket must be imported into Terraform so that it can be managed by it.
I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well.
To protect against this, you can use prevent_destroy:
This meta-argument, when set to true, will cause Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource, as long as the argument remains present in the configuration.
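A minimal sketch of what that looks like on the imported bucket, using the bucket name from the question:
resource "aws_s3_bucket" "source" {
  bucket = "example_bucket"

  # Any plan that would destroy this bucket (including terraform destroy) is rejected
  lifecycle {
    prevent_destroy = true
  }

  # ... versioning and replication_configuration as in the question ...
}
Note that prevent_destroy makes terraform destroy fail with an error rather than skip the bucket; if you later want Terraform to stop managing the bucket without deleting it, removing it from the state with terraform state rm is the usual route.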
I currently have two (possibly conflicting) S3 bucket policies, which show a permanent difference in Terraform. Before I show parts of the code, I will try to give an overview of the structure.
I am currently using a module, which:
Takes IAM Role & an S3 Bucket as inputs
Attaches S3 Bucket policy to the inputted role
Attaches S3 Bucket (allowing VPC) policy to the inputted S3 bucket
I have created some code (a snippet, not the full code) to illustrate what this looks like for the module.
The policies look like:
# S3 Policy to be attached to the ROLE
data "aws_iam_policy_document" "foo_iam_s3_policy" {
statement {
effect = "Allow"
resources = ["${data. s3_bucket.s3_bucket.arn}/*"]
actions = ["s3:GetObject", "s3:GetObjectVersion"]
}
statement {
effect = "Allow"
resources = [data.aws_s3_bucket.s3_bucket.arn]
actions = ["s3:*"]
}
}
# VPC Policy to be attached to the BUCKET
data "aws_iam_policy_document" "foo_vpc_policy" {
statement {
sid = "VPCAllow"
effect = "Allow"
resources = [data.aws_s3_bucket.s3_bucket.arn, "${data.aws_s3_bucket.s3_bucket.arn}/*"]
actions = ["s3:GetObject", "s3:GetObjectVersion"]
condition {
test = "StringEquals"
variable = "aws:SourceVpc"
values = [var.foo_vpc]
}
principals {
type = "*"
identifiers = ["*"]
}
}
}
The policy attachments look like:
# Turn policy into a resource to be able to use ARN
resource "aws_iam_policy" "foo_iam_policy_s3" {
name = "foo-s3-${var.s3_bucket_name}"
description = "IAM policy for foo on s3"
policy = data.aws_iam_policy_document.foo_iam_s3_policy.json
}
# Attaches s3 bucket policy to IAM Role
resource "aws_iam_role_policy_attachment" "foo_attach_s3_policy" {
role = data.aws_iam_role.foo_role.name
policy_arn = aws_iam_policy.foo_iam_policy_s3.arn
}
# Attach foo vpc policy to bucket
resource "s3_bucket_policy" "foo_vpc_policy" {
bucket = data.s3_bucket.s3_bucket.id
policy = data.aws_iam_policy_document.foo_vpc_policy.json
}
Now let's step outside of the module, to where the S3 bucket (the one that will be passed into the module) is created and where another policy needs to be attached to it. So outside of the module, we:
Provide an S3 bucket to the aforementioned module as input (alongside the IAM Role)
Create a policy to allow some IAM Role to put objects in the aforementioned bucket
Attach the created policy to the bucket
The policy looks like:
# Create policy to allow bar to put objects in the bucket
data "aws_iam_policy_document" "bucket_policy_bar" {
statement {
sid = "Bar IAM access"
effect = "Allow"
resources = [module.s3_bucket.bucket_arn, "${module.s3_bucket.bucket_arn}/*"]
actions = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
principals {
type = "AWS"
identifiers = [var.bar_iam]
}
}
}
And its attachment looks like:
# Attach Bar bucket policy
resource "s3_bucket_policy" "attach_s3_bucket_bar_policy" {
bucket = module.s3_bucket.bucket_name
policy = data.aws_iam_policy_document.bucket_policy_bar.json
}
(For more context: basically, foo is a database that needs the VPC policy and the S3-to-role attachment to operate on the bucket, and bar is an external service that needs to write data to the bucket.)
What is going wrong
When I try to plan/apply, Terraform shows that there is always a change, and shows an overwrite between the S3 bucket policy of bar (bucket_policy_bar) and the VPC policy attached inside the module (foo_vpc_policy).
In fact the error I am getting kind of sounds like what is described here:
The usage of this resource conflicts with the
aws_iam_policy_attachment resource and will permanently show a
difference if both are defined.
But I am attaching policies to S3 and not to a role, so I am not sure if this warning applies to my case.
Why are my policies conflicting? And how can I avoid this conflict?
EDIT:
For clarification, I have a single S3 bucket to which I need to attach two policies: one that allows VPC access (foo_vpc_policy, which gets created inside the module) and another (bucket_policy_bar) that allows an IAM role to put objects in the bucket.
there is always a change
That is correct. aws_s3_bucket_policy sets a new policy on the bucket. It does not add new statements to an existing one.
Since you are invoking aws_s3_bucket_policy twice for the same bucket, first in the module.s3_bucket module and then in the parent module (I guess), the parent module will simply attempt to set a new policy on the bucket. When you run terraform plan/apply again, Terraform will detect that the policy defined in module.s3_bucket is different and will try to update it. So you basically end up in a cycle, where each apply changes the bucket policy to a new one.
I'm not aware of a Terraform resource which would allow you to update (i.e. add new statements to) an existing bucket policy. Thus I would try to refactor your design so that you execute aws_s3_bucket_policy only once, with all the statements that you require.
Thanks to the tip from Marcin, I was able to resolve the issue by making the attachment of the policy inside the module optional, like:
# Attach foo vpc policy to bucket
resource "s3_bucket_policy" "foo_vpc_policy" {
count = var.attach_vpc_policy ? 1 : 0 # Only attach VPC Policy if required
bucket = data.aws_s3_bucket.s3_bucket.id
policy = data.aws_iam_policy_document.foo_vpc_policy.json
}
The policy has in all cases been added as an output of the module, like:
# Outputting only the statement, as it will later be merged with other policies
output "foo_vpc_policy_json" {
description = "VPC Allow policy json (to be later merged with other policies that relate to the bucket outside of the module)"
value = data.aws_iam_policy_document.foo_vpc_policy.json
}
For the cases where the attachment of the policy needed to be deferred (to attach it together with another policy), I in-lined the policy via source_json:
data "aws_iam_policy_document" "bucket_policy_bar" {
# Adding the VPC Policy JSON as a base for this Policy (docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document)
source_json = module.foor_.foo_vpc_policy_json # here we add the VPC statement exported by the module as the base
statement {
sid = "Bar IAM access"
effect = "Allow"
resources = [module.s3_bucket_data.bucket_arn, "${module.s3_bucket_data.bucket_arn}/*"]
actions = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
principals {
type = "AWS"
identifiers = [var.bar_iam]
}
}
}
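With the merged document, the bucket policy is then set exactly once, outside the module; a sketch reusing the names from the snippets above (the bucket_name output is assumed):
# Attach the combined (VPC + Bar) policy to the bucket in a single place
resource "aws_s3_bucket_policy" "attach_s3_bucket_bar_policy" {
  bucket = module.s3_bucket_data.bucket_name # assumed module output, as in the earlier snippet
  policy = data.aws_iam_policy_document.bucket_policy_bar.json
}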
I was trying to create a remote backend in an S3 bucket.
provider "aws" {
version = "1.36.0"
profile = "tasdik"
region = "ap-south-1"
}
terraform {
backend "s3" {
bucket = "ops-bucket"
key = "aws/ap-south-1/homelab/s3/terraform.tfstate"
region = "ap-south-1"
}
}
resource "aws_s3_bucket" "ops-bucket" {
bucket = "ops-bucket"
acl = "private"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
tags = {
Name = "ops-bucket"
Environment = "devel"
}
}
I haven't applied anything yet; the bucket is not present as of now. So, Terraform asks me to do an init. But when I try to do so, I get the following:
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: BucketRegionError: incorrect region, the bucket is not in 'ap-south-1' region
status code: 301, request id: , host id:
Terraform will initialise the state configuration before any other actions such as a plan or apply. Thus you can't have the creation of the S3 bucket that your state will be stored in defined at the same time as you define the state backend.
Terraform also won't create an S3 bucket for you to put your state in, you must create this ahead of time.
You can either do this outside of Terraform such as with the AWS CLI:
aws s3api create-bucket --bucket "${BUCKET_NAME}" --region "${BUCKET_REGION}" \
--create-bucket-configuration LocationConstraint="${BUCKET_REGION}"
or you could create it via Terraform as you are trying to do, but use local state for creating the bucket on the first apply, and then add the backend configuration and re-init to get Terraform to migrate the state to your new S3 bucket.
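A sketch of that two-step bootstrap, using the names from the question:
# Step 1: no backend block yet, so state stays local; create just the bucket
resource "aws_s3_bucket" "ops-bucket" {
  bucket = "ops-bucket"
  # ... versioning, lifecycle, tags as above ...
}

# Step 2: after the first apply succeeds, add the backend block back
terraform {
  backend "s3" {
    bucket = "ops-bucket"
    key    = "aws/ap-south-1/homelab/s3/terraform.tfstate"
    region = "ap-south-1"
  }
}

# and run terraform init again (terraform init -migrate-state on newer versions);
# Terraform will offer to copy the existing local state into the S3 backend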
As for the error message, S3 bucket names are globally unique across all regions and all AWS accounts. The error message is telling you that the GetBucketLocation call found a bucket with that name, but not in ap-south-1, so the name is presumably already taken in another region or account. When creating your buckets, I recommend making sure they are likely to be unique by doing something such as concatenating the account ID and possibly the region name into the bucket name.
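A sketch of that naming approach, pulling the account ID from the aws_caller_identity data source:
# Sketch: include the account ID in the bucket name to make collisions unlikely
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "ops-bucket" {
  bucket = "ops-bucket-${data.aws_caller_identity.current.account_id}"
}
The backend block itself still needs the resulting literal bucket name, since backend configuration cannot use interpolation.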