Unable to create an S3 bucket with versioning using Terraform - amazon-web-services

I am creating an S3 bucket using Terraform on AWS, but I am unable to create it with versioning enabled. I get "Error putting S3 versioning: AccessDenied" when I run terraform apply.
terraform plan works with no issues.
provider "aws" {
  region = "us-east-1"
}

variable "instance_name" {}
variable "environment" {}

resource "aws_s3_bucket" "my_dr_bucket" {
  bucket = "${var.instance_name}-dr-us-west-2"
  region = "us-west-2"
  acl    = "private"

  versioning {
    enabled = true
  }
}
Getting the below error:
Error: Error putting S3 versioning: AccessDenied: Access Denied
status code: 403, request id: 21EBBB358558C617

Make sure you are creating the S3 bucket in the same region your provider is configured for.

The code below resolved the issue:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

variable "instance_name" {}
variable "environment" {}

resource "aws_s3_bucket" "my_dr_bucket" {
  provider = aws.west
  bucket   = "${var.instance_name}-dr-us-west-2"
  region   = "us-west-2"
  acl      = "private"

  versioning {
    enabled = true
  }
}
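Note that the inline versioning (and acl) blocks shown above only exist in AWS provider v3 and earlier; from v4 on they were split into standalone resources. A minimal sketch of the same bucket under the v4+ provider, reusing the aws.west alias from above (the region argument on aws_s3_bucket is gone entirely in v4):

```hcl
resource "aws_s3_bucket" "my_dr_bucket" {
  provider = aws.west
  bucket   = "${var.instance_name}-dr-us-west-2"
}

# Versioning is its own resource in provider v4+
resource "aws_s3_bucket_versioning" "my_dr_bucket" {
  provider = aws.west
  bucket   = aws_s3_bucket.my_dr_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```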

Related

Is it possible to store a Terraform state file in one AWS account and deploy into another using environment variables?

I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide different sets of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or maybe provide credentials to one with environment variables and to the other through shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = var.region
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access keys configuration used by the S3 backend is separate from the AWS profile/access keys configuration used by the AWS provider. By default they both look in the same place, but you can configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion:
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
  profile = "profile-2"
  region  = "us-east-1"
}
resource .......
This way, the state file will be stored using profile-1, and all the code will run under profile-2.
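If you prefer environment variables over profiles, one pattern (a sketch; the profile name and key values here are placeholders) is to give the backend its credentials once at init time via -backend-config, while the provider picks up the standard AWS_* environment variables at plan/apply time:

```shell
# Backend credentials: supplied at init and remembered in .terraform/
terraform init -backend-config="profile=state-account"

# Provider credentials: standard AWS env vars, read at plan/apply
export AWS_ACCESS_KEY_ID="key-for-deploy-account"
export AWS_SECRET_ACCESS_KEY="secret-for-deploy-account"
terraform apply
```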

Terraform import fails due to erroneous region although defined in the provider

I am using the following tf configuration:
variable "aws_profile" {
  description = "The AWS profile to use for this account"
}

provider "aws" {
  version = "~> 2"
  region  = "us-east-1"
  profile = "${var.aws_profile}"
}

provider "aws" {
  version = "~> 2"
  region  = "us-west-2"
  alias   = "us_west_2"
  profile = "profile-us-west-2"
}
where
cat ~/.aws/credentials
[profile-us-west-2]
region = us-west-2
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXX
and trying to import an existing S3 bucket
into the tf resource below:
resource "aws_s3_bucket" "my_tf_bucket" {
  provider = "aws.us_west_2"
  bucket   = "my_tf_bucket"
}
with the following command:
terraform import aws_s3_bucket.my_tf_bucket existing_bucket_name
which fails as follows:
Error importing AWS S3 bucket policy: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
status code: 400, request id: 64242424244D21946, host id: bddw422424
Why isn't the provider alias working?
The problem was that (for some reason) Terraform requires you to explicitly pass the provider when importing:
terraform import --provider=aws.us_west_2 aws_s3_bucket.my_tf_bucket existing_bucket_name
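For what it's worth, the -provider flag was removed in Terraform 0.13; current versions of terraform import resolve the provider (including aliases) from the resource's own configuration, so the plain command should work:

```shell
# Terraform 0.13+: the provider (aws.us_west_2 here) is taken from the
# resource block in configuration, no flag needed
terraform import aws_s3_bucket.my_tf_bucket existing_bucket_name
```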

Terraform throwing bucket region error when attaching bucket policy to s3 bucket

I am trying to create S3 bucket policies and attach them to S3 buckets with Terraform, but Terraform is throwing BucketRegionError and AccessDenied errors. It says the bucket I am trying to attach the policy to is not in the specified region, even though it is deployed in that region. Any advice on how I can attach this policy would be helpful. Below are the errors, how I am creating the buckets and the bucket policy, and how I am attaching it. Thanks!
resource "aws_s3_bucket" "dest_buckets" {
  provider      = aws.dest
  for_each      = toset(var.s3_bucket_names)
  bucket        = "${each.value}-replica"
  acl           = "private"
  force_destroy = true

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets
  bucket   = each.key
  policy   = data.aws_iam_policy_document.dest_policy.json
}
data "aws_iam_policy_document" "dest_policy" {
  statement {
    actions = [
      "s3:GetBucketVersioning",
      "s3:PutBucketVersioning",
    ]
    resources = [
      for bucket in aws_s3_bucket.dest_buckets : bucket.arn
    ]
    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${data.aws_caller_identity.source.account_id}:role/${var.replication_role}"
      ]
    }
  }

  statement {
    actions = [
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
    ]
    resources = [
      for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
    ]
  }
}
Errors:
Error: Error putting S3 policy: AccessDenied: Access Denied
status code: 403, request id: 7F17A032D84DE672, host id: EjX+cDYt57caooCIlGX9wPf5s8B2JlXqAZpG8mA5KZtuw/4varoutQfxlkC/9JstdMdjN8EYBtg=
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
Error: Error putting S3 policy: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
status code: 301, request id: , host id:
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
The buckets create with no issue, I'm just having issues with attaching this policy.
UPDATE:
Below is the provider block for aws.dest, the variables I have defined, and also my .aws/config file.
provider "aws" {
  alias   = "dest"
  profile = var.dest_profile
  region  = var.dest_region
}

variable "dest_region" {
  default = "us-east-2"
}

variable "dest_profile" {
  default = "replica"
}

[profile replica]
region = us-east-2
output = json
I managed to execute your configuration and noticed some issues:
In your policy, the second statement is missing its principals block.
statement {
  actions = [
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
  ]
  resources = [
    for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
  ]
}
This block is creating the bucket correctly (with -replica at the end):
resource "aws_s3_bucket" "dest_buckets" {
  provider      = aws.dest
  for_each      = toset(var.s3_bucket_names)
  bucket        = "${each.value}-replica"
  acl           = "private"
  force_destroy = true

  versioning {
    enabled = true
  }
}
However, by enabling debug logging, I noticed that in the resource below each.key references the bucket name without -replica, so I was receiving a 404.
resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets
  bucket   = each.key
  policy   = data.aws_iam_policy_document.dest_policy.json
}
Changing it to the same pattern as the bucket creation made it work:
resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets
  bucket   = "${each.key}-replica"
  policy   = data.aws_iam_policy_document.dest_policy.json
}
Regarding the 403, it may be due to a lack of permissions for the user creating this resource.
Let me know if this helps you.
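As an alternative to appending -replica by hand, each.value in a for_each over a resource is the full object for that instance, so you can reference the bucket's id attribute directly (a sketch of the same resource):

```hcl
resource "aws_s3_bucket_policy" "dest_policy" {
  provider = aws.dest
  for_each = aws_s3_bucket.dest_buckets

  # each.value is the aws_s3_bucket object; its id is the actual
  # bucket name, "-replica" suffix included
  bucket = each.value.id
  policy = data.aws_iam_policy_document.dest_policy.json
}
```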
I believe you need to add provider = aws.dest to your data "aws_iam_policy_document" "dest_policy" block as well.
The provider argument also works with data sources.

Why does creating a bucket with Terraform throw "The provider provider.aws does not support resource type "aws_s3"" error?

I am trying to create an S3 bucket using the following terraform code:
provider.tf
provider "aws" {
  access_key = "XX"
  secret_key = "YY"
  region     = "us-east-2"
}
main.tf
resource "aws_s3" "bucket" {
  bucket = "terraform-s3-bucket"
  acl    = "private"

  tags = {
    Name        = "My Bucket"
    Environment = "Test"
  }
}
However when I run terraform apply on the above code, I get this error:
Error: Invalid resource type
on main.tf line 1, in resource "aws_s3" "bucket":
1: resource "aws_s3" "bucket" {
The provider provider.aws does not support resource type "aws_s3".
What am I doing wrong?
Because there is no resource type named aws_s3. The resource you are looking for is aws_s3_bucket.
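A minimal corrected version of the original main.tf (only the resource type changes):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "terraform-s3-bucket"
  acl    = "private"

  tags = {
    Name        = "My Bucket"
    Environment = "Test"
  }
}
```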

terraform multiple providers not working with s3 bucket

I'm trying to do this:
terraform {
  backend "s3" {
    bucket = "resources"
    region = "us-east-1"
    key    = "resources"
  }
}

// the default region
provider "aws" {
  region = "us-west-2"
}

// for creating buckets in other regions (the region param on the
// aws_s3_bucket resource is broken)
provider "aws" {
  alias  = "east1"
  region = "us-east-1"
}

resource "aws_s3_bucket" "zzzzz" {
  provider      = "aws.east1"
  bucket        = "zzzzz"
  acl           = "private"
  force_destroy = true
}
And getting error
Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
I just needed to wait an hour or more, because I had recreated the bucket in a different region.
This may also happen if your bucket name is not globally unique (not just within your account). Trying a different (usually longer) name should help.
This error is related to your S3 bucket name. Following my example, I had the name my_bucket.
When I changed it to a more detailed name (my-project-s3-state-bucket), the error disappeared.
So, in conclusion, your S3 bucket name should be globally unique.
PS: Yeah, I agree that the Terraform/AWS provider error isn't friendly to understand.
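For reference, S3 bucket names must be 3-63 characters of lowercase letters, numbers, dots, and hyphens (underscores are invalid, which is another reason my_bucket fails), and they are unique across all AWS accounts. A sketch of a name that follows those rules (the account ID and project name here are placeholders):

```hcl
resource "aws_s3_bucket" "state" {
  # Bucket names are global: lowercase letters, numbers, and hyphens only,
  # 3-63 characters. Prefixing with an account ID or project name is a
  # common way to keep them unique.
  bucket = "my-project-123456789012-terraform-state"
}
```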