terraform multiple providers not working with s3 bucket - amazon-web-services

I'm trying to do this:
terraform {
  backend "s3" {
    bucket = "resources"
    region = "us-east-1"
    key    = "resources"
  }
}
// the default region
provider "aws" {
  region = "us-west-2"
}

// for creating buckets in other regions (the region parameter on the
// aws_s3_bucket resource doesn't work as expected)
provider "aws" {
  alias  = "east1"
  region = "us-east-1"
}

resource "aws_s3_bucket" "zzzzz" {
  provider      = "aws.east1"
  bucket        = "zzzzz"
  acl           = "private"
  force_destroy = true
}
And I get this error:
Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'

I just needed to wait an hour or more, because I had recreated the bucket in a different region. After a bucket is deleted, its name takes a while to become available again globally, and recreating it in another region during that window can return this error.

This may also happen if your bucket name is not globally unique (uniqueness applies across all AWS accounts, not just within your own). Trying a different (usually longer) name should help.

This error is related to your S3 bucket name. In my case, the bucket was named my_bucket. When I changed it to a more detailed name (my-project-s3-state-bucket) the error disappeared. (Note that underscores are not valid in S3 bucket names anyway.)
So, in conclusion, your S3 bucket name should be globally unique.
PS: yes, I agree the Terraform/AWS provider error message isn't easy to understand.

Related

Is it possible to store a Terraform state file in one AWS account and deploy into another using environment variables?

I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide different sets of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or maybe provide credentials to one with environment variables and to the other through shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = var.region
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access key configuration used by the S3 backend is separate from the AWS profile/access key configuration used by the AWS provider. By default they both look in the same place, but you can configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion.
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
profile = "profile-2"
region = "us-east-1"
}
resource .......
This way, the state file will be stored in the profile-1, and all the code will run in the profile-2
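If you prefer not to rely on two named profiles, the AWS provider also supports assuming a role in the target account while the backend keeps using the default credentials. A minimal sketch, assuming a cross-account deploy role exists (the account ID and role name below are hypothetical):

```hcl
# Credentials from the environment (or default profile) authenticate the
# S3 backend against the state-file account; the provider then assumes a
# role in the target account for creating resources.
provider "aws" {
  region = "us-east-1"

  assume_role {
    # Hypothetical ARN - replace with a role in your target account.
    role_arn = "arn:aws:iam::111111111111:role/terraform-deploy"
  }
}
```

This keeps the state credentials and the deploy credentials fully independent, at the cost of setting up the IAM trust relationship between the two accounts.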

terraform aws_s3_bucket region that is different from the aws provider region gets created in the same provider region

I would like to manage AWS S3 buckets with Terraform and noticed that there's a region parameter for the resource.
I have an AWS provider that is configured for one region, and would like to use that provider to create S3 buckets in multiple regions if possible. My S3 buckets share a lot of common configuration that I don't want to repeat, so I have a local module to do all the repetitive stuff.
In mod-s3-bucket/main.tf, I have something like:
variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  region = var.bucket_region
  bucket = var.bucket_name
}
And then in main.tf in the parent directory (tf root):
provider "aws" {
region = "us-east-1"
}
module "somebucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-1"
bucket_name = "useast1-bucket"
}
module "anotherbucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-2"
bucket_name = "useast2-bucket"
}
When I run terraform apply with that, both buckets get created in us-east-1. Is this expected behaviour? My understanding was that region should make the buckets get created in different regions.
Further, if I run terraform plan after bucket creation, I see the following on one of the buckets:
~ region = "us-east-1" -> "us-east-2"
but after an apply, the region has not changed.
I know I can easily solve this by using a second, aliased AWS provider, but I'm asking specifically about how the region parameter is meant to work for an aws_s3_bucket resource (https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#region)
terraform v0.12.24
aws v2.64.0
I think you'll need to do something like the docs show in this example for Replication Configuration: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-replication-configuration
# /root/main.tf
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
}

module "somebucket" {
  source      = "./mod-s3-bucket"
  bucket_name = "useast1-bucket"
}

module "anotherbucket" {
  source      = "./mod-s3-bucket"
  bucket_name = "useast2-bucket"

  providers = {
    aws = aws.us-east-2
  }
}

# /mod-s3-bucket/main.tf
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
Note that provider is a meta-argument, not a regular argument: it can't be passed around as a string variable, so the providers map on the module block is the supported way to hand an aliased provider to a module. The bucket is then created in whichever region the selected provider is configured for.
The region attribute on the aws_s3_bucket resource isn't honored as expected; there is an open bug for this:
https://github.com/terraform-providers/terraform-provider-aws/issues/592
The multiple-provider approach is needed.
Terraform informs you if you try to set the region directly on the resource:
╷
│ Error: Value for unconfigurable attribute
│
│   with aws_s3_bucket.my_bucket,
│   on s3.tf line 10, in resource "aws_s3_bucket" "my_bucket":
│   10: region = "us-east-1"
│
│ Can't configure a value for "region": its value will be decided automatically based on the result of applying this configuration.
Terraform uses the configuration of the provider, where the region is set, for managing resources. Alternatively, as already mentioned, you can use multiple configurations for the same provider by making use of the alias meta-argument. From the docs:
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis. The primary reason for this is to support multiple regions for a cloud platform; other examples include targeting multiple Docker hosts, multiple Consul hosts, etc.
...
A provider block without an alias argument is the default configuration for that provider. Resources that don't set the provider meta-argument will use the default provider configuration that matches the first word of the resource type name. (link)

Unable to create a s3 bucket with versioning using terraform

I am creating an S3 bucket using Terraform on AWS, but I am unable to create it with versioning enabled: terraform plan works with no issues, but terraform apply fails with "Error putting S3 versioning: AccessDenied".
provider "aws" {
region = "us-east-1"
}
variable "instance_name" {}
variable "environment" {}
resource "aws_s3_bucket" "my_dr_bucket" {
bucket = "${var.instance_name}-dr-us-west-2"
region = "us-west-2"
acl = "private"
versioning {
enabled = "true"
}
}
Getting the below error:
Error: Error putting S3 versioning: AccessDenied: Access Denied
status code: 403, request id: 21EBBB358558C617
Make sure you are creating the S3 bucket in the same region your provider is configured for. The code below resolved the issue:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
variable "instance_name" {}
variable "environment" {}
resource "aws_s3_bucket" "my_dr_bucket" {
provider = "aws.west"
bucket = "${var.instance_name}-dr-us-west-2"
region = "us-west-2"
acl = "private"
versioning {
enabled = true
}
}
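As an aside, in version 4+ of the AWS provider the inline versioning block (and acl, and the resource-level region) are gone; versioning is managed through a separate resource. A sketch of the equivalent setup under that newer provider (the bucket name is illustrative):

```hcl
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_s3_bucket" "my_dr_bucket" {
  provider = aws.west
  bucket   = "my-instance-dr-us-west-2"
}

# In AWS provider v4+, versioning is a standalone resource
# attached to the bucket by id.
resource "aws_s3_bucket_versioning" "my_dr_bucket" {
  provider = aws.west
  bucket   = aws_s3_bucket.my_dr_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

The aliased-provider pattern stays the same; only the versioning configuration moves out of the bucket resource.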

Terraform cannot create AWS private hosted route53 zone

Terraform doesn't seem to be able to create AWS private hosted Route53 zones; it dies with the following error when I try to create a new private hosted zone associated with an existing VPC:
Error applying plan:
1 error(s) occurred:
aws_route53_zone.analytics: InvalidVPCId: The VPC: vpc-xxxxxxx you provided is not authorized to make the association.
status code: 400, request id: b411af23-0187-11e7-82e3-df8a3528194f
Here's my .tf file:
provider "aws" {
region = "${var.region}"
profile = "${var.environment}"
}
variable "vpcid" {
default = "vpc-xxxxxx"
}
variable "region" {
default = "eu-west-1"
}
variable "environment" {
default = "dev"
}
resource "aws_route53_zone" "analytics" {
vpc_id = "${var.vpcid}"
name = "data.int.example.com"
}
I'm not sure which of these the error is referring to:
1. The VPC somehow needs to be authorised to associate with the zone in advance.
2. The AWS account running Terraform needs the correct IAM permissions to associate the zone with the VPC.
Would anyone have a clue how I could troubleshoot this further?
Sometimes you also face this issue when the AWS region configured in the provider is different from the region where the VPC is deployed. For such cases we can use an alias for the AWS provider, like below:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
region = "ap-southeast-1"
alias = "singapore"
}
then we can use it as below in terraform resources:
resource "aws_route53_zone_association" "vpc_two" {
provider = "aws.singapore"
zone_id = "${aws_route53_zone.dlos_vpc.zone_id}"
vpc_id = "${aws_vpc.vpc_two.id}"
}
The snippet above is helpful when your Terraform configuration needs to deploy across multiple regions.
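The "not authorized to make the association" wording specifically appears when the hosted zone and the VPC live in different AWS accounts; in that case the zone owner must authorize the association before it can be made. A sketch under that cross-account assumption (the vpc_account provider alias and resource names are hypothetical):

```hcl
# Run with credentials for the account that owns the hosted zone:
# grants the VPC in the other account permission to associate.
resource "aws_route53_vpc_association_authorization" "auth" {
  zone_id = aws_route53_zone.analytics.zone_id
  vpc_id  = var.vpcid # the VPC in the other account
}

# Run with credentials for the account that owns the VPC,
# here via a hypothetical aliased provider:
resource "aws_route53_zone_association" "analytics" {
  provider = aws.vpc_account
  zone_id  = aws_route53_zone.analytics.zone_id
  vpc_id   = var.vpcid
}
```

If both the zone and the VPC are in the same account, this authorization step is not needed and the error points at permissions or a wrong VPC id instead.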
First, check whether you are running the latest Terraform version.
Second, your code doesn't match the sample:
data "aws_route53_zone" "selected" {
name = "test.com."
private_zone = true
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.selected.zone_id}"
name = "www.${data.aws_route53_zone.selected.name}"
type = "A"
ttl = "300"
records = ["10.0.0.1"]
}
The error you're getting means either your user/role doesn't have the necessary VPC-related permissions or you are using the wrong VPC id.
I'd suggest you double-check the VPC id you are using, potentially fetching it with the aws_vpc data source:
# Assuming you use the "Name" tag on the VPC resource to identify your VPCs
variable "vpc_name" {}

data "aws_vpc" "selected" {
  tags {
    Name = "${var.vpc_name}"
  }
}

resource "aws_route53_zone" "analytics" {
  vpc_id = "${data.aws_vpc.selected.id}"
  name   = "data.int.example.com"
}
You'll also want to check that your user/role has the necessary VPC-related permissions; for this you'll probably want all of the permissions listed in the docs.

Signature does not match : Amazon S3 bucket creation from terraform

I wanted to create a bucket and then have something like folder1 as a folder inside it (equivalent to the "Create folder" action on the bucket in the AWS console).
I am trying to do that with the following Terraform code:
resource "aws_s3_bucket" "bucket_create1" {
bucket = "test_bucket/folder1/"
acl = "private"
}
I am getting the following error :
Error creating S3 bucket: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
How can I resolve this?
Don't put the folder path in the bucket name; bucket names can't contain slashes (and underscores are invalid too), so create just the bucket:
resource "aws_s3_bucket" "bucket_create1" {
  bucket = "test-bucket"
  acl    = "private"
}
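If you also want the folder1/ prefix itself to appear, one option is to create a zero-byte object whose key ends in a slash, which is effectively what the console's "Create folder" action does. A sketch, assuming AWS provider v4+ (where the resource is aws_s3_object; in older versions it was aws_s3_bucket_object) and an illustrative bucket name:

```hcl
resource "aws_s3_bucket" "bucket_create1" {
  bucket = "test-bucket"
}

# A zero-byte object with a key ending in "/" shows up as a
# folder in the S3 console.
resource "aws_s3_object" "folder1" {
  bucket = aws_s3_bucket.bucket_create1.id
  key    = "folder1/"
}
```

Note that S3 has no real folders; prefixes only exist as part of object keys, so this placeholder is purely cosmetic.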