Terraform: S3 bucket for website redirect: issue with "website_endpoint"

I am trying to use Terraform to create an S3 bucket for a redirect to another S3 bucket hosting a static website....
Current setting:
main s3 bucket name: myDomain.com
redirect s3 bucket name: www.myDomain.com
The challenge: when I create a CloudFront distribution, I cannot get the "origin" parameters right...
my provider settings:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
my code:
origin {
  domain_name = aws_s3_bucket_website_configuration.www_website.website_endpoint
  origin_id   = aws_s3_bucket.www_bucket.id
}
Because aws_s3_bucket.www_bucket.website_endpoint is deprecated, I can only use aws_s3_bucket_website_configuration.www_website.website_endpoint, but I get the following error: The parameter Origin DomainName does not refer to a valid S3 bucket
This seems to be a known issue/bug, as per this post: https://discuss.hashicorp.com/t/aws-cloudfront-origin-originname-bug/37997
I have thought of a couple of "fixes":
run the code with a placeholder Origin DomainName (aws_s3_bucket.www_bucket.bucket_regional_domain_name) and then do a manual adjustment every time I run the code
downgrade the provider to a version where aws_s3_bucket.www_bucket.website_endpoint is not yet deprecated (I think 3.x) --> but this would leave many features unavailable and force me to rework half of my code
if you were me, how would you fix this problem?
Thank you
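For reference, one workaround often suggested for this exact error (a sketch only, reusing the resource names above, not a confirmed fix for this setup) is to declare the website endpoint as a custom origin rather than an S3 origin, since website endpoints only serve plain HTTP and CloudFront will not accept them as S3 (REST) origins:

origin {
  domain_name = aws_s3_bucket_website_configuration.www_website.website_endpoint
  origin_id   = aws_s3_bucket.www_bucket.id

  # Website endpoints only speak HTTP, so they have to be treated as a custom origin.
  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "http-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}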

Related

Terraform Reference Created S3 Bucket for Remote Backend

I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code re-usability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hard-code the name of the bucket that I created, but I would like to reference the bucket similar to other resources in Terraform.
Would this be possible?
I've included my code below:
# Configure Terraform to use S3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when the initialization step happens (terraform init), meaning that you have to have an existing bucket before being able to provision any resources (before the first terraform apply).
If you do a terraform init with a bucket name which does not exist, you get this error:
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
This is self-explanatory. Also note that the backend block does not support interpolation, so bucket = "aws_s3_bucket.my-bucket.id" is just a literal string, not a reference to the resource. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to import an existing bucket into the state, I would NOT recommend importing the backend bucket.
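As a sketch of where this usually ends up (the bucket name below is a placeholder; the bucket has to be created out of band, or in a separate configuration, before terraform init):

terraform {
  backend "s3" {
    # The backend block cannot reference resources or variables,
    # so the bucket name must be a literal (or passed via -backend-config).
    bucket = "my-org-terraform-state" # pre-existing bucket, hypothetical name
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}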

Problem Initializing terraform with s3 backend - CredentialRequiresARNError

I'm having problems initializing the Terraform S3 backend in the following setup. This works well with Terraform version 0.11.15 but fails with 0.15.5 and 1.0.7.
There are 2 files:
terraform.tf
provider "aws" {
region = "eu-west-1"
}
terraform {
backend "s3" {
}
}
resource "aws_s3_bucket" "this" {
bucket = "test-bucket"
acl = "private"
}
test-env.tfvars
encrypt = true
dynamodb_table = "terraform-test-backend"
bucket = "terraform-test-backend"
key = "terraform/deployment/test-release.tfstate"
region = "eu-west-1"
When I run terraform init -backend-config=test-env.tfvars using Terraform 0.11.15 it works and I can perform terraform apply. Here is the output:
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.70.0)...
* provider.aws: version = "~> 2.70"
But when I try to use versions 0.15.5 and 1.0.7 I get the following error:
Error: error configuring S3 Backend: Error creating AWS session: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default
Any ideas how to fix it?
A few changes were introduced with respect to the S3 backend and the way Terraform checks for credentials in versions > 0.13.
Take a look at the following GitHub issue, or even more specifically this one. In addition, it's outlined in the Changelog.
I believe that the issue you are facing is related to the way your AWS profile is set up (check your ~/.aws/config).
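For illustration only (hypothetical profile names and a placeholder ARN): the error message suggests that the default profile in ~/.aws/config declares source_profile without a role_arn, which older Terraform versions tolerated but the newer versions reject. A layout that passes the stricter validation looks roughly like this:

# ~/.aws/config
[profile base]
region = eu-west-1

[default]
# a profile that sets source_profile must also set role_arn
source_profile = base
role_arn = arn:aws:iam::123456789012:role/terraform-deploy
region = eu-west-1

Alternatively, remove the source_profile line from the default profile if it does not actually need to assume a role.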

CloudFront: The specified bucket does not exist

I have a static website hosted on S3 which works fine when accessed through my bucket's endpoint. However, when I create a CloudFront distribution and try to access it using the CloudFront domain, I keep getting the error below.
d1xu3mknlk0sbd.cloudfront.net
Code: NoSuchBucket
Message: The specified bucket does not exist
BucketName: d1xu3mknlk0sbd.cloudfront.net
RequestId: 656B653A2ED5B2B1
HostId: 9etNAX1XEJmbVKUAMylBDz3xEky+7RhAnr9b8HhpkIb9+pkMnn920v/MSAUjr78oyONEUdlba50=
I have set my CloudFront origin domain name to the S3 URL of my static site, which works when I type it in the browser, so why can't CloudFront find the bucket?
I ended up solving this by changing my S3 bucket name from 'sample' to 'www.sample.com'. Strangely, CloudFront then started resolving the correct bucket name. Why this works remains a mystery ...

AWS Route53 + S3 static website gives an error: alias target name does not lie within the target zone

I have registered a domain using Route53 and created an S3 bucket for my website.
Assume the following:
Route53 hosted zone is: domain.com
S3 bucket name is: staging.domain.com
Using the Route53 console I then attempted to create new record to point to my S3 bucket with the following settings:
Record name: staging.domain.com
Value/Route traffic to: Alias to S3 website endpoint
Region: (from drop-down) Africa(Cape Town)[af-south-1]
Choose S3 bucket: (from drop-down) s3-website.af-south-1.amazonaws.com (staging.domain.com)
Record type: A
After clicking on create records I am greeted with the following error:
Error occurred
Bad request.
(InvalidChangeBatch 400: Tried to create an alias that targets s3-website.af-south-1.amazonaws.com., type A in zone Z11KHD8FBVPUYU, but the alias target name does not lie within the target zone)
In my mind the alias target is supposed to be staging.domain.com.s3-website.af-south-1.amazonaws.com
not s3-website.af-south-1.amazonaws.com
For completeness' sake, I have 2 other A records listed on this domain:
dev.domain.com -> Pointing to an EC2 instance (working)
test.domain.com -> Pointing to a CloudFront distribution (working)
Any idea why this is happening or how it can be corrected?
I think the drop-down value you are getting is correct.
The new console UI works differently: first you select the region, then the bucket. If you follow the same steps you should not get this error. Also check whether the S3 bucket has static website hosting enabled.
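For anyone wiring the same record up in Terraform, a rough sketch (hypothetical resource names) looks like the following; one common cause of the "does not lie within the target zone" error is passing your own hosted zone ID as the alias target's zone ID instead of S3's website hosted zone ID for the region:

resource "aws_route53_record" "staging" {
  zone_id = aws_route53_zone.main.zone_id # your hosted zone (domain.com)
  name    = "staging.domain.com"          # must match the bucket name exactly
  type    = "A"

  alias {
    # website_domain is the regional website endpoint, e.g. s3-website.af-south-1.amazonaws.com
    name                   = aws_s3_bucket_website_configuration.staging.website_domain
    # S3's hosted zone ID for the region, not the domain.com zone ID
    zone_id                = aws_s3_bucket.staging.hosted_zone_id
    evaluate_target_health = false
  }
}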

Creating Route53 Hosted zone fails with InvalidClientTokenId

Details below, but at a high level I've had no issues building out resources in AWS GovCloud, particularly in the us-gov-west-1 region. When I decided to add a resource for a private aws_route53_zone, I got the below error:
* aws_route53_zone.private: error creating Route53 Hosted Zone: InvalidClientTokenId: The security token included in the request is invalid. status code: 403, request id: a9124a21-8eba-11e9-8bbb-c59c842ad843
Normally I would think this is due to incorrect IAM creds since it's a 403, but my creds are working fine for every other resource, even those in the same TF file. I even tried changing them, but no luck. Anyone know what could be the cause of this and how I can get around it? Route53 is supposed to be available in GovCloud us-west.
Terraform Version
Terraform v0.11.13
provider.aws v2.12.0
Terraform Configuration Details
provider "aws" {
region = "us-gov-west-1"
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
}
... Other VPC resources.
resource "aws_route53_zone" "private" {
name = "my-domain.com"
comment = "my-domain (preprod-gov) terraform"
vpc = {
vpc_id = "${module.preprod_gov_vpc.vpc_id}"
}
}
Just figured this problem out. The cached AWS provider plugin within the /.terraform/plugins/linux_amd64 directory was an older version (2.12) and had not been updated since the initial build-out of the environment months ago. Once we performed a terraform init -upgrade, the plugin was upgraded to the current version (2.52). After the upgrade, we no longer received the "InvalidClientTokenId" error.
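As a follow-up sketch (0.11-era syntax; the version number is simply taken from the answer above), pinning the provider version in the configuration makes this kind of stale-plugin drift show up at terraform init time rather than as a confusing API error:

provider "aws" {
  version    = "~> 2.52" # with a constraint, init will not silently reuse the old cached 2.12 plugin
  region     = "us-gov-west-1"
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
}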