Creating Route53 Hosted zone fails with InvalidClientTokenId - amazon-web-services

Details below, but at a high level I've had no issues building out resources in AWS GovCloud, particularly in the us-gov-west-1 region. When I added a resource for a private aws_route53_zone, I got the error below:
* aws_route53_zone.private: error creating Route53 Hosted Zone: InvalidClientTokenId: The security token included in the request is invalid. status code: 403, request id: a9124a21-8eba-11e9-8bbb-c59c842ad843
Normally I would think this is due to incorrect IAM credentials since it's a 403, but my credentials work fine for every other resource, even those in the same TF file. I even tried changing them, but no luck. Does anyone know what could be causing this and how I can get around it? Route53 is supposed to be available in GovCloud us-west.
Terraform Version
Terraform v0.11.13
provider.aws v2.12.0
Terraform Configuration Details
provider "aws" {
  region     = "us-gov-west-1"
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
}
... Other VPC resources.
resource "aws_route53_zone" "private" {
  name    = "my-domain.com"
  comment = "my-domain (preprod-gov) terraform"
  vpc = {
    vpc_id = "${module.preprod_gov_vpc.vpc_id}"
  }
}

Just figured this problem out. The cached AWS provider plugin within the .terraform/plugins/linux_amd64 directory was an older version (2.12) and had not been updated since the initial build-out of the environment months ago. Once we ran terraform init -upgrade, the plugin was upgraded to the current version (2.52). After the upgrade, we no longer received the "InvalidClientTokenId" error.
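One way to avoid silently running a stale cached provider is to pin a minimum provider version, so terraform init fails loudly instead of reusing an old plugin. This is a sketch using the required_providers syntax of Terraform 0.13+ (Terraform 0.11, as used in the question, instead takes a version argument inside the provider block); the version floor is chosen to match the release that resolved this error:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.52" # refuse to init with the older cached plugin
    }
  }
}
```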

Related

Problem Initializing terraform with s3 backend - CredentialRequiresARNError

I'm having problems initializing the terraform s3 backend in the following setup. It works with terraform version 0.11.15 but fails with 0.15.5 and 1.0.7.
There are 2 files:
terraform.tf
provider "aws" {
  region = "eu-west-1"
}
terraform {
  backend "s3" {
  }
}
resource "aws_s3_bucket" "this" {
  bucket = "test-bucket"
  acl    = "private"
}
test-env.tfvars
encrypt = true
dynamodb_table = "terraform-test-backend"
bucket = "terraform-test-backend"
key = "terraform/deployment/test-release.tfstate"
region = "eu-west-1"
When I run terraform init -backend-config=test-env.tfvars using terraform 0.11.15, it works and I can perform terraform apply. Here is the output:
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.70.0)...
* provider.aws: version = "~> 2.70"
But when I try to use versions 0.15.5 and 1.0.7 I get following error:
Error: error configuring S3 Backend: Error creating AWS session: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default
Any ideas how to fix it?
A few changes were introduced with respect to the s3 backend and the way terraform checks for credentials in versions >0.13.
Take a look at the following GitHub issue, or even more specifically this one. In addition, it's outlined in the Changelog.
I believe that the issue you are facing is related to the way your aws profile is set up (check your ~/.aws/config).
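The shape of profile that trips this check can be sketched as follows. This is a hypothetical ~/.aws/config (profile and role names are made up): a profile declaring source_profile without a role_arn is what newer Terraform versions reject with CredentialRequiresARNError.

```ini
# ~/.aws/config (hypothetical example)
[default]
region = eu-west-1

[profile deploy]
# A profile with source_profile but no role_arn triggers
# CredentialRequiresARNError on Terraform >= 0.13.
# Pairing it with a role_arn makes it a valid assume-role profile:
source_profile = default
role_arn = arn:aws:iam::123456789012:role/deploy-role
region = eu-west-1
```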

Terraform getting error when configuring S3 Backend

I'm trying to store my state file in an s3 bucket, but I'm getting this error when running terraform init (I made sure my aws credentials don't have " / # $ ..):
error configuring S3 Backend: error validating provider credentials:
error calling sts:GetCallerIdentity:
InvalidClientTokenId: The security token included in the request is invalid.
main.tf :
provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
}
terraform {
  backend "s3" {
    bucket = "mybucket-terra-prac"
    key    = "terraform.tfstate"
    region = "eu-central-1"
  }
}
resource "aws_instance" "web" {
  ami                         = var.ami
  instance_type               = "t2.large"
  associate_public_ip_address = true
  key_name                    = var.public_key
  tags = {
    Name = var.ec2_name
  }
}
Variables I have in my variables.tf file (with type and default):
variable "acc_key" {}
variable "sec_key" {}
variable "public_key" {}
variable "ami" {}
I am not entirely sure, but note that the terraform backend block cannot use variables at all; backend configuration is resolved before variables are evaluated. Also, the s3 backend does not read the access_key/secret_key you pass to the provider block — it falls back to the default credential chain (environment variables, ~/.aws/credentials), so the credentials failing here may be different from the ones your provider uses. Try hardcoding the values temporarily to rule the variables out.
The order of the terraform block and the provider block in the file does not matter to Terraform, though placing the terraform block first is a common convention.
Try executing the aws sts get-caller-identity command and check that you are using the correct credentials.
I encountered a similar error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: f07a9a38-ef21-44ee-a122-71800b865fea
with provider["registry.terraform.io/hashicorp/aws"],
on main.tf line 1, in provider "aws":
1: provider "aws" {
It turned out the region I was working in was not enabled. FYI, it takes a few minutes to enable a region.
In my case, first I needed to configure MFA on my AWS CLI (company policy), then I edited ~/.aws/credentials (vim ~/.aws/credentials) to add the correct profile.
In my case it was showing [default]. After editing, I was still getting the error in VS Code; I tried the local terminal and it worked.
In my case, I was able to resolve the issue by deleting the .terraform/ folder then running the terraform init again.
For me, the problem was an existing AWS token defined in ~/.aws/config.
Check it, especially if you are using multiple profiles.
The default client constructor searches for credentials in the system environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, so unset them.
Then execute the aws sts get-caller-identity command and see if you are using the correct credentials.
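A minimal sketch of that cleanup, assuming a POSIX shell; it clears any stale credential variables so the CLI and Terraform fall back to ~/.aws/credentials:

```shell
# Remove stale credentials exported in this shell session, if any.
# AWS_SESSION_TOKEN / AWS_SECURITY_TOKEN are included because leftover
# temporary-session tokens are a common cause of InvalidClientTokenId.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN
```

After this, aws sts get-caller-identity should report whichever identity the default credential chain resolves to.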
It seems that your AWS provider is missing the "token" field, which is required when using temporary credentials such as an STS session.
Try adding this field to your AWS provider section.
Your AWS provider block should look like this:
provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
  token      = var.token
}
Also don't forget to add this line to your variables.tf:
variable "token" {}
terraform init \
  -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY }}" \
  -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}"
Copied from Reddit

Am I getting this error because I'm using SSO? "Error loading state: AccessDenied: Access Denied status code: 403"?

This is my terraform setup. When I used an access key and a secret key in a different account, I had no problems initializing terraform. But now that I'm using SSO with this account, I get this error:
Error loading state:
AccessDenied: Access Denied
status code: 403, request id: xxx, host id: xxxx
Then I found this in a terraform document. Not sure if I understand it correctly, but am I getting this error because I am using SSO? If so, what do I need to do to fix this and get terraform to work with this account?
"Please note that the AWS Go SDK, the underlying authentication handler used by the Terraform AWS Provider, does not support all AWS CLI features, such as Single Sign On (SSO) configuration or credentials."
Note: "my-bucket" was previously created in this account using the CLI.
provider "aws" {
  region  = "us-east-1"
  profile = "XXXXX"
}
terraform {
  required_version = "~> 0.13.0"
  backend "s3" {
    bucket = "mybucket"
    key    = "mykey"
    region = "us-east-1"
  }
}
I am having this exact same issue with terraform and sso; I will update if I find a solution. Update: in my case it was because the state bucket had an explicit deny for unencrypted transfers. I added encrypt = true to my tfstate backend and it worked fine. https://www.terraform.io/docs/language/settings/backends/s3.html#s3-state-storage
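A sketch of that fix against the backend block from the question (bucket and key are the question's placeholders): encrypt = true makes Terraform use server-side encryption for state writes, which satisfies a bucket policy that denies unencrypted uploads.

```hcl
terraform {
  required_version = "~> 0.13.0"
  backend "s3" {
    bucket  = "mybucket"
    key     = "mykey"
    region  = "us-east-1"
    encrypt = true # satisfies an explicit deny on unencrypted transfers
  }
}
```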

Using AWS Terraform How to enable s3 backend authentication with assumed role MFA credentials

I provision AWS resources with Terraform from a python script that calls terraform via the shell:
os.system('terraform apply')
The only way I found to enable terraform authentication, after enabling MFA and assuming a role, is to publish these environment variables:
os.system('export ASSUMED_ROLE="<>:botocore-session-123"; '
          'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
          'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
          'export AWS_SESSION_TOKEN="fsrfserfgs"; '
          'export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')
This worked OK until I configured s3 as the backend. The terraform action is performed, but before the state can be stored in the bucket I get the standard (very confusing) exception:
Error: error configuring S3 Backend: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
I read this excellent answer explaining that for security and other reasons backend configuration is separate.
Since I don't want to add actual secret keys to source code (as suggested by the post) I tried adding a reference to the profile and when it failed I added the actual keys just to see if it would work, which it didn't.
My working theory is that behind the scenes terraform starts another process which doesn't access or inherit the credential environment variables.
How do I use s3 backend, with an MFA assumed role?
You must point the backend to the desired profile; in my case, the same profile used for the provisioning itself.
Here is a minimal POC:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  backend "s3" {
    bucket  = "unique-terraform-state-dev"
    key     = "test"
    region  = "us-east-2"
    profile = "the_role_assumed_in_aws_credentials"
  }
}
provider "aws" {
  version = "~> 3.0"
  region  = var.region
}
resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
As a reminder, it's run by a shell which has these environment variables set:
os.system('export ASSUMED_ROLE="<>:botocore-session-123"; '
          'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
          'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
          'export AWS_SESSION_TOKEN="fsrfserfgs"; '
          'export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')

Error while configuring Terraform S3 Backend

I am configuring S3 backend through terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend (bucket name, key & region) and running the terraform init command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running the terraform init command, it didn't prompt for any access key or secret key.
How do I resolve this issue?
When running terraform init, you have to add -backend-config options for your credentials (aws keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue; the easiest and most secure way to fix it is to configure an AWS profile. Even if you properly mentioned the AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly:
provider "aws" {
  region  = "${var.AWS_REGION}"
  profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend, so I ran terraform init and got the same error message:
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that the provider profile is not enough for the terraform backend configuration; you have to mention the AWS_PROFILE in the backend file as well.
Full Solution
I'm using the latest terraform version at this moment, v0.13.5.
Please see the provider.tf:
provider "aws" {
  region  = "${var.AWS_REGION}"
  profile = "${var.AWS_PROFILE}" # let's say the profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom aws profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add access_key and secret_key to the default profile and try again.
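For reference, the default profile lives in ~/.aws/credentials and can be written by aws configure. This is a sketch using AWS's documented example key values, not real keys:

```ini
# ~/.aws/credentials (placeholder values from the AWS docs)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```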
Don't - add variables for secrets. It's a really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever profile you set AWS_PROFILE to. If this runs in AWS you should be using an instance profile. Roles can be done too.
If you hardcode the profile into your tf code, then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - Add yourself a remote_state.tf that looks like
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
Had the same issue, and I was using export AWS_PROFILE as I always had. I checked my credentials, which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue; below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created dev-backend.conf and prod-backend.conf files with the content below. The main point that fixed this issue is passing the role_arn value in the S3 backend configuration.
The content defined in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Terraform initialise with the dev s3 bucket config (from local state to s3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Terraform apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Terraform initialise with the prod s3 bucket config (from dev s3 bucket state to prod s3 bucket state):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Terraform apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because different forms of authentication are used when developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the aws cli, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the aws cli to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation describes the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
provider "aws" {}
terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the aws cli installed during runtime. If you somehow need this, then you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
If someone is using localstack: for me, only this tip worked: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
  bucket                      = "curso-terraform"
  key                         = "terraform.tfstate"
  region                      = "us-east-1"
  endpoint                    = "http://localhost:4566"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  force_path_style            = true
  dynamodb_table              = "terraform_state"
  dynamodb_endpoint           = "http://localhost:4566"
  encrypt                     = true
}
And don't forget to add the endpoint in provider:
provider "aws" {
  region                      = "us-east-1"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true
  endpoints {
    ec2      = "http://localhost:4566"
    s3       = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
  }
}
In my credentials file, two profile names appearing one after another caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization's VPN turned on when running the Terraform commands, and this caused them to fail.
Turning off my VPN fixed it; this may not apply to everyone.