InvalidClientTokenId error when running Terraform Plan/Apply

I'm setting up an HA cluster in AWS using Terraform and user data. My main.tf looks like this:
provider "aws" {
access_key = "access_key"
secret_key = "secret_key"
}
resource "aws_instance" "etcd" {
ami = "${var.ami}" // coreOS 17508
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
key_path = "${var.key_path}"
count = "${var.count}"
region = "${var.aws_region}"
user_data = "${file("cloud-config.yml")}"
subnet_id = "${aws_subnet.k8s.id}"
private_ip = "${cidrhost("10.43.0.0/16", 10 + count.index)}"
associate_public_ip_address = true
vpc_security_group_ids = ["${aws_security_group.terraform_swarm.id}"]
tags {
name = "coreOS-master"
}
}
However, when I run terraform plan I get the following error:
provider.aws: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 45099d1a-4d6a-11e8-891c-df22e6789996
I've looked around; some suggestions were to clear out my ~/.aws/credentials file or update it with new AWS IAM credentials. I'm pretty lost on how to fix this error.

This is usually caused by certain characters (\, #, !, etc.) in the credentials. It can be fixed by regenerating your AWS access key and secret key.

This is a general error that can be caused by a few different things.
Some examples:
1) Invalid credentials passed as environment variables or in ~/.aws/credentials.
Solution: Remove old profiles/credentials and clear all related environment variables:
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN ; do eval unset $var ; done
2) Your aws_secret_access_key contains characters like a plus sign (+) or multiple forward slashes (/).
Solution: Delete the credentials and generate new ones.
3) You try to execute Terraform in a region that must be explicitly enabled (and wasn't). (In my case it was me-south-1, Bahrain.)
Solution: Enable the region or move to one that is already enabled.
4) You work with third-party tools like Vault and don't supply valid AWS credentials for them to communicate with AWS.
All of these lead to a failure of the sts:GetCallerIdentity API call.
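If you want to confirm which identity Terraform actually resolves (the same check as running aws sts get-caller-identity), a minimal sketch using the aws_caller_identity data source:
data "aws_caller_identity" "current" {}

output "caller_arn" {
  # Prints the ARN of the identity the provider authenticated as.
  value = data.aws_caller_identity.current.arn
}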

Make sure that your access key and secret key are correct. I used static credentials, substituting the values through variables defined in variables.tf. The latest error also points to the documentation; start by making static credentials work.
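For reference, a minimal sketch of that setup (the variable names here are placeholders, not from the original answer):
# variables.tf
variable "aws_access_key" {}
variable "aws_secret_key" {}

# main.tf
provider "aws" {
  region     = "us-east-1" # example region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}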

I had the same issue and managed to solve it. I actually changed two things before I tried again, so I'm not sure which one solved it.
I created new credentials without any "special" characters (+, /, etc.).
I then pointed at a shared credentials file in my .tf file under "provider".
provider "aws" {
shared_credentials_file = "\\wsl$\\Debian\\home\\user\\.aws\\credentials"
region = var.region
}
When I ran terraform plan -out myplan.tfplan it completed!

I was getting the same error and resolved it just by re-entering my AWS credentials correctly. Give it a try.

I got this problem running Terraform in a Lambda function: I was setting the "access_key" and "secret_key" properties in the AWS provider, but I had not set "token".
This was solved by not setting any property except "region" in the AWS provider and letting the provider pull what it needed from the environment: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
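In other words, a provider block along these lines (the region value is just an example), with everything else supplied by the environment:
provider "aws" {
  # Credentials come from AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
  # and AWS_SESSION_TOKEN in the Lambda environment.
  region = "us-east-1"
}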

Related

Terraform getting error when configuring S3 Backend

I'm trying to store my state file in an S3 bucket, but I'm getting this error when running terraform init.
I made sure my AWS credentials don't contain characters like " / # $ ".
error configuring S3 Backend: error validating provider credentials:
error calling sts:GetCallerIdentity:
InvalidClientTokenId: The security token included in the request is invalid.
main.tf:
provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
}

terraform {
  backend "s3" {
    bucket = "mybucket-terra-prac"
    key    = "terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "aws_instance" "web" {
  ami                         = var.ami
  instance_type               = "t2.large"
  associate_public_ip_address = true
  key_name                    = var.public_key

  tags = {
    Name = var.ec2_name
  }
}
Variables I have in my variables.tf file (with type and default):
variable "acc_key" {}
variable "sec_key" {}
variable "public_key" {}
variable "ami" {}
I am not entirely sure, but I think you can't use variables when specifying the region in the aws provider section; I think you need to hardcode your region there. Also, again not entirely sure, but the secret and access key should probably be hardcoded instead of pointing to variables (these parameters are meant to be used when specifying values directly inside the Terraform file).
Also, the terraform block should be placed at the beginning of the file, before the aws provider section.
Try executing the aws sts get-caller-identity command and check that you are using the correct credentials.
I encountered a similar error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: f07a9a38-ef21-44ee-a122-71800b865fea
with provider["registry.terraform.io/hashicorp/aws"],
on main.tf line 1, in provider "aws":
1: provider "aws" {
It turned out the region I was working in was not enabled. FYI, it takes a few minutes to enable a region.
In my case, I first needed to configure MFA on my AWS CLI (company policy), then I edited ~/.aws/credentials (vim ~/.aws/credentials) to add a correct profile.
In my case it was showing [default]. After editing I was still getting the error in VS Code; I tried in a local terminal and it worked.
In my case, I was able to resolve the issue by deleting the .terraform/ folder and then running terraform init again.
For me the problem was an existing AWS token defined in ~/.aws/config.
Check it, especially if you are using multiple profiles.
The default client also searches for credentials in the system environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, so unset them.
Then execute the aws sts get-caller-identity command and check that you are using the correct credentials.
It seems that your AWS provider is missing the "token" field.
Try adding this field to your AWS provider section.
Your AWS provider block should look like this:
provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
  token      = var.token
}
Also don't forget to add this line to your variables.tf file:
variable "token" {}
terraform init \
  -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY }}" \
  -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}"
Copied from Reddit.

How to make Terraform read the AWS credentials file?

I am trying to create an AWS S3 bucket using terraform and this is my code:
provider "aws" {
profile = "default"
region = "ap-south-1"
}
resource "aws_s3_bucket" "first_tf" {
bucket = "svk-pl-2909202022"
acl = "private"
}
I manually created the "credentials" file using Notepad, removed the ".txt" extension using PowerShell, and stored the file in C:\Users\terraform\.aws. The file looks like this:
[default]
aws_access_key_id=**************
aws_secret_access_key=************
But when I try to run terraform plan, I get an error which says:
ERROR: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found
Then I also tried to create the "credentials" file by installing the AWS CLI. I ran the command
aws configure --profile terraform
where terraform was my username, and it asked me to enter aws_access_key_id and aws_secret_access_key. After entering all the credentials, I ran terraform init, which completed successfully, but when I ran terraform plan it showed the error again:
ERROR: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found
When you create the profile manually:
provider "aws" {
  region                  = "your region"
  shared_credentials_file = "path_to_credentials_file, e.g. C:\Users\terraform\.aws\credentials"
  profile                 = "profile_name"
}
When you don't want to specify the shared file manually, it needs to be at this path: %USERPROFILE%\.aws\credentials
provider "aws" {
  region  = "your region"
  profile = "profile_name"
}
If you want to put your credentials in a .tf file:
provider "aws" {
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
I've spent quite a bit of time trying to figure out how to get Terraform to read ~/.aws/credentials. The only option that worked for me was setting the AWS_PROFILE environment variable to point it to the specific section of the credentials file.
AWS_PROFILE=prod terraform plan
or
export AWS_PROFILE=prod
terraform plan
The fact that the shared_credentials_file and/or the profile options in the provider section get ignored looks like a bug to me.
The path where you are storing the credentials file is wrong. It should be:
C:\Users\your-username\.aws
You can add the two files below in that location.
credentials
[default]
aws_access_key_id = your access key
aws_secret_access_key = your secret key
config
[default]
region = ap-south-1
You don't need to configure anything else in Terraform (or in Python, if you're using boto3); Terraform and boto3 will automatically find the credentials file.
You have to set up a custom section in your credentials file with the command
aws configure --profile=prod
in order to use the environment variable like this.
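For reference, that command writes a section like the following into ~/.aws/credentials (the keys shown are placeholders):
[prod]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx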
When you already have the AWS CLI installed locally, go to the config file path %USERPROFILE%\.aws\credentials and update the credentials as below:
[default]
aws_access_key_id = "xxxxx"
aws_secret_access_key = "xxxxx"
region = us-east-1

Terraform: can't authenticate to aws provider using shared config file or static variables

I'm trying to use terraform to manage AWS resources and trying to set up the credentials configuration. I'm following the official documentation: https://www.terraform.io/docs/providers/aws/index.html
My first idea was to set up a shared credentials file, so I configured it:
~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****
~/.aws/config
[default]
region=us-east-1
output=json
app/main.tf
provider "aws" {
region = "us-east-1"
version = "~> 2.0"
profile = "default"
}
terraform {
backend "s3" {
bucket = "example-bucket"
key = "terraform-test.tfstate"
region = "us-east-1"
}
}
When I run terraform init I receive the following message:
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
I have already tested the credentials using the AWS CLI and they work perfectly.
After that, I tried to configure static credentials in main.tf like this:
provider "aws" {
region = "us-east-1"
version = "~> 2.0"
access_key = "****"
secret_key = "****"
}
Same error...
I decided to test with environment variables and then it worked. But now I want to know why I could not get it to work with static credentials or the shared config file. All these cases are described in the official docs; what am I doing wrong?
Per the Terraform documentation, you can specify the credentials file in code. Example:
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/Users/tf_user/.aws/creds"
  profile                 = "customprofile"
}
I'd also make sure that the environment variables aren't set (just to ensure that Terraform actually looks at the credentials file), since the priority of the credentials Terraform looks for is:
a. Inline access key and secret key
b. Environment variables
c. Credentials file
I've encountered the same issue in the past. The only way I know to get past it is to set the following environment variable before running any terraform commands:
export AWS_SDK_LOAD_CONFIG=true
Adding more info for the next person who comes across this.
I tried the same code as the OP, short of putting the creds inline in the tf file.
One of the responses mentioned "env variables".
Ran $ env and saw I had some set.
Ran this to eliminate them:
$ unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && \
unset AWS_SESSION_TOKEN && unset AWS_DEFAULT_REGION
This was my problem! Wasted maybe 3 hours of frustration trying to narrow this down.
The config below worked for me on a Linux based OS:
provider "aws" {
shared_config_files = ["~/.aws/config"]
shared_credentials_files = ["~/.aws/credentials"]
region = var.AWS_REGION
}
Per the latest Terraform documentation, this is how it works:
provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["C:/Users/tf_user/.aws/credentials"]
  profile                  = "customprofile"
}
I had the same issue and this worked for me.

InvalidClientTokenId: The security token included in the request is invalid. status code: 403

I am using Terraform and kubectl to deploy infrastructure and an application.
Since I changed my aws configure credentials, whenever I run:
terraform init
terraform apply
I always get:
terraform apply
Error: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 5ba38c31-d39a-11e9-a642-21e0b5cf5c0e
on providers.tf line 1, in provider "aws":
1: provider "aws" {
Can you advise? Appreciated!
From an answer to a similar question:
This is a general error that can be caused by a few different things.
Some examples:
1) Invalid credentials passed as environment variables or in ~/.aws/credentials.
Solution: Remove old profiles/credentials and clear all related environment variables:
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN ; do eval unset $var ; done
2) Your aws_secret_access_key contains characters like a plus sign (+) or multiple forward slashes (/).
Solution: Delete the credentials and generate new ones.
3) You try to execute Terraform in a region that must be explicitly enabled (and wasn't). (In my case it was me-south-1, Bahrain.)
Solution: Enable the region or move to one that is already enabled.
4) You work with third-party tools like Vault and don't supply valid AWS credentials for them to communicate with AWS.
All of these lead to a failure of the sts:GetCallerIdentity API call.
I got the same invalid token error after adding an S3 Terraform backend.
It was because I was missing a profile attribute on the new backend.
This was my setup when I got the invalid token error:
# ~/.aws/credentials
[default]
aws_access_key_id=OJA6...
aws_secret_access_key=r2a7...

[my_profile_name]
aws_access_key_id=RX9T...
aws_secret_access_key=oaQy...

// main.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    encrypt        = true
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
  }
}
And this was the fix that worked (showing a diff, I added the line with "+" at the beginning):
// main.tf
terraform {
  backend "s3" {
    bucket = "terraform-state"
    // ...
+   profile = "my_profile_name"
  }
}
None of the guides or videos I read or watched included the profile attribute. But it's explained in the Terraform documentation, here:
https://www.terraform.io/language/settings/backends/s3
In my case, it turned out that I had the environment variables AWS_ACCESS_KEY_ID, AWS_DEFAULT_REGION and AWS_SECRET_ACCESS_KEY set. This circumvented my ~/.aws/credentials file. Simply unsetting these environment variables worked for me!
My issue was related to the VS Code Debug Console: the AWS_PROFILE and AWS_REGION environment variables were not loaded. To solve it, I closed VS Code and reopened it from the CLI using the command code <project-folder>.
I used aws configure and provided my keys, but I still got the invalid token error.
Answer: I cleared everything from ~/.aws/credentials, then ran aws configure again and provided my keys.
It worked for me; try it too.

Error while configuring Terraform S3 Backend

I am configuring an S3 backend through Terraform for AWS.
terraform {
  backend "s3" {}
}
After providing the values for the S3 backend (bucket name, key, and region) when running terraform init, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access and secret keys as variables in providers.tf. While running terraform init, it didn't prompt for any access key or secret key.
How to resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys), so your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
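The -backend-config option also accepts a file, which keeps the keys out of your shell history; a sketch, assuming a file named backend.hcl (keep it out of version control):
# backend.hcl
access_key = "<your access key>"
secret_key = "<your secret key>"
Then run: terraform init -backend-config=backend.hcl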
I also had the same issue; the easiest and most secure way to fix it is to configure an AWS profile. Even if you have properly set AWS_PROFILE in your project, you have to mention it again in your backend configuration.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly.
provider "aws" {
  region  = "${var.AWS_REGION}"
  profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend, so I ran terraform init and got the same error message.
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that this is not enough for the Terraform backend configuration; you have to mention the AWS profile in the backend file as well.
Full solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see provider.tf:
provider "aws" {
  region  = "${var.AWS_REGION}"
  profile = "${var.AWS_PROFILE}" # let's say the profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really, really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this runs in AWS you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - add yourself a remote_state.tf that looks like:
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
Had the same issue and I was using export AWS_PROFILE as I always had. I checked my credentials which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue; below is my use case.
AWS account 1: Management account (an IAM user is created here, and this user assumes roles in the Dev and Prod accounts)
AWS account 2: Dev environment account (a role is created here for the trusted account, in this case the Management account user)
AWS account 3: Prod environment account (a role is created here for the trusted account, in this case the Management account user)
So I created dev-backend.conf and prod-backend.conf files with the content below. The main point that fixed this issue is passing the "role_arn" value in the S3 backend configuration.
Define the content below in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Initialise Terraform with the prod S3 bucket config (moving state from the dev S3 bucket to the prod S3 bucket):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because different forms of authentication are used when developing locally versus running a CI/CD pipeline, and people tend to mix authentication options without taking the order of precedence into account.
When running locally you should definitely use the AWS CLI, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the AWS CLI to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation tells us the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally, use the AWS CLI to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible: avoid hard-coded settings and keep the provider block empty, so that you can authenticate dynamically.
Preferably also keep the S3 backend configuration empty and initialize it from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the AWS CLI installed during runtime; if you somehow need this, you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it lets you dynamically set the backend configuration, which is not possible in plain Terraform.
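As a sketch of what that looks like (the bucket, region, and table names are placeholders), a terragrunt.hcl can generate the S3 backend configuration for every module:
# terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}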
If someone is using LocalStack: for me it only worked using this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file there were two profile names, one right after the other, which caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The Terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization's VPN turned on when running the Terraform commands, and this caused them to fail.
Here's how I fixed it: my VPN caused the issue (this may not apply to everyone), and turning it off fixed it.