I'm trying to store my state file in an S3 bucket, but I get this error when running terraform init (I have already made sure my AWS credentials don't contain characters like " / # $ "):
error configuring S3 Backend: error validating provider credentials:
error calling sts:GetCallerIdentity:
InvalidClientTokenId: The security token included in the request is invalid.
main.tf:

provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
}

terraform {
  backend "s3" {
    bucket = "mybucket-terra-prac"
    key    = "terraform.tfstate"
    region = "eu-central-1"
  }
}

resource "aws_instance" "web" {
  ami                         = var.ami
  instance_type               = "t2.large"
  associate_public_ip_address = true
  key_name                    = var.public_key

  tags = {
    Name = var.ec2_name
  }
}
Variables I have in my variables.tf file (with type and default):
variable "acc_key" {}
variable "sec_key" {}
variable "public_key" {}
variable "ami" {}
I am not entirely sure, but I don't think you can use variables when specifying the region in the aws provider block; I believe you need to hardcode your region there. Also, again not entirely sure, but I think the access and secret keys should be hardcoded rather than pointed at variables (those parameters are meant for values specified directly inside the Terraform file).
The terraform block should also be placed at the beginning of the file, before the aws provider block.
Try executing the aws sts get-caller-identity command and check that you are using the correct credentials.
I encountered a similar error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: f07a9a38-ef21-44ee-a122-71800b865fea
with provider["registry.terraform.io/hashicorp/aws"],
on main.tf line 1, in provider "aws":
1: provider "aws" {
It turned out the region I was working in was not enabled. FYI, it takes a few minutes to enable a region.
In my case, I first needed to configure MFA for my AWS CLI (company policy), then I edited ~/.aws/credentials (vim ~/.aws/credentials) to add the correct profile.
In my case it was showing [default]. After editing I was still getting the error in VS Code; I tried in a local terminal and it worked.
In my case, I was able to resolve the issue by deleting the .terraform/ folder then running the terraform init again.
For me, the problem was an existing AWS token defined in ~/.aws/config.
Check that file, especially if you are using multiple profiles.
The default client constructor searches for credentials in the system environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, so unset them.
Then execute the aws sts get-caller-identity command and see if you are using the correct credentials.
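For example, in a Unix-like shell (a sketch; these are the standard AWS SDK variable names):

```shell
# Clear any stale credentials from the environment so the CLI and
# Terraform fall back to the ~/.aws/credentials file instead.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN

# Then confirm which identity the CLI actually resolves to:
# aws sts get-caller-identity
```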
It seems that your AWS provider is missing the "token" field. Try adding it to your AWS provider block, which should then look like this:
provider "aws" {
  region     = var.region
  access_key = var.acc_key
  secret_key = var.sec_key
  token      = var.token
}
Also, don't forget to add this line to your variables.tf file:
variable "token" {}
terraform init \
  -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY }}" \
  -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}"
Copied from Reddit
Related
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 95e52463-8cd7-038-b924-3a5d4ad6ef03, api error InvalidClientTokenId: The security token included in the request is invalid.
  with provider["registry.terraform.io/hashicorp/aws"],
  on provider.tf line 1, in provider "aws":
  1: provider "aws" {
I have only two files.
instance.tf
resource "aws_instance" "web" {
  ami           = "ami-068257025f72f470d"
  instance_type = "t2.micro"

  tags = {
    Name = "instance_using_terraform"
  }
}
provider.tf
provider "aws" {
  region     = "ap-east-1"
  access_key = "xxxx"
  secret_key = "xxxx/xxx+xxx"
}
In my test environment I was using the root user's access key and secret access key, which did not work. After creating a dedicated user, the error no longer occurred.
In detail, I did the following steps:
Created a user called terraform
Created a new group Administrators with the AdministratorAccess permission attached, following the wizard
Copied the access key and secret access key to ~/.aws/credentials:
aws_access_key_id=xxx
aws_secret_access_key=xxx
Created ~/.aws/config:
[default]
region=us-west-2
Check the .aws folder (the config file).
Try this
aws sts get-caller-identity
{
"UserId": "AIDAYMYFUCQM7K2RD9DDD",
"Account": "111147549871",
"Arn": "arn:aws:iam::111147549871:user/myself"
}
Also show us your main.tf file and where and how you define access.
I made a mistake in the region (I entered the wrong region code), and my access key/secret key contained '+' and '/' symbols, which generated the error. You may just need to regenerate the key until the secret key contains only alphanumeric characters.
Maybe the region you passed to aws configure is different from your Terraform provider region,
e.g. us-east-1 in aws configure but us-east-1a (an availability zone, not a region) in your Terraform provider.
Please change them so they match.
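A quick way to spot this mistake (a heuristic sketch, not from the original answer): an availability zone name is just a region name plus a trailing letter, so a "region" ending in a digit followed by a letter is suspect.

```shell
# Distinguish a region (us-east-1) from an availability zone (us-east-1a).
# Heuristic only: AZ names end in a digit followed by a lowercase letter.
looks_like_az() {
  case "$1" in
    *[0-9][a-z]) echo "availability zone, not a region" ;;
    *)           echo "region" ;;
  esac
}

looks_like_az "us-east-1a"   # availability zone, not a region
looks_like_az "us-east-1"    # region
```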
In case anyone comes across this issue, I found that the workspace I was working in had environment variables set in Terraform Cloud for the AWS credentials. These were taking precedence over my local credentials and needed to be refreshed.
In my case, the issue was that my system date/time was wrong.
I set the time on my CentOS 8 machine with the following commands:
timedatectl status
timedatectl set-time HH:MM:SS
If you already have the NTP service active on your machine, this will fail with "Failed to set time: NTP unit is active". In that case:
sudo timedatectl set-local-rtc true
sudo timedatectl set-ntp false
sudo timedatectl set-time "yyyy-MM-dd hh:mm:ss"
timedatectl list-timezones
sudo timedatectl set-timezone Europe/Zagreb
sudo timedatectl set-ntp yes
I'm trying to use Terraform to initiate connections with AWS to create infra.
If I run aws configure sso, I can log in (defaulting to eu-west-2) and move around the estate.
I then run terraform apply, with the aws provider as follows:
provider "aws" {
  region                  = "eu-west-2"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "450694575897_ProdPS-SuperUsers"
}
Terraform reports:
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
  status code: 403, request id: 5b8be53d-253d-4c48-8568-ad78be14115f
The following vars are set:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
If I run
aws sts get-session-token --region=us-west-2
I get
An error occurred (InvalidClientTokenId) when calling the GetSessionToken operation: The security token included in the request is invalid.
I was having the same problem when I tried to deploy through Terraform Cloud.
You might be using an old key that has been deleted or is inactive. To be sure:
1. Go to the security credentials on your account page: click your name in the top right corner -> My security credentials.
2. Check whether the key you set in your credentials still exists or has been deleted.
   - If it's deleted, create a new key and use it.
3. If your key is still there, check that it is active.
I solved the issue by doing the following:
$ aws configure
enter the access key:
enter the secret key:
select default region:
select default format [none/json]:
Then, in your main.tf file, add the profile as shown below:
provider "aws" {
  region  = "eu-west-2"
  profile = "xxxuuzzz"
}
I am trying to create an AWS S3 bucket using Terraform, and this is my code:
provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}

resource "aws_s3_bucket" "first_tf" {
  bucket = "svk-pl-2909202022"
  acl    = "private"
}
I manually created the "credentials" file using Notepad, removed the ".txt" extension using PowerShell, and stored the file in C:\Users\terraform\.aws. It looks like this:
[default]
aws_access_key_id=**************
aws_secret_access_key=************
But when I try to run terraform plan, I get an error which says:
ERROR: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found
Then I also tried to create the credentials file by installing the AWS CLI. I ran
aws configure --profile terraform
(terraform is my username), and it asked me to enter aws_access_key_id and aws_secret_access_key. After entering the credentials, terraform init ran successfully, but terraform plan showed the error again:
ERROR: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found
When you create the profile manually:
provider "aws" {
  region                  = "your region"
  shared_credentials_file = "path to your credentials file, e.g. C:\Users\terraform\.aws\credentials"
  profile                 = "profile_name"
}
When you don't want to pass the shared file manually, it needs to be at %USERPROFILE%\.aws\credentials:
provider "aws" {
  region  = "your region"
  profile = "profile_name"
}
If you want to put your credentials directly in a .tf file:
provider "aws" {
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
I've spent quite a bit of time trying to figure out how to get Terraform to read ~/.aws/credentials. The only option that worked for me was specifying AWS_PROFILE environment var to point it to the specific section of the credentials file.
AWS_PROFILE=prod terraform plan
or
export AWS_PROFILE=prod
terraform plan
The fact that the shared_credentials_file and/or the profile options in the provider section get ignored looks like a bug to me.
The path where you are storing the credentials file is wrong.
It should be C:\Users\your-username\.aws
You can add the two files below in that location.
credentials:
[default]
aws_access_key_id = your access key
aws_secret_access_key = your secret key
config:
[default]
region=ap-south-1
And you don't need to configure anything in Terraform (or in boto3, if you're using Python). Terraform and boto3 will automatically find the credentials file.
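As an illustration (not part of the original answer), the credentials file is plain INI text, so you can sanity-check which profiles it defines with a few lines of shell; the file path and contents here are placeholders:

```shell
# Print the profile section headers ([default], [prod], ...) from an
# AWS-style credentials file.
list_profiles() {
  grep '^\[' "$1"
}

# Demo with a temporary file in the same format as ~/.aws/credentials:
tmp=$(mktemp)
printf '[default]\naws_access_key_id = AKIAEXAMPLE\naws_secret_access_key = examplesecret\n' > "$tmp"
list_profiles "$tmp"   # prints: [default]
rm -f "$tmp"
```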
You have to set up a custom section in your credentials file with the command
aws configure --profile=prod
in order to use the environment variable like this.
When you already have the AWS CLI installed locally, go to the credentials file at %USERPROFILE%\.aws\credentials and update it as below:
[default]
aws_access_key_id = xxxxx
aws_secret_access_key = xxxxx
region = us-east-1
I am using Terraform and kubectl to deploy infrastructure and an application.
Ever since I changed aws configure, running:
terraform init
terraform apply
always gives:
Error: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
  status code: 403, request id: 5ba38c31-d39a-11e9-a642-21e0b5cf5c0e
  on providers.tf line 1, in provider "aws":
  1: provider "aws" {
Can you advise? Appreciated!
This is a general error that can be caused by a few reasons. Some examples:
1) Invalid credentials passed as environment variables or in ~/.aws/credentials.
Solution: Remove old profiles/credentials and clean all your environment variables:
for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_SECURITY_TOKEN ; do eval unset $var ; done
2) Your aws_secret_access_key contains characters like the plus sign + or multiple forward slashes /.
Solution: Delete the credentials and generate new ones.
3) You are trying to execute Terraform in a region which must be explicitly enabled (and wasn't).
(In my case it was me-south-1 (Bahrain).)
Solution: Enable the region or move to an enabled one.
4) You work with third-party tools like Vault and don't supply valid AWS credentials for them to communicate with AWS.
All of these lead to a failure of the sts:GetCallerIdentity API call.
I got the same invalid token error after adding an S3 Terraform backend.
It was because I was missing a profile attribute on the new backend.
This was my setup when I got the invalid token error:
# ~/.aws/credentials
[default]
aws_access_key_id=OJA6...
aws_secret_access_key=r2a7...
[my_profile_name]
aws_access_key_id=RX9T...
aws_secret_access_key=oaQy...
// main.tf
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    encrypt        = true
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-locks"
  }
}
And this was the fix that worked (showing a diff, I added the line with "+" at the beginning):
// main.tf
terraform {
  backend "s3" {
    bucket = "terraform-state"
    // ...
+   profile = "my_profile_name"
  }
}
None of the guides or videos I read or watched included the profile attribute. But it's explained in the Terraform documentation, here:
https://www.terraform.io/language/settings/backends/s3
In my case, it turned out that I had the environment variables AWS_ACCESS_KEY_ID, AWS_DEFAULT_REGION and AWS_SECRET_ACCESS_KEY set. This circumvented my ~/.aws/credentials file. Simply unsetting these environment variables worked for me!
My issue was related to VS Code Debug Console: The AWS_PROFILE and AWS_REGION environment variables were not loaded. For solving that I closed vscode and reopened through CLI using the command code <project-folder>.
I used aws configure and provided my keys, but I still got the invalid token error.
Answer
I cleaned everything from ~/.aws/credentials, then ran aws configure again and provided my keys.
It worked for me. Try it too.
I'm setting up a HA cluster in AWS using Terraform and user data. My main.tf looks like this:
provider "aws" {
  access_key = "access_key"
  secret_key = "secret_key"
}

resource "aws_instance" "etcd" {
  ami                         = "${var.ami}" // coreOS 17508
  instance_type               = "${var.instance_type}"
  key_name                    = "${var.key_name}"
  key_path                    = "${var.key_path}"
  count                       = "${var.count}"
  region                      = "${var.aws_region}"
  user_data                   = "${file("cloud-config.yml")}"
  subnet_id                   = "${aws_subnet.k8s.id}"
  private_ip                  = "${cidrhost("10.43.0.0/16", 10 + count.index)}"
  associate_public_ip_address = true
  vpc_security_group_ids      = ["${aws_security_group.terraform_swarm.id}"]

  tags {
    name = "coreOS-master"
  }
}
However, when I run terraform plan I get the following error:
provider.aws: InvalidClientTokenId: The security token included in the request is invalid.
  status code: 403, request id: 45099d1a-4d6a-11e8-891c-df22e6789996
I've looked around; some suggestions were to clear out my ~/.aws/credentials file or update it with new AWS IAM credentials. I'm pretty lost on how to fix this error.
This is usually caused by certain characters (\ # !, etc.) in the credentials. It can be fixed by regenerating your AWS access key and secret key.
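If you want to check a key for such characters before regenerating it (an illustrative sketch, not part of the original answer):

```shell
# Flag secrets containing non-alphanumeric characters (+, /, \, #, ...),
# which some tools mishandle when the value is pasted into a config.
has_special_chars() {
  case "$1" in
    *[!A-Za-z0-9]*) echo "contains special characters" ;;
    *)              echo "alphanumeric only" ;;
  esac
}

has_special_chars "abc123+def/ghi"   # contains special characters
has_special_chars "abc123"           # alphanumeric only
```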
Make sure that your access key and secret are correct. I had used static credentials, substituting the values through variables.tf. The latest error also points to the documentation. Start by making static credentials work.
I had the same issue and managed to solve it. I actually changed two things before I tried again, so I'm not sure which one solved the issue.
I created new creds without any "special" characters (+, /, etc.).
I then included a shared credentials file in my .tf file under the provider:
provider "aws" {
  shared_credentials_file = "\\wsl$\\Debian\\home\\user\\.aws\\credentials"
  region                  = var.region
}
When I ran terraform plan -out myplan.tfplan, it completed!
I got the same error and resolved it just by re-entering my AWS credentials correctly. Give it a try.
I got this problem running Terraform in a Lambda function: I was setting the access_key and secret_key properties in the AWS provider but had not set token.
It was solved by setting no property except region in the AWS provider and letting the provider pull what it needs from the environment: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.