InvalidClientTokenId on terraform apply/plan despite correct credentials - amazon-web-services

I am trying to create an AWS instance through Terraform. Despite generating multiple users with different key pairs, all of them return an InvalidClientTokenId error when I run terraform plan. Below are the options I've tried based on the research I've done:
- Ran aws configure to save my credentials there
- Ran aws sts get-caller-identity to confirm that the credentials are valid
- Exported the credentials to my local environment
- Pointed the instance.tf file to my ~/.aws/credentials file through shared_credentials_files
- Generated multiple access keys until I got secret keys with no special symbols
This is my code:
provider "aws" {
  # access_key = "redacted"
  # secret_key = "redacted"
  shared_credentials_files = ["/home/nocnoc/.aws/credentials"] # this argument expects a list, not a string
  region                   = "eu-central-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0965bd5ba4d59211c"
  instance_type = "t3.micro"
}
This is the error message:
$ terraform apply
╷
│ Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 2ea13d91-630c-40dc-84eb-72b26222aecb, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on instance.tf line 1, in provider "aws":
│ 1: provider "aws" {
│
Are there any other options that I have not yet considered? I have MFA set up on my AWS account, but so did my tutor, and the course didn't mention adding any special field to the Terraform file for that.
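Since MFA comes up in the question: if the account enforces MFA for API calls, one option worth ruling out (a sketch, not something the question confirms) is to generate temporary credentials with aws sts get-session-token and pass all three values to the provider; the session token goes in the provider's token argument. The variable names below are illustrative:

```hcl
# Sketch: temporary credentials obtained out of band, e.g. with
#   aws sts get-session-token --serial-number <mfa-device-arn> --token-code <code>
# Variable names are illustrative, not from the question.
provider "aws" {
  region     = "eu-central-1"
  access_key = var.temp_access_key    # AccessKeyId from the STS response
  secret_key = var.temp_secret_key    # SecretAccessKey from the STS response
  token      = var.temp_session_token # SessionToken; required with MFA-derived credentials
}
```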

Related

Configure terraform with aws without user credentials

I am trying to configure AWS from Terraform, running Terraform on an EC2 instance with the AmazonEC2FullAccess policy attached to its role.
I don't have access and secret keys; using keys for the AWS CLI and Terraform is not allowed. I need to use the existing role to authenticate to AWS and create resources with it.
I get the error below when using the AmazonEC2FullAccess policy with EC2.
[ec2-user@ip-1*-1*-1*-2** terraform]$ terraform plan
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://1**.***.***.***/latest/meta-data/iam/security-credentials/": proxyconnect tcp: dial tcp 1*.*.*.*:8***: i/o timeout
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 17, in provider "aws":
│ 17: provider "aws" {
│
Resource vpc file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat vpc.tf
resource "aws_vpc" "main" {
  cidr_block = "1*.*.*.*/16"
}
main.tf file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.39.0"
    }
  }
  required_version = ">= 1.3.0"
}

provider "aws" {
  region = var.aws_region
  #role_arn = var.aws_role_arn
}
I also tried using role_arn in main.tf; it gives the following error:
│ Error: Unsupported argument
│
│ on main.tf line 19, in provider "aws":
│ 19: role_arn =var.aws_role_arn
│
│ An argument named "role_arn" is not expected here.
Any help is much appreciated.
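On the role_arn error specifically: the AWS provider has no top-level role_arn argument; role assumption is configured through a nested assume_role block. A minimal sketch using the variable names from the question:

```hcl
provider "aws" {
  region = var.aws_region

  # role_arn is not a provider-level argument;
  # it must be nested inside an assume_role block.
  assume_role {
    role_arn = var.aws_role_arn
  }
}
```

If the goal is simply to use the instance's own role, no credentials configuration is needed at all; the proxyconnect timeout in the first error suggests the instance cannot reach the metadata service (a proxy setting is one possible cause), which is worth checking separately.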

How can I provide Terraform different AWS credentials for the S3 backend and EC2, using role_arn?

I want to store Terraform state files in an S3 bucket in one AWS account and deploy instance changes in another AWS account, using role_arn.
This is my configuration:
providers.tf
terraform {
  backend "s3" {
    bucket         = "bucket"
    key            = "tf/terraform.tfstate"
    encrypt        = "false"
    region         = "us-east-1"
    profile        = "s3"
    role_arn       = "arn:aws:iam::1111111111111:role/s3-role"
    dynamodb_table = "name"
  }
}

provider "aws" {
  profile = "ec2"
  region  = "eu-north-1"

  assume_role {
    role_arn = "arn:aws:iam::2222222222222:role/ec2-role"
  }
}
~/.aws/credentials
[s3-def]
aws_access_key_id = aaaaaaaaaa
aws_secret_access_key = sssssssss
[ec2-def]
aws_access_key_id = aaaaaaa
aws_secret_access_key = sssss
[s3]
role_arn = arn:aws:iam::1111111111:role/s3-role
region = us-east-1
source_profile = s3-def
[ec2]
role_arn = arn:aws:iam::22222222222:role/ec2-role
region = eu-north-1
source_profile = ec2-def
And when I try terraform init -migrate-state I get:
2022-08-03T17:23:21.334+0300 [INFO] Terraform version: 1.2.5
2022-08-03T17:23:21.334+0300 [INFO] Go runtime version: go1.18.1
2022-08-03T17:23:21.334+0300 [INFO] CLI args: []string{"terraform", "init", "-migrate-state"}
2022-08-03T17:23:21.334+0300 [INFO] Loading CLI configuration from /
2022-08-03T17:23:21.335+0300 [INFO] CLI command args: []string{"init", "-migrate-state"}
Initializing the backend...
2022-08-03T17:23:21.337+0300 [WARN] backend config has changed since last init
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
2022-08-03T17:23:21.338+0300 [INFO] Attempting to use session-derived credentials
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
I just don't understand this error. Is it even possible to provide two different sets of credentials, one for S3 and one for EC2?
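In principle, yes: the backend block and the provider block take their credentials independently. A minimal sketch under the assumption that Terraform 1.2.x is used (where the S3 backend still accepts role_arn directly) and that the base profiles resolve correctly:

```hcl
terraform {
  backend "s3" {
    bucket   = "bucket"
    key      = "tf/terraform.tfstate"
    region   = "us-east-1"
    profile  = "s3-def"                                  # base credentials for state access
    role_arn = "arn:aws:iam::1111111111111:role/s3-role" # role assumed by the backend
  }
}

provider "aws" {
  profile = "ec2-def" # base credentials for the provider
  region  = "eu-north-1"

  assume_role {
    role_arn = "arn:aws:iam::2222222222222:role/ec2-role" # role assumed for deployments
  }
}
```

One likely cause of the "no valid credential sources" error: profiles that contain role_arn/source_profile belong in ~/.aws/config (written as [profile s3]), not in ~/.aws/credentials, which only holds access keys.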

How to fix 403 error when applying Terraform?

I created a new EC2 instance on AWS.
I am trying to provision on the AWS server with Terraform and I'm getting an error.
I didn't create the AMI myself, so I'm not sure if that is the issue.
I checked my key pair and ensured it is correct.
I also checked the API details and they are correct too. I'm using a college AWS App account where the API details are the same for all users; I'm not sure if that could be an issue.
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"

  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:

no valid credential sources for Terraform AWS Provider found

I am using shared_cred_file for the aws provider. With aws provider version 3.63, for example, terraform plan works fine.
When I use aws provider 4.0, it prompts me to apply the changed setting, shared_credentials_files. After that change the deprecation warning is gone, but the second error remains.
What could be the problem?
Warning: Argument is deprecated
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 15, in provider "aws":
│ 15: shared_credentials_file = "~/.aws/credentials"
│
│ Use shared_credentials_files instead.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
│
///////////////////////////////
// Infrastructure init
terraform {
  backend "s3" {
    bucket                  = "monitoring-********-infrastructure"
    key                     = "tfstates/********-non-prod-rds-info.tfstate"
    profile                 = "test-prof"
    region                  = "eu-west-2"
    shared_credentials_file = "~/.aws/credentials"
  }
}

provider "aws" {
  profile                  = "test-prof"
  shared_credentials_files = ["~/.aws/credentials"]
  region                   = "eu-west-2"
}
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
cat config
[test-prof]
output = json
region = eu-west-2
cat credentials
[test-prof]
aws_access_key_id = ****************
aws_secret_access_key = ******************
Per the latest Terraform documentation, this is how it works:
provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["C:/Users/tf_user/.aws/credentials"]
  profile                  = "customprofile"
}
I had the same issue; Terraform does not expand shell variables such as $HOME inside strings, so switching to an absolute path fixed it. Changing

provider "aws" {
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}

to

provider "aws" {
  shared_credentials_file = "/Users/me/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}

worked for me.
We stumbled on this issue in our pipelines after migrating the AWS provider from version 3 to 4.
So, for anyone using Azure DevOps or any other CI tool, the fix should be as easy as adding a new step to the pipeline that creates the shared credentials file:
mkdir $HOME/.aws
echo "[default]" >> $HOME/.aws/credentials
echo "aws_access_key_id = ${AWS_ACCESS_KEY_ID}" >> $HOME/.aws/credentials
echo "aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}" >> $HOME/.aws/credentials
(Quoting the echo arguments keeps the shell from glob-expanding [default].) AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined as variables or secrets in your pipeline.
When you are using

provider "aws" {
  region                  = "your region"
  shared_credentials_file = "path_to_credentials like C:\Users\terraform\.aws\credentials"
  profile                 = "profile_name"
}

the path should be in this format: %USERPROFILE%\.aws\credentials
As of the date of this answer, that is the only accepted format. There are other ways too:
1. You can put your credentials in a .tf file:
provider "aws" {
  profile    = "profile_name"
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
If you are working on a project and don't want to share them with your teammates, you can use variables like this:
main.tf
provider "aws" {
  profile    = "profile_name"
  region     = "us-west-2"
  access_key = var.access_key
  secret_key = var.secret_key
}
variables.tf
variable "access_key" {
  description = "My AWS access key"
}

variable "secret_key" {
  description = "My AWS secret key"
}
You can either fill them in when terraform apply prompts for them, or add variables.tf to .gitignore.
You can find more options here.
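To make the variables approach concrete, a minimal sketch (the file name is the conventional one; the values are placeholders): put the real values in terraform.tfvars and keep that file out of version control:

```hcl
# terraform.tfvars -- placeholder values; keep this file out of version control
access_key = "my-access-key"
secret_key = "my-secret-key"
```

Terraform loads terraform.tfvars automatically, so terraform apply will no longer prompt for the values.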

UnauthorizedOperation on terraform apply. How to run the following AWS config?

I want to deploy an infrastructure on AWS using terraform. This is the main.tf config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
AWS config file ~/.aws/config:
[default]
region = us-east-1
[humboi]
region = us-east-1
Running terraform apply and entering "yes" gives:
aws_instance.app_server: Creating...
╷
│ Error: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: r8hvTFNQzGA7k309BxQ9OYRxCaCH-0wwYvhAzbjEt77PsyOYyWItWNrOPUW4Z1CIzm8A6x6euBuSZsE8uSfb3YdPuzLXttHT3DS9IJsDs0ilX0Vxtu1OZ3nSCBowuylMuLEXY8VdaA35Hb7CaLb-ktQwb_ke0Pku-Uh2Vi_cwsYwAdXdGVeTETkiuErZ3tAU37f5DyZkaL4dVgPMynjRI3-GW0P63WJxcZVTkfNcNzuTx6PQfdv-YydIdUOSAS-RUVqK6ewiX-Mz4S0GwAaIFeJ_4SoIQVjogbzYYBC0bI4-sBSyVmySGuxNF6x-BOU0Zt2-po1mwEiPaDBVL9aOt6k_eZKMbYM9Ef8qQRcxnSLWOCiHuw6LVbmPJzaDQRFNZ2eO11Fa2oOcu8JMEOQjOtPkibQNAdO_5LZWAnc6Ye2-Ukt2_folTKN6TH6v1hmwsLAO7uGL60gQ-n9iBfCIqEE_6gfImsdbOptgz-IRtTrz5a8bfLOBVfd9oNjKGXQoA2ZKhM35m1ML1DQKY8LcDv0aULkGzoM6bRYoq1UkJBYuF-ShamtSpSlzpd4KDXztpxUdb496FR4MdOoHgS04W_3WXoN-hb_lG-Wgbkv7CEWMv2pNhBCRipBgUUw3QK-NApkeTxxJXy9vFQ4fTZQanEIQa_Bxxg
│ status code: 403, request id: 0c1f14ec-b5f4-4a3f-bf1f-40be4cf370fc
│
│ with aws_instance.app_server,
│ on main.tf line 17, in resource "aws_instance" "app_server":
│ 17: resource "aws_instance" "app_server" {
│
╵
The error says the operation was unauthorized. What could cause an unauthorized operation if I have both ~/.aws/config and ~/.aws/credentials?
I have had this happen when I changed my backend configuration without deleting .terraform. I believe Terraform caches credentials in .terraform; if you delete that directory, it will be regenerated, and things might work for you.
Also, make sure you restart your machine after setting environment variables for AWS.
The IAM user you created doesn't have admin access or EC2 full access, so attach the appropriate policy (for example AmazonEC2FullAccess) and try again.