I created a new EC2 instance on AWS.
I am trying to run Terraform on that AWS server and I'm getting an error.
I haven't created the AMI beforehand, so I'm not sure if that is the issue.
I checked my key pair and made sure it is correct.
I also checked the API credentials and they are correct too. I'm using a college AWS account where the API credentials are the same for all users; I'm not sure if that could be an issue.
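For reference, the credentials can be sanity-checked from the same server roughly like this (assuming the AWS CLI is configured with the same details):
# show which profile/credentials the CLI is currently picking up
aws configure list
# confirm the credentials are valid and see which account/user they belong to
aws sts get-caller-identity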
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"

  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:
Related
I am trying to create an AWS instance through Terraform. Despite generating multiple users with different key pairs, all of them return an InvalidClientTokenId error when I run terraform plan. Below are the options I've tried based on the research I've done:
Running aws configure to save my credentials there
Running aws sts get-caller-identity to confirm that the credentials are valid
Exporting the credentials to my local environment
Pointing the instance.tf file at my ~/.aws credentials file through shared_credentials_files
Generating multiple access keys until I got secret keys with no special symbols
This is my code:
provider "aws" {
# access_key = "redacted"
# secret_key = "redacted"
shared_credentials_files = "/home/nocnoc/.aws/credentials"
region = "eu-central-1"
}
resource "aws_instance" "example" {
ami = "ami-0965bd5ba4d59211c"
instance_type = "t3.micro"
}
This is the error message:
$terraform apply
╷
│ Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 2ea13d91-630c-40dc-84eb-72b26222aecb, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on instance.tf line 1, in provider "aws":
│ 1: provider "aws" {
│
Are there any other options that I have not yet considered? I have MFA set up on my AWS account, but so did my tutor, and the course didn't mention adding any special field to the Terraform file for that.
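If MFA turns out to be the problem, I assume something like the following would be the way to test it: request temporary session credentials with the CLI and export them before running terraform plan (the MFA device ARN and token code below are placeholders):
# request temporary credentials tied to the MFA device (ARN and code are placeholders)
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/my-user \
  --token-code 123456

# export the values from the response so the AWS provider uses them
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>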
Using Terraform v1.2.5, I am attempting to deploy an AWS VPC peering connection. However, the following code fails validation:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.1"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

data "aws_vpc" "accepter" {
  provider = aws.accepter
  id       = "${var.accepter_vpc_id}"
}

locals {
  accepter_account_id = "${element(split(":", data.aws_vpc.accepter.arn), 4)}"
}

resource "aws_vpc_peering_connection" "requester" {
  description   = "peer_to_${var.accepter_profile}"
  vpc_id        = "{$var.requester_vpc_id}"
  peer_vpc_id   = "${data.aws_vpc.accepter.id}"
  peer_owner_id = "${local.accepter_account_id}"
}
When validating this Terraform code I am receiving the following error:
$ terraform validate
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_vpc.accepter its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].accepter is
│ required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist
│ in the state. Re-add the provider configuration to destroy data.aws_vpc.accepter, after which you can remove the provider configuration
│ again.
What am I missing or misconfigured that is causing this error?
You need to define the aliased provider configuration that you are referencing here:
data "aws_vpc" "accepter" {
provider = aws.accepter # <--- missing aliased provider
id = var.accepter_vpc_id
}
To fix this, you just need to add the corresponding aliased provider block [1]:
provider "aws" {
alias = "accepter"
region = "us-east-1" # make sure the region is right
}
[1] https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
Since this VPC peering is an "inter-region" peering, we need two AWS regions. Hence the way to achieve this is to create a default AWS provider block and an aliased AWS provider block.
See an example:
provider "aws" {
region = "us-east-1"
# Requester's credentials.
}
provider "aws" {
alias = "peer"
region = "us-west-2"
# Accepter's credentials.
}
Your data source doesn't know which provider configuration to use; the one you referenced does not exist.
You have two options:
Add an alias to the provider:
provider "aws" {
  region = "us-east-1"
  alias  = "accepter"
}
Remove the line provider = aws.accepter from data "aws_vpc" "accepter".
The first option is probably the better fit for your case.
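Putting the first option together, the relevant pieces would look roughly like this (the region is just an example):
provider "aws" {
  alias  = "accepter"
  region = "us-east-1"
}

data "aws_vpc" "accepter" {
  provider = aws.accepter
  id       = var.accepter_vpc_id
}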
In terraform, how can I specify EC2 private/public key pair, when launching a new EC2 instance?
I am following https://stackoverflow.com/a/73351869 and https://stackoverflow.com/a/64287520 to add key_name to a resource in the following main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

variable "public_path" {
  default = "/path/to/MyKeyPair.pem"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "app_server" {
  ami                    = "ami-052efd3df9dad4825"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["sg-xxxxx"]
  key_name               = aws_key_pair.app_keypair.key_name

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
but
$ terraform apply
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_key_pair.app_keypair: Creating...
╷
│ Error: error importing EC2 Key Pair (somekeyname): InvalidParameterValue: Value for parameter PublicKeyMaterial is invalid. Length exceeds maximum of 2048.
│ status code: 400, request id: 72425610-202e-42ce-98b4-a8dce5fef694
│
│ with aws_key_pair.app_keypair,
│ on main.tf line 21, in resource "aws_key_pair" "app_keypair":
│ 21: resource "aws_key_pair" "app_keypair" {
│
Why is that, and what can I do about it? Thanks.
See the "NOTE:" at the bottom of the aws_key_pair documentation.
The AWS API does not include the public key in the response, so terraform apply will attempt to replace the key pair. There is currently no supported workaround for this limitation.
And I can tell from your comment that you know the key material output by create-key-pair is the private key, which is correct and can be verified in the Create key pairs documentation.
Use the create-key-pair command as follows to generate the key pair and to save the private key to a .pem file.
So if you want to create a key locally and use this Terraform resource, you'll need to use a method that leaves you with public key material, such as OpenSSH's ssh-keygen. I'd recommend following the "Create a key pair using a third-party tool and import the public key to Amazon EC2" section if you intend to do so.
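A minimal sketch of that flow (the file and key names are placeholders, and it assumes ssh-keygen is installed):
# generate an RSA key pair locally; this writes my-key (private) and my-key.pub (public)
ssh-keygen -t rsa -b 4096 -f my-key -N ""

# then point the Terraform resource at the *public* key, not the .pem/private key
resource "aws_key_pair" "app_keypair" {
  key_name   = "somekeyname"
  public_key = file("my-key.pub")
}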
I am using shared_cred_file for the AWS provider. With AWS provider version 3.63, for example, terraform plan works fine.
When I use AWS provider 4.0, it prompts me to switch to the shared_credentials_files setting. After that change the deprecation warning is gone, but the second error below remains.
What could be the problem?
Warning: Argument is deprecated
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 15, in provider "aws":
│ 15: shared_credentials_file = "~/.aws/credentials"
│
│ Use shared_credentials_files instead.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
│
///////////////////////////////
// Infrastructure init
terraform {
  backend "s3" {
    bucket                  = "monitoring-********-infrastructure"
    key                     = "tfstates/********-non-prod-rds-info.tfstate"
    profile                 = "test-prof"
    region                  = "eu-west-2"
    shared_credentials_file = "~/.aws/credentials"
  }
}

provider "aws" {
  profile                  = "test-prof"
  shared_credentials_files = ["~/.aws/credentials"]
  region                   = "eu-west-2"
}
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
cat config
[test-prof]
output = json
region = eu-west-2
cat credentials
[test-prof]
aws_access_key_id = ****************
aws_secret_access_key = ******************
According to the latest Terraform documentation, this is how it should look:
provider "aws" {
region = "us-east-1"
shared_credentials_files = ["C:/Users/tf_user/.aws/credentials"]
profile = "customprofile"
}
I had the same issue, and this worked for me.
Changing
provider "aws" {
shared_credentials_file = "$HOME/.aws/credentials"
profile = "default"
region = "us-east-1"
}
to
provider "aws" {
shared_credentials_file = "/Users/me/.aws/credentials"
profile = "default"
region = "us-east-1"
}
worked for me (Terraform does not expand shell variables such as $HOME inside string values, so spell out the absolute path).
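If you'd rather not hard-code your own home directory, another variant that should work with the v4-style argument (a sketch, using Terraform's built-in pathexpand() function) is:
provider "aws" {
  # pathexpand() expands "~" to the current user's home directory
  shared_credentials_files = [pathexpand("~/.aws/credentials")]
  profile                  = "default"
  region                   = "us-east-1"
}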
We stumbled on this issue in our pipelines after migrating the AWS provider from version 3 to 4.
So, for anyone using Azure DevOps or any other CI tool, the fix should be as easy as adding a new step to the pipeline that creates the shared credentials file:
mkdir -p $HOME/.aws
echo "[default]" >> $HOME/.aws/credentials
echo "aws_access_key_id = ${AWS_ACCESS_KEY_ID}" >> $HOME/.aws/credentials
echo "aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}" >> $HOME/.aws/credentials
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined as variables or secrets in your pipeline.
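Alternatively (a sketch only; the MY_PIPELINE_* secret names are hypothetical and the exact syntax depends on your CI tool), you can skip the credentials file entirely and export the standard environment variables that the AWS provider reads on its own:
# map pipeline secrets to the standard AWS SDK environment variables
export AWS_ACCESS_KEY_ID="${MY_PIPELINE_ACCESS_KEY_ID}"
export AWS_SECRET_ACCESS_KEY="${MY_PIPELINE_SECRET_ACCESS_KEY}"
export AWS_DEFAULT_REGION="eu-west-2"
# the AWS provider picks these up without any shared credentials file
terraform init
terraform plan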
When you are using
provider "aws" {
region = "your region"
shared_credentials_file = "path_file_credentials like C:\Users\terraform\.aws\credentials"
profile = "profile_name"
}
The path should be in this format: %USERPROFILE%\.aws\credentials
This was the only path format that worked for me as of the date of this answer. There are other ways to provide credentials too:
1. You can put your credentials in a .tf file:
provider "aws" {
profile = "profile_name"
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
2. If you are working on a project and don't want to share the credentials with your teammates, you can pass them in as variables like this:
main.tf
provider "aws" {
profile = "profile_name"
region = "us-west-2"
access_key = var.access_key
secret_key = var.secret_key
}
variables.tf
variable "access_key" {
description = "My AWS access key"
}
variable "secret_key" {
description = "My AWS secret key"
}
You can either enter the values when Terraform prompts for them during terraform apply, or put them in a terraform.tfvars file and add that file to .gitignore.
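For example (the values are placeholders), a terraform.tfvars file could look like this:
# terraform.tfvars -- keep this file out of version control (.gitignore)
access_key = "my-access-key"
secret_key = "my-secret-key"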
You can find more options here.
I want to deploy infrastructure on AWS using Terraform. This is the main.tf config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
AWS config file ~/.aws/config:
[default]
region = us-east-1
[humboi]
region = us-east-1
Running terraform apply and entering "yes" gives:
aws_instance.app_server: Creating...
╷
│ Error: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: r8hvTFNQzGA7k309BxQ9OYRxCaCH-0wwYvhAzbjEt77PsyOYyWItWNrOPUW4Z1CIzm8A6x6euBuSZsE8uSfb3YdPuzLXttHT3DS9IJsDs0ilX0Vxtu1OZ3nSCBowuylMuLEXY8VdaA35Hb7CaLb-ktQwb_ke0Pku-Uh2Vi_cwsYwAdXdGVeTETkiuErZ3tAU37f5DyZkaL4dVgPMynjRI3-GW0P63WJxcZVTkfNcNzuTx6PQfdv-YydIdUOSAS-RUVqK6ewiX-Mz4S0GwAaIFeJ_4SoIQVjogbzYYBC0bI4-sBSyVmySGuxNF6x-BOU0Zt2-po1mwEiPaDBVL9aOt6k_eZKMbYM9Ef8qQRcxnSLWOCiHuw6LVbmPJzaDQRFNZ2eO11Fa2oOcu8JMEOQjOtPkibQNAdO_5LZWAnc6Ye2-Ukt2_folTKN6TH6v1hmwsLAO7uGL60gQ-n9iBfCIqEE_6gfImsdbOptgz-IRtTrz5a8bfLOBVfd9oNjKGXQoA2ZKhM35m1ML1DQKY8LcDv0aULkGzoM6bRYoq1UkJBYuF-ShamtSpSlzpd4KDXztpxUdb496FR4MdOoHgS04W_3WXoN-hb_lG-Wgbkv7CEWMv2pNhBCRipBgUUw3QK-NApkeTxxJXy9vFQ4fTZQanEIQa_Bxxg
│ status code: 403, request id: 0c1f14ec-b5f4-4a3f-bf1f-40be4cf370fc
│
│ with aws_instance.app_server,
│ on main.tf line 17, in resource "aws_instance" "app_server":
│ 17: resource "aws_instance" "app_server" {
│
╵
The error says the operation was unauthorized. What could cause the unauthorized operation if I have both ~/.aws/config and ~/.aws/credentials?
I have had this happen when I change my backend configuration without deleting .terraform. I believe Terraform caches credentials in .terraform. If you delete that directory, it will be regenerated on the next init, and it might work for you.
Also, make sure you open a new shell (or restart your machine) after setting environment variables for AWS.
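If you want to try clearing that cache, the steps are roughly as follows (this removes only the local .terraform directory with cached providers and backend settings, not your state file):
# remove the locally cached providers, modules and backend configuration
rm -rf .terraform
# re-initialize and run the apply again
terraform init
terraform apply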
The IAM user you have created doesn't have admin access or full EC2 access, so grant those permissions and try again.
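For example, an account administrator could attach the AmazonEC2FullAccess managed policy with the AWS CLI (the user name below is a placeholder):
# grant full EC2 access to the IAM user (replace the user name)
aws iam attach-user-policy \
  --user-name my-terraform-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess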