I was writing some Terraform code; after terraform init I tried to run terraform plan and ran into an error.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.22.0"
    }
  }
}

provider "aws" {
  region                   = "ap-south-1"
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "vscode"
}
ERROR
│ Error: error configuring Terraform AWS Provider: failed to get shared config profile, vscode
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│   10: provider "aws" {
I have set up keyless authentication for my GitHub Actions pipeline using Workload Identity Federation by following the official tutorial.
When running a terraform init command from my pipeline I get the following error:
│ Error: Failed to get existing workspaces: querying Cloud Storage failed: Get "https://storage.googleapis.com/storage/v1/b/lws-dev-common-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=global%2Fnetworking.state%2F&prettyPrint=false&projection=full&versions=false": oauth2/google: status code 403: {
│ "error": {
│ "code": 403,
│ "message": "Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).",
│ "status": "PERMISSION_DENIED",
│ "details": [
│ {
│ "@type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "reason": "IAM_PERMISSION_DENIED",
│ "domain": "iam.googleapis.com",
│ "metadata": {
│ "permission": "iam.serviceAccounts.getAccessToken"
│ }
│ }
│ ]
│ }
│ }
I have ensured that the service account that I am using has proper permissions including:
Cloud Run Admin
Cloud Run Service Agent
Below is a snippet of my pipeline code:
- id: 'auth'
  name: 'Authenticate to Google Cloud'
  uses: 'google-github-actions/auth@v0.4.0'
  with:
    workload_identity_provider: 'projects/385050593732/locations/global/workloadIdentityPools/my-pool/providers/my-provider'
    service_account: 'lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com'

- name: Terraform Init
  working-directory: ./Terraform/QuickStartDeployments/EKSCluster
  run: terraform init
and my terraform code:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.89.0"
    }
  }

  backend "gcs" {
    bucket = "lws-dev-common-bucket"
    prefix = "global/networking.state"
  }

  required_version = ">= 0.14.9"
}

provider "google" {
  project = var.project_id
  region  = var.region
}

module "vpc" {
  source     = "../../Modules/VPC"
  project_id = var.project_id
  region     = "us-west1"
  vpc_name   = var.vpc_name
}
I ran into the same issue and was able to fix it by manually granting the service account the Service Account Token Creator role on the project's IAM page.
This can also happen if your service account doesn't have permission to access the storage bucket where your Terraform state file is stored, or if your service account doesn't have the Workload Identity User role set properly.
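If you would rather manage those bindings with Terraform than through the IAM page, a minimal sketch might look like the following; the project, pool, bucket, and service account values are taken from the question and the repository path is a placeholder, so adjust everything to your setup:

# Let the GitHub Actions identity from the workload identity pool impersonate the service account.
resource "google_service_account_iam_member" "workload_identity_user" {
  service_account_id = "projects/lefewaresolutions-poc/serviceAccounts/lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com"
  role               = "roles/iam.workloadIdentityUser"
  member             = "principalSet://iam.googleapis.com/projects/385050593732/locations/global/workloadIdentityPools/my-pool/attribute.repository/OWNER/REPO"
}

# Project-level Service Account Token Creator, mirroring the manual fix described above.
resource "google_project_iam_member" "token_creator" {
  project = "lefewaresolutions-poc"
  role    = "roles/iam.serviceAccountTokenCreator"
  member  = "serviceAccount:lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com"
}

# Read/write access to the bucket that holds the Terraform state.
resource "google_storage_bucket_iam_member" "state_bucket" {
  bucket = "lws-dev-common-bucket"
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com"
}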
I'm trying to configure the AWS provider in Terraform, running Terraform from an EC2 instance. I have attached the AmazonEC2FullAccess policy to the role attached to the instance.
I don't have access and secret keys; using keys for the AWS CLI and Terraform is not allowed. I need to use the existing role to authenticate to AWS and create resources with it.
I get the error below when using the AmazonEC2FullAccess policy with EC2.
[ec2-user@ip-1*-1*-1*-2** terraform]$ terraform plan
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://1**.***.***.***/latest/meta-data/iam/security-credentials/": proxyconnect tcp: dial tcp 1*.*.*.*:8***: i/o timeout
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 17, in provider "aws":
│ 17: provider "aws" {
│
Resource vpc file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat vpc.tf
resource "aws_vpc" "main" {
  cidr_block = "1*.*.*.*/16"
}
main.tf file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.39.0"
    }
  }

  required_version = ">= 1.3.0"
}

provider "aws" {
  region = var.aws_region
  # role_arn = var.aws_role_arn
}
I also tried using role_arn in main.tf; it gives the following error:
│ Error: Unsupported argument
│
│ on main.tf line 19, in provider "aws":
│ 19: role_arn =var.aws_role_arn
│
│ An argument named "role_arn" is not expected here.
Any help is much appreciated.
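One note on the errors above: the AWS provider doesn't accept role_arn as a top-level argument; when assuming a role it expects an assume_role block. A minimal sketch, reusing the variable names from the question:

provider "aws" {
  region = var.aws_region

  # role_arn is only valid inside an assume_role block.
  assume_role {
    role_arn = var.aws_role_arn
  }
}

When relying purely on the instance role, no assume_role is needed at all; the provider reads credentials from the instance metadata service, and the proxyconnect timeout in the first error suggests a proxy setting is intercepting calls to that endpoint.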
I am using shared_credentials_file for the AWS provider. With AWS provider version 3.63, for example, terraform plan works fine.
When I use AWS provider 4.0 it prompts me to switch to the shared_credentials_files setting. After making that change there is no error for that, but the second error shown below remains.
What could be the problem?
Warning: Argument is deprecated
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 15, in provider "aws":
│ 15: shared_credentials_file = "~/.aws/credentials"
│
│ Use shared_credentials_files instead.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
│
///////////////////////////////
// Infrastructure init
terraform {
  backend "s3" {
    bucket                  = "monitoring-********-infrastructure"
    key                     = "tfstates/********-non-prod-rds-info.tfstate"
    profile                 = "test-prof"
    region                  = "eu-west-2"
    shared_credentials_file = "~/.aws/credentials"
  }
}

provider "aws" {
  profile                  = "test-prof"
  shared_credentials_files = ["~/.aws/credentials"]
  region                   = "eu-west-2"
}
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
cat config
[test-prof]
output = json
region = eu-west-2
cat credentials
[test-prof]
aws_access_key_id = ****************
aws_secret_access_key = ******************
Per the latest Terraform documentation, this is how it should work:
provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["C:/Users/tf_user/.aws/credentials"]
  profile                  = "customprofile"
}
I had the same issue, and this fixed it for me. Changing
provider "aws" {
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}
to
provider "aws" {
  shared_credentials_file = "/Users/me/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}
worked for me.
We stumbled on this issue in our pipelines after migrating the AWS provider from version 3 to 4.
So, for anyone using Azure DevOps or any other CI tools, the fix should be as easy as adding a new step in the pipeline and creating the shared credentials file:
mkdir -p $HOME/.aws
echo "[default]" >> $HOME/.aws/credentials
echo "aws_access_key_id = ${AWS_ACCESS_KEY_ID}" >> $HOME/.aws/credentials
echo "aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}" >> $HOME/.aws/credentials
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined as variables or secrets in your pipeline.
When you are using
provider "aws" {
  region                  = "your region"
  shared_credentials_file = "path to your credentials file, e.g. C:\\Users\\terraform\\.aws\\credentials"
  profile                 = "profile_name"
}
The path should be in this format: %USERPROFILE%\.aws\credentials
This was the only format accepted at the date of this answer. There are other ways too:
1. You can put your credentials in a .tf file:
provider "aws" {
  profile    = "profile_name"
  region     = "us-west-2"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}
If you are working on a project and don't want to share them with your teammates, you can pass them in as variables like this:
main.tf
provider "aws" {
  profile    = "profile_name"
  region     = "us-west-2"
  access_key = var.access_key
  secret_key = var.secret_key
}
variables.tf
variable "access_key" {
  description = "My AWS access key"
}

variable "secret_key" {
  description = "My AWS secret key"
}
You can either fill them in when prompted by terraform apply or add variables.tf to .gitignore.
You can find more options here.
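Another option along the same lines is to keep the values in a terraform.tfvars file, which Terraform loads automatically and which you can exclude from version control; the snippet below is illustrative:

# terraform.tfvars (keep this file out of version control)
access_key = "my-access-key"
secret_key = "my-secret-key"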
I have the following Terraform file:
provider "google" {
  project = "prj1-user"
  region  = "APAC"
  zone    = "australia-southeast1-a"
}

resource "google_pubsub_topic" "prj1-messages" {
  name = "prj1Messages"

  labels = {
    foo = "bar"
  }
}
However, when I try to provision this with terraform apply I get the following error:
│ Error: Error creating Topic: Put "https://pubsub.googleapis.com/v1/projects/prj1-user/topics/prj1Messages?alt=json": oauth2/google: invalid token JSON from metadata: EOF
│
│ with google_pubsub_topic.brwmessages,
│ on main.tf line 7, in resource "google_pubsub_topic" "prj1Messages":
│ 7: resource "google_pubsub_topic" "prj1Messages" {
The version I'm using is
Terraform v1.0.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/google v3.71.0
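The "invalid token JSON from metadata" message usually means the provider found no configured credentials and fell back to the GCE metadata server, which isn't reachable from an ordinary workstation. One way to rule that out is to point the provider at a service account key explicitly; a minimal sketch, where the key path is a placeholder and the region is swapped for a real GCP region (exporting GOOGLE_APPLICATION_CREDENTIALS with the same file also works):

provider "google" {
  project = "prj1-user"
  region  = "australia-southeast1"
  zone    = "australia-southeast1-a"

  # Placeholder path to a service account key file; adjust to your environment.
  credentials = file("/home/me/keys/prj1-terraform-sa.json")
}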
I want to deploy infrastructure on AWS using Terraform. This is the main.tf config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
AWS config file, ~/.aws/config:
[default]
region = us-east-1
[humboi]
region = us-east-1
Running terraform apply and entering "yes" gives:
aws_instance.app_server: Creating...
╷
│ Error: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: r8hvTFNQzGA7k309BxQ9OYRxCaCH-0wwYvhAzbjEt77PsyOYyWItWNrOPUW4Z1CIzm8A6x6euBuSZsE8uSfb3YdPuzLXttHT3DS9IJsDs0ilX0Vxtu1OZ3nSCBowuylMuLEXY8VdaA35Hb7CaLb-ktQwb_ke0Pku-Uh2Vi_cwsYwAdXdGVeTETkiuErZ3tAU37f5DyZkaL4dVgPMynjRI3-GW0P63WJxcZVTkfNcNzuTx6PQfdv-YydIdUOSAS-RUVqK6ewiX-Mz4S0GwAaIFeJ_4SoIQVjogbzYYBC0bI4-sBSyVmySGuxNF6x-BOU0Zt2-po1mwEiPaDBVL9aOt6k_eZKMbYM9Ef8qQRcxnSLWOCiHuw6LVbmPJzaDQRFNZ2eO11Fa2oOcu8JMEOQjOtPkibQNAdO_5LZWAnc6Ye2-Ukt2_folTKN6TH6v1hmwsLAO7uGL60gQ-n9iBfCIqEE_6gfImsdbOptgz-IRtTrz5a8bfLOBVfd9oNjKGXQoA2ZKhM35m1ML1DQKY8LcDv0aULkGzoM6bRYoq1UkJBYuF-ShamtSpSlzpd4KDXztpxUdb496FR4MdOoHgS04W_3WXoN-hb_lG-Wgbkv7CEWMv2pNhBCRipBgUUw3QK-NApkeTxxJXy9vFQ4fTZQanEIQa_Bxxg
│ status code: 403, request id: 0c1f14ec-b5f4-4a3f-bf1f-40be4cf370fc
│
│ with aws_instance.app_server,
│ on main.tf line 17, in resource "aws_instance" "app_server":
│ 17: resource "aws_instance" "app_server" {
│
╵
The error is that the operation was unauthorized. What could cause the unauthorized operation if I have both ~/.aws/config and ~/.aws/credentials?
I have had this happen when I change my backend configuration without deleting .terraform. I believe Terraform caches credentials in .terraform; if you delete that directory, it will be regenerated on the next terraform init, and that might work for you.
Also, make sure you restart your machine after setting environment variables for AWS.
The IAM user you have created doesn't have admin access or EC2 full access, so grant one of those and try again.
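If that IAM user is itself managed with Terraform, attaching the AWS-managed policy could look roughly like the sketch below (the user name is a placeholder):

# Attach the AWS-managed AmazonEC2FullAccess policy to the IAM user that runs Terraform.
resource "aws_iam_user_policy_attachment" "ec2_full_access" {
  user       = "terraform-user" # placeholder; substitute your IAM user name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}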