Terraform Error: Failed to load plugin schemas

I'm adding a new module to an existing project.
I get a message that I have to run terraform init so it recognises the new module.
aws.tf:
provider "aws" {
access_key = var.access_key
secret_key = var.secret_key
region = var.region
}
terraform.tfvars:
access_key = "Removed"
secret_key = "Removed"
region = "ap-southeast-2"
When I do that, it downloads the latest AWS provider:
- Installing hashicorp/aws v4.40.0...
Then I get below...
│ Error: Failed to load plugin schemas
│
│ Error while loading schemas for plugin components: Failed to obtain provider schema: Could not load the schema for
│ provider registry.terraform.io/hashicorp/aws: failed to instantiate provider "registry.terraform.io/hashicorp/aws" to
│ obtain schema: Unrecognized remote plugin message:
│
│ This usually means that the plugin is either invalid or simply
│ needs to be recompiled to support the latest protocol...
╵
So it seems the latest AWS provider is corrupt???
I can't progress.
I've copied over an older AWS provider, but I still have to run terraform init and it always updates the provider back to the latest version. So I'm stuck here.
Even if I create a new, empty folder, start with Terraform on its own and run an init, it creates the .terraform folder and installs the AWS provider.
Then when I run terraform plan I get the schema error.
Terraform v1.3.5
AWS provider v4.40.0
Windows 10
Thanks in advance.

OK, the problem was Trend Micro Anti-Virus killing the process. I've whitelisted terraform.exe on the client computer and all is good now. Thanks for the responses. Hopefully this helps someone in the future.
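As a side note for anyone hitting the "it always updates the AWS provider" part of this: you can pin the provider so terraform init keeps installing a known-good release instead of whatever is newest. A minimal sketch (the version constraint here is only an example, not something from the original post):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Pin to the 4.40.x series so init doesn't silently jump to a newer release
      version = "~> 4.40.0"
    }
  }
}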

Related

Terraform AWS | Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found

Just started using IntelliJ with AWS, and this error pops up after running the terraform apply command. The code is just a simple deployment of an EC2 instance.
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: i/o timeout
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 1, in provider "aws":
│ 1: provider "aws" {
│
╵
Credentials with AWS Explorer are correct, using an IAM user in the Admin group.
Terraform is installed.
IntelliJ plug-ins for Terraform and AWS are installed.
There is a connection between IntelliJ and AWS.
Using Windows 10; I can't use admin operations on Windows.
I feel like Terraform and AWS can't be connected (as the error says), but I can't understand why.
Any ideas how I can deploy this instance? Any help is appreciated, and I'm happy to answer any questions. I expected to deploy an EC2 instance. I've tried creating a new project, reinstalling IntelliJ, and using another IDE such as VS Code.
So I had to run:
$ export AWS_ACCESS_KEY_ID=(your access key id)
$ export AWS_SECRET_ACCESS_KEY=(your secret access key)
with my keys, in the Ubuntu terminal inside IntelliJ, and it worked!
Alternatively, you can configure your provider block as follows:
provider "aws" {
region = "aws_region_to_create_resources"
profile = "aws_profile_to_use"
}
or, if your AWS credentials file is in a path other than the default $HOME/.aws:
provider "aws" {
region = "aws_region_to_create_resources"
shared_credentials_file = "path_to_aws_credentials_file"
profile = "aws_profile_to_use"
}
Specifying a profile has the advantage of allowing you to use different AWS accounts for different environments. You can also use this method to deploy resources in the same terraform module to different accounts or regions by using aliases.
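For example, here is a minimal sketch of two aliased provider configurations; the profile names, region names, and bucket names are placeholders, not values from the original answer:

# Default provider configuration
provider "aws" {
  region  = "ap-southeast-2"
  profile = "dev"
}

# Second configuration for another account/region, referenced via its alias
provider "aws" {
  alias   = "prod"
  region  = "eu-west-1"
  profile = "prod"
}

# Uses the default provider configuration
resource "aws_s3_bucket" "dev_bucket" {
  bucket = "example-dev-bucket"
}

# Explicitly uses the aliased provider configuration
resource "aws_s3_bucket" "prod_bucket" {
  provider = aws.prod
  bucket   = "example-prod-bucket"
}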

Error: instance profile is required to re-create mounting cluster

I'm attempting to run terraform plan in my prod environment, but I receive the following error:
│ Error: instance profile is required to re-create mounting cluster
│
│ with databricks_mount.gfc_databricks_delta_lake,
│ on gfc_mount_delta_lake.tf line 1, in resource "databricks_mount" "gfc_databricks_delta_lake":
│ 1: resource "databricks_mount" "gfc_databricks_delta_lake" {
│
╵
Here's the code for the mount:
resource "databricks_mount" "gfc_databricks_delta_lake" {
depends_on = [
databricks_cluster.gfc_automation_cluster,
databricks_instance_profile.gfc_instance_profile
]
provider = databricks.workspace_00
name = "gfc"
cluster_id = databricks_cluster.gfc_automation_cluster.id
s3 {
bucket_name = "XXX"
}
}
This code, along with the code for the instance profiles and automation clusters, is identical between our dev and prod environments. Still, the error only pops up in prod.
What's puzzling is that the databricks_mount is pointed to a cluster that already has an instance profile. The instance profile exists in the Terraform state file, Databricks, and AWS.
One thing that's strange is that the cluster that's supposed to be using that instance profile is missing from Databricks, but is present in the state file. Could be a clue.
You need to set the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables.
You can add the following lines to your .bashrc or .zshrc to automatically set these values as environment variables when you open a new terminal session:
export DATABRICKS_HOST="insert_databricks_workspace_url"
export DATABRICKS_TOKEN="insert_databricks_api_token"
Per the Databricks provider Terraform documentation:
You can use host and token parameters to supply credentials to the workspace. When environment variables are preferred, then you can specify DATABRICKS_HOST and DATABRICKS_TOKEN instead. Environment variables are the second most recommended way of configuring this provider.
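If you'd rather keep this in the Terraform configuration instead of your shell profile, the same host and token parameters can be set on the provider block; a minimal sketch, with the variable names being illustrative:

provider "databricks" {
  host  = var.databricks_host  # e.g. the workspace URL
  token = var.databricks_token # a Databricks personal access token
}

Keep the token out of version control, for example by supplying it through a gitignored tfvars file or the TF_VAR_databricks_token environment variable.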

Terraform Databricks AWS instance profile - "authentication is not configured for provider"

I'm trying to create a Databricks instance profile using the sample code from the documentation.
Terraform can generate the plan successfully, but when I try to apply it, it gives me this error:
╷
│ Error: cannot create instance profile: authentication is not configured for provider.. Please check https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs#authentication for details
│
│ with databricks_instance_profile.shared,
│ on IAM.tf line 73, in resource "databricks_instance_profile" "shared":
│ 73: resource "databricks_instance_profile" "shared" {
I have set up username/password authentication for Databricks in my Terraform tfvars files and this works: it is able to actually provision a workspace, but it fails when creating the instance profile.
I'd appreciate any input on what I'm doing wrong.
Usually this kind of problem arises when you create a workspace and attempt to use it in the same Terraform template. The solution is to have two declarations of the Databricks provider: one used for creating the workspace, and a second for creating the objects inside the workspace. The AWS provisioning guide is part of the official documentation and contains a full example:
provider "databricks" {
alias = "mws"
host = "https://accounts.cloud.databricks.com"
username = var.databricks_account_username
password = var.databricks_account_password
}
# Notice "provider = databricks.mws" !
resource "databricks_mws_credentials" "this" {
provider = databricks.mws
account_id = var.databricks_account_id
role_arn = aws_iam_role.cross_account_role.arn
credentials_name = "${local.prefix}-creds"
depends_on = [aws_iam_role_policy.this]
}
provider "databricks" {
host = var.databricks_host
token = var.databricks_token
}
resource "databricks_instance_profile" "shared" {
depends_on = [databricks_mws_workspaces.this]
instance_profile_arn = aws_iam_instance_profile.shared.arn
}
Another common issue arises from the fact that Terraform tries to run as many tasks as possible in parallel, so it may attempt to create a resource before the workspace is created. This is explicitly documented in the AWS provisioning guide: you need to add depends_on = [databricks_mws_workspaces.this] to all Databricks resources, so Terraform won't attempt to create Databricks objects before the workspace exists (as shown in the databricks_instance_profile above).
P.S. It's also recommended to upgrade to the latest version of the provider (0.4.4 as of right now), which has better error messages for such problems.
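If you do upgrade, it may be worth pinning the provider in required_providers so everyone initialises the same version; a sketch, assuming the databrickslabs/databricks source referenced in the error message above:

terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      # 0.4.4 is the release mentioned above; adjust the constraint as needed
      version = ">= 0.4.4"
    }
  }
}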

Terraform plan using terraform cloud back end AWS credential error

Context:
I have set my AWS credentials using aws configure.
I use the Terraform remote backend to store the Terraform state.
I'm using the following Terraform configuration:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "test"

    workspaces {
      prefix = "networking-"
    }
  }
}

provider "aws" {
  region = "eu-west-3"
}
Problem:
When I run terraform apply I get this error.
╷
│ Error: No valid credential sources found for AWS Provider.
│ Please see https://terraform.io/docs/providers/aws/index.html for more information on
│ providing credentials for the AWS Provider
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
│
╵
If you are using the Terraform Cloud remote backend, then by default when you create a workspace the terraform plan command is executed on the remote backend, not on your local machine. That is why Terraform cannot find your credentials: they are not set on the remote machine. To fix this, you need to tell Terraform to run the plan on your machine. To do so:
go to your workspace,
then go to the general settings,
then switch the execution mode from remote to local.
Then try running your plan again on your machine.
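If you happen to manage your Terraform Cloud workspaces as code with the hashicorp/tfe provider (an assumption on my part, not something the original answer requires), the same switch can be expressed roughly like this; the organization and workspace names are placeholders:

resource "tfe_workspace" "networking_prod" {
  name         = "networking-prod"
  organization = "test"

  # "local" runs plan/apply on your machine; Terraform Cloud only stores state
  execution_mode = "local"
}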

Terraform gke node pool getting 403 on googleapi: Error 403: Required "container.clusters.update"

I'm getting this 403 error when Terraform v0.11.11 applies the node pool, which is managed separately from the GKE cluster creation.
Full error:
google_container_node_pool.np: error creating NodePool: googleapi: Error 403: Required "container.clusters.update" permission(s) for "projects//locations/us-central1/clusters/". See https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted for more info., forbidden
I ran through the troubleshooting guide, but all it says is to disable and then re-enable the API, which I did try, and I'm still getting that error.
I'm also using the google and google-beta providers, both version 1.20.
Try deleting the default GKE service account and re-enabling the service with the gcloud command, which will recreate the default service account.
If that doesn't work for you, try changing the role of the account to "Editor", or create a custom role that includes the "container.clusters.update" permission.
So the root cause was that I was using a custom module and passing credentials down to it in the module block, but it was still using the original credentials that had been used to test the custom module. Once I changed the custom module's credentials to what they should be, it worked.
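For anyone else in the same situation, the usual way to hand a specific provider configuration (and therefore its credentials) to a child module is the providers argument on the module block. A sketch using current (0.12+) syntax; the project, key file, and module path are placeholders (the same mechanism exists on 0.11, but the map keys and values are quoted strings):

provider "google" {
  alias       = "gke"
  project     = "my-gke-project"        # placeholder project ID
  region      = "us-central1"
  credentials = file("gke-sa-key.json") # placeholder service-account key
}

module "node_pool" {
  source = "./modules/node-pool"        # placeholder module path

  # The child module's google provider now uses these credentials
  providers = {
    google = google.gke
  }
}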
I ran into the same problem. It looks like the issue is that the google_container_node_pool resource tries to update the cluster in the project specified in the Terraform google provider block rather than the project in which the actual cluster exists. I was able to fix it by specifying the same project on google_container_node_pool as on the google_container_cluster resource.
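A sketch of what that looks like; the names and project ID are placeholders, and the node pool simply inherits the cluster's project rather than the provider's default:

resource "google_container_cluster" "primary" {
  name                     = "my-cluster"
  location                 = "us-central1"
  project                  = "my-cluster-project" # the project the cluster actually lives in
  initial_node_count       = 1
  remove_default_node_pool = true
}

resource "google_container_node_pool" "np" {
  name     = "my-node-pool"
  cluster  = google_container_cluster.primary.name
  location = google_container_cluster.primary.location
  # Match the cluster's project explicitly instead of relying on the
  # provider-level default project
  project    = google_container_cluster.primary.project
  node_count = 1
}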
In my case, it was a zone issue: I had used the region instead of the zone.
google_container_node_pool.primary_nodes[0]: Creating...
╷
│ Error: error creating NodePool: googleapi: Error 404: Not found: projects/project/locations/europe-west6/clusters/myslodi-cluster., notFound
│
│ with google_container_node_pool.primary_nodes[0],
│ on main.tf line 17, in resource "google_container_node_pool" "primary_nodes":
│ 17: resource "google_container_node_pool" "primary_nodes" {
My environment looked like this:
region: "europe-west6"
zone: "europe-west6-b"
so I had to replace var.region with var.zone:
resource "google_container_node_pool" "primary_nodes" {
count = 1
name = "${google_container_cluster.primary.name}-node-pool"
cluster = google_container_cluster.primary.name
node_count = var.node_count
location = var.zone