Cannot add "Wiz"(third-party) terraform provider - amazon-web-services

I am integrating Wiz for AWS resource scanning, following this doc: https://docs.wiz.io/wiz-docs/docs/auto-connect-clusters. When I added the wiz provider in Terraform, it gave an error.
In providers.tf, I added the following code:
terraform {
  required_providers {
    wiz = {
      version = "~> 1.0"
      source  = "tf.app.wiz.io/wizsec/wiz"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Also, the Wiz integration has prerequisites: a Kubernetes provider plus the Wiz client ID and secret. For that I added:
provider "kubernetes" {
config_context = //context
config_path = //path
}
provider "wiz" {
client_id = //clientid
secret = //secret.id
}
Thanks in advance.

Okay, I could fetch the wiz plugin from the tf.app.wiz.io registry, so the terraform init above should work.
The only case where I think it can fail is when you are using a module that expects the wiz provider and you haven't defined the source tf.app.wiz.io/wizsec/wiz in every module you are sourcing. If you don't specify it in each module, Terraform assumes it needs to fetch the provider from the default registry, registry.terraform.io, and fails with a message like the one above.
You could specify the provider like below in each module and let the calling module specify the version you want.
terraform {
  required_providers {
    wiz = {
      source = "tf.app.wiz.io/wizsec/wiz"
    }
  }
}
Are you calling a module which relies on wiz provider?
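For completeness, a minimal sketch of how the two sides fit together: the root module pins the version (and configures credentials), while each child module declares only the source, so Terraform resolves the local name wiz to the custom registry instead of registry.terraform.io.

```hcl
# Root module: pins the version and owns the provider configuration.
terraform {
  required_providers {
    wiz = {
      source  = "tf.app.wiz.io/wizsec/wiz"
      version = "~> 1.0"
    }
  }
}

# Each child module: source only, no version constraint needed here.
# Without this block, Terraform guesses the provider lives at
# registry.terraform.io and init fails.
terraform {
  required_providers {
    wiz = {
      source = "tf.app.wiz.io/wizsec/wiz"
    }
  }
}
```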

Related

Terraform Provider issue: registry.terraform.io/hashicorp/s3

I have code that I have been using for quite some time that calls a custom S3 module. Today I ran the same code and started getting an error about the provider.
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/s3: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see
which modules are currently depending on hashicorp/s3, run the
following command:
  terraform providers
Doing some digging, it seems that Terraform is looking for a provider registry.terraform.io/hashicorp/s3, which doesn't exist.
So far, I have tried the following:
Validated that the S3 resource code meets the standards of HashiCorp's 4.x upgrade this year. Plus I have been using it for a couple of months with no issues.
Deleted the .terraform directory and reran terraform init (no success, same error).
Deleted the .terraform directory and the .terraform.lock.hcl file and ran terraform init -upgrade (no success).
Updated my providers file to try to force an upgrade (no success).
Changed the provider constraint to >= the current version to pull the latest version (no success).
Reading further, some posts refer to a caching problem with the Terraform providers. I tried to run terraform providers lock and received this error:
Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in
order to calculate their checksums: some providers could not be
installed:
- registry.terraform.io/hashicorp/s3: provider registry
  registry.terraform.io does not have a provider named
  registry.terraform.io/hashicorp/s3.
I'm kind of at my wits' end with what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:
version.tf
# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }
  required_version = ">= 1.2.0" # required Terraform version
}
S3 Module
I did not include locals, outputs, or variables unless someone thinks we need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need for the provider issue; let me know if other files are needed.
resource "aws_s3_bucket" "buckets" {
count = length(var.bucket_names)
bucket = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
force_destroy = var.bucket_destroy
tags = local.all_tags
}
# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
block_public_acls = var.bucket_block_public_acls
ignore_public_acls = var.bucket_ignore_public_acls
block_public_policy = var.bucket_block_public_policy
restrict_public_buckets = var.bucket_restrict_public_buckets
}
resource "aws_s3_bucket_acl" "bucket_acl" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
acl = var.bucket_acl
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
id = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
status = "Enabled"
expiration {
days = var.bucket_backup_expiration_days
}
transition {
days = var.bucket_backup_days
storage_class = "GLACIER"
}
}
}
# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
sse_algorithm = var.bucket_sse
}
}
}
Looking for any other ideas I can use to fix this issue. thank you!!
Although you haven't included it in your question, I'm guessing that somewhere else in this Terraform module you have a block like this:
resource "s3_bucket" "example" {
}
For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn't belong to one of the providers in the module's required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name -- s3 in this case -- to the local names chosen in the required_providers block.
Given a resource block like the above, terraform init would notice that required_providers doesn't have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).
The correct name for this resource type is aws_s3_bucket, and so it's important to include the aws_ prefix when you declare a resource of this type:
resource "aws_s3_bucket" "example" {
}
This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
My colleague and I finally found the problem. It turns out we had a data call to the S3 bucket. Nothing was wrong with the module, but the place where I was calling the module had a local.tf in which I was referencing s3 in a legacy format; see the change below:
WAS
data "s3_bucket" "MyResource" {}
TO
data "aws_s3_bucket" "MyResource" {}
Appreciate the responses from everyone. A resource was the root of the problem, but I forgot that a data source is checked like a resource too.

Using Terraform Provider in aws module

I am going through the Terraform documentation, and it seems unclear to me. I'm quite new to Terraform, so no doubt I'm misunderstanding something here:
https://developer.hashicorp.com/terraform/language/modules/develop/providers
Problem:
My terraform pipeline is returning the following warning:
│ on waf-cdn.tf line 9, in module "waf_cdn":
│    9: aws = aws.useastone
│
│ Module module.waf_cdn does not declare a provider named aws.
│ If you wish to specify a provider configuration for the module, add an
│ entry for aws in the required_providers block within the module.
My root module is calling a child WAF module. I understand that I need to configure my provider within my root module. There are two files within my root module:
...terraform.tf...
terraform {
  backend "s3" {}
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.33.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
}
...and providers.tf...
provider "aws" {
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = "us-east-1"
alias = "useastone"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = var.region
alias = "master"
assume_role {
role_arn = replace(
"arn:aws:iam::${var.master_account_id}:role/${local.role_name}",
local.app_region,
"master"
)
}
}
When calling the child module, the scope attribute of the WAF needs to specify the region as us-east-1 for CLOUDFRONT, since CloudFront is a global service in AWS. Therefore, I need to pass the useastone provider when calling the child WAF module, as seen below:
module "waf_cdn" {
source = "../modules/qa-aws-waf-common"
name = "${local.waf_prefix}-cdn"
logging_arn = aws_kinesis_firehose_delivery_stream.log_stream_cdn.arn
scope = "CLOUDFRONT"
tags = merge(module.tags.tags, { name = "${local.name_prefix}-qa-waf-cdn" })
providers = {
aws = aws.useastone
}
}
With this code I'm getting the warning shown above.
I'm banging my head against the documentation here, so any help would be really appreciated.
Here's hoping, thanks!
As per the documentation you linked, here is the passage you are interested in [1]:
Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.
Since that is the case, you need to define the provider(s) on the module level as well:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = ">= 4.33.0"
      configuration_aliases = [aws.useastone]
    }
  }
}
That would probably be an additional providers.tf file in ../modules/qa-aws-waf-common.
[1] https://developer.hashicorp.com/terraform/language/modules/develop/providers#passing-providers-explicitly
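To round this out with a sketch (not the asker's actual module, whose contents we haven't seen): once the child module declares configuration_aliases = [aws.useastone], resources inside it reference that aliased configuration explicitly, and the caller's providers map key matches the declared alias name. The aws_wafv2_web_acl resource below is a hypothetical stand-in for whatever qa-aws-waf-common actually creates.

```hcl
# ../modules/qa-aws-waf-common/main.tf (illustrative sketch)
resource "aws_wafv2_web_acl" "this" {
  provider = aws.useastone # the configuration passed in by the caller

  name  = var.name
  scope = var.scope # "CLOUDFRONT" requires the us-east-1 provider

  default_action {
    allow {}
  }
  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = var.name
    sampled_requests_enabled   = false
  }
}

# Caller: the map key matches the alias declared in configuration_aliases.
module "waf_cdn" {
  source = "../modules/qa-aws-waf-common"
  providers = {
    aws.useastone = aws.useastone
  }
}
```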

Terraform import : ignore specific resource from public module

I'm trying to import the state of a (private) S3 bucket which was created via the console. I'm using the public S3 module. I was able to create a module block and import the state of the bucket. However, terraform plan also tries to create an aws_s3_bucket_public_access_block. How do I ignore or stop Terraform from creating that specific resource from the module?
main.tf
locals {
  region = "dev"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket              = "my-${local.region}-bucket"
  acl                 = "private"
  block_public_acls   = true
  block_public_policy = true
  lifecycle_rule = [
    {
      id      = "weekly_expiration_rule"
      enabled = true
      expiration = {
        days = 7
      }
    }
  ]
}
Import command for the bucket: terraform import module.s3_bucket.aws_s3_bucket.this my-dev-bucket
Meanwhile, when I try importing the public access block resource, I run into the error Error: Cannot import non-existent remote object, even though I have the settings configured on the bucket.
Looking into the source code more carefully, specifically this section:
resource "aws_s3_bucket_public_access_block" "this" {
  count = var.create_bucket && var.attach_public_policy ? 1 : 0
  ...
}
Setting attach_public_policy to false got me what I needed.
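In module-call terms, that is just one extra argument. A minimal sketch of the adjusted block from the question (other arguments omitted for brevity):

```hcl
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-dev-bucket"
  acl    = "private"

  # Sets the count on aws_s3_bucket_public_access_block.this to 0,
  # so terraform plan no longer tries to create it after the import.
  attach_public_policy = false
}
```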
You should run terraform plan to see the real output, and read the source code on GitHub (resource "aws_s3_bucket" "this"); you can see the count at line 6.
# module.s3_bucket.aws_s3_bucket.this[0] will be created
...
# module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
...
You can import with these commands:
terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' my-test-bucket-823567823576023
terraform import 'module.s3_bucket.aws_s3_bucket_public_access_block.this[0]' my-test-bucket-823567823576023
In my test main.tf below, after importing both, terraform plan shows 0 to add.
terraform {
  required_version = ">= 0.13.1"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.69"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  bucket = "my-test-bucket-823567823576023"
  acl    = "private"
}

Configuration_aliases for multiple providers not working Terraform

I have two providers for my Kubernetes cluster which are going to be used by the modules.
Below is the code for the version.tf file:
terraform {
  required_version = ">= 0.15"
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 2.11.0"
    }
    kubernetes = {
      source                = "hashicorp/kubernetes"
      version               = "2.0.0"
      configuration_aliases = [kubernetes.gke]
      source                = "hashicorp/kubernetes"
      version               = "1.7.0"
      configuration_aliases = [kubernetes.gke_v2]
    }
  }
}
I also have a provider.tf file containing this:
provider "kubernetes" {
alias = "gke"
host = module.gke.gke_cluster_endpoint
token = module.gke.google_client_config_access_token
cluster_ca_certificate = base64decode(module.gke.gke_cluster_cluster_ca_certificate)
}
provider "kubernetes" {
alias = "gke_v2"
kubernetes {
host = module.gke.gke_cluster_endpoint
cluster_ca_certificate = base64decode(module.gke.gke_cluster_cluster_ca_certificate)
token = module.gke.google_client_config_access_token
}
}
And in my modules I'm adding:
module "istio-base" {
  providers = {
    kubernetes = kubernetes.gke
    helm       = helm.helm
  }
  istio_values = [file("environment/${var.environment}/istio-values.yaml")]
  source       = ""
  depends_on   = [module.gke, kubernetes_namespace.istio_system_namespace]
}
But the issue is that when I run terraform init, it only accepts one version and fails with:
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider hashicorp/kubernetes: no available releases match the given constraints 1.7.0, 2.0.0

My Terraform version is 0.15.5.
Can anyone tell me where this is wrong?

Unable to use IAM Access control method using terraform aws_msk_cluster resource

I am trying to develop a module to create an AWS MSK cluster, and I would like to enable IAM authentication for the MSK resource.
I am following the documentation below, but I don't see anything related to IAM authentication:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/msk_cluster#sasl
dynamic "client_authentication" {
for_each = var.client_tls_auth_enabled || var.client_sasl_iam_enabled ? [1] : []
content {
dynamic "tls" {
for_each = var.client_tls_auth_enabled ? [1] : []
content {
certificate_authority_arns = var.certificate_authority_arns
}
}
dynamic "sasl" {
for_each = var.client_sasl_iam_enabled ? [1] : []
content {
iam = var.client_sasl_iam_enabled
}
}
}
}
Error: An argument named "iam" is not expected here.
You need to update your AWS provider to at least v3.43.0 (see the changelog), e.g.:
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = ">= 3.43.0"
  }
}
It really works for me.
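With the provider constraint raised, the sasl block accepts iam. A minimal sketch of the relevant fragment, with the other required aws_msk_cluster arguments omitted for brevity:

```hcl
resource "aws_msk_cluster" "this" {
  # ... cluster_name, kafka_version, number_of_broker_nodes,
  # broker_node_group_info, etc. omitted for brevity ...

  client_authentication {
    sasl {
      iam = true # requires hashicorp/aws >= 3.43.0
    }
  }
}
```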
Guess what? CloudFormation doesn't have it either. There is a PR for the Terraform AWS provider with support for it: https://github.com/hashicorp/terraform-provider-aws/pull/19404