Configuration_aliases for multiple providers not working Terraform - google-cloud-platform

I have two providers for my Kubernetes cluster which are going to be used by the modules.
Below is the code for the version.tf file:
terraform {
  required_version = ">= 0.15"
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 2.11.0"
    }
    kubernetes = {
      source                = "hashicorp/kubernetes"
      version               = "2.0.0"
      configuration_aliases = [ kubernetes.gke ]
      source                = "hashicorp/kubernetes"
      version               = "1.7.0"
      configuration_aliases = [ kubernetes.gke_v2 ]
    }
  }
}
I also have a provider.tf file containing the following:
provider "kubernetes" {
alias = "gke"
host = module.gke.gke_cluster_endpoint
token = module.gke.google_client_config_access_token
cluster_ca_certificate = base64decode(module.gke.gke_cluster_cluster_ca_certificate)
}
provider "kubernetes" {
alias = "gke_v2"
kubernetes {
host = module.gke.gke_cluster_endpoint
cluster_ca_certificate = base64decode(module.gke.gke_cluster_cluster_ca_certificate)
token = module.gke.google_client_config_access_token
}
}
And in my modules I'm adding it like this:
module "istio-base" {
  providers = {
    kubernetes = kubernetes.gke
    helm       = helm.helm
  }
  istio_values = [file("environment/${var.environment}/istio-values.yaml")]
  source       = ""
  depends_on   = [module.gke, kubernetes_namespace.istio_system_namespace]
}
But the issue is that when I run terraform init, it only accepts one version and fails like below:
Error: Failed to query available provider packages
│
Could not retrieve the list of available versions for provider hashicorp/kubernetes: no available releases match the given constraints 1.7.0, 2.0.0
╵
My Terraform version is 0.15.5.
Can anyone tell me where it's wrong?
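For reference, required_providers accepts only one entry per provider local name, with a single version constraint and a single configuration_aliases list; the duplicated source/version/configuration_aliases arguments inside the kubernetes entry appear to be what produces the conflicting 1.7.0, 2.0.0 constraints. A minimal sketch of a single combined entry, assuming a 2.x release of the kubernetes provider works for both cluster configurations:

terraform {
  required_version = ">= 0.15"
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      # One version constraint has to cover every aliased configuration of this provider.
      version = "~> 2.0"
      # Declare all expected aliases in a single list instead of repeating the entry.
      configuration_aliases = [kubernetes.gke, kubernetes.gke_v2]
    }
  }
}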

Related

Terraform loop through multiple providers(accounts) - invokation through module

I have a use case where I need help using for_each to loop through multiple providers (AWS accounts & regions). This is a module, and the configuration will use a hub-and-spoke model.
Below is the Terraform pseudocode I would like to achieve.
module.tf
---------
app_accounts = [
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child1" },
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child2" }
]
Below are the provider and resource files; please ignore the variables and outputs files, as they're not relevant here.
provider.tf
------------
provider "aws" {
  for_each = var.app_accounts
  alias    = "child"
  profile  = each.value.role
}
Here is the main resource block where I want to associate multiple child accounts with a single master account, so I want to iterate through the loop:
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = vpc_id
zone_id = zone_id
}
resource "aws_route53_zone_association" "child" {
provider = aws.child
vpc_id = vpc_id
zone_id = zone_id
}
Any idea on how to achieve this, please? Thanks in advance.
The typical way to achieve your goal in Terraform is to define a shared module representing the objects that should be present in a single account and then to call that module once for each account, passing a different provider configuration into each.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  alias = "master"
  # ...
}

provider "aws" {
  alias   = "example1"
  profile = "example1"
}

module "example1" {
  source     = "./modules/account"
  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"
  providers = {
    aws        = aws.example1
    aws.master = aws.master
  }
}

provider "aws" {
  alias   = "example2"
  profile = "example2"
}

module "example2" {
  source     = "./modules/account"
  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"
  providers = {
    aws        = aws.example2
    aws.master = aws.master
  }
}
The ./modules/account directory would then contain the resource blocks describing what should exist in each individual account. For example:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [ aws, aws.master ]
    }
  }
}

variable "account" {
  type = string
}

variable "app_vpc_id" {
  type = string
}

resource "aws_route53_zone" "example" {
  # (omitting the provider argument will associate
  # with the default provider configuration, which
  # is different for each instance of this module)
  # ...
}

resource "aws_route53_vpc_association_authorization" "master" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}

resource "aws_route53_zone_association" "child" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}
(I'm not sure if you actually intended var.app_vpc_id to be the VPC specified for those zone associations, but my goal here is only to show the general pattern, not to show a fully-working example.)
Using a shared module in this way allows you to avoid repeating the definitions for each account separately, and keeps each account-specific setting in only one place (either a provider "aws" block or a module block).
There is no way to make this more dynamic within the Terraform language itself. However, if you expect to be adding and removing accounts regularly and want to make the process more systematic, you could use code generation to mechanically produce the provider and module blocks for each account in the root module, so that they all remain consistent and can be updated together if you ever need to change the interface of the shared module in a way that affects all of the calls.

Using Terraform Provider in aws module

I am going through the Terraform documentation, and it seems unclear to me. I'm quite new to Terraform, so no doubt I'm misunderstanding something here:
https://developer.hashicorp.com/terraform/language/modules/develop/providers
Problem:
My terraform pipeline is returning the following warning:
│
│ on waf-cdn.tf line 9, in module "waf_cdn":
│ 9: aws = aws.useastone
│
│ Module module.waf_cdn does not declare a provider named aws.
│ If you wish to specify a provider configuration for the module, add an entry for aws in the required_providers block within the module.
My root module is calling a child waf module. I understand that I need to configure my provider within my root module. There are 2 files within my root module:
...terraform.tf...
terraform {
  backend "s3" {}
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.33.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
}
...and providers.tf...
provider "aws" {
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = "us-east-1"
alias = "useastone"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = var.region
alias = "master"
assume_role {
role_arn = replace(
"arn:aws:iam::${var.master_account_id}:role/${local.role_name}",
local.app_region,
"master"
)
}
}
When calling the child module, the SCOPE attribute of the WAF needs to specify the region as us-east-1 for CLOUDFRONT, as it is a global service in AWS. Therefore, I need to pass the useastone provider when calling the child waf module, as seen below:
module "waf_cdn" {
source = "../modules/qa-aws-waf-common"
name = "${local.waf_prefix}-cdn"
logging_arn = aws_kinesis_firehose_delivery_stream.log_stream_cdn.arn
scope = "CLOUDFRONT"
tags = merge(module.tags.tags, { name = "${local.name_prefix}-qa-waf-cdn" })
providers = {
aws = aws.useastone
}
}
With this code I'm getting the error shown above.
I'm banging my head against the documentation here, so any help would be really appreciated.
Here's hoping, thanks!
As per the documentation you linked, here is the passage you are interested in [1]:
Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.
Since that is the case, you need to define the provider(s) on the module level as well:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = ">= 4.33.0"
      configuration_aliases = [ aws.useastone ]
    }
  }
}
That would probably be an additional providers.tf file in ../modules/qa-aws-waf-common.
[1] https://developer.hashicorp.com/terraform/language/modules/develop/providers#passing-providers-explicitly
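For completeness, a minimal sketch of how the two sides would then line up, assuming the module's resources are meant to use the aliased configuration (the resource shown is only illustrative):

# Root module: pass the aliased configuration under the name the child module declares.
module "waf_cdn" {
  source = "../modules/qa-aws-waf-common"
  # ...
  providers = {
    aws.useastone = aws.useastone
  }
}

# Inside ../modules/qa-aws-waf-common: reference the declared alias on any
# resource that should run against us-east-1 (aws_wafv2_web_acl is illustrative).
resource "aws_wafv2_web_acl" "this" {
  provider = aws.useastone
  # ...
}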

Cannot add "Wiz"(third-party) terraform provider

I am integrating Wiz for AWS resource scanning, following the doc: https://docs.wiz.io/wiz-docs/docs/auto-connect-clusters. When I added the wiz provider in Terraform, it failed with an error.
In providers.tf, I added the following code:
terraform {
  required_providers {
    wiz = {
      version = " ~> 1.0"
      source  = "tf.app.wiz.io/wizsec/wiz"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Also, for the Wiz integration there is a prerequisite to have the Kubernetes provider plus the Wiz client ID and secret configured, so I added:
provider "kubernetes" {
config_context = //context
config_path = //path
}
provider "wiz" {
client_id = //clientid
secret = //secret.id
}
Thanks in advance.
Okay, I could fetch the wiz plugin from the tf.app.wiz.io registry, so the above terraform init should work.
The only case where I think it can fail is when you are using a module which expects the wiz provider and you haven't defined the source tf.app.wiz.io/wizsec/wiz in all the modules you are sourcing. If you don't specify it in each module, Terraform assumes it needs to fetch the provider from the default registry, registry.terraform.io, and fails with the error you are seeing.
You could specify the provider like below in each module and let the calling module specify the version you want:
terraform {
  required_providers {
    wiz = {
      source = "tf.app.wiz.io/wizsec/wiz"
    }
  }
}
Are you calling a module which relies on wiz provider?
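As a side note, a minimal sketch of that split, with the root pinning the version and a child module declaring only the source (the ./modules/wiz-connect path is hypothetical):

# Root module: the version constraint set here applies to the whole configuration,
# since provider requirements from all modules are combined.
terraform {
  required_providers {
    wiz = {
      source  = "tf.app.wiz.io/wizsec/wiz"
      version = "~> 1.0"
    }
  }
}

# Child module call; the child only needs the required_providers block
# shown above, with the source and no version.
module "wiz_connect" {
  source = "./modules/wiz-connect"  # hypothetical path
}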

Terraform 0.15 - Multiple Providers \ Regions and Guardduty

I’m trying to deploy AWS Guardduty using Organisations to multiple regions.
In my root config I’ve created the following provider:
# If I remove this default provider I get prompted for a region
provider "aws" {
  profile = "default"
  region  = var.region
}
provider "aws" {
  profile = "default"
  alias   = "eu-west-2"
  region  = "eu-west-2"
}
provider "aws" {
  profile = "default"
  alias   = "eu-west-3"
  region  = "eu-west-3"
}
Then I have multiple calls to the module, passing in my provider aliases:
module "guardduty_orgs_eu_west_2" {
source = "../../modules/aws_guardduty_organisations"
security_account_id = var.security_account_id
providers = {
aws.alternate = aws.eu-west-2
}
}
module "guardduty_orgs_eu_west_3" {
source = "../../modules/aws_guardduty_organisations"
security_account_id = var.security_account_id
providers = {
aws.alternate = aws.eu-west-3
}
}
In my module I then have the required providers block and ‘configuration_aliases’
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.27"
      configuration_aliases = [ aws.alternate ]
    }
  }
}
and finally my resource
resource "aws_guardduty_organization_admin_account" "gdoaa" {
admin_account_id = var.security_account_id
provider = aws.alternate
}
However, I get an error:
" Error: error enabling GuardDuty Organization Admin Account (123456789): BadRequestException: The request failed because the account is already enabled as the GuardDuty delegated administrator for the organization."
Now, this is correct, as the first module call enables the admin account for "eu-west-2", but I would think passing in the 2nd provider for "eu-west-3" would enable the admin account for that region as per the GuardDuty best practices / docs.
Any help appreciated
cheers
Paul
/*resource "aws_guardduty_detector" "MyDetector" {
enable = true
datasources {
s3_logs {
enable = false
}
kubernetes {
audit_logs {
enable = false
}
}
}
}
*/
resource "aws_guardduty_organization_configuration" "example" {
provider = aws.securityacc
auto_enable = true
detector_id = "12345678"
}
This worked for me. Comment out the guardduty detector resource, as it gets enabled automatically when you delegate the account as the GuardDuty admin account.
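Note that aws.securityacc is not declared in the snippets above; if this lives in a shared module, the alias would need to be declared and passed in following the same configuration_aliases pattern. A minimal sketch, with the profile and region values purely illustrative:

# In the module: declare the expected alias.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.27"
      configuration_aliases = [ aws.securityacc ]
    }
  }
}

# In the root configuration: define the aliased provider and pass it to the module.
provider "aws" {
  alias   = "securityacc"
  profile = "security"   # illustrative
  region  = "eu-west-2"  # illustrative
}

module "guardduty_orgs_eu_west_2" {
  source              = "../../modules/aws_guardduty_organisations"
  security_account_id = var.security_account_id
  providers = {
    aws.securityacc = aws.securityacc
  }
}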

terraform (0.11 and 0.12) apply works on 1 machine, but not on the other

Working on 2 different Windows 10 machines where 'terraform apply' works on one machine, but not on the other. Before moving to the second PC, I completely removed the infrastructure on GCP and made sure I only copied the .tf file + the essential JSON (no state files etc.), since I'm preparing this for a pipeline and want a clean environment to start with.
Code snippet (full script at the end, further below):
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
version = "~> 1.7"
}
# Namespace
resource "kubernetes_namespace" "testspace" {
metadata {
annotations = {
name = "testspace"
}
name = "testspace"
}
}
According to all the examples I've seen, this should work, and it does on my laptop, but on my second machine I get the following error:
Error: Failed to configure: username/password or bearer token may be set, but not both
on Deploy_Test.tf line 1, in provider "kubernetes":
1: provider "kubernetes" {
If I remove the username and password, the error disappears, but then I can't create a namespace because I have no authorization. The error states:
Error: namespaces is forbidden: User "client" cannot create namespaces at the cluster scope
And now I'm getting a bit lost: this code runs fine on one PC but not on the other, and I can't figure out why, even after redeploying again from PC one starting from a new, clean Terraform folder.
Hopefully someone has an idea where to look?
Tried the following so far:
Updated to 0.12.1 - no difference.
Downgraded to 0.11 - no difference.
Tried all different combinations of using the certificate or username/password combo.
provider "google" {
credentials = file("account.json")
project = var.project
region = var.region
version = "~> 2.7"
}
resource "google_container_cluster" "primary" {
name = "${var.name}-cluster"
location = var.region
initial_node_count = 1
master_auth {
username = var.username
password = var.password
/*
client_certificate_config {
issue_client_certificate = true
}
*/
}
node_version = "1.11.10-gke.4"
min_master_version = "1.11.10-gke.4"
node_config {
preemptible = true
machine_type = "n1-standard-1"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
version = "~> 1.7"
}
# Namespace
resource "kubernetes_namespace" "testspace" {
metadata {
annotations = {
name = "testspace"
}
name = "testspace"
}
}
You have two problems here, first:
Error: Failed to configure: username/password or bearer token may be set, but not both
is telling you that you can EITHER authenticate with a username and password, OR with a bearer token, but not both. Your error appears to come from here:
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
version = "~> 1.7"
}
Basically you're pointing at the three .pem files AND you're trying to auth with a username/password. Choose one or the other. See the Kubernetes provider documentation (specifically "Statically defined credentials") for details about that particular error.
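For example, a minimal sketch of the certificate-only variant (simply the provider block above with the username/password arguments removed):

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  # Certificate-based auth only; no username/password, so the two methods don't conflict.
  client_certificate     = "${base64decode(google_container_cluster.primary.master_auth[0].client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.primary.master_auth[0].client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)}"
  version                = "~> 1.7"
}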
That said:
Error: namespaces is forbidden: User "client" cannot create namespaces at the cluster scope
is telling you that you don't have the permissions to do what you're trying to do. Once you can identify what it's trying to authenticate as, you can identify what is wrong. It appears that your client_certificate, client_key, and/or cluster_ca_certificate are out of date on the second computer but not the first. I believe it is the cluster_ca_certificate that's out of date, if your gcloud config set container/use_client_certificate is true. This answer has more information about that.
If that's not the case, we will have to investigate further.
Found the cause of this:
I previously had Docker Desktop installed. After removal, it left some junk behind: in this case, under c:\users\%username% there was a leftover .kube folder with a kubeconfig file in it, containing the old certificates.
I zipped the folder contents and removed the folder. After that, Terraform works the same as on the other machine.