I am trying to pass two AWS Terraform providers to my child module. I want the default provider to stay unaliased, because I can't go through and add a provider argument to every Terraform resource in the parent module.
Parent Module------------------------------------------
versions.tf
terraform {
required_version = "~> 1.0"
backend "remote" {
hostname = "app.terraform.io"
organization = "some-org"
workspaces {
prefix = "some-state-file"
}
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
configuration_aliases = [ aws.domain-management ]
}
}
}
provider "aws" {
access_key = var.aws_access_key_id
secret_key = var.aws_secret_access_key
region = var.aws_region
default_tags {
tags = {
Application = var.application_name
Environment = var.environment
}
}
}
provider "aws" {
alias = "domain-management"
region = var.domain_management_aws_region
access_key = var.domain_management_aws_access_key_id
secret_key = var.domain_management_aws_secret_access_key
}
module.tf (calling child module)
module "vanity-cert-test" {
source = "some-source"
fully_qualified_domain_name = "some-domain.com"
alternative_names = ["*.${var.dns_zone.name}"]
application_name = var.application_name
environment = var.environment
service_name = var.service_name
domain_managment_zone_name = "some-domain02.com"
providers = {
aws.domain-management = aws.domain-management
}
}
Child Module-------------------------------------------------------
versions.tf
terraform {
required_version = "~> 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
configuration_aliases = [aws.domain-management]
}
}
}
provider "aws" {
alias = "domain-management"
}
route53.tf
# Create validation Route53 records
resource "aws_route53_record" "vanity_route53_cert_validation" {
# use domain management secondary aws provider
provider = aws.domain-management
for_each = {
for dvo in aws_acm_certificate.vanity_certificate.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
zone_id = data.aws_route53_zone.vanity_zone.zone_id
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
allow_overwrite = true
}
The use case for this is to have a vanity cert defined in a separate account from the one where the DNS validation records for the certificate need to go. Currently, when running this, I get the following error:
terraform plan -var-file=./application.tfvars
╷
│ Warning: Provider aws.domain-management is undefined
│
│ on services/self-service-ticket-portal-app/ssl-certificate.tf line 33, in module "vanity-cert-test":
│ 33: aws.domain-management = aws.domain-management
│
│ Module module.services.module.self-service-ticket-portal-app.module.vanity-cert-test does not declare a provider named aws.domain-management.
│ If you wish to specify a provider configuration for the module, add an entry for aws.domain-management in the required_providers block within the module.
╵
╷
│ Error: missing provider module.services.module.self-service-ticket-portal-app.provider["registry.terraform.io/hashicorp/aws"].domain-management
If your "Parent Module" is the root module, then you can't use configuration_aliases in it. configuration_aliases is only used in child modules:
To declare a configuration alias within a module in order to receive an alternate provider configuration from the parent module, add the configuration_aliases argument to that provider's required_providers entry.
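For reference, a minimal sketch of that pattern, using the alias from the question (the module source and region variables are illustrative): the child module declares the alias in its required_providers entry, the root module defines both provider configurations without any configuration_aliases, and the providers map passes only the aliased one through.
# Child module (e.g. the vanity-cert module's versions.tf) declares the alias
# it expects to receive; it defines no provider "aws" blocks of its own.
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.0"
      configuration_aliases = [aws.domain-management]
    }
  }
}

# Root module: both configurations live here. The default stays unaliased,
# so existing resources keep using it implicitly.
provider "aws" {
  region = var.aws_region
}

provider "aws" {
  alias  = "domain-management"
  region = var.domain_management_aws_region
}

module "vanity-cert-test" {
  source = "some-source"

  providers = {
    # The default provider is inherited automatically; only the aliased
    # configuration has to be mapped explicitly.
    aws.domain-management = aws.domain-management
  }
}
Because the default configuration is passed implicitly, the existing resources in the parent module keep working without a provider argument.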
Related
I'm trying to create multiple AWS accounts in an Organization, each containing resources.
The resources should be owned by the created accounts.
For that I created a module for the accounts:
resource "aws_organizations_account" "this" {
name = var.customer
email = var.email
parent_id = var.parent_id
role_name = "OrganizationAccountAccessRole"
provider = aws.src
}
resource "aws_s3_bucket" "this" {
bucket = "exconcept-terraform-state-${var.customer}"
provider = aws.dst
depends_on = [
aws_organizations_account.this
]
}
output "account_id" {
value = aws_organizations_account.this.id
}
output "account_arn" {
value = aws_organizations_account.this.arn
}
my provider file for the module:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
configuration_aliases = [ aws.src, aws.dst ]
}
}
}
In the root module I'm calling the module like this:
module "account" {
source = "./modules/account"
for_each = var.accounts
customer = each.value["customer"]
email = each.value["email"]
# close_on_deletion = true
parent_id = aws_organizations_organizational_unit.testing.id
providers = {
aws.src = aws.default
aws.dst = aws.customer
}
}
Since the provider information comes from the root module, and the accounts are created with a for_each map, how can I use the current aws.dst provider?
Here is my root provider file:
provider "aws" {
region = "eu-central-1"
profile = "default"
alias = "default"
}
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
}
alias = "customer"
region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)
I am trying to set up a BigQuery data transfer configuration using Terraform. I am using my personal GCP account and have Terraform set up on my laptop so that Terraform and GCP can work together.
I am trying the below code in main.tf:
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "4.18.0"
}
}
}
provider "google" {
# Configuration options
project="gcp-project-100"
region="us-central1"
zone="us-central1-a"
credentials = "keys.json"
}
data "google_project" "project" {
}
resource "google_project_iam_member" "permissions" {
project = data.google_project.project.project_id
role = "roles/iam.serviceAccountShortTermTokenMinter"
member = "serviceAccount:service-${data.google_project.project.number}#gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}
resource "google_bigquery_data_transfer_config" "query_config" {
depends_on = [google_project_iam_member.permissions]
display_name = "my-query"
location = "US"
data_source_id = "scheduled_query"
schedule = "every wednesday 09:30"
service_account_name = "service-${data.google_project.project.number}#gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
destination_dataset_id = "practice"
params = {
destination_table_name_template = "test_gsod"
write_disposition = "WRITE_TRUNCATE"
query = "select station_number , year , month,day, mean_temp,mean_dew_point ,mean_visibility from `bigquery-public-data.samples.gsod`"
}
}
terraform apply is failing with the details below:
google_bigquery_data_transfer_config.query_config: Creating...
╷
│ Error: Error creating Config: googleapi: Error 403: The caller does not have permission
│
│ with google_bigquery_data_transfer_config.query_config,
│ on main.tf line 27, in resource "google_bigquery_data_transfer_config" "query_config":
│ 27: resource "google_bigquery_data_transfer_config" "query_config" {
Can someone please help me with how to do this setup properly?
The issue is resolved now. I have used the below piece of code, along with the BigQuery Admin role for my Terraform service account.
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "4.18.0"
}
}
}
provider "google" {
# Configuration options
project="gcp-project-100"
region="us-central1"
zone="us-central1-a"
credentials = "keys.json"
}
resource "google_bigquery_data_transfer_config" "query_config" {
display_name = "my-query"
location = "US"
data_source_id = "scheduled_query"
schedule = "every 15 mins"
destination_dataset_id = "practice"
params = {
destination_table_name_template = "test_gsod"
write_disposition = "WRITE_TRUNCATE"
query = "select station_number , year , month,day, mean_temp,mean_dew_point ,mean_visibility from `bigquery-public-data.samples.gsod`"
}
}
Now it is working fine. Thanks.
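For anyone who prefers to grant that role in Terraform rather than in the console, a minimal sketch might look like the following; the member email is a hypothetical placeholder for the service account behind keys.json, and roles/bigquery.admin is the BigQuery Admin role mentioned above.
# Hypothetical example: grant the BigQuery Admin role to the service account
# that Terraform authenticates as (the email below is a placeholder).
resource "google_project_iam_member" "terraform_bigquery_admin" {
  project = "gcp-project-100"
  role    = "roles/bigquery.admin"
  member  = "serviceAccount:terraform-sa@gcp-project-100.iam.gserviceaccount.com"
}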
I am getting the below error while creating a Firewall Manager policy for a CloudFront distribution.
The documentation provides little detail on how to deploy a policy for a CloudFront distribution, which is a global resource.
This is the error I am getting while executing my code:
aws_fms_policy.xxxx: Creating...
╷
│ Error: error creating FMS Policy: InternalErrorException:
│
│ with aws_fms_policy.xxxx,
│ on r_wafruleset.tf line 1, in resource "aws_fms_policy" "xxxx":
│ 1: resource "aws_fms_policy" "xxxx" {
│
╵
Releasing state lock. This may take a few moments...
main.tf looks like this with provider information:
provider "aws" {
region = "ap-southeast-2"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/yyyy"
}
}
r_fms.tf looks like this:
resource "aws_fms_policy" "xxxx" {
name = "xxxx"
exclude_resource_tags = true
resource_tags = var.exclude_tags
remediation_enabled = true
provider = aws.us_east_1
include_map {
account = ["123123123"]
}
resource_type = "AWS::CloudFront::Distribution"
security_service_policy_data {
type = "WAFV2"
managed_service_data = jsonencode(
{
type = "WAFV2"
defaultAction = {
type = "ALLOW"
}
overrideCustomerWebACLAssociation = false
postProcessRuleGroups = []
preProcessRuleGroups = [
{
excludeRules = []
managedRuleGroupIdentifier = {
vendorName = "AWS"
managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
version = true
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
{
excludeRules = []
managedRuleGroupIdentifier = {
managedRuleGroupName = "AWSManagedRulesWindowsRuleSet"
vendorName = "AWS"
version = null
}
overrideAction = {
type = "COUNT"
}
ruleGroupArn = null
ruleGroupType = "ManagedRuleGroup"
sampledRequestsEnabled = true
},
]
sampledRequestsEnabledForDefaultActions = true
})
}
}
I have tried to follow the thread below, but I'm still getting the error above:
https://github.com/hashicorp/terraform-provider-aws/issues/17821
Terraform Version:
Terraform v1.1.7
on windows_386
+ provider registry.terraform.io/hashicorp/aws v4.6.0
There is an open issue in the Terraform AWS provider.
A workaround for this issue is to remove the 'version' attribute.
AWS has recently introduced versioning for WAF policies managed by Firewall Manager, which is causing this weird error.
Though a permanent fix is in progress (refer to my earlier post), we can remove the attribute to avoid this error.
Another approach is to use the new attribute versionEnabled = true in case you want versioning enabled.
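Applied to the policy in the question, each managedRuleGroupIdentifier map inside the jsonencode(...) payload would then look roughly like this sketch; only the version handling changes, and versionEnabled is the attribute mentioned above.
# Inside the jsonencode(...) payload of security_service_policy_data:
managedRuleGroupIdentifier = {
  vendorName           = "AWS"
  managedRuleGroupName = "AWSManagedRulesAmazonIpReputationList"
  # 'version' removed entirely (the workaround above), or alternatively:
  # versionEnabled = true   # only if you want versioning enabled
}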
I'm trying to create two EC2 instances on AWS with the following features:
Image: Ubuntu Server 18.04 LTS (HVM), SSD Volume Type
AMI: ami-0747bdcabd34c712a (64-bit x86, us-east-1 region)
Instance type: m5a.large (2 vCPUs, 8 GB memory, up to 10 Gigabit network)
Number of instances: 2
Storage: 20 GB General Purpose SSD, Delete storage on termination
Tags: Name=lfs258_class
Allow all traffic from everywhere
Use the existing SSH Keypair I have on my laptop
This is the tree file structure
.
├── README.md
├── ec2.tf
├── outputs.tf
├── provider.tf
├── variables.tf
└── versions.tf
file ec2.tf
locals {
availability_zone = "${local.region}a"
name = "kubernetes-lfs258-course"
region = "us-east-1"
tags = {
Owner = "pss-cli-user1 "
Environment = "kubernetes-lfs258-course"
}
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"
name = local.name
azs = ["${local.region}a", "${local.region}b", "${local.region}c"]
public_subnets = lookup(var.init,"public-subnet")
tags = local.tags
}
module "security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 4.0"
name = local.name
description = "Security group for example usage with EC2 instance"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = ["0.0.0.0/0"]
ingress_rules = ["all-all"]
egress_rules = ["all-all"]
tags = local.tags
}
################################################################################
# Supporting Resources for the EC2 module
################################################################################
module "ec2" {
source = "../../"
name = local.name
ami = lookup(var.init,"ami")
#instance_type = "c5.large"
instance_type = lookup(element(var.instances,0),"instance_type")
availability_zone = local.availability_zone
subnet_id = element(module.vpc.private_subnets, 0)
vpc_security_group_ids = [module.security_group.security_group_id]
associate_public_ip_address = true
tags = local.tags
}
resource "aws_volume_attachment" "this" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.this.id
instance_id = module.ec2.id
}
resource "aws_ebs_volume" "this" {
availability_zone = local.availability_zone
size = 20
tags = local.tags
}
file outputs.tf
# EC2
output "ec2_id" {
description = "The ID of the instance"
value = module.ec2.id
}
output "ec2_arn" {
description = "The ARN of the instance"
value = module.ec2.arn
}
output "ec2_capacity_reservation_specification" {
description = "Capacity reservation specification of the instance"
value = module.ec2.capacity_reservation_specification
}
output "ec2_instance_state" {
description = "The state of the instance. One of: `pending`, `running`, `shutting-down`, `terminated`, `stopping`, `stopped`"
value = module.ec2.instance_state
}
output "ec2_primary_network_interface_id" {
description = "The ID of the instance's primary network interface"
value = module.ec2.primary_network_interface_id
}
output "ec2_private_dns" {
description = "The private DNS name assigned to the instance. Can only be used inside the Amazon EC2, and only available if you've enabled DNS hostnames for your VPC"
value = module.ec2.private_dns
}
output "ec2_public_dns" {
description = "The public DNS name assigned to the instance. For EC2-VPC, this is only available if you've enabled DNS hostnames for your VPC"
value = module.ec2.public_dns
}
output "ec2_public_ip" {
description = "The public IP address assigned to the instance, if applicable. NOTE: If you are using an aws_eip with your instance, you should refer to the EIP's address directly and not use `public_ip` as this field will change after the EIP is attached"
value = module.ec2.public_ip
}
output "ec2_tags_all" {
description = "A map of tags assigned to the resource, including those inherited from the provider default_tags configuration block"
value = module.ec2.tags_all
}
file provider.tf
provider "aws" {
region = local.region
profile = "pss-cli-user1"
shared_credentials_file = "~/.aws/credentials"
}
file variables.tf
# This file defines variables types and their initial hardcoded values
variable "zones" {
type = list(string)
default = ["us-east-1a", "us-east-1b"]
}
variable "instances" {
type = list(object({
instance_type = string
count = number
tags = map(string)
}))
# If instances is not defined in terraform.tfvars use this value
default = [
{
instance_type = "m5a.large"
count = 2
tags = { "UsedFor" = "kubernetes lfs258 course"}
}
]
}
variable "init" {
type = object({
vpc-id=list(string),
public-subnet=list(string),
aws_region=string,
ami=string
vpc-sec-group= list(string)
})
# if not defined in terraform.tfvars takes this default
default = {
vpc-id = ["vpc-02938578"]
public-subnet = ["subnet-94e25d9a"]
aws_region = "us-east-1"
ami = "ami-0747bdcabd34c712a"
vpc-sec-group = ["sg-d60bf3f5"]
}
}
file versions.tf
terraform {
required_version = ">= 0.13.1"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.51"
}
}
}
The command terraform init works without errors.
However, terraform plan gives me the following complaints:
╷
│ Error: Unsupported argument
│
│ on ec2.tf line 41, in module "ec2":
│ 41: name = local.name
│
│ An argument named "name" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on ec2.tf line 43, in module "ec2":
│ 43: ami = lookup(var.init,"ami")
│
│ An argument named "ami" is not expected here.
..... more errors like this removed
My questions are:
What am I doing wrong and how do I fix it?
How can I create a better IaC Terraform deployment?
BR
David
I am trying to implement a Data Module for referencing a 'Robot Account' for Terraform.
I get the following errors:
Error: Reference to undeclared resource
on main.tf line 7, in provider "google":
7: credentials = data.google_secret_manager_secret_version.secret
A data resource "google_secret_manager_secret_version" "secret" has not been
declared in the root module.
Error: Reference to undeclared input variable
on datamodule\KeydataModule.tf line 3, in data "google_secret_manager_secret_version" "secret":
3: secret = "${var.Terra_Auth}"
An input variable with the name "Terra_Auth" has not been declared. This
variable can be declared with a variable "Terra_Auth" {} block.
With the following main.tf:
module "KeydataModule" {
source = "./datamodule"
}
provider "google" {
credentials = data.google_secret_manager_secret_version.secret
project = "KubeProject"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "ubuntu-cloud/ubuntu-1804-lts"
}
}
network_interface {
# A default network is created for all GCP projects
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_compute_network" "vpc_network" {
name = "terraform-network"
auto_create_subnetworks = "true"
}
The KeydataModule.tf:
data "google_secret_manager_secret_version" "secret" {
provider = google-beta
secret = "${var.Terra_Auth}"
}
The following variables.tf creates the 'Terra_Auth' variable:
variable "Terra_Auth" {
type = string
description = "Access Key for Terraform Service Account"
}
And finally a terraform.tfvars file, which in this case houses the secret name within my GCP account:
Terra_Auth = "Terraform_GCP_Account_Secret"