Trying to implement a Data Module for referencing a 'Robot Account' for Terraform.
I get the following errors:
Error: Reference to undeclared resource
on main.tf line 7, in provider "google":
7: credentials = data.google_secret_manager_secret_version.secret
A data resource "google_secret_manager_secret_version" "secret" has not been
declared in the root module.
Error: Reference to undeclared input variable
on datamodule\KeydataModule.tf line 3, in data "google_secret_manager_secret_version" "secret":
3: secret = "${var.Terra_Auth}"
An input variable with the name "Terra_Auth" has not been declared. This
variable can be declared with a variable "Terra_Auth" {} block.
With the following main.tf:
module "KeydataModule" {
source = "./datamodule"
}
provider "google" {
credentials = data.google_secret_manager_secret_version.secret
project = "KubeProject"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "ubuntu-cloud/ubuntu-1804-lts"
}
}
network_interface {
# A default network is created for all GCP projects
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_compute_network" "vpc_network" {
name = "terraform-network"
auto_create_subnetworks = "true"
}
The KeydataModule.tf:
data "google_secret_manager_secret_version" "secret" {
provider = google-beta
secret = "${var.Terra_Auth}"
}
The following variables.tf for creating the 'Terra_Auth' variable:
variable "Terra_Auth" {
type = string
description = "Access Key for Terraform Service Account"
}
And finally a terraform.tfvars file, which in this case houses the secret name within my GCP account:
Terra_Auth = "Terraform_GCP_Account_Secret"
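Both errors point at the same structural issue: the variable is declared outside the module that uses it, and the root module references a data source that only exists inside the module. One way to wire this up (a sketch; the output name secret_value and passing the variable through the module block are my assumptions, not from the original post) is to declare the variable inside the module, expose the secret through an output, and reference that output in the provider block:

```hcl
# datamodule/KeydataModule.tf -- the variable must be declared in the
# module that references it (variables.tf can live inside ./datamodule)
variable "Terra_Auth" {
  type        = string
  description = "Access Key for Terraform Service Account"
}

data "google_secret_manager_secret_version" "secret" {
  secret = var.Terra_Auth
}

# datamodule/outputs.tf -- expose the secret payload to the root module
output "secret_value" {
  value     = data.google_secret_manager_secret_version.secret.secret_data
  sensitive = true
}

# main.tf (root module) -- pass the value in and consume the output
module "KeydataModule" {
  source     = "./datamodule"
  Terra_Auth = "Terraform_GCP_Account_Secret"
}

provider "google" {
  credentials = module.KeydataModule.secret_value
  project     = "KubeProject"
  region      = "us-central1"
  zone        = "us-central1-c"
}
```

One caveat: reading the secret itself still requires some ambient credentials (for example GOOGLE_APPLICATION_CREDENTIALS or gcloud application-default login), since the data source runs before the provider it is meant to configure.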
I'm trying to create multiple AWS accounts in an Organization, each containing resources. The resources should be owned by the created accounts.
For that, I created a module for the accounts:
resource "aws_organizations_account" "this" {
name = var.customer
email = var.email
parent_id = var.parent_id
role_name = "OrganizationAccountAccessRole"
provider = aws.src
}
resource "aws_s3_bucket" "this" {
bucket = "exconcept-terraform-state-${var.customer}"
provider = aws.dst
depends_on = [
aws_organizations_account.this
]
}
output "account_id" {
value = aws_organizations_account.this.id
}
output "account_arn" {
value = aws_organizations_account.this.arn
}
my provider file for the module:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
configuration_aliases = [ aws.src, aws.dst ]
}
}
}
In the root module I'm calling the module like this:
module "account" {
source = "./modules/account"
for_each = var.accounts
customer = each.value["customer"]
email = each.value["email"]
# close_on_deletion = true
parent_id = aws_organizations_organizational_unit.testing.id
providers = {
aws.src = aws.default
aws.dst = aws.customer
}
}
Since the provider information comes from the root module, and the accounts are created with a for_each map, how can I use the current aws.dst provider?
Here is my root provider file:
provider "aws" {
region = "eu-central-1"
profile = "default"
alias = "default"
}
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
}
alias = "customer"
region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)
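The cycle arises because the aws.customer provider configuration depends on module.account's outputs, while module.account's resources are created through that very provider; Terraform must configure a provider before it can plan the resources that use it. Also note that each.key is only meaningful inside the module block, not in a provider block. One common workaround (a sketch, not from the original post) is to split the work into two configurations: a first one that only creates the accounts, and a second one that receives the account ID as a plain input (via a variable or terraform_remote_state) and configures the assume-role provider from it:

```hcl
# Second configuration -- the account ID is an input, not a module
# output, so the provider no longer depends on resources it manages
variable "customer_account_id" {
  type = string
}

provider "aws" {
  alias  = "customer"
  region = "eu-central-1"
  assume_role {
    role_arn = "arn:aws:iam::${var.customer_account_id}:role/OrganizationAccountAccessRole"
  }
}
```

Since provider blocks cannot use for_each, per-account providers generally mean one such configuration (or workspace) per account.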
I am trying to pass two AWS Terraform providers to my child module. I want the default to stay unaliased, because I can't go through and add a provider to all of the terraform resources in the parent module.
Parent Module------------------------------------------
versions.tf
terraform {
required_version = "~> 1.0"
backend "remote" {
hostname = "app.terraform.io"
organization = "some-org"
workspaces {
prefix = "some-state-file"
}
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
configuration_aliases = [ aws.domain-management ]
}
}
}
provider "aws" {
access_key = var.aws_access_key_id
secret_key = var.aws_secret_access_key
region = var.aws_region
default_tags {
tags = {
Application = var.application_name
Environment = var.environment
}
}
}
provider "aws" {
alias = "domain-management"
region = var.domain_management_aws_region
access_key = var.domain_management_aws_access_key_id
secret_key = var.domain_management_aws_secret_access_key
}
module.tf (calling child module)
module "vanity-cert-test" {
source = "some-source"
fully_qualified_domain_name = "some-domain.com"
alternative_names = ["*.${var.dns_zone.name}"]
application_name = var.application_name
environment = var.environment
service_name = var.service_name
domain_managment_zone_name = "some-domain02.com"
providers = {
aws.domain-management = aws.domain-management
}
}
Child Module-------------------------------------------------------
versions.tf
terraform {
required_version = "~> 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
confiuration_aliases = [aws.domain-management]
}
}
}
provider "aws" {
alias = domain-management
}
route53.tf
# Create validation Route53 records
resource "aws_route53_record" "vanity_route53_cert_validation" {
# use domain management secondary aws provider
provider = aws.domain-management
for_each = {
for dvo in aws_acm_certificate.vanity_certificate.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
zone_id = data.aws_route53_zone.vanity_zone.zone_id
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
allow_overwrite = true
}
The use case for this is to have a vanity cert defined in a separate account from where the DNS validation for the certificate needs to go. Currently when running this, I get the following error:
terraform plan -var-file=./application.tfvars
╷
│ Warning: Provider aws.domain-management is undefined
│
│ on services/self-service-ticket-portal-app/ssl-certificate.tf line 33, in module "vanity-cert-test":
│ 33: aws.domain-management = aws.domain-management
│
│ Module module.services.module.self-service-ticket-portal-app.module.vanity-cert-test does not declare a provider named aws.domain-management.
│ If you wish to specify a provider configuration for the module, add an entry for aws.domain-management in the required_providers block within the module.
╵
╷
│ Error: missing provider module.services.module.self-service-ticket-portal-app.provider["registry.terraform.io/hashicorp/aws"].domain-management
If your "Parent Module" is the root module, then you can't use configuration_aliases in it. configuration_aliases is only used in child modules:
To declare a configuration alias within a module in order to receive an alternate provider configuration from the parent module, add the configuration_aliases argument to that provider's required_providers entry.
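A minimal sketch of the split the docs describe (names are illustrative): the child module declares the alias it expects via configuration_aliases and has no provider block of its own, while the root module defines the actual aliased provider configuration and maps it in:

```hcl
# Child module versions.tf -- declares that it *expects* an
# aws.domain-management configuration from its caller
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 3.0"
      configuration_aliases = [aws.domain-management]
    }
  }
}
# (no provider "aws" block anywhere in the child module)

# Root module -- defines the real configuration and passes it down
provider "aws" {
  alias  = "domain-management"
  region = var.domain_management_aws_region
}

module "vanity-cert-test" {
  source = "some-source"
  providers = {
    aws.domain-management = aws.domain-management
  }
}
```

Note also that the child module's versions.tf in the question misspells configuration_aliases (confiuration_aliases) and defines its own provider "aws" block with an unquoted alias; both will prevent the alias from being recognized.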
Error: Unable to find remote state
on ../../modules/current_global/main.tf line 26, in data "terraform_remote_state" "global":
26: data "terraform_remote_state" "global" {
No stored state was found for the given workspace in the given backend
I have been stuck on this issue for a while.
main.tf:
data "aws_caller_identity" "current" {}
locals {
state_buckets = {
"amazon_account_id" = {
bucket = "bucket_name"
key = "key"
region = "region"
}
}
state_bucket = local.state_buckets[data.aws_caller_identity.current.account_id]
}
data "terraform_remote_state" "global" {
backend = "s3"
config = local.state_bucket
}
output "outputs" {
description = "Current account's global Terraform module outputs"
value = data.terraform_remote_state.global.outputs
}
One directory above, there is another main.tf file that references the main.tf file above:
main.tf:
provider "aws" {
version = "~> 2.0"
region = var.region
allowed_account_ids = ["id"]
}
terraform {
backend "s3" {
bucket = "bucket_name"
key = "key"
region = "region"
}
}
module "global" {
source = "../../modules/current_global"
}
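A detail worth checking (an assumption about the setup, since the other configuration's backend is not shown): terraform_remote_state only finds state that has already been written by an apply of the other configuration, its config must match that configuration's backend "s3" settings exactly, and if the state was written from a non-default workspace, the workspace must be selected explicitly:

```hcl
# The config map must mirror the other configuration's backend "s3"
# block; workspace defaults to "default" if omitted (assumption: the
# global state may live in a named workspace)
data "terraform_remote_state" "global" {
  backend   = "s3"
  workspace = "default" # or the workspace that actually holds the state
  config    = local.state_bucket
}
```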
I can't create a VM on GCP using Terraform. I want to attach a KMS key via the attribute "kms_key_self_link", but while the machine is being created, after 2 minutes of waiting (in every case) a 503 error appears. I'm sharing my script below; it's worth noting that with the "kms_key_self_link" attribute disabled, the script runs fine.
data "google_compute_image" "tomcat_centos" {
name = var.vm_img_name
}
data "google_kms_key_ring" "keyring" {
name = "keyring-example"
location = "global"
}
data "google_kms_crypto_key" "cmek-key" {
name = "crypto-key-example"
key_ring = data.google_kms_key_ring.keyring.self_link
}
data "google_project" "project" {}
resource "google_kms_crypto_key_iam_member" "key_user" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/owner"
member = "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com"
}
resource "google_compute_instance" "vm-hsbc" {
name = var.vm_name
machine_type = var.vm_machine_type
zone = var.zone
allow_stopping_for_update = true
can_ip_forward = false
deletion_protection = false
boot_disk {
kms_key_self_link = data.google_kms_crypto_key.cmek-key.self_link
initialize_params {
type = var.disk_type
#GCP-CE-CTRL-22
image = data.google_compute_image.tomcat_centos.self_link
}
}
network_interface {
network = var.network
}
#GCP-CE-CTRL-2-...-5, 7, 8
service_account {
email = var.service_account_email
scopes = var.scopes
}
#GCP-CE-CTRL-31
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
}
And this is the complete error:
Error creating instance: googleapi: Error 503: Internal error. Please try again or contact Google Support. (Code: '5C54C97EB5265.AA25590.F4046F68'), backendError
I solved this issue by granting my Compute Engine service account the encrypter/decrypter role through this resource:
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
crypto_key_id = data.google_kms_crypto_key.cmek-key.id
role = "roles/cloudkms.cryptoKeyEncrypter"
members = [
"serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com",
]
}
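As a side note (my assumption, not from the original answer): Google's CMEK documentation generally recommends the combined roles/cloudkms.cryptoKeyEncrypterDecrypter role for the Compute Engine service agent, since it must both encrypt and decrypt disk contents. A variant of the binding above would be:

```hcl
# Variant using the combined encrypt/decrypt role (an assumption; the
# original answer granted roles/cloudkms.cryptoKeyEncrypter only)
resource "google_kms_crypto_key_iam_binding" "key_iam_binding" {
  crypto_key_id = data.google_kms_crypto_key.cmek-key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  members = [
    "serviceAccount:service-${data.google_project.project.number}@compute-system.iam.gserviceaccount.com",
  ]
}
```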
I just started with Terraform. I'm trying to create a VPC module that will contain code for the VPC, subnets, internet gateway, and route table. I'm also creating a separate .tf file for RDS, which will refer to the VPC module and use the private subnets declared in it.
I created a VPC module whose vpc.tf contains the following:
provider "aws" {
region = var.region
}
terraform {
backend "s3" {}
}
resource "aws_vpc" "production-vpc" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
tags = {
Name = "Dev-VPC"
}
}
// Private Subnets
resource "aws_subnet" "private-subnet-1" {
cidr_block = var.private_subnet_1_cidr
vpc_id = aws_vpc.production-vpc.id
availability_zone = "us-east-1a"
tags = {
Name = "Private-Subnet-1"
}
}
resource "aws_subnet" "private-subnet-2" {
cidr_block = var.private_subnet_2_cidr
vpc_id = aws_vpc.production-vpc.id
availability_zone = "us-east-1b"
tags = {
Name = "Private-Subnet-2"
}
}
The output.tf has the following:
output "private-subnet1-id" {
description = "Private Subnet1 Id"
value = aws_subnet.private-subnet-1.*.id
}
output "private-subnet2-id" {
description = "Private Subnet2 Id"
value = aws_subnet.private-subnet-2.*.id
}
The files are saved in the \module\vpc folder.
I created rds.tf as follows in the \rds folder:
provider "aws" {
region = var.region
}
terraform {
backend "s3" {}
}
module "vpc" {
source = "../module/vpc"
}
resource "aws_db_subnet_group" "subnetgrp" {
name = "dbsubnetgrp"
subnet_ids = [module.vpc.private-subnet1-id.id, module.vpc.private-subnet2-id.id]
}
When I run terraform plan, I get the following error:
Error: Unsupported attribute
on rds.tf line 16, in resource "aws_db_subnet_group" "subnetgrp":
16: subnet_ids = [module.vpc.private-subnet1-id.id, module.vpc.private-subnet2-id.id]
|----------------
| module.vpc.private-subnet1-id is tuple with 1 element
This value does not have any attributes.
Error: Unsupported attribute
on rds.tf line 16, in resource "aws_db_subnet_group" "subnetgrp":
16: subnet_ids = [module.vpc.private-subnet1-id.id, module.vpc.private-subnet2-id.id]
|----------------
| module.vpc.private-subnet2-id is tuple with 1 element
This value does not have any attributes.
You don't need the splat expression in output.tf. Try the following:
output "private-subnet1-id" {
description = "Private Subnet1 Id"
value = aws_subnet.private-subnet-1.id
}
output "private-subnet2-id" {
description = "Private Subnet2 Id"
value = aws_subnet.private-subnet-2.id
}
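With outputs like those, each module output is already a single subnet ID string, so the extra .id accessor in rds.tf must be dropped as well:

```hcl
resource "aws_db_subnet_group" "subnetgrp" {
  name       = "dbsubnetgrp"
  subnet_ids = [module.vpc.private-subnet1-id, module.vpc.private-subnet2-id]
}
```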