Terraform referenced module has no attributes error - google-cloud-platform

I am trying to use a Palo Alto Networks module to deploy a panorama VM instance to GCP with Terraform. In the example module, I see they create a VPC together with a subnetwork; however, I have an existing VPC I am adding to, so I data source the VPC and create the subnetwork with a module. Upon referencing this subnetwork in my VM instance module, it complains it has no attributes:
Error: Incorrect attribute value type
on ../../../../modules/panorama/main.tf line 67, in resource "google_compute_instance" "panorama":
67: subnetwork = var.subnet
|----------------
| var.subnet is object with no attributes
Here is the subnet code:
data "google_compute_network" "panorama" {
project = var.project_id
name = "fed-il4-p-net-panorama"
}
module "panorama_subnet" {
source = "../../../../modules/subnetwork-module"
subnet_name = "panorama-${var.region_short_name[var.region]}"
subnet_ip = var.panorama_subnet
subnet_region = var.region
project_id = var.project_id
network = data.google_compute_network.panorama.self_link
}
Here is the panorama VM instance code:
module "panorama" {
source = "../../../../modules/panorama"
name = "${var.project_id}-panorama-${var.region_short_name[var.region]}"
project = var.project_id
region = var.region
zone = data.google_compute_zones.zones.names[0]
# panorama_version = var.panorama_version
ssh_keys = (file(var.ssh_keys))
network = data.google_compute_network.panorama.self_link
subnet = module.panorama <====== I cannot do module.panorama.id or .name here
private_static_ip = var.private_static_ip
custom_image = var.custom_image_pano
#attach_public_ip = var.attach_public_ip
}
Can anyone tell me what I may be doing wrong? Any help would be appreciated. Thanks!
Edit:
Parent module for the VM instance:
resource "google_compute_instance" "panorama" {
name = var.name
zone = var.zone
machine_type = var.machine_type
min_cpu_platform = var.min_cpu_platform
labels = var.labels
tags = var.tags
project = var.project
can_ip_forward = false
allow_stopping_for_update = true
metadata = merge({
serial-port-enable = true
ssh-keys = var.ssh_keys
}, var.metadata)
network_interface {
/*
dynamic "access_config" {
for_each = var.attach_public_ip ? [""] : []
content {
nat_ip = google_compute_address.public[0].address
}
}
*/
network_ip = google_compute_address.private.address
network = var.network
subnetwork = var.subnet
}
}

I've come across this "var.xxx is object with [n] attributes" issue multiple times, and 9 times out of 10 it has to do with referencing a variable incorrectly. In your case, in the panorama VM module, you're assigning the value of subnet as:
subnet = module.panorama
Now, it's not possible to assign an entire module to an attribute. From your problem statement, I see you're trying to get the subnetwork assigned to subnet, so reference an output of the subnetwork module instead (whatever id or name output ../../../../modules/subnetwork-module actually exports), for example:
subnet = module.panorama_subnet.<id output> OR
subnet = module.panorama_subnet.<name output>
Also, regarding what values can be referenced: the resources defined in a module are encapsulated, so the calling module cannot access their attributes directly. However, the child module can declare output values to selectively export certain values to be accessed by the calling module.
For example, if the ./panorama module referenced below
module "panorama" {
  source = "../../../../modules/panorama"
}
declared an output value named subnet_name inside the module:
output "subnet_name" {
  value = var.subnet
}
OR, WITHOUT SETTING A subnet VALUE:
output "subnet_name" {
  value = var.name
}
then the calling module can reference that result using the expression module.panorama.subnet_name. Hope this helps
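Tying that back to the original configuration: the output belongs in the subnetwork module (the one that actually creates the subnet), and the panorama module call then consumes it. A minimal sketch, assuming the subnetwork module wraps a google_compute_subnetwork resource; the output name subnet_self_link and the internal resource name "this" are illustrative, not the module's real names:
# ../../../../modules/subnetwork-module/outputs.tf (illustrative output name)
output "subnet_self_link" {
  # "this" stands in for whatever the module's google_compute_subnetwork resource is really called
  value = google_compute_subnetwork.this.self_link
}
# Calling configuration: pass the exported value, not the whole module
module "panorama" {
  source = "../../../../modules/panorama"
  # ...other arguments as in the question...
  subnet = module.panorama_subnet.subnet_self_link
}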

Related

Terraform create multiple tags for subnets

I am trying to create a VPC module; here I am facing an issue with private subnets. We have multiple resources like RDS, REDSHIFT, CASSANDRA. I want to create a subnet for each of these resources in each AZ from a single block of code. However, I am unable to figure out how to assign the tags in that case.
resource "aws_subnet" "packages_subnet" {
count = "${length(var.packages_subnet)}"
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${element(var.packages_subnet, count.index)}"
availability_zone = "${element(var.availability_zones, count.index)}"
map_public_ip_on_launch = false
tags = {
Name = "${var.env_name}-${element(var.test, count.index)}-${element(var.availability_zones, count.index)}"
}
}
This is what my vars.tf looks like:
variable "test" {
type = list
default = ["rds","redshift","lambda","emr","cassandra","redis"]
}
With the above approach, the rds subnet is always created in 1a, and redshift in 1b.
module "Networking" {
source = "../modules/Networking"
packages_subnet = ["10.3.4.0/24", "10.3.5.0/24", "10.3.6.0/24", "10.3.10.0/24", "10.3.7.0/24", "10.3.8.0/24", "10.3.9.0/24", "10.3.11.0/24", "10.3.12.0/24", "10.3.13.0/24", "10.3.14.0/24", "10.3.15.0/24", "10.3.16.0/24", "10.3.17.0/24", "10.3.18.0/24"]
}
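One way to get a subnet per resource name per AZ from a single block is to build the full list of (name, AZ) pairs first and then iterate over it. A minimal sketch, assuming Terraform 0.12+ syntax and the variable names from the question; the local name subnet_pairs is illustrative:
locals {
  # every (resource name, AZ) combination, e.g. ["rds", "us-east-1a"], ["rds", "us-east-1b"], ...
  subnet_pairs = tolist(setproduct(var.test, var.availability_zones))
}
resource "aws_subnet" "packages_subnet" {
  count                   = length(local.subnet_pairs)
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = element(var.packages_subnet, count.index)
  availability_zone       = local.subnet_pairs[count.index][1]
  map_public_ip_on_launch = false
  tags = {
    Name = "${var.env_name}-${local.subnet_pairs[count.index][0]}-${local.subnet_pairs[count.index][1]}"
  }
}
This assumes var.packages_subnet contains at least length(var.test) * length(var.availability_zones) CIDR blocks, one per generated subnet.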

gcp redis with authorized_network on a shared subnetwork

I want to create a GCP redis instance in a service project that has a shared subnetwork from a host project shared with it. I don't want the redis instance to be on the top level of the vpc-network, but to be part of a subnetwork on the vpc-network.
So instead of authorized_network equal to:
"projects/infra/global/networks/infra".
I want the authorized_network to be equal to:
"projects/infra/regions/europe-west1/subnetworks/service"
Under the vpc-network -> Shared-VPC tab I can see my subnetwork "service" shared with the service project, and I can see it belongs to the "infra" vpc-network. But when I try to create the instance in the GUI or with Terraform, I can't select the subnetwork, only the top-level vpc-network "infra".
Terraform code I tried but that didn't work:
resource "google_redis_instance" "test" {
auth_enabled = true
authorized_network = "projects/infra/regions/europe-west1/subnetworks/service"
connect_mode = "PRIVATE_SERVICE_ACCESS"
name = "test"
project = local.infra_project_id
display_name = "test"
memory_size_gb = 1
redis_version = "REDIS_6_X"
region = "europe-west1"
}
Terraform code that works, but on the vpc-network, not on a subnetwork:
resource "google_redis_instance" "test" {
auth_enabled = true
authorized_network = "projects/infra/global/networks/infra"
connect_mode = "PRIVATE_SERVICE_ACCESS"
name = "test"
project = local.infra_project_id
display_name = "test"
memory_size_gb = 1
redis_version = "REDIS_6_X"
region = "europe-west1"
}
First of all, is this possible?
Second, what is needed to get it to work?

Deploying resource to multiple regions w/ TF 0.12/13

We have a rather complex environment where we have lots of AWS accounts, in multiple regions and these are all connected to a transit network via VPN tunnels.
At the moment we deploy Customer Gateways via a "VPC" module for each VPC in a region. Deploying the first VPC is fine, but subsequent VPC deploys cause issues because the CGW is already there, so we have to import it before we can continue, which isn't an ideal place to be in. I also think there's a risk that if we tear down a VPC, it might try to kill the CGW that is being used by other VPNs.
What I want to do is deploy the CGWs separately from the VPCs and then have the VPC do a data lookup for the CGW.
I've been thinking that perhaps we can use our "base" job to provision the CGWs that are defined in the variables file, but nothing I've tried has worked so far.
The variable definition would be:
variable "region_data" {
type = list(object({
region = string
deploy_cgw = bool
gateways = any
}))
default = [
{
region = "eu-west-1"
deploy_cgw = true
gateways = [
{
name = "gateway1"
ip = "1.2.3.4"
},
{
name = "gateway2"
ip = "2.3.4.5"
}
]
},
{
region = "us-east-1"
deploy_cgw = true
gateways = [
{
name = "gateway1"
ip = "2.3.4.5"
},
{
name = "gateway2"
ip = "3.4.5.6"
}
]
}
]
}
I've tried a few things, like:
locals {
regions = [for region in var.region_data : region if region.deploy_cgw]
cgws = flatten([
for region in local.regions : [
for gateway in region.gateways : {
region = region.region
name = gateway.name
ip = gateway.ip
}
]
])
}
provider "aws" {
region = "eu-west-1"
alias = "eu-west-1"
}
provider "aws" {
region = "us-east-1"
alias = "us-east-1"
}
module "cgw" {
source = "../../../modules/customer-gateway"
for_each = { for cgw in local.cgws: "${cgw.region}.${cgw.name}" => cgw }
name_tag = each.value.name
ip_address = each.value.ip
providers = {
aws = "aws.${each.value.region}"
}
}
But with this I get:
Error: Invalid provider configuration reference
on main.tf line 439, in module "cgw":
439: aws = "aws.${each.value.region}"
A provider configuration reference must not be given in quotes.
If I move the AWS provider into the module and pass the region as a parameter, I get the following:
Error: Module does not support for_each
on main.tf line 423, in module "cgw":
423: for_each = { for cgw in local.testing : "${cgw.region}.${cgw.name}" => cgw }
Module "cgw" cannot be used with for_each because it contains a nested
provider configuration for "aws", at
I've done quite a bit of research, and I understand the last error is something that Terraform takes a tough stance on.
Is what I'm asking possible?
for_each can't be used on modules that have providers defined within them. I was disappointed to find this out too. They do this because nested providers cause nightmares: if that provider goes away, you have orphaned resources in the state that you can't manage, and your plans will fail. It is, however, entirely possible in https://www.pulumi.com/. I'm sick of the limitations in Terraform and will be moving to Pulumi. But that's not what you asked, so I'll move on.
Definitely don't keep importing it. You'll end up with multiple parts of your Terraform managing the same resource.
Just create the CGW once per region, then pass its id into your VPC module. You can't iterate over providers, so have one module call per provider. In other words, for_each over all VPCs in the same account and same region per module call.
resource "aws_customer_gateway" "east" {
bgp_asn = 65000
ip_address = "172.83.124.10"
type = "ipsec.1"
}
resource "aws_customer_gateway" "west" {
bgp_asn = 65000
ip_address = "172.83.128.10"
type = "ipsec.1"
}
module "east" {
source = "../../../modules/customer-gateway"
for_each = map(
{
name = "east1"
ip = "1.2.3.4"
},
{
name = "east2"
ip = "1.2.3.5"
},
)
name_tag = each.value.name
ip_address = each.value.ip
cgw_id = aws_customer_gateway.east.id
providers = {
aws = "aws.east"
}
}
module "west" {
source = "../../../modules/customer-gateway"
for_each = map(
{
name = "west1"
ip = "1.2.3.4"
},
{
name = "west2"
ip = "1.2.3.5"
},
)
name_tag = each.value.name
ip_address = each.value.ip
cgw_id = aws_customer_gateway.west.id
providers = {
aws = "aws.west"
}
}
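For the providers blocks above to resolve, matching aliased AWS provider configurations need to exist in the root module, along these lines (the regions are illustrative):
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}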

Terraform: Creating GCP Project using Shared VPC

I've been working through this for what feels like an eternity now. The host project already exists and has all the VPNs and networking set up. I am looking to create a new project through Terraform and allow it to use the host project's shared VPC.
Every time I run up against a problem and end up resolving it, I just run up against another one.
Right now I'm seeing:
google_compute_shared_vpc_service_project.project: googleapi: Error 404: The resource 'projects/intacct-staging-db3b7e7a' was not found, notFound
* google_compute_instance.dokku: 1 error(s) occurred:
As well as:
google_compute_instance.dokku: Error loading zone 'europe-west2-a': googleapi: Error 404: Failed to find project intacct-staging, notFound
I was originally convinced it was an ordering problem, which is why I was playing around with depends_on configurations to try and sort out the order. That hasn't seemed to resolve it.
Reading it simply, the service project doesn't exist as far as google_compute_shared_vpc_service_project is concerned, even though I've added the following to google_compute_shared_vpc_service_project:
depends_on = ["google_project.project",
"google_compute_shared_vpc_host_project.host_project",
]
Perhaps, because the host project already exists, I should use data to refer to it instead of resource?
My full TF File is here:
provider "google" {
region = "${var.gcp_region}"
credentials = "${file("./creds/serviceaccount.json")}"
}
resource "random_id" "id" {
byte_length = 4
prefix = "${var.project_name}-"
}
resource "google_project" "project" {
name = "${var.project_name}"
project_id = "${random_id.id.hex}"
billing_account = "${var.billing_account}"
org_id = "${var.org_id}"
}
resource "google_project_services" "project" {
project = "${google_project.project.project_id}"
services = [
"compute.googleapis.com"
]
depends_on = [ "google_project.project" ]
}
# resource "google_service_account" "service-account" {
# account_id = "intacct-staging-service"
# display_name = "Service Account for the intacct staging app"
# }
resource "google_compute_shared_vpc_host_project" "host_project" {
project = "${var.vpc_parent}"
}
resource "google_compute_shared_vpc_service_project" "project" {
host_project = "${google_compute_shared_vpc_host_project.host_project.project}"
service_project = "${google_project.project.project_id}"
depends_on = ["google_project.project",
"google_compute_shared_vpc_host_project.host_project",
]
}
resource "google_compute_address" "dokku" {
name = "fr-intacct-staging-ip"
address_type = "EXTERNAL"
project = "${google_project.project.project_id}"
depends_on = [ "google_project_services.project" ]
}
resource "google_compute_instance" "dokku" {
project = "${google_project.project.name}"
name = "dokku-host"
machine_type = "${var.comp_type}"
zone = "${var.gcp_zone}"
allow_stopping_for_update = "true"
tags = ["intacct"]
# Install Dokku
metadata_startup_script = <<SCRIPT
sed -i 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config && service sshd restart
SCRIPT
boot_disk {
initialize_params {
image = "${var.compute_image}"
}
}
network_interface {
subnetwork = "${var.subnetwork}"
subnetwork_project = "${var.vpc_parent}"
access_config = {
nat_ip = "${google_compute_address.dokku.address}"
}
}
metadata {
sshKeys = "root:${file("./id_rsa.pub")}"
}
}
EDIT:
As discussed below, I was able to resolve the latter "project not found" error by changing the reference to project_id instead of name, as name does not include the random hex.
I'm now also seeing another error, referring to the static IP. The network interface is configured to use the subnetwork from the Host VPC...
network_interface {
subnetwork = "${var.subnetwork}"
subnetwork_project = "${var.vpc_parent}"
access_config = {
nat_ip = "${google_compute_address.dokku.address}"
}
}
The IP is setup here:
resource "google_compute_address" "dokku" {
name = "fr-intacct-staging-ip"
address_type = "EXTERNAL"
project = "${google_project.project.project_id}"
}
The IP should really be in the host project, which I've tried, and when I do I get an error saying that cross-project is not allowed with this resource.
When I change to the above, it also errors, saying that the new project is not capable of handling API calls. Which I suppose would make sense, as I only allowed compute API calls per the google_project_services resource.
I'll try allowing network API calls and see if that works, but I'm thinking the external IP needs to be in the host project's shared VPC?
For anyone encountering the same problem, in my case the project not found error was solved just by enabling the Compute Engine API.

'Not a valid output for module' when using output variable with terraform

I'm trying to set up some IaC for a new project using HashiCorp Terraform on AWS. I'm using modules because I want to be able to reuse stuff across multiple environments (staging, prod, dev, etc.).
I'm struggling to understand where I have to set an output variable within a module, and how I then use that in another module. Any pointers to this would be greatly appreciated!
I need to use some things created in my VPC module (subnet IDs) when creating EC2 machines. My understanding is that you can't reference something from one module in another, so I am trying to use an output variable from the VPC module.
I have the following in my site main.tf
module "myapp-vpc" {
source = "dev/vpc"
aws_region = "${var.aws_region}"
}
module "myapp-ec2" {
source = "dev/ec2"
aws_region = "${var.aws_region}"
subnet_id = "${module.vpc.subnetid"}
}
dev/vpc simply sets some values and uses my vpc module:
module "vpc" {
source = "../../modules/vpc"
aws_region = "${var.aws_region}"
vpc-cidr = "10.1.0.0/16"
public-subnet-cidr = "10.1.1.0/24"
private-subnet-cidr = "10.1.2.0/24"
}
In my vpc main.tf, I have the following at the very end, after the aws_vpc and aws_subnet resources (showing subnet resource):
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.main.id}"
map_public_ip_on_launch = true
availability_zone = "${var.aws_region}a"
cidr_block = "${var.public-subnet-cidr}"
}
output "subnetid" {
value = "${aws_subnet.public.id}"
}
When I run terraform plan I get the following error message:
Error: module 'vpc': "subnetid" is not a valid output for module "vpc"
Outputs need to be passed up through each module explicitly each time.
For example, if you wanted to output a variable to the screen from a module nested below another module, you would need something like this:
child-module.tf
output "child_foo" {
value = "foobar"
}
parent-module.tf
module "child" {
source = "path/to/child"
}
output "parent_foo" {
value = "${module.child.child_foo}"
}
main.tf
module "parent" {
source = "path/to/parent"
}
output "main_foo" {
value = "${module.parent.parent_foo}"
}
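Applied to the layout in the question, the dev/vpc wrapper also has to re-export the inner module's output before the site-level main.tf can see it. A minimal sketch using the names from the question (module.myapp-vpc matches the module name declared in the site main.tf):
# dev/vpc (wrapper): pass the inner vpc module's output up
output "subnetid" {
  value = "${module.vpc.subnetid}"
}
# site main.tf: reference the wrapper module by the name it was declared with
module "myapp-ec2" {
  source     = "dev/ec2"
  aws_region = "${var.aws_region}"
  subnet_id  = "${module.myapp-vpc.subnetid}"
}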