GCP Terraform Forwarding Rule Target - google-cloud-platform

Hey Terraform friends,
Trying to navigate my way through some basic load balancing in GCP. It's my first time doing it, so I'm still picking up on the nuances between the various types and environments.
My goal is to have a target pool of hosts be the destination for connections made to the forwarding rule, with whatever load-balancing policy the business needs.
The setup is a bit awkward: I have the project I'm doing the work in, but the destination subnetworks live in a shared VPC in another project. According to the powers that be who implemented it, my project has permission to use that subnetwork, which so far has proven accurate (I've deployed compute instances to it).
Here's my code:
# target pool
resource "google_compute_target_pool" "target_pool" {
  name             = "${var.name}-pool"
  instances        = var.instances
  session_affinity = var.session_affinity
}

resource "google_compute_address" "lb_address" {
  name         = "${var.name}-load-balancer"
  address_type = "INTERNAL"
  address      = "10.129.48.250" // hard-coded while I figure this out
  subnetwork   = data.google_compute_subnetwork.gcp_subnetwork.name
  project      = var.subnetwork_project
}
# load balancer
resource "google_compute_forwarding_rule" "basic_load_balancer" {
  name                  = var.name
  target                = google_compute_target_pool.target_pool.id
  ip_address            = google_compute_address.lb_address.address
  port_range            = join(",", var.port_range)
  labels                = var.labels
  subnetwork            = data.google_compute_subnetwork.gcp_subnetwork.name
  project               = var.subnetwork_project
  load_balancing_scheme = var.lb_scheme
}
With the relevant data:
data "google_compute_subnetwork" "gcp_subnetwork" {
name = var.subnetwork
project = var.subnetwork_project
}
And variables:
variable "name" {
description = "Name of the application usage"
type = string
}
variable "port_range" {
description = "List of ports to load balance"
}
variable "port_range_health" {
description = "List of ports to health check"
type = list(string)
default = [ "" ]
}
variable "instances" {
description = "List of Instances by Zone/Name value"
type = list(string)
}
variable "backup_instances" {
description = "List of Instances by Zone/Name value for backup use"
type = list(string)
default = []
}
variable "session_affinity" {
description = "Load balancer session affinity type"
type = string
default = "NONE"
}
variable "labels" {
description = "Labels to apply to the load balancer"
type = map
default = {}
}
variable "lb_scheme" {
description = "What the LB Forwarding rule is to be used for"
type = string
default = "INTERNAL"
}
variable "subnetwork" {
description = "What network the load balancer should be deployed into"
type = string
}
variable "subnetwork_project" {
description = "Project that contains the subnetwork to deploy into"
type = string
default = ""
}
variable "project" {
description = "Project supplied by user"
type = string
}
My subnetwork_project is set to the project ID of my shared VPC project, and I specify the name of the subnetwork.
And when I try to apply, I keep getting hit with an error message about the structure of the target:
Error: Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.target': 'projects/insight-dev-272215/regions/us-central1/targetPools/insight-dev-pool'. The URL is malformed. Must be a valid In-Project Target Proxy URL or a supported Google API bundle., invalid
│
│ with module.insight_solr_lb.google_compute_forwarding_rule.basic_load_balancer,
│ on ..\..\..\modules\load_balancer\gcp\basic\main.tf line 29, in resource "google_compute_forwarding_rule" "basic_load_balancer":
│ 29: resource "google_compute_forwarding_rule" "basic_load_balancer" {
│
I've beaten my head against this a few times, but I'm not making much progress. Can anyone experienced with GCP forwarding rules see my issue?
I've tried passing the URI of the target pool to the target, but it didn't like that either.
Thanks!
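Judging from the error text ("Must be a valid In-Project Target Proxy URL or a supported Google API bundle"), the problem is less the pool's URL than the combination of settings: with load_balancing_scheme = "INTERNAL", a forwarding rule cannot point at a target pool at all. Target pools belong to the external network load balancer; internal TCP/UDP load balancers take a regional backend service via backend_service instead of target. Below is a sketch of the internal variant, not a drop-in replacement: it assumes an unmanaged instance group wraps var.instances, and the instance group and health check names are illustrative, not from the question.

resource "google_compute_health_check" "lb" {
  name = "${var.name}-hc"

  tcp_health_check {
    port = 80 # illustrative port
  }
}

resource "google_compute_region_backend_service" "lb" {
  name          = "${var.name}-backend"
  project       = var.project
  health_checks = [google_compute_health_check.lb.id]

  backend {
    # An unmanaged instance group containing var.instances (not shown)
    group = google_compute_instance_group.lb.self_link
  }
}

resource "google_compute_forwarding_rule" "basic_load_balancer" {
  name                  = var.name
  project               = var.project # service project, same as the backend service
  backend_service       = google_compute_region_backend_service.lb.id
  ip_address            = google_compute_address.lb_address.address
  ports                 = var.port_range # INTERNAL rules take a list of up to five ports, not a comma-joined port_range
  labels                = var.labels
  subnetwork            = data.google_compute_subnetwork.gcp_subnetwork.self_link # full self_link keeps the shared-VPC host project unambiguous
  load_balancing_scheme = "INTERNAL"
}

If the target pool is meant to stay (i.e. an external network load balancer), the scheme should be EXTERNAL and the subnetwork argument dropped. Either way, a forwarding rule can only target a pool in its own project, so creating the rule with project = var.subnetwork_project while the pool sits in the working project would still fail.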

Related

Terraform referenced module has no attributes error

I am trying to use a Palo Alto Networks module to deploy a Panorama VM instance to GCP with Terraform. In the example module, they create a VPC together with a subnetwork; however, I have an existing VPC I am adding to. So I data-source the VPC and create the subnetwork with a module. Upon referencing this subnetwork in my VM instance module, it complains that it has no attributes:
Error: Incorrect attribute value type
on ../../../../modules/panorama/main.tf line 67, in resource "google_compute_instance" "panorama":
67: subnetwork = var.subnet
|----------------
| var.subnet is object with no attributes
Here is the subnet code:
data "google_compute_network" "panorama" {
project = var.project_id
name = "fed-il4-p-net-panorama"
}
module "panorama_subnet" {
source = "../../../../modules/subnetwork-module"
subnet_name = "panorama-${var.region_short_name[var.region]}"
subnet_ip = var.panorama_subnet
subnet_region = var.region
project_id = var.project_id
network = data.google_compute_network.panorama.self_link
}
Here is the panorama VM instance code:
module "panorama" {
source = "../../../../modules/panorama"
name = "${var.project_id}-panorama-${var.region_short_name[var.region]}"
project = var.project_id
region = var.region
zone = data.google_compute_zones.zones.names[0]
*# panorama_version = var.panorama_version
ssh_keys = (file(var.ssh_keys))
network = data.google_compute_network.panorama.self_link
subnet = module.panorama <====== I cannot do module.panorama.id or .name here
private_static_ip = var.private_static_ip
custom_image = var.custom_image_pano
#attach_public_ip = var.attach_public_ip
}
Can anyone tell me what I may be doing wrong? Any help would be appreciated. Thanks!
Edit:
Parent module for the VM instance:
resource "google_compute_instance" "panorama" {
name = var.name
zone = var.zone
machine_type = var.machine_type
min_cpu_platform = var.min_cpu_platform
labels = var.labels
tags = var.tags
project = var.project
can_ip_forward = false
allow_stopping_for_update = true
metadata = merge({
serial-port-enable = true
ssh-keys = var.ssh_keys
}, var.metadata)
network_interface {
/*
dynamic "access_config" {
for_each = var.attach_public_ip ? [""] : []
content {
nat_ip = google_compute_address.public[0].address
}
}
*/
network_ip = google_compute_address.private.address
network = var.network
subnetwork = var.subnet
}
I've come across this var.xxx is object with [n] attributes issue multiple times, and 9 times out of 10 it has to do with referencing a variable incorrectly. In your case, in the panorama VM module, you're assigning the value of subnet as:
subnet = module.panorama
Now, it's not possible to assign an entire module to an attribute in a module call. From your problem statement, I see you're trying to get the subnetwork's name assigned to subnet. That value has to come through an output of the module that created the subnetwork, along the lines of:
subnet = module.panorama_subnet.subnet_name
(or whatever output, such as the name or self_link, your subnetwork module exports).
Also, regarding which values can be referenced: the resources defined in a module are encapsulated, so the calling module cannot access their attributes directly. However, the child module can declare output values to selectively export certain values to be accessed by the calling module.
For example, if the ./panorama module referenced in the example declared an output value named subnet_name in its own configuration:

output "subnet_name" {
  value = var.subnet
}

OR, WITHOUT SETTING THE subnet VALUE:

output "subnet_name" {
  value = var.name
}

then the calling module can reference that result using the expression module.panorama.subnet_name. Hope this helps
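Applied to the question's layout, that would mean exporting the created subnetwork from the subnetwork module and referencing that export in the VM module call. A sketch, where the resource name "this" and the output name are illustrative since the module's internals aren't shown:

# in modules/subnetwork-module, assuming its subnetwork resource is named "this"
output "self_link" {
  value = google_compute_subnetwork.this.self_link
}

and then in the calling configuration:

subnet = module.panorama_subnet.self_link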

Terraform aws_wafv2_ip_set delete ip on apply

I have a resource aws_wafv2_ip_set that is used by many different modules.
variable "addresses" {
type = set(string)
default = []
}
resource "aws_wafv2_ip_set" "ip_set" {
ip_address_version = "IPV4"
name = var.name
scope = "REGIONAL"
addresses = var.addresses
}
I need to create different IP sets that will be filled by a dynamic script from our admin section or directly from the AWS console (not from Terraform).
The problem is that every single apply detects that the IP set is not empty (unlike the addresses variable), and so it deletes every IP address that was added via the console or the script.
How can I manage aws_wafv2_ip_set without deleting those IP addresses on apply?
Thank you
According to the docs, addresses is an array of strings and is required.
Why don't you just go with the tf example:
resource "aws_wafv2_ip_set" "ip_set" {
name = "example"
description = "Example IP set"
scope = "REGIONAL"
ip_address_version = "IPV4"
addresses = ["YOUR_IP_1", "YOUR_IP_2"]
}
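If the goal is for Terraform to create the IP set but leave its contents to the external script and console, another option (not in the answer above, but standard Terraform) is to tell Terraform to ignore drift on the addresses argument with a lifecycle block. A sketch based on the question's resource:

resource "aws_wafv2_ip_set" "ip_set" {
  ip_address_version = "IPV4"
  name               = var.name
  scope              = "REGIONAL"
  addresses          = var.addresses # used only at creation time

  lifecycle {
    # Don't revert addresses that were added or removed outside
    # Terraform (admin-section script or AWS console).
    ignore_changes = [addresses]
  }
}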

googleapi: Error 400: Precondition check failed., failedPrecondition while creating Cloud Composer Environment through Terraform

I'm trying to create a Cloud Composer environment through Terraform and getting this error:
googleapi: Error 400: Precondition check failed., failedPrecondition while creating Cloud Composer Environment through Terraform
The service account of the VM from which I'm trying to create the composer environment has owner permissions in the GCP project.
I have tried the same composer configuration from the GCP console and the environment got created without any issues.
I have tried disabling the Cloud Composer API and enabling it once again, yet no solution.
On the very first terraform apply, it tried to create the composer environment but failed with a version error, so I changed the image version of composer. Now I'm facing this issue. Can anyone help?
Error message from terminal
composer/main.tf
resource "google_composer_environment" "etl_env" {
provider = google-beta
name = var.env_name
region = var.region
config {
node_count = 3
node_config {
zone = var.zone
machine_type = var.node_machine_type
network = var.network
subnetwork = var.app_subnet_selflink
ip_allocation_policy {
use_ip_aliases = true
}
}
software_config {
image_version = var.software_image_version
python_version = 3
}
private_environment_config {
enable_private_endpoint = false
}
database_config {
machine_type = var.database_machine_type
}
web_server_config {
machine_type = var.web_machine_type
}
}
}
composer/variables.tf
variable "app_subnet_selflink" {
type = string
description = "App Subnet Selflink"
}
variable "region" {
type = string
description = "Region"
default = "us-east4"
}
variable "zone" {
type = string
description = "Availability Zone"
default = "us-east4-c"
}
variable "network" {
type = string
description = "Name of the network"
}
variable "env_name" {
type = string
default = "composer-etl"
description = "The name of the composer environment"
}
variable "node_machine_type" {
type = string
default = "n1-standard-1"
description = "The machine type of the worker nodes"
}
variable "software_image_version" {
type = string
default = "composer-1.15.2-airflow-1.10.14"
description = "The image version used in the software configurations of composer"
}
variable "database_machine_type" {
type = string
default = "db-n1-standard-2"
description = "The machine type of the database instance"
}
variable "web_machine_type" {
type = string
default = "composer-n1-webserver-2"
description = "The machine type of the web server instance"
}
Network and Subnetwork are referenced from another module and they are correct.
The issue is likely with the master_ipv4_cidr_block range. If left blank, the default value of 172.16.0.0/28 is used. Since you already created an environment manually, that range is already in use, so pick a different range.
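In this configuration, that means setting an explicit, unused range inside the existing private_environment_config block. A sketch, where 172.16.0.32/28 is just an example of a free /28:

private_environment_config {
  enable_private_endpoint = false
  # Explicit /28 for the GKE master, instead of the 172.16.0.0/28 default
  # already taken by the manually created environment.
  master_ipv4_cidr_block = "172.16.0.32/28"
}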

Terraform Resource attribute not being removed when passing in empty values

I am working with a GCP Cloud Composer resource and added a dynamic block to create an attribute on the resource that sets allowed_ip_ranges, which can be used as an IP filter for accessing the Apache Airflow web UI.
I was able to get the allowed ranges set up and can update them in place to new values as well.
If I pass in a blank list, I expect the IP address(es) to be removed as attributes of the resource, but Terraform seems to think that no changes are needed.
There is probably something wrong in my code, but I am not sure what exactly I would need to do. Does it involve adding a conditional expression to the for_each loop in the dynamic block?
Child module main.tf
web_server_network_access_control {
  dynamic "allowed_ip_range" {
    for_each = var.allowed_ip_range
    content {
      value       = allowed_ip_range.value["value"]
      description = allowed_ip_range.value["description"]
    }
  }
}
Child module variables.tf
variable "allowed_ip_range" {
description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI."
type = list(map(string))
default = []
}
Parent module terraform.tfvars
allowed_ip_range = [
  {
    value       = "11.0.0.2/32"
    description = "Test dynamic block 1"
  },
]
You can set the default value in your variables.tf file:

variable "allowed_ip_range" {
  description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI"
  type        = list(map(string))
  default = [
    {
      value       = "0.0.0.0/0"
      description = "Allows access from all IPv4 addresses (default value)"
    },
    {
      value       = "::0/0"
      description = "Allows access from all IPv6 addresses (default value)"
    },
  ]
}
And when you delete the variable from terraform.tfvars, the default values will be used.
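Alternatively, to answer the question about a conditional expression in the for_each: the empty incoming list can fall back to GCP's wide-open defaults at the dynamic block itself. A sketch, where the local name is my own choice:

locals {
  # GCP's "allow everything" defaults, used when no explicit ranges are passed
  default_ip_ranges = [
    { value = "0.0.0.0/0", description = "Allows access from all IPv4 addresses (default value)" },
    { value = "::0/0", description = "Allows access from all IPv6 addresses (default value)" },
  ]
}

web_server_network_access_control {
  dynamic "allowed_ip_range" {
    # Fall back to the defaults whenever the caller passes an empty list
    for_each = length(var.allowed_ip_range) > 0 ? var.allowed_ip_range : local.default_ip_ranges
    content {
      value       = allowed_ip_range.value["value"]
      description = allowed_ip_range.value["description"]
    }
  }
}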

Fails with Health check error in GCP composer using terraform

I was trying to create a Cloud Composer environment in GCP using Terraform v0.12.5, but I am unable to launch an instance with it.
I am getting the following error:
Error: Error waiting to create Environment: Error waiting for Creating Environment: Error code 3, message: Http error status code: 400
Http error message: BAD REQUEST
Additional errors:
{"ResourceType":"appengine.v1.version","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Legacy health checks are no longer supported for the App Engine Flexible environment. Please remove the 'health_check' section from your app.yaml and configure updated health checks. For instructions on migrating to split health checks see https://cloud.google.com/appengine/docs/flexible/java/migrating-to-split-health-checks","status":"INVALID_ARGUMENT","details":[],"statusMessage":"Bad Request","requestPath":"https://appengine.googleapis.com/v1/apps/qabc39fc336994cc4-tp/services/default/versions","httpMethod":"POST"}}
main.tf
resource "google_composer_environment" "sample-composer" {
provider= google-beta
project = "${var.project_id}"
name = "${var.google_composer_environment_name}"
region = "${var.region}"
config {
node_count = "${var.composer_node_count}"
node_config {
zone = "${var.zone}"
disk_size_gb = "${var.disk_size_gb}"
machine_type = "${var.composer_machine_type}"
network = google_compute_network.xxx-network.self_link
subnetwork = google_compute_subnetwork.xxx-subnetwork.self_link
}
software_config {
env_variables = {
AIRFLOW_CONN_SAMPLEMEDIA_FTP_CONNECTION = "ftp://${var.ftp_user}:${var.ftp_password}#${var.ftp_host}"
}
image_version = "${var.composer_airflow_version}"
python_version = "${var.composer_python_version}"
}
}
}
resource "google_compute_network" "sample-network" {
name = "composer-xxx-network"
project = "${var.project_id}"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "sample-subnetwork" {
name = "composer-xxx-subnetwork"
project = "${var.project_id}"
ip_cidr_range = "10.2.0.0/16"
region = "${var.region}"
network = google_compute_network.xxx-network.self_link
}
variables.tf
# Machine specific information for creating Instance in GCP
variable "project_id" {
  description = "The name of GCP project"
  default     = "sample-test"
}

variable "google_composer_environment_name" {
  description = "The name of the instance"
  default     = "sample-analytics-dev"
}

variable "region" {
  description = "The name of GCP region"
  default     = "europe-west1"
}

variable "composer_node_count" {
  description = "The number of nodes"
  default     = "3"
}

variable "zone" {
  description = "The zone in which the instance is to be launched"
  default     = "europe-west1-c"
}

variable "disk_size_gb" {
  description = "The machine size in GB"
  default     = "100"
}

variable "composer_machine_type" {
  description = "The type of machine to be launched in GCP"
  default     = "n1-standard-1"
}

# Environmental Variables
variable "ftp_user" {
  description = "Environmental variables for FTP user"
  default     = "test"
}

variable "ftp_password" {
  description = "Environmental variables for FTP password"
  default     = "4444erf"
}

variable "ftp_host" {
  description = "Environmental variables for FTP host"
  default     = "sample.logs.llnw.net"
}

# Versions for Cloud Composer, Airflow and Python
variable "composer_airflow_version" {
  description = "The composer and airflow versions to launch instance in GCP"
  default     = "composer-1.7.2-airflow-1.10.2"
}

variable "composer_python_version" {
  description = "The version of python"
  default     = "3"
}

# Network information
variable "composer_network_name" {
  description = "Name of the composer network"
  default     = "composer-xxx-network"
}

variable "composer_subnetwork_name" {
  description = "Name of the composer subnetwork"
  default     = "composer-xxx-subnetwork"
}
Creating Composer via the GCP console works without any issues; when creating it using Terraform, it fails with this health check error.
I've tested your current use case within my GCP Cloud Shell Terraform binary, and so far no issue occurred; the Composer environment was successfully created:
$ terraform -v
Terraform v0.12.9
+ provider.google v3.1.0
+ provider.google-beta v3.1.0
A few concerns from my side:
The issue you've reported might be related to the usage of legacy health checks, which are essentially deprecated and replaced by split health checks:
  As of September 15, 2019, if you're using the legacy health checks,
  your application will continue to run and receive health checks but
  you won't be able to deploy new versions of your application.
You've not specified the version of your Terraform GCP provider, and I suppose the issue may be hidden there: as seen in this Changelog, split_health_checks in google_app_engine_application.feature_settings have been enabled since release 3.0.0-beta.1.
Feel free to add some more details in order to help resolve the current issue.
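If the provider does turn out to be pre-3.0, one way to rule that out under Terraform 0.12 is to pin both Google providers to a 3.x release (version numbers taken from the output above) and re-run terraform init. A sketch:

provider "google" {
  version = "~> 3.1" # pin to a release with split health checks enabled
  project = var.project_id
  region  = var.region
}

provider "google-beta" {
  version = "~> 3.1"
  project = var.project_id
  region  = var.region
}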