gcp redis with authorized_network on a shared subnetwork - google-cloud-platform

I want to create a GCP Redis instance in a service project that has a shared subnetwork from a host project shared with it. I don't want the Redis instance to be attached at the top level of the VPC network; I want it to be part of a subnetwork of that VPC network.
So instead of authorized_network equal to:
"projects/infra/global/networks/infra".
I want the authorized_network to be equal to:
"projects/infra/regions/europe-west1/subnetworks/service"
Under the VPC network -> Shared VPC tab I can see my subnetwork "service" shared with the service project, and I can see it belongs to the "infra" VPC network. But when I try to create the instance in the GUI or with Terraform, I can only select the top-level "infra" VPC network, not the subnetwork.
Terraform code I tried but that didn't work:
resource "google_redis_instance" "test" {
auth_enabled = true
authorized_network = "projects/infra/regions/europe-west1/subnetworks/service"
connect_mode = "PRIVATE_SERVICE_ACCESS"
name = "test"
project = local.infra_project_id
display_name = "test"
memory_size_gb = 1
redis_version = "REDIS_6_X"
region = "europe-west1"
}
Terraform code that works, but on the VPC network, not on a subnetwork:
resource "google_redis_instance" "test" {
auth_enabled = true
authorized_network = "projects/infra/global/networks/infra"
connect_mode = "PRIVATE_SERVICE_ACCESS"
name = "test"
project = local.infra_project_id
display_name = "test"
memory_size_gb = 1
redis_version = "REDIS_6_X"
region = "europe-west1"
}
First off, is this possible?
Second, what is needed to get it to work?

Related

Terraform referenced module has no attributes error

I am trying to use a Palo Alto Networks module to deploy a panorama VM instance to GCP with Terraform. In the example module, I see they create a VPC together with a subnetwork, however, I have an existing VPC I am adding to. So I data source the VPC and create the subnetwork with a module. Upon referencing this subnetwork in my VM instance module, it complains it has no attributes:
Error: Incorrect attribute value type
on ../../../../modules/panorama/main.tf line 67, in resource "google_compute_instance" "panorama":
67: subnetwork = var.subnet
|----------------
| var.subnet is object with no attributes
Here is the subnet code:
data "google_compute_network" "panorama" {
project = var.project_id
name = "fed-il4-p-net-panorama"
}
module "panorama_subnet" {
source = "../../../../modules/subnetwork-module"
subnet_name = "panorama-${var.region_short_name[var.region]}"
subnet_ip = var.panorama_subnet
subnet_region = var.region
project_id = var.project_id
network = data.google_compute_network.panorama.self_link
}
Here is the panorama VM instance code:
module "panorama" {
source = "../../../../modules/panorama"
name = "${var.project_id}-panorama-${var.region_short_name[var.region]}"
project = var.project_id
region = var.region
zone = data.google_compute_zones.zones.names[0]
*# panorama_version = var.panorama_version
ssh_keys = (file(var.ssh_keys))
network = data.google_compute_network.panorama.self_link
subnet = module.panorama <====== I cannot do module.panorama.id or .name here
private_static_ip = var.private_static_ip
custom_image = var.custom_image_pano
#attach_public_ip = var.attach_public_ip
}
Can anyone tell me what I may be doing wrong? Any help would be appreciated. Thanks!
Edit:
Parent module for the VM instance:
resource "google_compute_instance" "panorama" {
name = var.name
zone = var.zone
machine_type = var.machine_type
min_cpu_platform = var.min_cpu_platform
labels = var.labels
tags = var.tags
project = var.project
can_ip_forward = false
allow_stopping_for_update = true
metadata = merge({
serial-port-enable = true
ssh-keys = var.ssh_keys
}, var.metadata)
network_interface {
/*
dynamic "access_config" {
for_each = var.attach_public_ip ? [""] : []
content {
nat_ip = google_compute_address.public[0].address
}
}
*/
network_ip = google_compute_address.private.address
network = var.network
subnetwork = var.subnet
}
I've come across this issue (var.xxx is object with [n] attributes) multiple times, and 9 times out of 10 it has to do with referencing a variable incorrectly. In your case, in the panorama VM module call, you're assigning the value of subnet as:
subnet = module.panorama
Now, it's not possible to assign a whole module to an attribute within that same module. From your problem statement, I see you're trying to get the subnet's name assigned to subnet. Try referencing an output of the subnet module instead, for example:
subnet = module.panorama_subnet.id
OR
subnet = module.panorama_subnet.name
(using whichever output the subnetwork module actually exposes).
Also, regarding what values can be referenced: the resources defined in a module are encapsulated, so the calling module cannot access their attributes directly. However, the child module can declare output values to selectively export certain values to be accessed by the calling module.
For example, with the panorama module called as

module "panorama" {
  source = "../../../../modules/panorama"
  # ...
}

if the ./panorama module declared an output value named subnet_name:

output "subnet_name" {
  value = var.subnet
}

or, without setting a subnet value:

output "subnet_name" {
  value = var.name
}

then the calling module can reference that result using the expression module.panorama.subnet_name. Hope this helps.
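As a concrete sketch in the asker's layout (the resource and output names inside ../../../../modules/subnetwork-module are assumptions, since that module's source isn't shown):

# Inside modules/subnetwork-module: create the subnetwork and export a handle to it.
resource "google_compute_subnetwork" "this" {
  name          = var.subnet_name
  ip_cidr_range = var.subnet_ip
  region        = var.subnet_region
  project       = var.project_id
  network       = var.network
}

output "self_link" {
  value = google_compute_subnetwork.this.self_link
}

# In the root configuration, pass that output (not the whole module object) to the VM module:
module "panorama" {
  source = "../../../../modules/panorama"
  # ...
  subnet = module.panorama_subnet.self_link
}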

Terraform - re-use an existing subnetwork to create a cloud sql instance on GCP

I am attempting to create a Cloud SQL instance on GCP using Terraform. I want to use an existing VPC subnetwork created in an earlier step, but there does not seem to be a way to refer to it. Instead, all examples seem to require a new IP range to be set up. This is my current code that creates the new IP range:
resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  project       = "project_name"
  name          = "private_range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 18
  network       = "projects/project_name/global/networks/vpc_name"
  address       = "192.168.128.0"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider = google-beta
network = "projects/project_name/global/networks/vpc_name"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
resource "google_sql_database_instance" "instance" {
provider = google-beta
project = "project_name"
name = "db-instance10"
region = "us-east1"
database_version = "MYSQL_5_7"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = false
private_network = "projects/project_name/global/networks/vpc_name"
}
}
}
provider "google-beta" {
region = "us-east1"
zone = "us-east1-c"
}
When I specify the exact same IP range as the existing subnet, I receive the error:
Error: Error waiting to create GlobalAddress: Error waiting for Creating GlobalAddress: Requested range conflicts with other resources: The provided IP range overlaps with existing subnetwork IP range.
There does not seem to be any obvious way to refer to the existing subnetwork resource as the reserved_peering_ranges parameter only seems to accept the name of an IP address range resource.
Here is the resource specification for the existing subnetwork:
resource "google_compute_subnetwork" "vpc_subnet_name" {
  creation_timestamp       = "2020-06-03T07:28:05.762-07:00"
  enable_flow_logs         = true
  fingerprint              = "ied1TiEZjgc="
  gateway_address          = "192.168.128.1"
  id                       = "us-east1/vpc_subnet_name"
  ip_cidr_range            = "192.168.128.0/18"
  name                     = "vpc_subnet_name"
  network                  = "https://www.googleapis.com/compute/v1/projects/project_name/global/networks/vpc_name"
  private_ip_google_access = true
  project                  = "project_name"
  region                   = "us-east1"
  secondary_ip_range       = []
  self_link                = "https://www.googleapis.com/compute/v1/projects/project_name/regions/us-east1/subnetworks/vpc_subnet_name"

  log_config {
    aggregation_interval = "INTERVAL_5_SEC"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}
Connecting to a Cloud SQL instance through a private IP requires configuring private services access, which uses an allocated IP address range that must not overlap with any existing VPC subnet.
The private connection links your VPC network with the service's VPC network. This connection allows VM instances in your VPC network to use internal IP addresses to reach service resources, for example a Cloud SQL instance that has internal IP addresses.
Once created, the allocated IP address range and the connection can be reused with other services.
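A minimal sketch of that wiring against the network from the question (the data source lookup and the 10.10.0.0/20 range are assumptions; the point is that the allocated range is a new, non-overlapping block rather than the existing subnet's 192.168.128.0/18):

data "google_compute_network" "vpc" {
  provider = google-beta
  project  = "project_name"
  name     = "vpc_name"
}

# Allocate a dedicated range for private services access; it must not overlap
# any existing subnetwork range, so it cannot be 192.168.128.0/18 here.
resource "google_compute_global_address" "psa_range" {
  provider      = google-beta
  project       = "project_name"
  name          = "psa-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  address       = "10.10.0.0"
  prefix_length = 20
  network       = data.google_compute_network.vpc.self_link
}

resource "google_service_networking_connection" "private_vpc_connection" {
  provider                = google-beta
  network                 = data.google_compute_network.vpc.self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.psa_range.name]
}

# The Cloud SQL instance then sets private_network to the same VPC self_link;
# it never references the existing subnetwork directly, and its private IP
# comes out of the allocated range above.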
You can use the module below to create the Cloud SQL instance with an existing private VPC, but you need to modify it according to your network. In this scenario I had created a separate private network and created the Cloud SQL instance using that network.
https://github.com/gruntwork-io/terraform-google-sql
Get the existing network in your cloud infrastructure from which you want to create your Cloud SQL instance; the command below lists the network URIs:
gcloud compute networks list --uri
You need to put that network wherever the self link is mentioned and comment out the steps where the VPC is getting created. Please refer to the main.tf file below.
The location of this file is: Cloud_SQL.terraform\modules\sql_example_postgres-private-ip\examples\postgres-private-ip\main.tf
Add the variables accordingly.
# ------------------------------------------------------------------------------
# LAUNCH A POSTGRES CLOUD SQL PRIVATE IP INSTANCE
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# CONFIGURE OUR GCP CONNECTION
# ------------------------------------------------------------------------------

provider "google-beta" {
  project = var.project
  region  = var.region
}

terraform {
  # This module is now only being tested with Terraform 0.14.x. However, to make upgrading easier, we are setting
  # 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
  # forwards compatible with 0.14.x code.
  required_version = ">= 0.12.26"

  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 3.57.0"
    }
  }
}

# ------------------------------------------------------------------------------
# CREATE A RANDOM SUFFIX AND PREPARE RESOURCE NAMES
# ------------------------------------------------------------------------------

resource "random_id" "name" {
  byte_length = 2
}

####################################################################
# Reserve global internal address range for the peering
resource "google_compute_global_address" "private_ip_address" {
  provider = google-beta
  # name   = local.private_ip_name
  name          = var.vpc_network
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  # network = google_compute_network.private_network.self_link
  # network = google_compute_network.vpc_network.self_link
  network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc/"
}

# Establish VPC network peering connection using the reserved address range
resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta
  # network = google_compute_network.private_network.self_link
  network                 = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

# ------------------------------------------------------------------------------
# CREATE DATABASE INSTANCE WITH PRIVATE IP
# ------------------------------------------------------------------------------

module "postgres" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  # source = "github.com/gruntwork-io/terraform-google-sql.git//modules/cloud-sql?ref=v0.2.0"
  source = "../../modules/cloud-sql"

  project = var.project
  region  = var.region
  name    = var.instance_name
  db_name = var.db_name

  engine       = var.postgres_version
  machine_type = var.machine_type

  # To make it easier to test this example, we are disabling deletion protection so we can destroy the databases
  # during the tests. By default, we recommend setting deletion_protection to true, to ensure database instances are
  # not inadvertently destroyed.
  deletion_protection = false

  # These together will construct the master_user privileges, i.e.
  # 'master_user_name'@'master_user_host' IDENTIFIED BY 'master_user_password'.
  # These should typically be set as the environment variable TF_VAR_master_user_password, etc.
  # so you don't check these into source control."
  master_user_password = var.master_user_password
  master_user_name     = var.master_user_name
  master_user_host     = "%"

  # Pass the private network link to the module
  # private_network = google_compute_network.private_network.self_link
  private_network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"

  # Wait for the vpc connection to complete
  dependencies = [google_service_networking_connection.private_vpc_connection.network]

  custom_labels = {
    test-id = "postgres-private-ip-example"
  }
}

Trying to create 2 ASGs in one terraform file

I'm trying to create a launch configuration, an ELB and 2 ASGs. I guess one ELB is fine for 2 ASGs (I'm not sure).
So I have the launch configuration and ASG code in one file calling the ASG module. My question is, can I create 2 ASGs using a single Terraform file, or with separate files in a single repo?
Also, please let me know if this is a good configuration.
When I tried to put two different files calling the same module, I got the following error:
Error downloading modules: Error loading modules: module asg: duplicated. module names must be unique
My Terraform code:
auto_scaling.tf
resource "aws_launch_configuration" "launch_config" {
image_id = "${var.ec2ami_id}"
instance_type = "${var.ec2_instance_type}"
security_groups = ["${aws_security_group.*******.id}"]
key_name = "${var.keypair}"
lifecycle {
create_before_destroy = true
}
}
module "asg" {
source = ****
name = "*****"
environment = "***"
service = "****"
product = "**"
team = "****"
owner = "*****"
ami = "${var.ec2_id}"
#instance_profile = "******"
instance_type = "t2.micro"
ebs_optimized = true
key_name = "${var.keypair}"
security_group = ["${aws_security_group.****.id}"]
user_data = "${path.root}/blank_user_data.sh"
load_balancer_names = "${module.elb.elb_name}"
associate_public_ip = false
asg_instances = 2
asg_min_instances = 2
asg_max_instances = 4
root_volume_size = 250
asg_wait_for_capacity_timeout = "5m"
vpc_zone_subnets = "${module.vpc.private_subnets}"
}
###elb.tf###
module "elb" {
source = "*****"
name = "***elb"
subnet_ids = "${element(split(",",
module.vpc.private_subnets), 0)}"
security_groups = "${aws_security_group.****.id}"
s3_access_logs_bucket = "****"
}
I want to create 2 ASGs in one subnet.
You can reuse your asg module - just give the two module instances different names, e.g.:
module "asg1" {
...
}
module "asg2" {
...
}
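For instance, with the module call from the question, the two blocks would differ only in their labels and any per-ASG values; everything else stays the same (the redacted source and names below are placeholders, exactly as in the question):

module "asg_a" {
  source              = "****"                            # same ASG module source as the original "asg" call
  name                = "*****-a"                         # per-ASG name, must be unique
  load_balancer_names = "${module.elb.elb_name}"          # both ASGs can register with the single ELB
  vpc_zone_subnets    = "${module.vpc.private_subnets}"
  # ...remaining arguments identical to the original "asg" module call...
}

module "asg_b" {
  source              = "****"
  name                = "*****-b"
  load_balancer_names = "${module.elb.elb_name}"
  vpc_zone_subnets    = "${module.vpc.private_subnets}"
  # ...remaining arguments identical to the original "asg" module call...
}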

How to fix "An Unknown Error Occurred" when creating multiple Google Cloud SQL instances with private IP simultaneously?

Our cloud backend setup contains 5 Cloud SQL for Postgres instances. We manage our infrastructure using Terraform. We are currently connecting to them from GKE using a public IP and the Cloud SQL proxy container.
In order to simplify our setup we wish to get rid of the proxy containers by moving to a private IP. I tried following the Terraform guide. While creating a single instance works fine, trying to create 5 instances simultaneously ends with 4 failed ones and one successful:
The error which appears in the Google Cloud Console on the failed instances is "An Unknown Error occurred":
Following is the code which reproduces it. Pay attention to the count = 5 line:
resource "google_compute_network" "private_network" {
provider = "google-beta"
name = "private-network"
}
resource "google_compute_global_address" "private_ip_address" {
provider = "google-beta"
name = "private-ip-address"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = "${google_compute_network.private_network.self_link}"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider = "google-beta"
network = "${google_compute_network.private_network.self_link}"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"]
}
resource "google_sql_database_instance" "instance" {
provider = "google-beta"
count = 5
name = "private-instance-${count.index}"
database_version = "POSTGRES_9_6"
depends_on = [
"google_service_networking_connection.private_vpc_connection"
]
settings {
tier = "db-custom-1-3840"
availability_type = "REGIONAL"
ip_configuration {
ipv4_enabled = "false"
private_network = "${google_compute_network.private_network.self_link}"
}
}
}
provider "google-beta" {
version = "~> 2.5"
credentials = "credentials.json"
project = "PROJECT_ID"
region = "us-central1"
zone = "us-central1-a"
}
I tried several alternatives:
Waiting a minute after creating the google_service_networking_connection and then creating all the instances simultaneously, but I got the same error.
Creating an address range and a google_service_networking_connection per instance, but I got an error that google_service_networking_connection cannot be created simultaneously.
Creating an address range per instance and a single google_service_networking_connection which links to all of them, but I got the same error.
Found an ugly yet working solution. There is a bug in GCP: it does not prevent the simultaneous creation of instances even though that creation cannot complete. There is neither documentation about it nor a meaningful error message. It appears in the Terraform Google provider issue tracker as well.
One alternative is adding a dependency between the instances. This allows their creation to complete successfully. However, each instance takes several minutes to create, and that accumulates to many spent minutes. If we instead add an artificial delay of 60 seconds between instance creations, we manage to avoid the failures. Notes:
The needed number of seconds of delay depends on the instance tier. For example, for db-f1-micro, 30 seconds were enough. They were not enough for db-custom-1-3840.
I am not sure what the exact number of seconds needed for db-custom-1-3840 is. 30 seconds were not enough, 60 were.
Following is the code sample to resolve the issue. It shows only 2 instances since, due to depends_on limitations, I could not use the count feature, and showing the full code for 5 instances would be very long. It works the same for 5 instances:
resource "google_compute_network" "private_network" {
provider = "google-beta"
name = "private-network"
}
resource "google_compute_global_address" "private_ip_address" {
provider = "google-beta"
name = "private-ip-address"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = "${google_compute_network.private_network.self_link}"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider = "google-beta"
network = "${google_compute_network.private_network.self_link}"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"]
}
locals {
db_instance_creation_delay_factor_seconds = 60
}
resource "null_resource" "delayer_1" {
depends_on = ["google_service_networking_connection.private_vpc_connection"]
provisioner "local-exec" {
command = "echo Gradual DB instance creation && sleep ${local.db_instance_creation_delay_factor_seconds * 0}"
}
}
resource "google_sql_database_instance" "instance_1" {
provider = "google-beta"
name = "private-instance-delayed-1"
database_version = "POSTGRES_9_6"
depends_on = [
"google_service_networking_connection.private_vpc_connection",
"null_resource.delayer_1"
]
settings {
tier = "db-custom-1-3840"
availability_type = "REGIONAL"
ip_configuration {
ipv4_enabled = "false"
private_network = "${google_compute_network.private_network.self_link}"
}
}
}
resource "null_resource" "delayer_2" {
depends_on = ["google_service_networking_connection.private_vpc_connection"]
provisioner "local-exec" {
command = "echo Gradual DB instance creation && sleep ${local.db_instance_creation_delay_factor_seconds * 1}"
}
}
resource "google_sql_database_instance" "instance_2" {
provider = "google-beta"
name = "private-instance-delayed-2"
database_version = "POSTGRES_9_6"
depends_on = [
"google_service_networking_connection.private_vpc_connection",
"null_resource.delayer_2"
]
settings {
tier = "db-custom-1-3840"
availability_type = "REGIONAL"
ip_configuration {
ipv4_enabled = "false"
private_network = "${google_compute_network.private_network.self_link}"
}
}
}
provider "google-beta" {
version = "~> 2.5"
credentials = "credentials.json"
project = "PROJECT_ID"
region = "us-central1"
zone = "us-central1-a"
}
provider "null" {
version = "~> 1.0"
}
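A similar stagger can be achieved without shelling out to sleep by using the hashicorp/time provider's time_sleep resource. This is a sketch of an alternative to the null_resource pattern above, not part of the original workaround; it assumes Terraform 0.12+ with the time provider available, and replaces the delayer_2 / instance_2 pair:

resource "time_sleep" "delayer_2" {
  depends_on      = [google_service_networking_connection.private_vpc_connection]
  create_duration = "60s"   # one delay factor after the peering connection is ready
}

resource "google_sql_database_instance" "instance_2" {
  provider         = google-beta
  name             = "private-instance-delayed-2"
  database_version = "POSTGRES_9_6"

  # Depending on the time_sleep resource serializes this instance behind the delay.
  depends_on = [
    google_service_networking_connection.private_vpc_connection,
    time_sleep.delayer_2
  ]

  settings {
    tier              = "db-custom-1-3840"
    availability_type = "REGIONAL"
    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.private_network.self_link
    }
  }
}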
In case someone lands here with a slightly different case (creating a google_sql_database_instance in a private network results in an "Unknown error"):
1. Launch one Cloud SQL instance manually (this will enable servicenetworking.googleapis.com and some other APIs for the project, it seems)
2. Run your manifest
3. Terminate the instance created in step 1.
Works for me after that
¯\_(ツ)_/¯
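If the root cause is just the missing API, enabling it explicitly should have the same effect as creating and deleting a throwaway instance. A sketch (the Terraform resource below is a suggestion, not part of the original workaround):

resource "google_project_service" "servicenetworking" {
  project = "PROJECT_ID"                         # the project that will host the Cloud SQL instances
  service = "servicenetworking.googleapis.com"

  # Keep the API enabled even if this resource is later removed from the configuration.
  disable_on_destroy = false
}

The gcloud equivalent is: gcloud services enable servicenetworking.googleapis.com --project=[PROJECT_ID]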
I landed here with a slightly different case, same as @Grigorash Vasilij
(creating a google_sql_database_instance in a private network results in an "Unknown error").
I was using the UI to deploy an SQL instance on a private VPC, and for some reason that throws an "Unknown error" as well. I finally solved it by using the gcloud command instead (why that works and not the UI? I don't know; maybe the UI is not doing exactly the same as the command):
gcloud --project=[PROJECT_ID] beta sql instances create [INSTANCE_ID] \
  --network=[VPC_NETWORK_NAME] \
  --no-assign-ip
follow this for more details

Terraform: Creating GCP Project using Shared VPC

I've been working through this for what feels like an eternity now. The host project already exists and has all the VPNs and networking set up. I am looking to create a new project through Terraform and allow it to use the host project's shared VPC.
Every time I run up against a problem and end up resolving it, I just run up against another one.
Right now I'm seeing:
google_compute_shared_vpc_service_project.project: googleapi: Error 404: The resource 'projects/intacct-staging-db3b7e7a' was not found, notFound
* google_compute_instance.dokku: 1 error(s) occurred:
As well as:
google_compute_instance.dokku: Error loading zone 'europe-west2-a': googleapi: Error 404: Failed to find project intacct-staging, notFound
I was originally convinced it was ordering, which is why I was playing around with depends_on configurations to try and sort out the order. That hasn't seemed to resolve it.
Reading it simply, the service project doesn't exist as far as google_compute_shared_vpc_service_project is concerned, even though I've added the following to google_compute_shared_vpc_service_project:
depends_on = ["google_project.project",
"google_compute_shared_vpc_host_project.host_project",
]
Perhaps, because the host project already exists I should use data to refer to it instead of resource?
My full TF File is here:
provider "google" {
region = "${var.gcp_region}"
credentials = "${file("./creds/serviceaccount.json")}"
}
resource "random_id" "id" {
byte_length = 4
prefix = "${var.project_name}-"
}
resource "google_project" "project" {
name = "${var.project_name}"
project_id = "${random_id.id.hex}"
billing_account = "${var.billing_account}"
org_id = "${var.org_id}"
}
resource "google_project_services" "project" {
project = "${google_project.project.project_id}"
services = [
"compute.googleapis.com"
]
depends_on = [ "google_project.project" ]
}
# resource "google_service_account" "service-account" {
# account_id = "intacct-staging-service"
# display_name = "Service Account for the intacct staging app"
# }
resource "google_compute_shared_vpc_host_project" "host_project" {
project = "${var.vpc_parent}"
}
resource "google_compute_shared_vpc_service_project" "project" {
host_project = "${google_compute_shared_vpc_host_project.host_project.project}"
service_project = "${google_project.project.project_id}"
depends_on = ["google_project.project",
"google_compute_shared_vpc_host_project.host_project",
]
}
resource "google_compute_address" "dokku" {
name = "fr-intacct-staging-ip"
address_type = "EXTERNAL"
project = "${google_project.project.project_id}"
depends_on = [ "google_project_services.project" ]
}
resource "google_compute_instance" "dokku" {
project = "${google_project.project.name}"
name = "dokku-host"
machine_type = "${var.comp_type}"
zone = "${var.gcp_zone}"
allow_stopping_for_update = "true"
tags = ["intacct"]
# Install Dokku
metadata_startup_script = <<SCRIPT
sed -i 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config && service sshd restart
SCRIPT
boot_disk {
initialize_params {
image = "${var.compute_image}"
}
}
network_interface {
subnetwork = "${var.subnetwork}"
subnetwork_project = "${var.vpc_parent}"
access_config = {
nat_ip = "${google_compute_address.dokku.address}"
}
}
metadata {
sshKeys = "root:${file("./id_rsa.pub")}"
}
}
EDIT:
As discussed below, I was able to resolve the latter "project not found" error by changing the reference to project_id instead of name, as name does not include the random hex.
I'm now also seeing another error, referring to the static IP. The network interface is configured to use the subnetwork from the host VPC...
network_interface {
  subnetwork         = "${var.subnetwork}"
  subnetwork_project = "${var.vpc_parent}"
  access_config = {
    nat_ip = "${google_compute_address.dokku.address}"
  }
}
The IP is setup here:
resource "google_compute_address" "dokku" {
name = "fr-intacct-staging-ip"
address_type = "EXTERNAL"
project = "${google_project.project.project_id}"
}
The IP should really be in the host project, which I've tried, and when I do I get an error saying that cross-project use is not allowed with this resource.
When I change it back to the above, it also errors, saying that the new project is not capable of handling API calls. Which I suppose would make sense, as I only allowed Compute API calls per the google_project_services resource.
I'll try allowing network API calls and see if that works, but I'm thinking the external IP needs to be in the host project's shared VPC?
For anyone encountering the same problem, in my case the project not found error was solved just by enabling the Compute Engine API.
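For reference, with current providers the per-service way to do that from Terraform is google_project_service (google_project_services, used in the question, was the older variant that managed all of a project's services in one resource). A minimal sketch:

resource "google_project_service" "compute" {
  project = google_project.project.project_id
  service = "compute.googleapis.com"

  # Leave the API enabled if this resource is ever removed from the configuration.
  disable_on_destroy = false
}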