In my terraform configuration file, I define my resource like so:
resource "google_compute_instance" "test" {
...
count = 2
}
What I now want is to create a load balancer that will balance traffic between the two instances of my google_compute_instance. Unfortunately, I could not find anything related to this task in the documentation. It seems like google_compute_target_pool or google_compute_lb_ip_ranges have nothing to do with my problem.
You would have to use forwarding rules, as indicated in this Terraform document. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target instances. You can find more on the use of forwarding rules on Cloud Platform here.
In common cases you can use something like the following:
resource "google_compute_instance" "test" {
name = "nlb-node${count.index}"
zone = "europe-west3-b"
machine_type = "f1-micro"
count = 2
boot_disk {
auto_delete = true
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1604-lts"
size = 10
type = "pd-ssd"
}
}
network_interface {
subnetwork = "default"
access_config {
nat_ip = ""
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
resource "google_compute_http_health_check" "nlb-hc" {
name = "nlb-health-checks"
request_path = "/"
port = 80
check_interval_sec = 10
timeout_sec = 3
}
resource "google_compute_target_pool" "nlb-target-pool" {
name = "nlb-target-pool"
session_affinity = "NONE"
region = "europe-west3"
instances = [
"${google_compute_instance.test.*.self_link}"
]
health_checks = [
"${google_compute_http_health_check.nlb-hc.name}"
]
}
resource "google_compute_forwarding_rule" "network-load-balancer" {
name = "nlb-test"
region = "europe-west3"
target = "${google_compute_target_pool.nlb-target-pool.self_link}"
port_range = "80"
ip_protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
You can get the load balancer's external IP via ${google_compute_forwarding_rule.network-load-balancer.ip_address}:
// output.tf
output "network_load_balancer_ip" {
  value = "${google_compute_forwarding_rule.network-load-balancer.ip_address}"
}
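After terraform apply completes, you can read that value back with the standard terraform output command:

$ terraform output network_load_balancer_ip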
How to provision multiple instances in GCP Compute Engine using Terraform? I've tried using the 'count' parameter in the resource block, but Terraform does not provision more than one instance because a VM with a given name is only created once, on the first iteration of count.
provider "google" {
version = "3.5.0"
credentials = file("battleground01-5c86f5873d44.json")
project = "battleground01"
region = "us-east1"
zone = "us-east1-b"
}
variable "node_count" {
default = "3"
}
resource "google_compute_instance" "appserver" {
count = "${var.node_count}"
name = "battleground"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
}
}
In order for this to work, you would have to make a slight change in the way you are naming your compute instances:
provider "google" {
version = "3.5.0"
credentials = file("battleground01-5c86f5873d44.json")
project = "battleground01"
region = "us-east1"
zone = "us-east1-b"
}
variable "node_count" {
type = number
default = 3
}
resource "google_compute_instance" "appserver" {
count = var.node_count
name = "battleground-${count.index}"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
}
}
As you are using the count meta-argument, the way to access the current index is via the count.index attribute [1]. You had also set the node_count variable's default value to a string; even though Terraform would probably convert it to a number, make sure to use the right variable types.
[1] https://www.terraform.io/language/meta-arguments/count#the-count-object
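As a quick sanity check (not part of the original answer), a minimal output block using a splat expression lists the generated names:

output "instance_names" {
  // Should print ["battleground-0", "battleground-1", "battleground-2"]
  value = google_compute_instance.appserver[*].name
}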
I have a TF GCP google_compute_instance_template configured to deploy a range of individual VMs, each of which will perform a different role in a "micro-services" style application. I am adding a single label to my instance template, costing="app". However, when I go to deploy the various VM components of the app with google_compute_instance_group_manager, I was expecting to be able to add another label in the instance group manager configuration, specific to the VM that is being deployed, such as component="blah". The google_compute_instance_group_manager does not take labels as a configuration element. Does anyone know how I can use the template to add a generic label, but then add additional machine-specific labels when the VMs are created?
Here is the TF code:
// instance template
resource "google_compute_instance_template" "app" {
  name         = "appserver-template"
  machine_type = var.app_machine_type

  labels = {
    costing = "app"
  }

  disk {
    source_image = data.google_compute_image.debian_image.self_link
    auto_delete  = true
    boot         = true
    disk_size_gb = 20
  }

  tags = ["compute", "app"]

  network_interface {
    subnetwork = var.subnetwork
  }

  // no access config

  service_account {
    email = var.service_account_email
    // email = google_service_account.vm_sa.email
    scopes = ["cloud-platform"]
  }
}
// create instances -- how to add an instance-specific label here? e.g. component="admin"
resource "google_compute_instance_group_manager" "admin" {
  provider           = google-beta
  name               = "admin-igm"
  base_instance_name = "${var.project_short_name}-admin"
  zone               = var.zone
  target_size        = 1

  version {
    name              = "appserver"
    instance_template = google_compute_instance_template.app.id
  }
}
I got the desired outcome by creating a separate google_compute_instance_template for each server type in my application, which then allowed me to assign both a universal label and a component-specific label. It was more code than I hoped to have to write, but the objective is met.
resource "google_compute_instance_template" "admin" {
name = "admin-template"
machine_type = var.app_machine_type
labels = {
costing = "app",
component = "admin"
}
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
disk_size_gb = 20
}
tags = ["compute"]
network_interface {
subnetwork = var.subnetwork
}
// no access config
service_account {
email = var.service_account_email
// email = google_service_account.vm_sa.email
scopes = ["cloud-platform"]
}
}
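To cut the duplication down, one option is a single template resource with for_each over the component names, stamping out one template per server type with both labels. This is only a sketch, assuming the components share a machine type; the component names in the variable are illustrative:

variable "components" {
  type    = set(string)
  default = ["admin", "api", "worker"] // hypothetical component names
}

resource "google_compute_instance_template" "per_component" {
  // One template per component, each carrying the shared and the specific label
  for_each     = var.components
  name         = "${each.key}-template"
  machine_type = var.app_machine_type

  labels = {
    costing   = "app"
    component = each.key
  }

  disk {
    source_image = data.google_compute_image.debian_image.self_link
    auto_delete  = true
    boot         = true
    disk_size_gb = 20
  }

  network_interface {
    subnetwork = var.subnetwork
  }

  service_account {
    email  = var.service_account_email
    scopes = ["cloud-platform"]
  }
}

Each instance group manager can then reference its template as google_compute_instance_template.per_component["admin"].id.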
I'm trying to deploy a managed instance group with a load balancer which will be running a web server container.
The container is stored in the Google Artifact Registry.
If I manually create a VM and define the container usage, it is successfully able to pull and activate the container.
When I try to create the managed instance group via Terraform, the VM neither pulls nor activates the container.
When I ssh to the VM and try to manually pull the container, I get the following error:
Error response from daemon: Get https://us-docker.pkg.dev/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The only notable difference between the VM I created manually and the VM created by Terraform is that the manual VM has an external IP address. Not sure if this matters, and not sure how to add one to the Terraform file.
Below is my main.tf file. Can anyone tell me what I'm doing wrong?
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.53.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  credentials = file("compute_lab2-347808-dab33a244827.json")
  project     = "lab2-347808"
  region      = "us-central1"
  zone        = "us-central1-f"
}
locals {
  google_load_balancer_ip_ranges = [
    "130.211.0.0/22",
    "35.191.0.0/16",
  ]
}

module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 2.0"

  cos_image_name = "cos-stable-77-12371-89-0"

  container = {
    image = "us-docker.pkg.dev/lab2-347808/web-server-repo/web-server-image"
    volumeMounts = [
      {
        mountPath = "/cache"
        name      = "tempfs-0"
        readOnly  = false
      },
    ]
  }

  volumes = [
    {
      name = "tempfs-0"
      emptyDir = {
        medium = "Memory"
      }
    },
  ]

  restart_policy = "Always"
}
resource "google_compute_firewall" "rules" {
project = "lab2-347808"
name = "allow-web-ports"
network = "default"
description = "Opens the relevant ports for the web server"
allow {
protocol = "tcp"
ports = ["80", "8080", "5432", "5000", "443"]
}
source_ranges = ["0.0.0.0/0"]
#source_ranges = local.google_load_balancer_ip_ranges
target_tags = ["web-server-ports"]
}
resource "google_compute_autoscaler" "default" {
name = "web-autoscaler"
zone = "us-central1-f"
target = google_compute_instance_group_manager.default.id
autoscaling_policy {
max_replicas = 10
min_replicas = 1
cooldown_period = 60
cpu_utilization {
target = 0.5
}
}
}
resource "google_compute_instance_template" "default" {
name = "my-web-server-template"
machine_type = "e2-medium"
can_ip_forward = false
tags = ["ssh", "http-server", "https-server", "web-server-ports"]
disk {
#source_image = "cos-cloud/cos-73-11647-217-0"
source_image = module.gce-container.source_image
}
network_interface {
network = "default"
}
service_account {
#scopes = ["userinfo-email", "compute-ro", "storage-ro"]
scopes = ["cloud-platform"]
}
metadata = {
gce-container-declaration = module.gce-container.metadata_value
}
}
resource "google_compute_target_pool" "default" {
name = "web-server-target-pool"
}
resource "google_compute_instance_group_manager" "default" {
name = "web-server-igm"
zone = "us-central1-f"
version {
instance_template = google_compute_instance_template.default.id
name = "primary"
}
target_pools = [google_compute_target_pool.default.id]
base_instance_name = "web-server-instance"
}
Your VM template doesn't assign public IPs, so the VMs can't reach the public registry endpoint.
However, you have 3 ways to solve that issue:
Add a public IP on the VM template (bad idea)
Add a Cloud NAT on your VM's private IP range to allow outgoing traffic to the internet (good idea; see the sketch after this list)
Activate Private Google Access on the subnet that hosts the VM's private IP range. It creates a bridge to access Google services without having a public IP (my preferred idea) -> https://cloud.google.com/vpc/docs/configure-private-google-access
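For the Cloud NAT option, a minimal sketch could look like the following (the router and NAT names are illustrative, and the region should match the one your instance group runs in):

resource "google_compute_router" "nat_router" {
  name    = "web-nat-router" // hypothetical name
  network = "default"
  region  = "us-central1"
}

resource "google_compute_router_nat" "nat" {
  name   = "web-nat" // hypothetical name
  router = google_compute_router.nat_router.name
  region = google_compute_router.nat_router.region

  // Automatically allocate NAT IPs and cover all subnet ranges in the network
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

With this in place, the VMs can pull from us-docker.pkg.dev without needing a public IP of their own.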
Apparently I was missing the following access_config inside network_interface of the google_compute_instance_template, as follows:
network_interface {
  network = "default"

  access_config {
    network_tier = "PREMIUM"
  }
}
How to add the default-allow-http firewall rule in a terraform script to a Google Cloud Compute Instance?
provider "google" {
credentials = file("CREDENTIAL_FILE")
project = "gitlab-project"
region = var.region
}
resource "google_compute_instance" "gitlab" {
name = var.machine_specs.name
machine_type = var.machine_type.emicro
zone = var.zone
boot_disk {
initialize_params {
image = var.machine_specs.os
size = var.machine_specs.size
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
// Add the SSH key
metadata = {
ssh-keys = "martin:${file("~/.ssh/id_rsa.pub")}"
}
}
// An output for extracting the external IP of the instance
output "ip" {
  value = "${google_compute_instance.gitlab.network_interface.0.access_config.0.nat_ip}"
}

resource "google_compute_address" "static" {
  name         = "ipv4-address"
  address_type = "EXTERNAL"
  address      = "XXX.XXX.XXX.XXX"
}
resource "google_compute_firewall" "allow-http" {
name = "default-allow-http"
network =
allow{
protocol = "tcp"
ports = ["80"]
}
}
You can use the tags argument available in the google_compute_instance resource.
It would look something like:
resource "google_compute_instance" "gitlab" {
name = var.machine_specs.name
machine_type = var.machine_type.emicro
zone = var.zone
tags = ["http-server"]
The http-server tag is for the default-allow-http firewall rule.
If you need default-allow-https, then simply append https-server to the tags list.
Hope this helps.
You need to add the tags ["http-server", "https-server"] to your google_compute_instance resource like so:
[...]
resource "google_compute_instance" "gitlab" {
  name         = var.machine_specs.name
  machine_type = var.machine_type.emicro
  zone         = var.zone
  tags         = ["http-server", "https-server"]
[...]
Simply add the tags http-server and https-server to your google_compute_instance resource. These tags correspond to the rules found in the firewall settings in your Google Cloud Console.
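Note that the default-allow-http rule with the http-server target tag only exists if it has already been created in the project (the Cloud Console creates it when you tick "Allow HTTP traffic" on a VM). If it does not exist yet, here is a minimal sketch of an equivalent rule, completing the firewall resource from the question:

resource "google_compute_firewall" "allow-http" {
  name    = "default-allow-http"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  // Applies only to instances carrying the http-server tag
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http-server"]
}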
I am attempting to write automation to deploy instances in a shared VPC on GCP. I have a host network project and a service project. I can create a static internal IP address resource in the host project (resource "google_compute_address" "internal"), in which I specify the VPC host project (NET_HUB_PROJ), but I am unable to use it when creating the instance. I receive the following error:
google_compute_instance.compute: Error creating instance: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].networkIP': '10.128.0.10'. IP address 'projects/prototype-network-hub/regions/us-central1/addresses/bh-int-ip' (10.128.0.10) is reserved by another project., invalid
My compute module:
data "google_compute_image" "image" {
name = "${var.IMAGE_NAME}"
project = "${var.IMAGE_PROJECT}"
}
resource "google_compute_address" "internal" {
name = "${var.NAME}-int-ip"
address_type = "INTERNAL"
address = "${var.PRIVATE_IP}"
subnetwork = "${var.NET_HUB_SUBNETWORK}"
region = "${var.NET_HUB_REGION}"
project = "${var.NET_HUB_PROJ}"
}
resource "google_compute_address" "external" {
count = "${var.EXT_IP_CREATE ? 1 : 0}"
name = "${var.NAME}-ext-ip"
address_type = "EXTERNAL"
region = "${var.REGION}"
}
resource "google_compute_instance" "compute" {
depends_on = ["google_compute_address.external"]
name = "${var.NAME}"
machine_type = "${var.MACHINE_TYPE}"
zone = "${var.ZONE}"
can_ip_forward = "${var.CAN_IP_FORWARD}"
deletion_protection ="${var.DELETION_PROTECTION}"
allow_stopping_for_update = "${var.ALLOW_STOPPING_FOR_UPDATE}"
tags = ["allow-ssh"]
metadata = {
"network" = "${var.NETWORK}"
"env" = "${var.ENV}"
"role" = "${var.ROLE}"
"region" = "${var.REGION}"
"zone" = "${var.ZONE}"
}
labels = {
"network" = "${var.NETWORK}"
"env" = "${var.ENV}"
"role" = "${var.ROLE}"
"region" = "${var.REGION}"
"zone" = "${var.ZONE}"
}
boot_disk {
device_name = "${var.NAME}"
auto_delete = "${var.BOOT_DISK_AUTO_DELETE}"
initialize_params {
size = "${var.BOOT_DISK_SIZE}"
type = "${var.BOOT_DISK_TYPE}"
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
network_ip = "${google_compute_address.internal.address}"
subnetwork_project = "${var.NET_HUB_PROJ}"
subnetwork = "projects/prototype-network-hub/regions/us-central1/subnetworks/custom"
access_config {
nat_ip = "${element(concat(google_compute_address.external.*.address, list("")), 0)}"
}
}
service_account {
scopes = ["service-control", "service-management", "logging-write", "monitoring-write", "storage-ro", "https://www.googleapis.com/auth/trace.append" ]
}
}
EDIT (new answer):
Per the GCP documentation, the static internal IP must belong to the service project (not the host network project, as in your code) if you're looking to reserve an internal IP on a shared VPC in a different project. See here:
https://cloud.google.com/vpc/docs/provisioning-shared-vpc#reserve_internal_ip
Seeing as the shared VPC subnetwork is unlikely to be defined in your TF codebase, you'll have to use a data source to get the self_link of the subnetwork to use for google_compute_address. Something like the following:
data "google_compute_subnetwork" "subnet" {
name = "${var.NET_HUB_SUBNETWORK}"
project = "${var.NET_HUB_PROJ}"
region = "${var.NET_HUB_REGION}"
}
resource "google_compute_address" "internal" {
name = "${var.NAME}-int-ip"
address_type = "INTERNAL"
address = "${var.PRIVATE_IP}"
subnetwork = "${data.google_compute_subnetwork.subnet.self_link}"
}
This should create the resource under your service project, yet with an address within the designated subnet.
When you deploy your instance, you should see it referenced under the internal_ip column on your VM instances tab for the assigned instance.
(old answer for posterity):
Unfortunately, google_compute_address doesn't support a subnetwork_project argument like google_compute_instance does. A way around this is to provide a full URL to the subnetwork field of google_compute_address. Something like the following:
resource "google_compute_address" "internal" {
name = "${var.NAME}-int-ip"
address_type = "INTERNAL"
address = "${var.PRIVATE_IP}"
subnetwork = "https://www.googleapis.com/compute/v1/projects/${var.NET_HUB_PROJ}/regions/${var.NET_HUB_REGION}/subnetworks/${var.NET_HUB_SUBNETWORK}"
}
Adding my solution below:
resource "google_compute_address" "internal_ip" {
count = 1
name = "${local.cluster_name}-int-ip-${count.index}"
project = <service project id>
subnetwork = <host project subnet self_link>
address_type = "INTERNAL"
region = "asia-northeast1"
purpose = "GCE_ENDPOINT"
}
output "internal_ipaddr_info" {
value = google_compute_address.internal_ip
}
resource "google_compute_instance" "test" {
project = module.gcp_service_project.project_id
name = "eps-gce-vm-d-swmgr-ane1-test"
machine_type = "n1-standard-1"
zone = "asia-northeast1-a"
can_ip_forward = true
boot_disk {
initialize_params {
image = "centos7"
}
}
network_interface {
subnetwork = <host project subnet self_link>
network_ip = google_compute_address.internal_ip[0].self_link
}
}