GCP Compute Engine using Terraform - google-cloud-platform

How do I provision multiple instances in GCP Compute Engine using Terraform? I've tried the 'count' parameter in the resource block, but Terraform does not provision more than one instance, because a VM with a given name can only be created once: the first count iteration succeeds and the rest collide on the name.
provider "google" {
version = "3.5.0"
credentials = file("battleground01-5c86f5873d44.json")
project = "battleground01"
region = "us-east1"
zone = "us-east1-b"
}
variable "node_count" {
default = "3"
}
resource "google_compute_instance" "appserver" {
count = "${var.node_count}"
name = "battleground"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
}
}

In order for this to work, you would have to make a slight change in the way you are naming your compute instances:
provider "google" {
version = "3.5.0"
credentials = file("battleground01-5c86f5873d44.json")
project = "battleground01"
region = "us-east1"
zone = "us-east1-b"
}
variable "node_count" {
type = number
default = 3
}
resource "google_compute_instance" "appserver" {
count = var.node_count
name = "battleground-${count.index}"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
}
}
As you are using the count meta-argument, you can access each instance's index through the count.index attribute [1], which makes every instance name unique. You have also set the node_count variable's default value to a string; even though Terraform would probably convert it to a number, make sure to declare the right variable type explicitly.
[1] https://www.terraform.io/language/meta-arguments/count#the-count-object
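
As a quick sanity check, here is a minimal output sketch (assuming the appserver resource above) that lists the generated names after apply:

output "appserver_names" {
  # Splat over all count-indexed instances;
  # yields ["battleground-0", "battleground-1", "battleground-2"]
  value = google_compute_instance.appserver[*].name
}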

Related

GCP terraform compute instance template labels

I have a TF GCP google_compute_instance_template configured to deploy a range of individual VMs, each of which will perform a different role in a "micro-services" style application. I am adding a single label, costing="app", to my instance template. However, when I go to deploy the various VM components of the app with google_compute_instance_group_manager, I was expecting to be able to add another label in the instance group manager configuration, specific to the VM being deployed, such as component="blah". But google_compute_instance_group_manager does not take labels as a configuration element. Does anyone know how I can use the template to add a generic label, but then add additional machine-specific labels when the VMs are created?
Here is the TF code:
// instance template
resource "google_compute_instance_template" "app" {
name = "appserver-template"
machine_type = var.app_machine_type
labels = {
costing = "app"
}
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
disk_size_gb = 20
}
tags = ["compute", "app"]
network_interface {
subnetwork = var.subnetwork
}
// no access config
service_account {
email = var.service_account_email
// email = google_service_account.vm_sa.email
scopes = ["cloud-platform"]
}
}
// create instances --how to add instance-specific label here? eg component="admin"
resource "google_compute_instance_group_manager" "admin" {
provider = google-beta
name = "admin-igm"
base_instance_name = "${var.project_short_name}-admin"
zone = var.zone
target_size = 1
version {
name = "appserver"
instance_template = google_compute_instance_template.app.id
}
}
I got the desired outcome by creating a google_compute_instance_template for each server type in my application, which then allowed me to assign both a universal label and a component-specific label. It was more code than I hoped to have to write, but the objective is met.
resource "google_compute_instance_template" "admin" {
name = "admin-template"
machine_type = var.app_machine_type
labels = {
costing = "app",
component = "admin"
}
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
disk_size_gb = 20
}
tags = ["compute"]
network_interface {
subnetwork = var.subnetwork
}
// no access config
service_account {
email = var.service_account_email
// email = google_service_account.vm_sa.email
scopes = ["cloud-platform"]
}
}
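
For comparison, a hedged sketch of a more compact variant: a single template block stamped out once per component with for_each. The component list here is a hypothetical assumption; the variables are the ones used above.

locals {
  # Hypothetical component list; extend as needed
  components = toset(["admin", "web", "worker"])
}

resource "google_compute_instance_template" "component" {
  for_each     = local.components
  name         = "${each.key}-template"
  machine_type = var.app_machine_type

  labels = {
    costing   = "app"
    component = each.key
  }

  disk {
    source_image = data.google_compute_image.debian_image.self_link
    auto_delete  = true
    boot         = true
    disk_size_gb = 20
  }

  tags = ["compute"]

  network_interface {
    subnetwork = var.subnetwork
  }

  service_account {
    email  = var.service_account_email
    scopes = ["cloud-platform"]
  }
}

Each instance group manager could then reference google_compute_instance_template.component["admin"].id, and so on.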

Linking a series of GCP disks to corresponding instances dynamically in terraform

I need to dynamically create a variable number of riak instances, each with an attached disk across multiple zones in GCP using terraform.
Each attached disk must live in the same zone as its instance.
Upon terraform plan, everything looked good, but when I ran apply, Terraform responded with an error saying that zone was undefined.
Okay, I thought, let's set the zone to be the same as the linked instance's. No dice: a cycle error. So I moved everything such that the information flows from the disk to the instance, but the cycle error persists. Here's the error:
│ Error: Cycle: module.riak_instances.google_compute_instance.riak_instance, module.riak_instances.google_compute_disk.data-disk
And the code in its current incarnation:
data "google_compute_zones" "zones" {
}
resource "google_compute_instance" "riak_instance" {
count = var.instance_count
name = "riak-${count.index + 1}"
machine_type = var.machine_type
zone = google_compute_disk.data-disk[count.index].zone
boot_disk {
initialize_params {
image = var.disk_image
size = var.instance_disk_size
}
}
network_interface {
network = "default"
}
attached_disk {
source = google_compute_disk.data-disk[count.index].self_link
}
labels = {
environment = var.environment
owner = var.owner
}
}
resource "google_compute_disk" "data-disk" {
count = var.instance_count
name = "riak-disk-${count.index + 1}"
type = "pd-balanced"
size = var.data_disk_size
zone = data.google_compute_zones.zones.names[count.index]
labels = {
environment = var.environment
owner = var.owner
}
}
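
One way to keep each disk and instance in the same zone without a two-way reference is to compute the zone from the data source in both resources, so the only remaining dependency runs from disk to instance. A minimal sketch, assuming the same variables as above (labels omitted for brevity):

locals {
  zones = data.google_compute_zones.zones.names
}

resource "google_compute_disk" "data-disk" {
  count = var.instance_count
  name  = "riak-disk-${count.index + 1}"
  type  = "pd-balanced"
  size  = var.data_disk_size
  # Wrap around the available zones so any instance_count works
  zone  = local.zones[count.index % length(local.zones)]
}

resource "google_compute_instance" "riak_instance" {
  count        = var.instance_count
  name         = "riak-${count.index + 1}"
  machine_type = var.machine_type
  # Same zone expression as the disk, so the only dependency left
  # is the attached_disk reference from instance to disk
  zone         = local.zones[count.index % length(local.zones)]

  boot_disk {
    initialize_params {
      image = var.disk_image
      size  = var.instance_disk_size
    }
  }

  network_interface {
    network = "default"
  }

  attached_disk {
    source = google_compute_disk.data-disk[count.index].self_link
  }
}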

how to declare gcp compute engine images from gcp marketplace through terraform

I've been asked at my company to write a Terraform script that deploys a Compute Engine image from the GCP Marketplace, most likely a deep-learning image. Can anyone please help?
Example image - https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning?q=compute%20engine%20images&id=8857b4a3-f60f-40b2-9b32-22b4428fd256
Please take a look at the following example:
resource "random_id" "instance_id" {
byte_length = 8
}
resource "google_compute_instance" "default" {
name = "vm-${random_id.instance_id.hex}"
machine_type = var.instance_type
zone = var.zone
boot_disk {
initialize_params {
image = "deeplearning-platform-release/tf-ent-latest-gpu" # TensorFlow Enterprise
size = 50 // 50 GB Storage
}
}
network_interface {
network = "default"
access_config {}
}
guest_accelerator {
type = var.gpu_type
count = var.gpu_count
}
scheduling {
automatic_restart = true
on_host_maintenance = "TERMINATE"
}
metadata = {
install-nvidia-driver = "True"
proxy-mode = "service_account"
}
tags = ["deeplearning-vm"]
service_account {
scopes = ["https://www.googleapis.com/auth/cloud-platform"]
}
}
I recently added support for AI Platform Notebooks in Terraform as well.
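
For reference, a hedged sketch of such a notebook instance; the name and values are illustrative assumptions, and the resource lives in the google-beta provider:

resource "google_notebooks_instance" "notebook" {
  provider     = google-beta
  name         = "dl-notebook"        # hypothetical name
  location     = "us-central1-a"      # Notebooks instances are zonal
  machine_type = "n1-standard-4"

  vm_image {
    project      = "deeplearning-platform-release"
    image_family = "tf-ent-latest-gpu"
  }
}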

Terraform multiple VM with multiple disk GCP

I am trying to create a list of VMs, where each VM needs a list of disks created and attached to it.
In the example below, I have to create the instance test-d01 with three disks (test-d01-data, test-d01-data-disk, and test-d01-commitlog-disk),
and similarly test-d02 with two disks (test-d02-data and test-d02-data-disk).
The VM_info structure below represents the required configuration:
[
  {
    name = "test-d01"
    zone = "us-east1-b"
    disk = [
      {
        disk_name = "test-d01-data"
        disk_type = "pd-ssd"
        disk_size = "60"
      },
      {
        disk_name = "test-d01-data-disk"
        disk_type = "pd-standard"
        disk_size = "15"
      },
      {
        disk_name = "test-d01-commitlog-disk"
        disk_type = "pd-ssd"
        disk_size = "30"
      }
    ]
  },
  {
    name = "test-d02"
    zone = "us-east1-b"
    disk = [
      {
        disk_name = "test-d02-data"
        disk_type = "pd-ssd"
        disk_size = "60"
      },
      {
        disk_name = "test-d02-data-disk"
        disk_type = "pd-standard"
        disk_size = "15"
      }
    ]
  },
]
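
For reference, a hedged sketch of a variable declaration matching this shape (the name VM_info comes from the question; the exact types are an assumption):

variable "VM_info" {
  type = list(object({
    name = string
    zone = string
    disk = list(object({
      disk_name = string
      disk_type = string
      disk_size = string
    }))
  }))
}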
Great idea. When I use this, I do get an error:
terraform plan
var.disks
Enter a value: 2
var.instance_name
Enter a value: DDVE5
Error: Reference to undeclared resource
on main.tf line 39, in resource "google_compute_attached_disk" "vm_attached_disk":
39: instance = google_compute_instance.vm_instance.self_link
A managed resource "google_compute_instance" "vm_instance" has not been
declared in the root module.
cat main.tf
variable "instance_name" {}
variable "instance_zone" {
default = "europe-west3-c"
}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {
default = "default"
}
variable "disks" {}
provider "google" {
credentials = file("key.json")
project = "ddve50"
region = "europe-west3"
zone = "europe-west3-a"
}
resource "google_compute_instance" "vm-instance" {
name = "ddve-gcp-5-7-2-0-20-65"
machine_type = "f1-micro"
tags = ["creator", "juergen"]
boot_disk {
initialize_params {
image = "ddve"
type = "pd-ssd"
}
}
network_interface {
network = "default"
}
}
resource "google_compute_attached_disk" "vm_attached_disk" {
for_each = toset(var.disks)
disk = each.key
instance = google_compute_instance.vm_instance.self_link
}
cat ../my_instances.tf
resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-nvram" {
name = "ddve-gcp-5-7-2-0-20-65-nvram"
type = "pd-ssd"
size = 10
}
resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-m1" {
name = "ddve-gcp-5-7-2-0-20-65-m1"
type = "pd-standard"
size = 1024
}
module "ddve-test-d01" {
source = "./instance"
instance_name = "ddve-test-d01"
disks = [
google_compute_disk.ddve-gcp-5-7-2-0-20-65-nvram,
google_compute_disk.ddve-gcp-5-7-2-0-20-65-m1
]
}
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk:
Persistent disks can be attached to a compute instance using the
attached_disk section within the compute instance configuration.
However there may be situations where managing the attached disks via
the compute instance config isn't preferable or possible, such as
attaching dynamic numbers of disks using the count variable.
Therefore the straightforward approach of creating a google_compute_instance with attached_disk attributes may not be applicable in this case.
In brief, the idea is to create the persistent disks first, then create the VM instances and attach the new disks to the freshly created instances. This test deployment consists of a root configuration file my_instances.tf and a reusable module ./instance/main.tf.
1. Persistent disks google_compute_disk can be created independently. For the sake of simplicity, five literal blocks are used in the root file my_instances.tf.
2. The reusable module instance/main.tf is called with the VM attributes and a list of disks, so that it can:
create a VM instance google_compute_instance;
use binding objects google_compute_attached_disk to attach the new empty disks to the freshly created VM instance.
To process the disk list, the for_each meta-argument is used.
Since for_each accepts only a map or a set of strings, the toset function is used to convert the list of disks to a set.
Terraform by HashiCorp > Compute Engine > Data Sources > google_compute_instance
Terraform by HashiCorp > Compute Engine > Resources > google_compute_disk
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk
Terraform by HashiCorp > Configuration language > Resources > lifecycle.ignore_changes
Terraform by HashiCorp > Configuration language > Resources > When to Use for_each Instead of count
Terraform by HashiCorp > Configuration language > Resources > The each Object
Terraform by HashiCorp > Configuration language > Resources > Using Sets
$ cat my_instances.tf
resource "google_compute_disk" "test-d01-data" {
name = "test-d01-data"
type = "pd-ssd"
size = 60
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-data-disk" {
name = "test-d01-data-disk"
type = "pd-standard"
size = 15
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d01-commitlog-disk" {
name = "test-d01-commitlog-disk"
type = "pd-ssd"
size = 30
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d02-data" {
name = "test-d02-data"
type = "pd-ssd"
size = 60
zone = "europe-west3-c"
}
resource "google_compute_disk" "test-d02-data-disk" {
name = "test-d02-data-disk"
type = "pd-standard"
size = 15
zone = "europe-west3-c"
}
module "test-d01" {
source = "./instance"
instance_name = "test-d01"
disks = [
google_compute_disk.test-d01-data.name,
google_compute_disk.test-d01-data-disk.name,
google_compute_disk.test-d01-commitlog-disk.name
]
}
module "test-d02" {
source = "./instance"
instance_name = "test-d02"
disks = [
google_compute_disk.test-d02-data.name,
google_compute_disk.test-d02-data-disk.name
]
}
$ cat instance/main.tf
variable "instance_name" {}
variable "instance_zone" {
default = "europe-west3-c"
}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {
default = "default"
}
variable "disks" {}
resource "google_compute_instance" "vm_instance" {
name = var.instance_name
zone = var.instance_zone
machine_type = var.instance_type
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
access_config {
# Allocate a one-to-one NAT IP to the instance
}
}
lifecycle {
ignore_changes = [attached_disk]
}
}
resource "google_compute_attached_disk" "vm_attached_disk" {
for_each = toset(var.disks)
disk = each.key
instance = google_compute_instance.vm_instance.self_link
}
$ terraform fmt
$ terraform init
$ terraform plan
$ terraform apply
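
One hedged refinement to the module above: var.disks is declared without a type, so giving it an explicit list(string) type makes the toset() conversion predictable and rejects wrong inputs at plan time.

variable "disks" {
  # Names of pre-created google_compute_disk resources to attach
  type    = list(string)
  default = []
}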
You can also get the disks attached with:
resource "google_compute_disk" "default" {
name = "test-disk"
type = "pd-ssd"
zone = var.instance_zone
# image = "debian-9-stretch-v20200805"
labels = {
environment = "dev"
}
physical_block_size_bytes = 4096
}
resource "google_compute_attached_disk" "vm_attached_disk" {
count = var.disks
disk = google_compute_disk.default.id
instance = google_compute_instance.vm-instance.id
}
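Note that this variant expects var.disks to be a number (as entered at the plan prompt above), whereas the module version expects a list of disk names.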

How to load balance google compute instance using terraform?

In my terraform configuration file, I define my resource like so:
resource "google_compute_instance" "test" {
...
count = 2
}
What I now want is to create a load balancer that balances between the two instances of my Google Compute instance. Unfortunately, I could not find anything in the documentation related to this task. It seems like google_compute_target_pool and google_compute_lb_ip_ranges have nothing to do with my problem.
You would have to use 'forwarding rules', as indicated in the Terraform documentation. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target instances; how forwarding rules are used on Cloud Platform is described in the GCP documentation.
In common cases you can use something like the following:
resource "google_compute_instance" "test" {
name = "nlb-node${count.index}"
zone = "europe-west3-b"
machine_type = "f1-micro"
count = 2
boot_disk {
auto_delete = true
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1604-lts"
size = 10
type = "pd-ssd"
}
}
network_interface {
subnetwork = "default"
access_config {
nat_ip = ""
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
resource "google_compute_http_health_check" "nlb-hc" {
name = "nlb-health-checks"
request_path = "/"
port = 80
check_interval_sec = 10
timeout_sec = 3
}
resource "google_compute_target_pool" "nlb-target-pool" {
name = "nlb-target-pool"
session_affinity = "NONE"
region = "europe-west3"
instances = [
"${google_compute_instance.test.*.self_link}"
]
health_checks = [
"${google_compute_http_health_check.nlb-hc.name}"
]
}
resource "google_compute_forwarding_rule" "network-load-balancer" {
name = "nlb-test"
region = "europe-west3"
target = "${google_compute_target_pool.nlb-target-pool.self_link}"
port_range = "80"
ip_protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
You can get the load balancer's external IP via google_compute_forwarding_rule.network-load-balancer.ip_address:
// output.tf
output "network_load_balancer_ip" {
  value = google_compute_forwarding_rule.network-load-balancer.ip_address
}
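After terraform apply, the address can then be read back with terraform output network_load_balancer_ip.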