I'm trying to create a Terraform configuration that takes user input and acts on it. Basically, I want to ask whether the user wants a static IP on Google Cloud Platform: if yes, wire the resource "google_compute_instance" up to it; otherwise, skip it.
Here is the code I have written:
variable "create_eip" {
description = "Enter 1 for true, 0 for false"
}
resource "google_compute_address" "external" {
count = "${var.create_eip}"
name = "external-ip",
address_type = "EXTERNAL",
}
resource "google_compute_instance" "compute-engine" {
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
nat_ip = "${google_compute_address.external.address}"
}
}
}
The problem is that when the user enters 0, control still reaches nat_ip = "${google_compute_address.external.address}", which produces this error:
google_compute_instance.compute-engine: Resource 'google_compute_address.external' not found for variable
'google_compute_address.external.address'.
I also tried replacing that line with
nat_ip = "${var.create_eip == "1" ? "${google_compute_address.external.address}" : ""}"
(i.e. if create_eip is 1, use google_compute_address.external.address, otherwise do nothing), but it is not working as expected.
That's a limitation of Terraform: you can't really express an "if" for anything other than count, and you can't put a condition inside a resource for now. You could try something like this instead:
variable "create_eip" {
description = "Enter 1 for true, 0 for false"
}
resource "google_compute_address" "external" {
count = "${var.create_eip}"
name = "external-ip",
address_type = "EXTERNAL",
}
resource "google_compute_instance" "compute-engine-ip" {
count = "${var.create_eip == 1 ? 1 : 0}"
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
nat_ip = "${google_compute_address.external.address}"
}
}
}
resource "google_compute_instance" "compute-engine" {
count = "${var.create_eip == 1 ? 0 : 1}"
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
}
}
}
This code creates a compute instance that uses the reserved address when the variable is 1; in the other case it creates an instance whose empty access_config just gets an ephemeral IP. You could also add a lifecycle block if you want to keep the same IP on the compute_address resource:
lifecycle {
  prevent_destroy = true
}
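As a side note, a common Terraform 0.11-era workaround is to keep a single instance resource and reference the address through a padded splat list, so the reference stays valid even when count is 0. Whether an empty nat_ip then falls back to an ephemeral IP depends on the provider version, so treat this as a sketch only:
network_interface {
  subnetwork = "default"

  access_config {
    # element(concat(...)) keeps the reference valid when count = 0;
    # an empty nat_ip is assumed to mean "ephemeral IP" (verify with your provider version)
    nat_ip = "${element(concat(google_compute_address.external.*.address, list("")), 0)}"
  }
}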
My problem is that I can't dynamically attach the created disks to the VPS, and the google_compute_disk_attach module cannot be used.
Here is my code:
What is the correct way in this situation?
resource "google_compute_instance" "vps" {
name = var.server_name
description = var.server_description
machine_type = var.server_type
zone = var.server_datacenter
deletion_protection = var.server_delete_protection
labels = var.server_labels
metadata = var.server_metadata
tags = var.server_tags
boot_disk {
auto_delete = false
initialize_params {
size = var.boot_volume_size
type = var.boot_volume_type
image = var.boot_volume_image
labels = var.boot_volume_labels
}
}
dynamic "attached_disk" {
for_each = { for vol in var.volumes : vol.volume_name => vol }
content {
source = element(var.volumes[*].volume_name, 0)
}
}
network_interface {
subnetwork = var.server_network
access_config {
nat_ip = google_compute_address.static_ip.address
}
}
resource "google_compute_disk" "volume" {
for_each = { for vol in var.volumes : vol.volume_name => vol }
name = each.value.volume_name
type = each.value.volume_type
size = each.value.volume_size
zone = var.server_datacenter
labels = each.value.volume_labels
}
The volumes variable:
volumes = [{
  volume_name = "v3-postgres-saga-import-test-storage"
  volume_size = "40"
  volume_type = "pd-ssd"
  volume_labels = {
    environment = "production"
    project     = "v3"
    type        = "storage"
  }
}, {
  volume_name = "volume-vpstest2"
  volume_size = "20"
  volume_type = "pd-ssd"
  volume_labels = {
    environment = "production"
    project     = "v2"
    type        = "storage"
  }
}]
If I do something like this instead:
source = google_compute_disk.volume[*].self_link
I get the error: This object does not have an attribute named "self_link".
Since you've used for_each in google_compute_disk.volume, it will be a map, not a list. Thus you can list all the self_link values as follows:
source = values(google_compute_disk.volume)[*].self_link
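If instead you want each iteration of the dynamic block to attach its own disk, a minimal sketch (assuming the same for_each keyed by volume_name as in your question) could look like this:
dynamic "attached_disk" {
  for_each = { for vol in var.volumes : vol.volume_name => vol }
  content {
    # look up the matching disk by its map key (the volume name)
    source = google_compute_disk.volume[attached_disk.key].self_link
  }
}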
You can also use the volumes variable directly as a map instead of an array.
variables.tf file:
variable "volumes" {
default = {
postgres_saga = {
volume_name = "v3-postgres-saga-import-test-storage"
volume_size = "40"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v3"
type = "storage"
}
},
volume_vpstest2 = {
volume_name = "volume-vpstest2"
volume_size = "20"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v2"
type = "storage"
}
}
}
}
Instead of a variable, you can also use a local value loaded from a JSON configuration file. Example structure of the Terraform module:
project
  module
    main.tf
    locals.tf
    resource
      volumes.json
volumes.json file:
{
  "volumes": {
    "postgres_saga": {
      "volume_name": "v3-postgres-saga-import-test-storage",
      "volume_size": "40",
      "volume_type": "pd-ssd",
      "volume_labels": {
        "environment": "production",
        "project": "v3",
        "type": "storage"
      }
    },
    "volume_vpstest2": {
      "volume_name": "volume-vpstest2",
      "volume_size": "20",
      "volume_type": "pd-ssd",
      "volume_labels": {
        "environment": "production",
        "project": "v2",
        "type": "storage"
      }
    }
  }
}
locals.tf file:
locals {
  tables = jsondecode(file("${path.module}/resource/volumes.json"))["volumes"]
}
main.tf file:
resource "google_compute_instance" "vps" {
name = var.server_name
description = var.server_description
machine_type = var.server_type
zone = var.server_datacenter
deletion_protection = var.server_delete_protection
labels = var.server_labels
metadata = var.server_metadata
tags = var.server_tags
boot_disk {
auto_delete = false
initialize_params {
size = var.boot_volume_size
type = var.boot_volume_type
image = var.boot_volume_image
labels = var.boot_volume_labels
}
}
dynamic "attached_disk" {
for_each = [
var.volumes
# local.volumes
]
content {
source = attached_disk.value["volume_name"]
}
}
network_interface {
subnetwork = var.server_network
access_config {
nat_ip = google_compute_address.static_ip.address
}
}
}
resource "google_compute_disk" "volume" {
for_each = var.volumes
# local.volumes
name = each.value["volume_name"]
type = each.value["volume_type"]
size = each.value["volume_size"]
zone = var.server_datacenter
labels = each.value["volume_labels"]
}
With a map, you can use for_each directly on the google_compute_disk.volume resource without any transformation.
You can also use this map in a dynamic block.
How do I add or remove the access_config { } block in Terraform with GCP?
I have a variable:
external_ip = false
If external_ip is false, the code is:
resource "google_compute_instance_from_template" "default_name_index" {
name = "${length(var.instances[change_with_index].instance_backup_ip) == 1 ? var.instances[change_with_index].instance_backup_name : format("%s-%s", var.instances[change_with_index].instance_backup_name, count.index + 1)}"
count = length(var.instances[change_with_index].instance_backup_ip)
source_instance_template = "projects/${var.provider_project}/global/instanceTemplates/${replace(var.instances[change_with_index].instance_name, "-app-image", "")}-${var.release_version}"
network_interface {
network = var.instances[change_with_index].instance_network
subnetwork = var.instances[change_with_index].instance_subnetwork
network_ip = var.instances[change_with_index].instance_backup_ip[count.index]
}
}
If external_ip is true, the code is:
resource "google_compute_instance_from_template" "default_name_index" {
name = "${length(var.instances[change_with_index].instance_backup_ip) == 1 ? var.instances[change_with_index].instance_backup_name : format("%s-%s", var.instances[change_with_index].instance_backup_name, count.index + 1)}"
count = length(var.instances[change_with_index].instance_backup_ip)
source_instance_template = "projects/${var.provider_project}/global/instanceTemplates/${replace(var.instances[change_with_index].instance_name, "-app-image", "")}-${var.release_version}"
network_interface {
network = var.instances[change_with_index].instance_network
subnetwork = var.instances[change_with_index].instance_subnetwork
network_ip = var.instances[change_with_index].instance_backup_ip[count.index]
#access_config will add in here
access_config
{
}
}
}
Thank you for helping me.
You can do this with dynamic blocks and for_each:
resource "google_compute_instance_from_template" "default_name_index" {
name = "${length(var.instances[change_with_index].instance_backup_ip) == 1 ? var.instances[change_with_index].instance_backup_name : format("%s-%s", var.instances[change_with_index].instance_backup_name, count.index + 1)}"
count = length(var.instances[change_with_index].instance_backup_ip)
source_instance_template = "projects/${var.provider_project}/global/instanceTemplates/${replace(var.instances[change_with_index].instance_name, "-app-image", "")}-${var.release_version}"
network_interface {
network = var.instances[change_with_index].instance_network
subnetwork = var.instances[change_with_index].instance_subnetwork
network_ip = var.instances[change_with_index].instance_backup_ip[count.index]
#access_config will add in here
dynamic "access_config"
{
for_each = external_ip == false ? [] : [1]
content {
// the normal content of access_config
}
}
}
}
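For completeness, a minimal sketch of the variable this relies on (the question only shows external_ip = false, so the exact declaration is an assumption):
variable "external_ip" {
  description = "Whether the instance should get an external IP"
  type        = bool
  default     = false
}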
I'm getting this error and it is driving me crazy... I can't get it to create more than one VM with a static IP.
Here is my main.tf:
provider "google" {
credentials = file("terraform-key.json")
project = var.project
region = var.region
zone = var.zone
}
terraform {
backend "gcs" {
bucket = "my-bucket"
prefix = "terraform"
credentials = "terraform-key.json"
}
}
resource "google_compute_network" "vpc_network" {
name = "new-terraform-network"
}
resource "google_container_cluster" "primary" {
name = "prod-cluster"
location = var.zone
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "pool-1"
location = var.zone
cluster = google_container_cluster.primary.name
node_count = 3
node_config {
preemptible = true
machine_type = "n1-standard-1"
disk_size_gb = 10
disk_type = "pd-standard"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
resource "google_compute_address" "vm-host"{
count = var.vm-host-number
region = var.vm-host-region
name = "vm-host-${count.index}"
}
resource "google_compute_instance" "vm-host" {
name = "vm-host-${count.index}"
machine_type = "f1-micro"
zone = "europe-west1-a"
count = var.vm-host-number
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = google_compute_network.vpc_network.name
access_config {
nat_ip = "google_compute_address.vm-host-${count.index}.address"
}
}
}
My variables file:
variable "project" {
default = "my-project"
}
variable "region" {
default = "us-central1"
}
variable "zone" {
default = "us-central1-c"
}
variable "cidr_ip" {
default = "10.0.0.0/16"
}
variable "vm-host-number"{
default = "2"
}
variable "vm-host-region"{
default = "us-central1"
}
variable "vm-host-zone"{
default = "europe-west1-a"
}
The error:
Error: Error loading zone 'europe-west1-a': googleapi: Error 404: The resource 'projects/GoogleProjectID/zones/europe-west1-a' was not found, notFound
on main.tf line 65, in resource "google_compute_instance" "vm-host":
65: resource "google_compute_instance" "vm-host" {
I can't understand why it won't create the VMs. If I try the same thing to create just one VM, without the variables/count, it works fine. I mean, the zone definitely exists...
Edit:
The next issue is that I can't create a static IP for each VM.
resource "google_compute_address" "vm-host" {
count = var.vm-host-number
#region = var-vm-host-region
region = "us-central1-a"
name = "vm-host-${count.index}"
}
resource "google_compute_instance" "vm-host-vms" {
name = "vm-host-${count.index}"
machine_type = "f1-micro"
zone = "us-central1-a"
count = var.vm-host-number
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = google_compute_network.vpc_network.name
access_config {
nat_ip = "google_compute_address.vm-host-${count.index}.address"
}
}
}
The nat_ip = "google_compute_address.vm-host-${count.index}.address" line needs to be a real reference of the form google_compute_address.resourcename.address. However, I've tried:
resource "google_compute_address" "vm-host-${count.index}" {
count = var.vm-host-number
region = "us-central1-a"
name = "vm-host-${count.index}"
}
resource "google_compute_address" "vm-host-$${count.index}" {
count = var.vm-host-number
region = "us-central1-a"
name = vm-host-${count.index}"
}
But whatever I do, it just won't work. Is there some special syntax for this?
A zone "europe-west1-a" does not exist.
You can get a list of existing zones using the command gcloud compute zones list:
CloudShell:$ gcloud compute zones list --filter=region=europe-west1
NAME            REGION        STATUS
europe-west1-b  europe-west1  UP
europe-west1-d  europe-west1  UP
europe-west1-c  europe-west1  UP
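As for the static IP per VM in the edit: you don't interpolate count.index into a resource name; you keep one count-based google_compute_address resource and index into it from the instance. A minimal sketch, assuming the vm-host address resource from the question (note also that google_compute_address takes a region such as "us-central1", not a zone):
network_interface {
  network = google_compute_network.vpc_network.name
  access_config {
    # index into the count-based address resource instead of building its name as a string
    nat_ip = google_compute_address.vm-host[count.index].address
  }
}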
We have a Terraform module that creates a compute_instance.
Some instances should get a public IP.
A public IP is created when you define the "access_config {}" block under network_interface:
network_interface {
  network = "default"
  access_config {
  }
}
We are trying to dynamically inject the network interface and the access_config from "production/Main.tf", which calls this module:
module "arbiter" {
source = "../modules/compute"
name = "arbiter"
machine_type = "custom-1-2048"
zones = ["europe-west2-a"]
tags = ["mongo-db"]
metadata = {
sshKeys = "${var.ssh_user}:${file("ssh-keys/main.rsa.pub")}"
}
network_interface = { -> this line is worng
network = "default"
}
}
How can we inject a dynamic object into the network_interface property?
Is this possible with Terraform? If not, what are the alternatives?
In your arbiter module do this:
variable "external_ip" {
description = "Controls if VM gets external IP"
default = false
}
locals {
access_config = {
"0" = []
"1" = [{}]
}
}
resource "google_compute_instance" "arbiter" {
name = "${var.name}"
machine_type = "${var.type}"
zone = "${var.zones}"
tags = "${var.tags}"
metadata = "${var.metadata}"
boot_disk {
initialize_params {
image = "some/image"
}
}
network_interface {
network = "default"
access_config = "${local.access_config[var.external_ip]}"
}
}
Then, when using the module, you can set the external_ip variable to indicate that the VM should be accessible from the internet.
module "arbiter" {
source = "../modules/compute"
name = "arbiter"
machine_type = "custom-1-2048"
zones = ["europe-west2-a"]
tags = ["mongo-db"]
metadata = {
sshKeys = "${var.ssh_user}:${file("ssh-keys/main.rsa.pub")}"
}
external_ip = true
}
More details about Terraform and null values tricks: Null values in Terraform v0.11.x
In my terraform configuration file, I define my resource like so:
resource "google_compute_instance" "test" {
...
count = 2
}
What I now want is to create a load balancer that balances between the two instances of my google_compute_instance. Unfortunately, I could not find anything related to this task in the documentation. It seems like google_compute_target_pool or google_compute_lb_ip_ranges have nothing to do with my problem.
You would have to use 'forwarding rules', as indicated in the Terraform documentation. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target instances. You can find the Cloud Platform documentation on forwarding rules here.
In common cases, you can use something like the following:
resource "google_compute_instance" "test" {
name = "nlb-node${count.index}"
zone = "europe-west3-b"
machine_type = "f1-micro"
count = 2
boot_disk {
auto_delete = true
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1604-lts"
size = 10
type = "pd-ssd"
}
}
network_interface {
subnetwork = "default"
access_config {
nat_ip = ""
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
resource "google_compute_http_health_check" "nlb-hc" {
name = "nlb-health-checks"
request_path = "/"
port = 80
check_interval_sec = 10
timeout_sec = 3
}
resource "google_compute_target_pool" "nlb-target-pool" {
name = "nlb-target-pool"
session_affinity = "NONE"
region = "europe-west3"
instances = [
"${google_compute_instance.test.*.self_link}"
]
health_checks = [
"${google_compute_http_health_check.nlb-hc.name}"
]
}
resource "google_compute_forwarding_rule" "network-load-balancer" {
name = "nlb-test"
region = "europe-west3"
target = "${google_compute_target_pool.nlb-target-pool.self_link}"
port_range = "80"
ip_protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
You can get the load balancer's external IP via ${google_compute_forwarding_rule.network-load-balancer.ip_address}:
// output.tf
output "network_load_balancer_ip" {
  value = "${google_compute_forwarding_rule.network-load-balancer.ip_address}"
}