I am trying to create a list of VMs, where each VM has a list of disks that need to be created and attached to it.
In the example below I have to create a test-d01 compute instance with 3 disks: test-d01-data, test-d01-data-disk, and test-d01-commitlog-disk,
and similarly a test-d02 compute instance with 2 disks: test-d02-data and test-d02-data-disk.
For example, the VM_info structure below represents the required configuration:
VM_info = [
  {
    name = "test-d01"
    zone = "us-east1-b"
    disk = [
      {
        disk_name = "test-d01-data"
        disk_type = "pd-ssd"
        disk_size = "60"
      },
      {
        disk_name = "test-d01-data-disk"
        disk_type = "pd-standard"
        disk_size = "15"
      },
      {
        disk_name = "test-d01-commitlog-disk"
        disk_type = "pd-ssd"
        disk_size = "30"
      }
    ]
  },
  {
    name = "test-d02"
    zone = "us-east1-b"
    disk = [
      {
        disk_name = "test-d02-data"
        disk_type = "pd-ssd"
        disk_size = "60"
      },
      {
        disk_name = "test-d02-data-disk"
        disk_type = "pd-standard"
        disk_size = "15"
      }
    ]
  }
]
Great idea. When I use this, I get the following:
terraform plan
var.disks
  Enter a value: 2

var.instance_name
  Enter a value: DDVE5

Error: Reference to undeclared resource

  on main.tf line 39, in resource "google_compute_attached_disk" "vm_attached_disk":
  39: instance = google_compute_instance.vm_instance.self_link

A managed resource "google_compute_instance" "vm_instance" has not been
declared in the root module.
cat main.tf
variable "instance_name" {}
variable "instance_zone" {
  default = "europe-west3-c"
}
variable "instance_type" {
  default = "n1-standard-1"
}
variable "instance_subnetwork" {
  default = "default"
}
variable "disks" {}

provider "google" {
  credentials = file("key.json")
  project     = "ddve50"
  region      = "europe-west3"
  zone        = "europe-west3-a"
}

resource "google_compute_instance" "vm-instance" {
  name         = "ddve-gcp-5-7-2-0-20-65"
  machine_type = "f1-micro"
  tags         = ["creator", "juergen"]
  boot_disk {
    initialize_params {
      image = "ddve"
      type  = "pd-ssd"
    }
  }
  network_interface {
    network = "default"
  }
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  for_each = toset(var.disks)
  disk     = each.key
  instance = google_compute_instance.vm_instance.self_link
}
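Note that the resource label above is "vm-instance" (with a hyphen), while the attached-disk resource references google_compute_instance.vm_instance (with an underscore); that mismatch is exactly what produces the "undeclared resource" error shown earlier. A consistent pairing, assuming the underscore spelling is kept, would be:

resource "google_compute_instance" "vm_instance" {
  # ... same body as above ...
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  for_each = toset(var.disks)
  disk     = each.key
  instance = google_compute_instance.vm_instance.self_link
}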
cat ../my_instances.tf
resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-nvram" {
  name = "ddve-gcp-5-7-2-0-20-65-nvram"
  type = "pd-ssd"
  size = 10
}

resource "google_compute_disk" "ddve-gcp-5-7-2-0-20-65-m1" {
  name = "ddve-gcp-5-7-2-0-20-65-m1"
  type = "pd-standard"
  size = 1024
}

module "ddve-test-d01" {
  source        = "./instance"
  instance_name = "ddve-test-d01"
  disks = [
    google_compute_disk.ddve-gcp-5-7-2-0-20-65-nvram,
    google_compute_disk.ddve-gcp-5-7-2-0-20-65-m1
  ]
}
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk:
Persistent disks can be attached to a compute instance using the
attached_disk section within the compute instance configuration.
However there may be situations where managing the attached disks via
the compute instance config isn't preferable or possible, such as
attaching dynamic numbers of disks using the count variable.
Therefore a straightforward approach that creates a google_compute_instance with inline attached_disk blocks (sketched below) may not be applicable in this case.
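For contrast, the inline form the documentation refers to looks roughly like this (a minimal sketch; the instance and disk names are illustrative):

resource "google_compute_instance" "vm_instance" {
  name         = "test-d01"
  machine_type = "n1-standard-1"
  zone         = "europe-west3-c"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  # Disks attached inline, inside the instance configuration; this is the
  # form that is hard to drive from a dynamic list of disks.
  attached_disk {
    source = google_compute_disk.test-d01-data.self_link
  }

  network_interface {
    network = "default"
  }
}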
In brief, the idea is to create the persistent disks first, then create the VM instances and attach the new disks to the freshly created instances. This test deployment consists of a root configuration file my_instances.tf and a reusable module ./instance/main.tf.
1. The persistent disks google_compute_disk can be created independently. For the sake of simplicity, 5 literal resource blocks are used in the root file my_instances.tf.
2. The reusable module instance/main.tf is called with the VM attributes and a list of disks so that it can:
create a VM instance google_compute_instance;
use binding objects google_compute_attached_disk to attach the new, empty disks to the freshly created VM instance.
To process the disk list, the for_each meta-argument is used.
Since for_each accepts only a map or a set of strings, the toset function is used to convert the list of disks to a set.
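One optional refinement, not part of the original configuration: since the module's variable "disks" is declared without a type, constraining it to list(string) makes toset() predictable and forces callers to pass disk names rather than whole resource objects:

variable "disks" {
  type        = list(string)
  description = "Names of pre-created google_compute_disk resources to attach to the instance"
}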
Terraform by HashiCorp > Compute Engine > Data Sources > google_compute_instance
Terraform by HashiCorp > Compute Engine > Resources > google_compute_disk
Terraform by HashiCorp > Compute Engine > Resources > google_compute_attached_disk
Terraform by HashiCorp > Configuration language > Resources > lifecycle.ignore_changes
Terraform by HashiCorp > Configuration language > Resources > When to Use for_each Instead of count
Terraform by HashiCorp > Configuration language > Resources > The each Object
Terraform by HashiCorp > Configuration language > Resources > Using Sets
$ cat my_instances.tf
resource "google_compute_disk" "test-d01-data" {
  name = "test-d01-data"
  type = "pd-ssd"
  size = 60
  zone = "europe-west3-c"
}

resource "google_compute_disk" "test-d01-data-disk" {
  name = "test-d01-data-disk"
  type = "pd-standard"
  size = 15
  zone = "europe-west3-c"
}

resource "google_compute_disk" "test-d01-commitlog-disk" {
  name = "test-d01-commitlog-disk"
  type = "pd-ssd"
  size = 30
  zone = "europe-west3-c"
}

resource "google_compute_disk" "test-d02-data" {
  name = "test-d02-data"
  type = "pd-ssd"
  size = 60
  zone = "europe-west3-c"
}

resource "google_compute_disk" "test-d02-data-disk" {
  name = "test-d02-data-disk"
  type = "pd-standard"
  size = 15
  zone = "europe-west3-c"
}

module "test-d01" {
  source        = "./instance"
  instance_name = "test-d01"
  disks = [
    google_compute_disk.test-d01-data.name,
    google_compute_disk.test-d01-data-disk.name,
    google_compute_disk.test-d01-commitlog-disk.name
  ]
}

module "test-d02" {
  source        = "./instance"
  instance_name = "test-d02"
  disks = [
    google_compute_disk.test-d02-data.name,
    google_compute_disk.test-d02-data-disk.name
  ]
}
$ cat instance/main.tf
variable "instance_name" {}
variable "instance_zone" {
  default = "europe-west3-c"
}
variable "instance_type" {
  default = "n1-standard-1"
}
variable "instance_subnetwork" {
  default = "default"
}
variable "disks" {}

resource "google_compute_instance" "vm_instance" {
  name         = var.instance_name
  zone         = var.instance_zone
  machine_type = var.instance_type
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    subnetwork = var.instance_subnetwork
    access_config {
      # Allocate a one-to-one NAT IP to the instance
    }
  }
  lifecycle {
    ignore_changes = [attached_disk]
  }
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  for_each = toset(var.disks)
  disk     = each.key
  instance = google_compute_instance.vm_instance.self_link
}
$ terraform fmt
$ terraform init
$ terraform plan
$ terraform apply
You can also get the disks attached with count:

resource "google_compute_disk" "default" {
  # One disk per attachment, so each google_compute_attached_disk gets its own disk
  count = var.disks
  name  = "test-disk-${count.index}"
  type  = "pd-ssd"
  zone  = var.instance_zone
  # image = "debian-9-stretch-v20200805"
  labels = {
    environment = "dev"
  }
  physical_block_size_bytes = 4096
}

resource "google_compute_attached_disk" "vm_attached_disk" {
  count    = var.disks
  disk     = google_compute_disk.default[count.index].id
  instance = google_compute_instance.vm-instance.id
}
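For count = var.disks to work this way, the disks variable has to be a number rather than a list (the interactive plan earlier entered 2). A minimal declaration under that assumption:

variable "disks" {
  type    = number
  default = 2
}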
Related
I need to get the Terraform state file from the GCS backend and use it while I update the resource. Is it possible to overwrite the existing Terraform state file and fetch it on demand?
This is my main.tf:
provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
}

###################################################
########################################################

data "google_compute_default_service_account" "default" {
}

# create instance1
resource "random_id" "bucket_prefix" {
  byte_length = 8
}

# create instance
resource "google_compute_instance" "vm_instance" {
  name                      = var.instance_name
  machine_type              = var.machine_type
  zone                      = var.zone
  metadata_startup_script   = var.script
  allow_stopping_for_update = true
  #metadata = {
  #  enable-oslogin = "TRUE"
  #}
  service_account {
    email  = data.google_compute_default_service_account.default.email
    scopes = ["cloud-platform"]
  }
  boot_disk {
    initialize_params {
      image = var.image
      #image = "ubuntu-2004-lts" # TensorFlow Enterprise
      size  = 30
    }
  }
  # Install Flask
  tags = ["http-server", "allow-ssh-ingress-from-iap", "https-server"]
  network_interface {
    network = "default"
    access_config {
    }
  }
  guest_accelerator {
    #type = "nvidia-tesla-t4" // Type of GPU attached
    type  = var.type
    count = var.gpu_count
    #count = 2 // Number of GPUs attached
  }
  scheduling {
    on_host_maintenance = "TERMINATE"
    automatic_restart   = true // Need to terminate GPU on maintenance
  }
}
This is my variables.tfvars:
instance_name = "test-vm-v5"
machine_type = "n1-standard-16"
region = "europe-west4"
zone = "europe-west4-a"
image = "tf28-np-pandas-nltk-scikit-py39"
#image = "debian-cloud/debian-10"
project = "my_project"
network = "default"
type = ""
gpu_count = "0"
I want to create multiple instances by changing variables.tfvars, and I need to be able to modify an instance based on the name of the VM.
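One common pattern for this (a sketch, not from the original post; the variable name instances and its fields are illustrative) is to drive for_each from a map keyed by VM name, so individual instances can be added, changed, or removed in the .tfvars file by name:

variable "instances" {
  # Keyed by VM name so a single VM can be changed or removed by its name
  type = map(object({
    machine_type = string
    zone         = string
  }))
}

resource "google_compute_instance" "vm_instance" {
  for_each     = var.instances
  name         = each.key
  machine_type = each.value.machine_type
  zone         = each.value.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

# variables.tfvars
instances = {
  "test-vm-v5" = { machine_type = "n1-standard-16", zone = "europe-west4-a" }
  "test-vm-v6" = { machine_type = "n1-standard-4", zone = "europe-west4-a" }
}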
Hello, I am trying to add an existing service account to a new Google Compute Engine instance I am spinning up.
How can I attach the already existing account?
Here is what my file looks like:
provider "google" {
  project = var.project
  region  = "us-central1"
  zone    = "us-central1-c"
}

resource "google_compute_disk" "data_disk" {
  name = "data-test"
  size = 120
}

resource "google_compute_disk" "other_data_disk" {
  name = "other-data-test"
  size = 400
}

resource "google_compute_attached_disk" "data" {
  disk     = google_compute_disk.data_disk.id
  instance = google_compute_instance.test_vm.id
}

resource "google_compute_attached_disk" "other_data" {
  disk     = google_compute_disk.other_data_disk.id
  instance = google_compute_instance.test_vm.id
}

resource "google_compute_instance" "test_vm" {
  name             = "worker-test"
  machine_type     = "n2-standard-2"
  min_cpu_platform = "Intel Ice Lake"
  boot_disk {
    initialize_params {
      image = "ubuntu-2004-focal-v20210211"
    }
  }
  network_interface {
    network    = var.gcp_project_network_name
    subnetwork = var.subnet_name
    #access_config {
    #}
  }
  metadata_startup_script = file("./startup-script.sh")
}
What's the best way to add an already existing service account to the new VM?
I want to be able to create and destroy the VM without worrying about deleting this service account.
Add to this resource:
resource "google_compute_instance" "test_vm" {
  ...
  service_account {
    email  = REPLACE_WITH_EXISTING_SERVICE_ACCOUNT_EMAIL_ADDRESS
    scopes = ["cloud-platform"]
  }
  ...
}
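If you would rather not hard-code the address, one option (a sketch, not part of the original answer; the account_id value is a placeholder) is to look the existing service account up with the google_service_account data source. Since the account is only referenced, creating and destroying the VM never touches the account itself:

data "google_service_account" "existing" {
  # The existing service account's id (the part of its email before the @)
  account_id = "my-existing-sa"
}

resource "google_compute_instance" "test_vm" {
  # ...
  service_account {
    email  = data.google_service_account.existing.email
    scopes = ["cloud-platform"]
  }
}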
How do I provision multiple instances in GCP Compute Engine using Terraform? I've tried using the 'count' parameter in the resource block, but Terraform does not provision more than one instance because a VM with that particular name is created only once, on the first count iteration.
provider "google" {
  version     = "3.5.0"
  credentials = file("battleground01-5c86f5873d44.json")
  project     = "battleground01"
  region      = "us-east1"
  zone        = "us-east1-b"
}

variable "node_count" {
  default = "3"
}

resource "google_compute_instance" "appserver" {
  count        = "${var.node_count}"
  name         = "battleground"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    network = "default"
  }
}
In order for this to work, you would have to make a slight change in the way you are naming your compute instances:
provider "google" {
  version     = "3.5.0"
  credentials = file("battleground01-5c86f5873d44.json")
  project     = "battleground01"
  region      = "us-east1"
  zone        = "us-east1-b"
}

variable "node_count" {
  type    = number
  default = 3
}

resource "google_compute_instance" "appserver" {
  count        = var.node_count
  name         = "battleground-${count.index}"
  machine_type = "f1-micro"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    network = "default"
  }
}
As you are using the count meta-argument, the way to access the array index is the count.index attribute [1]. You had also set the node_count variable's default value as a string; even though Terraform would probably convert it to a number, make sure to use the right variable types.
[1] https://www.terraform.io/language/meta-arguments/count#the-count-object
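As an illustration (an addition, not part of the original answer), a splat expression over the counted resource confirms the generated names; with node_count = 3 this outputs battleground-0, battleground-1 and battleground-2:

output "instance_names" {
  value = google_compute_instance.appserver[*].name
}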
I need to dynamically create a variable number of riak instances, each with an attached disk, across multiple zones in GCP using Terraform.
Each attached disk must live in the same zone as its instance.
On terraform plan everything looked good, but when I ran apply, Terraform responded with an error saying that zone was undefined.
Okay, I thought, let's set the zone to be the same as the linked instance. No dice: cycle error. So I moved everything such that the information flows from the disk to the instance, but the cycle error persists. Here's the error:
│ Error: Cycle: module.riak_instances.google_compute_instance.riak_instance, module.riak_instances.google_compute_disk.data-disk
And the code in its current incarnation:
data "google_compute_zones" "zones" {
}

resource "google_compute_instance" "riak_instance" {
  count        = var.instance_count
  name         = "riak-${count.index + 1}"
  machine_type = var.machine_type
  zone         = google_compute_disk.data-disk[count.index].zone
  boot_disk {
    initialize_params {
      image = var.disk_image
      size  = var.instance_disk_size
    }
  }
  network_interface {
    network = "default"
  }
  attached_disk {
    source = google_compute_disk.data-disk[count.index].self_link
  }
  labels = {
    environment = var.environment
    owner       = var.owner
  }
}

resource "google_compute_disk" "data-disk" {
  count = var.instance_count
  name  = "riak-disk-${count.index + 1}"
  type  = "pd-balanced"
  size  = var.data_disk_size
  zone  = data.google_compute_zones.zones.names[count.index]
  labels = {
    environment = var.environment
    owner       = var.owner
  }
}
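One way to keep the information flowing in a single direction (a sketch, under the assumption that taking zones straight from google_compute_zones is acceptable for both resources) is to give the disk and the instance the same zone expression from the data source, so the only cross-resource reference left is the attached_disk link from the instance to the disk:

resource "google_compute_disk" "data-disk" {
  count = var.instance_count
  name  = "riak-disk-${count.index + 1}"
  type  = "pd-balanced"
  size  = var.data_disk_size
  zone  = data.google_compute_zones.zones.names[count.index % length(data.google_compute_zones.zones.names)]
}

resource "google_compute_instance" "riak_instance" {
  count        = var.instance_count
  name         = "riak-${count.index + 1}"
  machine_type = var.machine_type
  # Same zone expression as the disk, so the instance never reads the disk's
  # zone; disks and instances pair up by index.
  zone = data.google_compute_zones.zones.names[count.index % length(data.google_compute_zones.zones.names)]

  boot_disk {
    initialize_params {
      image = var.disk_image
      size  = var.instance_disk_size
    }
  }

  network_interface {
    network = "default"
  }

  attached_disk {
    source = google_compute_disk.data-disk[count.index].self_link
  }
}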
In my terraform configuration file, I define my resource like so:
resource "google_compute_instance" "test" {
  ...
  count = 2
}
What I now want is to create a load balancer that will balance between the two instances of my google_compute_instance. Unfortunately, I could not find anything related to this task in the documentation. It seems like google_compute_target_pool or google_compute_lb_ip_ranges have nothing to do with my problem.
You would have to use 'forwarding rules', as indicated in this Terraform document. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target instances. You can find the Cloud Platform documentation on forwarding rules here.
In common cases you can use something like the following:
resource "google_compute_instance" "test" {
  name         = "nlb-node${count.index}"
  zone         = "europe-west3-b"
  machine_type = "f1-micro"
  count        = 2
  boot_disk {
    auto_delete = true
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1604-lts"
      size  = 10
      type  = "pd-ssd"
    }
  }
  network_interface {
    subnetwork = "default"
    access_config {
      nat_ip = ""
    }
  }
  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}

resource "google_compute_http_health_check" "nlb-hc" {
  name               = "nlb-health-checks"
  request_path       = "/"
  port               = 80
  check_interval_sec = 10
  timeout_sec        = 3
}

resource "google_compute_target_pool" "nlb-target-pool" {
  name             = "nlb-target-pool"
  session_affinity = "NONE"
  region           = "europe-west3"
  instances        = google_compute_instance.test[*].self_link
  health_checks = [
    google_compute_http_health_check.nlb-hc.name
  ]
}

resource "google_compute_forwarding_rule" "network-load-balancer" {
  name                  = "nlb-test"
  region                = "europe-west3"
  target                = google_compute_target_pool.nlb-target-pool.self_link
  port_range            = "80"
  ip_protocol           = "TCP"
  load_balancing_scheme = "EXTERNAL"
}
You can get the load balancer's external IP via ${google_compute_forwarding_rule.network-load-balancer.ip_address}:
// output.tf
output "network_load_balancer_ip" {
  value = google_compute_forwarding_rule.network-load-balancer.ip_address
}
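After terraform apply, the address can then be read back with terraform output (assuming the output name above):

$ terraform output network_load_balancer_ip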