Unable to create Google Compute disk using Terraform - google-cloud-platform

I want to create a Google Compute Engine disk, so to achieve that I wrote the code below:
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "us-central1-a"
image = "centos-7-v20210609"
physical_block_size_bytes = 20480
}
When I run terraform apply it shows the following error.
How can I fix this?

As described in the documentation:
physical_block_size_bytes - (Optional) Physical block size of the persistent disk, in bytes. If not present in a request, a default value is used. Currently supported sizes are 4096 and 16384, other sizes may be added in the future. If an unsupported value is requested, the error message will list the supported values for the caller's project.
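In other words, 20480 is not a supported block size: set physical_block_size_bytes to 4096 or 16384, or omit the argument to get the default. A minimal sketch of the corrected resource, keeping the rest of your settings unchanged:

resource "google_compute_disk" "default2" {
  name  = "test-disk"
  type  = "pd-balanced"
  zone  = "us-central1-a"
  image = "centos-7-v20210609"
  # 4096 and 16384 are the currently supported values; omit this argument for the default
  physical_block_size_bytes = 4096
}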

Related

How to update disk in GCP using terraform?

Is it possible to create a Terraform module that updates a specific resource which is created by another module?
Currently, I have two modules...
linux-system: which creates a Linux VM with boot disks
disk-updater: which I'm planning to use to update the disks created by the first module
The reason behind this is that I want to create a pipeline that will perform disk operation tasks, like disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried to use a data block to retrieve the existing disk details and pass them to the resource block, hoping to update the same disk, but it seems it just tries to create a new disk with the same name, which is why I'm getting this error:
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong. Can someone give me advice on how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.
updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into that module, which is bad design, as you would then have two definitions for the same resource, resulting in out-of-sync state files.
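In practice that means resizing the disk where it was originally declared, in your case inside the linux-system module, and re-applying that configuration. A minimal sketch (the resource name here is hypothetical; use whatever your module actually calls it):

# Inside the original linux-system module, where the disk is declared
resource "google_compute_disk" "boot_disk" {
  name = "linux-boot-disk"
  zone = "europe-west2-b"
  size = 25  # change the size here and run terraform apply; persistent disks can only be grown
}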

Pub Sub Lite topics with Peak Capacity Throughput option

We are using Pub/Sub Lite instances along with reservations, and we want to deploy them via Terraform. In the UI, while creating a Pub/Sub Lite topic, we get an option to specify Peak Publish Throughput (MiB/s) and Peak Subscribe Throughput (MiB/s), which is not available in the resource "google_pubsub_lite_topic" as per this doc: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/pubsub_lite_topic.
resource "google_pubsub_lite_reservation" "pubsub_lite_reservation" {
name = var.lite_reservation_name
project = var.project
region = var.region
throughput_capacity = var.throughput_capacity
}
resource "google_pubsub_lite_topic" "pubsub_lite_topic" {
name = var.topic_name
project = var.project
region = var.region
zone = var.zone
partition_config {
count = var.partitions_count
capacity {
publish_mib_per_sec = var.publish_mib_per_sec
subscribe_mib_per_sec = var.subscribe_mib_per_sec
}
}
retention_config {
per_partition_bytes = var.per_partition_bytes
period = var.period
}
reservation_config {
throughput_reservation = google_pubsub_lite_reservation.pubsub_lite_reservation.name
}
}
We currently use the above Terraform script to create the Pub/Sub Lite instance. The problem is that we are specifying the throughput capacity instead of setting a peak throughput capacity, and the capacity block is a required field. Is there any workaround for this? We want the topic to scale its throughput dynamically but with a peak limit, since we are setting a fixed value on the Lite reservation.
If you check the bottom of your Google Cloud console screenshot, you can see it suggests having 4 partitions with 4 MiB/s publish and subscribe throughput.
Therefore your Terraform partition_config should match this: count should be 4 for the 4 partitions, with a capacity of 4 MiB/s publish and 4 MiB/s subscribe for each partition.
The "peak throughput" in the web UI is just a convenience to help you choose these numbers. The underlying Pub/Sub Lite API doesn't have this field, which is why there is no Terraform setting either. You will notice the sample docs require a per-partition setting just like Terraform, e.g. https://cloud.google.com/pubsub/lite/docs/samples/pubsublite-create-topic.
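As a sketch, with the variables from your question replaced by the console's suggested values (purely illustrative; adjust to whatever your screenshot recommends):

# Inside your existing google_pubsub_lite_topic resource
partition_config {
  count = 4                     # the 4 partitions suggested by the console
  capacity {
    publish_mib_per_sec   = 4   # per-partition values, not a total peak
    subscribe_mib_per_sec = 4
  }
}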
I think the only other alternative would be to create a reservation attached to your topic with enough throughput units for the desired capacity, and then omit the capacity block in Terraform entirely and let the reservation decide.
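If your provider version allows omitting the capacity block (you mention it is required for you, so treat this as an untested sketch), that alternative could look roughly like this, reusing the reservation from your question:

resource "google_pubsub_lite_topic" "pubsub_lite_topic" {
  name    = var.topic_name
  project = var.project
  region  = var.region
  zone    = var.zone

  partition_config {
    count = var.partitions_count
    # no capacity block; throughput comes from the attached reservation
  }

  retention_config {
    per_partition_bytes = var.per_partition_bytes
    period              = var.period
  }

  reservation_config {
    throughput_reservation = google_pubsub_lite_reservation.pubsub_lite_reservation.name
  }
}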

How to define a disk in GCP in a regional scope?

I am new to GCP and trying to use the Java SDK to create an instance template.
I am using the following code snippet, and after executing it I'm getting this error: "Disk must be in a regional scope".
InstanceTemplate instanceTemplate = new InstanceTemplate();

AttachedDisk attachedDisk = new AttachedDisk();
attachedDisk.setSource("");
List<AttachedDisk> list = new ArrayList<>();
list.add(attachedDisk);

InstanceProperties instanceProperties = new InstanceProperties();
instanceProperties.setDisks(list);
instanceProperties.setMachineType("e2");

List<NetworkInterface> listNetworkInterface = new ArrayList<>();
NetworkInterface networkInterface = new NetworkInterface();
networkInterface.setName("myname");
networkInterface.setNetwork("");
networkInterface.setSubnetwork("");
listNetworkInterface.add(networkInterface);
instanceProperties.setNetworkInterfaces(listNetworkInterface);

instanceTemplate
    .setName("instance-template-1")
    .setDescription("desc")
    .setProperties(instanceProperties);

compute.instanceTemplates().insert("", instanceTemplate).execute();
I am not able to understand what I am missing here.
You must first create the disk before you can attach it. It is not possible to create and attach a disk at the same time. Also, check that the disk and the instance are in the same region.
To attach the disk, use the compute.instances.attachDisk method and include the URL to the persistent disk that you’ve created.
If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.
After you create and attach a new disk to a VM, you must format and mount the disk, so that the operating system can use the available storage space.
Note: Zonal disks must be specified with bare names; a zonal disk specified with a path (even a matching one) results in a "Disk must be in a regional scope" error.

How to get AWS instance total memory size using python?

I want to run a Python startup script inside an AWS instance in which I want to get the total memory size selected during instance creation.
I have tried the free -h and grep MemTotal /proc/meminfo commands, but the problem with these commands is that I get somewhat less RAM than the memory size I selected while creating the instance (maybe due to system use). I want to get the exact memory size I selected while creating the instance, e.g. "2 GB" for "t2.small", "4 GB" for "c5.large", etc.
Also, there is no metadata URL available to get AWS instance memory size.
Is there any way to do that?
I haven't done this myself, but I think the process might be something like:
import boto3

client = boto3.client('ec2')
# The attribute response is nested: {'InstanceType': {'Value': 't2.small'}, ...}
instance_type = client.describe_instance_attribute(Attribute='instanceType', InstanceId='YOUR_ID')['InstanceType']['Value']
details = client.describe_instance_types(InstanceTypes=[instance_type])
# describe_instance_types returns a list of type descriptions, so index into it
memory = details['InstanceTypes'][0]['MemoryInfo']['SizeInMiB']
You'll need to give the instance the right IAM permissions to get the data (it's not the same as the http://169.254.169.254 metadata).
You can also get the instance type from http://169.254.169.254/2020-10-27/meta-data/instance-type/ but I'm trying to go for a fully python solution with boto3.

Additional 500 GB persistent disk attached by default

I am trying to run a workflow on GCP using Nextflow. The problem is, whenever an instance is created to run a process, it has two disks attached: the boot disk (default 10 GB) and an additional 'google-pipelines-worker' disk (default 500 GB). When I run multiple processes in parallel, multiple VMs are created and each has an additional 500 GB disk attached. Is there any way to customize the 500 GB default?
nextflow.config
process {
    executor = 'google-pipelines'
}

cloud {
    driver = 'google'
}

google {
    project = 'my-project'
    zone = 'europe-west2-b'
}
main.nf
#!/usr/bin/env nextflow

barcodes = Channel.from(params.analysis_cfg.barcodes.keySet())

process run_pbb {
    machineType 'n1-standard-2'
    container 'eu.gcr.io/my-project/container-1'

    output:
    file 'this.txt' into barcodes_ch

    script:
    """
    sleep 500
    """
}
The code provided is just a sample. Basically, this will create a VM instance with an additional 500 GB standard persistent disk attached to it.
Nextflow updated this in the 19.09.0-edge release, so I will leave this here.
First run export NXF_VER=19.09.0-edge.
Then, in the 'process' scope, you can declare a disk directive like so:
process this_process {
    disk "100GB"
}
This updates the size of the attached persistent disk (default: 500 GB).
There is still no functionality to change the size of the boot disk (default: 10 GB).
I have been checking the Nextflow documentation, where it is specified:
The compute nodes local storage is the default assigned by the Compute Engine service for the chosen machine (instance) type. Currently it is not possible to specify a custom disk size for local storage.