How to define a disk in GCP in a regional scope? - google-cloud-platform

I am new to GCP and am trying to use the Java SDK to create an instance template.
I am using the following code snippet, and after executing it I'm getting this error: "Disk must be in a regional scope".
InstanceTemplate instanceTemplate = new InstanceTemplate();
AttachedDisk attachedDisk = new AttachedDisk();
attachedDisk.setSource("");
List<AttachedDisk> list = new ArrayList<>();
list.add(attachedDisk);
InstanceProperties instanceProperties = new InstanceProperties();
instanceProperties.setDisks(list);
instanceProperties.setMachineType("e2");
List<NetworkInterface> listNetworkInterface = new ArrayList<>();
NetworkInterface networkInterface = new NetworkInterface();
networkInterface.setName("myname");
networkInterface.setNetwork("");
networkInterface.setSubnetwork("");
listNetworkInterface.add(networkInterface);
instanceProperties.setNetworkInterfaces(listNetworkInterface);
instanceTemplate
.setName("instance-template-1")
.setDescription("desc")
.setProperties(instanceProperties);
compute.instanceTemplates().insert("", instanceTemplate).execute();
I am not able to understand what I am missing here.

You must first create the disk before you can attach it; it is not possible to create and attach a disk in the same call. Also check that the disk and the instance are in the same region.
To attach the disk, use the compute.instances.attachDisk method and include the URL of the persistent disk that you've created.
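For illustration, here is a minimal sketch of that create-then-attach flow using the Python client (google-api-python-client); the project, zone, and resource names are placeholders, and production code should wait for each operation to finish before continuing:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'my-project'   # placeholder
zone = 'us-central1-a'   # placeholder

# Step 1: create the persistent disk.
compute.disks().insert(
    project=project,
    zone=zone,
    body={'name': 'my-disk', 'sizeGb': '100'},
).execute()

# Step 2: attach the existing disk to an instance by URL.
compute.instances().attachDisk(
    project=project,
    zone=zone,
    instance='my-instance',
    body={'source': f'projects/{project}/zones/{zone}/disks/my-disk'},
).execute()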
If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.
After you create and attach a new disk to a VM, you must format and mount the disk, so that the operating system can use the available storage space.
Note: Zonal disks must be specified with bare names; specifying a zonal disk with a path (even a matching one) results in a "Disk must be in a regional scope" error.
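Illustrating that note with the Python client (reusing the compute client from the sketch above; all names are placeholders), a template should reference the zonal disk by its bare name:

template_body = {
    'name': 'instance-template-1',
    'properties': {
        'machineType': 'e2-medium',  # bare machine type name
        'disks': [{
            'boot': True,
            # Bare name only: a path such as 'zones/us-central1-a/disks/my-disk'
            # triggers the "Disk must be in a regional scope" error.
            'source': 'my-disk',
        }],
        'networkInterfaces': [{'network': 'global/networks/default'}],
    },
}

compute.instanceTemplates().insert(
    project='my-project', body=template_body).execute()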

Related

How to update disk in GCP using terraform?

Is it possible to create a Terraform module that updates a specific resource which is created by another module?
Currently, I have two modules...
linux-system: which creates a Linux VM with boot disks
disk-updater: which I'm planning to use to update the disks I created with the first module
The reason behind this is that I want to create a pipeline that will perform disk operation tasks, like disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried to use a data block to retrieve the existing disk details and pass them to the resource block, hoping to update the same disk, but it seems it just tries to create a new disk with the same name, which is why I'm getting this error:
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong; can someone advise me how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.
updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into the other module, which is bad design: you would then have two definitions for the same resource, resulting in out-of-sync state files.

Is it safe to apply Terraform plan when it says the database instance must be replaced?

I'm importing existing resources (AWS RDS), but the terraform plan command showed this summary:
# aws_db_instance.my_main_db must be replaced
+/- resource "aws_db_instance" "my_main_db" {
      ~ address           = x
        allocated_storage = x
      + apply_immediately = x
      ~ arn               = x
      ~ username          = x
      + password          = x
        # (other arguments with a lot of +/- and ~)
    }
my_main_db is online with persistent data. My question is as the title says: is it safe for the existing database to run terraform apply? I don't want to lose all my customer data.
"Replace" in Terraform's terminology means to destroy the existing object and create a new one to replace it. The +/- symbol (as opposed to -/+) indicates that this particular resource will be replaced in the "create before destroy" mode, where there will briefly be two database instances existing during the operation. (This may or may not be possible in practice, depending on whether the instance name is changing as part of this operation.)
For aws_db_instance in particular, destroying an instance is equivalent to deleting the instance in the RDS console: unless you have a backup of the contents of the database, it will be lost. Even if you do have a backup, you'll need to restore it via the RDS console or API rather than with Terraform because Terraform doesn't know about the backup/restore mechanism and so its idea of "create" is to produce an entirely new, empty database.
To sum up: applying a plan like this directly is certainly not generally "safe", because Terraform is planning to destroy your database and all of the contents along with it.
If you need to make changes to your database that cannot be performed without creating an entirely new RDS instance, you'll usually need to make those changes outside of Terraform using RDS-specific tools so that you can implement some process for transferring data between the old and new instances, whether that be backup and then restore (which will require a temporary outage) or temporarily running both instances and setting up replication from old to new until you are ready to shut off the old one. The details of such a migration are outside of Terraform's scope, because they are specific to whatever database engine you are using.
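For example, the backup-then-restore path might be scripted outside Terraform with boto3; the region, instance, and snapshot identifiers below are placeholders:

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Snapshot the existing instance before letting Terraform replace it.
rds.create_db_snapshot(
    DBSnapshotIdentifier='my-main-db-pre-migration',
    DBInstanceIdentifier='my-main-db',
)
rds.get_waiter('db_snapshot_available').wait(
    DBSnapshotIdentifier='my-main-db-pre-migration')

# Later, restore the data into the replacement instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier='my-main-db-new',
    DBSnapshotIdentifier='my-main-db-pre-migration',
)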
It's most likely not safe, but really only someone familiar with the application can make that decision. Look at the properties and what is going to change or be recreated. Unless you are comfortable with all of those properties changing, it's not safe.

Attaching the disk with same device-path or UUID

I had one disk attached to an instance and I had taken a snapshot of it.
Now, after a few days, the disk went bad and I want to restore it.
What I have implemented is:
Store metadata of the snapshot when it is taken
When a restore request comes, create a new disk from the snapshot
Detach the original disk (say, attached inside the host as /dev/sdz)
Attach the newly created disk to the same instance
This way, the user gets the impression that the disk has been restored from the snapshot they had taken.
Now, the problem I'm seeing with this approach is: since the original disk was attached as /dev/sdz, after the detach and attach, the new disk should be seen as /dev/sdz ONLY;
otherwise the application or upper layers may break.
So, is there any provision in the google-cloud APIs to handle this?
PLEASE NOTE: I'm using the google-api-python-client library and the code is in Python.
I believe the name you are referring to is the "index" of the disk; I am not sure of that, however. If that is the case, you would just need to make sure the index of the new disk matches the index of the disk you removed.
That being said, there are better ways to do this if you can modify your fstab. For example, you can use the "deviceName" by mounting /dev/disk/by-id/whatever, in which case you would just need to make sure that the new disk has the same deviceName as the old disk.
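For instance, when attaching the restored disk with google-api-python-client, you can pin the deviceName; all names below are placeholders:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Attach the restored disk under the same deviceName as the old disk, so the
# stable symlink /dev/disk/by-id/google-<deviceName> points at the new disk.
compute.instances().attachDisk(
    project='my-project',
    zone='us-central1-a',
    instance='my-instance',
    body={
        'source': 'projects/my-project/zones/us-central1-a/disks/restored-disk',
        'deviceName': 'data-disk',  # must match the old disk's deviceName
    },
).execute()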
Another option is to use the UUID of the filesystem to mount. Since these new disks are snapshots of the old disk, they will have the same UUID.
ls -l /dev/disk/by-uuid/
That should not change unless you reformat the partition entirely. In your fstab, instead of /dev/sdz1, you would use UUID=ef7481ea-a6f9-425b-940f-56e9c93492dd or whatever.

Modify AMI attribute [create volume] via AWS API or CLI

I have shared a bunch of AMIs from an AWS account to another.
I used this EC2conn1.modify_image_attribute(AMI_id, operation='add', attribute='launchPermission', user_ids=[second_aws_account_id]) to do it.
But by only adding launch permission for the 2nd account, I can launch an instance, but I cannot copy the shared AMI to another region [in the 2nd account].
When I tick the checkbox to "create volume" in the UI of the 1st account, I can copy the shared AMI from the 2nd.
I can modify the launch permissions using the modify_image_attribute function from boto.
The documentation says attribute (string) – The attribute you wish to change, but I understand that it can only change the launch permissions and add an account.
Yet, get_image_attribute has 3 options; valid choices are: launchPermission, productCodes, blockDeviceMapping.
So, is there a way to programmatically change it from the API along with the launch permissions, or has it not been implemented yet?
The console uses the API, so there's almost nothing you can do in the console that you can't do using the API.
Remember that an AMI is just a configuration entity -- basic launch configuration, linked to (not containing) one or more backing snapshots, which are technically separate entities.
The console is almost certainly making an additional request to the ModifySnapshotAttribute API when it offers to optionally "add Create Volume permissions to the following associated snapshot."
See also http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
Presumably, copying a snapshot to another region relies on the same "Create Volume" permission (indeed, you'll see that a copied snapshot has a fake source volume ID, presumably an artifact of the copying process).
Based on the accepted answer, this is the code I wrote for anyone interested.
# Add copy permission to the image's snapshot
# Find the snapshot of the specific AMI
image_object = EC2conn.get_image(AMI_id)

# Grab the block device mapping dynamically
ami_devices = []
for key in image_object.block_device_mapping.iterkeys():
    # print key  # debug
    ami_devices.append(key)
# print ami_devices  # debug

for ami_device in ami_devices:
    snap_id = image_object.block_device_mapping[ami_device].snapshot_id
    # Add permission
    EC2conn.modify_snapshot_attribute(snap_id, attribute='createVolumePermission',
                                      operation='add', user_ids=second_aws_account_id)
    print "{0} [{1}] Permission added to snapshot".format(AMI_name, snap_id)

How to create a VM snapshot using pyvmomi

I have a task of implementing a basic backup and recovery system within a Django app. I have heard of pyvmomi but have never used it before.
My specific tasks at hand are:
1) make a call to a vCenter, pass the vm name, and request to make a snapshot
2) obtain the file location of the snapshot
3) and upload the snapshot file into an OpenStack Swift object store
What is the actual syntax of creating a vm snapshot using pyvmomi?
Also, what is the syntax to request the actual snapshot file from vCenter?
https://github.com/rreubenur/vmware-pyvmomi-examples/blob/master/create_and_remove_snapshot.py
This should be helpful
The snapshot task result itself contains a MoRef to the snapshot that was created, so you can get a reference to the new snapshot.
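As a rough sketch of the snapshot call itself (the vCenter host, credentials, VM name, and snapshot name are placeholders; error handling omitted):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to vCenter (unverified TLS for brevity; verify certificates in production).
context = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='user', pwd='secret',
                  sslContext=context)

# Find the VM by name via a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if == 'my-vm')

# Create the snapshot and wait for the task to complete.
task = vm.CreateSnapshot_Task(name='backup-snap',
                              description='snapshot for backup',
                              memory=False, quiesce=True)
WaitForTask(task)

# The task result is the MoRef of the newly created snapshot.
snapshot_ref = task.info.result

Disconnect(si)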