How to update a disk in GCP using Terraform? - google-cloud-platform

Is it possible to create a Terraform module that updates a specific resource created by another module?
Currently, I have two modules...
linux-system: creates a Linux VM with boot disks
disk-updater: which I plan to use to update the disks created by the first module
The reason is that I want to build a pipeline that performs disk operation tasks, such as disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried to use a data block to retrieve the existing disk details and pass them to the resource block, hoping to update the same disk, but it seems Terraform just tries to create a new disk with the same name, which is why I'm getting this error:
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong. Can someone advise how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.

updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into that other module, which is bad design: you would then have two definitions for the same resource, resulting in out-of-sync state files.
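For example, if the original linux-system module exposes the disk size as a variable, resizing becomes a normal change applied through that module. This is only a minimal sketch; the variable name, default, and module path are illustrative, not taken from the actual module:

# Inside the original linux-system module (illustrative):
variable "boot_disk_size" {
  description = "Boot disk size in GB"
  type        = number
  default     = 20
}

resource "google_compute_disk" "boot_disk" {
  name = "linux-boot-disk"
  zone = "europe-west2-b"
  size = var.boot_disk_size
}

# In the root configuration, bump the size and re-apply:
# module "linux-system" {
#   source         = "./modules/linux-system"
#   boot_disk_size = 25
# }

Increasing the size of a google_compute_disk is applied in place, while shrinking it forces the disk to be recreated.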

Related

How to define a disk in GCP in a regional scope?

I am new to GCP and trying to use the Java SDK to create a template.
I am using the following code snippet, and after executing it I get this error: "Disk must be in a regional scope".
InstanceTemplate instanceTemplate = new InstanceTemplate();

AttachedDisk attachedDisk = new AttachedDisk();
attachedDisk.setSource("");
List<AttachedDisk> list = new ArrayList<>();
list.add(attachedDisk);

InstanceProperties instanceProperties = new InstanceProperties();
instanceProperties.setDisks(list);
instanceProperties.setMachineType("e2");

List<NetworkInterface> listNetworkInterface = new ArrayList<>();
NetworkInterface networkInterface = new NetworkInterface();
networkInterface.setName("myname");
networkInterface.setNetwork("");
networkInterface.setSubnetwork("");
listNetworkInterface.add(networkInterface);
instanceProperties.setNetworkInterfaces(listNetworkInterface);

instanceTemplate
    .setName("instance-template-1")
    .setDescription("desc")
    .setProperties(instanceProperties);

compute.instanceTemplates().insert("", instanceTemplate).execute();
I am not able to understand what I am missing here.
You must first create the disk before you can attach it; it is not possible to create and attach a disk in the same call. Also check that the disk and the instance are in the same region.
To attach the disk, use the compute.instances.attachDisk method and include the URL of the persistent disk you have created.
If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.
After you create and attach a new disk to a VM, you must format and mount the disk so that the operating system can use the available storage space.
Note: Zonal disks must be specified with bare names; a zonal disk specified with a path (even a matching one) results in a "Disk must be in a regional scope" error.
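To illustrate that note, here is a rough sketch using the same Java client classes as the question; the disk name is a made-up example, and the snippet assumes the list variable from the code above:

import com.google.api.services.compute.model.AttachedDisk;

// Attach an existing zonal disk to the template by its bare name.
// Using a full projects/.../zones/.../disks/... URL here is what triggers
// the "Disk must be in a regional scope" error.
AttachedDisk dataDisk = new AttachedDisk();
dataDisk.setSource("my-existing-disk");   // bare disk name (example)
dataDisk.setType("PERSISTENT");
dataDisk.setMode("READ_WRITE");
list.add(dataDisk);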

Create instance using Terraform from GCP Marketplace

I'm trying to create a Terraform script to launch the fastai instance from the Marketplace.
I'm specifying the image name as:
boot_disk {
  initialize_params {
    image = "<image name>"
  }
}
When I use
click-to-deploy-images/deeplearning
from the URL
https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning
I get this error:
Error: Error resolving image name 'click-to-deploy-images/deeplearning': Could not find image or family click-to-deploy-images/deeplearning
on fastai.tf line 13, in resource "google_compute_instance" "default":
13: resource "google_compute_instance" "default" {
If I use
debian-cloud/debian-9
from the URL
https://console.cloud.google.com/marketplace/details/debian-cloud/debian-stretch?project=<>
it works.
Can we deploy the fastai image through Terraform?
I made a deployment from the deep learning Marketplace VM instance you shared and reviewed the source image [1]; you should be able to use that URL to deploy with Terraform. I also noticed a warning stating that the image is deprecated and that there is a newer version [2].
Hope this helps!
[1] sourceImage: https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200109
[2] https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200124
In this particular case, the name was "deeplearning-platform-release/pytorch-latest-gpu":
boot_disk {
  initialize_params {
    image = "deeplearning-platform-release/pytorch-latest-gpu"
    ...
  }
}
Now I'm able to create the instance.
To other newbies like me:
Apparently GCP Marketplace uses Deployment Manager, which is Google's own declarative tool for managing infrastructure. (I think modules are the closest abstraction to it in Terraform.)
Hence, there is no simple/single answer to the question in the title.
In my opinion, if you start from scratch and/or can afford the effort and the time, it is best to use Terraform modules instead of GCP Marketplace solutions, if such modules exist.
However, chances are good that you are importing existing infrastructure and cannot just replace it immediately (or there is no such module).
In that case, I think the best you can do is go to Deployment Manager in the Google console and open the particular deployment you need to import.
At this point you can see what resources make up the deployment. There will probably be VM template(s), VM(s), firewall rule(s), etc.
Clicking on the VM instance and the template will show you a lot of useful details.
Most importantly, you can deduce what image was used.
E.g., in my case it showed:
sourceImage https://www.googleapis.com/compute/v1/projects/openvpn-access-server-200800/global/images/aspub275
From this I could define (based on an answer on issue #7319):
data "google_compute_image" "openvpn_server" {
  name    = "aspub275"
  project = "openvpn-access-server-200800"
}
which I could in turn use in a google_compute_instance resource.
This will force a recreation of the VM, though.
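For completeness, a rough sketch of how that data source can feed the instance's boot disk; the resource name, machine type, and zone below are illustrative, not taken from the actual deployment:

resource "google_compute_instance" "openvpn_server" {
  name         = "openvpn-server"
  machine_type = "e2-small"
  zone         = "europe-west2-b"

  boot_disk {
    initialize_params {
      image = data.google_compute_image.openvpn_server.self_link
    }
  }

  network_interface {
    network = "default"
  }
}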

Can I have Terraform keep the old versions of objects?

New to Terraform, so perhaps it is just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy, I will want to upload version02, then version03, etc. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't officially support something like what I'm trying to do here.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
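For example, with Consul that could look roughly like this; the key path and names are invented for illustration:

data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/current_version"
  }
}

# Referenced elsewhere instead of a -var argument:
#   "${data.consul_keys.app.var.archive_name}"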
Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole lifecycle, meaning Terraform will also replace the file if it sees any changes to it.
What you may be looking for instead is the null_resource. You can use it to run a local-exec provisioner that uploads the file with a script. That way, the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would remain part of your terraform apply step.
Here is an outline of the null_resource (note that triggers is a map, not a list):
resource "null_resource" "upload_to_s3" {
  depends_on = ["<any resource that should already be created before upload>"]
  ...
  triggers = {
    "<trigger name>" = "<a resource change that must have happened so terraform starts the upload>"
  }
  provisioner "local-exec" {
    command = "<command to upload local package to s3>"
  }
}
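Filled in with the AWS CLI, that could look something like this; the trigger is just one illustrative choice, and filemd5 requires Terraform 0.12+:

resource "null_resource" "upload_to_s3" {
  # Re-run the upload whenever the local package changes.
  triggers = {
    package_hash = filemd5("version01.zip")
  }

  provisioner "local-exec" {
    command = "aws s3 cp version01.zip s3://mybucket-app-versions/version01.zip"
  }
}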

Terraform looping a module

I have a module within my Terraform file that creates some database servers and does a few things.
First, it creates an auto scaling group that uses a specific image, then it creates some EBS volumes and attaches them, and then it adds some Lambda code so that on launch the instances get registered in Route 53. So in all, about 80 lines of text.
Extract
module "systemt-sql-db01" {
source = "localmodules/tf-aws-asg"
name = "${var.envname}-sys-db01"
envname = "${var.envname}"
service = "dbpx"
ami_id = "${data.aws_ami.app_sqlproxy.id}"
user_data = "${data.template_cloudinit_config.config-enforcement-sqlproxy.rendered}"
#subnets = ["${module.subnets-enforcement.web_private_subnets}"]
subnets = ["${element(module.subnets-enforcement.web_private_subnets, 1)}"]
security_groups = ["${aws_security_group.unfiltered-egress-sg.id}", "${aws_security_group.sysopssg.id}", "${aws_security_group.system-sqlproxy.id}"]
key_name = "${var.keypair}"
load_balancers = ["${var.envname}-enf-dbpx-int-elb"]
iam_instance_profile = "${module.iam_profile_generic.profile_arn}"
instance_type = "${var.enforcement_instancesize_dbpx}"
min = 0
max = 0
}
I then have two parameter files: one that I use when launching to pre-production and one used when launching to production. I don't want these to contain anything other than variables.
The problem is that for pre-production I need to call the module twice, but for production I need it called three times.
People talk about a count argument for modules, but I don't think this is possible yet. Can anyone suggest other ways to do this? What I would like is to set a list variable of all the DB ASG names in my parameter file, and then loop through it, calling the module once for each entry.
I hope that makes sense?
Thank you.
EDIT: Looping over modules is in beta for Terraform 0.13 (https://discuss.hashicorp.com/t/terraform-0-13-beta-released/9555).
This is a highly requested feature in Terraform and, as mentioned, it is not yet supported. Later releases of Terraform v0.12 were announced to introduce for and for_each (https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each).
I had a similar problem where I had to create multiple KMS keys for multiple accounts from a base KMS module. I ended up creating a second module that uses the core KMS module; this second module contained many instances of the core module but only required me to input the account details once.
This is still not ideal, but it worked well enough without overcomplicating things.
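With Terraform 0.13, looping over the module could look roughly like this; the variable name and values are illustrative, and the remaining module arguments are omitted for brevity:

variable "db_asg_names" {
  type    = list(string)
  default = ["db01", "db02"]   # set per environment in the parameter file
}

module "system-sql-db" {
  source   = "localmodules/tf-aws-asg"
  for_each = toset(var.db_asg_names)

  name    = "${var.envname}-sys-${each.key}"
  envname = var.envname
  # ... the rest of the arguments as in the single-module example above ...
}

Each entry in db_asg_names then produces its own instance of the module, addressable as module.system-sql-db["db01"] and so on.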

How to create a VM snapshot using pyvmomi

I have the task of implementing a basic backup and recovery system within a Django app. I have heard of pyvmomi but have never used it before.
My specific tasks at hand are:
1) make a call to a vCenter, pass the VM name, and request a snapshot
2) obtain the file location of the snapshot
3) upload the snapshot file into an OpenStack Swift object store
What is the actual syntax for creating a VM snapshot using pyvmomi?
Also, what is the syntax to request the actual snapshot file from vCenter?
https://github.com/rreubenur/vmware-pyvmomi-examples/blob/master/create_and_remove_snapshot.py
This should be helpful.
The snapshot task result itself contains a MoRef to the snapshot created, so you can get a reference to the created snapshot.
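A minimal sketch of the snapshot call itself, assuming standard pyvmomi; the host, credentials, and VM name are placeholders, and error handling and SSL options are omitted:

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholders: replace with real vCenter details (an sslContext may be needed).
si = SmartConnect(host="vcenter.example.com", user="user", pwd="password")
content = si.RetrieveContent()

# Find the VM by name via a container view (simple linear search).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-vm")

# Create the snapshot; the completed task's result is the MoRef of the snapshot.
task = vm.CreateSnapshot_Task(name="backup-snapshot",
                              description="created via pyvmomi",
                              memory=False, quiesce=True)
WaitForTask(task)
snapshot = task.info.result  # reference to the created snapshot

Disconnect(si)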