GCE: create instance template out of disk snapshot

Is it possible? The official manual contains this:
Deterministic instance templates
Use custom images or disk snapshots rather than startup scripts.
But there is no more info on how to do this. Has anyone already done that? Thanks in advance.

It's possible. Here's the process:
1. Create your snapshot.
2. Create a disk from your snapshot.
3. Create an image from the disk.
4. Create an instance template from the image.
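For reference, here is a minimal gcloud sketch of those four steps. All resource names (my-disk, my-snapshot, my-image-disk, my-image, my-template) and the zone are placeholders; note that creating the image requires the disk to be detached from any running instance (or the --force flag).

$ gcloud compute disks snapshot my-disk --zone=us-central1-a --snapshot-names=my-snapshot
$ gcloud compute disks create my-image-disk --source-snapshot=my-snapshot --zone=us-central1-a
$ gcloud compute images create my-image --source-disk=my-image-disk --source-disk-zone=us-central1-a
$ gcloud compute instance-templates create my-template --image=my-image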

So my solution is the following:
I created an instance.
Installed the required services on that instance.
Created the image from the disk using the steps mentioned on this link.
With that image, created a new template.
Startup scripts run whenever an instance boots up or restarts, so the only way I found to run one just once is the same one you have tried, i.e. deleting it from the metadata. In my setup, when using startup scripts, I don't reboot the instances; I delete them and create new ones if required.
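If you go the delete-from-metadata route, something like this should do it (instance name and zone are placeholders):

$ gcloud compute instances remove-metadata my-instance --zone=us-central1-a --keys=startup-script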

I don't know whether it is a new feature, but you can simply replicate an instance by:
Creating a snapshot
When the snapshot is ready, click on it (it'll show the details)
In the breadcrumbs bar, click on Create Instance.
It'll take a minute or so and the new instance will start up, and that's all!
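The same flow seems to be possible from the command line; a sketch, assuming the --source-snapshot flag of instances create and placeholder names:

$ gcloud compute instances create my-clone --zone=us-central1-a --source-snapshot=my-snapshot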

Related

Automating copy of Google Cloud Compute instances between projects

I need to move more than 50 compute instances from a Google Cloud project to another one, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
Get all instances in source project
For each instance get machine sizing and the list of attached disks
For each disk create a disk-image
Create a new instance in the target project, with the machine sizing recorded earlier, using the first disk-image as source
Attach remaining disk-images to new instance (in the same order they were created)
I've been checking on both Terraform and Ansible, but I have the feeling that none of them supports creating disk images, meaning that I could only use them for the last 2 steps.
I'd like to avoid writing a shell script because it doesn't seem a robust option, but I can't find tools that can help me doing the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems like you can't do it on already created machines but you need to clone them to change the network.
There is no GCP tool to migrate instances from one project to another.
I was able to find, however, an Ansible module to create images.
In Ansible:
You can specify the “source_disk” when creating a “gcp_compute_image”, as mentioned here.
Frederic
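If you do end up scripting the whole flow, a rough bash sketch of the per-instance steps might look like the following. The project IDs and zone are placeholders, a single zone is assumed, the boot disk is assumed to share the instance's name, and secondary disks and error handling are omitted.

SRC_PROJECT=source-project
DST_PROJECT=target-project
ZONE=us-central1-a

for INSTANCE in $(gcloud compute instances list --project="$SRC_PROJECT" --format='value(name)'); do
  # Record the machine type of the source instance
  TYPE=$(gcloud compute instances describe "$INSTANCE" --project="$SRC_PROJECT" \
      --zone="$ZONE" --format='value(machineType.basename())')
  # Image the boot disk; --force allows imaging a disk that is still attached
  # to a running instance
  gcloud compute images create "${INSTANCE}-image" --project="$SRC_PROJECT" \
      --source-disk="$INSTANCE" --source-disk-zone="$ZONE" --force
  # Recreate the instance in the target project from that image
  gcloud compute instances create "$INSTANCE" --project="$DST_PROJECT" --zone="$ZONE" \
      --machine-type="$TYPE" --image="${INSTANCE}-image" --image-project="$SRC_PROJECT"
done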

How to read/get files from Google Cloud Compute Engine Disk without connecting into it?

I accidentally messed up the permissions of the file system, so attempting to use sudo (for example to read protected files) shows the message sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set.
The response from this answer (https://askubuntu.com/a/471503) suggests logging in as root to do so; however, I didn't set up a root password before, and this answer (https://stackoverflow.com/a/35017164/4343317) suggests using sudo passwd. Obviously I am stuck in an infinite loop between the two answers above.
How can I read/get the files from a Google Cloud Compute Engine disk without logging in to the VM (I have full control of the VM instance and the disk)? Is there another, "higher" way to log in as root (such as from the gcloud tool or the Google Cloud interface) to access the VM disk externally?
Thanks.
It looks like the following recipe may be of value:
https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk
What this article says is that you can shut down your VM, detach its boot disk, and then attach it as a data disk to a second VM. In that second VM you will have the ability to make changes. However, if you don't know what changes you need to make to restore the system to sanity, then, as @John Hanley says, you might want to use this mounting technique to copy off your work, destroy the tainted VM, recreate a fresh one, copy your work back in, and start from there.
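In gcloud terms, the detach/reattach recipe looks roughly like this (instance and disk names are placeholders; on default setups the boot disk has the same name as the instance):

$ gcloud compute instances stop broken-vm --zone=us-central1-a
$ gcloud compute instances detach-disk broken-vm --disk=broken-vm --zone=us-central1-a
$ gcloud compute instances attach-disk rescue-vm --disk=broken-vm --zone=us-central1-a
Then, on rescue-vm, run lsblk to find the partition, mount it, and fix or copy the files.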

create instance template from private repo

I'm trying to create a GCP instance template which has the most recent version of my repo on it. My repository is private and I can't figure out how to clone it in the instance groups. I don't think I can use SSH because the machines will be randomly destroyed and created, and therefore the generated keys will be inconsistent. What's the best way to do this?
An Instance Template is based on an Image. This image can be a clean Ubuntu/Windows/Debian copy or a custom image created by you.
That said, I can think of two ways for you to get your repository in there.
1. Using a custom image.
In essence, a snapshot of an instance with your latest code and dependencies installed on it.
There are two paths you can go with here.
a. Create a custom image whenever you clone the repository to the instance. You might need to do that for every update to the code.
b. An alternative is to use some sort of network file system (NFS/SMB). This will usually require more resources, like another server that is always available.
2. If you want to avoid creating images, or as a solution to the issue mentioned in 1a, you can set up a startup script to run on the server at boot (creation) time to clone/pull the latest copy.
There are Pros and Cons for both. I guess only you can tell what is best for you. I hope it gets you in the right direction.
Read more about creating an image here.
Read more about startup scripts here.
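As an illustration of option 2, here is a hedged sketch of a template whose startup script clones a private repo with a read-only deploy key stored in Secret Manager. The template, secret and repository names are hypothetical, Secret Manager is just one way to get credentials onto the box, and the instance's service account is assumed to have access to the secret.

$ gcloud compute instance-templates create app-template \
      --image-family=debian-12 --image-project=debian-cloud \
      --scopes=cloud-platform \
      --metadata=startup-script='#! /bin/bash
  mkdir -p /root/.ssh
  gcloud secrets versions access latest --secret=repo-deploy-key > /root/.ssh/id_ed25519
  chmod 600 /root/.ssh/id_ed25519
  GIT_SSH_COMMAND="ssh -i /root/.ssh/id_ed25519 -o StrictHostKeyChecking=accept-new" \
    git clone git@github.com:example/private-repo.git /opt/app'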

How do I get files from GCP VM?

I currently have a GCP VM where I tried to install something and hit a "no memory left" error on Ubuntu. I tried opening SSH again and it is not working.
P.S there is no problem with firewall/connection.
I just want a way to download the files that I had stored in the VM. Is there a way to do this without accessing the Terminal?
If you are not able to log in through the serial console, then the only option left is to retrieve the data from your old VM by creating a new one.
You can follow the steps below to copy the data from the affected (old) VM's disk.
1. Create a snapshot from the boot disk of the old VM.
2. Create a new VM. As the boot disk, use a Google public image (important: do not use the snapshot you created).
3. Once that instance is created, try to SSH into it just to test that you can access it. There should be no issue at this point, as this is a new instance running a fresh operating system.
4. In the newly created instance, click on the instance name (in the Console), then click ‘Edit’ at the top of the page to edit the machine.
5. In the ‘Additional Disks’ section, click ‘Add item’.
6. In the ‘Name’ drop-down select ‘Create disk’. In the window that opens, give the disk a name, and in the ‘Source snapshot’ drop-down select the snapshot you created in step 1. Then click ‘Create’.
7. Click ‘Save’ to save the instance's new configuration.
8. SSH into the new instance and run lsblk. You will see the new disk and its partition (it will most probably be named sdb1, but check this and take note).
9. Run the following, which creates a mount point at /mnt/newdisk and then mounts the additional disk partition on it. Substitute /dev/sdb1 below with the name of your partition if it is different:
$ sudo mkdir /mnt/newdisk && sudo mount -o discard,defaults /dev/sdb1 /mnt/newdisk
The snapshot's file system will now be mounted at /mnt/newdisk.
You should now be able to navigate the directories and retrieve any data.
I hope this helps you.
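For the record, steps 1, 2 and 6 can also be done from the command line; a sketch with placeholder names and zone:

$ gcloud compute disks snapshot old-vm --zone=us-central1-a --snapshot-names=old-vm-snap
$ gcloud compute disks create rescue-data --source-snapshot=old-vm-snap --zone=us-central1-a
$ gcloud compute instances create rescue-vm --zone=us-central1-a \
      --image-family=debian-12 --image-project=debian-cloud
$ gcloud compute instances attach-disk rescue-vm --disk=rescue-data --zone=us-central1-a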
The description and results of your problem do not quite make sense. However, let's assume that your instance is out of memory and you cannot connect to it with SSH.
1. Reboot the instance and try again. Installing software might cause an out-of-memory issue; rebooting should correct this.
2. Launch the instance with a larger machine type that has more memory. If this is a memory-size problem, this will correct it.
3. Detach the instance's disk and attach it to another instance that you can connect to. Mount the file system and copy off the files.
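The first two options map to gcloud like this (instance name, zone and machine type are placeholders; the machine type can only be changed while the instance is stopped):

$ gcloud compute instances reset my-instance --zone=us-central1-a
$ gcloud compute instances stop my-instance --zone=us-central1-a
$ gcloud compute instances set-machine-type my-instance --zone=us-central1-a --machine-type=e2-standard-4
$ gcloud compute instances start my-instance --zone=us-central1-a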
However, if instead your problem is out of disk space, this makes more sense.
Resize the instance's disk. In the Google Cloud Console, go to Compute Engine -> Disks. Click on the disk for your instance. Click EDIT. Under Size, enter a new, larger disk size. Now launch your instance. For most operating systems (Ubuntu, Debian, etc.) the OS will automatically resize the root file system. I wrote an article that covers this in detail.
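The same resize is available from the CLI (placeholder disk name and zone; disks can only be grown, never shrunk):

$ gcloud compute disks resize my-instance --zone=us-central1-a --size=50GB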
If you can't connect to the instance, you can always take a snapshot of the disk, create a copy from it, and mount that in a new instance to recover the data from there.

EC2 instance - complete reinstall

I have an EC2 instance up and running (Linux). I've made some installations I'd like to completely undo. What would be the best/simplest way to get back to a clean instance?
Start a new instance, running through your installation and configuration steps.
You can do this without terminating the old instance. This lets you look at configuration on the old instance in case you forgot how you set things up. It also lets you copy data or other files from the old instance to the new instance.
Once you're completely happy with the new instance, terminate the old instance.
This approach of starting the new before destroying the old didn't make sense back in the days of physical servers, but EC2 gives us new ways to think about things.
Also: Document and/or script your installation and configuration steps so you can easily reproduce them in the future on new instances. Think about separating your data onto a second EBS volume so it can be easily moved to a new instance.
You should be comfortable with testing your setup scripts/docs repeatedly until they work just right on a brand new instance.
Destroy it and create a new one using the same AMI, kernel, and any user-defined data/script that you passed to the original instance creation. Back up any of your own data to S3.
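In aws CLI terms (the AMI and instance IDs are placeholders), that is roughly:

$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --user-data file://setup.sh
$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0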