How to detach a disk in a Google Cloud TPU VM instance? - google-cloud-platform

I created a TPU-VM instance (not a normal Compute Engine instance) and attached an external disk to it using this command:
gcloud alpha compute tpus tpu-vm create TPU-VM-NAME \
--zone=europe-west4-a \
--accelerator-type=v3-8 \
--version=v2-alpha \
--data-disk source=[PATH/TO/DISK]
Now I want to detach that disk from the TPU VM, but I cannot find the instance in the VM instances tab of the Google Cloud console (it is treated as a TPU instance, so it is not listed there). I can only find it in the TPUs tab, and there I cannot edit the disk out of the instance.
I tried using this command too, but it doesn't work:
gcloud compute instances detach-disk INSTANCE-NAME --disk=DISK-NAME
It says that resource (projects/project-name/zone/instances/tpu-vm-name) was not found.
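For what it's worth, the TPU VM only shows up under the TPU command group, not under Compute Engine instances, which seems to be why the detach-disk call above cannot find the resource. A quick check (using the same name and zone as above):
# TPU VMs are not listed as regular Compute Engine instances:
gcloud compute instances list --zones=europe-west4-a
# They are only visible through the TPU command group:
gcloud alpha compute tpus tpu-vm describe TPU-VM-NAME --zone=europe-west4-a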

Detaching a disk for the TPU VM architecture is not supported right now.

Actually, it is supported according to this tutorial! You need to follow this configuration when you are in the TPU VM. Don't forget to create the disk before detaching it, and be sure you are using the same billing account for both the TPU VM and the disk; otherwise, the system will throw an INTERNAL ERROR.
sudo lsblk
#Find the disk here, and the device name. Most likely it will be "sdb". Given that is correct, format the disk with this command:
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
#Mount the disk and give everyone write access
sudo mkdir -p /mnt/disks/flaxdisk
sudo mount -o discard,defaults /dev/sdb /mnt/disks/flaxdisk
sudo chmod a+w /mnt/disks/flaxdisk
#Configure automatic mount on restarts
sudo cp /etc/fstab /etc/fstab.backup
#Find the UUID of the disk - you need this value in the next step
sudo blkid /dev/sdb
#Add this to /etc/fstab with the correct uuid
UUID=52af08e4-f249-4efa-9aa3-7c7a9fd560b0 /mnt/disks/flaxdisk ext4 discard,defaults,nofail 0 2
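To sanity-check the fstab entry without waiting for a restart, a quick test (a sketch, using the mount point from above):
#Re-read /etc/fstab; an error here means the new entry is malformed
sudo mount -a
#Confirm the disk is mounted where expected
df -h /mnt/disks/flaxdisk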

Related

EBS volume shows no mount point

Use case: Launch an AWS Cloud9 environment that has an added EBS volume of 500GB. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
EC2 EBS configuration
Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker building?
Question: What would be the most efficient instance type for Docker building?
Type df -hT; here you will find the filesystem type of root (/), whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command for the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say named /mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed with df -hT.
Also make sure to update /etc/fstab so the mount persists across reboots.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
Now go to /etc/fstab using editor of your choice
$ vi /etc/fstab
There must already be an entry for /; add one for /mount and save the file (note: replace xfs with ext4 if the filesystem is ext4).
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now finally run the mount command:
$ mount -a
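As for the Docker-building part of the question: one possible way to put the new volume to use (this is an assumption on my part, not part of the steps above) is to move Docker's data directory onto it, so images and build layers land on the 500 GiB volume. The data-root key assumes a reasonably recent Docker daemon; back up any existing /etc/docker/daemon.json first.
$ # Hypothetical path /mount/docker on the volume mounted above
$ mkdir /mount/docker
$ echo '{ "data-root": "/mount/docker" }' > /etc/docker/daemon.json
$ systemctl restart docker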
Generally, if it's a new EBS volume, you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the link above.
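As a minimal sketch of what those docs describe (assuming the device appears as /dev/xvdf and you want ext4; on Nitro instances the same volume shows up as /dev/nvme1n1 instead):
$ # Format the new, empty volume (this erases anything already on it)
$ mkfs -t ext4 /dev/xvdf
$ # Create a mount point and mount the volume
$ mkdir /data
$ mount /dev/xvdf /data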

Restore Google VM file permission

I accidentally ran "sudo make chmod -R 777 /" on my GCP VM, and now I'm not able to access it over SSH anymore (neither by terminal nor browser):
Permissions 0777 for '/Users/username/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
How can I access my VM and restore it?
As suggested by @John Hanley, you must create a new instance to avoid serious problems in the future with this broken VM.
To solve the permissions issue with ~/.ssh/id_rsa.pub, you can follow the documentation Running startup scripts and/or the article suggested by @John Hanley to execute sudo chmod 644 ~/.ssh/id_rsa.pub, or follow the instructions in this article to connect to your instance via the serial console and then run sudo chmod 644 ~/.ssh/id_rsa.pub to set proper permissions.
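For the serial-console route, the connection itself can also be made with gcloud (a sketch; interactive serial-port access must be enabled on the instance first):
$ gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE --metadata=serial-port-enable=TRUE
$ gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE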
Keep in mind that restoring SSH access WON'T SOLVE all the other possible issues with your VM caused by sudo make chmod -R 777 /. So you can skip that step and follow the instructions below instead.
To move your data from the broken VM to the new VM, follow these steps:
create snapshot of the boot disk of broken instance
$ gcloud compute disks snapshot BROKEN_INSTANCE_BOOT_DISK_NAME --snapshot-names=TEMPORARY_SNAPSHOT_NAME
create temporary disk with snapshot
$ gcloud compute disks create TEMPORARY_DISK_NAME --source-snapshot=TEMPORARY_SNAPSHOT_NAME
create a new instance (see the sketch after these steps)
attach temporary disk to new instance
$ gcloud compute instances attach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
mount temporary disk
$ sudo su -
$ mkdir /mnt/TEMPORARY_DISK
$ mount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME /mnt/TEMPORARY_DISK
copy the data from the temporary disk to the new instance (see the sketch after these steps)
unmount the temporary disk:
$ sudo umount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME
detach the temporary disk:
$ gcloud compute instances detach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
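For the two steps above that are listed without a command (creating the new instance and copying the data), a rough sketch could look like this; the machine type, image, and paths are placeholders, adjust them to your setup:
$ gcloud compute instances create NEW_INSTANCE_NAME --zone=ZONE --machine-type=e2-medium --image-family=debian-12 --image-project=debian-cloud
$ # on the new instance, after mounting the temporary disk:
$ cp -a /mnt/TEMPORARY_DISK/home/. /home/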

Get AWS to automatically attach EC2 Volume to Ubuntu instance at startup

I would like to attach an EBS volume (not a snapshot) as a persistent store for my spot instances. I understand how to manually attach the volume, mount it, and get it to survive reboots, but how would I get it to attach automatically at startup?
Is there something I could do in the user data at launching the instance?
Presently I have an AMI that I run as a spot instance. I have a separate volume that persists and is used both as input to the instance and to save results. I only ever have one instance up at a time. The AMI mounts the drive at /data. For the mount to survive reboots, I have edited /etc/fstab to include:
UUID=MY_VOLUME_UUID /data xfs defaults,nofail 0 2
Edited again to show Passatizhi's solution.
I added the following to the Configure Instance Details > Advanced Details > User data part of the EC2 launch wizard:
#!/bin/bash
INSTANCE_ID=$(curl 169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
/home/ubuntu/miniconda3/bin/aws ec2 attach-volume --volume-id vol-myVol12345 --instance-id $INSTANCE_ID --device /dev/sdf
sleep 10
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
Note:
I needed to add the full path to aws to get it to work. Also, as the AMI already has /data set up, I don't need the sudo mkdir -p /data.
#!/bin/bash
INSTANCE_ID=$(curl 169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
/bin/aws ec2 attach-volume --volume-id vol-0fdb738415896f8f6 --instance-id $INSTANCE_ID --device /dev/sdf
sleep 10
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data
Try /bin/aws instead of aws. I used t3.small, so the device is /dev/nvme1n1.
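If you are not sure which device name your instance type exposes, a quick check before mounting (a sketch):
# On Nitro instance types (t3, m5, ...) an attached EBS volume appears as /dev/nvme1n1
# rather than the /dev/sdf requested in attach-volume
lsblk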

Auto mounting EFS on EC2 instance

I created an EC2 instance and an EFS, and was able to mount EFS properly on the instance.
I need it to auto mount in case the server is rebooted.
According to the documentation, I did the following in /etc/fstab:
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
Using the EFS file system ID in place of xxxxxxxx
But when I reboot the server, EFS does not get mounted, and I have to remount it again.
What should I do here?
I'm posting a more detailed solution here since this thread seems to show up near the top of related search-engine queries.
There are two methods to mount an Amazon EFS file system: the "Amazon EFS mount helper" (amazon-efs-utils) and the "NFS client" (nfs-utils).
The examples below show how to mount manually and automatically with each method. Before using them, replace the text [value] with your own values.
==============================
===============
Mounting with "Amazon EFS mount helper"
===============
To mount with "Amazon EFS mount helper" manually, you issue the following command format into CLI:
sudo mount -t efs [fs-XXXXXXXX]:/ /path/to/mount/dir
=====
To mount with "Amazon EFS mount helper" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX]:/ /path/to/mount/dir efs defaults,_netdev 0 0
===============
Mounting with "NFS client"
===============
To mount with "NFS client" manually, you issue either of the following command format into CLI:
Use the command instruction given from "Amazon EC2 mount instructions (from local VPC)" when you click in to view the Elastic File System ID in question under EFS Web Console.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=XXXXXXX,wsize=XXXXXXX,hard,timeo=XXX,retrans=X,noresvport [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
OR
sudo mount -t nfs4 -o defaults,_netdev [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
=====
To mount with "NFS client" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir nfs4 defaults,_netdev 0 0
==============================
Given the above example formats, do you notice your problem?
You thought you had the "Amazon EFS mount helper" installed, but based on the manual mount command you posted in your comment reply (not the opening post), you actually only have the "NFS client" installed on your system. You were using the "Amazon EFS mount helper" format inside /etc/fstab to auto mount, but the manual mount command that worked for you is in the "NFS client" format. Since your system doesn't have the "Amazon EFS mount helper" installed, it doesn't understand the auto mount format inside /etc/fstab, so auto mounting doesn't work for you.
The manual mount command you posted that worked for you is for the "NFS client" only, not for the "Amazon EFS mount helper".
mount -t nfs4 -o nfsvers=4.1 ...
Notice the -t parameter above is nfs4, which is the format for the "NFS client". If you were using the "Amazon EFS mount helper", the -t parameter would be efs.
To solve the problem, you can use either the Amazon EFS mount helper (amazon-efs-utils) or the NFS client (nfs-utils), but the command format (in the CLI and in /etc/fstab) must be consistent with the mount client actually being used.
In other words:
"Amazon EFS mount helper" <=> efs in both CLI and /etc/fstab
"NFS client" <=> nfs4 in both CLI and /etc/fstab
==============================
Installation instructions for mount client software:
===============
If you want to use "Amazon EFS mount helper", use the following installation instructions for Amazon Linux and Other Distros:
https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html
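On Amazon Linux, for example, the package can be installed straight from yum (a sketch; other distros need the build-from-source steps described in the link above):
sudo yum install -y amazon-efs-utils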
=====
If you want to use "NFS client", use the following installation instructions on your EC2 instance:
On a Red Hat Enterprise Linux or SUSE Linux instance, including Amazon Linux, use this command:
sudo yum install -y nfs-utils
On an Ubuntu instance, use this command:
sudo apt-get install nfs-common
==============================
Once you have the mount client software installed, use the corresponding mounting instructions posted above.
To solve this using NFS4, please follow the instructions below:
On your AWS account, notice the following:
1) Go to your EFS management screen; you should see your EFS_WHATEVER... with a small triangle next to it. Click it to expand.
2) Notice there is a "DNS Name" right in the middle of the screen; it will say something like "fs-1234567c.efs.us-west-1.amazonaws.com". Note that down, as this is the mounting point we will use later on.
3) By default, if you have just created the new file system, you must allow it to be seen by your servers; trying to connect will otherwise freeze, since the firewall is blocking the connection. To allow this, scroll down until you see its security group, which looks something like sg-abcdef.
4) Go into your EC2 servers, select the server that you want to access your EFS, and click on its "security groups"; this should take you to the security groups management screen. Note down its security group ID (something like sg-12345). Then clear the filter of your VPC management screen to see all of the SGs...
5) Enter your EFS security group (i.e. sg-abcdef) and click the search button, this should bring up the EFS ACL
6) Click on "Inbound Rules" -> EDIT..
7) Click "ADD" and select "EFS" from the list, enter your server's SG (i.e. sg-12345) and describe it as "Server XXX access" if you like.
8) Now the server should be able to see the EFS mount.
9) Go into your server and install the necessary components by running as ROOT:
apt-get install nfs-common
10) Now, to test the MOUNT, create a new directory... something like: mkdir /mnt/heider
11) Mount the FS using the following command:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider
Please note that your fs-12345c..... is taken from your DNS name as mentioned above.
12) If this works, great; move on to the next step. Otherwise, revise the above to see if you missed anything.
13) Now to auto-mount this, you need to:
14) Edit /etc/fstab
15) Add the following:
fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider nfs4 defaults,_netdev 0 0
16) Save the file and exit
17) In the Linux command shell, type:
mount -a
This will test the mounting; if it's mounted, great. Rebooting will then auto-mount it.
18) This should auto-mount.
19) Reboot to test, all should be there.
I hope this helps.
In order to use the efs file system type, I believe you need to have the amazon-efs-utils package installed. This will install the additional dependencies.
Anyone who has this issue,
instead of
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
use
{target_ip}:/ /mnt/efs nfs4 defaults,_netdev 0 0
This works fine for me, and it auto mounts on the newly created instances
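If you go the IP route, the mount-target IP to use as {target_ip} can be looked up with the AWS CLI (a sketch, assuming the CLI is configured with access to the account):
# Lists the mount targets of the file system together with their IP addresses
aws efs describe-mount-targets --file-system-id fs-XXXXXXXX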

Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
Tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device"
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new location and it told me it wasn't the right file system.
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing
codepage or helper program, or other error In some cases useful info
is found in syslog - try dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same "not the right file system" error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really at '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything for me when I attached it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
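Two optional checks around these steps (a sketch): before formatting, confirm the volume really is blank, and afterwards, confirm it is mounted where expected.
# Prints "data" for a blank volume, or the existing filesystem type if there is one
sudo file -s /dev/xvdf
# After mounting, the volume should show up at /vol as ext4
df -hT /vol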
Step 1: create the volume
Step 2: attach it to your instance's root volume
Step 3: run sudo resize2fs -p /dev/xvde
Step 4: restart Apache: sudo service apache2 restart
Step 5: run df -h
You can see the total volume attached to your instance.