Google Cloud VM Files Deleted after Restart - google-cloud-platform

I've restarted my VM instance and found that all of my files are deleted. My VM has a 10 GB SSD persistent boot disk and an additional 50 GB SSD persistent disk. Do VMs only save files within the current session?
Thanks!

Google does not delete data on restart. First, be sure the disk containing your data is mounted: if you mounted it manually before, it will not be remounted automatically after a reboot. You may also have been using Cloud Shell instead of your VM, where "after the instance is terminated, any modifications that you made to it outside your $HOME are lost." As per John Hanley's comment, you should also check the console log to see what happened on restart. Lastly, be sure you don't have any startup scripts that may have deleted data.
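For example, to confirm which disks are attached and mounted, and to see what happened at boot time, something along these lines should work (the instance name and zone below are placeholders):
# List block devices and current mount points; the 50 GB data disk should appear here
lsblk
df -h
# Review the boot log for mount failures or startup-script activity (placeholder instance/zone)
gcloud compute instances get-serial-port-output my-vm --zone=us-central1-a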

Related

Persistent disk losing some data

I have a question regarding GCP Persistent Disk (SSD). I created a disk and a VM, and attached the disk to the VM. Then I ssh'd into the VM, saw the disk, formatted it, and mounted it. Everything worked just fine; I was able to create some files there and use it as a normal disk. However, when I deleted the VM, created another VM some time later, and mounted the same disk, the data seemed to be lost: I didn't see it.
This contradicts the name "persistent", which means I might be doing something wrong or misunderstanding something. I would be grateful for any help.
It would help if you had posted your VM configuration, but if I had to guess, the disk deletion rule on your VM was set to delete the disk when you deleted the VM.
Edit your VM and make sure the "Deletion rule" setting is set to "Keep disk."
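If you use the gcloud CLI, the same setting can be changed without the console; a minimal sketch with placeholder instance, disk, and zone names:
# Keep the attached data disk when the VM itself is deleted (names are placeholders)
gcloud compute instances set-disk-auto-delete my-vm --disk=my-data-disk --no-auto-delete --zone=us-central1-a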

GCP: How to add a Local SSD disk to an existing VM instance in Google Cloud Platform

I am working in Google Cloud Platform and I have a situation where I need to attach a local SSD disk to an existing VM instance, but I cannot find a way to do it. Basically, this is the process I need to follow:
1. Attach a new local SSD disk
2. Copy the existing data to the new disk
3. Unmount the old disk
4. Mount the new disk to the old data path
Thanks.
As John said, you can't if you initially created your VM without a local SSD. But you can do something like this:
Go to the console and select your current VM
Go to the boot disk section and select your boot disk
Create an image of the boot disk
Go back to the current VM page
Click on the "Create similar" button at the top
Select the boot disk image that you just created
Add an additional disk, with type local scratch SSD
Now you have a similar VM, with the same boot disk image but with a local SSD.
Then, you have to detach the existing persistent disk from the old VM and attach it to the new one, make your copy, and delete it.
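The same flow can be scripted with gcloud; a rough sketch with placeholder names (local SSDs can only be attached when the instance is created):
# Create an image from the old VM's boot disk (placeholder disk/zone names)
# (stop the old VM first, or image creation may refuse to use an in-use disk)
gcloud compute images create my-vm-image --source-disk=my-vm-boot-disk --source-disk-zone=us-central1-a
# Create a new VM from that image with one 375 GB local SSD attached
gcloud compute instances create my-new-vm --image=my-vm-image --local-ssd=interface=NVME --zone=us-central1-a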

Google cloud VM reboot and data loss of attached persistent disk

This is the situation:
I attached a disk to a VM. I reboot the VM (for whatever reason). I have to remount the disk, otherwise it is not available (unmounted) after the restart. So I remount the disk with the following command: sudo mount -o discard,defaults /dev/[DEVICE_ID] /mnt/disks/[MNT_DIR]
Does the fact that I have to remount the disk also mean that I have lost all the data inside?
Thanks in advance
The document that you shared with us says:
"If you detach this zonal persistent disk or create a snapshot from the boot disk for this instance, edit the /etc/fstab file and remove the entry for this zonal persistent disk"
Therefore, if you are not creating snapshots from the boot disk, you can reboot your instance without having any issue with your data.
However, if you are using snapshots or scheduled snapshots of your SSD disk, I would recommend following these best practices when creating them:
https://cloud.google.com/compute/docs/disks/snapshot-best-practices
That said, you can take persistent disk snapshots at any time without unmounting your disk. These recommendations exist only to give greater reliability and to create the snapshots more quickly (this is also explained in the documentation: https://cloud.google.com/compute/docs/disks/snapshot-best-practices#prepare_for_consistency)
In the document that you linked, there is a description of how to add your mount point to /etc/fstab. Using the command line sudo mount -o ... mounts the disk temporarily, but the mount will be lost across reboots. Editing the /etc/fstab will cause the mount point to persist across reboots because that file is read during startup.
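A minimal /etc/fstab entry along those lines, using the disk UUID rather than the device name (the UUID value and mount directory below are placeholders; take the real values from your own blkid output):
# Find the UUID of the data disk
sudo blkid /dev/sdb
# Add a line like this to /etc/fstab so the disk is remounted automatically at boot
UUID=YOUR_DISK_UUID /mnt/disks/disk-1 ext4 discard,defaults,nofail 0 2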
Thanks a lot for your answers. I'm sorry my question wasn't complete. I'm a developer and I'm new to sysadmin work.
As you can see here, I added a "zonal persistent disk" (a permanent SSD disk) to my VM on Compute Engine (https://cloud.google.com/compute/docs/disks/add-persistent-disk).
The document says that if I have scheduled snapshots it is not possible to have the disk mounted automatically after a restart (which I may need for any number of reasons). So the question: how can I be sure that a restart, besides unmounting the disk, won't lose the data?
With disk snapshots I would still be able to restore the data, but I would still like to understand what happens in this case. In the meantime I read your suggestions on the Linux mount, and I understand that I will not lose the data on the disk when the machine restarts.
Thanks

How do I recover lost data from a Virtualbox VM that crashed?

My VirtualBox VM crashed when I attempted a snapshot. Unfortunately I lost some pretty important data, as the VMDK was corrupted. The VM then reverted to a prior snapshot on reboot.
I luckily saved the VMDK files prior to restarting the VM and am now trying to recover the data. I've tried mounting the VMDK, but when I do, it's as if the VMDK that gets mounted is the default snapshot from day 1 with no changes. I configured my VMDK file to be dynamic and segmented into 2 GB chunks as it grows, so I'm thinking the changes from these files aren't loading when I mount.
Any other suggestions?
If you still have the snapshot and all its files, you could try to create a clone from the snapshot; once that is done, you could try to mount and repair its virtual hard drive. You could also try merging snapshots from the command line.
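A rough sketch of those steps with VBoxManage (the VM name, snapshot name, and file paths are placeholders):
# Clone the snapshot's differencing image into a standalone disk you can mount and repair
VBoxManage clonemedium disk "Snapshots/{snapshot-uuid}.vmdk" recovered.vdi --format VDI
# Alternatively, deleting a snapshot merges its changes into the parent disk
VBoxManage snapshot "MyVM" delete "Snapshot 1"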

Mounting an NVMe disk on AWS EC2

So I created i3.large instances with an NVMe disk on each node; here was my process:
lsblk -> nvme0n1 (check if nvme isn't yet mounted)
sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
sudo mount -o discard /dev/nvme0n1 /mnt/my-data
added to /etc/fstab: /dev/nvme0n1 /mnt/my-data ext4 defaults,nofail,discard 0 2
sudo mount -a (check if everything is OK)
sudo reboot
So all of this works, I can connect back to the instance. I have 500 GiB on my new partition.
But after I stop and restart the EC2 machines, some of them randomly become inaccessible (AWS warns that only 1/2 status checks passed).
When I look at the logs to see why it is inaccessible, it tells me it's about the NVMe partition (but I did sudo mount -a to check that this was OK, so I don't understand).
I don't have the AWS logs exactly, but I have some lines from them:
Bad magic number in super-block while trying to open
then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
/dev/fd/9: line 2: plymouth: command not found
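For reference, the alternate-superblock recovery that the error message suggests usually looks something like this (the device name is a placeholder, and it should only be run against an unmounted filesystem):
# Dry run: print the filesystem layout, including backup superblock locations, without changing anything
sudo mke2fs -n /dev/nvme0n1
# Run e2fsck against one of the backup superblocks it reports, e.g. 32768
sudo e2fsck -b 32768 /dev/nvme0n1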
I have been using "c5" type instances for almost a month, mostly "c5d.4xlarge" with NVMe drives. So, here's what has worked for me on Ubuntu instances:
first, find where the NVMe drive is located:
lsblk
mine always showed up as nvme1n1. Then check whether it is an empty volume with no filesystem (it mostly won't have one, unless you are remounting); the output should be /dev/nvme1n1: data for empty drives:
sudo file -s /dev/nvme1n1
Then do this to format it (if the last step showed that your drive already has a filesystem and isn't empty, skip this and go to the next step):
sudo mkfs -t xfs /dev/nvme1n1
Then create a mount point directory and mount the NVMe drive:
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
you can now even check its existence by running:
df -h
Stopping and starting an instance erases the ephemeral disks, moves the instance to new host hardware, and gives you new empty disks... so the ephemeral disks will always be blank after stop/start. When an instance is stopped, it doesn't exist on any physical host -- the resources are freed.
So, the best approach, if you are going to be stopping and starting instances, is not to add them to /etc/fstab but rather to just format them on first boot and mount them after that. One way of testing whether a filesystem is already present is to use the file utility and grep its output; if grep doesn't find a match, it returns false.
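A sketch of that first-boot check, assuming the device shows up as /dev/nvme1n1 and you want an XFS filesystem as in the answer above:
# Format only if no filesystem is present; "file -s" prints just "data" for an empty device
if sudo file -s /dev/nvme1n1 | grep -q "filesystem"; then
  echo "filesystem already present, skipping mkfs"
else
  sudo mkfs -t xfs /dev/nvme1n1
fi
sudo mkdir -p /data
sudo mount /dev/nvme1n1 /data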
The NVMe SSD on the i3 instance class is an example of an Instance Store Volume, also known as an Ephemeral [ Disk | Volume | Drive ]. They are physically inside the instance and extremely fast, but not redundant and not intended for persistent data... hence, "ephemeral." Persistent data needs to be on an Elastic Block Store (EBS) volume or an Elastic File System (EFS), both of which survive instance stop/start, hardware failures, and maintenance.
It isn't clear why your instances are failing to boot, but nofail may not be doing what you expect when a volume is present but has no filesystem. My impression has been that eventually it should succeed.
But, you may need to apt-get install linux-aws if running Ubuntu 16.04. Ubuntu 14.04 NVMe support is not really stable and not recommended.
Each of these three storage solutions has its advantages and disadvantages.
The Instance Store is local, so it's quite fast... but, it's ephemeral. It survives hard and soft reboots, but not stop/start cycles. If your instance suffers a hardware failure, or is scheduled for retirement, as eventually happens to all hardware, you will have to stop and start the instance to move it to new hardware. Reserved and dedicated instances don't change ephemeral disk behavior.
EBS is persistent, redundant storage that can be detached from one instance and moved to another (and this happens automatically across a stop/start). EBS supports point-in-time snapshots, and these are incremental at the block level, so you don't pay for storing the data that didn't change across snapshots... but through some excellent witchcraft, you also don't have to keep track of "full" vs. "incremental" snapshots -- the snapshots are only logical containers of pointers to the backed-up data blocks, so they are, in essence, all "full" snapshots, but only billed as incremental. When you delete a snapshot, only the blocks no longer needed to restore either that snapshot or any other snapshot are purged from the back-end storage system (which, transparently to you, actually uses Amazon S3).
EBS volumes are available as both SSD and spinning platter magnetic volumes, again with tradeoffs in cost, performance, and appropriate applications. See EBS Volume Types. EBS volumes mimic ordinary hard drives, except that their capacity can be manually increased on demand (but not decreased), and can be converted from one volume type to another without shutting down the system. EBS does all of the data migration on the fly, with a reduction in performance but no disruption. This is a relatively recent innovation.
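For instance, growing a volume in place can be done from the CLI (the volume ID and target size below are placeholders); the filesystem inside still has to be grown afterwards:
# Increase the volume to 200 GiB without detaching it (placeholder volume ID)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200
# Then, inside the instance, grow the filesystem, e.g. for ext4:
sudo resize2fs /dev/nvme1n1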
EFS uses NFS, so you can mount an EFS filesystem on as many instances as you like, even across availability zones within one region. The size limit for any one file in EFS is 52 terabytes, and your instance will actually report 8 exabytes of free space. The actual free space is for all practical purposes unlimited, but EFS is also the most expensive -- if you did have a 52 TiB file stored there for one month, that storage would cost over $15,000. The most I ever stored was about 20 TiB for 2 weeks, cost me about $5k but if you need the space, the space is there. It's billed hourly, so if you stored the 52 TiB file for just a couple of hours and then deleted it, you'd pay maybe $50. The "Elastic" in EFS refers to the capacity and the price. You don't pre-provision space on EFS. You use what you need and delete what you don't, and the billable size is calculated hourly.
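Mounting an EFS filesystem from an instance is just an NFS 4.1 mount; a minimal sketch with a placeholder filesystem ID and region (requires an NFS client package, e.g. nfs-common on Ubuntu):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123abcd.efs.us-east-1.amazonaws.com:/ /mnt/efs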
A discussion of storage wouldn't be complete without S3. It's not a filesystem, it's an object store. At about 1/10 the price of EFS, S3 also has effectively infinite capacity, and a maximum object size of 5TB. Some applications would be better designed using S3 objects, instead of files.
S3 can also be easily used by systems outside of AWS, whether in your data center or in another cloud. The other storage technologies are intended for use inside EC2, though there is an undocumented workaround that allows EFS to be used externally or across regions, with proxies and tunnels.
I just had a similar experience! My C5.xlarge instance detects an EBS volume as nvme1n1. I have added this line to fstab:
/dev/nvme1n1 /data ext4 discard,defaults,nofail 0 2
After a couple of reboots, it looked like it was working. It kept running for weeks. But today, I got an alert that the instance could not be connected to. I tried rebooting it from the AWS console with no luck; it looks like the culprit is the fstab. The disk mount failed.
I raised a ticket with AWS support; no feedback yet. I had to start a new instance to recover my service.
On another test instance, I tried using the UUID (obtained with the blkid command) instead of /dev/nvme1n1. So far it still looks to be working... we'll see if it causes any issues.
I will update here if I get any feedback from AWS support.
================ EDIT with my fix ===========
AWS hasn't given me feedback yet, but I found the issue. Actually, in fstab, whether you mount /dev/nvme1n1 or the UUID doesn't matter. My issue is that my EBS volume has some filesystem errors. I attached it to an instance and then ran
fsck.ext4 /dev/nvme1n1
After it fixed a couple of filesystem errors, I put the entry back in fstab, rebooted, and there were no problems anymore!
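For completeness, the UUID-based variant of that fstab entry would look roughly like this (the UUID value is a placeholder; take the real one from blkid):
# Get the UUID of the EBS volume
sudo blkid /dev/nvme1n1
# fstab line using the UUID instead of the device name
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /data ext4 discard,defaults,nofail 0 2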
You may find the new EC2 instance family equipped with local NVMe storage useful: C5d.
See announcement blog post: https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/
Some excerpts from the blog post:
You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.
Other than the addition of local storage, the C5 and C5d share the same specs.
You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe
Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key.
Local NVMe devices have the same lifetime as the instance they are attached to and do not stick around after the instance has been stopped or terminated.