My VirtualBox VM crashed when I attempted a snapshot. Unfortunately I lost some pretty important data because the VMDK was corrupted, and the VM then reverted to a prior snapshot on reboot.
Luckily I saved the VMDK files before restarting the VM and am now trying to recover the data. I've tried mounting the VMDK, but when I do, what gets mounted looks like the default snapshot from day 1 with no changes. I configured the VMDK to be dynamically allocated and split into 2 GB chunks as it grows, so I'm thinking the changes in those chunk files aren't being loaded when I mount it.
Any other suggestions?
If you still have the snapshot and all its files, you could try to create a clone from the snapshot; once that is done, you could try to mount and repair its virtual hard drive. You could also try merging snapshots from the command line.
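As a rough sketch, the VBoxManage commands could look something like this (the VM, snapshot, and file names are placeholders, not your actual ones):

# Clone the VM as it existed at a given snapshot
VBoxManage clonevm "MyVM" --snapshot "SnapshotName" --name "MyVM-recovered" --register

# Or clone just the virtual disk so you can mount and repair the copy instead of the original
VBoxManage clonemedium disk "MyVM-disk1.vmdk" "recovered.vmdk" --format VMDK

# Deleting a snapshot merges its differencing disk into its parent
VBoxManage snapshot "MyVM" delete "SnapshotName"

Either way, work on copies of the VMDK files, since a failed merge or repair could make things worse.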
I have a question regarding GCP Persistent Disk (SSD). I created a disk, then created a VM and added the disk to it. I ssh'd into the VM, saw the disk, formatted it, and mounted it. Everything worked just fine; I was able to create some files there and use it as a normal disk. However, when I deleted the VM, created another VM some time later, and mounted the same disk, the data seemed to be lost; I didn't see it.
This contradicts the name "persistent", which means I might be doing something wrong or misunderstanding something. I would be grateful for any help.
It would help if you had posted your VM configuration, but if I had to guess, the disk deletion rule on your VM was set to delete the disk when you deleted the VM.
Edit your VM and make sure the "Deletion rule" setting is set to "Keep disk."
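If you prefer the command line, something along these lines should show and change the auto-delete behaviour (the instance, zone, and disk names are placeholders):

# Show whether each attached disk is set to auto-delete with the instance
gcloud compute instances describe my-vm --zone=us-central1-a --format="yaml(disks)"

# Keep the data disk when the instance is deleted
gcloud compute instances set-disk-auto-delete my-vm --zone=us-central1-a --disk=my-data-disk --no-auto-delete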
I have a Compute Engine VM attached to a persistent disk. I was deleting my other unrelated projects and had downgraded my related Firebase project. Somehow, the VM turned off and the disk has been stuck at restoring for the last 3 hours. The disk did not have any snapshot. Is there any way to restore it?
I am working in Google Cloud Platform and I have a situation where I need to attach a local SSD disk to an existing VM, but I cannot find a way to do it. Basically, this is the process I need to follow:
1. Attach a new local SSD disk
2. Copy the existing data to the new disk
3. Unmount the old disk
4. Mount the new disk at the old data path
Thanks.
As John said, you can't if you initially created your VM without a local SSD. But you can do something like this:
Go to the console, select your current VM
Go to the boot disk section and select your boot disk
Create an image of the boot disk
Go back to the current VM page
Click on "create similar" button, on the top
Select the boot disk image that you just created
Add additional disk, with type local scratch SSD
Now you have a similar VM, with the same boot disk image but with a local SSD.
Then, you have to detach the existing persistent disk from the old VM and attach it to the new one, make your copy, and delete it.
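Roughly, the gcloud equivalent could look like this (all names, the zone, and the NVMe interface choice are placeholders/assumptions, not settings from your project):

# Create an image from the old VM's boot disk
gcloud compute images create my-boot-image --source-disk=old-vm-boot-disk --source-disk-zone=us-central1-a

# Create a similar VM from that image, this time with a local SSD attached
gcloud compute instances create new-vm --zone=us-central1-a --image=my-boot-image --local-ssd=interface=nvme

# Move the existing data disk over so its contents can be copied onto the local SSD
gcloud compute instances detach-disk old-vm --zone=us-central1-a --disk=my-data-disk
gcloud compute instances attach-disk new-vm --zone=us-central1-a --disk=my-data-disk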
This is the situation:
I attached a disk to a VM. I rebooted the VM (for whatever reason). I have to remount the disk, otherwise it is not available (unmounted) after the restart. So I remount the disk with the following command: sudo mount -o discard,defaults /dev/[DEVICE_ID] /mnt/disks/[MNT_DIR]
Does the fact that I have to remount the disk also mean that I have lost all the data inside?
Thanks in advance
The document that you shared with us says:
"If you detach this zonal persistent disk or create a snapshot from the boot disk for this instance, edit the /etc/fstab file and remove the entry for this zonal persistent disk"
Therefore, if you are not creating snapshots from the boot disk, you can reboot your instance without having any issues with your data.
However, if you are using snapshots or scheduled snapshots of your SSD disk, I would recommend following these best practices when creating them:
https://cloud.google.com/compute/docs/disks/snapshot-best-practices
But you can also take persistent disk snapshots at any time without unmounting your disk. These recommendations are only meant to give greater reliability and to create the snapshots more quickly (this is also explained in the documentation: https://cloud.google.com/compute/docs/disks/snapshot-best-practices#prepare_for_consistency)
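As a sketch, taking a consistent snapshot of a mounted data disk could look something like this (the disk name, zone, and mount directory are placeholders; briefly freezing the filesystem is the optional precaution the best-practices page describes):

# Flush application and OS buffers, then briefly freeze the filesystem
sudo sync
sudo fsfreeze -f /mnt/disks/[MNT_DIR]

# Take the snapshot while the filesystem is frozen
gcloud compute disks snapshot my-data-disk --zone=us-central1-a --snapshot-names=my-data-disk-snap1

# Unfreeze as soon as the snapshot has been created
sudo fsfreeze -u /mnt/disks/[MNT_DIR]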
In the document that you linked, there is a description of how to add your mount point to /etc/fstab. Using the command line sudo mount -o ... mounts the disk temporarily, but the mount will be lost across reboots. Editing the /etc/fstab will cause the mount point to persist across reboots because that file is read during startup.
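For illustration, the persistent mount in /etc/fstab could look something like this (the UUID shown is a placeholder; nofail is added so the boot does not hang if the disk is ever missing):

# Find the UUID of the data disk
sudo blkid /dev/[DEVICE_ID]

# Add a line like this to /etc/fstab so the mount survives reboots
UUID=e0f1a2b3-0000-0000-0000-000000000000 /mnt/disks/[MNT_DIR] ext4 discard,defaults,nofail 0 2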
Thanks a lot for your answers. I'm sorry my question wasn't complete. I'm a developer and I'm new to sysadmin work.
As you can see here, I added a zonal persistent disk (a persistent SSD disk) to my VM on Compute Engine (https://cloud.google.com/compute/docs/disks/add-persistent-disk).
The document says that if I have scheduled snapshots, it's not possible to have the disk mounted automatically on my VM after a restart (which I may need for various reasons). So the question: how can I be sure that the restart, in addition to unmounting the disk, won't lose the data?
With disk snapshots I would still be able to restore the data, but I would still like to understand what happens in this case. Meanwhile, I read your suggestions on the Linux mount and I now understand that I will not lose the data on the disk when the machine restarts.
Thanks
I've restarted my VM instance and found that all of my files are deleted. My VM has a 10 GB SSD persistent boot disk and an additional 50 GB SSD persistent disk. Do VMs only save files within the current session?
Thanks!
Google does not delete data upon restart. Be sure the disk containing your data is mounted; perhaps you restarted after manually mounting, and the disk is no longer mounted. You may also have been using Cloud Shell instead of your VM, where "After the instance is terminated, any modifications that you made to it outside your $HOME are lost." As per John Hanley's comment, you should also check the console log to see what happened on restart. Lastly, be sure you don't have any startup scripts that may have deleted data.
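A few commands that might help confirm which of these it was (the instance name, zone, device, and mount directory are placeholders):

# See which block devices are attached and where they are mounted
lsblk

# Remount the data disk if it is attached but not mounted
sudo mount -o discard,defaults /dev/[DEVICE_ID] /mnt/disks/[MNT_DIR]

# Inspect the serial console output for what happened during the restart
gcloud compute instances get-serial-port-output my-vm --zone=us-central1-a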