How to reduce the disk size of a VM in google cloud [duplicate] - google-cloud-platform

This question already has answers here:
Reduce Persistent Disk Size
(2 answers)
Closed 2 years ago.
I created two VMs, each with a 1 TB disk. Even though the VMs are not running, I found that GCP was still charging me for the disk space. How can I reduce the disk size to lower the cost? What are the alternatives? There are a lot of services installed on this VM, so creating a new VM from scratch is not an option.
I have explored the following:
Google documentation: which says reducing disk size is not an option.
Creating a VM from a snapshot: apparently, this also does not allow reducing the disk size.
Creating a VM from a machine image: no luck here either.

You can't reduce the disk size; you can only spin up a new instance from a startup script.
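One common workaround is to create a new, smaller disk, copy the data over, and then swap the disks. The following is a sketch only, not an official procedure; the disk names, VM name, zone, device names, and paths are all placeholders. It is straightforward for a data disk; for a boot disk you would also have to handle the bootloader, so rebuilding the boot disk via a startup script is usually safer there.

```shell
# Placeholder names: my-vm, new-small-disk, old-1tb-disk, us-central1-a.
# 1. Create a smaller disk and attach it to the VM.
gcloud compute disks create new-small-disk --size=100GB --zone=us-central1-a
gcloud compute instances attach-disk my-vm --disk=new-small-disk --zone=us-central1-a

# 2. On the VM: format the new disk, mount it, and copy the data across.
#    (The device name /dev/sdb is an assumption; check with lsblk first.)
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/new-disk
sudo mount /dev/sdb /mnt/new-disk
sudo rsync -aHX /data/ /mnt/new-disk/

# 3. Once the copy is verified, detach and delete the old 1 TB disk
#    so it stops incurring charges.
gcloud compute instances detach-disk my-vm --disk=old-1tb-disk --zone=us-central1-a
gcloud compute disks delete old-1tb-disk --zone=us-central1-a --quiet
```

The billing point matters here: a stopped VM still pays for its attached persistent disks, so only deleting (or shrinking via this copy dance) actually lowers the cost.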

Does this work for you? The documentation has detailed steps:
Select the “Compute -> Compute Engine -> VM Instances” menu item.
Select the instance you wish to resize.
In the “Boot disk” section, select the boot disk of the instance.
On the “Disks” detail page, click the “Edit” button.
Enter a new size (GB) for the disk in the “Size” field.
Click the “Save” button at the bottom of the page.
The last step is to restart the instance.
UPDATE
As @Kerem commented, the official documentation says:
You can only resize a zonal persistent disk to increase its size. You
cannot reduce the size of a zonal persistent disk.
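The console steps above can also be done with a single gcloud command (disk, instance, and zone names below are placeholders), keeping in mind that the size can only ever be increased:

```shell
# Resize the persistent disk (grow only; shrinking is not supported).
gcloud compute disks resize my-boot-disk --size=2000GB --zone=us-central1-a

# Restart the instance so the OS can pick up the larger disk.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances start my-vm --zone=us-central1-a
```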

Related

Google Cloud SQL - Database instance storage size increased dramatically everyday

I have a database instance (MySQL 8) on Google Cloud, and for the last 20 days the instance's storage usage just keeps increasing (approximately 2 GB every single day!).
But I couldn't find out why.
What I have done:
Took a look at the "Point-in-time recovery" option; it is already disabled.
Binary logging is not enabled.
Checked the actual database size; my database is only 10 GB.
No innodb_file_per_table flag is set, so it must be at its default.
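A quick way to double-check those numbers from the MySQL side is to query information_schema and the binlog status directly. The connection parameters below (host, user) are placeholders; run this against your own Cloud SQL instance:

```shell
# Per-schema data + index size in GB, from information_schema.
mysql -h 10.0.0.3 -u root -p -e "
  SELECT table_schema,
         ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
  GROUP BY table_schema
  ORDER BY size_gb DESC;"

# Confirm whether binary logging is really off, and how much space binlogs use.
mysql -h 10.0.0.3 -u root -p -e "SHOW VARIABLES LIKE 'log_bin';"
mysql -h 10.0.0.3 -u root -p -e "SHOW BINARY LOGS;"
```

If the information_schema totals are far below the instance's reported storage, the gap is usually binlogs, temporary files, or unreclaimed InnoDB space rather than table data.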
Storage usage chart:
Database flags:
The actual database size is 10 GB, but the storage usage is now up to 220 GB! That's a lot of money!
I couldn't resolve this issue; please give me some ideas. Thank you!
I had the same thing happen to me about a year ago. I couldn't determine any root cause for the huge increase in storage size. I restarted the server and the problem stopped; none of my databases had experienced any significant increase in size. My best guess is that some runaway process caused the binlog to blow up.
It turns out the problem was in a WordPress theme function called "related_products", which reads and writes a record for every product a user comes across (millions per day), making the database physically blow up.

Replace HDD with SSD on google cloud compute engine

I am running a GETH node on a Google Cloud Compute Engine instance and started with an HDD. It has grown to 1.5 TB now, but it is terribly slow. I want to move from HDD to SSD.
How can I do that?
I found a solution like this:
- Make a snapshot of the existing disk (HDD).
- Edit the instance and attach a new SSD created from that snapshot.
- Disconnect the old disk afterwards.
One problem I saw here: if my HDD is 500 GB, it does not allow an SSD smaller than 500 GB. My data is in the TBs now, so it will cost a lot.
But I want to understand whether this actually works, because this is a node I want to use in production. I have already waited too long and cannot afford to wait more.
You should try zonal SSD persistent disks.
As stated in the documentation:
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes.
The description of the issue is confusing, so I will try to help from my current understanding of the problem. First, you can use a boot disk snapshot to create a new boot disk that meets your requirements; see here. A disk created from a snapshot only has to be at least as large as the snapshot's source disk, which explains the 500 GB minimum you are seeing; since your data is already 1.5 TB, a disk of that size or larger will meet the restriction.
Anyway, I don't recommend using such a big disk as a boot disk. A better approach is to use a smaller boot disk and expand the total capacity by attaching additional data disks as needed; see this link.
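The snapshot-based migration described above can be sketched with gcloud. The disk, snapshot, instance names, and zone below are placeholders; note that the new SSD must be at least as large as the snapshot's source disk, which is the floor mentioned in the question:

```shell
# 1. Snapshot the existing HDD.
gcloud compute disks snapshot geth-hdd --snapshot-names=geth-snap --zone=us-central1-a

# 2. Create a pd-ssd disk from the snapshot (size must be >= the source disk).
gcloud compute disks create geth-ssd \
  --source-snapshot=geth-snap --type=pd-ssd --size=2TB --zone=us-central1-a

# 3. Swap the disks on the instance, then delete the HDD once verified.
gcloud compute instances detach-disk geth-node --disk=geth-hdd --zone=us-central1-a
gcloud compute instances attach-disk geth-node --disk=geth-ssd --zone=us-central1-a
```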

VMware: share a SAN volume across datastores. Is it possible?

I have recently inherited a VMware setup with 2 ESXi hosts and an HP StoreVirtual SAN for storage.
On the SAN, there is a 2 TB volume that has been used to extend one of the VMware datastores; however, only 25% of this volume has been used for this. The remaining 75% is empty.
I now wanted to extend other datastores using the space on this volume but it will not show up as an available volume when trying to increase datastore size.
Basically my question is whether it's possible to share a SAN volume between datastores. I thought of reducing the SAN volume size but I feel it's too risky.
Before I start thinking of moving stuff etc. I wanted to know what I'm trying to do is possible.
I will also say that the reason for increasing the datastore size is backups: during backups, the datastore must be big enough to accommodate snapshots etc.
Thanks in advance for any help.
No, not really. When you assign space to a datastore, it is owned by that datastore, so you would have to shrink Datastore 1 to allocate the space to Datastore 2.

Google Compute Engine: additional disk not working

We have Google Compute Engine, and our disk (hard disk) is full, so we added an additional disk. I followed all the steps, but the available size still has not increased.
Please help; what might be the cause?
It shows an error message like "mnt/disks/disk-1 is not mounted completely or it is not available".
If your boot disk is full, you want to resize the boot disk. Please keep in mind that if the disk is too full (100%), the OS may not be able to update the partition table to grow the partition, despite the larger disk.
If this is the case, take a snapshot of the disk, create a new, larger, disk and then use that as your new boot disk.
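For completeness, these are the usual in-OS steps after growing a disk, plus the formatting and mounting that a brand-new data disk needs before it contributes any space. The device names (/dev/sda, /dev/sdb) and mount point are assumptions; check yours with lsblk first:

```shell
# After enlarging the disk in GCP, grow the partition and the filesystem.
sudo growpart /dev/sda 1        # extend partition 1 to fill the disk
sudo resize2fs /dev/sda1        # for ext4; use xfs_growfs for XFS
df -h /                         # verify the new size

# A new additional disk must be formatted and mounted before it is usable.
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/disks/disk-1
sudo mount /dev/sdb /mnt/disks/disk-1
```

An unformatted or unmounted extra disk adds no space to the VM, which matches the "not mounted completely" error in the question.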

google cloud hard disk deleted. all data lost

My Google Cloud VM's hard disk got full, so I tried to increase its size. I have done this before, but this time things went differently. I increased the size, but the VM was not picking up the new size, so I stopped the VM. The next thing I know, my VM got deleted and recreated, and my hard disk returned to its previous size with all data lost. It had my database with over 2 months of changes.
I admit I was careless not to back up, but my current concern is: is there a way to retrieve the data? Google Cloud shows $400 for the Gold Plan, which includes tech support. If I can be certain that they will be able to recover the data, I am willing to pay. Does anyone know whether, if I pay the $400, the Google support team will be able to recover the data?
If there are other ways to recover data, kindly let me know.
UPDATE:
Few people have shown interest in investigating this.
This most likely happened because the "Auto-delete boot disk" option is selected by default, which I was not aware of. But even then, I would expect auto-delete to happen when I delete the VM, not when I simply stop it.
I am attaching screenshot of all activities that happened after I resized the boot partition.
As you can see, I resized the disk at 2:00 AM.
After receiving the resize-successful message, I stopped the VM.
Suddenly, at 2:01, the VM got deleted.
At this point I had not checked the notifications; I simply thought it had stopped. Then I started the VM, hoping to see the new resized disk.
Instead of my VM starting, a new VM was created with a new disk, and all previous data was lost.
I tried stopping and starting the VM again, but the result was still the same.
UPDATE:
Adding activities before the incident.
It is not possible to recover deleted persistent disks (PDs).
You have no snapshots either?
The disk may have been marked for auto-delete.
However, this disk shouldn't have been deleted when the instance was stopped, even if it was marked for auto-delete.
A persistent disk can only be recovered from a snapshot.
In a managed instance group (MIG), when you stop an instance, the health check fails and the MIG deletes and recreates the instance if autoscaling is on. The process is discussed here. I hope that sheds some light, if that is your use case.
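To guard against this in the future, the boot disk's auto-delete flag can be inspected and switched off with gcloud (the instance name, disk name, and zone below are placeholders):

```shell
# Check whether the boot disk is marked for auto-delete.
gcloud compute instances describe my-vm --zone=us-central1-a \
  --format="value(disks[0].autoDelete)"

# Turn auto-delete off so the boot disk outlives the instance.
gcloud compute instances set-disk-auto-delete my-vm --zone=us-central1-a \
  --disk=my-vm-boot-disk --no-auto-delete
```

With auto-delete off, deleting or recreating the instance leaves the disk behind, and it can be attached to a fresh VM to recover the data.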