I'm new to GCP and just experimenting. I tried to install something in one of my projects and got a disk full exception. Rather than buy more storage, I thought I would just do some cleanup.
I have now deleted ALL instances, buckets and projects. I know projects take a while to be deleted, so maybe one of them is still consuming a lot of disk. Question:
How can I remove/delete whatever is consuming 99%+ of /dev/sdb1 (mounted at /home)? Or ...
Increase the size of that resource?
It seems that you are using Cloud Shell. Cloud Shell comes with only 5 GB of storage for your home directory, and there is no way to increase it.
One possible solution would be to set up the gcloud SDK on your own machine, or on a GCE (Google Compute Engine) instance, instead.
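For example, a minimal sketch of one documented way to install the SDK on a Linux or macOS workstation (the interactive installer; other install methods exist):

    curl https://sdk.cloud.google.com | bash   # interactive Cloud SDK installer
    exec -l "$SHELL"                           # reload the shell so gcloud is on PATH
    gcloud init                                # authenticate and choose a default project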
I hope this approach works for you.
Local SSDs are the fastest storage available in Google Cloud Platform, which makes it clear why people would want to use them. But they come with some severe drawbacks, most notably that all data on the SSDs will be lost if the VM is shut down manually.
It also seems to be impossible to take an image of a local SSD and restore it to a restarted disk.
What then is the recommended way to back-up local SSD data if I have to shut down the VM (if I want to change the machine specs for example)?
So far I can only think of manually copying my files to a bucket or a persistent disk.
If you have a solution to this, would it work if multiple local SSDs were mounted into a single logical volume?
I'm guessing that you want to create a backup of the data stored on the local SSD every time you shut down (or reboot) the machine.
To achieve this (as @John Hanley commented), you have to copy the data, either manually or with a script, to some other storage (persistent disk, bucket, etc.).
If you're running Linux:
Here's a great answer on how to run a script at reboot/shutdown. You can then create a script that copies all the data to some more persistent storage solution.
If I were you, I'd just use rsync and cron and run it every hour or day (depending on your use case). Here's another great example of how to use rsync to synchronize folders.
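For example, a minimal sketch of the rsync-plus-cron approach; the mount points, bucket name and schedule below are placeholders, not values from the question:

    # One-shot sync of the local SSD mount to a persistent disk mount.
    rsync -a --delete /mnt/disks/local-ssd/ /mnt/disks/backup/

    # Or copy to a Cloud Storage bucket instead:
    # gsutil -m rsync -r -d /mnt/disks/local-ssd gs://my-backup-bucket/local-ssd

    # Cron entry (crontab -e) to run the sync every hour:
    # 0 * * * * rsync -a --delete /mnt/disks/local-ssd/ /mnt/disks/backup/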
If you're running Windows:
It is also possible to run a command at Windows shutdown, and here's how.
I am trying to use Google Cloud Shell as my development platform as it's free and comes with a built-in code editor. But at the same time, I struggle because of the 5 GB disk storage limit and having only 2 projects loaded.
Is there an option to buy storage for cloud shell?
I know I have the option to spin up another VM inside GCP, but that doesn't suit me, since there isn't any cool IDE like the one I get in Cloud Shell. All I can deal with there is vi.
No, you cannot increase the disk size of Cloud Shell. In fact, if you do not use Cloud Shell for 120 days, your home directory will be deleted.
See more limitations here
Your second point is an insult to the open source community :)
Here is an alternative I can think of:
Set up the Cloud SDK on your local system.
The Google Cloud Shell editor is based on Eclipse's Orion text editor. You can use the Eclipse IDE; it will have the same shortcut keys and code validation features.
Alternatively, you can use Orion itself if you're doing web development.
I hope this helped.
There is more disk space available on the root filesystem (outside your home directory) that you can use.
I am trying to run Airflow in Google Cloud Run.
I'm getting a Disk I/O error; I guess the disk write permission is missing.
Can someone please help me with how to give write permission inside Cloud Run?
I also have to write a file and later delete it.
Only the /tmp directory is writable in Cloud Run. So, change the default write location to write into this directory (see the sketch after the list below).
However, you have to be aware of 2 things:
Cloud Run is stateless; that means when a new instance is created, the container starts from scratch, with an empty /tmp directory.
The /tmp directory is an in-memory file system. The maximum memory allowed on Cloud Run is 2 GB, your app's memory footprint included. Between your files and Airflow, you may not have a lot of space.
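As an illustration, a minimal container entrypoint sketch, assuming Airflow 2.x with the default SQLite backend; the paths and the port fallback are assumptions, not values from the question:

    # Point Airflow's home (SQLite DB, logs) at /tmp, the writable path on Cloud Run.
    export AIRFLOW_HOME=/tmp/airflow
    mkdir -p "$AIRFLOW_HOME"
    airflow db init                                 # metadata DB and logs now live under /tmp
    exec airflow webserver --port "${PORT:-8080}"   # Cloud Run injects $PORT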
A final remark: Cloud Run is active only while it processes a request, and a request has a maximum timeout of 15 minutes. When there is no request, the allowed CPU is close to 0%. I'm not sure what you want to achieve with Airflow on Cloud Run, but my feeling is that your design is strange, and I prefer to warn you before you spend too much effort on this.
EDIT 1:
The Cloud Run service has evolved in the right direction. As of 2022:
/tmp is no longer the only writable directory (you can write everywhere, but it's still in memory)
the timeout is no longer limited to 15 minutes, but to 60 minutes
The 2nd-gen runtime execution environment (still in preview) allows you to mount an NFS (Filestore) or Cloud Storage (GCSFuse) volume to make services "more stateful".
You can also run jobs now. So, a lot of great evolution!
My impression is that you have a disk I/O error because you are using SQLite. Is that possible?
If you want to run Airflow in containers, I would recommend using Postgres or MySQL as the backend database.
You can also mount the plugins and DAG folders on some external volume.
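For example, a hedged sketch of pointing Airflow at Postgres instead of SQLite; the connection string (user, password, host, database name) is a placeholder for your own instance:

    # Airflow 2.3+ reads the metadata DB connection from the [database] section.
    export AIRFLOW__DATABASE__SQL_ALCHEMY_CONN="postgresql+psycopg2://airflow:airflow@10.0.0.5:5432/airflow"
    # Older Airflow versions read the same setting from the [core] section instead:
    # export AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://airflow:airflow@10.0.0.5:5432/airflow"
    airflow db init   # creates the metadata tables in Postgres rather than a local SQLite file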
I accidentally messed up the permissions of the file system, so I now get the message sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set when attempting to use sudo, e.g. to read protected files.
This answer (https://askubuntu.com/a/471503) suggests logging in as root to fix it; however, I didn't set up a root password before, and this answer (https://stackoverflow.com/a/35017164/4343317) suggests using sudo passwd. Obviously I am stuck in an infinite loop between the two answers above.
How can I read/get the files from the Google Cloud Compute Engine VM's disk without logging in to the VM (I have full control of the VM instance and the disk)? Is there another, "higher" way to log in as root (such as from the gcloud tool or the Google Cloud console) to access the VM disk externally?
Thanks.
It looks like the following recipe may be of value:
https://cloud.google.com/compute/docs/disks/detach-reattach-boot-disk
What this article says is that you can shut down your VM, detach its boot disk and then attach it as a data disk to a second VM. In that second VM you will have the ability to make changes. However, if you don't know what changes you need to make to restore the system to sanity, then, as @John Hanley says, you might want to use this mounting technique to copy off your work, destroy the tainted VM, recreate a new one fresh, copy your work back in and start from there.
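For example, a sketch of that recipe with gcloud; the instance, disk and zone names are placeholders for your own resources:

    # Stop the broken VM and detach its boot disk.
    gcloud compute instances stop broken-vm --zone=us-central1-a
    gcloud compute instances detach-disk broken-vm --disk=broken-vm-boot --zone=us-central1-a

    # Attach the old boot disk to a healthy rescue VM as a secondary (data) disk.
    gcloud compute instances attach-disk rescue-vm --disk=broken-vm-boot --zone=us-central1-a

    # Inside rescue-vm: find the device with lsblk, mount it, e.g.
    #   sudo mount /dev/sdb1 /mnt/rescue
    # then copy your files out, or fix ownership and the setuid bit on /usr/local/bin/sudo.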
I have an NFS datastore which I use for deploying VMs on my ESX hosts.
I have been creating/deleting VMs on this storage for a couple of years now.
But lately I noticed the free space is pretty low. Upon investigating, I found files from older VMs (VMs which I deleted more than a year ago).
Any ideas why the files are not removed from the NFS datastore?
Or how can I find out which VMs are not being used by any ESX host, so that I can delete them manually?
There are two main ways to remove a VM from your vCenter inventory: UnregisterVM() and Destroy_Task().
Based on your discovery, I'm assuming you've been unregistering your VMs from inventory, which removes them from vCenter but leaves their files on the datastore.
If you're OK with PowerShell, there's a pretty straightforward way to remedy this using a community resource: http://www.lucd.info/2016/09/13/orphaned-files-revisited/
LucD mainly uses the underlying API methods, so if you prefer another language, a majority of the discovery work has already been done for you.