I am trying to use Google Cloud Shell as my development platform because it's free and comes with a built-in code editor. But at the same time I struggle, because it only offers 5 GB of disk storage, which is only enough to keep about two projects loaded.
Is there an option to buy more storage for Cloud Shell?
I know I have the option to spin up another VM inside GCP, but that doesn't suit me because it doesn't come with any cool IDE like the one I get in Cloud Shell. All I can deal with there is vi.
No, you cannot increase the disk size of Cloud Shell. In fact, if you do not use Cloud Shell for 120 days, it will delete your home directory.
See more limitations here
Your second point is an insult to the open source community :)
Here are a few alternatives I can think of:
Set up the Cloud SDK on your local system (a sketch of the commands follows this list)
The Google Cloud Shell editor is a text editor based on Eclipse Orion. You can use the Eclipse IDE locally; it will have the same shortcut keys and code validation features.
Alternatively, you can use Orion itself in case you're doing web development
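For the first point, a minimal sketch of the local setup (this is the documented quick-install route for Linux/macOS; adjust for your platform):
# Download and install the Cloud SDK, restart the shell, then initialize it
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init   # authenticates your account and picks a default project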
I hope this helped.
There is more disk storage under the root folder that you can use. Note, though, that in Cloud Shell only your home directory persists between sessions, so anything stored elsewhere is lost when the session's VM is recycled.
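You can check this from a session with df; the comparison below assumes the usual Cloud Shell layout, where only $HOME is backed by the persistent disk:
df -h / $HOME   # compare the ephemeral root filesystem with the persistent home disk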
I created a Compute Engine instance with Ubuntu and would like to use it as a development environment, with the Cloud Shell editor as the IDE. I know how to SSH to it from the shell, but the editor won't let me browse the filesystem on the Compute Engine instance. Please help.
You would need to use a third-party tool for this. Cloud Shell does not have built-in support to browse external file systems.
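One such approach, as a hedged sketch (it assumes sshfs can be installed in your Cloud Shell session and that your instance accepts SSH; the instance name, zone, and project below are placeholders), is to mount the remote filesystem into your Cloud Shell home directory so the editor can browse it:
# Install sshfs inside the Cloud Shell session
sudo apt-get install -y sshfs
# Let gcloud generate SSH host aliases for your instances
gcloud compute config-ssh
# Mount the instance's home directory under ~/remote
mkdir -p ~/remote
sshfs my-instance.us-central1-a.my-project:/home/$USER ~/remote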
I have software that processes some files. What I need is to:
start a default image on Google Cloud (I think Docker would be a good solution) using an API or a run command
download files from Google Storage
run my software to process those downloaded files
upload the results to Google Storage
shut the image down, expecting not to be billed anymore
What I do know is how to create my image, hehe. But I can't find any info telling me which Google Cloud service I should use, or even whether I can do it the way I'm thinking. I think I'm not using the right keywords to find what I need.
I was looking at Kubernetes, but I couldn't figure out how to drive those instances to execute a one-time processing job.
[EDIT]
To explain the process better: I have an app that receives images and sends them to Google Storage. After that, I need to process those images: apply filters, georeference them, split them, etc. So I want to start a Docker image to process them and upload the results to Google Cloud again.
If you are using any of the runtimes supported by Google Cloud Functions, they are the easiest way to do that kind of operation (i.e. fetch something from Google Cloud Storage, perform some actions on those files and upload them again). The Cloud Function will be triggered by an event of your choice, and after the job it will die.
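As a minimal sketch (the function name, bucket, and runtime below are placeholders, and the source in the current directory must export a matching handler), deploying a function that fires whenever a new object lands in a bucket could look like:
# Deploy a function triggered each time an object is finalized in the bucket
gcloud functions deploy process_image \
    --runtime python37 \
    --trigger-resource my-input-bucket \
    --trigger-event google.storage.object.finalize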
Next in terms of complexity would be deploying a Google App Engine application in the standard environment. It allows you to deploy your own application written in any of the languages supported by this environment. While there is traffic to your application you will have instances serving it, but the number of running instances can go down to 0 when they are not serving, which means lower cost.
Another option would be Google App Engine in the flexible environment. This product allows you to deploy your application in any custom runtime. This option always has at least one instance running, so it would never shut down completely.
Lastly, you can use Google Compute Engine to "create and run virtual machines on Google infrastructure". Unlike GAE, this is not fully managed by Google, which means that most of the configuration is up to you. In this case, you would need to programmatically tell your VM to shut down after you have finished your operations.
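As a hedged sketch of that last option (the bucket paths and the /opt/my-software binary are placeholders for your own), a VM startup script could run the whole cycle and power the machine off at the end; a stopped instance stops accruing compute charges, although its persistent disk is still billed:
#!/bin/bash
# Fetch inputs, run the processing software, publish results, then power off
gsutil -m cp -r gs://my-input-bucket/jobs /tmp/input
/opt/my-software --in /tmp/input --out /tmp/output
gsutil -m cp -r /tmp/output gs://my-output-bucket/results
shutdown -h now   # startup scripts run as root; this powers the VM off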
Based on your edit where you stated that you already have an app that is inserting images into Google Cloud Storage, your easiest option would be to use Cloud Functions that are triggered by additions, changes, or deletions to objects in Cloud Storage buckets.
You can follow the Cloud Functions tutorial for Cloud Storage to get an idea of the generic process and then implement your own code that handles your specific tasks. There are other tutorials, like the ImageMagick tutorial for Cloud Functions, that might also be relevant to the type of processing you intend to do.
Cloud Functions is probably your lightest-weight approach. You could of course build a more full-scale application, but that is likely overkill, more expensive, and more complex. You can write your processing code in Node.js, Python, or Go.
I want to create a new Google Cloud instance from a HardenedBSD ISO. HardenedBSD is a FreeBSD-based OS. I checked the public documentation at https://cloud.google.com/compute/docs/images/import-existing-image but I couldn't see FreeBSD in the supported OS section.
Is there a way to do that?
FreeBSD works pretty well on GCE. The procedure for uploading a custom image or building your own is quite easy (I would say even easier than with AWS), so chances are high that the same applies to HardenedBSD. The only "trick" is that after you have your raw disk, you need to use GNU tar to package the image:
# -S preserves sparse files; GCE expects a gzipped tarball containing a file named disk.raw
gtar -cSzf freebsd.tar.gz disk.raw
To create the disk.raw, I use this script: https://github.com/fabrik-red/images/blob/master/fabrik.sh (root on ZFS). To read more about the procedure, you can check https://fabrik.red/post/google/
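From there (the bucket and image names below are placeholders), the tarball can be pushed to Cloud Storage and registered as a custom image:
# Upload the packaged disk and create a Compute Engine image from it
gsutil cp freebsd.tar.gz gs://my-images-bucket/
gcloud compute images create freebsd-12 --source-uri gs://my-images-bucket/freebsd.tar.gz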
For testing or getting an idea, you could try FreeBSD 12.0:
https://github.com/fabrik-red/images/releases/download/12.0/disk.tar.gz
I haven't tried working with any *BSD on Google Cloud Platform, so take my words with a grain of salt.
You could try booting the instance in rescue mode (if supported) and performing a dd to write the HardenedBSD image to the main disk.
You could also take a look at Packer from HashiCorp, which is meant to create OS images to be deployed in the cloud:
https://www.packer.io/docs/builders/googlecompute.html
I got a deletion notice for my Google Cloud Shell home directory. Does that mean that my data will also be deleted?
This is documented here:
If you do not access Cloud Shell for 120 days, we will delete your home disk. You will receive an email notification before we do so, and simply starting a session will prevent its removal.
This only applies to the home directory of your Cloud Shell instance (you may want to back it up to Cloud Storage anyway if you want to keep it). Any other Google services you use will be unaffected.
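As a simple precaution (the bucket name below is a placeholder for one you create), you could back the home directory up to Cloud Storage from inside Cloud Shell:
# Archive the home directory and copy the archive to a bucket you control
tar czf /tmp/home-backup.tar.gz -C $HOME .
gsutil cp /tmp/home-backup.tar.gz gs://my-backup-bucket/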
Considering I'm paying for their services, this is extremely annoying. I lost a lot of important documents and feel like taking my business somewhere else.
Let's say that I set up my own cloud using the open source Cloud Foundry implementation provided on cloudfoundry.org. Will each app that I deploy run as a separate user? Or is any of VMware's virtualization technology in use here? E.g. would each app run in a separate virtual machine, or anything like that? How can I configure the memory, CPU, and disk resource limits for each app?
I asked this on the mailing list. Here's the response I got:
If your DEA is configured to run in secure mode, then each app runs as its own user, and process isolation is used to protect them. We are moving toward a model of using Linux cgroups (http://en.wikipedia.org/wiki/Cgroups) when on Linux, using the warden cgroup wrappers that are already in our source tree.
VM-based isolation for a single app is pretty heavyweight, but we have long-term plans to provide this for apps that need/desire it (as opposed to the warden/cgroup work, which is a near-term project).
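To make the cgroup idea concrete, here is an illustrative sketch only (it assumes a Linux host with the legacy cgroup v1 memory controller mounted at /sys/fs/cgroup/memory and root access; ./run-app.sh stands in for an app process), showing how a memory cap applies to a process and its children:
# Create a memory cgroup, cap it at 256 MB, and move the current shell into it
sudo mkdir /sys/fs/cgroup/memory/myapp
echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/myapp/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/myapp/cgroup.procs
./run-app.sh   # child processes inherit the cgroup, so their memory is capped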
Since this is related to the open source side of Cloud Foundry, you can try asking your question on https://groups.google.com/a/cloudfoundry.org/group/vcap-dev
You should get a quick response there!