Google Cloud Datalab Minimum System Requirements - google-cloud-platform

Is it possible to create a google cloud datalab with f1-micro and 20GB of boot disk and 10GB of persistent disk?
Or is the minimum requirement the default of n1-standard-1 and a 200GB Standard Persistent Disk?
I tried to create a datalab instance with the following command:
datalab create --image-name practice --disk-size-gb 10 --idle-timeout "30m" --machine-type f1-micro practice
Although the VM is created, datalab gets stuck at "waiting for Datalab to be available at localhost".
It works when I go with the default command of
datalab create practice
Any clarifications on this?

Don't include the "--image-name practice" argument. --image-name specifies the Docker image to run on the VM, and it needs to be either the Datalab-provided image or a custom image you've built on top of it.
This command should work: datalab create --disk-size-gb 10 --idle-timeout "30m" --machine-type f1-micro practice. Note, though, that this machine will be too small to run even some of the sample notebooks, and its reduced size will cause a longer startup time.
I just tried it and startup was ~10 minutes.
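For reference, the full working sequence might look like the sketch below. The instance name, machine type, and sizes are the values from the question; the commands require an authenticated gcloud/datalab setup, so this is illustrative rather than something to run as-is:

```shell
# Create a Datalab instance with a small machine type and disk
# (values taken from the question; expect a slow startup on f1-micro).
datalab create \
  --disk-size-gb 10 \
  --idle-timeout "30m" \
  --machine-type f1-micro \
  practice

# Later, reconnect to an existing (possibly stopped) instance
# instead of recreating it from scratch.
datalab connect practice

# Clean up when done. This deletes the VM; add --delete-disk
# if you also want to remove the persistent disk.
datalab delete practice
```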

Related

Google Cloud snapshot's boot issue

Hope all are safe and doing well.
I have a few running servers on Google Cloud, and for them, snapshots are scheduled on a daily basis in an incremental way.
I am trying to create a new instance in a different VPC zone using the same snapshots, but I am getting an error.
For reference, I have added an attachment to this question.
Please help me to resolve this issue and thanks in advance.
Assuming that you have created a snapshot with Application consistency (VSS) enabled:
When you create a VSS snapshot, Windows Server marks the volume in the snapshot as read-only. Any disks that you create from the VSS snapshot are also in read-only mode. So, the read-only flag on the new boot disk prevents the VM instance from booting correctly.
You can follow the documentation on this issue to resolve it:
If the disk you created from the VSS snapshot is a boot disk and you want to use it to boot a VM instance, you must temporarily attach the disk to a separate, existing VM instance. Once you complete the following steps, you can detach the disk from that existing VM instance and use it to boot a new VM instance.
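The documented workaround can be sketched with gcloud roughly as follows. All resource names (vss-snapshot, restored-disk, helper-vm, new-vm) and the zone are hypothetical placeholders, and step 3 happens inside the Windows guest, not via gcloud:

```shell
# 1. Create a disk from the VSS snapshot (it carries the read-only flag).
gcloud compute disks create restored-disk \
    --source-snapshot=vss-snapshot --zone=us-central1-a

# 2. Temporarily attach it as a secondary (non-boot) disk
#    to a separate, existing VM.
gcloud compute instances attach-disk helper-vm \
    --disk=restored-disk --zone=us-central1-a

# 3. Inside helper-vm (Windows), clear the read-only flag with diskpart:
#      diskpart
#      list disk
#      select disk <n>
#      attributes disk clear readonly

# 4. Detach the disk and boot a new VM instance from it.
gcloud compute instances detach-disk helper-vm \
    --disk=restored-disk --zone=us-central1-a
gcloud compute instances create new-vm \
    --disk=name=restored-disk,boot=yes --zone=us-central1-a
```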

GCP - VM disappeared after move

I am breaking out in a sweat now.
I wanted to move a VM to a different zone within the same region. I ran
gcloud compute instances move move-this-vm --zone xxxx --destination-zone xxxx
I checked on the VM after the command ran, and it has disappeared!! I can't find it. I ran
gcloud compute instances list
It is not listed. It is not shown on the web console either.
I created a machine image of the VM before the move, so I am not worried about data loss. However, I am baffled.
Is this a common glitch when moving a VM using the CLI? In this scenario, what can I do to retrieve the VM? Why does this happen?
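Since a machine image was taken before the move, one recovery path (assuming the instance really is gone and not just filtered out of the view) might look like this; the machine-image name and target zone below are hypothetical:

```shell
# Check whether the instance still exists in ANY zone of the project
# (instances list is project-wide, not zone-scoped).
gcloud compute instances list --filter="name=move-this-vm"

# If it is really gone, recreate it from the machine image
# taken before the move, in the intended destination zone.
gcloud compute instances create move-this-vm \
    --source-machine-image=my-machine-image \
    --zone=us-central1-b
```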

Is it possible to mount the persistent disk that comes with CloudShell on another VM?

gcloud compute instances attach-disk wants a disk name, but it doesn't show up on my Disks page. It seems silly to create and pay for another disk when this one has much more storage than I plan to use.
Notice that Cloud Shell is intended for interactive usage, and in general its disk is meant to be recycled: you can't manage it, and it will be deleted after 120 days of inactivity. If you want the data to persist over time, you'll need to consider a different solution, such as Cloud Storage. In other words, store your data in Cloud Storage and create a new disk to hold it, since Cloud Shell is a tool meant for rapid testing and prototyping, not a development machine with persistent storage.
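Backing the Cloud Shell home directory up to Cloud Storage can be sketched like this, run from inside Cloud Shell. The bucket name is a hypothetical placeholder, and the destination path in the last step assumes you've mounted a disk at /mnt/data on the target VM:

```shell
# Create a bucket to hold the backup (name must be globally unique).
gsutil mb gs://my-cloudshell-backup

# Copy the Cloud Shell home directory contents to the bucket,
# in parallel (-m) and recursively (-r).
gsutil -m cp -r "$HOME"/* gs://my-cloudshell-backup/

# Later, pull the data down onto any VM's own disk.
gsutil -m cp -r gs://my-cloudshell-backup/* /mnt/data/
```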
As per the GCP documentation, you can attach and detach disks on a VM instance from gcloud.
To detach a disk from an instance:
gcloud compute instances detach-disk [INSTANCE_NAME] --disk=[DISK_NAME]
To attach a disk to another instance:
gcloud compute instances attach-disk [INSTANCE_NAME] --disk=[DISK_NAME] --boot

What should the specs be for an AWS VM to install and run a Metabase JAR app

I'm planning an installation of this version of Metabase:
https://www.metabase.com/docs/v0.35.4/
I don't see any docs on what the minimum requirements are for RAM, CPU, etc.
I'm only planning to run Metabase on this instance for a user base of 100 people.
What specs should I choose for a VM?
Their documentation recommends t2.small instance type:
Instance type (Instances block) is for picking the size of AWS instance you want to run. Any size is fine but we recommend t2.small for most uses.
According to AWS documentation, t2.small instances have 1 vCPU and 2 GB RAM.
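Metabase ships as a single JAR, so on a 2 GB t2.small you would typically cap the JVM heap to leave headroom for the OS. A minimal sketch, assuming the download URL follows Metabase's published pattern for v0.35.4:

```shell
# Download the Metabase 0.35.4 JAR.
wget https://downloads.metabase.com/v0.35.4/metabase.jar

# Run it with a 1 GB heap cap, leaving the rest of the
# 2 GB for the OS and JVM overhead.
java -Xmx1g -jar metabase.jar
```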

Increasing compute power temporarily on AWS

I have an Amazon EC2 Micro instance running using EBS storage. This more than meets my needs 99.9% of the time; however, I need to perform a very intensive database operation as a one-off, which kills the Micro instance.
Is there a simple way to restart the exact same instance with much more power for a temporary period, and then revert back to the Micro instance when I'm done? I thought this seemed more than possible under the cloud-based model Amazon uses, but it doesn't appear to be simply a matter of shutting down and restarting with more power, as I first thought it might be.
If you are manually running the database operation, you can create an image of the server, launch a small or high-CPU instance from the same image, run the database operation, then create another image and launch it again as a micro instance. You can also automate this process with scripts using the AWS APIs.
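The image-and-relaunch approach above can be sketched with the modern AWS CLI as follows. All instance and AMI IDs are hypothetical placeholders, and each create-image step takes time to complete before the AMI is usable:

```shell
# 1. Create an image of the running micro instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "pre-migration-backup"

# 2. Launch a larger instance from that image and run the heavy job on it.
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type c5.large --count 1

# 3. When done, image the big instance again and relaunch as a micro.
aws ec2 run-instances --image-id ami-0fedcba9876543210 \
    --instance-type t3.micro --count 1
```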
In case you're using an EBS-backed AMI you don't have to create a new image and launch it. Just stop the machine and issue a simple EC2 API command to change the instance type:
ec2-modify-instance-attribute --instance-type <instance_type> <instance_id>
Keep in mind that not all instance types work for every AMI. The applicable instance types depend on the machine itself and the kernel. You can find a list of available instance types here: http://aws.amazon.com/ec2/instance-types/
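The ec2-modify-instance-attribute tool above is from the legacy EC2 API tools; the same stop/resize/start flow can be sketched with the modern AWS CLI. The instance ID is a hypothetical placeholder, and this only works for EBS-backed instances:

```shell
INSTANCE_ID=i-0123456789abcdef0

# 1. Stop the instance (the EBS root volume persists).
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# 2. Change the instance type while it is stopped.
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --instance-type "{\"Value\": \"m5.xlarge\"}"

# 3. Start it again with the new size; repeat the same steps
#    with the original type to revert when the heavy job is done.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```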