Error starting virtual machine - Not enough resources available - google-cloud-platform

I am trying to start a virtual machine on Google Cloud, but I get an error that there aren't enough resources to fulfill my request.
I have been using Google Cloud for about a week to study and test automated trading systems through Metatrader5 on a Linux server.
I was able to use my machine through a VNC server, even this morning, but suddenly all my machines (they are all in the same location) started showing an error when trying to start:
The zone 'projects/metatrader-227016/zones/southamerica-east1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
I read about moving my instance to another region, but it doesn't look like a simple procedure. What is strange is that my VM is really small and lightweight.

Unfortunately this problem appears with Google Cloud Compute once in a while. You have several options:
Wait. The resource will eventually be available.
Resize your instance to a different machine type. Another instance size might be available (see the command sketch after this list).
Change regions.
If you have paid support, open a support ticket with Google Cloud Support.
The smaller instance sizes are cheaper and therefore in higher demand.
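If you go the resize route, it can help to see which machine types the zone actually offers. This lists what is offered, not whether capacity is currently free, but it narrows down the alternatives:

gcloud compute machine-types list --zones=southamerica-east1-b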
To move an instance to a different region:
Log in to the Google Cloud Console and go to Compute Engine -> Disks.
Select the disk of the instance you plan to move.
At the top of the screen, click CREATE IMAGE. Give the image a name. For Family, enter anything you want, but remember it.
Once the image creation completes, create a new Compute Engine VM in the region that you want. When creating the new VM, under Boot disk, click Change. You will find your image under the Custom images tab.
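If you prefer the CLI, the same move can be sketched with two gcloud commands (the image name, target zone and machine type below are placeholders; stop the source VM before imaging its disk, or pass --force):

# 1. Create an image from the existing boot disk (the disk name is usually the instance name).
gcloud compute images create mt5-image \
    --source-disk=my-instance \
    --source-disk-zone=southamerica-east1-b \
    --family=mt5-family

# 2. Create a new VM in another zone from that custom image.
gcloud compute instances create my-instance-new \
    --zone=us-east1-b \
    --machine-type=e2-small \
    --image=mt5-image \
    --image-project=metatrader-227016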

Related

"Not enough resources available to fulfill the request" error in GCP

In GCP, I'm trying to create a new notebook instance.
However, I got this error from all the zones that I tried:
"tensorflow-2-4-20210214-113312: The zone 'projects/[PROJECT-ID]/zones/europe-west3-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
Even though the whole point of cloud computing is not having to worry about the underlying infrastructure serving your application, at the end of the day there are still physical servers with limited capacity and resources hosting your applications or supporting the underlying infrastructure of the product you are using.
In the specific case of AI Platform Notebooks you can use the following command:
gcloud beta notebooks locations list
to get a list of the available locations, and monitor the release notes to check when new locations are added. Try to create the notebook in another location that does have available resources, or wait for resources to become available in that particular zone.
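For example, a minimal sketch of creating the notebook in a different zone from the CLI (the instance name, location and image family here are assumptions; you can list the Deep Learning VM image families with gcloud compute images list --project=deeplearning-platform-release):

gcloud beta notebooks instances create my-notebook \
    --location=europe-west4-a \
    --vm-image-project=deeplearning-platform-release \
    --vm-image-family=tf2-latest-cpu \
    --machine-type=n1-standard-4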

How long does it take for a GPU quota to update?

I'm trying some things on Google Cloud and I have the following issue. A few days ago I created a Deep Learning VM with Compute Engine, with 8 vCPUs and 1 Tesla K80 GPU. Everything worked fine, but now I want to try another GPU with a different memory size. So I deleted the VM instance (from Compute Engine -> VM instances) and I also deleted the deployment from Deployment Manager. Nevertheless, when I try to create a new VM, I get an error message saying that there are no more resources available, and in fact, on the quotas page, I still see the GPU usage at 1 (with a limit of 1, which is why I can't create a new instance). Does anyone know what the problem could be? Do I just have to wait? Thank you everyone!
If you receive a resource error (such as ZONE_RESOURCE_POOL_EXHAUSTED or ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS) when requesting new resources, it means that the zone cannot currently accommodate your request.
This error is due to the availability of Compute Engine resources in the zone, so you could try to create the resources in another zone of the same region, or in another region.
You can search for another available zone in this document: Available regions and zones
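Two read-only commands also help with that search (the region and GPU type below are just examples):

# Zones in a given region:
gcloud compute zones list --filter="region:us-central1"

# Zones that offer a particular accelerator type:
gcloud compute accelerator-types list --filter="name=nvidia-tesla-k80"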
If possible, change the shape of the VM you are requesting. It's easier to get smaller machine types than larger ones. A change to your request, such as reducing the number of GPUs or using a custom VM with less memory or vCPUs, might allow your request to proceed.
Also, you can create reservations for Virtual Machine (VM) instances in a specific zone, using custom or predefined machine types, with or without additional GPUs or local SSDs, to ensure resources are available for your workloads when you need them.
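A minimal sketch of such a reservation, assuming a zone, machine type and the K80 from your original setup (the --accelerator flag is taken from the reservations documentation; adjust it for the GPU you actually want):

gcloud compute reservations create gpu-reservation \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=count=1,type=nvidia-tesla-k80 \
    --vm-count=1

Keep in mind that you start paying for reserved resources as soon as the reservation is created, whether or not a VM is using them.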
Additionally, you can find more information for troubleshooting this issue in the following link.

Google Cloud VM Instance Stuck on resizing suggested by Console

I had a VM instance running on Google Cloud, and the console suggested: "you should resize the instance to 2 vCPUs and 16 GB RAM from 4 vCPUs and 16 GB RAM".
I pressed Apply to set the new config. The instance stopped and has been stuck in the resize process for an hour; it neither shows as resized in the gcloud instance list nor starts up.
Even trying to take a snapshot of that VM's disk shows an error that it is "being used in some operations".
I tried to force stop it via gcloud, but no luck. The notification pop-up only shows the VM resizing.
Please help me here.
The main reason for this issue is GCP resource availability, which depends on user demand and is therefore dynamic. As a result, issues like this can happen when you use cloud resources on demand without a reservation.
Let's have a look at the cause of this issue:
when you stop an instance it releases some resources like vCPU and memory;
when you start an instance it requests resources like vCPU and memory back;
when you resize your VM it's the same.
If there are not enough resources available in the zone, you'll get an error message:
The zone 'projects/xyz-project-272905/zones/asia-south1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
You can find more details in the documentation:
If you receive a resource error (such as ZONE_RESOURCE_POOL_EXHAUSTED or ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS) when requesting new resources, it means that the zone cannot currently accommodate your request. This error is due to Compute Engine resource obtainability, and is not due to your Compute Engine quota.
There are a few ways to solve your issue:
Move your instance to another zone by following the instructions.
Wait for a while and try to resize your VM instance again.
Reserve resources for your VM by following the documentation to avoid such issues in the future (extra payment will be required):
Create reservations for Virtual Machine (VM) instances in a specific zone, using custom or predefined machine types, with or without additional GPUs or local SSDs, to ensure resources are available for your workloads when you need them. After you create a reservation, you begin paying for the reserved resources immediately, and they remain available for your project to use indefinitely, until the reservation is deleted.
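Once the zone has capacity again, a rough way to check on the stuck operation and retry the resize from the CLI (instance name, zone and machine type are placeholders; e2-highmem-2 matches the suggested 2 vCPU / 16 GB shape):

# See whether the resize operation is still pending or has errored out.
gcloud compute operations list \
    --zones=asia-south1-a \
    --filter="targetLink:my-instance" \
    --sort-by=~insertTime --limit=5

# When the instance shows as TERMINATED, retry the resize and start it again.
gcloud compute instances set-machine-type my-instance \
    --zone=asia-south1-a --machine-type=e2-highmem-2
gcloud compute instances start my-instance --zone=asia-south1-a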

Google Cloud Compute API: Create a vm with a larger disk size

I am using the Node.js client library, and I am stuck in a situation. I need to create disposable VMs on demand, meaning the instances would be up for a couple of days and then be deleted. All of this would happen via an API.
The default size of the disk while creating a VM is 10GB, and I need a larger disk than that.
Is it possible to do that without creating a disk first and then attaching it to my VM? That would be a hassle, since I'd have to make sure I deleted the disk too whenever I deleted the instance.
You can use the compute.instances.insert API method to create a Compute Engine instance and set its disks[].initializeParams.diskSizeGb parameter to the desired disk size. You can find more information regarding this API call here.
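If it helps to see the same thing outside the client library, the gcloud counterpart of disks[].initializeParams.diskSizeGb is the --boot-disk-size flag; a minimal sketch (name, zone, image and size are placeholders):

gcloud compute instances create disposable-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --boot-disk-size=50GB \
    --boot-disk-auto-delete

The --boot-disk-auto-delete behaviour (its API counterpart is disks[].autoDelete) takes care of removing the boot disk when the instance is deleted, which addresses the cleanup concern.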

Is it possible to auto scale with Amazon Web Services, with ever-changing AMIs?

Curious if this is possible:
We have a web application that, at MOST times, works just fine with our single small instance. However, when multiple customers run intense queries simultaneously (we are a cloud scheduling service), our instance bogs way down to near 80% CPU load and becomes pretty unresponsive.
Is there a way to have AWS fire up another small instance (or a few), quickly, only during the times it's operating under this intense load? BUT, the real question is how this works when we push frequent programming updates to our application. Do we have to manually create a new image every time we upload a code change?
Thanks
You should never be running anything important on a single EC2 instance. Instances can--and do--go offline randomly. Always use an autoscaling (AS) group that spans multiple availability zones. An AS group will automatically bring new instances online when you hit a certain trigger (in your case, CPU utilization). And then it will scale down the instances when traffic subsides. Autoscaling is the heart and soul of AWS and if you're not using it, you might as well be using a cheaper (and more durable) VPS host.
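If you haven't wired this up before, a rough AWS CLI sketch of a group that scales on average CPU (the group and launch configuration names, the AZs and the 60% target are all assumptions):

# Create the group from an existing launch configuration.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --min-size 1 --max-size 4 \
    --availability-zones us-east-1a us-east-1b

# Scale on average CPU utilization with a target tracking policy.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'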
No, you don't want to create a new AMI for each code release. Ideally you should use a base AMI (like one of Amazon's official ones) and have it auto-provision at boot. You can use the "user data" field when you launch an instance to bootstrap this process. It can be as simple as a bash script that pulls from your Git repo, or something as sophisticated as Puppet or Chef.
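As a concrete illustration, a user-data script along these lines (the repo URL, packages and start command are all placeholders for your own stack) runs as root on first boot and gives every new instance the current code:

#!/bin/bash
# cloud-init user data: runs once, as root, on first boot of the instance.
set -e

apt-get update
apt-get install -y git nodejs npm

# Pull the latest application code so freshly launched instances are up to date.
git clone https://github.com/example/your-app.git /opt/your-app
cd /opt/your-app

npm install --production
nohup node server.js &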
The only time I create custom AMIs is when the provisioning process just takes too long. However, that can almost always be solved by storing the needed files in S3.