Google Compute committed use discount/reservation doesn't use my existing instance - google-cloud-platform

I have created an instance with the same configuration as specified in the Committed use discount/reservation section, yet when I go to Reservations, it shows the reservation is currently used by none.
[screenshots: reservations list, reservation configuration, instance configuration]
The reservation type on the instance is set to "Automatic", but it doesn't automatically detect and match the reservation, even though both are in the same region and zone. Is there something I am missing?

As per the official doc, a VM can consume a reservation only if the properties of both the VM and the reservation match.
In this consumption model, existing and new instances automatically
count against the reservation if those instances' properties match
the reserved instances' properties.
A VM instance can consume a reservation only if all of the following
properties of both the VM and
the reservation match exactly:
Project
Zone
Machine type
Minimum CPU platform
GPU type and count
Local SSD type and count
Refer to this doc, which explains the requirements and restrictions for Compute Engine VM creation.
Follow this official doc for the step-by-step process of how to consume instances from any matching reservation.
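To make the matching requirement concrete, here is a minimal sketch with gcloud; the reservation name, zone, and machine type are placeholder values, not taken from the question:

```shell
# A VM consumes a reservation automatically only if these properties match
# exactly: project, zone, machine type, min CPU platform, GPUs, local SSDs.
# Create a reservation for one n2-standard-4 VM (placeholder values).
gcloud compute reservations create my-reservation \
    --zone=us-central1-a \
    --vm-count=1 \
    --machine-type=n2-standard-4

# Create a VM with identical properties; --reservation-affinity=any is the
# default "Automatic" behavior, shown here explicitly.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-4 \
    --reservation-affinity=any
```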

Thanks for the post, Hemanth. Everything in my instance was matching, but it still wasn't attached to the reservation.
I resolved the issue by creating another instance. This time, while creating it, I expanded the "Advanced options" section on the create-instance page and manually chose the reservation I wanted it to consume. And it worked!
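The console step of picking a specific reservation under "Advanced options" can also be done on the command line; a sketch with gcloud, where the VM and reservation names are hypothetical:

```shell
# Pin the VM to one named reservation instead of relying on automatic matching.
# The VM's properties must still match the reservation's exactly.
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-4 \
    --reservation-affinity=specific \
    --reservation=my-reservation
```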

Related

When creating an AWS Auto-Scaling Launch Configuration & using spot instances - How can I set a maximum price based on unit type?

I regularly see the following throughout various AWS documentation:
If you set TargetCapacityUnitType to vcpu or memory-mib, the price
protection threshold is applied based on the per-vCPU or per-memory
price instead of the per-instance price.
Most importantly, I see it on the create-launch-template documentation.
I would like to create a launch configuration for an auto-scaling group that will use a variety of instance types based on their attribute-based selection.
This will, of course, allow me to use a number of different instance types, making my spot request more likely to be fulfilled and less prone to interruption.
I've found that I'm able to set a maximum price defined as "Per instance/hour", but if I'm using a variety of instances with a slew of different prices, this of course breaks down.
For this reason, the request-spot-fleet API call has a means of setting a TargetCapacityUnitType so that you're able to define a maximum price based on vCPU or memory instead.
It seems like all the pieces are here, and the aforementioned 'Note' is even on the create-launch-template documentation, but I cannot find where to actually define TargetCapacityUnitType in my launch configuration.
So, when creating an AWS Auto Scaling configuration and using spot instances, how can I set a maximum price based on unit type? Is this possible?
You can set up a launch template with your AMI and then use the launch template to create the group. You don't use a launch configuration at all.
All properties you specify for attribute-based instance type selection are part of a mixed instances policy, which is part of the create-auto-scaling-group call.
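A sketch of that approach with the AWS CLI; the group name, subnets, launch template name, and all numeric limits are hypothetical. Attribute-based selection goes in the mixed instances policy, and `--desired-capacity-type vcpu` makes the group's capacity counts (and spot price protection) per vCPU rather than per instance:

```shell
# policy.json (hypothetical values): attribute-based instance selection with
# spot price protection expressed as a percentage over the lowest price.
cat > policy.json <<'EOF'
{
  "LaunchTemplate": {
    "LaunchTemplateSpecification": {
      "LaunchTemplateName": "my-template",
      "Version": "$Latest"
    },
    "Overrides": [
      {
        "InstanceRequirements": {
          "VCpuCount": { "Min": 2, "Max": 8 },
          "MemoryMiB": { "Min": 4096 },
          "SpotMaxPricePercentageOverLowestPrice": 50
        }
      }
    ]
  },
  "InstancesDistribution": {
    "SpotAllocationStrategy": "price-capacity-optimized"
  }
}
EOF

# With --desired-capacity-type vcpu, min/max/desired are counts of vCPUs,
# not counts of instances.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --min-size 4 --max-size 64 --desired-capacity 8 \
  --desired-capacity-type vcpu \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb" \
  --mixed-instances-policy file://policy.json
```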

In google cloud platform, can we name the instances in a specific series in instance group?

I'm new to the Google Cloud Platform, and I've created an instance group. I want to know whether it is possible to name the instances of an instance group in a specific series.
For eg. :
my-GCP-instance-001,
my-GCP-instance-002,
my-GCP-instance-003
then if a new instance is created it should acquire the name
my-GCP-instance-004
Let's say the instance group scales in (kills an instance), and assume it terminates
my-GCP-instance-002
When it scales out and creates a new instance, it should identify the missing name in the series,
my-GCP-instance-002
and name its new instance
my-GCP-instance-002.
Let me know if this is possible; it would be very helpful.
When creating an instance group, you can supply the --base-instance-name flag. Each VM in that instance group will be assigned a random string as part of its name. The base name is prepended to this random string. For example, if you set the base name to my-GCP-instance-, VMs will have names like my-GCP-instance-yahs and my-GCP-instance-qtyz.
You can also take a look at Working with managed instances:
If you have a system that depends on specific names, use the gcloud tool or API to add VMs with specific names to an existing MIG.
The names that you assign to these managed instances persist if the MIG recreates the VM. For more information about preserving the state of MIG instances, see stateful MIGs.
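The doc's approach of adding specifically named VMs can be sketched with gcloud; the MIG, template, and instance names below are placeholders (note that VM names must be lowercase):

```shell
# Create a managed instance group whose auto-created VMs share a base name;
# auto-created names get a random suffix, e.g. my-gcp-instance-qtyz.
gcloud compute instance-groups managed create my-mig \
    --zone=us-central1-a \
    --template=my-template \
    --base-instance-name=my-gcp-instance \
    --size=3

# Add a VM with an exact name; the MIG preserves this name if it
# recreates the VM.
gcloud compute instance-groups managed create-instance my-mig \
    --zone=us-central1-a \
    --instance=my-gcp-instance-002
```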

Cannot start GCE VM instance "The zone does not have enough resources"

I am trying to restart an instance that has been shut down for about a week; however, it will not start. I get the error message:
Starting VM instance 'gc-custom-europe-west2-xxxxxxxxxxxxxxxxxxxx' failed. Error: The zone 'projects/XXX/zones/europe-west2-c' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
There are no incidents reported that I can see. Could anyone advise, please?
You can check the status of Google Cloud on the Google Cloud Status Dashboard, but this isn't an incident. Let me provide some explanation:
When you stop an instance, it releases resources like vCPU and memory.
When you start an instance (or change it), it requests those resources back, and if there aren't enough resources available in the zone you'll get an error message:
Error: Starting VM instance "INSTANCE_NAME" failed. Error: The zone 'projects/XXXX/zones/ZONE' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
More information is available in the documentation:
If you receive a resource error (such as ZONE_RESOURCE_POOL_EXHAUSTED
or ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS) when requesting new
resources, it means that the zone cannot currently accommodate your
request. This error is due to Compute Engine resource obtainability,
and is not due to your Compute Engine quota.
Resource availability depends on user demand and is therefore dynamic.
There are a few ways to resolve this issue:
Move your VM instance to another zone.
Wait for a while and try to start your VM instance again.
Reserve resources for your VM by following the documentation, to avoid this issue in the future (extra payment required):
Create reservations for Virtual Machine (VM) instances in a specific
zone, using custom or predefined machine types, with or without
additional GPUs or local SSDs, to ensure resources are available for
your workloads when you need them. After you create a reservation, you
begin paying for the reserved resources immediately, and they remain
available for your project to use indefinitely, until the reservation
is deleted.
To protect the data on your VM, you can create a snapshot before making any changes.
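Creating such a reservation can be sketched with gcloud; the reservation name and machine type are placeholders, with the zone taken from the error message:

```shell
# Reserve capacity in the zone so future starts aren't subject to stockouts.
# Billing for the reserved resources starts as soon as this succeeds.
gcloud compute reservations create my-reservation \
    --zone=europe-west2-c \
    --vm-count=1 \
    --machine-type=e2-standard-4
```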
You could try moving the instance to a different zone. Let me provide the instructions for doing so:
1. Go to Google Cloud Platform >>> Compute Engine.
2. Go to Snapshots >>> Create snapshot >>> select your Compute Engine instance.
3. Once the snapshot is complete, click on it.
4. Under "Snapshot details", at the top, click Create instance. Here you are basically creating an instance with a copy of your disk.
5. Select your new zone, set up the previous settings, and choose a new name.
6. Click Create. At this point your image should be running in the new zone.
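The console steps above can also be sketched with gcloud; the disk, snapshot, and VM names below are placeholders, and the target zone is an assumption:

```shell
# 1. Snapshot the VM's boot disk in the exhausted zone.
gcloud compute disks snapshot my-disk \
    --zone=europe-west2-c \
    --snapshot-names=my-snapshot

# 2. Create a new disk from the snapshot in a different zone.
gcloud compute disks create my-disk-b \
    --source-snapshot=my-snapshot \
    --zone=europe-west2-b

# 3. Boot a new VM from that disk in the new zone.
gcloud compute instances create my-vm-b \
    --zone=europe-west2-b \
    --disk=name=my-disk-b,boot=yes
```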

The zone 'projects/xxxx/zones/us-west3-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later

I have a suspended instance that I want to bring up, but cannot because of the error in the title. I'd expect that there'd be an obvious way to choose a different zone in which to resume the instance, but I see nothing. As long as us-west3-a is overbooked, how can I resume execution of this instance elsewhere?
I'm not running a major service - this one instance is the entire operation, and given what I'm running (an ancient game server) load balancing or multi-region availability is out of the question. I just need to be able to run this instance somewhere when the need strikes.
To be able to resume your instance in another zone, you will need to create a snapshot first and then create a new instance from that snapshot. There is no way to directly transfer an instance to another zone. Below is the step-by-step procedure:
How to create snapshot
Go to the Compute Engine page > then select Snapshot.
Click Create Snapshot.
Select the disk of your instance.
Please check all your settings
Once you're done, please click the "Create" button.
How to create instance from a snapshot with new zone
Go to Compute Engine > Snapshots
Select the snapshot you need
Click Create Instance
Provide a name for your new instance
Select the new Region or Zone
Select other options as needed, e.g. machine type or GPU
Edit other settings like network and disk if needed
Click Create
Once the instance is created and started, it will be in the same state it was in at the time you created the snapshot.
For more information and troubleshooting about the Stockout error, you can check the GCP official documentation.
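The same recovery can be sketched with gcloud; the disk, snapshot, machine type, and target zone are placeholders. If I understand `--source-snapshot` on `gcloud compute instances create` correctly, it creates the new VM's boot disk directly from the snapshot:

```shell
# Snapshot the instance's boot disk in the exhausted zone.
gcloud compute disks snapshot my-disk \
    --zone=us-west3-a \
    --snapshot-names=game-server-snap

# Recreate the server in a zone with capacity, booting from the snapshot.
gcloud compute instances create game-server \
    --zone=us-west3-b \
    --source-snapshot=game-server-snap \
    --machine-type=e2-medium
```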

Google Compute Engine autoscale based on 'used' memory

I'm looking to scale my Compute Engine instances based on memory, which is an agent metric in Stackdriver. The caveat is that out of the five states the agent can monitor (buffered, cached, free, slab, used; see the link here), I only want to look at 'used' memory, and if that value is above a certain percentage threshold across the group (or per instance, which would also work for me), I want to autoscale.
I've already installed the Stackdriver Monitoring agent on all nodes across the managed instance group, and I can successfully visualize 'used' memory in my monitoring dashboard.
Unfortunately, I can't get it to work for autoscaling. This is what I see when I go to configure it in the autoscaling section of the MIG.
I believe adding a filter expression should work as expected, since the same expression works correctly in the Stackdriver console's Monitoring dashboard. It's also mentioned here that the syntax is compatible with the Cloud Monitoring filter syntax given here.
I've tried different combinations of syntax in the filter expression field, but none of them has worked. Please help.
I was attempting the exact same configuration in an attempt to scale based on memory usage. After testing various unsuccessful entries, I reached out to Google support. I can't tell from your question what kind of instance group you have; it matters because of the following.
TLDR
Based on input from Google support, only zonal instance groups allow the filter expression entry.
Zonal Instance Group
Only zonal instance groups will allow the metric filter setting. The setting you are attempting to enter, metric.state=used, is correct for a zonal instance group. However, that field must be left blank for a regional instance group.
Regional Instance Group
As noted above, applying the filter for a regional instance group is not supported. As noted in their documentation they mention that you leave that field blank.
In the Additional filter expression section: For a zonal MIG, optionally enter a filter to use individual values from metrics with multiple streams or labels. For more information, see Filtering per-instance metrics. For a regional MIG, leave this section blank.
If you add an entry, you'll receive the message "Regional managed instance groups do not support autoscaling using per-group metrics." when attempting to save your changes.
On the other hand, if you leave the field empty, the configuration will save. However, I found that leaving the field empty and setting almost any number in the Target utilization field always caused my group to scale to the maximum number of instances.
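For a zonal MIG, the same configuration can be sketched on the command line; the group name, zone, replica limit, and utilization target are placeholders, and the filter follows the Cloud Monitoring filter syntax:

```shell
# Autoscale a zonal MIG on the agent's memory metric, filtered to the
# 'used' state, targeting 60% used memory per instance.
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --max-num-replicas=10 \
    --update-stackdriver-metric=agent.googleapis.com/memory/percent_used \
    --stackdriver-metric-filter='metric.labels.state = "used"' \
    --stackdriver-metric-utilization-target=60 \
    --stackdriver-metric-utilization-target-type=gauge
```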
Summary
Google informed me that they do have a feature request for this. I communicated that it didn't make sense to even have the option to select percent_used if it's not supported. The response was that we should see the documentation updated in the future to clarify that point.