How can I get virtual machine specifications from Google Cloud? - google-cloud-platform

I would like to know if there is a way to collect the technical specifications of a virtual machine from Google Cloud (CPU, frequency, memory, storage)?
I am using the billing API (https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus?key=API_KEY) to get the pricing information, but it contains no technical specifications.
Is there a way to get that data? Maybe using the products' SKUs or something? I know the AWS and Azure SDKs/APIs allow developers to get this technical information, but I did not find the GCP equivalent.
I searched for a while for something like this; it seems a lot of people have had the same issue, but no one had a working answer.

The Compute API offers several operations that you can use to obtain the desired information.
In particular, consider reviewing the instances.get operation, which will provide you with all the details of an instance.
If required, you can obtain the list of instances in a given zone using the instances.list operation.
If you are trying to obtain information about machine families, GCP provides a rich machine types API as well.
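For example, the machine types API can be called through the google-api-python-client library. A minimal sketch, assuming Application Default Credentials are configured ("my-project" is a placeholder project ID):

# Minimal sketch, assuming google-api-python-client is installed and
# Application Default Credentials are available. "my-project" is a
# placeholder project ID.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# machineTypes.get returns the hardware specification of a machine type
# in a given zone, independent of any running instance.
machine_type = compute.machineTypes().get(
    project="my-project",
    zone="us-central1-a",
    machineType="n1-standard-4",
).execute()

print(machine_type["guestCpus"], "vCPUs,", machine_type["memoryMb"], "MB RAM")

The response also carries fields such as name, description, maximumPersistentDisks and isSharedCpu, so it covers most of the per-machine-type specification the question asks about.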

Related

Correct Architecture for Micro Services with Multiple Customer Interfaces

I am new to microservices and I am keen to use this architecture. I am interested to know what architecture structure should be used for systems with multiple customer interfaces where customer systems may use one or many of the available services. Here is a simple illustration of a couple of ways I think it could be used:
An example of this type of system could be:
- a company with multiple staff using the system for product quotes, using the products, quotes and users microservices;
- a company with a website to display products, using the products microservice;
- a company with multiple staff using the system for its own quotes, using the quotes and users microservices.
Each of these companies would have its own custom-built interface displaying only the relevant services.
As in the illustrations, all quotes, products and users could be stored locally to the microservices, using unique references to identify each company's records. I don't know if this is advisable, as the data could grow fast and become difficult to manage.
Alternatively, I could store data such as users and quotes locally in the client system and reference the microservices only for generic data. Here the microservices would just handle common logic and return results. This feels somewhat illogical and problematic to me.
I've not been able to find anything online explaining the best course of action for this scenario and would be grateful for any experienced feedback.
I am afraid you will not find many useful recipes or patterns for microservice architectures yet. I think the relative quiet around your question is because it doesn't have enough detail for anybody to readily grasp. I will hazard a guess:
From first principles, you have the concept of a quote, which would have to interrogate the product to get a price and other details. It might need to access users to produce commission information, and customers for things like discounts and lead times. Similar concepts may be used in different applications, for example inventory, catalog and ordering (slightly different from quote).
The idea in microservices is to reduce the overlap between these concepts by dispatching the common operations as their own (micro) services, and constructing the aggregate services in terms of them. Just because something exists as a service does not mean it has to be publicly available. It can be private to just these services.
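To make that composition concrete, here is a hypothetical Python sketch: an aggregate quote service calling private product and user services over HTTP. All service names, endpoints and fields are invented for illustration; they are not taken from the question.

# Hypothetical sketch of an aggregate quote service composing private,
# single-purpose services over HTTP. All names and endpoints are
# illustrative only.
import requests

PRODUCT_SVC = "http://product-service:8080"  # private; not publicly exposed
USER_SVC = "http://user-service:8080"        # private; not publicly exposed

def build_quote(product_id: str, user_id: str, quantity: int) -> dict:
    # The quote service owns only quoting logic; pricing and commission
    # data are delegated to the dedicated services.
    product = requests.get(f"{PRODUCT_SVC}/products/{product_id}").json()
    user = requests.get(f"{USER_SVC}/users/{user_id}").json()
    subtotal = product["unit_price"] * quantity
    return {
        "product_id": product_id,
        "quantity": quantity,
        "subtotal": subtotal,
        "commission": subtotal * user.get("commission_rate", 0.0),
    }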
When you have strained your system into these single-function services, the resulting system will communicate more, but it will be able to be deployed more flexibly. For example, more resources and/or redundancy might be applied to the product service if it is overtaxed by requests from many services. In the end, infrastructure like a service mesh helps to isolate the implementation of these microservices from these sorts of deployment considerations.
Don't be misled into thinking there is a free lunch. Microservice architectures require more upfront work in defining the service boundaries. A failure in this critical area can yield much worse problems than a poorly scaled monolithic app. Even when you have defined your services well, you might find they rely upon external services that are not as well considered. The only solace there is that it is much easier to insulate yourself from these if you have already insulated the rest of your system from its parts.
After much research following various online courses, video tutorials and some documentation provided by Netflix, I have come to understand that the first structure in the diagram is the best solution.
Each service should be able to effectively function independently, with the exception of referencing other services for additional information. Each service should in effect be able to be picked up and put into another system without any need to be aware of anything beyond the API layer of the architecture.
I hope this is of some use to someone trying to get to grips with this architecture.

How do I get the query quotas from Deployment Manager via the API?

Over at https://console.cloud.google.com/apis/api/deploymentmanager.googleapis.com/quotas or https://console.cloud.google.com/iam-admin/quotas?service=deploymentmanager.googleapis.com, I am able to see the query as well as the write quotas, and I can determine whether I'm going to hit the limits, if any.
Unfortunately, there seems to be no way to get these values programmatically using the Deployment Manager APIs (using Go) or using gcloud.
Am I missing something here, or are there other ways of getting at these values, possibly not via the APIs directly?
Currently, there's no way to get these quotas programmatically or with gcloud (apart from the Compute Engine quotas); however, there's a feature request to get/set project quotas via an API. I suggest starring that issue to track it and to ask for updates.
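For the Compute Engine exception mentioned above, a minimal sketch using google-api-python-client (assuming Application Default Credentials; "my-project" is a placeholder project ID):

# Minimal sketch: Compute Engine quotas are readable via projects.get.
# Assumes google-api-python-client and Application Default Credentials;
# "my-project" is a placeholder project ID.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project = compute.projects().get(project="my-project").execute()

# Each quota entry carries a metric name, a limit and the current usage.
for quota in project.get("quotas", []):
    print(quota["metric"], quota["usage"], "/", quota["limit"])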
Knowing of no API which could be used to do so, I guess one could only limit the quota per user; see the documentation.
There are several questions concerning other APIs (all the same).

Google Cloud APIs usage data by projects

Is there any way to programmatically get data similar to the APIs overview of the Google Cloud dashboard? Specifically, I'm interested in the list of APIs enabled for the project and their usage/error stats for some predefined timeframe. I believe there's an API for that, but I struggle to find it.
There's currently no API that gives you a report similar to the one you can see through the Google Cloud Console.
The Compute API can retrieve some quotas with its get method, but it's somewhat limited (only Compute Engine quotas) and, from what I understood of your question, not quite what you're looking for.
However, I've found in Google's Issue Tracker a feature request that's close to what you're asking for.
If you need something more specific or want to file the feature request yourself, check the "Report feature requests" documentation and create your own. The GCP team will take a look to evaluate it and consider implementation.

Extracting Machine Specifications from Google Cloud from an SKU

I am trying to extract price information about the virtual machines supplied by the Compute Engine service in Google Cloud. I have successfully extracted a JSON file with some pricing information using an HTTP GET request to Google Cloud's Pricing API; however, all of the pricing data in the file is mapped to individual machines via SKU numbers, and there are no machine specifications associated with any of the SKUs. Here is the request:
GET https://cloudbilling.googleapis.com/v1/services?key=API_KEY
Ideally, I would like the machine specifications included in the JSON file returned by the HTTP request, but if that isn't possible I'd like a way to use the SKU to look up a machine's specifications. For example, if my SKU is 19E4-D27B-7C12, I'd like to use that code to look up the machine it specifies and see details such as the amount of RAM, the number of CPUs, etc. Does anyone know of any Google Cloud resources that would allow me to do such a thing? And if not, is there any other way to accomplish this task? I'd like this process to be programmatic, so I cannot use the built-in calculator in Google Cloud.
Thank you!
It looks like there is currently no API that can get the features of a certain machine type from its SKU; however, there is an API:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/get
that allows you to get the detailed features of an existing virtual machine based on the project, the zone and the instance name.
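A minimal sketch of that instances.get call using google-api-python-client (assuming Application Default Credentials; the project, zone and instance names are placeholders):

# Minimal sketch of instances.get; "my-project", the zone and "my-instance"
# are placeholders. Assumes Application Default Credentials.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
instance = compute.instances().get(
    project="my-project",
    zone="us-central1-a",
    instance="my-instance",
).execute()

# machineType is returned as a full URL; the trailing segment is the type
# name, which can then be fed to machineTypes.get for CPU/memory details.
print(instance["machineType"].rsplit("/", 1)[-1])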

Is it possible to get pricing for AWS S3 programmatically through their SDK?

I'd like to be able to estimate the cost of a download operation from S3 programmatically, but I hesitate to hard-code the prices they list (per GB) on their pricing page in the event they change. Does the SDK provide any kind of access to current pricing data? If it does, I can't seem to find it.
CLARIFICATION: I'm asking if the official Amazon SDK has hooks for pricing data, not if it's possible to get pricing data at all. Obviously, it is possible to get pricing data through non-documented means.
Since you're asking for SDK support and SDKs are language-specific, I have to stress I can only speak for the Ruby SDK.
The Ruby SDK in the latest major version (2.x) is mostly auto-generated from an API description in JSON format for each documented API. There is no official pricing API – only a static file that states:
This file is intended for use only on aws.amazon.com. We do not guarantee its availability or accuracy.
This means there is no way for the Ruby SDK to give you pricing information. Your mileage may vary in other languages (but I doubt it).