Is it possible to get the price of Compute Engine machines using GCP SDK client libraries?

I'm working on a project which requires me to generate a list of the machine types available in GCP Compute Engine along with their prices.
I'm able to generate the list of machines for a particular region using the compute client, but I am unable to get their prices. I'm exploring the billing client to see if it is possible.
Can anyone suggest what would work best for this problem?
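For what it's worth, the Cloud Billing Catalog API is exposed through the client libraries. A minimal sketch, assuming the google-cloud-billing package and Compute Engine's catalog service ID 6F81-5844-456A (the same ID that appears in the related question below):

    # Sketch: list Compute Engine SKUs with their list prices using the
    # Cloud Billing Catalog API (google-cloud-billing).
    from google.cloud import billing_v1

    catalog = billing_v1.CloudCatalogClient()

    for sku in catalog.list_skus(parent="services/6F81-5844-456A"):
        # Take the most recent pricing info and its first tiered rate.
        pricing = sku.pricing_info[0].pricing_expression
        if not pricing.tiered_rates:
            continue
        price = pricing.tiered_rates[0].unit_price
        usd = price.units + price.nanos / 1e9
        print(f"{sku.description}: {usd} USD per {pricing.usage_unit_description}")

Note that the catalog prices SKUs (e.g. per-core and per-GB-of-RAM usage), not whole machine types, so assembling a per-machine-type price still requires combining SKUs yourself.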

Related

How can I get virtual machine specifications from Google Cloud?

I would like to know if there is a way to collect the technical specifications of a virtual machine from Google Cloud (CPU, frequency, memory, storage)?
I am using the Billing API (https://cloudbilling.googleapis.com/v1/services/6F81-5844-456A/skus?key=API_KEY) to get the pricing information, but it contains no technical specifications.
Is there a way to get that data? Maybe using the products' SKUs or something? I know the AWS and Azure SDKs/APIs allow developers to get this technical information, but I did not find the GCP equivalent.
I searched for a while for something like this, but it seems a lot of people have had the same issue and no one had a working answer.
The Compute API offers several operations that you can use to obtain the desired information.
In particular, consider reviewing the instances.get operation; it will give you all the details of an instance.
You can obtain the list of instances in a given zone using the instances.list operation.
If you are trying to obtain information about machine families, GCP provides a rich API for that as well.
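As an illustration, machine-family specifications (vCPUs, memory) can be pulled with the machineTypes API. A minimal sketch, assuming the google-cloud-compute package and placeholder project/zone values:

    # Sketch: list machine types in a zone with their CPU/memory specs.
    from google.cloud import compute_v1

    machine_types = compute_v1.MachineTypesClient()

    # "my-project" and the zone are placeholders.
    for mt in machine_types.list(project="my-project", zone="us-central1-a"):
        print(f"{mt.name}: {mt.guest_cpus} vCPUs, {mt.memory_mb} MB RAM")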

Deploy multiple agents with dialogflow

I'm developing a Dialogflow agent for bookings. My problem is that I need to deploy the agent for multiple clients, each with their own calendar. Unfortunately, on Google Cloud Platform it is only possible to have one agent per project, and at the same time the number of projects is limited. How can I solve this? I can see 3 solutions, but I'm open to suggestions.
1. Request more projects from Google and associate one project with each of my clients. I would be able to manage the projects with a service account, but how much would it cost? Can I request more than 1000 projects?
2. Create a new Google Cloud Platform account for every client and create a project in each account (like the Qwiklabs accounts in the Google courses). The problem is that I don't know how to scale this solution, since I'd need to automate the process and I don't want to create an account manually each time.
3. Use the same GCP account and the same agent for multiple clients. This may require inserting a unique code when starting the chat to identify which calendar we are referring to. That way, though, I won't be able to integrate the chat on each client's website or Facebook page unless I give the same credentials to everyone.
What do you think would be the best solution? Do you have any other ideas to solve this problem?
Thank you guys
In terms of the best solution, it would be best to create a project for each client. When using Dialogflow products, each project can have at most one agent, so you need multiple projects if you need multiple agents either way.
Additionally, when it comes to the number of projects you can have in GCP, the limit for the average user is 30 projects. However, you can always increase that limit by submitting a project quota increase request, following the documentation on requesting a higher limit.
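If you do go with one project per client and want to automate it, something along these lines might work. A sketch assuming the google-cloud-resource-manager package, that your credentials are allowed to create projects, and that create_client_project is just an illustrative helper:

    # Sketch: create a project per client with the Resource Manager API.
    from google.cloud import resourcemanager_v3

    projects = resourcemanager_v3.ProjectsClient()

    def create_client_project(client_slug: str):
        project = resourcemanager_v3.Project(
            project_id=f"booking-agent-{client_slug}",  # must be globally unique
            display_name=f"Booking agent for {client_slug}",
        )
        operation = projects.create_project(project=project)
        return operation.result()  # blocks until the project exists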

How to generate uptime reports through Google Cloud Stackdriver?

I am a new user of Google Cloud (Stackdriver).
I would like to set up and generate uptime reports on a monthly basis, covering the past 4 weeks, delivered through e-mail, but I have not been able to find where I could do this.
I have done some research but have not managed to find what I am looking for. The closest I got was Trace, but it is still not what I would like to have.
It's not possible to generate that kind of report using the tools available in Google Cloud.
Using traces is probably the best you can do for now, although you can try the Cloud Trace API, which may give you a way to extract the information in a more structured form.
If you want this feature included in GCP, please go to the Issue Tracker and create a new feature request with a detailed explanation of your goal, and mention the time span you want to be able to get data from.
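For what it's worth, if you do go the Cloud Trace API route, extracting the raw data programmatically might look like this (a minimal sketch, assuming the google-cloud-trace package and a placeholder project ID):

    # Sketch: list recent traces for a project with the Cloud Trace API.
    from google.cloud import trace_v1

    client = trace_v1.TraceServiceClient()

    request = trace_v1.ListTracesRequest(
        project_id="my-project",  # placeholder
        view=trace_v1.ListTracesRequest.ViewType.COMPLETE,  # include spans
    )
    for trace in client.list_traces(request=request):
        print(trace.trace_id, len(trace.spans), "spans")

You would still have to aggregate this into a monthly report and e-mail it yourself (e.g. from a scheduled job).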

Object Detection Django Rest API Deployment on Google Cloud Platform or Google ML Engine

I have developed a Django API which accepts images from a live-feed camera, sent in the request as base64. In the API, each image is converted into a NumPy array and passed to a machine learning model, i.e. object detection using the TensorFlow Object Detection API. The response is simple text naming the detected objects.
I need a GPU-based cloud instance where I can deploy this application for fast processing, to achieve real-time results. I have searched a lot, but found no such resource. I believe Google Cloud instances can be connected to a live API, but I am not sure how exactly.
Thanks
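As an aside, the base64-to-NumPy step described in the question is usually only a few lines (a minimal sketch, assuming Pillow is used for decoding; decode_image is a hypothetical helper):

    # Sketch: decode a base64-encoded request image into a NumPy array
    # ready to be fed to an object detection model.
    import base64
    import io

    import numpy as np
    from PIL import Image

    def decode_image(image_b64: str) -> np.ndarray:
        raw = base64.b64decode(image_b64)
        img = Image.open(io.BytesIO(raw)).convert("RGB")
        return np.asarray(img)  # shape (H, W, 3), dtype uint8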
I assume that you're using GPU locally or wherever your Django application is hosted.
First, make sure that you are using tensorflow-gpu and that all the necessary CUDA setup is done.
You can start your GPU instance easily on Google Cloud Platform (GCP). There are multiple ways to do this.
Quick option
Search for Notebooks and start a new instance with the required GPU and RAM.
Instead of a notebook instance, you can set up the instance separately if you need a specific OS and more flexibility in choosing the machine.
To access the instance with SSH, simply add your SSH public key to the instance Metadata, which can be seen when you open the instance details.
Set up Django as you would on any server. To test it, simply debug-run it on host 0.0.0.0 and your preferred port.
You can access the APIs via the external IP of the machine, which can be found on the instance details page.
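If you'd rather create the GPU instance programmatically than through the console, here is a rough sketch with the google-cloud-compute client library (the machine type, GPU model, and boot image below are placeholder choices; GPU instances must use a TERMINATE host-maintenance policy):

    # Sketch: create a VM with an attached GPU via the Compute Engine API.
    from google.cloud import compute_v1

    def create_gpu_vm(project: str, zone: str, name: str):
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/n1-standard-4",
            guest_accelerators=[
                compute_v1.AcceleratorConfig(
                    accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
                    accelerator_count=1,
                )
            ],
            # GPU instances cannot live-migrate during host maintenance.
            scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-12",
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        op = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        op.result()  # wait for the create operation to finish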
Some suggestions
While the first option is quick and dirty, it's not recommended for production use.
It is better to use a dedicated serving setup such as tensorflow-serving along with Kubeflow.
If you are handling the inference in the application itself, then make sure that you load-balance the server properly. Use NGINX or another good server along with gunicorn/uWSGI.
You can use Redis for queue management. When someone calls the API, it is not guaranteed that a GPU is free to run the inference. It is fine to go without this when you have a very small number of hits on the API per second. But when you think of scaling up, to say 50 requests per second, which a single GPU can't handle at a time, you need a queue system.
All requests should go to Redis first, and the GPU worker takes the pending jobs from the queue. If required, you can always scale the number of GPUs.
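A bare-bones version of that queue pattern might look like this (a sketch assuming the redis-py package; enqueue_job and run_worker are hypothetical names, and detect stands for your existing inference function):

    # Sketch: decouple the HTTP API from the GPU with a Redis list as a job queue.
    import json

    import redis

    r = redis.Redis(host="localhost", port=6379)
    QUEUE = "inference-jobs"

    # API side: enqueue the request instead of calling the model inline.
    def enqueue_job(job_id: str, image_b64: str):
        r.rpush(QUEUE, json.dumps({"id": job_id, "image": image_b64}))

    # GPU worker side: pop jobs one at a time and store the results.
    def run_worker(detect):
        while True:
            _, payload = r.blpop(QUEUE)  # blocks until a job arrives
            job = json.loads(payload)
            result = detect(job["image"])
            r.set(f"result:{job['id']}", json.dumps(result))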
Google Cloud actually offers Cloud GPUs. If you are looking to perform higher-level computations with applications that require real-time capabilities, I would suggest you look into the following link for more information.
https://cloud.google.com/gpu/
Compute Engine also provides GPUs that can be added to your virtual machine instances. Use GPUs to accelerate specific workloads on your instances such as Machine Learning and data processing.
https://cloud.google.com/compute/docs/gpus/
However, if your application requires a lot of resources, you'll need to increase your quota to ensure you have enough GPUs available in your project, and make sure to pick a zone where GPUs are available. You can submit the quota increase request as described here: https://cloud.google.com/compute/docs/gpus/add-gpus#create-new-gpu-instance
Since you would be using the TensorFlow API for your application on ML Engine, I would advise you to take a look at the link below. It provides instructions for creating a Deep Learning VM instance with TensorFlow and other tools pre-installed.
https://cloud.google.com/ai-platform/deep-learning-vm/docs/tensorflow_start_instance
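If you want to pick the boot image for such a VM programmatically, you can resolve the newest image in a Deep Learning VM image family (a sketch; the family name tf2-latest-gpu is an assumption, so verify the current families in the deeplearning-platform-release project):

    # Sketch: resolve the latest TensorFlow Deep Learning VM image.
    from google.cloud import compute_v1

    image = compute_v1.ImagesClient().get_from_family(
        project="deeplearning-platform-release",
        family="tf2-latest-gpu",  # assumption -- verify the current family name
    )
    print(image.name, image.self_link)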

Extracting Machine Specifications from Google Cloud from an SKU

I am trying to extract price information about the virtual machines supplied by the Compute Engine service in Google Cloud. I have successfully extracted a JSON file with some pricing information in it using an HTTP GET request to Google Cloud's Pricing API; however, all of the pricing data in the file is mapped to individual machines via SKU numbers, and there are no machine specifications associated with any of the SKUs. Here is the request:
GET https://cloudbilling.googleapis.com/v1/services?key=API_KEY
Ideally, I would like to find a way to have the machine specifications included in the JSON file returned by the HTTP request, but if that isn't possible, I'd like to find a way to use the SKU to look up a machine's specifications. For example, if my SKU is 19E4-D27B-7C12, I'd like to use that code to look up the machine it specifies and see details about it such as the amount of RAM, the number of CPUs, etc. Does anyone know of any Google Cloud resources that would allow me to do such a thing? And if not, is there any other way to accomplish this task? I'd like this process to be programmatic, so I cannot use the built-in calculator in Google Cloud.
Thank you!
It looks like there is currently no API that returns the machine specifications behind a given SKU. However, there is an API:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/get
that allows you to get the detailed features of an existing virtual machine given the project, the zone, and the instance name.
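One workaround is to join the two datasets yourself: pull the SKUs from the Billing Catalog API and the machine specifications from the Compute Engine machineTypes API, then correlate them on the machine family mentioned in the SKU description. A rough sketch (the matching rule is a heuristic assumption, not an official mapping, and the project/zone are placeholders):

    # Sketch: correlate billing SKUs with machine-type specs by machine family.
    from google.cloud import billing_v1, compute_v1

    catalog = billing_v1.CloudCatalogClient()
    machine_types = compute_v1.MachineTypesClient()

    # Specs keyed by family prefix, e.g. "N1" from "n1-standard-4".
    specs = {}
    for mt in machine_types.list(project="my-project", zone="us-central1-a"):
        family = mt.name.split("-")[0].upper()
        specs.setdefault(family, []).append((mt.name, mt.guest_cpus, mt.memory_mb))

    for sku in catalog.list_skus(parent="services/6F81-5844-456A"):
        # Heuristic: descriptions usually start with the family, e.g.
        # "N1 Predefined Instance Core running in Americas".
        family = sku.description.split(" ")[0]
        if family in specs:
            print(sku.sku_id, sku.description, "->", len(specs[family]), "machine types")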