Is a Bayesian hyperparameter tuning algorithm supported in Google ML Engine? - google-cloud-ml

According to https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#Algorithm there are only grid search and random search algorithms available.
According to this blog post https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization Bayesian optimization is supported.
So, if supported, how can I tune hyperparameters using Bayesian optimization on Google Cloud ML Engine?

If you leave the algorithm field unset (ALGORITHM_UNSPECIFIED, the default), ML Engine uses Bayesian optimization (see docs).
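To make that concrete, here is a minimal sketch of submitting a tuning job with the Python API client; the project, bucket, module, and metric names are placeholders, and the algorithm field is simply left out of the hyperparameters spec, which resolves to ALGORITHM_UNSPECIFIED and therefore the default Bayesian optimization:

```python
from googleapiclient import discovery

# Sketch of a hyperparameter tuning job submission (placeholder names throughout).
training_inputs = {
    "scaleTier": "BASIC",
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
    "pythonModule": "trainer.task",
    "region": "us-central1",
    "runtimeVersion": "1.15",
    "hyperparameters": {
        # No "algorithm" key -> ALGORITHM_UNSPECIFIED -> default Bayesian optimization.
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "accuracy",
        "maxTrials": 20,
        "maxParallelTrials": 2,
        "params": [
            {
                "parameterName": "learning_rate",
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.1,
                "scaleType": "UNIT_LOG_SCALE",
            }
        ],
    },
}

job_spec = {"jobId": "hptuning_example_001", "trainingInput": training_inputs}

# Submit the job; the default Bayesian search then picks trial values for learning_rate.
ml = discovery.build("ml", "v1")
response = ml.projects().jobs().create(
    parent="projects/my-project", body=job_spec
).execute()
print(response)
```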

Related

Stepwise regression in Google BigQuery

How can I perform stepwise regression in BigQuery ML on GCP? The purpose is to identify which variables are significant and should be taken into consideration when creating statistical models.
Could not find any documentation on GCP.
You can get a model-level explanation with the ML.GLOBAL_EXPLAIN function, which is documented here.
For each feature, you get an attribution value that explains the influence of that feature on the model's inference/prediction.
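As a minimal sketch of that approach, assuming a model named `my_dataset.my_model` has already been trained (with ENABLE_GLOBAL_EXPLAIN where the model type requires it), you can run ML.GLOBAL_EXPLAIN through the BigQuery Python client:

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT *
FROM ML.GLOBAL_EXPLAIN(MODEL `my_dataset.my_model`)
"""

# One row per feature, with its global attribution value.
for row in client.query(sql).result():
    print(dict(row))
```

Features with near-zero attribution are candidates to drop, which gets you close to the variable-selection goal of stepwise regression even though BQML has no built-in stepwise procedure.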

Google AutoML Vision API and Google Vision API Custom Algorithm

I am looking at the Google AutoML Vision API and the Google Vision API. I know that with the AutoML Vision API you get a custom model, because you train ML models on your own images and define your own labels. And when using the Google Vision API, you are using a pretrained model...
However, I am wondering if it is possible to use my own algorithm (one which I created, not one provided by Google) instead, with the Vision / AutoML Vision API? ...
Sure, you can definitely deploy your own ML algorithm on Google Cloud without being tied to the Vision or AutoML APIs.
Two approaches that I have used many times for this same use case:
Serverless approach, if your model is relatively light in terms of computational resource requirements: deploy your own custom Cloud Function. More info here.
To be more specific, you simply call your Cloud Function, passing your image directly (base64 or a pointer to a storage location). The function automatically allocates all required resources, runs your custom algorithm to process the image and/or run inference, sends the results back, and vanishes (all resources released, no further running costs). Neat :) A minimal sketch of such a function is shown after this answer.
Google AI Platform. More info here
Use AI Platform to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.
If in doubt, go for AI Platform, as the whole pipeline is nicely lined up for any custom code or models. It is also a good fit for production deployments.
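As promised above, here is a minimal sketch of the serverless option: an HTTP Cloud Function on the Python runtime. The entry point name, the `run_model` helper, and the JSON payload shape are all hypothetical; only the decoding-and-dispatch pattern is the point.

```python
import base64

import functions_framework  # Python Cloud Functions runtime


def run_model(image_bytes: bytes) -> dict:
    """Placeholder for your own algorithm / model inference on the raw image bytes."""
    # ... load your model and process image_bytes here ...
    return {"label": "example", "score": 0.0}


@functions_framework.http
def classify_image(request):
    """HTTP entry point: expects JSON like {"image": "<base64-encoded bytes>"}."""
    payload = request.get_json(silent=True) or {}
    image_bytes = base64.b64decode(payload.get("image", ""))
    return run_model(image_bytes)  # a dict is returned to the caller as JSON
```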

How to do OCR with Google's AutoML

I want to do OCR and I know that Cloud Vision API supports it. But I'm interested in making my custom model for it and wish to use AutoML for the same. But I couldn't find anything related to OCR using AutoML. Is it possible to do OCR using AutoML? How do we go about this? I know this is a very open-ended question, but I'd appreciate some help.
AutoML Natural Language can perform OCR on PDFs; however, that is only an intermediate step, because the product is intended for building your own models for text classification, entity extraction, or sentiment analysis.
If your goal is just to perform OCR, the best approach is the Vision API.
You cannot do OCR with AutoML. Your options are to use the Cloud Vision API to do OCR and then apply your own algorithms to assemble the detected text the way you need, or to start from scratch and train your own OCR model (not recommended).
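For reference, a minimal sketch of the Vision API route both answers point to, using document text detection; the GCS URI is a placeholder:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Point the request at an image in Cloud Storage (placeholder path).
image = vision.Image()
image.source.image_uri = "gs://my-bucket/scanned-page.png"

# Dense-text OCR; for sparse text, text_detection also works.
response = client.document_text_detection(image=image)

# Full detected text, ready for your own downstream post-processing.
print(response.full_text_annotation.text)
```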

Google Cloud AutoML Natural Language for Chatbot like application

I want to develop a chatbot like application which gives response to input questions using Google Cloud Platform.
Naturally, Dialogflow is suited for such applications. But due to business conditions, I cannot use Dialogflow.
An alternative could be AutoML Natural Language, where I do not need much machine learning expertise.
AutoML Natural Language requires labelled documents, which can then be used to train a model.
My example document:
What is cost of Swiss tour?
Estimate of Switzerland tour?
I would use a label such as Switzerland_Cost for this document.
Now, in my application I would have a mapping between Labels and Responses.
During Prediction, when I give an input question to the trained model, I would get a predicted label. I can then use this label to return the mapped response.
Is there a better approach to my scenario?
I'm from the AutoML team. This seems like a good approach to me. People use AutoML NL for intent detection, which is pretty well aligned with what you're trying to do here.
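A minimal sketch of the predict-then-map flow described in the question, assuming the google-cloud-automl Python client; the project, model ID, label, and canned response are placeholders:

```python
from google.cloud import automl_v1

# Placeholder deployed text-classification model and label-to-response mapping.
MODEL_NAME = "projects/my-project/locations/us-central1/models/TCN1234567890"
RESPONSES = {
    "Switzerland_Cost": "A Switzerland tour typically costs ...",  # placeholder answer
}

prediction_client = automl_v1.PredictionServiceClient()


def answer(question: str) -> str:
    payload = automl_v1.ExamplePayload(
        text_snippet=automl_v1.TextSnippet(content=question, mime_type="text/plain")
    )
    response = prediction_client.predict(name=MODEL_NAME, payload=payload)
    # Take the highest-scoring predicted label and return the mapped response.
    best = max(response.payload, key=lambda p: p.classification.score)
    return RESPONSES.get(best.display_name, "Sorry, I don't have an answer for that yet.")


print(answer("What is cost of Swiss tour?"))
```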

Does ml-engine provide a similar to Google Cloud Prediction blackbox?

As Google Cloud Prediction API is deprecated (https://cloud.google.com/prediction/docs/end-of-life-faq), does ml-engine provide a similar black-box?
Google Cloud ML Engine is managed TensorFlow and supports higher-level APIs (see the Datalab notebooks for regression and image classification, runnable in Datalab). Compared to the Prediction API, there are some capability differences in the supported data types and some user-experience gaps that are being addressed in the near term.
Note that TensorFlow and ML Engine give you a much greater degree of freedom to select and tune the model, and much larger scale, than a black box, albeit with some added complexity at present. That too will be addressed soon.
Dinesh Kulkarni
Product Manager, Google Cloud ML & Datalab