Google ML Engine - Scikit-Learn Models - google-cloud-platform

The ML Engine documentation says it accepts training and prediction services for scikit-learn models.
Is it possible to train non-scikit-learn models that are wrapped with mixins to expose a scikit-learn interface?

ML Engine currently supports this in alpha; however, it's currently broken for some custom models.
Furthermore, logs for model version creation are not visible. The last time I talked to the GCP ML services team, they said they would add Stackdriver functionality soon.
EDIT:
This answer was first given in Sep 2018; as of Dec 2018, the logging functionality is still not available. I'm wrapping Facebook's Prophet in a custom model; training works, but versioning is still broken.
Versioning Error Message:
Create Version failed. Internal error happened.
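For reference, the mixin-wrapping approach the question asks about can be sketched like this. This is a minimal illustration, not an ML Engine example: the class name and the stand-in "model" (a simple mean predictor in place of something like Prophet) are both hypothetical.

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin

class ProphetLikeWrapper(BaseEstimator, RegressorMixin):
    """Wrap a non-scikit-learn model behind the fit/predict interface.

    The inner "model" here is a stand-in (it just predicts the training
    mean); in practice it would delegate to e.g. a Prophet instance.
    """

    def fit(self, X, y):
        # In a real wrapper, call the wrapped library's training routine here.
        self.mean_ = float(np.mean(y))
        return self

    def predict(self, X):
        # In a real wrapper, call the wrapped library's inference routine here.
        return np.full(len(X), self.mean_)

# Usage: behaves like any scikit-learn estimator.
est = ProphetLikeWrapper().fit([[0], [1], [2]], [1.0, 2.0, 3.0])
print(est.predict([[3], [4]]))  # both predictions equal the training mean
```

Because the wrapper subclasses `BaseEstimator` and `RegressorMixin`, it also picks up `get_params`/`set_params` and a default `score`, which scikit-learn tooling expects.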

Related

Create a model with google ML natural language or other potential service

I have been collecting numerous text descriptions of articles, where each description is structured differently. Now I have to "create" an algorithm that extracts the title of each article for me, which is a hard task. I have come across Google ML Natural Language, and it seems able to create one for me.
Unfortunately, I am not really able to find out exactly how I can use it,
so my question is: how exactly can I set it up? Additionally, it would be helpful to know whether Firebase has such a service, since I am planning to build a Firebase project.
Thanks in advance for any help!
Unfortunately, models created using Google AutoML Natural Language are not exportable to TensorFlow Lite (mobile models). Based on your use case, you will need a model for text classification; the provided link has a sample of how this model works. You can follow this tutorial to train a custom model using the data that you have, so it can identify the title of an article.
Once training is done, you can:
Deploy it in Firebase
Download the model to your device and perform testing.
You can find detailed instructions, from training the model to testing it on your device, for either iOS or Android.

Google AutoML Vision API and Google Vision API Custom Algorithm

I am looking at the Google AutoML Vision API and the Google Vision API. I know that with the AutoML Vision API you get a custom model, because you train ML models on your own images and define your own labels. And when using the Google Vision API, you are using a pretrained model...
However, I am wondering if it is possible to use my own algorithm (one which I created, not one provided by Google) with the Vision / AutoML Vision API? ...
Sure, you can definitely deploy your own ML algorithm on Google Cloud without being tied to the Vision or AutoML APIs.
Two approaches that I have used many times for this same use case:
Serverless approach, if your model is relatively light in terms of computational resource requirements: deploy your own custom Cloud Function. More info here.
To be more specific, the way it works is that you just call your Cloud Function, passing your image directly (base64-encoded or pointing to a storage location). The function then automatically allocates all required resources, runs your custom algorithm to process the image and/or run inference, sends the results back, and vanishes (all resources released, no more running costs). Neat :)
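A minimal sketch of such a function, assuming an HTTP-triggered Python Cloud Function; the entry-point name, the JSON request shape, and the stub "inference" are all illustrative, not a prescribed API:

```python
import base64
import json

def classify_image(request):
    """HTTP Cloud Function entry point (name is illustrative).

    Expects a JSON body like {"image": "<base64-encoded bytes>"}.
    """
    payload = request.get_json()
    image_bytes = base64.b64decode(payload["image"])
    # In a real function, run your custom algorithm / model on image_bytes.
    # This stub just reports the decoded payload size.
    result = {"size_bytes": len(image_bytes), "label": "stub"}
    return json.dumps(result)
```

Deployed with `gcloud functions deploy`, a function like this scales to zero when idle, which is the "vanishes, no more running costs" behavior described above.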
Google AI Platform. More info here
Use AI Platform to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.
If in doubt, go for AI Platform, as the whole pipeline is nicely lined up for any of your custom code/models. It is also well suited to production deployment.

Google AI model serving vs KFServing [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
We are going to build model serving infrastructure. I am comparing Google AI Platform Prediction and KFServing, but I cannot find enough documentation about the features of Google AI serving and how it is implemented.
It seems that gcloud ai-platform versions create can create a model version resource and start serving, which is the only detail I can find.
I have three questions:
1. What is the relationship between Google AI serving and KFServing?
2. How does gcloud ai-platform versions create work?
3. Does Google AI serving provide all the features listed at https://www.kubeflow.org/docs/components/serving/overview/, such as canary rollout, explainers, and monitoring?
The document you shared contains extensive information about Google AI Platform Prediction. In summary, it is a hosted service on GCP where you don't need to manage the infrastructure: you just deploy your model, and a new REST endpoint becomes available for you to start sending predictions via the SDK or API.
Supports multiple frameworks:
TensorFlow
scikit-learn
XGBoost
Pytorch
Custom Docker containers (soon)
Supports GPUs
Model versions
Online and Batch prediction
Logging and Monitoring
Multiple Regions
REST API
Answer to your questions:
With KFServing, you need to manage your own K8s/Kubeflow infrastructure.
Kubeflow supports two model serving systems that allow multi-framework model serving: KFServing and Seldon Core.
With the AI Platform service, you don't manage the infrastructure and don't need K8s/KF; you simply deploy your models and GCP takes care of the infra.
gcloud ai-platform versions create deploys VM(s) in Google Cloud where, based on the settings (runtime version) and framework, all the dependencies are installed automatically, along with everything needed to load your model, so you get access to a REST API.
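The resulting REST API accepts online prediction requests whose JSON body uses an `instances` key. A small sketch of assembling the resource name and request body; the project, model, and version names, the helper function, and the feature rows are all placeholders:

```python
def build_predict_request(project, model, instances, version=None):
    """Assemble the resource name and JSON body for an online
    prediction call to AI Platform Prediction.

    If no version is given, the model's default version serves
    the request.
    """
    name = f"projects/{project}/models/{model}"
    if version is not None:
        name += f"/versions/{version}"
    return name, {"instances": instances}

# Example: two feature rows for a hypothetical tabular model.
name, body = build_predict_request("my-project", "my-model",
                                   [[5.1, 3.5, 1.4, 0.2],
                                    [6.2, 2.9, 4.3, 1.3]])
print(name)  # projects/my-project/models/my-model
```

With the google-api-python-client library, this pair could then be passed to `ml.projects().predict(name=name, body=body).execute()`, which of course requires credentials and an actually deployed version.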
Canary rollouts can be implemented with different models and versions; it may depend on how you route your predictions. Check the What-If Tool and model logging.
Google AI Platform can be used to manage the following stages in the ML workflow:
- Train an ML model on your data:
  - Train model
  - Evaluate model accuracy
  - Tune hyperparameters
- Deploy your trained model.
- Send prediction requests to your model:
  - Online prediction
  - Batch prediction (for TensorFlow only)
- Monitor the predictions on an ongoing basis.
- Manage your models and model versions.
KFServing enables serverless inferencing on Kubernetes and provides performant, high abstraction interfaces for common machine learning (ML) frameworks like TensorFlow, XGBoost, scikit-learn, PyTorch, and ONNX to solve production model serving use cases.

Unable to export trained model from AutoML Vision

I trained a model using Google AutoML Vision and now I want to export it to use locally. I tried this tutorial from the official Google docs with no success.
In the model list, when I click the three dots (more actions), there is no export option.
Even on the test & use page there is no option to export the model.
Thanks in advance,
First of all, the tutorial you are following is for AutoML Tables and, although similar, it is not exactly the same as for AutoML Vision.
With AutoML Vision you can train two types of models, Cloud-hosted and Edge-exportable. As the names imply, only the latter can be exported.
Here you can see the documentation for exporting AutoML Vision Edge models.
My assumption is that you have trained a Cloud-hosted model, which is not exportable.
There is currently a feature request open to allow this behavior. You can find it here. If you are also interested in it, you can star it to stay updated on its progress.

google ai platform vs ml engine

I did lots of searching, but I cannot understand the difference between Google AI Platform and ML Engine.
It seems that both of them can be used for training and deploying models.
Other terms like google-cloud-automl and Google AI Hub are also very confusing.
What are the differences between them? Thanks
The short answer is: there isn't one. In 2019, "ML Engine" was renamed to "AI Platform", and over time some services changed and expanded. To see what has changed, check the release notes, starting from around April. "Around", as they haven't left much trace that ML Engine ever existed.
Here's one of the pull requests to "Rename Cloud ML Engine to AI Platform" for the Python samples.
Cloud ML Engine = AI Platform Training + AI Platform Prediction (It was just a name change). Used for training and deploying ML models.
AI Platform Training: Bring your own code and submit Training jobs using supported ML frameworks such as TensorFlow, scikit-learn, XGBoost, Keras, etc.
AI Platform Prediction: Host your Model and use AI Platform Prediction to infer target values for new data.
Google Cloud AutoML = You don't need to code; bring your dataset and GCP automatically picks the best model for you.
Different products:
Vision
Video Intelligence
Natural Language
Translation
Tables.
Google AI Hub = A catalog: discover notebooks, models, and pipelines.
Edit: Now AI Platform is called Vertex AI
Correct, the previous ML Engine service is now part of the Cloud AI Platform portfolio of products and provides an end-to-end platform to build, run, and manage ML projects.
Please follow the instructions on how to use the service here.