Google AI model serving vs KFServing [closed] - google-cloud-platform

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed last year.
We are going to build model serving infrastructure, and I am comparing Google AI Platform Prediction and KFServing. But I cannot find enough documentation about the features of Google AI serving and how it is implemented.
It seems that gcloud ai-platform versions create can create a model version resource and start serving, which is the only point I can find.
I have three questions:
1. What is the relationship between Google AI serving and KFServing?
2. How does gcloud ai-platform versions create work?
3. As for the features of Google AI serving: does it provide all the features such as canary rollout, explainers, monitoring, etc. listed in https://www.kubeflow.org/docs/components/serving/overview/?

The document you shared contains extensive information about Google AI Platform Prediction. In summary, it is a hosted service in GCP where you don't need to manage the infrastructure. You just deploy your model and a new REST endpoint will be available for you to start sending predictions via SDK or API.
Supports multiple frameworks:
TensorFlow
scikit-learn
XGBoost
Pytorch
Custom Docker containers (soon)
GPU support
Model versions
Online and Batch prediction
Logging and Monitoring
Multiple Regions
REST API
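Once a version is deployed, online predictions are sent to the REST endpoint as a JSON body with an instances list. A minimal sketch of building such a payload (the instances field follows the public predict API; the feature values are made up):

```python
import json

def build_predict_request(instances):
    """Build the JSON body expected by the AI Platform online predict API.

    The API expects {"instances": [...]}, with one entry per input row.
    """
    return json.dumps({"instances": instances})

# Two feature rows for a hypothetical tabular model.
body = build_predict_request([[1.0, 2.0], [3.0, 4.0]])
```

You would POST this body to the model version's predict URL via the SDK or plain HTTP.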
Answer to your questions:
With KFServing, you need to manage your own K8s/Kubeflow infrastructure.
Kubeflow supports two model serving systems that allow multi-framework model serving: KFServing and Seldon Core.
With AI Platform, you don't manage the infrastructure and don't need K8s/KF; you simply deploy your models and GCP takes care of the infra.
gcloud ai-platform versions create deploys VM(s) in Google Cloud where, based on the settings (runtime version) and framework, all the dependencies are installed automatically, along with everything needed to load your model, so you get access to a REST API.
Canary rollouts can be implemented using different models and versions; it may depend on how you route your predictions. Check the What-If Tool and model logging.

Google AI Platform can be used to manage the following stages in the ML workflow:
- Train an ML model on your data:
  Train model
  Evaluate model accuracy
  Tune hyperparameters
- Deploy your trained model.
- Send prediction requests to your model:
  Online prediction
  Batch prediction (for TensorFlow only)
- Monitor the predictions on an ongoing basis.
- Manage your models and model versions.
KFServing enables serverless inferencing on Kubernetes and provides performant, high abstraction interfaces for common machine learning (ML) frameworks like TensorFlow, XGBoost, scikit-learn, PyTorch, and ONNX to solve production model serving use cases.

Related

Google AutoML Vision API and Google Vision API Custom Algorithm

I am looking at Google AutoML Vision API and Google Vision API. I know that if you use Google AutoML Vision API that it is a custom model because you train ML models based on your own images and define your own labels. And when using Google Vision API, you are using a pretrained model...
However, I am wondering if it is possible to use my own algorithm (one which I created and not provided by Google) and using that instead with Vision / AutoML Vision API ? ...
Sure, you can definitely deploy your own ML algorithm on Google Cloud, without being tied up to the Vision or AutoML API.
Two approaches that I have used many times for this same use case:
Serverless approach, if your model is relatively light in terms of computational resources requirement - Deploy your own custom cloud function. More info here.
To be more specific, the way it works is that you just call your cloud function, passing your image directly (base64 or a pointer to a storage location). The function then allocates all required resources automatically, runs your custom algorithm to process the image and/or run inference, sends the results back, and vanishes (all resources released, no further running costs). Neat :)
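The "passing your image directly" part can be sketched as a simple base64-in-JSON round trip; the payload shape here is an illustration, not a fixed Cloud Functions contract:

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    """Encode raw image bytes as base64 inside a JSON body for an HTTP function."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

def decode_payload(body: str) -> bytes:
    """What the cloud function would do on the receiving end."""
    return base64.b64decode(json.loads(body)["image"])
```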
Google AI Platform. More info here
Use AI Platform to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.
If in doubt, go for AI Platform, as the whole pipeline is nicely lined up for any of your custom code/models. It is also a good fit for production deployment.

google ai platform vs ml engine

I did a lot of searching, but I cannot understand the difference between Google AI Platform and ML Engine.
It seems that both of them can be used for training and deploying models.
Other terms like google-cloud-automl and Google AI Hub are also very confusing.
What are the differences between them? Thanks
The short answer is: there isn't. In 2019 "ML Engine" was renamed to "AI Platform" and in time some services changed and expanded. To see what has changed, check the release notes, starting from around April. "Around", as they haven't left much trace that ML Engine ever existed.
Here's one of the pull requests to "Rename Cloud ML Engine to AI Platform" for Python samples.
Cloud ML Engine = AI Platform Training + AI Platform Prediction (It was just a name change). Used for training and deploying ML models.
AI Platform Training: Bring your own code and submit Training jobs using supported ML frameworks such as TensorFlow, scikit-learn, XGBoost, Keras, etc.
AI Platform Prediction: Host your Model and use AI Platform Prediction to infer target values for new data.
Google Cloud AutoML = You don't need to code; bring your dataset and GCP automatically picks the best model for you.
Different products:
Vision
Video Intelligence
Natural Language
Translation
Tables.
Google AI Hub = It is a Catalog: Discover Notebooks, Models and Pipelines.
Edit: Now AI Platform is called Vertex AI
Correct, the previous ML Engine service is now under the Cloud AI Platform portfolio of products and provides an end-to-end platform to build, run, and manage ML projects.
Please follow the instructions on how to use the service here.

ML for object search [closed]

Closed 2 years ago.
I'm trying to find a way to build ML using AWS, preferably using their services such as SageMaker and not just EC2, for object detection in images using an image as input.
AWS Rekognition offers Image Comparison and Object Detection APIs, but they are not exactly what I'm looking for: the comparison works only with faces, not objects, and the object detection is too basic.
Alibaba Cloud has that functionality as a service (https://www.alibabacloud.com/product/imagesearch), but I would like to use something similar on AWS, rather than Alibaba.
How would I go about and build something like this?
Thank you.
edited 03/08/2020 to add pointers for visual search
Since you seem interested both in the tasks of object detection (input an image, and return bounding boxes with object classes) and visual search (input an image and return relevant images) let me give you pointers for both :)
For object detection you have 3 options:
Using the managed service Amazon Rekognition Custom Labels. The key benefits of this service is that (1) it doesn't require writing ML code, as the service runs autoML internally to find the best model, (2) it is very flexible in terms of interaction (SDKs, console), data loading and annotation and (3) it can work even with small datasets (typically a few hundred images or less).
Using the SageMaker Object Detection model (documentation, demo). In this option, the model is also already written (SSD architecture with a ResNet or VGG backbone) and you just need to choose or tune hyperparameters.
Using your own model on Amazon SageMaker. This could be your own code in docker, or code from an ML framework in a SageMaker ML Framework container. There are such containers for Pytorch, Tensorflow, MXNet, Chainer and Sklearn. In terms of model code, I recommend considering gluoncv, a compact python computer vision toolkit (based on mxnet backend) that comes with many state-of-the-art models and tutorials for object detection
The task of visual search requires more customization, since you need to provide the info of (1) what you define as search relevancy (e.g. is it visual similarity? object complementarity? etc.) and (2) the collection among which to search. If all you need is visual similarity, a popular option is to transform images into vectors with a pre-trained neural network and run kNN search between the query image and the collection of transformed images. There are two tutorials showing how to build such systems on AWS:
Blog post: Visual Search on AWS (MXNet ResNet embeddings + SageMaker kNN)
Visual Search on MMS demo (MXNet ResNet embeddings + HNSW kNN on AWS Fargate)
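Framework details aside, the embed-then-kNN idea reduces to ranking collection vectors by distance to the query vector. A dependency-free sketch (a real system would use a pretrained CNN for the embeddings and an approximate index such as HNSW):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_search(query_vec, collection, k=2):
    """Return the ids of the k collection vectors closest to the query.

    `collection` maps an image id to its embedding vector.
    """
    ranked = sorted(collection,
                    key=lambda img_id: euclidean(query_vec, collection[img_id]))
    return ranked[:k]
```

Exhaustive search like this is fine for small collections; at scale you would swap the sort for an approximate nearest-neighbor index.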

Planning an architecture in GCP

I want to plan an architecture based on the GCP cloud platform. Below are the subject areas I have to cover. Can someone please help me find the proper services to perform these operations?
Data ingestion (Batch, Real-time, Scheduler)
Data profiling
AI/ML based data processing
Analytical data processing
Elastic search
User interface
Batch and Real-time publish
Security
Logging/Audit
Monitoring
Code repository
If I am missing something which I have to take care then please add the same too.
GCP offers many products with functionality that can overlap partially. What product to use would depend on the more specific use case, and you can find an overview about it here.
That being said, an overall summary of the services you asked about would be:
1. Data ingestion (Batch, Real-time, Scheduler)
That will depend on where your data comes from, but the most common options are Dataflow (both for batch and streaming) and Pub/Sub for streaming messages.
2. Data profiling
Dataprep (which actually runs on top of Dataflow) can be used for data profiling, here is an overview of how you can do it.
3. AI/ML based data processing
For this, you have several options depending on your needs. For developers with limited machine learning expertise there is AutoML, which lets you quickly train and deploy models. For more experienced data scientists there is ML Engine, which allows training and prediction with custom models made with frameworks like TensorFlow or scikit-learn.
Additionally, there are some pre-trained models for things like video analysis, computer vision, speech to text, speech synthesis, natural language processing or translation.
Plus, it’s even possible to perform some ML tasks directly in GCP’s data warehouse, BigQuery, using SQL.
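For instance, BigQuery ML trains a model with a single SQL statement. A small sketch that assembles such a statement (model_type='linear_reg' and input_label_cols are real BigQuery ML options; the dataset, table, and column names are placeholders):

```python
def bqml_create_model_sql(model_name: str, label_col: str, table: str) -> str:
    """Build a BigQuery ML CREATE MODEL statement for a linear regression.

    The resulting SQL would be submitted as an ordinary BigQuery query.
    """
    return (
        f"CREATE OR REPLACE MODEL `{model_name}` "
        f"OPTIONS(model_type='linear_reg', input_label_cols=['{label_col}']) AS "
        f"SELECT * FROM `{table}`"
    )

sql = bqml_create_model_sql("mydataset.mymodel", "label", "mydataset.training_data")
```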
4. Analytical data processing
Depending on your needs, you can use Dataproc, which is a managed Hadoop and Spark service, or Dataflow for stream and batch data processing.
BigQuery is also designed with analytical operations in mind.
5. Elasticsearch
There is no managed Elasticsearch service directly provided by GCP, but you can find several options on the Marketplace, like an API service or a Kubernetes app for Google Kubernetes Engine.
6. User interface
If you are referring to a user interface for your own use, GCP’s console is what you’d be using. If you are referring to a UI for end-users, I’d suggest using App Engine.
If you are referring to a UI for data exploration, there is Datalab, which is essentially a managed notebook service, and Data Studio, where you can build plots of your data in real time.
7. Batch and Real-time publish
The publishing service in GCP, for both synchronous and asynchronous messages is Pub/Sub.
8. Security
Most security concerns in GCP are addressed here. Security is a wide topic by itself and would probably need a separate question.
9. Logging/Audit
GCP uses Stackdriver for logging of most of its products, and provides many ways to process and analyze those logs.
10. Monitoring
Stackdriver also has monitoring features.
11. Code repository
For this there is Cloud Source Repositories, which integrates with GCP’s automated build system and can also be easily synced with a GitHub repository.
12. Analytical data warehouse
You did not ask for this one, but I think it's an important part of a data analysis stack.
In the case of GCP, this would be BigQuery.

Google ML Engine - Scikit-Learn Models

Looking at the documentation for ML Engine, it looks like it accepts training and prediction services for scikit-learn models.
Is it possible to train non-scikit-learn models that are wrapped with mixins to present a scikit-learn interface?
ML Engine currently has support for this in alpha; however, it's currently broken for some custom models.
Furthermore, logs for model version creation are not visible. The last time I talked to GCP ML services people, they said they would add Stackdriver functionality soon.
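The wrapping idea from the question can be sketched without scikit-learn installed: expose the fit/predict/get_params/set_params surface that scikit-learn's BaseEstimator and mixins would otherwise provide. The trivial mean "model" below is a stand-in for a wrapped library such as Prophet:

```python
class MeanRegressorWrapper:
    """Stdlib-only sketch of the scikit-learn estimator surface.

    A real wrapper would subclass sklearn.base.BaseEstimator plus the
    appropriate mixin and delegate fit/predict to the wrapped model;
    here the "model" is just the training-set mean, to keep it runnable.
    """

    def __init__(self, fallback=0.0):
        self.fallback = fallback

    def get_params(self, deep=True):
        return {"fallback": self.fallback}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        # A wrapped model would run its own training routine here.
        self.mean_ = sum(y) / len(y) if y else self.fallback
        return self

    def predict(self, X):
        # Predict the training mean for every input row.
        return [self.mean_ for _ in X]
```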
EDIT:
Since this answer was first given in Sep 2018 and it is now Dec 2018: the logging functionality is still not available. I'm wrapping Facebook's Prophet in a custom model; training works, but versioning is still broken.
Versioning Error Message:
Create Version failed. Internal error happened.