We tested the Cloud AutoML Vision product, and the results are impressive: 96% accuracy.
So far we have uploaded a labeled dataset, trained, and evaluated, so we now have a MODEL.
Next we want to export this model and use it in an iOS app.
But how do we export from Cloud AutoML?
What formats are supported?
(Did we miss something? In the end we want a .mlmodel file; we can use a converter, but first we need to export the model in some format.)
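For the converter step, a minimal sketch with coremltools' unified converter, assuming the model can eventually be exported as a TensorFlow SavedModel (the paths below are placeholders, and an AutoML-exported graph may contain ops the converter does not support):

    import coremltools as ct

    # Convert an exported TensorFlow SavedModel directory (placeholder path)
    # into a Core ML model. Requires coremltools 4+ (unified converter).
    mlmodel = ct.convert("exported_saved_model/")

    # Save the .mlmodel file that the iOS app will bundle.
    mlmodel.save("AutoMLVision.mlmodel")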
The model export feature is currently not supported in Cloud AutoML Vision.
The team is aware of this feature request. You can star and keep an eye on: https://issuetracker.google.com/113122585 for updates.
The export functionality has since been added and is documented here: https://cloud.google.com/vision/automl/docs/deploy
It seems the easiest way to do so is in the UI.
You can export an image classification model in generic TensorFlow Lite format, Edge TPU compiled TensorFlow Lite format, or TensorFlow format.
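Besides the UI, the export can also be triggered programmatically. Here is a minimal sketch using the google-cloud-automl Python client; the project, location, model ID, bucket, and the chosen model_format are placeholders and depend on what your model type supports:

    from google.cloud import automl

    client = automl.AutoMlClient()

    # Fully qualified model name; project, region and model ID are placeholders.
    model_name = client.model_path("YOUR_PROJECT", "us-central1", "YOUR_MODEL_ID")

    # Ask AutoML to write a TensorFlow Lite package to a Cloud Storage bucket.
    output_config = automl.ModelExportOutputConfig(
        model_format="tflite",
        gcs_destination=automl.GcsDestination(
            output_uri_prefix="gs://YOUR_BUCKET/automl-export/"
        ),
    )

    # export_model returns a long-running operation; result() waits for it.
    request = automl.ExportModelRequest(name=model_name, output_config=output_config)
    operation = client.export_model(request=request)
    operation.result()

The exported package then lands under the given gs:// prefix, from where you can download it for conversion.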
Related
I am looking for a solution that allows me to host my trained Sklearn model (that I am satisfied with) on SageMaker without having to retrain it before deploying to an endpoint.
On the one hand, I have seen specific bring-your-own scikit-learn examples that involve containerizing the trained model, but these guides go through the training step and don't show how to skip retraining and just deploy (https://github.com/awslabs/amazon-sagemaker-examples/blob/27d3aeb9166a4d4dbbb0721d381329e41d431078/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb).
On the other hand, there are guides that show how to bring your own model for deployment only, but these are specific to the MXNet and TensorFlow frameworks. I noticed that the way you export model artifacts differs between frameworks. I need something specific to scikit-learn that gets me to the point where I have model artifacts in the format SageMaker expects (https://github.com/awslabs/amazon-sagemaker-examples/tree/27d3aeb9166a4d4dbbb0721d381329e41d431078/advanced_functionality/mxnet_mnist_byom).
The closest guide I have seen that might work is this one: https://aws.amazon.com/blogs/machine-learning/bring-your-own-pre-trained-mxnet-or-tensorflow-models-into-amazon-sagemaker/
However, I don't know what my scikit-learn "model artifacts" include. I think I need a clear understanding of what scikit-learn model artifacts look like and what they contain.
Any help is appreciated. The goal is to avoid training in SageMaker and only deploy my already-trained scikit-learn model to an endpoint.
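For what it's worth, here is a minimal sketch of the deploy-only path with the SageMaker Python SDK's SKLearnModel. The "model artifact" in this case is just the pickled estimator packed into a model.tar.gz, plus an inference.py that defines model_fn/predict_fn; the file names, framework version, role ARN and S3 prefix below are placeholders:

    import tarfile

    import joblib
    import sagemaker
    from sagemaker.sklearn import SKLearnModel

    # 1. The already-trained estimator, loaded from wherever it was saved
    #    locally (placeholder file name). This pickle IS the model artifact.
    trained_estimator = joblib.load("my_trained_model.joblib")
    joblib.dump(trained_estimator, "model.joblib")

    # 2. Pack it into the model.tar.gz layout SageMaker expects.
    with tarfile.open("model.tar.gz", "w:gz") as tar:
        tar.add("model.joblib")

    # 3. Upload the artifact to S3 (key prefix is a placeholder).
    session = sagemaker.Session()
    model_data = session.upload_data("model.tar.gz", key_prefix="sklearn-byom")

    # 4. Point SKLearnModel at the artifact plus an inference script that
    #    defines model_fn/predict_fn, then deploy; no training job is involved.
    model = SKLearnModel(
        model_data=model_data,
        role="YOUR_SAGEMAKER_EXECUTION_ROLE_ARN",
        entry_point="inference.py",
        framework_version="0.23-1",
    )
    predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")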
I am trying out the AWS DJL platform. I want to load a custom-trained TensorFlow model and perform inference. I could not find a direct example in the official GitHub repository that does this. Can anyone guide me?
Here is a demo project: https://github.com/aws-samples/djl-demo/tree/master/pneumonia-detection
And here is the documentation about loading a TensorFlow model: https://github.com/awslabs/djl/blob/master/docs/tensorflow/how_to_import_keras_models_in_DJL.md
and:
https://github.com/awslabs/djl/blob/master/docs/load_model.md
I created an Object Detection model using Google AutoML. I'd like to export the model to Core ML but on the export page this option isn't showing up. I can't find anything in the AutoML Documentation about when this export option is disabled.
Additionally, if I try to export from the command line I get the error message Unsupported model export format [core_ml] for model.
Can someone provide some clarity about why this isn't an option? Thanks in advance for your help.
The issue stems from confusion between the AutoML Vision documentation, which focuses on classification models, and the documentation specific to AutoML Vision Object Detection models. You can see all of those docs in this index.
As you can see in those links, for object detection models there is no option to export to Core ML.
I'm looking for a way to figure out the signature, the inputs and outputs, of a model version running on Google ML Cloud.
None of the available Google ML REST APIs lets me see the inputs a model version expects or its outputs.
We do not yet support this from the API. However, you can use saved_model_cli show --all --dir /path/to/model locally to view the signature(s) of a TensorFlow model.
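If you prefer to inspect the signature programmatically, here is a minimal sketch with TensorFlow 2's Python API, assuming you have downloaded the exported SavedModel directory locally (the path is a placeholder):

    import tensorflow as tf

    # Load the exported SavedModel from a local directory (placeholder path).
    loaded = tf.saved_model.load("/path/to/model")

    # List the available signatures, e.g. "serving_default".
    print(list(loaded.signatures.keys()))

    # Inspect the inputs and outputs of the serving signature.
    serving_fn = loaded.signatures["serving_default"]
    print(serving_fn.structured_input_signature)  # expected input tensors
    print(serving_fn.structured_outputs)          # output tensor specs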
Say I have code on App Engine that reads Gmail attachments and parses them into Cloud Datastore, runs the data through Dataprep recipes and steps, stores it back into Datastore, and then gets predictions from an ML Engine TensorFlow model.
Is this all achievable through Dataflow?
EDIT 1:
Is it possible to export the Dataprep steps and use them as preprocessing before an ML Engine TensorFlow model?
The input for a Cloud ML Engine model can be defined however best fits your project. This means you can apply the preprocessing steps however you see fit and then send your data to the TensorFlow model.
Make sure the format produced by your Dataprep steps is supported by the TensorFlow model. Once you apply your Dataprep recipe with all the required steps, use an appropriate format such as CSV. It is recommended to store your input in a Cloud Storage bucket for better access.
I don't know how familiar you are with Cloud Dataprep, but you can try this to check how to handle all the steps that you want to include in your recipe.
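As a rough illustration of the final step, here is a minimal sketch of sending already-preprocessed records to an ML Engine model version for online prediction with the Google API Python client; the project, model, version and feature names are placeholders and must match what your TensorFlow model actually expects:

    from googleapiclient import discovery

    # Build a client for the Cloud ML Engine (AI Platform) online prediction API.
    service = discovery.build("ml", "v1")

    # Fully qualified version name; project, model and version are placeholders.
    name = "projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"

    # Instances must match the input format your model expects, e.g. the same
    # columns your Dataprep recipe produces.
    body = {"instances": [{"feature_a": 1.0, "feature_b": "some_value"}]}

    response = service.projects().predict(name=name, body=body).execute()
    print(response["predictions"])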