How to find out the signature of a Google Cloud ML model version - google-cloud-ml

I'm looking for a way to figure out the signature, i.e. the inputs and outputs, of a model version running on Google Cloud ML.
None of the available Google Cloud ML REST APIs lets me see which inputs a model version expects or which outputs it returns.

We do not yet support this from the API. However, you can use saved_model_cli show --all --dir /path/to/model locally to view the signature(s) of a TensorFlow model.
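If you prefer to inspect the signature from Python instead of the CLI, a minimal TF 1.x-style sketch (assuming the SavedModel was exported with the standard "serve" tag; the path is a placeholder) would be:

    # Rough Python equivalent of saved_model_cli show --all (TF 1.x style).
    # Assumes the model was exported with the standard "serve" tag.
    import tensorflow as tf

    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph = tf.saved_model.loader.load(sess, ["serve"], "/path/to/model")
        for name, sig in meta_graph.signature_def.items():
            print("Signature:", name)
            print("  inputs: ", {k: v.name for k, v in sig.inputs.items()})
            print("  outputs:", {k: v.name for k, v in sig.outputs.items()})

This prints each SignatureDef along with the tensor names of its inputs and outputs, which is the same information saved_model_cli shows.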

Related

How to schedule a retrain of a sagemaker pipeline model using airflow

I have already implemented a SageMaker pipeline model. In particular, I followed this sample notebook for an end-to-end workflow that trains a model, builds a pipeline model, and deploys it.
Now I would like to retrain and deploy the entire pipeline every day using Airflow, but from what I have seen here it is only possible to retrain and deploy a single SageMaker model.
Is there a way to retrain and deploy the entire pipeline? Thanks
SageMaker provides two options for working with Airflow:
1) Use the APIs in the SageMaker Python SDK to generate the input for the SageMaker operators in Airflow. The blog you linked takes this approach; for example, it uses the training_config API in the SageMaker Python SDK together with Airflow's SageMakerTrainingOperator.
2) Use the PythonOperator provided by Airflow and write Python code to do what you want.
For option 1, SageMaker has only implemented APIs for training, tuning, single-model deployment, and transform. Since you are working with a pipeline model, I don't think it has the API you want.
For option 2, however, if you can do what you want in plain Python code with the SageMaker SDK, you should be able to wrap it in Python callables and run them with PythonOperators. Here is an example of training this way provided by SageMaker:
https://sagemaker.readthedocs.io/en/stable/using_workflow.html#using-airflow-python-operator
I think you can do something similar to make Airflow work with your pipeline model; a rough sketch follows.
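A minimal sketch of that approach is below. The DAG id, schedule, and the body of the callable are illustrative assumptions, not a SageMaker-provided API; the callable is where you would reproduce the sample notebook's steps with the SageMaker Python SDK (fit the estimators, build the PipelineModel, deploy it).

    # Minimal sketch: schedule a daily retrain/redeploy of a pipeline model by
    # wrapping the notebook's steps in a Python callable (Airflow 1.x style).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    def retrain_and_deploy_pipeline(**context):
        # Hypothetical placeholder: re-run the sample notebook's steps here with
        # the SageMaker Python SDK -- fit the individual estimators, build the
        # PipelineModel, and call pipeline_model.deploy(...).
        pass

    dag = DAG(
        dag_id="retrain_pipeline_model",   # illustrative name
        start_date=datetime(2019, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    )

    retrain_task = PythonOperator(
        task_id="retrain_and_deploy",
        python_callable=retrain_and_deploy_pipeline,
        provide_context=True,
        dag=dag,
    )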

Exporting a model to be implemented in mobile app

We tested the Cloud AutoML Vision product, and the results are amazing: 96% accuracy.
What we did so far was: upload a labeled dataset, train, and evaluate, so we now have a MODEL.
Next we want to export this model and use it in an iOS app.
But how do we export from Cloud AutoML?
What formats are supported?
(Did we miss something? In the end we want to get a .mlmodel file; we can use a converter, but first we need to export the model in some format.)
The model export feature is currently not supported in Cloud AutoML Vision.
The team is aware of this feature request. You can star and keep an eye on: https://issuetracker.google.com/113122585 for updates.
The export functionality has since been added and is documented here: https://cloud.google.com/vision/automl/docs/deploy
It seems the easiest way to do it is in the UI.
You can export an image classification model in generic TensorFlow Lite format, Edge TPU-compiled TensorFlow Lite format, or TensorFlow format.
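If you would rather trigger the export programmatically, a hedged sketch using the v1beta1 AutoML Python client is below; the project, location, model ID, bucket, and exact client/field names are assumptions, so check the deploy documentation linked above for the authoritative sample.

    # Hedged sketch: export an AutoML Vision Edge model as TensorFlow Lite to GCS.
    # All identifiers are placeholders; verify the API against the linked docs.
    from google.cloud import automl_v1beta1 as automl

    client = automl.AutoMlClient()
    model_name = client.model_path("your-project", "us-central1", "your-model-id")

    output_config = automl.types.ModelExportOutputConfig(
        model_format="tflite",  # or "edgetpu_tflite" / "tf_saved_model"
        gcs_destination=automl.types.GcsDestination(
            output_uri_prefix="gs://your-bucket/automl-export/"
        ),
    )

    operation = client.export_model(model_name, output_config)
    operation.result()  # block until the export finishes

The exported model could then be run through a converter to produce the .mlmodel file mentioned in the question.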

Prediction on GCP with ML Engine

I am working on GCP to run predictions using the census dataset; I'm currently discovering the Google APIs (ML Engine, ...).
When I launch the prediction job, the job runs successfully, but it doesn't display the result.
Can anyone help? Do you have any idea why it doesn't generate an output?
Thanks in advance :)
This is the error that occurs
https://i.stack.imgur.com/9gyTb.png
This error is common when you train with one version of TF and then try serving with a lower version. For instance, if you are using Cloud console to deploy your model, it currently has no way of letting you select the version of TensorFlow for serving, so the model is deployed using TF 1.0, but your model may have been trained with a higher version of TF (current version is 1.7).
Although the Cloud console doesn't currently let you select the version (but it will soon!), using gcloud or the REST API directly does allow you to.
In the docs, there is a section on creating a model that has code snippets under "gcloud" and "python". With gcloud you simply add the argument --runtime-version=1.6 (or whatever version), and with Python you add the property "runtimeVersion": "1.6" to the body of the request; a minimal sketch of the Python path follows.
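For example (project, model, and GCS paths are placeholders):

    # Minimal sketch: create a model version with an explicit runtimeVersion
    # using the Google API Python client. All names and paths are placeholders.
    from googleapiclient import discovery

    ml = discovery.build("ml", "v1")

    request = ml.projects().models().versions().create(
        parent="projects/your-project/models/census",
        body={
            "name": "v1",
            "deploymentUri": "gs://your-bucket/census/export/",
            "runtimeVersion": "1.6",  # match the TF version used for training
        },
    )
    print(request.execute())

With gcloud, the equivalent is to pass --runtime-version=1.6 to gcloud ml-engine versions create.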

Google cloud ML without trainer

Can we train a model in Google Cloud ML by just providing the data and the related column names, without writing a trainer, using either the REST API or the command-line interface?
Yes. You can use Google Cloud Datalab, which comes with a structured data solution. It has an easier interface and takes care of the trainer. You can view the notebooks without setting up Datalab:
https://github.com/googledatalab/notebooks/tree/master/samples/ML%20Toolbox
Once you set up Datalab, you can run the notebook. To set up Datalab, check https://cloud.google.com/datalab/docs/quickstarts.
Instead of building a model and calling the Cloud ML service directly, you can try Datalab's ML Toolbox, which supports structured data and image classification. The ML Toolbox takes your data and automatically builds and trains a model; you just have to describe your data and what you want to do.
You can view the notebooks first without setting up datalab:
https://github.com/googledatalab/notebooks/tree/master/samples/ML%20Toolbox
To set up Datalab and actually run these notebooks, see https://cloud.google.com/datalab/docs/quickstarts.

scikit learn on google cloud platform through datalab or compute engine?

I am running a Django App inside GCP. My idea was to call a python script from "view.py" for some machine learning algorithm and then display the result on the page.
But now I understand that running a machine learning library like Scikit-learn on GAE will not be possible (read Tim's answer here and this thread).
But suppose I still need to do this. I believe there are two possible ways, but I am not sure whether my guesses are right or wrong:
1) Since Google Datalab provides an entire Anaconda-like distribution, if there is a Datalab API that can be called from a Python file in the Django app, could I achieve my goal that way?
2) Could I install the scikit-learn library on a Compute Engine instance on GCP, somehow send it a request to run my code, and then return the output to the Python file in the Django app?
I am very new to client-server and cloud computing in general, so please provide examples (if possible) for any suggestions or pointers.
Regards,
I believe what you want is to use the App Engine Flex environment rather than the standard App Engine environment.
App Engine Flex uses a compute engine VM for running your code, so it does not have the library limitations that standard App Engine has.
Specifically, you'll need to add a 'requirements.txt' file to specify the version of scikit-learn that you want installed, and then add a 'vm: true' clause to your app.yaml file.
scikit-learn is now supported on ML Engine.
So, another alternative now is to use online prediction on Cloud ML Engine and deploy your scikit-learn model as a web service.
Here is a fully worked-out example of fully managed scikit-learn training, online prediction, and hyperparameter tuning (a minimal export and deployment sketch follows the link):
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/sklearn/babyweight_skl.ipynb
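As a rough sketch of that flow (model, bucket, and version names are placeholders, and the gcloud flags should be checked against the current docs), you export the trained estimator with joblib and then create a scikit-learn version on ML Engine:

    # Minimal sketch: train a scikit-learn model and export it as model.joblib,
    # the artifact Cloud ML Engine's scikit-learn online prediction expects.
    import joblib  # older examples use sklearn.externals.joblib
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X, y)
    joblib.dump(model, "model.joblib")

    # Then, outside Python (placeholder bucket/model/version names):
    #   gsutil cp model.joblib gs://your-bucket/sklearn-model/
    #   gcloud ml-engine versions create v1 --model=your_model \
    #       --origin=gs://your-bucket/sklearn-model/ \
    #       --runtime-version=1.13 --framework=scikit-learn --python-version=3.5

Your Django view can then call the online prediction REST API instead of running scikit-learn itself.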