Vertex AI on Google Cloud web interface: Unable to test model - google-cloud-platform

Following the starter tutorial "Train a Tabular Model", I get the following error at the step of testing the model with the deployed endpoint.
The dataset used for training the model is provided by the Google tutorial at this Cloud Storage location: cloud-ml-tables-data/bank-marketing.csv
Error message :
The prediction did not succeed due to the following error: Deployed
model xxxxx does not support explanation.
Official Vertex tutorial (Tabular data)
What I believe is the old version of the tutorial (not on Vertex), but almost the same

When you deploy your model, you should enable the "feature attributions" option under Explainability options, as you can see here. By default the option is not enabled. The tutorial does not mention this step, but it should. You get the same error if the model does not have feature attributions enabled and you run gcloud ai endpoints explain ENDPOINT_ID

Related

creating custom model on Google vertex ai

I am supposed to use Google's managed ML platform Vertex AI to build an end-to-end machine learning workflow for an internship. Although I follow the tutorial exactly, when I run a training job I see this error message:
Training pipeline failed with error message: There are no files under "gs://dps-fuel-bucket/mpg/model" to copy.
According to the tutorial, we should not create a /model directory in the bucket ourselves; the training code should create this directory and save the final result there:
# Export model and save to GCS
model.save(BUCKET + '/mpg/model')
I added this directory but still face the same error.
Does anybody have any idea? Thanks in advance :)
If you're using a pre-built container, ensure that your model artifacts have filenames that exactly match the following examples:
TensorFlow SavedModel: saved_model.pb
scikit-learn: model.joblib or model.pkl
XGBoost: model.bst, model.joblib, or model.pkl
Reference: Vertex AI Model Import
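For example, for a scikit-learn model served from a pre-built container, the pickled artifact must be named exactly model.pkl (or model.joblib). A minimal sketch using only the standard-library pickle module, with a plain dict standing in for a fitted estimator so the snippet is self-contained:

```python
import os
import pickle
import tempfile

# The pre-built containers look for an exact filename inside the artifact
# directory -- e.g. model.pkl for a pickled scikit-learn estimator.
# A plain dict stands in for a trained model here; in practice you would
# pickle the fitted estimator object itself.
model = {"coef": [0.5, -1.2], "intercept": 0.1}  # stand-in for an estimator

artifact_dir = tempfile.mkdtemp()  # in practice: a local dir you upload to gs://your-bucket/...
artifact_path = os.path.join(artifact_dir, "model.pkl")  # the exact name matters

with open(artifact_path, "wb") as f:
    pickle.dump(model, f)

print(os.path.basename(artifact_path))  # model.pkl
```

If the file were named anything else (e.g. flower_model.pkl), the import would fail the same way the "no files to copy" error suggests.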

How to deploy our own TensorFlow Object Detection Model in amazon Sagemaker?

I have my own trained TF Object Detection model, but when I try to deploy the same model in AWS SageMaker, it does not work.
I have tried TensorFlowModel() in SageMaker, but there is an argument called entry_point - how do I create that .py file for prediction?
entry_point is the argument that holds the file name inference.py. Once you create an endpoint and try to predict an image using the invoke-endpoint API, an instance of the type you specified is created, and it runs the inference.py script to execute the process.
Link: Documentation for TensorFlow model deployment in Amazon SageMaker
The inference script must contain the methods input_handler and output_handler (or a single handler that covers both) in the inference.py script; these are for pre- and post-processing of your image.
Example for deploying the TensorFlow model
The above link points to a Medium post, which should help with your doubts.
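A minimal sketch of such an inference.py, following the input_handler/output_handler signatures documented for the SageMaker TensorFlow Serving container; the application/x-image content type and the base64 "instances" request format are assumptions for an image model and should be adapted to your model's signature:

```python
import base64
import json


def input_handler(data, context):
    """Pre-processing: convert the incoming request into a TF Serving request body."""
    if context.request_content_type == "application/x-image":
        # Wrap raw image bytes in the JSON structure TF Serving expects
        # for a model with a base64-encoded image input.
        encoded = base64.b64encode(data.read()).decode("utf-8")
        return json.dumps({"instances": [{"b64": encoded}]})
    raise ValueError("Unsupported content type: {}".format(context.request_content_type))


def output_handler(response, context):
    """Post-processing: relay the TF Serving JSON response to the caller."""
    if response.status_code != 200:
        raise ValueError(response.content.decode("utf-8"))
    return response.content, "application/json"
```

You pass this file via entry_point when constructing TensorFlowModel, and the serving container calls the two handlers around every invoke-endpoint request.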

Not able to create uptimechecks

While creating uptime checks I am getting the below error:
There was an error testing the uptime check config: Eb`
This is a product issue, not a question about the product - I recommend using Google's product forums or submitting a support request with them.

Prediction failed: unknown error

I'm using Google Cloud Machine Learning to predict images with labels.
I've trained my model, named flower, and I see the API endpoint at Google API Explorer, but when I call the API at API Explorer I get the following error:
Image Error
I can't understand why.
Thanks
Ibere
I guess you followed the tutorial from https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/flowers?
I had the exact same problem, but with some trial and error I succeeded with the payload:
{"instances":[{"image_bytes": {"b64": "/9j/4AAQ...rest of the base64..."}, "key": "0"}]}
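For reference, a small sketch of how that payload can be assembled in Python; the image bytes below are a stand-in for reading your actual JPEG file:

```python
import base64
import json

# Build the prediction payload for the flowers model: each instance carries
# the base64-encoded image under "image_bytes"."b64", plus a string "key".
image_bytes = b"\xff\xd8\xff\xe0fake-jpeg-data"  # stand-in for open("flower.jpg", "rb").read()

payload = {
    "instances": [
        {
            "image_bytes": {"b64": base64.b64encode(image_bytes).decode("utf-8")},
            "key": "0",
        }
    ]
}

print(json.dumps(payload))
```

The "key" field is echoed back with each prediction, which is how you match outputs to inputs when sending several instances at once.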

Deploy a model on ml-engine, exporting with tf.train.Saver()

I want to deploy a model on the new version of Google ML Engine.
Previously, with Google ML, I could export my trained model by creating a tf.train.Saver() and saving the model with saver.save(session, output).
So far I've not been able to find out whether a model exported this way is still deployable on ml-engine, or whether I must follow the training procedure described here, create a new trainer package, and necessarily train my model with ml-engine.
Can I still use tf.train.Saver() to obtain the model I will deploy on ml-engine?
tf.train.Saver() only produces a checkpoint.
Cloud ML Engine uses a SavedModel, produced from these APIs: https://www.tensorflow.org/versions/master/api_docs/python/tf/saved_model?hl=bn
A saved model is a checkpoint + a serialized protobuf containing one or more graph definitions + a set of signatures declaring the inputs and outputs of the graph/model + additional asset files if applicable, so that all of these can be used at serving time.
I suggest looking at a couple of examples:
The census sample - https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/census/tensorflowcore/trainer/task.py#L334
And my own sample/library code - https://github.com/TensorLab/tensorfx/blob/master/src/training/_hooks.py#L208 that calls into https://github.com/TensorLab/tensorfx/blob/master/src/prediction/_model.py#L66 to demonstrate how to use a checkpoint, load it into a session and then produce a savedmodel.
Hope these pointers help you adapt your existing code to produce a SavedModel.
I think you also asked another similar question to convert a previously exported model, and I'll link to it here for completeness for anyone else: Deploy retrained inception SavedModel to google cloud ml engine