Cloud Machine Learning Engine fails to deploy model - google-cloud-platform

I have trained both my own model and the one from the official tutorial.
I'm now at the step of deploying the model to support prediction. However, it keeps giving me an error saying:
"create version failed. internal error happened"
when I attempt to deploy the models by running:
gcloud ml-engine versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--python-version 3.5 \
--runtime-version 1.13
The model binaries should be correct, as I pointed --origin to the folder containing model.pb and the variables folder, e.g. MODEL_BINARIES=gs://$BUCKET_NAME/results/20190404_020134/saved_model/1554343466.
I have also tried to change the region setting for the model as well, but this doesn't help.

Turns out your GCS bucket and the trained model need to be in the same region. This was not well explained in the Cloud ML tutorial, which only says:
Note: Use the same region where you plan on running Cloud ML Engine jobs. The example uses us-central1 because that is the region used in the getting-started instructions.
Also note that a lot of regions cannot be used for both the bucket and model training (e.g. asia-east1).
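For example, if you plan to train and deploy in us-central1, you can create the bucket and the model resource in that region up front (a minimal sketch; BUCKET_NAME and MODEL_NAME are the variables from the question, and us-central1 is an assumption):
# Create the staging bucket in the same region used for training and deployment
gsutil mb -l us-central1 gs://$BUCKET_NAME
# Create the model resource in the same region
gcloud ml-engine models create $MODEL_NAME --regions us-central1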

Related

How to get list of all docker-machine images for google cloud

I'm creating docker-machines in Google Cloud with the shell command
docker-machine create --driver google \
--google-project my-project \
--google-zone my-zone \
--google-machine-image debian-cloud/global/images/debian-10-buster-v20191210 \
machine-name
As you can see, I use the image debian-10-buster-v20191210, but I want to switch to a less recent version of the image. The problem is that I can't find where the list of such images (debian-10-buster-v*) is published. Can you please help me find it?
You can determine the list of available images using the gcloud command line:
--show-deprecated indicates you want to see ALL images, not just the latest
--filter= only selects images whose name starts with debian-10-buster
$ gcloud compute images list --filter="name=debian-10-buster" --show-deprecated
NAME PROJECT FAMILY DEPRECATED STATUS
debian-10-buster-v20191115 debian-cloud debian-10 DEPRECATED READY
debian-10-buster-v20191121 debian-cloud debian-10 DEPRECATED READY
debian-10-buster-v20191210 debian-cloud debian-10 READY
You can find additional information in the gcloud Images List documentation.
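To pin the machine to one of the older images from that list, the docker-machine command from the question can simply point at it (a sketch; debian-10-buster-v20191115 is one of the deprecated images shown in the output above):
docker-machine create --driver google \
--google-project my-project \
--google-zone my-zone \
--google-machine-image debian-cloud/global/images/debian-10-buster-v20191115 \
machine-name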

Object detection training job fails on GCP

I am running a training job on GCP for object detection using my own dataset. My training job script is like this:
JOB_NAME=object_detection"_$(date +%m_%d_%Y_%H_%M_%S)"
echo $JOB_NAME
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir=gs://$1 \
--scale-tier BASIC_GPU \
--runtime-version 1.12 \
--packages $PWD/models/research/dist/object_detection-0.1.tar.gz,$PWD/models/research/slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
--module-name $PWD/models/research/object_detection.model_main \
--region europe-west1 \
-- \
--model_dir=gs://$1 \
--pipeline_config_path=gs://$1/data/fast_rcnn_resnet101_coco.config
It fails at the following line:
python -m $PWD/models/research/object_detection.model_main --model_dir=gs://my-hand-detector --pipeline_config_path=gs://my-hand-detector/data/fast_rcnn_resnet101_coco.config --job-dir gs://my-hand-detector/
/usr/bin/python: Import by filename is not supported.
Based on the logs, I understand this to be the source of the error. Any help in this regard would be appreciated. Thank you.
I assume that you are using the model_main.py file from the TensorFlow GitHub repository. Using it, I was able to replicate your error message. After troubleshooting, I successfully submitted the training job and could train the model properly.
In order to address your issue, I suggest you follow this tutorial, taking special care with the following steps:
Make sure you have an updated version of TensorFlow (1.14 doesn't include all necessary capabilities)
Properly generate TFRecords from input data and upload them to GCS bucket
Configure object detection pipeline (set the proper paths to data and label map)
In my case, I have reproduced the workflow using PASCAL VOC input data (See this).
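Given the "Import by filename is not supported" message, one likely fix is to pass --module-name as a dotted Python module path rather than a filesystem path under $PWD. A sketch of the corrected submission, reusing the packages and bucket from the question:
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir=gs://$1 \
--scale-tier BASIC_GPU \
--runtime-version 1.12 \
--packages $PWD/models/research/dist/object_detection-0.1.tar.gz,$PWD/models/research/slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
--module-name object_detection.model_main \
--region europe-west1 \
-- \
--model_dir=gs://$1 \
--pipeline_config_path=gs://$1/data/fast_rcnn_resnet101_coco.config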

Can't create Deep Learning VM using Tensorflow 2.0 framework

I'm trying to create a Deep Learning Virtual Machine on Google Cloud Platform that uses TensorFlow 2.0, but when I instantiate it I get the following error:
deep-learning-training-vm: {"ResourceType":"compute.v1.instance","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"errors":[{"domain":"global","message":"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf-2-0-cu100-experimental-20190909'. The referenced image resource cannot be found.","reason":"invalid"}],"message":"Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf-2-0-cu100-experimental-20190909'. The referenced image resource cannot be found.","statusMessage":"Bad Request","requestPath":"https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-west1-b/instances","httpMethod":"POST"}}
I don't quite understand the error, but I believe GCP is not able to find the right image for my virtual machine, i.e., the image that has this version of TensorFlow in it (maybe because of the TF 2.0 release?).
Has anyone faced this problem before? Is there a way to create a DL VM using TensorFlow 2.0?
It seemed to be a transient issue, since it is available now.
In addition, you can create your DL VM via gcloud. Here's an example of the command:
gcloud compute instances create INSTANCE_NAME \
--zone=ZONE \
--image-family=tf2-latest-cu100 \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-v100,count=1" \
--metadata="install-nvidia-driver=True,proxy-mode=project_editors" \
--machine-type=n2-highmem-8
There's more information on how to do this in the DL documentation.
Also, if you are looking to create a VM with Tensorflow and Jupyter, you can try using AI Platform Notebooks.
When you create a new Notebook, you can select Tensorflow 2.0 and further customize it to select the accelerator, machine-type, etc.
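If the image-not-found error comes back, one way to check which TF 2.0 images currently exist is to list the public Deep Learning VM images directly (a sketch; the family~tf2 filter is an assumption based on the image family used above):
gcloud compute images list \
--project deeplearning-platform-release \
--no-standard-images \
--filter="family~tf2"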

Issue during deployment of model with gcloud ml-engine versions create

When I create a version of a machine learning model (whether it is my own model or the ML Engine census example) using the command:
$ gcloud ml-engine versions create v1 --model $MODEL_NAME --origin $MODEL_BINARIES --runtime-version 1.10
I get an error saying: ERROR: (gcloud.ml-engine.versions.create) FAILED_PRECONDITION: Framework can not be identified from model path. Please make sure your model file name is correct.
I had the same problem. JOB_ID was empty in my case; I fixed it by adding
JOB_ID=census_211004_181920
before the OUTPUT_PATH declaration. You can check your JOB_ID in the Storage Browser.
Make sure that MODEL_BINARIES is a folder that contains the saved_model.pb file.
When I followed the Google documentation,
gsutil cp -r SavedModel/saved_model ${YOUR_GCS_BUCKET}/model_dir_tmp/
it just copied the file saved_model.pb into ${YOUR_GCS_BUCKET}/model_dir_tmp, instead of creating ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model.
Later, when I passed ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model to --origin, I received the "Framework can not be identified from model path" complaint.
I manually went to the cloud console webpage, and created a folder saved_model and moved the file saved_model.pb into it.
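An alternative to moving files in the web console is to copy into an explicit destination folder, so that --origin ends up pointing at a directory containing saved_model.pb and variables/ (a sketch, reusing the bucket variable and model name from the question):
# Expected layout under --origin:
#   ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model/saved_model.pb
#   ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model/variables/...
gsutil cp -r SavedModel/saved_model/* ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model/
gcloud ml-engine versions create v1 --model $MODEL_NAME \
--origin ${YOUR_GCS_BUCKET}/model_dir_tmp/saved_model --runtime-version 1.10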

Google Cloud ML returns empty predictions with object detection model

I am deploying a model to Google Cloud ML for the first time. I have trained and tested the model locally; it still needs work, but it works OK.
I have uploaded it to Cloud ML and tested with the same example images I test locally, which I know get detections (using this tutorial).
When I do this, I get no detections. At first I thought I had uploaded the wrong checkpoint, but I tested and the same checkpoint works with these images offline. I don't know how to debug further.
When I look at the results, the file
prediction.results-00000-of-00001
is just empty
and the file
prediction.errors_stats-00000-of-00001
contains the following text: ('No JSON object could be decoded', 1)
Is this a sign that the detection ran and detected nothing, or is there some problem while running?
Maybe the problem is that I am preparing the images wrong for uploading?
The logs show no errors at all.
Thank you
EDIT:
I did more tests and tried to run the model locally using the command "gcloud ml-engine local predict" instead of the usual local code. I get the same result as online: no answer at all, but also no error message.
EDIT 2:
I am using a TF_Record file, so I don't understand the JSON response. Here is a copy of my command:
gcloud ml-engine jobs submit prediction ${JOB_ID} \
--data-format=tf_record \
--input-paths=gs://MY_BUCKET/data_dir/inputs.tfr \
--output-path=gs://MY_BUCKET/data_dir/version4 \
--region us-central1 \
--model="gcp_detector" \
--version="Version4"
It works with the following commands:
Model export:
# From tensorflow/models
export PYTHONPATH=$PYTHONPATH:/home/[user]/repos/DeepLearning/tools/models/research:/home/[user]/repos/DeepLearning/tools/models/research/slim
cd /home/[user]/repos/DeepLearning/tools/models/research
python object_detection/export_inference_graph.py \
--input_type encoded_image_string_tensor \
--pipeline_config_path /home/[user]/[path]/ssd_mobilenet_v1_pets.config \
--trained_checkpoint_prefix /[path_to_checkpoint]/model.ckpt-216593 \
--output_directory /[output_path]/output_inference_graph.pb
Cloud execution
gcloud ml-engine jobs submit prediction ${JOB_ID} --data-format=TF_RECORD \
--input-paths=gs://my_inference/data_dir/inputs/* \
--output-path=${YOUR_OUTPUT_DIR} \
--region us-central1 \
--model="model_name" \
--version="version_name"
I don't know exactly which change fixes the issue, but there are some small changes, like tf_record now being TF_RECORD. Hope this helps someone else. Props to Google support for their help (they suggested the changes).