When I try to deploy my trained model to Google Cloud ML, I get the following error:
Create Version failed. Model validation failed: Model metagraph does not have inputs collection.
What does this mean, and how do I get around it?
The TensorFlow model deployed on Cloud ML did not have a collection named "inputs". This collection should name all the input tensors for your graph. Similarly, a collection named "outputs" is required to name the output tensors of your graph. Assuming your graph has two input tensors x and y, and one output tensor scores, this can be done as follows (remember to import json first):
tf.add_to_collection("inputs", json.dumps({"x": x.name, "y": y.name}))
tf.add_to_collection("outputs", json.dumps({"scores": scores.name}))
Here "x", "y" and "scores" become aliases for the actual tensor names (x.name, y.name and scores.name).
I am trying to load multiple ONNX models, so that I can process different inputs inside the same algorithm. Let's assume that model 1 receives an image as input and outputs a set of six integer values related to this image. The second model also receives an image but outputs a binary classification instead.
So far, I have managed to successfully run a single model. However, when I try to run the second one I get an error:
// Load first .onnx model using OpenCV's DNN module
cv::dnn::Net net = cv::dnn::readNet("model_1.onnx");
net.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
// Load second .onnx model using OpenCV's DNN module
cv::dnn::Net net_2 = cv::dnn::readNet("model_2.onnx");
net_2.setPreferableBackend(cv::dnn::DNN_BACKEND_DEFAULT);
net_2.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
Everything runs without errors up to this point, but I get errors when trying to retrieve data from the second model.
// Forward first model
net.setInput(blob);
net.forward(output_blobs, output_names);
// Forward second model
net_2.setInput(blob);
net_2.forward(output_blobs_2, output_names_2);
Any help or advice will be more than welcome.
Currently, the AutoML Vision API outputs a single label with its respective score.
For example:
I trained the model with 3 classes:
A
B
C
Then, when I use Test & Use and upload another image, I get only:
[CURRENT OUTPUT]
Class A and 0.988437 / 0.99
Is there a way I can get this type of output with the top k classes (for example, the top 3, i.e. k=3)?
[DESIRED OUTPUT]
Class A and 0.988437 / 0.99
Class C and 0.3551 / 0.36
Class B and 0.1201 / 0.12
Sorted based on their score.
Thanks in advance.
Single-label classification assigns a single label to each classified image and it returns only one predicted class.
Multi-label is more suited for your use case as it allows an image to be assigned multiple labels.
In the UI (which is what you seem to be using) you can specify the type of classification you want your custom model to perform when you create your dataset.
If, for any reason, you would like the option to get all (or the top k) predicted class scores from single-label classification, I suggest that you raise a Feature Request.
I'm running a model on AWS SageMaker, using their example object detection Jupyter notebook (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb). In the results it gives the following:
validation mAP =(0.111078678154)
I was wondering what this mAP score is referring to?
I've used TensorFlow, where it gives an averaged mAP (averaged from 0.5 IoU to 0.95 IoU in 0.05 increments), mAP@.5IoU, and mAP@.75IoU. I've checked the SageMaker documentation but cannot find anything defining what mAP means there.
Is it safe to assume that the mAP score SageMaker reports is the "averaged mAP (from 0.5 IoU to 0.95 IoU in 0.05 increments)"?
Heyo,
The mAP score is the mean average precision score that is widely used for object detection (https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection-tuning.html).
Take a look at this link for more info on mAP: https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
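To give a concrete sense of what the number means, here is a hedged C++ sketch of how average precision is computed for a single class at a fixed IoU threshold (the function and its inputs are illustrative, not SageMaker's actual implementation; mAP is then the mean of this value over all classes):

#include <vector>

// "matched" holds one entry per detection of this class, sorted by descending
// confidence; matched[i] is true if detection i overlaps a ground-truth box
// above the chosen IoU threshold (i.e. it is a true positive)
double average_precision(const std::vector<bool>& matched, int num_ground_truth) {
    double tp = 0.0, fp = 0.0, ap = 0.0, prev_recall = 0.0;
    for (bool hit : matched) {
        hit ? ++tp : ++fp;
        const double precision = tp / (tp + fp);
        const double recall = tp / num_ground_truth;
        ap += precision * (recall - prev_recall);  // step-wise area under the P-R curve
        prev_recall = recall;
    }
    return ap;
}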
I came across this snippet in the TensorFlow documentation, MNIST For ML Beginners.
eval_data = mnist.test.images # Returns np.array
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
Now I want to feed in my own test images without labelling them, and I would like the model to predict their labels. How do I achieve this?
Yes, you can, but without the labels it would not be deep learning; it would be clustering (e.g., k-means clustering).
The basic idea is the following (a minimal sketch follows this list):
Create two placeholders, one for the input data and one for the centroids
Decide on a distance metric
Create the graph
Feed only the dataset to run the graph
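Since the answer above is only an outline, here is a minimal, framework-agnostic C++ sketch of those four steps (all names and types are illustrative; in TensorFlow the dataset and the current centroids would be the two placeholder feeds):

#include <cstddef>
#include <limits>
#include <vector>

using Point = std::vector<double>;

// the chosen distance metric: squared Euclidean distance
double sq_dist(const Point& a, const Point& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// assigns each point to its nearest centroid, then re-averages the centroids;
// the returned cluster ids play the role of the predicted "labels"
std::vector<int> kmeans(const std::vector<Point>& data,
                        std::vector<Point>& centroids, int iterations) {
    std::vector<int> assignment(data.size(), 0);
    const std::size_t k = centroids.size();
    const std::size_t dim = data.front().size();  // assumes non-empty data
    for (int it = 0; it < iterations; ++it) {
        // assignment step: nearest centroid per point
        for (std::size_t i = 0; i < data.size(); ++i) {
            double best = std::numeric_limits<double>::max();
            for (std::size_t c = 0; c < k; ++c) {
                const double d = sq_dist(data[i], centroids[c]);
                if (d < best) { best = d; assignment[i] = static_cast<int>(c); }
            }
        }
        // update step: each centroid becomes the mean of its assigned points
        std::vector<Point> sums(k, Point(dim, 0.0));
        std::vector<int> counts(k, 0);
        for (std::size_t i = 0; i < data.size(); ++i) {
            const int c = assignment[i];
            for (std::size_t j = 0; j < dim; ++j) sums[c][j] += data[i][j];
            ++counts[c];
        }
        for (std::size_t c = 0; c < k; ++c)
            if (counts[c] > 0)
                for (std::size_t j = 0; j < dim; ++j)
                    centroids[c][j] = sums[c][j] / counts[c];
    }
    return assignment;
}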
How can I save a model in TensorFlow using C++? I have searched on Google and Baidu but could not find any solutions. I then read the TensorFlow API documentation, but there is very little introduction to the C++ API.
Model saving is implemented in Python only. There is currently no way to save a model using the C++ APIs; the C++ APIs allow you to load and use models, not to train or save them.
Assuming you have a basic understanding of the TensorFlow C++ API and know how to construct a graph using it, you can make use of two functions:
tensorflow::WriteTextProto(): you can get a tensorflow::GraphDef (which represents all the operations you defined, e.g. Add, Multiply, Mean, etc.) from tensorflow::Scope::ToGraphDef(), then save the tensorflow::GraphDef to a text protobuf file (see the sketch right after this list).
tensorflow::checkpoint::TensorSliceWriter: saves the current state of the parameter matrices to an external file (a checkpoint); it's a little complicated, but it works well for me.
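For the first function, a minimal sketch might look like the following (assuming root is the tensorflow::Scope the graph was built with; the output file name is just an example):

#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"

// serialize the graph built under "root" ...
tensorflow::GraphDef graph_def;
TF_CHECK_OK(root.ToGraphDef(&graph_def));
// ... and write it out as a human-readable text protobuf
TF_CHECK_OK(tensorflow::WriteTextProto(tensorflow::Env::Default(),
                                       "graph.pbtxt", graph_def));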
For tensorflow::checkpoint::TensorSliceWriter, you'll first have to fetch the trained parameters by calling tensorflow::Session::Run, which will return a list of parameter matrices in output_tensor (see the sample below):
std::vector<tensorflow::Tensor> output_tensor;
// "session" is assumed to be the live tensorflow::Session* that holds the trained graph
TF_CHECK_OK(session->Run({}, {"name_of_param_mtx_1", "name_of_param_mtx_2"}, {}, &output_tensor));
where name_of_param_mtx_1 and name_of_param_mtx_2 above should be the names of your parameter matrices in tensorflow::Variable, e.g.
auto name_of_param_mtx_1 = tensorflow::ops::Variable (root.WithOpName("name_of_param_mtx_1"), {7, 17}, tensorflow::DT_FLOAT);
Then you need to prepare the following for tensorflow::checkpoint::TensorSliceWriter (the sketch after this list puts them together):
the base address of the raw parameter data, obtained by calling tensorflow::Tensor::tensor_data().data()
the shape of each tensorflow::Tensor, obtained by calling tensorflow::Tensor::dim_size(NUM_DIMENSION). For example, for a 7x17 2D parameter matrix, NUM_DIMENSION can be 0 or 1, where tensorflow::Tensor::dim_size(0) is 7 and tensorflow::Tensor::dim_size(1) is 17.
the name of this checkpoint entry; the name must be unique among the other entries in the same file
a tensorflow::TensorSlice, created by calling tensorflow::TensorSlice::ParseOrDie("-:-"); the string argument is parsed internally, e.g. "-:-" means taking all items of a matrix. If you only want part of a trained parameter matrix, e.g. only the 2nd column of all rows, the string argument would likely be "-:2"; I haven't figured out such advanced usage of tensorflow::TensorSlice::ParseOrDie.
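Putting the four items above together, a hedged sketch (the checkpoint path is a placeholder, and output_tensor[0] is assumed to be the 7x17 float matrix fetched by Session::Run above):

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/tensor_slice.h"
#include "tensorflow/core/util/tensor_slice_writer.h"

// output_tensor[0] is assumed to hold the trained 7x17 float matrix
const tensorflow::Tensor& param = output_tensor[0];

tensorflow::checkpoint::TensorSliceWriter writer(
    "/tmp/model.ckpt", tensorflow::checkpoint::CreateTableTensorSliceBuilder);

// base address of the raw parameter data
const float* data =
    reinterpret_cast<const float*>(param.tensor_data().data());
// shape of the tensor, dimension by dimension
tensorflow::TensorShape shape({param.dim_size(0), param.dim_size(1)});
// "-:-" selects the whole matrix (all rows, all columns)
tensorflow::TensorSlice slice = tensorflow::TensorSlice::ParseOrDie("-:-");

// the entry name must be unique within this checkpoint file
TF_CHECK_OK(writer.Add("name_of_param_mtx_1", shape, slice, data));
TF_CHECK_OK(writer.Finish());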
Hope that helps.