I have a trained Fast R-CNN model on a custom set of images. I want to evaluate a new image using the model and the C++ Eval API. I flattened the image into a 1D vector and acquired ROIs to input into the eval function.
GetEvalF(&model);
// Load model with desired outputs
std::string networkConfiguration;
//networkConfiguration += "outputNodeNames=\"h1.z:ol.z\"\n";
networkConfiguration += "modelPath=\"" + modelFilePath + "\"";
model->CreateNetwork(networkConfiguration);
// inputs are features of image: 1000:1000:3 & rois for image: 100
std::unordered_map<string, vector<float>> inputs = { { "features", imgVector },{ "rois", roisVector } };
//outputs are roiLabels and prediction values for each one: 500
std::unordered_map<string, vector<float>*> outputs = { { "roiLabels", &labelsVector }};
but when I try to evaluate with
model->Evaluate(inputs, outputs);
I get a 'no instance of overloaded function' error.
Does somebody know where my formatting is wrong?
Did you train your model using Python or BrainScript? If you used Python, you should use the CNTKLibrary API for evaluation, not the EvalDll API (which only works for models trained with BrainScript). You can find more information about the difference between these two APIs on our Wiki page here. You can check this page about how to use the CNTKLibrary API for model evaluation, as well as the example code. Instructions on how to build the examples are described on this page.
You can also use our Nuget packages to build your application.
Thanks!
I have a functioning tf.estimator pipeline built in TF 1, but now I have decided to move to TF 2.0, and I have problems at the end of my pipeline, when I want to save the model in the .pb format.
I'm using this high-level estimator export_saved_model method:
https://www.tensorflow.org/api_docs/python/tf/estimator/BoostedTreesRegressor#export_saved_model
I have two numeric features, 'age' and 'time_spent'
They're defined using tf.feature_column as such:
age = tf.feature_column.numeric_column('age')
time_spent = tf.feature_column.numeric_column('time_spent')
features = [age,time_spent]
After the model has been trained, I turn the list of features into a dict using the method tf.feature_column.make_parse_example_spec() and feed it to another method, build_parsing_serving_input_receiver_fn(), exactly as outlined on TensorFlow's webpage, https://www.tensorflow.org/guide/saved_model, under estimators.
columns_dict = tf.feature_column.make_parse_example_spec(features)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(columns_dict)
model.export_saved_model(export_dir,input_receiver_fn)
I then inspect the output using the CLI tools
saved_model_cli show --dir mydir --all:
Resulting in the following:
[screenshot of saved_model_cli output showing a single serialized-string input named "inputs"]
Somehow TensorFlow squashes my two useful numeric features into a useless string input called "inputs".
In TF 1 this could be circumvented by creating a custom input_receiver_fn() using tf.placeholder, and I'd get the correct output with two distinct numeric features. But tf.placeholder doesn't exist in TF 2, so that approach is gone.
Sorry about the ranting, but TensorFlow is horribly documented; I'm working with high-level APIs here and it should just work out of the box, but no.
I'd really appreciate any help :)
TensorFlow squashes my two useful numeric features into a useless
string input called "inputs"
is not exactly true: the exported model expects a serialized tf.Example proto. So you can wrap your age and time_spent into two features, which will look like:
features {
  feature {
    key: "age"
    value {
      float_list {
        value: 10.2
      }
    }
  }
  feature {
    key: "time_spent"
    value {
      float_list {
        value: 40.3
      }
    }
  }
}
You can then call your regress signature with the serialized string.
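A minimal sketch of building that proto in Python with tf.train.Example. The feature names and values mirror the answer above; the signature key for the regress call depends on your export and is an assumption, so check it against your saved_model_cli output.

```python
import tensorflow as tf

# Build a tf.Example carrying the two numeric features from the question.
example = tf.train.Example(features=tf.train.Features(feature={
    "age": tf.train.Feature(float_list=tf.train.FloatList(value=[10.2])),
    "time_spent": tf.train.Feature(float_list=tf.train.FloatList(value=[40.3])),
}))
serialized = example.SerializeToString()  # bytes to feed the "inputs" tensor

# Hypothetical: load the exported model and call a signature with the
# serialized string. The exact key ("regression", "serving_default", ...)
# is listed by saved_model_cli; adjust to match your export.
# imported = tf.saved_model.load(export_dir)
# outputs = imported.signatures["serving_default"](
#     inputs=tf.constant([serialized]))
```

Note that the features survive intact inside the serialized proto; the single string input is just the transport format the parsing receiver expects.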
Currently I'm using FastTree for binary classification, but I would like to give SVM a try and compare metrics.
All the docs mention LinearSvm, but I can't find code example anywhere.
mlContext.BinaryClassification.Trainers does not have a public SVM trainer. There is a LinearSvm class and a LinearSvm.TrainLinearSvm static method, but they seem to be intended for different things.
What am I missing?
Version: 0.7
For some reason there is no such trainer in the runtime API, but there is a linear SVM trainer in the Legacy API (for v0.7), found here. They might be adding a new one for the upcoming API, so my advice is to either use the legacy one or wait for a newer API.
At this stage, ML.NET is very much in development.
Copy pasting the response I got on Github:
I have two answers for you: What the status of the API is, and how to use the LinearSVM in the meantime.
First, we have LinearSVM in the ML.NET codebase, but we do not yet have samples or the API extensions to place it in mlContext.BinaryClassification.Trainers. This is being worked through in issue #1318. I'll link this to that issue, and mark it as a bug.
In the meantime, you can use direct instantiation to get access to LinearSVM:
var arguments = new LinearSvm.Arguments()
{
    NumIterations = 20
};

var linearSvm = new LinearSvm(mlContext, arguments);
var svmTransformer = linearSvm.Fit(trainSet);
var scoredTest = svmTransformer.Transform(testSet);
This will give you an ITransformer, here called svmTransformer that you can use to operate on IDataView objects.
Hi, I'm trying to export "the default code" that is automatically generated by the Registration Estimator app within MATLAB to C++ using the MATLAB Coder tool.
This is a sample I generated today:
function [MOVINGREG] = registerImages(MOVING,FIXED)
%registerImages Register grayscale images using auto-generated code from Registration Estimator app.
% [MOVINGREG] = registerImages(MOVING,FIXED) Register grayscale images
% MOVING and FIXED using auto-generated code from the Registration
% Estimator App. The values for all registration parameters were set
% interactively in the App and result in the registered image stored in the
% structure array MOVINGREG.
% Auto-generated by registrationEstimator app on 21-Jun-2017
%-----------------------------------------------------------
% Convert RGB images to grayscale
FIXED = rgb2gray(FIXED);
MOVING = rgb2gray(MOVING);
% Default spatial referencing objects
fixedRefObj = imref2d(size(FIXED));
movingRefObj = imref2d(size(MOVING));
% Intensity-based registration
[optimizer, metric] = imregconfig('monomodal');
optimizer.GradientMagnitudeTolerance = 1.00000e-04;
optimizer.MinimumStepLength = 1.00000e-05;
optimizer.MaximumStepLength = 6.25000e-02;
optimizer.MaximumIterations = 100;
optimizer.RelaxationFactor = 0.500000;
% Align centers
fixedCenterXWorld = mean(fixedRefObj.XWorldLimits);
fixedCenterYWorld = mean(fixedRefObj.YWorldLimits);
movingCenterXWorld = mean(movingRefObj.XWorldLimits);
movingCenterYWorld = mean(movingRefObj.YWorldLimits);
translationX = fixedCenterXWorld - movingCenterXWorld;
translationY = fixedCenterYWorld - movingCenterYWorld;
% Coarse alignment
initTform = affine2d();
initTform.T(3,1:2) = [translationX, translationY];
% Apply transformation
tform = imregtform(MOVING,movingRefObj,FIXED,fixedRefObj,'similarity',optimizer,metric,'PyramidLevels',3,'InitialTransformation',initTform);
MOVINGREG.Transformation = tform;
MOVINGREG.RegisteredImage = imwarp(MOVING, movingRefObj, tform, 'OutputView', fixedRefObj, 'SmoothEdges', true);
% Store spatial referencing object
MOVINGREG.SpatialRefObj = fixedRefObj;
end
Within the Coder tool, in the section Run-Time Issues, I received a couple of issues, e.g. that Coder needs the extrinsic functions declared. So far so good. I added, for instance, coder.extrinsic('imregconfig'); and coder.extrinsic('optimizer');. But I'm still getting errors like:
Attempt to extract field 'GradientMagnitudeTolerance' from 'mxArray'.
Attempt to extract field 'MinimumStepLength' from 'mxArray'.
Attempt to extract field 'MaximumStepLength' from 'mxArray'.
...
Pointing to the line with optimizer.GradientMagnitudeTolerance = 1.00000e-04; (and below).
I found out that usually this means the initialisation of variables is missing, but I don't know how to initialise the property optimizer.GradientMagnitudeTolerance in advance. Can anyone help me with this?
PS: I'm using MATLAB R2017a and Microsoft Visual C++ 2017 (C) Compiler
Based on the list of functions supported for code generation at https://www.mathworks.com/help/coder/ug/functions-supported-for-code-generation--categorical-list.html#bsl0arh-1, imregconfig is not supported for code generation, which explains the first issue you got. Adding coder.extrinsic means that the MATLAB Coder generated file will call into MATLAB to run that function. You can do this only for a MEX target, which needs MATLAB to run; imregconfig is not going to generate any C code. Standalone C code generation for use from an external application is not possible with this code.
When functions declared as coder.extrinsic call into MATLAB, they return an mxArray. The rest of the generated code can handle this mxArray only by passing it back to MATLAB, i.e. to similar extrinsic functions. From Coder's point of view these are opaque types, hence the errors about attempting to extract fields from an mxArray.
I'm trying to load a model trained in Python into C++ and classify some data from a CSV. I found this tutorial:
https://medium.com/@hamedmp/exporting-trained-tensorflow-models-to-c-the-right-way-cf24b609d183#.3bmbyvby0
Which lead me to this piece of example code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc
Which is looking very hopeful for me. However, the data I want to load is in a CSV and not an image file, so I'm trying to rewrite the ReadTensorFromImageFile function. I was able to find a class DecodeCSV, but it's a little different from the DecodePNG and DecodeJpeg classes in the example code, and I end up with an OutputList instead of an Output. Using the [] operator on the list seems to crash my program. If anyone happens to know how to deal with this, it would be greatly appreciated. Here are the relevant changes to the code:
// inside ReadTensorFromText
Output image_reader;
std::initializer_list<Input>* x = new std::initializer_list<Input>;
::tensorflow::ops::InputList defaults = ::tensorflow::ops::InputList(*x);
OutputList image_read_list;
image_read_list = DecodeCSV(root.WithOpName("csv_reader"), file_reader, defaults).output;
// Now cast the image data to float so we can do normal math on it.
// image_read_list.at(0) crashes the executable.
auto float_caster =
Cast(root.WithOpName("float_caster"), image_read_list.at(0), tensorflow::DT_FLOAT);
@haehn Hi Haehn (XTK),
I'm using edge XTK with GWT and trying to render a simple STL. However, the XTK code fails at the line where we assign a color to the mesh.
mesh.color = [0.7,0,0] // this line fails
Error message emitted by XTK code: "Invalid color"
This behavior is observed only when using XTK with GWT.
The error seems to be coming from this XTK code snippet
X.displayable.prototype.__defineSetter__('color', function(color) {
// we accept only numbers as arguments
if (!goog.isDefAndNotNull(color) || !(color instanceof Array) ||
(color.length != 3)) {
throw new Error('Invalid color.');
}
I'm guessing that the issue is with the way GWT builds the page with iframes, because of which the above if condition could be failing under GWT. I think replacing the above check with the following might fix the problem (I got the idea from here):
use goog.isArray(color) instead of (color instanceof Array)
Can you please investigate and comment?
Edit:
Hi XTK
Here is the code snippet which shows how I'm using XTK with GWT.
public class TestGwtXtk implements EntryPoint {
    public void onModuleLoad() {
        testXtk();
    }

    // GWT JSNI method, which allows mixing Java and JavaScript natively;
    // akin to using C or C++ libraries in Java or Android.
    private native void testXtk() /*-{
        var r = new $wnd.X.renderer3D();
        r.container = 'xtk_container'; // id of the container div
        r.config.PROGRESSBAR_ENABLED = false;
        r.init();

        var cube = new $wnd.X.cube();
        cube.lengthX = cube.lengthY = cube.lengthZ = 20;
        cube.color = [ 1, 1, 1 ]; // fails here in XTK code
        cube.center = [ 0, 0, 0 ]; // fails here in XTK code
        r.add(cube);
        r.render();
    }-*/;
}
As noted by the inline comments, the use of a JavaScript array fails. The failure is not because the JS array usage, such as [0,0,0] or new Array(0,0,0), is wrong; it is because of the way the XTK code checks for "instance of Array".
Edit: 2
Dear XTK
I was able to check out the XTK code from git, make the changes I'm proposing, rebuild xtk.js and finally verify that my fix solves the problem.
for example: in displayable.js I commented one line and added another line thus:
// if (!goog.isDefAndNotNull(color) || !(color instanceof Array) || (color.length != 3)) {
if (!goog.isDefAndNotNull(color) || !(goog.isArray(color)) || (color.length != 3)) {
I made similar changes in a couple of other places in the XTK codebase to get my use case going. An explanation of why this is the right solution is here: Closure: The Definitive Guide. Would you please consider making this fix in the codebase for release 8? Thank you!
Using XTK with GWT? What do you mean? Did you write your own wrappers to compile code with XTK calls from Java to JavaScript? Or do you directly use xtk.js in the war file and manually write some JavaScript using it? Or do you only use GAE (Google App Engine), the Google environment for web applications (the ones made with GWT, but also ones not compiled from Java)? Could you be more precise, please?
Here they deal with some issues with GWT and type tests; did you try creating your array with the new operator?
var mycolor = new Array(0.7,0,0);
mesh.color = mycolor;