error running tensorflow trained model c++ - c++

I am working with TensorFlow in C++ using a network I trained myself. I trained FaceNet on MS-Celeb-1M and then created my graph.pb. I modified the example provided here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image in order to test my network.
In main.cpp:
string graph = "data/graph1.pb";
string output_layer = "InceptionResnetV1/Repeat/block35_5/Relu";
I get this error when I run it:
Running model failed: Invalid argument: You must feed a value for placeholder tensor 'phase_train' with dtype bool
[[Node: phase_train = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]]]
I have looked for answers, for example here: https://github.com/davidsandberg/facenet/issues/108:
But there is still a problem
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'phase_train' with dtype bool
when global variables are initialized. I'm not sure why this problem happens but it has to do with batch normalization. It can be fixed by changing
phase_train_placeholder = tf.placeholder(tf.bool, name='phase_train')
to
phase_train_placeholder = tf.placeholder_with_default(tf.convert_to_tensor(True, dtype=tf.bool), shape=(), name='phase_train')
And then it seems to work fine.
David Sandberg talks about changing a line in the Python code. However, I don't know how I can provide the phase_train parameter in C++.

When you call Session::Run, the first argument is a vector of std::pair<string, Tensor>. You need to create a scalar tensor of type DT_BOOL, pair it with the name phase_train, and give it whatever value makes sense (false for inference). Add that pair to the input list.
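A minimal sketch of what that can look like (image_tensor and the placeholder name "input" are assumptions here; use whatever your graph and the label_image code actually define):

// Build a scalar boolean tensor for the 'phase_train' placeholder.
tensorflow::Tensor phase_train(tensorflow::DT_BOOL, tensorflow::TensorShape());
phase_train.scalar<bool>()() = false;  // false = inference mode

// Pair each tensor with the name of the placeholder it feeds.
std::vector<std::pair<std::string, tensorflow::Tensor>> inputs = {
    {"input", image_tensor},        // assumed name of the image placeholder
    {"phase_train", phase_train},
};

std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run(inputs, {output_layer}, {}, &outputs);
if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
}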

Related

serving_input_receiver_fn() function without the deprecated tf.placeholder method in TF 2.0

I have a functioning tf.estimator pipeline built in TF 1, but now I have decided to move to TF 2.0, and I have problems at the end of my pipeline, when I want to save the model in the .pb format.
I'm using this high level estimator export_saved_model method:
https://www.tensorflow.org/api_docs/python/tf/estimator/BoostedTreesRegressor#export_saved_model
I have two numeric features, 'age' and 'time_spent'
They're defined using tf.feature_column as such:
age = tf.feature_column.numeric_column('age')
time_spent = tf.feature_column.numeric_column('time_spent')
features = [age,time_spent]
After the model has been trained, I turn the list of features into a dict using the method tf.feature_column.make_parse_example_spec() and feed it to another method, build_parsing_serving_input_receiver_fn(), exactly as outlined on TensorFlow's webpage, https://www.tensorflow.org/guide/saved_model, under Estimators.
columns_dict = tf.feature_column.make_parse_example_spec(features)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(columns_dict)
model.export_saved_model(export_dir, input_receiver_fn)
I then inspect the output using the CLI tools
saved_model_cli show --dir mydir --all:
Resulting in the following:
[screenshot of the saved_model_cli output]
Somehow Tensorflow squashes my two useful numeric features into a useless string input crap called "inputs".
In TF 1 this could be circumvented by creating a custom input_receiver_fn() function using some tf.placeholder method, and I'd get the correct output with two distinct numeric features. But tf.placeholder doesn't exist in TF 2, so now it's pretty useless.
Sorry about the raging, but TensorFlow is horribly documented, and I'm working with high-level APIs here that should just work out of the box, but no.
I'd really appreciate any help :)
Tensorflow squashes my two useful numeric features into a useless string input crap called "inputs"
is not exactly true: the exported model expects a serialized tf.Example proto. So you can wrap your age and time_spent into two features, which will look like:
features {
  feature {
    key: "age"
    value {
      float_list {
        value: 10.2
      }
    }
  }
  feature {
    key: "time_spent"
    value {
      float_list {
        value: 40.3
      }
    }
  }
}
You can then call your regress function with the serialized string.
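A minimal sketch of building and serializing such an Example in C++ with the generated proto classes (the field names follow tensorflow/core/example/feature.proto; the same structure can be built with tf.train.Example in Python):

#include "tensorflow/core/example/example.pb.h"

tensorflow::Example example;
auto& feature_map = *example.mutable_features()->mutable_feature();
feature_map["age"].mutable_float_list()->add_value(10.2f);
feature_map["time_spent"].mutable_float_list()->add_value(40.3f);

// Serialize to the string that the exported "inputs" tensor expects.
std::string serialized;
example.SerializeToString(&serialized);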

Using MATLAB coder to export code from Registration Estimator app to C++

Hi, I'm trying to export "the default code" that is automatically generated by the Registration Estimator app within MATLAB to C++ using the MATLAB Coder tool.
This is a sample of the code I generated today:
function [MOVINGREG] = registerImages(MOVING,FIXED)
%registerImages Register grayscale images using auto-generated code from Registration Estimator app.
% [MOVINGREG] = registerImages(MOVING,FIXED) Register grayscale images
% MOVING and FIXED using auto-generated code from the Registration
% Estimator App. The values for all registration parameters were set
% interactively in the App and result in the registered image stored in the
% structure array MOVINGREG.
% Auto-generated by registrationEstimator app on 21-Jun-2017
%-----------------------------------------------------------
% Convert RGB images to grayscale
FIXED = rgb2gray(FIXED);
MOVING = rgb2gray(MOVING);
% Default spatial referencing objects
fixedRefObj = imref2d(size(FIXED));
movingRefObj = imref2d(size(MOVING));
% Intensity-based registration
[optimizer, metric] = imregconfig('monomodal');
optimizer.GradientMagnitudeTolerance = 1.00000e-04;
optimizer.MinimumStepLength = 1.00000e-05;
optimizer.MaximumStepLength = 6.25000e-02;
optimizer.MaximumIterations = 100;
optimizer.RelaxationFactor = 0.500000;
% Align centers
fixedCenterXWorld = mean(fixedRefObj.XWorldLimits);
fixedCenterYWorld = mean(fixedRefObj.YWorldLimits);
movingCenterXWorld = mean(movingRefObj.XWorldLimits);
movingCenterYWorld = mean(movingRefObj.YWorldLimits);
translationX = fixedCenterXWorld - movingCenterXWorld;
translationY = fixedCenterYWorld - movingCenterYWorld;
% Coarse alignment
initTform = affine2d();
initTform.T(3,1:2) = [translationX, translationY];
% Apply transformation
tform = imregtform(MOVING,movingRefObj,FIXED,fixedRefObj,'similarity',optimizer,metric,'PyramidLevels',3,'InitialTransformation',initTform);
MOVINGREG.Transformation = tform;
MOVINGREG.RegisteredImage = imwarp(MOVING, movingRefObj, tform, 'OutputView', fixedRefObj, 'SmoothEdges', true);
% Store spatial referencing object
MOVINGREG.SpatialRefObj = fixedRefObj;
end
Within the Coder tool, in the Run-Time Issues section, I received a couple of issues, e.g. that Coder needs the functions to be declared extrinsic. So far so good. I added, for instance, coder.extrinsic('imregconfig'); and coder.extrinsic('optimizer');. But I am still getting errors like:
Attempt to extract field 'GradientMagnitudeTolerance' from 'mxArray'.
Attempt to extract field 'MinimumStepLength' from 'mxArray'.
Attempt to extract field 'MaximumStepLength' from 'mxArray'.
...
Pointing to the line with optimizer.GradientMagnitudeTolerance = 1.00000e-04; (and below).
I found out that usually the initialisation of variables is missing. But I don't know how to initialise the property optimizer.GradientMagnitudeTolerance in advance. Can anyone help me with this?
PS: I'm using MATLAB R2017a and Microsoft Visual C++ 2017 (C) Compiler
Based on the list of functions supported for code generation at https://www.mathworks.com/help/coder/ug/functions-supported-for-code-generation--categorical-list.html#bsl0arh-1, imregconfig is not supported for code generation, which explains the first issue you got. Adding coder.extrinsic means that the code generated by MATLAB Coder will call into MATLAB to run that function. You can do this only for a MEX target, which needs MATLAB to run; imregconfig is not going to generate any C code, so standalone C code generation for use from an external application is not possible with this code.
When functions declared coder.extrinsic call into MATLAB, they return an mxArray. The rest of the code can handle this mxArray only by passing it back to MATLAB, i.e. to other extrinsic functions. From Coder's point of view these are opaque types, hence the errors about attempting to extract fields from an mxArray.

How do you load data from a CSV in C++ TensorFlow?

I'm trying to load a model trained in Python into C++ and classify some data from a CSV. I found this tutorial:
https://medium.com/#hamedmp/exporting-trained-tensorflow-models-to-c-the-right-way-cf24b609d183#.3bmbyvby0
Which lead me to this piece of example code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc
Which is looking very hopeful for me. However, the data I want to load is in a CSV, not an image file, so I'm trying to rewrite the ReadTensorFromImageFile function. I was able to find a class DecodeCSV, but it's a little different from the DecodePNG and DecodeJpeg classes in the example code, and I end up with an OutputList instead of an Output. Using the [] operator on the list seems to crash my program. If anyone happens to know how to deal with this, it would be greatly appreciated. Here are the relevant changes to the code:
// inside ReadTensorFromText
Output image_reader;
std::initializer_list<Input>* x = new std::initializer_list<Input>;
::tensorflow::ops::InputList defaults = ::tensorflow::ops::InputList(*x);
OutputList image_read_list;
image_read_list = DecodeCSV(root.WithOpName("csv_reader"), file_reader, defaults).output;
// Now cast the image data to float so we can do normal math on it.
// image_read_list.at(0) crashes the executable.
auto float_caster =
    Cast(root.WithOpName("float_caster"), image_read_list.at(0), tensorflow::DT_FLOAT);
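One likely reason for the crash: DecodeCSV produces one output per entry in record_defaults, so an empty defaults list leaves the OutputList empty and .at(0) throws. A minimal sketch with the defaults filled in (this assumes three float columns; adjust the count and dtypes to match your CSV):

// One default tensor per CSV column; the dtype of each default fixes that column's dtype.
auto defaults = ::tensorflow::ops::InputList({
    ::tensorflow::ops::Const(root, {0.0f}),
    ::tensorflow::ops::Const(root, {0.0f}),
    ::tensorflow::ops::Const(root, {0.0f})});
auto csv_reader = DecodeCSV(root.WithOpName("csv_reader"), file_reader, defaults);
// csv_reader.output holds one Output per column.
auto float_caster =
    Cast(root.WithOpName("float_caster"), csv_reader.output[0], tensorflow::DT_FLOAT);

Note that DecodeCSV treats each element of its records input as one CSV row, so if file_reader holds the whole file contents you may still need to split it into lines first.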

Adding weka instances after classification but before evaluation?

Suppose X is a raw, labeled (i.e., with training labels) data set, and Process(X) returns a set of Y instances that have been encoded with attributes and converted into a Weka-friendly file like Y.arff.
Also suppose Process() has some 'leakage': some instances Leak = X-Y can't be encoded consistently, and need to get a default classification FOO. The training labels are also known for the Leak set.
My question is how I can best introduce instances from Leak into the Weka evaluation stream AFTER some classifier has been applied to the subset Y, folding the Leak instances in with their default classification label, before performing evaluation across the full set X. In code:
DataSource LeakSrc = new DataSource("leak.arff");
Instances Leak = LeakSrc.getDataSet();
DataSource Ysrc = new DataSource("Y.arff");
Instances Y = Ysrc.getDataSet();
classfr.buildClassifier(Y);
// YunionLeak = ??
eval.crossValidateModel(classfr, YunionLeak);
Maybe this is a specific example of folding together results
from multiple classifiers?
The bounty is closing, but Mark Hall, in another forum (http://list.waikato.ac.nz/pipermail/wekalist/2015-November/065348.html), deserves what will have to count as the current answer:
You’ll need to implement building the classifier for the cross-validation
in your code. You can still use an evaluation object to compute stats for
your modified test folds though, because the stats it computes are all
additive. Instances.trainCV() and Instances.testCV() can be used to create
the folds:
http://weka.sourceforge.net/doc.stable/weka/core/Instances.html#trainCV(int,%20int,%20java.util.Random)
You can then call buildClassifier() to process each training fold, modify
the test fold to your heart's content, and then iterate over the instances
in the test fold while making use of either Evaluation.evaluateModelOnce()
or Evaluation.evaluateModelOnceAndRecordPrediction(). The latter version is
useful if you need the area under the curve summary metrics (as these
require predictions to be retained).
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnce(weka.classifiers.Classifier,%20weka.core.Instance)
http://weka.sourceforge.net/doc.stable/weka/classifiers/Evaluation.html#evaluateModelOnceAndRecordPrediction(weka.classifiers.Classifier,%20weka.core.Instance)
Depending on your classifier, it could be very easy! Weka has an interface called UpdateableClassifier; any class implementing it can be updated after it has been built. The following classes implement this interface:
HoeffdingTree
IBk
KStar
LWL
MultiClassClassifierUpdateable
NaiveBayesMultinomialText
NaiveBayesMultinomialUpdateable
NaiveBayesUpdateable
SGD
SGDText
It can then be updated with something like the following:
ArffLoader loader = new ArffLoader();
loader.setFile(new File("/data/data.arff"));
Instances structure = loader.getStructure();
structure.setClassIndex(structure.numAttributes() - 1);
NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
nb.buildClassifier(structure);
Instance current;
while ((current = loader.getNextInstance(structure)) != null) {
    nb.updateClassifier(current);
}

Setting input parameter for a method

I have an object called X with a method GET_BANK, like in the picture below:
I want to call the function GET_BANK and I am trying to set the input parameter BLZ with a certain value.
I don't quite understand the data structure that is presented here and how I can access it.
At this point my code looks like this (simple version):
data: testobj type ref to ZCO_BLZSERVICE_PORT_TYPE .
data: input type ZGET_BANK .
input-BLZ = '10070000'.
I think the error that I am getting ("The data object "INPUT" does not have a component called "BLZ".") is not really relevant, as I obviously have no idea how to set the BLZ parameter.
Edit: Getting to BLZ can be done by chaining multiple parameters / objects:
input-PARAMETERS-BLZ = '10070000'.
As far as I can see, your input data should refer to TYPE ZGET_BANK_TYPE. Try double-clicking the field with that content in the screen you showed to see whether it leads to a structure with a component named BLZ.