TensorFlow C++: YOU_MADE_A_PROGRAMMING_MISTAKE

I train my NN model in Python and load it in VS2015 C++. Here is the relevant piece of code:
// The session will initialize the outputs
vector<Tensor> outputs;
// Run the session, evaluating our "c" operation from the graph
status = session->Run(inputs, { "y_pred" }, {}, &outputs);
// Convert the node to a scalar representation.
auto output_c = outputs[0].flat<float>();
y_pred is a 2-element tensor, so I use flat to read it. However, I get an error, "YOU_MADE_A_PROGRAMMING_MISTAKE", from EIGEN_STATIC_ASSERT.
Has anyone run into this before? How should I solve it? Thanks!

Finally, I found a post on Stack Overflow, though I cannot tell who the original author is. The flat function is indeed what is needed:
session->Run(inputs, { "pred" }, {}, &outputs);
// Flatten the output tensor and copy its elements one by one.
TTypes<float>::Flat indices_flat = outputs[0].flat<float>();
const int dataSize = 6;   // number of elements expected in the output (matches coutput[6])
float coutput[6];
for (int i = 0; i < dataSize; i++) {
    coutput[i] = indices_flat(i);
    cout << "output[" << i << "]: " << indices_flat(i) << endl;
}
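As a small sanity check before indexing, one can also verify the element count of the output tensor first; a minimal sketch reusing the names from the snippet above (NumElements() is a standard tensorflow::Tensor accessor):
if (outputs[0].NumElements() == 2) {      // y_pred is expected to hold two values
    auto y = outputs[0].flat<float>();    // flat() views the data as a 1-D array
    cout << "y_pred: " << y(0) << ", " << y(1) << endl;
}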


warning: Failed to call `main()` to execute the macro

I am trying to learn ROOT, and I have a few macros that I can work with. Sometimes they work and sometimes they don't.
{
c1 = new TCanvas("c1", "My Root Plots",600, 400);
c1->Divide(2,2);
c1->cd(1);
f=new TF1("f","[0]*exp(-0.5*((x-[1])/[2])**2)/(sqrt(2.0*TMath::Pi())*[2])",-100,100);
f->SetTitle("Gaus;X axis ;Y axis");
f->SetParameter(0,0.5*sqrt(2*TMath::Pi()));
f->SetParameter(1,8);
f->SetParameter(2,5);
f->SetLineColor(3);
f->SetMarkerColor(1);
f->SetMarkerStyle(kOpenStar);
f->SetMarkerSize(5);
f->Draw();
c1->cd(2);
f1 = new TF1("f1", "[0]*x+[1]", 0,50);
f1->SetParameters(10,4);
f1->SetLineColor(5);
f1->SetTitle("ax+b;x;y");
f1->Draw();
}
This is the code I am trying to run. It is kind of working. "What do you mean, kind of working?" I mean it gives me a graph, but as you can see in the code I wrote f->SetMarkerColor(1); and f->SetMarkerStyle(kOpenStar);, yet the markers do not appear on the graph. The terminal does not give me any errors. Is something missing from my ROOT installation? I cannot upload images because I am new here.
I have another problem as well. I want to share it, since it may help in solving the first one.
void testRandom(Int_t nrEvents=500000000)
{
TRandom *r1=new TRandom();
TRandom2 *r2=new TRandom2();
TRandom3 *r3=new TRandom3();
TCanvas* c1=new TCanvas("c1","TRandom Number Generators", 800,600);
c1->Divide(3,1);
TH1D *h1=new TH1D("h1","TRandom",500,0,1);
TH1D *h2=new TH1D("h2","TRandom2",500,0,1);
TH1D *h3=new TH1D("h3","TRandom3",500,0,1);
TStopwatch *st=new TStopwatch();
st->Start();
for (Int_t i=0; i<nrEvents; i++) { h1->Fill(r1->Uniform(0,1)); }
st->Stop(); cout << "Random: " << st->CpuTime() << endl; st->Start();
c1->cd(1); h1->SetFillColor(kRed+1); h1->SetMinimum(0); h1->Draw();
for (Int_t i=0; i<nrEvents; i++) { h2->Fill(r2->Uniform(0,1)); }
st->Stop(); cout << "Random2: " << st->CpuTime() << endl; st->Start();
c1->cd(2); h2->SetFillColor(kGreen+1); h2->SetMinimum(0); h2->Draw();
for (Int_t i=0; i<nrEvents; i++) { h3->Fill(r3->Uniform(0,1)); }
st->Stop(); cout << "Random3: " << st->CpuTime() << endl;
c1->cd(3);
h3->Draw(); h3->SetFillColor(kBlue+1); h3->SetMinimum(0);
}
This is another macro I am trying to run, but it does not work and gives me this error:
warning: Failed to call main() to execute the macro.
Add this function or rename the macro. Falling back to .L.
I tried different things, such as:
root [1] .x main.cpp
root [1] .L main.cpp
but it still gives me the same error.
"f->SetMarkerColor(1); f->SetMarkerStyle(kOpenStar); But markers didn't appear on the graph."
Try f->Draw("PL") instead of f->Draw() to make the markers visible.
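In context, the drawing lines of the first macro would then read (only the Draw call changes; the marker settings stay as they are):
f->SetMarkerColor(1);
f->SetMarkerStyle(kOpenStar);
f->SetMarkerSize(5);
f->Draw("PL");   // the "P" option is what makes the markers appear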
warning: Failed to call main() to execute the macro.
Rename your file: it should be called testRandom.cpp instead of main.cpp, so that the file name matches the macro function testRandom.
Then you can execute it with .x testRandom.cpp.
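The general rule is that a named macro file must contain a function whose name matches the file name (minus the extension); a minimal sketch of the renamed file:
// testRandom.cpp -- `.x testRandom.cpp` loads the file and then calls testRandom()
void testRandom(Int_t nrEvents = 500000000)
{
    // ... body of the macro shown above ...
}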

How to give multi-dimensional inputs to tflite via C++ API

I am trying out the tflite C++ API for running a model that I built. I converted the model to the tflite format with the following snippet:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)
I am following the steps provided in the tflite official guide, and my code up to this point looks like this:
// Load the model
std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder builder(*model, resolver);
builder(&interpreter);
interpreter->AllocateTensors();
// Check interpreter state
tflite::PrintInterpreterState(interpreter.get());
This shows that my input layer has a shape of (1, 2050, 6). For giving input from C++, I followed this thread, and my input code looks like this:
std::vector<std::vector<double>> tensor; // I filled this vector, (dims are 2050, 6)
int input = interpreter->inputs()[0];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input);
for (int i = 0; i < 2050; ++i) {
for (int j = 0; j < 6; j++) {
*(input_data_ptr) = (float)tensor[i][j];
input_data_ptr++;
}
}
The last layer of this model returns a single floating-point value (a probability). I get the output with the following code:
interpreter->Invoke();
int output_idx = interpreter->outputs()[0];
float* output = interpreter->typed_output_tensor<float>(output_idx);
std::cout << "OUTPUT: " << *output << std::endl;
My problem is that I get the same output for different inputs. Moreover, the outputs do not match the tensorflow-python outputs.
I don't understand why it behaves this way. Also, can anyone confirm whether this is the right way to give inputs to the model?
Some extra information:
I built tflite from source (v1.14.0) using the command: bazel build -c opt //tensorflow/contrib/lite:libtensorflowLite.so --cxxopt="-std=c++11" --verbose_failures
I trained my model and converted it to tflite on a different machine, with tensorflow v2.0
This was wrong API usage on my part.
Changing typed_input_tensor to typed_tensor and typed_output_tensor to typed_tensor resolved the issue for me.
For anyone else having the same issue,
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input_tensor_idx);
and
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_tensor<float>(input);
are identical.
This can be verified by looking at the implementation of typed_input_tensor:
template <class T>
T* typed_input_tensor(int index) {
    return typed_tensor<T>(inputs()[index]);
}
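Putting this together, a corrected version of the feed-and-read code from the question could look like the sketch below. The (1, 2050, 6) shape and the tensor vector come from the question; the key point is that typed_input_tensor and typed_output_tensor take the position within inputs()/outputs() (here 0), not the tensor index those lists contain:
// Feed the (1, 2050, 6) input through the pointer for input list position 0.
float* input_data_ptr = interpreter->typed_input_tensor<float>(0);
for (int i = 0; i < 2050; ++i) {
    for (int j = 0; j < 6; ++j) {
        *input_data_ptr++ = static_cast<float>(tensor[i][j]);
    }
}
interpreter->Invoke();
// Read the single probability from output list position 0.
float* output = interpreter->typed_output_tensor<float>(0);
std::cout << "OUTPUT: " << *output << std::endl;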

Run tensorflow model in CPP

I trained my model using tf.keras. I convert the model to '.pb' with:
import os
import tensorflow as tf
from tensorflow.keras import backend as K
K.set_learning_phase(0)
from tensorflow.keras.models import load_model
model = load_model('model_checkpoint.h5')
model.save('model_tf2', save_format='tf')
This creates a folder 'model_tf2' with 'assets', 'variables', and 'saved_model.pb'.
I'm trying to load this model in C++. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore model in C++), I am now able to load the model:
RunOptions run_options;
run_options.set_timeout_in_ms(60000);
SavedModelBundle model;
auto status = LoadSavedModel(SessionOptions(), run_options, model_dir_path, tags, &model);
if (!status.ok()) {
std::cerr << "Failed: " << status;
return -1;
}
With this, the model loads successfully.
I have the following questions
How do I do a forward pass through the model?
I understand that 'tag' can be gpu, serve, train, etc. What is the difference between serve and gpu?
I don't understand the first two arguments to LoadSavedModel, i.e. the session options and run options. What purpose do they serve? Could you also help me understand them with a syntactical example? I set run_options by looking at another Stack Overflow post, but I don't understand its purpose.
Thank you!! :)
Code to perform a forward pass through the model, as mentioned by Patwie in the comments, is given below:
#include <tensorflow/core/protobuf/meta_graph.pb.h>
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/public/session_options.h>
#include <iostream>
#include <string>
typedef std::vector<std::pair<std::string, tensorflow::Tensor>> tensor_dict;
/**
 * @brief load a previously stored model
 * @details [long description]
 *
 * in Python run:
 *
 *    saver = tf.train.Saver(tf.global_variables())
 *    saver.save(sess, './exported/my_model')
 *    tf.train.write_graph(sess.graph, '.', './exported/graph.pb', as_text=False)
 *
 * this relies on a graph which has an operation called `init` responsible for
 * initializing all variables, e.g.
 *
 *    sess.run(tf.global_variables_initializer())  # somewhere in the python file
 *
 * @param sess          active tensorflow session
 * @param graph_fn      path to graph file (e.g. "./exported/graph.pb")
 * @param checkpoint_fn path to checkpoint file (e.g. "./exported/my_model", optional)
 * @return status of reloading
 */
tensorflow::Status LoadModel(tensorflow::Session *sess, std::string graph_fn,
                             std::string checkpoint_fn = "") {
  tensorflow::Status status;
  // Read in the protobuf graph we exported
  tensorflow::MetaGraphDef graph_def;
  status = ReadBinaryProto(tensorflow::Env::Default(), graph_fn, &graph_def);
  if (status != tensorflow::Status::OK()) return status;
  // create the graph
  status = sess->Create(graph_def.graph_def());
  if (status != tensorflow::Status::OK()) return status;
  // restore model from checkpoint, iff checkpoint is given
  if (checkpoint_fn != "") {
    tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING,
                                            tensorflow::TensorShape());
    checkpointPathTensor.scalar<std::string>()() = checkpoint_fn;
    tensor_dict feed_dict = {
        {graph_def.saver_def().filename_tensor_name(), checkpointPathTensor}};
    status = sess->Run(feed_dict, {}, {graph_def.saver_def().restore_op_name()},
                       nullptr);
    if (status != tensorflow::Status::OK()) return status;
  } else {
    // virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
    //                    const std::vector<string>& output_tensor_names,
    //                    const std::vector<string>& target_node_names,
    //                    std::vector<Tensor>* outputs) = 0;
    status = sess->Run({}, {}, {"init"}, nullptr);
    if (status != tensorflow::Status::OK()) return status;
  }
  return tensorflow::Status::OK();
}
int main(int argc, char const *argv[]) {
  const std::string graph_fn = "./exported/my_model.meta";
  const std::string checkpoint_fn = "./exported/my_model";
  // prepare session
  tensorflow::Session *sess;
  tensorflow::SessionOptions options;
  TF_CHECK_OK(tensorflow::NewSession(options, &sess));
  TF_CHECK_OK(LoadModel(sess, graph_fn, checkpoint_fn));
  // prepare inputs
  tensorflow::TensorShape data_shape({1, 2});
  tensorflow::Tensor data(tensorflow::DT_FLOAT, data_shape);
  // same as in python file
  auto data_ = data.flat<float>().data();
  data_[0] = 42;
  data_[1] = 43;
  tensor_dict feed_dict = {
      {"input_plhdr", data},
  };
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(
      sess->Run(feed_dict, {"sequential/Output_1/Softmax:0"}, {}, &outputs));
  std::cout << "input " << data.DebugString() << std::endl;
  std::cout << "output " << outputs[0].DebugString() << std::endl;
  return 0;
}
The tags serve and gpu can be used together if we want to perform inference on a model using a GPU.
The argument session_options in C++ is equivalent to
tf.ConfigProto(allow_soft_placement=True, log_device_placement=True),
which means that, if allow_soft_placement is true, an op will be placed on the CPU if
(i) there is no GPU implementation for the op, or
(ii) no GPU devices are known or registered, or
(iii) it needs to be co-located with reftype input(s) which are from the CPU.
The argument run_options is used if we want to use the profiler, i.e., to extract runtime statistics of the graph execution. It adds information about the time of execution and memory consumption to your event files and allows you to see this information in TensorBoard.
The syntax for using session_options and run_options is shown in the code above.
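As a concrete illustration, here is a minimal sketch of setting both options directly in C++ (the field names come from the standard TensorFlow ConfigProto and RunOptions protos; the commented Run call is the overload that also returns a RunMetadata with the collected statistics, and the output tensor name in it is just a placeholder):
tensorflow::SessionOptions session_options;
session_options.config.set_allow_soft_placement(true);   // fall back to CPU when a GPU kernel is unavailable
session_options.config.set_log_device_placement(true);   // log the device chosen for each op
tensorflow::RunOptions run_options;
run_options.set_timeout_in_ms(60000);                              // as in the question
run_options.set_trace_level(tensorflow::RunOptions::FULL_TRACE);   // collect profiling data
tensorflow::RunMetadata run_metadata;   // filled with timing/memory statistics during Run
// session->Run(run_options, feed_dict, {"output:0"}, {}, &outputs, &run_metadata);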
This worked well with TF 1.5.
The load-graph function:
Status LoadGraph(const tensorflow::string& graph_file_name,
                 std::unique_ptr<tensorflow::Session>* session,
                 tensorflow::SessionOptions options) {
  tensorflow::GraphDef graph_def;
  Status load_graph_status =
      ReadBinaryProto(tensorflow::Env::Default(), graph_file_name, &graph_def);
  if (!load_graph_status.ok()) {
    return tensorflow::errors::NotFound("Failed to load compute graph at '",
                                        graph_file_name, "'");
  }
  //session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
  session->reset(tensorflow::NewSession(options));
  Status session_create_status = (*session)->Create(graph_def);
  if (!session_create_status.ok()) {
    return session_create_status;
  }
  return Status::OK();
}
Call the load-graph function with the path to the .pb model and any other session configuration. Once the model is loaded, you can do a forward pass by calling Run:
Status load_graph_status = LoadGraph(graph_path, &session_fpass, options);
if (!load_graph_status.ok()) {
  LOG(ERROR) << load_graph_status;
  return -1;
}
std::vector<tensorflow::Tensor> outputs;
Status run_status = session_fpass->Run({ {input_layer, image_in} },
                                       { output_layer1 }, { output_layer1 }, &outputs);
if (!run_status.ok()) {
  LOG(ERROR) << "Running model failed: " << run_status;
  return -1;
}
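Once Run succeeds, the values can be read from outputs[0]; a short sketch, assuming a float output tensor:
auto result = outputs[0].flat<float>();   // 1-D view of the output, whatever its shape
for (int i = 0; i < result.size(); ++i) {
  std::cout << "output[" << i << "] = " << result(i) << std::endl;
}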

Problems with JSON for Modern C++ Multidimensional Array

So I have a JSON string I get using cURL that I'm trying to parse for data using JSON for Modern C++ (nlohmann::json). Here is my code:
double retValue(string data) {
    string str;
    double value = 0;
    try {
        auto jsonData = json::parse(data.c_str());
        str = jsonData["layer"][1]["Page"]["Number"];
        value = stoi(str);
    }
    catch(json::parse_error& e) {
        cout << "Error: " << e.what() << endl;
        return 0;
    }
    return value;
}
In PHP, json_decode works fine for decoding into an array, and the values can easily be read that way, but I am having trouble with C++ and this library. The code compiles fine, but I get the following error at run time:
terminate called after throwing an instance of 'nlohmann::detail::type_error'
what(): [json.exception.type_error.305] cannot use operator[] with object
Aborted (core dumped)
The JSON data I'm trying to parse is similar to the following, and I figure the multidimensional structure is the problem and that I'm not handling the data properly.
{
    "layer": {
        "1": {
            "Page": {
                "Number": 3.14
            }
        }
    }
}
Can anybody point me in the right direction?
C++ is a strongly typed language; you must use the correct key type:
str = jsonData["layer"]["1"]["Page"]["Number"];
In PHP, accessing data[1] is the same as accessing data["1"], but nlohmann::json distinguishes the two: a numeric index only works on arrays, while "1" here is an object key.
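For illustration, a corrected sketch of the function from the question (assuming the JSON layout shown above). Since "Number" is stored as a number, it can also be read directly as a double instead of going through a string and stoi:
double retValue(const string& data) {
    try {
        auto jsonData = json::parse(data);
        // "1" is an object key here, not an array index, so it must be a string
        return jsonData["layer"]["1"]["Page"]["Number"].get<double>();
    }
    catch (json::exception& e) {   // covers parse_error and type_error alike
        cout << "Error: " << e.what() << endl;
        return 0;
    }
}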

Z3 Optimizer Unsatisfiability with Real Constraints Using C++ API

I'm running into a problem when trying to use the Z3 optimizer to solve graph partitioning problems. Specifically, the code below fails to produce a satisfying model:
namespace z3 {
    expr ite(context& con, expr cond, expr then_, expr else_) {
        return to_expr(con, Z3_mk_ite(con, cond, then_, else_));
    }
}
bool smtPart(void) {
// Graph setup
vector<int32_t> nodes = {{ 4, 2, 1, 1 }};
vector<tuple<node_pos_t, node_pos_t, int32_t>> edges;
GraphType graph(nodes, edges);
// Z3 setup
z3::context con;
z3::optimize opt(con);
string n_str = "n", sub_p_str = "_p";
// Re-usable constants
z3::expr zero = con.int_val(0);
// Create the sort representing the different partitions.
const char* part_sort_names[2] = { "P0", "P1" };
z3::func_decl_vector part_consts(con), part_preds(con);
z3::sort part_sort =
con.enumeration_sort("PartID",
2,
part_sort_names,
part_consts,
part_preds);
// Create the constants that represent partition choices.
vector<z3::expr> part_vars;
part_vars.reserve(graph.numNodes());
z3::expr p0_acc = zero,
p1_acc = zero;
typename GraphType::NodeData total_weight = typename GraphType::NodeData();
for (const auto& node : graph.nodes()) {
total_weight += node.data;
ostringstream name;
name << n_str << node.id << sub_p_str;
z3::expr nchoice = con.constant(name.str().c_str(), part_sort);
part_vars.push_back(nchoice);
p0_acc = p0_acc + z3::ite(con,
nchoice == part_consts[0](),
con.int_val(node.data),
zero);
p1_acc = p1_acc + z3::ite(con,
nchoice == part_consts[1](),
con.int_val(node.data),
zero);
}
z3::expr imbalance = con.int_const("imbalance");
opt.add(imbalance ==
z3::ite(con,
p0_acc > p1_acc,
p0_acc - p1_acc,
p1_acc - p0_acc));
z3::expr imbalance_limit = con.real_val(total_weight, 100);
opt.add(imbalance <= imbalance_limit);
z3::expr edge_cut = zero;
for(const auto& edge : graph.edges()) {
edge_cut = edge_cut +
z3::ite(con,
(part_vars[edge.node0().pos()] ==
part_vars[edge.node1().pos()]),
zero,
con.int_val(edge.data));
}
opt.minimize(edge_cut);
opt.minimize(imbalance);
z3::check_result opt_result = opt.check();
if (opt_result == z3::check_result::sat) {
auto mod = opt.get_model();
size_t node_id = 0;
for (z3::expr& npv : part_vars) {
cout << "Node " << node_id++ << ": " << mod.eval(npv) << endl;
}
return true;
} else if (opt_result == z3::check_result::unsat) {
cerr << "Constraints are unsatisfiable." << endl;
return false;
} else {
cerr << "Result is unknown." << endl;
return false;
}
}
If I remove the minimize commands and use a solver instead of an optimizer, it will find a satisfying model with 0 imbalance. I can also get the optimizer to find a satisfying model if I either:
Remove the constraint imbalance <= imbalance_limit, or
Make the imbalance limit reducible to an integer. In this example the total weight is 8. If the imbalance limit is set to 8/1, 8/2, 8/4, or 8/8 the optimizer will find satisfying models.
I have tried to_real(imbalance) <= imbalance_limit to no avail. I also considered the possibility that Z3 is using the wrong logic (one that doesn't include theories for real numbers) but I haven't found a way to set that using the C/C++ API.
If anyone could tell me why the optimizer fails in the presence of the real valued constraint or could suggest improvements to my encoding it would be much appreciated. Thanks in advance.
Could you reproduce the result by using opt.to_string() to dump the state (just before the check())? This creates a string formatted in SMT-LIB2 with the optimization commands, which makes it easier to exchange benchmarks. You should see that it reports unsat with the optimization commands and sat if you comment them out.
If you are able to reproduce a bug this way, post an issue on GitHub.com/z3prover/z3.git with the repro.
If not, you can call Z3_open_log before you create the Z3 context and record a rerunnable log file. It is possible (but not as easy) to dig into unsoundness bugs that way.
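A minimal sketch of those two debugging steps (opt.to_string() is the dump mentioned above; Z3_open_log is the C API call and has to be invoked before any context is created):
Z3_open_log("z3_trace.log");       // start a rerunnable log; must precede context creation
z3::context con;
z3::optimize opt(con);
// ... add the constraints and objectives exactly as in the question ...
std::cout << opt.to_string() << std::endl;   // SMT-LIB2 dump of the state, just before check()
z3::check_result res = opt.check();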
It turns out that this was a bug in Z3. I created an Issue on GitHub and they have since responded with a patch. I'm compiling and testing the fix now, but I expect it to work.
Edit: Yup, that patch fixed the issue for the command line tool and the C++ API.