TensorFlow C++ - Avoiding overlapping names when loading two graphs

As a result of earlier parts of a pipeline, I have two .pb files containing a frozen optimized Tensorflow graph for slightly different architectures for the same production model. I would like to load them both in the same C++ program into the same session for inference, but of course the graph nodes have lots of conflicting names.
In Python, I was under the impression that you could load the two graphs into the same session under different variable scopes, but in C++ I'm not sure how to do this.
So I've been doing essentially the following, which seems to work, but also seems a bit hacky and fragile to do manually like this, particularly the handling of the caret in the name for control edges. Is this a reasonable way to do it, and/or is there a pre-existing function in the C++ API that I can call instead that accomplishes the same thing?
Status status;
GraphDef graphDef1;
GraphDef graphDef2;
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized1.pb"), &graphDef1);
CHECK_STATUS(status, "reading graph1");
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized2.pb"), &graphDef2);
CHECK_STATUS(status, "reading graph2");

auto addPrefixToGraph = [](GraphDef& graphDef, const string& prefix) {
    for (int i = 0; i < graphDef.node_size(); ++i) {
        auto node = graphDef.mutable_node(i);
        string* name = node->mutable_name();
        *name = prefix + *name;
        int inputSize = node->input_size();
        for (int j = 0; j < inputSize; ++j) {
            string* inputName = node->mutable_input(j);
            // Control edges are written as "^nodename", so the prefix goes after the caret.
            if (inputName->size() > 0 && (*inputName)[0] == '^')
                *inputName = "^" + prefix + inputName->substr(1);
            else
                *inputName = prefix + *inputName;
        }
    }
};

addPrefixToGraph(graphDef1, "g1/");
addPrefixToGraph(graphDef2, "g2/");

status = session->Create(graphDef1);
CHECK_STATUS(status, "adding graph1 to session");
status = session->Extend(graphDef2);
CHECK_STATUS(status, "adding graph2 to session");

Related

Simulating LEACH encounters a problem: finished with error in OMNeT++

When running the simulation in OMNeT++ 5.7, the execution stops suddenly and closes.
This is the code that is being run in OMNeT++:
auto simulation = getSimulation();
for (i = 1; i <= simulation->getLastComponentId(); i++) {
    int x, y, id;
    //scan the simulation module vector
    mod = (cModule*)simulation->getModule(i);
    if (strcmp(mod->getName(), "node") == 0) {
        id = ((Node*)mod)->myId;
        x = ((Node*)mod)->xpos;
        y = ((Node*)mod)->ypos;
        nodePtr[id] = ((Node*)mod);
        if (id != this->myId) {
            cGate* g;
            char gName1[32], gName2[32];
            // make new gate here
            if (this->hasGate(gName1)) {
                this->gate(gName1)->disconnect();
                this->deleteGate(gName1);
            }
            this->addGate(gName1, cGate::OUTPUT, false);
            // make new gate at other side
            if (mod->hasGate(gName2)) {
                mod->gate(gName2)->disconnect();
                mod->deleteGate(gName2);
            }
            mod->addGate(gName2, omnetpp::cGate::INPUT, false);
            //CHANNEL
            cIdealChannel* ch = NULL;
            this->gate(gName1)->connectTo(mod->gate(gName2), ch);
            g = this->gate(gName1);
            g->setDisplayString(g->getDisplayString());
        }
    }
}
I assume that the last line, g->setDisplayString(g->getDisplayString());, is probably where the code breaks. The code repeats in the for loop with i <= simulation->getLastComponentId(). I'm new to OMNeT++. Any suggestions to fix this would be helpful.
Thanks.
Several things in your code may be sources of crashes:
getModule(i) may return nullptr, see the OMNeT++ Simulation API, so you should check in the code whether the result is nullptr.
gName1 and gName2 are never set!
Other issues:
instead of (Node*)mod, use dynamic_cast<Node*>(mod) and check whether the result is not nullptr (see the sketch below).
instead of strcmp(mod->getName(), "node") == 0, I advise using mod->isName("node") - see the OMNeT++ Simulation API.
if you want to obtain a module whose name is "node", you do not need to manually check the name of every module - there is a useful method, getModuleByPath() - see the OMNeT++ Simulation Manual.
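A minimal sketch combining these checks (assuming the Node class and the nodePtr array from the question):

for (int i = 1; i <= getSimulation()->getLastComponentId(); ++i) {
    cModule* mod = getSimulation()->getModule(i);
    if (mod == nullptr)         // getModule() may return nullptr
        continue;
    if (!mod->isName("node"))   // preferred over strcmp(mod->getName(), "node")
        continue;
    Node* node = dynamic_cast<Node*>(mod);
    if (node == nullptr)        // a module named "node" that is not a Node
        continue;
    nodePtr[node->myId] = node; // safe to dereference from here on
}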

libusb equivalent of PyUSB usb.util.find_descriptor

PyUSB has a little utility function called usb.util.find_descriptor which, as the name implies, allows me to search for a descriptor with specific criteria, for example:
ep = usb.util.find_descriptor(
    intf,
    # match the first OUT endpoint
    custom_match = \
        lambda e: \
            usb.util.endpoint_direction(e.bEndpointAddress) == \
            usb.util.ENDPOINT_OUT)
I need the same functionality in a C++ application built with libusb. However, I don't see anything similar in the libusb API specification. What would be the easiest way to implement the same functionality on top of libusb?
I'm on Linux, if that makes a difference. However, I'd rather not add any additional dependencies unless strictly required.
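One dependency-free approach is a small helper that takes a predicate, mirroring PyUSB's custom_match. A sketch (find_endpoint and its signature are my own invention, not part of libusb):

#include <libusb-1.0/libusb.h>
#include <functional>

// Return the first endpoint of an interface descriptor matching the predicate,
// or nullptr if none does.
const libusb_endpoint_descriptor* find_endpoint(
        const libusb_interface_descriptor& intf,
        const std::function<bool(const libusb_endpoint_descriptor&)>& match) {
    for (int i = 0; i < intf.bNumEndpoints; ++i) {
        if (match(intf.endpoint[i]))
            return &intf.endpoint[i];
    }
    return nullptr;
}

// Usage, matching the first OUT endpoint as in the PyUSB example:
// auto* ep = find_endpoint(intf, [](const libusb_endpoint_descriptor& e) {
//     return (e.bEndpointAddress & LIBUSB_ENDPOINT_DIR_MASK) == LIBUSB_ENDPOINT_OUT;
// });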
Update:
This is what I have so far:
libusb_config_descriptor* config;
int ret = libusb_get_config_descriptor(m_dev, 0 /* config_index */, &config);
if (ret != LIBUSB_SUCCESS) {
    raise_exception(std::runtime_error, "libusb_get_config_descriptor() failed: " << usb_strerror(ret));
}

// Do not assume endpoint_in is 1
const libusb_interface* interface = config->interface;
const libusb_interface_descriptor configuration = interface->altsetting[0];
for (int i = 0; i < configuration.bNumEndpoints; ++i) {
    const libusb_endpoint_descriptor endpoint = configuration.endpoint[i];
    if ((endpoint.bEndpointAddress & LIBUSB_ENDPOINT_IN) == LIBUSB_ENDPOINT_IN) {
        endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
        log_debug("endpoint_in: " << endpoint_in);
    }
}
Iteration works this way, although it looks quite ugly and is mostly non-reusable. Also, extracting the endpoint number with:
endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
does not seem to work.
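For what it's worth, bEndpointAddress is a single byte: bit 7 is the direction flag (LIBUSB_ENDPOINT_DIR_MASK, 0x80) and bits 0-3 are the endpoint number, so shifting right by 8 always yields zero. Masking out the low bits gives the number instead:

// Endpoint number lives in the low 4 bits of bEndpointAddress.
endpoint_in = endpoint.bEndpointAddress & LIBUSB_ENDPOINT_ADDRESS_MASK;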

Setting a protobuf item's value replaces a previously set variable as well

I have created a simple Protobuf-based config file and have been using it without any issues until now. The issue is that I added two new items to my settings (Config.proto), and now whatever value I set for the last variable is reflected in the previous one as well.
A snapshot of the config demonstrated this clearly: the values of fv_shape_predictor_path and fv_eyenet_path depend solely on the order of being set; the one that is set last changes the other's value.
I made sure the cpp files related to Config.proto were built afresh. I also tested this under Linux, and there it works just fine. It seems it's only happening on Windows! It also doesn't affect any other items in the same settings, just these two new ones.
I have no idea what is causing this or how to go about fixing it. For reference, this is what the protobuf looks like:
syntax = "proto3";
package FVConfig;
message Config {
FV Configuration = 4 ;
message FAN_MODELS_WEIGHTS{
string fan_2DFAN_4 = 1;
string fan_3DFAN_4 = 2;
string fan_depth = 3;
}
message S3FD_MODELS_WEIGHTS{
string s3fd = 1;
}
message DLIB_MODELS_WEIGHTS{
string dlib_default = 1;
}
message MTCNN_MODELS_WEIGHTS {
string mt_onet = 1;
string mt_pnet = 2;
string mt_rnet = 3;
}
message FV_MODEL_WEIGHTS {
string r18 = 1;
string r50 = 2;
string r101 = 3;
repeated ModelContainer new_models_weights = 4;
message ModelContainer{
string model_name = 1;
string model_weight_path = 2;
string description = 3;
}
}
message FV {
MTCNNDetectorSettings mtcnn = 1 ;
FaceVerificationSettings fv = 2 ;
}
message MTCNNDetectorSettings {
Settings settings = 1;
MTCNN_MODELS_WEIGHTS model_weights = 4;
message Settings {
string mt_device = 2;
int32 mt_webcam_source = 100;
int32 mt_upper_threshold = 600;
int32 mt_hop = 700;
}
}
message FaceVerificationSettings {
Settings settings = 1;
FV_MODEL_WEIGHTS model_weights = 2;
message Settings {
string fv_model_name = 1;
string fv_model_checkpoint_path = 2;
bool fv_rebuild_cache = 3;
bool fv_short_circut = 6;
bool fv_accumulate_score = 7;
string fv_config_file_path = 10;
string fv_img_bank_folder_root = 11;
string fv_cache_folder = 12;
string fv_postfix = 13;
string fv_device = 14;
int32 fv_idle_interval = 15;
bool fv_show_dbg_info = 16;
// these are the new ones
string fv_shape_predictor_path = 17;
string fv_eyenet_path = 18;
}
}
} //end of Config message
What am I missing here? How should I be going about this? Restarting Windows and Visual Studio didn't do any good either. I'm using protobuf 3.11.4 both on Linux and Windows.
This issue seems to exist only in the Windows build of Protobuf 3.11.4 (I didn't test any newer version, though).
Basically, what happened was that I used to first create a Config object and initialize it with some default values. When I added these two entries to my Config.proto, I forgot to also add initialization entries like for the other fields, thinking I'd be fine with the defaults (which I assumed would be "").
This doesn't pose any issues under Linux/G++: the program builds and runs just fine and works as intended. Under Windows, however, this results in the behavior you just witnessed, i.e. setting either of the newly added entries would also set the other one's value. So basically, I either had to create a whole new Config object or had to explicitly set their values in load_default_config.
To be more concrete, this is the snippet I used for setting some default values in my Protobuf configs.
These reside in a separate header called Utility.h:
inline FVConfig::Config load_default_config()
{
    FVConfig::Config baseCfg;
    auto config = baseCfg.mutable_configuration();
    load_fv_default_settings(config->mutable_fv());
    load_mtcnn_default_settings(config->mutable_mtcnn());
    return baseCfg;
}

inline void load_fv_default_settings(FVConfig::Config_FaceVerificationSettings* fv)
{
    fv->mutable_settings()->set_fv_config_file_path(fv::config_file_path);
    fv->mutable_settings()->set_fv_device(fv::device);
    fv->mutable_settings()->set_fv_rebuild_cache(fv::rebuild_cache);
    ...
    // these two lines were missing previously and to my surprise, this was indeed
    // the cause of the weird behavior.
    fv->mutable_settings()->set_fv_shape_predictor_path(fv::shape_predictor_path);
    fv->mutable_settings()->set_fv_eyenet_path(fv::eyenet_path);

    auto new_model_list = fv->mutable_model_weights()->mutable_new_models_weights()->Add();
    new_model_list->set_model_name("r18");
    new_model_list->set_description("default");
    new_model_list->set_model_weight_path(fv::model_weights_r18);
}

inline void load_mtcnn_default_settings(FVConfig::Config_MTCNNDetectorSettings* mt)
{
    mt->mutable_settings()->set_mt_device(mtcnn::device);
    mt->mutable_settings()->set_mt_hop(mtcnn::hop);
    ...
}
I'm not sure whether this counts as a bug in Protobuf or a wrong approach on my part.
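For debugging this sort of cross-talk, protobuf's text dump prints each field by name, which makes it obvious which field actually holds which value. A small sketch (the path literal is hypothetical):

#include <iostream>
#include "Config.pb.h" // generated from Config.proto

FVConfig::Config cfg = load_default_config();
cfg.mutable_configuration()->mutable_fv()->mutable_settings()
    ->set_fv_shape_predictor_path("models/shape_predictor.dat"); // hypothetical path
std::cout << cfg.DebugString() << std::endl; // both new fields print by name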

How to give multi-dimensional inputs to tflite via C++ API

I am trying out the tflite C++ API for running a model that I built. I converted the model to tflite format with the following snippet:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)
I am following the steps provided in the tflite official guide, and my code up to this point looks like this:
// Load the model
std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model.tflite");

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder builder(*model, resolver);
builder(&interpreter);
interpreter->AllocateTensors();

// Check interpreter state
tflite::PrintInterpreterState(interpreter.get());
This shows my input layer has a shape of (1, 2050, 6). For giving input from C++, I followed this thread, and my input code looks like this:
std::vector<std::vector<double>> tensor; // I filled this vector, (dims are 2050, 6)
int input = interpreter->inputs()[0];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input);
for (int i = 0; i < 2050; ++i) {
    for (int j = 0; j < 6; j++) {
        *(input_data_ptr) = (float)tensor[i][j];
        input_data_ptr++;
    }
}
The last layer of this model returns a single floating point value (a probability). I get the output with the following code:
interpreter->Invoke();
int output_idx = interpreter->outputs()[0];
float* output = interpreter->typed_output_tensor<float>(output_idx);
std::cout << "OUTPUT: " << *output << std::endl;
My problem is that I am getting the same output for different inputs. Moreover, the outputs do not match the tensorflow-python outputs.
I don't understand why it's behaving this way. Also, can anyone confirm whether this is the right way to give inputs to the model?
Some extra information:
I built tflite from source, v1.14.0, using command: bazel build -c opt //tensorflow/contrib/lite:libtensorflowLite.so --cxxopt="-std=c++11" --verbose_failures
I trained my model and converted it to tflite on a different machine, with tensorflow v2.0
This turned out to be wrong API usage.
Changing typed_input_tensor to typed_tensor (and likewise typed_output_tensor to typed_tensor) resolved the issue for me.
For anyone else having the same issue,
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input_tensor_idx);
and
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_tensor<float>(input);
are identical.
This can be verified by looking at the implementation of typed_input_tensor:
template <class T>
T* typed_input_tensor(int index) {
    return typed_tensor<T>(inputs()[index]);
}
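Applied to the question's code, the fix is to pass 0 (the position within inputs()) to typed_input_tensor rather than the tensor index; a sketch of the corrected fill loop:

// typed_input_tensor expects the i-th model input, not the tensor index
// returned by interpreter->inputs()[i]; the latter reads the wrong buffer.
float* input_data_ptr = interpreter->typed_input_tensor<float>(0);
for (int i = 0; i < 2050; ++i) {
    for (int j = 0; j < 6; ++j) {
        *input_data_ptr++ = static_cast<float>(tensor[i][j]);
    }
}

The output read should likewise use typed_output_tensor<float>(0) for the first output.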

MRPT Graph Slam Minimal Example

I am trying to come up with a "minimal" way of running a graph SLAM application using MRPT. The sensor data (LaserScan / Odometry) will be provided by a custom middleware similar to ROS. After reading the docs and source code (both for MRPT and the ROS bridge) extensively, I came up with the following snippet:
std::string config_file = "../../../laser_odometry.ini";
std::string rawlog_fname = "";
std::string fname_GT = "";

auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto edge_reg = mrpt::graphslam::deciders::CICPCriteriaERD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto optimizer = mrpt::graphslam::optimizers::CLevMarqGSO<mrpt::graphs::CNetworkOfPoses2DInf>{};

auto win3d = mrpt::gui::CDisplayWindow3D{"Slam", 800, 600};
auto win_observer = mrpt::graphslam::CWindowObserver{};
auto win_manager = mrpt::graphslam::CWindowManager{&win3d, &win_observer};

auto engine = mrpt::graphslam::CGraphSlamEngine<mrpt::graphs::CNetworkOfPoses2DInf>{
    config_file, rawlog_fname, fname_GT, &win_manager, &node_reg, &edge_reg, &optimizer};

for (size_t measurement_count = 0;;) {
    // grab laser scan from the network, then fill it (hardcoded values for now), e.g.:
    auto scan_ptr = mrpt::obs::CObservation2DRangeScan::Create();
    scan_ptr->timestamp = std::chrono::system_clock::now().time_since_epoch().count();
    scan_ptr->rightToLeft = true;
    scan_ptr->sensorLabel = "";
    scan_ptr->aperture = 3.14; // rad (max-min)
    scan_ptr->maxRange = 3.0;  // m
    scan_ptr->sensorPose = mrpt::poses::CPose3D{};
    scan_ptr->resizeScan(30);
    for (int i = 0; i < 30; ++i) {
        scan_ptr->setScanRange(i, 0.5);
        scan_ptr->setScanRangeValidity(i, true);
    }

    { // Send LaserScan measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(scan_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // grab odometry from the network, then fill it (hardcoded values for now), e.g.:
    auto odometry_ptr = mrpt::obs::CObservationOdometry::Create();
    odometry_ptr->timestamp = std::chrono::system_clock::now().time_since_epoch().count();
    odometry_ptr->hasVelocities = false;
    odometry_ptr->odometry.x(0);
    odometry_ptr->odometry.y(0);
    odometry_ptr->odometry.phi(0);

    { // Send Odometry measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(odometry_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // Get pose estimation from the engine
    auto pose = engine.getCurrentRobotPosEstimation();
}
Am I headed in the right direction here? Did I miss something?
Hmm, at first look the code seems fine: you are providing the odometry and the laser scan in two different steps, and in Observation form.
Minor note
auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD{};
If you want to run with odometry + laser scans, use CFixedIntervalsNRD instead (as sketched below). It's much better tested and actually makes use of those measurements.
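The swap in the question's snippet would look like this (a sketch, assuming CFixedIntervalsNRD takes the same graph template parameter as CICPCriteriaNRD):

auto node_reg = mrpt::graphslam::deciders::CFixedIntervalsNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};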
There is no minimal graphslam-engine example in MRPT at present, but here's the main method for running graph-slam with datasets:
https://github.com/MRPT/mrpt/blob/26ee0f2d3a9366c50faa5f78d0388476ae886808/libs/graphslam/include/mrpt/graphslam/apps_related/CGraphSlamHandler_impl.h#L395
template <class GRAPH_T>
void CGraphSlamHandler<GRAPH_T>::execute()
{
    using namespace mrpt::obs;
    ASSERTDEB_(m_engine);

    // Variables initialization
    mrpt::io::CFileGZInputStream rawlog_stream(m_rawlog_fname);
    CActionCollection::Ptr action;
    CSensoryFrame::Ptr observations;
    CObservation::Ptr observation;
    size_t curr_rawlog_entry;
    auto arch = mrpt::serialization::archiveFrom(rawlog_stream);

    // Read the dataset and pass the measurements to CGraphSlamEngine
    bool cont_exec = true;
    while (CRawlog::getActionObservationPairOrObservation(
               arch, action, observations, observation, curr_rawlog_entry) &&
           cont_exec)
    {
        // actual call to the graphSLAM execution method
        // Exit if user pressed C-c
        cont_exec = m_engine->_execGraphSlamStep(
            action, observations, observation, curr_rawlog_entry);
    }
    m_logger->logFmt(mrpt::system::LVL_WARN, "Finished graphslam execution.");
}
You basically grab the data and then continuously feed them to CGraphSlamEngine via either execGraphSlamStep or _execGraphSlamStep methods.
Here's also the relevant snippet from the corresponding ROS wrapper, which processes measurements coming in from ROS topics:
https://github.com/mrpt-ros-pkg/mrpt_slam/blob/8b32136e2a381b1759eb12458b4adba65e2335da/mrpt_graphslam_2d/include/mrpt_graphslam_2d/CGraphSlamHandler_ROS_impl.h#L719
template <class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::processObservation(
    mrpt::obs::CObservation::Ptr& observ)
{
    this->_process(observ);
}

template <class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::_process(
    mrpt::obs::CObservation::Ptr& observ)
{
    using namespace mrpt::utils;
    if (!this->m_engine->isPaused()) {
        this->m_engine->execGraphSlamStep(observ, m_measurement_cnt);
        m_measurement_cnt++;
    }
}