I am trying out the tflite C++ API for running a model that I built. I converted the model to tflite format with the following snippet:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)
I am following the steps provided in the tflite official guide, and my code up to this point looks like this:
// Load the model
std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder builder(*model, resolver);
builder(&interpreter);
interpreter->AllocateTensors();
// Check interpreter state
tflite::PrintInterpreterState(interpreter.get());
This shows that my input layer has a shape of (1, 2050, 6). For giving input from C++, I followed this thread, and my input code looks like this:
std::vector<std::vector<double>> tensor; // I filled this vector (dims are 2050, 6)
int input = interpreter->inputs()[0];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input);
for (int i = 0; i < 2050; ++i) {
    for (int j = 0; j < 6; ++j) {
        *input_data_ptr = (float)tensor[i][j];
        input_data_ptr++;
    }
}
The last layer of this model returns a single floating-point value (a probability). I get the output with the following code:
interpreter->Invoke();
int output_idx = interpreter->outputs()[0];
float* output = interpreter->typed_output_tensor<float>(output_idx);
std::cout << "OUTPUT: " << *output << std::endl;
My problem is that I am getting the same output for different inputs. Moreover, the outputs do not match the tensorflow-python outputs.
I don't understand why it's behaving this way. Also, can anyone confirm whether this is the right way to give inputs to the model?
Some extra information:
I built tflite from source (v1.14.0) using the command: bazel build -c opt //tensorflow/contrib/lite:libtensorflowLite.so --cxxopt="-std=c++11" --verbose_failures
I trained my model and converted it to tflite on a different machine, with tensorflow v2.0.
This is incorrect API usage: typed_input_tensor and typed_output_tensor take the position of the tensor in the inputs()/outputs() lists, not the tensor index itself. Changing typed_input_tensor to typed_tensor and typed_output_tensor to typed_tensor resolved the issue for me.
For anyone else hitting the same issue:
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input_tensor_idx);
and
int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_tensor<float>(input);
are identical.
This can be verified by looking at the implementation of typed_input_tensor:
template <class T>
T* typed_input_tensor(int index) {
    return typed_tensor<T>(inputs()[index]);
}
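Putting it together, a corrected version of the input/output code from the question would look like the following sketch (same (1, 2050, 6) float input and single float output assumed):
// 0 is the position in interpreter->inputs(), not the tensor index itself
float* input_data_ptr = interpreter->typed_input_tensor<float>(0);
for (int i = 0; i < 2050; ++i)
    for (int j = 0; j < 6; ++j)
        *input_data_ptr++ = static_cast<float>(tensor[i][j]);
if (interpreter->Invoke() != kTfLiteOk) { /* handle the error */ }
// same rule on the output side: 0 = first output, not outputs()[0]
float* output = interpreter->typed_output_tensor<float>(0);
std::cout << "OUTPUT: " << *output << std::endl;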
When running a simulation in OMNeT++ 5.7, the execution stops suddenly and closes. This is the code that is being run in OMNeT++:
auto simulation = getSimulation();
for (i = 1; i <= simulation->getLastComponentId(); i++) {
    int x, y, id;
    // scan the simulation module vector
    mod = (cModule*)simulation->getModule(i);
    if (strcmp(mod->getName(), "node") == 0) {
        id = ((Node*)mod)->myId;
        x = ((Node*)mod)->xpos;
        y = ((Node*)mod)->ypos;
        nodePtr[id] = ((Node*)mod);
        if (id != this->myId) {
            cGate* g;
            char gName1[32], gName2[32];
            // make new gate here
            if (this->hasGate(gName1)) {
                this->gate(gName1)->disconnect();
                this->deleteGate(gName1);
            }
            this->addGate(gName1, cGate::OUTPUT, false);
            // make new gate at other side
            if (mod->hasGate(gName2)) {
                mod->gate(gName2)->disconnect();
                mod->deleteGate(gName2);
            }
            mod->addGate(gName2, omnetpp::cGate::INPUT, false);
            // CHANNEL
            cIdealChannel* ch = NULL;
            this->gate(gName1)->connectTo(mod->gate(gName2), ch);
            g = this->gate(gName1);
            g->setDisplayString(g->getDisplayString());
        }
    }
}
I assume that the last line, g->setDisplayString(g->getDisplayString());, is probably where the code breaks. The code repeats in the for loop with i <= simulation->getLastComponentId(). I'm new to OMNeT++; any suggestion to fix this would be helpful.
Thanks.
Several things in your code may be the source of the crash:
getModule(i) may return nullptr (see the OMNeT++ Simulation API), so you should check in the code whether the result is not nullptr.
gName1 and gName2 are never set!
Other issues:
instead of (Node*)mod use dynamic_cast<Node*>(mod) and check whether the result is not nullptr.
instead of strcmp(mod->getName(), "node") == 0 I advise using mod->isName("node") - see the OMNeT++ Simulation API
if you want to obtain a module whose name is "node", you do not need to manually check the name of every module - there is a useful method, getModuleByPath(); see the OMNeT++ Simulation Manual
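A sketch of the loop with those fixes applied (the gate-name scheme here is an assumption; substitute whatever naming convention your network actually uses):
for (int i = 1; i <= simulation->getLastComponentId(); i++) {
    cModule* mod = simulation->getModule(i);
    if (mod == nullptr || !mod->isName("node"))
        continue;  // getModule(i) may return nullptr
    Node* node = dynamic_cast<Node*>(mod);
    if (node == nullptr)
        continue;  // module named "node" but not actually of type Node
    if (node->myId == this->myId)
        continue;
    // give the gates real names before using them (they were
    // uninitialized char arrays in the original code)
    char gName1[32], gName2[32];
    snprintf(gName1, sizeof gName1, "out_to_%d", node->myId);   // assumed scheme
    snprintf(gName2, sizeof gName2, "in_from_%d", this->myId);  // assumed scheme
    // ... gate creation, connection and display-string code as before ...
}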
As a result of earlier parts of a pipeline, I have two .pb files, each containing a frozen, optimized Tensorflow graph for a slightly different architecture of the same production model. I would like to load them both into the same session in the same C++ program for inference, but of course the graph nodes have lots of conflicting names.
In Python, I was under the impression that you could load the two graphs into the same session within different variable scopes, but in C++ I'm not sure how to do this.
So I've been doing essentially the following, which seems to work, but also seems a bit hacky and fragile to do manually like this, particularly handling the caret in the name for control edges. Is this a reasonable way to do this, and/or is there a pre-existing function in the C++ api that I can call instead that accomplishes the same thing?
Status status;
GraphDef graphDef1;
GraphDef graphDef2;
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized1.pb"), &graphDef1);
CHECK_STATUS(status, "reading graph1");
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized2.pb"), &graphDef2);
CHECK_STATUS(status, "reading graph2");

// Prepend a prefix to every node name and every input reference,
// preserving the leading '^' that marks control edges.
auto addPrefixToGraph = [](GraphDef& graphDef, const string& prefix) {
    for (int i = 0; i < graphDef.node_size(); ++i) {
        auto node = graphDef.mutable_node(i);
        string* name = node->mutable_name();
        *name = prefix + *name;
        int inputSize = node->input_size();
        for (int j = 0; j < inputSize; ++j) {
            string* inputName = node->mutable_input(j);
            if (inputName->size() > 0 && (*inputName)[0] == '^')
                *inputName = "^" + prefix + inputName->substr(1);
            else
                *inputName = prefix + *inputName;
        }
    }
};
addPrefixToGraph(graphDef1, "g1/");
addPrefixToGraph(graphDef2, "g2/");

status = session->Create(graphDef1);
CHECK_STATUS(status, "adding graph1 to session");
status = session->Extend(graphDef2);
CHECK_STATUS(status, "adding graph2 to session");
I am trying to come up with a "minimal" way of running a graph slam application using MRPT. The sensor data (LaserScan / Odometry) will be provided by a custom middleware similar to ROS. After reading the docs and source code (both for MRPT and the ROS bridge) extensively, I came up with the following snippet:
std::string config_file = "../../../laser_odometry.ini";
std::string rawlog_fname = "";
std::string fname_GT = "";

auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto edge_reg = mrpt::graphslam::deciders::CICPCriteriaERD<mrpt::graphs::CNetworkOfPoses2DInf>{};
auto optimizer = mrpt::graphslam::optimizers::CLevMarqGSO<mrpt::graphs::CNetworkOfPoses2DInf>{};

auto win3d = mrpt::gui::CDisplayWindow3D{"Slam", 800, 600};
auto win_observer = mrpt::graphslam::CWindowObserver{};
auto win_manager = mrpt::graphslam::CWindowManager{&win3d, &win_observer};

auto engine = mrpt::graphslam::CGraphSlamEngine<mrpt::graphs::CNetworkOfPoses2DInf>{
    config_file, rawlog_fname, fname_GT, &win_manager, &node_reg, &edge_reg, &optimizer};

for (size_t measurement_count = 0;;) {
    // grab laser scan from the network, then fill it (hardcoded values for now), e.g.:
    auto scan_ptr = mrpt::obs::CObservation2DRangeScan::Create();
    scan_ptr->timestamp = std::chrono::system_clock::now().time_since_epoch().count();
    scan_ptr->rightToLeft = true;
    scan_ptr->sensorLabel = "";
    scan_ptr->aperture = 3.14;  // rad (max-min)
    scan_ptr->maxRange = 3.0;   // m
    scan_ptr->sensorPose = mrpt::poses::CPose3D{};
    scan_ptr->resizeScan(30);
    for (int i = 0; i < 30; ++i) {
        scan_ptr->setScanRange(i, 0.5);
        scan_ptr->setScanRangeValidity(i, true);
    }

    { // Send LaserScan measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(scan_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // grab odometry from the network, then fill it (hardcoded values for now), e.g.:
    auto odometry_ptr = mrpt::obs::CObservationOdometry::Create();
    odometry_ptr->timestamp = std::chrono::system_clock::now().time_since_epoch().count();
    odometry_ptr->hasVelocities = false;
    odometry_ptr->odometry.x(0);
    odometry_ptr->odometry.y(0);
    odometry_ptr->odometry.phi(0);

    { // Send Odometry measurement to the slam engine
        auto obs_ptr = std::dynamic_pointer_cast<mrpt::obs::CObservation>(odometry_ptr);
        engine.execGraphSlamStep(obs_ptr, measurement_count);
        ++measurement_count;
    }

    // Get pose estimation from the engine
    auto pose = engine.getCurrentRobotPosEstimation();
}
Am I going in the right direction here? Did I miss something?
Hmm, at first look the snippet seems fine: you are providing odometry and the laser scan in two different steps and in Observation form.
Minor note:
auto node_reg = mrpt::graphslam::deciders::CICPCriteriaNRD{};
If you want to run with odometry + laser scans, use CFixedIntervalsNRD instead; it's much better tested and actually makes use of those measurements.
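That is, something along these lines (a sketch, using the same graph type as the rest of the question's snippet):
auto node_reg = mrpt::graphslam::deciders::CFixedIntervalsNRD<mrpt::graphs::CNetworkOfPoses2DInf>{};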
There is no minimal graphslam-engine example in MRPT at present, but here's the main method for running graph-slam with datasets:
https://github.com/MRPT/mrpt/blob/26ee0f2d3a9366c50faa5f78d0388476ae886808/libs/graphslam/include/mrpt/graphslam/apps_related/CGraphSlamHandler_impl.h#L395
template <class GRAPH_T>
void CGraphSlamHandler<GRAPH_T>::execute()
{
    using namespace mrpt::obs;
    ASSERTDEB_(m_engine);

    // Variables initialization
    mrpt::io::CFileGZInputStream rawlog_stream(m_rawlog_fname);
    CActionCollection::Ptr action;
    CSensoryFrame::Ptr observations;
    CObservation::Ptr observation;
    size_t curr_rawlog_entry;
    auto arch = mrpt::serialization::archiveFrom(rawlog_stream);

    // Read the dataset and pass the measurements to CGraphSlamEngine
    bool cont_exec = true;
    while (CRawlog::getActionObservationPairOrObservation(
               arch, action, observations, observation, curr_rawlog_entry) &&
           cont_exec)
    {
        // actual call to the graphSLAM execution method
        // Exit if user pressed C-c
        cont_exec = m_engine->_execGraphSlamStep(
            action, observations, observation, curr_rawlog_entry);
    }
    m_logger->logFmt(mrpt::system::LVL_WARN, "Finished graphslam execution.");
}
You basically grab the data and then continuously feed it to CGraphSlamEngine via either the execGraphSlamStep or _execGraphSlamStep method.
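In a middleware-driven setup like yours, that boils down to something like this (a sketch; next_observation_from_middleware() is a hypothetical stand-in for your network receive call):
size_t measurement_count = 0;
while (mrpt::obs::CObservation::Ptr obs_ptr = next_observation_from_middleware()) {
    engine.execGraphSlamStep(obs_ptr, measurement_count);
    ++measurement_count;
}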
Here's also the relevant snippet in the corresponding ROS wrapper, which processes measurements coming from ROS topics:
https://github.com/mrpt-ros-pkg/mrpt_slam/blob/8b32136e2a381b1759eb12458b4adba65e2335da/mrpt_graphslam_2d/include/mrpt_graphslam_2d/CGraphSlamHandler_ROS_impl.h#L719
template <class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::processObservation(
    mrpt::obs::CObservation::Ptr& observ)
{
    this->_process(observ);
}

template <class GRAPH_T>
void CGraphSlamHandler_ROS<GRAPH_T>::_process(
    mrpt::obs::CObservation::Ptr& observ)
{
    using namespace mrpt::utils;
    if (!this->m_engine->isPaused()) {
        this->m_engine->execGraphSlamStep(observ, m_measurement_cnt);
        m_measurement_cnt++;
    }
}
I created a DLL from MATLAB .m files using MATLAB Coder. In MATLAB, the code runs fine without any error and DLL creation was successful. The Visual Studio 2010 project compiles and runs fine. However, when the software calls any function from this DLL, I get an "unhandled exception" at one memory location and an "access violation reading location" at another.
I thought the created DLL depended on another DLL that is missing, so I used Dependency Walker to check the dependencies. However, Dependency Walker did not show any missing dependency.
The MATLAB function takes five inputs (four doubles and one 1-D array of doubles) and returns four double values. I thought the array I am passing to the MATLAB-generated code might be too small, so I resized it from 750 elements to 5000. I am still getting the same unhandled exception.
Any idea how to debug this unhandled exception? Is there anything else I need to check?
/* Function Declarations */
extern void mat3(double ValueF, double DesiredValue, double io, double del,
                 emxArray_real_T *a, double *status, double *slope_deg,
                 double *fval, double *SVal);
I am calling this function from the Visual Studio 2010 application as shown below.
double lValueF = 166.6;
double lDesiredValue = 42.00;
double io = 1.0;
double del = 4.0;

emxArray_real_T *lInputRoi;
// a = emxCreate_real_T(height, width);
lInputRoi = emxCreate_real_T(size_y, size_x);

// Converting 2D image to 1D array
if (!isHorz)
{
    int lRow = 0;
    int lCol = 0;
    for (int row = roi.y + odd_y_offset; row < roi.y + size_y; row = row + 2)
    {
        lCol = 0;
        for (int col = roi.x; col < roi.x + size_x; col++)
        {
            lInputRoi->data[lRow * size_x + lCol] = (real_T)img.at<uchar>(row, col);
            lCol++;
        }
        lRow++;
    }
}
else
{
    int lRow = 0;
    int lCol = 0;
    for (int col = roi.x + odd_x_offset; col < roi.x + size_x; col = col + 2)
    {
        lRow = 0;
        for (int row = roi.y; row < roi.y + size_y; row++)
        {
            lInputRoi->data[lCol * size_y + lRow] = (real_T)img.at<uchar>(row, col);
            lRow++;
        }
        lCol++;
        size_x = roi.height;
        size_y = roi.width;
    }
}

// initializing matlab generated function
mat3_initialize();

double lStatus = 0;
double lslope_deg = 0.0;
double lfreqval = 0.0;
double lsfrvalue = 0.0;
mat3(lValueF, lDesiredValue, io, del, lInputRoi, &lStatus, &lslope_deg, &lfreqval, &lsfrvalue);
Notes:
1) img is the input image data.
2) roi is a struct object that has two int members (x and y).
3) The Visual Studio 2010 application uses "MFC in a shared DLL".
4) I have seen a tutorial on how to integrate a MATLAB-generated DLL into Microsoft Visual Studio; however, the tutorial does not hit any error. http://www.mathworks.com/videos/integrate-code-into-visual-studio-77402.html
Thank you for any help in advance.
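A quick sanity check before the mat3 call is to dump the emxArray layout, since a mismatch between the array's recorded dimensions and whatever the generated code indexes shows up as exactly this kind of access violation. A sketch, assuming the usual Coder-generated emxArray_real_T fields (data, size, numDimensions, allocatedSize) and hypothetical header names:
#include <cstdio>
#include "mat3.h"         // generated by MATLAB Coder (assumed file name)
#include "mat3_emxAPI.h"  // emxCreate_real_T etc. (assumed file name)

// print an emxArray's layout right before handing it to the generated code
static void dumpEmxArray(const emxArray_real_T* a, const char* label)
{
    std::printf("%s: numDimensions=%d allocatedSize=%d dims=[", label,
                a->numDimensions, a->allocatedSize);
    for (int d = 0; d < a->numDimensions; ++d)
        std::printf(" %d", a->size[d]);
    std::printf(" ]\n");
}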
I am using a GPS receiver whose C++ program continuously prints GPS messages to the terminal, like this:
Latitude:13.3 Longitude:80.25
Latitude:13.4 Longitude:80.27
Latitude:13.5 Longitude:80.28
I want to take this data into my C++ program (a Qt application). Below is my full program code:
void QgsGpsPlotPluginGui::on_buttonBox_accepted()
{
    QString myPluginsDir = "usr/lib/qgis/plugins";
    QgsProviderRegistry::instance(myPluginsDir);
    QgsVectorLayer* mypLayer = new QgsVectorLayer("/home/mit/Documents/Dwl/GIS DataBase/india_placename.shp", "GPS", "ogr");
    QgsSingleSymbolRenderer* mypRenderer = new QgsSingleSymbolRenderer(mypLayer->geometryType());
    QList<QgsMapCanvasLayer> myLayerSet;
    mypLayer->setRenderer(mypRenderer);
    if (mypLayer->isValid())
    {
        qDebug("Layer is valid");
    }
    else
    {
        qDebug("Layer is NOT valid");
    }

    // Add the Vector Layer to the Layer Registry
    QgsMapLayerRegistry::instance()->addMapLayer(mypLayer, TRUE);
    // Add the Layer to the Layer Set
    myLayerSet.append(QgsMapCanvasLayer(mypLayer, TRUE));

    QgsMapCanvas* mypMapCanvas = new QgsMapCanvas(0, 0);
    mypMapCanvas->setExtent(mypLayer->extent());
    mypMapCanvas->enableAntiAliasing(true);
    mypMapCanvas->setCanvasColor(QColor(255, 255, 255));
    mypMapCanvas->freeze(false);

    QgsFeature* mFeature = new QgsFeature();
    QgsPoint* p = new QgsPoint();
    double latitude = 13.3;
    double longitude = 80.25;
    p->setX(latitude);
    p->setY(longitude);

    QgsGeometry* geom = QgsGeometry::fromPoint(*p);
    QGis::GeometryType geometryType = QGis::Point;
    QgsRubberBand* mrub = new QgsRubberBand(mypMapCanvas, geometryType);
    mrub->setToGeometry(geom, mypLayer);
    mrub->show();
}
In the above code I have manually entered the values for latitude and longitude, like this:
double latitude = 13.3;
double longitude = 80.25;
p->setX(latitude);
p->setY(longitude);
but I need to get these values from the terminal program. Both programs are written in C++, but they belong to different frameworks.
I assume that your library doesn't have an API you can use. Then one fairly straightforward way to integrate the two programs would be to use pipes.
You can quickly do something like
gps_program | qt_program
and now you get the coordinates via stdin.
The more complex way to set it up is using exec and fork: you create the pipe, then fork, and on one of the branches run the gps_program using exec. This you can do entirely in your own code without depending on bash or anything like it. You still have to parse the data coming from the pipe the same way.
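A minimal sketch of that pipe + fork + exec approach (POSIX only; "gps_program" is a placeholder for the actual executable, and the parsing assumes one "Latitude:... Longitude:..." pair per line):
#include <cstdio>
#include <sys/types.h>
#include <unistd.h>

int main() {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                    // child: becomes the GPS program
        dup2(fds[1], STDOUT_FILENO);   // its stdout now feeds the pipe
        close(fds[0]);
        close(fds[1]);
        execlp("gps_program", "gps_program", (char*)nullptr);
        perror("execlp");              // only reached if exec failed
        _exit(1);
    }

    close(fds[1]);                     // parent: read coordinates from the pipe
    FILE* in = fdopen(fds[0], "r");
    double lat = 0.0, lon = 0.0;
    while (std::fscanf(in, " Latitude:%lf Longitude:%lf", &lat, &lon) == 2)
        std::printf("got %f, %f\n", lat, lon);
    fclose(in);
    return 0;
}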
Just create a pipe:
#include <cstdio>
#include <iostream>

#define WWRITER 0

#if WWRITER
int main() {
    while (true) {
        std::cout << "Latitude:13.3 Longitude:80.25";
    }
    return 0;
}
#else
int main() {
    FILE* fp = popen("Debug/Writer", "r");
    if (fp == 0) perror(0);
    else {
        const std::size_t LatitudeLength = 9;
        const std::size_t LongitudeLength = 10;
        char latitude_name[LatitudeLength + 1];
        char longitude_name[LongitudeLength + 1];
        double latitude;
        double longitude;
        while (fscanf(fp, "%9s%lf%10s%lf",
                      latitude_name,
                      &latitude,
                      longitude_name,
                      &longitude) == 4)
        {
            std::cout << "Values: " << latitude << ", " << longitude << std::endl;
        }
        pclose(fp);
    }
    return 0;
}
#endif
Note: The example runs endlessly.
+1 to Sorin's answer, which makes this nice and easy by passing stdout to stdin :) but I am assuming you are running on Linux / Cygwin?
But if you have access to both programs' code, then a nicer solution is to use UDP sockets (or maybe TCP, but UDP is simpler) and pass the data between the programs that way. I am not sure how big or long-term your solution needs to be, but if you want to integrate the programs in a more long-term and maintainable way, this would be the better approach.
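Since the receiving side is already a Qt application, QUdpSocket makes this compact. A sketch (Qt 5 signal/slot syntax shown; adapt to the SIGNAL/SLOT macros for Qt 4; port 45454 is an arbitrary choice, and both sides are assumed to agree on the same text format):
#include <cstdio>
#include <QHostAddress>
#include <QUdpSocket>

// sender side, added to the GPS program: one datagram per fix,
// same text format as the current terminal output
void sendFix(QUdpSocket& sock, double lat, double lon)
{
    const QByteArray datagram =
        QString("Latitude:%1 Longitude:%2").arg(lat).arg(lon).toUtf8();
    sock.writeDatagram(datagram, QHostAddress::LocalHost, 45454);
}

// receiver side, in the Qt application: bind once, parse fixes as they arrive
class GpsReceiver : public QObject
{
    Q_OBJECT
public:
    GpsReceiver()
    {
        m_sock.bind(QHostAddress::LocalHost, 45454);
        connect(&m_sock, &QUdpSocket::readyRead, this, &GpsReceiver::readFixes);
    }

private slots:
    void readFixes()
    {
        while (m_sock.hasPendingDatagrams()) {
            QByteArray buf(int(m_sock.pendingDatagramSize()), 0);
            m_sock.readDatagram(buf.data(), buf.size());
            double lat = 0.0, lon = 0.0;
            if (std::sscanf(buf.constData(), "Latitude:%lf Longitude:%lf", &lat, &lon) == 2)
                qDebug("got fix: %f, %f", lat, lon);  // e.g. update the QgsPoint here
        }
    }

private:
    QUdpSocket m_sock;
};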