Compiler ignores my -I include paths for a new library version - C++

I think this question is not specific to OpenCV; it seems to be an include-path problem, so please read on even if you don't know the library.
An old version of a library (OpenCV) is installed in /usr/local on the remote machine where I'm working and where I don't have sudo access.
I've installed an updated version in my local environment /home/spm1428/local and I'm compiling with -I/home/spm1428/local/include/opencv and -I/home/spm1428/local/include/opencv2.
However, in a file that includes "opencv2/opencv.hpp" I get this error:
In file included from /usr/local/include/opencv2/opencv.hpp:77:0,
from /home/spm1428/CloudCache/Utilities/Utility.hpp:11,
from ../Descriptors/Descriptor.cpp:17:
/usr/local/include/opencv2/highgui/highgui.hpp:165:25: error: redeclaration of ‘IMREAD_UNCHANGED’
It's an error coming from the old version of the library installed in /usr/local, but I told the compiler to use the local version with -I!
So it seems that the compiler ignores my -I directives and instead gives priority to /usr/local/include.
Why does this happen?
In case you're wondering, the full compile command is:
g++ -DCC_DISABLE_CUDA -I/home/spm1428/CloudCache -I/home/spm1428/local/include/opencv -I/home/spm1428/local/include/opencv2 -I/usr/include/boost -I/home/spm1428/vlfeat -O3 -g -Wall -c -fopenmp -std=c++11 -c -o Descriptor.o ../Descriptors/Descriptor.cpp
This error happens both with #include <opencv2/core.hpp> and with #include "opencv2/core.hpp".
UPDATE (NEW ERROR):
Changing #include <opencv2/opencv.hpp> to #include "opencv2/core.hpp" and
#include "opencv2/imgproc/imgproc.hpp" solved the problem for some reason. However, now when I compile:
g++ -DCC_DISABLE_CUDA -I/home/spm1428/local/include -I/home/spm1428/CloudCache -I/home/spm1428/local/include/opencv -I/home/spm1428/local/include/opencv2 -I/usr/include/boost -I/home/spm1428/vlfeat -O3 -g -Wall -c -fopenmp -std=c++11 -c -o SIFTOpenCV.o ../Descriptors/SIFTOpenCV.cpp
I get this error:
../Descriptors/SIFTOpenCV.cpp:31:9: error: ‘class cv::Feature2D’ has no member named ‘detectAndCompute’
sift->detectAndCompute(img, cv::Mat(), pts, descriptors);
SIFTOpenCV.cpp includes SIFTOpenCV.hpp, which in turn includes "opencv2/xfeatures2d.hpp". I think this error is somehow related to the previous one.
The weirdest thing is that this compiles correctly on my local machine (where I have sudo access and installed it in /usr/local).
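For what it's worth, Feature2D::detectAndCompute only exists in the OpenCV 3.x headers, so this error is consistent with the old 2.4 headers in /usr/local still being picked up. Below is a minimal sketch of the intended 3.x call, assuming the xfeatures2d contrib module is installed (file and image names here are made up, not from the project):
// sift_check.cpp -- minimal sketch, not the project's actual code
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>
int main()
{
    cv::Mat img = cv::imread("image.png", cv::IMREAD_GRAYSCALE); // hypothetical input image
    cv::Ptr<cv::xfeatures2d::SIFT> sift = cv::xfeatures2d::SIFT::create();
    std::vector<cv::KeyPoint> pts;
    cv::Mat descriptors;
    // detectAndCompute is a member of cv::Feature2D only since OpenCV 3.0
    sift->detectAndCompute(img, cv::noArray(), pts, descriptors);
    return 0;
}
If this fails to compile with the same "no member named detectAndCompute" error, the old headers are still winning the include search.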
This is SIFTOpenCV.hpp:
#include "Descriptors/Descriptor.hpp"
#include <opencv2/xfeatures2d.hpp>
namespace cc{
class SIFTOpenCV : public Descriptor{
public:
SIFTOpenCV(int nFeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6);
void mapParams(std::vector<std::pair<std::string,std::string>> &params);
void ComputeDescriptors(cv::Mat &img, cv::Mat1f &descriptors);
private:
cv::Ptr<cv::xfeatures2d::SIFT> sift;
int nFeatures, nOctaveLayers;
double contrastThreshold, edgeThreshold, sigma;
};
}
And this is Descriptor.hpp:
#include "opencv2/core.hpp"
#include <string>
#include <map>
namespace cc{
class Descriptor{
public:
virtual void mapParams(std::vector<std::pair<std::string,std::string>> &params) = 0;
virtual void ComputeDescriptors(cv::Mat &img, cv::Mat1f &descriptors) = 0;
virtual void ComputeDescriptors(const std::string &fileName, const std::string &imgExt, cv::Mat1f &descriptor);
virtual void ComputeDescriptors(const std::string &dirPath, const std::string &imgExt, std::vector<cv::Mat1f> &descriptors);
void setResizeDim(const size_t resizeDim);
void setSamples (const size_t samples);
void setOMP(const bool omp);
virtual ~Descriptor();
private:
void ComputeDescriptorsRange(const std::vector<std::string> &files, std::vector<cv::Mat1f> &descriptors, const int start, const int finish, size_t errors);
size_t resizeDim = 0; //compute on the full-size image
int samples = 0;
bool omp = true;
};
}

I found the solution myself: running cpp -v showed the default include search path:
/usr/local/include
/usr/include
/home/spm1428/local/include
...
This means that -I/home/spm1428/local/include was ignored because that directory was already in the default search path, and priority was still given to the old version in /usr/local/include.
Installing OpenCV in a new local directory (one not already on the default search path) and adding it with -I gave priority to the new version and solved the problem.
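A quick way to confirm which headers the compiler actually resolves is to print the version macro from the headers themselves. This is only a small diagnostic sketch; CV_VERSION is defined in opencv2/core/version.hpp in both the 2.x and 3.x series:
// which_opencv.cpp -- minimal sketch for diagnosing the include search order
#include <opencv2/core/version.hpp>
#include <iostream>
int main()
{
    // Prints the version of whichever OpenCV headers the -I flags / default search order picked up.
    std::cout << "Compiled against OpenCV headers " << CV_VERSION << std::endl;
    return 0;
}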

Related

When using bazel to build a .so that calls a TensorFlow model from C++, the .so file doesn't work

I want to use the TensorFlow C++ API to call a model and predict answers.
First, I cloned the tensorflow repo:
git clone --recursive https://github.com/tensorflow/tensorflow
Then I wrote the C++ code below.
One part is a class that calls the TensorFlow API; its header file looks like this:
#ifndef _DEEPMODEL_H_
#define _DEEPMODEL_H_
#include <iostream>
#include <string>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/protobuf/meta_graph.pb.h"
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"
using namespace std;
using namespace tensorflow;
class DeepModel{
public:
DeepModel(const string graph_path, const string checkpoint_path);
virtual ~DeepModel();
bool onInit();
void unInit();
vector<float> predict(vector<vector<float>>& x, string input_name, string output_name);
private:
string graph_path;
string checkpoint_path;
MetaGraphDef graph_def;
Session* my_sess;
};
#endif
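For context, onInit presumably loads the MetaGraphDef and restores the checkpoint along the lines below. This is only a hedged sketch of the usual TF 1.x-era pattern using the member names from the header above, not the asker's actual deepmodel.cc:
// deepmodel.cc (sketch) -- assumes the TF 1.x C++ API
#include "deepmodel.h"
#include "tensorflow/core/platform/env.h"

bool DeepModel::onInit()
{
    // Load the exported MetaGraphDef.
    tensorflow::Status status = tensorflow::ReadBinaryProto(
        tensorflow::Env::Default(), graph_path, &graph_def);
    if (!status.ok()) return false;

    // Create a session and install the graph.
    status = tensorflow::NewSession(tensorflow::SessionOptions(), &my_sess);
    if (!status.ok()) return false;
    status = my_sess->Create(graph_def.graph_def());
    if (!status.ok()) return false;

    // Restore the variables from the checkpoint through the saver ops.
    tensorflow::Tensor ckpt_path(tensorflow::DT_STRING, tensorflow::TensorShape());
    ckpt_path.scalar<std::string>()() = checkpoint_path;
    status = my_sess->Run({{graph_def.saver_def().filename_tensor_name(), ckpt_path}},
                          {}, {graph_def.saver_def().restore_op_name()}, nullptr);
    return status.ok();
}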
After this, I wrote a simple wrapper. I want to compile it into a .so and use that .so without the TensorFlow source code in the future. My wrapper code is below:
#ifndef _MODEL_HELPER_H_
#define _MODEL_HELPER_H_
#include <vector>
#include <string>
using namespace std;
class ModelHelper{
public:
ModelHelper(const string graph_path, const string checkpoint_path);
virtual ~ModelHelper();
vector<float> predict(vector<vector<float> >& x, string input_name, string output_name);
private:
string graph_path;
string checkpoint_path;
};
#endif
I have written code to test the code above, and it works well. Then I wanted to compile the .so using bazel.
My BUILD file is below:
load("//tensorflow:tensorflow.bzl", "tf_cc_binary")
tf_cc_binary(
name = "my_helper.so",
srcs = ["model_helper.cc", "model_helper.h", "deepmodel.cc", "deepmodel.h"],
linkshared = 1,
deps = [
"//tensorflow/cc:cc_ops",
"//tensorflow/cc:client_session",
"//tensorflow/core:tensorflow"
],
)
Then I renamed the generated my_helper.so to libmy_helper.so and wrote C++ code to test the .so file. I compile that test code with this command:
g++ -std=c++11 test_so.cpp -L./ -lmy_helper -I./ -o my_helper
Then I meet the error:
.//libmy_helper.so: undefined reference to `stream_executor::cuda::ScopedActivateExecutorContext::~ScopedActivateExecutorContext()'
.//libmy_helper.so: undefined reference to `stream_executor::cuda::ScopedActivateExecutorContext::ScopedActivateExecutorContext(stream_executor::StreamExecutor*)'
.//libmy_helper.so: undefined reference to `tensorflow::DeviceName<Eigen::GpuDevice>::value[abi:cxx11]'
collect2: error: ld returned 1 exit status
I really don't know why. Can't I use the .so on its own?
You should link against libtensorflow_framework.so in your makefile, like the command below:
g++ -std=c++11 test_so.cpp -L./ -lmy_helper -ltensorflow_framework -I./ -o my_helper
I guess bazel leaves some of the TensorFlow code out of the .so when it compiles my code.

Linking g++-compiled code against libraries created by clang++

In my Homebrew installation my libraries are compiled with clang, whereas, for performance reasons, I would like to compile my scientific code with gcc. To understand this problem better, I have created a minimal test:
// FILE print.cxx
#include <string>
#include <iostream>
void print_message(const std::string& message)
{
std::cout << message << std::endl;
}
// FILE test.cxx
#include <string>
void print_message(const std::string&);
int main()
{
std::string message = "Hello World!";
print_message(message);
return 0;
}
I compile this code with:
// SCRIPT compile.sh
clang++ -stdlib=libstdc++ -c print.cxx
g++ test.cxx print.o
The example above works, but is it possible to make it work with libraries that are compiled without the -stdlib=libstdc++ flag and instead use libc++?

Code with XCppRefl does not work

I am trying to use the XCppRefl library (http://www.extreme.indiana.edu/reflcpp/) to achieve reflection in C++. I could successfully install this library on Linux and run the tests shipped with the library's source code.
Here is the code that I have written:
#include <iostream>
using namespace std;
#include <reflcpp/ClassType_tmpl.hpp>
#include <reflcpp/BoundClassType_tmpl.hpp>
#include <reflcpp/Exceptions.hpp>
using namespace reflcpp;
#include "Complex.h"
int main()
{
//ClassType ct = ClassType::getClass( string("Complex") );
////PtrHolder_smptr_t obj = ct.createInstance();
//assert(ct.name() == "B");
Complex x;
int ret;
Complex a;
ClassType c = ClassType::getClass( string("Complex") );
//cout<<"name :: "<<c.name()<<endl;
}
It seems to compile just fine:
$ g++ -g -I /usr/local/include/reflcpp-0.2/ -L /usr/local/include/reflcpp-0.2/ -lreflcpp main.cpp
However, when I run the executable (a.out), I get a core dump:
a.out: Type.cpp:87: static const reflcpp::Type_body* reflcpp::Type_body::getType(const std::string&): Assertion `s_class_name_map' failed.
Aborted (core dumped)
Has anyone used this library before? Please help.
You have to link your main.o against libreflcpp.a. After compiling, use this:
g++ -p -pg -o"project_name" ./A.o ./A_reflection.o ./main.o /usr/local/lib/libreflcpp.a

C++ program using the boost library

I wrote a test.cpp:
#include <iostream>
#include <stack>
#include <boost/lexical_cast.hpp>
#include <boost/config/warning_disable.hpp>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix.hpp>
using namespace std;
namespace phoenix = boost::phoenix;
namespace qi = boost::spirit::qi;
namespace ascii = boost::spirit::ascii;
struct calculator
{
bool interpret(const string& s);
void do_neg();
void do_add();
void do_sub();
void do_mul();
void do_div();
void do_number(const char* first, const char* last);
int val() const;
private:
stack<int> values_;
int *pn1_, n2_;
void pop_1();
void pop_2();
};
......................
....................
But when I use g++ test.cpp -o test, I get errors like boost/lexical_cast.hpp: No such file or directory, even though I have copied all of the boost files (downloaded from boost.org) into the test.cpp folder. How do I make g++ aware of the header paths? Thanks.
I used g++ test.cpp -o test.
Using " " includes is not possible; I have a lot of header dependencies.
You have to add the include directory to g++'s command. Reading from the man page (which is your best friend for this sort of thing):
Add the directory dir to the list of directories to be searched for header files. Directories named by -I are searched before the standard system include directories. If the directory dir is a standard system include directory, the option is ignored to ensure that the default search order for system directories and the special treatment of system headers are not defeated. If dir begins with "=", then the "=" will be replaced by the sysroot prefix; see --sysroot and -isysroot.
For you, the command should look like this:
g++ -I path/to/boost test.cpp

undefined reference to a class ERROR

I am working in C++ on Ubuntu.
I have:
libr.hpp
#ifndef LIBR
#define LIBR
#include <string>
using namespace std;
class name
{
public:
name();
~name();
std::string my_name;
std::string method (std::string s);
};
#endif
and
libr.cpp
#include <iostream>
#include <string>
#include <stdlib.h>
#include "libr.hpp"
using namespace std;
name::name()
{
}
std::string name::method(std::string s)
{
return ("YOUR NAME IS: "+s);
}
From these two I've created a libr.a.
In test.cpp:
#include <iostream>
#include <string>
#include <stdlib.h>
#include "libr.hpp"
using namespace std;
int main()
{
name *n = new name();
n->my_name="jack";
cout<<n->method(n->my_name)<<endl;
return 0;
}
I compile with g++ and libr.a. I get an error: "undefined reference to name::name()". Why?
I would like to mention that I've added the .a to qmake in Qt Creator. When I compile, I get the error. How can I solve it?
This is a linker error, not a compiler error. It means that you have called the constructor but have not defined it. Your allocation name *n = new name(); calls the constructor.
Since you did define the constructor in libr.cpp, this means that that compilation unit is not making its way into your executable. You mentioned that you are compiling with libr.a, but when you compile libr.cpp the result is a .o file, not a .a file.
You are not linking libr.o into your executable.
What are the steps you're using to compile your "project"?
I performed the following steps and managed to build it without warnings or errors:
g++ -Wall -c libr.cpp
ar -cvq libr.a libr.o
g++ -Wall -o libr main.cpp libr.a
One last thing: if I change the order of the last command, like
g++ -Wall -o libr libr.a main.cpp
I get the following error:
Undefined                       first referenced
symbol                          in file
name::name()                    /tmp/cc4Ro1ZM.o
name::method(std::basic_string<char, std::char_traits<char>, std::allocator<char> >)   /tmp/cc4Ro1ZM.o
ld: fatal: Symbol referencing errors. No output written to libr
collect2: ld returned 1 exit status
In fact, you needn't define the destructor yourself, because the default destructor will be used when the object's lifetime ends.
And in VS2008 it's all right!
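A minimal sketch of what this presumably means: if the destructor is not declared at all, the compiler generates one and no separate definition is needed. This is an assumed variant of libr.hpp, not the original code:
// libr.hpp (variant sketch, assumption: the explicit ~name() declaration is dropped)
#ifndef LIBR
#define LIBR
#include <string>
class name
{
public:
    name();
    // no ~name() declared: the implicitly-defined destructor is used,
    // so there is nothing extra to define in libr.cpp
    std::string my_name;
    std::string method(std::string s);
};
#endif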