I wanted to write a camera calibration for a 140° fisheye lens.
Since OpenCV's standard calibration does not work with such lenses, I found that there is a cv::fisheye module within the calib3d module.
But every time I try to compile my code, g++ reports:
error: ‘cv::fisheye’ has not been declared
The problem is that the OpenCV 2.4.11 documentation does contain these methods and the additional namespace.
I have the following includes in my C++ file:
//OPENCV Stuff
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
The call to the function looks like this:
double rmsL = cv::fisheye::calibrate(objectPoints, imagePointsLeft, imagesize, cameraMatrices[LEFT], distCoeffs[LEFT], rvecs, tvecs);
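For reference, here is a sketch of the container types that cv::fisheye::calibrate expects, mirroring the names in the call above (the image size and the LEFT index are placeholders, not the OP's actual values):
const int LEFT = 0;                                     // assumed camera index
// one vector of 3D board corners and one vector of detected 2D corners per view
std::vector<std::vector<cv::Point3f> > objectPoints;
std::vector<std::vector<cv::Point2f> > imagePointsLeft;
cv::Size imagesize(1280, 960);                          // assumed sensor resolution
cv::Mat cameraMatrices[2], distCoeffs[2];               // 3x3 K and 4-element fisheye D per camera
std::vector<cv::Mat> rvecs, tvecs;                      // per-view rotation and translation vectors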
I am compiling with the include and library flags from pkg-config:
OPENCV = `pkg-config opencv --cflags --libs`
In fact, the normal calibration functions work properly, as does all other OpenCV-related code.
Is anyone out there able to help me with this problem?
It would be quite nice to use the fisheye calibration because of the wide field of view and its benefits for calculating the disparity map.
Cheers hGen
Do you use these compile options?
-I/pathto/opencv/include -L/pathto/library -lopencv_core
error: ‘cv::fisheye’ has not been declared is a compilation error, not a linker error. This means your compiler cannot find the declaration of cv::fisheye.
Either give an explicit path in the #include:
#include "path/to/opencv2/.....h"
or
Provide the include path with the -I switch:
g++ -I<path to opencv2 headers> *.cpp -L<path to opencv2 libs> -lopencv_core -lopencv_calib3d -o <target-name>
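Since the question already uses pkg-config, the simplest sanity check is to let pkg-config supply both the include and the library flags in a single command; a sketch (the source file name is assumed):
g++ fisheye_calib.cpp `pkg-config opencv --cflags --libs` -o fisheye_calib
If cv::fisheye is still reported as undeclared with that, the headers pkg-config points to are most likely from an OpenCV build older than the 2.4.11 documentation being read.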
I am making a simple C++ program with the OpenCV library included. The Eclipse IDE recognises OpenCV commands and library locations, but when I try to build the project, the compiler reports an error in opencv.hpp or core.hpp: the file references an "opencv2/core.hpp" path, which does not exist in the OpenCV folder. I figured out that the problem is linked to the way core.hpp is referenced, but the library files are read-only.
From what I saw in the opencv.hpp file, this relative "opencv2/[module].hpp" reference is used not only for core, but for all other modules as well. In fact, there is no opencv2 folder inside the one where OpenCV is installed at all.
I've tried reinstalling and remaking OpenCV with different make arguments, using a different IDE, and adding direct search folders in Eclipse. The problem apparently lies in the files themselves, or in the way the library gets installed on the system. The problem persists on both my main Ubuntu machine and the Armbian Orange Pi.
I get this error when trying to include any OpenCV header that contains
#include "opencv2/[opencv module].hpp"
in it.
As a result, compilation is terminated with the error message "/usr/local/include/opencv4/opencv2/opencv.hpp:52:28: fatal error: opencv2/core.hpp: No such file or directory".
Edit 1: the GCC C++ compiler options are -Iusr/local/include/opencv4/opencv2 -O3 -Wall -c -fmessage-length=0 and the linker's options are -L/usr/local/lib.
The code is a simple displayImage:
#include <opencv4/opencv2/opencv.hpp>
#include <opencv4/opencv2/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main( int argc, char** argv )
{
Mat image;
image = imread( argv[1], 1 );
namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
imshow( "Display Image", image );
waitKey(0);
return 0;
}
Edit 2: $ pkg-config --libs opencv does not see OpenCV as installed on the system, although I've made sure to run make install and ldconfig on the path. This may be a sign of a faulty installation, but it is just a side note, not entirely related to the main problem. I have tried reinstalling to different folders, but this persists as well as the main problem.
Apparently, @sgarizvi's comment was the answer. I just needed to set the include path to -I/usr/local/include/opencv4 and it worked. After that, the error was fixed.
I am replying to my own question to close the case, as I cannot upvote/verify a comment.
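For anyone hitting the same error, the working setup boils down to pointing -I at the opencv4 directory rather than at opencv4/opencv2, while keeping the standard #include <opencv2/opencv.hpp> style of includes; a sketch with assumed file and library names:
g++ displayImage.cpp -I/usr/local/include/opencv4 -L/usr/local/lib -lopencv_core -lopencv_imgcodecs -lopencv_highgui -o displayImage
(With OpenCV 4, the old CV_WINDOW_AUTOSIZE constant is also better written as cv::WINDOW_AUTOSIZE.)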
In your case, since your include path is /usr/local/include/opencv4/opencv2, replace the first three lines
#include <opencv4/opencv2/opencv.hpp>
#include <opencv4/opencv2/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
by
#include <opencv.hpp>
#include <imgproc.hpp>
#include <highgui.hpp>
I compiled the library for the C++ API of TensorFlow Lite (r1.97) using the script ${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/build_rpi_lib.sh, following the steps suggested on this official page (Native Compiling, after downloading the necessary libraries), where ${TENSORFLOW_ROOT} is the root folder into which I cloned the repository.
I am trying to compile this simple test.cpp program:
#include <memory>
#include "tensorflow/lite/interpreter.h"
int main(void)
{
std::unique_ptr<tflite::Interpreter> interpreter(new tflite::Interpreter);
}
using the command:
gcc-6 test.cpp -I${TENSORFLOW_ROOT} -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/eigen -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/protobuf/src -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads -L${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/gen/rpi_armv7l/lib -lstdc++ -ldl -ltensorflow-lite
The list of includes was suggested on the Integrating TensorFlow libraries page (specifically in the iOS section). Compilation fails with the following error related to the inclusion of Eigen:
${TENSORFLOW_ROOT}/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory
#include "unsupported/Eigen/CXX11/Tensor"
I found several links where an apparently similar problem is discussed (such as this one), but the proposed solutions involve using references to the TensorFlow python package which is something that is not possible in my case (and it feels quite patchy - I am not considering using python for this project).
I also tried using a different include path to Eigen (e.g. ${TENSORFLOW_ROOT}/third_party/eigen3):
gcc-6 test.cpp -I${TENSORFLOW_ROOT} -I${TENSORFLOW_ROOT}/third_party/eigen3 -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/protobuf/src -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads -L${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/gen/rpi_armv7l/lib -lstdc++ -ldl -ltensorflow-lite
and also this causes Eigen related compilation errors of this sort:
...
${TENSORFLOW_ROOT}/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: error: #include nested too deeply
#include "unsupported/Eigen/CXX11/Tensor"
...
${TENSORFLOW_ROOT}/third_party/eigen3/Eigen/Core:1:22: error: #include nested too deeply
#include "Eigen/Core"
...
Any suggestions on how to solve this issue? What is the right set of include paths?
Turns out I was including the wrong folder. Instead of ${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/eigen or ${TENSORFLOW_ROOT}/third_party/eigen3, the right one is ${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/downloads/eigen.
I am still puzzled by the number of eigen folders inside the repository:
find . -name "eigen*" -type d
./third_party/eigen3
./tensorflow/lite/tools/make/downloads/eigen
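For completeness, the command from the question with only the Eigen include path swapped to that folder looks like this (the remaining downloads paths may also need to move from tensorflow/contrib/makefile/downloads to tensorflow/lite/tools/make/downloads, depending on the checkout):
gcc-6 test.cpp -I${TENSORFLOW_ROOT} -I${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/downloads/eigen -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/protobuf/src -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads -L${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/gen/rpi_armv7l/lib -lstdc++ -ldl -ltensorflow-lite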
First, I am new to C++ and dlib, but I have successfully built the examples and started working on my own project. Things had been progressing smoothly until I tried to save a JPEG. Attempting to compile code using dlib::save_jpeg throws a linker error, and I cannot track down the solution. I have tried adding #define DLIB_JPEG_SUPPORT above and below my #includes, but no luck. I am using Xcode and used cmake -G "Xcode" .. when I compiled the examples. Relevant code is below. Since I am on a Mac, I have added header and library search paths for X11 (for the dlib GUI), OpenCV, and dlib. I have libjpeg.dylib and linked it to my project, both with and without #define DLIB_JPEG_SUPPORT in main.cpp. Is there some other build setting I need to specify? Thank you in advance for your help.
Finally, I have seen other questions and pages about dlib and libjpeg issues, but no luck yet. And yes, I do have source.cpp included in the project.
// the standard stuff
#include <string>
#include <iostream>
#include <unistd.h>
// opencv mat object
#include <opencv2/opencv.hpp>
// dlib
#include <dlib/opencv.h>
#include <dlib/image_io.h>
#include <dlib/gui_widgets.h>
#include <dlib/image_transforms.h>
int main(int argc, const char * argv[]) {
// retrieving images from a TCP connection
// (rawImage and image_id are filled elsewhere; declared here as placeholders so the snippet compiles)
std::vector<uchar> rawImage;
int image_id = 0;
// decode data stream
cv::Mat img = cv::imdecode(rawImage, CV_LOAD_IMAGE_COLOR);
// perform image processing
dlib::cv_image<dlib::bgr_pixel> d_image(img);
// finally save the result to jpg
std::string fname = argv[1] + std::to_string(image_id) + ".jpg";
dlib::save_jpeg(d_image, fname); // <- line that won't compile
return 0;
}
After quite a bit of struggling and side-by-side comparisons, I finally found the issue. In Xcode, go to Build Settings and modify Other Linker Flags, Runpath Search Paths, and Other C++ Flags to match the compiled and working face_ex example. I wholesale copied all of those flags, added the missing libjpeg.dylib, and was able to get things running. It should look something like this for the C++ flags. Hope this helps the next person.
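For readers not using Xcode, the same fix expressed as a plain command line looks roughly like this (the dlib root, file names, and the use of pkg-config for the OpenCV flags are assumptions; the important parts are compiling dlib/all/source.cpp together with your own code, defining DLIB_JPEG_SUPPORT for both, and linking against libjpeg, plus the XQuartz X11 flags if the dlib GUI widgets are used):
g++ -std=c++11 main.cpp /path/to/dlib/dlib/all/source.cpp -I/path/to/dlib -DDLIB_JPEG_SUPPORT `pkg-config opencv --cflags --libs` -I/opt/X11/include -L/opt/X11/lib -lX11 -ljpeg -o my_app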
I cannot do matrix multiplication with Armadillo. I don't know if there are more features I can't use. So far, I've only been using vectors and the dot product with no problem.
Basically:
#include <iostream>
#include <armadillo>
using namespace std;
using namespace arma;
int main(){
//this works
vec v = randu<vec>(10);
cout<<dot(v,v)<<endl;
int n =5;
//this doesn't work
mat M = randu<mat>(n,n); // program compiles but stops running when reaches here
cout<<M*M<<endl;
return 0;
}
I am using the newest versions of Code::Blocks and Armadillo. The OS is Windows 7. I've included the LAPACK and BLAS libraries in the compiler's linker settings, and blas_win64_MT and lapack_win64_MT are both in PATH. I've also included the Armadillo folder in the search directories. In config.hpp (in the Armadillo folder), #define ARMA_USE_LAPACK and #define ARMA_USE_BLAS are uncommented, and #define ARMA_USE_WRAPPER is commented out. I have also tried adding -lapack -lblas to Build -> Project options -> Compiler -> Other options and to Build -> Project options -> Linker settings -> Other options, but had no success. The same thing happened when I tried adding -larmadillo with #define ARMA_USE_WRAPPER uncommented.
What am I missing?
Your program is of course perfectly fine as Armadillo is a well-designed and delivered library.
What may not be right is your installation, or local setup. Only you can figure that out.
On my box, on the command line, I only need to link with libarmadillo, which itself has linkage to LAPACK and BLAS:
edd@max:/tmp$ g++ -o arma5by5 arma5by5.cpp -larmadillo
edd@max:/tmp$ ./arma5by5
4.06892
1.5043 1.3996 0.6353 0.8246 1.4694
1.6543 1.6822 0.4338 0.6739 1.5782
1.3145 1.2759 0.3825 0.4967 1.2959
1.4222 1.4584 1.0028 1.3742 1.3593
1.6126 1.7886 0.4599 0.8348 1.5648
edd@max:/tmp$
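On Windows with Code::Blocks the same idea applies: the project has to link against the BLAS and LAPACK libraries that ship with Armadillo, and the matching DLLs (blas_win64_MT.dll and lapack_win64_MT.dll, the ones the question already mentions) have to be findable when the program runs, for example next to the produced .exe. A hedged sketch of the relevant Code::Blocks settings, assuming Armadillo was unpacked to C:\armadillo (examples\lib_win64 is where a typical Armadillo download keeps the prebuilt 64-bit libraries):
Project -> Build options -> Linker settings -> Link libraries:
  C:\armadillo\examples\lib_win64\blas_win64_MT.lib
  C:\armadillo\examples\lib_win64\lapack_win64_MT.lib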
So I am really new to OpenCV and all the image recognition stuff. I use MonoDevelop and I installed OpenCV using apt-get, and I included these files:
include "opencv2/highgui/highgui.hpp"
include "opencv2/imgproc/imgproc.hpp"
include <opencv2/opencv.hpp>
include <opencv2/imgproc/imgproc.hpp>
include<iostream>
include<vector>
include<algorithm>
include <X11/Xlib.h>
include <X11/Xutil.h>
but the function convexityDefects() shows up as undefined.
EDIT
So the problem is that the compiler says convexityDefects was not declared in this scope.
Here is the full code that I can't get to work -> Code
Could you please point me in the right direction?
Thank you.
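For reference, here is a minimal, self-contained sketch of how convexityDefects() is normally called; the contour is a made-up L-shape rather than the OP's data, and it also confirms that the function lives in the imgproc module:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // a simple closed contour with one concave corner (an "L" shape)
    std::vector<cv::Point> contour = {
        {0, 0}, {100, 0}, {100, 40}, {40, 40}, {40, 100}, {0, 100}
    };
    // convexityDefects needs the hull expressed as indices into the contour
    std::vector<int> hullIndices;
    cv::convexHull(contour, hullIndices, false, false);
    std::vector<cv::Vec4i> defects;   // start idx, end idx, farthest idx, fixed-point depth
    cv::convexityDefects(contour, hullIndices, defects);
    for (const cv::Vec4i& d : defects)
        std::cout << "defect depth: " << d[3] / 256.0 << std::endl;
    return 0;
}
(Compile with something like g++ -std=c++11 defects.cpp `pkg-config opencv --cflags --libs`.)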
So I found the answer I was looking for.
The libraries I used to compile my project were in the wrong location,
so I used pkg-config --libs --cflags opencv to find exactly where my libs are.
After that I added those libraries in Code::Blocks and everything magically started to work.