I am trying to set up OpenCV in Eclipse Luna. I have written a sample application as follows:
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace cv;
int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );
    if( argc != 2 || !image.data )
    {
        printf( "No image data \n" );
        return -1;
    }
    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", image );
    waitKey(0);
    return 0;
}
In my project properties I have added /usr/local/include/opencv under Project -> Properties -> C/C++ Build -> Settings -> Tool Settings -> GCC C++ Compiler -> Includes -> Include Paths,
and /usr/local/lib under Project -> Properties -> C/C++ Build -> Settings -> Tool Settings -> GCC C++ Linker -> Libraries -> Library Search Path.
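For reference, those two settings map roughly onto the following compiler flags; they provide the header and library search paths, but no -l entries yet (the file names here are just placeholders):

g++ -I/usr/local/include/opencv -L/usr/local/lib DisplayImage.cpp -o DisplayImage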
The output of the command pkg-config --cflags opencv is -I/usr/local/include/opencv -I/usr/local/include
and the output of pkg-config --libs opencv is
-L/usr/local/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_viz -lopencv_adas -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_datasets -lopencv_face -lopencv_latentsvm -lopencv_objdetect -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_surface_matching -lopencv_text -lopencv_tracking -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_ml -lopencv_flann -lopencv_xobjdetect -lopencv_xphoto -lopencv_highgui -lopencv_videoio -lopencv_imgcodecs -lopencv_photo -lopencv_imgproc -lopencv_core -lopencv_hal
When I tried building my project, I got the following errors:
‘imread’ was not declared in this scope
‘imshow’ was not declared in this scope
‘namedWindow’ was not declared in this scope
‘waitKey’ was not declared in this scope
Function 'imread' could not be resolved
Function 'imshow' could not be resolved
Function 'namedWindow' could not be resolved
Function 'waitKey' could not be resolved
Can anyone help me fix the problem and explain what it is that I am missing?
Try to change:
#include <cv.h>
#include <highgui.h>
To this:
#include <opencv2/opencv.hpp>
You also need to link the Libraries (GCC C++ Linker » Libraries):
opencv_core
opencv_imgcodecs
opencv_highgui
You didn't say which version you are using, but since you have -lopencv_imgcodecs, you are probably using OpenCV 3. If you prefer, follow the instructions here. Also change CV_WINDOW_AUTOSIZE to WINDOW_AUTOSIZE.
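Putting those changes together, a minimal OpenCV 3 version of the sample would look something like this (a sketch, not tested in Eclipse; IMREAD_COLOR and WINDOW_AUTOSIZE are the OpenCV 3 names for the flags used above):

#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main( int argc, char** argv )
{
    if( argc != 2 )
    {
        std::printf( "Usage: DisplayImage <image file>\n" );
        return -1;
    }
    // IMREAD_COLOR replaces the old numeric flag 1
    Mat image = imread( argv[1], IMREAD_COLOR );
    if( !image.data )
    {
        std::printf( "No image data\n" );
        return -1;
    }
    namedWindow( "Display Image", WINDOW_AUTOSIZE );
    imshow( "Display Image", image );
    waitKey(0);
    return 0;
}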
Related
I am trying to compile a program for the Raspberry Pi, but when I run the build in Geany I get this error:
g++ $(pkg-config opencv4 --cflags --libs) -o g++ $(pkg-config raspicam --cflags --libs) -o camera_2 camera_2.cpp (in directory: /home/pi/Desktop)
/usr/bin/ld: /tmp/ccTDUfOT.o: undefined reference to symbol '_ZN2cv6imshowERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKNS_11_InputArrayE'
/usr/bin/ld: //usr/local/lib/libopencv_highgui.so.405: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
Compilation failed.
The camera_2.cpp file looks like this:
#include <opencv2/opencv.hpp>
#include <raspicam_cv.h>
#include <iostream>
using namespace std;
using namespace cv;
using namespace raspicam;
Mat frame;
void Setup ( int argc,char **argv, RaspiCam_Cv &Camera )
{
    Camera.set ( CAP_PROP_FRAME_WIDTH, ( "-w",argc,argv,400 ) );
    Camera.set ( CAP_PROP_FRAME_HEIGHT, ( "-h",argc,argv,240 ) );
    Camera.set ( CAP_PROP_BRIGHTNESS, ( "-br",argc,argv,50 ) );
    Camera.set ( CAP_PROP_CONTRAST ,( "-co",argc,argv,50 ) );
    Camera.set ( CAP_PROP_SATURATION, ( "-sa",argc,argv,50 ) );
    Camera.set ( CAP_PROP_GAIN, ( "-g",argc,argv ,50 ) );
    Camera.set ( CAP_PROP_FPS, ( "-fps",argc,argv,100));
}
int main(int argc,char **argv)
{
    RaspiCam_Cv Camera;
    Setup(argc, argv, Camera);
    cout<<"Connecting to camera"<<endl;
    if (!Camera.open())
    {
        cout<<"Failed to Connect"<<endl;
        return -1;
    }
    cout<<"Camera Id = "<<Camera.getId()<<endl;
    Camera.grab();
    Camera.retrieve(frame);
    imshow("frame", frame);
    waitKey();
    return 0;
}
So far I have figured out that when I remove
Mat frame;
the error does not appear.
The pkg-config file looks like this:
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir_old=${prefix}/include/opencv4/opencv2
includedir_new=${prefix}/include/opencv4
Name: OpenCV
Description: Open Source Computer Vision Library
Version: 4.5.5
Libs: -L${exec_prefix}/lib -lopencv_calib3d -lopencv_core -lopencv_dnn -lopencv_features2d -lopencv_flann -lopencv_gapi -lopencv_highgui -lopencv_imgcodecs -lopencv_imgproc -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_video -lopencv_videoio
Libs.private: -ldl -lm -lpthread -lrt
Cflags: -I${includedir_old} -I${includedir_new}
The command in Geany looks like this:
g++ $(pkg-config opencv4 --cflags --libs) -o g++ $(pkg-config raspicam --cflags --libs) -o %e %f
Do you have any idea what is wrong and what I do have to change?
Thank you
The argument -o g++ is very weird because it will make an output file named g++, which is confusing because that's also the name of your compiler. You probably want to remove that since you already have a -o argument.
Secondly, the order of the different objects/libraries that are getting linked together often matters when you are linking a program. Try putting the calls to pkg-config at the end of the command.
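A corrected build command might therefore look something like this (a sketch, assuming Geany's usual %f / %e placeholders for the source file and the executable name; the pkg-config expansions go after the source file so the linker sees them last):

g++ %f -o %e $(pkg-config --cflags --libs opencv4) $(pkg-config --cflags --libs raspicam)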
I have solved the error.
I have added two libraries to the command:
g++ $(pkg-config opencv4 --cflags --libs) $(pkg-config raspicam --cflags --libs) -o %e %f -lopencv_highgui -lopencv_core
I still wonder why the libraries from the config file do not work on their own.
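One likely reason, following the point about ordering above: the GNU linker only pulls symbols out of a library that are still unresolved when the library appears on the command line, so libraries expanded before the source file are effectively ignored. A minimal illustration (main.o is a hypothetical object that calls into opencv_core):

g++ -lopencv_core main.o -o app   # fails: the library is scanned before main.o needs it
g++ main.o -lopencv_core -o app   # works: main.o's unresolved symbols are seen first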
I am using OpenCV 3.2.0 on 64-bit Ubuntu and have written the following code, which I call WebImageIO.cpp:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "cv.h"
#include "highgui.h"
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>
#include <math.h>
#include <windows.h>
#include "Generic.h"
#include "Palette.h"
#include "CWebImageIO.h"
using namespace cv;
ERROR_NUMBER CWebImageIO::WriteColourImageToPNGFile(BytePlane bytePlane, BYTE *bpOutputImage,
TCHAR *outputFileName)
{
    int iNumBands=3;
    int row, col, inputInc, inc;
    BytePlane bpBufferPlane;
    // Make real colour output plane
    bpBufferPlane=MakeBytePlane(bytePlane.iRows, iNumBands*bytePlane.iColumns, NULL);
    if (!(bpBufferPlane.Data))
        return ERROR_MEMORY_ALLOCATION;
    // Fill buffer plane with test values
    cv::Mat outputImage=cv::Mat(bytePlane.iRows, bytePlane.iColumns, CV_8UC3);
    if (!(outputImage.data))
    {
        FreeBytePlane(bpBufferPlane);
        return ERROR_MEMORY_ALLOCATION;
    }
    memcpy((void *)(outputImage.data), (void *)(bpOutputImage),
        bytePlane.iRows*bytePlane.iColumns*3);
    if (!(imwrite((char *)outputFileName, outputImage)))
    {
        free(bpOutputImage);
        return ErrorWritingFile(outputFileName);
    }
    // Cleanup
    FreeBytePlane(bpBufferPlane);
    return ERROR_NONE;
}
I then try to build it with the following command.
g++ `pkg-config opencv --libs` -o dist/Debug/GNU-Linux/progName build/Debug/GNU-Linux/_ext/73c876dd/WebImageIO.o build/Debug/GNU-Linux/main.o /usr/lib/x86_64-linux-gnu/libopencv_calib3d.so -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_ocl -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videostab -lpng
I get the following error message.
/home/peter/NetBeansProjects/ApplyModelToSet/../../DraculaFiles/CWebImageIO.cpp:247: undefined reference to `cv::imwrite(cv::String const&, cv::_InputArray const&, std::vector<int, std::allocator<int> > const&)'
I have read that cv::imread (at least) uses the imgcodecs library, but
sudo find / -name '*imgcodecs*.so' -print
returns no results, even after installing OpenCV 3.2 on my system.
EDIT:
pkg-config opencv --libs
returns
/usr/lib/x86_64-linux-gnu/libopencv_calib3d.so -lopencv_calib3d /usr/lib/x86_64-linux-gnu/libopencv_contrib.so -lopencv_contrib /usr/lib/x86_64-linux-gnu/libopencv_core.so -lopencv_core /usr/lib/x86_64-linux-gnu/libopencv_features2d.so -lopencv_features2d /usr/lib/x86_64-linux-gnu/libopencv_flann.so -lopencv_flann /usr/lib/x86_64-linux-gnu/libopencv_gpu.so -lopencv_gpu /usr/lib/x86_64-linux-gnu/libopencv_highgui.so -lopencv_highgui /usr/lib/x86_64-linux-gnu/libopencv_imgproc.so -lopencv_imgproc /usr/lib/x86_64-linux-gnu/libopencv_legacy.so -lopencv_legacy /usr/lib/x86_64-linux-gnu/libopencv_ml.so -lopencv_ml /usr/lib/x86_64-linux-gnu/libopencv_objdetect.so -lopencv_objdetect /usr/lib/x86_64-linux-gnu/libopencv_ocl.so -lopencv_ocl /usr/lib/x86_64-linux-gnu/libopencv_photo.so -lopencv_photo /usr/lib/x86_64-linux-gnu/libopencv_stitching.so -lopencv_stitching /usr/lib/x86_64-linux-gnu/libopencv_superres.so -lopencv_superres /usr/lib/x86_64-linux-gnu/libopencv_ts.so -lopencv_ts /usr/lib/x86_64-linux-gnu/libopencv_video.so -lopencv_video /usr/lib/x86_64-linux-gnu/libopencv_videostab.so -lopencv_videostab
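A quick way to check which OpenCV installation pkg-config is actually resolving (the /usr/local path below is only an assumption about where the 3.2 build was installed):

pkg-config --modversion opencv
pkg-config --variable=prefix opencv
# if the 3.2 build lives under /usr/local, point pkg-config at its .pc file explicitly:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig pkg-config --libs opencv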
I'm trying to run a C++ program based on the Tesseract API, using Qt Creator as my IDE on Ubuntu, in order to perform page layout analysis:
int main(void)
{
    int left, top, right, bottom;
    tesseract::TessBaseAPI tessApi;
    tessApi.InitForAnalysePage();
    cv::Mat img = cv::imread("document.png");
    tessApi.SetImage(reinterpret_cast<const uchar*>(img.data), img.size().width, img.size().height, img.channels(), img.step1());
    tesseract::PageIterator *iter = tessApi.AnalyseLayout();
    while (iter->Next(tesseract::RIL_BLOCK))
        iter->BoundingBox( tesseract::RIL_BLOCK, &left, &top, &right, &bottom);
    return EXIT_SUCCESS;
}
But I get these kinds of errors, confirming that Tesseract isn't being linked into the Qt project:
main.cpp:11: error: undefined reference to `tesseract::TessBaseAPI::TessBaseAPI()'
main.cpp:12: error: undefined reference to `tesseract::TessBaseAPI::InitForAnalysePage()'
main.cpp:16: error: undefined reference to `tesseract::TessBaseAPI::SetImage(unsigned char const*, int, int, int, int)'
main.cpp:18: error: undefined reference to `tesseract::TessBaseAPI::AnalyseLayout()'
Here is my .pro file :
INCLUDEPATH += /usr/local/include/opencv \
/usr/include/tesseract
LIBS += -L"/usr/local/opencv/lib" -lopencv_calib3d \
-lopencv_contrib \
-lopencv_core \
-lopencv_features2d \
-lopencv_flann \
-lopencv_gpu \
-lopencv_highgui \
-lopencv_imgproc \
-lopencv_legacy \
-lopencv_ml \
-lopencv_nonfree \
-lopencv_objdetect \
-lopencv_ocl \
-lopencv_photo \
-lopencv_stitching \
-lopencv_superres \
-lopencv_video \
-lopencv_videostab
LIBS += -L"/usr/bin/tesseract"
You only have the library search path (-L"/usr/bin/tesseract"); you forgot to link the library itself. Just add it like you did for the OpenCV libs.
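For example, something like this in the .pro file (a sketch; the exact library names depend on how Tesseract was installed, and Leptonica may be needed as well; note that /usr/bin/tesseract is the command-line binary, not a library directory):

LIBS += -ltesseract -llept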
I acquired an Nvidia Jetson TK1 a few weeks ago and I'm trying to use the CPU and GPU at the same time, hence the use of the Stream class. With a simple test I realized it does not do what I think it should; I'm probably using it wrong, or maybe missing a compiler option.
I checked this link for answers before posting this question : how to use gpu::Stream in OpenCV?
Here is my code :
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/gpu/gpu.hpp"
#include <time.h>
using namespace cv;
using namespace std;
using namespace gpu;
int main(int argc,char** argv)
{
    unsigned long AAtime=0, BBtime=0;
    gpu::setDevice(0);
    gpu::FeatureSet(FEATURE_SET_COMPUTE_30);
    Mat host_src= imread(argv[1],0);
    GpuMat gpu_src, gpu_dst;
    Stream stream;
    gpu_src.upload(host_src);
    AAtime = getTickCount();
    blur(gpu_src, gpu_dst, Size(5,5), Point(-1,-1), stream);
    //Cpu function
    int k=0;
    for(unsigned long long int j=0;j<10;j++)
        for(unsigned long long int i=0;i<10000000;i++)
            k+=rand();
    stream.waitForCompletion();
    Mat host_dst;
    BBtime = getTickCount();
    cout<<(BBtime - AAtime)/getTickFrequency()<<endl;
    gpu_dst.download(host_dst);
    return 0;
}
With the timer I saw that the overall time is CPU + GPU, not the longer of the two, so they are not working in parallel. I tried using CudaMem as jet47 showed, but when I view the result it is only stripes and not my image:
CudaMem host_src_pl(Size(900, 1200), CV_8UC1, CudaMem::ALLOC_PAGE_LOCKED); // My image is 1200 by 900
CudaMem host_dst_pl;
Mat host_src= imread(argv[1],0);
host_src = host_src_pl;
//rest of the code
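For reference, the page-locked pattern from the linked answer usually looks something like the following (a sketch against the OpenCV 2.4 gpu API, not tested on the TK1; the key point is that the image read by imread has to be copied into the pinned CudaMem buffer, and uploads/downloads go through the stream):

Mat img = imread(argv[1], 0);                       // 8-bit grayscale source
CudaMem host_src_pl(img.rows, img.cols, CV_8UC1, CudaMem::ALLOC_PAGE_LOCKED);
CudaMem host_dst_pl(img.rows, img.cols, CV_8UC1, CudaMem::ALLOC_PAGE_LOCKED);
img.copyTo(host_src_pl.createMatHeader());          // fill the pinned buffer

GpuMat gpu_src, gpu_dst;
Stream stream;
stream.enqueueUpload(host_src_pl, gpu_src);         // asynchronous host-to-device copy
blur(gpu_src, gpu_dst, Size(5,5), Point(-1,-1), stream);
stream.enqueueDownload(gpu_dst, host_dst_pl);       // asynchronous device-to-host copy

// ... CPU work here can overlap with the GPU ...

stream.waitForCompletion();
Mat host_dst = host_dst_pl.createMatHeader();       // view of the result in pinned memory

Assigning the CudaMem directly to the Mat returned by imread, as in the snippet above, makes the Mat point at the uninitialized pinned buffer instead of the loaded image, which would explain the stripes.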
To compile I used this command:
g++ -Ofast -mfpu=neon -funsafe-math-optimizations -fabi-version=8 -Wabi -std=c++11 -march=armv7-a testStream.cpp -fopenmp -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_calib3d -lopencv_contrib -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_video -lopencv_videostab -o gpuStream
Some of these flags might be redundant; I tried without them and it behaves the same.
What am I missing? Thanks for your answers :)
EDIT
Hey,
For anyone else having a similar issue, I figured out something of a workaround. If you just compile this using:
gcc `pkg-config --cflags opencv` CameraMotionTest.cpp `pkg-config --libs opencv` -o cammotion
instead of the makefile that I was using, it compiles correctly. I'm still not exactly sure what was wrong with my previous method, so if someone wants to comment on that, go ahead.
After doing this I found some other issues in the code that needed to be fixed as well, but those didn't have anything to do with this question, so I won't go into them here.
Thanks!
ORIGINAL
I am trying to compile a short program for camera motion estimation on Ubuntu using OpenCV, but I am running into an "undefined reference" error for one of the OpenCV functions (and only one). The error I get when I try to compile is as follows:
g++ CameraMotionTest.cpp -lopencv_video -lopencv_calib3d -lopencv_imgproc -lopencv_objdetect -lopencv_features2d -lopencv_core -lopencv_highgui -lopencv_videostab -lopencv_contrib -lopencv_flann -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_gpu -lopencv_ocl -o CameraMotion
/tmp/ccdHB3Pr.o: In function `main':
CameraMotionTest.cpp:(.text+0x77f): undefined reference to `cv::calcOpticalFlowPyrLK(cv::_InputArray const&, cv::_InputArray const&, cv::_InputArray
const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::Size_<int>, int, cv::TermCriteria, int, double)'
collect2: ld returned 1 exit status
make: *** [CameraMotion] Error 1
I am using this makefile to compile and run the program:
all: run
run: CameraMotion
./CameraMotion *.jpg
CameraMotion: CameraMotionTest.cpp
g++ CameraMotionTest.cpp -lopencv_video -lopencv_calib3d -lopencv_imgproc -lopencv_objdetect -lopencv_features2d -lopencv_core -lopencv_highgui -lopencv_videostab -lopencv_contrib -lopencv_flann -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_gpu -lopencv_ocl -o CameraMotion
Finally, the code I am trying to compile is:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv/cv.h>
#include <opencv/cxcore.h>
#include <iostream>
#include <stdio.h>
#include <fstream>
using namespace std;
using namespace cv;
int main(int argc, const char** argv){
    //storing the image in a temporary variable
    vector<Mat> img;
    int noi=5;
    for( int index=0; index<noi;index++){
        img.push_back(imread(argv[index+1]));
    }
    Mat im1=img[0];
    //converting image to grayscale
    cvtColor(im1,im1,CV_RGB2GRAY);
    //initializing variable
    vector<Point2f> corners1, corners2;
    //setting parameters for corner detection
    int maxCorner=200;
    double quality=0.01;
    double minDist=20;
    int blockSize=3;
    double k=0.04;
    Mat mask;
    vector<uchar> status;
    vector<float> track_err;
    int maxlevel=3;
    Mat im2=img[1];
    TermCriteria termcrit(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS,20,.03);
    vector<Point2f> pointskept1,pointskept2;
    vector<int>pointskeptindex;
    Mat F,E,R,tran;
    Matx33d W(0,-1,0,
              1,0,0,
              0,0,1);
    Matx33d Winv(0,1,0,
                 -1,0,0,
                 0,0,1);
    OutputArray statF=noArray();
    float fx=951.302687761842550;
    float fy=951.135570101293520;
    float cx=484.046807724895250;
    float cy=356.325026020307800;
    float alpha=0;
    float kmatdata[3][3]={{fx,fy*tan(alpha),cx},{0,fy,cy},{0,0,1}};
    Mat K(3,3,CV_32FC1,kmatdata);
    cout<<K<<endl;
    ofstream myfile;
    //collecting new images, determining corners, and calculating optical flow
    for (int i=1; i<noi-1; i++) {
        //capturing next image
        //converting new image to grayscale
        cvtColor(im2,im2,CV_RGB2GRAY);
        //determining corner features
        goodFeaturesToTrack(im1,corners1, maxCorner, quality, minDist, mask, blockSize, false,k);
        goodFeaturesToTrack(im2,corners2, maxCorner, quality, minDist, mask, blockSize, false,k);
        //calculating optical flow
        calcOpticalFlowPyrLK(im1,im2,corners1,corners2,status,track_err,Size(10,10),maxlevel,termcrit,0.0001);
        //filtering points
        for(int t=0; t<status.size();i++){
            if(status[t] && track_err[i]<12.0){
                pointskeptindex.push_back(i);
                pointskept1.push_back(corners1[i]);
                pointskept2.push_back(corners2[i]);
            } else {
                status[i]=0;
            }
        }
        F=findFundamentalMat(pointskept1,pointskept2,FM_RANSAC,1,0.99,statF);
        E=K.t()*F*K;
        SVD svd(E);
        R=svd.u*Mat(W)*svd.vt;
        tran=svd.u.col(2);
        //renaming new image to image 1
        im2.copyTo(im1);
        im2=img[i+1];
        myfile.open("output.txt", ios_base::app);
        myfile<<"Rotation mat: ";
        for(int l=0;l<R.rows;l++){
            for(int m=0; m<R.cols; m++){
                myfile<<R.at<float>(i,m)<<", ";
            }
        }
        myfile<<"Translation vector: ";
        for(int l=0; l<tran.rows;l++){
            myfile<<tran.at<float>(l,1)<<", ";
        }
        myfile<<"\n";
        myfile.close();
    }
    return 0;
}
Has anyone else run into a problem like this? I am assuming that there is just a linking error somewhere, but I am frankly pretty new to OpenCV and C++ in general, and I haven't been able to figure out what is wrong yet.
Thanks!
Andrew
It seems that you have some problem with your OpenCV installation. To compile your code on OpenCV 2.4.9, it was enough to use
g++ t1.cpp -lopencv_video -lopencv_core -lopencv_objdetect -lopencv_imgproc -lopencv_highgui -lopencv_calib3d -o CameraMotion
You can also try using nm -g <library> | grep -i <function_name> to check whether your libopencv_video.so contains calcOpticalFlowPyrLK (based on this answer).
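For example (the library path below is just a guess at a typical Ubuntu location; the -D option searches the dynamic symbol table, which works even when the library is stripped):

nm -gD /usr/lib/x86_64-linux-gnu/libopencv_video.so | grep -i calcOpticalFlowPyrLK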