I'm trying to detect whether a template image (a logo) is present in a pdf document. The document can be either a scan encapsulated in a pdf or a "pure" pdf document, but which of the two is completely random.
First, I convert the pdf document to a png image using ImageMagick's convert tool, then I cut the output images in half because they're so big, and after that I try to match a logo from a database against any of the shapes present in the half-cut image.
To do so, I use an ORB feature detector with an ORB descriptor, and a RobustMatcher (a sort of improved brute-force matcher; source code available here). Here is a snippet of code from my adaptation of it:
// Read input images
Mat image1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
Mat image2 = imread(argv[2], CV_LOAD_IMAGE_GRAYSCALE);
if (!image1.data || !image2.data) {
    std::cout << " --(!) Error reading images " << std::endl;
    exit(1);
}

// Settings for the ORB detector
int nfeatures = 800;
float scaleFactor = 1.10f;  // pyramid decimation ratio (must be defined: it is used below)
int nlevels = 8;
int edgeThreshold = 12;
int firstLevel = 0;
int WTA_K = 2;
int scoreType = 0;          // 0 == cv::ORB::HARRIS_SCORE
int patchSize = 31;

// Prepare the matcher
RobustMatcher rmatcher;
rmatcher.setConfidenceLevel(0.98);
rmatcher.setMinDistanceToEpipolar(1.0);
rmatcher.setRatio(0.80f);
cv::Ptr<cv::FeatureDetector> pfd = new cv::OrbFeatureDetector(nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType, patchSize);
rmatcher.setFeatureDetector(pfd);
cv::Ptr<cv::DescriptorExtractor> pde = new cv::OrbDescriptorExtractor();
rmatcher.setDescriptorExtractor(pde);

// Match the two images
std::vector<cv::DMatch> matches;
std::vector<cv::KeyPoint> keypoints1, keypoints2;
cv::Mat fundemental = rmatcher.match(image1, image2, matches, keypoints1, keypoints2);

// If nothing could be matched, stop here
if (matches.size() < 4) {
    exit(2);
}
The code works great on some examples that I chose carefully: a highly recognizable logo, a clean image, certain proportions, etc. But when I try to apply the process to random pdf files, I start to get this error from OpenCV:
OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) in batchDistance, file /home/das/Downloads/opencv-2.4.5/modules/core/src/stat.cpp, line 1797
terminate called after throwing an instance of 'cv::Exception'
what(): /home/das/Downloads/opencv-2.4.5/modules/core/src/stat.cpp:1797: error: (-215) type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U) in function batchDistance
Aborted (core dumped)
I checked for this error, and it turned out that src1.cols != src2.cols. A quick fix would be to test that condition before trying to match the images (as sketched below). The problem is that I miss a lot of images doing so, and this would be OK only if I were working on a video stream; but I'm not, the next image has nothing in common with the previous one, and I can't determine whether my logo was present in the document or not.
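For reference, that pre-check could look something like the following sketch (untested, and assuming the pfd/pde objects from the snippet above; the matcher ends up calling batchDistance() on the two descriptor matrices, so those are what have to agree):
// Rough sketch of the pre-check. An empty descriptor matrix (no keypoints
// detected in one of the images) is the usual cause of the cols mismatch.
std::vector<cv::KeyPoint> kpts1, kpts2;
cv::Mat desc1, desc2;
pfd->detect(image1, kpts1);
pfd->detect(image2, kpts2);
pde->compute(image1, kpts1, desc1);
pde->compute(image2, kpts2, desc2);
if (!desc1.empty() && !desc2.empty()
    && desc1.type() == desc2.type()
    && desc1.cols == desc2.cols)
{
    cv::Mat fundemental = rmatcher.match(image1, image2, matches,
                                         keypoints1, keypoints2);
}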
Here is the code from stat.cpp, lines 1789 to 1826 (the assertion is at the beginning, on line 1797):
void cv::batchDistance( InputArray _src1, InputArray _src2,
                        OutputArray _dist, int dtype, OutputArray _nidx,
                        int normType, int K, InputArray _mask,
                        int update, bool crosscheck )
{
    Mat src1 = _src1.getMat(), src2 = _src2.getMat(), mask = _mask.getMat();
    int type = src1.type();
    CV_Assert( type == src2.type() && src1.cols == src2.cols &&
               (type == CV_32F || type == CV_8U));
    CV_Assert( _nidx.needed() == (K > 0) );

    if( dtype == -1 )
    {
        dtype = normType == NORM_HAMMING || normType == NORM_HAMMING2 ? CV_32S : CV_32F;
    }
    CV_Assert( (type == CV_8U && dtype == CV_32S) || dtype == CV_32F);

    K = std::min(K, src2.rows);

    _dist.create(src1.rows, (K > 0 ? K : src2.rows), dtype);
    Mat dist = _dist.getMat(), nidx;
    if( _nidx.needed() )
    {
        _nidx.create(dist.size(), CV_32S);
        nidx = _nidx.getMat();
    }

    if( update == 0 && K > 0 )
    {
        dist = Scalar::all(dtype == CV_32S ? (double)INT_MAX : (double)FLT_MAX);
        nidx = Scalar::all(-1);
    }

    if( crosscheck )
    {
        CV_Assert( K == 1 && update == 0 && mask.empty() );
        Mat tdist, tidx;
        batchDistance(src2, src1, tdist, dtype, tidx, normType, K, mask, 0, false);
So I'm wondering: what does this assertion mean? What exactly are src1 and src2 in stat.cpp? Why do they need to have the same number of columns?
I tried switching to a SURF detector and extractor, but I still get the error.
If anyone has an idea, do not hesitate to post; I welcome any advice!
Thanks in advance.
EDIT
I have a more precise question now: how do I ensure that src1.cols == src2.cols? To answer that, I think I should know what transformations are applied to my cv::Mat image1 and image2 before batchDistance(...) is called, in order to find a condition on image1 and image2 which will ensure that src1.cols == src2.cols, so that my code would look like this:
// Match the two images
std::vector<cv::DMatch> matches;
std::vector<cv::KeyPoint> keypoints1, keypoints2;
if( CONDITION_ON_IMAGE1&IMAGE2_TO_ENSURE_SRC1.COLS==SRC2.COLS ){
    cv::Mat fundemental = rmatcher.match(image1, image2, matches, keypoints1, keypoints2);
}
To get rid of the error, you can play with copying and pasting the images into an empty one of the required size, but this is only a quick and dirty solution for the assertion.
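A rough sketch of that workaround, where the target size is an arbitrary assumption:
// Quick-and-dirty padding sketch: paste the image into a black canvas of a
// fixed size so every input has the same dimensions.
cv::Size required(1024, 1024);                 // hypothetical required size
cv::Mat canvas = cv::Mat::zeros(required, image1.type());
cv::Rect roi(0, 0, image1.cols, image1.rows);  // top-left placement
image1.copyTo(canvas(roi));                    // assumes image1 fits inside canvas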
To make the detector and descriptor work properly, you might have to learn how they work internally; maybe then you will be able to produce images that work. After reading this article, it looks like ORB has problems with scaling (the authors mention it in the conclusion section). This means you will need to find a workaround for it (like image pyramids, or another way to check the image at multiple scales, as sketched below) or use another extractor and descriptor which is scale- and rotation-invariant.
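A rough sketch of the multi-scale idea, reusing the RobustMatcher from the question (untested; the scale range and step are arbitrary choices):
// Multi-scale sketch: rescale the document image and keep the best match
// count across scales, so the logo can be found even when its size differs.
size_t bestMatchCount = 0;
for (double scale = 1.0; scale >= 0.25; scale /= 2.0)
{
    cv::Mat scaled;
    cv::resize(image2, scaled, cv::Size(), scale, scale, cv::INTER_AREA);

    std::vector<cv::DMatch> m;
    std::vector<cv::KeyPoint> k1, k2;
    rmatcher.match(image1, scaled, m, k1, k2);  // the same pre-checks still apply
    if (m.size() > bestMatchCount)
        bestMatchCount = m.size();
}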
Related
I have created a multithreaded program that gets one frame on an image acquisition thread and then transfers it to the image processing thread. The image seems to get copied entirely, but opencv crashes during the inRange function call. I'm going to briefly outline the flow and paste a few code snippets to show how the images are being copied around.
First: this is opencv 2.4.13.
For the flow of data, I have one thread read from the camera and store the image in a Mat accessible to both threads. I use a lock when accessing this shared Mat to ensure that the read on the CV thread doesn't happen at the same time as a write on the image acquisition thread. Below are the code snippets of each piece.
//image acquisition thread
Mat Temp = Mat(Instance->Frame->rows, Instance->Frame->cols, CV_8UC4);
CLEyeCameraGetFrame(*(Instance->CameraInstance), Temp.data);
Instance->imgMtx->lock();
*(Instance->Frame) = Temp.clone();
Instance->imgMtx->unlock();

//image copy used in the CV thread
imgMtx.lock();
frame = Frame.clone();
imgMtx.unlock();

//declared variables (the threshold values are presumably set elsewhere, e.g. from trackbars)
int LowH, HighH, LowS, HighS, LowV, HighV;
cv::Mat binary, hsv;

//image conversion in the CV code
cv::cvtColor(frame, hsv, CV_BGR2HSV);

//inRange call where it crashes
inRange(hsv, cv::Scalar(LowH, LowS, LowV), cv::Scalar(HighH, HighS, HighV), binary);
Where it crashes, it looks like the inRange function was trying to determine the type of the arguments it was getting. Below is the relevant code in OpenCV:
void cv::inRange(InputArray _src, InputArray _lowerb,
                 InputArray _upperb, OutputArray _dst)
{
    int skind = _src.kind(), lkind = _lowerb.kind(), ukind = _upperb.kind();
It crashes on something in that last line. I can't really tell what is breaking here, but OpenCV just keeps calling the type function until it hits a stack overflow. I have gone through and at every point made sure the images can be displayed using imshow. I realize that's not the entire picture, but I don't know what else to check. Any help would be appreciated.
EDIT: best I can tell, it hits the infinite recursion while trying to resolve the type of the Scalar. Below is the code for that function inside OpenCV.
int _InputArray::type(int i) const
{
    int k = kind();

    if( k == MAT )
        return ((const Mat*)obj)->type();

    if( k == EXPR )
        return ((const MatExpr*)obj)->type();

    if( k == MATX || k == STD_VECTOR || k == STD_VECTOR_VECTOR )
        return CV_MAT_TYPE(flags);

    if( k == NONE )
        return -1;

    if( k == STD_VECTOR_MAT )
    {
        const vector<Mat>& vv = *(const vector<Mat>*)obj;
        CV_Assert( i < (int)vv.size() );
        return vv[i >= 0 ? i : 0].type();
    }

    if( k == OPENGL_BUFFER )
        return ((const ogl::Buffer*)obj)->type();

    CV_Assert( k == GPU_MAT );
    //if( k == GPU_MAT )
    return ((const gpu::GpuMat*)obj)->type();
}
Following is an assertion error report (displayed on the console) from calling the cvtColor() function in opencv with the argument CV_GRAY2BGR on a Mat object which is already a BGR image. I want to know how to interpret this error message as someone who doesn't yet know what the error is. (I hope people won't vote to close this question as off topic, as there is real value for C++ newbies in learning to read assertion and other error messages.) And as I guess, this is most probably an opencv question about reading assertion errors.
OpenCV Error: Assertion failed (scn == 1 && (dcn == 3 || dcn == 4)) in cv::cvtColor, file C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\imgproc\src\color.cpp, line 3791
I know two conditions are tested here:
(scn == 1)
(dcn == 3 || dcn == 4)
and one of them must have failed, which caused the assertion error. How do I determine which condition failed? Maybe I have to dig into the cvtColor function source code, and that would be no problem. (Actually I did, but I couldn't find variables named scn or dcn in that imgproc source file.)
This snippet
#include <opencv2\opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    // Just a small green BGR image
    Mat3b img(10, 10, Vec3b(0, 255, 0));
    Mat1b gray;

    cvtColor(img, gray, CV_GRAY2BGR); // WARNING: this won't work (on purpose)

    return 0;
}
will produce your exact error:
OpenCV Error: Assertion failed (scn == 1 && (dcn == 3 || dcn == 4)) in cv::cvtColor, file C:\builds\2_4_PackSlave-win32-vc12-static\opencv\modules\imgproc\src\color.cpp, line 3789
This code is obviously wrong, because you're trying to convert a BGR image as if it were GRAY.
OpenCV is telling you:
Since you're using the code CV_GRAY2BGR, I'm expecting to convert from a GRAY (1 channel) source image to a BGR (3 channel) destination image. (I'll also allow BGRA (4 channels) as the destination image, even if CV_GRAY2BGRA would be more appropriate in that case.)
In the documentation OpenCV is telling you:
src: input image: 8-bit unsigned, 16-bit unsigned ( CV_16UC... ), or single-precision floating-point.
dst: output image of the same size and depth as src.
code: color space conversion code (see the description below).
dstCn: number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code .
In C++, OpenCV says this as:
CV_Assert( scn == 1 && (dcn == 3 || dcn == 4));
where scn stands for "Source Channels Number", and dcn stands for "Destination Channels Number".
Now the last point: where do scn and dcn come from? If you use a debugger and follow the execution path, you'll see in the function void cv::cvtColor( InputArray _src, OutputArray _dst, int code, int dcn ) in color.cpp that (comments added by me):
void cv::cvtColor( InputArray _src   /* source image */,
                   OutputArray _dst  /* destination image */,
                   int code          /* here CV_GRAY2BGR */,
                   int dcn           /* defaults to -1 */ )
{
    Mat src = _src.getMat(), dst;
    ...
    int scn = src.channels(); // scn is the number of channels of the source image
    ...
    switch( code ) {
        ...
        case CV_GRAY2BGR: case CV_GRAY2BGRA:
            if( dcn <= 0 ) dcn = (code == CV_GRAY2BGRA) ? 4 : 3;
            // destination channels are set to 3 because of the code CV_GRAY2BGR

            // Check that the input arguments are correct
            CV_Assert( scn == 1 && (dcn == 3 || dcn == 4));
            ...
}
calling cvtColor() function in opencv giving the argument CV_GRAY2BGR on a Mat object which is already a BGR image
You have already answered your own question here. The assertion will originally have been something like:
CV_Assert( scn == 1 && (dcn == 3 || dcn == 4));
Since you're using a BGR Mat, scn - which is the number of channels in the source Mat - will be 3, causing the whole expression to evaluate to false and the assertion to fail.
The operation you are performing makes no sense. Omit it, and your code will probably work.
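If a grayscale copy was actually the goal, the conversion simply goes the other way; a minimal sketch reusing the snippet from the other answer:
// Intended direction (sketch): BGR (3 channels) -> GRAY (1 channel),
// so scn == 3 and dcn == 1, which is what CV_BGR2GRAY expects.
Mat3b img(10, 10, Vec3b(0, 255, 0));  // small green BGR image
Mat1b gray;
cvtColor(img, gray, CV_BGR2GRAY);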
I got an issue with:
cv::FlannBasedMatcher
More precisely with the knnMatch method.
My program takes an IplImage* as input and detects a face; it then cuts out the face and compares it with an image stored on my computer. If I get more than 10 good matches, it writes Matched on standard output.
The images loaded aren't grayscale. Is that important?
My problem is that it works, but only for a random amount of time, which varies from roughly 1 to 3 minutes.
The error message always appears on the knnMatch method. Here are the messages (note that only one of them appears each time):
OpenCV Error: Assertion failed ((globalDescIdx>=0) && (globalDescIdx < size())) in getLocalIdx, file /opt/local/var/macports/build/_opt_mports_dports_graphics_opencv/opencv/work/opencv- 2.4.9/modules/features2d/src/matchers.cpp, line 163
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /opt/local/var/macports/build/_opt_mports_dports_graphics_opencv/opencv/work/opencv-2.4.9/modules/features2d/src/matchers.cpp:163: error: (-215) (globalDescIdx>=0) && (globalDescIdx < size()) in function getLocalIdx
I don't get why this exception is thrown...
Here is my code:
int DroneCV::matchFaces()
{
    std::vector<cv::KeyPoint> keypointsO;
    std::vector<cv::KeyPoint> keypointsS;
    cv::Mat descriptors_object, descriptors_scene;
    cv::Mat foundFaces(this->_faceCut);
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> matches;
    std::vector<cv::DMatch> good_matches;
    cv::SurfDescriptorExtractor extractor;
    cv::SurfFeatureDetector surf(this->_minHessian);

    surf.detect(foundFaces, keypointsS);
    surf.detect(this->_faceToRecognize, keypointsO);
    if (!this->_faceToRecognize.data || !foundFaces.data)
    {
        this->log("Fail to init data in DronceCV::matchFaces");
        return (0);
    }
    extractor.compute(foundFaces, keypointsS, descriptors_scene);
    extractor.compute(this->_faceToRecognize, keypointsO, descriptors_object);
    if (descriptors_scene.empty()) //descriptors_scene.type()!=CV_32F)
    {
        this->log("Descriptor got wrong type");
        descriptors_scene.convertTo(descriptors_scene, CV_32F);
        return 0;
    }
    if (descriptors_object.type() != CV_32F || descriptors_scene.type() != CV_32F)
    {
        this->log("TYPE OBJECT " + std::to_string(descriptors_object.type()));
        this->log("TYPE SCENE " + std::to_string(descriptors_scene.type()));
        return (0);
    }
    //Both image must be in grayscale ???
    try {
        matcher.knnMatch(descriptors_object, descriptors_scene, matches, 5); // find the 5 nearest neighbors (only the best 2 are used below)
    } catch (cv::Exception e) {
        this->log(e.err);
    }
    good_matches.reserve(matches.size());
    for (size_t i = 0; i < matches.size(); ++i)
    {
        if (matches[i].size() < 2)
            continue;
        const cv::DMatch &m1 = matches[i][0];
        const cv::DMatch &m2 = matches[i][1];
        if (m1.distance <= this->_nndrRatio * m2.distance)
            good_matches.push_back(m1);
    }
    this->log("Number of good matches" + std::to_string(good_matches.size()));
    foundFaces.release();
    if (good_matches.size() > 8)
        return (1);
    else
        return (0);
}

void DroneCV::analyzeFrame(IplImage *img)
{
    if (!img)
    {
        this->log("Frame empty");
        return;
    }
    if (this->detectFaces(img) == 1)
    {
        if (this->matchFaces() == 1)
        {
            this->log("Matched");
            cvReleaseImage(&this->_faceCut);
        }
    }
}
Thanks in advance for your help
I got stuck on this too, and it took me almost 3-4 hours to figure out. When you apply knnMatch, make sure that the number of features in both the test and query images is greater than or equal to the number of nearest neighbours requested.
Say, for example, we have this code:
Mat img1, img2, desc1, desc2;
vector<KeyPoint> kpt1, kpt2;
FAST(img1, kpt1, 30, true);
FAST(img2, kpt2, 30, true);  // note: detect into kpt2, not kpt1
SurfDescriptorExtractor sfdesc1, sfdesc2;
sfdesc1.compute(img1, kpt1, desc1);
sfdesc2.compute(img2, kpt2, desc2);
FlannBasedMatcher matcher;
vector< vector<DMatch> > matches1, matches2;
matcher.knnMatch(desc1, desc2, matches1, 2);
This code will throw an exception like the one in the post whenever fewer than two keypoints are detected in either image; just guard the call as shown below:
if (kpt1.size() >= 2 && kpt2.size() >= 2)
    matcher.knnMatch(desc1, desc2, matches1, 2);
This method worked for me!
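Applied to the code in the question, where knnMatch is called with k = 5, the equivalent guard might look like this (a sketch, untested):
// Guard sketch for the question's matchFaces(): knnMatch with k = 5 needs
// at least 5 descriptors on each side, otherwise FLANN's index lookup
// (getLocalIdx) can trip the (-215) assertion.
const int k = 5;
if (descriptors_object.rows >= k && descriptors_scene.rows >= k)
    matcher.knnMatch(descriptors_object, descriptors_scene, matches, k);
else
    return (0);  // too few features detected -- skip this frame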
When I call the function cvGoodFeaturesToTrack to find Harris corners I get this error:
OpenCV Error: Assertion failed (src.type() == CV_8UC1 || src.type() == CV_32FC1) in cornerEigenValsVecs, file /build/buildd/opencv-2.1.0/src/cv/cvcorner.cpp,line 254
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cv/cvcorner.cpp:254: error: (-215) src.type() == CV_8UC1 || src.type() == CV_32FC1 in function cornerEigenValsVecs
Aborted
It compiles correctly but when I try to run it, it gives me that error.
Here is the code:
IplImage* eig_image = 0;
IplImage* temp_image = 0;
IplImage *img1 = 0;
img1 = cvLoadImage("im1.pgm");
if (img1 == 0) {
    printf("oh no!");
}
eig_image = cvCreateImage(cvGetSize(img1),IPL_DEPTH_32F, 1);
temp_image = cvCreateImage(cvGetSize(img1),IPL_DEPTH_32F, 1);
const int MAX_CORNERS = 100;
CvPoint2D32f corners[MAX_CORNERS] = {0};
int corner_count = MAX_CORNERS;
double quality_level = 0.1;
double min_distance = 1;
int eig_block_size = 3;
int use_harris = true;
double k = .4;
cvGoodFeaturesToTrack(img1, eig_image, temp_image,corners,&corner_count,quality_level,min_distance,NULL,eig_block_size,use_harris,k);
Why is this happening and how can I fix it? I appreciate any help!
OpenCV is trying to tell you that one of the images you passed to cvGoodFeaturesToTrack() (the error is actually originating in the helper function cornerEigenValsVecs()) is not of the required type CV_8UC1 or CV_32FC1.
I suspect img1 may not be of the type you need it to be. What is the type of the img1 matrix? If it is color, then it may be of type CV_8UC3. Consider using cvCvtColor to make it a grayscale image.
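A sketch of that conversion with the C API, assuming img1 was loaded as the default 3-channel BGR image:
// Convert the 3-channel image to the single-channel CV_8UC1 layout that
// cornerEigenValsVecs() requires, then pass the grayscale copy instead.
IplImage* gray = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
cvCvtColor(img1, gray, CV_BGR2GRAY);
cvGoodFeaturesToTrack(gray, eig_image, temp_image, corners, &corner_count,
                      quality_level, min_distance, NULL,
                      eig_block_size, use_harris, k);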
Or, alternatively, you can load the image as grayscale in the first place:
img1 = cvLoadImage("im1.pgm", CV_LOAD_IMAGE_GRAYSCALE);
I am using openCV and trying to calculate a moving average of the background, then taking the current frame and subtracting the background to determine movement (of some sort).
However, when running the program I get:
OpenCV Error: Assertion failed (func != 0) in accumulateWeighted, file /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp, line 431
terminate called after throwing an instance of 'cv::Exception'
what(): /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp:431: error: (-215) func != 0 in function accumulateWeighted
I can't see what is wrong with the arguments passed to accumulateWeighted.
Code inserted below:
#include <stdio.h>
#include <stdlib.h>
#include "cv.h"
#include "highgui.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "cxcore.h"

using namespace cv;

int main( int argc, char **argv )
{
    Mat colourFrame;
    Mat frame;
    Mat greyFrame;
    Mat movingAverage;
    Mat difference;
    Mat temp;
    int key = 0;

    VideoCapture cap(0);

    /* always check */
    if ( !cap.isOpened() ) {
        fprintf( stderr, "Cannot open initialize webcam!\n" );
        return 1;
    }

    namedWindow("Camera Window", 0);

    // Initialize
    cap >> movingAverage;

    while( key != 'q' ) {
        /* get a frame */
        cap >> colourFrame;

        /* Create a running average of the motion and convert the scale */
        accumulateWeighted(colourFrame, movingAverage, 0.02, Mat() );

        /* Take the difference from the current frame to the moving average */
        absdiff(colourFrame, movingAverage, difference);

        /* Convert the image to grayscale */
        cvtColor(difference, greyFrame, CV_BGR2GRAY);

        /* Convert the image to black and white */
        threshold(greyFrame, greyFrame, 70, 255, CV_THRESH_BINARY);

        /* display current frame */
        imshow("Camera Window", greyFrame);

        /* exit if user press 'q' */
        key = cvWaitKey( 1 );
    }

    return 0;
}
Looking at the OpenCV sources, specifically at modules/imgproc/src/accum.cpp line 431, the lines that precede this assertion are:
void cv::accumulateWeighted( InputArray _src, CV_IN_OUT InputOutputArray _dst,
                             double alpha, InputArray _mask )
{
    Mat src = _src.getMat(), dst = _dst.getMat(), mask = _mask.getMat();
    int sdepth = src.depth(), ddepth = dst.depth(), cn = src.channels();

    CV_Assert( dst.size == src.size && dst.channels() == cn );
    CV_Assert( mask.empty() || (mask.size == src.size && mask.type() == CV_8U) );

    int fidx = getAccTabIdx(sdepth, ddepth);
    AccWFunc func = fidx >= 0 ? accWTab[fidx] : 0;
    CV_Assert( func != 0 ); // line 431
What's happening in your case is that getAccTabIdx() is returning -1, which in turn makes func a null pointer.
For accumulateWeighted() to work properly, the depth of colourFrame and movingAverage must be one of the following options:
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_64F && movingAverage.depth() == CV_64F
Anything different than that will make getAccTabIdx() return -1 and trigger the exception at line 431.
From the OpenCV API documentation, you can see that the output image of accumulateWeighted is:
dst – Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
So your initialization is wrong. You should retrieve the colourFrame size first and then do this:
cv::Mat movingAverage = cv::Mat::zeros(colourFrame.size(), CV_32FC3);
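Note that after this change the two frames no longer have the same type, and absdiff() also requires matching types; one way to handle that (a sketch, untested) is to convert the accumulator back to 8-bit before the subtraction:
// absdiff() insists on matching types, so convert the CV_32FC3
// accumulator back to 8-bit (CV_8UC3) before taking the difference.
cv::Mat average8u;
movingAverage.convertTo(average8u, CV_8U);
cv::absdiff(colourFrame, average8u, difference);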
In Python, a working solution is to initialize movingAverage from the first colourFrame, using colourFrame.copy().astype("float").
I found the solution on this website