ORB/BruteForce drawing matches when there are none - C++

I'm trying to write a program that uses the ORB algorithm to detect and compute keypoints in an image and in a video, and matches the descriptor vectors with a BruteForce matcher. The issue I am facing is that every time I run the program in Visual C++, when the object I'm trying to detect is not visible, the algorithm still draws all the supposed matching lines between the detected keypoints (it matches all of them). When the object I'm trying to detect does appear in the image, I don't face this issue; in fact, I hardly get any mismatches.
This is a brief sequence of the main test:
• convert input image to grayscale
• convert input videos to grayscale
• detect keypoints and extract descriptors from input grayscale image
• detect keypoints and extract descriptors from input grayscale videos
• match descriptors (see below)
BFMatcher matcher(NORM_HAMMING);
vector<DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);
double max_dist = 0; double min_dist = 100;
//// compute the max and min distances between keypoints
for (int i = 0; i < descriptors_1.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);
std::vector< DMatch > good_matches;
for (int i = 0; i < descriptors_1.rows; i++)
{
    if (matches[i].distance <= max(2 * min_dist, 0.02))
    {
        good_matches.push_back(matches[i]);
    }
}
////-- Draw the "good" matches
Mat img_matches;
drawMatches(img1, keypoints_1, cadruProcesat, keypoints_2,
            good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
////-- Show the matches
imshow("good matches", img_matches);
int gm = 0;
for (int i = 0; i < (int)good_matches.size(); i++)
{
    printf("-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx);
    gm += 1;
}
printf("%d", gm);
///////////////////////////////////////////////////////////////////
//imshow(windowName2, cadruProcesat);
switch (waitKey(10)) {
case 27:
    // the 'Esc' key was pressed (ASCII 27)
    return 0;
}
}
return 0;
Please help me find the problem.

So I've changed the initial value of min_dist to 15 (instead of 100) and it seems to work very well for my patterns... but this was found by trial and error...
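For reference, a minimal sketch of a less ad-hoc filter, assuming ORB descriptors matched with Hamming distance; the ratio 0.75 and the absolute cap 64 are assumptions, not values from the question. A knnMatch ratio test combined with an absolute distance cap tends to leave few or no matches when the object is absent, so drawMatches no longer connects everything:

BFMatcher matcher(NORM_HAMMING);
std::vector<std::vector<DMatch>> knnMatches;
matcher.knnMatch(descriptors_1, descriptors_2, knnMatches, 2);  // two nearest neighbours per query

std::vector<DMatch> good_matches;
for (const auto& pair : knnMatches)
{
    // keep a match only if it is clearly better than the second-best candidate
    // and its Hamming distance is small in absolute terms (ORB descriptors are 256 bits)
    if (pair.size() == 2 &&
        pair[0].distance < 0.75f * pair[1].distance &&
        pair[0].distance < 64)
    {
        good_matches.push_back(pair[0]);
    }
}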

Related

Image Stitching warpPerspective size issue

I am trying to stitch two images. The tech stack is OpenCV C++ on VS 2017.
The images that I considered are:
image1:
and
image2:
I have found the homography matrix using this code. I have considered image1 and image2 as given above.
int minHessian = 400;
Ptr<SURF> detector = SURF::create(minHessian);
vector< KeyPoint > keypoints_object, keypoints_scene;
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Mat img_keypoints;
drawKeypoints(gray_image1, keypoints_object, img_keypoints);
imshow("SURF Keypoints", img_keypoints);
Mat img_keypoints1;
drawKeypoints(gray_image2, keypoints_scene, img_keypoints1);
imshow("SURF Keypoints1", img_keypoints1);
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_object, descriptors_scene;
detector->compute(gray_image1, keypoints_object, descriptors_object);
detector->compute(gray_image2, keypoints_scene, descriptors_scene);
//-- Step 3: Matching descriptor vectors using FLANN matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
vector< DMatch > matches;
matcher->match(descriptors_object, descriptors_scene, matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for (int i = 0; i < descriptors_object.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist: %f \n", max_dist);
printf("-- Min dist: %f \n", min_dist);
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
Mat result, H;
for (int i = 0; i < descriptors_object.rows; i++)
{
    if (matches[i].distance < 3 * min_dist)
    {
        good_matches.push_back(matches[i]);
    }
}
Mat img_matches;
drawMatches(gray_image1, keypoints_object, gray_image2, keypoints_scene, good_matches, img_matches, Scalar::all(-1),
Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Good Matches", img_matches);
std::vector< Point2f > obj;
std::vector< Point2f > scene;
cout << "Good Matches detected" << good_matches.size() << endl;
for (int i = 0; i < good_matches.size(); i++)
{
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
// Find the Homography Matrix for img 1 and img2
H = findHomography(obj, scene, RANSAC);
The next step would be to warp these. I used the perspectiveTransform function to find the corners of image1 on the stitched image. I took this as the number of columns to be used in the Mat result. This is the code I wrote:
vector<Point2f> imageCorners(4);
imageCorners[0] = Point(0, 0);
imageCorners[1] = Point(image1.cols, 0);
imageCorners[2] = Point(image1.cols, image1.rows);
imageCorners[3] = Point(0, image1.rows);
vector<Point2f> projectedCorners(4);
perspectiveTransform(imageCorners, projectedCorners, H);
Mat result;
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
Mat half(result, Rect(0, 0, image2.cols, image2.rows));
image2.copyTo(half);
imshow("result", result);
I am getting a stitched output of these images, but the issue is with the size of the image. I compared the result of the code above with the two original images combined manually, and the result from the code is larger. What should I do to make it the right size? The ideal size should be image1.cols + image2.cols - the width of the overlap.
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
This line seems problematic.
You should choose the extremum points for the size.
Rect rec = boundingRect(projectedCorners);
warpPerspective(image1, result, H, rec.size());
But you will lose parts of the image if rec.tl() falls on the negative axes, so you should shift the homography matrix so that the warped image falls in the first quadrant.
See Warping to perspective section of my answer to Fast and Robust Image Stitching Algorithm for many images in Python.
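For illustration, a minimal sketch of that shift, assuming projectedCorners and H are computed as in the question; the translation matrix is built by hand here:

Rect rec = boundingRect(projectedCorners);          // extent of the warped image1
Mat shift = (Mat_<double>(3, 3) << 1, 0, -rec.x,
                                   0, 1, -rec.y,
                                   0, 0, 1);        // move the warped top-left corner to (0,0)
Mat H_shifted = shift * H;                          // findHomography returns CV_64F, so the product is valid
warpPerspective(image1, result, H_shifted, rec.size());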

Panoramic image construction

I am trying to construct a panoramic view from different images.
Initially I tried to stitch two images as part of the panorama construction.
The two input images I am trying to stitch are:
I used the ORB feature descriptor to find features in the images, then I computed the homography matrix between these two images.
My code is:
int main(int argc, char **argv){
Mat img1 = imread(argv[1],1);
Mat img2 = imread(argv[2],1);
//-- Step 1: Detect the keypoints using orb Detector
std::vector<KeyPoint> kp2,kp1;
// Default parameters of ORB
int nfeatures=500;
float scaleFactor=1.2f;
int nlevels=8;
int edgeThreshold=15; // Changed default (31);
int firstLevel=0;
int WTA_K=2;
int scoreType=ORB::HARRIS_SCORE;
int patchSize=31;
int fastThreshold=20;
Ptr<ORB> detector = ORB::create(
nfeatures,
scaleFactor,
nlevels,
edgeThreshold,
firstLevel,
WTA_K,
scoreType,
patchSize,
fastThreshold );
Mat descriptors_img1, descriptors_img2;
//-- Step 2: Calculate descriptors (feature vectors)
detector->detect(img1, kp1,descriptors_img1);
detector->detect(img2, kp2,descriptors_img2);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(img1, kp1, descriptors_img1 );
extractor->compute(img2, kp2, descriptors_img2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
if ( descriptors_img1.empty() )
    cvError(0, "MatchFinder", "1st descriptor empty", __FILE__, __LINE__);
if ( descriptors_img2.empty() )
    cvError(0, "MatchFinder", "2nd descriptor empty", __FILE__, __LINE__);
descriptors_img1.convertTo(descriptors_img1, CV_32F);
descriptors_img2.convertTo(descriptors_img2, CV_32F);
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_img1,descriptors_img2,matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_img1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist )
        min_dist = dist;
    if( dist > max_dist )
        max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_img1.rows; i++ )
{
    if( matches[i].distance < 3*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}
Mat img_matches;
drawMatches(img1,kp1,img2,kp2,good_matches,img_matches,Scalar::all(-1),
Scalar::all(-1),vector<char>(),DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( kp1[ good_matches[i].queryIdx ].pt );
    scene.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
Afterwards, some people told me to include the following code:
cv::Mat result;
warpPerspective( img1, result, H, cv::Size( img1.cols+img2.cols, img1.rows) );
cv::Mat half(result, cv::Rect(0, 0, img2.cols, img2.rows) );
img2.copyTo(half);
imshow("result",result);
The result I got is
I also tried using the built-in OpenCV stitching function, and I got this result
I am trying to implement the stitching function myself, so I don't want to use OpenCV's built-in one.
Can anyone tell me where I went wrong and correct my code? Thanks in advance.
Image stitching includes the following steps:
Feature finding
Find camera parameters
Warping
Exposure compensation
Seam Finding
Blending
You have to do all these steps in order to get the perfect result.
In your code you have only done the first part, that is, feature finding.
You can find a detailed explanation of how image stitching works in Learn OpenCV.
I also have the code on GitHub.
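As a small illustration of just the blending step, here is a rough feather-blend sketch; it assumes result already holds the warped img1 as in the question, both images are CV_8UC3, result is at least as large as img2, and the overlap width is guessed by hand rather than derived from the homography:

int overlap = 100;  // hypothetical overlap width in pixels
// copy only the non-overlapping part of img2, instead of overwriting the whole region
img2(Rect(0, 0, img2.cols - overlap, img2.rows))
    .copyTo(result(Rect(0, 0, img2.cols - overlap, img2.rows)));
// cross-fade the overlapping strip column by column
for (int x = 0; x < overlap; ++x)
{
    double alpha = static_cast<double>(x) / overlap;   // 0 -> img2, 1 -> warped img1
    int cx = img2.cols - overlap + x;
    Mat src = img2.col(cx);
    Mat dst = result(Rect(cx, 0, 1, img2.rows));
    Mat blended;
    addWeighted(src, 1.0 - alpha, dst, alpha, 0.0, blended);
    blended.copyTo(dst);
}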
Hope this helps.

Solution for OpenCV Error: Unsupported format or combination of formats - when matching ORB features with FlannBasedMatcher

I tried to find good matches using ORB. My code is as follows:
Ptr<FeatureDetector> detector = ORB::create();
Mat descriptors_img1, descriptors_img2;
//-- Step 2: Calculate descriptors (feature vectors)
detector->detect(img1, kp1,descriptors_img1);
detector->detect(img2, kp2,descriptors_img2);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(img1, kp1, descriptors_img1 );
extractor->compute(img2, kp2, descriptors_img2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
descriptors_img1.convertTo(descriptors_img1, CV_32F);
descriptors_img2.convertTo(descriptors_img2, CV_32F);
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_img1,descriptors_img2,matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_img1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist )
        min_dist = dist;
    if( dist > max_dist )
        max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_img1.rows; i++ )
{
    if( matches[i].distance < 3*min_dist )
    {
        good_matches.push_back( matches[i] );
    }
}
Mat img_matches;
drawMatches(img1,kp1,img2,kp2,good_matches,img_matches,Scalar::all(-1),
Scalar::all(-1),vector<char>(),DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
//-- Show detected matches
imshow( "Good Matches", img_matches );
But when I run it, I get an error saying:
OpenCV Error: Unsupported format or combination of formats (type=0
) in buildIndex_, file /home/opencv-3.2.0/modules/flann/src/miniflann.cpp, line 315
terminate called after throwing an instance of 'cv::Exception'
what(): /home/opencv-3.2.0/modules/flann/src/miniflann.cpp:315: error: (-210) type=0 in function buildIndex_
I looked at similar questions, but I didn't find my answer. After debugging, I came to know that the error is at
matcher.match(....);
Please help me fix this. Thanks in advance.
I solved this error.
Just modify the code to use
Ptr<ORB> detector = ORB::create();
instead of
Ptr<FeatureDetector> detector = ORB::create();
Then it worked for me.
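Alternatively (this is an addition, not part of the answer above), the binary ORB descriptors can be kept as CV_8U, skipping the convertTo step, and matched either with a Hamming-distance BFMatcher or with FLANN's LSH index, which is designed for binary descriptors. A minimal sketch; the LSH parameters 12/20/2 are commonly used values chosen here as an assumption:

// Option 1: brute force with Hamming distance
BFMatcher bf(NORM_HAMMING);
std::vector<DMatch> bfMatches;
bf.match(descriptors_img1, descriptors_img2, bfMatches);

// Option 2: FLANN with an LSH index (descriptors stay CV_8U)
FlannBasedMatcher flann(makePtr<flann::LshIndexParams>(12, 20, 2));
std::vector<DMatch> flannMatches;
flann.match(descriptors_img1, descriptors_img2, flannMatches);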

OpenCV drawMatches error

My code consists of a section where I sort through a set of matches and define good matches based on distance. When I try to drawMatches, I receive an error:
OpenCV Error: Assertion failed (i1 >= 0 && i1 < static_cast<int>(keypoints1.size())) in drawMatches, file /home/user/OpenCV/opencv-2.4.10/modules/features2d/src/draw.cpp, line 207
terminate called after throwing an instance of 'cv::Exception'
what(): /home/user/OpenCV/opencv-2.4.10/modules/features2d/src/draw.cpp:207: error: (-215) i1 >= 0 && i1 < static_cast<int>(keypoints1.size()) in function drawMatches
draw.cpp file shows:
// draw matches
for( size_t m = 0; m < matches1to2.size(); m++ )
{
    if( matchesMask.empty() || matchesMask[m] )
    {
        int i1 = matches1to2[m].queryIdx;
        int i2 = matches1to2[m].trainIdx;
        CV_Assert(i1 >= 0 && i1 < static_cast<int>(keypoints1.size()));
        CV_Assert(i2 >= 0 && i2 < static_cast<int>(keypoints2.size()));
        const KeyPoint &kp1 = keypoints1[i1], &kp2 = keypoints2[i2];
        _drawMatch( outImg, outImg1, outImg2, kp1, kp2, matchColor, flags );
    }
}
My drawMatches call follows:
Mat matchesImage;
drawMatches( im1, keypoints1, im2, keypoints2,
good_matches, matchesImage, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
Can anybody help explain this error to me?
Update:
Here is my code for the good_matches calculation
double min_dist = 10000;
double max_dist = 0;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors1.rows; i++ ) {
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors1.rows; i++ ) {
    if( matches[i].distance <= max(2*min_dist, 0.02) ) {
        good_matches.push_back( matches[i] );
    }
}
for( int i = 0; i < (int)good_matches.size(); i++ ) {
    printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n",
            i, good_matches[i].queryIdx, good_matches[i].trainIdx );
}
cout << "number of good matches: " << (int)good_matches.size() << endl;
//Draw matches and save file
Mat matchesImage;
drawMatches( im1, keypoints1, im2, keypoints2,
good_matches, matchesImage, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
Update 2:
BFMatcher matcher(NORM_L2, true);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
The problem has to do with the order you matched the points.
If you did, e.g.:
match(right_desc, left_desc)
then drawMatches has to follow the same order. This will work (considering my matching example):
drawMatches(right_rgb, right_pts, left_rgb, left_pts, matches)
while this will produce the error you have:
drawMatches(left_rgb, left_pts, right_rgb, right_pts, matches)
The order will also affect what is query and what is train (i.e. queryIdx and trainIdx), when you access the coordinates of the matches. Note that right_pts and left_pts are the keypoints described by right_desc and left_desc respectively.
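A minimal sketch of the consistent ordering described above; the right_*/left_* names follow the example and are placeholders, and the BFMatcher setup mirrors the question's Update 2:

BFMatcher matcher(NORM_L2, true);
std::vector<DMatch> matches;
matcher.match(right_desc, left_desc, matches);   // right = query set, left = train set

Mat out;
drawMatches(right_rgb, right_pts,                // query image and keypoints come first
            left_rgb, left_pts,                  // train image and keypoints come second
            matches, out);

// the same roles apply when reading coordinates:
// right_pts[matches[i].queryIdx].pt corresponds to left_pts[matches[i].trainIdx].pt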
Hope it helps someone.
I think you should clear the contents of the good_matches vector with good_matches.clear() on each iteration (if you use a while(1) loop to grab frames from a camera, put good_matches.clear() at the start of the while(1) loop):
while(1) {
good_matches.clear();
// other code ...
}
In general, good_matches is an array, which binds points from both keypoints1 and keypoints2 array. So point keypoints1[good_matches[m].queryIdx] corresponds to point keypoints2[good_matches[m].trainIdx].
As you can see, assertions in opencv code have sense.
It seems, the problem is in matches array.
I don't agree with the answers above.
The parameters you pass into matcher.match(dscp1, dscp2, matches) and drawMatches(img1, kp1, img2, kp2, good_matches, ...) do correspond (both match every descriptor of keypoints1 against keypoints2; keypoints1 is called the query set while keypoints2 is called the train set).
The error might be caused by passing the wrong keypoints1 and keypoints2 into drawMatches(). Check whether the indices in good_matches correspond to the keypoints in keypoints1. (If you reduce the number of keypoints while picking good matches, this can happen.)
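As a quick way to check this (an addition, not from the answer), the match indices can be validated against the keypoint vectors that will actually be passed to drawMatches before drawing:

for (const DMatch& m : good_matches)
{
    CV_Assert(m.queryIdx >= 0 && m.queryIdx < static_cast<int>(keypoints1.size()));
    CV_Assert(m.trainIdx >= 0 && m.trainIdx < static_cast<int>(keypoints2.size()));
}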

Opencv Image Stitching or Panorama

I am doing image stitching in OpenCV (a panorama), but I have one problem.
I can't use OpenCV's Stitcher class, so I must build the panorama using only feature points and homographies.
OrbFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
Mat descriptors_1a, descriptors_2a;
detector.detect( img_1, keypoints_1 , descriptors_1a);
detector.detect( img_2, keypoints_2 , descriptors_2a);
//-- Step 2: Calculate descriptors (feature vectors)
OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
cout<<"La distancia es " <<endl;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_HAMMING, true);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
Here I obtain the feature points in matches, but I need to filter it:
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < matches.size(); i++ )
{
    double dist = matches[i].distance;
    //cout << "The distance is " << i << endl;
    if( dist < min_dist && dist > 3 )
    {
        min_dist = dist;
    }
    if( dist > max_dist ) max_dist = dist;
}
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < matches.size(); i++ )
{
    //cout << matches[i].distance << endl;
    if( matches[i].distance < 3*min_dist && matches[i].distance > 3 )
    {
        good_matches.push_back( matches[i] );
    }
}
Now, I calculate the Homography
vector<Point2f> p1, p2;
for (unsigned int i = 0; i < matches.size(); i++) {
    p1.push_back(keypoints_1[matches[i].queryIdx].pt);
    p2.push_back(keypoints_2[matches[i].trainIdx].pt);
}
// Homography
vector<unsigned char> match_mask;
Mat h = findHomography(Mat(p1),Mat(p2), match_mask,CV_RANSAC);
And finally, I obtain the transform matrix and apply warpPerspective to join the two images, but my problem is that black areas appear around the photo in the final image, and when I loop again, the final image becomes illegible.
// Transform perspective for image 2
vector<Point2f> cuatroPuntos;
cuatroPuntos.push_back(Point2f (0,0));
cuatroPuntos.push_back(Point2f (img_1.size().width,0));
cuatroPuntos.push_back(Point2f (0, img_1.size().height));
cuatroPuntos.push_back(Point2f (img_1.size().width, img_1.size().height));
Mat MDestino;
perspectiveTransform(Mat(cuatroPuntos), MDestino, h);
// Compute the corners of image 2
double min_x, min_y, tam_x, tam_y;
float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
min_x1 = min(MDestino.at<Point2f>(0).x, MDestino.at<Point2f>(1).x);
min_x2 = min(MDestino.at<Point2f>(2).x, MDestino.at<Point2f>(3).x);
min_y1 = min(MDestino.at<Point2f>(0).y, MDestino.at<Point2f>(1).y);
min_y2 = min(MDestino.at<Point2f>(2).y, MDestino.at<Point2f>(3).y);
max_x1 = max(MDestino.at<Point2f>(0).x, MDestino.at<Point2f>(1).x);
max_x2 = max(MDestino.at<Point2f>(2).x, MDestino.at<Point2f>(3).x);
max_y1 = max(MDestino.at<Point2f>(0).y, MDestino.at<Point2f>(1).y);
max_y2 = max(MDestino.at<Point2f>(2).y, MDestino.at<Point2f>(3).y);
min_x = min(min_x1, min_x2);
min_y = min(min_y1, min_y2);
tam_x = max(max_x1, max_x2);
tam_y = max(max_y1, max_y2);
// Transformation matrix
Mat Htr = Mat::eye(3,3,CV_64F);
if (min_x < 0) {
    tam_x = img_2.size().width - min_x;
    Htr.at<double>(0,2) = -min_x;
}
if (min_y < 0) {
    tam_y = img_2.size().height - min_y;
    Htr.at<double>(1,2) = -min_y;
}
// Build the panorama
Mat Panorama;
Panorama = Mat(Size(tam_x,tam_y), CV_32F);
warpPerspective(img_2, Panorama, Htr, Panorama.size(), INTER_LINEAR, BORDER_CONSTANT, 0);
warpPerspective(img_1, Panorama, (Htr*h), Panorama.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
Does anyone know how I can eliminate these black areas? Am I doing something wrong? Does anyone know of working code that I could compare against?
Thanks for your time
EDIT:
That is my image:
And I want to eliminate the black part.
As Micka suggested, when you do stitching, the panorama is usually wavy, because a homography or other projection methods do not map a rectangle to another rectangle. You can compensate for this effect by using some "straightening"; see this article:
M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. IJCV, 74(1):59–73, 2007
As to cropping the black part, I wrote this class that you can use. This class assumes the image is BGR and the black pixels have value Vec3b(0,0,0). The source code can be accessed here:
https://github.com/chmos/crop-images.git
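For reference, a minimal cropping sketch in the spirit of that class (assumptions: the panorama is BGR/CV_8UC3 and the untouched border pixels are exactly Vec3b(0,0,0)):

Mat gray;
cvtColor(Panorama, gray, COLOR_BGR2GRAY);
Mat mask = gray > 0;                      // non-black pixels
std::vector<Point> nonBlack;
findNonZero(mask, nonBlack);
Rect roi = boundingRect(nonBlack);        // tightest rectangle around the content
Mat cropped = Panorama(roi).clone();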
Best,