OpenCV - RobustMatcher using findHomography - C++

I've implemented a robust matcher found on the internet, based on different tests: a symmetry test, a ratio test, and a RANSAC test. It works well.
I then used findHomography in order to get good matches.
Here is the code:
RobustMatcher::RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3.0) {
    detector = new cv::SurfFeatureDetector(400); // Better than ORB
    //detector = new cv::SiftFeatureDetector;    // Better than ORB
    //extractor = new cv::OrbDescriptorExtractor();
    //extractor = new cv::SiftDescriptorExtractor;
    extractor = new cv::SurfDescriptorExtractor;
    //matcher = new cv::FlannBasedMatcher;
    matcher = new cv::BFMatcher();
}
// Clear matches for which the NN ratio is greater than the threshold.
// Returns the number of removed points
// (corresponding entries being cleared, i.e. size will be 0).
int RobustMatcher::ratioTest(std::vector<std::vector<cv::DMatch> >& matches) {
    int removed = 0;
    // for all matches
    for (std::vector<std::vector<cv::DMatch> >::iterator matchIterator = matches.begin();
         matchIterator != matches.end(); ++matchIterator) {
        // if 2 NN have been identified
        if (matchIterator->size() > 1) {
            // check distance ratio
            if ((*matchIterator)[0].distance / (*matchIterator)[1].distance > ratio) {
                matchIterator->clear(); // remove match
                removed++;
            }
        } else { // does not have 2 neighbours
            matchIterator->clear(); // remove match
            removed++;
        }
    }
    return removed;
}
// Insert symmetrical matches in the symMatches vector
void RobustMatcher::symmetryTest(
        const std::vector<std::vector<cv::DMatch> >& matches1,
        const std::vector<std::vector<cv::DMatch> >& matches2,
        std::vector<cv::DMatch>& symMatches) {
    // for all matches image 1 -> image 2
    for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator1 = matches1.begin();
         matchIterator1 != matches1.end(); ++matchIterator1) {
        // ignore deleted matches
        if (matchIterator1->size() < 2)
            continue;
        // for all matches image 2 -> image 1
        for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator2 = matches2.begin();
             matchIterator2 != matches2.end(); ++matchIterator2) {
            // ignore deleted matches
            if (matchIterator2->size() < 2)
                continue;
            // Match symmetry test
            if ((*matchIterator1)[0].queryIdx == (*matchIterator2)[0].trainIdx &&
                (*matchIterator2)[0].queryIdx == (*matchIterator1)[0].trainIdx) {
                // add symmetrical match
                symMatches.push_back(cv::DMatch((*matchIterator1)[0].queryIdx,
                                                (*matchIterator1)[0].trainIdx,
                                                (*matchIterator1)[0].distance));
                break; // next match in image 1 -> image 2
            }
        }
    }
}
// Identify good matches using RANSAC
// Returns the fundamental matrix
cv::Mat RobustMatcher::ransacTest(const std::vector<cv::DMatch>& matches,
                                  const std::vector<cv::KeyPoint>& keypoints1,
                                  const std::vector<cv::KeyPoint>& keypoints2,
                                  std::vector<cv::DMatch>& outMatches) {
    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    cv::Mat fundamental;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        // Get the position of left keypoints
        float x = keypoints1[it->queryIdx].pt.x;
        float y = keypoints1[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x, y));
        // Get the position of right keypoints
        x = keypoints2[it->trainIdx].pt.x;
        y = keypoints2[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x, y));
    }
    // Compute F matrix using RANSAC
    std::vector<uchar> inliers(points1.size(), 0);
    if (points1.size() > 0 && points2.size() > 0) {
        // Note: assign to the outer 'fundamental' instead of declaring a new
        // local Mat here; otherwise the function always returns an empty matrix.
        fundamental = cv::findFundamentalMat(
            cv::Mat(points1), cv::Mat(points2), // matching points
            inliers,      // match status (inlier or outlier)
            CV_FM_RANSAC, // RANSAC method
            distance,     // distance to epipolar line
            confidence);  // confidence probability
        // extract the surviving (inlier) matches
        std::vector<uchar>::const_iterator itIn = inliers.begin();
        std::vector<cv::DMatch>::const_iterator itM = matches.begin();
        // for all matches
        for ( ; itIn != inliers.end(); ++itIn, ++itM) {
            if (*itIn) { // it is a valid match
                outMatches.push_back(*itM);
            }
        }
        if (refineF) {
            // The F matrix will be recomputed with all accepted matches.
            // Convert keypoints into Point2f for final F computation
            points1.clear();
            points2.clear();
            for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin();
                 it != outMatches.end(); ++it) {
                // Get the position of left keypoints
                float x = keypoints1[it->queryIdx].pt.x;
                float y = keypoints1[it->queryIdx].pt.y;
                points1.push_back(cv::Point2f(x, y));
                // Get the position of right keypoints
                x = keypoints2[it->trainIdx].pt.x;
                y = keypoints2[it->trainIdx].pt.y;
                points2.push_back(cv::Point2f(x, y));
            }
            // Compute 8-point F from all accepted matches
            if (points1.size() > 0 && points2.size() > 0) {
                fundamental = cv::findFundamentalMat(
                    cv::Mat(points1), cv::Mat(points2), // matches
                    CV_FM_8POINT);                      // 8-point method
            }
        }
    }
    return fundamental;
}
// Match feature points using the ratio test, symmetry test and RANSAC
// Returns the fundamental matrix
cv::Mat RobustMatcher::match(cv::Mat& image1,
                             cv::Mat& image2, // input images
                             // output matches and keypoints
                             std::vector<cv::DMatch>& matches,
                             std::vector<cv::KeyPoint>& keypoints1,
                             std::vector<cv::KeyPoint>& keypoints2) {
    if (!matches.empty()) {
        matches.erase(matches.begin(), matches.end());
    }
    // 1a. Detection of the features
    detector->detect(image1, keypoints1);
    detector->detect(image2, keypoints2);
    // 1b. Extraction of the descriptors
    /*cv::Mat img_keypoints;
    cv::Mat img_keypoints2;
    drawKeypoints(image1, keypoints1, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    drawKeypoints(image2, keypoints2, img_keypoints2, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    //-- Show detected (drawn) keypoints
    //cv::imshow("Result keypoints detected", img_keypoints);
    //cv::imshow("Result keypoints detected", img_keypoints2);
    cv::waitKey(5000);*/
    cv::Mat descriptors1, descriptors2;
    extractor->compute(image1, keypoints1, descriptors1);
    extractor->compute(image2, keypoints2, descriptors2);
    // 2. Match the two image descriptors
    // Construction of the matcher
    //cv::BruteForceMatcher<cv::L2<float> > matcher;
    // from image 1 to image 2
    // based on k nearest neighbours (with k=2)
    std::vector<std::vector<cv::DMatch> > matches1;
    matcher->knnMatch(descriptors1, descriptors2,
                      matches1, // vector of matches (up to 2 per entry)
                      2);       // return 2 nearest neighbours
    // from image 2 to image 1
    // based on k nearest neighbours (with k=2)
    std::vector<std::vector<cv::DMatch> > matches2;
    matcher->knnMatch(descriptors2, descriptors1,
                      matches2, // vector of matches (up to 2 per entry)
                      2);       // return 2 nearest neighbours
    // 3. Remove matches for which the NN ratio is greater than the threshold
    // clean image 1 -> image 2 matches
    int removed = ratioTest(matches1);
    // clean image 2 -> image 1 matches
    removed = ratioTest(matches2);
    // 4. Remove non-symmetrical matches
    std::vector<cv::DMatch> symMatches;
    symmetryTest(matches1, matches2, symMatches);
    // 5. Validate matches using RANSAC
    cv::Mat fundamental = ransacTest(symMatches, keypoints1, keypoints2, matches);
    // return the found fundamental matrix
    return fundamental;
}
cv::Mat img_matches;
drawMatches(image1, keypoints_img1, image2, keypoints_img2,
            matches, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
std::cout << "Number of good matches: " << (int)matches.size() << std::endl;
if ((int)matches.size() > 5) {
    Debug::info("Good matching!");
}
//-- Localize the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (int i = 0; i < matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_img1[matches[i].queryIdx].pt);
    scene.push_back(keypoints_img2[matches[i].trainIdx].pt);
}
cv::Mat arrayRansac;
std::vector<uchar> inliers(obj.size(), 0);
Mat H = findHomography(obj, scene, CV_RANSAC, 3, inliers);
//-- Get the corners from image_1 (the object to be "detected")
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0, 0);
obj_corners[1] = cvPoint(image1.cols, 0);
obj_corners[2] = cvPoint(image1.cols, image1.rows);
obj_corners[3] = cvPoint(0, image1.rows);
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2)
line(img_matches, scene_corners[0] + Point2f(image1.cols, 0), scene_corners[1] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(image1.cols, 0), scene_corners[2] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(image1.cols, 0), scene_corners[3] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(image1.cols, 0), scene_corners[0] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
}
I have results like this (the homography is good):
But I don't understand why, for some of my results where the matches are good, I get this kind of result (the homography doesn't seem to be good):
Can someone explain this to me? Maybe I have to adjust the parameters? But if I relax the constraints (raise the ratio, for example), then instead of having no matches between two pictures (which is good), I get a lot of matches... and I don't want that. Besides, the homography doesn't work at all (I get only a green line, like above).
And conversely, my robust matcher works (too) well: for different versions of the same picture (just rotated, different scale, etc.) it works fine, but when I have two merely similar images, I get no matches at all...
So I don't know how to get a good computation. I'm a beginner. The robust matcher works well for the exact same image, but for two similar images like above it doesn't work, and this is a problem.
Maybe I'm on the wrong track.
Before posting this message, I of course read a lot on Stack Overflow, but I didn't find the answer. (For example here.)

It is due to how SURF descriptors work, see http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html
Basically, with Droid the image is mostly flat color and it's difficult to find keypoints that are not ambiguous. With Nike, the shape is the same, but the intensity ratio is completely different in the descriptors: imagine that on the left the center of a descriptor will have intensity 0 and on the right 1. Even if you normalize the intensity of the images, you're not going to get a match.
If your goal is just to match logos, I suggest you look into edge detection algorithms, for example: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
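For illustration, a minimal Canny sketch; the file name and threshold values below are placeholders, not something from the question:
// Minimal Canny edge detection sketch; "logo.png" and the thresholds are placeholders.
cv::Mat logo = cv::imread("logo.png", 0);              // load as grayscale
cv::Mat blurred, edges;
cv::GaussianBlur(logo, blurred, cv::Size(5, 5), 1.5);  // reduce noise before edge detection
cv::Canny(blurred, edges, 50, 150);                    // low/high thresholds, ~1:3 ratio
cv::imshow("Edges", edges);
cv::waitKey(0);
You could then compare logos on their edge maps (e.g. with shape matching) instead of on intensity-based descriptors.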

Related

Image Stitching warpPerspective size issue

I am trying to stitch two images. The tech stack is OpenCV C++ on VS 2017.
The images that I considered are:
image1 of the code:
and
image2 of the code:
I have found the homography matrix using this code. I have considered image1 and image2 as given above.
int minHessian = 400;
Ptr<SURF> detector = SURF::create(minHessian);
vector< KeyPoint > keypoints_object, keypoints_scene;
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Mat img_keypoints;
drawKeypoints(gray_image1, keypoints_object, img_keypoints);
imshow("SURF Keypoints", img_keypoints);
Mat img_keypoints1;
drawKeypoints(gray_image2, keypoints_scene, img_keypoints1);
imshow("SURF Keypoints1", img_keypoints1);
//-- Step 2: Calculate descriptors (feature vectors)
Mat descriptors_object, descriptors_scene;
detector->compute(gray_image1, keypoints_object, descriptors_object);
detector->compute(gray_image2, keypoints_scene, descriptors_scene);
//-- Step 3: Matching descriptor vectors using FLANN matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::FLANNBASED);
vector< DMatch > matches;
matcher->match(descriptors_object, descriptors_scene, matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for (int i = 0; i < descriptors_object.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist: %f \n", max_dist);
printf("-- Min dist: %f \n", min_dist);
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
Mat result, H;
for (int i = 0; i < descriptors_object.rows; i++)
{
    if (matches[i].distance < 3 * min_dist)
    {
        good_matches.push_back(matches[i]);
    }
}
Mat img_matches;
drawMatches(gray_image1, keypoints_object, gray_image2, keypoints_scene, good_matches, img_matches, Scalar::all(-1),
Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Good Matches", img_matches);
std::vector< Point2f > obj;
std::vector< Point2f > scene;
cout << "Good Matches detected" << good_matches.size() << endl;
for (int i = 0; i < good_matches.size(); i++)
{
//-- Get the keypoints from the good matches
obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
// Find the Homography Matrix for img 1 and img2
H = findHomography(obj, scene, RANSAC);
The next step would be to warp these. I used the perspectiveTransform function to find the corners of image1 on the stitched image. I considered this as the number of columns to be used in the Mat result. This is the code I wrote:
vector<Point2f> imageCorners(4);
imageCorners[0] = Point(0, 0);
imageCorners[1] = Point(image1.cols, 0);
imageCorners[2] = Point(image1.cols, image1.rows);
imageCorners[3] = Point(0, image1.rows);
vector<Point2f> projectedCorners(4);
perspectiveTransform(imageCorners, projectedCorners, H);
Mat result;
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
Mat half(result, Rect(0, 0, image2.cols, image2.rows));
image2.copyTo(half);
imshow("result", result);
I am getting a stitched output of these images, but the issue is the size of the image. I compared the result of the above code with the two original images combined manually; the result from the code is larger. What should I do to make it the right size? The ideal size should be image1.cols + image2.cols - the width of the overlap.
warpPerspective(image1, result, H, Size(projectedCorners[2].x, image1.rows));
This line seems problematic.
You should choose the extremum points for the size.
Rect rec = boundingRect(projectedCorners);
warpPerspective(image1, result, H, rec.size());
But you will lose parts of the result if rec.tl() falls into negative coordinates, so you should shift the homography matrix so that the warped image falls in the first quadrant.
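For illustration, a sketch of that shift, assuming H, projectedCorners and image1 from the question: build a translation from -rec.tl() and pre-multiply it into the homography before warping.
// Sketch: shift the warp so the projected image lands in the first quadrant.
// H, projectedCorners and image1 are assumed from the question's code.
cv::Rect rec = cv::boundingRect(projectedCorners);
cv::Mat T = (cv::Mat_<double>(3, 3) <<
             1, 0, -rec.x,
             0, 1, -rec.y,
             0, 0, 1);   // translation moving rec.tl() to (0, 0)
cv::Mat result;
cv::warpPerspective(image1, result, T * H, rec.size());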
See Warping to perspective section of my answer to Fast and Robust Image Stitching Algorithm for many images in Python.

I want to get high quality feature points only

I'm currently working on real-time feature matching using OpenCV 3.4.0 and C++ in Qt Creator.
My code matches features between the first frame that I grabbed from the webcam and the current webcam frame.
Mat frame1, frame2, img1, img2, img1_gray, img2_gray;
int n = 0;
VideoCapture cap1(0);
namedWindow("Video Capture1", WINDOW_NORMAL);
namedWindow("Reference img", WINDOW_NORMAL);
namedWindow("matches1", WINDOW_NORMAL);
moveWindow("Video Capture1", 50, 0);
moveWindow("Reference img", 50, 100);
moveWindow("matches1", 100, 100);
while ((char)waitKey(1) != 'q') {
    // raw image saved in frame
    cap1 >> frame1;
    n = n + 1;
    if (n == 1) {
        imwrite("frame1.jpg", frame1);
        cout << "First frame saved as 'frame1'!!" << endl;
    }
    if (frame1.empty())
        break;
    imshow("Video Capture1", frame1);
    img1 = imread("frame1.jpg");
    img2 = frame1;
    cvtColor(img1, img1_gray, cv::COLOR_BGR2GRAY);
    cvtColor(img2, img2_gray, cv::COLOR_BGR2GRAY);
    imshow("Reference img", img1);
    // detecting keypoints
    int minHessian = 400;
    Ptr<Feature2D> detector = xfeatures2d::SurfFeatureDetector::create();
    vector<KeyPoint> keypoints1, keypoints2;
    detector->detect(img1_gray, keypoints1);
    detector->detect(img2_gray, keypoints2);
    // computing descriptors
    Ptr<DescriptorExtractor> extractor = xfeatures2d::SurfFeatureDetector::create();
    Mat descriptors1, descriptors2;
    extractor->compute(img1_gray, keypoints1, descriptors1);
    extractor->compute(img2_gray, keypoints2, descriptors2);
    // matching descriptors
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    // drawing the results
    Mat img_matches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
    imshow("matches1", img_matches);
But the code returns so many matched points that I cannot distinguish which one matches which.
So, are there any methods to get only high-quality matched points?
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
So, are there any methods to get only high-quality matched points?
I bet there are a lot of different methods. I am using, for example, a symmetry test: matches from img1 to img2 also have to exist when matching from img2 to img1. I am using the test from Improve matching of feature points with OpenCV. Multiple other tests are shown there.
void symmetryTest(const std::vector<cv::DMatch>& matches1,
                  const std::vector<cv::DMatch>& matches2,
                  std::vector<cv::DMatch>& symMatches)
{
    symMatches.clear();
    for (vector<DMatch>::const_iterator matchIterator1 = matches1.begin();
         matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (vector<DMatch>::const_iterator matchIterator2 = matches2.begin();
             matchIterator2 != matches2.end(); ++matchIterator2)
        {
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx &&
                (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(DMatch((*matchIterator1).queryIdx,
                                            (*matchIterator1).trainIdx,
                                            (*matchIterator1).distance));
                break;
            }
        }
    }
}
As András Kovács says in the related answer, you can also calculate a fundamental matrix with RANSAC to eliminate outliers, using cv::findFundamentalMat.
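For example, a sketch of that filtering, assuming the matches, keypoints1 and keypoints2 variables from your code:
// Sketch: keep only the RANSAC inliers of the epipolar geometry.
// 'matches', 'keypoints1' and 'keypoints2' are assumed from the question's code.
std::vector<cv::Point2f> pts1, pts2;
for (const cv::DMatch& m : matches) {
    pts1.push_back(keypoints1[m.queryIdx].pt);
    pts2.push_back(keypoints2[m.trainIdx].pt);
}
std::vector<uchar> inlierMask;
if (pts1.size() >= 8) {  // at least 8 point pairs for a fundamental matrix
    cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);
    std::vector<cv::DMatch> inlierMatches;
    for (size_t i = 0; i < matches.size(); ++i)
        if (inlierMask[i]) inlierMatches.push_back(matches[i]);
    matches.swap(inlierMatches);  // keep only the inliers
}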
And how can I get each matched point's pixel coordinates in QT creator just like MATLAB?
I hope I understood correctly that you want the point coordinates of homologous points that match. I extract the coordinates of the points after the symmetryTest.
The coordinates are inside the keypoints.
for (size_t rows = 0; rows < sym_matches.size(); rows++) {
    float x1 = keypoints_1[sym_matches[rows].queryIdx].pt.x;
    float y1 = keypoints_1[sym_matches[rows].queryIdx].pt.y;
    float x2 = keypoints_2[sym_matches[rows].trainIdx].pt.x;
    float y2 = keypoints_2[sym_matches[rows].trainIdx].pt.y;
    // Push the coordinates into a vector, e.g. std::vector<cv::Point2f>
}
You can do the same with your matches, keypoints1 and keypoints2.

OpenCV C++ create reusable set of keypoints and descriptors for stitching multiple images

I have created a program that can stitch multiple images together and am now looking to improve its efficiency. Depending on the size of the stitched image, it eventually becomes so large and contains so many keypoints that the machine runs out of allocatable memory. To compensate for this, my goal is to store all the keypoints and descriptors as they are found, so that I don't need to find them again in the master stitched image and only need to find them in the new image being stitched. I had this process working in Python but haven't had the same luck in C++.
In order to do this I need to perform a perspectiveTransform() on the keypoints and therefore convert them from vector<KeyPoint> to vector<Point2f> and back to vector<KeyPoint>. I have been able to achieve this and can confirm it works (picture to follow). I am not sure whether the same process needs to be done to the descriptors (currently I have done it, but either it's wrong or not effective).
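As a side note, the round trip itself can be written compactly with cv::KeyPoint::convert; a minimal sketch, where kp1 and H stand in for the variables in the code below, and size/angle are reset to defaults on the way back:
// Round trip vector<KeyPoint> -> vector<Point2f> -> warp -> vector<KeyPoint>.
// kp1 and H are placeholders for the corresponding variables below.
std::vector<cv::Point2f> pts;
cv::KeyPoint::convert(kp1, pts);        // keep only the coordinates
cv::perspectiveTransform(pts, pts, H);  // warp the coordinates
std::vector<cv::KeyPoint> kpWarped;
cv::KeyPoint::convert(pts, kpWarped);   // size defaults to 1, angle is lost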
Issue: When I run this, the keypoints and descriptors don't appear to work, and I throw an error I created, namely: "Not enough matches found", even though I know at least the keypoints are making their way to the function.
Here is the code for the keypoint and descriptor transforms. The code first calculates the warpPerspective to be applied to image1, as the homography will warp the second image only. The rest of the code deals with keypoints and descriptors.
tuple<Mat, vector<KeyPoint>, Mat> stitchMatches(Mat image1, Mat image2, Mat homography,
                                                vector<KeyPoint> kp1, vector<KeyPoint> kp2,
                                                Mat desc1, Mat desc2) {
    Mat result, destination, descriptors_updated;
    vector<Point2f> fourPoint;
    vector<KeyPoint> keypoints_updated;
    //-- Get the four corners of the first image (master)
    fourPoint.push_back(Point2f(0, 0));
    fourPoint.push_back(Point2f(image1.size().width, 0));
    fourPoint.push_back(Point2f(0, image1.size().height));
    fourPoint.push_back(Point2f(image1.size().width, image1.size().height));
    //perspectiveTransform(Mat(fourPoint), destination, homography);
    //-- Get points used to determine Htr
    double min_x, min_y, tam_x, tam_y;
    float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
    min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
    min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
    min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
    min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
    max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
    max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
    max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
    max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
    min_x = min(min_x1, min_x2);
    min_y = min(min_y1, min_y2);
    tam_x = max(max_x1, max_x2);
    tam_y = max(max_y1, max_y2);
    //-- Htr used to map image one to the result, in line with the already warped image 1
    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0) {
        tam_x = image2.size().width - min_x;
        Htr.at<double>(0, 2) = -min_x;
    }
    if (min_y < 0) {
        tam_y = image2.size().height - min_y;
        Htr.at<double>(1, 2) = -min_y;
    }
    result = Mat(Size(tam_x * 2, tam_y * 2), CV_8UC3, cv::Scalar(0, 0, 0));
    warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    warpPerspective(image1, result, (Htr * homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    //-- Variables to hold the keypoints at the respective stages
    vector<Point2f> kp1Local, kp2Local;
    vector<KeyPoint> kp1updated, kp2updated;
    // Localize the keypoints to allow for the perspective change
    KeyPoint::convert(kp1, kp1Local);
    KeyPoint::convert(kp2, kp2Local);
    // perform perspective transform on the keypoints of type vector<Point2f>
    perspectiveTransform(kp1Local, kp1Local, (Htr));
    perspectiveTransform(kp2Local, kp2Local, (Htr * homography));
    // convert keypoints back to type vector<KeyPoint>
    for (size_t i = 0; i < kp1Local.size(); i++) {
        kp1updated.push_back(KeyPoint(kp1Local[i], 1.f));
    }
    for (size_t i = 0; i < kp2Local.size(); i++) {
        kp2updated.push_back(KeyPoint(kp2Local[i], 1.f));
    }
    // Add to master list of keypoints to be passed along during the next iteration
    keypoints_updated.reserve(kp1updated.size() + kp2updated.size());
    keypoints_updated.insert(keypoints_updated.end(), kp1updated.begin(), kp1updated.end());
    keypoints_updated.insert(keypoints_updated.end(), kp2updated.begin(), kp2updated.end());
    // warpPerspective of descriptors to match that of the images and corresponding keypoints
    Mat desc1New, desc2New;
    warpPerspective(desc2, desc2New, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    warpPerspective(desc1, desc1New, (Htr * homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    // create a new Mat including the descriptors from desc1 and desc2
    descriptors_updated.push_back(desc1New);
    descriptors_updated.push_back(desc2New);
    //------------ TEST to see if keypoints have moved
    Mat img_keypoints;
    drawKeypoints(result, keypoints_updated, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    imshow("Keypoints 1", img_keypoints);
    waitKey();
    destroyAllWindows();
    return {result, keypoints_updated, descriptors_updated};
}
The following code is my master stitching program that does the actual stitching.
tuple<Mat, vector<KeyPoint>, Mat> stitch(Mat img1, Mat img2, vector<KeyPoint> keypoints, Mat descriptors,
                                         String featureDetection, String featureExtractor,
                                         String keypointsMatcher, String showMatches) {
    Mat desc, desc1, desc2, homography, result, croppedResult, descriptors_updated;
    std::vector<KeyPoint> keypoints_updated, kp1, kp2;
    std::vector<DMatch> matches;
    //-- Base case [2]
    if (keypoints.empty()) {
        //-- Detect keypoints and their descriptors
        tie(kp1, desc1) = KeyPointDescriptor(img1, featureDetection, featureExtractor);
        tie(kp2, desc2) = KeyPointDescriptor(img2, featureDetection, featureExtractor);
        // Find matches and calculate the homography based on the keypoints and descriptors
        std::tie(matches, homography) = matchFeatures(kp1, desc1, kp2, desc2, keypointsMatcher);
        // draw matches if requested
        if (showMatches == "true") {
            drawMatchedImages(img1, kp1, img2, kp2, matches);
        }
        // stitch the images and update the keypoints and descriptors
        std::tie(result, keypoints_updated, descriptors_updated) =
            stitchMatches(img1, img2, homography, kp1, kp2, desc1, desc2);
        // crop using the created cropping function
        croppedResult = crop(result);
        return {croppedResult, keypoints_updated, descriptors_updated};
    }
    // base case [3:n]
    else {
        // Get keypoints and descriptors of the new image and add them to the respective lists
        tie(kp2, desc2) = KeyPointDescriptor(img2, featureDetection, featureExtractor);
        // find matches and determine the homography
        std::tie(matches, homography) = matchFeatures(keypoints_updated, descriptors_updated, kp2, desc2, keypointsMatcher);
        // draw matches if requested
        if (showMatches == "true")
            drawMatchedImages(img1, keypoints, img2, kp2, matches);
        // stitch the images and update the keypoints and descriptors
        tie(result, keypoints_updated, descriptors_updated) =
            stitchMatches(img1, img2, homography, keypoints, kp2, descriptors, desc2);
        // crop using the created cropping function
        croppedResult = crop(result);
        return {croppedResult, keypoints_updated, descriptors_updated};
    }
}
Lastly, here is an image of the keypoints being transformed onto the stitched image. Any help is greatly appreciated!
After combing through the code, I just happened to find that I was using the wrong variable at one point! :)

How to draw matches in OpenCV?

I have matched two vectors of descriptors from two images:
cv::Ptr<BinaryDescriptorMatcher> bdm = BinaryDescriptorMatcher::createBinaryDescriptorMatcher();
std::vector<std::vector<cv::DMatch> > matches;
float maxDist = 10.0;
bdm->radiusMatch(descr2, descr1, matches, maxDist);
// descr1 from image1, descr2 from image2
std::vector<char> mask(matches.size(), 1);
But now I want to draw the found matches from the two images.
This does not work:
drawMatches(gmlimg, keylines, walls, keylines1, matches, outImg, cv::Scalar::all(-1), cv::Scalar::all(-1), mask, DrawLinesMatchesFlags::DEFAULT);
And this neither:
drawLineMatches(gmlimg, keylines, walls, keylines1, matches, outImg, cv::Scalar::all(-1), cv::Scalar::all(-1), mask, DrawLinesMatchesFlags::DEFAULT);
Since you are getting the matches as std::vector<std::vector<cv::DMatch> >, which is what you get when using BinaryDescriptorMatcher, you can draw the matches as follows:
std::vector<DMatch> matches_to_draw;
std::vector<std::vector<DMatch> > matches_from_matcher;
std::vector<cv::KeyPoint> keypoints_Object, keypoints_Scene; // Keypoints
// Iterate through the matches from the descriptor matcher
for (unsigned int i = 0; i < matches_from_matcher.size(); i++)
{
    if (matches_from_matcher[i].size() >= 1)
    {
        cv::DMatch v = matches_from_matcher[i][0];
        /*
        Maybe you can add a filtering here for the matches;
        skip it if you want to see all the matches.
        Something like this - avg is the average distance between all keypoint pairs:
        double difference_for_each_match = fabs(keypoints_Object[v.queryIdx].pt.y
                                                - keypoints_Scene[v.trainIdx].pt.y);
        if ((fabs(avg - difference_for_each_match)) <= 5)
        {
            matches_to_draw.push_back(v);
        }
        */
        // This is for all matches
        matches_to_draw.push_back(v);
    }
}
cv::drawMatches(image, keypoints_Object, walls, keypoints_Scene, matches_to_draw, outImg,
                cv::Scalar::all(-1), cv::Scalar::all(-1), mask, cv::DrawMatchesFlags::DEFAULT);
The outImg should have the matches and the keypoints drawn.
Let me know if it helps!

How to get efficient results in ORB using OpenCV 2.4.9?

int method = 0;
std::vector<cv::KeyPoint> keypoints_object, keypoints_scene;
cv::Mat descriptors_object, descriptors_scene;
cv::ORB orb;
int minHessian = 500;
//cv::OrbFeatureDetector detector(500);
//ORB orb(25, 1.0f, 2, 10, 0, 2, 0, 10);
cv::OrbFeatureDetector detector(25, 1.0f, 2, 10, 0, 2, 0, 10);
//cv::OrbFeatureDetector detector(500, 1.20000004768, 8, 31, 0, 2, ORB::HARRIS_SCORE, 31);
cv::OrbDescriptorExtractor extractor;
//-- object
if (method == 0) { //-- ORB
    orb.detect(img_object, keypoints_object);
    //cv::drawKeypoints(img_object, keypoints_object, img_object, cv::Scalar(0,255,255));
    //cv::imshow("template", img_object);
    orb.compute(img_object, keypoints_object, descriptors_object);
} else { //-- SURF test
    detector.detect(img_object, keypoints_object);
    extractor.compute(img_object, keypoints_object, descriptors_object);
}
// http://stackoverflow.com/a/11798593
//if (descriptors_object.type() != CV_32F)
//    descriptors_object.convertTo(descriptors_object, CV_32F);
//for(;;) {
cv::Mat frame = cv::imread("E:\\Projects\\Images\\2-134-2.bmp", 1);
cv::Mat img_scene = cv::Mat(frame.size(), CV_8UC1);
cv::cvtColor(frame, img_scene, cv::COLOR_RGB2GRAY);
//frame.copyTo(img_scene);
if (method == 0) { //-- ORB
    orb.detect(img_scene, keypoints_scene);
    orb.compute(img_scene, keypoints_scene, descriptors_scene);
} else { //-- SURF
    detector.detect(img_scene, keypoints_scene);
    extractor.compute(img_scene, keypoints_scene, descriptors_scene);
}
//-- matching descriptor vectors using the brute-force matcher
cv::BFMatcher matcher;
std::vector<cv::DMatch> matches;
cv::Mat img_matches;
if (!descriptors_object.empty() && !descriptors_scene.empty()) {
    matcher.match(descriptors_object, descriptors_scene, matches);
    double max_dist = 0; double min_dist = 100;
    //-- Quick calculation of max and min distance between keypoints
    for (int i = 0; i < descriptors_object.rows; i++) {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    //printf("-- Max dist : %f \n", max_dist);
    //printf("-- Min dist : %f \n", min_dist);
    //-- Draw only good matches (i.e. whose distance is less than 3*min_dist)
    std::vector<cv::DMatch> good_matches;
    for (int i = 0; i < descriptors_object.rows; i++) {
        if (matches[i].distance < (max_dist / 1.6)) {
            good_matches.push_back(matches[i]);
        }
    }
    cv::drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
                    good_matches, img_matches, cv::Scalar::all(-1), cv::Scalar::all(-1),
                    std::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    //-- localize the object
    std::vector<cv::Point2f> obj;
    std::vector<cv::Point2f> scene;
    for (size_t i = 0; i < good_matches.size(); i++) {
        //-- get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }
    if (!obj.empty() && !scene.empty() && good_matches.size() >= 4) {
        cv::Mat H = cv::findHomography(obj, scene, cv::RANSAC);
        //-- get the corners from the object to be detected
        std::vector<cv::Point2f> obj_corners(4);
        obj_corners[0] = cv::Point(0, 0);
        obj_corners[1] = cv::Point(img_object.cols, 0);
        obj_corners[2] = cv::Point(img_object.cols, img_object.rows);
        obj_corners[3] = cv::Point(0, img_object.rows);
        std::vector<cv::Point2f> scene_corners(4);
        cv::perspectiveTransform(obj_corners, scene_corners, H);
        //-- Draw lines between the corners (the mapped object in the scene - image_2)
        cv::line(img_matches,
                 scene_corners[0] + cv::Point2f(img_object.cols, 0),
                 scene_corners[1] + cv::Point2f(img_object.cols, 0),
                 cv::Scalar(0, 255, 0), 4);
        cv::line(img_matches,
                 scene_corners[1] + cv::Point2f(img_object.cols, 0),
                 scene_corners[2] + cv::Point2f(img_object.cols, 0),
                 cv::Scalar(0, 255, 0), 4);
        cv::line(img_matches,
                 scene_corners[2] + cv::Point2f(img_object.cols, 0),
                 scene_corners[3] + cv::Point2f(img_object.cols, 0),
                 cv::Scalar(0, 255, 0), 4);
        cv::line(img_matches,
                 scene_corners[3] + cv::Point2f(img_object.cols, 0),
                 scene_corners[0] + cv::Point2f(img_object.cols, 0),
                 cv::Scalar(0, 255, 0), 4);
    }
}
t = (double)getTickCount() - t;
printf("Time: %f", (double)(t * 1000. / getTickFrequency()));
cv::imshow("match result", img_matches);
cv::waitKey();
return 0;
Here I am performing template matching between two images, where I extract keypoints using the ORB algorithm and match them with the BF matcher, but I am not getting good results. Here I am adding an image to illustrate the problem:
Here, as you can see, the dark blue line on the teddy should actually be a rectangle drawn around the object in the frame image, once the object is recognized by matching keypoints.
Here I am using OpenCV 2.4.9. What changes should I make to get good results?
In any feature detection + extraction followed by a homography estimation, there are many parameters you can play with. However, the main point to realise is that it's almost always the issue of computation time vs. accuracy.
The most crucial fail point of your code is your ORB initialization:
cv::OrbFeatureDetector detector(25, 1.0f, 2, 10, 0, 2, 0, 10);
The first parameter tells the extractor to only use the top 25 results from the detector. For a reliable estimation of an 8-DOF homography with no constraints on parameters, you should have an order of magnitude more features than parameters, i.e. 80, or just make it an even 100.
The second parameter is for scaling the images down (or the detector patch up) between octaves (or levels). Using the number 1.0f means you don't change the scale between octaves; this makes no sense, especially since your third parameter, the number of levels, is 2 and not 1. The default is 1.2f for the scale and 8 levels; for fewer calculations, use a scaling of 1.5f and 4 levels (again, just a suggestion, other parameters will work too).
Your fourth and last parameters say that the patch size to calculate on is 10x10; that's pretty small, but if you work on low resolution that's fine.
Your score type (the second-to-last parameter) can change the runtime a bit: you can use ORB::FAST_SCORE instead of ORB::HARRIS_SCORE, but it doesn't matter much.
Last but not least, when you initialise the BruteForce matcher object, you should remember to use the cv::NORM_HAMMING type, since ORB is a binary feature; this makes the norm calculations in the matching process actually meaningful.
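Putting these suggestions together, a possible initialization for OpenCV 2.4.9 could look like the sketch below; the exact values are just the suggestions above, not the only valid choice:
// Sketch of the suggested setup; tune the values for your own images.
cv::OrbFeatureDetector detector(100,  // ~100 features instead of 25
                                1.5f, // scale factor between pyramid levels
                                4,    // number of levels
                                31,   // edge threshold
                                0,    // first level
                                2,    // WTA_K
                                cv::ORB::HARRIS_SCORE,
                                31);  // patch size (default)
cv::OrbDescriptorExtractor extractor;
// ORB descriptors are binary, so match them with the Hamming norm.
cv::BFMatcher matcher(cv::NORM_HAMMING);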