OpenCV C vs C++

I am trying to use SURF, but I am having trouble finding a way to do so in C. The documentation only seems to cover the C++ API.
I have been able to detect SURF features:
IplImage *img = cvLoadImage("img5.jpg");
CvMat* image = cvCreateMat(img->height, img->width, CV_8UC1);
cvCvtColor(img, image, CV_BGR2GRAY);
// detecting keypoints
CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
CvMemStorage *storage = cvCreateMemStorage(0); // memory pool required by cvExtractSURF (was missing from the snippet)
int i;
//Extract SURF points by initializing parameters
CvSURFParams params = cvSURFParams(1, 1);
cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params );
printf("Image Descriptors: %d\n", imageDescriptors->total);
//draw the keypoints on the captured frame
for( i = 0; i < imageKeypoints->total; i++ )
{
CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem( imageKeypoints, i );
CvPoint center;
int radius;
center.x = cvRound(r->pt.x);
center.y = cvRound(r->pt.y);
radius = cvRound(r->size*1.2/9.*2);
cvCircle( image, center, radius, CV_RGB(0,255,0), 1, 8, 0 );
}
But I can't find the method I need to compare the descriptors of two images. I found this code in C++, but I'm having trouble translating it:
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
I would appreciate it if someone could point me to a descriptor matcher or, even better, tell me where I can find OpenCV documentation for C only.

This link might give you a hint: https://projects.developer.nokia.com/opencv/browser/opencv/opencv-2.3.1/samples/c/find_obj.cpp. Look at the function naiveNearestNeighbor; a simplified sketch of the idea follows.
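For reference, here is a minimal sketch of what that nearest-neighbour matching boils down to with the C API. It assumes the CvSeq descriptors produced by cvExtractSURF; the descriptor length is 128 when the extended flag is set, as cvSURFParams(1, 1) above appears to do (64 otherwise), and the function names and the 0.6 ratio threshold are illustrative, not taken from the sample.
/* Squared L2 distance between two SURF descriptors of the given length. */
double descriptorDistance(const float* a, const float* b, int length)
{
    double total = 0;
    int k;
    for (k = 0; k < length; k++) {
        double d = a[k] - b[k];
        total += d * d;
    }
    return total;
}
/* For each descriptor in desc1, find its nearest neighbour in desc2 and keep
   the pair only if it is clearly closer than the second-best candidate. */
void bruteForceMatch(CvSeq* desc1, CvSeq* desc2, int length)
{
    int i, j;
    for (i = 0; i < desc1->total; i++) {
        const float* d1 = (const float*)cvGetSeqElem(desc1, i);
        double best = 1e30, second = 1e30;
        int bestIdx = -1;
        for (j = 0; j < desc2->total; j++) {
            const float* d2 = (const float*)cvGetSeqElem(desc2, j);
            double dist = descriptorDistance(d1, d2, length);
            if (dist < best) { second = best; best = dist; bestIdx = j; }
            else if (dist < second) { second = dist; }
        }
        if (bestIdx >= 0 && best < 0.6 * second)
            printf("descriptor %d in image 1 matches descriptor %d in image 2\n", i, bestIdx);
    }
}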

Check out the blog post from thioldhack; it contains sample code. It's for Qt, but you can easily adapt it for VC++ or any other environment. You will need to match the keypoints using the k-nearest-neighbour algorithm, as sketched below. It covers everything you need.
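A hedged C++ sketch of that k-nearest-neighbour matching with a ratio test, using the same 2.3-era BruteForceMatcher as the question's C++ snippet (descriptors1 and descriptors2 are assumed to be cv::Mat descriptors, and the 0.75 ratio is a common but arbitrary choice):
BruteForceMatcher<L2<float> > matcher;
vector<vector<DMatch> > knnMatches;
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2); // 2 nearest neighbours per query descriptor
vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++) {
    // keep a match only if it is clearly better than the second-best candidate
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}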

The slightly longer but surest way is to compile OpenCV on your computer with debug information and step into the C++ implementation with a debugger. You can also copy the implementation into your project and start peeling it like an onion until you get down to pure C.

Related

I want to get high quality feature points only

I'm currently working on real-time feature matching using OpenCV 3.4.0 and C++ in Qt Creator.
My code matches features between the first frame captured from the webcam and the current webcam frame.
Mat frame1, frame2, img1, img2, img1_gray, img2_gray;
int n = 0;
VideoCapture cap1(0);
namedWindow("Video Capture1", WINDOW_NORMAL);
namedWindow("Reference img", WINDOW_NORMAL);
namedWindow("matches1", WINDOW_NORMAL);
moveWindow("Video Capture1",50, 0);
moveWindow("Reference img",50, 100);
moveWindow("matches1",100,100);
while((char)waitKey(1)!='q'){
//raw image saved in frame
cap1>>frame1;
n=n+1;
if (n ==1){
imwrite("frame1.jpg", frame1);
cout<<"First frame saved as 'frame1'!!"<<endl;
}
if(frame1.empty())
break;
imshow("Video Capture1",frame1);
img1 = imread("frame1.jpg");
img2 = frame1;
cvtColor(img1, img1_gray, cv::COLOR_BGR2GRAY);
cvtColor(img2, img2_gray, cv::COLOR_BGR2GRAY);
imshow("Reference img",img1);
// detecting keypoints
int minHessian = 400;
Ptr<Feature2D> detector = xfeatures2d::SurfFeatureDetector::create();
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(img1_gray,keypoints1);
detector->detect(img2_gray,keypoints2);
// computing descriptors
Ptr<DescriptorExtractor> extractor = xfeatures2d::SurfFeatureDetector::create();
Mat descriptors1, descriptors2;
extractor->compute(img1_gray,keypoints1,descriptors1);
extractor->compute(img2_gray,keypoints2,descriptors2);
// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches1", img_matches);
But the code returns so many matched points that I cannot distinguish which one matches which.
So, are there any methods to get high-quality matched points only?
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
So, are there any methods to get high-quality matched points only?
I bet there are a lot of different methods. I am using, for example, a symmetry test: matches from img1 to img2 also have to exist when matching from img2 to img1. I am using the test from Improve matching of feature points with OpenCV; multiple other tests are shown there.
void symmetryTest(const std::vector<cv::DMatch> &matches1, const std::vector<cv::DMatch> &matches2, std::vector<cv::DMatch> &symMatches)
{
    symMatches.clear();
    for (vector<DMatch>::const_iterator matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (vector<DMatch>::const_iterator matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
        {
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx && (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(DMatch((*matchIterator1).queryIdx, (*matchIterator1).trainIdx, (*matchIterator1).distance));
                break;
            }
        }
    }
}
Like András Kovács says in the related answer, you can also calculate a fundamental matrix with RANSAC to eliminate outliers, using cv::findFundamentalMat.
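A minimal sketch of that idea, assuming the symMatches produced by symmetryTest above and keypoint vectors keypoints_1 and keypoints_2 (the variable names and thresholds are illustrative):
std::vector<cv::Point2f> points1, points2;
for (size_t i = 0; i < symMatches.size(); i++) {
    points1.push_back(keypoints_1[symMatches[i].queryIdx].pt);
    points2.push_back(keypoints_2[symMatches[i].trainIdx].pt);
}
// the RANSAC method needs at least 8 point pairs
std::vector<uchar> inlierMask(points1.size(), 0);
cv::Mat F = cv::findFundamentalMat(points1, points2, cv::FM_RANSAC, 3.0, 0.99, inlierMask);
// keep only the matches that RANSAC marked as inliers
std::vector<cv::DMatch> ransacMatches;
for (size_t i = 0; i < inlierMask.size(); i++)
    if (inlierMask[i])
        ransacMatches.push_back(symMatches[i]);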
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
I hope I understood correctly that you want the coordinates of the homologous points that match. I extract the coordinates of the points after the symmetryTest.
The coordinates are inside the keypoints.
for (size_t rows = 0; rows < sym_matches.size(); rows++) {
float x1 = keypoints_1[sym_matches[rows].queryIdx].pt.x;
float y1 = keypoints_1[sym_matches[rows].queryIdx].pt.y;
float x2 = keypoints_2[sym_matches[rows].trainIdx].pt.x;
float y2 = keypoints_2[sym_matches[rows].trainIdx].pt.y;
// Push the coordinates into a vector, e.g. std::vector<cv::Point2f>
}
You can do the same with your matches, keypoints1 and keypoints2.

Cannot find the Points() function when following the OpenCV 3.4.0 AKAZE and ORB planar tracking tutorial

I am new to OpenCV. I am trying to follow the OpenCV 3.4.0 tutorial on AKAZE and ORB planar tracking to perform feature matching.
I am using the Android NDK environment to write C++ code.
I managed to get the program to detect features from both the template and the camera pictures, and matched them using the FlannBasedMatcher. The returned matches are stored in the "matched1" and "matched2" arrays.
How can I get the keypoints stored in matched1 and matched2 out? The tutorial suggests using the function Points(matched1), which is not available in my coding environment. I checked that I have included all the header files from the tutorial.
Mat& mScene= *(Mat *) scene;
Mat& mTempl = *(Mat *) templ;
vector<Point2f> object_bb;
Mat descriptorsTemplate,descriptorsCamera;
vector<KeyPoint> keypointsTemplate,keypointsCamera;
Ptr<ORB> fd_de = ORB::create();
fd_de->detectAndCompute( mScene, Mat(), keypointsCamera, descriptorsCamera);
fd_de->detectAndCompute( mTempl, Mat(), keypointsTemplate, descriptorsTemplate );
Mat img_keypoints_1,img_keypoints_2;
cv::FlannBasedMatcher matcher = cv::FlannBasedMatcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
vector< vector<DMatch> > matches;
vector<KeyPoint> matched1, matched2;
matcher.knnMatch(descriptorsTemplate, descriptorsCamera, matches,2);
for( int i = 0; i < matches.size(); i++ ){
    if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
        matched1.push_back(keypointsTemplate[matches[i][0].queryIdx]);
        matched2.push_back(keypointsCamera[matches[i][0].trainIdx]);
    }
}
Mat inlier_mask, homography;
vector<KeyPoint> inliers1, inliers2;
vector<DMatch> inlier_matches;
if(matched1.size() >= 4) {
homography = findHomography(Points(matched1), Points(matched2), RANSAC, ransac_thresh, inlier_mask);
}
The Points function can be found in the tutorial's utils.h; remember to include it.
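If you cannot pull in utils.h in your NDK project, the helper is (roughly) just a KeyPoint-to-Point2f conversion, so a drop-in version along these lines should work:
std::vector<cv::Point2f> Points(const std::vector<cv::KeyPoint>& keypoints)
{
    std::vector<cv::Point2f> res;
    for (size_t i = 0; i < keypoints.size(); i++)
        res.push_back(keypoints[i].pt);   // pt holds the pixel coordinates of the keypoint
    return res;
}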

OpenCV C++ create reusable set of keypoints and descriptors for stitching multiple images

I have created a program that can stitch multiple images together and am now looking to improve its efficiency. Depending on the size of the stitched image, it eventually becomes so large and contains so many keypoints that the machine runs out of allocatable memory. To compensate for this, my goal is to store all the keypoints and descriptors as they are found, so that I don't need to find them again in the master stitched image and only need to find them in the new image being stitched. I had this process working in Python but haven't had the same luck in C++.
In order to do this I need to perform a perspectiveTransform() on the keypoints and therefore convert them from vector<KeyPoint> to vector<Point2f> and back to vector<KeyPoint>. I have been able to achieve this and can confirm it works (pic to follow). I am not sure whether the same process needs to be done to the descriptors (currently I have done it, but either it's wrong or it's not effective).
Issue: When I run this, the keypoints and descriptors don't appear to work, and I hit an error I created myself: "Not enough matches found", even though I know at least the keypoints are making their way to the function.
Here is the code for the keypoint and descriptor transforms. The code first calculates the warpPerspective to be applied to image one, as the homography will warp the second image only. The rest of the code deals with the keypoints and descriptors.
tuple<Mat, vector<KeyPoint>, Mat> stitchMatches(Mat image1,Mat image2, Mat homography, vector<KeyPoint> kp1, vector<KeyPoint> kp2 , Mat desc1, Mat desc2){
Mat result, destination, descriptors_updated;
vector<Point2f> fourPoint;
vector<KeyPoint> keypoints_updated;
//-Get the four corners of the first image (master)
fourPoint.push_back(Point2f (0,0));
fourPoint.push_back(Point2f (image1.size().width,0));
fourPoint.push_back(Point2f (0, image1.size().height));
fourPoint.push_back(Point2f (image1.size().width, image1.size().height));
//perspectiveTransform(Mat(fourPoint), destination, homography);
//- Get points used to determine Htr
double min_x, min_y, tam_x, tam_y;
float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
min_x = min(min_x1, min_x2);
min_y = min(min_y1, min_y2);
tam_x = max(max_x1, max_x2);
tam_y = max(max_y1, max_y2);
//- Htr is used to map image one into the result, in line with the already warped image 1
Mat Htr = Mat::eye(3,3,CV_64F);
if (min_x < 0){
tam_x = image2.size().width - min_x;
Htr.at<double>(0,2)= -min_x;
}
if (min_y < 0){
tam_y = image2.size().height - min_y;
Htr.at<double>(1,2)= -min_y;
}
result = Mat(Size(tam_x*2,tam_y*2), CV_8UC3,cv::Scalar(0,0,0));
warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(image1, result, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//-- Variables to hold the keypoints at the respective stages
vector<Point2f> kp1Local,kp2Local;
vector<KeyPoint> kp1updated, kp2updated;
//Localize the keypoints to allow for perspective change
KeyPoint::convert(kp1, kp1Local);
KeyPoint::convert(kp2, kp2Local);
//perform perspective transform on the keypoints of type vector<Point2f>
perspectiveTransform(kp1Local, kp1Local, (Htr));
perspectiveTransform(kp2Local, kp2Local, (Htr*homography));
//convert keypoints back to type vector<keypoint>
for( size_t i = 0; i < kp1Local.size(); i++ ) {
kp1updated.push_back(KeyPoint(kp1Local[i], 1.f));
}
for( size_t i = 0; i < kp2Local.size(); i++ ) {
kp2updated.push_back(KeyPoint(kp2Local[i], 1.f));
}
//Add to master of list of keypoints to be passed along during next iteration of image
keypoints_updated.reserve(kp1updated.size() + kp2updated.size());
keypoints_updated.insert(keypoints_updated.end(),kp1updated.begin(),kp1updated.end());
keypoints_updated.insert(keypoints_updated.end(),kp2updated.begin(),kp2updated.end());
//warpPerspective of descriptors to match that of the images and corresponding keypoints
Mat desc1New, desc2New;
warpPerspective(desc2, desc2New, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(desc1, desc1New, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//create a new Mat including the descriptors from desc1 and desc2
descriptors_updated.push_back(desc1New);
descriptors_updated.push_back(desc2New);
//------------TEST TO see if keypoints have moved
Mat img_keypoints;
drawKeypoints( result, keypoints_updated, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
imshow("Keypoints 1", img_keypoints );
waitKey();
destroyAllWindows();
return {result, keypoints_updated, descriptors_updated};
}
The following code is my master stitching program that does the actual stitching.
tuple<Mat,vector<KeyPoint>,Mat> stitch(Mat img1,Mat img2 ,vector<KeyPoint> keypoints, Mat descriptors, String featureDetection,String featureExtractor,String keypointsMatcher,String showMatches){
Mat desc, desc1, desc2, homography, result, croppedResult,descriptors_updated;
std::vector<KeyPoint> keypoints_updated, kp1, kp2;
std::vector<DMatch> matches;
//-Base Case[2]
if (keypoints.empty()){
//-Detect Keypoints and their descriptors
tie(kp1,desc1) = KeyPointDescriptor(img1, featureDetection,featureExtractor);
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//Find matches and calculated homography based on keypoints and descriptors
std::tie(matches,homography) = matchFeatures(kp1, desc1,kp2, desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true"){
drawMatchedImages( img1, kp1, img2, kp2, matches);
}
//stitch the images and update the keypoint and descriptors
std::tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,kp1,kp2,desc1,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
//base case[3:n]
else{
//Get keypoints and descriptors of new image and add to respective lists
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//find matches and determine homography
std::tie(matches,homography) = matchFeatures(keypoints_updated,descriptors_updated,kp2,desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true")
drawMatchedImages( img1, keypoints, img2, kp2, matches);
//stitch the images and update the keypoint and descriptors
tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,keypoints,kp2,descriptors,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
}
Lastly, here is the image of the keypoints being transformed onto the stitched image. Any help is greatly appreciated!
After combing through the code, I just happened to find that I was using the wrong variable at one point! :)

BRIEF implementation with OpenCV 2.4.10

Does someone know of a link to a BRIEF implementation with OpenCV 2.4? Regards.
PS: I know such questions are generally not welcome on SO, as the primary focus is what work you have done. But there was a similar question which was quite well received.
One of the answers to that question suggests a generic approach for SIFT, which could be extended to BRIEF. Here is my slightly modified code.
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/highgui/highgui.hpp>
//using namespace std;
using namespace cv;
int main(int argc, char *argv[])
{
Mat image = imread("load02.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::initModule_nonfree();
// Create smart pointer for SIFT feature detector.
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("HARRIS"); // "BRIEF was initially written. Changed after answer."
vector<KeyPoint> keypoints;
// Detect the keypoints
featureDetector->detect(image, keypoints); // NOTE: featureDetector is a pointer hence the '->'.
//Similarly, we create a smart pointer to the SIFT extractor.
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("BRIEF");
// Compute the 128 dimension SIFT descriptor at each keypoint.
// Each row in "descriptors" correspond to the SIFT descriptor for each keypoint
Mat descriptors;
featureExtractor->compute(image, keypoints, descriptors);
// If you would like to draw the detected keypoint just to check
Mat outputImage;
Scalar keypointColor = Scalar(255, 0, 0); // Blue keypoints.
drawKeypoints(image, keypoints, outputImage, keypointColor, DrawMatchesFlags::DEFAULT);
namedWindow("Output");
imshow("Output", outputImage);
char c = ' ';
while ((c = waitKey(0)) != 'q'); // Keep window there until user presses 'q' to quit.
return 0;
}
The issue with this code is that it gives an error: First-chance exception at 0x00007FFB84698B9C in Project2.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000071F4FBF8E0.
The error causes the function's execution to break; the debugger indicates that execution will resume at the namedWindow("Output"); line.
Could someone please help fix this issue, or suggest a new code altogether? Thanks.
EDIT: The terminal now shows an error: Assertion failed (!outImage.empty()) in cv::drawKeypoints, file ..\..\..\..opencv\modules\features2d\src\draw.cpp, line 115. The next statement from which the code will resume remains the same, as drawKeypoints is called just before it.
In OpenCV, BRIEF is a DescriptorExtractor, not a FeatureDetector. According to FeatureDetector::create, this factory method does not support the "BRIEF" algorithm. In other words, FeatureDetector::create("BRIEF") returns a null pointer and your program crashes.
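A quick way to confirm this (a small sketch, not part of the original code; it needs <iostream> for std::cerr) is to check the pointer returned by the factory before using it:
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("BRIEF");
if (featureDetector.empty()) {   // empty() is true when create() did not recognise the name
    std::cerr << "FeatureDetector::create(\"BRIEF\") returned a null pointer" << std::endl;
    return -1;
}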
The general steps in feature matching are:
Find some interesting (feature) points in an image: FeatureDetector
Find a way to describe those points: DescriptorExtractor
Try to match descriptors (feature vectors) in two images: DescriptorMatcher
BRIEF is an algorithm only for step 2. You can use other methods, such as HARRIS or ORB, in step 1 and pass the result to step 2 using BRIEF. In addition, SIFT can be used in both steps 1 and 2 because the algorithm provides methods for both steps.
Here's a simple example of using BRIEF in OpenCV. First step, find points that look interesting (key points) in an image:
vector<KeyPoint> DetectKeyPoints(const Mat &image)
{
auto featureDetector = FeatureDetector::create("HARRIS");
vector<KeyPoint> keyPoints;
featureDetector->detect(image, keyPoints);
return keyPoints;
}
You can try any FeatureDetector algorithm instead of "HARRIS". Next step, compute the descriptors from key points:
Mat ComputeDescriptors(const Mat &image, vector<KeyPoint> &keyPoints)
{
auto featureExtractor = DescriptorExtractor::create("BRIEF");
Mat descriptors;
featureExtractor->compute(image, keyPoints, descriptors);
return descriptors;
}
You can use an algorithm other than "BRIEF", too. Note that the algorithms available for DescriptorExtractor are not the same as those available for FeatureDetector. The last step is to match the two sets of descriptors:
vector<DMatch> MatchTwoImage(const Mat &descriptor1, const Mat &descriptor2)
{
auto matcher = DescriptorMatcher::create("BruteForce");
vector<DMatch> matches;
matcher->match(descriptor1, descriptor2, matches);
return matches;
}
Similarly, you can try a matching algorithm other than "BruteForce". Finally, back in the main program, you can build the application from those functions:
auto img1 = cv::imread("image1.jpg");
auto img2 = cv::imread("image2.jpg");
auto keyPoints1 = DetectKeyPoints(img1);
auto keyPoints2 = DetectKeyPoints(img2);
auto descriptor1 = ComputeDescriptors(img1, keyPoints1);
auto descriptor2 = ComputeDescriptors(img2, keyPoints2);
auto matches = MatchTwoImage(descriptor1, descriptor2);
and use the matches vector to complete your application. If you want to check the results, OpenCV also provides functions to draw the results of steps 1 and 3 onto an image. For example, to draw the matches from the final step:
Mat result;
drawMatches(img1, keyPoints1, img2, keyPoints2, matches, result);
imshow("result", result);
waitKey(0);

SIFT matching gives very poor results

I'm working on a project where I will use homographies as features in a classifier. My problem is in automatically calculating the homographies: I'm using SIFT descriptors to find the points between the two images on which to calculate the homography, but SIFT is giving me very poor results, so I can't use it in my work.
I'm using OpenCV 2.4.3.
At first I was using SURF, but I had similar results, so I decided to use SIFT, which is slower but more precise. My first guess was that the image resolution in my dataset was too low, but I ran my algorithm on a state-of-the-art dataset (Pointing 04) and obtained pretty much the same results, so the problem lies in what I do and not in my dataset.
The matching between the SIFT keypoints found in each image is done with the FlannBased matcher; I tried the BruteForce one, but the results were again pretty much the same.
This is an example of the match I found (image from the Pointing 04 dataset):
The above image shows how poor the matches found by my program are. Only one point is a correct match. I need at least 4 correct matches for what I have to do.
Here is the code I use:
This is the function that extracts SIFT descriptors from each image
void extract_sift(const Mat &img, vector<KeyPoint> &keypoints, Mat &descriptors, Rect* face_rec) {
// Create masks for ROI on the original image
Mat mask1 = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi1(mask1, *face_rec);
roi1 = Scalar(255, 255, 255);
// Extracts keypoints in ROIs only
Ptr<DescriptorExtractor> featExtractor;
Ptr<FeatureDetector> featDetector;
Ptr<DescriptorMatcher> featMatcher;
featExtractor = new SIFT();
featDetector = FeatureDetector::create("SIFT");
featDetector->detect(img,keypoints,mask1);
featExtractor->compute(img,keypoints,descriptors);
}
This is the function that matches two images' descriptors
void match_sift(const Mat &img1, const Mat &img2, const vector<KeyPoint> &kp1,
const vector<KeyPoint> &kp2, const Mat &descriptors1, const Mat &descriptors2,
vector<Point2f> &p_im1, vector<Point2f> &p_im2) {
// Matching descriptor vectors using FLANN matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
std::vector< DMatch > matches;
matcher->match( descriptors1, descriptors2, matches );
double max_dist = 0; double min_dist = 100;
// Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors1.rows; ++i ){
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
// Draw only the 4 best matches
std::vector< DMatch > good_matches;
// XXX: DMatch has no sort method, maybe a more efficient min-extraction algorithm can be used here?
double min=matches[0].distance;
int min_i = 0;
for( int i = 0; i < (matches.size()>4?4:matches.size()); ++i ) {
for(int j=0;j<matches.size();++j)
if(matches[j].distance < min) {
min = matches[j].distance;
min_i = j;
}
good_matches.push_back( matches[min_i]);
matches.erase(matches.begin() + min_i);
min=matches[0].distance;
min_i = 0;
}
Mat img_matches;
drawMatches( img1, kp1, img2, kp2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imwrite("imgMatch.jpeg",img_matches);
imshow("",img_matches);
waitKey();
for( int i = 0; i < good_matches.size(); i++ )
{
// Get the points from the best matches
p_im1.push_back( kp1[ good_matches[i].queryIdx ].pt );
p_im2.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
}
And these functions are called here:
extract_sift(dataset[i].img,dataset[i].keypoints,dataset[i].descriptors,face_rec);
[...]
// Extract keypoints from i+1 image and calculate homography
extract_sift(dataset[i+1].img,dataset[i+1].keypoints,dataset[i+1].descriptors,face_rec);
dataset[front].points_r.clear(); // XXX: dunno if clearing the points every time is the best way to do it..
match_sift(dataset[front].img,dataset[i+1].img,dataset[front].keypoints,dataset[i+1].keypoints,
dataset[front].descriptors,dataset[i+1].descriptors,dataset[front].points_r,dataset[i+1].points_r);
dataset[i+1].H = findHomography(dataset[front].points_r,dataset[i+1].points_r, RANSAC);
Any help on how to improve the matching performance would be really appreciated, thanks.
You apparently use the "best four points" in your code w.r.t. the distance of the matches. In other words, you consider that a match is valid if both descriptors are really similar. I believe this is wrong. Did you try to draw all of the matches? Many of them should be wrong, but many should be good as well.
The distance of a match just tells how similar both points are. This doesn't tell if the match is coherent geometrically. Selecting the best matches should definitely consider the geometry.
Here is how I would do it:
Detect the corners (you already do this)
Find the matches (you already do this)
Try to find a homography transform between both images from the matches (don't filter them beforehand!) using findHomography(...)
findHomography(...) will tell you which matches are the inliers. Those are your good_matches; see the sketch below.
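A rough sketch of steps 3-4, reusing kp1, kp2 and the unfiltered matches from the question's code (the 3.0 reprojection threshold in pixels is an assumed value):
std::vector<Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); i++) {
    pts1.push_back(kp1[matches[i].queryIdx].pt);
    pts2.push_back(kp2[matches[i].trainIdx].pt);
}
// findHomography needs at least 4 point pairs
std::vector<uchar> inlier_mask;
Mat H = findHomography(pts1, pts2, RANSAC, 3.0, inlier_mask);
// the inliers reported by RANSAC are the geometrically consistent matches
std::vector<DMatch> good_matches;
for (size_t i = 0; i < matches.size(); i++)
    if (inlier_mask[i])
        good_matches.push_back(matches[i]);
The surviving good_matches can then be drawn with drawMatches or fed into the rest of your homography pipeline.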