OpenCV FREAK returns too many outliers - C++

I am trying out the fairly new FREAK descriptor from the latest version of OpenCV, following the freak_demo.cpp example. Instead of SURF I use FAST for detection. My basic code is something like this:
std::vector<KeyPoint> keypointsA, keypointsB;
Mat descriptorsA, descriptorsB;
std::vector<DMatch> matches;
FREAK extractor;
BruteForceMatcher<Hamming> matcher;
FAST(imgA,keypointsA,100);
FAST(imgB,keypointsB,20);
extractor.compute( imgA, keypointsA, descriptorsA );
extractor.compute( imgB, keypointsB, descriptorsB );
matcher.match(descriptorsA, descriptorsB, matches);
The algorithm finds a lot of matches, but there are a lot of outliers. Am I doing things right? Is there a way for tuning the algorithm?

When doing matching there are always some refinement steps to get rid of outliers.
What I usually do is discard matches whose distance exceeds a threshold, for example:
for (size_t i = 0; i < matches.size(); )
{
    if (matches[i].distance > 200)
    {
        matches.erase(matches.begin() + i); // drop this match...
    }
    else
    {
        ++i;                                // ...and only advance when the match is kept
    }
}
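If C++11 is available, the same filter can be written without manual index bookkeeping using the erase-remove idiom (a small sketch with the same illustrative 200 threshold; needs <algorithm>):
matches.erase(std::remove_if(matches.begin(), matches.end(),
                             [](const cv::DMatch &m) { return m.distance > 200; }),
              matches.end());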
Then, I use RANSAC to see which matches fit the homography model. OpenCV has a function for this:
std::vector<cv::Point2f> trainMatches, queryMatches;
cv::Mat status;
for (size_t i = 0; i < matches.size(); i++)
{
    // coordinates are normalized by 500, hence the small RANSAC threshold below
    trainMatches.push_back(cv::Point2f(keypointsB[matches[i].trainIdx].pt.x / 500.0f,
                                       keypointsB[matches[i].trainIdx].pt.y / 500.0f));
    queryMatches.push_back(cv::Point2f(keypointsA[matches[i].queryIdx].pt.x / 500.0f,
                                       keypointsA[matches[i].queryIdx].pt.y / 500.0f));
}
Mat h = cv::findHomography(trainMatches, queryMatches, CV_RANSAC, 0.005, status);
And I just draw the inliers:
std::vector<cv::DMatch> inliers;
for (size_t i = 0; i < queryMatches.size(); i++)
{
    if (status.at<uchar>(i) != 0)
    {
        inliers.push_back(matches[i]);
    }
}
Mat imgMatch;
drawMatches(imgA, keypointsA, imgB, keypointsB, inliers, imgMatch);
Just try different thresholds and distances until you get the desired results.

You can also train the descriptor by providing your own selected pairs, and tune the parameters in the constructor:
explicit FREAK( bool orientationNormalized = true
, bool scaleNormalized = true
, float patternScale = 22.0f
, int nbOctave = 4
, const vector<int>& selectedPairs = vector<int>()
);
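For example, here is a minimal sketch of pair selection with the OpenCV 2.4 API, assuming the imgA/imgB and keypoint vectors from the question (selectPairs() returns the indices of the most discriminative, least correlated pairs for your own data; 0.7 is the documented default correlation threshold):
std::vector<cv::Mat> trainImgs;
std::vector<std::vector<cv::KeyPoint> > trainKpts;
trainImgs.push_back(imgA);  trainKpts.push_back(keypointsA);
trainImgs.push_back(imgB);  trainKpts.push_back(keypointsB);

cv::FREAK trainer;
std::vector<int> pairs = trainer.selectPairs(trainImgs, trainKpts, 0.7, true);

// rebuild the extractor with tuned parameters and the learned pairs
cv::FREAK trainedExtractor(true,   // orientationNormalized
                           true,   // scaleNormalized
                           22.0f,  // patternScale
                           4,      // nOctaves
                           pairs); // selectedPairs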
BTW, a more efficient version of FREAK is on the way :-)

Related

Filtering out false positives from feature matching / homography – OpenCV

I have a program that takes a picture as input and whose objective is to determine whether a certain object (essentially an image) is contained within this picture. If so, it tries to estimate its position. This works really well when the object is in the picture; however, I get a lot of false positives when I put anything complex enough in the picture.
I was wondering if there is any good way to filter out these false positives. Hopefully something not too computationally expensive.
My program is based on the tutorial found here. Except I use BRISK instead of SURF so I don't need the contrib stuff.
HOW I GET MATCHES
descriptorMatcher->match(descImg1, descImg2, matches, Mat());
GOOD MATCHES
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descImg1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
std::vector< DMatch > good_matches;
for( int i = 0; i < descImg1.rows; i++ )
{ if( matches[i].distance < 4*min_dist )
{ good_matches.push_back( matches[i]); }
}
HOMOGRAPHY
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keyImg1[ good_matches[i].queryIdx ].pt );
scene.push_back( keyImg2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, FM_RANSAC );
OBJECT CORNERS
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img1.cols, 0 );
obj_corners[2] = cvPoint( img1.cols, img1.rows ); obj_corners[3] = cvPoint( 0, img1.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
You cannot completely eliminate false positives; that's why the RANSAC algorithm is used to find the homography. However, you can check whether the estimated homography is "good". See this question for details. If the estimated homography is wrong, you can discard it and assume that no object was found. Since you need at least 4 corresponding points to estimate a homography, you can reject homographies that were estimated from fewer inliers than a predefined threshold (e.g. 6). That will likely filter out most wrongly estimated homographies:
int minInliers = 6; //can be any value > 4
double reprojectionError = 3; // default value; lower it for a more conservative (reliable) estimate
Mat mask;
Mat H = findHomography( obj, scene, FM_RANSAC, reprojectionError, mask );
int inliers = 0;
for (int i = 0; i < mask.rows; ++i)
{
    if (mask.at<uchar>(i) == 1) inliers++;
}
if(inliers > minInliers)
{
//homography is good
}
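If you also want the "is the homography good" check mentioned above, one common heuristic (shown as a rough sketch with illustrative thresholds, not a definitive test) is to inspect the determinant of the top-left 2x2 block of H: a negative value means the mapping mirrors the plane, and extreme values mean the object is collapsed or blown up.
bool homographyLooksSane(const cv::Mat &H)
{
    if (H.empty()) return false;               // estimation failed
    CV_Assert(H.type() == CV_64F && H.rows == 3 && H.cols == 3);
    // sign and magnitude of the local scale change around the origin
    const double det = H.at<double>(0, 0) * H.at<double>(1, 1)
                     - H.at<double>(0, 1) * H.at<double>(1, 0);
    return det > 0.01 && det < 100.0;          // thresholds to be tuned for your data
}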
You can also use the ratio test proposed in the original SIFT paper to get better matches. Find the two closest descriptors for each query point and check whether the ratio between their distances is less than a threshold (David Lowe suggests 0.8). Check this link for details:
descriptorMatcher->knnMatch( descImg1, descImg2, knn_matches, 2 );
//-- Filter matches using Lowe's ratio test
const float ratio_thresh = 0.8f;
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
{
good_matches.push_back(knn_matches[i][0]);
}
}

Panoramic image construction

I am trying to construct a panoramic view from different images.
Initially I tried to stitch two images as part of the panoramic construction.
The two input images I am trying to stitch are:
I used the ORB feature descriptor to find features in the images, then I computed the homography matrix between the two images.
My code is:
int main(int argc, char **argv){
Mat img1 = imread(argv[1],1);
Mat img2 = imread(argv[2],1);
//-- Step 1: Detect the keypoints using orb Detector
std::vector<KeyPoint> kp2,kp1;
// Default parameters of ORB
int nfeatures=500;
float scaleFactor=1.2f;
int nlevels=8;
int edgeThreshold=15; // Changed default (31);
int firstLevel=0;
int WTA_K=2;
int scoreType=ORB::HARRIS_SCORE;
int patchSize=31;
int fastThreshold=20;
Ptr<ORB> detector = ORB::create(
nfeatures,
scaleFactor,
nlevels,
edgeThreshold,
firstLevel,
WTA_K,
scoreType,
patchSize,
fastThreshold );
Mat descriptors_img1, descriptors_img2;
//-- Step 2: Calculate descriptors (feature vectors)
detector->detect(img1, kp1,descriptors_img1);
detector->detect(img2, kp2,descriptors_img2);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(img1, kp1, descriptors_img1 );
extractor->compute(img2, kp2, descriptors_img2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
if ( descriptors_img1.empty() )
cvError(0,"MatchFinder","1st descriptor empty",__FILE__,__LINE__);
if ( descriptors_img2.empty() )
cvError(0,"MatchFinder","2nd descriptor empty",__FILE__,__LINE__);
descriptors_img1.convertTo(descriptors_img1, CV_32F);
descriptors_img2.convertTo(descriptors_img2, CV_32F);
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors_img1,descriptors_img2,matches);
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_img1.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist )
min_dist = dist;
if( dist > max_dist )
max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
std::vector< DMatch > good_matches;
for( int i = 0; i < descriptors_img1.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
Mat img_matches;
drawMatches(img1,kp1,img2,kp2,good_matches,img_matches,Scalar::all(-1),
Scalar::all(-1),vector<char>(),DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( kp1[ good_matches[i].queryIdx ].pt );
scene.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
Afterwards, some people told me to include the following code:
cv::Mat result;
warpPerspective( img1, result, H, cv::Size( img1.cols+img2.cols, img1.rows) );
cv::Mat half(result, cv::Rect(0, 0, img2.cols, img2.rows) );
img2.copyTo(half);
imshow("result",result);
The result I got is
I also tried using the built-in OpenCV stitching function, and I got this result:
I am trying to implement the stitching myself, so I don't want to use the built-in OpenCV stitch function.
Can anyone tell me where I went wrong and correct my code? Thanks in advance.
Image stitching includes the following steps:
Feature finding
Find camera parameters
Warping
Exposure compensation
Seam Finding
Blending
You have to do all of these steps in order to get a good result.
In your code you have only done the first part, that is, feature finding (and matching).
You can find a detailed explanation of how image stitching works on Learn OpenCV; a minimal sketch of the blending step also follows below.
Also I have the code on Github
Hope this helps.
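If you want to go one step beyond the plain copyTo() paste, here is a hedged sketch of the blending step using OpenCV's detail::MultiBandBlender (img1, img2 and H are the variables from the question; exposure compensation and seam finding are still left out for brevity):
#include <opencv2/stitching/detail/blenders.hpp>

// canvas holding img1 warped by H (same size as `result` in the question)
cv::Size canvasSize(img1.cols + img2.cols, img1.rows);
cv::Mat warped1;
cv::warpPerspective(img1, warped1, H, canvasSize);

// masks marking which canvas pixels each image actually covers
cv::Mat ones(img1.size(), CV_8U, cv::Scalar(255));
cv::Mat mask1;
cv::warpPerspective(ones, mask1, H, canvasSize);
cv::Mat mask2 = cv::Mat::zeros(canvasSize, CV_8U);
mask2(cv::Rect(0, 0, img2.cols, img2.rows)).setTo(255);

// img2 placed on a canvas of the same size
cv::Mat canvas2 = cv::Mat::zeros(canvasSize, img2.type());
img2.copyTo(canvas2(cv::Rect(0, 0, img2.cols, img2.rows)));

// multi-band blending (the blender expects CV_16SC3 images and CV_8U masks)
cv::detail::MultiBandBlender blender;
blender.prepare(cv::Rect(0, 0, canvasSize.width, canvasSize.height));
cv::Mat warped1_16s, canvas2_16s;
warped1.convertTo(warped1_16s, CV_16S);
canvas2.convertTo(canvas2_16s, CV_16S);
blender.feed(warped1_16s, mask1, cv::Point(0, 0));
blender.feed(canvas2_16s, mask2, cv::Point(0, 0));

cv::Mat blended, blendMask;
blender.blend(blended, blendMask);
blended.convertTo(blended, CV_8U);
imshow("blended", blended);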

C++ - OpenCV feature detection with ORB

I'm trying to extract and match features with OpenCV, using ORB for detection and FLANN for matching, and I get a really weird result. After loading my two images and converting them to grayscale, here's my code:
// Initiate ORB detector
Ptr<FeatureDetector> detector = ORB::create();
// find the keypoints and descriptors with ORB
detector->detect(gray_image1, keypoints_object);
detector->detect(gray_image2, keypoints_scene);
Ptr<DescriptorExtractor> extractor = ORB::create();
extractor->compute(gray_image1, keypoints_object, descriptors_object );
extractor->compute(gray_image2, keypoints_scene, descriptors_scene );
// Flann needs the descriptors to be of type CV_32F
descriptors_scene.convertTo(descriptors_scene, CV_32F);
descriptors_object.convertTo(descriptors_object, CV_32F);
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0; double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_object.rows; i++ )
{
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
if( matches[i].distance < 3*min_dist )
{
good_matches.push_back( matches[i]);
}
}
vector< Point2f > obj;
vector< Point2f > scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
// Find the Homography Matrix
Mat H = findHomography( obj, scene, CV_RANSAC );
// Use the Homography Matrix to warp the images
cv::Mat result;
warpPerspective(image1,result,H,Size(image1.cols+image2.cols,image1.rows));
cv::Mat half(result,cv::Rect(0,0,image2.cols,image2.rows));
image2.copyTo(half);
imshow( "Result", result );
And this is a screenshot of the weird result I'm getting:
screen shot
What might be the problem?
Thanks!
You are experiencing the result of bad matching: the homography that fits the data is not "realistic" and thus distorts the image.
You can debug your matching with imshow( "Good Matches", img_matches ); as done in the example.
There are multiple approaches to improve your matches:
Use the crossCheck option
Use the SIFT ratio test
Use the OutputArray mask in cv::findHomography to identify totally wrong homography computations
... and so on...
ORB produces binary descriptors, which don't work with the default FLANN matcher. Use brute force (BFMatcher) instead.
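For that last point, a minimal sketch reusing descriptors_object/descriptors_scene from the question: drop the two convertTo(CV_32F) calls and match the binary descriptors directly with Hamming distance and cross-checking (FLANN can also handle binary descriptors through an LSH index, but BFMatcher is the simplest fix):
// crossCheck keeps only matches that are mutual nearest neighbours
cv::BFMatcher matcher(cv::NORM_HAMMING, true);
std::vector<cv::DMatch> matches;
matcher.match(descriptors_object, descriptors_scene, matches);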

Improve matching of feature points with OpenCV

I want to match feature points in stereo images. I've already found and extracted the feature points with different algorithms, and now I need good matching. In this case I'm using the FAST algorithm for detection and extraction and the BruteForceMatcher for matching the feature points.
The matching code:
vector< vector<DMatch> > matches;
//using either FLANN or BruteForce
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 1 );
//just some temporarily code to have the right data structure
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
good_matches2.push_back(matches[i][0]);
}
Because there are a lot of false matches, I calculated the min and max distances and removed all matches that are too bad:
//calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_1.rows; i++ )
{
double dist = good_matches2[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//find the "good" matches
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{
if( good_matches2[i].distance <= 5*min_dist )
{
good_matches.push_back( good_matches2[i]);
}
}
The problem is that I either get a lot of false matches or only a few right ones (see the images below).
(source: codemax.de)
(source: codemax.de)
I think it's not a programming problem but rather a matching one. As far as I understand, the BruteForceMatcher only considers the descriptor distance between feature points (computed by the feature extractor), not their spatial distance (x/y position), which in my case is also important. Does anybody have experience with this problem or a good idea to improve the matching results?
EDIT
I changed the code so that it gives me the 50 best matches. After this I check whether the first match lies inside a specified area; if it doesn't, I take the next match, and so on, until I have found a match inside the given area.
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );
//look if the match is inside a defined area of the image
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
for (int j = 0; j < matches[i].size(); j++)
{
//calculate local distance for each possible match
Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
//save as best match if local distance is in specified area
if (dist < tresholdDist)
{
good_matches2.push_back(matches[i][j]);
j = matches[i].size();
}
}
}
I don't think I get more matches this way, but I'm able to remove more false matches:
(source: codemax.de)
An alternative method of determining high-quality feature matches is the ratio test proposed by David Lowe in his SIFT paper (see page 20 for an explanation). This test rejects poor matches by computing the ratio between the distances of the best and second-best match; if that ratio is above some threshold, the match is discarded as ambiguous.
std::vector<std::vector<cv::DMatch>> matches;
cv::BFMatcher matcher;
matcher.knnMatch(descriptors_1, descriptors_2, matches, 2); // Find two nearest matches
vector<cv::DMatch> good_matches;
for (int i = 0; i < matches.size(); ++i)
{
const float ratio = 0.8; // As in Lowe's paper; can be tuned
if (matches[i][0].distance < ratio * matches[i][1].distance)
{
good_matches.push_back(matches[i][0]);
}
}
By comparing the feature detection algorithms I found a good combination that gives me a lot more matches. Now I am using FAST for feature detection, SIFT for feature extraction and brute force for matching. Combined with the check whether a match lies inside a defined region, I get a lot of matches, see the image:
(source: codemax.de)
The relevant code:
Ptr<FeatureDetector> detector;
detector = new DynamicAdaptedFeatureDetector ( new FastAdjuster(10,true), 5000, 10000, 10);
detector->detect(leftImageGrey, keypoints_1);
detector->detect(rightImageGrey, keypoints_2);
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
extractor->compute( leftImageGrey, keypoints_1, descriptors_1 );
extractor->compute( rightImageGrey, keypoints_2, descriptors_2 );
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch( descriptors_1, descriptors_2, matches, 500 );
//look whether the match is inside a defined area of the image
//only 25% of maximum of possible distance
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
for (int j = 0; j < matches[i].size(); j++)
{
Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
//calculate local distance for each possible match
double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
//save as best match if local distance is in specified area and on same height
if (dist < tresholdDist && abs(from.y-to.y)<5)
{
good_matches2.push_back(matches[i][j]);
j = matches[i].size();
}
}
}
Besides the ratio test, you can:
Only use symmetric matches:
void symmetryTest(const std::vector<cv::DMatch> &matches1,
                  const std::vector<cv::DMatch> &matches2,
                  std::vector<cv::DMatch> &symMatches)
{
    symMatches.clear();
    for (std::vector<cv::DMatch>::const_iterator matchIterator1 = matches1.begin();
         matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (std::vector<cv::DMatch>::const_iterator matchIterator2 = matches2.begin();
             matchIterator2 != matches2.end(); ++matchIterator2)
        {
            // keep a match only if it is the best in both directions
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx &&
                (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(cv::DMatch((*matchIterator1).queryIdx,
                                                (*matchIterator1).trainIdx,
                                                (*matchIterator1).distance));
                break;
            }
        }
    }
}
And since it's a stereo pair, use a RANSAC test (based on the fundamental matrix):
void ransacTest(const std::vector<cv::DMatch> matches,
                const std::vector<cv::KeyPoint> &keypoints1,
                const std::vector<cv::KeyPoint> &keypoints2,
                std::vector<cv::DMatch> &goodMatches,
                double distance, double confidence, double minInlierRatio)
{
goodMatches.clear();
// Convert keypoints into Point2f
std::vector<cv::Point2f> points1, points2;
for (std::vector<cv::DMatch>::const_iterator it= matches.begin();it!= matches.end(); ++it)
{
// Get the position of left keypoints
float x= keypoints1[it->queryIdx].pt.x;
float y= keypoints1[it->queryIdx].pt.y;
points1.push_back(cv::Point2f(x,y));
// Get the position of right keypoints
x= keypoints2[it->trainIdx].pt.x;
y= keypoints2[it->trainIdx].pt.y;
points2.push_back(cv::Point2f(x,y));
}
// Compute the fundamental matrix using RANSAC
std::vector<uchar> inliers(points1.size(), 0);
cv::Mat fundamental = cv::findFundamentalMat(cv::Mat(points1), cv::Mat(points2),
                                             inliers, CV_FM_RANSAC,
                                             distance,    // max distance to the epipolar line
                                             confidence); // confidence probability
// extract the surviving (inlier) matches
std::vector<uchar>::const_iterator itIn = inliers.begin();
std::vector<cv::DMatch>::const_iterator itM = matches.begin();
// for all matches
for ( ;itIn!= inliers.end(); ++itIn, ++itM)
{
if (*itIn)
{ // it is a valid match
goodMatches.push_back(*itM);
}
}
}
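A possible usage sketch of the two helpers above, assuming the descriptors, keypoints and matcher from the question, and that the matcher is run in both directions with plain match() rather than knnMatch():
std::vector<cv::DMatch> matches12, matches21, symMatches, finalMatches;
matcher->match(descriptors_1, descriptors_2, matches12); // image 1 -> image 2
matcher->match(descriptors_2, descriptors_1, matches21); // image 2 -> image 1

symmetryTest(matches12, matches21, symMatches);

// distance = max distance to the epipolar line in pixels, confidence for RANSAC;
// minInlierRatio is accepted but not used by the helper above
ransacTest(symMatches, keypoints_1, keypoints_2, finalMatches, 3.0, 0.99, 0.0);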

SIFT matching gives very poor results

I'm working on a project where I will use homographies as features in a classifier. My problem is in automatically calculating the homographies: I'm using SIFT descriptors to find the points between the two images on which to calculate the homography, but SIFT is giving me very poor results, so I can't use it in my work.
I'm using OpenCV 2.4.3.
At first I was using SURF, but I had similar results, so I decided to use SIFT, which is slower but more precise. My first guess was that the image resolution in my dataset was too low, but I ran my algorithm on a state-of-the-art dataset (Pointing 04) and obtained pretty much the same results, so the problem lies in what I do and not in my dataset.
The matching between the SIFT keypoints found in each image is done with the FlannBased matcher; I tried the BruteForce one, but the results were again pretty much the same.
This is an example of the match I found (image from Pointing 04 dataset)
The image above shows how poor the matches found by my program are. Only 1 point is a correct match. I need (at least) 4 correct matches for what I have to do.
Here is the code I use:
This is the function that extracts SIFT descriptors from each image
void extract_sift(const Mat &img, vector<KeyPoint> &keypoints, Mat &descriptors, Rect* face_rec) {
// Create masks for ROI on the original image
Mat mask1 = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
Mat roi1(mask1, *face_rec);
roi1 = Scalar(255, 255, 255);
// Extracts keypoints in ROIs only
Ptr<DescriptorExtractor> featExtractor;
Ptr<FeatureDetector> featDetector;
Ptr<DescriptorMatcher> featMatcher;
featExtractor = new SIFT();
featDetector = FeatureDetector::create("SIFT");
featDetector->detect(img,keypoints,mask1);
featExtractor->compute(img,keypoints,descriptors);
}
This is the function that matches two images' descriptors
void match_sift(const Mat &img1, const Mat &img2, const vector<KeyPoint> &kp1,
const vector<KeyPoint> &kp2, const Mat &descriptors1, const Mat &descriptors2,
vector<Point2f> &p_im1, vector<Point2f> &p_im2) {
// Matching descriptor vectors using FLANN matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
std::vector< DMatch > matches;
matcher->match( descriptors1, descriptors2, matches );
double max_dist = 0; double min_dist = 100;
// Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors1.rows; ++i ){
double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
// Draw only the 4 best matches
std::vector< DMatch > good_matches;
// XXX: DMatch has no sort method, maybe a more efficient min extraction algorithm can be used here?
double min=matches[0].distance;
int min_i = 0;
for( int i = 0; i < (matches.size()>4?4:matches.size()); ++i ) {
for(int j=0;j<matches.size();++j)
if(matches[j].distance < min) {
min = matches[j].distance;
min_i = j;
}
good_matches.push_back( matches[min_i]);
matches.erase(matches.begin() + min_i);
min=matches[0].distance;
min_i = 0;
}
Mat img_matches;
drawMatches( img1, kp1, img2, kp2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imwrite("imgMatch.jpeg",img_matches);
imshow("",img_matches);
waitKey();
for( int i = 0; i < good_matches.size(); i++ )
{
// Get the points from the best matches
p_im1.push_back( kp1[ good_matches[i].queryIdx ].pt );
p_im2.push_back( kp2[ good_matches[i].trainIdx ].pt );
}
}
And these functions are called here:
extract_sift(dataset[i].img,dataset[i].keypoints,dataset[i].descriptors,face_rec);
[...]
// Extract keypoints from i+1 image and calculate homography
extract_sift(dataset[i+1].img,dataset[i+1].keypoints,dataset[i+1].descriptors,face_rec);
dataset[front].points_r.clear(); // XXX: dunno if clearing the points every time is the best way to do it..
match_sift(dataset[front].img,dataset[i+1].img,dataset[front].keypoints,dataset[i+1].keypoints,
dataset[front].descriptors,dataset[i+1].descriptors,dataset[front].points_r,dataset[i+1].points_r);
dataset[i+1].H = findHomography(dataset[front].points_r,dataset[i+1].points_r, RANSAC);
Any help on how to improve the matching performance would be really appreciated, thanks.
You apparently use the "best four points" in your code w.r.t. the distance of the matches. In other words, you consider that a match is valid if both descriptors are really similar. I believe this is wrong. Did you try to draw all of the matches? Many of them should be wrong, but many should be good as well.
The distance of a match just tells how similar both points are. This doesn't tell if the match is coherent geometrically. Selecting the best matches should definitely consider the geometry.
Here is how I would do it:
Detect the corners (you already do this)
Find the matches (you already do this)
Try to find a homography transform between both images from the matches (don't filter them beforehand!) using findHomography(...)
findHomography(...) will tell you which are the inliers. Those are your good_matches.
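A hedged sketch of those last two steps, reusing the names from match_sift() above (matches, kp1, kp2; the 3.0 pixel reprojection threshold is just the OpenCV default):
std::vector<cv::Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); ++i)
{
    pts1.push_back(kp1[matches[i].queryIdx].pt);
    pts2.push_back(kp2[matches[i].trainIdx].pt);
}

// the mask marks which matches are inliers of the estimated homography
std::vector<uchar> inlierMask;
cv::Mat H = cv::findHomography(pts1, pts2, CV_RANSAC, 3.0, inlierMask);

std::vector<cv::DMatch> good_matches;
for (size_t i = 0; i < matches.size(); ++i)
    if (inlierMask[i])
        good_matches.push_back(matches[i]);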