I am using FAST and FREAK to compute descriptors for a pair of images, then I apply knnMatch with a BruteForceMatcher, and finally I use a loop to keep only the good matches:
float nndrRatio = 0.7f;
std::vector<KeyPoint> keypointsA, keypointsB;
Mat descriptorsA, descriptorsB;
std::vector< vector< DMatch > > matches;
int threshold=9;
// detect keypoints:
FAST(objectMat,keypointsA,threshold,true);
FAST(sceneMat,keypointsB,threshold,true);
FREAK extractor;
// extract descriptors:
extractor.compute( objectMat, keypointsA, descriptorsA );
extractor.compute( sceneMat, keypointsB, descriptorsB );
BruteForceMatcher<Hamming> matcher;
// match
matcher.knnMatch(descriptorsA, descriptorsB, matches, 2);
// good matches search:
vector< DMatch > good_matches;
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i].size() < 2)
        continue;
    const DMatch &m1 = matches[i][0];
    const DMatch &m2 = matches[i][1];
    if (m1.distance <= nndrRatio * m2.distance)
        good_matches.push_back(m1);
}
// If there are at least 7 good matches, then the object has been found
if (good_matches.size() >= 7)
{
    cout << "OBJECT FOUND!" << endl;
}
I think the problem could be the good-matches search method, because it works fine with the FlannBasedMatcher but behaves very strangely with the BruteForceMatcher. I suspect I may be doing something nonsensical with this method, since the Hamming distance works on binary descriptors, but I can't think of a way to adapt it!
Any links, snippets, ideas,... please?
Your code is not bad, but I don't think it is what you want to do. Why did you choose this method?
If you want to detect an object in an image using OpenCV, you should maybe try the Cascade Classification. This link will explain how to train a classifier.
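A minimal sketch of running a trained cascade, assuming sceneMat is the image to search and the XML file name is just a placeholder for whatever cascade you train:
CascadeClassifier cascade;
if (cascade.load("my_object_cascade.xml")) // placeholder path to a trained cascade
{
    // Search the scene at multiple scales and report any detections.
    std::vector<Rect> detections;
    cascade.detectMultiScale(sceneMat, detections);
    if (!detections.empty())
        cout << "OBJECT FOUND!" << endl;
}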
EDIT: If you think it is too complicated and if the object you want to detect is planar, you can try this tutorial (it basically computes the inliers by trying to find a homography transform between the object and the image). But the cascade classification is more general for object detection.
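For reference, a minimal sketch of that homography-based check, reusing keypointsA/keypointsB and good_matches from the code above (the 3-pixel RANSAC threshold and the 7-inlier cutoff are arbitrary choices, not values from the tutorial):
// Gather the coordinates of the ratio-test survivors.
std::vector<Point2f> objPts, scenePts;
for (size_t i = 0; i < good_matches.size(); ++i)
{
    objPts.push_back(keypointsA[good_matches[i].queryIdx].pt);
    scenePts.push_back(keypointsB[good_matches[i].trainIdx].pt);
}
if (objPts.size() >= 4) // findHomography needs at least 4 correspondences
{
    // RANSAC keeps only the matches that agree on a single homography.
    std::vector<uchar> inlierMask;
    Mat H = findHomography(objPts, scenePts, CV_RANSAC, 3, inlierMask);
    int inliers = countNonZero(inlierMask);
    if (!H.empty() && inliers >= 7)
        cout << "OBJECT FOUND! (" << inliers << " inliers)" << endl;
}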
I have two pictures taken from two cameras positioned next to each other. The pictures are almost the same, but one of them has some rain drops on it, and in the end I need to compare and combine the two pictures into a single image (eliminating the rain drops, of course).
I have tried a lot of methods that I found online, but I did not get the result I need. First, I tried to find all the correspondence points; I found a lot of them, and then I tried to eliminate all the "bad" matches so that only "good" points remain at the end. Here is the tutorial I used.
Second, I used the affine transformation method provided by the OpenCV library to find the transformation that maps these points from one image to the other, and unfortunately I got a very wrong result. (I think that since I have the points and their coordinates, what I need now is the matrix M itself.)
Here is the code I tried; sorry if it is not very well written, since I am a newbie and this is my first time doing this kind of work.
vector<KeyPoint> keypoints1;
vector<KeyPoint> keypoints2;
Mat descriptors1, descriptors2;
cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
akaze->detectAndCompute(Src1, cv::Mat(), keypoints1, descriptors1);
akaze->detectAndCompute(Src2, cv::Mat(), keypoints2, descriptors2);
vector<vector<cv::DMatch>> knnmatch_points;
cv::BFMatcher match(cv::NORM_HAMMING);
match.knnMatch(descriptors1, descriptors2, knnmatch_points, 2);
const double match_par = 0.4;
vector<cv::DMatch> goodMatch;
vector<cv::Point2f> match_point1;
vector<cv::Point2f> match_point2;
for (size_t i = 0; i < knnmatch_points.size(); ++i) {
    double distance1 = knnmatch_points[i][0].distance;
    double distance2 = knnmatch_points[i][1].distance;
    if (distance1 <= distance2 * match_par) {
        goodMatch.push_back(knnmatch_points[i][0]);
        match_point1.push_back(keypoints1[knnmatch_points[i][0].queryIdx].pt);
        match_point2.push_back(keypoints2[knnmatch_points[i][0].trainIdx].pt);
        ...
        srcTri[i] = Point2f(keypoints1[knnmatch_points[i][0].queryIdx].pt);
        dstTri[i] = Point2f(keypoints2[knnmatch_points[i][0].queryIdx].pt);
    }
}
cv::Mat masks;
cv::Mat H = cv::findHomography(match_point1, match_point2, masks, cv::RANSAC, 3);
vector<cv::DMatch> inlinerMatch;
for (int i = 0; i < masks.rows; ++i) {
    uchar *inliner = masks.ptr<uchar>(i);
    if (inliner[0] == 1) {
        inlinerMatch.push_back(goodMatch[i]);
    }
}
warp_mat = getAffineTransform(srcTri, dstTri);
warpAffine(Src2, warp_dst, warp_mat, warp_dst.size());
namedWindow(warp_window, WINDOW_AUTOSIZE);
imshow(warp_window, warp_dst);
I think I could use other methods to get my result, such as a binary search to find the error or gradient descent to minimize the system of equations, but I am not really familiar with either of them, so any help on this subject would be welcome.
Thanks in advance; any help will be very much appreciated.
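As a side note, the code above already estimates a homography H with RANSAC but then builds the warp from only three matched points via getAffineTransform. A minimal sketch of warping the second image with that homography instead (assuming Src1 and Src2 are the two loaded images and H comes from the findHomography call above):
// H maps points of Src1 to points of Src2, so use WARP_INVERSE_MAP
// to bring Src2 into Src1's frame.
cv::Mat warped;
cv::warpPerspective(Src2, warped, H, Src1.size(),
                    cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);
cv::imshow("Src2 warped onto Src1", warped);
cv::waitKey(0);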
I am using OpenCV 3.2
I am trying to use FLANN to match features descriptors in a faster way than brute force.
// Ratio to the second neighbor to consider a good match.
#define RATIO 0.75
void matchFeatures(const cv::Mat &query, const cv::Mat &target,
                   std::vector<cv::DMatch> &goodMatches) {
    std::vector<std::vector<cv::DMatch>> matches;
    cv::Ptr<cv::FlannBasedMatcher> matcher = cv::FlannBasedMatcher::create();
    // Find 2 best matches for each descriptor to make later the second neighbor test.
    matcher->knnMatch(query, target, matches, 2);
    // Second neighbor ratio test.
    for (unsigned int i = 0; i < matches.size(); ++i) {
        if (matches[i][0].distance < matches[i][1].distance * RATIO)
            goodMatches.push_back(matches[i][0]);
    }
}
This code works for me with SURF and SIFT descriptors, but not with ORB descriptors:
OpenCV Error: Unsupported format or combination of formats (type=0) in buildIndex
As explained here, FLANN needs the descriptors to be of type CV_32F, so we need to convert them:
if (query.type() != CV_32F) query.convertTo(query, CV_32F);
if (target.type() != CV_32F) target.convertTo(target, CV_32F);
However, this supposed fix gives me another error in the convertTo function.
OpenCV Error: Assertion failed (!fixedType() || ((Mat*)obj)->type() == mtype) in create
This assertion is in opencv/modules/core/src/matrix.cpp file, line 2277.
What's happening?
Code to replicate issue.
#include <iostream>
#include <opencv2/opencv.hpp>
int main(int argc, char **argv) {
    // Read both images.
    cv::Mat image1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (image1.empty()) {
        std::cerr << "Couldn't read image in " << argv[1] << std::endl;
        return 1;
    }
    cv::Mat image2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);
    if (image2.empty()) {
        std::cerr << "Couldn't read image in " << argv[2] << std::endl;
        return 1;
    }
    // Detect the keypoints and compute their descriptors using the ORB detector.
    std::vector<cv::KeyPoint> keyPoints1, keyPoints2;
    cv::Mat descriptors1, descriptors2;
    cv::Ptr<cv::ORB> detector = cv::ORB::create();
    detector->detectAndCompute(image1, cv::Mat(), keyPoints1, descriptors1);
    detector->detectAndCompute(image2, cv::Mat(), keyPoints2, descriptors2);
    // Match features.
    std::vector<cv::DMatch> matches;
    matchFeatures(descriptors1, descriptors2, matches);
    // Draw matches.
    cv::Mat image_matches;
    cv::drawMatches(image1, keyPoints1, image2, keyPoints2, matches, image_matches);
    cv::imshow("Matches", image_matches);
    cv::waitKey(0);
    return 0;
}
Did you adjust the FLANN parameters?
Taken from http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.html
While using ORB, you can pass the following. The commented values are recommended as per the docs, but they didn't provide the required results in some cases; other values worked fine:
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,       # 12
                    key_size = 12,          # 20
                    multi_probe_level = 1)  # 2
Probably you can convert that to the C++ API?
According to the comment, the C++ way is:
cv::FlannBasedMatcher matcher = cv::FlannBasedMatcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
Binary-string descriptors: ORB, BRIEF, BRISK, FREAK, AKAZE, etc.
Floating-point descriptors: SIFT, SURF, GLOH, etc.
Feature matching of binary descriptors can be efficiently done by comparing their Hamming distance as opposed to Euclidean distance used for floating-point descriptors.
For comparing binary descriptors in OpenCV, use FLANN + LSH index or Brute Force + Hamming distance.
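As a small illustration, the Hamming distance between two binary descriptors is just the number of differing bits, and cv::norm can compute it directly (the descriptor rows here are assumed to come from a binary extractor such as ORB, reusing the names from the question's code):
// Number of bits that differ between two binary descriptor rows.
double d = cv::norm(descriptors1.row(0), descriptors2.row(0), cv::NORM_HAMMING);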
http://answers.opencv.org/question/59996/flann-error-in-opencv-3/
By default, FlannBasedMatcher works as a KDTreeIndex with the L2 norm. This is why it works well with SIFT/SURF descriptors and throws an exception for ORB descriptors.
Binary features and Locality Sensitive Hashing (LSH)
Performance comparison between binary and floating-point descriptors
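For the Brute Force + Hamming route, a minimal sketch using the descriptors1/descriptors2 from the question's code (the 0.75 ratio simply mirrors the RATIO value the question already uses):
// Brute-force matching with Hamming distance suits binary descriptors such as ORB.
cv::BFMatcher bf(cv::NORM_HAMMING);
std::vector<std::vector<cv::DMatch>> knnMatches;
bf.knnMatch(descriptors1, descriptors2, knnMatches, 2);
std::vector<cv::DMatch> good;
for (const auto &m : knnMatches) {
    // Second neighbor ratio test, same idea as in matchFeatures() above.
    if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
        good.push_back(m[0]);
}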
I believe there is a bug in the OpenCV3 version: FLANN error in OpenCV 3
You need to convert your descriptors to CV_32F.
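A minimal sketch of that conversion, done on local copies rather than through the const references passed into matchFeatures (converting in place through a const cv::Mat& is likely what triggers the fixedType assertion shown above):
// Work on non-const copies so convertTo has a writable destination.
cv::Mat queryF = query, targetF = target;
if (queryF.type() != CV_32F) queryF.convertTo(queryF, CV_32F);
if (targetF.type() != CV_32F) targetF.convertTo(targetF, CV_32F);
// Then match queryF/targetF with the FLANN matcher as before.
matcher->knnMatch(queryF, targetF, matches, 2);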
I'm using OpenCV feature detection to estimate a robot's position by comparing a LIDAR scan with a virtual map.
I've tried using ORB feature detection followed by a FlannBasedMatcher, but the match results come out wrong.
Here's some of my code:
Ptr<ORB> orb_a = ORB::create();
Ptr<ORB> orb_b = ORB::create();
vector <cv::KeyPoint> kp1,kp2;
Mat desc1,desc2;
/* set orb :
1. ORB name
2. nfeatures
3. Nlevels
4. EdgeThreshold
5. First Level
6. WTA
7. Score Type
8. Patchsize
9. Scale Factor */
Mat hmap,hlidar;
setORB(orb_a,500,8,100,0,4,ORB::HARRIS_SCORE,31,1.1); //map
orb_a->detectAndCompute(lidarmap,noArray(),kp1,desc1);
drawKeypoints(lidarmap,kp1,hmap,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
setORB(orb_b,50,8,30,0,4,ORB::HARRIS_SCORE,10,1.5); //lidar
orb_b->detectAndCompute(lidarused,noArray(),kp2,desc2);
drawKeypoints(lidarused,kp2,hlidar,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
//flann
FlannBasedMatcher matcher;
std::vector<DMatch>matches;
matcher.match (desc1,desc2,matches);
double maxdist = 0, mindist = 100000;
for (int i = 0; i < desc1.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < mindist) mindist = dist;
    if (dist > maxdist) maxdist = dist;
}
if (mindist < 0.02) mindist = 0.02;
printf("min : %7.3f \t max : %7.3f \n", mindist, maxdist);
vector<DMatch> good_matches;
for (int i = 1; i < desc1.rows; i++)
{
    if (matches[i].distance >= 2 * mindist && matches[i].distance < maxdist / 2)
    {
        good_matches.push_back(matches[i]);
    }
}
Mat imgmatches;
drawMatches(lidarmap, kp1,
            lidarused, kp2,
            good_matches, imgmatches,
            Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
Here's the result.
Detection seems okay, but it gets terrible when I rotate the second image.
Does the FLANN matcher only work on unscaled and unrotated images? Can I use FLANN to match two-color (black-and-white) images? Or can somebody point out where I'm doing it wrong? Thanks in advance.
In my experience, ORB features are too weak to be used with FLANN. Try your code with SIFT or SURF; if it works, you can then tweak it to use ORB.
Another option is to use the DBoW2 library. It achieves decent results with binary features.
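Note also that, as mentioned in the previous thread, FLANN can index ORB's binary descriptors directly through an LSH index instead of the default KD-tree; a minimal sketch (parameter values taken from the earlier answer, desc1/desc2 from the question's code):
// An LSH index works on binary (Hamming) descriptors, so no CV_32F conversion is needed.
cv::FlannBasedMatcher lshMatcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
std::vector<cv::DMatch> lshMatches;
lshMatcher.match(desc1, desc2, lshMatches);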
I'm working on a project where I will use homographies as features in a classifier. My problem is in automatically calculating the homographies: I'm using SIFT descriptors to find the corresponding points between the two images on which to compute the homography, but SIFT is giving me very poor results, so I can't use it in my work.
I'm using OpenCV 2.4.3.
At first I was using SURF, but I had similar results, so I decided to use SIFT, which is slower but more precise. My first guess was that the image resolution in my dataset was too low, but I ran my algorithm on a state-of-the-art dataset (Pointing 04) and obtained pretty much the same results, so the problem lies in what I do and not in my dataset.
The matching between the SIFT keypoints found in each image is done with the FlannBased matcher; I tried the BruteForce one, but the results were again pretty much the same.
This is an example of the match I found (image from Pointing 04 dataset)
The above image shows how poor the matches found by my program are. Only 1 point is a correct match; I need at least 4 correct matches for what I have to do.
Here is the code I use:
This is the function that extracts SIFT descriptors from each image
void extract_sift(const Mat &img, vector<KeyPoint> &keypoints, Mat &descriptors, Rect* face_rec) {
    // Create a mask for the ROI on the original image
    Mat mask1 = Mat::zeros(img.size(), CV_8U); // type of mask is CV_8U
    Mat roi1(mask1, *face_rec);
    roi1 = Scalar(255, 255, 255);
    // Extract keypoints in the ROI only
    Ptr<DescriptorExtractor> featExtractor = new SIFT();
    Ptr<FeatureDetector> featDetector = FeatureDetector::create("SIFT");
    featDetector->detect(img, keypoints, mask1);
    featExtractor->compute(img, keypoints, descriptors);
}
This is the function that matches two images' descriptors
void match_sift(const Mat &img1, const Mat &img2, const vector<KeyPoint> &kp1,
                const vector<KeyPoint> &kp2, const Mat &descriptors1, const Mat &descriptors2,
                vector<Point2f> &p_im1, vector<Point2f> &p_im2) {
    // Matching descriptor vectors using the FLANN matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    std::vector<DMatch> matches;
    matcher->match(descriptors1, descriptors2, matches);
    double max_dist = 0; double min_dist = 100;
    // Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors1.rows; ++i) {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    // Draw only the 4 best matches
    std::vector<DMatch> good_matches;
    // XXX: DMatch has no sort method, maybe a more efficient min extraction algorithm can be used here?
    double min = matches[0].distance;
    int min_i = 0;
    for (int i = 0; i < (matches.size() > 4 ? 4 : matches.size()); ++i) {
        for (int j = 0; j < matches.size(); ++j)
            if (matches[j].distance < min) {
                min = matches[j].distance;
                min_i = j;
            }
        good_matches.push_back(matches[min_i]);
        matches.erase(matches.begin() + min_i);
        min = matches[0].distance;
        min_i = 0;
    }
    Mat img_matches;
    drawMatches(img1, kp1, img2, kp2,
                good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imwrite("imgMatch.jpeg", img_matches);
    imshow("", img_matches);
    waitKey();
    for (int i = 0; i < good_matches.size(); i++) {
        // Get the points from the best matches
        p_im1.push_back(kp1[good_matches[i].queryIdx].pt);
        p_im2.push_back(kp2[good_matches[i].trainIdx].pt);
    }
}
And these functions are called here:
extract_sift(dataset[i].img,dataset[i].keypoints,dataset[i].descriptors,face_rec);
[...]
// Extract keypoints from i+1 image and calculate homography
extract_sift(dataset[i+1].img,dataset[i+1].keypoints,dataset[i+1].descriptors,face_rec);
dataset[front].points_r.clear(); // XXX: dunno if clearing the points every time is the best way to do it..
match_sift(dataset[front].img,dataset[i+1].img,dataset[front].keypoints,dataset[i+1].keypoints,
dataset[front].descriptors,dataset[i+1].descriptors,dataset[front].points_r,dataset[i+1].points_r);
dataset[i+1].H = findHomography(dataset[front].points_r,dataset[i+1].points_r, RANSAC);
Any help on how to improve the matching performance would be really appreciated, thanks.
You apparently use the "best four points" in your code w.r.t. the distance of the matches. In other words, you consider that a match is valid if both descriptors are really similar. I believe this is wrong. Did you try to draw all of the matches? Many of them should be wrong, but many should be good as well.
The distance of a match just tells how similar both points are. This doesn't tell if the match is coherent geometrically. Selecting the best matches should definitely consider the geometry.
Here is how I would do it:
Detect the corners (you already do this)
Find the matches (you already do this)
Try to find a homography transform between both images by using the matches (don't filter them before!) using findHomography(...)
findHomography(...) will tell you which are the inliers. Those are your good_matches.
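A minimal sketch of that approach, reusing the names from the question's match_sift (the 3-pixel RANSAC reprojection threshold is an assumed value):
// Use all matches and let RANSAC decide which ones are geometrically consistent.
std::vector<Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); ++i) {
    pts1.push_back(kp1[matches[i].queryIdx].pt);
    pts2.push_back(kp2[matches[i].trainIdx].pt);
}
std::vector<uchar> inlierMask;
Mat H = findHomography(pts1, pts2, CV_RANSAC, 3, inlierMask);
// The inliers reported by RANSAC are your good_matches.
std::vector<DMatch> good_matches;
for (size_t i = 0; i < matches.size(); ++i)
    if (inlierMask[i])
        good_matches.push_back(matches[i]);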
I am trying the fairly new FREAK descriptor from the latest version of OpenCV, following the freak_demo.cpp example. Instead of using SURF I use FAST. My basic code is something like this:
std::vector<KeyPoint> keypointsA, keypointsB;
Mat descriptorsA, descriptorsB;
std::vector<DMatch> matches;
FREAK extractor;
BruteForceMatcher<Hamming> matcher;
FAST(imgA,keypointsA,100);
FAST(imgB,keypointsB,20);
extractor.compute( imgA, keypointsA, descriptorsA );
extractor.compute( imgB, keypointsB, descriptorsB );
matcher.match(descriptorsA, descriptorsB, matches);
The algorithm finds a lot of matches, but there are a lot of outliers. Am I doing things right? Is there a way to tune the algorithm?
When doing matching, there are always some refinement steps for getting rid of outliers.
What I usually do is discard matches whose distance is above a threshold, for example:
for (int i = 0; i < matches.size(); i++)
{
    if (matches[i].distance > 200)
    {
        matches.erase(matches.begin() + i);
        i--; // step back so the next element is not skipped after erasing
    }
}
Then, I use RANSAC to see which matches fit the homography model. OpenCV has a function for this:
// Collect the matched point coordinates (scaled down by 500 in this example).
std::vector<cv::Point2f> trainMatches, queryMatches;
for (int i = 0; i < matches.size(); i++)
{
    trainMatches.push_back(cv::Point2f(keypointsB[matches[i].trainIdx].pt.x / 500.0f,
                                       keypointsB[matches[i].trainIdx].pt.y / 500.0f));
    queryMatches.push_back(cv::Point2f(keypointsA[matches[i].queryIdx].pt.x / 500.0f,
                                       keypointsA[matches[i].queryIdx].pt.y / 500.0f));
}
cv::Mat status;
Mat h = cv::findHomography(trainMatches, queryMatches, CV_RANSAC, 0.005, status);
And I just draw the inliers:
std::vector<cv::DMatch> inliers;
for (size_t i = 0; i < queryMatches.size(); i++)
{
    if (status.at<uchar>(i) != 0)
    {
        inliers.push_back(matches[i]);
    }
}
Mat imgMatch;
drawMatches(imgA, keypointsA, imgB, keypointsB, inliers, imgMatch);
Just try different thresholds and distances until you get the desired results.
You can also train the descriptor by giving it your own selected pairs, and you can tune the parameters in the constructor:
explicit FREAK( bool orientationNormalized = true
, bool scaleNormalized = true
, float patternScale = 22.0f
, int nbOctave = 4
, const vector<int>& selectedPairs = vector<int>()
);
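For example, a minimal sketch of constructing the extractor with non-default parameters (the values below are only illustrations, not recommendations):
// Keep orientation normalization, disable scale normalization,
// use a smaller pattern and fewer octaves.
FREAK tunedExtractor(true, false, 18.0f, 3);
tunedExtractor.compute(imgA, keypointsA, descriptorsA);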
BTW, a more efficient version of FREAK is on the way :-)