I'm currently working on real-time feature matching using OpenCV 3.4.0 and C++ in Qt Creator.
My code matches features between the first frame captured from the webcam and the current webcam frame.
Mat frame1, frame2, img1, img2, img1_gray, img2_gray;
int n = 0;
VideoCapture cap1(0);
namedWindow("Video Capture1", WINDOW_NORMAL);
namedWindow("Reference img", WINDOW_NORMAL);
namedWindow("matches1", WINDOW_NORMAL);
moveWindow("Video Capture1",50, 0);
moveWindow("Reference img",50, 100);
moveWindow("matches1",100,100);
while((char)waitKey(1)!='q'){
//raw image saved in frame1
cap1>>frame1;
if(frame1.empty())
break;
n=n+1;
if (n ==1){
imwrite("frame1.jpg", frame1);
cout<<"First frame saved as 'frame1.jpg'!!"<<endl;
}
imshow("Video Capture1",frame1);
img1 = imread("frame1.jpg");
img2 = frame1;
cvtColor(img1, img1_gray, cv::COLOR_BGR2GRAY);
cvtColor(img2, img2_gray, cv::COLOR_BGR2GRAY);
imshow("Reference img",img1);
// detecting keypoints
int minHessian = 400;
Ptr<Feature2D> detector = xfeatures2d::SurfFeatureDetector::create(minHessian);
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(img1_gray,keypoints1);
detector->detect(img2_gray,keypoints2);
// computing descriptors
Ptr<DescriptorExtractor> extractor = xfeatures2d::SurfDescriptorExtractor::create();
Mat descriptors1, descriptors2;
extractor->compute(img1_gray,keypoints1,descriptors1);
extractor->compute(img2_gray,keypoints2,descriptors2);
// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches1", img_matches);
But the code returns so many matched points that I cannot distinguish which one matches which.
So, are there any methods to keep only high-quality matched points?
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
So, are there any methods to keep only high-quality matched points?
I bet there are a lot of different methods. I am using, for example, a symmetry test: matches from img1 to img2 also have to exist when matching from img2 to img1. I am using the test from Improve matching of feature points with OpenCV; multiple other tests are shown there.
void symmetryTest(const std::vector<cv::DMatch> &matches1,const std::vector<cv::DMatch> &matches2,std::vector<cv::DMatch>& symMatches)
{
symMatches.clear();
for (vector<DMatch>::const_iterator matchIterator1= matches1.begin();matchIterator1!= matches1.end(); ++matchIterator1)
{
for (vector<DMatch>::const_iterator matchIterator2= matches2.begin();matchIterator2!= matches2.end();++matchIterator2)
{
if ((*matchIterator1).queryIdx ==(*matchIterator2).trainIdx &&(*matchIterator2).queryIdx ==(*matchIterator1).trainIdx)
{
symMatches.push_back(DMatch((*matchIterator1).queryIdx,(*matchIterator1).trainIdx,(*matchIterator1).distance));
break;
}
}
}
}
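To plug the symmetry test into your loop, you match in both directions and then filter. A minimal sketch using the variable names from your code (matcher, descriptors1, descriptors2, keypoints1, keypoints2):
// match img1 -> img2 and img2 -> img1, then keep only the symmetric matches
vector<DMatch> matches12, matches21, symMatches;
matcher.match(descriptors1, descriptors2, matches12);
matcher.match(descriptors2, descriptors1, matches21);
symmetryTest(matches12, matches21, symMatches);
Mat img_sym_matches;
drawMatches(img1, keypoints1, img2, keypoints2, symMatches, img_sym_matches);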
Like András Kovács says in the related answer, you can also calculate a fundamental matrix with RANSAC to eliminate outliers, using cv::findFundamentalMat.
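A minimal sketch of that RANSAC filtering, assuming points1 and points2 are std::vector<cv::Point2f> holding the matched pixel coordinates (extracted as shown further below) and symMatches is the match list being filtered:
// estimate the fundamental matrix with RANSAC; the mask marks the inlier matches
vector<uchar> inlierMask;
Mat F = findFundamentalMat(points1, points2, FM_RANSAC, 3.0, 0.99, inlierMask);
vector<DMatch> ransacMatches;
for (size_t i = 0; i < inlierMask.size(); i++) {
    if (inlierMask[i])
        ransacMatches.push_back(symMatches[i]);
}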
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
I hope I understood right that you want the point coordinates of homologous points that match. I am extracting the coordinates of the points after the symmetryTest.
The coordinates are inside the keypoints.
for (size_t rows = 0; rows < sym_matches.size(); rows++) {
float x1 = keypoints_1[sym_matches[rows].queryIdx].pt.x;
float y1 = keypoints_1[sym_matches[rows].queryIdx].pt.y;
float x2 = keypoints_2[sym_matches[rows].trainIdx].pt.x;
float y2 = keypoints_2[sym_matches[rows].trainIdx].pt.y;
// Push the coordinates into a vector, e.g. std::vector<cv::Point2f>
}
You can do the same with your matches, keypoints1 and keypoints2.
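For example, a minimal sketch that collects the coordinates into two point vectors and prints them (pts1 and pts2 are names chosen here for illustration):
vector<Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); i++) {
    // queryIdx indexes keypoints1, trainIdx indexes keypoints2
    pts1.push_back(keypoints1[matches[i].queryIdx].pt);
    pts2.push_back(keypoints2[matches[i].trainIdx].pt);
    cout << pts1[i] << " <-> " << pts2[i] << endl;
}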
I have created a program that can stitch multiple images together and am now looking to improve its efficiency. Depending on the size of the stitched image, it eventually becomes so large and contains so many keypoints that the machine runs out of allocatable memory. To compensate for this, my goal is to store all the keypoints and descriptors as they are found, so that I don't need to find them again in the master stitched image and only need to find them in the new image being stitched. I had this process working in Python but haven't had the same luck in C++.
In order to do this I need to perform a perspectiveTransform() on the keypoints and therefore convert them from vector<KeyPoint> to vector<Point2f> and back to vector<KeyPoint>. I have been able to achieve this and can confirm it works (pic to follow). I am not sure if the same process needs to be done to the descriptors (currently I have done it, but it is either wrong or not effective).
Issue: When I run this, the keypoints and descriptors don't appear to work, and I hit an error I created myself: "Not enough matches found", even though I know at least the keypoints are making their way into the function.
Here is the code for the keypoint and descriptor transforms. The code first calculates the warpPerspective to be applied to image one, as the homography will warp the second image only. The rest of the code deals with the keypoints and descriptors.
tuple<Mat, vector<KeyPoint>, Mat> stitchMatches(Mat image1,Mat image2, Mat homography, vector<KeyPoint> kp1, vector<KeyPoint> kp2 , Mat desc1, Mat desc2){
Mat result, destination, descriptors_updated;
vector<Point2f> fourPoint;
vector<KeyPoint> keypoints_updated;
//-Get the four corners of the first image (master)
fourPoint.push_back(Point2f (0,0));
fourPoint.push_back(Point2f (image1.size().width,0));
fourPoint.push_back(Point2f (0, image1.size().height));
fourPoint.push_back(Point2f (image1.size().width, image1.size().height));
//perspectiveTransform(Mat(fourPoint), destination, homography);
//- Get points used to determine Htr
double min_x, min_y, tam_x, tam_y;
float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
min_x = min(min_x1, min_x2);
min_y = min(min_y1, min_y2);
tam_x = max(max_x1, max_x2);
tam_y = max(max_y1, max_y2);
//- Htr used to map image one to the result, in line with the already warped image 1
Mat Htr = Mat::eye(3,3,CV_64F);
if (min_x < 0){
tam_x = image2.size().width - min_x;
Htr.at<double>(0,2)= -min_x;
}
if (min_y < 0){
tam_y = image2.size().height - min_y;
Htr.at<double>(1,2)= -min_y;
}
result = Mat(Size(tam_x*2,tam_y*2), CV_8UC3,cv::Scalar(0,0,0));
warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(image1, result, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//-- Variables to hold the keypoints at the respective stages
vector<Point2f> kp1Local,kp2Local;
vector<KeyPoint> kp1updated, kp2updated;
//Localize the keypoints to allow for perspective change
KeyPoint::convert(kp1, kp1Local);
KeyPoint::convert(kp2, kp2Local);
//perform perspective transform on the keypoints of type vector<Point2f>
perspectiveTransform(kp1Local, kp1Local, (Htr));
perspectiveTransform(kp2Local, kp2Local, (Htr*homography));
//convert keypoints back to type vector<KeyPoint>
for( size_t i = 0; i < kp1Local.size(); i++ ) {
kp1updated.push_back(KeyPoint(kp1Local[i], 1.f));
}
for( size_t i = 0; i < kp2Local.size(); i++ ) {
kp2updated.push_back(KeyPoint(kp2Local[i], 1.f));
}
//Add to the master list of keypoints to be passed along during the next iteration
keypoints_updated.reserve(kp1updated.size() + kp2updated.size());
keypoints_updated.insert(keypoints_updated.end(),kp1updated.begin(),kp1updated.end());
keypoints_updated.insert(keypoints_updated.end(),kp2updated.begin(),kp2updated.end());
//WarpPerspective of descriptors to match that of the images and corresponding keypoints
Mat desc1New, desc2New;
warpPerspective(desc2, desc2New, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
warpPerspective(desc1, desc1New, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT,0);
//create a new Mat including the descriptors from desc1 and desc2
descriptors_updated.push_back(desc1New);
descriptors_updated.push_back(desc2New);
//------------TEST TO see if keypoints have moved
Mat img_keypoints;
drawKeypoints( result, keypoints_updated, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
imshow("Keypoints 1", img_keypoints );
waitKey();
destroyAllWindows();
return {result, keypoints_updated, descriptors_updated};
}
The following code is my master stitching program that does the actual stitching.
tuple<Mat,vector<KeyPoint>,Mat> stitch(Mat img1,Mat img2 ,vector<KeyPoint> keypoints, Mat descriptors, String featureDetection,String featureExtractor,String keypointsMatcher,String showMatches){
Mat desc, desc1, desc2, homography, result, croppedResult,descriptors_updated;
std::vector<KeyPoint> keypoints_updated, kp1, kp2;
std::vector<DMatch> matches;
//-Base Case[2]
if (keypoints.empty()){
//-Detect Keypoints and their descriptors
tie(kp1,desc1) = KeyPointDescriptor(img1, featureDetection,featureExtractor);
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//Find matches and calculated homography based on keypoints and descriptors
std::tie(matches,homography) = matchFeatures(kp1, desc1,kp2, desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true"){
drawMatchedImages( img1, kp1, img2, kp2, matches);
}
//stitch the images and update the keypoint and descriptors
std::tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,kp1,kp2,desc1,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
//base case[3:n]
else{
//Get keypoints and descriptors of new image and add to respective lists
tie(kp2,desc2) = KeyPointDescriptor(img2, featureDetection,featureExtractor);
//find matches and determine homography
std::tie(matches,homography) = matchFeatures(keypoints_updated,descriptors_updated,kp2,desc2, keypointsMatcher);
//draw matches if requested
if(showMatches == "true")
drawMatchedImages( img1, keypoints, img2, kp2, matches);
//stitch the images and update the keypoint and descriptors
tie(result,keypoints_updated,descriptors_updated) = stitchMatches(img1, img2, homography,keypoints,kp2,descriptors,desc2);
//crop function using created cropping function
croppedResult = crop(result);
return {croppedResult,keypoints_updated,descriptors_updated};
}
}
Lastly, here is the image of the keypoints that are being transformed onto the stitched image. Any help is greatly appreciated!
After combing through the code I just happened to find I was using the wrong variable at one point! :)
I am using OpenCV 3.3.
I want to use the OpenCV C++ match function on an object of type vector<DMatch>.
The goal is to match descriptors from a single query image with the descriptors from a list of several images.
I know that when this function is used to match descriptors from one image against descriptors from another single image, the two keypoint indexes corresponding to each pair of matching descriptors are stored in each DMatch object of the vector.
For example, if I do
Mat img_1=imread("path1...");
Mat img_2=imread("path2...");
vector<KeyPoint> keypoints_1, keypoints_2;
Mat descriptors_1, descriptors_2;
detector->detectAndCompute(img_1, Mat(), keypoints_1, descriptors_1 );
detector->detectAndCompute( img_2, Mat(), keypoints_2, descriptors_2 );
FlannBasedMatcher matcher;
vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
then if I want to access the keypoints that were matched, for each index i smaller than matches.size():
int idx=matches[i].trainIdx;
int idy=matches[i].queryIdx;
// queryIdx (idy) indexes keypoints_1, the query set; trainIdx (idx) indexes keypoints_2, the train set
Point2f matched_point1=keypoints_1[idy].pt;
Point2f matched_point2=keypoints_2[idx].pt;
However, what happens when I try to match descriptors from a single query image against the descriptors from a list of several images? Each DMatch object can only hold two indexes, whilst I want to match across more than two images.
I.e.:
vector<Mat> descriptors1;
Mat descriptors2;
matcher.add( descriptors1 );
matcher.train();
matcher.match(descriptors2, matches );
What will those indexes mean?
int idx=matches[i].trainIdx;
int idy=matches[i].queryIdx;
Each DMatch also carries an imgIdx field. When you add() several descriptor sets and train() the matcher, queryIdx still indexes the query image's keypoints, while trainIdx is the keypoint index inside the train image identified by imgIdx. So you can store the matched index values for all the train images in a list while looping through the matches, then select the appropriate matched points and play with them however you want.
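A minimal sketch, assuming queryKeypoints holds the query image's keypoints and trainKeypoints[k] holds the keypoints of the k-th image whose descriptors were passed to add() (both container names are placeholders, not from your code):
for (size_t i = 0; i < matches.size(); i++) {
    int whichImage = matches[i].imgIdx; // index of the train image in the add() list
    Point2f p_query = queryKeypoints[matches[i].queryIdx].pt;
    Point2f p_train = trainKeypoints[whichImage][matches[i].trainIdx].pt;
    // store (whichImage, p_query, p_train) per train image as needed
}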
I am trying to register two binary images. I used the OpenCV ORB detector and matcher to generate and match feature points. However, the matching result looks bad. Can anybody tell me why, and how to improve it? Thanks.
Here are the images and matching result.
Here is the code
OrbFeatureDetector detector; //alternative: SurfFeatureDetector detector;
vector<KeyPoint> keypoints1;
detector.detect(im_edge1, keypoints1);
vector<KeyPoint> keypoints2;
detector.detect(im_edge2, keypoints2);
OrbDescriptorExtractor extractor; //alternative: SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( im_edge1, keypoints1, descriptors_1 );
extractor.compute( im_edge2, keypoints2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L2, true); //BFMatcher matcher(NORM_L2);
vector< DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);
vector< DMatch > good_matches;
vector<Point2f> featurePoints1;
vector<Point2f> featurePoints2;
for(int i=0; i<int(matches.size()); i++){
good_matches.push_back(matches[i]);
}
//-- Draw only "good" matches
Mat img_matches;
drawMatches(im_edge1, keypoints1, im_edge2, keypoints2, good_matches, img_matches);
imwrite("img_matches_orb.bmp", img_matches);
ORB descriptors are, unlike SURF, binary descriptors. The Hamming distance is suited for comparing binary descriptors. Use NORM_HAMMING when initializing your BFMatcher.
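A minimal sketch of the change, keeping the cross-check flag from your code:
// Hamming distance for binary descriptors such as ORB
BFMatcher matcher(NORM_HAMMING, true);
vector<DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);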
Some answers there may be helpful:
Improve matching of feature points with OpenCV
It's written for the SIFT descriptor, but the same techniques can also be used for ORB matching :)
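For instance, Lowe's ratio test from that answer adapted to ORB, as a sketch using the descriptor names from your code (note that knnMatch requires the cross-check to be disabled):
BFMatcher matcher(NORM_HAMMING);
vector<vector<DMatch> > knnMatches;
matcher.knnMatch(descriptors_1, descriptors_2, knnMatches, 2);
vector<DMatch> good_matches;
for (size_t i = 0; i < knnMatches.size(); i++) {
    // keep a match only if it is clearly better than the second-best candidate
    if (knnMatches[i].size() == 2 && knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
        good_matches.push_back(knnMatches[i][0]);
}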
I'm trying to find matching points of interest in 2 images. The final goal of this project is to build a panorama.
I have this code:
SIFT detector(0);
src1 = imread( folder + inputName1 , 1 );
cvtColor( src1, src1_gray, CV_BGR2GRAY );
// Detect first image
vector<KeyPoint> keypoints1;
detector.detect(src1_gray, keypoints1);
//Draw keypoints back to source image
drawKeypoints(src1,keypoints1,src1,Scalar::all(-1), 1);
imwrite(folder + outputName1,src1);
src2 = imread( folder + inputName2 , 1 );
cvtColor( src2, src2_gray, CV_BGR2GRAY );
// Detect second image
vector<KeyPoint> keypoints2;
detector.detect(src2_gray, keypoints2);
//Draw keypoints back to source image
drawKeypoints(src2,keypoints2,src2,Scalar::all(-1), 1);
imwrite(folder + outputName2,src2);
vector<DMatch> matches;
Mat output;
drawMatches(src1,keypoints1,src2,keypoints2,matches,output);
imwrite(folder + "matches.jpg",output);
But in the final image matches.jpg all points are shown, and the vector matches is empty.
What am I doing wrong? I thought that only matching points would be shown in the final image, and that in the vector matches I would find the coordinates to draw lines between the points.
Or should I use RANSAC to find matching points?
You have not matched any points. Look at this example: http://docs.opencv.org/doc/user_guide/ug_features2d.html.
You need to extract descriptors and then match them, with FLANN for instance. Then you can draw your matches ;)
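A minimal sketch in the same OpenCV 2.x style as your code, reusing src1_gray, src2_gray, keypoints1, keypoints2, matches and output from above (FLANN works with float descriptors, which SIFT produces):
// compute SIFT descriptors for the keypoints already detected
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(src1_gray, keypoints1, descriptors1);
extractor.compute(src2_gray, keypoints2, descriptors2);
// match them with FLANN; after this, the matches vector is no longer empty
FlannBasedMatcher matcher;
matcher.match(descriptors1, descriptors2, matches);
drawMatches(src1, keypoints1, src2, keypoints2, matches, output);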