I'm currently working on real-time feature matching with OpenCV 3.4.0 and C++ in Qt Creator.
My code matches features between the first frame grabbed from the webcam and each subsequent frame from the webcam.
Mat frame1, frame2, img1, img2, img1_gray, img2_gray;
int n = 0;
VideoCapture cap1(0);
namedWindow("Video Capture1", WINDOW_NORMAL);
namedWindow("Reference img", WINDOW_NORMAL);
namedWindow("matches1", WINDOW_NORMAL);
moveWindow("Video Capture1", 50, 0);
moveWindow("Reference img", 50, 100);
moveWindow("matches1", 100, 100);
while ((char)waitKey(1) != 'q') {
    // raw image saved in frame1
    cap1 >> frame1;
    if (frame1.empty())
        break;
    n = n + 1;
    if (n == 1) {
        imwrite("frame1.jpg", frame1);
        cout << "First frame saved as 'frame1.jpg'!" << endl;
    }
    imshow("Video Capture1", frame1);
    img1 = imread("frame1.jpg");
    img2 = frame1;
    cvtColor(img1, img1_gray, cv::COLOR_BGR2GRAY);
    cvtColor(img2, img2_gray, cv::COLOR_BGR2GRAY);
    imshow("Reference img", img1);
    // detecting keypoints
    int minHessian = 400;
    Ptr<Feature2D> detector = xfeatures2d::SURF::create(minHessian);
    vector<KeyPoint> keypoints1, keypoints2;
    detector->detect(img1_gray, keypoints1);
    detector->detect(img2_gray, keypoints2);
    // computing descriptors
    Ptr<DescriptorExtractor> extractor = xfeatures2d::SURF::create(minHessian);
    Mat descriptors1, descriptors2;
    extractor->compute(img1_gray, keypoints1, descriptors1);
    extractor->compute(img2_gray, keypoints2, descriptors2);
    // matching descriptors
    BFMatcher matcher(NORM_L2);
    vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    // drawing the results
    Mat img_matches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
    imshow("matches1", img_matches);
}
But the code returns so many matched points that I cannot tell which one matches which.
So, are there any methods to get only high-quality matched points?
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
So, are there any methods to get only high-quality matched points?
I bet there are a lot of different methods. For example, I use a symmetry test: a match from img1 to img2 must also exist when matching from img2 to img1. I use the test from "Improve matching of feature points with OpenCV"; multiple other tests are shown there.
void symmetryTest(const std::vector<cv::DMatch> &matches1, const std::vector<cv::DMatch> &matches2, std::vector<cv::DMatch> &symMatches)
{
    symMatches.clear();
    for (vector<DMatch>::const_iterator matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (vector<DMatch>::const_iterator matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
        {
            // keep the match only if it exists in both directions
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx && (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(DMatch((*matchIterator1).queryIdx, (*matchIterator1).trainIdx, (*matchIterator1).distance));
                break;
            }
        }
    }
}
Like András Kovács says in the related answer, you can also calculate a fundamental matrix with RANSAC to eliminate outliers, using cv::findFundamentalMat.
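A minimal sketch of that filtering, assuming the matches, keypoints1, and keypoints2 from the question's code (findFundamentalMat needs at least 8 point pairs; the mask it fills marks the RANSAC inliers):
// Collect the matched pixel coordinates in the order of 'matches'
vector<Point2f> pts1, pts2;
for (size_t i = 0; i < matches.size(); i++) {
    pts1.push_back(keypoints1[matches[i].queryIdx].pt);
    pts2.push_back(keypoints2[matches[i].trainIdx].pt);
}
// Estimate the fundamental matrix with RANSAC; inlierMask[i] is non-zero
// for matches consistent with the epipolar geometry
vector<uchar> inlierMask;
Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC,
                           3.0,  // max. distance to the epipolar line in pixels
                           0.99, // confidence
                           inlierMask);
// Keep only the inlier matches
vector<DMatch> goodMatches;
for (size_t i = 0; i < matches.size(); i++)
    if (inlierMask[i])
        goodMatches.push_back(matches[i]);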
And how can I get each matched point's pixel coordinates in Qt Creator, just like in MATLAB?
I hope I understood correctly that you want the point coordinates of homologous points that match. I extract the coordinates of the points after the symmetryTest.
The coordinates are stored inside the keypoints.
for (size_t rows = 0; rows < sym_matches.size(); rows++) {
    float x1 = keypoints_1[sym_matches[rows].queryIdx].pt.x;
    float y1 = keypoints_1[sym_matches[rows].queryIdx].pt.y;
    float x2 = keypoints_2[sym_matches[rows].trainIdx].pt.x;
    float y2 = keypoints_2[sym_matches[rows].trainIdx].pt.y;
    // Push the coordinates into a vector, e.g. std::vector<cv::Point2f>
}
You can do the same with your matches, keypoints1 and keypoints2.
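For completeness, a sketch of how the symmetry test plugs into the question's loop; matching in both directions is the only piece the original code is missing (names follow the question's code):
// Match in both directions, then keep only the symmetric matches
vector<DMatch> matches12, matches21, symMatches;
matcher.match(descriptors1, descriptors2, matches12);
matcher.match(descriptors2, descriptors1, matches21);
symmetryTest(matches12, matches21, symMatches);
// Extract the pixel coordinates of the surviving matches
vector<Point2f> points1, points2;
for (size_t i = 0; i < symMatches.size(); i++) {
    points1.push_back(keypoints1[symMatches[i].queryIdx].pt);
    points2.push_back(keypoints2[symMatches[i].trainIdx].pt);
}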
I have created a program that can stitch multiple images together and am now looking to improve its efficiency. Depending on the size of the stitched image, it eventually becomes so large and contains so many keypoints that the machine runs out of allocatable memory. To compensate, my goal is to store all the keypoints and descriptors as they are found, so that I don't need to find them again in the master stitched image and only need to find them in the new image being stitched. I had this process working in Python but haven't had the same luck in C++.
In order to do this I need to perform a perspectiveTransform() on the keypoints, and therefore convert them from vector<KeyPoint> to vector<Point2f> and back to vector<KeyPoint>. I have been able to achieve this and can confirm it works (pic to follow). I am not sure if the same process needs to be done to the descriptors (currently I have done it, but either it's wrong or not effective).
Issue: When I run this, the keypoints and descriptors don't appear to work, and I hit an error I created myself: "Not enough matches found", even though I know at least the keypoints are making their way into the function.
Here is the code for the keypoint and descriptor transforms. The code first calculates the warpPerspective to be applied to image one, as the homography will warp the second image only. The rest of the code deals with the keypoints and descriptors.
tuple<Mat, vector<KeyPoint>, Mat> stitchMatches(Mat image1, Mat image2, Mat homography, vector<KeyPoint> kp1, vector<KeyPoint> kp2, Mat desc1, Mat desc2){
    Mat result, destination, descriptors_updated;
    vector<Point2f> fourPoint;
    vector<KeyPoint> keypoints_updated;

    //-- Get the four corners of the first image (master)
    fourPoint.push_back(Point2f(0, 0));
    fourPoint.push_back(Point2f(image1.size().width, 0));
    fourPoint.push_back(Point2f(0, image1.size().height));
    fourPoint.push_back(Point2f(image1.size().width, image1.size().height));
    //perspectiveTransform(Mat(fourPoint), destination, homography);

    //-- Get the points used to determine Htr
    double min_x, min_y, tam_x, tam_y;
    float min_x1, min_x2, min_y1, min_y2, max_x1, max_x2, max_y1, max_y2;
    min_x1 = min(fourPoint.at(0).x, fourPoint.at(1).x);
    min_x2 = min(fourPoint.at(2).x, fourPoint.at(3).x);
    min_y1 = min(fourPoint.at(0).y, fourPoint.at(1).y);
    min_y2 = min(fourPoint.at(2).y, fourPoint.at(3).y);
    max_x1 = max(fourPoint.at(0).x, fourPoint.at(1).x);
    max_x2 = max(fourPoint.at(2).x, fourPoint.at(3).x);
    max_y1 = max(fourPoint.at(0).y, fourPoint.at(1).y);
    max_y2 = max(fourPoint.at(2).y, fourPoint.at(3).y);
    min_x = min(min_x1, min_x2);
    min_y = min(min_y1, min_y2);
    tam_x = max(max_x1, max_x2);
    tam_y = max(max_y1, max_y2);

    //-- Htr is used to map image one onto the result, in line with the already warped image
    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0){
        tam_x = image2.size().width - min_x;
        Htr.at<double>(0,2) = -min_x;
    }
    if (min_y < 0){
        tam_y = image2.size().height - min_y;
        Htr.at<double>(1,2) = -min_y;
    }
    result = Mat(Size(tam_x*2, tam_y*2), CV_8UC3, cv::Scalar(0,0,0));
    warpPerspective(image2, result, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    warpPerspective(image1, result, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);

    //-- Variables to hold the keypoints at the respective stages
    vector<Point2f> kp1Local, kp2Local;
    vector<KeyPoint> kp1updated, kp2updated;

    //-- Localize the keypoints to allow for the perspective change
    KeyPoint::convert(kp1, kp1Local);
    KeyPoint::convert(kp2, kp2Local);
    //-- Perform the perspective transform on the keypoints of type vector<Point2f>
    perspectiveTransform(kp1Local, kp1Local, (Htr));
    perspectiveTransform(kp2Local, kp2Local, (Htr*homography));
    //-- Convert the keypoints back to type vector<KeyPoint>
    for (size_t i = 0; i < kp1Local.size(); i++) {
        kp1updated.push_back(KeyPoint(kp1Local[i], 1.f));
    }
    for (size_t i = 0; i < kp2Local.size(); i++) {
        kp2updated.push_back(KeyPoint(kp2Local[i], 1.f));
    }

    //-- Add to the master list of keypoints to be passed along during the next iteration
    keypoints_updated.reserve(kp1updated.size() + kp2updated.size());
    keypoints_updated.insert(keypoints_updated.end(), kp1updated.begin(), kp1updated.end());
    keypoints_updated.insert(keypoints_updated.end(), kp2updated.begin(), kp2updated.end());

    //-- warpPerspective of the descriptors to match the images and corresponding keypoints
    Mat desc1New, desc2New;
    warpPerspective(desc2, desc2New, Htr, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);
    warpPerspective(desc1, desc1New, (Htr*homography), result.size(), INTER_LINEAR, BORDER_TRANSPARENT, 0);

    //-- Create a new Mat including the descriptors from desc1 and desc2
    descriptors_updated.push_back(desc1New);
    descriptors_updated.push_back(desc2New);

    //------------ TEST to see if the keypoints have moved
    Mat img_keypoints;
    drawKeypoints(result, keypoints_updated, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    imshow("Keypoints 1", img_keypoints);
    waitKey();
    destroyAllWindows();

    return {result, keypoints_updated, descriptors_updated};
}
The following code is my master stitching program that does the actual stitching.
tuple<Mat, vector<KeyPoint>, Mat> stitch(Mat img1, Mat img2, vector<KeyPoint> keypoints, Mat descriptors, String featureDetection, String featureExtractor, String keypointsMatcher, String showMatches){
    Mat desc, desc1, desc2, homography, result, croppedResult, descriptors_updated;
    std::vector<KeyPoint> keypoints_updated, kp1, kp2;
    std::vector<DMatch> matches;

    //-- Base case [2]
    if (keypoints.empty()){
        //-- Detect keypoints and their descriptors
        tie(kp1, desc1) = KeyPointDescriptor(img1, featureDetection, featureExtractor);
        tie(kp2, desc2) = KeyPointDescriptor(img2, featureDetection, featureExtractor);

        //-- Find matches and calculate the homography based on the keypoints and descriptors
        std::tie(matches, homography) = matchFeatures(kp1, desc1, kp2, desc2, keypointsMatcher);
        //-- Draw matches if requested
        if (showMatches == "true"){
            drawMatchedImages(img1, kp1, img2, kp2, matches);
        }

        //-- Stitch the images and update the keypoints and descriptors
        std::tie(result, keypoints_updated, descriptors_updated) = stitchMatches(img1, img2, homography, kp1, kp2, desc1, desc2);

        //-- Crop using the created cropping function
        croppedResult = crop(result);
        return {croppedResult, keypoints_updated, descriptors_updated};
    }
    //-- Recursive case [3:n]
    else{
        //-- Get the keypoints and descriptors of the new image and add them to the respective lists
        tie(kp2, desc2) = KeyPointDescriptor(img2, featureDetection, featureExtractor);

        //-- Find matches and determine the homography
        std::tie(matches, homography) = matchFeatures(keypoints_updated, descriptors_updated, kp2, desc2, keypointsMatcher);
        //-- Draw matches if requested
        if (showMatches == "true")
            drawMatchedImages(img1, keypoints, img2, kp2, matches);

        //-- Stitch the images and update the keypoints and descriptors
        tie(result, keypoints_updated, descriptors_updated) = stitchMatches(img1, img2, homography, keypoints, kp2, descriptors, desc2);

        //-- Crop using the created cropping function
        croppedResult = crop(result);
        return {croppedResult, keypoints_updated, descriptors_updated};
    }
}
Lastly, here is the image of the keypoints being transformed onto the stitched image. Any help is greatly appreciated!
After combing through the code I just happened to find that I was using the wrong variable at one point! :)
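For anyone else reading: judging only from the posted code, one likely wrong-variable spot is the else branch of stitch(), which calls matchFeatures() with the freshly declared (and still empty) keypoints_updated and descriptors_updated instead of the incoming keypoints and descriptors parameters; that is a guess from the code above, not a confirmed diagnosis. On the descriptor question: descriptors are rows of feature vectors, not images, so warping them with warpPerspective scrambles the row-to-keypoint correspondence. A minimal sketch of carrying them forward by concatenation instead, assuming desc1 and desc2 as in stitchMatches():
// Sketch: accumulate descriptors by stacking their rows; row i of the
// result corresponds to keypoints_updated[i], as long as the keypoints
// were appended in the same order (kp1 first, then kp2)
Mat descriptors_updated;
cv::vconcat(desc1, desc2, descriptors_updated);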
I'm going to compare images with the SURF detector in OpenCV. For this I need the size and orientation of the keypoints that are matched, so that I can compare them. For example, I want to keep only the matches whose keypoint in the first image is larger than the matched keypoint in the second image (keypoint1.size > keypoint2.size).
Question:
How do I extract the size of the matched keypoints in OpenCV?
I'm not sure if I quite understand your question.
What I understood was that:
You will compare images using keypoints
You want to compare the size of matched keypoints
First you need at least 2 images:
Mat image1; //imread stuff here
Mat image2; //imread stuff here
Then detect keypoints in the two images using SURF:
vector<KeyPoint> keypoints1, keypoints2; //store the keypoints
Ptr<FeatureDetector> detector = new SURF();
detector->detect(image1, keypoints1); //detect keypoints in 'image1' and store them in 'keypoints1'
detector->detect(image2, keypoints2); //detect keypoints in 'image2' and store them in 'keypoints2'
After that compute the descriptors for detected keypoints:
Mat descriptors1, descriptors2;
Ptr<DescriptorExtractor> extractor = new SURF();
extractor->compute(image1, keypoints1, descriptors1);
extractor->compute(image2, keypoints2, descriptors2);
Then match the descriptors of keypoints using for example BruteForce with L2 norm:
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
After these steps the matched keypoints are stored in the vector 'matches'.
You can get the index of matched keypoints as follows:
//idx is any number between 0 and matches.size()-1
int keypoint1idx = matches[idx].queryIdx; //keypoint of the first image 'image1'
int keypoint2idx = matches[idx].trainIdx; //keypoint of the second image 'image2'
Read this for further information:
http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html
Finally, to get the size of the matched keypoints you can do the following:
float size1 = keypoints1[ keypoint1idx ].size; //size of the keypoint in image1
float size2 = keypoints2[ keypoint2idx ].size; //size of the keypoint in image2
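Putting the pieces together, a small sketch of the size comparison the question describes (just a sketch, reusing the names defined above; the .angle field can be compared the same way for orientation):
// Keep only the matches whose keypoint in image1 is larger than its
// counterpart in image2
vector<DMatch> sizeFiltered;
for (size_t idx = 0; idx < matches.size(); idx++) {
    const KeyPoint& kp1 = keypoints1[matches[idx].queryIdx];
    const KeyPoint& kp2 = keypoints2[matches[idx].trainIdx];
    if (kp1.size > kp2.size)
        sizeFiltered.push_back(matches[idx]);
}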
Further info: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html
And that's it! Hope this helps
I am trying to use SURF, but I am having trouble finding a way to do so in C. The documentation only seems to cover C++ for this.
I have been able to detect SURF features:
IplImage *img = cvLoadImage("img5.jpg");
CvMat* image = cvCreateMat(img->height, img->width, CV_8UC1);
cvCvtColor(img, image, CV_BGR2GRAY);

// detecting keypoints
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
int i;

//Extract SURF points by initializing parameters
CvSURFParams params = cvSURFParams(1, 1);
cvExtractSURF( image, 0, &imageKeypoints, &imageDescriptors, storage, params );
printf("Image Descriptors: %d\n", imageDescriptors->total);

//draw the keypoints on the captured frame
for( i = 0; i < imageKeypoints->total; i++ )
{
    CvSURFPoint* r = (CvSURFPoint*)cvGetSeqElem( imageKeypoints, i );
    CvPoint center;
    int radius;
    center.x = cvRound(r->pt.x);
    center.y = cvRound(r->pt.y);
    radius = cvRound(r->size*1.2/9.*2);
    cvCircle( image, center, radius, CV_RGB(0,255,0), 1, 8, 0 );
}
But I can't find the method that I need to compare the descriptors of 2 images. I found this code in C++ but I'm having trouble translating it:
// matching descriptors
BruteForceMatcher<L2<float> > matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
I would appreciate it if someone could point me to a descriptor matcher, or even better, let me know where I can find OpenCV documentation for C only.
This link might give you a hint: https://projects.developer.nokia.com/opencv/browser/opencv/opencv-2.3.1/samples/c/find_obj.cpp. Look at the function naiveNearestNeighbor.
Check out the blog post from thioldhack. It contains sample code. It's for Qt, but you can easily adapt it for VC++ or any other environment. You will need to match the keypoints using the k-nearest-neighbour algorithm. It has everything you need.
The slightly longer but surest way is to compile OpenCV on your computer with debug information and just step into the C++ implementation with a debugger. You can also copy it aside into your project and start peeling it like an onion until you get down to pure C.
I'm trying to stitch 2 images, just as a start for panography. I've already found keypoints and found the homography using RANSAC, but I can't figure out how to align these 2 images (I'm new to OpenCV). Here is part of the code:
vector<Point2f> points1, points2;
for( size_t i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    points1.push_back( keypoints1[ good_matches[i].queryIdx ].pt );
    points2.push_back( keypoints2[ good_matches[i].trainIdx ].pt );
}

/* Find homography */
Mat H = findHomography( Mat(points2), Mat(points1), CV_RANSAC );

/* Warp the image */
warpPerspective(mImg2, warpImage2, H, Size(mImg2.cols*2, mImg2.rows*2), INTER_CUBIC);
And I need to stitch Mat mImg1, which holds the first image, with Mat warpImage2, which holds the warped second image. Can you please show me how to do that? I also see the warped image getting cut off, and I know I have to change the homography matrix for that, but for now I just need to align these two images. Thank you for helping.
Edit: with Martin Beckett's help I added this code:
//Create the combined output image (this allocates the canvas)
Mat final(Size(mImg2.cols*2 + mImg1.cols, mImg2.rows*2), CV_8UC3);
//ROI the size of img1
Mat roi1(final, Rect(0, 0, mImg1.cols, mImg1.rows));
Mat roi2(final, Rect(0, 0, warpImage2.cols, warpImage2.rows));
warpImage2.copyTo(roi2);
mImg1.copyTo(roi1);
imshow("final", final);
and it's working now
You create a new, larger image of the correct combined size, then make ROIs the size of the existing images at the positions you want them in the final image, and copy the existing images into those ROIs.
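As a self-contained sketch of that idea (the file names are made up, and the placement is simplified to side by side; in the panorama case the warped image's position inside the canvas comes from the translation part of the homography):
#include <algorithm>
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Hypothetical inputs; in the question these are mImg1 and warpImage2
    Mat left  = imread("left.jpg");
    Mat right = imread("right.jpg");

    // Canvas large enough to hold both images
    Mat canvas(Size(left.cols + right.cols, std::max(left.rows, right.rows)),
               CV_8UC3, Scalar(0, 0, 0));

    // The ROIs are headers into the canvas, so copyTo() writes in place
    right.copyTo(canvas(Rect(left.cols, 0, right.cols, right.rows)));
    left.copyTo(canvas(Rect(0, 0, left.cols, left.rows)));

    imshow("canvas", canvas);
    waitKey(0);
    return 0;
}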