I am working on an iPhone project that uses OpenCV for image matching. Initially I was using cvMatchTemplate(), but the output was not what we expected, so I am now trying to implement a SURF detector with FLANN-based matching.
I tried to port the following C++ code to Objective-C:
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
But I couldn't get it to compile, even though I have all the required libraries and header files included. Auto-complete also doesn't offer any of the detectors declared in
#include "opencv2/features2d/features2d.hpp"
The detector is defined in the header file as
class CV_EXPORTS FeatureDetector
{
...
}
What am I doing wrong here? Any input on how to call methods of the detector class (an abstract base class)?
I haven't used OpenCV on the iPhone specifically, so I can't help there, but when I've used feature detectors/descriptors/matchers I've used the following syntax (which may end up being the same as what you've written):
cv::Ptr<cv::DescriptorExtractor> extractor;
extractor = cv::DescriptorExtractor::create("SURF");
cv::Ptr<cv::DescriptorMatcher> matcher;
matcher = cv::DescriptorMatcher::create("FlannBased");
Does that style work for you?
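For context, here is a minimal sketch of that factory-based flow from detection through matching (assuming OpenCV 2.4.x with the nonfree module, so SURF is available; img_1 and img_2 are grayscale cv::Mat images as in your snippet):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>   // SURF lives in the nonfree module in OpenCV 2.4.x

void matchWithFactories(const cv::Mat& img_1, const cv::Mat& img_2)
{
    cv::initModule_nonfree();   // registers SURF with the create() factories

    // Create detector, extractor and matcher through the factory interface.
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SURF");
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SURF");
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");

    std::vector<cv::KeyPoint> keypoints_1, keypoints_2;
    detector->detect(img_1, keypoints_1);
    detector->detect(img_2, keypoints_2);

    cv::Mat descriptors_1, descriptors_2;
    extractor->compute(img_1, keypoints_1, descriptors_1);
    extractor->compute(img_2, keypoints_2, descriptors_2);

    std::vector<cv::DMatch> matches;
    matcher->match(descriptors_1, descriptors_2, matches);
}
On iOS, note that C++ code like this needs to live in an Objective-C++ source file (a .mm extension) for it to compile.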
I am using OpenCV's Feature2D::detectAndCompute() to compute the keypoints and descriptors of images.
The code basically looks like this:
Ptr<cv::ORB> orb = cv::ORB::create();
Mat descriptors1, descriptors2...;
vector<KeyPoint> keypoints1, keypoints2...;
orb->detectAndCompute(img1, noArray(), keypoints1, descriptors1); // first instance
orb->detectAndCompute(img2, noArray(), keypoints2, descriptors2); // second instance
// more detectAndCompute...
My problem is that the first detectAndCompute takes significantly longer (~0.25 s) than all the detectAndCompute calls after it (~0.002 s each). Why is this happening?
Also, regardless of how small the images are, the first detectAndCompute always takes about 0.25 s. Is there any way to cut down this time?
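For reference, a minimal sketch of how such timings could be measured with cv::getTickCount() (img1 and img2 are placeholders for whatever images are loaded):
#include <opencv2/opencv.hpp>
#include <iostream>

// img1 and img2 are assumed to be valid cv::Mat images loaded elsewhere.
void timeDetectAndCompute(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;

    double t0 = (double)cv::getTickCount();
    orb->detectAndCompute(img1, cv::noArray(), keypoints, descriptors);   // first call
    double t1 = (double)cv::getTickCount();
    orb->detectAndCompute(img2, cv::noArray(), keypoints, descriptors);   // second call
    double t2 = (double)cv::getTickCount();

    double freq = cv::getTickFrequency();
    std::cout << "first call:  " << (t1 - t0) / freq << " s\n";
    std::cout << "second call: " << (t2 - t1) / freq << " s\n";
}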
Thank you!
I am working with OpenCV and added it to a C++ project. However, I have a problem with feature matching. The code is here:
cv::Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
std::vector<KeyPoint> keypoints_1, keypoints_2;
Mat descriptors_1, descriptors_2;
f2d->detectAndCompute(img_Org_Y_mat, Mat(), keypoints_1, descriptors_1);
f2d->detectAndCompute(img_Rec_Y_mat, Mat(), keypoints_2, descriptors_2);
vector<DMatch> matches;
BFMatcher matcher(NORM_L2);
matcher.match(descriptors_1, descriptors_2, matches);
I have a problem with the last line, which gives:
OpenCV Error: Assertion failed (masks.size() == imageCount) in
cv::DescriptorMatcher::checkMasks, file C:\Opencv2\opencv
master\modules\features2d\src\matchers.cpp, line 617
and at runtime the program reports:
Unhandled exception at 0x00007FF803FA9E08 in TAppEncoder.exe: Microsoft C++
exception: cv::Exception at memory location 0x000000CE624E9050.
I also ran into this problem and solved it, though I'm not sure the same fix applies to your case. I'll share my solution anyway.
By mistake, I had added some .lib files that are only needed in Release mode, such as opencv_bgsegm341.lib, to "Additional Dependencies".
After I deleted them, the program ran successfully.
I hope this helps.
Would somebody share their knowledge of OpenCV feature detection and extraction of fiducial markers?
I'm attempting to find a fiducial marker (see image below; a self-made ARTag-style marker created in MS Paint) in a scene.
Using Harris corner detection, I can adequately locate the corners of the marker image. Similarly, using Harris corner detection, I can find most of the corners of the marker in the scene. I then use SIFT to extract descriptors for the marker image and the scene image. Then I've tried both BF and FLANN for feature matching. However, both matching algorithms tend to match the wrong corners together.
Is there something that I can do to improve the accuracy? Or are there other detection methods that would be better suited to this application?
Portion of code:
GoodFeaturesToTrackDetector harris_detector(6, 0.15, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(im1, keypoints1);
harris_detector.detect(im2, keypoints2);
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher;
//FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors1, descriptors2, matches );
You can try the ORB detector, which is a fusion of the FAST keypoint detector and the BRIEF descriptor. It is fast and better than BRIEF alone because the latter does not compute orientation.
You can find an example of ORB usage in samples/cpp/tutorial_code/features2D/AKAZE_tracking.
There is also a Python project, fiducial, which does a task similar to yours.
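For illustration, a minimal sketch of ORB detection with Hamming-distance matching and Lowe's ratio test (OpenCV 3.x-style API assumed; im1 is the marker image and im2 the scene, as in the question):
#include <opencv2/opencv.hpp>

// im1 (marker) and im2 (scene) are assumed to be grayscale cv::Mat images.
void matchWithOrb(const cv::Mat& im1, const cv::Mat& im2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);   // detect up to 1000 keypoints

    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(im1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(im2, cv::noArray(), kp2, desc2);

    // ORB descriptors are binary, so use Hamming distance.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(desc1, desc2, knnMatches, 2);

    // Ratio test: keep a match only if it is clearly better than the runner-up.
    std::vector<cv::DMatch> good;
    for (const auto& m : knnMatches)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            good.push_back(m[0]);
}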
I am new to OpenCV in general and have been trying out a number of examples over the past few days. I've successfully gotten Harris corner detection to work on some test images. My next test was to see if I could match two images based on the Harris detections using SIFT.
Here is how I found harris corners:
GoodFeaturesToTrackDetector harris_detector(1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(img1, keypoints1);
harris_detector.detect(img2, keypoints2);
This works well. My next goal is to match the features between img1 and img2 using SIFT. However, when I try to declare an extractor for SIFT:
SiftDescriptorExtractor extractor;
I get the following error:
error: ‘SiftDescriptorExtractor’ was not declared in this scope
SiftDescriptorExtractor extractor;
What am I doing wrong?
Thanks in advance.
Make sure you have #include <opencv2/features2d/features2d.hpp>.
In some versions of OpenCV, SIFT is in <opencv2/nonfree/features2d.hpp>.
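For example, assuming OpenCV 2.4.x (where SIFT lives in the nonfree module and you also link against opencv_nonfree), the declaration would look roughly like this:
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SiftDescriptorExtractor in OpenCV 2.4.x

using namespace cv;

// keypoints1 is assumed to come from the Harris detector shown above.
void describeWithSift(const Mat& img1, std::vector<KeyPoint>& keypoints1)
{
    SiftDescriptorExtractor extractor;
    Mat descriptors1;
    extractor.compute(img1, keypoints1, descriptors1);
}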
I just wondered whether using a SurfFeatureDetector to detect keypoints and a SurfDescriptorExtractor to extract the SURF descriptors (see the code below, as described here) wouldn't extract the descriptors twice.
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints ); //detecting keypoints, extracting descriptors without returning them
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( img, keypoints, descriptors ); // extracting descriptors a second time
The OpenCV documentation says those two classes are wrappers for the SURF() class.
SURF::operator() is overloaded: one version takes just a keypoint vector, the other additionally takes a vector for the descriptors.
What intrigues me is that both then call the cvExtractSURF() function, which seems to extract the descriptors no matter what (I did not dive too deep into the C code, as I find it hard to understand, so maybe I'm wrong).
But this would mean that the SurfFeatureDetector would extract descriptors without returning them. Using the SurfDescriptorExtractor in the next step just does it a second time, which seems very inefficient to me. But am I right?
You can be assured that the detector does not actually compute the descriptors. The key statement to look at is line 687 of surf.cpp: if( !descriptors ) continue; Descriptors are not computed during detection, the way it should be. This kind of architecture is most likely due to the fact that the SURF code was "added" to OpenCV after it was designed/developed to work on its own.
As background: note that the detector and the descriptor extractor are different things. You first "detect" keypoints using SurfFeatureDetector, and then local descriptors are extracted for those keypoints using SurfDescriptorExtractor. The snippet you have is a good guide.
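For reference, a minimal sketch of the one-pass SURF::operator() call mentioned above, which returns keypoints and descriptors together (old OpenCV 2.x nonfree API assumed):
#include <opencv2/nonfree/features2d.hpp>   // SURF in OpenCV 2.4.x

// img is assumed to be a grayscale cv::Mat.
void detectAndDescribeInOnePass(const cv::Mat& img, double minHessian)
{
    cv::SURF surf(minHessian);

    std::vector<cv::KeyPoint> keypoints;
    std::vector<float> descriptors;                 // packed, 64 or 128 floats per keypoint
    surf(img, cv::Mat(), keypoints, descriptors);   // one pass: detect and compute
}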