OpenCV: ‘SiftDescriptorExtractor’ was not declared in this scope - c++

I am new to OpenCV and have been trying out a number of examples over the past few days. I've successfully gotten Harris corner detection to work on some test images. My next test is to see whether I can match two images based on the Harris detections using SIFT.
Here is how I found harris corners:
GoodFeaturesToTrackDetector harris_detector(1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(img1, keypoints1);
harris_detector.detect(img2, keypoints2);
This works well. With my next goal of matching the features between img1 and img2, I try to use SIFT. However, when I try to declare an extractor for SIFT:
SiftDescriptorExtractor extractor;
I get the following error:
error: ‘SiftDescriptorExtractor’ was not declared in this scope
SiftDescriptorExtractor extractor;
What am I doing wrong?
Thanks in advance.

Make sure you have #include <opencv2/features2d/features2d.hpp>.
In some versions of OpenCV, SIFT lives in <opencv2/nonfree/features2d.hpp>.
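Where the header lives is version-dependent; a sketch of the two common layouts (an assumption about typical installs — check the headers your build actually ships):

```cpp
// OpenCV 2.3 and earlier: SIFT classes ship with the main features2d module.
#include <opencv2/features2d/features2d.hpp>

// OpenCV 2.4.x: SIFT/SURF moved to the nonfree module; if you create
// detectors by name (e.g. FeatureDetector::create("SIFT")), call
// cv::initModule_nonfree() once first.
// #include <opencv2/nonfree/features2d.hpp>
```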

Related

OpenCV C++ Concatenate/merge/insert two vectors of type Mat

I am trying to concatenate vectors and am running into some issues. Using the insert function I was able to combine two vectors of type std::vector<KeyPoint>, but I run into issues when attempting the same process with vectors of type std::vector<Mat>.
The error is: No matching member function for call to 'begin', 'end'.
The code is as follows:
std::vector<KeyPoint> kp1, kp2;
std::vector<Mat> desc1, desc2;
std::vector<KeyPoint> keypoints;
std::vector<Mat> descriptors;
//Add keypoints and descriptors found to master list
keypoints.insert(keypoints.end(),kp1.begin(),kp1.end());
keypoints.insert(keypoints.end(),kp2.begin(),kp2.end());
descriptors.insert(descriptors.end(),desc1.begin(),desc1.end());
descriptors.insert(descriptors.end(),desc2.begin(),desc2.end());
Looking for a solution or a work-around.
Thanks for any help in advance.

Feature Detection Opencv

I am a beginner in OpenCV feature detection and matching. I am trying to write a simple method to detect and display the detected features in an image. Here is the method that I have written:
void FeatureDetection::detectSimpleFeature(Mat image){
Mat gray_image, output_image;
cvtColor(image,gray_image,CV_BGR2GRAY);
// Detect Keypoints using Simple Detector
Ptr<Feature2D> detectors;
vector<KeyPoint> keypoint_1;
detectors->detect(image, keypoint_1);
drawKeypoints(gray_image, keypoint_1, output_image, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
namedWindow("Keypoints Detection");
imshow("Keypoints Detection", output_image);
waitKey(0);
}
There is no compile-time error in this function, but the program crashes while running. Can anyone please help?
I am also looking for special types of detectors like SURF, SIFT, etc., but could not find them in my downloaded and built library. Please suggest!!!

Fiducial markers - OpenCV - Feature Detection & Matching

Would somebody share their knowledge of OpenCV feature detection and extraction of fiducial markers?
I'm attempting to find a fiducial marker (see image below) (self-created ARTag-style using MS Paint) in a scene.
Using Harris corner detection, I can adequately locate the corners of the marker image. Similarly, using Harris corner detection, I can find most of the corners of the marker in the scene. I then use SIFT to extract descriptors for the marker image and the scene image. Then I've tried both BF and FLANN for feature matching. However, both matching algorithms tend to match the wrong corners together.
Is there something I can do to improve the accuracy? Or are there other detection methods that would be more appropriate for this application?
Portion of code:
GoodFeaturesToTrackDetector harris_detector(6, 0.15, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(im1, keypoints1);
harris_detector.detect(im2, keypoints2);
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher;
//FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors1, descriptors2, matches );
You can try the ORB detector, which is a fusion of the FAST keypoint detector and the BRIEF descriptor. It is fast and better than the BRIEF descriptor alone because the latter does not compute orientation.
You can find an example of ORB usage in samples/cpp/tutorial_code/features2D/AKAZE_tracking.
There is also a Python project, fiducial, which does a task similar to yours.

Trying to match two images using sift in OpenCv, but too many matches

I am trying to implement a program which takes two input images: one of a box alone, and one that includes the box in a scene. The program is supposed to find keypoints in these two images and show the images with their matched keypoints. That is, in the end I expect to see the two input images appended side by side, with their matched keypoints connected. My code is as follows:
#include <opencv2/opencv.hpp>
#include <iostream>
int main(int argc, const char* argv[]) {
cv::Mat input1 = cv::imread("input.jpg", 1); // flag 1 loads as color, not grayscale
//cv::cvtColor(input1,input1,CV_BGR2GRAY);
// second input, also loaded as color
cv::Mat input2 = cv::imread("input2.jpg",1);
cv::SiftFeatureDetector detector;
/* Alternative: construct the detector with explicit parameters
cv::SiftFeatureDetector detector(
    1, 1,
    cv::SIFT::CommonParams::DEFAULT_NOCTAVES,
    cv::SIFT::CommonParams::DEFAULT_NOCTAVE_LAYERS,
    cv::SIFT::CommonParams::DEFAULT_FIRST_OCTAVE,
    cv::SIFT::CommonParams::FIRST_ANGLE );
*/
std::vector<cv::KeyPoint> keypoints1;
detector.detect(input1, keypoints1);
// Add results to image and save.
cv::Mat output1;
cv::drawKeypoints(input1, keypoints1, output1);
cv::imshow("Sift_result1.jpg", output1);
cv::imwrite("Sift_result1.jpg",output1);
//keypoints array for input 2
std::vector<cv::KeyPoint> keypoints2;
// output image for input 2
cv::Mat output2;
//Sift extractor of opencv
cv::SiftDescriptorExtractor extractor;
cv::Mat descriptors1,descriptors2;
cv::BruteForceMatcher<cv::L2<float>> matcher;
std::vector<cv::DMatch> matches;
cv::Mat img_matches;
detector.detect(input2,keypoints2);
cv::drawKeypoints(input2,keypoints2,output2);
cv::imshow("Sift_result2.jpg",output2);
cv::imwrite("Sift_result2.jpg",output2);
extractor.compute(input1,keypoints1,descriptors1);
extractor.compute(input2,keypoints2,descriptors2);
matcher.match(descriptors1,descriptors2,matches);
//show result
cv::drawMatches(input1,keypoints1,input2,keypoints2,matches,img_matches);
cv::imshow("matches",img_matches);
cv::imwrite("matches.jpg",img_matches);
cv::waitKey();
return 0;
}
The problem is there are too many matches, more than expected. I tried to debug the program and looked at what is inside the keypoint vectors and so on; everything looks fine, at least I think so: the keypoints are detected with orientation etc.
I am using OpenCV v2.3 and checked its documentation for the classes I am using and tried to solve the problem, but that did not work. I have been working on this for 3 days and have not made much progress.
Here is an output I get from my program.
I should have removed the image.
I know it should not give me this many matches, because I have tested the exact same images with another implementation in MATLAB, and that one was quite good.
Rather than using BruteForceMatcher, try FlannBasedMatcher, and also calculate the max and min distances between matches to keep only the good ones. See "Feature Matching with FLANN" for an example.
I faced the same problem with SIFT.
I used a knn matcher (k=3) and followed the following procedure iteratively:
{
Calculate the best affine transform with the least-squares method.
Apply the transform to all keypoints in the source image.
Compute MaxError and MinError.
Remove points with error near MaxError from the matching list.
}

Does openCV SurfFeatureDetector unnecessarily extract descriptors internally?

I just wondered, if using a SurfFeatureDetector to detect keypoints and a SurfDescriptorExtractor to extract the SURF descriptors (see code below as described here) wouldn't extract the descriptors twice.
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints ); //detecting keypoints, extracting descriptors without returning them
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( img, keypoints, descriptors ); // extracting descriptors a second time
The OpenCV documentation says those two classes are wrappers around the SURF() class.
The SURF::operator() is overloaded: one version takes just a keypoint vector, the other additionally takes a vector for the descriptors.
What intrigues me is that both then call the cvExtractSURF() function, which seems to extract the descriptors no matter what... (I did not dive too deep into the C code, as I find it hard to understand, so maybe I'm wrong.)
But this would mean that the SurfFeatureDetector extracts descriptors without returning them. Using the SurfDescriptorExtractor in the next step would just do it a second time, which seems very inefficient to me. But am I right?
You can be assured that the detector does not actually compute the descriptors. The key statement to look at is line 687 of surf.cpp: if( !descriptors ) continue; Descriptors are not computed during detection, the way it should be. This kind of architecture is most likely due to the fact that the SURF code was "added" to OpenCV after being designed/developed to work by itself.
As background: note that detectors and descriptor extractors are different things. You first "detect" points using SurfFeatureDetector; local descriptors are then extracted from them (using SurfDescriptorExtractor). The snippet you have is a good guide.