OpenCV in C++: "Unknown Type Name"

I'm trying to follow along with the OpenCV tutorial, found here. Part of the tutorial is to create a SURF feature detector.
Unlike the tutorial, my code is in a header file, like this:
class Img {
    Mat mat;
    int minHessian = 400;
    SurfFeatureDetector detector(minHessian);
public:
    ...
}
The error I'm getting occurs on the line
SurfFeatureDetector detector(minHessian);
and the error is:
Unknown type name 'minHessian'
When I do not put this in a separate class, the compiler does not complain. I have also checked that I have imported the required libraries.
Can anybody tell me what the error is, and how to fix it?

I read the OpenCV tutorial code:
Mat img1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
Mat img2 = imread(argv[2], CV_LOAD_IMAGE_GRAYSCALE);
if(img1.empty() || img2.empty())
{
    printf("Can't read one of the images\n");
    return -1;
}
// detecting keypoints
SurfFeatureDetector detector(400);
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);
....
and as I understand it, in this code SurfFeatureDetector detector(minHessian); is not a function signature that you can write in your header file the way you did; it is actually a call to the SurfFeatureDetector constructor.
So I think that if you remove it from your header file and construct the detector inside the function(s) where you want to use it, it should work.
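For example, something like this should compile (a minimal sketch, assuming OpenCV 2.4 with the nonfree module, which is where SurfFeatureDetector lives; the detectKeypoints method name is just illustrative):
#include <opencv2/core/core.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <vector>

using namespace cv;

class Img {
    Mat mat;
    int minHessian = 400;
public:
    std::vector<KeyPoint> detectKeypoints() {
        // Construct the detector here instead of declaring it in the class body.
        SurfFeatureDetector detector(minHessian);
        std::vector<KeyPoint> keypoints;
        detector.detect(mat, keypoints);
        return keypoints;
    }
};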

Related

Problems while trying to extract features using SIFT in opencv 4.5.1

I am trying to extract features of an image using SIFT in opencv 4.5.1, but when I try to check the result by using drawKeypoints() I keep getting this cryptic error:
OpenCV(4.5.1) Error: Assertion failed (!fixedType() || ((Mat*)obj)->type() == mtype) in cv::debug_build_guard::_OutputArray::create, file C:\build\master_winpack-build-win64-vc14\opencv\modules\core\src\matrix_wrap.cpp, line 1147
D:\School\IP2\OpenCVApplication-VS2019_OCV451_basic\x64\Debug\OpenCVApplication.exe (process 6140) exited with code -1.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
The problem seems to be with the drawKeypoints() function but I'm not sure what causes the problem.
The function:
vector<KeyPoint> extractFeatures(String path) {
    Mat_<uchar> source = imread(path, 0);
    Mat_<uchar> output(source.rows, source.cols);
    vector<KeyPoint> keypoints;

    Ptr<SIFT> sift = SIFT::create();
    sift->detect(source, keypoints);

    drawKeypoints(source, keypoints, output);
    imshow("sift_result", output);

    return keypoints;
}
You are getting an exception because the output argument of drawKeypoints must be a 3-channel color image, and you are initializing output as a 1-channel (grayscale) image.
When output is a plain cv::Mat (for example Mat output;), the drawKeypoints function creates a new color matrix automatically.
When using the derived template matrix class Mat_<uchar>, the function drawKeypoints raises an exception!
You may replace: Mat_<uchar> output(source.rows, source.cols); with:
Mat_<Vec3b> output(source.rows, source.cols); //Create 3 color channels image (matrix).
Note:
You may also use Mat instead of Mat_:
Mat output; //The matrix is going to be dynamically allocated inside drawKeypoints function.
Note:
My current OpenCV version (4.2.0) has no SIFT support, so I used ORB instead (for testing).
Here is the code sample used for testing:
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main()
{
Mat_<uchar> source = imread("graf.png", 0);
Mat_<Vec3b> output(source.rows, source.cols); //Create 3 color channels image (matrix).
vector<KeyPoint> keypoints;
Ptr<ORB> orb = ORB::create();
orb->detect(source, keypoints);
drawKeypoints(source, keypoints, output);
imshow("orb_result", output);
waitKey(0);
destroyAllWindows();
return 0;
}
Result:

Want to detect features and match them in 2 different frames

I'm currently using OpenCV 3.4.0 with C++ in Qt Creator.
I've tried the sample code on this page:
https://docs.opencv.org/2.4/doc/tutorials/features2d/feature_description/feature_description.html
int minHessian = 400;
cv::xfeatures2d::SurfFeatureDetector detector(minHessian);
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);
// computing descriptors
cv::xfeatures2d::SurfDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
but the code kept returning these errors:
no matching function for call to 'cv::xfeatures2d::SURF::SURF(int&)' (2nd line)
cannot declare variable 'detector' to be of abstract type 'cv::xfeatures2d::SURF' (2nd line)
cannot declare variable 'extractor' to be of abstract type 'cv::xfeatures2d::SURF' (7th line)
I think I have imported all the necessary modules, including xfeatures2d.
What is the problem?
And are there any other sample codes that I can try?
Your code reflects an older version of OpenCV and is not compatible with OpenCV 3.4.0. You can try something like the following. It is also better to convert the template image to a grayscale image for better matching:
cv::Ptr<Feature2D> detector = cv::xfeatures2d::SurfFeatureDetector::create();
std::vector<KeyPoint> keypoints1;
detector->detect(img1, keypoints1);

cv::Ptr<DescriptorExtractor> extractor = cv::xfeatures2d::SurfFeatureDetector::create();
Mat descriptors1;
extractor->compute(img1, keypoints1, descriptors1);
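If you prefer the newer one-call API, here is a minimal sketch assuming OpenCV 3.4.x built with the opencv_contrib xfeatures2d module (the image file names are placeholders):
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>

using namespace cv;

int main()
{
    Mat img1 = imread("frame1.png", IMREAD_GRAYSCALE);
    Mat img2 = imread("frame2.png", IMREAD_GRAYSCALE);

    // SURF implements detectAndCompute(), so one call per image gives
    // both the keypoints and their descriptors.
    Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400); // minHessian = 400
    std::vector<KeyPoint> keypoints1, keypoints2;
    Mat descriptors1, descriptors2;
    surf->detectAndCompute(img1, noArray(), keypoints1, descriptors1);
    surf->detectAndCompute(img2, noArray(), keypoints2, descriptors2);

    // Brute-force matching on L2 distance, as in the original tutorial.
    BFMatcher matcher(NORM_L2);
    std::vector<DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    Mat img_matches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
    imshow("matches", img_matches);
    waitKey(0);
    return 0;
}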

OpenCV how to create a DescriptorExtractor object

I am using the OpenCV C++ library, but I can't manage to create a DescriptorExtractor object.
Here is what I did:
Mat img = imread("testOrb.jpg",CV_LOAD_IMAGE_UNCHANGED);
std::vector<KeyPoint> kp;
cv::Ptr<cv::ORB> detector = cv::ORB::create();
detector->detect( img, kp )
//this part works
DescriptorExtractor descriptorExtractor;
Mat descriptors;
descriptorExtractor.compute(img, kp, descriptors);
//when these 3 lines are added, an error is thrown
But I get the following error message:
OpenCV Error: The function/feature is not implemented () in detectAndCompute, file ...
DescriptorExtractor is an abstract class, so you can't instantiate it. It's just a common interface for descriptor extractors. You can do something like this:
Ptr<DescriptorExtractor> descriptorExtractor = ORB::create();
Mat descriptors;
descriptorExtractor->compute(img, kp, descriptors);
Note that there is also FeatureDetector, which is the common interface for detecting keypoints, so you can do:
std::vector<KeyPoint> kp;
Ptr<FeatureDetector> detector = ORB::create();
detector->detect(img, kp);
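Since ORB derives from Feature2D, which implements both interfaces, a single pointer can serve as detector and extractor at the same time; a minimal sketch (reusing the image path from the question):
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat img = imread("testOrb.jpg", IMREAD_GRAYSCALE);
    std::vector<KeyPoint> kp;
    Mat descriptors;

    // One ORB instance works as both FeatureDetector and DescriptorExtractor.
    Ptr<Feature2D> orb = ORB::create();
    orb->detect(img, kp);
    orb->compute(img, kp, descriptors);
    // ...or do both in a single call:
    // orb->detectAndCompute(img, noArray(), kp, descriptors);

    return 0;
}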

BRIEF implementation with OpenCV 2.4.10

Does someone know of a link to a BRIEF implementation with OpenCV 2.4? Regards.
PS: I know such questions are generally not welcome on SO, as the primary focus is on what work you have done. But there was a similar question which was quite well received.
One of the answers to that question suggests a generic approach for SIFT, which could be extended to BRIEF. Here is my slightly modified code.
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/highgui/highgui.hpp>

//using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    Mat image = imread("load02.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::initModule_nonfree();

    // Create smart pointer for SIFT feature detector.
    Ptr<FeatureDetector> featureDetector = FeatureDetector::create("HARRIS"); // "BRIEF was initially written. Changed after answer."
    vector<KeyPoint> keypoints;

    // Detect the keypoints
    featureDetector->detect(image, keypoints); // NOTE: featureDetector is a pointer hence the '->'.

    //Similarly, we create a smart pointer to the SIFT extractor.
    Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("BRIEF");

    // Compute the 128 dimension SIFT descriptor at each keypoint.
    // Each row in "descriptors" correspond to the SIFT descriptor for each keypoint
    Mat descriptors;
    featureExtractor->compute(image, keypoints, descriptors);

    // If you would like to draw the detected keypoint just to check
    Mat outputImage;
    Scalar keypointColor = Scalar(255, 0, 0); // Blue keypoints.
    drawKeypoints(image, keypoints, outputImage, keypointColor, DrawMatchesFlags::DEFAULT);

    namedWindow("Output");
    imshow("Output", outputImage);

    char c = ' ';
    while ((c = waitKey(0)) != 'q'); // Keep window there until user presses 'q' to quit.

    return 0;
}
The issue with this code is that it gives an error: First-chance exception at 0x00007FFB84698B9C in Project2.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000071F4FBF8E0.
The error results in the function execution breaking. A tag says that execution will resume at the namedWindow("Output"); line.
Could someone please help fix this issue, or suggest a new code altogether? Thanks.
EDIT: The terminal now shows an error: Assertion failed (!outImage.empty()) in cv::drawKeypoints, file ..\..\..\..opencv\modules\features2d\src\draw.cpp, line 115. The next statement from where the code will resume remains the same, as drawKeypoints is called just before it.
In OpenCV, BRIEF is a DescriptorExtractor, not a FeatureDetector. According to FeatureDetector::create, this factory method does not support the "BRIEF" algorithm. In other words, FeatureDetector::create("BRIEF") returns a null pointer and your program crashes.
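As a quick sanity check (a small sketch against the OpenCV 2.4 API, assuming <iostream> is included), you can test the Ptr returned by the factory before using it:
auto featureDetector = FeatureDetector::create("BRIEF");
if (featureDetector.empty())
{
    // create() returned a null pointer: "BRIEF" is not a supported FeatureDetector name.
    std::cerr << "FeatureDetector::create(\"BRIEF\") failed" << std::endl;
    return -1;
}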
The general steps in feature matching are:
Find some interesting (feature) points in an image: FeatureDetector
Find a way to describe those points: DescriptorExtractor
Try to match descriptors (feature vectors) in two images: DescriptorMatcher
BRIEF is an algorithm only for step 2. You can use some other method, such as HARRIS or ORB, in step 1 and pass the result to step 2 using BRIEF. Besides, SIFT can be used in both steps 1 and 2 because the algorithm provides methods for both steps.
Here's a simple example of using BRIEF in OpenCV. First step, find points that look interesting (key points) in an image:
vector<KeyPoint> DetectKeyPoints(const Mat &image)
{
    auto featureDetector = FeatureDetector::create("HARRIS");
    vector<KeyPoint> keyPoints;
    featureDetector->detect(image, keyPoints);
    return keyPoints;
}
You can try any FeatureDetector algorithm instead of "HARRIS". Next step, compute the descriptors from key points:
Mat ComputeDescriptors(const Mat &image, vector<KeyPoint> &keyPoints)
{
    auto featureExtractor = DescriptorExtractor::create("BRIEF");
    Mat descriptors;
    featureExtractor->compute(image, keyPoints, descriptors);
    return descriptors;
}
You can use an algorithm other than "BRIEF", too. Note that the algorithms available through DescriptorExtractor are not the same as the ones available through FeatureDetector. The last step is to match two descriptors:
vector<DMatch> MatchTwoImage(const Mat &descriptor1, const Mat &descriptor2)
{
    auto matcher = DescriptorMatcher::create("BruteForce");
    vector<DMatch> matches;
    matcher->match(descriptor1, descriptor2, matches);
    return matches;
}
Similarly, you can try a matching algorithm other than "BruteForce". Finally, back in the main program, you can build the application from those functions:
auto img1 = cv::imread("image1.jpg");
auto img2 = cv::imread("image2.jpg");
auto keyPoints1 = DetectKeyPoints(img1);
auto keyPoints2 = DetectKeyPoints(img2);
auto descriptor1 = ComputeDescriptors(img1, keyPoints1);
auto descriptor2 = ComputeDescriptors(img2, keyPoints2);
auto matches = MatchTwoImage(descriptor1, descriptor2);
and use the matches vector to complete your application. If you want to check the results, OpenCV also provides functions to draw the results of steps 1 and 3 on an image. For example, to draw the matches from the final step:
Mat result;
drawMatches(img1, keyPoints1, img2, keyPoints2, matches, result);
imshow("result", result);
waitKey(0);

Loading saved SURF keypoints

I am detecting SURF features in an image and then writing them to a yml file. I then want to load the features from the yml file again to try and detect an object, but at the moment I'm having trouble loading the keypoints to draw them on an image.
I'm writing the keypoints like so:
cv::FileStorage fs("keypointsVW.yml", cv::FileStorage::WRITE);
write(fs, "keypoints_1", keypoints_1);
fs.release();
I am trying to read them like so:
cv::FileStorage fs2("keypointsVW.yml", cv::FileStorage::READ);
read(fs2, "keypoints_1", keypoints_1);
fs2.release();
But this is producing a host of errors.
Detection and draw code:
cv::Mat img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
int minHessian = 400;
cv::SurfFeatureDetector detector(minHessian);
std::vector<cv::KeyPoint> keypoints_1;
detector.detect(img_1, keypoints_1);
cv::Mat img_keypoints_1;
//......write code
//.......read code
drawKeypoints(img_1, keypoints_1, img_keypoints_1);
imshow("keypoints_1", img_keypoints_1);
Found the solution, I'll post it here in case anyone else has the same problem.
std::vector<cv::KeyPoint> testPoints;
cv::FileStorage fs2("keypointsVW.yml", cv::FileStorage::READ);
cv::FileNode kptFileNode = fs2["keypoints_1"]; // the node name must match the one used when writing
read(kptFileNode, testPoints);
fs2.release();
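For reference, a minimal round-trip sketch (the file name and node name are arbitrary, but the node name passed to write must match the one looked up when reading; the hand-made keypoint just stands in for real detections):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Assume keypoints_1 would normally be filled by a detector.
    std::vector<cv::KeyPoint> keypoints_1;
    keypoints_1.push_back(cv::KeyPoint(10.f, 20.f, 4.f));

    // Write the keypoints under the node name "keypoints_1".
    cv::FileStorage fs("keypointsVW.yml", cv::FileStorage::WRITE);
    cv::write(fs, "keypoints_1", keypoints_1);
    fs.release();

    // Read them back using the same node name.
    std::vector<cv::KeyPoint> loaded;
    cv::FileStorage fs2("keypointsVW.yml", cv::FileStorage::READ);
    cv::read(fs2["keypoints_1"], loaded);
    fs2.release();

    return 0;
}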