Custom SIFT detector in OpenCV - c++

Is there a way to specify custom SIFT detector parameters in OpenCV?
It seems that the FeatureDetector constructor does not take any parameters, whereas it is possible to specify them in the SIFT constructor.
I am working on logo detection.
Some of the logos have very little texture, so I would like to detect more keypoints when too few are found (could I increase the edgeThreshold of SIFT?).

It is possible to create a custom SIFT descriptor extractor:
// SIFT(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma)
SIFT siftDetectorExtractor = SIFT(200, 3, 0.04, 15, 1.6);
Mat logo = imread("logoName.jpg");
vector<KeyPoint> keyPoints;
Mat sifts;
// detect keypoints and compute descriptors in one call (the empty Mat() is the mask)
siftDetectorExtractor(logo, Mat(), keyPoints, sifts);
or use the detector class:
Ptr<FeatureDetector> detector = Ptr<FeatureDetector>( new SIFT( <your arguments> ) );
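Depending on the OpenCV version, the same parameters can also be passed to the static factory. A roughly equivalent sketch, assuming a 4.4+ build where SIFT is back in the main cv namespace (in 3.x with the contrib modules it lives in cv::xfeatures2d instead):
// SIFT::create(nfeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma)
Ptr<SIFT> sift = SIFT::create(200, 3, 0.04, 15, 1.6);
vector<KeyPoint> keyPoints;
Mat descriptors;
sift->detectAndCompute(logo, noArray(), keyPoints, descriptors);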

Related

Getting image descriptors for an image patch

I have already detected and computed SIFT keypoints and descriptors in an image (which I need for a different purpose) with OpenCV (4.3.0-dev).
Mat descriptors;
vector<KeyPoint> keypoints;
sift->detectAndCompute(image, Mat(), keypoints, descriptors);
Now I want to get keypoints and descriptors for a rectangular patch of this same image (from the previously extracted keypoints and descriptors) and do so without having to run the costly detectAndCompute() again. The only solutions I can come up with are masking and region of interest extraction, both of which require detectAndCompute() to be run again. How can this be done?
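Since the keypoints and descriptors are already computed for the whole image, one option is to keep only the keypoints whose coordinates fall inside the patch and copy the corresponding descriptor rows. A minimal sketch, with the patch rectangle (x, y, w, h) standing in for your region:
cv::Rect2f patchRect(x, y, w, h);                       // hypothetical patch coordinates
std::vector<cv::KeyPoint> patchKeypoints;
cv::Mat patchDescriptors;
for (int i = 0; i < (int)keypoints.size(); ++i)
{
    if (patchRect.contains(keypoints[i].pt))            // keypoint centre lies inside the patch
    {
        patchKeypoints.push_back(keypoints[i]);
        patchDescriptors.push_back(descriptors.row(i)); // copy the matching descriptor row
    }
}
If you need coordinates relative to the patch, subtract the patch origin from each keypoint's pt afterwards. Note that these descriptors were computed with full-image context; re-running detectAndCompute() on a cropped patch could give slightly different results near the patch borders.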

opencv detecting blob with a mask

I want to detect blobs using OpenCV's SimpleBlobDetector:
cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(parameters);
detector->detect( inputImage, keypoints);
This works fine, until I want to introduce a mask so that the detector only looks for blobs within the mask.
detector->detect( inputImage, keypoints, zmat );
The documentation (link) says:
Mask specifying where to look for keypoints (optional). It must be a
8-bit integer matrix with non-zero values in the region of interest.
My understanding is that the detect algorithm will search only the non-zero elements of the mask matrix. So I created a mask and populated it this way:
cv::Mat zmat = cv::Mat::zeros(inputImage.size(), CV_8UC1); // single-channel mask, all zeros
cv::Scalar color(255,255,255);
cv::Rect rect(x,y,w,h);
cv::rectangle(zmat, rect, color, CV_FILLED);               // fill the region of interest with 255
However, it seems that the mask is not doing anything and the detect algorithm is detecting everything. I am using OpenCV 3.2.
I also tried a simple ROI, but the detector still detects things all over the image:
cv::Mat roi(zmat, cv::Rect(10,10,600,600));
roi = cv::Scalar(255, 255, 255);
// match keypoints of connected components with blob detection
detector->detect( inputImage, keypoints, zmat);
Sorry it's not better news.
Using my installed version of OpenCV (a 3.1.0 dev version, built September 2016 - I really don't want to reinstall that thing!), I too have this problem. The SimpleBlobDetector just ignores the mask data. There's a quick and dirty workaround using a Mat copy with an ROI (mostly your code, but declare zmat with 3 channels):
cv::Mat zmat = cv::Mat::zeros(inputImage.size(), CV_8UC3); // 3-channel image, all black
cv::Scalar color(255,255,255);
cv::Rect rect(x,y,w,h);
cv::rectangle(zmat, rect, color, CV_FILLED);               // paint the region of interest white
inputImage.copyTo(zmat, zmat);                             // copy only the ROI pixels onto the black image
detector->detect(zmat, keypoints);                         // detect on the masked copy instead of passing a mask
So you end up with your input image in zmat but with the "uninteresting" areas blacked (zeroed) out. Technically, it isn't using any (much) more memory than declaring your mask and it doesn't interfere with your input image either.
The only other thing worth checking is that your rectangle rect does specify something that isn't the complete frame - I obviously substituted in my own values for that for testing.

Removing all part of the binary image except the area with convexity defect

I am using OpenCV 3.2 and I am working on a binary image. This is the image I am working on.
I am trying to remove everything except the hand area (with convexity defects).
I tried blob detection to detect the blobs (other than the hand) but it's not showing anything. Any suggestions on how I should proceed?
The sample code for blob detection is:
Mat im; // holds the binary image shown above
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create();
vector<KeyPoint> keypoints;
detector->detect(im, keypoints);
Mat im_with_keypoints;
drawKeypoints(im, keypoints, im_with_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
// Show blobs
imshow("keypoints", im_with_keypoints);
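One common reason detect() returns nothing on a binary image is SimpleBlobDetector's default parameter set, which looks for small, dark, roughly circular blobs. A minimal sketch of loosening those filters (the numeric values are illustrative, not taken from the question):
SimpleBlobDetector::Params params;
params.filterByColor = true;
params.blobColor = 255;                  // look for white blobs (the default, 0, looks for dark ones)
params.filterByArea = true;
params.minArea = 100.0f;                 // illustrative bounds, tune for your image
params.maxArea = 100000.0f;
params.filterByCircularity = false;
params.filterByConvexity = false;
params.filterByInertia = false;
Ptr<SimpleBlobDetector> tunedDetector = SimpleBlobDetector::create(params);
tunedDetector->detect(im, keypoints);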

Fiducial markers - OpenCV - Feature Detection & Matching

Would somebody share their knowledge of OpenCV feature detection and extraction of fiducial markers?
I'm attempting to find a fiducial marker (see image below; a self-created ARTag-style marker made in MS Paint) in a scene.
Using Harris corner detection, I can adequately locate the corners of the marker image. Similarly, using Harris corner detection, I can find most of the corners of the marker in the scene. I then use SIFT to extract descriptors for the marker image and the scene image. Then I've tried both BF and FLANN for feature matching. However, both matching algorithms tend to match the wrong corners together.
Is there something I can do to improve the accuracy? Or are there other detection methods that would be better suited to this application?
Portion of code:
// GoodFeaturesToTrackDetector(maxCorners, qualityLevel, minDistance, blockSize, useHarrisDetector)
GoodFeaturesToTrackDetector harris_detector(6, 0.15, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(im1, keypoints1);
harris_detector.detect(im2, keypoints2);
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher;
//FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors1, descriptors2, matches );
You can try the ORB detector, which is a fusion of the FAST keypoint detector and the BRIEF descriptor. It is fast and better than plain BRIEF because the latter does not compute an orientation.
You can find an example of ORB's usage in samples/cpp/tutorial_code/features2D/AKAZE_tracking.
There is also a Python project, fiducial, that does a task similar to yours.
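A rough sketch of that suggestion, assuming an OpenCV 3.x-style API (im1 and im2 are the marker and scene images from the question):
Ptr<ORB> orb = ORB::create(500);                  // detect up to 500 keypoints
vector<KeyPoint> keypoints1, keypoints2;
Mat descriptors1, descriptors2;
orb->detectAndCompute(im1, noArray(), keypoints1, descriptors1);
orb->detectAndCompute(im2, noArray(), keypoints2, descriptors2);
BFMatcher matcher(NORM_HAMMING, true);            // Hamming norm for binary descriptors, cross-check enabled
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
ORB descriptors are binary, so the matcher must use NORM_HAMMING rather than the default L2 norm; the cross-check flag keeps only mutually best matches, which helps with the ambiguous corner matches described above.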

OpenCV: ‘SiftDescriptorExtractor’ was not declared in this scope

I am new to OpenCV in general and have been trying out a number of examples over the past few days. I've successfully gotten Harris corner detection to work on some test images. My next test was to see if I could match two images based on the Harris detections using SIFT.
Here is how I found the Harris corners:
GoodFeaturesToTrackDetector harris_detector(1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints1, keypoints2;
harris_detector.detect(img1, keypoints1);
harris_detector.detect(img2, keypoints2);
This works well. My next goal is to match the features between img1 and img2 using SIFT. However, when I try to declare an extractor for SIFT:
SiftDescriptorExtractor extractor;
I get the following error:
error: ‘SiftDescriptorExtractor’ was not declared in this scope
SiftDescriptorExtractor extractor;
What am I doing wrong?
Thanks in advance.
Make sure you have #include <opencv2/features2d/features2d.hpp>.
In some versions of OpenCV, the SIFT classes are in <opencv2/nonfree/features2d.hpp>.
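For an OpenCV 2.4-style build, the includes typically look like the sketch below; the project also has to link against the opencv_nonfree and opencv_features2d libraries:
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SiftFeatureDetector / SiftDescriptorExtractor live here in 2.4.x

using namespace cv;

SiftDescriptorExtractor extractor;          // should now compile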