OpenCV: correspondence between keypoints and descriptors - c++

I'm trying to parallelize OpenCV's SIFT implementation with OpenMP (hurray for me!).
First of all: seriously? OpenCV doesn't offer any parallel implementation of the best detector/descriptor available today? Only SURF has TBB and GPU implementations.
Anyway, what I want to know is this: if we compute
std::vector<cv::KeyPoint> keypoints;
Mat descriptors;
detectAndCompute(image, mask, keypoints, descriptors, useProvided); // ignore mask and useProvided
does it mean that the descriptor for the i-th keypoint, i.e. keypoints[i], is the i-th descriptor, i.e. descriptors.row(i)?
This is important because, if that's the case, in such a parallel application I have to guarantee the order in which descriptors is populated (so that the correspondence between keypoints and descriptors is preserved).
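For what it's worth, one way to keep that correspondence intact while parallelizing is to split the already-detected keypoints into chunks, let each OpenMP thread run compute() on its own chunk, and concatenate the results back in chunk order. The sketch below only illustrates that bookkeeping, not the OpenCV internals: it assumes cv::SIFT::create() is available (OpenCV 4.4+; older builds use cv::xfeatures2d::SIFT), it parallelizes only the description step over already-detected keypoints, and computeInChunks/numChunks are names made up for the example.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <omp.h>
#include <algorithm>
#include <utility>
#include <vector>

// Compute descriptors for already-detected keypoints in parallel while
// preserving the keypoint/descriptor correspondence: each thread handles one
// contiguous chunk, and the chunks are concatenated back in their original order.
void computeInChunks(const cv::Mat& image,
                     const std::vector<cv::KeyPoint>& keypoints,
                     std::vector<cv::KeyPoint>& outKeypoints,
                     cv::Mat& outDescriptors,
                     int numChunks = 4)
{
    std::vector<std::vector<cv::KeyPoint> > chunkKps(numChunks);
    std::vector<cv::Mat> chunkDescs(numChunks);
    int chunkSize = ((int)keypoints.size() + numChunks - 1) / numChunks;

    #pragma omp parallel for
    for (int c = 0; c < numChunks; ++c) {
        int begin = c * chunkSize;
        int end = std::min((int)keypoints.size(), begin + chunkSize);
        if (begin >= end) continue;

        // compute() may drop or duplicate keypoints, so keep each chunk's
        // (possibly modified) keypoints together with its descriptors.
        std::vector<cv::KeyPoint> kps(keypoints.begin() + begin,
                                      keypoints.begin() + end);
        cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
        sift->compute(image, kps, chunkDescs[c]);
        chunkKps[c] = std::move(kps);
    }

    // Reassemble in chunk order so row i of outDescriptors matches outKeypoints[i].
    outKeypoints.clear();
    outDescriptors.release();
    for (int c = 0; c < numChunks; ++c) {
        outKeypoints.insert(outKeypoints.end(),
                            chunkKps[c].begin(), chunkKps[c].end());
        if (!chunkDescs[c].empty())
            outDescriptors.push_back(chunkDescs[c]);
    }
}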

Related

Getting image descriptors for an image patch

I have already detected and computed SIFT keypoints and descriptors in an image (which I need for a different purpose) with OpenCV (4.3.0-dev).
Mat descriptors;
vector<KeyPoint> keypoints;
sift->detectAndCompute(image, Mat(), keypoints, descriptors);
Now I want to get keypoints and descriptors for a rectangular patch of this same image (from the previously extracted keypoints and descriptors) and do so without having to run the costly detectAndCompute() again. The only solutions I can come up with are masking and region of interest extraction, both of which require detectAndCompute() to be run again. How can this be done?
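One possibility, sketched below, is to reuse what detectAndCompute() already produced: keep only the keypoints whose centers fall inside the patch rectangle and copy the matching descriptor rows (row i of descriptors belongs to keypoints[i]). filterToPatch and its arguments are made-up names for this illustration.

#include <opencv2/core.hpp>
#include <vector>

// Keep only the keypoints (and their descriptor rows) whose centers fall inside
// the given patch, optionally shifting coordinates into the patch frame.
void filterToPatch(const std::vector<cv::KeyPoint>& keypoints,
                   const cv::Mat& descriptors,
                   const cv::Rect& patch,
                   std::vector<cv::KeyPoint>& patchKeypoints,
                   cv::Mat& patchDescriptors)
{
    patchKeypoints.clear();
    patchDescriptors.release();

    for (size_t i = 0; i < keypoints.size(); ++i) {
        const cv::Point2f& p = keypoints[i].pt;
        if (p.x >= patch.x && p.x < patch.x + patch.width &&
            p.y >= patch.y && p.y < patch.y + patch.height) {
            cv::KeyPoint kp = keypoints[i];
            kp.pt.x -= (float)patch.x;   // express relative to the patch origin
            kp.pt.y -= (float)patch.y;
            patchKeypoints.push_back(kp);
            patchDescriptors.push_back(descriptors.row((int)i));
        }
    }
}

Keep in mind these descriptors were computed on the full image, so keypoints near the patch border include context from outside the patch; the result will not be identical to running detectAndCompute() on the cropped image.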

OpenCV - C++ - DMatch - Is the descriptor index the same as its corresponding keypoint index?

I'm trying to compute disparity for corresponding keypoints in two epipolar stereo images.
I know how to get keypoints, descriptors and then compute DMatch, but then I'm using this simple operation to get disparity and I'm not sure if it's ok:
float disparity = keypointsLeft[matches[i].queryIdx].pt.x - keypointsRight[matches[i].trainIdx].pt.x;
(matches was defined as vector<DMatch> matches and then computed)
Why am I not sure?
OpenCV documentation 2.4.12 says:
1) for DescriptorExtractor::compute
void DescriptorExtractor::compute(const Mat& image, vector<KeyPoint>& keypoints, Mat& descriptors) const
keypoints – Input collection of keypoints. Keypoints for which a descriptor cannot be computed are removed and the remaining ones may be reordered. Sometimes new keypoints can be added, for example: SIFT duplicates a keypoint with several dominant orientations (for each orientation).
2) for DMatch
Class for matching keypoint descriptors: query descriptor index, train descriptor index, train image index, and distance between descriptors.
The problem is: is the descriptor index the same as its corresponding keypoint index, given that, as the documentation says, the descriptors matrix doesn't necessarily contain data for every detected keypoint?
I would check it myself, but I have no idea how to do this... Maybe someone else has also wondered about this issue.
My application is working and the results look fine, but that doesn't mean it's correct...
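If you want to sanity-check this yourself, one cheap spot check (sketched below; checkCorrespondence is a made-up helper) is to confirm that the number of descriptor rows equals the number of post-compute keypoints, then recompute the descriptor of a single keypoint copied out of the list and compare it against the corresponding row. It assumes a Feature2D-style extractor (SIFT/SURF); keypoints near the border can still be dropped by compute(), so treat it as a spot check rather than a proof.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <iostream>
#include <vector>

// Spot-check that descriptor row i belongs to keypoints[i] after compute().
void checkCorrespondence(const cv::Ptr<cv::Feature2D>& extractor,
                         const cv::Mat& image,
                         const std::vector<cv::KeyPoint>& keypoints,
                         const cv::Mat& descriptors,
                         int i)
{
    // 1) After compute()/detectAndCompute(), the sizes must agree.
    std::cout << "keypoints: " << keypoints.size()
              << ", descriptor rows: " << descriptors.rows << std::endl;

    // 2) Recompute the descriptor of keypoint i alone and compare it with row i.
    std::vector<cv::KeyPoint> one(1, keypoints[i]);
    cv::Mat oneDesc;
    extractor->compute(image, one, oneDesc);

    if (oneDesc.rows == 1) {
        double diff = cv::norm(oneDesc.row(0), descriptors.row(i), cv::NORM_L2);
        std::cout << "difference for keypoint " << i << ": " << diff << std::endl;
    } else {
        std::cout << "keypoint " << i << " was dropped or duplicated by compute()"
                  << std::endl;
    }
}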

OpenCV: Why create multiple Mat objects to transform an image's format?

I have not worked with OpenCV for a while, so please bear with my beginner questions. Something caught my curiosity as I was looking through OpenCV tutorials and sample code.
Why do people create multiple Mat images when going through multiple transformations? Here is an example:
Mat mat, gray, thresh, equal;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
Example of a code that uses only two Mat images:
Mat mat, process;
mat = imread("E:/photo.jpg");
cvtColor(mat, process, CV_BGR2GRAY);
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
Is there anything different between the two examples? Also, another beginner question: will OpenCV run faster when it only creates two Mat images, or will it still be the same?
Thank you in advance.
The question comes down to: do you still need the intermediate (unequalized grayscale) image later on in the code? If you want to further process the gray image, the first option is better. If not, use the second option.
Some functions might not work in-place; specifically, ones that transform the matrix to a different format, either by changing its dimensions (such as copyMakeBorder) or its number of channels (such as cvtColor).
For your use case, the two blocks of code perform the same number of calculations, so the speed wouldn't change at all. The second option is obviously more memory efficient.
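To make the trade-off concrete, here is a minimal sketch of the first style (placeholder file names, OpenCV 3+/4 constant names), where keeping a separate Mat pays off because the grayscale intermediate is still available after the pipeline:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
using namespace cv;

int main()
{
    Mat bgr = imread("photo.jpg");            // placeholder path
    Mat gray, equalized, binary;

    cvtColor(bgr, gray, COLOR_BGR2GRAY);
    equalizeHist(gray, equalized);
    threshold(equalized, binary, 50, 255, THRESH_BINARY);

    // 'gray' is still intact here, so it can be reused or saved later.
    imwrite("gray.png", gray);
    imwrite("binary.png", binary);
    return 0;
}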

how to make same size descriptors for all images

Can anyone please tell me how to get descriptors of a fixed size? We may get a different number of descriptors from different images.
For example:
If I have a 450x550 image and apply SURF to it, SURF extracts keypoints and then descriptors for them; say it extracts 10 keypoint descriptors from the 450x550 image. Then it reads a 750x880 image and this time extracts, say, 20 descriptors. What I want is that, whatever the size of the image, the number of descriptors stays the same, e.g. 10 descriptors from both images. So over many images it should either pick the minimum descriptor count across all images and keep only that many from each image, or let me define the count and ignore images whose descriptor count is below or above it.
extractor.compute( tmplate_img, keypoints, descriptors);
my_img=descriptors.reshape(1,1);
I want the descriptors to have the same size for all the images when I run this in a loop. Also, which descriptor count gives better results? descriptors is a Mat.
Thanks
You can use the following code to keep the top M keypoints that have the largest response:
bool compareFunction(const KeyPoint& p1, const KeyPoint& p2) { return p1.response > p2.response; }
// The function retains the strongest M keypoints in kp
void RetainBestKeypoints(vector<KeyPoint>& kp, int M)
{
    sort(kp.begin(), kp.end(), compareFunction);
    if ((int)kp.size() > M)
        kp.erase(kp.begin() + M, kp.end());
}
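A sketch of how this could be used in a loop over images, assuming the OpenCV 2.4-style SURF classes from the question (file names and the Hessian threshold are placeholders): detect, keep the M strongest keypoints, then compute descriptors. Note that compute() can still drop border keypoints or duplicate keypoints with several dominant orientations, so check descriptors.rows if you need an exact count.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SURF lives in nonfree in OpenCV 2.4
#include <string>
#include <vector>
using namespace cv;
using namespace std;

void RetainBestKeypoints(vector<KeyPoint>& kp, int M);  // defined above

int main()
{
    const int M = 10;                  // desired keypoint count per image
    vector<string> files;
    files.push_back("img1.jpg");       // placeholder file names
    files.push_back("img2.jpg");

    SurfFeatureDetector detector(400); // placeholder Hessian threshold
    SurfDescriptorExtractor extractor;

    for (size_t f = 0; f < files.size(); ++f) {
        Mat img = imread(files[f], IMREAD_GRAYSCALE);

        vector<KeyPoint> keypoints;
        detector.detect(img, keypoints);
        RetainBestKeypoints(keypoints, M);     // keep only the M strongest

        Mat descriptors;
        extractor.compute(img, keypoints, descriptors);
        // descriptors.rows may differ slightly from M (border keypoints dropped,
        // multiple orientations added), so verify it if an exact count matters.
    }
    return 0;
}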

Does openCV SurfFeatureDetector unnecessarily extract descriptors internally?

I just wondered whether using a SurfFeatureDetector to detect keypoints and then a SurfDescriptorExtractor to extract the SURF descriptors (see the code below, as described here) wouldn't extract the descriptors twice.
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints ); //detecting keypoints, extracting descriptors without returning them
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( img, keypoints, descriptors ); // extracting descriptors a second time
The OpenCV documentation says those two classes are wrappers for the SURF() class.
The SURF::operator() is overloaded, one version taking just a keypoint vector, the other one additionally taking a vector for the descriptors.
What intrigues me... both then call the cvExtractSURF() function, which seems to extract the descriptors, no matter what... (I did not dive too deep into the C code as I find it hard to understand, so maybe I'm wrong)
But this would mean that the SurfFeatureDetector would extract descriptors without returning them. Using the SurfDescriptorExtractor in the next step just does it a second time, which seems very inefficient to me. But am I right?
You can be assured that the detector does not actually compute the descriptors. The key statement to look at is line 687 of surf.cpp: if( !descriptors ) continue; Descriptors are not computed during detection, which is the way it should be. This kind of architecture is most likely due to the fact that the SURF code was "added" to OpenCV after being designed/developed to work by itself.
As background: note that detectors and descriptor extractors are different things. You first "detect" keypoints using SurfFeatureDetector, and only then are local descriptors extracted at those points (using SurfDescriptorExtractor). The snippet you have is a good guide.
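If the two-step call still bothers you, the same SURF class can also be invoked once to produce keypoints and descriptors in a single pass via its operator() (in OpenCV 3+ the equivalent is cv::xfeatures2d::SURF::create()->detectAndCompute()). A minimal sketch, with minHessian as a placeholder threshold and detectAndDescribe a made-up wrapper:

#include <opencv2/core/core.hpp>
#include <opencv2/nonfree/features2d.hpp>   // OpenCV 2.4: SURF is in nonfree
#include <vector>
using namespace cv;

void detectAndDescribe(const Mat& img, double minHessian,
                       std::vector<KeyPoint>& keypoints, Mat& descriptors)
{
    SURF surf(minHessian);
    // One call both detects the keypoints and computes their descriptors,
    // so nothing is done twice.
    surf(img, Mat(), keypoints, descriptors);
}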