I'm trying to calculate the distance (Euclidean or Hamming) between two already-computed descriptors. The problem is that I don't want to use a matcher; I just want to calculate the distance between two descriptors.
I'm using OpenCV 2.4.9 and I have my descriptors stored in Mat objects:
Mat descriptors1;
Mat descriptors2;
and now I just want to calculate the distance (preferably the Hamming distance, since I'm using binary descriptors) between row 1 of descriptors1 and row 1 of descriptors2 (for example).
I have tried using the bitwise_xor() function, but then I had no efficient way of doing the bit count. Is there a function to calculate the Hamming distance between two arrays?
Note that I'm fairly new to OpenCV, but I appreciate any help. Thank you.
You can use OpenCV's norm() function for this.
Mat descriptors1;
Mat descriptors2;
double dist_l2 = norm(descriptors1, descriptors2, NORM_L2); // L2 for SIFT, SURF
double dist_ham = norm(descriptors1, descriptors2, NORM_HAMMING); // Hamming for ORB, BRIEF, etc.
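Since a single row of a Mat is itself a Mat, the same call works on individual rows, which is exactly what the question asks for. A minimal sketch:
// Hamming distance between row 0 of descriptors1 and row 0 of descriptors2
// (the descriptors must be CV_8U for NORM_HAMMING)
double d = norm(descriptors1.row(0), descriptors2.row(0), NORM_HAMMING);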
I'm trying to make a fast character recognition algorithm.
I have the result of absdiff() and now I want to sum all the elements of this cv::Mat to find out how small or large the difference is.
How can I do this?
The OpenCV function sum() adds up all the elements of a matrix, independently for each channel:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#sum
Scalar result = sum(A);
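For example, with a single-channel absdiff() result the total is the first component of the returned Scalar. A short sketch (img1 and img2 are illustrative names):
cv::Mat diff;
cv::absdiff(img1, img2, diff); // per-pixel absolute differences
double total = cv::sum(diff)[0]; // single channel: sum of all elements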
I was trying to use the new RHO homography algorithm in conjunction with perspectiveTransform, but it seems that the homography matrix calculated by RHO has the wrong size and consequently is not compatible with that method.
See code below:
H = findHomography(obj_points, scn_points, RHO, 1.0);
perspectiveTransform(obj_corners, scene_corners, H);
The following assertion fails:
error: (-215) scn + 1 == m.cols in function perspectiveTransform
Any clue? It works perfectly with RANSAC.
I've found the solution:
With RHO, I have to check the homography matrix to ensure it is not empty. Giving 4 or more points to findHomography is not enough to get a homography matrix with this method.
Even when given about 50 matches to compute, it only returns a non-empty matrix 40-50% of the time.
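A minimal sketch of the check, reusing the variable names from the question:
cv::Mat H = findHomography(obj_points, scn_points, RHO, 1.0);
if (!H.empty()) {
    perspectiveTransform(obj_corners, scene_corners, H);
} else {
    // RHO found no model this time; skip the frame or fall back to RANSAC
}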
I'm trying to calculate affine transformation between two consecutive frames from a video. So I have found the features and got the matched points in the two frames.
FastFeatureDetector detector;
vector<KeyPoint> frame1_features;
vector<KeyPoint> frame2_features;
detector.detect(frame1, frame1_features, Mat());
detector.detect(frame2, frame2_features, Mat());
vector<Point2f> features1; // matched points in 1st image
vector<Point2f> features2; // matched points in 2nd image
for (size_t i = 0; i < frame1_features.size() && i < frame2_features.size(); ++i)
{
    double diff;
    diff = pow((frame1.at<uchar>(frame1_features[i].pt) - frame2.at<uchar>(frame2_features[i].pt)), 2);
    if (diff < SSD) // SSD is the sum of squared differences between two image regions
    {
        features1.push_back(frame1_features[i].pt);
        features2.push_back(frame2_features[i].pt);
    }
}
Mat affine = getAffineTransform(features1, features2);
The last line gives the following error:
OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 3 && dst.checkVector(2, CV_32F) == 3) in getAffineTransform
Can someone please tell me how to calculate the affine transformation with a set of matched points between the two frames?
Your problem is that getAffineTransform() needs exactly 3 point correspondences between the images.
If you have more than 3 correspondences, you should optimize the transformation to fit all the correspondences (except for outliers).
Therefore, I recommend taking a look at the findHomography() function (http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography).
It calculates a perspective transformation between the correspondences and needs at least 4 point correspondences.
Because you have more than 3 correspondences and affine transformations are a subset of perspective transformations, this should be appropriate for you.
Another advantage of the function is that it is able to detect outliers (correspondences that do not fit the transformation implied by the other points), and these are not considered for the transformation calculation.
To sum up, use findHomography(features1 , features2, CV_RANSAC) instead of getAffineTransform(features1 , features2).
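A short sketch of that call, including the optional inlier mask (the 3.0 reprojection threshold is just an illustrative value):
std::vector<uchar> inlier_mask; // filled in by RANSAC
cv::Mat H = cv::findHomography(features1, features2, CV_RANSAC, 3.0, inlier_mask);
// inlier_mask[i] != 0 means correspondence i agrees with the estimated H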
I hope I could help you.
As I read from your code and assertion, there is something wrong with your vectors.
int checkVector(int elemChannels, int depth)
This function returns N if the matrix is 1-channel (N x ptdim) or ptdim-channel (1 x N or N x 1), and a negative number otherwise.
And according to the documentation (http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#getaffinetransform), getAffineTransform() calculates an affine transform from three pairs of corresponding points.
You seem to have more or fewer than three points in one or both of your vectors.
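To make the requirement concrete, here is a minimal sketch (illustrative names) that satisfies the assertion:
// getAffineTransform() asserts checkVector(2, CV_32F) == 3 on both inputs,
// i.e. exactly three 2D float points each.
std::vector<cv::Point2f> src(3), dst(3);
// ... fill src and dst with three matched point pairs ...
cv::Mat affine = cv::getAffineTransform(src, dst); // assertion now holds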
I have a problem with Fourier descriptors: if a contour has K points, let
s(k) = x(k) + i·y(k), k = 0, 1, ..., K-1.
The discrete Fourier transform of s(k) is
a(u) = ∑_{k=0}^{K-1} s(k)·e^(-i2πuk/K), u = 0, 1, ..., K-1.
I want to reconstruct the contour from a(p), p = 0, 1, ..., P, where P is less than K.
But when I use the dft function in OpenCV:
dft(inputarray,outputarray,DFT_INVERSE,0);
the output array has the same size as the input array. How can I get a K-point contour from the P coefficients a(p)? Thanks!!
Actually, the output array size should be equal to the input array size; review the mathematical model of the DFT: https://ccrma.stanford.edu/~jos/mdft/Mathematics_DFT.html
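A consequence of this is that to get K output points you must feed dft a K-element input. A minimal sketch (hypothetical names: a is a P x 1 two-channel complex Mat holding the kept descriptors, K is the contour length): zero-pad the coefficients up to length K, then the inverse DFT yields K contour points. Note that keeping only the first P coefficients is a simplification; a common variant keeps low frequencies from both ends of the spectrum.
cv::Mat padded = cv::Mat::zeros(K, 1, CV_32FC2); // K complex coefficients, rest zero
a.rowRange(0, P).copyTo(padded.rowRange(0, P)); // first P descriptors a(p)
cv::Mat contour; // K reconstructed complex points
cv::dft(padded, contour, cv::DFT_INVERSE | cv::DFT_SCALE);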
I have a vector of a 2-dimensional points in OpenCV
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in Matlab, but I'm not sure if I can use some high-level OpenCV functions to accomplish the same. Any suggestions?
InputArray does a good job here. You can simply call
cv::Mat mean_;
cv::reduce(points, mean_, 0, CV_REDUCE_AVG); // dim = 0: reduce to a single row (the per-coordinate average)
// convert from Mat to Point - there may be even a simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In the newer OpenCV versions, the InputArray data type is introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one column, and the number of rows equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
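As a side note, there may indeed be a simpler conversion: cv::mean() also accepts the point vector through InputArray and returns the per-channel means directly (a sketch):
cv::Scalar m = cv::mean(points); // per-channel means of the 2-channel data
cv::Point2f mean_point((float)m[0], (float)m[1]); // (mean x, mean y)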
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero);
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator/, so we need to multiply by the inverse instead of dividing by the size.
You can use the STL's std::accumulate as follows:
cv::Point2f sum = std::accumulate(
points.begin(), points.end(), // Run from begin to end
cv::Point2f(0.0f,0.0f), // Initialize with a zero point
std::plus<cv::Point2f>() // Use addition for each point (default)
);
cv::Point2f mean = sum * (1.0f / points.size()); // Multiply by 1/count to get the mean
Add them all up and divide by the total number of points.
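Spelled out as a plain loop (a sketch, assuming points is non-empty):
cv::Point2f sum(0.0f, 0.0f);
for (size_t i = 0; i < points.size(); ++i)
    sum += points[i];
cv::Point2f mean_point(sum.x / points.size(), sum.y / points.size());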