I'm trying to write a fast character recognition algorithm.
I have the result of absdiff() and now I want to sum up all the elements of this cv::Mat to find out how small or big the difference is.
How can I do this?
The OpenCV function sum() adds up the elements across all dimensions of a matrix:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#sum
Scalar result = sum(A);
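For the use case in the question, a minimal sketch (the file names here are illustrative, assuming two same-sized single-channel images):
cv::Mat img1 = cv::imread("a.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat img2 = cv::imread("b.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat diff;
cv::absdiff(img1, img2, diff);     // per-pixel |img1 - img2|
cv::Scalar total = cv::sum(diff);  // sums each channel separately
double difference = total[0];      // single channel, so element 0 holds the whole sum
The smaller this value, the more similar the two images are.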
I'm trying to implement the following pseudo-code in C++ using Eigen:
img_binary = +1*(img>img_mean) + -1*(img<img_mean)
i.e. I'm trying to convert a grayscale image into a binary image such that values greater than the image mean are +1 and values less than it are -1. So far, I have the following:
cv::Mat cv_image;
cv_image = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
MatrixXf eig_image;
cv::cv2eigen(cv_image, eig_image);
float image_mean = eig_image.mean();
ArrayXXf bin_image;
bin_image = eig_image.array() > image_mean;
I'm getting an error in the last line saying that I mixed different numeric types. Any suggestion on how I can do element-wise comparisons with Eigen?
The easiest solution in Eigen would be
ArrayXXf bin_image = (eig_image.array() > image_mean).cast<float>()*2.f-1.f;
An alternative would be:
ArrayXXf bin_image = (eig_image.array() > image_mean)
    .select(ArrayXXf::Constant(eig_image.rows(), eig_image.cols(), 1.0f), -1.0f);
Unfortunately, having to use ArrayXXf::Constant for one argument is necessary, because there is no .select overload that accepts two scalar arguments.
However, unless you plan to do further processing in Eigen, you should consider using the corresponding OpenCV function, cv::threshold.
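For example, a sketch of the OpenCV route (reusing cv_image from the question; the ±1 mapping still needs a small rescale, since threshold only produces two fixed levels):
cv::Mat img_f, bin;
cv_image.convertTo(img_f, CV_32F);
double img_mean = cv::mean(img_f)[0];
cv::threshold(img_f, bin, img_mean, 1.0, cv::THRESH_BINARY); // 1 where img > mean, else 0
bin = bin * 2.0f - 1.0f;                                     // map {0, 1} to {-1, +1}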
I want to divide a matrix cv::Mat src of type CV_8UC1 by a scalar A of type float and store the result in a new matrix srcN of type CV_32FC1.
I'm doing this at the moment:
for (int j = 0; j < src.rows; j++)
    for (int i = 0; i < src.cols; i++)
        srcN.at<float>(j, i) = ((int) src.at<uchar>(j, i)) / A;
This is not very fast. I want to do it like this: srcN = src/A; but I don't get the right values in srcN. Is there any way to do that?
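For reference, a sketch of the kind of vectorized conversion this is aiming at: Mat::convertTo changes the type and applies the scale factor in one pass (srcN = src/A alone likely stays 8-bit, which is why the values come out wrong):
cv::Mat srcN;
src.convertTo(srcN, CV_32FC1, 1.0 / A);  // convert to float and divide by A in one step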
Another question: MATLAB (which literally means MATrix LABoratory) is very fast at matrix operations. How can I make my code as fast as MATLAB with C++/OpenCV?
I'm trying to calculate the distance (Euclidean or Hamming) between two descriptors that are already calculated. The problem is I don't want to use a matcher, I just want to calculate the distance between two descriptors.
I'm using OpenCV 2.4.9 and I have my descriptors stored in a Mat type:
Mat descriptors1;
Mat descriptors2;
and now I just want to calculate the distance (preferably the Hamming distance, since I'm using binary descriptors) between row 1 of descriptors1 and row 1 of descriptors2 (for example).
I have tried to use the bitwise_xor() function, but then I don't have an efficient way of doing the bit count. Is there no function to calculate the Hamming distance between two arrays?
Note that I'm fairly new to OpenCV, but I appreciate any help. Thank you.
You can use OpenCV's norm() function for this.
Mat descriptors1;
Mat descriptors2;
double dist_l2  = norm(descriptors1, descriptors2, NORM_L2);      // L2 for SURF, SIFT
double dist_ham = norm(descriptors1, descriptors2, NORM_HAMMING); // Hamming for ORB, BRIEF, etc.
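To compare just a single descriptor against another, as asked, you can pass individual rows (the row index 0 here is only an example):
double d = norm(descriptors1.row(0), descriptors2.row(0), NORM_HAMMING);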
I have a problem with Fourier descriptors: if a contour has K points, then let
s(k) = x(k) + i*y(k), k = 0, 1, ..., K-1.
The discrete Fourier transform of s(k) is
a(u) = ∑ s(k)*e^(-i2πuk/K), k = 0, 1, ..., K-1.
I want to reconstruct the contour using only a(p), p = 0, 1, ..., P, where P is less than K.
But when I use the dft function in OpenCV:
dft(inputarray,outputarray,DFT_INVERSE,0);
the output array has the same size as the input array. How can I get a K-point contour from only P coefficients a(p)? Thanks!
Actually, the output array size should be equal to the input array size; revise the mathematical model of the DFT: https://ccrma.stanford.edu/~jos/mdft/Mathematics_DFT.html
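One common way to get the reconstruction the question describes (a sketch of the standard zero-padding trick, not something the answer above spells out) is to keep the P coefficients, zero the remaining K-P, and run the inverse DFT on the full K-length array:
// s: 1 x K complex contour stored as CV_32FC2, s(k) = x(k) + i*y(k)
cv::Mat a;
cv::dft(s, a);                                    // forward DFT: a(u), u = 0..K-1
cv::Mat a_trunc = cv::Mat::zeros(a.size(), a.type());
a.colRange(0, P).copyTo(a_trunc.colRange(0, P));  // keep a(0..P-1), zero the rest
cv::Mat s_rec;                                    // K reconstructed (smoothed) points
cv::dft(a_trunc, s_rec, cv::DFT_INVERSE | cv::DFT_SCALE);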
I have a vector of 2-dimensional points in OpenCV
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in Matlab. But I'm not sure if I can utilize some high level OpenCV functions to accomplish the same. Any suggestions?
InputArray does a good job here. You can simply call
cv::Mat mean_;
cv::reduce(points, mean_, 1, CV_REDUCE_AVG); // average over all points
// convert from Mat to Point - there may be even a simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In the newer OpenCV versions, the InputArray data type was introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one row, and the number of columns equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero);
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator /, so we need to multiply by the inverse instead of dividing by the size.
You can use stl's std::accumulate as follows:
cv::Point2f sum = std::accumulate(
points.begin(), points.end(), // Run from begin to end
cv::Point2f(0.0f,0.0f), // Initialize with a zero point
std::plus<cv::Point2f>() // Use addition for each point (default)
);
cv::Point2f mean(sum.x / points.size(), sum.y / points.size()); // Divide by count to get the mean
Add them all up and divide by the total number of points.
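A minimal sketch of that loop version:
cv::Point2f mean(0.0f, 0.0f);
for (const cv::Point2f& p : points)
    mean += p;                 // sum all points
mean.x /= points.size();       // then divide each coordinate by the count
mean.y /= points.size();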