I have a vector of 2-dimensional points in OpenCV:
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in Matlab. But I'm not sure if I can utilize some high level OpenCV functions to accomplish the same. Any suggestions?
InputArray does a good job here. You can simply call
cv::Mat mean_;
cv::reduce(points, mean_, 1, CV_REDUCE_AVG); // reduce along dim 1: the 1 x N point matrix collapses to a single mean point
// convert from Mat to Point - there may be even a simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In newer OpenCV versions, the InputArray data type was introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one row, and the number of columns equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
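For example (a minimal sketch; the choice of cv::boundingRect is just illustrative), a std::vector of points can be handed straight to any function that takes an InputArray:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point2f> pts;
    pts.push_back(cv::Point2f(1.f, 2.f));
    pts.push_back(cv::Point2f(3.f, 4.f));

    // the vector is accepted directly wherever an InputArray is expected;
    // no explicit cv::Mat conversion is needed
    cv::Rect box = cv::boundingRect(pts);
    return 0;
}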
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero);
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator /, so we need to multiply by the inverse instead of dividing by the size.
You can use the STL's std::accumulate as follows:
cv::Point2f sum = std::accumulate(
    points.begin(), points.end(),   // Run from begin to end
    cv::Point2f(0.0f, 0.0f),        // Initialize with a zero point
    std::plus<cv::Point2f>()        // Use addition for each point
);
cv::Point2f mean = sum / static_cast<float>(points.size()); // Divide by the count to get the mean
Add them all up and divide by the total number of points.
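The same idea spelled out as a plain loop (a trivial sketch, assuming points is non-empty):

cv::Point2f mean_point(0.0f, 0.0f);
for (size_t i = 0; i < points.size(); i++)
    mean_point += points[i];
mean_point.x /= static_cast<float>(points.size());
mean_point.y /= static_cast<float>(points.size());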
Related
I have two vectors with coordinates, stored as OpenCV floating-point points:
a) dstpoints is a vector with OpenCV points - std::vector<cv::Point2f> (I have 162 points in my example, they are not changing),
b) ppts is also std::vector<cv::Point2f> and the same size as dstpoints:
std::vector<cv::Point2f> ppts = project_keypoints(params, input);
But ppts is dependent on two other vectors:
- input is 2*162=324 long and is not changing,
- params is 189 long and its values should be changed to get the minimal value of variable suma, something like this:
double suma = 0.0;
for (size_t i = 0; i < dstpoints.size(); i++)
{
    suma += pow(dstpoints[i].x - ppts[i].x, 2);
    suma += pow(dstpoints[i].y - ppts[i].y, 2);
}
I'm looking for params vector that will give me the smallest value of suma variable. Least squares algorithm seems to be a good option to solve it:
https://en.wikipedia.org/wiki/Least_squares
I tried dlib version:
http://dlib.net/dlib/optimization/optimization_least_squares_abstract.h.html#solve_least_squares
but I'm afraid it is not a good fit for my case.
I think the problem with the dlib version is that for every different params vector I get a different ppts vector, not just a single value, and I don't know whether dlib's solve_least_squares function can handle my example.
I'm looking for C++ solution (probably with optimizers) that could help to solve the problem.
I'm trying to implement the following pseudo-code in C++ using Eigen:
img_binary = +1*(img>img_mean) + -1*(img<img_mean)
i.e. I'm trying to convert a grayscale image into a binary image such that values greater than the image mean become +1 and values less than it become -1. So far, I have the following:
cv::Mat cv_image;
cv_image = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
MatrixXf eig_image;
cv::cv2eigen(cv_image, eig_image);
float image_mean = eig_image.mean();
ArrayXXf bin_image;
bin_image = eig_image.array() > image_mean;
I'm getting an error in the last line saying that I mixed different numeric types. Any suggestion on how I can do element-wise comparisons with Eigen?
The easiest solution in Eigen would be
ArrayXXf bin_image = (eig_image.array() > image_mean).cast<float>()*2.f-1.f;
An alternative would be:
ArrayXXf bin_image = (eig_image.array() > image_mean)
    .select(ArrayXXf::Constant(eig_image.rows(), eig_image.cols(), 1.0f), -1.0f);
Unfortunately, having to use ArrayXXf::Constant for one of the arguments is necessary, because there is no .select overload that accepts two scalar values.
However, unless you plan to do further processing in Eigen, you should consider using the corresponding OpenCV function, cv::threshold.
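A rough OpenCV-only sketch of that route (variable names are illustrative; the image is first converted to CV_32F so that cv::threshold can write the non-integer output):

cv::Mat img_f, bin_image_cv;
cv_image.convertTo(img_f, CV_32F);
double img_mean = cv::mean(img_f)[0];
// THRESH_BINARY writes maxval (here 2.0) where img_f > img_mean and 0 elsewhere
cv::threshold(img_f, bin_image_cv, img_mean, 2.0, cv::THRESH_BINARY);
bin_image_cv -= 1.0; // map {0, 2} to {-1, +1}

Note that pixels exactly equal to the mean end up as -1 here, just like in the Eigen versions above.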
I'm following this tutorial, which uses Features2D + Homography. If I have a known camera matrix for each image, how can I optimize the result? I tried some images, but it didn't work well.
Edit: After reading some material, I think I should rectify the two images first. But the rectification is not perfect, so a vertical line in image 1 generally corresponds to a vertical band in image 2. Are there any good algorithms?
I'm not sure I understand your problem. Do you want to find corresponding points between the images, or do you want to improve the correctness of your matches by using the camera intrinsics?
In principle, in order to use camera geometry for finding matches, you would need the fundamental or essential matrix, depending on whether you know the camera intrinsics (i.e. have a calibrated camera). That means you would need an estimate of the relative rotation and translation of the camera. Then, by computing the epipolar lines corresponding to the features found in one image, you would search along those lines in the second image to find the best match. However, I think it would be better to simply rely on automatic feature matching. Given the fundamental/essential matrix, you could try your luck with correctMatches, which will move the correspondences such that the reprojection error is minimised.
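A rough sketch of that last step (assuming pts1 and pts2 have been filled with the matched keypoint locations; the variable names are illustrative):

std::vector<cv::Point2f> pts1, pts2;   // matched keypoint locations from the two images
// ... fill pts1/pts2 from your DMatch list ...
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);
std::vector<cv::Point2f> pts1_fixed, pts2_fixed;
cv::correctMatches(F, pts1, pts2, pts1_fixed, pts2_fixed);
// pts1_fixed/pts2_fixed now satisfy the epipolar constraint exactly,
// with minimal displacement from the original points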
Tips for better matches
To increase the stability and saliency of automatic matches, it usually pays to
Adjust the parameters of the feature detector
Try different detection algorithms
Perform a ratio test to filter out those keypoints which have a very similar second-best match and are therefore unstable. This is done like this:
Mat descriptors_1, descriptors_2; // obtained from the feature detector
BFMatcher matcher(NORM_L2, false); // the norm depends on the feature descriptor
vector<vector<DMatch>> match_candidates;
vector<DMatch> matches;
const float ratio = 0.8; // or something similar
matcher.knnMatch(descriptors_1, descriptors_2, match_candidates, 2);
for (size_t i = 0; i < match_candidates.size(); i++)
{
    if (match_candidates[i][0].distance < ratio * match_candidates[i][1].distance)
        matches.push_back(match_candidates[i][0]);
}
A more involved way of filtering would be to compute the reprojection error for each keypoint in the first frame. This means computing the corresponding epipolar line in the second image and then checking how far its supposed matching point is away from that line. Throwing away the points whose distance exceeds some threshold removes the matches which are incompatible with the epipolar geometry (which I assume is known here). Computing the error can be done like this (I honestly do not remember where I took this code from, and I may have modified it a bit):
double computeReprojectionError(vector<Point2f>& imgpts1, vector<Point2f>& imgpts2, Mat& inlier_mask, const Mat& F)
{
    double err = 0;
    vector<Vec3f> lines[2];
    int npt = (int)sum(inlier_mask)[0];

    // strip outliers so validation is constrained to the correspondences
    // which were used to estimate F
    vector<Point2f> imgpts1_copy(npt), imgpts2_copy(npt);
    int c = 0;
    for (int k = 0; k < inlier_mask.size().height; k++)
    {
        if (inlier_mask.at<uchar>(k, 0) == 1)
        {
            imgpts1_copy[c] = imgpts1[k];
            imgpts2_copy[c] = imgpts2[k];
            c++;
        }
    }

    Mat imgpt[2] = { Mat(imgpts1_copy), Mat(imgpts2_copy) };
    computeCorrespondEpilines(imgpt[0], 1, F, lines[0]);
    computeCorrespondEpilines(imgpt[1], 2, F, lines[1]);

    for (int j = 0; j < npt; j++)
    {
        // the error is the distance between a point u_l = (x,y) and the epipolar line of its
        // corresponding point u_r in the second image, plus the reverse:
        // err_ij = d(u_l, F^T * u_r) + d(u_r, F * u_l)
        Point2f u_l = imgpts1_copy[j], // for the purpose of this function, imgpts1 is the "left"
                u_r = imgpts2_copy[j]; // image and imgpts2 the "right" one; it makes no difference

        float a2 = lines[1][j][0],     // epipolar line of u_r, lying in the left image (F^T * u_r)
              b2 = lines[1][j][1],
              c2 = lines[1][j][2];
        float norm_factor2 = sqrt(pow(a2, 2) + pow(b2, 2));

        float a1 = lines[0][j][0],     // epipolar line of u_l, lying in the right image (F * u_l)
              b1 = lines[0][j][1],
              c1 = lines[0][j][2];
        float norm_factor1 = sqrt(pow(a1, 2) + pow(b1, 2));

        // distance of a point (x,y) to the line (a,b,c) is |ax + by + c| / sqrt(a^2 + b^2)
        double errij =
            fabs(u_l.x * a2 + u_l.y * b2 + c2) / norm_factor2 +
            fabs(u_r.x * a1 + u_r.y * b1 + c1) / norm_factor1;
        err += errij; // at this point, apply a threshold and mark bad matches
    }
    return err / npt;
}
The point is: grab the fundamental matrix, use it to compute the epilines for all the points, and then compute the distance (the lines are given in parametric form, so you need to do a bit of algebra to get the distance). This is somewhat similar in outcome to what findFundamentalMat with the RANSAC method does. It returns a mask in which each match is marked either with a 1, meaning that it was used to estimate the matrix, or with a 0 if it was thrown out. But estimating the fundamental matrix from matches like this will probably be less accurate than using chessboards.
EDIT: Looks like oarfish beat me to it, but I'll leave this here.
The fundamental matrix (F) defines a mapping from a point in the left image to a line in the right image on which the corresponding point must lie, assuming perfect calibration. This is the epipolar line, i.e. the projection into the right image of the ray through the left image point and the left camera center; it passes through the right epipole. For references, see these lecture notes and this chapter of the HZ book.
Given a set of point correspondences (p_L, p_R) in the left and right images, obtained from SURF (or any other feature matcher), and given F, the constraint from the epipolar geometry of the stereo pair says that p_R should lie on the epipolar line projected by p_L onto the right image, i.e. that p_R^T * F * p_L = 0.
In practice, calibration errors from noise as well as erroneous feature matches lead to a non-zero value.
However, using this idea, you can then perform outlier removal by rejecting those feature matches for which this residual is larger than a certain threshold, i.e. reject (p_L, p_R) if and only if the distance of p_R from the epipolar line F * p_L exceeds that threshold.
When selecting this threshold, keep in mind that it is the distance in image space of a point from an epipolar line that you are willing to tolerate, which in some sense is your epipolar error tolerance.
Degenerate case: To visually imagine what this means, let us assume that the stereo pair differ only in a pure X-translation. Then the epipolar lines are horizontal. This means that you can connect the feature matched point pairs by a line and reject those pairs whose line slope is not close to zero. The equation above is a generalization of this idea to arbitrary stereo rotation and translation, which is accounted for by the matrix F.
Your specific images: It looks like your feature matches are sparse. I suggest instead to use a dense feature matching approach so that after outlier removal, you are still left with a sufficient number of good-quality matches. I'm not sure which dense feature matcher is already implemented in OpenCV, but I suggest starting here.
Given your pictures, you are trying to do stereo matching.
This page will be helpful. The rectification you want can be done using stereoCalibrate followed by stereoRectify.
(The OpenCV documentation shows an example of the resulting rectified image pair.)
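A rough sketch of that second step (assuming cameraMatrix1/2, distCoeffs1/2, R and T are the outputs of cv::stereoCalibrate and imageSize is the image resolution):

cv::Mat R1, R2, P1, P2, Q;
cv::stereoRectify(cameraMatrix1, distCoeffs1,
                  cameraMatrix2, distCoeffs2,
                  imageSize, R, T,
                  R1, R2, P1, P2, Q);
// R1/R2 are the rectifying rotations and P1/P2 the new projection matrices;
// pass them to cv::initUndistortRectifyMap and cv::remap to warp both images.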
In order to find the fundamental matrix, you need correct correspondences, but in order to get good correspondences, you need a good estimate of the fundamental matrix. This might sound like an impossible chicken-and-egg problem, but there is a well-established method to deal with it: RANSAC.
It randomly selects a small set of correspondences, uses those to calculate a fundamental matrix (using the 7- or 8-point algorithm) and then tests how many of the other correspondences comply with this matrix (using the method described by scribbleink for measuring the distance between a point and its epipolar line). It keeps testing new combinations of correspondences for a certain number of iterations and selects the one with the most inliers.
This is already implemented in OpenCV as cv::findFundamentalMat (http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat). Select the method CV_FM_RANSAC to use RANSAC to remove bad correspondences. It will output a list of all the inlier correspondences.
The requirement for this is that the points do not all lie on the same plane.
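A minimal sketch of that call (assuming pts1 and pts2 are the matched std::vector<cv::Point2f> coming out of the feature matcher):

cv::Mat inlier_mask;
cv::Mat F = cv::findFundamentalMat(pts1, pts2, CV_FM_RANSAC,
                                   3.0,   // max. distance (pixels) from a point to its epipolar line
                                   0.99,  // desired confidence
                                   inlier_mask);
// inlier_mask holds 1 for correspondences consistent with F and 0 for the rejected ones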
I have a 3 channel Mat image, type is CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with its neighbours and then set 0 or 1 if the neighbour is greater or not.
I can get the intensity by calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare the pixel values for every channel (B, G and R, i.e. Vec3b[0], Vec3b[1] and Vec3b[2]) and then merge the three per-channel results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear ordering function because it is circular (i.e. the value 1.0 is identical to 0.0, so you cannot decide whether 0.5 > 0.0 or 0.5 < 0.0). However, the saturation (S) or the value (V) are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
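A small sketch of that idea (assuming img is a CV_8UC3 BGR image and the two pixel coordinates are known; the variable names are illustrative):

cv::Mat hsv;
cv::cvtColor(img, hsv, CV_BGR2HSV);
cv::Vec3b p1 = hsv.at<cv::Vec3b>(y1, x1);
cv::Vec3b p2 = hsv.at<cv::Vec3b>(y2, x2);
// channel 1 is saturation (S), channel 2 is value (V); pick whichever projection you prefer
bool p1_greater = p1[2] > p2[2];      // compare by brightness (V)
// bool p1_greater = p1[1] > p2[1];   // or compare by colorfulness (S)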
As far as I understand, you want a measure of the distance/similarity between two Vec3b pixels. This maps to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the most common measures (and I think this is what you're asking for) is the Euclidean distance.
If you are using OpenCV then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, CV_L2);
You can refer to this for reading about cv::norm and its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the Lab color space, since Euclidean distance in Lab space is a good approximation of the human perception of color differences.
Edit 2: I see what you mean; for this you can get the magnitude of each vector and then compare them, something like this:
double a_magnitude = cv::norm(a, CV_L2);
double b_magnitude = cv::norm(b, CV_L2);
if (a_magnitude > b_magnitude)
{
    // do something
}
else
{
    // do something else
}
At the moment I'm working with a camera to detect markers. I use OpenCV and the ArUco library.
I'm stuck with a problem right now: I need to detect whether the distance between two markers is less than a specific value. I have a function to calculate the distance and I can compare everything, but I'm looking for the most efficient way to keep track of all the markers (around 5 or 6) and how close they are to each other.
There is a list of markers, but I can't find an efficient way to compare all of them.
I have a
Vector <Marker>
I also have a function called getDistance.
double getDistance(cv::Point2f punt1, cv::Point2f punt2)
{
    float xd = punt2.x - punt1.x;
    float yd = punt2.y - punt1.y;
    double Distance = sqrtf(xd*xd + yd*yd);
    return Distance;
}
The markers contain a Point2f, so I can compare them easily.
One way to increase performance is to keep all the distances squared and avoid using the square root function. If you square the specific value you are checking against then this should work fine.
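A small sketch of that comparison (punt1 and punt2 being two marker centers, and specific_value the threshold from the question):

double dx = punt2.x - punt1.x;
double dy = punt2.y - punt1.y;
double dist_sq = dx * dx + dy * dy;               // no square root needed
if (dist_sq < specific_value * specific_value)    // compare against the squared threshold
{
    // the two markers are closer together than the specific value
}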
There isn't really a lot to recommend. If I understand the question and I'm counting the pairs correctly, you'll need to calculate 10 distances when you have 5 points, and 15 distances when you have 6 points. If you need to determine all of the distances, then you have no choice but to calculate all of the distances. I don't see any way around that. The only advice I can give is to make sure you calculate the distance between each pair only once (e.g., once you know the distance between points A and B, you don't need to calculate the distance between B and A).
It might be possible to sort the vector in such a way that you can short circuit your loop. For instance, if you sort it correctly and the distance between point A and point B is larger than your threshold, then the distances between A and C and A and D will also be larger than the threshold. But keep in mind that sorting isn't free, and it's likely that for small sets of points it would be faster to just calculate all distances ("Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. ... For example, binary trees are always faster than splay trees for workaday problems.").
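If you do want to try the sorting idea, a sketch could look like this (sorting by x so the inner loop can stop as soon as the x-difference alone exceeds the threshold; it assumes a std::vector<cv::Point2f> points, a double threshold, and the getDistance helper from the question):

std::sort(points.begin(), points.end(),
          [](const cv::Point2f& a, const cv::Point2f& b) { return a.x < b.x; });
for (size_t i = 0; i < points.size(); ++i) {
    for (size_t j = i + 1; j < points.size(); ++j) {
        if (points[j].x - points[i].x > threshold)
            break; // every later point is even farther away in x, so stop early
        if (getDistance(points[i], points[j]) < threshold) {
            // handle the close pair
        }
    }
}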
Newer versions of the C and C++ standard library have a hypot function for calculating distance between points:
#include <cmath>
double getDistance(cv::Point2f punt1, cv::Point2f punt2)
{
return std::hypot(punt2.x - punt1.x, punt2.y - punt1.y);
}
It's not necessarily faster, but it should be implemented in a way that avoids overflow when the points are far apart.
One minor optimization is to simply check if the change in X or change in Y exceeds the threshold. If it does, you can ignore the distance between those two points because the overall distance will also exceed the threshold:
const double threshold = ...;
std::vector<cv::Point2f> points;
// populate points
...
for (auto i = points.begin(); i != points.end(); ++i) {
    for (auto j = i + 1; j != points.end(); ++j) {
        double dx = std::abs(i->x - j->x), dy = std::abs(i->y - j->y);
        if (dx > threshold || dy > threshold) {
            continue;
        }
        double distance = std::hypot(dx, dy);
        if (distance > threshold) {
            continue;
        }
        ...
    }
}
If you're dealing with large amounts of data inside your vector, you may want to consider some multithreading using std::future.
The vector of markers could be split into chunks that are computed asynchronously, with the results collected through std::future<>. Combining this with @Sesame's squared-distance suggestion will increase your speed as well.
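A rough sketch of that idea (hypothetical helper names; it reuses the getDistance function from the question and, for brevity, splits the work into just two chunks):

#include <future>
#include <functional>
#include <vector>
#include <opencv2/core/core.hpp>

double getDistance(cv::Point2f punt1, cv::Point2f punt2); // from the question above

// count the pairs closer than `threshold`, restricted to outer indices [begin, end)
int countClosePairs(const std::vector<cv::Point2f>& pts,
                    size_t begin, size_t end, double threshold)
{
    int count = 0;
    for (size_t i = begin; i < end; ++i)
        for (size_t j = i + 1; j < pts.size(); ++j)
            if (getDistance(pts[i], pts[j]) < threshold)
                ++count;
    return count;
}

int countAllClosePairs(const std::vector<cv::Point2f>& pts, double threshold)
{
    size_t mid = pts.size() / 2;
    // run the two halves of the outer loop concurrently
    auto lower = std::async(std::launch::async, countClosePairs,
                            std::cref(pts), size_t(0), mid, threshold);
    auto upper = std::async(std::launch::async, countClosePairs,
                            std::cref(pts), mid, pts.size(), threshold);
    return lower.get() + upper.get();
}

Keep in mind that for only 5 or 6 markers the threading overhead will outweigh any gain; this only pays off for much larger point sets.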