Optimizing distance between two vectors in C++

I have two vectors of coordinates, stored as OpenCV floating-point points:
a) dstpoints is a vector of OpenCV points - std::vector<cv::Point2f> (I have 162 points in my example; they do not change),
b) ppts is also std::vector<cv::Point2f> and the same size as dstpoints:
std::vector<cv::Point2f> ppts = project_keypoints(params, input);
But ppts depends on two other vectors:
- input is 2*162=324 elements long and does not change,
- params is 189 elements long, and its values should be changed to minimize the value of the variable suma, computed like this:
double suma = 0.0;
for (int i = 0; i < dstpoints_size; i++)
{
    suma += pow(dstpoints[i].x - ppts[i].x, 2);
    suma += pow(dstpoints[i].y - ppts[i].y, 2);
}
I'm looking for the params vector that gives the smallest value of the suma variable. A least-squares algorithm seems to be a good option for solving this:
https://en.wikipedia.org/wiki/Least_squares
I tried dlib version:
http://dlib.net/dlib/optimization/optimization_least_squares_abstract.h.html#solve_least_squares
but I'm afraid it is not a good fit for my case.
I think the problem with the dlib version is that for every different params vector I get a different ppts vector, not just a single value, and I don't know whether the solve_least_squares function from dlib can handle my example.
I'm looking for a C++ solution (probably using an optimizer) that could help solve this problem.
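For clarity, the whole objective as a function of params alone could be wrapped like this (a minimal sketch; the exact type of input and the signature of project_keypoints are assumptions here, since only their sizes are given above):
#include <vector>
#include <opencv2/core.hpp>

// Assumed signature based on the description above; the real types may differ.
std::vector<cv::Point2f> project_keypoints(const std::vector<double>& params,
                                           const std::vector<double>& input);

// suma as a function of params alone (dstpoints and input are fixed).
double objective(const std::vector<double>& params,
                 const std::vector<cv::Point2f>& dstpoints,
                 const std::vector<double>& input)
{
    std::vector<cv::Point2f> ppts = project_keypoints(params, input);
    double suma = 0.0;
    for (std::size_t i = 0; i < dstpoints.size(); i++)
    {
        double dx = dstpoints[i].x - ppts[i].x;
        double dy = dstpoints[i].y - ppts[i].y;
        suma += dx * dx + dy * dy;
    }
    return suma;
}
Whatever optimizer ends up being used would only need this function (or the 324 per-coordinate residuals it sums) evaluated for candidate params vectors.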

Related

Get orientation angle from rotation matrix (camera/marker)

When I get my rvecs from the function estimatePoseSingleMarkers, how can I get the orientation angle of my marker?
On Stack Overflow and other forums, the function Rodrigues seems to be necessary, but I don't understand exactly what the function does.
After applying this function, I understood that I need to convert the result of Rodrigues into Euler angles.
I expect to get a float vector representing angles like 45.6°, for example.
But I get strange values: 1.68175 -0.133805 -1.5824
For these values, my marker was right in front of my camera, so the values do not correspond.
Here is my code:
cv::Mat R;
cv::Rodrigues(rvecs[i], R); // R is 3x3
std::vector<float> v = rotationMatrixToEulerAngles(R);
for (size_t i = 0; i < v.size(); i++)
    std::cout << v[i] << std::endl;
The function rotationMatrixToEulerAngles is from https://learnopencv.com/rotation-matrix-to-euler-angles. I have tried other things but I still get strange values, so I'm missing something... I want to get something like [180, 90, 0] or [45, 0, 152], etc.
Could someone explain to me, step by step, how to get a vector of angles (one angle per axis) from the rvecs?
UPDATE:
I have tested many different pieces of code proposed on the internet and read articles, but I still don't get good values.
I now get "good"-looking float values like 190.45 or 80.32, etc., because I multiply by (180/M_PI), but the values are still wrong.
When I put my marker in front of my camera, I should get [0, 0, 0], I think, but I don't.
What is the problem?
I found the problem: I need to put rvecs[i][0], rvecs[i][1], rvecs[i][2] into a vector, then pass that vector to cv::Rodrigues, and finally use the function rotationMatrixToEulerAngles, as sketched below.
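A minimal sketch of that fix (assuming rvecs is the std::vector<cv::Vec3d> returned by cv::aruco::estimatePoseSingleMarkers, and rotationMatrixToEulerAngles is the learnopencv.com helper, which returns radians):
#include <cmath>
#include <iostream>
#include <vector>
#include <opencv2/calib3d.hpp>

cv::Vec3f rotationMatrixToEulerAngles(cv::Mat& R); // helper from learnopencv.com, returns radians

void printMarkerAngles(const std::vector<cv::Vec3d>& rvecs, size_t i)
{
    cv::Vec3d rvec(rvecs[i][0], rvecs[i][1], rvecs[i][2]); // rotation vector of marker i
    cv::Mat R;
    cv::Rodrigues(rvec, R);                                // 3x3 rotation matrix
    cv::Vec3f euler = rotationMatrixToEulerAngles(R);      // Euler angles in radians
    for (int k = 0; k < 3; ++k)
        std::cout << euler[k] * 180.0 / M_PI << " ";       // convert to degrees
    std::cout << std::endl;
}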

How to generate the best curved fit for an unknown set of 2D points in C++

I am trying to get the best fit for an unknown set of 2D points. The points are the center points of rivers, and they don't come in any particular order.
I've tried using polynomial regression, but I don't know what the best polynomial order is for different sets of data.
I've also tried a cubic spline, but I don't want a line that passes through all of the points; I want an approximation of the best-fit line through the points.
I would like to get something like this even for lines that have more curves. This example is computed with polynomial regression, and it works fine.
Is there a way to do some smooth or regression algorithm that can get the best fit line even for a set of points like the following?
PolynomialRegression<double> pol;
static int polynomOrder = <whateverPolynomOrderFitsBetter>;
double error = 0.005f;
std::vector<double> coeffs;
pol.fitIt(x, y, polynomOrder, coeffs);

// get fitted values
for (std::size_t i = 0; i < points.size(); i++)
{
    int order = polynomOrder;
    long double yFitted = 0;
    while (order >= 0)
    {
        yFitted += (coeffs[order] * pow(points[i].x, order) + error);
        order--;
    }
    points[i].y = yFitted;
}
In my implementation with a polynomial order of 35, this is all I can get, and increasing the polynomial order further turns the coefficients into NaN values.
I'm not sure this is the best approach to take.

Total Least Squares algorithm in C/C++

Given a set of points P, I need to find a line L that best approximates these points. I have tried to use the function gsl_fit_linear from the GNU Scientific Library. However, my data set often contains points whose line of best fit has undefined slope (x=c), so gsl_fit_linear returns NaN. It is my understanding that total least squares is best for this sort of thing because it is fast and robust and it gives the equation in terms of r and theta (so x=c can still be represented). I can't seem to find any C/C++ code out there for this problem. Does anyone know of a library or something I can use? I've read a few research papers on this, but the topic is still a little fuzzy, so I don't feel confident implementing my own.
Update:
I made a first attempt at programming my own with Armadillo, using the code given on this Wikipedia page. Alas, I have so far been unsuccessful.
This is what I have so far:
void pointsToLine(vector<Point> P)
{
    Row<double> x(P.size());
    Row<double> y(P.size());
    for (int i = 0; i < P.size(); i++)
    {
        x << P[i].x;
        y << P[i].y;
    }
    int m = P.size();
    int n = x.n_cols;
    mat Z = join_rows(x, y);
    mat U;
    vec s;
    mat V;
    svd(U, s, V, Z);
    mat VXY = V(span(0, (n-1)), span(n, (V.n_cols-1)));
    mat VYY = V(span(n, (V.n_rows-1)), span(n, (V.n_cols-1)));
    mat B = (-1*VXY) / VYY;
    cout << B << endl;
}
The output B is always 0.5504, even when my data set changes. Also, I thought the output should be two values, so I'm definitely doing something very wrong.
Thanks!
To find the line that minimises the sum of the squares of the (orthogonal) distances from the line, you can proceed as follows:
The line is the set of points p+r*t where p and t are vectors to be found, and r varies along the line. We restrict t to be unit length. While there is another, simpler, description in two dimensions, this one works with any dimension.
The steps are
1/ compute the mean p of the points
2/ accumulate the covariance matrix C
C = Sum{ i | (q[i]-p)*(q[i]-p)' } / N
(where you have N points and ' denotes transpose)
3/ diagonalise C and take as t the eigenvector corresponding to the largest eigenvalue.
All this can be justified, starting from the (orthogonal) distance squared of a point q from a line represented as above, which is
d2(q) = (q-p)'*(q-p) - ((q-p)'*t)^2
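In code, those three steps could look roughly like the following Armadillo sketch (the Point struct and the function name are illustrative, not taken from the code in the question):
#include <vector>
#include <armadillo>

struct Point { double x, y; };

// Orthogonal least-squares line fit: returns a point p on the line and a unit direction t.
void pointsToLineTLS(const std::vector<Point>& P, arma::vec& p, arma::vec& t)
{
    arma::mat Z(P.size(), 2);
    for (arma::uword i = 0; i < P.size(); ++i)
    {
        Z(i, 0) = P[i].x;
        Z(i, 1) = P[i].y;
    }

    // 1/ mean of the points
    p = arma::mean(Z, 0).t();

    // 2/ covariance matrix C (the normalisation constant does not affect the eigenvectors)
    arma::mat C = arma::cov(Z);

    // 3/ the eigenvector of C belonging to the largest eigenvalue is the direction t
    arma::vec eigval;
    arma::mat eigvec;
    arma::eig_sym(eigval, eigvec, C);   // eigenvalues come out in ascending order
    t = eigvec.col(eigvec.n_cols - 1);  // last column = largest eigenvalue
}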

C++ - Efficient way to compare vectors

At the moment I'm working with a camera to detect markers. I use OpenCV and the ArUco library.
I'm stuck on a problem right now: I need to detect whether the distance between two markers is less than a specific value. I have a function to calculate the distance, so I can compare everything, but I'm looking for the most efficient way to keep track of all the markers (around 5 or 6) and how close together they are.
There is a list of markers, but I can't find an efficient way to compare all of them.
I have a
std::vector<Marker>
I also have a function called getDistance:
double getDistance(cv::Point2f punt1, cv::Point2f punt2)
{
    float xd = punt2.x - punt1.x;
    float yd = punt2.y - punt1.y;
    double Distance = sqrtf(xd*xd + yd*yd);
    return Distance;
}
The Markers contain a Point2f, so I can compare them easily.
One way to increase performance is to keep all the distances squared and avoid using the square root function. If you square the specific value you are checking against then this should work fine.
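A sketch of that idea (the helper name is illustrative): compute the squared distance and compare it against the squared threshold, skipping the square root entirely.
#include <opencv2/core.hpp>

// Squared distance between two points; no sqrt needed for comparisons.
double getDistanceSquared(cv::Point2f punt1, cv::Point2f punt2)
{
    float xd = punt2.x - punt1.x;
    float yd = punt2.y - punt1.y;
    return xd * xd + yd * yd;
}

// usage: square the threshold once and compare against it
// if (getDistanceSquared(a, b) < threshold * threshold) { /* markers are close */ }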
There isn't really a lot to recommend. If I understand the question and I'm counting the pairs correctly, you'll need to calculate 10 distances when you have 5 points, and 15 distances when you have 6 points. If you need to determine all of the distances, then you have no choice but to calculate all of the distances. I don't see any way around that. The only advice I can give is to make sure you calculate the distance between each pair only once (e.g., once you know the distance between points A and B, you don't need to calculate the distance between B and A).
It might be possible to sort the vector in such a way that you can short circuit your loop. For instance, if you sort it correctly and the distance between point A and point B is larger than your threshold, then the distances between A and C and A and D will also be larger than the threshold. But keep in mind that sorting isn't free, and it's likely that for small sets of points it would be faster to just calculate all distances ("Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. ... For example, binary trees are always faster than splay trees for workaday problems.").
Newer versions of the C and C++ standard library have a hypot function for calculating distance between points:
#include <cmath>

double getDistance(cv::Point2f punt1, cv::Point2f punt2)
{
    return std::hypot(punt2.x - punt1.x, punt2.y - punt1.y);
}
It's not necessarily faster, but it should be implemented in a way that avoids overflow when the points are far apart.
One minor optimization is to simply check if the change in X or change in Y exceeds the threshold. If it does, you can ignore the distance between those two points because the overall distance will also exceed the threshold:
const double threshold = ...;
std::vector<cv::Point2f> points;
// populate points
...
for (auto i = points.begin(); i != points.end(); ++i) {
    for (auto j = i + 1; j != points.end(); ++j) {
        double dx = std::abs(i->x - j->x), dy = std::abs(i->y - j->y);
        if (dx > threshold || dy > threshold) {
            continue;
        }
        double distance = std::hypot(dx, dy);
        if (distance > threshold) {
            continue;
        }
        ...
    }
}
If you're dealing with large amounts of data inside your vector, you may want to consider some multithreading using std::future.
The std::vector<Marker> could be split into chunks that are computed asynchronously, with each chunk's result stored inside a std::future<>; combining this with #Sesame's suggestion will also increase your speed.
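A rough sketch of that chunking idea (the chunk count, helper names, and the use of cv::Point2f in place of Marker are illustrative choices, not from the question): each task handles a contiguous range of first indices and compares those points against all later points, so every pair is still checked exactly once.
#include <algorithm>
#include <cstddef>
#include <future>
#include <utility>
#include <vector>
#include <opencv2/core.hpp>

std::vector<std::pair<size_t, size_t>>
closePairs(const std::vector<cv::Point2f>& pts, double threshold, size_t chunkCount = 4)
{
    const double thresholdSq = threshold * threshold; // compare squared distances (see above)
    auto worker = [&](size_t begin, size_t end) {
        std::vector<std::pair<size_t, size_t>> hits;
        for (size_t i = begin; i < end; ++i)
            for (size_t j = i + 1; j < pts.size(); ++j)
            {
                double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y;
                if (dx * dx + dy * dy < thresholdSq)
                    hits.emplace_back(i, j); // these two markers are closer than the threshold
            }
        return hits;
    };

    // launch one asynchronous task per chunk of "first" indices
    std::vector<std::future<std::vector<std::pair<size_t, size_t>>>> tasks;
    const size_t step = (pts.size() + chunkCount - 1) / chunkCount;
    for (size_t begin = 0; begin < pts.size(); begin += step)
        tasks.push_back(std::async(std::launch::async, worker, begin,
                                   std::min(begin + step, pts.size())));

    // collect the partial results
    std::vector<std::pair<size_t, size_t>> result;
    for (auto& task : tasks)
    {
        auto part = task.get();
        result.insert(result.end(), part.begin(), part.end());
    }
    return result;
}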

Calculate mean for vector of points

I have a vector of 2-dimensional points in OpenCV:
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in MATLAB, but I'm not sure whether I can use some high-level OpenCV function to accomplish the same thing. Any suggestions?
InputArray does a good job here. You can simply call
cv::Mat mean_;
cv::reduce(points, mean_, 01, CV_REDUCE_AVG);
// convert from Mat to Point - there may be an even simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In newer OpenCV versions, the InputArray data type was introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one row, and the number of columns equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
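As another illustration of the same InputArray mechanism (a sketch, assuming points is the std::vector<cv::Point2f> from the question): cv::mean also accepts the vector directly and returns the per-channel averages as a cv::Scalar.
#include <opencv2/core.hpp>

// cv::mean() takes an InputArray, so the vector of points can be passed as-is;
// the result holds the mean of each channel (x in m[0], y in m[1]).
cv::Scalar m = cv::mean(points);
cv::Point2f mean_point(static_cast<float>(m[0]), static_cast<float>(m[1]));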
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero);
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator /, so we need to multiply by the inverse instead of dividing by the size.
You can use the STL's std::accumulate (from the <numeric> header) as follows:
cv::Point2f sum = std::accumulate(
    points.begin(), points.end(),   // Run from begin to end
    cv::Point2f(0.0f, 0.0f),        // Initialize with a zero point
    std::plus<cv::Point2f>()        // Use addition for each point (default)
);
cv::Point2f mean = sum / points.size();  // Divide by count to get mean
Add them all up and divide by the total number of points.