Get orientation angle from rotation matrix (camera/marker) - c++

When I get my rvecs from the function estimatePoseSingleMarkers, how can I get the orientation angle of my marker?
On Stack Overflow and other forums, the function Rodrigues seems to be necessary, but I don't understand exactly what that function does.
After applying it, I understood that I need to convert the result of Rodrigues into Euler angles.
I expect to get a vector of floats representing angles, like 45.6° for example.
But I get strange values: 1.68175 -0.133805 -1.5824
For these values my marker was placed right in front of my camera, so the values do not correspond to what I expect.
Here is my code:
cv::Mat R;
cv::Rodrigues(rvecs[i], R); // R is 3x3
std::vector<float> v = rotationMatrixToEulerAngles(R);
for(size_t i = 0; i < v.size(); i++)
    std::cout << v[i] << std::endl;
The function rotationMatrixToEulerAngles is from https://learnopencv.com/rotation-matrix-to-euler-angles. I have tried other things, but I still get strange values, so I must be missing something. I want to get something like [180, 90, 0] or [45, 0, 152], etc.
Can someone explain, step by step, how to get a vector of angles (one angle per axis) from the rvecs?
UPDATE:
I have tested many different pieces of code proposed on the internet and read several articles, but I still don't get good values.
I now get "good-looking" float values like 190.45 or 80.32, etc., because I multiply by (180/M_PI), but the values are still wrong.
When I put my marker right in front of my camera, I would expect something like [0, 0, 0], but that is not what I get.
What is the problem?

I found the problem: I need to put rvecs[i][0], rvecs[i][1] and rvecs[i][2] into a vector, then call cv::Rodrigues on that vector, and finally pass the resulting matrix to rotationMatrixToEulerAngles.
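For anyone hitting the same thing, here is a minimal sketch of the full pipeline (untested as written here; it assumes rotationMatrixToEulerAngles() is the helper from the LearnOpenCV article linked above, which returns radians, and that rvecs comes from estimatePoseSingleMarkers as a vector of cv::Vec3d):
#include <opencv2/calib3d.hpp>
#include <cmath>
#include <iostream>

cv::Vec3d rvec(rvecs[i][0], rvecs[i][1], rvecs[i][2]); // copy the i-th rotation vector
cv::Mat R;
cv::Rodrigues(rvec, R);                                // 3x1 rotation vector -> 3x3 rotation matrix
cv::Vec3f euler = rotationMatrixToEulerAngles(R);      // Euler angles in radians
for (int k = 0; k < 3; k++)
    std::cout << euler[k] * 180.0 / M_PI << " ";       // convert each axis to degrees
std::cout << std::endl;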

Related

How to create a B-Spline with QwtSpline

QwtSpline offers two different types of splines, but I don't see a difference between them.
I found an image that illustrates my problem:
QwtSpline always creates a spline like the one on the left side of the picture.
But I want a spline like the one on the right side.
My code is the following:
QwtSpline spline;
QPolygonF polygon;
QVector<QPointF> result;
polygon.append(startPoint);
polygon.append(rotatedPoint);
polygon.append(endPoint);
spline.setPoints(polygon);
for(double i = startPoint.rx(); i < endPoint.rx(); i++)
{
    result << QPointF(i, spline.value(i));
}
result << QPointF(endPoint.rx(), spline.value(endPoint.rx()));
What I want to do is draw a spline like the one on the right side of the picture on a QwtPlot. Maybe there is an easier way to solve my problem than creating a QwtSpline and iterating through it to create a QwtPlotCurve from every point on the spline.
If it is easier to draw a Bézier curve on a QwtPlot, that's no problem; a Bézier curve would actually be easier for me. I only took a spline because I didn't find a Bézier curve in Qwt.
Try Qwt from one of the branches >= 6.2. It has a completely new implementation of several spline interpolation algorithms.
But if it is only about drawing a Bézier curve, you can also use QwtShapeItem, which displays a QPainterPath. Of course you can also use QPainterPath to create a QPolygon from your Bézier curve and then use QwtPlotCurve.
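A rough sketch of those two options (untested; I am treating the question's rotatedPoint as the Bézier control point, plot as your QwtPlot pointer, and using QwtPlotShapeItem, which is what the shape item is called in my Qwt version):
#include <QPainterPath>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>
#include <qwt_plot_shapeitem.h>

// build the curve as a QPainterPath (quadratic Bézier: start, control, end)
QPainterPath path(startPoint);
path.quadTo(rotatedPoint, endPoint);

// Option A: display the path directly
QwtPlotShapeItem *item = new QwtPlotShapeItem();
item->setShape(path);
item->attach(plot);

// Option B: sample the path into points and use an ordinary QwtPlotCurve
QVector<QPointF> samples;
for (int s = 0; s <= 100; ++s)
    samples << path.pointAtPercent(s / 100.0);

QwtPlotCurve *curve = new QwtPlotCurve();
curve->setSamples(samples);
curve->attach(plot);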
OK guys, since you didn't like my answer before, I will try to explain how to calculate a Bézier curve:
I think the easiest way is to use the Bernstein-Bézier representation of curves.
To do that, you first have to work out the Bernstein polynomials. That's not difficult; the formula is
B_i,n(t) = C(n, i) * t^i * (1 - t)^(n - i)
where n is the degree of the curve (one less than the number of control points) and
i is the index of the current point, running from 0 to n.
That means you have as many Bernstein polynomials as you have control points.
Once you know every Bernstein polynomial, you can use the following formula to calculate the curve:
r(t) = sum over i = 0..n of B_i,n(t) * P_i
P_i is a control point and t always runs from 0 to 1: 0 is the position at the left end of the curve and 1 represents the position at the right end. r(t) is the new point on the curve.
Now you have to evaluate the formula above separately for x and y.
This is the formula for x: x(t) = sum over i = 0..n of B_i,n(t) * P_i.x
This is the formula for y: y(t) = sum over i = 0..n of B_i,n(t) * P_i.y
As you can see, the only variable parameter on the right side is t. That means you have to evaluate this formula many times with t between 0 and 1. The easiest way to do that is to write a for loop like this:
QList<QPointF> results;              // points on the curve
QList<QPointF> points;               // your control points P_0 .. P_n
for(double t = 0; t <= 1; t += 0.01)
{
    double x = /* formula for x(t) */;
    double y = /* formula for y(t) */;
    results << QPointF(x, y);
}
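Putting the formulas together, a complete version of that loop could look like this (my own untested sketch; here n = points.size() - 1, i.e. the degree, matching the convention above):
#include <QPointF>
#include <QVector>
#include <cmath>

// binomial coefficient C(n, i)
static double binomial(int n, int i) {
    double r = 1.0;
    for (int k = 1; k <= i; ++k)
        r *= double(n - i + k) / k;
    return r;
}

// Bernstein polynomial B_i,n(t) = C(n,i) * t^i * (1-t)^(n-i)
static double bernstein(int n, int i, double t) {
    return binomial(n, i) * std::pow(t, i) * std::pow(1.0 - t, n - i);
}

// evaluate the Bézier curve defined by the control points
QVector<QPointF> bezierCurve(const QVector<QPointF> &points, int steps = 100) {
    QVector<QPointF> result;
    const int n = points.size() - 1;          // degree of the curve
    for (int s = 0; s <= steps; ++s) {
        double t = double(s) / steps;         // t runs from 0 to 1
        double x = 0.0, y = 0.0;
        for (int i = 0; i <= n; ++i) {
            double b = bernstein(n, i, t);
            x += b * points[i].x();           // formula for x(t)
            y += b * points[i].y();           // formula for y(t)
        }
        result << QPointF(x, y);
    }
    return result;
}
The resulting QVector<QPointF> can then be passed to QwtPlotCurve::setSamples().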
I hope it wasn't too complicated. If you didn't understand this short explanation, you can look it up in the Handbook of Mathematics; in the sixth edition it is on pages 1000 to 1001.
ISBN: 978-3-662-46220-1

Curvature Scale Space corner detection algorithm. Arc Length Parameter?

I'm studying the CSS algorithm and I can't get the hang of the concept of the 'arc length parameter'.
According to the literature, the planar curve is Gamma(u) = (x(u), y(u)), and they say this u is the arc length parameter; apparently the Gaussian kernel g is also parameterized by this u.
Stop me if I got something wrong, but aren't x and y the location of a pixel? How can they be represented by another parameter?
I had no idea when I first saw it in the literature, so I looked up the code, and apparently that puzzled me even more.
Here is the relevant portion of the code:
void getGaussianDerivs(double sigma, int M, vector<double>& gaussian,
                       vector<double>& dg, vector<double>& d2g) {
    int L = (M - 1) / 2;
    double sigma_sq = sigma * sigma;
    double sigma_quad = sigma_sq*sigma_sq;
    dg.resize(M); d2g.resize(M); gaussian.resize(M);
    Mat_<double> g = getGaussianKernel(M, sigma, CV_64F);
    for (double i = -L; i < L+1.0; i += 1.0) {
        int idx = (int)(i+L);
        gaussian[idx] = g(idx);
        // from http://www.cedar.buffalo.edu/~srihari/CSE555/Normal2.pdf
        dg[idx] = (-i/sigma_sq) * g(idx);
        d2g[idx] = (-sigma_sq + i*i)/sigma_quad * g(idx);
    }
}
So it seems the code builds a simple 1D Gaussian kernel with aperture size M and computes its 1st and 2nd derivatives. As far as I know, a 1D Gaussian kernel has a parameter x, which is a horizontal coordinate, and sigma, which is the scale. It looks like the 'arc length parameter u' plays the role of that variable x. That doesn't make any sense to me, because later in the code these kernels are convolved directly with the x and y sequences of the contour.
So what is this u?
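To make my reading of it concrete, this is roughly how I understand these kernels being used downstream (my own sketch, not the original code; the curvature formula is the standard kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)):
#include <vector>
#include <cmath>
#include <algorithm>

// simple 1D filtering with border clamping (kernel of odd length)
static std::vector<double> filter1d(const std::vector<double> &f,
                                    const std::vector<double> &k) {
    int L = ((int)k.size() - 1) / 2;
    std::vector<double> out(f.size(), 0.0);
    for (int u = 0; u < (int)f.size(); ++u)
        for (int j = -L; j <= L; ++j) {
            int idx = std::min(std::max(u + j, 0), (int)f.size() - 1);
            out[u] += f[idx] * k[j + L];
        }
    return out;
}

// curvature of the contour (x[u], y[u]) using dg and d2g from getGaussianDerivs
std::vector<double> curvature(const std::vector<double> &x,
                              const std::vector<double> &y,
                              const std::vector<double> &dg,
                              const std::vector<double> &d2g) {
    std::vector<double> xu  = filter1d(x, dg),  yu  = filter1d(y, dg);
    std::vector<double> xuu = filter1d(x, d2g), yuu = filter1d(y, d2g);
    std::vector<double> kappa(x.size());
    for (size_t u = 0; u < x.size(); ++u)
        kappa[u] = (xu[u] * yuu[u] - xuu[u] * yu[u]) /
                   std::pow(xu[u] * xu[u] + yu[u] * yu[u], 1.5);
    return kappa;
}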
PS: since I already replied to the person who tried to answer my question, I think I should update the question instead, so here we go.
What confuses me is how this parameter u is implemented in code. I think I understood the full code above (of course, I only inserted a portion of it), but the problem is that I have no idea what it would look like for the 'improved' version of the algorithm. It says it uses an 'affine length parameter' instead of this 'arc length parameter', and I'm not sure how to turn that concept into code.
According to the literature, the main difference between the arc length parameter and the affine length parameter is the sampling interval: the arc length parameter uses 1 for the vertical and horizontal directions and the square root of 2 for the diagonal direction. That makes sense, since the portion of code above uses a for loop with a step of 1 to compute the 1st and 2nd derivatives of the 1D Gaussian. But how would that work with a different, varying interval? Is it possible that I simply can't use a plain for loop for it?
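To pin down the part I do understand: this is how I picture the arc length parameter u being accumulated along a pixel contour (my own sketch; the affine length version with its different increment is exactly the part I can't figure out):
#include <opencv2/core.hpp>
#include <vector>
#include <cmath>
#include <cstdlib>

// cumulative arc length along a contour of neighbouring pixels:
// 1 per horizontal/vertical step, sqrt(2) per diagonal step
std::vector<double> arcLengthParam(const std::vector<cv::Point> &contour) {
    std::vector<double> u(contour.size(), 0.0);
    for (size_t i = 1; i < contour.size(); ++i) {
        cv::Point d = contour[i] - contour[i - 1];
        double step = (std::abs(d.x) + std::abs(d.y) == 2) ? std::sqrt(2.0) : 1.0;
        u[i] = u[i - 1] + step;
    }
    return u;
}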

Calculate mean for vector of points

I have a vector of 2-dimensional points in OpenCV:
std::vector<cv::Point2f> points;
I would like to calculate the mean values for x and y coordinates in points. Something like:
cv::Point2f mean_point; //will contain mean values for x and y coordinates
mean_point = some_function(points);
This would be simple in Matlab. But I'm not sure if I can utilize some high level OpenCV functions to accomplish the same. Any suggestions?
InputArray does a good job here. You can simply call
cv::Mat mean_;
cv::reduce(points, mean_, 0, CV_REDUCE_AVG); // dim = 0: reduce to a single row holding the average point
// convert from Mat to Point - there may be even a simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));
Details:
In the newer OpenCV versions, the InputArray data type is introduced. This way, one can pass to an OpenCV function either matrices (cv::Mat) or vectors. A vector<Vec3f> will be interpreted as a float matrix with three channels, one column, and the number of rows equal to the vector size. Because no data is copied, this transparent conversion is very fast.
The advantage is that you can work with whatever data type fits better in your app, while you can still use OpenCV functions to ease mathematical operations on it.
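Along the same lines, I believe cv::mean also accepts the vector directly through InputArray, which makes this a two-liner (untested sketch):
cv::Scalar m = cv::mean(points);                    // per-channel mean: m[0] = mean x, m[1] = mean y
cv::Point2f mean_point((float)m[0], (float)m[1]);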
Since OpenCV's Point_ already defines operator+, this should be fairly simple. First we sum the values:
cv::Point2f zero(0.0f, 0.0f);
cv::Point2f sum = std::accumulate(points.begin(), points.end(), zero);
Then we divide to get the average:
Point2f mean_point(sum.x / points.size(), sum.y / points.size());
...or we could use Point_'s operator*:
Point2f mean_point(sum * (1.0f / points.size()));
Unfortunately, at least as far as I can see, Point_ doesn't define operator /, so we need to multiply by the inverse instead of dividing by the size.
You can use stl's std::accumulate as follows:
cv::Point2f sum = std::accumulate(
points.begin(), points.end(), // Run from begin to end
cv::Point2f(0.0f,0.0f), // Initialize with a zero point
std::plus<cv::Point2f>() // Use addition for each point (default)
);
cv::Point2f mean = sum / (float)points.size(); // Divide by count to get mean
Add them all up and divide by the total number of points.
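For example (untested):
cv::Point2f sum(0.0f, 0.0f);
for (size_t i = 0; i < points.size(); i++)
    sum += points[i];                              // add them all up
cv::Point2f mean_point(sum.x / points.size(), sum.y / points.size()); // divide by the count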

Opencv - Getting Pixel Coordinates from Feature Matching

Can anyone help me? I want to get the x and y coordinates of the best pixels the feature matcher selects in the code provided, using C++ with OpenCV.
http://opencv.itseez.com/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-flann-matcher
Been looking around, but can't get anything to work.
Any help is greatly appreciated!
The DMatch class gives you the distance between the two matching KeyPoints (train and query), so the best pairs detected should have the smallest distance. The tutorial keeps all matches whose distance is less than 2*(minimum pair distance) and considers those the best.
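That filtering step looks roughly like this (a sketch following the tutorial's naming, where matches is the std::vector<DMatch> returned by the matcher and min_dist starts at an arbitrarily large value):
double min_dist = 100;
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance < min_dist)
        min_dist = matches[i].distance;

std::vector<DMatch> good_matches;
for (size_t i = 0; i < matches.size(); i++)
    if (matches[i].distance < 2 * min_dist)
        good_matches.push_back(matches[i]);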
So, to get the (x, y) coordinates of the best matches, you should use good_matches (which is a list of DMatch objects) to look up the corresponding indices in the two KeyPoint vectors (keypoints_1 and keypoints_2). Something like:
for(size_t i = 0; i < good_matches.size(); i++)
{
    Point2f point1 = keypoints_1[good_matches[i].queryIdx].pt;
    Point2f point2 = keypoints_2[good_matches[i].trainIdx].pt;
    // do something with the best points...
}

2D Discrete laplacian (del2) in C++

I am trying to figure out how to port the del2() function in Matlab to C++.
I have a couple of masks that I am working with that consist of ones and zeros, so I wrote code like this:
for(size_t i = 1; i < nmax-1; i++)
{
    for(size_t j = 1; j < nmax-1; j++)
    {
        transmask[i*nmax+j] = .25*(posmask[(i+1)*nmax + j] + posmask[(i-1)*nmax+j] + posmask[i*nmax+(j+1)] + posmask[i*nmax+(j-1)]);
    }
}
This is meant to compute the interior points of the Laplacian. According to "doc del2" in Matlab, the boundary conditions just use the available information, right? So I guess I just need to write special cases for the borders at i, j = 0 and i, j = nmax-1.
However, I would think the values from the code posted here would be correct for the interior points as is, but the del2 results are different!
I dug through the del2 source, and I guess I am not enough of a Matlab wizard to figure out what is going on in the code for the interior computation.
You can see the code of del2 by edit del2 or type del2.
Note that del2 does cubic interpolation on the boundaries.
The problem is that the line you have there:
transmask[i*nmax+j] = .25*(posmask[(i+1)*nmax + j]+posmask[(i-1)*nmax+j]+posmask[i*nmax+(j+1)]+posmask[i*nmax+(j-1)]);
isn't the discrete Laplacian at all.
What you have is (I(i+1,j) + I(i-1,j) + I(i,j+1) + I(i,j-1) ) / 4
I don't know what this mask is, but the discrete Laplacian (assuming the spacing between each pixel in each dimension is 1) is:
(-4 * I(i,j) + I(i+1,j) + I(i-1,j) + I(i,j+1) + I(i,j-1) )
So basically, you missed a term, and you don't need to divide by 4. I suggest going back and rederiving the discrete Laplacian from its definition, which is the second x derivative of the image plus the second y derivative of the image.
Edit: I see where you got the /4 from, as Matlab uses this definition for some reason (even though this isn't standard mathematically).
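So the interior loop from the question would become something like this (untested sketch; the leading 0.25 is kept only so the result matches Matlab's del2 scaling mentioned above):
for (size_t i = 1; i < nmax - 1; i++)
{
    for (size_t j = 1; j < nmax - 1; j++)
    {
        transmask[i*nmax + j] = 0.25 * ( posmask[(i+1)*nmax + j] + posmask[(i-1)*nmax + j]
                                       + posmask[i*nmax + (j+1)] + posmask[i*nmax + (j-1)]
                                       - 4.0 * posmask[i*nmax + j] );   // the previously missing centre term
    }
}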
I think that with the Matlab Compiler you can convert the M code into C code. Have you tried that?
I found this link where another method to convert to C is explained:
http://www.kluid.com/mlib/viewtopic.php?t=337
Good luck.