Which Euler angle order does this code result in? - C++

Using the mathematics library GLM, I use this code to combine the Euler angle rotations into a single rotation matrix.
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;
mat4 matrix = rotate(mat4(1), X, vec3(1, 0, 0))
* rotate(mat4(1), Y, vec3(0, 1, 0))
* rotate(mat4(1), Z, vec3(0, 0, 1));
Does this result in an Euler angle sequence of XYZ or ZYX? I am not sure, since matrix multiplication does not behave the same as scalar multiplication.

Remember that matrix math in OpenGL uses a notation known as column vectors (http://en.wikipedia.org/wiki/Column_vector). So, any point transformation is expressed by a system of linear equations, written in column-vector notation like this:
[P'] = M.[P], where M = M1.M2.M3
This means that the first transformation applied to the points, expressed by the vector [P], is M3, then M2, and finally M1.
Answering your question, the resulting Euler angle sequence is ZYX, since the Z rotation is the last matrix you write in the product and is therefore the first one applied to the points.
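A small sketch to make the order of application concrete (my own illustration, not part of the original answer; the test angles and point are arbitrary placeholders, and it assumes a current GLM build where glm::rotate takes radians):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    using namespace glm;
    const float X = 0.3f, Y = 0.7f, Z = 1.1f; // arbitrary test angles (radians)

    mat4 combined = rotate(mat4(1), X, vec3(1, 0, 0))
                  * rotate(mat4(1), Y, vec3(0, 1, 0))
                  * rotate(mat4(1), Z, vec3(0, 0, 1));

    vec4 p(0.2f, 0.5f, 0.8f, 1.0f);

    // Step by step: the Z rotation sits closest to the vector, so it acts first,
    // then Y, then X.
    vec4 stepwise = rotate(mat4(1), X, vec3(1, 0, 0))
                  * (rotate(mat4(1), Y, vec3(0, 1, 0))
                  * (rotate(mat4(1), Z, vec3(0, 0, 1)) * p));

    vec4 d = combined * p - stepwise;
    printf("difference: %g %g %g\n", d.x, d.y, d.z); // ~0 up to float rounding
    return 0;
}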

Related

Rotation accuracy error builds up too fast?

When applying rotations one after another, precision errors accumulate.
But I am surprised by how fast the error builds up.
In this example I am comparing 2 transformations that are equivalent in theory.
In practice I get 0.02 degrees error by doing just 2 rotations instead of one.
I was expecting the error to be lower.
Is there a way to make the results of these two transformations closer, other than using double-precision variables?
#include <glm/gtx/rotate_vector.hpp>
#include <cmath>
#include <cstdio>

double RadToDeg(double rad)
{
    return rad * 180.0 / M_PI;
}

const glm::vec3 UP(0, 0, 1);

void CompareRotations()
{
    glm::vec3 v0 = UP;
    glm::vec3 v1 = glm::normalize(glm::vec3(0.0491, 0.0057, 0.9987));
    glm::vec3 v2 = glm::normalize(glm::vec3(0.0493, 0.0057, 0.9987));

    // axes of the two incremental rotations and of the single global rotation
    glm::vec3 axis_0_to_1 = glm::cross(v0, v1);
    glm::vec3 axis_1_to_2 = glm::cross(v1, v2);
    glm::vec3 axis_global = glm::cross(v0, v2);

    float angle_0_to_1 = RadToDeg(acos(glm::dot(v0, v1)));
    float angle_1_to_2 = RadToDeg(acos(glm::dot(v1, v2)));
    float angle_global = RadToDeg(acos(glm::dot(v0, v2)));

    // two rotations in sequence
    glm::vec3 v_step = UP;
    v_step = glm::rotate(v_step, angle_0_to_1, axis_0_to_1);
    v_step = glm::rotate(v_step, angle_1_to_2, axis_1_to_2);

    // one equivalent global rotation
    glm::vec3 v_glob = UP;
    v_glob = glm::rotate(v_glob, angle_global, axis_global);

    float angle = RadToDeg(acos(glm::dot(v_step, v_glob)));
    if (angle > 0.01)
    {
        printf("error");
    }
}
If you just want to continue rotating along the same axis, then it would probably be best to just increment the rotation angle around that axis and recompute a new matrix from that angle every time. Note that you can directly compute a matrix for rotation around an arbitrary axis. Building rotations from Euler Angles, for example, is generally neither necessary nor a great solution (singularities, numerically not ideal, behavior not very intuitive). There is an overload of glm::rotate() that takes an axis and an angle that you could use for that.
If you really have to concatenate many arbitrary rotations around arbitrary axes, then using Quaternions to represent your rotations would potentially be numerically more stable. Since you're already using GLM, you could just use the quaternions in there. You might find this tutorial useful.
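As a rough sketch of what the quaternion route might look like (my own illustration, not from the question's code; note that glm::angleAxis expects the angle in radians and a unit-length axis, unlike the degree-based code above, and the helper function here is hypothetical):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch: accumulate rotations as quaternions and renormalize the result
// to counteract round-off drift before applying it to a vector.
glm::vec3 rotate_stepwise(const glm::vec3& v,
                          float angle_0_to_1, const glm::vec3& axis_0_to_1,
                          float angle_1_to_2, const glm::vec3& axis_1_to_2)
{
    glm::quat q1 = glm::angleAxis(angle_0_to_1, glm::normalize(axis_0_to_1));
    glm::quat q2 = glm::angleAxis(angle_1_to_2, glm::normalize(axis_1_to_2));

    glm::quat q = glm::normalize(q2 * q1); // q2 applied after q1
    return q * v;                          // rotate the vector by the accumulated quaternion
}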
Floating-point multiplication isn't as precise as you think, and every time you multiply two floating-point numbers you lose precision -- quite rapidly, as you have discovered.
Generally you want to store your transforms not as the result matrix, but as the steps required to get that matrix; for example, if you are doing only a single-axis transform, you store your transform as the angle and recompute the matrix each time. However, if multiple axes are involved, this gets very complicated very quickly.
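A tiny sketch of the single-axis case (my own illustration, assuming GLM with angles in radians): keep the angle as the source of truth and rebuild the matrix from it each time, instead of multiplying incremental rotations into an accumulated matrix.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct SingleAxisRotation {
    glm::vec3 axis  = glm::vec3(0, 0, 1); // fixed rotation axis
    float     angle = 0.0f;               // accumulated angle in radians

    void step(float delta) { angle += delta; }   // only the angle accumulates, not round-off
    glm::mat4 matrix() const {
        return glm::rotate(glm::mat4(1.0f), angle, axis); // rebuilt fresh every time
    }
};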
Another approach is to use an underlying representation of the transform that can itself be transformed precisely. Quaternions are very popular for this (per Michael Kenzel's answer), but another approach that can be easier to visualize is to use a pair of vectors that represent the transform in a way that you can reconstitute a normalized matrix. For example, you can think of your rotation as a pair of vectors, forward and up. From this you can compute your transformation matrix with e.g.:
z_axis = normalize(forward);
x_axis = normalize(cross(up, forward));
y_axis = normalize(cross(forward, x_axis));
and then you build your transform matrix from these vectors; given those axes and a pos for your position the (column-major) OpenGL matrix will be:
{ x_axis.x, x_axis.y, x_axis.z, 0,
y_axis.x, y_axis.y, y_axis.z, 0,
z_axis.x, z_axis.y, z_axis.z, 0,
pos.x, pos.y, pos.z, 1 }
Similarly, you can renormalize a transform matrix by extracting the Z and Y vectors from your matrix as direction and up, respectively, and reconstructing a new matrix from them.
This does take a lot more computational complexity than using quaternions, but I find it much easier to wrap my head around.
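A minimal GLM sketch of that renormalization step (my own illustration; the column layout matches the matrix above and the cross-product order matches the three pseudocode lines):
#include <glm/glm.hpp>

// Sketch: rebuild an orthonormal rotation part from the matrix's Z (forward)
// and Y (up) columns, leaving the translation untouched.
glm::mat4 renormalize(const glm::mat4& m)
{
    glm::vec3 forward(m[2]); // Z column = direction
    glm::vec3 up(m[1]);      // Y column = up

    glm::vec3 z_axis = glm::normalize(forward);
    glm::vec3 x_axis = glm::normalize(glm::cross(up, forward));
    glm::vec3 y_axis = glm::normalize(glm::cross(forward, x_axis));

    glm::mat4 out(1.0f);
    out[0] = glm::vec4(x_axis, 0.0f);
    out[1] = glm::vec4(y_axis, 0.0f);
    out[2] = glm::vec4(z_axis, 0.0f);
    out[3] = m[3]; // position column, unchanged
    return out;
}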

Irrlicht: draw 2D image in 3D space based on four corner coordinates

I would like to create a function to position a free-floating 2D raster image in space with the Irrlicht engine. The inspiration for this is the function rgl::show2d in the R package rgl. An example implementation in R can be found here.
The input data should be limited to the path to the image and a table with the four corner coordinates of the respective plot rectangle.
My first, rather primitive and ultimately unsuccessful, approach to realizing this with Irrlicht:
Create a cube:
ISceneNode * picturenode = scenemgr->addCubeSceneNode();
Flatten one side:
picturenode->setScale(vector3df(1, 0.001, 1));
Add image as texture:
picturenode->setMaterialTexture(0, driver->getTexture("path/to/image.png"));
Place flattened cube at the center position of the four corner coordinates. I just calculate the mean coordinates on all three axes with a small function position_calc().
vector3df position = position_calc(rcdf);
picturenode->setPosition(position);
Determine the object rotation by calculating the normal of the plane defined by the four corner coordinates, normalizing the result and trying to somehow translate the resulting vector to rotation angles.
vector3df normal = normal_calc(rcdf);
vector3df angles = (normal.normalize()).getSphericalCoordinateAngles();
picturenode->setRotation(angles);
This solution doesn't produce the expected result. The rotation calculation is wrong. With this approach I'm also not able to scale the image correctly to its corner coordinates.
How can I fix my workflow? Or is there a much better way to achieve this with Irrlicht that I'm not aware of?
Edit: Thanks to @spug I believe I'm almost there. I tried to implement his method 2, because quaternions are already available in Irrlicht. Here's what I came up with to calculate the rotation:
#include <Rcpp.h>
#include <irrlicht.h>
#include <math.h>

using namespace Rcpp;

core::vector3df rotation_calc(DataFrame rcdf) {

    NumericVector x = rcdf["x"];
    NumericVector y = rcdf["y"];
    NumericVector z = rcdf["z"];

    // Z-axis
    core::vector3df zaxis(0, 0, 1);

    // resulting image's normal
    core::vector3df normal = normal_calc(rcdf);

    // calculate the rotation from the original image's normal (i.e. the Z-axis)
    // to the resulting image's normal => quaternion P.
    core::quaternion p;
    p.rotationFromTo(zaxis, normal);

    // take the midpoint of AB from the diagram in method 1, and rotate it with
    // the quaternion P => vector U.
    core::vector3df MAB(0, 0.5, 0);
    core::quaternion m(MAB.X, MAB.Y, MAB.Z, 0);
    core::quaternion rot = p * m * p.makeInverse();
    core::vector3df u(rot.X, rot.Y, rot.Z);

    // calculate the rotation from U to the midpoint of DE => quaternion Q
    core::vector3df MDE(
        (x(0) + x(1)) / 2,
        (y(0) + y(1)) / 2,
        (z(0) + z(1)) / 2
    );
    core::quaternion q;
    q.rotationFromTo(u, MDE);

    // multiply in the order Q * P, and convert to Euler angles
    core::quaternion f = q * p;
    core::vector3df euler;
    f.toEuler(euler);

    // to degrees
    core::vector3df degrees(
        euler.X * (180.0 / M_PI),
        euler.Y * (180.0 / M_PI),
        euler.Z * (180.0 / M_PI)
    );

    Rcout << "degrees: " << degrees.X << ", " << degrees.Y << ", " << degrees.Z << std::endl;

    return degrees;
}
The result is almost correct, but the rotation on one axis is wrong. Is there a way to fix this or is my implementation inherently flawed?
That's what the result looks like now. The points mark the expected corner points.
I've thought of two ways to do this; neither is very graceful, not helped by Irrlicht restricting us to spherical polar angles.
NB. the below assumes rcdf is centered at the origin; this is to make the rotation calculation a bit more straightforward. Easy to fix though:
Compute the center point (the translational offset) of rcdf
Subtract this from all the points of rcdf
Perform the procedures below
Add the offset back to the result points.
Pre-requisite: scaling
This is easy; simply calculate the ratios of width and height of your rcdf rectangle to your original image, then call setScale.
Method 1: matrix inversion
For this we need an external library which supports 3x3 matrices, since Irrlicht only has 4x4 (I believe).
We need to solve the matrix equation which rotates the image from X-Y to rcdf. For this we need 3 points in each frame of reference. Two of these we can immediately set to adjacent corners of the image; the third must point out of the plane of the image (since we need data in all three dimensions to form a complete basis) - so to calculate it, simply multiply the normal of each image by some offset constant (say 1).
(Note the points on the original image have been scaled)
The equation to solve is therefore R * A = B (using column notation), where the columns of A are the three points in the image's own frame, the columns of B are the corresponding rcdf points, and R is the rotation we want; hence R = B * A^-1. The Eigen library offers an implementation of 3x3 matrices and their inverse.
Then convert this matrix to spherical polar angles: https://www.learnopencv.com/rotation-matrix-to-euler-angles/
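A rough Eigen sketch of that solve (my own illustration; how A and B are filled from the image corners and rcdf is up to you):
#include <Eigen/Dense>

// Sketch of method 1: the columns of A are three reference points in the
// image's own X-Y frame (two adjacent corners plus a point offset along the
// image normal); the columns of B are the corresponding rcdf points
// (centered at the origin). Solving R * A = B gives the rotation.
Eigen::Matrix3d solve_rotation(const Eigen::Matrix3d& A, const Eigen::Matrix3d& B)
{
    return B * A.inverse();
}
Eigen can also extract the three angles directly via MatrixBase::eulerAngles, though the axis convention has to be matched to what Irrlicht's setRotation expects.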
Method 2:
To calculate the quaternion to rotate from direction vector A to B: Finding quaternion representing the rotation from one vector to another
Calculate the rotation from the original image's normal (i.e. the Z-axis) to rcdf's normal => quaternion P.
Take the midpoint of AB from the diagram in method 1, and rotate it with the quaternion P (http://www.geeks3d.com/20141201/how-to-rotate-a-vertex-by-a-quaternion-in-glsl/) => vector U.
Calculate the rotation from U to the midpoint of DE => quaternion Q
Multiply in the order Q * P, and convert to Euler angles: https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
(Not sure if Irrlicht has support for quaternions)

Trouble Implementing Rodrigues' rotation formula in C++

I'm trying to implement a function that takes two geometry vectors in 3D space and returns a rotation matrix that rotates the first vector to the second vector. My function currently uses Rodrigues' rotation formula to create a matrix, but my implementation of this formula gives the wrong answer for some inputs. I checked the math by hand for one test that gave an incorrect result, and my work gave the same result.
Here is the code for my function:
Matrix3d rotation_matrix(Vector3d vector0, Vector3d vector1)
{
    vector0.normalize();
    vector1.normalize();

    // vector orthogonal to both inputs
    Vector3d u = vector0.cross(vector1);
    if (!u.norm())
    {
        if (vector0 == vector1)
            return Matrix3d::Identity();

        // return rotation matrix that represents 180 degree rotation
        Matrix3d m1;
        m1 << -1, 0, 0,
               0,-1, 0,
               0, 0, 1;
        return m1;
    }

    /* For the angle between both inputs:
     * 1) The sine is the magnitude of their cross product.
     * 2) The cosine equals their dot product.
     */

    // sine must be calculated using original cross product
    double sine = u.norm();
    double cosine = vector0.dot(vector1);

    u.normalize();
    double ux = u[0];
    double uy = u[1];
    double uz = u[2];

    Matrix3d cross_product_matrix;
    cross_product_matrix <<   0, -uz,  uy,
                             uz,   0, -ux,
                            -uy,  ux,   0;

    Matrix3d part1 = Matrix3d::Identity();
    Matrix3d part2 = cross_product_matrix * sine;
    Matrix3d part3 = cross_product_matrix * cross_product_matrix * (1 - cosine);

    return part1 + part2 + part3;
}
I use the Eigen C++ library for linear algebra (available here):
http://eigen.tuxfamily.org/index.php?title=Main_Page
Any help is appreciated. Thanks.
A one-liner version uses Eigen's Quaternion:
return Matrix3d(Quaterniond::FromTwoVectors(v0,v1));
If you want to rotate from one vector to another, just use the built-in Eigen::Quaternion::setFromTwoVectors:
http://eigen.tuxfamily.org/dox/classEigen_1_1QuaternionBase.html#ac35460294d855096e9b687cadf821452
It does exactly what you need, and the implementation is much faster. Then you can call Eigen::Quaternion::toRotationMatrix to convert to a matrix. Both operations are fast, and probably faster than the direct Rodrigues formula.

rotate a Vector to reach orthogonality with another vector

I have two vectors (V1{x1, y1, z1}, V2{x2, y2, z2}), and I want to rotate V1 around the X-axis, Y-axis and Z-axis so that it becomes parallel to V2. I want to find the 3 rotation angles.
Is there any general formula I can use to find them?
I would do that in this way:
A = V1xV2; //Cross product, this gives the axis of rotation
sin_angle = length(A)/( |V1| |V2|); //sine of the angle between vectors
angle = asin(sin_angle);
A_n = normalize(A);
Now you can build a quaternion with angle and A_n.
q = (A_n.x i + A_n.y j + A_n.z k)*sin(angle/2) + cos(angle/2);
And use the standard quaternion-to-Euler conversion formulas to get your Euler angles.
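A GLM-based sketch of that recipe (my own illustration, not part of the original answer; it uses atan2 of the cross- and dot-product values instead of asin so obtuse angles are handled too, and glm::eulerAngles for the final conversion, whose pitch/yaw/roll convention should be checked against your needs):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cmath>

// Sketch: Euler angles (radians) of the rotation taking v1 onto v2.
// Note: parallel or opposite inputs need special-casing (the axis is ~zero).
glm::vec3 euler_from_two_vectors(glm::vec3 v1, glm::vec3 v2)
{
    v1 = glm::normalize(v1);
    v2 = glm::normalize(v2);

    glm::vec3 a = glm::cross(v1, v2);                            // rotation axis (unnormalized)
    float angle = std::atan2(glm::length(a), glm::dot(v1, v2));  // robust for obtuse angles

    glm::quat q = glm::angleAxis(angle, glm::normalize(a));      // axis must be unit length
    return glm::eulerAngles(q);                                  // (pitch, yaw, roll) in radians
}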
Do you really need the rotation angles, or is it a rotation matrix you're looking for? If the latter, you can do it the way it's done in OpenFOAM: http://github.com/OpenFOAM/OpenFOAM-2.1.x/blob/master/src/OpenFOAM/primitives/transform/transform.H#L45
Note that in OpenFOAM, for vectors, the & operator denotes the inner product, the ^ operator the cross product, and * the outer product. The sqr function computes the element-wise squares, and magSqr the squared magnitude of a vector (i.e. v&v).

Finding eigenvectors of covariance matrix to create 3D bounding sphere

I'm currently in the process of writing a function to find an "exact" bounding-sphere for a set of points in 3D space. I think I have a decent understanding of the process so far, but I've gotten stuck.
Here's what I'm working with:
A) Points in 3D space
B) 3x3 covariance matrix stored in a 4x4 matrix class (referenced by cells m0, m1, m2, m3, m4, etc., instead of rows and cols)
I've found the 3 eigenvalues for the covariance matrix of the points, and I've set up a function to convert a matrix to reduced row echelon form (rref) via Gaussian elimination.
I've tested both of those functions against figures in examples I've found online, and they appear to be working correctly.
The next step is to find the eigenvectors using the equation:
(M - λ*I)*V
... where M is the covariance matrix, λ is one of the eigenvalues, I is the identity matrix, and V is the eigenvector.
However, I don't seem to be constructing the 4x3 matrix correctly before rref'ing it, as the far-right column, where the eigenvector components should end up, is 0 both before and after running rref. I understand why it is zero afterwards (without any constants, the trivial solution to a linear system of equations is all zeros), but I'm at a loss as to what to put there.
Here's the function so far:
Vect eigenVector(const Matrix &M, const float eval) {
    Matrix A = Matrix(M);
    A -= Matrix(IDENTITY) * eval;
    A.rref();
    return Vect(A[m3], A[m7], A[m11]);
}
The 3x3 covariance matrix is passed as M, and the eigenvalue as eval. Matrix(IDENTITY) returns an identity matrix. m3,m7, and m11 correspond to the far-right column of a 4x3 matrix.
Here's the example 3x3 matrix (stored in a 4x4 matrix class) I'm using to test the functions:
Matrix(1.5f,  0.5f,  0.75f, 0,
       0.5f,  0.5f,  0.25f, 0,
       0.75f, 0.25f, 0.5f,  0,
       0,     0,     0,     0);
I'm correctly (?) getting the eigenvalues of 2.097, 0.3055, 0.09756 from my other function.
eigenVector() above correctly subtracts the passed eigenvalue from the diagonal (0,0 1,1 2,2)
Matrix A after rref():
[(1, 0, 0, -0),
(-0, 1, 0, -0),
(-0, -0, 1, -0),
(0, 0, 0, -2.09694)]
For the rref() function, I'm using a translated python function found here:
http://elonen.iki.fi/code/misc-notes/python-gaussj/index.html
What should the matrix I pass to rref() look like to get an eigenvector out?
Thanks
(M - λI)V is not an equation, it's just an expression. However, (M - λI)V = 0 is. And it's the equation that relates eigenvectors to eigenvalues.
So assuming your rref function works, I would imagine that you create an augmented matrix as [(M - λI) | 0], where 0 denotes a zero-vector. This sounds like what you're doing already, so I would have to assume that your rref function is broken. Or alternatively, it doesn't know how to handle 4x4 matrices (as opposed to 4x3 matrices, which is what it would expect for an augmented matrix).
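For comparison, with an off-the-shelf library the null-space step disappears entirely: a covariance matrix is symmetric, so Eigen's SelfAdjointEigenSolver returns all eigenvalues and unit eigenvectors directly (a sketch, assuming Eigen is an option rather than the question's custom Matrix class):
#include <Eigen/Dense>

// Sketch: eigendecomposition of a symmetric 3x3 covariance matrix.
void eigen_decompose(const Eigen::Matrix3d& covariance)
{
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(covariance);
    Eigen::Vector3d eigenvalues  = solver.eigenvalues();   // ascending order
    Eigen::Matrix3d eigenvectors = solver.eigenvectors();  // columns are unit eigenvectors
    // eigenvectors.col(2) corresponds to the largest eigenvalue.
}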
Ah, with a few more hours of grueling research, I've managed to solve my problem.
The issue is that there is no "one" set of eigenvectors but rather an infinite number with varying magnitudes.
The method I chose was to use a REF (row echelon form) instead of RREF, leaving enough information in the matrix to allow me to substitute in an arbitrary value for z, and work backwards to solve for y and x. I then normalized the vector to get a unit eigenvector, which should work for my purposes.
My final code:
Vect eigenVector(const Matrix &M, const float eVal) {
    Matrix A = Matrix(M);
    A -= Matrix(IDENTITY) * eVal;
    A.ref();

    float K = 16;                      // Arbitrary value
    float J = -K * A[m6];              // Substitute in K to find J
    float I = -K * A[m2] - J * A[m1];  // Substitute in K and J to find I

    Vect eVec = Vect(I, J, K);
    eVec.norm();                       // Normalize eigenvector
    return eVec;
}
The only oddity is that the eigenvectors come out facing the opposite direction from what I expected (they were negated!), but since an eigenvector is only defined up to scale, including sign, that's a moot point.