Reversing RotateAxisAngle back to angles - C++

I'm trying to figure out how to reverse RotateAxisAngle to get back the rotations around these arbitrary axes (or equivalent rotations that yield the same net rotation; they don't have to be identical). Does anyone know how to do it? I'm using MathGeoLib, but I don't see an inverse operation that returns the angles about the axes when all you have is the matrix.
Here's the forward direction code (RotateAxisAngle is from MathGeoLib):
float4x4 matrixRotation = float4x4::RotateAxisAngle(axisX, ToRadian(rotation.x));
matrixRotation = matrixRotation * float4x4::RotateAxisAngle(axisY, ToRadian(rotation.y));
matrixRotation = matrixRotation * float4x4::RotateAxisAngle(axisZ, ToRadian(rotation.z));
Now I want to get back the degrees about these arbitrary axes, in the same order (well, pull off Z, then Y, then X), so that applying them again in the forward direction would yield the same net rotation.
Here's the sample data corresponding to that set of rotations, if it helps with reversing back to it:
axisX: x 0.80878228, y -0.58810818, z 0.00000000
rotation about that axis: 30.000000 degrees
axisY: x 0.58811820, y 0.80877501, z 0.00000000
rotation about that axis: 60.000000 degrees
axisZ: x 0.00000000, y 0.00000000, z 1.0000000
rotation about that axis: 40.000000 degrees
That forms this matrix, which is stored to a file; I need to retrieve the rotations about the above axes from it (without any info about the rotations originally used):
[4][4]
 0.65342271  -0.51652151   0.55339342   0.00000000
 0.69324547   0.11467478  -0.71151978   0.00000000
 0.30405501   0.84856069   0.43300733   0.00000000
 0.00000000   0.00000000   0.00000000   1.0000000

OK, I'm going to take another stab at this. My first answer was for XYZ order of rotations. This answer is for ZYX order, now that I know more about how MathGeoLib works.
MathGeoLib represents position vectors as column vectors v = [x y z 1]^T where ^T is the transpose operator which flips rows to columns (and vice versa). Rotation matrices pre-multiply column vectors. So if we have a matrix Rx(s) representing a rotation around the x-axis by s degrees, then a rotation Ry(t) representing a rotation about the y-axis by t degrees, then rotation Rz(u) representing a rotation about the z-axis by u degrees, and we combine them and multiply with v as Rx(s) Ry(t) Rz(u) v, we're actually applying the z rotation first. But we can still work out the angles from the combined matrix, it's just that the formulas are going to be different from the more common XYZ order.
We have the upper left blocks of the rotation matrices as follows. (The fourth row and column are all 0s except for the diagonal element, which is 1; that never changes in the calculations that follow, so we can safely ignore it.) MathGeoLib appears to use left-handed coordinates, so the rotation matrices are:
Rx(s) = [ 1      0       0    ]
        [ 0    cos s  -sin s  ]
        [ 0    sin s   cos s  ]

Ry(t) = [ cos t   0   sin t ]
        [   0     1     0   ]
        [-sin t   0   cos t ]

Rz(u) = [ cos u  -sin u   0 ]
        [ sin u   cos u   0 ]
        [   0       0     1 ]
(Note the position of the - sign in Ry(t); it's there because we think of the coordinates in cyclic order. Rx(s) rotates y and z; Ry(t) rotates z and x; Rz(u) rotates x and y. Since Ry(t) rotates z and x not in alphabetical order but in cyclic order, the rotation is opposite in direction from what you would expect for alphabetical order.)
Now we multiply the matrices in the correct order. Rx(s) Ry(t) is
[ 1     0       0    ] [ cos t   0   sin t ]   [    cos t         0        sin t     ]
[ 0   cos s  -sin s  ] [   0     1     0   ] = [ sin s*sin t    cos s   -sin s*cos t ]
[ 0   sin s   cos s  ] [-sin t   0   cos t ]   [-cos s*sin t    sin s    cos s*cos t ]
The product of that with Rz(u) is
[    cos t         0        sin t     ] [ cos u  -sin u   0 ]
[ sin s*sin t    cos s   -sin s*cos t ] [ sin u   cos u   0 ] =
[-cos s*sin t    sin s    cos s*cos t ] [   0       0     1 ]

[          cos t*cos u                      -cos t*sin u                 sin t      ]
[ sin s*sin t*cos u + cos s*sin u   -sin s*sin t*sin u + cos s*cos u   -sin s*cos t ]
[-cos s*sin t*cos u + sin s*sin u    cos s*sin t*sin u + sin s*cos u    cos s*cos t ]
So we can figure out the angles as follows:
tan s = (sin s*cos t)/(cos s*cos t) = -M23/M33  =>  s = -arctan2(M23, M33)
sin t = M13                                     =>  t = arcsin(M13)
tan u = (cos t*sin u)/(cos t*cos u) = -M12/M11  =>  u = -arctan2(M12, M11)
If we are to implement those calculations, we need to understand how the matrix is indexed in MathGeoLib. The indexing is row-major, just like the mathematical notation, but it starts at 0 (computer style) rather than at 1 (math style), so the C++ formulas you want are
s = -atan2(M[1][2],M[2][2]);
t = asin(M[0][2]);
u = -atan2(M[0][1],M[0][0]);
The angles are returned in radians, so they will need to be converted to degrees if desired. You should test that result in the case where the axes of the Z, Y, and X rotations are in the standard positions (0,0,1), (0,1,0), and (1,0,0).
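For reference, here is a minimal sketch of those formulas as a helper function; it assumes MathGeoLib's umbrella header "MathGeoLib.h" and the M[row][col] indexing shown above, and the function name and the clamp on the asin argument are my own additions.

#include <algorithm>
#include <cmath>
#include "MathGeoLib.h"

// Angles (in radians) about X, Y, Z for the ZYX application order derived above.
void AnglesZYX(const float4x4 &M, float &s, float &t, float &u)
{
    s = -std::atan2(M[1][2], M[2][2]);                          // rotation about X
    t =  std::asin(std::max(-1.0f, std::min(1.0f, M[0][2])));   // rotation about Y (clamped against rounding)
    u = -std::atan2(M[0][1], M[0][0]);                          // rotation about Z
}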
If we are to reverse a rotation about non-standard axes, as in your example, the problem becomes more difficult. However, I think it can be done by a "change of coordinates". So if our rotation mystery matrix is matrixRotation, I believe you can just form the "conjugate" matrix
M = coordinateChangeMatrix*matrixRotation*coordinateChangeMatrix^{-1}
and then use the above formulas. Here coordinateChangeMatrix would be the matrix
[Xaxis0 Xaxis1 Xaxis2 0]
[Yaxis0 Yaxis1 Yaxis2 0]
[Zaxis0 Zaxis1 Zaxis2 0]
[ 0 0 0 1]
where the rotation X-axis is (Xaxis0, Xaxis1, Xaxis2). In your example those numbers would be (0.808..., -0.588..., 0). You should make sure the set of axes is orthonormal, i.e., the dot product of Xaxis with itself is 1, the dot product of Xaxis with either other axis is 0, and likewise for the other axes. If the coordinate change matrix is not orthonormal, the calculation might still work, but I don't know for sure.
The inverse of the coordinate change matrix can be calculated using float4x4::InverseOrthonormal, or, if it's not orthonormal, with float4x4::Inverse, but as I mentioned I don't know how well that case will work.
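Putting the pieces together, a sketch of the whole reversal could look like this (untested; it assumes float4x4 has the row-by-row 16-value constructor, and it builds the inverse of the coordinate change matrix directly as the transpose since the frame is orthonormal):

#include <cmath>
#include "MathGeoLib.h"

// Recover the angles (radians) about the three arbitrary axes, ZYX application order.
float3 AnglesAboutAxes(const float4x4 &matrixRotation,
                       const float3 &axisX, const float3 &axisY, const float3 &axisZ)
{
    float4x4 C(axisX.x, axisX.y, axisX.z, 0.0f,       // coordinate change matrix
               axisY.x, axisY.y, axisY.z, 0.0f,
               axisZ.x, axisZ.y, axisZ.z, 0.0f,
               0.0f,    0.0f,    0.0f,    1.0f);
    float4x4 Cinv(axisX.x, axisY.x, axisZ.x, 0.0f,    // its inverse = its transpose
                  axisX.y, axisY.y, axisZ.y, 0.0f,
                  axisX.z, axisY.z, axisZ.z, 0.0f,
                  0.0f,    0.0f,    0.0f,    1.0f);
    float4x4 M = C * matrixRotation * Cinv;           // the "conjugate" matrix
    float s = -std::atan2(M[1][2], M[2][2]);          // about axisX
    float t =  std::asin(M[0][2]);                    // about axisY
    float u = -std::atan2(M[0][1], M[0][0]);          // about axisZ
    return float3(s, t, u);
}

If the angles come out wrong, try the opposite conjugation order (Cinv * matrixRotation * C), as suggested in the update further down.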

If you just want a rotation that reverses the rotation you've got in one step, you can invert the rotation matrix. float4x4::InverseOrthonormal should work, and it's fast and accurate. float4x4::Inverse will also work but it's slower and less accurate.
If you really want to recover the angles, it goes something like this. (There are numerous different conventions, even for X-Y-Z; I think this one matches, but you may have to take the transpose of the matrix or make some other modification. If this doesn't work I can suggest alternatives.) First we follow the Wikipedia article for a description of the Euler Angles to Matrix conversion. In the resulting matrix, we have
A11 = cos theta cos psi
A21 = -cos theta sin psi
A31 = sin theta
A32 = -sin phi cos theta
A33 = cos phi cos theta
where phi is the rotation around the x-axis, theta is the rotation around the y-axis, and psi is rotation around the z-axis. To recover the angles, we do
phi = -arctan2(A32,A33)
theta = arcsin(A31)
psi = -arctan2(A21,A11)
The angles may not match the original angles exactly, but the rotations should match. arctan2 is the two-argument form of the arctan function, which takes into account the quadrant of the point represented by the arguments and deals correctly with 90 degree angles.
Given the way your rotations are represented, I think you may have to use the transpose instead. That's easy: you just swap the indices in the above formulas:
phi = -arctan2(A23,A33)
theta = arcsin(A13)
psi = -arctan2(A12,A11)
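Translated into the 0-based M[row][col] indexing used in the ZYX answer above, the two candidate sets of formulas would look roughly like this (a sketch; it assumes the "MathGeoLib.h" umbrella header and a function name of my own):

#include <cmath>
#include "MathGeoLib.h"

// phi, theta, psi in radians; set 'transposed' to try the second set of formulas.
void EulerXYZFromMatrix(const float4x4 &A, bool transposed,
                        float &phi, float &theta, float &psi)
{
    if (!transposed) {                         // A32/A33, A31, A21/A11 convention
        phi   = -std::atan2(A[2][1], A[2][2]);
        theta =  std::asin(A[2][0]);
        psi   = -std::atan2(A[1][0], A[0][0]);
    } else {                                   // A23/A33, A13, A12/A11 (transposed) convention
        phi   = -std::atan2(A[1][2], A[2][2]);
        theta =  std::asin(A[0][2]);
        psi   = -std::atan2(A[0][1], A[0][0]);
    }
}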
If neither of those work, I could take a closer look at the MathGeoLib library and figure out what they're doing.
Update
I neglected to take into account information about the axes of rotation in my previous answer. Now I think I have a plan for dealing with them.
The idea is to "change coordinates" then do the operations as above in the new coordinates. I'm a little hazy on the details, so the process is a little "alchemical" at the moment. The OP should try various combinations of my suggestions and see if any of them work ... there aren't too many (just 4 ... for the time being).
The idea is to form a coordinate change matrix using the coordinates of the rotation axes. We do it like this:
axisX: 0.80878228 -0.58810818 0.00000000 0.00000000
axisY: 0.58811820 0.80877501 0.00000000 0.00000000
axisZ: 0.00000000 0.00000000 1.0000000 0.00000000
and..: 0.00000000 0.00000000 0.00000000 1.0000000
I have just taken the three 3-vectors axisX, axisY, axisZ, padded them with 0 at the end, and added the row [0 0 0 1] at the bottom.
I also need the inverse of that matrix. Since the coordinate system is an orthonormal frame, the inverse is the transpose. You can use the InverseOrthonormal function in the library; all it does is form the transpose.
Now take your mystery matrix, and pre-multiply it by the coordinate change matrix, and post-multiply it by the inverse of the coordinate change matrix. Then apply one of the two calculations above using inverse trig functions. Crossing my fingers, I think that's it ...
If that doesn't work, then pre-multiply the mystery matrix by the inverse of the coordinate change matrix, and post-multiply by the coordinate change matrix. Then apply one or the other of the sets of trig formulas.
Does it work?

Related

How to calculate intersection coordinate between two given geo coordinates and bearing angle

I want to calculate the intersection point of the following coordinates. I am able to calculate the bearing angle (i.e. heading) and the distance between these two coordinates, but I don't get how to calculate the intersection point.
QGeoCoordinate sourceCoord(19.999601675,73.726176879);
QGeoCoordinate destinationCoord(19.999139102,73.725825826);
distance = 0.06318 km
bearing angle (heading) 1 to 2 = 215
bearing angle (heading) 2 to 1 = 35
To calculate distance I am using this formula:
//φ is latitude, λ is longitude, R is earth’s radius
a = sin²(Δφ/2) + cos φ1 ⋅ cos φ2 ⋅ sin²(Δλ/2)
// distance between two coordinates
c = 2 ⋅ atan2( √a, √(1−a) )
d = R ⋅ c
To calculate the bearing angle (heading) I used this formula:
θ = atan2( sin Δλ ⋅ cos φ2 , cos φ1 ⋅ sin φ2 − sin φ1 ⋅ cos φ2 ⋅ cos Δλ )
How do I calculate the intersection point between two geo coordinates from the bearing angle and heading?
This question has mostly been answered here.
If you assume the earth is spherical, you can do something similar to what @chux - Reinstate Monica suggested:
convert to cartesian coordinates
add the points
convert the result to lat/long, ignoring radius
In Python, it might look something like this (importing the functions from math directly to keep it readable):

from math import radians, degrees, sin, cos, asin, atan2, sqrt

def midpt(lat1, lng1, lat2, lng2):
    # antipodal points: the midpoint is ambiguous; return the average longitude (degrees)
    if lat1 == -lat2 and abs(lng1 - lng2) == 180:
        return [0, (lng1 + lng2) / 2]
    # convert to cartesian (unit sphere)
    cart1 = [cos(radians(lat1)) * cos(radians(lng1)),
             cos(radians(lat1)) * sin(radians(lng1)),
             sin(radians(lat1))]
    cart2 = [cos(radians(lat2)) * cos(radians(lng2)),
             cos(radians(lat2)) * sin(radians(lng2)),
             sin(radians(lat2))]
    # add the position vectors
    pt_sum = [cart1[0] + cart2[0],
              cart1[1] + cart2[1],
              cart1[2] + cart2[2]]
    r = sqrt(pt_sum[0] ** 2 + pt_sum[1] ** 2 + pt_sum[2] ** 2)
    # convert back to lat/lng, ignoring radius
    int_lat = degrees(asin(pt_sum[2] / r))
    int_lng = degrees(atan2(pt_sum[1], pt_sum[0]))
    return [int_lat, int_lng]

How to determine the intersection between the camera direction and a plane?

I have a 3D scene with an infinite horizontal plane (parallel to the xz plane) at a height H along the vertical Y axis.
I would like to know how to determine the intersection between the axis of my camera and this plane.
The camera is defined by a view-matrix and a projection-matrix.
There are two sub-problems here: 1) Extracting the position and view-direction from the camera matrix. 2) Calculating the intersection between the view-ray and the plane.
Extracting position and view-direction
The view matrix describes how points are transformed from world-space to view-space. The view-space in OpenGL is usually defined such that the camera is at the origin and looks in the -z direction.
To get the position of the camera, we have to transform the origin [0,0,0] of the view-space back into world-space. Mathematically speaking, we have to calculate:
camera_pos_ws = inverse(view_matrix) * [0,0,0,1]
but when looking at the equation we'll see that we only need the 4th column of the inverse matrix. For a rigid-body view matrix (rotation R plus translation) that works out to [1]
camera_pos_ws = -transpose(R) * t
where R is the upper-left 3x3 of the view matrix and t = [view_matrix[12], view_matrix[13], view_matrix[14]] is its translation column.
The orientation of the camera can be found by a similar calculation. We know that the camera looks in the -z direction in view-space, thus the world-space direction is given by
camera_dir_ws = inverse(view_matrix) * [0,0,-1,0];
Again, when looking at the equation, we'll see that this only uses the rotation part of the inverse matrix, which is transpose(R) [2], so
camera_dir_ws = [-view_matrix[2], -view_matrix[6], -view_matrix[10]]
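As a rough sketch of both formulas (plain C++, no library calls; 'view' is the column-major float[16] view matrix and the function name is mine):

void ExtractCamera(const float view[16], float pos[3], float dir[3])
{
    const float *t = &view[12];                      // translation column
    // camera position in world space: -transpose(R) * t
    pos[0] = -(view[0]*t[0] + view[1]*t[1] + view[2]*t[2]);
    pos[1] = -(view[4]*t[0] + view[5]*t[1] + view[6]*t[2]);
    pos[2] = -(view[8]*t[0] + view[9]*t[1] + view[10]*t[2]);
    // view direction in world space: transpose(R) * (0,0,-1) = minus the third row of R
    dir[0] = -view[2];
    dir[1] = -view[6];
    dir[2] = -view[10];
}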
Calculating the intersection
We now know the camera position P and the view direction D, so we have to find the x and z values along the ray R(l) = P + l * D where y equals H. Since there is only one unknown, l, we can calculate it from
y = Py + l * Dy
H = Py + l * Dy
l = (H - Py) / Dy
The intersection point is then obtained by plugging l back into the ray equation.
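And a matching sketch for the intersection with the plane y = H (plain C++ again; the name and the epsilon check are mine):

#include <cmath>

bool IntersectPlaneY(const float pos[3], const float dir[3], float H, float out[3])
{
    if (std::fabs(dir[1]) < 1e-6f)
        return false;                      // view ray is (nearly) parallel to the plane
    float l = (H - pos[1]) / dir[1];       // solve H = Py + l*Dy for l
    out[0] = pos[0] + l * dir[0];
    out[1] = H;
    out[2] = pos[2] + l * dir[2];
    return true;                           // note: l < 0 means the plane is behind the camera
}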
Notes
[1] The indices assume that the matrix is stored in a column-major linear array.
[2] Note that the inverse of a matrix of the form

    M = [ R  T ]
        [ 0  1 ]

where R is an orthogonal 3x3 matrix, is given by

    inv(M) = [ transpose(R)  -transpose(R)*T ]
             [      0                1       ]
For a general line-plane intersection there are lots of answers and tutorials.
Your case is simple because the plane is horizontal.
Suppose the camera is at C(cx, cy, cz) and it looks at T(tx, ty, tz).
Then the line camera-target can be defined by:
    cx - x    cy - y    cz - z
    ------- = ------- = -------     (these are two independent equations)
    tx - cx   ty - cy   tz - cz
For a horizontal plane, only one equation is needed: y = H.
Substitute this value in the line equations and you get
(cx-x)/(tx-cx) = (cy-H)/(ty-cy)
(cz-z)/(tz-cz) = (cy-H)/(ty-cy)
So
x = cx - (tx-cx)*(cy-H)/(ty-cy)
y = H
z = cz - (tz-cz)*(cy-H)/(ty-cy)
Of course, if your camera looks along a horizontal line, then ty = cy and there is no solution.
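A tiny sketch of those formulas (the struct and function names are mine, not from any library):

struct Point3 { float x, y, z; };

// C = camera position, T = target point, H = plane height; returns false when ty == cy.
bool IntersectLineWithPlaneY(const Point3 &C, const Point3 &T, float H, Point3 &out)
{
    if (T.y == C.y)
        return false;                       // looking horizontally: no solution
    float k = (C.y - H) / (T.y - C.y);      // the common ratio (cy - H)/(ty - cy)
    out.x = C.x - (T.x - C.x) * k;
    out.y = H;
    out.z = C.z - (T.z - C.z) * k;
    return true;
}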

Quaternion to EulerXYZ, how to differentiate the negative and positive quaternion

I've been trying to figure out the difference between these, and why ToEulerXYZ does not get the right rotation.
Using MathGeoLib:
axisX: x 0.80878228, y -0.58810818, z 0.00000000
axisY: x 0.58811820, y 0.80877501, z 0.00000000
axisZ: x 0.00000000, y 0.00000000, z 1.0000000
Code:
Quat aQ = Quat::RotateAxisAngle(axisX, DegToRad(30)) * Quat::RotateAxisAngle(axisY, DegToRad(60)) * Quat::RotateAxisAngle(axisZ, DegToRad(40));
float3 eulerAnglesA = aQ.ToEulerXYZ();
Quat bQ = Quat::RotateAxisAngle(axisX, DegToRad(-150)) * Quat::RotateAxisAngle(axisY, DegToRad(120)) * Quat::RotateAxisAngle(axisZ, DegToRad(-140));
float3 eulerAnglesB = bQ.ToEulerXYZ();
Both ToEulerXYZ calls return {x=58.675510 y=33.600880 z=38.327244 ...} (when converted to degrees).
The only difference I can see is that the quaternions are identical except one is negated. The ToEulerXYZ result looks wrong though, since one of them (bQ) should be the negative ({x=-58.675510 y=-33.600880 z=-38.327244 ...}).
aQ is: x 0.52576530, y 0.084034257, z 0.40772036, w 0.74180400
while bQ is: x -0.52576530, y -0.084034257, z -0.40772036, w -0.74180400
Is this just an error with MathGeoLib, or some weird nuance, or maybe someone can explain to me what is going on logically.
There are additional scenarios where the two results are not even negatives of each other:
axisX: x=-0.71492511, y=-0.69920099, z=0.00000000
axisY: x=0.69920099, y=-0.71492511, z=0.00000000
axisZ: x=0.00000000, y=0.00000000, z=1.0000000
Code:
Quat aQ = Quat::RotateAxisAngle(axisX, DegToRad(0)) * Quat::RotateAxisAngle(axisY, DegToRad(0)) * Quat::RotateAxisAngle(axisZ, DegToRad(-90));
float3 eulerAnglesA = aQ.ToEulerXYZ();
Quat bQ = Quat::RotateAxisAngle(axisX, DegToRad(-180)) * Quat::RotateAxisAngle(axisY, DegToRad(180)) * Quat::RotateAxisAngle(axisZ, DegToRad(90));
float3 eulerAnglesB = bQ.ToEulerXYZ();
These both yield the same quaternion!
x 0.00000000, y 0.00000000, z -0.70710677, w 0.70710677
The quaternions -q and q are different; however, the rotations represented by the two quaternions are identical. This phenomenon is usually described by saying quaternions provide a double cover of the rotation group SO(3). The algebra to see this is very simple: given a vector represented by a quaternion p, and a rotation represented by a quaternion q, the rotated vector is qpq^{-1}. On the other hand, (-q)p(-q)^{-1} = (-1)qp q^{-1}(-1) = (-1)(-1)qpq^{-1} = qpq^{-1}, the same rotation. Quaternions normally don't commute, so pq != qp for general quaternions, but scalars like -1 do commute with quaternions.
I believe ToEulerXYZ should be the same in both cases, which it appears to be.
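For anyone who wants to see this numerically, here is a small self-contained sketch (plain C++, no library; the little Q struct and helpers are just for this demo) that rotates a vector with q and with -q via the q p q^{-1} formula and prints identical results. The values are aQ/bQ from the question, and since they are unit quaternions the conjugate serves as the inverse.

#include <cstdio>

struct Q { float x, y, z, w; };

// Hamilton product of two quaternions.
Q mul(Q a, Q b) {
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}
Q conj(Q a) { return { -a.x, -a.y, -a.z, a.w }; }     // inverse of a unit quaternion

int main() {
    Q q  = {  0.52576530f,  0.084034257f,  0.40772036f,  0.74180400f };  // aQ
    Q nq = { -0.52576530f, -0.084034257f, -0.40772036f, -0.74180400f };  // bQ = -aQ
    Q v  = { 1.0f, 2.0f, 3.0f, 0.0f };          // the vector (1,2,3) as a pure quaternion
    Q r1 = mul(mul(q,  v), conj(q));
    Q r2 = mul(mul(nq, v), conj(nq));
    std::printf("%f %f %f\n", r1.x, r1.y, r1.z);   // both lines print the same rotated vector
    std::printf("%f %f %f\n", r2.x, r2.y, r2.z);
    return 0;
}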
From what I remember a quaternion can be considered as a rotation around an arbitrary axis.
And this can help to understand intuitively why there will always be two quaternions to represent a given rotation.
Rotating 90° around 0,0,1 is going to be the same as rotating 270° around 0,0, -1.
I.e. A quarter turn anticlockwise around 0,0,1 is identical to a quarter turn clockwise around 0,0, -1.
You can check this out by using your thumb as the axis of rotation and do the 90° rotation in the direction your fingers curl.

DirectX 3D rotate eyePt around lookAt

I'm really struggling with the maths I need for this; maths has never been my strong point once you get into calculus and geometry.
So how do I rotate a vector3 around another vector3?
D3DXVECTOR3 lookAt = this->cam->getLookAt();
D3DXVECTOR3 eyePt = this->cam->getEyePt();
I need to rotate lookAt around eyePt. How do I do this? I know I need a matrix, but I just don't understand what I'm supposed to fill it with and how to do the rotation with it.
So if someone could provide code with explanations of the steps used to do it, it would really help.
Another note: I only want to translate on the X and Z axes, as I'm rotating around the Y axis. Here is an image of what I'm trying to do.
Take the unit vector eyePt, which will be the axis of rotation. (I presume it's a unit vector; if not, I can show you how to turn it into a unit vector.) Let's call it E:

        [Ex]
    E = [Ey]
        [Ez]

(This is the vector (Ex, Ey, Ez); it's just hard to do mathematical notation here.)
Now construct three matrices. The identity matrix I:
        [1 0 0]
    I = [0 1 0]
        [0 0 1]
the tensor product of E with itself, which we'll call T:

        [ExEx  ExEy  ExEz]
    T = [ExEy  EyEy  EyEz]
        [ExEz  EyEz  EzEz]

and the cross-product matrix of E, which we'll call P:

        [  0   -Ez   Ey ]
    P = [  Ez    0  -Ex ]
        [ -Ey   Ex    0 ]
Now choose an angle of rotation, theta (in radians). The rotation matrix will be:
R = cos(theta)I + (1-cos(theta))T + sin(theta)P
Now to rotate the vector v (which in this case is lookAt), just multiply R by it:
v_after = R * v_before
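Here is a rough plain-C++ sketch of that recipe, with the formula R = cos(theta)*I + (1-cos(theta))*T + sin(theta)*P written out entrywise. The struct and function names are mine; adapt the entries to your D3DX matrix layout (D3DX uses row vectors, so you may need the transpose).

#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate v around the unit axis e by theta radians: v_after = R * v_before.
Vec3 RotateAroundAxis(Vec3 v, Vec3 e, float theta)
{
    float c = std::cos(theta), s = std::sin(theta), t = 1.0f - c;
    // R = c*I + t*T + s*P, using T and P as defined above
    float R[3][3] = {
        { c + t*e.x*e.x,     t*e.x*e.y - s*e.z, t*e.x*e.z + s*e.y },
        { t*e.x*e.y + s*e.z, c + t*e.y*e.y,     t*e.y*e.z - s*e.x },
        { t*e.x*e.z - s*e.y, t*e.y*e.z + s*e.x, c + t*e.z*e.z     }
    };
    return { R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
             R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
             R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z };
}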

How to convert mathematical matrix to Direct3D matrix?

When representing a mathematical matrix, rotations are performed as follows:
http://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations
Rx(θ) = [ 1      0       0    ]
        [ 0    cos θ  -sin θ  ]
        [ 0    sin θ   cos θ  ]

Ry(θ) = [ cos θ   0   sin θ ]
        [   0     1     0   ]
        [-sin θ   0   cos θ ]

Rz(θ) = [ cos θ  -sin θ   0 ]
        [ sin θ   cos θ   0 ]
        [   0       0     1 ]
However, I've discovered that Direct3D uses the transpose of these rotation matrices.
In my app I have a generic Matrix class which uses the standard mathematical rotation representations. With a simple rotation about 1 axis, it is easy to convert to a Direct3D matrix, as you can just do the transposition. However, if you rotate about x, y and then z you cannot simply get the transposed matrix.
My question is, how can I convert a mathematical matrix in to a Direct3D matrix?
Here is an example:
Matrix matrix;
matrix.RotateX(1.0f);
matrix.RotateY(1.0f);
matrix.RotateZ(1.0f);
Mathematical matrix =
m_11 0.29192656 float
m_12 -0.45464867 float
m_13 0.84147096 float
m_14 0.00000000 float
m_21 0.83722234 float
m_22 -0.30389664 float
m_23 -0.45464867 float
m_24 0.00000000 float
m_31 0.46242565 float
m_32 0.83722234 float
m_33 0.29192656 float
m_34 0.00000000 float
m_41 0.00000000 float
m_42 0.00000000 float
m_43 0.00000000 float
m_44 1.0000000 float
Direct3D matrix =
_11 0.29192656 float
_12 0.45464867 float
_13 -0.84147096 float
_14 0.00000000 float
_21 -0.072075009 float
_22 0.88774973 float
_23 0.45464867 float
_24 0.00000000 float
_31 0.95372111 float
_32 -0.072075009 float
_33 0.29192656 float
_34 0.00000000 float
_41 0.00000000 float
_42 0.00000000 float
_43 0.00000000 float
_44 1.0000000 float
Edit: Here are some examples of individual rotations.
X-Axis rotation by 1 radian
My matrix class:
1.0000000 0.00000000 0.00000000 0.00000000
0.00000000 0.54030228 -0.84147096 0.00000000
0.00000000 0.84147096 0.54030228 0.00000000
0.00000000 0.00000000 0.00000000 1.0000000
Direct3D:
1.0000000 0.00000000 0.00000000 0.00000000
0.00000000 0.54030228 0.84147096 0.00000000
0.00000000 -0.84147096 0.54030228 0.00000000
0.00000000 0.00000000 0.00000000 1.0000000
As you can see, the Direct3D matrix is exactly the transpose of my matrix class (my class gives the same results as the examples given by Wikipedia at the top of this question).
I've always traced this confusion back to the late 80s. As I remember it, there were two particularly influential books on graphics at the time. One of them wrote vectors as row vectors on the left of matrices; the book was quite hardware oriented - conceptually you thought of the vectors as flowing left-to-right through a pipeline of transform matrices. Rendermorphics (which was later picked up by Microsoft to become Direct3D) went down this route. The other book wrote vectors as column vectors on the right of matrices, which is the way OpenGL does it (and I think most mathematicians would naturally gravitate towards this, although I have met exceptions).
However, both approaches are entirely equally valid mathematics! If you're confused by Direct3D, start thinking in terms of writing row vectors on the left of matrices, not in terms of transposing the matrices.
You're not the first person to be confused by this (see also).
The DirectX matrix classes are an implementation of a mathematical matrix.
The handedness only exhibits itself when you do operations on it such as rotation.
The transpose should give you the result you are looking for as long as the same operations are done on both the DirectX matrix and your 'mathematical matrix'. My guess is that there's a difference in the implementations of rotation, or that the order the rotations are being done in differs.
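One way to make that concrete with a worked equation: transposing a product reverses the order of its factors,

transpose(Rx(θ) * Ry(θ) * Rz(θ)) = transpose(Rz(θ)) * transpose(Ry(θ)) * transpose(Rx(θ))

so transposing each single-axis matrix while keeping the same multiplication order is not the same as transposing the combined matrix. In the example above, the mathematical matrix is Rx(1)*Ry(1)*Rz(1) in the column-vector convention, and the Direct3D matrix matches transpose(Rz(1)*Ry(1)*Rx(1)) = transpose(Rx(1))*transpose(Ry(1))*transpose(Rz(1)) (you can check a few entries numerically). That is, the individual rotations are transposed but they effectively combine in the reverse order, which is why the single-axis matrices are exact transposes of each other while the combined x-y-z matrices are not.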
Given that the only nonzero terms off the main diagonal are of the form ±sin(θ), and that sin(−θ) = −sin(θ) but cos(−θ) = cos(θ), the relationship between Direct3D and the Wikipedia maths can also be seen as an opposite interpretation of the direction of the angle.