OpenGL rotation vector from matrix - opengl

Ok, so here is what I have:
• an abstract "Object" class which I made in a framework to use as a base class for all 3D objects.
• a Matrix4 member of this class whose sole purpose is storing rotation info for the object.
• some functions that multiply the matrix: for each of the yaw, pitch & roll rotations (both global and local), I made a method that multiplies the above rotation matrix with a new matrix.
e.g.: if you locally yaw the object by 45 degrees in CCW direction, then
rotMatrix = newRotationZMatrix(45) * rotMatrix;
What I would like to know is the best way of getting the global rotation of the object as a vector - generally speaking, how do you get the rotation angles around X, Y and Z from a transformation matrix that contains JUST rotations?

There are standard techniques for this: you extract the Euler angles from the rotation matrix, which involves a bit of math. Here you can read about it.

I have a similar class.
Working with 3D matrices is one of the most fun parts for me.
Well, on to the answer.
I'm working with OpenGL ES, so performance is crucial to me. I write a routine, test it, stress-test it, rebuild the whole routine and test again... until I find the way that saves the most memory and gives the best performance.
So, with maximum performance in mind, and if you, like me, have a class for working with matrices, I tell you that the best way is:
• Store the rotation values in variables before putting them into the matrix.
That way you don't need to perform expensive Euler-angle extractions every time you want those values. The same is true for the scale values. Only the translations, which have their own separate indices in the matrix, don't need this.

Related

Quaternion based camera

I try to implement an FPS camera based on quaternion math.
I store a rotation quaternion variable called _quat and multiply it by another quaternion when needed. Here's some code:
void Camera::SetOrientation(float rightAngle, float upAngle) // in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1, 0, 0));
    q *= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0, 1, 0));
    _quat = q;
}

void Camera::OffsetOrientation(float rightAngle, float upAngle) // in degrees
{
    glm::quat q = glm::angleAxis(glm::radians(-upAngle), glm::vec3(1, 0, 0));
    q *= glm::angleAxis(glm::radians(rightAngle), glm::vec3(0, 1, 0));
    _quat *= q;
}
The application can request the orientation matrix via GetOrientation, which simply casts the quaternion to a matrix.
glm::mat4 Camera::GetOrientation() const
{
    return glm::mat4_cast(_quat);
}
The application changes the orientation in the following way:
int diffX = ...; // some computation based on mouse movement
int diffY = ...;
camera.OffsetOrientation(g_mouseSensitivity * diffX, g_mouseSensitivity * diffY);
This results in bad, mixed rotations around pretty much all the axes. What am I doing wrong?
The problem is the way that you are accumulating rotations. This would be the same whether you use quaternions or matrices. Combining a rotation representing pitch and yaw with another will introduce roll.
By far the easiest way to implement an FPS camera is to simply accumulate changes to the heading and pitch, then convert to a quaternion (or matrix) when you need to. I would change the methods in your camera class to:
void Camera::SetOrientation(float rightAngle, float upAngle) // in degrees
{
    _rightAngle = rightAngle;
    _upAngle = upAngle;
}

void Camera::OffsetOrientation(float rightAngle, float upAngle) // in degrees
{
    _rightAngle += rightAngle;
    _upAngle += upAngle;
}

glm::mat4 Camera::GetOrientation() const
{
    glm::quat q = glm::angleAxis(glm::radians(-_upAngle), glm::vec3(1, 0, 0));
    q *= glm::angleAxis(glm::radians(_rightAngle), glm::vec3(0, 1, 0));
    return glm::mat4_cast(q);
}
The problem
As already pointed out by GuyRT, the way you accumulate rotations is not good. In theory it would work, but floating-point math is far from perfectly precise, and errors accumulate with every operation. Composing two quaternion rotations is 28 operations, versus a single operation for adding a value to an angle (and each of the operations in a quaternion multiplication affects the resulting rotation in 3D space in a very non-obvious way).
Also, quaternions used for rotation are rather sensitive to staying normalized, and rotating them de-normalizes them slightly (rotating them many times de-normalizes them a lot, and rotating them with another, already de-normalized quaternion amplifies the effect).
Reflection
Why do we use quaternions in the first place?
Quaternions are commonly used for the following reasons:
Avoiding the dreaded gimbal lock (although a lot of people misunderstand the issue: replacing three angles with three quaternions does not magically remove the fact that one is still combining three rotations around the unit axes -- quaternions must be used correctly to avoid the problem)
Efficient combination of many rotations, such as in skinning (28 ops versus 45 ops when using matrices), saving ALU.
Fewer values (and thus fewer degrees of freedom), fewer ops, so less opportunity for undesirable effects compared to using matrices when combining many transformations.
Fewer values to upload, for example when a skinned model has a couple of hundred bones or when drawing ten thousand instances of an object. Smaller vertex streams or uniform blocks.
Quaternions are cool, and people using them are cool.
None of these really makes a difference for your problem.
Solution
Accumulate the two rotations as angles (normally undesirable, but perfectly acceptable for this case), and create a rotation matrix when you need it. This can be done either by combining two quaternions and converting to a matrix as in GuyRT's answer, or by directly generating the rotation matrix (which is likely more efficient, and all that OpenGL wants to see is that one matrix anyway).
To my knowledge, glm::rotate only does rotation around an arbitrary axis. You could of course use that (but then you'd rather combine two quaternions!). Luckily, the formula for a matrix combining rotations around x, then y, then z is well-known and straightforward; you can find it, for example, in the second paragraph of (3) here.
You do not wish to rotate around z, so cos(gamma) = 1 and sin(gamma) = 0, which greatly simplifies the formula (write it out on a piece of paper).
Using rotation angles is something that will make many people shout at you (often not entirely undeserved).
A cleaner alternative is keeping track of the direction you look at either with a vector pointing from your eye in the direction where you wish to look, or by remembering the point in space that you look at (this is something that combines nicely with physics in a 3rd person game, too). That also needs an "up" vector if you want to allow arbitrary rotations -- since then "up" isn't always the world space "up" -- so you may need two vectors. This is much nicer and more flexible, but also more complex.
For what is desired in your example, a FPS where your only options are to look left-right and up-down, I find rotation angles -- for the camera only -- entirely acceptable.
I haven't used GLM, so maybe you won't like this answer. However, performing quaternion rotation is not bad.
Let's say your camera has an initial saved orientation 'vecOriginalDirection' (a normalized vec3). Let's say you want it to follow another 'vecDirection' (also normalized). This way we can adapt a Trackball-like approach, and treat vecDirection as a deflection from whatever is the default focus of the camera.
The usually preferred way to do quaternion rotation in the real world is using NLERP. Let's see if I can remember: in pseudocode (assuming floating-point) I think it's this:
quat = normalize([ cross(vecDirection, vecOriginalDirection),
                   1. + dot(vecDirection, vecOriginalDirection) ]);
(Don't forget the '1. +'. It effectively averages the identity quaternion into the double-angle rotation, so after normalizing you get the rotation through the single angle. I pulled my hair out for a few days until I found it.)
Renormalizing, shown above as 'normalize()', is essential (it's the 'N' in NLERP). Of course, normalizing quat (x,y,z,w) is just:
quat /= sqrt(x*x+y*y+z*z+w*w);
Then, if you want to use your own function to make a 3x3 orientation matrix from quat:
double xx = 2.*x*x, yy = 2.*y*y, zz = 2.*z*z,
       xy = 2.*x*y, xz = 2.*x*z, yz = 2.*y*z,
       wx = 2.*w*x, wy = 2.*w*y, wz = 2.*w*z;

// Column-major (OpenGL-style) layout: m[0..2] is the first column.
m[0] = 1. - (yy + zz);  m[3] = xy - wz;         m[6] = xz + wy;
m[1] = xy + wz;         m[4] = 1. - (xx + zz);  m[7] = yz - wx;
m[2] = xz - wy;         m[5] = yz + wx;         m[8] = 1. - (xx + yy);
To actually implement a trackball, you'll need to calculate vecDirection when the finger is held down, and save it off to vecOriginalDirection when it is first pressed down (assuming touch interface).
You'll also probably want to calculate these values based on a piecewise half-sphere/hyperboloid function, if you aren't already. I think #minorlogic was trying to save some tinkering, since it sounds like you might be able to just use a drop-in virtual trackball.
The up-angle rotation should be pre-multiplied; post-multiplying will rotate the world around the origin through (1,0,0), while pre-multiplying rotates the camera.
glm::quat q_up    = glm::angleAxis(glm::radians(-upAngle),   glm::vec3(1, 0, 0));
glm::quat q_right = glm::angleAxis(glm::radians(rightAngle), glm::vec3(0, 1, 0));
_quat *= q_right;       // yaw: post-multiply
_quat = q_up * _quat;   // pitch: pre-multiply

How to store and modify angles in 3D space

This isn't about understanding angular physics, but more how to actually implement it.
For example, I'm using a simple linear interpolation per frame (with dT)
I'm having trouble with the angular units, I want to be able to rotate around arbitrary axes.
(with glm)
Using a vec3 for torque, inertia and angular velocity works excellently for a single axis.
Any more and you get gimbal lock (i.e. you can rotate around a local x, y or z, but superimposing those rotations prevents proper results).
Using quaternions, I can't get it to work nicely with time, inertia, or over an extended period.
Is there any tried-and-true method for representing these features?
The usual solution is to use the matrix representation of rotation. Two rotations in sequence can be expressed by multiplying their respective matrices. And because matrix multiplication is not commutative, the order of the 2 rotations matters - as it should.

Should Euler rotations be stored as three matrices or one matrix?

I am trying to create a simple matrix library in C++ that I will hopefully be able to use in game development afterwards.
I have the basic implementation done, but I have just realized a problem with storing only one matrix per object: the rotation order will get mixed up fairly quickly.
To the best of my knowledge: AB != BA
Therefore, if I continually multiply arbitrary rotations onto my matrix, then the rotation will get mixed up, correct? In my case, I need to rotate globally on the Y axis and locally on the X axis (and locally on the Z axis would be nice as well). These seem like the qualities of the average first-person shooter. So by "mixed up", I mean that if I go to rotate on the Y axis (or Z axis), then it will start rotating around the local X axis instead of the intended axis (if that makes any sense).
So, these are the solutions I came up with:
Keep 3 Euler angles, and rebuild the matrix in the correct order when one angle changes
Keep 3 Matrices, one for each axis
Somehow destruct the matrix during multiplication, and reconstruct it properly afterwards (?)
Or am I worrying about nothing? Are my qualms unfounded, and will the order somehow magically sort itself out?
You are correct that the order of rotation matrices can be an issue here.
Especially if you use Euler angles, you can suffer from gimbal lock: say your first rotation is +90° of "pitch", so you're looking straight up; if the next rotation is +45° of "roll", you're still just looking straight up. But if you apply the rotations in the opposite order, you end up looking somewhere different altogether. (See the Wikipedia link for an illustration that makes this clearer.)
One common answer in game development is what you've got in (1): store the Euler angles independently, and then build the rotation matrix out of all three of them at once every time you want to get the object's orientation in world space.
Another common solution is to store rotation as an angle around a single axis, rather than as Euler angles. (That is often less convenient for animators and player motion.)
We also often use quaternions as a more efficient way of storing and combining rotations.
Each of the links above should take you to an article illustrating the relevant math. I also like Eric Lengyel's Mathematics for 3D Game Programming and Computer Graphics book, which explains this whole subject very well.
I don't know how other people usually do this, but I generally just store the angles, and then reconstruct a matrix if necessary.
You are right that if you had one matrix and kept multiplying something onto it, you would end up messing things up. But again, I don't think this is the route you probably want to take.
I don't know what sort of graphics system you want to be using, but with OpenGL, you don't even have to worry about the matrix representation (unless you're doing something super performance-critical), and can simply use some calls to glRotate and the like.

3d geometry: how to align an object to a vector

I have an object in 3D space that I want to align according to a vector.
I already got the Y-rotation by doing an atan2 on the x and z components of the vector, but I would also like an X-rotation to make the object look downwards or upwards.
Imagine a plane doing its pitch and yaw, just without the roll.
I am using OpenGL to set the rotations, so I will need a Y-angle and an X-angle.
I would not use Euler angles, but rather an axis and an angle. For that matter, this is what OpenGL's glRotatef takes as input.
If all you want is to map one vector onto another, there are infinitely many rotations that will do it. For the shortest one (the one with the smallest angle of rotation), you can use the axis given by the cross product of your from and to unit vectors:
axis = from × to
From there, the angle of rotation follows from from · to = cos(theta) (assuming unit vectors):
theta = arccos(from · to)
glRotatef(theta, axis.x, axis.y, axis.z) (with theta converted to degrees) will then transform from into to.
But as I said, this is only one of many rotations that can do the job. You need a full frame of reference to define the transform more precisely.
You should use some form of quaternion interpolation (Spherical Linear Interpolation) to animate your object going from its current orientation to this new orientation.
If you store the orientations using Quaternions (vector space math), then you can get the shortest path between two orientations very easily. For a great article, please read Understanding Slerp, Then Not Using It.
If you use Euler angles, you will be subject to gimbal lock and some really weird edge cases.
Actually... take a look at this article. It describes Euler angles, which I believe is what you want here.

Will this cause gimbal-lock?

I'm making a very simple 3D scene with 5 points in world coordinates. I'd like to navigate across the scene, so I'm defining a camera with both an UP and an OUT vector. With this information I generate a rotation matrix every frame, which I apply to the vectors in order to get the camera coordinates.
The question is: I've read about gimbal lock as a problem using this method, but would it happen in this case?
Note that I'm generating the rotation matrix in every frame, and I'm not rotating accumulatively. So could a lock happen in this situation? If that was the case, what would you suggest to safely apply a rotation (from the UP and OUT vectors)?
Thank you
If by OUT you mean "forward", and this is always perpendicular to the UP vector, then NO, you won't encounter gimbal lock.
What you are doing is creating an orientation matrix from the UP and FORWARD vectors, and applying that each frame, which is a fairly common method for moving a camera in space.
You are not applying multiple rotations using euler angles, which can be a cause of gimbal lock.
Note to create the matrix you will also need to create a "left" (or right) vector from the UP and FORWARD vectors. A good introduction to this is here - note that that example does then apply rotations to the camera matrix, which is an entirely optional step.
Wikipedia has a good explanation of gimbal lock.
You will encounter the gimbal-lock problem when you use the matrix approach of generating separate rotation matrices (for X, Y, Z) and then multiplying them to get the final rotation matrix. If I've understood you correctly, you use the OUT vector to get angles (alpha, beta, gamma), then calculate the matrices, and finally multiply them to get the final rotation matrix - in that case, yes, you can encounter gimbal lock.
One way to get rid of this problem is to use Quaternions for calculations.
Also, here I've found some OpenGL tutorial about how to implement those.