Converting glm::lookAt matrix to quaternion and back - C++

I am using glm to create a camera class, and I am running into some problems with a lookAt function. I am using a quaternion to represent rotation, but I want to use glm's prewritten lookAt function to avoid duplicating code. This is my LookAt function right now:
void Camera::LookAt(float x, float y, float z) {
    glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
    rotation = glm::toQuat(lookMat);
}
However, when I call LookAt(0.0f, 0.0f, 0.0f), my camera is not rotated toward that point. When I call glm::eulerAngles(rotation) after the lookAt call, I get a vec3 with the following values: (180.0f, 0.0f, 180.0f). position is (0.0f, 0.0f, -10.0f), so I should not have any rotation at all to look at 0,0,0. This is the function which builds the view matrix:
glm::mat4 Camera::GetView() {
    view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
    return view;
}
Why am I not getting the correct quaternion, and how can I fix my code?

Solution:
You have to invert the rotation of the quaternion by conjugating it:
using namespace glm;
quat orientation = conjugate(toQuat(lookAt(vecA, vecB, up)));
Explanation:
The lookAt function is a replacement for gluLookAt, which is used to construct a view matrix.
The view matrix is used to rotate the world around the viewer, and is therefore the inverse of the camera's transform.
By taking the inverse of the inverse, you can get the actual transform.
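As a usage sketch (position, target, and up here are assumed variable names, not part of the original answer), the view matrix can then be rebuilt by inverting the camera transform made of that orientation and the camera position:
using namespace glm;
// orientation is the camera's world-space rotation obtained as above
quat orientation = conjugate(toQuat(lookAt(position, target, up)));
// The view matrix is the inverse of the camera transform T(position) * R(orientation):
mat4 view = toMat4(conjugate(orientation))      // inverse rotation
          * translate(mat4(1.0f), -position);   // inverse translation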

I ran into something similar. The short answer is that your lookMat might need to be inverted/transposed, because it is a camera rotation (at least in my case), as opposed to a world rotation. Rotating the world would be the inverse of a camera rotation.
I have a m_current_quat which is a quaternion that stores the current camera rotation. I debugged the issue by printing out the matrix produced by glm::lookAt, and comparing with the resulting matrix that I get by applying m_current_quat and a translation by m_camera_position. Here is the relevant code for my test.
void PrintMatrix(const GLfloat m[16], const string &str)
{
    printf("%s:\n", str.c_str());
    for (int i = 0; i < 4; i++)
    {
        printf("[");
        //for (int j=i*4+0; j<i*4+4; j++) // row major: 0, 1, 2, 3
        for (int j = i + 0; j < 16; j += 4) // OpenGL is column major by default: 0, 4, 8, 12
        {
            //printf("%d, ", j); // print matrix index
            printf("%.2f, ", m[j]);
        }
        printf("]\n");
    }
    printf("\n");
}
void CameraQuaternion::SetLookAt(glm::vec3 look_at)
{
    m_camera_look_at = look_at;

    // update the initial camera direction and up
    //m_initial_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
    //glm::vec3 initial_right_vector = glm::cross(m_initial_camera_direction, glm::vec3(0, 1, 0));
    //m_initial_camera_up = glm::cross(initial_right_vector, m_initial_camera_direction);

    m_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
    glm::vec3 right_vector = glm::cross(m_camera_direction, glm::vec3(0, 1, 0));
    m_camera_up = glm::cross(right_vector, m_camera_direction);

    glm::mat4 lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);

    // Note: m_current_quat stores the camera rotation with respect to camera space.
    // The lookat_matrix produces a transformation for world space, where we rotate the world
    // with the camera at the origin.
    // Our m_current_quat needs to be the inverse, which is accomplished by transposing the
    // lookat_matrix, since the rotation matrix is orthonormal.
    m_current_quat = glm::toQuat(glm::transpose(lookat_matrix));

    // Testing: make sure our model view matrix after gluLookAt, glm::lookAt, and m_current_quat agrees
    GLfloat current_model_view_matrix[16];

    // Test 1: gluLookAt
    gluLookAt(m_camera_position.x, m_camera_position.y, m_camera_position.z,
              m_camera_look_at.x, m_camera_look_at.y, m_camera_look_at.z,
              m_camera_up.x, m_camera_up.y, m_camera_up.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
    PrintMatrix(current_model_view_matrix, "Model view after gluLookAt");

    // Test 2: glm::lookAt
    lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);
    PrintMatrix(glm::value_ptr(lookat_matrix), "Model view after glm::lookAt");

    // Test 3: m_current_quat
    glLoadIdentity();
    glMultMatrixf(glm::value_ptr(glm::transpose(glm::mat4_cast(m_current_quat))));
    glTranslatef(-m_camera_position.x, -m_camera_position.y, -m_camera_position.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
    PrintMatrix(current_model_view_matrix, "Model view after quaternion transform");

    return;
}
Hope this helps.

I want to use glm's prewritten lookAt function to avoid duplicating code.
But it's not duplicating code. The matrix that comes out of glm::lookAt is just a mat4. Going through the conversion from a quaternion to 3 vectors, only so that glm::lookAt can convert it back into an orientation, is just a waste of time. You've already done 85% of lookAt's job; just do the rest.
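For illustration, here is a minimal sketch of "doing the rest" directly (the function name and the right-handed, -Z-forward convention are assumptions; newer GLM versions also provide glm::quatLookAt, which does essentially this):
glm::quat LookAtQuat(const glm::vec3& eye, const glm::vec3& target, const glm::vec3& up)
{
    glm::vec3 f = glm::normalize(target - eye);       // camera forward
    glm::vec3 r = glm::normalize(glm::cross(f, up));  // camera right
    glm::vec3 u = glm::cross(r, f);                   // orthogonal camera up
    // Columns are the camera's basis axes in world space; the camera looks along its local -Z.
    glm::mat3 basis(r, u, -f);
    return glm::quat_cast(basis); // camera orientation directly, no inverse/conjugate needed
}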

You are getting the (or better: a) correct rotation.
When I call glm::eulerAngles(rotation) after the lookAt call, I get a vec3 with the following values: (180.0f, 0.0f, 180.0f). position is (0.0f, 0.0f, -10.0f), so I should not have any rotation at all to look at 0,0,0.
glm is following the conventions of the old fixed-function GL. And there, eye space was defined as the camera placed at the origin, with x pointing to the right, y up, and looking in the -z direction. Since you want to look in the positive z direction, the camera has to turn. Now, as a human, I would have described that as a rotation of 180 degrees around y, but a rotation of 180 degrees around x in combination with another 180 degrees rotation around z will have the same effect.
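A quick way to convince yourself (a standalone check, not from the original answer; it assumes a GLM build where angles are in radians):
glm::quat qy  = glm::angleAxis(glm::radians(180.0f), glm::vec3(0, 1, 0)); // 180 degrees around y
glm::quat qxz = glm::angleAxis(glm::radians(180.0f), glm::vec3(0, 0, 1))
              * glm::angleAxis(glm::radians(180.0f), glm::vec3(1, 0, 0)); // 180 around x, then 180 around z
// qy and qxz rotate every vector identically (quaternions are equal up to sign), so the
// Euler angles (180, 0, 180) describe the same orientation as (0, 180, 0).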

When world-space vectors are multiplied by the LookAt view matrix, they are rotated (brought) into the camera's view while the camera itself stays in place.
So an actual rotation of the camera by 45 degrees to the right is achieved with a matrix which applies a 45 degree rotation to the left to all the world-space vertices.
For a Camera object you would need to get its local forward and up direction vectors in order to calculate a lookAt view matrix.
viewMatrix = glm::lookAtLH (position, position + camera_forward, camera_up);
When using quaternions to store the orientation of an object (be it a camera or anything else), this rotation quat is usually used to calculate the vectors which define its local space (a left-handed one in the example below):
glm::vec3 camera_forward = rotation * glm::vec3(0,0,1); // +Z is forward direction
glm::vec3 camera_right = rotation * glm::vec3(1,0,0); // +X is right direction
glm::vec3 camera_up = rotation * glm::vec3(0,1,0); // +Y is up direction
Thus, the world-space directions should be rotated 45 degrees to the right in order to reflect the correct orientation of the camera.
This is why the lookMat or the quat obtained from it cannot be directly used for this purpose, since the orientation they describe is a reversed one.
Correct rotation can be done in two ways:
Calculate the inverse of the lookAt matrix and multiply the world-space direction vectors by this rotation matrix
(more efficient) Convert the LookAt matrix into a quaternion and conjugate it instead of applying glm::inverse, since the result is a unit quat and for such quats the inverse is equal to the conjugate.
Your LookAt should look like this:
void Camera::LookAt(float x, float y, float z) {
    glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
    rotation = glm::conjugate(glm::quat_cast(lookMat));
}

Related

How to correctly represent 3D rotation in games

In most 3D platform games, only rotation around the Y axis is needed since the player is always positioned upright.
However, for a 3D space game where the player needs to be rotated on all axes, what is the best way to represent the rotation?
I first tried using Euler angles:
glRotatef(anglex, 1.0f, 0.0f, 0.0f);
glRotatef(angley, 0.0f, 1.0f, 0.0f);
glRotatef(anglez, 0.0f, 0.0f, 1.0f);
The problem I had with this approach is that after each rotation, the axes change. For example, when anglex and angley are 0, anglez rotates the ship around its wings; however, if anglex or angley are non-zero, this is no longer true. I want anglez to always rotate around the wings, regardless of anglex and angley.
I read that quaternions can be used to achieve this desired behavior, but I was unable to get it working in practice.
I assume my issue is due to the fact that I am basically still using Euler angles, but am converting the rotation to its quaternion representation before usage.
struct quaternion q = eulerToQuaternion(anglex, angley, anglez);
struct matrix m = quaternionToMatrix(q);
glMultMatrix(&m);
However, if storing each X, Y, and Z angle directly is incorrect, how do I say "Rotate the ship around the wings (or any consistent axis) by 1 degree" when my rotation is stored as a quaternion?
Additionally, I want to be able to translate the model at the angle that it is rotated by. Say I have just a quaternion with q.x, q.y, q.z, and q.w, how can I move it?
Quaternions are a very good way to represent rotations, because they are efficient, but I prefer to represent the full state "position and orientation" by 4x4 matrices.
So, imagine you have a 4x4 matrix for every object in the scene. Initially, when the object is unrotated and untranslated, this matrix is the identity matrix; this is what I will call the "original state". Suppose, for instance, the nose of your ship points towards -z in its original state, so a rotation matrix that spins the ship around the z axis is:
Matrix4 around_z(radian angle) {
    c = cos(angle);
    s = sin(angle);
    return Matrix4(c, -s, 0, 0,
                   s,  c, 0, 0,
                   0,  0, 1, 0,
                   0,  0, 0, 1);
}
Now, if your ship is anywhere in space and rotated in any direction, and we call this state t, then if you want to spin the ship around the z axis by some angle as if it were in its "original state", it would be:
t = t * around_z(angle);
And when drawing with OpenGL, t is the matrix you multiply every vertex of that ship by. This assumes you are using column vectors (as OpenGL does), and be aware that matrices in OpenGL are stored column-major.
Basically, your problem seems to be with the order in which you are applying your rotations. See, quaternion and matrix multiplication are non-commutative. So, if instead you write:
t = around_z(angle) * t;
You will have the around_z rotation applied not to the "original state" z, but to the global coordinate z, with the ship already affected by the initial transformation (rotated and translated). This is the same thing when you call the glRotate and glTranslate functions: the order they are called in matters.
Being a little more specific for your problem: you have the absolute translation trans, and the rotation around its center rot. You would update each object in your scene with something like:
void update(quaternion delta_rot, vector delta_trans) {
    rot = rot * delta_rot;
    trans = trans + rot.apply(delta_trans);
}
Here delta_rot and delta_trans are both expressed in coordinates relative to the original state, so if you want to propel your ship forward 0.5 units, your delta_trans would be (0, 0, -0.5). To draw, it would be something like:
void draw() {
    // Apply the absolute translation first
    glLoadIdentity();
    glTranslatef(trans.x, trans.y, trans.z);

    // Apply the absolute rotation last
    struct matrix m = quaternionToMatrix(rot);
    glMultMatrix(&m);

    // This sequence is equivalent to:
    // final_vertex_position = translation_matrix * rotation_matrix * vertex;

    // ... draw stuff
}
I chose the order of the calls by reading the manuals for glTranslate and glMultMatrix, to guarantee the order in which the transformations are applied.
About rot.apply()
As explained in the Wikipedia article Quaternions and spatial rotation, to apply a rotation described by quaternion q to a vector p, you compute rp = q * p * q^(-1), where rp is the newly rotated vector. If you have a working quaternion library implemented in your game, you should either already have this operation implemented, or should implement it now, because this is the core of using quaternions as rotations.
For instance, if you have a quaternion that describes a rotation of 90° around (0,0,1), and you apply it to (1,0,0), you get the vector (0,1,0), i.e. the original vector rotated by the quaternion. This is equivalent to converting your quaternion to a matrix and doing a matrix to column-vector multiplication (by matrix multiplication rules, it yields another column vector, the rotated vector).
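For reference, here is a minimal sketch of what rot.apply() would look like, written with GLM types (an assumption; the answer's own quaternion type works the same way):
glm::vec3 apply(const glm::quat& q, const glm::vec3& p)
{
    glm::quat qp(0.0f, p.x, p.y, p.z);          // embed the vector as a pure quaternion (w = 0)
    glm::quat r = q * qp * glm::conjugate(q);   // rp = q * p * q^(-1); conjugate == inverse for unit quats
    return glm::vec3(r.x, r.y, r.z);
}
In GLM the overloaded operator already does this for you: glm::vec3 rotated = q * p;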

Creating a view matrix with glm

I am trying to create a view matrix with glm. I know of glm::lookAt and understand how it works. I want to know if there are similar functions that will achieve the same outcome but take different parameters. For example, is there a function that takes an up-vector, a directional bearing on the plane perpendicular to that vector, and an angle? (i.e. The sky is that way, I turn N degrees/radians to my left and tilt my head M degrees/radians upward.)
You can just build the matrix by composing a set of operations:
// define your up vector
glm::vec3 upVector = glm::vec3(0, 1, 0);
// rotate around to a given bearing: yaw
glm::mat4 camera = glm::rotate(glm::mat4(), bearing, upVector);
// Define the 'look up' axis, should be orthogonal to the up axis
glm::vec3 pitchVector = glm::vec3(1, 0, 0);
// rotate around to the required head tilt: pitch
camera = glm::rotate(camera, tilt, pitchVector);
// now get the view matrix by taking the camera inverse
glm::mat4 view = glm::inverse(camera);
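If the camera also has a world position (an assumption; the question only asks about orientation), the same pattern extends naturally: compose the translation with the rotations, then invert the whole camera transform at once. glm::mat4(1.0f) is used here because newer GLM versions no longer default-construct matrices to identity:
glm::vec3 cameraPosition = glm::vec3(0, 2, 5); // hypothetical camera position
glm::mat4 camera = glm::translate(glm::mat4(1.0f), cameraPosition)  // place the camera
                 * glm::rotate(glm::mat4(1.0f), bearing, upVector)  // then yaw
                 * glm::rotate(glm::mat4(1.0f), tilt, pitchVector); // then pitch
glm::mat4 view = glm::inverse(camera);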

OpenGL - Camera Orbiting a Point with Quaternions

So I currently use quaternions to store and modify the orientation of the objects in my OpenGL scene, as well as the orientation of the camera. When rotating these objects directly (i.e. saying I want to rotate the camera Z amount around the Z-axis, or I want to rotate an object X around the X-axis and then translate it T along its local Z-axis), I have no problems, so I can only assume my fundamental rotation code is correct.
However, I am now trying to implement a function to make my camera orbit an arbitrary point in space, and am having quite a hard time of it. Here is what I have come up with so far, which doesn't work (this takes place within the Camera class).
//Get the inverse of the orientation, which should represent the orientation
//"from" the focal point to the camera
Quaternion InverseOrient = m_Orientation;
InverseOrient.Invert();
///Rotation
//Create change quaternions for each axis
Quaternion xOffset = Quaternion();
xOffset.FromAxisAngle(xChange * m_TurnSpeed, 1.0, 0.0, 0.0);
Quaternion yOffset = Quaternion();
yOffset.FromAxisAngle(yChange * m_TurnSpeed, 0.0, 1.0, 0.0);
Quaternion zOffset = Quaternion();
zOffset.FromAxisAngle(zChange * m_TurnSpeed, 0.0, 0.0, 1.0);
//Multiply the change quats into the inversed orientation quat
InverseOrient = yOffset * zOffset * xOffset * InverseOrient;
//Translate according to the focal distance
//Start with a vector relative to the position being looked at
sf::Vector3<float> RelativePos(0, 0, -m_FocalDistance);
//Rotate according to the quaternion
RelativePos = InverseOrient.MultVect(RelativePos);
//Add that relative position to the focal point
m_Position.x = m_FocalPoint->x + RelativePos.x;
m_Position.y = m_FocalPoint->y + RelativePos.y;
m_Position.z = m_FocalPoint->z + RelativePos.z;
//Now set the orientation to the inverse of the quaternion
//used to position the camera
m_Orientation = InverseOrient;
m_Orientation.Invert();
What ends up happening is that the camera rotates around some other point - certainly not the object, but apparently not itself either, as though it were looping through space in a spiral path.
So this is clearly not the way to go about orbiting a camera around a point, but what is?
I would operate on the camera first in spherical coordinates and convert to quaternions as necessary.
Given the following assumptions:
The camera has no roll
The point you are looking at is [x, y, z]
You have yaw, pitch angles
[0, 1, 0] is "up"
Here is how to calculate some important values:
The view vector: v = [vx, vy, vz] = [cos(yaw)*cos(pitch), sin(pitch), -sin(yaw)*cos(pitch)]
The camera location: p = [x, y, z] - r*v, where r is the distance from the point you are looking at
The right vector: cross product v with [0, 1, 0]
The up vector: cross product of the right vector with v
Your view quaternion is [0, vx, vy, vz] (that's the view vector with a 0 w-component)
Now in your simulation you can operate on pitch/yaw, which are pretty intuitive. If you want to do interpolation, convert the before and after pitch+yaws into quaternions and do quaternion spherical linear interpolation.
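A minimal GLM sketch of the recipe above (yaw, pitch, r, and target are assumed names; r is the orbit distance):
glm::vec3 v(cos(yaw) * cos(pitch), sin(pitch), -sin(yaw) * cos(pitch)); // view vector
glm::vec3 p = target - r * v;                                           // camera location
glm::vec3 right = glm::normalize(glm::cross(v, glm::vec3(0, 1, 0)));    // right vector
glm::vec3 up = glm::cross(right, v);                                    // up vector
glm::mat4 view = glm::lookAt(p, target, up);
// For smooth interpolation, build quaternions from the before/after pitch+yaw pairs and slerp them,
// e.g. glm::quat q = glm::slerp(qStart, qEnd, t);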

Issue with GLM Camera X,Y Rotation introducing Z Rotation

So I've been having trouble with a camera I've implemented in OpenGL and C++ using the GLM library. The type of camera I'm aiming for is a fly around camera which will allow easy exploration of a 3D world. I have managed to get the camera pretty much working, it's nice and smooth, looks around and the movement seems to be nice and correct.
The only problem I seem to have is that rotation around the camera's X and Y axes (looking up and down) introduces some rotation about its Z axis. This has the result of causing the world to slightly roll whilst travelling about.
As an example... if I have a square quad in front of the camera and move the camera in a circular motion, so as if looking around in a circle with your head, once the motion is complete the quad will have rolled slightly as if you've tilted your head.
My camera is currently a component which I can attach to an object/entity in my scene. Each entity has a "Frame" which is basically the model matrix for that entity. The Frame contains the following attributes:
glm::mat4 m_Matrix;
glm::vec3 m_Position;
glm::vec3 m_Up;
glm::vec3 m_Forward;
These are then used by the camera to create the appropriate viewMatrix like this:
const glm::mat4& CameraComponent::GetViewMatrix()
{
    //Get the transform of the object
    const Frame& transform = GetOwnerGO()->GetTransform();

    //Update the viewMatrix
    m_ViewMatrix = glm::lookAt(transform.GetPosition(),                          //position of camera
                               transform.GetPosition() + transform.GetForward(), //position to look at
                               transform.GetUp());                               //up vector

    //return reference to the view matrix
    return m_ViewMatrix;
}
And now... here are my rotate X and Y methods within the Frame object, which I'm guessing is the place of the problem:
void Frame::RotateX( float delta )
{
    glm::vec3 cross = glm::normalize(glm::cross(m_Up, m_Forward)); //calculate x axis
    glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, cross);

    m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f))); //Rotate forward vector by new rotation
    m_Up = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Up, 0.0f)));           //Rotate up vector by new rotation
}

void Frame::RotateY( float delta )
{
    glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);

    //Rotate forward vector by new rotation
    m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f)));
}
So somewhere in there, there's a problem which I've been searching around trying to fix. I've been messing with it for a few days now, trying random things but I either get the same result, or the z axis rotation is fixed but other bugs appear such as incorrect X, Y rotation and camera movement.
I had a look at gimbal lock but from what I understood of it, this problem didn't seem quite like gimbal lock to me. But I may be wrong.
Store the current pitch/yaw angles and generate the camera matrix on-the-fly instead of trying to accumulate small changes on the intermediate vectors; a sketch of that approach follows the quick fix below.
In your RotateY function, change it from this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
to this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, glm::vec3(0,1,0));
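And here is a minimal sketch of the pitch/yaw approach (the m_Pitch/m_Yaw members and the UpdateBasis name are assumptions, not part of the original code): keep only two angles and rebuild the basis from scratch every frame, so no roll can ever accumulate.
void Frame::UpdateBasis()
{
    // Rebuild the orientation from the stored angles each frame (angles in radians)
    glm::quat yaw   = glm::angleAxis(m_Yaw,   glm::vec3(0, 1, 0)); // around world up
    glm::quat pitch = glm::angleAxis(m_Pitch, glm::vec3(1, 0, 0)); // around the camera's right axis
    glm::quat orientation = yaw * pitch;                           // pitch is applied first, then yaw

    m_Forward = orientation * glm::vec3(0, 0, -1); // -Z is the default look direction
    m_Up      = orientation * glm::vec3(0, 1, 0);
}
The mouse handlers then only do m_Yaw += deltaYaw; m_Pitch += deltaPitch; and call UpdateBasis().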

C++ OpenGL Quaternion for Camera flips it upside down

It does look at the target when I move the target to the right, and keeps looking at it until the target goes 180 degrees from the -z axis, at which point it decides to go the other way.
Matrix4x4 camera::GetViewMat()
{
    Matrix4x4 oRotate, oView;
    oView.SetIdentity();

    Vector3 lookAtDir = m_targetPosition - m_camPosition;
    Vector3 lookAtHorizontal = Vector3(lookAtDir.GetX(), 0.0f, lookAtDir.GetZ());
    lookAtHorizontal.Normalize();

    float angle = acosf(Vector3(0.0f, 0.0f, -1.0f).Dot(lookAtHorizontal));

    Quaternions horizontalOrient(angle, Vector3(0.0f, 1.0f, 0.0f));

    ori = horizontalOrient;
    ori.Conjugate();
    oRotate = ori.ToMatrix();

    Vector3 inverseTranslate = Vector3(-m_camPosition.GetX(), -m_camPosition.GetY(), -m_camPosition.GetZ());
    oRotate.Transform(inverseTranslate);
    oRotate.Set(0, 3, inverseTranslate.GetX());
    oRotate.Set(1, 3, inverseTranslate.GetY());
    oRotate.Set(2, 3, inverseTranslate.GetZ());

    oView = oRotate;
    return oView;
}
As promised, a bit of code showing the way I'd make a camera look at a specific point in space at all times.
First of all, we need a method to construct a quaternion from an angle and an axis; I happen to have that on pastebin, and the angle input is in radians:
http://pastebin.com/vLcx4Qqh
Make sure you don't input the axis (0,0,0), which wouldn't make any sense whatsoever.
Now for the actual update method: we need to get the quaternion rotating the camera from its default orientation to pointing towards the target point. PLEASE note I just wrote this off the top of my head; it probably needs a little debugging and may need a little optimization, but it should at least give you a push in the right direction.
void camera::update()
{
    // First get the direction from the camera's position to the target point
    vec3 lookAtDir = m_targetPoint - m_position;

    // I'm going to divide the vector into two 'components', the Y axis rotation
    // and the Up/Down rotation, like a regular camera would work.

    // First to calculate the rotation around the Y axis, so we zero out the y
    // component:
    vec3 lookAtHorizontal = vec3(lookAtDir.x, 0.0f, lookAtDir.z).normalize();

    // Get the quaternion from 'default' direction to the horizontal direction
    // In this case, 'default' direction is along the -z axis, like most OpenGL
    // programs. Make sure the projection matrix works according to this.
    float angle = acos(vec3(0.0f, 0.0f, -1.0f).dot(lookAtHorizontal));
    quaternion horizontalOrient(angle, vec3(0.0f, 1.0f, 0.0f));

    // Since we already stripped the Y component, we can simply get the up/down
    // rotation from it as well.
    angle = acos(lookAtDir.normalize().dot(lookAtHorizontal));
    if(angle) horizontalOrient *= quaternion(angle, lookAtDir.cross(lookAtHorizontal));

    // ...
    m_orientation = horizontalOrient;
}
Now to actually take m_orientation and m_position and get the world -> camera matrix
// First inverse each element (-position and inverse the quaternion);
// the position is rotated, since the position within a matrix is 'added' last
// to the output vector, so it needs to account for rotation of the space.
mat3 rotationMatrix = m_orientation.inverse().toMatrix();
vec3 inverseTranslate = rotationMatrix * -m_position; // Note the minus

mat4 matrix = mat4(rotationMatrix); // the 3x3 rotation expanded to 4x4; the last entry (bottom right of the matrix) is a 1.0f like an identity matrix would be

// This bit is row-major in my case; you just need to set the translation of the matrix.
matrix[3]  = inverseTranslate.x;
matrix[7]  = inverseTranslate.y;
matrix[11] = inverseTranslate.z;
EDIT: I think it should be obvious, but just for completeness: .dot() takes the dot product of the vectors and .cross() takes the cross product; the object executing the method is vector A, and the parameter of the method is vector B.
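For comparison, the same world -> camera matrix built with GLM, assuming the orientation and position were stored as a glm::quat and glm::vec3 (an aside, not part of the original answer):
glm::mat4 rotation    = glm::mat4_cast(glm::inverse(orientation));   // inverse of the camera orientation
glm::mat4 translation = glm::translate(glm::mat4(1.0f), -position);  // inverse of the camera position
glm::mat4 view        = rotation * translation;                      // world -> camera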