Quaternions -> Euler Angles -> Rotation Matrix trouble (GLM) - c++

I'm writing a program that loads a file containing a scene description and then displays it using OpenGL. I'm using GLM for all of my math operations. Rotations in the scene file are stored in quaternion format. My scene management system takes rotations for objects in the form of Euler angles, and these angles are later converted to a rotation matrix when drawing.
My loading process thus takes the quaternion rotations, converts them to Euler angles for storage in my object class, then converts these Euler angles to rotation matrices for drawing. I'm using the glm::eulerAngles and glm::eulerAngleYXZ functions (respectively) to perform these two operations.
However, I am getting incorrect results. For example, if I understand correctly the quaternion {0.500 -0.500 0.500 0.500} (that's W X Y Z) should describe the rotation taking an arrow from the +Z axis to the +Y axis. When I run the program, however, I get the arrow pointing along the +X axis.
I would assume that there is some flaw in my understanding of the quaternions, but I am able to get my expected results by skipping the intermediary euler angle form. By converting the quaternion directly to a rotation matrix using glm::toMat4, I get a rotation that points my +Z arrow towards +Y.
I'm having trouble reconciling these two different outputs, considering that both methods seem simple and correct. To simplify my question: why do these two seemingly equivalent methods produce different results?
glm::quat q(.5, -.5, .5, .5);
glm::vec3 euler = glm::eulerAngles(q) * 3.14159f / 180.f; // eulerAngleYXZ takes radians but eulerAngles returns degrees
glm::mat4 transform1 = glm::eulerAngleYXZ(euler.y, euler.x, euler.z);
// transform1 rotates a +Z arrow so that it points at +X
glm::quat q(.5, -.5, .5, .5);
glm::mat4 transform2 = glm::toMat4(q);
// transform2 rotates a +Z arrow so that it points at +Y

You have probably figured this out by now... but
What Euler angle sequence does the function:
glm::vec3 euler = glm::eulerAngles(q) * 3.14159f / 180.f;
return? If it does not explicitly return a 'YXZ' sequence, you will not be able to use the next function properly:
glm::mat4 transform1 = glm::eulerAngleYXZ(euler.y, euler.x, euler.z);
Your variable 'euler' must be the same sequence type as the function you specify to transform it into a rotation matrix.
Looking at the GLM source, it appears that 'glm::eulerAngles' returns an 'XYZ' sequence, i.e. pitch, yaw, and roll. Thus, assuming they are 'YXZ' (yaw, pitch, roll) is incorrect.
As said before, with Euler angles and rotation matrices, order matters!
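A minimal sketch of what matching the two conventions could look like, assuming glm::eulerAngles really does hand back (pitch, yaw, roll) about X, Y, Z as described above, and that your GLM build returns radians:
glm::quat q(0.5f, -0.5f, 0.5f, 0.5f);   // w, x, y, z
glm::vec3 euler = glm::eulerAngles(q);  // assumed to be (X, Y, Z) angles in radians
// Rebuild the matrix with the matching XYZ overload from <glm/gtx/euler_angles.hpp>
glm::mat4 transform = glm::eulerAngleXYZ(euler.x, euler.y, euler.z);
// Worth verifying against glm::toMat4(q); whether the decomposition and the
// composition agree exactly still depends on the GLM version.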

The order of multiplication is important when dealing with Euler angles. YXZ and XYZ produce very different rotations.
You could calculate separate matrices for each axis, and then multiply them together in the order you need.
glm::quat q(.5, -.5, .5, .5);
glm::vec3 euler = glm::eulerAngles(q) * 3.14159f / 180.f;
glm::mat4 transformX = glm::eulerAngleX(euler.x);
glm::mat4 transformY = glm::eulerAngleY(euler.y);
glm::mat4 transformZ = glm::eulerAngleZ(euler.z);
glm::mat4 transform1 = transformX * transformY * transformZ; // or some other order

I think the result is already in radians, so there is no need to convert.
glm::quat q(.5, -.5, .5, .5);
glm::vec3 euler = glm::eulerAngles(q); // * 3.14159f / 180.f;
glm::mat4 transformX = glm::eulerAngleX(euler.x);
glm::mat4 transformY = glm::eulerAngleY(euler.y);
glm::mat4 transformZ = glm::eulerAngleZ(euler.z);
glm::mat4 transform1 = transformX * transformY * transformZ; // or some other order
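If you are not sure which unit your GLM build returns (older releases returned degrees unless GLM_FORCE_RADIANS was defined), a quick sanity check along these lines should settle it; the hand-built test quaternion below is only illustrative:
glm::quat test(0.7071f, 0.7071f, 0.0f, 0.0f); // a 90-degree rotation about X, built by hand
glm::vec3 angles = glm::eulerAngles(test);
// Prints roughly (1.5708, 0, 0) if the result is in radians, (90, 0, 0) if degrees.
printf("%f %f %f\n", angles.x, angles.y, angles.z);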

Related

Is there any way to extract a transform matrix from a view matrix in glm?

I need to extract the transform matrix from my camera to assign it to a mesh.
I'm working on a computer graphics project at school; the objective is to simulate the arms of a character in a first-person perspective.
My camera implementation includes a vector3 for the camera position, so I can assign that to my mesh. The problem is that I can't extract the rotation of the camera from my view matrix yet.
I calculate my final pitch and yaw in the rotation function this way; x and y are the current mouse position on the screen:
m_yaw += (x - m_mouseLastPosition.x) * m_rotateSpeed;
m_pitch -= (y - m_mouseLastPosition.y) * m_rotateSpeed;
This is how I update the view matrix when it changes:
glm::vec3 newFront;
newFront.x = -cos(glm::radians(m_yaw)) * cos(glm::radians(m_pitch));
newFront.y = sin(glm::radians(m_yaw)) * cos(glm::radians(m_pitch));
newFront.z = sin(glm::radians(m_pitch));
m_front = glm::normalize(newFront);
m_right = glm::normalize(glm::cross(m_front, m_worldUp));
m_up = glm::normalize(glm::cross(m_right, m_front));
m_viewMatrix = glm::lookAt(m_position, (m_position + m_front), m_up);
Right now I can assign the position of the camera to my mesh, like this
m_mesh.m_transform = glm::translate(glm::mat4(1.0f), m_camera.m_position);
I can assign the camera position successfully, but not rotation.
What I expect is to assign the full camera transform to my mesh, or to extract the rotation independently and assign it to the mesh afterwards.
The steps for setting up the model-view-projection matrix that I have always followed (which doesn't mean they're 100% right), and which appear to be what you are having problems with, are:
// Eye position is in world coordinate system, as is scene_center. up_vector is normalized.
glm::dmat4 view = glm::lookAt(eye_position, scene_center, up_vector);
glm::dmat4 proj = glm::perspective(field_of_view, aspect, near_x, far_x);
// This converts the model from its units to the units of the world coordinate system
glm::dmat4 model = glm::scale(glm::dmat4(1.0), glm::dvec3(1.0, 1.0, 1.0));
// Add model level rotations here utilizing glm::rotate
// offset is where the object's origin should be mapped to in the world coordinate system
model = glm::translate(model, offset);
// Order of course matters here.
glm::dmat4 mvp = proj * view * model;
Hope that helps.
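As for pulling the camera's transform back out of an existing view matrix (the actual question), a sketch of one option: the view matrix is the inverse of the camera's world transform, so inverting it (or conjugating the quaternion, for the rotation part alone) recovers the transform. viewMatrix and the result names below are placeholders:
// Full camera world transform (rotation + position) from the view matrix.
glm::mat4 cameraTransform = glm::inverse(viewMatrix);
// Rotation only: cast the upper-left 3x3 to a quaternion, then conjugate it
// (for a unit quaternion the conjugate is the inverse).
glm::quat cameraRotation = glm::conjugate(glm::quat_cast(glm::mat3(viewMatrix)));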
Thank you so much for your answers, they helped a lot.
I managed to solve my problem in a very simple way: I just had to directly assign the final transform to my mesh using the separate properties of my camera.
GLM is so new to me, I wasn't familiar with the way it handles matrix multiplications.
The final code takes a translation matrix with the camera position, then rotates the result around the Y axis with my yaw, and finally rotates that around the X axis with my pitch.
m_mesh.m_transform = glm::rotate(
    glm::rotate(
        glm::translate(glm::mat4(1.0f), camera.m_position),
        glm::radians(-camera.m_yaw), glm::vec3(0.0f, 1.0f, 0.0f)),
    glm::radians(-camera.m_pitch), glm::vec3(1.0f, 0.0f, 0.0f));

How to properly rotate and scale an object in OpenGL?

I'm trying to build the model matrix every frame; for that I'm creating a translation, rotation, and scale matrix and multiplying them. But I can't seem to figure out how to build the rotation matrix and scale it properly.
Here's what I'm doing:
glm::mat4 scale = glm::scale(mat4(1.0f), my_models[i].myscale);
glm::mat4 rotateM(1.0);
glm::mat4 translate = glm::translate(mat4(1.0f), my_models[i].initialPos);
rotateM = mat4_cast(my_models[i].Quat);
rotateM = glm::rotate(rotateM, (float) my_models[i].angle * t, my_models[i].animation_axis[0]);
my_models[i].modelMatrix = translate * rotateM *scale;
my_models[i].Quat = quat_cast(my_models[i].modelMatrix);
In the Constructor I'm using:
quat Quat = glm::angleAxis(glm::radians(90.f), glm::vec3(0.f, 1.f, 0.f));
If my_models[i].myscale is exactly 1.0f it rotates just fine, but if it is bigger the object keeps growing and rotates weirdly. Quaternions are very new to me, so I'm assuming I'm messing up there.
What am I missing? Are there simpler ways to construct the model's rotation matrix? If so, what information should I be saving?
Edit:
As jparima suggested, the following fixed my problem.
glm::mat4 scale = glm::scale(mat4(1.0f), my_models[i].myscale);
glm::mat4 rotateM(1.0);
glm::mat4 translate = glm::translate(mat4(1.0f), my_models[i].initialPos);
my_models[i].Quat = rotate(my_models[i].Quat, my_models[i].angle * t, my_models[i].animation_axis[0]);
rotateM = mat4_cast(my_models[i].Quat);
my_models[i].modelMatrix = translate * rotateM * scale;
The GLM quaternion.hpp documentation for quat_cast says:
Converts a pure rotation 4 * 4 matrix to a quaternion.
You are passing in the whole model matrix there, which also contains scale and translation. In fact, I don't know why you are converting a matrix to a quaternion at all. Therefore you could remove the last line (quat_cast) and update the quaternion directly if you want to apply rotations.

Converting glm::lookat matrix to quaternion and back

I am using glm to create a camera class, and I am running into some problems with a lookat function. I am using a quaternion to represent rotation, but I want to use glm's prewritten lookat function to avoid duplicating code. This is my lookat function right now:
void Camera::LookAt(float x, float y, float z) {
glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
rotation = glm::toQuat(lookMat);
}
However, when I call LookAt(0.0f,0.0f,0.0f), my camera is not rotated to that point. When I call glm::eulerAngles(rotation) after the LookAt call, I get a vec3 with the following values: (180.0f, 0.0f, 180.0f). position is (0.0f,0.0f,-10.0f), so I should not have any rotation at all to look at 0,0,0. This is the function which builds the view matrix:
glm::mat4 Camera::GetView() {
view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
return view;
}
Why am I not getting the correct quaternion, and how can I fix my code?
Solution:
You have to invert the rotation of the quaternion by conjugating it:
using namespace glm;
quat orientation = conjugate(toQuat(lookAt(vecA, vecB, up)));
Explanation:
The lookAt function is a replacement for gluLookAt, which is used to construct a view matrix.
The view matrix is used to rotate the world around the viewer, and is therefore the inverse of the cameras transform.
By taking the inverse of the inverse, you can get the actual transform.
I ran into something similar. The short answer is that your lookMat might need to be inverted/transposed, because it is a camera rotation (at least in my case), as opposed to a world rotation. Rotating the world is the inverse of rotating the camera.
I have a m_current_quat which is a quaternion that stores the current camera rotation. I debugged the issue by printing out the matrix produced by glm::lookAt, and comparing with the resulting matrix that I get by applying m_current_quat and a translation by m_camera_position. Here is the relevant code for my test.
void PrintMatrix(const GLfloat m[16], const string &str)
{
printf("%s:\n", str.c_str());
for (int i=0; i<4; i++)
{
printf("[");
//for (int j=i*4+0; j<i*4+4; j++) // row major, 0, 1, 2, 3
for (int j=i+0; j<16; j+=4) // OpenGL is column major by default, 0, 4, 8, 12
{
//printf("%d, ", j); // print matrix index
printf("%.2f, ", m[j]);
}
printf("]\n");
}
printf("\n");
}
void CameraQuaternion::SetLookAt(glm::vec3 look_at)
{
m_camera_look_at = look_at;
// update the initial camera direction and up
//m_initial_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
//glm::vec3 initial_right_vector = glm::cross(m_initial_camera_direction, glm::vec3(0, 1, 0));
//m_initial_camera_up = glm::cross(initial_right_vector, m_initial_camera_direction);
m_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
glm::vec3 right_vector = glm::cross(m_camera_direction, glm::vec3(0, 1, 0));
m_camera_up = glm::cross(right_vector, m_camera_direction);
glm::mat4 lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);
// Note: m_current_quat quat stores the camera rotation with respect to the camera space
// The lookat_matrix produces a transformation for world space, where we rotate the world
// with the camera at the origin
// Our m_current_quat needs to be an inverse, which is accomplished by transposing the lookat_matrix
// since the rotation matrix is orthonormal.
m_current_quat = glm::toQuat(glm::transpose(lookat_matrix));
// Testing: Make sure our model view matrix after gluLookAt, glmLookAt, and m_current_quat agrees
GLfloat current_model_view_matrix[16];
//Test 1: gluLookAt
gluLookAt(m_camera_position.x, m_camera_position.y, m_camera_position.z,
m_camera_look_at.x, m_camera_look_at.y, m_camera_look_at.z,
m_camera_up.x, m_camera_up.y, m_camera_up.z);
glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
PrintMatrix(current_model_view_matrix, "Model view after gluLookAt");
//Test 2: glm::lookAt
lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);
PrintMatrix(glm::value_ptr(lookat_matrix), "Model view after glm::lookAt");
//Test 3: m_current_quat
glLoadIdentity();
glMultMatrixf( glm::value_ptr( glm::transpose(glm::mat4_cast(m_current_quat))) );
glTranslatef(-m_camera_position.x, -m_camera_position.y, -m_camera_position.z);
glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
PrintMatrix(current_model_view_matrix, "Model view after quaternion transform");
return;
}
Hope this helps.
I want to use glm's prewritten lookat function to avoid duplicating code.
But it's not duplicating code. The matrix that comes out of glm::lookAt is just a mat4. Going through the conversion from a quaternion to 3 vectors, only so that glm::lookAt can convert them back into an orientation, is just a waste of time. You've already done 85% of lookAt's job; just do the rest.
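If your GLM is recent enough (0.9.9+), there is also glm::quatLookAt in <glm/gtc/quaternion.hpp>, which builds the orientation quaternion directly from a forward direction and an up vector and skips the matrix round-trip entirely. A sketch, with position and target as placeholder names:
glm::vec3 direction = glm::normalize(target - position); // direction the camera should face
glm::quat orientation = glm::quatLookAt(direction, glm::vec3(0.0f, 1.0f, 0.0f));
// This should match the conjugate-of-lookAt result discussed elsewhere in this
// thread, but it is worth verifying against your own handedness conventions.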
You are getting the (or better: a) correct rotation.
When I call glm::eulerAngles(rotation) after the LookAt call, I get a vec3 with the following values: (180.0f, 0.0f, 180.0f). position is (0.0f,0.0f,-10.0f), so I should not have any rotation at all to look at 0,0,0.
GLM follows the conventions of the old fixed-function GL. There, eye space was defined with the camera placed at the origin, x pointing to the right, y up, and looking in the -z direction. Since you want to look in the positive z direction, the camera has to turn. Now, as a human, I would have described that as a rotation of 180 degrees around y, but a rotation of 180 degrees around x combined with another 180-degree rotation around z has the same effect.
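A quick way to convince yourself that those two descriptions are the same rotation is to build both and compare the resulting matrices; a small sketch:
// (180 degrees about X) combined with (180 degrees about Z) equals (180 degrees about Y).
glm::quat a = glm::quat(glm::radians(glm::vec3(180.0f, 0.0f, 180.0f))); // from Euler angles in radians
glm::quat b = glm::quat(glm::radians(glm::vec3(0.0f, 180.0f, 0.0f)));
glm::mat3 ma = glm::mat3_cast(a);
glm::mat3 mb = glm::mat3_cast(b);
// ma and mb are the same matrix (up to floating-point error), even though the
// quaternions themselves may differ in sign.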
When multiplied by the LookAt view matrix, the world-space vectors are rotated (brought) into the camera's view while the camera's orientation is kept in place.
So an actual rotation of the camera by 45 degrees to the right is achieved with a matrix which applies a 45-degree rotation to the left to all the world-space vertices.
For a Camera object you would need to get its local forward and up direction vectors in order to calculate a lookAt view matrix.
viewMatrix = glm::lookAtLH (position, position + camera_forward, camera_up);
When using quaternions to store the orientation of an object (be it a camera or anything else), usually this rotation quat is used to calculate the vectors which define its local-space (left-handed one in the below example):
glm::vec3 camera_forward = rotation * glm::vec3(0,0,1); // +Z is forward direction
glm::vec3 camera_right = rotation * glm::vec3(1,0,0); // +X is right direction
glm::vec3 camera_up = rotation * glm::vec3(0,1,0); // +Y is up direction
Thus, the world-space directions should be rotated 45 degrees to the right in order to reflect the correct orientation of the camera.
This is why the lookMat or the quat obtained from it cannot be directly used for this purpose, since the orientation they describe is a reversed one.
Correct rotation can be done in two ways:
Calculate the inverse of the lookAt matrix and multiply the world-space direction vectors by this rotation matrix
(more efficient) Convert the LookAt matrix into a quaternion and conjugate it instead of applying glm::inverse, since the result is a unit quat and for such quats the inverse is equal to the conjugate.
Your LookAt should look like this:
void Camera::LookAt(float x, float y, float z) {
glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
rotation = glm::conjugate( glm::quat_cast(lookMat));
}

OpenGL - Camera Orbiting a Point with Quaternions

So I currently use quaternions to store and modify the orientation of the objects in my OpenGL scene, as well as the orientation of the camera. When rotating these objects directly (i.e. saying I want to rotate the camera Z amount around the Z-axis, or I want to rotate an object X around the X-axis and then translate it T along its local Z-axis), I have no problems, so I can only assume my fundamental rotation code is correct.
However, I am now trying to implement a function to make my camera orbit an arbitrary point in space, and am having quite a hard time of it. Here is what I have come up with so far, which doesn't work (this takes place within the Camera class).
//Get the inverse of the orientation, which should represent the orientation
//"from" the focal point to the camera
Quaternion InverseOrient = m_Orientation;
InverseOrient.Invert();
///Rotation
//Create change quaternions for each axis
Quaternion xOffset = Quaternion();
xOffset.FromAxisAngle(xChange * m_TurnSpeed, 1.0, 0.0, 0.0);
Quaternion yOffset = Quaternion();
yOffset.FromAxisAngle(yChange * m_TurnSpeed, 0.0, 1.0, 0.0);
Quaternion zOffset = Quaternion();
zOffset.FromAxisAngle(zChange * m_TurnSpeed, 0.0, 0.0, 1.0);
//Multiply the change quats into the inversed orientation quat
InverseOrient = yOffset * zOffset * xOffset * InverseOrient;
//Translate according to the focal distance
//Start with a vector relative to the position being looked at
sf::Vector3<float> RelativePos(0, 0, -m_FocalDistance);
//Rotate according to the quaternion
RelativePos = InverseOrient.MultVect(RelativePos);
//Add that relative position to the focal point
m_Position.x = m_FocalPoint->x + RelativePos.x;
m_Position.y = m_FocalPoint->y + RelativePos.y;
m_Position.z = m_FocalPoint->z + RelativePos.z;
//Now set the orientation to the inverse of the quaternion
//used to position the camera
m_Orientation = InverseOrient;
m_Orientation.Invert();
What ends up happening is that the camera rotates around some other point - certainly not the object, but apparently not itself either, as though it were looping through space in a spiral path.
So this is clearly not the way to go about orbiting a camera around a point, but what is?
I would operate on the camera first in spherical coordinates and convert to quaternions as necessary.
Given the following assumptions:
The camera has no roll
The point you are looking at is [x, y, z]
You have yaw, pitch angles
[0, 1, 0] is "up"
Here is how to calculate some important values:
The view vector: v = [vx, vy, vz] = [cos(yaw)*cos(pitch), sin(pitch), -sin(yaw)*cos(pitch)]
The camera location: p = [x, y, z] - r*v
The right vector: cross product v with [0, 1, 0]
The up vector: cross product of the right vector with v
Your view quaternion is [0, vx, vy, vz] (that's the view vector with a 0 w-component)
Now in your simulation you can operate on pitch/yaw, which are pretty intuitive. If you want to do interpolation, convert the before and after pitch+yaws into quaternions and do quaternion spherical linear interpolation.
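A sketch of that recipe with GLM; yaw, pitch, r, and target are illustrative names, angles are in radians, and the up computation follows the cross-product convention used elsewhere in this thread:
// Rebuild the camera from spherical coordinates every frame.
glm::vec3 v(cos(yaw) * cos(pitch), sin(pitch), -sin(yaw) * cos(pitch)); // view direction
glm::vec3 position = target - r * v;                                    // orbit at radius r
glm::vec3 right = glm::normalize(glm::cross(v, glm::vec3(0.0f, 1.0f, 0.0f)));
glm::vec3 up = glm::normalize(glm::cross(right, v));
glm::mat4 view = glm::lookAt(position, target, up);
// For smooth transitions, convert the before/after orientations to quaternions
// (e.g. glm::quat_cast(glm::mat3(view))) and interpolate with glm::slerp.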

Issue with GLM Camera X,Y Rotation introducing Z Rotation

So I've been having trouble with a camera I've implemented in OpenGL and C++ using the GLM library. The type of camera I'm aiming for is a fly around camera which will allow easy exploration of a 3D world. I have managed to get the camera pretty much working, it's nice and smooth, looks around and the movement seems to be nice and correct.
The only problem I seem to have is that rotation about the camera's X and Y axes (looking up and down) introduces some rotation about its Z axis. This has the result of causing the world to slightly roll whilst travelling about.
As an example... if I have a square quad in front of the camera and move the camera in a circular motion, so as if looking around in a circle with your head, once the motion is complete the quad will have rolled slightly as if you've tilted your head.
My camera is currently a component which I can attach to an object/entity in my scene. Each entity has a "Frame" which is basically the model matrix for that entity. The Frame contains the following attributes:
glm::mat4 m_Matrix;
glm::vec3 m_Position;
glm::vec3 m_Up;
glm::vec3 m_Forward;
These are then used by the camera to create the appropriate viewMatrix like this:
const glm::mat4& CameraComponent::GetViewMatrix()
{
//Get the transform of the object
const Frame& transform = GetOwnerGO()->GetTransform();
//Update the viewMatrix
m_ViewMatrix = glm::lookAt(transform.GetPosition(), //position of camera
transform.GetPosition() + transform.GetForward(), //position to look at
transform.GetUp()); //up vector
//return reference to the view matrix
return m_ViewMatrix;
}
And now... here are my rotate X and Y methods within the Frame object, which I'm guessing is the place of the problem:
void Frame::RotateX( float delta )
{
glm::vec3 cross = glm::normalize(glm::cross(m_Up, m_Forward)); //calculate x axis
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, cross);
m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f))); //Rotate forward vector by new rotation
m_Up = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Up, 0.0f))); //Rotate up vector by new rotation
}
void Frame::RotateY( float delta )
{
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
//Rotate forward vector by new rotation
m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f)));
}
So somewhere in there, there's a problem which I've been searching around trying to fix. I've been messing with it for a few days now, trying random things, but I either get the same result, or the Z-axis rotation is fixed but other bugs appear, such as incorrect X/Y rotation and camera movement.
I had a look at gimbal lock but from what I understood of it, this problem didn't seem quite like gimbal lock to me. But I may be wrong.
Store the current pitch/yaw angles and generate the camera matrix on-the-fly instead of trying to accumulate small changes on the intermediate vectors.
In your RotateY function, change it from this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
to this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, glm::vec3(0,1,0));
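And a sketch of the first suggestion, rebuilding the basis from absolute yaw/pitch each frame instead of accumulating increments into m_Forward and m_Up; the member names mirror the question's code, while yaw and pitch are assumed to be new members that the mouse input updates:
// Regenerate the camera vectors from absolute angles so no roll can accumulate.
// Angles are in radians; the axis conventions here are illustrative.
glm::vec3 forward;
forward.x = cos(pitch) * sin(yaw);
forward.y = sin(pitch);
forward.z = -cos(pitch) * cos(yaw);
m_Forward = glm::normalize(forward);
glm::vec3 right = glm::normalize(glm::cross(m_Forward, glm::vec3(0.0f, 1.0f, 0.0f)));
m_Up = glm::normalize(glm::cross(right, m_Forward));
m_ViewMatrix = glm::lookAt(m_Position, m_Position + m_Forward, m_Up);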