Rotating camera lookAt point with quaternions - OpenGL

I am trying to rotate the camera using quaternions, but I am having problems doing it.
The first thing I noticed is that when I execute this, the camera position and the camera look-at point become almost the same (in some cases exactly the same), and then I get precision problems, and everything else that comes with them, when I try to move the camera.
if (Input::getInstance()->isMouseDown(SDL_BUTTON_RIGHT - 1)) {
    //log_.debug("Camera Mouse Left down");
    glm::vec2 mouseDelta = glm::ivec2(oldX, oldY) - Input::getInstance()->getMousePosition();
    glm::quat q1 = glm::quat(glm::vec3(glm::radians(mouseDelta.y), glm::radians(mouseDelta.x), 0.0f));
    cameraLook_ = q1 * (direction * mouseSensitivity_) * glm::conjugate(q1) + cameraPosition_;
    //cameraLook_ = glm::rotate(cameraLook_, mouseDelta.x * delta, glm::vec3(0, 1, 0));
    //cameraLook_ = glm::rotate(cameraLook_, mouseDelta.y * delta, glm::vec3(0, 0, 1));
}

I switched back to matrices for my solution, because for now they are easier for me to work with. There are a few things I need to understand about quaternions before I can fix my old problem.
glm::mat4 rotationMatrix = glm::translate(cameraPosition_ - glm::vec3(1.0));
rotationMatrix *= glm::rotate(mouseDelta.x, glm::vec3(0, 1, 0));
rotationMatrix *= glm::rotate(mouseDelta.y, glm::vec3(0, 0, 1));
rotationMatrix *= glm::translate(-cameraPosition_ + glm::vec3(1.0));
cameraLook_ = glm::vec3(rotationMatrix * glm::vec4(cameraLook_, 1.0f));
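For reference, a minimal quaternion-based sketch of the same idea, assuming the mouseDelta and mouseSensitivity_ members used above; it rotates only the offset from the camera to the look-at point, so the two can never collapse onto each other:

// Rotate the offset from the camera to the look-at point, then add the camera
// position back, so the look-at point keeps its distance from the camera.
glm::vec3 offset = cameraLook_ - cameraPosition_;
glm::quat q = glm::quat(glm::vec3(glm::radians(mouseDelta.y * mouseSensitivity_),
                                  glm::radians(mouseDelta.x * mouseSensitivity_),
                                  0.0f));
offset = q * offset; // GLM's quat * vec3 already performs q * v * q^-1
cameraLook_ = cameraPosition_ + offset;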

OpenGL Combining Yaw Pitch Roll

I'm trying to render water with reflection and refraction. I'm new to OpenGL and have stumbled upon what is probably a simple problem. For the reflection texture I want to move the camera down and invert it to capture the image. I have a camera that uses quaternions with yaw and pitch. I thought I would add roll as well and rotate the roll by 180 degrees to get the inversion, but I cannot manage to combine the three quaternions. Here is the update function of the camera:
void UpdateCameraVectors()
{
    // Yaw
    glm::quat aroundY = glm::angleAxis(glm::radians(-RightAngle), glm::vec3(0, 1, 0));
    // Pitch
    glm::quat aroundX = glm::angleAxis(glm::radians(UpAngle), glm::vec3(1, 0, 0));
    // Roll
    glm::quat aroundZ = glm::angleAxis(glm::radians(RollAngle), glm::vec3(0, 0, 1));

    Orientation = aroundY * aroundX;

    glm::quat qF = Orientation * glm::quat(0, 0, 0, -1) * glm::conjugate(Orientation);
    Front = { qF.x, qF.y, qF.z };
    Right = glm::normalize(glm::cross(Front, glm::vec3(0, 1, 0)));
}
Any suggestions on how to invert the view or how to combine the quaternions? I already tried all possible multiplications between them.
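For what it's worth, one way the three quaternions could be composed, reusing the variables from the update function above (a sketch only; whether a 180-degree roll alone produces the reflection you want depends on the rest of your setup):

// Compose yaw, pitch and roll into a single orientation; with RollAngle = 180
// the camera's up vector is flipped relative to the un-rolled camera.
Orientation = aroundY * aroundX * aroundZ;

glm::quat qF = Orientation * glm::quat(0, 0, 0, -1) * glm::conjugate(Orientation);
Front = { qF.x, qF.y, qF.z };

// Derive Right from the rotated up instead of the world up, so that Right
// flips along with the roll.
glm::vec3 Up = Orientation * glm::vec3(0, 1, 0);
Right = glm::normalize(glm::cross(Front, Up));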

Quaternion-based First Person View Camera

I have been learning OpenGL by following the tutorial at https://paroj.github.io/gltut/.
Past the basics, I got a bit stuck understanding quaternions and their relation to spatial orientation and transformations, especially from world space to camera space and vice versa. In the chapter Camera-Relative Orientation, the author makes a camera which rotates a model in world space relative to the camera orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess would be that if we applied the offset to camera space, we would get the first-person camera. Is this correct? Instead, the offset is applied to the model in world space, making the spaceship spin relative to that space, and not to camera space. We just observe it spin from camera space.
Inspired by at least some understanding of quaternions (or so I thought), I tried to implement the first person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
Position is modified in reaction to keyboard actions, while the orientation changes due to mouse movement on screen.
Note: GLM overloads * operator for glm::quat * glm::vec3 with the relation for rotating a vector by a quaternion (more compact form of v' = qvq^-1)
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;

if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
The orientation and position information is passed to glm::lookAt to construct the world-to-camera transformation, like so:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining the model, view and projection matrices and passing the result to the vertex shader displays everything fine - the way one would expect to see things from a first-person POV. However, things get messy when I add mouse movement, tracking the amount of movement in the x and y directions. I want to rotate around the world y-axis and the local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What would the problem be here?
I admit I don't have enough knowledge to debug the mouse movement code properly; I mainly followed advice along the lines of "right multiply to apply the offset in world space, left multiply to do it in camera space."
I feel like I only know things half-way, drawing conclusions from a plethora of e-resources on the subject while getting more educated and more confused at the same time.
Thanks for any answers.
To rotate a glm quaternion representing orientation:
//Precomputation:
//pitch (rot around x in radians),
//yaw (rot around y in radians),
//roll (rot around z in radians)
//are computed/incremented by mouse/keyboard events
To compute view matrix:
void CameraFPSQuaternion::UpdateView()
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    // For a FPS camera we can omit roll
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
If you want to store the quaternion, then you recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around the cam's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));

    m_orientation = glm::normalize(qPitch) * m_orientation;
    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    m_viewMatrix = rotate * translate;
}
If you want to give a rotation speed around a given axis, you use slerp:
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    // For a FPS camera we can omit roll
    glm::quat m_d_orientation = qPitch * qYaw;

    // Interpolate from the identity quaternion towards the per-second delta rotation
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), m_d_orientation, deltaTimeSeconds);
    m_orientation = glm::normalize(delta) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
The problem lay with the use of glm::lookAt for constructing the view matrix. Instead, I am now constructing the view matrix like so:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
For translation, I now multiply by the inverse of the orientation instead of the orientation itself.
glm::quat invOrient = glm::conjugate(orientation);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else is the same, apart from some further offset quaternion normalizations in the mouse movement code.
The camera now behaves and feels like a first-person camera.
I still don't properly understand the difference between view matrix and lookAt matrix, if there is any. But that's the topic for another question.
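As a side note on that last point, here is a rough sketch of how the two relate under the conventions used above, where the conjugate of orientation maps camera axes into world space; fed the camera's own rotated up vector, glm::lookAt should reproduce the same matrix:

glm::quat invOrient = glm::conjugate(orientation);
glm::vec3 forward   = invOrient * glm::vec3(0.0f, 0.0f, -1.0f); // camera forward in world space
glm::vec3 up        = invOrient * glm::vec3(0.0f, 1.0f,  0.0f); // camera up in world space

glm::mat4 viaLookAt = glm::lookAt(position, position + forward, up);
glm::mat4 direct    = glm::mat4_cast(orientation) * glm::translate(glm::mat4(1.0f), -position);
// viaLookAt and direct should agree up to floating-point error; passing the fixed
// world up (0, 1, 0) instead of the rotated up is one way the two can diverge.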

Mouse picking with ray casting in DirectX

I'm trying to implement ray casting with DirectX in C++ but have run into problems. I've tried two approaches: using XMVector3Unproject, and following the instructions provided here on Stack Overflow a while ago.
Unproject Approach:
// p is the mouse location vector
// Don't know if I need to normalize the mouse input
//p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
//p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;

Vector3 orig = XMVector3Unproject(Vector3(p.x, p.y, 0),
                                  0, 0,
                                  g_nScreenWidth, g_nScreenHeight,
                                  0, 1,
                                  GameRenderer.m_matProj,
                                  GameRenderer.m_matView,
                                  GameRenderer.m_matWorld);

Vector3 dest = XMVector3Unproject(Vector3(p.x, p.y, 1),
                                  0, 0,
                                  g_nScreenWidth, g_nScreenHeight,
                                  0, 1,
                                  GameRenderer.m_matProj,
                                  GameRenderer.m_matView,
                                  GameRenderer.m_matWorld);

Vector3 direction = dest - orig;
direction.Normalize();

// shoot a bullet to visualize the ray for testing
g_cObjectManager.createObject(BUL_OBJ, "bullet", orig, direction * 100);
Matrix Inversion Approach:
// Normalized device coordinates
p.x = (2.0f * p.x) / g_nScreenWidth - 1.0f;
p.y = 1.0f - (2.0f * p.y) / g_nScreenHeight;

XMVECTOR det; // Determinant, needed for the matrix inverse function call

Vector3 origin  = Vector3(p.x, p.y, 0);
Vector3 faraway = Vector3(p.x, p.y, 1);

XMMATRIX invViewProj = XMMatrixInverse(&det, GameRenderer.m_matView * GameRenderer.m_matProj);

Vector3 rayorigin = XMVector3Transform(origin, invViewProj);
Vector3 rayend    = XMVector3Transform(faraway, invViewProj);

Vector3 raydirection = rayend - rayorigin;
raydirection.Normalize();

g_cObjectManager.createObject(BUL_OBJ, "bullet", rayorigin, raydirection * 5);
I assume that at least the matrix inversion approach from Stack Overflow should work, but for some reason my attempt doesn't. Do I need the world matrix as well, or is there some step I'm missing?
This is my first Stack Overflow post, so if anything is unclear or more information is needed, please let me know.
In the end I realized my problem originated with switching from perspective to orthographic view to draw the HUD and never resetting the view matrix. I was able to use the XMVector3Unproject() function with the identity matrix in place of the world matrix and everything works flawlessly now.
Thanks Nico Schertler for the information and reassurance!
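For reference, a sketch of what that final call shape might look like, reusing the member names from the question (p being the raw mouse position in pixels, as in the first snippet):

// Unproject the near- and far-plane points with an identity world matrix, so the
// resulting ray comes out directly in world space.
XMMATRIX identity = XMMatrixIdentity();

Vector3 orig = XMVector3Unproject(Vector3(p.x, p.y, 0), 0, 0,
                                  g_nScreenWidth, g_nScreenHeight, 0, 1,
                                  GameRenderer.m_matProj, GameRenderer.m_matView, identity);
Vector3 dest = XMVector3Unproject(Vector3(p.x, p.y, 1), 0, 0,
                                  g_nScreenWidth, g_nScreenHeight, 0, 1,
                                  GameRenderer.m_matProj, GameRenderer.m_matView, identity);

Vector3 direction = dest - orig;
direction.Normalize();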

OpenGL glm local rotation

I need to rotate an object in its local coordinate system, the way you can in 3ds Max, Maya, etc.
My current code is:
ModelMatrix = glm::mat4(1.0f);
TransformMatrix = glm::mat4(1.0f);
ScaleMatrix = glm::mat4(1.0f);
RotateMatrix = glm::mat4(1.0f);

ScaleMatrix = glm::scale(ScaleMatrix, glm::vec3(scalex, scalez, scaley));
TransformMatrix = glm::translate(TransformMatrix, glm::vec3(x, z, y));

RotateMatrix = glm::rotate(RotateMatrix, anglex, glm::vec3(1, 0, 0));
RotateMatrix = glm::rotate(RotateMatrix, angley, glm::vec3(0, 0, 1));
RotateMatrix = glm::rotate(RotateMatrix, anglez, glm::vec3(0, 1, 0));

ModelMatrix = TransformMatrix * ScaleMatrix * RotateMatrix;
MVP = Projection * View * ModelMatrix;
anglex, angley and anglez come from the keyboard.
Right now only the last axis works as a local rotation (in my example it's glm::vec3(0, 1, 0), the Z axis). In this image I show what I need (2) and what I get (3)... If I change anglez it always works as roll, but anglex and angley rotate in the world coordinate system.
My second attempt was to use quaternions:
quat MyQuaternion  = glm::quat(cos(glm::radians(xangle / 2)), 0, sin(glm::radians(xangle / 2)), 0);
quat MyQuaternion2 = glm::quat(cos(glm::radians(yangle / 2)), sin(glm::radians(yangle / 2)), 0, 0);
quat MyQuaternion3 = glm::quat(cos(glm::radians(zangle / 2)), 0, 0, sin(glm::radians(zangle / 2)));

glm::mat4 RotationMatrix = toMat4(MyQuaternion * MyQuaternion2 * MyQuaternion3);
But I get the same result.
You should modify the entire ModelMatrix instead of the angles. Initialize ModelMatrix to the identity matrix. Then, when you process keyboard input:
if (rotate about x-axis)
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(1, 0, 0));

if (rotate about y-axis)
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(0, 1, 0));

if (rotate about z-axis)
    ModelMatrix = glm::rotate(ModelMatrix, angle, glm::vec3(0, 0, 1));

if (any rotation happened)
    MVP = Projection * View * ModelMatrix;
You can do this modification at any level. Either the MVP level, the ModelMatrix level (as shown here) or the RotateMatrix level.
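If you prefer the quaternion route from the question, the same incremental idea can be written with a persistent orientation quaternion; this is a sketch where orientation starts as the identity quaternion and anglestep is a small per-key increment (both names are placeholders, not from the original code):

// Right-multiplying applies each small rotation about the object's own (local) axes.
if (rotate about x-axis)
    orientation = orientation * glm::angleAxis(glm::radians(anglestep), glm::vec3(1, 0, 0));
if (rotate about y-axis)
    orientation = orientation * glm::angleAxis(glm::radians(anglestep), glm::vec3(0, 1, 0));
if (rotate about z-axis)
    orientation = orientation * glm::angleAxis(glm::radians(anglestep), glm::vec3(0, 0, 1));

orientation  = glm::normalize(orientation); // keep numerical drift under control
RotateMatrix = glm::mat4_cast(orientation);
ModelMatrix  = TransformMatrix * ScaleMatrix * RotateMatrix;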

OpenGL: What is the order of transformations using glm?

I loaded an object from a .obj file.
I am trying to apply glm::rotate, glm::translate and glm::scale to it.
The movement (translation and rotation) is driven by keyboard input like this:
// speed is 10
// angle starts at 0
if (keys[UP]) {
    movex += speed * sin(angle);
    movez += speed * cos(angle);
}
if (keys[DOWN]) {
    movex -= speed * sin(angle);
    movez -= speed * cos(angle);
}
if (keys[RIGHT]) {
    angle -= PI / 180;
}
if (keys[LEFT]) {
    angle += PI / 180;
}
and then, the transformations:
// use the shader
glUseProgram(gl_program_shader);

// send the uniform matrices to the shader
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(model_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "view_matrix"), 1, false, glm::value_ptr(view_matrix));
glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "projection_matrix"), 1, false, glm::value_ptr(projection_matrix));

glm::mat4 rotate    = glm::rotate(model_matrix, angle, glm::vec3(0, 1, 0));
glm::mat4 translate = glm::translate(model_matrix, glm::vec3(RADIUS + movex, 0, -RADIUS + movez));
glm::mat4 scale     = glm::scale(model_matrix, glm::vec3(20.0, 20.0, 20.0));

glUniformMatrix4fv(glGetUniformLocation(gl_program_shader, "model_matrix"), 1, false, glm::value_ptr(translate * scale * rotate));

glBindVertexArray(ironMan->vao);
glDrawElements(GL_TRIANGLES, ironMan->num_indices, GL_UNSIGNED_INT, 0);
If I don't use KEY_RIGHT / KEY_LEFT, it works as expected (it translates the object forward and backward).
If I use them, the object rotates around its own center, but when I press KEY_UP / KEY_DOWN it translates... well, not as expected.
I've added a case in the notifyKeyPressed() function to print out the movex, movey and angle values, and it seems the angle is not what it should be:
- when the IronMan has its back to me, angle = 0 - this is the starting point;
- when the IronMan faces to the right, the angle should be around -PI / 2 (-1.57), but it is around -80/-90.
I thought this was because of the glm::scale, but I changed the values and nothing changed.
Any idea why the angle is acting like this?
Your rotation center stays at the origin: you need to translate the rotation center of your object to the origin, apply your scale and rotation, translate back, and then apply the translation.
EDIT
More importantly, and related to your exact problem: glm::rotate works in degrees.
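In code, the pivot-based ordering suggested above might look something like this (a sketch; pivot stands for the object's rotation center in model space and is not a name from the original post):

// Matrices apply right to left: move the pivot to the origin, scale, rotate,
// move the pivot back, then apply the world translation.
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(RADIUS + movex, 0, -RADIUS + movez))
                * glm::translate(glm::mat4(1.0f), pivot)
                * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0)) // angle units depend on your GLM version (see the note above)
                * glm::scale(glm::mat4(1.0f), glm::vec3(20.0f))
                * glm::translate(glm::mat4(1.0f), -pivot);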