glm::lookAt with custom rotation does not work properly - c++

I want to control my camera so that it can rotate around the model.
The theoretical code should be:
// `camera_rotation_angle_x_` and `camera_rotation_angle_y_` are initialized to 0, and can be modified by the user.
glm::mat4 CreateViewMatrix(glm::vec3 eye_pos, glm::vec3 scene_center, glm::vec3 up_vec) {
    auto eye_transform = glm::translate(glm::mat4(1.0f), -scene_center); // recenter
    eye_transform = glm::rotate(eye_transform, camera_rotation_angle_x_, glm::vec3(1.0f, 0.0f, 0.0f));
    eye_transform = glm::rotate(eye_transform, camera_rotation_angle_y_, glm::vec3(0.0f, 1.0f, 0.0f));
    eye_transform = glm::translate(eye_transform, scene_center); // move back
    eye_pos = glm::vec3(eye_transform * glm::vec4(eye_pos, 1.0f));
    up_vec = glm::vec3(eye_transform * glm::vec4(up_vec, 0.0f));
    return glm::lookAt(eye_pos, scene_center, up_vec);
}
But to rotate correctly, the two lines marked "recenter" and "move back" must be written as follows; otherwise the distance from the camera to the center varies as the rotation angles change:
auto eye_transform = glm::translate(glm::mat4(1.0f), scene_center); // recenter *The sign has changed*
...
eye_transform = glm::translate(eye_transform, -scene_center); // move back *The sign has changed*
...
// Correctness test: the rotation logic is correct only if this distance stays constant.
cout << "eye_pos: " << eye_pos[0] << ", " << eye_pos[1] << ", " << eye_pos[2] << endl;
cout << "distance: " << sqrt(
        pow(eye_pos[0] - scene_center[0], 2)
        + pow(eye_pos[1] - scene_center[1], 2)
        + pow(eye_pos[2] - scene_center[2], 2)
    ) << endl;
Subtracting the center first and then adding it back is the logically correct order; adding it first and subtracting afterwards makes no sense. So what is going wrong, such that I have to write the code with the "wrong" logic for it to work properly?
The caller's code is below; maybe the bug is there?
// `kEyePos`, `kSceneCenter`, `kUpVec`, `kFovY`, `kAspect`, `kDistanceEyeToBack` and `kLightPos` are constants throughout the lifetime of the program
UniformBufferObject ubo{};
ubo.model = glm::mat4(1.0f);
ubo.view = CreateViewMatrix(kEyePos, kSceneCenter, kUpVec);
ubo.proj = glm::perspective(kFovY, kAspect, 0.1f, kDistanceEyeToBack);
// GLM was originally designed for OpenGL, where the Y coordinate of the clip coordinates is inverted.
// The easiest way to compensate for that is to flip the sign on the scaling factor of the Y axis in the projection matrix.
// Because of the Y-flip we did in the projection matrix, the vertices are now being drawn in counter-clockwise order instead of clockwise order.
// This causes backface culling to kick in and prevents any geometry from being drawn.
// You should modify the frontFace in `VkPipelineRasterizationStateCreateInfo` to `VK_FRONT_FACE_COUNTER_CLOCKWISE` to correct this.
ubo.proj[1][1] *= -1;
ubo.light_pos = glm::vec4(kLightPos, 1.0f); // The w component of point is 1
memcpy(vk_buffer_->GetUniformBufferMapped(frame_index), &ubo, sizeof(ubo));

I found that the problem was with the order of matrix construction, not with the glm::lookAt method.
m = glm::rotate(m, angle, up_vec)
is equivalent to
m = m * glm::rotate(glm::mat4(1), angle, up_vec)
and not, as I had assumed, to
m = glm::rotate(glm::mat4(1), angle, up_vec) * m
That is, glm::rotate (and likewise glm::translate) post-multiplies the new transform onto the matrix passed in, so the transform added last is the one applied to vertices first. This easily explains why swapping "recenter" and "move back" works properly.
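A quick standalone check makes the post-multiplication behavior visible (a sketch, not part of the renderer; the 90-degree angle and the test point are arbitrary):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>

int main() {
    const float angle = glm::radians(90.0f);
    const glm::vec3 axis(0.0f, 1.0f, 0.0f);
    const glm::mat4 t = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f));
    const glm::mat4 r = glm::rotate(glm::mat4(1.0f), angle, axis);
    const glm::vec4 p(1.0f, 0.0f, 0.0f, 1.0f);

    const glm::vec4 a = glm::rotate(t, angle, axis) * p; // GLM's chained style
    const glm::vec4 b = t * r * p;                       // post-multiplication: same as a
    const glm::vec4 c = r * t * p;                       // pre-multiplication: different
    std::cout << a.x << ", " << a.y << ", " << a.z << std::endl; // ~ (5, 0, -1)
    std::cout << b.x << ", " << b.y << ", " << b.z << std::endl; // ~ (5, 0, -1)
    std::cout << c.x << ", " << c.y << ", " << c.z << std::endl; // ~ (0, 0, -6)
}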
The correct code is as follows:
glm::mat4 VulkanRendering::CreateViewMatrix(glm::vec3 eye_pos, glm::vec3 scene_center, glm::vec3 up_vec) const {
    // Because each glm call post-multiplies, build the transform in reverse order.
    // Rotate around the X axis first and then around the Y axis; the opposite
    // order does not match the camera behavior of most games.
    auto view_transform = glm::translate(glm::mat4(1.0f), scene_center); // applied last: move back
    view_transform = glm::rotate(view_transform, camera_rotation_angle_y_, glm::vec3(0.0f, 1.0f, 0.0f));
    view_transform = glm::rotate(view_transform, camera_rotation_angle_x_, glm::vec3(1.0f, 0.0f, 0.0f));
    view_transform = glm::translate(view_transform, -scene_center); // applied first: recenter
    eye_pos = glm::vec3(view_transform * glm::vec4(eye_pos, 1.0f)); // the w component of a point is 1
    up_vec = glm::vec3(view_transform * glm::vec4(up_vec, 0.0f));   // the w component of a vector is 0
    return glm::lookAt(eye_pos, scene_center, up_vec);
}

Related

Matrix multiplication - How to make a planet spin on its own axis, and orbit its parent?

I've got two planets: the sun and the earth. I want them to spin on their own axes and orbit their parent at the same time.
I can get these two behaviors to work individually, but I'm stumped as to how to combine them.
void planet::render() {
    mat4 axisPos = mat4(1.0f);
    axialRotation = rotate(axisPos, axisRotationSpeedConstant, vec3(0.0f, 1.0f, 0.0f));
    if (hostPlanet != NULL) {
        mat4 hostTransformPosition = mat4(1.0f);
        hostTransformPosition[3] = hostPlanet->getTransform()[3];
        orbitalSpeed += orbitalSpeedConstant;
        orbitRotation = rotate(hostTransformPosition, orbitalSpeed, vec3(0.0f, 1.0f, 0.0f));
        orbitRotation = translate(orbitRotation, vec3(distanceFromParent, 0.0f, 0.0f));
        // rotTransform will make them spin on their axis, but not orbit their parent planet
        mat4 rotTransform = transform * axialRotation;
        //transform *= rotTransform;
        // orbitRotation will make the planet orbit, but it won't spin on its own axis.
        transform = orbitRotation;
    }
    else {
        transform *= axialRotation;
    }
    glUniform4fv(gColLoc, 1, &color[0]);
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &getTransform()[0][0]);
    glDrawArrays(GL_LINES, 0, NUMVERTS);
};
Woohoo! As usual, asking the question led me to being able to answer it. Knowing that transform[0] to transform[2] represent the rotation in the 4x4 matrix (with transform[3] representing the position in 3D space), I thought to replace the rotation part of the orbit matrix with the rotation from the spin calculation.
Badabing, I got my answer.
transform = orbitRotation;
transform[0] = rotTransform[0];
transform[1] = rotTransform[1];
transform[2] = rotTransform[2];
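For reference, the column splicing can also be avoided by composing the transforms explicitly; read right to left, the planet spins, is pushed out to its orbit radius, is swung around the parent, and is finally moved to the parent's position. This is only a sketch reusing the names above (hostPlanet, orbitalSpeed, distanceFromParent); the accumulated spin angle axialAngle is an assumed extra member, incremented each frame like orbitalSpeed:
mat4 parentPos = mat4(1.0f);
parentPos[3] = hostPlanet->getTransform()[3];                              // move to the parent's position
mat4 orbit  = rotate(mat4(1.0f), orbitalSpeed, vec3(0.0f, 1.0f, 0.0f));   // swing around the parent
mat4 offset = translate(mat4(1.0f), vec3(distanceFromParent, 0.0f, 0.0f));// push out to the orbit radius
mat4 spin   = rotate(mat4(1.0f), axialAngle, vec3(0.0f, 1.0f, 0.0f));     // spin on the planet's own axis
transform = parentPos * orbit * offset * spin; // applied right-to-left: spin, offset, orbit, move to parent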

From perspective to orthographic projections

I'm trying to change my camera projection from perspective to orthographic.
At the moment my code works fine with the perspective projection:
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to the orthographic projection, I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT, 5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In the perspective projection, the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT evaluates to the picture's aspect ratio, a number close to 1. The left and right clip-plane distances at the near plane of a perspective projection are aspect * near_distance. More interesting, though, is the left-right expanse at the viewing distance, which in your case is abs(m_position.z) = abs(mesh.radius).
Carrying this over to the orthographic projection, the left, right, top and bottom clip-plane distances should be of the same order of magnitude. Given that aspect is close to 1, the values for left, right, bottom and top should be close to abs(mesh.radius). The resolution of the display in pixels is totally irrelevant, except for the aspect ratio.
Furthermore, when using a perspective projection, the value for near should be chosen as large as possible while keeping all desired geometry visible; doing otherwise wastes precious depth-buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ) {
case perspective:
    m_projection = glm::perspective(70.0f, aspect, 1.0f, 1000.0f);
    break;
case ortho:
    m_projection = glm::ortho(
        -aspect * view_distance,
         aspect * view_distance,
        -view_distance,
         view_distance,
        -1000.0f, 1000.0f );
    break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);
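If the goal is for the orthographic view to cover roughly the same region that the perspective camera above sees at the viewing distance, the extents can also be derived from the vertical field of view. A sketch under that assumption (requires <cmath>; the field of view is taken in radians):
float const fovy = glm::radians(70.0f);                      // the vertical FOV used above
float const half_h = std::tan(fovy * 0.5f) * view_distance;  // half the visible height at view_distance
float const half_w = aspect * half_h;
m_projection = glm::ortho(-half_w, half_w, -half_h, half_h, -1000.0f, 1000.0f);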

Projection View matrix calculation for directional light shadow mapping

In order to calculate the projection view matrix for a directional light, I take the vertices of the frustum of my active camera, multiply them by the rotation of my directional light, and use these rotated vertices to calculate the extents of an orthographic projection matrix for the light.
Then I create the view matrix using the center of my light's frustum bounding box as the position of the eye, the light's direction as the forward vector, and the Y axis as the up vector.
I calculate the camera frustum vertices by multiplying the 8 corners of a box of size 2 centered at the origin by the inverse of the camera's projection view matrix (see the code below).
Everything works fine and the directional light's projection view matrix is correct, but I've encountered a big issue with this method.
Let's say that my camera is facing forward (0, 0, -1), positioned at the origin, with a zNear value of 1 and a zFar of 100. Only objects visible from my camera frustum are rendered into the shadow map, i.e. every object with a Z position between -1 and -100.
The problem is, if my light has a direction that makes the light come from behind the camera, and there is an object with, say, a Z position of 10 (behind the camera but still in front of the light) that is tall enough to cast a shadow onto the scene visible from my camera, this object is not rendered into the shadow map because it is not inside my light frustum, and its shadow is missing.
To solve this I was thinking of using the scene bounding box to calculate the light's projection view matrix, but that would be useless: the region rendered into the shadow map could be so large that numerous artifacts (shadow acne, etc.) would become visible, so I skipped this solution.
How could I overcome this problem?
I've read this post, under the section 'Calculating a tight projection', to create my projection view matrix and, for clarity, this is my code:
Frustum* cameraFrustum = activeCamera->GetFrustum();
Vertex3f direction = GetDirection(); // z axis
Vertex3f perpVec1 = (direction ^ Vertex3f(0.0f, 0.0f, 1.0f)).Normalized(); // y axis
Vertex3f perpVec2 = (direction ^ perpVec1).Normalized(); // x axis
Matrix rotationMatrix;
rotationMatrix.m[0] = perpVec2.x; rotationMatrix.m[1] = perpVec1.x; rotationMatrix.m[2] = direction.x;
rotationMatrix.m[4] = perpVec2.y; rotationMatrix.m[5] = perpVec1.y; rotationMatrix.m[6] = direction.y;
rotationMatrix.m[8] = perpVec2.z; rotationMatrix.m[9] = perpVec1.z; rotationMatrix.m[10] = direction.z;
Vertex3f frustumVertices[8];
cameraFrustum->GetFrustumVertices(frustumVertices);
for (AInt i = 0; i < 8; i++)
    frustumVertices[i] = rotationMatrix * frustumVertices[i];
Vertex3f minV = frustumVertices[0], maxV = frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    minV.x = min(minV.x, frustumVertices[i].x);
    minV.y = min(minV.y, frustumVertices[i].y);
    minV.z = min(minV.z, frustumVertices[i].z);
    maxV.x = max(maxV.x, frustumVertices[i].x);
    maxV.y = max(maxV.y, frustumVertices[i].y);
    maxV.z = max(maxV.z, frustumVertices[i].z);
}
Vertex3f extends = maxV - minV;
extends *= 0.5f;
Matrix viewMatrix = Matrix::MakeLookAt(cameraFrustum->GetBoundingBoxCenter(), direction, perpVec1);
Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x, -extends.y, extends.y, -extends.z, extends.z);
Matrix projectionViewMatrix = projectionMatrix * viewMatrix;
SceneObject::SetMatrix("ViewMatrix", viewMatrix);
SceneObject::SetMatrix("ProjectionMatrix", projectionMatrix);
SceneObject::SetMatrix("ProjectionViewMatrix", projectionViewMatrix);
And this is how I calculate the frustum and its bounding box:
Matrix inverseProjectionViewMatrix = projectionViewMatrix.Inversed();
_frustumVertices[0] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, -1.0f); // near top-left
_frustumVertices[1] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, -1.0f); // near top-right
_frustumVertices[2] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, -1.0f); // near bottom-left
_frustumVertices[3] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, -1.0f); // near bottom-right
_frustumVertices[4] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, 1.0f); // far top-left
_frustumVertices[5] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, 1.0f); // far top-right
_frustumVertices[6] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, 1.0f); // far bottom-left
_frustumVertices[7] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, 1.0f); // far bottom-right
_boundingBoxMin = _frustumVertices[0];
_boundingBoxMax = _frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    _boundingBoxMin.x = min(_boundingBoxMin.x, _frustumVertices[i].x);
    _boundingBoxMin.y = min(_boundingBoxMin.y, _frustumVertices[i].y);
    _boundingBoxMin.z = min(_boundingBoxMin.z, _frustumVertices[i].z);
    _boundingBoxMax.x = max(_boundingBoxMax.x, _frustumVertices[i].x);
    _boundingBoxMax.y = max(_boundingBoxMax.y, _frustumVertices[i].y);
    _boundingBoxMax.z = max(_boundingBoxMax.z, _frustumVertices[i].z);
}
_boundingBoxCenter = Vertex3f((_boundingBoxMin.x + _boundingBoxMax.x) / 2.0f, (_boundingBoxMin.y + _boundingBoxMax.y) / 2.0f, (_boundingBoxMin.z + _boundingBoxMax.z) / 2.0f);
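One common mitigation, sketched here with the post's own types (not part of the original code; lightMargin is an assumed tuning constant): after computing minV/maxV in light space, extend the box toward the light before deriving the orthographic extents, so casters behind the camera frustum still fall inside the light frustum:
float lightMargin = 100.0f;   // how far beyond the camera frustum to look for casters
minV.z -= lightMargin;        // grow the light-space box toward the light
Vertex3f extends = maxV - minV;
extends *= 0.5f;
// Note: the eye position passed to Matrix::MakeLookAt should be recomputed
// from the extended box so the orthographic volume stays centered on it.
Alternatively, on GL 3.2+ hardware, enabling depth clamping with glEnable(GL_DEPTH_CLAMP) during the shadow pass flattens casters in front of the near plane onto it instead of clipping them away, which addresses the same problem without enlarging the projection.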

Emulating gluLookAt with glm::Quat(ernions)

I've been trying to emulate gluLookAt functionality, but with quaternions. Each of my game objects has a TranslationComponent. This component stores the object's position (glm::vec3), rotation (glm::quat) and scale (glm::vec3). The camera calculates its position each tick by doing the following:
// UP = glm::vec3(0,1,0);
//FORWARD = glm::vec3(0,0,1);
cameraPosition = playerPosition - (UP * distanceUP) - (FORWARD * distanceAway);
This position code works as expected: the camera is placed 3 metres behind the player and 1 metre up. Now, the camera's quaternion is set to the following:
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
The rendering engine now takes these values and generates the ViewMatrix (camera) and the ModelMatrix (player) and then renders the scene. The code looks like this:
glm::mat4 viewTranslationMatrix =
glm::translate(glm::mat4(1.0f), cameraTransform->getPosition());
glm::mat4 viewScaleMatrix =
glm::scale(glm::mat4(1.0f), cameraTransform->getScale());
glm::mat4 viewRotationMatrix =
glm::mat4_cast(cameraTransform->getRotation());
viewMatrix = viewTranslationMatrix * viewRotationMatrix * viewScaleMatrix;
quatFromToRotation(glm::vec3 from, glm::vec3 to) is defined as follows:
glm::quat quatFromToRotation(glm::vec3 from, glm::vec3 to)
{
    from = glm::normalize(from); to = glm::normalize(to);
    float cosTheta = glm::dot(from, to);
    glm::vec3 rotationAxis;
    if (cosTheta < -1 + 0.001f)
    {
        rotationAxis = glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), from);
        if (glm::length2(rotationAxis) < 0.01f)
            rotationAxis = glm::cross(glm::vec3(1.0f, 0.0f, 0.0f), from);
        rotationAxis = glm::normalize(rotationAxis);
        return glm::angleAxis(180.0f, rotationAxis);
    }
    rotationAxis = glm::cross(from, to);
    float s = sqrt((1.0f + cosTheta) * 2.0f);
    float invis = 1.0f / s;
    return glm::quat(
        s * 0.5f,
        rotationAxis.x * invis,
        rotationAxis.y * invis,
        rotationAxis.z * invis
    );
}
What I'm having trouble with is that cameraRotation isn't being set correctly. No matter where the player is, the camera's forward is always (0,0,-1).
Your problem is in the line
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
You need to look from the camera position to the player's feet, not from "one metre above the player" (assuming the player is at (0,0,0) when you initially do this). Replace FORWARD with cameraPosition:
cameraRotation = quatFromToRotation(cameraPosition, playerPosition);
EDIT: I believe you have an error in your quatFromToRotation function as well. See https://stackoverflow.com/a/11741520/1967396 for a very nice explanation (and some pseudocode) of quaternion rotation.
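As a side note (not from the original answer): if a reasonably recent GLM is available (0.9.9 or newer), the hand-rolled from-to construction can be skipped entirely, since GLM ships a quaternion look-at helper. A sketch using the question's variables:
// glm::quatLookAt (from <glm/gtc/quaternion.hpp>) expects a normalized
// direction and an up vector; the right-handed default maps the local
// -Z axis onto the given direction.
glm::vec3 toPlayer = glm::normalize(playerPosition - cameraPosition);
cameraRotation = glm::quatLookAt(toPlayer, UP);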

Translating a camera along a quaternion using glm

I am trying to translate my camera's position along an orientation defined in a glm::quat.
void Camera::TranslateCameraAlongZ(float distance) {
    glm::vec3 direction = glm::normalize(rotation * glm::vec3(0.0f, 0.0f, 1.0f));
    position += direction * distance;
}
This works fine when rotation is the identity quaternion. The X and Z translations do not work when the rotation is anything else. When the camera is rotated 45 degrees to the left and I call TranslateCameraAlongZ(-0.1f), I am translated backwards and to the left, when I should be going backwards and to the right. All translations that are not at perfect 90-degree increments are similarly messed up. What am I doing wrong here, and what is the simplest way to fix it? In case they are relevant, here are the function that generates my view matrix and the function that rotates my camera:
glm::mat4 Camera::GetView() {
    view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
    return view;
}
void Camera::RotateCameraDeg(float x, float y, float z) {
    rotation = glm::normalize(glm::angleAxis(x, glm::vec3(1.0f, 0.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(y, glm::vec3(0.0f, 1.0f, 0.0f)) * rotation);
    rotation = glm::normalize(glm::angleAxis(z, glm::vec3(0.0f, 0.0f, 1.0f)) * rotation);
    std::cout << glm::eulerAngles(rotation).x << " " << glm::eulerAngles(rotation).y << " " << glm::eulerAngles(rotation).z << "\n";
}
I'm guessing that "rotation" is the inverse of the camera's rotation (thinking of the camera as an object in the world). That is, "rotation" takes an object from world space to camera space. If you want to move the camera's world space position forwards, you need to take the local forward vector (0,0,1) or (0,0,-1)?, multiply it by the inverse of "rotation" (to move the vector from camera space to world space), then scale and add to the position.