Camera/View matrix - C++

After reading through this article (http://3dgep.com/?p=1700), it seems to imply I got my view matrix wrong. Here's how I compute the view matrix:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}

Mat4 Camera::GetViewMatrix() const
{
    return Orientation() * glm::translate(Mat4(1.0f), -mTranslation);
}
Supposedly I am to invert this resulting matrix, but I haven't done so, and it has worked excellently thus far; I'm not doing any inverting further down the pipeline either. Is there something I am missing here?

You already did the inversion. The view matrix is the inverse of the model transformation that positions the camera. This is:
ModelCamera = Translation(position) * Rotation
So the inverse is:
ViewMatrix = (Translation(position) * Rotation)^-1
= Rotation^-1 * Translation(position)^-1
The translation is inverted by negating the offset:
= Rotation^-1 * Translation(-position)
This leaves us with inverting the rotation. Note that the rotation your Orientation() builds is already the inverted rotation, so the original rotation of the camera model is
Rotation^-1 = RotationX(verticalAngle) * RotationY(horizontalAngle)
Rotation = (RotationX(verticalAngle) * RotationY(horizontalAngle))^-1
= RotationY(horizontalAngle)^-1 * RotationX(verticalAngle)^-1
= RotationY(-horizontalAngle) * RotationX(-verticalAngle)
So the angles you specify are actually the inverted angles that would rotate the camera. If you increase horizontalAngle, the camera should turn to the right (assuming a right-handed coordinate system). That's just a matter of definitions.
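To see this concretely, here is a minimal sketch (my illustration, reusing the question's Mat4/Vec3/Quaternion aliases) checking that the view matrix built as in the question equals the inverse of the camera's model transform:
// Camera model transform: rotate the camera, then translate it into position.
// Since Orientation() returns Rotation^-1, the camera's own rotation is the
// inverse (conjugate) of that quaternion.
Mat4 CameraModel(const Quaternion& orientation, const Vec3& position)
{
    return glm::translate(Mat4(1.0f), position) * glm::toMat4(glm::inverse(orientation));
}
// View matrix exactly as in the question: Rotation^-1 * Translation(-position).
Mat4 ViewFromQuestion(const Quaternion& orientation, const Vec3& position)
{
    return glm::toMat4(orientation) * glm::translate(Mat4(1.0f), -position);
}
// For any orientation and position, ViewFromQuestion(o, p) equals
// glm::inverse(CameraModel(o, p)) up to floating-point error.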

Related

Am I correctly rotating my model using matrices?

I have been getting unexpected behavior while trying to rotate a basic cube. It may be helpful to know that translating the cube works correctly in the y and z directions. However, translating along the x axis is backwards (I negate only x to get proper results), and I haven't been able to figure out why.
Furthermore, rotating the cube has been a mess. Without any transform the cube appears correctly. Once I add a rotation transformation, the cube is not displayed until I change one of the x, y, z rotation values from 0 (putting all values back to 0 makes it disappear again). Once it appears, the cube won't rotate around whichever x, y, z plane I first changed unless I change two or more of the coordinates. It also wobbles around its origin when rotating.
Below are snippets of my code that I believe have incorrect math.
/* Here's how I set up the matrices for an MVP matrix */
proj = glm::perspective(glm::radians(90.0f), (960.0f / 540.0f), 0.1f, 400.0f);
view = glm::lookAt(glm::vec3(0, 0, -200), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
glm::mat4 model = glm::mat4(1.0f);
/* Here's how I transform the model matrix; note that
translating works properly once the cube is visible */
model = glm::translate(model, glm::vec3(-translation[0], translation[1], translation[2])); //negative x value
model = glm::rotate(model, 30.0f, rotation);
glm::mat4 mvp = proj * view * model;
shader->Bind();
shader->SetUniformMat4f("MVP", mvp);
renderer.Draw(*c_VAO, *c_EBO, *shader);
/* Here's how I use these values in my vertex shader */
layout(location = 0) in vec4 position;
...
uniform mat4 MVP;
...
void main()
{
    gl_Position = MVP * position;
    ...
}
I've checked both the translation and rotation vectors' values and they are as expected, but I am still going mad trying to figure out this problem.
The angle argument of glm::rotate is in radians. Use glm::radians to convert from degrees to radians:
model = glm::rotate(model, 30.0f, rotation);               // wrong: 30.0f is interpreted as radians
model = glm::rotate(model, glm::radians(30.0f), rotation); // correct
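As an aside (my observation, not part of the original answer): the mirrored x translation is likely a consequence of the camera placement rather than a math error. With the eye at (0, 0, -200) looking toward +z, the camera's right vector is world -x, so increasing x moves objects to the viewer's left. Placing the camera on the +z axis (the conventional OpenGL setup, looking down -z) makes +x point right on screen and removes the need to negate x:
view = glm::lookAt(glm::vec3(0, 0, 200), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
model = glm::translate(model, glm::vec3(translation[0], translation[1], translation[2])); // no negation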

Trying to correctly generate matrices for cascaded shadow maps

I am trying to implement cascaded shadow maps in OpenGL, but I am having trouble generating the view and projection matrices. Here is my code:
glm::mat4 lightViewMatrix = glm::lookAt(glm::vec3(0.0), glm::normalize(direction), glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec4 min(INFINITY);
glm::vec4 max(-INFINITY);
for (int i = 0; i < 8; i++) {
    glm::vec4 lightSpaceCorner = lightViewMatrix * frustumCornersWorldSpace[i];
    min = glm::min(min, lightSpaceCorner);
    max = glm::max(max, lightSpaceCorner);
}
glm::mat4 ortho = glm::ortho(min.x, max.x, min.y, max.y, min.z, max.z);
mMatrices[cascade] = ortho * lightViewMatrix;
The resulting shadow map is cut off at certain angles, which leads me to believe that either the view or the projection matrix is incorrectly configured.
If anyone knows the answer, it would be a huge help.
Thanks.
Got it working! It turns out the issue wasn't with the matrices, but with the depth value I was using to decide which cascade to sample: I was using gl_FragCoord.z, thinking it was in clip space.
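For reference, gl_FragCoord.z is window-space depth in [0, 1], not clip space. A sketch of the fix direction (my illustration, not code from the original post): either pass the view-space position to the fragment shader, or linearize the window-space depth before comparing against the cascade splits. The math below is written in C++ but works the same in GLSL:
// Convert window-space depth (nonlinear, [0, 1]) back to positive view-space
// depth for a standard perspective projection with planes zNear and zFar.
float LinearizeDepth(float windowZ, float zNear, float zFar)
{
    float ndcZ = 2.0f * windowZ - 1.0f; // window [0, 1] -> NDC [-1, 1]
    return 2.0f * zNear * zFar / (zFar + zNear - ndcZ * (zFar - zNear));
}
// Pick a cascade by comparing view-space depth against the split distances
// (assumed sorted in ascending order).
int SelectCascade(float viewDepth, const float* splits, int cascadeCount)
{
    for (int i = 0; i < cascadeCount - 1; ++i)
        if (viewDepth < splits[i])
            return i;
    return cascadeCount - 1;
}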

Raytracing camera rotates in wrong directions

I'm trying to build a raytracer, and I used this article on how to build a camera system.
The problem is that after calculating the ray direction in camera space and multiplying it by the camera-to-world transformation matrix, my camera seems to rotate in the wrong (opposite) directions; it works correctly only if I invert the transformation matrix before the multiplication.
Here is the code (I use the glm library and a right-handed coordinate system).
Initial data:
glm::vec3 origin_ = glm::vec3(0.f); // camera origin
const glm::vec3 kDirection = glm::vec3(0.f, 0.f, -1.f);
const glm::vec3 kUp = glm::vec3(0.f, 1.f, 0.f);
float aspect_ratio_ = (float)raster_height_ / raster_width_;
// bug!!! rotates in the opposite direction (camera is actually tilted down)
glm::mat4 camera_to_world_ = glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp);
// works!!! (camera is tilted up)
glm::mat4 camera_to_world_ = glm::inverse(glm::lookAtRH(origin_, glm::vec3(0.f, 0.2f, -1.f), kUp));
And the function that generates camera rays:
// Calculate the ray as if the camera were located at (0,0,0) and pointing in
// the negative z direction, then transform the ray direction to the desired place.
// x, y - pixel coordinates of the raster image
// calculated as if the raster image (screen) were 1.0 unit away from the origin (eye)
Ray Camera::GenRay(const uint32_t x, const uint32_t y) {
    glm::vec3 ray_direction = kDirection;
    // from raster space to NDC space
    glm::vec2 pixel_ndc((x + 0.5f) / raster_width_, (y + 0.5f) / raster_height_);
    // from NDC space to camera space
    float scale = tan(fov_ / 2.0f);
    ray_direction.x = (2.0f * pixel_ndc.x - 1.0f) * scale; // *aspect_ratio_;
    ray_direction.y = (1.0f - 2.0f * pixel_ndc.y) * scale * aspect_ratio_;
    // apply the camera-to-world rotation to the direction (w = 0: no translation)
    ray_direction = glm::vec3(camera_to_world_ * glm::vec4(ray_direction, 0.0f));
    return Ray(origin_, ray_direction, Ray::Type::kPrimary);
}
I really can't understand the root of the problem, so any help is appreciated.
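For what it's worth, the behavior described is expected: glm::lookAtRH builds a view matrix, i.e. a world-to-camera transform, so using it unmodified applies the opposite rotation; its inverse is the camera-to-world transform the ray generator needs. Below is a sketch (my illustration, not from the original post) that builds camera-to-world directly from an orthonormal basis instead of inverting lookAtRH:
glm::mat4 CameraToWorld(const glm::vec3& eye, const glm::vec3& target, const glm::vec3& up)
{
    const glm::vec3 forward = glm::normalize(target - eye); // camera looks down -z
    const glm::vec3 right = glm::normalize(glm::cross(forward, up));
    const glm::vec3 true_up = glm::cross(right, forward);
    glm::mat4 m(1.0f);
    m[0] = glm::vec4(right, 0.0f);    // camera x axis in world space
    m[1] = glm::vec4(true_up, 0.0f);  // camera y axis in world space
    m[2] = glm::vec4(-forward, 0.0f); // camera z axis in world space
    m[3] = glm::vec4(eye, 1.0f);      // camera position
    return m;
}
// Equivalent (up to floating-point error) to glm::inverse(glm::lookAtRH(eye, target, up)).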

How do I change quaternion rotation to use the local camera axis rather than the world axis in DirectX 11

I'm currently trying to rotate the camera around its local axes based on keyboard/mouse input. The code I currently have uses DirectXMath and works nicely; however, it rotates around the world axes rather than the camera's local axes. Because of this, some of the rotations are not as expected and cause issues as the camera rotates. For example, when we tilt the camera, its y axis changes, and we then want to rotate around a different axis to get the expected results.
What am I doing wrong in the code, or what do I need to change, in order to rotate around the camera's local axes?
vector.x, vector.y, vector.z (The vector to rotate around, i.e. (1.0f, 0.0f, 0.0f))
//define our camera matrix
XMFLOAT4X4 cameraMatrix;
//position, lookat, up values for the camera
XMFLOAT3 position;
XMFLOAT3 up;
XMFLOAT3 lookat;

void Camera::rotate(XMFLOAT3 vector, float theta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //set our view quaternion to our current camera's lookat position
    XMVECTOR viewQuaternion = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f);
    //set the rotation vector based on our parameter, e.g. (1.0f, 0.0f, 0.0f)
    //to rotate around the x axis
    XMVECTOR rotationVector = XMVectorSet(vector.x, vector.y, vector.z, 0.0f);
    //create a rotation quaternion to rotate around our vector by the angle theta
    XMVECTOR rotationQuaternion = XMVectorSet(
        XMVectorGetX(rotationVector) * sin(theta / 2),
        XMVectorGetY(rotationVector) * sin(theta / 2),
        XMVectorGetZ(rotationVector) * sin(theta / 2),
        cos(theta / 2));
    //get our rotation quaternion's inverse
    XMVECTOR rotationInverse = XMQuaternionInverse(rotationQuaternion);
    //new view quaternion = [ newView = ROTATION * VIEW * INVERSE ROTATION ]
    //multiply our rotation quaternion with our view quaternion
    XMVECTOR newViewQuaternion = XMQuaternionMultiply(rotationQuaternion, viewQuaternion);
    //multiply the result of the calculation above with the inverse rotation
    //to get our new view values
    newViewQuaternion = XMQuaternionMultiply(newViewQuaternion, rotationInverse);
    //take the new lookat values from newViewQuaternion and put them into the camera
    lookat = XMFLOAT3(XMVectorGetX(newViewQuaternion), XMVectorGetY(newViewQuaternion), XMVectorGetZ(newViewQuaternion));
    //build our camera matrix using XMMatrixLookAtLH
    XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
        XMVectorSet(position.x, position.y, position.z, 0.0f),
        XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
        XMVectorSet(up.x, up.y, up.z, 0.0f)));
}
The view matrix is then set:
//store our camera's matrix inside the view matrix
XMStoreFloat4x4(&_view, camera->getCameraMatrix());
Edit:
I have tried an alternative solution without using quaternions, and it seems I can get the camera to rotate correctly around its own axes; however, the camera's lookat values now never change, and after I stop using the mouse/keyboard, it snaps back to its original position.
void Camera::update(float delta) {
    XMStoreFloat4x4(&cameraMatrix, XMMatrixIdentity());
    //do we have a rotation?
    //this is set as we try to rotate around a current axis such as
    //(1.0f, 0.0f, 0.0f)
    if (rotationVector.x != 0.0f || rotationVector.y != 0.0f || rotationVector.z != 0.0f) {
        //yes, we have an axis to rotate around
        //create our axis vector to rotate around
        XMVECTOR axisVector = XMVectorSet(rotationVector.x, rotationVector.y, rotationVector.z, 0.0f);
        //create our rotation matrix using XMMatrixRotationAxis, rotating around this axis by a specified angle
        XMMATRIX rotationMatrix = XMMatrixRotationAxis(axisVector, 2.0 * delta);
        //create our camera's view matrix
        XMMATRIX viewMatrix = XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f));
        //multiply our camera's view matrix by the rotation matrix
        //make sure the rotation is on the right to ensure local axis rotation
        XMMATRIX finalCameraMatrix = viewMatrix * rotationMatrix;
        /* this piece of code allows the camera to rotate correctly and it doesn't
        snap back to its original position, as the lookat coordinates are being set
        each time. However, it makes the camera rotate around the world axis
        rather than the local axis, which brings us back to the same problem we had
        with the quaternion rotation */
        //XMVECTOR look = XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0);
        //XMVECTOR finalLook = XMVector3Transform(look, rotationMatrix);
        //lookat.x = XMVectorGetX(finalLook);
        //lookat.y = XMVectorGetY(finalLook);
        //lookat.z = XMVectorGetZ(finalLook);
        //finally store the finalCameraMatrix into our camera matrix
        XMStoreFloat4x4(&cameraMatrix, finalCameraMatrix);
    } else {
        //no rotation, don't apply the rotation matrix
        XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(
            XMVectorSet(position.x, position.y, position.z, 0.0f),
            XMVectorSet(lookat.x, lookat.y, lookat.z, 0.0f),
            XMVectorSet(up.x, up.y, up.z, 0.0f)));
    }
}
An example can be seen here: https://i.gyazo.com/f83204389551eff427446e06624b2cf9.mp4
I think I am missing setting the actual lookat value to the new lookat value, but I'm not sure how to calculate the new value or extract it from the new view matrix (which I have already tried).
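For what it's worth, here is a sketch of one common approach (my illustration, not from the original thread): express the requested local axis in world space using the camera's current basis vectors, rotate the look direction and the up vector about that axis, and store both back so the rotation accumulates instead of snapping back:
void Camera::rotateLocal(XMFLOAT3 localAxis, float theta) {
    XMVECTOR pos = XMLoadFloat3(&position);
    XMVECTOR look = XMLoadFloat3(&lookat);
    XMVECTOR upVec = XMLoadFloat3(&up);
    //current camera basis in world space (left-handed)
    XMVECTOR forward = XMVector3Normalize(XMVectorSubtract(look, pos));
    XMVECTOR right = XMVector3Normalize(XMVector3Cross(upVec, forward));
    //world-space rotation axis = the local axis expressed in the camera basis
    XMVECTOR axis = XMVectorAdd(XMVectorScale(right, localAxis.x),
        XMVectorAdd(XMVectorScale(upVec, localAxis.y), XMVectorScale(forward, localAxis.z)));
    XMVECTOR q = XMQuaternionRotationAxis(XMVector3Normalize(axis), theta);
    //rotate the view direction and the up vector, then persist both so the
    //next call starts from the rotated orientation
    XMVECTOR newForward = XMVector3Rotate(forward, q);
    XMVECTOR newUp = XMVector3Rotate(upVec, q);
    XMStoreFloat3(&lookat, XMVectorAdd(pos, newForward));
    XMStoreFloat3(&up, newUp);
    XMStoreFloat4x4(&cameraMatrix, XMMatrixLookAtLH(pos, XMVectorAdd(pos, newForward), newUp));
}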

My 3d OpenGL object rotates around the world origin, not local-space origin. What am I doing wrong or misunderstanding?

I've been stuck on this for two days now, and I'm unsure where else to look. I'm rendering two 3d cubes using OpenGL and trying to apply a local rotation to each cube in the scene in response to a button press.
I've got to the point where my cubes rotate in 3d space, but they're both rotating about the world-space origin instead of their own local origins.
(couple second video)
https://www.youtube.com/watch?v=3mrK4_cCvUw
After scouring the internet, the appropriate formula for calculating the MVP appears to be as follows:
auto const model = TranslationMatrix * RotationMatrix * ScaleMatrix;
auto const modelview = projection * view * model;
Each of my cubes has its own "model", which is defined as follows:
struct model
{
    glm::vec3 translation;
    glm::quat rotation;
    glm::vec3 scale = glm::vec3{1.0f};
};
When I press a button on my keyboard, I create a quaternion representing the new angle and multiply it with the previous rotation quaternion, updating it in place.
The function looks like this:
template<typename TData>
void rotate_entity(TData &data, ecst::entity_id const eid, float const angle,
    glm::vec3 const& axis) const
{
    auto &m = data.get(ct::model, eid);
    auto const q = glm::angleAxis(glm::degrees(angle), axis);
    m.rotation = q * m.rotation;
    // I'm a bit unsure about this last line; I've also tried the following
    // without fully understanding the difference:
    // m.rotation = m.rotation * q;
}
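As a side note on that last line (my observation, not from the original post): with glm's conventions, the side you multiply on determines the frame of the rotation:
m.rotation = q * m.rotation; // applies q about a world-space (extrinsic) axis
m.rotation = m.rotation * q; // applies q about the model's local (intrinsic) axis
Neither ordering causes the orbiting you see, though; that comes from the rotation being applied about the origin rather than the cube's center, as the answer below explains.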
The axis is provided by the user like so:
// inside the user-input handling function
float constexpr ANGLE = 0.2f;
...
// y-rotation
case SDLK_u: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, 1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
case SDLK_i: {
    auto constexpr ROTATION_VECTOR = glm::vec3{0.0f, -1.0f, 0.0f};
    rotate_entities(data, ANGLE, ROTATION_VECTOR);
    break;
}
My GLSL vertex shader is pretty straightforward, based on the example code I've found out there:
// attributes input to the vertex shader
in vec4 a_position; // position value

// output of the vertex shader - input to the fragment shader
out vec3 v_uv;

uniform mat4 u_mvmatrix;

void main()
{
    gl_Position = u_mvmatrix * a_position;
    v_uv = vec3(a_position.x, a_position.y, a_position.z);
}
Inside my draw code, the exact code I'm using to calculate the MVP for each cube is:
...
auto const& model = shape.model();
auto const tmatrix = glm::translate(glm::mat4{}, model.translation);
auto const rmatrix = glm::toMat4(model.rotation);
auto const smatrix = glm::scale(glm::mat4{}, model.scale);
auto const mmatrix = tmatrix * rmatrix * smatrix;
auto const mvmatrix = projection * view * mmatrix;
// simple wrapper that does logging and forwards to glUniformMatrix4fv()
p.set_uniform_matrix_4fv(logger, "u_mvmatrix", mvmatrix);
Earlier in my program, I calculate my view/projection matrices like so:
auto const windowheight = static_cast<GLfloat>(hw.h);
auto const windowwidth = static_cast<GLfloat>(hw.w);
auto projection = glm::perspective(60.0f, (windowwidth / windowheight), 0.1f, 100.0f);
auto view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 1.0f), // camera position
glm::vec3(0.0f, 0.0f, -1.0f), // look at origin
glm::vec3(0.0f, 1.0f, 0.0f)); // "up" vector
The positions of my cubes in world-space are on the z axis, so they should be visible:
cube0.set_world_position(0.0f, 0.0f, 0.0f, 1.0f);
cube1.set_world_position(-0.7f, 0.7f, 0.0f, 1.0f);
// I call set_world_position() exactly once, before my game enters its main loop,
// and I never call it again. It just modifies the vertex used as the center of
// the shape; it doesn't modify the model matrix at all.
So, my question is: is this the appropriate way to update an object's rotation?
Should I be storing a quaternion directly in my object's "model"?
Should I be storing the translation and scale as separate vec3s?
Is there an easier way to do this? I've been reading and re-reading anything I can find, but I don't see anyone doing it quite this way.
This tutorial is a bit short on details, specifically on how to apply a rotation to an existing rotation (I believe this is just multiplying the quaternions together, which is what I'm doing inside rotate_entity(...) above).
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/
https://github.com/opengl-tutorials/ogl/blob/master/tutorial17_rotations/tutorial17.cpp#L306-L311
Does it make more sense to store the resulting "MVP" matrix myself as my "model" and apply glm::translate/glm::scale/glm::rotate operations on that matrix directly? (I tried this option earlier, but I couldn't figure out how to get it to work.)
Thanks!
edit: better link
Generally, you don't want to modify the positions of your model's individual vertices on the CPU; transforming vertices is the job of the vertex program. The model matrix exists to position the model in the world, and it is applied in the vertex program.
To rotate a model around its center, you need to first move the center to the origin, then rotate it, then move the center to its final position. So let's say you have a cube that stretches from (0,0,0) to (1,1,1). You need to:
1. Translate the cube by (-0.5, -0.5, -0.5)
2. Rotate by the angle
3. Translate the cube by (0.5, 0.5, 0.5)
4. Translate the cube to wherever it belongs in the scene
You can combine the last 2 translations into a single one, and of course, you can collapse all of these transformations into a single matrix that is your model matrix.
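A minimal GLM sketch of that recipe (the name center is illustrative, not from the original post; model is the question's struct):
// Rotate a model about its own center, then place it in the world.
// Matrices apply right-to-left: center -> origin, rotate, origin -> center, place.
glm::mat4 const toOrigin = glm::translate(glm::mat4{1.0f}, -center);
glm::mat4 const rmatrix = glm::toMat4(model.rotation);
glm::mat4 const fromOrigin = glm::translate(glm::mat4{1.0f}, center);
glm::mat4 const place = glm::translate(glm::mat4{1.0f}, model.translation);
glm::mat4 const mmatrix = place * fromOrigin * rmatrix * toOrigin; // use as the model matrix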