glm lookAt FPS camera flips vertical mouse look when Z < 0 - c++

I am trying to implement an FPS camera using C++, OpenGL and GLM.
What I have done so far:
I have a cameraPosition vector for the camera position, and also
cameraForward (pointing at what the camera looks at), cameraRight and cameraUp, which are calculated like this:
inline void controlCamera(GLFWwindow* currentWindow, const float& mouseSpeed, const float& deltaTime)
{
double mousePositionX, mousePositionY;
glfwGetCursorPos(currentWindow, &mousePositionX, &mousePositionY);
int windowWidth, windowHeight;
glfwGetWindowSize(currentWindow, &windowWidth, &windowHeight);
m_cameraYaw += (windowWidth / 2 - mousePositionX) * mouseSpeed;
m_cameraPitch += (windowHeight / 2 - mousePositionY) * mouseSpeed;
lockCamera();
glfwSetCursorPos(currentWindow, windowWidth / 2, windowHeight / 2);
// Rotate the forward vector horizontally. (the first argument is the default forward vector)
m_cameraForward = rotate(vec3(0.0f, 0.0f, -1.0f), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Rotate the forward vector vertically.
m_cameraForward = rotate(m_cameraForward, -m_cameraPitch, vec3(1.0f, 0.0f, 0.0f));
// Calculate the right vector. First argument is the default right vector.
m_cameraRight = rotate(vec3(1.0, 0.0, 0.0), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Calculate the up vector.
m_cameraUp = cross(m_cameraRight, m_cameraForward);
}
Then I "look at" like this:
lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp)
The problem: I seem to be missing something, because my FPS camera works as intended until I move forward past Z = 0.0 (z becomes negative). Then my vertical mouse look flips: when I try to look up, the camera looks down.
The same question was asked here: glm::lookAt vertical camera flips when z <= 0, but I didn't understand what the issue is or how to solve it.
EDIT: The problem is definitely in the forward, up and right vectors. When I calculate them like this:
m_cameraForward = vec3(
cos(m_cameraPitch) * sin(m_cameraYaw),
sin(m_cameraPitch),
cos(m_cameraPitch) * cos(m_cameraYaw)
);
m_cameraRight = vec3(
sin(m_cameraYaw - 3.14f/2.0f),
0,
cos(m_cameraYaw - 3.14f/2.0f)
);
m_cameraUp = glm::cross(m_cameraRight, m_cameraForward);
Then the problem goes away, but then m_cameraPitch and m_cameraYaw don't match the old values... I mean, if m_cameraYaw is 250 and I make a 180-degree turn, m_cameraYaw is 265... So I can't restrict leaning backwards, for example, like that. Any ideas?
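The flip in the rotate-based version most likely comes from the second rotate call spinning around the world X axis instead of the camera's own right vector. For reference, here is a minimal sketch (plain C++ with only <cmath>, no GLM, all names hypothetical) of the spherical-coordinate approach from the EDIT above, extended with a pitch clamp so leaning past straight up/down can be restricted:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: derive forward/right/up from yaw and pitch (radians),
// clamping pitch to just under +/-90 degrees so the view can never flip.
struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

struct CameraBasis { Vec3 forward, right, up; };

CameraBasis basisFromAngles(float yaw, float& pitch) {
    const float limit = 1.553343f;  // ~89 degrees, keeps the camera off the poles
    if (pitch >  limit) pitch =  limit;
    if (pitch < -limit) pitch = -limit;
    Vec3 forward = { std::cos(pitch) * std::sin(yaw),
                     std::sin(pitch),
                     std::cos(pitch) * std::cos(yaw) };
    Vec3 right = { std::sin(yaw - 1.5707963f), 0.0f, std::cos(yaw - 1.5707963f) };
    return { forward, right, cross(right, forward) };
}
```

With yaw = 0 and pitch = 0 this yields forward (0, 0, 1) and up (0, 1, 0); because pitch is clamped before the vectors are built, the up vector can never invert, which is exactly the restriction on leaning backwards the question asks about.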

Related

OpenGL camera target issue

I'm implementing a 3D maze game in C++ using OpenGL, where the camera is at a fixed position exactly above the middle of the maze, looking at the middle too. It works and looks just as I wanted, but my code is not so nice because I needed to add + 0.01f to the position for it to work well. If I leave it out, the labyrinth doesn't show up; it seems like the camera points in the opposite direction. How can I fix it elegantly?
glm::vec3 FrontDirection = glm::vec3(((GLfloat)MazeSize / 2) + 0.01f, 0.0f, ((GLfloat)MazeSize / 2));
The (0, 0, 0) origin location is the left corner of the maze (the start position), so this is the reason I set the position like this:
glm::vec3 CameraPosition = glm::vec3(((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
The UpDirection is (0.0f, 1.0f, 0.0f). The lookAt call looks like this:
Matrix = glm::lookAt(CameraPosition, FrontDirection, UpDirection);
I know that by convention, in OpenGL the camera points towards the negative z-axis, so the FrontDirection is basically pointing in the reverse direction of what it is targeting. To me it would be completely clear to set the positions like I did above, but it still does not work as I expected (unless I add that + 0.01f).
Thank you in advance for your answer!
I know that by convention, in OpenGL the camera points towards the negative z-axis
In view space the z-axis points out of the viewport, but that is completely irrelevant here. You define the view matrix, i.e. the position and the orientation of the camera. The camera position is the first argument of glm::lookAt. The camera looks in the direction of the target, the 2nd argument of glm::lookAt.
The 2nd parameter of glm::lookAt is not a direction vector, it is a point on the line of sight.
Compute a point on the line of sight by CameraPosition+FrontDirection:
Matrix = glm::lookAt(CameraPosition, CameraPosition+FrontDirection, UpDirection);
Your up vector has the same direction as the line of sight, but it should be orthogonal to the line of sight. The up vector defines the roll:
glm::vec3 CameraPosition = glm::vec3(
((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
glm::vec3 CameraTarget = glm::vec3(
((GLfloat)MazeSize / 2), 0.0f, ((GLfloat)MazeSize / 2));
glm::vec3 UpDirection = glm::vec3(0.0f, 0.0f, 1.0f);
Matrix = glm::lookAt(CameraPosition, CameraTarget, UpDirection);
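The degenerate up vector can be checked directly. A small illustration (plain C++, no GLM, hypothetical names): lookAt internally builds a side vector from the cross product of the view direction and the up vector, and that cross product vanishes when the two are parallel, which is exactly the straight-down case above:

```cpp
#include <cassert>
#include <cmath>

// Illustration: cross(viewDir, up) collapses to zero when up is parallel to
// the line of sight, so lookAt cannot build a valid camera basis from it.
struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Looking straight down from above the maze centre:
// viewDir = CameraTarget - CameraPosition = (0, -(MazeSize + 5), 0)
float sideLength(const Vec3& up) {
    const Vec3 viewDir = { 0.0f, -15.0f, 0.0f };  // e.g. MazeSize = 10
    return length(cross(viewDir, up));
}
```

sideLength({0, 1, 0}) is 0 (degenerate), while sideLength({0, 0, 1}) is not, which is why the up vector is switched to (0.0f, 0.0f, 1.0f) above.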

How to translate, rotate and scale at a particular point?

I want to translate, rotate and scale, with all of these transformations applied from a specified point (origin). This is what I use right now (I have tried many combinations but still cannot figure it out! Looks like it's back to learning linear algebra for me).
I am using GLM.
Take a look at the code which I am using:-
GLfloat rx = sprite->origin.x / width;
GLfloat ry = sprite->origin.y / height;
model = translate(model, vec3( -sprite->origin.x, -sprite->origin.y, 0.0f ));
model = translate(model, vec3( rx * scale[0], ry * scale[1], 0.0f ));
model = rotate(model, radians(rotation), vec3( 0.0f, 0.0f, 1.0f ));
model = translate(model, vec3( -(rx * scale[0]), -(ry * scale[1]), 0.0f ));
model = scale(model, vec3( width * scale[0], height * scale[1], 1.0f ));
model = translate(model, vec3( position[0], position[1], 0.0f ));
Origin is the starting point of the object; I translate it to the center for the rotation and then translate it back.
Then I scale it and translate the object to the specified position.
I mean, what is wrong with this code?
I found the answer.
Do it in this order:
1. Translate the object to the position, e.g. vec3(position, 0.0f).
2. Then rotate the object.
3. Translate the point back to the origin, e.g. vec3(-originx * scale, -originy * scale, 0.0f).
4. Finally, scale the object.
Explanation:
Take the position of the object as 0, 0 in 2D coordinates (this is the position variable mentioned in the first step). You can see that the object will translate (move) to that position.
Note: don't forget to multiply by your scale, or it can yield wrong results if your scale is more than 1.
Now is the time to rotate the object.
Scaling works from the top-left corner, so you can't do it from the center.
Next, just scale the object.
That's all.
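The order above can be checked with plain 2D math (hypothetical helper, no GLM). Since the matrix written last is applied to vertices first, a point goes through the chain as: scale, shift the scaled pivot to the rotation centre, rotate, then translate to the final position; the pivot then always lands exactly on position, whatever the angle:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical 2D sketch of the answer's order, applied point-by-point:
// scale -> move the scaled pivot to the rotation centre -> rotate -> translate.
struct Vec2 { float x, y; };

Vec2 transformAboutPivot(Vec2 p, Vec2 position, float angle,
                         Vec2 scale, Vec2 origin) {
    // 1. scale around (0, 0)
    float sx = p.x * scale.x;
    float sy = p.y * scale.y;
    // 2. move the *scaled* pivot to the rotation centre
    //    (note the scale factor, as the answer warns)
    sx -= origin.x * scale.x;
    sy -= origin.y * scale.y;
    // 3. rotate around the pivot
    float rx = sx * std::cos(angle) - sy * std::sin(angle);
    float ry = sx * std::sin(angle) + sy * std::cos(angle);
    // 4. translate to the final position
    return { rx + position.x, ry + position.y };
}
```

Feeding the pivot itself through the chain returns position for any angle, which is the behaviour you want from "rotate about a point".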

Projection Matrix only goes from 0 to 1

I am working on my projection matrix in C++.
If I use an orthographic matrix, the axis range goes from 0 to my screen size.
If I use my perspective matrix, the axis range goes from 0 to 1.
This is not good if I want to position my objects. I could divide their movement by the width and height, but I think there should be a better solution, just as there is with an orthographic matrix.
T aspect = (right - left) / (top - bottom);
T xScale = 1.0f / tan(fov / 2.0f);
T yScale = xScale / aspect;
return Matrix<T>(
yScale, 0.0f, 0.0f, 0.0f,
0.0f, xScale, 0.0f, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), zFar / (zNear - zFar), -1.0f,
0.0f, 0.0f, (zNear * zFar) / (zNear - zFar), 0.0f);
That's my perspective matrix.
T farNear = zFar - zNear;
return Matrix<T>(
2.0f / (right - left), 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / (top - bottom), 0.0f, 0.0f,
0.0f, 0.0f, 1.0f / farNear, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), -zNear / farNear, 1.0f);
That's my orthographic matrix calculation.
So how can I fix it so that when I use my perspective matrix, the axis range goes from 0 to my screen size instead of 0 to 1?
The range you mention does not work that way in a perspective projection.
To figure out the width and height of your viewing volume, you need to know your field of view (in GL we typically define this using a vertical angle and an aspect ratio) and the distance from the near plane; width and height vary with distance down the z-axis.
The following diagram illustrates the situation:
[diagram: a perspective viewing volume whose visible width and height grow with distance from the camera]
In an orthographic projection, the viewing volume has the same width and height no matter how far from or close to the near clip plane you are. In this sort of projection, a point (x, y, ...) at z = 1.0 is equidistant from the edge of the screen as the same point (x, y, ...) at z = 100.0, and thus you can establish a single X and Y range for all points.
With a perspective projection as discussed here, the farther a point is from the near plane, the more it is pushed toward the center of the screen, because the visible coordinate space expands.
[diagram: the same point at increasing depths projecting closer to the screen center]
The only way you are going to have a single range of visible X and Y coordinates is if you keep Z constant. But if you keep Z constant, then why do you want a perspective projection in the first place?
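To see why, push a point through both matrices by hand. In the symmetric part of the perspective matrix above, clip x is xScale * x and clip w is the depth, so after the perspective divide NDC x shrinks with distance; the orthographic NDC x ignores depth entirely. A small sketch (plain C++, hypothetical names, positive depth in front of the camera assumed):

```cpp
#include <cassert>
#include <cmath>

// Sketch: NDC x under the two projections. In the perspective case the
// w-divide folds the depth into the result; in the orthographic case it doesn't.
float perspectiveNdcX(float x, float depth, float fov) {
    float xScale = 1.0f / std::tan(fov / 2.0f);
    return (xScale * x) / depth;            // clip.x / clip.w
}

float orthographicNdcX(float x, float left, float right) {
    return 2.0f * x / (right - left) + (left + right) / (left - right);
}
```

The same x = 1 maps to different NDC positions at depth 1 and depth 10, so no single pixel range can exist; the orthographic mapping is the one that gives the fixed 0-to-screen-size range the question wants.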

Window coordinates to camera angles?

So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
mHorizontalAngle += offsetHorizontalAngle;
mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
mVerticalAngle += offsetVerticalAngle;
mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
Quaternion rotation;
rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get actual angles in my example game. All I have is the current and previous mouse location in window coordinates. How can I get proper angles from that?
EDIT: On second thought, my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? Surely I can't just sum them up endlessly.
Take a cross section of the viewing frustum (the blue circle is your mouse position):
[diagram: cross section of the frustum, with half-angle theta, projection plane at distance p, and mouse offset x from the screen centre]
theta is half of your FOV.
p is your projection plane distance (don't worry, it will cancel out).
From simple ratios it is clear that:
    x / (W / 2) = d / (p * tan(theta))
where x is the mouse offset from the screen centre, W is the window width, and d is the corresponding offset on the projection plane. But from simple trigonometry:
    d = p * tan(psi)
So:
    psi = atan((2x / W) * tan(theta))
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle:
    phi = atan((2y / H) * tan(theta) / A)
where A is your aspect ratio (width / height).
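A sketch of the horizontal formula (plain C++, hypothetical names, theta = half the horizontal FOV): map the mouse x to [-1, 1] across the window, scale by tan(theta), take the arctangent, and subtract two samples to get the rotation delta to feed into RotateCamera():

```cpp
#include <cassert>
#include <cmath>

// psi = atan((2x/W - 1) * tan(theta)), theta = half the horizontal FOV.
float horizontalAngle(float mouseX, float width, float halfFov) {
    float ndc = 2.0f * mouseX / width - 1.0f;  // -1 at the left edge, +1 at the right
    return std::atan(ndc * std::tan(halfFov));
}

// Delta between two mouse samples.
float horizontalDelta(float prevX, float curX, float width, float halfFov) {
    return horizontalAngle(curX, width, halfFov)
         - horizontalAngle(prevX, width, halfFov);
}
```

At the screen centre the angle is 0 and at the right edge it is exactly theta; the mapping is deliberately non-linear, which is the point of going through atan rather than scaling pixel deltas by a constant.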

Emulating gluLookAt with glm::Quat(ernions)

I've been trying to emulate gluLookAt functionality, but with quaternions. Each of my game objects has a TranslationComponent. This component stores the object's position (glm::vec3), rotation (glm::quat) and scale (glm::vec3). The camera calculates its position each tick by doing the following:
// UP = glm::vec3(0,1,0);
//FORWARD = glm::vec3(0,0,1);
cameraPosition = playerPosition - (UP * distanceUP) - (FORWARD * distanceAway);
This position code works as expected: the camera is placed 3 metres behind the player and 1 metre up. Now, the camera's quaternion is set to the following:
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
The rendering engine now takes these values and generates the ViewMatrix (camera) and the ModelMatrix (player) and then renders the scene. The code looks like this:
glm::mat4 viewTranslationMatrix =
glm::translate(glm::mat4(1.0f), cameraTransform->getPosition());
glm::mat4 viewScaleMatrix =
glm::scale(glm::mat4(1.0f), cameraTransform->getScale());
glm::mat4 viewRotationMatrix =
glm::mat4_cast(cameraTransform->getRotation());
viewMatrix = viewTranslationMatrix * viewRotationMatrix * viewScaleMatrix;
quatFromToRotation(glm::vec3 from, glm::vec3 to) is defined as follows:
glm::quat quatFromToRotation(glm::vec3 from, glm::vec3 to)
{
from = glm::normalize(from); to = glm::normalize(to);
float cosTheta = glm::dot(from, to);
glm::vec3 rotationAxis;
if (cosTheta < -1 + 0.001f)
{
rotationAxis = glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), from);
if (glm::length2(rotationAxis) < 0.01f)
rotationAxis = glm::cross(glm::vec3(1.0f, 0.0f, 0.0f), from);
rotationAxis = glm::normalize(rotationAxis);
return glm::angleAxis(180.0f, rotationAxis);
}
rotationAxis = glm::cross(from, to);
float s = sqrt((1.0f + cosTheta) * 2.0f);
float invis = 1.0f / s;
return glm::quat(
s * 0.5f,
rotationAxis.x * invis,
rotationAxis.y * invis,
rotationAxis.z * invis
);
}
What I'm having trouble with is that cameraRotation isn't being set correctly. No matter where the player is, the camera's forward is always (0, 0, -1).
Your problem is in the line
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
You need to look from the camera position to the player's feet, not from "one metre above the player" (assuming the player is at (0, 0, 0) when you initially do this). Replace FORWARD with cameraPosition:
cameraRotation = quatFromToRotation(cameraPosition, playerPosition);
EDIT: I believe you have an error in your quatFromToRotation function as well. See https://stackoverflow.com/a/11741520/1967396 for a very nice explanation (and some pseudocode) of quaternion rotation.
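For completeness, the from-to construction can be sketched without GLM (hypothetical names; same math as the question's function, minus the anti-parallel special case). Note also that current GLM's glm::angleAxis takes radians, so the 180.0f in the question's special case would need to be pi:

```cpp
#include <cassert>
#include <cmath>

// Sketch: build the quaternion rotating direction `from` onto `to`, then apply
// it with v' = v + 2w(u x v) + 2 u x (u x v), where u is the quat's vector part.
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 normalized(const Vec3& v) {
    float l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

Quat fromToRotation(Vec3 from, Vec3 to) {
    from = normalized(from);
    to = normalized(to);
    float cosTheta = dot(from, to);  // assumes from and to are not anti-parallel
    Vec3 axis = cross(from, to);
    float s = std::sqrt((1.0f + cosTheta) * 2.0f);
    return { s * 0.5f, axis.x / s, axis.y / s, axis.z / s };
}

Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u = { q.x, q.y, q.z };
    Vec3 t = cross(u, v);
    t = { 2.0f * t.x, 2.0f * t.y, 2.0f * t.z };
    Vec3 uxt = cross(u, t);
    return { v.x + q.w * t.x + uxt.x,
             v.y + q.w * t.y + uxt.y,
             v.z + q.w * t.z + uxt.z };
}
```

Rotating `from` by fromToRotation(from, to) yields to, which is the property the camera code relies on; the question's cosTheta < -1 + 0.001f branch would still be needed for nearly opposite vectors.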