I'm implementing a 3D maze game in C++ using OpenGL, where the camera sits at a fixed position exactly above the middle of the maze, looking at the middle as well. It works and looks just as I wanted, but my code is not so nice, because I needed to put a + 0.01f into the position to make it work. If I leave it out, the labyrinth doesn't show up; it seems like the camera points in the opposite direction. How can I fix it elegantly?
glm::vec3 FrontDirection = glm::vec3(((GLfloat)MazeSize / 2) + 0.01f, 0.0f, ((GLfloat)MazeSize / 2));
The (0, 0, 0) origin is the left corner of the maze (the start position), which is why I set the position like this:
glm::vec3 CameraPosition = glm::vec3(((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
The UpDirection is (0.0f, 1.0f, 0.0f), and the lookAt call looks like this:
Matrix = glm::lookAt(CameraPosition, FrontDirection, UpDirection);
I know that by convention, in OpenGL the camera points towards the negative z-axis, so the FrontDirection is basically pointing in the reverse direction of what it is targeting. To me it would be completely clear to set the positions like I did above, but it still does not work as I expected (unless I add that + 0.01f).
Thank you in advance for your answer!
"I know that by convention, in OpenGL the camera points towards the negative z-axis"
In view space the z-axis points out of the viewport, but that is irrelevant here. You define the view matrix, i.e. the position and the orientation of the camera. The camera position is the first argument of glm::lookAt, and the camera looks in the direction of the target, the 2nd argument of glm::lookAt.
The 2nd parameter of glm::lookAt is not a direction vector; it is a point on the line of sight.
If FrontDirection is meant to be a direction vector, compute a point on the line of sight as CameraPosition + FrontDirection:
Matrix = glm::lookAt(CameraPosition, CameraPosition+FrontDirection, UpDirection);
Furthermore, your up-vector has the same direction as the line of sight: the camera looks straight down, and (0, 1, 0) is vertical too. The up-vector should be orthogonal to the line of sight; it defines the roll of the camera:
glm::vec3 CameraPosition = glm::vec3(
((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
glm::vec3 CameraTarget = glm::vec3(
((GLfloat)MazeSize / 2), 0.0f, ((GLfloat)MazeSize / 2));
glm::vec3 UpDirection = glm::vec3(0.0f, 0.0f, 1.0f);
Matrix = glm::lookAt(CameraPosition, CameraTarget, UpDirection);
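Putting the pieces together, here is a minimal sketch of the same fix as a helper function (topDownView is a hypothetical name; MazeSize is passed in as a float):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt

glm::mat4 topDownView(float mazeSize)
{
    // The camera hovers above the centre of the maze ...
    glm::vec3 cameraPosition(mazeSize / 2.0f, mazeSize + 5.0f, mazeSize / 2.0f);
    // ... and looks straight down at the centre of the maze floor.
    glm::vec3 cameraTarget(mazeSize / 2.0f, 0.0f, mazeSize / 2.0f);
    // The up-vector is orthogonal to the (vertical) line of sight; it only sets the roll.
    glm::vec3 upDirection(0.0f, 0.0f, 1.0f);
    return glm::lookAt(cameraPosition, cameraTarget, upDirection);
}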
Related
I am following the LearnOpenGL tutorials and have been tinkering with shadow casting. So far everything works correctly, but there is this very specific problem: I can't cast shadows from a purely vertical directional light. Let's add some code. My light space matrix looks like this:
glm::mat4 view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
direction is, of course, the direction of the directional light. Everything works well until that direction vector is set to (0, -1, 0), because it is then parallel to the up vector (0, 1, 0). To construct the lookAt matrix, glm performs the cross product between the up vector and the difference between the center and the eye (so in that case, basically the direction), but that cross product gives no usable result, since the two vectors are parallel.
Knowing all of this, my question is: how should my lookAt view matrix be built when the up vector and the direction of the light are parallel?
Edit: Thank you for your answer, I changed my code to this:
if (abs(direction.x) < FLT_EPSILON && abs(direction.z) < FLT_EPSILON)
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
else
    view = glm::lookAt(-direction, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
return glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f) * view;
and now everything works fine!
When the up-vector and the line of sight are parallel, the view matrix is undefined, because the cross product is (0, 0, 0).
The view matrix is an orthogonal matrix, which means each of the 3 axes is perpendicular to the plane formed by the other 2 axes; equivalently, the angle between any two axes is 90°.
The view matrix is the inverse of the matrix that defines the viewing position and orientation, and that matrix is defined by the parameters of glm::lookAt: 2 of the axes are specified by the line of sight and the up-vector, and the 3rd axis is calculated by their cross product.
This all means you have to specify the matrix by 2 orthogonal directions. If the angle between the direction vectors is not exactly 90°, this is corrected by glm::lookAt, but the algorithm fails to do that if the vectors are parallel.
Define a line-of-sight (direction) vector and an up-vector with an angle of 90° to one another. If you rotate one of them, then you have to rotate the other vector in the same way.
E.g. let's assume you have a direction vector (line of sight) and an up-vector:
direction: (0, 0, 1)
up : (0, 1, 0)
If the direction vector is rotated by 90°, then the up-vector has to be rotated by 90°, too:
direction: (0, -1, 0)
up : (0, 0, 1)
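As a sketch of that rule applied to the shadow case above (buildLightView is a hypothetical helper, and the 0.999 threshold is an arbitrary choice): keep the usual world up-vector and swap it for the z-axis when the light direction is (almost) parallel to it, which is what the edited question code does with its FLT_EPSILON test:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt

glm::mat4 buildLightView(const glm::vec3& direction)
{
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    // If the line of sight and up are (nearly) parallel, the cross product
    // inside glm::lookAt degenerates towards (0, 0, 0), so pick another up-vector.
    if (std::abs(glm::dot(glm::normalize(direction), up)) > 0.999f)
        up = glm::vec3(0.0f, 0.0f, 1.0f);
    return glm::lookAt(-direction, glm::vec3(0.0f), up);
}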
So I have a camera object in my scene graph which I want to rotate around another object while still looking at it.
So far, the code I've tried to translate its position just keeps moving it back and forth a small amount, to the right and back.
Here is the code I've tried using in my game update loop:
//ang is set to 75.0f
camera.position += camera.right * glm::vec3(cos(ang * deltaTime), 1.0f, 1.0f);
I'm not really sure where I'm going wrong. I've looked at other code that rotates around an object, and it uses cosine and sine, but since I'm only translating along the x-axis I thought I would only need this.
First you have to create a rotated vector. This can be done with glm::rotateZ (from the gtx/rotate_vector extension). Note that since GLM version 0.9.6 the angle has to be given in radians.
float ang = ....; // angle per second in radians
float timeSinceStart = ....; // seconds since the start of the animation
float dist = ....; // distance from the camera to the target
glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
Furthermore, you must know the point around which the camera should turn, probably the position of the object:
glm::vec3 objectPosition = .....; // position of the object where the camera looks to
The new position of the camera is the position of the object, displaced by the rotation vector:
camera.position = objectPosition + cameraVec;
The target of the camera has to be the object position, because the camera should look at the object:
camera.front = glm::normalize(objectPosition - camera.position);
The up vector of the camera should be the z-axis (rotation axis):
camera.up = glm::vec3(0.0f, 0.0f, 1.0f);
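Putting the snippets together, a per-frame update could look like this minimal sketch (updateOrbitCamera and the Camera struct are stand-ins for whatever your scene graph provides):

#define GLM_ENABLE_EXPERIMENTAL // required by recent GLM versions for gtx headers
#include <glm/glm.hpp>
#include <glm/gtx/rotate_vector.hpp> // glm::rotateZ

struct Camera { glm::vec3 position, front, up; }; // minimal stand-in

void updateOrbitCamera(Camera& camera, const glm::vec3& objectPosition,
                       float dist, float ang, float timeSinceStart)
{
    // Rotate the radius vector around the z-axis; the angle is in radians.
    glm::vec3 cameraVec = glm::rotateZ(glm::vec3(dist, 0.0f, 0.0f), ang * timeSinceStart);
    camera.position = objectPosition + cameraVec;                     // orbit position
    camera.front = glm::normalize(objectPosition - camera.position);  // look at the object
    camera.up = glm::vec3(0.0f, 0.0f, 1.0f);                          // the rotation axis
}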
I am working on my projection matrix in C++.
If I use an orthographic matrix, the axis range goes from 0 to my screen size.
Now if I use my perspective matrix, the axis range goes from 0 to 1.
This is not good if I want to position my objects. I could divide their movement by the width and height, but I think there should be a better solution, just like with an orthographic matrix.
T aspect = (right - left) / (top - bottom);
T xScale = 1.0f / tan(fov / 2.0f);
T yScale = xScale / aspect;
return Matrix<T>(
yScale, 0.0f, 0.0f, 0.0f,
0.0f, xScale, 0.0f, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), zFar / (zNear - zFar), -1.0f,
0.0f, 0.0f, (zNear * zFar) / (zNear - zFar), 0.0f);
That's my perspective matrix.
T farNear = zFar - zNear;
return Matrix<T>(
2.0f / (right - left), 0.0f, 0.0f, 0.0f,
0.0f, 2.0f / (top - bottom), 0.0f, 0.0f,
0.0f, 0.0f, 1.0f / farNear, 0.0f,
(left + right) / (left - right), (top + bottom) / (bottom - top), -zNear / farNear, 1.0f);
That's my orthographic matrix calculation.
So how can I fix it so that when I use my perspective matrix, the axis range goes from 0 to my screen size instead of 0 to 1?
The range you mention does not work that way in a perspective projection.
To figure out the width and height of your viewing volume, you need to know your field of view (in GL we typically define this using a vertical angle and aspect ratio) and the distance from the near plane; width and height will vary with distance down the z-axis.
In an orthographic projection, the viewing volume has the same width and height no matter how far or close you are to the near clip plane. In this sort of projection, a point (x, y, ...) at z = 1.0 is equidistant from the edge of the screen as the same point (x, y, ...) at z = 100.0, and thus you can establish a single X and Y range for all points.
With a perspective projection as discussed here, the farther a point is from the near plane, the more pushed toward the center of the screen it gets because the visible coordinate space expands.
The only way you are going to have a single range of visible X and Y coordinates is if you keep Z constant. But if you keep Z constant, then why do you want a perspective projection in the first place?
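That said, you can compute the visible extent for one fixed depth. For a symmetric perspective projection, the visible height at a given view-space distance z follows directly from the field of view; a small sketch (visibleSize is a hypothetical helper):

#include <cmath>

// Visible extent of a symmetric perspective frustum at view-space distance z,
// given the vertical field of view fovY (radians) and aspect = width / height:
//   height = 2 * z * tan(fovY / 2),  width = height * aspect
void visibleSize(float fovY, float aspect, float z, float& width, float& height)
{
    height = 2.0f * z * std::tan(fovY / 2.0f);
    width = height * aspect;
}

With that you could scale your coordinates so that one chosen z-plane maps to pixel units, but only for that plane.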
I am trying to implement a FPS camera using C++, OpenGL and GLM.
What I did until now:
I have a cameraPosition vector for the camera position, and also
cameraForward (pointing to where the camera looks), cameraRight and cameraUp, which are calculated like this:
inline void controlCamera(GLFWwindow* currentWindow, const float& mouseSpeed, const float& deltaTime)
{
double mousePositionX, mousePositionY;
glfwGetCursorPos(currentWindow, &mousePositionX, &mousePositionY);
int windowWidth, windowHeight;
glfwGetWindowSize(currentWindow, &windowWidth, &windowHeight);
m_cameraYaw += (windowWidth / 2 - mousePositionX) * mouseSpeed;
m_cameraPitch += (windowHeight / 2 - mousePositionY) * mouseSpeed;
lockCamera();
glfwSetCursorPos(currentWindow, windowWidth / 2, windowHeight / 2);
// Rotate the forward vector horizontally. (the first argument is the default forward vector)
m_cameraForward = rotate(vec3(0.0f, 0.0f, -1.0f), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Rotate the forward vector vertically.
m_cameraForward = rotate(m_cameraForward, -m_cameraPitch, vec3(1.0f, 0.0f, 0.0f));
// Calculate the right vector. First argument is the default right vector.
m_cameraRight = rotate(vec3(1.0, 0.0, 0.0), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Calculate the up vector.
m_cameraUp = cross(m_cameraRight, m_cameraForward);
}
Then I "look at" like this:
lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp)
The problem: I seem to be missing something, because my FPS camera works as it is supposed to until I move forward and get past z = 0.0 (z becomes negative); then my vertical mouse look flips, and when I try to look up, my application looks down...
The same question was asked here: glm::lookAt vertical camera flips when z <= 0, but I didn't understand what the issue is or how to solve it.
EDIT: The problem is definitely in the forward, up and right vectors. When I calculate them like this:
m_cameraForward = vec3(
cos(m_cameraPitch) * sin(m_cameraYaw),
sin(m_cameraPitch),
cos(m_cameraPitch) * cos(m_cameraYaw)
);
m_cameraRight = vec3(
sin(m_cameraYaw - 3.14f/2.0f),
0,
cos(m_cameraYaw - 3.14f/2.0f)
);
m_cameraUp = glm::cross(m_cameraRight, m_cameraForward);
Then the problem goes away, but then m_cameraPitch and m_cameraYaw don't match... I mean, if m_cameraYaw is 250 and I make a 180° turn, m_cameraYaw is 265... So I can't, for example, restrict leaning backwards like that? Any ideas?
So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
mHorizontalAngle += offsetHorizontalAngle;
mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);
mVerticalAngle += offsetVerticalAngle;
mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
Quaternion rotation;
rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get actual angles in my example game. All I have is the current and previous mouse location in window coordinates... how can I get proper angles from that?
EDIT: On second thought, my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? Can I just sum them up endlessly?
Take a cross section of the viewing frustum and mark your mouse position inside it, where:
theta is half of your FOV
p is your projection plane distance (don't worry, it will cancel out)
From simple ratios it is clear that the mouse offset x from the screen centre, measured against the screen half-width w / 2, equals the corresponding offset d on the projection plane, measured against the plane's half-width p * tan(theta):
x / (w / 2) = d / (p * tan(theta))
But from simple trigonometry, d = p * tan(psi). So:
psi = atan((2 * x / w) * tan(theta))
and p cancels out, as promised.
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle phi, using the mouse offset y and the screen height h:
phi = atan((2 * y / h) * tan(theta) / A)
where A is your aspect ratio (width / height), since the vertical half-FOV satisfies tan(theta_v) = tan(theta) / A.
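A minimal sketch of these formulas in code (mouseToAngles is a hypothetical helper; it assumes theta is half the horizontal FOV, in radians, and that x and y are measured in pixels from the screen centre):

#include <cmath>

void mouseToAngles(float x, float y, float w, float h, float theta,
                   float& psi, float& phi)
{
    float A = w / h;                                        // aspect ratio (width / height)
    psi = std::atan((2.0f * x / w) * std::tan(theta));      // horizontal angle
    phi = std::atan((2.0f * y / h) * std::tan(theta) / A);  // vertical angle
}

Call it for the previous and the current mouse position and subtract the resulting angles to get the rotation deltas.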