glm::lookAt returns a matrix with NaN elements - OpenGL

I want to create a view matrix for a camera that looks straight down at the ground:
glm::mat4 matrix = glm::lookAt(glm::vec3(0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
The last argument is the global up vector, so everything seems correct, but I get the following matrix:
-nan -nan -0 0
-nan -nan 1 0
-nan -nan -0 0
nan nan -1 1
I guess I get NaN because the look-at direction is parallel to the up vector, but how can I build a correct view matrix using the glm::lookAt function?

The problem is with either your camera's position or the up vector.
Your camera is 1 unit up (0,1,0), looking down at the origin (0,0,0). The up vector indicates the up direction of the camera, not of the world. For example, if you're looking forward, the up vector would be +Y. If you're looking down, with the top of your head facing +X, then the up vector is +X to you. It must not be parallel to the viewing direction (the vector from the camera to the target).
Solutions (either option is sketched below):
Change the up vector to anything along the XZ plane, or to anything whose projection onto the XZ plane is not (0,0,0).
Or move your camera so that it's anywhere but along the Y axis.
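A minimal sketch of both options (the 0.5 offset below is just an illustrative value):
glm::vec3 target(0.0f, 0.0f, 0.0f);
// Option 1: keep the camera on the Y axis, but use an up vector in the XZ plane
glm::mat4 view1 = glm::lookAt(glm::vec3(0.0f, 1.0f, 0.0f), target,
                              glm::vec3(1.0f, 0.0f, 0.0f));
// Option 2: keep the +Y up vector, but move the camera off the Y axis
glm::mat4 view2 = glm::lookAt(glm::vec3(0.5f, 1.0f, 0.0f), target,
                              glm::vec3(0.0f, 1.0f, 0.0f));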

In lookAt it is impossible for the viewing direction and the up vector to point the same way. If you want a camera that looks along the negative y-axis, you'll have to adjust the up vector, for example to [0,0,1]. The direction you specify in the up vector controls how the camera is rotated around the view axis.
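For example, for the camera in the question (a minimal sketch):
// Looking straight down the negative y-axis: pick an up vector that is
// perpendicular to the view direction, e.g. +Z.
glm::mat4 matrix = glm::lookAt(glm::vec3(0.0f, 1.0f, 0.0f),   // eye
                               glm::vec3(0.0f, 0.0f, 0.0f),   // target
                               glm::vec3(0.0f, 0.0f, 1.0f));  // up: +Z instead of +Y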

I ran across this same problem of NaNs in the matrix returned by glm::lookAt() yesterday and have concocted what I think is a workaround. This seems to work for me for the particular problem of the UP vector being vec3(0.0f, 1.0f, 0.0f), which seems to be a common use case.
My Vulkan code looks like this:
struct UniformBufferObject {
    alignas(16) glm::mat4 model;
    alignas(16) glm::mat4 view;
    alignas(16) glm::mat4 proj;
};
...
UniformBufferObject ubo{};
...
glm::vec3 cameraPos = glm::vec3(0.0f, 2.0f, 0.0f);
ubo.view = glm::lookAt(cameraPos, glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
// If the direction vector from the camera to the point being observed ends up parallel to the UP vector,
// glm::lookAt() returns a mat4 with NaNs in it. To work around this, look for NaNs in ubo.view:
int view_contains_nan = 0;
for (int col = 0; (col < 4) && !view_contains_nan; ++col) {
    for (int row = 0; (row < 4) && !view_contains_nan; ++row) {
        if (std::fpclassify(ubo.view[col][row]) == FP_NAN) {
            view_contains_nan = 1;
        }
    }
}
// If we ended up with NaNs, the replacement ubo.view that seems to work depends on the sign of the camera position's Y:
if (view_contains_nan) {
    std::cout << "view contains NaN" << std::endl;
    if (cameraPos.y >= 0.0f) {
        ubo.view = glm::mat4(-0.0f, -1.0f,  0.0f, 0.0f,
                              0.0f,  0.0f,  1.0f, 0.0f,
                             -1.0f,  0.0f, -0.0f, 0.0f,
                             -0.0f, -0.0f, -cameraPos.y, 1.0f);
    } else {
        ubo.view = glm::mat4( 0.0f,  1.0f,  0.0f, 0.0f,
                              0.0f,  0.0f, -1.0f, 0.0f,
                             -1.0f,  0.0f, -0.0f, 0.0f,
                             -0.0f, -0.0f,  cameraPos.y, 1.0f);
    }
}
Hopefully it works for you too, though I suppose it would be nice if glm::lookAt() could be fixed so it doesn't return matrices with NaNs in them.
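As an aside, the NaN scan can be written more compactly with GLM's vector relational functions (a sketch; glm::isnan and glm::any are part of <glm/glm.hpp>):
bool viewContainsNan = false;
for (int col = 0; col < 4; ++col) {
    if (glm::any(glm::isnan(ubo.view[col]))) { // tests a whole column at once
        viewContainsNan = true;
        break;
    }
}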

Related

Do not understand captureViews in Diffuse-irradiance tutorial in learnopengl.com

I am learning IBL from https://learnopengl.com/PBR/IBL/Diffuse-irradiance.
The tutorial converts an equirectangular map to a cubemap by creating 6 views.
The views are created with the following code:
glm::mat4 captureViews[] =
{
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(-1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 1.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, -1.0f, 0.0f), glm::vec3(0.0f, 0.0f, -1.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 0.0f, 1.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 0.0f, -1.0f), glm::vec3(0.0f, -1.0f, 0.0f))
};
I don't understand the third parameter of glm::lookAt.
glm::lookAt's third parameter is the up vector. I think the captureViews should be:
// zero is [0, 0, 0]
// right is [1, 0, 0]
// left is [-1, 0, 0]
// up is [0, 1, 0]
// down is [0, -1, 0]
// back is [0, 0, 1]
// forward is [0, 0, -1]
glm::mat4 captureViews[] =
{
glm::lookAt(zero, right, up),
glm::lookAt(zero, left, up),
glm::lookAt(zero, up, back),
glm::lookAt(zero, down, forward),
glm::lookAt(zero, back, up),
glm::lookAt(zero, forward, up)
};
But I was totally wrong. I don't understand the magic in the tutorial's up vectors.
Can anyone explain it to me?
When a cubemap texture is sampled, a 3-dimensional direction vector has to be mapped to a 2-dimensional texture coordinate relative to one side of the map.
The relevant part of the specification for this transformation is the OpenGL 4.6 API Core Profile Specification, 8.13 Cube Map Texture Selection, page 253:
When a cube map texture is sampled, the (s t r) texture coordinates are treated
as a direction vector (rx ry rz) emanating from the center of a cube. The q coordinate is ignored. At texture application time, the interpolated per-fragment direction vector selects one of the cube map face’s two-dimensional images based on the largest magnitude coordinate direction (the major axis direction). If two or more coordinates have the identical magnitude, the implementation may define the rule to disambiguate this situation. The rule must be deterministic and depend only on (rx ry rz). The target column in table 8.19 explains how the major axis direction maps to the two-dimensional image of a particular cube map target.
Using the sc, tc, and ma determined by the major axis direction as specified in table 8.19, an updated (s t) is calculated as follows:
s = 1/2 * (sc / |ma| + 1)
t = 1/2 * (tc / |ma| + 1)
Major Axis Direction | Target                      | sc  | tc  | ma |
---------------------+-----------------------------+-----+-----+----+
+rx                  | TEXTURE_CUBE_MAP_POSITIVE_X | −rz | −ry | rx |
−rx                  | TEXTURE_CUBE_MAP_NEGATIVE_X |  rz | −ry | rx |
+ry                  | TEXTURE_CUBE_MAP_POSITIVE_Y |  rx |  rz | ry |
−ry                  | TEXTURE_CUBE_MAP_NEGATIVE_Y |  rx | −rz | ry |
+rz                  | TEXTURE_CUBE_MAP_POSITIVE_Z |  rx | −ry | rz |
−rz                  | TEXTURE_CUBE_MAP_NEGATIVE_Z | −rx | −ry | rz |
---------------------+-----------------------------+-----+-----+----+
sc corresponds to the u coordinate and tc to the v coordinate. So tc has to point in the direction of the view-space up vector.
Look at the first row of the table:
+rx | TEXTURE_CUBE_MAP_POSITIVE_X | −rz | −ry | rx
This means, for the X+ side (right side) of the cube map, the directions which correspond to the tangent and binormal are
sc = (0, 0, -1)
tc = (0, -1, 0)
This perfectly matches the first entry of the glm::mat4 captureViews[] array:
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f))
because the major axis direction is given by the line of sight, which is the direction from the eye position to the target (los = target - eye), and so is (1, 0, 0).
The up vector (tc) is (0, -1, 0).
sc is given by the cross product of the line of sight and the up vector: (0, 0, -1).
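This can be checked directly (a quick sketch):
glm::vec3 eye(0.0f), target(1.0f, 0.0f, 0.0f), up(0.0f, -1.0f, 0.0f);
glm::vec3 los = glm::normalize(target - eye); // (1, 0, 0) -> major axis +rx
glm::vec3 sc  = glm::cross(los, up);          // (0, 0, -1) -> matches -rz
// tc is the up vector itself: (0, -1, 0)     -> matches -ry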

How can I set up a camera at bird's eye view using GLM?

I am trying to set up my camera at a bird's eye perspective. By that I mean pointing straight down. This is what I've initialized so far:
glm::vec3 camPosition = glm::vec3(0.0f, 10.0f, 0.0f); // camera's position
glm::vec3 camFront = glm::vec3(0.0f, 0.0f, 0.0f); // where the camera is pointing
glm::vec3 camUp = glm::vec3(0.0f, 0.0f, 1.0f);
I pass these into the glm::lookAt function, but it is not working at all. Perhaps I haven't understood it that well...
I am trying to set up my camera at a bird's eye perspective.
I recommend the following. Define 2 vectors.
Define the up vector of the world. This is the vector that points from the ground to the sky, in the coordinate system of your world:
glm::vec3 world_up( 0.0f, 0.0f, 1.0f );
Define the direction to the north in the coordinate system of your world:
glm::vec3 world_north( 0.0f, 1.0f, 0.0f );
With this information the vectors of the view coordinate system can be set up.
camPosition is the position of the "bird". A point high up in the sky:
float height = 10.0f;
glm::vec3 camPosition = world_up * height;
camTarget is the position the "bird" is looking at. A point on the ground:
glm::vec3 camTarget = glm::vec3(0.0f, 0.0f, 0.0f);
camUp is perpendicular to the vector from camPosition to camTarget. Since the "bird" looks straight down at the ground, it is the flight direction of the bird (e.g. to the north):
glm::vec3 camUp = world_north;
With these vectors the view matrix can be set up by glm::lookAt():
glm::mat4 view = glm::lookAt(camPosition, camTarget, camUp);
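To actually render from the bird's view, combine it with a projection matrix (a sketch; the field of view, aspect ratio, and clip planes are illustrative values):
glm::mat4 proj = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 viewProj = proj * view; // pass view and proj (or their product) to the shader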

OpenGL z value. Why is the negative value in front?

OpenGL has a right-handed coordinate system. It means z values increase towards the viewer.
(image: right-hand coordinate system)
I draw two triangles:
float vertices[] =
{
//position //color
//triangle 1
0.0f, 1.0f, -1.0f, 1.0f, 0.0f, 0.0f,//0
-1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 0.0f,//1
1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 0.0f,//2
//triangle 2
0.0f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f,//3
1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,//4
-1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f//5
};
Why is triangle 1 in front? Triangle 2 should be in front, because 0.0f > -1.0f.
I have only gl_Position = vec4(aPos, 1.0); in the vertex shader.
However, if I translate the vertices by z = -3 in the vertex shader, the translation behaves as it should: the object moves further away.
Why is triangle 1 in front? Triangle 2 should be in front, because 0.0f > -1.0f.
I have only gl_Position = vec4(aPos, 1.0); in the vertex shader.
Of course the red triangle is in front of the blue one, because you don't use any projection matrix. You forgot to transform the input vertex by the projection matrix before you assign the vertex coordinate to gl_Position.
This causes the vertices to be equal to the normalized device space coordinates. In normalized device space the z-axis points into the viewport, and the "projection" is orthographic rather than perspective.
You have to do something like this:
in vec3 aPos;
uniform mat4 modelProjectionMatrix;
void main()
{
    gl_Position = modelProjectionMatrix * vec4(aPos, 1.0);
}
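On the C++ side, the matrix can be built with GLM and uploaded like this (a sketch; program, width, and height are assumed to exist, and glm::value_ptr comes from <glm/gtc/type_ptr.hpp>):
glm::mat4 proj  = glm::perspective(glm::radians(45.0f), width / (float)height, 0.1f, 100.0f);
glm::mat4 model = glm::mat4(1.0f); // identity; replace with the real model transform
glm::mat4 modelProjection = proj * model;
glUniformMatrix4fv(glGetUniformLocation(program, "modelProjectionMatrix"),
                   1, GL_FALSE, glm::value_ptr(modelProjection));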

OpenGL rendering to cubemap

I'm trying to render to a cubemap. The scene being rendered is a terrain.
I use a latitude-longitude debug display to see what's in a given cubemap.
The two debug views on the bottom left are a dummy cubemap that just shows directions and one cubemap with real pictures.
The debug view on the bottom right half shows what gets rendered into the cubemap I'm after.
I've tried many different combinations for setting up the camera, but none of them gave any logical results. I've also compared the code with several sample implementations of dynamic cubemaps and was still unable to spot the problem. I'm out of ideas about what to try next, so any help or suggestion is welcome.
Draw to cubemap function:
void Draw(GLuint cubemap, glm::ivec2 res, glm::vec3 position)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, res.x, res.y);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rb);
    // camera
    glm::mat4 p = glm::perspective(90.0f, 1.0f, 0.01f, 10.0f);
    glm::mat4 v;
    glm::vec3 targets[6] = {
        glm::vec3(+1.0f, 0.0f, 0.0f),
        glm::vec3(-1.0f, 0.0f, 0.0f),
        glm::vec3(0.0f, +1.0f, 0.0f),
        glm::vec3(0.0f, -1.0f, 0.0f),
        glm::vec3(0.0f, 0.0f, +1.0f),
        glm::vec3(0.0f, 0.0f, -1.0f)
    };
    glm::vec3 ups[6] = {
        glm::vec3(0.0f, 1.0f, 0.0f),
        glm::vec3(0.0f, 1.0f, 0.0f),
        glm::vec3(0.0f, 0.0f, 1.0f),
        glm::vec3(0.0f, 0.0f, -1.0f),
        glm::vec3(0.0f, 1.0f, 0.0f),
        glm::vec3(0.0f, 1.0f, 0.0f)
    };
    // render
    for (int i = 0; i < 6; i++)
    {
        glViewport(0, 0, res.x, res.y);
        // setup target face
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cubemap, 0);
        // setup camera
        v = glm::lookAt(position, position + targets[i], ups[i]);
        // draw
        DrawTerrain(terrain.heightmap, terrain.m, v, p); // model, view, projection matrices
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
The matrices were wrong. After a very thorough check, the values that glm was returning didn't look correct, for both the projection and view matrices. I'll see whether I should file a bug report, but for now, here's the code that actually fixed the matrices (see the EDIT below, though).
// projection matrix (fov = 90 degrees, aspect = 1.0)
glm::mat4 p(0.0f); // explicitly zero-initialized; only the elements below are set
float n = 0.1f, f = 2.0f; // near and far
p[0][0] = 1.0f;
p[1][1] = 1.0f;
p[2][2] = -f / (f - n);
p[2][3] = -1.0f;
p[3][2] = -(f * n) / (f - n);
glm::vec3 targets[6] = {
    glm::vec3(+1.0f, 0.0f, 0.0f),
    glm::vec3(-1.0f, 0.0f, 0.0f),
    glm::vec3(0.0f, +1.0f, 0.0f),
    glm::vec3(0.0f, -1.0f, 0.0f),
    glm::vec3(0.0f, 0.0f, +1.0f),
    glm::vec3(0.0f, 0.0f, -1.0f)
};
glm::vec3 ups[6] = {
    glm::vec3(0.0f, 1.0f, 0.0f),
    glm::vec3(0.0f, 1.0f, 0.0f),
    glm::vec3(0.0f, 0.0f, -1.0f),
    glm::vec3(0.0f, 0.0f, 1.0f),
    glm::vec3(0.0f, 1.0f, 0.0f),
    glm::vec3(0.0f, 1.0f, 0.0f)
};
for (int i = 0; i < 6; ++i)
{
    // view matrix
    v = glm::lookAt(position, position + targets[i], ups[i]);
    v[0][2] *= -1.0f;
    v[1][2] *= -1.0f;
    v[2][2] *= -1.0f;
    v[3][2] *= -1.0f;
    // render...
}
EDIT:
After Andreas' comments I investigated a bit more.
glm::perspective expects the FOV in radians, but since every single example I had seen called that function with degrees, I never suspected it. After checking Scratchapixel I was sure that the perspective matrix was right (even though the determinant is negative). So, the FOV is in radians; that was my mistake.
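In other words, the original call should have been (a sketch of the corrected projection setup):
glm::mat4 p = glm::perspective(glm::radians(90.0f), 1.0f, 0.01f, 10.0f);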
However, the lookAt was wrong. I compared that function against several resources, in particular bgfx's lookAt, and indeed the entire third column should have its sign reversed. So the change where I multiply that column of the view matrix by -1 remained.

Why is my OpenGL program using matrix rotations displaying nothing?

I can't find how to create the view matrix with yaw, pitch and roll. I'm working with LWJGL and have a rotate function available.
viewMatrix.setZero();
viewMatrix.rotate(pitch, new Vector3f(1.0f, 0.0f, 0.0f));
viewMatrix.rotate(yaw, new Vector3f(0.0f, 1.0f, 0.0f));
viewMatrix.rotate(roll, new Vector3f(0.0f, 0.0f, 1.0f));
viewMatrix.m33 = 1.0f;
viewMatrix.translate(position);
I am doing something fundamentally wrong, and I hate the fact that I can't fix it due to the lack of documentation (or my lack of Google skills).
I do not transpose the matrix.
As a note, position is a zero vector and I do not see anything on the screen (when the view matrix is zero I do).
Added: I am trying to reach the equivalent of the following:
GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
GL11.glRotatef(roll, 0.0f, 0.0f, 1.0f);
GL11.glTranslatef(position.x, position.y, position.z);
You should use viewMatrix.setIdentity() instead of viewMatrix.setZero(), so the matrix starts out as the identity rather than all zeros. The rotate and translate calls multiply onto the existing matrix, and anything multiplied onto a zero matrix stays zero.
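The same pitfall exists outside LWJGL; for comparison, here is the equivalent with GLM in C++ (a sketch; glm::rotate expects angles in radians):
glm::mat4 viewMatrix(1.0f); // start from the identity, not from a zero matrix
viewMatrix = glm::rotate(viewMatrix, pitch, glm::vec3(1.0f, 0.0f, 0.0f));
viewMatrix = glm::rotate(viewMatrix, yaw,   glm::vec3(0.0f, 1.0f, 0.0f));
viewMatrix = glm::rotate(viewMatrix, roll,  glm::vec3(0.0f, 0.0f, 1.0f));
viewMatrix = glm::translate(viewMatrix, position);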
Compounding rotations like that is the wrong way to go about it; try this: http://tutorialrandom.blogspot.com/2012/08/how-to-rotate-in-3d-using-opengl-proper.html