QMatrix4x4 Model View Projection OpenGL Can't Get Scene to Render - C++

Given this vertex shader:
attribute vec3 vertex;
uniform mat4 mvp;
void main() {
    gl_Position = mvp * vec4(vertex, 1.0);
}
And this fragment shader:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
These shaders render the data below correctly when the mvp matrix is the identity, or when the model matrix is a scale, rotate, or translate transform:
GLfloat values[] = {
    -1.0, -1.0, +0.0,
    +1.0, -1.0, +0.0,
    +0.0, +1.0, +0.0,
};
Why does the following usage of Qt's QMatrix4x4::lookAt and QMatrix4x4::perspective cause the scene to be rendered as if no object is there?
QMatrix4x4 model;
QMatrix4x4 view;
view.lookAt(
    QVector3D(0.0, 0.0, 10.0), // Eye
    QVector3D(0.0, 0.0, 0.0),  // Focal Point
    QVector3D(0.0, 1.0, 0.0)); // Up vector
QMatrix4x4 proj;
// Window size is fixed at 800.0 by 600.0
proj.perspective(45.0, 800.0 / 600.0, 1.0, 100.0);
QMatrix4x4 mvp = (model * view * proj);
What I am looking for is not only how to fix the code, but also by what means I can debug problems like this in the future.
Just on a hunch I changed mvp to p * v * m and it fixed the issue. Why is it called MVP if you have to do the multiplication in the opposite order? I know matrix multiplication is not commutative: if A and B are matrices, then in general A * B != B * A (unless, say, one of them is the identity I).

It's called MVP because... somebody named it that way. ;)
It makes some sense, though. It basically lists the transformations in the order they are applied. You first apply the Model matrix to your vertices, then the View matrix to the result of that, then the projection matrix to the result of both.
Or mathematically, for an input vertex vObj, you could write:
vWorld = M * vObj
vEye = V * vWorld
vClip = P * vEye
If you substitute the equations, you get:
vClip = P * vEye = P * (V * vWorld) = P * (V * (M * vObj))
Matrix multiplications are associative, so this can be rewritten as:
P * (V * (M * vObj)) = (P * V * M) * vObj
Therefore, the combined matrix is calculated as P * V * M.
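Applied to the question's Qt code, the fix is a one-line change (a minimal sketch reusing the question's variable names):
QMatrix4x4 mvp = proj * view * model; // projection left-most, model right-most
Read right to left: the model matrix is applied first, the projection last.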

Reto Koradi is right. And it's not because of memory layout or anything like that; it's because OpenGL conventionally uses column vectors, i.e., matrices with four rows and one column (4x1 matrices). Transformations are written Matrix4x4 * Vector so the dimensions are compatible for matrix multiplication (the result is a column vector again).
Another approach is to define the vector as a row (a 1x4 matrix); then all transformations become vWorld = vObj * M and so on, again meeting the criteria for matrix multiplication, with a row vector as the result. Suddenly the last line above reads vClip = vObj * M * V * P.
Matrix multiplication itself is always the same; you should not need to care how matrices are stored in memory (well, unless it's a linear array and you need to address an element by row/column), but the transform matrices differ depending on the definition of the vector.
In OpenGL, always compose transforms from right to left. Remember that the left-most matrix is applied last.
For some reason (history?), vectors are usually treated as column vectors and transform matrices are applied from right to left, but it's possible to use both approaches in the OpenGL Shading Language (GLSL).
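As a rough illustration of the two conventions (this sketch uses glm, which mirrors GLSL's operator semantics; the library choice is my assumption, not part of the answer):
#include <glm/glm.hpp>

int main() {
    glm::mat4 M(1.0f);                    // some transform
    glm::vec4 v(1.0f, 2.0f, 3.0f, 1.0f);

    glm::vec4 asColumn = M * v; // column-vector convention: matrix on the left
    glm::vec4 asRow    = v * M; // row-vector convention: equivalent to transpose(M) * v
    return 0;
}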

It's to do with the dimensionality of matrices and mathematical notation. The dimensions of a matrix are defined as rows x columns. So a 1x3 matrix is M = [a b c]. Then a 4x4 matrix is as expected.
Multiplying two matrices of dimension AxB and CxD can only be done if B=C (row into column and sum the result).
A list of N vertices with XYZW coordinates can be defined as a matrix of size Nx4 or 4xN, but only 4xN works with the definition of the multiplication operator if the block of vertices comes after the matrix:
V' (4xN) = M (4x4) x V (4xN)
So vertices are considered as column vectors to make this notation work.

Related

Why is GLM Perspective projection acting like Orthographic Projection

I have a projection matrix in my C++ OpenGL application.
glm::mat4 projection = glm::perspective(45.0f, 16.0f / 9.0f, 1.0f, 100.0f);
This matrix is later sent as a uniform to the vertex shader:
Nade::Shader::SetMat4(app.shader->GetProgram(), "p", app.projection);
It is then used inside the vertex shader:
gl_Position = m * p * vec4(pos,1.0);
The rendered quad is then moved along the Z axis:
object.Translate(0, 0, -0.05);
Observed behavior: the rendered mesh behaves as if under an orthographic projection, staying the same size but clipping away at the far plane.
Expected behavior: the rendered mesh shrinks with distance and clips away.
How can I fix this?
gl_Position = m * p * vec4(pos,1.0); is equivalent to gl_Position = m * (p * vec4(pos,1.0));, which means that the position is transformed by p before being transformed by m.
Assuming p means "projection" and m means "modelview", then it should be:
gl_Position = p * m * vec4(pos,1.0);
You might be wondering: Why didn't this cause issues earlier?
With an orthographic projection, and a camera looking down the z axis, the original code could still look like it works. That's because a zero-centered orthographic projection is basically just a scaling matrix.
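For a rough intuition (a sketch, assuming glm; the numbers are arbitrary): a zero-centered glm::ortho volume yields a matrix that is purely diagonal, i.e., a scaling matrix, which is why the wrong multiplication order can go unnoticed.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Zero-centered volume: the translation terms vanish, leaving a diagonal
    // matrix that scales by 2/(r-l), 2/(t-b), and -2/(f-n).
    glm::mat4 ortho = glm::ortho(-8.0f, 8.0f, -4.5f, 4.5f, -1.0f, 1.0f);
    return 0;
}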

How to remove transformations from a matrix

I have a transformation matrix, which is a combination of three other transformation matrices.
glm::mat4 Matrix1 = position * rotation * scaling;
glm::mat4 Matrix2 = position * rotation * scaling;
glm::mat4 Matrix3 = position * rotation * scaling;
glm::mat4 transMatrix = Matrix1 * Matrix2 * Matrix3;
If sometime later I just want to remove the effect of Matrix1 from transMatrix, how can I do that?
In short, you may simply multiply by the inverse of Matrix1:
glm::mat4 Matrix2And3 = glm::inverse(Matrix1) * transMatrix;
Order of operations is important: if you wanted to remove Matrix3, you would need to compute transMatrix * inverse(Matrix3) instead. If it were Matrix2, you would first need to remove Matrix1 (or Matrix3) and then Matrix2, as sketched below.
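A minimal sketch of those two cases, reusing the question's names:
// Remove Matrix3 (the right-most factor): multiply by its inverse on the right.
glm::mat4 Matrix1And2 = transMatrix * glm::inverse(Matrix3);
// Remove Matrix2 (the middle factor): peel off Matrix1 first, then Matrix2.
glm::mat4 Matrix3Only = glm::inverse(Matrix2) * glm::inverse(Matrix1) * transMatrix;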
However, a general matrix inversion is comparatively expensive and should be avoided where possible, and in your situation it is avoidable.
What you call Matrix is actually just a 3D Pose: Position + Rotation + Size.
Assuming there is no scaling (a uniform scale factor of 1, so the matrix is just rotation plus translation), the mat3 component of your Pose is an orthogonal matrix. These kinds of matrices have a special property:
Inverse(O) == Transpose(O)
Calculating the transpose of a matrix is a lot simpler than the inverse. This means you can do the following to achieve the same result, but much faster:
glm::mat3 rotT = glm::transpose(glm::mat3(Matrix1)); // inverse of the rotation part
glm::mat4 inv = glm::mat4(rotT);
inv[3] = glm::vec4(-(rotT * position1), 1.0f);       // the inverse translation is -R^T * p, not -p
glm::mat4 Matrix2And3 = inv * transMatrix;
If you want to go even further, I recommend you create a Pose class and provide cast operators to mat4 and mat3, so you get this performance benefit conveniently.
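A minimal sketch of what such a Pose class could look like (the layout and member names are my assumptions, not code from this answer):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// A rigid 3D pose: rotation plus position, no scaling.
struct Pose {
    glm::quat rotation = glm::quat(1.0f, 0.0f, 0.0f, 0.0f);
    glm::vec3 position = glm::vec3(0.0f);

    explicit operator glm::mat4() const {
        glm::mat4 m = glm::mat4_cast(rotation); // rotation part
        m[3] = glm::vec4(position, 1.0f);       // translation column
        return m;
    }

    // Cheap inverse: conjugate the rotation, rotate the negated position.
    Pose inverse() const {
        glm::quat invRot = glm::conjugate(rotation);
        return Pose{invRot, invRot * -position};
    }
};
Inverting a pose this way costs a quaternion conjugate and one vector rotation instead of a general 4x4 inversion.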

Verification of transformation matrix usage in vertex shader. Correctness of normals transformation

I need to be able to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
    vec3 normal, lightDir;
    vec4 diffuse, ambient, globalAmbient;
    float NdotL;
    // Transformation part
    normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
    gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
    // Calculate color
    lightDir = normalize(vec3(gl_LightSource[0].position));
    NdotL = max(abs(dot(normal, lightDir)), 0.0);
    diffuse = gl_Color * gl_LightSource[0].diffuse;
    ambient = gl_Color * gl_LightSource[0].ambient;
    globalAmbient = gl_LightModel.ambient * gl_Color;
    gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all transformations in lines 8-9 (the // Transformation part). Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse transpose of the upper-left 3x3 of the 4x4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But if a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4x4 matrix transformationMatrix is orthogonal, meaning the X, Y, and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper-left 3x3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note, this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97), and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal to
m1 * (v * m2);
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at the questions linked above.
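In practice the inverse-transpose is usually computed once on the CPU and uploaded as a uniform, rather than calling inverse() per vertex; a minimal sketch, assuming glm (the question itself uses legacy GLSL):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

int main() {
    glm::mat4 transformationMatrix(1.0f); // stand-in for the real transform
    // Equivalent to transpose(inverse(mat3(transformationMatrix))),
    // computed once per draw call instead of once per vertex.
    glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(transformationMatrix));
    // Upload normalMatrix as a uniform and compute in the shader:
    //   normal = gl_NormalMatrix * normalMatrix * gl_Normal;
    return 0;
}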

shadow mapping - transforming a view space position to the shadow map space

I use deferred rendering and store fragment positions in camera view space. When I perform the shadow calculation I need to transform a camera view space position to shadow map space. I build the shadow matrix this way:
shadowMatrix = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;
shadowBiasMatrix shifts values from the [-1,1] range to the [0,1] range.
lightProjectionMatrix is an orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and encodes the light direction.
inverseCameraViewMatrix transforms a fragment position from camera view space to world space.
I wonder if it is correct to multiply the inverse camera view matrix with the other matrices? Maybe I should use the inverse camera view matrix separately?
First scenario:
vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);
Second scenario, where the inverse camera view matrix is used separately:
vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;
Precomputing the shadow matrix in the described way is the correct approach and should work as expected.
Because of the associativity of matrix multiplication, the results of the two scenarios are identical (ignoring floating-point precision) and thus interchangeable.
But because these calculations are done in the fragment shader, it is best to premultiply the matrices in the main program to do as few operations as possible per fragment.
I'm also writing a deferred renderer currently and calculate my matrices in the same way without any problems.
// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);
// perspective divide
lightspace_pos /= lightspace_pos.w;
// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);
// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth = lightspace_pos.z;
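A minimal sketch of the CPU-side precomputation mentioned above (the function and parameter names are my assumptions):
#include <glm/glm.hpp>

// Build the light-space matrix once per frame so the fragment shader
// performs a single matrix-vector multiply per fragment.
glm::mat4 makeLightspaceMat(const glm::mat4& lightProjection,
                            const glm::mat4& lightView,
                            const glm::mat4& cameraView) {
    return lightProjection * lightView * glm::inverse(cameraView);
}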
I also found this tutorial using a similar approach that could be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx
float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
    mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
    projectedEyeDir = projectedEyeDir / projectedEyeDir.w;
    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5, 0.5) + vec2(0.5, 0.5);
    const float bias = 0.0001;
    // texture2D returns a vec4; take the depth from the first channel
    float depthValue = texture2D(tShadowMap, textureCoordinates).x - bias;
    // the comparison yields a bool; convert explicitly since the function returns float
    return (projectedEyeDir.z * 0.5 + 0.5 < depthValue) ? 1.0 : 0.0;
}
The eyeDir that comes in as input is in view space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from camera view space into world space, then into light view space, then into light projection space/clip space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.
Once we are in the right space we calculate the texture coordinates and we are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map to avoid a point shading itself because of rounding errors! So we shift the whole map back a bit, so that all the values in it are slightly smaller than they should be.

How does vectors multiply act in shader language?

Take gl_FragColor = v1 * v2, for example: I can't really work out how the multiplication is done, and the reference only seems to explain vector-times-matrix multiplication.
PS: v1 and v2 are both of type vec4.
The * operator works component-wise for vectors like vec4.
vec4 a = vec4(1.0, 2.0, 3.0, 4.0);
vec4 b = vec4(0.1, 0.2, 0.3, 0.4);
vec4 c = a * b; // vec4(0.1, 0.4, 0.9, 1.6)
The GLSL Language Specification says under section 5.10 Vector and Matrix Operations:
With a few exceptions, operations are component-wise. Usually, when an
operator operates on a vector or matrix, it is operating independently
on each component of the vector or matrix, in a component-wise
fashion. [...] The exceptions are matrix multiplied by vector, vector
multiplied by matrix, and matrix multiplied by matrix. These do not
operate component-wise, but rather perform the correct linear
algebraic multiply.
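To contrast the component-wise case with the exceptions, here is a small sketch using glm, whose * operator mirrors GLSL's semantics (using glm here is my choice, so the example is compilable C++):
#include <glm/glm.hpp>

int main() {
    glm::vec4 a(1.0f, 2.0f, 3.0f, 4.0f);
    glm::vec4 b(0.1f, 0.2f, 0.3f, 0.4f);
    glm::mat4 m(2.0f); // 2s on the diagonal

    glm::vec4 componentWise = a * b; // (0.1, 0.4, 0.9, 1.6)
    glm::vec4 linearAlgebra = m * a; // (2, 4, 6, 8): a true matrix-vector product
    return 0;
}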