Why is the MVP being transposed in the DirectX example? - c++

I found this in our internal code as well and I'm trying to understand what is happening.
In the following code: https://github.com/microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12MeshShaders/src/MeshletRender
They do Transpose(M * V * P) before sending it to the shader. In the shader it's treated as a row-major matrix and they do pos * MVP. Why is this? I have similar code where we multiply the MVP outside in a row-major matrix, insert it into the shader's row-major matrix, and then do mul(pos, transpose(mvp)).
We have similar code for PSSL where we do the M * V * P and send it to the shader, where we have specified the matrix as row_major float4x4, but there we don't have to transpose.
Hopefully someone can help me out here because it's very confusing. Does it have to do with how the memory is handled?

I got confirmation that DX11 is column-major by default.
On line 32, the combined model-view-projection matrix is computed by
multiplying the projection, view, and world matrix together. You will
notice that we are post-multiplying the world matrix by the view
matrix and the model-view matrix by the projection matrix. If you have
done some programming with DirectX in the past, you may have used
row-major matrix order in which case you would have swapped the order
of multiplications. Since DirectX 10, the default order for matrices
in HLSL is column-major so we will stick to this convention in this
demo and future DirectX demos.
Using column-major matrices means that we have to post-multiply the
vertex position by the model-view-projection matrix to correctly
transform the vertex position from object-space to homogeneous
clip-space.
From https://www.3dgep.com/introduction-to-directx-11/
And https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-per-component-math#matrix-ordering
Matrix packing order for uniform parameters is set to column-major by
default.
Hope this saves someone from going insane.
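
To make the bookkeeping concrete, here is a minimal C++ sketch of the pattern (the struct and function names are mine, not the sample's). DirectXMath works row-major with row vectors, while HLSL packs constant-buffer matrices column-major by default, so one transpose on the CPU lets the shader read back the same mathematical matrix and do mul(pos, MVP):

#include <DirectXMath.h>
using namespace DirectX;

struct SceneConstants      // mirrors a cbuffer holding a float4x4 MVP
{
    XMFLOAT4X4 MVP;
};

void UpdateConstants(SceneConstants& out,
                     FXMMATRIX model, CXMMATRIX view, CXMMATRIX proj)
{
    // Row-vector convention: v' = v * M * V * P.
    XMMATRIX mvp = model * view * proj;

    // Transposing here makes the row-major CPU storage line up with
    // HLSL's default column-major packing; the shader then does
    // mul(position, MVP) and gets the same transform.
    XMStoreFloat4x4(&out.MVP, XMMatrixTranspose(mvp));
}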

Related

Matrix transposition before passing to the vertex shader

I'm very confused about passing matrices to a vertex shader. As far as I know, you have to transpose matrices before passing them to a vertex shader.
But my world matrix did not work correctly when I passed it to the vertex shader: scaling and rotation worked fine, but translation caused weird visual glitches. Through trial and error I found that the problem could be solved by not transposing the world matrix before passing it, but when I tried the same with the view and projection matrices, nothing worked.
I don't understand why. Do I have to transpose all matrices except world matrices?
It depends on the code of your shaders.
Without any of the /Zpr or /Zpc HLSL compiler options, when your HLSL code says pos = mul( matrix, vector ) the matrix is expected to be column major. When HLSL code says pos = mul( vector, matrix ), the matrix is expected to be row major.
Column-major matrices are slightly faster to handle on GPUs, for the following reasons.
The HLSL for the multiplication compiles into four dp4 instructions. Dot products are fast on GPUs and are used everywhere a lot, especially in pixel shaders.
The VRAM access pattern is slightly better. If you want to know more, the keyword is “memory coalescing”; most sources are about CUDA, but that is equally applicable to graphics.
That’s why Direct3D defaults to column major layout.
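
If the two conventions still feel slippery, the identity behind them, v * M == transpose(M) * v, can be checked on the CPU. A tiny GLM sketch (illustrative, not from the answer above):

#include <glm/glm.hpp>
#include <cassert>

int main()
{
    glm::mat4 M(1.0f);
    M[3] = glm::vec4(1.0f, 2.0f, 3.0f, 1.0f); // translation in the last column

    glm::vec4 v(1.0f); // (1, 1, 1, 1)

    glm::vec4 columnStyle = M * v;                 // like mul(matrix, vector)
    glm::vec4 rowStyle    = v * glm::transpose(M); // like mul(vector, matrix)

    assert(columnStyle == rowStyle); // same transform, two conventions
}

Each component of either product is one four-element dot product, which is exactly what the dp4 instructions mentioned above compute.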

Set projection mode in OpenGL 3?

In old OpenGL versions spatial projection (where objects become smaller with growing distance) could be enabled via a call to
glMatrixMode(GL_PROJECTION);
(at least as far as I remember this was the call to enable this mode).
In OpenGL 3, where this slow stack mode is no longer available, this function does not work any more.
So how can I have the same spatial effect here? What is the intended way for this?
You've completely misunderstood what glMatrixMode actually did. The old, legacy OpenGL fixed function pipeline kept a set of matrices around, which were all used indiscriminately when drawing stuff. The two most important matrices were:
the modelview matrix, which is used to describe the transformation from model local space into view space. View space is still kind of abstract, but it can be understood as the world transformed into the coordinate space of the "camera". Illumination calculations happened in that space.
the projection matrix, which is used to describe the transformation from view space into clip space. Clip space is an intermediary stage right before reaching device coordinates (there are a few important details involved in this, but those are not important right now), which mostly involves applying the homogeneous divide, i.e. scaling the clip coordinate vector by the reciprocal of its w-component.
The fixed transformation pipeline always was
position_view := Modelview · position
do illumination calculations with position_view
position_clip := Projection · position_view
position_pre_ndc := position_clip · 1/position_clip.w
In legacy OpenGL the modelview and projection matrices are always there. glMatrixMode is a selector that determines which of the existing matrices is subject to the operations done by the matrix manipulation functions. One of these functions is glFrustum, which generates and multiplies in a perspective matrix, i.e. a matrix that creates a perspective effect through the homogeneous divide.
So how can I have the same spatial effect here? What is the intended way for this?
You generate a perspective matrix of the desired properties, and use it to transform the vertex attribute you designate as model local position into clip space and submit that into the gl_Position output of the vertex shader. The usual way to do this is by passing in a modelview and a projection matrix as uniforms.
The most bare bones GLSL vertex shader doing that would be
#version 330

uniform mat4 modelview;
uniform mat4 projection;

in vec4 pos;

void main() {
    gl_Position = projection * modelview * pos;
}
As for generating the projection matrix: all the popular computer graphics math libraries have you covered with functions for that.
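
For example, with GLM it might look like the following (a sketch assuming a GL 3.3 context with function pointers loaded, e.g. via GLAD, and prog being the linked program containing the shader above):

#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::perspective, glm::lookAt
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

void SetMatrices(GLuint prog, float aspect)
{
    // Perspective projection: 60 degree vertical FOV, near 0.1, far 100.
    glm::mat4 projection =
        glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);

    // Modelview: camera at (0, 0, 5) looking at the origin, model at identity.
    glm::mat4 modelview = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                                      glm::vec3(0.0f),
                                      glm::vec3(0.0f, 1.0f, 0.0f));

    glUseProgram(prog);
    // GL_FALSE: GLM already stores matrices column-major, matching GLSL.
    glUniformMatrix4fv(glGetUniformLocation(prog, "projection"), 1, GL_FALSE,
                       glm::value_ptr(projection));
    glUniformMatrix4fv(glGetUniformLocation(prog, "modelview"), 1, GL_FALSE,
                       glm::value_ptr(modelview));
}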

Calculating the Normal Matrix in OpenGL

The following site says to use the model_view matrix when computing the normal matrix (assuming we are not using the built-in gl_NormalMatrix): Light House. I have the following algorithm in my program:
// Calculate the normal matrix
// 1. Multiply the model matrix by the view matrix, then grab the upper-left
//    3x3 corner of the result.
mat3x3 mv_orientation = glext::orientation_matrix<float, glext::column>(
    glext::model_view<float, glext::column>(glext_model_, glext_view_));
// 2. Because OpenGL matrices use homogeneous coordinates, an affine inversion
//    should work???
mv_orientation.affine_invert();
// 3. The normal matrix is defined as the transpose of the inverse of the
//    upper-left 3x3 matrix.
mv_orientation.transpose();
// 4. Pass the result to the shader.
basic_shader_.set_uniform_3d_matrix("normal_matrix", mv_orientation.to_gl_matrix());
Assuming the statements in the code above are mostly correct: do you not include the projection matrix in the computation of the normal matrix? If not, why? Does the projection matrix not affect normals like it does points?
That's because projection is not an affine transformation. Projections don't preserve the inner product, and therefore they don't preserve angles. The angles that actually affect light diffusion and reflection are the angles in affine 3D space, so also using the projection matrix would give you different, wrong angles, and hence wrong lighting.
Do you not include the projection matrix in the computation of the normal matrix?
No. Normals are required for calculations, like illumination, that happen in world and/or view space. It doesn't make sense from a mathematical point of view to do this after projection.
If not why, does the projection matrix not affect the normals like they do points?
Because it would make no sense. The fact that normals should not undergo the projective transformation was the original reason to have a separate projection matrix. If you put normals through the projection, they'd lose their meaning and usefulness.
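
For completeness, the usual one-line derivation (textbook material, not from the answers above): a normal n must stay perpendicular to every tangent t. If tangents transform as t' = M · t and normals as n' = G · n, then

n'^T · t' = (G · n)^T · (M · t) = n^T · (G^T · M) · t

and this stays zero for every tangent with n^T · t = 0 exactly when G^T · M = I, i.e. G = (M^-1)^T. That is why the normal matrix is the inverse transpose, and why the projection never enters the picture.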

Given the model/view/projection transformation matrix, how do I compute the normal matrix without using deprecated features in GLSL 1.2?

I know that the normal transformation matrix is the inverse of the transpose of the model/view/projection matrix, but it looks like "inverse" was only added in GLSL 1.4 and I can't find a "transpose". Do I have to just copy and paste a bunch of math into my GLSL? (If so, is there a nice authoritative open-source place I can copy from?)
To be clear, what I'm asking is "How do I calculate gl_NormalMatrix without using deprecated APIs"?
This is normally handled by computing the transpose of the inverse of the modelview matrix
N = (M^-1)^T
on the CPU, then uploading the matrix just like uploading any other matrix.
Just to clarify, there is also a transpose() in GLSL, and if you don't do any scaling the normal matrix is just the upper-left 3x3 submatrix; in GLSL you can do
normal = mat3(model_matrix) * v_normal;
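
On the CPU side, the general case with GLM might look like this (a sketch; modelView is assumed to be the already-combined view * model matrix):

#include <glm/glm.hpp>

// N = ((upper-left 3x3 of the modelview)^-1)^T
glm::mat3 NormalMatrix(const glm::mat4& modelView)
{
    return glm::transpose(glm::inverse(glm::mat3(modelView)));
}

You then upload it like any other uniform, e.g. with glUniformMatrix3fv and glm::value_ptr.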

Normal model matrix calculation for normal mapping in GLSL

I need to calculate a normal model matrix for doing normal mapping in a GLSL shader, and I want to make sure I am right on this: when I multiply the view (camera model) matrix with the geometry model matrix, is the view matrix supposed to be already inverted? It is not clear from the online examples like those found here and here. Also, I see that in some cases people also transpose the resulting matrix. Why? So what is the right way to build a normal model matrix in OpenGL?
Currently I do it this way:
glm::mat4 view = inverse(GetCameraModel());
glm::mat3 normalModelMatr= glm::mat3(view * mesh.GetModel());
Is this the way to go?
The correct normal matrix is the inverse transpose of the model-view matrix. If you do not do any non-uniform scaling, that is, scaling the axes by different amounts, the rotation part of the matrix is orthogonal, so its inverse is equal to its transpose (a uniform scale only changes the normals' length, which renormalization removes).
Therefore, the two operations cancel out and you are left with the original matrix.
If you do use non-uniform scaling, the matrix is not orthogonal and you must compute the inverse transpose.
You take the top-left 3x3 matrix, because you only need to rotate and scale normals, not translate them.
So your normal matrix is correct as long as you do not employ non-uniform scaling.
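
Putting that together, a sketch of both variants, reusing the asker's GetCameraModel() and mesh.GetModel() (GLM assumed):

#include <glm/glm.hpp>

// The view matrix is the inverse of the camera's model (world) matrix,
// so yes, it must already be inverted before combining.
glm::mat4 view  = glm::inverse(GetCameraModel());
glm::mat4 model = mesh.GetModel();

// General case, correct even with non-uniform scaling:
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(view * model)));

// With rotation, translation, and uniform scale only, this is equivalent
// up to normal length (a normalize() in the shader removes the difference):
// glm::mat3 normalMatrix = glm::mat3(view * model);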