Normal model matrix calculation for normal mapping in GLSL

I need to calculate a normal model matrix for doing normal mapping in a GLSL shader. I want to make sure I have this right: when I multiply the view (camera model) matrix with the geometry's model matrix, is the view matrix supposed to be already inverted? It is not clear from online examples like those found here and here. Also, I see that in some cases people also transpose the resulting matrix. Why? So what is the right way to build a normal model matrix in OpenGL?
Currently I do it this way:
glm::mat4 view = glm::inverse(GetCameraModel()); // view matrix = inverted camera model matrix
glm::mat3 normalModelMatr = glm::mat3(view * mesh.GetModel());
Is this the way to go?

The correct normal matrix is the inverse transpose of the model-view matrix. If you do not do any non-uniform scaling, that is, scaling the axes by different amounts, the matrix is orthogonal, so its inverse is equal to its transpose.
The two operations therefore cancel out, and the result is just the original matrix.
If you do use non-uniform scaling, the matrix is not orthogonal, and you must compute the inverse transpose.
You take the upper-left 3x3 matrix because normals only need to be rotated and scaled, never translated.
So your normal matrix is correct as long as you do not employ non-uniform scaling.
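For reference, a minimal sketch of that with GLM (glm::inverseTranspose lives in the GTC matrix_inverse extension; GetCameraModel() and mesh.GetModel() are the accessors from the question):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

// Model-view matrix: inverted camera model matrix times the mesh's model matrix.
glm::mat4 modelView = glm::inverse(GetCameraModel()) * mesh.GetModel();

// Safe in every case, including non-uniform scaling:
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelView));

// Without non-uniform scaling, the cheap version is equivalent:
// glm::mat3 normalMatrix = glm::mat3(modelView);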

Related

How to calculate a linear tapering transformation matrix

I need to calculate a 4x4 matrix (for OpenGL) that can transform the 3D object on the left of the picture into the one on the right. The transformation is applied along only one axis.
EDIT:
The inputs are a given 3D object (its points) to be deformed and a single variable for the amount of deformation.
The picture shows a cube projected onto the view plane, with only the relevant changes; there are no changes along the axis perpendicular to the view plane.
The relative position of the two objects is not relevant; it is used only to show the "before and after" situation.
******* wrong answer was here *******
It's somewhat the opposite of a 2D perspective matrix followed by the perspective division. So, to undo this "perspective" effect, you would need to do something opposite to the perspective division and then multiply the result by an inverted "perspective" matrix. And though the perspective matrix can be inverted, I have no idea what the "opposite of perspective division" would be. I think you just can't do it with matrices; you'll have to transform the Y coordinate of each vertex instead.
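A rough sketch of that per-vertex approach (the taper axis, the 0-to-1 amount parameter, and the vertex container are all assumptions, since the question's picture is not reproduced here):
#include <vector>
#include <glm/glm.hpp>

// Linearly taper the object's Y extent along X: vertices at x = xMin keep
// their full height, vertices at x = xMax are squeezed by 'amount' (0..1).
void applyLinearTaper(std::vector<glm::vec3>& vertices,
                      float xMin, float xMax, float amount)
{
    for (glm::vec3& v : vertices)
    {
        float t = (v.x - xMin) / (xMax - xMin); // 0 at xMin, 1 at xMax
        v.y *= 1.0f - amount * t;               // shrink Y more as x grows
    }
}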

Calculating the Normal Matrix in OpenGL

The following site says to use the model-view matrix when computing the normal matrix (assuming we are not using the built-in gl_NormalMatrix): Lighthouse3D. I have the following algorithm in my program:
// Calculate the normal matrix.
// 1. Multiply the model matrix by the view matrix, then grab the upper-left
//    3x3 corner of the result.
mat3x3 mv_orientation = glext::orientation_matrix<float, glext::column>(
    glext::model_view<float, glext::column>(glext_model_, glext_view_));
// 2. Because OpenGL matrices use homogeneous coordinates, an affine inversion
//    should work???
mv_orientation.affine_invert();
// 3. The normal matrix is defined as the transpose of the inverse of the
//    upper-left 3x3 matrix.
mv_orientation.transpose();
// 4. Hand this to the shader.
basic_shader_.set_uniform_3d_matrix("normal_matrix", mv_orientation.to_gl_matrix());
Assuming the statements in the code above are mostly correct: do you not include the projection matrix in the computation of the normal matrix? If not, why not? Does the projection matrix not affect normals the way it affects points?
That's because projection is not an affine transformation. Projections do not preserve the inner product, and therefore they do not preserve angles. And the real angles that affect light diffusion and reflection are the angles in affine 3D space. So using the projection matrix as well would give you different, wrong angles, and hence wrong lighting.
Do you not include the projection matrix in the computation of the normal matrix?
No. Normals are required for calculations, like illumination, that happen in world and/or view space. It doesn't make sense from a mathematical point of view to do this after projection.
If not why, does the projection matrix not affect the normals like they do points?
Because it would make no sense. That normals should not undergo the projective transformation was the original reason for having a separate projection matrix. If you put normals through the projection, they'd lose their meaning and usefulness.
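A quick numerical check of the "projection does not preserve angles" claim, using GLM (the point and directions are arbitrary; this is just a sketch):
#include <cmath>
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    // A point in view space and two perpendicular directions attached to it.
    // One direction must have a depth component; otherwise the shared
    // perspective divide happens to preserve the angle.
    glm::vec4 p(0.5f, 0.5f, -2.0f, 1.0f);
    glm::vec4 a = p + glm::vec4(1.0f, 0.0f, -1.0f, 0.0f);
    glm::vec4 b = p + glm::vec4(0.0f, 1.0f,  0.0f, 0.0f);

    // Apply the projection and the perspective divide.
    auto project = [&](glm::vec4 v) { v = proj * v; return glm::vec3(v) / v.w; };
    glm::vec3 da = glm::normalize(project(a) - project(p));
    glm::vec3 db = glm::normalize(project(b) - project(p));

    // 90 degrees in view space, something else after projection.
    std::printf("angle after projection: %f deg\n",
                glm::degrees(std::acos(glm::dot(da, db))));
}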

Projection of rotation matrix

I have a rotation matrix I use to show a 3D vector with OpenGL. Now I would like to have a projection of this vector onto the XY-plane, so I'm looking for the rotation matrix I can use to do this. Any ideas on how to do this?
This will not be a rotation matrix, but a general 4x4 transformation matrix that projects onto a plane. It is often used as a "shadow" matrix for flattening objects onto a floor.
See more here: http://www.opengl.org/archives/resources/features/StencilTalk/tsld021.htm
https://math.stackexchange.com/questions/320527/projecting-a-point-on-a-plane-through-a-matrix
Projected Shadow with shadow matrix, simple test fails
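In the simplest case, an orthogonal projection onto the XY-plane just zeroes the z component, and the 4x4 matrix for that is trivial to build with GLM (a sketch; position is a placeholder, and a real "shadow" matrix for a point light, as in the links above, additionally encodes the light position and the plane):
#include <glm/glm.hpp>

// Orthogonal projection onto the plane z = 0: keep x and y, flatten z.
// GLM matrices are column-major, so m[column][row].
glm::mat4 flattenToXY(1.0f);
flattenToXY[2][2] = 0.0f; // zero out the z axis

glm::vec4 flattened = flattenToXY * glm::vec4(position, 1.0f);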

Given the model/view/projection transformation matrix, how do I compute the normal matrix without using deprecated features in GLSL 1.2?

I know that the normal transformation matrix is the inverse of the transpose of the model/view/projection matrix, but it looks like "inverse" was only added in GLSL 1.4 and I can't find a "transpose". Do I have to just copy and paste a bunch of math into my GLSL? (If so, is there a nice authoritative open-source place I can copy from?)
To be clear, what I'm asking is "How do I calculate gl_NormalMatrix without using deprecated APIs"?
This is normally handled by computing the transpose of the inverse of the modelview matrix
N = (M^-1)^T
on the CPU, then uploading it just like any other matrix.
Just to clarify: GLSL does have a transpose function, and if you don't do any non-uniform scaling, the normal matrix is simply the upper-left 3x3 submatrix of the model matrix. In GLSL you can do
normal = mat3(model_matrix) * v_normal;
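For completeness, the CPU-side computation and upload might look like this with GLM (a sketch; program and modelView are assumed to exist, and glm::value_ptr comes from the GTC type_ptr extension):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// N = (M^-1)^T, computed once per draw call on the CPU.
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelView)));

// Upload like any other uniform; no deprecated built-ins required.
GLint loc = glGetUniformLocation(program, "normal_matrix");
glUniformMatrix3fv(loc, 1, GL_FALSE, glm::value_ptr(normalMatrix));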

Matrix multiplication for autorotation and different screen sizes

Whether you use the fixed-function or a programmable shader pipeline, a common vertex pipeline consists of this matrix multiplication (either custom-coded or done behind the scenes):
Projection * Modelview * Position
Lots of tutorials note that items such as an object's rotation should go into the Modelview matrix.
I created a standard rotation matrix function based on degrees, and then added the proper multiple of 90 to the degrees parameter to account for the screen's autorotation orientation. It works.
For different screen sizes (different pixel widths and heights), I could also factor a Scale multiplier in there, so that a single Modelview matrix incorporates a lot of these.
But what I've settled on is much more verbose matrix math, and since I'm new to this stuff, I'd appreciate feedback on whether this is smart.
I simply add independent matrices for the screen-size scaling and the screen orientation, in addition to object manipulations such as scale and rotation. I end up with this:
Projection * ScreenRotation * ScreenScale * Translate * Rotate * Scale * Position
I find that some of these can be applied in interchangeable order; for example, Rotate and Scale can be switched.
This gives me more fine-grained control and better segregation of code, so I can concentrate on just an object's rotation without having to think about the screen's orientation at the same time, for example.
Is this a common or acceptable strategy for organizing matrix math? It seems to work fine, but are there any pitfalls to such verbosity?
The main issue with such verbosity is that it wastes precious computation cycles if performed on the GPU. Each matrix would be supplied as a uniform, forcing the GPU to redo the whole product for each and every vertex, even though it is actually constant across the whole draw call. The nice thing about matrices is that a single matrix can hold an entire chain of transformations, so the transformation becomes a single vector-matrix multiplication.
The typical stanza
Projection · Modelview · Position
of using two matrices comes from the fact that one usually needs the intermediate result Modelview · Position for some calculations. In theory you could contract the whole thing down to
ProjectionViewModel · Position
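In GLM terms, that contraction is a one-time multiplication on the CPU (a sketch; pvmLocation and the three input matrices are assumptions):
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// One uniform upload, one matrix-vector multiplication per vertex.
glm::mat4 pvm = projection * view * model;
glUniformMatrix4fv(pvmLocation, 1, GL_FALSE, glm::value_ptr(pvm));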
Now you're proposing this matrix expression
Projection * ScreenRotation * ScreenScale * Translate * Rotate * Scale * Position
Ugh… this whole thing is the pinnacle of inflexibility. You want flexibility? This scheme is rigid: what if you want to apply some non-uniform scaling to geometry that has already been rotated? The order of operations in matrix math matters, and you cannot freely mix them. Assume you're drawing a sphere:
Rotate(45, 0, 0, 1) · Scale(1,2,1) · SphereVertex
looks totally different than
Scale(1,2,1) · Rotate(45, 0, 0, 1) · SphereVertex
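You can check that numerically with GLM (a minimal sketch with an arbitrary test point):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0, 0, 1));
glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(1, 2, 1));
glm::vec4 v(1.0f, 0.0f, 0.0f, 1.0f);

// The rightmost transform applies first:
glm::vec4 rs = R * S * v; // scale in object space, then rotate: about (0.71, 0.71, 0)
glm::vec4 sr = S * R * v; // rotate first, then stretch along world Y: about (0.71, 1.41, 0)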
Screen scale and rotation can, and should, be folded directly into the Projection matrix; there is no need for extra matrices. The key insight is that you can compose every linear transformation chain into a single matrix. For practical reasons you want to apply the screen pixel-aspect scaling as the last step in the chain, and the screen rotation as the second-to-last step.
So you build your projection matrix not in the shader, but in your display routine's frame setup code. Assuming you're using my linmath.h, it would look like the following:
mat4x4 projection;
mat4x4_set_identity(projection);
mat4x4_mul_scale_aniso(projection, …);
mat4x4_mul_rotate_Z(projection, …);
if(using_perspective)
    mat4x4_mul_frustum(projection, …);
else
    mat4x4_mul_ortho(projection, …);
You'd then set the resulting projection matrix as the projection-matrix uniform.
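For comparison, the same composition might look like this with GLM instead of linmath.h (a sketch; the angle, aspect factors, and frustum parameters are all placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The rightmost factor applies first: projection, then screen rotation,
// then pixel-aspect scaling as the very last step in the chain.
glm::mat4 projection(1.0f);
projection = glm::scale(projection, glm::vec3(aspectX, aspectY, 1.0f));
projection = glm::rotate(projection, screenAngle, glm::vec3(0.0f, 0.0f, 1.0f));
projection = projection * (usingPerspective
    ? glm::perspective(fovY, aspect, zNear, zFar)
    : glm::ortho(left, right, bottom, top, zNear, zFar));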