In this GLSL shader sample a vertex normal is converted to view space by the transform (norm_matrix * vec4(vertNormal,1.0)).xyz.
Why do we need to do this kind of transformation?
#version 430
layout (location=0) in vec3 vertPos;
layout (location=1) in vec3 vertNormal;
void main(void)
{
...
// convert vertex position to view space
vec4 P = mv_matrix * vec4(vertPos,1.0); // looks good
// convert normal to view space
vec3 N = normalize((norm_matrix * vec4(vertNormal,1.0)).xyz); // why to vec4 and back to vec3?
...
}
The model view matrix looks like this:
( X-axis.x, X-axis.y, X-axis.z, 0 )
( Y-axis.x, Y-axis.y, Y-axis.z, 0 )
( Z-axis.x, Z-axis.y, Z-axis.z, 0 )
( trans.x, trans.y, trans.z, 1 )
The upper left 3*3 contains the orientation and the scale. The 4th row contains the translation.
While the transformation of a point has to take the full matrix into account, for the transformation of a direction vector only the orientation is of interest.
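This is also why the w component of the vec4 matters: with w = 1.0 the translation is applied, with w = 0.0 it is ignored. A minimal sketch, reusing the names from the question:
vec4 point     = mv_matrix * vec4(vertPos, 1.0);    // w = 1.0: rotation, scale and translation
vec4 direction = mv_matrix * vec4(vertNormal, 0.0); // w = 0.0: rotation and scale only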
In general the normal matrix is a mat3: the transposed inverse of the upper left 3*3 of the model view matrix. See:
Why is the transposed inverse of the model view matrix used to transform the normal vectors?
Why transforming normals with the transpose of the inverse of the modelview matrix?
mat3 norm_matrix;
vec3 N = normalize(norm_matrix * vertNormal);
The normal matrix can be calculated from the model view matrix:
mat4 mv_matrix;
mat3 norm_matrix = transpose(inverse(mat3(mv_matrix)));
vec3 N = normalize(norm_matrix * vertNormal);
If the model view matrix is an orthogonal matrix, then the inverse-transpose can be skipped, because the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
vec3 N = normalize(mat3(mv_matrix) * vertNormal);
If you want to do the calculations in view space, then you have to transform the vertex coordinate from model space to view space:
vec4 P = mv_matrix * vec4(vertPos,1.0);
and you have to transform the direction of the normal vector from model space to view space:
vec3 N = normalize(norm_matrix * vertNormal);
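Putting the pieces together, a minimal sketch of a complete vertex shader (proj_matrix is an assumed name for the projection uniform; everything else mirrors the question):
#version 430
layout (location=0) in vec3 vertPos;
layout (location=1) in vec3 vertNormal;
uniform mat4 mv_matrix;
uniform mat4 proj_matrix; // assumption: the projection uniform
out vec3 viewPos;    // view-space position for lighting in the fragment shader
out vec3 viewNormal; // view-space normal
void main(void)
{
    mat3 norm_matrix = transpose(inverse(mat3(mv_matrix)));
    vec4 P = mv_matrix * vec4(vertPos, 1.0);      // vertex position in view space
    vec3 N = normalize(norm_matrix * vertNormal); // normal direction in view space
    viewPos = P.xyz;
    viewNormal = N;
    gl_Position = proj_matrix * P;
}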
My question is about removing rotation from the view matrix. Removing the translation is easy, but I couldn't find any way to remove the rotation from a matrix. Is there any way to remove the rotation from the view matrix?
The camera rotates around the y-axis, so the view matrix does too, and this affects the reflection.
In the vertex shader my code is:
#version 330 core
layout(location = 0) in vec3 ModelSpaceVertexPosition;
layout(location = 2) in vec3 ModelSpaceVertexNormal;
out vec3 reflectnormal;
out vec3 reflectposition;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
void main(){
reflectnormal = ( ViewMatrix * ModelMatrix * vec4(ModelSpaceVertexNormal,0)).xyz;//mat3(transpose(inverse(ModelMatrix))) * ModelSpaceVertexNormal;
reflectposition = vec3(0,0,0) - ( ViewMatrix * ModelMatrix * vec4(ModelSpaceVertexPosition,1)).xyz;//vec3(ModelMatrix * vec4(ModelSpaceVertexPosition, 1.0));
gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(ModelSpaceVertexPosition,1);
}
In the fragment shader my code is:
#version 330 core
in vec3 reflectnormal;
in vec3 reflectposition;
uniform samplerCube skybox;
out vec3 color;
void main(){
vec3 Rtest = reflect(-reflectposition, reflectnormal);
vec3 EnvironmentReflection = vec3(texture(skybox , Rtest));
color = EnvironmentReflection;
}
which gives me a nice view. But the problem is rotation: when I rotate the camera, the reflection also rotates with it.
How can I remove rotation from reflection?
Gif video: https://imgur.com/a/rQh7A7H
The skybox contains a cubemap with respect to world space. Hence you have to compute the reflection vector (Rtest) in world space.
One possibility is to transform Rtest from view space to world space. This can be done by the inverse view matrix in the fragment shader:
uniform mat4 ViewMatrix;
void main()
{
vec3 Rtest = reflect(-reflectposition, reflectnormal);
vec3 Rtest_world = inverse(mat3(ViewMatrix)) * Rtest;
vec3 EnvironmentReflection = texture(skybox, Rtest_world).xyz;
color = EnvironmentReflection;
}
Anyway, you should avoid transforming a direction vector in the fragment shader. It is cheaper to do the computation in world space and to compute reflectnormal and reflectposition in world space rather than view space.
You have to get the position of the camera in world space for the computation of the view vector. The position is given by the translation of the inverse ViewMatrix:
vec3 camera_world = inverse(ViewMatrix)[3].xyz;
reflectnormal = (ModelMatrix * vec4(ModelSpaceVertexNormal,0)).xyz;
reflectposition = camera_world - (ModelMatrix * vec4(ModelSpaceVertexPosition,1)).xyz;
Note, the view matrix transforms from world space to view space; hence the inverse view matrix transforms from view space to world space. The inverse view matrix is the matrix which defines the position and orientation of the camera in the world, so the translation of the inverse view matrix is the position of the camera in the world.
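For completeness, a sketch of the whole vertex shader with both outputs computed in world space (names mirror the question; in practice you would pass camera_world as a uniform instead of inverting the matrix per vertex):
#version 330 core
layout(location = 0) in vec3 ModelSpaceVertexPosition;
layout(location = 2) in vec3 ModelSpaceVertexNormal;
out vec3 reflectnormal;
out vec3 reflectposition;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
void main(){
    // translation column of the inverse view matrix = camera position in world space
    vec3 camera_world = inverse(ViewMatrix)[3].xyz;
    reflectnormal = (ModelMatrix * vec4(ModelSpaceVertexNormal, 0.0)).xyz;
    reflectposition = camera_world - (ModelMatrix * vec4(ModelSpaceVertexPosition, 1.0)).xyz;
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(ModelSpaceVertexPosition, 1.0);
}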
I need to be able to modify vertex coordinates according to a transformation matrix, but I have per-vertex lighting, so I am not sure that my approach is correct for normals:
#version 120
uniform mat4 transformationMatrix;
void main() {
vec3 normal, lightDir;
vec4 diffuse, ambient, globalAmbient;
float NdotL;
// Transformation part
normal = gl_NormalMatrix * gl_Normal * transpose(mat3(transformationMatrix));
gl_Position = gl_ModelViewProjectionMatrix * transformationMatrix * gl_Vertex;
// Calculate color
lightDir = normalize(vec3(gl_LightSource[0].position));
NdotL = max(abs(dot(normal, lightDir)), 0.0);
diffuse = gl_Color * gl_LightSource[0].diffuse;
ambient = gl_Color * gl_LightSource[0].ambient;
globalAmbient = gl_LightModel.ambient * gl_Color;
gl_FrontColor = NdotL * diffuse + globalAmbient + ambient;
}
I perform all transformations in lines 8-9 (the two lines under "// Transformation part"). Could you comment on whether this is the correct way or not?
If you want to create a normal matrix, then you have to use the inverse transpose of the upper left 3*3 of the 4*4 matrix.
See Why transforming normals with the transpose of the inverse of the modelview matrix?
and Why is the transposed inverse of the model view matrix used to transform the normal vectors?
This would mean that you have to write your code like this:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
But if a vector is multiplied by a matrix from the left, the result corresponds to multiplying a column vector by the transposed matrix from the right.
See GLSL Programming/Vector and Matrix Operations
This means you can write the code like this and avoid the transpose operation:
normal = gl_NormalMatrix * (gl_Normal * inverse(mat3(transformationMatrix)));
If the 4*4 matrix transformationMatrix is an orthogonal matrix, meaning the X, Y and Z axes are orthonormal (unit vectors that are normal to each other), then it is sufficient to use the upper left 3*3. In this case the inverse matrix is equal to the transposed matrix.
See In which cases is the inverse matrix equal to the transpose?
This will simplify your code:
normal = gl_NormalMatrix * mat3(transformationMatrix) * gl_Normal;
Of course this can also be expressed like this:
normal = gl_NormalMatrix * (gl_Normal * transpose(mat3(transformationMatrix)));
Note, this is not the same as what you do in your code, because the * operations are processed from left to right (see GLSL - The OpenGL Shading Language 4.6, 5.1 Operators, page 97), and the result of
vec3 v;
mat3 m1, m2;
(m1 * v) * m2
is not equal to
m1 * (v * m2);
The normal transformation does not look correct.
Since v * transpose(M) is exactly the same as M * v, you didn't do any special case handling for non-uniform scaling at all.
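To make that identity concrete, a tiny sketch (m stands for any mat3, v for any vec3):
mat3 m = mat3(1.0);            // placeholder: any matrix
vec3 v = vec3(1.0, 2.0, 3.0);  // placeholder: any vector
vec3 a = v * transpose(m);     // row vector times transposed matrix
vec3 c = m * v;                // matrix times column vector
// a and c are component-wise equal for every m and v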
What you are looking for is most probably to use the inverse-transpose matrix:
normal = gl_NormalMatrix * transpose(inverse(mat3(transformationMatrix))) * gl_Normal;
For more details about the math behind this, have a look at this.
Let's say I have the following vertex shader code:
attribute vec4 vPos;
uniform mat4 MVP;
uniform mat4 LookAt;
void main() {
gl_Position = MVP * vPos;
}
How do I use the LookAt matrix in this shader to position the eye of the camera? I have tried LookAt * MVP * vPos, but that didn't seem to work, as my triangle just disappeared off screen!
I would suggest moving the LookAt computation out of the shader to prevent unnecessary per-vertex calculation. The shader still does:
gl_Position = MVP * vPos;
and you manipulate MVP in the application with glm. For example:
glm::mat4 projection = glm::perspective(fov, aspect, 0.1f, 10000.0f);
glm::mat4 view = glm::lookAt(eye, center, up);
glm::mat4 model = ...; // the matrix of the model, with all the dynamic transforms
glm::mat4 MVP = projection * view * model;
A LookAt matrix is in general called a view matrix and is concatenated with a model-to-world transform matrix to form the WorldView matrix. This is then multiplied by the projection matrix, which is often orthographic or perspective. Vertex positions in model space are multiplied by the resulting matrix in order to be transformed to clip space (roughly; I skipped a couple of steps here that you don't have to do and that are performed by the hardware/driver).
In your case, make sure that you're using the correct 'handedness' for your transformations. Also, you can try multiplying the position in the reverse order with the transposes of your transformation matrices, like so: vPos * T_MVP * T_LookAt.
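If you do want to keep LookAt in the shader, the view matrix belongs between the projection and the model transform. A minimal sketch, assuming the MVP from the question is split into separate Projection and Model uniforms:
attribute vec4 vPos;
uniform mat4 Projection; // assumed: the projection part, split out of MVP
uniform mat4 LookAt;     // the view matrix
uniform mat4 Model;      // assumed: the model-to-world part, split out of MVP
void main() {
    // applied right to left: model space -> world -> view (eye) -> clip
    gl_Position = Projection * LookAt * Model * vPos;
}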
I am trying to move an object depending on the camera position. Here is my vertex shader:
uniform mat4 osg_ViewMatrixInverse;
void main(){
vec4 position = gl_ProjectionMatrix * gl_ModelViewMatrix *gl_Vertex;
vec3 camPos=osg_ViewMatrixInverse[3].xyz;
if( camPos.z >1000.0 )
position.z = position.z+1.0;
if( camPos.z >5000.0 )
position.z = position.z+10.0;
if (camPos.z< 300.0 )
position.z = position.z+300.0;
gl_Position = position;
}
But when the camera's vertical position is less than 300 or more than 1000, the model simply disappears, though in the second case it should be moved by just one unit. I read that coordinates inside the shader are different from world coordinates; that's why I am multiplying by the Projection and ModelView matrices, to get the world coordinates. Maybe I am wrong at this point? Forgive me if it's a simple question, but I couldn't find the answer.
UPDATE: camPos is translated to world coordinates, but position is not. Maybe it has to do with the fact that I am using osg_ViewMatrixInverse (passed by OpenSceneGraph) to get the camera position, and the internal gl_ProjectionMatrix and gl_ModelViewMatrix to get the vertex coordinates? How do I translate position into world coordinates?
The problem is that you are transforming the position into clip coordinates (by multiplying gl_Vertex by the projection and modelview matrices), then performing a world-coordinate operation on those clip coordinates, which does not give the results you want.
Simply perform your transformations before you multiply by the modelview and projection matrices.
uniform mat4 osg_ViewMatrixInverse;
void main() {
vec4 position = gl_Vertex;
vec3 camPos=osg_ViewMatrixInverse[3].xyz;
if( camPos.z >1000.0 )
position.z = position.z+1.0;
if( camPos.z >5000.0 )
position.z = position.z+10.0;
if (camPos.z< 300.0 )
position.z = position.z+300.0;
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * position;
}
gl_Position is in clip space; the value you output for any coordinate must be >= -gl_Position.w and <= gl_Position.w, or it will be clipped. If all of the coordinates of a primitive are outside this range, then nothing will be drawn. The reasoning for this is that after the vertex shader completes, OpenGL divides the clip-space coordinates by w to produce coordinates in the range [-1,1] (NDC). For example, a clip-space position of (0.5, 0.0, 3.0, 2.0) becomes (0.25, 0.0, 1.5) in NDC, and since 1.5 lies outside [-1,1] the vertex is clipped. Anything outside this volume will not be on screen.
What you should actually do here is add these coordinates to your object-space position and then perform the transformation from object-space to clip-space. Colonel Thirty Two's answer already does a very good job of showing how to do this; I just wanted to explain exactly why you should not apply this offset to the clip-space coordinates.
Figured it out:
uniform mat4 osg_ViewMatrixInverse;
uniform mat4 osg_ViewMatrix;
void main(){
vec3 camPos=osg_ViewMatrixInverse[3].xyz;
vec4 position_in_view_space = gl_ModelViewMatrix * gl_Vertex;
vec4 position_in_world_space = osg_ViewMatrixInverse * position_in_view_space;
if( camPos.z >1000.0 )
position_in_world_space.z = position_in_world_space.z+700.0;
if( camPos.z >5000.0 )
position_in_world_space.z = position_in_world_space.z+1000.0;
if (camPos.z< 300.0 )
position_in_world_space.z = position_in_world_space.z+200.0;
position_in_view_space = osg_ViewMatrix * position_in_world_space;
vec4 position_in_object_space = gl_ModelViewMatrixInverse * position_in_view_space;
gl_Position = gl_ModelViewProjectionMatrix * position_in_object_space;
}
One needs to transform gl_Vertex (which is in object-space coordinates) into world coordinates through view-space coordinates (maybe there is a direct conversion I don't see); then one can modify them and transform back into object-space coordinates.
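Regarding the parenthetical: there is a more direct conversion. osg_ViewMatrixInverse * gl_ModelViewMatrix is exactly the model matrix, so the world-space position can be built in one step and the round trip back to object space can be skipped. A sketch, assuming the same osg_* uniforms (the +1.0 is a placeholder for the camPos logic above):
uniform mat4 osg_ViewMatrix;
uniform mat4 osg_ViewMatrixInverse;
void main(){
    // object space -> world space directly: V^-1 * (V * M) = M, the model matrix
    vec4 position_in_world_space = osg_ViewMatrixInverse * gl_ModelViewMatrix * gl_Vertex;
    position_in_world_space.z += 1.0; // placeholder for the camPos-dependent offsets
    gl_Position = gl_ProjectionMatrix * osg_ViewMatrix * position_in_world_space;
}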
I just want to be sure I understand the TBN matrix calculation correctly.
In the vertex shader we usually use:
vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);
mat3 tbn = mat3(t, b, n);
As I understand it, this tbn matrix transforms a vector from tangent space to eye space. Actually we want the reverse: to transform a vector from eye space to tangent space. Thus we need to invert the tbn matrix:
tbn = transpose(tbn); // transpose should be OK here for doing matrix inversion
note: tbn should contain only rotations; in such a case we can use the transpose to invert the matrix.
And we can transform our vectors:
vec3 lightT = tbn * light_vector;
... = tbn * ...
In several tutorials and source codes I've found that the authors used something like this:
light.x = dot(light, t);
light.y = dot(light, b);
light.z = dot(light, n);
The above code does the same as multiplying by the transposed tbn matrix.
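All three spellings give the same result; a small sketch reusing t, b, n and tbn from above (lv stands for light_vector):
vec3 lv = light_vector;
vec3 viaMatrix = transpose(tbn) * lv;                      // inverted TBN as a matrix
vec3 viaRowVec = lv * tbn;                                 // row-vector form, same result
vec3 viaDots   = vec3(dot(lv, t), dot(lv, b), dot(lv, n)); // the tutorials' dot-product form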
The question:
Should we use the transposed tbn matrix just as I explained above? Or am I missing something?
Note that with this solution the vectors (light_vector) are transformed into TBN space in the vertex shader; then in the fragment shader we only have to fetch the normal from the normal map. The other option is to create a TBN matrix that transforms from TBN space into eye space, and then transform each normal read from the normal map in the fragment shader.
Transpose is not the inverse of a matrix!!!
My TBN matrices in the vertex shader look like this:
uniform mat4x4 tm_l2g_dir;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;
out smooth mat3 pixel_TBN;
void main()
{
vec4 p;
//...
p.xyz=tan.xyz; p.w=1.0; pixel_TBN[0]=normalize((tm_l2g_dir*p).xyz);
p.xyz=bin.xyz; p.w=1.0; pixel_TBN[1]=normalize((tm_l2g_dir*p).xyz);
p.xyz=nor.xyz; p.w=1.0; pixel_TBN[2]=normalize((tm_l2g_dir*p).xyz);
//...
}
where:
tm_l2g_dir is the transformation matrix from local model space to global scene space without any translation (it changes only direction); for your code it is your normal matrix
tan, bin, nor are the vectors for the TBN matrix (stored as part of my models)
nor is the normal vector to the surface at the actual vertex position (it can be calculated as the cross product of two edge vectors emanating from this vertex)
tan, bin are perpendicular vectors, usually parallel to the texture-mapping axes or to the tessellation of your model. If you choose the wrong tan/bin vectors, then sometimes lighting artifacts can occur. For example, if you have a cylinder, then bin is its rotation axis and tan is perpendicular to it along the circle (the tangent).
tan, bin, nor should be perpendicular to each other
You can also calculate the TBN automatically, but that will cause some artifacts. To do it, you simply choose as the tan vector one of the edge vectors used for the normal computation, and bin = nor x tan, as sketched below.
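A minimal sketch of that automatic variant (edge0 is a hypothetical name for one of the edge vectors used in the normal computation):
vec3 n = normalize(nor);
vec3 t = normalize(edge0);       // reuse one triangle edge as the tangent
vec3 b = normalize(cross(n, t)); // bin = nor x tan
mat3 tbn = mat3(t, b, n);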
In the fragment shader I use pixel_TBN with the normal map to compute the real normal for the fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec2 pixel_txr;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_normal;
layout(location=0) out vec4 frag_col;
const vec4 v05=vec4(0.5,0.5,0.5,0.5);
//------------------------------------------------------------------
void main(void)
{
vec4 col;
vec3 normal;
col=(texture(txr_normal,pixel_txr.st)-v05)*2.0; // normal/bump mapping
normal=pixel_TBN*col.xyz;
// col=... other stuff to compute col
frag_col=col;
}
//------------------------------------------------------------------
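A possible refinement, not in the original code: the interpolated basis vectors are no longer unit length after rasterization, so they can be renormalized in the fragment shader before use:
// renormalize the interpolated TBN basis before applying it to the normal-map sample
mat3 tbn = mat3(normalize(pixel_TBN[0]),
                normalize(pixel_TBN[1]),
                normalize(pixel_TBN[2]));
normal = tbn * col.xyz;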