I just want to be sure I understand TBN matrix calculation correctly
In the vertex shader we usually use:
vec3 n = normalize(gl_NormalMatrix * gl_Normal);
vec3 t = normalize(gl_NormalMatrix * Tangent.xyz);
vec3 b = normalize(gl_NormalMatrix * Bitangent.xyz);
mat3 tbn = mat3(t, b, n);
As I understand it, this tbn matrix transforms a vector from tangent space to eye space. Actually we want the reverse: to transform a vector from eye space to tangent space. Thus we need to invert the tbn matrix:
tbn = transpose(tbn); // for a pure-rotation matrix the transpose equals the inverse
Note: tbn should contain only rotations; only in that case can we use the transpose as the inverse.
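For reference, here is why the transpose works in this special case: if t, b and n are unit length and mutually perpendicular, the columns of tbn are orthonormal, so

$$\mathrm{tbn}^{\top}\,\mathrm{tbn}=\begin{pmatrix}t\cdot t & t\cdot b & t\cdot n\\ b\cdot t & b\cdot b & b\cdot n\\ n\cdot t & n\cdot b & n\cdot n\end{pmatrix}=I \quad\Rightarrow\quad \mathrm{tbn}^{-1}=\mathrm{tbn}^{\top}.$$

This only holds while the basis stays orthonormal (it can break after interpolation or non-uniform scaling).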
And we can transform our vectors:
vec3 lightT = tbn * light_vector;
... = tbn * ...
In several tutorials and source codes I've found that authors use something like this:
light.x = dot(light, t);
light.y = dot(light, b);
light.z = dot(light, n);
The above code does the same thing as multiplying by transpose(tbn).
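To make that equivalence concrete, a small sketch reusing t, b, n and light_vector from the code above:
mat3 tbn = mat3(t, b, n);                      // columns are t, b, n
vec3 byMatrix = transpose(tbn) * light_vector; // matrix form
vec3 byDots   = vec3(dot(light_vector, t),     // rows of transpose(tbn) are t, b, n,
                     dot(light_vector, b),     // so each component is one dot product
                     dot(light_vector, n));
// byMatrix == byDots (up to floating-point rounding)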
The question:
Should we use transposed tbn matrix just as I explained above? Or maybe I am missing something?
Note: with that solution the vectors (e.g. light_vector) are transformed into tangent space in the vertex shader, and in the fragment shader we only have to fetch the normal from the normal map. The other option is to create a TBN matrix that transforms from tangent space into eye space, and then, in the fragment shader, transform each normal read from the normal map.
Transpose is not the inverse of a matrix in general!!!
My TBN matrices in the vertex shader look like this:
uniform mat4x4 tm_l2g_dir;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;
out smooth mat3 pixel_TBN;
void main()
{
vec4 p;
//...
// tm_l2g_dir contains no translation, so w = 1.0 is harmless here (w = 0.0 would work too)
p.xyz=tan.xyz; p.w=1.0; pixel_TBN[0]=normalize((tm_l2g_dir*p).xyz); // tangent
p.xyz=bin.xyz; p.w=1.0; pixel_TBN[1]=normalize((tm_l2g_dir*p).xyz); // bitangent
p.xyz=nor.xyz; p.w=1.0; pixel_TBN[2]=normalize((tm_l2g_dir*p).xyz); // normal
//...
}
where:
tm_l2g_dir is the transformation matrix from local model space to global scene space without any translation (it changes only direction); for your code it is your normal matrix
tan, bin, nor are the vectors for the TBN matrix (stored as part of my models)
nor is the normal vector to the surface at the current vertex position (it can be calculated as the cross product of two edge vectors leaving this vertex)
tan, bin are perpendicular vectors, usually parallel to the texture-mapping axes or the tessellation of your model. If you choose the wrong tan/bin vectors, lighting artifacts can sometimes occur. For example, if you have a cylinder, then bin is its rotation axis and tan is perpendicular to it along the circle (the tangent)
tan, bin, nor should be perpendicular to each other
You can also calculate the TBN automatically, but that can cause some artifacts. To do it you simply choose as tan one of the edge vectors used for the normal computation, and then bin = nor x tan (see the sketch below)
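As an illustration of that automatic method (my sketch, not the answerer's code; it is normally done per triangle on the CPU while loading the model, but is written here as a GLSL function):
void autoTBN(in vec3 p0, in vec3 p1, in vec3 p2,
             out vec3 tanv, out vec3 binv, out vec3 norv)
{
    vec3 e0 = p1 - p0;                    // first edge of the triangle
    vec3 e1 = p2 - p0;                    // second edge
    norv = normalize(cross(e0, e1));      // surface normal from the two edges
    tanv = normalize(e0);                 // tan = one of the edges used for the normal
    binv = normalize(cross(norv, tanv));  // bin = nor x tan completes the basis
}
Because this ignores the texture-mapping direction, the tangent is generally not aligned with the texture axes, which is exactly the kind of artifact mentioned above.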
In the fragment shader I use pixel_TBN with the normal map to compute the real normal for the fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec2 pixel_txr;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_normal;
layout(location=0) out vec4 frag_col;
const vec4 v05=vec4(0.5,0.5,0.5,0.5);
//------------------------------------------------------------------
void main(void)
{
vec4 col;
vec3 normal;
col=(texture(txr_normal,pixel_txr.st)-v05)*2.0; // normal/bump mapping: decode from [0,1] to [-1,1]
normal=normalize(pixel_TBN*col.xyz); // tangent space -> global scene space
// col=... other stuff to compute col
frag_col=col;
}
//------------------------------------------------------------------
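For completeness, a minimal sketch of how the resulting global-space normal could then be used for simple diffuse lighting; the txr_diffuse and light_dir uniforms are my assumptions, not part of the answer above:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec2 pixel_txr;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_normal;
uniform sampler2D txr_diffuse;   // assumed color texture
uniform vec3 light_dir;          // assumed: unit vector towards the light, in global space
layout(location=0) out vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    // decode the tangent-space normal and bring it into global space
    vec3 n = normalize(pixel_TBN * (texture(txr_normal, pixel_txr).xyz * 2.0 - 1.0));
    float diff = max(dot(n, light_dir), 0.0);            // Lambert term
    vec3 albedo = texture(txr_diffuse, pixel_txr).rgb;
    frag_col = vec4(albedo * (0.2 + 0.8 * diff), 1.0);   // simple ambient + diffuse
}
//------------------------------------------------------------------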
Related
In an attempt to get diffuse lighting correct, I read several articles and tried to apply them as closely as possible.
However, even if the transform of the normal vectors seems close to right, the lighting still slides slightly over the object (which should not be the case for a fixed light).
Note 1: I added bands based on the dot product to make the problem more apparent.
Note 2: This is not Sauron's eye.
In the image two problems are apparent:
The normal is affected by the projection matrix: when the viewport is horizontal (width > height), the normals display an elliptic shading (as in the image). When the viewport is vertical (height > width), the ellipse is vertical.
The shading moves over the surface when the camera is rotated around the object. This is not very visible with normal lighting, but becomes apparent when projecting patterns from the light source.
Code and attempts:
Unfortunately, a minimal working example quickly gets very large, so I will only post the relevant code. If this is not enough, ask me and I will try to publish the code somewhere.
In the drawing function, I have the following matrix creation:
glm::mat4 projection = glm::perspective(45.0f, (float)m_width/(float)m_height, 0.1f, 200.0f);
glm::mat4 view = glm::translate(glm::mat4(1), glm::vec3(0.0f, 0.0f, -2.5f))*rotationMatrix; // make the camera 2.5f away, and rotationMatrix is driven by the mouse.
glm::mat4 model = glm::mat4(1); //The sphere at the center.
glm::mat4 mvp = projection * view * model;
glm::mat4 normalVp = projection * glm::transpose(glm::inverse(view * model));
In the vertex shader, the mvp is used to transform position and normals:
#version 420 core
uniform mat4 mvp;
uniform mat4 normalMvp;
in vec3 in_Position;
in vec3 in_Normal;
in vec2 in_Texture;
out Vertex
{
vec4 pos;
vec4 normal;
vec2 texture;
} v;
void main(void)
{
v.pos = mvp * vec4(in_Position, 1.0);
gl_Position = v.pos;
v.normal = normalMvp * vec4(in_Normal, 0.0);
v.texture = in_Texture;
}
And in the fragment shader, the diffuse shading is applied:
#version 420 core
in Vertex
{
vec4 pos;
vec4 normal;
vec2 texture;
} v;
uniform sampler2D uSampler1;
out vec4 out_Color;
uniform mat4 mvp;
uniform mat4 normalMvp;
uniform vec3 lightsPos;
uniform float lightsIntensity;
void main()
{
vec3 color = texture(uSampler1, v.texture).rgb;
vec3 lightPos = (mvp * vec4(lightsPos, 1.0)).xyz;
vec3 lightDirection = normalize( lightPos - v.pos.xyz );
float dot = clamp(dot(lightDirection, normalize(v.normal.xyz)), 0.0, 1.0);
vec3 ambient = 0.3 * color;
vec3 diffuse = dot * lightsIntensity * color;
// Here I have my debug code to add the projected bands on the image.
// kind of if(dot>=0.5 && dot<0.75) diffuse +=0.2;...
vec3 totalLight = ambient + diffuse;
out_Color = vec4(totalLight, 1.0);
}
Question:
How to properly transform the normals to get diffuse shading?
Related articles:
How to calculate the normal matrix?
GLSL normals with non-standard projection matrix
OpenGL Diffuse Lighting Shader Bug?
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
http://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/
Mostly, all sources agree that it should be enough to multiply the projection matrix by the transpose of the inverse of the model-view matrix. That is what I think I am doing, but apparently the result is not right.
Lighting calculations should not be performed in clip space (including the projection matrix). Leave the projection away from all variables, including light positions etc., and you should be good.
Why is that? Well, lighting is a physical phenomenon that essentially depends on angles and distances. Therefore, to calculate it, you should choose a space that preserves these things. World space and camera space are two examples of angle- and distance-preserving spaces (compared to the physical space). You may of course define them differently, but in most cases they are defined that way. Clip space preserves neither of the two, hence the angles and distances you calculate in this space are not the physical ones you need to determine physical lighting.
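A minimal vertex-shader sketch of that advice applied to the question's code (the separate modelView and normalMatrix uniforms are my assumptions; they would replace mvp/normalMvp on the application side):
#version 420 core
// Lighting is done in view (camera) space; the projection is applied
// only to gl_Position, never to positions or normals used for lighting.
uniform mat4 modelView;     // view * model
uniform mat4 projection;
uniform mat3 normalMatrix;  // transpose(inverse(mat3(view * model)))
in vec3 in_Position;
in vec3 in_Normal;
in vec2 in_Texture;
out Vertex
{
    vec4 pos;      // view-space position
    vec4 normal;   // view-space normal
    vec2 texture;
} v;
void main(void)
{
    vec4 viewPos = modelView * vec4(in_Position, 1.0);
    v.pos       = viewPos;
    v.normal    = vec4(normalize(normalMatrix * in_Normal), 0.0);
    v.texture   = in_Texture;
    gl_Position = projection * viewPos;
}
In the fragment shader the light position would then be transformed by the view matrix alone (not mvp), so that the light, the fragment position and the normal all live in the same view space.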
I'm trying to apply per-pixel lighting in my 3D engine, but I'm having some trouble understanding what can be wrong with my geometry. I'm a beginner in OpenGL, so please bear with me if my question sounds stupid; I'll explain as best as I can.
My vertex shader:
#version 400 core
layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
out vec3 vectorFromVertexToCamera;
uniform mat4 transformation;
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;
void main(void) {
vec4 mainPosition = transformation * vec4(position, 1.0);
gl_Position = projection * view * mainPosition;
passTextureCoordinates = textureCoordinates;
normalVectorFromVertex = (transformation * vec4(normal, 1.0)).xyz;
vectorFromVertexToLightSource = lightPosition - mainPosition.xyz;
}
My fragment-shader:
#version 400 core
in vec2 passTextureCoordinates;
in vec3 normalVectorFromVertex;
in vec3 vectorFromVertexToLightSource;
layout(location = 0) out vec4 out_Color;
uniform sampler2D textureSampler;
uniform vec3 lightColor;
void main(void) {
vec3 versor1 = normalize(normalVectorFromVertex);
vec3 versor2 = normalize(vectorFromVertexToLightSource);
float dotProduct = dot(versor1, versor2);
float lighting = max(dotProduct, 0.0);
vec3 finalLight = lighting * lightColor;
out_Color = vec4(finalLight, 1.0) * texture(textureSampler, passTextureCoordinates);
}
The problem: Whenever I multiply my transformation matrix by the normal vector with a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), my resulting vector gets messed up in such a way that by the time the pipeline reaches the fragment shader, the dot product between my normal and the vector that goes from my vertex to the light source is apparently <= 0, indicating that the light source is at an angle >= π/2, and therefore all my pixels output rgb(0,0,0,1). But for a reason that I cannot understand geometrically, if I calculate transformation * vec4(normal, 1.0), the lighting appears to work more or less fine, except for some extremely weird behaviour, like 'reacting' to distance. With this very simple lighting technique the vertex brightness should be completely agnostic to distance, since that would require using the vectors' lengths, and I'm normalizing them before applying the dot product, so this result is not expected at all.
One thing that is clearly wrong to me is that my transformation matrix has the translation components applied when multiplying the normal vectors, which will "move and point" the normals in the direction of the translation, which is wrong. Still, I'm not sure if I should be getting these results. Any insights are appreciated.
Whenever I multiply my transformation matrix for the normal vector with a homogeneous coordinate of 0.0 like so: transformation * vec4(normal, 0.0), my resulting vector is getting messed up
What if you have non-uniform scaling in that transformation matrix?
Imagine a flat square surface, all normals are pointing up. Now you scale that surface to stretch in the horizontal direction: what would happen to normals?
If you don't adjust your transformation matrix to not have the scaling part in it, the normals will get skewed. After all, you only care about the object's orientation when considering the normals and the scale of the object is irrelevant to where the surface is pointing to.
Or think about a circle:
You need to apply the inverse transpose of the model-view matrix to avoid scaling the normals when transforming them. Another SO question discusses this, as does this video from Jaime King teaching graphics with OpenGL. (A short sketch follows the links below.)
Additional resources on transforming normals:
LearnOpenGL: Basic Lighting
Lighthouse3d.com: The Normal Matrix
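Following those resources, a sketch of the corrected vertex shader for the question above (the normal is treated as a direction via the 3x3 normal matrix, the equivalent of using w = 0.0; computing the inverse per vertex is wasteful and is better done on the CPU, but it keeps the sketch short):
#version 400 core
layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;
out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
uniform mat4 transformation;   // model matrix
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;    // assumed to be given in world space
void main(void) {
    vec4 worldPosition = transformation * vec4(position, 1.0);
    gl_Position = projection * view * worldPosition;
    passTextureCoordinates = textureCoordinates;
    // The inverse transpose undoes any (non-uniform) scaling, and using only
    // the upper-left 3x3 part means the translation cannot affect the normal.
    mat3 normalMatrix = transpose(inverse(mat3(transformation)));
    normalVectorFromVertex = normalize(normalMatrix * normal);
    vectorFromVertexToLightSource = lightPosition - worldPosition.xyz;
}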
First of all, I must apologize for posting yet another question on this subject (there are a lot already!). I did search for other related questions and answers, but unfortunately none of them showed me the solution. Now I'm desperate! :D
It is worth mentioning that the code posted below gives a satisfying 'bumpy' effect. It is the scene lighting that seems to be wrong.
The scene is dead simple: a cube in the center, and a light source rotating around it (parallel to the ground) and above it.
My approach is to start from my basic light shader, which gives me adequate outputs (or so I think!). The first step is to modify it to do the calculations in tangent space, then use the normal extracted from a texture.
I tried to comment the code nicely, but in short I have two questions:
1) Doing only basic lighting (no normal mapping), I expect the scene to look exactly the same, with or without transforming my vectors into tangent space with the TBN matrix. Am I wrong?
2) Why do I get incorrect lighting?
A couple of screenshots to give you an idea (EDITED): following LJ's comment, I am no longer summing normals and tangents per vertex/face. Interestingly, this highlights the issue (see the capture, where I have marked how the light moves).
Basically it is as if the cube were rotated 90 degrees to the left, or as if the light were turning vertically instead of horizontally.
Result with normal mapping:
Version with simple light:
Vertex shader:
// Information about the light.
// Here we care essentially about light.Position, which
// is set to be something like vec3(cos(x)*9, 5, sin(x)*9)
uniform Light_t Light;
uniform mat4 W; // The model transformation matrix
uniform mat4 V; // The camera transformation matrix
uniform mat4 P; // The projection matrix
in vec3 VS_Position;
in vec4 VS_Color;
in vec2 VS_TexCoord;
in vec3 VS_Normal;
in vec3 VS_Tangent;
out vec3 FS_Vertex;
out vec4 FS_Color;
out vec2 FS_TexCoord;
out vec3 FS_LightPos;
out vec3 FS_ViewPos;
out vec3 FS_Normal;
// This method calculates the TBN matrix:
// I'm sure it is not optimized vertex shader code,
// to have this separate method, but never mind for now :)
mat3 getTangentMatrix()
{
// Note: here I must say I am a bit confused: do I need to transform
// with 'normalMatrix'? In practice, it seems to make no difference...
mat3 normalMatrix = transpose(inverse(mat3(W)));
vec3 norm = normalize(normalMatrix * VS_Normal);
vec3 tang = normalize(normalMatrix * VS_Tangent);
vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
tang = normalize(tang - dot(tang, norm) * norm);
return transpose(mat3(tang, btan, norm));
}
void main()
{
// Set the gl_Position and pass color + texcoords to the fragment shader
gl_Position = (P * V * W) * vec4(VS_Position, 1.0);
FS_Color = VS_Color;
FS_TexCoord = VS_TexCoord;
// Now here we start:
// This is where supposedly, multiplying with the TBN should not
// change anything to the output, as long as I apply the transformation
// to all of them, or none.
// Typically, removing the 'TBN *' everywhere (and not using the normal
// texture later in the fragment shader) is exactly the code I use for
// my basic light shader.
mat3 TBN = getTangentMatrix();
FS_Vertex = TBN * (W * vec4(VS_Position, 1)).xyz;
FS_LightPos = TBN * Light.Position;
FS_ViewPos = TBN * inverse(V)[3].xyz;
// This line is actually not needed when using the normal map:
// I keep the FS_Normal variable for comparison purposes,
// when I want to switch to my basic light shader effect.
// (see later in fragment shader)
FS_Normal = TBN * normalize(transpose(inverse(mat3(W))) * VS_Normal);
}
And the fragment shader:
struct Textures_t
{
int SamplersCount;
sampler2D Samplers[4];
};
struct Light_t
{
int Active;
float Ambient;
float Power;
vec3 Position;
vec4 Color;
};
uniform mat4 W;
uniform mat4 V;
uniform Textures_t Textures;
uniform Light_t Light;
in vec3 FS_Vertex;
in vec4 FS_Color;
in vec2 FS_TexCoord;
in vec3 FS_LightPos;
in vec3 FS_ViewPos;
in vec3 FS_Normal;
out vec4 frag_Output;
vec4 getPixelColor()
{
return Textures.SamplersCount >= 1
? texture2D(Textures.Samplers[0], FS_TexCoord)
: FS_Color;
}
vec3 getTextureNormal()
{
// FYI: the normal texture is always at index 1
vec3 bump = texture(Textures.Samplers[1], FS_TexCoord).xyz;
bump = 2.0 * bump - vec3(1.0, 1.0, 1.0);
return normalize(bump);
}
vec4 getLightColor()
{
// This is the one line that changes between my basic light shader
// and the normal mapping one:
// - If I don't do 'TBN *' earlier and use FS_Normal here,
// the lighting seems fine (see second screenshot)
// - If I do multiply by TBN (including on FS_Normal), I would expect
// the same result as without multiplying ==> not the case: it looks
// very similar to the result with normal mapping
// (just has no bumpy effect of course)
// - If I use the normal texture (along with TBN of course), then I get
// the result you see in the first screenshot.
vec3 N = getTextureNormal(); // Instead of 'normalize(FS_Normal);'
// Everything from here on is the same as my basic light shader
vec3 L = normalize(FS_LightPos - FS_Vertex);
vec3 E = normalize(FS_ViewPos - FS_Vertex);
vec3 R = normalize(reflect(-L, N));
// Ambient color: light color times ambient factor
vec4 ambient = Light.Color * Light.Ambient;
// Diffuse factor: product of Normal to Light vectors
// Diffuse color: light color times the diffuse factor
float dfactor = max(dot(N, L), 0);
vec4 diffuse = clamp(Light.Color * dfactor, 0, 1);
// Specular factor: product of reflected to camera vectors
// Note: applies only if the diffuse factor is greater than zero
float sfactor = 0.0;
if(dfactor > 0)
{
sfactor = pow(max(dot(R, E), 0.0), 8.0);
}
// Specular color: light color times specular factor
vec4 specular = clamp(Light.Color * sfactor, 0, 1);
// Light attenuation: square of the distance moderated by light's power factor
float atten = 1 + pow(length(FS_LightPos - FS_Vertex), 2) / Light.Power;
// The fragment color is a factor of the pixel and light colors:
// Note: attenuation only applies to diffuse and specular components
return getPixelColor() * (ambient + (diffuse + specular) / atten);
}
void main()
{
frag_Output = Light.Active == 1
? getLightColor()
: getPixelColor();
}
That's it! I hope you have enough information and of course, your help will be greatly appreciated! :) Take care.
I am experiencing a very similar problem, and I cannot explain why the lighting doesn't work right, but I can answer your first question and at the very least explain how I somehow got lighting working acceptably (though your problem may not necessarily be the same as mine).
Firstly, in theory, if your tangents and bitangents are calculated correctly, then you should get exactly the same lighting result when doing the calculation in tangent space with a tangent-space normal of [0, 0, 1].
Secondly, while it is common knowledge that you should transform your normals from model to camera space by multiplying by the inverse transpose of the model-view matrix, as explained by this tutorial, I found that the problem with the lighting being transformed wrongly can be solved if you transform the normal and tangent by the model-view matrix rather than by the inverse transpose of the model-view matrix, i.e. use normalMatrix = mat3(W); instead of normalMatrix = transpose(inverse(mat3(W)));.
In my case this did "fix" the problems with the light, but I don't know why it fixed them, and I make no guarantee that it does not (in fact I assume it does) introduce other problems with the shading.
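As a way to test the claim in the first paragraph, one can temporarily feed a flat tangent-space normal into the lighting and compare against the basic light shader. A sketch of such a debug helper (my code, reusing the naming of the question's fragment shader):
// Debug sketch: diffuse factor with a flat tangent-space normal.
// If the TBN basis is correct, plugging this in instead of the normal-map
// lookup should reproduce the basic (non-normal-mapped) light shader.
float debugDiffuse(vec3 fsVertex, vec3 fsLightPos)  // pass FS_Vertex, FS_LightPos
{
    vec3 N = vec3(0.0, 0.0, 1.0);               // tangent-space "up"
    vec3 L = normalize(fsLightPos - fsVertex);  // both already in tangent space
    return max(dot(N, L), 0.0);
}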
So I have a .3ds mesh that I imported into my project, and it's quite large, so I wanted to scale it down in size using glScalef. However, the rendering algorithm I use in my shader makes use of normal vector values, and so after scaling, my rendering algorithm no longer works exactly as it should. How do I remedy this? Is there a glScalef for normals as well?
Normal vectors are transformed by the transposed inverse of the modelview matrix. However, a second constraint is that normals must be of unit length, and scaling the modelview changes that. So in your shader you should apply a normalization step:
#version 330
uniform mat4x4 MV;
uniform mat4x4 P;
uniform mat4x4 N; // = transpose(inv(MV));
in vec3 vertex_pos; // vertex position attribute
in vec3 vertex_normal; // vertex normal attribute
out vec3 trf_normal;
void main()
{
trf_normal = normalize( (N * vec4(vertex_normal, 0.0)).xyz ); // w = 0: a direction, not a position
gl_Position = P * MV * vec4(vertex_pos, 1.0);
}
Note that "normalization" is the process of turning a vector into colinear vector of its own with unit length, and has nothing to do with the concept of surface normals.
To transform normals, you need to multiply them by the inverse transpose of the transformation matrix. There are several explanations of why this is the case; the best one is probably here.
Normals are transformed by MV^IT, the inverse transpose of the modelview matrix; read the Red Book, it explains everything.
I want to adjust the colors depending on their xyz position in the world.
I tried this in my fragment shader:
varying vec4 verpos;
void main(){
vec4 c;
c.x = verpos.x;
c.y = verpos.y;
c.z = verpos.z;
c.w = 1.0;
gl_FragColor = c;
}
but it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle?
Here's my vertex shader:
varying vec4 verpos;
void main(){
gl_Position = ftransform();
verpos = gl_ModelViewMatrix*gl_Vertex;
}
Edit 2: changed the title; I want world coords, not screen coords!
Edit 3: added my full code.
In vertex shader you have gl_Vertex (or something else if you don't use fixed pipeline) which is the position of a vertex in model coordinates. Multiply the model matrix by gl_Vertex and you'll get the vertex position in world coordinates. Assign this to a varying variable, and then read its value in fragment shader and you'll get the position of the fragment in world coordinates.
Now the problem in this is that you don't necessarily have any model matrix if you use the default modelview matrix of OpenGL, which is a combination of both model and view matrices. I usually solve this problem by having two separate matrices instead of just one modelview matrix:
model matrix (maps model coordinates to world coordinates), and
view matrix (maps world coordinates to camera coordinates).
So just pass two different matrices to your vertex shader separately. You can do this by defining
uniform mat4 view_matrix;
uniform mat4 model_matrix;
at the beginning of your vertex shader. Then, instead of ftransform(), say:
gl_Position = gl_ProjectionMatrix * view_matrix * model_matrix * gl_Vertex;
In the main program you must write values to both of these new matrices. First, to get the view matrix, do the camera transformations with glLoadIdentity(), glTranslate(), glRotate() or gluLookAt() or whatever you prefer, as you normally would, but then call glGetFloatv(GL_MODELVIEW_MATRIX, array); to read the matrix data into an array. Secondly, in a similar way, to get the model matrix, call glLoadIdentity() again, do the object transformations with glTranslate(), glRotate(), glScale() etc., and finally call glGetFloatv(GL_MODELVIEW_MATRIX, array); to get the matrix data out of OpenGL so you can send it to your vertex shader. Note especially that you need to call glLoadIdentity() before beginning to transform the object. Normally you would first transform the camera and then transform the object, which would result in one matrix that does both the view and model functions. But because you're using separate matrices, you need to reset the matrix after the camera transformations with glLoadIdentity().
gl_FragCoord contains window (pixel) coordinates, not world coordinates.
Or you could just divide the z coordinate by the w coordinate, which essentially undoes the perspective projection, giving you a depth value rather than full world coordinates.
ie.
depth = gl_FragCoord.z / gl_FragCoord.w;
Of course, this will only work for non-clipped coordinates..
But who cares about clipped ones anyway?
You need to pass the World/Model matrix as a uniform to the vertex shader, and then multiply it by the vertex position and send it as a varying to the fragment shader:
/*Vertex Shader*/
layout (location = 0) in vec3 Position;
uniform mat4 World;
uniform mat4 WVP;
//World FragPos
out vec4 FragPos;
void main()
{
FragPos = World * vec4(Position, 1.0);
gl_Position = WVP * vec4(Position, 1.0);
}
/*Fragment Shader*/
layout (location = 0) out vec4 Color;
...
in vec4 FragPos;
void main()
{
Color = FragPos;
}
The easiest way is to pass the world-position down from the vertex shader via a varying variable.
However, if you really must reconstruct it from gl_FragCoord, the only way to do this is to invert all the steps that led to the gl_FragCoord values. Consult the OpenGL specs if you really have to do this, because a deep understanding of the transformations is necessary. A sketch of that inversion follows.
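For illustration, a fragment-shader sketch of that inversion under common assumptions: the default glDepthRange(0, 1), and invViewProj / viewport uniforms that the application would have to supply (these names are mine, not from the answer):
#version 330 core
uniform mat4 invViewProj;   // inverse(projection * view)
uniform vec4 viewport;      // (x, y, width, height) as passed to glViewport
out vec4 out_Color;
vec3 worldFromFragCoord()
{
    // window coordinates -> normalized device coordinates in [-1, 1]
    vec3 ndc;
    ndc.xy = (gl_FragCoord.xy - viewport.xy) / viewport.zw * 2.0 - 1.0;
    ndc.z  = gl_FragCoord.z * 2.0 - 1.0;
    // undo the view-projection transform and the perspective divide
    vec4 world = invViewProj * vec4(ndc, 1.0);
    return world.xyz / world.w;
}
void main()
{
    out_Color = vec4(worldFromFragCoord(), 1.0);
}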