How to fix incorrect Blinn-Phong lighting - C++

I am trying to implement Blinn-Phong shading for a single light source within a Vulkan shader but I am getting a result which is not what I expect.
The output is shown below:
The light position should be behind and to the right of the camera, which is correctly represented on the torus but not on the circle. I do not expect the point of high intensity in the middle of the circle.
The light position is at coordinates (10, 10, 10).
The point of high intensity in the middle of the circle is (0,0,0).
Vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 0) uniform MVP {
mat4 model;
mat4 view;
mat4 proj;
} mvp;
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;
layout(location = 0) out vec3 fragColor;
layout(location = 1) out vec2 fragTexCoord;
layout(location = 2) out vec3 Normal;
layout(location = 3) out vec3 FragPos;
layout(location = 4) out vec3 viewPos;
void main() {
gl_Position = mvp.proj * mvp.view * mvp.model * vec4(inPosition, 1.0);
fragColor = inColor;
fragTexCoord = inTexCoord;
Normal = inNormal;
FragPos = inPosition;
viewPos = vec3(mvp.view[3][0], mvp.view[3][1], mvp.view[3][2]);
}
Fragment shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
layout(binding = 1) uniform sampler2D texSampler;
layout(binding = 2) uniform LightUBO{
vec3 position;
vec3 color;
} Light;
layout(location = 0) in vec3 fragColor;
layout(location = 1) in vec2 fragTexCoord;
layout(location = 2) in vec3 Normal;
layout(location = 3) in vec3 FragPos;
layout(location = 4) in vec3 viewPos;
layout(location = 0) out vec4 outColor;
void main() {
vec3 color = texture(texSampler, fragTexCoord).rgb;
// ambient
vec3 ambient = 0.2 * color;
// diffuse
vec3 lightDir = normalize(Light.position - FragPos);
vec3 normal = normalize(Normal);
float diff = max(dot(lightDir, normal), 0.0);
vec3 diffuse = diff * color;
// specular
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, normal);
float spec = 0.0;
vec3 halfwayDir = normalize(lightDir + viewDir);
spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
vec3 specular = vec3(0.25) * spec;
outColor = vec4(ambient + diffuse + specular, 1.0);
}
Note:
I am trying to adapt the shaders from this tutorial to Vulkan.

This would seem to simply be a question of using the right coordinate system. Since some vital information is missing from your question, I will have to make a few assumptions. First of all, based on the fact that you have a model matrix and apparently have multiple objects in your scene, I will assume that your world space and object space are not the same in general. Furthermore, I will assume that your model matrix transforms from object space to world space, your view matrix transforms from world space to view space and your proj matrix transforms from view space to clip space. I will also assume that your inPosition and inNormal attributes are in object space coordinates.
Based on all of this, your viewPos is just taking the last column of the view matrix, which will not contain the camera position in world space. Neither will the last row. The view matrix transforms from world space to view space. Its last column corresponds to the vector pointing to the world space origin as seen from the perspective of the camera. Your FragPos and Normal will be in object space. And, based on what you said in your question, your light positions are in world space. So in the end, you're just mashing together coordinates that are all relative to completely different coordinate systems. For example:
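As a side note, here is a minimal GLSL sketch of that last point, under the assumptions above (the view matrix maps world space to view space); it only uses standard GLSL, nothing specific to your code:
// The translation column of a world-to-view matrix is NOT the camera position.
vec3 originInViewSpace = vec3(mvp.view[3]); // what your viewPos currently contains
vec3 camPosWorld = vec3(inverse(mvp.view)[3]); // camera position in world space
// Equivalent without a full inverse, for a rigid view matrix [R | t]:
// camPosWorld = -transpose(mat3(mvp.view)) * vec3(mvp.view[3]);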
vec3 lightDir = normalize(Light.position - FragPos);
Here, you're subtracting an object space position from a world space position, which will yield a completely meaningless result. This meaningless result is then normalized and dotted with an object-space direction:
float diff = max(dot(lightDir, normal), 0.0);
Also, even if viewPos were the world-space camera position, this
vec3 viewDir = normalize(viewPos - FragPos);
would still be meaningless since FragPos is given in object-space coordinates.
Operations on coordinate vectors only make sense if all the vectors involved are relative to the same coordinate system. It doesn't really matter so much which coordinate system you choose, but you have to pick one. Make sure all your vectors are actually relative to that coordinate system, e.g., world space. If some vectors do not already happen to be in that coordinate system, you will have to transform them into it. Only once all your vectors are in the same coordinate system will your shading computations be meaningful…
To get the viewPos, you could take the last column of the inverse view matrix (if you happened to already have that around for some reason), or simply pass the camera position as an additional uniform. Also, rather than multiplying the model, view, and projection matrices again and again, once for every single vertex, consider passing a combined model-view-projection matrix to the shader…
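To make this concrete, here is a minimal sketch of the world-space variant, under the assumptions stated above; camPos is a new uniform holding the camera's world-space position (it is not in your current code), and Light.position is assumed to be given in world space:
// Vertex shader: output world-space position and normal.
FragPos = vec3(mvp.model * vec4(inPosition, 1.0));
Normal = mat3(transpose(inverse(mvp.model))) * inNormal; // normal matrix; cheaper to compute once on the CPU
// Fragment shader: now every vector is relative to world space.
vec3 normal = normalize(Normal);
vec3 lightDir = normalize(Light.position - FragPos); // world-space light position
vec3 viewDir = normalize(camPos - FragPos); // camPos = camera position uniform (world space)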
Apart from that: Note that you will most likely only want to have a specular component if the surface is actually oriented towards the light.
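For example, one common way to gate the specular term (a sketch reusing the names above):
float diff = max(dot(normal, lightDir), 0.0);
vec3 halfwayDir = normalize(lightDir + viewDir);
float spec = (diff > 0.0) ? pow(max(dot(normal, halfwayDir), 0.0), 32.0) : 0.0;
vec3 specular = vec3(0.25) * spec; // no highlight on surfaces facing away from the light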

Related

Normal mapping working incorrectly, weird half-light effect

We are trying to implement normal mapping in our 2D game engine and are getting a weird effect.
If the normal is set manually, like this:
vec3 Normal = vec3(0.0, 0.0, 1.0), the light works correctly, but we don't get the "deep" effect that we want to achieve with normal mapping:
But if we get the normal from the normal map texture, vec3 Normal = texture(NormalMap, TexCoord).rgb, it doesn't work at all. What should not be illuminated is illuminated and vice versa (such as the gaps between the bricks). Besides this, a dark area appears on the bottom (or top, depending on the position of the light) side of the texture.
Although the texture of the normal map itself looks fine:
This is our fragment shader:
#version 330 core
layout (location = 0) out vec4 FragColor;
in vec2 TexCoord;
in vec2 FragPos;
uniform sampler2D OurTexture;
uniform sampler2D NormalMap;
struct point_light
{
vec3 Position;
vec3 Color;
};
uniform point_light Light;
void main()
{
vec4 Color = texture(OurTexture, TexCoord);
vec3 Normal = texture(NormalMap, TexCoord).rgb;
if (Color.a < 0.1)
discard;
vec3 LightDir = vec3(Light.Position.xy - FragPos, Light.Position.z);
float D = length(LightDir);
vec3 L = normalize(LightDir);
Normal = normalize(Normal * 2.0 - 1.0);
vec3 Diffuse = Light.Color * max(dot(Normal, L), 0);
vec3 Ambient = vec3(0.3, 0.3, 0.3);
vec3 Falloff = vec3(1, 0, 0);
float Attenuation = 1.0 /(Falloff.x + Falloff.y*D + Falloff.z*D*D);
vec3 Intensity = (Ambient + Diffuse) * Attenuation;
FragColor = Color * vec4(Intensity, 1);
}
And vertex as well:
#version 330 core
layout (location = 0) in vec2 aPosition;
layout (location = 1) in vec2 aTexCoord;
uniform mat4 Transform;
uniform mat4 ViewProjection;
out vec2 FragPos;
out vec2 TexCoord;
void main()
{
gl_Position = ViewProjection * Transform * vec4(aPosition, 0.0, 1.0);
TexCoord = aTexCoord;
FragPos = vec2(Transform * vec4(aPosition, 0.0, 1.0));
}
I googled this and found some people who got the same result, but their questions remained unanswered.
Any idea what the cause is?
What texture format are you using for the normal map? SRGB, SNORM, etc? That might be the issue. Try UNORM.
Additionally, since you are not using a tangent space, make sure the plane's Z axis aligns with the Z axis of the normals. Also, OpenGL reads Y in the reverse direction, so you need to flip the Y coordinates of the normals that you read from the normal map. Alternatively, you can use a reversed-Y normal map (green pointing down).
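A minimal decode sketch under those assumptions (UNORM normal map, no tangent space; the Y flip only applies if your map was authored with the opposite green convention):
vec3 Normal = texture(NormalMap, TexCoord).rgb; // UNORM fetch, components in [0, 1]
Normal = Normal * 2.0 - 1.0; // remap to [-1, 1]
Normal.y = -Normal.y; // flip green if the map was authored Y-down (assumption)
Normal = normalize(Normal);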

Normal mapping: TBN matrix different result in vertex shader compared to fragment shader

I'm working on a normal mapping implementation for a tutorial and for teaching purposes I'd like to pass a TBN matrix to the fragment shader (from the vertex shader) so I can transform normal vectors in tangent space to world-space for lighting calculations. The normal mapping is applied to a 2D plane with its normal pointing in the positive z direction.
However, when I calculate the TBN matrix in the vertex shader of a flat plane (so all tangents/bitangents are the same for all vertices) the displayed normals are completely off. While if I pass the tangent/bitangent and normal vectors to the fragment shader and construct the TBN there, it works just fine as the image below shows (with displayed normals):
This is where it gets weird. Because the plane is flat, the T,B and N vectors are the same for all its vertices thus the TBN matrix should also be the same for each fragment (as fragment interpolation doesn't change anything). The TBN matrix in the vertex shader should be exactly the same as the TBN matrix in the fragment shader but the visual outputs say otherwise.
The source code of both the vertex and fragment shader are below:
Vertex:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 tangent;
layout (location = 4) in vec3 bitangent;
out VS_OUT {
vec3 FragPos;
vec3 Normal;
vec2 TexCoords;
vec3 Tangent;
vec3 Bitangent;
mat3 TBN;
} vs_out;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
void main()
{
gl_Position = projection * view * model * vec4(position, 1.0f);
vs_out.FragPos = vec3(model * vec4(position, 1.0));
vs_out.TexCoords = texCoords;
mat3 normalMatrix = transpose(inverse(mat3(model)));
vs_out.Normal = normalize(normalMatrix * normal);
vec3 T = normalize(normalMatrix * tangent);
vec3 B = normalize(normalMatrix * bitangent);
vec3 N = normalize(normalMatrix * normal);
vs_out.TBN = mat3(T, B, N);
vs_out.Tangent = T;
vs_out.Bitangent = B;
}
Fragment
#version 330 core
out vec4 FragColor;
in VS_OUT {
vec3 FragPos;
vec3 Normal;
vec2 TexCoords;
vec3 Tangent;
vec3 Bitangent;
mat3 TBN;
} fs_in;
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform vec3 lightPos;
uniform vec3 viewPos;
uniform bool normalMapping;
void main()
{
vec3 normal = fs_in.Normal;
mat3 tbn;
if(normalMapping)
{
// Obtain normal from normal map in range [0,1]
normal = texture(normalMap, fs_in.TexCoords).rgb;
// Transform normal vector to range [-1,1]
normal = normalize(normal * 2.0 - 1.0);
// Then transform normal in tangent space to world-space via TBN matrix
tbn = mat3(fs_in.Tangent, fs_in.Bitangent, fs_in.Normal); // TBN calculated in fragment shader
// normal = normalize(tbn * normal); // This works!
normal = normalize(fs_in.TBN * normal); // This gives incorrect results
}
// Get diffuse color
vec3 color = texture(diffuseMap, fs_in.TexCoords).rgb;
// Ambient
vec3 ambient = 0.1 * color;
// Diffuse
vec3 lightDir = normalize(lightPos - fs_in.FragPos);
float diff = max(dot(lightDir, normal), 0.0);
vec3 diffuse = diff * color;
// Specular
vec3 viewDir = normalize(viewPos - fs_in.FragPos);
vec3 reflectDir = reflect(-lightDir, normal);
vec3 halfwayDir = normalize(lightDir + viewDir);
float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
vec3 specular = vec3(0.2) * spec; // assuming bright white light color
FragColor = vec4(ambient + diffuse + specular, 1.0f);
FragColor = vec4(normal, 1.0); // display normals for debugging
}
Both TBN matrices are clearly different. Below I compiled an image of different fragment shader outputs:
You can see that the T,B and N vectors are correct and so is the fragment shader's tbn matrix, but the TBN matrix from the vertex shader fs_in.TBN gives completely bogus values.
I am completely clueless as to why it doesn't work. I know I can simply pass the Tangent and Bitangent vector to the fragment shader, calculate it there and be done with it but I'm quite curious as to the exact reason why this doesn't work?
Some general remarks:
1) You should normalize fs_in.Tangent, fs_in.Bitangent, and fs_in.Normal in the fragment shader, since it is not guaranteed that they still have unit length after the varying interpolation. They should be normalized because they are the basis vectors of a coordinate system.
2) You don't need to pass all three of tangent, bitangent, and normal, since one of them can be calculated from the other two with a cross product: bitangent = cross(tangent, normal). This point also speaks in favor of passing (two) vectors instead of the whole matrix.
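A minimal fragment-shader sketch of remarks 1) and 2), using the names from your fs_in block; the sign of the cross product depends on your tangent-space handedness convention:
vec3 N = normalize(fs_in.Normal); // re-normalize after interpolation
vec3 T = normalize(fs_in.Tangent);
vec3 B = cross(T, N); // reconstructed bitangent; flip the sign if your convention differs
mat3 tbn = mat3(T, B, N);
vec3 n = normalize(tbn * (texture(normalMap, fs_in.TexCoords).rgb * 2.0 - 1.0));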
To your question, why fs_in.TBN does not look like tbn in the fragment shader:
The images you provide look very strange indeed. It looks like the matrix's vectors are somehow mixed up. Which output do you get when displaying the transposed matrix, transpose(fs_in.TBN)?
Also make sure that the matrix's columns are accessed correctly, as described in [1]! (It looks like they are, but please double-check; you never know.)
[1] Geeks3D; GLSL 4×4 Matrix Fields; http://www.geeks3d.com/20141114/glsl-4x4-matrix-mat4-fields/
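For completeness, a tiny GLSL illustration of how a mat3 is filled and indexed (by columns, not rows):
mat3 tbn = mat3(T, B, N); // the constructor fills columns: tbn[0] == T, tbn[1] == B, tbn[2] == N
vec3 firstColumn = tbn[0]; // indexing returns a column vector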

Why do my specular highlights show up so strongly on polygon edges?

I have a simple application that draws a sphere with a single directional light. I'm creating the sphere by starting with an octahedron and subdividing each triangle into 4 smaller triangles.
With just diffuse lighting, the sphere looks very smooth. However, when I add specular highlights, the edges of the triangles show up fairly strongly. Here are some examples:
Diffuse only:
Diffuse and Specular:
I believe that the normals are being interpolated correctly. Looking at just the normals, I get this:
In fact, if I switch to a flat shading, where the normals are per-polygon instead of per-vertex, I get this:
In my vertex shader, I'm multiplying the model's normals by the transpose inverse modelview matrix:
#version 330 core
layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec3 vNormal;
layout (location = 2) in vec2 vTexCoord;
out vec3 fNormal;
out vec2 fTexCoord;
uniform mat4 transInvModelView;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
void main()
{
fNormal = vec3(transInvModelView * vec4(vNormal, 0.0));
fTexCoord = vTexCoord;
gl_Position = ProjectionMatrix * ModelViewMatrix * vPosition;
}
and in the fragment shader, I'm calculating the specular highlights as follows:
#version 330 core
in vec3 fNormal;
in vec2 fTexCoord;
out vec4 color;
uniform sampler2D tex;
uniform vec4 lightColor; // RGB, assumes multiplied by light intensity
uniform vec3 lightDirection; // normalized, assumes directional light, lambertian lighting
uniform float specularIntensity;
uniform float specularShininess;
uniform vec3 halfVector; // Halfway between eye and light
uniform vec4 objectColor;
void main()
{
vec4 texColor = objectColor;
float specular = max(dot(halfVector, fNormal), 0.0);
float diffuse = max(dot(lightDirection, fNormal), 0.0);
if (diffuse == 0.0)
{
specular = 0.0;
}
else
{
specular = pow(specular, specularShininess) * specularIntensity;
}
color = texColor * diffuse * lightColor + min(specular * lightColor, vec4(1.0));
}
I was a little confused about how to calculate the halfVector. I'm doing it on the CPU and passing it in as a uniform. It's calculated like this:
vec3 lightDirection(1.0, 1.0, 1.0);
lightDirection = normalize(lightDirection);
vec3 eyeDirection(0.0, 0.0, 1.0);
eyeDirection = normalize(eyeDirection);
vec3 halfVector = lightDirection + eyeDirection;
halfVector = normalize(halfVector);
glUniform3fv(halfVectorLoc, 1, &halfVector [ 0 ]);
Is that the correct formulation for the halfVector? Or does it need to be done in the shaders as well?
Interpolating normals across a face can (and almost always will) result in a shortening of the normal. That's why the highlight is darker in the center of a face and brighter at corners and edges. If you interpolate normals, just re-normalize the interpolated normal in the fragment shader:
fNormal = normalize(fNormal);
Btw, you cannot precompute the half vector as it is view dependent (that's the whole point of specular lighting). In your current scenario, the highlight will not change when you just move the camera (keeping the direction).
One way to do this in the shader is to pass an additional uniform for the eye position and then calculate the view direction as eyePosition - vertexPosition. Then continue as you did on the CPU.
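A minimal fragment-shader sketch of that idea; eyePosition and fPosition are hypothetical additions (a uniform and a varying in the same space as fNormal and lightDirection), not part of your current code:
vec3 viewDir = normalize(eyePosition - fPosition); // per-fragment view direction
vec3 halfVector = normalize(lightDirection + viewDir); // per-fragment half vector
float specular = max(dot(halfVector, normalize(fNormal)), 0.0); // then continue as before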

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side and then transform the vertex in the vertex shader with the model-view matrix.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
vec4 gl_Position;
};
void main()
{
uvsOut = uvs;
diffuseOut = DIFFUSE_COLOR;
Normal = normal;
Position = vec3(MODEL_VIEW_MATRIX * position);
gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
vec4 Lp;///light position
vec3 Li;///light intensity
vec3 Lc;///light color
int Lt;///light type
};
const int MAX_LIGHTS=5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
////ADS lighting method :
vec3 pointlightType( int lightIndex,vec3 position , vec3 normal) {
vec3 n = normalize(normal);
vec4 lMVPos = lights[0].Lp ; //
vec3 s = normalize(vec3(lMVPos.xyz) - position); //surf to light
vec3 v = normalize(vec3(-position)); //
vec3 r = normalize(- reflect(s , n));
vec3 h = normalize(v+s);
float sDotN = max( 0.0 , dot(s, n) );
vec3 diff = KD * lights[0].Lc * sDotN ;
diff = clamp(diff ,0.0 ,1.0);
vec3 spec = vec3(0,0,0);
if (sDotN > 0.0) {
spec = KS * pow( max( 0.0 ,dot(n,h) ) , SHININESS);
spec = clamp(spec ,0.0 ,1.0);
}
return lights[0].Li * ( spec+diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transforming between spaces. I suspect it has something to do with the camera space I transform the light and vertex positions into. In my case the view matrix is created with
glm::lookAt()
which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how this is done the right way in the programmable pipeline? My shaders are based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space, but it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays in place. But how do I achieve the same using camera space?
I nailed down the bug, and it was a pretty stupid one, but it may be helpful to others who are too "math friendly". My light position in the shaders is defined as a vec3, while on the client side it is represented as a vec4. I was effectively setting the .w component of the vec4 to zero each time before transforming it with the view matrix. Doing so, I believe, the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
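In other words (sketched in GLSL for brevity; lightWorldPos and lightWorldDir are illustrative names), the w component controls whether the matrix's translation is applied, and the same rule applies to the glm transform on the client side:
vec4 lightViewPos = VIEW_MATRIX * vec4(lightWorldPos, 1.0); // position (point): w = 1, translation applies
vec4 lightViewDir = VIEW_MATRIX * vec4(lightWorldDir, 0.0); // direction: w = 0, translation is dropped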

Tangent Space Normal Mapping - shader sanity check

I'm getting some pretty freaky results from my tangent space normal mapping shader :). In the scene I show here, the teapot and checkered walls are being shaded with my ordinary Phong-Blinn shader (obviously teapot backface cull gives it a lightly ephemeral look and feel :-) ). I've tried to add in normal mapping to the sphere, with psychedelic results:
The light is coming from the right (just about visible as a black blob). The normal map I'm using on the sphere looks like this:
I'm using AssImp to process input models, so it calculates the tangents and bi-normals for each vertex automatically for me.
The pixel and vertex shaders are below. I'm not too sure what's going wrong, but it wouldn't surprise me if the tangent basis matrix is somehow wrong. I assume I have to compute things into eye space and then transform the eye and light vectors into tangent space and that this is the correct way to go about it. Note that the light position comes into the shader already in view space.
// Vertex Shader
#version 420
// Uniform Buffer Structures
// Camera.
layout (std140) uniform Camera
{
mat4 Camera_Projection;
mat4 Camera_View;
};
// Matrices per model.
layout (std140) uniform Model
{
mat4 Model_ViewModelSpace;
mat4 Model_ViewModelSpaceInverseTranspose;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position; // Already in view space.
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Streams (per vertex)
layout(location = 0) in vec3 attrib_Position;
layout(location = 1) in vec3 attrib_Normal;
layout(location = 2) in vec3 attrib_Tangent;
layout(location = 3) in vec3 attrib_BiNormal;
layout(location = 4) in vec2 attrib_Texture;
// Output streams (per vertex)
out vec3 attrib_Fragment_Normal;
out vec4 attrib_Fragment_Position;
out vec3 attrib_Fragment_Light;
out vec3 attrib_Fragment_Eye;
// Shared.
out vec2 varying_TextureCoord;
// Main
void main()
{
// Compute normal.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Generate matrix for tangent basis.
mat3 tangentBasis = mat3( attrib_Tangent,
attrib_BiNormal,
attrib_Normal);
// Light vector.
attrib_Fragment_Light = tangentBasis * normalize(Light_Position - position.xyz);
// Eye vector.
attrib_Fragment_Eye = tangentBasis * normalize(-position.xyz);
// Return position.
gl_Position = Camera_Projection * position;
}
... and the pixel shader looks like this:
// Pixel Shader
#version 420
// Samplers
uniform sampler2D Map_Normal;
// Global Uniforms
// Material.
layout (std140) uniform Material
{
vec4 Material_Ambient_Colour;
vec4 Material_Diffuse_Colour;
vec4 Material_Specular_Colour;
vec4 Material_Emissive_Colour;
float Material_Shininess;
float Material_Strength;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position;
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Input streams (per vertex)
in vec3 attrib_Fragment_Normal;
in vec3 attrib_Fragment_Position;
in vec3 attrib_Fragment_Light;
in vec3 attrib_Fragment_Eye;
// Shared.
in vec2 varying_TextureCoord;
// Result
out vec4 Out_Colour;
// Main
void main(void)
{
// Compute normals.
vec3 N = normalize(texture(Map_Normal, varying_TextureCoord).xyz * 2.0 - 1.0);
vec3 L = normalize(attrib_Fragment_Light);
vec3 V = normalize(attrib_Fragment_Eye);
vec3 R = normalize(-reflect(L, N));
// Compute products.
float NdotL = max(0.0, dot(N, L));
float RdotV = max(0.0, dot(R, V));
// Compute final colours.
vec4 ambient = Light_Ambient_Colour * Material_Ambient_Colour;
vec4 diffuse = Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * (pow(RdotV, Material_Shininess) * Material_Strength);
// Final colour.
Out_Colour = ambient + diffuse + specular;
}
Edit: 3D Studio Render of the scene (to show the UV's are OK on the sphere):
I think your shaders are okay, but your texture coordinates on the sphere are totally off. It's as if they got distorted towards the poles along the longitude.