OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I viewed the lit object from different camera angles and found that the lit area moves across the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side, and in the vertex shader I transform the vertex into the same space with the model-view matrix.
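That client-side step, sketched with GLM (variable names are mine; as it turns out below, the w component in this transform is exactly where my real code went wrong):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr

// Transform the world-space light position into camera (eye) space.
// w = 1 so that the view matrix's translation applies to the point.
glm::vec4 lightPosEye = viewMatrix * glm::vec4(lightPosWorld, 1.0f);
glUniform4fv(lightPosLocation, 1, glm::value_ptr(lightPosEye));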
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
    vec4 gl_Position;
};

void main()
{
    uvsOut = uvs;
    diffuseOut = DIFFUSE_COLOR;
    Normal = normal;
    Position = vec3(MODEL_VIEW_MATRIX * position);
    gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
    vec4 Lp; /// light position
    vec3 Li; /// light intensity
    vec3 Lc; /// light color
    int Lt;  /// light type
};
const int MAX_LIGHTS = 5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;

//// ADS lighting method:
vec3 pointlightType(int lightIndex, vec3 position, vec3 normal) {
    vec3 n = normalize(normal);
    vec4 lMVPos = lights[0].Lp; // light position, already transformed into camera space
    vec3 s = normalize(lMVPos.xyz - position); // surface-to-light vector
    vec3 v = normalize(-position);             // surface-to-eye vector
    vec3 r = normalize(-reflect(s, n));
    vec3 h = normalize(v + s);
    float sDotN = max(0.0, dot(s, n));
    vec3 diff = KD * lights[0].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);
    vec3 spec = vec3(0, 0, 0);
    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }
    return lights[0].Li * (spec + diff);
}
I have studied a lot of tutorials, but none of them thoroughly explains the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex positions into. In my case the view matrix is created with glm::lookAt(), which always negates the "eye" vector, so the view matrix in my shaders ends up with a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how this is done the right way in the programmable pipeline? My shaders are based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space, but it doesn't work right for me, unless that is the way it is supposed to work...
I have since moved the calculations into world space, and now the point light stays in place. But how do I achieve the same thing using camera space?

I nailed down the bug, and it was a pretty stupid one, but it may be helpful to others who are too "math friendly". My light position in the shaders is defined as a vec3, while on the client side it is represented as a vec4. I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. With w equal to zero the vector is treated as a direction, so the view matrix's translation was never applied, the light position wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
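To illustrate the difference (a minimal GLM sketch of mine, not my actual client code):

#include <glm/glm.hpp>

// w = 0: treated as a direction, so only the rotational part of the
// view matrix is applied and the translation is dropped. This was my bug.
glm::vec4 wrong = viewMatrix * glm::vec4(lightPosWorld, 0.0f);

// w = 1: treated as a point, so rotation AND translation are applied,
// giving the correct camera-space light position.
glm::vec4 right = viewMatrix * glm::vec4(lightPosWorld, 1.0f);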

Related

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light/shading system, specifically a basic Phong lighting model without specular highlights. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem with the texture mipmaps, but disabling them didn't help. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue or an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
    // Pass uv coordinates
    uv = vuv;

    // Adjust normals
    normal = nm * vnormal;

    // Calculation of vertex in camera space
    pos = (mv * vec4(vpos, 1.0)).xyz;

    // Vector vertex -> light in camera space
    light[0] = (mv * vec4(0.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[1] = (mv * vec4(-6.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[2] = (mv * vec4(0.0, 0.3, 4.8, 1.0)).xyz - pos;

    // Pass position after projection transformation
    gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
    vec3 n = normalize(normal);

    // Ambient
    color = 0.05 * texture(tex, uv).rgb;

    // Diffuse lights
    for (int i = 0; i < 3; i++) {
        vec3 l = normalize(light[i]);
        float cos = clamp(dot(n, l), 0.0, 1.0);
        float length = length(light[i]);
        color += 0.6 * texture(tex, uv).rgb * cos / pow(length, 2);
    }
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the length = length(light[i]); pow(length,2) expression is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i],light[i]) instead.
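In other words, the loop body could compute the squared distance directly (a sketch of the suggested change, keeping the question's variable names):

vec3 l = normalize(light[i]);
float cos = clamp(dot(n, l), 0.0, 1.0);
float distSq = dot(light[i], light[i]); // == length(light[i])^2, no sqrt or pow needed
color += 0.6 * texture(tex, uv).rgb * cos / distSq;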
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
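For the dithering route, one common approach is to add less than one quantization step of screen-space noise before the 8-bit rounding; a minimal sketch using the well-known sine-hash (not code I have tested against this scene):

// Per-pixel pseudo-random value in [0, 1).
float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
// Nudge the color by less than 1/255 so banding breaks up into noise.
color += (noise - 0.5) / 255.0;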

OpenGL 3D terrain lighting artefacts

I'm doing per-pixel lighting (Phong shading) on my terrain. I'm using a heightmap to generate the terrain heights and then calculating the normal for each vertex. The normals are interpolated in the fragment shader and also normalized there.
I am getting some weird dark lines near the edges of triangles where there shouldn't be any.
http://imgur.com/L2kj4ca
I checked if the normals were correct using a geometry shader to draw the normals on the terrain and they seem to be correct.
http://imgur.com/FrJpdXI
There is no point using a normal map for the terrain; it would just give pretty much the same normals. The problem lies in the way the normals are interpolated across a triangle.
I am out of ideas on how to solve this. I couldn't find any working solution online.
Terrain Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 textureCoords;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosition;
const float density = 0.0035;
const float gradient = 5.0;
void main()
{
    vec4 worldPosition = transformationMatrix * vec4(position, 1.0f);
    vec4 positionRelativeToCam = viewMatrix * worldPosition;
    gl_Position = projectionMatrix * positionRelativeToCam;
    pass_textureCoords = textureCoords;

    surfaceNormal = (transformationMatrix * vec4(normal, 0.0f)).xyz;
    toLightVector = lightPosition - worldPosition.xyz;

    float distance = length(positionRelativeToCam.xyz);
    visibility = exp(-pow((distance * density), gradient));
    visibility = clamp(visibility, 0.0, 1.0);
}
Terrain Fragment Shader:
#version 330 core
in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;
in float visibility;
out vec4 colour;
uniform vec3 lightColour;
uniform vec3 fogColour;
uniform sampler2DArray blendMap;
uniform sampler2DArray diffuseMap;
void main()
{
    vec4 blendMapColour = texture(blendMap, vec3(pass_textureCoords, 0));

    float backTextureAmount = 1 - (blendMapColour.r + blendMapColour.g + blendMapColour.b);
    vec2 tiledCoords = pass_textureCoords * 255.0;

    vec4 backgroundTextureColour = texture(diffuseMap, vec3(tiledCoords, 0)) * backTextureAmount;
    vec4 rTextureColour = texture(diffuseMap, vec3(tiledCoords, 1)) * blendMapColour.r;
    vec4 gTextureColour = texture(diffuseMap, vec3(tiledCoords, 2)) * blendMapColour.g;
    vec4 bTextureColour = texture(diffuseMap, vec3(tiledCoords, 3)) * blendMapColour.b;
    vec4 diffuseColour = backgroundTextureColour + rTextureColour + gTextureColour + bTextureColour;

    vec3 unitSurfaceNormal = normalize(surfaceNormal);
    vec3 unitToLightVector = normalize(toLightVector);

    float brightness = dot(unitSurfaceNormal, unitToLightVector);
    float ambient = 0.2;
    brightness = max(brightness, ambient);
    vec3 diffuse = brightness * lightColour;

    colour = vec4(diffuse, 1.0) * diffuseColour;
    colour = mix(vec4(fogColour, 1.0), colour, visibility);
}
This can be one of two issues:
1. Incorrect normals:
There are different types of shading: flat shading, Gouraud shading and Phong shading (not to be confused with the Phong specular model).
You usually want Phong shading. To get it, OpenGL makes your life easier and interpolates the normals between the vertices of each triangle for you, so at each pixel you have the correct normal for that point. But you still need to feed it proper per-vertex normal values: the average of the normals of all the triangles attached to that vertex. So in the function that creates your vertices, normals and UVs, you need to compute the normal at each vertex by averaging the normals of every triangle attached to it, as in the sketch below.
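A sketch of that averaging step, in C++ with GLM (the mesh layout and function name are hypothetical):

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// Accumulate each triangle's face normal into its three vertices, then
// normalize: each vertex normal becomes the average of the normals of
// all triangles that share it.
void averageNormals(const std::vector<glm::vec3>& positions,
                    const std::vector<unsigned>& indices,
                    std::vector<glm::vec3>& normals)
{
    normals.assign(positions.size(), glm::vec3(0.0f));
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        glm::vec3 faceNormal = glm::normalize(
            glm::cross(positions[b] - positions[a],
                       positions[c] - positions[a]));
        normals[a] += faceNormal;
        normals[b] += faceNormal;
        normals[c] += faceNormal;
    }
    for (glm::vec3& n : normals)
        n = glm::normalize(n);
}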
2. Subdivision problem:
The other possible issue is that your terrain is not subdivided enough, or your heightmap resolution is too low, resulting in this kind of glitch because of the height difference between two vertices of one triangle (that is, between two pixels of your heightmap).
If you can provide some of your code and shaders, maybe even the heightmap, we can pin down exactly what is happening in your case.
This is old, but I suspect you're not transforming your normal using the inverse transpose of the upper 3x3 part of your modelview matrix. See this. Not sure what's in transformationMatrix, but if you're using it to transform both the vertex and the normal, something is probably fishy...
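For reference, computing that matrix on the CPU with GLM could look like this (the modelView variable name is mine):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp> // glm::inverseTranspose

// Inverse transpose of the upper 3x3 of the model-view matrix; this keeps
// normals perpendicular to surfaces even under non-uniform scaling.
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelView));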

Normal mapping gone horribly wrong

I tried to implement normal mapping in my OpenGL application, but I can't get it to work.
This is the diffuse map (which I add a brown color to) and this is the normal map.
In order to get the tangent and bitangent (elsewhere called binormal?) vectors, I run this function for every triangle in my mesh:
void getTangent(const glm::vec3 &v0, const glm::vec3 &v1, const glm::vec3 &v2,
                const glm::vec2 &uv0, const glm::vec2 &uv1, const glm::vec2 &uv2,
                std::vector<glm::vec3> &vTangents, std::vector<glm::vec3> &vBitangents)
{
    // Edges of the triangle : position delta
    glm::vec3 deltaPos1 = v1 - v0;
    glm::vec3 deltaPos2 = v2 - v0;
    // UV delta
    glm::vec2 deltaUV1 = uv1 - uv0;
    glm::vec2 deltaUV2 = uv2 - uv0;

    float r = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
    glm::vec3 tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y) * r;
    glm::vec3 bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x) * r;

    for(int i = 0; i < 3; i++) {
        vTangents.push_back(tangent);
        vBitangents.push_back(bitangent);
    }
}
After that, I call glBufferData to upload the vertices, normals, uvs, tangents and bitangents to the GPU.
The vertex shader:
#version 430
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
in vec3 tangent;
in vec3 bitangent;
out vec2 fsCoords;
out vec3 fsVertex;
out mat3 TBNMatrix;
void main()
{
    gl_Position = ProjectionMatrix * CameraMatrix * ModelMatrix * vec4(vertex, 1.0);
    fsCoords = uv;
    fsVertex = vertex;
    TBNMatrix = mat3(tangent, bitangent, normal);
}
Fragment shader:
#version 430
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform mat4 ModelMatrix;
uniform vec3 CameraPosition;
uniform struct Light {
float ambient;
vec3 position;
} light;
uniform float shininess;
in vec2 fsCoords;
in vec3 fsVertex;
in mat3 TBNMatrix;
out vec4 color;
void main()
{
    //base color
    const vec3 brownColor = vec3(153.0 / 255.0, 102.0 / 255.0, 51.0 / 255.0);
    color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0);//add a fixed base color (0.25), because its dark as hell

    //general vars
    vec3 normal = texture(normalMap, fsCoords).rgb * 2.0 - 1.0;
    vec3 surfacePos = vec3(ModelMatrix * vec4(fsVertex, 1.0));
    vec3 surfaceToLight = normalize(TBNMatrix * (light.position - surfacePos)); //unit vector
    vec3 eyePos = TBNMatrix * CameraPosition;

    //diffuse
    float diffuse = max(0.0, dot(normal, surfaceToLight));

    //specular
    float specular;
    vec3 incidentVector = -surfaceToLight; //unit
    vec3 reflectionVector = reflect(incidentVector, normal); //unit vector
    vec3 surfaceToCamera = normalize(eyePos - surfacePos); //unit vector
    float cosAngle = max(0.0, dot(surfaceToCamera, reflectionVector));
    if(diffuse > 0.0)
        specular = pow(cosAngle, shininess);

    //add lighting to the fragment color (no attenuation for now)
    color.rgb *= light.ambient;
    color.rgb += diffuse + specular;
}
The image I get is completely incorrect. (light positioned on camera)
What am I doing wrong here?
My bet is on the color setting/mixing in the fragment shader...
You are setting the output color more than once.
If I remember correctly, on some gfx drivers that causes big problems; for example, everything after the line
color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0);//add a fixed base color (0.25), because its dark as hell
could be deleted by the driver...
You are adding colors and intensities instead of color * intensity.
But I could have overlooked something.
Try just normal/bump shading at first:
Ignore ambient, reflect, specular... and then, if it works, add the rest one by one. Always check the shaders' compilation logs.
Too lazy to further analyze your code, so here is how I do it:
The left side is a space ship object (similar to ZXS Elite's Viper) rendered with the fixed-function pipeline. The right side is the same (a bit different rotation of the object) with GLSL shaders in place and this normal/bump map:
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
// texture units:
// 0 - texture0 map 2D rgba
// 1 - texture1 map 2D rgba
// 2 - normal map 2D xyz
// 3 - specular map 2D i
// 4 - light map 2D rgb rgb
// 5 - environment/skybox cube map 3D rgb
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_l2g_dir;
uniform mat4x4 tm_g2s;
uniform mat4x4 tm_l2s_per;
uniform mat4x4 tm_per;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
layout(location=2) in vec2 txr;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;
out smooth vec3 pixel_pos;
out smooth vec4 pixel_col;
out smooth vec2 pixel_txr;
//out flat mat3 pixel_TBN;
out smooth mat3 pixel_TBN;
//------------------------------------------------------------------
void main(void)
{
    vec4 p;
    p.xyz = pos;
    p.w = 1.0;
    p = tm_l2g * p;
    pixel_pos = p.xyz;
    p = tm_g2s * p;
    gl_Position = p;
    pixel_col = col;
    pixel_txr = txr;
    p.xyz = tan.xyz; p.w = 1.0; pixel_TBN[0] = normalize((tm_l2g_dir * p).xyz);
    p.xyz = bin.xyz; p.w = 1.0; pixel_TBN[1] = normalize((tm_l2g_dir * p).xyz);
    p.xyz = nor.xyz; p.w = 1.0; pixel_TBN[2] = normalize((tm_l2g_dir * p).xyz);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec3 pixel_pos;
in smooth vec4 pixel_col;
in smooth vec2 pixel_txr;
//in flat mat3 pixel_TBN;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_texture0;
uniform sampler2D txr_texture1;
uniform sampler2D txr_normal;
uniform sampler2D txr_specular;
uniform sampler2D txr_light;
uniform samplerCube txr_skybox;
const int _lights=3;
uniform vec3 light_col0=vec3(0.1,0.1,0.1);
uniform vec3 light_dir[_lights]= // direction to local star in ellipsoid space
{
vec3(0.0,0.0,+1.0),
vec3(0.0,0.0,+1.0),
vec3(0.0,0.0,+1.0),
};
uniform vec3 light_col[_lights]= // local star color * visual intensity
{
vec3(1.0,0.0,0.0),
vec3(0.0,1.0,0.0),
vec3(0.0,0.0,1.0),
};
out layout(location=0) vec4 frag_col;
const vec4 v05=vec4(0.5,0.5,0.5,0.5);
const bool _blend=false;
const bool _reflect=true;
//------------------------------------------------------------------
void main(void)
{
    float a = 0.0, b, li;
    vec4 col, blend0, blend1, specul, skybox;
    vec3 normal;

    col = (texture2D(txr_normal, pixel_txr.st) - v05) * 2.0; // normal/bump mapping
    // normal = pixel_TBN * col.xyz;
    normal = pixel_TBN[0];
    blend0 = texture(txr_texture0, pixel_txr.st);
    blend1 = texture(txr_texture1, pixel_txr.st);
    specul = texture(txr_specular, pixel_txr.st);
    skybox = texture(txr_skybox, normal);
    if (_blend)
    {
        a = blend1.a;
        blend0 *= 1.0 - a;
        blend1 *= a;
        blend0 += blend1;
        blend0.a = a;
    }
    col.xyz = light_col0; col.a = 0.0; li = 0.0; // normal shading (also with bump mapping)
    for (int i = 0; i < _lights; i++)
    {
        b = dot(light_dir[i], normal.xyz);
        if (b < 0.0) b = 0.0;
        // b *= specul.r;
        li += b;
        col.xyz += light_col[i] * b;
    }
    col *= blend0;
    if (li <= 0.1)
    {
        blend0 = texture2D(txr_light, pixel_txr.st);
        blend0 *= 1.0 - a;
        blend0.a = a;
        col += blend0;
    }
    if (_reflect) col += skybox * specul.r;
    col *= pixel_col;
    if (col.r < 0.0) col.r = 0.0;
    if (col.g < 0.0) col.g = 0.0;
    if (col.b < 0.0) col.b = 0.0;
    a = 0.0;
    if (a < col.r) a = col.r;
    if (a < col.g) a = col.g;
    if (a < col.b) a = col.b;
    if (a > 1.0)
    {
        a = 1.0 / a;
        col.r *= a;
        col.g *= a;
        col.b *= a;
    }
    frag_col = col;
}
//------------------------------------------------------------------
This source code is a bit old and a mix of different things for a specific application, so extract only what you need from it. If you are confused by the variable names, then ask me in a comment...
tm_ stands for transform matrix
l2g stands for local coordinate system to global coordinate system transform
dir means that the transformation changes just the direction (offset is 0,0,0)
g2s stands for global to screen ...
per is a perspective transform ...
The GLSL compilation log
You have to obtain its contents programmatically after compiling your shaders (not your application!!!). I do it by calling glGetShaderInfoLog for every shader, and glGetProgramInfoLog for every program, I use; a sketch is below.
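A minimal sketch of fetching a shader's log (loader header and error handling are my own choices):

#include <GL/glew.h> // any loader that provides the GL function pointers works
#include <cstdio>
#include <vector>

// Print the info log of a compiled shader object.
void printShaderLog(GLuint shader)
{
    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        std::vector<char> log(len);
        glGetShaderInfoLog(shader, len, nullptr, log.data());
        std::printf("%s\n", log.data());
    }
}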
[Notes]
Some drivers optimize away "unused" variables. As you can see in the image, txr_texture1 is not found even though the fragment shader has it in the code, because blending is not used in this app, so the driver deleted it on its own...
Shader logs can show you a lot (syntax errors, warnings...).
There are a few GLSL IDEs that make shader development easy, but I prefer my own because I can use the target app's code in it directly. Mine looks like this:
Each txt window is a shader source (vertex, fragment, ...), the bottom right is the clipboard, the top left is the shader log after the last compilation and the bottom left is the preview. I managed to code it Borland-IDE style (with the keys and syntax highlighting too); the other IDEs I saw look similar (different colors of course :)). Anyway, if you want to play with shaders, download such an app or write one yourself; it will help a lot...
There could also be a problem with the TBN creation.
You should visually check whether the TBN vectors (tangent, binormal, normal) correspond to the object surface by drawing colored lines at each vertex position, just to be sure... something like this:
I will try to make your code work. Have you tried it with a moving camera?
I cannot see anywhere that you have transformed the TBNMatrix with the transform, view and model matrices. Did you try it with the original normals, i.e. vec3 normal = TBNMatrix[2];? (Fragment shader)
The following might help. In the Vertex shader you have:
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;
However, only these 3 matrices should be used here:
uniform mat4 PCM;
uniform mat4 MIT; //could be mat3
uniform mat4 ModelMatrix; //could be mat3
It is more efficient to calculate the product of those matrices on the CPU (and it yields the same result, because matrix multiplication is associative). This product, the PCM, can then be used to calculate the new position with one multiplication per vertex:
gl_Position = PCM * vec4(vertex, 1.0);
The MIT is the inverse transpose of the ModelMatrix; you have to calculate it on the CPU. It can be used to transform the normals:
vec4 tang = ModelMatrix * vec4(tangent, 0);
vec4 bita = ModelMatrix * vec4(bitangent, 0);
vec4 norm = MIT * vec4(normal, 0);
TBNMatrix = mat3(normalize(tang.xyz), normalize(bita.xyz), normalize(norm.xyz));
I am not sure what happens to the tangent and bitangent, but this way the normal will stay perpendicular to them. It is easy to prove. Here I use a ° b for the scalar product of the vectors a and b. Let n be some normal, let a be some vector on the surface (e.g. a {bi}tangent, or an edge of a triangle), and let A be any transformation. Then:
0 = a ° n = (A^(-1) A a) ° n = (A a) ° (A^(-T) n)
where I used the equality (A x) ° y = x ° (A^T y). Therefore if a is perpendicular to n, then A a is perpendicular to A^(-T) n, so we have to transform the normal with the matrix's inverse transpose.
However, the normal should have a length of 1, so after the transformations it should be normalized.
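Client-side, the PCM and MIT mentioned above might be computed like this (a sketch with GLM; the uniform upload is omitted):

#include <glm/glm.hpp>

// Per-draw products computed once on the CPU:
glm::mat4 PCM = ProjectionMatrix * CameraMatrix * ModelMatrix; // position transform
glm::mat4 MIT = glm::transpose(glm::inverse(ModelMatrix));     // normal transform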
You can also get a perpendicular normal by doing this:
vec3 normal = normalize(cross(tangent, bitangent));
where cross(a, b) is the function that calculates the cross product of a and b, which is always perpendicular to both a and b.
Sorry for my English :)

OpenGL Programmable Pipeline Point Lights

Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. The light source, however (judging by the light it casts upon the square plane below it), doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I have fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (D3D-style ones).)
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
    // Surface normal in eye coords
    vVaryingNormal = normalMatrix * vNormal;

    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
    vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;

    // Diffuse light
    // Vector to light source (do NOT normalize this!)
    vVaryingLightDir = tLightPos - vPosition3;

    if(useTexture) {
        vVaryingTexCoords = vTexCoord;
    }

    lightPos_ec = vec4(tLightPos, 1.0f);
    vertPos_ec = vec4(vPosition3, 1.0f);

    // Transform the light direction (for spotlights)
    vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
    spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
    spotDirection_ec = normalMatrix * spotDirection;

    // Projected vertex
    gl_Position = mvpMatrix * vVertex;

    // Fog factor
    if(fogEnabled) {
        float len = length(gl_Position);
        fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
        fogFactor = clamp(fogFactor, 0, 1);
    }
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
    float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
    if(den == 0.0f) {
        return 1.0f;
    }
    return min(1.0f, 1.0f / den);
}

float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
    float intensity = max(0.0f, dot(nNormal, nLightDir));
    float cos_outer_cone = lightTheta;
    float cos_inner_cone = lightPhi;
    float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;

    // If the light has a spot cone
    if(lightTheta > 0.0f) {
        float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
        // d3d style smooth edge
        float spotEffect = clamp((cos_cur - cos_outer_cone) /
                                 cos_inner_minus_outer, 0.0, 1.0);
        spotEffect = pow(spotEffect, lightExponent);
        intensity *= spotEffect;
    }

    float attenuation = att(length(lightPos_ec - vertPos_ec));
    intensity *= attenuation;
    return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
    vec3 ct, cf;
    vec4 texel;
    float at, af;

    if(useTexture) {
        texel = texture(colorMap, vVaryingTexCoords);
    } else {
        texel = vec4(1.0f);
    }

    ct = texel.rgb;
    at = texel.a;

    vec3 nNormal = normalize(vVaryingNormal);
    vec3 nLightDir = normalize(vVaryingLightDir);
    float intensity = computeIntensity(nNormal, nLightDir);

    cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
    af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;

    if(intensity > 0.0f) {
        // Specular light
        // - added *after* the texture color is multiplied so that
        //   we get a truly shiny result
        vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
        float spec = max(0.0, dot(nNormal, vReflection));
        float fSpec = pow(spec, shininess) * lightSpecular.a;
        cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
    }

    // Color modulation
    vFragColor = vec4(ct * cf, at * af);

    // Add the fog to the mix
    if(fogEnabled) {
        vFragColor = mix(vFragColor, fogColor, fogFactor);
    }
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though; the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is computed correctly. This means the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging; it seems the intensity of the light is not being computed correctly per fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps future readers, since as of today I have yet to see a GLSL tutorial that implements lights with absolutely no fixed functionality or secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat incorrectly when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
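To see why, take a small numeric example (mine, not from the original discussion): put vertices at (-1, 0, 0) and (10, 0, 0) with the light at (0, 1, 0). The true direction from the midpoint (4.5, 0, 0) to the light is along (-4.5, 1, 0), and interpolating the unnormalized per-vertex vectors (1, 1, 0) and (-10, 1, 0) halfway gives exactly (-4.5, 1, 0). Interpolating the pre-normalized vectors instead gives roughly (-0.14, 0.40, 0), which renormalizes to about (-0.34, 0.94, 0), far from the correct direction, because normalization discards the lengths that should weight the blend. The error grows with polygon size, which matches the bug only showing up on large quads.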
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here without dividing by it first. This will cause some problems.
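The fixed version, as in the corrected vertex shader above, divides first:

vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w; // perspective divide before dropping w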

Tangent Space Normal Mapping - shader sanity check

I'm getting some pretty freaky results from my tangent space normal mapping shader :). In the scene I show here, the teapot and checkered walls are being shaded with my ordinary Phong-Blinn shader (obviously the teapot's backface culling gives it a slightly ephemeral look and feel :-)). I've tried to add normal mapping to the sphere, with psychedelic results:
The light is coming from the right (just about visible as a black blob). The normal map I'm using on the sphere looks like this:
I'm using AssImp to process the input models, so it's calculating tangents and bi-normals for each vertex automatically for me.
The pixel and vertex shaders are below. I'm not too sure what's going wrong, but it wouldn't surprise me if the tangent basis matrix were somehow wrong. I assume I have to compute things in eye space and then transform the eye and light vectors into tangent space, and that this is the correct way to go about it. Note that the light position comes into the shader already in view space.
// Vertex Shader
#version 420
// Uniform Buffer Structures
// Camera.
layout (std140) uniform Camera
{
mat4 Camera_Projection;
mat4 Camera_View;
};
// Matrices per model.
layout (std140) uniform Model
{
mat4 Model_ViewModelSpace;
mat4 Model_ViewModelSpaceInverseTranspose;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position; // Already in view space.
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Streams (per vertex)
layout(location = 0) in vec3 attrib_Position;
layout(location = 1) in vec3 attrib_Normal;
layout(location = 2) in vec3 attrib_Tangent;
layout(location = 3) in vec3 attrib_BiNormal;
layout(location = 4) in vec2 attrib_Texture;
// Output streams (per vertex)
out vec3 attrib_Fragment_Normal;
out vec4 attrib_Fragment_Position;
out vec3 attrib_Fragment_Light;
out vec3 attrib_Fragment_Eye;
// Shared.
out vec2 varying_TextureCoord;
// Main
void main()
{
    // Compute normal.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

    // Compute position.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);

    // Generate matrix for tangent basis.
    mat3 tangentBasis = mat3(attrib_Tangent,
                             attrib_BiNormal,
                             attrib_Normal);

    // Light vector.
    attrib_Fragment_Light = tangentBasis * normalize(Light_Position - position.xyz);

    // Eye vector.
    attrib_Fragment_Eye = tangentBasis * normalize(-position.xyz);

    // Return position.
    gl_Position = Camera_Projection * position;
}
... and the pixel shader looks like this:
// Pixel Shader
#version 420
// Samplers
uniform sampler2D Map_Normal;
// Global Uniforms
// Material.
layout (std140) uniform Material
{
vec4 Material_Ambient_Colour;
vec4 Material_Diffuse_Colour;
vec4 Material_Specular_Colour;
vec4 Material_Emissive_Colour;
float Material_Shininess;
float Material_Strength;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position;
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Input streams (per vertex)
in vec3 attrib_Fragment_Normal;
in vec3 attrib_Fragment_Position;
in vec3 attrib_Fragment_Light;
in vec3 attrib_Fragment_Eye;
// Shared.
in vec2 varying_TextureCoord;
// Result
out vec4 Out_Colour;
// Main
void main(void)
{
    // Compute normals.
    vec3 N = normalize(texture(Map_Normal, varying_TextureCoord).xyz * 2.0 - 1.0);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 V = normalize(attrib_Fragment_Eye);
    vec3 R = normalize(-reflect(L, N));

    // Compute products.
    float NdotL = max(0.0, dot(N, L));
    float RdotV = max(0.0, dot(R, V));

    // Compute final colours.
    vec4 ambient = Light_Ambient_Colour * Material_Ambient_Colour;
    vec4 diffuse = Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour *
                    (pow(RdotV, Material_Shininess) * Material_Strength);

    // Final colour.
    Out_Colour = ambient + diffuse + specular;
}
Edit: 3D Studio Render of the scene (to show the UV's are OK on the sphere):
I think your shaders are okay, but your texture coordinates on the sphere are totally off. It's as if they got distorted towards the poles along the longitude.
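For comparison, a standard spherical parameterization derives the UVs from the unit position on the sphere; a sketch (I don't know how your sphere was actually unwrapped, so this is only a reference point):

// Equirectangular UVs for a point p on a unit sphere centered at the origin:
// u wraps around the equator, v runs from pole to pole.
vec2 sphereUV(vec3 p)
{
    const float PI = 3.14159265359;
    float u = 0.5 + atan(p.z, p.x) / (2.0 * PI);
    float v = 0.5 - asin(p.y) / PI;
    return vec2(u, v);
}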