I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL, and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application renders multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown in the code below.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that the lighting on the faces seems to change whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. This effect gets really nasty when there are many cells clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblocks) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed the znear to 1.0f, this effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that's too close to 0.0. As the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
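For reference, here's roughly what the projection setup looks like after the change (just a sketch; I'm assuming the projection matrix is built in resizeGL() with QMatrix4x4::perspective, and the FOV value is a placeholder):

void OpenGLWidget::resizeGL(int w, int h)
{
    m_proj.setToIdentity();
    // znear raised from 0.01f to 1.0f; zfar unchanged.
    // A larger znear dramatically improves depth-buffer precision.
    m_proj.perspective(45.0f, GLfloat(w) / h, 1.0f, 50000.0f);
}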
EDIT2: I tried debug drawing the normals as suggested in the comments. I quickly realized that I probably shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know whether the normal calculation is wrong, the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the GS from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to
transformedNormal = normalize(cross(A,B));
seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals don't change with the viewing angle anymore.
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16 or 24 bit buffer. It is a hardware implementation of a float normalized to a specific range, so there are very few bits for the mantissa/exponent in comparison to a standard 32-bit float.
If I oversimplify things and assume integer values instead of floats, then for a 16 bit buffer you get 2^16 values. If you have znear=0.1 and zfar=50000.0, you have only 65536 values over that full range. Now, as the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in bigger and bigger steps, causing accuracy problems wherever any two polygons are near each other.
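To see how nonlinear this is, here is a tiny standalone sketch (my own illustration, not from any project code) that evaluates the window-space depth of a standard perspective projection, depth = (1/znear - 1/z) / (1/znear - 1/zfar); with znear = 0.1, roughly 90% of the whole depth range is already used up within the first unit of distance:

#include <cstdio>

// Window-space depth produced by a standard perspective projection
// (depth range [0,1]); z is the eye-space distance from the camera.
double depthValue(double z, double znear, double zfar)
{
    return (1.0 / znear - 1.0 / z) / (1.0 / znear - 1.0 / zfar);
}

int main()
{
    const double znear = 0.1, zfar = 50000.0;
    const double samples[] = { 0.1, 1.0, 10.0, 100.0, 1000.0, 50000.0 };
    for (double z : samples)
        std::printf("z = %8.1f -> depth = %.6f\n", z, depthValue(z, znear, zfar));
    return 0;
}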
I empirically arrived at this rule for setting the planes in my views:
(zfar - znear) / desired_accuracy_step > 0.3*(2^n)
where n is the depth buffer bit-width and desired_accuracy_step is the resolution I need along the Z axis. Sometimes I have seen it written with the znear value in its place.
Related
I'm doing per-pixel lighting (Phong shading) on my terrain. I'm using a heightmap to generate the terrain height and then calculating the normal for each vertex. The normals are interpolated in the fragment shader and also normalized.
I am getting some weird dark lines near the edges of the triangles where there shouldn't be any.
http://imgur.com/L2kj4ca
I checked if the normals were correct using a geometry shader to draw the normals on the terrain and they seem to be correct.
http://imgur.com/FrJpdXI
There is no point using a normal map for the terrain; it would just give pretty much the same normals. The problem lies with the way the normals are interpolated across a triangle.
I am out of ideas on how to solve this. I couldn't find any working solution online.
Terrain Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 textureCoords;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosition;
const float density = 0.0035;
const float gradient = 5.0;
void main()
{
vec4 worldPosition = transformationMatrix * vec4(position, 1.0f);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;
pass_textureCoords = textureCoords;
surfaceNormal = (transformationMatrix * vec4(normal, 0.0f)).xyz;
toLightVector = lightPosition - worldPosition.xyz;
float distance = length(positionRelativeToCam.xyz);
visibility = exp(-pow((distance * density), gradient));
visibility = clamp(visibility, 0.0, 1.0);
}
Terrain Fragment Shader:
#version 330 core
in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;
in float visibility;
out vec4 colour;
uniform vec3 lightColour;
uniform vec3 fogColour;
uniform sampler2DArray blendMap;
uniform sampler2DArray diffuseMap;
void main()
{
vec4 blendMapColour = texture(blendMap, vec3(pass_textureCoords, 0));
float backTextureAmount = 1 - (blendMapColour.r + blendMapColour.g + blendMapColour.b);
vec2 tiledCoords = pass_textureCoords * 255.0;
vec4 backgroundTextureColour = texture(diffuseMap, vec3(tiledCoords, 0)) * backTextureAmount;
vec4 rTextureColour = texture(diffuseMap, vec3(tiledCoords, 1)) * blendMapColour.r;
vec4 gTextureColour = texture(diffuseMap, vec3(tiledCoords, 2)) * blendMapColour.g;
vec4 bTextureColour = texture(diffuseMap, vec3(tiledCoords, 3)) * blendMapColour.b;
vec4 diffuseColour = backgroundTextureColour + rTextureColour + gTextureColour + bTextureColour;
vec3 unitSurfaceNormal = normalize(surfaceNormal);
vec3 unitToLightVector = normalize(toLightVector);
float brightness = dot(unitSurfaceNormal, unitToLightVector);
float ambient = 0.2;
brightness = max(brightness, ambient);
vec3 diffuse = brightness * lightColour;
colour = vec4(diffuse, 1.0) * diffuseColour;
colour = mix(vec4(fogColour, 1.0), colour, visibility);
}
This can be one of two issues:
1. Incorrect normals:
There are different types of shading: flat shading, Gouraud shading and Phong shading (not to be confused with the Phong specular model). Example:
You usually want Phong shading. To do that, OpenGL makes your life easier and interpolates the normals for you between the vertices of each triangle, so at each pixel you have the correct normal for that point; but you still need to feed it proper per-vertex normal values, which are the average of the normals of all the triangles attached to that vertex. So in the function that creates the vertices, normals and UVs, you need to compute the normal at each vertex by averaging the face normals of every triangle attached to it, as in the sketch below. (illustration)
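For example, something like this (a rough sketch, using GLM and a hypothetical indexed-triangle layout, not your actual data structures):

#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

// Accumulate the face normal of every triangle into its three vertices,
// then normalize the sums to get smooth per-vertex normals.
void computeSmoothNormals(const std::vector<glm::vec3>& positions,
                          const std::vector<unsigned>& indices,
                          std::vector<glm::vec3>& normals)
{
    normals.assign(positions.size(), glm::vec3(0.0f));
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        glm::vec3 faceNormal = glm::cross(positions[b] - positions[a],
                                          positions[c] - positions[a]);
        normals[a] += faceNormal;   // un-normalized: larger triangles weigh more
        normals[b] += faceNormal;
        normals[c] += faceNormal;
    }
    for (glm::vec3& n : normals)
        n = glm::normalize(n);
}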
2. Subdivision problem:
The other possible issue is that your terrain is not subdivided enough, or your heightmap resolution is too low, resulting in this kind of glitch because of the difference in height between two vertices of one triangle (i.e. between two pixels of your heightmap).
If you can provide some of your code and shaders, and maybe even the heightmap, we can pin down exactly what is happening in your case.
This is old, but I suspect you're not transforming your normal using the transposed inverse of the upper 3x3 part of your modelview matrix. See this. Not sure what's in "transformationMatrix", but if you're using it to transform the vertex and the normal something is probably fishy...
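For example, with GLM (just a sketch; plug in whatever math library you're actually using):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

// Normal matrix: transpose of the inverse of the upper-left 3x3 of the
// modelview matrix. For pure rotation + translation the plain 3x3 works too,
// but any non-uniform scale requires the inverse-transpose.
glm::mat3 makeNormalMatrix(const glm::mat4& modelView)
{
    return glm::inverseTranspose(glm::mat3(modelView));
    // equivalent to: glm::transpose(glm::inverse(glm::mat3(modelView)))
}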
I have a simple application that draws a sphere with a single directional light. I'm creating the sphere by starting with an octahedron and subdividing each triangle into 4 smaller triangles.
With just diffuse lighting, the sphere looks very smooth. However, when I add specular highlights, the edges of the triangles show up fairly strongly. Here are some examples:
Diffuse only:
Diffuse and Specular:
I believe that the normals are being interpolated correctly. Looking at just the normals, I get this:
In fact, if I switch to a flat shading, where the normals are per-polygon instead of per-vertex, I get this:
In my vertex shader, I'm multiplying the model's normals by the transpose inverse modelview matrix:
#version 330 core
layout (location = 0) in vec4 vPosition;
layout (location = 1) in vec3 vNormal;
layout (location = 2) in vec2 vTexCoord;
out vec3 fNormal;
out vec2 fTexCoord;
uniform mat4 transInvModelView;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;
void main()
{
fNormal = vec3(transInvModelView * vec4(vNormal, 0.0));
fTexCoord = vTexCoord;
gl_Position = ProjectionMatrix * ModelViewMatrix * vPosition;
}
and in the fragment shader, I'm calculating the specular highlights as follows:
#version 330 core
in vec3 fNormal;
in vec2 fTexCoord;
out vec4 color;
uniform sampler2D tex;
uniform vec4 lightColor; // RGB, assumes multiplied by light intensity
uniform vec3 lightDirection; // normalized, assumes directional light, lambertian lighting
uniform float specularIntensity;
uniform float specularShininess;
uniform vec3 halfVector; // Halfway between eye and light
uniform vec4 objectColor;
void main()
{
vec4 texColor = objectColor;
float specular = max(dot(halfVector, fNormal), 0.0);
float diffuse = max(dot(lightDirection, fNormal), 0.0);
if (diffuse == 0.0)
{
specular = 0.0;
}
else
{
specular = pow(specular, specularShininess) * specularIntensity;
}
color = texColor * diffuse * lightColor + min(specular * lightColor, vec4(1.0));
}
I was a little confused about how to calculate the halfVector. I'm doing it on the CPU and passing it in as a uniform. It's calculated like this:
vec3 lightDirection(1.0, 1.0, 1.0);
lightDirection = normalize(lightDirection);
vec3 eyeDirection(0.0, 0.0, 1.0);
eyeDirection = normalize(eyeDirection);
vec3 halfVector = lightDirection + eyeDirection;
halfVector = normalize(halfVector);
glUniform3fv(halfVectorLoc, 1, &halfVector [ 0 ]);
Is that the correct formulation for the halfVector? Or does it need to be done in the shaders as well?
Interpolating normals into a face can (and almost always will) result in a shortening of the normal. That's why the highlight is darker in the center of a face and brighter at corners and edges. If you do this, just re-normalize the normal in the fragment shader:
fNormal = normalize(fNormal);
Btw, you cannot precompute the half vector as it is view dependent (that's the whole point of specular lighting). In your current scenario, the highlight will not change when you just move the camera (keeping the direction).
One way to do this in the shader is to pass an additional uniform for the eye position and then calculate the view direction as eyePosition - vertexPosition. Then continue as you did on the CPU.
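For example (a sketch of the fragment shader; fWorldPos and eyePosition are hypothetical additions you'd have to pass in, the other names match your shader):

#version 330 core
in vec3 fNormal;
in vec3 fWorldPos;              // hypothetical: vertex position passed from the VS
out vec4 color;

uniform vec3 eyePosition;       // camera position, in the same space as fWorldPos
uniform vec3 lightDirection;    // normalized, points towards the light
uniform vec4 lightColor;
uniform vec4 objectColor;
uniform float specularIntensity;
uniform float specularShininess;

void main()
{
    vec3 N = normalize(fNormal);                 // re-normalize after interpolation
    vec3 V = normalize(eyePosition - fWorldPos); // view direction, per fragment
    vec3 H = normalize(lightDirection + V);      // half vector, per fragment
    float diffuse  = max(dot(lightDirection, N), 0.0);
    float specular = diffuse > 0.0
                   ? pow(max(dot(N, H), 0.0), specularShininess) * specularIntensity
                   : 0.0;
    color = objectColor * diffuse * lightColor + min(specular * lightColor, vec4(1.0));
}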
I'm trying to implement Phong shading in GLSL but am having some issues with the specular component.
The green light is the specular component. The light (a point light) travels in a circle above the plane. The specular highlight always points inward, toward the Y axis about which the light rotates, and fans out toward the diffuse reflection, as seen in the image. It doesn't appear to be affected at all by the positioning of the camera, and I'm not sure where I'm going wrong.
Vertex shader code:
#version 330 core
/*
* Phong Shading with with Point Light (Quadratic Attenutation)
*/
//Input vertex data
layout(location = 0) in vec3 vertexPosition_modelSpace;
layout(location = 1) in vec2 vertexUVs;
layout(location = 2) in vec3 vertexNormal_modelSpace;
//Output Data; will be interpolated for each fragment
out vec2 uvCoords;
out vec3 vertexPosition_cameraSpace;
out vec3 vertexNormal_cameraSpace;
//Uniforms
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalTransformMatrix;
void main()
{
vec3 normal = normalize(vertexNormal_modelSpace);
//Set vertices in clip space
gl_Position = mvpMatrix * vec4(vertexPosition_modelSpace, 1);
//Set output for UVs
uvCoords = vertexUVs;
//Convert vertex and normal into eye space
vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace;
vertexNormal_cameraSpace = normalize(normalTransformMatrix * normal);
}
Fragment Shader Code:
#version 330 core
in vec2 uvCoords;
in vec3 vertexPosition_cameraSpace;
in vec3 vertexNormal_cameraSpace;
//out
out vec4 fragColor;
//uniforms
uniform sampler2D diffuseTex;
uniform vec3 lightPosition_cameraSpace;
void main()
{
const float materialAmbient = 0.025; //a touch of ambient
const float materialDiffuse = 0.65;
const float materialSpec = 0.35;
const float lightPower = 2.0;
const float specExponent = 2;
//--------------Set Colors and determine vectors needed for shading-----------------
//reflection colors- NOTE- diffuse and ambient reflections will use the texture color
const vec3 colorSpec = vec3(0,1,0); //Green spec color
vec3 diffuseColor = texture2D(diffuseTex, uvCoords).rgb; //Get color from the texture at fragment
const vec3 lightColor = vec3(1,1,1); //White light
//Re-normalize normal vectors : after interpolation they make not be unit length any longer
vec3 normVertexNormal_cameraSpace = normalize(vertexNormal_cameraSpace);
//Set camera vec
vec3 viewVec_cameraSpace = normalize(-vertexPosition_cameraSpace); //Since its view space, camera at origin
//Set light vec
vec3 lightVec_cameraSpace = normalize(lightPosition_cameraSpace - vertexPosition_cameraSpace);
//Set reflect vect
vec3 reflectVec_cameraSpace = normalize(reflect(-lightVec_cameraSpace, normVertexNormal_cameraSpace)); //reflect function requires incident vec; from light to vertex
//----------------Find intensity of each component---------------------
//Determine Light Intensity
float distance = abs(length(lightPosition_cameraSpace - vertexPosition_cameraSpace));
float lightAttenuation = 1.0/( (distance > 0) ? (distance * distance) : 1 ); //Quadratic
vec3 lightIntensity = lightPower * lightAttenuation * lightColor;
//Determine Ambient Component
vec3 ambientComp = materialAmbient * diffuseColor * lightIntensity;
//Determine Diffuse Component
float lightDotNormal = max( dot(lightVec_cameraSpace, normVertexNormal_cameraSpace), 0.0 );
vec3 diffuseComp = materialDiffuse * diffuseColor * lightDotNormal * lightIntensity;
vec3 specComp = vec3(0,0,0);
//Determine Spec Component
if(lightDotNormal > 0.0)
{
float reflectDotView = max( dot(reflectVec_cameraSpace, viewVec_cameraSpace), 0.0 );
specComp = materialSpec * colorSpec * pow(reflectDotView, specExponent) * lightIntensity;
}
//Add Ambient + Diffuse + Spec
vec3 phongFragRGB = ambientComp +
diffuseComp +
specComp;
//----------------------Putting it together-----------------------
//Out Frag color
fragColor = vec4( phongFragRGB, 1);
}
Just noting that the normalTransformMatrix seen in the Vertex shader is the inverse-transpose of the model-view matrix.
I am setting a vector from the vertex position to the light, to the camera, and the reflect vector, all in camera space. For the diffuse calculation I am taking the dot product of the light vector and the normal vector, and for the specular component I am taking the dot product of the reflection vector and the view vector. Perhaps there is some fundamental misunderstanding that I have with the algorithm?
I thought at first that the problem could be that I wasn't normalizing the normals entering the fragment shader after interpolation, but adding a line to normalize didn't affect the image. I'm not sure where to look.
I know that there are a lot of Phong shading questions on the site, but everyone seems to have a problem that is a bit different. If anyone can see where I am going wrong, please let me know. Any help is appreciated.
EDIT: Okay, it's working now! Just as jozxyqk suggested below, I needed to do a mat4*vec4 operation for my vertex position or lose the translation information. When I first made the change I was getting strange results, until I realized that I was making the same mistake in my OpenGL code for lightPosition_cameraSpace before I passed it to the shader (the mistake being that I was casting the view matrix down to a mat3 for the calculation instead of setting the light position vector as a vec4). Once I edited those lines the shader worked properly. Thanks for the help, jozxyqk!
I can see two parts which don't look right.
1. "vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace" should be a mat4/vec4(x,y,z,1) multiply, otherwise it ignores the translation part of the modelview matrix.
2. distance uses the light position relative to the camera and not the vertex. Use lightVec_cameraSpace instead. (edit: missed the duplicated calculation)
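For the first point, the corrected line would be something like this (same names as in your vertex shader):

vertexPosition_cameraSpace = (mvMatrix * vec4(vertexPosition_modelSpace, 1.0)).xyz;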
I am attempting to get some simple diffuse lighting to work in GLSL. I have a cube that is being passed in as an array of points, and I'm calculating the face normals inside my geometry shader (because I intend to deform the mesh at run-time, so I'll need the new face normals).
My problem is that the diffuse value changes as I move the camera around the world, so the shading on a face of my cube changes as the camera moves. I have not been able to figure out what I am missing that is causing this. My shaders are as follows:
Vertex:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
}
Geometry:
#version 330
precision highp float;
layout (triangles) in;
layout (triangle_strip) out;
layout (max_vertices = 3) out;
out vec3 normal;
uniform mat4 MVP;
uniform mat4 MV;
void main(void)
{
for (int i = 0; i < gl_in.length(); i++) {
gl_Position = gl_in[i].gl_Position;
vec3 U = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 V = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
normal.x = (U.y * V.z) - (U.z * V.y);
normal.y = (U.z * V.x) - (U.x * V.z);
normal.z = (U.x * V.y) - (U.y * V.x);
normal = normalize(transpose(inverse(MV)) * vec4(normal,1)).xyz;
EmitVertex();
}
EndPrimitive();
}
Fragment:
#version 330 core
in vec3 normal;
out vec4 out_color;
const vec3 lightDir = vec3(-1,-1,1);
uniform mat4 MV;
void main()
{
vec3 nlightDir = normalize(vec4(lightDir,1)).xyz;
float diffuse = clamp(dot(nlightDir,normal),0,1);
out_color = vec4(diffuse*vec3(0,1,0),1.0);
}
Thanks
There are a lot of wrong things in your code. Most of your problems come from completely forgetting what space various vectors are in. You cannot meaningfully do computations between vectors that are in different spaces.
normal = normalize(transpose(inverse(MV)) * vec4(normal,1)).xyz;
By using 1 as the fourth component of the normal, you completely break this computation. It causes the normal to be translated, which is not appropriate.
Furthermore, your normal value is computed based on gl_Position. And gl_Position is in clip space, not model space. So all you get is the clip-space normal, which is not what you need, want, or can even use.
If you want to compute the camera-space normal, then compute it from camera-space positions. Or compute the model-space normal from model-space positions and use the model/view matrix to transform it to camera-space.
Also, do the inverse/transpose on the CPU and pass it to the shader. Oh, and take all of the normal computations out of the loop; you only need to do it once per triangle (store it in a local variable and copy it to the output for each vertex). And stop doing the cross-product manually; use the built-in GLSL cross function.
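Putting that together, the geometry shader could look roughly like this (a sketch, not drop-in code: it assumes the vertex shader also passes the untransformed model-space position through an added output, and that the 3x3 inverse-transpose of MV is computed on the CPU and passed in as normalMatrix):

#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in vec3 position_modelspace[];   // hypothetical: passed through from the vertex shader
out vec3 normal;                 // camera-space normal

uniform mat3 normalMatrix;       // inverse-transpose of MV, computed on the CPU

void main()
{
    // One normal per triangle, computed from model-space positions
    vec3 faceNormal = normalize(cross(position_modelspace[1] - position_modelspace[0],
                                      position_modelspace[2] - position_modelspace[0]));
    vec3 n = normalize(normalMatrix * faceNormal);   // bring it into camera space

    for (int i = 0; i < 3; i++) {
        gl_Position = gl_in[i].gl_Position;
        normal = n;
        EmitVertex();
    }
    EndPrimitive();
}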
vec3 nlightDir = normalize(vec4(lightDir,1)).xyz;
This makes no more sense than using 1 as the fourth component in your transform before. Just normalize lightDir directly.
Equally importantly, if you're doing lighting in camera space, then the light direction needs to change with the camera in order for it to remain in the same apparent direction in the world. So you're going to have to take your world-space light position and transform it to camera space (typically on the CPU, passed in as a uniform).
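On the CPU side, that transform could look something like this (GLM used purely for illustration; the uniform location name is an assumption):

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Transform a world-space light *direction* into camera space.
// w = 0 so only the rotation part of the view matrix is applied.
glm::vec3 lightDirToCameraSpace(const glm::vec3& lightDirWorld, const glm::mat4& viewMatrix)
{
    return glm::normalize(glm::vec3(viewMatrix * glm::vec4(lightDirWorld, 0.0f)));
}

// Then, once per frame, something like:
//   glUniform3fv(lightDirLoc, 1, glm::value_ptr(lightDirToCameraSpace(lightDir, view)));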
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a position glitch: I'm manually moving the light, altering its position along the X axis. The light source however (judging by the light it casts upon the square plane below it) doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d style ones))
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
// Surface normal in eye coords
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
// Diffuse light
// Vector to light source (do NOT normalize this!)
vVaryingLightDir = tLightPos - vPosition3;
if(useTexture) {
vVaryingTexCoords = vTexCoord;
}
lightPos_ec = vec4(tLightPos, 1.0f);
vertPos_ec = vec4(vPosition3, 1.0f);
// Transform the light direction (for spotlights)
vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
spotDirection_ec = normalMatrix * spotDirection;
// Projected vertex
gl_Position = mvpMatrix * vVertex;
// Fog factor
if(fogEnabled) {
float len = length(gl_Position);
fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
fogFactor = clamp(fogFactor, 0, 1);
}
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
if(den == 0.0f) {
return 1.0f;
}
return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
float intensity = max(0.0f, dot(nNormal, nLightDir));
float cos_outer_cone = lightTheta;
float cos_inner_cone = lightPhi;
float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
// If we are a point light
if(lightTheta > 0.0f) {
float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
// d3d style smooth edge
float spotEffect = clamp((cos_cur - cos_outer_cone) /
cos_inner_minus_outer, 0.0, 1.0);
spotEffect = pow(spotEffect, lightExponent);
intensity *= spotEffect;
}
float attenuation = att( length(lightPos_ec - vertPos_ec) );
intensity *= attenuation;
return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
vec3 ct, cf;
vec4 texel;
float at, af;
if(useTexture) {
texel = texture2D(colorMap, vVaryingTexCoords);
} else {
texel = vec4(1.0f);
}
ct = texel.rgb;
at = texel.a;
vec3 nNormal = normalize(vVaryingNormal);
vec3 nLightDir = normalize(vVaryingLightDir);
float intensity = computeIntensity(nNormal, nLightDir);
cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
if(intensity > 0.0f) {
// Specular light
// - added *after* the texture color is multiplied so that
// we get a truly shiny result
vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
float spec = max(0.0, dot(nNormal, vReflection));
float fSpec = pow(spec, shininess) * lightSpecular.a;
cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
}
// Color modulation
vFragColor = vec4(ct * cf, at * af);
// Add the fog to the mix
if(fogEnabled) {
vFragColor = mix(vFragColor, fogColor, fogFactor);
}
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future user reading this, since as of today I have yet to see any GLSL tutorial that implements lights with absolutely no fixed functionality or secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat wrong when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here, without dividing through by it first. This will cause some problems.
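That is, divide by w before dropping it, along these lines (same names as in the shader; this is what the corrected vertex shader above now does):

vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos  = tLightPos4.xyz / tLightPos4.w;   // divide by w instead of discarding it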