I've been working through the OpenGL SuperBible (5th Edition) point light examples.
I found that they leave out the constant, linear and quadratic attenuation values that were integrated into the old fixed-function lighting model, so I wrote a point light shader with attenuation values based on this Ogre guide.
The results have been completely bizarre. What can be done to get sensible light attenuation in modern GLSL? Is there a GLSL table for attenuation constants?
Pics
Attenuation at distance 99 (sphere is black, light is blue)
Attenuation at distance 50 (sphere is black, light is blue)
Attenuation at distance 16 (sphere is black, light is blue)
Attenuation at distance 2 (sphere is black, light is blue)
Sample vertex and fragment programs:
Vertex program:
//point light per pixel vertex program
#version 130
// Incoming per vertex... position and normal
in vec4 vVertex;
in vec3 vNormal;
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
// Color to fragment program
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
out float dist;
out float constantAttenuation;
out float linearAttenuation;
out float quadraticAttenuation;
void main(void)
{
// Get surface normal in eye coordinates
vVaryingNormal = normalMatrix * vNormal;
// Get vertex position in eye coordinates
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
//get distance to light source
dist=length(vLightPosition-vPosition3);
//write proper attenuation values
if (dist<7.0){
constantAttenuation=1.0;
linearAttenuation=0.7;
quadraticAttenuation=1.8;
}
else if (dist<13.0){
constantAttenuation=1.0;
linearAttenuation=0.35;
quadraticAttenuation=0.44;
}
else if (dist<20.0){
constantAttenuation=1.0;
linearAttenuation=0.22;
quadraticAttenuation=0.20;
}
if (dist<32.0){
constantAttenuation=1.0;
linearAttenuation=0.14;
quadraticAttenuation=0.07;
}
if (dist<50.0){
constantAttenuation=1.0;
linearAttenuation=0.09;
quadraticAttenuation=0.32;
}
if (dist<65.0){
constantAttenuation=1.0;
linearAttenuation=0.07;
quadraticAttenuation=0.017;
}
if (dist<100.0){
constantAttenuation=1.0;
linearAttenuation=0.045;
quadraticAttenuation=0.0075;
}
// Get vector to light source
vVaryingLightDir = normalize(vLightPosition - vPosition3);
// Don't forget to transform the geometry!
gl_Position = mvpMatrix * vVertex;
}
Fragment program:
//point light per pixel fragment program
#version 130
out vec4 vFragColor;
uniform vec4 ambientColor;
uniform vec4 diffuseColor;
uniform vec4 specularColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
in float dist;
in float constantAttenuation;
in float linearAttenuation;
in float quadraticAttenuation;
void main(void){
float att;
att = 1.0 / constantAttenuation + linearAttenuation*dist +quadraticAttenuation*dist*dist;
// Dot product gives us diffuse intensity
float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir)));
// Multiply intensity by diffuse color, force alpha to 1.0
vFragColor = att*(diff * diffuseColor +ambientColor); // attenuation affects the diffuse component
// Specular Light
vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal)));
float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection));
if(diff != 0) {
float fSpec = pow(spec, 128.0);
vFragColor.rgb += (att*vec3 (fSpec, fSpec, fSpec)); // attenuation affects the specular component
}
}
Your attenuation formula is completely incorrect:
att = 1.0 / constantAttenuation + linearAttenuation*dist +quadraticAttenuation*dist*dist;
It should instead be:
att = constantAttenuation / ((1.0 + linearAttenuation*dist) * (1.0 + quadraticAttenuation*dist*dist));
That's why the sphere gets lighter as the distance increases. Try the proper formula and post the result.
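As a minimal sketch (using the variable names from the posted fragment shader), note how the parentheses change what the division applies to:
// Original: only constantAttenuation is divided; the distance terms are *added*,
// so att grows with distance and the sphere gets lighter further away.
// att = 1.0 / constantAttenuation + linearAttenuation*dist + quadraticAttenuation*dist*dist;
// Suggested: the whole denominator is parenthesized, so att falls off with distance.
float att = constantAttenuation /
            ((1.0 + linearAttenuation * dist) *
             (1.0 + quadraticAttenuation * dist * dist));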
I ended up using just these attenuation values, and the thing works acceptably:
constantAttenuation=1.0;
linearAttenuation=0.22;
quadraticAttenuation=0.20;
There are also a couple of parentheses missing in the attenuation division.
Related
Managed to get Shadow Mapping to work in my OpenGL rendering engine, but it is producing some weird artifacts that I think are "shadow acne". However, I am using shadow2DProj to get the shadow value from the shadow depth map, which for me has proven to be the only way to get shadows to show up at all. Therefore, looking around at various tutorials at learnopengl, opengl-tutorials and others has yielded no help. I would like some advice on how I can mitigate this problem.
Here is the fragment shader I use to apply the shadow map:
#version 330 core
out vec4 FragColor;
struct Light {
vec3 position;
vec3 ambient;
vec3 diffuse;
vec3 specular;
vec3 attenuation;
};
in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;
in vec4 ShadowCoords;
uniform vec3 viewPos;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
uniform sampler2DShadow shadowMap;
uniform Light lights[4];
uniform float shininess;
float calculateShadow(vec3 lightDir)
{
float shadowValue = shadow2DProj(shadowMap, ShadowCoords).r;
float shadow = shadowValue;
return shadow;
}
vec3 calculateAmbience(Light light, vec3 textureMap)
{
return light.ambient * textureMap;
}
void main()
{
vec4 tex = texture(diffuseMap, TexCoords);
if (tex.a < 0.5)
{
discard;
}
vec3 ambient = vec3(0.0);
vec3 diffuse = vec3(0.0);
vec3 specular = vec3(0.0);
vec3 norm = normalize(Normal);
vec3 viewDir = normalize(viewPos - FragPos);
for (int i = 0; i < 4; i++)
{
ambient = ambient + lights[i].ambient * tex.rgb;
vec3 lightDir = normalize(lights[i].position - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
diffuse = diffuse + (lights[i].diffuse * diff * tex.rgb);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
specular = specular + (lights[i].specular * spec * tex.rgb);
float dist = length(lights[i].position - FragPos);
float attenuation = lights[i].attenuation.x + (lights[i].attenuation.y * dist) + (lights[i].attenuation.z * (dist * dist));
if (attenuation > 0.0)
{
ambient *= 1.0 / attenuation;
diffuse *= 1.0 / attenuation;
specular *= 1.0 / attenuation;
}
}
float shadow = calculateShadow(normalize(lights[0].position - FragPos));
vec3 result = (ambient + (shadow) * (diffuse + specular));
FragColor = vec4(result, 1.0);
}
This is the result I get. Notice the weird stripes on top of the cube:
Reading the description about shadow acne, this seems to be the same phenomenon (source: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping).
According to that article, I need to check if the ShadowCoord depth value, minus a bias constant, is lower than the shadow value read from the shadow map. If so, we have shadow. Now... here comes the problem. Since I am using shadow2DProj and not texture() to get my shadow value from the shadow map (through some intricate sorcery no doubt), I am unable to "port" that article's code into my shader and get it to work. Here is what I have tried:
float calculateShadow(vec3 lightDir)
{
float closestDepth = shadow2DProj(shadowMap, ShadowCoords).r;
float bias = 0.005;
float currentDepth = ShadowCoords.z;
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
But that produces no shadows at all, since the "shadow" float is always assigned 1.0 from the depth & bias check. I must admit that I do not fully understand what I am getting from using shadow2DProj(...).r as compared to texture(...).r, but it sure is something completely different.
This question rests on a misunderstanding of what shadow2DProj does: the function does not return a depth value but a depth comparison result. Therefore, apply the bias before calling it.
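To make that concrete, here is a rough sketch of what a projective lookup through a sampler2DShadow computes, assuming the depth texture's compare mode is GL_LEQUAL and ignoring filtering; plainDepthMap is a hypothetical regular sampler2D bound to the same depth texture:
// Rough equivalent of shadow2DProj(shadowMap, ShadowCoords).r
uniform sampler2D plainDepthMap; // hypothetical non-comparison view of the same depth texture
float shadowCompareSketch(vec4 shadowCoords)
{
    vec3 proj = shadowCoords.xyz / shadowCoords.w;        // projective divide
    float storedDepth = texture(plainDepthMap, proj.xy).r;
    return proj.z <= storedDepth ? 1.0 : 0.0;             // 1.0 = lit, 0.0 = shadowed
}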
Solution 1
Apply the bias prior to running the comparison. ShadowCoords.z is your currentDepth value.
float calculateShadow(vec3 lightDir)
{
const float bias = 0.005;
float shadow = shadow2DProj(shadowMap, vec4(ShadowCoords.xy, ShadowCoords.z - bias * ShadowCoords.w, ShadowCoords.w)).r; // scale the bias by w so it survives the projective divide
return shadow;
}
Solution 2
Apply the bias while performing the light-space depth pass.
glPolygonOffset(float factor, float units)
This function offsets depth values by factor * DZ + r * units, where DZ is the depth slope of the polygon and r is the smallest resolvable depth difference. Setting these to positive values pushes polygons deeper into the scene, which acts like our bias.
During initialization:
glEnable(GL_POLYGON_OFFSET_FILL);
During Light Depth Pass:
// These parameters will need to be tweaked for your scene
// to prevent acne and mitigate peter panning
glPolygonOffset(1.0, 1.0);
// draw potential shadow casters
// return to default settings (no offset)
glPolygonOffset(0, 0);
Shader Code:
// we don't even need the light direction for slope bias
float calculateShadow()
{
float shadow = shadow2DProj(shadowMap, ShadowCoords).r;
return shadow;
}
I'm facing a very strange problem that seems to originate from a simple multiplication in the fragment shader.
I'm trying to calculate shadows using a framebuffer that renders only the depths from the light's perspective, which is a common technique that is easier for beginners to implement.
Fragment Shader:
#version 330 core
uniform sampler2D parquet;
uniform samplerCube depthMaps[15];
in vec2 TexCoords;
out vec4 color;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos[15];
uniform vec3 lightColor[15];
uniform float intensity[15];
uniform float far_plane;
uniform vec3 viewPos;
float ShadowCalculation(vec3 fragPos, vec3 lightPost, samplerCube depthMaps)
{
vec3 fragToLight = fragPos - lightPost;
float closestDepth = texture(depthMaps, fragToLight).r;
// original depth value
closestDepth *= far_plane;
float currentDepth = length(fragToLight);
float bias = 0.05;
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
void main()
{
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos[0] - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor[0];
float _distance = length(vec3(FragPos - lightPos[0]));
float attenuation = 1.0 / pow(_distance +1, 2);
if(attenuation > 1.0) attenuation = 1.0;
float intens = intensity[0];
if(intensity[0] > 150) intens = 150.0f;
vec3 resulta = (diffuse * attenuation) * intens;
//texture color
vec3 tCol = vec3(texture(parquet, TexCoords));
//gamma correction
tCol.rgb = pow(tCol.rgb, vec3(0.45));
vec3 colors = resulta * tCol * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
color = vec4(colors, 1.0f);
}
The last multiplication inside main() behaves strangely. Multiplying the result of the diffuse light by the texture color renders nicely (so we have no shadows, just diffuse lighting):
//works
vec3 colors = resulta * tCol;
Multiplying the diffuse light by the shadow result also renders nicely (now we have no textures):
//works
vec3 colors = resulta * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
Doing it all together renders just a black screen. I've tried all sorts of things in the fragment shader, but none worked.
Lastly, here is the fragment shader used to render the cubemap:
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main()
{
float lightDistance = length(FragPos.xyz - lightPos);
// map to [0;1] range by dividing by far_plane
lightDistance = lightDistance / far_plane;
gl_FragDepth = lightDistance;
}
Can you spot any logical error? I'm using uniform arrays since I'll later need multiple lights at once.
After a while trying to visually debug the shader's output, I finally found the error: I was binding the depth map's cubemap texture incorrectly, and this caused the strange behaviour I was seeing in the last multiplication.
Lesson learned: it's not always the fragment shader's fault.
I have implemented Blinn-Phong shading in my fragment shader to calculate the lighting of each fragment with multiple lights. The computation seems to be all good except for one part: my directional light is computed correctly, but my point light is not.
Pictures say more than words, so:
Wrong light positions:
The shader works like this:
I update the light array with their positions in world space. Then, for each object, I upload its material information and draw it using this fragment shader to get the correct color. The vertex shader's only task (regarding lighting) is to pass on normals for the fragment. All my computations are done in world space.
I already checked the position of each light and they are correct, which is why I came here. I think I made a mistake when computing the cosine of the incident angle, but I don't know where.
Here is the code of my fragment shader:
Edit: I corrected some mistakes with the kind help of Nico Schertler; here is the new code with both the fragment and vertex shader.
Edit 2: After another round of checking, it was in fact not a misplaced light position but an error in the computation of the dot product between the normal and the light direction.
Fragment Shader
#version 330 compatibility
struct light {
vec4 position; // position.w indicates if it's a direction or point lighting
vec3 diffuse; // diffuse color of the light
vec3 specular; // specular color of the light
float attenuation; // attenuation of the light
};
struct material {
vec3 diffuse;
vec3 specular;
vec3 ambient;
float shininess;
};
uniform material objMaterial;
uniform light lights[9];
uniform int nbLight;
uniform mat4 viewMatrix; // Matrix to view coordinate
uniform mat4 viewInvMatrix; // invert view matrix to find the position of the viewer
in vec3 worldNormal; // normal in worldspace
in vec3 position; // position in worldspace
vec3 ambiant = vec3(0.2,0.2,0.2);
void main()
{
int i;
vec3 viewDirection = normalize((viewInvMatrix * vec4(0.0, 0.0, 0.0, 1.0) - vec4(position, 1.)).xyz);
vec3 lightDirection, lightPositon;
vec3 normalizeWorldNormal = normalize(worldNormal);
float attenuation;
vec3 totalLight = ambiant * objMaterial.ambient;
float cosAngIncidence;
float blinnTerm;
for(i=0; i<9; i=i+1){
if(i < nbLight){
lightPositon = lights[i].position.xyz;
if (0.0 == lights[i].position.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(lightPositon);
}
else // point light
{
vec3 positionToLightSource = lightPositon - position;
lightDirection = normalize(positionToLightSource);
float dis = length(positionToLightSource);
attenuation = 1.0 / (lights[i].attenuation * dis);
}
cosAngIncidence = dot(normalizeWorldNormal, lightDirection);
cosAngIncidence = clamp(cosAngIncidence, 0, 1);
vec3 halfAngle = normalize(lightDirection + viewDirection);
blinnTerm = dot(normalizeWorldNormal, halfAngle);
blinnTerm = clamp(blinnTerm, 0, 1);
blinnTerm = cosAngIncidence != 0.0 ? blinnTerm : 0.0;
blinnTerm = pow(blinnTerm, objMaterial.shininess*5);
totalLight = totalLight
+ (lights[i].diffuse * objMaterial.diffuse * attenuation * cosAngIncidence)
+ (lights[i].specular * objMaterial.specular * attenuation * blinnTerm);
}
}
gl_FragColor = vec4(totalLight.xyz, 1.0);
}
Vertex Shader
#version 330 compatibility
layout(location=0) in vec3 vx_pos;
layout(location=1) in vec3 vx_nor;
layout(location=3) in vec3 vx_col;
uniform mat4 modelMatrix; // Matrix to world coordinate
uniform mat4 normalInvTransMatrix; // Matrix for normal in world coordinate
uniform mat4 viewMatrix; // Matrix to view coordinate
uniform mat4 projectionMatrix; // Matrix to projection coordinate
out vec3 worldNormal;
out vec3 position;
void main() {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vx_pos,1.0);
worldNormal = vec3(normalInvTransMatrix * vec4(vx_nor, 0.));
position = vec3(modelMatrix * vec4(vx_pos,0.));
}
I'm trying to implement Phong shading in GLSL but am having some issues with the specular component.
The green light is the specular component. The light (a point light) travels in a circle above the plane. The specular highlight always points inward toward the Y axis about which the light rotates and fans out toward the diffuse reflection as seen in the image. It doesn't appear to be affected at all by the positioning of the camera and I'm not sure where I'm going wrong.
Vertex shader code:
#version 330 core
/*
* Phong Shading with Point Light (Quadratic Attenuation)
*/
//Input vertex data
layout(location = 0) in vec3 vertexPosition_modelSpace;
layout(location = 1) in vec2 vertexUVs;
layout(location = 2) in vec3 vertexNormal_modelSpace;
//Output Data; will be interpolated for each fragment
out vec2 uvCoords;
out vec3 vertexPosition_cameraSpace;
out vec3 vertexNormal_cameraSpace;
//Uniforms
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalTransformMatrix;
void main()
{
vec3 normal = normalize(vertexNormal_modelSpace);
//Set vertices in clip space
gl_Position = mvpMatrix * vec4(vertexPosition_modelSpace, 1);
//Set output for UVs
uvCoords = vertexUVs;
//Convert vertex and normal into eye space
vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace;
vertexNormal_cameraSpace = normalize(normalTransformMatrix * normal);
}
Fragment Shader Code:
#version 330 core
in vec2 uvCoords;
in vec3 vertexPosition_cameraSpace;
in vec3 vertexNormal_cameraSpace;
//out
out vec4 fragColor;
//uniforms
uniform sampler2D diffuseTex;
uniform vec3 lightPosition_cameraSpace;
void main()
{
const float materialAmbient = 0.025; //a touch of ambient
const float materialDiffuse = 0.65;
const float materialSpec = 0.35;
const float lightPower = 2.0;
const float specExponent = 2;
//--------------Set Colors and determine vectors needed for shading-----------------
//reflection colors- NOTE- diffuse and ambient reflections will use the texture color
const vec3 colorSpec = vec3(0,1,0); //Green spec color
vec3 diffuseColor = texture2D(diffuseTex, uvCoords).rgb; //Get color from the texture at fragment
const vec3 lightColor = vec3(1,1,1); //White light
//Re-normalize normal vectors: after interpolation they may not be unit length any longer
vec3 normVertexNormal_cameraSpace = normalize(vertexNormal_cameraSpace);
//Set camera vec
vec3 viewVec_cameraSpace = normalize(-vertexPosition_cameraSpace); //Since its view space, camera at origin
//Set light vec
vec3 lightVec_cameraSpace = normalize(lightPosition_cameraSpace - vertexPosition_cameraSpace);
//Set reflect vect
vec3 reflectVec_cameraSpace = normalize(reflect(-lightVec_cameraSpace, normVertexNormal_cameraSpace)); //reflect function requires incident vec; from light to vertex
//----------------Find intensity of each component---------------------
//Determine Light Intensity
float distance = abs(length(lightPosition_cameraSpace - vertexPosition_cameraSpace));
float lightAttenuation = 1.0/( (distance > 0) ? (distance * distance) : 1 ); //Quadratic
vec3 lightIntensity = lightPower * lightAttenuation * lightColor;
//Determine Ambient Component
vec3 ambientComp = materialAmbient * diffuseColor * lightIntensity;
//Determine Diffuse Component
float lightDotNormal = max( dot(lightVec_cameraSpace, normVertexNormal_cameraSpace), 0.0 );
vec3 diffuseComp = materialDiffuse * diffuseColor * lightDotNormal * lightIntensity;
vec3 specComp = vec3(0,0,0);
//Determine Spec Component
if(lightDotNormal > 0.0)
{
float reflectDotView = max( dot(reflectVec_cameraSpace, viewVec_cameraSpace), 0.0 );
specComp = materialSpec * colorSpec * pow(reflectDotView, specExponent) * lightIntensity;
}
//Add Ambient + Diffuse + Spec
vec3 phongFragRGB = ambientComp +
diffuseComp +
specComp;
//----------------------Putting it together-----------------------
//Out Frag color
fragColor = vec4( phongFragRGB, 1);
}
Just noting that the normalTransformMatrix seen in the Vertex shader is the inverse-transpose of the model-view matrix.
I am computing a vector from the vertex position to the light, a vector to the camera, and the reflection vector, all in camera space. For the diffuse calculation I am taking the dot product of the light vector and the normal vector, and for the specular component I am taking the dot product of the reflection vector and the view vector. Perhaps there is some fundamental misunderstanding that I have with the algorithm?
I thought at first that the problem could be that I wasn't normalizing the normals entering the fragment shader after interpolation, but adding a line to normalize didn't affect the image. I'm not sure where to look.
I know that there are a lot of Phong shading questions on the site, but everyone seems to have a problem that is a bit different. If anyone can see where I am going wrong, please let me know. Any help is appreciated.
EDIT: Okay, it's working now! Just as jozxyqk suggested below, I needed to do a mat4 * vec4 operation for my vertex position or lose the translation information. When I first made the change I was getting strange results, until I realized that I was making the same mistake in my OpenGL code for lightPosition_cameraSpace before I passed it to the shader (the mistake being that I was casting down the view matrix to a mat3 for the calculation instead of setting the light position vector as a vec4). Once I edited those lines the shader appears to be working properly. Thanks for the help, jozxyqk!
I can see two parts which don't look right.
"vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace" should be a mat4/vec4(x,y,z,1) multiply, otherwise it ignores the translation part of the modelview matrix.
2. distance uses the light position relative to the camera and not the vertex. Use lightVec_cameraSpace instead. (edit: missed the duplicated calculation)
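A minimal sketch of the mat4/vec4 form from point 1, using the same uniform and attribute names as the posted vertex shader (the temporary is just for illustration):
// Use the full mat4 and a homogeneous position so the translation part of the
// modelview matrix is kept, then drop back to vec3 for the varying.
vec4 vertexPosition4_cameraSpace = mvMatrix * vec4(vertexPosition_modelSpace, 1.0);
vertexPosition_cameraSpace = vertexPosition4_cameraSpace.xyz;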
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a position glitch: I'm manually moving the light, altering its position along the X axis. The light source, however (judging by the light it casts upon the square plane below it), doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d style ones))
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
// Surface normal in eye coords
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
// Diffuse light
// Vector to light source (do NOT normalize this!)
vVaryingLightDir = tLightPos - vPosition3;
if(useTexture) {
vVaryingTexCoords = vTexCoord;
}
lightPos_ec = vec4(tLightPos, 1.0f);
vertPos_ec = vec4(vPosition3, 1.0f);
// Transform the light direction (for spotlights)
vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
spotDirection_ec = normalMatrix * spotDirection;
// Projected vertex
gl_Position = mvpMatrix * vVertex;
// Fog factor
if(fogEnabled) {
float len = length(gl_Position);
fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
fogFactor = clamp(fogFactor, 0, 1);
}
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
if(den == 0.0f) {
return 1.0f;
}
return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
float intensity = max(0.0f, dot(nNormal, nLightDir));
float cos_outer_cone = lightTheta;
float cos_inner_cone = lightPhi;
float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
// If we are a point light
if(lightTheta > 0.0f) {
float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
// d3d style smooth edge
float spotEffect = clamp((cos_cur - cos_outer_cone) /
cos_inner_minus_outer, 0.0, 1.0);
spotEffect = pow(spotEffect, lightExponent);
intensity *= spotEffect;
}
float attenuation = att( length(lightPos_ec - vertPos_ec) );
intensity *= attenuation;
return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
vec3 ct, cf;
vec4 texel;
float at, af;
if(useTexture) {
texel = texture2D(colorMap, vVaryingTexCoords);
} else {
texel = vec4(1.0f);
}
ct = texel.rgb;
at = texel.a;
vec3 nNormal = normalize(vVaryingNormal);
vec3 nLightDir = normalize(vVaryingLightDir);
float intensity = computeIntensity(nNormal, nLightDir);
cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
if(intensity > 0.0f) {
// Specular light
// - added *after* the texture color is multiplied so that
// we get a truly shiny result
vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
float spec = max(0.0, dot(nNormal, vReflection));
float fSpec = pow(spec, shininess) * lightSpecular.a;
cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
}
// Color modulation
vFragColor = vec4(ct * cf, at * af);
// Add the fog to the mix
if(fogEnabled) {
vFragColor = mix(vFragColor, fogColor, fogFactor);
}
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future user reading this, since as of today I have yet to see any GLSL tutorial that implements lights with absolutely no fixed functionality and secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat wrong when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here without dividing by it first. This will cause some problems.
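For completeness, a sketch of the divide-first version, which is essentially what the corrected vertex shader above already does:
// Divide by the homogeneous coordinate before dropping it.
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos  = tLightPos4.xyz / tLightPos4.w;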