I'm facing a very strange problem that seems to originate from a simple multiplication in the fragment shader.
I'm trying to calculate shadows using a framebuffer that renders only the depths from the light's perspective, a common technique for beginners since it is easier to implement.
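For context, the depth-only cubemap framebuffer is set up on the C++ side roughly like this (a sketch with illustrative identifiers, not my exact code):
// Sketch: depth cubemap attached to a depth-only FBO.
// All identifiers here are illustrative.
GLuint depthCubemap, depthFBO;
const GLsizei SHADOW_RES = 1024;
glGenTextures(1, &depthCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
for (GLuint face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                 SHADOW_RES, SHADOW_RES, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glGenFramebuffers(1, &depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubemap, 0);
glDrawBuffer(GL_NONE); // depth only, no color attachment
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);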
Fragment Shader:
#version 330 core
uniform sampler2D parquet;
uniform samplerCube depthMaps[15];
in vec2 TexCoords;
out vec4 color;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos[15];
uniform vec3 lightColor[15];
uniform float intensity[15];
uniform float far_plane;
uniform vec3 viewPos;
float ShadowCalculation(vec3 fragPos, vec3 lightPost, samplerCube depthMaps)
{
    vec3 fragToLight = fragPos - lightPost;
    float closestDepth = texture(depthMaps, fragToLight).r;
    // original depth value
    closestDepth *= far_plane;
    float currentDepth = length(fragToLight);
    float bias = 0.05;
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
void main()
{
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos[0] - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor[0];
    float _distance = length(vec3(FragPos - lightPos[0]));
    float attenuation = 1.0 / pow(_distance + 1, 2);
    if(attenuation > 1.0) attenuation = 1.0;
    float intens = intensity[0];
    if(intensity[0] > 150) intens = 150.0f;
    vec3 resulta = (diffuse * attenuation) * intens;
    // texture color
    vec3 tCol = vec3(texture(parquet, TexCoords));
    // gamma correction
    tCol.rgb = pow(tCol.rgb, vec3(0.45));
    vec3 colors = resulta * tCol * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
    color = vec4(colors, 1.0f);
}
The last multiplication inside main() behaves strangely. Multiplying the result of the diffuse light by the texture color renders nicely (so we have no shadows, just diffuse lighting):
//works
vec3 colors = resulta * tCol;
Multiplying the diffuse light by the shadow result also renders nicely (now we have no textures):
//works
vec3 colors = resulta * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
Doing it all together renders just a black screen. I've tried all sorts of things in the fragment shader, but none worked.
Lastly, here is the fragment shader used to render the cubemap:
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main()
{
    float lightDistance = length(FragPos.xyz - lightPos);
    // map to [0;1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;
    gl_FragDepth = lightDistance;
}
Can you spot any logical error? I'm using uniform arrays since I'll later need multiple lights at once.
After a while spent visually debugging the shader's output, I finally found the error: I was binding the depth map's cubemap texture incorrectly, and this caused the strange behaviour I was seeing in the last multiplication.
Lesson learned: it's not always the fragment shader's fault.
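For anyone else hitting this: the fix is to give each sampler its own texture unit and bind the right texture type to that unit. A minimal sketch of the corrected binding (unit numbers and variable names are my assumptions):
// Sketch: one texture unit per sampler (unit assignments are illustrative).
glUseProgram(shaderProgram);
glActiveTexture(GL_TEXTURE0); // unit 0: diffuse texture
glBindTexture(GL_TEXTURE_2D, parquetTexture);
glUniform1i(glGetUniformLocation(shaderProgram, "parquet"), 0);
glActiveTexture(GL_TEXTURE1); // unit 1: depth cubemap
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
glUniform1i(glGetUniformLocation(shaderProgram, "depthMaps[0]"), 1);
Leaving a sampler2D and a samplerCube both pointing at the same unit is undefined behaviour, which would explain why sampling either texture alone worked while combining them produced black.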
I've been trying to implement the Blinn-Phong lighting model to light a Wavefront OBJ model imported through Assimp (GitHub link).
The model seems to be loaded correctly; however, the lighting appears to be "cut off" near the middle of the model.
Image of the imported model with and without lighting enabled.
As you can see on the left of the image above, there is a region in the middle of the model where the light effectively gets "split up", which is not what is intended. There is a discrepancy where the side facing the light source appears brighter than normal and the side facing away appears darker than normal, with no easing between the two sides.
I believe there might be something wrong with how I've implemented the lighting model in the fragment shader, but I cannot say for sure why this is happening.
Vertex shader:
#version 330 core
layout (location = 0) in vec3 vertPos;
layout (location = 1) in vec3 vertNormal;
layout (location = 2) in vec2 vertTexCoords;
out vec3 fragPos;
out vec3 fragNormal;
out vec2 fragTexCoords;
uniform mat4 proj, view, model;
uniform mat3 normalMat;
void main() {
    fragPos = vec3(model * vec4(vertPos, 1));
    gl_Position = proj * view * vec4(fragPos, 1);
    fragTexCoords = vertTexCoords;
    fragNormal = normalMat * vertNormal;
}
Fragment shader:
#version 330 core
in vec3 fragPos;
in vec3 fragNormal;
in vec2 fragTexCoords;
out vec4 FragColor;
const int noOfDiffuseMaps = 1;
const int noOfSpecularMaps = 1;
struct Material {
    sampler2D diffuseMaps[noOfDiffuseMaps], specularMaps[noOfSpecularMaps];
    float shininess;
};
struct Light {
    vec3 direction;
    vec3 ambient, diffuse, specular;
};
uniform Material material;
uniform Light light;
uniform vec3 viewPos;
const float pi = 3.14159265;
uniform float gamma = 2.2;
float near = 0.1;
float far = 100;
float LinearizeDepth(float depth)
{
    float z = depth * 2 - 1;
    return (2 * near * far) / (far + near - z * (far - near));
}
void main() {
    vec3 normal = normalize(fragNormal);
    vec3 calculatedColor = vec3(0);
    for (int i = 0; i < noOfDiffuseMaps; i++) {
        vec3 diffuseTexel = texture(material.diffuseMaps[i], fragTexCoords).rgb;
        // Ambient lighting
        vec3 ambient = diffuseTexel * light.ambient;
        // Diffuse lighting
        float diff = max(dot(light.direction, normal), 0);
        vec3 diffuse = diffuseTexel * light.diffuse * diff;
        calculatedColor += ambient + diffuse;
    }
    for (int i = 0; i < noOfSpecularMaps; i++) {
        vec3 specularTexel = texture(material.specularMaps[i], fragTexCoords).rgb;
        vec3 viewDir = normalize(viewPos - fragPos);
        vec3 halfWayDir = normalize(viewDir + light.direction);
        float energyConservation = (8 + material.shininess) / (8 * pi);
        // Specular lighting
        float spec = pow(max(dot(halfWayDir, normal), 0), material.shininess);
        vec3 specular = specularTexel * light.specular * spec * energyConservation;
        calculatedColor += specular;
    }
    float depthColor = 1 - LinearizeDepth(gl_FragCoord.z) / far;
    FragColor = vec4(pow(calculatedColor, vec3(1 / gamma)) * depthColor, 1);
}
Make sure your textures and colors are also linear (a simple pow with exponent 2.2), because you are doing gamma encoding at the end.
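If you don't want to decode in the shader, one option is to upload the diffuse texture with an sRGB internal format so the hardware linearizes it on every fetch; a minimal sketch (width/height/pixels stand in for your loader's output):
// Sketch: let the GPU decode sRGB to linear on sampling,
// replacing a manual pow(texel.rgb, vec3(2.2)) in the shader.
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);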
Also note that a harsh terminator is expected.
http://filmicworlds.com/blog/linear-space-lighting-i-e-gamma/
Beyond that, if you expect a soft falloff, it has to come from an area light. For that, you can implement wrap lighting or proper area lights.
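For example, wrap lighting is a one-line tweak to the diffuse term; a sketch of the idea in C++ form (the same expression drops straight into a shader, and the wrap amount is a tunable assumption):
#include <algorithm>

// Wrap lighting: bias N.L so the light wraps past the geometric terminator.
// wrap = 0.0 is standard Lambert; values toward 1.0 soften the falloff.
float wrapDiffuse(float NdotL, float wrap)
{
    return std::max((NdotL + wrap) / (1.0f + wrap), 0.0f);
}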
I'm porting an OpenGL application to WebAssembly using Emscripten. I've written a bunch of shaders in GLSL (330) for the native version. However, for the web version I need shaders written in GLSL ES (300 es). How would I go about converting my shaders from GLSL to GLSL ES?
Possibilities I have considered so far:
GLSL -> SPIR-V -> GLSL ES,
having a bunch of #ifdef statements in the GLSL code so that blocks of code only compile for GLSL ES or GLSL,
writing custom C++ code that dynamically creates GLSL / GLSL ES code depending on what you need (see the sketch after this list),
simply having two nearly identical copies of all the shaders, one in GLSL and the other in GLSL ES.
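To illustrate the third option: the usual trick is to store each shader body without a #version line and prepend the right preamble at load time. A minimal sketch, assuming a compile-time ES_BUILD flag (the flag and helper name are mine):
#include <string>

// Sketch: one shader body, per-target preamble (ES_BUILD is an assumed flag).
std::string buildShaderSource(const std::string& body)
{
#ifdef ES_BUILD
    const char* preamble = "#version 300 es\n"
                           "precision highp float;\n"; // ES fragment shaders need a default float precision
#else
    const char* preamble = "#version 330 core\n";
#endif
    return preamble + body;
}
For many shaders, GLSL 3.30 core and GLSL ES 3.00 are close enough that this preamble is the only per-target difference.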
Example of GLSL vertex shader:
#version 330 core
#define NR_LIGHTS 10
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
out vec3 normalViewSpace;
out vec3 posViewSpace;
out vec2 textureCoords;
out vec4 positionsLightSpace[NR_LIGHTS];
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;
uniform mat4 lightMatrices[NR_LIGHTS];
void main()
{
    vec4 posViewSpaceV4;
    posViewSpaceV4 = viewMatrix * modelMatrix * vec4(position, 1.0);
    posViewSpace = posViewSpaceV4.xyz;
    gl_Position = projectionMatrix * posViewSpaceV4;
    normalViewSpace = mat3(viewMatrix) * normalMatrix * normal;
    for( int i = 0; i < NR_LIGHTS; i++ )
    {
        // presumably: transform the vertex into each light's clip space
        positionsLightSpace[i] = lightMatrices[i] * modelMatrix * vec4(position, 1.0);
    }
}
Example of GLSL fragment shader:
#version 330 core
#define NR_LIGHTS 10
struct Material {
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float shininess;
    float alpha;
};
struct Light {
    vec3 posViewSpace;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float constant;
    float linear;
    float quadratic;
    vec3 directionViewSpace;
    float cutOff;
    float outerCutOff;
    sampler2D shadowMap;
};
out vec4 FragColor;
in vec3 normalViewSpace;
in vec3 posViewSpace;
in vec4 positionsLightSpace[NR_LIGHTS];
uniform Material material;
uniform Light lights[NR_LIGHTS];
float shadowCalculation(vec4 posLightSpace, sampler2D shadowMap, Light light)
{
    // perform perspective divide
    vec3 projCoords = posLightSpace.xyz / posLightSpace.w; // range [-1, 1]
    // transform range [0, 1]
    projCoords = projCoords * 0.5 + 0.5;
    float closestDepth = texture(shadowMap, projCoords.xy).r;
    float currentDepth = projCoords.z;
    vec3 lightDir = normalize(light.posViewSpace - posViewSpace);
    float bias = max(0.00005 * (1.0 - dot(normalViewSpace, lightDir)), 0.000005); // solves shadow acne
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
vec3 calcSpotLight( Light light, vec3 normal, vec3 position, float shadow) // normal and position in view space, although this function should not care about which space it's in
{
    vec3 result = vec3(0.0, 0.0, 0.0);
    vec3 lightDir = normalize(light.posViewSpace - position);
    float theta = dot(lightDir, normalize(-light.directionViewSpace));
    float epsilon = light.cutOff - light.outerCutOff;
    float intensity = clamp((theta - light.outerCutOff) / epsilon, 0.0, 1.0); // interpolate between inner and outer cutOff and clamp to 0 and 1
    if( intensity > 0 ) // if inside spot radius
    {
        // attenuation
        float distance = length(light.posViewSpace - position);
        float attenuation = 1.0 / (light.constant + light.linear * distance + light.quadratic * (distance * distance));
        if( attenuation > 0.001 )
        {
            // ambient
            vec3 ambient = material.ambient * light.ambient;
            // diffuse
            vec3 norm = normalize(normalViewSpace);
            float diff = max(dot(norm, lightDir), 0.0);
            vec3 diffuse = diff * material.diffuse * light.diffuse;
            // specular
            vec3 viewDir = normalize(-position); // in view space the camera is at (0, 0, 0)
            vec3 reflectDir = reflect(-lightDir, norm); // reflect function expect vector FROM light source TO position
            float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);
            vec3 specular = material.specular * spec * light.specular;
            // result
            result = intensity * attenuation * (ambient + (1.0 - shadow) * (diffuse + specular));
        }
    }
    return result;
}
void main()
{
    vec3 result = material.ambient * 0.08;
    for( int i = 0; i < NR_LIGHTS; i++ )
    {
        // presumably: accumulate each spot light's shadowed contribution
        float shadow = shadowCalculation(positionsLightSpace[i], lights[i].shadowMap, lights[i]);
        result += calcSpotLight(lights[i], normalViewSpace, posViewSpace, shadow);
    }
    FragColor = vec4(result, material.alpha);
}
Managed to get shadow mapping to work in my OpenGL rendering engine, but it is producing some weird artifacts that I think are "shadow acne". However, I am using shadow2DProj to get the shadow value from the shadow depth map, which for me has proven to be the only way to get shadows to show up at all. Because of that, looking around at the various tutorials at learnopengl, opengl-tutorials and others has yielded no help. I would like some advice on how to mitigate this problem.
Here is the fragment shader I use when drawing with the shadow map:
#version 330 core
out vec4 FragColor;
struct Light {
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    vec3 attenuation;
};
in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;
in vec4 ShadowCoords;
uniform vec3 viewPos;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
uniform sampler2DShadow shadowMap;
uniform Light lights[4];
uniform float shininess;
float calculateShadow(vec3 lightDir)
{
    float shadowValue = shadow2DProj(shadowMap, ShadowCoords).r;
    float shadow = shadowValue;
    return shadow;
}
vec3 calculateAmbience(Light light, vec3 textureMap)
{
    return light.ambient * textureMap;
}
void main()
{
    vec4 tex = texture(diffuseMap, TexCoords);
    if (tex.a < 0.5)
    {
        discard;
    }
    vec3 ambient = vec3(0.0);
    vec3 diffuse = vec3(0.0);
    vec3 specular = vec3(0.0);
    vec3 norm = normalize(Normal);
    vec3 viewDir = normalize(viewPos - FragPos);
    for (int i = 0; i < 4; i++)
    {
        ambient = ambient + lights[i].ambient * tex.rgb;
        vec3 lightDir = normalize(lights[i].position - FragPos);
        float diff = max(dot(norm, lightDir), 0.0);
        diffuse = diffuse + (lights[i].diffuse * diff * tex.rgb);
        vec3 reflectDir = reflect(-lightDir, norm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
        specular = specular + (lights[i].specular * spec * tex.rgb);
        float dist = length(lights[i].position - FragPos);
        float attenuation = lights[i].attenuation.x + (lights[i].attenuation.y * dist) + (lights[i].attenuation.z * (dist * dist));
        if (attenuation > 0.0)
        {
            ambient *= 1.0 / attenuation;
            diffuse *= 1.0 / attenuation;
            specular *= 1.0 / attenuation;
        }
    }
    float shadow = calculateShadow(normalize(lights[0].position - FragPos));
    vec3 result = (ambient + (shadow) * (diffuse + specular));
    FragColor = vec4(result, 1.0);
}
This is the result I get. Notice the weird stripes on top of the cube:
Reading the description about shadow acne, this seems to be the same phenomenon (source: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping).
According to that article, I need to check whether the ShadowCoord depth value, minus a bias constant, is greater than the depth value read from the shadow map. If so, we have shadow. Now... here comes the problem. Since I am using shadow2DProj and not texture() to get my shadow value from the shadow map (through some intricate sorcery no doubt), I am unable to "port" that article's code into my shader and get it to work. Here is what I have tried:
float calculateShadow(vec3 lightDir)
{
    float closestDepth = shadow2DProj(shadowMap, ShadowCoords).r;
    float bias = 0.005;
    float currentDepth = ShadowCoords.z;
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
But that produces no shadows at all, since the depth & bias check always assigns 1.0 to the "shadow" float. I must admit that I do not fully understand what I am getting from using shadow2DProj(...).r as compared to texture(...).r, but it sure is something completely different.
This question has a misunderstanding of what shadow2DProj does. The function does not return a depth value, but a depth comparison result. Therefore, apply the bias before calling it.
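As background, a sampler2DShadow only behaves this way when the depth texture has compare mode enabled; the usual C++-side setup looks roughly like this (a sketch, with an assumed texture handle name):
// Sketch: depth-comparison sampling setup for sampler2DShadow.
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // linear filtering gives hardware PCF on most GPUs
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);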
Solution 1
Apply the bias prior to running the comparison. ShadowCoords.z is your currentDepth value.
float calculateShadow(vec3 lightDir)
{
    const float bias = 0.005;
    // offset the reference depth; scale the bias by w so it survives the projective divide
    float shadow = shadow2DProj(shadowMap,
        vec4(ShadowCoords.xy, ShadowCoords.z - bias * ShadowCoords.w, ShadowCoords.w)).r;
    return shadow;
}
Solution 2
Apply the bias while performing the light-space depth pass.
glPolygonOffset(float factor, float units)
This function offsets depth values by factor * DZ + r * units, where DZ is the depth slope of the polygon and r is the smallest resolvable depth difference. Setting these to positive values moves polygons deeper into the scene, which acts like our bias.
During initialization:
glEnable(GL_POLYGON_OFFSET_FILL);
During Light Depth Pass:
// These parameters will need to be tweaked for your scene
// to prevent acne and mitigate peter panning
glPolygonOffset(1.0, 1.0);
// draw potential shadow casters
// return to default settings (no offset)
glPolygonOffset(0, 0);
Shader Code:
// we don't even need the light direction for slope bias
float calculateShadow()
{
    float shadow = shadow2DProj(shadowMap, ShadowCoords).r;
    return shadow;
}
I am having a very strange occurrence where glDisableVertexAttribArray works in one copy of my solution, but when I get the solution from my Perforce repository, it doesn't run and throws an assert.
I checked out this forum question but it, unfortunately, didn't solve my problem. This is for the shadow mapping I have been working on: when I try to render things to the depth buffer and then disable the vertex attributes, it throws an error.
Here's how my code is laid out:
glUseProgram(shaderProgram);
glUniform1i(u_diffuseTextureLocation, 0);
glUniform1i(u_shadowMapLocation, 1);
[...]
glUseProgram(shaderProgram);
[Render some stuff to depth buffer]
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindVertexArray(0);
glDisableVertexAttribArray(a_normalAttribLocation); // this call raises GL_INVALID_OPERATION
And here's the vertex shader in that program:
#version 430 core
uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_model;
uniform mat4 u_lightSpaceMat;
in vec3 a_position;
in vec3 a_normal;
in vec2 a_texture;
out VS_OUT {
    vec3 v_fragPos;
    vec3 v_normal;
    vec2 v_texCoords;
    vec4 v_fragPosLightSpace;
} vs_out;
void main()
{
    gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
    vs_out.v_fragPos = (u_model * vec4(a_position, 1.0)).xyz;
    vs_out.v_normal = transpose(inverse(mat3(u_model))) * a_normal;
    vs_out.v_texCoords = a_texture;
    vs_out.v_fragPosLightSpace = u_lightSpaceMat * vec4(vs_out.v_fragPos, 1.0);
}
And the fragment shader in the program:
#version 430 core
uniform sampler2D u_shadowMap;
uniform sampler2D u_diffuseTexture;
uniform vec3 u_lightPos;
uniform vec3 u_viewPos;
in VS_OUT {
    vec3 v_fragPos;
    vec3 v_normal;
    vec2 v_texCoords;
    vec4 v_fragPosLightSpace;
} fs_in;
out vec4 fragColor;
float shadowCalculation(vec4 fragPosLightSpace, vec3 normal, vec3 lightDir)
{
    // perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get closest depth value from light's perspective (using [0,1] range
    // fragPosLight as coords)
    float closestDepth = texture(u_shadowMap, projCoords.xy).r;
    // Get depth of current fragment from lights perspective
    float currentDepth = projCoords.z;
    float bias = max(0.05 * (1.0 - dot(normal, lightDir)), 0.005);
    // Percentage closer filtering
    float shadow = 0.0;
    vec2 texelSize = 1.0 / textureSize(u_shadowMap, 0);
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            float pcfDepth = texture(u_shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
            shadow += currentDepth - bias > pcfDepth ? 1.0 : 0.0;
        }
    }
    shadow /= 9.0;
    return shadow;
}
void main()
{
    vec3 color = texture(u_diffuseTexture, fs_in.v_texCoords).rgb;
    vec3 normal = normalize(fs_in.v_normal);
    vec3 lightColor = vec3(1.0);
    // ambient
    vec3 ambient = 0.15 * color;
    // diffuse
    vec3 lightDir = normalize(u_lightPos - fs_in.v_fragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * lightColor;
    // specular
    vec3 viewDir = normalize(u_viewPos - fs_in.v_fragPos);
    float spec = 0.0;
    vec3 halfWayDir = normalize(lightDir + viewDir);
    spec = pow(max(dot(normal, halfWayDir), 0.0), 64.0);
    vec3 specular = spec * lightColor;
    // calculate shadow
    float shadow = shadowCalculation(fs_in.v_fragPosLightSpace, normal, lightDir);
    vec3 lighting = (ambient + (1.0 - shadow) * (diffuse + specular)) * color;
    fragColor = vec4(lighting, 1.0);
}
What I'm really confused about is that the program runs when I'm using my local files, but when I pull the files from the Perforce repository and try to run it, it throws the exception. I checked, and all the necessary files are uploaded to Perforce. It would seem that something is going wrong with which attributes are actually active, but I'm not sure. Just scratching my head here...
glBindVertexArray(0);
glDisableVertexAttribArray(a_normalAttribLocation);
glDisableVertexAttribArray modifies the current VAO. You just unbound the current VAO, setting it to 0, which in a core profile means no VAO at all. In the compatibility profile, there is a VAO 0, which is probably why it works elsewhere: you're getting a compatibility profile on the other machine.
However, if you're using VAOs, it's not clear why you want to disable an attribute array at all. The whole point of VAOs is that you don't have to call the attribute array functions every frame. You just bind the VAO and go.
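In other words, record the attribute state once at initialization and just rebind the VAO when drawing; a minimal sketch (the buffer layout and identifiers are assumptions):
// One-time setup: enables and pointers are stored inside the VAO.
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(a_positionAttribLocation);
glVertexAttribPointer(a_positionAttribLocation, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(a_normalAttribLocation);
glVertexAttribPointer(a_normalAttribLocation, 3, GL_FLOAT, GL_FALSE,
                      6 * sizeof(float), (void*)(3 * sizeof(float)));
glBindVertexArray(0);

// Per frame: no attribute enabling/disabling needed.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glBindVertexArray(0);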
Recently I added deferred shading support to my engine; however, I ran into some attenuation issues:
As you can see, when I render the light volume (a sphere), it doesn't blend nicely with the ambient part of the image!
Here is how I declare my point light:
PointLight pointlight;
pointlight.SetPosition(glm::vec3(0.0, 6.0, 0.0));
pointlight.SetIntensity(glm::vec3(1.0f, 1.0f, 1.0f));
Here is how I compute the light sphere radius:
Attenuation attenuation = pointLights[i].GetAttenuation();
float lightMax = std::fmaxf(std::fmax(pointLights[i].GetIntensity().r,
                                      pointLights[i].GetIntensity().g),
                            pointLights[i].GetIntensity().b);
float pointLightRadius = (-attenuation.linear +
    std::sqrtf(std::pow(attenuation.linear, 2.0f) -
               4.0f * attenuation.exponential *
               (attenuation.constant - (256.0f / 5.0f) * lightMax))) /
    (2.0f * attenuation.exponential);
And finally, here is my PointLightPass fragment shader:
#version 450 core
struct BaseLight
{
    vec3 intensities;//a.k.a color of light
    float ambientCoeff;
};
struct Attenuation
{
    float constant;
    float linear;
    float exponential;
};
struct PointLight
{
    BaseLight base;
    Attenuation attenuation;
    vec3 position;
};
struct Material
{
    float shininess;
    vec3 specularColor;
    float ambientCoeff;
};
layout (std140) uniform Viewport
{
    uniform mat4 Projection;
    uniform mat4 View;
    uniform mat4 ViewProjection;
    uniform vec2 scrResolution;
};
layout(binding = 0) uniform sampler2D gPositionMap;
layout(binding = 1) uniform sampler2D gAlbedoMap;
layout(binding = 2) uniform sampler2D gNormalMap;
layout(binding = 3) uniform sampler2D gSpecularMap;
uniform vec3 cameraPosition;
uniform PointLight pointLight;
out vec4 fragmentColor;
vec2 FetchTexCoord()
{
    return gl_FragCoord.xy / scrResolution;
}
void main()
{
    vec2 texCoord = FetchTexCoord();
    vec3 gPosition = texture(gPositionMap, texCoord).xyz;
    vec3 gSurfaceColor = texture(gAlbedoMap, texCoord).xyz;
    vec3 gNormal = texture(gNormalMap, texCoord).xyz;
    vec3 gSpecColor = texture(gSpecularMap, texCoord).xyz;
    float gSpecPower = texture(gSpecularMap, texCoord).a;
    vec3 totalLight = gSurfaceColor * 0.1; //TODO remove hardcoded ambient light
    vec3 viewDir = normalize(cameraPosition - gPosition);
    vec3 lightDir = normalize(pointLight.position - gPosition);
    vec3 diffuse = max(dot(gNormal, lightDir), 0.0f) * gSurfaceColor *
        pointLight.base.intensities;
    vec3 halfWayDir = normalize(lightDir + viewDir);
    float spec = pow(max(dot(gNormal, halfWayDir), 0.0f), 1.0f);
    vec3 specular = pointLight.base.intensities * spec /** gSpecColor*/;
    float distance = length(pointLight.position - gPosition);
    float attenuation = 1.0f / (1.0f + pointLight.attenuation.linear * distance
        + pointLight.attenuation.exponential * distance * distance
        + pointLight.attenuation.constant);
    diffuse *= attenuation;
    specular *= attenuation;
    totalLight += diffuse + specular;
    fragmentColor = vec4(totalLight, 1.0f);
}
So what can you suggest to deal with this issue?
EDIT: Here are more details.
For deferred shading:
I populate my G-buffer;
I make an ambient light pass where I render a fullscreen quad with the ambient colors:
#version 420 core
layout (std140) uniform Viewport
{
    uniform mat4 Projection;
    uniform mat4 View;
    uniform mat4 ViewProjection;
    uniform vec2 scrResolution;
};
layout(binding = 1) uniform sampler2D gAlbedoMap;
out vec4 fragmentColor;
vec2 FetchTexCoord()
{
    return gl_FragCoord.xy / scrResolution;
}
void main()
{
    vec2 texCoord = FetchTexCoord();
    vec3 gSurfaceColor = texture(gAlbedoMap, texCoord).xyz;
    vec3 totalLight = gSurfaceColor * 1.2; //TODO remove hardcoded ambient light
    fragmentColor = vec4(totalLight, 1.0f);
}
Then I do my point light pass (see the code above).
The reason you're having this problem is that you're using a "light volume" (a fact that you didn't make entirely clear in this question, but was brought up in your other question).
You are using the normal light attenuation equation. Well, you'll notice that this equation does not magically stop at some arbitrary radius. It is defined for all distances from 0 to infinity.
The purpose of your light volume is to prevent lighting contributions beyond a certain distance. Well, if your light attenuation doesn't go to zero at that distance, then you're going to see a discontinuity at the edge of the light volume.
If you're going to use a light volume, you need to use a light attenuation equation that actually is guaranteed to reach zero at the edge of the volume. Or failing that, you should pick a radius for your volume such that the attenuated strength of the light is nearly zero. And your radius is too small for that.
Keep making your radius bigger until you can't tell it's there.
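One attenuation function that is guaranteed to reach zero at the volume edge is an inverse-square falloff multiplied by a window term; a sketch of the idea in C++ (the particular window is a common choice, not the only one):
#include <algorithm>

// Inverse-square attenuation times a window that reaches exactly zero
// at 'radius', so the edge of the light volume becomes invisible.
float windowedAttenuation(float distance, float radius)
{
    float falloff = 1.0f / (1.0f + distance * distance);
    float t = std::min(distance / radius, 1.0f);
    float window = 1.0f - t * t * t * t; // 1 - (d/r)^4
    return falloff * window * window;    // squared for a smooth fade to zero
}
The same expression can be ported into the PointLightPass shader in place of the constant/linear/exponential formula.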