Currently, I have some textures that overlap one another, for example:
However, when I apply lighting attenuation, the textures seem to blend together:
I'm not sure if I'm applying the formula correctly. Enabling/disabling glBlendFunc doesn't seem to help either. Do I need to apply depth testing here, or is there a way to control texture opacity properly? For now I only need the overlapping portion to be rendered, while "ignoring" the underlapping portion. Thanks!
My fragment shader looks like this:
```glsl
layout(location = 0) in vec3 vInterpColor;
layout(location = 1) in vec2 vTexCoord;

// Lighting
uniform int numLights;
uniform float pLightArrX[32];
uniform float pLightArrY[32];
uniform float pLightArrRad[32];

uniform sampler2D uTex2d;

layout(location = 0) out vec4 fFragColor;

void main() {
    float attenuation = 0.0f;
    for (int i = 0; i < numLights; i++) {
        float distToLight = distance(gl_FragCoord.xy, vec2(pLightArrX[i], pLightArrY[i]));
        // linear attenuation
        if (pLightArrRad[i] > 0.f) {
            float linearAttenuation = 1.0f - (distToLight / pLightArrRad[i]);
            if (linearAttenuation < 0.0f) {
                linearAttenuation = 0.0f;
            }
            attenuation += linearAttenuation;
        }
    }
    if (attenuation > 1.0f) {
        attenuation = 1.0f;
    }
    fFragColor = texture(uTex2d, vTexCoord) * attenuation;
}
```
This is the process I go through to render the scene:

1. Bind the MSAA x4 G-Buffer (4 color attachments: Position, Normal, Color, and Unlit Color (skybox only)); I also have a depth component/texture.
2. Draw the skybox.
3. Draw the geometry.
4. Blit all color and depth components to a single-sample FBO (see the sketch just after this list).
5. Apply lighting (I use the depth texture to check whether a fragment should be lit, by checking if the depth texture value is less than 1).
6. Render the quad.
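For clarity, step 4 amounts to something like this (a rough sketch, not my exact code; `msaaFBO`, `resolveFBO`, `width` and `height` are assumed names, and both FBOs are assumed to have matching attachment layouts):

```cpp
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);     // 4x MSAA G-Buffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);  // single-sample target

// glBlitFramebuffer resolves one color buffer at a time, so each of the
// four attachments (position, normal, color, unlit color) is blitted
// separately.
for (int i = 0; i < 4; ++i) {
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
    glDrawBuffer(GL_COLOR_ATTACHMENT0 + i);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}

// The depth component is resolved in its own call; multisample depth
// blits require GL_NEAREST.
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
```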
And this is what is happening:
As you can see, I get these white and black artefacts around the edges instead of a smooth edge. (It's worth noting that if I remove the lighting and just render the texture, I don't get this and the edges resolve smoothly.)
Here is my shader (it has SSAO implemented, but that doesn't seem to affect this).
```glsl
#version 410 core

in vec2 Texcoord;
out vec4 outColor;

uniform sampler2D texFramebuffer;
uniform sampler2D ssaoTex;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;
uniform sampler2D gAlbedoUnlit;
uniform sampler2D gDepth;
uniform mat4 View;

struct Light {
    vec3 Pos;
    vec3 Color;
    float Linear;
    float Quadratic;
    float Radius;
};
const int MAX_LIGHTS = 32;
uniform Light lights[MAX_LIGHTS];
uniform vec3 viewPos;
uniform bool SSAO;

void main()
{
    vec3 color      = texture(gAlbedo, Texcoord).rgb;
    vec3 colorUnlit = texture(gAlbedoUnlit, Texcoord).rgb;
    vec3 pos        = texture(gPosition, Texcoord).rgb;
    // normalize the vec3, not the vec4: normalizing the vec4 and then
    // swizzling .rgb leaves a non-unit normal
    vec3 norm       = normalize(texture(gNormal, Texcoord).rgb);
    vec3 depth      = texture(gDepth, Texcoord).rgb;
    float ssaoValue = texture(ssaoTex, Texcoord).r;

    // then calculate lighting as usual
    vec3 lighting;
    if (SSAO)
    {
        lighting = vec3(0.3 * color.rgb * ssaoValue); // hard-coded ambient component
    }
    else
    {
        lighting = vec3(0.3 * color.rgb); // hard-coded ambient component
    }

    vec3 posWorld = pos.rgb;
    vec3 viewDir = normalize(viewPos - posWorld);
    for (int i = 0; i < MAX_LIGHTS; ++i)
    {
        vec4 lightPos = View * vec4(lights[i].Pos, 1.0);
        float distance = length(lightPos.xyz - posWorld);
        if (distance < lights[i].Radius)
        {
            // diffuse
            vec3 lightDir = normalize(lightPos.xyz - posWorld);
            vec3 diffuse = max(dot(norm, lightDir), 0.0) * color.rgb * lights[i].Color;
            float attenuation = 1.0 / (1.0 + lights[i].Linear * distance +
                                       lights[i].Quadratic * distance * distance);
            lighting += diffuse * attenuation;
        }
    }

    if (depth.r >= 1.0)
    {
        outColor = vec4(colorUnlit, 1.0);
    }
    else
    {
        outColor = vec4(lighting, 1.0);
    }
}
```
So the last if statement checks whether the fragment is covered in the depth texture: if it is, apply lighting; if it is not, just draw the skybox (this is so lighting is not applied to the skybox).
I have spent a few days trying to work this out: changing the way I check whether a fragment should be lit (comparing normals, position and depth), and changing the formats to higher precision (e.g. RGB16F instead of RGB8), but I can't figure out what is causing it, and doing lighting per sample (using texelFetch) for the whole screen would be way too intensive.
Any ideas?
This question is a bit old now, but I thought I would say how I solved my issue.
I run a basic Sobel filter in my shader, which I already use for screen-space outlines; in addition, I check whether MSAA is enabled and, if so, compute lighting per texel (per sample) around the edge pixels only!
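Concretely, the idea looks something like this (a minimal sketch, assuming the G-Buffer is also available as multisample textures; the uniform names and the single hard-coded light are illustrative, and a real version would reuse the point-light loop from the question's shader):

```glsl
#version 410 core

uniform sampler2DMS gPositionMS;  // multisample G-Buffer attachments
uniform sampler2DMS gNormalMS;
uniform vec3 lightPosView;        // one view-space light, for brevity
uniform int numSamples;           // 4 for MSAA x4

out vec4 outColor;

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec3 result = vec3(0.0);

    // At pixels the Sobel pass flagged as edges, fetch and light every
    // MSAA sample individually, then average them (a manual resolve).
    for (int s = 0; s < numSamples; ++s) {
        vec3 pos  = texelFetch(gPositionMS, coord, s).rgb;
        vec3 norm = normalize(texelFetch(gNormalMS, coord, s).rgb);
        vec3 lightDir = normalize(lightPosView - pos);
        result += max(dot(norm, lightDir), 0.0) * vec3(1.0); // Lambert only
    }
    outColor = vec4(result / float(numSamples), 1.0);
}
```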
I'm facing a very strange problem which seems to originate from a simple multiplication in the fragment shader.
I'm trying to calculate shadows using a framebuffer that renders only the depths from the light's perspective, which is a common technique and one of the easier ones for beginners to implement.
Fragment Shader:
```glsl
#version 330 core

uniform sampler2D parquet;
uniform samplerCube depthMaps[15];

in vec2 TexCoords;
out vec4 color;

in vec3 Normal;
in vec3 FragPos;

uniform vec3 lightPos[15];
uniform vec3 lightColor[15];
uniform float intensity[15];
uniform float far_plane;
uniform vec3 viewPos;

float ShadowCalculation(vec3 fragPos, vec3 lightPost, samplerCube depthMap)
{
    vec3 fragToLight = fragPos - lightPost;
    float closestDepth = texture(depthMap, fragToLight).r;
    // original depth value
    closestDepth *= far_plane;
    float currentDepth = length(fragToLight);
    float bias = 0.05;
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}

void main()
{
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos[0] - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor[0];

    float _distance = length(vec3(FragPos - lightPos[0]));
    float attenuation = 1.0 / pow(_distance + 1.0, 2.0);
    if (attenuation > 1.0) attenuation = 1.0;

    float intens = intensity[0];
    if (intensity[0] > 150.0) intens = 150.0f;

    vec3 resulta = (diffuse * attenuation) * intens;

    // texture color
    vec3 tCol = vec3(texture(parquet, TexCoords));
    // gamma correction
    tCol.rgb = pow(tCol.rgb, vec3(0.45));

    vec3 colors = resulta * tCol * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
    color = vec4(colors, 1.0f);
}
```
The last multiplication inside main() behaves strangely. Multiplying the result of the diffuse light by the texture color renders nicely (so we have no shadows, just diffuse lighting):

```glsl
// works
vec3 colors = resulta * tCol;
```

Multiplying the diffuse light by the shadow result also renders nicely (now we have no textures):

```glsl
// works
vec3 colors = resulta * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
```

Doing both together renders just a black screen. I've tried all sorts of things in the fragment shader, but none worked.
Lastly, here is the fragment shader used to render the cubemap:
```glsl
#version 330 core

in vec4 FragPos;

uniform vec3 lightPos;
uniform float far_plane;

void main()
{
    float lightDistance = length(FragPos.xyz - lightPos);
    // map to [0;1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;
    gl_FragDepth = lightDistance;
}
```
Can you spot any logical error? I'm using uniform arrays since I'll later need multiple lights at once.
After a while spent trying to visually debug the shader's output, I finally found the error: I was binding the depth map's cubemap texture incorrectly, and this caused the strange behaviour I was seeing in the last multiplication.
Lesson learned: it's not always the fragment shader's fault.
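For anyone who hits the same thing, correct binding for an array of cube maps boils down to something like this (a sketch with assumed names, not my exact code; the key point is GL_TEXTURE_CUBE_MAP rather than GL_TEXTURE_2D, since the shader samplers are samplerCube):

```cpp
#include <string>

// depthCubemap[], lightCount, program and parquetTexture are assumed names.
for (int i = 0; i < lightCount; ++i) {
    glActiveTexture(GL_TEXTURE1 + i);
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap[i]);

    // Point each depthMaps[i] sampler at its texture unit.
    std::string name = "depthMaps[" + std::to_string(i) + "]";
    glUniform1i(glGetUniformLocation(program, name.c_str()), 1 + i);
}

// The diffuse texture stays on unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, parquetTexture);
```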
I have implemented the basic shadow mapping algorithm, but it works correctly with only one light.
I want to render a scene with the following two point lights:
Light_1 - position: vec3(-8.0f, 5.0f, 8.0f), direction: vec3(1.3f, -1.0f, -1.0f)
Light_2 - position: vec3(8.0f, 5.0f, 8.0f), direction: vec3(1.3f, -1.0f, -1.0f)
If I render the two lights separately, I get the following results:
Rendering with Light_1:
Rendering with Light_2:
But with the two lights together it looks like this:
As you can see, the first shadow seems to be rendered correctly, but it lies beneath the shadow of Light_2, which is not correct. To sum up the situation: the texture of my box is bound to texture unit 0, and the shadow depth textures are bound to consecutive units starting at 1 (GL_TEXTURE1 + idx), so with two lights they occupy units 1 and 2. Here's the code that implements this:
```cpp
for (int idy = 0; idy < this->m_pScene->getLightList().size(); idy++)
{
    [...]
    Light *light = this->m_pScene->getLightList()[idy];
    FrameBuffer *frameBuffer = light->getFrameBuffer();

    glActiveTexture(GL_TEXTURE1 + idy);
    glBindTexture(GL_TEXTURE_2D, frameBuffer->getTexture()->getTextureId()); // To unbind

    shaderProgram->setUniform(
        std::string("ShadowMatrix[").append(Convertor::toString<int>(idy)).append("]").c_str(),
        this->m_pScene->getLightList()[idy]->getBiasViewPerspectiveMatrix() * modelMatrix);
    shaderProgram->setUniform(
        std::string("ShadowMap[").append(Convertor::toString<int>(idy)).append("]").c_str(),
        (int)idy + 1);
}
```
It corresponds in our case to:

```cpp
shaderProgram->setUniform("ShadowMatrix[0]", <shadow_matrix_light_1>);
shaderProgram->setUniform("ShadowMap[0]", 1); // GL_TEXTURE1
shaderProgram->setUniform("ShadowMatrix[1]", <shadow_matrix_light_2>);
shaderProgram->setUniform("ShadowMap[1]", 2); // GL_TEXTURE2
```
The vertex shader is the following (shown here for just 2 lights):
```glsl
#version 400

#define MAX_SHADOW_MATRIX 10
#define MAX_SHADOW_COORDS 10

layout (location = 0) in vec4 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec2 VertexTexture;

uniform mat3 NormalMatrix;
uniform mat4 ModelViewMatrix;
uniform mat4 ShadowMatrix[MAX_SHADOW_MATRIX];
uniform mat4 MVP;
uniform int lightCount;

out vec3 Position;
out vec3 Normal;
out vec2 TexCoords;
out vec4 ShadowCoords[MAX_SHADOW_COORDS];

void main(void)
{
    TexCoords = VertexTexture;
    Normal = normalize(NormalMatrix * VertexNormal);
    Position = vec3(ModelViewMatrix * VertexPosition);

    for (int idx = 0; idx < lightCount; idx++)
        ShadowCoords[idx] = ShadowMatrix[idx] * VertexPosition;

    gl_Position = MVP * VertexPosition;
}
```
And here is a piece of the fragment shader:
```glsl
[...]

vec3 evalBasicFragmentShadow(vec3 LightIntensity, int idx)
{
    vec3 Ambient = LightInfos[idx].La * MaterialInfos.Ka;
    if (ShadowCoords[idx].w > 0.0f)
    {
        vec4 tmp_shadow_coords = ShadowCoords[idx];
        tmp_shadow_coords.z -= SHADOW_OFFSET;
        float shadow = textureProj(ShadowMap[idx], tmp_shadow_coords);
        LightIntensity = LightIntensity * shadow + Ambient;
    }
    else
    {
        LightIntensity = LightIntensity + MaterialInfos.Ka;
    }
    return (LightIntensity);
}

vec3 getLightIntensity(vec3 TexColor)
{
    vec3 LightIntensity = vec3(0.0f);
    for (int idx = 0; idx < lightCount; idx++)
    {
        vec3 tnorm = (gl_FrontFacing ? -normalize(Normal) : normalize(Normal));
        vec3 lightDir = vec3(LightInfos[idx].Position) - Position;
        vec3 lightDirNorm = normalize(lightDir);
        float lightAtt = getLightAttenuation(lightDir, LightInfos[idx]);

        LightIntensity += Point_ADS_Shading(lightAtt, -tnorm, lightDirNorm, TexColor, idx);
        LightIntensity = evalBasicFragmentShadow(LightIntensity, idx);
    }
    return (LightIntensity);
}

[...]
```
It looks like a texture unit problem, because separately the two shadows are rendered perfectly, and I use glActiveTexture correctly (I think so). Plus, I noticed that if I change the loading order of the lights, the bad shadow is caused by the other light (it's the contrary). So it seems to come from texture unit 2, but I don't understand why. Can anyone help me, please? Thanks a lot in advance for your help.
I solved my problem. Actually, I had only filled the first depth texture (for the first loaded light), so the shadow map for the second light was never filled, and this is what explains the black area in the third image above.
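In other words, the depth pass has to run once per light, each time with that light's framebuffer bound. Roughly (a sketch reusing the classes from my question; getId(), the matrix accessor and renderSceneGeometry() are assumed helpers):

```cpp
// One depth-only pass per light; without this, every shadow map after
// the first stays empty and shows up as a black area in the composite.
for (int idy = 0; idy < this->m_pScene->getLightList().size(); idy++) {
    Light *light = this->m_pScene->getLightList()[idy];
    FrameBuffer *frameBuffer = light->getFrameBuffer();

    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer->getId());
    glClear(GL_DEPTH_BUFFER_BIT);

    depthShaderProgram->setUniform("LightViewPerspectiveMatrix",
                                   light->getViewPerspectiveMatrix());
    renderSceneGeometry(); // draw all shadow casters
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```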
Here's the final result :
I hope this post will be useful for someone. Thanks for your attention.
I'm using my own light (not the OpenGL built-in ones). This is my fragment shader program:
```glsl
#version 330

in vec4 vertexPosition;
in vec3 surfaceNormal;
in vec2 textureCoordinate;
in vec3 eyeVecNormal;
out vec4 outputColor;

uniform sampler2D texture_diffuse;
uniform bool specular;
uniform float shininess;

struct Light {
    vec4 position;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
};
uniform Light lights[8];

void main()
{
    // texture() replaces the deprecated texture2D() in GLSL 330
    outputColor = texture(texture_diffuse, textureCoordinate);
    for (int l = 0; l < 8; l++) {
        vec3 lightDirection = normalize(lights[l].position.xyz - vertexPosition.xyz);
        float diffuseLightIntensity = max(0.0, dot(surfaceNormal, lightDirection));

        outputColor.rgb += lights[l].ambientColor.rgb * lights[l].ambientColor.a;
        outputColor.rgb += lights[l].diffuseColor.rgb * lights[l].diffuseColor.a * diffuseLightIntensity;

        if (specular) {
            vec3 reflectionDirection = normalize(reflect(lightDirection, surfaceNormal));
            // renamed so it doesn't shadow the bool uniform "specular"
            float specularIntensity = max(0.0, dot(eyeVecNormal, reflectionDirection));
            if (diffuseLightIntensity != 0.0) {
                vec3 specularColorOut = pow(specularIntensity, shininess) * lights[l].specularColor.rgb;
                outputColor.rgb += specularColorOut * lights[l].specularColor.a;
            }
        }
    }
}
```
Now the problem is that when I have two light sources, each with an ambient color of, say, vec4(0.2f, 0.2f, 0.2f, 1.0f), the ambient color on the model will be vec4(0.4f, 0.4f, 0.4f, 1.0f), because I simply add it to the outputColor variable. How can I compute a single ambient and a single diffuse color for multiple lights, so I get a realistic result?
Here's a fun fact: lights in the real world do not have ambient, diffuse, or specular colors. They emit one color. Period (OK, if you want to be technical, lights emit lots of colors. But individual photons don't have "ambient" properties. They just have different wavelengths). All you're doing is copying someone who copied OpenGL's nonsense about ambient and diffuse light colors.
Stop copying someone else's code and do it right.
Each light has a color. You use that color to compute the diffuse and specular contributions of that light to the object. That's it.
Ambient is a property of the scene, not of any particular light. It is intended to represent indirect, global illumination: the light reflected from other objects in the scene, as taken as a general aggregate. You don't have ambient "lights"; there is only one ambient term, and it should be applied once.
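A sketch of what that looks like applied to the shader above (the sceneAmbient uniform and the simplified Light struct are my illustrations, not a fixed API):

```glsl
#version 330

in vec4 vertexPosition;
in vec3 surfaceNormal;
in vec2 textureCoordinate;
out vec4 outputColor;

uniform sampler2D texture_diffuse;
uniform vec3 sceneAmbient; // one ambient term for the whole scene

struct Light {
    vec4 position;
    vec3 color; // a light emits one color
};
uniform Light lights[8];

void main()
{
    vec3 albedo = texture(texture_diffuse, textureCoordinate).rgb;

    // Apply the ambient term exactly once, regardless of light count.
    vec3 result = sceneAmbient * albedo;

    // Each light contributes diffuse (and, if desired, specular)
    // illumination computed from its single color.
    for (int l = 0; l < 8; l++) {
        vec3 lightDir = normalize(lights[l].position.xyz - vertexPosition.xyz);
        float diff = max(0.0, dot(normalize(surfaceNormal), lightDir));
        result += diff * lights[l].color * albedo;
    }
    outputColor = vec4(result, 1.0);
}
```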
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. The light source, however (judging by the light it casts upon the square plane below it), doesn't seem to move only along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I have fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (D3D-style ones).)
Here's the code I'm using in the vertex shader:
```glsl
#version 330

uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;

uniform vec3 vLightPosition;
uniform vec3 spotDirection;

uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;

in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;

smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;

void main() {
    // Surface normal in eye coords
    vVaryingNormal = normalMatrix * vNormal;

    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
    vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;

    // Diffuse light
    // Vector to light source (do NOT normalize this!)
    vVaryingLightDir = tLightPos - vPosition3;

    if (useTexture) {
        vVaryingTexCoords = vTexCoord;
    }

    lightPos_ec = vec4(tLightPos, 1.0f);
    vertPos_ec = vec4(vPosition3, 1.0f);

    // Transform the light direction (for spotlights)
    spotDirection_ec = normalMatrix * spotDirection;

    // Projected vertex
    gl_Position = mvpMatrix * vVertex;

    // Fog factor
    if (fogEnabled) {
        float len = length(gl_Position);
        fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
        fogFactor = clamp(fogFactor, 0.0, 1.0);
    }
}
```
And this is the code I'm using in the fragment shader:
```glsl
#version 330

uniform vec4 globalAmbient;

// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;

uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;

// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;

// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;

// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;

smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;

out vec4 vFragColor;

// Cubic attenuation function
float att(float d) {
    float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
    if (den == 0.0f) {
        return 1.0f;
    }
    return min(1.0f, 1.0f / den);
}

float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
    float intensity = max(0.0f, dot(nNormal, nLightDir));
    float cos_outer_cone = lightTheta;
    float cos_inner_cone = lightPhi;
    float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;

    // If we are a spotlight (lightTheta == 0 means a plain point light)
    if (lightTheta > 0.0f) {
        float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
        // d3d style smooth edge
        float spotEffect = clamp((cos_cur - cos_outer_cone) / cos_inner_minus_outer, 0.0, 1.0);
        spotEffect = pow(spotEffect, lightExponent);
        intensity *= spotEffect;
    }

    float attenuation = att(length(lightPos_ec - vertPos_ec));
    intensity *= attenuation;
    return intensity;
}

/**
 * Phong per-pixel lighting shading model.
 * Implements basic texture mapping and fog.
 */
void main() {
    vec3 ct, cf;
    vec4 texel;
    float at, af;

    if (useTexture) {
        texel = texture(colorMap, vVaryingTexCoords);
    } else {
        texel = vec4(1.0f);
    }

    ct = texel.rgb;
    at = texel.a;

    vec3 nNormal = normalize(vVaryingNormal);
    vec3 nLightDir = normalize(vVaryingLightDir);

    float intensity = computeIntensity(nNormal, nLightDir);

    cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
    af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;

    if (intensity > 0.0f) {
        // Specular light
        // - added *after* the texture color is multiplied so that
        //   we get a truly shiny result
        vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
        float spec = max(0.0, dot(nNormal, vReflection));
        float fSpec = pow(spec, shininess) * lightSpecular.a;
        cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
    }

    // Color modulation
    vFragColor = vec4(ct * cf, at * af);

    // Add the fog to the mix
    if (fogEnabled) {
        vFragColor = mix(vFragColor, fogColor, fogFactor);
    }
}
```
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is computed correctly. This means the light position is correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not computed correctly per fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future reader, since as of today I have yet to see a GLSL tutorial that implements lights with absolutely no fixed functionality and no secret implicit transforms (such as gl_LightSource[i].* and the implicit transformation to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat incorrectly when using large polygons. The problem was normalizing the eye vector in the vertex shader: interpolating normalized values produces incorrect results, since the interpolation of two unit vectors is generally shorter than unit length (for example, the midpoint between (1, 0, 0) and (0, 1, 0) has length ~0.707).
Change

```glsl
vVaryingLightDir = normalize(tLightPos - vPosition3);
```

to

```glsl
vVaryingLightDir = tLightPos - vPosition3;
```

in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:

```glsl
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
```

You are simply eliminating the homogeneous coordinate here, without dividing through by it first. This will cause some problems.
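For reference, dividing through by w first looks like this (it's what the corrected vertex shader above now does):

```glsl
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
```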