I have recently implemented SSAO in my engine (deferred shading), but I am unsure how I should combine SSAO with the global light and the local lights (point lights).
Should I do this:
//Global light pass.
vec3 sceneColor = DiffuseBRDF + SpecularBRDF;
float luminance = dot(sceneColor, vec3(0.2126, 0.7152, 0.0722));
sceneColor *= mix(texture(ssaoTexture, texCoord).r, 1.0, luminance);
//Local light pass.
//Use additive blending in this pass, i.e. glBlendFunc(GL_ONE, GL_ONE).
//The final result would be:
vec3 finalColor = sceneColor + pointLight0 + pointLight1 + ... + pointLightN;
or this:
//Global light pass.
vec3 sceneColor = DiffuseBRDF + SpecularBRDF;
//Local light pass.
//Use additive blending in this pass, i.e. glBlendFunc(GL_ONE, GL_ONE).
vec3 finalColor = sceneColor + pointLight0 + pointLight1 + ... + pointLightN;
//Composite pass.
float luminance = dot(finalColor, vec3(0.2126, 0.7152, 0.0722));
finalColor *= mix(texture(ssaoTexture, texCoord).r, 1.0, luminance);
Ambient occlusion is a value that describes how much ambient light can reach a point on the surface. Ambient light is light that comes from all directions rather than from a single light source, and usually consists of sky lighting, image-based lighting, global illumination or a simple flat color. The correct way to apply AO is to multiply it with the ambient lighting. Simply put, ambient occlusion is to ambient lighting what shadows are to direct lighting.
So, if your point lights are "atmospheric" lights that are supposed to represent soft ambient lighting without specular highlights, then you should apply the AO to these point lights only, without any luminance scaling. If the point lights have a well-defined source and you don't have any significant ambient lighting, then you should be consistent and apply the AO to all lights equally, with luminance scaling to approximate the look produced by correct AO application.
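For example, a minimal sketch of the first case, with the AO confined to the ambient term (ambientLight and albedo here are placeholders for whatever your engine provides):
//Global light pass: occlude only the ambient contribution.
float ao = texture(ssaoTexture, texCoord).r;
vec3 ambient = ambientLight * albedo * ao; //AO applied here and nowhere else
vec3 sceneColor = ambient + DiffuseBRDF + SpecularBRDF;
//Local light pass: lights with a well defined source are then
//added unoccluded with additive blending, as before.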
As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering, I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their 'non-reflected' versions. That way, both the actual models and their mirror images receive the same lighting.
My problem is now with the ssao algorithm. It seems that the reflected objects are calculated to be highly occluded and this results in black areas which you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space, so there must be a connection there. Maybe the random samples used during SSAO, or their normals, are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
void main()
{
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    // build a TBN basis around the view-space normal (Gram-Schmidt)
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    const int kernelSize = 64;
    for(int i = 0; i < kernelSize; ++i)
    {
        // get sample position ("sample" is a reserved word in newer GLSL,
        // so samplePos is used instead)
        vec3 samplePos = TBN * samples[i]; // from tangent to view space
        samplePos = fragPos + samplePos * radius;

        vec4 offset = vec4(samplePos, 1.0);
        offset = projection * offset;        // from view to clip space
        offset.xyz /= offset.w;              // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5; // to [0, 1] texture coordinates

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / float(kernelSize));

    //FragColor = vec4(1, 1, 1, 1);
    occl = vec4(occlusion, occlusion, occlusion, 1.0);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection, but I'm not happy with that.
Maybe if the ambient occlusion shader used the positions and normals of the reflected objects there would be no problem. But then I'd run into the trouble of storing more things in the buffer, so I gave up on that idea for now.
I'm trying to figure out if my BRDF approach is correct or not.
// Calculate sun BRDF color
vec3 sunBrdf = brdf(tsLightSourceDir, tsEyeDir,
tsNormal, vtsTangent, vtsBinormal,
albedo.rgb,
NdotL, NdotV,
tRoughness, tMetallic, specular);
// brdf multiplied by incoming sun radiance -> color
vec3 irradiance = sunAmbientRadiance + sunDiffuseRadiance * NdotL;
vec3 brdfColor = sunBrdf * irradiance;
// Add base ambient color
vec3 ambientColor = albedo.rgb * sunAmbientRadiance;
resultColor = ambientColor;
resultColor += brdfColor;
// Interpolate by shadow
resultColor = mix(ambientColor,
resultColor,
shadowAmount);
My question: I'm adding the ambient light twice, once when calculating the ambient color and once by adding it to the amount of light arriving at the BRDF.
I can't figure out whether this is correct or not. On the one hand the BRDF needs the ambient light, in my opinion, since it's part of the lighting that arrives at the surface. But on the other hand, it's already accounted for by multiplying it with the albedo color and adding it as a base color...
Any help is much appreciated!
The correct formula goes like this (in my opinion):
vec3 irradiance = sunDiffuseRadiance * NdotL;
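Folded back into your snippet, a sketch of that correction (using your variable names) would be:
// Direct sun term only; ambient is no longer part of the irradiance
vec3 irradiance = sunDiffuseRadiance * NdotL;
vec3 brdfColor = sunBrdf * irradiance;
// Ambient is added exactly once, as its own term
vec3 ambientColor = albedo.rgb * sunAmbientRadiance;
vec3 resultColor = ambientColor + brdfColor;
// The shadow then attenuates only the direct contribution
resultColor = mix(ambientColor, resultColor, shadowAmount);
That way the ambient light enters the result exactly once.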
I have a deferred renderer that only calculates the lighting equations when the current fragment is in range of a light source. I do this by calculating the size of a light volume in my application and sending it with the other light information to the shaders. I then check the distance between the fragment and lightPos (per light) and use the light's volume as a threshold.
For simplicity's sake I use a linear equation for light attenuation (quadratic equations generate way too large light volumes). All the lighting equations work fine, but when I use multiple lights I sometimes see strange circular borders, as if the distance check causes the light calculations to stop prematurely, giving a sudden change in lighting. You can see this effect in the image below:
The fragment shader code is as follows:
vec3 position = texture(worldPos, fs_in.TexCoords).rgb;
vec3 normal = texture(normals, fs_in.TexCoords).rgb;
normal = normalize(normal * 2.0 - 1.0);
vec3 color = texture(albedo, fs_in.TexCoords).rgb;
float depth = texture(worldPos, fs_in.TexCoords).a;

// Add global ambient value
fragColor = vec4(vec3(0.1) * color, 0.0);
for(int i = 0; i < NR_LIGHTS; ++i)
{
    float distance = abs(length(position - lights[i].Position.xyz));
    if(distance <= lights[i].Size)
    {
        vec3 lighting;
        // Ambient
        lighting += lights[i].Ambient * color;
        // Diffuse
        vec3 lightDir = normalize(lights[i].Position.xyz - position);
        float diffuse = max(dot(normal, lightDir), 0.0);
        lighting += diffuse * lights[i].Diffuse * color;
        // Specular
        vec3 viewDir = normalize(viewPos - position);
        vec3 reflectDir = reflect(-lightDir, normal);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), 8);
        lighting += spec * lights[i].Specular;
        // Calculate attenuation
        float attenuation = max(1.0f - lights[i].Linear * distance, 0.0);
        lighting *= attenuation;
        fragColor += vec4(lighting, 0.0);
    }
}
fragColor.a = 1.0;
The attenuation function is a linear function of the distance between the fragment position and each light source.
In this particular scene I use a linear attenuation value of 0.075, from which I derive the light's size/radius as:
Size = 1.0 / Linear;
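With Linear = 0.075 this gives Size = 1.0 / 0.075 ≈ 13.3, which is exactly the distance at which max(1.0 - Linear * distance, 0.0) reaches zero, so the attenuation itself should fade to nothing right at the cutoff rather than produce a visible seam.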
Some observations:
When I remove the distance check if(distance <= lights[i].Size) I don't get the weird border issue.
If I visualize the lighting value of a single light source and visualize the distance as distance/lights.Size I get the following 2 images:
which looks as if the light's radius/distance calculation and its attenuation are similar in radius.
When I change the distance check to if(distance <= lights[i].Size * 2.0f) (drastically increasing the light's radius) I get significantly less border banding, but if I look closely enough I still see the borders from time to time, so even that doesn't completely remove the issue.
I have no idea what is causing this and I am out of options at the moment.
This section:
vec3 lighting;
// Ambient
lighting += lights[i].Ambient * color;
You are never initializing lighting before you add to it. I think this can cause undefined behaviour. Try changing it to:
// Ambient
vec3 lighting = lights[i].Ambient * color;
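Reading an uninitialized local gives whatever value happens to be left over in it, and since only fragments inside the distance check execute that code path, garbage in lighting would show up exactly at the volume borders, which matches the artifact you describe.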
I just tried implementing specular highlights. The issue is that the farther I move from the surface, the stronger the highlight gets and the harsher its edge becomes. When I move too close to the surface, the highlight disappears completely.
This is the related part of my fragment shader. All computations are in view space. I use a directional sun light.
// samplers
vec3 normal = texture2D(normals, coord).xyz;
vec3 position = texture2D(positions, coord).xyz;
float shininess = texture2D(speculars, coord).x;
// normalize directional light source
vec3 source;
if(directional) source = position + normalize(light);
else source = light;
// reflection
float specular = 0.0;
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
int power = 5;
specular = shininess * pow(reflection, power);
// ...
// output
image = color * attenuation * intensity * (fraction + specular);
This is a screenshot of my lighting buffer. You can see that the foremost barrel has no specular highlight at all, while the ones far away shine much too strongly. The barrel in the middle is lit as desired.
What am I doing wrong?
You're calculating the reflection vector from the object position instead of using the inverted light direction (pointing from the object to the light source).
It's like using the V instead of the L in this diagram:
Also, I think shininess should be the exponent in your expression, not a linear multiplier on the specular contribution.
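A minimal sketch of both fixes combined, assuming light is the direction from the surface toward the light source and everything is in view space (L, V and R are local names introduced here for clarity):
// L: surface-to-light, V: surface-to-camera (camera at origin in view space)
vec3 L = normalize(light);
vec3 V = normalize(-position);
// reflect() expects the incoming direction, hence -L
vec3 R = reflect(-L, normal);
// shininess as the exponent, not a linear factor
float specular = pow(max(dot(R, V), 0.0), shininess);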
I think the variable naming is confusing you.
From what I'm reading (assuming you're in camera space, and without knowing the handedness):
vec3 lookat = vec3(0, 0, 1);
float reflection = max(0, dot(reflect(position, normal), lookat));
Your lookat is really a directional light, and position is the actual look-at vector.
Make sure normal (it's probably already normalized) and position (the look-at) are normalized.
Less confusing code would be:
vec3 light_direction = vec3(0, 0, 1);
vec3 lookat = normalize(position-vec3(0,0,0));
float reflection = max(0, dot(reflect(light_direction, normal), -lookat));
Without normalizing position, the reflection will be biased, and the bias grows as position gets farther from the camera at vec3(0, 0, 0).
Note how lookat is not a constant; it changes for each and every position. A constant lookat = vec3(0, 0, 1) only looks toward a single direction in view space.
I have a fragment shader that does parallax mapping and lighting computations. I've managed to trace the problem to the sampling of textures and/or the lighting computation, so I'll only post the code for that to keep it short:
// sample maps
vec3 albedo = texture2D(u_albedo_texture_1, p_texc).rgb;
vec3 ao = texture2D(u_ambientocclusion_texture, v_texcoord).rgb - 0.5; // ambient occlusion
vec3 normalT = texture2D(u_normalmap_texture, p_texc).rgb * 2.0 - 1.0; // tangent space normal
vec4 normalW = vec4(normalize(TBN * normalT).xyz, 1.0); // world space normal
vec3 color = vec3(0.0);
vec3 ambient = textureCube(u_cubemap_texture, -normalW.xyz).rgb; // get ambient lighting from cubemap
color += ambient * ao;
float d = length(light_vector);
float atten = 1.0 / dot(light_attenuation, vec3(1.0, d, d * d)); // compute attenuation of point light
// diffuse term
float ndotl = max(dot(normalW.xyz, light_vector), 0.0);
color += atten * ndotl * light_diffuse.rgb * albedo;
// specular term
float hdotn = max(0.0, dot(half_vector, normalW.xyz));
color += atten * ((shininess + 8.0) / 8.0 * 3.1416) * light_specular.rgb * p_ks * pow(hdotn, shininess);
gl_FragColor = vec4(color.rgb, 1.0);
I have this code working in other shaders, but here in the parallax mapping shader it just discards the pixels and I can't understand why. Instead of showing the object, it simply discards the pixels and the object doesn't show up at all. No wrong or messy colors, nothing.
I managed to trace the problem to the albedo and ambient sampled colors. They're both sampled correctly. If I print out
gl_FragColor = vec4(ambient.rgb, 1.0);
or
gl_FragColor = vec4(albedo.rgb, 1.0);
they both get displayed correctly, but if I try to do something with both colors, the pixels get discarded. None of the following work:
gl_FragColor = vec4(ambient + albedo, 1.0);
gl_FragColor = vec4(ambient * albedo, 1.0);
gl_FragColor = vec4(ambient.r, albedo.g, ambient.b, 1.0);
ambient is sampled from a cube map and albedo from a 2D texture. I have other samplers for ambient occlusion and normal mapping and those work and they also work in conjunction with ambient OR albedo.
So ambient + ao works and albedo + ao works.
So, this is my problem and I can't figure out what causes it. Like I said, I have the same code in another shader and that one works.
Worth mentioning: this runs on a PowerVR emulator for OpenGL ES 2.0, and the GPU is an NVIDIA GT220.
Also, feel free to point out other mistakes in my code, like inefficient stuff. If necessary, I'll post the full fragment shader code. Sorry for the long post :P
SOLVED
Wow, I can't believe the cause was so stupid, but I completely missed it. Basically, I forgot to load the cubemap for that particular model, so it was never sent to the shader. Still, how the hell did it sample something in the first place? I mean, it was working separately, so what did the texture unit sample? A cubemap left in the cache from previous processing?
Still, how the hell did it sample something in the first place?
Usually when my shaders don't produce the desired result but look annoyingly 'normal', it's because the shader compilation discarded them (I made an error somewhere) and the driver decided to use the 'fixed pipeline' shader instead...
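One way to rule that out is to query the compile and link status explicitly after building the program (glGetShaderiv with GL_COMPILE_STATUS, glGetProgramiv with GL_LINK_STATUS) and print the info logs, so a silent fallback turns into an error you can actually see.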