What is the best OpenGL self-shadowing technique? - opengl

I chose the quite old but sufficient method of shadow mapping, which is OK overall, but I quickly discovered some self-shadowing problems:
It seems this problem appears because of the bias offset, which is necessary to eliminate shadow acne artifacts.
After some googling, it seems there is no easy solution to this, so I tried some shader tricks, which worked, but not very well.
My first idea was to compute the dot product between the light direction vector and the normal vector. If the result is lower than 0, the angle between the vectors is greater than 90 degrees, so the surface is facing away from the light source and hence is not illuminated. This works well, except the shadows may appear too sharp and hard:
Since I was not satisfied with the results, I tried another trick: multiplying the shadow value by the absolute value of the dot product of the light direction and the normal vector (taken from the normal map). It did work (the hard shadows from the previous image got a smooth transition from shadow to the regular diffuse color), but it created another artifact in situations where the normal-map normal points somewhat towards the sun while the face normal does not. It also made self-shadows much brighter (but that is fixable):
Can I do something about it, or should I just choose the lesser evil?
Shadow shader code for example 1:
vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
float depthValue = abs(fragPosViewSpace.z);
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(FragPos, 1.0);
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// keep the shadow at 0.0 when outside the far_plane region of the light's frustum.
if (currentDepth > 1.0)
{
    return 0.0;
}
// calculate bias (based on depth map resolution and slope)
float bias = max(0.005 * (1.0 - dot(normal, lightDir)), 0.0005);
vec2 texelSize = 1.0 / vec2(textureSize(material.texture_shadow, 0));
const int sampleRadius = 2;
const float sampleRadiusCount = pow(float(sampleRadius * 2 + 1), 2.0);
for (int x = -sampleRadius; x <= sampleRadius; ++x)
{
    for (int y = -sampleRadius; y <= sampleRadius; ++y)
    {
        float pcfDepth = texture(material.texture_shadow, vec3(projCoords.xy + vec2(x, y) * texelSize, layer)).r;
        shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0;
    }
}
shadow /= sampleRadiusCount;
Hard self-shadow trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
// "Normal" is the face normal vector; "normal" is calculated from the normal map. I know the naming is confusing.
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = max(abs(vectorNormalDot), ambientShadow);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
Dot product multiplication trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = ambientShadow * abs(vectorNormalDot);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...

Related

How to implement refraction light in fragment shader?

I am working on an OpenGL ray tracer which is capable of loading obj files and ray-tracing them. My application loads the obj file with Assimp and then sends all of the triangle faces (with the primitive coordinates and the material coefficients) to the fragment shader using shader storage buffer objects. The basic structure is to render the result from the fragment shader to a quad.
I have trouble with the ray-tracing part in the fragment shader, but first let's introduce it.
For diffuse light, Lambert's cosine law is used; for specular light, the Phong-Blinn model. In the case of total reflection, a weight variable is used so that the reflected light has an effect on other objects as well. The weight is calculated by approximating the Fresnel equation with Schlick's method. In the image below, you can see that the plane works like a mirror, reflecting the image of the cube above.
I would like to make the cube appear as a glass object (like a glass sphere), which has both refracting and reflecting effects, or at least refracts the light. In the image above you can see a refracting effect on the cube, but it is not as good as it should be. I searched for examples of how to implement it, but so far I have only recognized that the Fresnel equation has to be used, just like in the reflection part.
Here is my fragment shader:
vec3 Fresnel(vec3 F0, float cosTheta) {
return F0 + (vec3(1, 1, 1) - F0) * pow(1-cosTheta, 5);
}
float schlickApprox(float Ni, float cosTheta){
float F0=pow((1-Ni)/(1+Ni), 2);
return F0 + (1 - F0) * pow((1 - cosTheta), 5);
}
vec3 trace(Ray ray){
vec3 weight = vec3(1, 1, 1);
const float epsilon = 0.0001f;
vec3 outRadiance = vec3(0, 0, 0);
int maxdepth=5;
for (int i=0; i < maxdepth; i++){
Hit hit=traverseBvhTree(ray);
if (hit.t<0){ return weight * lights[0].La; }
vec4 textColor = texture(texture1, vec2(hit.u, hit.v));
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
// Ambient Light
outRadiance+= materials[hit.mat].Ka.xyz * lights[0].La*textColor.xyz * weight;
// Diffuse light based on Lambert's cosine law
float cosTheta = dot(hit.normal, normalize(lights[0].direction));
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
outRadiance +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * weight;
// Specular light based on Phong-Blinn model
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
if (cosDelta > 0){
outRadiance +=weight * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
float fresnel=schlickApprox(materials[hit.mat].Ni, cosTheta);
// For refractive materials
if (materials[hit.mat].Ni < 3)
{
/* this is the under-construction part */
ray.orig = hit.orig - hit.normal*epsilon;
ray.dir = refract(ray.dir, hit.normal, materials[hit.mat].Ni);
}
// If the refraction index is more than 15, treat the material as mirror.
else if (materials[hit.mat].Ni >= 15) {
weight *= fresnel;
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
}
return outRadiance;
}
Update 1
I updated the trace method in the shader. As far as I understand the physics of light, if a material both reflects and refracts the light, I have to treat the two cases accordingly.
In the case of reflection, I added a weight to the calculation: weight *= fresnel.
In the case of refraction, the weight is weight *= 1 - fresnel.
Moreover, I calculate ray.orig and ray.dir for each case, and the refraction calculation only happens when it is not a case of total internal reflection (fresnel is smaller than 1).
The modified trace method:
vec3 trace(Ray ray){
vec3 weight = vec3(1, 1, 1);
const float epsilon = 0.0001f;
vec3 outRadiance = vec3(0, 0, 0);
int maxdepth=3;
for (int i=0; i < maxdepth; i++){
Hit hit=traverseBvhTree(ray);
if (hit.t<0){ return weight * lights[0].La; }
vec4 textColor = texture(texture1, vec2(hit.u, hit.v));
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
// Ambient Light
outRadiance+= materials[hit.mat].Ka.xyz * lights[0].La*textColor.xyz * weight;
// Diffuse light based on Lambert's cosine law
float cosTheta = dot(hit.normal, normalize(lights[0].direction));
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
outRadiance +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * weight;
// Specular light based on Phong-Blinn model
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
if (cosDelta > 0){
outRadiance +=weight * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
float fresnel=schlickApprox(materials[hit.mat].Ni, cosTheta);
// For refractive/reflective materials
if (materials[hit.mat].Ni < 7)
{
bool outside = dot(ray.dir, hit.normal) < 0;
// compute refraction if it is not a case of total internal reflection
if (fresnel < 1) {
ray.orig = outside ? hit.orig-hit.normal*epsilon : hit.orig+hit.normal*epsilon;
ray.dir = refract(ray.dir, hit.normal,materials[hit.mat].Ni);
weight *= 1-fresnel;
continue;
}
// compute reflection
ray.orig= outside ? hit.orig+hit.normal*epsilon : hit.orig-hit.normal*epsilon;
ray.dir= reflect(ray.dir, hit.normal);
weight *= fresnel;
continue;
}
// If the refraction index is 7 or more, treat the material as a mirror: total reflection
else if (materials[hit.mat].Ni >= 7) {
weight *= fresnel;
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
}
return outRadiance;
}
Here is a snapshot connected to the update. Slightly better, I guess.
Update 2:
I found an iterative algorithm that uses a stack to trace refracted and reflected rays in OpenGL; it is on page 68.
I modified my fragment shader according to this. It is almost fine, except for the back faces, which are completely black. Some pictures are attached.
Here is the trace method of my frag shader:
vec3 trace(Ray ray){
vec3 color = vec3(0.0);
float epsilon=0.001;
Stack stack[8];// max depth
int stackSize = 0;// current depth
int bounceCount = 0;
vec3 coeff = vec3(1, 1, 1);
bool continueLoop = true;
while (continueLoop){
Hit hit = traverseBvhTree(ray);
if (hit.t>0){
bounceCount++;
//----------------------------------------------------------------------------------------------------------------
Ray shadowRay;
shadowRay.orig = hit.orig + hit.normal * epsilon;
shadowRay.dir = normalize(lights[0].direction);
color+= materials[hit.mat].Ka.xyz * lights[0].La * coeff;
// Diffuse light
float cosTheta = dot(hit.normal, normalize(lights[0].direction)); // based on Lambert's cosine law
if (cosTheta>0 && traverseBvhTree(shadowRay).t<0) {
color +=lights[0].La * materials[hit.mat].Kd.xyz * cosTheta * coeff;
vec3 halfway = normalize(-ray.dir + lights[0].direction);
float cosDelta = dot(hit.normal, halfway);
// Specular light
if (cosDelta > 0){
color +=coeff * lights[0].Le * materials[hit.mat].Ks.xyz * pow(cosDelta, materials[hit.mat].shininess); }
}
//---------------------------------------------------------------------------------------------------------------
if (materials[hit.mat].indicator > 3.0 && bounceCount <=2){
float eta = 1.0/materials[hit.mat].Ni;
Ray refractedRay;
refractedRay.dir = dot(ray.dir, hit.normal) <= 0.0 ? refract(ray.dir, hit.normal, eta) : refract(ray.dir, -hit.normal, 1.0/eta);
bool totalInternalReflection = length(refractedRay.dir) < epsilon;
if(!totalInternalReflection){
refractedRay.orig = hit.orig + hit.normal*epsilon*sign(dot(ray.dir, hit.normal));
refractedRay.dir = normalize(refractedRay.dir);
stack[stackSize].coeff = coeff *(1 - schlickApprox(materials[hit.mat].Ni, dot(ray.dir, hit.normal)));
stack[stackSize].depth = bounceCount;
stack[stackSize++].ray = refractedRay;
}
else{
ray.dir = reflect(ray.dir, -hit.normal);
ray.orig = hit.orig - hit.normal*epsilon;
}
}
else if (materials[hit.mat].indicator == 0){
coeff *= schlickApprox(materials[hit.mat].Ni, dot(-ray.dir, hit.normal));
ray.orig=hit.orig+hit.normal*epsilon;
ray.dir=reflect(ray.dir, hit.normal);
}
else { //Diffuse Material
continueLoop=false;
}
}
else {
color+= coeff * lights[0].La;
continueLoop=false;
}
if (!continueLoop && stackSize > 0){
ray = stack[--stackSize].ray;
bounceCount = stack[stackSize].depth;
coeff = stack[stackSize].coeff;
continueLoop = true;
}
}
return color;
}

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see the result is not smooth.
Here, in a first pass, I render the position, normal and flux from the light's point of view into 3 textures with a resolution of 512 × 512.
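For reference, the light-space pass that fills those three textures could look roughly like the following fragment shader sketch; the names (worldPos, worldNormal, albedoTexture, lightColor) are my own assumptions, not the poster's actual code for that pass:
#version 450
// Sketch of the RSM first pass: render the scene from the light's point of view and
// write world-space position, normal and flux into three color attachments.
layout(location = 0) in vec3 worldPos;
layout(location = 1) in vec3 worldNormal;
layout(location = 2) in vec2 uv;
layout(binding = 1) uniform sampler2D albedoTexture;
layout(binding = 2) uniform LightUbo { vec3 lightColor; } light;
layout(location = 0) out vec4 outPosition;
layout(location = 1) out vec4 outNormal;
layout(location = 2) out vec4 outFlux;
void main()
{
    outPosition = vec4(worldPos, 1.0);
    outNormal   = vec4(normalize(worldNormal), 0.0);
    // Flux: light color modulated by the surface albedo at this point
    outFlux     = vec4(light.lightColor * texture(albedoTexture, uv).rgb, 1.0);
}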
In a second pass, I compute the indirect illumination from the first pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for(int i = 0; i < 151; i++)
{
vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
vec3 r = worldPos - indirectLightPos;
float distP2 = dot( r, r );
vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
indirectRSM += emission;
}
The problem is fixed.
The main issue was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.

SSAO implementation in Babylon JS and GLSL, using view ray for depth comparison

I'm trying to create my own SSAO shader in forward rendering (not in post processing) with GLSL. I'm encountering some issues, but I really can't figure out what's wrong with my code.
It is created with Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculation without projecting and unprojecting in screen space, I'd rather use the view ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an impact on my code.
Here are my classic steps:
First, I calculate the positions of the four corners of the camera's far plane in my JS code. They can be constant every time, as they are calculated in view space.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy/2) * ratio);
var xFarPlane = far * tan(fovx/2);
var yFarPlane = far * tan(fovy/2);
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);
var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];
for (var i = 0; i < 4; i++) {
var vecTemp = farCornersVec[i];
farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader; that is why the vector coordinates are serialized in the farCorners[] array.
In my vertex shader, the signs of position.x and position.y let the shader know which corner to use at each pass.
These corners are then interpolated in my fragment shader for calculating a view ray, i.e. a vector from the camera to the far plane (its .z component is, therefore, equal to the far plane distance to camera).
The fragment shader follows the instructions of John Chapman's tutorial (see commented code below).
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far, with:
gl_FragCoord.z: the non-linear depth
gl_FragCoord.w = 1/Wc, where Wc is the w component of the clip-space vertex position (i.e. gl_Position.w in the vertex shader)
far: the positive distance from camera to the far plane.
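In other words, the depth pass is assumed to write something equivalent to this sketch (an illustration of the formula above, not Babylon JS's actual source; pack() is the assumed counterpart of the unpack() helper used below):
precision highp float;
uniform float far; // positive distance from camera to the far plane
void main(void) {
    // Roughly linear depth in [0,1]: clip-space w reconstructed from gl_FragCoord, divided by far
    float depth = (gl_FragCoord.z / gl_FragCoord.w) / far;
    gl_FragColor = pack(depth); // encode the float into the 4 components of an RGBA8 texture
}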
The kernel samples are arranged in a hemisphere with random floats in [0,1], most of them distributed close to the origin with a linear interpolation.
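For reference, this is roughly how such a kernel sample can be built (a sketch in GLSL syntax for illustration only; the samples are actually generated on the JS side, and the random inputs r in [0,1] and the 0.1..1.0 scale range are my own assumptions):
// Builds kernel sample i of kernelSize from three random floats r in [0,1]:
// a random direction in the +Z hemisphere, pulled toward the origin by an
// interpolated scale so that nearby occluders are sampled more densely.
vec3 makeKernelSample(int i, int kernelSize, vec3 r)
{
    vec3 dir = normalize(vec3(r.x * 2.0 - 1.0, r.y * 2.0 - 1.0, r.z)); // tangent-space hemisphere
    float t = float(i) / float(kernelSize);
    float scale = mix(0.1, 1.0, t); // linear interpolation; t * t would pull samples even closer to the origin
    return dir * scale;
}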
As I don't have a normal texture, I calculate them from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
vec2 offsetX = vec2(texelSize.x, 0.0);
vec2 offsetY = vec2(0.0, texelSize.y);
// texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)
float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour
vec3 pX = vec3(offsetX, depthOffsetX - depth);
vec3 pY = vec3(offsetY, depthOffsetY - depth);
vec3 normal = cross(pY, pX);
normal.z = -normal.z; // We want normal.z positive
return normalize(normal); // [-1,1]
}
Finally, my getDepth() function allows me to get the depth value at current UV in 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
return unpack(texture2D(tex, texcoord));
// unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
Here are my vertex and fragment shader codes (without function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;
uniform float fov;
uniform float far;
uniform vec3 farCorners[4];
attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv; // UV of each vertex (4) of the quad
varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main(void) {
vPosition = position;
vUV = uv;
// Map current vertex with associated frustum corner position in view space:
// 0: top left, 1: top right, 2: bottom right, 3: bottom left
// This frustum corner position will be interpolated so that the pixel shader always has a ray from camera->far-clip plane.
vCornerPositionVS = vec3(0.0);
if (positionVS.x > 0.0) {
if (positionVS.y <= 0.0) { // top left
vCornerPositionVS = farCorners[0];
}
else if (positionVS.y > 0.0) { // top right
vCornerPositionVS = farCorners[1];
}
}
else if (positionVS.x <= 0.0) {
if (positionVS.y > 0.0) { // bottom right
vCornerPositionVS = farCorners[2];
}
else if (positionVS.y <= 0.0) { // bottom left
vCornerPositionVS = farCorners[3];
}
}
gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;
uniform mat4 projection; // Projection matrix
uniform float radius; // Scaling factor for sample position, by default = 1.7
uniform float depthBias; // 1e-5
uniform vec2 noiseScale; // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4
varying vec3 vCornerPositionVS; // vCornerPositionVS is the interpolated position calculated from the 4 far corners
void main() {
// Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
float fragDepth = getDepth(depthBufferTexture, vUV);
float occlusion = 0.0;
if (fragDepth < 1.0) {
// Retrieve fragment's view space normal
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
// Random rotation: rvec.xyz are the components of the generated random vector
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
rvec.z = 0.0; // Random rotation around Z axis
// Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
vec3 viewRayVS = vCornerPositionVS / far;
// Current fragment's view space position
vec3 fragPositionVS = viewRayVS * fragDepth;
// Creation of TBN matrix
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
for (int i = 0; i < NB_SAMPLES; i++) {
// Get sample kernel position, from tangent space to view space
vec3 samplePosition = tbn * kernelSamples[i];
// Add VS kernel offset sample to fragment's VS position
samplePosition = samplePosition * radius + fragPositionVS;
// Project sample position from view space to screen space:
vec4 offset = vec4(samplePosition, 1.0);
offset = projection * offset; // To clip space
offset.xy /= offset.w; // Perspective division
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth:
float sampleDepth = getDepth(depthTexture, offset.xy);
float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
// Reminder: fragDepth == fragPositionVS.z
// Range check and accumulate if fragment contributes to occlusion:
occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
}
}
// Inversion
float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
A horizontal and vertical Gaussian blur shader then removes the noise generated by the random texture.
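For reference, one direction of such a separable blur could look like this sketch (the uniform names ssaoTexture and direction, and the 5-tap Gaussian weights, are my own assumptions, not the poster's blur shader):
precision highp float;
uniform sampler2D ssaoTexture; // result of the SSAO pass
uniform vec2 texelSize;        // (1/SSAOSize.x, 1/SSAOSize.y)
uniform vec2 direction;        // (1.0, 0.0) for the horizontal pass, (0.0, 1.0) for the vertical pass
varying vec2 vUV;
void main() {
    float weights[5];
    weights[0] = 0.227027; weights[1] = 0.194595; weights[2] = 0.121622;
    weights[3] = 0.054054; weights[4] = 0.016216;
    float result = texture2D(ssaoTexture, vUV).r * weights[0];
    for (int i = 1; i < 5; i++) {
        vec2 offset = direction * texelSize * float(i);
        result += texture2D(ssaoTexture, vUV + offset).r * weights[i];
        result += texture2D(ssaoTexture, vUV - offset).r * weights[i];
    }
    gl_FragColor = vec4(vec3(result), 1.0);
}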
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close shadows are not very strong... Does anyone see something wrong or weird in my code?
Thanks a lot!
fragPositionVS is a position in view space coordinates, and radius is a length in view space. You use them to calculate the samplePosition:
samplePosition = samplePosition * radius + fragPositionVS;
But in the line rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;, you compare the difference of fragDepth and sampleDepth with radius. That makes no sense, since fragDepth and sampleDepth are values from the depth buffer in the range [0, 1], while radius is a length in view space.
In the line occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;, you calculate the difference of samplePosition.z and sampleDepth. While samplePosition.z is a view space coordinate between -near and -far, sampleDepth is a depth in the range [0, 1]. Calculating a difference between these two values doesn't make any sense either.
I suggest always using Z coordinates if you want to calculate or compare distances.
If you have a depth value, the Z-coordinate in view space can be calculated by converting the depth value to normalized device coordinate and converting the normalized device coordinate to a view coordinate:
float DepthToZ( in float depth )
{
float near = .... ; // distance to near plane (absolute value)
float far = .... ; // distance to far plane (absolute value)
float z_ndc = 2.0 * depth - 1.0;
float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
return -z_eye;
}
The depth is a value in the range [0, 1] that maps the range from the distance to the near plane to the distance to the far plane (in view space), but not linearly (for perspective projection).
For this reason, the code line vec3 fragPositionVS = (vCornerPositionVS / far) * fragDepth; will not calculate a correct fragment position, but you can do it like this:
vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
Note that in view space the z axis points out of the viewport. If the corner positions are set up in view space, then the Z-coordinate has to be the negative distance to the far plane:
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, -far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, -far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, -far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, -far);
In the vertex shader the assignment of the corner positions is mixed. The lower left position of the viewport is (-1,-1) and the top right position is (1,1) (in normalized device coordinates). Adapt the code like this:
JavaScript:
var farCornersVec = [bottomLeft, bottomRight, topLeft, topRight];
Vertex shader:
// bottomLeft=0*2+0*1, bottomRight=0*2+1*1, topLeft=1*2+0*1, topRight=1*2+1*1;
int i = (positionVS.y > 0.0 ? 2 : 0) + (positionVS.x > 0.0 ? 1 : 0);
vCornerPositionVS = farCorners[i];
Note that if you added an additional vertex attribute for the corner position, this would be simplified (see the sketch below).
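A sketch of that simplification, assuming an extra per-vertex attribute (here called cornerPositionVS) that is filled from the JavaScript side with the matching far corner for each of the 4 vertices:
// Vertex shader sketch: the far-plane corner comes in as its own attribute,
// so no branching on the vertex position is needed.
precision highp float;
attribute vec3 position;
attribute vec2 uv;
attribute vec3 cornerPositionVS; // per-vertex far corner in view space
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main(void) {
    vUV = uv;
    vCornerPositionVS = cornerPositionVS; // interpolated across the quad
    gl_Position = vec4(position * 2.0, 1.0);
}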
The calculation of the fragment position can be simplified, if the aspect ratio, the field of view angle and the normalized device coordinates of the fragment (fragment position in range [-1,1]) are known:
ndc_xy = vUV * 2.0 - 1.0;
tanFov_2 = tan( radians( fov / 2 ) )
aspect = vp_size_x / vp_size_y
fragZ = DepthToZ( fragDepth );
fragPos = vec3( ndc_xy.x * aspect * tanFov_2, ndc_xy.y * tanFov_2, -1.0 ) * abs( fragZ );
If the perspective projection matrix is known, this can be calculated easily:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec4 viewH = inverse( projection ) * vec4( ndc_xy, fragDepth * 2.0 - 1.0, 1.0 );
vec3 fragPosition = viewH.xyz / viewH.w;
If the perspective projection is symmetric (the field of view is not displaced and the Z axis of view space is in the center of the viewport), this can be simplified:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPosition = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs(DepthToZ(fragDepth));
See also:
How to recover view space position given view space depth value and ndc xy
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I suggest writing the fragment shader somewhat like this:
float fragDepth = getDepth(depthBufferTexture, vUV);
float ambientOcclusion = 1.0;
if (fragDepth > 0.0)
{
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0;
rvec.z = 0.0;
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
mat3 tbn = mat3(tangent, cross(normal, tangent), normal);
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPositionVS = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs( DepthToZ(fragDepth) );
// vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
float occlusion = 0.0;
for (int i = 0; i < NB_SAMPLES; i++)
{
vec3 samplePosition = fragPositionVS + radius * tbn * kernelSamples[i];
// Project sample position from view space to screen space:
vec4 offset = projection * vec4(samplePosition, 1.0);
offset.xy /= offset.w; // Perspective division -> [-1,1]
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth
float sampleZ = DepthToZ( getDepth(depthTexture, offset.xy) );
// Range check and accumulate if fragment contributes to occlusion:
float rangeCheck = step( abs(fragPositionVS.z - sampleZ), radius );
occlusion += step( samplePosition.z - sampleZ, -depthBias ) * rangeCheck;
}
// Inversion
ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
}
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
See the WebGL example, which demonstrates the full algorithm (unfortunately the full code would exceed the 30,000-character limit for an answer):
JSFiddle or GitHub
Extension to the answer
The depth as it would be stored in the depth buffer is calculated like this:
(see OpenGL ES write depth data to color)
float ndc_depth = vPosPrj.z / vPosPrj.w;
float depth = ndc_depth * 0.5 + 0.5;
This value is already calculated in the fragment shader and is contained in gl_FragCoord.z. See the Khronos Group reference page for gl_FragCoord which says:
The z component is the depth value that would be used for the fragment's depth if no shader contained any writes to gl_FragDepth.
If the depth has to be stored in a RGBA8 buffer, the depth has to be encoded to the 4 bytes of the buffer to avoid a loss of accuracy, and has to be decoded when read from the buffer:
encode
vec3 PackDepth( in float depth )
{
float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
vec4 encode = fract( depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
decode
float UnpackDepth( in vec3 pack )
{
float depth = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
return depth * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
See also the answers to the following questions:
How do I convert between float and vec4,vec3,vec2?
OpenGL ES write depth data to color
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?

Stable Shadow Mapping

I'm trying to stabilise the shadows in my 3D renderer. I'm using CSMs.
Here's the code I've got, making no attempt to stabilise. The size of the projection in world space should at least be constant though:
void SkyLight::update() {
// direction is the direction that the light is facing
vec3 tangent = sq::make_tangent(direction);
for (int i = 0; i < 4; i++) {
// .first is the far plane, .second is a struct of 8 vec3 making a world space camera frustum
const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
// calculates the bounding box centre of the frustum
vec3 frusCentre = sq::calc_frusCentre(csm.second);
mat4 viewMat = glm::lookAt(frusCentre-direction, frusCentre, tangent);
mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
matArr[i] = projMat * viewMat;
}
}
That works. But the shadows flicker and swim like mad. So, here's an attempt at stabilising, by trying to snap them to texel-sized increments, as recommended everywhere but seemingly never explained:
void SkyLight::update() {
vec3 tangent = sq::make_tangent(direction);
for (int i = 0; i < 4; i++) {
const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
vec3 frusCentre = sq::calc_frusCentre(csm.second);
double qStep = csm.first / 1024.0; // shadow texture size
frusCentre.x = std::round(frusCentre.x / qStep) * qStep;
frusCentre.y = std::round(frusCentre.y / qStep) * qStep;
frusCentre.z = std::round(frusCentre.z / qStep) * qStep;
mat4 viewMat = glm::lookAt(frusCentre-direction, frusCentre, tangent);
mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
matArr[i] = projMat * viewMat;
}
}
This makes a difference, in that the shadows now swim slowly rather than bouncing too fast to notice any pattern. However, I'm pretty sure that's just by chance, and that I'm not snapping by the right amount, or even snapping the right thing.
To fix this, I need to do the snapping in light space, not world space (2.f * split / texSize is the size of one shadow-map texel in light space, since the orthographic projection spans 2 * split units across texSize texels):
viewMat[3][0] -= glm::mod(viewMat[3][0], 2.f * split / texSize);
viewMat[3][1] -= glm::mod(viewMat[3][1], 2.f * split / texSize);
viewMat[3][2] -= glm::mod(viewMat[3][2], 2.f * split / texSize);
Old (Wrong) answer:
So, I revisited this today and managed to solve it in about 10 minutes :D
Just round frusCentre like this:
frusCentre -= glm::mod(frusCentre, 2.f * csm.first / 1024.f);
or, more generally:
frusCentre -= glm::mod(frusCentre, 2.f * split / texSize);
EDIT: No, that's no good... I'll keep trying.

GLSL OpenGL: Each Light Added Gets Darker

I have a scene that works perfectly with one light. However, when I add two more, each new addition becomes dimmer until it is almost unseen. Are the attenuation factors wrong, or could it be something else?
int i = 0;
for(i=0; i<3; i++){
if (lights[i].enabled == 1.0){
//Lighting Attributes
vec4 light_position = vec4(lights[i].position,1.0);
vec4 light_ambient = lights[i].ambient;
vec4 light_diffuse = lights[i].diffuse;
vec4 light_specular = lights[i].specular;
float light_att_constant = 1.0;
float light_att_linear = 0.0;
float light_att_quadratic = 0.01;
float light_shine = 1.0;
//Object Attributes
vec3 obj_position = n_vertex;
vec3 obj_normals = n_normal;
vec4 obj_color = n_colors;
//Calc Distance
vec3 distance_LO = (obj_position - light_position.xyz);
float distance = length(distance_LO);
//Normalize some attributes
vec3 n_light_position = normalize(distance_LO);
//Apply ambience
finalColor *= light_ambient * global_ambient;
//Calc Cosine of Normal and Light
float NdotL = max(dot(obj_normals, n_light_position),0.0);
//Calc Eye Vector (negated position)
vec3 eye_view = -obj_position;
//Check if Surface is facing the Light
if (NdotL > 0){
//Apply lambertian reflection
finalColor += obj_color * light_diffuse * NdotL;
//Calc the half-vector
vec3 half_vector = normalize(light_position.xyz + eye_view);
//Calc angle between normal and half-vector
//See the engine notebook for a diagram.
float NdotHV = max(dot(obj_normals, half_vector), 0.0);
//Apply Specularity
finalColor += obj_color * light_specular * pow(NdotHV, light_shine);
}
//Calc Attenuation
float attenuation = light_att_constant / ((1.0 + light_att_linear * distance) * (1.0 + light_att_quadratic * distance * distance));
//Apply Attenuation
finalColor = finalColor * attenuation;
}
}
color = vec4(finalColor.rgb, 1.0);
You multiply in your colours. This means that shadows will get darker.
If you have an area at some relative brightness of 1/2 and you then multiply it by 1/2 (the contribution from that light), you'll get 1/4.
If you have Photoshop or GIMP you can test this yourself with the Multiply blending mode and three overlapping circles: pure red, pure green and pure blue. Compare Multiply to Linear Dodge (the plus operation in Photoshop).
Here's an example.
You'll most certainly want an additive effect, that is, add the terms together.
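A sketch of what that additive structure could look like; the names follow the question's snippet and the attenuation constants are the ones from the question, so this is only an illustration of the structure, not a drop-in fix:
vec4 finalColor = vec4(0.0);
for (int i = 0; i < 3; i++) {
    if (lights[i].enabled == 1.0) {
        vec3 toLight = lights[i].position - n_vertex; // from the surface towards the light
        float dist = length(toLight);
        vec3 L = normalize(toLight);
        float NdotL = max(dot(n_normal, L), 0.0);
        // Build this light's contribution on its own, additively.
        vec4 contribution = lights[i].ambient * global_ambient;
        if (NdotL > 0.0) {
            contribution += n_colors * lights[i].diffuse * NdotL;
            vec3 halfVector = normalize(L + normalize(-n_vertex));
            float NdotHV = max(dot(n_normal, halfVector), 0.0);
            contribution += n_colors * lights[i].specular * pow(NdotHV, 1.0);
        }
        // Attenuate only this light's contribution, then accumulate it.
        float attenuation = 1.0 / (1.0 + 0.01 * dist * dist);
        finalColor += contribution * attenuation;
    }
}
color = vec4(finalColor.rgb, 1.0);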