I am trying to create a simple subsurface scattering effect using a shader but I am facing a small issue.
Look at those screenshots. The three images represent three lighting states (above the surface, very close to the surface, subsurface) with different light colors (red and blue) and always the same subsurface color (red).
As you might notice, when the light is above the surface and very close to it, its influence appears to diminish, which is the expected behavior. The problem is that it behaves the same way for the subsurface part. This is consistent with my shader code, but in my opinion the subsurface light influence should become higher as the light gets close to the surface. Have a look at the screenshot for the expected result.
How can I do that?
Here is the simplified shader code.
half ndotl = max(0.0f, dot(normalWorld, lightDir));
half inversendotl = max(0.0f, dot(normalWorld, -lightDir));
half3 lightColor = _LightColor0.rgb * ndotl; // This is the normal light color calculation
half3 subsurfacecolor = translucencyColor.rgb * inversendotl; // This is the subsurface color
half3 topsubsurfacecolor = translucencyColor.rgb; // This is used for adding subsurface color to top surface
half3 final = subsurfacecolor + lerp(lightColor, topsubsurfacecolor * 0.5, 1 - ndotl - inversendotl);
The way you have implemented the subsurface scattering effect is very rough. It is hard to achieve a nice result with such a simple approach.
Staying within the chosen approach, I would recommend the following:
Take the distance to the light source into account, according to the inverse square law. This applies to both components, the direct light and the subsurface term.
Once the light is behind the surface, it is better to ignore the dot product of the inverted normal and the direction to the light, because you never know how the light actually travels through the object. Another reason is that the law of refraction (assuming the refraction index of the object is higher than that of air) makes this dot product less influential. You can simply use a step function to turn the subsurface component on once the light source is behind the surface.
So, the modified version of your shader would be as follows:
half3 toLightVector = u_lightPos - v_fragmentPos;
half lightDistanceSQ = dot(toLightVector, toLightVector);
half3 lightDir = normalize(toLightVector);
half ndotl = max(0.0, dot(v_normal, lightDir));
half inversendotl = step(0.0, dot(v_normal, -lightDir));
half3 lightColor = _LightColor0.rgb * ndotl / lightDistanceSQ * _LightIntensity0;
half3 subsurfacecolor = translucencyColor.rgb * inversendotl / lightDistanceSQ * _LightIntensity0;
half3 final = subsurfacecolor + lightColor;
Here u_lightPos is a uniform that contains the position of the light source, and v_fragmentPos is a varying that contains the position of the fragment.
Here is an example in GLSL using three.js:
var container;
var camera, scene, renderer;
var sssMesh;
var lightSourceMesh;
var sssUniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 3000);
camera.position.z = 4;
camera.position.y = 2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var boxGeometry = new THREE.CubeGeometry(0.75, 0.75, 0.75);
var lightSourceGeometry = new THREE.CubeGeometry(0.1, 0.1, 0.1);
sssUniforms = {
u_lightPos: {
type: "v3",
value: new THREE.Vector3()
}
};
var sssMaterial = new THREE.ShaderMaterial({
uniforms: sssUniforms,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
var lightSourceMaterial = new THREE.MeshBasicMaterial();
sssMesh = new THREE.Mesh(boxGeometry, sssMaterial);
sssMesh.position.x = 0;
sssMesh.position.y = 0;
scene.add(sssMesh);
lightSourceMesh = new THREE.Mesh(lightSourceGeometry, lightSourceMaterial);
lightSourceMesh.position.x = 0;
lightSourceMesh.position.y = 0;
scene.add(lightSourceMesh);
renderer = new THREE.WebGLRenderer();
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
var lightHeight = Math.sin(clock.elapsedTime * 1.0) * 0.5 + 0.7;
lightSourceMesh.position.y = lightHeight;
sssUniforms.u_lightPos.value.y = lightHeight;
sssMesh.rotation.y += delta * 0.5;
renderer.render(scene, camera);
}
body {
color: #ffffff;
background-color: #050505;
margin: 0px;
overflow: hidden;
}
<script src="http://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
varying vec3 v_fragmentPos;
varying vec3 v_normal;
uniform vec3 u_lightPos;
void main(void)
{
vec3 _LightColor0 = vec3(1.0,0.5,0.5);
float _LightIntensity0 = 0.2;
vec3 translucencyColor = vec3(0.8,0.2,0.2);
vec3 toLightVector = u_lightPos - v_fragmentPos;
float lightDistanceSQ = dot(toLightVector, toLightVector);
vec3 lightDir = normalize(toLightVector);
float ndotl = max(0.0, dot(v_normal, lightDir));
float inversendotl = step(0.0, dot(v_normal, -lightDir));
vec3 lightColor = _LightColor0.rgb * ndotl / lightDistanceSQ * _LightIntensity0;
vec3 subsurfacecolor = translucencyColor.rgb * inversendotl / lightDistanceSQ * _LightIntensity0;
vec3 final = subsurfacecolor + lightColor;
gl_FragColor=vec4(final,1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
varying vec3 v_fragmentPos;
varying vec3 v_normal;
void main()
{
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
v_fragmentPos = (modelMatrix * vec4( position, 1.0 )).xyz;
v_normal = (modelMatrix * vec4( normal, 0.0 )).xyz;
gl_Position = projectionMatrix * mvPosition;
}
</script>
There is a large number of different techniques for simulating SSS.
Texture-space diffusion and shadow-map-based translucency are the most frequently used ones.
Check this article from GPU Gems; it describes the mentioned techniques.
You may also find this presentation from EA interesting. It mentions an approach very close to yours for rendering plants.
Spherical harmonics also work well for static geometry, but this approach is quite complicated and needs precomputed irradiance transfer. Check this article, which shows the use of spherical harmonics to approximate SSS.
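As a minimal illustration of the shadow-map-based idea (this is a sketch of the general depth-based translucency trick, not code from any of the linked articles, and all names such as u_lightDepthMap, u_lightViewProj and u_sigma are assumptions): render a depth map from the light, estimate how far the light travels inside the object as the difference between the fragment's depth seen from the light and the depth stored in the map, and attenuate the translucency exponentially with that thickness.
uniform sampler2D u_lightDepthMap; // depth rendered from the light's point of view, in [0,1]
uniform mat4 u_lightViewProj;      // world space -> light clip space
uniform float u_sigma;             // how quickly light is absorbed inside the object
varying vec3 v_fragmentPos;        // world-space fragment position
float estimateThickness() {
    vec4 lightClip = u_lightViewProj * vec4(v_fragmentPos, 1.0);
    vec3 lightNDC = lightClip.xyz / lightClip.w;               // [-1,1]
    vec2 shadowUV = lightNDC.xy * 0.5 + 0.5;                   // [0,1]
    float enterDepth = texture2D(u_lightDepthMap, shadowUV).r; // depth where the light enters the object
    float exitDepth = lightNDC.z * 0.5 + 0.5;                  // depth of this fragment, seen from the light
    return max(exitDepth - enterDepth, 0.0);                   // approximate thickness along the light direction
}
// e.g. vec3 subsurface = translucencyColor.rgb * exp(-estimateThickness() * u_sigma);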
I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see, the result is not smooth.
In a first pass, I render the position, normal and flux from the light's point of view into three textures with a resolution of 512 * 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
// Accumulate indirect illumination from 151 RSM samples around the projected position.
for(int i = 0; i < 151; i++)
{
    // Offset inside the RSM, scaled by the sampling radius (0.09).
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
    // Attributes of the virtual point light (VPL) written in the first pass.
    vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
    vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
    vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
    // Vector from the VPL to the shaded point.
    vec3 r = worldPos - indirectLightPos;
    float distP2 = dot( r, r );
    // Orientation terms between the VPL and the shaded point (unnormalized vectors).
    vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
    // Sample weighting and squared-distance falloff, as in the paper.
    emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
    indirectRSM += emission;
}
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.
I followed the tutorial at Learn OpenGL to implement Screenspace Ambient Occlusion. Things are mostly looking okay besides a strange artifact at the top and bottom of the window.
The problem is more obvious when moving the camera, when it appears as if the top parts of the image are imprinted on the bottom and vice versa, as shown in this video.
The artifact worsens when standing close to a wall and looking up and down, so perhaps the Znear value is contributing? The scale of my scene does seem small compared to other demos; Znear and Zfar are 0.01f and 1000, and the width of the shown hallway is around 1.2f.
I've read into the common SSAO artifacts and haven't found anything resembling this.
#version 330 core
in vec2 TexCoords;
layout (location = 0) out vec3 FragColor;
uniform sampler2D MyTexture0; // Position
uniform sampler2D MyTexture1; // Normal
uniform sampler2D MyTexture2; // TexNoise
const int samples = 64;
const float radius = 0.25;
const float bias = 0.025;
uniform mat4 projectionMatrix;
uniform float screenWidth;
uniform float screenHeight;
void main()
{
//tile noise texture over screen based on screen dimensions divided by noise size
vec2 noiseScale = vec2(screenWidth/4.0, screenHeight/4.0);
vec3 sample_sphere[64];
sample_sphere[0] = vec3(0.04977, -0.04471, 0.04996);
sample_sphere[1] = vec3(0.01457, 0.01653, 0.00224);
sample_sphere[2] = vec3(-0.04065, -0.01937, 0.03193);
sample_sphere[3] = vec3(0.01378, -0.09158, 0.04092);
sample_sphere[4] = vec3(0.05599, 0.05979, 0.05766);
sample_sphere[5] = vec3(0.09227, 0.04428, 0.01545);
sample_sphere[6] = vec3(-0.00204, -0.0544, 0.06674);
sample_sphere[7] = vec3(-0.00033, -0.00019, 0.00037);
sample_sphere[8] = vec3(0.05004, -0.04665, 0.02538);
sample_sphere[9] = vec3(0.03813, 0.0314, 0.03287);
sample_sphere[10] = vec3(-0.03188, 0.02046, 0.02251);
sample_sphere[11] = vec3(0.0557, -0.03697, 0.05449);
sample_sphere[12] = vec3(0.05737, -0.02254, 0.07554);
sample_sphere[13] = vec3(-0.01609, -0.00377, 0.05547);
sample_sphere[14] = vec3(-0.02503, -0.02483, 0.02495);
sample_sphere[15] = vec3(-0.03369, 0.02139, 0.0254);
sample_sphere[16] = vec3(-0.01753, 0.01439, 0.00535);
sample_sphere[17] = vec3(0.07336, 0.11205, 0.01101);
sample_sphere[18] = vec3(-0.04406, -0.09028, 0.08368);
sample_sphere[19] = vec3(-0.08328, -0.00168, 0.08499);
sample_sphere[20] = vec3(-0.01041, -0.03287, 0.01927);
sample_sphere[21] = vec3(0.00321, -0.00488, 0.00416);
sample_sphere[22] = vec3(-0.00738, -0.06583, 0.0674);
sample_sphere[23] = vec3(0.09414, -0.008, 0.14335);
sample_sphere[24] = vec3(0.07683, 0.12697, 0.107);
sample_sphere[25] = vec3(0.00039, 0.00045, 0.0003);
sample_sphere[26] = vec3(-0.10479, 0.06544, 0.10174);
sample_sphere[27] = vec3(-0.00445, -0.11964, 0.1619);
sample_sphere[28] = vec3(-0.07455, 0.03445, 0.22414);
sample_sphere[29] = vec3(-0.00276, 0.00308, 0.00292);
sample_sphere[30] = vec3(-0.10851, 0.14234, 0.16644);
sample_sphere[31] = vec3(0.04688, 0.10364, 0.05958);
sample_sphere[32] = vec3(0.13457, -0.02251, 0.13051);
sample_sphere[33] = vec3(-0.16449, -0.15564, 0.12454);
sample_sphere[34] = vec3(-0.18767, -0.20883, 0.05777);
sample_sphere[35] = vec3(-0.04372, 0.08693, 0.0748);
sample_sphere[36] = vec3(-0.00256, -0.002, 0.00407);
sample_sphere[37] = vec3(-0.0967, -0.18226, 0.29949);
sample_sphere[38] = vec3(-0.22577, 0.31606, 0.08916);
sample_sphere[39] = vec3(-0.02751, 0.28719, 0.31718);
sample_sphere[40] = vec3(0.20722, -0.27084, 0.11013);
sample_sphere[41] = vec3(0.0549, 0.10434, 0.32311);
sample_sphere[42] = vec3(-0.13086, 0.11929, 0.28022);
sample_sphere[43] = vec3(0.15404, -0.06537, 0.22984);
sample_sphere[44] = vec3(0.05294, -0.22787, 0.14848);
sample_sphere[45] = vec3(-0.18731, -0.04022, 0.01593);
sample_sphere[46] = vec3(0.14184, 0.04716, 0.13485);
sample_sphere[47] = vec3(-0.04427, 0.05562, 0.05586);
sample_sphere[48] = vec3(-0.02358, -0.08097, 0.21913);
sample_sphere[49] = vec3(-0.14215, 0.19807, 0.00519);
sample_sphere[50] = vec3(0.15865, 0.23046, 0.04372);
sample_sphere[51] = vec3(0.03004, 0.38183, 0.16383);
sample_sphere[52] = vec3(0.08301, -0.30966, 0.06741);
sample_sphere[53] = vec3(0.22695, -0.23535, 0.19367);
sample_sphere[54] = vec3(0.38129, 0.33204, 0.52949);
sample_sphere[55] = vec3(-0.55627, 0.29472, 0.3011);
sample_sphere[56] = vec3(0.42449, 0.00565, 0.11758);
sample_sphere[57] = vec3(0.3665, 0.00359, 0.0857);
sample_sphere[58] = vec3(0.32902, 0.0309, 0.1785);
sample_sphere[59] = vec3(-0.08294, 0.51285, 0.05656);
sample_sphere[60] = vec3(0.86736, -0.00273, 0.10014);
sample_sphere[61] = vec3(0.45574, -0.77201, 0.00384);
sample_sphere[62] = vec3(0.41729, -0.15485, 0.46251);
sample_sphere[63] = vec3 (-0.44272, -0.67928, 0.1865);
// get input for SSAO algorithm
vec3 fragPos = texture(MyTexture0, TexCoords).xyz;
vec3 normal = normalize(texture(MyTexture1, TexCoords).rgb);
vec3 randomVec = normalize(texture(MyTexture2, TexCoords * noiseScale).xyz);
// create TBN change-of-basis matrix: from tangent-space to view-space
vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
// iterate over the sample kernel and calculate occlusion factor
float occlusion = 0.0;
for(int i = 0; i < samples; ++i)
{
// get sample position
vec3 sample = TBN * sample_sphere[i]; // from tangent to view-space
sample = fragPos + sample * radius;
// project sample position (to sample texture) (to get position on screen/texture)
vec4 offset = vec4(sample, 1.0);
offset = projectionMatrix * offset; // from view to clip-space
offset.xyz /= offset.w; // perspective divide
offset.xyz = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0
// get sample depth
float sampleDepth = texture(MyTexture0, offset.xy).z;
// range check & accumulate
float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / samples);
FragColor = vec3(occlusion);
}
As Rabbid76 suggested, the artifacts were caused by sampling outside of the screen borders. I added a check to prevent this and things are looking much better.
vec4 clipSpacePos = projectionMatrix * vec4(sample, 1.0); // from view to clip-space
vec3 ndcSpacePos = clipSpacePos.xyz / clipSpacePos.w; // perspective divide
vec2 windowSpacePos = ((ndcSpacePos.xy + 1.0) / 2.0) * vec2(screenWidth, screenHeight);
if ((windowSpacePos.y > 0) && (windowSpacePos.y < screenHeight))
if ((windowSpacePos.x > 0) && (windowSpacePos.x < screenWidth))
// THEN APPLY AMBIENT OCCLUSION
It hasn't entirely fixed the issue though, as areas close to the window's edge now appear lighter than they should because fewer samples are tested there. Perhaps somebody can suggest an approach that moves the sample area to an appropriate location?
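One possible direction, sketched below as an assumption rather than a verified fix (the local variable is named samplePos here): skip samples whose projected coordinates fall outside [0,1] and divide the accumulated occlusion by the number of samples that were actually tested, so fragments near the border are not biased brighter.
float occlusion = 0.0;
float validSamples = 0.0;
for (int i = 0; i < samples; ++i)
{
    vec3 samplePos = TBN * sample_sphere[i];
    samplePos = fragPos + samplePos * radius;
    vec4 offset = projectionMatrix * vec4(samplePos, 1.0);
    offset.xyz /= offset.w;
    offset.xyz = offset.xyz * 0.5 + 0.5;
    // Ignore samples that land outside the screen instead of letting them dilute the result.
    if (offset.x < 0.0 || offset.x > 1.0 || offset.y < 0.0 || offset.y > 1.0)
        continue;
    float sampleDepth = texture(MyTexture0, offset.xy).z;
    float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
    occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
    validSamples += 1.0;
}
// Normalize by the samples that were actually evaluated; fall back to 1.0 if all were off-screen.
occlusion = validSamples > 0.0 ? 1.0 - (occlusion / validSamples) : 1.0;
FragColor = vec3(occlusion);
Whether this fully removes the brightening depends on the scene; another common workaround is to clamp or mirror the sample coordinates at the screen border.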
I'm trying to create my own SSAO shader in forward rendering (not in post processing) with GLSL. I'm encountering some issues, but I really can't figure out what's wrong with my code.
It is created with Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculations without projecting and unprojecting in screen space; I'd rather use the view-ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an impact on my code.
Here are my classic steps:
First, I calculate the positions of the four far-plane corners of my camera in my JS code. They are effectively constants, as they are calculated in view space.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy/2) * ratio);
var xFarPlane = far * tan(fovx/2);
var yFarPlane = far * tan(fovy/2);
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);
var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];
for (var i = 0; i < 4; i++) {
var vecTemp = farCornersVec[i];
farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader; that is why the vector coordinates are serialized in the farCorners[] array.
In my vertex shader, the signs of position.x and position.y let the shader know which corner to use for each vertex.
These corners are then interpolated in my fragment shader to calculate a view ray, i.e. a vector from the camera to the far plane (its .z component is therefore equal to the distance from the far plane to the camera).
The fragment shader follows the instructions of John Chapman's tutorial (see commented code below).
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far, with:
gl_FragCoord.z: the non-linear depth
gl_FragCoord.w = 1/Wc, where Wc is the clip-space w coordinate of the vertex (i.e. gl_Position.w in the vertex shader)
far: the positive distance from the camera to the far plane.
The kernel samples are arranged in a hemisphere with random floats in [0,1], most of them distributed close to the origin using a linear interpolation.
As I don't have a normal texture, I calculate the normals from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
vec2 offsetX = vec2(texelSize.x, 0.0);
vec2 offsetY = vec2(0.0, texelSize.y);
// texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)
float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour
vec3 pX = vec3(offsetX, depthOffsetX - depth);
vec3 pY = vec3(offsetY, depthOffsetY - depth);
vec3 normal = cross(pY, pX);
normal.z = -normal.z; // We want normal.z positive
return normalize(normal); // [-1,1]
}
Finally, my getDepth() function lets me fetch the depth value at the current UV as a 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
return unpack(texture2D(tex, texcoord));
// unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
Here are my vertex and fragment shader codes (without function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;
uniform float fov;
uniform float far;
uniform vec3 farCorners[4];
attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv; // UV of each vertex (4) of the quad
varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main(void) {
vPosition = position;
vUV = uv;
// Map current vertex with associated frustum corner position in view space:
// 0: top left, 1: top right, 2: bottom right, 3: bottom left
// This frustum corner position will be interpolated so that the pixel shader always has a ray from camera->far-clip plane.
vCornerPositionVS = vec3(0.0);
if (positionVS.x > 0.0) {
if (positionVS.y <= 0.0) { // top left
vCornerPositionVS = farCorners[0];
}
else if (positionVS.y > 0.0) { // top right
vCornerPositionVS = farCorners[1];
}
}
else if (positionVS.x <= 0.0) {
if (positionVS.y > 0.0) { // bottom right
vCornerPositionVS = farCorners[2];
}
else if (positionVS.y <= 0.0) { // bottom left
vCornerPositionVS = farCorners[3];
}
}
gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;
uniform mat4 projection; // Projection matrix
uniform float radius; // Scaling factor for sample position, by default = 1.7
uniform float depthBias; // 1e-5
uniform vec2 noiseScale; // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4
varying vec3 vCornerPositionVS; // vCornerPositionVS is the interpolated position calculated from the 4 far corners
void main() {
// Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
float fragDepth = getDepth(depthBufferTexture, vUV);
float occlusion = 0.0;
if (fragDepth < 1.0) {
// Retrieve fragment's view space normal
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
// Random rotation: rvec.xyz are the components of the generated random vector
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
rvec.z = 0.0; // Random rotation around Z axis
// Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
vec3 viewRayVS = vCornerPositionVS / far;
// Current fragment's view space position
vec3 fragPositionVS = viewRayVS * fragDepth;
// Creation of TBN matrix
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
for (int i = 0; i < NB_SAMPLES; i++) {
// Get sample kernel position, from tangent space to view space
vec3 samplePosition = tbn * kernelSamples[i];
// Add VS kernel offset sample to fragment's VS position
samplePosition = samplePosition * radius + fragPositionVS;
// Project sample position from view space to screen space:
vec4 offset = vec4(samplePosition, 1.0);
offset = projection * offset; // From view space to clip space
offset.xy /= offset.w; // Perspective division
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth:
float sampleDepth = getDepth(depthTexture, offset.xy);
float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
// Reminder: fragDepth == fragPosition.z
// Range check and accumulate if fragment contributes to occlusion:
occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
}
}
// Inversion
float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
A horizontal and vertical Gaussian blur then removes the noise introduced by the random texture.
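For reference, that separable blur can be as small as a 5-tap pass run once horizontally and once vertically; a minimal GLSL sketch, where u_ssaoTexture and u_direction are assumed names (u_direction is texelSize * (1,0) for the horizontal pass and texelSize * (0,1) for the vertical one):
precision highp float;
uniform sampler2D u_ssaoTexture; // raw SSAO result from the pass above
uniform vec2 u_direction;        // texel-sized step along the blur axis
varying vec2 vUV;
void main() {
    // 5-tap binomial (Gaussian-like) kernel: 1 4 6 4 1, normalized by 16.
    float result = texture2D(u_ssaoTexture, vUV).r * 0.375;
    result += texture2D(u_ssaoTexture, vUV + u_direction).r * 0.25;
    result += texture2D(u_ssaoTexture, vUV - u_direction).r * 0.25;
    result += texture2D(u_ssaoTexture, vUV + 2.0 * u_direction).r * 0.0625;
    result += texture2D(u_ssaoTexture, vUV - 2.0 * u_direction).r * 0.0625;
    gl_FragColor = vec4(vec3(result), 1.0);
}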
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close shadows are not very strong... Does anyone see something wrong or weird in my code?
Thanks a lot!
fragPositionVS is a position in view space coordinates and radius is a length in view coordinates. You use them to calculate samplePosition:
samplePosition = samplePosition * radius + fragPositionVS;
But in the line rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;, you compare the difference of fragDepth and sampleDepth with radius. That makes no sense, since fragDepth and sampleDepth are values from the depth buffer in the range [0, 1], while radius is a length in view space.
In the line occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;, you calculate the difference of samplePosition.z and sampleDepth. While samplePosition.z is a view space coordinate between -near and -far, sampleDepth is a depth in the range [0, 1]. Calculating a difference between these two values doesn't make any sense either.
I suggest always using Z coordinates if you want to calculate or compare distances.
If you have a depth value, the Z-coordinate in view space can be calculated by converting the depth value to normalized device coordinate and converting the normalized device coordinate to a view coordinate:
float DepthToZ( in float depth )
{
float near = .... ; // distance to near plane (absolute value)
float far = .... ; // distance to far plane (absolute value)
float z_ndc = 2.0 * depth - 1.0;
float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
return -z_eye;
}
The depth is a value in the range [0, 1] that maps the range from the distance to the near plane to the distance to the far plane (in view space), but not linearly (for a perspective projection).
For this reason, the code line vec3 fragPositionVS = (vCornerPositionVS / far) * fragDepth; will not calculate a correct fragment position, but you can do it like this:
vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
Note, in view space the z axis comes out of the view port. If the corner positions are set up in view space, then the Z-coordinate has to be the negative distance to the far plane:
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, -far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, -far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, -far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, -far);
In the vertex shader, the assignment of the corner positions is mixed up. The lower left corner of the viewport is (-1,-1) and the top right corner is (1,1) (in normalized device coordinates). Adapt the code like this:
JavaScript:
var farCornersVec = [bottomLeft, bottomRight, topLeft, topRight];
Vertex shader:
// bottomLeft=0*2+0*1, bottomRight=0*2+1*1, topLeft=1*2+0*1, topRight=1*2+1*1;
int i = (positionVS.y > 0.0 ? 2 : 0) + (positionVS.x > 0.0 ? 1 : 0);
vCornerPositionVS = farCorners[i];
Note: if you could add an additional vertex attribute for the corner position, this would be simplified, as sketched below.
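A minimal sketch of that simplification, assuming a new attribute (called a_cornerPositionVS here) that is filled on the JavaScript side with the matching far-plane corner for each of the four quad vertices:
attribute vec3 position;
attribute vec2 uv;
attribute vec3 a_cornerPositionVS; // assumed per-vertex corner position
varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main() {
    vPosition = position;
    vUV = uv;
    vCornerPositionVS = a_cornerPositionVS; // no branching or index lookup needed
    gl_Position = vec4(position * 2.0, 1.0);
}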
The calculation of the fragment position can be simplified if the aspect ratio, the field of view angle and the normalized device coordinates of the fragment (the fragment position in the range [-1,1]) are known:
vec2  ndc_xy   = vUV * 2.0 - 1.0;
float tanFov_2 = tan( radians( fov / 2.0 ) );
float aspect   = vp_size_x / vp_size_y;
float fragZ    = DepthToZ( fragDepth );
vec3  fragPos  = vec3( ndc_xy.x * aspect * tanFov_2, ndc_xy.y * tanFov_2, -1.0 ) * abs( fragZ );
If the perspective projection matrix is known, this can be calculated easily:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec4 viewH = inverse( projection ) * vec4( ndc_xy, fragDepth * 2.0 - 1.0, 1.0 );
vec3 fragPosition = viewH.xyz / viewH.w;
If the perspective projection is symmetric (the field of view is not displaced and the Z-axis of the view space is in the center of the viewport), this can be simplified:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPosition = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs(DepthToZ(fragDepth));
See also:
How to recover view space position given view space depth value and ndc xy
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I suggest writing the fragment shader somewhat like this:
float fragDepth = getDepth(depthBufferTexture, vUV);
float ambientOcclusion = 1.0;
if (fragDepth > 0.0)
{
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0;
rvec.z = 0.0;
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
mat3 tbn = mat3(tangent, cross(normal, tangent), normal);
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPositionVS = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs( DepthToZ(fragDepth) );
// vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
float occlusion = 0.0;
for (int i = 0; i < NB_SAMPLES; i++)
{
vec3 samplePosition = fragPositionVS + radius * tbn * kernelSamples[i];
// Project sample position from view space to screen space:
vec4 offset = projection * vec4(samplePosition, 1.0);
offset.xy /= offset.w; // Perspective division -> [-1,1]
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth
float sampleZ = DepthToZ( getDepth(depthTexture, offset.xy) );
// Range check and accumulate if fragment contributes to occlusion:
float rangeCheck = step( abs(fragPositionVS.z - sampleZ), radius );
occlusion += step( samplePosition.z - sampleZ, -depthBias ) * rangeCheck;
}
// Inversion
ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
}
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
See the WebGL example, which demonstrates the full algorithm (unfortunately the full code would exceed the 30000-character limit of an answer):
JSFiddle or GitHub
Extension to the answer
The depth as it would be stored in the depth buffer is calculated like this:
(see OpenGL ES write depth data to color)
float ndc_depth = vPosPrj.z / vPosPrj.w;
float depth = ndc_depth * 0.5 + 0.5;
This value is already calculated in the fragment shader and is contained in gl_FragCoord.z. See the Khronos Group reference page for gl_FragCoord which says:
The z component is the depth value that would be used for the fragment's depth if no shader contained any writes to gl_FragDepth.
If the depth has to be stored in an RGBA8 buffer, it has to be encoded into the 4 bytes of the buffer to avoid a loss of accuracy, and has to be decoded when it is read back from the buffer:
encode
vec3 PackDepth( in float depth )
{
float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
vec4 encode = fract( depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
decode
float UnpackDepth( in vec3 pack )
{
float depth = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
return depth * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
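For example, a usage sketch (only PackDepth and UnpackDepth are taken from above; the rest of the setup is assumed): the depth pre-pass writes the packed window-space depth into the RGB channels of an RGBA8 target, and the SSAO pass decodes it back into a float.
// Depth pre-pass fragment shader:
void main() {
    gl_FragColor = vec4(PackDepth(gl_FragCoord.z), 1.0);
}
// In the SSAO pass:
// float fragDepth = UnpackDepth(texture2D(depthBufferTexture, vUV).rgb);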
See also the answers to the following questions:
How do I convert between float and vec4,vec3,vec2?
OpenGL ES write depth data to color
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?
I am currently in the process of making water waves, so I am starting from the beginning. I have created a mesh that is basically a flat square and animated it in the vertex shader (below is the code that achieves this):
vtx.y = (sin(2.0 * vtx.x + a_time/1000.0 ) * cos(1.5 * vtx.y + a_time/1000.0) * 0.2);
Basically it just moves the y position based on a sin and cos function; the results of this can be observed here!
I then tried adding some Perlin noise (as per the Perlin noise functions by Ian McEwan, available at github.com/ashima/webgl-noise) as follows:
vtx.y = vtx.y + 0.1*cnoise((a_time/5000.0)*a_vertex.yz);
The results of this can be observed here!
As you can plainly observe, there is no real "random" effect of the kind I was looking for (to simulate some basic random roughness of an ocean).
I was wondering how I could achieve this (any suggestions on how to improve either of the functions that change y would also be appreciated).
The simplest solution is to use a texture that contains the needed noise. If the displacement is kept in a texture, it is possible to apply it in the vertex shader, so there is no need to modify the vertex buffer. To make the waves move, you may add an animated offset.
There are plenty of ways to fake, as you say, the "random" effect. You may take two samples from the texture, using two offsets that change differently, and then simply add the two displacements.
For example, see the following vertex shader:
uniform sampler2D u_heightMap;
uniform float u_time;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec2 uv;
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(0.8, 0.4) * u_time * 0.1;
vec2 offset2 = vec2(0.6, 1.1) * u_time * 0.1;
float hight1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float hight2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += hight1 + hight2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
I've made a simple example using threejs:
var container;
var camera, scene, renderer;
var mesh;
var uniforms;
var clock = new THREE.Clock();
init();
animate();
function init() {
container = document.getElementById('container');
camera = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 0.6;
camera.position.y = 0.2;
camera.rotation.x = -0.45;
scene = new THREE.Scene();
var boxGeometry = new THREE.PlaneGeometry(0.75, 0.75, 100, 100);
var heightMap = THREE.ImageUtils.loadTexture("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAMAAACdt4HsAAAA81BMVEVZWVlRUVFdXV1EREQ8PDxBQUFWVlY3NzdKSkoyMjJqamphYWFOTk5mZmZHR0dxcXEtLS0qKip0dHRubm4aGhpTU1M+Pj4vLy8nJyeCgoJ3d3chISE0NDSKiop6eno5OTkBAQEjIyMeHh5/f38SEhKOjo6FhYUWFhaRkZELCwsPDw+bm5t8fHyfn59jY2OVlZUICAiwsLCHh4elpaWioqL///+qqqqYmJitra20tLS3t7fIyMi5ubnR0dHh4eHDw8P8/PzKysq9vb2np6fw8PDd3d3Ozs67u7vFxcX09PTY2Ni/v7/p6enV1dXT09Pl5eXt7e2kvPjWAAAMM0lEQVRYwxVW1ZbjOBAtsSwzsx1jOOlAc/f0MO7M7v9/zWr0kNg+xyVX6RIQXOhlUyIJNQwOLc/vL1ZiHuzATLhtIppQ1DzA0vd7S9nz9oBiExE/JXFo7zACFOzscM5l6yDfk/bs2vmTKhc3i8U6xIbfIGSlkUgXC5FGnft2N0kvrSQRWV+qu0sBYLu7g3t9VMicmjAPTJ74sjbWm5ubcb1YLKq84JH+X9MQW2l1UNl4s8wQmCJbRik0ISCLGKL4+OHty5dfs6qWSxNxbErO07L0Or8KV0G6Xqw3ozzvsFe1ka4cwdDEwr/pKMGgcJxg+/L6/iolQknps3A4YAwmlY4jTZsaVNJoHMvEXU2IGMnYlWy3u+YAhjDDAMwmbIrTdapdSc0d8bO1P20ftnf7cw2x4RNa9pgZWVa26PxpBZaJwh0eF34xTG5gOxJY2Ewr7ASgJ+uB2d14op22p6Obh0XBx3W/ZIbXZ2VmUFoZguV3B7JZjFFassASkkCJcte0XFl56/W6LSxAfuQLQnzq0E7G/lJiK3a4KNebLBPUi7dyUzIwyXgz9g7rgArn+OHj9wsA66N1imza+d5ybR0nZ1x6hA+fvv7a72sb4RDzUJG0XES4niblezFYqYSVGmSxYwdUdWPUdZIloifksH/5fE42y943XHW4m54en3ZStEHI/cWid39+yR1awf2+vofpEWddaRimSUVkYFwtE4SbOYjrQmyWCam6UrT3hD992CL59DzQzuPN+cizZWe4H8wYwr1LPU9YZeK0rTD6UiQOtUyOAbwUbBOzvo/asePzu8/P777lppNI7GCU9dGIODWhcMMGjzebxUYQ4J7HsMIi8x0RyfwwBNNRtuVmsSgttX/3/vVch6hbgxEZ41JA7x2PoLh7541RST20gtQXzJOp0abZJm2J0W0ialneuKFhuJ3P1/PPy1GXLzuNC0NQvP3yCPo8kV3n+/3L6U5q9N8Y9bUuWFkuS6Mro9TzWgrSygxNhfPzg54xW0bS5YT60sxdC8RyoxtRIW7qGnfjYt1b4XNAdakxivqR0l5YabpZLLPFgj/ysW9R1eNVAbL6S4sOCE2cmNlNiHmctE6/Xi7TIXfKm81yXZIqhkjIuKXdQq/eZkaC40oYohXM2tx41IAiABvTivIAIezixCCpwMgoe0OZeHu9uF5WalCi5WKTprpxSaw2FQTpIkJ0lgmMolwdYr+VVhi4EvKpgY3ejDoV4c3l363NgZLCZcRUjEVrQzq0L9OKAJaQ+AaIBI55aONs9ORQn1fb08usC9yMFTMhYTg2qMXB1LN3MC03m43gnHj6XBkn1Ep1ARKqmHApxCHYDXFrn8EV65v1jR8x22qFV/nJbOl3aBuDPy4W0VKifoyYSZhDIg8YSip/mWUonC5bRTUf1l2y7ERKMQKHVb4npMpDb7MsRdmLaLmOLMCBerzEAaIGAxckt6zVarLPH3dm3y83ZRQl2MR4hRh1uptlgucT3oyGqEoBiMhD0BThXBz328u0v0KtQLnHl+fvbwVaZlqN12vNV6NiBLY1OONms2T2Fja9Ly1GdK+4WM21DTaC1afH+xwIIvbd55c9kFHz31+ON4sxth0mHHUeBuJXZd8GuO/85KBsKh1zdVXD3cN1dw2GoKgo8ObudPf0NbCiyG/x9owT4blnzQrPiI2KKMvnOYrGzdLMV9cZIXUZWuIwf+wxB6UUbKdpMJvcFglthtDEF62Gz2+4186wiNDqEnIr3/IkExgVc1JW9lygm/X6RviRCNS0BYfFtrJsxVXeAG37ClZPL2fUZfqjYf969/j87eXnt0KFgCLfJ7LJtZM50Wi7iB+G6QAoZozELiBk0ZKApt3+muO/Kpzwu5eCBMXx3fvbr4UNO9iUfKpr5Q6BhbanUy454qA3PyRVGJiEbqKyz8ZEPYLTj55D1A6NN1WAv/337z+Px+2kCMWItn7mmMfT3bv/QsmMFg52UtEkPD4fbS2rN2sRfz9NceZ3bJ7jxU2bImybNsYuQmHhKq+PNlFCLGxyvksMCzQbQ3fIH75fAiQ8itzA3P/zFErDE6oxhKUcaUlbxa1RahG1jdIgRqZp2Qs+vbtO+xXUjTtd5nnlMhTM27v71fDz94dQwIFIpDe2Vyo0CRzizjFJXJhukA9BwJkF7n34+WO9Ah7s5ppbCQNrdfrw7c9//7wCKI4B47/yIGeXeZHk1CpQOVK3Pq12NkpbhuqZHziXMOXqfLFJi8xw//3Pc93MtuHxWZnFoFztzQSnUZU4+NuVk8TrYXCZWGqFs0BhCKfVX1kPVT27zbHev3tXMM/POmZpUEzHWg12DI4hUL67f/56zevaPXj6Vk+COUmCr48PGFoNeyPyK6i/fnzVTQU4tiSliDCiXO35gtkxTR3HaupZqYb3kSFISjiXzpCbioCnvZQ5whAymH7e3n6+B5+kBuNWlxqWqof6cm2cMtJ6pZSNB8W0coATK+UwSCoLg9Uv1x7nseNu/32vMXe7d/+y9uEcKowydv73z92ESRYRbB63zc6EYrsN5tx19UQSjALYKTFW5OCi/NfXwDu+//Hpvs7d/Y/feWyxg7tz3WBYHffDPP+lCqPu5elpfz/rpwE2HYjBA95VEuai2BeYGSiE+8un6XR7+zs/nn6/MXqwcazViwS29kgf3On0Whcu1qdFDMvKgApKWULAdiHYNlhWlcFEan/bPf73/vb91gJu+GnmM2kd6txzAqLVObCZPdT51vJKDxqKwKGG/m2mxmR9JHiFsMKE28+3ty/HXKub74G6P33+8mrTMvMMDjkedmE+BwRvoSA6hVCHgxsWw/0Rinx2z9PAqqq5PP+4vT2FE6Ya5pdfU/2xNmhMKFqtZGu0hM+rx6+gmDH6aDBtVyKz/vr99OFej5kIqzn+0pz9kJ8lsfwKx1QyrjCHWBI0x8A1+1tyvAODGilXZtjYRiRgdfy+Wj1NWkTBfTg1deAwD4Gkhu9nPkVAKqrpiQLdJKlSqS/0o86PMfINjqm53a+29vx6+2ffTA+PEGUMMx4eV670U89oK4tbrcdkDPevv1f2AasaQcGdlFJheL6Q3Aw/PQz17Zv55f2
P18KJqAWmO086SQaWkzexgzUKERKr28fhCvbOfXmAlY2AaUN0rJgSaYY/f/z4ZxbFz482zQgarvuAW1V6/PP0cMndMC9chM3w231uEeJA+PoCPNIaXbr1ZCETcOITrdSaj8Pp0NJWIkfZUre6DS+BaTLeFDYcuASpaFr1LQpruOmjxciITzPR7L982M+qSvvN0joNDInUaw2PplxrrkYTjRk16+GuoR3zNFctWq1v4KZtxwTEZpPZ725vb3/93m8VCpW1uuIYLODmsMLKBrGMKC+GsLh/Uq4rHXnY8n6hFzSxKNNy3UvzSb9/sdx9PX96e97vG7Oe9w/Pbw/bq465CUeWjKWggJZekxfXDy+usV6mJrxNiREtIw/uv+zvPuXuITyfVvn5wBzIn17efw544I06xBJBE+RQp/RoxFa/7pv648qepgfAwRzjsHH8kgnD2plMI2Jlsgo50jz9889b0S82y2WpByGEPRyhlUTGUMill/94fdg5sNSJULjsb44z5tj3urXROrIVndfKrU4t+zjTrE+9WO2K4/T8scCAcZxlXSQVMUQGSBuvUYmu07lkx6Wvw6Omn5+21DGyw+ntIRa4thGWxHE4cXI9QNTQ5XrZ6bxrMBOwY5rI1zhNLHgwZSVKnbA2elfVmKaN+OlyeSxahzBEkeWPKXA7v8PES9p+HXUsBoPX9zkhCHMA+2ozq+vWm5So+bRVyupb6e5mg2B+KAi3TFJVaD7PJlYPqutSwQxAcZryYfq+B+KN6GVozttd0+htmj9vCgh1uCPIbudCPQ2ftkbUd8Gx0OyfJ2XvsHRAYhsxmI4h0G45trtaw2CyqaCwP+0swqg8X0+fv15Q/e+7LzPnfsaGFdEmzbTLhZyCidydaWtqYG7YivkSm5yM6z6zTDwo5CRseL19Bs8zHx+vu0AVu3prAltvRPEtsHcNWFlcNIqbYMeQa1KmiRnwFoamkdFGYEYAn59A9yeYYlXkcQhWduBaGi/MtoMQGMP1dXBNAGkqXrUeWjWA5rsBmEcTy4pN4fDr/qx44iciIweQQTCbgAgjDA7ofwxocDxlnyf5AAAAAElFTkSuQmCC");
heightMap.wrapT = heightMap.wrapS = THREE.RepeatWrapping;
uniforms = {u_time: {type: "f", value: 0.0 }, u_heightMap: {type: "t",value:heightMap} };
var material = new THREE.ShaderMaterial({
uniforms: uniforms,
side: THREE.DoubleSide,
wireframe: true,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragment_shader').textContent
});
mesh = new THREE.Mesh(boxGeometry, material);
mesh.rotation.x = 3.14 / 2.0;
scene.add(mesh);
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0xffffff, 1 );
container.appendChild(renderer.domElement);
onWindowResize();
window.addEventListener('resize', onWindowResize, false);
}
function onWindowResize(event) {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
render();
}
function render() {
var delta = clock.getDelta();
uniforms.u_time.value += delta;
mesh.rotation.z += delta * 0.5;
renderer.render(scene, camera);
}
body { margin: 0px; overflow: hidden; }
<script src="http://threejs.org/build/three.min.js"></script>
<div id="container"></div>
<script id="fragment_shader" type="x-shader/x-fragment">
void main( void )
{
gl_FragColor = vec4(vec3(0.0), 1.0);
}
</script>
<script id="vertexShader" type="x-shader/x-vertex">
uniform lowp sampler2D u_heightMap;
uniform float u_time;
void main()
{
vec3 pos = position;
vec2 offset1 = vec2(1.0, 0.5) * u_time * 0.1;
vec2 offset2 = vec2(0.5, 1.0) * u_time * 0.1;
float hight1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
float hight2 = texture2D(u_heightMap, uv + offset2).r * 0.02;
pos.z += hight1 + hight2;
vec4 mvPosition = modelViewMatrix * vec4( pos, 1.0 );
gl_Position = projectionMatrix * mvPosition;
}
</script>
Using a better displacement texture, or even two different textures for the two offsets, you may achieve better results.
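For example, a small sketch of the two-texture variant, where u_heightMap2 is an assumed second sampler with different frequency content (position, uv and the matrices are provided by three.js as in the example above):
uniform sampler2D u_heightMap;
uniform sampler2D u_heightMap2; // assumed second displacement texture
uniform float u_time;
void main() {
    vec3 pos = position;
    vec2 offset1 = vec2(0.8, 0.4) * u_time * 0.1;
    vec2 offset2 = vec2(0.6, 1.1) * u_time * 0.1;
    // Two independent displacement fields hide the repetition of a single texture.
    float height1 = texture2D(u_heightMap, uv + offset1).r * 0.02;
    float height2 = texture2D(u_heightMap2, uv + offset2).r * 0.02;
    pos.z += height1 + height2;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}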
I have a scene that works perfectly with one light. However, when I add two more, each new addition becomes dimmer until it is almost invisible. Are the attenuation factors wrong, or could it be something else?
int i = 0;
for(i=0; i<3; i++){
if (lights[i].enabled == 1.0){
//Lighting Attributes
vec4 light_position = vec4(lights[i].position,1.0);
vec4 light_ambient = lights[i].ambient;
vec4 light_diffuse = lights[i].diffuse;
vec4 light_specular = lights[i].specular;
float light_att_constant = 1.0;
float light_att_linear = 0.0;
float light_att_quadratic = 0.01;
float light_shine = 1.0;
//Object Attributes
vec3 obj_position = n_vertex;
vec3 obj_normals = n_normal;
vec4 obj_color = n_colors;
//Calc Distance
vec3 distance_LO = (obj_position - light_position.xyz);
float distance = length(distance_LO);
//Normalize some attributes
vec3 n_light_position = normalize(distance_LO);
//Apply ambience
finalColor *= light_ambient * global_ambient;
//Calc Cosine of Normal and Light
float NdotL = max(dot(obj_normals, n_light_position),0.0);
//Calc Eye Vector (negated position)
vec3 eye_view = -obj_position;
//Check if Surface is facing the Light
if (NdotL > 0){
//Apply lambertian reflection
finalColor += obj_color * light_diffuse * NdotL;
//Calc the half-vector
vec3 half_vector = normalize(light_position.xyz + eye_view);
//Calc angle between normal and half-vector
//See the engine notebook for a diagram.
float NdotHV = max(dot(obj_normals, half_vector), 0.0);
//Apply Specularity
finalColor += obj_color * light_specular * pow(NdotHV, light_shine);
}
//Calc Attenuation
float attenuation = light_att_constant / ((1 + light_att_linear * distance) *
1 + light_att_quadratic * distance * distance);
//Apply Attenuation
finalColor = finalColor * attenuation;
}
}
color = vec4(finalColor.rgb, 1.0);
You multiply your colours together. This means that shadows will get darker.
If you have an area at some relative brightness of 1/2 and you multiply it by 1/2 (the contribution from that light), you get 1/4.
If you have Photoshop or Gimp, you can test this yourself with the Multiply blending mode: take three circles, pure red, pure green and pure blue, and overlap them. Compare Multiply to Linear Dodge (the plus operation in Photoshop).
Here's an example.
You almost certainly want an additive effect; that is, add the terms together.
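A minimal sketch of that change against the question's loop (only the accumulation is shown; the per-light setup and the NdotL, NdotHV and attenuation calculations stay exactly as they are):
vec4 finalColor = vec4(0.0);
for (int i = 0; i < 3; i++) {
    if (lights[i].enabled == 1.0) {
        // ... per-light setup, NdotL, NdotHV and attenuation as in the question ...
        vec4 lightContribution = light_ambient * global_ambient; // add the ambient term instead of multiplying it into the total
        if (NdotL > 0.0) {
            lightContribution += obj_color * light_diffuse * NdotL;
            lightContribution += obj_color * light_specular * pow(NdotHV, light_shine);
        }
        // Attenuation scales only this light's contribution, not the running total.
        finalColor += lightContribution * attenuation;
    }
}
color = vec4(finalColor.rgb, 1.0);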