Deferred Shadow Mapping (GLSL / OpenGL)

I'm currently implementing a deferred rendering pipeline and I'm stuck on shadow mapping.
I've already implemented it successfully in a forward pipeline.
The steps I take are:
1. Get the position in light view space
2. Convert it to light clip space
3. Get the shadow texture coordinates with * 0.5 + 0.5
4. Compare the depths
Edit: Updated code with new result image:
float checkShadow(vec3 position) {
    // transform the G-buffer position (camera view space) back to world space,
    // then into the light's clip space
    mat4 invView = inverse(cameraView);
    vec4 pEyeDir = sunBias * sunProjection * sunView * invView * vec4(position, 1.0);

    // perspective division -> light-space NDC
    pEyeDir = pEyeDir / pEyeDir.w;

    // map xy from [-1,1] to [0,1] to get the shadow map uv coordinates
    vec2 sTexCoords = pEyeDir.xy * 0.5 + 0.5;

    // compare the fragment's light-space depth against the stored depth
    float bias = 0.0001;
    float depth = texture(sunDepthTex, sTexCoords).r - bias;
    float shadow = 1.0f;
    if (pEyeDir.z * 0.5 + 0.5 > depth)
    {
        shadow = 0.3f;
    }
    return shadow;
}
Here are some variables relevant to the code above:
vec3 position = texture(positionTex, uv).rgb;
Also, I get a dark background (meshes stay the same) at some camera positions; it only happens when I multiply the shadow value into the final color.
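One guard that often helps with exactly this symptom (my addition, not part of the original code): fragments that project outside the light's [0,1] shadow map range, e.g. background pixels with no G-buffer geometry behind them, have no valid depth to compare against and should be treated as fully lit. A minimal sketch, placed in checkShadow() after sTexCoords is computed:

// outside the light frustum there is nothing meaningful in the shadow map,
// so skip the comparison and return "not shadowed"
if (sTexCoords.x < 0.0 || sTexCoords.x > 1.0 ||
    sTexCoords.y < 0.0 || sTexCoords.y > 1.0) {
    return 1.0;
}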
As requested, here are the position and sun depth textures:

OK, I fixed it. The problem was that the light depth texture had a different size than the G-buffer textures.
To use different texture sizes I had to normalize the coordinates with
coords = (coords / imageSize) * windowSize;
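A minimal sketch of that rescaling as a helper function (the parameter names are my assumptions, mirroring the formula above): coordinates derived against one render target's resolution have to be rescaled before they are used to address a differently sized target.

// imageSize = resolution the coords were derived from,
// windowSize = resolution of the target actually being addressed
vec2 rescaleCoords(vec2 coords, vec2 imageSize, vec2 windowSize) {
    return (coords / imageSize) * windowSize;
}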

Related

OpenGL multipass shadow maps on Qt are not rendered

I'm trying to render dynamic shadows cast from the point light onto some cubes. However, there are no shadows at all on the cubes, particularly on the silver cube:
In my program I use the Qt OpenGL API together with the native OpenGL one. I do two passes:
1. Create a depth texture (glGenTextures()), create a framebuffer (glGenFramebuffers()) and attach the texture to it (glFramebufferTexture()). Then render all cubes to that framebuffer.
2. Render the whole scene to the screen as usual, using that texture in the shaders.
I use the QOpenGLShaderProgram class for creating shader programs.
How I calculate the shadows in the fragment shader:
float isFragInShadow(vec3 fragPos)
{
    vec3 relFragPos = fragPos - lightPos;

    float closestDepth = texture(depth_map, relFragPos).r;
    closestDepth *= far_plane;

    float currentDepth = length(relFragPos);

    float bias = max(0.05 * (1.0 - dot(normalize(-relFragPos), Normal)), 0.005);
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;

    return shadow;
}
...
float isShadow = isFragInShadow(FragPos);
vec3 resColor = ambient * obj_material.ambient + (1.0 - isShadow) * (diffuse * obj_material.diffuse + specular * obj_material.specular);
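Note that this lookup convention only works if the first pass actually stored length(fragPos - lightPos) / far_plane in the cube map. For reference, a minimal sketch of such a depth-pass fragment shader (a common pattern for omnidirectional shadow maps; variable names are assumed, not taken from the question's code):

// depth-pass fragment shader: store the light-to-fragment distance,
// normalized to [0,1] by far_plane, so the lookup pass can recover it
// with closestDepth * far_plane
uniform vec3 lightPos;
uniform float far_plane;
in vec3 FragPos; // world-space fragment position

void main()
{
    float lightDistance = length(FragPos - lightPos);
    gl_FragDepth = lightDistance / far_plane;
}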
I also tried to render the depth texture to the screen, and the result is the following:
https://i.stack.imgur.com/qM9a1.png
How I render:
vec3 lightToFrag = FragPos - lightPos;
float closestDepth = texture(depth_map, lightToFrag).r;
Color = vec4(vec3(closestDepth), 1.0);
Why can I not see shadows in the scene?

Why won't my falloff work for this dot product based spotlight?

I made a spotlight that:
1. Projects 3D models onto a render target from each light's POV to simulate shadows.
2. Cuts a circle out of the square of light that gets projected onto the render target as a result of the light frustum, then only lights up the pixels inside that circle (except the shadowed parts, of course), so you don't see the square edges of the projected frustum.
After an if check to see whether the dot product of the light direction and the light-to-vertex vector is greater than 0.95 (to get my initial cutoff), I multiply the light intensity value inside the resulting circle by that same dot product, which should range between 0.95 and 1.0.
This should give the light inside that circle a falloff from 100% lit to 0% lit toward the edge of the circle. However, there is no falloff; it's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help, thank you so much.
float CalculateSpotLightIntensity(
    float3 LightPos_VertexSpace,
    float3 LightDirection_WS,
    float3 SurfaceNormal_WS)
{
    //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
    float3 lightToVertex_WS = -LightPos_VertexSpace;
    float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));

    // METALLIC EFFECT (deactivate for now)
    float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));

    if (dotProduct > .95 /*&& metalEffect > .55*/)
    {
        return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
        //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
        //return dotProduct;
    }
    else
    {
        return 0;
    }
}
float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    float4 textureColor;

    // Set the bias value for fixing the floating point precision issues.
    float bias = 0.001f;

    // Set the default output color to the ambient light value for all pixels.
    float4 lightColor = cb_ambientColor;

    /////////////////// NORMAL MAPPING //////////////////
    float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

    // Expand the range of the normal value from (0, +1) to (-1, +1).
    bumpMap = (bumpMap * 2.0f) - 1.0f;

    // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
    float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

    //////////////// LIGHT LOOP ////////////////
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate the projected texture coordinates.
        projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
        projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

        if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
        {
            // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
            depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

            // Calculate the depth of the light.
            lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

            // Subtract the bias from the lightDepthValue.
            lightDepthValue = lightDepthValue - bias;

            float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue);

            // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
            // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
            if (lightDepthValue < depthValue)
            {
                // Calculate the amount of light on this pixel.
                float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));

                if (lightIntensity > 0.0f)
                {
                    // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                    float spotLightIntensity = CalculateSpotLightIntensity(
                        input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
                        cb_lights[i].lightDirection,
                        bumpNormal /*input.normal*/);

                    lightColor += cb_lights[i].diffuseColor * spotLightIntensity * .18f; // spotlight
                    //lightColor += cb_lights[i].diffuseColor * lightIntensity * .2f; // square light
                }
            }
        }
    }

    // Saturate the final light color.
    lightColor = saturate(lightColor);
    // lightColor = saturate(CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));

    // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
    input.tex.x += textureTranslation;

    // BLENDING
    float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
    float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
    float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
    textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));

    // Combine the light and texture color.
    float4 finalColor = lightColor * textureColor;

    /////// TRANSPARENCY /////////
    //finalColor.a = 0.2f;

    return finalColor;
}
Oops! It's because the range 0.95 - 1.0 was too small to make a visible difference! So I had to expand the range to 0-1 by doing
float expandedRange = (dotProduct - .95) / .05f;
return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)) * expandedRange * expandedRange);
Now it has a soft edge. A little too soft for me, honestly. Now I'm just doing a quadratic falloff by squaring the expanded range, as you can see. Any tips on making it look nicer? Let me know, thanks.
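One common refinement (my suggestion, not from the thread): drive the falloff with smoothstep between an inner and an outer cone threshold. That gives a smooth Hermite curve instead of a plain quadratic, and lets you tune where the falloff starts and ends independently. Sketched in GLSL below; HLSL's smoothstep has the same signature.

// innerCutoff/outerCutoff are assumed tuning parameters, e.g. 0.98 and 0.95:
// fully lit above innerCutoff, fully dark below outerCutoff,
// smooth Hermite blend in between
float spotFalloff(float dotProduct, float innerCutoff, float outerCutoff)
{
    return smoothstep(outerCutoff, innerCutoff, dotProduct);
}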

SSAO implementation in Babylon JS and GLSL, using view ray for depth comparison

I'm trying to create my own SSAO shader in forward rendering (not in post processing) with GLSL. I'm encountering some issues, but I really can't figure out what's wrong with my code.
It is created with Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculations without projecting and unprojecting in screen space; I'd rather use the view ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an impact on my code.
Here are my classic steps:
First, I calculate the positions of my camera's four far plane corners in my JS code. They can be treated as constants, since they are calculated in view space.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy / 2) * ratio);
var xFarPlane = far * tan(fovx / 2);
var yFarPlane = far * tan(fovy / 2);

var topLeft     = new BABYLON.Vector3(-xFarPlane,  yFarPlane, far);
var topRight    = new BABYLON.Vector3( xFarPlane,  yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft  = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);

var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];

for (var i = 0; i < 4; i++) {
    var vecTemp = farCornersVec[i];
    farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader; that is why the vector coordinates are serialized in the farCorners[] array.
In my vertex shader, the signs of position.x and position.y let the shader know which corner to use.
These corners are then interpolated in my fragment shader for calculating a view ray, i.e. a vector from the camera to the far plane (its .z component is therefore equal to the far plane's distance to the camera).
The fragment shader follows the instructions of John Chapman's tutorial (see commented code below).
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far, with:
gl_FragCoord.z: the non-linear depth
gl_FragCoord.w: equal to 1/Wc, where Wc is the w component of the clip-space vertex position (i.e. gl_Position.w in the vertex shader)
far: the positive distance from the camera to the far plane.
The kernel samples are arranged in a hemisphere, built from random floats in [0,1], with most samples distributed close to the origin via a linear interpolation.
As I don't have a normal texture, I calculate them from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
    vec2 offsetX = vec2(texelSize.x, 0.0);
    vec2 offsetY = vec2(0.0, texelSize.y);
    // texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)
    float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
    float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour

    vec3 pX = vec3(offsetX, depthOffsetX - depth);
    vec3 pY = vec3(offsetY, depthOffsetY - depth);
    vec3 normal = cross(pY, pX);
    normal.z = -normal.z; // We want normal.z positive

    return normalize(normal); // [-1,1]
}
Finally, my getDepth() function allows me to get the depth value at the current UV as a 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
    return unpack(texture2D(tex, texcoord));
    // unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
Here are my vertex and fragment shader codes (without function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;

uniform float fov;
uniform float far;
uniform vec3 farCorners[4];

attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv;       // UV of each vertex (4) of the quad

varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;

void main(void) {
    vPosition = position;
    vUV = uv;

    // Map current vertex with associated frustum corner position in view space:
    // 0: top left, 1: top right, 2: bottom right, 3: bottom left
    // This frustum corner position will be interpolated so that the pixel shader
    // always has a ray from camera->far-clip plane.
    vCornerPositionVS = vec3(0.0);
    if (positionVS.x > 0.0) {
        if (positionVS.y <= 0.0) { // top left
            vCornerPositionVS = farCorners[0];
        }
        else if (positionVS.y > 0.0) { // top right
            vCornerPositionVS = farCorners[1];
        }
    }
    else if (positionVS.x <= 0.0) {
        if (positionVS.y > 0.0) { // bottom right
            vCornerPositionVS = farCorners[2];
        }
        else if (positionVS.y <= 0.0) { // bottom left
            vCornerPositionVS = farCorners[3];
        }
    }

    gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;

uniform mat4 projection;   // Projection matrix
uniform float radius;      // Scaling factor for sample position, by default = 1.7
uniform float depthBias;   // 1e-5
uniform vec2 noiseScale;   // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4

varying vec3 vCornerPositionVS; // interpolated position calculated from the 4 far corners

void main() {
    // Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
    float fragDepth = getDepth(depthBufferTexture, vUV);
    float occlusion = 0.0;

    if (fragDepth < 1.0) {
        // Retrieve fragment's view space normal
        vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]

        // Random rotation: rvec.xyz are the components of the generated random vector
        vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
        rvec.z = 0.0; // Random rotation around Z axis

        // Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
        vec3 viewRayVS = vCornerPositionVS / far;

        // Current fragment's view space position
        vec3 fragPositionVS = viewRayVS * fragDepth;

        // Creation of TBN matrix
        vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
        vec3 bitangent = cross(normal, tangent);
        mat3 tbn = mat3(tangent, bitangent, normal);

        for (int i = 0; i < NB_SAMPLES; i++) {
            // Get sample kernel position, from tangent space to view space
            vec3 samplePosition = tbn * kernelSamples[i];

            // Add VS kernel offset sample to fragment's VS position
            samplePosition = samplePosition * radius + fragPositionVS;

            // Project sample position from view space to screen space:
            vec4 offset = vec4(samplePosition, 1.0);
            offset = projection * offset;      // To clip space
            offset.xy /= offset.w;             // Perspective division
            offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]

            // Get current sample depth:
            float sampleDepth = getDepth(depthTexture, offset.xy);

            float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
            // Reminder: fragDepth == fragPositionVS.z

            // Range check and accumulate if fragment contributes to occlusion:
            occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
        }
    }

    // Inversion
    float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
    gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
A horizontal and vertical Gaussian blur shader then clears up the noise generated by the random texture.
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close shadows are not very strong... Does anyone see anything wrong or weird in my code?
Thanks a lot!
fragPositionVS is a position in view space coordinates, and radius is a length in view coordinates. You use them to calculate the samplePosition:
samplePosition = samplePosition * radius + fragPositionVS;
But in the line rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;, you compare the difference of fragDepth and sampleDepth with radius. That makes no sense, since fragDepth and sampleDepth are values from the depth buffer in the range [0, 1], while radius is a length in view space.
In the line occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;, you calculate the difference of samplePosition.z and sampleDepth. While samplePosition.z is a view space coordinate between -near and -far, sampleDepth is a depth in the range [0, 1]. Calculating a difference between these two values doesn't make any sense either.
I suggest always using view-space Z coordinates when you want to calculate or compare distances.
If you have a depth value, the Z coordinate in view space can be calculated by converting the depth value to a normalized device coordinate, then converting the normalized device coordinate to a view coordinate:
float DepthToZ( in float depth )
{
    float near = .... ; // distance to near plane (absolute value)
    float far  = .... ; // distance to far plane (absolute value)
    float z_ndc = 2.0 * depth - 1.0;
    float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
    return -z_eye;
}
The depth is a value in the range [0, 1] that maps the range between the distance to the near plane and the distance to the far plane (in view space), but not linearly (for a perspective projection). For example, with near = 0.1 and far = 100.0, a depth of 0.5 gives z_ndc = 0.0 and z_eye = 2 * 0.1 * 100.0 / (100.1 - 0.0) ≈ 0.2, so the first half of the depth range covers only roughly the first 0.2 units of eye space.
For this reason, the code line vec3 fragPositionVS = (vCornerPositionVS / far) * fragDepth; will not calculate a correct fragment position, but you can do it like this:
vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
Note, in view space the z axis comes out of the view port. If the corner positions are set up in view space, then the Z-coordinate has to be the negative distance to the far plane:
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, -far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, -far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, -far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, -far);
In the vertex shader, the assignment of the corner positions is mixed up. The lower left position of the viewport is (-1,-1) and the top right position is (1,1) (in normalized device coordinates). Adapt the code like this:
JavaScript:
var farCornersVec = [bottomLeft, bottomRight, topLeft, topRight];
Vertex shader:
// bottomLeft=0*2+0*1, bottomRight=0*2+1*1, topLeft=1*2+0*1, topRight=1*2+1*1;
int i = (positionVS.y > 0.0 ? 2 : 0) + (positionVS.x > 0.0 ? 1 : 0);
vCornerPositionVS = farCorners[i];
Note, if you could add an additional vertex attribute for the corner position, then it would be simplified.
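A sketch of that simplification (the cornerPositionVS attribute is my assumed name): each of the 4 quad vertices directly carries its matching view-space far-plane corner, so the sign checks disappear entirely.

precision highp float;

attribute vec3 position;         // quad vertex position, as in the question
attribute vec3 cornerPositionVS; // assumed attribute: matching far-plane corner

varying vec3 vCornerPositionVS;

void main(void) {
    // the corner is interpolated across the quad automatically
    vCornerPositionVS = cornerPositionVS;
    gl_Position = vec4(position * 2.0, 1.0);
}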
The calculation of the fragment position can be simplified if the aspect ratio, the field of view angle and the normalized device coordinates of the fragment (fragment position in range [-1,1]) are known:

vec2  ndc_xy   = vUV * 2.0 - 1.0;
float tanFov_2 = tan( radians( fov / 2.0 ) );
float aspect   = vp_size_x / vp_size_y;
float fragZ    = DepthToZ( fragDepth );
vec3  fragPos  = vec3( ndc_xy.x * aspect * tanFov_2, ndc_xy.y * tanFov_2, -1.0 ) * abs( fragZ );
If the perspective projection matrix is known, this can be calculated easily:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec4 viewH = inverse( projection ) * vec4( ndc_xy, fragDepth * 2.0 - 1.0, 1.0 );
vec3 fragPosition = viewH.xyz / viewH.w;
If the perspective projection is symmetric (the field of view is not displaced and the Z axis of the view space is in the center of the viewport), this simplifies further, because projection[0][0] == 1.0 / (aspect * tanFov_2) and projection[1][1] == 1.0 / tanFov_2:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPosition = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs(DepthToZ(fragDepth));
See also:
How to recover view space position given view space depth value and ndc xy
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I suggest writing the fragment shader somewhat like this:
float fragDepth = getDepth(depthBufferTexture, vUV);
float ambientOcclusion = 1.0;

if (fragDepth > 0.0)
{
    vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
    vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0;
    rvec.z = 0.0;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    mat3 tbn = mat3(tangent, cross(normal, tangent), normal);

    vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
    vec3 fragPositionVS = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs( DepthToZ(fragDepth) );
    // vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );

    float occlusion = 0.0;
    for (int i = 0; i < NB_SAMPLES; i++)
    {
        vec3 samplePosition = fragPositionVS + radius * tbn * kernelSamples[i];

        // Project sample position from view space to screen space:
        vec4 offset = projection * vec4(samplePosition, 1.0);
        offset.xy /= offset.w;             // Perspective division -> [-1,1]
        offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]

        // Get current sample depth
        float sampleZ = DepthToZ( getDepth(depthTexture, offset.xy) );

        // Range check and accumulate if fragment contributes to occlusion:
        float rangeCheck = step( abs(fragPositionVS.z - sampleZ), radius );
        occlusion += step( samplePosition.z - sampleZ, -depthBias ) * rangeCheck;
    }

    // Inversion
    ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
}

gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
See the WebGL example, which demonstrates the full algorithm (unfortunately the full code would exceed the 30000 character limit of an answer):
JSFiddle or GitHub
Extension to the answer
The depth as it would be stored in the depth buffer is calculated like this:
(see OpenGL ES write depth data to color)
float ndc_depth = vPosPrj.z / vPosPrj.w;
float depth = ndc_depth * 0.5 + 0.5;
This value is already calculated in the fragment shader and is contained in gl_FragCoord.z. See the Khronos Group reference page for gl_FragCoord which says:
The z component is the depth value that would be used for the fragment's depth if no shader contained any writes to gl_FragDepth.
If the depth has to be stored in a RGBA8 buffer, the depth has to be encoded to the 4 bytes of the buffer to avoid a loss of accuracy, and has to be decoded when read from the buffer:
encode
vec3 PackDepth( in float depth )
{
    float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
decode
float UnpackDepth( in vec3 pack )
{
    float depth = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    return depth * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
See also the answers to the following questions:
How do I convert between float and vec4,vec3,vec2?
OpenGL ES write depth data to color
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?

Getting World Position from Depth Buffer Value

I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world space positions from the depth buffer, and the texture coordinates, albeit with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;

... other uniforms ...

void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    // bias it from [0, 1] to [-1, 1]
    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;
    return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcLinearZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position later, so everything should be black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates [0,1] and depth [0,1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x,y,z = [-w,w]
2. Transform from clip-space to view-space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view-space to world-space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
Note that CalcLinearZ (...) is not needed here at all, and its name is misleading: the inverse projection matrix already undoes the non-linear depth mapping, so linearizing the depth first feeds it a value it does not expect.

2 pass effect in OpenGL

I'm trying to create a two-pass effect using an FBO in OpenGL.
In the first pass, I write the depth into a color buffer (image 1), using the following in its vertex shader:
gl_Position = projection * view * gl_Vertex;
vec4 position = gl_Position/gl_Position.w;
position = position / 2.0 + 0.5;
float temp_depth = position.z;
gl_FrontColor = vec4(temp_depth,temp_depth,temp_depth,1);
In the second pass I try to use the texture from the previous pass to color the scene (image 2):
Here is the code in vertex shader:
vec4 shadow_coord = projection * view * gl_Vertex;
shadow_coord = shadow_coord / shadow_coord.w;
shadow_coord = shadow_coord / 2.0 + 0.5;
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy);
The scene consists of a quad in front of a cone. In both cases the fragment shader is gl_FragColor = gl_Color; and the view and projection matrices are exactly the same, defined at startup. The problem is that there is a deviation in shadow_coord.xy.
As long as the view and projection values are exactly the same, shouldn't I get the same result?
What can I do to fix it?
What resolution do you use for the texture you render into? And what kind of filtering? (It seems to be linear; it should be nearest.) Also try to offset the coordinate you read, like:
// offset should be 0.5 / texture_resolution
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);
And as the other commenters mentioned, 8 bits are not enough to store depth values; consider using a depth texture or a floating-point color format (like GL_R32F from ARB_texture_rg).
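A minimal sketch of that suggestion on the shader side (my example, assuming the FBO's color attachment was created with a float format such as GL_R32F): write the window-space depth directly instead of routing it through the 8-bit gl_FrontColor path.

// first-pass fragment shader: store full-precision depth in the red
// channel of a GL_R32F color attachment (no 8-bit quantization)
void main()
{
    gl_FragColor = vec4(gl_FragCoord.z, 0.0, 0.0, 1.0);
}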