Getting the true z value from the depth buffer - OpenGL

Sampling from a depth buffer in a shader returns values between 0 and 1, as expected.
Given the near- and far- clip planes of the camera, how do I calculate the true z value at this point, i.e. the distance from the camera?

From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;

void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x; // raw [0,1] value from the depth buffer
    float z_n = 2.0 * z_b - 1.0;                      // back to NDC, [-1,1]
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear)); // eye-space distance
}
[edit] So here's the explanation (with the two mistakes flagged in Christian's comment corrected):
An OpenGL perspective matrix looks like this:
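For reference, this is the standard matrix built by gluPerspective(fovy, aspect, zNear, zFar); only the third row matters here:

f/aspect  0   0   0
0         f   0   0
0         0   A   B
0         0  -1   0

with f = cotangent(fovy / 2), A = -(zFar + zNear) / (zFar - zNear) and B = -2 * zFar * zNear / (zFar - zNear).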
When you multiply this matrix by a homogeneous point [x, y, z, 1], it gives you: [don't care, don't care, A*z + B, -z] (with A and B the two big components in the third row of the matrix).
OpenGL then does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can't control it. Since w = -z, the Z value becomes -A - B/z.
We are now in Normalized Device Coordinates, and the Z value is in the [-1,1] range (just like x and y). A scaling and offset (the glDepthRange transform, [0,1] by default) is then applied.
This final value is stored in the depth buffer.
The above code does the exact opposite:
z_b is the raw value stored in the buffer
z_n linearly transforms z_b from [0,1] back to [-1,1]
z_e is the same formula as z_n = -A + B/z_e (with z_e the positive eye-space distance), but solved for z_e instead. It's equivalent to z_e = B / (z_n + A). A and B should be computed on the CPU and sent as uniforms, btw.
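A minimal sketch of that variant (the uniform names A and B are my own; they hold the two projection-matrix terms defined above, computed on the CPU):

uniform sampler2D depthBuffTex;
uniform float A; // projection[2][2], i.e. -(zFar + zNear) / (zFar - zNear)
uniform float B; // projection[3][2], i.e. -2.0 * zFar * zNear / (zFar - zNear)
varying vec2 vTexCoord;

void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    float z_n = 2.0 * z_b - 1.0;
    float z_e = B / (z_n + A); // positive eye-space distance, same result as above
}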
The opposite function is:
varying float depth; // Linear depth, in world units

void main(void)
{
    float A = gl_ProjectionMatrix[2].z; // = -(zFar + zNear) / (zFar - zNear)
    float B = gl_ProjectionMatrix[3].z; // = -2.0 * zFar * zNear / (zFar - zNear)
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}

I know this is an old, old question, but I've found myself back here more than once on various occasions, so I thought I'd share my code that does the forward and reverse conversions.
This is based on @Calvin1602's answer. These work in GLSL or plain old C code.
uniform float zNear = 0.1;
uniform float zFar = 500.0;

// depthSample from depthTexture.r, for instance
float linearDepth(float depthSample)
{
    depthSample = 2.0 * depthSample - 1.0;
    float zLinear = 2.0 * zNear * zFar / (zFar + zNear - depthSample * (zFar - zNear));
    return zLinear;
}

// result suitable for assigning to gl_FragDepth
float depthSample(float linearDepth)
{
    float nonLinearDepth = (zFar + zNear - 2.0 * zNear * zFar / linearDepth) / (zFar - zNear);
    nonLinearDepth = (nonLinearDepth + 1.0) / 2.0;
    return nonLinearDepth;
}
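As a quick sanity check, the two functions are inverses, so a round trip should reproduce the input; a minimal sketch (the sampler and varying names are assumptions):

uniform sampler2D depthTexture;
varying vec2 vTexCoord;

void main()
{
    float d = texture2D(depthTexture, vTexCoord).r;
    float zEye = linearDepth(d);      // eye-space distance, in world units
    gl_FragDepth = depthSample(zEye); // writes back (approximately) the original d
}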

I ended up here trying to solve a similar problem when Nicol Bolas's comment on this page made me realize what I was doing wrong. If you want the distance to the camera and not the distance to the camera plane, you can compute it as follows (in GLSL):
float GetDistanceFromCamera(float depth,
                            vec2 screen_pixel,
                            vec2 resolution) {
    float fov = ...
    float near = ...
    float far = ...

    // distance to the camera plane (linearized depth)
    float distance_to_plane = near / (far - depth * (far - near)) * far;

    // perspective correction: scale by the ratio of the ray length through
    // this pixel to the focal length
    vec2 center = resolution / 2.0 - 0.5;
    float focal_length = (resolution.y / 2.0) / tan(fov / 2.0);
    float diagonal = length(vec3(screen_pixel.x - center.x,
                                 screen_pixel.y - center.y,
                                 focal_length));
    return distance_to_plane * (diagonal / focal_length);
}
(source) Thanks to GitHub user cassfalg:
https://github.com/carla-simulator/carla/issues/2287
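A minimal sketch of how this could be called from a fragment shader (the uniform names here are my own, not from the Carla source):

uniform sampler2D u_depth_tex; // hypothetical depth texture
uniform vec2 u_resolution;     // viewport size in pixels

void main()
{
    float depth = texture2D(u_depth_tex, gl_FragCoord.xy / u_resolution).r;
    float dist = GetDistanceFromCamera(depth, gl_FragCoord.xy, u_resolution);
    gl_FragColor = vec4(vec3(dist / 100.0), 1.0); // arbitrary scaling, just to visualize
}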

Related

What is the best OpenGL self-shadowing technique?

I chose the quite old but sufficient method of shadow mapping, which is OK overall, but I quickly discovered some self-shadowing problems:
It seems this problem appears because of the bias offset, which is necessary to eliminate shadow acne artifacts.
After some googling, it seems that there is no easy solution to this, so I tried some shader tricks, which worked, but not very well.
My first idea was to compute the dot product of the light direction vector and the normal vector. If the result is below 0, the angle between the vectors is greater than 90 degrees, so the surface faces away from the light source and hence is not illuminated. This works well, except the shadows may appear too sharp and hard:
Since I was not satisfied with the results, I tried another trick: multiplying the shadow value by the absolute value of the dot product of the light direction and the normal vector (based on the normal map). This did work (the hard shadows from the previous image got a smooth transition from shadow to regular diffuse color), except it created another artifact in situations where the normal-map normal points somewhat toward the sun but the face normal does not. It also made self-shadows much brighter (but that is fixable):
Can I do something about it, or should I just choose the lesser evil?
Shadow shader code for example 1:
vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
float depthValue = abs(fragPosViewSpace.z);
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(FragPos, 1.0);
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// keep the shadow at 0.0 when outside the far_plane region of the light's frustum.
if (currentDepth > 1.0)
{
    return 0.0;
}
// calculate bias (based on depth map resolution and slope)
float bias = max(0.005 * (1.0 - dot(normal, lightDir)), 0.0005);
// PCF: average the shadow test over a (2 * sampleRadius + 1)^2 texel neighborhood
float shadow = 0.0; // (declared here for completeness)
vec2 texelSize = 1.0 / vec2(textureSize(material.texture_shadow, 0));
const int sampleRadius = 2;
const float sampleRadiusCount = pow(sampleRadius * 2 + 1, 2);
for (int x = -sampleRadius; x <= sampleRadius; ++x)
{
    for (int y = -sampleRadius; y <= sampleRadius; ++y)
    {
        // "layer" selects the slice of the shadow-map array texture
        float pcfDepth = texture(material.texture_shadow, vec3(projCoords.xy + vec2(x, y) * texelSize, layer)).r;
        shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0;
    }
}
shadow /= sampleRadiusCount;
Hard self-shadows trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
// "Normal" is the face normal vector; "normal" is calculated based on the normal map.
// I know there is a naming problem with that))
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = max(abs(vectorNormalDot), ambientShadow);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
Dot product multiplication trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = ambientShadow * abs(vectorNormalDot);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}

Adaptive depth bias for texture sampling

I have a complex 3D scene; the values in my depth buffer range from close shots of several centimeters to several kilometers.
For various effects I am using a depth bias, an offset to circumvent some artifacts (SSAO, shadows). Even during depth peeling, comparing the depth of the current peel against the previous one can cause issues.
I have fixed those issues for close-up shots, but when the fragment is far enough away, the bias becomes inadequate.
I am wondering how to tackle the bias for such scenes. Should the bias depend on the world-space depth of the current pixel, or should the effect simply be disabled beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
It seems I found a way.
I found this link about shadow bias:
https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm
Depth bias and normal offset values are specified in shadow map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow map texels closer to the light.
By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
I use the world-space difference between the current point and a neighboring pixel with the same depth component. The bias becomes something close to "the average distance between two neighboring pixels". The further away the pixel is, the larger the bias will be (from a few millimeters close to the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by a few pixels in its x direction (3 pixels gives me good results on various scenes).
I compute the world-space difference between the currentPoint and this new offsetedPoint.
I use this difference as a bias for all my depth testing.
Code:
float compute_depth_offset() {
    mat4 inv_mvp = inverse(mvp);
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;

    vec4 currentNDC;
    vec4 nextNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    // invert the window transform (glDepthRange) to get NDC z
    currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) / (depth_range.far - depth_range.near);
    currentNDC.w = 1.0;

    nextNDC.xy = nextPixel * 2.0 - 1.0;
    nextNDC.z = currentNDC.z;
    nextNDC.w = currentNDC.w;

    vec4 world = (inv_mvp * currentNDC);
    world.xyz = world.xyz / world.w;
    vec4 nextWorld = (inv_mvp * nextNDC);
    nextWorld.xyz = nextWorld.xyz / nextWorld.w;

    return length(nextWorld.xyz - world.xyz);
}
More recently, I used only the world-space derivative of the current pixel's position:
float compute_depth_offset(float zNear, float zFar)
{
    mat4 mvp = projection * modelView;
    mat4 inv_mvp = inverse(mvp);
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;

    vec4 currentNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - 0.0 - 1.0) / (1.0 - 0.0); // assumes the default glDepthRange(0, 1)
    currentNDC.w = 1.0;

    vec4 world = (inv_mvp * currentNDC);
    world.xyz = world.xyz / world.w;

    // screen-space derivatives of the reconstructed world position
    vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));
    return depth.x + depth.y + depth.z;
}
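A hedged usage sketch for the depth-peeling case (the sampler and uniform names are assumptions; linearDepth() is the [0,1]-depth-to-eye-space helper from the first answer on this page):

// compare eye-space depths using the adaptive, world-space bias
float bias = compute_depth_offset(zNear, zFar);
float prevEye = linearDepth(texture(previousPeelDepth, uv).r);
float currEye = linearDepth(gl_FragCoord.z);
if (currEye <= prevEye + bias)
    discard; // this fragment was already captured by an earlier peel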

Fish-eye warping about mouse position - fragment shader

I'm trying to create a fish-eye effect, but only within a small radius around the mouse position. I've been able to modify this code to work around the mouse position (demo), but I can't figure out where the zooming is coming from. I'm expecting the output to warp the image similarly to this (ignore the color inversion for the sake of this question):
Relevant code:
// Check if within given radius of the mouse
vec2 diff = myUV - u_mouse - 0.5;
float distance = dot(diff, diff); // square of distance, saves a square-root
// Add fish-eye
if (distance <= u_radius_squared) {
    vec2 xy = 2.0 * (myUV - u_mouse) - 1.0;
    float d = length(xy * maxFactor);
    float z = sqrt(1.0 - d * d);
    float r = atan(d, z) / PI;
    float phi = atan(xy.y, xy.x);
    myUV.x = d * r * cos(phi) + 0.5 + u_mouse.x;
    myUV.y = d * r * sin(phi) + 0.5 + u_mouse.y;
}
vec3 tex = texture2D(tMap, myUV).rgb;
gl_FragColor.rgb = tex;
This is my first shader, so other improvements besides fixing this issue are also welcome.
Compute the vector from the current fragment to the mouse and the length of the vector:
vec2 diff = myUV - u_mouse;
float distance = length(diff);
The new texture coordinate is the sum of the mouse position and the scaled direction vector:
myUV = u_mouse + normalize(diff) * u_radius * f(distance/u_radius);
For instance:
#define PI 3.1415926538

uniform sampler2D tMap;
uniform float u_radius;
uniform vec2 u_mouse;

void main()
{
    // myUV is the interpolated texture coordinate, as in the question's code
    vec2 diff = myUV - u_mouse;
    float distance = length(diff);
    if (distance <= u_radius)
    {
        // eases from 0 at the mouse position to 1 at the edge of the radius
        float scale = (1.0 - cos(distance / u_radius * PI * 0.5));
        myUV = u_mouse + normalize(diff) * u_radius * scale;
    }
    vec3 tex = texture2D(tMap, myUV).rgb;
    gl_FragColor = vec4(tex, 1.0);
}

World To Screenspace for Mode7 effect (fragment shader)

I have a fragment shader that transforms the view into something resembling Mode 7.
I want to know the screen-space x, y coordinates for a given world position.
As the transformation happens in the fragment shader, I can't simply invert a matrix. This is the fragment shader code:
uniform float Fov;     // 1.4
uniform float Horizon; // 0.6
uniform float Scaling; // 0.8

void main() {
    vec2 pos = uv.xy - vec2(0.5, Horizon);
    vec3 p = vec3(pos.x, pos.y, pos.y + Fov);
    vec2 s = vec2(p.x / p.z, p.y / p.z) * Scaling;
    s.x += 0.5;
    s.y += screenRatio;
    gl_FragColor = texture2D(ColorTexture, s);
}
It transforms pixels in a pseudo-3D way.
What I want to do is get a screen-space coordinate for a given world position (in normal code, not shaders).
How do I reverse the order of operations above?
This is what I have right now:
(GAME_WIDTH and GAME_HEIGHT are constants and hold pixel values, e.g. 320x240)
vec2 WorldToScreenspace(float x, float y) {
    // normalize coordinates to 0..1, as x, y are in pixels
    x = x / GAME_WIDTH - 0.5;
    y = y / GAME_HEIGHT - Horizon;
    // as z depends on a y value I have yet to calculate, how can I calc it?
    float z = ??;
    // invert: vec2 s = vec2(p.x/p.z, p.y/p.z) * Scaling;
    float sx = x * z / Scaling;
    float sy = y * z / Scaling;
    // invert: pos = uv.xy - vec2(0.5, Horizon);
    sx += 0.5;
    sy += screenRatio;
    // convert back to screen space
    return new vec2(sx * GAME_WIDTH, sy * GAME_HEIGHT);
}
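For what it's worth, z can be recovered algebraically from the forward mapping. With t = (s.y - screenRatio) / Scaling, the shader gives t = pos.y / z and z = pos.y + Fov, hence z = Fov / (1 - t). A sketch under those assumptions (untested against the original demo; note that y is normalized with screenRatio, matching the forward mapping, where the attempt above uses Horizon):

vec2 WorldToScreenspace(float x, float y) {
    // invert "s.x += 0.5; s.y += screenRatio;" and normalize to 0..1
    float sx = x / GAME_WIDTH - 0.5;
    float sy = y / GAME_HEIGHT - screenRatio;
    // recover z from t = pos.y / z and z = pos.y + Fov
    float t = sy / Scaling;
    float z = Fov / (1.0 - t);
    // invert the projection: pos = s * z / Scaling
    float px = sx * z / Scaling;
    float py = sy * z / Scaling;
    // invert "pos = uv.xy - vec2(0.5, Horizon)" and return to pixels
    return vec2((px + 0.5) * GAME_WIDTH, (py + Horizon) * GAME_HEIGHT);
}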

OpenGL - Trouble With Custom Perspective Matrix & Shader

This is my code for generating the perspective matrices:
public static Matrix4f orthographicMatrix(float left, float right, float bot,
        float top, float far, float near) {
    // construct and return matrix
    Matrix4f mat = new Matrix4f();
    mat.m00 = 2 / (right - left);
    mat.m11 = 2 / (top - bot);
    mat.m22 = -2 / (far - near);
    mat.m30 = -((right + left) / (right - left));
    mat.m31 = -((top + bot) / (top - bot));
    mat.m32 = -((far + near) / (far - near));
    mat.m33 = 1;
    return mat;
}

public static Matrix4f projectionMatrix(float fovY, float aspect,
        float near, float far) {
    // compute values
    float yScale = (float) (1 / Math.tan(Math.toRadians(fovY / 2)));
    float xScale = yScale / aspect;
    float zDistn = near - far;
    // construct and return matrix
    Matrix4f mat = new Matrix4f();
    mat.m00 = xScale;
    mat.m11 = yScale;
    mat.m22 = far / zDistn;
    mat.m23 = (near * far) / zDistn;
    mat.m32 = -1;
    mat.m33 = 0;
    return mat;
}
In my program, a square is first rendered with the orthographic matrix, then a square is rendered with the perspective matrix.
But here's my problem - when my shader does the multiplication in this order:
gl_Position = mvp * vec4(vPos.xyz, 1);
Only the square rendered with the perspective matrix is displayed. But when the multiplication is done in this order:
gl_Position = vec4(vPos.xyz, 1) * mvp;
Only the square rendered with the orthographic matrix is displayed! So obviously my problem is that only one square is displayed at a time, depending on the multiplication order.
Issues with multiplication order are indicative of issues with the order in which you store the components of your matrix.
    mat * vec is equivalent to vec * transpose(mat)
To put this another way, if you are using row-major matrices (which are effectively transpose(mat) as far as GL is concerned) instead of column-major, you need to reverse the order of your matrix multiplication. This goes for compound multiplication too: you have to reverse the entire sequence of multiplications. This is why the order of scaling, rotation and translation is backwards between D3D (row-major) and OpenGL (column-major).
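In GLSL the two orders are directly observable (a tiny illustration of the rule above, not part of the original answer):

uniform mat4 M;
attribute vec4 v;

void main()
{
    vec4 a = M * v; // v treated as a column vector
    vec4 b = v * M; // v treated as a row vector; equal to transpose(M) * v
    gl_Position = a; // a and b only agree when M is symmetric
}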
Therefore, when you construct your MVP matrix it should look like this:
    projection * view * model  (OpenGL)
    model * view * projection  (Direct3D)
Column-major matrix multiplication, as OpenGL uses, should read right-to-left. That is, you start with an object space position (right-most) and transform to clip space (left-most).
In other words, this is the proper way to transform your vertices...
gl_Position = ModelViewProjectionMatrix * position;
~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~
clip space object space to clip space obj space