Reconstructed position from depth - How to handle precision issues? - c++

In my deferred renderer, I've managed to successfully reconstruct my fragment position from the depth buffer... mostly. By comparing my results to the position stored in an extra buffer, I've noticed that I'm getting a lot of popping far away from the camera. Here's a screenshot of what I'm seeing:
The green and yellow parts at the top are just the skybox, where the position buffer contains (0, 0, 0) but the reconstruction algorithm interprets it as a normal fragment with depth = 0.0 (or 1.0?).
The scene is rendered using fragColor = vec4(0.5 + (reconstPos - bufferPos.xyz), 1.0);, so anywhere that the resulting fragment is exactly (0.5, 0.5, 0.5) is where the reconstruction and the buffer have the exact same value. Imprecision towards the back of the depth buffer is to be expected, but that magenta and blue seems a bit strange.
This is how I reconstruct the position from the depth buffer:
vec3 reconstructPositionWithMat(vec2 texCoord)
{
    float depth = texture2D(depthBuffer, texCoord).x;
    depth = (depth * 2.0) - 1.0;           // [0,1] depth -> [-1,1] NDC
    vec2 ndc = (texCoord * 2.0) - 1.0;     // screen UV -> [-1,1] NDC
    vec4 pos = vec4(ndc, depth, 1.0);
    pos = matInvProj * pos;                // back to view space (homogeneous)
    return vec3(pos.xyz / pos.w);          // perspective divide
}
Where texCoord = gl_FragCoord.xy / textureSize(colorBuffer, 0);, and matInvProj is the inverse of the projection matrix used to render the gbuffer.
Right now my position buffer is GL_RGBA32F (since it's only for testing accuracy, I don't care as much about bandwidth and memory waste), and my depth buffer is GL_DEPTH24_STENCIL8 (I got similar results from GL_DEPTH_COMPONENT32, and yes, I do need the stencil buffer).
My znear is 0.01f and my zfar is 1000.0f. I'm rendering a single quad as my ground, 2000.0f x 2000.0f in size (I wanted it to be big enough that it would clip against the far plane).
Is this level of imprecision considered acceptable? What are some ways that people have gotten around this problem? Is there something wrong with how I reconstruct the view/eye-space position?

Related

Dealing with the non-linearity of perspective shadowmap depth values

I have implemented shadow maps in GLSL by rendering the view from a light into a depth texture, and then in a second pass compare these values when rendering my geometry from camera view.
In abbreviated code, the vertex shader of the second (main) render pass is:
...
gl_Position = camviewprojmat * position;
shadowcoord = lightviewprojmat * position;
...
and in the fragment shader I look up this shadowcoord texel in the shadow texture to see if the light sees the same thing (lit) or something closer (shadowed). This is done by setting GL_TEXTURE_COMPARE_MODE to GL_COMPARE_REF_TO_TEXTURE for the depth texture.
This works great for lights that have an orthogonal projection. But once I use a perspective projection to create wide-angle spot lights, I encounter errors in the image.
I have determined the cause of my issues to be the incorrectly interpolated depth values shadowcoord.z / shadowcoord.w which, due to the perspective projection, are not linear. Yet, the interpolation over the triangle is linear.
At the vertex locations, the depth values are determined exactly, but the fragments between vertex locations get incorrectly interpolated values for depth.
This is demonstrated by the image below. The yellow crosshairs are the light position, which is a spot-light looking straight down. The colour-coding is the light-depth from -1 (red) to +1 (blue).
The pillar in the middle has long tall triangles from top to bottom, and all the interpolated light-depth values are off by a lot.
The stairs on the left have many more vertex locations, so they sample the non-linear depth more accurately.
The projection matrix I use for the spot light is created like this (I use a very wide angle of 170 deg):
// create a perspective projection matrix (column-major, standard OpenGL layout)
const float f = 1.0f / tanf(fov / 2.0f);
const float aspect = 1.0f;
float* mout = sl_proj.data;
mout[0] = f / aspect; mout[4] = 0.0f; mout[8]  = 0.0f;                            mout[12] = 0.0f;
mout[1] = 0.0f;       mout[5] = f;    mout[9]  = 0.0f;                            mout[13] = 0.0f;
mout[2] = 0.0f;       mout[6] = 0.0f; mout[10] = (zFar + zNear) / (zNear - zFar); mout[14] = 2.0f * zFar * zNear / (zNear - zFar);
mout[3] = 0.0f;       mout[7] = 0.0f; mout[11] = -1.0f;                           mout[15] = 0.0f;
How can I deal with this non-linearity in the light depth buffer? Is it possible to have perspective projection that has linear depth values? Should I compute my shadow coordinates differently? Can they be corrected after the fact?
Note: I did consider doing the projection in the fragment shader instead, but as I have many lights in the scene, doing all those matrix multiplications in the fragment shader would be too costly in computation.
This stackoverflow answer describes how to do a linear depth buffer.
It entails writing out the eye-space depth -(modelviewmat * position).z in the vertex shader, and then in the fragment shader computing the linear depth as:
gl_FragDepth = ( depth - zNear ) / ( zFar - zNear );
And with a linear depth buffer, the fragment interpolators can do their job properly.

OpenGL GLSL bloom effect bleeds on edges

I have a framebuffer called "FBScene" that renders to a texture TexScene.
I have a framebuffer called "FBBloom" that renders to a texture TexBloom.
I have a framebuffer called "FBBloomTemp" that renders to a texture TexBloomTemp.
First I render all my blooming / glowing objects to FBBloom and thus into TexBloom. Then I play ping pong with FBBloom and FBBloomTemp, alternatingly blurring horizontally / vertically to get a nice bloom texture.
Then I pass the final "TexBloom" texture and the TexScene to a screen shader that draws a screen filling quad with both textures:
gl_FragColor = texture(TexBloom, uv) + texture(TexScene, uv);
The problem is:
While blurring the images, the bloom effect bleeds into the opposite edges of the screen if the glowing object is too close to the screen border.
This is my blur shader:
vec4 color = vec4(0.0);
vec2 off1 = vec2(1.3333333333333333) * direction;
vec2 off1DivideByResolution = off1 / resolution;
vec2 uvPlusOff1 = uv + off1DivideByResolution;
vec2 uvMinusOff1 = uv - off1DivideByResolution;
color += texture(image, uv) * 0.29411764705882354;
color += texture(image, uvPlusOff1) * 0.35294117647058826;
color += texture(image, uvMinusOff1) * 0.35294117647058826;
gl_FragColor = color;
I think I need to prevent uvPlusOff1 and uvMinusOff1 from being outside of the -1 and +1 uv range. But I don't know how to do that.
I tried to clamp the uv values at the gap in the code above with:
float px = clamp(uvPlusOff1.x, -1, 1);
float py = clamp(uvPlusOff1.y, -1, 1);
float mx = clamp(uvMinusOff1.x, -1, 1);
float my = clamp(uvMinusOff1.y, -1, 1);
uvPlusOff1 = vec2(px, py);
uvMinusOff1 = vec2(mx, my);
But it did not work as expected. Any help is highly appreciated.
Bleeding to the other side of the screen usually happens when the wrap-mode is set to GL_REPEAT. Set it to GL_CLAMP_TO_EDGE and it shouldn't happen anymore.
Edit - To explain a little bit more why this happens in your case: A texture coordinate of [1,1] means the bottom-right corner of the bottom-right texel. When linear filtering is enabled, this location will read the four texels around that corner. In case of repeating textures, three of them are on other sides of the screen. If you want to prevent the problem manually, you have to clamp to the range [0 + 1/texture_size, 1 - 1/texture_size].
I'm also not sure why you even clamp to [-1, 1], because texture coordinates usually range from [0, 1]. Negative values will be outside of the texture and are handled by the wrap mode.

Shadow Map Produces Incorrect Results

I'm attempting to implement shadow mapping in my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map and then shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
    // perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // vec3 projCoords = fragPosLightSpace.xyz;
    // Transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get closest depth value from the light's perspective (using [0,1] fragPosLight as coords)
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
    // Get depth of the current fragment from the light's perspective
    float currentDepth = projCoords.z;
    // Check whether the current fragment is in shadow
    float bias = 0.005;
    float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
    // Anything beyond the light's far plane (Z > 1) is treated as lit
    if (projCoords.z > 1.0) {
        shadow = 0.0;
    }
    return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map half-assedly converted to an image in Photoshop:
Render output
Shadow Map
Since the directional light is the only light in my shader, it seems that the shadow map is being rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of the light space matrix (I'm not sure how to properly calculate that for a moving camera, so that whatever is in view stays covered), or in the way I determine whether the texel the deferred renderer is shading is in shadow or not. (FWIW, I determine the world position from the depth buffer, but I've verified that this calculation is working correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to see if there are any problems with the light transform matrix. From your output, it seems the sun is very horizontal and might not cast shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking your maximum depth in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth is too large, you will lose precision and shadow will have artifacts.
To further visualize where the problem is coming from, try outputting the result of your shadow map lookup from here:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!

reconstructed world position from depth is wrong

I'm trying to implement deferred shading/lighting. In order to reduce the number and size of the buffers I use, I wanted to use the depth texture to reconstruct the world position later on.
I do this by multiplying the pixel's coordinates with the inverse of the projection matrix and the inverse of the camera matrix. This sort of works, but the position is a bit off. Here's the absolute difference with a sampled world position texture:
For reference, this is the code I use in the second pass fragment shader:
vec2 screenPosition_texture = vec2(gl_FragCoord.x / WIDTH, gl_FragCoord.y / HEIGHT);
float pixelDepth = texture2D(depth, screenPosition_texture).x;
vec4 worldPosition = pMatInverse * vec4(VertexIn.position, pixelDepth, 1.0);
worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
//worldPosition /= 1.85;
worldPosition = cMatInverse * worldPosition;
If I uncomment worldPosition /= 1.85, the position is reconstructed a lot better (on my geometry/range of depth values). I just got this value by messing around after comparing my output with what it should be (stored in a third texture).
I'm using 0.1 near, 100.0 far, and my geometry is up to about 15 units away.
I know there may be precision errors, but this seems like too big an error this close to the camera.
Did I miss anything here?
As mentioned in a comment:
I didn't convert the depth value from NDC space to clip space.
I should have added this line:
pixelDepth = pixelDepth * 2.0 - 1.0;

Texture repeating and clamping in shader

I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
    // Tile is 1.0/16.0 part of texture, on x and y
    float tileSp = 1.0 / 16.0;
    vec4 color = texture2D(sampler, texcoord);
    // Get tile x and y by red color stored
    float texTX = mod(color.r, tileSp);
    float texTY = color.r - texTX;
    texTX /= tileSp;
    // Testing tile
    texTX = 1.0 - tileSp;
    texTY = 1.0 - tileSp;
    vec2 savedC = color.yz;
    // This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
    if (color.r > 0.1) {
        savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
        savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
    } else {
        savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
        savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
    }
    vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
    vec4 res = texture2D(texturePack, texcoordC);
    return res;
}
However, I have some trouble with seams showing (1 pixel wide, it seems). If I leave out texcoordC *= 10.0 no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried values above 0.0 and below 1.0) to no avail. I strongly suspect a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10, and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot. The seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 texels in total.
EDIT: This is a followup question of: this question, although all necessary info should be included here.
Last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.