Hello there, fellow programmers. I have found yet another obstacle while improving my shadow mapping.
I am doing some shadow mapping and cannot find any suitable depth bias for it. Some time ago in my XNA project, when I was doing it for the first time, I could set it up quite nicely on a terrain and find a suitable bias in no time, because if no bias was set, acne like this occurred:
I adjusted the bias until the shadows looked fine; that is the basic idea behind it, as far as I understand. This is the result I am aiming for.
Now I have moved on to DirectX 11, set up a little scene, and done some PCF shadow mapping. When setting no bias, there was no such acne, but the shadows were disconnected, like this (sorry for the ugly texturing):
It should have acne on the ground, and the shadow of the box should connect to the box's edge, but instead it is disconnected. When lowering the bias (to a negative value), the gap starts to decrease, but soon the whole shadow map darkens before reaching the optimal gap distance.
I am using the same shader code as in XNA (only extended by PCF filtering), so I am assuming it has more to do with the API. Checking the XNA code, I cannot even see what the render-to-texture formats are, because they are hidden (I guess).
If anyone has had this problem before, please help me achieve shadow acne with a zero bias.
UPDATE: This image shows the uneven shadowing on a plain flat quad (the acne should be even across the whole quad, as it is on the terrain in XNA):
UPDATE 2: I tried switching the culling mode to cull front-facing faces during the depth render pass. That way the shadows can be adjusted a little better, because there is no floor rendered that would darken when the bias is modified, but the shadows are still very uneven. Individual meshes' shadows can be adjusted to look good, but then the others get messed up.
Somehow, I think, the depth buffer distribution is very uneven, because objects closer to the far plane get better results. I don't think my clipping distances are bad, either. The shadow projection is set to a 1-2000 depth range (in my scene, 1 meter is approximately 200 units), but somehow the accuracy is way off.
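For what it's worth, if the shadow projection is a perspective one, depth precision really is very uneven: post-projection depth is packed toward the near plane. A quick sketch with the 1-2000 range above (OpenGL-style NDC in [-1, 1]; D3D's [0, 1] convention skews the same way):

```python
# Post-projection (NDC) depth for a standard perspective projection,
# mapping view-space distance z in [near, far] to [-1, 1].
def ndc_depth(z, near=1.0, far=2000.0):
    return ((far + near) - 2.0 * far * near / z) / (far - near)

print(ndc_depth(1.0))     # -1.0 at the near plane
print(ndc_depth(10.0))    # ~0.80: the first 10 of 2000 units use ~90% of the range
print(ndc_depth(2000.0))  # 1.0 at the far plane
```

An orthographic shadow projection (typical for directional lights) does not have this problem; its depth is linear.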
I have tried depth texture formats of R32_FLOAT and D24_UNORM_S8_UINT.
I've seen a similar issue when not setting the SlopeScaledDepthBias property appropriately (also part of D3D11_RASTERIZER_DESC).
As is explained here, SlopeScaledDepthBias is an important part of the depth bias calculation; make sure it's initialized to something that makes sense.
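Per the D3D documentation, the rasterizer combines the two bias values as bias = DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope, where r is the smallest representable depth step. A sketch for a 24-bit UNORM depth buffer (the numbers plugged in are illustrative, not recommendations):

```python
# Effective depth bias as D3D11 computes it for a UNORM depth buffer:
#   bias = DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope
# r = 2^-24 is the smallest representable step of a 24-bit UNORM buffer;
# MaxDepthSlope is the polygon's maximum depth slope at that pixel.
def effective_bias(depth_bias, slope_scaled_bias, max_depth_slope, unorm_bits=24):
    r = 2.0 ** -unorm_bits
    return depth_bias * r + slope_scaled_bias * max_depth_slope

# A steep triangle gets far more bias than a flat one with the same settings:
flat  = effective_bias(100, 2.0, 0.0)  # constant term only
steep = effective_bias(100, 2.0, 0.5)  # slope term dominates
```

This is why slope-scaled bias targets acne on steep surfaces without over-biasing flat ones.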
OK, slope-scaled bias is a solution to self-shadowing acne, but it was not the correct solution for my problem.
I have found out that I had my viewport set up incorrectly. All this time I had filled out its depth range as 0.1f near and 1.0f far, because I did not know exactly what it was for and it seemed to work OK, but the depth buffer was uneven, getting more precision near the shadow eye and losing precision further away.
I changed the viewport used when rendering to the shadow map's depth stencil view to 0.0f on the near plane, and finally it worked; now the whole scene gets depth information evenly, as it previously did in XNA.
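The viewport's MinDepth/MaxDepth values remap NDC depth into window depth, which is what ends up in the shadow map. A sketch of that remapping shows why 0.1f was harmful:

```python
# Viewport depth transform: window_z = MinDepth + ndc_z * (MaxDepth - MinDepth),
# with ndc_z in [0, 1] under the D3D convention. With MinDepth = 0.1 the
# shadow map never stores anything below 0.1, so depth comparisons against
# it are skewed across the whole scene.
def window_depth(ndc_z, min_depth, max_depth):
    return min_depth + ndc_z * (max_depth - min_depth)

print(window_depth(0.0, 0.1, 1.0))  # 0.1: the near plane is shifted
print(window_depth(0.0, 0.0, 1.0))  # 0.0 with the corrected viewport
```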
Related
I'm experimenting with rendering by trying to make a Minecraft-like voxel engine. I want nice visuals, so I implemented basic shadow mapping. On a basic level it works: I do have shadows, and I have already accounted for shadow acne. But I am still encountering weird artifacts and problems which make the scene look rubbish.
This is what I had to begin with
I am guessing that the sawtooth shadow pattern on straight edges is basically a projection of the individual shadow map pixels, so I tried increasing the resolution of the shadow map to a massive 8192x8192, and it did indeed make the sawtooth much finer (though still perfectly visible).
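That guess can be checked on the back of an envelope: each shadow-map texel covers ortho_extent / resolution world units, so raising the resolution only shrinks the steps, never removes them (the ortho extent below is made up; substitute your light frustum's width):

```python
# World-space footprint of one shadow-map texel under an ortho projection.
# ortho_extent is the world-space width the map covers (illustrative value).
def texel_world_size(ortho_extent, resolution):
    return ortho_extent / resolution

print(texel_world_size(256.0, 2048))  # 0.125 world units per texel
print(texel_world_size(256.0, 8192))  # 0.03125: 4x finer, but still finite
```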
I changed from GL_NEAREST to GL_LINEAR filtering and added Poisson sampling in my shader like this:
in vec3 shadowCoord;
in vec2 outTexCoord;
out vec4 fragColor;
uniform sampler2DShadow depth_buffer_texture; // depth texture with comparison enabled
uniform sampler2D texture_sampler;
uniform vec2 poissonDisk[4]; // the 4 Poisson-disk offsets

void main()
{
    float visibility = 0.6;
    for (int i = 0; i < 4; i++) {
        visibility += 0.1 * texture(depth_buffer_texture,
            vec3(shadowCoord.xy + poissonDisk[i] / 14000.0, shadowCoord.z - 0.0002));
    }
    fragColor = texture(texture_sampler, outTexCoord) * visibility;
}
(I also tried changing to sampler2DShadow but that did literally nothing)
The result looks better but still has some problems; namely, the sawtooth is still visible both on the casting surface and on the shadow itself.
This does look a lot better but I can still see the following problems:
I have an 8192x8192 shadow map, which does not seem reasonable (though I'm not sure; it does still run at a comfortable FPS even without any optimisations at this point, so maybe in this case it is OK?)
The shadow edges are still not smooth, both on the casting surface and on the surface the shadow falls on
As this is an 'infinite' voxel world, the light source, while using a constant ortho projection, has to follow the player around, and when the source moves with the player the shadow edges move around and flicker in a most annoying way. With the large texture and smoothing it is almost OK, but it gets worse fast with a smaller texture size. I believe it's called shadow swimming, but I could not find the proper way to fix it
There is an annoying Moiré-like pattern on the farther hill. While it is present without the shadows, it is made considerably worse by them, and I can't seem to find what it's called and therefore can't really search for how to fix it
If anyone can help me fix these problems or even just point me in the right direction I would be very grateful. Thanks in advance!
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall, logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even "extruding" a 2D texture through the volume will work, or better yet, weight three textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
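The three-texture weighting (often called triplanar mapping) can be sketched like this; the weights simply come from the surface normal:

```python
# Triplanar blend weights: each axis-aligned texture is weighted by how
# closely the surface normal faces that axis, normalised to sum to 1.
def triplanar_weights(nx, ny, nz):
    ax, ay, az = abs(nx), abs(ny), abs(nz)
    s = ax + ay + az
    return ax / s, ay / s, az / s

# A surface facing straight up samples only the "top" (Y-axis) texture:
print(triplanar_weights(0.0, 1.0, 0.0))  # (0.0, 1.0, 0.0)
```

In a shader the same weights would blend three 2D texture lookups, one per axis, using the fragment's world position as texture coordinates.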
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes more light the further the light has to travel through it. Think of many alpha-transparent surfaces; take the limit and you have an exponential. The light remaining is 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
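A quick numeric check of the Beer-Lambert relation above (scalar version of the GLSL; the constants are made up): transmittance falls off exponentially with depth and density.

```python
import math

# Beer-Lambert: transmittance = exp(-absorption * density * depth).
# Scalar stand-in for the per-channel GLSL computation above.
def transmittance(absorption, density, depth):
    return math.exp(-absorption * density * depth)

shallow = transmittance(0.5, 1.0, 1.0)   # ~0.61 of the light survives
deep    = transmittance(0.5, 1.0, 10.0)  # ~0.0067: nearly all absorbed
```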
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
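Per pixel, the two blended passes above boil down to (sum of back-face depths) - (sum of front-face depths) = total distance travelled inside the volume. A toy sketch of that arithmetic:

```python
# Per-pixel water thickness from the two passes described above: the
# additive pass accumulates back-face (or seabed) depths, the subtractive
# pass removes front-face (surface) depths; the remainder is the distance
# the view ray travels through the volume.
def thickness(back_face_depths, front_face_depths):
    return sum(back_face_depths) - sum(front_face_depths)

# Ray enters the water at depth 2.0 and hits the seabed at depth 5.0:
print(thickness([5.0], [2.0]))            # 3.0 units of water
# Two disjoint water volumes along the same ray still sum correctly:
print(thickness([5.0, 9.0], [2.0, 8.0]))  # 3.0 + 1.0 = 4.0
```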
Volume Rendering
Combining the two ideas, the material is both a transparent solid and one whose colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function at the same time. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step of the march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
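A minimal sketch of that Euler march (scalar colour for brevity; sample_colour and sample_density stand in for whatever 3D texture or procedural function you're using):

```python
import math

# Euler ray march with absorption: step through the volume, assume each
# sample is constant over its step, accumulate the colour the ray picks
# up, and attenuate the remaining light by Beer's law.
def march(sample_colour, sample_density, ray_len, steps):
    dt = ray_len / steps
    light_left = 1.0   # transmittance accumulated so far
    colour = 0.0       # accumulated (scalar) colour
    for i in range(steps):
        t = (i + 0.5) * dt
        absorb = math.exp(-sample_density(t) * dt)
        colour += sample_colour(t) * light_left * (1.0 - absorb)
        light_left *= absorb
    return colour, light_left

# Homogeneous white fog, density 0.5, over a ray of length 4: the
# remaining light converges to exp(-0.5 * 4), matching Beer's law.
c, remaining = march(lambda t: 1.0, lambda t: 0.5, 4.0, 256)
```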
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.
I am having trouble understanding why depth fail is better. There is an issue with the eye of the camera being inside a shadow volume, I understand that part. So you need to cap front faces that were clipped by the near plane for depth pass to work. But for depth fail to work, you need to cap back faces.
Is it because capping for far plane is easier than capping for near plane? If so, why is this?
The near-plane clipping problem is not fully solved by capping front faces; it will still fail in many situations when the camera is inside a shadow volume. This makes it unsuitable for roamable 3D environments with shadows.
Depth fail will work even without capping back faces; it will simply miss shadows where the volume points back to infinity. This is a MUCH less common situation than the one where depth pass fails, which is any time the camera is in a shadow. Furthermore, there is hardware support to facilitate Z clamping:
OpenGL 3.2: GL_DEPTH_CLAMP
D3D10: RasterizerDesc.DepthClipEnable = FALSE.
However, for a robust solution, you usually need to apply tweaks to both; neither is used "in the raw". Depth pass is used for top-down perspectives because the camera will never be in a shadow (and it's quite an effective technique in that field), whereas depth fail is used in FPSs and other games where being inside a shadow volume is a possibility.
Here is a list of pros and cons from a paper that is archetypal of the subject:
Depth-pass
Advantages
Does not require capping for shadow volumes
Less geometry to render
Faster of the two techniques
Easier to implement if we ignore the near plane clipping problem
Does not require an infinite perspective projection
Disadvantages
Not robust due to unsolvable near plane clipping problem
Depth-fail
Advantages
Robust solution since far plane clipping problem can be solved elegantly
Disadvantages
Requires capping to form closed shadow volumes
More geometry to render due to capping
Slower of the two techniques
Slightly more difficult to implement
Requires an infinite perspective projection
Let's say I have two textured triangles.
I want to draw one triangle over the other one, such that the top one is basically laying on top of the second one.
Now technically they are on the same plane, but they do not share the same "space" (they do not intersect), though visually it is tough to tell at a certain distance.
Basically, when these triangles are very close together (in parallel) I see texture "artifacts". I should ONLY see the triangle that is on top, but what I'm seeing is that the triangle in the background tends to "bleed" through.
Is there a way to alleviate this side effect, like increasing the depth precision or something? Maybe even increasing the tessellation of the triangles?
* Update *
I am using vertex and index buffers. This is using OpenGL ES on iPhone.
I don't know if this picture will help or make things worse, but here it is: two triangles very close to each other along the Z-axis (but not touching). (NOTE: the normal vectors for these triangles point straight towards you.)
You can increase the depth precision up to 32 bits per pixel. However, if the 2 triangles are coplanar, that likely won't fix the problem. If they aren't coplanar (it's really hard to tell from your description what you're talking about), then increasing the depth precision might help. If you're using FBOs for your drawing, simply create the depth texture with 32-bits per component by using GL_DEPTH_COMPONENT32 for the internal format. There are several examples here. If you're not using FBOs, please describe how you create your context (also what OS you're on - Windows, OS X, Linux?).
You could try changing the Depth Buffer function to something more appropriate...
glDepthFunc(GL_ALWAYS) - Essentially disables depth testing
glDepthFunc(GL_GEQUAL) - Overwrites when greater OR equal
If they are too close (assuming they are parallel, not on the same plane), you will get precision errors (like banding artifacts). Try adding a small offset to the top polygon using glPolygonOffset: http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml Check this simple tutorial: http://www.felixgers.de/teaching/jogl/polygonOffset.html
EDIT: Also try increasing precision as #user1118321 says.
What you are describing is called Z-Fighting (http://en.wikipedia.org/wiki/Z-fighting).
Sadly depth buffers only have limited precision, so if the difference in depth of two polygons is smaller than the precision of the depth buffer, you can't predict which polygon will pass the depth test and be drawn.
As others have said, you can increase the precision of the depth buffer so that polygons have to be closer to each other before z-fighting artifacts occur, or you can disable the depth test so that rendered polygons won't be blocked by anything previously drawn.
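The precision limit is easy to demonstrate: a UNORM depth buffer quantises depth into fixed steps, and two surfaces closer together than one step land on the same stored value (the depths below are illustrative):

```python
# Quantise a normalised depth value the way a UNORM depth buffer does.
def quantise(depth, bits):
    levels = (1 << bits) - 1
    return round(depth * levels) / levels

a, b = 0.5000000, 0.5000001  # two nearly coplanar surfaces
print(quantise(a, 16) == quantise(b, 16))  # True: 16 bits can't tell them apart
print(quantise(a, 24) == quantise(b, 24))  # False: 24 bits still resolves them
```

This is why bumping the buffer from 16 to 24 or 32 bits pushes the z-fighting threshold closer, but never removes it entirely.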
I have recently implemented soft shadows using voxel cone tracing in OpenGL 4.3 by tracing a cone in the direction of the light and accumulating opacity values.
The key thing I am trying to resolve or hide is the very voxelized shadowing effect as the occluded surface gets closer to the occluder, as well as the clear spots in the shadow due to surface voxelization. I am using low-resolution 64x64x64 voxels; however, even if I use higher-resolution voxels, some of the low-res voxels at a higher mip-map level are still captured in the trace.
So here is my first idea: I want to keep the softest parts of the shadow that are furthest away, and replace the parts of the shadow that are closer to the occluder with a shadow map. The shadow map will fade as it gets further from each occluder, and I will somehow blend it in with the cone-traced shadows.
Can anyone think of a way to fade a shadow away based on distance from each object for a shadow-map and then have it blend smoothly into the cone-traced shadow?
Another idea would be to somehow ray-trace shadows onto surfaces that are closer to an occluder, but this would probably be too expensive.
Alternatively, I would welcome any other ideas to help improve my soft shadow algorithm.
I've also put up a video to show it in motion:
https://www.youtube.com/watch?v=SUiUlRojBpM
Still haven't found a way to resolve the shadowing issue.
I'm guessing the "clear spot" artifacts occur because large voxels are only partially filled with geometry ("accumulating opacity values"). How many samples are you taking when converting from rasterized pixels to voxels? If the sample volume is small relative to the voxel volume, there could be issues with correctly rendering transparency; there will be noise, indicated by lighter areas.
Also, is your voxels' transparency direction-dependent, as in the author's original paper? Directional dependence is important to ensure semi-opaque voxels are rendered correctly.
A quick picture to explain
"for a shadow-map and then have it blend smoothly into the cone-traced shadow?"
This seems like you are kind of shooting yourself in the foot. You get an even larger performance hit and get the disadvantages of both shadow mapping and voxel cone tracing. Voxel cone tracing is expensive but can give nice soft shadows and do global illumination. Shadow mapping is better at doing hard shadows and is faster for smaller scenes, but as you add more geometry you end up redrawing the same stuff multiple times, at least once for each light.
Great work btw. I came across your problem while doing preliminary research for my own DirectX implementation of voxel cone tracing.
[Edit]
I realized that I made a typo in the picture. The picture on the right should be 4/64, rather than 4/16. I forgot about the Z dimension.
Well, in this case you can do it by adding more lights. You can add more lights close to the original one and then combine the shadow of that light with the shadows of the nearby lights. That is the 'area light' effect.
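The idea amounts to averaging the hard shadow tests of a cluster of jittered lights: points occluded from every light stay in the umbra, points occluded from only some lights fall in a penumbra. A made-up 2D toy (a segment occluder, lights on a line; not any particular engine's API):

```python
# Hard shadow test against a segment occluder spanning x = -1..1 at
# height y = 1, with lights at height y = 2 and receiver points at y = 0.
def occluded(light_x, point_x):
    hit_x = (light_x + point_x) / 2.0  # where the light->point ray crosses y = 1
    return -1.0 <= hit_x <= 1.0

# Soft visibility: average the hard tests over a cluster of jittered lights.
def visibility(point_x, light_xs):
    lit = sum(0 if occluded(lx, point_x) else 1 for lx in light_xs)
    return lit / len(light_xs)

lights = [-0.4, -0.2, 0.0, 0.2, 0.4]  # cluster approximating an area light
print(visibility(0.0, lights))  # 0.0: umbra, dark for every light
print(visibility(1.9, lights))  # 0.4: penumbra, lit by some lights only
print(visibility(4.0, lights))  # 1.0: fully lit
```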