I am having trouble understanding why depth fail is better. There is an issue with the eye of the camera being inside a shadow volume, I understand that part. So you need to cap front faces that were clipped by the near plane for depth pass to work. But for depth fail to work, you need to cap back faces.
Is it because capping for far plane is easier than capping for near plane? If so, why is this?
The near-plane clipping problem is not fully solved by capping front faces; it will still fail in many situations when the camera is inside a shadow volume. This makes it unsuitable for roamable 3D environments with shadows.
Depth fail will work even without capping back faces; it will simply miss shadows where the volume extends back to infinity. This is a much less common situation than the failure case of depth pass, which occurs any time the camera is inside a shadow volume. Furthermore, there is hardware support to facilitate Z clamping:
OpenGL 3.2: GL_DEPTH_CLAMP
D3D10: RasterizerDesc.DepthClipEnable = FALSE.
However, for a robust solution you usually need to add tweaks to both; neither is used "in the raw". Depth pass is used for top-down perspectives, because the camera will never be inside a shadow volume (and it is quite an effective technique in that setting), whereas depth fail is used in FPSs and other games where being inside a shadow volume is a real possibility.
Here is a list of pros and cons from a paper that is archetypal of the subject:
Depth-pass
Advantages
Does not require capping for shadow volumes
Less geometry to render
Faster of the two techniques
Easier to implement if we ignore the near plane clipping problem
Does not require an infinite perspective projection
Disadvantages
Not robust due to unsolvable near plane clipping problem
Depth-fail
Advantages
Robust solution since far plane clipping problem can be solved elegantly
Disadvantages
Requires capping to form closed shadow volumes
More geometry to render due to capping
Slower of the two techniques
Slightly more difficult to implement
Requires an infinite perspective projection
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically, you would see only darkness. In OpenGL, however, when you do this the world appears hollow and transparent, due to culling and the way the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD--thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even "extruding" a single 2D texture through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the surface you're drawing on.
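As a rough illustration of that triplanar weighting, here is a minimal GLSL sketch; the sampler names and the use of the world-space normal as the blend weight are assumptions, not something prescribed above.

// Blend three 2D textures projected along X, Y and Z, weighted by how much
// the surface normal faces each axis. All names here are hypothetical.
uniform sampler2D texX, texY, texZ;

vec3 triplanarColour(vec3 worldPos, vec3 worldNormal)
{
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);                      // make the weights sum to 1
    vec3 cx = texture(texX, worldPos.yz).rgb;    // projected along X
    vec3 cy = texture(texY, worldPos.xz).rgb;    // projected along Y
    vec3 cz = texture(texZ, worldPos.xy).rgb;    // projected along Z
    return cx * w.x + cy * w.y + cz * w.z;
}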
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
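A minimal fragment-shader sketch of that back-face idea, assuming the application draws back faces in a second pass (front-face culling) and supplies the inverse view-projection matrix and viewport size; the procedural colour function is just a hypothetical stand-in.

#version 330 core
// Runs on back-face pixels that survive the depth test: a visible back face
// means the near plane cuts through solid geometry at this pixel, so shade
// it with the 3D colour evaluated at the near-plane position.
uniform mat4 invViewProj;   // inverse of projection * view (assumed uniform)
uniform vec2 viewportSize;  // framebuffer size in pixels (assumed uniform)
out vec4 fragColor;

vec3 proceduralColour(vec3 p)
{
    // Hypothetical stand-in: a simple 3D checker instead of real wood/brick.
    float c = mod(floor(p.x) + floor(p.y) + floor(p.z), 2.0);
    return mix(vec3(0.2), vec3(0.6), c);
}

void main()
{
    // Reconstruct the world-space point on the near plane (NDC z = -1).
    vec2 ndc = (gl_FragCoord.xy / viewportSize) * 2.0 - 1.0;
    vec4 world = invViewProj * vec4(ndc, -1.0, 1.0);
    world /= world.w;
    fragColor = vec4(proceduralColour(world.xyz), 1.0);
}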
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes more light the further the light has to travel through it. Think of stacking many alpha-transparent surfaces, take the limit and you have an exponential: the light remaining falls off as exp(-dist), i.e. 1/exp(dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth; // per-channel optical depth (note the negation)
vec3 Transmittance = exp(Absorbance);                      // fraction of light surviving the trip
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending, using a shader that writes the distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or the water surface). You're left with a texture containing the distances/depths for the equation above.
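A minimal sketch of the distance-writing shader used in both passes; the blend state switching happens on the application side, and the eyePos varying is an assumption about what the vertex shader provides.

#version 330 core
// Writes eye-space distance into a floating point colour attachment.
// Pass 1: back faces (or sea bed) with additive blending.
// Pass 2: front faces (or water surface) with reverse-subtract blending,
// leaving backDistance - frontDistance = thickness per pixel.
in vec3 eyePos;     // eye-space position, interpolated from the vertex shader (assumed)
out vec4 fragColor;

void main()
{
    fragColor = vec4(vec3(length(eyePos)), 1.0);
}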
Volume Rendering
Combining the two ideas, the material is a transparent solid but the colour (and maybe the density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function as you go. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards in even steps. At each step along the march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
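A bare-bones sketch of such a march; all uniform/varying names here are assumptions, the volume texture is assumed to store colour in rgb and density in a, and the ray is assumed to be set up in the volume's [0,1]^3 texture space.

#version 330 core
// Brute-force Euler ray march with per-step Beer's law absorption.
uniform sampler3D volumeTex;   // rgb = colour, a = density (assumed layout)
uniform float stepSize;        // e.g. 0.01 in texture-space units
uniform int   numSteps;        // e.g. 256
in vec3 rayOriginTex;          // ray start in texture space (assumed varying)
in vec3 rayDirTex;             // ray direction in texture space (assumed varying)
out vec4 fragColor;

void main()
{
    vec3 dir = normalize(rayDirTex);
    vec3 p = rayOriginTex;
    vec3 colour = vec3(0.0);
    float transmittance = 1.0;                  // light remaining along the ray
    for (int i = 0; i < numSteps; ++i)
    {
        vec4 s = texture(volumeTex, p);
        float stepTrans = exp(-s.a * stepSize); // Beer's law over one step
        colour += transmittance * s.rgb * (1.0 - stepTrans);
        transmittance *= stepTrans;
        p += dir * stepSize;
        if (transmittance < 0.01) break;        // early out once nearly opaque
    }
    fragColor = vec4(colour, 1.0 - transmittance);
}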
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect grows with the thickness of the medium the light passes through.
But you can fake it by making some assumptions and making the interior geometry (under the water or inside the wall) darker, either by reducing the lighting or by using darker colors. If you care about the depth effect, look at OpenGL fog.
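For instance, a tiny sketch of the classic exponential fog factor (hypothetical names, not anything prescribed above):

// Exponential fog: blend towards the fog colour with eye-space distance.
uniform vec3  fogColor;
uniform float fogDensity;   // larger = murkier water / thicker wall

vec3 applyFog(vec3 litColor, float eyeDistance)
{
    float fogFactor = exp(-fogDensity * eyeDistance);  // 1 = clear, 0 = fully fogged
    return mix(fogColor, litColor, fogFactor);
}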
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be of practical use.
I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine; I'm not having trouble with that part.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning cull face and camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have 3 advantages:
No cullface-issue
No camera-position-in-lightsphere-issue
Much more efficient (amount of vertices severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as always:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should be dependent on the distance (camera to light) and somehow the perspective view.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles: we want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to, and that data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected at the correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in that case none of the fragments will be generated, and the light will "disappear".
The lights described in the article will have a sharp falloff - understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization. I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will dominate here.
You forgot to do perspective division here
The simplest way to calculate the scaling is to transform a point on the surface of the sphere to screen coordinates and calculate the length of the vector from the projected centre to it. It must be a point on the silhouette in screen space, obviously.
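A minimal GLSL-style sketch of that idea; the uniform names are assumptions, and picking the point one radius away along the camera's right axis is only an approximation of the true silhouette point.

// Project the sphere centre and a point one radius away along the camera's
// right axis, then measure their separation in NDC to size the circle.
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPos;      // sphere centre in world space
uniform float lightRadius;  // sphere radius

vec2 toNdc(vec4 clip) { return clip.xy / clip.w; }  // perspective division

float projectedRadiusNdc()
{
    // First row of the view matrix's rotation part = camera right in world space.
    vec3 camRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
    mat4 viewProj = projectionMatrix * viewMatrix;
    vec4 centreClip = viewProj * vec4(lightPos, 1.0);
    vec4 edgeClip   = viewProj * vec4(lightPos + camRight * lightRadius, 1.0);
    return distance(toNdc(centreClip), toNdc(edgeClip));
}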
I have recently implemented soft shadows using voxel cone tracing in OpenGL 4.3 by tracing a cone in the direction of the light and accumulating opacity values.
The key thing that I am trying to resolve or hide is the very voxelized shadowing effect as the occluded surface gets closer to the occluder, as well as the clear spots in the shadow caused by surface voxelization. I am using a low voxel resolution (64x64x64); however, even if I use higher resolution voxels, some of the low-res voxels at a higher mip-map level are still captured in the trace.
So here's my first idea: I want to keep the softest parts of the shadow, those furthest from the occluder, and replace the parts of the shadow closest to the occluder with a shadow map. The shadow map would fade as it gets further away from each occluder, and I would somehow blend it in with the cone-traced shadows.
Can anyone think of a way to fade a shadow away based on distance from each object for a shadow-map and then have it blend smoothly into the cone-traced shadow?
Another idea I have would be to somehow ray-trace shadows onto surfaces that are closer to an occluder, but this would probably be too expensive.
Alternatively, I would welcome any other ideas to help improve my soft shadow algorithm.
I've also put up a video to show it in motion:
https://www.youtube.com/watch?v=SUiUlRojBpM
Still haven't found a way to resolve the shadowing issue.
I'm guessing the "clear spot" artifacts occur because large voxels are only partially filled with geometry when you accumulate opacity values. How many samples are you taking when converting from rasterized pixels to voxels? If the sample volume is small relative to the voxel volume, there can be issues with rendering the transparency correctly - the noise shows up as lighter areas.
Also, is your voxels' transparency direction dependent, as in the author's original paper? Directional dependence is important to ensure semi-opaque voxels are rendered correctly.
A quick picture to explain
"for a shadow-map and then have it blend smoothly into the cone-traced shadow?"
This seems like you are kind of shooting yourself in the foot. You get an even larger performance hit and get the disadvantages of both shadow mapping and voxel cone tracing. Voxel cone tracing is expensive but can give nice soft shadows and do global illumination. Shadow mapping is better at doing hard shadows and is faster for smaller scenes, but as you add more geometry you end up redrawing the same stuff multiple times, at least once for each light.
Great work btw. I came across your problem while doing preliminary research for my own DirectX implementation of voxel cone tracing.
[Edit]
I realized that I made a typo in the picture. The picture on the right should be 4/64, rather than 4/16. I forgot about the Z dimension.
Well, in this case you can do it by adding more lights: place additional lights close to the original one and compose the shadow of the original light with the shadows of that cluster of nearby lights. That is the "area light" effect.
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to be infinitely far away: see this. It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So, since this all revolves around the depth buffer, you won't have a problem dealing with far-away stuff as long as you don't use the depth buffer for it. For example, a common technique is to render the scene in "slabs" that each use the depth buffer only internally (for all the stuff within one slab) but some form of painter's algorithm externally (for the slabs themselves, so you draw the furthest one first).
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to deal with this. One way is to construct the projection matrix by taking the limit as z-far approaches infinity. This converges to finite values, but it can play havoc with your depth precision for distant objects.
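For reference, a sketch of what that limit looks like for the symmetric glFrustum-style matrix, with n and f the near and far distances; only the third row changes.

\lim_{f \to \infty}
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
=
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -1 & -2n \\
0 & 0 & -1 & 0
\end{pmatrix}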
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
It comes down to the fact that the OpenGL depth test is performed in window-space coordinates (normalized device coordinates in [-1,1]^3, with extra scaling from glViewport and glDepthRange).
So from my point of view it is a design decision of the OpenGL library.
One way to eliminate the far clip is the depth clamp extension / core functionality, https://www.opengl.org/registry/specs/ARB/depth_clamp.txt, if it is available in your OpenGL version.
I also want to point out that in the mathematics of perspective projection there is nothing about a "far clipping plane".
3.1 For a perspective projection you need to set up a point \vec{c} as the center of projection and a plane onto which the projection will be performed. Let's call it the
image plane T: (\vec{r}-\vec{r_0},\vec{n})=0
3.2 Let's assume that the image plane T separates an arbitrary point \vec{r} from the center of projection \vec{c}. Otherwise \vec{r} and \vec{c} lie in the same half-space and the point \vec{r} should be discarded.
3.4 The idea of the projection is to find the intersection \vec{i} with the plane T:
\vec{i}=(1-t)\vec{c}+t\vec{r}
3.5 Since \vec{i} lies in the plane,
(\vec{i}-\vec{r_0},\vec{n})=0
=>
((1-t)\vec{c}+t\vec{r}-\vec{r_0},\vec{n})=0
=>
(\vec{c}+t(\vec{r}-\vec{c})-\vec{r_0},\vec{n})=0
=>
t=\frac{(\vec{r_0}-\vec{c},\vec{n})}{(\vec{r}-\vec{c},\vec{n})}
3.6 The t derived in 3.5 can be substituted into 3.4, and you get the projection onto the plane T.
3.7 After the projection the point will lie in the plane. But if we assume that the image plane is parallel to the OXY plane, then I suggest keeping the original "depth" of the point after projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 model explicitly at all.
P.S. I don't know how to type LaTeX formulas in a way that gets them rendered here.
I'm loading 3D objects (OBJ, 3DS or COLLADA files) into my OpenGL application. The 3D environment is quite large (a few hundred metres along all axes).
My problem is that smaller 3D objects (i.e. on the order of ~1-2 m or less) don't appear to be depth-tested properly. Depending on the zoom of the camera, I can sometimes see the back face of the object (I have been using a simple cube for testing) or other faces become visible/invisible/torn. Please see the attached images for a better explanation.
I am led to believe the problem is due to mipmapping being enabled. I would either like to disable mipmapping (can someone suggest a simple, fast way to do this) or increase the resolution for the mipmapped objects. Or am I barking up the wrong tree completely?
Thanks
Chris
That's the result of insufficient z-buffer precision, which is an issue in games that have huge worlds but (relatively) small objects. The immediate solution would be to try using a 24-bit z-buffer instead of a 16-bit one. Another way to tackle this would be to render the game world in two steps: first the big, distant objects, then clear the z-buffer, then draw the closer objects.
This specific problem is called z-fighting, by the way; here's a great resource on the issue: http://www.codermind.com/articles/Depth-buffer-tutorial.html
The take-away is the last paragraph of the article above:
the true issue is that you can't draw both objects that are very far and objects that are very near with the same depth buffer equations. If you want to draw very far objects then you need to sacrifice your near view by pushing it further. To avoid clipping artifacts you can make your collision envelope large enough so that your clip plane will never intercept an existing object within your frustum. Or you can make objects gradually disappear with transparency as they come near your clip plane.
If you want to keep near objects and at the same time draw mountains (or planets) in the far distance, then you can cut your rendering in parts. First drawing your far objects, then clearing the depth buffer and rendering the near objects with a different z buffer.
Like Julio, I believe that this is a depth precision issue, not something related to mip-mapping. However, I suggest you start by adjusting your near and far clipping planes before changing anything else (you are probably already using a 24-bit depth buffer anyway, as that is the default on most drivers/cards). In particular, the near plane should be as far away as possible for your scene. Look for calls to glFrustum or gluPerspective.