I have followed the tutorial presented here to create shadow maps.
What I would like to do is have some sort of post-processing step on the shadow, where I apply a Gaussian blur (or any other blur) to the shadow map. Note that I have followed this tutorial strictly, and that I am very inexperienced when it comes to OpenGL. I don't know whether the blur should be applied to the depthMapFBO or to the depthMap itself, or whether I need to create new FBOs/textures.
Can one even blur the depth value in this way? How would you go about blurring the shadow map?
Note that I'm not interested in realism, I just want a uniform blurring on all shadows.
Blurring a shadow depth texture makes no sense.
A depth texture contains depth values. A value of, for example, 0.5 means something. It specifies the depth of a texel. For a shadow map, it means something even more specific: it specifies the closest distance to the light of an occluding surface at a particular location in the scene.
If the closest distance at one location is 0.5, what would it mean to "blur" this with a distance of 0.6? It would effectively mean that you have changed the distance of objects to the light. But since that won't be reflected in the actual geometry, this means that the blurred shadow map no longer accurately represents the geometry. So now, there will be locations that should be shadowed which are not, and locations that are being shadowed but shouldn't be.
In short, it makes your depth texture meaningless.
What you seem to want is softer shadows. There are ways to accomplish that with shadow maps, but blurring the depth texture is not one of them.
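For example, percentage-closer filtering (PCF) gives softer edges by averaging the results of several depth comparisons around each fragment, rather than by touching the depth values themselves. A minimal sketch, assuming projCoords is the fragment's position already projected into light space and divided by w (all names here are illustrative):
// PCF sketch: average the shadow comparison over a 3x3 neighbourhood.
// shadowMap is the depth texture from the shadow pass; bias avoids acne.
float shadowPCF(sampler2D shadowMap, vec3 projCoords, float bias)
{
    float currentDepth = projCoords.z;
    vec2 texelSize = 1.0 / vec2(textureSize(shadowMap, 0));
    float shadow = 0.0;
    for (int x = -1; x <= 1; ++x)
    {
        for (int y = -1; y <= 1; ++y)
        {
            float closestDepth = texture(shadowMap, projCoords.xy + vec2(x, y) * texelSize).r;
            shadow += (currentDepth - bias > closestDepth) ? 1.0 : 0.0;
        }
    }
    return shadow / 9.0; // 0 = fully lit, 1 = fully shadowed
}
Increasing the kernel size (or using Poisson-disk offsets) widens the blur uniformly, which sounds like what you're after.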
Recently I've been working on an SSAO implementation for my engine. I would like to support both forward and deferred renderers, so I chose a depth-only approach and reconstruct the normal from the depth map. Here is the code:
//Restore view space position with non-linear depth
vec3 viewPos = getPosition(uv, depth, invProjMat);
//Restore view space normal
vec3 normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
Then I use this normal in my SSAO implementation. It achieves quite a good result for the most part, except at the edges:
I'm sure that's because of the discontinuous normals at the edges, but I have no idea how to fix it. Is there any approach to avoid these artifacts at the edges when reconstructing normals from depth? Thanks.
There's no way to completely get rid of all artifacts except by generating a normals G-buffer. But there's a way of reconstructing normals that gives much better results:
What I do in my engine is take five depth samples for normal reconstruction, positioned like a cross: the center sample is the pixel you're currently rendering, and you also sample the pixels above, below, left and right of it. For the reconstruction you then simply take the Y sample (above or below) and the X sample (left or right) whose depth is closest to the center sample's. This gets rid of those normal artifacts in 99% of cases, but it introduces branching and needs more texture samples, so it is quite a bit slower than just doing what you're doing already.
Classic normal reconstruction:
Improved (cross pattern) normal reconstruction:
Classic normal reconstruction with SSAO:
Improved (cross pattern) normal reconstruction with SSAO:
(excuse my outdated SSAO screenshots)
Also please note that using dFdx() and dFdy() will probably always result in more artifacts than sampling the depth texture three times (because there is no guarantee that the values returned by dFdx() or dFdy() are accurate), which is clearly the case in your example.
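For reference, here is a rough GLSL sketch of the cross-pattern reconstruction described above; getViewPos() stands in for whatever you use to turn a UV plus a depth sample into a view-space position, and texelSize is 1.0 / textureResolution (both are placeholders):
// Cross-pattern normal reconstruction: five depth samples, pick the
// horizontal and vertical neighbours closest in depth to the center
// so we never build a tangent across a depth discontinuity.
vec3 reconstructNormal(vec2 uv, vec2 texelSize)
{
    vec3 center = getViewPos(uv);
    vec3 left   = getViewPos(uv - vec2(texelSize.x, 0.0));
    vec3 right  = getViewPos(uv + vec2(texelSize.x, 0.0));
    vec3 down   = getViewPos(uv - vec2(0.0, texelSize.y));
    vec3 up     = getViewPos(uv + vec2(0.0, texelSize.y));

    vec3 dx = (abs(left.z - center.z) < abs(right.z - center.z))
              ? (center - left) : (right - center);
    vec3 dy = (abs(down.z - center.z) < abs(up.z - center.z))
              ? (center - down) : (up - center);

    return normalize(cross(dx, dy));
}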
TL;DR I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not and the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a Gaussian blur on top of the equivalent of a GL_POINTS render of the Points (i.e. re-using the depth buffer from the Depth-Only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in a one-line comment, and I'm unsure how to duplicate it in WebGL anyway (a naive Gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-render the discs in a second pass as cones (the center of each disc becomes the apex of a cone; think "closing the umbrella"), effectively computing a Voronoi diagram on the surface of the object (à la the Red Book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA Framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the relative texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0 or some flag set that would cause discard of fragments associated with the disc/cone or etc).
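For reference, the packing is roughly the usual trick of spreading a [0,1] value across the four RGBA8 channels, something like this (a sketch; my exact constants may differ):
// Encode a [0,1) float across RGBA8 channels (first-pass fragment shader).
vec4 packDepth(float depth)
{
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Matching decode, used wherever the texture is sampled later.
float unpackDepth(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}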
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?
So I'm working on implementing shadow mapping. So far, I've rendered the geometry (depth, normals, colors) to a framebuffer from the camera's point of view, and rendered the depth of the geometry from the light's point of view. Now, I'm rendering the lighting from the camera's point of view, and for each fragment I compare its distance to the light against the depth texture value from the render-from-the-light's-POV pass. If the distance is greater, it's in shadow. (Just recapping here to make sure there isn't anything I don't realize I don't understand.)
So, to do this last step, I need to convert the depth value [0-1] to its eye-space value [0.1-100] (my near/far planes). (Explanation here: Getting the true z value from the depth buffer.)
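The conversion I mean is roughly this (a sketch assuming a standard perspective projection; near/far would be 0.1 and 100 in my case):
// Convert a [0,1] depth-buffer value back to eye-space Z.
float linearizeDepth(float depth, float near, float far)
{
    float ndcZ = depth * 2.0 - 1.0; // back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}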
Is there any reason to not instead just have the render-from-the-lights-pov pass just write to a texture the distance of the fragment to the camera (the z component) directly? Then we won't have to deal with the ridiculous conversion? Or am I missing something?
You can certainly write your own depth value to a texture, and many people do just that. The advantage of doing that is that you can choose whatever representation and mapping you like.
The downside is that you have to either a) still have a "real" depth buffer attached to your FBO (and therefore double up the bandwidth you're using for depth writing), or b) use GL_MIN/GL_MAX blending mode (depending on how you are mapping depth) and possibly miss out on early-z out optimizations.
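For illustration, writing your own value in the light pass might look roughly like this (a sketch in modern GLSL; the names and the simple linear [0,1] mapping are just one possible choice):
// Light-pass fragment shader writing eye-space distance to a float
// color attachment instead of relying on the hardware depth buffer.
in vec3 lightViewPos;   // fragment position in the light's view space
uniform float uNear;
uniform float uFar;
out float storedDepth;

void main()
{
    // Map the (negative) view-space z into [0,1] however you prefer.
    storedDepth = (-lightViewPos.z - uNear) / (uFar - uNear);
}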
I know that normal mapping describes the process of adding detail to meshes without increasing the polygon count, and that this is achieved by using specific normal textures for manipulating the way light is applied to the object. Okay.
But what is bump mapping then? Is it just another term for normal mapping?
How do the visual results compare? Can both techniques be combined?
Bump Mapping describes a general technique for simulating bumps and wrinkles on the surface of an object. This is normally accomplished by manipulating surface normals when doing lighting calculations.
Normal Mapping is a variation of Bump Mapping in which the surface normals are provided via a texture, with normals embedded into the RGB channels of the image.
Other techniques, such as Parallax Mapping, are also Bump Mapping techniques because they distort the surface normals.
To answer the second part of the question, they can fairly easily be combined. The base surface normals could be taken from a normal map and then modified via another bump mapping technique.
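For reference, the normal mapping part typically boils down to something like this in a fragment shader (a sketch; the tangent-space basis is assumed to be passed in from the vertex shader, and the names are illustrative):
// Fetch a tangent-space normal from the normal map and bring it into
// the space used for lighting. vTBN is an interpolated
// tangent/bitangent/normal basis from the vertex shader.
uniform sampler2D uNormalMap;
in vec2 vUV;
in mat3 vTBN;

vec3 sampleNormal()
{
    vec3 n = texture(uNormalMap, vUV).rgb * 2.0 - 1.0; // [0,1] -> [-1,1]
    return normalize(vTBN * n);
}
A further bump mapping technique would then perturb the result of sampleNormal() before it is fed into the lighting calculation.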
Bump mapping was originally suggested by Jim Blinn back in 1978. His system basically works by perturbing the normal on a surface by using the height of that texel and the height of the surrounding texels.
This is quite similar to DuDv bump mapping (you may recall the original environment-mapped bump mapping introduced in DirectX 6, which was DuDv-based). It works by pre-calculating the derivatives described above so that you can skip the first stage of the calculation (as it does not change each frame).
Normal mapping is a very similar technique that works by simply replacing the normal at each texel position. Conceptually it's much simpler.
There is another technique that produces "similar" results. It is called emboss bump mapping. This method works by using multipass rendering. Basically you end up subtracting a gray scale heightmap from the last pass but offsetting it a small amount based on the light direction.
There are other ways of emulating surface topology as well.
Elevation mapping uses the height map as an alpha texture and then renders multiple slices through that texture with a different alpha value to simulate the change in height. If not performed correctly, however, the slices can be very visible.
Displacement mapping works by generating a 3D mesh that uses the texture as its basis. This, obviously, massively increases your vertex count.
Steep parallax mapping, relief mapping, etc. are the newest techniques. They work by casting a ray through the heightmap until it intersects. This has the big advantage that if a bump should occlude the texture behind it, it now does: the ray never reaches the heightmap behind the point where it first hits, so the "closest" texel is always displayed.
In computer graphics, what's the difference between a material and a texture?
In OpenGL, a material is a set of coefficients that define how the lighting model interacts with the surface. In particular, ambient, diffuse, and specular coefficients for each color component (R, G, B) are defined and applied to a surface, and effectively multiplied by the amount of light of each kind/color that strikes the surface. A final emissive coefficient is then added to each color component, which allows objects to appear luminous without actually illuminating other objects.
A texture, on the other hand, is a set of 1-, 2-, 3-, or 4- dimensional bitmap (image) data that is applied and interpolated on to a surface according to texture coordinates at the vertices. Texture data alters the color of the surface whether or not lighting is enabled (and depending on the texture mode, e.g. decal, modulate, etc.). Textures are used frequently to provide sub-polygon level detail to a surface, e.g. applying a repeating brick and mortar texture to a quad to simulate a brick wall, rather than modeling the geometry of each individual brick.
In the classical (fixed-pipeline) OpenGL model, textures and materials are somewhat orthogonal. In the new programmable-shader world, the line has blurred quite a bit. Frequently textures are used to influence lighting in other ways. For example, bump maps are textures that are used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
The question suggests a common misunderstanding of various computer graphics concepts. It is one born of pre-shader thinking and coding.
A texture is nothing more than a container for a series of one or more images, where an image is an array of some dimensionality (1D, 2D, etc) of values, where each value can be a vector of between 1 and 4 numbers. Textures also have some special techniques for accessing values from them that allow for interpolation and the minimizing of aliasing artifacts from sampling.
A texture can contain colors, but textures do not have to contain colors. Textures can be used to vary parameters across an object's surface, but that is not all textures can be used for.
Textures have no direct association with "materials"; you can use them for a wide variety of things (or nothing at all).
A material is a concept in lighting. Given a particular light and a point on the surface, the intensity (ie: color) of light reflected from that surface at that point is governed by a lighting equation. This equation is a function of many parameters. But those parameters are grouped into two categories.
One category of light equation parameters are the light parameters. These describe the characteristics of the light source. Point lights vs. directional lights vs. spot lights. The light intensity (again: color) is another parameter. Since the intensity itself may vary depending on the direction of the surface point relative to the light (think flashlights or spotlights), the intensity may be accessed from a texture. That's how many games project flashlights over a dark room.
The other category of light equation parameters describes the characteristics of the surface at that point. These are called material parameters. The material parameters, or material for short, describe important properties of the surface at the point in question. The normal at that point is an important one. There is also the diffuse reflectance (color), specular reflectance, specular shininess (exponent for Phong/Blinn-Phong) and various other parameters, depending on how comprehensive your lighting equation is.
Where do these values come from? Light parameters tend to be fixed in the world. Lights don't move per-object (though if you're doing lighting in object space, then each object would have its own light position). The light intensity may vary. But that's mostly it; anything else happens between frames, not within a single frame's rendering. So most light parameters are shader uniforms.
Material parameters can come from a variety of sources. Using a shader uniform effectively means that all points on the surface have that same value. So you could have the diffuse color come from a uniform, which would give the surface a uniform color (modified by lighting, of course). You can vary material parameters per-vertex, by passing them as vertex attributes or computing them from other attributes.
Or you can provide a parameter by mapping a texture to a surface. This mapping involves associating texture coordinates with vertex positions, so that the texture is directly attached to the surface. The texture is sampled at that location during rendering, and that value is used to perform the lighting.
The most common textures you're familiar with, "color textures", are simply varying the diffuse reflectance of the surface. The texture provides the diffuse color at each point along the surface.
This is not the only function of textures. You could just as easily vary the specular shininess over the surface. Or the light intensity. Or anything else.
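To make this concrete, here is a sketch of a simple Blinn-Phong fragment shader where only the diffuse reflectance comes from a texture and the other material parameters are uniforms (all names are illustrative):
// Material parameters for a Blinn-Phong lighting equation.
uniform sampler2D uDiffuseMap;  // diffuse reflectance, varied per-texel
uniform vec3  uSpecularColor;   // specular reflectance, constant over the surface
uniform float uShininess;       // specular exponent
uniform vec3  uLightColor;      // a light parameter, not a material parameter

in vec2 vUV;
in vec3 vNormal;
in vec3 vLightDir;   // surface-to-light direction, normalized
in vec3 vViewDir;    // surface-to-eye direction, normalized

out vec4 fragColor;

void main()
{
    vec3 diffuseColor = texture(uDiffuseMap, vUV).rgb;
    vec3 N = normalize(vNormal);
    vec3 H = normalize(vLightDir + vViewDir);

    float diff = max(dot(N, vLightDir), 0.0);
    float spec = pow(max(dot(N, H), 0.0), uShininess);

    fragColor = vec4(uLightColor * (diffuseColor * diff + uSpecularColor * spec), 1.0);
}
Replace the texture() call with a uniform, a vertex attribute, or another texture sample, and the lighting equation doesn't care; that is the sense in which textures are just one way of supplying material parameters.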
Textures are tools. Materials are just a group of light equation parameters.
What I think of with those terms:
A texture is an image that is mapped onto a 3D object.
A material simulates a physical material. Take "glass", for example. You couldn't produce a glass effect with a plain texture map: it has parameters for how it reflects and refracts light at different angles. A material could also be just a simple texture map, so sometimes the terms mean the same thing.
Although the terms can be used interchangeably, it's common to refer to a bitmap as a texture.
While a fully defined texture, with lighting properties, bump mapping etc, would more usually be referred to as a material.
But I should stress that the terminology depends on the tools being used; each community tends to adopt the conventions of its own tools.