Recently I've been working on my SSAO implementation for my engine. I'd like to support both a forward and a deferred renderer, so I chose a depth-only approach and reconstruct the normal from the depth map. Here is the code:
// Reconstruct view-space position from the non-linear depth value
vec3 viewPos = getPosition(uv, depth, invProjMat);
// Reconstruct the view-space normal from screen-space derivatives
vec3 normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
Then I use this normal in my SSAO implementation. It achieves quite a good result for the most part, except at the edges:
I'm sure that's because the normals are discontinuous at the edges, but I have no idea how to fix it. Is there any approach that avoids the edge artifacts when reconstructing normals from depth? Thanks.
There's no way to completely get rid of all artifacts short of generating a normals g-buffer, but there's a reconstruction method that gives you much better results:
What I do in my engine is take 5 depth samples for normal reconstruction, positioned like a cross: the center sample is the pixel you're currently rendering, and you also sample the pixels above, below, left and right of it. For the reconstruction you then take the closest Y sample (either the one above or below the center) and the closest X sample (either the one left or right of the center), "closest" meaning closest in depth to the center sample. This gets rid of those normal artifacts in 99% of the cases, but it introduces branching and needs more texture samples, so it is quite a bit slower than what you're doing already.
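In GLSL the cross pattern boils down to something like this (a rough sketch reusing the getPosition()/invProjMat helpers from your question; uDepthTex and texelSize are placeholder names for the depth texture and the size of one texel in UV space):

vec3 reconstructNormal(vec2 uv, vec2 texelSize)
{
    // five depth samples arranged in a cross around the current pixel
    float dC = texture(uDepthTex, uv).r;
    float dL = texture(uDepthTex, uv - vec2(texelSize.x, 0.0)).r;
    float dR = texture(uDepthTex, uv + vec2(texelSize.x, 0.0)).r;
    float dD = texture(uDepthTex, uv - vec2(0.0, texelSize.y)).r;
    float dU = texture(uDepthTex, uv + vec2(0.0, texelSize.y)).r;

    vec3 pC = getPosition(uv, dC, invProjMat);

    // horizontal tangent: use whichever side sample is closest in depth to the center
    vec3 tX = (abs(dL - dC) < abs(dR - dC))
        ? pC - getPosition(uv - vec2(texelSize.x, 0.0), dL, invProjMat)
        : getPosition(uv + vec2(texelSize.x, 0.0), dR, invProjMat) - pC;

    // vertical tangent: same choice for the samples above and below
    vec3 tY = (abs(dD - dC) < abs(dU - dC))
        ? pC - getPosition(uv - vec2(0.0, texelSize.y), dD, invProjMat)
        : getPosition(uv + vec2(0.0, texelSize.y), dU, invProjMat) - pC;

    return normalize(cross(tX, tY));
}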
Classic normal reconstruction:
Improved (cross pattern) normal reconstruction:
Classic normal reconstruction with SSAO:
Improved (cross pattern) normal reconstruction with SSAO:
(excuse my outdated SSAO screenshots)
Also please note that using dFdx() and dFdy() will probably always result in more artifacts than sampling the depth texture three times yourself (there is no guarantee that the values returned by dFdx() or dFdy() are accurate), which is clearly the case in your example.
Related
Well, making something transparent isn't that difficult, but I need the transparency to vary based on the object's curvature so it doesn't look like just a flat object. Something like the picture below.
The center is more transparent than the sides of the cylinder; it is closer to black, which is the background color. Then there is the bezel, which seems to have some sort of specular lighting at the top to make it shinier, but I have no idea how to handle the transparency in that case. Use the normals of the surface relative to the eye position to determine the transparency value? Any help would be appreciated.
(moved comments into answer and added some more details)
Use (Sub Surface) scattering instead of transparency.
You can simplify things a lot, for example by assuming the light source is constant along the whole surface/volume, so you only need to integrate along the view ray instead of evaluating the whole volume integral per ray. I do this in my Atmospheric shader and it still looks pretty good, almost indistinguishable from the real thing (see some newer screenshots). I have compared it to photos from Earth and Mars and the results were pretty close, without any really complicated math.
There are several options for achieving this:
Voxel map (volume rendering)
It is easy to implement scattering in a volume rendering engine, but it needs a lot of memory and processing power.
Use 2 depth buffers (front and back faces)
This needs 2 passes with face culling enabled and the CW/CCW winding switched between them. It is also easy to implement, but it cannot handle multiple objects overlapping along the Z axis of the camera view. The idea is to pass both depth buffers to the shader and integrate each pixel's ray along its path, accumulating/absorbing light from the light source. Something like this:
render geometry to both depth buffers as 2 textures.
render quad covering whole screen
for each fragment compute the ray line (green)
compute the intersection points in both depth buffers
obtain 'length,ang'
integrate along the length using scattering to compute pixel color
I use something like this:
vec3 p,p0,p1;   // p0 = front face and p1 = back face ray/depth buffer intersection points
int n=16;       // integration steps
vec3 dp=(p1-p0)/float(n);   // integration step vector
float dl=length(dp);        // integration step length
vec3 c=background_color;    // start with the color of the background behind the object
float q=abs(dot(normalize(p1-p0),light));   // = |cos(ang)|, simple directional light shading
vec3 b;         // light absorbed/scattered per step
int i;
for (p=p1,i=0;i<n;p-=dp,i++)    // march p along the path p1 -> p0 through the object
 {
 b=B0.rgb*dl;   // B0 is the saturated color of the object
 c.r*=1.0-b.r;  // some light is absorbed
 c.g*=1.0-b.g;
 c.b*=1.0-b.b;
 c+=b*q;        // some light is scattered in
 }              // here c is the final fragment color
After/during the integration you should normalize the color so that the resulting color saturates around the real view depth of the rendered material. For more information see the Atmospheric scattering link below (this piece of code is extracted from it).
Analytical object representation
If you know the surface equation, then you can compute the light path intersections inside the shader without the need for depth buffers or a voxel map. This Simple GLSL Atmospheric shader of mine uses this approach, as ellipsoids are really easy to handle this way.
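As a rough sketch of what that looks like for a unit sphere at the origin (an ellipsoid is handled the same way after scaling the ray into the sphere's local space; ro/rd are the ray origin and normalized direction):

bool raySphere(vec3 ro, vec3 rd, out float t0, out float t1)
 {
 // solve |ro + t*rd|^2 = 1 for t (rd must be normalized)
 float b = dot(ro, rd);
 float c = dot(ro, ro) - 1.0;
 float d = b*b - c;
 if (d < 0.0) return false;  // ray misses the sphere
 d = sqrt(d);
 t0 = -b - d;                // entry distance
 t1 = -b + d;                // exit distance
 return t1 > 0.0;            // at least part of the sphere is in front of the ray origin
 }

The entry/exit points p0 = ro + t0*rd and p1 = ro + t1*rd then feed the same integration loop as in #2.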
Ray tracer
If you need precision and cannot use voxel maps, then you can try a ray-tracing engine instead. But all scattering renderers/engines (#1, #2 and #3 included) are ray tracers anyway... As you can see, all the techniques discussed here are basically the same; the only difference is the method of obtaining the ray/object boundary intersection points.
TL;DR I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not and the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles in some graphics card of the future, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (ie re-using the depth buffer from the DepthOnly pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately a one line comment, and I'm unsure of how to duplicate it in WebGL anyway (a naive gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-rendering the discs in a second pass as cones (the center of the disc becomes the apex of the cone, think "close the umbrella"), effectively computing a Voronoi diagram on the surface of the object (a la the red book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel takes the color value of the first disc to reach it when growing the radii from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA Framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the relative texture2D() lookup would be allowed on to the second pass; otherwise I would hack "discarding" the vertex (its alpha would be set to 0 or some flag set that would cause discard of fragments associated with the disc/cone or etc).
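(For reference, a typical fract-based encode/decode for packing a [0,1) depth into an 8-bit RGBA target looks roughly like this:)

vec4 packDepth(float depth)
{
    // spread the [0,1) value over the four 8-bit channels
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

float unpackDepth(vec4 rgba)
{
    // undo the packing above
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}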
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?
I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine and I'm not having any trouble with that part.
When it comes to point lights, it says to render spheres around the lights so that only the pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scale of the circle. This method would have 3 advantages:
No cull-face issue
No camera-position-inside-light-sphere issue
Much more efficient (vertex count severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position can be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should depend on the distance from the camera to the light and somehow on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to the polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written; that data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle was projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane: in that case none of the fragments will be generated and the light will "disappear".
Lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call that point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulations of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length from the projected center. It must be a point on the silhouette in screen space, obviously.
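Roughly like this (lightPos, lightRadius and cameraRight, the camera's world-space right vector, are placeholder names here; offsetting the center by the radius along that vector gives a point approximately on the silhouette):

vec4 c = viewProjectionMatrix * vec4(lightPos, 1.0);
vec4 s = viewProjectionMatrix * vec4(lightPos + cameraRight * lightRadius, 1.0);
vec2 center = c.xy / c.w;               // light center in NDC
vec2 edge   = s.xy / s.w;               // point near the sphere's silhouette in NDC
float radius = length(edge - center);   // circle radius in NDC units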
I know that normal mapping describes the process of adding detail to meshes without increasing the polygon count, and that this is achieved by using specific normal textures for manipulating the way light is applied to the object. Okay.
But what is bump mapping then? Is it just another term for normal mapping?
How do the visual results compare? Can both techniques be combined?
Bump Mapping describes a general technique for simulating bumps and wrinkles on the surface of an object. This is normally accomplished by manipulating surface normals when doing lighting calculations.
Normal Mapping is a variation of Bump Mapping in which the surface normals are provided via a texture, with normals embedded into the RGB channels of the image.
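In a shader that typically looks something like this (a tangent-space map is assumed; normalMap and the TBN matrix are placeholder names):

vec3 n = texture(normalMap, uv).rgb;   // stored in [0,1]
n = n * 2.0 - 1.0;                     // remap to [-1,1]
n = normalize(TBN * n);                // rotate from tangent space into world/view space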
Other techniques, such as Parallax Mapping, are usually also grouped under Bump Mapping, although they work by offsetting the texture lookups based on a height map rather than only perturbing the normals.
To answer the second part of the question, they could fairly easily be combined. The base surface normals could be determined from a normal mapping and then modified via another bump mapping technique.
Bump mapping was originally suggested by Jim Blinn back in 1978. His system basically works by perturbing the normal on a surface by using the height of that texel and the height of the surrounding texels.
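In a modern fragment shader that perturbation can be sketched roughly like this (heightMap, texelSize and bumpStrength are illustrative names):

float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r;
float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;
float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;
// the slope of the height field tilts the tangent-space normal
vec3 bumped = normalize(vec3(bumpStrength * (hL - hR),
                             bumpStrength * (hD - hU),
                             1.0));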
This is quite similar to DuDv bump mapping (you may recall the original environment-mapped bump mapping introduced in DX6, which was DuDv based). It works by pre-calculating the derivatives above so that you can skip the first stage of the calculation (since it does not change each frame).
Normal mapping is a very similar technique that works by simply replacing the normal at each texel position. Conceptually it's much simpler.
There is another technique that produces "similar" results. It is called emboss bump mapping. This method works by using multipass rendering. Basically you end up subtracting a gray scale heightmap from the last pass but offsetting it a small amount based on the light direction.
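The classic version was done with fixed-function multipass blending, but the per-pixel arithmetic is roughly this (compressed into one shader for illustration; lightDirTS is the tangent-space light direction and embossOffset a small UV shift, both placeholder names):

float h0 = texture(heightMap, uv).r;
float h1 = texture(heightMap, uv + lightDirTS.xy * embossOffset).r;
// the difference of the heightmap with a light-dependent offset gives bright/dark rims
float emboss = clamp(0.5 + (h0 - h1), 0.0, 1.0);
color.rgb *= emboss * 2.0;   // modulate the base color pass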
There are other ways of emulating surface topology as well.
Elevation mapping uses the height map as an alpha texture and then renders multiple slices through that texture with a different alpha value to simulate the change in height. If not performed correctly, however, the slices can be very visible.
Displacement mapping works by generating a 3D mesh that uses the texture as its basis. This, obviously, massively increases your vertex count.
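In vertex-shader form (requires vertex texture fetch; heightMap and displacementScale are placeholder names):

// vertex shader: push each vertex out along its normal by the sampled height
float h = textureLod(heightMap, vertexUV, 0.0).r;
vec3 displaced = position + normal * h * displacementScale;
gl_Position = modelViewProjectionMatrix * vec4(displaced, 1.0);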
Steep parallax mapping, relief mapping, etc. are the newest techniques. They work by casting a ray through the heightmap until it intersects the surface. This has the big advantage that a bump now correctly occludes the texels behind it: the ray stops at the first texel it hits and never reaches the heightmap behind that point, so the "closest" texel is always displayed.
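A very condensed sketch of that ray march (fixed step count; viewDirTS is the tangent-space vector from the fragment toward the camera, parallaxScale and heightMap are placeholders; real implementations refine the hit point afterwards):

vec2  curUV     = uv;
float rayHeight = 1.0;                      // start at the top of the height field
const int steps = 32;
vec2  duv = (viewDirTS.xy / viewDirTS.z) * parallaxScale / float(steps);
float dh  = 1.0 / float(steps);
for (int i = 0; i < steps; ++i)
{
    if (texture(heightMap, curUV).r >= rayHeight) break;  // ray dipped below the surface: hit
    curUV     -= duv;                       // step the lookup against the view direction
    rayHeight -= dh;                        // and lower the ray
}
// curUV now addresses (approximately) the closest texel the view ray hits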
I am trying to write optimized code that renders a 3D scene using OpenGL onto a sphere and then displays the unwrapped sphere on the screen, i.e. produces a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots all around, so as to approximate spherical quads with planar tiles of the frustum. Then I can use these as textures to apply to distorted planar patches.
This seems like a pretty tedious approach to me. I wonder if there is a way to tackle it using shaders or some other GPU-friendly method.
Thank you
S.
I can give you two solutions.
The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This handles all the needed math in hardware for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion in the corners). In most cases it should be enough, though.
After this, you render a quad to the screen, and in a shader you map your UV coordinates to xyz direction vectors using straightforward spherical mapping. The hardware computes for you which side of the cubemap to sample, and at which UV.
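For the unwrap pass the lookup can be as simple as this (following your convention of x = polar angle, y = azimuth; envCube is a placeholder for the cubemap rendered in the first pass):

float polar   = uv.x * 3.14159265;            // x axis -> polar angle in [0, pi]
float azimuth = uv.y * 2.0 * 3.14159265;      // y axis -> azimuth in [0, 2*pi]
vec3 dir = vec3(sin(polar) * cos(azimuth),
                cos(polar),
                sin(polar) * sin(azimuth));   // unit direction for this texel
vec4 color = texture(envCube, dir);           // the HW picks the cubemap face and UV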
The second is more or less the same, but with a custom deformation and less hardware support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and do 6 passes. The rendering pass is the same, but this time you are on your own to choose the right texture and compute the UVs.
By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and considerable complexity. Just project the planar image mathematically and be done with it.
You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.