GL_LINES and FXAA in WebGL - glsl

So I have this problem with FXAA. The lines I draw using GL_LINES don't look truly anti-aliased after applying FXAA as a post-process filter; they just look blurred. So my question basically is: is this the expected behaviour of FXAA with GL_LINES? The shader code I'm using is nothing special (see here).
This is how the FXAA output looks:
And here is the (standard 4x) MSAA:

Despite having "antialiasing" in its name, FXAA is not really antialiasing. "Real" antialiasing techniques involve taking multiple samples of the signal; FXAA, as a post-processing technique, cannot do that.
At the end of the day, it is nothing more than a smart blur filter. So while there may be variations of it that can handle lines better, it's still just a blur filter.
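To make the "smart blur" point concrete, here is a deliberately stripped-down, hypothetical edge-aware blur, not the actual FXAA algorithm: it only looks at pixels that are already in the frame, estimates local luma contrast, and smears high-contrast pixels toward their neighbourhood average. The names uScene, uTexelSize and vUV are made up, and the shader is written as a C string the way you would feed it to glShaderSource:

    /* GLSL ES 3.00 fragment shader sketch (WebGL 2-style), illustrative only. */
    static const char *naiveEdgeBlurFrag =
        "#version 300 es\n"
        "precision mediump float;\n"
        "uniform sampler2D uScene;    // the already-rendered frame\n"
        "uniform vec2 uTexelSize;     // 1.0 / resolution\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "float luma(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }\n"
        "void main() {\n"
        "    vec3 c = texture(uScene, vUV).rgb;\n"
        "    vec3 n = texture(uScene, vUV + vec2(0.0,  uTexelSize.y)).rgb;\n"
        "    vec3 s = texture(uScene, vUV - vec2(0.0,  uTexelSize.y)).rgb;\n"
        "    vec3 e = texture(uScene, vUV + vec2(uTexelSize.x, 0.0)).rgb;\n"
        "    vec3 w = texture(uScene, vUV - vec2(uTexelSize.x, 0.0)).rgb;\n"
        "    float lumaMax = max(luma(c), max(max(luma(n), luma(s)), max(luma(e), luma(w))));\n"
        "    float lumaMin = min(luma(c), min(min(luma(n), luma(s)), min(luma(e), luma(w))));\n"
        "    // high local contrast -> blend toward the neighbourhood average (blur),\n"
        "    // low contrast -> leave the pixel untouched\n"
        "    vec3 avg = (c + n + s + e + w) * 0.2;\n"
        "    fragColor = vec4(mix(c, avg, step(0.1, lumaMax - lumaMin)), 1.0);\n"
        "}\n";

A thin GL_LINES line is often only about a pixel wide to begin with, so a pass like this has very little neighbourhood information to work with, which is why the result reads as blur rather than as a cleaner edge.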

Related

Weird result with MSAA

MSAA using OpenGL.
I just drew a white sphere using 'glutSolidSphere' and filled black where 'dot(Normal, CameraVec) < threshold' for silhouette.
And I found a weird result at the outline of the inner white circle. It looks like MSAA didn't work.
By the way, it worked well at the outermost outline of the black circle.
If I increase the number of samples, it works well even at the outline of the inner white circle.
I think it should work well independently of the number of samples, because resolving the samples happens after the fragment shader.
Is this the right result? If yes, why?
Below is the result of 4 samples (left) and 32 samples (right).
MSAA only helps to smooth polygon edges and intersections. It does nothing to smooth out sharp transitions created by your shader code.
The main idea behind MSAA is that the fragment shader is still executed only once per fragment. Each fragment has multiple samples, and coverage is determined by sample. This means that some samples of the fragment can be inside the rendered polygon, and some outside. The fragment shader output is then written to only the covered samples. But all the covered samples within the fragment get the same value.
The depth buffer also has per-sample resolution, meaning that intersections between polygons also benefit from the smoothing produced by MSAA.
Once you are aware how MSAA works, it makes sense that it does nothing for sharp transitions in the interior of polygons, which can be the result of logic applied in the shader. To achieve smoothing in this case, you would have to evaluate the fragment shader per sample, which does not happen with MSAA.
MSAA is attractive because it does perform sufficient anti-aliasing for many use cases, with relatively minimal overhead. But as you noticed, it's obviously not sufficient for all cases.
What you can do about this goes beyond the scope of an answer here. There are two main directions:
You can avoid generating sharp transitions in your shader code. If you use standard texturing, using mipmaps can help. For procedural transitions, you can smooth them out in your code, possibly based on gradient values (see the sketch after this list).
You can use a different anti-aliasing method. There are too many to mention here. It's easy to get perfect anti-aliasing with super-sampling, but it's very expensive. Most methods try to achieve a compromise in getting better results than plain MSAA, while not adding too much overhead.
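For the silhouette case from the question, that gradient-based smoothing could look like the following sketch: the hard dot(Normal, CameraVec) < threshold cut is widened into a roughly one-pixel ramp using screen-space derivatives. The varying/uniform names vNormal, vViewDir and uThreshold are made up, and the shader is given as a C string for glShaderSource:

    /* Hypothetical GLSL 3.30 fragment shader: replaces the hard threshold
       with a band-limited transition, so the edge looks smooth without
       per-sample shading. */
    static const char *silhouetteFrag =
        "#version 330 core\n"
        "in vec3 vNormal;\n"
        "in vec3 vViewDir;\n"
        "uniform float uThreshold;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    float d = dot(normalize(vNormal), normalize(vViewDir));\n"
        "    float w = fwidth(d);   // roughly how much d changes per pixel\n"
        "    // 0 (black) inside the silhouette band, 1 (white) elsewhere,\n"
        "    // with a smooth ramp about one pixel wide instead of a hard step\n"
        "    float t = smoothstep(uThreshold - w, uThreshold + w, d);\n"
        "    fragColor = vec4(vec3(t), 1.0);\n"
        "}\n";

Because the transition is now band-limited to roughly one pixel, it looks much the same with or without MSAA.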
I'm somewhat puzzled by the fact that you get some smoothing on the inside edge with 32x MSAA. I don't think that's expected. I wonder if there's something going on during the downsampling that produces some form of smoothing.

Model with transparency

I have a model with transparent quads for a beard. I cannot tell which triangles belong to the beard, because their color comes from the texture passed to the fragment shader. I have tried presorting the triangles back to front during export of the model, but this does not help. So I implemented MSAA and alpha to coverage, but this did not help either. My last attempt was to draw the model with the depth mask off, skipping any transparent data, so the color buffer would have non-clear color values to blend with. Then I would draw the model a second time with depth testing on, drawing the alpha pieces.
Nothing I have tried so far has worked. What other techniques can I try to get the beard of the model to properly draw? I am looking for a way to handle this that doesn't use a bunch of extensions. I'd prefer techniques that can be handled with plain old OpenGL 4.
Here is an image of what I am dealing with.
This is what I got after I applied the selected answer.
What you're trying to do there is a still largely unsolved problem: order-independent transparency. MSAA is something entirely different, as is alpha to coverage.
So far the best working solution is to separate the model into an opaque and a hairy part. Draw the opaque parts of your scene first, then draw everything (semi-)translucent, ordered far to near in a second pass.
From the way your image looks, it seems the beard is rendered first, which is quite the opposite of what you actually want.
Simple way (a minimal state sketch follows the steps):
Enable depth writes (depth mask), disable alpha blending, and draw the model without the beard.
Disable depth writes, enable alpha blending, and draw the beard. Make sure face culling is enabled.
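Here is that sketch. It assumes a current GL context; drawOpaqueParts() and drawBeard() are hypothetical helpers for the two halves of the model:

    void drawModelWithBeard(void)
    {
        /* Pass 1: opaque geometry writes depth, no blending. */
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        drawOpaqueParts();

        /* Pass 2: the beard is depth-tested against the opaque pass,
           but does not write depth, and is alpha-blended on top. */
        glDepthMask(GL_FALSE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_CULL_FACE);   /* as suggested above */
        drawBeard();

        /* Restore state for the next frame. */
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }

Within the beard itself, the quads should still be sorted back to front for classic alpha blending, as noted above.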
Hard way:
Because order-independent transparency in renderers that use a z-buffer is an unsolved problem (as datenwolf said), you could try depth peeling. I believe the paper is available in the OpenGL SDK. Most likely it'll be slower than the "simple way", and there'll be a limit on the maximum number of overlapping transparent polygons. Also check the Wikipedia article on order-independent transparency.

Outline / Silhouette rendering with OpenGL

I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted ones), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the back faces of my objects first, with a solid color and scaled a little bigger than the actual objects. The result does look good, but I really want my outlines to have a constant width.
I already tried rendering the back faces of my objects with thick wireframe lines. That gives me a constant outline width, but wide line widths are deprecated, the approach produces rendering artifacts, and it leaves gaps if the outline abruptly changes direction (on a cube, for example). I have not yet tried a third rendering pass that draws a point the size of the wireframe lines for each vertex, because of the other problems with this technique.
Any ideas?
Edit: I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
Edit: I was able to get rid of the rendering artifacts in the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results with a 5x5 LoG (Laplacian of Gaussian) convolution kernel, applied to the depth buffer and blended over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader in order to clamp the blended outline, but the results are good. (And it's a matter of what you really want, by the way.)
note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied; this way you end up with a render that closely matches a hand-drawn sketch made with a marker, and you get fine control over the tip size, at the cost of two extra passes. (A shader sketch of the depth-buffer convolution follows.)
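One possible formulation of that depth-buffer convolution (not necessarily what the answer above used): a GLSL 3.30 fragment shader for the post-process pass, written as a C string for glShaderSource. The names uDepthTex, uTexelSize, uGain and vUV are assumptions, and feeding in linearized depth generally gives more even results than raw window-space depth:

    static const char *logOutlineFrag =
        "#version 330 core\n"
        "uniform sampler2D uDepthTex;  // (preferably linearized) depth from the scene pass\n"
        "uniform vec2 uTexelSize;      // 1.0 / framebuffer resolution\n"
        "uniform float uGain;          // tuning factor for the outline strength\n"
        "in vec2 vUV;\n"
        "out vec4 fragColor;\n"
        "const float kernel[25] = float[25](\n"
        "     0.0,  0.0, -1.0,  0.0,  0.0,\n"
        "     0.0, -1.0, -2.0, -1.0,  0.0,\n"
        "    -1.0, -2.0, 16.0, -2.0, -1.0,\n"
        "     0.0, -1.0, -2.0, -1.0,  0.0,\n"
        "     0.0,  0.0, -1.0,  0.0,  0.0);\n"
        "void main() {\n"
        "    float edge = 0.0;\n"
        "    for (int y = -2; y <= 2; ++y)\n"
        "        for (int x = -2; x <= 2; ++x)\n"
        "            edge += kernel[(y + 2) * 5 + (x + 2)]\n"
        "                  * texture(uDepthTex, vUV + vec2(x, y) * uTexelSize).r;\n"
        "    // 1.0 = no edge, 0.0 = strong depth discontinuity (the outline)\n"
        "    float outline = 1.0 - clamp(abs(edge) * uGain, 0.0, 1.0);\n"
        "    fragColor = vec4(vec3(outline), 1.0);\n"
        "}\n";

The clamp against uGain is the tuning mentioned above; blending the result over the rendered image, and the dilate/erode width control, would be additional passes on top of this output.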

OpenGL: Rendering two transparent planes intersecting each other: impossible or not?

I ran into this problem hard; it seems impossible to render.
How can one solve this problem? I want OpenGL to render it like it looks on the right side of the image below:
You need to render your planes with the depth test disabled, using an order-independent blending formula.
If you have some opaque geometry behind them, draw that first, set the depth buffer to read-only instead of disabling the depth test, and then render the transparent ones.
There are also advanced techniques dealing with that common problem, like depth peeling.
EDIT
You can put the depth buffer into read-only mode using glDepthMask(GL_FALSE).
Here is a good article explaining why you can't achieve perfect transparency: Transparency Sorting. Also take a look at the Order Independent Transparency with Dual Depth Peeling article, which covers two methods (one quite straightforward and single-pass) used to get exact (or approximate) order-independent transparency.
I forgot to mention Alpha to Coverage.
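Putting the read-only depth buffer and a commutative blend formula together, here is a minimal sketch. drawTransparentPlanes() is a hypothetical helper, and additive blending is just one order-independent choice; it does look different from classic back-to-front alpha blending:

    /* Assumes any opaque geometry has already been drawn with depth writes on. */
    void drawTransparentPass(void)
    {
        glEnable(GL_DEPTH_TEST);   /* still test against the opaque geometry */
        glDepthMask(GL_FALSE);     /* but stop writing depth */
        glEnable(GL_BLEND);
        /* dst + srcAlpha * src is commutative, so the order in which the two
           planes (and their intersecting fragments) are drawn no longer matters. */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        drawTransparentPlanes();
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }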
One non-trivial solution is to split the planes into parts, sort them, and then render them back to front. However, perfect sorting is hard to achieve.
Like in the article posted in the other answer:
Transparency Sorting: Depth Sorting

Crossfading scenes in OpenGL

I would like to render two scenes in OpenGL, and then do a visual crossfade from one scene to the second. Can anyone suggest a starting point for learning how to do this?
The most important thing you need to learn is how to do render-to-texture.
When you have both scenes in two textures it really is simple to crossfade between them. In fact it's pretty simple to do all manner of interesting fade effects :)
Here's sample code of a cross fade. This seems a little different than what Goz has since the two scenes are dynamic. The example uses the stencil buffer for the cross fade.
I can think of another way to crossfade scenes, but it depends on how complex your scene renderer is. If it is simple, you could bind a shader program before rendering the second scene that does the desired blending effect. I would try glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and manipulate the fragments' alpha values in the shader.
FBOs have, by the way, been available for years, extension or not. If your renderer is complex and uses shader programs, you could just as well render both scenes to FBOs and blend those. Using FBOs is a very common technique that makes it easy to apply all kinds of rendering effects.
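A minimal sketch of that FBO route, assuming both scenes have already been rendered into the color textures sceneTexA and sceneTexB, and that drawFullscreenQuad() is a hypothetical helper drawing a screen-filling textured quad. Using a constant blend color keeps the fade factor out of the quad's vertex data and out of any shader:

    void drawCrossfade(GLuint sceneTexA, GLuint sceneTexB, float fade)  /* fade: 0 -> 1 */
    {
        glDisable(GL_DEPTH_TEST);

        /* First scene, drawn opaque. */
        glDisable(GL_BLEND);
        glBindTexture(GL_TEXTURE_2D, sceneTexA);
        drawFullscreenQuad();

        /* Second scene, blended on top: result = fade * B + (1 - fade) * A. */
        glEnable(GL_BLEND);
        glBlendColor(0.0f, 0.0f, 0.0f, fade);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
        glBindTexture(GL_TEXTURE_2D, sceneTexB);
        drawFullscreenQuad();

        glDisable(GL_BLEND);
        glEnable(GL_DEPTH_TEST);
    }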