How does multisampling really work? - opengl

I am very interested in understanding how multisampling works. I have found plenty of literature on how to enable or use it, but very little about what it actually does in order to achieve an antialiased rendering. What I have found, in many places, is conflicting information that only confused me more.
Please note that I know how to enable and use multisampling (I actually already use it), what I don't know is what kind of data really gets into the multisampled renderbuffers/textures, and how this data is used in the rendering pipeline.
I can understand very well how supersampling works, but multisampling still has some obscure areas that I would like to understand.
Here is what the spec says (OpenGL 4.2):
Pixel sample values, including color, depth, and stencil values, are stored in this
buffer (the multisample buffer). Samples contain separate color values for each fragment color.
...
During multisample rendering the contents of a pixel fragment are changed
in two ways. First, each fragment includes a coverage value with SAMPLES bits.
...
Second, each fragment includes SAMPLES depth values and sets of associated
data, instead of the single depth value and set of associated data that is maintained
in single-sample rendering mode.
So, each sample contains a distinct color, coverage bit, and depth. How is that different from normal supersampling? It seems like a "weighted" supersampling to me, where each final pixel value is determined by the coverage value of its samples instead of a simple average, but I am very unsure about this. And what about texture coordinates at the sample level?
If I store, say, normals in an RGBF multisampled texture, will I read them back "antialiased" (that is, approaching 0) on the edges of a polygon?
A fragment shader is called once per fragment, unless it uses gl_SampleID or gl_SamplePosition, or has an input declared with the 'sample' storage qualifier. How can a fragment shader be invoked only once per fragment and still produce an antialiased rendering?

OpenGL on Silicon Graphics Systems:
http://www-f9.ijs.si/~matevz/docs/007-2392-003/sgi_html/ch09.html#LE68984-PARENT
mentions: When you use multisampling and read back color, you get the resolved color value (that is, the average of the samples). When you read back stencil or depth, you typically get back a single sample value rather than the average. This sample value is typically the one closest to the center of the pixel.
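For illustration, here is a minimal GLSL sketch of what that "resolved" color amounts to: fetching every sample of a multisampled texture and averaging them. The uniform names are placeholders, and in practice the resolve is normally done by a blit rather than by hand.

#version 150
// Manual resolve sketch: average all per-sample colors of one pixel.
// uColorMS / uSampleCount are assumed names, not standard ones.
uniform sampler2DMS uColorMS;   // multisampled color attachment
uniform int uSampleCount;       // number of samples the texture was created with

out vec4 fragColor;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < uSampleCount; ++i)
        sum += texelFetch(uColorMS, texel, i);   // one explicit fetch per sample
    fragColor = sum / float(uSampleCount);       // simple average = resolved color
}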
And there's this technical spec (1994) from the OpenGL site. It explains in full detail what is done if MULTISAMPLE_SGIS is enabled: http://opengl.org/registry/specs/SGIS/multisample.txt
See also this related question: How are depth values resolved in OpenGL textures when multisampling?
And the answers to this question, where GL_MULTISAMPLE_ARB is recommended: where is GL_MULTISAMPLE defined? The specs for GL_MULTISAMPLE_ARB (2002) are here: http://www.opengl.org/registry/specs/ARB/multisample.txt

Related

Any way to obtain percentage coverage of fragment (pixel) by primitive in hlsl/glsl fragment shader?

When the rasterizer processes a primitive, it splits it into a collection of fragments (pixels). Then the fragment shader is called for every resulting pixel. Is there any way for me to get an additional float parameter in my fragment shader that tells me how much of that pixel is covered by the source primitive? This would have a non-trivial value between 0 and 1 on triangle border pixels, and would obviously be 1 for every pixel fully inside the triangle.
I want the rasterizer to calculate this value and pass it to me.
I thought "conservative rasterization" could help with that, but as I understand it, it is used for slightly different tasks (mostly for collision detection).
Also, as I understand it, there is no built-in method to do this. Maybe I can change the rasterizer's behaviour to do it? Is that possible?
When rendering to a multisampled framebuffer, you can look at the gl_SampleMaskIn[] bitmask array in the fragment shader to detect how many samples will be covered by the current fragment. This is about as close as you're going to get, and it's not great for what you want.
Obviously, it has the limitation of having the same granularity as the sample locations within a pixel. But the full mask may also cover fewer than the number of samples in the framebuffer. If the renderer decides to generate multiple fragments per pixel during multisample rasterization, the sample mask that any such fragment receives will only cover the samples that this particular fragment will write.
So if you have a 16-sample multisample framebuffer, the implementation may generate 4 fragments per-pixel, each covering a distinct set of 4 samples. So the sample bitmask for a fragment will never have more than 4 bits, even though you asked for 16x multisample rendering. And there's basically nothing you can do to detect if this is happening (outside of doing tests on specific hardware). All of this is implementation-defined.
Basically, what you want isn't really available; gl_SampleMaskIn is the closest you can get, and how useful it is will be very implementation-dependent.
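For illustration, a minimal fragment shader sketch of that approach, deriving an approximate 0-1 coverage value from gl_SampleMaskIn (uNumSamples is an assumed uniform holding the framebuffer's sample count):

#version 400
// Approximate coverage from the input sample mask; granularity is limited by the
// sample count, and may be further limited if the implementation splits the pixel
// into multiple fragments, as described above. uNumSamples is an assumed uniform.
uniform int uNumSamples;

out vec4 fragColor;

void main()
{
    int covered = bitCount(gl_SampleMaskIn[0]);            // samples covered by this fragment
    float coverage = float(covered) / float(uNumSamples);  // rough 0..1 coverage estimate
    fragColor = vec4(vec3(coverage), 1.0);                 // e.g. visualize it for debugging
}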
Maybe one could use GL_POLYGON_SMOOTH somehow for this, since as far as I understand it does exactly that: it calculates the coverage of the current fragment and then modulates the fragment's alpha based on it.

Is it possible to depth test against a depth texture I am also sampling, in the same draw call?

Context:
I am using a deferred rendering setup, where in the first stage I have two FBOs: one is the GBuffer, for storing the normals, albedo, and material information for all visible fragments. This FBO has a 32-bit depth texture. This gets drawn into in a geometry pass, before any lighting is calculated.
The second FBO is color-only, and starts off black, but accumulates lighting over several passes, from lighting shaders that sample from the GBuffer and write to the color-only buffer using additive blending.
The problem is, I would really like to utilize early depth testing in order to have my lighting calculated ONLY for fragments that contain actual geometry (not just sky). The best way I can think of to do this is to use depth testing to fail any pixels that have a depth of one in the case of sunlight, or to fail any pixels that lie behind the sphere of influence for point lights. However, I don't think I can bind this depth texture to my color FBO, since I also sample from it inside the lighting shader to calculate the fragment's position in world space.
So my question is: Is there a way to use the same depth texture for both the early depth test, and for sampling inside the shader? Or if not, is there some other (reasonably performant) way of rejecting pixels that don't have geometry in them? I will not be writing to this depth texture at all in my lighting pass.
I only have to target modern graphics hardware on PCs (so I can use any common extensions, or OpenGL 4.6 features).
There are rules in OpenGL about reading from data in a shader that's also being updated due to a framebuffer operation. Those rules used to be quite strict. Indeed, pre-GL 4.4, the rules were so strict that what you're trying to do was actually undefined behavior. That is, if an image from a texture was attached to the rendering FBO, and you took a sample from that texture in a way such that it was at all possible to be reading from the attached image, you got undefined behavior. Never mind if your write mask meant that no writing happened; it was UB.
Fortunately, it's well-defined now. You only get UB if you're doing an actual write, not merely because you have an image attached to the FBO. And by "now," I mean basically any hardware made in the last 10 years. While ARB_texture_barrier and GL 4.5 are fairly recent, their predecessor NV_texture_barrier is actually quite old. And despite being an NVIDIA extension by name, it was so widely implemented that it is even available on MacOS implementations.
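For illustration, a minimal sketch of the shader side of such a lighting pass, assuming depth writes are disabled with glDepthMask(GL_FALSE) on the API side so the attached depth texture is only read; the uniform names and the position reconstruction are placeholders:

#version 450
// Lighting-pass sketch: the same depth texture is attached to the FBO for the
// read-only depth test and is also sampled here to reconstruct position.
// Requires depth writes to be masked off so no feedback loop occurs.
// uDepthTex, uInvViewProj and the reconstruction math are illustrative assumptions.
uniform sampler2D uDepthTex;   // GBuffer depth, also bound as the FBO depth attachment
uniform mat4 uInvViewProj;     // inverse view-projection matrix

in vec2 vUV;                   // full-screen (or light volume) UV in [0,1]
out vec4 fragColor;

void main()
{
    float depth = texture(uDepthTex, vUV).r;                     // read-only use of the attachment
    vec4 clip  = vec4(vUV * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);  // back to clip space
    vec4 world = uInvViewProj * clip;
    world /= world.w;                                            // world-space position
    fragColor = vec4(world.xyz, 1.0);                            // lighting would use this position
}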

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it is well known for not playing nicely with MSAA.
I can clearly see why, but I suddenly came up with an idea to solve that.
It's simple: just do the deferred rendering completely, and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not antialiased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It's silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, it means we re-render the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU will be able to perform MSAA with aliased color but super-sampled depth/geometry. (It's similar to picking up only the "center" of the fragment and evaluating that, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense or not. I tested it using OpenGL, but doing that makes no difference compared with plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).

WebGL: alternative to writing to gl_FragDepth

In WebGL, is it possible to write to the fragment's depth value or control the fragment's depth value in some other way?
As far as I could find, gl_FragDepth is not present in WebGL 1.x, but I am wondering if there is any other way (extensions, browser-specific support, etc.) to do it.
What I want to achieve is to have a ray-traced object play along with other elements drawn using the usual model, view, and projection transforms.
There is the extension EXT_frag_depth.
Because it's an extension, it might not be available everywhere, so you need to check that it exists.
var isFragDepthAvailable = gl.getExtension("EXT_frag_depth");
If isFragDepthAvailable is not falsy, then you can enable it in your shaders with
#extension GL_EXT_frag_depth : enable
Otherwise you can manipulate gl_Position.z in your vertex shader though I suspect that's not really a viable solution for most needs.
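For completeness, a minimal WebGL 1 fragment shader sketch using the extension path might look like this (vRayDepth is an assumed varying carrying the ray-traced hit depth in [0,1] window space):

#extension GL_EXT_frag_depth : enable
// Shade a ray-traced hit and override the fragment's depth so it composites
// correctly with normally rasterized geometry. vRayDepth is an assumed varying.
precision highp float;
varying float vRayDepth;

void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // placeholder shading for the ray hit
    gl_FragDepthEXT = vRayDepth;              // write the ray-traced depth
}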
Brad Larson has a clever workaround for this that he uses in Molecules (full blog post):
To work around this, I implemented my own custom depth buffer using a
frame buffer object that was bound to a texture the size of the
screen. For each frame, I first do a rendering pass where the only
value that is output is a color value corresponding to the depth at
that point. In order to handle multiple overlapping objects that might
write to the same fragment, I enable color blending and use the
GL_MIN_EXT blending equation. This means that the color components
used for that fragment (R, G, and B) are the minimum of all the
components that objects have tried to write to that fragment (in my
coordinate system, a depth of 0.0 is near the viewer, and 1.0 is far
away). In order to increase the precision of depth values written to
this texture, I encode depth to color in such a way that as depth
values increase, red fills up first, then green, and finally blue.
This gives me 768 depth levels, which works reasonably well.
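For illustration only, one way that "red fills up first, then green, then blue" encoding could look in GLSL; the exact mapping used in Molecules may differ:

precision highp float;
// Illustrative reconstruction of the encoding described above: depth in [0,1] is
// spread over 768 levels so red saturates first, then green, then blue. Because each
// channel is monotone in depth, component-wise GL_MIN_EXT blending keeps the encoding
// of the nearest depth. This is a sketch, not the exact Molecules code.
varying float vDepth;   // assumed varying: this fragment's depth in [0,1]

vec3 encodeDepthToColor(float depth) {
    float level = depth * 768.0;                        // 3 channels x 256 levels
    float r = clamp(level,         0.0, 256.0) / 256.0;
    float g = clamp(level - 256.0, 0.0, 256.0) / 256.0;
    float b = clamp(level - 512.0, 0.0, 256.0) / 256.0;
    return vec3(r, g, b);
}

void main() {
    gl_FragColor = vec4(encodeDepthToColor(vDepth), 1.0);
}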
EDIT: Just realized WebGL doesn't support min blending, so not very useful. Sorry.

My own z-buffer

How can I make my own z-buffer for correctly blending alpha channels? I'm using GLSL.
I have only one idea: use two "buffers", one of them storing the depth component and the other storing color (with an alpha channel). I don't need access to the buffer in my program. I can't use a uniform array because GLSL has a restriction on the number of uniform variables. I can't use an FBO because the behaviour of writing to and reading from the same framebuffer at the same time is undefined (and doesn't work on any cards).
How can I resolve this problem?!
Or how can I read the actual, up-to-date z-buffer from GLSL? (I mean that the z-buffer must be updated for each fragment shader call.)
How can I make my own z-buffer for correctly blending alpha channels?
That's not possible. For perfect order-independent transparency you must get rid of the z-buffer and replace it with another mechanism for hidden surface removal.
With a z-buffer, there are two possible ways to tackle the problem.
Multi-layered z-buffer (impractical with hardware acceleration): basically, it stores several layers of "depth" values and uses them for blending transparent surfaces. It hogs a lot of memory, and there is a maximum number of overlapping transparent surfaces; once you're over the limit, there will be artifacts.
Depth peeling (google it): order-independent transparency, but there's a limit on the maximum number of "overlaying" transparent polygons per pixel. It can actually be implemented on hardware; a minimal sketch of the core peel test follows below.
Both approaches have a limit (a maximum number of overlapping transparent polygons per pixel); once you go over the limit, the scene will no longer render properly, which makes the whole thing rather useless.
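For illustration, a minimal GLSL sketch of the core test in a depth-peeling pass (uPrevDepth, uColor and the epsilon are assumptions; a complete implementation also needs the per-pass compositing):

#version 330
// Discard anything at or in front of the layer captured by the previous peel,
// so the regular depth test (GL_LESS) selects the next-nearest layer.
uniform sampler2D uPrevDepth;   // depth buffer saved from the previous peeling pass
uniform vec4 uColor;            // color + alpha of the transparent surface

out vec4 fragColor;

void main()
{
    float prev = texelFetch(uPrevDepth, ivec2(gl_FragCoord.xy), 0).r;
    if (gl_FragCoord.z <= prev + 1e-7)   // already peeled in an earlier pass
        discard;
    fragColor = uColor;                  // the depth test keeps the nearest remaining layer
}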
What you could actually do (to get a perfect solution) is to remove the z-buffer completely, and make a rendering pipeline that gathers all polygons to be rendered, clips them, splits them (when two polygons intersect), sorts them and then paints them on screen in the correct order to ensure a correct result. However, this is hard, and doing it with hardware acceleration is harder. I think (I'm not completely certain it happened) 5 or 6 years ago some ATI GPU-related document mentioned that some of their cards could render a correct scene with the Z-buffer disabled by enabling some kind of extension. However, they didn't say a thing about alpha blending. I haven't heard about this feature since. Perhaps it didn't become popular and shared the fate of TruForm (forgotten). Also, such a rendering pipeline will not be able to do some things that are possible with a z-buffer.
If it's order-independent transparency you're after then the fundamental problem is that a depth buffer stores one depth per pixel, but if you're composing a view of partially transparent geometry then more than one fragment contributes to each pixel.
If you were to solve the problem robustly you'd need an ordered list of depths per pixel, going back to the closest opaque fragment. You'd then walk the list in reverse order. In practice OpenGL doesn't do things like variably sized arrays so people achieve pretty much that by drawing their geometry in back-to-front order.
An alternative embodied by GL_SAMPLE_ALPHA_TO_COVERAGE is to switch to screen-door transparency, which is indistinguishable from real transparency either at a really high resolution or with multisampling. Ideally you'd do that stochastically, but that would void the OpenGL rule of repeatability. Nevertheless since you're in GLSL you can do it for yourself. Your sampler simply takes the input alpha and uses that as the probability that it'll output the final pixel. So grab a random value in the range 0.0 to 1.0 from somewhere and if it's greater than the alpha then discard the pixel. Always output with an alpha of 1.0 and just use the normal depth buffer. Answers like this say a bit more on what you can do to get randomish numbers in GLSL, and obviously you want to turn multisampling up as high as possible.
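For illustration, a minimal GLSL sketch of that stochastic screen-door approach (the hash-based rand() is a common ad-hoc trick, not part of OpenGL; uTexture and vUV are assumed names):

#version 330
// Treat alpha as the probability of keeping the fragment, always output opaque,
// and let the normal depth buffer handle the rest, as described above.
uniform sampler2D uTexture;

in vec2 vUV;
out vec4 fragColor;

float rand(vec2 co)
{
    // cheap pseudo-random value in [0,1) derived from the pixel position
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    vec4 color = texture(uTexture, vUV);
    if (rand(gl_FragCoord.xy) > color.a)   // survive with probability equal to alpha
        discard;
    fragColor = vec4(color.rgb, 1.0);      // always write alpha 1.0; depth buffer stays normal
}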
Eric Enderton has written a decent paper (which has a slide version) on stochastic order-independent transparency that goes alongside a DirectX implementation that's worth checking out.