I apologize in advance for not providing code or images; I am more or less looking for clarification on whether what I'm doing is well-defined, or whether my issues stem from some other bug.
I have implemented a separated Gaussian blur for exponential variance shadow mapping in a compute shader, which runs without issues (as far as I can visually tell). I wanted to re-use that same shader to blur downsampled luminance images for a bloom effect. The issue I observed was that obvious garbage values got blurred into the image at positive x/y in the respective passes (brightly colored streaks of varying length). To check whether there is actually garbage inside the texture, I implemented the same shader (minus some changes regarding kernel size) as a full-screen fragment shader pass, which does not exhibit the noise blurring. (We have to use a compatibility profile due to some legacy code in other parts of the application, which makes using an actual frame debugger very difficult.)
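(For illustration, a minimal sketch of the kind of compute pass described here, with placeholder names, an assumed rgba16f target, and the horizontal pass only; the vertical pass is analogous:)

    #version 430 core
    layout(local_size_x = 64, local_size_y = 1) in;

    layout(binding = 0)          uniform sampler2D uSrc;  // wrap mode GL_CLAMP_TO_EDGE
    layout(binding = 0, rgba16f) writeonly uniform image2D uDst;

    uniform vec2  uTexelSize;      // 1.0 / source resolution
    const int     R = 4;           // kernel radius, placeholder
    uniform float uWeights[R + 1]; // Gaussian weights, placeholder

    void main()
    {
        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        if (any(greaterThanEqual(p, imageSize(uDst)))) return;

        // Compute shaders have no implicit derivatives, so pick the LOD explicitly.
        vec2 uv  = (vec2(p) + 0.5) * uTexelSize;
        vec4 sum = textureLod(uSrc, uv, 0.0) * uWeights[0];
        for (int i = 1; i <= R; ++i) {
            vec2 off = vec2(float(i), 0.0) * uTexelSize;
            sum += textureLod(uSrc, uv + off, 0.0) * uWeights[i];
            sum += textureLod(uSrc, uv - off, 0.0) * uWeights[i];
        }
        imageStore(uDst, p, sum);
    }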
My question relates to texture sampling in compute shaders. I use a sampler2D and texture(...) to read the to-be-blurred values, with the wrapping mode set to GL_CLAMP_TO_EDGE. Does this perhaps not work as I expect it to in compute shaders? I issue a memory barrier with GL_TEXTURE_FETCH_BARRIER_BIT after each blur pass (not before the first one, as the downsampling happens in a fragment shader, which shouldn't count as an incoherent access).
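(And the host-side ordering being described, as a sketch; program/texture names are placeholders:)

    /* Horizontal pass: reads srcTex with texture(), writes tmpTex with imageStore. */
    glUseProgram(blurProgramX);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glBindImageTexture(0, tmpTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glDispatchCompute(groupsX, groupsY, 1);

    /* The barrier bit describes how the freshly written data will be READ next:
       tmpTex is consumed with texture() in the vertical pass, hence TEXTURE_FETCH. */
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

    /* Vertical pass: reads tmpTex, writes dstTex. */
    glUseProgram(blurProgramY);
    glBindTexture(GL_TEXTURE_2D, tmpTex);
    glBindImageTexture(0, dstTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glDispatchCompute(groupsX, groupsY, 1);
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);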
Thanks for the help in advance.
Related
When the rasterizer processes a primitive, it splits it into a collection of fragments (pixels). Then the fragment shader is called for every resulting pixel. Is there any way to get an additional float parameter in my fragment shader that stores how much of the pixel is covered by the source primitive? This should have a non-trivial value between 0 and 1 for pixels on a triangle's border, and obviously be 1 for every pixel fully inside the triangle.
I want the rasterizer to calculate this value and pass it to me.
I thought "conservative rasterization" could help with that, but as I understand it, it is used for slightly different tasks (mostly collision detection).
Also, as I understand it, there is no built-in method to do this. Maybe I can change the rasterizer's behavior to achieve it? Is that possible?
When rendering to a multisampled framebuffer, you can look at the gl_SampleMaskIn[] bitmask array in the fragment shader to detect how many samples will be covered by the current fragment. This is about as close as you're going to get, and it's not great for what you want.
Obviously, it has the limitation of having the same granularity as the sample locations within a pixel. But the full mask may also contain fewer bits than the number of samples in the framebuffer: if the renderer decides to generate multiple fragments per pixel during multisample rasterization, the sample mask of any such fragment will only cover the samples that this particular fragment will write.
So if you have a 16-sample multisample framebuffer, the implementation may generate 4 fragments per pixel, each covering a distinct set of 4 samples. The sample bitmask for a fragment will then never have more than 4 bits set, even though you asked for 16x multisample rendering. And there's basically nothing you can do to detect whether this is happening (outside of doing tests on specific hardware). All of this is implementation-defined.
Basically, what you want isn't really available; gl_SampleMask is the closest you can get, and how useful it is will be very implementation-dependent.
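(A minimal sketch of reading that mask, with the caveats above; GLSL 4.00:)

    #version 400 core
    out vec4 fragColor;

    void main()
    {
        // gl_SampleMaskIn[0] holds one coverage bit per sample for this fragment.
        // Per the caveats above, this may cover only a subset of the pixel's samples.
        int   covered  = bitCount(gl_SampleMaskIn[0]);
        float coverage = float(covered) / float(gl_NumSamples);
        fragColor = vec4(vec3(coverage), 1.0); // visualize the coverage estimate
    }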
Maybe one could use GL_POLYGON_SMOOTH for this somehow, since as far as I understand it does exactly that: it calculates the coverage of the current fragment and then modulates the fragment's alpha based on it.
MSAA using OpenGL.
I just drew a white sphere using glutSolidSphere and filled in black where dot(Normal, CameraVec) < threshold to get a silhouette.
And I found a weird result at the outline of the inner white circle. It looks like MSAA did not work there.
By the way, it worked well at the outermost outline of the black circle.
If I increase the number of samples, it works well even at the outline of the inner white circle.
I think it should work well independently of the number of samples, because the sample resolve occurs after the fragment shader.
Is this the right result? If yes, why?
Below are the results with 4 samples (left) and 32 samples (right).
MSAA only helps to smooth polygon edges and intersections. It does nothing to smooth sharp transitions created by your shader code.
The main idea behind MSAA is that the fragment shader is still executed only once per fragment. Each fragment has multiple samples, and coverage is determined by sample. This means that some samples of the fragment can be inside the rendered polygon, and some outside. The fragment shader output is then written to only the covered samples. But all the covered samples within the fragment get the same value.
The depth buffer also has per-sample resolution, meaning that intersections between polygons also profit from the smoothing produced by MSAA.
Once you are aware of how MSAA works, it makes sense that it does nothing for sharp transitions in the interior of polygons, which can result from logic applied in the shader. To achieve smoothing in this case, you would have to evaluate the fragment shader per sample, which does not happen with MSAA.
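(For completeness: GL 4.0's sample shading can force per-sample evaluation where you really need it, at a cost approaching supersampling. A minimal sketch:)

    /* Request a fragment shader invocation for every covered sample
       (GL 4.0 / ARB_sample_shading); inputs are then interpolated per sample. */
    glEnable(GL_SAMPLE_SHADING);
    glMinSampleShading(1.0f);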
MSAA is attractive because it does perform sufficient anti-aliasing for many use cases, with relatively minimal overhead. But as you noticed, it's obviously not sufficient for all cases.
What you can do about this goes beyond the scope of an answer here. There are two main directions:
You can avoid generating sharp transitions in your shader code. If you use standard texturing, using mipmaps can help. For procedural transitions, you can smooth them out in your code, possibly based on gradient values (see the sketch below).
You can use a different anti-aliasing method. There are too many to mention here. It's easy to get perfect anti-aliasing with super-sampling, but it's very expensive. Most methods try to achieve a compromise in getting better results than plain MSAA, while not adding too much overhead.
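As an illustration of the first direction, here is a hedged sketch of smoothing a procedural threshold with screen-space gradients (using the silhouette test from the question; uThreshold is a placeholder uniform):

    #version 330 core
    in vec3 vNormal;
    in vec3 vViewDir;
    uniform float uThreshold; // placeholder: silhouette cutoff
    out vec4 fragColor;

    void main()
    {
        float d = dot(normalize(vNormal), normalize(vViewDir));
        // fwidth(d) estimates how much d changes across one pixel;
        // widen the step by that amount instead of using a hard cutoff.
        float w = fwidth(d);
        float t = smoothstep(uThreshold - w, uThreshold + w, d);
        fragColor = vec4(vec3(t), 1.0); // black silhouette fading to white
    }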
I'm somewhat puzzled by the fact that you get some smoothing on the inside edge with 32x MSAA. I don't think that's expected. I wonder if there's something going on during the downsampling that produces some form of smoothing.
Criteria: I’m using OpenGL with shaders (GLSL) and trying to stick with modern techniques (e.g., staying away from deprecated concepts).
My questions, in a very general sense (see below for more detail), are as follows:
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
Background: My application draws points connected with lines in an ortho projection (vertices have varying depth in the projection). I’ve only recently started using shaders in the project (trying to get away from deprecated concepts). I understand that standard blending has ordering issues with alpha testing and depth testing: basically, if a “translucent” pixel at a higher z level is drawn first (thus blending with whatever colors were already drawn to that pixel at a lower z level), and an opaque object is then drawn at that pixel but at a lower z level, depth testing prevents changing the pixel that was already drawn for the “higher” z level, thus causing blending issues. To overcome this, you need to draw opaque items first, then translucent items in ascending z order. My gut feeling is that shaders wouldn’t provide an (efficient) way to change this behavior—am I wrong?
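(For reference, the ordering just described, as a minimal sketch with the common refinement of disabling depth writes for the translucent pass; draw calls are placeholders:)

    /* Pass 1: opaque geometry with normal depth test and depth writes. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    drawOpaque();               /* placeholder */

    /* Pass 2: translucent geometry in ascending z order; keep the depth test
       but turn off depth writes so translucent fragments don't occlude
       other translucent fragments drawn later. */
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawTranslucentSorted();    /* placeholder */
    glDepthMask(GL_TRUE);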
Further, for speed and convenience, I pass information for each vertex (along with a couple of uniform variables) to the shaders, and they use the information to find a subset of the vertices that need special attention. Without doing a similar set of logic in the app itself (and slowing things down) I can’t know a priori what subset of vertices that is. Thus I send all vertices to the shader. However, when I draw “points” I’d like the shader to ignore all the vertices that aren’t in the subset it determines. I think I can get the effect by setting alpha to zero and using an alpha function in the GL context that will prevent drawing anything with alpha less than, say, 0.01. However, is there a better or more “correct” GLSL way for a shader to say “just ignore this vertex”?
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Sort of. If you have access to GL 4.x-class hardware (Radeon HD 5xxx or better, or GeForce 4xx or better), then you can perform order-independent transparency. Earlier versions have techniques like depth peeling, but they're quite expensive.
The GL 4.x-class version essentially builds per-pixel "linked lists" of transparent samples, which you resolve into the final sample color with a full-screen pass. It's not free, of course, but it isn't as expensive as other OIT methods. How expensive it would be in your case is uncertain; the cost is proportional to how much transparent overlap you have per pixel.
You still have to draw opaque stuff first, and you have to draw transparent stuff using special shader code.
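A heavily abbreviated sketch of the build pass for that linked-list technique (all names and binding points are placeholders; a separate full-screen pass then walks each list, sorts by depth, and blends):

    #version 430 core
    // Build pass: every transparent fragment prepends itself to a per-pixel list.
    layout(binding = 0, r32ui)    coherent uniform uimage2D     headPointers; // list head per pixel
    layout(binding = 1, rgba32ui) coherent uniform uimageBuffer nodeBuffer;   // color | depth | next
    layout(binding = 0, offset = 0) uniform atomic_uint nodeCounter;          // bump allocator

    in vec4 vColor; // placeholder: color from earlier stages

    void main()
    {
        uint node = atomicCounterIncrement(nodeCounter);
        // Real code must check node against the node buffer's capacity here.

        // Atomically make this node the new head and remember the previous one.
        uint prev = imageAtomicExchange(headPointers, ivec2(gl_FragCoord.xy), node);

        imageStore(nodeBuffer, int(node),
                   uvec4(packUnorm4x8(vColor),            // color, 8 bits per channel
                         floatBitsToUint(gl_FragCoord.z), // depth, for the resolve-pass sort
                         prev,                            // "next" pointer
                         0u));
    }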
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
No.
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
No in general, but yes for points. A Geometry shader can conditionally emit vertices, thus allowing you to discard any vertex for arbitrary reasons.
Discarding a vertex in non-point primitives is possible, but it will also affect the interpretation of that primitive. The reason it's simple for points is because a vertex is a primitive, while a vertex in a triangle isn't a whole primitive. You can discard lines, but discarding a vertex within a line is... of dubious value.
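(A minimal sketch of the point case; vKeep is a placeholder flag computed in the vertex shader:)

    #version 330 core
    layout(points) in;
    layout(points, max_vertices = 1) out;

    in float vKeep[]; // placeholder: 1.0 = keep, 0.0 = drop, set by the vertex shader

    void main()
    {
        // Emit the point only if the vertex shader flagged it;
        // otherwise the primitive is silently discarded.
        if (vKeep[0] > 0.5) {
            gl_Position = gl_in[0].gl_Position;
            EmitVertex();
            EndPrimitive();
        }
    }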
That being said, your explanation of why you want to do this is of dubious merit. You want to update vertex data with essentially a Boolean that says "render me" or "don't". That means that, every frame, you have to modify your data to say which points should be rendered and which shouldn't.
The simplest and most efficient way to handle this is simply not to render with them. That is, arrange your data so that the only things on the GPU are the points you want to render. Then there's no need to do anything special at all. If you're going to be constantly updating your vertex data, then you're already condemned to dealing with streaming vertex data, so you may as well stream it in a way that makes rendering efficient.
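A minimal sketch of that streaming pattern, assuming buffer orphaning and placeholder names:

    /* Each frame, upload only the points you actually want to draw. Passing
       NULL first "orphans" the old storage so the driver need not stall on
       a buffer that's still in flight. */
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    glBufferData(GL_ARRAY_BUFFER, capacityBytes, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, numVisible * sizeof(Point), visiblePoints);
    glDrawArrays(GL_POINTS, 0, numVisible);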
I am very interested in understanding how multisampling works. I have found a large literature on how to enable or use it, but very little information concerning what it really does in order to achieve an antialiased rendering. What I have found, in many places, is conflicting information that only confused me more.
Please note that I know how to enable and use multisampling (I actually already use it), what I don't know is what kind of data really gets into the multisampled renderbuffers/textures, and how this data is used in the rendering pipeline.
I can understand very well how supersampling works, but multisampling still has some obscure areas that I would like to understand.
Here is what the spec says (OpenGL 4.2):

"Pixel sample values, including color, depth, and stencil values, are stored in this buffer (the multisample buffer). Samples contain separate color values for each fragment color. [...] During multisample rendering the contents of a pixel fragment are changed in two ways. First, each fragment includes a coverage value with SAMPLES bits. [...] Second, each fragment includes SAMPLES depth values and sets of associated data, instead of the single depth value and set of associated data that is maintained in single-sample rendering mode."
So each sample contains a distinct color, coverage bit, and depth. What's the difference from normal supersampling? It seems like "weighted" supersampling to me, where each final pixel value is determined by the coverage of its samples instead of a simple average, but I am very unsure about this. And what about texture coordinates at the sample level?
If I store, say, normals in a floating-point RGB multisampled texture, will I read them back "antialiased" (that is, approaching 0) at the edges of a polygon?
A fragment shader is called once per fragment, unless it uses gl_SampleID, gl_SamplePosition, or an input with the sample storage qualifier. How can a fragment shader be invoked only once per fragment and still produce an antialiased rendering?
OpenGL on Silicon Graphics Systems (http://www-f9.ijs.si/~matevz/docs/007-2392-003/sgi_html/ch09.html#LE68984-PARENT) mentions:

"When you use multisampling and read back color, you get the resolved color value (that is, the average of the samples). When you read back stencil or depth, you typically get back a single sample value rather than the average. This sample value is typically the one closest to the center of the pixel."
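(The resolve described in that quote is what a multisample-to-single-sample blit performs; a minimal sketch with placeholder FBO names:)

    /* Blit from a multisampled FBO to a single-sampled one: color samples are
       averaged, while depth/stencil resolve to a single, implementation-chosen
       sample. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);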
And there's this technical spec (1994) from the OpenGL site, which explains in full detail what is done if MULTISAMPLE_SGIS is enabled: http://opengl.org/registry/specs/SGIS/multisample.txt
See also this related question: How are depth values resolved in OpenGL textures when multisampling?
And the answers to this question, where GL_MULTISAMPLE_ARB is recommended: "Where is GL_MULTISAMPLE defined?" The specs for GL_MULTISAMPLE_ARB (2002) are here: http://www.opengl.org/registry/specs/ARB/multisample.txt
For implementing a physically accurate motion blur by actually rendering at intermediate locations, it seems that to do this correctly I need a special blending function. Additive blending would only work on a black background, and the standard "transparency" function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) may look okay for small numbers of samples, but it is physically inaccurate because samples rendered at the end will contribute more to the resulting color.
The function I need has to produce a color which is a weighted average of the source and destination colors, depending on the number of samples covering a fragment. However, I can generalize this to better account for rendering differences between samples: suppose I am to render a blurred object n times. Treating color as a 3-vector, let D be the color DEST - SRC. I want each render to add D/n to the source color.
Can this be done using the fixed-function pipeline? The glBlendFunc reference is rather cryptic, at least to me. It seems like this either can be done trivially or is impossible. It seems like I would want to set alpha to 1/n. For the behavior I just described, am I in need of a GL_DEST_MINUS_SRC_COLOR option?
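(For reference, the operation as stated, new = SRC + (DEST - SRC)/n = SRC*(1 - 1/n) + DEST*(1/n), maps onto the fixed-function constant blend color, so no GL_DEST_MINUS_SRC_COLOR is needed; a minimal sketch, assuming n is known on the CPU side:)

    /* Each draw writes SRC*(1 - 1/n) + DEST*(1/n), i.e. SRC + (DEST - SRC)/n,
       without touching per-fragment alpha. */
    glEnable(GL_BLEND);
    glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / n);
    glBlendFunc(GL_ONE_MINUS_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);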
I also have a related question: at which stage does this blending operation occur, before or after the fragment shader program? Would I be able to access the source and destination colors in a fragment shader?
I know that one way to accomplish what I want is by using an accumulation buffer. I do not want to do this because it is a waste of memory and fillrate.
The solution I ended up using to implement my effect is a combination of additive blending and a render target that I access as a texture from the fragment shader.