Pixelshader: Feedback on whether anything rendered? - glsl

Is there any way to get feedback from a pixel shader on whether a pixel actually rendered (as opposed to being blocked by zbuffer or stencil buffer)? I'm using GLSL.
I'm trying to determine if a rendered object is visible at all to the camera. Like if I was doing it in pure software, I'd set a boolean false, and turn it true if any pixel actually passed the z and stencil tests.
Any way to do this, via trickery or otherwise?

A fragment shader cannot ask questions of other fragments (except in very isolated circumstances). Nor can a fragment shader peer into the future and ask questions of fragments that have yet to be generated by the rasterizer. As such, a fragment shader cannot know if some other fragment in the same drawing command passed various tests.
Your application can ask these questions via an occlusion query. You can get a report on whether any samples generated by the drawing commands in the query scope passed the depth and stencil tests. You can even get (an estimate of) the number of samples which passed.
Of course, getting the information is one thing. Using it in a performance-friendly way is another. After all, GPU commands are executed asynchronously. So the answer to this question may not be known until many milliseconds after you issued the commands. And you'd probably prefer not to have the CPU sitting there while the GPU processes stuff.
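For concreteness, a minimal sketch of such a query (assuming a GL 3.3+ context; the draw call and indexCount are placeholders for whatever you render):

    /* Occlusion query around a draw call (GL 3.3+); error handling omitted. */
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);  /* the object to test */
    glEndQuery(GL_ANY_SAMPLES_PASSED);

    /* ...ideally do a frame or two of other work before asking for the result... */

    GLuint anyPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anyPassed);  /* stalls if the result isn't ready yet */
    if (anyPassed) {
        /* at least one sample survived the depth and stencil tests */
    }

You can poll GL_QUERY_RESULT_AVAILABLE instead of asking for GL_QUERY_RESULT directly, which lets you skip the stall and reuse last frame's answer.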

Related

Access different Fragment in Fragmentshader OpenGL

Can I access and change the output values of another fragment at a certain location in the fragment shader?
For example, in main() I process everything as usual and output the color with some value. But in addition to that, I also want the fragment at position vec3(5,3,6) (in world coordinates) to have the same color.
I already did some research on the web on this. The OpenGL site says the fragment shader has one fragment as input and one fragment as output, which doesn't sound very promising.
I also know that all fragments are processed in parallel. But maybe it is possible to say: if the fragment at this position has not been processed yet, write this color to it and mark it as already processed.
Maybe someone can explain whether this is possible somehow and, if not, why it is not a good idea. My best guess is that building this logic into the shader would have a very bad effect on overall performance.
Maybe someone can explain whether this is possible somehow and, if not, why it is not a good idea.
It's not a question of bad idea vs. good idea. It's simply not possible.
The closest you can get to this functionality is ARB_fragment_shader_interlock. Through its interlock and ordering guarantees, it allows limited interoperation. And that limitation is... it only allows interoperation for fragments that cover the same pixel/sample.
So even this functionality does not allow you to write to some other pixel.
The absolute best you can do is use SSBOs and atomic counters to have fragment shaders write what color values and "world coordinates" they would like to write to, then have a second process execute that buffer as either a rendering command or a compute shader to actually write that data.
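A rough sketch of that idea in GLSL (needs 4.30+ for SSBOs; PendingWrite, pendingWrites and the fixed example position are made-up names for illustration):

    // Fragment shader: record "deferred writes" into an SSBO instead of writing other pixels.
    #version 430
    layout(location = 0) out vec4 fragColor;

    struct PendingWrite { vec4 color; vec4 worldPos; };

    layout(std430, binding = 0) buffer PendingWrites {
        uint count;
        PendingWrite writes[];
    } pendingWrites;

    void main()
    {
        vec4 color = vec4(1.0);   // normal shading result
        fragColor = color;

        // Append the write we would like to happen somewhere else.
        // (A real implementation would clamp slot against the buffer size.)
        uint slot = atomicAdd(pendingWrites.count, 1u);
        pendingWrites.writes[slot].color = color;
        pendingWrites.writes[slot].worldPos = vec4(5.0, 3.0, 6.0, 1.0);
    }

A second pass, either a compute shader or a point-drawing command over the recorded entries, then projects each worldPos and performs the actual writes.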
As already pointed out in Nicol's answer, you can't write to additional fragments of a framebuffer surface in the fragment shader.
The description of your use case is not clear enough to tell what might work best. In the interest of brainstorming, the most direct approach that comes to mind is that you don't use a framebuffer draw surface at all, but output to an image instead.
If you bind a texture as an image, you can write to it in the fragment shader using the imageStore() built-in function. This function takes coordinates as one of the argument, so you can write to any pixel you want, as well as write multiple pixels from the same shader invocation.
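A minimal sketch of that route (requires GL 4.2+ / ARB_shader_image_load_store; the binding point and the extra pixel coordinate are arbitrary examples):

    // Fragment shader: write arbitrary pixels of an image (GLSL 4.20+).
    #version 420
    layout(binding = 0, rgba8) writeonly uniform image2D destImage;  // bound via glBindImageTexture
    layout(location = 0) out vec4 fragColor;

    void main()
    {
        vec4 color = vec4(1.0, 0.5, 0.25, 1.0);
        fragColor = color;

        // Write this fragment's own pixel...
        imageStore(destImage, ivec2(gl_FragCoord.xy), color);
        // ...and any other pixel we like, e.g. a fixed extra position.
        imageStore(destImage, ivec2(120, 75), color);
    }

Remember to call glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT) before reading the image back, since image writes are not automatically coherent.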
Depending on what exactly you want to achieve, I could also imagine a hybrid approach, where your primary rendering still goes to a framebuffer, but you write additional pixel values to an image at the desired positions. Then, in a second rendering pass, you can combine the content of the image with the primary rendering. The combination could be done with blending if the math/logic is simple enough. If you need a more complex combination, you can use a texture as the framebuffer attachment of the initial pass, and then use the result of the rendering and the extra image as two inputs for the fragment shader of the combination pass.

Reading FBO depth attachment whilst depth testing

I'm working with a deferred rendering engine using OpenGL 3.3. I have an FBO set up as my G-buffer with a texture attached as the depth component.
In my lighting pass I need to depth test (with writes disabled) to cull unnecessary pixels. However, I'm also writing code to reconstruct world-space position coordinates, which will need access to the depth buffer as well.
Is it legal in OpenGL 3.3 to bind a depth attachment to a texture unit and sample it while also using it for depth testing in the same pass?
I can't find anything specifically about it in the docs but my gut tells me that using the same buffer/texture for two different purposes will produce undefined behaviour. Does anybody know for sure? I have a limited set of hardware to test on and don't want to make false assumptions about what works.
At the very least this creates a situation where memory coherency cannot be guaranteed (pre-GL4, coherency is something the traditional pipeline simply assumes at every stage, and you have no standardized way to control it).
The driver just might cache this memory in an undesirable way since this behavior is undefined. You would like to think that an appropriate combination of writemask and sampling would be a strong hint not to do that, but that is all up to whoever designed the driver and your results will tend to vary by hardware vendor, platform and hardware generation.
This scenario is a use-case for things like NV's texture barrier extension, but that is vendor specific and still does not tackle the problem entirely. If you want to do this sort of thing portably, your best bet is to promote the engine to GL4 and use standardized features for early fragment tests, barriers, etc.
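For reference, the GL4-class pieces mentioned here look like this; this is just a sketch of the features themselves, not a claim that they make this exact setup well-defined on every driver:

    // In the lighting-pass fragment shader (GLSL 4.20+): force depth/stencil tests
    // to run before the shader executes, so fragments that fail never run it.
    layout(early_fragment_tests) in;

On the API side, glTextureBarrier() (core in GL 4.5 / ARB_texture_barrier, or glTextureBarrierNV with the NV extension) is the call that makes framebuffer writes issued before the barrier visible to texture fetches issued after it.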
Does your composite pass really need a depth buffer in the first place though? It sounds like you want to re-construct per-pixel position during lighting from the stored depth buffer. That's entirely possible in a framebuffer with no depth attachment at all.
Your G-Buffers will already be filled at this point, and after that you no longer need to do any fragment tests. The one fragment that passed all previous tests is what's eventually written to the G-Buffer and there's no reason to apply any additional tests to it when it comes time to do lighting.
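For reference, reconstructing position from the stored depth in the lighting pass can look roughly like this (a sketch; the uniform names and the inverse view-projection matrix you upload are assumptions):

    // Lighting-pass fragment shader (GLSL 3.30): world position from a depth texture.
    #version 330 core
    uniform sampler2D gDepth;      // the G-buffer depth texture, bound as a regular sampler
    uniform mat4 invViewProj;      // inverse of (projection * view), supplied by the application
    uniform vec2 screenSize;

    out vec4 fragColor;

    vec3 worldPosFromDepth(vec2 uv)
    {
        float depth = texture(gDepth, uv).r;                  // window-space depth in [0,1]
        vec4 ndc   = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 world = invViewProj * ndc;
        return world.xyz / world.w;
    }

    void main()
    {
        vec2 uv = gl_FragCoord.xy / screenSize;
        vec3 worldPos = worldPosFromDepth(uv);
        fragColor = vec4(worldPos, 1.0);                      // feed worldPos into the lighting math
    }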

Is it possible to write a bunch of pixels in gl_FragColor?

Is anyone familiar with some sort of OpenGL magic for calculating a whole bunch of pixels in the fragment shader instead of only one? This issue is especially pressing for OpenGL ES, given the weaknesses of mobile platforms and the need to do things in a more performance-friendly way there.
Are there any conclusions or ideas out there?
P.S. I know that, because of how GPU architecture is organised, the shader runs in parallel for each fragment. But maybe there are techniques to go from processing one pixel at a time to processing a group of them, or to implement your own texture organisation. A lot of work could be done faster that way on the GPU.
OpenGL does not support writing to multiple fragments (meaning fragments with distinct coordinates) from a shader, for good reason: it would obstruct the GPU's ability to compute each fragment in parallel, which is its greatest strength.
The structure of shaders may appear weird at first because an entire program is written for only one vertex or fragment. You might wonder why you can't "see" what is going on in neighboring parts.
The reason is an instance of the shader program runs for each output fragment, on each core/thread simultaneously, so they must all be independent of one another.
Parallel, independent, processing allows GPUs to render quickly, because the total time to process a batch of pixels is only as long as the single most intensive pixel.
Adding outputs with differing coordinates greatly complicates this.
Suppose a single fragment was written to by two or more instances of a shader.
To ensure correct results, the GPU can either assign one to be an authority and ignore the other (how does it know which will write?)
Or you can add a mutex, and have one wait around for the other to finish.
The other option is to allow a race condition regarding whichever one finishes first.
Either way, this would immensely slow down the process, make the shaders ugly, and introduce incorrect and unpredictable behaviour.
Well, firstly, you can calculate multiple outputs from a single fragment shader in OpenGL 3 and up. A framebuffer object can have more than one RGBA surface (renderbuffer objects or textures) attached, and the shader can generate an RGBA value for each of them by using gl_FragData[n] instead of gl_FragColor. See chapter 8 of the 5th edition OpenGL SuperBible.
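A minimal sketch of that, using the old-style gl_FragData built-in this answer mentions:

    // Fragment shader writing to two color attachments via gl_FragData.
    #version 120

    void main()
    {
        vec4 base = vec4(1.0, 0.0, 0.0, 1.0);
        gl_FragData[0] = base;        // goes to the buffer mapped to index 0
        gl_FragData[1] = base.bgra;   // goes to the buffer mapped to index 1
    }

On the application side, glDrawBuffers() decides which FBO color attachments those indices map to, e.g. GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1.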
However, the multiple outputs can only be generated for the same X,Y pixel coordinates in each buffer. This is for the same reason that an older style fragment shader can only generate one output, and can't change gl_FragCoord. OpenGL guarantees that in rendering any primitive, one and only one fragment shader will write to any X,Y pixel in the destination framebuffer(s).
If a fragment shader could generate multiple pixel values at different X,Y coords, it might try to write to the same destination pixel as another execution of the same fragment shader. Same if the fragment shader could change the pixel X or Y. This is the classic multiple threads trying to update shared memory problem.
One way to solve it would be to say "if this happens, the results are unpredictable" which sucks from the programmer point of view because it's completely out of your control. Or fragment shaders would have to lock the pixels they are updating, which would make GPUs far more complicated and expensive, and the performance would suck. Or fragment shaders would execute in some defined order (eg top left to bottom right) instead of in parallel, which wouldn't need locks but the performance would suck even more.

Is it possible to reuse glsl vertex shader output later?

I have a huge mesh (100k triangles) that needs to be drawn a few times and blended together every frame. Is it possible to reuse the vertex shader output of the first pass over the mesh, and skip the vertex stage on later passes? I'm hoping to save some cost on the vertex pipeline and rasterization.
I'm targeting OpenGL 3.0, so I can use features like transform feedback.
I'll answer your basic question first, then answer your real question.
Yes, you can store the output of vertex transformation for later use. This is called Transform Feedback. It requires OpenGL 3.x-class hardware or better (aka: DX10-hardware).
The way it works is in two stages. First, you have to set your program up to have feedback-based varyings. You do this with glTransformFeedbackVaryings. This must be done before linking the program, in a similar way to things like glBindAttribLocation.
Once that's done, you need to bind buffers (given how you set up your transform feedback varyings) to GL_TRANSFORM_FEEDBACK_BUFFER with glBindBufferRange, thus setting up which buffers the data are written into. Then you start your feedback operation with glBeginTransformFeedback and proceed as normal. You can use a primitive query object to get the number of primitives written (so that you can draw it later with glDrawArrays), or if you have 4.x-class hardware (or AMD 3.x hardware, all of which supports ARB_transform_feedback2), you can render without querying the number of primitives. That would save time.
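A rough sketch of that setup against the GL 3.0 API (the varying name "outPosition" and the buffer/count variables are assumptions):

    /* Before linking: declare which vertex shader output gets captured. */
    const GLchar* varyings[] = { "outPosition" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    /* First pass: draw normally while capturing the transformed vertices. */
    glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackBuffer, 0, bufferSize);

    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

    GLuint primitivesWritten = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &primitivesWritten);

    /* Later passes: draw primitivesWritten * 3 vertices from feedbackBuffer
       with a pass-through vertex shader. */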
Now for your actual question: it's probably not going to buy you any real performance.
You're drawing a big, mostly static mesh, and a mesh like that doesn't really get much transformation. Typically you have a matrix multiplication or two, possibly with normals (though if you're rendering for shadow maps, you don't even have that). That's it.
Odds are very good that if you shove 100,000 vertices down the GPU with such a simple shader, you've probably saturated the GPU's ability to render them all. You'll likely bottleneck on primitive assembly/setup, and that's not getting any faster.
So you're probably not going to get much out of this. Feedback is generally used for either generating triangle data for later use (effectively pseudo-compute shaders), or for preserving the results from complex transformations like matrix palette skinning with dual-quaternions and so forth. A simple matrix multiply-and-go will barely be a blip on the radar.
You can try it if you like. But odds are it won't make much difference. Generally, the best solution is to employ some form of deferred rendering, so that you only have to render an object once + X for every shadow it casts (where X is determined by the shadow mapping algorithm). And since shadow maps require different transforms, you wouldn't gain anything from feedback there anyway.

Self-Referencing Renderbuffers in OpenGL

I have some OpenGL code that behaves inconsistently across different
hardware. I've got some code that:
Creates a render buffer and binds a texture to its color buffer (Texture A)
Sets this render buffer as active, and adjusts the viewport, etc
Activates a pixel shader (gaussian blur, in this instance).
Draws a quad to full screen, with texture A on it.
Unbinds the renderbuffer, etc.
On my development machine this works fine, and has the intended effect of blurring the texture "in place"; however, on other hardware this does not seem to work.
I've gotten it down to two possibilities.
A) Making a renderbuffer render to itself is not supposed to work, and
only works on my development machine due to some sort of fluke.
Or
B) This approach should work, but something else is going wrong.
Any ideas? Honestly I have had a hard time finding specifics about this issue.
A) is the correct answer. Rendering into the same buffer while reading from it is undefined. It might work, it might not - which is exactly what is happening.
In OpenGL's case, the framebuffer_object extension has a section, "4.4.3 Rendering When an Image of a Bound Texture Object is Also Attached to the Framebuffer", which tells you what happens (basically, it's undefined). In Direct3D9, the debug runtime complains loudly if you use that setup (but it might work depending on hardware/driver). In D3D10 the runtime always unbinds the target that is used as the destination, I think.
Why is this undefined? One of the reasons GPUs are so fast is that they can make a lot of assumptions. For example, they can assume that units that fetch pixels do not need to communicate with units that write pixels. So a surface can be read, N cycles later the read is completed, N cycles later the pixel shader ends its execution, then the result is put into some output merge buffers on the GPU, and finally at some point it is written to memory. On top of that, GPUs rasterize in "undefined" order (one GPU might rasterize in rows, another in some cache-friendly order, another in a totally different order), so you don't know which portions of the surface will be written to first.
So what you should do is create several buffers. In blur/glow case, two is usually enough - render into first, then read & blur that while writing into second. Repeat this process if needed in ping-pong way.
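A sketch of the usual ping-pong loop (fbo[], tex[], blurProgram, numBlurPasses and drawFullScreenQuad() are hypothetical names; FBO and texture creation omitted):

    GLuint src = 0, dst = 1;
    glUseProgram(blurProgram);
    for (int pass = 0; pass < numBlurPasses; ++pass) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   /* write into the other buffer */
        glBindTexture(GL_TEXTURE_2D, tex[src]);        /* read from the previous result */
        drawFullScreenQuad();
        GLuint tmp = src; src = dst; dst = tmp;        /* swap roles for the next pass */
    }
    /* tex[src] now holds the final blurred result */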
In some special cases, even the backbuffer might be enough. You simply don't do a glClear, and what you have drawn previously is still there. The caveat is, of course, that you can't really read from the backbuffer. But for effects like fading in and out, this works.