Reusing textures in WebGL while rendering - concurrency

I have a long pipeline of shaders in WebGL, where each shader reads from an input texture, renders to another texture, and by the end the last texture contains the desired result.
Can I reuse textures in the pipeline, as if it were running synchronously?
// ... init texA to contain input ...
shader1.samplingFrom(texA).renderTo(texB);
shader2.samplingFrom(texB).renderTo(texA);
shader3.samplingFrom(texA).renderTo(texB);
return readPixels(texB); // Always the same answer?
For that matter, can I even rely on a shader finishing before the next stage starts?
// ... init texA to contain input ...
shader1.samplingFrom(texA).renderTo(texB);
shader2.samplingFrom(texB).renderTo(texC);
shader3.samplingFrom(texC).renderTo(texD);
return readPixels(texD); // Always the same answer?
I initially thought that I could reuse textures, but I've noticed odd behavior that goes away if I stall the pipeline (implying a race condition of some kind), so now I'm not sure what guarantees are provided.

Yes, you can reuse textures. What you can't do in OpenGL/WebGL is read from and render to the same texture in the same draw call.
Also, shaders run one after the other, not in parallel, as far as the API's guarantees are concerned: each draw call is required to produce the same results as if it had completed before the next one started. A single shader might do some things internally in parallel, but the result is required to be the same as if it had run serially.
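To make that concrete, here is a rough sketch of how the question's pseudocode maps onto real WebGL calls, assuming texA and texB are already-allocated RGBA textures of the same size, and drawPass, width, and height are hypothetical helpers/values that don't come from the question:
function makeFBO(gl, tex) {
  // Wrap a texture in a framebuffer so it can be used as a render target.
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return fbo;
}
const fboA = makeFBO(gl, texA);
const fboB = makeFBO(gl, texB);

// Pass 1: sample texA, render into texB.
gl.bindFramebuffer(gl.FRAMEBUFFER, fboB);
gl.bindTexture(gl.TEXTURE_2D, texA);
drawPass(gl, shader1);

// Pass 2: sample texB, render into texA. No gl.finish() or fence is needed;
// WebGL commands behave as if they execute in the order they were issued.
gl.bindFramebuffer(gl.FRAMEBUFFER, fboA);
gl.bindTexture(gl.TEXTURE_2D, texB);
drawPass(gl, shader2);

// Pass 3: sample texA, render into texB again.
gl.bindFramebuffer(gl.FRAMEBUFFER, fboB);
gl.bindTexture(gl.TEXTURE_2D, texA);
drawPass(gl, shader3);

// readPixels reads from the currently bound framebuffer (fboB here),
// so it sees the result of pass 3.
const result = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, result);
The key point is that each draw call writes to one texture while sampling the other, so no call ever reads and writes the same texture.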

Related

unity3d, multiple render targets - different behavior in Direct3D/OpenGL

I'm writing a shader for Unity3D. The shader uses multiple render targets to render a post-processing effect.
However, I've run into an interesting issue.
When Unity3D runs in Direct3D mode, by default all standard shaders write data only into the first color buffer (i.e. the one with index 0). That is, if I attach 3 color buffers to a camera and call Camera.Render, the color buffer with index 0 will contain the rendered scene, and all the other buffers will remain untouched unless some shader specifically writes into them. My shader relies on that behavior (I use the buffers with indexes 1 and 2 to accumulate data needed for the post-process effect).
However, in OpenGL mode the standard Unity3D shaders write into ALL color buffers at once. That is, if I attach multiple render buffers to a camera and call Camera.Render, all 3 buffers will contain a copy of the rendered scene.
That breaks my shader in OpenGL mode.
How can I fix that? I need to render the whole scene in one go, and only objects that have a specific shader should modify the additional color buffers.
I need to render the scene in one go because using layer masks causes Unity to recalculate projector shadows for ALL lights, and I need the shadows to be correct.
Advice?
Sadly, it turned out that "not writing into one of the render targets" is undocumented behavior in OpenGL. The standard Unity shader, when compiled for the forward rendering path, produces a gl_FragData[0] = ...; assignment and writes into only one buffer, which triggers that undocumented behavior and causes the mess.
To fix the problem, I would need to make Unity write data explicitly into the additional render targets in its standard shaders. Unfortunately, this cannot be done, because there is no "entry point" for "hooking" the standard shader and writing additional data into the other color buffers. The closest thing to that is the "finalcolor" modifier, but it does not actually allow writing into additional buffers from the Cg shader (that would require the additional data to come from the fragment shader, which is inaccessible from a surface shader); it is only possible to modify one color.
I decided to rewrite a portion of the shader (so it won't trigger the undocumented behavior in OpenGL) and gave up on having Unity shadowmap support in the effect. As far as I know, there are no other options short of modifying the Unity engine (which requires "special arrangements" and source code access) or replacing the entire lighting system with my own.

Is it possible to write a bunch of pixels in gl_FragColor?

Is anyone familiar with some sort of OpenGL magic for writing a bunch of pixels from the fragment shader instead of only one? This issue is especially pressing for OpenGL ES, given the limitations of mobile platforms and the need to do things in a more performance-conscious way there.
Are there any conclusions or ideas out there?
P.S. It's known that, due to how the GPU architecture is organised, the shader runs in parallel for each fragment. But maybe there are techniques to raise that from one pixel to a group of pixels, or to implement your own glTexture organisation. A lot of work could be done faster this way on the GPU.
OpenGL does not support writing to multiple fragments (meaning ones with distinct coordinates) from a shader, for good reason: it would obstruct the GPU's ability to compute each fragment in parallel, which is its greatest strength.
The structure of shaders may appear weird at first because an entire program is written for only one vertex or fragment. You might wonder why you can't "see" what is going on in neighboring parts.
The reason is that an instance of the shader program runs for each output fragment, on each core/thread simultaneously, so they must all be independent of one another.
Parallel, independent processing allows GPUs to render quickly, because the total time to process a batch of pixels is only as long as that of the single most intensive pixel.
Adding outputs with differing coordinates greatly complicates this.
Suppose a single fragment were written to by two or more instances of a shader.
To ensure correct results, the GPU could assign one instance to be the authority and ignore the others (but how would it know which one will write?),
or it could add a mutex and have one instance wait around for the other to finish,
or it could allow a race condition in which whichever instance finishes first wins.
Any of these would immensely slow down the process, make the shaders ugly, and introduce incorrect and unpredictable behaviour.
Well, firstly, you can calculate multiple outputs from a single fragment shader in OpenGL 3 and up. A framebuffer object can have more than one RGBA surface (renderbuffer objects or textures) attached, and the shader can generate an RGBA value for each of them by using gl_FragData[n] instead of gl_FragColor. See chapter 8 of the OpenGL SuperBible, 5th edition.
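In WebGL terms, a minimal sketch of the same idea looks something like this, assuming a WebGL 1 context with the WEBGL_draw_buffers extension available and two already-allocated textures tex0 and tex1 (names chosen here for illustration):
const ext = gl.getExtension('WEBGL_draw_buffers');

// Attach both textures to one framebuffer object as separate color buffers.
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tex0, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tex1, 0);
ext.drawBuffersWEBGL([ext.COLOR_ATTACHMENT0_WEBGL, ext.COLOR_ATTACHMENT1_WEBGL]);

// The fragment shader (passed to the program as a source string as usual)
// writes one value per attachment, all at the same x,y coordinate:
const fragmentSource = `
  #extension GL_EXT_draw_buffers : require
  precision mediump float;
  void main() {
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0); // lands in tex0
    gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0); // lands in tex1
  }`;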
However, the multiple outputs can only be generated for the same X,Y pixel coordinates in each buffer. This is for the same reason that an older style fragment shader can only generate one output, and can't change gl_FragCoord. OpenGL guarantees that in rendering any primitive, one and only one fragment shader will write to any X,Y pixel in the destination framebuffer(s).
If a fragment shader could generate multiple pixel values at different X,Y coords, it might try to write to the same destination pixel as another execution of the same fragment shader. Same if the fragment shader could change the pixel X or Y. This is the classic multiple threads trying to update shared memory problem.
One way to solve it would be to say "if this happens, the results are unpredictable", which sucks from the programmer's point of view because it's completely out of your control. Or fragment shaders would have to lock the pixels they are updating, which would make GPUs far more complicated and expensive, and the performance would suck. Or fragment shaders would execute in some defined order (e.g. top left to bottom right) instead of in parallel, which wouldn't need locks, but the performance would suck even more.

Shader framebuffer readback

I was wondering if there is support in the newer shader models to read back a pixel value from the target framebuffer. I assume that this is already done in later (non-programmable) stages of the drawing pipeline, which made me hope that this feature might have been added to the programmable pipeline.
I am aware that it is possible to draw to a texture bound framebuffer and then send this texture to the shader, I was just hoping for a more elegant way to achieve the same functionality.
As Andrew notes, framebuffer access is logically a separate stage from the fragment shader, so reading the framebuffer in the fragment shader is impossible. The reason for this (to answer Andrew's question) is a combination of performance and the ordering requirements of the graphics pipeline. The way the rendering pipeline is defined, framebuffer blending operations MUST occur in the same order as the triangles/primitives that went into the beginning of the pipeline. The fragment shaders, on the other hand, can happen in any order. So by having them be separate stages, the GPU is free to run fragment shaders as fast as it can, as their inputs become available, without having to synchronize between them. As long as it maintains enough buffer space to hold on to the outputs of the fragment shaders, so that they can be accumulated and allow the framebuffer blends and writes to occur in order, all is well, as the results of any given fragment shader are not visible until after the blending stage.
If there was a way for the fragment shader to read the framebuffer, it would require some sort of synchronization to ensure that those reads happen in order, thus greatly slowing things down.
No. As you mention, rendering to a texture is the way to achieve that functionality.
If you take a look at a block diagram of a GPU pipeline, you'll see that the blending stage - which is what combines fragment shader output with the framebuffer - is separate from the fragment shader and is fixed-function.
I'm not a GPU designer, so I can only speculate about the reasons for this. Presumably it is to keep framebuffer access fast and insulate the fragment shader stage from the framebuffer so that it can be better parallelised. There are probably also issues regarding multi-sampling, and so on.
(Not to mention that fixed-function blending is "good enough" in most cases.)
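A related workaround in WebGL/OpenGL terms, sketched below with assumed names (it is not something this answer spells out), is to snapshot the current framebuffer into a texture with copyTexImage2D and sample that texture in later draw calls, which stands in for "reading the framebuffer":
// Allocate a texture to receive a copy of the framebuffer contents.
// width and height are assumed to match the bound framebuffer.
const copyTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, copyTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// ... render the scene as usual ...

// Copy what has been rendered so far from the bound framebuffer into copyTex.
gl.copyTexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, width, height, 0);

// Later draw calls can now bind copyTex as an ordinary sampler input.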
Actually, I think this is now doable with Direct3D 11 SM 5.0 (I didn't test it, though).
You can bind a UAV to a pixel shader 5.0, allowing read and write operations on it, using the method OMSetRenderTargetsAndUnorderedAccessViews.
In that case the backbuffer of the swap chain you render into has to be created with the flag DXGI_USAGE_UNORDERED_ACCESS (I guess).
This is used in the DXSDK OIT11 sample.
It is possible to read back the contents of the framebuffer in the fragment shader with the shader framebuffer fetch extension (EXT_shader_framebuffer_fetch). The support can be added to the GPU with some performance loss. In fact, these days I'm working on adding support for this extension to the OpenGL ES 2.0 driver of a well-known GPU brand in the consumer electronics market.
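For reference, a fragment shader using that kind of extension looks roughly like the sketch below (OpenGL ES 2.0 GLSL shown as a source string; this is a generic illustration with a made-up uniform, not the driver work described above, and standard WebGL does not expose the extension):
const blendFragmentSource = `
  #extension GL_EXT_shader_framebuffer_fetch : require
  precision mediump float;
  uniform vec4 u_tint; // illustrative uniform, not part of the extension
  void main() {
    vec4 dst = gl_LastFragData[0];        // current framebuffer color at this pixel
    gl_FragColor = mix(dst, u_tint, 0.5); // custom programmable "blend"
  }`;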
You can draw to a texture TEX (using a render target view) and then bind that as an input to another shader (using a shader resource view). TEX is then a pseudo-framebuffer.

DirectX post-processing shader

I have a simple application in which I need to let the user select a shader (.fx HLSL or assembly file, possibly with multiple passes, but consisting only of pixel shaders) and preview it.
The application runs, the list of shaders comes up, and a button launches the "preview window."
From this preview window (which has a DirectX viewport in it), the user selects an image, and the shader is run on that image and displayed. Only one frame needs to be rendered (it is not real-time).
I have a vertex/pixel shader combination set up that takes a quad and renders it to the screen, textured with the chosen image. This works perfectly.
I need to then run another effect, purely pixel shader, on the output from the first effect, and display the final image (post-processed) to the screen. This doesn't work at all.
I've tried for the past few days to get it working, but for no apparent reason, the identical code blocks used to render each effect only render the first. I can add the second shader file as a second pass in the first shader file and it runs perfectly (although that completely defeats my goal of previewing user-created shaders). When I try to use a second effect (which loads and compiles just fine), it does nothing.
I've taken the results of the first shader (with GetRenderTargetData) and placed them in a texture & surface (destTex and destSur), then set that texture as the input for the second pass (using dev->SetTexture and later effect->SetTexture("thisframe", destTex)).
All calls succeed, effects compile, textures load, quads are drawn, but the effect is not visible.
I suspected at first the device (created with software vertex processing) was causing the issue, but that doesn't seem to be the case (I tried with hardware and mixed).
Additionally, with both a HAL and a REF device (not a problem, since the app isn't real-time anyway), that second shader isn't visible.
Everything is written in C++ for Direct3D 9.
Try clearing the depth-stencil buffer after each time you render the quad.
First, create a texture and render the first shader directly into that texture. Then render the second shader, with that texture as input, to the backbuffer.
There must be some kind of vertex input and vertex processing (either fixed-function or shader) in order for the pixel shader to be run. Are you supplying the vertex shader, and if so are you sure it does what the pixel shader expects? What does your draw call look like?
It's probably worth looking at a PIX trace of your app to see what the device state is when trying to use the user effect.

Self-Referencing Renderbuffers in OpenGL

I have some OpenGL code that behaves inconsistently across different
hardware. I've got some code that:
Creates a render buffer and binds a texture to its color buffer (Texture A)
Sets this render buffer as active, and adjusts the viewport, etc
Activates a pixel shader (gaussian blur, in this instance).
Draws a quad to full screen, with texture A on it.
Unbinds the renderbuffer, etc.
On my development machine this works fine, and has the intended
effect of blurring the texture "in place", however on other hardware
this does not seem to work.
I've gotten it down to two possibilities.
A) Making a renderbuffer render to itself is not supposed to work, and
only works on my development machine due to some sort of fluke.
Or
B) This approach should work, but something else is going wrong.
Any ideas? Honestly I have had a hard time finding specifics about this issue.
A) is the correct answer. Rendering into the same buffer while reading from it is undefined. It might work, it might not - which is exactly what is happening.
In OpenGL's case, the framebuffer_object extension has a section, "4.4.3 Rendering When an Image of a Bound Texture Object is Also Attached to the Framebuffer", which tells you what happens (basically, it's undefined). In Direct3D 9, the debug runtime complains loudly if you use that setup (but it might work depending on the hardware/driver). In D3D10 the runtime always unbinds the target that is used as the destination, I think.
Why is this undefined? One of the reasons GPUs are so fast is that they can make a lot of assumptions. For example, they can assume that the units that fetch pixels do not need to communicate with the units that write pixels. So a surface can be read, N cycles later the read is completed, N cycles later the pixel shader ends its execution, then the result is put into some output merge buffers on the GPU, and finally at some point it is written to memory. On top of that, GPUs rasterize in an "undefined" order (one GPU might rasterize in rows, another in some cache-friendly order, another in a totally different order), so you don't know which portions of the surface will be written to first.
So what you should do is create several buffers. In the blur/glow case, two are usually enough: render into the first, then read & blur that while writing into the second. Repeat this process in a ping-pong way if needed.
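Something like this, as a rough WebGL sketch, assuming two framebuffer/texture pairs fbos[i]/texs[i] have already been created and drawBlurPass is a hypothetical helper that binds the blur program, sets its sampler to the given texture, and draws a fullscreen quad:
let src = 0, dst = 1;
for (let i = 0; i < blurIterations; i++) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbos[dst]); // write into texs[dst]
  drawBlurPass(gl, texs[src]);                   // read from texs[src]
  [src, dst] = [dst, src];                       // swap roles for the next pass
}
// After the loop, texs[src] holds the most recently blurred result.
No pass ever samples the texture it is currently rendering into, so the undefined-behavior case above never comes up.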
In some special cases, even the backbuffer might be enough. You simply don't do a glClear, and what you have drawn previously is still there. The caveat is, of course, that you can't really read from the backbuffer. But for effects like fading in and out, this works.
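One WebGL-specific caveat with that backbuffer trick: by default the canvas drawing buffer may be cleared after it is composited to the page, so its contents only reliably persist across frames if the context is created with preserveDrawingBuffer, for example:
// Keep the WebGL backbuffer contents between frames (at some performance cost).
const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });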