I have some OpenGL code that behaves inconsistently across different
hardware. The code:
Creates a render buffer and binds a texture to its color buffer (Texture A)
Sets this render buffer as active, and adjusts the viewport, etc
Activates a pixel shader (a Gaussian blur, in this instance).
Draws a quad to full screen, with texture A on it.
Unbinds the renderbuffer, etc.
On my development machine this works fine, and has the intended
effect of blurring the texture "in place", however on other hardware
this does not seem to work.
I've gotten it down to two possibilities.
A) Making a renderbuffer render to itself is not supposed to work, and
only works on my development machine due to some sort of fluke.
Or
B) This approach should work, but something else is going wrong.
Any ideas? Honestly I have had a hard time finding specifics about this issue.
A) is the correct answer. Rendering into the same buffer while reading from it is undefined. It might work, it might not - which is exactly what is happening.
In OpenGL's case, the framebuffer_object extension has a section, "4.4.3 Rendering When an Image of a Bound Texture Object is Also Attached to the Framebuffer", which spells out what happens (basically, it's undefined). In Direct3D 9, the debug runtime complains loudly if you use that setup (but it might work depending on hardware/driver). In D3D10, I think the runtime always unbinds the target that is being used as the destination.
Why is this undefined? One of the reasons GPUs are so fast is that they can make a lot of assumptions. For example, they can assume that units that fetch pixels do not need to communicate with units that write pixels. So a surface is read, N cycles later the read completes, N cycles later the pixel shader ends its execution, then the result is put into some output merge buffers on the GPU, and finally at some point it is written to memory. On top of that, GPUs rasterize in an "undefined" order (one GPU might rasterize in rows, another in some cache-friendly order, another in a totally different order), so you don't know which portions of the surface will be written to first.
So what you should do is create several buffers. In the blur/glow case, two are usually enough: render into the first, then read and blur that while writing into the second. Repeat this process if needed, ping-pong style.
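As a rough sketch (assuming two FBO/texture pairs fboA/texA and fboB/texB are already created, plus a drawFullscreenQuad() helper; all names are illustrative):
// Pass 1: sample texA, write into fboB (which has texB as its color attachment).
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
glUseProgram(blurProgram);              // e.g. the horizontal blur pass
glBindTexture(GL_TEXTURE_2D, texA);
drawFullscreenQuad();

// Pass 2: sample texB, write into fboA - never the texture currently being read.
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glBindTexture(GL_TEXTURE_2D, texB);
drawFullscreenQuad();                   // e.g. the vertical blur pass

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer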
In some special cases, even the backbuffer might be enough. You simply don't do a glClear, and what you have drawn previously is still there. The caveat is, of course, that you can't really read from the backbuffer. But for effects like fading in and out, this works.
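For instance, a crude fade-to-black can be sketched like this (fixed-function style, and assuming the previous frame's contents survive the buffer swap, which not every setup guarantees):
// No glClear here - keep whatever was drawn last frame.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 0.0f, 0.05f);     // mostly transparent black, darkens a little each frame
drawFullscreenQuad();                    // placeholder helper
glDisable(GL_BLEND);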
Related
I want to partially render a 3D scene, by this I mean I want to render some pixels and skip others. There are many non-realtime renderers that allow selecting a section that you want to render.
For example: a fully rendered image (all pixels rendered) vs. a partially rendered one.
I want to make the renderer not render part of a scene, in this case the renderer would just skip rendering these areas and save resources (memory/CPU).
If this isn't possible in OpenGL, can someone suggest another open-source renderer? It could even be a software renderer.
If you're talking about rendering rectangular subportions of a display, you'd use glViewport and adjust your projection appropriately.
If you want to decide whether to render or not per pixel, especially with the purely fixed pipeline, you'd likely use a stencil buffer. It does pretty much what the name says: you paint as though spraying through a stencil. It's a per-pixel mask, reliably at least 8 bits per pixel, and has been supported in hardware for at least the last fifteen years. Amongst other uses, it used to be how you could render a stipple pattern without paying for the 'professional' cards that officially supported glPolygonStipple.
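A sketch of the usual two steps with the fixed pipeline (drawMaskShape() and drawScene() are placeholders for your own geometry):
// Step 1: write 1 into the stencil buffer wherever rendering should be allowed.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawMaskShape();

// Step 2: draw the scene only where the stencil value equals 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawScene();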
With GLSL there is also the discard statement, which immediately ends processing of a fragment and produces no output. The main caveat is that on some GPUs, especially embedded GPUs, the advice is to prefer returning a colour with an alpha of 0 (assuming that will have no effect given your blend mode) if you can avoid a conditional by doing so. Conditionals and discards can otherwise have a strong negative effect on parallelism, as fragment shaders are usually implemented by SIMD units working on multiple pixels simultaneously, so any time a shader program looks like it might diverge there can be a (potentially unnecessary) splitting of tasks. This is very GPU-dependent stuff, though, so be sure to profile in real life.
EDIT: as pointed out in the comments, using a scissor rectangle would be smarter than adjusting the viewport. That way you don't have to adjust your projection and, equally, rounding errors in any adjustment can't possibly create seams.
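A scissor version, as a minimal sketch (x, y, w and h describe the sub-rectangle in window coordinates):
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);       // nothing outside this rectangle is written
drawScene();                 // placeholder
glDisable(GL_SCISSOR_TEST);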
It's also struck me that an alternative to using the stencil for a strict binary test is to pre-populate the z-buffer with the closest possible value on pixels you don't want redrawn; use the colour mask to draw to the depth buffer only.
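That idea might look roughly like this (drawMaskShape() again stands in for whatever covers the pixels you want left untouched):
// Fill the depth buffer with the nearest value where pixels must not be redrawn.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glDepthRange(0.0, 0.0);      // force everything drawn now to the closest depth
drawMaskShape();
glDepthRange(0.0, 1.0);

// Normal rendering: the depth test now rejects fragments on the masked pixels.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);
drawScene();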
You can split the scene and render it in parts; this way you will render with less memory consumption, and you can simply skip unnecessary parts or regions.
Is anyone familiar with some sort of OpenGL magic for computing a whole bunch of pixels in the fragment shader at once, instead of only one? This issue is especially pressing for OpenGL ES, given the limitations of mobile platforms and the need to do things in a more performance-conscious way there.
Are there any conclusions or ideas out there?
P.S. I know that, because of how GPU architectures are organised, the shader runs in parallel for each fragment. But maybe there are techniques to go from one pixel to a group of pixels, or to implement your own texture organisation. A lot of work could be done faster this way on the GPU.
OpenGL does not support writing to multiple fragments (meaning fragments with distinct coordinates) from a shader, for good reason: it would obstruct the GPU's ability to compute each fragment in parallel, which is its greatest strength.
The structure of shaders may appear weird at first because an entire program is written for only one vertex or fragment. You might wonder why you can't "see" what is going on in neighboring parts.
The reason is an instance of the shader program runs for each output fragment, on each core/thread simultaneously, so they must all be independent of one another.
Parallel, independent, processing allows GPUs to render quickly, because the total time to process a batch of pixels is only as long as the single most intensive pixel.
Adding outputs with differing coordinates greatly complicates this.
Suppose a single fragment was written to by two or more instances of a shader.
To ensure correct results, the GPU could assign one instance to be the authority and ignore the other (but how would it know which one should win?).
Or it could add a mutex and have one instance wait around for the other to finish.
The other option is to allow a race condition, with the result depending on whichever instance finishes first.
Either way, this would immensely slow down the process, make shaders ugly, and introduce incorrect and unpredictable behaviour.
Well, firstly, you can produce multiple outputs from a single fragment shader in OpenGL 3 and up. A framebuffer object can have more than one RGBA surface (renderbuffer objects or textures) attached, and the shader generates an RGBA value for each of them by writing to gl_FragData[n] instead of gl_FragColor. See chapter 8 of the 5th edition OpenGL SuperBible.
However, the multiple outputs can only be generated for the same X,Y pixel coordinates in each buffer. This is for the same reason that an older style fragment shader can only generate one output, and can't change gl_FragCoord. OpenGL guarantees that in rendering any primitive, one and only one fragment shader will write to any X,Y pixel in the destination framebuffer(s).
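For reference, a rough GL 3.x-style setup for two color attachments might look like this (texture and FBO creation omitted; the names are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texColor0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, texColor1, 0);

const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);      // gl_FragData[0] and gl_FragData[1] land in these attachments

// ... draw; each fragment writes one value per attachment, always at the same X,Y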
If a fragment shader could generate multiple pixel values at different X,Y coords, it might try to write to the same destination pixel as another execution of the same fragment shader. Same if the fragment shader could change the pixel X or Y. This is the classic multiple threads trying to update shared memory problem.
One way to solve it would be to say "if this happens, the results are unpredictable", which sucks from the programmer's point of view because it's completely out of your control. Or fragment shaders would have to lock the pixels they are updating, which would make GPUs far more complicated and expensive, and the performance would suck. Or fragment shaders would execute in some defined order (e.g. top left to bottom right) instead of in parallel, which wouldn't need locks but the performance would suck even more.
I've seen the occasional article suggest ordering your vertices from nearest to furthest from the camera when sending them to OpenGL (for any of the OpenGL variants). The reason suggested by this is that OpenGL will not fully process/render a vertex if it is behind another vertex already rendered.
Since ordering vertices by depth is a costly part of any project, and this ordering typically changes frequently, how common or necessary is such a design?
I had previously thought that OpenGL would "look" at all the vertices submitted and process its own depth buffering on them, regardless of their order, before rendering the entire batch. But if in fact a vertex gets rendered to the screen before another, then I can see how ordering might benefit performance.
Is drawing front-to-back necessary for optimizing renders?
Once a primitive is rasterized, its z value can be used to do an "early z kill", which skips running the fragment shader. That's the main reason to render front-to-back. Tip: When you have transparent (alpha textured) polygons, you must render back-to-front.
The OpenGL spec defines a state machine and does not specify in what order the rendering actually happens, only that the results should be correct (within certain tolerances).
Edit for clarity: What I'm trying to say above is that the hardware can do whatever it wants, as long as the primitives appear to have been processed in order
However, most GPUs are streaming processors and their OpenGL drivers do not "batch up" geometry, except perhaps for performance reasons (minimum DMA size, etc). If you feed in polygon A followed by polygon B, then they are fed into the pipeline one after the other and are processed independently (for the most part) of each other. If there are a sufficient number of polys between A and B, then there's a good chance A completes before B, and if B was behind A, its fragments will be discarded via "early z kill".
Edit for clarity: What I'm trying to say above is that since hw does not "batch up" geometry, it cannot do the front-to-back ordering automatically.
You are confusing a few concepts here. There is no need to re-order vertices (*). But you should draw opaque objects front to back. This enables what is called "early z rejection" on the GPU. If the GPU knows that a pixel will be rejected by the z test, it does not have to run the shader, do texture fetches, etc. This applies per object (per draw call), though, not to individual primitives.
A simple example: You have a player character and a sky background. If you draw the player first, the GPU will never have to do the texture lookups for the pixels where the player is. If you do it the other way around, you first draw all the sky and then cover it up.
Transparent geometry needs to draw back to front of course.
(*) Vertices can be re-ordered for better performance, but doing early z is much more important and is done per object.
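To make the per-object ordering concrete, here is a small hedged sketch (the Object struct, distance field and draw loop are illustrative, not from the question):
#include <stdlib.h>

typedef struct { float distToCamera; /* mesh handle, material, ... */ } Object;

static int nearestFirst(const void *a, const void *b) {
    float da = ((const Object *)a)->distToCamera;
    float db = ((const Object *)b)->distToCamera;
    return (da > db) - (da < db);            // ascending distance: nearest first
}

// Opaque pass: front to back with depth test/write on, so early z can reject hidden pixels.
qsort(opaqueObjects, numOpaque, sizeof(Object), nearestFirst);
// ... draw opaqueObjects in this order ...

// Transparent pass: sort the other way (back to front), depth writes off, blending on.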
I'm kind of stuck on the logic behind an SDL2 texture. To me, they are pointless since you cannot draw to them.
In my program, I have several surfaces (or what were surfaces before I switched to SDL2) that I just blitted together to form layers. Now, it seems, I have to create several renderers and textures to create the same effect since SDL_RenderCopy takes a texture pointer.
Not only that, but all renderers have to come from a window, which I understand, but still fouls me up a bit more.
This all seems extremely bulky and slow. Am I missing something? Is there a way to draw directly to a texture? What is the point of textures, and am I safe to have multiple (if not hundreds of) renderers in place of what were surfaces?
SDL_Texture objects are stored as close as possible to video card memory and can therefore easily be accelerated by your GPU. Resizing, alpha blending, anti-aliasing and almost any compute-heavy operation benefit greatly from this performance boost. If your program needs to run per-pixel logic on your textures, you are encouraged to convert them into surfaces temporarily. Achieving a workaround with streaming textures is also possible.
Edit:
Since this answer receives quite a lot of attention, I'd like to elaborate on my suggestion.
If you prefer the Texture -> Surface -> Texture workflow to apply your per-pixel operation, make sure you cache the final texture unless you need to recalculate it on every render cycle. Textures in this solution are created with the SDL_TEXTUREACCESS_STATIC flag.
Streaming textures (creation flag SDL_TEXTUREACCESS_STREAMING) are encouraged for use cases where the source of the pixel data is the network, a device, a frameserver or some other source beyond the SDL application's full reach, and when it is apparent that caching frames from the source is inefficient or would not work.
It is possible to render on top of textures if they are created with the SDL_TEXTUREACCESS_TARGET flag. This limits the source of the draw operation to other textures, although that might already be what you required in the first place. "Textures as render targets" is one of the newest and least widely supported features of SDL2.
Nerd info for curious readers:
Due to the nature of the SDL implementation, the first two methods depend on application-level read and copy operations, though they are optimized for the suggested scenarios and are fast enough for realtime applications.
Copying data at the application level is almost always slow compared to post-processing on the GPU. If your requirements are stricter than what SDL can provide and your logic does not depend on some outer pixel data source, it would be sensible to allocate raw OpenGL textures painted from your SDL surfaces and apply shaders (GPU logic) to them.
Shaders are written in GLSL, a language which is compiled into GPU instructions. Hardware/GPU acceleration actually refers to code parallelized on GPU cores, and using shaders is the preferred way to achieve that for rendering purposes.
Attention! Using raw OpenGL textures and shaders in conjunction with SDL rendering functions and structures might cause some unexpected conflicts or loss of flexibility provided by the library.
TLDR;
It is faster to render and operate on textures than on surfaces, although modifying them can sometimes be cumbersome.
By creating an SDL2 texture with the STREAMING access type, one can lock and unlock either the entire texture or just an area of pixels to perform direct pixel operations. One must first create an SDL2 surface and link it with the lock/unlock calls as follows:
SDL_Surface *surface = SDL_CreateRGBSurface(0, w, h, 32, 0, 0, 0, 0); /* size/format should match the texture */
SDL_LockTexture(texture, &rect, &surface->pixels, &surface->pitch);
// paint into surface->pixels (surface->pitch bytes per row)
SDL_UnlockTexture(texture);
The key point is: if you draw to a texture of larger size and the drawing is incremental (e.g. a data graph in real time), be sure to lock and unlock only the actual area to update. Otherwise the operations will be slow, with heavy memory copying.
I have experienced reasonable performance and the usage model is not too difficult to understand.
In SDL2 it is possible to render off-screen / render directly to a texture. The function to use is:
int SDL_SetRenderTarget(SDL_Renderer *renderer, SDL_Texture *texture);
This only works if the renderer supports SDL_RENDERER_TARGETTEXTURE.
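A minimal usage sketch, assuming the renderer was created with that flag (sizes and texture names are placeholders):
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1,
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);

SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 640, 480);

SDL_SetRenderTarget(renderer, target);            // draw calls now go into the texture
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, someLayer, NULL, NULL);  // composite a layer (placeholder texture)
SDL_SetRenderTarget(renderer, NULL);              // back to rendering into the window

SDL_RenderCopy(renderer, target, NULL, NULL);     // draw the composed result to the screen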
I would like to efficiently render in an interlaced mode using GLSL.
I can already do this like so:
vec4 background = texture2D(plane[5], gl_TexCoord[1].st);
if (is_even_row(gl_TexCoord[1].t))
{
    vec4 foreground = get_my_color();
    gl_FragColor = vec4(foreground.rgb * foreground.a + background.rgb * (1.0 - foreground.a),
                        background.a + foreground.a);
}
else
    gl_FragColor = background;
However, as far as I have understood the nature of branching in GLSL, both branches will actually be executed, since "is_even_row" is considered a run-time value.
Is there any trick I can use here in order to avoid unnecessarily calling the rather heavy function get_my_color()? The behavior of is_even_row is quite static.
Or is there some other way to do this?
NOTE: glPolygonStipple will not work since I have custom blend functions in my GLSL code.
(comment to answer, as requested)
The problem with interlacing is that GPUs run shaders in 2x2 clusters, which means that you gain nothing from interlacing (a good software implementation might possibly only execute the actual pixels that are needed, unless you ask for partial derivatives).
At best, interlacing runs at the same speed, at worst it runs slower because of the extra work for the interlacing. Some years ago, there was an article in ShaderX4, which suggested interlaced rendering. I tried that method on half a dozen graphics cards (3 generations of hardware of each the "two big" manufacturers), and it ran slower (sometimes slightly, sometimes up to 50%) in every case.
What you could do instead is do all the expensive rendering at 1/2 the vertical resolution; this reduces the pixel shader work (and texture bandwidth) by half. You can then upscale the texture (GL_NEAREST) and discard every other line.
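A sketch of that half-resolution approach (FBO/texture creation omitted; halfResFbo, halfResTex and the draw helpers are placeholders):
// Render the expensive passes into a texture with half the vertical resolution.
glBindFramebuffer(GL_FRAMEBUFFER, halfResFbo);  // color attachment is halfResTex (height = h / 2)
glViewport(0, 0, w, h / 2);
drawExpensiveScene();

// Upscale to the screen with nearest filtering; line masking is handled as described below.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, w, h);
glBindTexture(GL_TEXTURE_2D, halfResTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
drawFullscreenQuad();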
The stencil test can be used to discard pixels before the pixel shader is executed. Of course the hardware still runs shaders in 2x2 groups, so in this pass you do not gain anything. However, that does not matter if it's just the very last pass, which is a trivial shader writing out a single fetched texel. The more costly composition shaders (the ones that matter!) run at half resolution.
You find a detailled description including code here: fake dynamic branching. This demo avoids lighting pixels by discarding those that are outside the light's range using the stencil.
Another way which does not need the stencil buffer is to use "explicit Z culling". This may in fact be even easier and faster.
For this, clear Z, disable color writes (glColorMask), and draw a fullscreen quad whose vertices have some "close" Z coordinate, and have the shader kill fragments in every odd line (or use the deprecated alpha test if you want, or whatever). gl_FragCoord.y is a very simple way of knowing which line to kill, using a small texture that wraps around would be another (if you must use GLSL 1.0).
Now draw another fullscreen quad with "far away" Z values in the vertices (and with depth test, of course). Simply fetch your half-res texture (GL_NEAREST filtering), and write it out. Since the depth buffer has a value that is "closer" in every other row, it will discard those pixels.
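As a rough sketch of this two-pass trick (the shaders and the quad helper are placeholders):
// Pass 1: mark the rows you do NOT want drawn with a "close" depth, color writes off.
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glUseProgram(rowMaskProgram);    // discards (leaves the depth buffer cleared) on the rows you DO want drawn
drawFullscreenQuad(-0.9f);       // quad at a "close" depth

// Pass 2: draw the half-res texture on a "far" quad; the marked rows fail the depth test.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);
glUseProgram(copyProgram);       // trivial shader: one texture fetch (GL_NEAREST)
glBindTexture(GL_TEXTURE_2D, halfResTex);
drawFullscreenQuad(0.9f);        // quad at a "far away" depth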
How does glPolygonStipple compare to this? Polygon stipple is a deprecated feature, because it is not directly supported by the hardware and has to be emulated by the driver either by "secretly" rewriting the shader to include extra logic or by falling back to software.
This is probably not the right way to do interlacing. If you really need to achieve this effect, don't do it in the fragment shader like this. Instead, here is what you could do:
Initialize a full screen 1-bit stencil buffer, where each bit stores the parity of its corresponding row.
Render your scene as usual to a temporary FBO with 1/2 the vertical resolution.
Turn on the stencil test, and set the stencil func depending on which set of scan lines you are going to draw.
Blit a rescaled version of the aforementioned FBO (containing the contents of your frame) to the screen, letting the stencil test mask out the other scan lines, as sketched below.
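The stencil setup, the stencil test and the final blit might look roughly like this (rendering into the half-resolution FBO is omitted; drawEveryOtherLine(), drawFullscreenQuad() and halfResTex are placeholders):
// Tag alternating rows in the stencil buffer.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawEveryOtherLine();            // e.g. one-pixel-high quads, or a shader pass that discards odd rows

// Let only the rows of the current field through.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(evenField ? GL_EQUAL : GL_NOTEQUAL, 1, 0xFF);

// Draw the rescaled FBO contents; the stencil masks out the other rows.
glBindTexture(GL_TEXTURE_2D, halfResTex);
drawFullscreenQuad();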
Note that you could skip the offscreen FBO step and draw directly using the stencil buffer, but this would waste some fill rate testing pixels that are just going to be clipped anyway. If your program is shader-heavy, the solution I just mentioned would be optimal; if it is not, you may end up marginally better off drawing directly to the screen.