Using OpenGL instancing to render a 2D scene with object depths and alpha blending

Here's what I'm trying to do: I want to render a 2D scene, consisting of a number of objects (quads), using instancing. Objects with a lower y value (towards the bottom of the screen) need to be rendered in front of the ones with higher y values. And alpha blending also needs to work.
So my first idea was to use the Z value for depth, but I soon realized alpha blending will not work unless the objects are drawn in the right order. And I'm not issuing one call per quad; I use a single instanced call to render the whole scene. Putting the instance data in the correct sorted order seems to work for me, but I doubt this is something I can rely on, since the GPU is supposed to run those computations in parallel as much as possible.
So the question is, is there a way to make this work? The best thing I can think of right now is to issue an instanced call for each separate y value (and issue those in order, back to front). Is there a better way to do this?

Instancing is best used for cases where each instance is medium-sized: hundreds or maybe thousands of triangles. Quads are not a good candidate for instancing.
Just build and render a sequence of triangles. There are even ways to efficiently get around the lack of a GL_QUADS primitive type in modern OpenGL.
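For example, here is a minimal sketch of the index-buffer workaround, assuming a core-profile context; quadCount and the element buffer ebo are illustrative names, not anything from the question:

    // Each quad contributes 4 vertices; 6 indices turn it into 2 triangles.
    std::vector<GLuint> indices;
    indices.reserve(quadCount * 6);
    for (GLuint q = 0; q < quadCount; ++q) {
        const GLuint b = q * 4; // first corner of quad q
        const GLuint tri[6] = { b, b + 1, b + 2,   // first triangle
                                b, b + 2, b + 3 }; // second triangle
        indices.insert(indices.end(), tri, tri + 6);
    }
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STATIC_DRAW);
    // One non-instanced draw call renders every quad as triangles:
    glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);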
Putting the instance data in the correct sorted order seems to work for me, but I doubt this is something I can rely on, since the GPU is supposed to run those computations in parallel as much as possible.
That's not how GPUs work.
When you issue a rendering command, what you (eventually) get is a sequence of primitives. Because the vertices that were given to that command are ordered (first to last), and the instances in that command are ordered, and even the draws within a single draw command are ordered, an order can be assigned to every primitive in the draw call with respect to every other primitive based on the order of vertices, instances, and draws.
This defines the primitive order for a drawing command. GPUs guarantee that blending (and logical operations and other visible post-fragment shader operations) will respect the primitive order of a rendering command and between rendering commands. That is, if you draw 2 triangles in a single call, and the first is behind the second (with depth testing turned off), then blending for the second triangle will respect the data written by the first.
Basically, if you give primitives to the GPU in an order, the GPU will respect that order with regard to blending and such.
So again, just build an ordered stream of triangles to represent your quads and render them.
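As a rough sketch of that idea (the Quad type, the quads array and the bound GL_ARRAY_BUFFER are assumptions for illustration):

    // Sort the CPU-side quad data back-to-front (higher y = further back here),
    // then re-upload; the primitive order then matches the blend order you need.
    std::sort(quads.begin(), quads.end(),
              [](const Quad& a, const Quad& b) { return a.y > b.y; });
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    (GLsizeiptr)(quads.size() * sizeof(Quad)), quads.data());
    // A single glDrawElements/glDrawArrays call now blends in exactly this order.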

Related

In OpenGL, what's the best way to achieve a combined blend of primitives that don't all use the same shader?

Let's say I have a bunch of semi-transparent triangles that I want to render as part of the same scene, correctly blended in order of depth. If they all use the same shader (and same uniforms, OpenGL state etc), then all I have to do is sort them by depth before submitting to OpenGL and I can render them all with a single draw call.
But what do I do if I want to render some of the triangles using a different shader? I can't do a single draw call anymore because there's two shaders. Do I sort the two sets of triangles separately and render them one after the other? But that only works if the depth values of one of the sets happen to be all less than the depth values of the other set. What if the depth values of the two sets interleave? In the worst case, what if the two sets of triangles are perfectly interleaved, so that between every two consecutive triangles of one set there's a triangle from the other set? What do I do then? Do I have to do as many draw calls as there are triangles in order to get the right result? I would like to limit the number of draw calls since I've heard having many draw calls is bad for performance. Is there a better way to do this?
In the worst case, what if the two sets of triangles are perfectly interleaved, so that between every two consecutive triangles of one set there's a triangle from the other set? What do I do then? Do I have to do as many draw calls as there are triangles in order to get the right result?
Yes, exactly so.
I would like to limit the number of draw calls since I've heard having many draw calls is bad for performance.
Each draw call has a comparatively large overhead, sure. The fewer draw calls you can render your scene in, the faster it will be.
Is there a better way to do this?
Sure: stop using different shaders. The whole "different shader" thing is just an assumption on your end; I haven't yet seen proof that it's actually needed. Between bindless textures, instanced drawing, SSBOs and plain ol' texture atlases, you'd have to be rendering some pretty exotic triangles before you couldn't write a single shader that covers all of them.
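As one hedged illustration of the idea, a fragment shader that picks its texture from a texture array via a per-vertex layer index (all names and the layout here are made up for the sketch):

    // C++ side: the GLSL source as a raw string literal.
    const char* fragSrc = R"GLSL(
        #version 330 core
        uniform sampler2DArray uTextures; // one array instead of many samplers
        in vec2 vUV;
        flat in float vLayer;             // forwarded from a vertex attribute
        out vec4 fragColor;
        void main() {
            fragColor = texture(uTextures, vec3(vUV, vLayer));
        }
    )GLSL";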

Is drawing front-to-back necessary for optimizing renders?

I've seen the occasional article suggest ordering your vertices from nearest to furthest from the camera when sending them to OpenGL (for any of the OpenGL variants). The reason suggested for this is that OpenGL will not fully process/render a vertex if it is behind another vertex already rendered.
Since ordering vertices by depth is costly, especially as the ordering typically changes from frame to frame, how common or necessary is such a design?
I had previously thought that OpenGL would "look" at all the vertices submitted and process its own depth buffering on them, regardless of their order, before rendering the entire batch. But if in fact a vertex gets rendered to the screen before another, then I can see how ordering might benefit performance.
Is drawing front-to-back necessary for optimizing renders?
Once a primitive is rasterized, its z value can be used to do an "early z kill", which skips running the fragment shader. That's the main reason to render front-to-back. Tip: When you have transparent (alpha textured) polygons, you must render back-to-front.
The OpenGL spec defines a state machine and does not specify in what order the rendering actually happens, only that the results should be correct (within certain tolerances).
Edit for clarity: What I'm trying to say above is that the hardware can do whatever it wants, as long as the primitives appear to have been processed in order.
However, most GPUs are streaming processors and their OpenGL drivers do not "batch up" geometry, except perhaps for performance reasons (minimum DMA size, etc). If you feed in polygon A followed by polygon B, then they are fed into the pipeline one after the other and are processed independently (for the most part) of each other. If there are a sufficient number of polys between A and B, then there's a good chance A completes before B, and if B was behind A, its fragments will be discarded via "early z kill".
Edit for clarity: What I'm trying to say above is that since the hardware does not "batch up" geometry, it cannot do the front-to-back ordering automatically.
You are confusing a few concepts here. There is no need to re-order vertices (*). But you should draw opaque objects front to back. This enables what is called "early z rejection" on the GPU: if the GPU knows, from the z test, that a pixel is not going to be shaded, it does not have to run the shader, do texture fetches, etc. This applies to whole objects (draw calls), though, not to the individual vertices within them.
A simple example: You have a player character and a sky background. If you draw the player first, the GPU will never have to do the texture lookups for the pixels where the player is. If you do it the other way around, you first draw all the sky and then cover it up.
Transparent geometry needs to draw back to front of course.
(*) Vertices can be re-ordered for better performance, but early z rejection is much more important, and it is done per object.
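A minimal sketch of that per-object ordering (the Object type, its fields, camPos and squaredDistance are assumptions for illustration):

    // Sort whole opaque objects by distance to the camera, nearest first,
    // then issue one draw per object; early z then rejects hidden fragments.
    std::sort(objects.begin(), objects.end(),
              [&](const Object& a, const Object& b) {
                  return squaredDistance(camPos, a.center)
                       < squaredDistance(camPos, b.center);
              });
    for (const Object& o : objects)
        glDrawArrays(GL_TRIANGLES, o.firstVertex, o.vertexCount);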

Draw a bunch of elements generated by CUDA/OpenCL?

I'm new to graphics programming, and need to add on a rendering backend for a demo we're creating. I'm hoping you guys can point me in the right direction.
Short version: Is there any way to send OpenGL an array of data for distinct elements, without having to issue a draw command for each element distinctly?
Long version: We have a CUDA program (will eventually be OpenCL) which calculates a bunch of data for a bunch of objects for us. We then need to render these objects using, e.g., OpenGL.
The CUDA kernel can generate our vertices, and using OpenGL interop it can shove these into an OpenGL VBO without having to transfer the data back to host memory. But the problem is we have a bunch (upwards of a million is our goal) of distinct objects. It seems like our best bet here is allocating one VBO and putting every object's vertices into it. Then we can call glDrawArrays with offsets and lengths of each element inside that VBO.
However, each object may have a variable number of vertices (though the total vertices in the scene can be bounded.) I'd like to avoid having to transfer a list of start indices and lengths from CUDA -> CPU every frame, especially given that these draw commands are going right back to the GPU.
Is there any way to pack a buffer with data such that we can issue only one call to OpenGL to render the buffer, and it can render a number of distinct elements from that buffer?
(Hopefully I've also given enough info to avoid an XY problem here.)
One way would be to get away from treating these as individual objects and instead make them a single large object drawn with a single draw call. The question is: what data distinguishes the objects from each other, i.e. what do you change between the individual calls to glDrawArrays/glDrawElements?
If it is something simple, like a color, it would probably be easier to supply this as an additional per-vertex attribute. This way you can render all objects as one single large object, using a single draw call, with the individual sub-objects (which now only exist conceptually) colored correctly. The memory cost of the additional attribute may be well worth it.
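For instance, a sketch of feeding that per-object color in as vertex attribute 1 (the attribute index, the colorVbo name and the tightly-packed layout are assumptions):

    // colorVbo holds one RGBA value per vertex, written by the kernel so that
    // all vertices of an object share that object's color.
    glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, nullptr);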
If it is something a little more complex (like a texture), you may still be able to index it using an additional per-vertex attribute, either as an index into a texture array (texture arrays should be supported on any CUDA/OpenCL-capable hardware) or as a texture coordinate into a particular subregion of a single large texture (a so-called texture atlas).
But if the difference between those objects is something more complex, like a different shader, you may really need to render individual objects and make individual draw calls. But you still don't necessarily need to make a round-trip to the CPU. With the ARB_draw_indirect extension (core since GL 4.0, I think, but it may be supported on GL 3 hardware (and thus CUDA/CL-capable hardware), I don't know) you can source the arguments to a glDrawArrays/glDrawElements call from an additional buffer (into which you can write with CUDA/CL like any other GL buffer). So you can assemble the offset/length information of each individual object on the GPU and store it in a single buffer. Then you do your glDrawArraysIndirect loop, offsetting into this single draw-indirect buffer (with the offset between the individual objects now being constant).
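A hedged sketch of that loop (the command struct layout is the one defined by the ARB_draw_indirect spec; indirectBuf and objectCount are assumptions):

    // One DrawArraysIndirectCommand per object, written on the GPU by the kernel.
    struct DrawArraysIndirectCommand {
        GLuint count;         // number of vertices for this object
        GLuint instanceCount; // 1 for a plain, non-instanced draw
        GLuint first;         // offset of this object's vertices in the VBO
        GLuint baseInstance;  // 0 unless instancing is used
    };
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    for (GLuint i = 0; i < objectCount; ++i)
        glDrawArraysIndirect(GL_TRIANGLES,
                             (const void*)(i * sizeof(DrawArraysIndirectCommand)));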
But if the only reason for issuing multiple draw calls is that you want to render the objects as single GL_TRIANGLE_STRIPs or GL_TRIANGLE_FANs (or, heaven forbid, GL_POLYGONs), you may want to reconsider and just use a bunch of GL_TRIANGLES so that you can render all objects in a single draw call. The (possible) time and memory savings from using triangle strips are likely to be outweighed by the overhead of multiple draw calls, especially when rendering many small triangle strips. If you really want to use strips or fans, you may want to introduce degenerate triangles (by repeating vertices) to separate them from each other, even when drawn with a single draw call. Or you may look into the glPrimitiveRestartIndex function introduced with GL 3.1.
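A minimal sketch of the primitive-restart route (GL 3.1+; the sentinel value and totalIndexCount are our choices for the sketch):

    // Many strips in one glDrawElements call: whenever the chosen sentinel
    // index appears in the index buffer, the current strip is ended.
    const GLuint RESTART = 0xFFFFFFFFu;
    glEnable(GL_PRIMITIVE_RESTART);
    glPrimitiveRestartIndex(RESTART);
    // Index buffer layout: stripA..., RESTART, stripB..., RESTART, ...
    glDrawElements(GL_TRIANGLE_STRIP, totalIndexCount, GL_UNSIGNED_INT, nullptr);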
Probably not optimal, but you could make a single glDrawArrays call on your whole buffer...
If you use GL_TRIANGLES, you can fill your buffer with zeroes and write only the needed vertices in your kernel. This way "empty" regions of your buffer will be drawn as zero-area polygons (= degenerate polygons, which are not drawn at all).
If you use GL_TRIANGLE_STRIP, you can do the same, but you'll have to duplicate your first vertex in order to make a fake triangle between (0,0,0) and your mesh.
This can seem overkill, but:
- You'll have to be able to handle that many vertices anyway.
- Degenerate triangles use no fillrate, so they are almost free (the vertex shader is still run, though).
A probably better solution would be to use glDrawElements instead: in your kernel, you also generate an index list for your whole buffer, which can completely skip regions of your buffer.
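Sketch, assuming the kernel also fills an index buffer ibo and that the live index count is the one small value you read back (or otherwise make available) each frame:

    // The kernel writes indices only for real geometry, so untouched regions
    // of the vertex buffer are simply never referenced by the draw.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glDrawElements(GL_TRIANGLES, liveIndexCount, GL_UNSIGNED_INT, nullptr);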

Disable writing to the depth buffer from GLSL

All geometry is stored in one VBO (transparent + opaque). I cannot sort the geometry. How can I disable writing to the depth buffer from GLSL without losing the color data?
If I understand right, you want to disable depth writes because you draw both opaque and transparent objects. Apart from the fact that it doesn't work that way from within GLSL, it would not produce what you want, if it did.
If you just disabled depth writes ad hoc, the opaque objects coming after a transparent object would overwrite it, regardless of the z order.
What you really want to do is this (a sketch of the state sequence follows the steps):
Enable depth writes and depth test
Draw all opaque geometry. If you can, in a roughly sorted (roughly is good enough!) order, closest objects first.
Disable depth writes, keep depth test enabled
Enable blending
Draw transparent objects, sorted in the opposite direction, that is farthest away first. This occludes transparent objects with opaque geometry and makes blending work correctly.
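A minimal sketch of the sequence above, assuming a ready GL context; drawOpaque() and drawTransparent() are hypothetical stand-ins for your own draw calls:

    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);                               // depth writes on
    drawOpaque();                                       // roughly front-to-back
    glDepthMask(GL_FALSE);                              // writes off, test stays on
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blending
    drawTransparent();                                  // sorted back-to-front
    glDepthMask(GL_TRUE);                               // restore for the next frame
    glDisable(GL_BLEND);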
If, for some reason, you can't sort the opaque geometry (though there is really no reason why you can't do that?), never mind -- it will be slightly slower because it does not cull fragments, but it will produce the same image.
If, for some reason, you can't sort the transparent geometry, you will have to expect incorrect results where several transparent objects overlap. This may or may not be noticeable (especially if the order is "random", i.e. changes frame by frame, it will be very noticeable -- otherwise you might in fact get away with it although it's incorrect).
Note that as datenwolf has pointed out already, the fact that several objects are in one VBO does not mean you can't draw a subset of them, or several subsets in any order you want. After all, a VBO only holds some vertices, it is up to you which groups of them you draw in which order.
You can't.
I can not sort geometry.
Why? You think because it's all in one VBO? Then I've got good news: It's perfectly possible to draw from just a subset of a buffer object.
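For illustration, two independent draws out of the same VBO; the first/count values are placeholders for wherever your subsets start and end:

    // One buffer holds everything; each draw just selects a sub-range of it.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glDrawArrays(GL_TRIANGLES, opaqueFirst, opaqueCount);           // opaque subset
    // ...switch state (enable blending, disable depth writes)...
    glDrawArrays(GL_TRIANGLES, transparentFirst, transparentCount); // transparent subset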