OpenGL/C++: Back to Front rendering with vertex buffers - c++

I'm currently wrapping my head around how to efficiently render the polygons within a vertex buffer in back-to-front order to get transparency working.
I have my vertex buffers and index buffers set up and render with glDrawElements. Everything works nicely except transparency, because I currently render in arbitrary order (the order the objects were created).
I will later implement octree rendering, but that only helps with the overall vertex-buffer rendering order (which vertex buffer to render first), not the order WITHIN a vertex buffer.
The only thing I can think of is reordering my index buffers on every camera position change, which feels terribly inefficient since I store around 65,000 vertices per VBO (using GLushort indices to achieve an optimal VBO size of around 1-4 MB).
Is there a better way to order the vertices in the vertex buffer object (or, better phrased, the corresponding indices in the index buffer object)?
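For reference, the per-frame re-sort the question describes can be sketched CPU-side as follows. This is a minimal, hypothetical helper (not from the original post): it sorts the triangles in a GLushort index buffer by the squared distance of each triangle's centroid from the camera, farthest first, before the indices are re-uploaded.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical helper: reorder an index buffer (3 indices per triangle)
// back-to-front by squared centroid distance to the camera.
inline void sortTrianglesBackToFront(std::vector<uint16_t>& indices,
                                     const std::vector<Vec3>& vertices,
                                     const Vec3& camera)
{
    const size_t triCount = indices.size() / 3;
    std::vector<size_t> order(triCount);
    for (size_t i = 0; i < triCount; ++i) order[i] = i;

    auto distSq = [&](size_t tri) {
        const Vec3& a = vertices[indices[tri * 3 + 0]];
        const Vec3& b = vertices[indices[tri * 3 + 1]];
        const Vec3& c = vertices[indices[tri * 3 + 2]];
        float cx = (a.x + b.x + c.x) / 3.0f - camera.x;
        float cy = (a.y + b.y + c.y) / 3.0f - camera.y;
        float cz = (a.z + b.z + c.z) / 3.0f - camera.z;
        return cx * cx + cy * cy + cz * cz;
    };

    // Farthest triangles first, so blending composites correctly.
    std::sort(order.begin(), order.end(),
              [&](size_t l, size_t r) { return distSq(l) > distSq(r); });

    std::vector<uint16_t> sorted;
    sorted.reserve(indices.size());
    for (size_t tri : order)
        for (int k = 0; k < 3; ++k)
            sorted.push_back(indices[tri * 3 + k]);
    indices.swap(sorted);
}
```

Note this is O(n log n) per camera change, which is exactly the cost the question is trying to avoid; the answers below discuss order-independent alternatives.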

There are two methods for that (I have not used either of them myself, though):
Depth peeling (dual depth peeling): http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf
Stochastic transparency:
http://www.cse.chalmers.se/~d00sint/StochasticTransparency_I3D2010.pdf
Also, if your objects are convex, peeling can be implemented easily by first drawing the back faces and then the front faces (using GL_CULL_FACE, and inverting the normals for correct lighting in a shader).

Related

Using Index buffers in DirectX 11; how does it know?

Let's say I create two vertex buffers, for two different meshes.
(I'm assuming creating separate buffers for separate meshes is how it's usually done)
Now, let's say I want to draw one of the meshes using an index buffer.
Looking at the book Practical Rendering and Computation with Direct3D 11, it doesn't seem like the creation of an index buffer references a vertex buffer in any way, so how does the index buffer know (during input assembly) which vertex buffer to act on?
I've done some googling without answers, which leads me to assume there's something obvious about it that I'm missing.
You are right, index buffers do not reference specific vertex buffers. During DrawIndexed, the active index buffer is used to supply indices into the active vertex buffers (the ones you set using IASetIndexBuffer/IASetVertexBuffers).
Indeed, Index Buffers and Vertex Buffers are completely independent.
The index buffer only becomes associated with a vertex buffer at draw time (i.e. when both are bound to the pipeline).
You can think of an index buffer as a "lookup table", where you keep a list of element indices to draw.
That also means you can attach two completely "logically unrelated" buffers to the pipeline and draw them; nothing will prevent you from doing that, but you will of course get some strange visual results.
Decoupling both has many advantages, here are a few examples:
You can reuse an index buffer (for example, two displaced grids with identical resolution can share the same index buffer). That can be a decent memory gain.
You can draw your vertex buffer on its own and do some processing per vertex (draw a point list for sprites, for example, or apply skinning/displacement into a Stream Output buffer, then draw the resulting vertex buffer using DrawIndexed).
Both vertex and index buffers can also be bound as a ByteAddressBuffer, so you can process your geometry in a compute shader and build another, optimized index buffer (with culled triangles, for example), then issue the indexed draw with the optimized buffer. Applying those culls to indices instead of vertices is often faster, as you move much less memory.
This is a niche case, but sometimes I have to draw a mesh as a set of triangles, but then draw it as a set of lines (some form of wireframe). If, for example, you take a single box, you will not want to draw the diagonals as lines, so I have a shared vertex buffer with the box vertices, then one index buffer dedicated to drawing the triangle list and another to drawing the line list. In the case of large models, this can also be an effective memory gain.
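The decoupling described above can be illustrated CPU-side. This sketch (illustrative names, not the D3D11 API) mimics what the input assembler does at draw time: the index buffer is a pure lookup table into whichever vertex buffer happens to be bound, so one index buffer can be reused against several vertex buffers.

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// CPU-side sketch of input assembly: each index is looked up in the
// currently "bound" vertex buffer. The index buffer itself stores no
// back-reference to any particular vertex buffer.
inline std::vector<Vertex> assemble(const std::vector<Vertex>& vertexBuffer,
                                    const std::vector<uint32_t>& indexBuffer)
{
    std::vector<Vertex> out;
    out.reserve(indexBuffer.size());
    for (uint32_t i : indexBuffer)
        out.push_back(vertexBuffer[i]); // pure lookup
    return out;
}
```

Calling assemble with the same index buffer and two different vertex buffers (e.g. two displaced grids of identical resolution) yields two different meshes from one set of indices, which is exactly the memory gain mentioned above.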

efficient update of GL state given a change to the scene

Suppose we have a scene which consists of a list of n meshes in draw order. The triangle count of each mesh is bounded by a constant (though that constant may be large). Also suppose we have GL state such that all meshes can be rendered with a single draw call (glDrawArrays/Elements).
Given a change to the scene, which may consist of:
Inserting a mesh in the list
Removing a mesh from the list
Changing the geometry of the mesh (which may include changing its triangle count)
Is there an O(1) way to update GL state for a single change to the scene?
Because of the draw-order and single-draw-call constraints, the meshes must be laid out linearly in a VBO in that order. Thus, if a mesh is inserted in the list, the VBO must be resized by moving data, which is not O(1).
This isn't really an OpenGL problem. The work complexity has to do with how you model geometry, which happens well before you start trying to shove it into a GPU.
But there are some GL-specific things you can think about:
Separate vertex array order from draw order by using an index buffer and a DrawElements call. If you're drawing multiple entities out of one vertex buffer and you expect them to change, you can leave some padding in the vertex buffer and address vertices by index.
Think about how you're getting that vertex data to the GPU if it's changing every frame. For example, with double-buffering or MapBufferRange, the CPU can work on filling a buffer with new vertex data for the next frame while the GPU draws the current frame from a different buffer (or range).
The work you do to arrange/modify vertices every frame still can't be O(1). GPU work (and CPU/GPU transfers) tends to be thought of more in milliseconds than in order-analysis terms, but there are things you can do to minimize the time.
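The padding idea in the first bullet can be sketched as follows. This is a hypothetical allocator (names are my own, not from the answer): the vertex buffer is carved into fixed-size slots, which works because the question assumes a bounded per-mesh triangle count. Inserting a mesh then means writing into a free slot and emitting indices that point at it; no other mesh's data moves, so the update is O(size of the one mesh), not O(total buffer).

```cpp
#include <algorithm>
#include <optional>
#include <vector>

// Hypothetical slotted vertex buffer: fixed-size slots plus a free list,
// so mesh insertion/removal never shifts other meshes' data.
struct SlottedBuffer {
    size_t slotSize;                 // max vertices per mesh
    std::vector<float> vertices;     // slotCount * slotSize * 3 floats (xyz)
    std::vector<size_t> freeSlots;

    SlottedBuffer(size_t slotCount, size_t slotSize_)
        : slotSize(slotSize_), vertices(slotCount * slotSize_ * 3, 0.0f)
    {
        for (size_t s = slotCount; s-- > 0;) freeSlots.push_back(s);
    }

    // Writes a mesh into a free slot; returns its base vertex index
    // (to offset index-buffer entries by), or nullopt if full.
    std::optional<size_t> insert(const std::vector<float>& meshXyz)
    {
        if (freeSlots.empty() || meshXyz.size() > slotSize * 3)
            return std::nullopt;
        size_t slot = freeSlots.back();
        freeSlots.pop_back();
        std::copy(meshXyz.begin(), meshXyz.end(),
                  vertices.begin() + slot * slotSize * 3);
        return slot * slotSize;
    }

    void remove(size_t baseVertex) { freeSlots.push_back(baseVertex / slotSize); }
};
```

Draw order is then controlled entirely by the order of indices in the index buffer, which decouples it from the physical layout of the slots, as the answer suggests.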

OpenGL- drawarrays or drawelements?

I'm making a small 2D game demo and from what I've read, it's better to use drawElements() to draw an indexed triangle list than using drawArrays() to draw an unindexed triangle list.
But it doesn't seem possible as far as I know to draw multiple elements that are not connected in a single draw call with drawElements().
So for my 2D game demo where I'm only ever going to draw squares made of two triangles, what would be the best approach so I don't end up having one draw call per object?
Yes, it's better to use indices in many cases since you don't have to store or transfer duplicate vertices and you don't have to process duplicate vertices (vertex shader only needs to be run once per vertex). In the case of quads, you reduce 6 vertices to 4, plus a small amount of index data. Two thirds is quite a good improvement really, especially if your vertex data is more than just position.
In summary, glDrawElements results in
Less data (mostly), which means more GPU memory for other things
Faster updating if the data changes
Faster transfer to the GPU
Faster vertex processing (no duplicates)
Indexing can affect cache performance if the indices reference vertices that aren't near each other in memory. Modellers commonly produce meshes that are optimized with this in mind.
For multiple elements, if you're referring to GL_TRIANGLE_STRIP you could use glPrimitiveRestartIndex to draw multiple strips of triangles with one glDrawElements call. In your case it's easy enough to use GL_TRIANGLES and reference 4 vertices with 6 indices for each quad. Your vertex array then needs to store all the vertices for all your quads. If they're moving you still need to send that data to the GPU every frame. You could position all the moving quads at the front of the array and only update the active ones. You could also store static vertex data in a separate array.
The typical approach to drawing a 3D model is to provide a list of fixed vertices for the geometry and move the whole thing with the model matrix (as part of the model-view). The confusing part here is that the mesh data is so small that, as you say, the overhead of the draw calls may become quite prominent. I think you'll have to draw a LOT of quads before you get to the stage where it'll be a problem. However, if you do, instancing or some similar idea such as particle systems is where you should look.
Perhaps only go down the following track if the draw calls or data transfer becomes a problem as there's a lot involved. A good way of implementing particle systems entirely on the GPU is to store instance attributes such as position/colour in a texture. Each frame you use an FBO/render-to-texture to "ping-pong" this data between another texture and update the attributes in a fragment shader. To draw the particles, you can set up a static VBO which stores quads with the attribute-data texture coordinates for use in the vertex shader where the particle position can be read and applied. I'm sure there's a bunch of good tutorials/implementations to follow out there (please comment if you know of a good one).

Pairwise vertex attributes in OpenGL

I'm trying to visualise a graph with OpenGL. I have a vertex buffer with points in 3D space, and an index buffer that specifies lines between vertices. I use glDrawElements to draw the graph, and everything works. The problem is that I need to visualise the edge weights. Edge weights are pairwise attributes, and I have no idea how to get this information into my shader. The only solution I can think of is drawing each edge individually with DrawRangeElements and setting the edge weight between every call. Is there a better way of doing this?
There's no need to employ a geometry shader. Just render them as GL_LINES, duplicating the positions as needed as well as providing the same "weight" attribute for each pair of verts on a line. This is ultimately no different from rendering a cube, where each face needs its own normals.
If (and only if) you absolutely must have that memory back, and you can't simply compress your vertex data (using normalized shorts, unnormalized shorts, or whatever), here are some techniques you can use. Be warned: this is a memory-vs-performance tradeoff. So unless you have real memory pressures, just duplicate your vertex data and get it over with.

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh, and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do that for an entire scene. There's no structure that allows you to associate a "list" with every pixel.
You can get a list of primitives that affected a certain area using the select buffer (see glRenderMode(GL_SELECT)); note that selection mode is deprecated in modern OpenGL.
You can get scene depth complexity using stencil buffer techniques.
If there are at most 8 triangles total, you can get the list of triangles that affected every pixel using the stencil buffer (basically, assign a unique (1 << n) stencil value to each triangle and OR it with the existing stencil buffer value in the stencil op).
But to solve it in the generic case, you'll need your own rasterizer and LOTS of memory to store per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
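The stencil-bit trick above can be illustrated on the CPU (the actual GPU version would use glStencilOp/glStencilFunc; this sketch just shows the encoding). Each triangle n ORs (1 << n) into the stencil value of every pixel it covers, so the final 8-bit stencil value encodes the complete list of covering triangles, occluded ones included, and can be decoded afterwards:

```cpp
#include <cstdint>
#include <vector>

// Decode one 8-bit stencil value into the list of triangle ids that
// touched the pixel (bit n set => triangle n covered it).
inline std::vector<int> decodeTriangles(uint8_t stencil)
{
    std::vector<int> tris;
    for (int n = 0; n < 8; ++n)
        if (stencil & (1u << n))
            tris.push_back(n);
    return tris;
}
```

The hard 8-triangle limit comes from the stencil buffer being 8 bits per pixel on common hardware, which is why this only works for tiny meshes.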
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it. Store all rasterized triangles in an octree. Launch a "ray" through that octree for every pixel you want to test and collect the triangles the ray hits. That's a collision-detection problem.