What is the purpose of OpenGL texture buffer objects?

We use buffer objects to reduce copy operations between CPU and GPU, and with texture buffer objects we can expose a buffer's contents through a texture target instead of a vertex target. Is there any other advantage of texture buffer objects? Also, they do not allow filtering; are there any other disadvantages?

A buffer texture is similar to a 1D texture, but its backing data store is not part of the texture object itself (in contrast to any other kind of texture object); instead, it is provided by an actual buffer object bound to TEXTURE_BUFFER. Using a buffer texture has several implications and, AFAIK, one use-case that can't be mapped to any other type of texture.
Note that a buffer texture is not a buffer object - a buffer texture is merely associated with a buffer object using glTexBuffer.
Compared to other textures, buffer textures can be huge. Table 23.53 and following of the core OpenGL 4.4 spec define a minimum maximum (i.e. the minimal value that implementations must provide) for the number of texels, MAX_TEXTURE_BUFFER_SIZE. The potential number of texels stored in your buffer object is computed as follows (as found in GL_ARB_texture_buffer_object):
floor(<buffer_size> / (<components> * sizeof(<base_type>)))
The resulting value clamped to MAX_TEXTURE_BUFFER_SIZE is the number of addressable texels.
Example:
You have a buffer object storing 4MiB of data. What you want is a buffer texture for addressing RGBA texels, so you choose an internal format RGBA8. The addressable number of texels is then
floor(4MiB / (4 * sizeof(UNSIGNED_BYTE))) == 1024^2 texels == 2^20 texels
If your implementation supports this number, you can address the full range of values in your buffer object. The above isn't too impressive and can simply be achieved with any other texture on current implementations. However, the machine on which I'm writing this answer supports 2^28 == 268435456 texels.
With OpenGL 4.4 (and 4.3, and possibly earlier 4.x versions), the minimum MAX_TEXTURE_SIZE is 2^14 texels for a 1D texture, while the minimum MAX_TEXTURE_BUFFER_SIZE is 2^16 texels, so a buffer texture can still be 4 times as large. On my local machine I can allocate a 2GiB buffer texture (even larger, actually), but only a 1GiB 1D texture when using RGBA32F texels.
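For completeness, a minimal sketch of creating such a buffer texture and querying the limit (the 4MiB size and RGBA8 format are just the example values from above):
GLint maxTexels = 0;
glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE, &maxTexels);  // at least 2^16 on GL 4.x

GLuint tbo, tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
// 4 MiB data store; NULL allocates only, the data can be filled in later.
glBufferData(GL_TEXTURE_BUFFER, 4 * 1024 * 1024, NULL, GL_STATIC_DRAW);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_BUFFER, tex);
// Associate the buffer's data store with the texture, viewed as RGBA8 texels.
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, tbo);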
A use-case for buffer textures is random (and atomic, if desired) read/write access (the latter via image load/store) to a large data store inside a shader. Yes, you can do random read access on arrays of uniforms inside one or multiple blocks, but it gets very tedious if you have to process a lot of data and work with multiple blocks. And even then, looking at the maximum combined size of all uniform components (where a single float component has a size of 4 bytes) in all uniform blocks for a single stage,
MAX_(stage)_UNIFORM_BLOCKS *
MAX_UNIFORM_BLOCK_SIZE +
MAX_(stage)_UNIFORM_COMPONENTS * 4
isn't really a lot of space to work with in a shader stage (depending on how large your implementation allows the above number to be).
An important difference between textures and buffer textures is that the data store, as a regular buffer object, can be used in operations where a texture simply does not work. The extension mentions:
The use of a buffer object to provide storage allows the texture data to
be specified in a number of different ways: via buffer object loads
(BufferData), direct CPU writes (MapBuffer), framebuffer readbacks
(EXT_pixel_buffer_object extension). A buffer object can also be loaded
by transform feedback (NV_transform_feedback extension), which captures
selected transformed attributes of vertices processed by the GL. Several
of these mechanisms do not require an extra data copy, which would be
required when using conventional TexImage-like entry points.
An implication of using buffer textures is that look-ups inside a shader can only be done via texelFetch. Buffer textures also aren't mip-mapped and, as you already mentioned, during fetches there is no filtering.
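In GLSL, such a fetch looks roughly like this (the binding point and the texel index are made-up examples):
#version 430
layout(binding = 0) uniform samplerBuffer dataStore;

out vec4 color;

void main()
{
    // The only way to read a buffer texture: a plain texel index,
    // no texture coordinates, no mipmaps, no filtering.
    color = texelFetch(dataStore, 12345);
}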
Addendum:
Since OpenGL 4.3, we also have what is called a shader storage buffer. These too provide random (atomic, if desired) read/write access to a large data store, but they don't need to be accessed with texelFetch() or image load/store functions as is the case for buffer textures. Using buffer textures also implies having to deal with gvec4 return values, both with texelFetch() and imageLoad() / imageStore(). This becomes very tedious as soon as you want to work with structures (or arrays thereof), and you don't want to think up some stupid packing scheme using multiple instances of vec4, or use multiple buffer textures to achieve something similar. With a buffer accessed as shader storage, you can simply index into the data store and pull one or more instances of some struct {} directly from the buffer.
Also, since they are very similar to uniform blocks, using them is fairly straightforward: if you know how to use uniform buffers, it's not a long way to learn how to use shader storage buffers.
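A minimal sketch of what that looks like in a compute shader (the Particle struct and the binding point are made up for illustration):
#version 430
layout(local_size_x = 64) in;

struct Particle {
    vec4 position;
    vec4 velocity;
    vec4 color;
};

layout(std430, binding = 0) buffer ParticleBuffer {
    Particle particles[];  // runtime-sized array, length comes from the bound range
};

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length()))
        return;
    // Whole struct instances, no manual vec4 packing needed.
    Particle p = particles[i];
    p.position += p.velocity;
    particles[i] = p;
}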
It's also absolutely worth browsing the Issues section of the corresponding ARB extension.
Performance Implications
Daniel Rakos did some performance analysis years ago, both as a comparison of uniform buffers and buffer textures, and on a more general note based on information from AMD's OpenCL programming guide. There is now a more recent version, specifically targeting OpenCL optimization on AMD platforms.
There are many factors influencing performance:
access patterns and resulting caching behavior
cache line sizes and memory layout
what kind of memory is accessed (registers, local, global, L1/L2 etc.) and its respective memory bandwidth
how well memory fetching latency is hidden by doing something else in the meantime
what kind of hardware you're on, i.e. a dedicated graphics card with dedicated memory or some unified memory architecture
etc., etc.
As always when worrying about performance: implement something that works and see if that solution is fast enough for your needs. Otherwise, implement two or more approaches to solving the problem, profile them and compare.
Also, vendor-specific guides can offer a great deal of insight. The above-mentioned OpenCL user and optimization guides provide a high-level architectural perspective and specific hints on how to optimize your CL kernels - stuff that's also relevant when developing shaders.

One use case I have found was storing per-primitive attributes (accessed in the fragment shader with the help of gl_PrimitiveID) while still maintaining unique vertices in the indexed mesh.

Related

Dynamic size arrays in Uniform buffer in Vulkan

I recently rewrote some code to use Shader Storage Buffer Objects in OpenGL, to pass dynamically sized arrays into GLSL, vastly increasing performance when drawing many procedurally generated objects.
What I have is a couple of thousand points, and for each point I render a procedurally generated circular billboard. Each one can in turn have a different color and radius, as well as a few other characteristics (represented as bools or enums).
I fill a vector with these positions, packed together with the radius and color. Then I upload it as a Shader Storage Buffer Object with dynamic size. I create a dummy VAO containing 0 VBOs, but call the draw command with the same number of points that I have.
Inside the shader, I then iterate through this array, using the gl_VertexID, and generate a quad (two triangles) with texture coordinates, for each point.
I'm looking for a way of doing the same in Vulkan. Is there some way in Vulkan to pass a dynamically sized array into a shader? Reading about shader storage buffers in Graham Sellers' Vulkan book, it only mentions them being read-write, but not being capable of dynamically sized arrays.
Edit: It seems that storage buffers are in fact capable of holding dynamically sized arrays, based on Sascha Willems' particle example. Is there a way of doing the same thing via uniforms?
I may be misunderstanding your question. SSBOs have identical behavior and functionality between the two APIs. That's why they're named the same. An unbounded array at the end of a storage block will have its length defined at runtime, based on what data you provide.
The size of a buffer descriptor is not hard coded into the descriptor set layout; it's something you set with VkWriteDescriptorSet. Now unlike the offset, you cannot change a descriptor set's size without changing the descriptor itself. That is, you don't have an equivalent to vkCmdBindDescriptorSets's pDynamicOffsets field. So you have to actually update the descriptor in-situ to change the length.
But that just requires double-buffering your descriptor set; it shouldn't be a problem.
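A sketch of such an in-place update (all handles and sizes are placeholders you would supply yourself):
VkDescriptorBufferInfo bufferInfo = {0};
bufferInfo.buffer = storageBuffer;       // your VkBuffer
bufferInfo.offset = 0;
bufferInfo.range  = newSizeInBytes;      // determines the runtime array length

VkWriteDescriptorSet write = {0};
write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet          = descriptorSet;   // must not be in use by an in-flight frame
write.dstBinding      = 0;
write.descriptorCount = 1;
write.descriptorType  = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
write.pBufferInfo     = &bufferInfo;

vkUpdateDescriptorSets(device, 1, &write, 0, NULL);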
Is there a way of doing the same thing via uniforms?
Again, the answer is the same for Vulkan as for OpenGL: no.

Reuse vertex attribute buffer as index buffer?

Can I use a VBO which I initialise like this:
GLuint bufferID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER, nBytes, indexData, GL_DYNAMIC_DRAW);
as an index buffer, like this:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,bufferID);
/* ... set up vertex attributes, NOT using bufferID in the process ... */
glDrawElements(...);
I would like to use the buffer mostly as an attribute buffer and occasionally as an index buffer (but never at the same time).
There is nothing in the GL which prevents you from doing such things; your code above is legal GL. You can bind every buffer to every buffer binding target, and you can even bind the same buffer to different targets at the same time, so it is even OK if attributes and index data come from the same buffer. However, the GL implementation might do some optimizations based on the observed behavior of the application, so you might end up with sub-optimal performance if you suddenly change the usage of an existing buffer object with such an approach, or use it for two things at once.
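To illustrate, a sketch of the "same buffer, two targets at once" case (the attribute layout and the offset/count variables are assumptions):
// One and the same buffer object, bound to two targets simultaneously.
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)attribOffset);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferID);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)indexOffset);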
Update
The ARB_vertex_buffer_object extension spec, which introduced the concept of buffer objects to OpenGL, mentions this topic in the "Issues" section:
Should this extension include support for allowing vertex indices to be stored in buffer objects?
RESOLVED: YES. It is easily and cleanly added with just the
addition of a binding point for the index buffer object. Since
our approach of overloading pointers works for any pointer in GL,
no additional APIs need be defined, unlike in the various
*_element_array extensions.
Note that it is expected that implementations may have different
memory type requirements for efficient storage of indices and
vertices. For example, some systems may prefer indices in AGP
memory and vertices in video memory, or vice versa; or, on
systems where DMA of index data is not supported, index data must
be stored in (cacheable) system memory for acceptable
performance. As a result, applications are strongly urged to
put their models' vertex and index data in separate buffers, to
assist drivers in choosing the most efficient locations.
The reasoning that some implementations might prefer to keep index buffers in system RAM seems quite outdated, though.
While completely legal, it's sometimes discouraged to have attribute data and index data in the same buffer. I suspect that this is mostly based on a paragraph in the spec document (e.g. page 49 of the OpenGL 3.3 spec, at the end of the section "2.9.7 Array Indices in Buffer Objects"):
In some cases performance will be optimized by storing indices and array data in separate buffer objects, and by creating those buffer objects with the corresponding binding points.
While it seems plausible that it could be harmful to performance, I would be very interested to see benchmark results on actual platforms showing it. Attribute data and index data are used at the same time and with the same access operations (CPU writes, or blits from temporary storage, to fill the buffer; GPU reads during rendering). So I can't think of a very good reason why they would need to be treated differently.
The only difference I can think of is that the index data is always read sequentially, while the attribute data is read out of order during indexed rendering. So it might be possible to apply different caching attributes for performance tuning the access in both cases.

Using separate vertex buffer for dynamic and static objects in DirectX11

Are there any benefits of having separate vertex buffers for static and dynamic objects in a DirectX 11 application? My approach is to have the vertices of all objects in a scene stored in the same vertex buffer.
However, I will only have to re-map a small number of objects (1 to 5) of the whole collection (up to 200 objects). The majority of objects are static and will not be transformed in any way. What is the best approach for doing this?
Mapping a big vertex buffer with discard forces the driver to allocate new memory every frame. Up to ~4 frames can be in flight, so there can be up to 4 copies of that buffer. This can lead to memory overcommitment and stuttering. For example, AMD advises discarding only vertex buffers of up to 4 MB (GCN Performance Tweets). Besides, you would needlessly copy the static data to a new vertex buffer every time.
Mapping with no-overwrite should work better. It would require managing the memory manually, so you don't overwrite data that is still in flight. I'm not sure about the performance implications, but for sure this isn't a recommended path.
The best approach is to simplify the driver's work by providing as many hints as possible: create static vertex buffers with the immutable usage flag, long-lived ones with the default flag, and dynamic ones with the dynamic flag. See vendor guides like GCN Performance Tweets or Don’t Throw it all Away: Efficient Buffer Management for additional details.
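A sketch of what those hints look like in D3D11 (buffer names, sizes, and data pointers are placeholders):
// Static geometry: filled once at creation, never mapped again.
D3D11_BUFFER_DESC staticDesc = {};
staticDesc.ByteWidth = staticBytes;
staticDesc.Usage     = D3D11_USAGE_IMMUTABLE;
staticDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = staticVertices;
device->CreateBuffer(&staticDesc, &init, &staticVB);

// Dynamic geometry: CPU-writable, mapped (and possibly discarded) every frame.
D3D11_BUFFER_DESC dynamicDesc = {};
dynamicDesc.ByteWidth      = dynamicBytes;
dynamicDesc.Usage          = D3D11_USAGE_DYNAMIC;
dynamicDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
dynamicDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
device->CreateBuffer(&dynamicDesc, NULL, &dynamicVB);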

How do I deal with many variables per triangle in OpenGL?

I'm working with OpenGL and am not totally happy with the standard method of passing values per triangle (or, in my case, per quad) to the fragment shader: assigning them to each vertex of the primitive and passing them through the vertex shader, where they are needlessly interpolated (unless the "flat" qualifier is used), in other words values that are non-varying per fragment.
Is there some way to store a value per triangle (or quad) that needs to be accessed in the fragment shader, in such a way that you don't need redundant copies of it per vertex? If so, is this better than the likely overhead of 3x (or 4x) the data and the CPU-side code to move it?
I am aware of using geometry shaders to spread the values out to new vertices, but I have heard geometry shaders are terribly slow on older hardware. Is that the case?
OpenGL fragment language supports the gl_PrimitiveID input variable, which will be the index of the primitive for the currently processed fragment (starting at 0 for each draw call). This can be used as an index into some data store which holds per-primitive data.
Depending on the amount of data that you will need per primitive, and the number of primitives in total, different options are available. For a small number of primitives, you could just set up a uniform array and index into that.
For a reasonably high number of primitives, I would suggest using a texture buffer object (TBO). This is basically an ordinary buffer object, which can be accessed read-only at random locations via the texelFetch GLSL operation. Note that TBOs are not really textures, they only reuse the existing texture object interface. Internally, it is still a data fetch from a buffer object, and it is very efficient with none of the overhead of the texture pipeline.
The only issue with this approach is that you cannot easily mix different data types. You have to define a base data type for your TBO, and every fetch will get you the data in that format. If you just need some floats/vectors per primitive, this is not a problem at all. If you e.g. need some ints and some floats per primitive, you could either use different TBOs, one for each type, or with modern GLSL (>=3.30), you could use an integer type for the TBO and reinterpret the integer bits as floating point with intBitsToFloat(), so you can get around that limitation, too.
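A sketch of that mixed-type trick in a fragment shader (the data layout is made up):
#version 330
uniform isamplerBuffer perPrimitiveData;  // one packed ivec4 texel per triangle

out vec4 fragColor;

void main()
{
    // gl_PrimitiveID counts primitives within the current draw call.
    ivec4 raw = texelFetch(perPrimitiveData, gl_PrimitiveID);
    int   materialId = raw.x;                  // used directly as an integer
    float shade      = intBitsToFloat(raw.y);  // float carried through the int TBO
    fragColor = vec4(vec3(shade), float(materialId));
}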
You can also use one element of a vertex attribute array for multiple vertices via instanced vertex attributes (glVertexAttribDivisor): with a divisor of 1, an attribute advances once per instance instead of once per vertex, so drawing each primitive as an instance effectively gives you per-primitive attributes.
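A sketch of that setup (the attribute index and the per-primitive buffer are assumptions):
// Attribute 3 holds one vec4 per instance instead of per vertex.
glBindBuffer(GL_ARRAY_BUFFER, perPrimitiveVBO);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(3);
glVertexAttribDivisor(3, 1);  // advance once per instance, not once per vertex

// Drawing each primitive as its own instance makes the attribute per-primitive.
glDrawArraysInstanced(GL_TRIANGLES, 0, 3, primitiveCount);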

What does glTexStorage do?

The documentation indicates that this "allocates" storage for a texture and its levels. The pseudocode provided seems to indicate that this is for the mipmap levels.
How does usage of glTexStorage relate to glGenerateMipmap? glTexStorage seems to "lock" a texture's storage size. It seems to me this would only serve to make things less flexible. Are there meant to be performance gains to be had here?
It's pretty new and only available in 4.2 so I'm going to try to avoid using it, but I'm confused because its description makes it sound kind of important.
How is storage for textures managed in earlier GL versions? When I call glTexImage2D, I effectively erase and free the storage previously associated with the texture handle, yes? And generating mipmaps also handles storage for me automatically.
I remember using the old-school glTexSubImage2D method to implement OpenGL 2-style render-to-texture to do some post-process effects in my previous engine experiment. It makes sense that glTexStorage will bring about a more sensible way of managing texture-related resources, now that we have better ways to do RTT.
To understand what glTexStorage does, you need to understand what the glTexImage* functions do.
glTexImage2D does three things:
It allocates OpenGL storage for a specific mipmap layer, with a specific size. For example, you could allocate a 64x64 2D image as mipmap level 2.
It sets the internal format for the mipmap level.
It uploads pixel data to the texture. The last step is optional; if you pass NULL as the pointer value (and no buffer object is bound to GL_PIXEL_UNPACK_BUFFER), then no pixel transfer takes place.
Creating a mipmapped texture by hand requires a sequence of glTexImage calls, one for each mipmap level. Each of the sizes of the mipmap levels needs to be the proper size based on the previous level's size.
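For example, allocating a complete chain for a 64x64 base level by hand could look like this (NULL means no pixel data is uploaded, only storage is defined):
// Levels 0..6: 64, 32, 16, 8, 4, 2, 1 - each level half the previous one.
GLsizei size = 64;
for (GLint level = 0; size >= 1; ++level, size /= 2)
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);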
Now, if you look at section 3.9.14 of the GL 4.2 specification, you will see two pages of rules that a texture object must follow to be "complete". A texture object that is incomplete cannot be accessed by a shader.
Among those rules are things like, "mipmaps must have the appropriate size". Take the example I gave above: a 64x64 2D image, which is mipmap level 2. It would be perfectly valid OpenGL code to allocate a mipmap level 1 that used a 256x256 texture. Or a 16x16. Or a 10x345. All of those would be perfectly functional as far as source code is concerned. Obviously they would produce nonsense as a texture, since the texture would be incomplete.
Again consider the 64x64 mipmap 2. I create that as my first image. Now, I could create a 128x128 mipmap 1. But I could also create a 128x129 mipmap 1. Both of these are completely consistent with the 64x64 mipmap level 2 (mipmap sizes always round down). While they are both consistent, they're also both different sizes. If a driver has to allocate the full mipmap chain at once (which is entirely possible), which size does it allocate? It doesn't know. It can't know until you explicitly allocate the rest.
Here's another problem. Let's say I have a texture with a full mipmap chain. It is completely texture complete, according to the rules. And then I call glTexImage2D on it again. Now what? I could accidentally change the internal format. Each mipmap level has a separate internal format; if they don't all agree, then the texture is incomplete. I could accidentally change the size of the texture, again making the texture incomplete.
glTexStorage prevents all of these possible errors. It creates all the mipmaps you want up-front, given the base level's size. It allocates all of those mipmaps with the same image format, so you can't screw that up. It makes the texture immutable, so you can't come along and try to break the texture with a bad glTexImage2D call. And it prevents other errors I didn't even bother to cover.
The question isn't "what does glTexStorage do?" The question is "why did we go so long without it."
glTexStorage has no relation to glGenerateMipmap; they are orthogonal functionality. glTexStorage does exactly what it says: it allocates texture storage space. It does not fill that space with anything. So it creates a texture with a given size filled with uninitialized data. Much like glRenderbufferStorage allocates a renderbuffer with a given size filled with uninitialized data. If you use glTexStorage, you need to upload data with glTexSubImage (since glTexImage is forbidden on an immutable texture).
glTexStorage creates space for mipmaps. glGenerateMipmap creates the mipmap data itself (the smaller versions of the base layer). It can also create space for mipmaps if that space doesn't already exist. They're used for two different things.
Before calling glGenerateMipmap, the base mipmap level must be established (with either mutable or immutable storage). So you can also just use glTexImage2D + glGenerateMipmap, which is simpler.
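Putting the pieces together, the immutable-storage path could look like this (the 64x64 size and the pixels pointer are placeholders):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// Allocate the complete mipmap chain (7 levels for 64x64) in one immutable call.
glTexStorage2D(GL_TEXTURE_2D, 7, GL_RGBA8, 64, 64);

// Upload the base level; calling glTexImage2D here would be an error now.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 64, 64,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Fill levels 1..6 with downscaled versions of the base level.
glGenerateMipmap(GL_TEXTURE_2D);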