How do I deal with many variables per triangle in OpenGL?

I'm working with OpenGL and am not totally happy with the standard method of passing values PER TRIANGLE (or, in my case, per quad) that need to make it to the fragment shader: assigning them to each vertex of the primitive and passing them through the vertex shader, where they are presumably interpolated unnecessarily (unless the "flat" qualifier is used), even though they are non-varying per fragment.
Is there some way to store a value PER triangle (or quad) that needs to be accessed in the fragment shader, without keeping redundant copies of it per vertex? If so, is this way better than the likely overhead of moving 3x (or 4x) the data on the CPU side?
I am aware of using geometry shaders to spread the values out to new vertices, but I have heard geometry shaders are terribly slow on older hardware. Is this the case?

The OpenGL fragment language supports the gl_PrimitiveID input variable, which holds the index of the primitive for the currently processed fragment (starting at 0 for each draw call). This can be used as an index into some data store holding per-primitive data.
Depending on the amount of data that you will need per primitive, and the number of primitives in total, different options are available. For a small number of primitives, you could just set up a uniform array and index into that.
For a reasonably high number of primitives, I would suggest using a texture buffer object (TBO). This is basically an ordinary buffer object, which can be accessed read-only at random locations via the texelFetch GLSL operation. Note that TBOs are not really textures, they only reuse the existing texture object interface. Internally, it is still a data fetch from a buffer object, and it is very efficient with none of the overhead of the texture pipeline.
The only issue with this approach is that you cannot easily mix different data types. You have to define a base data type for your TBO, and every fetch will get you the data in that format. If you just need some floats/vectors per primitive, this is not a problem at all. If you e.g. need some ints and some floats per primitive, you could either use different TBOs, one for each type, or with modern GLSL (>=3.30), you could use an integer type for the TBO and reinterpret the integer bits as floating point with intBitsToFloat(), so you can get around that limitation, too.
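For illustration, here is a minimal GLSL fragment shader sketch of the TBO approach, assuming the buffer holds one RGBA32F texel per primitive and is bound to the hypothetical sampler perPrimitiveColors:

    #version 330 core

    // Texture buffer object holding one texel per primitive (hypothetical name).
    uniform samplerBuffer perPrimitiveColors;

    out vec4 fragColor;

    void main()
    {
        // gl_PrimitiveID starts at 0 for each draw call.
        fragColor = texelFetch(perPrimitiveColors, gl_PrimitiveID);
    }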

You can use one element in the vertex array for rendering multiple vertices: with instanced vertex attributes (set up via glVertexAttribDivisor), an attribute advances once per instance rather than once per vertex.
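As a rough sketch of how that looks on the C side (COLOR_LOC, colorVBO, vertsPerQuad and quadCount are hypothetical names; requires GL 3.3 or ARB_instanced_arrays):

    // Feed one vec4 per instance instead of one per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
    glEnableVertexAttribArray(COLOR_LOC);
    glVertexAttribPointer(COLOR_LOC, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glVertexAttribDivisor(COLOR_LOC, 1); // advance once per instance, not per vertex
    glDrawArraysInstanced(GL_TRIANGLE_FAN, 0, vertsPerQuad, quadCount);

Note that the attribute advances once per instance, not per primitive, so this fits when each quad is drawn as its own instance.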

Related

Confusion About glVertexAttrib... Functions

After a lot of searching, I still am confused about what the glVertexAttrib... functions (glVertexAttrib1d, glVertexAttrib1f, etc.) do and what their purpose is.
My current understanding from reading this question and the documentation is that their purpose is to somehow set a vertex attribute as constant (i.e. don't use an array buffer). But the documentation also talks about how they interact with "generic vertex attributes" which are defined as follows:
Generic attributes are defined as four-component values that are organized into an array. The first entry of this array is numbered 0, and the size of the array is specified by the implementation-dependent constant GL_MAX_VERTEX_ATTRIBS. Individual elements of this array can be modified with a glVertexAttrib call that specifies the index of the element to be modified and a value for that element.
It says that they are all "four-component values", yet it is entirely possible to have more or fewer components than that in a vertex attribute.
What is this saying exactly? Does this only work for vec4 types? What would be the index of a "generic vertex attribute"? A clear explanation is probably what I really need.
In OpenGL, a vertex is specified as a set of vertex attributes. With the advent of the programmable pipeline, you are responsible for writing your own vertex processing functionality. The vertex shader processes one vertex and gets this specific vertex's attributes as input.
These vertex attributes are called generic vertex attributes, since their meaning is completely defined by you as the application programmer (in contrast to the legacy fixed-function pipeline, where the set of attributes was completely defined by the GL).
The OpenGL spec requires implementors to support at least 16 different vertex attributes. So each vertex attribute can be identified by its index from 0 to 15 (or whatever limit your implementation allows, see glGet(GL_MAX_VERTEX_ATTRIBS,...)).
A vertex attribute is conceptually treated as a four-component vector. When you declare an attribute as smaller than vec4 in a shader, the additional components are just ignored. If you supply fewer than 4 components, the missing ones are filled in from (0, 0, 0, 1), which makes sense for both RGBA color vectors and homogeneous vertex coordinates.
Though you can declare vertex attributes of mat types, these are just mapped to a number of consecutive vertex attribute indices.
The vertex attribute data can come either from a vertex array (nowadays, these are required to lie in a Vertex Buffer Object, possibly directly in VRAM; in legacy GL, they could also come from the ordinary client address space) or from the current value of that attribute.
You enable fetching from attribute arrays via glEnableVertexAttribArray. If a vertex array for a particular attribute you access in your vertex shader is enabled, the GPU will fetch the i-th element from that array when processing vertex i. For all other attributes you access, you will get the current value of that attribute.
The current value can be set via the glVertexAttrib[1234]* family of GL functions. It cannot be changed during a draw call, so it remains constant for the whole draw call - just like a uniform variable.
One important thing worth noting is that, by default, vertex attributes are always floating-point, and you must declare them as float/vec2/vec3/vec4 in the vertex shader to access them. Setting the current value with, for example, glVertexAttrib4ubv, or using GL_UNSIGNED_BYTE as the type parameter of glVertexAttribPointer, will not change this. The data will be automatically converted to floating-point.
Nowadays, the GL does support two other attribute data types, though: 32-bit integers and 64-bit double-precision floating-point values. You have to declare them as int/ivec*, uint/uvec* or double/dvec* respectively in the shader, and you have to use completely separate functions when setting up the array pointer or current values: glVertexAttribIPointer and glVertexAttribI* for signed/unsigned integers, and glVertexAttribLPointer and glVertexAttribL* for doubles ("long floats").
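A small sketch of both mechanisms side by side (attribute indices 3 and 4 are arbitrary; assumes a bound VAO/VBO):

    // Attribute 3: no array enabled, so the shader sees this constant current value.
    glDisableVertexAttribArray(3);
    glVertexAttrib4f(3, 1.0f, 0.0f, 0.0f, 1.0f); // constant for the whole draw call

    // Attribute 4: a true integer attribute, declared as uint in the vertex shader.
    glEnableVertexAttribArray(4);
    glVertexAttribIPointer(4, 1, GL_UNSIGNED_INT, 0, (void*)0);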

OpenGL: How to pass vectors with variable size to shaders?

What is an efficient way to pass a variably sized data (which is not unique for each vertex, but rather is common to groups of vertices) into shaders?
For example, I have three polygons with different numbers of vertices. Each polygon has a different color and is colored uniformly (every fragment of a polygon has the same color). It seems inefficient to pass the polygon's color in the vertex attributes as a vector containing the same color values for each vertex of the polygon. My thought is to assign each polygon a unique number (starting from zero), create a vector with three color values (one color per polygon), and pass for each vertex not a color value (four floats) but the number of the polygon it belongs to (one integer), and have the shader fetch the polygon color from the color vector using the polygon number as the index.
The color values for polygons can be passed in a vector of a variable length (vector whose size is not specified during compilation) through a "Shader Storage Block" where it can be accessed by the shader.
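A minimal sketch of that shader storage block approach (requires GL 4.3; polygonIndex is the hypothetical per-vertex polygon number, passed through flat from the vertex shader):

    #version 430 core

    // Unsized array: its length is determined by the size of the bound buffer.
    layout(std430, binding = 0) buffer PolygonColors {
        vec4 colors[];
    };

    flat in int polygonIndex; // hypothetical integer passed from the vertex shader

    out vec4 fragColor;

    void main()
    {
        fragColor = colors[polygonIndex];
    }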
Is there another method (maybe a commonly used technique) for passing variably sized vectors to shaders?
This is precisely the use case of Buffer Texture Objects.
The easiest approach is to manage the color apart from the vertex data and pass it as a uniform to the shader - but only if the color is the same for each vertex in the vector. If, however, the color changes for every primitive (or even every vertex), don't bother with indexing: the overhead implied by the computation and indexing is way worse than simply flushing the data on a per-vertex basis to your video memory. (Unless this is already your bottleneck, which it shouldn't really be on today's hardware.)
To rapidly flush the data from the vector for rendering, the best approach is to create a vertex buffer object.
You basically create a large pool of vertices (and indices, depending on your precise use). Using glDrawElements (or glDrawArrays) you can control how many primitives of what type shall be rendered.
Reserve a vertex buffer object with the maximum size you expect to need, in advance, so you avoid the overhead of recreating it again and again. Initialize the number of primitives to render to 0. Each time the vector changes, copy the data to the vertex buffer object, from the first vertex in the vector to the last, and update the number of primitives to be rendered. That's it; the only overhead left is flushing the vector to the vertex buffer object. No complex bookkeeping or similar is required.
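A sketch of that scheme, assuming a hypothetical Vertex struct, a worst-case count maxVerts, and CPU-side data in vertData/vertCount:

    // Reserve the worst-case size once.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, maxVerts * sizeof(Vertex), NULL, GL_DYNAMIC_DRAW);

    // Each time the CPU-side vector changes, flush it into the buffer...
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertCount * sizeof(Vertex), vertData);

    // ...and at draw time render only what is currently valid.
    glDrawArrays(GL_TRIANGLES, 0, vertCount);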

What is the purpose of OpenGL texture buffer objects?

We use buffer objects to reduce copy operations between CPU and GPU, and with texture buffer objects we can repurpose a buffer object's data from vertex data to texture data. Is there any other advantage of texture buffer objects? Also, they do not allow filtering - is that a disadvantage?
A buffer texture is similar to a 1D-texture but has a backing buffer store that's not part of the texture object (in contrast to any other texture object) but realized with an actual buffer object bound to TEXTURE_BUFFER. Using a buffer texture has several implications and, AFAIK, one use-case that can't be mapped to any other type of texture.
Note that a buffer texture is not a buffer object - a buffer texture is merely associated with a buffer object using glTexBuffer.
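A minimal setup sketch of that association (GL 3.1+; dataSize and data are hypothetical):

    GLuint buf, tex;

    // Create and fill the buffer object that provides the storage.
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TEXTURE_BUFFER, buf);
    glBufferData(GL_TEXTURE_BUFFER, dataSize, data, GL_STATIC_DRAW);

    // Create the buffer texture and associate it with the buffer.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_BUFFER, tex);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, buf); // view the buffer as RGBA8 texels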
By comparison, buffer textures can be huge. Table 23.53 and following of the core OpenGL 4.4 spec define a minimum maximum (i.e. the minimal value that implementations must provide) for the number of texels, MAX_TEXTURE_BUFFER_SIZE. The potential number of texels stored in your buffer object is computed as follows (as found in GL_ARB_texture_buffer_object):
floor(<buffer_size> / (<components> * sizeof(<base_type>)))
The resulting value, clamped to MAX_TEXTURE_BUFFER_SIZE, is the number of addressable texels.
Example:
You have a buffer object storing 4MiB of data. What you want is a buffer texture for addressing RGBA texels, so you choose an internal format RGBA8. The addressable number of texels is then
floor(4MiB / (4 * sizeof(UNSIGNED_BYTE))) == 1024^2 texels == 2^20 texels
If your implementation supports this number, you can address the full range of values in your buffer object. The above isn't too impressive and can simply be achieved with any other texture on current implementations. However, the machine on which I'm writing this answer supports 2^28 == 268435456 texels.
With OpenGL 4.4 (and 4.3, and possibly earlier 4.x versions), MAX_TEXTURE_SIZE is 2^16 texels per 1D texture, so a buffer texture can still be 4 times as large. On my local machine I can allocate a 2GiB buffer texture (even larger, actually), but only a 1GiB 1D texture when using RGBA32F texels.
A use-case for buffer textures is random (and atomic, if desired) read/write access (the latter via image load/store) to a large data store inside a shader. Yes, you can do random read access on arrays of uniforms inside one or multiple blocks, but it gets very tedious if you have to process a lot of data and work with multiple blocks. And even then, looking at the maximum combined size of all uniform components (where a single float component has a size of 4 bytes) in all uniform blocks for a single stage,
MAX_(stage)_UNIFORM_BLOCKS *
MAX_UNIFORM_BLOCK_SIZE +
MAX_(stage)_UNIFORM_COMPONENTS * 4
isn't really a lot of space to work with in a shader stage (depending on how large your implementation allows the above number to be).
An important difference between textures and buffer textures is that the data store, as a regular buffer object, can be used in operations where a texture simply does not work. The extension mentions:
The use of a buffer object to provide storage allows the texture data to
be specified in a number of different ways: via buffer object loads
(BufferData), direct CPU writes (MapBuffer), framebuffer readbacks
(EXT_pixel_buffer_object extension). A buffer object can also be loaded
by transform feedback (NV_transform_feedback extension), which captures
selected transformed attributes of vertices processed by the GL. Several
of these mechanisms do not require an extra data copy, which would be
required when using conventional TexImage-like entry points.
An implication of using buffer textures is that look-ups inside a shader can only be done via texelFetch. Buffer textures also aren't mip-mapped and, as you already mentioned, during fetches there is no filtering.
Addendum:
Since OpenGL 4.3, we also have what is called a Shader Storage Buffer. These too provide random (atomic) read/write access to a large data store, but don't need to be accessed with texelFetch() or image load/store functions as is the case for buffer textures. Using buffer textures also implies having to deal with gvec4 return values, both with texelFetch() and with imageLoad() / imageStore(). This becomes very tedious as soon as you want to work with structures (or arrays thereof), and you don't want to think up some silly packing scheme using multiple instances of vec4, or use multiple buffer textures to achieve something similar. With a buffer accessed as shader storage, you can simply index into the data store and pull one or more instances of some struct {} directly from the buffer.
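For instance, a fragment shader might pull a whole struct out of a shader storage block in one go (a sketch; the Light struct and binding point are made up):

    #version 430 core

    struct Light {
        vec4 position;
        vec4 color;
    };

    // One indexing expression yields a whole struct - no gvec4 unpacking needed.
    layout(std430, binding = 2) buffer Lights {
        Light lights[];
    };

    out vec4 fragColor;

    void main()
    {
        fragColor = lights[0].color; // hypothetical single-light lookup
    }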
Also, since they are very similar to uniform blocks, using them should be fairly straightforward - if you know how to use uniform buffers, it's not a long way to learning how to use shader storage buffers.
It's also absolutely worth browsing the Issues section of the corresponding ARB extension.
Performance Implications
Daniel Rakos did some performance analysis years ago, both as a comparison of uniform buffers and buffer textures, and also on a somewhat more general note based on information from AMD's OpenCL programming guide. There is now a very recent version, specifically targeting OpenCL optimization on AMD platforms.
There are many factors influencing performance:
access patterns and resulting caching behavior
cache line sizes and memory layout
what kind of memory is accessed (registers, local, global, L1/L2 etc.) and its respective memory bandwidth
how well memory fetching latency is hidden by doing something else in the meantime
what kind of hardware you're on, i.e. a dedicated graphics card with dedicated memory or some unified memory architecture
etc., etc.
As always when worrying about performance: implement something that works and see if that solution is fast enough for your needs. Otherwise, implement two or more approaches to solving the problem, profile them, and compare.
Also, vendor specific guides can offer a great deal of insight. The above mentioned OpenCL user and optimization guides provide a high-level architectural perspective and specific hints on how to optimize your CL kernels - stuff that's also relevant when developing shaders.
One use case I have found was storing per-primitive attributes (accessed in the fragment shader with the help of gl_PrimitiveID) while still maintaining unique vertices in an indexed mesh.

OpenGL multiple VBO

If I have two different primitive types which are drawn dynamically according to user input, should I use 2 separate VBOs or is there a way of using one and of somehow identifying where the different primitive vertices begin and end?
The primitive types are triangle strips and points
Either method works. When using a VBO, the data parameter of the gl…Pointer functions is interpreted as a numeric byte offset into the VBO. It's perfectly possible to put several batches of vertex attributes into a single VBO and point OpenGL to the sections within the VBO by their offsets.
So before each glDraw… call, use the right gl…Pointer calls to set the right offsets. Or better yet (if supported, i.e. you've got a new enough OpenGL version): use Vertex Array Objects to encapsulate a whole set of gl…Pointer settings into one abstract object that can be bound with a single OpenGL call.
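A sketch of the offset-based variant, assuming the strip vertices are stored first and the point vertices right after them (stripVerts and pointVerts are hypothetical counts; positions are 3 floats each):

    GLsizeiptr stripBytes = (GLsizeiptr)stripVerts * 3 * sizeof(GLfloat);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Batch 1: the triangle strip, starting at offset 0.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, stripVerts);

    // Batch 2: the points, starting right after the strip data.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)stripBytes);
    glDrawArrays(GL_POINTS, 0, pointVerts);

Alternatively, keep one gl…Pointer setup for the whole VBO and vary the first parameter of glDrawArrays instead.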

GLSL per vertex fixed size array

Is it possible in desktop GLSL to pass a fixed size array of floats to the vertex shader as an attribute? If yes, how?
I want to have per vertex weights for character animation so I would like to have something like the following in my vertex shader:
attribute float weights[25];
How would I fill the attribute array from my C++ & OpenGL program? I have seen in another question that I could get the attribute location of the array attribute and then just add the index to that location. Could someone give an example on that for my pretty large array?
Thanks.
Let's start with what you asked for.
On pretty much no hardware that currently exists will attribute float weights[25]; compile. While shaders can have arrays of attributes, each array index represents a new attribute index. And on all hardware that currently exists, the maximum number of attribute indices is... 16. You'd need 25, and that's just for the weights.
Now, you can mitigate this easily enough by remembering that you can use vec4 attributes. Thus, you store every four array elements in a single attribute. Your array would be attribute vec4 weights[7];, which is doable. Your weight-fetching logic will have to change, of course.
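For example, the packed declaration could look like this legacy-GLSL sketch (element i of the conceptual 25-float array lives at weights[i / 4][i % 4]):

    #version 120

    attribute vec4 position;
    attribute vec4 weights[7]; // 25 weights packed into 7 vec4s, last 3 slots unused

    void main()
    {
        float w0  = weights[0].x;            // weight 0
        float w10 = weights[10 / 4][10 % 4]; // weight 10
        gl_Position = position * (w0 + w10); // placeholder use of two weights
    }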
Even so, you don't seem to be taking into account the ramifications of what this would actually mean for your vertex data. Each attribute represents a component of a vertex's data. Every vertex in a rendering call has the same amount of data; the contents of that data differ, but not how much of it there is.
In order to do what you're suggesting, every vertex in your mesh would need 25 floats describing the weights. Even if these were stored as normalized unsigned bytes, that's still 25 extra bytes of data per vertex at a minimum. That's a lot, especially considering that for the vast majority of vertices, most of these values will be 0. Even in the worst case, you'd be looking at maybe 6-7 bones affecting a single vertex.
The way skinning is generally done in vertex shaders is to limit the number of bones that affects a single vertex to four. This way, you don't use an array of attributes; you just use a vec4 attribute for the weights. Of course, you also now need to say which bone is associated with which weight. So you have a second vec4 attribute that specifies the bone index for that weight.
This strikes a good balance. You only take up 2 extra attributes (which can be unsigned bytes in terms of size). And for the vast majority of vertices, you'll never even notice, because most vertices are influenced by only 1-3 bones. A few use 4, and fewer still use 5+. In those cases, you just cut off the lowest weights and recompute the weights of the others proportionately.
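A vertex shader sketch of this standard layout (the palette size and viewProj uniform are made up; boneIndices must be set up with glVertexAttribIPointer):

    #version 330 core

    layout(location = 0) in vec3 position;
    layout(location = 1) in vec4 boneWeights;  // normalized so they sum to 1
    layout(location = 2) in uvec4 boneIndices; // one bone index per weight

    uniform mat4 bones[64]; // hypothetical palette size
    uniform mat4 viewProj;  // hypothetical combined view-projection matrix

    void main()
    {
        mat4 skin = boneWeights.x * bones[boneIndices.x]
                  + boneWeights.y * bones[boneIndices.y]
                  + boneWeights.z * bones[boneIndices.z]
                  + boneWeights.w * bones[boneIndices.w];
        gl_Position = viewProj * skin * vec4(position, 1.0);
    }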
Nicol Bolas already gave you an answer on how to restructure your task. You should do it, because processing 25 floats per vertex, probably through some quaternion multiplication, will waste a lot of good GPU processing power; most of the attributes of a vertex will amount to something close to an identity transform anyway.
However, for academic reasons, I'm going to tell you how to pass 25 floats per vertex. The key is not using attributes for this, but fetching the data from some buffer - a texture. The GLSL vertex shader stage has the built-in variable gl_VertexID, which holds the index of the currently processed vertex. With recent OpenGL you can access textures from the vertex shader as well. So you could have a texture of size vertex_count × 25 holding the values. In your vertex shader you can access them using the texelFetch function, e.g. texelFetch(param_buffer, ivec2(gl_VertexID, 3), 0);
If used in skeletal animation, this system is often referred to as texture skinning. However, it should be used sparingly, as it's a real performance hog. But sometimes you can't avoid it, for example when implementing a facial animation system where you have to weight all the vertices against 26 muscles if you want to accurately simulate a human face.