What is the difference between uniforms and constant buffers?
Are they completely separate or can the uniforms be seen as in a constant buffer? In other words, if you want to set a uniform, do you need a constant buffer or is there another way?
I ask because I have four variables (float2 pan, float scale and float rotation) that will in all likelihood not be changing at the same time. Do I need a constant buffer to set them all at once or is it better to set them individually, if possible?
A uniform is a variable that is assigned from outside the shader, by your application; a const declaration, in contrast, makes the variable an unchangeable compile-time constant. "Constant buffer" is Direct3D terminology; the OpenGL equivalent is a uniform block backed by a uniform buffer object, whereas plain uniforms live in the default uniform block and are set individually with the glUniform* calls. Since your values will not all change at the same time, individual uniform declarations will work well here. Check out this page for resources: http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/uniform.php
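For illustration, a minimal vertex-shader sketch using the question's three values as individually settable uniforms (GLSL 3.30 assumed; each can be updated on its own via glUniform2f/glUniform1f without touching the others):

```glsl
#version 330 core

// Each uniform can be updated independently via glUniform* calls;
// updating one does not force the others to be re-uploaded.
uniform vec2  pan;
uniform float scale;
uniform float rotation;

layout(location = 0) in vec2 position;

void main() {
    float c = cos(rotation), s = sin(rotation);
    // Scale, then rotate, then pan (column-major mat2 constructor).
    vec2 p = mat2(c, s, -s, c) * (position * scale) + pan;
    gl_Position = vec4(p, 0.0, 1.0);
}
```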
So I have one huge VBO where all my models are stored, and a bunch of draw calls ready to submit to glMultiDrawArraysIndirect.
I also have a uniform block full of matrices so ship A goes to position A, ship B goes to position B, etc.
My question is - how does one make glsl aware which draw call is which? I tried changing the baseInstance variable but that doesn't seem to affect gl_InstanceID, which also starts at 0 for every draw call. After reading further on the khronos page, it seems like this variable won't affect anything.
So what is the proper way to include matrices so each draw call draws things at different positions?
If you have GL 4.6/ARB_shader_draw_parameters, then you have access to gl_DrawID, which is exactly what it sounds like: the zero-based index of the draw in any multi-draw command. It's also guaranteed to be dynamically uniform, so you can use it to access texture arrays (that is, sampler2D texarray[5];, not sampler2DArray texarray;) and other things that require dynamically uniform values.
If you don't... then your best bet is to create an instance array that contains indices, starting with 0. Your VS will have an input corresponding to this value. gl_InstanceID is not affected by the base instance, but the value fetched from an instance array is affected by it. So it will give you a proper index, at the cost of having a seemingly pointless value lying around.
Also, such a value will not be dynamically uniform. But that's usually not a big deal. It's only a problem if you want to access texture arrays.
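A minimal vertex-shader sketch of the gl_DrawID approach, assuming GL 4.6 and a hypothetical 256-entry matrix block (adjust the array size and binding point to your setup):

```glsl
#version 460 core

// Hypothetical uniform block of per-draw matrices; the size and
// binding point are assumptions, adjust them to your application.
layout(std140, binding = 0) uniform Matrices {
    mat4 model[256];
};

layout(location = 0) in vec3 position;

void main() {
    // gl_DrawID is the zero-based index of the draw within the
    // multi-draw command, and is guaranteed dynamically uniform.
    gl_Position = model[gl_DrawID] * vec4(position, 1.0);
}
```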
I'm trying to make an array of vec3 available to a fragment shader. In the targeted application, there could be several hundred elements.
I tested transferring data in the form of a shader storage buffer object, declared as
layout(binding = 0) buffer voxels { vec3 xyz[]; };
and set using glBufferData, but I found that my fragment shader becomes very slow, even with only 33 elements.
Moreover, when I convert the same data into the GLSL code of a const vec3[] and include it in the shader code, the shader becomes noticeably faster.
Is there a better way – faster than an SSBO and more elegant than creating shader code?
As might already be apparent from the above, the array is only read from in the shader. It is constant within the shader as well as over shader invocations for different fragments, so effectively a uniform, and it is set only once or a few times over the runtime of the program.
I'd recommend using the std430 layout qualifier on the SSBO. Note that even with std430, an array of vec3 still has a 16-byte element stride, so if you want tight packing, declare the array as float[] and fetch three components per element. In general, if the buffer is a fixed size, prefer glBufferSubData over glBufferData (the latter may reallocate the buffer's storage on the GPU).
As yet another alternative, if you are able to target GL 4.4+, consider using glBufferStorage instead (or better still, if GL 4.5 is available, use glCreateBuffers and glNamedBufferStorage). This lets you pass a few more hints to the GL driver about the way in which the buffer will be consumed. I'd try out a few options (e.g. mapping vs. sub-data vs. recreating each time).
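Since a vec3 array keeps a 16-byte element stride even under std430, a tightly packed alternative is to declare the SSBO as a float array and assemble the vectors in the shader. A hedged sketch (block and function names are made up):

```glsl
#version 430 core

// Tightly packed alternative: 3 floats per voxel, no per-element padding.
layout(std430, binding = 0) buffer Voxels {
    float xyz[];
};

vec3 voxel(int i) {
    return vec3(xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2]);
}
```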
After a lot of searching, I still am confused about what the glVertexAttrib... functions (glVertexAttrib1d, glVertexAttrib1f, etc.) do and what their purpose is.
My current understanding from reading this question and the documentation is that their purpose is to somehow set a vertex attribute as constant (i.e. don't use an array buffer). But the documentation also talks about how they interact with "generic vertex attributes" which are defined as follows:
Generic attributes are defined as four-component values that are organized into an array. The first entry of this array is numbered 0, and the size of the array is specified by the implementation-dependent constant GL_MAX_VERTEX_ATTRIBS. Individual elements of this array can be modified with a glVertexAttrib call that specifies the index of the element to be modified and a value for that element.
It says that they are all "four-component values", yet it is entirely possible to have more or less components than that in a vertex attribute.
What is this saying exactly? Does this only work for vec4 types? What would be the index of a "generic vertex attribute"? A clear explanation is probably what I really need.
In OpenGL, a vertex is specified as a set of vertex attributes. With the advent of the programmable pipeline, you are responsible for writing your own vertex processing functionality: the vertex shader processes exactly one vertex, and gets that specific vertex's attributes as input.
These vertex attributes are called generic vertex attributes, since their meaning is completely defined by you as the application programmer (in contrast to the legacy fixed-function pipeline, where the set of attributes was completely defined by the GL).
The OpenGL spec requires implementors to support at least 16 different vertex attributes. So each vertex attribute can be identified by its index from 0 to 15 (or whatever limit your implementation allows, see glGet(GL_MAX_VERTEX_ATTRIBS,...)).
A vertex attribute is conceptually treated as a four-component vector. When you declare less than a vec4 in the shader, the additional components are simply ignored. If you supply fewer than 4 components, the missing ones are filled in from (0, 0, 0, 1), which makes sense both for RGBA color vectors and for homogeneous vertex coordinates.
Though you can declare vertex attributes of mat types, this will just be mapped to a number of consecutive vertex attribute indices.
The vertex attribute data can come from either a vertex array (nowadays, these are required to lie in a vertex buffer object, possibly directly in VRAM; in legacy GL, they could also come from ordinary client address space) or from the current value of that attribute.
You enable fetching from attribute arrays via glEnableVertexAttribArray. If the vertex array for an attribute you access in your vertex shader is enabled, the GPU will fetch the i-th element from that array when processing vertex i. For all other attributes you access, you will get the current value of that attribute.
The current value can be set via the glVertexAttrib[1234]* family of GL functions. It cannot be changed during a draw call, so it remains constant for the whole draw call, just like a uniform variable.
One important thing worth noting is that, by default, vertex attributes are always floating-point, and you must declare them as float/vec2/vec3/vec4 in the vertex shader to access them. Setting the current value with, for example, glVertexAttrib4ubv, or using GL_UNSIGNED_BYTE as the type parameter of glVertexAttribPointer, does not change this: the data will be automatically converted to floating point.
Nowadays, the GL does support two other attribute data types, though: 32-bit integers and 64-bit double-precision floating-point values. You have to declare them as int/ivec*, uint/uvec* or double/dvec* respectively in the shader, and you have to use completely separate functions when setting up the array pointer or current values: glVertexAttribIPointer and glVertexAttribI* for signed/unsigned integers, and glVertexAttribLPointer and glVertexAttribL* for doubles ("long floats").
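As an illustration of the three paths, a vertex-shader input block might look like this (a sketch assuming GL 4.1+ with ARB_vertex_attrib_64bit; the attribute names are made up):

```glsl
#version 410 core

layout(location = 0) in vec4  position;   // float path: glVertexAttribPointer
layout(location = 1) in ivec2 cellIndex;  // integer path: glVertexAttribIPointer
layout(location = 2) in dvec3 precisePos; // double path: glVertexAttribLPointer

void main() {
    gl_Position = position;
}
```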
Should I disable shaders attributes when switching to a program shader which uses less (or different locations of) attributes?
I enable and disable these attributes with glEnableVertexAttribArray()/glDisableVertexAttribArray().
Is there any performance impact? Could it introduce bugs? Or would repeatedly enabling/disabling attributes be slower than simply enabling them all and leaving them enabled?
The OP most likely understands the first part already, but let me just reiterate some points on vertex attributes to set the basis for the more interesting part. I'll assume that all vertex data comes from buffers, and not talk about the case where calls like glVertexAttrib3f() are used to set "constant" values for attributes.
The glEnableVertexAttribArray() and glVertexAttribPointer() calls specify which vertex attributes are enabled, and describe how the GPU should retrieve their values. This includes their location in memory, how many components they have, their type, stride, etc. I'll call the collected state specified by these calls "vertex attribute state" in the rest of this answer.
The vertex attribute state is not part of the shader program state. It lives in Vertex Array Objects (VAOs), together with some other related state. Therefore, binding a different program changes nothing about the vertex attribute state. Only binding a different VAO does, or of course making one of the calls above.
Vertex attributes are tied to attribute/in variables in the vertex shader by setting the location of the in variables. This specifies which vertex attribute the value of each in variable should come from. The location value is part of the program state.
Based on this, when binding a different program, it is necessary that the locations of the in variables are properly set to refer to the desired attribute. As long as the same attribute is always used for the shader, this has to be done only once while building the shader. Beyond that, all the attributes used by the shader have to be enabled with glEnableVertexAttribArray(), or by binding a VAO that contains the state.
Now, finally coming to the core of the question: What happens if attributes that are not used by the program are enabled?
I believe that having unused attributes enabled is completely legal. At least I've never seen anything in the spec that says otherwise. I just checked again, and still found nothing. Therefore, there will be no bugs resulting from having unused attributes enabled.
Does it hurt performance? The short answer is that it possibly could. Let's look at two hypothetical hardware architectures:
Architecture A has reading of vertex attribute values baked into the vertex shader code.
Architecture B has a fixed function unit that reads vertex attribute values. This fixed function unit is controlled by the vertex attribute state, and writes the values into on-chip memory, where vertex shader instances pick them up.
With architecture A, having unused attributes enabled would have no effect at all. They would simply never be read.
With architecture B, the fixed function unit might read the unused attributes. The vertex shader would end up not using them, but they could still be read from main/video memory into on-chip memory. The driver could avoid that by checking which attributes are used by the current shader, and set up the fixed function unit with only those attributes. The downside is that the state setup for the fixed function unit has to be checked/updated every time a new shader is bound, which is otherwise unnecessary. But it prevents reading unused attributes from memory.
Going one step farther, let's say we do end up reading unused attributes from memory. If and how much this hurts is impossible to answer in general. Intuitively, I would expect it to matter very little if the attributes are interleaved, and the unused attributes are in the same cache lines as used attributes. On the other hand, if reading unused attributes causes extra cache misses, it would at least use memory bandwidth, and consume power.
In summary, I don't believe there's a clear and simple answer. Chances are that having unused attributes enabled will not hurt at all, or very little. But I would personally disable them anyway. There is a potential that it might make a difference, and it's very easy to do. Particularly if you use VAOs, you can generally set up the whole vertex attribute state with a single glBindVertexArray() call, so enabling/disabling exactly the needed attributes does not require additional API calls.
I'm working with OpenGL and am not totally happy with the standard method of passing values per triangle (or, in my case, per quad) to the fragment shader: assigning them to each vertex of the primitive and passing them through the vertex shader, where they are presumably interpolated unnecessarily (unless the flat qualifier is used), even though they are non-varying per fragment.
Is there some way to store a value PER triangle (or quad) that needs to be accessed in the fragment shader in such a way that you don't need redundant copies of it per vertex? Is so, is this way better than the likely overhead of 3x (or 4x) the data moving code CPU side?
I am aware of using geometry shaders to spread the values out to new vertices, but I heard geometry shaders are terribly slow on non up to date hardware. Is this the case?
The OpenGL fragment language provides the gl_PrimitiveID input variable, which holds the index of the primitive for the currently processed fragment (starting at 0 for each draw call). This can be used as an index into some data store which holds per-primitive data.
Depending on the amount of data that you will need per primitive, and the number of primitives in total, different options are available. For a small number of primitives, you could just set up a uniform array and index into that.
For a reasonably high number of primitives, I would suggest using a texture buffer object (TBO). This is basically an ordinary buffer object, which can be accessed read-only at random locations via the texelFetch GLSL operation. Note that TBOs are not really textures, they only reuse the existing texture object interface. Internally, it is still a data fetch from a buffer object, and it is very efficient with none of the overhead of the texture pipeline.
The only issue with this approach is that you cannot easily mix different data types. You have to define a base data type for your TBO, and every fetch will get you the data in that format. If you just need some floats/vectors per primitive, this is not a problem at all. If you e.g. need some ints and some floats per primitive, you could either use different TBOs, one for each type, or with modern GLSL (>=3.30), you could use an integer type for the TBO and reinterpret the integer bits as floating point with intBitsToFloat(), so you can get around that limitation, too.
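A minimal fragment-shader sketch of the gl_PrimitiveID + TBO approach (assuming one RGBA texel of per-primitive data; the sampler name is made up):

```glsl
#version 330 core

// Hypothetical TBO holding one RGBA texel of data per primitive.
uniform samplerBuffer primitiveData;

out vec4 fragColor;

void main() {
    // gl_PrimitiveID restarts at 0 for each draw call.
    fragColor = texelFetch(primitiveData, gl_PrimitiveID);
}
```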
You can also make one element in a vertex array serve multiple vertices: with glVertexAttribDivisor you turn it into an instanced vertex attribute, so the array advances once per instance (or once per several instances) instead of once per vertex.