In Direct3D, when we specify the input to vertex shader using D3D11_INPUT_ELEMENT_DESC and when we call IASetVertexBuffers, we have to specify 'the input slot(s)'. What exactly are these input slots, and what is the OpenGL equivalent? These input slots don't seem to be the same thing as 'locations' specified via layout(location = K) in GLSL and the first argument to glVertexAttribPointer.
A vertex input comes from an array of values stored in a buffer. The SemanticIndex identifies where that input goes in the shader. The InputSlot defines which buffer provides the data for that attribute. Buffers aren't bound to semantic indices; they're bound to input slots.
See, D3D has two arrays of things: an array of semantics (which match semantics defined in the shader) and an array of input slots. Each semantic you use has a D3D11_INPUT_ELEMENT_DESC structure, which specifies (among other things) the mapping from that semantic to the input slot that feeds it. Vertex buffers are bound to input slots. Any inputs which use the same slot will pull their data from the same buffer. A particular input can also have a byte offset into each vertex's data; this allows for interleaving vertex data.
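As a concrete (hypothetical) illustration, here is a layout where POSITION is fed from the buffer in slot 0 and COLOR from the buffer in slot 1; the semantic names, formats, and strides are arbitrary choices:
D3D11_INPUT_ELEMENT_DESC layout[] = {
    // semantic, index, format, input slot, byte offset, class, step rate
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
// One vertex buffer per slot (positionVB and colorVB are hypothetical buffers):
ID3D11Buffer* buffers[2] = { positionVB, colorVB };
UINT strides[2] = { 12, 16 };
UINT offsets[2] = { 0, 0 };
context->IASetVertexBuffers(0, 2, buffers, strides, offsets);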
OpenGL 4.3/ARB_vertex_attrib_binding provides similar concepts with glVertexAttribFormat and glVertexAttribBinding. The former specifies the format of the vertex data and which attribute location it goes to. The latter specifies which buffer binding index will feed that attribute. Buffers are bound to buffer binding indices with glBindVertexBuffer(s).
When using the older glBindBuffer(GL_ARRAY_BUFFER)/glVertexAttribPointer APIs, the buffer binding index and the attribute location index are effectively the same. But this is a poor API and if you don't have to support older GL versions, I'd advise you to use the newer stuff.
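For reference, a minimal sketch of the newer path (vbo is a hypothetical buffer object holding interleaved vec3 position / vec2 texcoord data):
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                 // position, relative offset 0
glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, 3 * sizeof(float)); // texcoord follows position
glVertexAttribBinding(0, 0); // attribute 0 pulls from binding index 0
glVertexAttribBinding(1, 0); // attribute 1 pulls from the same binding index
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glBindVertexBuffer(0, vbo, 0, 5 * sizeof(float)); // bind the buffer to binding index 0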
I want to make a couple of vec4 vertex attributes in my shaders. I've done quite a bit of googling, but I can't seem to find consistent information for specifically what I want to do.
My goal here is to move skinning to the GPU, so I need a list of bones and weights per vertex, hence why I want to use vertex attributes. I have 2 arrays of floats that represent this data. Basically this:
weightsBuffer = new float[vSize*4];
indexesBuffer = new int[vSize*4];
The part that I can't consistently find is how to upload these and use them in the shader. To be clear, I don't want to upload all the position, normal and texture coordinate data; I'm already using display lists and have decided to keep using them for a few reasons that aren't relevant here. How can I create the buffers and bind them properly so I can use them?
Thanks.
Binding your bone weights and indices is no different from binding your position data. Assuming the data is generated properly in your buffers, you use glBindAttribLocation to bind the attribute index in your vertex stream to your shader variable, and glVertexAttribPointer to define your vertex array (and don't forget glEnableVertexAttribArray).
The exact code may vary, depending on whether you're using VAOs and VBOs (or just client buffers). If you want a more specific answer, you should provide your code and shader.
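For example, here is a minimal sketch, assuming the shader declares in vec4 boneWeights and in ivec4 boneIndexes (those names, and the attribute indices 4 and 5, are arbitrary choices, and program is your not-yet-linked shader program):
// Upload the two arrays into their own VBOs:
GLuint weightsVBO, indexesVBO;
glGenBuffers(1, &weightsVBO);
glBindBuffer(GL_ARRAY_BUFFER, weightsVBO);
glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(float), weightsBuffer, GL_STATIC_DRAW);
glGenBuffers(1, &indexesVBO);
glBindBuffer(GL_ARRAY_BUFFER, indexesVBO);
glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(int), indexesBuffer, GL_STATIC_DRAW);
// Before linking, map the shader variables to attribute indices:
glBindAttribLocation(program, 4, "boneWeights");
glBindAttribLocation(program, 5, "boneIndexes");
glLinkProgram(program);
// When drawing:
glBindBuffer(GL_ARRAY_BUFFER, weightsVBO);
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(4);
glBindBuffer(GL_ARRAY_BUFFER, indexesVBO);
glVertexAttribIPointer(5, 4, GL_INT, 0, 0); // integer pointer, keeps the values as ints
glEnableVertexAttribArray(5);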
I have a GLSL compute shader, a very simple one. I specify an input texture and an output image mapped to the texture.
layout (binding=2) uniform sampler2D srcTex;
layout (binding=5, r32f) writeonly uniform image2D destTex;
In my code, I need to call glBindImageTexture to attach the texture to the image. Now, the first parameter of this function is unit, which the reference pages describe as: "Specifies the index of the image unit to which to bind the texture".
I know that I can set this value to 5 manually in my code, but how do I do this automatically?
If I create shader I use refraction to get variable names and its locations.
How can I get binding ID?
How can I get texture format from layout (r32f) to set it in my code automatically?
If I create shader I use refraction to get variable names and its locations.
I think you mean reflection and not refraction, assuming you mean the programming language concept.
Now, the interesting thing here is that image2D and sampler2D are what are known as opaque types in GLSL (handles to an image unit). Ordinarily, you could use the modern glGetProgramResource* (...) API (GL_ARB_program_interface_query or core in 4.3) to query really detailed information about uniforms, buffers, etc. in a GLSL shader. However, opaque types are not considered program resources by GLSL and are not compatible with that feature - the information you want is related to the image unit the uniform references and not the uniform itself.
How can I get binding ID?
This is fairly straightforward.
You can call glGetUniformiv (...) to get the value assigned to your image2D. On the GL side of things, opaque uniforms work no differently than any other uniform (for assigning and querying values), but in GLSL you cannot assign a value to an opaque data type using the = operator. That is why the layout (binding = ...) semantics were created: they allow you to assign the binding in the shader itself rather than having to call an API function, and they are completely optional.
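A minimal sketch (program and texture are hypothetical handles from your own code):
GLint unit = 0;
GLint loc = glGetUniformLocation(program, "destTex");
glGetUniformiv(program, loc, &unit); // yields 5 here, thanks to layout (binding=5)
glBindImageTexture(unit, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);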
How can I get texture format from layout (r32f) to set it in my code automatically?
That is not currently possible, and in the future may become irrelevant for loads (it already is for stores). You do not technically need an exact match here. As long as image format size / class match, you can do an image load no problem.
In truth, the only reason you have to declare the format in GLSL is so that the return format for imageLoad (...) is known at compile-time (that makes it irrelevant for writeonly qualified images). There is an EXT extension right now (GL_EXT_shader_image_load_formatted) that completely eliminates the need to establish the image format for non-writeonly images.
Even for non-writeonly images, since only the size / class need to match, I do not think you really need this. r32f has an image class of 1x32 and a size of 32, but the fact that it is floating-point is not relevant to anything. Thus, what you might really consider is naming your uniforms by their image class instead - call this uniform something like destTex_1x32 and it will be obvious that it's a 1-component 32-bit image.
I am writing an OpenGL3+ application and have some confusion about the use of VAOs. Right now I just have one VAO, a normalised quad set around the origin. This single VAO contains 3 VBOs: one for positions, one for surface normals, and one GL_ELEMENT_ARRAY_BUFFER for indexing (so I can store just 4 vertices rather than 6).
I have set up some helper methods to draw objects to the scene, such as drawCube(), which takes position and rotation values and follows this procedure:
Bind the quad VAO.
Per cube face:
Create a model matrix that represents this face.
Upload the model matrix to the uniform mat4 model vertex shader variable.
Call glDrawElements() to draw the quad into the position for this face.
I have just set about the task of adding per-cube colors and realised that I can't add my color VBO to the single VAO as it will change with each cube, and this doesn't feel right.
I have just read the question OpenGL VAO best practices, which tells me that my approach is wrong and that I should use more VAOs to save the work of setting the whole scene up every time.
How many VAOs should be used? Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene? What about ones that move?
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
Clearly my approach of having 1 is not optimal, should there be a VAO for every static surface in the scene?
Absolutely not. Switching VAOs is costly. If you allocate one VAO per object in your scene, you need to switch the VAO before rendering each such object. Scale that up to a few hundred or thousand visible objects and you get just as many VAO switches. The question is: if you have multiple objects which share a common memory layout, i.e. the sizes/types/normalization/strides of their elements are the same, why would you want to define multiple VAOs that all store the same information? You can control the offset at which to start pulling vertex attributes directly with the corresponding draw call.
For non-indexed geometry this is trivial, since you provide a first (or an array of offsets in the multi-draw case) argument to gl[Multi]DrawArrays*() which defines the offset into the associated ARRAY_BUFFER's data store.
For indexed geometry, and if you store indices for multiple objects in a single ELEMENT_ARRAY_BUFFER, you can use gl[Multi]DrawElementsBaseVertex to provide a constant offset for indices or manually offset your indices by adding a constant offset before uploading them to the buffer object.
Being able to provide offsets into a buffer store also implies that you can store multiple distinct objects in a single ARRAY_BUFFER and corresponding indices in a single ELEMENT_ARRAY_BUFFER. However, how large buffer objects should be depends on your hardware and vendors differ in their recommendations.
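For instance, here is a hedged sketch of drawing two meshes packed into shared buffers through one VAO (sharedVAO and the offset/count variables are hypothetical bookkeeping values):
glBindVertexArray(sharedVAO);
// Mesh A sits at the start of both buffers:
glDrawElements(GL_TRIANGLES, meshAIndexCount, GL_UNSIGNED_INT, 0);
// Mesh B's indices start meshBIndexByteOffset bytes into the ELEMENT_ARRAY_BUFFER,
// and every index is offset by the position of mesh B's first vertex:
glDrawElementsBaseVertex(GL_TRIANGLES, meshBIndexCount, GL_UNSIGNED_INT,
                         (void*)meshBIndexByteOffset, meshBFirstVertex);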
I am writing to a uniform variable for each vertex, is that correct? I read that uniform shader variables should not change mid-frame, if I am able to write different values to my uniform variable, how do uniforms differ from simple in variables in a vertex shader?
First of all, uniforms and shader input/output variables declared as in/out differ in several respects:
input/output variables define an interface between shader stages, i.e. output variables in one shader stage are backed by a corresponding and equally named input variable in the following stage. A uniform is available in all stages if declared with the same name and is constant until changed by the application.
input variables inside a vertex shader are filled from an ARRAY_BUFFER. Uniforms inside a uniform block are backed by a UNIFORM_BUFFER.
input variables can also be written directly using the glVertexAttrib*() family of functions; single uniforms are written using the glUniform*() family of functions.
the values of uniforms are program state; the values of input variables are not.
The semantic difference should also be obvious: uniforms, as their name suggests, are usually constant among a set of primitives, whereas input variables usually change per vertex or fragment (due to interpolation).
EDIT: To clarify, and to factor in Nicol Bolas' remark: uniforms cannot be changed by the application for a set of vertices submitted by a single draw call, and neither can vertex attributes set by calling glVertexAttrib*(). Vertex shader inputs backed by buffer objects, on the other hand, will change either once per vertex or at some specific rate set by glVertexAttribDivisor.
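As a small sketch of the divisor case (attribute index 3 and the count variables are arbitrary): attribute 3 then advances once per instance rather than once per vertex.
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, 0); // e.g. a per-instance color
glEnableVertexAttribArray(3);
glVertexAttribDivisor(3, 1); // advance this attribute once per instance
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0, instanceCount);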
EDIT2: To clarify how a VAO can theoretically store multiple layouts, you can simply define multiple arrays with different indices but equal semantics. For instance,
glVertexAttribPointer(0, 4, ....);
and
glVertexAttribPointer(1, 3, ....);
could define two arrays at indices 0 and 1, with component sizes 4 and 3, which both refer to position attributes of vertices. However, depending on what you want to render, you can bind a hypothetical vertex shader input
// if you have GL_ARB_explicit_attrib_location or GL3.3 available, use explicit
// locations
/*layout(location = 0)*/ in vec4 Position;
or
/*layout(location = 1)*/ in vec3 Position;
to either index 0 or 1, either explicitly or with glBindAttribLocation(), and still use the same VAO. AFAIK, the spec says nothing about what happens if an attribute is enabled but not sourced by the current shader, but I suspect implementations simply ignore the attribute in that case.
Whether you source the data for said attributes from the same or a different buffer object is another question, but it is of course possible.
Personally I tend to use one VBO and VAO per layout, i.e. if my data is made up of an equal number of attributes with the same properties, I put them into a single VBO and a single VAO.
In general: You can experiment with this stuff a lot. Do it!
I'm coding a small rendering engine with GLSL shaders:
Each Mesh (well, submesh) has a number of vertex streams (e.g. position, normal, texture, tangent, etc.) packed into one big VBO, and a MaterialID.
Each Material has a set of textures and properties (e.g. specular color, diffuse color, color texture, normal map, etc.)
Then I have a GLSL shader, with its uniforms and attributes. Let's say:
uniform vec3 DiffuseColor;
uniform sampler2D NormalMapTexture;
attribute vec3 Position;
attribute vec2 TexCoord;
I'm a little bit stuck in trying to design a way for the GLSL shader to define the stream mappings (semantics) for the attributes and uniforms, and then bind the vertex streams to the appropriate attributes.
Something along the lines of saying to the mesh: "put your position stream in attribute "Position" and your tex coordinates in "TexCoord". Also put your material's diffuse color in "DiffuseColor" and your material's second texture in "NormalMapTexture"."
At the moment I am using hard-coded names for the attributes (i.e. the vertex position is always "Position", etc.) and checking each uniform and attribute name to understand what the shader is using it for.
I guess I'm looking for some way of creating a "vertex declaration", but including uniforms and textures too.
So I'm just wondering how people do this in large-scale rendering engines.
Edit:
Recap of suggested methods:
1. Attribute/Uniform semantic is given by the name of the variable
(what I'm doing now)
Using pre-defined names for each possible attribute. The GLSL binder will query the name of each attribute and link the vertex array based on the name of the variable:
//global static variable
semantics (name, normalize, offset) = {"Position",false,0}, {"Normal",true,1}, {"TextureUV",false,2}
...when linking
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = GetSemanticsFromGlobalHardCodedList(name);
}
... when binding vertex arrays for render
for (int index = 0; index < allAttribs; index++)
{
    glVertexAttribPointer(index, size[index], type[index], semantics[index]->normalized, bufferStride, (void*)semantics[index]->offset);
}
2. Predefined locations for each semantic
The GLSL binder will always bind the vertex arrays to the same locations. It is up to the shader to use the appropriate names to match. (This seems awfully similar to method 1, but unless I misunderstood, it implies binding ALL available vertex data, even if the shader does not consume it.)
.. when linking the program...
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
3. Dictionary of available attributes from Material, Engine globals, Renderer and Mesh
Maintain a list of available attributes published by the active Material, the Engine globals, the current Renderer and the current Scene Node.
eg:
Material has (uniformName,value) = {"ambientColor", (1.0,1.0,1.0)}, {"diffuseColor",(0.2,0.2,0.2)}
Mesh has (attributeName, offset) = {"Position",0}, {"Normals",1}, {"BumpBlendUV",2}
then in shader:
uniform vec3 ambientColor, diffuseColor;
attribute vec3 Position;
When binding the vertex data to the shader, the GLSL binder will loop over the attributes and bind each one to the entry found (or not found?) in the dictionary:
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = Mesh->GetAttributeSemantics(name);
}
and the same with uniforms, only querying the active Material and globals as well.
Attributes:
Your mesh has a number of data streams. For each stream you can keep the following info: (name, type, data).
Upon linking, you can query the GLSL program for active attributes and form an attribute dictionary for this program. Each element here is just (name, type).
When you draw a mesh with a specified GLSL program, you go through the program's attribute dictionary and bind the corresponding mesh streams (or report an error in case of inconsistency).
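A rough sketch of building that dictionary (the AttribInfo struct and the container are illustrative, not a prescribed design; assumes <string> and <vector> are available):
struct AttribInfo { std::string name; GLenum type; GLint location; };
std::vector<AttribInfo> dictionary;
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i)
{
    char name[256];
    GLint size = 0; GLenum type = 0;
    glGetActiveAttrib(program, i, sizeof(name), NULL, &size, &type, name);
    dictionary.push_back({ name, type, glGetAttribLocation(program, name) });
}
// At draw time, match each entry's name against the mesh's streams and
// bind the stream to dictionary[i].location (or report an error).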
Uniforms:
Let the shader parameter dictionary be the set of (name, type, data link). Typically, you can have the following dictionaries:
Material (diffuse,specular,shininess,etc) - taken from the material
Engine (camera, model, lights, timers, etc) - taken from engine singleton (global)
Render (custom parameters related to the shader creator: SSAO radius, blur amount, etc) - provided exclusively by the shader creator class (render)
After linking, the GLSL program is given a set of parameter dictionaries in order to populate its own dictionary, whose elements have the format (location, type, data link). This population is done by querying the list of active uniforms and matching each (name, type) pair with one in the dictionaries.
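A matching sketch for the uniform side (FindDataLink and the parameters container stand in for the dictionary lookups described above and are hypothetical):
GLint uniformCount = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &uniformCount);
for (GLint i = 0; i < uniformCount; ++i)
{
    char name[256];
    GLint size = 0; GLenum type = 0;
    glGetActiveUniform(program, i, sizeof(name), NULL, &size, &type, name);
    GLint location = glGetUniformLocation(program, name);
    // Search the material / engine / render dictionaries for (name, type)
    // and remember (location, type, data link) in the program's own dictionary:
    parameters.push_back({ location, type, FindDataLink(name, type) });
}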
Conclusion:
This method allows any custom vertex attributes and shader uniforms to be passed, without hard-coded names/semantics in the engine. Basically, only the loader and the render know about particular semantics:
The loader fills out the mesh data stream declarations and the material dictionaries.
The render uses a shader that is aware of the names, provides additional parameters and selects the proper meshes to be drawn.
From my experience, OpenGL does not define the concept of attributes or uniforms semantics.
All you can do is define your own way of mapping your semantics to OpenGL variables, using the only parameter you can control about these variables: their location.
If you're not constrained by platform issues, you could try the 'new' GL_ARB_explicit_attrib_location (core in OpenGL 3.3, if I'm not mistaken), which allows shaders to explicitly express which location is intended for which attribute. This way, you can hardcode (or configure) which data you want to bind to which attribute location, and query the shaders' locations after compilation. It seems that this feature is not yet mature and may be subject to bugs in various drivers.
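A minimal sketch of that approach (the attribute name is a placeholder):
// GLSL side:
// layout(location = 0) in vec4 Position;
// C++ side: no glBindAttribLocation call is needed, since the shader fixes
// the location itself; after linking, you can still confirm it:
GLint positionLoc = glGetAttribLocation(prog, "Position"); // returns 0 here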
The other way around is to bind the locations of your attributes using glBindAttribLocation. For this, you have to know the names of the attributes that you want to bind, and the locations you want to assign them.
To find out the names used in a shader, you can:
query the shader for active attributes
parse the shader source code to find them yourself
I would not recommend the GLSL parsing route (although it may suit your needs in simple enough contexts): the parser can easily be defeated by the preprocessor. Once your shader code becomes somewhat complex, you may want to start using #include, #define, #ifdef, etc. Robust parsing presupposes a robust preprocessor, which can become quite a heavy lift to set up.
Anyway, once you have your active attribute names, you have to assign them locations (and/or semantics); for this, you're on your own with your use case.
In our engine, we happily hardcode locations of predefined names to specific values, such as:
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
...
After that, it's up to the shader writer to conform to the predefined semantics of the attributes.
AFAIK it's the most common way of doing things; OGRE uses it, for example. It's not rocket science, but it works well in practice.
If you want to add some control, you could provide an API to define the semantics on a shader basis, perhaps even having this description in an additional file, easily parsable, living near the shader source code.
I won't get into uniforms, where the situation is almost the same, except that 'newer' extensions allow you to force GLSL uniform blocks into a memory layout that is compatible with your application.
I'm not satisfied by all this myself, so I'll be happy to have some contradictory information :)
You may want to consider actually parsing the GLSL itself.
The uniform/attribute declaration syntax is pretty simple. You can come up with a small manual parser that looks for lines starting with uniform or attribute, extracts the type and name, and then exposes a C++ API using strings. This will save you the trouble of hard-coding names. If you don't want to get your hands dirty with manual parsing, a couple of lines of Boost.Spirit would do the trick.
You probably won't want to fully parse GLSL, so you'll need to make sure you don't do anything funny in the declarations that might alter their actual meaning. One complication that comes to mind is conditional compilation using macros in the GLSL.
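To make the idea concrete, a naive line scanner might look like the sketch below; as noted, it ignores the preprocessor, so macros and conditional compilation will defeat it:
#include <sstream>
#include <string>
// Scans shader source for lines that start with "uniform" or "attribute"
// and extracts the type and variable name from each.
void ScanDeclarations(const std::string& shaderSource)
{
    std::istringstream src(shaderSource);
    std::string line;
    while (std::getline(src, line))
    {
        std::istringstream tokens(line);
        std::string qualifier, type, name;
        tokens >> qualifier >> type >> name;
        if (qualifier == "uniform" || qualifier == "attribute")
        {
            if (!name.empty() && name.back() == ';')
                name.pop_back(); // strip the trailing semicolon
            // Expose (qualifier, type, name) through your string-based API here.
        }
    }
}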