glVertexAttribPointer and glVertexAttribFormat: What's the difference?

OpenGL 4.3 and OpenGL ES 3.1 added several alternative functions for specifying vertex arrays: glVertexAttribFormat, glBindVertexBuffers, etc. But we already had functions for specifying vertex arrays. Namely glVertexAttribPointer.
Why add new APIs that do the same thing as the old ones?
How do the new APIs work?

glVertexAttribPointer has two flaws, one of them semi-subjective, the other objective.
The first flaw is its dependency on GL_ARRAY_BUFFER. This means that the behavior of glVertexAttribPointer is contingent on whatever was bound to GL_ARRAY_BUFFER at the time it was called. But once it is called, what is bound to GL_ARRAY_BUFFER no longer matters; the buffer object's reference is copied into the VAO. All this is very unintuitive and confusing, even to some semi-experienced users.
It also requires you to provide an offset into the buffer object as a "pointer", rather than as an integer byte offset. This means that you perform an awkward cast from an integer to a pointer (which must be matched by an equally awkward cast in the driver).
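For illustration, here is a hedged sketch of that awkward cast (the attribute index, the stride of 24, and the 64-byte offset are made-up placeholder values, and buffer is assumed to be an existing buffer object):
// The byte offset must be smuggled in as a fake pointer.
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, (const GLvoid *)64);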
The second flaw is that it conflates two operations that, logically, are quite separate. In order to define a vertex array that OpenGL can read, you must provide two things:
How to fetch the data from memory.
What that data looks like.
glVertexAttribPointer provides both of these simultaneously. The GL_ARRAY_BUFFER buffer object, plus the offset "pointer" and stride, define where the data is stored and how to fetch it. The other parameters describe what a single unit of data looks like. Let us call this the vertex format of the array.
As a practical matter, users are far more likely to change where vertex data comes from than vertex formats. After all, many objects in the scene store their vertices in the same way. Whatever that way may be: 3 floats for position, 4 unsigned bytes for colors, 2 unsigned shorts for tex-coords, etc. Generally speaking, you have only a few vertex formats.
Whereas you have far more locations where you pull data from. Even if the objects all come from the same buffer, you will likely want to update the offset within that buffer to switch from object to object.
With glVertexAttribPointer, you can't update just the offset. You have to specify the whole format+buffer information all at once. Every time.
VAOs mitigate having to make all those calls per object, but it turns out that they don't really solve the problem. Oh sure, you don't have to actually call glVertexAttribPointer. But that doesn't change the fact that changing vertex formats is expensive.
As discussed here, changing vertex formats is pretty expensive. When you bind a new VAO (or rather, when you render after binding a new VAO), the implementation either changes the vertex format regardless or has to compare the two VAOs to see if the vertex formats they define are different. Either way, it's doing work that it doesn't need to be doing.
glVertexAttribFormat and glBindVertexBuffer fix both of these problems. glBindVertexBuffer directly specifies the buffer object and takes the byte offset as an actual (64-bit) integer. So there's no awkward use of the GL_ARRAY_BUFFER binding; that binding is solely used for manipulating the buffer object.
And because the two separate concepts are now set by separate functions, you can have a VAO that stores a format, bind it, then bind vertex buffers for each object or group of objects that you render with. Changing vertex buffer binding state is cheaper than changing vertex format state.
Note that this separation is formalized in GL 4.5's direct state access APIs. That is, there is no DSA version of glVertexAttribPointer; you must use glVertexArrayAttribFormat and the other separate format APIs.
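As a rough sketch of what the DSA style looks like (vao, buf, the attribute parameters, and the stride of 24 are placeholders for illustration, not anything mandated by the API):
GLuint vao = 0, buf = 0; // assume buf names an existing buffer object
glCreateVertexArrays(1, &vao);
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0); // vertex format state
glVertexArrayAttribBinding(vao, 0, 0);                       // attribute 0 <- binding 0
glVertexArrayVertexBuffer(vao, 0, buf, 0, 24);               // buffer binding state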
The separate attribute binding functions work like this. The glVertexAttrib*Format functions provide all of the vertex formatting parameters for an attribute. Each of their parameters has the exact same meaning as the corresponding parameter of the equivalent call to glVertexAttrib*Pointer.
Where things get a bit confusing is with glBindVertexBuffer.
Its first parameter is an index. But this is not an attribute location; it is merely a buffer binding point. This is a separate array from attribute locations with its own maximum limit. So the fact that you bind a buffer to index 0 means nothing about where attribute location 0 gets its data from.
The connection between buffer bindings and attribute locations is defined by glVertexAttribBinding. The first parameter is the attribute location, and the second is the buffer binding index that the attribute will fetch its data from. Since the function's name starts with "VertexAttrib", you should consider it part of the vertex format state and thus expensive to change.
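A hedged sketch of that mapping (buf and stride are placeholders): several attributes can share one buffer binding point, so one bind call feeds them all.
glVertexAttribBinding(0, 0);           // attribute location 0 <- binding index 0
glVertexAttribBinding(1, 0);           // attribute location 1 <- binding index 0
glBindVertexBuffer(0, buf, 0, stride); // one buffer feeds both attributes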
The nature of offsets may be a bit confusing at first as well. glVertexAttribFormat has an offset parameter. But so too does glBindVertexBuffer. But these offsets mean different things. The easiest way to understand the difference is by using an example of an interleaved data structure:
struct Vertex
{
GLfloat pos[3];
GLubyte color[4];
GLushort texCoord[2];
};
The vertex buffer binding offset specifies the byte offset from the start of the buffer object to the first vertex in the array. That is, when you render vertex index 0, the GPU will fetch memory from the buffer object's starting address plus the binding offset.
The vertex format offset specifies the offset from the start of each vertex to that particular attribute's data. If the data in the buffer is defined by Vertex, then the offset for each attribute would be:
glVertexAttribFormat(0, ..., offsetof(Vertex, pos)); //AKA: 0
glVertexAttribFormat(1, ..., offsetof(Vertex, color)); //Probably 12
glVertexAttribFormat(2, ..., offsetof(Vertex, texCoord)); //Probably 16
So the binding offset defines where vertex 0 is in memory, while the format offsets define where each attribute's data comes from within a vertex.
The last thing to understand is that the buffer binding is where the stride is defined. This may seem odd, but think about it from the hardware perspective.
The buffer binding should contain all of the information needed by the hardware to turn a vertex index or instance index into a memory location. Once that's done, the vertex format explains how to interpret the bytes in that memory location.
This is also why the instance divisor is part of the buffer binding state, via glVertexBindingDivisor. The hardware needs to know the divisor in order to convert an instance index into a memory address.
Of course, this also means that you can no longer rely on OpenGL to compute the stride for you. In the above case, you simply use sizeof(Vertex).
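Putting the pieces together for the Vertex struct above, a sketch might look like this (the attribute locations, the normalization flags, buf, and objectBaseOffset are assumptions made for illustration):
// Vertex format: set once; describes one Vertex (attribute locations 0..2 assumed).
glVertexAttribFormat(0, 3, GL_FLOAT,          GL_FALSE, offsetof(Vertex, pos));
glVertexAttribFormat(1, 4, GL_UNSIGNED_BYTE,  GL_TRUE,  offsetof(Vertex, color));
glVertexAttribFormat(2, 2, GL_UNSIGNED_SHORT, GL_TRUE,  offsetof(Vertex, texCoord));
glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);
glVertexAttribBinding(2, 0);
// Buffer binding: cheap to change per object; the stride is now explicit.
glBindVertexBuffer(0, buf, objectBaseOffset, sizeof(Vertex));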
Separate attribute formats cover the old glVertexAttribPointer model so well that the old function is now defined entirely in terms of the new ones:
void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type,
                            {GLboolean normalized,} GLsizei stride,
                            const GLvoid *pointer)
{
    glVertexAttrib*Format(index, size, type, {normalized,} 0);
    glVertexAttribBinding(index, index);

    GLint buffer;
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);
    if(buffer == 0)
        glErrorOut(GL_INVALID_OPERATION); //Give an error.

    if(stride == 0)
        stride = CalcStride(size, type);

    GLintptr offset = reinterpret_cast<GLintptr>(pointer);
    glBindVertexBuffer(index, buffer, offset, stride);
}
Note that this equivalent function uses the same index value for the attribute location and the buffer binding index. If you're doing interleaved attributes, you should avoid this where possible; instead, use a single buffer binding for all attributes that are interleaved from the same buffer.

Related

What exactly is a VBO in OpenGL?

I am trying to understand the theory behind OpenGL and I'm studying VBOs at the moment.
This is what I understand so far: when we declare a series of vertices, let's say 3 vertices that form a triangle primitive, we basically store those nowhere, they're simply declared in code.
But, if we want to store them somewhere we can use a VBO that stores the definition of those vertices. And, through the same VBO we send all that vertex info to the Vertex Shader (which is a bunch of code). Now, the VBO is located in the GPU, so we are basically storing all that info on the GPU's memory when we call the VBO. Then the Vertex Shader, which is part of the Pipeline Rendering process, "comes" to the GPU's memory, "looks" into the VBO and retrieves all that info. In other words, the VBO stores the vertex data (triangle vertices) and sends it to the Vertex Shader.
So, VBO -> send info to -> Vertex Shader.
Is this correct? I'm asking to make sure if this is the correct interpretation, as I find myself drawing triangles on screen and sometimes letters made up of many triangles with a bunch of code and functions that I basically learned by memory but don't really understand what they do.
To break it down:
// here I declare the VBO
unsigned int VBO;
// we have 1 VBO, so we generate an ID for it, and that ID is: GL_ARRAY_BUFFER
glGenBuffers(1, &VBO)
// GL_ARRAY_BUFFER is the VBO's ID, which we are going to bind to the VBO itself
glBindBuffer(GL_ARRAY_BUFFER, VBO)
// bunch of info in here that basically says: I want to take the vertex data (the
// triangle that I declared as a float) and send it to the VBO's ID, which is
// GL_ARRAY_BUFFER, then I want to specify the size of the vertex
// data, the vertex data itself and the 'static draw' thingy
glBufferData(...).
After doing all that, the VBO now contains all the vertex data within. So we tell the VBO, ok now send it to the Vertex Shader.
And that's the start of the Pipeline, just the beginning.
Is this correct? (I haven't read what VAOs do yet, before I get to that I'd like to know if the way I deconstruct VBOs in my mind is the right way, or else I'm confused)
I think you are mixing up lots of different things and have several confusions, so I'll try to work through most of them in the order you brought them up:
when we declare a series of vertices, let's say 3 vertices that form a triangle primitive, we basically store those nowhere, they're simply declared in code.
No. If you store data "nowhere", then you don't have it. Also, you are mixing up declaration, definition and initialization of variables here. For vertex data (like all other forms of data), there are two basic strategies:
You store the data somewhere, typically in a file. Specifying it directly in source code just means that it is stored in some binary file, potentially the executable itself (or some shared library used by it)
You procedurally generate the data through some mathematical formula or, more generally, by some algorithm
Methods 1 and 2 can of course be mixed, and usually method 2 will need some parameters (which themselves need to be stored somewhere, so the parameters are just case 1 again).
And, through the same VBO we send all that vertex info to the Vertex Shader (which is a bunch of code). Now, the VBO is located in the GPU, so we are basically storing all that info on the GPU's memory when we call the VBO.
OpenGL is actually just a specification which is completely agnostic about the existence of a GPU and the existence of VRAM. And as such, OpenGL uses the concept of buffer objects (BOs) as some continuous block of memory of a certain size which is completely managed by the GL implementation. You as the user can ask the GL to create or destroy such BOs, specify their size, and have complete control of the contents - you can put an MP3 file into a BO if you like (not that there would be a good use case for this).
The GL implementation on the other hand controls where this memory is actually allocated, and GL implementations for GPUs
which actually have dedicated video memory have the option to store a BO directly in VRAM. The hints like GL_STATIC_DRAW are there to help the GL implementation decide where to best put such a buffer (but that hint system is somewhat flawed, and better alternatives exist in modern GL, but I'm not going into that here). GL_STATIC_DRAW means you intend to specify the contents once and use it many times as the source of a drawing operation - so the data won't change often (and certainly not on a per-frame basis or even more often), and it might be a very good idea to store it in VRAM if such a thing exists.
Then the Vertex Shader, which is part of the Pipeline Rendering process, "comes" to the GPU's memory, "looks" into the VBO and retrieves all that info.
I think one could put it that way, although some GPUs have a dedicated "vertex fetch" hardware stage which actually reads the vertex data that is then fed to the vertex shaders. But that's not a really important point - the vertex shader needs to access each vertex's data, and that means the GPU will read that memory (VRAM or system memory or whatever) at some point before or during the execution of a vertex shader.
In other words, the VBO stores the vertex data (triangle vertices)
Yes. A buffer object which is used as source for the vertex shader's per-vertex inputs ("vertex attributes") is called a vertex buffer object ("VBO"), so that just follows directly from the definition of the term.
and sends it to the Vertex Shader.
I wouldn't put it that way. A BO is just a block of memory, it doesn't actively do anything. It is just a passive element: it is being written to or being read from. That's all.
// here I declare the VBO
unsigned int VBO;
No, you are declaring (and defining) a variable in the context of your programming language, and this variable is later used to hold the name of a buffer object. And in the GL, object names are just positive integers (so 0 is reserved for the GL as "no such object" or "default object", depending on the object type).
// we have 1 VBO, so we generate an ID for it, and that ID is: GL_ARRAY_BUFFER
glGenBuffers(1, &VBO)
No. glGenBuffers(n, ptr) just generates names for n new buffer objects, so it will generate n previously unused buffer names (and mark them as used) and return them by writing them to the array pointed to by ptr. So in this case, it just creates one new buffer object name and stores it in your VBO variable.
GL_ARRAY_BUFFER has nothing to do with this.
// GL_ARRAY_BUFFER is the VBO's ID, which we are going to bind to the VBO itself
glBindBuffer(GL_ARRAY_BUFFER, VBO)
No, GL_ARRAY_BUFFER is not the VBO's ID; the value of your VBO variable is the VBO's ID (name!).
GL_ARRAY_BUFFER is the binding target. OpenGL buffer objects can be used for different purposes, and using them as the source for vertex data is just one of them, and GL_ARRAY_BUFFER refers to that use case.
Note that classic OpenGL uses the concept of binding for two purposes:
bind-to-use: Whenever you issue a GL call which depends on some GL objects, the objects you want to work with have to be currently bound to some (specific, depending on the use case) binding target (not only buffer objects, but also textures and others).
bind-to-modify: Whenever you as the user want to modify the state of some object, you have to bind it first to some binding target. The object state modification functions don't directly take the name of the GL object to work on as a parameter; they take the binding target, and will affect the object which is currently bound to that target. (Modern GL also has direct state access, which allows you to modify objects without having to bind them first, but I'm also not going into details about that here.)
Binding a buffer object to one of the buffer object binding targets means that you can use the object for the purpose defined by the target. But note that a buffer object doesn't change because it is bound to a target. You can even bind a buffer object to different targets at the same time. A GL buffer object doesn't have a type. Calling a buffer a "VBO" usually just means that you intend to use it as GL_ARRAY_BUFFER, but the GL doesn't care. It does care about which buffer is bound as GL_ARRAY_BUFFER at the time of the glVertexAttribPointer() call.
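A small sketch of that point (the targets chosen here are arbitrary examples): the same object can be bound to several targets, and the object itself never changes.
GLuint buf;
glGenBuffers(1, &buf);                     // creates a name; the object has no inherent type
glBindBuffer(GL_ARRAY_BUFFER, buf);        // use it as a vertex data source...
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buf); // ...and as a pixel transfer source
glBindBuffer(GL_COPY_READ_BUFFER, buf);    // ...and as a copy source - same unchanged object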
// bunch of info in here that basically says: I want to take the vertex data (the
// triangle that I declared as a float) and send it to the VBO's ID, which is
// GL_ARRAY_BUFFER, then I want to specify the size of the vertex
// data, the vertex data itself and the 'static draw' thingy
glBufferData(...).
Well, glBufferData just creates the actual data storage for a GL buffer object (that is, the real memory): you specify the size of the buffer (and the usage hint I mentioned earlier, where you tell the GL how you intend to use the memory), and it optionally allows you to initialize the buffer by copying data from your application's memory into the buffer object. It doesn't care about the actual data or the types you use.
Since you use GL_ARRAY_BUFFER here as the target parameter, this operation will affect the BO which is currently bound as GL_ARRAY_BUFFER.
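As a concrete sketch (the triangle data here is made up for illustration):
GLfloat verts[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
// The GL allocates sizeof(verts) bytes for the buffer bound to GL_ARRAY_BUFFER and
// copies the contents of verts into it; it has no idea these bytes are "vertices".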
After doing all that, the VBO now contains all the vertex data within.
Basically, yes.
So we tell the VBO, ok now send it to the Vertex Shader.
No. The GL uses Vertex Array Objects (VAOs) which store for each vertex shader input attribute where to find the data (in which buffer object, at which offset inside the buffer object) and how to interpret this data (by specifying the data types).
Later, during the draw call, the GL will fetch the data from the relevant locations within the buffer objects, as you specified it in the VAO. Whether this memory access is triggered by the vertex shader itself, or whether there is a dedicated vertex fetch stage which reads the data beforehand and forwards it to the vertex shader - or whether there is a GPU at all - is totally implementation-specific, and none of your concern.
And that's the start of the Pipeline, just the beginning.
Well, that depends on how you look at things. In a traditional rasterizer-based rendering pipeline, the "vertex fetch" is more or less the first stage, and vertex buffer objects just hold the memory to fetch the vertex data from (with VAOs telling the GL which buffer objects to use, at which actual locations, and how to interpret the data).
It all boils down to this: when you work in "normal" programs, all what you have is the CPU, caches, registers, main memory, etc.
However, when you work with computer graphics (and other fields), you want to use a GPU because it is faster for that particular task. The GPU is an independent computer of its own, with its own processor, pipeline, and even its own main memory.
This means your program needs to somehow transfer all the data to the other computer and tell that computer what to do with it. This is no easy task, so OpenGL simplifies things for you: it gives you an abstraction (the VBO) that represents a buffer of vertices on the GPU, among many other abstractions for other tasks. It then gives you functions to create that resource (glGenBuffers), fill it with data (glBufferData), "bind it" to work with it (glBindBuffer), etc.
Remember, it is all a simplification for your benefit. In truth, the details of how everything is performed underneath are way more complex. Having abstractions like VBOs for vertices or IBOs for indices makes it easier to work with them.

Difference between glBindBuffer and glBindBufferBase

I think what glBindBuffer(target, buffer) does is store the buffer's address in the target, which is a special address.
But then I found glBindBufferBase(target, index, buffer). I think target should be an array, and this operation stores the buffer's address in the array at the given index.
If what I thought is right, then the glBindBuffer is equivalent to glBindBufferBase(target, someindex, buffer)?
Maybe someindex is 0?
They're not used for the same purpose.
glBindBuffer is used to bind a buffer to a specific target so that all operations which modify that target are mapped to that buffer afterwards.
glBindBufferBase is used for a totally different purpose: it binds a buffer to a specific binding point in an indexed array (when the data is not supposed to be directly modified but rather used). While this may seem convoluted, it's really easy to see. Assume you want to pass a uniform block to your shader; then you have a table which maps named buffers to specific indices in an array, which are in turn mapped to block bindings in the shader.
So glBindBufferBase creates the association between a buffer and a specific index in that table, while glBindBuffer just binds the buffer to the specific target.
You would then use glGetUniformBlockIndex to get the block's index in the shader, which is then linked to the binding point through glUniformBlockBinding.
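A hedged sketch of that wiring (program and ubo are assumed to exist; the block name "Matrices" and binding point 2 are arbitrary placeholders):
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 2); // shader block  -> binding point 2
glBindBufferBase(GL_UNIFORM_BUFFER, 2, ubo);   // buffer object -> binding point 2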

What is the difference between glVertexAttribDivisor and glVertexBindingDivisor?

I was looking for ways to associate attributes with arbitrary groupings of vertices. At first, instancing appeared to be the only way for me to accomplish this, but then I stumbled upon this question, and this answer states:
However what is possible with newer versions of OpenGL is setting the rate at which a certain vertex attribute's buffer offset advances. Effectively this means that the data for a given vertex array gets duplicated to n vertices before the buffer offset for an attribute advances. The function to set this divisor is glVertexBindingDivisor.
(emphasis mine)
Which to me seems as if the answer is claiming I can divide on the number of vertices instead of the number of instances. However, when I look at glVertexBindingDivisor's documentation and compare it to glVertexAttribDivisor's, they both appear to refer to the division taking place over instances and not vertices. For example, in glVertexBindingDivisor's documentation it states:
glVertexBindingDivisor and glVertexArrayBindingDivisor modify the rate at which generic vertex attributes advance when rendering multiple instances of primitives in a single draw command. If divisor is zero, the attributes using the buffer bound to bindingindex advance once per vertex. If divisor is non-zero, the attributes advance once per divisor instances of the set(s) of vertices being rendered. An attribute is referred to as instanced if the corresponding divisor value is non-zero.
(emphasis mine)
So what is the actual difference between these two functions?
OK, first a little backstory.
As of OpenGL 4.3/ARB_vertex_attrib_binding (AKA: where glVertexBindingDivisor comes from, so this is relevant), VAOs are conceptually split into two parts: an array of vertex formats that describe a single attribute's worth of data, and an array of buffer binding points which describe how to fetch arrays of data (the buffer object, the offset, the stride, and the divisor). The vertex format specifies which buffer binding point its data comes from, so that multiple attributes can get data from the same array (ie: interleaving).
When VAOs were split into these two parts, the older APIs were re-defined in terms of the new system. So if you call glVertexAttribPointer with an attribute index, this function will set the vertex format data for the format at the given index, and it will set the buffer binding state (buffer object, byte offset, etc) for the same index. Now, these are two separate arrays of VAO state data (vertex format and buffer binding); this function is simply using the same index in both arrays.
But since the vertex format and buffer bindings are separate now, glVertexAttribPointer also does the equivalent of saying that the vertex format at index index gets its data from the buffer binding at index index. This is important because that's not automatic; the whole point of vertex_attrib_binding is that a vertex format at one index can use a buffer binding from a different index. So when you're using the old API, it's resetting itself to the old behavior by linking format index to binding index.
Now, what does all that have to do with the divisor? Well, because that thing I just said is literally the only difference between them.
glVertexAttribDivisor is the old-style API for setting the divisor. It takes an attribute index, but it acts on state which is part of the buffer binding point (instancing is a per-array construct, not a per-attribute construct now). This means that the function assumes (in the new system) that the attribute at index fetches its data from the buffer binding point at index.
And what I just said is a bit of a lie. It enforces this "assumption" by directly setting the vertex format to use that buffer binding point. That is, it does the same last step as glVertexAttribPointer did.
glVertexBindingDivisor is the modern function. It is not passed an attribute index; it is passed a buffer binding index. As such, it does not change the attribute's buffer binding index.
So glVertexAttribDivisor is exactly equivalent to this:
void glVertexAttribDivisor(GLuint index, GLuint divisor)
{
    glVertexBindingDivisor(index, divisor);
    glVertexAttribBinding(index, index);
}
Obviously, glVertexBindingDivisor doesn't do that last part.
So what is the actual difference between these two functions?
Modern OpenGL has two different APIs for specifying vertex attribute arrays and their properties: the traditional glVertexAttribPointer and friends, of which glVertexAttribDivisor is also a part.
With ARB_vertex_attrib_binding (in core since GL 4.3), a new API was introduced which separates the vertex format from the pointers. It is expected that switching the data pointers is fast, while switching the vertex format can be more expensive. The new API allows you to control both aspects separately and explicitly, while the old API always sets both at once.
For the new API, a new layer of indirection was introduced: the buffer binding points. (See the OpenGL wiki for more details.) glVertexBindingDivisor specifies the attribute instancing divisor for such a binding point, so it is the conceptual equivalent of glVertexAttribDivisor in the new API.
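For illustration (instanceBuf, InstanceData, and the index choices are assumptions, not anything from the question), per-instance data in the new API might be set up like this:
glBindVertexBuffer(1, instanceBuf, 0, sizeof(InstanceData)); // binding 1: per-instance data
glVertexBindingDivisor(1, 1); // binding 1 advances once per instance, not per vertex
glVertexAttribBinding(3, 1);  // attribute 3 fetches from binding 1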

OpenGL - glDrawElements vs Vertex Array Objects

I need help to see the trade-offs between them.
It looks to me like glDrawElements() needs to get the index data "live" as a parameter.
On the other hand, if I use VAOs, then during startup I buffer the data and the driver might decide to put it on the GPU; then during rendering I only bind the VAO and call glDrawArrays().
Is there no way to combine the advantages? Can we buffer the index-data too?
And how would that look in the vertex shader? Can it use the index and look it up in the vertex positions array?
This information is really a bit hard to find, but one can use glDrawElements in combination with a VAO, too. The index data can then (but doesn't have to) be supplied by an ELEMENT_ARRAY_BUFFER. Indexing then works as usual; one does not have to do anything special in the vertex shader. OpenGL already ensures that the indices are used in the correct way during primitive assembly.
The spec states in section 10.3.10:
DrawElements, DrawRangeElements, and DrawElementsInstanced source their indices from the buffer object whose name is bound to ELEMENT_ARRAY_BUFFER, using their indices parameters as offsets into the buffer object
This basically means that whenever an ELEMENT_ARRAY_BUFFER is bound, the indices parameter is used as an offset into this buffer (0 means start from the beginning). When no such buffer is bound, the indices pointer specifies the address of an index array.
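A minimal sketch of that combination (vao and ibo are assumed to have been created elsewhere; the quad indices are made up for illustration):
// Setup (once): the ELEMENT_ARRAY_BUFFER binding is stored in the VAO.
GLushort indices[] = { 0, 1, 2, 2, 3, 0 };
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// Drawing (per frame): the last argument is a byte offset into ibo, not a live pointer.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const GLvoid *)0);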