What is the meaning of the index argument in glDrawElements?

There are two OpenGL documentation pages which have slightly different descriptions of the "indices" parameter of the glDrawElements function. On www.opengl.org/sdk/docs/man4/ it says:
indices
Specifies a pointer to the location where the indices are stored.
And on www.khronos.org/opengles/sdk/docs/man3 it says:
indices
Specifies a byte offset (cast to a pointer type) into the buffer bound
to GL_ELEMENT_ARRAY_BUFFER to start reading indices from. If no buffer
is bound, specifies a pointer to the location where the indices are stored.
I am on Windows by the way, using OpenGL 4+.
So I've copied my index array into the element buffer object I created; the indices pointer argument I need to supply is then the offset in bytes of the first index? So if I want to start drawing at index 3, the argument would be 2 * sizeof(GLuint), cast as a pointer?
I actually went to the effort of creating an EBO for this, but from the looks of it, if no EBO is bound the pointer points straight to the location where the indices are stored, not into the EBO. Am I right that this means it will point to your array in system RAM? (EDIT: I just realised this doesn't make sense; if the pointer is 0x00000008 it can't be a valid address in system memory.) And if so, does it then copy the index array to the graphics card each time in order to be able to use it? Thanks.

As per OpenGL 4.5, core profile, reading from client memory is unsupported (§10.3.10 OpenGL 4.5 core spec):
DrawElements, DrawRangeElements, and DrawElementsInstanced source their indices from the buffer object whose name is bound to
ELEMENT_ARRAY_BUFFER, using their indices parameters as offsets into the buffer object in the same fashion as described in section 10.3.9. [...] If zero is bound to ELEMENT_ARRAY_BUFFER, the result of these drawing commands is undefined.
So your approach of creating an EBO is correct. One correction, though: if your 0th index sits at byte offset zero, then index number 3 sits at offset 3 * sizeof(type), not 2 * sizeof(type).
As for your second quotation: in older OpenGL versions (and in the compatibility profile) you could pass a pointer into client memory (an address in your process's virtual address space, not a physical address) and leave ELEMENT_ARRAY_BUFFER unbound.
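For concreteness, here is a minimal sketch of that approach; the array contents and draw count are illustrative, not from the question:

GLuint indices[] = { 0, 1, 2, 2, 3, 0 }; // example index data

GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// The last argument is a byte offset into the EBO, cast to a pointer.
// Skipping the first 3 indices means an offset of 3 * sizeof(GLuint).
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT,
               reinterpret_cast<const void*>(3 * sizeof(GLuint)));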


Difference between glBindBuffer and glBindBufferBase

I think what glBindBuffer(target, buffer) does is store the buffer's name on the target, which is a special binding point.
But then I found glBindBufferBase(target, index, buffer). I think target must be an array here, and this operation stores the buffer name in the array at the given index.
If what I thought is right, is glBindBuffer equivalent to glBindBufferBase(target, someindex, buffer)?
Maybe with someindex = 0?
They're not used for the same purpose.
glBindBuffer is used to bind a buffer to a specific target so that all operations which modify that target are mapped to that buffer afterwards.
glBindBufferBase is used for a totally different purpose: it binds a buffer to a specific indexed binding point of a target (used when the data is to be sourced by shaders rather than modified directly). While this may seem convoluted, it's really easy to picture. Assume you want to pass a uniform block to your shader: there is a table that maps named buffers to specific indices in an array, and those indices are in turn mapped to uniform blocks in the shader.
[figure: a table of indexed binding points, with buffers attached on the right via glBindBufferBase and shader uniform blocks attached on the left via glUniformBlockBinding]
So glBindBufferBase creates the arrows on the right, in which you specify the index, while glBindBuffer just binds the buffer to the non-indexed target.
You would then use glGetUniformBlockIndex to get the block's index in the shader, and link that index to the binding point (the left arrows) through glUniformBlockBinding.
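As a concrete sketch of the two arrow sets (prog and the block name "Matrices" are assumed names, not from the question):

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo); // plain, non-indexed bind: for modifying the buffer
glBufferData(GL_UNIFORM_BUFFER, 64, nullptr, GL_DYNAMIC_DRAW); // e.g. room for one mat4

glBindBufferBase(GL_UNIFORM_BUFFER, 2, ubo); // indexed bind: buffer -> binding point 2

GLuint blockIndex = glGetUniformBlockIndex(prog, "Matrices");
glUniformBlockBinding(prog, blockIndex, 2); // shader block -> binding point 2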

OpenGL - glDrawElements vs Vertex Array Objects

I need help to see the trade-offs between them.
It looks to me that glDrawElements() needs to get the index-data "live" as a parameter.
On the other hand, if I use VAOs, then during startup I buffer the data and the driver might decide to put it on the GPU; then during rendering I only bind the VAO and call glDrawArrays().
Is there no way to combine the advantages? Can we buffer the index-data too?
And how would that look in the vertex shader? Can it use the index and look it up in the vertex positions array?
This information is really a bit hard to find, but glDrawElements can also be used in combination with a VAO. The index data can then (but doesn't have to) be supplied by a buffer bound to ELEMENT_ARRAY_BUFFER. Indexing then works as usual; one does not have to do anything special in the vertex shader. OpenGL already ensures that the indices are used in the correct way during primitive assembly.
The spec states in section 10.3.10:
DrawElements, DrawRangeElements, and DrawElementsInstanced source their indices from the buffer object whose name is bound to ELEMENT_ARRAY_BUFFER, using their indices parameters as offsets into the buffer object
This basically means that whenever an ELEMENT_ARRAY_BUFFER is bound, the indices parameter is used as an offset into this buffer (0 means start from the beginning). When no such buffer is bound, the indices pointer specifies the address of an index array.
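A minimal sketch of that setup (all names are illustrative):

GLuint indexData[] = { 0, 1, 2, 2, 3, 0 };

GLuint vao, ebo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // bind the VAO first...

glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); // ...so this binding is recorded in it
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexData), indexData, GL_STATIC_DRAW);

// Later, per frame: binding the VAO restores the element buffer binding.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr); // offset 0 into the EBO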

glVertexAttribPointer and glVertexAttribFormat: What's the difference?

OpenGL 4.3 and OpenGL ES 3.1 added several alternative functions for specifying vertex arrays: glVertexAttribFormat, glBindVertexBuffer, etc. But we already had functions for specifying vertex arrays. Namely glVertexAttribPointer.
Why add new APIs that do the same thing as the old ones?
How do the new APIs work?
glVertexAttribPointer has two flaws, one of them semi-subjective, the other objective.
The first flaw is its dependency on GL_ARRAY_BUFFER. This means that the behavior of glVertexAttribPointer is contingent on whatever was bound to GL_ARRAY_BUFFER at the time it was called. But once it is called, what is bound to GL_ARRAY_BUFFER no longer matters; the buffer object's reference is copied into the VAO. All this is very unintuitive and confusing, even to some semi-experienced users.
It also requires you to provide an offset into the buffer object as a "pointer", rather than as an integer byte offset. This means that you perform an awkward cast from an integer to a pointer (which must be matched by an equally awkward cast in the driver).
The second flaw is that it conflates two operations that, logically, are quite separate. In order to define a vertex array that OpenGL can read, you must provide two things:
How to fetch the data from memory.
What that data looks like.
glVertexAttribPointer provides both of these simultaneously. The GL_ARRAY_BUFFER buffer object, plus the offset "pointer" and stride, define where the data is stored and how to fetch it. The other parameters describe what a single unit of data looks like. Let us call this the vertex format of the array.
As a practical matter, users are far more likely to change where vertex data comes from than vertex formats. After all, many objects in the scene store their vertices in the same way. Whatever that way may be: 3 floats for position, 4 unsigned bytes for colors, 2 unsigned shorts for tex-coords, etc. Generally speaking, you have only a few vertex formats.
Whereas you have far more locations where you pull data from. Even if the objects all come from the same buffer, you will likely want to update the offset within that buffer to switch from object to object.
With glVertexAttribPointer, you can't update just the offset. You have to specify the whole format+buffer information all at once. Every time.
VAOs mitigate having to make all those calls per object, but it turns out that they don't really solve the problem. Oh sure, you don't have to actually call glVertexAttribPointer. But that doesn't change the fact that changing vertex formats is expensive.
As discussed here, changing vertex formats is pretty expensive. When you bind a new VAO (or rather, when you render after binding a new VAO), the implementation either changes the vertex format regardless or has to compare the two VAOs to see if the vertex formats they define are different. Either way, it's doing work that it doesn't need to be doing.
glVertexAttribFormat and glBindVertexBuffer fix both of these problems. glBindVertexBuffer directly specifies the buffer object and takes the byte offset as an actual (64-bit) integer. So there's no awkward use of the GL_ARRAY_BUFFER binding; that binding is solely used for manipulating the buffer object.
And because the two separate concepts are now separate functions, you can have a VAO that stores a format, bind it, then bind vertex buffers for each object or group of objects that you render with. Changing vertex buffer binding state is cheaper than vertex format state.
Note that this separation is formalized in GL 4.5's direct state access APIs. That is, there is no DSA version of glVertexAttribPointer; you must use glVertexArrayAttribFormat and the other separate format APIs.
The separate attribute binding functions work like this. The glVertexAttrib*Format functions provide all of the vertex formatting parameters for an attribute. Each of their parameters has the exact same meaning as the corresponding parameter of the equivalent glVertexAttrib*Pointer call.
Where things get a bit confusing is with glBindVertexBuffer.
Its first parameter is an index. But this is not an attribute location; it is merely a buffer binding point. This is a separate array from attribute locations with its own maximum limit. So the fact that you bind a buffer to index 0 means nothing about where attribute location 0 gets its data from.
The connection between buffer bindings and attribute locations is defined by glVertexAttribBinding. The first parameter is the attribute location, and the second is the buffer binding index the attribute will fetch its data from. Since the function's name starts with "VertexAttrib", this is part of the vertex format state and is thus expensive to change.
The nature of offsets may be a bit confusing at first as well. glVertexAttribFormat has an offset parameter. But so too does glBindVertexBuffer. But these offsets mean different things. The easiest way to understand the difference is by using an example of an interleaved data structure:
struct Vertex
{
GLfloat pos[3];
GLubyte color[4];
GLushort texCoord[2];
};
The vertex buffer binding offset specifies the byte offset from the start of the buffer object to the data for the first vertex. That is, when you render vertex index 0, the GPU will fetch memory from the buffer object's base address plus the binding offset.
The vertex format offset specifies the offset from the start of each vertex to that particular attribute's data. If the data in the buffer is defined by Vertex, then the offset for each attribute would be:
glVertexAttribFormat(0, ..., offsetof(Vertex, pos)); //AKA: 0
glVertexAttribFormat(1, ..., offsetof(Vertex, color)); //Probably 12
glVertexAttribFormat(2, ..., offsetof(Vertex, texCoord)); //Probably 16
So the binding offset defines where vertex 0 is in memory, while the format offsets define where each attribute's data comes from within a vertex.
The last thing to understand is that the buffer binding is where the stride is defined. This may seem odd, but think about it from the hardware perspective.
The buffer binding should contain all of the information needed by the hardware to turn a vertex index or instance index into a memory location. Once that's done, the vertex format explains how to interpret the bytes in that memory location.
This is also why the instance divisor is part of the buffer binding state, via glVertexBindingDivisor. The hardware needs to know the divisor in order to convert an instance index into a memory address.
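A hedged one-liner, assuming binding point 1 holds per-instance data (an illustrative choice, not from the original answer):

glVertexBindingDivisor(1, 1); // binding point 1 advances once per instance, not once per vertex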
Of course, stride living in the binding also means that you can no longer rely on OpenGL to compute the stride for you. In the above case, you simply use sizeof(Vertex).
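Putting the pieces together, here is a minimal sketch of the separated setup for the Vertex struct above; vbo is an assumed, already-filled buffer object and the attribute locations are illustrative:

// One vertex format, one buffer binding point (0), three attributes.
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glVertexAttribFormat(0, 3, GL_FLOAT,          GL_FALSE, offsetof(Vertex, pos));
glVertexAttribFormat(1, 4, GL_UNSIGNED_BYTE,  GL_TRUE,  offsetof(Vertex, color));
glVertexAttribFormat(2, 2, GL_UNSIGNED_SHORT, GL_TRUE,  offsetof(Vertex, texCoord));

// All three attributes fetch their data through buffer binding point 0.
glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);
glVertexAttribBinding(2, 0);

// The cheap per-object call: only the buffer (or its offset) changes.
glBindVertexBuffer(0, vbo, 0 /* byte offset of vertex 0 */, sizeof(Vertex));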
The separate attribute format APIs cover the old glVertexAttribPointer model so completely that the old function can now be defined entirely in terms of the new ones:
void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type,
                            {GLboolean normalized,} GLsizei stride,
                            const GLvoid *pointer)
{
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
  glVertexAttribBinding(index, index);

  GLint buffer;
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer); // takes a pointer, not a value
  if (buffer == 0)
    glErrorOut(GL_INVALID_OPERATION); // pseudocode: signal an error

  if (stride == 0)
    stride = CalcStride(size, type); // pseudocode: tightly-packed stride

  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
  glBindVertexBuffer(index, buffer, offset, stride);
}
Note that this equivalent function uses the same index value for the attribute location and the buffer binding index. If you're doing interleaved attributes, you should avoid this where possible; instead, use a single buffer binding for all attributes that are interleaved from the same buffer.

The 4th argument in glDrawElements is WHAT?

I am confused about glDrawElements(). I was following a tutorial which said that the 4th argument of glDrawElements() is the "offset within the GL_ELEMENT_ARRAY_BUFFER". But I was getting an access violation ("Trying to read 0x0000") when I passed 0 as the offset.
So I dig further into the problem and found that OpenGL Documentation provides two different definitions of the 4th argument:
First:
indices: Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from.
(Found Here: https://www.opengl.org/wiki/GLAPI/glDrawElements)
Second:
indices: Specifies a pointer to the location where the indices are stored.
(Found Here: https://www.opengl.org/sdk/docs/man4/index.php
and Here: http://www.khronos.org/opengles/sdk/docs/man/xhtml/glDrawElements.xml)
Which one is true, and how do I use it correctly?
EDIT: Here is my code: http://pastebin.com/fdxTMjnC
Both are correct. The two cases describe how the index data reaches the GL hardware and how it is used to draw:
(1) Without using a buffer object:
In this case, the parameter is a pointer to the array of indices. Every time glDrawElements is called, the index data is transferred to the GL hardware.
(2) Using a buffer object:
For this case, see the definition of indices as "Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from". This means the data has already been uploaded separately using glBufferData, and the argument is used only as an offset. Every time glDrawElements is called the buffer is not re-uploaded; only the offset can change if required. This makes it more efficient, especially where a large number of vertices is involved.
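A hedged sketch of the two call styles (ebo, indexData and indexCount are illustrative names):

// (1) No buffer bound (legacy/compatibility contexts only): pass a client-memory pointer.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indexData);

// (2) Buffer bound: the same argument is now a byte offset into the buffer.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
               reinterpret_cast<const void*>(0)); // start at the beginning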
If you use direct drawing, then indices defines the byte offset into the index buffer object (bound to GL_ELEMENT_ARRAY_BUFFER and stored in the VAO) at which to begin reading data.
That means you need to create a VAO, bind it, bind your element buffer, and then use glDrawElements() for rendering.
The interpretation of that argument depends on whether an element array buffer is bound in the GL state machine. If such a buffer is bound, it's an offset. If no such buffer is bound, it's a pointer to your memory.
Using a buffer object performs better, so recent documentation (especially the wikis) is strongly biased toward that usage. However, there's a long song and dance to set one up.
The function that binds or unbinds the vertex array object holding that state is glBindVertexArray. Check the documentation for these related functions:
glGenVertexArrays
glBufferData
(This is an incomplete list.)
Whatever is on opengl.org/wiki is most likely to be correct. And I can tell you from working projects that it is indeed defined as follows:
Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from.
Just cast some integer to GLvoid* and you should be good to go. That is at least for the modern programmable pipeline; I have no idea about the fixed-function pipeline (nor should you be using it!).
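For example (count and startIndex are illustrative):

// Interpreted as a byte offset into the bound GL_ELEMENT_ARRAY_BUFFER.
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT,
               (GLvoid*)(startIndex * sizeof(GLuint)));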
If you are using the modern pipeline, then without further information I would bet on the function pointer not being loaded correctly: either GLEW not being initialized properly, or, if you're loading things manually, trying to use wglGetProcAddress on Win32 (glDrawElements has been core since 1.1, and 1.0/1.1 functions shouldn't be loaded with wglGetProcAddress on Win32).
And indeed, note that you should have bound an element buffer to the binding point, as others have said. Though in practice you can get away without binding a VAO; I've done this on NVIDIA, AMD and even Intel. But in production you should use VAOs.