The 4th argument in glDrawElements is WHAT? - c++

I am confused about glDrawElements(). I was following a tutorial which said that the 4th argument of glDrawElements() is the "Offset within the GL_ELEMENT_ARRAY_BUFFER". But I got an access violation error ("Trying to read 0x0000") when I passed 0 as the offset.
So I dug further into the problem and found that the OpenGL documentation provides two different definitions of the 4th argument:
First:
indices: Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from.
(Found Here: https://www.opengl.org/wiki/GLAPI/glDrawElements)
Second:
indices: Specifies a pointer to the location where the indices are stored.
(Found Here: https://www.opengl.org/sdk/docs/man4/index.php
and Here: http://www.khronos.org/opengles/sdk/docs/man/xhtml/glDrawElements.xml)
Which one is correct, and how do I use it properly?
EDIT: Here is my code: http://pastebin.com/fdxTMjnC

Both are correct. The two cases relate to how the buffer of indices is uploaded to the GL hardware and how it is used to draw. They are described below:
(1) Without using a VBO (Vertex Buffer Object):
In this case, the parameter is a pointer to the array of indices in client memory. Every time glDrawElements is called, the index data is uploaded to the GL hardware.
(2) Using a VBO:
For this case, see the definition of indices as "Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from". This means that the data has already been uploaded separately using glBufferData, and indices is used as an offset only. Every time glDrawElements is called, the buffer is not re-uploaded; only the offset can change if required. This makes it more efficient, especially when a large number of vertices is involved.
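To illustrate, a minimal sketch of the two call styles (the buffer name ebo and the index data are made-up placeholders; note that case (1) only works where client-side arrays are still allowed, i.e. not in a core profile):

    // (1) No VBO: nothing bound to GL_ELEMENT_ARRAY_BUFFER, so the 4th
    //     argument is a real pointer into client memory.
    GLushort indices[] = { 0, 1, 2, 2, 3, 0 };
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);

    // (2) VBO: the index data already lives in 'ebo' (filled with glBufferData),
    //     so the 4th argument is just a byte offset, cast to a pointer.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const GLvoid*)0);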

If you use direct drawing, then
indices defines the offset into the index buffer object (bound to GL_ELEMENT_ARRAY_BUFFER and stored in the VAO) at which to begin reading data.
That means you need to create a VAO, bind it, and then use glDrawElements() for rendering.

The interpretation of that argument depends on whether an internal array has been bound in the GL state machine. If such an array is bound, it's an offset. If no such array is bound, it's a pointer to your memory.
Using an internal array is more performant, so recent documentation (especially wikis) is strongly biased toward that usage. However, there's a long song and dance to set them up.
The function that binds or unbinds the internal array is glBindVertexArray. Check the documentation for these related functions:
glGenVertexArrays
glBufferData
(this is an incomplete list. I've got to run so I'll have to edit later.)
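A rough sketch of that setup, assuming placeholder names like vao, ebo, indexData and indexCount (error handling and vertex attribute setup omitted):

    GLuint vao = 0, ebo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);                     // the VAO records the element array binding

    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexData), indexData, GL_STATIC_DRAW);

    // ... glBindBuffer(GL_ARRAY_BUFFER, ...), glVertexAttribPointer, etc. ...

    // With the VAO (and thus the element buffer) bound, the last argument
    // of glDrawElements is a byte offset into 'ebo', not a pointer.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (const GLvoid*)0);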

Whatever is on opengl.org/wiki is most likely to be correct. And I can tell you from working projects that it is indeed defined as follows:
Specifies a byte offset (cast to a pointer type) into the buffer bound to GL_ELEMENT_ARRAY_BUFFER to start reading indices from.
Just cast some integer to GLvoid* and you should be good to go. That is at least for the modern programmable pipeline; I have no idea about the fixed pipeline (nor should you use it!).
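A common helper for that cast looks something like this (BUFFER_OFFSET is just a naming convention, not part of OpenGL, and count is a placeholder for however many indices you want to draw):

    #include <cstdint>   // for uintptr_t

    #define BUFFER_OFFSET(bytes) ((const GLvoid*)(uintptr_t)(bytes))

    // e.g. skip the first three GLuint indices in the bound element buffer:
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, BUFFER_OFFSET(3 * sizeof(GLuint)));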
If you are using the modern pipeline, without further information I would bet on the actual function not being loaded correctly: either GLEW not being initialized correctly, or, if you're loading function pointers manually on Win32, trying to load it via GetProcAddress (glDrawElements has been core since 1.1, and 1.0/1.1 functions shouldn't be loaded with GetProcAddress on Win32).
And indeed, note that you should have bound an element buffer to the binding point, as others have said. In practice you can get away without binding a VAO; I've done this on NVIDIA, AMD and even Intel. But in production you should use VAOs.

Related

Difference between glBindBuffer and glBindBufferBase

I think what glBindBuffer(target, buffer) does is store the buffer's address at the target, which is a special address.
But then I found glBindBufferBase(target, index, buffer). I think target should be an array here, and this operation stores the buffer's address in the array at the given index.
If what I thought is right, then glBindBuffer would be equivalent to glBindBufferBase(target, someindex, buffer)?
Maybe someindex is 0?
They're not used for the same purpose.
glBindBuffer is used to bind a buffer to a specific target so that all operations which modify that target are mapped to that buffer afterwards.
glBindBufferBase is used for a totally different purpose: it binds a buffer to a specific binding point in an indexed array (when the data is not supposed to be directly modified but rather used). While this may seem convoluted, it's really easy to see. Assume you want to pass a uniform block to your shader; you then have a table which maps named buffers to specific indices in an array, which are in turn mapped to bindings in the shader, as in the following figure:
So glBindBufferBase creates the arrows on the right, for which you specify the index, while glBindBuffer just binds the buffer to the specified target.
You would then use glGetUniformBlockIndex to get the correct index in the shader, which is then linked to the binding point (the left arrows) through glUniformBlockBinding.
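A rough sketch of that flow for a uniform block (the block name "Matrices", binding point 2, and the buffer/program/struct names are arbitrary examples):

    // The "right arrow": attach 'ubo' to indexed binding point 2 of the UBO target.
    glBindBufferBase(GL_UNIFORM_BUFFER, 2, ubo);

    // The "left arrow": make the shader's "Matrices" block read from binding point 2.
    GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
    glUniformBlockBinding(program, blockIndex, 2);

    // Plain glBindBuffer only selects which buffer commands like glBufferSubData
    // operate on; it does not touch the indexed binding points.
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(matrices), &matrices);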

What is the difference between glVertexAttribDivisor and glVertexBindingDivisor?

I was looking for ways to associate attributes with arbitrary groupings of vertices. At first, instancing appeared to be the only way for me to accomplish this, but then I stumbled upon this question, and this answer states:
However what is possible with newer versions of OpenGL is setting the rate at which a certain vertex attribute's buffer offset advances. Effectively this means that the data for a given vertex array gets duplicated to n vertices before the buffer offset for a attribute advances. The function to set this divisor is glVertexBindingDivisor.
(emphasis mine)
Which to me seems as if the answer is claiming I can divide on the number of vertices instead of the number of instances. However, when I look at glVertexBindingDivisor's documentation and compare it to glVertexAttribDivisor's, they both appear to refer to the division taking place over instances, not vertices. For example, glVertexBindingDivisor's documentation states:
glVertexBindingDivisor and glVertexArrayBindingDivisor modify the rate at which generic vertex attributes advance when rendering multiple instances of primitives in a single draw command. If divisor is zero, the attributes using the buffer bound to bindingindex advance once per vertex. If divisor is non-zero, the attributes advance once per divisor instances of the set(s) of vertices being rendered. An attribute is referred to as instanced if the corresponding divisor value is non-zero.
(emphasis mine)
So what is the actual difference between these two functions?
OK, first a little backstory.
As of OpenGL 4.3/ARB_vertex_attrib_binding (AKA: where glVertexBindingDivisor comes from, so this is relevant), VAOs are conceptually split into two parts: an array of vertex formats that describe a single attribute's worth of data, and an array of buffer binding points which describe how to fetch arrays of data (the buffer object, the offset, the stride, and the divisor). The vertex format specifies which buffer binding point its data comes from, so that multiple attributes can get data from the same array (ie: interleaving).
When VAOs were split into these two parts, the older APIs were re-defined in terms of the new system. So if you call glVertexAttribPointer with an attribute index, this function will set the vertex format data for the format at the given index, and it will set the buffer binding state (buffer object, byte offset, etc) for the same index. Now, these are two separate arrays of VAO state data (vertex format and buffer binding); this function is simply using the same index in both arrays.
But since the vertex format and buffer bindings are separate now, glVertexAttribPointer also does the equivalent of saying that the vertex format at index index gets its data from the buffer binding at index index. This is important because that's not automatic; the whole point of vertex_attrib_binding is that a vertex format at one index can use a buffer binding from a different index. So when you're using the old API, it's resetting itself to the old behavior by linking format index to binding index.
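In other words (a simplified, conceptual restatement, ignoring the integer/double variants and the stride == 0 special case), glVertexAttribPointer behaves roughly like this in the new model:

    void myVertexAttribPointer(GLuint index, GLint size, GLenum type,
                               GLboolean normalized, GLsizei stride, const void* pointer)
    {
        GLint arrayBuffer = 0;
        glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &arrayBuffer);     // buffer bound at call time
        glVertexAttribFormat(index, size, type, normalized, 0);   // vertex format state
        glBindVertexBuffer(index, (GLuint)arrayBuffer,            // buffer binding state
                           (GLintptr)pointer, stride);
        glVertexAttribBinding(index, index);                      // link format 'index' to binding 'index'
    }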
Now, what does all that have to do with the divisor? Well, because that thing I just said is literally the only difference between them.
glVertexAttribDivisor is the old-style API for setting the divisor. It takes an attribute index, but it acts on state which is part of the buffer binding point (instancing is a per-array construct, not a per-attribute construct now). This means that the function assumes (in the new system) that the attribute at index fetches its data from the buffer binding point at index.
And what I just said is a bit of a lie. It enforces this "assumption" by directly setting the vertex format to use that buffer binding point. That is, it does the same last step as glVertexAttribPointer did.
glVertexBindingDivisor is the modern function. It is not passed an attribute index; it is passed a buffer binding index. As such, it does not change the attribute's buffer binding index.
So glVertexAttribDivisor is exactly equivalent to this:
void glVertexAttribDivisor(GLuint index, GLuint divisor)
{
    glVertexBindingDivisor(index, divisor);
    glVertexAttribBinding(index, index);
}
Obviously, glVertexBindingDivisor doesn't do that last part.
So what is the actual difference between these two functions?
Modern OpenGL has two different APIs for specifying vertex attribute arrays and their properties: the traditional glVertexAttribPointer and friends, which glVertexAttribDivisor is also part of.
With ARB_vertex_attrib_binding (in core since GL 4.3), a new API was introduced which separates the vertex format from the pointers. It is expected that switching the data pointers is fast, while switching the vertex format can be more expensive. The new API allows you to control both aspects explicitly and separately, while the old API always sets both at once.
For the new API, a new layer of indirection was introduced: the buffer binding points. (See the OpenGL wiki for more details.) glVertexBindingDivisor specifies the attribute instancing divisor for such a binding point, so it is the conceptual equivalent of glVertexAttribDivisor in the new API.
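As a rough sketch of the new-style calls (attribute index 3, binding index 0, the per-instance rate of 1, and the buffer name instanceDataBuffer are arbitrary examples):

    // Describe the format of attribute 3: four floats, relative offset 0.
    glVertexAttribFormat(3, 4, GL_FLOAT, GL_FALSE, 0);
    // Attribute 3 fetches its data from buffer binding point 0.
    glVertexAttribBinding(3, 0);
    // Attach a buffer to binding point 0 (byte offset 0, stride of one vec4).
    glBindVertexBuffer(0, instanceDataBuffer, 0, 4 * sizeof(GLfloat));
    // Advance binding point 0 once per instance instead of once per vertex.
    glVertexBindingDivisor(0, 1);
    glEnableVertexAttribArray(3);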

What is the meaning of the index argument in glDrawElements?

There are two OpenGL documentation pages which have slightly different descriptions of the "index" parameter of the glDrawElements function. On www.opengl.org/sdk/docs/man4/ it says:
indices
Specifies a pointer to the location where the indices are stored.
And on www.khronos.org/opengles/sdk/docs/man3 it says:
indices
Specifies a byte offset (cast to a pointer type) into the buffer bound
to GL_ELEMENT_ARRAY_BUFFER to start reading indices from. If no buffer
is bound, specifies a pointer to the location where the indices are stored.
I am on Windows by the way, using OpenGL 4+.
So I've copied my index array into the element buffer object I created, the indices pointer argument I need to supply is the offset in bytes of the first index? So if I want to start drawing at index 3, the argument would be 2 * sizeof(GLuint), cast as a pointer?
I actually went to the effort of creating an EBO for this, but from the looks of it, it says that if no EBO is bound, the pointer points straight to the location where the indices are stored, not into the EBO. Am I right that this means it will point to your array in system RAM? (EDIT: I just realised this doesn't make sense; if the pointer is 0x00000008 it can't refer to that address in system memory.) And if so, does it then copy the index array to the graphics card each time in order to use it? Thanks.
As per OpenGL 4.5, core profile, reading from client memory is unsupported (§10.3.10 OpenGL 4.5 core spec):
DrawElements, DrawRangeElements, and DrawElementsInstanced source their indices from the buffer object whose name is bound to
ELEMENT_ARRAY_BUFFER, using their indices parameters as offsets into the buffer object in the same fashion as described in section 10.3.9. [...] If zero is bound to ELEMENT_ARRAY_BUFFER, the result of these drawing commands is undefined.
So your approach of creating an EBO is correct. Except that if your 0th index is located at offset zero, then the 3rd index is located at offset 3 * sizeof(type), not 2 * sizeof(type).
As for your second quotation: in the older OpenGL versions you could pass a pointer to the client memory (in your process virtual address space, not the physical address) and leave ELEMENT_ARRAY_BUFFER unbound.
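So, assuming tightly packed GLuint indices starting at offset zero in the EBO, starting at index number 3 would look roughly like this (remainingCount being however many indices you still want to draw):

    // Skip the first three GLuint indices in the buffer bound to GL_ELEMENT_ARRAY_BUFFER.
    glDrawElements(GL_TRIANGLES, remainingCount, GL_UNSIGNED_INT,
                   (const GLvoid*)(3 * sizeof(GLuint)));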

OpenGL - glDrawElements vs Vertex Array Objects

I need help seeing the trade-offs between them.
It looks to me like glDrawElements() needs to get the index data "live" as a parameter.
On the other hand, if I use VAOs, then during startup I buffer the data and the driver might decide to put it on the GPU; then during rendering I only bind the VAO and call glDrawArrays().
Is there no way to combine the advantages? Can we buffer the index-data too?
And how would that look in the vertex shader? Can it use the index and look it up in the vertex positions array?
This information is really a bit hard to find, but one can use glDrawElements in combination with a VAO as well. The index data can then (but doesn't have to) be supplied by an ELEMENT_ARRAY_BUFFER. Indexing then works as usual; one does not have to do anything special in the vertex shader. OpenGL already ensures that the indices are used in the correct way during primitive assembly.
The spec states to this in section 10.3.10:
DrawElements, DrawRangeElements, and DrawElementsInstanced source their indices from the buffer object whose name is bound to ELEMENT_ARRAY_BUFFER, using their indices parameters as offsets into the buffer object
This basically means that whenever an ELEMENT_ARRAY_BUFFER is bound, the indices parameter is used as an offset into this buffer (0 means start from the beginning). When no such buffer is bound, the indices pointer specifies the address of an index array.
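A small sketch of that combination (vao, indexBuffer, indices and indexCount are placeholders): the indices are uploaded once at startup, the VAO remembers the ELEMENT_ARRAY_BUFFER binding, and the vertex shader stays exactly the same as with glDrawArrays:

    // At startup, while the VAO is bound, upload the index data once:
    glBindVertexArray(vao);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glBindVertexArray(0);

    // At render time, no index data is passed "live"; only an offset of 0:
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (const GLvoid*)0);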

Clarification of GL.DrawElements parameters

I am having a hard time matching up the OpenGL specification (version 3.1, page 27) with common example usage all over the internet.
The OpenGL spec version 3.1 states for DrawElements:
The command
void DrawElements(enum mode, sizei count, enum type, void *indices);
constructs a sequence of geometric primitives by successively transferring the
count elements whose indices are stored in the currently bound element array
buffer (see section 2.9.5) at the offset defined by indices to the GL. The i-th element transferred by DrawElements will be taken from element indices[i] of
each enabled array.
I tend to interpret this as follows:
The indices parameter holds at least count values of type type. Its elements serve as offsets into the actual element buffer. Since for every use of DrawElements an element buffer must currently be bound, we would actually have two obligatory sets of indices here: one in the element buffer and another in the indices array.
This would seem somewhat wasteful for most situations, unless one has to draw a model which is defined with an element array buffer but needs to sort its elements back to front due to transparency or so. But how would we then render with the plain element array buffer (no sorting)?
Now, strangely enough, most examples and tutorials on the internet (here, here half a page down at 'Indexed drawing') give a single integer as the indices parameter, mostly 0, sometimes (void*)0. It is always only a single integer offset, clearly no array for the indices parameter!
I have used the last variant (passing a single pointerized integer as indices) successfully with some NVIDIA graphics cards. But I get crashes on Intel onboard chips. And I am wondering who is wrong: me, the spec, or thousands of examples. What are the correct parameters and usage of DrawElements? If the single integer is allowed, how does this fit with the spec?
You're tripping over the legacy glDrawElements has carried ever since OpenGL-1.1. Back then there were no VBOs, just client-side arrays, and the program would actually pass a pointer (= an array in C terms) of indices into the buffers/arrays set with the gl…Pointer functions.
Now with index buffers, the parameter is actually just an offset into the server-side buffer. You might be very interested in this SO question: What is the result of NULL + int?
I also gave an exhaustive answer there; I strongly recommend reading https://stackoverflow.com/a/8284829/524368
What I wrote about function signatures and typecasts also applies to glDraw… calls.