Following up on another question of mine, here is a specific example where I want to avoid offsetof.
When calling glVertexAttribPointer, I have to use offsetof for the last parameter:
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
(const GLvoid *) offsetof(Vertex, _color));
Vertex is a class. Is there a way I can avoid using offsetof here? I tried pointers to members, but with no luck.
The following does not compile:
glVertexAttribPointer(GLKVertexAttribColor, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
(const GLvoid *)&Vertex::_color);
You cannot take the address of a non-static member through the qualified name Vertex::_color; &Vertex::_color gives you a pointer to member, not an address. You would need an instance of Vertex, and even then the address returned would be relative to the program's address space, which is not what you want when you use VBOs.
offsetof (...) is used to find the offset of a member within a data structure. This value does not point to actual memory; it is literally just an offset from the beginning of the structure (which is why its type is size_t rather than intptr_t or void *).
Historically, when vertex arrays were introduced in OpenGL 1.1, it used the data pointer in a call to glVertexPointer (...) to reference client (program) memory. Back then, the address you passed actually pointed to memory in your program. Beginning with Vertex Buffer Objects (GL 1.5), OpenGL re-purposed the pointer parameter in glVertexPointer (...) to serve as a pointer to server (GPU) memory. If you have a non-zero VBO bound when you call glVertexPointer (...) then the pointer is actually an offset.
More precisely, the pointer when using VBOs is relative to the bound VBO's data store and the first datum in your VBO begins at address 0. This is why offsetof (...) makes sense in this context, and there is no reason to avoid using it.
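For illustration, here is a minimal sketch of an interleaved layout set up entirely with offsetof; the struct members shown here are assumptions, not taken from the question:

struct Vertex {
    float _position[3];   // offsetof(Vertex, _position) == 0
    float _color[4];      // offsetof(Vertex, _color) == 3 * sizeof(float)
};

// Each attribute reads from the same interleaved VBO; the last argument is
// simply the member's offset within Vertex, cast to the pointer type GL expects.
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const GLvoid *) offsetof(Vertex, _position));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (const GLvoid *) offsetof(Vertex, _color));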
Related
When creating a buffer and setting its data, you are required to bind it to a target first and then populate the buffer bound to that target with some data:
GLenum target = GL_ARRAY_BUFFER;
glGenBuffers(1, &bufferId);
glBindBuffer(target, bufferId);
glBufferData(target, m_capacity*sizeof(value_type), m_data, GL_STREAM_DRAW);
glBindBuffer(target, 0);
But to my understanding it does not really matter if a buffer that was populated via the GL_ARRAY_BUFFER target is later used on e.g. the GL_UNIFORM_BUFFER target. But if that is the case, why do we need the target to populate the buffer, and why is the signature of glBufferData not:
void glBufferData( GLint bufferId,
GLsizeiptr size,
const GLvoid * data,
GLenum usage);
Is that just for historical reasons, or because OpenGL is a state machine, or am I missing something and the target has another purpose there?
This is a common OpenGL API thing - most of the work with OpenGL objects (textures, buffers, ...) is done by binding them to a specific target and then using this target to refer to the currently bound object (more on this here). Unfortunately, I do not know the exact reason for this, but it seems to be historical at this point - I've seen some proposed extension for direct object access via object ids (UPD: user ratchet freak says that it is the direct_state_access extension, core in 4.5).
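For completeness, here is a rough sketch of the direct-state-access style (core in GL 4.5), where the buffer is addressed by its id and no target is needed just to fill it; size and data are placeholders:

GLuint bufferId;
glCreateBuffers(1, &bufferId);                            // creates the buffer object up front, no bind needed
glNamedBufferData(bufferId, size, data, GL_STREAM_DRAW);  // uploads directly by id, no target involved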
The documentation on glBindBuffer says that
When a buffer object is bound to a target, the previous binding for that target is automatically broken.
I'd suppose that changing a buffer's binding type and expecting the buffer's state to stay preserved is not a good idea.
UPDATE
From OpenGL wiki
The target defines how you intend to use this binding of the buffer object. When you're just creating and/or filling the buffer object with data, the target you use doesn't technically matter.
So, it seems that the target matters only on how you use the buffer, and you can safely bind it to any random type and fill it with data, but it still seems to be a bad practice.
To use glBufferData, you need some way of indicating the target of the data you are uploading. The target parameter (the first one) lets this call know the destination for your data. If your only aim is to upload the data and then unbind, as you said, it doesn't really matter which buffer binding you use. You are free to bind that buffer to any other binding at a later time.
However, it is quite common to set up vertex attributes at GL_ARRAY_BUFFER data initialization time as well (with glVertexAttribPointer/glEnableVertexAttribArray), which might be another reason to use the GL_ARRAY_BUFFER binding over any other arbitrary binding. Also, if you intend to actually use the buffer data you are uploading in a subsequent draw call and don't need to break the binding, it is more efficient to leave the binding in place.
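A rough sketch of that pattern (buffer name, vertex count, and attribute layout are assumptions made for illustration):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);

// While the buffer is still bound to GL_ARRAY_BUFFER, describe its layout.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid *) 0);
glEnableVertexAttribArray(0);
// Leave the binding in place if you are about to draw from this buffer anyway.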
The declaration of glVertexAttribPointer is as follows:
void glVertexAttribPointer( GLuint index,
GLint size,
GLenum type,
GLboolean normalized,
GLsizei stride,
const GLvoid * pointer);
Given that the last parameter is just a 4-byte integer offset, why does OpenGL expect it to be passed in as a void pointer?
Legacy.
That argument had a different meaning before VBOs: you'd keep the vertex data in client memory and pass the address of the array (see glEnableClientState and such).
Now the last parameter can have two meanings (an offset for buffer objects, an address for client-state arrays). Khronos did not provide a separate version of the gl*Pointer functions for buffer objects, so you need to do this awkward cast.
One way of looking at it is that the last argument is always a pointer:
If no VBO is bound, it's a pointer relative to base address 0, which is a regular memory address, just the way pointers are normally used in C/C++.
If a VBO is bound, it's a pointer relative to the base address of the buffer.
At least that's the only logical explanation I could ever find.
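A small sketch of the two interpretations; clientVertices and vbo are made-up names, and the client-array form is only valid where legacy client-side arrays are still allowed:

// No VBO bound: the last argument is a real address in client memory.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, clientVertices);

// VBO bound: the same argument is now an offset into the buffer's data store.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid *) 0);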
Personally, I think that overloading the entry point this way was a very unfortunate decision, for a number of reasons:
It confuses people to no end.
It requires ugly type casts in C/C++.
It does not work at all in more type safe languages, like Java.
In languages like Java, you typically end up with overloaded versions of the function that accept different types. As a somewhat curious historical note, the overloaded version with an int argument was missing in the initial version of the GLES20 bindings in Android, which meant that you could not use VBOs from Java. So this has tripped up more than just the occasional casual OpenGL programmer.
I have the following working code, however I'm not convinced that I'm calling glDeleteBuffers in a safe way. In practice it's working (for now at least) but from what I've been reading I don't think it should work.
GLuint vao_id;
glGenVertexArrays(1, &vao_id);
glBindVertexArray(vao_id);
GLuint VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
//Alternate position <<----
//Unbind the VAO
glBindVertexArray(0);
//Current position <<----
glDeleteBuffers(1, &VBO);
I am currently calling glDeleteBuffers straight after unbinding the VAO. I have tried calling it in the alternative position marked - immediately after I have set the attribute pointer. This however caused a crash - my guess is this was because when I made the draw call there was no data to be drawn because I'd deleted it.
The thing that confuses me is that it works as I currently have it. I'm worried that a) I don't quite understand what happens when the buffer is deleted and b) that it only works by chance and could unexpectedly break.
As far as I understand calling glDeleteBuffers deletes the data so there shouldn't be any data to draw - but there is. So my other thought was that when I re-bind the VAO the data is restored, although that didn't make much sense to me because I can't reason where the data would be restored from.
Can someone let me know if I am using glDeleteBuffers correctly, and if not, where it should be called? (I'm guessing once there is no need for the data to be drawn any more, probably at the end of the program.)
What you're seeing is well defined behavior. The following are the key parts of the spec related to this (emphasis added).
From section "5.1.2 Automatic Unbinding of Deleted Objects" in the OpenGL 4.5 spec:
When a buffer, texture, or renderbuffer object is deleted, it is unbound from any bind points it is bound to in the current context, and detached from any attachments of container objects that are bound to the current context, as described for DeleteBuffers, DeleteTextures, and DeleteRenderbuffers.
and "5.1.3 Deleted Object and Object Name Lifetimes":
When a buffer, texture, sampler, renderbuffer, query, or sync object is deleted, its name immediately becomes invalid (e.g. is marked unused), but the underlying object will not be deleted until it is no longer in use.
A buffer, texture, sampler, or renderbuffer object is in use if any of the following conditions are satisfied:
the object is attached to any container object
...
The VAO is considered a "container object" for the VBO in this case. So as long as the VBO is referenced in a VAO, and the VAO itself is not deleted, the VBO stays alive. This is why your version of the code with the glDeleteBuffers() at the end works.
However, if the VAO is currently bound, and you delete the VBO, it is automatically unbound from the VAO. Therefore, it is not referenced by the VAO anymore, and deleted immediately. This applies to the case where you call glDeleteBuffers() immediately after glVertexAttribPointer().
In any case the id (aka name) becomes invalid immediately. So you would not be able to bind it again, and for example modify the data.
There are some caveats if you dig into the specs more deeply. For example, if you delete a buffer, and it stays alive because it is still referenced by a VAO, the name of the buffer could be used for a new buffer. This means that you basically have two buffers with the same name, which can result in some confusing behavior.
Partly for that reason, I personally wouldn't call glDelete*() for objects that you want to keep using. But others like to call glDelete*() as soon as possible.
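For illustration, a minimal sketch of a teardown that follows this advice, using the names from the question's code (when exactly it runs, e.g. at program shutdown, is an assumption):

// After the last draw call that uses this geometry:
glBindVertexArray(0);
glDeleteVertexArrays(1, &vao_id);  // the VAO no longer references the VBO after this
glDeleteBuffers(1, &VBO);          // safe: nothing will draw from the buffer again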
I would like to highlight in a separate answer what #Onyxite has pointed out in the first comment of the accepted answer. This has driven me nuts, and I have spent hours tracking down the issue.
AMD Windows drivers have a BUG where, if you delete a VBO after unbinding the VAOs that reference it, the driver will DELETE the buffer and its underlying object, so nothing will be drawn. This may result in a black screen, or in OpenGL not drawing that part of the VAO.
So, taking this in consideration, the answer to the question would be:
Even though it is correct as per the OpenGL specification, you should not call glDeleteBuffers() until you are actually going to delete the VAOs referencing that buffer.
So you should follow Reto Korandi's advice and not call glDelete*() for objects that you want to keep using.
The position that you have mentioned is not the correct place to call glDeleteBuffers, because at that point you haven't rendered the object yet. I think it would be better if you call this function after rendering the object, i.e. after calling glDrawArrays or glDrawElements.
If you first delete the buffer and call draw later, you might face a crash, because the draw call would try to access the buffer that you deleted before.
Here is the formal declaration for glBufferData which is used to populate a VBO:
void glBufferData(GLenum target, GLsizeiptr size, const GLvoid* data, GLenum usage);
What is confusing, however, is that you can have multiple VBOs, but this function does not require a handle to a particular VBO, so how does it know which VBO you are intending?
The target parameter can be either GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER but my understanding is that you can have more than one of each of these.
The same is true of the similar glBufferSubData method, which is intended to be called on a VBO at later times -- how does it know which VBO to handle?
This is a common pattern in OpenGL: you bind an object to a target and perform operations on it by issuing function calls that do not take a handle. The same applies to textures.
OpenGL operations that use a buffer object make use of the buffer that has been bound by the most recent call to glBindBuffer on the target in question.
glBindBuffer makes the given buffer the one currently bound to its target. A call such as glBufferData then accesses it by side effect, through the currently bound buffer object.
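A small sketch of that pattern with two buffers (sizes and data pointers are placeholders); each glBufferData call affects whichever buffer is bound to GL_ARRAY_BUFFER at that moment:

GLuint buffers[2];
glGenBuffers(2, buffers);

glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, sizeA, dataA, GL_STATIC_DRAW);   // fills buffers[0]

glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ARRAY_BUFFER, sizeB, dataB, GL_STATIC_DRAW);   // fills buffers[1]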
Sample code:
1. glGenBuffers(1, &VboId);
2. glBindBuffer(GL_ARRAY_BUFFER, VboId);
3. glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
4. glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
So we generate a generic VBO handle, and then we bind it using "GL_ARRAY_BUFFER". Binding it seems to have 2 purposes:
We must bind the buffer before we can copy data to the GPU via glBufferData
We must bind the buffer before we can add attributes to it via glVertexAttribPointer
And I think those are the only 2 times you need to bind the VBO. My question is: is there any scenario in which the target (GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_PIXEL_PACK_BUFFER, or GL_PIXEL_UNPACK_BUFFER) would be different on lines 2 and 3? Or would we want to rebind it to a different target before line 4?
Can we bind multiple buffer targets to a single VBO?
You do not bind targets to a buffer object. Targets are locations in the OpenGL context that you can bind things (like buffer objects) to. So you bind buffer objects to targets, not the other way around.
A buffer object (there is no such thing as a VBO; there are simply buffer objects) is just an unformatted, linear array of memory owned by the OpenGL driver. You can use it as a source for vertex array data by binding the buffer to GL_ARRAY_BUFFER and calling one of the gl*Pointer functions. These functions only work with the buffer currently bound to GL_ARRAY_BUFFER. You can use buffers as the source for index data by binding them to GL_ELEMENT_ARRAY_BUFFER and calling one of the glDrawElements functions.
The functions used to modify a buffer object's contents (glBufferData, glMapBuffer, glBufferSubData, etc.) all specifically take a target for their operations to work on. So glBufferData(GL_ARRAY_BUFFER, ...) does its stuff to whatever buffer is currently bound to GL_ARRAY_BUFFER.
So there are two kinds of functions that affect buffer objects: those that modify their contents, and those that use them in operations. The latter are specific to a source; glVertexAttribPointer always uses the buffer currently bound to GL_ARRAY_BUFFER. You can't make it use a different target. Similarly, glReadPixels always uses the buffer bound to GL_PIXEL_PACK_BUFFER. And so forth. If a function does stuff with buffer objects but doesn't take a target as a parameter, then its documentation will tell you which target it looks for its buffer from.
Note: Vertex arrays are kinda weird. The association between a vertex attribute and a buffer object is made by calling glVertexAttribPointer. What this function does is set the appropriate data for that attribute, using the buffer object that is currently bound to GL_ARRAY_BUFFER. By "currently bound", I mean bound at the time this function is called. So immediately after calling this function, you can call glBindBuffer(GL_ARRAY_BUFFER, 0), and it will change nothing about what happens when you go to render. It will render just fine.
In this way, you can use different buffer objects for different attributes. The information will be retained until you change it with another glVertexAttribPointer call for that particular attribute.
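For example, here is a rough sketch of two attributes sourced from two different buffer objects; the buffer names and attribute indices are assumptions:

glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid *) 0); // attribute 0 now reads from positionVbo

glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (const GLvoid *) 0); // attribute 1 now reads from colorVbo

glBindBuffer(GL_ARRAY_BUFFER, 0);  // unbinding afterwards changes nothing about these associations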