I've read that a VBO (Vertex Buffer Object) essentially keeps a reference count, so that if the VBO's name is passed to glDeleteBuffers(), it isn't truly destroyed while a living VAO (Vertex Array Object) still references it. This behavior is similar to the "smart pointers" that newer languages are increasingly adopting. But I haven't been able to find any information on how far this actually goes, whether it can be designed around, or whether it applies to an IBO (Index Buffer Object) as well.
If a VBO is kept alive by a VAO that references it, and I don't intend to update it or use it beyond the VAO's lifetime, I think the best approach is to delete my own reference to it. Is it proper to do so? And can I do the same with an IBO?
Objects can be attached to other objects. So long as an object is attached to another object, the attached object will not actually be destroyed by calling glDelete*. It will be destroyed only after it is either detached or the object it is attached to is destroyed as well.
This isn't really something to worry about all that much. If you glDelete* an object, you should not directly use that name again.
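As a minimal sketch of what that means in practice (the handle names are placeholders, and the VAO is assumed to reference both buffers):

GLuint vao, vbo, ibo;   // created earlier with glGenVertexArrays / glGenBuffers

// Deleting the buffers only frees their names for reuse; the VAO still holds
// references to them, so the underlying storage stays alive.
glDeleteBuffers(1, &vbo);
glDeleteBuffers(1, &ibo);

// Once the VAO is deleted, the last references are gone and the buffer
// storage can actually be reclaimed.
glDeleteVertexArrays(1, &vao);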
Related
Is it required to unbind a buffer object before deleting it? If I had bound it in a VAO and deleted it without unbinding (binding to 0), what will happen? Will the reference still exist?
public void dispose()
{
    glBindVertexArray(0);
    glDeleteVertexArrays(vaoID);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(vboVertID);
    glDeleteBuffers(vboColID);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDeleteBuffers(eboID);
}
Is it a good or a bad practice to unbind before deletion?
Is it necessary?
No.
Is it a good idea?
Possibly, but not in your current pseudo-code.
The only time you would want to manually unbind a resource in GL prior to deletion is if you have it bound in a separate context. That is because one of the criteria for actually freeing the memory associated with a GL resource is that it have a reference count of 0. glDelete* (...) only unbinds an object from the current context prior to putting it on a queue of objects to free.
If you delete it while a VAO that is not currently bound holds a pointer to this buffer, or if it is bound in a completely different OpenGL context from the one you call glDelete* (...) in, then the reference count does not reach 0 before glDelete* (...) finishes. As a result, the memory will not be freed until you actually unbind it from or destroy all VAO / render contexts that are holding references. You will effectively leak memory until you take care of all the dangling references.
In short, glDelete* (...) will always unbind resources from the current context and reclaim any names for immediate reuse, but it will only free the associated memory if after unbinding it the reference count is 0.
In this case, unbinding is completely unnecessary because you are doing so in the same context you call glDeleteBuffers (...) from. This call implicitly unbinds the object you are deleting, so you are doing something redundant. What is more, you already deleted your VAO prior to calling glDeleteBuffers (...) -- when that VAO was deleted, it relinquished all of its pointers and thus decremented the reference count to your buffer.
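Written as plain GL calls in C/C++ (using the same handle names as the code above), a cleanup without the redundant unbinds might look like this sketch:

// Deleting the VAO first drops its references to the attached buffers.
glDeleteVertexArrays(1, &vaoID);

// glDeleteBuffers implicitly unbinds these buffers from the current context,
// so no explicit glBindBuffer(..., 0) calls are needed beforehand.
GLuint buffers[] = { vboVertID, vboColID, eboID };
glDeleteBuffers(3, buffers);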
Official documentation (https://www.opengl.org/sdk/docs/man/html/glDeleteBuffers.xhtml) says:
glDeleteBuffers deletes n buffer objects named by the elements of the array buffers. After a buffer object is deleted, it has no contents, and its name is free for reuse (for example by glGenBuffers). If a buffer object that is currently bound is deleted, the binding reverts to 0 (the absence of any buffer object).
As for VAO - https://www.opengl.org/registry/specs/ARB/vertex_array_object.txt
(2) What happens when a buffer object that is attached to a non-current
VAO is deleted?
RESOLUTION: Nothing (though a reference count may be decremented).
A buffer object that is deleted while attached to a non-current VAO
is treated just like a buffer object bound to another context (or to
a current VAO in another context).
Let's say I allocate memory for a uniform buffer, like so:
GLuint length(0x1000);
GLuint myBuffer;
glGenBuffers(1, &myBuffer);
glBindBuffer(GL_UNIFORM_BUFFER, myBuffer);
glBufferData(GL_UNIFORM_BUFFER, length, NULL, GL_STATIC_DRAW);
When I am done using the buffer, I would like to make sure that the memory is available again for other buffers. Is it sufficient to call glDeleteBuffers(1, &myBuffer) to achieve that? My gut feeling tells me there should be a call symmetrical to glBufferData for this (like glInvalidateBufferData in OpenGL 4), but nothing of the kind is mentioned in the documentation for glBufferData (http://www.opengl.org/sdk/docs/man/xhtml/glBufferData.xml).
Not to be a buzz kill, but container objects such as Vertex Array Objects significantly complicate this discussion.
Normally, when you delete a buffer object two key things happen that allow the memory to be reclaimed:
1. Its name (GLuint ID) is freed up for reuse immediately
2. The object is unbound from the currently active context
There is a hidden caveat that needs to be observed:
The data store is not actually freed until there are no remaining references to the object in any context.
When you delete a Vertex Buffer Object that is bound to a Vertex Array Object and that Vertex Array Object is not currently bound, the behavior discussed in bullet point 2 does not occur. What happens is the name is freed up, but the VAO continues to reference both the name (which is now invalid) and the data store (which continues to exist). The memory for the buffer object will not be reclaimed until this Vertex Array Object is deleted, or the binding is changed so that it no longer references the original buffer object.
For a more authoritative explanation of the above, I suggest you read Section 5.1.2 and Section 5.1.3 of the OpenGL 4.4 core spec. I will list the most relevant parts of both below.
5.1.2 Automatic Unbinding of Deleted Objects
When a buffer, texture, or renderbuffer object is deleted, it is unbound from any bind points it is bound to in the current context, and detached from any attachments of container objects that are bound to the current context, as described for DeleteBuffers, DeleteTextures, and DeleteRenderbuffers. If the object binding was established with other related state (such as a buffer range in BindBufferRange or selected level and layer information in FramebufferTexture or BindImageTexture), that state is not affected by the automatic unbind. Bind points in other contexts are not affected. Attachments to unbound container objects, such as deletion of a buffer attached to a vertex array object which is not bound to the context, are not affected and continue to act as references on the deleted object, as described in the following section.
5.1.3 Deleted Object and Object Name Lifetimes
[...]
The underlying storage backing a deleted object will not be reclaimed by the GL until all references to the object from container object attachment points, context binding points, or views are removed.
NOTE: This behavior applies to all container objects in OpenGL: memory is not reclaimed until all references to a resource are eliminated. Familiarizing yourself with the necessary conditions (see: 5.1.2) for references to be removed will serve you well in the long run.
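Given that, a conservative deletion order (sketched here with placeholder names) is to delete the container object first, so that no attachment references outlive the buffer deletion:

// Deleting the VAO first removes its attachment references to the buffers.
glDeleteVertexArrays(1, &vao);

// With no remaining container or context references, the buffer's storage
// becomes eligible for reclamation as soon as it is deleted.
glDeleteBuffers(1, &vbo);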
glDeleteBuffers marks the selected buffers for deletion and deallocation, which happens as soon as no part of OpenGL needs the buffer's data internally any longer. For all practical purposes, glDeleteBuffers frees the buffers.
I have run into an issue I am unsure of how to properly handle. I recently began creating a particle system for my game, and have been using a structure called 'Particle' for my particle data. 'Particle' contains the vertex information for rendering.
The reason I am having issues is that I am pooling my particle structures in heap memory in order to save on large amounts of allocations; however, I am unsure of how to use an array of pointers with glBufferData. I am under the impression that glBufferData requires the actual structure instance rather than a pointer to the structure instance.
I know I can rebuild an array of floats each render just to draw my particles, but is there an OpenGL call like glBufferData which I am missing somewhere that is able to de-reference my pointers as it is going through the data I supply? I would ideally like to prevent having to iterate over the array just to copy the data.
I am under the impression that glBufferData requires the actual structure instance rather than a pointer to the structure instance.
Correct. Effectively, glBufferData creates a flat copy of the data presented to it at the address passed via the data parameter.
which I am missing somewhere that is able to de-reference my pointers as it is going through the data I supply?
You're thinking of client-side vertex arrays, which are among the oldest features of OpenGL. They've been around since OpenGL 1.1, released 19 years ago.
You just don't use a buffer object, i.e. you don't call glGenBuffers, glBindBuffer, or glBufferData, and instead you pass your client-side data address directly to glVertexPointer or glVertexAttribPointer.
However, I strongly advise actually using buffer objects. The data must be copied to the GPU anyway so that it can be rendered, and doing it through a buffer object lets the OpenGL driver work more efficiently. Also, since OpenGL 4 the use of buffer objects is no longer optional.
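If you do keep using buffer objects, one straightforward approach is the flattening pass the question already mentions: gather the pooled particles into a contiguous staging array each update and upload it in a single call. A minimal sketch, assuming a hypothetical Particle struct with a position and an OpenGL loader already included:

#include <vector>

struct Particle {        // hypothetical pooled particle
    float x, y, z;       // per-vertex data used for rendering
    // ... other simulation state ...
};

// OpenGL cannot follow an array of pointers, so the per-vertex data is
// gathered into one contiguous array before uploading.
void uploadParticles(GLuint vbo, const std::vector<Particle*>& pool)
{
    std::vector<float> staging;
    staging.reserve(pool.size() * 3);
    for (const Particle* p : pool) {
        staging.push_back(p->x);
        staging.push_back(p->y);
        staging.push_back(p->z);
    }

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STREAM_DRAW hints that the contents are rewritten frequently.
    glBufferData(GL_ARRAY_BUFFER,
                 staging.size() * sizeof(float),
                 staging.data(),
                 GL_STREAM_DRAW);
}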
First of all, I do realize this completely contradicts the purpose of a shared_ptr. I am dealing with some library code where instances of a ParticleSystem expect to have a shared_ptr passed to them during construction to set the texture used for each particle. The thing is, I've already built the rest of my program in a way where my textures have concrete ownership (if that's the right term): the TextureCache owns all Textures. So I need a way to work with this ParticleSystem class without allowing it to delete my textures. If I were to simply create a new instance like ParticleSystem(std::shared_ptr<Texture>& myTexture), then it would attempt to destroy the texture upon its destruction (which is an unwanted and invalid operation, since my textures aren't even created with new).
The cleanest way I see around this problem is something like this:
Create a shared_ptr holding the texture in the function that creates the ParticleSystem.
Then using placement new, reconstruct the shared_ptr in the same memory location as the shared_ptr I just created. The texture will now have a reference count of 2.
Create the particle system.
Let the shared_ptr go out of scope. Its destructor will be called since it was allocated on the stack, and it will decrement the reference count by only 1. Thus the reference count for the object will always be 1 greater than it truly is, and so it will never be automatically destroyed.
I believe this solution is sound, but it still feels incredibly hackish. Is there a better way to solve my problem?
If you want to pass an unmanaged pointer (one that you manage yourself) to code expecting a smart pointer such as shared_ptr, you can simply disable the "smart" pointer functionality by creating an empty but non-null shared_ptr via the aliasing constructor:
Texture* unmanagedPointer = ...
shared_ptr<Texture> smartPointer(shared_ptr<Texture>(), unmanagedPointer);
This solution is more efficient and shorter than the custom deleter others suggested, since no control block allocation or reference counting is going on.
Some additional details can be found here:
What is the difference between an empty and a null std::shared_ptr in C++?
How to avoid big memory allocation with std::make_shared
You can create a shared_ptr with a custom deleter that does nothing. This will prevent the shared_ptr from deleting the textures it holds.
struct null_deleter
{
    void operator()(void const *) const
    {
    }
};

shared_ptr<Texture> CreateTexture(Texture* myTexture)
{
    shared_ptr<Texture> pTexture(myTexture, null_deleter());
    return pTexture;
}
shared_ptr allows you to supply a custom deleter. So shared_ptr can be used for memory allocated with malloc or whatever memory allocation scheme you're using; you could even use it to automatically unlock a mutex or close a file, but I digress. You could create a shared_ptr with a null deleter, which would not do anything when its reference count reaches 0.
Store the shared_ptr in your cache, as Vaughn Cato suggests. In order to remove a texture from the cache when no one else uses it, just check whether the use_count function of the shared_ptr returns 1, which means the cache is the only owner.
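A rough sketch of that idea, assuming a hypothetical cache that maps names to shared_ptr<Texture> (the Texture type and cache layout here are made up for illustration):

#include <map>
#include <memory>
#include <string>

struct Texture { /* ... */ };

// Entries whose use_count() is 1 are referenced only by the cache itself,
// so nothing else in the program is using them and they can be evicted.
void evictUnusedTextures(std::map<std::string, std::shared_ptr<Texture>>& cache)
{
    for (auto it = cache.begin(); it != cache.end(); ) {
        if (it->second.use_count() == 1)
            it = cache.erase(it);
        else
            ++it;
    }
}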
I've been using glBufferData, and it makes sense to me that you'd have to specify usage hints (e.g. GL_DYNAMIC_DRAW).
However, it was recently suggested to me on Stack Overflow that I use glMapBuffer or glMapBufferRange to modify non-contiguous blocks of vertex data.
When using glMapBuffer, there does not seem to be any point at which you specify a usage hint. So, my questions are as follows:
Is it valid to use glMapBuffer on a given VBO if you've never called glBufferData on that VBO?
If so, how does OpenGL guess the usage, since it hasn't been given a hint?
What are the advantages/disadvantages of glMapBuffer vs glBufferData? (I know they don't do exactly the same thing. But it seems that by getting a pointer with glMapBuffer and then writing to that address, you can do the same thing glBufferData does.)
Is it valid to use glMapBuffer on a given VBO if you've never called glBufferData on that VBO?
No, because to map some memory, it must be allocated first.
If so, how does OpenGL guess the usage, since it hasn't been given a hint?
It doesn't. You must call glBufferData at least once to initialize the buffer object. If you don't want to actually upload data (because you're going to use glMapBuffer), just pass a null pointer for the data parameter. This works just like glTexImage: the buffer/texture object's storage is created and can be filled later with glBufferSubData/glTexSubImage or, in the case of a buffer object, also through a memory mapping.
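A minimal sketch of that pattern (the buffer name and size are placeholders): allocate the data store with a null pointer, then map it to fill it in:

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Allocate 4096 bytes of storage without uploading anything: data is NULL.
glBufferData(GL_ARRAY_BUFFER, 4096, NULL, GL_DYNAMIC_DRAW);

// The buffer now has a data store, so mapping it is legal.
void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr) {
    // ... write up to 4096 bytes through ptr ...
    glUnmapBuffer(GL_ARRAY_BUFFER);
}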
What are the advantages/disadvantages of glMapBuffer vs glBufferData? (I know they don't do exactly the same thing. But it seems that by getting a pointer with glMapBuffer and then writing to that address, you can do the same thing glBufferData does.)
glMapBuffer allows you to write to the buffer asynchronously from another thread. On some implementations the OpenGL driver may even give your process direct access to the GPU's DMA memory, or better yet to the GPU's memory itself, for example on SoC architectures with integrated graphics.
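For updating a non-contiguous block of vertex data, as the question mentions, glMapBufferRange can map just part of the data store. A minimal sketch, with an arbitrary offset and length:

// Map bytes [256, 256 + 64) of the currently bound VBO for writing, telling
// the driver the previous contents of that range may be discarded.
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 256, 64,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
if (ptr) {
    // ... write the 64 updated bytes through ptr ...
    glUnmapBuffer(GL_ARRAY_BUFFER);
}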
No, this appears to be invalid. You must call glBufferData, because otherwise OpenGL cannot know the size of your buffer.
As to which is faster, neither I nor the internet at large appears to know a definite answer. Just test it and see.