Difference between glGenBuffers and glCreateBuffers - opengl

Given we are using OpenGL 4.5 or have support for the GL_ARB_direct_state_access extension, we have the new function glCreateBuffers.
This function has an identical signature to glGenBuffers, but specifies:
returns n previously unused buffer names in buffers, each representing a new buffer object initialized as if it had been bound to an unspecified target
glGenBuffers has the following specification:
Buffer object names returned by a call to glGenBuffers are not returned by subsequent calls, unless they are first deleted with glDeleteBuffers.
So any buffer name returned by glCreateBuffers will never be returned again by glCreateBuffers itself, but it could still be returned by glGenBuffers.
It seems that glCreateBuffers will always create new buffer objects and return their names, while glGenBuffers only hands out unused names, possibly reusing names whose buffers have since been deleted.
What advantage does adding this function have?
When should I use glCreateBuffers over glGenBuffers?
P.S.
I think this holds for all glCreate* functions added by GL_ARB_direct_state_access

What you are noticing here is basically tidying up the API for consistency against Shader and Program object creation. Those have always been generated and initialized in a single call and were the only part of the API that worked that way. Every other object was reserved first using glGen* (...) and later initialized by binding the reserved name to a target.
In fact, prior to GL 3.0 it was permissible to skip glGen* (...) altogether and create an object simply by binding a unique number somewhere.
In GL 4.5, every type of object was given a glCreate* (...) function that generates and initializes it in a single call. This methodology fits nicely with Direct State Access, where modifying (in this case creating) an object does not require altering (and potentially restoring) a binding state.
Many objects require a target (e.g. textures) when using the API this way, but buffer objects are for all intents and purposes typeless. That is why the API signature is identical. When you create a buffer object with this interface, it is "initialized as if it had been bound to an unspecified target." That would be complete nonsense for most types of objects in GL; they need a target to properly initialize them.
The primary consideration here is that you may want to create and setup state for an object in GL without affecting some other piece of code that expects the object bound to a certain target to remain unchanged. That is what Direct State Access was created for, and that is the primary reason these functions exist.
In theory, as dari points out, initializing a buffer object by binding it to a specific target potentially gives the driver hints about its intended usage. I would not put much stock in that, though; it is as iffy as the actual usage flags passed to glBufferData (...): a hint at best.
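For illustration, a minimal sketch of that bind-free path (assuming GL 4.5 or ARB_direct_state_access; data is a hypothetical client-side array):

GLuint buf;
glCreateBuffers(1, &buf);                                   /* name created and object initialized in one call */
glNamedBufferData(buf, sizeof(data), data, GL_STATIC_DRAW); /* upload without touching any binding point */

No binding point is disturbed at any step, so code elsewhere that depends on the current GL_ARRAY_BUFFER binding is unaffected.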

OpenGL 4.5 Specification - 6.1 Creating and Binding Buffer Objects:
A buffer object is created by binding a name returned by GenBuffers to a buffer target. The binding is effected by calling
void BindBuffer( enum target, uint buffer );
target must be one of the targets listed in table 6.1. If the buffer object named buffer has not been previously bound, the GL creates a new state vector, initialized with a zero-sized memory buffer and comprising all the state and with the same initial values listed in table 6.2.
So the difference between glGenBuffers and glCreateBuffers is that glGenBuffers only returns an unused name, while glCreateBuffers also creates and initializes the state vector described above.
Usage:
It is recommended to use glGenBuffers + glBindBuffer, because
the GL may make different choices about storage location and layout based on the initial binding.
Since glCreateBuffers provides no initial binding, this choice cannot be made.
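For comparison, a sketch of that recommended pattern (data again a hypothetical client-side array):

GLuint buf;
glGenBuffers(1, &buf);                              /* reserves an unused name; no buffer object exists yet */
glBindBuffer(GL_ARRAY_BUFFER, buf);                 /* first bind creates and initializes the state vector */
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);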

glCreateBuffers does not have a target because buffer objects are not typed. The first binding target was only ever used as a hint in OpenGL. And Khronos considered giving glCreateBuffers a target parameter, but they decided against it:
NamedBufferData (and the corresponding function from the original EXT) do not include the <target> parameter. Does implementations may make initial assumptions about the usage of a data store based on this parameter. Where did it go? Should we bring it back?
RESOLVED: No need for a target parameter for buffer. Implemetations[sic] don't make usage assumption based on the <target> parameter. Only one vendor extension do so AMD_pinned_memory. A[sic] for consistent approach to specify a buffer usage would be to add a new flag for that <flags> parameter of BufferStorage.

Why is OpenGL designed to bind a buffer to a target before setting buffer data? [duplicate]

I don't understand what the purpose is of binding points (such as GL_ARRAY_BUFFER) in OpenGL. To my understanding, glGenBuffers() creates a sort of pointer to a vertex buffer object located somewhere within GPU memory.
So:
glGenBuffers(1, &bufferID)
means I now have a handle, bufferID, to 1 vertex buffer object on the graphics card. Now I know the next step would be to bind bufferID to a binding point
glBindBuffer(GL_ARRAY_BUFFER, bufferID)
so that I can use that binding point to send data down using the glBufferData() function like so:
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW)
But why couldn't I just use the bufferID to specify where I want to send the data instead? Something like:
glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW)
Then when calling a draw function I would also just put in which ever ID to whichever VBO I want the draw function to draw. Something like:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
Why do we need the extra step of indirection with glBindBuffer?
OpenGL uses object binding points for two things: to designate an object to be used as part of a rendering process, and to be able to modify the object.
Why it uses them for the former is simple: OpenGL requires a lot of objects to be able to render.
Consider your overly simplistic example:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
That API doesn't let me have separate vertex attributes come from separate buffers. Sure, you might then propose glDrawArrays(GLint count, GLuint *object_array, ...). But how do you connect a particular buffer object to a particular vertex attribute? Or how do you have 2 attributes come from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API. But your proposed one can't handle them.
And even that is putting aside the many other objects you need to render: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having all of the needed objects specified in a single command would be fundamentally unworkable (and that leaves aside the performance costs).
And every time the API would need to add a new kind of object, you would have to add new variations of the glDraw* functions. And right now, there are over a dozen such functions. Your way would have given us hundreds.
So instead, OpenGL defines ways for you to say "the next time I render, use this object in this way for that process." That's what binding an object for use means.
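For instance, a typical draw in the current API names its objects by binding each one beforehand (prog, vao, and tex are hypothetical objects created earlier):

glUseProgram(prog);                     /* use this program for the next draw */
glBindVertexArray(vao);                 /* use this vertex setup */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);      /* use this texture in unit 0 */
glDrawArrays(GL_TRIANGLES, 0, 3);       /* the draw picks up everything bound above */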
But why couldn't I just use the bufferID to specify where I want to send the data instead?
This is about binding an object for the purpose of modifying the object, not saying that it will be used. That is... a different matter.
The obvious answer is, "You can't do it because the OpenGL API (until 4.5) doesn't have a function to let you do it." But I rather suspect the question is really why OpenGL doesn't have such APIs (until 4.5, where glNamedBufferStorage and such exist).
Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It really was a "decision" that came about by the evolution of the OpenGL API from 1.0, thanks to following the path of least resistance. Repeatedly.
Indeed, just about every bad decision that OpenGL has made can be traced back to taking the path of least resistance in the API. But I digress.
In OpenGL 1.0, there was only one kind of object: display list objects. That means that even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D. That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which allowed you to switch textures by executing that display list. And hopefully the driver would realize you were doing that and instead allocate video memory and so forth appropriately.
So when 1.1 came around, the OpenGL ARB realized how mind-bendingly silly that was. So they created texture objects, which encapsulate both the memory storage of a texture and the various state within. When you wanted to use the texture, you bound it. But there was a snag. Namely, how to change it.
See, 1.0 had a bunch of already existing functions like glTexImage*D, glTexParameter and the like. These modify the state of the texture. Now, the ARB could have added new functions that do the same thing but take texture objects as parameters.
But that would mean dividing all OpenGL users into 2 camps: those who used texture objects and those who did not. It meant that, if you wanted to use texture objects, you had to rewrite all of your existing code that modified textures. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change that function to call the new texture object function. But you would also have to change the function of yours that calls it so that it would take, as a parameter, the texture object that it operates on.
And if that function didn't belong to you (because it was part of a library you were using), then you couldn't even do that.
So the ARB decided to keep those old functions around and simply have them behave differently based on whether a texture was bound to the context or not. If one was bound, then glTexParameter/etc would modify the bound texture, rather than the context's normal texture.
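That bind-to-modify pattern is still what texture setup looks like today; a sketch (w, h, and pixels are hypothetical):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);      /* from here on, the 1.0-era calls modify tex */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);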
This one decision established the general paradigm shared by almost all OpenGL objects.
ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions (glVertexAttribPointer and the like) work in relation to buffers. You have to bind a buffer to GL_ARRAY_BUFFER, then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function will pick that up and treat the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
Why? For the same reason: ease of compatibility (or to promote laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs to the gl*Pointer functions. Whereas ARB_vertex_buffer_object just piggybacked off of the existing entrypoints.
Users didn't have to change from using glVertexPointer to glVertexBufferOffset or some other function. All they had to do was bind a buffer before calling a function that set up vertex information (and of course change the pointers to byte offsets).
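In code, that piggybacking looks like this (buf is a hypothetical buffer whose vertex data was uploaded earlier); the last parameter is now read as a byte offset:

glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);  /* offset 0 into buf */
glBindBuffer(GL_ARRAY_BUFFER, 0);       /* the attribute keeps referencing buf afterwards */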
It also meant that they didn't have to add a bunch of glDrawElementsWithBuffer-type functions for rendering with indices that come from buffer objects.
So this wasn't a bad idea in the short term. But as with most short-term decision making, it starts being less reasonable with time.
Of course, if you have access to GL 4.5/ARB_direct_state_access, you can do things the way they ought to have been done originally.
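A sketch of that DSA style, under the assumption of a single vec3 position attribute (verts is a hypothetical vertex array):

GLuint vao, buf;
glCreateVertexArrays(1, &vao);
glCreateBuffers(1, &buf);
glNamedBufferStorage(buf, sizeof(verts), verts, 0);            /* immutable storage, no binding needed */
glVertexArrayVertexBuffer(vao, 0, buf, 0, 3 * sizeof(float));  /* attach buf to binding index 0 */
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);                         /* attribute 0 reads from binding 0 */
glEnableVertexArrayAttrib(vao, 0);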

Why are all openGL objects stored in GLuints?

My best guess is that GLuint holds a pointer rather than the object, and hence it can "hold" any object, because it's actually just holding a pointer to a space in memory.
But if this is true, why do I not need to dereference anything when using these variables?
OpenGL object names are handles referencing an OpenGL object. They are not "pointers"; they are just a unique identifier which specifies a particular object. The OpenGL implementation, for each object type, has a map between object names and the actual internal object storage.
This dichotomy exists for legacy historical reasons.
The very first OpenGL object type was display lists. You created a number of new display lists using the glNewList function. This function doesn't give you names for objects; you tell it a range of integer names that the implementation will use.
This is the foundational reason for the dichotomy: the user decides what the names are, and the implementation maps from the user-specified name to the implementation-defined data. The only limitation is that you can't use the same name twice.
The display list paradigm was modified slightly for the next OpenGL object type: textures. In the new paradigm, there is a function that allows the implementation to create names for you: glGenTextures. But this function was optional. You could call glBindTexture on any integer you want, and the implementation will, in that moment, create a texture object that maps to that integer name.
As new object types were created, OpenGL kept the texture paradigm for them. They had glGen* functions, but they were optional so that the user could specify whatever names they wanted.
Shader objects were a bit of a departure, as their Create functions don't allow you to pick names. But they still used integers because... API consistency matters even when being inconsistent (note that the extension version of GLSL shader objects used pointers, but the core version decided not to).
Of course, core OpenGL did away with user-provided names entirely. But it couldn't get rid of integer object names as a concept without basically creating a new API. While core OpenGL is a compatibility break, it was designed such that, if you coded your pre-core OpenGL code "correctly", it would still work in core OpenGL. That is, core OpenGL code should also be valid compatibility OpenGL code.
And the path of least resistance for that was to not create a new API, even if it makes the API really silly.

OpenGL id value guarantee

I am confronted with the fact that sometimes my OpenGL context gets re-created, and my initialization needs to be redone for the re-created context.
Right now I am not using many elements, but I noticed that the IDs (or names or whatchacallthem) that I receive from glGenX are always the same as long as I call the functions in the same order: the first texture I create gets ID 1, the second ID 2, and so on.
Is this guaranteed? Because if it is, my internal organization of those OpenGL elements does not need to be redone: even though the OpenGL context is a new one, a reference to texture ID 4 will always point at the correct texture, as long as that texture gets re-loaded into the GPU fourth in a row?
No, I have never seen a guarantee that the generation of object names will produce the same result each time. The language for the glGen*() calls always sounds like this:
returns n previously unused [..] object names
It never says anything more about how these object names are constructed.
Now, in reality, if you're calling everything in exactly the same sequence, it seems very likely that you're going to get the same names. Software tends to be deterministic in cases like this. But relying on it still sounds like a bad idea to me. It will be one of these things that will probably work for the longest time, and then come back and bite you when you least expect it at the most unfortunate time (like the day before you plan to ship, or after you already shipped).
I don't think this is strictly within the scope of your question, but just to make sure that you're not making any false assumptions: You definitely can't expect the names to be sequential. They are on some platforms/vendors, but not on others. There's also no valid expectation on whether different object types use different names, or if they use the same values. For example, if you call glGenTextures() to create a texture name, and glGenBuffers() to create a buffer name, you could get the same value for the two names.
Also, just to avoid possible misunderstandings: Even if you assume that you already know the names that are going to be generated, you still need to call the glGen*() functions to generate the names if you use the OpenGL Core Profile. It used to be legal to just use any values you wanted for object names, but that's not legal anymore in modern OpenGL. Or in the words of the spec, from the Removed Features appendix:
Application-generated object names - the names of all object types, such as buffer, query, and texture objects, must be generated using the corresponding Gen* commands. Trying to bind an object name not returned by a Gen* command will result in an INVALID_OPERATION error.
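A robust way to handle context re-creation, then, is to store whatever names the Gen* commands return and regenerate them each time, never hard-coding ID values. A sketch (TEXTURE_COUNT and the re-upload step are hypothetical):

#define TEXTURE_COUNT 8                       /* hypothetical app constant */
GLuint textures[TEXTURE_COUNT];               /* app-side table, indexed by your own keys */

void recreate_gl_resources(void)              /* call after the context has been re-created */
{
    glGenTextures(TEXTURE_COUNT, textures);   /* names may differ from last time; that is fine */
    /* ... re-upload each image, always referring to textures[i], never to a literal ID ... */
}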

why the second call to glBindBuffer?

I'm reading along in this tutorial, and when I get down towards the end, in the section on how to use vertex buffers, I see that the vertex buffer which I generated and already called glBindBuffer on once has to be bound a second time:
glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle);
I'm still very new to OpenGL (like 3 days), so I'm trying to wrap my mind around how a lot of these things work. I spend most of my time on khronos.org or opengl.org reading about the commands, but I couldn't figure out why this one gets called twice. Any hints? Thanks.
Does that particular example strictly need the second bind? No. OpenGL retains state, so if a buffer object is bound to a target, then it will remain bound until you bind something else to that target.
However, what happens if you insert code after the creation of the buffer that creates a second buffer? After all, you might want to have two objects. Or 10. Or however many you want; they don't have to share buffer objects.
Once you do that, your code breaks because the buffer that your code expects to be bound isn't actually bound. Therefore, unless you're good at managing state and really know what you're doing (and if you're still following tutorials, the answer is "no"), you should set whatever state you need to do what you intend.
Therefore, if you intend to draw from a particular buffer, you should bind it and set the appropriate state (the gl*Pointer calls).
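Following that advice, the draw path re-establishes its own state instead of trusting whatever happens to be bound (a sketch in the tutorial's style; attribute_coord2d is a hypothetical attribute location):

glBindBuffer(GL_ARRAY_BUFFER, vbo_triangle);   /* make sure *this* buffer is the one bound */
glEnableVertexAttribArray(attribute_coord2d);
glVertexAttribPointer(attribute_coord2d, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLES, 0, 3);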
You have to bind and unbind the buffer to copy to it from 'C' and draw with it from OpenGL.
Think of it as locking/unlocking between the program and the graphics.
So the sequence is
create
bind
stuff data
unbind
bind
display
unbind
In brief: the second one usually unbinds the first one.
If no buffer object with name buffer exists, one is created with that name. When a buffer object is bound to a target, the previous binding for that target is automatically broken.
Buffer object names are unsigned integers. The value zero is reserved, but there is no default buffer object for each buffer object target. Instead, buffer set to zero effectively unbinds any buffer object previously bound, and restores client memory usage for that buffer object target (if supported for that target). Buffer object names and the corresponding buffer object contents are local to the shared object space of the current GL rendering context; two rendering contexts share buffer object names only if they explicitly enable sharing between contexts through the appropriate GL windows interfaces functions.
Reference: https://www.opengl.org/sdk/docs/man/html/glBindBuffer.xhtml