My best guess is that GLuint holds a pointer rather than the object, and hence it can "hold" any object, because it's actually just holding a pointer to a space in memory.
But if this is true why do I not need to dereference anything when using these variables?
OpenGL object names are handles referencing an OpenGL object. They are not "pointers"; they are just a unique identifier which specifies a particular object. The OpenGL implementation, for each object type, has a map between object names and the actual internal object storage.
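To make that name-to-object mapping concrete, here is a minimal C++ sketch of what an implementation conceptually does. TextureObject and the map are purely illustrative; no real driver is organized this way, but the principle is the same:

#include <unordered_map>

// Hypothetical driver-side storage for one texture object.
struct TextureObject { /* dimensions, format, pixel data, ... */ };

// Conceptually, the implementation keeps a per-type map from integer
// names (what the application holds in a GLuint) to the real storage.
std::unordered_map<unsigned int, TextureObject> textureObjects;

void bindTextureImpl(unsigned int name) {
    // The driver looks the object up by name; the application never
    // sees a pointer, so it never has anything to dereference.
    TextureObject& obj = textureObjects[name];
    // ... make obj the current texture for subsequent commands ...
}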
This dichotomy exists for legacy historical reasons.
The very first OpenGL object type was display lists. You created a new display list using the glNewList function. This function doesn't give you a name for the object; you tell it the integer name that the implementation will use.
This is the foundational reason for the dichotomy: the user decides what the names are, and the implementation maps from the user-specified name to the implementation-defined data. The only limitation is that you can't use the same name twice.
The display list paradigm was modified slightly for the next OpenGL object type: textures. In the new paradigm, there is a function that allows the implementation to create names for you: glGenTextures. But this function was optional. You could call glBindTexture on any integer you want, and the implementation will, in that moment, create a texture object that maps to that integer name.
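In code, the two styles look roughly like this (the literal name 42 is an arbitrary illustration; the second style is only valid in legacy/compatibility contexts):

// Style 1: let the implementation pick a name.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);   // the object now exists

// Style 2 (legacy/compatibility only): pick the name yourself.
// Binding a previously unused integer creates the object on the spot.
glBindTexture(GL_TEXTURE_2D, 42);    // texture object named 42 now exists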
As new object types were created, OpenGL kept the texture paradigm for them. They had glGen* functions, but they were optional so that the user could specify whatever names they wanted.
Shader objects were a bit of a departure, as their Create functions don't allow you to pick names. But they still used integers because... API consistency matters even when being inconsistent (note that the extension version of GLSL shader objects used pointers, but the core version decided not to).
Of course, core OpenGL did away with user-provided names entirely. But it couldn't get rid of integer object names as a concept without basically creating a new API. While core OpenGL is a compatibility break, it was designed such that, if you coded your pre-core OpenGL code "correctly", it would still work in core OpenGL. That is, core OpenGL code should also be valid compatibility OpenGL code.
And the path of least resistance for that was to not create a new API, even if it makes the API really silly.
I don't understand what the purpose is of binding points (such as GL_ARRAY_BUFFER) in OpenGL. To my understanding glGenBuffers() creates a sort of pointer to a vertex buffer object located somewhere within GPU memory.
So:
glGenBuffers(1, &bufferID)
means I now have a handle, bufferID, to 1 vertex buffer object on the graphics card. Now I know the next step would be to bind bufferID to a binding point
glBindBuffer(GL_ARRAY_BUFFER, bufferID)
so that I can use that binding point to send data down using the glBufferData() function like so:
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW)
But why couldn't I just use the bufferID to specify where I want to send the data instead? Something like:
glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW)
Then when calling a draw function I would also just put in whichever ID to whichever VBO I want the draw function to draw. Something like:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
Why do we need the extra step of indirection with glBindBuffers?
OpenGL uses object binding points for two things: to designate an object to be used as part of a rendering process, and to be able to modify the object.
Why it uses them for the former is simple: OpenGL requires a lot of objects to be able to render.
Consider your overly simplistic example:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
That API doesn't let me have separate vertex attributes come from separate buffers. Sure, you might then propose glDrawArrays(GLint count, GLuint *object_array, ...). But how do you connect a particular buffer object to a particular vertex attribute? Or how do you have 2 attributes come from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API. But your proposed one can't handle it.
And even that is putting aside the many other objects you need to render: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having all of the needed objects specified in a single command would be fundamentally unworkable (and that leaves aside the performance costs).
And every time the API would need to add a new kind of object, you would have to add new variations of the glDraw* functions. And right now, there are over a dozen such functions. Your way would have given us hundreds.
So instead, OpenGL defines ways for you to say "the next time I render, use this object in this way for that process." That's what binding an object for use means.
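As a sketch of what binding for use looks like in practice (modern core GL; the object names program, vao, and diffuseTex are illustrative):

// "The next time I render, use these objects in these ways."
glUseProgram(program);                     // which shaders to run
glBindVertexArray(vao);                    // where vertex data comes from
glBindTexture(GL_TEXTURE_2D, diffuseTex);  // which texture the active unit samples
glDrawArrays(GL_TRIANGLES, 0, 3);          // the draw call itself names no objects

Note that the draw call takes no object names at all; everything it needs was latched by the binds.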
But why couldn't I just use the bufferID to specify where I want to send the data instead?
This is about binding an object for the purpose of modifying the object, not saying that it will be used. That is... a different matter.
The obvious answer is, "You can't do it because the OpenGL API (until 4.5) doesn't have a function to let you do it." But I rather suspect the question is really why OpenGL doesn't have such APIs (until 4.5, where glNamedBufferStorage and such exist).
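Side by side, the two styles look like this (a sketch; the second call requires GL 4.5 or ARB_direct_state_access):

// Pre-4.5: bind-to-modify. glBufferData acts on whatever buffer is
// currently bound to GL_ARRAY_BUFFER.
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

// GL 4.5 / ARB_direct_state_access: modify the object by name, no
// binding required (essentially what the question asked for).
glNamedBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW);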
Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It really was a "decision" that came about by the evolution of the OpenGL API from 1.0, thanks to following the path of least resistance. Repeatedly.
Indeed, just about every bad decision that OpenGL has made can be traced back to taking the path of least resistance in the API. But I digress.
In OpenGL 1.0, there was only one kind of object: display list objects. That means that even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D. That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which allowed you to switch textures by executing that display list. And hopefully the driver would realize you were doing that and instead allocate video memory and so forth appropriately.
So when 1.1 came around, the OpenGL ARB realized how mind-bendingly silly that was. So they created texture objects, which encapsulate both the memory storage of a texture and the various state within. When you wanted to use the texture, you bound it. But there was a snag. Namely, how to change it.
See, 1.0 had a bunch of already existing functions like glTexImage*D, glTexParameter and the like. These modify the state of the texture. Now, the ARB could have added new functions that do the same thing but take texture objects as parameters.
But that would mean dividing all OpenGL users into 2 camps: those who used texture objects and those who did not. It meant that, if you wanted to use texture objects, you had to rewrite all of your existing code that modified textures. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change that function to call the new texture object function. But you would also have to change the function of yours that calls it so that it would take, as a parameter, the texture object that it operates on.
And if that function didn't belong to you (because it was part of a library you were using), then you couldn't even do that.
So the ARB decided to keep those old functions around and simply have them behave differently based on whether a texture was bound to the context or not. If one was bound, then glTexParameter/etc would modify the bound texture, rather than the context's normal texture.
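In other words, the same 1.0-era entry points were quietly retargeted. A sketch (tex, width, height, and pixels are illustrative):

// With a texture object bound, the old functions modify that object
// instead of the context's default texture.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);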
This one decision established the general paradigm shared by almost all OpenGL objects.
ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions (glVertexAttribPointer and the like) work in relation to buffers. You have to bind a buffer to GL_ARRAY_BUFFER, then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function will pick that up and treat the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
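Concretely (positionBuffer is illustrative):

// The buffer bound to GL_ARRAY_BUFFER at the time of this call is
// latched into attribute 0; the "pointer" argument becomes a byte
// offset into that buffer.
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

// Binding something else afterwards does NOT change attribute 0.
glBindBuffer(GL_ARRAY_BUFFER, 0);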
Why? For the same reason: ease of compatibility (or to promote laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs to the gl*Pointer functions. Whereas ARB_vertex_buffer_object just piggybacked off of the existing entrypoints.
Users didn't have to change from using glVertexPointer to glVertexBufferOffset or some other function. All they had to do was bind a buffer before calling a function that set up vertex information (and of course change the pointers to byte offsets).
It also meant that they didn't have to add a bunch of glDrawElementsWithBuffer-type functions for rendering with indices that come from buffer objects.
So this wasn't a bad idea in the short term. But as with most short-term decision making, it starts being less reasonable with time.
Of course, if you have access to GL 4.5/ARB_direct_state_access, you can do things the way they ought to have been done originally.
Given we are using OpenGL 4.5 or have support for the GL_ARB_direct_state_access extension, we have the new function glCreateBuffers.
This function has an identical signature to glGenBuffers, but specifies:
returns n previously unused buffer names in buffers, each representing a new buffer object initialized as if it had been bound to an unspecified target
glGenBuffers has the following specification:
Buffer object names returned by a call to glGenBuffers are not returned by subsequent calls, unless they are first deleted with glDeleteBuffers.
So any buffer name returned by glCreateBuffers will never be used again by itself, but could be used by glGenBuffers.
It seems that glCreateBuffers will always create new buffer objects and return their names, and glGenBuffers will only create new buffers if there are no previous buffers that have since been deleted.
What advantage does adding this function have?
When should I use glCreateBuffers over glGenBuffers?
P.S.
I think this stands for all glCreate* functions added by GL_ARB_direct_state_access
What you are noticing here is basically tidying up the API for consistency against Shader and Program object creation. Those have always been generated and initialized in a single call and were the only part of the API that worked that way. Every other object was reserved first using glGen* (...) and later initialized by binding the reserved name to a target.
In fact, prior to GL 3.0 it was permissible to skip glGen* (...) altogether and create an object simply by binding a unique number somewhere.
In GL 4.5, every type of object was given a glCreate* (...) function that generates and initializes it in a single call. This methodology fits nicely with Direct State Access, where modifying (in this case creating) an object does not require altering (and potentially restoring) a binding state.
Many objects require a target (e.g. textures) when using the API this way, but buffer objects are for all intents and purposes typeless. That is why the API signature is identical. When you create a buffer object with this interface, it is "initialized as if it had been bound to an unspecified target." That would be complete nonsense for most types of objects in GL; they need a target to properly initialize them.
The primary consideration here is that you may want to create and setup state for an object in GL without affecting some other piece of code that expects the object bound to a certain target to remain unchanged. That is what Direct State Access was created for, and that is the primary reason these functions exist.
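A minimal sketch of the creation path this enables (GL 4.5+; buf and vertices are illustrative):

// Create and fully initialize a buffer without ever binding it,
// leaving every binding point untouched.
GLuint buf;
glCreateBuffers(1, &buf);
glNamedBufferStorage(buf, sizeof(vertices), vertices, 0);

// Contrast with the classic path, which clobbers the GL_ARRAY_BUFFER
// binding as a side effect:
//   glGenBuffers(1, &buf);
//   glBindBuffer(GL_ARRAY_BUFFER, buf);
//   glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);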
In theory, as dari points out, initializing a buffer object by binding it to a specific target potentially gives the driver hints about its intended usage. I would not put much stock in that, though; it is as iffy as the actual usage flags when glBufferData (...) is called: a hint at best.
OpenGL 4.5 Specification - 6.1 Creating and Binding Buffer Objects:
A buffer object is created by binding a name returned by GenBuffers to a buffer target. The binding is effected by calling

void BindBuffer( enum target, uint buffer );

target must be one of the targets listed in table 6.1. If the buffer object named buffer has not been previously bound, the GL creates a new state vector, initialized with a zero-sized memory buffer and comprising all the state and with the same initial values listed in table 6.2.
So the difference between glGenBuffers and glCreateBuffers is that glGenBuffers only returns an unused name, while glCreateBuffers also creates and initializes the state vector described above.
Usage:
It is recommended to use glGenBuffers + glBindBuffer, because the GL may make different choices about storage location and layout based on the initial binding. Since glCreateBuffers provides no initial binding, this choice cannot be made.
glCreateBuffers does not have a target because buffer objects are not typed. The first binding target was only ever used as a hint in OpenGL. And Khronos considered giving glCreateBuffers a target parameter, but they decided against it:
NamedBufferData (and the corresponding function from the original EXT) do not include the <target> parameter. Does implementations may make initial assumptions about the usage of a data store based on this parameter. Where did it go? Should we bring it back?

RESOLVED: No need for a target parameter for buffer. Implemetations[sic] don't make usage assumption based on the <target> parameter. Only one vendor extension do so AMD_pinned_memory. A[sic] for consistent approach to specify a buffer usage would be to add a new flag for that <flags> parameter of BufferStorage.
Emphasis added.
I am confronted with the fact that sometimes my OpenGL Context gets re-created and my initialization needs to be redone to re-initialize the re-created OpenGL Context.
Right now I am not using many elements, but I noticed that the IDs (or names or whatchacallthem) that I receive from glGenX are always the same as long as I call the functions in the same order: the first texture I create gets ID 1, the second ID 2, etc.
Is this guaranteed? Because if it is, my internal organization of those OpenGL elements does not need to be redone: even though the OpenGL context is a different one, a reference to texture ID 4 will always point at the correct texture, as long as that texture gets re-loaded into the GPU 4th in a row?
No, I have never seen a guarantee that the generation of object names will produce the same result each time. The language for the glGen*() calls always sounds like this:
returns n previously unused [..] object names
It never says anything more about how these object names are constructed.
Now, in reality, if you're calling everything in exactly the same sequence, it seems very likely that you're going to get the same names. Software tends to be deterministic in cases like this. But relying on it still sounds like a bad idea to me. It will be one of these things that will probably work for the longest time, and then come back and bite you when you least expect it at the most unfortunate time (like the day before you plan to ship, or after you already shipped).
I don't think this is strictly within the scope of your question, but just to make sure that you're not making any false assumptions: You definitely can't expect the names to be sequential. They are on some platforms/vendors, but not on others. There's also no valid expectation on whether different object types use different names, or if they use the same values. For example, if you call glGenTextures() to create a texture name, and glGenBuffers() to create a buffer name, you could get the same value for the two names.
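A safer pattern is to keep your own mapping from application-level keys to whatever names the current context actually hands out, and rebuild it whenever the context is re-created. A sketch (the keys and the upload step are hypothetical):

#include <string>
#include <unordered_map>

// Application-level key -> GL name for the *current* context.
std::unordered_map<std::string, GLuint> textureNames;

// Call this after (re)creating the context; never assume the returned
// names match what the previous context produced.
void reloadTextures() {
    textureNames.clear();
    for (const char* key : {"grass", "stone", "water"}) {
        GLuint name;
        glGenTextures(1, &name);
        // ... bind `name` and upload the pixel data for `key` ...
        textureNames[key] = name;  // store whatever we actually got
    }
}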
Also, just to avoid possible misunderstandings: Even if you assume that you already know the names that are going to be generated, you still need to call the glGen*() functions to generate the names if you use the OpenGL Core Profile. It used to be legal to just use any values you wanted for object names, but that's not legal anymore in modern OpenGL. Or in the words of the spec, from the Removed Features appendix:
Application-generated object names - the names of all object types, such as buffer, query, and texture objects, must be generated using the corresponding Gen* commands. Trying to bind an object name not returned by a Gen* command will result in an INVALID_OPERATION error.
I have been wondering for a while: OpenGL object "names", the integers generated by glGenTextures etc., seem to never be zero, so I use zero to indicate uninitialised handles and check for errors. So far it's been okay.
I also am told that it is good practice to call glBind*(0) after you're done with an object to make sure that lingering bound objects are not accidentally manipulated afterwards. Sounds sensible.
Are there any situations in which an OpenGL object ID would be zero, making my tests invalid, or when using zero in this way would have surprising effects because it doesn't refer to a non-object?
P.S. Is there a symbolic name for zero-as-a-non-object?
P.P.S. Are there ever going to be performance penalties for heavy use of binding/unbinding pattern? (There are some parts of code which, due to encapsulation, have mostly redundant re-bindings.)
From the OpenGL 4.4 Core Profile Specification:
Each object type has a corresponding name space. Names of objects are represented by unsigned integers of type uint. The name zero is reserved by the GL; for some object types, zero names a default object of that type, and in others zero will never correspond to an actual instance of that object type.
You can rely on the name-generating functions (e.g. GenBuffers) to never return zero as a generated name. You can use zero to represent “no object”.
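That makes zero a natural sentinel value. A minimal sketch of the pattern (assuming an OpenGL loader header is included; the wrapper class is illustrative, not any particular library's API, and copying is left out of the sketch):

// Zero doubles as "no object", so a default-constructed handle is
// distinguishable from every name a glGen* function can return.
class Buffer {
    GLuint name = 0;  // 0 == "not created yet"
public:
    void create()      { if (name == 0) glGenBuffers(1, &name); }
    bool valid() const { return name != 0; }
    ~Buffer()          { if (name != 0) glDeleteBuffers(1, &name); }
};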
Texture name zero is a reserved texture name: whenever a texture that is currently bound gets deleted, it is as if texture name zero were bound in its place. So in this case, it is sort of a special texture name that is always in use.
glGenTextures() only retrieves names that are currently not in use, and since zero is always in use, you should never get it as a valid name.
Note however, using glIsTexture on zero will return false, so it is technically not recognized as a texture.
Most functions that check the validity of a GL object (glIsTexture, glIsBuffer, etc.) return GL_FALSE when given a value of 0. Therefore, to answer your first question, I don't think your tests will be broken.
As for your P.P.S.: yes, redundantly binding/unbinding things MAY result in performance degradation, depending on the drivers and the object being bound/unbound. In general, for performance reasons, it is better to avoid state changes (redundant or otherwise) in OpenGL.
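One common mitigation is a tiny state cache that skips binds that would be redundant (a sketch; real engines track far more state than this):

// Track the buffer last bound to GL_ARRAY_BUFFER and skip the GL
// call when it would be a no-op. 0 means "nothing bound".
static GLuint currentArrayBuffer = 0;

void bindArrayBufferCached(GLuint name) {
    if (name != currentArrayBuffer) {
        glBindBuffer(GL_ARRAY_BUFFER, name);
        currentArrayBuffer = name;
    }
}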
Hope this helps.
Are there any situations in which an OpenGL object ID would be zero, making my tests invalid, or when using zero in this way would have surprising effects because it doesn't refer to a non-object?
Binding object zero afterwards will never make your code invalid, unless later code still expects the object to be bound. As for any surprising effects, as long as you don't actually use object zero for anything, as long as you're not performing any operations on that object, you're fine.
That being said, here is a comprehensive list of all OpenGL objects whose object zero is not a non-object:
Framebuffer objects. Zero is the default framebuffer, which is treated very specially from non-default framebuffers. You're probably going to need to deliberately use this default object at some point ;)
Vertex array objects, but only in compatibility OpenGL. In a core profile, VAO zero is not a valid object.
Transform Feedback objects, grandfathered in from pre-GL-4.x functionality.
Texture objects. These objects behave in a very strange and bizarre way, very much unlike a regular texture object. Never deliberately put data in them; use them only to unbind a texture. That's probably the biggest "surprising effect" you'll see.
P.P.S. Are there ever going to be performance penalties for heavy use of binding/unbinding pattern? (There are some parts of code which, due to encapsulation, have mostly redundant re-bindings.)
It's possible that this usage pattern could cause performance issues. But if your rendering code is that encapsulated, you'll probably have plenty of other performance problems. Sorting based on state changes, for example, must be pretty difficult.
Are there any situations in which an OpenGL object ID would be zero, making my tests invalid, or when using zero in this way would have surprising effects because it doesn't refer to a non-object?
Textures. Binding texture 0 isn't quite the same thing as glDisable(GL_TEXTURE_[1|2|3]D).