glCreateFramebuffers vs glGenFramebuffers [duplicate] - opengl

I've browsed the OpenGL standards looking for an explanation for this... why do some objects (shader objects) use functions starting with the prefix glCreate, while other objects (buffer objects) use functions starting with the prefix glGen? Is there a semantic reason for this?

The glGen… functions go back to OpenGL-1.1 (glGenTextures) and are used to create object names without actually initializing the object. However, most of the time those functions are used to create only one object name at a time, so instead of taking a pointer to a buffer and the size of that buffer, they could just as well have returned a single integer.
When 3Dlabs introduced GLSL they tried to break with the old glGen… convention to modernize the OpenGL API.
Yes, this is a bit inconsistent, and frankly I'd prefer the GLSL API to use the glGen… naming convention. But we're stuck with glCreateShader and glCreateProgram, and that's it.
If you want to have a single naming convention you may write the following wrappers:
GLuint glCreateTexture(void) { GLuint name; glGenTextures(1, &name); return name; }
GLuint glCreateBuffer(void) { GLuint name; glGenBuffers(1, &name); return name; }
...

Since OpenGL 4.5, my understanding is that the glCreate* functions are meant to be used with the direct state access (DSA) functions introduced in that version, whereas objects from the glGen* functions require binding to the respective buffer, vertex array, texture, etc. target before they can be used, since they are uninitialized until then.
e.g. from https://www.opengl.org/sdk/docs/man4/html/
glGenVertexArrays (...) The names returned in arrays are marked as used, for the purposes of glGenVertexArrays only, but they acquire state and type only when they are first bound.
glCreateVertexArrays returns n previously unused vertex array object names in arrays, each representing a new vertex array object initialized to the default state.
One may argue that glCreateProgram makes sense too: since it produces an initialized object, it is in accordance with the rest of the glCreate* convention.
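To make the contrast concrete, here is a minimal sketch, assuming an OpenGL 4.5 context with function pointers already loaded (variable names are illustrative):
GLuint vaoGen, vaoDsa;

// glGen*: the name is reserved, but the VAO acquires state and type only at first bind.
glGenVertexArrays(1, &vaoGen);
glBindVertexArray(vaoGen);               // now the object exists
glEnableVertexAttribArray(0);            // operates on the currently *bound* VAO

// glCreate*: fully initialized immediately, usable with DSA functions, no bind needed.
glCreateVertexArrays(1, &vaoDsa);
glEnableVertexArrayAttrib(vaoDsa, 0);    // operates on the *named* VAO directly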

Why are all openGL objects stored in GLuints?

My best guess is that GLuint holds a pointer rather than the object, and hence it can "hold" any object, because it's actually just holding a pointer to a space in memory.
But if this is true why do I not need to dereference anything when using these variables?
OpenGL object names are handles referencing an OpenGL object. They are not "pointers"; they are just a unique identifier which specifies a particular object. The OpenGL implementation, for each object type, has a map between object names and the actual internal object storage.
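Conceptually (a purely illustrative sketch, not actual driver code), you can picture that implementation-side bookkeeping like this:
#include <unordered_map>

struct TextureStorage { /* dimensions, format, GPU allocation handle, ... */ };

// One such map per object type; the GLuint name is nothing more than a key into it.
std::unordered_map<GLuint, TextureStorage> texture_objects;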
This dichotomy exists for historical reasons.
The very first OpenGL object type was display lists. You created a number of new display lists using the glNewList function. This function doesn't give you names for objects; you tell it a range of integer names that the implementation will use.
This is the foundational reason for the dichotomy: the user decides what the names are, and the implementation maps from the user-specified name to the implementation-defined data. The only limitation is that you can't use the same name twice.
The display list paradigm was modified slightly for the next OpenGL object type: textures. In the new paradigm, there is a function that allows the implementation to create names for you: glGenTextures. But this function was optional. You could call glBindTexture on any integer you want, and the implementation will, in that moment, create a texture object that maps to that integer name.
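A small sketch of that old paradigm (legal in the compatibility profile only; the core profile requires names that came from glGen*):
GLuint name = 42;                     // an arbitrary integer, never handed out by glGenTextures
glBindTexture(GL_TEXTURE_2D, name);   // the texture object springs into existence right here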
As new object types were created, OpenGL kept the texture paradigm for them. They had glGen* functions, but they were optional so that the user could specify whatever names they wanted.
Shader objects were a bit of a departure, as their Create functions don't allow you to pick names. But they still used integers because... API consistency matters even when being inconsistent (note that the extension version of GLSL shader objects used pointers, but the core version decided not to).
Of course, core OpenGL did away with user-provided names entirely. But it couldn't get rid of integer object names as a concept without basically creating a new API. While core OpenGL is a compatibility break, it was designed such that, if you coded your pre-core OpenGL code "correctly", it would still work in core OpenGL. That is, core OpenGL code should also be valid compatibility OpenGL code.
And the path of least resistance for that was to not create a new API, even if it makes the API really silly.

How do you use new and delete with OpenGL's buffer objects?

I am learning OpenGL and using it with C++... I'm a beginner so sorry if this is a stupid question.
I was following this tutorial (https://learnopengl.com/#!Getting-started/Hello-Triangle), and I split the code to create & compile the fragment shader and vertex shader into separate functions. Obviously this means the objects go out of scope once the function ends so I tried to use new and delete with them.
I tried doing this:
GLuint * pvertexShader; //pointer to a GLuint
pvertexShader = new GLuint;
but didn't know how to put the actual buffer object into the new memory I just allocated.
I know it's bad practice to use raw new/delete, so should I try using a smart pointer instead? If so, how would that work with OpenGL's objects?
Or should I make a wrapper class and put the code to generate the buffer into the class constructor, so it becomes like this:
class VBO;
(snip)
pvertexShader = new VBO; //the class is constructed on the heap
As far as I know when you call new for an object then it gets constructed on the heap.
If I did this, would I still be able to use functions like glBindBuffer or glAttachShader by passing the pointer pvertexShader instead of the actual vertex shader, i.e. writing glAttachShader(shaderprogram, pvertexShader)?
EDIT
Ok, so now I found out that anything you make is allocated onto the GPU by OpenGL, so you don't have to use new/delete.
Since what I'm doing is like this:
void SetUpVertexShader() {
    //all the code to make and compile a shader
}
void SetUpFragmentShader() {
    //all the code to make and compile this shader
}
Then when I get to the stage where you link the shaders and the program, the two shaders made in those functions have gone out of scope (according to XCode).
How do I prevent/get around this?
The actual memory for the buffers you allocate with pretty much anything in OpenGL lies within the GPU. The glGenBuffers-style functions do not return the first address of an array of memory like you seem to expect; instead they write an ID (of type GLuint) into the argument you pass, and you can then use that ID with other OpenGL functions to work with the memory that was initialized on the GPU. You don't need a pointer to its address; OpenGL only asks for the number it internally assigned to the buffer.
Here's a small example illustrating this.
// unsigned int to hold the ID that OpenGL will assign to this shader
GLuint sVertex;
sVertex = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(sVertex, 1, &vertexSource, NULL);
glCompileShader(sVertex);
Notice how this vertex shader was compiled without even moving around its address. The reason is that OpenGL manages that for you on the GPU side; all you have to do is work with the GLuint that contains the number OpenGL uses to identify your shader.
It's counterintuitive at first, but this system does have its benefits: you can pass around any kind of correctly initialized GL resource (like a texture or a shader) with just its GLuint ID. For example:
// texture_id contains the number that OpenGL uses to reference this particular texture in the GPU memory
void use_texture(GLuint texture_id);
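Applied to the question's setup functions, one possible fix (the signature here is an assumption, not the questioner's code) is simply to return the handle; only the integer leaves the function, while the object itself lives in the GL:
GLuint SetUpVertexShader(const char *vertexSource)
{
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &vertexSource, NULL);
    glCompileShader(shader);          // error checking omitted for brevity
    return shader;                    // the GLuint is copied out; nothing "goes out of scope"
}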
Step by step.
You only need pointers when you store complex data, likely inside a struct/class. And then smart pointers are quite appropriate.
glXXXX calls don't provide pointers (except glMapBuffer). Any memory they create (e.g. via glBufferData) is GPU memory, not client memory (i.e. on the CPU side). Don't bother with it.
Reading shader code, compiling it, linking it, and checking for errors are repetitive jobs with few client-side results to store, like the integer that identifies the shader program. They are well suited to being encapsulated in classes.
Objects such as VBOs and VAOs, vertex data, and so on are also suitable for classes. The same goes for camera handling, rendering, and other features of your app.
The main point is that you organize your code by tasks/objects.
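A minimal RAII sketch along those lines (illustrative only; assumes a valid GL context, and error checking is omitted):
class Shader
{
public:
    Shader(GLenum type, const char *source)
        : id_(glCreateShader(type))
    {
        glShaderSource(id_, 1, &source, NULL);
        glCompileShader(id_);
    }
    ~Shader() { glDeleteShader(id_); }           // the GL object is released with the wrapper
    Shader(const Shader &) = delete;             // exactly one owner per GL object
    Shader &operator=(const Shader &) = delete;
    GLuint id() const { return id_; }            // hand the raw name to glAttachShader etc.
private:
    GLuint id_;
};
Note that no new/delete appears anywhere; the only client-side state worth storing is the GLuint.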

Difference in glGenBuffers and glCreateBuffers

Given we are using OpenGL 4.5 or have support for the GL_ARB_direct_state_access extension, we have the new function glCreateBuffers.
This function has an identical signature to glGenBuffers, but specifies:
returns n previously unused buffer names in buffers, each representing a new buffer object initialized as if it had been bound to an unspecified target
glGenBuffers has the following specification:
Buffer object names returned by a call to glGenBuffers are not returned by subsequent calls, unless they are first deleted with glDeleteBuffers.
So any buffer name returned by glCreateBuffers will never be returned again by glCreateBuffers itself, but it could be returned by glGenBuffers.
It seems that glCreateBuffers will always create new buffer objects and return their names, while glGenBuffers will only hand out brand-new names when no previously deleted names are available for reuse.
What advantage does adding this function have?
When should I use glCreateBuffers over glGenBuffers?
P.S.
I think this holds for all glCreate* functions added by GL_ARB_direct_state_access
What you are noticing here is basically tidying up the API for consistency against Shader and Program object creation. Those have always been generated and initialized in a single call and were the only part of the API that worked that way. Every other object was reserved first using glGen* (...) and later initialized by binding the reserved name to a target.
In fact, prior to GL 3.0 it was permissible to skip glGen* (...) altogether and create an object simply by binding a unique number somewhere.
In GL 4.5, every type of object was given a glCreate* (...) function that generates and initializes it in a single call. This methodology fits nicely with Direct State Access, where modifying (in this case creating) an object does not require altering (and potentially restoring) binding state.
Many objects require a target (e.g. textures) when using the API this way, but buffer objects are for all intents and purposes typeless. That is why the API signature is identical. When you create a buffer object with this interface, it is "initialized as if it had been bound to an unspecified target." That would be complete nonsense for most types of objects in GL; they need a target to properly initialize them.
The primary consideration here is that you may want to create and setup state for an object in GL without affecting some other piece of code that expects the object bound to a certain target to remain unchanged. That is what Direct State Access was created for, and that is the primary reason these functions exist.
In theory, as dari points out, initializing a buffer object by binding it to a specific target potentially gives the driver hints about its intended usage. I would not put much stock in that, though; it is as iffy as the usage flags passed when glBufferData (...) is called, a hint at best.
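For comparison, a short sketch of the two paths (assumes a GL 4.5 context; data stands in for a real vertex array):
// Classic: the object is created by its first bind, which doubles as a usage hint.
GLuint buf1;
glGenBuffers(1, &buf1);
glBindBuffer(GL_ARRAY_BUFFER, buf1);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

// DSA: created and filled directly by name; no binding point is disturbed.
GLuint buf2;
glCreateBuffers(1, &buf2);
glNamedBufferData(buf2, sizeof(data), data, GL_STATIC_DRAW);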
OpenGL 4.5 Specification - 6.1 Creating and Binding Buffer Objects:
A buffer object is created by binding a name returned by GenBuffers to a buffer target. The binding is effected by calling
void BindBuffer( enum target, uint buffer );
target must be one of the targets listed in table 6.1. If the buffer object named buffer has not been previously bound, the GL creates a new state vector, initialized with a zero-sized memory buffer and comprising all the state and with the same initial values listed in table 6.2.
So the difference between glGenBuffers and glCreateBuffers is that glGenBuffers only returns an unused name, while glCreateBuffers also creates and initializes the state vector described above.
Usage:
It is recommended to use glGenBuffers + glBindBuffer, because the GL may make different choices about storage location and layout based on the initial binding. Since glCreateBuffers provides no initial binding, this choice cannot be made.
glCreateBuffers does not have a target because buffer objects are not typed. The first binding target was only ever used as a hint in OpenGL. And Khronos considered giving glCreateBuffers a target parameter, but they decided against it:
NamedBufferData (and the corresponding function from the original EXT) do not include the <target> parameter. Does implementations may make initial assumptions about the usage of a data store based on this parameter. Where did it go? Should we bring it back?
RESOLVED: No need for a target parameter for buffer. Implemetations[sic] don't make usage assumption based on the <target> parameter. Only one vendor extension do so AMD_pinned_memory. A[sic] for consistent approach to specify a buffer usage would be to add a new flag for that <flags> parameter of BufferStorage.

Concept behind OpenGL's 'Bind' functions

I am learning OpenGL from this tutorial.
My question is about the specification in general, not about a specific function or topic.
When seeing code like the following:
glGenBuffers(1, &positionBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I'm confused about the utility of calling the bind functions before and after setting the buffer data.
It seems superfluous to me, due to my inexperience with OpenGL and Computer Graphics in general.
The man page says that:
glBindBuffer lets you create or use a named buffer object. Calling glBindBuffer with target set to GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_PIXEL_PACK_BUFFER or GL_PIXEL_UNPACK_BUFFER and buffer set to the name of the new buffer object binds the buffer object name to the target. When a buffer object is bound to a target, the previous binding for that target is automatically broken.
What exactly is the concept/utility of 'binding' something to a 'target'?
The commands in OpenGL don't exist in isolation; they assume the existence of a context. One way to think of this is that there is, hidden in the background, an OpenGL object, and the functions are methods on that object.
So when you call a function, what it does depends on the arguments, of course, but also on the internal state of OpenGL, that is, on the context/object.
This is very clear with bind, which says "set this as the current X". Later functions then modify that "current X" (where X might be a buffer, for example). And as you say, the thing being set (the attribute in the object, or the "data member") is the first argument to bind, so GL_ARRAY_BUFFER names the particular thing you are setting.
And to answer the second part of the question: setting it to 0 simply clears the value so you don't accidentally make unplanned changes elsewhere.
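In hypothetical object-oriented pseudocode (invented purely for illustration, not a real API), the buffer calls from the question read like operations on that hidden object:
// context is the hidden OpenGL state object; arrayBuffer is one of its "data members".
context.arrayBuffer = positionBufferObject;                // glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject)
context.arrayBuffer.setData(vertexPositions, STATIC_DRAW); // glBufferData(GL_ARRAY_BUFFER, ...)
context.arrayBuffer = null;                                // glBindBuffer(GL_ARRAY_BUFFER, 0)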
The OpenGL API can be incredibly opaque and confusing. I know! I've been writing 3D engines based upon OpenGL for years (off and on). In my case, part of the problem is that I write the engine to hide the underlying 3D API (OpenGL), so once I get something working I never see the OpenGL code again.
But here is one technique that helps my brain comprehend the "OpenGL way". I think this way of thinking about it is true (but not the whole story).
Think about the hardware graphics/GPU cards. They have certain capabilities implemented in hardware. For example, the GPU may only be able to update (write to) one texture at a time. Nonetheless, the GPU must keep many textures in its own RAM, because transfer between CPU memory and GPU memory is very slow.
So what the OpenGL API does is to create the notion of an "active texture". Then when we call an OpenGL API function to copy an image into a texture, we must do it this way:
1: generate a texture and assign its identifier to an unsigned integer variable.
2: bind the texture to the GL_TEXTURE bind point (or some such bind point).
3: specify the size and format of the texture bound to GL_TEXTURE target.
4: copy some image we want on the texture to the GL_TEXTURE target.
And if we want to draw an image on another texture, we must repeat that same process.
When we are finally ready to render something on the display, we need our code to make one or more of the textures we created and copied images upon to become accessible by our fragment shader.
As it turns out, the fragment shader can access more than one texture at a time by accessing multiple "texture units" (one texture per texture unit). So, our code must bind the textures we want to make available to the texture units our fragment shaders expect them bound to.
So we must do something like this:
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, mytexture0);
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_2D, mytexture1);
glActiveTexture (GL_TEXTURE2);
glBindTexture (GL_TEXTURE_2D, mytexture2);
glActiveTexture (GL_TEXTURE3);
glBindTexture (GL_TEXTURE_2D, mytexture3);
Now, I must say that I love OpenGL for many reasons, but this approach drives me CRAZY. That's because all the software I have written for years would look like this instead:
error = glSetTexture (GL_TEXTURE0, GL_TEXTURE_2D, mytexture0);
error = glSetTexture (GL_TEXTURE1, GL_TEXTURE_2D, mytexture1);
error = glSetTexture (GL_TEXTURE2, GL_TEXTURE_2D, mytexture2);
error = glSetTexture (GL_TEXTURE3, GL_TEXTURE_2D, mytexture3);
Bamo. No need for setting all this state over and over and over again. Just specify which texture-unit to attach the texture to, plus the texture-type to indicate how to access the texture, plus the ID of the texture I want to attach to the texture unit.
I also wouldn't need to bind a texture as the active texture to copy an image to it; I would just give the ID of the texture I wanted to copy to. Why should it need to be bound?
Well, there's the catch that forces OpenGL to be structured in the crazy way it is. Because the hardware does some things, and the software driver does other things, and because what is done where is a variable (depends on GPU card), they need some way to keep the complexity under control. Their solution is essentially to have only one bind point for each kind of entity/object, and to require we bind our entities to those bind points before we call functions that manipulate them. And as a second purpose, binding entities is what makes them available to the GPU, and our various shaders that execute in the GPU.
At least that's how I keep the "OpenGL way" straight in my head. Frankly, if someone really, really, REALLY understands all the reasons OpenGL is (and must be) structured the way it is, I'd love them to post their own reply. I believe this is an important question and topic, and the rationale is rarely if ever described at all, much less in a manner that my puny brain can comprehend.
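Worth noting: OpenGL 4.5's direct state access did eventually add something very close to the wished-for call above. glBindTextureUnit takes the texture unit index and the texture name in one call, and the texture's type is taken from the object itself:
glBindTextureUnit(0, mytexture0);   // replaces glActiveTexture(GL_TEXTURE0) + glBindTexture(GL_TEXTURE_2D, mytexture0)
glBindTextureUnit(1, mytexture1);
glBindTextureUnit(2, mytexture2);
glBindTextureUnit(3, mytexture3);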
From the section Introduction: What is OpenGL?
Complex aggregates like structs are never directly exposed in OpenGL. Any such constructs are hidden behind the API. This makes it easier to expose the OpenGL API to non-C languages without having a complex conversion layer.
In C++, if you wanted an object that contained an integer, a float, and a string, you would create it and access it like this:
struct Object
{
int count;
float opacity;
char *name;
};
//Create the storage for the object.
Object newObject;
//Put data into the object.
newObject.count = 5;
newObject.opacity = 0.4f;
newObject.name = "Some String";
In OpenGL, you would use an API that looks more like this:
//Create the storage for the object
GLuint objectName;
glGenObject(1, &objectName);
//Put data into the object.
glBindObject(GL_MODIFY, objectName);
glObjectParameteri(GL_MODIFY, GL_OBJECT_COUNT, 5);
glObjectParameterf(GL_MODIFY, GL_OBJECT_OPACITY, 0.4f);
glObjectParameters(GL_MODIFY, GL_OBJECT_NAME, "Some String");
None of these are actual OpenGL commands, of course. This is simply an example of what the interface to such an object would look like.
OpenGL owns the storage for all OpenGL objects. Because of this, the user can only access an object by reference. Almost all OpenGL objects are referred to by an unsigned integer (the GLuint). Objects are created by a function of the form glGen*, where * is the type of the object. The first parameter is the number of objects to create, and the second is a GLuint* array that receives the newly created object names.
To modify most objects, they must first be bound to the context. Many objects can be bound to different locations in the context; this allows the same object to be used in different ways. These different locations are called targets; all objects have a list of valid targets, and some have only one. In the above example, the fictitious target “GL_MODIFY” is the location where objectName is bound.
This is how most OpenGL objects work, and buffer objects are "most OpenGL objects."
And if that's not good enough, the tutorial covers it again in Chapter 1: Following the Data:
void InitializeVertexBuffer()
{
glGenBuffers(1, &positionBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
The first line creates the buffer object, storing the handle to the object in the global variable positionBufferObject. Though the object now exists, it does not own any memory yet. That is because we have not allocated any with this object.
The glBindBuffer function binds the newly-created buffer object to the GL_ARRAY_BUFFER binding target. As mentioned in the introduction, objects in OpenGL usually have to be bound to the context in order for them to do anything, and buffer objects are no exception.
The glBufferData function performs two operations. It allocates memory for the buffer currently bound to GL_ARRAY_BUFFER, which is the one we just created and bound. We already have some vertex data; the problem is that it is in our memory rather than OpenGL's memory. The sizeof(vertexPositions) uses the C++ compiler to determine the byte size of the vertexPositions array. We then pass this size to glBufferData as the size of memory to allocate for this buffer object. Thus, we allocate enough GPU memory to store our vertex data.
The other operation that glBufferData performs is copying data from our memory array into the buffer object. The third parameter controls this. If this value is not NULL, as in this case, glBufferData will copy the data referenced by the pointer into the buffer object. After this function call, the buffer object stores exactly what vertexPositions stores.
The fourth parameter is something we will look at in future tutorials.
The second bind buffer call is simply cleanup. By binding the buffer object 0 to GL_ARRAY_BUFFER, we cause the buffer object previously bound to that target to become unbound from it. Zero in this case works a lot like the NULL pointer. This was not strictly necessary, as any later binds to this target will simply unbind what is already there. But unless you have very strict control over your rendering, it is usually a good idea to unbind the objects you bind.
Binding a buffer to a target is something like setting a global variable. Subsequent function calls then operate on that global data. In the case of OpenGL all the "global variables" together form a GL context. Virtually all GL functions read from that context or modify it in some way.
The glGenBuffers() call is sort of like malloc(), allocating a buffer; we set a global to point to it with glBindBuffer(); we call a function that operates on that global (glBufferData()) and then we set the global to NULL so it won't inadvertently operate on that buffer again using glBindBuffer().
OpenGL is what is known as a "state machine"; to that end, OpenGL has several "binding targets", each of which can only have one thing bound to it at once. Binding something else replaces the current binding, and thus changes its state. Thus, by binding buffers you are (re)defining the state of the machine.
As a state machine, whatever information you have bound will have an effect on the machine's next output; in OpenGL, that is its next draw call. Once that is done you could bind new vertex data, bind new pixel data, bind new targets, etc., then initiate another draw call. If you wanted to create the illusion of movement on your screen, then once you were satisfied you had drawn your entire scene (a 3D-engine concept, not an OpenGL concept), you'd flip the framebuffer.
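A compact sketch of one such cycle (sceneVAO, sceneTexture, vertexCount and window are placeholders, and the final swap belongs to the windowing API, e.g. GLFW, not to core OpenGL):
glBindVertexArray(sceneVAO);                  // bind new vertex data
glBindTexture(GL_TEXTURE_2D, sceneTexture);   // bind new pixel data
glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // the output depends on everything bound above
glfwSwapBuffers(window);                      // "flip the framebuffer" once the scene is drawn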