Are PBOs shared across OpenGL contexts?

Are PBOs or any kind of buffer objects shared across multiple contexts in OpenGL (such as textures)?
My best guess is no, as the following code does not work:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, lastFrame->pbo);
glDrawPixels(lastFrame->width, lastFrame->height, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, NULL);
lastFrame->pbo is a buffer object created by another GL context; its size is 4*lastFrame->width*lastFrame->height bytes. If, instead of binding a PBO, I upload the same amount of data from client memory, it works fine.
The glDrawPixels command throws GL_INVALID_OPERATION.
EDIT: lastFrame->pbo is a GLuint, and width and height are u_int32_t.
EDIT 2: I'm using GLFW for contexts.

Chapter 5 of the OpenGL 4.6 specification says:
Objects that may be shared between contexts include buffer objects,
program and shader objects, renderbuffer objects, sampler objects,
sync objects, and texture objects (except for the texture objects
named zero).
And
Objects which contain references to other objects include framebuffer,
program pipeline, query, transform feedback, and vertex array objects.
Such objects are called container objects and are not shared.
A Pixel Buffer Object (PBO) is a buffer object, so it can be shared.
Your GL_INVALID_OPERATION error may come from not making the correct context current before issuing GL calls, or from trying to make the same context current on two different threads at once.

The question is whether a PBO, or any other kind of buffer object, is shared across multiple contexts in OpenGL (as textures are).
As noted in @Ripi2's answer, the GL spec allows sharing of buffer and texture objects between contexts. But that does not mean they are automatically shared: you must create the GL contexts as shared contexts for that to work.
I'm using glfw for contexts
Context sharing is explained in the GLFW documentation.
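A minimal GLFW sketch of putting two contexts in one share group (window sizes and titles are arbitrary, and error handling is trimmed; this needs a display to actually run):

```c
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit()) {
        fprintf(stderr, "glfwInit failed\n");
        return 1;
    }

    /* First window and its context. */
    GLFWwindow *first = glfwCreateWindow(640, 480, "first", NULL, NULL);

    /* Passing `first` as the last argument places the new context in the
       same share group, so buffer objects (including PBOs), textures and
       shader/program objects created in either context are usable in the
       other. */
    GLFWwindow *second = glfwCreateWindow(640, 480, "second", NULL, first);

    if (!first || !second) {
        glfwTerminate();
        return 1;
    }

    /* ... create the PBO with `first` current, use it with `second` current ... */

    glfwTerminate();
    return 0;
}
```

Note that container objects (FBOs, VAOs) remain per-context even inside a share group, so each context still needs its own.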

Related

glGenBuffers() object names in different OpenGL contexts

Let's say there is a GUI program with two windows. Each window has its own OpenGL context. There is only one thread.
At some point we want to render stuff in the first and in the second window, so we allocate one buffer for each of the OpenGL contexts with glGenBuffers(1, &buffer_) (among other stuff).
My question is: does the glGenBuffers() function return unique object names globally, or are the names local to each OpenGL context? In other words, can these two OpenGL contexts have the same object names given by glGenBuffers()? Apart from object name == 0, of course, which is a special object name.
In case they can, does it mean they share this object name? What would happen if one of the OpenGL contexts deallocated the object with glDeleteBuffers(1, &buffer_)?
Depends if the contexts are in the same share group or not.
See chapter 5 of the OpenGL 4.6 Core Profile specification, "Shared Objects and Multiple Contexts".
The two contexts in my example were not shared (according to genpfault, that is the default behavior). Yet the VBO object names acquired by glGenBuffers() were the same in both contexts.
Everything runs smoothly; I can switch between the two windows and modify the VBOs presented there (like colors) or their shader uniforms (like camera position).
I'm using the PyQt5 binding for the Qt framework, and the graphics renderer is Intel(R) UHD Graphics 630.
The program had one QMdiArea with two instances of QMdiSubWindow, each containing one rendered object.
I can only conclude that in this case, when contexts are not shared, object names are local to each context.

Trying to understand OpenGL state and objects storage

I am having a hard time understanding what exactly is meant by state. I have read about vertex array objects (VAOs) and vertex buffer objects (VBOs), as well as context creation. So far I have understood that the context is the state of everything associated with the instance of OpenGL that you have created.
I have also understood that a VAO is a reference to the names you created, which OpenGL allocated, and that a VBO holds the data those VAOs refer to. However, the OpenGL red book says that when you create a VBO, OpenGL allocates state for the VBO, which you obviously have to bind to the VAO.
What I don't understand is the relationship between the objects and the context. If the context is supposed to be the state of the OpenGL instance, why does it create additional state which it allocates to the vertex buffer object? Whenever you call some function that changes the state of the context, it actually changes the VBO and not the actual default state of the context.
Have I understood this correctly or am I confusing something here?
The OpenGL context contains all state within your OpenGL instance. The VBOs and VAOs are sub-states within that 'global' state. They cannot exist outside of the GL context (although VBOs, unlike VAOs, can be shared with other contexts). When you perform operations on either a VBO or VAO, you are modifying the state of the VBO/VAO, which is part of the GL context, and thus you are modifying the context as well.

Sharing EGL contexts wrt OpenGL ES

One creates an EGL context with:
EGLContext eglCreateContext(EGLDisplay display,
                            EGLConfig config,
                            EGLContext share_context,
                            EGLint const *attrib_list);
The spec allows one to specify a share_context, which enables object sharing between the two contexts.
If one does specify a share_context what exactly is shared (programs, textures, framebuffer objects)? and what exactly remains sandboxed?
Also does this sharing work both ways or only one way?
An extract from the OpenGL ES 2.0.25 spec (Appendix C: Shared Objects and Multiple Contexts):
The share list of a context is the group of all contexts which share objects with that context.
Objects that can be shared between contexts on the share list include vertex buffer objects, program and shader objects, renderbuffer objects, and texture objects (except for the texture objects named zero).
It is undefined whether framebuffer objects are shared by contexts on the share list. The framebuffer object namespace may or may not be shared. This means that using the same name for a framebuffer object in multiple contexts on the share list could either result in multiple distinct framebuffer objects, or in a single framebuffer object which is shared. Therefore applications using OpenGL ES should avoid using the same framebuffer object name in multiple contexts on the same share list.
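To answer the "both ways" part concretely, here is a hedged sketch of setting up a share group with eglCreateContext. It assumes `display` and `config` come from the usual eglGetDisplay/eglInitialize/eglChooseConfig sequence (omitted for brevity), and it will not run without a working EGL display:

```c
#include <EGL/egl.h>

/* Sketch: two OpenGL ES 2.0 contexts in one share group. */
void make_shared_contexts(EGLDisplay display, EGLConfig config)
{
    static const EGLint attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    /* First context: no sharing (EGL_NO_CONTEXT). */
    EGLContext first = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs);

    /* Second context: passing `first` puts both contexts in the same
       share group. Sharing is symmetric -- per the spec quote above, the
       share list is a *group*, not a one-way link, so textures, buffers
       and programs created in either context are visible in the other. */
    EGLContext second = eglCreateContext(display, config, first, attribs);

    (void)first; (void)second;
}
```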

glGenBuffers function usage

For each buffer type there is a special function to generate names for it, like glGenFramebuffers for framebuffers, glGenRenderbuffers for renderbuffers, and glGenTextures for textures.
But there is also a function called glGenBuffers. What types of buffers does this function generate, and how can these buffers be used in my program?
glGenBuffers allocates a name for a "Buffer Object". An OpenGL Buffer Object represents a block of memory which can be accessed by both the application and the GPU (though typically not by both at the same time). OpenGL can use the contents of a buffer object for a variety of purposes, and a single buffer object may be used for more than one purpose over its lifetime, though in practice buffer objects are often created exclusively for one type of data.
You can get an idea for the uses that a buffer object has by looking at the list of binding points to which you can attach a buffer object (using the glBindBuffer function). For example:
GL_ARRAY_BUFFER - OpenGL will read vertex data from the buffer object.
GL_ELEMENT_ARRAY_BUFFER - OpenGL will read vertex indices (e.g., for glDrawElements) from the buffer object.
GL_PIXEL_UNPACK_BUFFER - Functions that read pixel data (e.g., glTexImage2D) will read that data from the buffer object.
And many more.
For more information, see the OpenGL wiki page for Buffer Object.

Concept behind OpenGL's 'Bind' functions

I am learning OpenGL from this tutorial.
My question is about the specification in general, not about a specific function or topic.
When seeing code like the following:
glGenBuffers(1, &positionBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I'm confused about the utility of calling the bind functions before and after setting the buffer data.
It seems superfluous to me, due to my inexperience with OpenGL and Computer Graphics in general.
The man page says that:
glBindBuffer lets you create or use a named buffer object. Calling glBindBuffer with target set to GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, GL_PIXEL_PACK_BUFFER or GL_PIXEL_UNPACK_BUFFER and buffer set to the name of the new buffer object binds the buffer object name to the target. When a buffer object is bound to a target, the previous binding for that target is automatically broken.
What exactly is the concept/utility of 'binding' something to a 'target' ?
The commands in OpenGL don't exist in isolation. They assume the existence of a context. One way to think of this is that there is, hidden in the background, an OpenGL object, and the functions are methods on that object.
So when you call a function, what it does depends on the arguments, of course, but also on the internal state of OpenGL - on the context/object.
This is very clear with bind, which says "set this as the current X". Later functions then modify the "current X" (where X might be a buffer, for example). [Update:] And as you say, the thing that is being set (the attribute in the object, or the "data member") is the first argument to bind. So GL_ARRAY_BUFFER names a particular thing that you are setting.
And to answer the second part of the question: setting it to 0 simply clears the value so you don't accidentally make unplanned changes elsewhere.
The OpenGL API can be incredibly opaque and confusing. I know! I've been writing 3D engines based upon OpenGL for years (off and on). In my case, part of the problem is that I write the engine to hide the underlying 3D API (OpenGL), so once I get something working I never see the OpenGL code again.
But here is one technique that helps my brain comprehend the "OpenGL way". I think this way of thinking about it is true (but not the whole story).
Think about the hardware graphics/GPU cards. They have certain capabilities implemented in hardware. For example, the GPU may only be able to update (write) one texture at a time. Nonetheless, it is mandatory that the GPU contain many textures within the RAM inside the GPU, because transfer between CPU memory and GPU memory is very slow.
So what the OpenGL API does is to create the notion of an "active texture". Then when we call an OpenGL API function to copy an image into a texture, we must do it this way:
1: Generate a texture and assign its identifier to an unsigned integer variable.
2: Bind the texture to a texture bind point such as GL_TEXTURE_2D.
3: Specify the size and format of the texture bound to the GL_TEXTURE_2D target.
4: Copy some image we want on the texture to the GL_TEXTURE_2D target.
And if we want to draw an image on another texture, we must repeat that same process.
When we are finally ready to render something on the display, we need our code to make one or more of the textures we created (and copied images onto) accessible to our fragment shader.
As it turns out, the fragment shader can access more than one texture at a time by accessing multiple "texture units" (one texture per texture unit). So, our code must bind the textures we want to make available to the texture units our fragment shaders expect them bound to.
So we must do something like this:
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, mytexture0);
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_2D, mytexture1);
glActiveTexture (GL_TEXTURE2);
glBindTexture (GL_TEXTURE_2D, mytexture2);
glActiveTexture (GL_TEXTURE3);
glBindTexture (GL_TEXTURE_2D, mytexture3);
Now, I must say that I love OpenGL for many reasons, but this approach drives me CRAZY. That's because all the software I have written for years would look like this instead:
error = glSetTexture (GL_TEXTURE0, GL_TEXTURE_2D, mytexture0);
error = glSetTexture (GL_TEXTURE1, GL_TEXTURE_2D, mytexture1);
error = glSetTexture (GL_TEXTURE2, GL_TEXTURE_2D, mytexture2);
error = glSetTexture (GL_TEXTURE3, GL_TEXTURE_2D, mytexture3);
Bamo. No need for setting all this state over and over and over again. Just specify which texture-unit to attach the texture to, plus the texture-type to indicate how to access the texture, plus the ID of the texture I want to attach to the texture unit.
I also wouldn't need to bind a texture as the active texture to copy an image to it, I would just give the ID of the texture I wanted to copy to. Why should it need to be bound?
Well, there's the catch that forces OpenGL to be structured in the crazy way it is. Because the hardware does some things, and the software driver does other things, and because what is done where varies (it depends on the GPU card), they need some way to keep the complexity under control. Their solution is essentially to have only one bind point for each kind of entity/object, and to require that we bind our entities to those bind points before we call functions that manipulate them. And as a second purpose, binding entities is what makes them available to the GPU and to the various shaders that execute on the GPU.
At least that's how I keep the "OpenGL way" straight in my head. Frankly, if someone really, really, REALLY understands all the reasons OpenGL is (and must be) structured the way it is, I'd love them to post their own reply. I believe this is an important question and topic, and the rationale is rarely if ever described at all, much less in a manner that my puny brain can comprehend.
From the section Introduction: What is OpenGL?
Complex aggregates like structs are never directly exposed in OpenGL. Any such constructs are hidden behind the API. This makes it easier to expose the OpenGL API to non-C languages without having a complex conversion layer.
In C++, if you wanted an object that contained an integer, a float, and a string, you would create it and access it like this:
struct Object
{
    int count;
    float opacity;
    char *name;
};

//Create the storage for the object.
Object newObject;

//Put data into the object.
newObject.count = 5;
newObject.opacity = 0.4f;
newObject.name = "Some String";
In OpenGL, you would use an API that looks more like this:
//Create the storage for the object
GLuint objectName;
glGenObject(1, &objectName);
//Put data into the object.
glBindObject(GL_MODIFY, objectName);
glObjectParameteri(GL_MODIFY, GL_OBJECT_COUNT, 5);
glObjectParameterf(GL_MODIFY, GL_OBJECT_OPACITY, 0.4f);
glObjectParameters(GL_MODIFY, GL_OBJECT_NAME, "Some String");
None of these are actual OpenGL commands, of course. This is simply an example of what the interface to such an object would look like.
OpenGL owns the storage for all OpenGL objects. Because of this, the user can only access an object by reference. Almost all OpenGL objects are referred to by an unsigned integer (the GLuint). Objects are created by a function of the form glGen*, where * is the type of the object. The first parameter is the number of objects to create, and the second is a GLuint* array that receives the newly created object names.
To modify most objects, they must first be bound to the context. Many objects can be bound to different locations in the context; this allows the same object to be used in different ways. These different locations are called targets; all objects have a list of valid targets, and some have only one. In the above example, the fictitious target “GL_MODIFY” is the location where objectName is bound.
This is how most OpenGL objects work, and buffer objects are "most OpenGL objects."
And if that's not good enough, the tutorial covers it again in Chapter 1: Following the Data:
void InitializeVertexBuffer()
{
    glGenBuffers(1, &positionBufferObject);
    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
The first line creates the buffer object, storing the handle to the object in the global variable positionBufferObject. Though the object now exists, it does not own any memory yet. That is because we have not allocated any with this object.
The glBindBuffer function binds the newly-created buffer object to the GL_ARRAY_BUFFER binding target. As mentioned in the introduction, objects in OpenGL usually have to be bound to the context in order for them to do anything, and buffer objects are no exception.
The glBufferData function performs two operations. It allocates memory for the buffer currently bound to GL_ARRAY_BUFFER, which is the one we just created and bound. We already have some vertex data; the problem is that it is in our memory rather than OpenGL's memory. The sizeof(vertexPositions) uses the C++ compiler to determine the byte size of the vertexPositions array. We then pass this size to glBufferData as the size of memory to allocate for this buffer object. Thus, we allocate enough GPU memory to store our vertex data.
The other operation that glBufferData performs is copying data from our memory array into the buffer object. The third parameter controls this. If this value is not NULL, as in this case, glBufferData will copy the data referenced by the pointer into the buffer object. After this function call, the buffer object stores exactly what vertexPositions stores.
The fourth parameter is something we will look at in future tutorials.
The second glBindBuffer call is simply cleanup. By binding buffer object 0 to GL_ARRAY_BUFFER, we cause the buffer object previously bound to that target to become unbound. Zero in this case works a lot like the NULL pointer. This was not strictly necessary, as any later bind to this target will simply unbind what is already there. But unless you have very strict control over your rendering, it is usually a good idea to unbind the objects you bind.
Binding a buffer to a target is something like setting a global variable. Subsequent function calls then operate on that global data. In the case of OpenGL all the "global variables" together form a GL context. Virtually all GL functions read from that context or modify it in some way.
The glGenBuffers() call is sort of like malloc(), allocating a buffer; we set a global to point to it with glBindBuffer(); we call a function that operates on that global (glBufferData()) and then we set the global to NULL so it won't inadvertently operate on that buffer again using glBindBuffer().
OpenGL is what is known as a "state machine". To that end, OpenGL has several "binding targets", each of which can have only one thing bound at a time. Binding something else replaces the current binding, and thus changes the machine's state. So by binding buffers you are (re)defining the state of the machine.
As a state machine, whatever information you have bound will affect the next output of the machine - in OpenGL, its next draw call. Once that is done, you could bind new vertex data, bind new pixel data, bind new targets, etc., then initiate another draw call. If you wanted to create the illusion of movement on your screen, then once you were satisfied you had drawn your entire scene (a 3D-engine concept, not an OpenGL concept) you'd flip the framebuffer.