OpenGL framebuffer bind target - opengl

When calling glBindFramebuffer(GLenum target, GLuint framebuffer), I know that target can be GL_READ_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER, or GL_FRAMEBUFFER, which is both.
However, when I attach texture or Renderbuffer objects to the framebuffer, I must provide a binding target as well.
My question is: when I attach anything to a framebuffer through a given binding target, does that association stay constant?
This means when a renderbuffer is attached to a framebuffer through binding point GL_DRAW_FRAMEBUFFER, it will always be the target of drawing operations, and if I want that renderbuffer to be read from then I must call glFramebufferRenderbuffer() again with target set to GL_READ_FRAMEBUFFER this time.
Can anyone confirm this? I'm asking because I'm trying to encapsulate all these in C++ classes.

This means when a renderbuffer is attached to a framebuffer through binding point GL_DRAW_FRAMEBUFFER, it will always be the target of drawing operations, and if I want that renderbuffer to be read from then I must call glFramebufferRenderbuffer() again with target set to GL_READ_FRAMEBUFFER this time.
No. The attachments are per-FBO state; they have nothing to do with the binding point the FBO happens to be bound to.
An object may be bound to semantically different binding targets, but the object itself carries all of its state. For example, you might load data into a buffer while it is bound as GL_PIXEL_UNPACK_BUFFER, and later use that data as vertex attributes by binding the same buffer as GL_ARRAY_BUFFER.
However, when I attach texture or Renderbuffer objects to the framebuffer, I must provide a binding target as well.
Well, that is because traditional GL uses bind-to-modify semantics. If you want to manipulate the state of any object, you must bind it to some target (matching the object type, of course), and all state-setting functions reference the object indirectly by addressing that target. This does not mean that the modifications have anything to do with the binding point.
Bind-to-modify has been an often-criticised principle of OpenGL. For a long time, the EXT_direct_state_access extension has been implemented by some vendors to address this issue. It provides functions that reference objects directly instead of indirectly through the binding points. In your example, glNamedFramebufferRenderbufferEXT allows you to attach a renderbuffer directly, without binding the FBO first.
Finally, with OpenGL 4.5, direct state access was promoted to a core feature of OpenGL, and ARB_direct_state_access was created to allow implementors to provide the final API (which differs from the EXT version in some aspects) on earlier GL versions. Now there is the official glNamedFramebufferRenderbuffer() function.
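To illustrate, a minimal sketch (fbo and rbo stand for an existing framebuffer and renderbuffer name): the attachment made through one target remains no matter which target the FBO is later bound to, and with DSA no bind is needed at all.

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo); // attach via the draw binding

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // same FBO bound for reading
glReadBuffer(GL_COLOR_ATTACHMENT0);          // the attachment is still there

glNamedFramebufferRenderbuffer(fbo, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo); // GL 4.5 DSA: no bind required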

Related

Restoring the last texture state in legacy OpenGL 1.x

I would like to remember the current texture state in OpenGL 1.x and restore it later. I can use glIsEnabled to check which texture targets are enabled.
Does it make sense to have more than one texture target enabled at once, for example GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP?
The glGet* functions let me query the currently bound texture id, for example via GL_TEXTURE_BINDING_2D, but to rebind the previous texture I also need to know the appropriate target for glBindTexture.
How can I achieve this?
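A minimal sketch of the save-and-restore idea for one target (GL 1.1+ provides GL_TEXTURE_BINDING_2D; repeat the pattern for each target you use):

GLboolean was2DEnabled = glIsEnabled(GL_TEXTURE_2D);
GLint prev2D = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prev2D);   // remember the current 2D binding

// ... bind and use other textures ...

glBindTexture(GL_TEXTURE_2D, (GLuint)prev2D);    // restore the old binding
if (was2DEnabled) glEnable(GL_TEXTURE_2D); else glDisable(GL_TEXTURE_2D);

In legacy GL you can also wrap such changes in glPushAttrib(GL_TEXTURE_BIT) / glPopAttrib() instead of querying the state yourself.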

Is it enough to call glActiveTexture to edit a texture once it has already been bound?

If my understanding is correct, a texture unit has a number of targets (GL_TEXTURE_2D etc.) that I can bind textures to. I can change the currently active texture unit with glActiveTexture. When I'm calling glBindTexture, I bind the specified texture object to the specified target of the currently active texture unit, right?
When I want to later change the parameters of a texture or call a function like glTexSubImage2D, is it enough to call glActiveTexture with the texture unit that my texture is bound to? Or do I have to call glBindTexture every time, even if the texture is already bound to a unit?
So long as you know what state is bound to which unit, you can rely on just changing glActiveTexture to switch to the right unit to find it.
However, you should not rely upon this for this purpose. Not because OpenGL is unreliable, but because you may be wrong about what you think you've bound to which unit.
Furthermore, it blurs the line between binding a texture to render with it and binding a texture to modify it. You want the two to be entirely separate. First, because modifying a texture's state in the rendering loop (where you're binding it to modify it) is bad form and likely to be slow. And second, so that when it comes time to adopt GL 4.5 and DSA, you can do so quickly and efficiently.
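A short sketch of keeping the two roles separate (the texture name and unit number are only illustrative):

// Setup / loading code: bind purely to modify, then unbind.
glBindTexture(GL_TEXTURE_2D, myTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);

// Render loop: bind purely so the draw call can sample it.
glActiveTexture(GL_TEXTURE0 + 1);        // select unit 1
glBindTexture(GL_TEXTURE_2D, myTexture); // now bound to unit 1's 2D target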
You can think of texture units as pipes on the GPU that textures get connected to, and there are a limited number of them.
By calling glActiveTexture() you tell the driver which texture unit the texture you bind next should be connected to. By calling glBindTexture() you tell the driver which texture the following operations should apply to. Once you have called glTexImage2D, the texture data lives in driver/GPU memory and you have no direct access to it, so the texture handle is how you tell the driver which resource you are talking about.
So whenever you render, it is good practice to activate the texture unit first and then bind your texture; because you activated first and bound later, the texture ends up bound to the unit you specified.
If you don't select a texture unit, GL_TEXTURE0 is used by default, which can sometimes be confusing.
Hope this helps.

QOpenGLFrameBufferObject: binding texture name from texture() gives InvalidOperation error

In my application I have a module that manages rendering using OpenGL, and renders the results into QOpenGLFrameBufferObjects. I know this much works, because I'm able to save their contents using QOpenGLFrameBufferObject::toImage() and see the render.
I am now attempting to take the texture ID returned from QOpenGLFrameBufferObject::texture(), bind it as a normal OpenGL texture and render it into a QOpenGLWidget viewport using a fullscreen quad. (I'm using this two-step method because I'm not aware of a way to get around the fact that QOpenGLWidgets each work in their own context, but that's a different story.)
The problem here is that glBindTexture() generates GL_INVALID_OPERATION when I call it. According to the OpenGL documentation, this happens because "[The] texture was previously created with a target that doesn't match that of [the input]." However, I created the frame buffer object by passing GL_TEXTURE_2D into the constructor, and I am passing the same target to glBindTexture(), so I'm not sure where I'm going wrong. There isn't much documentation online about how to correctly use QOpenGLFrameBufferObject::texture().
Other supplementary information, in case it helps:
The creation of the frame buffer object doesn't set any special formats. They're left at whatever defaults Qt uses. As far as I know, this means it also has no depth or stencil attachments as of yet, though once I've got the basics working this will probably change.
Binding the FBO before binding its texture doesn't seem to make a difference.
QOpenGLFrameBufferObject::isValid() returns true.
Calling glIsTexture() on the texture handle returns false, but I'm not sure why this would be given that it's a value provided to me by Qt for the purposes of binding an OpenGL texture. The OpenGL documentation does mention that "a name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture", but here I can't bind it anyway.
I'm attempting to bind the texture in a different context to the one the FBO was created in (ie. the QOpenGLWidget's context instead of the render module's context).
I'll provide some code, but a lot of what I have is specific to the systems that exist in the rendering module, so there's only a small amount of relevant OpenGL code.
In the render module context:
QOpenGLFrameBufferObject *fbo = new QOpenGLFrameBufferObject(QSize(...), GL_TEXTURE_2D);
// Do some rendering in this context later
In the QOpenGLWidget context, after having rendered to the frame buffer in the rendering module:
GLuint textureId = fbo->texture();
glBindTexture(GL_TEXTURE_2D, textureId); // Invalid operation
EDIT: It turns out the culprit was that my contexts weren't actually being shared, as I'd misinterpreted what the Qt::AA_ShareOpenGLContexts application attribute did. Once I made them properly shared the issue was fixed.
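For reference, a minimal sketch of enabling that sharing; the attribute must be set before the QApplication is constructed:

#include <QApplication>

int main(int argc, char *argv[])
{
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts); // before QApplication
    QApplication app(argc, argv);
    // ... create widgets, render modules, etc.
    return app.exec();
}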

OpenGL texture terminology/conceptual confusion

I've found a lot of resources that tell you what to type to get a texture on screen, but would like a higher level conceptual understanding of what the openGL API is "doing" and what all of the differences in terminology "mean".
I'm going to do my best to explain what I've picked up, but would love any corrections/additions, or pointers to resources where I can read further (and just a note that I've found the documentation of the actual API calls to just reference themselves in circles and be conceptually lacking).
glGenTextures- this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
glBindTexture- use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
glTexParameter- sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
glTexImage2D- allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
glActiveTexture- this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often to I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
sampler2D- now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
I think that's all I got... if I'm obviously missing some intuition or something, please let me know! Thanks!
Let me apologize for answering with what amounts to a giant wall of text. I could not figure out how to format this in any less obnoxious way ;)
glGenTextures
this won't actually allocate any memory for the data of a texture on the graphics card (you just tell it "how many" textures you want it to generate, so it doesn't know anything about the size...), but instead sets kind of a "name" aside so you can reference given textures consistently (I've been thinking of it as kind of "allocating a pointer").
You can more or less think of it as "allocating a pointer." What it really does is reserve a name (handle) in the set of textures. Nothing is allocated at all at this point, basically it just flags GL to say "you can't hand out this name anymore." (more on this later).
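For example:

GLuint tex = 0;
glGenTextures(1, &tex); // reserves a name only -- no storage, no type yet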
glBindTexture
use the "name" generated in glGenTexture to specify that "we're now talking about this texture for future API calls until further notice", and further, we're specifying some metadata about that "pointer" we've allocated saying whether the texture it points to (/will point to) is of type GL_TEXTURE_2D or ..._3D or whatever. (Is it just me, or is it weird that this call has those two seemingly totally different functionalities?)
If you will recall, glGenTextures (...) only reserves a name. This function is what takes the reserved name and effectively finalizes it as a texture object (the first time it is called). The type you pass here is immutable, once you bind a name for the first time, it has to use the same type for every successive bind.
Now you have finally finished allocating a texture object, but it has no data store at this point -- it is just a set of states with no data.
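For example:

glBindTexture(GL_TEXTURE_2D, tex);          // first bind: 'tex' is now permanently a 2D texture
// glBindTexture(GL_TEXTURE_CUBE_MAP, tex); // would now generate GL_INVALID_OPERATION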
glTexParameter
sets other specified metadata about the currently "bound" texture. (I like this API as it seems pretty self explanatory and lets you set metadata explicitly... but I wonder why letting OpenGL know that it's a GL_TEXTURE_2D isn't part of THIS call, and not the previous? Especially because you have to specify that it's a GL_TEXTURE_2D every time you call this anyways? And why do you have to do that?)
I am actually not quite clear what you are asking here -- maybe my explanation of the previous function call will help you? But you are right, this function sets the state associated with a texture object.
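For example, typical sampling state set on the currently bound 2D texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);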
glTexImage2D
allocates the memory for the actual data for the texture on the graphics card (and optionally uploads it). It further specifies some metadata regarding how it ought be read: its width, height, formatting (GL_RGB, GL_RGBA, etc...). Now again, why do I again have to specify that it's a GL_TEXTURE_2D when I've done it in all the previous calls? Also, I guess I can understand why this includes some metadata (rather than offloading ALL the texture metadata calls to glTexParameter as these are pretty fundamental/non-optional bits of info, but there are also some weird parameters that seem like they oughtn't have made the cut? oh well...)
This is what allocates the data store and (optionally) uploads texture data (you can supply NULL for the data here and opt to finish the data upload later with glTexSubImage2D (...)).
You have to specify the texture target here because there are half a dozen different types of textures that use 2D data stores. The simplest way to illustrate this is a cubemap.
A cubemap has type GL_TEXTURE_CUBE_MAP, but you cannot upload its texture data using GL_TEXTURE_CUBE_MAP -- that is nonsensical. Instead, you call glTexImage2D (...) while the cubemap is bound to GL_TEXTURE_CUBE_MAP and then you pass something like GL_TEXTURE_CUBE_MAP_POSITIVE_X to indicate which of the 6 2D faces of the cubemap you are referencing.
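A sketch of that (cubemap, size and faceData are placeholders for your texture name, image dimensions and data; the six face enums are consecutive values):

glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                 size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, faceData[face]);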
glActiveTexture
this is the bit that I really don't get... So I guess graphics cards are capable of having only a limited number of "texture units"... what is a texture unit? Is it that there can only be N texture buffers? Or only N texture pointers? Or (this is my best guess...) there can only be N pointers being actively read by a given draw call? And once I get that, where/how often to I have to specify the "Active Texture"? Does glBindTexture associate the bound texture with the currently active texture? Or is it the other way around (bind, then set active)? Or does uploading/allocating the graphics card memory do that?
This is an additional level of indirection for texture binding (GL did not always have multiple texture units and you would have to do multiple render passes to apply multiple textures).
Once multi-texturing was introduced, binding a texture actually started to work this way:
glBindTexture (target, name) => ATIU.targets [target].bound = name
Where:
* ATIU is the active texture image unit
* targets is an array of all possible texture types that can be bound to this unit
* bound is the name of the texture bound to ATIU.targets [target]
The rule since OpenGL 3.0 has been that you get a minimum of 16 of these for every shader stage in the system.
This requirement allows you enough binding locations to maintain a set of 16 different textures for each stage of the programmable pipeline (vertex,geometry,fragment -- 3.x, tessellation control / evaluation -- 4.0). Most implementations can only use 16 textures in a single shader invocation (pass, basically), but you have a total of 48 (GL3) or 80 (GL4) places you can select from.
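In code, you can query those limits and address a particular unit like so (the unit number here is arbitrary):

GLint maxPerStage = 0, maxCombined = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxPerStage);          // fragment stage limit
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombined); // total across all stages

glActiveTexture(GL_TEXTURE0 + 5);        // make unit 5 the active texture image unit
glBindTexture(GL_TEXTURE_2D, myTexture); // binds to unit 5's GL_TEXTURE_2D target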
sampler2D
now we're getting into glsl stuff... So, a sampler is a thing that can reference a texture from within a shader. I can get its location via glGetUniformLocation, so I can set which texture that sampler is referencing- does this correspond to the "Active Texture"? So if I want to talk about the texture I've specified as GL_TEXTURE0, I'd call glUniform1i(location_of_sampler_uniform,0)? Or are those two different things?
Yes, the samplers in GLSL store indices that correspond to GL_TEXTUREn, where n is the value you have assigned to this uniform.
These are not regular uniforms, mind you, they are called opaque types (the value assigned cannot be changed/assigned from within a shader at run-time). You do not need to know that, but it might help to understand that if the question ever arises:
"Why can't I dynamically select a texture image unit for my sampler at run-time?" :)
In later versions of OpenGL, samplers actually became state objects of their own. They decouple some of the state that used to be tied directly to texture objects but had nothing to do with interpreting how the texture's data was stored. The decoupled state includes things like texture wrap mode, min/mag filter and mipmap levels. Sampler objects store no data.
This decoupling takes place whenever you bind a sampler object to a texture image unit - that will override the aforementioned states that are duplicated by every texture object.
So effectively, a GLSL sampler* references neither a texture nor a sampler; it references a texture image unit (which may have one or both of those things bound to it). GLSL will pull sampler state and texture data accordingly from that unit based on the declared sampler type.
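A sketch of a sampler object overriding the texture's own sampling state on a unit (GL 3.3+):

GLuint sampler = 0;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindSampler(0, sampler); // on unit 0, this state now wins over the bound texture's parameters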

OpenGL, disabling a texture unit, glActiveTexture and glBindTexture

How do I turn off a texture unit, or at least prevent its state changing when I bind a texture? I'm using shaders so there's no glDisable for this I don't think. The problem is that the chain of events might look something like this:
Create texture 1 (implies binding it)
Use texture 1 with texture unit 1
Create texture 2 (implies binding it)
Use texture 2 with texture unit 2
, but given glActiveTexture semantics, it seems this isn't possible, because the creation of texture 2 will become associated with the state of texture unit 1, as that was the last unit I called glActiveTexture on, i.e. you have to write:
Create texture 1
Create texture 2
Use texture 1 with texture unit 1
Use texture 2 with texture unit 2
I've simplified the example of course, but the fact that creating and binding a texture can incidentally affect the currently active texture unit, even when you are only binding the texture as part of the creation process, is something that makes me somewhat uncomfortable. Unless of course I've made an error here and there's something I can do to prevent state changes on the currently active texture unit?
Thanks for any assistance you can give me here.
This is pretty much something you just have to learn to live with in OpenGL. GL functions only affect the current state. So to modify an object, you must bind it to the current state, and modify it.
In general, however, you shouldn't have a problem. There is no reason to create textures in the same place where you're binding them for use. The code that actually walks your scene and binds textures for rendering should never be creating textures. The rendering code should establish all necessary state for each render (unless it knows that all necessary state was previously established in this rendering call). So it should be binding all of the textures that each object needs, and whatever was left bound from texture creation will simply be replaced.
And in general, I would suggest unbinding textures after creation (i.e. glBindTexture(..., 0)). This prevents them from sticking around.
And remember: when you bind a texture, you also unbind whatever texture was currently bound. So the texture functions will only affect the new object.
However, if you want to rely on an EXT extension, there is EXT_direct_state_access. It is supported by NVIDIA and AMD, so it's fairly widely available. It allows you to modify objects without binding them, so you can create a texture without binding it.
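A sketch of both approaches; w, h and pixels are placeholders, and the GL 4.5 core DSA calls are shown in place of the EXT variants mentioned above:

// Bind-to-modify creation, unbinding when done so nothing lingers on the active unit.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindTexture(GL_TEXTURE_2D, 0);

// GL 4.5 DSA equivalent: the texture is created and filled without ever being bound.
glCreateTextures(GL_TEXTURE_2D, 1, &tex);
glTextureStorage2D(tex, 1, GL_RGBA8, w, h);
glTextureSubImage2D(tex, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);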