In my application I have a module that manages rendering using OpenGL and renders the results into QOpenGLFramebufferObjects. I know this much works, because I'm able to save their contents using QOpenGLFramebufferObject::toImage() and see the render.
I am now attempting to take the texture ID returned from QOpenGLFramebufferObject::texture(), bind it as a normal OpenGL texture, and render it into a QOpenGLWidget viewport using a fullscreen quad. (I'm using this two-step method because I'm not aware of a way around the fact that QOpenGLWidgets each work in their own context, but that's a different story.)
The problem here is that glBindTexture() generates GL_INVALID_OPERATION when I call it. According to the OpenGL documentation, this is because "[The] texture was previously created with a target that doesn't match that of [the input]." However, I created the framebuffer object by passing GL_TEXTURE_2D into the constructor, and I am passing the same in as the target to glBindTexture(), so I'm not sure where I'm going wrong. There isn't much documentation online about how to correctly use QOpenGLFramebufferObject::texture().
Other supplementary information, in case it helps:
The creation of the frame buffer object doesn't set any special formats. They're left at whatever defaults Qt uses. As far as I know, this means it also has no depth or stencil attachments as of yet, though once I've got the basics working this will probably change.
Binding the FBO before binding its texture doesn't seem to make a difference.
QOpenGLFramebufferObject::isValid() returns true.
Calling glIsTexture() on the texture handle returns false, but I'm not sure why this would be the case, given that it's a value provided to me by Qt for the purpose of binding an OpenGL texture. The OpenGL documentation does mention that "a name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture", but here I can't bind it anyway.
I'm attempting to bind the texture in a different context to the one the FBO was created in (i.e. the QOpenGLWidget's context instead of the render module's context).
I'll provide some code, but a lot of what I have is specific to the systems that exist in the rendering module, so there's only a small amount of relevant OpenGL code.
In the render module context:
QOpenGLFramebufferObject* fbo = new QOpenGLFramebufferObject(QSize(...), GL_TEXTURE_2D);
// Do some rendering in this context later
In the QOpenGLWidget context, after having rendered to the frame buffer in the rendering module:
GLuint textureId = fbo->texture();
glBindTexture(GL_TEXTURE_2D, textureId); // Invalid operation
EDIT: It turns out the culprit was that my contexts weren't actually being shared, as I'd misinterpreted what the Qt::AA_ShareOpenGLContexts application attribute did. Once I made them properly shared the issue was fixed.
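For anyone hitting the same issue, a minimal sketch of what "properly shared" can look like — the attribute and the share-context calls are real Qt API, but the surrounding structure is illustrative, not my exact code:

```cpp
#include <QApplication>
#include <QOpenGLContext>
#include <QSurfaceFormat>

int main(int argc, char* argv[])
{
    // Must be set BEFORE the QApplication is constructed, so that
    // QOpenGLWidget contexts share with the global share context.
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);

    // An offscreen render context can then share resources with the
    // widgets by sharing with the global share context:
    QOpenGLContext renderContext;
    renderContext.setShareContext(QOpenGLContext::globalShareContext());
    renderContext.setFormat(QSurfaceFormat::defaultFormat());
    renderContext.create();

    // ... create widgets, render into FBOs in renderContext, etc. ...
    return app.exec();
}
```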
I would like to remember the current texture state in OpenGL 1.x and later restore it. I can use glIsEnabled to check which texture targets are enabled.
Does it make sense to have more than one texture type enabled at once, for example GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP?
The glGet* functions let me query the current texture id, for example with GL_TEXTURE_BINDING_2D, but to rebind the previous texture I also need to know the appropriate target for glBindTexture.
How can I achieve this?
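Since each target keeps its own independent binding, one workable approach is simply to save and restore every target you actually use — a sketch, assuming only the 2D and cube-map targets matter here:

```cpp
#include <GL/gl.h>

struct SavedTextureState {
    GLint tex2D = 0;
    GLint texCube = 0;
};

SavedTextureState saveTextureState()
{
    SavedTextureState s;
    // Each target has its own binding query enum; there is no single
    // "which target is bound" query, so save each target separately.
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &s.tex2D);
    glGetIntegerv(GL_TEXTURE_BINDING_CUBE_MAP, &s.texCube);
    return s;
}

void restoreTextureState(const SavedTextureState& s)
{
    glBindTexture(GL_TEXTURE_2D, s.tex2D);
    glBindTexture(GL_TEXTURE_CUBE_MAP, s.texCube);
}
```

Note that GL_TEXTURE_BINDING_CUBE_MAP requires at least OpenGL 1.3.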
I have the following OpenGL setup for troubleshooting frame buffer issues:
I render a cube into a frame buffer.
I use the target texture from this frame buffer to draw a textured quad, which displays the cube in my viewport.
This works OK when both stages of the process are done in the same context, but breaks if stage 1 is done in a different context to stage 2 (note that these contexts are both shared and both on the same thread). In this case, I only ever see the cube displayed when I resize my viewport (which recreates my frame buffer). The cube is sometimes corrupted or fragmented, which leads me to believe that all I'm seeing is parts of memory that were used by the texture before it was resized, and that nothing is ever displayed properly.
The reason I have to have this setup is that in my actual application I'm using Qt OpenGL widgets, which are forced to use their own individual contexts, so I have to render my scene in its own dedicated context and then copy it to the relevant viewports using shareable OpenGL resources. If I don't do this, I get errors caused by VAOs being bound/used in other contexts.
I've tried the following unsuccessful combinations (where the primary context is where I use the texture to draw the quad, and the secondary context where the "offscreen" rendering of the cube into the frame buffer takes place):
Creating the frame buffer, its render buffer and its texture all in the secondary context.
Creating the frame buffer and the render buffer in the secondary context, creating the texture in the primary context, and then attaching the texture to the frame buffer in the secondary context.
Creating the frame buffer, its render buffer and two separate textures in the secondary context. One of these textures is initially attached to the frame buffer for rendering. Once the rendering to the frame buffer is complete, the first texture is detached and the second one attached. The previously attached texture containing the content of the rendering is used with the quad in the primary context.
In addition, I can't use glBlitFramebuffer() as I don't have access to the frame buffer the QOpenGLWidget uses in the application (as far as I've tried, QOpenGLWidget::defaultFramebufferObject() returns 0 which causes glBlitFramebuffer to give me errors).
The only way I have managed to get the rendering to work is to use a QOpenGLFramebufferObject and call takeTexture() when I want to use the texture with the quad. However, doing it this way means that the QOpenGLFramebufferObject creates a new texture for itself, and I have to destroy the old one once I've used it, which seems very inefficient.
Is there anything I can do to solve this problem?
I've got a project that uses a texture like that. You need to call glFinish() after drawing and before using the texture from QOpenGLFramebufferObject::texture(). That was our problem on some OSes.
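A sketch of where the glFinish() goes, assuming a producer context that renders into the FBO and a consumer context (the widget) that samples it — the function and variable names here are illustrative:

```cpp
#include <QOpenGLContext>
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>

// In the render-module (producer) context:
void renderIntoFbo(QOpenGLContext* renderContext, QSurface* surface,
                   QOpenGLFramebufferObject* fbo)
{
    renderContext->makeCurrent(surface);
    fbo->bind();
    // ... draw the scene ...
    fbo->release();
    // Make sure the GPU has finished writing the texture before
    // another (shared) context reads from it.
    renderContext->functions()->glFinish();
}

// Later, in the QOpenGLWidget's paintGL(), in its own shared context:
//   glBindTexture(GL_TEXTURE_2D, fbo->texture());
//   ... draw the fullscreen quad ...
```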
I know that we can create such an FBO (no color attachment, just a depth attachment); e.g. this can be used for shadow mapping.
Also, the FBO completeness check states that
"Each draw buffer must either specify color attachment points that have images attached or must be GL_NONE."
(GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER when false). Note that this test is not performed if OpenGL 4.2 or ARB_ES2_compatibility is available.
My question is, is it necessary to explicitly mention this by using
glDrawBuffer(GL_NONE);
If I don't specify any color attachments, is it not understood by OpenGL that the framebuffer will not have any color buffers attached?
(My program worked fine on OpenGL 4.0 without calling glDrawBuffer(GL_NONE);, so I assume it's okay not to, but the wiki says the FB completeness check should have failed.)
In my application, using a depth buffer for shadow mapping, NOT calling
glDrawBuffer(GL_NONE);
does NOT result in an incomplete framebuffer if this framebuffer has no color attachments.
However, rendering breaks badly: the depth texture is apparently not writable, not readable, or both.
Why this is, and whether it is universally so, I will leave open. I am just reporting my findings, and my findings indicate you should be cautious about omitting this statement.
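A minimal depth-only FBO setup sketch (plain GL; the texture size is an assumption), showing where the glDrawBuffer/glReadBuffer calls go:

```cpp
GLuint depthTex = 0, fbo = 0;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

// Explicitly state that there are no color buffers to draw to or read
// from; on drivers/versions where the completeness test applies, this
// avoids GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
```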
How do I turn off a texture unit, or at least prevent its state changing when I bind a texture? I'm using shaders, so I don't think there's a glDisable for this. The problem is that the chain of events might look something like this:
Create texture 1 (implies binding it)
Use texture 1 with texture unit 1
Create texture 2 (implies binding it)
Use texture 2 with texture unit 2
, but given glActiveTexture semantics, it seems this isn't possible, because creating texture 2 will bind it to texture unit 1, as that was the last unit I passed to glActiveTexture. i.e. you have to write:
Create texture 1
Create texture 2
Use texture 1 with texture unit 1
Use texture 2 with texture unit 2
I've simplified the example of course, but the fact that creating and binding a texture can incidentally affect the currently active texture unit even when you are only binding the texture as part of the creation process is something that makes me somewhat uncomfortable. Unless of course I've made an error here and there's something I can do to disable state changes in the current glActiveTexture?
Thanks for any assistance you can give me here.
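To make the problem described above concrete, a sketch of the sequence (the unit numbers are illustrative):

```cpp
GLuint tex1 = 0, tex2 = 0;

// Create texture 1: glBindTexture is needed to define it, and it
// lands on whatever texture unit is currently active.
glGenTextures(1, &tex1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex1);
// ... glTexImage2D / glTexParameteri for tex1 ...

// Create texture 2: this bind REPLACES tex1 on unit 0,
// because unit 0 is still the active unit.
glGenTextures(1, &tex2);
glBindTexture(GL_TEXTURE_2D, tex2);
// ... setup for tex2 ...

// So before drawing, both textures must be (re)bound explicitly:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex2);
```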
This is pretty much something you just have to learn to live with in OpenGL. GL functions only affect the current state. So to modify an object, you must bind it to the current state, and modify it.
In general, however, you shouldn't have a problem. There is no reason to create textures in the same place where you're binding them for use. The code that actually walks your scene and binds textures for rendering should never be creating textures. The rendering code should establish all necessary state for each rendering pass (unless it knows that all necessary state was previously established in this rendering call), so it should bind all of the textures that each object needs, and any textures left bound from creation will simply be replaced.
And in general, I would suggest unbinding textures after creation (i.e. glBindTexture(..., 0)). This prevents them from sticking around.
And remember: when you bind a texture, you also unbind whatever texture was currently bound. So the texture functions will only affect the new object.
However, if you want to rely on an EXT extension, there is EXT_direct_state_access. It is supported by NVIDIA and AMD, so it's fairly widely available. It allows you to modify objects without binding them, so you can create a texture without binding it.
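A sketch of the extension in use, assuming EXT_direct_state_access is available (width, height and pixels are placeholders; on GL 4.5 or ARB_direct_state_access, glCreateTextures plus the non-EXT glTexture* functions are the modern equivalent):

```cpp
// With EXT_direct_state_access, a texture can be defined without ever
// binding it, so no texture unit's state is disturbed.
GLuint tex = 0;
glGenTextures(1, &tex);
glTextureImage2DEXT(tex, GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```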
I'm considering refactoring a large part of my rendering code and one question popped to mind:
Is it possible to render to both the screen and to a texture using multiple color attachments in a Frame Buffer Object? I cannot find any information if this should be possible or not even though it has many useful applications. I guess it should be enough to bind my texture as color attachment0 and renderbuffer 0 to attachment1?
For example I want to make an interactive application where you can "draw" on a 3D model. I resolve where the user draws by rendering the UV-coordinates to a texture so I can look up at the mouse-coordinates where to modify the texture. In my case it would be fastest to have a shader that both draws the UV's to the texture and the actual texture to the screen in one pass.
Are there better ways to do this or am I on the right track?
There is no such thing as a "default renderbuffer" in OpenGL. There is the window-system-provided default framebuffer with reserved name zero, but that basically means "no FBO enabled". So no, unfortunately normal OpenGL provides no method to use its color buffer as a color attachment of any other FBO. I'm not aware of any extensions that could possibly provide this feature.
With renderbuffers there is also the reserved name zero, but it's only a special "none" value that allows unbinding renderbuffers.
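Consistent with the above, one common workaround is to render both images into a single FBO (two color attachments via multiple render targets) and then blit attachment 0 to the default framebuffer — a sketch, with fbo, width and height assumed to exist:

```cpp
// FBO with two color attachments: 0 = final image, 1 = UV lookup image.
GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffers(2, bufs);
// ... one pass: the fragment shader writes the shaded color to
// layout(location = 0) and the UVs to layout(location = 1) ...

// Then copy attachment 0 to the window-system framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```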