How can an OpenGL Vertex Array Object get deleted without calling glDeleteVertexArrays? - c++

I am developing an After Effects plugin where I use a VAO for OpenGL rendering. After a full-screen RAM preview, the VAO, which has handle number 1, is somehow deleted (glGenVertexArrays hands out 1 again). The strange thing is that the shaders and framebuffers are still valid, so it's not the entire OpenGL context that gets reset. Does anyone know what could cause this?

The most likely explanation is that your plugin gets a completely new OpenGL context each time this happens. If your OpenGL context shared its "list" namespace with another "caching" context, and this sharing is re-established for the new context, you'd observe exactly this behavior.
The strange thing is that the shaders and framebuffers are still valid, so it's not the entire OpenGL context that gets reset.
When OpenGL context namespace sharing is established, some kinds of objects are shared, i.e. get their internal reference counts (you can't access these directly) incremented for each participating context, while others are not. Objects that hold data in any form (textures, buffer objects, shaders) are shared, while abstraction objects that hold state (vertex array objects and framebuffer objects among them) are not.
So if a new context is created and namespace sharing with the caching context is established, you'll still see all the textures, shaders and so on that you created previously, while VAOs and FBOs will have disappeared.
If you want to catch this situation, use wglGetCurrentContext to get the OS handle of the current context. You can safely cast a Windows handle to the uintptr_t integer type, so for debugging you could print the handle's value and watch for changes.
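A minimal sketch of that check (Windows-only; on other platforms the analogous calls would be glXGetCurrentContext or CGLGetCurrentContext):

```cpp
#include <windows.h>   // wglGetCurrentContext
#include <cstdint>
#include <cstdio>

// Call at the start of every render invocation of the plugin.
// If the printed value changes between calls, the host handed you
// a different (newly created) OpenGL context.
void debugCheckContextIdentity()
{
    static uintptr_t lastContext = 0;
    uintptr_t current = reinterpret_cast<uintptr_t>(wglGetCurrentContext());
    if (current != lastContext) {
        std::fprintf(stderr, "OpenGL context changed: %p -> %p\n",
                     reinterpret_cast<void*>(lastContext),
                     reinterpret_cast<void*>(current));
        lastContext = current;
    }
}
```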

Related

OpenGL - Can I use texture after querying its data into PBO?

I render into a texture via FBO. I want to copy the texture data into a PBO so I use glGetTexImage. I will use glMapBuffer on this PBO but only in the next frame (or later) so it should not cause a stall.
However, can I use the texture immediately after the glGetTexImage call without causing a stall? Can I bind it to a texture unit and render from it? Can I render to it again via FBO?
However, can I use the texture immediately after the glGetTexImage call without causing a stall?
That's implementation dependent behavior. It may or may not cause a stall, depending on how the implementation does actual data transfers.
Can I bind it to a texture unit and render from it?
Yes.
Can I render to it again via FBO?
Yes. This, however, might or might not cause a stall, depending on how the implementation internally deals with data-consistency requirements. Before the texture can be modified, its data must either be completely transferred into the PBO, or, if the implementation can detect that the entire contents will be replaced anyway (e.g. by a glClear call covering the texture attachment), it might simply orphan the internal data structure and start with a fresh memory region, avoiding the stall.
This is one of those corner cases that are nigh impossible to predict. You'll have to profile the performance and see for yourself. The surefire way to avoid stalls is to use a fresh texture object.
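A sketch of the intended round-robin pattern, assuming two PBOs so that mapping one never waits on the in-flight transfer of the other (`width`, `height` and `tex` are assumed to exist; error handling omitted):

```cpp
// Double-buffered asynchronous texture readback (sketch).
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                 nullptr, GL_STREAM_READ);
}

int frame = 0;
// --- per frame ---
// Kick off an asynchronous copy of the texture into this frame's PBO.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame % 2]);
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

// Map the *other* PBO, whose transfer was started a frame ago and
// should have completed by now, so the map does not stall.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) % 2]);
if (void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
    // ... consume pixels ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
++frame;
```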

QOpenGLFrameBufferObject: binding texture name from texture() gives InvalidOperation error

In my application I have a module that manages rendering using OpenGL, and renders the results into QOpenGLFrameBufferObjects. I know this much works, because I'm able to save their contents using QOpenGLFrameBufferObject::toImage() and see the render.
I am now attempting to take the texture ID returned from QOpenGLFrameBufferObject::texture(), bind it as a normal OpenGL texture and render it into a QOpenGLWidget viewport using a fullscreen quad. (I'm using this two-step method because I'm not aware of a way to get around the fact that QOpenGLWidgets each work in their own context, but that's a different story.)
The problem here is that glBindTexture() returns InvalidOperation when I call it. According to the OpenGL documentation, this is because "[The] texture was previously created with a target that doesn't match that of [the input]." However, I created the frame buffer object by passing GL_TEXTURE_2D into the constructor, and am passing the same in as the target to glBindTexture(), so I'm not sure where I'm going wrong. There isn't much documentation online about how to correctly use QOpenGLFrameBufferObject::texture().
Other supplementary information, in case it helps:
The creation of the frame buffer object doesn't set any special formats. They're left at whatever defaults Qt uses. As far as I know, this means it also has no depth or stencil attachments as of yet, though once I've got the basics working this will probably change.
Binding the FBO before binding its texture doesn't seem to make a difference.
QOpenGLFrameBufferObject::isValid() returns true.
Calling glIsTexture() on the texture handle returns false, but I'm not sure why this would be given that it's a value provided to me by Qt for the purposes of binding an OpenGL texture. The OpenGL documentation does mention that "a name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture", but here I can't bind it anyway.
I'm attempting to bind the texture in a different context to the one the FBO was created in (ie. the QOpenGLWidget's context instead of the render module's context).
I'll provide some code, but a lot of what I have is specific to the systems that exist in the rendering module, so there's only a small amount of relevant OpenGL code.
In the render module context:
QOpenGLFramebufferObject* fbo = new QOpenGLFramebufferObject(QSize(...), GL_TEXTURE_2D);
// Do some rendering in this context later
In the QOpenGLWidget context, after having rendered to the frame buffer in the rendering module:
GLuint textureId = fbo->texture();
glBindTexture(GL_TEXTURE_2D, textureId); // Invalid operation
EDIT: It turns out the culprit was that my contexts weren't actually being shared, as I'd misinterpreted what the Qt::AA_ShareOpenGLContexts application attribute did. Once I made them properly shared the issue was fixed.
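For reference, global context sharing in Qt must be requested before the QApplication object is constructed; a minimal sketch:

```cpp
#include <QApplication>

int main(int argc, char* argv[])
{
    // Must be set *before* QApplication is created, otherwise it has no effect.
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);

    // ... create QOpenGLWidgets / offscreen contexts here. Their contexts
    // now share textures, buffers and shaders via the global share context
    // (but not FBOs or VAOs, which remain per-context).
    return app.exec();
}
```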

Re-create FBO (frame buffer object) each frame

I would like to create FBO but that will be "shared" between all contexts.
Because an FBO is container and can not be "shared", but only its buffers, I thought to do the follow:
Create an object FBODescriptor that is a descriptor of the desired FBO, it also contains opengl buffers that are shared.
At each frame, and in any context that is active, I create the FBO (so it is available for the current context), attach to it the buffers, and delete the FBO container after the rendering.
In this way, I have a desired FBO that is available for any context.
My assumption is that because the buffer resources already exist and only the FBO container, not the buffers themselves, needs to be re-created each frame, this shouldn't carry a meaningful penalty.
Will this work?
This will work, in the sense that it will function. But it is not in any way good. Indeed, any solution you come up with that results in "create an OpenGL object every frame" should be immediately discarded. Or at least considered highly suspect.
FBOs often do a lot of validation of their state. This is not cheap. Indeed, the recommendation has often been to not modify FBOs at all after you finish making them. Don't attach new images, don't remove them, anything. This would obviously include deleting and recreating them.
If you want to propagate changes to framebuffers across contexts, then you should do it in a way where you only modify or recreate FBOs when you need to. That is, when they actually change.
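One way to follow that advice is to keep one FBO per context, keyed by the current context, and rebuild it only when the shared attachments actually change. A hypothetical sketch (the context key could be the handle from wglGetCurrentContext / glXGetCurrentContext cast to uintptr_t):

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical per-context FBO cache: one FBO per context, recreated
// only when the shared attachments have changed, not every frame.
std::unordered_map<uintptr_t, GLuint>   fboPerContext;
std::unordered_map<uintptr_t, unsigned> fboVersion;
unsigned attachmentsVersion = 0;  // bump whenever the shared textures change

GLuint fboForCurrentContext(uintptr_t ctx, GLuint colorTex)
{
    auto it = fboPerContext.find(ctx);
    if (it != fboPerContext.end() && fboVersion[ctx] == attachmentsVersion)
        return it->second;  // up to date, no validation cost this frame

    if (it != fboPerContext.end())
        glDeleteFramebuffers(1, &it->second);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    fboPerContext[ctx] = fbo;
    fboVersion[ctx]    = attachmentsVersion;
    return fbo;
}
```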

Common OpenGL cleanup operation WITHOUT destroying context

Recently, I've been doing offscreen GPU acceleration for my real-time program.
I want to create a context and reuse it several times (100+). And I'm using OpenGL 2.1 and GLSL version 1.20.
Each time I reuse the context, I'm going to do the following things:
Compile shaders, link the program, then glUseProgram. (Question 1: should I relink the program or re-create the program each time?)
Generate FBO and Texture, then bind them so I can do offscreen rendering. (Question2: should I destroy those FBO and Texture)
Generate a GL_ARRAY_BUFFER and put some vertex data in it. (Question 3: do I even need to clean this up?)
glDrawArray bluh bluh...
Call glFinish() then copy data from GPU to CPU by calling glReadPixels.
And is there any other necessary cleanup operation that I should consider?
If you can somehow cache or otherwise keep the OpenGL object IDs, then you should not delete them and instead just reuse them on the next run. Unless you acquire new IDs, reusing the old ones will either replace the existing objects (properly releasing their allocations) or just change their data.
The call to glFinish before glReadPixels is superfluous, because glReadPixels causes an implicit synchronization and finish.
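Applied to the list above, a once-only initialization with a reusable render function might look like this (names are placeholders; setup of the individual objects omitted):

```cpp
// Created once, then reused on every run (sketch).
struct GpuResources {
    GLuint program;   // compiled and linked once; no relinking per run
    GLuint fbo, tex;  // offscreen render target, reused
    GLuint vbo;       // vertex data, re-uploaded only if it changes
};

void renderOnce(const GpuResources& r, int w, int h, void* outPixels)
{
    glBindFramebuffer(GL_FRAMEBUFFER, r.fbo);
    glUseProgram(r.program);
    glBindBuffer(GL_ARRAY_BUFFER, r.vbo);
    // ... set up vertex attributes, glDrawArrays(...) ...

    // No glFinish needed: glReadPixels synchronizes implicitly.
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, outPixels);
}
```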

How could OpenGL buffers' state persist between program runs?

I'm writing an OpenGL program that draws into an Auxiliary Buffer, then the content of the Auxiliary Buffer is accumulated to the Accumulation Buffer before being GL_RETURN-ed to the Back buffer (essentially to be composited to the screen). In short, I'm doing sort of a motion blur. However the strange thing is, when I recompile and rerun my program, I was seeing the content of the Auxiliary/Accumulation Buffer from the previous program runs. This does not make sense. Am I misunderstanding something, shouldn't OpenGL's state be completely reset when the program restarts?
I'm writing an SDL/OpenGL program in Gentoo Linux nVidia Drivers 195.36.31 on GeForce Go 6150.
No - there's no reason for your GPU to ever clear its memory. It's your responsibility to clear out (or initialize) your textures before using them.
Actually, the OpenGL state is initialized to well-defined values.
However, the GL state consists of settings like all binary switches (glEnable), blending, depth test mode... etc, etc. Each of those has its default settings, which are described in OpenGL specs and you can be sure that they will be enforced upon context creation.
The point is, the framebuffer (or texture data, or vertex buffers, or anything else) is NOT part of what is called "GL state". GL state "exists" in your driver. What is stored in GPU memory is a totally different thing, and it is uninitialized until you ask the driver (via GL calls) to initialize it. So it's completely possible to see the remains of a previous run in texture memory, or even in the framebuffer itself, if you don't clear or initialize it at startup.
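The practical consequence: clear every buffer you draw into before first use, e.g. (using the legacy auxiliary/accumulation buffers the question describes):

```cpp
// Initialize all buffers you rely on before accumulating into them,
// so no stale GPU memory from a previous process shows through.
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearAccum(0.0f, 0.0f, 0.0f, 0.0f);

glDrawBuffer(GL_AUX0);                              // auxiliary buffer
glClear(GL_COLOR_BUFFER_BIT);

glDrawBuffer(GL_BACK);                              // back buffer
glClear(GL_COLOR_BUFFER_BIT | GL_ACCUM_BUFFER_BIT); // + accumulation buffer
```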