Recently, I've been doing offscreen GPU acceleration for my real-time program.
I want to create a context and reuse it many times (100+). I'm using OpenGL 2.1 and GLSL version 1.20.
Each time I reuse the context, I'm going to do the following things:
Compile shaders, link the program, then call glUseProgram. (Question 1: should I relink the program or re-create it each time?)
Generate an FBO and a texture, then bind them so I can do offscreen rendering. (Question 2: should I destroy the FBO and texture each time?)
Generate a GL_ARRAY_BUFFER and put some vertex data in it. (Question 3: do I even need to clean this up?)
Call glDrawArrays and so on.
Call glFinish(), then copy the data from the GPU to the CPU by calling glReadPixels.
And is there any other necessary cleanup operation that I should consider?
If you can somehow cache or otherwise keep the OpenGL object IDs, then you should not delete the objects; just reuse them on the next run. As long as you don't acquire new IDs, reusing the old ones will either replace the existing objects (properly releasing their allocations) or simply replace their data.
The call to glFinish before glReadPixels is superfluous, because glReadPixels causes an implicit synchronization and finish.
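To make that concrete, here is a minimal sketch of the reuse pattern, assuming OpenGL 2.1 with EXT_framebuffer_object; build_program, vs_src and fs_src are hypothetical helpers, and the vertex layout is just an example:

```c
/* One-time setup: create the program, FBO, texture and VBO once and keep
 * their IDs for every subsequent run (EXT_framebuffer_object on GL 2.1). */
GLuint prog, fbo, tex, vbo;

void init_once(int width, int height)
{
    prog = build_program(vs_src, fs_src);   /* hypothetical helper: compile + link once */

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    glGenBuffers(1, &vbo);
}

/* Per-run work: no recompiling, no relinking, no re-creating objects. */
void run_once(const float *verts, size_t bytes, GLsizei vertex_count,
              int width, int height, void *out)
{
    glUseProgram(prog);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glViewport(0, 0, width, height);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* Re-specifying the data store on the same ID simply replaces its contents. */
    glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STREAM_DRAW);

    /* ... set up vertex attribute pointers, then draw ... */
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);

    /* glReadPixels synchronizes implicitly, so no glFinish is needed. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, out);
}
```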
I render into a texture via an FBO. I want to copy the texture data into a PBO, so I use glGetTexImage. I will call glMapBuffer on this PBO, but only in the next frame (or later), so it should not cause a stall.
However, can I use the texture immediately after the glGetTexImage call without causing a stall? Can I bind it to a texture unit and render from it? Can I render to it again via FBO?
However, can I use the texture immediately after the glGetTexImage call without causing a stall?
That's implementation dependent behavior. It may or may not cause a stall, depending on how the implementation does actual data transfers.
Can I bind it to a texture unit and render from it?
Yes.
Can I render to it again via FBO?
Yes. This, however, might or might not cause a stall, depending on how the implementation internally deals with data consistency requirements. That is, before modifying the data, the texture data either must be completely transferred into the PBO, or, if the implementation can detect that the entire thing will get altered (e.g. by a glClear call that matches the attachment of the texture), it might simply orphan the internal data structure and start with a fresh memory region, avoiding the stall.
This is one of those corner cases that are nigh impossible to predict. You'll have to profile the performance and see for yourself. The only sure way to avoid stalls is to use a fresh texture object.
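For reference, a minimal sketch of the readback pattern being discussed, assuming pbo and tex already exist and the PBO's data store has been allocated with glBufferData beforehand:

```c
/* Frame N: kick off an asynchronous copy of the texture into the PBO. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex);
/* With a buffer bound to GL_PIXEL_PACK_BUFFER the last argument is an offset
 * into that buffer, not a client pointer, so the call returns quickly. */
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

/* Frame N+1 (or later): the transfer has most likely completed by now,
 * so mapping the PBO should not stall. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    /* ... consume the pixel data on the CPU ... */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```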
I would like to create an FBO that will be "shared" between all contexts.
Because an FBO is a container and cannot be "shared" (only its buffers can), I thought of doing the following:
Create an FBODescriptor object that describes the desired FBO; it also holds the OpenGL buffers, which are shared.
Each frame, in whichever context is active, I create the FBO (so it is available to the current context), attach the buffers to it, and delete the FBO container after rendering.
This way, the desired FBO is available to any context.
My assumption is that since the buffer resources already exist and only the FBO container has to be re-created each frame, there is no meaningful penalty.
Will this work?
This will work, in the sense that it will function. But it is not in any way good. Indeed, any solution you come up with that results in "create an OpenGL object every frame" should be immediately discarded. Or at least considered highly suspect.
FBOs often do a lot of validation of their state. This is not cheap. Indeed, the recommendation has often been to not modify FBOs at all after you finish making them. Don't attach new images, don't remove them, anything. This would obviously include deleting and recreating them.
If you want to propagate changes to framebuffers across contexts, then you should do it in a way where you only modify or recreate FBOs when you need to. That is, when they actually change.
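One way to do that (a sketch only, using the core framebuffer entry points and a hypothetical generation counter for the shared attachments) is to keep one FBO per context and re-attach only when the shared resources actually change:

```c
/* Keep one FBO per context plus a generation counter for the shared
 * attachments; the FBO is only (re)configured when those actually change. */
typedef struct {
    GLuint fbo;          /* per-context FBO: the container cannot be shared */
    unsigned generation; /* which version of the shared attachments it uses */
} PerContextFBO;

extern GLuint shared_color_tex;     /* shared between contexts */
extern GLuint shared_depth_rb;      /* shared between contexts */
extern unsigned shared_generation;  /* bumped whenever the attachments change */

void bind_shared_fbo(PerContextFBO *pc)
{
    int needs_attach = 0;

    if (pc->fbo == 0) {              /* first use in this context */
        glGenFramebuffers(1, &pc->fbo);
        needs_attach = 1;
    }
    glBindFramebuffer(GL_FRAMEBUFFER, pc->fbo);

    if (needs_attach || pc->generation != shared_generation) {
        /* Only touch the attachments when the shared resources changed. */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, shared_color_tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, shared_depth_rb);
        pc->generation = shared_generation;
    }
}
```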
I have a really obscure problem that I hope somebody can help with. I have implemented vertex skinning on the CPU (the GPU path was too slow because of the poor performance of looking up bone transforms in a vertex shader) using a background thread on OSX. I do not need a shared context because no GL calls are made on that thread. I allocate a buffer in the process heap big enough to hold my character's vertices, and I skin and animate that buffer on the background thread. In the main thread I simply glBufferSubData() that buffer down to my VBO, synchronizing with the end of the buffer update so I don't get tearing in my verts. The VBO has previously been bound to a VAO (one VAO per VBO per character instance), so I only have to bind the VAO and draw my mesh. Not very difficult so far. A single IBO is bound to all VAOs per character instance.
Here's the rub. If I have only one character instance, the code works perfectly: I have a lovely little warrior princess doing her idle animation. The moment I add a second instance, nothing happens; only the first instance renders.
So the big question is: what am I doing wrong? I'm pretty sure my VBO, IBO and VAOs are correct. There's a separate VBO and VAO per instance. I turned off indexing (no IBO) and the instances still fail to draw, so it's not that, IMHO. I've verified the state using the Mac OpenGL Profiler and everything looks good per instance. Is there some kind of weird flushing not going on because of my glBufferSubData call? glMapBuffer was just too slow!
If you need source code to look at I can upload that easily enough. Just wondering if anyone's heard of weirdness like what I'm seeing when dealing with buffer objects in OpenGL on the Mac.
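For clarity, this is roughly the per-instance draw path the question describes, written as a sketch (CharacterInstance and draw_instance are hypothetical names, the attribute pointers and shared IBO are assumed to have been recorded in each instance's VAO at creation time, and the core VAO entry points are used for brevity):

```c
/* Per-instance draw path: the skinning thread fills skinned_verts, the main
 * thread uploads it into this instance's VBO and draws via its VAO. */
typedef struct {
    GLuint vao, vbo;             /* one VAO and one VBO per character instance */
    const float *skinned_verts;  /* written by the background skinning thread */
    size_t verts_bytes;
    GLsizei index_count;         /* indices in the shared IBO */
} CharacterInstance;

void draw_instance(const CharacterInstance *inst)
{
    /* Upload the CPU-skinned vertices into this instance's own VBO. */
    glBindBuffer(GL_ARRAY_BUFFER, inst->vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, inst->verts_bytes, inst->skinned_verts);

    /* The VAO already records the attribute pointers for inst->vbo and the
     * shared IBO binding, so binding it is all that is needed before drawing. */
    glBindVertexArray(inst->vao);
    glDrawElements(GL_TRIANGLES, inst->index_count, GL_UNSIGNED_SHORT, 0);
    glBindVertexArray(0);
}
```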
I am developing an After Effects plugin that uses a VAO for OpenGL rendering. After a full-screen RAM preview, the VAO, which has handle number 1, is somehow deleted (glGenVertexArrays generates 1 again). The strange thing is that the shaders and framebuffers are still valid, so it's not the entire OpenGL context that gets reset. Does anyone know what could cause this?
The most likely explanation is that your plugin gets a completely new OpenGL context every time that happens. If your OpenGL context shared its "list" namespace with another "caching" context, and this sharing is re-established for the new context, you'd observe exactly this behavior.
The strange thing is that the shaders and framebuffers are still valid, so it's not the entire OpenGL context that gets reset.
When OpenGL context namespace sharing is established, some kinds of objects are shared, i.e. their "internal reference counts" (which you can't access directly) are incremented for each participating context, while others are not. Objects that hold data in any form (textures, buffer objects, shaders) are shared, while abstraction objects that hold state (vertex array objects and framebuffer objects among them) are not.
So if a new context is created and namespace sharing with the caching context is established, you'll still see all the textures, shaders and so on that you created previously, while VAOs and FBOs will have disappeared.
If you want to catch this situation, use wglGetCurrentContext to get the OS handle. You can safely cast a Windows handle to the uintptr_t integer type, so for debugging you could print the value of the handle and check whether it changes.
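A quick debugging sketch of that idea, assuming a Windows build where wglGetCurrentContext is available (the function and variable names are made up for illustration):

```c
#include <windows.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uintptr_t g_last_ctx = 0;

/* Call this at the start of every render callback of the plugin. */
void log_context_change(void)
{
    uintptr_t ctx = (uintptr_t)wglGetCurrentContext();
    if (ctx != g_last_ctx) {
        printf("GL context changed: 0x%" PRIxPTR " -> 0x%" PRIxPTR
               " (per-context objects such as VAOs and FBOs must be recreated)\n",
               g_last_ctx, ctx);
        g_last_ctx = ctx;
    }
}
```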
I'm writing an OpenGL program that draws into an auxiliary buffer; the content of the auxiliary buffer is then accumulated into the accumulation buffer before being GL_RETURN-ed to the back buffer (essentially to be composited to the screen). In short, I'm doing a sort of motion blur. The strange thing is that when I recompile and rerun my program, I see the content of the auxiliary/accumulation buffers from previous program runs. This does not make sense to me. Am I misunderstanding something? Shouldn't OpenGL's state be completely reset when the program restarts?
I'm writing an SDL/OpenGL program on Gentoo Linux with nVidia drivers 195.36.31 on a GeForce Go 6150.
No - there's no reason for your GPU to ever clear its memory. It's your responsibility to clear out (or initialize) your textures before using them.
Actually, the OpenGL state is initialized to well-defined values.
However, the GL state consists of settings such as the binary switches (glEnable), blending, the depth test mode, and so on. Each of these has a default setting, which is described in the OpenGL specs, and you can be sure it will be enforced upon context creation.
The point is, the framebuffer (or texture data, or vertex buffers, or anything else) is NOT part of what is called "GL state". GL state "exists" in your driver. What is stored in GPU memory is a totally different thing, and it is uninitialized until you ask the driver (via GL calls) to initialize it. So it's completely possible to see the remains of a previous run in texture memory, or even in the framebuffer itself, if you don't clear or initialize it at startup.
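In practice that means clearing every buffer you use yourself before the first frame. A minimal sketch for the legacy auxiliary/accumulation buffer setup described in the question, assuming the pixel format actually provides those buffers:

```c
/* Explicitly initialize every buffer before the first frame instead of
 * relying on the driver handing out zeroed memory. */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearAccum(0.0f, 0.0f, 0.0f, 0.0f);

glDrawBuffer(GL_AUX0);                        /* clear the auxiliary buffer */
glClear(GL_COLOR_BUFFER_BIT);

glDrawBuffer(GL_BACK);                        /* clear the back buffer and depth */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glClear(GL_ACCUM_BUFFER_BIT);                 /* clear the accumulation buffer */
```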