I would like to create an FBO that is "shared" between all contexts.
Because an FBO is a container and cannot be "shared" (only its buffers can), I thought to do the following:
Create an FBODescriptor object that describes the desired FBO; it also holds the OpenGL buffers, which are shared.
Each frame, in whichever context is active, I create the FBO (so it is available in the current context), attach the buffers to it, and delete the FBO container after rendering.
This way, the desired FBO is available in any context.
My assumption is that because the buffer resources already exist and only the FBO container needs to be re-created each frame, there is no meaningful penalty.
Will this work?
This will work, in the sense that it will function. But it is not in any way good. Indeed, any solution you come up with that results in "create an OpenGL object every frame" should be immediately discarded. Or at least considered highly suspect.
FBOs often do a lot of validation of their state. This is not cheap. Indeed, the recommendation has often been to not modify FBOs at all after you finish making them. Don't attach new images, don't remove them, anything. This would obviously include deleting and recreating them.
If you want to propagate changes to framebuffers across contexts, then you should do it in a way where you only modify or recreate FBOs when you need to. That is, when they actually change.
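A sketch of that "only when they actually change" approach in C (the `get_current_context_id` helper and `MAX_CONTEXTS` limit are hypothetical placeholders for your windowing layer): each context builds its FBO lazily, once, and rebuilds it only when the descriptor's attachments are actually modified.

```c
/* Sketch: one FBO per context, created lazily and cached. The FBO is
 * rebuilt only when the shared attachments change (tracked by a
 * revision counter), never every frame. */
#define MAX_CONTEXTS 8

typedef struct {
    GLuint shared_color_tex;            /* shared between contexts */
    GLuint shared_depth_tex;            /* shared between contexts */
    GLuint fbo[MAX_CONTEXTS];           /* per-context FBO names, 0 = not built */
    unsigned revision;                  /* bumped when attachments change */
    unsigned built_revision[MAX_CONTEXTS];
} FBODescriptor;

GLuint fbo_for_current_context(FBODescriptor *d)
{
    int ctx = get_current_context_id();  /* hypothetical helper */
    if (d->fbo[ctx] == 0 || d->built_revision[ctx] != d->revision) {
        if (d->fbo[ctx] != 0)
            glDeleteFramebuffers(1, &d->fbo[ctx]);
        glGenFramebuffers(1, &d->fbo[ctx]);
        glBindFramebuffer(GL_FRAMEBUFFER, d->fbo[ctx]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, d->shared_color_tex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, d->shared_depth_tex, 0);
        d->built_revision[ctx] = d->revision;
    }
    return d->fbo[ctx];
}
```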
Related
I render into a texture via FBO. I want to copy the texture data into a PBO so I use glGetTexImage. I will use glMapBuffer on this PBO but only in the next frame (or later) so it should not cause a stall.
However, can I use the texture immediately after the glGetTexImage call without causing a stall? Can I bind it to a texture unit and render from it? Can I render to it again via FBO?
However, can I use the texture immediately after the glGetTexImage call without causing a stall?
That's implementation dependent behavior. It may or may not cause a stall, depending on how the implementation does actual data transfers.
Can I bind it to a texture unit and render from it?
Yes.
Can I render to it again via FBO?
Yes. This however might or might not cause a stall, depending on how the implementation internally deals with data consistency requirements. I.e. before modifying the data, the texture data either must be completely transferred into the PBO, or if the implementation can detect, that the entire thing will get altered (e.g. by issuing a glClear call that matches the attachment of the texture), it might simply orphan the internal data structure and start with a fresh memory region, avoiding that stall.
This is one of those corner cases which are nigh impossible to predict. You'll have to profile the performance and see for yourself. The surefire way to avoid stalls is to use a fresh texture object.
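For reference, the double-buffered PBO readback the question describes might look like this sketch (error handling omitted; on the very first frame the mapped buffer has not been filled yet, which real code would skip):

```c
/* Sketch: two pixel-pack PBOs used in rotation. glGetTexImage writes
 * asynchronously into the bound PACK buffer; mapping happens one frame
 * later, so the map should not stall. */
GLuint pbo[2];
int frame = 0;

void init_pbos(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                     NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void read_frame(GLuint tex)
{
    /* Kick off the transfer into this frame's PBO (the NULL pointer is
     * an offset into the bound PACK buffer). */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame % 2]);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Map the other PBO, which was filled last frame. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) % 2]);
    void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (data) {
        /* ... consume last frame's pixels ... */
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    frame++;
}
```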
Can I attach two textures to one FBO, and switch between them using glDrawBuffers, binding the inactive one as shader input? This seems much more efficient than switching FBOs for multipass effects.
If we're assuming you don't have access to OpenGL 4.5/ARB/NV_texture_barrier, no you cannot. The part of the OpenGL specification that forbids feedback loops on framebuffer attached images does not care whether the image can be written to or not. This is also true for array layers or mipmap levels; reading from one layer while writing to another layer will not save you.
All that matters is attachment. You must either bind a new FBO that doesn't have the texture attached, or remove the attachment from the current FBO.
Though again, texture barrier functionality makes everything I said irrelevant. And considering how widespread it is, it's really not something you should be concerned about.
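With texture barrier support, the ping-pong the question asks about becomes legal. A sketch, assuming GL 4.5 / ARB_texture_barrier and two textures already bound as color attachments 0 and 1 of the FBO:

```c
/* Sketch: ping-pong between two color attachments of a single FBO.
 * Each pass writes one attachment while sampling the other; the
 * glTextureBarrier call makes the writes of pass i visible to the
 * reads of pass i+1. Without it this would be an undefined feedback
 * loop, since both textures stay attached the whole time. */
void multipass(GLuint fbo, GLuint texA, GLuint texB, int passes)
{
    GLuint tex[2] = { texA, texB };
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    for (int i = 0; i < passes; i++) {
        GLenum draw = GL_COLOR_ATTACHMENT0 + (i & 1);
        glDrawBuffers(1, &draw);                         /* write one image */
        glBindTexture(GL_TEXTURE_2D, tex[(i + 1) & 1]);  /* sample the other */
        /* ... draw fullscreen pass ... */
        glTextureBarrier();
    }
}
```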
I succeeded in rendering to a texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for the color buffer: a Renderbuffer. I searched a lot on the internet, but could not find any example of drawing a Renderbuffer as a Texturebuffer with shaders.
If I'm not wrong, Renderbuffer was introduced in OpenGL 3.30, and it's faster than a Texturebuffer.
Can I use a Renderbuffer as a Texturebuffer? (stupid question, huh? I think the answer should absolutely be yes, shouldn't it?)
If yes, please point me to or give an example of drawing a renderbuffer as a texture buffer.
My goal is just to study, but I'd like to know: is that a better way to draw textures? Should we use it frequently?
First of all, don't use the term "texture buffer" when you really just mean texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was introduced in OpenGL 3.30, and it's faster than a Texturebuffer.
No. Renderbuffers were there when FBOs were first invented. One being faster than the other is not generally true either; these are implementation details. But that is also irrelevant here.
Can I use a Renderbuffer as a Texturebuffer? (stupid question, huh? I think the answer should absolutely be yes, shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not necessarily in the format required for texturing. You can read back the results to the CPU using glReadPixels, or you could copy the data into a texture object, e.g. via glCopyTexSubImage - but that would be much slower than directly rendering into textures.
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file, or encoded to a video)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as an intermediate buffer when the image data can't be directly used by the following steps, e.g. when using multisampling and then copying the result to a non-multisampled framebuffer or texture
It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
Performance-wise, do not make any assumptions. You might think that since renderbuffers have a lot fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.
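To make the terminology concrete, here is a sketch of the common split both answers describe: a texture for the color attachment (so later passes can sample it) and a renderbuffer for the depth attachment (never read back):

```c
/* Sketch: build an offscreen render target with a sampleable color
 * texture and a write-only renderbuffer depth attachment. Returns the
 * FBO name, or 0 if the framebuffer is incomplete. */
GLuint make_target(int w, int h, GLuint *color_tex_out)
{
    GLuint fbo, color, depth;

    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depth);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth);

    /* Always check completeness after building. */
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;
    *color_tex_out = color;
    return fbo;
}
```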
Recently, I've been doing offscreen GPU acceleration for my real-time program.
I want to create a context and reuse it several times (100+). And I'm using OpenGL 2.1 and GLSL version 1.20.
Each time I reuse the context, I'm going to do the following things:
Compile shaders, link the program, then glUseProgram. (Question 1: should I re-link the program or re-create the program each time?)
Generate an FBO and a texture, then bind them so I can do offscreen rendering. (Question 2: should I destroy that FBO and texture?)
Generate a GL_ARRAY_BUFFER and put some vertex data in it. (Question 3: do I even need to clean this up?)
glDrawArrays, blah blah...
Call glFinish() then copy data from GPU to CPU by calling glReadPixels.
And is there any other necessary cleanup operation that I should consider?
If you can somehow cache or otherwise keep the OpenGL object IDs, then you should not delete them and instead just reuse them on the next run. Unless you acquire new IDs, reusing the old ones will either replace the existing objects (properly releasing their allocations) or just change their data.
The call to glFinish before glReadPixels is superfluous, because glReadPixels causes an implicit synchronization and finish.
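Putting both points together, a sketch of the reuse pattern (the `build_program` helper is hypothetical; note that on GL 2.1 the FBO entry points come from EXT_framebuffer_object, where the names carry an EXT suffix):

```c
/* Sketch: create objects once, reuse them on every run; only the
 * per-run work stays inside the hot path. */
typedef struct {
    GLuint program, fbo, tex, vbo;
    int initialized;
} GLCache;

void run_once(GLCache *c, int w, int h, void *out_pixels)
{
    if (!c->initialized) {             /* compile/link/allocate only once */
        c->program = build_program();  /* hypothetical helper */
        glGenFramebuffers(1, &c->fbo);
        glGenTextures(1, &c->tex);
        glGenBuffers(1, &c->vbo);
        /* ... allocate texture storage, attach it to the FBO,
         *     upload the vertex data ... */
        c->initialized = 1;
    }
    glUseProgram(c->program);
    glBindFramebuffer(GL_FRAMEBUFFER, c->fbo);
    glBindBuffer(GL_ARRAY_BUFFER, c->vbo);
    /* update vertex data with glBufferSubData only if it changed */
    /* ... glDrawArrays ... */

    /* glReadPixels implicitly synchronizes; no glFinish needed. */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out_pixels);
}
```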
I recently read that simply switching the render targets of a framebuffer object is much faster than switching framebuffer object.
As extreme as it sounds, does this mean I should only ever use one framebuffer object and only switch out its targets?
EDIT: I changed 'swapping' to 'switching' to avoid confusion. By switching I mean binding a new framebuffer in place of the old one. Not to be confused with the SwapBuffers() call used to swap the front- and backbuffers.
EDIT: this answer is probably wrong. Read the comments below.
It's faster to switch framebuffer-attachable textures than to switch between framebuffers (FBOs). More here: http://www.songho.ca/opengl/gl_fbo.html
There are limits to how many attachments an FBO can have, though.