Is it possible to render to an FBO with render calls that use FBOs themselves?
For instance, here is a bit of pseudo-code:
Bind (top level FBO)
render water <-- (generate and use own sub fbos)
render shadows <-- (generate and use sub fbos)
render regular scene
etc..
unbind (top level FBO)
Blur top-level FBO, apply bloom
render final scene to a quad using the top-level FBO's generated texture

I'm interested in doing post-processing like bloom on my final game scene.
If I understand your question correctly, you want to compose a final scene from different rendering results, right? First of all, this is completely possible. You can reserve an FBO per effect if you want. But your pseudo-code lacks efficiency and would hurt performance: there is no need to create sub-FBOs at runtime all the time, as that is an expensive operation. If you are after a pipeline with a post-processing stage, you usually need no more than two offscreen FBOs. Also remember that you always have the default framebuffer (front, back, left, right buffers), which is created by the context. So you can render your 3D stuff into FBO-1, then use its texture as the source for FBO-2 to apply post-processing effects, and finally blit the result into the default (screen) framebuffer.
I don't see a reason to create an FBO per effect. The execution is still serial: you render effect after effect, so you can reuse the same FBO again and again. Also, instead of multiple FBOs, you may consider using multiple renderbuffers or texture attachments of one FBO and deciding into which of those you want to render your stuff.
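For illustration, here is a minimal sketch of that two-offscreen-FBO flow. It assumes fboScene and fboPost (with their color textures, sceneTex being the one attached to fboScene), the post-processing program bloomProgram, the window size width/height, and the helper calls drawScene/drawFullScreenQuad were all created once at init time; every one of those names is a placeholder, not something from the question:

    /* 1) render the 3D scene into the first offscreen FBO */
    glBindFramebuffer(GL_FRAMEBUFFER, fboScene);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();                                   /* water, shadows, regular geometry, ... */

    /* 2) post-process: read the scene texture, write into the second FBO */
    glBindFramebuffer(GL_FRAMEBUFFER, fboPost);
    glUseProgram(bloomProgram);
    glBindTexture(GL_TEXTURE_2D, sceneTex);        /* color texture attached to fboScene */
    drawFullScreenQuad();

    /* 3) blit the result into the default (screen) framebuffer */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboPost);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);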
Related
Is it necessary to render a scene to a texture which is then used on a quad covering the whole frame in order to be able to do post-processing? Is it because otherwise you would not have access to the rendered image as a whole, since the shader program would render the image straight to the screen without it being possible to edit it in between?
Is it necessary to render a scene to a texture which is then being used on a quad
Yes and no. Yes, you need to render the scene to a texture. But with Compute Shaders, you don't have to render the texture to a quad.
The reason why you need to render to a texture is that you usually need the fully rendered image for the post-processing effect. But this is not possible in the first render pass, since you don't have access to neighboring fragments, and you also wouldn't see fragments that are written after the current one.
As @Spektre noted in a comment, the second major reason why render-to-texture is needed is that the OpenGL pipeline cannot read the current rendering target, so we need to separate processing into passes so we can read what was rendered.
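As a rough sketch of the compute-shader route mentioned above (GL 4.3+), the rendered scene texture can be processed in place without drawing a quad. The names sceneTex, computeProg, width/height and the RGBA8 image format are assumptions for illustration only:

    glUseProgram(computeProg);
    /* expose the scene texture as an image the compute shader can read and write */
    glBindImageTexture(0, sceneTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
    /* launch one 16x16 work group per tile of the image */
    glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);
    /* make the image writes visible to later texture reads */
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);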
glFramebufferTexture allows one to bind an entire cubemap as a color attachment for layered rendering. In turn, glReadBuffer then allows one to bind said entire cubemap as a read buffer.
I want to render a scene to the non-zero mip levels of a cubemap texture. I'm using layered rendering to render not to one face, but to the entire thing in one go. However, the shader used for this uses the 0th mip level of that same texture. Since I don't think I can expose the texture to a shader and to a framebuffer attachment at the same time, I'm rendering to a different texture and copying the contents of that texture to my original texture's desired mip level.
Right now I'm doing this with a pass-through shader, which is pretty slow since it's layered rendering and thus uses a geometry shader, and it would be better to use an API function. However, glCopyTexSubImage2D only allows cubemap faces, and neither it nor glCopyTexSubImage3D seems to accept cubemaps as input. Apart from 4.5-specific functions such as glCopyTextureSubImage3D, is there any way to retrieve an entire cubemap from the framebuffer into a cubemap texture? I'm also aware that glCopyImageSubData exists, but something at the feature level of glFramebufferTexture is preferable (so 3.2).
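For reference, a hedged sketch of the glCopyImageSubData route the question mentions (GL 4.3 / ARB_copy_image); scratchCubemap, originalCubemap, dstLevel and mipSize are placeholder names, not part of the original code:

    /* copy all six faces of the scratch cubemap's level 0 into
       mip level dstLevel of the original cubemap in one call */
    glCopyImageSubData(scratchCubemap, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
                       originalCubemap, GL_TEXTURE_CUBE_MAP, dstLevel, 0, 0, 0,
                       mipSize, mipSize, 6);        /* depth of 6 covers every face */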
I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in by GL_ARB_texture_multisample (in core since GL 3.2). Basically, this brings new multisample texture types and the corresponding samplers like sampler2DMS, where you can explicitly fetch from a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
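As a sketch of what option 1 involves on the application side, assuming 4x multisampling and the placeholder names msTex, msFbo, width and height:

    GLuint msTex, msFbo;
    glGenTextures(1, &msTex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                            width, height, GL_TRUE);    /* 4 samples per pixel */

    glGenFramebuffers(1, &msFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);
    /* the post-processing shader then declares a sampler2DMS and reads individual
       samples with texelFetch(tex, ivec2(gl_FragCoord.xy), sampleIndex) */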
Option 2 is a little different from what you describe. It is not the shader that does the multisample resolve. You can render into a multisample FBO (you don't need a texture for that, a renderbuffer will do as well) and do the resolve explicitly using glBlitFramebuffer, into another, non-multisampled FBO (this time, with a texture). This non-multisampled texture can then be used as input for the post-processing. Neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
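A minimal sketch of that explicit resolve, assuming msFbo is the multisampled FBO and resolveFbo has a plain 2D texture attached (both names are placeholders):

    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);       /* multisampled source */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);  /* single-sample destination */
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);  /* the resolve happens here */
    /* the texture attached to resolveFbo can now feed the post-processing pass */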
Situation
I am writing an image compositor, and I am using FBOs.
As GL cannot read a texture that it is currently writing to, I am currently using a pseudo "FBO flip chain" logic. I create a list of about 10 FBOs, and each time I render I move to the next FBO, which can happily read from one of the previous FBOs' textures.
However, this is deeply flawed: because of the way I am currently doing my compositing, an FBO can be overwritten while it is still needed in its current state. I tried a "locking/unlocking" logic, but it is flaky.
An image currently goes through the following stages:
Draw the image to a texture (using an FBO)
FBO flip
Composite all of the image's "children" onto the image
FBO flip for each child (this is the issue)
Convert the image to the destination type
FBO flip
Blend the image with its destination
FBO flip
Pass the texture along for compositing with siblings
I have decided to refactor this flip-chain compositing idea. The way I see it, it would be better to do one of the following:
Possible Solutions
Each image would contain multiple FBOs of its own, each with its own render target, switching to a designated FBO at each stage*
Each image would contain one FBO and multiple texture targets, changing the attachment for each stage*
Each image would have 2 render targets ("front", "back"); there would be one master FBO, and it would overwrite the "front" texture while reading from the "back" texture (thinking more like a back buffer and a front buffer). At the end of each stage*, swap them over.
*(each stage = where a flip is currently performed)
I am currently leaning towards option 2: each image would end up with 4 or 5 textures, one designated to each stage. And when the image is "drawn", it can test whether it can start its draw from a different stage.
e.g. the image has been drawn and is already converted to the correct type, so take the "post_convert" texture and just blend.
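A rough sketch of what option 2 could look like, with one FBO per image and one texture per stage; imageFbo, the stage textures (drawnTex, compositedTex) and the helper calls (drawImage, compositeChildren) are placeholder names, not anything from the question:

    glBindFramebuffer(GL_FRAMEBUFFER, imageFbo);

    /* draw stage: render into the "drawn" texture */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, drawnTex, 0);
    drawImage();

    /* composite stage: read drawnTex, write into compositedTex */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, compositedTex, 0);
    glBindTexture(GL_TEXTURE_2D, drawnTex);
    compositeChildren();

    /* ...repeat the same pattern for the convert and blend stages... */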
Issues:
I am unsure which is quicker: attaching and detaching render targets, or binding FBOs...
There are an awful lot of textures and FBOs being thrown around at the moment. And I am very wary of the fact that I am programming with GL in C#, so I need to nail down a system for managing the GL memory alongside C#'s GC.
At the office we're working with old GLX/Motif software that uses OpenGL's accumulation buffer to implement anti-aliasing when saving images.
Our problem is that Apple removed the accumulation buffer from all of its drivers (starting from OS X 10.7.5), and some Linux drivers, such as those for Intel HDxxxx, don't support it either.
I would therefore like to update the software's anti-aliasing code to make it compatible with most current OSs and GPUs, while keeping the generated images as beautiful as they were before (because we need them for scientific publications).
Supersampling seems to be the oldest and best-quality anti-aliasing method, but I can't find any example of SSAA that doesn't use the accumulation buffer. Is there a different way to implement supersampling with OpenGL/GLX?
You can use FBOs to implement the same kind of anti-aliasing that you most likely used with accumulation buffers. The process is almost the same, except that you use a texture/renderbuffer as your "accumulation buffer". You can either use two FBOs for the process, or change the attached render target of a single render FBO.
In pseudo-code, using two FBOs, the flow looks roughly like this:
create renderbuffer rbA
create fboA (will be used for accumulation)
bind fboA
attach rbA to fboA
clear
create texture texB
create fboB (will be used for rendering)
bind fboB
attach texB to fboB
(create and attach a renderbuffer for the depth buffer)
loop over jitter offsets
bind fboB
clear
render scene, with jitter offset applied
bind fboA
bind texB for texturing
set blend function GL_CONSTANT_ALPHA, GL_ONE
set blend color 0.0, 0.0, 0.0, 1.0 / #passes
enable blending
render screen size quad with simple texture sampling shader
disable blending
end loop
bind fboA as read_framebuffer
bind default framebuffer as draw framebuffer
blit framebuffer
Full super-sampling is also possible. As Andon suggested in the comment above, you create an FBO with a render target that is a multiple of your window size in each dimension, and at the end do a down-scaling blit to your window. The whole thing tends to be slow and uses a lot of memory, even with just a factor of 2.
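A minimal sketch of that full-supersampling approach, assuming a 2x factor and placeholder names (ssaaFbo has a color buffer of twice the window size in each dimension, windowWidth/windowHeight are the window dimensions):

    const int N = 2;                                       /* 2x2 supersampling */
    glBindFramebuffer(GL_FRAMEBUFFER, ssaaFbo);            /* color buffer is N*width x N*height */
    glViewport(0, 0, N * windowWidth, N * windowHeight);
    /* ... render the scene normally ... */

    glBindFramebuffer(GL_READ_FRAMEBUFFER, ssaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);             /* default (window) framebuffer */
    glBlitFramebuffer(0, 0, N * windowWidth, N * windowHeight,
                      0, 0, windowWidth, windowHeight,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);     /* filtered down-scale */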