MSAA resolve into texture attachment of the same FBO - opengl

I have been using several custom FBOs. FBO-A has an MSAA texture attached, into which the geometry is rendered. It is then resolved by blitting the MSAA texture attachment of FBO-A into a regular 2D texture attachment of FBO-B. This procedure implies switching between multiple FBOs, and several sources state that it is better for performance to switch between attachments than between different FBOs. I tried attaching both the MSAA texture and the regular one to the same FBO, but found I can't do the resolve by blitting. If I do a texture copy from the MSAA texture to the regular one, will the MSAA be resolved as with blitting?
UPDATE:
Just for those interested in whether it's worth it (performance-wise) to use several FBOs vs. several attachments in a single FBO:
I just did a test (NVIDIA Quadro 4000) and the results were practically identical FPS (±15-20 frames). It is probably hardware- and OpenGL-implementation-dependent, though.

I tried to set both the MSAA texture and the regular one attached to the same FBO. But I found I can't do the resolve by blitting.
Of course not. In order to do a blit, the source and destination framebuffers must be complete. And one of the rules of completeness states that all of the attached images must have the same number of samples.
If I do a texture copy from the MSAA texture to the regular one, will the MSAA be resolved as with blitting?
What do you mean by a "texture copy?"
If you're talking about using the new 4.3/ARB_copy_image glCopyImageSubData, then no. Again, the sample counts of the source and destination images must match.
If you're talking about copying from framebuffers to textures using glCopyTexSubImage2D and the like, then yes, that will perform a multisample resolve.
However, you really should just do the blit.
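For reference, the recommended resolve blit looks roughly like this (a minimal sketch; `msaaFBO`, `resolveFBO`, `width`, and `height` are hypothetical names for the two complete, same-sized framebuffers created elsewhere):

```c
/* Resolve the multisampled color buffer of msaaFBO into resolveFBO. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, width, height,  /* source rectangle      */
                  0, 0, width, height,  /* destination rectangle */
                  GL_COLOR_BUFFER_BIT,  /* resolve color samples */
                  GL_NEAREST);          /* filter is ignored for a resolve */
```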

Related

Can linear filtering be used for an FBO blit of an MSAA texture to non-MSAA texture?

I have two 2D textures. The first, an MSAA texture, uses a target of GL_TEXTURE_2D_MULTISAMPLE. The second, non MSAA texture, uses a target of GL_TEXTURE_2D.
According to OpenGL's spec on ARB_texture_multisample, only GL_NEAREST is a valid filtering option when the MSAA texture is being drawn to.
In this case, both of these textures are attached to GL_COLOR_ATTACHMENT0 via their individual Framebuffer objects. Their resolutions are also the same size (to my knowledge this is necessary when blitting an MSAA to non-MSAA).
So, given the current constraints, if I blit the MSAA holding FBO to the non-MSAA holding FBO, do I still need to use GL_NEAREST as the filtering option, or is GL_LINEAR valid, since both textures have already been rendered to?
The filtering options only come into play when you sample from the textures. They play no role while you render to the texture.
When sampling from multisample textures, GL_NEAREST is indeed the only supported filter option. You also need to use a special sampler type (sampler2DMS) in the GLSL code, with corresponding sampling instructions.
I actually can't find anything in the spec saying that setting the filter to GL_LINEAR for multisample textures is an error. But the filter is not used at all. From the OpenGL 4.5 spec (emphasis added):
When a multisample texture is accessed in a shader, the access takes one vector of integers describing which texel to fetch and an integer corresponding to the sample numbers described in section 14.3.1 determining which sample within the texel to fetch. No standard sampling instructions are allowed on the multisample texture targets, and no filtering is performed by the fetch.
For blitting between multisample and non-multisample textures with glBlitFramebuffer(), the filter argument can be either GL_LINEAR or GL_NEAREST, but it is ignored in this case. From the 4.5 spec:
If the read framebuffer is multisampled (its effective value of SAMPLE_BUFFERS is one) and the draw framebuffer is not (its value of SAMPLE_BUFFERS is zero), the samples corresponding to each pixel location in the source are converted to a single sample before being written to the destination. filter is ignored.
This makes sense because there is a restriction in this case that the source and destination rectangle need to be the same size:
An INVALID_OPERATION error is generated if either the read or draw framebuffer is multisampled, and the dimensions of the source and destination rectangles provided to BlitFramebuffer are not identical.
Since the filter is only applied when the image is stretched, it does not matter in this case.

OpenGL: Post-Processing + Multisampling =?

I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in with GL_ARB_texture_multisample (in core since GL 3.2). Basically, this brings new multisample texture types and corresponding samplers like sampler2DMS, where you can explicitly fetch from a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
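A minimal fragment-shader sketch of option 1 (assuming a 4x multisample texture is bound to the hypothetical `uScene` sampler; `texelFetch` takes an integer texel coordinate and a sample index, and no filtering is performed):

```glsl
#version 150
uniform sampler2DMS uScene;
out vec4 fragColor;

void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 4; ++i)          // 4 = assumed sample count
        sum += texelFetch(uScene, coord, i);
    fragColor = sum / 4.0;               // simple box-filter resolve
}
```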
Option 2 is a little different from what you describe. The shader does not do the multisample resolve. You render into a multisample FBO (you don't need a texture for that; a renderbuffer will do as well) and do the resolve explicitly using glBlitFramebuffer, into another, non-multisampled FBO (this time with a texture). This non-multisampled texture can then be used as input for the post-processing. Neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
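The whole option-2 pipeline can be sketched like this (hypothetical names; error checking and the depth buffer are omitted, and 4x samples are assumed):

```c
/* 1) Multisample FBO backed by a renderbuffer. */
GLuint msFBO, msColorRB;
glGenRenderbuffers(1, &msColorRB);
glBindRenderbuffer(GL_RENDERBUFFER, msColorRB);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glGenFramebuffers(1, &msFBO);
glBindFramebuffer(GL_FRAMEBUFFER, msFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msColorRB);

/* 2) Non-multisampled FBO with a texture for the post-process input. */
GLuint resolveFBO, sceneTex;
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &resolveFBO);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, sceneTex, 0);

/* 3) Render the scene into msFBO, then resolve: */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
/* sceneTex now holds the resolved image for the post-processing pass. */
```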

Multiple render targets in one FBO with different size textures?

Can I have textures of different sizes attached to a single FBO, and then use those for multiple render targets? Will I need to do anything special with glViewport to make this happen? Suppose I have a 1024x1024 texture for COLOR_ATTACHMENT0 and a 512x512 texture for COLOR_ATTACHMENT1, and I call glDrawBuffers(2, {COLOR_ATTACHMENT0, COLOR_ATTACHMENT1}) (I realize that syntax is incorrect, but you get the idea...), will it render the full scene in both attachments? I'm chiefly thinking the utility of this would be the ability to render a scene at full quality and a down-sampled version at one go, perhaps with certain masks or whatever so it could be used in an effects compositor/post-processing. Many thanks!
Since GL 3.0 you can actually attach textures of different sizes, but be aware that the rendered area will be that of the smallest texture. Read here:
http://www.opengl.org/wiki/Framebuffer_Object
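A rough sketch of such a mixed-size setup (assuming `tex1024` and `tex512` are already-allocated 1024x1024 and 512x512 GL_TEXTURE_2D color textures):

```c
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1024, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, tex512, 0);

const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
/* Rendering is only guaranteed over the intersection of the attachments,
 * i.e. the 512x512 area of the smaller one. */
```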

What is an OpenGL RBO?

I just read the following presentation which seemed to recommend RBOs over PBOs for GPU->CPU transfers. I have been looking for some source explaining RBOs, without success.
Anyone know of a good source explaining RBOs?
From the OpenGL wiki:
Renderbuffer Objects are OpenGL Objects that contain images. They are created and used specifically with Framebuffer Objects. They are optimized for being used as render targets, while Textures may not be.
more information here
An example on gamedev.net here (have a look at "Adding a Depth Buffer" section)
EDIT
When you render to a framebuffer you can choose between two types of framebuffer-attachable images: texture images and renderbuffer images. In the former case you render to a texture; in the latter you obtain a purely offscreen rendering.
Here is a discussion on the difference between these two kinds of framebuffer-attachable images.
Here you can find more information about FBO and attachable images.
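For illustration, creating an RBO and attaching it as the depth buffer of an FBO looks like this (a sketch; `fbo`, `width`, and `height` are assumed to exist, and the FBO still needs a color attachment):

```c
GLuint depthRBO;
glGenRenderbuffers(1, &depthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRBO);
```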

OpenGL FBO renderbuffer or texture attachment

In what cases would I want to have a renderbuffer attachment in an OpenGL FBO instead of a texture attachment, besides for the default framebuffer? A texture attachment seems far more versatile.
Textures provide more features (sampling!, a wider variety of formats) and hence are more likely subject to performance loss.
The answer is simple: use textures wherever you have to sample from the surface (there is no alternative).
Use renderbuffers wherever you don't need to sample. The driver may or may not decide to store your pixel data more efficiently based on your expressed intention of not sampling.
You can use GL blitting afterwards to do something with the result.
Extending the question to OpenGL ES, another reason to use renderbuffers instead of textures is that some texture formats may not be supported in a given configuration and would prevent you from building a valid FBO. I am specifically thinking of depth textures, which are not natively supported on some hardware, for instance NVIDIA Tegra 2/3.