OpenGL - Multisampled FBO methodology

I am implementing multisample antialiasing (MSAA) in my deferred rendering engine and I'm curious: can I simply use a multisampled texture for the albedo output? Or will the depth texture, normals texture, and even the lighting texture have to be multisampled as well?

Related

Premultiplied alpha and multisampling in OpenGL

I'm rendering some triangles into a multisampled texture. Then I blit this texture into a normal texture and draw a textured quad onto the screen.
I need the final texture to have premultiplied alpha. Is there a way to tell OpenGL to automatically premultiply semi-transparent pixels in the multisampled texture (or when blitting)?
Or is the only way an extra shader pass that performs the multiplication manually?
A multisample resolve blit can't perform pre-multiplication. You will have to pre-multiply the texture's pixels in the process that reads from the texture.
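Since the resolve blit can't premultiply, one option is to skip `glBlitFramebuffer` and resolve manually in a fragment shader, premultiplying each sample before averaging. A rough sketch (uniform names and the full-screen-quad setup are assumptions, not from the original threads):

```glsl
#version 150

uniform sampler2DMS u_msaa_tex;   // the multisampled color texture
uniform int u_sample_count;       // e.g. 4 for 4x MSAA

in vec2 v_uv;                     // UVs from a full-screen quad
out vec4 out_color;

void main()
{
    ivec2 texel = ivec2(v_uv * vec2(textureSize(u_msaa_tex)));
    vec4 sum = vec4(0.0);
    for (int i = 0; i < u_sample_count; ++i) {
        vec4 s = texelFetch(u_msaa_tex, texel, i);
        s.rgb *= s.a;             // premultiply each sample before averaging
        sum += s;
    }
    out_color = sum / float(u_sample_count);
}
```

Premultiplying per sample before averaging gives a correctly premultiplied resolve; premultiplying after a plain average would weight coverage incorrectly along edges.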

OpenGL offscreen rendering

I was trying to render a 2D scene into an off-screen framebuffer object and use the glFramebufferTexture2D function so the result could be used as a texture on a cube.
The 2D scene is rendered in one context and the texture is used in another one in the same thread.
The problem is that when I textured the cube, the alpha channel seemed incorrect. I used apitrace to check the texture; it has the correct alpha values, and the shader was merely out_color = texture(in_texture, uv_coords).
The problem was solved if I blitted the off-screen framebuffer's color attachment to anything, whether itself or framebuffer 0 (the output window).
I was wondering why this is happening and how to solve it without needing to blit the framebuffer.
It turned out that I was using single buffering for the static 2D scene, which requires a glFlush to flush the pipeline.
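With a single-buffered context there is no buffer swap to force queued commands through, so an explicit flush is needed before the other context samples the texture. A minimal sketch (FBO name and draw call are hypothetical):

```c
/* Render the static 2D scene into the off-screen FBO once. */
glBindFramebuffer(GL_FRAMEBUFFER, scene_fbo);
draw_scene_2d();                 /* hypothetical draw call */

/* No swap happens for a single-buffered off-screen pass, so
 * flush explicitly before the texture is consumed elsewhere. */
glFlush();

glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

This is also why the accidental blit "fixed" it: the blit implicitly forced the pending rendering to complete.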

Fragment shader for multisampled depth textures

Which operations are ideal in a fragment shader for multisampled depth textures? For RGBA textures, we could just take the average of the color values returned by texelFetch().
What would be the ideal shader code for a multisampled depth texture?
Multisample depth textures are a Shader Model 4.1 (DX 10.1) feature first and foremost (multisample color textures are DX 10.0). OpenGL does not make this clear, but not all GL3 class hardware will support them. That said, since multisample textures are a GL 3.2 feature, this issue is largely moot in the OpenGL world; something that might come up once in a blue moon.
In any event, there is no difference between a multisample depth texture and color texture assuming your hardware supports the former. Even if the depth texture is an integer format, when you sample it using texelFetch (...) on a sampler2DMS you get a single-precision 4-component floating-point vector of the form: vec4 (depth.r, depth.r, depth.r, 1.0).
You can average the texels together if you want for multisample depth resolve, but the difference in depth between all of the samples can also be useful for quickly finding edges in your rendered scene to implement things like bilateral filtering.
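Both uses can be sketched in one fragment shader: average the per-sample depths for a resolve, and use the spread between samples as a cheap edge indicator (4x MSAA and the uniform/varying names are assumptions here):

```glsl
#version 150

uniform sampler2DMS u_depth_tex;  // multisampled depth texture

in vec2 v_uv;
out vec4 out_color;

void main()
{
    ivec2 texel = ivec2(v_uv * vec2(textureSize(u_depth_tex)));

    // Fetch the per-sample depths (4x MSAA assumed).
    float d0 = texelFetch(u_depth_tex, texel, 0).r;
    float d1 = texelFetch(u_depth_tex, texel, 1).r;
    float d2 = texelFetch(u_depth_tex, texel, 2).r;
    float d3 = texelFetch(u_depth_tex, texel, 3).r;

    // Simple resolve: average of the samples.
    float avg = (d0 + d1 + d2 + d3) * 0.25;

    // Edge hint: spread between min and max sample depth.
    float edge = max(max(d0, max(d1, d2)), d3) -
                 min(min(d0, min(d1, d2)), d3);

    out_color = vec4(avg, edge, 0.0, 1.0);
}
```

Interior pixels have near-identical sample depths (edge ≈ 0), while geometric silhouettes produce a large spread, which is what makes this useful for bilateral filtering.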

How to resolve a multisampled depth buffer to a depth texture?

I'm having trouble resolving the MSAA depth buffer to a depth texture. I create textures with glTexImage2D for both my color texture and my depth texture and bind them to the "resolve" (non-MSAA) framebuffer. Then I create and bind another framebuffer and create my multisampled renderbuffers (both color and depth). I know that to get my color information out of the multisampled renderbuffer I just blit with the write buffer being my color texture framebuffer and the read buffer being my MSAA framebuffer. My question is: how do I blit my multisampled depth buffer to my depth texture?
You would do this the same way you would for the color buffer, only using GL_DEPTH_BUFFER_BIT, and make sure you use GL_NEAREST for the filter mode. If you want linear filtering on a depth/stencil buffer, for whatever reason, you can actually do this in GLSL if you use a depth texture (though this still remains invalid for MSAA textures) - but it is invalid to do it using glBlitFramebuffer (...).
You do not have to do any of the glReadBuffer (...) nonsense like you do when you want to select an individual color buffer, glBlitFramebuffer (...) will copy the depth buffer from the bound GL_READ_FRAMEBUFFER to the bound GL_DRAW_FRAMEBUFFER.
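Putting the answer together, the depth resolve is a single blit; the framebuffer names and dimensions below are placeholders for whatever the question's setup uses:

```c
/* msaa_fbo (read) and resolve_fbo (draw) are hypothetical names;
 * both must have depth attachments of the same width x height. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo);

/* Depth blits must use GL_NEAREST; GL_LINEAR is invalid here. */
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
```

No glReadBuffer / glDrawBuffer selection is needed, since each framebuffer has exactly one depth attachment.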

Renderbuffer or texture object after fragment shader?

I am working on OpenGL ES 2.0 and GLSL and I have a question about FBOs.
I pass two textures to my OpenGL ES 2.0 code and, in the fragment shader, I subtract the two textures and produce a binary image, much like OpenCV's threshold function. My question is that I am not sure whether I should use a renderbuffer or a texture object for my FBO. I have to choose one, since I can only use one color attachment (a restriction of OpenGL ES 2.0). Since the output image after my fragment shader will be a binary image (black or white), shouldn't it be a renderbuffer object?
A texture is a series of images which can be read from (via normal texturing means) and rendered into via FBOs. A renderbuffer is an image that can only be rendered into.
You should use a renderbuffer for images that you will only use as a render target. If you need to sample from it later, you should use a texture.
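The distinction shows up directly in how each attachment is created. A sketch of both options for an ES 2.0 color attachment (GL_RGBA4 is just one example of a color-renderable format; sizes are placeholders):

```c
/* Option 1: texture attachment - can be sampled later in a shader. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* Option 2: renderbuffer attachment - render target only,
 * cannot be sampled in a shader afterwards. */
GLuint rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);
```

For the thresholding use case, if the binary image is consumed by a later shader pass (or read back as a texture), option 1 is the right choice; a renderbuffer only makes sense if the result is displayed or read with glReadPixels and never sampled.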