Fragment shader for multisampled depth textures - opengl

Which operations are ideal in a fragment shader for multisampled depth textures? For RGBA textures, we can simply average the color values returned by texelFetch().
What would the ideal shader code for a multisampled depth texture look like?

Multisample depth textures are a Shader Model 4.1 (DX 10.1) feature first and foremost (multisample color textures are DX 10.0). OpenGL does not make this clear, but not all GL3 class hardware will support them. That said, since multisample textures are a GL 3.2 feature, this issue is largely moot in the OpenGL world; something that might come up once in a blue moon.
In any event, there is no difference between a multisample depth texture and color texture assuming your hardware supports the former. Even if the depth texture is an integer format, when you sample it using texelFetch (...) on a sampler2DMS you get a single-precision 4-component floating-point vector of the form: vec4 (depth.r, depth.r, depth.r, 1.0).
You can average the texels together if you want for multisample depth resolve, but the difference in depth between all of the samples can also be useful for quickly finding edges in your rendered scene to implement things like bilateral filtering.
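A minimal GLSL sketch along those lines might look like the following; the uniform name and the fixed 4x sample count are assumptions for illustration, and the result is simply packed into the color output so both the resolved depth and the edge metric are visible:

#version 150
#extension GL_ARB_texture_multisample : enable

uniform sampler2DMS uDepthMS;   // multisampled depth texture (example name)
out vec4 fragColor;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);

    float sum  = 0.0;
    float dMin = 1.0;
    float dMax = 0.0;

    // Assuming a 4x multisampled depth attachment
    for (int i = 0; i < 4; ++i) {
        float d = texelFetch(uDepthMS, texel, i).r;
        sum  += d;
        dMin  = min(dMin, d);
        dMax  = max(dMax, d);
    }

    float avgDepth = sum / 4.0;    // simple resolve: average of the samples
    float edge     = dMax - dMin;  // large spread => likely a geometric edge

    fragColor = vec4(avgDepth, edge, 0.0, 1.0);
}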

Related

OpenGL - Multi Sampled FBO methodology

I am implementing multisampled anti-aliasing in my deferred rendering engine and I'm curious: can I simply use a multisampled texture for the albedo output, or will the depth texture, normals texture, and even the lighting texture have to be multisampled as well?

Multisampling in pipeline

In multisampling, during rasterization there is more than one sample point in each pixel, and it is determined which of those sample points are covered by the primitive.
Which attributes are the same for each sample in a pixel? I read somewhere that the color and texture values are the same, but the depth and stencil values differ between the samples in a pixel. But if the fragment shader is executed for each sample point, then they should all be different.
Also, when are the multiple samples resolved in the pipeline, after the fragment shader? And are they resolved by a linear average?
First you have to realize how multisampling works and why it was created.
I am going to approach this from the perspective of anti-aliasing, because that was the primary use-case for multisampling up until multisample textures were introduced in GL3.
When something is multisampled, this means that each pixel contains multiple samples. These samples may be identical to one another if a primitive has relatively uniform characteristics (e.g. it has the same depth everywhere), and smart GL/hardware implementations are capable of identifying such situations and reducing memory bandwidth by reading/writing shared samples intelligently (similar to color/depth buffer compression). However, the cost in terms of required storage for a 4x MSAA framebuffer is the same as for 4x SSAA, because GL has to accommodate the worst-case scenario, where each of the 4 samples is unique.
Which attributes are the same for each sample in a pixel?
Each fragment may cover multiple sample points. Rather than invoking the fragment shader 4x as frequently to achieve 4x anti-aliasing, a trick was devised: each attribute is sampled at the fragment center (this is the default) and a single output is written to each of the covered sample locations. The default behavior falls short when the center of a fragment is not part of the actual coverage area; for this, centroid sampling was introduced, where vertex attributes are interpolated at the center of the primitive's coverage area within the fragment rather than at the center of the fragment itself.
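In GLSL this is just an interpolation qualifier on the varying; the variable name below is made up for illustration:

// Vertex shader: mark the output for centroid interpolation
centroid out vec2 vTexCoord;

// Fragment shader: matching input, interpolated at the centroid of the
// covered samples instead of the pixel center, so the value never lies
// outside the primitive along its edges.
centroid in vec2 vTexCoord;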
Later, when it comes time to write a color to the framebuffer, all of these samples need to be averaged to produce a single pixel; we call this multisample resolve. This works well for some things, but it does not address issues of aliasing that occurs during fragment shading itself.
Texturing occurs during the execution of a fragment shader, and this means that the sample frequency for texturing remains the same, so MSAA generally does not help with texture aliasing. Thus, while supersample anti-aliasing improves both aliasing at geometric edges and texture / shader aliasing (things like specular highlights), multisampling generally only reduces "jaggies."
I read somewhere that the color and texture values are the same, but the depth and stencil values for the samples in a pixel are different.
In short, anything that is computed in the fragment shader will be the same for all covered samples. Anything that can be determined before fragment shading (e.g. depth) may vary.
Fragment tests such as depth/stencil are evaluated for each sub-sample. But multisampled depth buffers carry some restrictions. Up until D3D 10.1, hardware was not required to support multisampled depth textures so you could not sample multisampled depth buffers in a fragment shader.
But as fragment shader is executed for each sample points then they should be different.
There is a feature called sample shading, which can force an implementation of MSAA to work more like SSAA by improving the ratio between fragments shaded and samples generated during rasterization. But by default, the behavior you are describing is not multisampling.
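On the API side, enabling sample shading is a short sketch like the one below; the 0.5 minimum fraction is an arbitrary example value, and in GLSL 4.0+ the sample qualifier on a fragment input has a similar per-sample effect:

/* OpenGL 4.0 / ARB_sample_shading: request that at least half of the
   samples in each covered pixel get their own fragment shader
   invocation (a value of 1.0 makes MSAA behave essentially like SSAA). */
glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(0.5f);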
When are the multiple samples resolved in the pipeline, after the fragment shader? And are they resolved by a linear average?
Multisample resolution occurs after fragment shading, any time you have to write the contents of a multisampled buffer into a single-sampled buffer. This includes things like glBlitFramebuffer (...). You can also implement multisample resolution manually in the fragment shader, if you use multisampled textures.
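For instance, a resolve through glBlitFramebuffer is just a blit from the multisampled FBO into a single-sampled one; the FBO handles and dimensions below are placeholders:

/* Resolve: blit the multisampled FBO into a single-sampled FBO
   (or into the default framebuffer, 0). Handles and sizes are examples. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);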
Finally, regarding the process used for multisample resolution, that is implementation-specific as is the sample layout. If you ever open your display driver's control panel and look at the myriad of anti-aliasing options available you will see multiple options for sample layout and MSAA resolve algorithm.
I would highly suggest you take a look at this article. While it is related to D3D10+ and not OpenGL, the general rules apply to both APIs (D3D9 has slightly different rules) and the quality of the diagrams is phenomenal.
In particular, pay special attention to the section on MSAA rasterization rules for triangles, which states:
For a triangle, a coverage test is performed for each sample location (not for a pixel center). If more than one sample location is covered, a pixel shader runs once with attributes interpolated at the pixel center. The result is stored (replicated) for each covered sample location in the pixel that passes the depth/stencil test.

FBO depth texture in OpenGL 2.1+

I'm currently learning to create shadows using GLSL, but I have some trouble here:
1. In GLSL 3.3, we can use this statement in the fragment shader:
layout(location = 0) out float fragmentdepth;
to write only 16-bit depth out to a texture (set up as GL_DEPTH_COMPONENT16 beforehand), but how can I do something like that in OpenGL 2.1 (GLSL 1.20)?
As far as I know, for rendering the depth buffer, we only need to change the camera position to the light position and the camera direction to the light direction, and change them back when drawing the real scene. Is that right?
In GLSL 3.3, we can use this statement in the fragment shader:
to write only 16-bit depth out to a texture (set up as GL_DEPTH_COMPONENT16 beforehand)
No, you can't.
That will set fragmentdepth to write to a color buffer. And you cannot attach an image with the GL_DEPTH_COMPONENT16 image format to a GL_COLOR_ATTACHMENTi attachment point of an FBO. Attempting to do so will give you a GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT error.
In all versions of GLSL, the only way to write to the attached depth buffer is to write to gl_FragDepth. And you can only have one depth buffer attached to an FBO. Image Load/Store does allow you to work around that, but you lose depth testing and such.
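As a minimal sketch, a depth-only pass (for example a shadow map) needs nothing more than this; writing gl_FragDepth explicitly is shown only for illustration, since the fixed-function depth would be used anyway if it were omitted:

#version 330 core

// Depth-only pass: no color outputs are declared; the depth attachment
// receives gl_FragDepth (or gl_FragCoord.z if gl_FragDepth is not written).
void main()
{
    gl_FragDepth = gl_FragCoord.z;
}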
As stated here, you need to use the FBO extension (EXT_framebuffer_object) if targeting OpenGL 2.1. Also, why on earth are you still using the fixed pipeline? It is deprecated. If your hardware allows it, use OpenGL 3.3+ and leverage (core) FBOs with texture attachments (or renderbuffers) to which you can draw depth buffer data. Yes, you can still do the same with the deprecated profile, but modern OpenGL has made a huge step forward since then.
Anyway, here is what you need for your version.
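A minimal depth-only FBO created with the EXT entry points might look roughly like this; the handle names and the 1024x1024 size are placeholders:

/* Depth-only FBO via EXT_framebuffer_object (OpenGL 2.1).
   Handles and dimensions are example values. */
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

/* No color buffer is attached, so disable color writes and reads. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);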

Why does OpenGL lighten my scene when multisampling with an FBO?

I just switched my OpenGL drawing code from drawing to the display directly to using an off-screen FBO with render buffers attached. The off-screen FBO is blitted to the screen correctly when I allocate normal render buffer storage.
However, when I enable multisampling on the render buffers (via glRenderbufferStorageMultisample), every color in the scene seems like it has been brightened (thus giving different colors than the non-multisampled part).
I suspect there's some glEnable option that I need to set to maintain the same colors, but I can't seem to find any mention of this problem elsewhere.
Any ideas?
I stumbled upon the same problem; in my case it was caused by improper downsampling due to mismatching sample locations. What worked for me was either of the following:
1. A separate "single sample" FBO with identical attachments, format and dimensions (with a texture or renderbuffer attached) to blit into for downsampling, and then drawing/blitting that to the window buffer.
2. Rendering into a multisampled window buffer with a multisampled texture of the same sample count as input, passing all corresponding samples per fragment using a GLSL fragment shader. This worked with sample shading enabled and is the overkill approach for deferred shading, as you can calculate lighting, shadows, AO, etc. per sample.
I also did rather sloppy manual downsampling to single-sample framebuffers using GLSL, where I had to fetch each sample separately using texelFetch().
Things got really slow with multisampling. Although CSAA performed better than MSAA, I recommend taking a look at FXAA shaders for post-processing as a viable alternative when performance is an issue, or when the required, rather new extensions such as ARB_texture_multisample are not available.
Accessing samples in GLSL:
vec4 texelDownsampleAvg(sampler2DMS sampler, ivec2 texelCoord, const int sampleCount)
{
    // Manual MSAA resolve: fetch every sample of the texel and average them
    vec4 accum = texelFetch(sampler, texelCoord, 0);
    for (int i = 1; i < sampleCount; ++i) {
        accum += texelFetch(sampler, texelCoord, i);
    }
    return accum / sampleCount;
}
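Called from a full-screen pass it can be used like this; the uniform name and the fixed 4x sample count are assumptions:

uniform sampler2DMS uColorMS;   // multisampled color attachment (example name)
out vec4 fragColor;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);
    fragColor = texelDownsampleAvg(uColorMS, texel, 4);
}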
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_multisample.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_blit.txt
11) Should blits be allowed between buffers of different bit sizes?
Resolved: Yes, for color buffers only. Attempting to blit
between depth or stencil buffers of different size generates
INVALID_OPERATION.
13) How should BlitFramebuffer color space conversion be
specified? Do we allow context clamp state to affect the
blit?
Resolved: Blitting to a fixed point buffer always clamps,
blitting to a floating point buffer never clamps. The context
state is ignored.
http://www.opengl.org/registry/specs/ARB/sample_shading.txt
Blitting multisampled FBO with multiple color attachments in OpenGL
The solution that worked for me was changing the renderbuffer color format. I had originally picked GL_RGBA32F and GL_DEPTH_COMPONENT32F (figuring that I wanted the highest precision), and the NVIDIA drivers interpret those differently (I suspect sRGB compensation, but I could be wrong).
The renderbuffer image formats I found to work are GL_RGBA8 with GL_DEPTH_COMPONENT24.
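For reference, allocating the multisampled renderbuffers with those formats looks roughly like this; the 4x sample count, handles, and dimensions are placeholders:

/* Multisampled color + depth renderbuffers using the formats that
   resolved correctly; 4 samples and the sizes are examples. */
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8,
                                 width, height);

glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24,
                                 width, height);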

OpenGL per pixel lighting in fixed function pipeline

Is it possible to enable per-pixel lighting (so that I can have nice specular highlights on low tessellated surfaces) in the OpenGL fixed function pipeline?
The only way to do this is with precomputed cube maps. The fixed-function pipeline interpolates colors and texture coordinates across polygons; the interpolated color is useless for this, but the texturing can be used.
It won't be position-dependent, but you can precalculate cube maps for different areas and blend between them using additive blending, drawing the geometry twice with the two cube maps you're LERPing between.
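A rough sketch of the fixed-function side for a single precomputed specular cube map (blending between area cube maps as described above would add another such pass); specCubeMap and drawScene() are assumed to exist elsewhere:

/* Second, additive pass: apply a precomputed specular cube map using
   fixed-function reflection-vector texgen. */
glEnable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_CUBE_MAP, specCubeMap);

/* Generate texture coordinates from the reflection vector. */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

/* Add the cube map contribution on top of the already-drawn scene. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
drawScene();   /* hypothetical draw call for the same geometry */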