Why does OpenGL lighten my scene when multisampling with an FBO?

I just switched my OpenGL drawing code from drawing directly to the display to using an off-screen FBO with render buffers attached. The off-screen FBO is blitted to the screen correctly when I allocate normal render buffer storage.
However, when I enable multisampling on the render buffers (via glRenderbufferStorageMultisample), every color in the scene appears brightened (giving different colors than the non-multisampled output).
I suspect there's some glEnable option that I need to set to maintain the same colors, but I can't seem to find any mention of this problem elsewhere.
Any ideas?

I stumbled upon the same problem; the downsampling was not done properly because of mismatching sample locations. What worked for me was:
- A separate "single sample" FBO with identical attachments, format and dimensions (with a texture or renderbuffer attached) to blit into for downsampling, then drawing/blitting that to the window buffer (see the sketch after this list).
- Rendering into a multisample window buffer from a multisample texture with the same sample count as the input, passing each corresponding sample per fragment using a GLSL fragment shader. This worked with sample shading enabled and is the overkill approach for deferred shading, since you can calculate lighting, shadows, AO, etc. per sample.
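For the first approach, a minimal sketch of the downsampling blit (fboMultisample, fboSingle, width and height are placeholder names; both FBOs are assumed complete, with identical formats and dimensions):
// Resolve: blit the multisampled FBO into a single-sampled FBO of identical
// format and size. For a multisample resolve, source and destination
// rectangles must match exactly.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboMultisample);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboSingle);
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// Then blit (or draw) the single-sampled result to the window buffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboSingle);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);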
I also did rather sloppy manual downsampling to single-sample framebuffers using GLSL, where I had to fetch each sample separately using texelFetch().
Things got really slow with multisampling. Although CSAA performed better than MSAA, I recommend taking a look at FXAA shaders for post-processing as a considerable alternative when performance is an issue, or when the rather new extensions required, such as ARB_texture_multisample, are not available.
Accessing samples in GLSL:
vec4 texelDownsampleAvg(sampler2DMS tex, ivec2 texelCoord, const int sampleCount)
{
    // Average all samples of one texel. The loop variable avoids the name
    // "sample", which is a reserved keyword in GLSL 4.00+.
    vec4 accum = texelFetch(tex, texelCoord, 0);
    for (int i = 1; i < sampleCount; ++i) {
        accum += texelFetch(tex, texelCoord, i);
    }
    return accum / float(sampleCount);
}
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_multisample.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_blit.txt
11) Should blits be allowed between buffers of different bit sizes?
Resolved: Yes, for color buffers only. Attempting to blit between depth or stencil buffers of different size generates INVALID_OPERATION.
13) How should BlitFramebuffer color space conversion be specified? Do we allow context clamp state to affect the blit?
Resolved: Blitting to a fixed point buffer always clamps, blitting to a floating point buffer never clamps. The context state is ignored.
http://www.opengl.org/registry/specs/ARB/sample_shading.txt
Blitting multisampled FBO with multiple color attachments in OpenGL

The solution that worked for me was changing the renderbuffer color format. I had picked GL_RGBA32F and GL_DEPTH_COMPONENT32F (figuring that I wanted the highest precision), and the NVIDIA drivers interpret those formats differently (I suspect sRGB compensation, but I could be wrong).
The renderbuffer image formats I found to work are GL_RGBA8 with GL_DEPTH_COMPONENT24.
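For reference, a minimal sketch of allocating renderbuffers with those working formats (colorRb, depthRb, fbo, samples, width and height are placeholder names; error and completeness checks omitted):
// Multisampled color renderbuffer with the format that resolved correctly.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

// Matching multisampled depth renderbuffer (same sample count).
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

// Attach both to the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);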

Related

how to retrieve z depth and color of a rendered pixel

I would like to retrieve the z depth of each pixel of a rendered object in a scene.
I will need to retrieve the rendered color, too.
What are the OpenGL techniques to implement this?
glReadPixels and CPU side code
Use glReadPixels to obtain both the RGB and depth buffers. Here are examples for both:
depth buffer got by glReadPixels is always 1
OpenGL Scale Single Pixel Line
That will read the buffers into CPU accessible memory. This way is slow (due to sync) but should work on any platform.
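A minimal C++ sketch of that route (assumes a current GL context whose read framebuffer matches width x height, and #include <vector>):
// Destination buffers on the CPU side.
std::vector<unsigned char> rgba(width * height * 4); // 4 bytes per pixel
std::vector<float> depth(width * height);            // one float per pixel

// Read color and depth from the currently bound read framebuffer.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

// Note: the depth values are non-linear window-space z in [0,1];
// linearize with the near/far planes if you need eye-space distance.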
FBO render to texture and GPU shader
A faster method is to use an FBO to render to a texture, then use that output in the next rendering pass as an input texture for computing your stuff inside shaders. This, however, will not run properly on Intel and might need additional tweaking of the code between nVidia and AMD.
If you have per-pixel output, use a single QUAD covering your screen as the second rendering pass.
If you have a single output for the whole screen instead, use a single POINT render and compute everything in the fragment shader (scanning the whole texture inside), something like this:
How to implement 2D raycasting light effect in GLSL
The difference is that by using shaders and an FBO you are not transferring data between GPU/CPU, so it's way faster.
The content of the target textures can still be read by the CPU using texture-related GL functions.
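A rough sketch of that two-pass setup (fbo, tex, sceneW, sceneH and postProcessProgram are placeholder names; completeness checks omitted):
// Pass 1: render the scene into a texture via an FBO.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sceneW, sceneH, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// ... render the scene ...

// Pass 2: back to the default framebuffer, use the texture as shader input.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex);
glUseProgram(postProcessProgram); // samples the texture, does the per-pixel work
// ... draw a fullscreen QUAD (or a single POINT for whole-screen reductions) ...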
compute GPU shaders
There are also compute shaders out there, but I have not used them yet, so I am just guessing; with them it might be possible to do your stuff in a single pass, and the form of the result and computation should not be as limiting.
My bet is that you are doing some post-processing similar to deferred shading, so googling such topics/tutorials might help.

GLSL : accessing framebuffer to get RGB and change it

I'd like to access the framebuffer to get the RGB values and change them for each pixel. This is because glReadPixels and glDrawPixels are too slow to use, so I should use shaders instead.
Now I have written code and succeeded in displaying a three-dimensional model using GLSL shaders.
I drew two cubes as follows.
....
glDrawArrays(GL_TRIANGLES, 0, 12*6);
....
and the fragment shader:
varying vec3 fragmentColor;
void main()
{
    gl_FragColor = vec4(fragmentColor, 1);
}
Then, how can I access the RGB values and change them?
For example, if the pixel values at (u1, v1) and (u2, v2) on the window are (0,0,255), then I want to change them to (255,0,0).
With the exception of an OpenGL ES-only extension, fragment shaders cannot just read from the current framebuffer. Otherwise, we wouldn't need blending.
You also can't just render to the image you're reading from in a shader. So if you need to do some sort of post-processing, then that is best done by rendering to a separate image. That is, you do your rendering to image 1, then bind that as a texture and change the FBO so that you're rendering to image 2.
Alternatively, if you have access to OpenGL 4.5 or ARB/NV_texture_barrier, then you can use texture barriers to handle this. This permits a single read/modify/write pass, if you bind the current framebuffer's image as a texture. You'd issue the barrier before doing your read/modify/write, then bind that texture to a sampler while still rendering to that framebuffer.
Also, this requires that the FS read from the exact texel that it would write to. Assuming a viewport anchored at 0,0, the code for this would be texelFetch(sampler, ivec2(gl_FragCoord.xy), 0). You can't read from someone else's texel and modify it.
Obviously you must be rendering to a texture; you cannot use the default framebuffer for this.
A texture barrier could also be used for cases where you read from different texels than you write to. But that would require doing something similar to the first case of switching bound images. Though you wouldn't need to change the FBO exactly; you could change the region of the FBO that you render to. As long as you're reading from a different area than you're rendering to, and you use barriers appropriately when switching between those regions, everything is fine.
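A sketch of the barrier variant, assuming GL 4.5 (or ARB_texture_barrier) and hypothetical names (colorTex is the texture attached to the currently bound FBO, readModifyWriteProgram the shader doing the in-place pass):
// C++ side: make previous framebuffer writes to colorTex visible to texture reads.
glTextureBarrier();
glBindTexture(GL_TEXTURE_2D, colorTex);   // bind the render target as a sampler too
glUseProgram(readModifyWriteProgram);
// ... draw; each fragment must read only its own texel ...

// Fragment shader side (GLSL), reading the exact texel being written:
//   uniform sampler2D framebufferTex;
//   vec4 current = texelFetch(framebufferTex, ivec2(gl_FragCoord.xy), 0);
//   gl_FragColor = /* modified `current` */;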

OpenGL: Post-Processing + Multisampling =?

I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in with GL_ARB_texture_multisample (in core since GL 3.2). Basically, this adds new multisample texture types, and the corresponding samplers like sampler2DMS, where you can explicitly fetch from a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
Option 2 is a little different from what you describe. The shader will not do the multisample resolve. You render into a multisample FBO (you don't need a texture for that; a renderbuffer will do as well) and do the resolve explicitly using glBlitFramebuffer, into another, non-multisampled FBO (this time with a texture attached). This non-multisampled texture can then be used as input for the post-processing, and neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
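In code, that resolve step might look like this (fboMS, fboResolve, resolveTex, postProcessProgram, width and height are placeholder names):
// 1. Render the scene into the multisampled FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fboMS);
// ... draw scene ...

// 2. Resolve: blit multisampled -> single-sampled (identical rectangles).
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboMS);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboResolve);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// 3. Post-process: sample the resolved texture while rendering to the back buffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, resolveTex);
glUseProgram(postProcessProgram);
// ... draw a fullscreen quad ...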

OpenGL SuperSampling Anti-Aliasing?

At the office we're working with old GLX/Motif software that uses OpenGL's accumulation buffer to implement anti-aliasing when saving images.
Our problem is that Apple removed the accumulation buffer from all of its drivers (starting from OS X 10.7.5), and some Linux drivers like Intel HDxxxx don't support it either.
I would therefore like to update the anti-aliasing code of the software to make it compatible with most current OSs and GPUs, while keeping the generated images as beautiful as they were before (because we need them for scientific publications).
SuperSampling seems to be the oldest and best-quality anti-aliasing method, but I can't find any example of SSAA that doesn't use the accumulation buffer. Is there a different way to implement SuperSampling with OpenGL/GLX?
You can use FBOs to implement the same kind of anti-aliasing that you most likely used with accumulation buffers. The process is almost the same, except that you use a texture/renderbuffer as your "accumulation buffer". You can either use two FBOs for the process, or change the attached render target of a single render FBO.
In pseudo-code, using two FBOs, the flow looks roughly like this:
create renderbuffer rbA
create fboA (will be used for accumulation)
bind fboA
attach rbA to fboA
clear

create texture texB
create fboB (will be used for rendering)
attach texB to fboB
(create and attach a renderbuffer for the depth buffer)

loop over jitter offsets
    bind fboB
    clear
    render scene, with jitter offset applied
    bind fboA
    bind texB for texturing
    set blend function GL_CONSTANT_ALPHA, GL_ONE
    set blend color 0.0, 0.0, 0.0, 1.0 / #passes
    enable blending
    render screen size quad with simple texture sampling shader
    disable blending
end loop

bind fboA as read framebuffer
bind default framebuffer as draw framebuffer
blit framebuffer
Full super-sampling is also possible. As Andon suggested in the comment above, you create an FBO with a render target that is a multiple of your window size in each dimension, and at the end do a down-scaling blit to your window. The whole thing tends to be slow and use a lot of memory, even with just a factor of 2.
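A sketch of that super-sampling path (ssFbo, ssColorRb, ssDepthRb, winW and winH are placeholder names; memory use grows with the square of the factor):
const int factor = 2;                       // 2x2 supersampling
const int ssW = winW * factor, ssH = winH * factor;

// No multisampling here, just oversized single-sample storage.
glBindRenderbuffer(GL_RENDERBUFFER, ssColorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, ssW, ssH);
glBindRenderbuffer(GL_RENDERBUFFER, ssDepthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, ssW, ssH);
// ... attach both to ssFbo, then render the scene with glViewport(0, 0, ssW, ssH) ...

// Downscale blit to the window; GL_LINEAR filters during the scale
// (roughly a box filter at an exact 2:1 ratio).
glBindFramebuffer(GL_READ_FRAMEBUFFER, ssFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, ssW, ssH, 0, 0, winW, winH,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);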

OpenGL antialiasing without the accumulation buffer

On an NVIDIA card I can perform full scene anti-aliasing using the accumulation buffer something like this:
if (m_antialias)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int j = 0; j < antialiasing; j++)
    {
        accPerspective(m_camera.FieldOfView(), // Vertical field of view in degrees.
                       aspectratio,            // The aspect ratio.
                       20.,                    // Near clipping plane.
                       1000.,                  // Far clipping plane.
                       JITTER[antialiasing][j].X(), JITTER[antialiasing][j].Y(),
                       0.0, 0.0, 1.0);
        m_camera.gluLookAt();
        ActualDraw();
        glAccum(GL_ACCUM, float(1.0 / antialiasing));      // accumulate this jittered pass
        glDrawBuffer(GL_FRONT);
        glAccum(GL_RETURN, float(antialiasing) / (j + 1)); // progressive preview to the front buffer
        glDrawBuffer(GL_BACK);
    }
    glAccum(GL_RETURN, 1.0);
}
On ATI cards the accumulation buffer is not implemented, and everyone says that you can do that in shader language now. The problem with that, of course, is that GLSL is a pretty high barrier to entry for an OpenGL beginner.
Can anyone point me to something that will show me how to do whole-scene anti-aliasing in a way that ATI cards can do, and that a newbie can understand?
Why would you ever do antialiasing this way, regardless of whether you have accumulation buffers or not? Just use multisampling; it's not free, but it's much cheaper than what you're doing.
First, you have to create a context with a multisampled buffer. That means you need to use WGL/GLX_ARB_multisample, which means that on Windows you need to do two-stage context creation. You should request a pixel format with 1 *_SAMPLE_BUFFERS_ARB and some number of *_SAMPLES_ARB. The larger the number of samples, the better the antialiasing (and the slower the rendering). You can query the supported sample counts with wglGetPixelFormatAttribivARB or glXGetConfig.
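On Windows, the pixel-format request in the second stage might look roughly like this (illustrative values; assumes the usual <windows.h>/wglext.h headers and that a dummy context is already current so the WGL extension functions are loaded):
const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_SAMPLE_BUFFERS_ARB, 1,  // request a multisample buffer
    WGL_SAMPLES_ARB,        4,  // 4x MSAA; more samples = better quality, slower
    0                           // terminator
};
int pixelFormat; UINT numFormats;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pixelFormat, &numFormats);
// SetPixelFormat(hdc, pixelFormat, &pfd); then create the real context on this format.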
Once you have successfully created a context with a multisample framebuffer, you render as normal, with one exception: call glEnable(GL_MULTISAMPLE) in your setup code. This will activate multisampled rendering.
And that's all you need.
Alternatively, if you're using GL 3.x or have access to ARB_framebuffer_object, you can skip the context stuff and create a multisampled framebuffer. Your depth buffer and color buffer(s) must all have the same number of samples. I would suggest using renderbuffers for these, since you're still using fixed-function (and you can't texture from a multisample texture in the fixed-function pipeline).
You create multisampled renderbuffers for color and depth (they must have the same number of samples). You set them up in an FBO, and render into them (with glEnable(GL_MULTISAMPLE), of course). When you're done, you then use glBlitFramebuffer to blit from your multisample framebuffer into the back-buffer (which shouldn't be multisampled).
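Put together, that FBO path might look like this sketch (4 samples; msFbo, colorRb, depthRb, w and h are placeholder names, completeness checks omitted):
// Setup: multisampled color + depth renderbuffers with the same sample count.
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, w, h);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, w, h);
// ... attach both to msFbo ...

// Per frame: render into the multisample FBO.
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glEnable(GL_MULTISAMPLE);
// ... render the scene with fixed-function as usual ...

// Resolve into the (non-multisampled) back buffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);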
The problem with that, of course, is that GLSL is a pretty high barrier to entry for an OpenGL beginner.
Says who? There is nothing wrong with a beginner learning from shaders. Indeed, in my experience, such beginners often learn better, because they understand the details of what's going on more effectively.