Render to FBO + glReadPixels all black - opengl

I am trying to render a simple checkerboard into an FBO and then do a glReadPixels().
When I do it without the FBO, everything works fine, so I assume that my render function is OK and so is the glReadPixels(). With the FBO, all I get are the lines that I draw after the FBO calls are done.
Here is my code (Python, aiming to be cross-platform):
def renderFBO():
    #WhyYouNoWorking(GL_FRAMEBUFFER)  # debug function... error checking
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer)
    glBindRenderbuffer( GL_RENDERBUFFER, renderbufferA)
    glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, window.width, window.height)
    glBindRenderbuffer( GL_RENDERBUFFER, renderbufferB)
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT, window.width, window.height)
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer)
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufferA)
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbufferB)
    #WhyYouNoWorking(GL_FRAMEBUFFER)
    glDrawBuffer(GL_COLOR_ATTACHMENT0)
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    glViewport( 0, 0, window.width, window.height)
    DrawChecker(Nbr = 16, Dark = 25.0/255, Light = 75.0/255)
    for i in range(len(labelSysInfo)):
        pyglet.text.Label(labelSysInfo[i], font_name='Times New Roman', font_size=26, x=(window.width*0.68), y=(window.height*0.04*i)+(window.height*2/3), anchor_x='left', anchor_y='center', color=(250, 250, 250, 150)).draw()
    glReadPixels(0, 0, window.width, window.height, GL_RGBA, GL_UNSIGNED_BYTE, a)
    glBindFramebuffer( GL_FRAMEBUFFER, 0)
My other function:
def on_draw(dt):
    glDrawBuffer(GL_BACK)
    glClear(GL_COLOR_BUFFER_BIT)
    glClearColor( 0.0, 0.0, 0.0, 1.0)
    glLoadIdentity()
    glEnable(GL_TEXTURE_2D)
    glDisable(GL_TEXTURE_2D)
    BlueLine()       # draw a simple line. works fine
    DropFrameTest()  # draw a simple line. works fine
In the main, the call to renderFBO() is done once, and then on_draw is called periodically.
dt = pyglet.clock.tick()
renderFBO()
pyglet.clock.schedule_interval(on_draw, 0.007)
pyglet.app.run()

At a guess, you've bound the framebuffer as the GL_DRAW_FRAMEBUFFER only, so glReadPixels still reads from whatever is bound as the GL_READ_FRAMEBUFFER - the default framebuffer. Use
glBindFramebuffer(GL_FRAMEBUFFER, ...
and
glFramebufferRenderbuffer(GL_FRAMEBUFFER, ...
to both read and write with the same FBO.
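For instance, a minimal sketch using the names from the question (written as C-style GL calls, which map one-to-one onto the pyglet bindings) - binding GL_FRAMEBUFFER sets both the draw and the read target, so the later glReadPixels reads from the FBO instead of the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufferA);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbufferB);
// ... draw the checkerboard and labels ...
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // width/height/pixels stand in for window.width, window.height and `a`
glBindFramebuffer(GL_FRAMEBUFFER, 0);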
I'm sure you already have, but checking for framebuffer completeness (glCheckFramebufferStatus) and for GL errors (glGetError, or the newer debug output extension) is also very useful.
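For reference, such a check might look like this (a hypothetical C++ helper along the lines of the WhyYouNoWorking() debug function in the question; the GL loader header is assumed):
#include <cstdio>
// Hypothetical helper: reports incompleteness of the framebuffer bound to
// `target` and drains the GL error queue.
void checkFramebuffer(GLenum target)
{
    GLenum status = glCheckFramebufferStatus(target);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        std::fprintf(stderr, "FBO incomplete: 0x%04X\n", status);
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        std::fprintf(stderr, "GL error: 0x%04X\n", err);
}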
[EDIT]
(Shotgun problem-solving tactics from the comments)
If you see an image on the first frame, but none on the next, there must be something left behind from the previous frame.
The most common problem is forgetting to clear the depth buffer - but you haven't.
Next up are stencil buffers and blending (neither look like they're enabled to begin with).
Maybe a new FBO handle is being generated each frame and you're running out?
Another common problem is accumulating matrix transforms, but you have glLoadIdentity so should be no issue there.

Related

Depth testing doesn't work when using custom framebuffer

I'm studying framebuffers and I've made a mirror in my scene. It works fine except for the depth testing; I got stuck trying to make it work. (When rendering to the default framebuffer, depth testing works fine.) Would appreciate any help. Here is the code:
glEnable( GL_DEPTH_TEST );
glViewport( 0, 0, 512, 512 );
unsigned int fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
unsigned int rbo;
glGenRenderbuffers( 1, &rbo );
glBindRenderbuffer( GL_RENDERBUFFER, rbo );
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH, 512, 512 );
glBindRenderbuffer( GL_RENDERBUFFER, 0 );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, rbo ); //if remove this, mirror works but without depth test
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
this->mirror->texturePack[0]->textureId(), 0 );
//render scene from mirror camera
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glViewport( 0, 0, this->width(), this->height() );
//render scene from main camera
Your framebuffer is incomplete, because GL_DEPTH is not a valid internal format for a renderbuffer storage.
See glRenderbufferStorage. Try GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT32:
glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512 );
See OpenGL 4.6 core profile specification, 9.4 Framebuffer Completeness, page 323:
The internal formats of the attached images can affect the completeness of
the framebuffer, so it is useful to first define the relationship between the internal
format of an image and the attachment points to which it can be attached.
• An internal format is depth-renderable if it is DEPTH_COMPONENT or one of the formats from table 8.13 whose base internal format is DEPTH_COMPONENT or DEPTH_STENCIL. No other formats are depth-renderable.
Note, the framebuffer completeness can be checked by:
glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
I've finally solved it. I added the glClear( GL_DEPTH_BUFFER_BIT ) call right after binding the mirror framebuffer, and after that it worked.
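In code, the fix boils down to clearing the depth attachment as soon as the mirror framebuffer is bound (a sketch; fbo is the handle from the snippet above):
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glClear( GL_DEPTH_BUFFER_BIT );  // the FBO keeps its own depth buffer, stale from the previous frame
// render scene from mirror camera ...
glBindFramebuffer( GL_FRAMEBUFFER, 0 );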

OpenGL How to render to texture with multi-sampling

Implementing some effect, I end up with 1 frame buffer associated with 1 texture, which holds my final scene. This texture is then applied on a fullscreen quad.
The result is what I expect as far as the effect goes, but I noticed that the edges on the scene thus rendered weren't smooth - presumably because multi-sampling does not apply during render-to-framebuffer passes the way it does when I render directly to the screen buffer.
So my question is
How can I apply/use multi-sampling on this final texture, so that its content shows smooth edges?
EDIT: I have removed the original version of my code here, which was using
a classic framebuffer + texture, not multi-sampled. Below is the latest,
following the suggestions in the comments.
For now, I'll focus on getting the glBlitFramebuffer approach to work!
So my code now goes like so:
// Unlike before, finalTexture is multi-sampled, thus created like this:
glGenFramebuffers(1, &finalFrame);
glGenTextures(1, &finalTexture);
glBindFramebuffer(GL_FRAMEBUFFER, finalFrame);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, finalTexture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA, w, h, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D_MULTISAMPLE,
finalTexture,
0);
// Alternative using a render buffer instead of a texture.
//glGenRenderbuffers(1, &finalColor);
//glBindRenderbuffer(GL_RENDERBUFFER, finalColor);
//glRenderbufferStorageMultisample(GL_RENDERBUFFER, 8, GL_RGBA, w, h);
//glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, finalColor);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Then I introduced a new frame buffer to resolve the multi-sampling:
// This one's not multi-sampled.
glGenFramebuffers(1, &resolveFrame);
glGenTextures(1, &resolveTexture);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFrame);
glBindTexture(GL_TEXTURE_2D, resolveTexture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D,
resolveTexture,
0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Now a lot of code to produce a glowing effect, things like:
// 1. Generate 1 frame buffer with 2 color attachments (textures) - no multisampling
// 2. Render the 3D scene to it:
// - texture 0 receives the entire scene
// - texture 1 receives glowing objects only
// 3. Generate 2 frame buffers with 1 color attachment (texture) each - no multisampling
// - we can call them Texture 2 and texture 3
// 4. Ping-pong Render a fullscreen textured quad on them
// - On the first iteration we use texture 1
// - Then On each following iteration we use one another's texture (3,2,3...)
// - Each time we apply a gaussian blur
// 5. Finally sum texture 0 and texture 3 (holding the last blur result)
// - For this we create a multi-sampled frame buffer:
// - Created as per code here above: finalFrame & finalTexture
// - To produce the sum, we draw a full screen texured quad with 2 sampler2D:
// - The fragment shader then computes texture0+texture3 on each pixel
// - finalTexture now holds the scene as I expect it to be
// Then I resolve the multi-sampled texture into a normal one:
glBindFramebuffer(GL_READ_FRAMEBUFFER, finalFrame);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFrame);
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
// And the last stage: render onto the screen:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resolveTexture);
drawFullScreenQuad( ... );
The resulting output is correct, meaning that I can see the scene with the desired glowing effect... But no apparent multi-sampling! :(
Note: I am starting to wonder if I am using multi-sampling at the right stage - I will be experimenting with this - but should I perhaps be using it when rendering the initial 3D scene for the first time, on the initial FBOs? (The ones I refer to in the comments and didn't want to post here to avoid confusion :s)
I added more detailed comments on what's going on before this last stage with final & resolve frame buffers.
You have: "step 5. Finally sum texture 0 and texture 3 (holding the last blur result) - For this we create a multi-sampled frame buffer". But this way multisampling will only apply to the fullscreen quad.
"if I am using multi-sampling at the right stage" - so the answer to your question is no; you need to use multisampling at an earlier stage, when you render the scene.
I have a very similar setup with framebuffers (the one used to render the scene is multisampled), two output textures (for color info and for highlights, which are later blurred to achieve glow) and ping-pong framebuffers. I also use the glBlitFramebuffer solution (with 2 blit calls, one per color attachment, each going into its own texture); I have not found any way of making it render directly into a framebuffer with an attached texture.
If you want some code, this is the solution that worked for me (it is in C# though):
// ----------------------------
// Initialization
int BlitFrameBufferHandle = GL.GenFramebuffer();
GL.BindFramebuffer(FramebufferTarget.Framebuffer, BlitFrameBufferHandle);
// need to setup this for 2 color attachments:
GL.DrawBuffers(2, new [] {DrawBuffersEnum.ColorAttachment0, DrawBuffersEnum.ColorAttachment1});
// create texture 0
int ColorTextureHandle0 = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, ColorTextureHandle0);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int) TextureMinFilter.Linear); // can use nearest for min and mag filter also
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int) TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int) TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int) TextureWrapMode.ClampToEdge);
// for HRD use PixelInternalFormat.Rgba16f and PixelType.Float. Otherwise PixelInternalFormat.Rgba8 and PixelType.UnsignedByte
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba16f, Width, Height, 0, PixelFormat.Rgba, PixelType.Float, IntPtr.Zero);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0, TextureTarget.Texture2D, ColorTextureHandle0, 0);
// create texture 1
int ColorTextureHandle1 = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, ColorTextureHandle1);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int) TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int) TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int) TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int) TextureWrapMode.ClampToEdge);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba16f, Width, Height, 0, PixelFormat.Rgba, PixelType.Float, IntPtr.Zero);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment1, TextureTarget.Texture2D, ColorTextureHandle1, 0);
// check FBO error
var error = GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer);
if (error != FramebufferErrorCode.FramebufferComplete) {
throw new Exception($"OpenGL error: Framebuffer status {error.ToString()}");
}
int FrameBufferHandle = GL.GenFramebuffer();
GL.BindFramebuffer(FramebufferTarget.Framebuffer, FrameBufferHandle);
// need to setup this for 2 color attachments:
GL.DrawBuffers(2, new [] {DrawBuffersEnum.ColorAttachment0, DrawBuffersEnum.ColorAttachment1});
// render buffer 0
int RenderBufferHandle0 = GL.GenRenderbuffer();
GL.BindRenderbuffer(RenderbufferTarget.Renderbuffer, RenderBufferHandle0);
GL.RenderbufferStorageMultisample(RenderbufferTarget.Renderbuffer, 8, RenderbufferStorage.Rgba16f, Width, Height);
GL.FramebufferRenderbuffer(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0, RenderbufferTarget.Renderbuffer, RenderBufferHandle0);
// render buffer 1
int RenderBufferHandle1 = GL.GenRenderbuffer();
GL.BindRenderbuffer(RenderbufferTarget.Renderbuffer, RenderBufferHandle1);
GL.RenderbufferStorageMultisample(RenderbufferTarget.Renderbuffer, 8, RenderbufferStorage.Rgba16f, Width, Height);
GL.FramebufferRenderbuffer(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment1, RenderbufferTarget.Renderbuffer, RenderBufferHandle1);
// depth render buffer
int DepthBufferHandle = GL.GenRenderbuffer();
GL.BindRenderbuffer(RenderbufferTarget.Renderbuffer, DepthBufferHandle);
GL.RenderbufferStorageMultisample(RenderbufferTarget.Renderbuffer, 8, RenderbufferStorage.DepthComponent24, Width, Height);
GL.FramebufferRenderbuffer(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment, RenderbufferTarget.Renderbuffer, DepthBufferHandle);
// check FBO error
error = GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer);
if (error != FramebufferErrorCode.FramebufferComplete) {
throw new Exception($"OpenGL error: Framebuffer status {error.ToString()}");
}
// unbind FBO
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
// ----------------------------
// Later for each frame
GL.BindFramebuffer(FramebufferTarget.Framebuffer, FrameBufferHandle);
// render scene ...
// blit data from FrameBufferHandle to BlitFrameBufferHandle
GL.BindFramebuffer(FramebufferTarget.ReadFramebuffer, FrameBufferHandle);
GL.BindFramebuffer(FramebufferTarget.DrawFramebuffer, BlitFrameBufferHandle);
// blit color attachment0
GL.ReadBuffer(ReadBufferMode.ColorAttachment0);
GL.DrawBuffer(DrawBufferMode.ColorAttachment0);
GL.BlitFramebuffer(
0, 0, Width, Height,
0, 0, Width, Height,
ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest
);
// blit color attachment1
GL.ReadBuffer(ReadBufferMode.ColorAttachment1);
GL.DrawBuffer(DrawBufferMode.ColorAttachment1);
GL.BlitFramebuffer(
0, 0, Width, Height,
0, 0, Width, Height,
ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest
);
// after that use textures ColorTextureHandle0 and ColorTextureHandle1 to render post effects using ping-pong framebuffers ...
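As a side note, the ping-pong blur mentioned in the last comment could look roughly like this (a hypothetical sketch in C-style GL; blurFBO, blurTex, blurProgram, blurPasses, highlightsTexture and drawFullScreenQuad are placeholder names, not part of the code above):
// Each pass reads the previous pass's output and writes into the other FBO.
GLuint src = highlightsTexture;  // the resolved (non-multisampled) highlights
bool horizontal = true;
for (int i = 0; i < blurPasses; ++i) {
    glBindFramebuffer(GL_FRAMEBUFFER, blurFBO[horizontal ? 0 : 1]);
    glUseProgram(blurProgram);
    glUniform1i(glGetUniformLocation(blurProgram, "horizontal"), horizontal ? 1 : 0);
    glBindTexture(GL_TEXTURE_2D, src);
    drawFullScreenQuad();               // runs the Gaussian blur shader
    src = blurTex[horizontal ? 0 : 1];  // this pass's output is the next pass's input
    horizontal = !horizontal;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);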
Just implemented a bloom effect myself, faced the same aliased edges on the resulting image and ran into exactly the same issues. Hence sharing my experience here.
Aliasing happens when you render lines with OpenGL - e.g. the edges of a triangle or a polygon - since OpenGL draws "diagonal" (or, simply put, non-straight) lines on the screen using quite simple (yet fast) algorithms.
That being said, if you want to anti-alias something, it should be a 3D shape, not a texture - the latter is just a plain image, after all.
Off-topic: in order to fix aliasing on an image you would apply a similar technique, but you would need to figure out where the "edges" are on the image and then follow the same algorithm per "edge" pixel. "Edge" (in quotes) since they are just ordinary pixels from the image's perspective; being an edge is just extra context we humans attach to those pixels.
With that out of our way, the thing with two image attachments is actually a nice optimization - you do not need to render your entire scene twice to different framebuffers. But you will pay the price of copying the data from each multi-sampled framebuffer attachment to a separate non-multisampled texture for post-processing.
A bit off-topic: performance-wise, I think this is exactly the same (or within a very small threshold) - rendering an entire scene twice, to two separate framebuffers with two separate multi-sampled attachments (as inputs for the post-processing) and then copying each of them separately to two separate non-multisampled textures.
So the last step before you can apply your (any) post-processing to the multi-sampled scene is to convert each multi-sampled render result to non-multisampled texture - so that your shaders work with plain sampler2D.
It would be something similar to this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, bloomFBOWith2MultisampledAttachments);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, temporaryFBOWith1NonMultisampledAttachment);
// THIS IS IMPORTANT
glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, windowWidth, windowHeight, 0, 0, windowWidth, windowHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// bloomFBOWith2MultisampledAttachments is still bound
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, blurFramebuffer1);
// THIS IS IMPORTANT
glReadBuffer(GL_COLOR_ATTACHMENT1);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, windowWidth, windowHeight, 0, 0, windowWidth, windowHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
Given you are rendering your scene to two attachments in one framebuffer, you will then need to copy from each of those multi-sampled attachments to non-multi-sampled textures and use them for additive rendering and blurring, correspondingly.
If you don't mind messy code and the use of globjects for OpenGL API abstraction, here's my entire bloom solution with anti-aliasing.
And a few screenshots:
The first screenshot does not use a framebuffer to render to, so the lines are really smooth.
The second screenshot is the first implementation of a bloom effect (available as a separate CMake project).
Aliasing is more visible at longer distances, so the third screenshot shows a bit more of the scene - the edges look really stair-like.
The last two screenshots show the bloom effect with anti-aliasing applied.
Note how the lantern only has a somewhat low-resolution texture, hence the aliased lines, whilst the paper has its edges smoothed out by anti-aliasing.

Multisample Texture produces artifacts at horizon

I have implemented deferred rendering and am trying to use multisample textures for anti-aliasing.
I render the scene into an FBO with multisample textures, use glBlitFramebuffer to resolve them into regular textures in a second FBO, and finally bind those textures to the lighting shader that produces the final image.
// draw to textures
mMultiGeometryFBO->bind();
glViewport(0,0,mWidth,mHeight);
glEnable(GL_DEPTH_TEST);
glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );
// calling all modules to draw to FBO
for(auto r : mRenderer)
r->renderMaterial(camera);
glBindFramebuffer(GL_READ_FRAMEBUFFER, mMultiGeometryFBO->fbo());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mGeometryFBO->fbo());
glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, mWidth, mHeight,
0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_LINEAR);
glReadBuffer(GL_COLOR_ATTACHMENT1);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(0, 0, mWidth, mHeight,
0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glReadBuffer(GL_COLOR_ATTACHMENT2);
glDrawBuffer(GL_COLOR_ATTACHMENT2);
glBlitFramebuffer(0, 0, mWidth, mHeight,
0, 0, mWidth, mHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
// draw to screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_DEPTH_BUFFER_BIT);
mSkybox->renderMaterial(camera);
mShader->use();
mShader->setTexture("tDiffuse", mDiffuseColor, 0);
mShader->setTexture("tNormal", mNormals, 1);
mShader->setTexture("tMaterial", mMaterialParams, 2);
mShader->setTexture("tDepth", mDepthBuffer, 3);
mShader->setTexture("tLights", mLightColor, 4);
mQuad->draw();
This produces a visible line at the horizon (between geometry and skybox).
The color is the clear color. Only clearing the depth reduces the problem when moving. Rendering the SkyBox to the FBO before rendering the geometry produces less visible artifacts, but the line is still there.
Edit: forgot the picture
Resolving the multisample target before the lighting pass does not make sense, conceptually. What you will get is that the values in your gbuffers will be averaged at the edges of objects. This is especially bad for the normal directions. Think about it: if you have a pixel which contains 50% of your ground plane and 50% of your sky, you will get a normal direction which is (normal_ground + normal_sky)/2. This is totally different from calculating the final color of each of these parts with their original normals and mixing the resulting colors.
If you want to do multisampling with deferred rendering, you have to use the multisampled target for the lighting, enable per-sample shading and actually access and light each sample individually, and only blit the final result to a non-multisampled target. However, that will be exorbitantly expensive. In particular, you lose the benefits of multisampling vs. supersampling.
I don't know if there are neat tricks to still work with multisampling in a more efficient way, but the usual approach is to not use multisampling at all and to do the anti-aliasing via some image-based post-processing pass.
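For completeness, the per-sample route described above looks roughly like this on the API side (a sketch assuming GL 4.0 / ARB_sample_shading; the lighting shader must then read the gbuffer through sampler2DMS rather than sampler2D):
glEnable(GL_SAMPLE_SHADING);  // run the lighting fragment shader once per sample
glMinSampleShading(1.0f);     // shade 100% of the samples, not just one
// In GLSL the gbuffer is then fetched per sample, e.g.:
//   uniform sampler2DMS tNormal;
//   vec3 n = texelFetch(tNormal, ivec2(gl_FragCoord.xy), gl_SampleID).xyz;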

GL_DEPTH_TEST does not work in OpenGL 3.3

I am writing a simple rendering program using OpenGL 3.3.
I have the following lines in my code (which should enable the depth test and culling):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
ExitOnGLError("ERROR: Could not set OpenGL depth testing options");
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
ExitOnGLError("ERROR: Could not set OpenGL culling options");
However after rendering I see the following result:
As you can see the depth test does not seem to work. What am I doing wrong? Where I should look for the problem?
Some information that may be useful:
In the projection matrix I have the near clipping plane set to 0.2 and the far to 3.2 (so the near plane is not zero).
I render the mesh and texture it using a simple method with glDrawArrays and two buffers for vertex and texture coordinates. Shaders are then used to display these arrays properly.
I do not calculate or draw normals.
Context creation code: http://pastebin.com/mRMUxPL1
UPDATE:
Finally got it working! As it turns out, I was not creating a buffer for depth rendering.
When I replaced this code (buffers initialization):
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
with this one:
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glGenRenderbuffers(1, &depthRenderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_DEPTH24_STENCIL8,
camera.imageSize.width,
camera.imageSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
everything started to work fine!
static int visualAttribs[] = { None };
^^^^
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig( display, DefaultScreen(display), visualAttribs, &numberOfFramebufferConfigurations );
CV_Assert(fbConfigs != 0);
glXChooseFBConfig():
GLX_DEPTH_SIZE: Must be followed by a nonnegative minimum size specification. If this value is zero, frame buffer configurations with no depth buffer are preferred. Otherwise, the largest available depth buffer of at least the minimum size is preferred. The default value is 0.
Try setting visualAttribs[] to something like { GLX_DEPTH_SIZE, 16, None }
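That is, a sketch of the suggested change, keeping the rest of the context creation code as it is:
static int visualAttribs[] = { GLX_DEPTH_SIZE, 16, None };  // request a depth buffer of at least 16 bits
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig( display, DefaultScreen(display), visualAttribs, &numberOfFramebufferConfigurations );
CV_Assert(fbConfigs != 0);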

glReadPixels() sets GL_INVALID_OPERATION error

I'm trying to implement color picking with FBO. I have multisampled FBO (fbo[0]) which I use to render the scene and I have non multisampled FBO (fbo[1]) which I use for color picking.
The problem is: when I try to read pixel data from fbo[1], everything goes well until the glReadPixels call, which sets the GL_INVALID_OPERATION flag. I've checked the manual and can't find the reason why.
The code to create FBO:
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_RGBA8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[2]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[3]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[3]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[2]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[0]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
My checker stays silent, so the FBOs are complete. Next, the picking code:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// bla, bla, bla
// do the rendering
unsigned int result;
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
int sb;
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glGetIntegerv(GL_SAMPLE_BUFFERS, &sb);
// glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
OGLChecker::getGlError();
std::cerr << "Sample buffers " << sb << std::endl;
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED, GL_UNSIGNED_INT, &result);
OGLChecker::getGlError();
return result;
the output:
Sample buffers 0
OpenGL Error : Invalid Operation
The interesting fact is that if I uncomment glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); then no error happens and the pixels are read from the screen (but that's not what I need).
What may be wrong here?
Your problem is the format parameter. For an image with a one-channel integer internal format (here GL_R32UI), the correct parameter isn't GL_RED, but GL_RED_INTEGER:
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &result);
Look at the OpenGL documentation wiki (emphasis mine):
...
format
Specifies the format of the pixel data. For transfers of depth, stencil, or depth/stencil data, you must use GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, or GL_DEPTH_STENCIL, where appropriate. For transfers of normalized integer or floating-point color image data, you must use one of the following: GL_RED, GL_GREEN, GL_BLUE, GL_RG, GL_RGB, GL_BGR, GL_RGBA, and GL_BGRA. For transfers of non-normalized integer data, you must use one of the following: GL_RED_INTEGER, GL_GREEN_INTEGER, GL_BLUE_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, and GL_BGRA_INTEGER. Even if no actual pixel transfer is made (data​ is NULL and no buffer is bound to GL_PIXEL_UNPACK_BUFFER), you must set this parameter correctly for the internal format of the destination image.
...
Note: the official reference page is incomplete/wrong.
Given that it's "fixed" if you uncomment that line of code, I wonder if your driver is lying to you about GL_SAMPLE_BUFFERS being 0. From http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml:
GL_INVALID_OPERATION is generated if GL_READ_FRAMEBUFFER_BINDING is non-zero, the read framebuffer is complete, and the value of GL_SAMPLE_BUFFERS for the read framebuffer is greater than zero.
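One way to check whether this rule is what's being tripped (a sketch; rbo[2] is the color renderbuffer attached to fbo[1] in the question) is to query the sample count of the attachment itself rather than the context-wide GL_SAMPLE_BUFFERS:
GLint samples = 0;
glBindRenderbuffer(GL_RENDERBUFFER, rbo[2]);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_SAMPLES, &samples);
// samples must be 0 for glReadPixels to succeed on this attachment;
// a multisampled attachment has to be resolved with glBlitFramebuffer first.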
If you're using NVIDIA's binary driver on Linux and have switched to a non-graphical virtual console (e.g. CTRL+ALT+F1) then any attempt to glReadPixels() will return GL_INVALID_OPERATION (0x502).
Solution: Switch back to the graphical console (usually CTRL+ALT+F7).