I am writing a simple rendering program using OpenGL 3.3.
I have the following lines in my code (which should enable depth testing and culling):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
ExitOnGLError("ERROR: Could not set OpenGL depth testing options");
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
ExitOnGLError("ERROR: Could not set OpenGL culling options");
However, after rendering I see the following result:
As you can see, the depth test does not seem to work. What am I doing wrong? Where should I look for the problem?
Some information that may be useful:
In the projection matrix I have the near clipping plane set to 0.2 and the far plane to 3.2 (so the near plane is not zero; a sketch of such a projection follows this list).
I render the mesh and texture it using a simple method with glDrawArrays and two buffers for vertex and texture coordinates. Shaders are then used to display these arrays properly.
I do not calculate and draw normals.
Context creation code: http://pastebin.com/mRMUxPL1
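For reference, a projection with those clipping planes might be built like this (a sketch assuming GLM; the field of view and aspect ratio are placeholders, not values from the question):
glm::mat4 projection = glm::perspective(
    glm::radians(60.0f),  // placeholder vertical field of view
    aspectRatio,          // placeholder aspect ratio
    0.2f,                 // near plane, as described above
    3.2f);                // far plane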
UPDATE:
Finally got it working! As it turns out, I was not creating the buffer for depth rendering.
When I replaced this code (buffer initialization):
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
                      GL_RGBA8,
                      camera.imageSize.width,
                      camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                          GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER,
                          renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
with this one:
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
                      GL_RGBA8,
                      camera.imageSize.width,
                      camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                          GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER,
                          renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glGenRenderbuffers(1, &depthRenderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
                      GL_DEPTH24_STENCIL8,
                      camera.imageSize.width,
                      camera.imageSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
everything started to work fine!
Looking at the context creation code you linked, the problem is this line:
static int visualAttribs[] = { None };
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig( display, DefaultScreen(display), visualAttribs, &numberOfFramebufferConfigurations );
CV_Assert(fbConfigs != 0);
From the documentation of glXChooseFBConfig():
GLX_DEPTH_SIZE: Must be followed by a nonnegative minimum size specification. If this value is zero, frame buffer configurations with no depth buffer are preferred. Otherwise, the largest available depth buffer of at least the minimum size is preferred. The default value is 0.
Try setting visualAttribs[] to something like { GLX_DEPTH_SIZE, 16, None }
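For example, a minimal attribute list that requests at least a 16-bit depth buffer could look like this (the GLX_RENDER_TYPE entry is illustrative, not from the original code):
static int visualAttribs[] = {
    GLX_RENDER_TYPE, GLX_RGBA_BIT,
    GLX_DEPTH_SIZE, 16,
    None
};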
Related
How can I attach a depth buffer to my framebuffer object when I use GL_TEXTURE_2D_MULTISAMPLE? glCheckFramebufferStatus(msaa_fbo) from the code below returns 0. From the documentation this seems to mean that msaa_fbo is not a framebuffer, but it was created with glGenFramebuffers(1, &msaa_fbo);.
Additionally, if an error occurs, zero is returned.
GL_INVALID_ENUM is generated if target is not GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER or GL_FRAMEBUFFER.
The error is 1280, which I think means GL_INVALID_ENUM.
If I remove the depth buffer attachment, the program runs and renders (although without depth testing), and the error is still present. With the depth attachment included there is an error (1286) after every frame, which is GL_INVALID_FRAMEBUFFER_OPERATION. I don't know how to continue from here. Some examples I've looked at do roughly the same thing but seem to work.
glGenTextures(1, &render_target_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_target_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_target_texture, 0);
glGenRenderbuffers(1, &depth_render_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_render_buffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, NUM_SAMPLES, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_render_buffer);
GLenum status = glCheckFramebufferStatus(msaa_fbo);
Most of the code is from this.
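A small helper that drains the GL error queue after each call can help localize where such errors are raised (a generic sketch, not part of the original code):
#include <cstdio>

// Drain the GL error queue and report each pending error code in hex.
static void checkGLErrors(const char* where)
{
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}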
EDIT
The status check was wrong, it should've been GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);. Now there is no error when I don't include the depth. When I include depth I get this error now: GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE.
EDIT 2
Documentation claims that this happens when GL_TEXTURE_SAMPLES and GL_RENDERBUFFER_SAMPLES don't match.
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE is returned if the value of GL_RENDERBUFFER_SAMPLES is not the same for all attached renderbuffers; if the value of GL_TEXTURE_SAMPLES is not the same for all attached textures; or, if the attached images are a mix of renderbuffers and textures, the value of GL_RENDERBUFFER_SAMPLES does not match the value of GL_TEXTURE_SAMPLES.
But they do!
I've tested them like this:
std::cout << "GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE" << std::endl;
GLsizei gts, grs;
glGetTexLevelParameteriv(GL_TEXTURE_2D_MULTISAMPLE, 0, GL_TEXTURE_SAMPLES, &gts);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_SAMPLES, &grs);
std::cout << "GL_TEXTURE_SAMPLES: " << gts << std::endl;
std::cout << "GL_RENDERBUFFER_SAMPLES: " << grs << std::endl;
Output is:
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE
GL_TEXTURE_SAMPLES: 8
GL_RENDERBUFFER_SAMPLES: 8
EDIT 3
Worked around this by using two textures instead of a texture and a renderbuffer like this:
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glGenTextures(1, &render_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_texture, 0);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, depth_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_DEPTH_COMPONENT, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, depth_texture, 0);
I'm still interested in what was wrong with the original implementation, so the question still stands.
You need to use fixed sample locations for the texture if you mix it with renderbuffers. From the spec, in the section "Framebuffer Completeness":
The value of TEXTURE_FIXED_SAMPLE_LOCATIONS is the same for all attached textures; and, if the attached images are a mix of renderbuffers and textures, the value of TEXTURE_FIXED_SAMPLE_LOCATIONS must be TRUE for all attached textures.
{FRAMEBUFFER_INCOMPLETE_MULTISAMPLE}
To avoid this error condition, the call for setting up the texture storage needs to be changed to:
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,
NUM_SAMPLES, GL_RGBA8, width, height, GL_TRUE);
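With that change (and the corrected status-check target from the first edit), the completeness check should report success:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
// Expected: GL_FRAMEBUFFER_COMPLETE once GL_TEXTURE_FIXED_SAMPLE_LOCATIONS
// is GL_TRUE for the attached multisample texture.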
I am trying to render a simple checkerboard into an FBO and then do a glReadPixels().
When I do it without the FBO, everything works fine, so I assume that my render function is OK and so is the glReadPixels(). With the FBO, all I get are the lines that I draw after the FBO calls have been made.
Here is my code (Python, aiming to be cross-platform):
def renderFBO():
    #WhyYouNoWorking(GL_FRAMEBUFFER) # debug function... error checking
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer)
    glBindRenderbuffer( GL_RENDERBUFFER, renderbufferA)
    glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, window.width, window.height)
    glBindRenderbuffer( GL_RENDERBUFFER, renderbufferB)
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT, window.width, window.height)
    glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebuffer)
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufferA)
    glFramebufferRenderbuffer( GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbufferB)
    #WhyYouNoWorking(GL_FRAMEBUFFER)
    glDrawBuffer(GL_COLOR_ATTACHMENT0)
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
    glViewport( 0, 0, window.width, window.height)
    DrawChecker(Nbr = 16, Dark = 25.0/255, Light = 75.0/255)
    for i in range(len(labelSysInfo)):
        pyglet.text.Label(labelSysInfo[i], font_name='Times New Roman', font_size=26, x=(window.width*0.68), y= (window.height*0.04*i)+(window.height*2/3), anchor_x='left', anchor_y='center', color = (250, 250, 250, 150)).draw()
    glReadPixels(0, 0, window.width, window.height, GL_RGBA, GL_UNSIGNED_BYTE, a)
    glBindFramebuffer( GL_FRAMEBUFFER, 0)
My other function:
def on_draw(dt):
    glDrawBuffer(GL_BACK)
    glClear(GL_COLOR_BUFFER_BIT)
    glClearColor( 0.0, 0.0, 0.0, 1.0)
    glLoadIdentity()
    glEnable(GL_TEXTURE_2D)
    glDisable(GL_TEXTURE_2D)
    BlueLine() # draw a simple line. works fine
    DropFrameTest() # draw a simple line. works fine
In the main, the call to renderFBO() is done once, and then on_draw is called periodically.
dt = pyglet.clock.tick()
renderFBO()
pyglet.clock.schedule_interval(on_draw, 0.007)
pyglet.app.run()
At a guess, you've bound the framebuffer to the GL_DRAW_FRAMEBUFFER only. Use
glBindFramebuffer(GL_FRAMEBUFFER, ...
and
glFramebufferRenderbuffer(GL_FRAMEBUFFER, ...
to both read and write with the same FBO.
I'm sure you already have, but checking for framebuffer completeness (glCheckFramebufferStatus) and for GL errors (glGetError, or the new extension) is also very useful.
[EDIT]
(The shotgun problem-solving tactics from the comments)
If you see an image on the first frame but none on the next, there must be something left over from the previous frame.
The most common problem is forgetting to clear the depth buffer - but you haven't.
Next up are stencil buffers and blending (neither looks like it's enabled to begin with).
Maybe a new FBO handle is being generated each frame and you're running out?
Another common problem is accumulating matrix transforms, but you have glLoadIdentity so there should be no issue there.
I'm trying to implement color picking with FBOs. I have a multisampled FBO (fbo[0]) which I use to render the scene, and a non-multisampled FBO (fbo[1]) which I use for color picking.
The problem: when I try to read pixel data from fbo[1], everything goes well until the glReadPixels call, which sets the GL_INVALID_OPERATION flag. I've checked the manual and can't find the reason why.
The code to create FBO:
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_RGBA8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[2]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[3]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[3]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[2]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[0]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
My checker stays silent, so the FBOs are complete. Next, the picking code:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// bla, bla, bla
// do the rendering
unsigned int result;
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
int sb;
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glGetIntegerv(GL_SAMPLE_BUFFERS, &sb);
// glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
OGLChecker::getGlError();
std::cerr << "Sample buffers " << sb << std::endl;
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED, GL_UNSIGNED_INT, &result);
OGLChecker::getGlError();
return result;
the output:
Sample buffers 0
OpenGL Error : Invalid Operation
The interesting fact is that if I uncomment glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); then no error happens and pixels are read from the screen (but I don't need that).
What may be wrong here?
Your problem is the format parameter. For an image with a one-channel integer internal format (your GL_R32UI renderbuffer), the correct parameter isn't GL_RED, but GL_RED_INTEGER:
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &result);
Look at the OpenGL documentation wiki (emphasis mine):
...
format
Specifies the format of the pixel data. For transfers of depth, stencil, or depth/stencil data, you must use GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, or GL_DEPTH_STENCIL, where appropriate. For transfers of normalized integer or floating-point color image data, you must use one of the following: GL_RED, GL_GREEN, GL_BLUE, GL_RG, GL_RGB, GL_BGR, GL_RGBA, and GL_BGRA. For transfers of non-normalized integer data, you must use one of the following: GL_RED_INTEGER, GL_GREEN_INTEGER, GL_BLUE_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, and GL_BGRA_INTEGER. Even if no actual pixel transfer is made (data is NULL and no buffer is bound to GL_PIXEL_UNPACK_BUFFER), you must set this parameter correctly for the internal format of the destination image.
...
Note: the official reference page is incomplete/wrong.
Given that it's "fixed" if you uncomment that line of code, I wonder if your driver is lying to you about GL_SAMPLE_BUFFERS being 0. From http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml:
GL_INVALID_OPERATION is generated if GL_READ_FRAMEBUFFER_BINDING is non-zero, the read framebuffer is complete, and the value of GL_SAMPLE_BUFFERS for the read framebuffer is greater than zero.
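For completeness: if the read framebuffer really were multisampled, the usual workaround is to resolve it into a single-sampled FBO with a matching color format via glBlitFramebuffer and read from that instead (a generic sketch with hypothetical names, not the fbo[] objects above):
glBindFramebuffer(GL_READ_FRAMEBUFFER, multisampledFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); // single-sampled, same format
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);
// glReadPixels can now read the resolved pixels.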
If you're using NVIDIA's binary driver on Linux and have switched to a non-graphical virtual console (e.g. CTRL+ALT+F1) then any attempt to glReadPixels() will return GL_INVALID_OPERATION (0x502).
Solution: Switch back to the graphical console (usually CTRL+ALT+F7).
Has anyone done this successfully? It seems that whatever index format I use in the stencil renderbuffer, glCheckFramebufferStatus(...) returns GL_FRAMEBUFFER_UNSUPPORTED.
I've successfully bound both a depth and a color renderbuffer, but whenever I try to do the same thing with my stencil buffer I get (as I said) GL_FRAMEBUFFER_UNSUPPORTED.
Here are snippets of my code:
// Create frame buffer
GLuint fb;
glGenFramebuffers(1, &fb);
// Create stencil render buffer (note that I create the depth buffer the exact same way, and it works)
GLuint sb;
glGenRenderbuffers(1, &sb);
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, w, h);
// Attach color
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
// Attach stencil buffer
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);
// And here I get an GL_FRAMEBUFFER_UNSUPPORTED when doing glCheckFramebufferStatus()
Any ideas?
Note: the color attachment is a texture and not a renderbuffer.
Never use a free-standing stencil buffer. If you need stencil, always use a depth+stencil image format. Note that the stencil index formats are not required image formats.
Even though you're not using a depth buffer here, you still should use GL_DEPTH24_STENCIL8, which you should attach to GL_DEPTH_STENCIL_ATTACHMENT.
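A minimal sketch of that suggestion, reusing the fb, w and h names from the question:
GLuint dsb;
glGenRenderbuffers(1, &dsb);
glBindRenderbuffer(GL_RENDERBUFFER, dsb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, dsb);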
You can use a stencil-only attachment on recent NVIDIA hardware/drivers:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, fboStencilTexture, 0);
There is still no support for separate depth and stencil attachments.
I have a very basic fragment shader from which I want to output 'gl_PrimitiveID' to a framebuffer object (FBO) which I have defined. Below is my fragment shader:
#version 150
uniform vec4 colorConst;
out vec4 fragColor;
out uvec4 triID;
void main(void)
{
    fragColor = colorConst;
    triID.r = uint(gl_PrimitiveID);
}
I setup my FBO like this:
GLuint renderbufId0;
GLuint renderbufId1;
GLuint depthbufId;
GLuint framebufId;
// generate render and frame buffer objects
glGenRenderbuffers( 1, &renderbufId0 );
glGenRenderbuffers( 1, &renderbufId1 );
glGenRenderbuffers( 1, &depthbufId );
glGenFramebuffers ( 1, &framebufId );
// setup first renderbuffer (fragColor)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId0);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, gViewWidth, gViewHeight);
// setup second renderbuffer (triID)
glBindRenderbuffer(GL_RENDERBUFFER, renderbufId1);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB32UI, gViewWidth, gViewHeight);
// setup depth buffer
glBindRenderbuffer(GL_RENDERBUFFER, depthbufId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, gViewWidth, gViewHeight);
// setup framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, framebufId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbufId0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, renderbufId1);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthbufId );
// check if everything went well
GLenum stat = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(stat != GL_FRAMEBUFFER_COMPLETE) { exit(0); }
// setup color attachments
const GLenum att[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, att);
// render mesh
RenderMyMesh();
// copy second color attachment (triID) to local buffer
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
For some reason glReadPixels gives me a 'GL_INVALID_OPERATION' error. However, if I change the internal format of renderbufId1 from 'GL_RGB32UI' to 'GL_RGB' and use 'GL_FLOAT' in glReadPixels instead of 'GL_UNSIGNED_INT', then everything works fine. Does anyone know why I am getting the 'GL_INVALID_OPERATION' error and how I can solve it?
Is there an alternative way of outputting 'gl_PrimitiveID'?
PS: The reason I want to output 'gl_PrimitiveID' like this is explained here: Picking triangles in OpenGL core profile when using glDrawElements
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED, GL_UNSIGNED_INT, data);
As stated on the OpenGL Wiki, you need to use GL_RED_INTEGER when transferring true integer data. Otherwise, OpenGL will try to use floating-point conversion on it.
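So the read of the integer attachment becomes:
glReadPixels(0, 0, gViewWidth, gViewHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, data);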
BTW, make sure you're using glBindFragDataLocation to set up which buffers those fragment shader outputs go to. Alternatively, you can set it up explicitly in the shader if you're using GLSL 3.30 or above.
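A short sketch of both options, using the output names from the shader above (program here is a hypothetical name for your shader program object):
// Option 1: bind the outputs, then (re)link the program.
glBindFragDataLocation(program, 0, "fragColor"); // -> GL_COLOR_ATTACHMENT0
glBindFragDataLocation(program, 1, "triID");     // -> GL_COLOR_ATTACHMENT1
glLinkProgram(program);
// Option 2 (GLSL 3.30+): declare explicit locations in the shader instead,
// e.g. layout(location = 1) out uvec4 triID;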