I'm trying to implement color picking with FBOs. I have a multisampled FBO (fbo[0]) which I use to render the scene, and a non-multisampled FBO (fbo[1]) which I use for color picking.
The problem: when I try to read pixel data from fbo[1], everything goes well until the glReadPixels call, which sets the GL_INVALID_OPERATION flag. I've checked the manual and can't find the reason why.
The code to create the FBOs:
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_RGBA8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[2]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[3]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[3]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[2]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[0]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
My checker stays silent, so the FBOs are complete. Next, the picking code:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// bla, bla, bla
// do the rendering
unsigned int result;
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
int sb;
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glGetIntegerv(GL_SAMPLE_BUFFERS, &sb);
// glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
OGLChecker::getGlError();
std::cerr << "Sample buffers " << sb << std::endl;
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED, GL_UNSIGNED_INT, &result);
OGLChecker::getGlError();
return result;
the output:
Sample buffers 0
OpenGL Error : Invalid Operation
The interesting fact is that if I uncomment glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); then no error happens and pixels are read from the screen (but that's not what I need).
What may be wrong here?
Your problem is the format parameter. For an attachment with a one-channel integer internal format (your GL_R32UI renderbuffer), the correct parameter isn't GL_RED but GL_RED_INTEGER:
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &result);
Look at the OpenGL documentation wiki (emphasis mine):
...
format
Specifies the format of the pixel data. For transfers of depth, stencil, or depth/stencil data, you must use GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, or GL_DEPTH_STENCIL, where appropriate. For transfers of normalized integer or floating-point color image data, you must use one of the following: GL_RED, GL_GREEN, GL_BLUE, GL_RG, GL_RGB, GL_BGR, GL_RGBA, and GL_BGRA. For transfers of non-normalized integer data, you must use one of the following: GL_RED_INTEGER, GL_GREEN_INTEGER, GL_BLUE_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, and GL_BGRA_INTEGER. Even if no actual pixel transfer is made (data is NULL and no buffer is bound to GL_PIXEL_UNPACK_BUFFER), you must set this parameter correctly for the internal format of the destination image.
...
Note: the official reference page is incomplete/wrong.
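Put together, the read-back for the R32UI pick buffer would look like this (a sketch based on the question's code):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
glReadBuffer(GL_COLOR_ATTACHMENT0);
// make sure no pixel-pack buffer intercepts the transfer
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
GLuint result = 0;
// GL_RED_INTEGER matches the integer internal format; GL_UNSIGNED_INT
// matches the 32-bit unsigned channel of GL_R32UI
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1,
             GL_RED_INTEGER, GL_UNSIGNED_INT, &result);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);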
Given that it's "fixed" if you uncomment that line of code, I wonder if your driver is lying to you about GL_SAMPLE_BUFFERS being 0. From http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml:
GL_INVALID_OPERATION is generated if GL_READ_FRAMEBUFFER_BINDING is non-zero, the read framebuffer is complete, and the value of GL_SAMPLE_BUFFERS for the read framebuffer is greater than zero.
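For reference, glReadPixels can never read from a multisampled framebuffer directly. If GL_SAMPLE_BUFFERS really were nonzero for your read framebuffer, the usual workaround is to resolve into a single-sampled FBO first and read from that. A minimal sketch, where msaaFBO and resolveFBO are hypothetical handles whose color formats match:
// resolve the multisampled image into a single-sampled FBO
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// now the single-sampled copy can be read back safely
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFBO);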
If you're using NVIDIA's binary driver on Linux and have switched to a non-graphical virtual console (e.g. CTRL+ALT+F1) then any attempt to glReadPixels() will return GL_INVALID_OPERATION (0x502).
Solution: Switch back to the graphical console (usually CTRL+ALT+F7).
Related
How can I attach a depth buffer to my framebuffer object when I use GL_TEXTURE_2D_MULTISAMPLE? glCheckFramebufferStatus(msaa_fbo) in the code below returns 0. From the documentation this seems to mean that msaa_fbo is not a framebuffer, but it was created with glGenFramebuffers(1, &msaa_fbo);.
Additionally, if an error occurs, zero is returned.
GL_INVALID_ENUM is generated if target is not GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER or GL_FRAMEBUFFER.
The error is 1280, which I think means GL_INVALID_ENUM.
If I remove the depth buffer attachment, the program runs and renders (although without depth testing), but the error is still present. With the depth attachment included there is an error (1286) after every frame, which is GL_INVALID_FRAMEBUFFER_OPERATION. I don't know how to continue from here. Some examples I've looked at do roughly the same thing but seem to work.
glGenTextures(1, &render_target_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_target_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_target_texture, 0);
glGenRenderbuffers(1, &depth_render_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, depth_render_buffer);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, NUM_SAMPLES, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_render_buffer);
GLenum status = glCheckFramebufferStatus(msaa_fbo);
Most of the code is from this.
EDIT
The status check was wrong; it should have been GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);. Now there is no error when I don't include the depth attachment. When I include depth I get this error instead: GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE.
EDIT 2
The documentation claims that this happens when GL_TEXTURE_SAMPLES and GL_RENDERBUFFER_SAMPLES don't match:
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE is returned if the value of GL_RENDERBUFFER_SAMPLES is not the same for all attached renderbuffers; if the value of GL_TEXTURE_SAMPLES is not the same for all attached textures; or, if the attached images are a mix of renderbuffers and textures, the value of GL_RENDERBUFFER_SAMPLES does not match the value of GL_TEXTURE_SAMPLES.
But they do!
I've tested them like this:
std::cout << "GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE" << std::endl;
GLsizei gts, grs;
glGetTexLevelParameteriv(GL_TEXTURE_2D_MULTISAMPLE, 0, GL_TEXTURE_SAMPLES, &gts);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_SAMPLES, &grs);
std::cout << "GL_TEXTURE_SAMPLES: " << gts << std::endl;
std::cout << "GL_RENDERBUFFER_SAMPLES: " << grs << std::endl;
Output is:
GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE
GL_TEXTURE_SAMPLES: 8
GL_RENDERBUFFER_SAMPLES: 8
EDIT 3
Worked around this by using two textures instead of a texture and a renderbuffer like this:
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glGenTextures(1, &render_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, render_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_RGBA8, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, render_texture, 0);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, depth_texture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, NUM_SAMPLES, GL_DEPTH_COMPONENT, width, height, false);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, depth_texture, 0);
I'm still interested in what was wrong with the original implementation, so the question still stands.
You need to use fixed sample locations for the texture if you mix it with renderbuffers. From the spec, in the section "Framebuffer Completeness":
The value of TEXTURE_FIXED_SAMPLE_LOCATIONS is the same for all attached textures; and, if the attached images are a mix of renderbuffers and textures, the value of TEXTURE_FIXED_SAMPLE_LOCATIONS must be TRUE for all attached textures.
{FRAMEBUFFER_INCOMPLETE_MULTISAMPLE}
To avoid this error condition, the call that sets up the texture storage needs to be changed to:
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,
NUM_SAMPLES, GL_RGBA8, width, height, GL_TRUE);
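If in doubt, you can query the flag back after creating the texture (a small sketch; the target is the one from the question):
// should report GL_TRUE when the texture is mixed with renderbuffers
// in the same multisampled FBO
GLint fixedLocations = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D_MULTISAMPLE, 0,
                         GL_TEXTURE_FIXED_SAMPLE_LOCATIONS, &fixedLocations);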
I am writing a simple rendering program using OpenGL 3.3.
I have the following lines in my code (which should enable depth testing and culling):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
ExitOnGLError("ERROR: Could not set OpenGL depth testing options");
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
ExitOnGLError("ERROR: Could not set OpenGL culling options");
However, after rendering I see the following result:
As you can see, the depth test does not seem to work. What am I doing wrong? Where should I look for the problem?
Some information that may be useful:
In the projection matrix I have the near clipping plane set to 0.2 and the far plane to 3.2 (so the near plane is not zero).
I render a mesh and texture it using a simple method with glDrawArrays and two buffers for vertex and texture coordinates. Shaders are then used to display these arrays properly.
I do not calculate or draw normals.
Context creation code: http://pastebin.com/mRMUxPL1
UPDATE:
Finally got it working! As it turns out, I was not creating a buffer for depth rendering.
When I replaced this code (buffers initialization):
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
with this one:
glGenFramebuffers(1, &mainFrameBufferId);
glGenRenderbuffers(1, &renderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_RGBA8,
camera.imageSize.width,
camera.imageSize.height);
glBindFramebuffer(GL_FRAMEBUFFER, mainFrameBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER,
renderBufferId);
CV_Assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glGenRenderbuffers(1, &depthRenderBufferId);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferId);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_DEPTH24_STENCIL8,
camera.imageSize.width,
camera.imageSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferId);
everything started to work fine!
static int visualAttribs[] = { None };
                               ^^^^
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig( display, DefaultScreen(display), visualAttribs, &numberOfFramebufferConfigurations );
CV_Assert(fbConfigs != 0);
glXChooseFBConfig():
GLX_DEPTH_SIZE: Must be followed by a nonnegative minimum size specification. If this value is zero, frame buffer configurations with no depth buffer are preferred. Otherwise, the largest available depth buffer of at least the minimum size is preferred. The default value is 0.
Try setting visualAttribs[] to something like { GLX_DEPTH_SIZE, 16, None }
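Applied to your snippet, the setup would look roughly like this (a sketch; the rest of the context creation stays as in the pastebin):
// ask GLX for configs with at least a 16-bit depth buffer; with
// { None } alone, zero-depth configs are preferred and the resulting
// context has no depth buffer to test against
static int visualAttribs[] = { GLX_DEPTH_SIZE, 16, None };
int numberOfFramebufferConfigurations = 0;
GLXFBConfig* fbConfigs = glXChooseFBConfig(display, DefaultScreen(display),
                                           visualAttribs,
                                           &numberOfFramebufferConfigurations);
CV_Assert(fbConfigs != 0);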
Has anyone done this successfully? It seems that whatever index format I use in the stencil renderbuffer, glCheckFramebufferStatus(...) returns GL_FRAMEBUFFER_UNSUPPORTED.
I've successfully bound both a depth and a color renderbuffer, but whenever I try to do the same thing with my stencil buffer I get (as I said) GL_FRAMEBUFFER_UNSUPPORTED.
Here are snippets of my code:
// Create frame buffer
GLuint fb;
glGenFramebuffers(1, &fb);
// Create the stencil renderbuffer (note that I create the depth buffer the exact same way, and it works)
GLuint sb;
glGenRenderbuffers(1, &sb);
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, w, h);
// Attach color
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
// Attach stencil buffer
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);
// And here I get an GL_FRAMEBUFFER_UNSUPPORTED when doing glCheckFramebufferStatus()
Any ideas?
Note: the color attachment is a texture and not a renderbuffer.
Never use a free-standing stencil buffer. If you need stencil, always use a depth+stencil image format. Note that the stencil index formats are not required image formats.
Even though you're not using a depth buffer here, you still should use GL_DEPTH24_STENCIL8, which you should attach to GL_DEPTH_STENCIL_ATTACHMENT.
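Applied to the code above, that would look roughly like this (a sketch using the question's sb, fb, w and h):
// one packed depth/stencil renderbuffer instead of a free-standing
// stencil buffer; GL_DEPTH24_STENCIL8 is a required renderbuffer format
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, sb);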
You can use a stencil-only attachment on recent NVIDIA hardware/drivers:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, fboStencilTexture, 0);
There is still no support for separate depth and stencil attachments, though.
I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, thus I need a way to convert.
See this document that describes OpenGL Texture Compression. The sequence of steps is roughly as follows (this is hacky; buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;
glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0 );
glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, myrbo);
// If you need a Z Buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT
// render (i.e. create the data for the texture)
// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
0, 0, width, height, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);
// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char *)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);
// data now contains the compressed thing
If you'd use a PBO object for the texture, you'd be able to get away without the malloc().
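A sketch of that variant, assuming compressed_size has already been queried as above:
// pack-buffer variant: the compressed image lands in a PBO instead of
// client memory, so no malloc() is needed
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, 0);  // last argument is an offset into the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);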
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX-based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!
I'm trying to render a multisampled scene to a texture; here is the code I'm using. I'm getting a black screen. I check FBO completeness at the end of init, and both FBOs report complete.
void init_rendered_FBO() {
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1,&fbo_tex);
glBindTexture(GL_TEXTURE_2D, fbo_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width_screen, height_screen, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width_screen, height_screen);
glBindTexture (GL_TEXTURE_2D, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fbo_tex, 0);
int objectType;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE,&objectType);
ASSERT(objectType == GL_TEXTURE);
int objectName;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME,&objectName);
ASSERT(glIsTexture(objectName) == GL_TRUE);
int wid, hei, fmt;
glBindTexture(GL_TEXTURE_2D, objectName);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &wid);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &hei);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT,
&fmt);
glBindTexture(GL_TEXTURE_2D, 0);
std::string format = convertInternalFormatToString(fmt);
std::cout << "Color attachment 0: " << objectName << " " << wid << "x" << hei << ", " << format << std::endl;
ASSERT(checkFramebufferStatus());
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
// this is the init function that gets called.
void init_rendered_multisample_FBO() {
init_rendered_FBO();
// now I'm going to set up the additional component necessary to perform multisampling which is a new fbo
// that has a multisampled color buffer attached. I won't need a multisample depth buffer.
glGenFramebuffers(1,&multi_fbo);
glBindFramebuffer(GL_FRAMEBUFFER,multi_fbo);
glGenRenderbuffers(1,&renderbuffer_multi);
glBindRenderbuffer(GL_RENDERBUFFER,renderbuffer_multi);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA, width_screen,height_screen);
glBindRenderbuffer(GL_RENDERBUFFER,0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer_multi);
int objectType;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE,&objectType);
ASSERT(objectType == GL_RENDERBUFFER);
int objectName;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME,&objectName);
ASSERT(glIsRenderbuffer(objectName) == GL_TRUE);
glBindRenderbuffer(GL_RENDERBUFFER,objectName);
int wid,hei,fmt,sam;
glGetRenderbufferParameteriv(GL_RENDERBUFFER,GL_RENDERBUFFER_WIDTH,&wid);
glGetRenderbufferParameteriv(GL_RENDERBUFFER,GL_RENDERBUFFER_HEIGHT,&hei);
glGetRenderbufferParameteriv(GL_RENDERBUFFER,GL_RENDERBUFFER_INTERNAL_FORMAT,&fmt);
glGetRenderbufferParameteriv(GL_RENDERBUFFER,GL_RENDERBUFFER_SAMPLES,&sam);
glBindRenderbuffer(GL_RENDERBUFFER,0);
printf("Renderbuffer: %dx%d, fmt=%d, samples=%d\n",wid,hei,fmt,sam);
ASSERT(checkFramebufferStatus());
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
// this is called after rendering to multi_fbo
void resolve_multisample_FBO() {
glBindFramebuffer(GL_READ_FRAMEBUFFER, multi_fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
GLenum drawbuf = GL_COLOR_ATTACHMENT0;
glDrawBuffers(1,&drawbuf);
glBlitFramebuffer(0,0,width_screen,height_screen,0,0,width_screen,height_screen,GL_COLOR_BUFFER_BIT,GL_NEAREST);
}
Can you spot anything I left out? I think the issue may be with the glFramebufferRenderbuffer call. I tried switching the first argument to GL_READ_FRAMEBUFFER, but that did not fix it.
I check glGetError and there are no errors. If I had set something up wrong, surely something would have failed and given me GL_INVALID_ENUM or GL_INVALID_OPERATION to help me narrow down the issue.
The way I use this code: to enable multisampling, all I have to change is to bind multi_fbo when drawing and then call the resolve function, which binds and blits. After that, my fbo texture (which works fine when I render directly to it) should contain the multisampled render. But it's just black. I also call glEnable(GL_MULTISAMPLE) in another part of the initialization code.
I'm now going to attempt to blit a non-multisampled texture to another texture to make sure that works. Hopefully that will help me narrow down where I screwed up.
Update: So it turns out copying a regular FBO to another identical FBO (both have textures attached to color attachment 0) produces the same black screen. The blit simply doesn't work.
What is also strange is that once I attempt to blit, then draw the texture of the SOURCE fbo, it's still all black. It's like attempting to blit just ruins everything.
Does anybody know of or have any FBO blitting code? I can't find any, but I know people have gotten multisampled FBOs working.
Here's how you can help: If you have code which calls glBlitFramebuffer at any point, I would like to see the other calls for setting that operation up. I can't seem to get anything but black buffers and textures once I invoke it.
Update: Blitting to the back buffer works, even with multisampling! This of course requires absolutely no setup, since I just bind framebuffer 0. So the problem seems to be my setup of the FBO with the texture attachment that is supposed to be blitted to.
Well, it turns out I didn't do anything wrong other than failing to reset the framebuffer binding.
At the end of void resolve_multisample_FBO() I simply needed a glBindFramebuffer(GL_FRAMEBUFFER, 0);
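For reference, the corrected resolve function (a sketch of the code above with the missing reset added):
void resolve_multisample_FBO() {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, multi_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    GLenum drawbuf = GL_COLOR_ATTACHMENT0;
    glDrawBuffers(1, &drawbuf);
    glBlitFramebuffer(0, 0, width_screen, height_screen,
                      0, 0, width_screen, height_screen,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // the missing reset
}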
A terrible way to waste 3 hours, I assure you. But multisampling looks great and more than makes up for it.