Frame Buffer Object (FBO) and Render & Depth Buffers Relation - opengl

I saw many examples on the web (for example) which do the following:
Create and Bind FBO
Create and Bind BUFFERS (texture, render, depth, stencil)
Then, UnBind BUFFERS
To work with the FBO: Bind FBO, do the work, then UnBind FBO
However, they also Bind the texture BUFFER again for reads, writes, etc.
But I have NEVER seen a re-Bind of the other BUFFERS (render, depth, stencil). Why?
Example of BUFFERS creation and bind/unbind (the code below is just an example to show what I explained, and it works perfectly):
// create a framebuffer object; delete it when the program exits
glGenFramebuffersEXT(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// create a color renderbuffer object and attach it to the fbo
glGenRenderbuffersEXT(1, &rboId);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboId);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0); //UnBind
if(useDepthBuffer) {
    glGenRenderbuffersEXT(1, &rboIdDepth);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboIdDepth);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0); //UnBind
}
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, rboId);
if(useDepthBuffer)
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rboIdDepth);
// check FBO status
printFramebufferInfo();
bool status = checkFramebufferStatus();
if(!status)
    fboUsed = false;
Then, to work with the FBO:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// Do the work
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
1. Why don't we need to bind all the BUFFERS again while working with / drawing objects to the FBO?
2. What is going on under the hood here?
EDIT: attach -> Bind and detach -> UnBind

I don't know if I completely understood you, but the renderbuffers bound to the attachment points (GL_COLOR_ATTACHMENT...) are per-FBO state, and this FBO state remains unchanged. You only need to bind the FBO to tell OpenGL that this FBO is now in use; all of its state (that you set earlier) then takes effect again.

I think by
Then, deattach BUFFERS
you refer to this:
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
However, this is not a detach, it is an unbind, which means something different. The renderbuffer is still attached to the FBO. The binding merely selects the buffer object on which the following buffer object operations are performed. It's kind of like the with statement found in some languages.
The actual attaching of a buffer object, which doesn't have to be bound at the time, happens here:
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT,
rboId ); // rboID != 0
It would be detached by a matching
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT,
0 );
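To make the bind vs. attach distinction concrete, here is a minimal sketch using the question's handles (w and h stand in for TEXTURE_WIDTH/TEXTURE_HEIGHT):
// Bind = select this renderbuffer for subsequent renderbuffer commands
// (roughly a "with" block):
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboId);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, w, h); // acts on rboId
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0); // unbind; rboId keeps its storage

// Attach = record rboId into the FBO's own state; the renderbuffer does
// not need to be bound for this, and it stays attached until explicitly
// replaced or the FBO is deleted:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, rboId);

// Later, a single glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId) brings
// back every attachment made above; no renderbuffer re-binding is needed.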

Related

How to draw to offscreen color buffer in OpenGL and then draw the result to a sprite surface?

Here is a description of the problem:
I want to render some VBO shapes (rectangles, circles, etc) to an off screen framebuffer object. This could be any arbitrary shape.
Then I want to draw the result on a simple sprite surface as a texture, but not on the entire screen itself.
I can't seem to get this to work correctly.
When I run the code, I see the shapes being drawn all over the screen, but not on the sprite in the middle; it remains blank. Even though I seem to have set up the FBO with one color texture, everything still renders to the screen even when I bind the FBO.
What I want to achieve is these shapes being drawn to an off-screen texture (using an FBO, obviously) and then rendered on the surface of a sprite (or a cube, or whatever) drawn somewhere on the screen. Yet whatever I draw appears on the screen itself.
The tex(tex_object_ID) function is just a shorthand wrapper for OpenGL's standard texture bind; it selects a texture into the current rendering context.
No matter what I try I get this result: the sprite is blank, but all these shapes should appear there, not on the main screen. (Didn't I bind rendering to the FBO? Why is it still rendering to the screen?)
I think it is just the logistics of setting up the FBO in the right order that I am missing. Can anyone tell me what's wrong with my code?
Not sure why the background is red, as I clear it after I select the FBO. It is the sprite that should get the red background & shapes drawn on it.
/*-- Initialization -- */
GLuint texture = 0;
GLuint Framebuffer = 0;
GLuint GenerateFrameBuffer(int dimension)
{
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dimension, dimension, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
    glDrawBuffer(GL_COLOR);
    glReadBuffer(GL_COLOR);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
    return texture;
}
// Store framebuffer texture (should I store texture here or Framebuffer object?)
GLuint FramebufferHandle = GenerateFrameBuffer( 256 );
Standard OpenGL initialization code follows: memory is allocated, VBOs are created and bound, etc. This works correctly and there are no errors in initialization. I can render VBOs, polygons, textured polygons, lines, etc., to the standard double buffer with success.
Next, in my render loop I do the following:
// Possible problem?
// Should FramebufferHandle be passed here?
// I tried "texture" and "Framebuffer " as well, to no effect:
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
// Mini shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramMini.use();
//Clear frame buffer with blue color
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);// | GL_DEPTH_BUFFER_BIT);
// Set yellow to draw different shapes on the framebuffer
color = {1.0f,1.0f,0.0f};
// Draw several shapes (already correctly stored in VBO objects)
Memory.select(VBO_RECTANGLES); // updates uniforms
glDrawArrays(GL_QUADS, 0, Memory.renderable[VBO_RECTANGLES].indexIndex);
Memory.select(VBO_CIRCLES); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_CIRCLES].indexIndex);
Memory.select(VBO_2D_LIGHT); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_2D_LIGHT].indexIndex);
// Done writing to framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
Model.scale(10.0);
// Select texture shader to draw what was drawn on offscreen Framebuffer / texture
// Standard texture shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramTexture.use();
// This is a wrapper for bind texture to ID, just shorthand function name
tex(texture); // FramebufferHandle; // ? // maybe the mistake in binding to the wrong target object?
color = {0.5f,0.2f,0.0f};
Memory.select(VBO_SPRITE); // Select a square VBO for rendering sprites (works if any other texture is assigned to it)
// finally draw the sprite with Framebuffer's texture:
glDrawArrays(GL_TRIANGLES, 0, Memory.renderable[VBO_SPRITE].indexIndex);
I may have gotten the order of something completely wrong, or the FramebufferHandle/Framebuffer/texture object is not being passed correctly somewhere. I spent all day on this, and I hope someone more experienced than me can spot the mistake.
GL_COLOR is not an accepted value for glDrawBuffer
See OpenGL 4.6 API Compatibility Profile Specification, 17.4.1 Selecting Buffers for Writing, Table 17.4 and Table 17.5, page 628
Table 17.4, arguments to DrawBuffer when the context is bound to a default framebuffer: NONE, FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, BACK_RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT_AND_BACK, AUXi. The same arguments are valid for ReadBuffer, but only a single buffer is selected.
Table 17.5, arguments to DrawBuffer(s) and ReadBuffer when the context is bound to a framebuffer object: COLOR_ATTACHMENTi, where i may range from zero to the value of MAX_COLOR_ATTACHMENTS minus one.
This means that glDrawBuffer(GL_COLOR); and glReadBuffer(GL_COLOR); will generate a GL_INVALID_ENUM error.
Use GL_COLOR_ATTACHMENT0 instead.
Furthermore, glCheckFramebufferStatus(GL_FRAMEBUFFER) checks the completeness of the framebuffer object which is bound to the target.
This means that
glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
has to be done before
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Or you have to use:
glCheckNamedFramebufferStatus(Framebuffer, GL_FRAMEBUFFER); // direct state access, OpenGL 4.5+; checks completeness without binding
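Putting both fixes together, a minimal corrected sketch of the question's setup (same names as the snippet above): glDrawBuffer/glReadBuffer take GL_COLOR_ATTACHMENT0, and completeness is checked while the FBO is still bound.
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0); // GL_COLOR_ATTACHMENT0, not GL_COLOR
glReadBuffer(GL_COLOR_ATTACHMENT0);
// check completeness while this FBO is still bound to the target
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
One more thing worth noting, taken from the question's own comments rather than the answer: GenerateFrameBuffer returns texture, so the render loop's glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle) passes a texture name where a framebuffer name is expected; binding Framebuffer there (and keeping texture for the tex(...) call) matches the intent.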

OpenGL: failed to create frame buffer object

An OpenGL FBO is created, and a color buffer and a stencil buffer are bound to it:
glCheck(glBindFramebuffer(GL_FRAMEBUFFER, fbo));
// Delete the old renderbuffers if they were ever created.
if (clrRbo != 0)
    glDeleteRenderbuffers(1, &clrRbo);
if (stencilRbo != 0)
    glDeleteRenderbuffers(1, &stencilRbo);
// Create new renderbuffers with the new size.
int maxSize;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxSize);
glCheck(glGenRenderbuffers(1,&clrRbo));
glCheck(glBindRenderbuffer(GL_RENDERBUFFER,clrRbo));
glCheck(glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, (std::min)(maxSize,m_localvp.pix_width),(std::min)(maxSize, m_localvp.pix_height)));
glCheck(glGenRenderbuffers(1,&stencilRbo));
glCheck(glBindRenderbuffer(GL_RENDERBUFFER,stencilRbo));
glCheck(glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX, (std::min)(maxSize,m_localvp.pix_width),(std::min)(maxSize, m_localvp.pix_height)));
// Attach the new renderbuffers to the FBO.
glCheck(glBindFramebuffer(GL_FRAMEBUFFER, fbo));
glCheck(glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, clrRbo));
glCheck(glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRbo));
The problem that bothers me lies at:
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, (std::min)(maxSize,m_localvp.pix_width),(std::min)(maxSize, m_localvp.pix_height))
This call fails (with a GL_INVALID_OPERATION error) when certain internal formats are used, which causes the FBO creation to fail. Some machines work fine with GL_RGBA, some with GL_RGBA8, and some with both. What's the difference between GL_RGBA8 and GL_RGBA? How can I figure out which format is supported on the current machine?
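For context: GL_RGBA is an unsized internal format, while GL_RGBA8 explicitly requests 8 bits per channel. Only sized formats appear in the required color-renderable set, so whether an unsized format works for renderbuffer storage is implementation-dependent, which matches the varying behavior described above. One hedged way to probe at runtime is simply to attempt the allocation and check glGetError; the helper below is hypothetical, a minimal sketch:
// Hypothetical probe: returns true if this GL implementation accepts
// internalFormat for renderbuffer storage at the given size.
bool renderbufferFormatWorks(GLenum internalFormat, GLsizei w, GLsizei h)
{
    GLuint rbo = 0;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    while (glGetError() != GL_NO_ERROR) { } // drain any stale errors first
    glRenderbufferStorage(GL_RENDERBUFFER, internalFormat, w, h);
    bool ok = (glGetError() == GL_NO_ERROR);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
    glDeleteRenderbuffers(1, &rbo);
    return ok;
}
In practice, preferring sized formats (GL_RGBA8, GL_STENCIL_INDEX8, GL_DEPTH24_STENCIL8) avoids most of this variance.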

Binding a stencil render buffer to a frame buffer in opengl

Has anyone done this successfully? It seems that whatever index format I use in the stencil render buffer, glCheckFramebufferStatus(...) returns GL_FRAMEBUFFER_UNSUPPORTED.
I've successfully bound both depth and color render buffers, but whenever I try to do the same thing with my stencil buffer I get (as I said) GL_FRAMEBUFFER_UNSUPPORTED.
Here are snippets of my code:
// Create frame buffer
GLuint fb;
glGenFramebuffers(1, &fb);
// Create the stencil render buffer (note that I create the depth buffer the exact same way, and it works).
GLuint sb;
glGenRenderbuffers(1, &sb);
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, w, h);
// Attach color
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
// Attach stencil buffer
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);
// And here I get an GL_FRAMEBUFFER_UNSUPPORTED when doing glCheckFramebufferStatus()
Any ideas?
Note: The color attachment is a texture and not a renderbuffer.
Never use a free-standing stencil buffer. If you need stencil, always use a depth+stencil image format. Note that the stencil index formats are not required image formats.
Even though you're not using a depth buffer here, you should still use GL_DEPTH24_STENCIL8, which you should attach to GL_DEPTH_STENCIL_ATTACHMENT.
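A minimal sketch of that advice applied to the question's snippet (reusing its fb, sb, w, h):
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h); // packed depth+stencil
glBindFramebuffer(GL_FRAMEBUFFER, fb);
// one attach call covers both the depth and the stencil attachment points
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);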
You can use stencil-only attachments on recent NVIDIA hardware/drivers:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, fboStencilTexture, 0);
There is still no support for separate depth and stencil buffers, though.

Problems trying to create multiple Frame Buffer Objects in OSX but not Linux

I'm writing an application that contains a collection of small scatter plots (called ProjectionAxes). Rather than re-render the entire plot whenever new data is acquired, I have created a texture, render to that texture, and then render the texture to a quad. Each plot is an object and has its own texture, render buffer object, and frame buffer object.
This has been working great under Linux but not under OSX. When I run the program on Linux, each object creates its own texture, FBO, and RBO and everything renders fine. However, when I run the same code on OSX, the objects do not generate separate FBOs but appear to all be using the same FBO.
In my test program I create two instances of ProjectionAxes. On the first call to plot() the axes detect that the textures haven't been created and then generate them. During this generation process I display the integer values of the textureId, RBOid, and FBOid. This is the output I get when I run the program under Linux:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:2 rboID:2
And for OSX:
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:1
Creating a new frame buffer object fboID:1 rboID:1
ProjectionAxes::plot() --> Texture is invalid regenerating it!
Creating a new texture, textureId:2
Creating a new frame buffer object fboID:1 rboID:2
Notice that under Linux the two FBOs have different IDs, whereas under OSX they do not.
What do I need to do to indicate to OSX that I want each object to use its own FBO?
Here is the code I use to create my FBO:
void ProjectionAxes::createFBO(){
    std::cout<<"Creating a new frame buffer object";//<<std::endl;
    glDeleteFramebuffers(1, &fboId);
    glDeleteRenderbuffers(1, &rboId);
    // Generate and Bind the frame buffer
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
    // Generate and bind the new Render Buffer
    glGenRenderbuffersEXT(1, &rboId);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rboId);
    std::cout<<" fboID:"<<fboId<<" rboID:"<<rboId<<std::endl;
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, texWidth, texHeight);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
    // Attach the texture to the framebuffer
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, textureId, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rboId);
    // If the FrameBuffer wasn't created then we have a bigger problem. Abort the program.
    if(!checkFramebufferStatus()){
        std::cout<<"FrameBufferObject not created! Are you running the newest version of OpenGL?"<<std::endl;
        std::cout<<"FrameBufferObjects are REQUIRED! Quitting!"<<std::endl;
        exit(1);
    }
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
And here is the code I use to create my texture:
void ProjectionAxes::createTexture(){
    texWidth = BaseUIElement::width;
    texHeight = BaseUIElement::height;
    std::cout<<"Creating a new texture,";
    // Delete the old texture
    glDeleteTextures(1, &textureId);
    // Generate a new texture
    glGenTextures(1, &textureId);
    std::cout<<" textureId:"<<textureId<<std::endl;
    // Bind the texture, and set the appropriate parameters
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glGenerateMipmap(GL_TEXTURE_2D);
    // generate a new FrameBufferObject
    createFBO();
    // the texture should now be valid, set the flag appropriately
    isTextureValid = true;
}
Try temporarily commenting out the glDelete* calls. You're probably doing something strange with the handles.
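A minimal sketch of what "strange with the handles" could mean here, an assumption not confirmed by the question: if the IDs start out uninitialized, the very first glDelete* calls receive garbage names. Zero-initializing them makes the deletes harmless, since GL silently ignores the name 0. Mixing entry-point families (core glDeleteFramebuffers against glGenFramebuffersEXT, as in the snippet above) is also worth avoiding:
// In the class definition: 0 means "no object yet".
GLuint textureId = 0;
GLuint fboId = 0;
GLuint rboId = 0;

void ProjectionAxes::createFBO(){
    // Safe even on the first call: glDelete* ignores the name 0.
    glDeleteFramebuffersEXT(1, &fboId);   // match the EXT Gen calls
    glDeleteRenderbuffersEXT(1, &rboId);
    glGenFramebuffersEXT(1, &fboId);
    // ... rest as in the question ...
}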

glReadPixels from FBO fails with multisampling

I have an FBO with a color and a depth attachment which I render to and then read from using glReadPixels(), and I'm trying to add multisampling support to it.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The frame buffer object seems to have been created successfully and is reported as complete.
After rendering I'm trying to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the frame buffer is still constructed OK but glReadPixels() fails with a GL_INVALID_OPERATION error.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
You can't read the multisample buffer directly with glReadPixels, since it would raise a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but there is the problem of the "pixel ownership test". It is best to make another FBO. Let's assume you made another FBO and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example the nVidia GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Let's say I want to copy the entire surface
//Let's say I only want to copy the color buffer only
//Let's say I don't need the GPU to do filtering since both surfaces have the same dimension
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample
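For completeness, a hedged sketch of creating the single-sampled FBO that both answers blit into (names are illustrative; same EXT entry points as above):
// Single-sampled color renderbuffer + FBO to resolve the MSAA image into.
GLuint resolve_rbo, resolve_fbo;
glGenRenderbuffersEXT(1, &resolve_rbo);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, resolve_rbo);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);
glGenFramebuffersEXT(1, &resolve_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, resolve_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, resolve_rbo);
// Note: when the read framebuffer is multisampled, glBlitFramebufferEXT
// requires the source and destination rectangles to match exactly.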