What's wrong with my framebuffer? - c++

I'm creating a framebuffer object to be my gbuffer for deferred shading. I mainly learned from http://ogldev.atspace.co.uk/, and modified it to be a little more... sane. Here's the source code where I create the framebuffer:
/* Create the FBO */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Create the gbuffer textures */
glGenTextures(GBUFFER_NUM_TEXTURES, tex);
/* Create the color buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_COLOR]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[GBUFFER_COLOR], 0);
/* Create the normal buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_NORMAL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex[GBUFFER_NORMAL], 0);
/* Create the depth-stencil buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL], 0);
GLenum drawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
glReadBuffer(GL_NONE);
/* Check for errors */
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
error("In GBuffer::init():\n");
errormore("Failed to create Framebuffer, status: 0x%x\n", status);
fbo = 0;
return;
}
// restore default FBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
When I run this, however, status returns GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If it's not clear, I'm trying to create 3 gbuffers:
a 32-bit RGBA color buffer (I'd use 24-bit but I'm scared of alignment penalties),
a 32-bit RG normal buffer (each component using a 16-bit float, but I might get away with a signed short?)
a 24-bit Depth buffer packed with an 8-bit Stencil buffer
(total of 96 bits, or 12 bytes)
Possible problem areas that I can see might be using GL_FLOAT for the normal buffer, and GL_FLOAT for the depth-stencil buffer. I'd imagine GL_HALF_FLOAT would be more appropriate for normals, but that's not on the list of types that I can use with glTexImage2D. Similarly, I have no idea what type is most appropriate to use for a depth-stencil buffer.
What am I doing wrong?

Your use of GL_FLOAT is mostly irrelevant, since no pixel transfer actually happens.
You can supply anything you want there as long as it is a meaningful data type. While no pixel transfer happens when you pass NULL for data, GL still validates the pixel transfer data type against the set of valid types and will raise an error if you do something wrong. If it raises an error, the texture is left incomplete and thus cannot be used as an FBO attachment.
Here is where the problem lies: GL_FLOAT is not a meaningful data type for pixel transfer into a packed GL_DEPTH_STENCIL image format... it expects a packed data type such as GL_UNSIGNED_INT_24_8, or something really exotic like GL_FLOAT_32_UNSIGNED_INT_24_8_REV (a 64-bit packed floating-point depth + stencil format).
In any event, there are actually two components that need to be packed into your data type. GL_FLOAT can only describe one of the two components, because floating-point stencil buffers are meaningless.
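For example, keeping the rest of your allocation exactly as it is, the depth-stencil call would need to pass the packed type that matches GL_DEPTH24_STENCIL8, something like:
/* Allocate the depth-stencil texture; no data is uploaded, but the
   transfer type must still be valid for a packed depth+stencil format */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);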
By the way, this whole confusing mess about pixel transfer data types can be completely avoided if you use something like glTexStorage2D (...) to only allocate storage for the texture. glTexImage2D (...) does double duty: allocating storage for a texture LOD and providing a mechanism to initialize it with data. You really do not care about the latter part if you are drawing into the texture with an FBO, since that is the only place it gets any data from.
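For instance, assuming a GL 4.2+ context (or ARB_texture_storage) is available, the whole g-buffer allocation could be written roughly like this sketch, with no transfer format/type arguments at all:
/* Sketch: immutable storage, no pixel transfer format/type needed */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_COLOR]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_NORMAL]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG16F, width, height);
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL]);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
The glFramebufferTexture2D calls stay the same; only the allocation changes.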


Can I use OpenGL for off-screen rendering? [duplicate]

I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render an image and put it into a variable that holds an image, rather than displaying it? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th calls are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make its own lot of OpenGL calls, you need to clear your workspace now, to avoid interference with the object that you have created.
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
saveToFile();
So, if offscreen is true you actually want drawStuff to interfere with fbo, because you want it to render the scene into it.
Function saveToFile is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef);   // actually write the PNG to disk
    CFRelease(destRef);
    CGImageRelease(imageRef);
    CGContextRelease(contextRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
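If you don't need a platform image framework for the saving step, a minimal, portable sketch (my own example, assuming the RGBA8 color texture from above is attached to the currently bound FBO and that a binary PPM, which discards alpha, is good enough) could look like:
#include <cstdio>
#include <vector>

/* Hypothetical helper: read the currently bound read framebuffer into a PPM file. */
void writeFramebufferToPPM(const char *path, int width, int height)
{
    std::vector<unsigned char> rgba(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

    FILE *f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    /* OpenGL rows start at the bottom, PPM rows at the top, so flip vertically */
    for (int y = height - 1; y >= 0; --y)
        for (int x = 0; x < width; ++x)
            fwrite(&rgba[(y * width + x) * 4], 1, 3, f);
    fclose(f);
}
You would call it right after drawStuff(), while the FBO is still bound.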

glReadPixels() sets GL_INVALID_OPERATION error

I'm trying to implement color picking with an FBO. I have a multisampled FBO (fbo[0]) which I use to render the scene, and a non-multisampled FBO (fbo[1]) which I use for color picking.
The problem is: when I try to read pixel data from fbo[1], everything goes well until the glReadPixels call, which sets the GL_INVALID_OPERATION flag. I've checked the manual and can't find the reason why.
The code to create FBO:
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_RGBA8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1]);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numSamples, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[2]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, resolution[0], resolution[1]);
glBindRenderbuffer(GL_RENDERBUFFER, rbo[3]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, resolution[0], resolution[1]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[3]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[2]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo[1]);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[0]);
OGLChecker::checkFBO(GL_DRAW_FRAMEBUFFER);
My checker stays silent, so the FBOs are complete. Next, the picking code:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// bla, bla, bla
// do the rendering
unsigned int result;
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[1]);
int sb;
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glGetIntegerv(GL_SAMPLE_BUFFERS, &sb);
// glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
OGLChecker::getGlError();
std::cerr << "Sample buffers " << sb << std::endl;
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED, GL_UNSIGNED_INT, &result);
OGLChecker::getGlError();
return result;
the output:
Sample buffers 0
OpenGL Error : Invalid Operation
The interesting fact is that if I uncomment glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); then no error occurs and pixels are read from the screen (but that is not what I need).
What may be wrong here?
Your problem is the format parameter. For an image with a one-channel integer internal format (GL_R32UI here), the correct parameter isn't GL_RED, but GL_RED_INTEGER:
glReadPixels(pos.x(), resolution.y() - pos.y(), 1, 1, GL_RED_INTEGER, GL_UNSIGNED_INT, &result);
Look at the OpenGL documentation wiki (emphasis mine):
...
format
Specifies the format of the pixel data. For transfers of depth, stencil, or depth/stencil data, you must use GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, or GL_DEPTH_STENCIL, where appropriate. For transfers of normalized integer or floating-point color image data, you must use one of the following: GL_RED, GL_GREEN, GL_BLUE, GL_RG, GL_RGB, GL_BGR, GL_RGBA, and GL_BGRA. For transfers of non-normalized integer data, you must use one of the following: GL_RED_INTEGER, GL_GREEN_INTEGER, GL_BLUE_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, and GL_BGRA_INTEGER. Even if no actual pixel transfer is made (data is NULL and no buffer is bound to GL_PIXEL_UNPACK_BUFFER), you must set this parameter correctly for the internal format of the destination image.
...
Note: the official reference page is incomplete/wrong.
Given that it's "fixed" if you uncomment that line of code, I wonder if your driver is lying to you about GL_SAMPLE_BUFFERS being 0. From http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml:
GL_INVALID_OPERATION is generated if GL_READ_FRAMEBUFFER_BINDING is non-zero, the read framebuffer is complete, and the value of GL_SAMPLE_BUFFERS for the read framebuffer is greater than zero.
If you're using NVIDIA's binary driver on Linux and have switched to a non-graphical virtual console (e.g. CTRL+ALT+F1) then any attempt to glReadPixels() will return GL_INVALID_OPERATION (0x502).
Solution: Switch back to the graphical console (usually CTRL+ALT+F7).

Binding a stencil render buffer to a frame buffer in opengl

Has anyone done this successfully? It seems that whatever index format I use in the stencil render buffer, glCheckFramebufferStatus(...) returns GL_FRAMEBUFFER_UNSUPPORTED.
I've successfully bound both depth and color render buffers, but whenever I try to do the same thing with my stencil buffer I get (as I said) GL_FRAMEBUFFER_UNSUPPORTED.
Here is snippets of my code:
// Create frame buffer
GLuint fb;
glGenFramebuffers(1, &fb);
// Create stencil render buffer (note that I create the depth buffer the exact same way, and it works)
GLuint sb;
glGenRenderbuffers(1, &sb);
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, w, h);
// Bind the frame buffer and attach the color texture
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
// Attach stencil buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);
// And here I get a GL_FRAMEBUFFER_UNSUPPORTED when doing glCheckFramebufferStatus()
Any ideas?
Note: the color attachment is a texture and not a renderbuffer.
Never use a free-standing stencil buffer. If you need stencil, always use a depth+stencil image format. Note that the stencil index formats are not required image formats.
Even though you're not using a depth buffer here, you still should use GL_DEPTH24_STENCIL8, which you should attach to GL_DEPTH_STENCIL_ATTACHMENT.
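As a minimal sketch of that fix, reusing the names from the question (fb, cb, and sb as created above, and assuming w and h are the buffer dimensions):
// Allocate a combined depth+stencil renderbuffer instead of a stencil-only one
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);
// Attach it to the combined attachment point alongside the color texture
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);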
You can use stencil-only attachments on recent NVIDIA hardware/drivers:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, fboStencilTexture, 0);
There is still no support for separate depth and stencil attachments, though.

Convert texture to GL_COMPRESSED_RGBA

I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, so I need a way to convert.
See this document that describes OpenGL texture compression. The sequence of steps is something like the following (this is hacky; buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;
glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0 );
glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, myrbo);
// If you need a Z Buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT
// render (i.e. create the data for the texture)
// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
0, 0, width, height, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);
// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &compressed_size);
char *data = (char *)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);
// data now contains the compressed image; free(data) when you're done with it
If you'd use a PBO object for the texture, you'd be able to get away without the malloc().
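A rough sketch of that PBO variant (my own addition, not from the linked document): with a buffer bound to GL_PIXEL_PACK_BUFFER and mytex still bound, glGetCompressedTexImage writes into the buffer instead of client memory, and its last argument becomes an offset into that buffer.
// Read the compressed image into a pixel-pack buffer instead of malloc'd memory
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, (void *)0);
// Map it only if you need the bytes on the CPU side after all
void *compressed = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);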
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX-based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!

glReadPixels from FBO fails with multisampling

I have an FBO with a color and a depth attachment, which I render to and then read from using glReadPixels(), and I'm trying to add multisampling support to it.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The frame buffer object seems to have been created successfully and is reported as complete.
After rendering I'm trying to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling is disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the frame buffer is still constructed OK but glReadPixels() fails with a GL_INVALID_OPERATION error.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
You can't read the multisample buffer directly with glReadPixels, since it would raise a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but there is the problem of the "pixel ownership test". It is best to make another FBO. Let's assume you made another FBO and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example on the NVIDIA GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Let's say I want to copy the entire surface
//Let's say I only want to copy the color buffer
//Let's say I don't need the GPU to do filtering since both surfaces have the same dimension
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample