Drawing in a framebuffer but getting only black - OpenGL

Windows, using GLEW.
I'm trying to render offscreen and save the image OpenGL rendered to a PNG file.
I followed a highly rated answer on Stack Overflow:
How to render offscreen on OpenGL?
But the PNG file I get is completely black.
Here's my code relating to it:
glutCreateWindow(argv[0]);
if(GLEW_OK!=glewInit())
{
return -1;
}
initScene();
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER,render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, viewport.w, viewport.h);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glClear(GL_COLOR_BUFFER_BIT); // clear the color buffer
glMatrixMode(GL_MODELVIEW); // indicate we are specifying camera transformations
glLoadIdentity(); // make sure transformation is "zero'd"
//draw...
//glBegin(GL_POINTS) glColor3f, glVertex2f
//glFlush();
glFinish();
/*glutDisplayFunc(myDisplay);
glutPostRedisplay();*/
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
savePNG(outputPNGName,0,0,viewport.w,viewport.h);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
How can I solve this problem?
Thank you.
savePNG (related code):
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *)image);

There are at least two problems in this code:
GL_RGB8 is not a valid format for a renderbuffer. From the glRenderbufferStorage() man page:
internalformat specifies the internal format to be used for the renderbuffer object's storage and must be a color-renderable, depth-renderable, or stencil-renderable format.
Table 8.13 in the latest spec document (4.5, downloadable from https://www.opengl.org/registry) lists all formats, with a column showing which of them are color-renderable. RGB8 does not have a checkmark in that column. You can use GL_RGBA8 instead, which is color-renderable.
You may also want to check out the glCheckFramebufferStatus() function, which allows you to check if your framebuffer setup is valid.
Your savePNG() code uses glReadPixels(), which reads data from the current read framebuffer, while the rest of your code only sets the draw framebuffer. Before calling savePNG(), add this call to set the read framebuffer to your FBO:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
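Putting both fixes together, the relevant part of the setup and readback might look roughly like this. This is only a sketch reusing the names from the question (viewport, savePNG, outputPNGName), with savePNG() assumed to call glReadPixels() as shown above:
GLuint fbo, render_buf;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, viewport.w, viewport.h); // color-renderable format
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the FBO is unusable; report the error and bail out
}
// ... draw the scene ...
glFinish();
// bind the FBO as the READ framebuffer before reading back
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
savePNG(outputPNGName, 0, 0, viewport.w, viewport.h); // calls glReadPixels() internally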

Related

Draw OpenGL renderbuffer to screen

I created a renderbuffer that is then modified in OpenCL.
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then draw my OpenCL creation, i.e. the buffer, to the screen? I could bind it to a texture and draw a quad the size of my window, but I'm not sure that is the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!
The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the linked man page.
This is preferable to writing your own shader and rendering a screen-sized quad. It is not only simpler and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or a texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend using a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel-output limited.
Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.
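For illustration, here is a minimal sketch of that texture-based setup, reusing the frameBuffer name and the 600x600 size from the question (colorTexture is a hypothetical name):
GLuint colorTexture;
glGenTextures(1, &colorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 600, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); /* allocate storage only */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* no mipmaps needed */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTexture, 0);
A texture created this way can be shared with OpenCL in much the same way as the renderbuffer, and it can then be sampled directly when drawing to the screen.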

OpenGL ES can't render to texture

I wrote this code:
First, I generate a texture and a depth buffer, then bind them to a framebuffer.
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
GLint max;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE,&max);
if(max<=esContext->width||max<=esContext->height)
{
printf("Too big!\n");
getchar();
}
glGenFramebuffers(1,&framebuffer1);
glGenTextures(1,&texturel);
glBindTexture(GL_TEXTURE_2D,texturel);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,esContext->width,esContext->height,0,GL_RGBA,GL_UNSIGNED_BYTE,NULL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glGenRenderbuffers(1,&depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER,GL_DEPTH_COMPONENT24,esContext->width,esContext->height);
glGenRenderbuffers(2,renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,renderbuffer[0]);
glRenderbufferStorage(GL_RENDERBUFFER,GL_RGBA8,esContext->width,esContext->height);
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,texturel,0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_RENDERBUFFER,depthbuffer);
Then I render a box to the framebuffer and try to render the texture to my screen:
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glViewport(0,0,esContext->width,esContext->height);
glClearColor(1.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->draw_box(&s);
glBindFramebuffer(GL_FRAMEBUFFER,0);
glClearColor(1.0,1.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->render(texturel,0,0,esContext->width/2,esContext->height/2);
eglSwapBuffers( esContext->eglDisplay, esContext->eglSurface );
At the end, the result looks like random data.
I have tried many ways to render to texture, even copying the code in the OpenGL ES books, but the result is still wrong.
Assuming that you're using ES 2.0, the formats you are using for your texture and renderbuffer are not valid for render targets.
In ES 2.0, the only depth format that is valid for render targets is DEPTH_COMPONENT16.
For color render targets, the only valid formats are RGBA4, RGB5_A1, and RGB565.
Therefore, to get this to work with standard ES 2.0, you can for example use:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
esContext->width, esContext->height, 0,
GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
...
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
esContext->width, esContext->height);
There are extensions to ES 2.0 to add support for the formats you are trying to use:
OES_rgb8_rgba8 adds support for RGB8 and RGBA8 as a color render target format.
OES_depth24 adds support for DEPTH_COMPONENT24 as a depth render target format.
But you will have to test for the presence of these extensions before attempting to use these formats.
Anytime you have problems with FBO rendering, it's always a good idea to call glCheckFramebufferStatus() to validate that the framebuffer status is valid.
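As a sketch (assuming C and the usual GLES2/gl2ext.h headers), the extension and completeness checks could look something like this:
/* requires <string.h> for strstr() */
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int hasRGBA8   = ext != NULL && strstr(ext, "GL_OES_rgb8_rgba8") != NULL;
int hasDepth24 = ext != NULL && strstr(ext, "GL_OES_depth24") != NULL;

GLenum colorFormat = hasRGBA8   ? GL_RGBA8_OES             : GL_RGB565;
GLenum depthFormat = hasDepth24 ? GL_DEPTH_COMPONENT24_OES : GL_DEPTH_COMPONENT16;

/* ... allocate the renderbuffers with colorFormat/depthFormat and attach them ... */

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    printf("Framebuffer is incomplete!\n");
}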

Draw the contents of a renderbuffer object

I don't quite understand how renderbuffer objects work. For example, if I want to show what is in the renderbuffer, do I necessarily have to do render-to-texture?
GLuint fbo, color_rb, depth_rb;
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT, color_rb);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depth_rb);
if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)!=GL_FRAMEBUFFER_COMPLETE_EXT)return 1;
glBindFramebuffer(GL_FRAMEBUFFER,0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
Any ideas?
You are not going to see anything when you draw to an FBO instead of the default framebuffer; that is part of the point of FBOs.
Your options are:
1. Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer).
2. Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / a quad) if you want to see the results.
Since 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolution of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need to do Multisample Anti-Aliasing, because you would have to implement that in a shader otherwise and multisample texturing was added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color (requires DX10 hardware) or depth (requires DX10.1 hardware) textures.
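For completeness, here is option 2 sketched in the same fixed-function style as the question's code. colorTexture stands for a hypothetical texture attached to the FBO instead of color_rb, and identity projection/modelview matrices are assumed:
glBindFramebuffer(GL_FRAMEBUFFER, 0);       /* draw to the window again */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, colorTexture); /* the FBO's color texture attachment */
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);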

OpenGL glGenerateMipmap and framebuffers

I'm trying to wrap my head around generating mipmaps on the fly, and I'm reading this post, which includes the following code: http://www.g-truc.net/post-0256.html
//Create the mipmapped texture
glGenTextures(1, &ColorbufferName);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmap(GL_TEXTURE_2D); // /!\ Allocate the mipmaps /!\
...
//Create the framebuffer object and attach the mipmapped texture
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(
GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorbufferName, 0);
...
//Commands to actually draw something
render();
...
//Generate the mipmaps of ColorbufferName
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glGenerateMipmap(GL_TEXTURE_2D);
My questions:
Why does glGenerateMipmap need to be called twice in the case of render-to-texture?
Does it have to be called like this every frame?
If, for example, I import a diffuse 2D texture, I only need to call it once after loading it into OpenGL, like this:
GLCALL(glGenTextures(1, &mTexture));
GLCALL(glBindTexture(GL_TEXTURE_2D, mTexture));
GLint format = (colorFormat == ColorFormat::COLOR_FORMAT_RGB ? GL_RGB : colorFormat == ColorFormat::COLOR_FORMAT_RGBA ? GL_RGBA : GL_RED);
GLCALL(glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, &textureData[0]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
I suspect it is because the textures are redrawn every frame and the mipmap generation uses its content in the process but I want confirmation of this.
3. Also, if I render to my G-buffer and then immediately glBlitFramebuffer it to the default framebuffer, do I need to bind the texture and call glGenerateMipmap like this?
GLCALL(glBindTexture(GL_TEXTURE_2D, mGBufferTextures[GBuffer::GBUFFER_TEXTURE_DIFFUSE]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glReadBuffer(GL_COLOR_ATTACHMENT0 + GBuffer::GBUFFER_TEXTURE_DIFFUSE));
GLCALL(glBlitFramebuffer(0, 0, mWindowWidth, mWindowHeight, 0, 0, mWindowWidth, mWindowHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR));
As explained in the post you link to, "[glGenerateMipmap] does actually two things which is maybe the only issue with it: It allocates the mipmaps memory and generate the mipmaps."
Notice that what precedes the first glGenerateMipmap call is a glTexImage2D call with a NULL data pointer. Those two calls combined will simply allocate the memory for all of the texture's levels. The data they contain at this point is garbage.
Once you have an image loaded into the texture's first level, you will have to call glGenerateMipmap a second time to actually fill the smaller levels with downsampled images.
Your guess is right: glGenerateMipmap is called every frame because the image rendered into the texture's first level changes every frame (since it is being rendered to). If you don't call the function, the smaller mipmap levels will never be updated; if you were to map such a texture, you would see your uninitialized smaller mipmap levels when viewing it from far enough away.
No. Mipmaps are only needed if you intend to map the texture to triangles with a texture filtering mode that uses mipmaps. If you're only dealing with the first level of the texture, you don't need to generate the mipmaps. In fact, if you never map the texture, you can use a renderbuffer instead of a texture in your framebuffer.
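In other words, whether glGenerateMipmap is needed follows from the minification filter you sample the texture with; a quick sketch:
/* Mipmapped sampling: the smaller levels are actually read, so they must be
   regenerated whenever the base level changes (e.g. every frame for an FBO target). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

/* Base-level-only sampling: no mipmap generation is needed at all. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);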

Using OpenGL's glBindFramebuffer seems to have no effect

I am getting into FBOs (framebuffer objects) in OpenGL. Right now, I'm simply trying to render something to an FBO, then use the texture associated with it to render that image to the screen. I have been working on this problem for hours today and yesterday. I've tried copying two different examples as closely as I can, and yet I still have the same problem. I am absolutely stuck.
It seems like the framebuffer object is not actually being bound. In the code, I have two sets of glClear() and glClearColor() commands: the first for drawing to the framebuffer, and the second for drawing to the screen. However, when I comment out the second set, the first set clearly affects the screen. If the FBO is bound, shouldn't it receive those commands, and not affect the actual output to the screen directly?
To begin, I use glewInit(), and then I create an FBO, and then a Renderbuffer object and a texture to associate with it, and do all of the necessary steps to put it all together:
glewInit();
int width=512,height=512;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glGenTextures(1, &fboTex);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_INT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex, 0);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
assert(status==GL_FRAMEBUFFER_COMPLETE);
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
Then, I draw to the framebuffer object.
glClearColor(0.5,0.5,0.5,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glColor4f(1.0,0,0,1);
glBegin(GL_QUADS);
glVertex2f(100,100);
glVertex2f(200,100);
glVertex2f(200,250);
glVertex2f(100,200);
glEnd();
I then unbind each of the following three objects:
glBindFramebuffer(GL_FRAMEBUFFER,0);
glBindRenderbuffer(GL_RENDERBUFFER,0);
glBindTexture(GL_TEXTURE_2D,0);
Then I attempt to draw the texture to the window:
glEnable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTextureEXT(GL_TEXTURE_2D, fboTex);
glBegin(GL_QUADS);
glTexCoord2f(0,0);glVertex3f(-.5,-.5,0);
glTexCoord2f(1,0);glVertex3f(.5,-.5,0);
glTexCoord2f(1,1);glVertex3f(.5,.5,0);
glTexCoord2f(0,1);glVertex3f(-.5,.5,0);
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
This has got to be either some really simple mistake or misunderstanding that somehow evaded eradication when I retyped all this twice, or a driver issue. My driver is supposed to support OpenGL 3.2...
Any help on this frustrating issue would be great.
EDIT: I found out what I was ultimately doing wrong. I didn't realize that glColor commands affect any drawing done, regardless of whether you have a framebuffer bound at the time or not. I needed to change the glColor back to (1,1,1) after drawing to the FBO in order to render the FBO's texture later with all of its color.
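A minimal sketch of that fix, based on the drawing code above:
/* drawing into the FBO */
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);   /* red quad */
/* ... glBegin(GL_QUADS) ... glEnd() ... */

/* back to the default framebuffer */
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   /* reset the current color, otherwise the
                                        texture is modulated by the leftover red */
glBindTexture(GL_TEXTURE_2D, fboTex);
/* ... draw the textured quad ... */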
Without a full code example it's difficult to see what's wrong. For kickstarting your FBO endeavors, I provide a minimal example here: https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo