Rendering to framebuffer and screen - OpenGL

I am trying to render to an fbo and then use that texture as an input to my second render pass for post processing, but it seems that glClear and glClearColor affect the texture that has been rendered to. How can I make them only affect the display buffer?
My code looks something like this:
UseRenderingShaderProgram();
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderWorld();
// render to screen
UsePostProcessingShaderProgram();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0, 0.0, 0.0, 1.0); // <== the texture appears to get cleared by these two lines.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderWorld();
glfwSwapBuffers();

If I had to make an educated guess, you've defined your texture to use mipmap minification filtering. After rendering to a texture through an FBO, only one mipmap level has been defined. Without all of the selected mipmap levels present, the texture is incomplete and will not deliver data. The easiest solution is to disable mipmapping for this texture by setting its minification filter parameter:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Also, make sure that you're correctly unbinding and binding the texture attached to the FBO:
Unbind the texture before binding the FBO (the texture can safely stay attached to the FBO the whole time).
Unbind the FBO before binding the texture as an image source for rendering.
With those two changes (binding order and mipmap levels), your texture should appear.
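A minimal sketch of both fixes combined, assuming the color texture attached to fb is named texId (a hypothetical name):
// texture setup: no mipmaps, so the single level rendered via the FBO is complete
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);          // unbound before the FBO is bound
// first pass: render into the FBO (the texture stays attached the whole time)
glBindFramebuffer(GL_FRAMEBUFFER, fb);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderWorld();
// second pass: unbind the FBO first, then bind the texture as input
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, texId);
renderWorld();                            // post-processing pass, as in your code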

Related

OpenGL - Is there an easier way to fill a window with a texture, instead of using a VBO, etc.?

My OpenGL window is drawn like this:
glClearColor(0.3f, 0.4f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
I want to use a texture to fill up the window.
Is there an easier way to do that, instead of creating another VBO, EBO besides the one I'm already using for my triangles?
After all, there is already glClearColor for filling the background.
The most direct and generally most efficient way to draw a texture to the window is by using glBlitFramebuffer().
To use this, you need to create an FBO, and attach your texture texId to it:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, texId, 0);
Note that the code above binds the FBO to GL_READ_FRAMEBUFFER, since we want to use it as the source of the blit.
Then, to copy the content:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // if not already bound
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This is for the case where texture and window have the same size. Otherwise, you can specify different sizes in the first 8 arguments, and may want to use GL_LINEAR for the last parameter.
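For instance, a scaling blit from a texW x texH texture to a winW x winH window might look like this (the size names are placeholders):
// source rectangle is the texture size, destination is the window size;
// GL_LINEAR gives smooth filtering when the sizes differ
glBlitFramebuffer(0, 0, texW, texH,
                  0, 0, winW, winH,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);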
Using glBlitFramebuffer() has a few advantages over drawing a window sized textured quad:
It needs fewer API calls.
You don't need to write a shader for the copy operation.
You don't need to bind a different shader program, which can reduce overhead.
The driver may have a more optimized code path for the operation, compared to using an app provided shader and draw call.
Many GPUs have dedicated units for blitting data, which can be more efficient than the programmable shader units. They can also potentially run in parallel to the general purpose programmable part of the GPU, allowing the copy to be executed in parallel with rendering. If that applies, the performance gain can be very substantial.
In one word: No.
Well, in legacy OpenGL there'd be glDrawPixels, but this function was never well supported and is dead slow on most implementations. You'd better forget that I told you about it. Also, it's been removed from modern OpenGL and never existed in OpenGL-ES.
There are already some answers to this question, but I want to add some more alternatives, for completeness:
1. attributeless rendering
With modern GL, you can render completely without vertex attributes. You can put the 4 2D coordinates of the full-screen rect directly as a const array into the vertex shader and access them via gl_VertexID:
// VERTEX SHADER
#version 150 core
out vec2 v_tex;
const vec2 pos[4]=vec2[4](vec2(-1.0, 1.0),
vec2(-1.0,-1.0),
vec2( 1.0, 1.0),
vec2( 1.0,-1.0));
void main()
{
v_tex=0.5*pos[gl_VertexID] + vec2(0.5);
gl_Position=vec4(pos[gl_VertexID], 0.0, 1.0);
}
// FRAGMENT SHADER
#version 150 core
in vec2 v_tex;
uniform sampler2D texSampler;
out vec4 color;
void main()"
{
color=texture(texSampler, v_tex);
}
If your texture exactly matches the resolution of your viewport (so you are not scaling the texture at all), you can completely remove the v_tex varying and use color=texelFetch(texSampler, ivec2(gl_FragCoord.xy)) in the FS, as @datenwolf suggested in his comment.
In any case, you still need some VAO bound, even if no attributes are enabled in it. So this method requires you to do the following once during initialization:
Create and compile the shaders and link them to the program
Create a new VAO name by a glGenVertexArrays() call
And for drawing, you have to:
Bind the texture you want to draw
Use the program
Bind the (still empty) VAO
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
You might also be able to simply re-use the currently bound VAO. Since the shader does not access any attributes, it does not matter what data your VBOs provide or which attributes are currently enabled.
This method requires you to switch the shader, which isn't exactly cheap either, so it might be better to just switch the buffer bindings and keep the current shader. But you might need to switch the shader anyway.
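On the C side, the whole thing then boils down to a few calls. A minimal sketch, assuming the two shaders above are linked into a program prog and the texture is bound to unit 0 (all names hypothetical):
// one-time setup: even without attributes, a core profile requires a VAO
GLuint emptyVAO;
glGenVertexArrays(1, &emptyVAO);
// per frame: bind texture, program and the (empty) VAO, then draw the strip
glBindTexture(GL_TEXTURE_2D, texId);                      // texture on unit 0
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "texSampler"), 0); // sampler reads unit 0
glBindVertexArray(emptyVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                    // gl_VertexID picks the corners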
2. nvidia-specific extension
NVidia provides a specific extension for the task of drawing a texture to the screen: NV_draw_texture. This introduces the glDrawTextureNV() function, which allows drawing a texture without changing any other GL state. Quoting from the overview section of the extension spec:
While this functionality can be obtained in unextended OpenGL by drawing a
rectangle and using a fragment shader to do a texture lookup,
DrawTextureNV() is likely to have better power efficiency on
implementations supporting this extension. Additionally, use of this
extension frees the application developer from having to set up
specialized shaders, transformation matrices, vertex attributes, and
various other state in order to render the rectangle.
The drawback of this method is of course that it is nvidia-specific, so it is probably of less practical use in a general GL application.
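For completeness, a call might look roughly like the following. The signature is taken from my reading of the extension spec, so treat it as an assumption and verify it against the spec before relying on it:
// draw texId (hypothetical) over the whole window at depth 0, mapping the
// full texture; sampler object 0 means the texture's own sampling state
glDrawTextureNV(texId, 0,
                0.0f, 0.0f, (GLfloat)width, (GLfloat)height, // window rect x0,y0,x1,y1
                0.0f,                                        // z
                0.0f, 0.0f, 1.0f, 1.0f);                     // texture rect s0,t0,s1,t1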
You can render your texture to a fullscreen quad using an orthographic projection:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDisable(GL_LIGHTING);
// Set up orthographic projection
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
// Render a quad
glBegin(GL_QUADS);
glTexCoord2f(0,0); glVertex2f(0,0);
glTexCoord2f(0,1); glVertex2f(0, height);
glTexCoord2f(1,1); glVertex2f(width, height);
glTexCoord2f(1,0); glVertex2f(width, 0);
glEnd();
// Reset Projection Matrix
glPopMatrix();
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
Render this into your framebuffer instead of calling glClearColor.

OpenGL ES can't render to texture

I wrote this code:
First, I generate a texture and a depth buffer, then bind them to a framebuffer.
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
GLint max;
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE,&max);
if(max<=esContext->width||max<=esContext->height)
{
printf("Too big!\n");
getchar();
}
glGenFramebuffers(1,&framebuffer1);
glGenTextures(1,&texturel);
glBindTexture(GL_TEXTURE_2D,texturel);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,esContext->width,esContext->height,0,GL_RGBA,GL_UNSIGNED_BYTE,NULL);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glGenRenderbuffers(1,&depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER,GL_DEPTH_COMPONENT24,esContext->width,esContext->height);
glGenRenderbuffers(2,renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER,renderbuffer[0]);
glRenderbufferStorage(GL_RENDERBUFFER,GL_RGBA8,esContext->width,esContext->height);
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D,texturel,0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_RENDERBUFFER,depthbuffer);
Then I render a box to the framebuffer and try to render the texture to my screen:
glBindFramebuffer(GL_FRAMEBUFFER,framebuffer1);
glViewport(0,0,esContext->width,esContext->height);
glClearColor(1.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->draw_box(&s);
glBindFramebuffer(GL_FRAMEBUFFER,0);
glClearColor(1.0,1.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
g_renderer->render(texturel,0,0,esContext->width/2,esContext->height/2);
eglSwapBuffers( esContext->eglDisplay, esContext->eglSurface );
At the end, the result looks like random data.
I have tried many ways to render to texture, even copying the code from the OpenGL ES books, but the result is still wrong.
Assuming that you're using ES 2.0, the formats you are using for your texture and renderbuffer are not valid for render targets.
In ES 2.0, the only depth format that is valid for render targets is DEPTH_COMPONENT16.
For color render targets, the only valid formats are RGBA4, RGB5_A1, and RGB565.
Therefore, to get this to work with standard ES 2.0, you can for example use:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
esContext->width, esContext->height, 0,
GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
...
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
esContext->width, esContext->height);
There are extensions to ES 2.0 to add support for the formats you are trying to use:
OES_rgb8_rgba8 adds support for RGB8 and RGBA8 as a color render target format.
OES_depth24 adds support for DEPTH_COMPONENT24 as a depth render target format.
But you will have to test for the presence of these extensions before attempting to use these formats.
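A simple way to test for them is to search the extension string. This is a sketch only: strstr() (from <string.h>) matches substrings, so a fully robust check should compare whole tokens:
// pick render target formats based on the extensions actually present
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int has_rgba8   = ext && strstr(ext, "GL_OES_rgb8_rgba8") != NULL;
int has_depth24 = ext && strstr(ext, "GL_OES_depth24")    != NULL;
GLenum colorFormat = has_rgba8   ? GL_RGBA8_OES             : GL_RGBA4;
GLenum depthFormat = has_depth24 ? GL_DEPTH_COMPONENT24_OES : GL_DEPTH_COMPONENT16;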
Anytime you have problems with FBO rendering, it's always a good idea to call glCheckFramebufferStatus() to validate that the framebuffer status is valid.
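For example, right after attaching the texture and renderbuffer:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO incomplete, status = 0x%04x\n", status);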

Draw the contents of the renderbuffer object

I don't quite understand how renderbuffer objects work. For example, if I want to display what is in the renderbuffer, do I necessarily have to render to a texture first?
GLuint fbo, color_rb, depth_rb;
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT, color_rb);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depth_rb);
if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)!=GL_FRAMEBUFFER_COMPLETE_EXT)return 1;
glBindFramebuffer(GL_FRAMEBUFFER,0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
any idea?
You are not going to see anything when you draw to an FBO instead of the default framebuffer, that is part of the point of FBOs.
Your options are:
Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer)
Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / quad) if you want to see the results.
Since 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolution of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need to resolve Multisample Anti-Aliasing, because otherwise you would have to implement the resolve in a shader, and multisample texturing was only added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color textures (requires DX10 hardware) or multisample depth textures (requires DX10.1 hardware).
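To illustrate that multisample case: resolving a hypothetical multisampled FBO msaaFbo into a single-sampled FBO resolveFbo (names assumed) is just another blit, with the restriction that source and destination rectangles must have the same dimensions:
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);    // multisampled source
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); // single-sampled destination
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);  // samples are averaged during the blit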

Using OpenGL's glBindFramebuffer seems to have no effect

I am getting into FBOs (Framebuffer Objects) in OpenGL. Right now, I'm simply trying to render something to an FBO, then use the texture associated with it to render that image to the screen. I have been working on this problem for hours today and yesterday. I've tried copying as closely as I can two different examples, and yet I still have the same problem. I am absolutely stuck.
It seems like the framebuffer object is not actually being bound. In the code, I have two sets of glClear() and glClearColor() commands: the first for drawing to the framebuffer, and the second for drawing to the screen. However, when I comment out the second set, the first set is clearly affecting the screen. If the FBO were bound, shouldn't it receive those commands and not affect the actual output to the screen directly?
To begin, I use glewInit(), and then I create an FBO, and then a Renderbuffer object and a texture to associate with it, and do all of the necessary steps to put it all together:
glewInit();
int width=512,height=512;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glGenTextures(1, &fboTex);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D, fboTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_INT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTex, 0);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
assert(status==GL_FRAMEBUFFER_COMPLETE);
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
Then, I draw to the framebuffer object.
glClearColor(0.5,0.5,0.5,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glColor4f(1.0,0,0,1);
glBegin(GL_QUADS);
glVertex2f(100,100);
glVertex2f(200,100);
glVertex2f(200,250);
glVertex2f(100,200);
glEnd();
I then unbind each of the following three objects:
glBindFramebuffer(GL_FRAMEBUFFER,0);
glBindRenderbuffer(GL_RENDERBUFFER,0);
glBindTexture(GL_TEXTURE_2D,0);
Then I attempt to draw the texture to the window:
glEnable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTextureEXT(GL_TEXTURE_2D, fboTex);
glBegin(GL_QUADS);
glTexCoord2f(0,0);glVertex3f(-.5,-.5,0);
glTexCoord2f(1,0);glVertex3f(.5,-.5,0);
glTexCoord2f(1,1);glVertex3f(.5,.5,0);
glTexCoord2f(0,1);glVertex3f(-.5,.5,0);
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
This has got to be either some really simple mistake or misunderstanding that somehow evaded eradication when I retyped all this twice, or a driver issue? My driver is supposed to be able to run OpenGL 3.2...
Any help on this frustrating issue would be great.
EDIT: I found out what I was ultimately doing wrong. I didn't realize that glColor commands affect any drawing done, regardless of whether you have a framebuffer bound at the time or not. I needed to set glColor back to (1,1,1) after drawing to the FBO, in order to render the FBO's texture later with all of its color.
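In other words, a one-line fix before drawing the textured quad:
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // reset, so GL_MODULATE no longer tints the texture red
glBindTexture(GL_TEXTURE_2D, fboTex);
// ... draw the textured quad as above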
Without a full code example it's difficult to see what's wrong. For kickstarting your FBO endeavors I provide https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo

render transparent textures

I have a texture with some transparent portions that I want to apply over an object whose faces use an opaque material (or colour, if that's simpler), but the final object comes out transparent. I want the final object to be totally opaque.
Here is my code:
First I set the material:
glDisable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT);
glColor4f(0.00, 0.00, 0.00, 1.00);
glColorMaterial(GL_FRONT_AND_BACK, GL_DIFFUSE);
glColor4f(0.80, 0.80, 0.80, 1.00);
glColorMaterial(GL_FRONT_AND_BACK, GL_SPECULAR);
glColor4f(0.01, 0.01, 0.01, 1.00);
glEnable(GL_COLOR_MATERIAL);
Then I set up the VBOs
glBindTexture(GL_TEXTURE_2D, object->texture);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindBuffer(GL_ARRAY_BUFFER, object->object);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), ver_offset);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), tex_offset);
glNormalPointer(GL_FLOAT, sizeof(Vertex), nor_offset);
And finally I draw the object
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_ZERO);
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
I tried passing different arguments to glBlendFunc() to no avail. I've uploaded the source here: http://dpaste.com/83559/
UPDATE
I get this, but I want this (or without texture this).
The 2nd and the 3rd pictures are produced with glm. I studied the sources, but since my knowledge of OpenGL is limited, I didn't understand much.
If you're trying to apply two textures to your object, you really want to set up two textures and use multitexturing to achieve this look. Your current method draws the geometry twice, which is a huge waste of performance.
Multitexturing samples from two texture units while drawing the geometry only once. You can do this with shaders (the way things really should be done), or you can still use the fixed-function pipeline (see: http://bluevoid.com/opengl/sig00/advanced00/notes/node62.html); a sketch follows below.
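A fixed-function sketch of that idea, with hypothetical texture names baseTex and decalTex (with vertex arrays you would additionally need glClientActiveTexture() to supply a second set of texture coordinates):
// unit 0: the opaque base texture, modulated by lighting/vertex color
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, baseTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
// unit 1: the decal, blended on top according to its own alpha channel
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, decalTex);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
// draw the geometry once; no blending needed, the result stays opaque
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);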
AFAIK the blend function takes fragment colors (as opposed to texture colors). So if you draw the object a second time with blending, the triangles become transparent.
What you want to accomplish could be done using multitexturing.
This is just a wild guess, as you have failed to provide any screenshots of what the actual problem is, but why do you disable the depth test? Surely you want to enable depth testing on the first pass with a standard GL_LESS and then do the second pass with GL_EQUAL?
Edit:
ie
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST); // ie do not disable
glDepthFunc( GL_LESS ); // only pass fragments that have a z value less than the ones already in the z-buffer (i.e. are in front of any previously drawn pixels)
glDisable(GL_TEXTURE_2D);
glBlendFunc(GL_ONE, GL_ZERO);
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);
// for the second pass we only want to blend pixels where they occupy the same position
// as in the previous pass. Therefore set to equal and only pixels that match the
// previous pass will be blended together.
glDepthFunc( GL_EQUAL );
glEnable(GL_TEXTURE_2D);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_BLEND);
Try disabling blending and drawing a single pass with your texture function set to GL_DECAL instead of GL_MODULATE. This will blend between the vertex color and the texture color based on the texture’s alpha channel, but leave the alpha channel set to the vertex color.
Note that this will ignore any lighting applied to the vertex color anywhere the texture was opaque, but this sounds like the intended effect, based on your description.
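That is, something along these lines for a single opaque pass:
glDisable(GL_BLEND);                                       // one pass, fully opaque
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);  // texture alpha selects texture vs. vertex color
glDrawArrays(GL_TRIANGLES, 0, object->num_faces);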
It will be much simpler with pixel shaders. Otherwise, I think you need multi-pass rendering or more than one texture.
You can find comprehensive details here: http://www.opengl.org/resources/faq/technical/transparency.htm