I have a framebuffer with a depth attachment and 4 color attachments backed by 4 textures.
I draw some stuff into it, unbind the framebuffer afterwards, and use the 4 textures in a fragment shader (deferred lighting).
Later I want to draw some more stuff on the screen, using the depth buffer from my framebuffer. Is that possible?
I tried binding the framebuffer again and specifying glDrawBuffer(GL_FRONT), but it does not work.
Like Nicol already said, you cannot use an FBO's depth buffer as the default framebuffer's depth buffer directly.
But you can copy the FBO's depth buffer over to the default framebuffer using the EXT_framebuffer_blit extension (which has been core since GL 3.0):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
If this extension is not supported (which I doubt, given that you already have FBOs), you can use a depth texture for the FBO's depth attachment and render it to the default framebuffer using a textured quad and a simple pass-through fragment shader that writes into gl_FragDepth. Though this might be slower than just blitting it over.
I just experienced that copying a depth buffer from a renderbuffer to the main (context-provided) depth buffer is highly unreliable when using glBlitFramebuffer, simply because you cannot guarantee that the formats match. Using GL_DEPTH_COMPONENT24 as my internal depth-texture format just didn't work on my AMD Radeon 6950 (latest driver), because Windows (or the driver) decided to use the equivalent of GL_DEPTH24_STENCIL8 as the depth format for my front/back buffer, although I did not request any stencil precision (stencil bits set to 0 in the pixel format descriptor). When using GL_DEPTH24_STENCIL8 for my framebuffer's depth texture, the blitting worked as expected, but I had other issues with that format. The first attempt worked fine on NVIDIA cards, so I'm pretty sure I did not mess things up.
What works best (in my experience) is copying via shader:
The fragment program (a.k.a. pixel shader) [GLSL]:
#version 150
uniform sampler2D depthTexture;
in vec2 texCoords; //texture coordinates from vertex-shader
void main( void )
{
    gl_FragDepth = texture(depthTexture, texCoords).r;
}
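The texCoords varying comes from the fullscreen-quad vertex shader; a minimal sketch of one (assumed here, it was not part of the original answer):
#version 150
// Sketch of a matching fullscreen-quad vertex shader (assumption, not from the answer).
in vec2 position;       // quad corners in NDC: (-1,-1), (1,-1), (-1,1), (1,1)
out vec2 texCoords;

void main( void )
{
    texCoords = position * 0.5 + 0.5;       // map NDC to [0,1] texture space
    gl_Position = vec4(position, 0.0, 1.0);
}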
The C++ code for copying looks like this:
glDepthMask(GL_TRUE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_DEPTH_TEST); //has to be enabled: depth writes only happen while the depth test is enabled
glBindFramebuffer(GL_FRAMEBUFFER, 0);
depthCopyShader->Enable();
DrawFullscreenQuad(depthTextureIndex);
I know the thread is old, but it was one of my first results when googling my issue, so I want to make it as complete as possible.
You cannot attach images (color or depth) to the default framebuffer. Similarly, you can't take images from the default framebuffer and attach them to an FBO.
I'm still new to OpenGL 3 and I'm trying to set up multipass rendering.
In order to do that, I created an FBO, generated several textures, and attached them to it with:
unsigned index_col = 0;
for (index_col = 0; index_col < nbr_textures; ++index_col)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + index_col, texture_colors[index_col], 0);
It works well (I like to believe I'm doing things right here!).
My comprehension problem comes afterwards, when I try to render into the first texture offscreen, then into the second texture, and then render to the screen.
To render to a particular texture, I'm using:
FBO.bind();
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBindTexture(GL_TEXTURE_2D, FBO.getColorTextures(0)); //getColorTextures(0) is texture_colors[0]
Then I draw using my shader, and afterwards I would like to do:
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindTexture(GL_TEXTURE_2D, FBO.getColorTextures(1));
and after all of that:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
RenderToScreen(); // assuming this function render to screen with a quad
My question is:
What is the difference between glDrawBuffer and glBindTexture? Is it necessary to call both? Aren't the textures attached to the framebuffer? (I can't actually test it, because I'm still trying to make it work...)
Thanks!
glBindTexture connects a texture to a texture sampler unit for reading. glDrawBuffer selects the destination for drawing writes. If you want to select a texture as the rendering target, use glDrawBuffer to select the color attachment the texture is attached to, and make sure that none of the texture sampler units it is currently bound to is used as a shader input! The result of creating a feedback loop is undefined.
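To make the distinction concrete, here is a rough sketch of the two passes, reusing the FBO helper names from the question (everything else is illustrative):
// Pass 1: render INTO the FBO's first color attachment.
FBO.bind();                                             // from the question's code
glDrawBuffer(GL_COLOR_ATTACHMENT0);                     // selects the write destination
// ... draw the scene; no glBindTexture is needed for the attachment itself ...

// Pass 2: render to the screen, READING from that texture.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, FBO.getColorTextures(0));  // selects the sampler input
// ... draw the fullscreen quad with a shader that samples texture unit 0 ...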
glDrawBuffer selects the color buffer (in this case of the framebuffer object) that you will write to:
When colors are written to the frame buffer, they are written into the color buffers specified by glDrawBuffer
If you wanted to draw to multiple color buffers, you would have written:
GLenum attachments[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, attachments);
while glBindTexture binds a texture to a texture unit.
They serve different purposes - remember that OpenGL and its current rendering context behave as a state machine.
I use OpenGL (version 3.3) multisampling, in the Qt framework.
The rendered image is a star-like shape.
I use a fragment shader to render the shape's intensity onto a black canvas.
I do not use OpenGL primitives.
When multisampling is not used and the rendering output canvas has a smaller resolution (say 400x400 pixels), I can see aliasing along the star shape's edges.
If I increase the resolution, say to 1500x1500 pixels, the aliasing is much less obvious. So I think multisampling should be able to improve the result.
Now, in order to keep the speed up, I do not increase the resolution of the render buffer. Instead, I decided to try multisampling to reduce the aliasing.
int num_samples = 2; // 4; // I guess the maximum for most graphics cards is 8
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample( GL_TEXTURE_2D_MULTISAMPLE, num_samples, GL_R11F_G11F_B10F, width, height, true );
GLuint fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, tex, 0 );
glViewport(0,0, width, height);
glEnable(GL_MULTISAMPLE);
// ... some code
// draw a rectangle, as it is 2D image processing
// OpenGL render program release
// now convert multisample frame buffer fbo to a regular frame buffer qopenglFramebufferOjbectP
// qopenglFramebufferOjbectP is QOpenGLFramebufferObject
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, qopenglFramebufferOjbectP->handle());
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
The code as a whole doesn't seem to be totally wrong, since the output is the desired shape; only the anti-aliasing effect is missing.
The problem is:
Whether I use multisampling (with sample counts of 2, 4, or 8) or not, the results are the same. I specifically wrote the results out to images and compared them side by side.
But if multisampling took effect, the results should show less aliasing than when multisampling is not used.
I use a fragment shader to render the shape's intensity onto a black canvas. I do not use OpenGL primitives.
The basic idea of multisampling is that you're doing the same number of fragment shader invocations as non-multisampling, but a particular fragment only writes the outputs to specific samples in each pixel based on the geometry of the primitives you render. You are rendering what I presume is a quad; any apparent geometry is a fiction created by the fragment shader. Hence you have gained no benefit from the technique.
Imposter-based techniques don't usually benefit from multisampling.
There are ways to handle this, of course. The most obvious is to turn on per-sample shading, but this also effectively turns multisampling into super-sampling. That is, it isn't cheap.
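For reference, enabling per-sample shading is just a couple of calls (GL 4.0+; shown only as a sketch):
// Force the fragment shader to run once per covered sample instead of once per pixel.
// This gives correct results for shader-generated shapes, at super-sampling cost.
glEnable(GL_SAMPLE_SHADING);
glMinSampleShading(1.0f);   // shade 100% of the samples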
A better idea would be to explicitly output a coverage mask with gl_SampleMask. It's not easy and it depends on how you generate your geometry. The idea is to, for each sample that a fragment covers, detect if that sample is within the imposter-generated geometry. If so, set that sample's mask to 1; if not, set it to 0. Thus, you generate 1 output value, and it is broadcast to the non-zero samples.
Both this and per-sample shading require GL 4.0+ (or ARB_sample_shading).
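A minimal sketch of the sample-mask idea for a circular imposter (the uniforms, the circle test, and how the sample positions get uploaded are all assumptions, not something from the question):
#version 400
// Sketch only: per-sample coverage for a circular imposter.
// samplePositions[] is assumed to be filled on the CPU via
// glGetMultisamplefv(GL_SAMPLE_POSITION, i, ...); center/radius describe the shape.
uniform vec2  samplePositions[8];  // sample offsets within a pixel, in [0,1]
uniform int   numSamples;
uniform vec2  center;              // hypothetical: shape center in window coordinates
uniform float radius;              // hypothetical: shape radius in pixels

out vec4 fragColor;

void main()
{
    int mask = 0;
    for (int i = 0; i < numSamples; ++i) {
        // Window-space position of this sample within the current pixel
        vec2 samplePos = floor(gl_FragCoord.xy) + samplePositions[i];
        vec2 d = samplePos - center;
        if (dot(d, d) <= radius * radius)
            mask |= (1 << i);
    }
    gl_SampleMask[0] = mask;   // only the covered samples receive the output
    fragColor = vec4(1.0);     // one value, broadcast to those samples
}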
It seems to be difficult to find information about how to access depth and stencil buffers in the shaders of successive render passes.
In the first render pass, I not only render color and depth information but also use stencil operations to count objects. I use a multi-render-target FBO for this, with color buffers and a combined depth-stencil buffer attached. All of them are textures (no renderbuffer objects involved).
In a second render pass (when rendering to the screen), I want to access the previously computed stencil index on a per-pixel basis (though not necessarily at the same pixel I'm drawing), similar to how you would access the previously rendered color buffer to apply a post-processing effect.
But I fail to bind the depth stencil texture in the second pass to my shader program as a uniform. At least only black values are read from it, so I guess it's not bound correctly.
Is it possible to bind a depth stencil texture to a texture unit for use in a shader program? Is it impossible to access depth and stencil textures using "normal" samplers? Is it possible with some "special" sampler? Does it depend on the interpolation mode set on the texture or a similar setting?
If not, what is the best (fastest) way to copy the stencil information into a separate color texture between these two render passes? Maybe with a third render pass that draws a single color using the stencil test (I only need a binary version of the stencil buffer in the final render pass; to be precise, I need to test whether the value is zero).
The setup for the textures being used by the intermediate FBO is as follows:
// The textures for color information (GL_COLOR_ATTACHMENT*):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// The texture for depth and stencil information (GL_DEPTH_STENCIL_ATTACHMENT*):
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w, h, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, 0);
In the second render pass, I currently only try to "debug" the contents of all textures. Therefore I setup the shader with these values:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, <texture>);
glUniform1i(texLocation, 0);
and let the shader program simply copy the texture to the screen:
uniform sampler2D tex;
in vec2 texCoord;
out vec4 fragColor;
void main() {
fragColor = texture(tex, texCoord);
}
The results are as follows:
When <texture> above refers to one of my color textures, I see the color output rendered in the first render pass, which is what I expect.
When <texture> above refers to the depth stencil texture, the shader doesn't do anything (I see the color with which I clear the screen).
When copying the depth-stencil texture to the CPU and examining it, I see both the depth and stencil information in the packed 24 + 8 bit data, as expected.
I have no experience with using stencil as a texture, but you may want to take a look at the following extension :
http://www.opengl.org/registry/specs/ARB/stencil_texturing.txt
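With that extension (core in GL 4.3), switching the combined texture to return stencil values is a one-liner; a sketch with an assumed texture name:
// depthStencilTex: the GL_DEPTH24_STENCIL8 texture from the question (name assumed).
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
// In the shader, sample it through a usampler2D; the stencil index is in the .r component.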
Another option could be to create a view of the texture using
http://www.opengl.org/registry/specs/ARB/texture_view.txt
Or you could count objects without the stencil buffer, perhaps using MRT and additive blending on a second render target, using:
http://www.opengl.org/registry/specs/EXT/draw_buffers2.txt
But I'm afraid those options are not included in pure GL 3.3...
I am trying to configure a framebuffer object with a depth buffer that has 32 bits, render to it, and then merely copy the resulting color buffer to the system color buffer.
Can someone show me how to code this?
You can attach a texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D,
                       textureId, level);
to an FBO and then use this texture to draw a full-screen quad on the screen.
Rendering to a texture and then using a fullscreen quad:
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
By using GL_DEPTH_COMPONENT24 you will get the maximum depth precision that the hardware typically uses.
In the fragment shader (for the fullscreen quad) you can read from that texture and use it as a grayscale image.
Here is another related question: How to visualize a depth texture in OpenGL?
On the other hand, if you want a 32-bit buffer, maybe it is easier to use a GL_R32F texture and calculate the depth values in the fragment shader. That way you will have better control over the process.
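A rough sketch of that GL_R32F route (all names here are illustrative, not from the question):
// A single-channel 32-bit float texture used as a "manual" depth output.
GLuint depthTex32;
glGenTextures(1, &depthTex32);
glBindTexture(GL_TEXTURE_2D, depthTex32);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, depthTex32, 0);
// In the fragment shader, write gl_FragCoord.z (or a linearized depth of your
// choice) to the output variable bound to this attachment.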
I have created a texture using
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, CONSENSUS_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
This texture is used in other code and filled with depth. Now I want to copy the depth values to an RGBA texture (doesn't matter which color channel).
How can I do this?
If it needs to be fast, I'd say render an orthographic quad the size of the texture and use a shader to read from the depth texture and write to the target texture.
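A rough sketch of that render-to-texture copy (names are illustrative; only the rectangle-texture target comes from the question):
// Target FBO with an RGBA texture (rgbaTex, assumed to exist) of the same size as the depth texture.
GLuint copyFbo;
glGenFramebuffers(1, &copyFbo);
glBindFramebuffer(GL_FRAMEBUFFER, copyFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, rgbaTex, 0);

// Bind the depth texture as input for the copy shader.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, depthTex);   // depthTex: the texture from the question
glViewport(0, 0, width, height);
// Draw a fullscreen quad; in the shader, read through a sampler2DRect
// (rectangle textures use non-normalized texel coordinates) and write the
// depth value into one (or all) of the color channels.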
If performance doesn't matter that much, you can use PBOs (which might even be faster depending on your render pipeline, but which stall the CPU). Here's an overview of said PBOs
I don't know of any built-in OpenGL method to do this.