I know that we can attach a layered texture which is:
A mipmap level of a 1D/2D texture array
A mipmap level of a 3D texture
A mipmap level of a cube map texture / cube map array texture
to an FBO and do layered rendering.
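For example, attaching a whole 2D array texture at mip level 0 makes the framebuffer layered. A minimal sketch (the texture size and layer count are made up):
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 512, 512, 4, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// No layer argument: all 4 layers of mip level 0 become the color attachment.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);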
The OpenGL wiki also says "Layered rendering is the process of having the GS send specific primitives to different layers of a layered framebuffer."
Can the default framebuffer be a layered framebuffer? That is, can I bind a 3D texture to the default framebuffer and use a geometry shader to render to different layers of that texture?
I tried writing such a program, but the screen stays blank and I am not sure whether this is supposed to work at all.
If it is not, what actually happens when I bind the default framebuffer for layered rendering?
If you use the default framebuffer for layered rendering, everything you draw will go straight to the default framebuffer, behaving as if it were all in the same layer.
OpenGL 4.4 Core Specification, section 9.8 "Layered Framebuffers", p. 296:
A framebuffer is considered to be layered if it is complete and all of its populated attachments are layered. When rendering to a layered framebuffer, each fragment generated by the GL is assigned a layer number.
[...]
A layer number written by a geometry shader has no effect if the framebuffer is not layered.
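For completeness, the layer is selected by writing to gl_Layer in the geometry shader. A minimal pass-through sketch (GLSL 1.50) that routes every primitive to layer 1; per the quoted paragraph, the write is ignored unless the bound framebuffer is layered:
#version 150 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; i++) {
        gl_Layer = 1; // target layer; no effect on a non-layered framebuffer
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}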
Related
I use OpenGL 3.2 to render shadow maps. For this, I construct a framebuffer that renders to a depth texture.
To attach the texture to the framebuffer, I use:
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shdw_texture, 0 );
This works great. After rendering the light's view, my GLSL shader can sample the depth texture to determine whether a fragment is visible from the light.
The problem I am trying to solve now, is to have many more shadow maps, let's say 50 of them. In my main render pass I don't want to be sampling from 50 different textures. I could use an atlas, but I wondered: could I pass all these shadow maps as slices from a 2D texture array?
So, somehow create a GL_TEXTURE_2D_ARRAY with a DEPTH format, and bind one layer of the array to the framebuffer?
Can a framebuffer's DEPTH attachment be backed by a texture array layer, instead of just a depth texture?
In general, you need to distinguish whether you want to create a layered framebuffer (see Layered Images) or whether you want to attach a single layer of a multilayered texture to a framebuffer.
Use glFramebufferTexture3D to attach a single layer (the zoffset argument) of a 3D texture (GL_TEXTURE_3D) to a framebuffer, or use glFramebufferTextureLayer to attach a single layer of a three-dimensional or array texture. In either case the last argument specifies the layer of the texture.
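For the 50-shadow-map case above, that would look roughly like this (the texture size and the loop variable i are illustrative):
GLuint shadowArray;
glGenTextures(1, &shadowArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, shadowArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 50, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Attach slice i as the depth attachment; re-attach with a new i per light.
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowArray, 0, i);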
Layered attachments can be attached with glFramebufferTexture. See Layered rendering.
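Attaching the whole array at once (reusing shadowArray from the sketch above) makes the framebuffer layered instead:
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowArray, 0);
// The destination slice is then chosen per primitive by writing gl_Layer
// in a geometry shader.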
glFramebufferTexture allows one to bind an entire cubemap as a color attachment for layered rendering. In turn, glReadBuffer then allows one to bind said entire cubemap as a read buffer.
I want to render a scene to the non-zero mip levels of a cubemap texture. I'm using layered rendering to render not to one face, but to the entire thing in one go. However, the shader used for this uses the 0th mip level of that same texture. Since I don't think I can expose the texture to a shader and to a framebuffer attachment at the same time, I'm rendering to a different texture and copying the contents of that texture to my original texture's desired mip level.
Right now I'm doing this with a pass-through shader, which is pretty slow since it's layered rendering and thus uses a geometry shader, and it would be better to use an API function. However, glCopyTexSubImage2D only allows cubemap faces, and neither it nor glCopyTexSubImage3D seems to accept cubemaps as input. Apart from 4.5-specific functions such as glCopyTextureSubImage3D, is there any way to retrieve an entire cubemap from the framebuffer into a cubemap texture? I'm also aware that glCopyImageSubData exists, but something at the feature level of glFramebufferTexture is preferable (so 3.2).
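For reference, the glCopyImageSubData route mentioned above (core in 4.3, so above the desired 3.2 feature level) treats the six faces as layers, so one call can copy the whole mip. A sketch, with hypothetical names:
// Copy mip 0 of srcCube into mip level `mip` of dstCube, all six faces at once.
glCopyImageSubData(srcCube, GL_TEXTURE_CUBE_MAP, 0,   0, 0, 0,
                   dstCube, GL_TEXTURE_CUBE_MAP, mip, 0, 0, 0,
                   mipWidth, mipHeight, 6);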
I have a problem with different visual results when using an FBO compared to the default framebuffer:
I render my OpenGL scene into a framebuffer object, because I use it for color picking. The thing is: if I render the scene directly to the default framebuffer, the output on the screen is quite smooth, and the edges of my objects look a bit as if they were anti-aliased. When I render the scene into the FBO and afterwards use the output to texture a quad that spans the whole viewport, the objects have very hard edges where you can easily see every single colored pixel that belongs to them.
(Screenshots: "Good" shows the smooth default-framebuffer output, "Bad" the hard-edged FBO output.)
At the moment I have no idea what the reason for this could be. I am not explicitly enabling any kind of anti-aliasing.
System:
Fedora 18 x64
Intel HD Graphics 4000 and Nvidia GT 740M (same result)
Edit1:
As stated by Damon and Steven Lu, there is probably some kind of anti-aliasing enabled by the system by default. So far I couldn't figure out how to disable this feature.
The thing is that I was just curious why this setting only had an effect on the default framebuffer and not on the one handled by the FBO. To get anti-aliased edges for the FBO too, I will probably have to implement my own AA method.
Once you draw your scene into a custom FBO, the externally defined MSAA level no longer applies. You must configure your FBO with multisample texture or renderbuffer attachments, setting the number of samples when you create them.
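A sketch (the sample count and sizes are illustrative): create multisampled attachments, render into them, then resolve into the default framebuffer with a blit:
GLuint msFbo, msColor, msDepth;
glGenRenderbuffers(1, &msColor);
glBindRenderbuffer(GL_RENDERBUFFER, msColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glGenRenderbuffers(1, &msDepth);
glBindRenderbuffer(GL_RENDERBUFFER, msDepth);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glGenFramebuffers(1, &msFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msColor);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msDepth);
// ... render the scene into msFbo ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);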
In DirectX you are able to have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap level of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses a texture's image for its color buffer and a texture's image for its depth buffer. Nor is there anything stopping you from later (so long as you're not rendering to that FBO while doing this) sampling from the texture as a depth texture. The FBO does not suddenly own these images exclusively or something.
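A sketch of that pattern (texture sizes and names are illustrative): render into an FBO whose depth attachment is a texture, then sample that texture in a later pass:
GLuint fbo, colorTex, depthTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 768, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// Pass 1: draw here; depthTex fills with depth values.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Pass 2: depthTex is an ordinary texture now and can be sampled,
// as long as it is no longer being rendered to.
glBindTexture(GL_TEXTURE_2D, depthTex);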
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's Default Framebuffer. The default framebuffer (i.e. the window) is unchangeable. You can't take its depth buffer and use it as a texture. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever required.
I am working with OpenGL ES 2.0 and GLSL, and I have a question about FBOs.
I pass two textures to my OpenGL ES 2.0 code and, in the fragment shader, I subtract one texture from the other to make a binary image, much like OpenCV's threshold function. My question is whether I should use a Renderbuffer or a texture object for my FBO. I have to choose one, since I can only use one color attachment (a restriction of OpenGL ES 2.0). Since the output image after my fragment shader will be a binary image (black or white), shouldn't it be a Renderbuffer object?
A texture is a series of images which can be read from (via normal texturing means) and rendered into via FBOs. A renderbuffer is an image that can only be rendered into.
You should use a renderbuffer for images that you will only use as a render target. If you need to sample from it later, you should use a texture.
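For example, if the thresholded result gets sampled again later, back the color attachment with a texture. An OpenGL ES 2.0 sketch (width and height are placeholders):
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// If the binary image only ever stays a render target and is never sampled,
// a renderbuffer attached via glFramebufferRenderbuffer would do instead.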