Renderbuffer or texture object after fragment shader?

I am working with OpenGL ES 2.0 and GLSL, and I have a question about FBOs.
I pass two textures into my OpenGL ES 2.0 code and, in the fragment shader, subtract one from the other to produce a binary image, much like OpenCV's threshold function. My question is that I am not sure whether I should use a renderbuffer or a texture object for my FBO. I have to choose one, since OpenGL ES 2.0 only allows a single color attachment. Since the output image of my fragment shader will be binary (black or white), shouldn't it be a renderbuffer object?
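For illustration, a thresholding fragment shader of this kind might look like the sketch below (GLSL ES 1.00). The uniform and varying names, and the single-channel comparison, are assumptions for the example, not taken from the question.

precision mediump float;
uniform sampler2D u_texA;      // first input texture (illustrative name)
uniform sampler2D u_texB;      // second input texture (illustrative name)
uniform float u_threshold;     // cutoff, e.g. 0.1
varying vec2 v_texCoord;

void main() {
    float a = texture2D(u_texA, v_texCoord).r;
    float b = texture2D(u_texB, v_texCoord).r;
    // step() returns 0.0 below the threshold and 1.0 at or above it,
    // which binarizes the absolute difference of the two samples.
    float bin = step(u_threshold, abs(a - b));
    gl_FragColor = vec4(vec3(bin), 1.0);
}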

A texture is a series of images which can be read from (via normal texturing means) and rendered into via FBOs. A renderbuffer is an image that can only be rendered into.
You should use a renderbuffer for images that you will only use as a render target. If you need to sample from it later, you should use a texture.
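As a minimal sketch (OpenGL ES 2.0, error checking omitted; w and h are placeholder dimensions), the two ways to back the single color attachment look like this:

/* (a) Renderbuffer: a render-only target that cannot be sampled later. */
GLuint rb;
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, w, h);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rb);

/* (b) Texture: can be sampled by a later shader pass. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* no mipmaps for an FBO target */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

So if the binary image will be consumed by another shader pass (or read back), a texture is the right choice; a renderbuffer only pays off when the image is never sampled again.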

Related

Render to a layer of a texture array in OpenGL

I use OpenGL 3.2 to render shadow maps. For this, I construct a framebuffer that renders to a depth texture.
To attach the texture to the framebuffer, I use:
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shdw_texture, 0 );
This works great. After rendering the light view, my GLSL shader can sample the depth texture to resolve visibility from the light.
The problem I am trying to solve now is scaling this up to many more shadow maps, say 50 of them. In my main render pass I don't want to be sampling from 50 different textures. I could use an atlas, but I wondered: could I pass all these shadow maps as layers of a 2D texture array?
So, somehow create a GL_TEXTURE_2D_ARRAY with a DEPTH format, and bind one layer of the array to the framebuffer?
Can framebuffers be backed for DEPTH by a texture array layer, instead of just a depth texture?
In general, you need to distinguish whether you want to create a layered framebuffer (see Layered Images) or whether you want to attach a single layer of a multilayered texture to a framebuffer.
Use glFramebufferTexture3D to attach a layer of a three-dimensional texture (GL_TEXTURE_3D) to a framebuffer, or use glFramebufferTextureLayer to attach a layer of a three-dimensional or array texture. In either case the last argument specifies the layer.
Layered attachments can be attached with glFramebufferTexture. See Layered rendering.
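For the case asked about here (attaching one layer of a depth-format GL_TEXTURE_2D_ARRAY), a minimal sketch might look like the following; the dimensions, layer count, and layer index are placeholders:

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
/* 50 layers of 1024x1024 depth data */
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 50, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Attach layer 7 of the array as the depth attachment; the last
   argument of glFramebufferTextureLayer is the layer index. */
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, tex, 0, 7);
glDrawBuffer(GL_NONE);   /* depth-only rendering */
glReadBuffer(GL_NONE);

In the main pass the whole array can then be bound once and sampled through a single sampler2DArrayShadow, using the layer index as a coordinate.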

Copy entire cubemap texture from framebuffer

glFramebufferTexture allows one to bind an entire cubemap as a color attachment for layered rendering. In turn, glReadBuffer then allows one to bind said entire cubemap as a read buffer.
I want to render a scene to the non-zero mip levels of a cubemap texture. I'm using layered rendering to render not to one face, but to the entire thing in one go. However, the shader used for this uses the 0th mip level of that same texture. Since I don't think I can expose the texture to a shader and to a framebuffer attachment at the same time, I'm rendering to a different texture and copying the contents of that texture to my original texture's desired mip level.
Right now I'm doing this with a pass-through shader, which is pretty slow since layered rendering requires a geometry shader, and it would be better to use an API function. However, glCopyTexSubImage2D only accepts individual cubemap faces, and neither it nor glCopyTexSubImage3D seems to accept cubemaps as input. Apart from 4.5-specific functions such as glCopyTextureSubImage3D, is there any way to retrieve an entire cubemap from the framebuffer into a cubemap texture? I'm also aware that glCopyImageSubData exists, but something at the feature level of glFramebufferTexture (so 3.2) is preferable.
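For reference, the glCopyImageSubData route mentioned above (GL 4.3, or the ARB_copy_image extension, so above the desired 3.2 feature level) can copy all six faces in one call, because a cubemap is addressed like a six-layer array. srcCube, dstCube, and the mip dimensions below are illustrative names, not from the question:

/* Copy level 0 of srcCube into level 2 of dstCube, all six faces at once
   (a depth of 6 spans the whole cubemap). */
glCopyImageSubData(srcCube, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,
                   dstCube, GL_TEXTURE_CUBE_MAP, 2, 0, 0, 0,
                   mipWidth, mipHeight, 6);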

How to render color and depth in multisample texture?

In order to implement "depth peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing is to enable GL_MULTISAMPLE and use GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or clear example putting all these together.
While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
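A minimal sketch of that, with a placeholder sample count of 4 and placeholder dimensions:

GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, colorTex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, colorTex, 0); /* level must be 0 */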
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Yes, if the depth attachment is a renderbuffer. All attachments of an FBO must have the same number of samples, so you create the depth renderbuffer with glRenderbufferStorageMultisample(), passing the same sample count you used for the color buffer. (If you render depth into a texture instead, you would create it as a multisample depth texture, again with a matching sample count.)
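Continuing the sketch above, the matching depth renderbuffer would look like this (same placeholder sample count and dimensions):

GLuint depthRb;
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);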
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:
You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. To resolve it, you can use glBlitFramebuffer(), with the multisample texture attached to the GL_READ_FRAMEBUFFER and the regular texture attached to the GL_DRAW_FRAMEBUFFER; see the sketch after this list.
You can use the multisample texture directly in your subsequent rendering. You will need to declare the samplers in your shader code as sampler2DMS and fetch from them with the corresponding functions (texelFetch() with an explicit sample index).
For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
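A sketch of the resolve blit from option 1; msaaFbo and resolveFbo are illustrative names for the two framebuffers described above:

glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);    /* multisample attachment */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); /* regular texture attachment */
/* Source and destination rectangles must match for a multisample resolve. */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);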

Separate Frame Buffer and Depth Buffer in OpenGL

In DirectX you are able to have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap layer of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses one texture's image for its color buffer and another texture's image for its depth buffer. Nor is there anything stopping you from later sampling the latter as a depth texture (so long as you're not rendering to that FBO while doing so). The FBO does not suddenly own these images exclusively or something.
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's default framebuffer. The default framebuffer (i.e., the window) is unchangeable. You can't take its depth buffer and use it as a texture. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
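A minimal sketch of the pattern this answer describes (error checking omitted; w, h, and the names are illustrative):

GLuint fbo, colorTex, depthTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

/* ... render the depth-producing pass ... */

/* Later: stop rendering to the FBO, then sample depthTex like any texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, depthTex);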
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever required.

How to post-process image with shaders in OpenGL?

Shaders cannot read from the framebuffer; data only flows forward through the rendering pipeline. But post-processing requires reading the rendered image.
I am going to solve this as follows: 1) create a texture the size of the viewport; 2) render the scene into that texture normally; 3) render the texture to the framebuffer, passing it through a post-processing shader.
Am I doing this right? Are there more effective ways to do post-processing?
That is indeed the usual way to do post-processing! Render to texture by binding an FBO for your first pass, then use that texture as the input of your post-process shader after unbinding the FBO (i.e. returning to the default framebuffer).
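A sketch of the two passes (FBO and texture setup omitted; sceneFbo, sceneTex, postProcessProgram, and the draw helpers are illustrative names):

/* Pass 1: render the scene into the texture-backed FBO. */
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glViewport(0, 0, w, h);
drawScene();                         /* hypothetical helper */

/* Pass 2: back to the default framebuffer; draw a fullscreen quad
   that samples the scene texture through the post-process shader. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(postProcessProgram);
glBindTexture(GL_TEXTURE_2D, sceneTex);
drawFullscreenQuad();                /* hypothetical helper */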