OpenGL Layered Rendering & Renderbuffer

Why exactly does it not work to use a renderbuffer on a layered framebuffer?
I read that if you want the depth values, you have to use an extra texture instead.
Is that so? Or is there another option to use a renderbuffer?

Yes, that is so. Why? Well, the simple answer is that it's defined that way. From the OpenGL spec document, section "Whole Framebuffer Completeness":
If any framebuffer attachment is layered, all populated attachments must be layered. Additionally, all populated color attachments must be from textures of the same target (three-dimensional, one- or two-dimensional array, cube map, or cube map array textures).
Since there are no layered renderbuffers, you can't use them for layered rendering.
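A minimal sketch of what the texture-based setup looks like, assuming a 1024x1024 depth cubemap rendered in a single depth-only pass (the sizes and the GL_NONE draw buffer are illustrative assumptions, not requirements):

    GLuint depthCubemap = 0;
    glGenTextures(1, &depthCubemap);
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
    for (int face = 0; face < 6; ++face)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                     1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // glFramebufferTexture (no face, no 2D suffix) attaches all six faces at
    // once, which is what makes the attachment layered; the geometry shader
    // then routes primitives to faces by writing gl_Layer.
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubemap, 0);
    glDrawBuffer(GL_NONE);  // depth-only pass: no color attachment needed
    glReadBuffer(GL_NONE);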

Related

Copy entire cubemap texture from framebuffer

glFramebufferTexture allows one to bind an entire cubemap as a color attachment for layered rendering. In turn, glReadBuffer then allows one to bind said entire cubemap as a read buffer.
I want to render a scene to the non-zero mip levels of a cubemap texture. I'm using layered rendering to render not to one face, but to the entire thing in one go. However, the shader used for this uses the 0th mip level of that same texture. Since I don't think I can expose the texture to a shader and to a framebuffer attachment at the same time, I'm rendering to a different texture and copying the contents of that texture to my original texture's desired mip level.
Right now I'm doing this with a pass-through shader, which is pretty slow since layered rendering requires a geometry shader, and it would be better to use an API function. However, glCopyTexSubImage2D only accepts individual cubemap faces, and neither it nor glCopyTexSubImage3D seems to accept cubemaps as input. Apart from 4.5-specific functions such as glCopyTextureSubImage3D, is there any way to retrieve an entire cubemap from the framebuffer into a cubemap texture? I'm also aware that glCopyImageSubData exists, but something at the feature level of glFramebufferTexture is preferable (so 3.2).
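For reference, the glCopyImageSubData route mentioned above would look something like the sketch below; srcCubemap, dstCubemap, dstLevel, and size are hypothetical names, and the call is core only in 4.3 (or via ARB_copy_image), so it sits above the desired 3.2 feature level. Cubemaps are addressed as six array layers, so a single call copies the whole mip level:

    glCopyImageSubData(srcCubemap, GL_TEXTURE_CUBE_MAP, 0, 0, 0, 0,       // source mip 0
                       dstCubemap, GL_TEXTURE_CUBE_MAP, dstLevel, 0, 0, 0,
                       size, size, 6);                                    // all 6 faces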

How to render color and depth in multisample texture?

In order to implement "depth peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing to do is to enable GL_MULTISAMPLE and use a GL_TEXTURE_2D_MULTISAMPLE texture instead of GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or clear example putting all these together.
While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
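As a sketch (width, height, and the sample count of 4 are assumed values, and fbo is an already-generated framebuffer name):

    GLuint colorTexMS = 0;
    glGenTextures(1, &colorTexMS);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, colorTexMS);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                            width, height, GL_TRUE);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, colorTexMS, 0);  // level must be 0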
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Yes. If you attach a depth buffer to the same FBO, you need to use a multisampled renderbuffer, with the same number of samples as the color buffer. So you create your depth renderbuffer with glRenderbufferStorageMultisample(), passing in the same sample count you used for the color buffer.
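Continuing the sketch above, with the matching sample count:

    GLuint depthRbMS = 0;
    glGenRenderbuffers(1, &depthRbMS);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbMS);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24,
                                     width, height);  // 4 = same samples as color
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbMS);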
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:
You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. For resolving the multisample texture, you can use glBlitFramebuffer(), with the multisample texture attached to the FBO bound as GL_READ_FRAMEBUFFER and the regular texture attached to the FBO bound as GL_DRAW_FRAMEBUFFER (a sketch follows at the end of this answer).
You can use the multisample texture for your subsequent rendering. You will need to use the sampler2DMS type for the samplers in your shader code, with the corresponding sampling functions.
For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
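In either case, the option 1 resolve looks like the following sketch, where msaaFbo and resolveFbo are hypothetical names for the multisampled FBO and an FBO with a regular texture attached; for a multisample resolve, the source and destination rectangles must have the same dimensions:

    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);     // multisampled source
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);  // regular texture attached
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);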

GLSL Renderbuffer really required?

I am trying to write a program that renders video camera frames onto a quad.
I saw tutorials explaining that doing this with framebuffers can be faster, but I'm still learning how to do it.
But then, besides framebuffers, I found that there are also renderbuffers.
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I understand that renderbuffers are for depth testing, which I think is only for checking the Z position of a pixel, so it would seem silly to have to create a renderbuffer for my scenario, correct?
A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc all go into a framebuffer object.
A renderbuffer is like a texture, but with two important differences:
It is always 2D and has no mipmaps. So it's always exactly 1 image.
You cannot read from a renderbuffer. You can attach them to an FBO and render to them, but you can't sample from them with a texture access or something.
So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is a common use case for renderbuffers, because if you're rendering the colors to a texture, you usually don't care about the depth afterwards. You need a depth buffer because you need depth testing for hidden-surface removal, but you don't need to sample from that depth later. So instead of making a depth texture, you make a depth renderbuffer.
But renderbuffers can also use colors rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
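For illustration, a color renderbuffer used that way might look like this sketch (width and height are assumed values; <vector> is assumed included; error checking omitted):

    GLuint colorRb = 0, fbo = 0;
    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);

    // ... render into the FBO ...

    // Read the result back; a blit to another FBO would also work.
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());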
Oddly enough, this does nothing to answer your question:
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I don't see why you need a framebuffer or a renderbuffer of any kind. A texture is a texture; just draw a textured quad.

How to render Framebuffer Objects on multi-sampled textures?

I currently have a rendering engine using multiple passes in which various parts of the image are rendered on textures, and then combined using shaders. It works, and now I would like to activate multi-sampling.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE_2D_MULTISAMPLE texture to a framebuffer object.
It seems one way to use multi-sampling and still have access to the result as texture is to use a multi-sampled render buffer, and then copy the result into a multisample texture.
My question is: what would be the best way to go forward?
Is it possible to render in a render buffer and use the output in my shader, without copying into a texture?
Should I indeed copy the content of the buffer into a texture, and then use it?
Is there another, better, solution?
Thanks.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE_2D_MULTISAMPLE texture to a framebuffer object.
Read it again. It says nothing about GL_TEXTURE_2D_MULTISAMPLE textures. Actually, I take that back: don't read that page again. If you want good FBO info, read the page on Framebuffer Objects that explains 3.x behavior. The page you linked to is old.
Back in the EXT days, all you had were multisampled renderbuffers, because multisample textures didn't exist. You could create multisampled buffers, but you couldn't texture with them. You could only blit them.
Since OpenGL 3.2 (ARB_texture_multisample), you can create multisampled textures. And you can attach them to an FBO just like any other texture.
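That also covers using the output in a later shader pass. A hedged sketch of the shader side (the uniform names and the manual per-sample average are illustrative assumptions; texelFetch on a sampler2DMS addresses one explicit sample):

    const char* fsSource = R"GLSL(
        #version 150
        uniform sampler2DMS sceneMS;   // the multisampled color texture
        uniform int sampleCount;       // samples used at creation time
        out vec4 fragColor;
        void main() {
            ivec2 coord = ivec2(gl_FragCoord.xy);
            vec4 sum = vec4(0.0);
            for (int i = 0; i < sampleCount; ++i)
                sum += texelFetch(sceneMS, coord, i);  // fetch one sample
            fragColor = sum / float(sampleCount);      // manual resolve
        }
    )GLSL";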

Is it possible to attach the default renderbuffer to a FBO?

I'm considering refactoring a large part of my rendering code and one question popped to mind:
Is it possible to render to both the screen and to a texture using multiple color attachments in a Frame Buffer Object? I cannot find any information on whether this should be possible, even though it would have many useful applications. I guess it should be enough to bind my texture as color attachment 0 and "renderbuffer 0" as attachment 1?
For example I want to make an interactive application where you can "draw" on a 3D model. I resolve where the user draws by rendering the UV-coordinates to a texture so I can look up at the mouse-coordinates where to modify the texture. In my case it would be fastest to have a shader that both draws the UV's to the texture and the actual texture to the screen in one pass.
Are there better ways to do this or am I on the right track?
There is no such thing as a "default renderbuffer" in OpenGL. There is the window-system-provided default framebuffer with the reserved name zero, but binding that basically means "no FBO enabled". So no, unfortunately normal OpenGL provides no method to use the default framebuffer's color buffer as a color attachment of any other FBO. I'm not aware of any extension that could possibly provide this feature.
For renderbuffers there is also a reserved name zero, but it's only a special "none" name that allows unbinding renderbuffers.
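Since the question also asks about better ways: a commonly used fallback consistent with this limitation is to render both outputs into one FBO using multiple render targets, then blit the color attachment to the default framebuffer. A rough sketch, assuming fbo, width, and height exist and the fragment shader declares two outputs:

    GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glDrawBuffers(2, bufs);   // output 0 = scene color, output 1 = UV data
    // ... draw the model once ...

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // the default framebuffer
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);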