Render to a layer of a texture array in OpenGL

I use OpenGL 3.2 to render shadow maps. For this, I construct a framebuffer that renders to a depth texture.
To attach the texture to the framebuffer, I use:
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shdw_texture, 0 );
This works great. After rendering the light view, my GLSL shader can sample the depth texture to solve visibility of light.
The problem I am trying to solve now, is to have many more shadow maps, let's say 50 of them. In my main render pass I don't want to be sampling from 50 different textures. I could use an atlas, but I wondered: could I pass all these shadow maps as slices from a 2D texture array?
So, somehow create a GL_TEXTURE_2D_ARRAY with a DEPTH format, and bind one layer of the array to the framebuffer?
Can framebuffers be backed for DEPTH by a texture array layer, instead of just a depth texture?

In general, you need to distinguish whether you want to create a layered framebuffer (see Layered Images) or whether you want to attach a single layer of a multilayered texture to a framebuffer.
Use glFramebufferTexture3D to attach a Z-slice of a 3D texture (GL_TEXTURE_3D) to a framebuffer, or use glFramebufferTextureLayer to attach a single layer of a three-dimensional or array texture. In either case the last argument selects the slice or layer.
Layered attachments (the whole array at once, for rendering with gl_Layer) are made with glFramebufferTexture. See Layered rendering.
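For the shadow-map case in the question, a minimal sketch might look like the following; the 1024x1024 size, the 50-layer count, and names such as shadowArray are assumptions, not anything mandated by the API:
GLuint shadowArray, shadowFBO;
// Allocate a 50-layer depth texture array.
glGenTextures(1, &shadowArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, shadowArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
             1024, 1024, 50, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Attach a single layer (here: layer 7) as the depth attachment.
glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          shadowArray, 0, 7);
// Depth-only FBO: no color buffers to draw to or read from.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
In the main render pass the whole array can then be bound once and sampled through a single sampler2DArray (or sampler2DArrayShadow) uniform, with the layer index as the third texture coordinate.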

Related

Downscaling texture - glBlitFramebuffer vs separate render pass

I need to render some material texture of the object to the downscaled texture. When I use framebuffer with MRT I first render to the texture of the same size as a scene texture. Theoretically I can use MRT with different sizes but GL spec says:
If the attachment sizes are not all identical, rendering will be limited to the largest area that can fit in all of the attachments.
Then I use glBlitFramebuffer to downscale the render target, with the filter set to GL_LINEAR. Alternatively, I can use a slower method: an additional render pass that renders some scene objects directly to the downscaled texture. In both cases, for that material texture, I use a sampler with the following parameters:
GL_TEXTURE_MIN_FILTER: GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER: GL_LINEAR
At first glance both methods produce the same downscaled texture, but careful observation reveals some minor differences. Moreover, the downscaled texture produced by glBlitFramebuffer seems to have less detail.
Here is my explanation: when the material texture is rendered to a texture of the same size as the scene texture, mipmaps are used, but glBlitFramebuffer later doesn't use them, so the rendering results differ slightly. Is that a correct explanation?
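For reference, the blit path described above might look roughly like this; fullFBO, smallFBO and the sizes are assumed names, not anything taken from the question:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fullFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, smallFBO);
// Downscale: source rect is the full size, destination is half size.
glBlitFramebuffer(0, 0, fullWidth, fullHeight,
                  0, 0, fullWidth / 2, fullHeight / 2,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
Note that glBlitFramebuffer only reads the single level attached to the read framebuffer and filters within it; it never consults mipmaps, unlike GL_LINEAR_MIPMAP_LINEAR sampling in a render pass.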

How to render color and depth in multisample texture?

In order to implement "depth peeling", I render my OpenGL scene into a series of framebuffers, each equipped with an RGBA color texture and a depth texture. This works fine if I don't care about anti-aliasing. If I do, then it seems the correct thing is to enable GL_MULTISAMPLE and use a GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D. But I'm confused about which other calls need to be replaced.
In particular, how should I adapt my framebuffer construction to use glTexImage2DMultisample instead of glTexImage2D?
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
There are many related questions on this topic, but none of the solutions I found seem to point to a canonical tutorial or clear example putting all these together.
While I have only used multisampled FBO rendering with renderbuffers, not textures, the following is my understanding.
Do I need to change the calls to glFramebufferTexture2D beyond using GL_TEXTURE_2D_MULTISAMPLE instead of GL_TEXTURE_2D?
No, that's all you need. You create the texture with glTexImage2DMultisample(), and then attach it using GL_TEXTURE_2D_MULTISAMPLE as the 3rd argument to glFramebufferTexture2D(). The only constraint is that the level (5th argument) has to be 0.
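As a sketch, creating and attaching such a texture could look like this; the sample count of 4 and the names are assumptions:
GLuint msColorTex;
glGenTextures(1, &msColorTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColorTex);
// 4 samples per pixel, fixed sample locations.
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                        width, height, GL_TRUE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msColorTex, 0);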
If I'm rendering both color and depth into textures, do I need to make a call to glRenderbufferStorageMultisample?
Yes. If you attach a depth buffer to the same FBO, you need to use a multisampled renderbuffer, with the same number of samples as the color buffer. So you create your depth renderbuffer with glRenderbufferStorageMultisample(), passing in the same sample count you used for the color buffer.
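A matching depth renderbuffer, sketched with the same assumed sample count:
GLuint msDepthRb;
glGenRenderbuffers(1, &msDepthRb);
glBindRenderbuffer(GL_RENDERBUFFER, msDepthRb);
// Sample count must match the color attachment (4 here).
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4,
                                 GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, msDepthRb);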
Finally, is there some glBlit* that I need to do in addition to setting up textures for the framebuffer to render into?
Not for rendering into the framebuffer. Once you're done rendering, you have a couple of options:
You can downsample (resolve) the multisample texture to a regular texture, and then use the regular texture for your subsequent rendering. For resolving the multisample texture, you can use glBlitFramebuffer(), where the multisample texture is attached to the GL_READ_FRAMEBUFFER, and the regular texture to the GL_DRAW_FRAMEBUFFER.
You can use the multisample texture for your subsequent rendering. You will need to use the sampler2DMS type for the samplers in your shader code, with the corresponding sampling functions.
For option 1, I don't really see a good reason to use a multisample texture. You might just as well use a multisample renderbuffer, which is slightly easier to use, and should be at least as efficient. For this, you create a renderbuffer for the color attachment, and allocate it with glRenderbufferStorageMultisample(), very much like what you need for the depth buffer.
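The resolve from option 1 is then a single blit; msFBO and resolveFBO are assumed names for the multisampled FBO and an FBO with a regular GL_TEXTURE_2D color attachment:
glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
// Source and destination rectangles must match for a multisample resolve.
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);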

OpenGL - how to render object to 3D texture as a volumetric billboard

I'm trying to implement volumetric billboards in OpenGL 3.3+ as described in the paper here and in the accompanying video here.
The problem I'm facing now (quite basic) is: how do I render a 3D object to a 3D texture (as described in the paper) efficiently? Assuming the object fits in a 256x256x128 texture, creating 256*256*128*2 framebuffers (since it reportedly has to be rendered twice along each axis: +X, -X, +Y, -Y, +Z, -Z) would be insane; as far as I know there are too few texture units to handle that many textures, not to mention the time it would take.
Does anyone have any idea how to deal with something like that?
A slice of a 3D texture can be attached directly to the current framebuffer. So create a framebuffer and a 3D texture, and then render like this:
glFramebufferTexture3D( GL_FRAMEBUFFER, Attachment, GL_TEXTURE_3D,
                        TextureID, 0, ZSlice );
...render to the slice of the 3D texture...
So you need only one framebuffer, re-attached once per Z-slice of your target 3D texture.
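A sketch of that loop, assuming fbo and volumeTex already exist and the texture has 128 Z-slices:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int z = 0; z < 128; ++z) {
    // Re-attach the FBO's color attachment to the current slice.
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_3D, volumeTex, 0, z);
    glClear(GL_COLOR_BUFFER_BIT);
    // ...draw the geometry for slice z...
}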

Normal back buffer + render to depth texture in OpenGL (FBOs)

Here's the situation: I have a texture containing some depth values. I want to render some geometry using that texture as the depth buffer, but I want the color to be written to the normal framebuffer (i.e., the window's framebuffer).
How do I do this? I tried creating an FBO and only binding a depth texture to it (GL_DEPTH_COMPONENT) but that didn't work; none of the colors showed up.
No, you can't. You render either to the main (default) framebuffer or to an off-screen one; their attachments can't be mixed in any way.
Instead, I would suggest rendering to a color renderbuffer and then doing a simple blit into the main framebuffer.
Edit: alternatively, if you already have the depth in the main framebuffer, you can blit your depth first and then render to the main framebuffer, saving the video memory the additional color renderbuffer would take.
P.S. Blitting is done via glBlitFramebuffer. To make it work, bind the source and destination to GL_READ_FRAMEBUFFER and GL_DRAW_FRAMEBUFFER respectively, and set the buffers with glReadBuffer() and glDrawBuffer().
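A sketch of that setup, blitting the color result of an off-screen FBO into the window's back buffer (fbo, width and height are assumed names):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // default framebuffer
glDrawBuffer(GL_BACK);
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);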

How to visualize a depth texture in OpenGL?

I'm working on a shadow mapping algorithm, and I'd like to debug the depth map that it's generating on its first pass. However, depth textures don't seem to render properly to the viewport. Is there any easy way to display a depth texture as a greyscale image, preferably without using a shader?
You may need to change the depth texture's parameters to display it as greyscale levels:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE );
You can then use it like a 'normal' greyscale 2D texture, either via the fixed-function pipeline or through a 'sampler2D' shader uniform. (Note that GL_DEPTH_TEXTURE_MODE was removed from the core profile; in core contexts you read the depth from the red channel in the shader instead.)
Depth textures (2D) can be used just like any regular greyscale texture. The only problem might be that the values in it are all too high, so you only see a white texture. If that's the case, play around with the z-near and z-far planes used when rendering the depth texture (or scale the values with a shader, or maybe glTexEnv).
Sure, just bind your depth texture to your favourite texture unit, enable texturing, and draw a 2D quad! You could also size the quad to only fill part of the screen so that you can view the shadowmap in realtime.
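A sketch in the old fixed-function style this answer assumes, with depthTex as an assumed texture name:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);
glEnable(GL_TEXTURE_2D);
// Full-screen quad in normalized device coordinates.
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();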
OpenGL also has functions, such as glGetTexImage, which can copy the texture into a client-side array for you. You could save this as an image and use an image viewer to view it.
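For example, the readback could look like this; depthTex, width and height are assumed names:
float *pixels = (float *)malloc(width * height * sizeof(float));
glBindTexture(GL_TEXTURE_2D, depthTex);
// Read the whole level-0 depth image back as floats in [0, 1].
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
// ...write pixels out with an image library of your choice...
free(pixels);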