How many depth textures can I bind to a framebuffer? (OpenGL)

I am trying to create shadow maps for many objects in a sceneRoom, with their shadows projected onto the sceneRoom. Until now I've been able to project the sceneRoom's shadows onto itself, but I want to project the shadows of the other objects in the sceneRoom onto the sceneRoom's floor.
Is it possible to create multiple depth textures in one framebuffer, or should I use several framebuffers, each with one depth texture?

There is only one GL_DEPTH_ATTACHMENT point, so you can have at most one depth buffer attached at any time. You will have to use some other method.

No, there is only one attachment point (well, technically two if you count GL_DEPTH_STENCIL_ATTACHMENT) for depth in an FBO. You can only attach one thing to the depth attachment, but that does not mean you are limited to a single image.
You can use an array texture to store multiple depth images and then attach this array texture to GL_DEPTH_ATTACHMENT.
However, the only way to draw into an explicit array layer of such a texture would be to use a geometry shader to do layered rendering. Since it sounds like each of the depth images you are interested in covers a completely different set of geometry, this does not sound like the approach you want: with a geometry shader doing layered rendering, you would process the same set of geometry for each layer.
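For reference, a rough sketch of what that layered attachment looks like (the 4-layer, 512x512 sizes here are made-up placeholders):

    // Create a depth array texture and attach every layer at once; a geometry
    // shader writing gl_Layer then routes each primitive to one layer.
    GLuint fbo, depthArray;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenTextures(1, &depthArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, depthArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24,
                 512, 512, 4, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    // glFramebufferTexture (no layer argument) makes the FBO layered:
    glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthArray, 0);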
One thing you could consider is actually using a single depth buffer, but packing your shadow maps into an atlas. If each of your shadow maps is 512x512, you could store 4 of them in a single texture with dimensions 1024x1024 and adjust texture coordinates (and viewport when you draw into the atlas) appropriately. The reason you might consider doing this is because changing the render target (FBO state) tends to be the most expensive thing you would do between draw calls in a series of depth-only draws. You might change a few uniforms or vertex pointers, but those are dirt cheap to change.
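A minimal sketch of the atlas idea, assuming four 512x512 maps packed into one 1024x1024 depth texture ('quadrant' is a hypothetical index from 0 to 3):

    int x = (quadrant % 2) * 512;
    int y = (quadrant / 2) * 512;
    glViewport(x, y, 512, 512);          // draw into one quadrant only
    glEnable(GL_SCISSOR_TEST);           // keep the clear from touching
    glScissor(x, y, 512, 512);           // the other three shadow maps
    glClear(GL_DEPTH_BUFFER_BIT);
    // ... render this shadow map's casters ...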

Related

Initializing a cubemap with existing 2D textures and glTextureView

I am trying to create a cubemap from six existing textures. Texture views seem to be made for this sort of thing, and would also keep the cubemap updated when the original textures change. However, cubemaps can only be views of texture arrays (or of other cubemaps), and I can't find any way to fill an array with several already-existing 2D textures using glTextureView, since glTextureView can only be used with uninitialized textures.
Is there a way to do this or is drawing into the cubemap via an FBO the only way?
No, I do not believe this is supported. You can't "merge" multiple textures into a single texture without any copying.
I can think of a few options you have:
The ideal solution is of course that you place the data into cube map faces in the first place, instead of using separate textures. Note that glTextureView() supports the opposite direction. If you partly want to use a texture as a regular GL_TEXTURE_2D, and partly as the face of a cube map, you can store it in a cube map face and create a texture view to treat that face as a 2D texture where needed (see the first sketch below).
You copy the textures into the cube map faces. The most efficient approach should be glBlitFramebuffer() (see the second sketch below). Of course copying data is always undesirable, but sometimes it's necessary.
This may be somewhat unconventional, but you could... not use a cube map. You could use 6 separate samplers in the shader, and bind the 6 textures you want to use as cube faces to those samplers. Then you can decide which of the six textures to sample, and what texture coordinates to use, in your shader code. This shouldn't be too difficult if you look up the logic/math that is used under the hood when you sample cube maps.
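To illustrate the first option, a minimal sketch of viewing one face of an existing cube map as a 2D texture (the 'cubemap' handle and GL_RGBA8 format are assumptions):

    // The cube map must be immutable-format (created with glTexStorage2D).
    // Layer 0 of a cube map is +X, layer 1 is -X, layer 2 is +Y, and so on.
    GLuint faceView;
    glGenTextures(1, &faceView);
    glTextureView(faceView, GL_TEXTURE_2D, cubemap, GL_RGBA8,
                  0, 1,   // base mip level, 1 level
                  0, 1);  // layer 0 (the +X face), 1 layer

And the second option, copying a 2D texture into a face with glBlitFramebuffer ('srcTex', 'cubemap' and 'size' are again hypothetical):

    GLuint fbos[2];
    glGenFramebuffers(2, fbos);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, srcTex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubemap, 0);
    glBlitFramebuffer(0, 0, size, size, 0, 0, size, size,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);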

How does vertex array work with materials?

My model structure contains a VertexList, ColorList and a NormalList. I'm going to add a TextureCoordList, but I'm a little confused by this. Some polygons have a texture, some don't, and some have a different texture than the others. So how does it work? I render the model as one vertex buffer.
Texture coordinates don't have any information on what texture, if any, they are being used for. They're just numbers that specify where on the texture OpenGL should sample. You can render the same mesh with the same texture coordinates with different textures, or even no textures at all.
If you're using shaders, you don't even have to use the texture coordinates for texturing; you can use them for whatever you want (though in that case, consider renaming them by using vertex attributes instead).
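In practice, when different polygons need different textures, you sort the model so that polygons sharing a texture are contiguous and issue one draw call per range. A hypothetical sketch (handles and vertex ranges are made up):

    glBindTexture(GL_TEXTURE_2D, brickTex);   // first 300 vertices: bricks
    glDrawArrays(GL_TRIANGLES, 0, 300);
    glBindTexture(GL_TEXTURE_2D, woodTex);    // next 120 vertices: wood
    glDrawArrays(GL_TRIANGLES, 300, 120);
    glDisable(GL_TEXTURE_2D);                 // last 60 vertices: untextured
    glDrawArrays(GL_TRIANGLES, 420, 60);
    glEnable(GL_TEXTURE_2D);

Note the texture coordinates stay in the buffer the whole time; only the bound texture changes.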

Can I specify a viewport per render target?

First my problem: I'm trying to render to multiple buffers in an FBO. I set the buffers using glDrawBuffers and render to them by writing to the appropriate gl_FragData outputs. All well and good, but in my situation one of the buffers should be downsampled, to a quarter of the size to be exact (w/2, h/2).
Of course, I can do this by blitting those specific buffers afterwards, or I can simply do the downsampling on the CPU (my current solution). But then I read about viewport arrays and found this quote in the ARB specification, which seems to be exactly what I want, without any extra conversions:
Additionally, when combined with multiple framebuffer attachments, it allows a different viewport rectangle to be selected for each.
Of course, the specification never explains how to do this or what is actually meant; "multiple framebuffer attachments" is quite generic. I only noticed that I can select a specific viewport as an output of the geometry shader (by writing gl_ViewportIndex). So I could emit the geometry twice, once for each viewport in the array. But as far as I understand, this simply runs the fragment shader with a different viewport transformation applied, not one per target buffer. That makes little sense for my use case, and I can't see how it could ever select a viewport per framebuffer attachment.
For my situation it does not make much sense to add a geometry shader. And since the viewport transform is only applied after the fragment shader, it does make sense to have a viewport per render target, which the quote above seems to confirm. Is this actually possible, and if so, how would I accomplish it?
Oh, and I've tried the obvious already: resizing the renderbuffer of that target (say, GL_COLOR_ATTACHMENT1) to the downsampled size and setting index 1 of the viewport array accordingly. I ended up with a picture of the lower-left quadrant of the image, which essentially tells me the viewport was unchanged.
Viewport arrays can only be used with geometry shaders; without them, array index 0 will be used for all rendering.
Remember: the viewport transform happens before rasterization. Thus, if you want to transform a triangle by multiple viewports, you're effectively asking the system to render that triangle multiple times. And the only way to do that is with a geometry shader that outputs the primitive multiple times.
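For completeness, a rough sketch of the geometry shader route (GL 4.1+; sizes are placeholders). Note this still selects one viewport per emitted primitive, not per attachment:

    // Host side: fill two slots of the viewport array.
    glViewportIndexedf(0, 0.0f, 0.0f, (float)w, (float)h);    // full size
    glViewportIndexedf(1, 0.0f, 0.0f, w / 2.0f, h / 2.0f);    // downsampled

    // Pass-through geometry shader choosing a viewport per primitive:
    const char* gs = R"(
        #version 410 core
        layout(triangles) in;
        layout(triangle_strip, max_vertices = 3) out;
        void main() {
            for (int i = 0; i < 3; ++i) {
                gl_ViewportIndex = 1;  // use the half-size viewport
                gl_Position = gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    )";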

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do this for an entire scene; there is no structure that lets you associate a "list" with every pixel.
You can get the list of primitives that affected a certain area using the select buffer (see glRenderMode(GL_SELECT)).
You can get the scene's depth complexity using stencil buffer techniques.
If there are at most 8 triangles in total, you can get the list of triangles that affected every pixel using the stencil buffer: assign a unique (1 << n) stencil bit to each triangle and OR it into the existing stencil value (see the sketch below).
But to solve this in the generic case, you'll need your own rasterizer and LOTS of memory to store the per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
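A sketch of that 8-triangle stencil trick: OpenGL has no OR stencil op, but writing with GL_REPLACE and ref 0xFF through a one-bit write mask has the same effect ('drawTriangle' is a hypothetical helper that issues one triangle):

    glEnable(GL_STENCIL_TEST);
    glDisable(GL_DEPTH_TEST);                 // occluded triangles count too
    glStencilFunc(GL_ALWAYS, 0xFF, 0xFF);
    glStencilOp(GL_KEEP, GL_REPLACE, GL_REPLACE);
    for (int n = 0; n < 8; ++n) {
        glStencilMask(1u << n);               // expose only this triangle's bit
        drawTriangle(n);
    }
    // Reading back the stencil buffer now gives, per pixel, a bitmask of the
    // triangles that covered it.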
Is there a more efficient way to do this?
Actually, yes, but it is not hardware accelerated and OpenGL has nothing to do with it: store all your triangles in an octree, then launch a "ray" through that octree for every pixel you want to test and count the triangles the ray hits. That's a collision detection problem.

Avoiding glBindTexture() calls?

My game renders lots of cubes, each of which randomly has 1 of 12 textures. I already Z-order the geometry, so I can't just render all the cubes with texture 1, then 2, then 3, etc., because that would defeat the Z ordering. I already keep track of the previously bound texture and skip glBindTexture when it hasn't changed, but there are still far too many calls. What else can I do?
Thanks
The ultimate and fastest way would be to have an array of textures (normal ones or cubemaps). Then dynamically fetch the texture layer according to an id stored in each cube's instance data (or per cube face, if you want a different texture per face), using the GLSL built-ins gl_InstanceID or gl_PrimitiveID.
With this implementation you would bind your texture array just once.
This of course requires the gpu_shader4 and texture_array extensions:
http://developer.download.nvidia.com/opengl/specs/GL_EXT_gpu_shader4.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_texture_array.txt
I have used this mechanism (with D3D10, but the principle applies equally) and it worked very well.
I had to map different textures onto sprites (3D points with a constant screen size of 9x9 or 15x15 pixels, IIRC), each texture indicating a different meaning to the user.
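A minimal sketch of the texture-array approach ('size' and the per-layer uploads are placeholders):

    // One 2D array texture holds all 12 cube textures and is bound once.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, size, size, 12,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    // Per layer: glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
    //                            size, size, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);

    // Fragment shader side (GLSL), indexing the array by a per-cube id:
    //     uniform sampler2DArray textures;
    //     flat in int textureId;
    //     vec4 color = texture(textures, vec3(uv, float(textureId)));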
Edit:
If you don't feel comfortable with all the shader stuff, I would simply sort the cubes by texture instead of Z ordering the geometry, then measure the performance gains.
I would also try adding a pre-Z pass where you render all your cubes to the depth buffer only, then render the normal scene, and see if it speeds things up (if you are fragment bound, it could help).
You can pack your textures into one atlas texture and offset the texture coordinates accordingly.
glMatrixMode(GL_TEXTURE) also lets you apply transformations in texture space, so you can select a region without changing all the texture coordinates by hand.
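For instance, a sketch that selects tile (i, j) of a hypothetical 4x4 atlas via the texture matrix:

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(i / 4.0f, j / 4.0f, 0.0f);    // move to the tile's corner
    glScalef(1.0f / 4.0f, 1.0f / 4.0f, 1.0f);  // shrink [0,1] onto one tile
    glMatrixMode(GL_MODELVIEW);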
Also from NVIDIA:
Bindless Graphics