Is there a way to render monochromatically to a frame buffer in OpenGL?
My end goal is to render to a cube map texture to create shadow maps for shading in my application.
From what I understand, one way to do this would be, for each light source, to render the scene six times (using the six possible orthogonal orientations for the camera), each time into an FBO, and then copy each result into the corresponding face of the cube map.
I already have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is three times bigger than it needs to be. Is there a way to render monochromatically so as to reduce the size of the textures?
How do you create the texture(s) for your shadow map (or cube map)? If you use one of the GL_DEPTH_COMPONENT[16|24|32] formats when creating the texture, then the texture will be single-channel, as you want.
Check official documentation: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexImage2D.xml
GL_DEPTH_COMPONENT
Each element is a single depth value. The GL converts it to floating point, multiplies by the signed scale factor GL_DEPTH_SCALE, adds the signed bias GL_DEPTH_BIAS, and clamps to the range [0,1] (see glPixelTransfer).
As you can see, it says each element is a SINGLE depth value.
So if you use something like this:
for (int i = 0; i < 6; i++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                 0,                     // mipmap level
                 GL_DEPTH_COMPONENT24,  // single-channel depth internal format
                 size,
                 size,
                 0,                     // border
                 GL_DEPTH_COMPONENT,
                 GL_FLOAT,
                 NULL);
a single element must be 24 bits (possibly padded to 32 in memory). Otherwise it would make no sense to let you specify the depth size if the data were actually stored as RGB[A].
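If you want to verify what the driver actually allocated, you can query the per-texel depth size of a face, for example (assuming the cube map is currently bound to GL_TEXTURE_CUBE_MAP):

GLint depthBits = 0;
glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0,
                         GL_TEXTURE_DEPTH_SIZE, &depthBits);
printf("depth bits per texel: %d\n", depthBits); // typically 24 for GL_DEPTH_COMPONENT24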
This post also confirms that a depth texture is a single-channel texture: https://www.opengl.org/discussion_boards/showthread.php/123939-How-is-data-stored-in-GL_DEPTH_COMPONENT24-texture
"I alrady have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be."
In general you render the scene to a shadow map to get depth values (or distances), right? Then why render RGB at all? If you only need depth values, you don't need any color attachments, because you never write to them; you only write to the depth buffer (and OpenGL does that for you automatically unless you override the value in the fragment shader).
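For example, a minimal depth-only FBO might look like this (handle names such as shadowFBO and depthCubeMap are placeholders; the cube map is the one created with the glTexImage2D loop above):

GLuint shadowFBO;
glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);

// Attach one cube-map face as the depth attachment; no color attachment at all.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X, depthCubeMap, 0);

// Tell OpenGL there is no color buffer to draw to or read from.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle incomplete framebuffer

With this setup the fragment shader can be empty; the depth buffer (your cube-map face) is the only thing written.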
Related
Some intro:
I'm currently trying to see how I can convert a depth map into a point cloud. In order to do this, I render a scene as usual and produce a depth map. From the depth map I then try to recreate the scene as a point cloud from the given camera angle.
In order to do this I created an FBO so I can render my scene's depth map to a texture. The depth map is rendered to the texture successfully. I know this works because I'm able to generate the point cloud from the depth texture by reading it back with glGetTexImage and converting the acquired data.
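The readback is basically a glGetTexImage call along these lines (width and height stand for the texture's dimensions):

std::vector<float> depths(width * height);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depths.data());
// each value is then unprojected into a 3D point using the camera matrices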
The problem:
For presentation purposes, I want the depth map to be visible in a separate window. So I just created a simple shader to draw the depth map texture on a quad. However, instead of the depth texture being drawn on the quad, the texture that gets drawn is the last one that was bound with glBindTexture. For example:
glUseProgram(simpleTextureViewerProgram);
glBindVertexArray(quadVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,randomTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glUniform1i(quadTextureSampler, 0);
glDrawArrays(GL_TRIANGLES, 0, 6);
The code above renders "randomTexture" on the quad instead of "depthTexture". As I said earlier, "depthTexture" is the one I use in glGetTexImage, so it does contain the depth map.
I may be wrong, but if I had to guess, the last glBindTexture call fails, and the problem is that "depthTexture" is not an RGB texture but a depth-component texture. Is this the reason? How can I draw my depth map on the quad then?
I am computing the 3D coordinate from a 2D mouse click on the screen, then drawing a point at the computed 3D coordinate. Nothing is wrong in the code, nothing is wrong in the method; everything works fine. But there is one issue related to depth.
If the object size is around (1000, 1000, 1000), I get the full depth, the exact 3D coordinate of the object's surfel. But when I load an object of size (20000, 20000, 20000), I do not get the exact 3D coordinates: the depth is off, and the point is drawn with some offset from the surface. So my first question is: why is this happening? And the second: how can I get the full depth precision and an accurate 3D coordinate for very large objects?
I draw a 3D point with
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 0.999999);
and using
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz);
if (wz > 0.0000001f && wz < 0.999999f)
{
    gluUnProject()....saving 3D coordinate
}
The reason why this happens is the limited precision of the depth buffer. An 8-bit depth buffer, for example, can only store 2^8 = 256 different depth values.
The second set of parameters that affects depth precision is the near and far plane settings of the projection, since this is the range that has to be mapped onto the available values in the depth buffer. If one sets this range to [0, 100] with an 8-bit depth buffer, then the actual precision is 100/256 ≈ 0.39, which means that points roughly 0.39 units apart in eye space will be assigned the same depth value.
Now to your problem: most probably you have too few bits assigned to the depth buffer. As described above, this introduces an error because the exact depth value cannot be stored, which is why the points end up close to the surface but not exactly on it.
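If you render through your own FBO, one way to get more bits is to allocate the depth attachment explicitly at a higher precision; a minimal sketch (width, height and the bound FBO are assumed from context):

GLuint depthRBO;
glGenRenderbuffers(1, &depthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height); // or GL_DEPTH_COMPONENT32F
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRBO);

Tightening the near and far planes around the scene has a similar effect, since it shrinks the range that the available depth values have to cover.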
I have solved this issue; it was happening because of the depth range. Since OpenGL is a state machine, I think I had changed the depth range somewhere else, when it should be from 0.0 to 1.0. I think it's always better to set the depth range right after clearing the depth buffer, so I now use the following settings just after clearing the depth and color buffers.
Solution:
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthRange(0.0, 1.0); // back to the default, full [0, 1] range
    glDepthMask(GL_TRUE);
}
I do understand the concepts behind color and depth buffers, both in the case of the default framebuffer and of a Framebuffer Object, when there is ONE color buffer.
But what I don't understand is how the depth buffer is "tied" to the color buffers when there are multiple color buffers.
For instance:
In the case of multiple render targets, we can have N color buffers attached to different color attachment points, but there is only one depth attachment point. Does this mean that, of all the color attachments, only the first one's (COLOR_ATTACHMENT0) final pixel color is computed based on the depth values from the depth buffer? What about the colors of the remaining color buffers? Do they ignore the depth comparison when determining their final pixel color?
What about the case of layered rendering? Suppose I attach a texture array (GL_TEXTURE_2D_ARRAY) with N layers to the first color attachment (COLOR_ATTACHMENT0). I must then also attach a depth texture of type GL_TEXTURE_2D_ARRAY (otherwise it is an incomplete attachment). How many layers should this depth attachment have? If I give it N layers, will each of the layers get depth values, or must N be 1?
Can an expert please answer both questions. Thanks.
The depth buffer is used once per fragment. If the depth test fails, the entire fragment is discarded, so none of the writes to any of the color buffers are performed.
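As a sketch of the setup this refers to (texture handles are placeholders), all color attachments share the single depth attachment:

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, colorTex1, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex,  0);

GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);
// A fragment that fails the depth test is discarded before it can write to
// either COLOR_ATTACHMENT0 or COLOR_ATTACHMENT1.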
The fragments will use the depth buffer corresponding to the layer they are on. For example, if a fragment is written to gl_Layer 2, it will use the 3rd layer of both the depth buffer and color buffer.
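For the layered case, a sketch of what that implies (names are placeholders; both array textures get the same number of layers N):

glBindTexture(GL_TEXTURE_2D_ARRAY, colorArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, size, size, N, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH_COMPONENT24, size, size, N, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// glFramebufferTexture (without a layer index) makes a layered attachment,
// so a fragment routed to gl_Layer k is depth-tested against layer k of the
// depth array and written to layer k of the color array.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorArray, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthArray, 0);

In other words, the depth attachment should have N layers, one per color layer.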
I'm making point lights with shadow maps, and currently I have it done with six depth map textures, each rendered individually and applied to the light map. This works quite well, but the performance cost is high.
Now, to improve performance by reducing FBO changes and shader swaps between the depth-map shader and the light-map shader, I was thinking of a couple of approaches. The first one involves having a single texture, six times larger than the shadow map, rendering all the point-light depth maps "at once", and then using this texture to lay out the light map in one call. Is it possible to render only to portions of textures?
To elaborate the first approach, it would be something like this (a rough code sketch of one step follows the list):
1. Set shadow map size to 128x128
2. Create a depth map texture of 384x256 (3 x 2 shadow map)
3. Render the first 'side' of a point light to the rectangle (0, 0, 128, 128)
4. Render the second 'side' to the rectangle (128, 0, 128, 128)
5. - 9. Render the other 'sides' in a similar manner
10. Swap to light map mode
11. Render the light map in a single call using the 'cube-map'-ish depth texture
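In code, one such 'side' render (step 3) would presumably look roughly like this (bigDepthFBO and renderDepthForFace are hypothetical names):

glBindFramebuffer(GL_FRAMEBUFFER, bigDepthFBO); // FBO with the 384x256 depth texture attached
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 128, 128);   // keep the clear and all writes inside this rectangle
glViewport(0, 0, 128, 128);  // map the rendered face into the same rectangle
glClear(GL_DEPTH_BUFFER_BIT);
renderDepthForFace(0);       // hypothetical helper that draws the scene for this face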
The second method I thought of is to use 3D textures instead of partial rendering, but I still have a similar question: can I render to only a certain 'layer' of a 3D texture while retaining the other layers?
Why would combined shadow maps have any advantage?
You have to render the scene six times either way.
You could render six 128x128 passes into an FBO and then create a 384x256 texture and fill it with the previously rendered textures, but the bandwidth and per-pixel sampling rate stay exactly the same.
Rendering to one part of the texture while preserving the others might be done with the stencil buffer, but in that case I don't see any point in creating a combined shadow map.
Hope this helps. Any feedback would be appreciated.
So I'm trying to replace part of one texture with another in GLSL, the first step in a grander scheme.
I have an image, 2048x2048, with 3 textures in the top left, each 512x512. For testing purposes I'm trying to just repeatedly draw the first one.
//get coord of smaller texture
coord = vec2(int(gl_TexCoord[0].s)%512,int(gl_TexCoord[0].t)%512);
//grab color from it and return it
fragment = texture2D(textures, coord);
gl_FragColor = fragment;
It seems that it only grabs the same pixel; I get a single color back from the texture, and everything ends up grey. Anyone know what's off?
Unless that's a rectangle texture (which it isn't, since you're using texture2D), your texture coordinates are normalized. That means the range [0, 1] maps to the entire texture: 0.5 always means halfway, whether the texture is 256 texels across or 8192.
Therefore, you need to stop passing non-normalized texture coordinates (texel values). Pass normalized texture coordinates and adjust those. For example, a 512x512 sub-image in the corner of a 2048x2048 texture occupies the normalized range [0, 0.25] along each axis, so the coordinate you sample with has to stay inside that range (e.g. by wrapping with mod(gl_TexCoord[0].st, 0.25)) rather than being a texel index like 512.