True depth values using OpenGL readPixels

I'd like to retrieve the depth buffer from my camera view for a 3D filtering application. Currently, I'm using glReadPixels to get the depth component. Instead of the [0,1] values, I need the true values for the depth buffer, or true distance to the camera in world coordinates.
I tried to transform the depth values by the GL_DEPTH_BIAS and GL_DEPTH_SCALE, but that didn't work.
glReadPixels(0, 0, width_, height_, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer);
glGetDoublev(GL_DEPTH_BIAS, &depth_bias); // Returns 0.0
glGetDoublev(GL_DEPTH_SCALE, &depth_scale); // Returns 1.0
I realize this is similar to Getting the true z value from the depth buffer, but I'd like to get the depth values into main memory, not in a shader.

Try using gluUnProject() after retrieving the normalized depth value from the Z-buffer as before. You'll need to supply the modelview matrix, projection matrix, and viewport values, which you can retrieve with glGetDoublev() and glGetIntegerv().
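For example (a minimal sketch; winX and winY here stand for your own window coordinates, which are not in the question):

// Sketch: unproject one window-space depth sample back to object/world space.
// The matrices queried here must be the ones that were active when the scene
// was rendered. Note that glReadPixels uses a lower-left origin, so flip the
// mouse Y coordinate if necessary.
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

GLfloat winZ = 0.0f;
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

GLdouble objX, objY, objZ;
gluUnProject(winX, winY, winZ,
             modelview, projection, viewport,
             &objX, &objY, &objZ);
// (objX, objY, objZ) is the position in the space the modelview matrix maps from.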

Instead of the [0,1] values, I need the true values for the depth buffer, or true distance to the camera in world coordinates.
The depth buffer doesn't contain distance values from the camera. The depth values are the perpendicular distance to the plane of the camera. So if you really need radial distances to the camera, you'll need to compute them in a shader and write them to a buffer; the depth buffer isn't going to help.
but I'd like to get the depth values into main memory, not in a shader.
Then do what those shaders do, except in C/C++ (or whatever) code rather than in a shader. The math is the same either way. Just loop over each value in the depth buffer and transform it.
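For a standard perspective projection with the default glDepthRange(0, 1), that loop might look roughly like this (a sketch under those assumptions; near_plane and far_plane are whatever you passed to your projection):

// Sketch: convert the [0,1] samples read with glReadPixels into eye-space
// distances along the view axis, assuming a standard perspective projection.
void linearize_depth(const float* depth_buffer, float* eye_z,
                     int width, int height,
                     float near_plane, float far_plane)
{
    for (int i = 0; i < width * height; ++i)
    {
        float z_ndc = depth_buffer[i] * 2.0f - 1.0f;   // [0,1] -> [-1,1]
        eye_z[i] = (2.0f * near_plane * far_plane) /
                   (far_plane + near_plane - z_ndc * (far_plane - near_plane));
    }
}

If you want the radial (Euclidean) distance to the camera rather than the distance along the view axis, unproject the full (x, y, depth) window coordinate instead (e.g. with gluUnProject, as in the other answer) and take the length of the resulting eye-space vector.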

Related

Monochrome rendering in OpenGL to framebuffer object?

Is there a way to render monochromatically to a frame buffer in OpenGL?
My end goal is to render to a cube texture to create shadow maps for shading in my application.
From what I understand, one way to do this would be, for each light source, to render the scene 6 times (using the 6 possible orthogonal orientations for the camera), each time into an FBO, and then assemble them into the cube map.
I already have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is three times bigger than it needs to be. Is there a way to render monochromatically so as to reduce the size of the textures?
How do you create the texture(s) for your shadow map (or cubemap)? If you use a GL_DEPTH_COMPONENT[16|24|32] format when creating the texture, then the texture will be single-channel, as you want.
Check official documentation: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexImage2D.xml
GL_DEPTH_COMPONENT
Each element is a single depth value.
The GL converts it to floating point, multiplies by the signed scale factor
GL_DEPTH_SCALE, adds the signed bias GL_DEPTH_BIAS,
and clamps to the range [0,1] (see glPixelTransfer).
As you can see, it says each element is a SINGLE depth value.
So if you use something like this:
for (i = 0; i < 6; i++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                 0,
                 GL_DEPTH_COMPONENT24,
                 size,
                 size,
                 0,
                 GL_DEPTH_COMPONENT,
                 GL_FLOAT,
                 NULL);
then each element will be 24 bits in size (possibly padded to 32). Otherwise it would make no sense to specify a depth size if the data were stored as RGB[A].
This post also validates that depth texture is single channel texture: https://www.opengl.org/discussion_boards/showthread.php/123939-How-is-data-stored-in-GL_DEPTH_COMPONENT24-texture
"I alrady have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be."
In general you render the scene to a shadow map to get depth values (or distances), right? Then why render RGB at all? If you only need depth values, you don't need any color attachments, because you never write to them; you only write to the depth buffer (and OpenGL does that for you unless you override the value in the fragment shader).
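A depth-only FBO setup for one cubemap face could look roughly like this (a sketch; depth_cube and face are placeholder names):

// Sketch: attach one face of a depth cubemap to an FBO and disable color
// output entirely, so only the depth buffer is written.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                       depth_cube, 0);
glDrawBuffer(GL_NONE);   // no color attachment, nothing to draw colors to
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle incomplete framebuffer
}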

OpenGL: Mapping texture on a sphere using spherical coordinates

I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which I also would do if the model itself has texture coordinates, I get the result as shown in the image below. The vertices are shown as points and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinate (u) of two adjacent red/blue vertices has values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This is of course wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. At the seam, where you want the texture coordinate to wrap from 1.0 back to 0.0, the mesh has to handle the discontinuity explicitly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates gets a 0.0 texture coordinate and is connected to the vertices coming from the right (in your example); the other gets a 1.0 texture coordinate and is connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
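A CPU-side sketch of that duplication (the Vertex layout and container types here are hypothetical; u is assumed to be the spherical texture coordinate already computed per vertex):

// Sketch: split the wrap-around seam by duplicating vertices. For any
// triangle whose u coordinates span more than half the texture, the
// vertices on the low side (u < 0.5) get a duplicate with u + 1.0, so
// interpolation runs e.g. 0.95 -> 1.05 instead of 0.95 -> 0.05.
#include <algorithm>
#include <vector>

struct Vertex { float x, y, z, u, v; };   // hypothetical layout

void fix_seam(std::vector<Vertex>& vertices, std::vector<unsigned>& indices)
{
    for (size_t t = 0; t < indices.size(); t += 3)
    {
        float u0 = vertices[indices[t]].u;
        float u1 = vertices[indices[t + 1]].u;
        float u2 = vertices[indices[t + 2]].u;
        if (std::max({u0, u1, u2}) - std::min({u0, u1, u2}) <= 0.5f)
            continue;                      // triangle does not cross the seam
        for (int k = 0; k < 3; ++k)
        {
            unsigned& idx = indices[t + k];
            if (vertices[idx].u < 0.5f)    // vertex on the wrapped side
            {
                Vertex dup = vertices[idx];
                dup.u += 1.0f;
                idx = static_cast<unsigned>(vertices.size());
                vertices.push_back(dup);
            }
        }
    }
}

This relies on GL_TEXTURE_WRAP_S being GL_REPEAT, so that u values slightly above 1.0 wrap back into the texture.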
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).

How to get accurate 3D depth from 2D screen mouse click for large scale object in OpenGL?

I am computing the 3D coordinate from a 2D screen mouse click and then drawing a point at that computed 3D coordinate. Nothing is wrong in the code or in the method; everything works fine. But there is one issue related to depth.
If the object size is around (1000, 1000, 1000), I get the full depth, the exact 3D coordinate of the object's surfel. But when I load an object of size (20000, 20000, 20000), I do not get the exact (depth) 3D coordinates; the point is drawn with some offset from the surface. So my first question is: why is this happening? And the second is: how can I get the full depth and an accurate 3D coordinate for very large objects?
I draw a 3D point with
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 0.999999);
and using
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz);
if (wz > 0.0000001f && wz < 0.999999f)
{
    gluUnProject()....saving 3D coordinate
}
The reason this happens is the limited precision of the depth buffer. An 8-bit depth buffer, for example, can only store 2^8 = 256 different depth values.
The second setting that affects depth precision is the near and far plane of the projection, since that is the range that has to be mapped onto the available depth values. If one sets this range to [0, 100] with an 8-bit depth buffer, the effective precision is 100/256 ≈ 0.39, which means that roughly 0.39 units in eye space get assigned the same depth value.
Now to your problem: most probably you have too few bits assigned to the depth buffer. As described above, this introduces an error since the exact depth value cannot be stored. That is why the points are close to the surface, but not exactly on it.
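Following the same linear approximation as above, you can estimate the step size for your own near/far range and depth-buffer bit depth (a sketch only; a perspective depth buffer is non-linear, so the actual error far from the camera is larger than this estimate):

#include <cmath>

// Sketch: rough, linear estimate of depth-buffer resolution,
// in the spirit of the 100 / 256 example above.
double depth_resolution(double near_plane, double far_plane, int depth_bits)
{
    double steps = std::pow(2.0, depth_bits);    // e.g. 2^24 for a 24-bit buffer
    return (far_plane - near_plane) / steps;     // world units per depth step
}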
I have solved this issue; it was happening because of the depth range. Since OpenGL is a state machine, I think I had changed the depth range somewhere, when it should be 0.0 to 1.0. I think it's always better to set the depth range right after clearing the depth buffer; I now use the following settings just after clearing the depth and color buffers.
Solution:
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthRange(0.0, 1.0);
    glDepthMask(GL_TRUE);
}

OpenGL Depth buffer in orthographic projection

If I use an orthographic projection in OpenGL but still give my objects different z-values, will those differences still show up in the depth buffer?
I mean, in the color buffer everything looks flat and at one distance, but will the objects still be "colorized" in different shades in the depth buffer? Does the depth buffer "understand" depth under an orthographic projection?
A depth buffer has nothing to do with the projection matrix. Simply put, a z-buffer takes note of the closest Z value at a given point. As things are drawn, it looks at the current value: if the new value is less than the existing value, it is accepted and the z-buffer is updated; if the new value is greater than the existing value, i.e. behind it, it is discarded. The depth buffer has nothing to do with color. I think you might be confusing blending with depth testing.
For example, say you have two quads A & B.
A.z = -1.0f;
B.z = -2.0f;
If you assume that both quads have the same dimensions apart from their Z value, then you can see how drawing both would be a waste. Since quad A is in front of quad B, drawing quad B is wasted work. What the depth buffer does is check the Z coordinates: in this example, with depth testing enabled and a depth buffer present, quad B would never be drawn, because the depth test would show that it is occluded by quad A.
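A minimal fixed-function sketch of that situation (all values chosen purely for illustration):

// Sketch: two overlapping quads under an orthographic projection.
// Quad B (z = -2) is farther away and fails the depth test wherever
// quad A (z = -1) has already been drawn.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-5.0, 5.0, -5.0, 5.0, 0.1, 10.0);      // near = 0.1, far = 10
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glColor3f(1.0f, 0.0f, 0.0f);                   // quad A, closer
glBegin(GL_QUADS);
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f,  1.0f, -1.0f);
glVertex3f(-1.0f,  1.0f, -1.0f);
glEnd();

glColor3f(0.0f, 0.0f, 1.0f);                   // quad B, farther; hidden behind A
glBegin(GL_QUADS);
glVertex3f(-1.0f, -1.0f, -2.0f);
glVertex3f( 1.0f, -1.0f, -2.0f);
glVertex3f( 1.0f,  1.0f, -2.0f);
glVertex3f(-1.0f,  1.0f, -2.0f);
glEnd();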

Sampling data from a shadow map texture using automatic comparison via the texture2D function

I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture is set up with the appropriate parameters: GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning that the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
What coordinates should I give the texture2D function in my final fragment shader? I currently have a
smooth in vec4 light_vert_pos
set in my fragment shader that is defined in the vertex shader by the function
light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;
I would assume I could multiply my lighting by the value
texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)
but this does not seem to work. Since light_vert_pos is only in post projective coordinates (the matrix used to create it is the matrix I use to create the depth buffer in the FBO), should I manually clamp the 3 x/y/z variables to [0,1]?
You don't say how you generated your depth values. So I'll assume you generated your depth values by rendering triangles using normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.
In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.
The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
Note that this is all during the depth pass: the generation of the depth values for the shadow map.
However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:
vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;
To compute the Z value, you need to know the near and far values you use for glDepthRange.
texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);
Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get
texCoords.z = ndc_space_values.z * 0.5 + 0.5;
Which looks familiar somehow.
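On the CPU side, you could query the range used during the depth pass and upload it as a uniform (program and the uniform name u_depth_range are just example names):

// Sketch: hand the glDepthRange near/far of the shadow pass to the shader,
// so it can rescale NDC z into window-space depth for the comparison.
GLfloat depth_range[2];                        // [0] = near, [1] = far
glGetFloatv(GL_DEPTH_RANGE, depth_range);      // query while that range is current

GLint loc = glGetUniformLocation(program, "u_depth_range");
glUniform2f(loc, depth_range[0], depth_range[1]);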
Aside:
Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?