In OpenGL, I'm trying to render a depth map that doesn't lose precision with far-away objects.
Do you know of any way I can accomplish this task?
My approach so far: I tried the "reverse depth" trick (here for example).
I modified the perspective projection matrix in such a way that the near and far values are mapped as [-n, -f] -> [1, 0] instead of the usual [-n, -f] -> [-1, 1]
I rendered the depth map to a framebuffer, using in particular the 32-bit float depth component: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, conf::SCR_WIDTH, conf::SCR_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL)
plus I tackled the need for clipping to [1,0] using glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) (documentation here)
lastly, after rendering the depth map to the framebuffer, I take the depth values and map them back to eye coordinates, divide by the far plane value, and use this value as a color
However, I couldn't notice any gain in precision. Am I doing anything wrong? Is there a better approach?
Example on a close-by cube: you can clearly see the discrete transitions; I would like a smoother output.
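One way to sanity-check a reverse-depth setup without a GL context is to push an eye-space depth through both mappings, round-trip it through 32-bit float storage (what a GL_DEPTH_COMPONENT32F buffer holds), and compare the reconstruction error. The sketch below is a Python model, not GL code; the near/far values and the exact mapping formulas (standard near→0/far→1 vs. reversed near→1/far→0, both over a [0, 1] range as with glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE)) are assumptions for the example.

```python
import struct

def as_float32(x):
    # Round-trip through IEEE-754 binary32, emulating storage in a
    # GL_DEPTH_COMPONENT32F buffer.
    return struct.unpack('f', struct.pack('f', x))[0]

n, f = 0.1, 10000.0  # hypothetical near/far planes

def depth_standard(ze):
    # Conventional [0, 1] window depth: near -> 0, far -> 1.
    return f * (ze - n) / (ze * (f - n))

def depth_reversed(ze):
    # Reversed mapping: near -> 1, far -> 0.
    return n * (f - ze) / (ze * (f - n))

def eye_from_standard(d):
    # Invert the standard mapping back to eye-space distance.
    return f * n / (f - d * (f - n))

def eye_from_reversed(d):
    # Invert the reversed mapping back to eye-space distance.
    return f * n / (d * (f - n) + n)

ze = 9000.0  # a far-away fragment
err_std = abs(eye_from_standard(as_float32(depth_standard(ze))) - ze)
err_rev = abs(eye_from_reversed(as_float32(depth_reversed(ze))) - ze)
print(err_std, err_rev)  # the reversed error is orders of magnitude smaller
```

The point of the exercise: near d = 1.0 a float32 has a fixed absolute step of about 6e-8, while near 0 it has enormous relative precision, so reverse-Z pairs the float's dense region with the depth range where the hyperbolic mapping is flattest. If a real render shows no improvement, the usual culprits are the clip-control call not taking effect or the depth comparison still being GL_LESS instead of GL_GREATER.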
I am computing the 3D coordinates from the 2D screen mouse click, then drawing a point at the computed 3D coordinate. Nothing is wrong in the code or in the method; everything works fine. But there is one issue, and it is related to depth.
If the object size is around (1000, 1000, 1000), I get the full depth, the exact 3D coordinate of the object's surfel. But when I load an object with size (20000, 20000, 20000), I do not get the exact 3D (depth) coordinates; the point is drawn with some offset from the surface. So my first question is: why is this happening? And the second question is: how can I get the full depth and an accurate 3D coordinate for very large objects?
I draw a 3D point with
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 0.999999);
and using
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz);
if(wz > 0.0000001f && wz < 0.999999f)
{
gluUnProject()....saving 3D coordinate
}
The reason why this happens is the limited precision of the depth buffer. An 8-bit depth buffer, for example, can only store 2^8 = 256 different depth values.
The second parameter that affects depth precision is the near/far plane setting of the projection, since this is the range that has to be mapped to the available values in the depth buffer. If one sets this range to [0, 100] with an 8-bit depth buffer, the actual precision is 100/256 ≈ 0.39, which means that eye-space depths roughly 0.39 units apart will be assigned the same depth value.
Now to your problem: most probably you have too few bits assigned to the depth buffer. As described above, this introduces an error, since the exact depth value cannot be stored. This is why the points are close to the surface, but not exactly on it.
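The quantization argument can be made concrete with a small Python sketch that simulates storing a normalized depth in integer buffers of various bit depths. This is the simplified, linear model used in the explanation above; a real perspective depth buffer distributes its values non-linearly, but the bit-depth effect is the same.

```python
def quantize(value, bits):
    # Snap a normalized value in [0, 1] to the nearest representable
    # level of a `bits`-bit integer depth buffer.
    levels = (1 << bits) - 1
    return round(value * levels) / levels

near, far = 0.0, 100.0
z_eye = 33.3                       # some eye-space depth inside [near, far]
d = (z_eye - near) / (far - near)  # normalize to [0, 1]

for bits in (8, 16, 24):
    z_back = quantize(d, bits) * (far - near) + near
    print(bits, abs(z_back - z_eye))  # error shrinks as bits increase
```

With 8 bits the round-trip error is on the order of the 0.39-unit step computed above; at 24 bits (the common depth-buffer size) it is a few micro-units over the same range.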
I have solved this issue; it was happening because of the depth range. Since OpenGL is a state machine, I think I must have changed the depth range somewhere, when it should be from 0.0 to 1.0. I think it is always better to set the depth range right after clearing the depth buffer, so I now use the following settings immediately after clearing the depth and color buffers.
Solution:
{
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 1.0);
glDepthMask(GL_TRUE);
}
The idea
I need to create a 2D texture to be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store each pixel's distance from the camera in it. I don't want the GL Z-buffer distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
By using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
By using a GL_DEPTH_COMPONENT32 depth attachment texture, I am getting the clamped distance to the near plane - rubbish.
So it seems I am stuck with not using a depth attachment, even though depth attachments seem to hold more precision. Is there a way to get mediump float precision with standard textures?
I find it strange that OpenGL doesn't have a generic container for arbitrary data, i.e. with a custom bit depth. Or maybe I missed something again!
You can use floating-point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to target; older mobile devices in particular often lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
Now you can store the data from your fragment shader manually. However, clamping still occurs depending on your MVP. You also need to pass the data to the fragment shader.
There are also 32-bit formats.
There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
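The precision trade-off between GL_R16 and GL_R16F can be illustrated without GL at all, since Python's struct module can round-trip IEEE-754 half floats (the 'e' format, which is what GL_R16F stores). For a value near 1.0, where a half float's 11-bit significand gives steps of about 5e-4, the normalized format wins:

```python
import struct

def as_half(x):
    # Round-trip through IEEE-754 binary16 (what a GL_R16F texel stores).
    return struct.unpack('e', struct.pack('e', x))[0]

def as_unorm16(x):
    # Round-trip through a 16-bit normalized integer (what GL_R16 stores).
    return round(x * 65535) / 65535

x = 0.999
err_half = abs(as_half(x) - x)
err_unorm = abs(as_unorm16(x) - x)
print(err_half, err_unorm)  # the half-float error is noticeably larger
```

Near zero the comparison flips: the half float's exponent lets it resolve tiny values far better than the fixed 1/65535 step, which is exactly the wide-range-versus-uniform-precision trade described above.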
You would probably have an easier time accomplishing this in GLSL as opposed to the C API. However, any custom depth buffer will be consistently, considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that depth buffer values are z/w, where z is the distance from the near plane and w is the distance from the camera. So the distance can be recovered from the depth buffer quickly and with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.
I'm using OpenGL (with Python bindings) to render depth maps of models saved as .obj files. The output is a numpy array with the depth to the object at each pixel.
The seemingly relevant parts of the code I'm using look like this:
glDepthFunc(GL_LESS) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
I can use this to successfully render depth images of the object.
However, what I want to do is, instead of obtaining the depth of the face closest to the camera along each ray, obtain the depth of the face furthest from the camera. I want faces to be rendered regardless of which direction they are facing, i.e. I don't want to backface-cull.
I tried to achieve this by modifying the above code like so:
glDisable(GL_CULL_FACE) # Turn off backface culling
glDepthFunc(GL_GREATER) # The Type Of Depth Test To Do
glEnable(GL_DEPTH_TEST) # Enables Depth Testing
... load the vertices and triangles, set the camera viewing position ...
depth = np.array(glReadPixels(0, 0, Width, Height, GL_DEPTH_COMPONENT, GL_FLOAT), dtype=np.float32)
However, when I do this I don't get any depth rendered at all. I suspect this is because I am using GL_DEPTH_COMPONENT to read out the depth; however, I am not sure what to change to fix this.
How can I get the depth of the faces furthest from the camera?
Switching the depth test to GL_GREATER will in principle do the trick; you just overlooked a tiny detail: you need to initialize the depth buffer differently. By default, it is initialized to 1.0 when the depth buffer is cleared, so that GL_LESS comparisons can update it to values smaller than 1.0.
Now you want it to work the other way around, so you must initialize it to 0.0. To do so, just add a glClearDepth(0.0) before the glClear(... | GL_DEPTH_BUFFER_BIT).
Furthermore, you mention yourself that you don't want backface culling here. But instead of disabling it, you can also flip the culled side around with glCullFace. You probably want GL_FRONT here, as GL_BACK is the default. Disabling culling entirely will of course work too.
Make sure you use glDepthMask(GL_TRUE); to enable depth writing to the buffer.
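Why the clear value matters can be sketched as a tiny one-pixel software depth buffer (a simulation of the GL behavior, not GL code; the fragment depths are made up):

```python
GL_LESS, GL_GREATER = 'less', 'greater'

def depth_test(func, incoming, stored):
    # The two comparison modes used in the question.
    return incoming < stored if func == GL_LESS else incoming > stored

def render(clear_depth, func, fragment_depths):
    # One-pixel "depth buffer": clear it, then submit fragments in order.
    stored = clear_depth
    for d in fragment_depths:
        if depth_test(func, d, stored):
            stored = d  # depth write (requires glDepthMask(GL_TRUE))
    return stored

frags = [0.3, 0.7, 0.5]  # depths of three faces along one ray

# GL_GREATER against the default clear value 1.0: every test fails,
# so the buffer still holds the clear value afterwards.
print(render(1.0, GL_GREATER, frags))

# GL_GREATER after glClearDepth(0.0): the furthest face (0.7) wins.
print(render(0.0, GL_GREATER, frags))
```

With GL_LESS and the default clear value of 1.0 the same loop keeps the nearest face, which is the behavior the questioner started from.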
Problem Explanation
I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation, which is only noticeable near the borders of lights, is coming from.
The problem appears to be caused by loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and handles his normals in a similar manner with no issues.
From about 2 meters away in our game's units (64 units/meter):
A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:
A comparison with a closeup from my forward renderer which demonstrates the spherical banding that one would expect with a RGBA8 render target (only 0-255 possible values for each color). Note that in my deferred picture the back walls exhibit normal spherical banding:
The light volume is shown here as the green wireframe:
As can be seen the effect isn't visible unless you get close to the surface (around one meter in our game's units).
Position reconstruction
First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps, rendering only the back-faces when their depth is greater than or equal to the depth buffer, as suggested here.
To reconstruct the camera space position of a fragment I am taking the vector from the camera space fragment on the light volume, normalizing it, and scaling it by the linear depth from my gbuffer. This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).
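That reconstruction can be sketched as follows. This is a Python model with made-up values: in the real renderer the ray endpoint comes from the rasterized light-volume fragment's eye-space position and the distance from the G-buffer's GL_R32F target; the geometric point is that any point along the same view ray, normalized and rescaled by the stored distance, lands back on the original fragment.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def reconstruct(light_volume_pos, stored_distance):
    # The camera sits at the origin in eye space, so the vector to the
    # rasterized light-volume fragment *is* the view-ray direction.
    ray = normalize(light_volume_pos)
    return tuple(c * stored_distance for c in ray)

# A fragment actually at eye-space position (3, 4, -12): distance 13 from the eye.
true_pos = (3.0, 4.0, -12.0)
dist = 13.0
# The light volume is hit somewhere along the same ray, e.g. twice as far out:
volume_hit = (6.0, 8.0, -24.0)
print(reconstruct(volume_hit, dist))  # recovers approximately (3, 4, -12)
```

Note this assumes the G-buffer stores radial distance to the eye; if it instead stores -z (distance to the camera plane), the ray should be scaled so its z component matches the stored value rather than normalized to unit length.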
Geometry Buffer
My gBuffer setup is:
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = { GL_R32F, GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt] = { GL_RED, GL_RGBA, GL_RGBA, GL_RGBA };
GLint types[num_rt] = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
Normal Precision
The problem was that my normals simply didn't have enough precision. At 8 bits per component, there are only 256 discrete possible values. Examining the normals in my gbuffer overlaid on top of the lighting showed a 1:1 correspondence between normal values and lit "pixel" values.
I am unsure why my classmate does not get the same issue (he is going to investigate further).
After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.
Solution
After changing my normal render target to RG16F the problem is resolved.
Using the method suggested here to store and retrieve normals, I get the following results:
I now need to store my normals more compactly (I only have room for two components). This is a good survey of techniques if anyone finds themselves in the same situation.
[EDIT 1]
As both Andon and GuyRT have pointed out in the comments, 16 bits per component is a bit overkill for what I need. I've switched to RGB10_A2 as they suggested, and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits per component help a lot (1024 vs. 256 discrete values).
Here's what it looks like now.
It should also be noted (for anyone that references this post in the future) that the image I posted for RG16F has some undesirable banding from the method I was using to compress/decompress the normal (there was some error involved).
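The effect of those extra two bits can be estimated offline by quantizing unit normals at 8 and 10 bits per component and measuring the angular error. This sketch uses the plain n * 0.5 + 0.5 remap (not one of the fancier encodings from the survey) and averages over seeded random directions:

```python
import math
import random

def quantize_normal(n, bits):
    # Store each component as an unsigned integer after the usual
    # n * 0.5 + 0.5 remap (what an RGB8 / RGB10_A2 gbuffer target does),
    # then decode and renormalize.
    levels = (1 << bits) - 1
    q = [round((c * 0.5 + 0.5) * levels) / levels * 2.0 - 1.0 for c in n]
    l = math.sqrt(sum(c * c for c in q))
    return [c / l for c in q]

def angle_deg(a, b):
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

random.seed(0)
normals = []
while len(normals) < 1000:
    v = [random.uniform(-1, 1) for _ in range(3)]
    l = math.sqrt(sum(c * c for c in v))
    if 1e-6 < l <= 1.0:  # uniform directions via rejection sampling
        normals.append([c / l for c in v])

avg8 = sum(angle_deg(n, quantize_normal(n, 8)) for n in normals) / len(normals)
avg10 = sum(angle_deg(n, quantize_normal(n, 10)) for n in normals) / len(normals)
print(avg8, avg10)  # the 10-bit average angular error is roughly 4x smaller
```

A fraction of a degree sounds negligible, but with a high specular power the lit result amplifies exactly these steps, which matches the banding in the screenshots.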
[EDIT 2]
After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch black environments with a sonar-like ability. Normally in a scene you would have a number of lights at different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That combined with the fact that it only appears on very round objects relatively close up might be why I provoked this. This is all just a (slightly educated) guess on my part.
I'd like to retrieve the depth buffer from my camera view for a 3D filtering application. Currently, I'm using glReadPixels to get the depth component. Instead of the [0,1] values, I need the true values for the depth buffer, or true distance to the camera in world coordinates.
I tried to transform the depth values by the GL_DEPTH_BIAS and GL_DEPTH_SCALE, but that didn't work.
glReadPixels(0, 0, width_, height_, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer);
glGetDoublev(GL_DEPTH_BIAS, &depth_bias); // Returns 0.0
glGetDoublev(GL_DEPTH_SCALE, &depth_scale); // Returns 1.0
I realize this is similar to Getting the true z value from the depth buffer, but I'd like to get the depth values into main memory, not in a shader.
Try using gluUnproject() after retrieving the normalized depth value from the Z-buffer as before. You'll need to supply the modelview matrix, projection matrix, and viewport values, which you can retrieve with glGetDoublev() and glGetIntegerv().
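What gluUnProject() does internally can be sketched in a few lines: map the window coordinates and depth back to NDC, then solve (projection × modelview) · v = ndc and divide by w. Below is a self-contained Python model; the 90° FOV projection matrix, identity modelview, viewport, and test point are all made up for the example.

```python
def mat_vec(m, v):
    # 4x4 row-major matrix times 4-vector.
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def solve4(a, b):
    # Gauss-Jordan elimination with partial pivoting on the 4x4 system a*x = b.
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(4):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][4] / m[i][i] for i in range(4)]

def unproject(winx, winy, depth, pm, viewport):
    # pm is projection * modelview; depth is the [0, 1] value from the Z-buffer.
    vx, vy, vw, vh = viewport
    ndc = [(winx - vx) / vw * 2 - 1, (winy - vy) / vh * 2 - 1, depth * 2 - 1, 1.0]
    v = solve4(pm, ndc)           # v = inverse(pm) * ndc
    return [c / v[3] for c in v[:3]]  # perspective divide

n, f = 1.0, 100.0
# 90-degree FOV, aspect 1, identity modelview -> pm is just the projection.
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0, 0, -1, 0]]
viewport = (0, 0, 640, 480)

# Forward-project a known eye-space point to get window coords and depth...
eye_point = [1.0, 2.0, -5.0, 1.0]
clip = mat_vec(proj, eye_point)
ndc = [c / clip[3] for c in clip[:3]]
winx = (ndc[0] + 1) / 2 * viewport[2]
winy = (ndc[1] + 1) / 2 * viewport[3]
depth = (ndc[2] + 1) / 2  # what glReadPixels(GL_DEPTH_COMPONENT) would return

# ...then unproject recovers the original point.
print(unproject(winx, winy, depth, proj, viewport))  # approximately [1, 2, -5]
```

The real gluUnProject takes the modelview and projection matrices separately and multiplies them itself; the math is otherwise the same.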
Instead of the [0,1] values, I need the true values for the depth buffer, or true distance to the camera in world coordinates.
The depth buffer doesn't contain distance values from the camera. The depth values are derived from the perpendicular distance to the camera's plane, not the radial distance to the camera. So if you really need radial distances, you'll need to compute them in a shader and write them to a buffer; the depth buffer isn't going to help.
but I'd like to get the depth values into main memory, not in a shader.
Then do what those shaders do, except in C/C++ (or whatever) code rather than in a shader. The math is the same either way. Just loop over each value in the depth buffer and transform it.
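For a standard (non-reversed) perspective projection with the default glDepthRange(0, 1), that per-value transform is just the inverse of the window-depth mapping. A sketch, with placeholder near/far values; a reversed or otherwise customized projection needs a different formula:

```python
# Hypothetical near/far planes; must match the projection used for rendering.
near, far = 0.1, 100.0

def linearize(d):
    # Invert the standard window-space depth mapping to get the eye-space
    # distance to the camera plane (not the radial distance).
    return far * near / (far - d * (far - near))

# Loop over the depth values returned by glReadPixels and transform each one.
depth_buffer = [0.0, 0.5, 0.9, 1.0]
print([linearize(d) for d in depth_buffer])  # 0.0 maps to near, 1.0 to far
```

Sanity check: a stored depth of 0.0 comes back as the near-plane distance and 1.0 as the far-plane distance, with the characteristic non-linear bunching of precision toward the near plane in between.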