In an effort to improve cascaded shadow maps, I have looked into using GL_DEPTH_CLAMP and moving the near and far planes so that they fit tightly around the actual view frustum instead of the global bounding box. But GL_DEPTH_CLAMP appears to have no effect, and the near plane clips the geometry.
I use reverse z as per Reversed-Z in OpenGL:
The tl;dr version is:
Change clip control to 0/1
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
Use a floating point depth buffer.
Clear the depth buffer to 0
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClearDepth(0.0f);
glClear(GL_DEPTH_BUFFER_BIT);
Change the depth test to greater
glDepthFunc(GL_GREATER);
Change the projection matrix to match
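For reference, this is roughly what the matching projection can look like: a minimal sketch of the common infinite-far-plane reversed-Z perspective matrix (not necessarily the exact matrix used in my code), assuming column-major float[16] storage as glUniformMatrix4fv expects:

#include <math.h>

/* Sketch: reversed-Z perspective projection with an infinite far plane,
   assuming glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) is active.
   Column-major float[16], usable with glUniformMatrix4fv(..., GL_FALSE, out). */
void reversed_z_infinite_perspective(float out[16], float fovy_rad,
                                     float aspect, float z_near)
{
    const float f = 1.0f / tanf(fovy_rad * 0.5f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;  /* column 0, row 0 */
    out[5]  = f;           /* column 1, row 1 */
    out[11] = -1.0f;       /* column 2, row 3: w_clip = -z_view */
    out[14] = z_near;      /* column 3, row 2: z_clip = z_near  */
    /* Maps z_view = -z_near to depth 1 and z_view -> -infinity to depth 0. */
}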
Reverse z works really well, but apparently it interferes with GL_DEPTH_CLAMP. If I switch off reverse z, GL_DEPTH_CLAMP appears to work as designed, but many other parts of the code break. (It is simply not switchable anymore.)
If you look at the documentation of GL_DEPTH_CLAMP it states the following:
If enabled, the -wc ≤ zc ≤ wc plane equation is ignored by view volume clipping (effectively, there is no near or far plane clipping).
My guess is that GL_DEPTH_CLAMP's implementation is just not compatible with reverse z, but I feel like I am missing something. Any idea how to get GL_DEPTH_CLAMP to work with reverse z?
See the following image of the depth buffers:
As you can see in the first cascade, the near plane clips the geometry; it should be all white.
Related
I have four arbitrary points (lt,rt,rb,lb) in 3d space and I would like these points to define my near clipping plane (lt stands for left-top, rt for right-top and so on).
Unfortunately, these points are not necessarily a rectangle (in screen space). They are however a rectangle in world coordinates.
The context is that I want to have a mirror surface by rendering the mirrored world into a texture. The mirror is an arbitrarily translated and rotated rectangle in 3D space.
I do not want to change the texture coordinates on the vertices, because that would lead to ugly pixelisation when you look at the mirror from the side, for example. If I did that, culling would also not work correctly, which would lead to a huge performance impact in my case (small mirror, huge world).
I also cannot work with the stencil buffer, because in some scenarios I have mirrors facing each other which would also lead to a huge performance drop. Furthermore, I would like to keep my rendering pipeline simple.
Can anyone tell me how to compute the according projection matrix?
Edit: Of course I have already moved my camera accordingly. That is not the problem here.
Instead of tweaking the projection matrix (which I don't think can be done in the general case), you should define an additional clipping plane. You do that by enabling:
glEnable(GL_CLIP_DISTANCE0);
And then set gl_ClipDistance vertex shader output to be the distance of the vertex from the mirror:
gl_ClipDistance[0] = dot(vec4(vertex_position, 1.0), mirror_plane);
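For completeness, a sketch of how the mirror_plane uniform could be built on the CPU from three of the mirror's corners; lt, rt and lb as float[3] world-space points follow the question's naming, everything else is a placeholder:

#include <math.h>

/* Sketch: plane through the mirror rectangle, stored as (nx, ny, nz, d) so that
   dot(plane, vec4(p, 1.0)) is the signed distance of p from the mirror. */
void mirror_plane_from_corners(const float lt[3], const float rt[3],
                               const float lb[3], float plane[4])
{
    float u[3], v[3], n[3];
    for (int i = 0; i < 3; ++i) { u[i] = rt[i] - lt[i]; v[i] = lb[i] - lt[i]; }

    /* normal = u x v; flip the sign if the mirrored world ends up clipped away */
    n[0] = u[1] * v[2] - u[2] * v[1];
    n[1] = u[2] * v[0] - u[0] * v[2];
    n[2] = u[0] * v[1] - u[1] * v[0];

    const float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    for (int i = 0; i < 3; ++i) plane[i] = n[i] / len;
    plane[3] = -(plane[0] * lt[0] + plane[1] * lt[1] + plane[2] * lt[2]);
}

Upload the result with glUniform4fv and make sure vertex_position in the shader is in the same (world) space as the corner points.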
As a simple demonstration of my problem, I am trying to render a large but simple mesh to a texture to be used later. Strangely enough, the parts of this mesh that are further away from the camera are displayed in front of the parts closer to the camera when viewed from a specific angle, even though I definitely use depth testing:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
As an example, I am trying to render a subdivided grid (on the xz plane) centered on the origin, with a smooth "hill" in the middle of the grid.
When rendered to the screen, no errors occur and the mesh looks like this (rendered using an orthographic projection, with the greyscale color representing depth; furthermore, no error occurs even if the mesh is viewed from any side):
Rendering to the screen is of course done by making sure the framebuffer is set to 0 (glBindFramebuffer(GL_FRAMEBUFFER, 0);), but I need to render this scene to another framebuffer, which is not my screen, in order to use the result as a texture.
So I have set up another framebuffer and an output texture, and now I am rendering this scene to that framebuffer (with absolutely nothing changed, except the framebuffer and the viewport size, which is set to match the output texture). For the purpose of demonstrating the error I am experiencing, I then render this rendered texture onto a plane which is displayed on the screen.
When the mesh is seen from the positive x axis and rotated around the y axis (centered on its origin) between -0.5 π rad and 0.5 π rad, the rendered texture looks identical to the result when rendering to the screen, as seen in the image above.
However, when the rotation around the y axis is greater than 0.5 π rad or less than -0.5 π rad, the closer-to-the-camera hill is rendered behind the further-away-from-the-camera plane (the fact that the hill is closer to the camera can be verified by looking at the color, which represents depth):
In the border regions, with a rotation around the y axis close to 0.5 π rad or -0.5 π rad, the scene looks like this:
To recap: this error with the depth sorting happens only when rendering to a texture using a framebuffer, and only when the object is viewed from a specific angle. When the object is rendered directly to the screen, no error occurs. My question is therefore: why does this happen, and how (if at all) can I avoid it?
If this problem only happens when you're rendering to the texture framebuffer, you probably don't have a depth attachment properly linked to it.
Make sure that during FBO init you are linking it to a depth texture as well.
There's a good example of how to do this here.
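A minimal sketch of the kind of depth attachment meant here; fbo, depthTex, texW and texH are placeholder names:

GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, texW, texH, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle an incomplete framebuffer here */
}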
Also, check all of the matrices you're using to render -- I've had several cases in the past where improper matrices have thrown off depth calculations.
I'm having some issues with z-fighting while drawing simple 2D textured quads using OpenGL. The symptoms are two objects moving at the same speed, one on top of the other, but periodically one can be seen through the other and vice versa - a sort of "flickering". I assume this is indeed z-fighting.
I have turned off Depth Testing and have the following as well:
gl.Disable(gl.DEPTH_TEST)
gl.DepthFunc(gl.LESS)
gl.Enable(gl.BLEND)
gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
My view and ortho matrices are as follows:
Projection := mathgl.Ortho(0.0, float32(width), float32(height), 0.0, -5.0, 5.0)
View := mathgl.LookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
I have also tried setting the near and far distances much further apart (a range of around 50000), but that did not help.
The only difference in my OpenGL process is that, instead of a DrawElements call for each individual object, I package all vertices, UVs (sprite atlas), translation, rotation, etc. into one big package sent to the vertex shader.
Does anyone have remedies for 2d z fighting?
Edit:
I'm adding some pictures to further describe the scenario:
These images are taken a few seconds apart from each other. They are simply textures moving from left to right. As they move, you can see from the images that one sprite overlaps the other and vice versa, back and forth, very fast.
Also note that my images (sprites) are PNGs that have a transparent background.
It definitely isn't depth fighting if you have depth testing disabled as shown in the code snippet.
"I package all vertices, uvs(sprite atlas), translation, rotation, etc in one big package sent to vertex shader." - You need to look into the order that you add your sprites. Perhaps it's inconsistent for some reason.
This could be Z fighting
the usual causes are:
fragments are at the same Z coordinate, or closer together than the accuracy of the Z coordinate
fragments are too far from the camera: with perspective projection, the farther you are from Z-near, the less depth accuracy you have
some ways to fix this:
change the size/position of the overlapped surfaces slightly (see the sketch after this list)
use more bits for Z-Buffer (Depth)
use linear or logarithmic Z-buffer
increase Z-near or decrease Z-far, or both; for perspective projection you can combine multiple frustums to get a high-precision Z range
sometimes helps to use glDepthFunc(GL_LEQUAL)
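A minimal sketch of the first point above for the 2D sprite case: give each overlapping sprite a slightly different z when its quad is built (sprite_index and z_step are illustrative names, not from the question's code):

float sprite_z(int sprite_index)
{
    /* Small, distinct offsets; keep the values inside the orthographic
       near/far range from the question (-5..5). */
    const float z_step = 0.01f;
    return 1.0f + (float)sprite_index * z_step;
}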
This could be an issue with Blending.
As you use blending, you need to render a bit differently. To render transparency correctly you must Z-sort the scene, otherwise artifacts can occur, especially if you have very dense geometry of transparent objects or objects near them (many polygon edges close together). In addition, Z-fighting creates artifacts that are an order of magnitude worse with blending.
some ways to fix this:
Z-sorting can be partially done by multi-pass rendering + depth test + switching the front face
so first render all solids, and then render the Z-sorted transparent objects with the front face set to the side not facing the camera. Then render the same objects with the front face set to the side facing the camera. You need to use the depth test for this! This way you do not need to sort all polygons of the scene, just the transparent objects. The results are not 100% correct for complex transparent geometries, but they are usually good enough (especially for dynamic scenes); see the sketch after this list. This is how the output looks:
It is a glass cup, visually a bit messed up by the blending function selected for this case: darker pixels mean two layers of glass, which is on purpose and not a bug. That is why the opening looks as if the front/back faces were swapped.
use less dense geometry for transparent objects
get rid of Z-fighting issues
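A sketch of the multi-pass idea from the first point above; renderSolids() and renderTransparentSorted() stand in for your own draw calls, and face culling is used here as one way to implement the front-face switching:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

/* 1) Opaque geometry first, filling the depth buffer. */
glDisable(GL_BLEND);
renderSolids();

/* 2) Transparent objects, keeping only the faces pointing away from the camera. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
renderTransparentSorted();   /* far-to-near object order */

/* 3) The same objects again, now the faces pointing toward the camera. */
glCullFace(GL_BACK);
renderTransparentSorted();
glDisable(GL_CULL_FACE);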
I am working in VR field where good calibration of a projected screen is very important, and because of difficult-to-adjust ceiling mounts and other hardware specificities, I am looking for a fullscreen shader method to “correct” the shape of the screen.
Most 2D or 3D engines allow you to apply a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad so the image is deformed into a quadrilateral (like the hardware keystone correction on a projector), but it is not enough for the requirements
(this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed for a finer adjustment.
The borders of the image are not straight (barrel or pincushion effect), so even with a few control points, the image must be distorted in a curved way in between. Otherwise the deformation would create visible linear seams at each control point's boundary.
Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the approach would be to define a continuous transform for the texture coordinates of the image being drawn on the full-screen quad:
u,v => corrected_u, corrected_v
You can find an illustration here.
I have seen some FFD (free-form deformation) algorithms that work in 2D or 3D and would allow an image or mesh to be deformed "softly", as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I have also thought of a weight-based deformation as used in skeletal/soft-body animation, but it seems difficult to weight the control points properly.
Do you know of a method, algorithm or general approach that could help me solve this problem?
I have seen some mesh-based deformations like the one the new Oculus Rift DK2 requires for its own deformations, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bezier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (an example is provided here).
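As a sketch of the pre-built CPU variant: a bicubic Bezier patch over a 4x4 grid of 2D control points, evaluated per (u, v) to get the corrected coordinates (the ctrl layout is an assumption):

/* Cubic Bernstein basis functions B0..B3 at parameter t. */
static float bernstein3(int i, float t)
{
    const float s = 1.0f - t;
    switch (i) {
        case 0:  return s * s * s;
        case 1:  return 3.0f * t * s * s;
        case 2:  return 3.0f * t * t * s;
        default: return t * t * t;
    }
}

/* ctrl[row][col] holds the corrected (u, v) position of each control point. */
void bezier_correct_uv(const float ctrl[4][4][2], float u, float v, float out_uv[2])
{
    out_uv[0] = 0.0f;
    out_uv[1] = 0.0f;
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            const float w = bernstein3(i, v) * bernstein3(j, u);
            out_uv[0] += w * ctrl[i][j][0];
            out_uv[1] += w * ctrl[i][j][1];
        }
    }
}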
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. It holds the 3 influence coefficients of the offset parameters on a 0..1 axis, with the 3 coefficients centered at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and goes back to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms I can interpolate my parameters very smoothly over the image (with 3 lookups horizontally, and a final one vertically).
Then, once interpolated, I offset the texture coordinates with these values and it works :-D
This is more or less a weighted interpolation of the coordinates using texture lookups for speedup.
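For reference, the same weighting can be written out arithmetically instead of reading the 1D ramp texture; the exact cosine-shaped ramps below are an assumption matching the description above, and ctrl is a 3x3 grid of float2 offsets:

#include <math.h>

/* Sketch: smooth partition-of-unity weights with peaks at t = 0, 0.5 and 1
   (the "red", "green" and "blue" ramps), then a 3x3 weighted interpolation:
   three "horizontal" combinations followed by a final "vertical" one. */
static void ramp_weights(float t, float w[3])
{
    const float PI = 3.14159265f;
    w[0] = 0.5f * (1.0f + cosf(PI * fminf(t * 2.0f, 1.0f)));          /* red   */
    w[2] = 0.5f * (1.0f + cosf(PI * fminf((1.0f - t) * 2.0f, 1.0f))); /* blue  */
    w[1] = 1.0f - w[0] - w[2];                                        /* green */
}

void interpolate_offset(const float ctrl[3][3][2], float u, float v, float out[2])
{
    float wu[3], wv[3], row[3][2];
    ramp_weights(u, wu);
    ramp_weights(v, wv);
    for (int i = 0; i < 3; ++i) {   /* the 3 "horizontal" interpolations */
        row[i][0] = wu[0]*ctrl[i][0][0] + wu[1]*ctrl[i][1][0] + wu[2]*ctrl[i][2][0];
        row[i][1] = wu[0]*ctrl[i][0][1] + wu[1]*ctrl[i][1][1] + wu[2]*ctrl[i][2][1];
    }
    out[0] = wv[0]*row[0][0] + wv[1]*row[1][0] + wv[2]*row[2][0];  /* final "vertical" one */
    out[1] = wv[0]*row[0][1] + wv[1]*row[1][1] + wv[2]*row[2][1];
}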
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite amount of depth with it.
However, you can set the far plane to infinitely far away: See this. It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So since this revolves around the depth buffer, you won't have a problem dealing with further-away stuff, as long as you don't use it. For example, a common technique is to render the scene in "slabs" that each only use the depth buffer internally (for all the stuff in one slab) but some form of painter's algorithm externally (for the slabs themselves, so you draw the furthest one first).
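A rough sketch of the slab idea, with buildPerspective(), setProjection(), drawSlab() and the slab arrays as placeholders for your own code:

/* Draw the farthest slab first; clear only the depth buffer between slabs
   so each slab gets the full depth precision to itself. */
for (int s = numSlabs - 1; s >= 0; --s) {
    setProjection(buildPerspective(fovy, aspect, slabNear[s], slabFar[s]));
    glClear(GL_DEPTH_BUFFER_BIT);   /* keep the color buffer, reset depth */
    drawSlab(s);
}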
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
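A sketch of that limit, assuming the usual right-handed OpenGL perspective matrix in column-major float[16] form (the two z entries are the z_far → infinity limits of the standard frustum matrix):

#include <math.h>

void infinite_perspective(float out[16], float fovy_rad, float aspect, float z_near)
{
    const float f = 1.0f / tanf(fovy_rad * 0.5f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = -1.0f;             /* limit of (zF + zN) / (zN - zF)   */
    out[11] = -1.0f;             /* w_clip = -z_view                 */
    out[14] = -2.0f * z_near;    /* limit of 2 * zF * zN / (zN - zF) */
}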
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
It's just "the fact" that the OpenGL depth test is performed in window-space coordinates (normalized device coordinates in [-1,1]^3, with extra scaling from glViewport and glDepthRange).
So from my point of view it is simply a design decision of the OpenGL library.
One approach to eliminate this is the depth clamp OpenGL extension / core functionality (https://www.opengl.org/registry/specs/ARB/depth_clamp.txt), if it is available in your OpenGL version.
I also want to point out that in the perspective projection itself there is nothing about a "far clipping plane".
3.1. For a perspective projection you set up a point \vec{c} as the center of projection and a plane onto which the projection is performed. Let's call it the image plane T: (\vec{r}-\vec{r_0},\vec{n}) = 0.
3.2. Let's assume that the image plane T separates an arbitrary point \vec{r} from the center of projection \vec{c}. In the other case, \vec{r} and \vec{c} lie in the same half-space and the point \vec{r} should be discarded.
3.4. The idea of the projection is to find the intersection \vec{i} of the line through \vec{c} and \vec{r} with the plane T:
\vec{i} = (1-t)\vec{c} + t\vec{r}
3.5. Since \vec{i} lies in T:
(\vec{i}-\vec{r_0},\vec{n}) = 0
=> ((1-t)\vec{c} + t\vec{r} - \vec{r_0}, \vec{n}) = 0
=> (\vec{c} + t(\vec{r}-\vec{c}) - \vec{r_0}, \vec{n}) = 0
=> t = (\vec{r_0}-\vec{c},\vec{n}) / (\vec{r}-\vec{c},\vec{n})
3.6. The t derived in 3.5 can be substituted into 3.4, and you get the projection onto the plane T.
3.7. After the projection your point will lie in the plane. But if we assume that the image plane is parallel to the OXY plane, then I can suggest using the original "depth" of the point after the projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 model explicitly at all.