I haven't done much OpenGL stuff, so I assume there is an obvious cause of what I am seeing.
Basically the pictures below explain everything.
The only thing changed between the images is the x rotation on the projection matrix.
OpenGL 1.1 on Windows.
Any help is much appreciated.
This looks like an orthographic projection. I suggest you increase the range between the near and far values. Unlike a perspective projection, where the nonlinearity means you should limit the near-to-far range to what's absolutely necessary, with an orthographic projection you can safely choose a very wide near/far range without running into trouble. I suggest [-1000, 1000] (with a 24-bit depth buffer that gives you a depth resolution of roughly 1/8000 of a view-space Z unit).
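For example, with the fixed-function pipeline (the question mentions OpenGL 1.1), a sketch of such a setup might look like the following; the horizontal and vertical extents are placeholders for whatever your scene actually needs:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// left/right/bottom/top are placeholder values; the wide near/far range is the point here
glOrtho(-10.0, 10.0, -10.0, 10.0, -1000.0, 1000.0);
glMatrixMode(GL_MODELVIEW);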
Related
I have a project I'm working on that is making movies from a simulation. The simulation is passed from another program that defines the projection matrix.
The issue I am running into is that the other program has a sort of 'fake' orthographic view, what I mean by this is that its projection matrix is as follows:
PerspectiveMatrix = glm::perspective(3.5f, 1.0f, 1.0f, 50.0f);
And it uses the LookAt function:
ViewMatrix = glm::lookAt(
glm::vec3(2000, -3000, 2000), // eye
glm::vec3(0, 0, 0), // center
glm::vec3(0, 0, 1) // up
);
So what I mean by 'fake' orthographic view is that they have positioned the camera far enough away (and used a small angle to zoom the scene) that the "view lines" (for lack of a better term) are almost parallel, like in a real orthographic projection.
So this is all fine and well, but what I've run into, and what is an issue in the other program as well, is that all of the high-precision depth testing is close to the camera, and in my case that is empty space. This means there is quite a lot of z-fighting, as shown in the link below:
So my question is: in what ways can I change my depth testing in order to bias the buffer towards the far plane, or something along those lines? I have tried moving the NearPlane farther out, which has the effect of zooming out the screen, so I compensate with a narrower angle in the perspective. But doing this enough times makes the problem worse: there isn't z-fighting, but things aren't drawn at the right depth; the spheres end up on top of everything.
I did find some info at Outerra:
http://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
They had some ideas for reversing the depth buffer, but it was Nvidia-specific and I need to be compatible with both ATI and Nvidia.
Both logarithmic depth and reversed depth mapping described in that blog post will work for you.
Reversed floating-point depth is better, and it works as expected in DirectX. In OpenGL it won't bring you any extra precision due to a design flaw, unless the driver exposes the NV_depth_buffer_float extension, through which you can effectively turn off the bias that normally makes it unusable.
AMD supports that extension since their 13.12 Catalyst drivers, so the technique is usable on all 5000+ series AMD GPUs (older series aren't supported by the drivers).
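As a rough sketch of what that looks like in code, assuming a floating-point depth attachment (e.g. GL_DEPTH_COMPONENT32F on an FBO), that the NV_depth_buffer_float entry points have been loaded through an extension loader, and that the projection matrix is built the way the Outerra article describes (increasing distance maps to decreasing z):

// Reversed floating-point Z, sketched under the assumptions above.
glDepthRangedNV(-1.0, 1.0); // unclamped range from NV_depth_buffer_float; window z = NDC z, no precision-destroying remap
glClearDepth(0.0);          // with reversed Z the far end is the smallest value
glClear(GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_GREATER);    // nearer fragments now produce larger depth values
// ...then draw with the reversed projection matrix from the article.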
Simpler than any of the above: move your znear further from the camera. It looks like it's the third parameter of glm::perspective(), set to 1.0 in your example. Set it as large as you can before it starts clipping away stuff in the foreground of your scene, and your z-buffer precision problems will probably go away.
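For instance, relative to the matrix from the question, that just means something like the following (the exact value needs tuning against your scene):

PerspectiveMatrix = glm::perspective(3.5f, 1.0f, 10.0f, 50.0f); // znear pushed from 1.0 out to 10.0 (placeholder value)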
Reverse-float-z is great but only really needed for scenes with wider field of view and deeper geometry. For a near-orthographic scene like yours, just set your znear/zfar appropriately.
In OpenGL (all versions, though I happen to be working in OpenGL ES 2.0) there is the option of using a perspective projection versus an orthogonal one. Is there a way to control the degree of orthogonality?
For the sake of picturing the issue (and please don't take this as the actual question; I am well aware there is no camera in OpenGL), assume that a scene is rendered with the viewport "looking" down the -z axis. Two parallel lines extending a finite distance down the -z axis at (x,y)=(1,1) and (x,y)=(-1,1) will appear as points in an orthogonal projection, or as two lines that eventually converge to a single pixel in a perspective projection. Is there a way to have the x and y values represented by the outer edges of the screen remain the same as in projection space (I assume this requires not changing the frustum), but have the lines only converge part of the way to a single pixel?
Is there a way to control the degree of orthogonality?
Either something is orthogonal, or it is not. There's no such thing as "just a little orthogonal".
Anyway, from a mathematical point of view, a perspective projection with an infinitely narrow field of view is orthogonal. So you can use glFrustum with very large near and far plane distances, together with a countering translation in the modelview matrix to bring the far-away viewing volume back to the origin.
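A sketch of that idea with the fixed-function pipeline; w and d are hypothetical values, and the larger d gets, the more parallel the projectors become:

// Nearly-orthographic perspective: a narrow frustum far from the origin,
// countered by a modelview translation.
double w = 1.0;      // half-extent of the region to show at the near plane (placeholder)
double d = 1000.0;   // near-plane distance; increase it to approach orthographic
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-w, w, -w, w, d, d + 100.0);   // very narrow field of view
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(0.0, 0.0, -(d + 50.0));     // bring the far-away viewing volume back to the scene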
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to infinitely far away: See this. It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So since this revolves around the depth buffer, you won't have a problem dealing with farther-away stuff as long as you don't use it. For example, a common technique is to render the scene in "slabs" that each only use the depth buffer internally (for all the stuff within one slab), but use some form of painter's algorithm externally (for the slabs, so you draw the furthest one first).
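A rough sketch of that slab idea; the slab boundaries, aspect, and drawObjectsBetween are made-up placeholders. Each slab gets its own near/far pair, the slabs are drawn far to near, and the depth buffer is cleared in between:

// {near, far} pairs per slab, ordered from the farthest slab to the nearest.
double slabs[3][2] = { {1000.0, 10000.0}, {100.0, 1000.0}, {0.1, 100.0} };
for (int i = 0; i < 3; ++i) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, aspect, slabs[i][0], slabs[i][1]); // clip planes fitted to this slab only
    glMatrixMode(GL_MODELVIEW);
    glClear(GL_DEPTH_BUFFER_BIT);                           // depth is only meaningful within one slab
    drawObjectsBetween(slabs[i][0], slabs[i][1]);           // hypothetical: draw geometry whose distance falls in this slab
}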
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
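For reference, a sketch of that limit for a symmetric glFrustum-style projection (near plane distance $n$, right and top extents $r$ and $t$ at the near plane); only the third row depends on the far plane $f$, and it stays finite:

$\lim_{f\to\infty}\begin{pmatrix} n/r & 0 & 0 & 0 \\ 0 & n/t & 0 & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} = \begin{pmatrix} n/r & 0 & 0 & 0 \\ 0 & n/t & 0 & 0 \\ 0 & 0 & -1 & -2n \\ 0 & 0 & -1 & 0 \end{pmatrix}$

The clip-space $z$ then becomes $-z_{eye}-2n$ with $w=-z_{eye}$, so NDC $z$ approaches $1$ as geometry recedes but never exceeds it.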
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
The reason is simply that the OpenGL depth test is performed in window-space coordinates (normalized device coordinates in [-1,1]^3, further scaled by glViewport and glDepthRange).
So, from my point of view, it is a design decision of the OpenGL library.
One approach to eliminate the far clip is the depth-clamp extension / core functionality, https://www.opengl.org/registry/specs/ARB/depth_clamp.txt, if it is available in your OpenGL version.
I also want to point out that, mathematically, there is nothing about a "far clipping plane" in a perspective projection.
3.1 For a perspective projection you need to set up a point $\vec{c}$ as the center of projection and a plane onto which the projection is performed. Let's call it the image plane $T$: $(\vec{r}-\vec{r_0},\vec{n})=0$.
3.2 Assume that the image plane $T$ separates an arbitrary point $\vec{r}$ from the center of projection $\vec{c}$. Otherwise $\vec{r}$ and $\vec{c}$ lie in the same half-space and the point $\vec{r}$ should be discarded.
3.3 The idea of the projection is to find the intersection $\vec{i}$ of the line through $\vec{c}$ and $\vec{r}$ with the plane $T$:
$\vec{i}=(1-t)\vec{c}+t\vec{r}$
3.4 Since $\vec{i}$ lies in $T$:
$(\vec{i}-\vec{r_0},\vec{n})=0$
$\Rightarrow ((1-t)\vec{c}+t\vec{r}-\vec{r_0},\vec{n})=0$
$\Rightarrow (\vec{c}+t(\vec{r}-\vec{c})-\vec{r_0},\vec{n})=0$
$\Rightarrow t=\dfrac{(\vec{r_0}-\vec{c},\vec{n})}{(\vec{r}-\vec{c},\vec{n})}$
3.5 The $t$ derived in 3.4 can be substituted into 3.3, which gives the projection of $\vec{r}$ onto the plane $T$.
3.6 After the projection the point lies in the plane. But if the image plane is parallel to the OXY plane, I suggest keeping the point's original "depth" for use after the projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 volume explicitly at all.
At first I wondered why NDC ranges from -1 to 1, rather than from 0 to 1. I figured maybe having the origin at the center is useful for something.
But why is it using a left-handed coordinate system?
Could it be just so that the Z-value is higher for objects farther away? That would be a good enough reason for me.
But why is it using a left-handed coordinate system?
Let's add "by default" to that question. Because all that's needed to change it is glDepthRange(1.0, 0.0); and now you're right-handed. And yes, that is perfectly legal GL; there has never been a restriction that the near value of the depth range be less than the far value.
Why is it left-handed by default? Maybe someone on the ARB liked it that way. Maybe someone on the ARB liked increasing depth range instead of decreasing, and the lower-left corner orientation made that necessary. Maybe because the progenitor of OpenGL, IRIS GL, did it this way and the ARB didn't see any need to change it.
The reason for this is immaterial; handedness is purely a notational convenience. In some places, right-handed makes sense. In others, left-handed makes sense. And since it is trivially changed, just use what works for you.
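As a minimal sketch, the companion state changes that usually go with actually rendering that way (assuming the conventional clear-to-1.0 and less-than comparison were in use before):

glDepthRange(1.0, 0.0);  // reverse the NDC-z to window-z mapping
glClearDepth(0.0);       // the "farthest" value is now 0 instead of 1
glDepthFunc(GL_GEQUAL);  // nearer fragments now have greater window z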
It would then be consistent with everything else.
Consistent with what everything else? All of the fixed-function stuff that's been removed?
The viewport transform is all done by OpenGL internally, where nobody can get at it. You don't even provide a matrix; you just provide a viewport and depth range. So from the perspective of someone using fixed-function GL, everything really is right-handed.
The only time the handedness even comes up is when dealing with vertex shaders directly, where you have to know what the handedness of the clip-space is. And in that case, the change in handedness is a simple function of the perspective projection negating the Z. Or, if you like left-handed coordinates, the perspective projection not negating the Z. Or again, just reverse the glDepthRange and now you're right-handed.
But why is it using a left-handed coordinate system?
So that increasing distances from the projection plane in view direction map to increasing values of the depth buffer.
Could it be just so that the Z-value is higher for objects farther away? That would be a good enough reason for me.
Well, that's exactly why it is that way.
I guess the change in handedness is owed to how graphics cards interpret z values: a greater z value means greater depth. The implied change of handedness can be performed by the projection matrix at no additional cost; changing it later would add an extra computation step.
I'm writing a space exploration application. I've decided on light years being the units and have accurately modeled the distances between stars. After tinkering and a lot of arduous work (mostly learning the ropes) I have got the camera working correctly from the point of view of a starship traversing through the cosmos.
Initially I paid no attention to the zNear parameter of gluPerspective() until I worked on planetary objects. Since my scale is in light-year units, I soon realized that with zNear being 1.0f I would not be able to see such objects. After experimentation I arrived at these figures:
#define POV 45
#define zNear 0.0000001f
#define zFar 100000000.0f
gluPerspective(POV, WinWidth/WinHeight, zNear, zFar);
This works exceptionally well in that I was able to cruise my solar system (position 0,0,0) and move up close to the planets which look great lit and texture mapped. However other systems (not at position 0,0,0) were much harder to cruise through because the objects moved away from the camera in unusual ways.
I had noticed however that strange visual glitches started to take place when cruising through the universe. Objects behind me would 'wrap around' and show ahead; if I swing 180 degrees in the Y direction they'll also appear in their original place. So when warping through space, most of the stars parallax correctly, but some appear and travel in the opposite direction (which is disturbing, to say the least).
Changing the zNear to 0.1f immediately corrects ALL of these glitches (but then solar-system objects can no longer be resolved). So I'm stuck. I've also tried working with glFrustum and it produces exactly the same results.
I use the following to view the world:
glTranslatef(pos_x, pos_y, pos_z);
With relevant camera code to orientate as required. Even disabling camera functionality does not change anything. I've even tried gluLookAt() and again it produces the same results.
Does gluPerspective() have limits when extreme zNear / zFar values are used? I tried to reduce the range but to no avail. I even changed my world units from light years to kilometers by scaling everything up and using a bigger zNear value - nothing. HELP!
The problem is that you want to resolve too much at the same time. You want to view things on the scale of the solar system, while also having semi-galactic scale. That is simply not possible. Not with a real-time renderer.
There is only so much floating-point precision to go around. And with your zNear being incredibly close, you've basically destroyed your depth buffer for anything that is more than about 0.0001 away from your camera.
What you need to do is draw things based on distance. Near objects (within a solar system's scale) are drawn with one perspective matrix, using one depth range (say, 0 to 0.8). Then more distant objects are drawn with a different perspective matrix and a different depth range (0.8 to 1). That's really the only way you're going to make this work.
Also, you may need to compute the matrices for objects on the CPU in double-precision math, then convert them to single precision for OpenGL to use.
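Sketched roughly against the code from the question (the split point, clip distances, aspect, and the two draw functions are made-up placeholders), the depth-range partitioning could look like this, with one depth clear at the start of the frame:

glClear(GL_DEPTH_BUFFER_BIT);   // one clear; the two passes use disjoint depth ranges

// Pass 1: nearby objects (solar-system scale), mapped to the near part of the depth buffer.
glDepthRange(0.0, 0.8);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(POV, aspect, 0.0000001, 0.01);   // clip planes fitted to the local system
drawSolarSystemObjects();                       // hypothetical

// Pass 2: distant objects (interstellar scale), mapped to the far part.
glDepthRange(0.8, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(POV, aspect, 0.01, 100000000.0);
drawDistantStars();                             // hypothetical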
OpenGL should not be drawing anything farther from the camera than zFar, or closer to the camera than zNear.
But for things in between, OpenGL computes a depth value that is stored in the depth buffer, which it uses to tell whether one object is blocking another. Unfortunately, the depth buffer has limited precision (generally 16 or 24 bits) and, according to this, roughly log2(zFar/zNear) bits of precision are lost. Thus, a zFar/zNear ratio of 10^15 (~50 bits lost) is bound to cause problems. One option would be to slightly increase zNear (if you can). Otherwise, you will need to look into Split Depth Buffers or Logarithmic Depth Buffers.
Nicol Bolas already told you one piece of the story. The other is that you should start thinking about a structured way to store the coordinates: store the position of each object relative to the object that gravitationally dominates it, and use appropriate units for each level.
So you have stars. Distances between stars are measured in light-years. Stars are orbited by planets. Distances within a star system are measured in light-minutes to light-hours. Planets are orbited by moons. Distances within a planetary system are measured in light-seconds.
To display such scales you need to render in multiple passes. The objects with their scales form a tree. First you sort the branches from distant to close, then you traverse the tree depth-first. For each branching level you use appropriate projection parameters so that the near and far clip planes snugly fit the objects to be rendered. After rendering each level, clear the depth buffer.
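A rough sketch of what that could look like; the Node structure, the sortFarToNear and drawBranch helpers, the fov/aspect values, and the ProjectionMatrix variable are all made up for illustration:

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical scene-graph node: its position is stored relative to the body
// that gravitationally dominates it, in units appropriate to its level.
struct Node {
    glm::dvec3 posRelativeToParent;   // double precision on the CPU side
    float fitNear, fitFar;            // clip planes that snugly enclose this node's own geometry
    std::vector<Node*> children;
};

void renderBranches(std::vector<Node*>& branches) {
    sortFarToNear(branches);          // hypothetical: painter's order between branches, farthest first
    for (Node* branch : branches) {
        // Fit the clip planes snugly around just this branch.
        ProjectionMatrix = glm::perspective(fov, aspect, branch->fitNear, branch->fitFar);
        glClear(GL_DEPTH_BUFFER_BIT); // depth only has to be consistent within one level
        drawBranch(branch);           // hypothetical: draws this branch, recursing into its children the same way
    }
}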