I'm writing a space exploration application. I've decided on light years as the units and have accurately modeled the distances between stars. After tinkering and a lot of arduous work (mostly learning the ropes), I have the camera working correctly from the point of view of a starship traversing the cosmos.
Initially I paid no attention to the zNear parameter of gluPerspective() until I worked on planetary objects. Since my scale is in light year units, I soon realized that with zNear at 1.0f I would not be able to see such objects. After experimentation I arrived at these figures:
#define POV 45
#define zNear 0.0000001f
#define zFar 100000000.0f
gluPerspective(POV, (GLdouble)WinWidth / (GLdouble)WinHeight, zNear, zFar); // cast avoids integer division in the aspect ratio
This works exceptionally well, in that I was able to cruise my solar system (position 0,0,0) and move up close to the planets, which look great lit and texture mapped. However, other systems (not at position 0,0,0) were much harder to cruise through, because the objects moved away from the camera in unusual ways.
I noticed, however, that strange visual glitches started to take place when cruising through the universe. Objects behind me would 'wrap around' and show up ahead; if I swung 180 degrees in the Y direction they would also appear in their original place. So when warping through space, most of the stars parallax correctly, but some appear and travel in the opposite direction (which is disturbing, to say the least).
Changing zNear to 0.1f immediately corrects ALL of these glitches (but then solar system objects can no longer be resolved). So I'm stuck. I've also tried working with glFrustum and it produces exactly the same results.
I use the following to view the world:
glTranslatef(pos_x, pos_y, pos_z);
With relevant camera code to orientate as required. Even disabling camera functionality does not change anything. I've even tried gluLookAt() and again it produces the same results.
Does gluPerspective() have limits when extreme zNear / zFar values are used? I tried to reduce the range but to no avail. I even changed my world units from light years to kilometers by scaling everything up and using a bigger zNear value - nothing. HELP!
The problem is that you want to resolve too much at the same time: you want to view things on the scale of a solar system, while also having semi-galactic scale. That is simply not possible - not with a real-time renderer.
There is only so much floating-point precision to go around. And with your zNear being incredibly close, you've basically destroyed your depth buffer for anything that is more than about 0.0001 away from your camera.
What you need to do is draw things based on distance. Near objects (within a solar system's scale) are drawn with one perspective matrix, using one depth range (say, 0 to 0.8). Then more distant objects are drawn with a different perspective matrix and a different depth range (0.8 to 1). That's really the only way you're going to make this work.
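A minimal sketch of that idea, assuming an aspect variable and hypothetical drawDistantObjects()/drawNearObjects() helpers; the 0.8 split and the near/far values are illustrative only:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Pass 1: distant objects, mapped to the far 20% of the depth buffer.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, aspect, 0.1, 1.0e9);    // wide near/far for distant stars
glDepthRange(0.8, 1.0);
drawDistantObjects();                        // hypothetical helper

// Pass 2: near objects, mapped to the near 80% of the depth buffer.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, aspect, 0.0001, 10.0);  // tight near/far around the local system
glDepthRange(0.0, 0.8);
drawNearObjects();                           // hypothetical helper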
Also, you may need to compute the matrices for objects on the CPU in double-precision math, then convert them back to single precision for OpenGL to use.
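For example, with GLM (or any double-precision math library); a sketch where camPosD, camTargetD and objectPosD are assumed double-precision world coordinates:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the model-view matrix in double precision, so the huge translations
// cancel each other out before any precision is thrown away, then downcast.
glm::dmat4 view  = glm::lookAt(camPosD, camTargetD, glm::dvec3(0, 1, 0));
glm::dmat4 model = glm::translate(glm::dmat4(1.0), objectPosD);
glm::mat4 modelView = glm::mat4(view * model);  // single precision for the GPU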
OpenGL should not be drawing anything farther from the camera than zFar, or closer to the camera than zNear.
But for things in between, OpenGL computes a depth value that is stored in the depth buffer, which it uses to tell whether one object is blocking another. Unfortunately, the depth buffer has limited precision (generally 16 or 24 bits) and, according to this, roughly log2(zFar/zNear) bits of precision are lost. Thus, a zFar/zNear ratio of 10^15 (~50 bits lost) is bound to cause problems. One option would be to increase zNear slightly (if you can). Otherwise, you will need to look into split depth buffers or logarithmic depth buffers.
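A quick back-of-the-envelope check of that formula against the values from the question (plain C++):

#include <cmath>
#include <cstdio>

int main() {
    double zNear = 0.0000001, zFar = 100000000.0;   // values from the question
    double lostBits = std::log2(zFar / zNear);      // log2(10^15) ~ 49.8
    std::printf("~%.1f bits of depth precision lost (a depth buffer has 16-24)\n",
                lostBits);
}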
Nicol Bolas already told you one piece of the story. The other is that you should start thinking about a structured way to store your coordinates: store the position of each object relative to the object that dominates it gravitationally, and use appropriate units for each level.
So you have stars; distances between stars are measured in light years. Stars are orbited by planets; distances within a star system are measured in light minutes to light hours. Planets are orbited by moons; distances within a planetary system are measured in light seconds.
To display such scales you need to render in multiple passes. The objects with their scales form a tree. First you sort the branches from distant to close, then you traverse the tree depth first. For each level of the tree you use appropriate projection parameters, so that the near and far clip planes snugly fit the objects to be rendered. After rendering each level, clear the depth buffer.
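A sketch of that traversal, assuming a hypothetical Node type whose position is stored relative to its gravitational parent, plus hypothetical setProjection/drawObject helpers:

#include <algorithm>
#include <vector>
#include <GL/gl.h>
#include <glm/glm.hpp>

// Position is relative to the parent, in units appropriate for that level
// (light years for stars, light minutes for planets, ...).
struct Node {
    glm::dvec3 relativePos;
    double boundingRadius;
    std::vector<Node*> children;   // sorted far-to-near before rendering
};

void setProjection(double zNear, double zFar);  // hypothetical: wraps gluPerspective
void drawObject(const Node& node);              // hypothetical draw call

// camPos is the camera position expressed in this node's parent frame.
void renderLevel(const Node& node, const glm::dvec3& camPos) {
    double dist = glm::length(camPos - node.relativePos);
    // Fit near/far snugly around this level's geometry (illustrative choice).
    double zNear = std::max(dist - node.boundingRadius, node.boundingRadius * 1e-3);
    double zFar  = dist + node.boundingRadius;
    setProjection(zNear, zFar);
    drawObject(node);

    glClear(GL_DEPTH_BUFFER_BIT);  // fresh depth buffer for the next, closer level
    for (Node* child : node.children)
        renderLevel(*child, camPos - node.relativePos);
}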
Related
I have a problem rendering my solar system.
I have small objects and a large object which is located at a distance 10e9 times greater than the small objects. How can I render all of that? (I want to see the Sun while I am near my small objects.)
glm::perspective(m_fInitialFoV, 4.0f / 3.0f, near, far);
If far/near is too big, all objects flicker.
The best way to do it is to group objects by scale and render them separately. For example, the solar system could use the scale of the Earth as its unit: 6378.1 km radius, and 147.09x10^6 km away from the Sun.
Normalizing these around the Earth's radius, we get:
Earth's Radius: 1.0 unit
Distance from sun: 23061.72 units
Same principle of normalization applies to other solar system bodies:
Venus' Radius: 0.9488405638 units
Distance from sun: 16851.41 units
Pluto's Radius: 0.186105580 units
Distance from sun: 695633.4 units
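As a quick check, those figures are just the raw values divided by Earth's radius; a tiny C++ sketch (Venus' radius of 6051.8 km is an assumption that matches the quoted ratio):

#include <cstdio>

int main() {
    const double earthRadiusKm = 6378.1;  // the chosen unit
    std::printf("Earth's distance: %.2f units\n", 147.09e6 / earthRadiusKm); // ~23061.72
    std::printf("Venus' radius:    %.10f units\n", 6051.8 / earthRadiusKm);  // ~0.9488405638
}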
You should also set up your depth range to accommodate such numbers. As long as bodies have a lot of space between them, no errors or z-fighting should occur.
So:
1. Render the 'big' group first (big depth range).
2. Then clear only the depth of your screen.
3. Then render the objects that are small and close to the camera, such as asteroids, spaceships, etc. - things that do not need such large numbers (here use a substantially smaller depth range; see the skeleton below).
4. You're done.
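In GL calls, the skeleton of those steps might look like this (setProjection and the draw helpers are hypothetical, and the near/far values are assumptions):

// 1. The 'big' group: planets and the Sun, with a large depth range.
setProjection(1.0, 1.0e6);        // hypothetical helper building the projection
drawPlanetsAndSun();              // hypothetical

// 2. Clear only the depth buffer; the color buffer keeps the rendered planets.
glClear(GL_DEPTH_BUFFER_BIT);

// 3. The small, close group with a substantially smaller depth range.
setProjection(0.01, 100.0);
drawShipsAndAsteroids();          // hypothetical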
As you can see, you can group the scene based on scale and on multiple levels. You could, for example, take the list of coordinates for the local group of stars, render them first as points, clear the depth, then render the bodies of the system you are in, clear the depth again, and finally render the local objects at your position. You should also consider that a planet like Pluto is barely visible from Earth, so you can simply use a much smaller depth range for drawing.
A simpler solution than multi-pass rendering is to use a logarithmic depth buffer. At the cost of some precision, you can expand your available depth range exponentially. Take a look here for more info: http://outerra.blogspot.com/2009/08/logarithmic-z-buffer.html
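The core of that technique is a small change in the vertex shader; a sketch of the shader from the linked post, embedded as a C string (u_far and the constant C = 1.0 are assumptions to tune):

// Vertex shader (GLSL) excerpt implementing logarithmic depth as described in
// the Outerra post: z = (2*log(C*w + 1)/log(C*far + 1) - 1) * w.
const char* logDepthVS = R"(
    uniform float u_far;   // far plane distance (assumed uniform)
    const float C = 1.0;   // tuning constant from the article
    void main() {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_Position.z = (2.0 * log(C * gl_Position.w + 1.0) /
                         log(C * u_far + 1.0) - 1.0) * gl_Position.w;
    }
)";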
Hi computer graphics and math people :-)
Short question: how can an artist choose a meaningful light size in world space for shadow maps filtered by percentage-closer filtering (PCF), and is it possible to use the same technique to support both spot and directional light sources?
Longer question: I have implemented shadow mapping and filter the edges by applying percentage-closer filtering (PCF). The filter kernel is a Poisson disk, in contrast to a regular rectangular filter kernel. You can think of a Poisson disk as sample positions more or less randomly distributed inside the unit circle. The size of the filter region is then simply a factor multiplied with each of the 2D sample positions of the kernel (the Poisson disk).
I can adjust the radius/factor of the Poisson disk and change the size of the penumbra at runtime, for either a spot light (perspective frustum) or a directional light (orthographic frustum). This works great, but the values of the parameter do not really mean anything, which is fine for small 3D samples or even games where one can invest some time to tune the value empirically. What I want is a parameter called "LightSize" that has an actual meaning in world space. In a large scene, for example with a building that is 100 units long, the LightSize has to be larger than in a scene with a close-up of a book shelf to produce equally smooth shadows. Conversely, a fixed LightSize would produce extremely smooth shadows on the shelf and quite hard shadows outside the building. This question is not about soft shadows, contact hardening etc., so ignore physically accurate blocker-receiver estimations ;-)
Oh and take a look at my awesome MS Paint illustrations:
Idea 1: If I use the LightSize directly as the filter size, a factor of 0.5 would result in a Poisson disk with a diameter of 1.0 and a radius of 0.5. Since texture coordinates are in the range [0,1], this leads to a filter size that evaluates the whole texture for each fragment: imagine a fragment in the center of the shadow map; this fragment would fetch neighboring texels distributed across the whole area of the texture. This would of course yield an extremely large penumbra, but let's call it the "maximum". A penumbra factor of 0.05, for example, would result in a diameter of 0.1, so each fragment would evaluate about 10% of its neighboring texels (ignore the circle etc., just think in 2D from a side view). This approach works, but when the angle of a spot light becomes larger, or the frustum of a directional light changes its size, the penumbra changes its width, because the LightSize defines the penumbra in texture space (UV space). The penumbra should stay the same independent of the size of the near plane. Imagine a fitted orthographic frustum: when the camera rotates, the fitted frustum changes in size, and so does the size of the penumbra, which is wrong.
Idea 2: Divide the LightSize by the size of the near plane in world space. This works great for orthographic projections: when the frustum becomes larger, the LightSize is divided by a larger value, so the penumbra stays the same in world space. Unfortunately this doesn't work for a perspective frustum: there, the size of the near plane depends on the near plane distance, so the penumbra size becomes dependent on the near plane distance, which is annoying and wrong.
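A sketch of Idea 2 for the directional (orthographic) case, in C++; the frustum extents are whatever the fitted shadow frustum was built with:

// Convert a world-space light size into a UV-space Poisson-disk radius for a
// directional light. Because the division tracks the frustum width, the
// penumbra stays constant in world space when the fitted frustum resizes.
float filterRadiusUV(float lightSizeWorld, float orthoLeft, float orthoRight) {
    float frustumWidthWorld = orthoRight - orthoLeft; // world units across the map
    return lightSizeWorld / frustumWidthWorld;        // fraction of the shadow map
}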
It feels like there has to be a way so that the artist can choose a meaningful light size in world space. I know that PCF is only a (quite bad) approximation of a physically plausible light source, but imagine the following:
When the light source is sampled multiple times using a Poisson disk in world space, one can create physically accurate shadows by rendering a hard shadow for each sample position. This works for spot lights. The situation is different for directional lights: one can use an "angle at the origin of the directional light" and render multiple hard shadows, one for each slightly rotated frustum. This makes no physical sense in the real world, but then directional lights do not exist either, so... By the way, this sampling of a light source is often referred to as multi-view soft shadows (MVSS).
Do you have any suggestions? Could it be that spot and directional lights have to be handled differently and PCF does not allow me to use a meaningful real world light size for a perspective frustum?
I have a project I'm working on that is making movies from a simulation. The simulation is passed from another program that defines the projection matrix.
The issue I am running into is that the other program has a sort of 'fake' orthographic view, what I mean by this is that its projection matrix is as follows:
PerspectiveMatrix = glm::perspective(3.5, 1, 1.0f, 50.0f);
And it uses the LookAt function:
ViewMatrix = glm::lookAt(
    glm::vec3(2000, -3000, 2000), // eye
    glm::vec3(0, 0, 0),           // center
    glm::vec3(0, 0, 1)            // up
);
So what I mean by a 'fake' orthographic view is that they have positioned the camera far enough away (with a small angle to zoom the scene) that the "view lines" (for lack of a better term) are almost parallel, as in a real orthographic projection.
So this is all fine and well, but what I've run into, which is an issue in the other program as well, is that all of the high-precision depth testing happens close to the camera, and in my case that is empty space. This means there is quite a lot of z-fighting, as shown in the link below:
So my question is: in what ways can I change my depth testing in order to bias the buffer towards the far plane, or something along those lines? I have tried moving the near plane farther out, which zooms the screen out, so I compensate with a narrower angle in the perspective. But doing this enough times makes the problem worse: there is no z-fighting, but things no longer draw at the right depth - the spheres end up on top of everything.
I did find some info at Outerra:
http://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
They had some ideas for reversing the depth buffer, but it was Nvidia-specific and I need to be compatible with both ATI and Nvidia.
Both logarithmic depth and reversed depth mapping described in that blog post will work for you.
Reversed floating-point depth is better, and it works normally in DirectX. In OpenGL it won't bring you any extra precision due to a design flaw, unless the driver exposes the NV_depth_buffer_float extension, through which you can effectively turn off the bias that normally makes it unusable.
AMD supports that extension since their 13.12 Catalyst drivers, so the technique is usable on all 5000+ series AMD GPUs (older series aren't supported by the drivers).
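A sketch of the reversed-float-z setup through that extension, following the Outerra post; the GL_DEPTH_COMPONENT32F attachment and the swapped-near/far projection are assumed to be set up elsewhere:

// Requires NV_depth_buffer_float (also exposed by AMD, as noted above) and a
// 32-bit float depth attachment; the projection is built with near/far swapped.
glDepthRangedNV(-1.0, 1.0);   // unclamped depth range: disables the [0,1] clamp
glClearDepth(0.0);            // with reversed z, 'far' clears to 0 instead of 1
glDepthFunc(GL_GREATER);      // and the depth comparison flips accordingly
glClear(GL_DEPTH_BUFFER_BIT);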
Simpler than any of the above: move your znear further from the camera. It's the third parameter of glm::perspective(), set to 1.0 in your example. Set it as large as you can before it starts clipping away stuff in the foreground of your scene, and your z-buffer precision problems will probably go away.
Reverse-float-z is great but only really needed for scenes with wider field of view and deeper geometry. For a near-orthographic scene like yours, just set your znear/zfar appropriately.
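For instance, if the geometry sits around the origin in the question's lookAt setup (the exact values are assumptions to tune against your scene):

// Same 'fake orthographic' projection, but with znear pushed out toward the
// geometry and zfar pulled in, so the depth range tightly brackets the scene.
PerspectiveMatrix = glm::perspective(3.5f, 1.0f, 3000.0f, 6000.0f);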
For example, if we have an ortho projection:
left = -aspect, right = aspect, top = 1.0, bottom = -1.0, far = 1.0, near = -1.0
and we draw a triangle at z = -2.0, it will be cut off by the near clipping plane. Does that really save some precious rendering time?
Culling determines whether we need to draw something at all, and discards it if it is out of view (written by the programmer in the vertex shader or in the main program). Is clipping just cheap automatic culling?
Also, on the theme of cheap culling: would

if (dist(cam.pos, sprite.pos) < MAX_RENDER_DIST)
    draw_sprite(sprite);

be enough for a simple 2D game?
Default OpenGL clipping space is -1 to +1, for x, y and z.
The conditional test for sprite distance will work. It is mostly unnecessary, as the far clipping plane will do almost the same thing, and usually that is good enough. There are cases where the test is needed: objects in a corner of the view, inside the clipping planes, may end up outside the far clipping plane when the camera turns, because the distance from the camera to the corner is longer than the perpendicular distance from the camera to the far clipping plane. This is not a problem if you have a 2D game and do not allow changes of the camera viewing angle.
If you have a simple 2D game, chances are high that you do not need to worry about graphics optimization. If you skip drawing sprites outside of the clipping planes, you save time, but how much time you save depends. If a huge number of the sprites are outside, you may save considerable time; but then you should probably reconsider your algorithm and not even iterate over things that are never going to be shown. If only a small percentage of the sprites are outside, the time saved will be negligible.
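One small, cheap refinement of the test from the question: compare squared distances, so the per-sprite check avoids a square root (dist2 is an assumed squared-distance helper):

// Squared-distance variant of the sprite cull; avoids a sqrt per sprite.
const float MAX_RENDER_DIST2 = MAX_RENDER_DIST * MAX_RENDER_DIST;
if (dist2(cam.pos, sprite.pos) < MAX_RENDER_DIST2)
    draw_sprite(sprite);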
The problem with clipping in the GPU is that it happens relatively late in the pipeline, just before rasterization, so a lot of computations could already be done for nothing.
Doing it on the CPU can save these computations from happening and, also very important, reduce the number of actual draw commands (which can also be a bottleneck).
However, you want to do this quickly on the CPU; typically you'll use an octree or a similar structure to represent the data, so you can discard an entire subtree at once. If you have to go over each polygon, or even each object, separately, this can become too expensive.
So in conclusion: the usefulness depends on where your bottleneck lies (CPU, vertex shader, transmission rate, ...).
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to infinitely far away: See this. It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So, since this revolves around the depth buffer, you won't have a problem dealing with further-away stuff, as long as you don't use the depth buffer for it. For example, a common technique is to render the scene in "slabs" that each use the depth buffer only internally (for all the stuff in one slab) but some form of painter's algorithm externally (for the slabs themselves, drawing the furthest one first).
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
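GLM ships exactly this limit matrix, if you want to see it in action (the fov/aspect/near values are placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// lim(zFar -> infinity) of the standard perspective matrix: the third row
// becomes [0 0 -1 -2*zNear], so only zNear survives in the projection.
glm::mat4 proj = glm::infinitePerspective(glm::radians(45.0f), // fovy
                                          4.0f / 3.0f,         // aspect
                                          0.1f);               // zNear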
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This prevents clipping against the near and far planes; any fragments that would have normalized z coordinates outside the [-1, 1] range are instead clamped to that range. As previously indicated, this screws up depth tests between fragments that are being clamped, but usually those objects are far away.
The fact is that the OpenGL depth test is performed in window-space coordinates (normalized device coordinates in $[-1,1]^3$, with extra scaling from glViewport and glDepthRange). So, from my point of view, this is a design decision of the OpenGL library. One approach to eliminating the far-plane clipping is the depth-clamp extension (https://www.opengl.org/registry/specs/ARB/depth_clamp.txt), or the equivalent OpenGL core functionality, if it is available in your OpenGL version.
I also want to point out that, mathematically, there is nothing about a "far clipping plane" in perspective projection itself.

3.1 For a perspective projection you need to set up a point $\vec{c}$ as the center of projection and a plane onto which the projection is performed. Call it the image plane $T:\ (\vec{r}-\vec{r}_0,\vec{n})=0$.

3.2 Assume the image plane $T$ separates an arbitrary point $\vec{r}$ from the center of projection $\vec{c}$. Otherwise $\vec{r}$ and $\vec{c}$ lie in the same half-space and the point $\vec{r}$ should be discarded.

3.3 The idea of the projection is to find the intersection $\vec{i}$ of the line through $\vec{c}$ and $\vec{r}$ with the plane $T$:

$\vec{i}=(1-t)\,\vec{c}+t\,\vec{r}$

3.4 Since $\vec{i}$ lies in the plane,

$(\vec{i}-\vec{r}_0,\vec{n})=0 \;\Rightarrow\; ((1-t)\,\vec{c}+t\,\vec{r}-\vec{r}_0,\vec{n})=0 \;\Rightarrow\; (\vec{c}+t\,(\vec{r}-\vec{c})-\vec{r}_0,\vec{n})=0$

which solves to

$t=\dfrac{(\vec{r}_0-\vec{c},\,\vec{n})}{(\vec{r}-\vec{c},\,\vec{n})}$

3.5 Substituting this $t$ into the equation of 3.3 gives the projection of $\vec{r}$ onto the plane $T$.

3.6 After projection, the point lies in the plane. If the image plane is parallel to the OXY plane, you can keep the original "depth" of the point alongside its projection and use that for depth testing.

So, from a geometric point of view, it is possible not to use a far plane at all, and also not to use the $[-1,1]^3$ model explicitly at all.