OpenGL and Flickering - opengl

When objects from a CallList intersect the near plane I get a flicker... what can I do?
I'm using OpenGL and SDL.
Yes, it is double buffered.

It sounds like you're getting z-fighting.
"Z-fighting is a phenomenon in 3D rendering that occurs when two or more primitives have similar values in the z-buffer, and is particularly prevalent with coplanar polygons. The effect causes pseudo-random pixels to be rendered with the color of one polygon or another in a non-deterministic manner, varying as the scene is animated, causing one polygon to "win" the z test, then another, and so on."
(From Wikipedia)
You can get more information about the problem in the OpenGL FAQ.
glPolygonOffset might help, but you can also get yourself into trouble with it. Tom Forsyth has a good explanation in his FAQ. Note: it talks about ZBIAS, but that's just the DirectX equivalent.

The problem was that my rotation function had some floating point errors which screwed up my model_view matrix.
None of you could have guessed it, sorry for the waste of your time.
Also, I don't think that moving the near plane should even be considered a solution to this kind of problem; usually something else is wrong, because OpenGL does support polygons intersecting the near plane.

Try moving the near clipping plane a little bit farther away:
for example, with gluPerspective it is the third parameter, zNear.
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/perspective.html
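For example (a minimal sketch, not taken from the original question's code; the field of view and distances are purely illustrative):

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Illustrative projection setup: zNear = 1.0 instead of something tiny like 0.01.
    // A larger zNear also improves depth-buffer precision.
    void setupProjection(int width, int height)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        //             fovy   aspect                           zNear  zFar
        gluPerspective(60.0,  (double)width / (double)height,  1.0,   1000.0);
        glMatrixMode(GL_MODELVIEW);
    }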

Ah, you meant the near plane. :)
Well... another thing that helps when drawing polygons in the same plane is glPolygonOffset.
From the description:
glPolygonOffset is useful for rendering hidden-line images, for applying decals to surfaces, and for rendering solids with highlighted edges.
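A minimal sketch of the usual decal setup (drawSurface and drawDecal are hypothetical helpers; the offset values are common starting points, not tuned for any particular scene):

    #include <GL/gl.h>

    void drawSurface();   // hypothetical: the base geometry
    void drawDecal();     // hypothetical: the coplanar decal

    void drawSurfaceWithDecal()
    {
        drawSurface();

        // Pull the decal slightly toward the camera so it reliably wins the depth test.
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);
        drawDecal();
        glDisable(GL_POLYGON_OFFSET_FILL);
    }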

Related

Simulate Distortion of Spherical Billboard

I need to make spherical billboards (i.e., billboards that also set depth), but taking perspective projection into account--ideally including off-center frusta.
I wasn't able to find any references to anyone succeeding at this--although there are plenty of explanations as to why standard billboards don't have perspective distortions. Unfortunately, for my application, the lack isn't a cosmetic defect; it's actually important to the algorithm.
I did a bit of investigation on my own:
The math gets pretty messy rather quickly. The obvious approaches don't work: for example, you can't orient the billboard perpendicular to a viewing ray because tangential rays wouldn't intersect the billboard at right angles.
Probably the most promising approach I found was to render the billboard parallel to the near clipping plane, stretching it with a vertex shader into an ellipse. This only handles perturbations along one axis (so e.g. it won't handle spheres rendered in a corner of the view), but the main obstacle is calculating depth correctly; you can't compute it as you would for an undistorted sphere because the "sphere" is occluding itself.
Point of fact, I didn't find a good solution, and I couldn't find anyone who has. Anyone have an idea?
While browsing around not even remotely working on this problem, I stumbled on http://iquilezles.org/www/articles/sphereproj/sphereproj.htm, which is pretty close. The linked tutorial shows how to compute a bounding ellipse for a rasterized sphere; getting the depth (at worst, using a raycast) should be fairly easy to derive.
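For the depth part, the raycast amounts to a standard ray-sphere intersection; here is a rough sketch of that math on the CPU (a fragment shader would do the same thing per pixel; all names are mine, not from the linked article):

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Distance along a normalized ray (origin at the camera) to the nearest point on
    // the sphere, or nothing if the ray misses it. Writing this distance, converted to
    // the depth-buffer range, gives the billboard a true spherical depth.
    std::optional<float> sphereDepth(const Vec3& rayOrigin, const Vec3& rayDir,
                                     const Vec3& center, float radius)
    {
        Vec3 oc { rayOrigin.x - center.x, rayOrigin.y - center.y, rayOrigin.z - center.z };
        float b = dot(oc, rayDir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0f) return std::nullopt;   // ray misses the sphere
        return -b - std::sqrt(disc);            // nearest intersection
    }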

Reduce negative perceptual effects when rendering a 3D cube filled with points with OpenGL

I need to render a simple 3D cube with OpenGL filled with points that lie on a regular 64x64x64 grid. An image can be found here.
It's hard to explain, but obviously there are some perceptual difficulties due to the projection from 3D to 2D. I tried to displace the points by a randomly generated offset, which helped a little bit, but wasn't really satisfying.
I think there's even a name for that effect, but I couldn't find it, so it would be great if someone could name it and maybe give some advice on how to reduce it.
You might be thinking of Moiré patterns. MSAA (multi-sample anti-aliasing) might help, or perhaps the introduction of jitter. See also: Supersampling
Alternatively, you could draw the points using point sprites or billboarding, which can be implemented very efficiently using modern (GL) geometry shaders.
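As a starting point, something along these lines (just a sketch; it assumes the GL context was created with a multisampled framebuffer, which is done through whatever windowing library you use, and GL_PROGRAM_POINT_SIZE, or the older GL_VERTEX_PROGRAM_POINT_SIZE, needs a suitably modern context):

    #include <GL/gl.h>

    void setupPointRendering()
    {
        glEnable(GL_MULTISAMPLE);          // let MSAA smooth the point edges
        glEnable(GL_PROGRAM_POINT_SIZE);   // let the vertex shader set gl_PointSize,
                                           // e.g. scaled with distance, for point sprites
    }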

Why does OpenGL have a far clipping plane, and what idioms are used to deal with this?

I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to be infinitely far away (see this). It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So since this revolves around the depth buffer, you won't have a problem dealing with farther-away stuff, as long as you don't use it. For example, a common technique is to render the scene in "slabs" that each only use the depth buffer internally (for all the stuff in one slab) but some form of painter's algorithm externally (for the slabs, so you draw the farthest one first).
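A rough sketch of that slab idea (Slab, setProjectionForSlab and drawSlab are placeholders for however your scene is organised):

    #include <GL/gl.h>
    #include <vector>

    struct Slab { double zNear, zFar; };   // one depth range of the scene

    void setProjectionForSlab(double zNear, double zFar);   // hypothetical: tight near/far per slab
    void drawSlab(const Slab& slab);                         // hypothetical: geometry in this range

    // Painter's algorithm across slabs, normal depth testing inside each one:
    // draw the farthest slab first and clear the depth buffer between slabs.
    void drawScene(const std::vector<Slab>& slabsFarToNear)
    {
        glEnable(GL_DEPTH_TEST);
        for (const Slab& slab : slabsFarToNear)
        {
            setProjectionForSlab(slab.zNear, slab.zFar);
            glClear(GL_DEPTH_BUFFER_BIT);
            drawSlab(slab);
        }
    }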
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
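For reference, here is a sketch of both options: the symmetric perspective matrix with the far plane taken to infinity (the limit mentioned above; only two matrix entries change), and depth clamping. This is my own illustration, not code from any particular source:

    #include <GL/gl.h>
    #include <cmath>

    // Limit of the usual perspective matrix as zFar -> infinity:
    // the (3,3) entry becomes -1 and the (3,4) entry becomes -2*zNear.
    void loadInfinitePerspective(float fovyRadians, float aspect, float zNear)
    {
        float f = 1.0f / std::tan(fovyRadians * 0.5f);
        float m[16] = {                 // column-major, as glLoadMatrixf expects
            f / aspect, 0.0f,  0.0f,          0.0f,
            0.0f,       f,     0.0f,          0.0f,
            0.0f,       0.0f, -1.0f,         -1.0f,
            0.0f,       0.0f, -2.0f * zNear,  0.0f,
        };
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(m);
        glMatrixMode(GL_MODELVIEW);
    }

    // Alternative: keep a finite projection and clamp depth instead of clipping.
    // glEnable(GL_DEPTH_CLAMP);   // GL 3.2+ core or ARB_depth_clamp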
It's just "the fact" that OpenGL depth test was performed in Window Space Coordinates (Normalized device coordinates in [-1,1]^3. With extra scaling glViewport and glDepthRange).
So from my point of view it's one of the design point of view of the OpenGL library.
One of approach to eliminate this OpenGL extension/OpenGL core functionality https://www.opengl.org/registry/specs/ARB/depth_clamp.txt if it is available in your OpenGL version.
I want to describe that in the perspective projection there is nothing about "far clipping plane".
3.1 For a perspective projection you need to set up a point \vec{c}, the center of projection, and a plane onto which the projection is performed. Let's call it the image plane T: (\vec{r}-\vec{r_0}, \vec{n}) = 0.
3.2 Let's assume that the image plane T separates an arbitrary point \vec{r} from the center of projection \vec{c}. Otherwise \vec{r} and \vec{c} lie in the same half-space and the point \vec{r} should be discarded.
3.4 The idea of the projection is to find the intersection \vec{i} of the line through \vec{c} and \vec{r} with the plane T:
\vec{i} = (1-t)\vec{c} + t\vec{r}
3.5 Since \vec{i} lies in T,
(\vec{i}-\vec{r_0}, \vec{n}) = 0
=>
((1-t)\vec{c} + t\vec{r} - \vec{r_0}, \vec{n}) = 0
=>
(\vec{c} + t(\vec{r}-\vec{c}) - \vec{r_0}, \vec{n}) = 0
=>
t = (\vec{r_0}-\vec{c}, \vec{n}) / (\vec{r}-\vec{c}, \vec{n})
3.6 The t derived in 3.5 can be substituted into 3.4, and you will obtain the projection onto the plane T.
3.7 After the projection your point will lie in the plane. But if we assume that the image plane is parallel to the OXY plane, then I can suggest using the original "depth" of the point as its depth after the projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 model explicitly at all.
P.S. I don't know how to type LaTeX formulas in the correct way so that they will be rendered.
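For what it's worth, here is a direct translation of steps 3.4-3.6 into code (my own sketch; the names follow the symbols in the derivation, and it has nothing to do with the OpenGL API itself):

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };

    static Vec3   sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Project point r onto the image plane (r0, n) from the center of projection c:
    // i = (1 - t) c + t r   with   t = (r0 - c, n) / (r - c, n).
    // Returns nothing when the line through c and r is parallel to the plane.
    std::optional<Vec3> projectPoint(const Vec3& c, const Vec3& r,
                                     const Vec3& r0, const Vec3& n)
    {
        double denom = dot(sub(r, c), n);
        if (std::fabs(denom) < 1e-12) return std::nullopt;
        double t = dot(sub(r0, c), n) / denom;
        return Vec3 { (1.0 - t) * c.x + t * r.x,
                      (1.0 - t) * c.y + t * r.y,
                      (1.0 - t) * c.z + t * r.z };
    }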

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. That caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like the algorithm to do so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each (x, y) stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
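To make that concrete, here is a very rough sketch of the data layout and the span clipping for one vertical screen line (all names are made up, and a real implementation such as Voxlap keeps a full list of open spans rather than the single interval used here):

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // RLE storage: runs of solid surface voxels for each (x, y) column, top to bottom.
    struct VoxelRun { uint8_t ztop, zbot; };           // inclusive z range of one run
    struct Column   { std::vector<VoxelRun> runs; };   // one per grid cell

    // As the ray for one screen column walks front to back, [visTop, visBot) is the
    // range of screen rows not yet covered; each projected cube draws only the part
    // that overlaps it and then shrinks the open range.
    struct VisibleSpan {
        int visTop, visBot;

        std::pair<int, int> drawAndClip(int yTop, int yBot) {
            int top = std::max(yTop, visTop);
            int bot = std::min(yBot, visBot);
            if (top < bot) {
                if (yTop <= visTop) visTop = std::max(visTop, yBot);  // covered from above
                if (yBot >= visBot) visBot = std::min(visBot, yTop);  // covered from below
            }
            return { top, bot };                        // rows actually drawn
        }
        bool done() const { return visTop >= visBot; }  // stop the ray: line fully covered
    };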
I didn't look at the algorithm itself, but I can tell the following based off the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells and checking which are occupied and which are not, then performing the lighting calculation as usual. Alternatively, each voxel can store the normal along with its color or other material properties.
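A small sketch of that neighbour-based normal (central differences on an assumed solid(x, y, z) occupancy lookup; a real implementation would usually average over a slightly larger neighbourhood):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    bool solid(int x, int y, int z);   // assumed grid lookup: true if the voxel is occupied

    // More solid material on one side pushes the normal toward the opposite side.
    Vec3 approximateNormal(int x, int y, int z)
    {
        Vec3 n {
            float(solid(x - 1, y, z)) - float(solid(x + 1, y, z)),
            float(solid(x, y - 1, z)) - float(solid(x, y + 1, z)),
            float(solid(x, y, z - 1)) - float(solid(x, y, z + 1)),
        };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }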

Compute bounding quad of a sphere with vertex shader

I'm trying to implement an algorithm from a graphics paper and part of the algorithm is rendering spheres of known radius to a buffer. They say that they render the spheres by computing the location and size in a vertex shader and then doing appropriate shading in a fragment shader.
Any guesses as to how they actually did this? The position and radius are known in world coordinates and the projection is perspective. Does that mean that the sphere will be projected as a circle?
I found a paper that describes what you need - calculating the bounding quadric. See:
http://web4.cs.ucl.ac.uk/staff/t.weyrich/projects/quadrics/pbg06.pdf
Section 3.2, Bounding Box calculation. The paper also mentions doing it on the vertex shader, so it might be what you're after.
Some personal thoughts:
You can approximate the bounding box by approximating the size of the sphere by its radius, though. Transform that to screen space and you'll get a slightly larger-than-correct bounding box, but it won't be that far off. This fails when the camera is too close to the sphere, or when the sphere is too large, of course. But otherwise it should be quite cheap to calculate, as it is simply a ratio between two similar right triangles.
If you can figure out the chord length, then the ratio will yield the precise answer, but that's a little beyond me at the moment.
(Diagram: http://xavierho.com/temp/Sphere-Screen-Space.png)
Of course, that's just a rough approximation, and can have a large error sometimes, but it gets things going quickly and easily.
Otherwise, see the paper linked above and use the correct way. =]
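For completeness, this is roughly what that similar-triangles approximation looks like in code (my own sketch, working from a sphere center given in eye space and a symmetric gluPerspective-style frustum; it inherits the same near-camera and large-sphere failure cases):

    #include <cmath>

    struct Quad2D { float cx, cy, halfW, halfH; };   // bounding quad in normalized device coords

    Quad2D approximateBoundingQuad(float cxEye, float cyEye, float czEye,  // sphere center, eye space
                                   float radius, float fovyRadians, float aspect)
    {
        float f     = 1.0f / std::tan(fovyRadians * 0.5f);   // same focal factor gluPerspective uses
        float depth = -czEye;                                 // eye space looks down -Z
        Quad2D q;
        q.cx    = (cxEye * f / aspect) / depth;
        q.cy    = (cyEye * f)          / depth;
        q.halfH = radius * f / depth;      // radius : depth, scaled by the focal factor
        q.halfW = q.halfH / aspect;
        return q;
    }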
The sphere will be projected as an ellipse unless it's at the camera's center, as brainjam says.
The article that Xavier Ho links to describes the generalization of sphere projection (that is, quadric projection). It is a very good read and I recommend it too. However, if you are only interested in sphere projection, and more precisely the quadrilateral that bounds the projection, then The Mechanics of Robust Stencil Shadows, page 6: Scissor Optimization, details how to do it.
A Note on Xavier Ho's Approximation
I would like to add that the approximation that Xavier Ho suggests is, as he notes too, very approximate. I actually used it in a tile-based forward renderer to approximate light bounds in screen space. The following image shows how it neatly enables good performance with 400 omni (spherically bound) lights in a scene: Tile-based Rendering - Far View. However, just as Xavier Ho predicted, the inaccuracy of the light bounds causes artifacts up close, as seen here when zoomed in: Tile-based Rendering - Close View. The overlapping quadrilaterals fail to bound the lights completely and instead clip the edges, revealing the tile grid.
In general, a sphere is seen as an ellipse in perspective:
(source: jrank.org)
The above image is at the bottom of this article.
Section 6 of this article describes how the bounding trapezoid of the sphere's projection is obtained. Before computers, artists and draftsmen had to figure this out by hand.