I am rendering some old geometry from an old game. The original client had an algorithm that let it determine which areas were nearby, but I don't have that ability, so I am looking into culling the unnecessary polygons. Currently I render every single polygon in the whole zone, whether or not it is visible, and regardless of whether it is even in visual range. Obviously this is completely inefficient.
What type of culling should I look into using?
I know I can cull polygons outside the frustum, and that will alleviate some of the load, but could I also choose not to render polygons beyond a certain distance from the camera? What is this called? I also use fog in some areas, and the same question applies there: is there a way to cull everything hidden behind the fog, i.e. the area I cannot see?
There are two different things to consider: Do you just want it to look right, i.e. hidden surface removal? Then simple depth testing will do the job; the overhead is that you process geometry that doesn't make it onto the screen at all. However, if the data comes from a (very) old game, it's very likely that a full map with all its assets has fewer polygons than what's visible on a single screen in modern games. In that case you won't run into any performance problems.
If you really do run into performance problems, you'll need to find a balance between the time spent determining what is (not) visible and the time spent actually rendering. Ten years ago it was still crucial to be almost pixel perfect to save as much rasterizing time as possible. Modern GPUs have so much spare power that a coarse selection of what to include in rendering suffices.
These calculations are however completely outside the scope of OpenGL or any other 3D rasterizing API (e.g. Direct3D) — their task is just drawing triangles to the screen using sophisticated rasterization methods; there's no object management, no higher level functions. So it's up to you to implement this.
The typical approach is to use a spatial subdivision structure. Most popular are kd-trees, octrees and BSP trees. BSP trees are spatially very efficient but computationally heavier. Personally I prefer a hybrid/combination of kd-tree and octree, since those are easy to modify to follow dynamic changes in the scene. BSP trees are much heavier to update (usually requiring a full recomputation).
Given such a spatial structure it's very easy to determine whether a point lies in a specific region of interest, and it is also very simple to select nodes in the tree by geometric constraints such as planes. This makes coarse frustum culling very easy to implement: one uses the frustum clipping planes to select all the nodes of the tree that lie within them. To make the GPU's life easier you might then want to sort the nodes near to far; again the tree structure helps, as you can sort recursively down the tree, resulting in a nearly optimal O(n log(n)) complexity.
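As an illustration, here is a minimal sketch of such a coarse frustum test against an octree. The OctreeNode, AABB and Plane types are hypothetical stand-ins, not any particular library's API:

```cpp
#include <vector>

// Hypothetical types for illustration.
struct Plane { float a, b, c, d; };   // plane equation a*x + b*y + c*z + d = 0
struct AABB  { float min[3], max[3]; };
struct OctreeNode {
    AABB bounds;
    OctreeNode* children[8];          // null for leaves
    std::vector<int> triangleIds;     // geometry stored at this node
};

// Returns true if the box lies entirely on the negative (outside) side of
// the plane, tested via the box corner farthest along the plane normal.
bool outsidePlane(const AABB& box, const Plane& p) {
    float x = (p.a >= 0) ? box.max[0] : box.min[0];
    float y = (p.b >= 0) ? box.max[1] : box.min[1];
    float z = (p.c >= 0) ? box.max[2] : box.min[2];
    return p.a * x + p.b * y + p.c * z + p.d < 0;
}

// Recursively collect nodes intersecting the frustum (6 planes, normals
// pointing inward). Coarse test: a subtree is skipped only when its bounds
// are completely outside at least one plane.
void collectVisible(OctreeNode* node, const Plane frustum[6],
                    std::vector<OctreeNode*>& visible) {
    if (!node) return;
    for (int i = 0; i < 6; ++i)
        if (outsidePlane(node->bounds, frustum[i]))
            return;                   // entirely outside: cull whole subtree
    visible.push_back(node);
    for (int i = 0; i < 8; ++i)
        collectVisible(node->children[i], frustum, visible);
}
```

The near-to-far ordering mentioned above can be folded into the same traversal by visiting each node's children sorted by distance to the camera before recursing.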
If you still need to improve rendering performance, you can use the spatial divisions defined by the tree to (invisibly) render test geometry in an occlusion query before recursing into the subtree limited by the tested bounds.
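A hedged sketch of that idea with OpenGL occlusion queries (GL 1.5 / ARB_occlusion_query), reusing the types from the previous sketch; drawBoundingBox and renderSubtree are assumed placeholders, and real code would defer reading the query result rather than stalling on it:

```cpp
#include <GL/glew.h>   // assumes an extension loader for the query entry points

void drawBoundingBox(const AABB& box);   // assumed: draws the node's bounds
void renderSubtree(OctreeNode* node);    // assumed: renders the node's geometry

void renderNodeWithOcclusionQuery(OctreeNode* node) {
    GLuint query;
    glGenQueries(1, &query);

    // Render the proxy geometry invisibly: no color or depth writes.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox(node->bounds);       // cheap proxy for the whole subtree
    glEndQuery(GL_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // stalls; defer in real code
    glDeleteQueries(1, &query);

    if (samples > 0)
        renderSubtree(node);             // at least one pixel passed: recurse
}
```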
I know I can cull polygons not in the frustum and that will help alleviate some of the load but would I be able to say, choose to not render polygons that are a certain distance from the camera? What is this called?
This is already done by the frustum itself. The far plane sets a camera-distance limit on the objects to be rendered.
Have a look at glFrustum.
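For illustration, a small sketch of how the far plane doubles as a distance cull; the parameter values here are arbitrary:

```cpp
#include <GL/gl.h>

// The far clip plane already acts as a distance cutoff: anything beyond
// zFar is clipped. If fog fully obscures the scene at some distance,
// pulling zFar in to roughly that distance culls what the fog hides anyway.
void setupProjection(float zNear, float zFar) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Symmetric frustum; the last two arguments are the near/far distances.
    glFrustum(-1.0, 1.0, -0.75, 0.75, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}
```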
I am a developer of the open-source game, Bitfighter. As per the following SO post, we have used the excellent 'Triangle' library for mesh-zone generation for use with our in-game AI (robots):
Polygon Triangulation with Holes
However, we ran into a small snag when wanting to package our game for Debian: use of the 'Triangle' library means our game would be considered 'non-free'.
We have been extremely pleased with the performance of the 'Triangle' library, and don't really want to give it up; however, we don't like dealing with license issues either. Therefore we have embarked upon a quest to find a suitable, permissively-licensed replacement that can match 'Triangle' in its robustness and speed.
We're looking for a C or C++ library for dividing large, complex, areas into triangles, that can handle any type of irregular polygons placed together in any manner, as well as holes. Robustness is our primary need, with speed almost as important.
I have found poly2tri, but it suffers from a bug in which it cannot handle polygons with coincident edges.
We have found several libraries, but all seem to suffer from one thing or another: either too slow, or don't handle holes, or suffer from some bug. Currently we are testing out polypartition and we have high hopes.
What are the best alternatives to the great 'Triangle' library that have a permissive license?
I found a solution. It was poly2tri after all, with the use of the excellent Clipper library, and some minor algorithmic additions to the inputs.
Our process is as follows (a simplified code sketch follows the list):
1. Run all our holes through Clipper using a union with NonZero winding (this means that inner holes are wound in the opposite direction to outer ones). Clipper also guarantees nice clean input points with no repeats within epsilon.
2. Filter our holes into ones wound counter-clockwise and ones wound clockwise. A clockwise hole means the hole was circuitous and there was another concentric area inside that needed to be triangulated.
3. Using poly2tri, triangulate the outer bounds and each clockwise polygon found, using the rest of the holes as inputs to poly2tri if they were found within one of those bounds.
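For illustration only, a much-simplified sketch of steps 1 and 3 using the public Clipper and poly2tri APIs; it assumes a single outer bound, skips the clockwise/counter-clockwise recursion of step 2, and leaks the p2t::Point allocations:

```cpp
#include <vector>
#include "clipper.hpp"   // Clipper (Angus Johnson)
#include "poly2tri.h"    // poly2tri

std::vector<p2t::Triangle*> triangulate(const ClipperLib::Paths& rawHoles,
                                        const ClipperLib::Path& outerBound) {
    // 1. Union all holes with NonZero filling so nested holes come out
    //    wound opposite to their parents, with clean, de-duplicated points.
    ClipperLib::Clipper clipper;
    clipper.AddPaths(rawHoles, ClipperLib::ptSubject, true);
    ClipperLib::Paths cleaned;
    clipper.Execute(ClipperLib::ctUnion, cleaned,
                    ClipperLib::pftNonZero, ClipperLib::pftNonZero);

    // Convert Clipper's integer paths to poly2tri points.
    auto toP2t = [](const ClipperLib::Path& path) {
        std::vector<p2t::Point*> pts;
        for (const auto& p : path)
            pts.push_back(new p2t::Point((double)p.X, (double)p.Y));
        return pts;
    };

    // 3. Triangulate the outer bound, adding each union result as a hole.
    //    (The real code also recurses into clockwise-wound results, which
    //    mark concentric areas needing their own triangulation pass.)
    p2t::CDT cdt(toP2t(outerBound));
    for (const auto& hole : cleaned)
        cdt.AddHole(toP2t(hole));

    cdt.Triangulate();
    return cdt.GetTriangles();
}
```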
Result: poly2tri seems to triangulate just about as fast as Triangle and has so far been very robust with everything we've thrown at it.
For those interested, here are our code changes.
Update
I have attempted to pull out our clipper-to-poly2tri code, with our robustness additions, into a separate library which I started here: clip2tri
You can have a look at the 2D Triangulations package of CGAL. An example to triangulate a polygon with holes is given here.
The license of the package is GPLv3+.
Note that it should not be too hard to extract only this package if needed.
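For reference, a minimal sketch of what CGAL's constrained Delaunay triangulation looks like in code; the coordinates are made up, and classifying faces as inside or outside the domain (and inside holes) is an extra flood-fill step shown in the CGAL manual's polygon-triangulation example:

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;
typedef CDT::Point                                          Point;

int main() {
    CDT cdt;
    // Outer square boundary as constraint edges.
    Point outer[] = { Point(0,0), Point(4,0), Point(4,4), Point(0,4) };
    for (int i = 0; i < 4; ++i)
        cdt.insert_constraint(outer[i], outer[(i + 1) % 4]);
    // A square hole in the middle, also as constraint edges.
    Point hole[] = { Point(1,1), Point(3,1), Point(3,3), Point(1,3) };
    for (int i = 0; i < 4; ++i)
        cdt.insert_constraint(hole[i], hole[(i + 1) % 4]);
    // Faces now respect both boundaries, but still need in/out marking.
    return cdt.is_valid() ? 0 : 1;
}
```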
As a small side-note:
I recently had to implement a complex polygon clipper & triangulator for cutting window frames into house walls.
While I was happy with the Vatti clipper results, the Delaunay triangulation used in poly2tri was too heavy to allow smooth dragging of the window frame along the barycentric coordinates of the wall face. After scratching my head a little, I ended up tricking this much simpler triangulator into working with holes:
http://wiki.unity3d.com/index.php?title=Triangulator
What I did was horizontally subdivide the wall face by the height of the shortest clipping poly. In my case they are always rectangles, but they needn't be. In any case, it forces the clipper to work only with simple convex or concave polys without holes, and hence lets you get away with a cheaper triangulation method.
Here are some screenshots showing it working:
https://www.dropbox.com/sh/zbzpvlkwj8b9gl3/sIBYCqa8ak
Hope this helps.
I'm trying to make a game or 3D application using OpenGL. The program will have many objects drawn to the screen (around 7000 of them). When rendering, I need to calculate the distance between the camera and each object and sort them in order to render the scene correctly. Given that, what is the best way to sort them? I want the sorting to be really fast, but I've heard there are trade-offs, so which algorithm should I use to get the best performance?
Any help would be greatly appreciated.
Edit: a lot of people are suggesting the z-buffer/depth buffer. As a few answers have pointed out, this doesn't work in some cases, which is why I asked this question.
Sorting by distance doesn't solve the transparency problem perfectly. Consider the situation where two transparent surfaces intersect and each has a part which is closer to you. Perhaps rare in games, but still something to consider if you don't want an occasional glitched look to your renderer.
The better solution is order-independent transparency. With the latest graphics hardware supporting atomic operations, you can use an A-buffer to do this with little memory overhead and in a single pass so it is pretty efficient. See for example this article.
The issue of sorting your scene is still a valid one, though, even if it isn't for transparency: it is still useful to sort opaque objects front to back to allow depth testing to discard unseen fragments. For this, Vaughn provided the great solution of BSP trees; these have been used for this purpose for as long as 3D games have been around.
Use insertion sort (http://en.wikipedia.org/wiki/Insertion_sort), which has O(n) complexity for nearly sorted arrays.
In your case, by exploiting temporal coherence, insertion sort gives the fastest results.
It is used for sweep and prune (http://en.wikipedia.org/wiki/Sweep_and_prune).
From the link above:
In many applications, the configuration of physical bodies from one time step to the next changes very little. Many of the objects may not move at all. Algorithms have been designed so that the calculations done in a preceding time step can be reused in the current time step, resulting in faster completion of the calculation.
So in such cases insertion sort (or a similar sort with O(n) best-case complexity) is the best choice.
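A minimal sketch of that approach: keep the object array ordered from the previous frame and re-run insertion sort every frame. The Object3D type here is a hypothetical stand-in for your scene objects:

```cpp
#include <vector>

struct Object3D {           // hypothetical scene object
    float x, y, z;
    float camDist;          // cached distance to camera, refreshed each frame
};

// Insertion sort: O(n^2) worst case, but O(n) when the array is already
// nearly sorted. Since the camera and objects move only slightly between
// frames, last frame's order is almost correct, so the per-frame re-sort
// is close to a single linear pass.
void sortByDistance(std::vector<Object3D>& objects,
                    float camX, float camY, float camZ) {
    for (auto& o : objects) {           // squared distance is enough for ordering
        float dx = o.x - camX, dy = o.y - camY, dz = o.z - camZ;
        o.camDist = dx * dx + dy * dy + dz * dz;
    }
    for (size_t i = 1; i < objects.size(); ++i) {
        Object3D key = objects[i];
        size_t j = i;
        while (j > 0 && objects[j - 1].camDist > key.camDist) {
            objects[j] = objects[j - 1];
            --j;
        }
        objects[j] = key;
    }
}
```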
I have a scene built in OpenGL. When my light is in the center of the room, the outside of the room is lit as well. Is there any easy way to make OpenGL stop the lighting at the vertices, or will it require complex calculations? Here are pictures of my crappy, quick scene showing the lighting as it was when asking this question:
Essentially, you want the walls of the room to cast shadows: that's what it means for the exterior part of the object not to be lit.
Shadowing in graphics, is generally a pretty hard problem. There are a lot of good, and a lot of fast, solutions, but not both -- any one solution is going to be a tradeoff between the two. SIGGRAPH is full of all sorts of papers from Really Smart People trying to solve this problem.
If you want something quick and dirty, shadow mapping is not terribly difficult (at least the simple kind), but it is imprecise. You'll see artifacts along the intersections of your object and the walls, for one. For precision, stencil shadows will work, but you'll have hard-edged shadows.
One solution here would be to author the geometry in such a way that objects don't protrude outside the walls (or to separate them into inside/outside parts), then render the interior and exterior with different lights.
You wouldn't have this problem if everything cast shadows on everything else.
If the lights are static, you might want to consider pre-calculating (baking) the lighting (together with shadows). You can do this in many 3D packages, and from a programming perspective this might be the simplest solution.
If you had a general solution for real-time rendering shadows for every light, that would also solve your problem, but that is a challenging task, and it might also not be the optimal thing to do, if you want to maintain good frame-rates.
If you want to learn about real-time shadow rendering, I recommend looking at shadow maps - they are generally the solution used in most games today. Note that for a point light you would need to render the shadow map for 6 sides of a cube-map. For practical purposes you should really consider which lights really need to cast shadows, and make some kind of trade-off.
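As a starting point, here is a hedged sketch of the first half of shadow mapping: creating the depth texture and framebuffer that the light's view is rendered into. It assumes an extension loader such as GLEW and omits the second (comparison) pass entirely:

```cpp
#include <GL/glew.h>

// Render-to-depth setup for basic shadow mapping. Pass 1 renders the scene
// from the light into this depth texture; pass 2 projects the texture onto
// the scene and compares depths. For a point light, repeat for the 6 faces
// of a cube map.
GLuint createShadowMap(int size) {
    GLuint depthTex, fbo;

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);   // depth only, no color attachment
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return depthTex;         // bind the fbo and render from the light to fill it
}
```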
How can I render a bunch of hand-drawn shapes in OpenGL 1.x? I know about instancing, but how is something similar possible in old OpenGL? Could I get some examples? This is for a game; I'm expecting a thousand or so shapes, all of which will need to be updated every frame.
Assuming that (at least most of) the shapes remain unchanged from one frame to the next, so most of the update is just moving them around, you could at least consider building a display list for each shape, then rendering the display lists during an update.
The amount of good you'll get from this varies widely depending on the hardware (and possibly driver) in use though. Some hardware supports display lists directly, and gains a lot from it. With other hardware, you'll be hard put to find any difference at all.
The good points are that at worst this won't do any harm, and building/using display lists is pretty quick and easy. So, in the worst case you don't lose much, and in the best case you might gain quite a bit.
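A minimal sketch of that approach in OpenGL 1.x; the Shape struct is a hypothetical stand-in for your hand-drawn geometry:

```cpp
#include <GL/gl.h>

// Display lists: record each shape's geometry once, then replay it cheaply
// every frame, moving it with the matrix stack.
struct Shape { int triCount; const float* verts; };  // xyz per vertex

GLuint buildShapeList(const Shape& shape) {
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);            // record, don't execute
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < shape.triCount * 3; ++i)
        glVertex3fv(&shape.verts[i * 3]);
    glEnd();
    glEndList();
    return list;
}

// Per frame: translate to the shape's current position and replay its list.
void drawShapeAt(GLuint list, float x, float y, float z) {
    glPushMatrix();
    glTranslatef(x, y, z);
    glCallList(list);
    glPopMatrix();
}
```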
I searched and found some tutorials on terrain collision, but they were using .raw files and I'm using .x. Still, I think I can do the same thing they did: they took the x, y, z values of an object and checked them against every single triangle in the terrain. It makes sense, but it looks like it will be slow; just as with picking, checking against every single triangle is slow.
Is there a faster, better way to do it?
UPDATE
My terrain is not flat; if it were, I would use bounding boxes.
Last time I did this, I used the Bullet library, and it worked great. It has various collision shapes to choose from, optimised for different scenarios, including general triangle meshes and heightfields. You can use the library's collision routines without the physics.
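To make that concrete, here is a hedged sketch of using Bullet's collision layer without the dynamics layer; the vertex/index layout is an assumption, and error handling and cleanup are omitted:

```cpp
#include <btBulletCollisionCommon.h>

// Build a BVH-accelerated triangle mesh shape for the terrain and a
// collision-only world around it. Bullet's broad/narrow phases then avoid
// ever testing every triangle. verts = xyz floats, tris = 3 indices each.
btCollisionWorld* makeCollisionWorld(const float* verts,
                                     const int* tris, int triCount) {
    btTriangleMesh* mesh = new btTriangleMesh();
    for (int i = 0; i < triCount; ++i) {
        const int* t = &tris[i * 3];
        mesh->addTriangle(
            btVector3(verts[t[0]*3], verts[t[0]*3+1], verts[t[0]*3+2]),
            btVector3(verts[t[1]*3], verts[t[1]*3+1], verts[t[1]*3+2]),
            btVector3(verts[t[2]*3], verts[t[2]*3+1], verts[t[2]*3+2]));
    }
    // true = build a quantized AABB tree over the triangles.
    btBvhTriangleMeshShape* terrainShape = new btBvhTriangleMeshShape(mesh, true);

    btDefaultCollisionConfiguration* conf = new btDefaultCollisionConfiguration();
    btCollisionDispatcher* dispatcher = new btCollisionDispatcher(conf);
    btDbvtBroadphase* broadphase = new btDbvtBroadphase();
    btCollisionWorld* world = new btCollisionWorld(dispatcher, broadphase, conf);

    btCollisionObject* terrain = new btCollisionObject();
    terrain->setCollisionShape(terrainShape);
    world->addCollisionObject(terrain);
    return world;   // add a shape for the player, then call
                    // world->performDiscreteCollisionDetection() each frame
}
```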
One common way to significantly reduce the time it takes to detect collisions is to organize the space into an octree, which will allow you to very quickly determine whether or not a collision could occur in a particular node. Generally speaking, it's easier to accomplish these sorts of tasks with a game engine.