Bounding box frustum rendering - Distance rendering - OpenGL - c++

I am rendering an old game format where I have a list of meshes that make up the area you are in. I have finally gotten the PVS (regions visible from another region) working, and that cuts out a lot of the meshes I don't need to render, but not enough. So now, my list of meshes I should render only includes the other meshes I can see. But it isn't perfect. There are still a ton of meshes, including really far away ones that are past the far clip plane.
Now, first, I am trying to cull out meshes that aren't in my view frustum. I have heard that a bounding box is the best way to go about doing this. Does anyone have an idea about how I can go about this? I know I need the maximum point (x, y, z) and the minimum point (x, y, z) so that a box encompasses all of the vertices.
Then, do I check and see if either of those points are in my view frustum? Is it that simple?
Thank you!

AABB, or Axis-Aligned Bounding Box, is a very simple and fast object for testing intersection/containment of two 3D regions.
As you suggest, you calculate a min and max x, y, z for the two regions you want to compare, for example, the region that describes a frustum and the region that describes a mesh. It is axis aligned because the resulting box has edges parallel to the axes of the coordinate system. Obviously this can be slightly inaccurate (false positives of intersection/containment, but never false negatives), so once you filter your list with the AABB test, you might consider performing a more accurate test for the remaining meshes.
You test for intersection/containment as follows:
F = AABB of frustum
M = AABB of mesh
bool is_mesh_in_frustum(const AABB& F, const AABB& M)
{
    if (F.min.x > M.max.x || M.min.x > F.max.x ||
        F.min.y > M.max.y || M.min.y > F.max.y ||
        F.min.z > M.max.z || M.min.z > F.max.z)
    {
        return false;
    }
    return true;
}
You can also look up algorithms for bounding spheres, oriented bounding box (OBB), and other types of bounding volumes. Depending on how many meshes you are rendering, you may or may not need a more accurate method.
To create an AABB in the first place, you could simply walk the vertices of the mesh and record the minimum/maximum x and y and z values you encounter.
Also consider: if the meshes don't deform, then the bounding box in each mesh's coordinate space is static, so you can calculate the AABB for all the meshes as soon as you have the vertex data.
Then you just have to transform the precalculated AABB into the frustum's coordinate space before you do the test each render pass. (If the transform includes a rotation, transform all eight corners of the box and re-derive the min and max, rather than transforming only the two stored extremes.)
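A minimal sketch of both steps (the names `aabb_from_vertices` and `transform_aabb` are mine, not from the question): build the AABB by scanning the vertices once, then rebuild it in another coordinate space by pushing all eight corners through the transform.

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Walk the vertices once, recording the min/max of each component.
AABB aabb_from_vertices(const std::vector<Vec3>& verts)
{
    AABB box = { {  FLT_MAX,  FLT_MAX,  FLT_MAX },
                 { -FLT_MAX, -FLT_MAX, -FLT_MAX } };
    for (const Vec3& v : verts) {
        box.min.x = std::min(box.min.x, v.x);  box.max.x = std::max(box.max.x, v.x);
        box.min.y = std::min(box.min.y, v.y);  box.max.y = std::max(box.max.y, v.y);
        box.min.z = std::min(box.min.z, v.z);  box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}

// Re-derive the AABB in another space: transform all eight corners (not
// just min and max, which goes wrong under rotation) and take the
// min/max of the results. 'xform' stands in for your model-to-frustum
// space transform.
AABB transform_aabb(const AABB& b, Vec3 (*xform)(const Vec3&))
{
    std::vector<Vec3> corners;
    for (int i = 0; i < 8; ++i) {
        Vec3 c = { (i & 1) ? b.max.x : b.min.x,
                   (i & 2) ? b.max.y : b.min.y,
                   (i & 4) ? b.max.z : b.min.z };
        corners.push_back(xform(c));
    }
    return aabb_from_vertices(corners);
}
```

Since the meshes don't deform, `aabb_from_vertices` runs once at load time and only `transform_aabb` runs per frame.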
EDIT (for comment):
The AABB can provide false positives because it is at best the exact shape of the region you are bounding, but is more typically larger than the region you are bounding.
Consider a sphere: if you use an AABB, it's like putting a basketball into a box; you have all these gaps at the corners of the box where the ball can't reach.
Or in the case of a frustum where the frustum angles inwards towards the camera, the AABB for it will simply continue straight along the axis towards the camera, effectively bounding a region larger than the camera can see.
This is a source of inaccuracy, but it should never result in you culling an object that is even slightly inside the frustum, so at worst you will still be drawing some meshes that are close to the camera but still outside the frustum.
You can rectify this by first doing an AABB test to produce a smaller list of meshes that pass, and then performing a more accurate test on that smaller list with a more accurate bounding volume for the frustum and/or meshes.
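One common form of that more accurate test is checking the mesh AABB against the six actual frustum planes instead of the frustum's own AABB. A sketch, assuming a hypothetical `Plane` convention where points with `dot(n, p) + d >= 0` are on the inside:

```cpp
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };
// Plane stored as n.x*x + n.y*y + n.z*z + d >= 0 for points inside.
struct Plane { Vec3 n; float d; };

// "p-vertex" test: pick the AABB corner furthest along the plane normal;
// if even that corner is behind the plane, the whole box is outside.
bool aabb_outside_plane(const AABB& b, const Plane& p)
{
    Vec3 c = { p.n.x >= 0.0f ? b.max.x : b.min.x,
               p.n.y >= 0.0f ? b.max.y : b.min.y,
               p.n.z >= 0.0f ? b.max.z : b.min.z };
    return p.n.x * c.x + p.n.y * c.y + p.n.z * c.z + p.d < 0.0f;
}

// A mesh AABB survives culling only if it is not fully outside any of
// the six frustum planes (extracted from your projection*view matrix).
bool aabb_in_frustum(const AABB& b, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i)
        if (aabb_outside_plane(b, planes[i]))
            return false;
    return true;
}
```

This removes the false positives behind the camera that the frustum's own AABB lets through.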

Related

View frustum culling for animated meshes

I implemented frustum culling in my system; it tests the frustum planes against every object's bounding sphere, and it works great. (I find the PlaneVsAabb check unneeded.)
However, the bounding sphere of the mesh is adjusted for its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can go outside the sphere.
This often results in a mesh getting culled although some of its vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
For every mesh in every frame, calculate its new bounding sphere based on bone changes. (I have no idea how to start with this...) Could this be too inefficient?
Add a fixed offset for every sphere radius (based on the entire mesh size maybe?), so there could be no chance of the mesh getting culled even when animated.
(1) would be inefficient in real time, yes. However, you can do a mixture of both by computing the largest possible bounding sphere statically, i.e. when you load the mesh. Using that in (2) would guarantee a better result than some arbitrary offset you make up.
(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then calculate a bounding box or bounding sphere. Or you can precompute bounding volumes for each frame of the animation offline. Doom 3 uses the second approach.
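A sketch of the wrap-a-sphere-around-the-locators step (the name `sphere_from_points` is mine): a centroid-plus-furthest-point sphere is not the minimal enclosing sphere, but it is cheap and conservative enough for culling a handful of transformed locator positions each update.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Conservative bounding sphere: centroid as center, furthest point as radius.
Sphere sphere_from_points(const std::vector<Vec3>& pts)
{
    Vec3 c = { 0.0f, 0.0f, 0.0f };
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const float inv = 1.0f / static_cast<float>(pts.size());
    c.x *= inv; c.y *= inv; c.z *= inv;

    float r2 = 0.0f;
    for (const Vec3& p : pts) {
        const float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        r2 = std::max(r2, dx * dx + dy * dy + dz * dz);
    }
    return { c, std::sqrt(r2) };
}
```

Feed it the transformed locator origins (sword tip, hands, feet, etc.) and test the resulting sphere against the frustum as before.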

Surface mesh generation (triangulation) from exact points on a tube surface

What would be recommended ways to generate surface meshes of a particular kind of body given the following?
The geometric body is an extruded 3D "tube" segment. The tube segment has the following properties:
At each value of X, the cross-section is always a simple polygon in the Y-Z plane
The polygons are not guaranteed to be convex
The polygons are not necessarily constant as X is traversed; they smoothly dilate and/or change shape, and the areas of the polygons smoothly vary
The centroids of each X = const polygon, if connected together with simple line segments, would form a very smooth, well behaved "thread" with at most gentle curvature, no sharp bends, folds, or loops, etc.
The surface section is capped by the planar cross-sectional polygons at X = X_start and X = X_end
Objective:
Generate a triangulated surface mesh of the tube surface, respecting the fact that it is bounded at the start and end by flat, planar cross-sectional surfaces
The mesh should be of the tube, not a convex hull of the tube
If the tube surface mesh maintains the property that there is a flat simple polygonal cross-section formed by the vertices at X = X_start and X = X_end, then I have existing code which can mesh the end caps; the real problem I'm trying to solve is to get the 3D tube surface mesh generated. If the solution also can generate the end caps, that's fine too. However, the end cap surfaces need to be identifiable as such for output purposes.
Once the mesh is generated, it needs to be written in a format like OFF, which I think I can handle based on code included with CGAL, examples, etc. The point here is that I don't need to be able to further process the mesh (e.g. deformations, add/remove points) programmatically after it is generated.
Known inputs and properties:
I have the polygonal cross-section tube surface vertices at an arbitrary number of X = const stations between X_start and X_end ; I can control the spacing in the X direction as necessary when I create/import the points
The vertices lie exactly on the tube surface and are not corrupted by any noise, joggles, sampling, approximations, etc.
I do not have any guarantees about the relative position of vertices forming each cross-sectional polygon, other than that the polygon vertices are oriented clockwise
I can generate normals for the polygonal vertices in terms of their Y-Z components, but I don't have a priori information about their normal components in the X direction
I can generate any number of vertices on the end caps if necessary
Right now the vertices are 3-space floating-point coordinate values, but if it could somehow help, I could turn each cross-section into a formal CGAL 2D arrangement
Estimated number of vertices would likely be less than 1000, definitely less than say 15K. Processing time is not a concern.
Ideals:
Ideally, the surface mesh would just use the vertices I have, without subtracting or moving any of them, but this is not a hard constraint so long as they are "close"
I need simple polygonal vertices at X_start and X_end so I can cap the surfaces as intended
Initially, CGAL's Poisson Surface Reconstruction method seemed promising, but in the end it seems like it leads to a processing pipeline that might smear the vertices I have; additionally, I don't have full 3D normal information for the points other than the end caps. Moreover, the method would seem to have issues with the sharp, distinct cross-section terminal face surfaces. Maybe I could get around the latter by putting in a bunch of benignly false vertices to extend and terminate the tube, then filter out parts of the triangulation I don't need, but there's no guarantee that the vertices at X_start and X_end would remain, and I would have to "fix-up" the triangulation crossing those planes, which seems non-trivial.
Another possibility might be to compute a full 3D volume mesh using CGAL's 3D mesh generator, but just write out the portion comprising the surface mesh. Is this reasonable? If I could retain the original input vertices, and this overall approach is reasonable, I could filter as I wrote out the triangulation to distinguish between the faces forming the end caps vs. the tube surface.
I also saw this SO question Representing a LiDAR surface using the 3D Delaunay Triangulation as basis? which seems to have some similarities (trying to just retain the input points, and some foreknowledge of the surface properties), but in the end I think my use case is too different.
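For what it's worth, if each cross-section could be resampled to the same vertex count with corresponding ordering (a big "if", given the stated lack of correspondence guarantees), the tube wall reduces to stitching adjacent rings with quads split into triangles. A sketch (names and indexing scheme are mine):

```cpp
#include <array>
#include <vector>

// Index triple describing one triangle into a flat vertex array where
// ring r occupies indices [r*ringSize, (r+1)*ringSize).
using Tri = std::array<int, 3>;

// Stitch consecutive rings of 'ringSize' vertices into wall triangles.
std::vector<Tri> stitch_rings(int numRings, int ringSize)
{
    std::vector<Tri> tris;
    for (int r = 0; r + 1 < numRings; ++r) {
        for (int i = 0; i < ringSize; ++i) {
            const int j = (i + 1) % ringSize;   // wrap around the ring
            const int a = r * ringSize + i;
            const int b = r * ringSize + j;
            const int c = (r + 1) * ringSize + i;
            const int d = (r + 1) * ringSize + j;
            tris.push_back({ a, b, c });        // split each quad
            tris.push_back({ b, d, c });        // into two triangles
        }
    }
    return tris;
}
```

The first and last rings then remain intact as the simple polygons needed for the end caps, and the wall triangles are trivially distinguishable from the cap triangles on output.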

Find out the texture portion needed for a mesh

I've got a very specific problem. I have an OpenGL application that is used to render video onto 3D meshes. As it turns out, I can make my video sources send me rectangular portions of the image, reducing memory usage. These portions are specified as a Rectangle2D(int x, int y, int width, int height) with 0 <= x <= w <= sourceVideoWidth and 0 <= y <= h <= sourceVideoHeight.
With that said, I want to find out, for each frame, and for each mesh the following:
Whether the mesh is visible
If so, what portion of image should I request
The benefit is reducing the texture upload to the GPU; this operation is often the bottleneck in my application.
In order to simplify the problem let's make the assumption that all meshes are 3D rectangles arbitrarily positioned. A 3D rectangle is defined by four points:
class Rectangle3D
{
public:
    Vec3 topLeft;
    Vec3 topRight;
    Vec3 botLeft;
    Vec3 botRight;
};
Possible solutions:
A) Split the mesh into a point grid of points with known texture coordinates, and run frustum culling for each point, then, from the visible points find the top left and bottom right texture coordinates that we must request. This is rather inefficient, and the number of points to test multiplies when we add another mesh to the scene. Solutions that use just the four corners of the rectangle might be preferable.
B) Using the frustum defining planes (see frustum culling). For further simplicity, using only the four planes that correspond to the screen sides. Finding out whether the mesh is visible is rather simple. Finding the visible texture coordinates would need several cases:
- One or more frustum sides intersect with the mesh
- No frustum sides intersect with the mesh
- Either the mesh is fully visible
- Or the mesh is surrounding the screen sides
In any case I need several plane-plane and plane-line-segment intersections, which are not necessarily efficient.
C) Make a 2D projection of the Rectangle3D lines, resulting into a four side polygon, then using line segment intersection between the screen sides and the polygon sides. Also accounting for cases where we have no intersection and the mesh is still visible.
D) Using OpenGL occlusion query objects, this way a render pass could generate information about the visible mesh portion.
Is there any other solution that best solves this problem? If not which one would you use and why?
Just one more thought on your solutions:
Why don't you incorporate one rendering pass for occlusion queries? Split your mesh into imaginary sub-rectangles that tell you about the visible parts of the mesh.
The left part of the image shows the imaginary sub-rectangles; the right part shows the sub-rectangles visible within the screen area (the red rectangle in this case). Based on this pass's result, you will get the coordinates of the mesh that are visible.
UPDATE:
This is a sample view that explains my point. This can be done by using OpenGL query objects.
r is the result of GL_SAMPLES_PASSED.
Since you will know which rectangles are visible through the results of the query objects, you will know which coordinates are visible. Google for OpenGL occlusion queries and you will get detailed info. Hope this helps.

Determine if a 2d polygon can be drawn with a single triangle fan

At first, I thought this problem would be equivalent to determining if a polygon is convex, however it seems that a non-convex polygon could still be drawn by one triangle fan. Consider this shape, a non-convex polygon. One could easily imagine some region of centerpoints that would allow this polygon to be drawn with a triangle fan (although there would be other centerpoints that would not). Given a fixed centerpoint, I want to be able to determine if the set of 2d points defining the polygon allow for it to be drawn with a single triangle fan.
It seems like the key is making sure nothing "gets in the way" of a line drawn from the centerpoint to any of the vertices, that is, none of the polygon's other edges. However, it is important to make this as computationally inexpensive as possible, and I'm not sure if there's a nice math shortcut for doing this.
Ultimately, I'm going to have the vertices of polygons moving, and I'll need to determine the "boundary" a vertex is allowed to move, given the rest are fixed (and perhaps later even allowing the simultaneous reactive movement of the direct 2 neighbors as well), to keep the polygon capable of being drawn in a single triangle fan. But that's the future, hopefully the test over the full polygon can be broken into a subset of calculations to test the bounds of a single vertex's movement with the assumption of an already convex polygon.
The property you're looking for is "star-shaped". Star-shaped polygons are defined by having a point from which the entire polygon is visible.
To test that a polygon is star-shaped, you can construct the region from which the whole polygon will be visible. That region would be a convex set, so you can intersect it with a halfplane in O(log(n)).
That means you can intersect the halfplanes formed by the edges and check that the resulting visibility region is nonempty in O(n log n).
A polygon can be drawn as a triangle fan if the angle from the anchor to each vertex moves in the same direction. The easiest way to test this is to check that the cross products taken at successive vertices (relative to the center) all have the same sign.
It will look something like this:
// 2D "cross product": the scalar z-component a.x*b.y - a.y*b.x
float cross2(const Vec2& a, const Vec2& b) { return a.x * b.y - a.y * b.x; }

float lastCross = cross2(vertex[0] - center, vertex[numVerts - 1] - center);
bool canBeFan = true;
for (int n = 1; canBeFan && n < numVerts; ++n) {
    float testCross = cross2(vertex[n] - center, vertex[n - 1] - center);
    if (testCross * lastCross <= 0.0f) {  // signs differ: the winding reverses
        canBeFan = false;
    }
}
It looks like all potential centerpoints will need to be on the interior side of every edge of your polygon. So, treat all your edges as half-spaces, and determine if their intersection is empty or not.
As @jpalecek says, the term for this is star-shaped. If your polygon is star-shaped, there will be a convex polygon (interior to the original) whose points can view all edges of the original; conversely, if no such sub-polygon exists, the original is not star-shaped, and you cannot draw it with a triangle fan.
Determining this sub-polygon is basically a dual application of the convex hull problem; it can be computed in O(n log n).

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. Yet I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which entails the triangles being small at the poles compared to their size at the equator. Now, if such a formula generated the triangles such that their difference in size was minimized, that would be great.
Regarding calculating the normals and the UV map:
Fortunately there is an amazing trick for calculating the normals, on a sphere. If you think about it, the normals on a sphere are indeed nothing more than the direction from the centre of the sphere, to that point!! Furthermore, if you think it through, that means the normals literally equal the point! i.e., it's the same vector! - just don't forget to normalise the length, for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think, that's impossible, no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, a "rectangle-style" map projection. In that case u and v are basically just the longitude and latitude of any point, normalised to [0, 1].
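A sketch of that mapping, for a point on a unit sphere centered at the origin (the name `sphere_uv` and the exact equirectangular convention are assumptions):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct UV { float u, v; };

// For a unit sphere the normal IS the position; the UVs are just
// longitude/latitude rescaled to [0, 1] (equirectangular projection).
UV sphere_uv(const Vec3& n)
{
    const float PI = 3.14159265358979f;
    return { 0.5f + std::atan2(n.z, n.x) / (2.0f * PI),
             0.5f - std::asin(n.y) / PI };
}
```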
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron, then apply multiple homogeneous subdivisions of the triangles, normalizing the distance of the resulting vertices to the origin.
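The key step of that subdivision, sketched (helper name mine): the midpoint of an edge between two unit-length vertices is pushed back out to the unit sphere, and each triangle is split into four using these midpoints.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Midpoint of edge (a, b), re-projected onto the unit sphere. Each
// subdivision pass splits every triangle into four using these points.
Vec3 midpoint_on_sphere(const Vec3& a, const Vec3& b)
{
    Vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    const float len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
    return { m.x / len, m.y / len, m.z / len };
}
```

Because every vertex ends up at distance 1 from the origin, the triangles stay much more uniform in size than the pole-pinched quads gluSphere produces.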