View frustum culling for animated meshes - c++

I implemented frustum culling in my system; it tests every object's bounding sphere against the frustum planes, and it works great. (I find a plane-vs-AABB check unnecessary.)
However, the bounding sphere of the mesh is computed from its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword), some vertices can end up outside the sphere.
This often results in the mesh being culled even though some of its vertices should still be rendered (e.g. the player's sword that left the sphere).
I could think of two possible solutions for this:
1. For every mesh, every frame, recompute its bounding sphere from the current bone transforms. (I have no idea how to start with this...) Could this be too inefficient?
2. Add a fixed offset to every sphere's radius (based on the overall mesh size, maybe?), so the mesh can never be culled even while animated.

(1) would be inefficient in real time, yes. However, you can mix both approaches: compute the largest possible bounding sphere statically, i.e. when you load the mesh. Using that radius in (2) guarantees a better result than some arbitrary offset you make up.
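A sketch of that load-time computation, assuming you can CPU-skin (or otherwise sample) the vertex positions for each animation frame once at load; `Vec3` is an ad-hoc helper struct:

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Expand a single conservative radius around a fixed center by sampling
// every vertex of every animation frame once, at load time.
float maxBoundingRadius(const Vec3& center,
                        const std::vector<std::vector<Vec3>>& framesOfVertices)
{
    float maxSq = 0.0f;
    for (const auto& frame : framesOfVertices)
        for (const auto& v : frame) {
            float dx = v.x - center.x, dy = v.y - center.y, dz = v.z - center.z;
            float d2 = dx * dx + dy * dy + dz * dz;
            if (d2 > maxSq) maxSq = d2;
        }
    return std::sqrt(maxSq);
}
```

The resulting radius never changes at runtime, so the per-frame culling cost stays exactly what it is now.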

(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then compute a bounding box or bounding sphere from them. Alternatively, you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
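A sketch of the locator approach, with ad-hoc `Vec3`/`Mat4` helpers (column-major matrix, as in OpenGL): each update, transform the locator origins by their bones' current matrices and grow the bind-pose radius to cover them.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };  // column-major 4x4 matrix

// Transform a point by a 4x4 bone matrix (w = 1).
Vec3 transformPoint(const Mat4& b, const Vec3& p)
{
    return {
        b.m[0] * p.x + b.m[4] * p.y + b.m[8]  * p.z + b.m[12],
        b.m[1] * p.x + b.m[5] * p.y + b.m[9]  * p.z + b.m[13],
        b.m[2] * p.x + b.m[6] * p.y + b.m[10] * p.z + b.m[14],
    };
}

// Grow the bind-pose radius so it also covers every transformed locator.
float animatedRadius(const Vec3& center, float bindRadius,
                     const std::vector<Vec3>& locators,
                     const std::vector<Mat4>& boneMatrices)
{
    float r = bindRadius;
    for (size_t i = 0; i < locators.size(); ++i) {
        Vec3 p = transformPoint(boneMatrices[i], locators[i]);
        float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        r = std::max(r, std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    return r;
}
```

A handful of locators per mesh keeps this cheap enough to run every update.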

Related

Retrieving occluded faces given a rectangular region

I am trying to do a click-and-drag selection to select all the visible faces of a model (similar to those in 3D modelling software such as Blender).
Initially I was thinking of using ray intersection to find all the occluded faces in the scene: for every pixel in the viewport, trace a ray into the scene and find the first intersection. The occluded faces would then be the ones that were never intersected. But after experimenting I realized that this method is very slow.
I heard of another method which goes something like:
Assign a unique color to each primitive.
Project all of those onto a virtual plane that coincides with the viewport.
From the projected pixels, if the color corresponding to a primitive is not present, then that primitive is occluded.
The problem is that I have no idea how to create such a "virtual" plane without revealing it to the end user. Any help, or a better idea to solve this?
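The "virtual plane" is normally just an offscreen framebuffer (an FBO in OpenGL) that you render into but never present, so the user never sees it; after rendering, read the pixels back and collect the IDs you find. A minimal sketch of the ID/color packing that makes each primitive uniquely identifiable (names are illustrative):

```cpp
#include <cstdint>

// Pack a primitive index into a 24-bit RGB color and back. Render every
// primitive flat-shaded with its encoded color into an offscreen
// framebuffer, read the pixels back, and decode: any ID that never
// appears in the readback belongs to an occluded primitive.
struct ColorId { uint8_t r, g, b; };

ColorId encodeId(uint32_t id)
{
    return { uint8_t(id & 0xFF),
             uint8_t((id >> 8) & 0xFF),
             uint8_t((id >> 16) & 0xFF) };
}

uint32_t decodeId(ColorId c)
{
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```

Make sure to render with lighting, blending, multisampling, and texture filtering disabled, or the readback colors will not match the encoded IDs exactly.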

Algorithm to constrain moving point onto 3d surface

I'm not really sure where to start looking for info about this question, so I'm asking here. Hopefully it's not too general. I've written a particle library in C++ and am trying to add the ability to constrain particles to the surface of a mesh. Not a rigid constraint though -- I want particles to be able to slide over the surface when affected by forces.
So, imagine I have an arbitrary concave mesh with n triangular faces. I then have a 3D point (particle) located on one of the faces. I apply a directional force to that particle to get it moving, but I want it to move along the topology of the surface, not simply move linearly through space. It should move smoothly over the surface and always be touching a triangle of the mesh.
I've thought about moving the particle linearly at first and then snapping it to the closest point on the surface, but that runs into a lot of problems, like the particle snapping to a non-contiguous part of the mesh simply because it happens to be closer after the force has moved it.
Then I thought about checking its barycentric coordinates and using them to determine which adjacent triangle it should move onto if it leaves the bounds of its current triangle... but that seems like a hugely inefficient solution riddled with other problems (like the force moving the particle past the bounds of all adjacent triangles as well).
Then I thought about using UVW coordinates to figure out where the particle would move to, but that wouldn't work either.
Any ideas?
Here's an image to help illustrate the problem:
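For the barycentric idea above: computing the coordinates is cheap per triangle, and they tell you both whether the particle has left the triangle and across which edge (whichever coordinate went negative). A minimal sketch, with an ad-hoc `Vec3` helper:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Barycentric coordinates (u, v, w) of point p with respect to triangle
// (a, b, c); p lies inside the triangle iff all three are in [0, 1].
// A negative coordinate identifies the edge the point crossed.
void barycentric(const Vec3& p, const Vec3& a, const Vec3& b, const Vec3& c,
                 float& u, float& v, float& w)
{
    Vec3 v0 = sub(b, a), v1 = sub(c, a), v2 = sub(p, a);
    float d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    float d20 = dot(v2, v0), d21 = dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    v = (d11 * d20 - d01 * d21) / denom;
    w = (d00 * d21 - d01 * d20) / denom;
    u = 1.0f - v - w;
}
```

When a step would cross several triangles, the usual fix is to clip the motion at the crossed edge, rotate the remaining motion into the neighbouring triangle's plane, and repeat, so a large force just walks across multiple faces in one update.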

Interpolate color between voxels

I have a 3D texture containing voxels that I am ray tracing; every time a ray hits a voxel, I display its color. The result is nice, but you can clearly see the separation between blocks. I would like the color to blend smoothly from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel, I am not sure which neighbouring voxels to take the colors from, because I don't know whether the voxel is part of a wall parallel to some axis, part of a floor, or an isolated part of the scene. Ideally I would fetch, for every voxel, the 26 neighbouring voxels, but that can be quite expensive. Is there a fast, approximate solution for this?
PS: I notice that Minecraft has smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?

Detect the surfaces of a 3D mesh, selected by mouse

I have an application that displays objects in 3D. Now I want to improve it: when I double-click in an area of my mesh, I want to retrieve the edges and surfaces present in that area, in order to subdivide that area afterwards. Is it possible to retrieve them? Thanks.
Convert the click on the viewport to a ray in world space.
Then query your scene with the ray to find intersecting objects (based on an axis-aligned bounding box search using your scene's octree, if you have one).
Then, if you need to detect triangles, test the ray against all triangles in the objects found by the scene query. You could optimize this step, if necessary, by building an octree over each object's mesh. Of the intersected triangles, the one closest to the ray origin is the hit.
For each object, you can transform the ray into the object's local coordinate system rather than transforming the mesh.
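A minimal sketch of the first step, converting a click to a view-space ray direction, assuming a symmetric perspective projection looking down -Z (transform the result by the inverse view matrix to get the ray in world space):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Build a picking ray direction in view space from a click at pixel
// (px, py) on a viewport of size (width, height), given the camera's
// vertical field of view in radians. The ray origin is the camera position.
Vec3 viewSpaceRayDir(float px, float py, float width, float height, float fovY)
{
    float ndcX = 2.0f * px / width - 1.0f;   // [-1, 1], left to right
    float ndcY = 1.0f - 2.0f * py / height;  // [-1, 1], flip y: pixels go down
    float aspect = width / height;
    float tanHalf = std::tan(fovY * 0.5f);
    Vec3 d = { ndcX * aspect * tanHalf, ndcY * tanHalf, -1.0f };  // -Z forward
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```

If you already have the projection and view matrices, unprojecting the click at two depths (near and far) and subtracting gives the same ray without trigonometry.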

Bounding box frustum rendering - Distance rendering - OpenGL

I am rendering an old game format where I have a list of meshes that make up the area you are in. I have finally gotten the PVS (the set of regions visible from another region) working, and it cuts out a lot of the meshes I don't need to render, but not enough. My list of meshes to render now only includes the meshes I can see, but it isn't perfect: there are still a ton of meshes, including really far away ones that are past the far clip plane.
Now, first, I am trying to cull meshes that aren't in my view frustum. I have heard that a bounding box is the best way to do this. Does anyone have an idea how I can go about it? I know I need the maximum point (x, y, z) and the minimum point (x, y, z) so that the box encompasses all of the vertices.
Then, do I just check whether either of those points is in my view frustum? Is it that simple?
Thank you!
An AABB, or axis-aligned bounding box, is a very simple and fast volume for testing intersection/containment between two 3D regions.
As you suggest, you calculate a min and max x, y, z for each of the two regions you want to compare, for example the region that describes the frustum and the region that describes a mesh. It is axis-aligned because the resulting box has edges parallel to the axes of the coordinate system. Obviously this can be slightly inaccurate (false positives for intersection/containment, but never false negatives), so once you have filtered your list with the AABB test, you might perform a more accurate test on the remaining meshes.
You test for intersection/containment as follows:
F = AABB of frustum
M = AABB of mesh
bool is_mesh_in_frustum(const AABB& F, const AABB& M)
{
    if (F.min.x > M.max.x || M.min.x > F.max.x ||
        F.min.y > M.max.y || M.min.y > F.max.y ||
        F.min.z > M.max.z || M.min.z > F.max.z)
    {
        return false;
    }
    return true;
}
You can also look up algorithms for bounding spheres, oriented bounding boxes (OBBs), and other types of bounding volumes. Depending on how many meshes you are rendering, you may or may not need a more accurate method.
To create an AABB in the first place, you could simply walk the vertices of the mesh and record the minimum/maximum x and y and z values you encounter.
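A sketch of that walk, with ad-hoc `Vec3`/`AABB` structs:

```cpp
#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Walk the vertices once and record the extremes on each axis.
// Assumes the mesh has at least one vertex.
AABB computeAABB(const std::vector<Vec3>& vertices)
{
    AABB box { vertices[0], vertices[0] };
    for (const Vec3& v : vertices) {
        box.min.x = std::min(box.min.x, v.x);  box.max.x = std::max(box.max.x, v.x);
        box.min.y = std::min(box.min.y, v.y);  box.max.y = std::max(box.max.y, v.y);
        box.min.z = std::min(box.min.z, v.z);  box.max.z = std::max(box.max.z, v.z);
    }
    return box;
}
```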
Also consider: if the meshes don't deform, then the bounding box in each mesh's local coordinate space is static, so you can calculate the AABB for every mesh as soon as you have the vertex data.
Then you just have to transform the precalculated AABB into the frustum's coordinate space before you do the test each render pass. (Under rotation it isn't enough to transform only the min and max points; transform all eight corners of the box and take the new min/max.)
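A sketch of that per-pass transform, using an ad-hoc column-major `Mat4`; it transforms all eight corners so the result stays conservative under rotation:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };
struct Mat4 { float m[16]; };  // column-major 4x4 matrix

static Vec3 transformPoint(const Mat4& t, const Vec3& p)
{
    return {
        t.m[0] * p.x + t.m[4] * p.y + t.m[8]  * p.z + t.m[12],
        t.m[1] * p.x + t.m[5] * p.y + t.m[9]  * p.z + t.m[13],
        t.m[2] * p.x + t.m[6] * p.y + t.m[10] * p.z + t.m[14],
    };
}

// Transform the 8 corners of the precalculated local-space AABB and take
// the min/max of the results: a conservative AABB in the target space.
AABB transformAABB(const Mat4& t, const AABB& local)
{
    AABB out;
    for (int i = 0; i < 8; ++i) {
        Vec3 c = { (i & 1) ? local.max.x : local.min.x,
                   (i & 2) ? local.max.y : local.min.y,
                   (i & 4) ? local.max.z : local.min.z };
        Vec3 p = transformPoint(t, c);
        if (i == 0) { out.min = out.max = p; continue; }
        out.min.x = std::min(out.min.x, p.x); out.max.x = std::max(out.max.x, p.x);
        out.min.y = std::min(out.min.y, p.y); out.max.y = std::max(out.max.y, p.y);
        out.min.z = std::min(out.min.z, p.z); out.max.z = std::max(out.max.z, p.z);
    }
    return out;
}
```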
EDIT (for comment):
The AABB can produce false positives because it is, at best, exactly the shape of the region you are bounding, and is typically larger.
Consider a sphere: using an AABB is like putting a basketball into a box; you have all these gaps at the corners of the box that the ball can't reach.
Or in the case of a frustum, which narrows towards the camera, its AABB simply continues straight along the axis towards the camera, effectively bounding a region larger than the camera can see.
This is a source of inaccuracy, but it will never cull an object that is even slightly inside the frustum, so at worst you will still draw some meshes that are close to the camera but just outside the frustum.
You can rectify this by first doing the AABB test to produce a smaller list of meshes that pass, and then performing a more accurate test on that list with a more accurate bounding volume for the frustum and/or the meshes.
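One common choice for that more accurate pass is testing each mesh's bounding sphere against the six frustum planes directly. A minimal sketch, assuming the planes store inward-pointing unit normals so that n·p + d >= 0 for points inside:

```cpp
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // n·p + d >= 0 for points inside

// A sphere is outside the frustum iff it lies entirely behind any one of
// the six planes. Run this only on the short list that survives the
// cheap AABB filter.
bool sphereInFrustum(const Plane planes[6], const Vec3& center, float radius)
{
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].n.x * center.x
                   + planes[i].n.y * center.y
                   + planes[i].n.z * center.z
                   + planes[i].d;
        if (dist < -radius)
            return false;  // fully behind this plane: cull
    }
    return true;  // inside or intersecting the frustum
}
```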