How to triangulate a sparse voxel octree (SVO) in C++

I implemented a data structure for material removal simulation based on a sparse voxel octree (SVO). Now, I want to visualize the result. Therefore, I need to triangulate my sparse voxel octree.
How can I do that? Can you recommend any fast algorithms for that?
A standard voxel model can be triangulated with marching cubes (MC). But as far as I can see, I cannot adapt this algorithm to an SVO. The MC algorithm is based on the 15 base patterns which are used to generate the triangles (with the help of a LUT for better performance). But these patterns no longer work for SVO voxels, because the voxels can have different sizes depending on the local resolution of the tree branch.
So, how do other people triangulate their SVO?

There's an algorithm called the "Transvoxel Algorithm" you can use with marching cubes. I won't post the details here, but you can google it. It does some internal voxel tessellation. I have my own tessellation algorithm which is somewhat simplified in that it has far fewer cases; however, both of these only allow for a single level of resolution change at a time.
Your best bet may be to not use MC at all and go with surface nets. The main downside is that they can generate non-manifold geometry (if that's something you care about). There are several other variations, such as "dual contouring", that you might want to look into as well. Dual contouring allows for sharp corners but requires Hermite data. I believe there are also manifold versions of dual contouring and/or surface nets, at the cost of some added complexity.
In any case, all of this will work with a voxel octree, but it does require some work.
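For what it's worth, here is a minimal sketch of naive surface nets on a uniform grid, as a starting point before adapting the idea to an octree (that adaptation is the part that takes work). Vec3, DensityFn, the cell-centre vertex placement and the skipped boundary edges are all illustrative simplifications, not a finished implementation:

#include <array>
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

// density(i, j, k) < 0 means "inside"; the sign convention is arbitrary.
using DensityFn = std::function<float(int, int, int)>;

struct NetsMesh {
    std::vector<Vec3> vertices;
    std::vector<std::array<uint32_t, 4>> quads;  // one quad per sign-changing grid edge
};

NetsMesh naiveSurfaceNets(int nx, int ny, int nz, const DensityFn& density)
{
    NetsMesh mesh;
    std::map<std::array<int, 3>, uint32_t> cellVertex;  // cell (i,j,k) -> vertex index

    auto inside = [&](int i, int j, int k) { return density(i, j, k) < 0.0f; };

    // 1) Place one vertex in every cell whose 8 corners are not all on the same
    //    side. The cell centre is used here for brevity; averaging the edge
    //    crossings (or solving a QEF, as in dual contouring) gives better shapes.
    for (int k = 0; k < nz - 1; ++k)
        for (int j = 0; j < ny - 1; ++j)
            for (int i = 0; i < nx - 1; ++i) {
                int in = 0;
                for (int c = 0; c < 8; ++c)
                    in += inside(i + (c & 1), j + ((c >> 1) & 1), k + ((c >> 2) & 1)) ? 1 : 0;
                if (in == 0 || in == 8) continue;  // no surface in this cell
                cellVertex[{i, j, k}] = (uint32_t)mesh.vertices.size();
                mesh.vertices.push_back({i + 0.5f, j + 0.5f, k + 0.5f});
            }

    // Connect the four cells around a sign-changing grid edge with one quad.
    auto emitQuad = [&](std::array<int, 3> a, std::array<int, 3> b,
                        std::array<int, 3> c, std::array<int, 3> d) {
        auto ia = cellVertex.find(a), ib = cellVertex.find(b),
             ic = cellVertex.find(c), id = cellVertex.find(d);
        if (ia == cellVertex.end() || ib == cellVertex.end() ||
            ic == cellVertex.end() || id == cellVertex.end()) return;
        mesh.quads.push_back({ia->second, ib->second, ic->second, id->second});
    };

    // 2) Walk the interior grid edges (boundary edges skipped for brevity).
    //    Quad winding is not made consistent with the surface orientation here.
    for (int k = 1; k < nz - 1; ++k)
        for (int j = 1; j < ny - 1; ++j)
            for (int i = 1; i < nx - 1; ++i) {
                bool s = inside(i, j, k);
                if (s != inside(i + 1, j, k))  // edge along +x
                    emitQuad({i, j - 1, k - 1}, {i, j, k - 1}, {i, j, k}, {i, j - 1, k});
                if (s != inside(i, j + 1, k))  // edge along +y
                    emitQuad({i - 1, j, k - 1}, {i, j, k - 1}, {i, j, k}, {i - 1, j, k});
                if (s != inside(i, j, k + 1))  // edge along +z
                    emitQuad({i - 1, j - 1, k}, {i, j - 1, k}, {i, j, k}, {i - 1, j, k});
            }

    return mesh;
}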

Related

Mesh simplification of a grid-like structure

I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of Google Images:
Ignore the image to the right, the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting it, though, I'd like to simplify it. So in the above cube, I'd like the up-down-left-right-back-front faces to be composed of a single quad each (two triangles), and the edges would be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create single polygon, then split polygon to avoid holes, use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitation is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing half-edge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to just leave the quads whole, rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the half-edge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse through normal deviation (in my case, if the resulting faces from the collapse had normals not equal to their original values, then the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
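For reference, the OpenMesh route can be as short as the following hedged sketch (quadric error as the collapse priority plus a normal-deviation constraint, which approximates the normal-equality test described above). Header paths and module names follow the OpenMesh documentation as I recall it, so verify them against your installed version:

#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>
#include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>

using Mesh = OpenMesh::TriMesh_ArrayKernelT<>;

void decimatePreservingNormals(Mesh& mesh)
{
    mesh.request_face_normals();
    mesh.update_face_normals();

    OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);

    // Quadric error provides the collapse priority.
    OpenMesh::Decimater::ModQuadricT<Mesh>::Handle hQuadric;
    decimater.add(hQuadric);

    // Reject any collapse that rotates a face normal by more than ~1 degree,
    // which keeps the sharp, axis-aligned look of the block geometry.
    OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle hNormal;
    decimater.add(hNormal);
    decimater.module(hNormal).set_normal_deviation(1.0f);

    decimater.initialize();
    decimater.decimate();        // collapse until no admissible collapse remains
    mesh.garbage_collection();   // actually remove the collapsed elements
}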
There's still one issue I have yet to resolve: if there are two cubes diagonal to one another and touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking the number of connected faces, and then resolve it by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the left figure, find all edge-sharing triangles with the same normal (the same x, y, z components; use unit normals, since uniform positive scaling of a vector does not change its direction). Merge them, then re-triangulate the merged region, maximizing the triangles' aspect ratio; that will give the solution you want.
Here is another easy and possible way to do the mesh simplification.
Take the NORMALS and divide each by its magnitude (the square root of the sum of the squared components); this gives unit normal vectors. Then take adjacent triangles and compute the DOT PRODUCT of their normals (multiply the x, y, z components pairwise and add them up). This gives the COSINE of the angle between the normals, i.e. between the triangles. Pick a range (say 0.99-1), take all adjacent triangles whose cosine with respect to the reference triangle lies in this range, merge them and re-triangulate. We can certainly ignore some small-area triangles pointing in odd directions.
There is also another proposal for an even simpler mesh reduction, as in your left figure or the building figures: define a fixed set of face directions (here 6 + 8 = 14 normals), classify all faces by the direction they are closest to (by dot product), then merge and re-triangulate. A small sketch of the dot-product test both proposals rely on follows below.
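Here is a small helper sketch of that coplanarity test: unit face normals compared with a dot product against a cosine threshold. Vec3 and the 0.99 threshold are just stand-ins for whatever vector type and tolerance you already use:

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

Vec3 unitNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 n = cross(sub(p1, p0), sub(p2, p0));
    float len = std::sqrt(dot(n, n));
    return { n.x/len, n.y/len, n.z/len };
}

// Two adjacent triangles are candidates for merging if their unit normals are
// (nearly) parallel, i.e. the cosine of the angle between them is close to 1.
bool mergeable(const Vec3& n0, const Vec3& n1, float cosThreshold = 0.99f) {
    return dot(n0, n1) >= cosThreshold;
}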
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g. CGAL? GTS doesn't appear to be active any longer, but there are others.)

Ray-mesh intersection or AABB tree implementation in C++ with little overhead?

Can you recommend...
either a proven lightweight C / C++ implementation of an AABB tree?
or, alternatively, another efficient data structure, plus a lightweight C / C++ implementation, to solve the problem of intersecting a large number of rays with a large number of triangles?
"Large number" means several 100k for both rays and triangles.
I am aware that AABB trees are part of the CGAL library and probably of game physics libraries like Bullet. However, I don't want the overhead of an enormous additional library in my project. Ideally, I'd like a small, float-type-templated, header-only implementation. I would also go for something with a bunch of CPP files, as long as it integrates easily into my project. A dependency on Boost is OK.
Yes, I have googled, but without success.
I should mention that my application context is mesh processing, and not rendering. In a nutshell, I'm transferring the topology of a reference mesh to the geometry of a mesh from a 3D scan. I'm shooting rays from vertices and along the normals of the reference mesh towards the 3D scan, and I need to recover the intersection of these rays with the scan.
Edit
Several answers / comments pointed to nearest-neighbor data structures. I have created a small illustration of the problems that arise when ray-mesh intersection is approached with nearest-neighbor methods. Nearest-neighbor methods can be used as heuristics that work in many cases, but I'm not convinced that they actually solve the problem systematically, the way AABB trees do.
While this code is a bit old and uses the 3DS Max SDK, it gives a fairly good tree system for object-object collision deformations in C++. I can't tell at a glance whether it is a quadtree, AABB tree, or even OBB tree (the comments are a bit skimpy too).
http://www.max3dstuff.com/max4/objectDeform/help.html
It will require translation from Max to your own system but it may be worth the effort.
Try the ANN library:
http://www.cs.umd.edu/~mount/ANN/
It's "Approximate Nearest Neighbors". I know, you're looking for something slightly different, but here's how you can use this to speed up your data processing:
Feed points into ANN.
Query a user-selectable (think of this as a "per-mesh knob") radius around each vertex that you want to ray-cast from and find out the mesh vertices that are within range.
Select only the triangles that are within that range, and ray trace along the normal to find the one you want.
By judiciously choosing the search radius, you will definitely get a sizable speed-up without compromising on accuracy.
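To make that concrete, here is a hedged sketch of the fixed-radius query with ANN; the calls (annAllocPts, ANNkd_tree, annkFRSearch) follow the ANN manual, but double-check them against your ANN version. scanVertices and searchRadius are illustrative names, and in practice you would build the kd-tree once and reuse it for all ray origins instead of per query as shown here:

#include <ANN/ANN.h>
#include <algorithm>
#include <array>
#include <vector>

std::vector<int> verticesNearPoint(const std::vector<std::array<double, 3>>& scanVertices,
                                   const std::array<double, 3>& rayOrigin,
                                   double searchRadius, int maxNeighbours = 64)
{
    const int dim = 3;
    const int n = (int)scanVertices.size();

    ANNpointArray pts = annAllocPts(n, dim);
    for (int i = 0; i < n; ++i)
        for (int d = 0; d < dim; ++d)
            pts[i][d] = scanVertices[i][d];

    ANNkd_tree tree(pts, n, dim);

    ANNpoint query = annAllocPt(dim);
    for (int d = 0; d < dim; ++d) query[d] = rayOrigin[d];

    std::vector<ANNidx> idx(maxNeighbours);
    std::vector<ANNdist> dist(maxNeighbours);

    // The fixed-radius search takes the *squared* radius and reports how many
    // points lie within range (possibly more than maxNeighbours).
    int found = tree.annkFRSearch(query, searchRadius * searchRadius,
                                  maxNeighbours, idx.data(), dist.data());

    std::vector<int> result(idx.begin(), idx.begin() + std::min(found, maxNeighbours));

    annDeallocPt(query);
    annDeallocPts(pts);
    annClose();
    return result;
}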
If there are no real-time requirements, I'd first try brute force.
1M * 1M ray->triangle tests shouldn't take much more than a few minutes to run (on the CPU).
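Brute force only needs a ray-triangle test; a standard Möller-Trumbore style routine looks roughly like this (illustrative sketch, not tuned):

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the distance t along the ray if it hits the triangle (v0, v1, v2).
std::optional<double> intersect(const Vec3& orig, const Vec3& dir,
                                const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    const double eps = 1e-9;
    Vec3 e1 = v1 - v0, e2 = v2 - v0;
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to triangle plane
    double invDet = 1.0 / det;
    Vec3 t = orig - v0;
    double u = dot(t, p) * invDet;
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double dist = dot(e2, q) * invDet;
    if (dist < eps) return std::nullopt;             // intersection behind the origin
    return dist;
}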
If that's a problem, the second best thing to do would be to restrict the search area by calculating an adjacency graph/relation between the triangles/polygons in the target mesh. After an initial guess fails, one can try the adjacent triangles. This of course relies on the absence of self-occlusion / multiple hit points (which I think is one interpretation of "visibility doesn't apply to this problem").
Also, depending on how pathological the topologies are, one could try environment-mapping the target mesh onto a unit cube (each pixel would consist of a list of the triangles projected onto it) and test the initial candidate with a single ray->AABB test + lookup.
Given the feedback, there's one more simple option to consider -- space partitioning into a simple 3D grid, where each dimension can be subdivided according to the histogram of the x/y/z locations, or even regularly.
A 100x100x100 grid has a very manageable size of 1e6 entries.
The maximum number of cells a ray has to visit is proportional to the grid diameter (max ~300).
There are ~60000 surface cells, which suggests on the order of 10 triangles per cell.
Caveat: triangles must be placed in every cell they occupy -- a conservative algorithm may also place them in cells they don't strictly belong to, and large triangles will probably require clipping and reassembly. A sketch of the binning step follows below.
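A sketch of that binning step (all names hypothetical): each triangle is inserted into every cell its bounding box overlaps, which is exactly the conservative placement mentioned above; a ray then only tests triangles in the cells it traverses (e.g. via a 3D DDA walk, not shown):

#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

struct TriangleGrid {
    std::array<double, 3> origin;                // minimum corner of the grid
    double cellSize;                             // uniform cell edge length
    int nx, ny, nz;                              // cell counts per axis
    std::vector<std::vector<int>> cells;         // triangle indices per cell

    void init() { cells.assign((size_t)nx * ny * nz, {}); }

    int clampIndex(double p, double o, int n) const {
        int i = (int)((p - o) / cellSize);
        return std::min(std::max(i, 0), n - 1);
    }
    size_t flat(int i, int j, int k) const { return ((size_t)k * ny + j) * nx + i; }

    // Insert triangle 'tri' given its axis-aligned bounding box [lo, hi].
    void insert(int tri, const std::array<double, 3>& lo, const std::array<double, 3>& hi) {
        int i0 = clampIndex(lo[0], origin[0], nx), i1 = clampIndex(hi[0], origin[0], nx);
        int j0 = clampIndex(lo[1], origin[1], ny), j1 = clampIndex(hi[1], origin[1], ny);
        int k0 = clampIndex(lo[2], origin[2], nz), k1 = clampIndex(hi[2], origin[2], nz);
        for (int k = k0; k <= k1; ++k)
            for (int j = j0; j <= j1; ++j)
                for (int i = i0; i <= i1; ++i)
                    cells[flat(i, j, k)].push_back(tri);
    }
};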

C++ 2D tessellation library?

I've got some convex polygons stored as an STL vector of points (more or less). I want to tessellate them really quickly, preferably into fairly evenly sized pieces, and with no "slivers".
I'm going to use it to explode some objects into little pieces. Does anyone know of a nice library to tessellate polygons (partition them into a mesh of smaller convex polygons or triangles)?
I've looked at a few I've found online already, but I can't even get them to compile. These academic types don't give much regard to ease of use.
CGAL has packages to solve this problem. The best would probably be to use the 2D Polygon Partitioning package. For example, you could generate a y-monotone partition of a polygon (it works for non-convex polygons as well) and you would get something like this:
The running time is O(n log n).
In terms of ease of use, here is a small example that generates a random polygon and partitions it (based on this manual example):
// Includes as in the CGAL manual examples for these packages.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Partition_traits_2.h>
#include <CGAL/partition_2.h>
#include <CGAL/function_objects.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/random_polygon_2.h>
#include <list>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Partition_traits_2<K>                         Traits;
typedef Traits::Point_2                                     Point_2;
typedef Traits::Polygon_2                                   Polygon_2;
typedef std::list<Polygon_2>                                Polygon_list;
typedef CGAL::Creator_uniform_2<int, Point_2>               Creator;
typedef CGAL::Random_points_in_square_2<Point_2, Creator>   Point_generator;

int main()
{
    Polygon_2 polygon;
    Polygon_list partition_polys;

    CGAL::random_polygon_2(50, std::back_inserter(polygon),
                           Point_generator(100));
    CGAL::y_monotone_partition_2(polygon.vertices_begin(),
                                 polygon.vertices_end(),
                                 std::back_inserter(partition_polys));

    // at this point partition_polys contains the partition of the input polygon
    return 0;
}
To install CGAL: if you are on Windows, you can use the installer to get the precompiled library, and there are installation guides for every platform on this page. It might not be the simplest library to install, but you get the most used and most robust computational geometry library out there, and the CGAL mailing list is very helpful for answering questions...
poly2tri looks like a really nice lightweight C++ library for 2D Delaunay triangulation.
As balint.miklos mentioned in a comment above, Shewchuk's Triangle package is quite good. I have used it myself many times; it integrates nicely into projects, and there is the triangle++ C++ interface. If you want to avoid slivers, then allow Triangle to add (interior) Steiner points, so that you generate a quality mesh (usually a constrained conforming Delaunay triangulation).
If you don't want to build the whole of CGAL into your app, this is probably simpler to implement:
http://www.flipcode.com/archives/Efficient_Polygon_Triangulation.shtml
I've just begun looking into this same problem and I'm considering Voronoi tessellation. The original polygon will get a scattering of semi-random points that will be the centers of the Voronoi cells; the more evenly distributed they are, the more regularly sized the cells will be, but they shouldn't be in a perfect grid, otherwise the interior polygons will all look the same. So the first thing is to be able to generate those cell center points: generating them over the bounding box of the source polygon plus an interior/exterior test shouldn't be too hard.
The Voronoi edges are the dotted lines in this picture, and are sort of the complement of the Delaunay triangulation. All the sharp triangle points become blunted:
Boost has some voronoi functionality:
http://www.boost.org/doc/libs/1_55_0/libs/polygon/doc/voronoi_basic_tutorial.htm
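If Boost is already a dependency, the basic call is short. Here is a minimal hedged sketch based on that tutorial (integer input sites; walking the diagram's cells/edges and clipping the infinite edges against the source polygon are covered in the tutorial itself):

#include <boost/polygon/point_data.hpp>
#include <boost/polygon/voronoi.hpp>
#include <cstddef>
#include <vector>

int main()
{
    // One Voronoi cell will be produced per input site.
    std::vector<boost::polygon::point_data<int>> sites;
    sites.push_back(boost::polygon::point_data<int>(0, 0));
    sites.push_back(boost::polygon::point_data<int>(100, 0));
    sites.push_back(boost::polygon::point_data<int>(100, 100));
    sites.push_back(boost::polygon::point_data<int>(0, 100));
    sites.push_back(boost::polygon::point_data<int>(40, 60));

    boost::polygon::voronoi_diagram<double> vd;
    boost::polygon::construct_voronoi(sites.begin(), sites.end(), &vd);

    std::size_t cellCount = vd.num_cells();
    return cellCount == sites.size() ? 0 : 1;
}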
The next step is creating the Voronoi polygons. Voro++ (http://math.lbl.gov/voro++/) is 3D-oriented, but it is suggested elsewhere that an approximately-2D setup will work, just much more slowly than software oriented towards 2D Voronoi diagrams. The other package that looks to be a lot better than a random academic-homepage orphan project is https://github.com/aewallin/openvoronoi.
It looks like OpenCV used to support something along these lines, but it has been deprecated (though the C API still works?). cv::distTransform is still maintained, but it operates on pixels and generates pixel output, not vertex and edge polygon data structures; it may be sufficient for my needs, if not yours.
I'll update this once I've learned more.
A bit more detail on your desired input and output might be helpful.
For example, if you're just trying to get the polygons into triangles, a triangle fan would probably work. If you're trying to cut a polygon into little pieces, you could implement some kind of marching squares.
Okay, I made a bad assumption - I assumed that marching squares would be more similar to marching cubes. Turns out it's quite different, and not what I meant at all.. :|
In any case, to directly answer your question, I don't know of any simple library that does what you're looking for. I agree about the usability of CGAL.
The algorithm I was thinking of was basically splitting polygons with lines, where the lines form a grid, so you mostly get quads. If you had a polygon-line intersection routine, the implementation would be simple. Another way to pose this problem is to treat the 2D polygon like a function and overlay a grid of points. Then you do something similar to marching cubes: if all 4 points are in the polygon, make a quad; if 3 are in, make a triangle; if 2 are in, make a rectangle; and so on. Probably overkill. If you wanted slightly irregular-looking polygons, you could randomize the locations of the grid points.
On the other hand, you could do a Catmull-Clark style subdivision, but omit the smoothing. The algorithm is basically: add a point at the centroid and at the midpoint of each edge; then, for each corner of the original polygon, make a new smaller polygon that connects the edge midpoint previous to the corner, the corner, the next edge midpoint, and the centroid. This tiles the space and will have angles similar to your input polygon; see the sketch below.
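Here is a rough sketch of that corner-quad split for a convex polygon; Point, midpoint and cornerQuads are illustrative names, not from any library:

#include <array>
#include <vector>

struct Point { double x, y; };

static Point midpoint(const Point& a, const Point& b) { return {(a.x + b.x) / 2, (a.y + b.y) / 2}; }

// For each corner, emit the quad (previous edge midpoint, corner, next edge midpoint, centroid).
std::vector<std::array<Point, 4>> cornerQuads(const std::vector<Point>& poly)
{
    Point centroid{0, 0};
    for (const Point& p : poly) { centroid.x += p.x; centroid.y += p.y; }
    centroid.x /= poly.size();
    centroid.y /= poly.size();

    std::vector<std::array<Point, 4>> quads;
    int n = (int)poly.size();
    for (int i = 0; i < n; ++i) {
        const Point& prev = poly[(i + n - 1) % n];
        const Point& cur  = poly[i];
        const Point& next = poly[(i + 1) % n];
        quads.push_back({midpoint(prev, cur), cur, midpoint(cur, next), centroid});
    }
    return quads;
}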
So, lots of options, and I like brainstorming solutions, but I still have no idea what you're planning on using this for. Is this to create destructible meshes? Are you doing some kind of mesh processing that requires smaller elements? Trying to avoid Gouraud shading artifacts? Is this something that runs as a pre-process or realtime? How important is exactness? More information would result in better suggestions.
If you have convex polygons, and you're not too hung up on quality, then this is really simple - just do ear clipping. Don't worry, it's not O(n^2) for convex polygons. If you do this naively (i.e., you clip the ears as you find them), then you'll get a triangle fan, which is a bit of a drag if you're trying to avoid slivers. Two trivial heuristics that can improve the triangulation are to
Sort the ears, or if that's too slow
Choose an ear at random.
If you want a more robust triangulator based on ear clipping, check out FIST.
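For the naive convex case mentioned above, the triangle fan really is this small (illustrative sketch; the indices refer to the input polygon's vertices, and slivers are exactly why you might prefer the heuristics above):

#include <array>
#include <vector>

struct Point { double x, y; };

std::vector<std::array<int, 3>> fanTriangulate(const std::vector<Point>& poly)
{
    std::vector<std::array<int, 3>> tris;
    for (int i = 1; i + 1 < (int)poly.size(); ++i)
        tris.push_back({0, i, i + 1});   // every triangle shares vertex 0
    return tris;
}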

Implementing Marching Cube Algorithm?

From My last question: Marching Cube Question
However, I am still unclear on:
how to create an imaginary cube/voxel to check if a vertex is below the isosurface?
how do I know which vertex is below the isosurface?
how does each cube/voxel determine which cube index / surface configuration to use?
how do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL (C is a little bit beyond me).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
#include <cmath>

// Vector3 stands for whatever 3-component vector type you already use.
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point in your scalar field to the origin. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere and on its surface. Just figure out which vertices are inside the sphere and which ones are outside. You want to use the less-than or greater-than operators, because the volume divides space in two. When you know which points of your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles. The positions of the mesh vertices are computed by interpolating along the intersected edges to find the actual intersection points.
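To make the last step concrete, here is a small sketch of the interpolation along one intersected edge; it reuses the hypothetical Vector3 type from the snippet above and assumes it provides the usual +, - and * operators:

// p0/p1 are the edge's corner positions, s0/s1 their scalar values; a sign
// change across the edge guarantees s0 != s1.
Vector3<float> interpolateCrossing(const Vector3<float>& p0, const Vector3<float>& p1,
                                   float s0, float s1, float isovalue)
{
    float t = (isovalue - s0) / (s1 - s0);
    return p0 + (p1 - p0) * t;
}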
If you want to represent say an apple with scalar fields, you would either need to get the source data set to plug in to your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expand from there.
1) It depends on your implementation. You'll need a data structure where you can look up the value at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations for this, but in general, start with the first corner and just check the values of all 8 corners of the cube.
3) Most (fast) implementations create a bitmask to use as an index into a static lookup table of cases. There are only so many possible configurations; see the sketch after this list.
4) Once you've built the triangles from triTable, you can use OpenGL to render them.
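Here is a sketch of how steps 2)-4) usually fit together: classify the 8 corners into a bitmask and use it to index the standard edgeTable/triTable from the Paul Bourke article linked further down (the tables themselves are omitted here; take them from that page):

extern const int edgeTable[256];     // 12-bit mask of intersected cube edges
extern const int triTable[256][16];  // up to 5 triangles per case, terminated by -1

// corner[0..7]: scalar values at the cube corners, in the table's corner order.
int computeCubeIndex(const float corner[8], float isolevel)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < isolevel)
            cubeIndex |= (1 << i);   // bit i set when corner i is "inside"
    return cubeIndex;
}

// Usage sketch:
//   int ci = computeCubeIndex(values, iso);
//   if (edgeTable[ci] == 0) { /* cube entirely inside or outside, emit nothing */ }
//   for (int t = 0; triTable[ci][t] != -1; t += 3) {
//       /* emit one triangle from the interpolated vertices on edges
//          triTable[ci][t], triTable[ci][t+1], triTable[ci][t+2] */
//   }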
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to put the point cloud into a cubic volume. Gaussian splatting is an option here.
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and is fully open sourced.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/

robust and flexible 3d data structure

I'm looking for a concept to store faces, edges and vertices (in 3-dimensional Euclidean space) such that
information (about relations) isn't duplicated
queries for adjacent and neighboring faces/edges/vertices are fast
the mesh is not limited to connected faces of the same winding
definitions
neighbor of a face: the face that shares an edge with this face
neighbor of a vertex: the vertex that is on the other end of an edge sharing that vertex
adjacent edge: an edge that shares an endpoint vertex with this edge
I have considered the Half-Edge data structure, but queries on it only really work when all connected faces have the same winding.
For instance, consider this pseudo code to access such related entities:
face.neighbors #the neighboring faces
face.edges #the edges shared by this face (in the right winding order)
face.verts #the vertices of that face (in the right winding order)
edge.v1, edge.v2 #the two vertices making up an edge
vertex.edges #the edges this vertex shares
vertex.neighbors # the neighbors of this vertex along each shared edge
I would take a look at CGAL, the Computational Geometry Algorithms Library. You might be able to use something directly, or at least get some good ideas. Half-edge sounds like a good idea; in my opinion, you should enforce uniform winding as much as possible. It sounds like you're not interested in that, however.
One idea could be to retain a pair (one element for each winding) in the data structure for a face; this might give you enough flexibility to implement some kind of half-edge-like data structure on top of it, with good efficiency.
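As an illustration (not a full library), a minimal half-edge record that answers the queries listed in the question in O(degree) time could look like the following; the "pair per winding" idea then amounts to keeping two such records per undirected edge even where windings disagree:

#include <vector>

struct HalfEdgeMesh {
    struct HalfEdge {
        int twin;    // opposite half-edge (-1 on a boundary)
        int next;    // next half-edge around the same face
        int origin;  // index of the vertex this half-edge starts at
        int face;    // index of the face on its left (-1 on a boundary)
    };
    struct Vertex { double x, y, z; int halfEdge; };  // one outgoing half-edge
    struct Face   { int halfEdge; };                  // one half-edge of its loop

    std::vector<Vertex>   vertices;
    std::vector<Face>     faces;
    std::vector<HalfEdge> halfEdges;

    // Example query: all faces sharing an edge with face f (face.neighbors above).
    std::vector<int> faceNeighbors(int f) const {
        std::vector<int> out;
        int start = faces[f].halfEdge, h = start;
        do {
            int t = halfEdges[h].twin;
            if (t != -1 && halfEdges[t].face != -1) out.push_back(halfEdges[t].face);
            h = halfEdges[h].next;
        } while (h != start);
        return out;
    }
};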
I suggest you also take a look at OpenMesh. It is also heavy on the C++ template side like CGAL.
If you don't want to be limited to manifold surfaces, winged-edge is it these days. If "data about relation" is really your top priority then I guess you're out of luck.
I found an abstract about a data structure for 3D meshes: http://wscg.zcu.cz/wscg2006/Papers_2006/Short/E17-full.pdf
Not so much a recommended structure, but rather a 3D library used by a pro for computational tasks, which can be built upon.
Michael Garland, a research scientist at NVIDIA Research, has released the graphics library he uses for computational tasks on 3D models and meshes:
libgfx
qslim, a mesh simplification app based on libgfx (I have personally used it and ported qvis to MacOSX)