I need to create a triangle mesh from a set of points. The set has very few points, so it doesn't need to be fast or optimised (I will deal with 100 points maximum). The mesh needs to be a constrained Delaunay triangulation. In the image below I showed (on the left) the set of points I start from (blue and red dots). I also know the connections between these points (the outline in black). The mesh needs to look like the example on the right (including the edges in grey that form the outer and inner triangles).
I can't use libraries.
I looked at many different algorithms. There are many of them and it's easy to get confused. I would like to know if there is a naive, and thus hopefully simpler, algorithm I can use to produce the mesh on the right? A brute-force approach is fine (PS: I can already do a plain Delaunay triangulation).
Thanks for all the answers.
I went through the process of developing a solution to this problem myself, so I thought I would share my experience, hoping people facing the same problem will find the insight useful.
From my own experience implementing an algorithm, I came to the following conclusions:
There is no really quick way to solve this problem. It is not reasonable to think it can be achieved in just 50 lines of code. In fact the routine that I wrote (C++) is about 400 to 500 lines (hard to tell with comments). So it is reasonably compact, yet challenging, and it took me 2 to 3 days to get the logic right (it can be tricky).
I found the algorithm proposed by Sloan in "A Fast Algorithm for Generating Constrained Delaunay Triangulations" to be perfectly well suited to the problem at hand. The reality of Delaunay triangulation, which was a new subject for me, is that there are a lot of different algorithmic approaches, and the research is pretty old, so for a newcomer it's really hard to know where to start.
2.1. It's hard to know which algorithms are recent, easy to understand, and fast and simple to implement.
2.2. Generally, once you have understood the principle, it's mostly a matter of coding the logic in the most efficient way (and that seems to be what most of the algorithms/papers are competing over).
2.3. I found Sloan's paper understandable and very well explained. If you follow the logic and the instructions, anyone can implement a constrained Delaunay triangulation.
So in conclusion:
I recommend the Sloan paper because it explains how to create a Delaunay triangulation first, and then how to constrain it if necessary.
To answer my own question: there is no real brute-force shortcut for this problem. Implementing this technique simply requires implementing the full logic, and most implementations will require more or less the same amount of work.
With the nuance that I wasn't looking for heavy optimisation, because my point sets are really small. So I am sure many algorithms are better than the one described by Sloan; they probably propose optimised data structures and algorithms designed to minimise steps such as point insertion into the triangulation, etc.
So anyway, Sloan worked. A small image to illustrate the answer and make it more attractive ;-)
EDIT
This is production code, so alas I can't share it... it could get me fired. The process is very simple, though. You look for the intersections between a segment (your constraint) and all edges in the model. Then, for each intersected edge, you swap the diagonal between the 2 triangles that this edge belongs to. If the new diagonal still intersects the segment, add it back onto the stack of intersected edges for this segment. If the new diagonal doesn't intersect the segment, add it to the stack of newly created edges. Keep processing the stack of intersected edges until it's empty.
Once this is finished, you need to process the list of newly created edges. For each one of them, check that the Delaunay criterion is respected; if not, swap the diagonal of the two triangles this edge belongs to. Simple...
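To make that concrete, here is a minimal sketch of the predicate the whole process hinges on: deciding whether a constraint segment properly crosses a triangulation edge (the names are mine, not from my production code):

struct Vec2 { double x, y; };

// Twice the signed area of triangle (a, b, c):
// > 0 counter-clockwise, < 0 clockwise, 0 collinear.
double orient2d(const Vec2 &a, const Vec2 &b, const Vec2 &c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segments (p, q) and (a, b) cross at a single interior point.
// Shared endpoints deliberately don't count as crossings: a constraint
// that ends on an edge's vertex must not trigger a diagonal swap.
bool segmentsCross(const Vec2 &p, const Vec2 &q, const Vec2 &a, const Vec2 &b)
{
    double d1 = orient2d(p, q, a);
    double d2 = orient2d(p, q, b);
    double d3 = orient2d(a, b, p);
    double d4 = orient2d(a, b, q);
    return d1 * d2 < 0.0 && d3 * d4 < 0.0;
}

With that predicate, the stack processing is exactly as described: collect every edge that crosses the constraint, pop one, swap the diagonal of the two triangles sharing it, and push the new diagonal back if it still crosses.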
This is just the paper ...
Point Set
26.9375 10.6875
32.75 9.96875
31.375 4.875
27.6562 2.0625
23.9375 -0.75
18.1562 -0.75
10.875 -0.75
6.60938 3.73438
2.34375 8.21875
2.34375 16.3125
2.34375 24.6875
6.65627 29.3125
10.9688 33.9375
17.8438 33.9375
24.5 33.9375
28.7188 29.4062
32.9375 24.875
32.9375 16.6562
32.9375 16.1562
32.9062 15.1562
8.15625 15.1562
8.46875 9.6875
11.25 6.78125
14.0312 3.875
18.1875 3.875
21.2812 3.875
23.4687 5.5
25.6562 7.125
8.46875 19.7812
27 19.7812
26.625 23.9688
24.875 26.0625
22.1875 29.3125
17.9062 29.3125
14.0312 29.3125
11.3906 26.7188
8.75 24.125
These are x/y coordinates (z = 0 for every point)
Segments:
0 1
1 3
3 5
5 7
7 9
9 11
11 13
13 15
15 17
17 19
19 20
20 22
22 24
24 26
26 0
28 29
29 31
31 33
33 35
35 28
Indices start at 0 (0 -> first vertex in vertex list)
I tried alpha shapes (https://concavehull.codeplex.com/) with good results for a few shapes, but it's nowhere near the original constrained Delaunay triangulation.
Here is my alpha-shape algorithm: https://alphashape.codeplex.com.
A simple approach seems to be to implement an ear-clipping algorithm, without optimisations such as hash grids or quad trees. For ear clipping you just check every three consecutive vertices a, b, and c. If b is convex and no other vertex of the polygon lies inside the triangle abc, then you can clip this triangle, reducing the boundary of the polygon by one vertex, b.
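To make the ear test concrete, here is a minimal sketch, assuming a counter-clockwise polygon (all names are mine):

#include <vector>

struct Pt { double x, y; };

// Twice the signed area of (a, b, c); > 0 means a counter-clockwise turn.
double turn(const Pt &a, const Pt &b, const Pt &c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Is p strictly inside the counter-clockwise triangle (a, b, c)?
bool inTriangle(const Pt &p, const Pt &a, const Pt &b, const Pt &c)
{
    return turn(a, b, p) > 0 && turn(b, c, p) > 0 && turn(c, a, p) > 0;
}

// Is vertex i an ear of the counter-clockwise polygon poly?
bool isEar(const std::vector<Pt> &poly, std::size_t i)
{
    std::size_t n = poly.size();
    std::size_t prev = (i + n - 1) % n, next = (i + 1) % n;
    const Pt &a = poly[prev], &b = poly[i], &c = poly[next];
    if (turn(a, b, c) <= 0)
        return false;                       // b is not convex
    for (std::size_t j = 0; j < n; ++j) {   // no other vertex inside abc
        if (j != prev && j != i && j != next && inTriangle(poly[j], a, b, c))
            return false;
    }
    return true;
}

Clipping then just removes poly[i] and emits the triangle (a, b, c).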
Additionally, you have to store the neighbourhood relations: reference from each triangle its (at most three) neighbours.
When the triangulation is finished, you convert it to the constrained Delaunay triangulation (CDT). This can be done by edge flipping: check the circumcircle of every triangle. If no vertex of a neighbouring triangle lies inside it, the triangle conforms to the CDT; otherwise, flip the edge of the triangle where the violation occurs.
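The circumcircle check is the standard in-circle determinant; a sketch, reusing Pt from the ear-clipping sketch above (positive means "inside" when the triangle is counter-clockwise):

// True if d lies strictly inside the circumcircle of the
// counter-clockwise triangle (a, b, c).
bool inCircumcircle(const Pt &a, const Pt &b, const Pt &c, const Pt &d)
{
    double ax = a.x - d.x, ay = a.y - d.y;
    double bx = b.x - d.x, by = b.y - d.y;
    double cx = c.x - d.x, cy = c.y - d.y;
    return (ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx) > 0.0;
}

One caveat for the constrained case: edges that are part of the input outline must never be flipped, no matter what the circle test says.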
Edit, due to @Betterdev in the comments below: possible holes in the input polygon can be added to the initial boundary by adding a bridge. As a preprocessing step, one can connect a vertex of a hole to a vertex of the boundary by a "double" edge. This is always possible and makes each hole part of the main polygon boundary, and it works well with ear clipping. Storing the neighbours across these bridges is vital for the flipping, however.
I previously worked on a vector graphics package, so I can't tell you how many hours I've stared at that exact "e" graphic. I eventually settled on the earcut library for triangulating point data. It's extremely fast and much simpler than libraries such as libtess-2.
Related
I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off Google Images:
Ignore the image to the right; the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting, though, I'd like to simplify the mesh. So in the above cube, I'd like the up/down/left/right/back/front faces to be composed of a single quad each (two triangles), and the edges to be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, and use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitation is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks go to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I had initially resolved to leave the quads whole rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, deciding which edges to collapse by normal deviation (in my case, if the faces resulting from a collapse had normals no longer equal to their original values, the collapse was not performed).
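A sketch of that criterion, independent of any particular halfedge library (all names are mine): gather the faces touching the collapsing edge, compute their unit normals before and after the candidate collapse, and reject it if any normal deviates.

#include <array>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

V3 unitNormal(const V3 &a, const V3 &b, const V3 &c)
{
    V3 u = {b.x - a.x, b.y - a.y, b.z - a.z};
    V3 v = {c.x - a.x, c.y - a.y, c.z - a.z};
    V3 n = {u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}

// before[i] and after[i] hold the vertex triple of affected face i in
// the old and the candidate configuration, respectively.
bool collapseKeepsNormals(const std::vector<std::array<V3, 3>> &before,
                          const std::vector<std::array<V3, 3>> &after,
                          double cosTol = 0.9999)
{
    for (std::size_t i = 0; i < before.size(); ++i) {
        V3 n0 = unitNormal(before[i][0], before[i][1], before[i][2]);
        V3 n1 = unitNormal(after[i][0], after[i][1], after[i][2]);
        if (n0.x * n1.x + n0.y * n1.y + n0.z * n1.z < cosTol)
            return false;   // normal deviated: reject this collapse
    }
    return true;
}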
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left: find all edge-sharing triangles with the same normal (identical x, y, z components; normalize the normals to unit length first, so the comparison is unaffected by positive scaling of the vectors). Merge them. Then retriangulate the merged regions, maximizing the aspect ratio of the triangles; this will give the solution you want.
Now I am proposing another easy and feasible way to do mesh simplification.
Divide each NORMAL by its magnitude (the square root of the sum of the squared components) to get a unit normal vector. Then take the DOT PRODUCT between the unit normals of adjacent triangles (multiply the x, y, z components pairwise and add them up). This gives the COSINE of the angle between the normals, i.e. between the triangles. Pick a range (like 0.99 to 1), merge all adjacent triangles whose cosine with respect to the reference triangle lies in this range, and retriangulate. We can safely ignore some small-area triangles pointing in odd directions.
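In code, that check is only a few lines (a sketch with my own names):

#include <cmath>

struct N3 { double x, y, z; };

// Cosine of the angle between two face normals: divide each by its
// magnitude, then take the dot product.
double normalCosine(const N3 &a, const N3 &b)
{
    double la = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    double lb = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    return (a.x * b.x + a.y * b.y + a.z * b.z) / (la * lb);
}

Adjacent triangles whose normalCosine against the reference triangle falls in the chosen range (e.g. 0.99 to 1) are the merge candidates.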
There is also another proposal for a more aggressive mesh reduction, as in your left figure or the building figures: define a pre-defined number of face directions (here 6 + 8 = 14 normal values), classify all faces according to the direction they are closest to (by dot product), then merge and retriangulate.
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make about applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick-and-dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g. CGAL? GTS doesn't appear to be active any longer, but there are others.)
I have a wireless mesh network of nodes, each of which is capable of reporting its 'distance' to its neighbours, measured in (simplified) signal strength to them. The nodes are geographically in 3D space but, because of radio interference, the distances between nodes need not be geometrically consistent. I.e., given nodes A, B and C, the distance between A and B might be 10, between A and C also 10, yet between B and C 100.
What I want to do is visualize the logical network layout in terms of connectedness of nodes, i.e. include the logical distance between nodes in the visualization.
So far my research has shown that multidimensional scaling (MDS) is designed for exactly this sort of thing. Given that my data can be directly expressed as a 2D distance matrix, mine is even a simpler instance of the general MDS problem.
Now, there seem to be many MDS algorithms, see e.g. http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html and http://tapkee.lisitsyn.me/ . I need to do this in C++ and I'm hoping I can use a ready-made component, i.e. not have to re-implement an algorithm from a paper. So I thought this: https://sites.google.com/site/simpmatrix/ would be the ticket. And it works, but:
The layout is not stable, i.e. every time the algorithm is re-run, the position of the nodes changes (see differences between image 1 and 2 below - this is from having been run twice, without any further changes). This is due to the initialization matrix (which contains the initial location of each node, which the algorithm then iteratively corrects) that is passed to this algorithm - I pass an empty one and then the implementation derives a random one. In general, the layout does approach the layout I expected from the given input data. Furthermore, between different runs, the direction of nodes (clockwise or counterclockwise) can change. See image 3 below.
The 'solution' I thought was obvious, was to pass a stable default initialization matrix. But when I put all nodes initially in the same place, they're not moved at all; when I put them on one axis (node 0 at 0,0 ; node 1 at 1,0 ; node 2 at 2,0 etc.), they are moved along that axis only. (see image 4 below). The relative distances between them are OK, though.
So it seems like this algorithm only changes distance between nodes, but doesn't change their location.
Thanks for reading this far - my questions are (I'd be happy to get just one or a few of them answered as each of them might give me a clue as to what direction to continue in):
Where can I find more information on the properties of each of the many MDS algorithms?
Is there an algorithm that derives the complete location of each node in a network, without having to pass an initial position for each node?
Is there a solid way to estimate the location of each point so that the algorithm can then correctly scale the distance between them? I have no geographic location of each of these nodes, that is the whole point of this exercise.
Are there any algorithms to keep the 'angle' at which the network is derived constant between runs?
If all else fails, my next option is going to be to use the algorithm I mentioned above, increase the number of iterations to keep the variability between runs to around a few pixels (I'd have to experiment with how many iterations that would take), and then 'rotate' each node around node 0 to, for example, align nodes 0 and 1 on a horizontal line from left to right. That way, I would 'correct' the location of the points after their relative distances have been determined by the MDS algorithm. I would also have to correct for the order of connected nodes (clockwise or counterclockwise) around each node. This might become hairy quite quickly.
Obviously I'd prefer a stable algorithmic solution - increasing iterations to smooth out the randomness is not very reliable.
Thanks.
EDIT: I was referred to cs.stackexchange.com and some comments have been made there; for algorithmic suggestions, please see https://cs.stackexchange.com/questions/18439/stable-multi-dimensional-scaling-algorithm .
Image 1 - with random initialization matrix:
Image 2 - after running with same input data, rotated when compared to 1:
Image 3 - same as previous 2, but nodes 1-3 are in another direction:
Image 4 - with the initial layout of the nodes on one line, their position on the y axis isn't changed:
Most scaling algorithms effectively set "springs" between nodes, where the resting length of the spring is the desired length of the edge. They then attempt to minimize the energy of the system of springs. When you initialize all the nodes on top of each other though, the amount of energy released when any one node is moved is the same in every direction. So the gradient of energy with respect to each node's position is zero, so the algorithm leaves the node where it is. Similarly if you start them all in a straight line, the gradient is always along that line, so the nodes are only ever moved along it.
(That's a flawed explanation in many respects, but it works for an intuition)
Try initializing the nodes to lie on the unit circle, on a grid, or in any other fashion such that they aren't all collinear. Assuming the library algorithm's update scheme is deterministic, that should give you reproducible visualizations and avoid degenerate configurations.
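For instance, a minimal unit-circle initialization (my own sketch): n distinct points on a circle are never all collinear for n >= 3, and the layout is identical on every run.

#include <cmath>
#include <vector>

struct P2 { double x, y; };

std::vector<P2> circleInit(std::size_t n)
{
    const double pi = 3.14159265358979323846;
    std::vector<P2> pos(n);
    for (std::size_t i = 0; i < n; ++i) {
        double a = 2.0 * pi * i / n;        // evenly spaced angles
        pos[i] = { std::cos(a), std::sin(a) };
    }
    return pos;
}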
If the library is non-deterministic, either find another library which is deterministic, or open up the source code and replace the randomness generator with a PRNG initialized with a fixed seed. I'd recommend the former option though, as other, more advanced libraries should allow you to set edges you want to "ignore" too.
I have read the code of the "SimpleMatrix" MDS library and found that it uses a random permutation matrix to decide the order of points. After fixing the permutation order (just use srand(12345) instead of srand(time(0))), the result for the same data is unchanged.
Obviously there's no exact solution to this problem in general: with just 4 nodes A, B, C, D and distances AB = BC = AC = AD = BD = 1, CD = 10, you cannot draw a suitable 2D diagram (and not even a 3D one).
What those algorithms do is simply place springs between the nodes and then simulate repulsion/attraction (depending on whether the spring is shorter or longer than the prescribed distance), probably also adding spatial friction to avoid resonance and explosion.
To keep a "stable" diagram, build a solution once and then only update the distances, re-using the current positions from the previous solution as the starting point. Picking two fixed nodes and aligning them seems a good idea to prevent slow drift, but I'd say that spring forces never create rotational momentum, so I'd expect that just scaling and centering the solution should be enough anyway.
Can you recommend...
either a proven lightweight C / C++ implementation of an AABB tree?
or, alternatively, another efficient data-structure, plus a lightweight C / C++ implementation, to solve the problem of intersecting a large number of rays with a large number of triangles?
"Large number" means several 100k for both rays and triangles.
I am aware that AABB trees are part of the CGAL library, and probably of game physics libraries like Bullet. However, I don't want the overhead of an enormous additional library in my project. Ideally, I'd like a small, float-type-templated, header-only implementation. I would also go for something with a bunch of CPP files, as long as it integrates easily into my project. A dependency on Boost is OK.
Yes, I have googled, but without success.
I should mention that my application context is mesh processing, and not rendering. In a nutshell, I'm transferring the topology of a reference mesh to the geometry of a mesh from a 3D scan. I'm shooting rays from vertices and along the normals of the reference mesh towards the 3D scan, and I need to recover the intersection of these rays with the scan.
Edit
Several answers/comments pointed to nearest-neighbour data structures. I have created a small illustration of the problems that arise when ray-mesh intersection is approached with nearest-neighbour methods. Nearest-neighbour methods can be used as heuristics that work in many cases, but I'm not convinced that they actually solve the problem systematically, the way AABB trees do.
While this code is a bit old and uses the 3DS Max SDK, it provides a fairly good tree system for object-object collision deformations in C++. I can't tell at a glance whether it is a quad-tree, AABB-tree, or even OBB-tree (the comments are a bit skimpy too).
http://www.max3dstuff.com/max4/objectDeform/help.html
It will require translation from Max to your own system but it may be worth the effort.
Try the ANN library:
http://www.cs.umd.edu/~mount/ANN/
It's "Approximate Nearest Neighbors". I know, you're looking for something slightly different, but here's how you can use this to speed up your data processing:
Feed points into ANN.
Query a user-selectable radius (think of it as a "per-mesh knob") around each vertex that you want to ray-cast from, and find the mesh vertices that are within range.
Select only the triangles that are within that range, and ray trace along the normal to find the one you want.
By judiciously choosing the search radius, you will definitely get a sizable speed-up without compromising on accuracy.
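A sketch of steps 1 and 2 with ANN's kd-tree and its fixed-radius query, based on my reading of the ANN manual (double-check the signatures against your ANN version):

#include <ANN/ANN.h>
#include <vector>

// Collect the indices of all points within 'radius' of 'query'.
// annkFRSearch takes the squared radius and returns the total number
// of points in the ball, which may exceed maxHits.
std::vector<int> verticesInRange(ANNkd_tree &tree, ANNpoint query,
                                 double radius, int maxHits)
{
    std::vector<int> idx(maxHits);
    std::vector<double> distSq(maxHits);
    int total = tree.annkFRSearch(query, radius * radius, maxHits,
                                  idx.data(), distSq.data());
    if (total < maxHits)
        idx.resize(total);
    return idx;
}

The tree itself is built once from the scan vertices (ANNkd_tree(points, nPoints, 3)), and the triangles touching the returned vertices become the ray-trace candidates.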
If there are no real-time requirements, I'd first try brute force.
1M x 1M ray-triangle tests shouldn't take much more than a few minutes to run (on the CPU).
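For reference, the brute-force inner test can be the standard Möller-Trumbore ray-triangle intersection; a self-contained sketch (names are mine):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore: true if the ray orig + t*dir (t >= 0) hits
// triangle (v0, v1, v2); t receives the hit distance.
bool rayTriangle(const Vec3 &orig, const Vec3 &dir,
                 const Vec3 &v0, const Vec3 &v1, const Vec3 &v2, double &t)
{
    const double eps = 1e-12;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t >= 0.0;
}

Brute force is then one loop over rays with an inner loop over triangles, keeping the smallest positive t per ray.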
If that's a problem, the second-best thing to do would be to restrict the search area: calculate an adjacency graph/relation between the triangles/polygons in the target mesh, and when an initial guess fails, try the adjacent triangles. This of course relies on a lack of self-occlusion / multiple hit points (which I think is one interpretation of "visibility doesn't apply to this problem").
Also, depending on how pathological the topologies are, one could try environment-mapping the target mesh onto a unit cube (each pixel would consist of a list of the triangles projected onto it) and testing the initial candidate with a single ray->aabb test + lookup.
Given the feedback, there's one more simple option to consider: space partitioning into a simple 3D grid, where each dimension can be subdivided using the histogram of the x/y/z locations, or even regularly (see the sketch after the notes below).
A 100x100x100 grid has a very manageable size of 1e6 entries.
The maximum number of cells a ray has to visit is proportional to the grid diameter (max ~300).
There are ~60,000 cells on the outer layers, which suggests on the order of 10 triangles per cell.
Caveats: triangles must be registered in every cell they occupy; a conservative algorithm will also place them in cells they don't actually touch, which is safe, but large triangles will probably require clipping and reassembly.
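Here is the sketch mentioned above: regular-grid indexing plus conservative triangle placement via the triangle's bounding box (which may register a triangle in cells it doesn't actually touch; that's the safe direction).

#include <algorithm>

// Cell index of coordinate x along one axis of a regular grid
// spanning [lo, hi] with 'cells' subdivisions.
int cellIndex(double x, double lo, double hi, int cells)
{
    int i = static_cast<int>((x - lo) / (hi - lo) * cells);
    return std::max(0, std::min(cells - 1, i));
}

struct CellRange { int lo, hi; };

// Inclusive cell range covered by a triangle's bounding box along one
// axis, given its three vertex coordinates a, b, c on that axis.
CellRange triangleCells(double a, double b, double c,
                        double lo, double hi, int cells)
{
    double mn = std::min(a, std::min(b, c));
    double mx = std::max(a, std::max(b, c));
    return { cellIndex(mn, lo, hi, cells), cellIndex(mx, lo, hi, cells) };
}

Looping over the box of cells given by the three per-axis ranges registers the triangle everywhere its bounding box reaches.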
I apologize for the length of this question, and a pre-emptive thanks to anyone who reads through it!
So I've spent the last few days going over the GJK algorithm. I understand the general concepts behind it, and most of the nitty-gritty details of its 2D implementation, thanks to the wonderful article by William Bittle at http://www.codezealot.org/archives/88 .
I've implemented his pseudocode (found at the end of the article) in my own C++ project; however, I want to make a 3D implementation. My weakness is in using the dot products to test the Voronoi regions and the triple products to get perpendicular lines, but I'm trying to read up more on that.
My problem comes down to the containsOrigin function. I'm having trouble visualizing and accounting for the new Voronoi regions that the z axis adds. I just can't seem to wrap my head around how to determine which region contains the origin. I assume there are 4 I have to account for, each extending from one of the triangular planes that comprise the 4 faces of the tetrahedron simplex. If the origin is not within any of those regions, then it is contained, and we have a collision.
How do I go about testing whether it is contained in a particular Voronoi region, or which triangular face is pointing in the direction of the origin?
The current 2D algorithm checks whether a triangle has been made; if not, the simplex is a line and it finds the 3rd point. I assume the 3D algorithm will check whether a tetrahedron has been made; if not, it will check for a triangle, and if there is one it will find a 4th point to make a tetrahedron (how would I get this? using a normal in the direction of the origin?). If a triangle hasn't been made, it will find a 3rd point to make a triangle (do I still use the triple product for this, as in 2D?).
Any suggestions, outlines, resources, code augmentations, or comments are much appreciated.
Depending on what result you expect from the GJK algorithm you might want to look at this nice tutorial from Molly Rocket: https://mollyrocket.com/849
Be aware, though, that his implementation only outputs whether there is an intersection (yes/no). But it might be a nice start.
I've got some convex polygons stored as an STL vector of points (more or less). I want to tessellate them really quickly, preferably into fairly evenly sized pieces, and with no "slivers".
I'm going to use it to explode some objects into little pieces. Does anyone know of a nice library to tessellate polygons (partition them into a mesh of smaller convex polygons or triangles)?
I've looked at a few I found online already, but I can't even get them to compile. These academic types don't give much regard to ease of use.
CGAL has packages to solve this problem. The best would probably be to use the 2D Polygon Partitioning package. For example, you could generate a y-monotone partition of a polygon (it works for non-convex polygons as well) and you would get something like this:
The running time is O(n log n).
In terms of ease of use, here is a small example generating a random polygon and partitioning it (based on this manual example):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Partition_traits_2.h>
#include <CGAL/partition_2.h>
#include <CGAL/point_generators_2.h>
#include <CGAL/random_polygon_2.h>
#include <list>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Partition_traits_2<K> Traits;
typedef Traits::Point_2 Point_2;
typedef Traits::Polygon_2 Polygon_2;
typedef std::list<Polygon_2> Polygon_list;
typedef CGAL::Creator_uniform_2<int, Point_2> Creator;
typedef CGAL::Random_points_in_square_2<Point_2, Creator> Point_generator;

int main()
{
    Polygon_2 polygon;
    Polygon_list partition_polys;

    // Generate a random simple polygon with 50 vertices.
    CGAL::random_polygon_2(50, std::back_inserter(polygon),
                           Point_generator(100));

    // Partition it into y-monotone pieces.
    CGAL::y_monotone_partition_2(polygon.vertices_begin(),
                                 polygon.vertices_end(),
                                 std::back_inserter(partition_polys));

    // At this point partition_polys contains the partition of the input polygon.
    return 0;
}
To install CGAL on Windows, you can use the installer to get the precompiled library; there are installation guides for every platform on this page. It might not be the simplest library to install, but you get the most used and most robust computational geometry library out there, and the CGAL mailing list is very helpful for answering questions...
poly2tri looks like a really nice lightweight C++ library for 2D Delaunay triangulation.
As balint.miklos mentioned in a comment above, Shewchuk's Triangle package is quite good. I have used it myself many times; it integrates nicely into projects, and there is the triangle++ C++ interface. If you want to avoid slivers, allow Triangle to add (interior) Steiner points, so that you generate a quality mesh (usually a constrained conforming Delaunay triangulation).
If you don't want to build the whole of CGAL into your app, this is probably simpler to implement:
http://www.flipcode.com/archives/Efficient_Polygon_Triangulation.shtml
I've just begun looking into this same problem and I'm considering Voronoi tessellation. The original polygon gets a scattering of semi-random points that will be the centers of the Voronoi cells; the more evenly distributed they are, the more regularly sized the cells will be, but they shouldn't be in a perfect grid, otherwise the interior polygons will all look the same. So the first thing is to be able to generate those cell center points: generating them over the bounding box of the source polygon plus an interior/exterior test shouldn't be too hard.
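A sketch of that first step: rejection sampling over the bounding box with a standard even-odd point-in-polygon test (names are mine).

#include <random>
#include <vector>

struct P { double x, y; };

// Even-odd rule: count how many polygon edges a rightward ray from q crosses.
bool inside(const std::vector<P> &poly, const P &q)
{
    bool in = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        const P &a = poly[i], &b = poly[j];
        if ((a.y > q.y) != (b.y > q.y) &&
            q.x < (b.x - a.x) * (q.y - a.y) / (b.y - a.y) + a.x)
            in = !in;
    }
    return in;
}

// Scatter n semi-random cell centers inside the polygon, sampling its
// bounding box [minX, maxX] x [minY, maxY] and keeping interior hits.
std::vector<P> scatterCenters(const std::vector<P> &poly, std::size_t n,
                              double minX, double maxX,
                              double minY, double maxY)
{
    std::mt19937 rng(12345);   // fixed seed: reproducible fragments
    std::uniform_real_distribution<double> ux(minX, maxX), uy(minY, maxY);
    std::vector<P> centers;
    while (centers.size() < n) {
        P q = { ux(rng), uy(rng) };
        if (inside(poly, q))
            centers.push_back(q);
    }
    return centers;
}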
The Voronoi edges are the dotted lines in this picture; they are, in a sense, the complement of the Delaunay triangulation. All the sharp triangle points become blunted:
Boost has some voronoi functionality:
http://www.boost.org/doc/libs/1_55_0/libs/polygon/doc/voronoi_basic_tutorial.htm
The next step is creating the Voronoi polygons. Voro++ (http://math.lbl.gov/voro++/) is 3D-oriented, but it is suggested elsewhere that an approximately-2D structure will work, just much more slowly than software oriented towards 2D Voronoi diagrams. The other package, which looks a lot better than a random orphaned academic homepage project, is https://github.com/aewallin/openvoronoi.
It looks like OpenCV used to support something along these lines, but it has been deprecated (though the C API still works?). cv::distanceTransform is still maintained, but it operates on pixels and generates pixel output, not vertex and edge polygon data structures; it may be sufficient for my needs, if not yours.
I'll update this once I've learned more.
A bit more detail on your desired input and output might be helpful.
For example, if you're just trying to get the polygons into triangles, a triangle fan would probably work. If you're trying to cut a polygon into little pieces, you could implement some kind of marching squares.
Okay, I made a bad assumption: I assumed that marching squares would be more similar to marching cubes. It turns out it's quite different, and not what I meant at all... :|
In any case, to directly answer your question, I don't know of any simple library that does what you're looking for. I agree about the usability of CGAL.
The algorithm I was thinking of was basically splitting polygons with lines, where the lines form a grid, so you mostly get quads. If you have a polygon-line intersection routine, the implementation is simple. Another way to pose this problem is to treat the 2D polygon like a function and overlay a grid of points. Then you do something similar to marching cubes: if all 4 points of a cell are in the polygon, make a quad; if 3 are in, make a triangle; if 2 are in, make a rectangle; etc. Probably overkill. If you wanted slightly irregular-looking polygons, you could randomize the locations of the grid points.
On the other hand, you could do a Catmull-Clark-style subdivision, but omit the smoothing. Basically, you add a point at the centroid and at the midpoint of each edge. Then, for each corner of the original polygon, you make a new smaller polygon that connects the edge midpoint previous to the corner, the corner, the next edge midpoint, and the centroid. This tiles the space and will have angles similar to your input polygon.
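A sketch of that split for one polygon, no smoothing (names are mine):

#include <array>
#include <vector>

struct Q { double x, y; };

// One quad per original corner: (previous edge midpoint, corner,
// next edge midpoint, centroid).
std::vector<std::array<Q, 4>> splitNoSmooth(const std::vector<Q> &poly)
{
    std::size_t n = poly.size();
    Q c = {0.0, 0.0};                       // centroid of the corners
    for (const Q &p : poly) { c.x += p.x; c.y += p.y; }
    c.x /= n; c.y /= n;

    std::vector<Q> mid(n);                  // mid[i]: midpoint of edge (i, i+1)
    for (std::size_t i = 0; i < n; ++i) {
        const Q &a = poly[i], &b = poly[(i + 1) % n];
        mid[i] = { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
    }

    std::vector<std::array<Q, 4>> quads(n);
    for (std::size_t i = 0; i < n; ++i)
        quads[i] = { mid[(i + n - 1) % n], poly[i], mid[i], c };
    return quads;
}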
So, lots of options, and I like brainstorming solutions, but I still have no idea what you're planning on using this for. Is this to create destructible meshes? Are you doing some kind of mesh processing that requires smaller elements? Trying to avoid Gouraud shading artifacts? Is this something that runs as a pre-process or realtime? How important is exactness? More information would result in better suggestions.
If you have convex polygons, and you're not too hung up on quality, then this is really simple: just do ear clipping. Don't worry, it's not O(n^2) for convex polygons. If you do it naively (i.e., you clip the ears as you find them), you'll get a triangle fan, which is a bit of a drag if you're trying to avoid slivers. Two trivial heuristics that can improve the triangulation are to
Sort the ears, or if that's too slow
Choose an ear at random.
If you want a more robust triangulator based on ear clipping, check out FIST.
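For the naive convex case, the clip-as-you-find-them result is exactly a triangle fan, which you can also produce directly (a sketch, names are mine):

#include <array>
#include <vector>

// Fan triangulation of a convex polygon with n vertices, expressed as
// index triples (0, i, i + 1) into the polygon's vertex list.
std::vector<std::array<int, 3>> fanTriangulate(int n)
{
    std::vector<std::array<int, 3>> tris;
    for (int i = 1; i + 1 < n; ++i)
        tris.push_back({0, i, i + 1});
    return tris;
}

The two heuristics above only change which ear you clip next; the bookkeeping stays the same.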