How to access the facets in a CGAL 3D triangulation?

I am using CGAL to compute the 3D triangulation of a set of points:
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_3<K> CGALTriangulation;
typedef CGALTriangulation::Point Point;
// Construction from a list of points
std::list<Point> points;
points.push_front(Point(0, 0, 0));
points.push_front(Point(2, 0, 0));
points.push_front(Point(0, 2, 0));
points.push_front(Point(2, 2, 0));
points.push_front(Point(1, 1, 1));
// Perform triangulation
CGALTriangulation T(points.begin(), points.end());
Accessing triangles (facets)
I need to create a mesh out of this in Unity, so I am using CGAL because it has a lot of algorithms for taking care of this complex problem. The issue is that I find it very difficult, in the API, to access the different triangles (and hence their vertices) that compose the triangulation, and I haven't yet found a way to do so.
Note: accessing vertices alone is not enough for me:
for (CGALTriangulation::Finite_vertices_iterator it = T.finite_vertices_begin();
     it != T.finite_vertices_end();
     ++it)
{
    CGALTriangulation::Triangulation_data_structure::Vertex v = *it;
    // Do something with the vertex
}
Because I do not get any info about which facet (triangle) each vertex belongs to. And the triangles are exactly what I need!
How can I access the triangles (facets) of the triangulation, and how can I get the vertices out of each facet?

I don't know precisely what you want to achieve. A 3D Delaunay triangulation is a decomposition of the convex hull of your points into tetrahedra. Anyway, if you want to access the facets of the triangulation, use the Finite_facets_iterator.
Something like:
for (CGALTriangulation::Finite_facets_iterator it = T.finite_facets_begin();
     it != T.finite_facets_end();
     ++it)
{
    // A facet is a pair (cell, i): the facet of `cell` opposite its i-th vertex.
    std::pair<CGALTriangulation::Cell_handle, int> facet = *it;
    // The facet's three vertices are the cell's vertices other than vertex i.
    CGALTriangulation::Vertex_handle v1 = facet.first->vertex( (facet.second+1)%4 );
    CGALTriangulation::Vertex_handle v2 = facet.first->vertex( (facet.second+2)%4 );
    CGALTriangulation::Vertex_handle v3 = facet.first->vertex( (facet.second+3)%4 );
}
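To turn that into mesh buffers (e.g. for Unity's Mesh.vertices / Mesh.triangles), you can deduplicate vertices with a map from Vertex_handle to index. A sketch building on the loop above (needs <map> and <vector>; untested):
std::map<CGALTriangulation::Vertex_handle, int> indexOf;
std::vector<Point> meshVertices;
std::vector<int> meshTriangles; // three consecutive indices per facet

for (CGALTriangulation::Finite_facets_iterator it = T.finite_facets_begin();
     it != T.finite_facets_end(); ++it)
{
    for (int i = 1; i <= 3; ++i)
    {
        CGALTriangulation::Vertex_handle v = it->first->vertex((it->second + i) % 4);
        if (indexOf.find(v) == indexOf.end())
        {
            // First time we see this vertex: assign it the next free index.
            indexOf[v] = static_cast<int>(meshVertices.size());
            meshVertices.push_back(v->point());
        }
        meshTriangles.push_back(indexOf[v]);
    }
}
Note that consistent orientation of the triangles is a separate concern; the vertex order used here is arbitrary.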
If you are interested in a surface mesh, you might want to look at reconstruction algorithms such as Poisson surface reconstruction or Advancing Front Reconstruction.

Related

Attaching info (id) to the vertices for a CGAL Poisson reconstruction

I spent the day trying to achieve the following, with no luck. I have been able to attach an id to the vertices (vertex()->info()) of some triangulations, with the help of std::pair. But now I want to do the same for a Poisson surface reconstruction, which uses a Polyhedron_3 as the mesh, and I haven't managed to. Poisson surface reconstruction requires points with normals, so for the points I defined pairs consisting of a (point, integer) pair and a normal; my inputs are a matrix of points pts and a matrix of normals normals (these matrices come from R, which runs the C++ code):
// IP3wn is a pair ((point, 1-based index), normal); its typedef is not shown here.
const size_t npoints = pts.nrow();
std::vector<IP3wn> points(npoints);
for(size_t i = 0; i < npoints; i++) {
    points[i] = std::make_pair(
        std::make_pair(Point3(pts(i, 0), pts(i, 1), pts(i, 2)), i + 1),
        Vector3(normals(i, 0), normals(i, 1), normals(i, 2)));
}
Polyhedron mesh;
But I can't access the info() field of the vertices:
for(Polyhedron::Facet_iterator fit = mesh.facets_begin();
    fit != mesh.facets_end(); fit++) {
    Polyhedron::Facet f = *fit;
    facets(i, 0) = f.halfedge()->vertex()->info();
}
The error message says there is no member named info. Could you show me the right way, please?
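One direction that may help (a sketch, untested): the default Polyhedron_3 items have no info() member at all, which is why the compiler complains. CGAL ships an alternative items class, CGAL::Polyhedron_items_with_id_3, that equips vertices, halfedges and facets with an id() field. Note also that Poisson reconstruction builds a new surface, so its vertices are generally not your input points; fresh ids can be assigned after the fact:
#include <cstddef>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Polyhedron_items_with_id_3.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
// Items class whose vertices/halfedges/facets carry an id() field.
typedef CGAL::Polyhedron_3<Kernel, CGAL::Polyhedron_items_with_id_3> Polyhedron;

// Number the vertices once the reconstruction has filled `mesh`.
void numberVertices(Polyhedron& mesh) {
    std::size_t id = 0;
    for (Polyhedron::Vertex_iterator vit = mesh.vertices_begin();
         vit != mesh.vertices_end(); ++vit) {
        vit->id() = id++;
    }
}
With that items class, the loop above would read vertex()->id() instead of vertex()->info().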

Draw (2D) Polygon with given 3D-Vertices and Transformation with VTK

I have some 3D models and I want to display each face of the model separately. For each face, I have a list of the vertices (as pcl::PointCloud), the translation vector (as Eigen::Vector3f) and the rotation (as Eigen::Quaternionf). The faces can have different shapes: rectangular, round (n-vertex polygons) and trapezoidal.
For the rectangular faces I have used vtkCubeSource so far, and it works well. For the round faces I could maybe use vtkCylinderSource. For trapezoidal faces I haven't found a solution yet.
The best would be a class like vtkPolyLineSource, where I just need to supply a list of vertices for an arbitrary polygon. But as far as I can see, vtkPolyLineSource only draws the outline and doesn't fill the polygon with a color.
Is there a way to draw a polygon in 3D space with VTK? Since it is possible to directly draw a 3D model from a file, I think there should be a method for drawing a model (or just one face), but I couldn't find it so far. This is my first contact with VTK, so I may just have overlooked the right classes.
One reason why I don't simply load a model file is that I need the faces in different colors and opacities (defined at runtime).
Use vtkPolygon
vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
// ... fill in your points with n points
// Create the polygon
vtkSmartPointer<vtkPolygon> polygon = vtkSmartPointer<vtkPolygon>::New();
polygon->GetPointIds()->SetNumberOfIds(n);
for (int j = 0; j < n; j++)
{
    polygon->GetPointIds()->SetId(j, j);
}
// Add the polygon to a list of polygons
vtkSmartPointer<vtkCellArray> polygons = vtkSmartPointer<vtkCellArray>::New();
polygons->InsertNextCell(polygon);
// Create a PolyData holding the points and the polygon cell
vtkSmartPointer<vtkPolyData> polygonPolyData = vtkSmartPointer<vtkPolyData>::New();
polygonPolyData->SetPoints(points);
polygonPolyData->SetPolys(polygons);
// create mapper and actor using this polydata - the usual stuff (see below)
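For completeness, the "usual stuff" might look like this (standard VTK boilerplate; SetInputData is the VTK 6+ spelling, older versions use SetInput; the color/opacity values are placeholders):
#include <vtkActor.h>
#include <vtkPolyDataMapper.h>
#include <vtkProperty.h>
#include <vtkSmartPointer.h>

// Wrap the polydata in the usual mapper/actor pair.
vtkSmartPointer<vtkPolyDataMapper> mapper =
    vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputData(polygonPolyData);

vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
// Per-actor color and opacity, matching the per-face requirement above.
actor->GetProperty()->SetColor(1.0, 0.5, 0.2);
actor->GetProperty()->SetOpacity(0.5);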

Geometry rounding problems: object no longer convex after simple transformations

I'm making a little app to analyze geometry. In one part of my program, I use an algorithm that has to have a convex object as input. Luckily, all my objects are initially convex, but some are just barely so (see image).
After I apply some transformations, my algorithm fails (it produces "infinitely" long polygons, etc.), and I think this is because of rounding errors as in the image; the top vertex of the cylinder gets "pushed in" slightly (very exaggerated in the image), and the object is no longer convex.
So my question is: Does anyone know of a method to "slightly convexify" an object? Here's one method I tried to implement but it didn't seem to work (or I implemented it wrong):
1. Average all vertices together to create a vertex C inside the convex shape.
2. Let d[v] be the distance from C to vertex v.
3. Scale each vertex v from the center C with the scale factor 1 / (1+d[v] * CONVEXIFICATION_FACTOR)
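Read literally, those steps come out as something like the sketch below (CGAL types assumed; untested, and as said above it didn't seem to fix the problem):
#include <cmath>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;

// Steps 1-3 above, written out literally.
void convexify(std::vector<K::Point_3>& verts, double factor)
{
    // 1. Average all vertices to get a point C inside the shape.
    K::Vector_3 sum(0, 0, 0);
    for (const K::Point_3& p : verts)
        sum = sum + (p - CGAL::ORIGIN);
    K::Point_3 c = CGAL::ORIGIN + sum / static_cast<double>(verts.size());

    // 2.-3. Rescale every vertex relative to C by 1 / (1 + d * factor).
    for (K::Point_3& p : verts) {
        K::Vector_3 d = p - c;
        double dist = std::sqrt(CGAL::to_double(d.squared_length()));
        p = c + d * (1.0 / (1.0 + dist * factor));
    }
}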
Thanks!! I have CGAL and Boost installed so I can use any of those library functions (and I already do).
You can certainly make the object convex by computing the convex hull of it. But that'll "convexify" anything. If you're sure your input has departed only slightly from being convex, then it shouldn't be a problem.
CGAL appears to have an implementation of 3D Quickhull in it, which would be the first thing to try. See http://doc.cgal.org/latest/Convex_hull_3/ for docs and some example programs. (I'm not sufficiently familiar with CGAL to want to reproduce any examples and claim they're correct.)
In the end I discovered that the root of the problem was that the convex hull contained lots of triangles: my input shapes were often cube-like, so each quadrilateral region appeared as 2 triangles with extremely similar plane equations, which caused some sort of problem in the algorithm I was using.
I solved it by "detriangulating" the polyhedra, using this code. If anyone can spot any improvements or problems, let me know!
#include <algorithm>
#include <cmath>
#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/convex_hull_traits_3.h>
#include <CGAL/convex_hull_3.h>

// Kernel typedef assumed; any CGAL kernel providing Plane_3 should work.
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
typedef Kernel::Plane_3 Plane;
typedef Kernel::Aff_transformation_3 Transformation;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron;

// Computes a facet's supporting plane from three of its vertices;
// used with std::transform below to fill in the planes of the hull.
struct Plane_from_facet {
    Polyhedron::Plane_3 operator()(Polyhedron::Facet& f) {
        Polyhedron::Halfedge_handle h = f.halfedge();
        return Polyhedron::Plane_3(h->vertex()->point(),
                                   h->next()->vertex()->point(),
                                   h->opposite()->vertex()->point());
    }
};
inline static double planeDistance(const Plane& p, const Plane& q) {
    // Scale each plane equation by the other's largest coefficient magnitude
    // so the two are comparable, then take a normalized squared distance.
    double sc1 = std::max(std::fabs(p.a()),
                 std::max(std::fabs(p.b()),
                 std::max(std::fabs(p.c()), std::fabs(p.d()))));
    double sc2 = std::max(std::fabs(q.a()),
                 std::max(std::fabs(q.b()),
                 std::max(std::fabs(q.c()), std::fabs(q.d()))));
    Plane r(p.a() * sc2, p.b() * sc2, p.c() * sc2, p.d() * sc2);
    Plane s(q.a() * sc1, q.b() * sc1, q.c() * sc1, q.d() * sc1);
    return ((r.a() - s.a()) * (r.a() - s.a()) +
            (r.b() - s.b()) * (r.b() - s.b()) +
            (r.c() - s.c()) * (r.c() - s.c()) +
            (r.d() - s.d()) * (r.d() - s.d())) / (sc1 * sc2);
}
// Merge adjacent facets whose supporting planes are (nearly) identical,
// undoing the triangulation of planar regions.
static void detriangulatePolyhedron(Polyhedron& poly) {
    std::vector<Polyhedron::Halfedge_handle> toJoin;
    for (auto edge = poly.edges_begin(); edge != poly.edges_end(); edge++) {
        auto f1 = edge->facet();
        auto f2 = edge->opposite()->facet();
        if (planeDistance(f1->plane(), f2->plane()) < 1E-5) {
            toJoin.push_back(edge);
        }
    }
    for (auto edge = toJoin.begin(); edge != toJoin.end(); edge++) {
        poly.join_facet(*edge);
    }
}
...
Polyhedron convexHull;
CGAL::convex_hull_3(shape.begin(), shape.end(), convexHull);
// Compute and store the supporting plane of every hull facet.
std::transform(convexHull.facets_begin(),
               convexHull.facets_end(),
               convexHull.planes_begin(),
               Plane_from_facet());
detriangulatePolyhedron(convexHull);
// Collect the bounding planes of the hull.
std::vector<Plane> bounds;
bounds.reserve(convexHull.size_of_facets());
for (auto facet = convexHull.facets_begin(); facet != convexHull.facets_end(); facet++) {
    bounds.push_back(facet->plane());
}
...
This gave the desired result.

Surface dilatation/erosion on a mesh

I am performing plane detection on a 3D mesh. To fill small holes, I want to perform a dilatation/erosion step. For each plane I know the equation and the corresponding facets (represented by a set of facet ids).
Currently, I have the following algorithm:
std::set<int> sFacetsDil;
for (std::set<int>::iterator it = plane.sFacets.begin(); it != plane.sFacets.end(); it++)
{
    Facet f = facetMap.at(*it);
    std::vector<Facet> vFacets = facetAround(f);
    for (std::size_t i = 0; i < vFacets.size(); i++) {
        // Neighbours that are not yet part of the plane get added.
        if (isNotInPlane(vFacets[i]))
            sFacetsDil.insert(vFacets[i].id);
    }
}
plane.sFacets.insert(sFacetsDil.begin(), sFacetsDil.end());
I do roughly the same thing for the erosion step. However, this is quite inefficient: many facets are in the interior of the plane and don't need to be visited during the dilatation step. I understand I could compute the border of the plane, but I think that would again amount to iterating over all the facets to find it... Moreover, in some cases I'd like to do multiple dilatation steps, so the border would have to be recomputed every time.
I have the standard halfedge structure for the mesh.
Does anyone know if there is a standard algorithm for this problem?
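One standard trick, independent of the mesh library: treat the dilatation as a breadth-first expansion and keep only the current frontier (the facets added in the previous round) instead of rescanning the whole plane. A sketch reusing the names from the snippet above (Facet, facetMap and facetAround are assumed as in the question; the set lookup replaces isNotInPlane; nSteps is the number of rounds; untested):
std::set<int> frontier = plane.sFacets; // the first round still scans everything
for (int step = 0; step < nSteps; ++step)
{
    std::set<int> added;
    for (std::set<int>::iterator it = frontier.begin(); it != frontier.end(); ++it)
    {
        std::vector<Facet> around = facetAround(facetMap.at(*it));
        for (std::size_t i = 0; i < around.size(); ++i)
        {
            // Neighbours not yet in the plane form the next frontier.
            if (plane.sFacets.find(around[i].id) == plane.sFacets.end())
                added.insert(around[i].id);
        }
    }
    plane.sFacets.insert(added.begin(), added.end());
    frontier.swap(added); // later rounds only expand the newly added rim
}
The same frontier idea should apply to the erosion step.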

Some faces are transparent, others are opaque

I have created a regular dodecahedron with OpenGL. I wanted to make the faces transparent (as in the image on Wikipedia), but this doesn't always work. After some digging in the OpenGL documentation, it appears that I "need to sort the transparent faces from back to front". Hm. How do I do that?
I mean, I call glRotatef() to rotate the coordinate system, but the reference coordinates of the faces stay the same; the rotation effect is applied "outside" of my rendering code.
If I apply the transformation to the coordinates, then everything else will stop moving.
How can I sort the faces in this case?
[EDIT] I know why this happens; I just have no idea what the solution could look like. Can someone please direct me to the correct OpenGL calls or a piece of sample code? I know when the coordinate transform is finished and I have the coordinates of the vertices of the faces. I know how to calculate the center coordinates of the faces. I understand that I need to sort them by Z value. How do I transform a Vector3f by the current view matrix (or whatever the thing is called that rotates my coordinate system)?
Code to rotate the view:
glRotatef(xrot, 1.0f, 0.0f, 0.0f);
glRotatef(yrot, 0.0f, 1.0f, 0.0f);
When the OpenGL documentation says "sort the transparent faces" it means "change the order in which you draw them". You don't transform the geometry of the faces themselves, instead you make sure that you draw the faces in the right order: farthest from the camera first, nearest to the camera last, so that the colour is blended correctly in the frame buffer.
One way to do this is to compute for each transparent face a representative distance from the camera (for example, the distance of its centre from the centre of the camera), and then sort the list of transparent faces on this representative distance.
You need to do this because OpenGL uses the Z-buffering technique.
(I should add that the technique of "sorting by the distance of the centre of the face" is a bit naive, and leads to the wrong result in cases where faces are large or close to the camera. But it's simple and will get you started; there'll be plenty of time later to worry about more sophisticated approaches to Z-sorting.)
Update: Aaron, you clarified the post to indicate that you understand the above, but don't know how to calculate a suitable Z value for each face. Is that right? I would usually do this by measuring the distance from the camera to the face in question. So I guess this means you don't know where the camera is?
If that's a correct statement of the problem you're having, see OpenGL FAQ 8.010:
As far as OpenGL is concerned, there is no camera. More specifically, the camera is always located at the eye space coordinate (0., 0., 0.).
Update: Maybe the problem is that you don't know how to transform a point by the modelview matrix? If that's the problem, see OpenGL FAQ 9.130:
Transform the point into eye-coordinate space by multiplying it by the ModelView matrix. Then simply calculate its distance from the origin.
Use glGetFloatv(GL_MODELVIEW_MATRIX, dst) to get the modelview matrix as a list of 16 floats. I think you'll have to do the multiplication yourself: as far as I know OpenGL doesn't provide an API for this.
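In C that boils down to a small dot product per point; note that OpenGL returns the matrix in column-major order (sketch, untested):
#include <GL/gl.h>

/* Eye-space z of an object-space point, usable as the sort key.
   Column-major layout: element (row r, column c) is m[c * 4 + r]. */
float eyeSpaceZ(float x, float y, float z)
{
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    /* Third row of M times (x, y, z, 1); m[14] is the z translation. */
    return m[2] * x + m[6] * y + m[10] * z + m[14];
}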
For reference, here is the code (using LWJGL 2.0.1). I define my model using an array of float arrays for the coordinates:
float one = 1f * scale;
// Cube of size 2 * scale
float[][] coords = new float[][] {
    { one, one, one },    // 0
    { -one, one, one },
    { one, -one, one },
    { -one, -one, one },
    { one, one, -one },
    { -one, one, -one },
    { one, -one, -one },
    { -one, -one, -one }, // 7
};
Faces are defined in an array of int arrays. The items in the inner array are indices of vertices:
int[][] faces = new int[][] {
    { 0, 2, 3, 1, }, // z = +1
    { 0, 4, 6, 2, }, // x = +1
    { 0, 1, 5, 4, }, // y = +1
    { 4, 5, 7, 6, }, // z = -1
    { 5, 1, 3, 7, }, // x = -1
    { 2, 6, 7, 3, }, // y = -1
};
These lines load the Model/View matrix:
Matrix4f matrix = new Matrix4f ();
// LWJGL needs a direct buffer here; FloatBuffer.allocate() is not direct.
FloatBuffer params = BufferUtils.createFloatBuffer (16);
GL11.glGetFloat (GL11.GL_MODELVIEW_MATRIX, params);
matrix.load (params);
I store some information of each face in a Face class:
public static class Face
{
    public int id;
    public Vector3f center;

    @Override
    public String toString ()
    {
        return String.format ("%d %.2f", id, center.z);
    }
}
This comparator is then used to sort the faces by Z depth:
public static final Comparator<Face> FACE_DEPTH_COMPARATOR = new Comparator<Face> ()
{
    @Override
    public int compare (Face o1, Face o2)
    {
        float d = o1.center.z - o2.center.z;
        return d < 0f ? -1 : (d == 0 ? 0 : 1);
    }
};
getCenter() returns the center of a face:
public static Vector3f getCenter (float[][] coords, int[] face)
{
    Vector3f center = new Vector3f ();
    for (int vertice = 0; vertice < face.length; vertice ++)
    {
        float[] c = coords[face[vertice]];
        center.x += c[0];
        center.y += c[1];
        center.z += c[2];
    }
    float N = face.length;
    center.x /= N;
    center.y /= N;
    center.z /= N;
    return center;
}
Now I need to set up the face array:
Face[] faceArray = new Face[faces.length];
Vector4f v = new Vector4f ();
for (int f = 0; f < faces.length; f ++)
{
    Face face = faceArray[f] = new Face ();
    face.id = f;
    face.center = getCenter (coords, faces[f]);
    v.x = face.center.x;
    v.y = face.center.y;
    v.z = face.center.z;
    v.w = 1f; // w = 1 for a point, so the translation part of the matrix is applied
    Matrix4f.transform (matrix, v, v);
    face.center.x = v.x;
    face.center.y = v.y;
    face.center.z = v.z;
}
After this loop, I have the transformed center vectors in faceArray and I can sort them by Z value:
Arrays.sort (faceArray, FACE_DEPTH_COMPARATOR);
//System.out.println (Arrays.toString (faceArray));
Rendering happens in another nested loop:
float[] faceColor = new float[] { .3f, .7f, .9f, .3f };
for (Face f : faceArray)
{
    int[] face = faces[f.id];
    GL11.glColor4f (faceColor[0], faceColor[1], faceColor[2], faceColor[3]);
    GL11.glBegin (GL11.GL_TRIANGLE_FAN);
    for (int vertice = 0; vertice < face.length; vertice ++)
    {
        float[] c = coords[face[vertice]];
        GL11.glVertex3f (c[0], c[1], c[2]);
    }
    GL11.glEnd ();
}
Have you tried just drawing each face, in regular world coordinates, from back to front? Often the wording in the OpenGL docs is a bit odd. I think if you get the drawing order right without worrying about rotation, it might automatically work when you add rotation; OpenGL might take care of the reordering of faces when rotating the matrix.
Alternatively, you can grab the current modelview matrix as you draw (glGetFloatv(GL_MODELVIEW_MATRIX, ...)) and reorder your drawing depending on which faces will end up at the back/front after the rotation.
That quote says it all - you need to sort the faces.
When drawing such a simple object you can just render the back faces first and the front faces second using the z-buffer (by rendering twice with different z-buffer comparison functions).
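For a convex object like the dodecahedron, a common way to implement those two passes is with face culling rather than depth comparisons: first draw only the back-facing polygons, then only the front-facing ones. This relies on the faces having consistent winding; drawObject() stands in for your existing draw code (sketch, untested):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);

glCullFace(GL_FRONT); /* pass 1: keep only back faces (the far side) */
drawObject();

glCullFace(GL_BACK);  /* pass 2: front faces, blended over the back ones */
drawObject();

glDisable(GL_CULL_FACE);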
But usually, you just want to transform the object, then sort the faces. You transform only your in-memory representation of the object, then determine the drawing order by sorting, then draw in that order with the original coordinates, using transformations as needed (these need to be consistent with the sorting you've done). In a real application, you would probably do the transformation implicitly, e.g. by storing the scene in a BSP-, quad-, R-, or whatever-tree and simply traversing the tree from various directions.
Note that the sorting part can be tricky, because the relation "is-obscured-by", which is the one you want to compare the faces by (you need to draw the obscured faces first), is not an ordering; e.g. there can be cycles (face A obscures face B && face B obscures face A). In such a case, you would probably split one of the faces to break the cycle.
EDIT:
You get the z-coordinate of a vertex by taking the coordinates you pass to glVertex3f(), making them 4D (homogeneous coordinates) by appending 1, transforming them with the modelview matrix, then transforming them with the projection matrix, and finally doing the perspective division. The details are in the OpenGL specs, Chapter 2, section "Coordinate Transformations".
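Spelled out in C, the whole chain is two matrix-vector products followed by the division (column-major matrices as returned by glGetFloatv; sketch, untested):
/* in: object-space point; mv, proj: column-major 4x4 matrices.
   out: normalized device coordinates. */
void transformVertex(const float mv[16], const float proj[16],
                     const float in[3], float out[3])
{
    float eye[4], clip[4];
    /* eye = ModelView * (x, y, z, 1) */
    for (int r = 0; r < 4; ++r)
        eye[r] = mv[r] * in[0] + mv[4 + r] * in[1]
               + mv[8 + r] * in[2] + mv[12 + r];
    /* clip = Projection * eye */
    for (int r = 0; r < 4; ++r)
        clip[r] = proj[r] * eye[0] + proj[4 + r] * eye[1]
                + proj[8 + r] * eye[2] + proj[12 + r] * eye[3];
    /* perspective division */
    for (int i = 0; i < 3; ++i)
        out[i] = clip[i] / clip[3];
}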
However, there isn't any API that does the transformation for you. The only thing OpenGL lets you do is draw the primitives and tell the renderer how to draw them (e.g. how to transform them). It doesn't let you easily transform coordinates or anything else (although, IIUC, there are ways to tell OpenGL to write transformed coordinates to a buffer, it is not that easy). If you want a library that helps you manipulate actual objects, coordinates, etc., consider using some sort of scene-graph library (OpenInventor or something similar).