CGAL: Wrong vertex accessed using facet - c++

As far as I understand how a facet works in CGAL (using v5.1), it is composed of a cell handle and the index of the vertex opposite the facet in that cell. One can thus obtain all the vertices of the considered cell as follows (I am working with alpha shapes in 3D):
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::FT FT;
typedef CGAL::Triangulation_vertex_base_with_info_3<std::size_t, Kernel> Vb3;
typedef CGAL::Fixed_alpha_shape_vertex_base_3<Kernel, Vb3> asVb3;
typedef CGAL::Fixed_alpha_shape_cell_base_3<Kernel> asCb3;
typedef CGAL::Triangulation_data_structure_3<asVb3, asCb3> asTds3;
typedef CGAL::Delaunay_triangulation_3<Kernel, asTds3, CGAL::Fast_location> asTriangulation_3;
typedef CGAL::Fixed_alpha_shape_3<asTriangulation_3> Alpha_shape_3;
typedef Kernel::Point_3 Point_3;
...
Alpha_shape_3::Facet facet;
...
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex((facet.second+1)%4);
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex((facet.second+2)%4);
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex((facet.second+3)%4);
Alpha_shape_3::Vertex_handle facetOppositeVertex = facet.first->vertex((facet.second)%4);
I assumed the opposite vertex belongs to a cell of which the facet is also a part. Using this, I can compute the outward-pointing normals to the surface of my alpha shape (if there is a way to do this automatically with CGAL, I would like to know it). To compute the normal at a point on the free surface, I average the normals of the facets sharing that point, weighted by the angle of each facet at that point. To orient my normals outwards from the mesh, I simply check the dot product between the normal and a vector going from one point of the facet to the opposite vertex.
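For reference, here is a minimal sketch of the per-facet part of that scheme (the helper name orientedFacetNormal is mine, not from the original post); it builds the facet normal from the three facet vertices and flips it using the dot product with the opposite vertex, exactly as described above:
#include <cmath>

Kernel::Vector_3 orientedFacetNormal(const Alpha_shape_3::Facet& facet)
{
    Alpha_shape_3::Vertex_handle v1 = facet.first->vertex((facet.second + 1) % 4);
    Alpha_shape_3::Vertex_handle v2 = facet.first->vertex((facet.second + 2) % 4);
    Alpha_shape_3::Vertex_handle v3 = facet.first->vertex((facet.second + 3) % 4);
    Alpha_shape_3::Vertex_handle opp = facet.first->vertex(facet.second);

    Kernel::Vector_3 n = CGAL::cross_product(v2->point() - v1->point(),
                                             v3->point() - v1->point());
    n = n / std::sqrt(CGAL::to_double(n.squared_length()));

    // Flip so the normal points away from the opposite vertex of the cell,
    // i.e. outward if that cell lies inside the alpha shape.
    if (n * (opp->point() - v1->point()) > 0)
        n = -n;
    return n;
}
The per-vertex normal is then the angle-weighted sum of these facet normals, normalized at the end.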
The problem is that the opposite vertex sometimes seems to be in a completely different cell, which leads to bad results in my FEM computations. Here are some figures of what happens:
Initially (figure): Normal to free surface (good)
When the mesh slightly deforms (figure): Normal to free surface (bad)
What actually happens is that the opposite vertex of some facets does not lie in the cell sharing that free-surface facet, but in another cell far away, so the orientation of the facet normal is computed incorrectly and some facet normals cancel each other out, leading to bad normals.
Is there:
a way to compute those normals automatically in CGAL?
a problem in my understanding of how a facet works?
NB: I already check whether the opposite vertex is infinite, and in that case I mirror the facet using the mirror_facet function.
EDIT:
As mentioned in a comment, I tried to use different functions to access the vertices:
std::cout << "--------------------------------------------------" << std::endl;
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 0))->info() << std::endl; //facetV1
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 1))->info() << std::endl; //facetV2
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 2))->info() << std::endl; //facetV3
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 3))->info() << std::endl; //opposite vertex
std::cout << "--------------------------------------------------" << std::endl;
But it sometimes gives me:
--------------------------------------------------
117
25
126
117
--------------------------------------------------
So the opposite vertex has the same index as one of the facet's vertices.

The way to get a consistent orientation of the vertices of a facet is to use the function vertex_triple_index() instead of (facet.second+1/2/3)%4. So you have:
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex(DT::vertex_triple_index(facet.second,0));
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex(DT::vertex_triple_index(facet.second,1));
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex(DT::vertex_triple_index(facet.second,2));
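(DT here stands for the Delaunay triangulation type, i.e. asTriangulation_3 in the typedefs of the question.) Note also that, as far as I understand it, the second argument of vertex_triple_index() only ranges over 0, 1 and 2, which is why the fourth line in the EDIT above prints a vertex that coincides with one of the facet vertices. The vertex opposite the facet is addressed directly:
Alpha_shape_3::Vertex_handle facetOppositeVertex = facet.first->vertex(facet.second);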


CGAL Inexact Volume calculation

My Problem
I would like to calculate the exact volume of the intersection of two polygon meshes. Unfortunately, the result is wrong.
I think it has something to do with me choosing the wrong options.
I have a STEP/OFF file (both work). I move a smaller cylinder into a bigger cylinder.
Then I calculate the intersection. I do not use a point map.
If I calculate the volume of the first three intersections, CGAL tells me their volume is zero, but it is not.
Why the Result is wrong
I know this because:
1. I described this problem analytically and solved the integral with Matlab => the volume is not 0.
2. I solved the problem using FreeCAD => the volume is the same as in Matlab.
3. I computed the volume with CGAL => the result does not match 1 and 2.
I also wrote the results of all the intersections into files, and the files concerning the problem are not empty. With a mesh viewer such as gmsh or MeshLab I can confirm height, width and length, so the meshes really do intersect and the volume cannot be 0.
What I have done
I have read this:
The Exact Computation Paradigm
Robustness and Precision Issues in Geometric Computation
FAQ: I am using double (or float or ...) as my number type and get assertion failures or incorrect output. Is this a bug in CGAL?
I did not understand how these three apply to my situation.
I am using the Exact_predicates_exact_constructions_kernel and I defined CGAL_DONT_USE_LAZY_KERNEL. I have also tried the other kernels, without defining CGAL_DONT_USE_LAZY_KERNEL; the result does not change.
I do not use the same variable for the input and output of the intersection as in Polygon_mesh_processing/corefinement_consecutive_bool_op.cpp, so I do not use a point map for the result.
If needed I will supply the entire example, but I think I did something wrong, and the includes and the way I calculate the volume should suffice.
// original example from corefinement_parallel_union_meshes.cpp
#include <CGAL/Exact_predicates_exact_constructions_kernel_with_sqrt.h>
//#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polygon_mesh_processing/transform.h>
#include <CGAL/Polygon_mesh_processing/intersection.h>
#include <CGAL/Named_function_parameters.h>
#include <CGAL/boost/graph/named_params_helper.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/aff_transformation_tags.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <CGAL/Polygon_mesh_processing/repair.h>
#include <CGAL/Polygon_mesh_processing/IO/polygon_mesh_io.h>
#include <CGAL/Aff_transformation_3.h>
#include <iostream>
#include <fstream> // for write to file
#include <cassert>
#include <typeinfo>
//#define CGAL_DONT_USE_LAZY_KERNEL
/*
The corefinement operation (which is also internally used in the three Boolean operations) will correctly change the topology of the input surface mesh
if the point type used in the point property maps of the input meshes is from a CGAL Kernel with exact predicates.
If that kernel does not have exact constructions, the embedding of the output surface mesh might have self-intersections.
In case of consecutive operations, it is thus recommended to use a point property map with points from a kernel
with exact predicates and exact constructions (such as CGAL::Exact_predicates_exact_constructions_kernel).
In practice, this means that with exact predicates and inexact constructions, edges will be split at each intersection with a triangle but the position of the intersection point might create self-intersections due to the limited precision of floating point numbers.
*/
typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel; // read the text above about the kernel
typedef Kernel::Point_3 Point_3;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
#define CGAL_DONT_USE_LAZY_KERNEL
namespace PMP = CGAL::Polygon_mesh_processing;
bool simulateDrillingRadial(Mesh cutter, Mesh rotor, Mesh &out, double step) { // Kernel::Point_3 *volume
    bool validIntersection = false;
    CGAL::Aff_transformation_3<Kernel> trans(CGAL::Translation(),
                                             Kernel::Vector_3(0, step, Z_INIT)); // step * 2
    PMP::transform(trans, cutter);
    assert(!CGAL::Polygon_mesh_processing::does_self_intersect(rotor));
    assert(!CGAL::Polygon_mesh_processing::does_self_intersect(cutter));
    assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(cutter));
    assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(rotor));
    validIntersection = CGAL::exact(
        PMP::corefine_and_compute_intersection(cutter, rotor, out));
#ifndef NDEBUG
    CGAL::IO::write_polygon_mesh("union" + std::to_string(step) + ".off", out,
                                 CGAL::parameters::stream_precision(17));
    assert(validIntersection);
    std::cout << "Cutter Volume: " << PMP::volume(cutter) << std::endl;
    std::cout << "Rotor Volume: " << PMP::volume(rotor) << std::endl;
    std::cout << "Out Volume: " << CGAL::exact(PMP::volume(out)) << std::endl;
    std::cout << "Numb. of Step: " << step << std::endl;
    std::cout << "Bohrtiefe : " << Y_INIT - DRMAX * 1.0 / ND * step << std::endl;
#endif
    return validIntersection;
}
The rest of the code:
bool simulateDrillingRadial(Mesh cutter, Mesh rotor, Mesh &out, double step) { // Kernel::Point_3 *volume
    bool validIntersection = false;
    CGAL::Aff_transformation_3<Kernel> trans(CGAL::Translation(),
                                             Kernel::Vector_3(0, step, Z_INIT)); // step * 2
    PMP::transform(trans, cutter);
    assert(!CGAL::Polygon_mesh_processing::does_self_intersect(rotor));
    assert(!CGAL::Polygon_mesh_processing::does_self_intersect(cutter));
    assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(cutter));
    assert(CGAL::Polygon_mesh_processing::does_bound_a_volume(rotor));
    validIntersection = CGAL::exact(
        PMP::corefine_and_compute_intersection(cutter, rotor, out));
#ifndef NDEBUG
    CGAL::IO::write_polygon_mesh("union" + std::to_string(step) + ".step", out,
                                 CGAL::parameters::stream_precision(17));
    assert(validIntersection);
    std::cout << "Cutter Volume: " << PMP::volume(cutter) << std::endl;
    std::cout << "Rotor Volume: " << PMP::volume(rotor) << std::endl;
    std::cout << "Out Volume: " << CGAL::exact(PMP::volume(out)) << std::endl;
    std::cout << "Bohrtiefe : " << step << std::endl;
#endif
    return validIntersection;
}
int main(int argc, char **argv) {
    bool validRead = false;
    bool validIntersection = false;
    Mesh cutter, rotor; //out; // , out;
    Mesh out[ND];
    Kernel::Point_3 centers[ND];
    //Kernel::FT volume[ND];
    //GetGeomTraits<TriangleMesh, CGAL_NP_CLASS>::type::FT volume[ND];
    double steps[40] = { 40.0000, 39.9981, 39.9926, 39.9833, 39.9703, 39.9535,
                         39.9330, 39.9086, 39.8804, 39.8482, 39.8121, 39.7719, 39.7276,
                         39.6791, 39.6262, 39.5689, 39.5070, 39.4404, 39.3689, 39.2922,
                         39.2103, 39.1227, 39.0293, 38.9297, 38.8235, 38.7102, 38.5893,
                         38.4601, 38.3219, 38.1736, 38.0140, 37.8415, 37.6542, 37.4490,
                         37.2221, 36.9671, 36.6740, 36.3232, 35.8661, 35.0000 };
    const std::string cutterFile = CGAL::data_file_path("Cutter.off");
    const std::string rotorFile = CGAL::data_file_path("Rotor.off");
    validRead = (!PMP::IO::read_polygon_mesh(cutterFile, cutter)
                 || !PMP::IO::read_polygon_mesh(rotorFile, rotor));
    assert(!validRead);
    PMP::triangulate_faces(cutter);
    PMP::triangulate_faces(rotor);
    PMP::transform(rotAroundX(M_PI / 2), cutter);
    for (int i = 0; i < ND; i++) {
        //simulateDrillingRadial(Mesh & cutter, Mesh & rotor, Mesh & out, unsigned int step)
        simulateDrillingRadial(cutter, rotor, out[i], steps[i] + 10);
    }
    writeToCSV("tmp.csv", ND, out, steps);
    return 0;
}
Change the output format of your mesh from *.off to *.stl, then open the intersection mesh in software like Autodesk Netfabb, which can detect and repair errors in the meshes it loads. I think there's a high chance the functions you're using generate meshes with defects. Possible defects include holes, duplicate triangles, and self-intersections. Strictly speaking, such meshes do not unambiguously define a solid body, and they have no volume.
If you confirm that's the problem, there are two ways to fix it:
1. Replace or fix the intersection algorithm so that it produces watertight meshes with no self-intersections or duplicate triangles. Maybe the Nef polyhedra from the same library will help.
2. Replace the volume computation algorithm with one that tolerates the defects present in your intersection meshes.
I realize the answer is rather vague. The reason is that the problem is very hard to solve well: people have been publishing research papers on it for decades, and some companies even sell commercial libraries just for reliable Boolean operations on 3D meshes.
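If you prefer to stay inside CGAL for that check, here is a hedged sketch (reusing the Mesh typedef and the PMP alias from the question; the helper name reportIntersectionVolume is mine). If any of the checks fails, the zero volume is a symptom of an invalid intersection mesh rather than of the volume computation itself:
#include <CGAL/boost/graph/helpers.h>                        // CGAL::is_closed
#include <CGAL/Polygon_mesh_processing/self_intersections.h> // does_self_intersect
#include <CGAL/Polygon_mesh_processing/orientation.h>        // does_bound_a_volume
#include <CGAL/Polygon_mesh_processing/measure.h>            // volume

bool reportIntersectionVolume(const Mesh& out) {
    const bool ok = CGAL::is_closed(out)
                    && !PMP::does_self_intersect(out)
                    && PMP::does_bound_a_volume(out);
    if (ok)
        std::cout << "Out Volume: " << CGAL::to_double(PMP::volume(out)) << std::endl;
    else
        std::cout << "The intersection mesh does not bound a volume" << std::endl;
    return ok;
}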

How to convert a triangulated hole patch to a surface mesh?

I have a triangulated hole patch with vertices and triangles. Now how do I convert this to a surface mesh?
I am trying to partially fill a hole in my mesh using two different patches. I have the vertex locations for all the points on the boundary (points with z=0). Using them I have triangulated the hole using the following code (from https://doc.cgal.org/latest/Polygon_mesh_processing/Polygon_mesh_processing_2triangulate_polyline_example_8cpp-example.html)
std::vector<PointCGAL> polyline;
Mesh::Property_map<vertex_descriptor, std::string> name;
Mesh::Property_map<vertex_descriptor, PointCGAL> location = mesh1.points();
BOOST_FOREACH(vertex_descriptor vd, mesh1.vertices()) {
    if (location[vd].z() < 0.00001)
    {
        std::cout << "on Boundary" << endl;
        polyline.push_back(PointCGAL(location[vd].x(), location[vd].y(), location[vd].z()));
    }
    std::cout << location[vd] << std::endl;
}
typedef CGAL::Triple<int, int, int> Triangle_int;
std::vector<Triangle_int> patch;
patch.reserve(polyline.size() - 2); // there will be exactly n-2 triangles in the patch
CGAL::Polygon_mesh_processing::triangulate_hole_polyline(
    polyline,
    std::back_inserter(patch));
Have a look at the function polygon_soup_to_polygon_mesh()
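A minimal sketch of how that could look for the patch built above (the typedefs are assumptions of mine, since they are not shown in the question, and the helper name patch_to_mesh is hypothetical):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/orient_polygon_soup.h>
#include <CGAL/Polygon_mesh_processing/polygon_soup_to_polygon_mesh.h>
#include <CGAL/utility.h>   // CGAL::Triple
#include <array>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K; // assumed
typedef K::Point_3 PointCGAL;                                  // assumed
typedef CGAL::Surface_mesh<PointCGAL> Mesh;                    // assumed

Mesh patch_to_mesh(const std::vector<PointCGAL>& polyline,
                   const std::vector<CGAL::Triple<int, int, int> >& patch)
{
    // The triangle indices refer to positions in `polyline`, so the polyline
    // points form the point set of the soup.
    std::vector<PointCGAL> points(polyline.begin(), polyline.end());
    std::vector<std::array<std::size_t, 3> > polygons;
    polygons.reserve(patch.size());
    for (const CGAL::Triple<int, int, int>& t : patch)
        polygons.push_back({ std::size_t(t.first), std::size_t(t.second), std::size_t(t.third) });

    // Orient the soup consistently, then build the surface mesh from it.
    CGAL::Polygon_mesh_processing::orient_polygon_soup(points, polygons);
    Mesh patch_mesh;
    CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh(points, polygons, patch_mesh);
    return patch_mesh;
}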

NaN values in normal point

I'm having a problem with the values contained in the coordinates of the normals. I estimate the normals of my cloud, and when I check the xyz coordinates of a normal, NaN is returned.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
// Object for normal estimation.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
// For every point, use all neighbors in a radius of 3cm.
normalEstimation.setRadiusSearch(0.03);
// A kd-tree is a data structure that makes searches efficient. More about it later.
// The normal estimation object will use it to find nearest neighbors.
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
// Calculate the normals.
normalEstimation.compute(*normals);
for (size_t i = 0; i < cloud->points.size(); i++){
    std::cout << "x: " << cloud->points[i].x << std::endl;
    std::cout << "normal x: " << normals->points[i].normal_x << std::endl; // -nan here
}
How can I solve it?
This seems like a problem with your input point cloud. Make sure your point cloud is not a flat plane and that you don't have duplicate points. In my case the culprit was the search radius, so try playing with that too.
I just encountered a similar problem with very similar code. The comments were very helpful, thanks. This is probably too late, but it might still help somebody.
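Building on those comments, a small sketch of my own (not from the original answers) to count and optionally drop the NaN normals; if most of them are NaN, the search radius is probably too small for the point density of the cloud:
#include <pcl/filters/filter.h>   // pcl::removeNaNNormalsFromPointCloud
#include <cmath>
#include <vector>
#include <iostream>

std::size_t nanCount = 0;
for (std::size_t i = 0; i < normals->points.size(); ++i)
    if (!std::isfinite(normals->points[i].normal_x))
        ++nanCount;
std::cout << nanCount << " of " << normals->points.size() << " normals are NaN" << std::endl;

// Optionally drop them; `indices` maps the surviving normals back to the input points.
pcl::PointCloud<pcl::Normal>::Ptr finiteNormals(new pcl::PointCloud<pcl::Normal>);
std::vector<int> indices;
pcl::removeNaNNormalsFromPointCloud(*normals, *finiteNormals, indices);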

How do you access the coordinates in a boost graph topology layout?

I'm trying to create an application which displays a simple graph and since I'm using boost::graph for the underlying data structure, I'd like to use the layouting algorithms available in the library as well.
The answer presented here explains how to use a layout algorithm within the boost library to lay out vertices of graph:
How does the attractive force of Fruchterman Reingold work with Boost Graph Library
But sadly it does not explain how - after the layout has been calculated - the coordinates of the vertices can actually be accessed. Even though we get a vector of positions (or rather points), the float components are private, so that doesn't help. The boost::graph documentation also doesn't address this topic.
So how can simple (X,Y) coordinates of each vertex be retrieved after a layouting algorithm has been applied?
After reviewing the boost graph source code, it turns out this isn't so hard after all.
We can wrap the positions container in an iterator_property_map and use its [] operator to access the coordinates of each vertex:
#include <boost/graph/iteration_macros.hpp>      // BGL_FORALL_VERTICES
#include <boost/property_map/property_map.hpp>   // iterator_property_map

template<typename Graph, typename Positions>
void print_positions(const Graph &graph, const Positions &positions) {
    auto index_map = boost::get(boost::vertex_index, graph);
    using PropertyMap = boost::iterator_property_map<typename Positions::const_iterator, decltype(index_map)>;
    PropertyMap position_map(positions.begin(), index_map);
    BGL_FORALL_VERTICES(v, graph, Graph) {
        auto pos = position_map[v];  // a topology point; its operator[] exposes the coordinates
        std::cout << v << ": " << pos[0] << "|" << pos[1] << std::endl;
    }
}
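For completeness, here is a minimal usage sketch of my own (not part of the original answer), assuming a Boost version with the topology-based layout API; it lays out a small graph on a square_topology and prints the coordinates with print_positions() from above:
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/fruchterman_reingold.hpp>
#include <boost/graph/random_layout.hpp>
#include <boost/graph/topology.hpp>
#include <vector>
#include <iostream>

using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS>;
using Topology = boost::square_topology<>;   // 2D square with the default RNG
using Point = Topology::point_type;

int main() {
    Graph g(4);
    boost::add_edge(0, 1, g);
    boost::add_edge(1, 2, g);
    boost::add_edge(2, 3, g);
    boost::add_edge(3, 0, g);

    Topology topology(50.0);                 // side length of the layout square
    std::vector<Point> positions(boost::num_vertices(g));
    auto position_map = boost::make_iterator_property_map(
        positions.begin(), boost::get(boost::vertex_index, g));

    boost::random_graph_layout(g, position_map, topology);              // initial placement
    boost::fruchterman_reingold_force_directed_layout(g, position_map, topology);

    print_positions(g, positions);           // the helper from the answer above
    return 0;
}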

Skeletal animation, transformation multiplication

I have implemented a skeletal animation system where I seem to be missing one last detail for it to work properly.
I have made an animation in which only a part of the character has bones. In this image the stickman has a waving arm, but the arm waves at the wrong place compared to the rest of the stickman (you can barely see it between his legs).
I will try to outline the basics of my matrix computation to see if I am doing something wrong.
Computation of the bone-specific absolute and relative animation matrices (based on my keyframe matrix data):
if (b == this->root) {
    b->absoluteMatrix = M;
} else {
    b->absoluteMatrix = b->parent->absoluteMatrix * M;
}
b->skinningMatrix = b->absoluteMatrix * inverse(b->offsetMatrix);
if (this->currentAnimationTime == 0) {
    cout << "Bone '" << b->name << "' skinningMatrix:\n";
    printMatrix(b->skinningMatrix);
    cout << "Bone '" << b->name << "' absoluteMatrix:\n";
    printMatrix(b->absoluteMatrix);
    cout << "Bone '" << b->name << "' offsetMatrix:\n";
    printMatrix(b->offsetMatrix);
    cout << "---------------------------------\n";
}
skinningMatrix is what I send to the GPU. The code above prints the matrices for each bone (the output is shown in a screenshot, omitted here); offsetMatrix is a transform that transforms from mesh space to bone space in the bind pose.
In my shader I then do:
layout(location = 0) in vec4 v; // normal vertex data
newVertex = (skinningMatrix[boneIndex.x] * v) * weights.x;
newVertex = (skinningMatrix[boneIndex.y] * v) * weights.y + newVertex;
newVertex = (skinningMatrix[boneIndex.z] * v) * weights.z + newVertex;
Any hints on what could be wrong with my computations?
I am currently working through skeletal animation myself, and the only thing I noticed which may be an issue is with how you use the offset matrix from ASSIMP. The matrix in question is a matrix which "transforms from mesh space to bone space in bind pose".
To my knowledge this matrix is intended to be used as-is, which will essentially take your vertices into the bone's local space; you then multiply by a 'new' global joint pose, which takes the vertices from bone space back to model space.
When you invert the matrix, you are transforming the vertices into model space again, and then the global joint pose of your current animation frame pushes the vertices even further.
I believe the solution is to remove the inversion of your offset matrix, so that your vertices move from model space to joint space and back to model space ('model -> joint -> model').
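Applied to the snippet in the question, the suggested change would look like this (a sketch only, keeping the names from the question):
// Use the ASSIMP offset matrix as-is instead of inverting it.
b->skinningMatrix = b->absoluteMatrix * b->offsetMatrix;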