I'd like to perform the following action using CGAL:
Intersect one surface mesh (the cutting mesh, cutMesh in the code below) with another surface mesh (the mesh to be cut, cuttedMesh) in order to divide the latter into two separate meshes.
To do so I use the CGAL Mesh Slicer, which gives me a set of polylines. Unfortunately, the slicer doesn't tell me which face of the mesh being cut each polyline belongs to. Moreover, it slices with the whole supporting plane of the face rather than with the face itself.
How can I retrieve this information?
I need it to subdivide the mesh being cut along the polylines, and then to split it into two separate meshes.
In my context the cutting mesh is a simple surface mesh (image in the original post).
Here is the cutting mesh together with the mesh to be cut (image in the original post).
In my code, I generate a plane from each face of the cutting mesh and perform a slice operation on the mesh to be cut.
Here is my code:
//Import off files and instantiate meshes
const char* filename1 = "test.off";
const char* filename2 = "cutMesh2.off";
std::ifstream input(filename1);
Mesh cuttedMesh, cutMesh;
if (!input || !(input >> cuttedMesh))
{
    std::cerr << "First mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();
input.open(filename2);
if (!input || !(input >> cutMesh))
{
    std::cerr << "Second mesh is not a valid off file." << std::endl;
    return 0;
}
input.close();

// AABB slicer constructor from the cutted mesh
AABB_tree tree(edges(cuttedMesh).first, edges(cuttedMesh).second, cuttedMesh);
tree.accelerate_distance_queries();
CGAL::Polygon_mesh_slicer<Mesh, K> slicer_aabb(cuttedMesh, tree);
std::cout << cutMesh.num_vertices() << std::endl;

// For each face of the cut mesh
BOOST_FOREACH(face_descriptor f, faces(cutMesh))
{
    std::cout << "Face " << f << std::endl;
    Point points[3];
    int i = 0;
    // for each point of the current face
    BOOST_FOREACH(vertex_descriptor v, CGAL::vertices_around_face(cutMesh.halfedge(f), cutMesh))
    {
        points[i] = cutMesh.point(v);
        ++i;
    }
    Polylines polylines;
    // Perform the slice between the current face of the cut mesh and the cutted mesh
    slicer_aabb(K::Plane_3(points[0], points[1], points[2]), std::back_inserter(polylines));
    std::cout << "the slicer intersects " << polylines.size() << " polylines" << std::endl;
    // for each polyline computed by this face of the cut mesh
    BOOST_FOREACH(Polyline_type polyline, polylines)
    {
        std::cout << "Polyline : " << polyline.size() << " points" << std::endl;
        BOOST_FOREACH(Point point, polyline)
        {
            std::cout << "Point : " << point << std::endl;
        }
    }
    std::cout << std::endl;
    polylines.clear();
}
Thanks for your help.
The answer to this post was proposed by sloriot:
Try using the undocumented clip() function located in Polygon_mesh_processing/include/CGAL/Polygon_mesh_processing/internal/clip.h.
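For reference, in more recent CGAL releases (4.13 and later) a clip() function is part of the documented Polygon_mesh_processing API, so the internal header is no longer needed. Below is a minimal sketch of using it with the file names from the question; it is an assumption on my part that this matches the original use case, and the clipper is expected to be a closed triangle mesh (check the preconditions in the CGAL documentation):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/clip.h>
#include <fstream>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                       Mesh;

namespace PMP = CGAL::Polygon_mesh_processing;

int main()
{
    Mesh cuttedMesh, cutMesh;              // mesh to be cut, clipper
    std::ifstream in1("test.off");
    std::ifstream in2("cutMesh2.off");
    if (!(in1 >> cuttedMesh) || !(in2 >> cutMesh))
    {
        std::cerr << "Could not read the input meshes." << std::endl;
        return 1;
    }

    // Clip cuttedMesh by cutMesh, in place. With clip_volume(false) the
    // clipped part is kept as an open surface instead of being closed
    // along the clip boundary.
    PMP::clip(cuttedMesh, cutMesh, PMP::parameters::clip_volume(false));

    std::ofstream out("clipped.off");
    out << cuttedMesh;
    return 0;
}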
Related
I want to use the Efficient RANSAC implementation of CGAL, but whenever I try to set my own parameters, the algorithm no longer detects any shape.
This work is related to the Polyfit implementation in CGAL. I want to fine-tune the plane detection to see the influence it has on the algorithm. When I use the standard call to ransac.detect(), it works perfectly. However, when I want to set my own parameters it just doesn't find any plane, even when I set them manually to the default values.
Here is my code, strongly related to this example:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/IO/read_xyz_points.h>
#include <CGAL/IO/Writer_OFF.h>
#include <CGAL/property_map.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Shape_detection/Efficient_RANSAC.h>
#include <CGAL/Polygonal_surface_reconstruction.h>
#ifdef CGAL_USE_SCIP
#include <CGAL/SCIP_mixed_integer_program_traits.h>
typedef CGAL::SCIP_mixed_integer_program_traits<double> MIP_Solver;
#elif defined(CGAL_USE_GLPK)
#include <CGAL/GLPK_mixed_integer_program_traits.h>
typedef CGAL::GLPK_mixed_integer_program_traits<double> MIP_Solver;
#endif
#if defined(CGAL_USE_GLPK) || defined(CGAL_USE_SCIP)
#include <CGAL/Timer.h>
#include <fstream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::Point_3 Point;
typedef Kernel::Vector_3 Vector;
// Point with normal, and plane index
typedef boost::tuple<Point, Vector, int> PNI;
typedef std::vector<PNI> Point_vector;
typedef CGAL::Nth_of_tuple_property_map<0, PNI> Point_map;
typedef CGAL::Nth_of_tuple_property_map<1, PNI> Normal_map;
typedef CGAL::Nth_of_tuple_property_map<2, PNI> Plane_index_map;
typedef CGAL::Shape_detection::Efficient_RANSAC_traits<Kernel, Point_vector, Point_map, Normal_map> Traits;
typedef CGAL::Shape_detection::Efficient_RANSAC<Traits> Efficient_ransac;
typedef CGAL::Shape_detection::Plane<Traits> Plane;
typedef CGAL::Shape_detection::Point_to_shape_index_map<Traits> Point_to_shape_index_map;
typedef CGAL::Polygonal_surface_reconstruction<Kernel> Polygonal_surface_reconstruction;
typedef CGAL::Surface_mesh<Point> Surface_mesh;
int main(int argc, char ** argv)
{
    if (argc < 2) {
        std::cerr << "Usage: " << argv[0] << " input.xyz" << std::endl;
        return EXIT_FAILURE;
    }
    Point_vector points;
    // Loads the point set from a file.
    const std::string input_file = argv[1];
    std::ifstream input_stream(input_file.c_str());
    if (input_stream.fail()) {
        std::cerr << "failed to open file '" << input_file << "'" << std::endl;
        return EXIT_FAILURE;
    }
    std::cout << "Loading point cloud: " << input_file << "...";
    CGAL::Timer t;
    t.start();
    if (!input_stream ||
        !CGAL::read_xyz_points(input_stream,
            std::back_inserter(points),
            CGAL::parameters::point_map(Point_map()).normal_map(Normal_map())))
    {
        std::cerr << "Error: cannot read file " << input_file << std::endl;
        return EXIT_FAILURE;
    }
    else
        std::cout << " Done. " << points.size() << " points. Time: " << t.time() << " sec." << std::endl;

    // Shape detection
    Efficient_ransac ransac;
    ransac.set_input(points);
    ransac.add_shape_factory<Plane>();
    std::cout << "Extracting planes...";
    t.reset();

    // Set parameters for shape detection.
    Efficient_ransac::Parameters parameters;
    // Probability to miss the largest primitive at each iteration.
    parameters.probability = 0.05;
    // Detect shapes with at least 100 points.
    parameters.min_points = 100;
    // Maximum Euclidean distance between a point and a shape.
    parameters.epsilon = 0.01;
    // Maximum Euclidean distance between points to be clustered.
    parameters.cluster_epsilon = 0.01;
    // Maximum normal deviation: 0.9 < dot(surface_normal, point_normal).
    parameters.normal_threshold = 0.9;

    // Detect shapes.
    ransac.detect(parameters);
    //ransac.detect();
    Efficient_ransac::Plane_range planes = ransac.planes();
    std::size_t num_planes = planes.size();
    std::cout << " Done. " << num_planes << " planes extracted. Time: " << t.time() << " sec." << std::endl;

    // Stores the plane index of each point as the third element of the tuple.
    Point_to_shape_index_map shape_index_map(points, planes);
    for (std::size_t i = 0; i < points.size(); ++i) {
        // get() from the property map returns the plane index of the i-th point.
        int plane_index = get(shape_index_map, i);
        points[i].get<2>() = plane_index;
    }
    //////////////////////////////////////////////////////////////////////////
    std::cout << "Generating candidate faces...";
    t.reset();
    Polygonal_surface_reconstruction algo(
        points,
        Point_map(),
        Normal_map(),
        Plane_index_map()
    );
    std::cout << " Done. Time: " << t.time() << " sec." << std::endl;
    //////////////////////////////////////////////////////////////////////////
    Surface_mesh model;
    std::cout << "Reconstructing...";
    t.reset();
    if (!algo.reconstruct<MIP_Solver>(model)) {
        std::cerr << " Failed: " << algo.error_message() << std::endl;
        return EXIT_FAILURE;
    }
    const std::string output_file = input_file + "_result.off";
    std::ofstream output_stream(output_file.c_str());
    if (output_stream && CGAL::write_off(output_stream, model))
        std::cout << " Done. Saved to " << output_file << ". Time: " << t.time() << " sec." << std::endl;
    else {
        std::cerr << " Failed saving file." << std::endl;
        return EXIT_FAILURE;
    }
    //////////////////////////////////////////////////////////////////////////
    // Also stores the candidate faces as a surface mesh in a file.
    Surface_mesh candidate_faces;
    algo.output_candidate_faces(candidate_faces);
    const std::string candidate_faces_file = input_file + "_candidate_faces.off";
    std::ofstream candidate_stream(candidate_faces_file.c_str());
    if (candidate_stream && CGAL::write_off(candidate_stream, candidate_faces))
        std::cout << "Candidate faces saved to " << candidate_faces_file << "." << std::endl;
    return EXIT_SUCCESS;
}
#else
int main(int, char**)
{
    std::cerr << "This test requires either GLPK or SCIP.\n";
    return EXIT_SUCCESS;
}
#endif // defined(CGAL_USE_GLPK) || defined(CGAL_USE_SCIP)
When launched with my parameters, I get the following output:
Loading point cloud: Scene1/test.xyz... Done. 169064 points. Time: 0.428 sec.
Extracting planes... Done. 0 planes extracted. Time: 8.328 sec.
Generating candidate faces... Done. Time: 0.028 sec.
Reconstructing... Failed: at least 4 planes required to reconstruct a closed surface mesh (only 1 provided)
While I get this when the code calls the RANSAC detection function without parameters:
Loading point cloud: Scene1/test.xyz... Done. 169064 points. Time: 0.448 sec.
Extracting planes... Done. 18 planes extracted. Time: 3.088 sec.
Generating candidate faces... Done. Time: 94.536 sec.
Reconstructing... Done. Saved to Scene1/test.xyz_result.off. Time: 30.28 sec.
Can someone help me set my own parameters for the RANSAC shape detection?
However, when I want to set my own parameters it just doesn't find any
plane, even if I set them manually to the default values.
Just to be sure: "setting them manually to the default values" is not what you are doing in the code you shared.
The default values are documented as:
1% of the total number of points for min_points, which should be around 1700 points in your case, not 100;
1% of the bounding-box diagonal for epsilon and cluster_epsilon. For those I obviously can't tell whether 0.01 matches, since I don't have access to your point set, but if you want to reproduce the default values you should compute a CGAL::Bbox_3 of the input at some point.
If you use these values, there is no reason why it should behave differently than with no parameters given (if it does not work, please let me know, because there may be a bug).
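As a rough illustration (not from the original answer), here is a minimal sketch of reproducing those documented defaults by hand. It reuses the PNI/Point_map/Efficient_ransac typedefs and the points/ransac variables from the question, and accumulates the bounding box with a plain loop over the input points:

// Reproduce the documented default parameters by hand.
Efficient_ransac::Parameters parameters;

// min_points: 1% of the total number of points (about 1700 here, not 100).
parameters.min_points = points.size() / 100;

// epsilon / cluster_epsilon: 1% of the bounding-box diagonal of the input.
CGAL::Bbox_3 bbox = points.front().get<0>().bbox();
for (const PNI& p : points)
    bbox = bbox + p.get<0>().bbox();
const double dx = bbox.xmax() - bbox.xmin();
const double dy = bbox.ymax() - bbox.ymin();
const double dz = bbox.zmax() - bbox.zmin();
const double diag = std::sqrt(dx * dx + dy * dy + dz * dz);
parameters.epsilon         = 0.01 * diag;
parameters.cluster_epsilon = 0.01 * diag;

// Documented defaults for the remaining parameters.
parameters.probability      = 0.05;
parameters.normal_threshold = 0.9;

ransac.detect(parameters);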
I am trying to generate normal maps given a mesh, camera pose, and camera intrinsics.
My plan is to calculate the vertex normal for each point in the cloud and then project the points onto an image plane using the corresponding camera pose and intrinsics. More specifically, I would first calculate the vertex normals, then convert the point coordinates from world coordinates into camera coordinates using the camera pose. Finally, using the camera intrinsics, the point cloud can be projected onto an image where each pixel represents the surface normal of the corresponding 3D vertex.
Below is my code:
#include <iostream>
#include <thread>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/features/from_meshes.h>
#include <pcl/visualization/pcl_visualizer.h>
using namespace std;
using namespace pcl;
void readPLY(PolygonMesh::Ptr mesh, string fname, bool printResult=false)
{
    PLYReader reader;
    int success = reader.read(fname, *mesh); // load the file
    if (success == -1) {
        cout << "Couldn't read file " << fname << endl;
        exit(-1);
    }
    if (printResult) {
        cout << "Loaded "
             << mesh->cloud.width * mesh->cloud.height
             << " data points from "
             << fname
             << " with the following fields: "
             << endl;
        // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
        PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
        fromPCLPointCloud2(mesh->cloud, *cloud);
        // print the first 10 vertices
        cout << "Vertices:" << endl;
        for (size_t i = 0; i < 10; ++i)
            cout << " " << cloud->points[i].x
                 << " " << cloud->points[i].y
                 << " " << cloud->points[i].z << endl;
        // print the first 10 polygons
        cout << "Polygons:" << endl;
        for (size_t i = 0; i < 10; ++i) {
            cout << mesh->polygons[i] << endl;
        }
    }
}
void computeNormal(PolygonMesh::Ptr mesh,
                   PointCloud<Normal>::Ptr normal,
                   bool printResult=false)
{
    // convert from pcl/PCLPointCloud2 to pcl::PointCloud<T>
    PointCloud<PointXYZ>::Ptr cloud (new PointCloud<PointXYZ>);
    fromPCLPointCloud2(mesh->cloud, *cloud);
    // compute surface normals from the mesh connectivity
    pcl::features::computeApproximateNormals(*cloud, mesh->polygons, *normal);
    // print results
    if (printResult) {
        cout << "Normal cloud contains "
             << normal->width * normal->height
             << " points" << endl;
        // print the first 10 vertex normals
        cout << "Vertex normals:" << endl;
        for (size_t i = 0; i < 10; ++i)
            cout << " " << normal->points[i] << endl;
    }
}
int main (int argc, char** argv)
{
    // usage: ./main [path/to/ply] (--debug)
    if (argc < 2) {
        cout << "Usage: " << argv[0] << " path/to/ply [--debug]" << endl;
        return -1;
    }
    string fname = argv[1];
    // check if the debug flag is set
    bool debug = false;
    for (int i = 0; i < argc; ++i) {
        string arg = argv[i];
        if (arg == "--debug")
            debug = true;
    }
    // read file
    PolygonMesh::Ptr mesh (new PolygonMesh);
    readPLY(mesh, fname, debug);
    // calculate normals
    PointCloud<Normal>::Ptr normal (new PointCloud<Normal>);
    computeNormal(mesh, normal, debug);
}
Currently, I have already obtained a surface normal for each vertex with pcl::features::computeApproximateNormals. Is there a way to use PCL to project the normals onto an image plane, with the xyz-elements of the normal mapped to the RGB channels, and save the image to a file?
Welcome to Stack Overflow. What the documentation says is:
Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface in that point.
From what you say, I gather that you already have the surfaces, for which you can easily calculate surface normals. Normal estimation is needed because 3D point cloud data is basically a bunch of sample points from the real world, with no surface information attached. What you do is estimate a surface around a point using planar fitting (2D regression), and then obtain the surface normal from it. You cannot compare these two methods; they essentially serve different purposes.
For question two: yes. Refer to this SO answer.
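Regarding the projection step described in the question, here is a rough sketch of my own (not from the answer above): transform each vertex into camera coordinates, project it with the pinhole intrinsics, and write its normal (remapped from [-1, 1] to [0, 255]) into an RGB buffer saved as a plain PPM file. The rotation R, translation t, intrinsics fx, fy, cx, cy and the image size are assumed to be known; no z-buffering or hole filling is done.

#include <Eigen/Dense>
#include <fstream>
#include <string>
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void renderNormalMap(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                     const pcl::PointCloud<pcl::Normal>& normals,
                     const Eigen::Matrix3f& R, const Eigen::Vector3f& t,
                     float fx, float fy, float cx, float cy,
                     int width, int height, const std::string& fname)
{
    std::vector<unsigned char> img(3 * width * height, 0);
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        const Eigen::Vector3f p = R * cloud[i].getVector3fMap() + t;  // world -> camera
        if (p.z() <= 0.f) continue;                                   // behind the camera
        const int u = static_cast<int>(fx * p.x() / p.z() + cx);      // pinhole projection
        const int v = static_cast<int>(fy * p.y() / p.z() + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;
        const Eigen::Vector3f n = (R * normals[i].getNormalVector3fMap()).normalized();
        for (int c = 0; c < 3; ++c)                                    // map [-1,1] -> [0,255]
            img[3 * (v * width + u) + c] =
                static_cast<unsigned char>(255.f * 0.5f * (n[c] + 1.f));
    }
    std::ofstream out(fname, std::ios::binary);                        // plain binary PPM
    out << "P6\n" << width << " " << height << "\n255\n";
    out.write(reinterpret_cast<const char*>(img.data()),
              static_cast<std::streamsize>(img.size()));
}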
I am using VTK to generate a 3D Delaunay triangulation from a list of input points obtained with a terrestrial laser scanner. So far, I have successfully generated some 3D triangulations and saved them in the .vtk or .vtu file formats. However, I need to save them in one of the popular formats such as .ply, .stl, .obj or .wrl. Could you please tell me how I can save a 3D triangulation in one of these formats? My other question is about the SetAlpha and SetTolerance parameters; could you please explain these parameters in detail?
Thanks in advance.
int main( int argc, char* argv[] )
{
    // load the point cloud
    vtkSmartPointer<vtkSimplePointsReader> reader = vtkSmartPointer<vtkSimplePointsReader>::New();
    reader->SetFileName("kucuk50k.xyz");
    reader->Update();
    vtkPolyData* polydata = reader->GetOutput();
    std::cout << "The point cloud is loaded" << std::endl;
    // end of point cloud loading
    std::cout << "----------------------------------------------" << std::endl;
    // Generate a mesh from the input points. If Alpha is non-zero, then
    // tetrahedra, triangles, edges and vertices that lie within the
    // alpha radius are output.
    std::cout << "Start delaunay 3d triangulation" << std::endl;
    vtkSmartPointer<vtkDelaunay3D> delaunay3DAlpha = vtkSmartPointer<vtkDelaunay3D>::New();
    delaunay3DAlpha->SetInputConnection(reader->GetOutputPort());
    delaunay3DAlpha->SetAlpha(0.01);
    std::cout << "3d Delaunay computed" << std::endl;
    std::cout << "----------------------------------------------" << std::endl;
    std::cout << "Start writing the triangulation" << std::endl;
    vtkSmartPointer<vtkXMLUnstructuredGridWriter> ugWriter = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
    ugWriter->SetInputConnection(delaunay3DAlpha->GetOutputPort());
    ugWriter->SetFileName("delaunayy_50k.vtk");
    ugWriter->Write();
    std::cout << "VTK file created" << std::endl;
    return EXIT_SUCCESS;
}
You are trying to compute a triangular surface from a scanned terrain, right? Is there a good reason to use 3D Delaunay, which creates tetrahedra, instead of 2.5D Delaunay? You may have a look at my Fade2.5D library; there is a free student version. A limitation of 2.5D is, however, that you can't represent vertical walls, so buildings are not directly supported.
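As a rough sketch of the export part of the question (not part of the answer above): the tetrahedral output of vtkDelaunay3D can be reduced to its outer surface with vtkDataSetSurfaceFilter and then written with vtkPLYWriter (or vtkSTLWriter for .stl). The exact pipeline below is an assumption on my part; it appends to the question's code after delaunay3DAlpha has been set up.

#include <vtkDataSetSurfaceFilter.h>
#include <vtkPLYWriter.h>

// Extract the boundary surface of the tetrahedral mesh.
vtkSmartPointer<vtkDataSetSurfaceFilter> surfaceFilter =
    vtkSmartPointer<vtkDataSetSurfaceFilter>::New();
surfaceFilter->SetInputConnection(delaunay3DAlpha->GetOutputPort());

// Write the surface as a PLY file; Write() triggers the pipeline update.
vtkSmartPointer<vtkPLYWriter> plyWriter = vtkSmartPointer<vtkPLYWriter>::New();
plyWriter->SetInputConnection(surfaceFilter->GetOutputPort());
plyWriter->SetFileName("delaunay_50k.ply");
plyWriter->Write();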
I've used OpenCV to calibrate my camera from different views and obtained the intrinsics, rvec and tvec with a reprojection error of 0.03 px (so I think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection in the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;
    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;
    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;
    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f/cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;
    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for :" << unproj.at<cv::Point3f>(i,0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i,0));
    }
}

void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
Now I have mixed feelings about the output. When I draw the projected point on each view, they all correspond, BUT no matter where I click in the "canonical view" in the first place, the projected point is always the same.
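For reference (this is not from the post, just one common formulation): a clicked pixel only defines a ray, so a depth has to be chosen before the point can be reprojected into another view. A minimal sketch, where K, R and t are the calibrated intrinsics, rotation matrix and translation:

#include <opencv2/core.hpp>

// Back-project a pixel to a 3D point at a chosen depth (along the optical axis),
// then express it in world coordinates: X_world = R^T * (X_cam - t).
cv::Point3f backproject(const cv::Point2f& px, float depth,
                        const cv::Matx33f& K, const cv::Matx33f& R,
                        const cv::Vec3f& t)
{
    cv::Vec3f ray = K.inv() * cv::Vec3f(px.x, px.y, 1.f);  // ray in camera coordinates
    cv::Vec3f Xc  = ray * (depth / ray(2));                // pick a point on the ray
    cv::Vec3f Xw  = R.t() * (Xc - t);                      // camera -> world
    return cv::Point3f(Xw(0), Xw(1), Xw(2));
}

The resulting 3D point can then be passed to cv::projectPoints for the other views.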
I've successfully parsed the IQE (Inter-Quake Export) format and now I'm stuck at displaying the mesh in bind pose.
All vertices have a weird transformation wherever the root bone (which covers the full mesh for orientation) is not the only bone influencing the vertex. You can see it at the arm / shoulder / neck area of the mesh. This mesh has 3 bones: one root bone covering the whole mesh and two arm bones.
You can see how the mesh should look in the background (exported as OBJ).
For better understanding, I have the following system:
1. I load all vertex data into one big VBO
(vertices, uvs, normals, tangents, bitangents, boneIndicies(4) (indices into the joint list) and boneWeights(4)).
2. I add all joints to a joint list and create a tree system (a simple linked list with position, rotation and parent pointer).
3. I have a separate list called boneMatrices where I store... well, my bone matrices. Currently this happens every frame; later I will precompute the matrices for each animation frame.
I try to calculate the bone matrix the following way:
for (int i = 0; i < this->jointList.size(); i++)
{
    pixel::CJoint *joint = this->jointList.at(i);
    std::cout << "Joint ------- " << joint->name << " -------- values: \n";
    std::cout << "Translation: " << joint->position.x << " " << joint->position.y << " " << joint->position.z << "\n";
    std::cout << "Quaternion: " << joint->rotation.x << " " << joint->rotation.y << " " << joint->rotation.z << " " << joint->rotation.w << "\n";
    pixel::matrix4 rotation = pixel::CMatrix::fromQuaternion(joint->rotation);
    pixel::matrix4 offset = pixel::CMatrix::translateMatrix(joint->position);
    pixel::matrix4 baseMatrix = rotation * offset; // translation * rotation
    joint->bindPose = baseMatrix;
    joint->invBindPose = pixel::CMatrix::inverseMatrix(baseMatrix);
    if (joint->parent != NULL)
    {
        std::cout << "Joint: " << joint->name << " is child of " << joint->parent->name << " \n";
        joint->bindPose = joint->bindPose * joint->parent->invBindPose;
        joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
    }
    std::cout << "\n";
}
I store the transpose of joint->invBindPose (otherwise the mesh is upside down) in boneMatrices and send it to the shader:
boneMatrices is a std::vector of matrix4:
this->material.setParameter("boneMatrix", this->boneMatrices.at(0), this->boneMatrices.size());
The root bone bind calculation has to be right (at least I think so), because the head is in the right place, and so are the eyes (which currently don't have any bone influence).
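For comparison, here is a sketch of the conventional bind-pose accumulation. It is not from the post; it reuses the post's own pixel::* classes and its child-times-parent multiplication order, and assumes jointList is ordered so that parents come before children. The key difference is that the global bind pose accumulates the parent's global bind pose, not its inverse, and only the final inverse is sent to the shader.

for (std::size_t i = 0; i < jointList.size(); ++i)
{
    pixel::CJoint *joint = jointList.at(i);
    // Local bind transform of this joint (same convention as in the post).
    pixel::matrix4 local = pixel::CMatrix::fromQuaternion(joint->rotation)
                         * pixel::CMatrix::translateMatrix(joint->position);
    // Global bind pose: accumulate with the parent's GLOBAL bind pose.
    joint->bindPose = (joint->parent != NULL)
                    ? local * joint->parent->bindPose
                    : local;
    // Inverse bind pose: what the skinning shader needs. When animating, it is
    // multiplied by the joint's current global pose; in bind pose that product
    // is the identity, so the mesh is displayed undeformed.
    joint->invBindPose = pixel::CMatrix::inverseMatrix(joint->bindPose);
}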