I am currently using the Point Cloud Library (PCL) to do some work with point clouds. Now I need to compute a mesh for a point cloud, and I thought the best thing to do is to use MeshLab. So far so good; my problem is that my point cloud has labels, i.e. it is of the following form:
pcl::PointCloud<pcl::PointXYZRGBL> cloud;
Important: I cannot omit the labels; I have to know, after the mesh is computed, which point of the mesh has which label. Later, after some manipulation etc., I save this cloud via
pcl::io::savePLYFileBinary(writePath, *cloud);
which works fine IF the cloud is of type
pcl::PointCloud<pcl::PointXYZRGB> cloud;
but does not work in the first case. Does anyone have an idea what I could do to still get a PLY file that contains labels and can be loaded into MeshLab?
Thanks all!
As MeshLab is not able to open your labeled point cloud, I'd suggest the following:
Export your point cloud to a format readable by MeshLab (for example, the pcl::PointCloud<pcl::PointXYZRGB> you mentioned).
Reconstruct the triangle mesh using an interpolating method such as Ball Pivoting. An interpolating method is necessary in order to preserve the original points as the vertices of the mesh. When you've finished, save the mesh.
Load the mesh and match the vertices with your original point cloud so you can recover the labels and any other associated attributes (a sketch of this step follows below). In the quick test I made, even the vertex order matched that of the original points.
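A minimal sketch of that matching step, assuming PCL is available and the reconstructed mesh vertices have been loaded back into a labeled cloud (the names recoverLabels and meshCloud are illustrative): a 1-NN search against the original cloud recovers each vertex's label.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

void recoverLabels(const pcl::PointCloud<pcl::PointXYZRGBL>::Ptr& original,
                   pcl::PointCloud<pcl::PointXYZRGBL>& meshCloud)
{
    pcl::KdTreeFLANN<pcl::PointXYZRGBL> tree;
    tree.setInputCloud(original);

    std::vector<int> index(1);
    std::vector<float> sqDist(1);
    for (auto& vertex : meshCloud.points)
    {
        // With an interpolating reconstruction the nearest original point
        // should coincide with the vertex (squared distance ~ 0).
        if (tree.nearestKSearch(vertex, 1, index, sqDist) > 0)
            vertex.label = original->points[index[0]].label;
    }
}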
Update
You mentioned in a comment that you were using Screened Poisson Reconstruction. This method uses the input points as positional constraints to improve its precision, but it is still an approximating method, so the output vertices are not guaranteed to match the input points (and probably won't).
You can either switch to an interpolating method (if noise and outliers allow it), or find the closest point for each vertex (using a 1-NN search, as you are doing now) to label the vertices.
The above is valid for all discrete values such as labels. You should also adjust other attributes, such as color, to better match the reconstruction (since the vertices no longer match the points). To do so, you can interpolate the corresponding value from the k-NN.
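A hedged sketch of that k-NN interpolation for color, using inverse-distance weighting (the function name and k are illustrative; tree and original are as in the sketch above):

pcl::RGB interpolateColor(const pcl::KdTreeFLANN<pcl::PointXYZRGBL>& tree,
                          const pcl::PointCloud<pcl::PointXYZRGBL>& original,
                          const pcl::PointXYZRGBL& vertex, int k = 5)
{
    std::vector<int> idx(k);
    std::vector<float> sqDist(k);
    tree.nearestKSearch(vertex, k, idx, sqDist);

    float r = 0, g = 0, b = 0, wSum = 0;
    for (int i = 0; i < k; ++i)
    {
        float w = 1.0f / (sqDist[i] + 1e-6f);  // inverse-distance weight
        r += w * original.points[idx[i]].r;
        g += w * original.points[idx[i]].g;
        b += w * original.points[idx[i]].b;
        wSum += w;
    }
    pcl::RGB c;
    c.r = static_cast<std::uint8_t>(r / wSum);
    c.g = static_cast<std::uint8_t>(g / wSum);
    c.b = static_cast<std::uint8_t>(b / wSum);
    return c;
}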
I have a set of 2D points of a known density that I want to mesh while taking the holes into account. Basically, given the following input:
I want something like this:
I tried PCL's ConcaveHull, but it doesn't handle the holes and split meshes very well.
I looked at CGAL's Alpha Shapes, which seem to go in the right direction (creating a polygon from a point cloud), but I don't know how to get triangles after that.
I thought of passing the resulting polygons to a constrained triangulation algorithm and marking domains, but I didn't find how to get a list of polygons.
Getting the triangulated polygon is at least a two-step process. First you need to triangulate your 2D points (using something like a Delaunay2D algorithm); there you can set the maximum edge length for the triangles and get the desired shape. Then you can decimate the point cloud and re-triangulate. Another option is to use the convex hull to get the outside polygon, then extract the inside polygon through a TriangulationCDT algorithm, apply some PolygonBooleanOperations, obtain the desired polygon, and finally re-triangulate.
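If you would rather stay with CGAL, here is a minimal sketch of the first route: a plain Delaunay triangulation followed by discarding triangles with overly long edges, which carves out the holes and separates disconnected parts. The maxEdge threshold is illustrative and should be derived from your known point density.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <array>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K> Delaunay;
typedef K::Point_2 Point;

// Keep only Delaunay faces whose longest edge is shorter than maxEdge.
std::vector<std::array<Point, 3>>
meshWithHoles(const std::vector<Point>& pts, double maxEdge)
{
    Delaunay dt(pts.begin(), pts.end());
    std::vector<std::array<Point, 3>> triangles;
    for (auto f = dt.finite_faces_begin(); f != dt.finite_faces_end(); ++f)
    {
        bool keep = true;
        for (int i = 0; i < 3; ++i)
        {
            const Point& a = f->vertex(i)->point();
            const Point& b = f->vertex((i + 1) % 3)->point();
            if (CGAL::squared_distance(a, b) > maxEdge * maxEdge)
                keep = false;  // long edge: it spans a hole or a gap
        }
        if (keep)
            triangles.push_back({f->vertex(0)->point(),
                                 f->vertex(1)->point(),
                                 f->vertex(2)->point()});
    }
    return triangles;
}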
I suggest you look into the Geometric Tools library and specifically the geometric samples. I think everything you need is in there, and it is much less library- and path-heavy than CGAL (whose algorithms are not free for this type of work unless it is a school project) or PCL (I really like that library for segmentation, but its triangulation breaks often and is slow).
If this solves your problem, please mark it as your answer. Thank you!
I'm using the pcl::FPFHEstimation class to detect feature points, like this:
pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fest;
pcl::PointCloud<pcl::FPFHSignature33>::Ptr object_features(new pcl::PointCloud<pcl::FPFHSignature33>());
fest.setRadiusSearch(feature_radius); // neighborhood radius used to build each histogram
fest.setInputCloud(source_cloud);
fest.setInputNormals(point_normal);
fest.compute(*object_features);       // one FPFHSignature33 per input point
My question is: how do I visualize the detected feature points in the point cloud? For example, detected feature points in red and non-feature points in white.
I have searched a lot for this, but I only found ways to display histograms, which is not what I want.
The Fast Point Feature Histogram is a point descriptor (which in PCL comes in the form of pcl::FPFHSignature33). This means that the algorithm computes a histogram for every point in the input point cloud. Once the descriptors of the points are computed, you can classify, segment, etc. the points, but classifying or segmenting the points would be another algorithm.
In this paper I use FPFH to coarsely align point clouds. There I use the FPFH descriptors to decide which point from cloud A corresponds to which point from cloud B (associating points). In that publication, what I do is compute the curvature for all points before the FPFH, and then I only compute the FPFH of the subset of points whose curvature is above a given threshold. This is how I extract key points. Then, I use the FPFH of these key points for the rest of the pipeline (in my case, associating points from different point clouds).
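A hypothetical sketch of that curvature-gated approach in PCL (the function name, threshold, and radius are illustrative): compute normals beforehand (pcl::Normal carries a curvature field), keep the indices of high-curvature points, and restrict the FPFH computation to them via setIndices().

#include <pcl/point_types.h>
#include <pcl/features/fpfh_omp.h>

pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeKeypointFPFH(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
                    const pcl::PointCloud<pcl::Normal>::ConstPtr& normals,
                    float curvatureThreshold, float featureRadius)
{
    // Select indices of high-curvature points as key points.
    pcl::IndicesPtr keypoints(new std::vector<int>);
    for (std::size_t i = 0; i < normals->size(); ++i)
        if ((*normals)[i].curvature > curvatureThreshold)
            keypoints->push_back(static_cast<int>(i));

    pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fest;
    fest.setRadiusSearch(featureRadius);
    fest.setInputCloud(cloud);
    fest.setInputNormals(normals);
    fest.setIndices(keypoints);  // only compute FPFH at the key points

    pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(
        new pcl::PointCloud<pcl::FPFHSignature33>);
    fest.compute(*features);
    return features;
}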
The analogy that you propose in one of your comments, "So the feature descriptor computed by FPFH is more like a multi-dimensional eigenvector or something", is a good one.
So, to answer your question: no, there is no tool apart from histograms to visualize the FPFH data (or at least no out-of-the-box PCL class that you can use).
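That said, if what you want to see is the key points themselves rather than the descriptor values, a minimal PCLVisualizer sketch (assuming you have already extracted the key-point subset into its own cloud) draws the full cloud in white and the key points in red:

#include <pcl/visualization/pcl_visualizer.h>

void showKeypoints(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
                   const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& keypointCloud)
{
    pcl::visualization::PCLVisualizer viewer("FPFH key points");
    pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ>
        white(cloud, 255, 255, 255), red(keypointCloud, 255, 0, 0);
    viewer.addPointCloud<pcl::PointXYZ>(cloud, white, "cloud");
    viewer.addPointCloud<pcl::PointXYZ>(keypointCloud, red, "keypoints");
    viewer.setPointCloudRenderingProperties(
        pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "keypoints");
    while (!viewer.wasStopped())
        viewer.spinOnce(100);
}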
I'm trying to find spheres in a point cloud with pcl::SACSegmentation using RANSAC. The cloud was scanned with an accurate terrestrial scanner from one station. The cloud density is about 1 cm. The best results so far are shown in the image below. As you can see, the cloud contains two spheres (r = 7.25 cm) and a steel beam the balls are attached to. I am able to find three sphere candidates whose inlier points are extracted from the cloud in the image (you can see two circle shapes on the beam near the spheres).
[Images: input point cloud, and the extracted inlier points]
So it seems that I am close. Still, the fitted sphere centers are too far (~10 cm) from the ground truth. Any suggestions on how I could improve this? I have been tweaking the model parameters for quite some time. Here are the parameters for the aforementioned results:
seg.setOptimizeCoefficients(true);
seg.setModelType(pcl::SACMODEL_SPHERE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(500000);
seg.setDistanceThreshold(0.0020);  // 2 mm inlier threshold
seg.setProbability(0.99900);
seg.setRadiusLimits(0.06, 0.08);   // expected radius is ~7.25 cm
seg.setInputCloud(cloud);
I also tried to improve the results by including point normals in the model, with no better results. Yet there are a couple more parameters to adjust, so there might be some combinations I have not tried.
I'll happily give you more information if needed.
Thanks,
naikh0u
After some investigation I have come to the conclusion that I can't find spheres with SACSegmentation in a cloud that contains a lot of other points that don't belong to any sphere shape. In my case the beam is too much for the algorithm.
Thus, I have to choose the points that show some potential of being part of a sphere shape. I also think I need to separate the points belonging to different spheres. I tested this and saw that my code works pretty well if the input cloud contains only the points of a single sphere, with some "natural" noise.
Some have solved this problem by first extracting all points belonging to planes and then searching for spheres. Others have used the colors of the target (in the case of a camera) to extract only the needed points.
Deleting plane points would work for my example cloud, but my application may have more complex shapes too, so it may be too simplistic.
Finally, I got satisfying results by clustering the cloud with pcl::EuclideanClusterExtraction and feeding the clusters to the sphere search one by one.
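For reference, a minimal sketch of that pipeline, reusing the RANSAC parameters from above. The cluster tolerance and minimum size are illustrative and assume a pcl::PointCloud<pcl::PointXYZ>::Ptr named cloud:

#include <pcl/common/io.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud(cloud);

std::vector<pcl::PointIndices> clusters;
pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
ec.setClusterTolerance(0.02);  // 2 cm, roughly matching the ~1 cm density
ec.setMinClusterSize(100);
ec.setSearchMethod(tree);
ec.setInputCloud(cloud);
ec.extract(clusters);

for (const auto& indices : clusters)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cluster(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::copyPointCloud(*cloud, indices, *cluster);

    pcl::ModelCoefficients coefficients;
    pcl::PointIndices inliers;
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_SPHERE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setMaxIterations(500000);
    seg.setDistanceThreshold(0.0020);
    seg.setProbability(0.99900);
    seg.setRadiusLimits(0.06, 0.08);
    seg.setInputCloud(cluster);
    seg.segment(inliers, coefficients);
    // coefficients.values = {cx, cy, cz, r} when a sphere is found
}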
I have various point clouds defining RT-STRUCTs, called ROIs, from DICOM files. The DICOM files are produced by tomographic scanners. Each ROI is formed by a point cloud, and it represents some 3D object.
The goal is to get the 2D curve that results from a plane cutting the ROI's point cloud. The problem is that I can't just use the points that are intersected by the plane. What I probably need is to intersect a 3D concave hull with the plane and get the resulting intersection contour.
Are there any libraries which have already implemented these operations? I've found the PCL library, and it should probably be able to solve my problem, but I can't figure out how to achieve this with PCL. In addition, I can use MATLAB as well; we use it through its runtime from C++.
Has anyone stumbled with this problem already?
P.S. As I've mentioned above, I need to use the solution from my C++ code, so it should be either a library or a MATLAB solution that I'll use through the MATLAB Runtime.
P.P.S. Accuracy in this kind of calculation is really important; it will be used in medical software intended for working with brain tumors, so you can imagine the consequences of an error (:
You first need to form a surface from the point set.
If it's possible to pick a 2D direction for the points (i.e. they form a convex hull in one view), you can use a simple 2D Delaunay triangulation in those two coordinates.
Otherwise you need a full 3D surfacing method (Marching Cubes or Poisson).
Then, once you have the triangles, it's simple to calculate the contour line where a plane cuts them.
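For illustration, a small self-contained sketch of that last step (no library assumed): intersect one triangle with the plane n·x = d and collect the cut segment; accumulating these segments over all mesh triangles yields the contour. Degenerate cases (a vertex lying exactly on the plane) are ignored here.

#include <array>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 lerp(const Vec3& a, const Vec3& b, double t)
{ return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)}; }

// Returns the two endpoints of the intersection segment, or nothing if the
// plane misses the triangle.
std::optional<std::array<Vec3, 2>>
cutTriangle(const std::array<Vec3, 3>& tri, const Vec3& n, double d)
{
    std::array<Vec3, 2> pts;
    int count = 0;
    for (int i = 0; i < 3 && count < 2; ++i)
    {
        const Vec3& a = tri[i];
        const Vec3& b = tri[(i + 1) % 3];
        double da = dot(n, a) - d;  // signed distances to the plane
        double db = dot(n, b) - d;
        if (da * db < 0.0)          // edge crosses the plane
            pts[count++] = lerp(a, b, da / (da - db));
    }
    if (count == 2) return pts;
    return std::nullopt;
}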
See links in Mesh generation from points with x, y and z coordinates
Perhaps you could just discard the points that are far from the plane and project the remaining ones onto the plane. You'll still need to reconstruct the curve in the plane but there are several good methods for that. See for instance http://www.cse.ohio-state.edu/~tamaldey/curverecon.htm and http://valis.cs.uiuc.edu/~sariel/research/CG/applets/Crust/Crust.html.
From my last question: Marching Cube Question
However, I am still unclear on the following:
How do I create the imaginary cube/voxel to check if a vertex is below the isosurface?
How do I know which vertices are below the isosurface?
How does each cube/voxel determine which cubeindex/surface configuration to use?
How do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL (C is a little bit beyond me).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
float computeScalar(const Vector3<float>& v)
{
    // distance from the point v to the origin
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point in your scalar field to the origin. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which are outside. You want to use the less-than or greater-than operators because the volume divides the space in two. When you know which points in your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles. The positions of the mesh vertices are computed by interpolating across the intersected edges to find the actual intersection points.
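To make the classification and interpolation steps concrete, a small sketch (the Vec3f type and the computeScalar declaration are placeholders): classify the eight corners of one cube into a bitmask, then interpolate along the intersected edges.

// Hypothetical helper type for this sketch; any 3-float vector works.
struct Vec3f { float x, y, z; };

float computeScalar(const Vec3f& v);  // e.g. the distance function above

// Classify the 8 corners of one cube against the isovalue and build the
// 8-bit index used with the standard edgeTable/triTable lookup arrays.
int classifyCube(const Vec3f corners[8], float isovalue)
{
    int cubeindex = 0;
    for (int i = 0; i < 8; ++i)
        if (computeScalar(corners[i]) < isovalue)  // corner is "inside"
            cubeindex |= (1 << i);
    return cubeindex;  // 0..255, selects one triangle configuration
}

// Linear interpolation along an intersected edge: the mesh vertex goes
// where the scalar crosses the isovalue.
Vec3f interpolateVertex(const Vec3f& p0, const Vec3f& p1,
                        float s0, float s1, float isovalue)
{
    float t = (isovalue - s0) / (s1 - s0);
    return { p0.x + t * (p1.x - p0.x),
             p0.y + t * (p1.y - p0.y),
             p0.z + t * (p1.z - p0.z) };
}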
If you want to represent, say, an apple with scalar fields, you would either need a source data set to plug into your application, or a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and expanding from there.
1) It depends on your implementation. You'll need a data structure where you can look up the value at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations for this, but in general, start with the first corner and just check the values of all 8 corners of the cube.
3) Most (fast) implementations create a bitmask to use as an index into a static lookup table of configurations. There are only so many possible configurations.
4) Once you've made the triangles from the triTable, you can use OpenGL to render them.
Let's say i have a point cloud data of an apple. how do i proceed?
This isn't going to work directly with Marching Cubes. Marching Cubes requires voxel data, so you'd need some algorithm to resample the point cloud into a cubic volume. Gaussian splatting is an option here; see the sketch below.
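A hedged sketch of that resampling route using VTK (assuming your points already sit in a vtkPolyData; the sample dimensions, radius, and isovalue are illustrative): splat the points into a regular scalar volume, then extract an isosurface from it.

#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkGaussianSplatter.h>
#include <vtkContourFilter.h>

vtkSmartPointer<vtkPolyData> splatAndContour(vtkPolyData* points)
{
    // Resample the scattered points into a 50^3 volume of Gaussian splats.
    auto splatter = vtkSmartPointer<vtkGaussianSplatter>::New();
    splatter->SetInputData(points);
    splatter->SetSampleDimensions(50, 50, 50);
    splatter->SetRadius(0.05);  // splat radius, as a fraction of the volume's longest side

    // Extract an isosurface from the splatted volume (Marching Cubes style).
    auto contour = vtkSmartPointer<vtkContourFilter>::New();
    contour->SetInputConnection(splatter->GetOutputPort());
    contour->SetValue(0, 0.01);  // isovalue; tune for your data
    contour->Update();
    return contour->GetOutput();
}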
Normally, if you are working from a point cloud and want to see the surface, you should look at surface reconstruction algorithms instead of Marching Cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes and is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/