NaN values in normal points - C++

I'm having a problem with the values in the coordinates of my normal points. I estimate the normals of my cloud, but when I check the xyz coordinates of a normal, NaN is returned.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for normal estimation.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
// For every point, use all neighbors in a radius of 3cm.
normalEstimation.setRadiusSearch(0.03);
// A kd-tree is a data structure that makes searches efficient.
// The normal estimation object will use it to find nearest neighbors.
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
// Calculate the normals.
normalEstimation.compute(*normals);

for (size_t i = 0; i < cloud->points.size(); i++) {
    std::cout << "x: " << cloud->points[i].x << std::endl;
    std::cout << "normal x: " << normals->points[i].normal_x << std::endl; // -nan here
}
How can I solve it?

Seems like a problem with your input point cloud. Make sure the cloud is not a flat plane and that it contains no duplicate points. In my case the culprit was the search radius, so try tuning that as well.
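As a general diagnostic, you can count how many normals came out as NaN and strip them before further processing. A minimal sketch, reusing the cloud and normals variables from the question (pcl::removeNaNNormalsFromPointCloud lives in pcl/filters/filter.h):

#include <pcl/filters/filter.h> // pcl::removeNaNNormalsFromPointCloud
#include <cmath>                // std::isnan

// Count normals that failed to compute (typically too few neighbors in the search radius).
size_t bad = 0;
for (size_t i = 0; i < normals->size(); i++)
    if (std::isnan(normals->points[i].normal_x))
        bad++;
std::cout << bad << " of " << normals->size() << " normals are NaN" << std::endl;

// Strip the NaN entries; mapping[i] is the index in the original cloud of the
// i-th surviving normal, so the point cloud can be filtered to match.
pcl::PointCloud<pcl::Normal>::Ptr valid(new pcl::PointCloud<pcl::Normal>);
std::vector<int> mapping;
pcl::removeNaNNormalsFromPointCloud(*normals, *valid, mapping);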
I just encountered a similar problem with very similar code. The comments were very helpful, thanks. This is probably too late, but it might still help somebody.

Related

PCL: Normal estimation or visualization not working

I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex; just for the sake of resolving this issue, I am using this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view of the built-in PCL point cloud visualizer:
However, after doing a surface normal estimation and trying to visualize it, the result is very crude:
I would have expected a view similar to the plain point cloud visualization, since I passed the original point cloud AND the normals. However, I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and the original point cloud isn't shown at all.
The point cloud has 1024 points in it and the estimated normal set also has 1024 estimated normals in it so I am not sure what is going wrong here.
I ran this program with different values for the sphere-radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
int main()
{
    // Allocate the cloud on the heap; constructing a Ptr (shared_ptr) from the
    // address of a stack object would lead to a double free on shutdown.
    PointCloud<PointXYZ>::Ptr cloud_ptr(new PointCloud<PointXYZ>());

    string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
    int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); // My own class which reads Blender ".obj" files
    if (readresult != 0)
    {
        cout << "File could not be read" << endl;
        return 1;
    }
    cout << "Cloud size: " << cloud_ptr->size() << endl; // --> 1024

    NormalEstimation<PointXYZ, Normal> estimation;
    estimation.setInputCloud(cloud_ptr);
    search::KdTree<PointXYZ>::Ptr tree_ptr(new search::KdTree<PointXYZ>());
    estimation.setSearchMethod(tree_ptr);
    PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
    estimation.setRadiusSearch(0.6);
    estimation.compute(*normalCloud_ptr);
    cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024

    string result;
    cin >> result;

    // This is used to visualize the original point cloud only
    //CloudViewer viewer("Simple Cloud Viewer");
    //viewer.showCloud(cloud_ptr);
    //while (!viewer.wasStopped())
    //{
    //}

    PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }
    return 0;
}
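One detail worth checking (an observation about the PCL API, not something confirmed in this thread): addPointCloudNormals takes a display level and an arrow scale after the two clouds, and level defaults to 100, meaning only every 100th normal is drawn. With 1024 points that is roughly the ten or eleven arrows described above. A sketch of the call with explicit parameters:

viewer.addPointCloud<PointXYZ>(cloud_ptr, "cloud"); // draw the original cloud under its own id
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr,
                                              1,    // level: draw a normal at every point
                                              0.05, // scale: arrow length, tune to your units
                                              "normals");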

CGAL: Wrong vertex accessed using facet

As far as I understand how a facet works in CGAL (using v5.1), it is composed of a cell handle and the index of the vertex opposite the facet in that cell. One can thus obtain all the vertices of the cell in question using (I am working with alpha shapes in 3D):
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef Kernel::FT FT;
typedef CGAL::Triangulation_vertex_base_with_info_3<std::size_t, Kernel> Vb3;
typedef CGAL::Fixed_alpha_shape_vertex_base_3<Kernel, Vb3> asVb3;
typedef CGAL::Fixed_alpha_shape_cell_base_3<Kernel> asCb3;
typedef CGAL::Triangulation_data_structure_3<asVb3, asCb3> asTds3;
typedef CGAL::Delaunay_triangulation_3<Kernel, asTds3, CGAL::Fast_location> asTriangulation_3;
typedef CGAL::Fixed_alpha_shape_3<asTriangulation_3> Alpha_shape_3;
typedef Kernel::Point_3 Point_3;
...
Alpha_shape_3::Facet facet;
...
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex((facet.second+1)%4);
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex((facet.second+2)%4);
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex((facet.second+3)%4);
Alpha_shape_3::Vertex_handle facetOppositeVertex = facet.first->vertex((facet.second)%4);
I assumed the opposite vertex lies inside a cell that the facet is also part of. Using this, I can compute the outward-pointing normals to the surface of my alpha shape (if there is a way to do this automatically in CGAL, I would like to know it). To compute the normal at a point on the free surface, I average the normals of the facets sharing that point, weighted by the angle of each facet at that point. To orient my normals toward the exterior of the mesh, I simply check the dot product between the normal and a vector from one point of the facet to the opposite vertex.
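For reference, a minimal sketch of one facet's contribution to that angle-weighted average (a hypothetical helper using the typedefs above; orientation handling is left out):

#include <cmath>

// Accumulate one facet's contribution to the (unnormalized) normal at corner v0.
void accumulate_facet_normal(const Point_3& v0, const Point_3& v1,
                             const Point_3& v2, Kernel::Vector_3& accum)
{
    Kernel::Vector_3 e1 = v1 - v0;
    Kernel::Vector_3 e2 = v2 - v0;
    Kernel::Vector_3 n = CGAL::cross_product(e1, e2); // facet normal, unnormalized

    // Angle of the facet at v0, used as the weight.
    double cosA = CGAL::to_double(e1 * e2)
                / std::sqrt(CGAL::to_double(e1.squared_length())
                          * CGAL::to_double(e2.squared_length()));
    double angle = std::acos(std::max(-1.0, std::min(1.0, cosA)));

    double len = std::sqrt(CGAL::to_double(n.squared_length()));
    if (len > 0.0)
        accum = accum + (angle / len) * n; // angle-weighted unit normal
}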
The problem is that sometimes the opposite vertex seems to be in a completely different cell, which leads to bad results in my FEM computations. Here are some figures of what happens:
Initially:
Normal to free surface (good)
When the mesh slightly deforms:
Normal to free surface (bad)
What actually happens is that the opposite vertex of some facets does not lie in the only cell sharing that free-surface facet, but in another cell far away. The orientation of the facet normal is then computed incorrectly, and some facet normals cancel each other out, leading to bad vertex normals.
Is there:
a way to compute those normals automatically in CGAL?
a problem in my understanding of how a facet works?
NB: I already check whether the opposite vertex is infinite, and in that case I mirror the facet using the mirror_facet function.
EDIT:
As mentioned in a comment, I tried using different functions to access the vertices:
std::cout << "--------------------------------------------------" << std::endl;
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 0))->info() << std::endl; //facetV1
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 1))->info() << std::endl; //facetV2
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 2))->info() << std::endl; //facetV3
std::cout << facet.first->vertex(asTriangulation_3::vertex_triple_index(facet.second, 3))->info() << std::endl; //opposite vertex
std::cout << "--------------------------------------------------" << std::endl;
But it sometimes gives me:
--------------------------------------------------
117
25
126
117
--------------------------------------------------
So an opposite vertex with the same index as one of the facet's vertices.
The way to get a consistent orientation of the vertices in a facet is to use the function vertex_triple_index() instead of (facet.second + 1/2/3) % 4. So you have:
Alpha_shape_3::Vertex_handle facetV1 = facet.first->vertex(DT::vertex_triple_index(facet.second,0));
Alpha_shape_3::Vertex_handle facetV2 = facet.first->vertex(DT::vertex_triple_index(facet.second,1));
Alpha_shape_3::Vertex_handle facetV3 = facet.first->vertex(DT::vertex_triple_index(facet.second,2));
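Note also that the second member of the facet pair is itself the index of the opposite vertex, so there is no vertex_triple_index(facet.second, 3) (vertex_triple_index is only defined for the three facet corners, indices 0 to 2). The opposite vertex is obtained directly:

Alpha_shape_3::Vertex_handle oppositeVertex = facet.first->vertex(facet.second);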

Computation of the matrix inverse using the Eigen C++ library introduces noise

I have a publish-subscribe type of node that receives pose information (position and orientation) from a subscribed data stream; it should compute the inverse and publish it. To do so, I create a 4-by-4 homogeneous transformation matrix from the original pose data, invert it using the Eigen C++ template library, convert the transformation matrix back to position-and-orientation form, and publish it.
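For context, building such a matrix from a pose typically looks like the following sketch (the pose field names px, py, pz, qw, qx, qy, qz are placeholders, not the poster's actual variables):

#include <Eigen/Geometry>

// Rotation from a unit quaternion, translation from the position.
Eigen::Quaterniond q(qw, qx, qy, qz);
Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
T.block<3, 3>(0, 0) = q.normalized().toRotationMatrix(); // rotation block
T.block<3, 1>(0, 3) = Eigen::Vector3d(px, py, pz);       // translation column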
When I plotted the published data stream I noticed some noise, so I ended up publishing the original data too for comparison. Here is what I did:
1. convert original_pose to a TF matrix, named original_TF
2. convert original_TF back to a pose, named original_pose_
3. publish original_pose_
4. invert original_TF and assign it to inverted_TF
5. convert inverted_TF to a pose, named inverted_pose_
6. publish inverted_pose_
When I plot the X, Y, Z position fields, I'm seeing a significant amount of noise (spikes and notches in the visual below) in the inverted pose data. Since I'm using the same functions to convert the original pose to TF and back, I know that those equations aren't the source of the noise.
Blue is the original, whereas red is the inverted.
Here is the code. Really nothing extraordinary.
bool inverse_matrix(std::vector<std::vector<double> >& input, std::vector<std::vector<double> >& output)
{
    // TODO: Currently only supports 4-by-4 matrices, I can make this configurable.
    // see https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
    Eigen::Matrix4d input_matrix;
    Eigen::Matrix4d output_matrix;
    Eigen::VectorXcd input_eivals;
    Eigen::VectorXcd output_eivals;

    input_matrix << input[0][0], input[0][1], input[0][2], input[0][3],
                    input[1][0], input[1][1], input[1][2], input[1][3],
                    input[2][0], input[2][1], input[2][2], input[2][3],
                    input[3][0], input[3][1], input[3][2], input[3][3];

    cout << "Here is the matrix input:\n" << input_matrix << endl;
    input_eivals = input_matrix.eigenvalues();
    cout << "The eigenvalues of the input_eivals are:" << endl << input_eivals << endl;

    // Note: an exact floating-point comparison against 0 almost never triggers;
    // a tolerance check on |det| would be more robust.
    if (input_matrix.determinant() == 0) { return false; }

    output_matrix = input_matrix.inverse();
    cout << "Here is the matrix output:\n" << output_matrix << endl;
    output_eivals = output_matrix.eigenvalues();
    cout << "The eigenvalues of the output_eivals are:" << endl << output_eivals << endl;

    // Copy output_matrix to output
    for (int i = 0; i < 16; ++i)
    {
        int in = i / 4;
        int im = i % 4;
        output[in][im] = output_matrix(in, im);
    }
    return true;
}
-- Edit 1 --
I printed out the eigenvalues of the input and output matrices of the inverse_matrix function.
Here is the matrix input:
0.99916 -0.00155684 -0.0409514 0.505506
0.00342358 -0.992614 0.121267 0.19625
-0.0408377 -0.121305 -0.991775 1.64257
0 0 0 1
The eigenvalues of the input_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
Here is the matrix output:
0.99916 0.00342358 -0.0408377 -0.438674
-0.00155684 -0.992614 -0.121305 0.39484
-0.0409514 0.121267 -0.991775 1.62597
-0 -0 0 1
The eigenvalues of the output_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
-- Edit 2 --
I don't quite understand what you are plotting. Is it original_pose.{X,Y,Z} and inverted_pose.{X,Y,Z}? Then the "spikes" will really depend on the orientation-part of the matrix.
I am plotting original_pose_{position.x, position.y, position.z} and inverted_pose_{position.x, position.y, position.z} where the complete data that's published is <variable_name>{position.x, position.y, position.z, orientation.w, orientation.x, orientation.y, orientation.z}.
Can you elaborate on "the "spikes" will really depend on the orientation-part of the matrix."?
Also, how is your description related to the code-snippet? (I don't see any matching variable names).
I've identified that the source of the noise is the inversion, which is item number 4 in my description: invert original_TF and assign it to inverted_TF. To relate one to the other, I'm calling the function as follows:
isSuccess = inverse_matrix(original_TF, inverted_TF);
How do you store "poses" (is that the vector<vector> in your snippet)?
Yes, I'm storing them in 2-dimensional vectors of type double.
At any point, do you use Eigen::Transform to store transformations, or just plain Eigen::Matrix4d?
No, I'm only using Eigen::Matrix4d locally in the inverse_matrix function to be able to make use of the Eigen library for computation.
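Since the matrix is nominally a rigid transform, one option worth trying (a suggestion, not something confirmed in this thread) is to store it in an Eigen::Isometry3d, whose inverse() uses the closed form R^T, -R^T t instead of a general 4-by-4 inversion:

#include <Eigen/Geometry>

// Assuming input_matrix holds a rigid transform (orthonormal rotation block,
// bottom row 0 0 0 1), Isometry mode inverts it in closed form.
Eigen::Isometry3d tf(input_matrix);      // wrap the 4x4 matrix
Eigen::Isometry3d tf_inv = tf.inverse(); // computed as R^T and -R^T * t
Eigen::Matrix4d inverted = tf_inv.matrix();

Whether this removes the spikes depends on the input: Isometry mode silently assumes the rotation block is orthonormal, so matrices that drift from a valid rotation would still misbehave.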

PCL - GreedyProjectionTriangulation makes not enough faces

I am trying to triangulate my point cloud in the Point Cloud Library using pcl::GreedyProjectionTriangulation, following this tutorial.
Problem:
But as a result I got a PolygonMesh with very few faces (see the image below):
Sorry, I can't publish images.
Here is my code:
//Calculate normals
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> n;
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud(cloud);
n.setInputCloud(cloud);
n.setSearchMethod(tree);
n.setKSearch(80);
cout << "Computing normals...\n";
n.compute(*normals);
// Concatenate the XYZ and normal fields*
pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals(new pcl::PointCloud<pcl::PointNormal>);
pcl::concatenateFields(*cloud, *normals, *cloud_with_normals);
// Create search tree*
pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
tree2->setInputCloud(cloud_with_normals);
// Initialize objects
pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
pcl::PolygonMesh triangles;
// Set the maximum distance between connected points (maximum edge length)
gp3.setSearchRadius(0.1);
// Set typical values for the parameters
gp3.setMu(2.5);
gp3.setMaximumNearestNeighbors(150);
gp3.setMaximumSurfaceAngle(M_PI ); // 180 degrees
gp3.setMinimumAngle(M_PI / 18); // 10 degrees
gp3.setMaximumAngle(2 * M_PI / 3); // 120 degrees
gp3.setNormalConsistency(false);
// Get result
gp3.setInputCloud(cloud_with_normals);
gp3.setSearchMethod(tree2);
cout << "reconstruct\n";
gp3.reconstruct(triangles);
I tried setting a bigger value for the search radius (1 instead of 0.1) but got this warning at runtime:
Not enough neighbors are considered: ffn or sfn out of range! Consider increasing nnn_... Setting R=23433 to be BOUNDARY!
Not enough neighbors are considered: source of R=23480 is out of range! Consider increasing nnn_...
Number of neighborhood size increase requests for fringe neighbors: 24
Number of neighborhood size increase requests for source: 68
and the number of polygons is still low.
Question: So, what should I do to create a good PolygonMesh from the point cloud?
Here are some tips that could fix your problem:
Before calculating normals, try applying a method for better surface approximation: MovingLeastSquares or BilateralUpsampling, depending on your needs (see the sketch below). Your cloud will be denser, resulting in fewer holes after surface reconstruction.
Use radius search instead of K-search for normal estimation; it gives better results in my experience.
Increasing setMaximumNearestNeighbors will avoid that runtime warning.
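A minimal sketch of the MovingLeastSquares step from the first tip (the search radius is a placeholder to tune to your point spacing; MLS can also emit the smoothed normals directly):

#include <pcl/surface/mls.h>
#include <pcl/search/kdtree.h>

pcl::search::KdTree<pcl::PointXYZ>::Ptr mls_tree(new pcl::search::KdTree<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(new pcl::PointCloud<pcl::PointNormal>);

pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
mls.setInputCloud(cloud);
mls.setSearchMethod(mls_tree);
mls.setSearchRadius(0.05);   // placeholder: choose relative to point spacing
mls.setPolynomialOrder(2);   // fit local second-order surfaces
mls.setComputeNormals(true); // output PointNormal, normals included
mls.process(*smoothed);

Because the output already contains normals, the separate NormalEstimation and concatenateFields steps can be skipped and smoothed handed to tree2/gp3 as-is.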

PCLVisualizer not staying open

I have the following code to estimate and show normal vectors for points in my point cloud:
int main(int argc, char* argv[]) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ> ("coffee_mug_1_1_1.pcd", *cloud) == -1) //* load the file
    {
        PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n");
        return (-1);
    }
    std::cout << "Loaded "
              << cloud->width * cloud->height
              << " data points from test_pcd.pcd with the following fields: "
              << std::endl;

    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud (cloud);

    // Create an empty kdtree representation, and pass it to the normal estimation object.
    // Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
    ne.setSearchMethod (tree);

    // Output datasets
    pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);

    // Use all neighbors in a sphere of radius 3cm
    ne.setRadiusSearch (0.03);

    // Compute the features
    ne.compute (*cloud_normals);
    cout << "Computed normals " << cloud_normals->width * cloud_normals->height << cloud_normals->points[0] << endl;

    pcl::visualization::PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloud(cloud);
    viewer.addPointCloudNormals<pcl::PointXYZ,pcl::Normal>(cloud, cloud_normals);
    while (!viewer.wasStopped ())
    {
        viewer.spinOnce ();
    }
}
However, when I run it, the PCLVisualizer flashes on the screen and then the program terminates. I have no idea why it doesn't stay open. If I use a CloudViewer just to display the point cloud (without the normals), it works fine and stays on the screen.
Three things you can check:
1. Check that you actually read your file without error. You can put a cin.get() right after PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n"); so that the program doesn't exit immediately if the file couldn't be read.
2. You should probably give a different id to your normals if you want to visualize them, because right now both your cloud and the normals use the id "cloud". Check the PCL example on normal visualization, e.g.:
viewer.addPointCloud(cloud, "cloud");
viewer.addPointCloudNormals<pcl::PointXYZ,pcl::Normal>(cloud, cloud_normals, 10, 0.05, "normals");
3. Make sure you get valid values for your normals.
If nothing from those works, could you maybe upload your coffee mug pcd somewhere and provide a link?
I found the problem I had here: I was not linking against the same Boost libs I was compiling against, because I had /usr/lib in my link library path.