Finding the translation between two point clouds using the ICP algorithm - C++

I have two point clouds of an object: one is the target point cloud and the other is the real-time point cloud from a robot. Both point clouds are taken at the same distance and viewpoint relative to the object. My goal is to make the robot move toward the object by running the ICP algorithm and using the resulting transformation matrix.
Using PCL I have implemented the ICP algorithm and I do get a transformation matrix, but the translation is not correct. For example, if the robot is at a distance of 40 cm from the object, the translation only shows 10 cm.
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputSource(cloud);
icp.setInputTarget(ideal_cloud);
icp.setEuclideanFitnessEpsilon(-1.797e+5);
icp.setMaximumIterations(50);
icp.setRANSACIterations(2000);
icp.setTransformationEpsilon(1e-3);
pcl::PointCloud<pcl::PointXYZ> Final;
icp.align(Final);
// Print final transformation and score
std::cout << "has converged: " << icp.hasConverged()
          << " score: " << icp.getFitnessScore() << std::endl;
std::cout << icp.getFinalTransformation() << std::endl;
As far as I know, ICP finds the best fit between two point clouds and returns a transformation matrix describing how much translation and rotation are needed to align them. I am not sure why the translation comes out with such a large error.
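In case it helps to debug this, the estimated translation can be read directly off the 4x4 matrix (top three entries of the last column). A minimal sketch following the snippet above; note that ICP only converges to a local minimum, so if the two clouds overlap only partially or the initial offset is large, the estimated translation can undershoot the true offset:

Eigen::Matrix4f T = icp.getFinalTransformation();
Eigen::Vector3f translation = T.block<3,1>(0, 3); // top 3 entries of the last column
Eigen::Matrix3f rotation = T.block<3,3>(0, 0);    // upper-left 3x3 rotation block
// If the clouds are in meters, a 40 cm offset should show a norm near 0.4
std::cout << "translation: " << translation.transpose()
          << "  norm: " << translation.norm() << std::endl;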

Related

PCL: Normal estimation or visualization not working

I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex; just for the sake of resolving this issue, I am using this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view in the built-in PCL point cloud visualizer:
However, after doing a surface normal estimation and trying to visualize it, the result is very crude:
I would have expected a view similar to the plain point cloud visualization, since I passed the original point cloud AND the normals. Instead I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and no visualization of the original point cloud either.
The point cloud has 1024 points in it and the estimated normal set also has 1024 estimated normals, so I am not sure what is going wrong here.
I ran this program with different values for the radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
#include <iostream>
#include <string>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace std;
using namespace pcl;
using namespace pcl::visualization;

int main()
{
    // Allocate the cloud on the heap; wrapping a stack object in a shared
    // pointer (Ptr cloud_ptr(&cloud)) would cause a double free at shutdown.
    PointCloud<PointXYZ>::Ptr cloud_ptr(new PointCloud<PointXYZ>());
    string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
    int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); // My own class which reads Blender ".obj" files
    if (readresult != 0)
    {
        cout << "File could not be read" << endl;
        return 1;
    }
    cout << "Cloud size: " << cloud_ptr->size() << endl; // --> 1024

    NormalEstimation<PointXYZ, Normal> estimation;
    estimation.setInputCloud(cloud_ptr);
    // NormalEstimation expects a pcl::search tree, not the abstract pcl::KdTree
    search::KdTree<PointXYZ>::Ptr tree_ptr(new search::KdTree<PointXYZ>());
    estimation.setSearchMethod(tree_ptr);
    PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
    estimation.setRadiusSearch(0.6);
    estimation.compute(*normalCloud_ptr);
    cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024

    string result;
    cin >> result; // pause so the console output can be read

    // This is used to visualize the original point cloud only
    //CloudViewer viewer("Simple Cloud Viewer");
    //viewer.showCloud(cloud_ptr);
    //while (!viewer.wasStopped())
    //{
    //}

    PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }
    return 0;
}
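Not a definitive answer, but one thing worth checking: PCLVisualizer::addPointCloudNormals takes a level parameter that defaults to 100, meaning only every 100th normal is drawn; with 1024 points that is roughly the ten or eleven arrows described above. It also draws only the normal arrows, not the points, so the cloud has to be added separately. A sketch (the id strings and the 0.05 arrow scale are arbitrary choices):

// Add the points themselves, then every normal (level = 1) with 5 cm arrows (scale = 0.05)
viewer.addPointCloud<PointXYZ>(cloud_ptr, "cloud");
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr, 1, 0.05, "normals");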

Computation of the matrix inverse using the Eigen C++ library introduces noise

I have a publish-subscribe type of node that receives pose information (position and orientation) from the subscribed data stream; it should compute the inverse and publish it out.
To do so I create a 4-by-4 homogeneous transformation matrix from the original pose data,
invert it using the Eigen C++ template library, convert the transformation matrix back to position-and-orientation form, and publish it.
When I plotted the published data stream I noticed some noise, so I ended up publishing the original data too for comparison. Here is what I did:
1. convert original_pose to a TF matrix, named original_TF (see the sketch after this list)
2. convert original_TF back to a pose, named original_pose_
3. publish original_pose_
4. invert original_TF and assign it to inverted_TF
5. convert inverted_TF to a pose, named inverted_pose_
6. publish inverted_pose_
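For reference, a minimal sketch of what steps 1 and 2 might look like with Eigen, assuming a pose is stored as a position vector plus a unit quaternion (the field list in Edit 2 suggests exactly that); the Pose struct and function names here are illustrative, not taken from the actual node:

#include <Eigen/Geometry>

// Hypothetical pose type: position + unit quaternion
struct Pose { Eigen::Vector3d position; Eigen::Quaterniond orientation; };

Eigen::Isometry3d poseToTF(const Pose &p)
{
    Eigen::Isometry3d tf = Eigen::Isometry3d::Identity();
    tf.linear() = p.orientation.normalized().toRotationMatrix(); // rotation block
    tf.translation() = p.position;                               // translation column
    return tf;
}

Pose tfToPose(const Eigen::Isometry3d &tf)
{
    return { tf.translation(), Eigen::Quaterniond(tf.rotation()) };
}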
When I plot the X, Y, Z position fields, I'm seeing a significant amount of noise (spikes and notches in the visual below) in the inverted pose data. Since I'm using the same functions to convert the original pose to TF and back, I know that those equations aren't the source of the noise.
Blue is the original, whereas red is the inverted.
Here is the code. Really nothing extraordinary.
bool inverse_matrix(std::vector<std::vector<double> > & input, std::vector<std::vector<double> > & output)
{
    // TODO: Currently only supports 4-by-4 matrices; I can make this configurable.
    // see https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
    // Note: 'output' must already be sized 4x4 by the caller.
    Eigen::Matrix4d input_matrix;
    Eigen::Matrix4d output_matrix;
    Eigen::VectorXcd input_eivals;
    Eigen::VectorXcd output_eivals;
    input_matrix << input[0][0], input[0][1], input[0][2], input[0][3],
                    input[1][0], input[1][1], input[1][2], input[1][3],
                    input[2][0], input[2][1], input[2][2], input[2][3],
                    input[3][0], input[3][1], input[3][2], input[3][3];
    cout << "Here is the matrix input:\n" << input_matrix << endl;
    input_eivals = input_matrix.eigenvalues();
    cout << "The eigenvalues of the input_eivals are:" << endl << input_eivals << endl;
    if (input_matrix.determinant() == 0) { return false; }
    output_matrix = input_matrix.inverse();
    cout << "Here is the matrix output:\n" << output_matrix << endl;
    output_eivals = output_matrix.eigenvalues();
    cout << "The eigenvalues of the output_eivals are:" << endl << output_eivals << endl;
    // Copy output_matrix to output
    for (int i = 0; i < 16; ++i)
    {
        int in = i / 4;
        int im = i % 4;
        output[in][im] = output_matrix(in, im);
    }
    return true;
}
-- Edit 1 --
I printed out the eigenvalues of the input and output matrices of the inverse_matrix function.
Here is the matrix input:
0.99916 -0.00155684 -0.0409514 0.505506
0.00342358 -0.992614 0.121267 0.19625
-0.0408377 -0.121305 -0.991775 1.64257
0 0 0 1
The eigenvalues of the input_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
Here is the matrix output:
0.99916 0.00342358 -0.0408377 -0.438674
-0.00155684 -0.992614 -0.121305 0.39484
-0.0409514 0.121267 -0.991775 1.62597
-0 -0 0 1
The eigenvalues of the output_eivals are:
(1,0)
(-0.992614,0.121312)
(-0.992614,-0.121312)
(1,0)
-- Edit 2 --
I don't quite understand what you are plotting. Is it original_pose.{X,Y,Z} and inverted_pose.{X,Y,Z}? Then the "spikes" will really depend on the orientation part of the matrix.
I am plotting original_pose_{position.x, position.y, position.z} and inverted_pose_{position.x, position.y, position.z} where the complete data that's published is <variable_name>{position.x, position.y, position.z, orientation.w, orientation.x, orientation.y, orientation.z}.
Can you elaborate on "the "spikes" will really depend on the orientation-part of the matrix."?
Also, how is your description related to the code-snippet? (I don't see any matching variable names).
I've identified that the source of the noise is the inversion, which is item number 4 in my description: invert original_TF and assign it to inverted_TF. To relate one to the other, I'm calling the function as follows:
isSuccess = inverse_matrix(original_TF, inverted_TF);
How do you store "poses" (is that the vector<vector> in your snippet)?
Yes, I'm storing them in 2-dimensional vectors of type double.
At any point, do you use Eigen::Transform to store transformations, or just plain Eigen::Matrix4d?
No, I'm only using Eigen::Matrix4d locally in the inverse_matrix function to be able to make use of the Eigen library for computation.
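For what it's worth: since the matrix here is a homogeneous rigid transform (rotation R, translation t), its inverse has the closed form [R^T, -R^T t] and never needs a general LU inversion. A sketch under the same 4x4 layout as inverse_matrix; note it assumes the upper-left 3x3 block really is an orthonormal rotation:

#include <Eigen/Dense>

// Closed-form inverse of a rigid transform: inv([R t; 0 1]) = [R^T, -R^T*t; 0 1].
// Assumes the upper-left 3x3 block is an orthonormal rotation matrix.
Eigen::Matrix4d rigid_inverse(const Eigen::Matrix4d &input)
{
    Eigen::Matrix3d R = input.block<3,3>(0, 0);
    Eigen::Vector3d t = input.block<3,1>(0, 3);
    Eigen::Matrix4d out = Eigen::Matrix4d::Identity();
    out.block<3,3>(0, 0) = R.transpose();
    out.block<3,1>(0, 3) = -R.transpose() * t;
    return out;
}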

NaN values in normal point

I'm having a problem with the values contained in the coordinates of the estimated normals. I estimate the normals of my cloud, and when I check the xyz components of a normal, NaN is returned.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
// Object for normal estimation.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud(cloud);
// For every point, use all neighbors in a radius of 3 cm.
normalEstimation.setRadiusSearch(0.03);
// A kd-tree is a data structure that makes neighbor searches efficient.
// The normal estimation object will use it to find nearest neighbors.
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree(new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod(kdtree);
// Calculate the normals.
normalEstimation.compute(*normals);
for (size_t i = 0; i < cloud->points.size(); i++) {
    std::cout << "x: " << cloud->points[i].x << std::endl;
    std::cout << "normal x: " << normals->points[i].normal_x << std::endl; // -nan here
}
How can I solve this?
Seems like a problem with your input point cloud. Make sure your point cloud is not a flat plane, and make sure you don't have duplicate points. What was ailing me was the search radius; try playing with that too.
I just encountered a similar problem with very similar code. The comments were very helpful, thanks. This is probably too late, but it might still help somebody.
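If some normals are legitimately undefined (for instance when too few neighbors fall inside the search radius), PCL can strip them out afterwards. A short sketch using pcl::removeNaNNormalsFromPointCloud from pcl/filters/filter.h, applied to the normals cloud computed above; the mapping vector records which input indices survived:

#include <pcl/filters/filter.h>

// Drop NaN normals; mapping[i] is the index in 'normals' of clean_normals[i],
// so the same indices can be used to keep the xyz cloud and normals aligned.
std::vector<int> mapping;
pcl::PointCloud<pcl::Normal>::Ptr clean_normals(new pcl::PointCloud<pcl::Normal>);
pcl::removeNaNNormalsFromPointCloud(*normals, *clean_normals, mapping);
std::cout << clean_normals->size() << " of " << normals->size()
          << " normals are finite" << std::endl;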

Apply rotation to Eigen::Affine3f

I'm using an Eigen::Affine3f to represent a camera matrix. (I've already figured out how to set up the view matrix/Affine3f from an initial "lookAt" and "up" vector.)
Now I want to support changing the camera's orientation. Simple question: what's the best way to apply rotations to this Affine3f, i.e. pitch, yaw, roll?
It's quite simple using the built-in functionality. You can use an AngleAxis object to multiply the existing Affine3f. Just note that the axis needs to be normalized:
Vector3f rotationAxis;
rotationAxis.setRandom();  // I don't really care, you determine the axis
rotationAxis.normalize();  // This is important, don't forget it
// Start from identity here; whatever your transformation is in your case
Affine3f randomAffine3f = Affine3f::Identity();
Affine3f rotatedAffine;
std::cout << randomAffine3f.matrix() << std::endl;
// We'll now apply a rotation of 0.256*M_PI around the rotationAxis
rotatedAffine = AngleAxisf(0.256 * M_PI, rotationAxis) * randomAffine3f;
std::cout << rotatedAffine.matrix() << std::endl; // Ta dum!!
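To get pitch, yaw and roll specifically, the same pattern composes three AngleAxisf rotations about the unit axes. A sketch (the Z-yaw/Y-pitch/X-roll convention and the function name are my assumptions; pick the order your camera convention requires):

#include <Eigen/Geometry>
using namespace Eigen;

// Compose yaw (about Z), pitch (about Y) and roll (about X) into one rotation
// and pre-multiply the existing camera transform. Rotation order matters.
Affine3f applyYawPitchRoll(const Affine3f &camera, float yaw, float pitch, float roll)
{
    return Affine3f(AngleAxisf(yaw,   Vector3f::UnitZ())
                  * AngleAxisf(pitch, Vector3f::UnitY())
                  * AngleAxisf(roll,  Vector3f::UnitX()))
           * camera;
}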

C++ Element in array first is read correctly, then gives a NaN without having been changed how is that possible?

I'm experiencing the weirdest problem in my programming life. I'm performing an image analysis on a 3D volume of approx 800x800x600 elements, extracting from each pixel a Hessian matrix built on the gray scale, and computing some algebra afterwards (using Eigen) to detect the alignment of the fibres described in the volume.
To build the volume, I use a double[x][y][z] array that I fill by reading from an nhdr+raw file. So far so good for an intro to the problem domain.
I perform the analysis by dividing the big volume into subvolumes (the overall execution time is around 14 hours, but that was expected, as usual for scientific analysis software).
At a certain point, when I am at the x = (max-2) index of the array (I stop my analysis 2 pixels before the border), for a couple of elements ([792][247][76] and [792][84][56], but there might be others after those, probably at the same x), my Eigen function suddenly fails.
Debugging, I found out that to build the partial derivatives for the Hessian matrix I access the element [x+2][y][z] and obtain NaN out of it, so Eigen obviously goes mad while doing its operations on that.
The weirdest thing is that if I print the value of that exact element at the beginning of the program (after loading the volume), it exists and has a meaningful value too. How is that possible?? I ran the software several times, and for the same two pixels I got the same error at the same position, so even guessing it might be a RAM error, shouldn't it fluctuate and change position due to other stuff going on in my PC?
I went further with the testing.
I am looping over the subvolumes, and there everything is fine (I keep tracking the value of a fixed volume[][][] element, the one on which the failure manifested itself).
The pixel value remains unchanged outside the following function, which analyses the subvolume pixel by pixel. Since I know that the value of the pixel I am interested in is 51941 (and it is, right before entering the function for the given subvolume), I put a guard on when the value changes.
Here is what happens:
Pixel value in subvolume start: 51941
Element :5982; 636,260,62;
Pixel value in loop: 1.65031e-22
After 5982 loops (of the roughly 3 million necessary for the full subvolume) the value changes, and nowhere in the following code do I touch it! What might cause something like that?
Matrix3d HesseAnalysis(int subX, int subY, int subZ, int maxX, int maxY, int maxZ){
    //int maxX = subX+deltaX;
    //int maxY = subY+deltaY;
    //int maxZ = subZ+deltaZ;
    int counter = 0;
    for (int x = subX; x < (maxX-2); x++){
        for (int y = subY; y < (maxY-2); y++){
            for (int z = subZ; z < (maxZ-2); z++){
                // Guard: abort as soon as the watched pixel changes value
                if (volume[792][247][76] != 51941){
                    cout << "Element :" << counter << "; " << x << "," << y << "," << z << ";" << endl;
                    cout << "Pixel value in loop: " << volume[792][247][76] << endl;
                    exit(0);
                }
                // Central-difference second derivatives on the gray values
                fxx = ((volume[x+2][y][z]-volume[x][y][z])/2-(volume[x][y][z]-volume[x-2][y][z])/2)/2;
                fyy = ((volume[x][y+2][z]-volume[x][y][z])/2-(volume[x][y][z]-volume[x][y-2][z])/2)/2;
                fzz = ((volume[x][y][z+2]-volume[x][y][z])/2-(volume[x][y][z]-volume[x][y][z-2])/2)/2;
                fxy = ((volume[x+1][y+1][z]-volume[x+1][y-1][z])-(volume[x-1][y+1][z]-volume[x-1][y-1][z]));
                fxz = ((volume[x+1][y][z+1]-volume[x+1][y][z-1])-(volume[x-1][y][z+1]-volume[x-1][y][z-1]));
                fyz = ((volume[x][y+1][z+1]-volume[x][y+1][z-1])-(volume[x][y-1][z+1]-volume[x][y-1][z-1]));
                // Compose the Hessian matrix for the pixel
                hessian << fxx, fxy, fxz,
                           fxy, fyy, fyz,
                           fxz, fyz, fzz;
                // Extract eigenvalues, choose the eigenvector related to the
                // smallest eigenvalue, and take its outer product with itself
                EigenSolver<Matrix3d> solver(hessian);
                int minorEigen = minorEigenvalue(solver.eigenvalues().real());
                Vector3d v3 = solver.eigenvectors().col(minorEigen).real();
                V3outerV3 = v3*v3.transpose();
                OuterProducts[(x-subX)-2][(y-subY)-2][(z-subZ)-2] = V3outerV3;
                counter++;
            }
        }
    }
    // ... (rest of the function is not shown in the post)
Generally C/C++ uses 0-based indexing. If you don't want your x to go out of range, shouldn't you stop at x = max-3, so the index x+2 stays in the range 0 to max-1?
I found it. I still didn't have time to explore the real reason, but it seemed to depend on a probable bug around Eigen.
This line is the one that causes the mess:
Vector3d v3 = solver.eigenvectors().col(minorEigen).real();
For some reason, at a certain point, it invades the memory already allocated for my volume[][][] array and, this is the weirdest part, the pixel I was using for testing, volume[792][247][76], is changed depending on the value of the variable minorEigen (which simply holds the index of the smallest eigenvalue).
So for example with minorEigen = 0 the pixel becomes 1.65031e-22, and with minorEigen = 1 it becomes 2.1402e+5... I guess some weird bogus pointer or a similar bug. I will investigate the Eigen bug reports, check for a newer release, or otherwise implement my own eigensolver for the matrix.
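Before settling on an Eigen bug, one detail in the posted loop is worth a second look: x, y and z start at subX, subY and subZ, so on the first iterations the expression (x-subX)-2 evaluates to -2. OuterProducts[-2][-2][-2] is an out-of-bounds write into whatever memory precedes the array, quite possibly the volume buffer, and the bytes written are the entries of v3*v3.transpose(), which would explain why the corrupted value tracks minorEigen. A guard makes this visible:

// Hypothetical guard: skip (or better, re-derive the loop bounds) so that the
// OuterProducts indices never go negative.
if (x - subX < 2 || y - subY < 2 || z - subZ < 2)
    continue;
OuterProducts[(x - subX) - 2][(y - subY) - 2][(z - subZ) - 2] = V3outerV3;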