I have the following code to estimate and show normal vectors for points in my point cloud:
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/visualization/pcl_visualizer.h>

int main (int argc, char* argv[])
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("coffee_mug_1_1_1.pcd", *cloud) == -1) // load the file
  {
    PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n");
    return (-1);
  }
  std::cout << "Loaded " << cloud->width * cloud->height
            << " data points from coffee_mug_1_1_1.pcd" << std::endl;

  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);

  // Create an empty kdtree representation, and pass it to the normal estimation object.
  // Its content will be filled inside the object, based on the given input dataset
  // (as no other search surface is given).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
  ne.setSearchMethod (tree);

  // Output dataset
  pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);

  // Use all neighbors in a sphere of radius 3 cm
  ne.setRadiusSearch (0.03);

  // Compute the features
  ne.compute (*cloud_normals);
  std::cout << "Computed normals " << cloud_normals->width * cloud_normals->height
            << " " << cloud_normals->points[0] << std::endl;

  pcl::visualization::PCLVisualizer viewer ("PCL Viewer");
  viewer.setBackgroundColor (0.0, 0.0, 0.5);
  viewer.addPointCloud (cloud);
  viewer.addPointCloudNormals<pcl::PointXYZ, pcl::Normal> (cloud, cloud_normals);

  while (!viewer.wasStopped ())
  {
    viewer.spinOnce ();
  }
}
However, when I run it, the PCLVisualizer flashes on the screen and then the program terminates; I have no idea why it doesn't stay open. If I use a CloudViewer just to display the point cloud (without the normals), it works fine and stays on the screen.
Three things you can check:
1. Check that you actually read your file without errors. You can put a cin.get() right after PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n"); so that the program doesn't exit immediately if the file couldn't be read.
2. You should probably give a different id to your normals if you want to visualize them, because right now both your cloud and the normals use the default id "cloud". Check the PCL example on normal visualization, e.g.:
viewer.addPointCloud(cloud, "cloud");
viewer.addPointCloudNormals<pcl::PointXYZ, pcl::Normal>(cloud, cloud_normals, 10, 0.05, "normals");
3. Make sure you get valid values for your normals.
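For the last point, a quick sanity check along these lines (a minimal sketch; it assumes cloud_normals was computed as in your code and needs <cmath> for std::isfinite) will tell you whether the estimation produced finite values:

// Count how many estimated normals came out invalid (NaN components typically
// mean a point had too few neighbors inside the search radius).
std::size_t invalid = 0;
for (const auto& n : cloud_normals->points)
{
  if (!std::isfinite (n.normal_x) || !std::isfinite (n.normal_y) || !std::isfinite (n.normal_z))
    ++invalid;
}
std::cout << invalid << " of " << cloud_normals->size () << " normals are invalid" << std::endl;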
If none of those works, could you maybe upload your coffee mug pcd somewhere and provide a link?
I have found the problem I had here: I was not linking against the same Boost libraries I was compiling against, because I had /usr/lib in my link library path.
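In case it helps anyone else: letting PCL's CMake config pull in the matching Boost avoids this kind of mismatch. A minimal CMakeLists.txt sketch (project and target names are illustrative):

cmake_minimum_required(VERSION 3.5)
project(normal_viewer)

# find_package(PCL) exports the include and link directories for the exact
# Boost/VTK that PCL itself was built against, so compile and link stay consistent.
find_package(PCL REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable(normal_viewer main.cpp)
target_link_libraries(normal_viewer ${PCL_LIBRARIES})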
Related
I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex; just for the sake of resolving this issue, I am working with this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view in the built-in PCL point cloud visualizer:
However, after doing a surface normal estimation and trying to visualize it, the result is very crude:
I would have expected a view similar to the plain point cloud visualization, since I passed the original point cloud AND the normals. Instead, I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and no visualization of the original point cloud either.
The point cloud has 1024 points and the estimated normal set also contains 1024 normals, so I am not sure what is going wrong here.
I ran this program with different parameters for the sphere radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <iostream>
#include <string>

using namespace pcl;
using namespace pcl::visualization;
using namespace std;

int main()
{
    // Heap-allocate the cloud; wrapping a stack object in a shared Ptr
    // would lead to a double free when both go out of scope.
    PointCloud<PointXYZ>::Ptr cloud_ptr(new PointCloud<PointXYZ>());
    string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
    int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); // my own class which reads Blender ".obj" files
    if (readresult != 0)
    {
        cout << "File could not be read" << endl;
        return 1;
    }
    cout << "Cloud size: " << cloud_ptr->size() << endl; // --> 1024

    NormalEstimation<PointXYZ, Normal> estimation;
    estimation.setInputCloud(cloud_ptr);
    // pcl::KdTree is abstract; setSearchMethod expects the search:: wrapper
    search::KdTree<PointXYZ>::Ptr tree_ptr(new search::KdTree<PointXYZ>());
    estimation.setSearchMethod(tree_ptr);
    PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
    estimation.setRadiusSearch(0.6);
    estimation.compute(*normalCloud_ptr);
    cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024

    string result;
    cin >> result;

    // This is used to visualize the original point cloud only
    //CloudViewer viewer("Simple Cloud Viewer");
    //viewer.showCloud(cloud_ptr);
    //while (!viewer.wasStopped())
    //{
    //}

    PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }
    return 0;
}
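One thing worth noting about the visualization call in the snippet above: PCLVisualizer::addPointCloudNormals has optional level and scale parameters, and only every level-th normal is drawn (level defaults to 100, scale to 0.02), so with 1024 points the defaults display only around ten arrows. A hedged sketch that draws the cloud plus every normal (the ids and the 5 cm arrow length are illustrative values):

viewer.addPointCloud<PointXYZ>(cloud_ptr, "cloud");
viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr, 1, 0.05, "normals");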
I am using the Intel RealSense L515 lidar to obtain a point cloud. I need to detect an empty area (no objects blocking it) on a surface (i.e. a table or a wall) so that a robotic arm has enough room to apply pressure to the surface at that point.
So far I have segmented the point cloud in PCL to find a plane, in order to first detect the surface.
Afterwards I try segmenting for a 10-20 cm circle anywhere in the area, in order to find an empty space for the robot to operate.
https://imgur.com/a/fbY2yJ9
However, as you can see, the circle (yellow points) is hollow instead of a full disc (so there could be objects lying inside it), and it has holes in it, so it's not really an empty place on the table: the holes are created by objects eliminated in the planar segmentation stage (in this case, my foot in the middle of the circle).
Is there any way to obtain the true non-object-cluttered area on a planar surface?
Current code:
//Plane segmentation processing start
// (cloud is the input cloud from the sensor; cloud_p and cloud_c hold the
// plane and circle inliers respectively)
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_p (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_c (new pcl::PointCloud<pcl::PointXYZ>);
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers (new pcl::PointIndices);

// Create the plane segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
// Optional
seg.setOptimizeCoefficients (true);
// Mandatory
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setDistanceThreshold (0.005);

// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract;

// Segment the largest planar component from the cloud
seg.setInputCloud (cloud);
{
  pcl::ScopeTime scopeTime ("Test loop plane"); // times this block only
  seg.segment (*inliers, *coefficients);
}
if (inliers->indices.size () == 0)
{
  std::cerr << "Could not estimate a planar model for the given dataset." << std::endl;
}

// Extract the inliers
extract.setInputCloud (cloud);
extract.setIndices (inliers);
extract.setNegative (false);
extract.filter (*cloud_p);
std::cerr << "PointCloud representing the planar component: " << cloud_p->width * cloud_p->height << " data points." << std::endl;

//Circle segmentation processing start
pcl::ModelCoefficients::Ptr coefficients_c (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers_c (new pcl::PointIndices);

// Create the circle segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg_c;
// Optional
seg_c.setOptimizeCoefficients (true);
// Mandatory
seg_c.setModelType (pcl::SACMODEL_CIRCLE2D);
seg_c.setMethodType (pcl::SAC_RANSAC);
seg_c.setDistanceThreshold (0.01);
seg_c.setRadiusLimits (0.1, 0.2);

// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract_c;

// Segment a circle component from the planar cloud
seg_c.setInputCloud (cloud_p);
{
  pcl::ScopeTime scopeTime2 ("Test loop circle"); // times this block only
  seg_c.segment (*inliers_c, *coefficients_c);
}
if (inliers_c->indices.size () == 0)
{
  std::cerr << "Could not estimate a circle model for the given dataset." << std::endl;
}
// For SACMODEL_CIRCLE2D the coefficients are (center x, center y, radius)
std::cerr << "Circle coefficients:\nCenter coordinates: " << coefficients_c->values[0] << " "
          << coefficients_c->values[1] << ", radius: " << coefficients_c->values[2] << std::endl;

// Extract the inliers
extract_c.setInputCloud (cloud_p);
extract_c.setIndices (inliers_c);
extract_c.setNegative (false);
extract_c.filter (*cloud_c);
std::cerr << "PointCloud representing the circle component: " << cloud_c->width * cloud_c->height << " data points." << std::endl;
I have a vector of point clouds of spatial points (XYZ only) which I am trying to display in the same window with different colours, using custom colour handlers with the PCLVisualizer.
Unfortunately, the colours in the viewer aren't different: all the points are displayed in the same colour (the second one). It seems like the colour of all previously added points changes each time a new point cloud is added.
Here is my code:
pcl::visualization::PCLVisualizer viewer;
viewer.setBackgroundColor (0, 0, 0);
std::stringstream cloud_name;
int counter (0);
pcl::RGB rgb_color;
for (auto curr_cloud : clouds_vector)
{
  rgb_color = pcl::GlasbeyLUT::at (counter);
  ++counter;
  cloud_name.str ("");
  cloud_name << "Cloud " << counter;
  if (counter < 5)
  {
    pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> first_color (curr_cloud, rgb_color.r, rgb_color.g, rgb_color.b);
    viewer.addPointCloud<pcl::PointXYZ> (curr_cloud, first_color, cloud_name.str ());
  }
  else
  {
    pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> second_color (curr_cloud, rgb_color.r, rgb_color.g, rgb_color.b);
    viewer.addPointCloud<pcl::PointXYZ> (curr_cloud, second_color, cloud_name.str ());
  }
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 8, cloud_name.str ());
}
Any ideas why this might be and what to change?
Many thanks to you all in advance!
EDIT: I have tried around a little, and the problem doesn't seem to be the way I'm visualising the point clouds itself (the approach described above works with freshly created clouds holding random values).
In my use case I am extracting multiple point clouds from a single point cloud with the help of their indices. If I do that, they still all have the same colour upon visualisation. I still can't figure out why that's the case and am happy for any suggestions. Here is the full code with the extraction (using a randomly created mother point cloud, for the benefit of a minimal example):
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/common/colors.h> // GlasbeyLUT
#include <boost/make_shared.hpp>
#include <boost/thread/thread.hpp>

typedef pcl::PointXYZ PointT;
typedef pcl::PointCloud<PointT> PointCloudT;

PointCloudT::Ptr cloud (new PointCloudT);
// 100 points in point cloud, random coordinates
cloud->resize (100);
for (PointCloudT::iterator cloud_it (cloud->begin ()); cloud_it != cloud->end (); ++cloud_it)
{
  cloud_it->x = 1024 * rand () / (RAND_MAX + 1.0f);
  cloud_it->y = 1024 * rand () / (RAND_MAX + 1.0f);
  cloud_it->z = 1024 * rand () / (RAND_MAX + 1.0f);
}

// Create index vectors containing 20 indices each
std::vector<pcl::PointIndices> point_indices_vector;
int offset (20);
pcl::PointIndices indices;
for (int i = 0; i < 5; ++i)
{
  for (int j = 0; j < 20; ++j)
  {
    indices.indices.push_back (j + i * offset);
  }
  point_indices_vector.push_back (indices);
  indices.indices.clear (); // <----- THIS WAS MISSING
}

// Create extraction object to copy points from the 100-point cloud into 20-point clouds
pcl::ExtractIndices<PointT> extract;
extract.setInputCloud (cloud);
extract.setNegative (false);

// Extract points and copy them to the clouds vector
std::vector<PointCloudT::Ptr, Eigen::aligned_allocator<PointCloudT::Ptr> > clouds_vector;
PointCloudT::Ptr curr_segment_cloud;
for (auto curr_index_vector : point_indices_vector)
{
  curr_segment_cloud.reset (new PointCloudT);
  // Copy points of current line segment from source cloud into current segment cloud
  extract.setIndices (boost::make_shared<const pcl::PointIndices> (curr_index_vector));
  extract.filter (*curr_segment_cloud);
  // Push back point cloud into return vector
  clouds_vector.push_back (curr_segment_cloud);
}

// Create viewer
pcl::visualization::PCLVisualizer viewer;
viewer.setBackgroundColor (0, 0, 0);

// Visualize point clouds from the clouds vector
std::stringstream cloud_name;
int counter (0);
pcl::RGB rgb;
for (auto curr_cloud : clouds_vector)
{
  ++counter;
  cloud_name.str ("");
  cloud_name << "Cloud " << counter;
  // Generate unique colour
  rgb = pcl::GlasbeyLUT::at (counter);
  // Create colour handle
  pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> colour_handle (curr_cloud, rgb.r, rgb.g, rgb.b);
  // Add points to viewer and set parameters
  viewer.addPointCloud<pcl::PointXYZ> (curr_cloud, colour_handle, cloud_name.str ());
  viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 8, cloud_name.str ());
}

// Keep viewer running
while (!viewer.wasStopped ())
{
  viewer.spinOnce (100);
  boost::this_thread::sleep (boost::posix_time::microseconds (100000));
}
Problem solved!
So if anybody wants to use the code... both of the snippets above are now working. (I found the mistake and corrected it in the above code snippet).
I'm embarrassed to say that I simply forgot to clear my extraction indices vector before filling it for the next cloud. So the last cloud was the full cloud again and was plotted over all the previously added points, making them invisible. This way all the points seemed to have the same colour. o.O
I'm having a problem verifying the values contained in the coordinates of the normal points. I estimate the normals of my cloud, and when I check the xyz coordinates of the normals, NaN is returned.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);

// ... (the cloud is loaded/filled here; omitted in this snippet) ...

// Object for normal estimation.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> normalEstimation;
normalEstimation.setInputCloud (cloud);
// For every point, use all neighbors in a radius of 3 cm.
normalEstimation.setRadiusSearch (0.03);
// A kd-tree is a data structure that makes searches efficient. More about it later.
// The normal estimation object will use it to find nearest neighbors.
pcl::search::KdTree<pcl::PointXYZ>::Ptr kdtree (new pcl::search::KdTree<pcl::PointXYZ>);
normalEstimation.setSearchMethod (kdtree);
// Calculate the normals.
normalEstimation.compute (*normals);

for (size_t i = 0; i < cloud->points.size (); i++)
{
  std::cout << "x: " << cloud->points[i].x << std::endl;
  std::cout << "normal x: " << normals->points[i].normal_x << std::endl; // -nan here
}
How can I solve it?
Seems like a problem with your input point cloud. Make sure your point cloud is not a flat plane, and make sure you don't have duplicate points. What was ailing me was the search radius; try playing with that too.
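To see how many normals actually came out invalid, one option (a minimal sketch, assuming the normals cloud from the snippet above; removeNaNNormalsFromPointCloud lives in pcl/filters/filter.h) is to strip the NaN entries and compare sizes. NaN normals typically appear for points with too few neighbours inside the search radius:

std::vector<int> valid_indices;
pcl::PointCloud<pcl::Normal>::Ptr finite_normals (new pcl::PointCloud<pcl::Normal>);
pcl::removeNaNNormalsFromPointCloud (*normals, *finite_normals, valid_indices);
std::cout << finite_normals->size () << " of " << normals->size () << " normals are finite" << std::endl;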
I just encountered a similar problem with very similar code. The comments were very helpful, thanks. This is probably too late, but it might still help somebody.
I am trying to build a local map by adding point clouds from a Kinect, using iterative closest point from the Point Cloud Library and ROS Hydro on Ubuntu 12.04. However, I am not able to add consecutive point clouds together to update the map: the aligned point cloud is only combined with the source point cloud of the current frame. I am having trouble storing the previous point clouds. As seen from the code, I update the map with
Final += *cloud_in;
However a new Final is computed every time, so I lose the old Final value. I need to retain it. I am a novice in C++ and ROS, so I would greatly appreciate help on this matter.
Listed below is the code:
ros::Publisher _pub;
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_in (new pcl::PointCloud<pcl::PointXYZRGB>);

void
cloud_cb2 (const sensor_msgs::PointCloud2ConstPtr& next_input)
{
  pcl::fromROSMsg (*next_input, *cloud_in);
  // Remove NaN points from the cloud
  std::vector<int> indices;
  pcl::removeNaNFromPointCloud (*cloud_in, *cloud_in, indices);

  // Convert the sensor_msgs/PointCloud2 data to pcl::PointCloud
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud2_in (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::fromROSMsg (*next_input, *cloud2_in);
  // Remove NaN points
  std::vector<int> indices2;
  pcl::removeNaNFromPointCloud (*cloud2_in, *cloud2_in, indices2);

  pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
  icp.setInputSource (cloud2_in);
  icp.setInputTarget (cloud_in);
  pcl::PointCloud<pcl::PointXYZRGB> Final;
  icp.align (Final);
  std::cout << "has converged:" << icp.hasConverged () << " score: "
            << icp.getFitnessScore () << std::endl;
  std::cout << icp.getFinalTransformation () << std::endl;
  Final += *cloud_in;

  // Convert the pcl::PointCloud to sensor_msgs/PointCloud2
  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg (Final, output);
  // Publish the map
  _pub.publish (output);
}

int main (int argc, char** argv)
{
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  // ROS subscriber for /camera/depth_registered/points
  ros::Subscriber sub = nh.subscribe ("/camera/depth_registered/points", 2, cloud_cb2);
  // Create ROS publisher for the transformed point cloud
  _pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 1);
  // Spin
  ros::spin ();
}
I think you are doing it wrong... I mean, the idea of cloud_cb2 is to be a callback (at least that is the common pattern in the examples that use a similar name and definition), so every time this function is entered, it gives you a new cloud that you should integrate into your previous cloud...
I suppose that by doing pcl::fromROSMsg (*next_input, *cloud2_in); you are just making a second copy of the same incoming cloud, but it should not be like that, as I told you before.
Then, to answer your question:
icp.align(Final);
If you read the PCL tutorial here, it tells you that this function receives as input a point cloud variable that will be filled with the ICP result.
Also, the result will be the alignment (or the attempt) of
icp.setInputSource(cloud2_in);
to match
icp.setInputTarget(cloud_in);
So you are overwriting Final with the alignment of the two new clouds, and then adding the points of cloud_in, which are already in that point cloud.
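In other words, a minimal sketch of the semantics (the aligned variable name is illustrative):

pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
icp.setInputSource (cloud2_in); // the cloud that gets moved
icp.setInputTarget (cloud_in);  // the fixed reference it is matched against
pcl::PointCloud<pcl::PointXYZRGB> aligned;
icp.align (aligned);            // aligned = getFinalTransformation() applied to *cloud2_in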
I recommend that you check your workflow again; it should be something like this:
ros::Publisher _pub;
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_in (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::PointCloud<pcl::PointXYZRGB>::Ptr Final (new pcl::PointCloud<pcl::PointXYZRGB>);

void
cloud_cb2 (const sensor_msgs::PointCloud2ConstPtr& next_input)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr tmp_cloud (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::fromROSMsg (*next_input, *cloud_in);
  // Remove NaN points from the cloud
  std::vector<int> indices;
  pcl::removeNaNFromPointCloud (*cloud_in, *cloud_in, indices);

  if (Final->empty ())
  {
    // First callback: there is no map yet to align against, so just seed it.
    *Final = *cloud_in;
  }
  else
  {
    pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
    icp.setInputSource (cloud_in);
    icp.setInputTarget (Final);
    icp.align (*tmp_cloud);
    std::cout << "has converged:" << icp.hasConverged () << " score: "
              << icp.getFitnessScore () << std::endl;
    std::cout << icp.getFinalTransformation () << std::endl;
    // Accumulate the aligned frame into the map instead of overwriting it
    *Final += *tmp_cloud;
  }

  // Convert the pcl::PointCloud to sensor_msgs/PointCloud2
  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg (*Final, output);
  // Publish the map
  _pub.publish (output);
}
int main (int argc, char** argv)
{
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  // ROS subscriber for /camera/depth_registered/points
  ros::Subscriber sub = nh.subscribe ("/camera/depth_registered/points", 2, cloud_cb2);
  // Create ROS publisher for the transformed point cloud
  _pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 1);
  // Spin
  ros::spin ();
}
I just made some changes to show how it should look, but I haven't tested it, so you'll probably need to correct it further; I hope this answer helps you. Note that ICP cannot align against an empty target, which is why the first callback only seeds the Final cloud. Also, I recommend some downsampling of the data, or else it will use incredibly huge amounts of memory and CPU after running for some frames.
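For the downsampling, a minimal sketch with pcl::VoxelGrid (the 1 cm leaf size is just an illustrative value), applied to each incoming cloud before the ICP step:

#include <pcl/filters/voxel_grid.h>

// Downsample cloud_in to at most one point per 1 cm voxel before running ICP.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr downsampled (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::VoxelGrid<pcl::PointXYZRGB> voxel;
voxel.setInputCloud (cloud_in);
voxel.setLeafSize (0.01f, 0.01f, 0.01f);
voxel.filter (*downsampled);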