Detecting an empty circular/rectangular area on a surface using a point cloud - C++

I am using the Intel Realsense L515 lidar to obtain a pointcloud. I need to detect an empty area (no objects blocking it) on a surface (i.e. a table or a wall) so that a robotic arm has enough room to apply pressure to the surface at that point.
So far I have run plane segmentation on the point cloud in PCL, in order to first detect the surface.
Afterwards I try segmenting for a 10-20 cm circle anywhere in that plane, in an attempt to find an empty space for the robot to operate in.
https://imgur.com/a/fbY2yJ9
However, as you can see, the circle (yellow points) is hollow rather than a full disc, so there could still be objects lying inside it. It also has holes in it, so it is not really an empty place on the table: the holes are left by objects removed in the planar segmentation stage (in this case, that's my foot in the middle of the circle).
Is there any way to obtain the true object-free (uncluttered) area on a planar surface?
Current code:
//Plane segmentation processing start
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
// Create the plane segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg;
// Optional
seg.setOptimizeCoefficients (true);
// Mandatory
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setDistanceThreshold (0.005);
// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract;
// Segment the largest planar component from the remaining cloud
seg.setInputCloud(cloud);
{
    // Time the plane segmentation call; ScopeTime prints the elapsed time when it leaves this scope
    pcl::ScopeTime scopeTime("Test loop plane");
    seg.segment(*inliers, *coefficients);
}
if (inliers->indices.size() == 0)
{
    std::cerr << "Could not estimate a planar model for the given dataset." << std::endl;
}
// Extract the inliers
extract.setInputCloud(cloud);
extract.setIndices(inliers);
extract.setNegative(false);
extract.filter(*cloud_p);
std::cerr << "PointCloud representing the planar component: " << cloud_p->width * cloud_p->height << " data points." << std::endl;
//Circle segmentation processing start
pcl::ModelCoefficients::Ptr coefficients_c (new pcl::ModelCoefficients);
pcl::PointIndices::Ptr inliers_c (new pcl::PointIndices);
// Create the circle segmentation object
pcl::SACSegmentation<pcl::PointXYZ> seg_c;
// Optional
seg_c.setOptimizeCoefficients (true);
// Mandatory
seg_c.setModelType (pcl::SACMODEL_CIRCLE2D);
seg_c.setMethodType (pcl::SAC_RANSAC);
seg_c.setDistanceThreshold (0.01);
seg_c.setRadiusLimits(0.1,0.2);
// Create the filtering object
pcl::ExtractIndices<pcl::PointXYZ> extract_c;
// Segment a circle component from the remaining cloud
seg_c.setInputCloud(cloud_p);
{
    // Time the circle segmentation call
    pcl::ScopeTime scopeTime2("Test loop circle");
    seg_c.segment(*inliers_c, *coefficients_c);
}
if (inliers_c->indices.size() == 0)
{
    std::cerr << "Could not estimate a circle model for the given dataset." << std::endl;
}
std::cerr << "Circle coefficients: \nCenter coordinates: " << coefficients_c->values[0] << " " << coefficients_c->values[1] << " " << coefficients_c->values[2] << " ";
// Extract the inliers
extract_c.setInputCloud(cloud_p);
extract_c.setIndices(inliers_c);
extract_c.setNegative(false);
extract_c.filter(*cloud_c);
std::cerr << "PointCloud representing the circle component: " << cloud_c->width * cloud_c->height << " data points." << std::endl;

Related

PCL: Normal estimation or visualization not working

I am trying to start working with PCL and do this basic example:
https://pcl.readthedocs.io/projects/tutorials/en/latest/how_features_work.html#how-3d-features-work
However, either the normal estimation or the visualization of the estimated normals doesn't work as it should.
As some test data, I generated this simple mesh in Blender:
I am using this test object because the point cloud data I recorded with a real 3D LIDAR sensor is more complex, so just for the sake of resolving this issue I am using this simpler object here.
I convert this mesh into a PCL point cloud and that also works well.
Here is the view in the built-in PCL point cloud visualizer:
However, after doing a surface normal estimation and trying to visualize it, the result is very crude:
I would have expected a view similar to the plain point cloud visualization, since I passed both the original point cloud AND the normals. However, I only get eleven normals (I counted them), some of which don't even seem to point in the right direction, and no visualization of the original point cloud at all.
The point cloud has 1024 points in it and the estimated normal set also has 1024 estimated normals in it so I am not sure what is going wrong here.
I ran this program with different values for the sphere radius search (0.03, 0.06, 0.3, 0.6, 1, 3) but got similar results.
Here is my code:
int main()
{
    PointCloud<PointXYZ> cloud;
    PointCloud<PointXYZ>::Ptr cloud_ptr(&cloud);

    string filename = "C:\\Users\\ilmu0\\Desktop\\Blendertestobjects\\planeconcave.obj";
    int readresult = OBJReader::readPointCloudFromFile(filename, cloud_ptr); //My own class which reads Blender ".obj" files
    if (readresult != 0)
    {
        cout << "File could not be read" << endl;
        return 1;
    }
    cout << "Cloud size: " << cloud.size() << endl; // --> 1024

    NormalEstimation<PointXYZ, Normal> estimation;
    estimation.setInputCloud(cloud_ptr);
    KdTree<PointXYZ>::Ptr tree_ptr(new KdTree<PointXYZ>());
    estimation.setSearchMethod(tree_ptr);
    PointCloud<Normal>::Ptr normalCloud_ptr(new PointCloud<Normal>());
    estimation.setRadiusSearch(0.6);
    estimation.compute(*normalCloud_ptr);
    cout << "Normals: " << normalCloud_ptr->size() << endl; // --> 1024

    string result;
    cin >> result;

    //This is used to visualize the original point cloud only
    //CloudViewer viewer("Simple Cloud Viewer");
    //viewer.showCloud(cloud_ptr);
    //while (!viewer.wasStopped())
    //{
    //}

    PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloudNormals<PointXYZ, Normal>(cloud_ptr, normalCloud_ptr);
    while (!viewer.wasStopped())
    {
        viewer.spinOnce();
    }
    return 0;
}

OpenSceneGraph: analysing a scenegraph

I want to read a 3D model through OSG and extract information about its vertices, normals, texture coordinates, etc.
I do not understand the code below (complete tutorial from here). Why are we using prset->index(ic) as an index? I am confused. (*verts) is the vertex array, but what is prset->index(ic)?
for (ic=0; ic<prset->getNumIndices(); ic++) { // NB the vertices are held in the drawable -
    osg::notify(osg::WARN) << "vertex " << ic << " is index " << prset->index(ic) << " at " <<
        (*verts)[prset->index(ic)].x() << "," <<
        (*verts)[prset->index(ic)].y() << "," <<
        (*verts)[prset->index(ic)].z() << std::endl;
}
If your drawable uses indexed primitives, you need to de-reference the triangle vertices looking into the indices array, as you might re-use shared vertices of the vertex array.
Something like this.
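A minimal sketch of that de-referencing, assuming geom is the osg::Geometry that owns the drawable's Vec3Array vertex array and that its first primitive set is an indexed GL_TRIANGLES set (the variable names are illustrative, not from the tutorial):
#include <osg/Geometry>
#include <osg/Notify>

// geom: the osg::Geometry holding the drawable's data (assumed to exist for this sketch)
osg::Vec3Array* verts = dynamic_cast<osg::Vec3Array*>(geom->getVertexArray());
osg::PrimitiveSet* prset = geom->getPrimitiveSet(0);
if (verts && prset)
{
    // With an indexed GL_TRIANGLES primitive set, every three entries of the
    // index list select three (possibly shared) vertices out of the vertex array.
    for (unsigned int ic = 0; ic + 2 < prset->getNumIndices(); ic += 3)
    {
        const osg::Vec3& v0 = (*verts)[prset->index(ic)];
        const osg::Vec3& v1 = (*verts)[prset->index(ic + 1)];
        const osg::Vec3& v2 = (*verts)[prset->index(ic + 2)];
        // v0, v1, v2 are the corners of one triangle; the indices let several
        // triangles re-use the same vertex, which is stored only once in "verts".
        osg::notify(osg::WARN) << "triangle " << ic / 3 << ": "
            << v0.x() << "," << v0.y() << "," << v0.z() << "  "
            << v1.x() << "," << v1.y() << "," << v1.z() << "  "
            << v2.x() << "," << v2.y() << "," << v2.z() << std::endl;
    }
}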

Vector.push_back(pair<int,int>(x1,x2 )); does not work

I used a DLIB parallel_for loop to do some processing and add coordinates to a std::vector<pair<int,int>> that has been declared outside the loop. But I cannot get vector.push_back() to work from within the loop.
Verified whether there are any declaration issues.
Passed the vector pointer to the parallel_for loop lambda function.
//Store coordinates of respective face_image
std::vector<pair<int,int>> xy_coords;

//Create a dlib image window
window.clear_overlay();
window.set_image(dlib_frame);

auto detections = f_detector(dlib_frame);
dlib::parallel_for(0, detections.size(), [&,detections,xy_coords](long i)
{
    auto det_face = detections[i];
    //Display Face data to the user
    cout << "Face Found! " << "Area: " << det_face.area() << "X: " << det_face.left() << "Y: " << det_face.bottom() << endl;
    //Get the Shape details from the face
    auto shape = sp(dlib_frame, det_face);
    //Extract Face Image from frame
    matrix<rgb_pixel> face_img;
    extract_image_chip(dlib_frame, get_face_chip_details(shape, 150, 0.25), face_img);
    faces.push_back(face_img);
    //Add the coordinates to the coordinates vector
    xy_coords.push_back(std::pair<int,int>((int)det_face.left(),(int)det_face.bottom()));
    //Add face to dlib image window
    window.add_overlay(det_face);
});
Your lambda is capturing xy_coords by copy; the one you're pushing into inside the lambda is not the same as the one outside. Capture it by reference instead: with the & capture-default it is enough to write [&, detections], since xy_coords is then captured by reference (repeating it explicitly as [&, &xy_coords, ...] is not allowed next to a & default).
See this for more info:
https://en.cppreference.com/w/cpp/language/lambda#Lambda_capture
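A minimal sketch of the fix, reusing detections, sp, dlib_frame, faces and xy_coords from the question. The mutex is an extra precaution not mentioned above, since several threads may call push_back on the same vectors at once:
#include <mutex>

std::vector<std::pair<int,int>> xy_coords;
std::mutex xy_mutex; // guards the shared vectors against concurrent push_back

dlib::parallel_for(0, detections.size(), [&, detections](long i)
{
    auto det_face = detections[i];
    auto shape = sp(dlib_frame, det_face);

    matrix<rgb_pixel> face_img;
    extract_image_chip(dlib_frame, get_face_chip_details(shape, 150, 0.25), face_img);

    // xy_coords and faces are captured by reference through the '&' default,
    // so these push_back calls modify the vectors declared outside the lambda.
    std::lock_guard<std::mutex> lock(xy_mutex);
    faces.push_back(face_img);
    xy_coords.push_back(std::make_pair((int)det_face.left(), (int)det_face.bottom()));
});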

How to convert a triangulated hole patch to a surface mesh?

I have a triangulated hole patch with vertices and triangles. Now how do I convert this to a surface mesh?
I am trying to partially fill a hole in my mesh using two different patches. I have the vertex locations for all the points on the boundary (points with z=0). Using them I have triangulated the hole using the following code (from https://doc.cgal.org/latest/Polygon_mesh_processing/Polygon_mesh_processing_2triangulate_polyline_example_8cpp-example.html)
std::vector<PointCGAL> polyline;
Mesh::Property_map<vertex_descriptor, std::string> name;
Mesh::Property_map<vertex_descriptor, PointCGAL> location = mesh1.points();
BOOST_FOREACH(vertex_descriptor vd, mesh1.vertices()) {
    if (location[vd].z() < 0.00001)
    {
        std::cout << "on Boundary" << endl;
        polyline.push_back(PointCGAL(location[vd].x(), location[vd].y(), location[vd].z()));
    }
    std::cout << location[vd] << std::endl;
}
typedef CGAL::Triple<int, int, int> Triangle_int;
std::vector<Triangle_int> patch;
patch.reserve(polyline.size() - 2); // there will be exactly n-2 triangles in the patch
CGAL::Polygon_mesh_processing::triangulate_hole_polyline(
    polyline,
    std::back_inserter(patch));
Have a look at the function polygon_soup_to_polygon_mesh()
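A minimal sketch of that conversion, assuming Mesh and PointCGAL are the same Surface_mesh and point types as in the question and that polyline and patch are still in scope (the header name follows the current Polygon_mesh_processing documentation and may differ in older CGAL releases):
#include <CGAL/Polygon_mesh_processing/polygon_soup_to_polygon_mesh.h>

// Build a polygon soup from the triangulation: the points are the polyline
// vertices, and each Triple holds the indices of one triangle into that list.
std::vector<std::vector<std::size_t>> soup_faces;
soup_faces.reserve(patch.size());
for (const Triangle_int& t : patch)
{
    soup_faces.push_back({ (std::size_t)t.first, (std::size_t)t.second, (std::size_t)t.third });
}

Mesh patch_mesh; // same Surface_mesh type as mesh1 in the question
CGAL::Polygon_mesh_processing::polygon_soup_to_polygon_mesh(polyline, soup_faces, patch_mesh);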

PCLVisualizer not staying open

I have the following code to estimate and show normal vectors for points in my point cloud:
int main(int argc, char* argv[]) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ> ("coffee_mug_1_1_1.pcd", *cloud) == -1) //* load the file
    {
        PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n");
        return (-1);
    }
    std::cout << "Loaded "
              << cloud->width * cloud->height
              << " data points from test_pcd.pcd with the following fields: "
              << std::endl;

    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud (cloud);

    // Create an empty kdtree representation, and pass it to the normal estimation object.
    // Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
    ne.setSearchMethod (tree);

    // Output datasets
    pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);

    // Use all neighbors in a sphere of radius 3cm
    ne.setRadiusSearch (0.03);

    // Compute the features
    ne.compute (*cloud_normals);
    cout << "Computed normals " << cloud_normals->width * cloud_normals->height << cloud_normals->points[0] << endl;

    pcl::visualization::PCLVisualizer viewer("PCL Viewer");
    viewer.setBackgroundColor(0.0, 0.0, 0.5);
    viewer.addPointCloud(cloud);
    viewer.addPointCloudNormals<pcl::PointXYZ,pcl::Normal>(cloud, cloud_normals);
    while (!viewer.wasStopped ())
    {
        viewer.spinOnce ();
    }
}
However, when I run it the PCLVisualizer flashes on the screen and then the program terminates. I have no idea why it doesn't stay. If I use a CloudViewer just to display the point cloud (not the normals), this works fine and stays on the screen.
Three things you can check:
1. Check that you actually read your file without error. You can put a cin.get() right after PCL_ERROR ("Couldn't read file coffee_mug_1_1_1.pcd \n"); so that the program doesn't exit straight away if the file couldn't be read.
2. You should probably give a different id to your normals if you want to visualize them, because I think that right now both your cloud and the normals use the id "cloud". Check the PCL example on normal visualization, or see the sketch right after this list.
3. Make sure you get valid values for your normals.
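A minimal sketch of point 2, reusing the cloud and cloud_normals variables from the question (the level 10 and scale 0.05 are just example display values):
pcl::visualization::PCLVisualizer viewer("PCL Viewer");
viewer.setBackgroundColor(0.0, 0.0, 0.5);
// give each item its own id so the second add call does not clash with the first
viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");
// draw every 10th normal, 5 cm long, under a separate id
viewer.addPointCloudNormals<pcl::PointXYZ, pcl::Normal>(cloud, cloud_normals, 10, 0.05, "normals");
while (!viewer.wasStopped())
{
    viewer.spinOnce();
}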
If nothing from those works, could you maybe upload your coffee mug pcd somewhere and provide a link?
I have found the problem I had here: I was not linking against the same Boost libs that I was compiling against, because I had /usr/lib in my link library path.