I am using a library called the Point Cloud Library (PCL). In particular, I am trying to compute Point Feature Histograms. I followed this code from the website:
#include <pcl/point_types.h>
#include <pcl/features/pfh.h>
{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal> ());
... read, pass in or create a point cloud with normals ...
... (note: you can create a single PointCloud<PointNormal> if you want) ...
// Create the PFH estimation class, and pass the input dataset+normals to it
pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
pfh.setInputCloud (cloud);
pfh.setInputNormals (normals);
// alternatively, if cloud is of type PointNormal, do pfh.setInputNormals (cloud);
// Create an empty kdtree representation, and pass it to the PFH estimation object.
// Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
//pcl::KdTreeFLANN<pcl::PointXYZ>::Ptr tree (new pcl::KdTreeFLANN<pcl::PointXYZ> ()); -- older call for PCL 1.5-
pfh.setSearchMethod (tree);
// Output datasets
pcl::PointCloud<pcl::PFHSignature125>::Ptr pfhs (new pcl::PointCloud<pcl::PFHSignature125> ());
// Use all neighbors in a sphere of radius 5cm
// IMPORTANT: the radius used here has to be larger than the radius used to estimate the surface normals!!!
pfh.setRadiusSearch (0.05);
// Compute the features
pfh.compute (*pfhs);
// pfhs->points.size () should have the same size as the input cloud->points.size ()
}
The output I get is an array of 125 values per point of the original point cloud. For example, if I have a point cloud with 1000 points, where each point contains XYZ, there will be 1000 * 125 values. I understand why there are 125 entries, each corresponding to a bin: with 3 features and 5 divisions per feature, 5^3 = 125.
This post helped some: PCL Point Feature Histograms - binning
Unfortunately I still have a few questions:
1) Why do I have a 125-bin histogram per point? Is it because it measures what percentage of the points in the k-nearest neighborhood of the current point have similar features, and each point has its own neighborhood?
2) I see that for some points all 125 entries are zero. Why?
3) Graphing point feature histogram values as shown in the paper and website:
Website:
http://pointclouds.org/documentation/tutorials/pfh_estimation.php#pfh-estimation
Paper:
https://pdfs.semanticscholar.org/5aee/411f0b4228ba63c85df0e8ed64cab5844aed.pdf
The graphs shown have bin number on their X axis (in my case, 125 bins), so the natural question is: how do we consolidate the 125 values per point into one graph?
I tried a simple summation of the corresponding columns, scaled by a constant, but I do not think it is right. By summation I mean: sum bin[0] over every point, then bin[1] over every point, and so on up to bin[124].
I would really appreciate any help to clarify this.
Thank you.
The PFH descriptor is a local descriptor, so the histogram is computed for each point it is given. You will likely want to use it only at a keypoint or a set of keypoints.
The histogram will have entries of 0 if it has no nearest neighbors within the radius search.
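One quick way to verify this is a sketch like the following, continuing from the question's snippet; i is a placeholder index of a point whose histogram came out all zeros:
// Count neighbors within the same 0.05 radius passed to setRadiusSearch above;
// if only the point itself is found, the PFH for that point stays all zeros.
std::vector<int> neighbor_indices;
std::vector<float> neighbor_sqr_distances;
tree->setInputCloud (cloud);  // reuse the kd-tree from the snippet above
int i = 0;  // placeholder: index of a suspect point
int found = tree->radiusSearch (cloud->points[i], 0.05, neighbor_indices, neighbor_sqr_distances);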
As for graphing, try viewing the histogram for one point at a time. I don't think it makes sense to consolidate it into one graph.
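For viewing one point's histogram, here is a minimal sketch using PCLPlotter, assuming the pfhs cloud from the question's snippet; point_index is a placeholder you choose:
#include <pcl/visualization/pcl_plotter.h>

// Plot the 125-bin PFH histogram of a single point; "pfh" is the field
// name of pcl::PFHSignature125, point_index is the point to inspect.
int point_index = 0;  // placeholder
pcl::visualization::PCLPlotter plotter;
plotter.addFeatureHistogram (*pfhs, "pfh", point_index);
plotter.plot ();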
If you are interested in global descriptors that take into account all points, take a look at the CVFH descriptor (Clustered Viewpoint Feature Histogram) or other global ones.
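If it helps, here is a minimal sketch of the CVFH route, reusing the cloud, normals, and tree objects from the snippet above; unlike PFH, it produces one 308-bin signature per smooth region of the cloud rather than one histogram per point:
#include <pcl/features/cvfh.h>

pcl::CVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> cvfh;
cvfh.setInputCloud (cloud);
cvfh.setInputNormals (normals);
cvfh.setSearchMethod (tree);
pcl::PointCloud<pcl::VFHSignature308>::Ptr signatures (new pcl::PointCloud<pcl::VFHSignature308>);
cvfh.compute (*signatures);  // typically a handful of signatures for the whole cloud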
I am trying to segment a cylinder from a plane using conditional Euclidean Clustering (based on this documentation). To test this out I have generated a point cloud consisting of just a plane and a cylinder, shown below:
I have set up my custom Euclidean clustering as follows:
bool
enforceCurvatureSimilarity (const PointFull& point_a, const PointFull& point_b, float squared_distance)
{
  Eigen::Map<const Eigen::Vector3f> point_a_normal = point_a.getNormalVector3fMap (),
                                    point_b_normal = point_b.getNormalVector3fMap ();
  // consider the points similar if their normals are roughly parallel
  // (|dot| > 0.7 corresponds to an angle of about 45 degrees or less)
  if (std::abs (point_a_normal.dot (point_b_normal)) > 0.7)
  {
    return (true);
  }
  return (false);
}
I calculate the normals and perform the clustering as follows:
// Set up a Normal Estimation class and merge data in cloud_with_normals
ne.setSearchMethod (search_tree);
// setKSearch seems faster than radius search -> is it less accurate?
ne.setKSearch (5);
//ne.setRadiusSearch (0.1);
// pcl::IntegralImageNormalEstimation<PointT, PointN> ne;
// ne.setMaxDepthChangeFactor(0.02f);
// ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud (cloud);
ne.compute (*cloud_normals);
pcl::concatenateFields (*cloud, *cloud_normals, *cloud_with_normals);
// Set up a Conditional Euclidean Clustering class
pcl::ConditionalEuclideanClustering<PointFull> cec (true);
cec.setInputCloud (cloud_with_normals);
cec.setConditionFunction (&enforceCurvatureSimilarity);
//cec.setClusterTolerance (500.0);
cec.setClusterTolerance(0.05);
// Clusters that make up less than 0.1% of the total cloud size are considered too small
cec.setMinClusterSize (cloud_with_normals->size () / 1000);
// Clusters that make up more than 10% of the total cloud size are considered too large
cec.setMaxClusterSize (cloud_with_normals->size () * 0.1);
cec.segment (*clusters);
cec.getRemovedClusters (small_clusters, large_clusters);
However, when I try to segment the point cloud, everything is seen as a single cluster. I can think of two reasons why this might be happening.
The first is incorrect tuning of the parameters. I have tried many different combinations for both the normal estimation and the clustering parameters, but have yet to see any real difference. At best, a few points on the cylinder are placed in separate single-point clusters.
The second reason is that the normals around the base of the cylinder are being incorrectly estimated due to the hard edge. This could give them an angle where they shouldn't have one, which breaks down the clustering algorithm. An example of this can be seen below:
How can I fix this so that the segmentation works correctly?
Maybe using DoN (Difference of Normals) segmentation is better for this situation. Your point cloud is sparse, so even on the plane the normals may be estimated incorrectly.
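For reference, here is a minimal sketch of the DoN pipeline, adapted from the PCL Difference of Normals tutorial; "cloud" is assumed to be the input cloud from the question, and small_scale and large_scale are placeholder radii to tune to your cloud density:
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/don.h>
#include <pcl/search/kdtree.h>

pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
pcl::NormalEstimation<pcl::PointXYZ, pcl::PointNormal> ne;
ne.setInputCloud (cloud);
ne.setSearchMethod (tree);

// normals at a small support radius pick up fine structure such as the cylinder edge
const double small_scale = 0.02, large_scale = 0.1;  // placeholder radii -- tune these
pcl::PointCloud<pcl::PointNormal>::Ptr normals_small (new pcl::PointCloud<pcl::PointNormal>);
ne.setRadiusSearch (small_scale);
ne.compute (*normals_small);

// normals at a large support radius smooth over that structure
pcl::PointCloud<pcl::PointNormal>::Ptr normals_large (new pcl::PointCloud<pcl::PointNormal>);
ne.setRadiusSearch (large_scale);
ne.compute (*normals_large);

// DoN: the output's curvature field holds the DoN magnitude, which is large where
// the surface changes between scales (the cylinder) and small on the plane
pcl::PointCloud<pcl::PointNormal>::Ptr don_cloud (new pcl::PointCloud<pcl::PointNormal>);
pcl::copyPointCloud (*cloud, *don_cloud);
pcl::DifferenceOfNormalsEstimation<pcl::PointXYZ, pcl::PointNormal, pcl::PointNormal> don;
don.setInputCloud (cloud);
don.setNormalScaleSmall (normals_small);
don.setNormalScaleLarge (normals_large);
if (don.initCompute ())
  don.computeFeature (*don_cloud);
// threshold don_cloud on the "curvature" field (e.g. with pcl::ConditionalRemoval)
// to isolate the cylinder before clustering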
I have been trying to fit a cylinder model to a generated point cloud using the pcl tools. I have been adapting the example code that is provided in the documentation here.
From my understanding, pcl::SampleConsensusModelCylinder requires normals data, so I have added a new check in the code for a -cf argument. The code then calculates the normals of each point into a pcl::Normal-type point cloud called cloud_normals and concatenates this with the original pcl::PointXYZ-type point cloud, cloud. I save this to a new pcl::PointNormal-type point cloud called cloud_normalpoints, and with this I attempt to fit the pcl::SampleConsensusModelCylinder using RAndom SAmple Consensus (RANSAC).
I have included the code below:
else if (pcl::console::find_argument (argc, argv, "-cf") >= 0 )
{
//TODO: find fastest way to fit cylinder model to point cloud
// the cloud_normals point cloud will be used to store the estimated normals
pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);
cloud_normals->width = 500;
cloud_normals->height = 1;
// is_dense is true if no points have NaN or Inf in any of their floating point fields
cloud_normals->is_dense = false;
cloud_normals->points.resize (cloud_normals->width * cloud_normals->height);
// the NormalEstimation object ne is created and will estimate the normals and curvature at each point
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
// the search::KdTree object pointer points to a search method for finding points in 3D space (3D point clouds)
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
// use the filtered cloud as an input to the normal estimator
ne.setInputCloud (cloud);
// set number of k-nearest neighbours to use for feature estimation to 50
ne.setKSearch (50);
// compute normals and save these to the clouds_normals point cloud
ne.compute (*cloud_normals);
pcl::PointCloud<pcl::PointNormal>::Ptr cloud_normalpoints (new pcl::PointCloud<pcl::PointNormal>);
cloud_normalpoints->width = 500;
cloud_normalpoints->height = 1;
// is_dense is true if no points have NaN or Inf in any of their floating point fields
cloud_normalpoints->is_dense = false;
cloud_normalpoints->points.resize (cloud_normalpoints->width * cloud_normalpoints->height);
pcl::concatenateFields(*cloud,*cloud_normals,*cloud_normalpoints);
//TODO: Solve normals not given error
pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
// Declare ransac as a RANSAC implementation searching for a cylinder (according to model_c) in the cloud
pcl::RandomSampleConsensus<pcl::PointNormal> ransac (model_c);
// Set distance threshold of 0.01 -> this decides which points count as inliers
ransac.setDistanceThreshold (.01);
// Compute model coefficients and find inliers
ransac.computeModel();
// Return indices of best set of inliers so far for this model
ransac.getInliers(inliers);
}
I also added some more code to generate an original point cloud containing a cylinder, but this works so I will not go into it here.
When I run my code it gets to the compute model stage and then throws the following error:
[pcl::SampleConsensusModelCylinder::computeModelCoefficients] No input dataset containing normals was given!
Does anyone know why this is? The cloud_normalpoints cloud includes the normal data that was found for each point. Should I be setting up the RANSAC estimator differently? Should I use a different point type? I am relatively new to pcl so any help would be greatly appreciated!
You have to call setInputNormals on your model_c, passing it cloud_normals.
The cloud you pass in the constructor of SampleConsensusModelCylinder only sets the XYZ information; it is not used for the normals.
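A sketch of the fix against the question's code; only the setInputNormals line is new:
pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
  model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
model_c->setInputNormals (cloud_normals);  // the missing call that causes the error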
This tutorial could also be interesting for you: https://pcl.readthedocs.io/projects/tutorials/en/latest/cylinder_segmentation.html
I'm quite new to PCL, so I don't know how to debug this. In a tutorial I found code for surface smoothing and normal calculation based on MLS. Every time I execute this code snippet, I get an empty point cloud. I tested it on the file used in the tutorial, where it worked, and on another point cloud from a different tutorial. There the MLS algorithm returned a point cloud, but every normal was 0. I would be glad if someone could help me. Is there perhaps a limit on the number of points it can process? My point cloud has 4.5 million points and is a scan from a UAV.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

// Load input file into a PointCloud<T> with an appropriate type
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ> ());
pcl::io::loadPCDFile ("input.pcd", *cloud);  // "input.pcd" stands in for the actual file
// Create a KD-Tree
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
// Output has the PointNormal type in order to store the normals calculated by MLS
pcl::PointCloud<pcl::PointNormal> mls_points;
// Init object (second point type is for the normals, even if unused)
pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
mls.setComputeNormals (true);
// Set parameters
mls.setInputCloud (cloud);
mls.setPolynomialOrder (2);
mls.setSearchMethod (tree);
mls.setSearchRadius (0.03);
// Reconstruct
mls.process (mls_points);
I use the following functions as the base of my tracking algorithm.
//1. detect the features
What I mean is, this function extracts only the good features:
cv::goodFeaturesToTrack(gray_prev, // the image
features, // the output detected features
max_count, // the maximum number of features
qlevel, // quality level
minDist); // min distance between two features
// 2. track features
cv::calcOpticalFlowPyrLK(
gray_prev, gray, // 2 consecutive images
points_prev, // input point positions in first im
points_cur, // output point positions in the 2nd
status, // tracking success
err); // tracking error
cv::calcOpticalFlowPyrLK takes a vector of points from the previous image as input, and returns the corresponding points in the next image.
Suppose I want to calculate the optical flow for each pixel instead of only good features; in other words, to calculate the optical flow from (1,1) to (m,n).
cv::calcOpticalFlowPyrLK does sparse optical flow, i.e. from feature points. If you want it for each pixel, use cv::calcOpticalFlowFarneback, which computes a dense optical flow using Gunnar Farnebäck's algorithm.
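For what it's worth, here is a minimal sketch of the dense call; gray_prev and gray are the images from the snippets above, and the numeric parameters are commonly used defaults, not values tuned for any particular video:
#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>

cv::Mat flow;  // output: CV_32FC2, one (dx, dy) displacement per pixel
cv::calcOpticalFlowFarneback(gray_prev, gray, flow,
                             0.5,   // pyramid scale (each level is half the size)
                             3,     // number of pyramid levels
                             15,    // averaging window size
                             3,     // iterations at each pyramid level
                             5,     // pixel neighborhood for polynomial expansion
                             1.2,   // Gaussian sigma for the expansion
                             0);    // flags
// the flow at any pixel (x, y) -- covering (1,1) through (m,n) -- is then:
// cv::Point2f d = flow.at<cv::Point2f>(y, x);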
I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have a C++ code that uses Eigen 3. I will have an Eigen Vector that will contain x,y, and z values, and two Eigen matrices equivalent to those produced by Meshgrid in Matlab. I would like to interpolate the z values from the Vectors onto the grid points defined by the Meshgrid equivalents (which will extend past the outside of the original points a bit, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe, I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib-like plotting functionality. Presumably there must be an interpolation used somewhere in that for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a cpp file.
See here, for instance. http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and VoxelGrid. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each (x, y) bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why this works here). So you need to massage the x, y values when you're done:
Ex:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

//read in a list of comma separated values (x,y,z)
FILE * fp;
fp = fopen("points.xyz","r");
//store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr (new pcl::PointCloud<pcl::PointXYZ>);
double x,y,z;
// stop when a line no longer parses as three comma-separated values
while(fscanf(fp, "%lg, %lg, %lg", &x, &y, &z)==3)
{
pcl::PointXYZ basic_point;
basic_point.x = x; basic_point.y = y; basic_point.z = z;
basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size ();
basic_cloud_ptr->height = 1;
// create object for result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZ>());
// create filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud (basic_cloud_ptr);
//set the bin sizes here. (dx,dy,dz). for 2d results, make one of the bins larger
//than the data set span in that axis
sor.setLeafSize (0.1, 0.1, 1000);
sor.filter (*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point per bin. Then I just make a 2-D matrix and go through the point cloud, assigning points to their (x, y) bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
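To make that last binning step concrete, here is a rough sketch continuing from cloud_filtered above; min_x, min_y, max_x, max_y are assumed to be the data extents (e.g. from pcl::getMinMax3D), and the 0.1 bin sizes match setLeafSize:
#include <vector>
#include <limits>

const float dx = 0.1f, dy = 0.1f;  // bin sizes matching sor.setLeafSize above
const int nx = static_cast<int> ((max_x - min_x) / dx) + 1;
const int ny = static_cast<int> ((max_y - min_y) / dy) + 1;
// 2-D matrix of z values, one cell per (x, y) bin; empty cells stay NaN
std::vector<std::vector<float> > grid (nx, std::vector<float> (ny, std::numeric_limits<float>::quiet_NaN ()));
for (size_t k = 0; k < cloud_filtered->points.size (); ++k)
{
  const pcl::PointXYZ &p = cloud_filtered->points[k];
  const int ix = static_cast<int> ((p.x - min_x) / dx);
  const int iy = static_cast<int> ((p.y - min_y) / dy);
  if (ix >= 0 && ix < nx && iy >= 0 && iy < ny)
    grid[ix][iy] = p.z;  // one averaged z value per occupied bin
}
// any remaining NaN cells have no data; fill them (e.g. by nearest neighbour)
// if, as in the question, NaN is not acceptable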