I am trying to merge point clouds from two frames into one bigger point cloud. I will use ICP for that, but I understand I need to pre-align the point clouds. I am trying to do it with the PCL template_alignment code from:
https://pcl.readthedocs.io/projects/tutorials/en/latest/template_alignment.html#template-alignment
The program computes surface normals after loading the point cloud. It works fine for the sample data used in the code, but for my own data the "norm_est.compute(*normals_)" statement on line 89 returns NaN values. I read in the PCL library documentation that if the function can't find the neighbouring points, it will return NaN values. That is my question: why is the program unable to find neighbouring points, and what do I do about it? I am using the same settings as in the code in the above link for the search radius and the other parameters of the normal estimation. My left image and point cloud are given below. I have uploaded a coloured point cloud for better visualization, but for alignment purposes I am using the point cloud without RGB; my pointcloud.ply file contains only xyz coordinates.
Simple fix: modify that line (89) as follows.
Old:
norm_est.setRadiusSearch (normal_radius_);
New:
norm_est.setKSearch (5);
What this does: instead of looking inside a sphere of a particular size (which contains an unknown, possibly zero, number of points), it looks for a fixed number of nearest neighbours.
Note that 5 is a fairly arbitrary number. You could go faster by lowering it to 3 (the minimum required) or slower but more accurate by increasing it. It is probably best not to drop a hard-coded value right there; I suggest exposing it as a parameter, similar to how normal_radius_ was handled before. But this should get you past the issue for now.
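For context, a minimal self-contained sketch of this k-nearest-neighbour normal estimation (the function name and the default k are mine, not from the tutorial):

#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>

// Estimate normals from a fixed number of neighbours instead of a radius,
// so every point gets a valid normal as long as the cloud has at least k points.
pcl::PointCloud<pcl::Normal>::Ptr
estimateNormalsKnn (pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud, int k = 5)
{
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> norm_est;
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
  norm_est.setInputCloud (cloud);
  norm_est.setSearchMethod (tree);
  norm_est.setKSearch (k);  // k nearest neighbours, not a radius
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  norm_est.compute (*normals);
  return normals;
}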
Other options:
1: Remove NaN points from the point cloud after calculating the normals (pcl::removeNaNFromPointCloud).
2: Run a preprocessing step with a statistical outlier removal filter, or an outright minimum-neighbours-within-radius filter. This will remove the points with too few neighbours, which are what is generating the NaN values in your normal calculation (see the sketch after this list).
3: Increase the radius of the normal calculation, or perform a nearest-neighbour (not radius-based) normal calculation.
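As a rough illustration of options 1 and 2 (the parameter values are guesses you would need to tune for your data):

#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/filter.h>  // pcl::removeNaNFromPointCloud
#include <vector>

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr filtered (new pcl::PointCloud<pcl::PointXYZ>);
// ... load `cloud` from your .ply file ...

// Option 2: drop points whose mean neighbour distance marks them as outliers.
pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
sor.setInputCloud (cloud);
sor.setMeanK (50);             // neighbours used to estimate the mean distance
sor.setStddevMulThresh (1.0);  // cutoff = global mean + 1.0 * stddev
sor.filter (*filtered);

// Option 1: alternatively, strip NaN points from a cloud directly.
std::vector<int> kept_indices;
pcl::removeNaNFromPointCloud (*cloud, *filtered, kept_indices);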
Related
I'm using the pcl::FPFHEstimation class to detect feature points like this:
pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fest;
pcl::PointCloud<pcl::FPFHSignature33>::Ptr object_features(new pcl::PointCloud<pcl::FPFHSignature33>());
fest.setRadiusSearch(feature_radius);
fest.setInputCloud(source_cloud);
fest.setInputNormals(point_normal);
fest.compute(*object_features);
My question is: how do I visualize the detected feature points in the point cloud? For example, the detected feature points in red and the non-feature points in white.
I have searched a lot for this, but I only found ways to display histograms, which is not what I want.
The fast point feature histogram is a point descriptor (which in PCL comes in the form of pcl::FPFHSignature33). This means that the algorithm computes a histogram for all points in the input point cloud. Once the descriptors of the points are computed, you can classify, segment... the points. But the process of classifying or segmenting the points would be another algorithm.
In this paper I use FPFH to coarsely align point clouds. In there, I use the FPFH descriptors to decide which point from cloud A corresponds to which point from cloud B (to associate points). In that publication, what I do is compute the curvature for all points before the FPFH, and then I only compute the FPFH of the subset of points whose curvature is above a given threshold. This is how I extract key points. Then, I use the FPFH of these key points for the rest of the pipeline (in my case, associating points from different point clouds).
The analogy that you propose in one of your comments, "So the feature descriptor computed by FPFH is more like a multi-dimensional eigenvector or something", would be a good one.
So, to answer your question: No, there is no other tool apart from histograms to visualize the FPFH data (or at least not an out-of-the-box pcl class that you can use).
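That said, once you have selected a key-point subset yourself (e.g. by a curvature threshold, as above), you can render that subset in a different colour. A minimal sketch with pcl::visualization::PCLVisualizer, assuming full_cloud and keypoints are clouds you have already filled:

#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

// full_cloud holds all points, keypoints the selected subset;
// both are pcl::PointCloud<pcl::PointXYZ>::Ptr that you filled elsewhere.
pcl::visualization::PCLVisualizer viewer ("keypoints");
pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> white (full_cloud, 255, 255, 255);
pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> red (keypoints, 255, 0, 0);
viewer.addPointCloud<pcl::PointXYZ> (full_cloud, white, "cloud");
viewer.addPointCloud<pcl::PointXYZ> (keypoints, red, "keypoints");
viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "keypoints");
while (!viewer.wasStopped ())
  viewer.spinOnce (100);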
I have calculated the essential matrix using the 5-point algorithm. I'm not sure how to integrate it with RANSAC so it gives me a better outcome.
Here is the source code. https://github.com/lunzhang/openar/blob/master/src/utils/5point/computeEssential.js
Currently, I was thinking about computing the essential matrix for 5 random points, then converting the essential matrix to a fundamental matrix and checking the error against a threshold using the equation x'Fx = 0. But then I'm not sure what to do after that.
How do I know which points to mark as outliers? If the error is too big, do I mark them as outliers right away? Could one point produce different essential matrices depending on what the other 4 points are?
Well, here is a short explanation, in pseudo-code, of how you can integrate this with RANSAC. Basically, all RANSAC does is compute your model (here the essential matrix) using a subset of the data, and then see if the rest of the data "is happy" with that result. It keeps the result for which the highest portion of the dataset "is happy".
highest_number_of_happy_points = -1;
best_estimated_essential_matrix = Identity;
for iter = 1 to max_iter_number:
    n_pts = get_n_random_pts(P);  // get a subset of n points from the set of points P. You can use 5, but you can also use more.
    E = compute_essential(n_pts);
    number_of_happy_points = 0;
    for pt in P:
        // we want to know if pt is happy with the computed E
        err = cost_function(pt, E);  // for example x^TFx as you propose, or x^TEx with the essential
        if (err < some_threshold):
            number_of_happy_points += 1;
    if (number_of_happy_points > highest_number_of_happy_points):
        highest_number_of_happy_points = number_of_happy_points;
        best_estimated_essential_matrix = E;
This should do the trick. Usually, you set some_threshold experimentally to a low value. There are of course more sophisticated RANSAC variants; you can easily find them by googling.
Your idea of using x^TFx is fine in my opinion.
Once this RANSAC completes, you will have best_estimated_essential_matrix. The outliers are those points whose x^TFx value is greater than your chosen threshold.
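As an aside, if you ever move this from JavaScript to C++, OpenCV already wraps this exact loop; a minimal sketch (the intrinsics values and the matched point vectors are placeholders for your own data):

#include <opencv2/calib3d.hpp>
#include <vector>

// pts1/pts2: matched pixel coordinates from the two views; K: 3x3 intrinsics.
std::vector<cv::Point2f> pts1, pts2;  // fill from your feature matching
const double fx = 525.0, fy = 525.0, cx = 319.5, cy = 239.5;  // example values
cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
cv::Mat inlier_mask;  // one entry per match: 1 = inlier, 0 = outlier
cv::Mat E = cv::findEssentialMat (pts1, pts2, K, cv::RANSAC,
                                  0.999 /*confidence*/, 1.0 /*threshold, px*/,
                                  inlier_mask);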
To answer your final question: yes, a point can produce a different essential matrix depending on which 4 other points it is grouped with, because their spatial configuration is different (you can even have degenerate configurations). In an ideal setting this wouldn't be the case, but we always have noise, matching errors and so on, so in the end the equations you obtain from one set of 5 points won't produce exactly the same result as those from 5 other points.
Hope this helps.
I have a set of Exact_predicates_exact_constructions_kernel::Point_3 points that are coplanar. I would like to be able to serialize the points to a database in double precision and then read the points back into an exact-precision space and have them still be coplanar. Specifically, my workflow is CGAL -> Postgres using PostGIS -> SFCGAL. SFCGAL requires that the points be coplanar and uses exact constructions.
I know how to write the geometry, but I'm not sure how to go about rounding the points so that they remain coplanar. Obviously the points read back will have lost precision and will be slightly displaced compared to the originals, but I only need them to be within roughly 1e-04 of their respective originals.
Unfortunately, projecting the points onto a plane after reading them is a last resort option. It doesn't work very well for my purposes.
I have a point cloud, and I would like to test whether there is a corner in a 3D room. So I would like to discuss my approach, and whether there is a better approach in terms of speed, because I want to run it on mobile phones.
I will try to use the Hough transform to detect lines, then check whether there are three intersecting lines that form two intersecting planes.
If the point cloud data comes from a depth sensor, then you have a relatively dense sampling of your walls. One thing I found that works well with depth sensors (e.g. Kinect or DepthSense) is a robust version of the RANSAC procedure that @MartinBeckett suggested.
Instead of picking 3 points at random, pick one point at random, and get the neighboring points in the cloud. There are two ways to do that:
The proper way: use a 3D nearest neighbor query data structure, like a KD-tree, to get all the points within some small distance from your query point.
The sloppy but faster way: use the pixel grid neighborhood of your randomly selected pixel. This may include points that are far from it in 3D, because they are on a different plane/object, but that's OK, since this pixel will not get much support from the data.
The next step is to generate a plane equation from that group of 3D points. You can use PCA on their 3D coordinates to get the two most significant eigenvectors, which define the plane surface (the last eigenvector should be the normal).
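A rough sketch of that PCA step with Eigen (the neighbourhood gathering is assumed to have happened already, e.g. via a KD-tree radius query):

#include <Eigen/Dense>
#include <vector>

// Fit a plane to a neighbourhood by PCA: the eigenvector of the covariance
// matrix with the smallest eigenvalue is the plane normal.
void fitPlanePCA (const std::vector<Eigen::Vector3d>& pts,
                  Eigen::Vector3d& centroid, Eigen::Vector3d& normal)
{
  centroid = Eigen::Vector3d::Zero ();
  for (const auto& p : pts) centroid += p;
  centroid /= static_cast<double> (pts.size ());

  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero ();
  for (const auto& p : pts) {
    const Eigen::Vector3d d = p - centroid;
    cov += d * d.transpose ();
  }
  // Eigenvalues come back in increasing order, so column 0 of the
  // eigenvector matrix is the direction of least variance, i.e. the normal.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig (cov);
  normal = eig.eigenvectors ().col (0);
}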
From there, the RANSAC algorithm proceeds as usual: check how many other points in the data are close to that plane, and find the plane(s) with maximal support. I found it better to find the largest support plane, remove the supporting 3D points, and run the algorithm again to find other 'smaller' planes. This way you may be able to get all the walls in your room.
EDIT:
To clarify the above: the support of a hypothesized plane is the set of all 3D points whose distance from that plane is at most some threshold (e.g. 10 cm, should depend on the depth sensor's measurement error model).
After each run of the RANSAC algorithm, the plane that had the largest support is chosen. All the points supporting that plane may be used to refine the plane equation (this is more robust than just using the neighboring points) by performing PCA/linear regression on the support set.
In order to proceed and find other planes, the support of the previous iteration should be removed from the 3D point set, so that remaining points lie on other planes. This may be repeated as long as there are enough points and best plane fit error is not too large.
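For reference, PCL ships this find-largest-plane-then-remove loop; a minimal sketch with pcl::SACSegmentation (the thresholds and stopping constants are guesses to tune for your sensor):

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
// ... fill `cloud` with your sensor data ...

pcl::SACSegmentation<pcl::PointXYZ> seg;
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setDistanceThreshold (0.05);  // 5 cm support threshold
seg.setMaxIterations (200);

pcl::ExtractIndices<pcl::PointXYZ> extract;
while (cloud->size () > 500)      // stop when too few points remain
{
  pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs (new pcl::ModelCoefficients);
  seg.setInputCloud (cloud);
  seg.segment (*inliers, *coeffs);            // largest-support plane
  if (inliers->indices.size () < 500) break;  // remaining planes too small
  // ... store coeffs (ax + by + cz + d = 0) for the corner test ...
  pcl::PointCloud<pcl::PointXYZ>::Ptr rest (new pcl::PointCloud<pcl::PointXYZ>);
  extract.setInputCloud (cloud);
  extract.setIndices (inliers);
  extract.setNegative (true);                 // keep the non-supporting points
  extract.filter (*rest);
  cloud.swap (rest);
}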
In your case (looking for a corner), you need at least 3 perpendicular planes. If you find two planes with large support which are roughly parallel, then they may be the floor and some counter, or two parallel walls. Either the room has no visible corner, or you need to keep looking for a perpendicular plane with smaller support.
The normal approach would be RANSAC:
Pick 3 points at random.
Make a plane (a sketch of this step follows the list).
Check if each other point lies on the plane.
If enough are on the plane, recalculate a best-fit plane from all these points and remove them from the set.
If not, try another 3 points.
Stop when you have enough planes, or too few points left.
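A minimal sketch of the make-a-plane and point-on-plane steps, using Eigen (the tolerance is an assumption to tune):

#include <Eigen/Dense>
#include <cmath>

// Plane through three points: normal n = (p2 - p1) x (p3 - p1), offset d = -n.p1.
// A point q lies on the plane (within tol) if |n.q + d| / |n| <= tol.
bool onPlane (const Eigen::Vector3d& p1, const Eigen::Vector3d& p2,
              const Eigen::Vector3d& p3, const Eigen::Vector3d& q, double tol)
{
  const Eigen::Vector3d n = (p2 - p1).cross (p3 - p1);
  const double d = -n.dot (p1);
  return std::abs (n.dot (q) + d) <= tol * n.norm ();
}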
Another approach, if you know that the planes are near-vertical or near-horizontal:
Pick a small vertical range.
Get all the points in this range.
Try to fit 2D lines.
Repeat for other Z ranges.
If you get a parallel set of lines in each Z slice, then you probably have a plane; recalculate the best-fit plane from those points (a sketch of the slice-and-fit step follows).
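A rough sketch of one slice-and-fit iteration, again with Eigen (in practice you would fit several lines per slice, e.g. with a 2D RANSAC, rather than a single PCA fit):

#include <Eigen/Dense>
#include <vector>

// Collect the points of one horizontal slice and fit a single 2D line to
// their (x, y) coordinates by PCA (total least squares).
void fitSliceLine (const std::vector<Eigen::Vector3d>& cloud,
                   double z_lo, double z_hi,
                   Eigen::Vector2d& point_on_line, Eigen::Vector2d& direction)
{
  std::vector<Eigen::Vector2d> slice;
  for (const auto& p : cloud)
    if (p.z () >= z_lo && p.z () < z_hi)
      slice.emplace_back (p.x (), p.y ());

  Eigen::Vector2d mean = Eigen::Vector2d::Zero ();
  for (const auto& q : slice) mean += q;
  mean /= static_cast<double> (slice.size ());

  Eigen::Matrix2d cov = Eigen::Matrix2d::Zero ();
  for (const auto& q : slice) {
    const Eigen::Vector2d d = q - mean;
    cov += d * d.transpose ();
  }
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> eig (cov);
  point_on_line = mean;
  direction = eig.eigenvectors ().col (1);  // largest-variance direction
}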
Even though this is an old post, I would like to present a complementary approach, similar to Hough voting, to find all corner locations, composed of plane intersections, jointly:
1: Uniformly sample the space. Ensure that there is at least a distance $d$ between the points (e.g. you can even do this in CloudCompare with a 'space' subsampling).
2: Compute the point cloud normals at these points.
3: Randomly pick 3 points from this downsampled cloud.
4: Each oriented point (point + normal) defines a hypothetical plane. Therefore, the 3 points picked define 3 planes. Those planes, if not parallel and not intersecting at a line, always intersect at a single point (a sketch of this intersection follows the list).
5: Create a voting space to describe the corner: the intersection of the 3 planes (the point) might be a valid parameterization, so our parameter space has 3 free parameters.
6: For each picked triple, cast a vote in the accumulator space for the corner point.
7: Go to (3) and repeat until all sampled points are exhausted, or enough iterations are done. This way we'll be casting votes for all possible corner locations.
8: Take the local maxima of the accumulator space. Depending on the votes, we'll be selecting the corners from the intersections of the largest planes (as they'll receive more votes) down to the intersections of smaller planes. The largest 4 are probably the corners of the room. If not, one could also consider the other local maxima.
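A minimal sketch of the intersection in step 4: three planes n_i . x = d_i meet at the solution of a 3x3 linear system (using Eigen; a near-zero determinant flags the degenerate cases):

#include <Eigen/Dense>
#include <cmath>

// Intersect three planes given as (normal n[i], offset d[i]) with n[i].x = d[i].
// Returns false if the planes are (nearly) parallel or meet in a line.
bool intersectThreePlanes (const Eigen::Vector3d n[3], const double d[3],
                           Eigen::Vector3d& corner)
{
  Eigen::Matrix3d N;
  N.row (0) = n[0].transpose ();
  N.row (1) = n[1].transpose ();
  N.row (2) = n[2].transpose ();
  if (std::abs (N.determinant ()) < 1e-9)
    return false;  // degenerate: no unique intersection point
  corner = N.inverse () * Eigen::Vector3d (d[0], d[1], d[2]);
  return true;
}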
Note that the voting space is a quantized 3D space, so the corner location will be a rough estimate of the actual one. If desired, one could store the planes intersecting at that location and refine them (with an iterative optimization similar to ICP) to get a very fine corner location.
This approach will be quite fast and probably very accurate, given that you can refine the location. I believe it's the best algorithm presented so far. Of course this assumes that we can compute the normals of the point cloud (we can always do that at the sample locations with the help of the eigenvectors of the covariance matrix).
Please also look here, where I have put together a list of plane-fitting-related questions on Stack Overflow:
3D Plane fitting algorithms
I am trying to extract the curvature of a pulse along its profile (see the picture below). The pulse is calculated on a grid of 150 x 100 cells (length x height) using finite differences, implemented in C++.
I extracted all the points with the same value (contour/ level set) and marked them as the red continuous line in the picture below. The other colors are negligible.
Then I tried to find the curvature from this already noisy (due to grid discretization) contour line by the following means:
(moving average already applied)
1) Curvature via Tangents
The curvature of the line at point P is defined by $\kappa = \lim_{\Delta s \to 0} \Delta\varphi / \Delta s$, i.e. the limit of the turning angle $\Delta\varphi$ over the arc length $\Delta s$ between P and a neighbouring point N. Since my points have a certain distance between them, I could not approximate this limit closely enough, so the curvature was not calculated correctly. I tested it with a circle, which naturally has a constant curvature, but I could not reproduce this (only 1 significant digit was correct).
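For reference, a minimal sketch of this discrete estimate on a polyline of 2D points (an illustration of the idea, not the exact code used):

#include <Eigen/Dense>
#include <vector>
#include <cmath>

// Discrete curvature at interior vertex i of a polyline: the signed turning
// angle between the two adjacent segments over the mean segment length.
double discreteCurvature (const std::vector<Eigen::Vector2d>& pts, std::size_t i)
{
  const Eigen::Vector2d a = pts[i] - pts[i - 1];
  const Eigen::Vector2d b = pts[i + 1] - pts[i];
  const double dphi = std::atan2 (a.x () * b.y () - a.y () * b.x (), a.dot (b));
  const double ds = 0.5 * (a.norm () + b.norm ());
  return dphi / ds;
}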
2) Second derivative of the line parametrized by arclength
I calculated the first derivative of the line with respect to arc length, smoothed it with a moving average, and then took the derivative again (second derivative). But here I also got only 1 significant digit correct.
Unfortunately, taking a derivative amplifies the noise that is already inherent in the data.
3) Approximating the line locally with a circle
Since the curvature is the reciprocal of the circle's radius, I used the following approach:
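(The original figure is omitted; the idea is a circumscribed circle through three nearby contour points. For a triangle with side lengths a, b, c and area A, the circumradius is R = abc / (4A), so kappa = 1/R = 4A / (abc). A minimal sketch:)

#include <Eigen/Dense>
#include <cmath>

// Curvature of the circle through three points (the Menger curvature):
// kappa = 4 * area / (|p1-p2| * |p2-p3| * |p3-p1|).
double circleCurvature (const Eigen::Vector2d& p1, const Eigen::Vector2d& p2,
                        const Eigen::Vector2d& p3)
{
  const Eigen::Vector2d u = p2 - p1, v = p3 - p1;
  const double twice_area = std::abs (u.x () * v.y () - u.y () * v.x ());
  return 2.0 * twice_area /
         ((p1 - p2).norm () * (p2 - p3).norm () * (p3 - p1).norm ());
}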
This worked best so far (2 correct significant digits), but I need to refine even further. So my new idea is the following:
Instead of using the values at the discrete points to determine the curvature, I want to approximate the pulse profile with a 3-dimensional spline surface. Then I would extract the level set of a certain value from it to obtain a smooth line of points, from which I can get a clean curvature.
So far I could not find a C++ library which can generate such a Bezier spline surface. Could you maybe point me to one?
Also, do you think this approach is worth a shot, or will I lose too much accuracy in my curvature?
Do you know of any other approach?
With very kind regards,
Jan
edit: It seems I can not post pictures as a new user, so I removed all of them from my question, even though I find them important to explain my issue. Is there any way I can still show them?
edit2: ok, done :)
There is ALGLIB, which supports various flavours of interpolation:
Polynomial interpolation
Rational interpolation
Spline interpolation
Least squares fitting (linear/nonlinear)
Bilinear and bicubic spline interpolation
Fast RBF interpolation/fitting
I don't know whether it meets all of your requirements. I personally have not worked with this library yet, but I believe cubic spline interpolation could be what you are looking for (it is twice differentiable).
In order to prevent overfitting to your noisy input points, you should apply some sort of smoothing mechanism, e.g. try whether things like moving-window-average/Gaussian/FIR filters are applicable. Also have a look at (cubic) smoothing splines.
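For what it's worth, a rough sketch of how the curvature could then be read off an arc-length-parametrized cubic spline fit with ALGLIB (I have not worked with the library either, so treat the exact API usage as an assumption to check against the ALGLIB docs):

#include "interpolation.h"  // ALGLIB
#include <cmath>

// Fit x(s) and y(s) as cubic splines over the arc length s, then evaluate
// kappa(s) = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2).
double splineCurvature (const alglib::real_1d_array& s,
                        const alglib::real_1d_array& x,
                        const alglib::real_1d_array& y, double s_query)
{
  alglib::spline1dinterpolant sx, sy;
  alglib::spline1dbuildcubic (s, x, sx);
  alglib::spline1dbuildcubic (s, y, sy);
  double xv, dx, ddx, yv, dy, ddy;
  alglib::spline1ddiff (sx, s_query, xv, dx, ddx);  // value, 1st, 2nd derivative
  alglib::spline1ddiff (sy, s_query, yv, dy, ddy);
  return std::abs (dx * ddy - dy * ddx) / std::pow (dx * dx + dy * dy, 1.5);
}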