Calculation of corner points for the localization of a robot in 3D data - computer-vision

After segmenting out the subset of a point cloud that was fitted using the pcl::SACMODEL_LINE RANSAC line segmentation module,
in the next step the center point of the extracted point cloud is computed using
pcl::compute3DCentroid(point_cloud, centroid);
This gives an accurate center point as long as the camera and the extracted line model object are parallel to each other.
In the last step the corner points of the extracted point cloud, i.e. the fitted line, are calculated by adding a known distance to the center point.
This technique works only while the camera and the extracted line model object are parallel to each other; as soon as the camera makes an angle with the line, the corner point calculation fails.
Any suggestions on how to compute the corner points of the extracted point cloud data (pcl::SACMODEL_LINE) using an existing, reliable method in the PCL library?
Thanks in advance.

If you have your subset cloud accurately extracted using RANSAC, you should be able to use getMinMax3D() to find two corner points.
http://docs.pointclouds.org/1.7.0/group__common.html#ga3166f09aafd659f69dc75e63f5e10f81
While these are not actual points of the subset cloud, they can be used to determine the boundary and the points that lie on it.
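A minimal sketch of that idea, assuming the line inliers have already been extracted into their own cloud (the function and variable names here are only illustrative):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/common.h>   // pcl::getMinMax3D

// Returns the two opposite corners of the axis-aligned bounding box of the
// extracted line inliers; for a thin line segment they bound its end points
// regardless of how the camera is oriented relative to the line.
void lineCorners(const pcl::PointCloud<pcl::PointXYZ> &line_cloud,
                 pcl::PointXYZ &min_pt, pcl::PointXYZ &max_pt)
{
  pcl::getMinMax3D(line_cloud, min_pt, max_pt);
}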

Related

PCL: Sphere Filter for Point Cloud

My goal is to remove all points from a point cloud that lie within a certain radius (from the origin).
I've discovered the pcl::CropBox and pcl::CropHull filters, but the former crops a box (obviously...) and the latter needs hull indices.
I guess I could write my own filter by calculating the distance for each point and comparing it to a threshold. But maybe there is already an implementation of this that I'm not noticing?
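The hand-rolled version I have in mind would be something like this minimal sketch (assuming pcl::PointXYZ points and the origin as the sphere centre):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Keeps only the points farther than 'radius' from the origin.
pcl::PointCloud<pcl::PointXYZ>::Ptr
removeInsideSphere(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &input, float radius)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  const float r2 = radius * radius;
  for (const auto &p : input->points)
    if (p.x * p.x + p.y * p.y + p.z * p.z > r2)   // squared distance avoids a sqrt per point
      output->push_back(p);
  return output;
}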

cumulative matrix transformation with ICP from PCL

My goal is to align 3D point clouds with ICP. Somehow there is an error in my result, and I believe it comes from the cumulative matrix transformations.
For debugging I start with 2D point clouds that I created myself. To create a point cloud, I generate random angles and assign their cos() and sin() to an x and a y value, so I get random points on a circle. Then I apply a translation and a rotation that grow iteratively with each newly created image.
I generate about 20 point clouds and store them in 512*512 images. Then I want to load those images, create point clouds out of them and align them with ICP.
Now for the cumulative matrix transformation. The image at time 0 gets the identity matrix, but every other image gets, as its transformation, the matrix obtained from ICP (M) multiplied with the transformation matrix of the last known position:
M_i = M * M_{i-1}
I am not sure if this is the right way, or if I have to transform back to identity before applying a full transformation.
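In code, the accumulation I am doing looks roughly like this sketch (Eigen types as used by PCL; M is what icp.getFinalTransformation() returns for the current pair):

#include <Eigen/Dense>

// Accumulated pose of the current cloud in the frame of cloud 0.
// It starts as Eigen::Matrix4f::Identity() for the first cloud.
void accumulate(Eigen::Matrix4f &cumulative, const Eigen::Matrix4f &M)
{
  // M is icp.getFinalTransformation() for the pair (cloud i-1, cloud i).
  // Which multiplication order is correct depends on which cloud was the
  // source and which the target -- this is exactly the part I am unsure about.
  cumulative = M * cumulative;
}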
My results for 10 point clouds are:
In the first we see the gathered point clouds without ICP, and in the second with ICP. I tested it before with translations only, and that worked really well. Then I tested it with rotations only, and there the errors were far too high. It could be that the rotation is too large, so ICP aligns the points incorrectly and then finds wrong matches.
But when I test real data, images gathered from an Xbox Kinect camera, it shows the same error as in my example with 2D point clouds.
So am I calculating the cumulative matrix transformation incorrectly? Or is there maybe a different problem that I don't see?
And how should I set up my ICP correctly? I am only using the setting:
icp.setTransformationEpsilon (1e-9);
And is there any other way to test it correctly?
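For context, this is the kind of fuller configuration I could imagine using (the thresholds are only placeholder values, not tuned):

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// One pairwise ICP run; returns the transform that maps source onto target.
Eigen::Matrix4f pairwiseIcp(const pcl::PointCloud<pcl::PointXYZ>::Ptr &source,
                            const pcl::PointCloud<pcl::PointXYZ>::Ptr &target)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(source);
  icp.setInputTarget(target);
  icp.setMaxCorrespondenceDistance(0.05);  // placeholder: reject far-away matches
  icp.setMaximumIterations(50);            // placeholder: cap the iterations
  icp.setTransformationEpsilon(1e-9);      // the one setting I currently use
  icp.setEuclideanFitnessEpsilon(1e-6);    // placeholder: stop when the error stalls

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned);
  return icp.getFinalTransformation();
}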

3D reconstruction using stereo vision - theory

I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
Given two images, I somehow have to find the fundamental matrix F in order to obtain the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines used to find an image point x from the first image again in the second image.
Now comes the part where it gets a bit confusing for me:
I would start by taking an image point x_i in the first picture and trying to find the corresponding point x_i' in the second picture, using some matching algorithm. Using triangulation it is then possible to compute the real-world point X and, from that, its elevation. This process is repeated for every pixel in the right image.
In a perfect world (no noise etc.) triangulation would be based on
x1 = P1 X
x2 = P2 X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired; some pixels, however, will be impossible to match and therefore can't be triangulated.
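For completeness, I believe the "best fit" in the noisy case is the linear (DLT) triangulation from H&Z, with x_i = (u_i, v_i, 1)^T and p_j^k denoting the k-th row of P_j:

A\,\mathbf{X} = 0, \qquad
A =
\begin{pmatrix}
u_1\,\mathbf{p}_1^{3\top} - \mathbf{p}_1^{1\top} \\
v_1\,\mathbf{p}_1^{3\top} - \mathbf{p}_1^{2\top} \\
u_2\,\mathbf{p}_2^{3\top} - \mathbf{p}_2^{1\top} \\
v_2\,\mathbf{p}_2^{3\top} - \mathbf{p}_2^{2\top}
\end{pmatrix}

where X is taken as the singular vector of A belonging to the smallest singular value.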
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain the point correspondences (matching?), and that the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center you have a coordinate system; these are the image planes. When you have found two corresponding pixels, you can calculate a line for each pixel, which goes through the pixel and the respective optical center. Where the two lines intersect, that is the object point in 3D. Because the world is not perfect, they will probably not intersect exactly, and one may instead use the point where the lines are closest to each other.
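A minimal sketch of that midpoint construction, assuming the two optical centres c1, c2 and the viewing directions d1, d2 of one correspondence are already known in a common frame:

#include <Eigen/Dense>

// Midpoint of the shortest segment between two (skew) viewing rays -- the
// usual stand-in for their exact intersection when the data is noisy.
Eigen::Vector3d triangulateMidpoint(const Eigen::Vector3d &c1, const Eigen::Vector3d &d1,
                                    const Eigen::Vector3d &c2, const Eigen::Vector3d &d2)
{
  const Eigen::Vector3d w = c1 - c2;
  const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
  const double d = d1.dot(w),  e = d2.dot(w);
  const double denom = a * c - b * b;        // ~0 when the rays are parallel

  const double s = (b * e - c * d) / denom;  // parameter along ray 1
  const double t = (a * e - b * d) / denom;  // parameter along ray 2

  return 0.5 * ((c1 + s * d1) + (c2 + t * d2));
}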
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be aligned such that the images are parallel and each row in image 1 corresponds to the same row in image 2. Then correspondences only need to be searched on a per-row basis, and it is enough to know the difference along the x-axis of the corresponding points. This difference is the disparity.
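For a rectified pinhole pair with focal length f (in pixels) and baseline B, this disparity d converts directly to depth:

Z = \frac{f\,B}{d}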

Fit a circle or a spline into a bunch of 3D Points

I have some 3D points that roughly, but clearly, form a segment of a circle. I now have to determine the circle that best fits all the points. I think there has to be some sort of least-squares best fit, but I can't figure out how to start.
The points are sorted the way they would be situated on the circle. I also have an estimated curvature at each point.
I need the radius and the plane of the circle.
I have to work in C/C++ or use an external script.
You could use a Principal Component Analysis (PCA) to map your coordinates from three dimensions down to two dimensions.
Compute the PCA and project your data onto the first two principal components. You can then use any 2D algorithm to find the centre of the circle and its radius. Once these have been found/fitted, you can project the centre back into 3D coordinates.
Since your data is noisy, there will still be some data in the third dimension you squeezed out, but bear in mind that the PCA chooses this dimension so as to minimize the amount of information lost, i.e. by maximizing the amount of variance represented in the first two components, so you should be safe.
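A minimal sketch of that projection step with plain Eigen (the 2D circle fit itself is left out; the back-projection of the fitted centre is indicated in the final comment):

#include <vector>
#include <Eigen/Dense>

// Projects 3D points onto the plane spanned by their two largest principal
// components; the circle can then be fitted in 2D and its centre mapped back.
Eigen::MatrixX2d pcaProjectToPlane(const std::vector<Eigen::Vector3d> &pts,
                                   Eigen::Vector3d &mean, Eigen::Matrix3d &axes)
{
  // Centroid of the data.
  mean = Eigen::Vector3d::Zero();
  for (const auto &p : pts) mean += p;
  mean /= static_cast<double>(pts.size());

  // Covariance of the centred points.
  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto &p : pts) cov += (p - mean) * (p - mean).transpose();

  // Eigenvectors come sorted by ascending eigenvalue: column 0 is the plane
  // normal, columns 2 and 1 are the first and second principal directions.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
  axes = es.eigenvectors();

  Eigen::MatrixX2d uv(pts.size(), 2);
  for (std::size_t i = 0; i < pts.size(); ++i)
    uv.row(i) << (pts[i] - mean).dot(axes.col(2)), (pts[i] - mean).dot(axes.col(1));
  return uv;
}

// A 2D circle centre (cu, cv) found by any 2D fit maps back to 3D as:
//   centre3d = mean + cu * axes.col(2) + cv * axes.col(1);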
A good algorithm for this kind of data fitting is RANSAC (Random Sample Consensus). You can find a good description in the link, so this is just a short outline of the important parts:
In your special case the model would be the 3D circle. To build it up, pick three random non-collinear points from your set, compute the plane they are embedded in (cross product), project the random points onto that plane and then apply the usual 2D circle fitting. With this you get the circle centre, the radius and the plane equation. Now it is easy to check the support of each of the remaining points. The support may be expressed as a distance from the circle that consists of two parts: the orthogonal distance from the plane and the distance from the circle boundary inside the plane.
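A sketch of that model-building step (the circle through the three sampled points; the support check is indicated in the trailing comment):

#include <Eigen/Dense>

// Circle through three non-collinear 3D points: returns centre, radius and
// the normal of the plane the circle lies in.
bool circleFrom3Points(const Eigen::Vector3d &p1, const Eigen::Vector3d &p2,
                       const Eigen::Vector3d &p3,
                       Eigen::Vector3d &centre, double &radius, Eigen::Vector3d &normal)
{
  const Eigen::Vector3d a = p1 - p3, b = p2 - p3;
  const Eigen::Vector3d axb = a.cross(b);
  const double denom = 2.0 * axb.squaredNorm();
  if (denom < 1e-12) return false;              // points are (nearly) collinear

  // Circumcentre of the triangle (standard vector formula).
  centre = p3 + (a.squaredNorm() * b - b.squaredNorm() * a).cross(axb) / denom;
  radius = (p1 - centre).norm();
  normal = axb.normalized();
  return true;
}

// Support of a remaining point q, as described above:
//   double dPlane  = std::abs(normal.dot(q - centre));
//   double dCircle = std::abs((q - centre - normal.dot(q - centre) * normal).norm() - radius);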
Edit:
The reason why I would prefer RANSAC over ordinary least squares (LS) is its superior stability in the presence of heavy outliers. The following image shows an example comparison of LS vs. RANSAC: while the ideal model line is recovered by RANSAC, the dashed line is produced by LS.
The arguably easiest algorithm is called least-squares curve fitting.
You may want to check the math,
or look at similar questions, such as polynomial least squares for image curve fitting
However, I'd rather use a library for doing it.

How to get curve from intersection of point cloud and arbitrary plane?

I have various point clouds defining RT-STRUCTs, called ROIs, from DICOM files. DICOM files are produced by tomographic scanners. Each ROI is formed by a point cloud and represents some 3D object.
The goal is to get the 2D curve formed by a plane cutting the ROI's point cloud. The problem is that I can't just use the points that are intersected by the plane. What I probably need is to intersect a 3D concave hull with the plane and get the resulting intersection contour.
Are there any libraries which have already implemented these operations? I've found the PCL library, and it should probably be able to solve my problem, but I can't figure out how to achieve it with PCL. In addition, I can use Matlab as well - we use it through its runtime from C++.
Has anyone stumbled with this problem already?
P.S. As I've mentioned above, I need to use the solution from my C++ code - so it should be some library or a Matlab solution which I'll use through the Matlab Runtime.
P.P.S. Accuracy in this kind of calculation is really important - it will be used in medical software intended for working with brain tumors, so you can imagine the consequences of an error (:
You first need to form a surface from the point set.
If it's possible to pick a 2D direction for the points (i.e. they form a convex hull in one view) you can use a simple 2D Delaunay triangulation in those two coordinates.
Otherwise you need a full 3D surfacing function (marching cubes or Poisson reconstruction).
Then, once you have the triangles, it's simple to calculate the contour line along which a plane cuts them.
See links in Mesh generation from points with x, y and z coordinates
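A minimal sketch of that last cutting step, for one triangle against the plane n·x = d (degenerate vertex-on-plane cases are ignored for brevity):

#include <vector>
#include <Eigen/Dense>

// Intersection of one triangle with the plane n.x = d: returns the crossing
// points of the triangle edges with the plane (0 or 2 points in the generic
// case). Collecting these segments over all triangles gives the cut contour.
std::vector<Eigen::Vector3d> cutTriangle(const Eigen::Vector3d tri[3],
                                         const Eigen::Vector3d &n, double d)
{
  std::vector<Eigen::Vector3d> pts;
  for (int i = 0; i < 3; ++i)
  {
    const Eigen::Vector3d &a = tri[i];
    const Eigen::Vector3d &b = tri[(i + 1) % 3];
    const double da = n.dot(a) - d;       // signed distances to the plane
    const double db = n.dot(b) - d;
    if (da * db < 0.0)                    // edge crosses the plane
      pts.push_back(a + da / (da - db) * (b - a));
  }
  return pts;
}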
Perhaps you could just discard the points that are far from the plane and project the remaining ones onto the plane. You'll still need to reconstruct the curve in the plane, but there are several good methods for that. See for instance http://www.cse.ohio-state.edu/~tamaldey/curverecon.htm and http://valis.cs.uiuc.edu/~sariel/research/CG/applets/Crust/Crust.html.
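A minimal sketch of that slicing idea (plane given as n·x = d with a unit normal n; tol is the half-thickness of the slab to keep):

#include <vector>
#include <cmath>
#include <Eigen/Dense>

// Keeps the points within 'tol' of the plane n.x = d and projects them onto it;
// the projected set can then be handed to a curve-reconstruction method.
std::vector<Eigen::Vector3d> sliceAndProject(const std::vector<Eigen::Vector3d> &cloud,
                                             const Eigen::Vector3d &n, double d, double tol)
{
  std::vector<Eigen::Vector3d> slice;
  for (const auto &p : cloud)
  {
    const double dist = n.dot(p) - d;          // signed distance (n assumed unit length)
    if (std::abs(dist) <= tol)
      slice.push_back(p - dist * n);           // orthogonal projection onto the plane
  }
  return slice;
}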