How to align two point clouds given global poses? - computer-vision

If I have two point clouds which each have their own global poses in the form of 4x4 homogeneous matrices, how can I find the transform to apply to the first point cloud for it to align with the second point cloud?
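A common convention (an assumption here, not stated in the question) is that each 4x4 pose maps its cloud from the local/sensor frame into the world frame. Under that convention, the transform that brings the first cloud into the second cloud's frame is pose2.inverse() * pose1. A minimal sketch with PCL and Eigen, where cloud1, pose1 and pose2 are placeholder names:

#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>

// pose1, pose2: 4x4 homogeneous poses mapping each cloud's local frame to the world frame (assumed convention)
Eigen::Matrix4f relative = pose2.inverse() * pose1;   // maps frame 1 into frame 2

pcl::PointCloud<pcl::PointXYZ> cloud1_in_frame2;
pcl::transformPointCloud(cloud1, cloud1_in_frame2, relative);   // cloud1 now expressed in cloud 2's frame

If instead both clouds are already stored in world coordinates, they should already be roughly aligned, and any residual misalignment would need a registration step such as ICP.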

Related

How to visualize the result of pcl::FPFHEstimation in point cloud using PCL?

I'm using the pcl::FPFHEstimation class to detect feature points like this:
#include <pcl/features/fpfh_omp.h>

pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fest;
pcl::PointCloud<pcl::FPFHSignature33>::Ptr object_features(new pcl::PointCloud<pcl::FPFHSignature33>());
fest.setRadiusSearch(feature_radius);   // neighbourhood radius used to build each histogram
fest.setInputCloud(source_cloud);
fest.setInputNormals(point_normal);
fest.compute(*object_features);         // one FPFHSignature33 descriptor per input point
My question is: how do I visualize the detected feature points in the point cloud, e.g. feature points in red and non-feature points in white?
I have searched a lot for this, but I only find ways to display the histograms, which is not what I want.
Fast point feature histogram (FPFH) is a point descriptor (which in PCL comes in the form of pcl::FPFHSignature33). This means that the algorithm computes a histogram for every point in the input point cloud. Once the descriptors of the points are computed, you can classify, segment, etc. the points, but the process of classifying or segmenting the points is another algorithm.
In this paper I use FPFH to coarsely align point clouds. There, I use the FPFH descriptors to decide which point from cloud A corresponds to which point from cloud B (point association). In that publication, I compute the curvature of all points before the FPFH, and then I compute the FPFH only for the subset of points whose curvature is above a given threshold. This is how I extract key points. Then, I use the FPFH of these key points for the rest of the pipeline (in my case, associating points from different point clouds).
The analogy that you propose in one of your comments, "So the feature descriptor computed by FPFH is more like a multi-dimensional eigenvector or something", is a good one.
So, to answer your question: No, there is no other tool apart from histograms to visualize the FPFH data (or at least not an out-of-the-box pcl class that you can use).
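That said, if the goal is simply to show extracted key points (for example from the curvature threshold described above) in red and the rest of the cloud in white, that can be done with PCLVisualizer. A minimal sketch, assuming source_cloud, point_normal and a curvature_threshold value already exist:

#include <pcl/common/io.h>
#include <pcl/visualization/pcl_visualizer.h>

// keep the indices of points whose curvature exceeds the threshold (the "key points")
std::vector<int> keypoint_idx;
for (std::size_t i = 0; i < point_normal->size(); ++i)
    if (point_normal->points[i].curvature > curvature_threshold)
        keypoint_idx.push_back(static_cast<int>(i));

pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints(new pcl::PointCloud<pcl::PointXYZ>());
pcl::copyPointCloud(*source_cloud, keypoint_idx, *keypoints);

// full cloud in white, key points drawn on top in red
pcl::visualization::PCLVisualizer viewer("key points");
pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> white(source_cloud, 255, 255, 255);
pcl::visualization::PointCloudColorHandlerCustom<pcl::PointXYZ> red(keypoints, 255, 0, 0);
viewer.addPointCloud<pcl::PointXYZ>(source_cloud, white, "cloud");
viewer.addPointCloud<pcl::PointXYZ>(keypoints, red, "keypoints");
viewer.spin();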

Determining homography from known planes?

I've got a question related to multiple view geometry.
I'm currently dealing with a problem where I have a number of images collected by a drone flying around an object of interest. This object is planar, and I am hoping to eventually stitch the images together.
Leaving aside the classical way of identifying corresponding feature pairs, computing a homography and warping/blending, I want to see what information related to this task I can infer from prior known data.
Specifically, for each acquired image I know the following two things: I know the correspondence between the central point of my image and a point on the object of interest (on whose plane I would eventually want to warp my image). I also have a normal vector to the plane of each image.
So, knowing the centre point (in object-centric world coordinates) and the normal, I can derive the plane equation of each image.
My question is, knowing the plane equation of 2 images is it possible to compute a homography (or part of the transformation matrix, such as the rotation) between the 2?
I get the feeling that this may seem like a very straightforward/obvious answer to someone with deep knowledge of visual geometry but since it's not my strongest point I'd like to double check...
Thanks in advance!
Your "normal" is the direction of the focal axis of the camera.
So, IIUC, you have a 3D point that projects on the image center in both images, which is another way of saying that (absent other information) the motion of the camera consists of the focal axis orbiting about a point on the ground plane, plus an arbitrary rotation about the focal axis, plus an arbitrary translation along the focal axis.
The motion has a non-zero baseline, therefore the transformation between images is generally not a homography. However, the portion of the image occupied by the ground plane does, of course, transform as a homography.
Such a motion is defined by 5 parameters, e.g. the 3 components of the rotation vector for the orbit, plus the angle of rotation about the focal axis, plus the displacement along the focal axis. However, the one point correspondence you have gives you only two equations.
It follows that you don't have enough information to constrain the homography between the images of the ground plane.
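For reference, the standard plane-induced homography makes this concrete (this formula is not part of the original answer): for a scene plane n^T X + d = 0 expressed in the first camera's frame, relative motion (R, t) and intrinsics K_1, K_2, the homography between the two images of that plane is

H = K_2 (R - t n^T / d) K_1^{-1}

so even for the ground-plane portion of the images, computing H requires the relative rotation and translation, which the known image-plane equations and the single centre-point correspondence do not pin down.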

Calculation of corner points for the localization of robot in 3D data

I have segmented out a subset of a point cloud that was fitted using the pcl::SACMODEL_LINE RANSAC line segmentation module.
In the next step, the center point of the extracted point cloud is computed using
pcl::compute3DCentroid(point_cloud, centroid);
which gives an accurate center point as long as the camera and the extracted line model object are parallel to each other.
In the last step, the corner points of the extracted point cloud (i.e. the fitted line) are calculated by adding a known distance to the center point.
This technique is only valid while the camera and the extracted line model object are parallel to each other; as soon as the camera is at an angle to it, the corner point calculation fails.
Any suggestions on how to compute the corner points of the extracted point cloud data (pcl::SACMODEL_LINE) using an existing, reliable method in the PCL library?
Thanks in advance.
If you have your subset cloud accurately extracted using RANSAC, you should be able to use getMinMax3D() to find two corner points.
http://docs.pointclouds.org/1.7.0/group__common.html#ga3166f09aafd659f69dc75e63f5e10f81
While these are not actual points of the subset cloud, they can be used to determine the boundary and the points that lie on it.
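A minimal sketch of that suggestion, assuming line_cloud is the pcl::PointCloud<pcl::PointXYZ> containing only the RANSAC line inliers:

#include <pcl/common/common.h>

pcl::PointXYZ min_pt, max_pt;
pcl::getMinMax3D(line_cloud, min_pt, max_pt);
// min_pt and max_pt are the corners of the axis-aligned bounding box of the inliers;
// they bound the two end points of the segment regardless of the viewing angle.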

Cumulative matrix transformation with ICP from PCL

My goal is to align 3D point clouds with ICP. Somehow I get an error, and I believe it is because of the cumulative matrix transformations.
For debugging I start with 2D point clouds that I created myself. To create a point cloud, I pick random angles and map them with cos() and sin() to x and y values, so I have random points on a circle. Then I apply a translation and a rotation that increase iteratively for each newly created image.
I generate about 20 point clouds and store them in 512*512 images. Then I want to load those images, create point clouds out of them and align them with ICP.
Now for the cumulative matrix transformation. The image at time 0 gets the identity matrix, but every other image gets as its transformation the matrix obtained from ICP (M) multiplied with the transformation matrix from the last known position:
M_i = M * M_(i-1)
I am not sure if this is the right way, or if I have to transform back to the identity before applying a full transformation.
My results are for 10 point clouds:
In the first image we see the gathered point clouds without ICP, and in the second with ICP. I tested it before with only translations, and that worked really well. Then I tested it with only rotations, and there the errors were much too high. It could be that the rotation is too large, so ICP aligns the points wrongly and then finds wrong matches.
But when I test on real data, images gathered from an Xbox Kinect camera, it shows the same error as in my example with 2D point clouds.
So am I calculating the cumulative matrix transformation wrong, or is there maybe a different problem which I don't see?
And how should I set up my ICP correctly? I am only using the setting:
icp.setTransformationEpsilon (1e-9);
And is there any other way to test it correctly?
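For comparison, here is a minimal sketch (not from the original post; clouds is an assumed std::vector of pcl::PointCloud<pcl::PointXYZ>::Ptr) of one common way to chain pairwise ICP results. With PCL's convention, getFinalTransformation() maps the source cloud into the target cloud's frame, so the global pose of cloud i is the previous global pose right-multiplied by the pairwise ICP result:

#include <pcl/registration/icp.h>
#include <pcl/common/transforms.h>

Eigen::Matrix4f global_pose = Eigen::Matrix4f::Identity();   // pose of cloud 0

for (std::size_t i = 1; i < clouds.size(); ++i)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(clouds[i]);       // new cloud
    icp.setInputTarget(clouds[i - 1]);   // previous cloud
    icp.setTransformationEpsilon(1e-9);
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                  // aligned = clouds[i] expressed in clouds[i-1]'s frame

    // chain: pose_i (frame i -> frame 0) = pose_{i-1} * T (frame i -> frame i-1)
    global_pose = global_pose * icp.getFinalTransformation();

    pcl::PointCloud<pcl::PointXYZ> in_world;
    pcl::transformPointCloud(*clouds[i], in_world, global_pose);
    // accumulate in_world into the map / visualization here
}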

How to do the 2D-3D point correspondence

I'm working with the OpenCV API on an augmented reality project using one camera. I have:
The 3D points of my 3D object (I get 4 points from MeshLab).
The 2D points which I want to follow (I have 4 points); these points are not the projections of the 3D points.
The intrinsic camera parameters.
Using these parameters, I get the extrinsic parameters (rotation and translation, using the cvFindExtrinsicParam function), which I have used to render my model and set the modelView matrix.
My problem is that the 3D model is not shown in a fixed position: it shows up at different locations on my image. How can I fix the model's location, and hence the modelView matrix?
In other forums I was told that I should establish the 2D-3D correspondence to get the extrinsic parameters, but I don't know how to match my 2D points with the 3D points.
Typically you would design the points you want to track in such a fashion that the 2D-3D correspondence is immediately clear. The easiest way to do this is to have points with different colors. You could also go with some sort of pattern (google "augmented reality cards"), which you would then have to analyze in order to find out how it is rotated in the image. The pattern, of course, cannot be rotationally symmetric.
If you can't do that, you can try out all the different permutations of the points, plug them into OpenCV to get a matrix, then project your 3D points to 2D points with those matrices, and then see which one fits best.
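A minimal sketch of that brute-force idea, using the modern C++ calls cv::solvePnP and cv::projectPoints rather than the legacy C API; objectPoints (std::vector<cv::Point3f>), imagePoints (std::vector<cv::Point2f>), and the calibrated K / dist matrices are assumed to exist:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Try every ordering of the 2D points against the fixed 3D points, solve a pose for each
// candidate ordering, reproject the 3D points, and keep the ordering with the smallest
// reprojection error.
double bestError = std::numeric_limits<double>::max();
cv::Mat bestRvec, bestTvec;

std::vector<int> order = {0, 1, 2, 3};
do {
    std::vector<cv::Point2f> candidate;
    for (int idx : order) candidate.push_back(imagePoints[idx]);

    cv::Mat rvec, tvec;
    if (!cv::solvePnP(objectPoints, candidate, K, dist, rvec, tvec))
        continue;

    std::vector<cv::Point2f> reprojected;
    cv::projectPoints(objectPoints, rvec, tvec, K, dist, reprojected);

    double err = 0.0;
    for (std::size_t i = 0; i < candidate.size(); ++i) {
        cv::Point2f d = candidate[i] - reprojected[i];
        err += std::sqrt(d.x * d.x + d.y * d.y);
    }
    if (err < bestError) { bestError = err; bestRvec = rvec; bestTvec = tvec; }
} while (std::next_permutation(order.begin(), order.end()));
// bestRvec / bestTvec now hold the extrinsics for the best-fitting point ordering.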