Match point clouds with an unknown dimension - computer-vision

I am trying to restore a point cloud based on another one. I have point clouds A and B with different coordinate systems. Points in A are in the Cartesian coordinate system (x, y, z). Points in B are described by a range, azimuth and elevation (r, theta, phi). However, I do not know the elevation of the points in cloud B. I only know that the two clouds are approximately the same but at a different scale. The set B as it is would correspond, in Cartesian coordinates, to a cloud of arcs because of the unknown elevation. So I converted A into the spherical coordinate system using the following formulas:
r = sqrt(x^2+y^2+z^2)
theta = arctan(x/z)
phi = arcsin(y/sqrt(x^2+y^2+z^2))
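For concreteness, a minimal NumPy sketch of that conversion; the axis convention (theta measured in the x-z plane, phi as elevation out of it) follows the formulas above and is an assumption:

import numpy as np

def cart_to_spherical(A):
    """Convert an (N, 3) array of (x, y, z) points to (r, theta, phi)."""
    x, y, z = A[:, 0], A[:, 1], A[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(x, z)    # azimuth; arctan2 keeps the right quadrant
    phi = np.arcsin(y / r)      # elevation above the x-z plane
    return np.column_stack([r, theta, phi])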
But I am not sure how to proceed after this. It looks like an optimization problem, so I am trying to find an objective function, but I don't know how to formulate it.
I would appreciate any help on this.

Draping 2d point on a 3d terrain

I am using OpenTK (OpenGL), and a general hint will be helpful.
I have a 3d terrain. I have one point on this terrain O(x,y,z) and two perpendicular lines passing through this point that will serve as my X and Y axes.
Now I have a set of 2D points which are in polar coordinates (range, theta). I need to find which points on the terrain correspond to these points. I am not sure what the best way to do it is. I can think of two ideas:
Let's say I am drawing A(x1,y1).
Find the intersection of the plane passing through O and A that is perpendicular to the XY plane. This will give me a polyline (semantics may be off). Now, on this line, I find a point that is visible from O and is at a distance equal to the range.
Create a circle which is perpendicular to the XY plane with radius "range", find its intersection points with the terrain, keep the ones that are visible from O and drop the rest.
I understand I can find several points which satisfy these conditions, so I will do further checks based on topography, but for now I need to get a smaller set which satisfies this condition.
I am new to OpenGL, but I understand geometry pretty well. I am wondering if something like this already exists in OpenGL, since it is a standard problem with ground measuring systems.
As you say, both of the options you present will give you more than the one point you need. As I understand your problem, you only need to perform a change of basis from polar coordinates (r, angle) to Cartesian coordinates (x, y).
This is fairly straightforward to do. Assuming that the two coordinate spaces share the origin O and that the angle is measured from the x-axis, the point (r_i, angle_i) maps to x_i = r_i*cos(angle_i) and y_i = r_i*sin(angle_i). If those assumptions aren't correct (i.e. if the origins aren't coincident or the angle is not measured from a ray parallel to the x-axis), then the transformation is a bit more complicated, but it can still be done.
If your terrain is represented as a height map, i.e. a 2D array of heights (e.g. Terrain[x][y] = z), then once you have the point in Cartesian coordinates (x_i, y_i) you can find the height at that point. Of course, (x_i, y_i) might not be exactly one of the [x] or [y] indices of the height map.
In that case, I think you have a few options:
Choose the closest (x,y) point and take that height; or
Interpolate the height at (x_i,y_i) based on the surrounding points in the height map (sketched below).
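For illustration, here is a minimal Python sketch of that second option, assuming a regular height map with unit spacing and an in-bounds query point (the function name is hypothetical):

import math

def terrain_height(terrain, r, angle):
    """Map a polar point to (x, y), then bilinearly interpolate Terrain[x][y]."""
    x = r * math.cos(angle)
    y = r * math.sin(angle)
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    # blend the four surrounding height samples
    top = terrain[x0][y0] * (1 - fx) + terrain[x0 + 1][y0] * fx
    bottom = terrain[x0][y0 + 1] * (1 - fx) + terrain[x0 + 1][y0 + 1] * fx
    return top * (1 - fy) + bottom * fy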
Unfortunately I am also learning OpenGL and can not provide any specific insights there, but I hope this helps solve your problem.
Reading your description I see a bit of confusion... maybe.
You have defined point O(x,y,z). Fine, this is your pole for the 3D coordinate system. Then you want to find a point defined by polar coordinates. That's fine also - it gives you a 2D location. Basically, all you need to do is pinpoint the location A'(x,y,0) in 3D, because we are assuming you know the elevation of A at (r,t), which you of course do from the terrain there.
The angle (t) can be measured only from one axis. Choose which axis will be your polar north and stick to it. Then you measure the range r you have and - voila! - you have your location. What's the point of having a 2D coordinate set if you don't use it? Instead, you're adding visibility to the mix - I assume it is important, but the highest terrain point at azimuth (t) will NOT NECESSARILY be within range (r).
You have specific coordinates. Just like RonL suggests, convert to (x,y), find (z) from the actual terrain and be done with it.
Unless that's not what you need. But in that case a different question is in order: what do you look for?

How can I tessellate the boundary of a point cloud?

I have a cloud of vertices. I'd like to tessellate a "shell" around the vertex cloud using only vertices in the cloud, such that the shell conforms roughly to the shape of the vertex cloud.
Is there an easy way to do this? I figured I could spherically parameterize the point cloud and then "walk" the outermost vertices to tessellate the cloud, but I'm not sure this will work.
I suppose it's acceptable to add vertices, but the general shape of the "shell" should match the shape of the vertex cloud.
I have an algorithm for you that works in the 2D case. It is tricky, but doable, to generalize it to 3D space. The basic idea is to start with a minimal surface (a triangle in 2D or a tetrahedron in 3D), and split each edge (face) as you traverse the array of points.
2D algorithm (python. FULL SOURCE/DEMO HERE: http://pastebin.com/jmwUt3ES)
edit: this demo is fun to watch: http://pastebin.com/K0WpMyA3
def surface(pts):
    center = pt_center(pts)
    tx = -center[0]
    ty = -center[1]
    pts = translate_pts(pts, (tx, ty))

    # tricky part: initialization
    # initialize edges such that you have a triangle with the origin inside of it
    # in 3D, this will be a tetrahedron.
    ptA, ptB, ptC = get_center_triangle(pts)
    print(ptA, ptB, ptC)

    # tracking edges we've already included (triangles in 3D)
    edges = [(ptA, ptB), (ptB, ptC), (ptC, ptA)]

    # loop over all other points
    pts.remove(ptA)
    pts.remove(ptB)
    pts.remove(ptC)
    for pt in pts:
        # find the edge that this point will be splitting
        for (ptA, ptB) in edges:
            if crossz(ptA, pt) > 0 and crossz(pt, ptB) > 0:
                break
        edges.remove((ptA, ptB))
        edges.append((ptA, pt))
        edges.append((pt, ptB))

    # translate everything back
    edges = [((ptA[0] - tx, ptA[1] - ty), (ptB[0] - tx, ptB[1] - ty))
             for (ptA, ptB) in edges]
    return edges
RESULT: (image omitted)
Generalizing to 3D:
- Instead of edges, you have triangles.
- The initialization is a tetrahedron around the origin.
- Finding the splitting face involves projecting the triangle and checking whether the point is interior to it.
- Splitting adds 3 new faces, whereas in 2D it added 2 new edges.
- You need to be careful about the orientation of the faces (in my 2D code, I was easily able to guarantee A->B would be CCW orientation).
Depending on the size of your point cloud and speed requirements, you may need to be more clever about data structures for faster add/remove.
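As an aside, the snippet above relies on a few helpers that live in the linked full source. Minimal stand-in implementations might look like this (these are assumptions for self-containedness, not the pastebin originals):

import math

def pt_center(pts):
    # centroid of the point set
    n = float(len(pts))
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def translate_pts(pts, offset):
    (dx, dy) = offset
    return [(x + dx, y + dy) for (x, y) in pts]

def crossz(a, b):
    # z component of the 2D cross product; > 0 when b is CCW from a
    return a[0] * b[1] - a[1] * b[0]

def get_center_triangle(pts):
    # one way to enclose the origin: sort by angle around it and take
    # three points roughly 120 degrees apart
    ordered = sorted(pts, key=lambda p: math.atan2(p[1], p[0]))
    n = len(ordered)
    return ordered[0], ordered[n // 3], ordered[2 * n // 3]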
Start with the 3D convex hull (see: convex hull algorithm for 3d surface z = f(x, y)).
Then, for each of the largest faces, search for the closest point in the cloud and re-triangulate that face to include the point.
Repeat until it is "close enough", based on the largest distance from the nearest cloud point for each of the remaining faces, or on the size (length/area?) of the largest remaining face. A sketch of this loop follows.
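A rough sketch of that refinement loop, under simplifying assumptions: "closest" is measured from the face centroid, the stopping test is just a fixed number of passes, and no degeneracy or visibility handling is done (SciPy's ConvexHull provides the starting mesh):

import numpy as np
from scipy.spatial import ConvexHull

def shrink_wrap(points, passes=50):
    """points: (N, 3) array; returns a list of faces as vertex-index triples."""
    faces = [tuple(int(i) for i in s) for s in ConvexHull(points).simplices]
    def face_area(f):
        a, b, c = points[list(f)]
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    for _ in range(passes):
        f = max(faces, key=face_area)          # largest remaining face
        centroid = points[list(f)].mean(axis=0)
        d = np.linalg.norm(points - centroid, axis=1)
        d[list(f)] = np.inf                    # exclude the face's own vertices
        p = int(np.argmin(d))                  # nearest cloud point
        # re-triangulate the face to include p
        faces.remove(f)
        faces.extend([(f[0], f[1], p), (f[1], f[2], p), (f[2], f[0], p)])
    return faces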
I would think about a "metric" function f(x, y, z) that returns a scalar value for an arbitrary point in 3D space. This function should be constructed so that it indicates whether a given point (x, y, z) is "inside" or "outside" the cloud. For instance, it could be the length of the average vector from (x, y, z) to every point in the cloud, or the number of cloud points within a certain vicinity of (x, y, z). The choice of function will affect the final result.
Having this f(x, y, z), you use the marching cubes algorithm to perform the tessellation, basically constructing an iso-surface of f(x, y, z) for a certain value.
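For instance, with the neighbor-count metric, a sketch using SciPy and scikit-image; the grid resolution, radius, and iso-level are all assumptions to tune:

import numpy as np
from scipy.spatial import cKDTree
from skimage import measure

def shell_mesh(cloud, n=40, radius=0.1, level=1.0):
    """Iso-surface of f(x, y, z) = number of cloud points within `radius`."""
    tree = cKDTree(cloud)
    lo = cloud.min(axis=0) - 2 * radius
    hi = cloud.max(axis=0) + 2 * radius
    axes = [np.linspace(lo[i], hi[i], n) for i in range(3)]
    samples = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    f = np.array([len(tree.query_ball_point(p, radius)) for p in samples],
                 dtype=float).reshape(n, n, n)
    # marching cubes tessellates the surface f = level
    verts, faces, _, _ = measure.marching_cubes(
        f, level=level, spacing=tuple((hi - lo) / (n - 1)))
    return verts + lo, faces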
You should try 3D Delaunay triangulation. This will tessellate the point cloud while making sure that the triangle mesh only has vertices from the point cloud. CGAL has two implementations of triangulating a point cloud - Delaunay and regular. The regular version triangulates the points using the idea described here.
You can use their implementation if you're using C++. If not, you can still look at their code to implement it yourself (it is pretty complex, though).
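If C++ is not a hard requirement, SciPy exposes a 3D Delaunay triangulation as well; note that the outer boundary of a Delaunay tessellation is the convex hull, so a concave shell still needs post-processing:

import numpy as np
from scipy.spatial import Delaunay

pts = np.random.rand(200, 3)   # stand-in point cloud
tri = Delaunay(pts)            # in 3D this produces tetrahedra
cells = tri.simplices          # (M, 4) vertex indices into pts
shell = tri.convex_hull        # (K, 3) boundary faces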
Sounds like you are looking for the "concave hull". This delivers a "reasonable" boundary around a set of points, where "reasonable" means fitting the set up to a given tolerance. It is not a geometric property like the convex hull, but it is a good approximation for a lot of real-world problems, like finding a closely fitting boundary around a city.
You can find an implementation in the Point Cloud Library.

Create dataset of XYZ positions on a given plane

I need to create a list of XYZ positions given a starting point and an offset between the positions, based on a plane. On a flat plane this is easy. Let's say the offset I need is to move down 3, then right 2, from position 0,0,0.
The output would be:
0,0,0 (starting position)
0,-3,0 (move down 3)
2,-3,0 (then move right 2)
The same goes for a different start position, let's say 5,5,1:
5,5,1 (starting position)
5,2,1 (move down 3)
7,2,1 (then move right 2)
The problem comes when the plane is no longer on this flat grid.
I'm able to calculate the equation of the plane and the normal vector given 3 points.
But now what can I do to create this dataset of XYZ locations given this equation?
I know I can solve for XYZ given two values. Say I know x=1 and y=1; I can solve for Z. But moving down 3 is no longer just y-3. I believe I need a linear equation along both the x and y axes to increment the positions and move parallel to the X and Y of this new plane, then just solve for Z. I'm not sure how to accomplish this.
The other issue is that I need to calculate the angle, tilt and rotation of this plane in relation to the base plane.
For example:
P1=(0,0,0) and P2=(1,1,0): tilt=0deg, angle=0deg, rotation=45deg.
P1=(0,0,0) and P2=(0,1,1): tilt=0deg, angle=45deg, rotation=0deg.
P1=(0,0,0) and P2=(1,0,1): tilt=45deg, angle=0deg, rotation=0deg.
P1=(0,0,0) and P2=(1,1,1): tilt=0deg, angle=45deg, rotation=45deg.
I've searched for hours on both of these problems, and I always come to a stop at the equation of the plane: manipulating x,y correctly to move parallel to the plane, and then taking that information to find the angles. This is a lot of geometry to solve, and I can't find any further information on how to calculate this list of points, let alone how to calculate the three angles to the base plane.
I would appreciate any help or insight on this. Just plain old math or a reference to C++ would be perfect for shedding some light on this issue I'm facing.
Thank you,
Matt
You can think of your plane as being defined by a point and a pair of orthonormal basis vectors (which just means two vectors of length 1, 90 degrees from one another). Your most basic plane can be defined as:
p0 = (0, 0, 0) #Origin point
vx = (1, 0, 0) #X basis vector
vy = (0, 1, 0) #Y basis vector
To find point p1 that's offset by dx in the X direction and dy in the Y direction, you use this formula:
p1 = p0 + dx * vx + dy * vy
This formula will always work if your offsets are along the given axes (which it sounds like they are). This is still true if the vectors have been rotated - that's the property you're going to be using.
So to find a point that's been offset along a rotated plane:
Take the default basis vectors (vx and vy, above).
Rotate them until they define the plane you want (you may or may not need to rotate the origin point as well, depending on how the problem is defined).
Apply the formula, and get your answer.
Now there are some quirks when you're doing rotation (order matters!), but that's the basic idea, and it should be enough to put you on the right track. Good luck!
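A minimal NumPy sketch of those three steps; the particular rotation (about z, then x) is an arbitrary example, and in practice the plane's orientation would come from your three known points:

import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

p0 = np.array([5.0, 5.0, 1.0])   # starting position
vx = np.array([1.0, 0.0, 0.0])   # default X basis vector
vy = np.array([0.0, 1.0, 0.0])   # default Y basis vector

# rotate the basis so it spans the tilted plane (order matters!)
R = rot_x(np.radians(30)) @ rot_z(np.radians(45))
vx, vy = R @ vx, R @ vy

p1 = p0 + (-3) * vy              # move down 3 along the plane
p2 = p1 + 2 * vx                 # then move right 2 along the plane
print(p1, p2)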

Use fundamental matrix to compute coordinates translation using OpenCV

I am trying to compute the coordinates correspondence of several points between two images.
I have a group of points whose correspondences are known; I use them with OpenCV's findFundamentalMat() in order to find the fundamental matrix.
I verified that x^T * F * x' = 0 for each point, and the result is always zero or very close to it.
The thing is, now I'd like to use the coordinates of a point on the first image (y) and the fundamental matrix (F) in order to find the coordinates of the point on the second image (y'). I first thought about simply using the equation above, but given only the z of the y' point, there can be an infinity of solutions.
How else can I use the fundamental matrix to compute the translations ?
To be more clear: knowing the fundamental matrix "linking" two projections, how can I use it to translate the coordinates of any known point (a, b, 1) from the first projection to the second projection?
Considering that we know a, b and F in this equation: (a', b', 1) * F * (a, b, 1)^T = 0
I had made a simple drawing as an example: http://i.imgur.com/drNr2.jpg . The idea is to find the coordinates of the red dot (xq, yq) in projection 2, considering that we know its coordinates in projection 1 and the ones of all other points in both projections (and some other ones as the algorithm to find the fundamental matrix actually requires at least 8 points)
One more clarification: in my example, the known points are coplanar, but the point being searched for will not necessarily be.
I hope that made my problem more clear :)
The fundamental matrix transforms points from one image to lines in the other. Could you please elaborate more on "How else can I use the fundamental matrix to compute the translations?" Telling us what you want to achieve, perhaps with an example, would help too.
Edit: If you have calibrated the camera, you can compute the essential matrix, E, from the fundamental matrix, F. E relates corresponding points expressed in normalized (calibrated) image coordinates. But of course, the requirement is to have the internal matrix. If K is the internal matrix, then E = transpose(K) * F * K.
The other method is to find the corresponding epipolar line for a point in the other image and then search along this line for the patch most similar in appearance to the patch surrounding the point in the first image. There are some other ways too, but we really need more information about the problem to tell which one suits your case.
Edit 2: In the drawing you've got, the points are coplanar. Hence, a homography maps the point positions between the two images, and there is no need to find the fundamental matrix. OpenCV has a function for estimating homographies (findHomography), which needs only four point correspondences.
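A minimal sketch of that route; the point values here are placeholders, with findHomography and perspectiveTransform doing the actual work:

import numpy as np
import cv2

# known coplanar correspondences: (N, 2) arrays with N >= 4
pts1 = np.array([[10, 10], [200, 15], [190, 140], [15, 130]], np.float32)
pts2 = np.array([[12, 20], [210, 30], [195, 150], [18, 145]], np.float32)

H, mask = cv2.findHomography(pts1, pts2)

# map a new point from projection 1 into projection 2
q = np.array([[[50.0, 60.0]]], np.float32)   # shape (1, 1, 2)
q2 = cv2.perspectiveTransform(q, H)
print(q2.ravel())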
Given:
A point a in image 1.
Goal:
Finding the corresponding point b, lying on the so-called epipolar line L, in image 2.
How?
    | x0 |         | x1 |
a = | y0 | ,   b = | y1 |
    | 1  |         | 1  |
L = F * a
    |F00 F01 F02|
F = |F10 F11 F12|
    |F20 F21 F22|
The following equation must be fulfilled for b to lie on L in image 2:
b' * F * a = 0.
Note: b' = transpose(b).
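A small NumPy sketch of this; F would come from an estimator such as OpenCV's findFundamentalMat, and the tolerance is an assumption:

import numpy as np

def epipolar_line(F, a_xy):
    """L = F * a: the epipolar line in image 2 for point a in image 1."""
    a = np.array([a_xy[0], a_xy[1], 1.0])
    return F @ a                     # (l0, l1, l2) encodes l0*x + l1*y + l2 = 0

def on_epipolar_line(F, a_xy, b_xy, tol=1e-6):
    """Check b' * F * a = 0, i.e. that b lies on the epipolar line of a."""
    b = np.array([b_xy[0], b_xy[1], 1.0])
    L = epipolar_line(F, a_xy)
    return abs(b @ L) <= tol * np.linalg.norm(L[:2])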
For some reason I could not add a comment due to a lack of reputation.
I have been studying this field for about a month now, and hopefully I can answer some of the questions left unanswered here that also puzzled me when I was studying the topic.
@M2X
The fundamental matrix is a mapping from a point in image plane 1 to a line in image plane 2. These lines are a special type of line called epipolar lines, formed by the intersection of the image plane with the plane constructed from the origins of the two cameras and the 3D point. So it is not possible to determine a point-to-point mapping using the fundamental matrix unless you have some additional information or constraints.
@Jukurrpa
A homography is a point-to-point mapping that takes lines to lines. One can prove that this mapping is linear in homogeneous coordinates, and since linear maps are equivalent to matrices, a homography can be defined by a matrix.
A set of 3D points lying on a plane, projected into two image planes, are related by a homography, so a homography will work in your case. Methods of estimating homographies from a given set of points are outlined in the book Multiple View Geometry in Computer Vision. Given corresponding points in both images, you can find the homography using iterative approaches (gradient descent) or a closed-form solution (singular value decomposition).
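For reference, a bare-bones sketch of the closed-form (SVD) route, without the coordinate normalization the book recommends:

import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 H mapping src -> dst (lists of (x, y) pairs, >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h minimizes |A h| subject to |h| = 1: the right singular vector
    # belonging to the smallest singular value
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the projective scale (assumes H[2, 2] != 0)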

Point cloud alignment using principal component analysis with CGAL

I have sets of randomly sampled points on the surfaces of 3D objects. I want to be able to compute the similarity between two different objects. To make that work, I first have to make sure that the sample points of both objects I want to compare have the same rotation and scale. I thought I could do this by orienting the principal component axes along the x/y/z axes, and scaling such that the longest principal component has unit length.
I first compute the centroid of the point set, and translate all points such that the centroid becomes the new origin.
I do the principal component analysis using the CGAL linear_least_squares_fitting_3 function, which gives the best-fitting plane through the points. I compute the normal of this plane by taking the cross product of the two base vectors:
Plane plane;
linear_least_squares_fitting_3(points.begin(), points.end(),
                               plane, CGAL::Dimension_tag<0>());
auto dir1 = dir2vec(plane.base1().direction());
auto dir2 = dir2vec(plane.base2().direction());
auto normal = dir1 ^ dir2; // cross product
normal.normalize(); dir1.normalize(); dir2.normalize();
The dir2vec function converts a CGAL::Direction_3 object to an equivalent osg::Vec3d object (I am using the OpenSceneGraph graphics engine). Finally, I rotate everything to the unit axes using the following code:
Matrixd r1, r2, r3;
r1.makeRotate(normal, Vec3d(1,0,0));
r2.makeRotate(dir1 * r1, Vec3d(0,1,0));
r3.makeRotate(dir2 * r1 * r2, Vec3d(0,0,1));
auto rotate = [&](Vec3d const &p) {
    return p * r1 * r2 * r3;
};
transform(osgPoints.begin(), osgPoints.end(), osgPoints.begin(), rotate);
Here, osgPoints is a vector<osg::Vec3d>. For testing purposes, I translate the centroid of the rotated points back to its original location, so the two point clouds don't overlap.
Vec3d center = point2vec(centroid);
auto tocentroid = [&](Vec3d const &v) {
    return v + center;
};
transform(osgPoints.begin(), osgPoints.end(), osgPoints.begin(), tocentroid);
To test it, I use two copies of the same point set, one of which is transformed (rotated and translated). The above code should undo the rotation; however, the results are not what I expected: see this image. The red lines indicate the base vectors of the best-fitting planes and their normals. It looks like the two calls to linear_least_squares_fitting_3 give slightly different answers, as one of the planes is rotated a little bit with respect to the other.
Here is another image where both objects are positioned with their centroids at the origin. It is now clearly visible that the normals and base vectors coincide, but the points do not.
Does anybody know why this happens, and, how I can prevent it?
Fitting a plane to a set of points leaves one degree of freedom unconstrained: the plane is free to spin around its normal, and the fit is equally good. I don't know anything about CGAL, but I wouldn't be surprised if it just picks a convenient plane among the equally good fits (probably the nearest projection from the original axes of the space).
If you did real PCA on the point cloud, I don't think you'd have that problem. Alternatively, perhaps you could rescale (stretch) your data along the normal discovered by the fitting algorithm and then find another fit. If you stretch the data out sufficiently, then the first plane found shouldn't be as good a fit as some orthogonal plane.
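For reference, a minimal sketch of the full-PCA route in NumPy; the sign and handedness fix-ups are assumptions so that mirrored or rotated copies of a cloud land in the same pose:

import numpy as np

def pca_align(points):
    """Rotate (N, 3) points so their principal axes line up with x/y/z."""
    centered = points - points.mean(axis=0)
    # eigenvectors of the covariance matrix are the principal axes
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    axes = eigvecs[:, ::-1]            # eigh sorts ascending; largest first
    # resolve the per-axis sign ambiguity deterministically
    for i in range(3):
        if axes[np.argmax(np.abs(axes[:, i])), i] < 0:
            axes[:, i] = -axes[:, i]
    if np.linalg.det(axes) < 0:        # keep the basis right-handed
        axes[:, 2] = -axes[:, 2]
    aligned = centered @ axes
    # scale so the longest principal direction has unit length
    return aligned / np.abs(aligned[:, 0]).max()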
It indeed seemed that CGAL does not compute all principal components, as JCooper suggested. I switched to the ALGLIB library to do the PCA and now it works.