Assurance of ICP internal metrics - C++

So I have an iterative closest point (ICP) algorithm that has been written and will fit a model to a point cloud. As a quick tutorial for those not in the know: ICP is a simple algorithm that fits points to a model, ultimately providing a homogeneous transform matrix between the model and the points.
Here is a quick picture tutorial.
Step 1: Find the closest point in the model set to each point in your data set:
Step 2: Using a bunch of fun maths (sometimes based on gradient descent or SVD), pull the clouds closer together and repeat until a pose is formed:
Now that bit is simple and working, what I would like help with is:
How do I tell if the pose that I have is a good one?
So currently I have two ideas, but they are kind of hacky:
How many points are used by the ICP algorithm. I.e., if I am fitting to almost no points, I assume that the pose will be bad:
But what if the pose is actually good? It could be, even with few points. I don't want to reject good poses:
So what we see here is that a low point count can still produce a very good pose if the points are in the right place.
So the other metric investigated was the ratio of supplied points to used points. Here's an example:
We exclude points that are too far away because they will be outliers. This means we need a good starting position for the ICP to work, but I am OK with that. In the above example the assurance will say NO, this is a bad pose, and it would be right, because the ratio of supplied points to included points is:
2/11 < SOME_THRESHOLD
So that's good, but it will fail in the case shown above where the triangle is upside down. It will say that the upside-down triangle is good because all of the points are used by ICP.
You don't need to be an expert on ICP to answer this question; I am looking for good ideas. Using knowledge of the points, how can we classify whether a pose solution is good or not?
Using both of these solutions together in tandem is a good suggestion, but it's a pretty lame solution if you ask me; simply thresholding feels very crude.
What are some good ideas for how to do this?
PS. If you want to add some code, please go for it. I am working in C++.
PPS. Someone help me with tagging this question; I am not sure where it should fall.

One possible approach might be comparing poses by their shape and their orientation.
Shape comparison can be done with the Hausdorff distance up to isometry; that is, poses have the same shape if
d(I(actual_pose), calculated_pose) < d_threshold
where d_threshold should be found from experiments. As isometric modifications of the actual pose I would consider rotations by different angles - that seems to be sufficient in this case.
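For a rough idea of what that check looks like in code, here is a minimal brute-force sketch (the Point type and the O(n·m) scan are my own simplifications; testing candidate rotations would just wrap hausdorff() in a loop over rotated copies of the actual pose):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point { double x, y, z; };

double dist(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Directed Hausdorff distance: the worst nearest-neighbour distance
// from any point of A to the set B.
double directedHausdorff(const std::vector<Point>& A, const std::vector<Point>& B) {
    double worst = 0.0;
    for (const Point& a : A) {
        double best = std::numeric_limits<double>::max();
        for (const Point& b : B)
            best = std::min(best, dist(a, b));
        worst = std::max(worst, best);
    }
    return worst;
}

// Symmetric Hausdorff distance; the shape test above is then
// hausdorff(rotated_actual, calculated) < d_threshold.
double hausdorff(const std::vector<Point>& A, const std::vector<Point>& B) {
    return std::max(directedHausdorff(A, B), directedHausdorff(B, A));
}
```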
If poses have the same shape, we should compare their orientation. To compare orientation we could use a somewhat simplified Freksa model. For each pose we should calculate the values
{xy_min, xy_max, xz_min, xz_max, yz_min, yz_max}
and then make sure that each difference between corresponding values for the poses does not exceed another_threshold, derived from experiments as well.
Hopefully this makes some sense, or at least you can draw something useful for your purpose from this.

ICP attempts to minimize the distance between your point-cloud and a model, yes? Wouldn't it make the most sense to evaluate it based on what that distance actually is after execution?
I'm assuming it tries to minimize the sum of squared distances between each point you try to fit and the closest model point. So if you want a metric for quality, why not just normalize that sum by dividing by the number of points being fitted? Yes, outliers will disrupt it somewhat, but they're also going to disrupt your fit somewhat.
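As a sketch of that metric (a simple point struct and a brute-force correspondence search are my assumptions; a real implementation would reuse ICP's own nearest-neighbour structure):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Pt { double x, y, z; };

// Mean squared residual: the average squared distance from each (already
// transformed) data point to its nearest model point. O(n*m) scan.
double meanSquaredResidual(const std::vector<Pt>& data, const std::vector<Pt>& model) {
    if (data.empty() || model.empty())
        return std::numeric_limits<double>::max();
    double sum = 0.0;
    for (const Pt& d : data) {
        double best = std::numeric_limits<double>::max();
        for (const Pt& m : model) {
            double dx = d.x - m.x, dy = d.y - m.y, dz = d.z - m.z;
            best = std::min(best, dx * dx + dy * dy + dz * dz);
        }
        sum += best;
    }
    return sum / static_cast<double>(data.size());
}
```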
It seems like any calculation you can come up with that provides more insight than whatever ICP is minimizing would be more useful incorporated into the algorithm itself, so it can minimize that too. =)
Update
I think I didn't quite understand the algorithm. It seems that it iteratively selects a subset of points, transforms them to minimize error, and then repeats those two steps? In that case your ideal solution selects as many points as possible while keeping error as small as possible.
You said combining the two terms seemed like a weak solution, but it sounds to me like an exact description of what you want, and it captures the two major features of the algorithm (yes?). Evaluating using something like error + B * (selected / total) seems spiritually similar to how regularization is used to address the overfitting problem with gradient descent (and similar) ML algorithms. Selecting a good value for B would take some experimentation.
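A minimal sketch of that combined score, with one caveat: the sign convention here is my assumption (lower is better, so the regularization term penalizes the fraction of points ICP rejected); B must be tuned experimentally, as suggested:

```cpp
// meanSquaredError could come from the meanSquaredResidual() sketch above.
double poseScore(double meanSquaredError, int selected, int total, double B) {
    double rejectedRatio = 1.0 - static_cast<double>(selected) / static_cast<double>(total);
    return meanSquaredError + B * rejectedRatio;
}
```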

Looking at your examples, it seems that one of the things that determines whether the match is good or not is the quality of the points. Could you use/calculate a weighting factor in your metric?
For example, you could weight down points which are co-linear / co-planar, or spatially close, as they probably define the same feature. That would perhaps allow your upside-down triangle to be rejected (as the points are in a line, and that's not a great indicator of the overall pose), but the corner case would be OK, as the points roughly define the hull.
Alternatively, maybe the weighting should be on how distributed the points are around the pose, again trying to ensure you have good coverage, rather than matching small indistinct features.
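One cheap way to quantify "how distributed the points are" is the eigenvalue ratio of their covariance matrix; a 2D sketch (the names and the thresholding are mine):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct P2 { double x, y; };

// Ratio of the smaller to the larger eigenvalue of the points' 2D
// covariance matrix: ~0 means the points are nearly collinear (a weak
// constraint on the pose), ~1 means they are spread in all directions.
double spreadRatio(const std::vector<P2>& pts) {
    if (pts.size() < 2) return 0.0;
    const double n = static_cast<double>(pts.size());
    double mx = 0.0, my = 0.0;
    for (const P2& p : pts) { mx += p.x; my += p.y; }
    mx /= n; my /= n;
    double cxx = 0.0, cxy = 0.0, cyy = 0.0;
    for (const P2& p : pts) {
        cxx += (p.x - mx) * (p.x - mx);
        cxy += (p.x - mx) * (p.y - my);
        cyy += (p.y - my) * (p.y - my);
    }
    cxx /= n; cxy /= n; cyy /= n;
    // Closed-form eigenvalues of the symmetric 2x2 matrix [cxx cxy; cxy cyy].
    double half = (cxx + cyy) / 2.0;
    double disc = std::sqrt(std::max(0.0, half * half - (cxx * cyy - cxy * cxy)));
    double lmax = half + disc, lmin = half - disc;
    return lmax > 0.0 ? std::max(0.0, lmin) / lmax : 0.0;
}
```

The upside-down triangle case would score near zero here, while the sparse-but-well-placed corner case would score well.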

Related

Fast method to find distance from point to closest edge of polygon

Setup
- The function will need to provide the distance from a point to the closest edge of a polygon
- The point is known to be inside of the polygon
- The polygon can be convex or concave
- Many points (millions) will need to be tested
- Many separate polygons (dozens) will need to be run through the function per point
- Precalculated and persistently stored data structures are an option
- The final search function will be in C++
For the function implementation, I know a simple method would be to test the distance to all segments of the polygon using standard distance-to-line-segment formulas. This option would be fairly slow at scale, and I am confident there should be a better option.
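For reference, that brute-force baseline is short; a sketch of the standard point-to-segment distance applied to every edge:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec2 { double x, y; };

// Distance from point p to segment a-b: project p onto the segment,
// clamp the projection parameter to [0, 1], measure to that point.
double distToSegment(Vec2 p, Vec2 a, Vec2 b) {
    double abx = b.x - a.x, aby = b.y - a.y;
    double len2 = abx * abx + aby * aby;
    double t = len2 > 0.0 ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));
    double dx = p.x - (a.x + t * abx), dy = p.y - (a.y + t * aby);
    return std::sqrt(dx * dx + dy * dy);
}

// The brute-force baseline: test every edge of the polygon.
double distToPolygonEdge(Vec2 p, const std::vector<Vec2>& poly) {
    double best = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < poly.size(); ++i)
        best = std::min(best, distToSegment(p, poly[i], poly[(i + 1) % poly.size()]));
    return best;
}
```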
My gut instinct is that there should be some very fast known algorithms for this type of function that would have been implemented in a game engine, but I'm not sure where to look.
I've found a reference for storing line segments in a quadtree, which would provide very rapid searching. I think it could be used for my purpose to quickly narrow down which segment is the closest, so I would only need to calculate the distance to one line segment.
https://people.cs.vt.edu/~shaffer/Papers/SametCVPR85.pdf
I've not been able to locate any code examples for how this would work. I don't mind implementing algorithms from scratch, but don't see the point in doing so if a working, tested code base exists.
I've been looking at a couple of quadtree implementations, and I think the way it would work is to create a quadtree per polygon and insert each polygon's line segments, with bounding boxes, into that polygon's quadtree.
The "query" portion of the function would then consist of treating the point as a very small bounding box and searching it against the quadtree structure, which would find only the very closest portions of the polygon.
http://www.codeproject.com/Articles/30535/A-Simple-QuadTree-Implementation-in-C
and
https://github.com/Esri/geometry-api-java/blob/master/src/main/java/com/esri/core/geometry/QuadTree.java
My real question would be, does this seem like a sound approach for a fast search time function?
Is there something that would work faster?
EDIT:
I've been looking around and found some issues with using a quadtree. The way quadtrees work is good for collision detection, but isn't set up to allow for efficient nearest-neighbour searching.
https://gamedev.stackexchange.com/questions/14373/in-2d-how-do-i-efficiently-find-the-nearest-object-to-a-point
R-Trees look to be a better option.
https://en.wikipedia.org/wiki/R-tree
and
efficient way to handle 2d line segments
Based on those posts, R-trees look like the winner. Also handy to see that C++ Boost already has them implemented. This looks close enough to what I was planning on doing that I'll go ahead and implement it and verify the results.
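A minimal sketch of what that looks like with Boost.Geometry's R-tree (assuming a reasonably recent Boost, which can store segments directly as R-tree values):

```cpp
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iostream>
#include <vector>

namespace bg  = boost::geometry;
namespace bgi = boost::geometry::index;

using Point   = bg::model::point<double, 2, bg::cs::cartesian>;
using Segment = bg::model::segment<Point>;
using RTree   = bgi::rtree<Segment, bgi::quadratic<16>>;

int main() {
    // Index each polygon's edges in its own R-tree (one polygon shown).
    std::vector<Segment> edges = {
        {{0.0, 0.0}, {4.0, 0.0}},
        {{4.0, 0.0}, {4.0, 3.0}},
        {{4.0, 3.0}, {0.0, 0.0}},
    };
    RTree tree(edges.begin(), edges.end());

    // k-nearest query with k = 1 returns the closest edge to the point.
    Point query(1.0, 1.0);
    std::vector<Segment> nearest;
    tree.query(bgi::nearest(query, 1), std::back_inserter(nearest));
    std::cout << "distance: " << bg::distance(query, nearest.front()) << "\n";
}
```

Per the setup above, you would build one R-tree per polygon once, then reuse it for every query point.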
EDIT:
Since I have implemented a PMR quadtree, I see now that the nearest-neighbour search is a bit more complex than I described.
If the quad search result for the search point is empty, then it gets more complex.
I remember a description somewhere in Hanan Samet's book on multidimensional search structures.
When giving the answer below I had in mind searching for all objects within a specified distance. This is easy for the PMR quadtree, but just finding the closest is more complex.
Edit End
I would not use an R-Tree.
The weak point (and the strong point!) of R-trees is the separation of the space into rectangles.
There are three algorithms known to do that separation but none is well suited for all situations.
R-trees are really complex to implement.
Why then do it? Just because R-Trees can be twice as fast as a quadtree when perfectly implemented. The speed difference between a quadtree and an R-Tree is not relevant; the monetary difference is. (If you have working code for both, I would use the PMR quadtree; if you only have code for the R-Tree, then use that; if you have neither, use the PMR quadtree.)
Quad trees (PMR) always work, and they are simple to implement.
Using the PMR quadtree, you just find all segments related to the search point. The result will be a few segments; then you just check them and you are done.
People who say quadtrees are not suited for neighbour search do not know that there are hundreds of different quadtree variants. The non-suitability is only true for a point quadtree, not for the PMR one, which stores bounding boxes.
I remember the complex description of finding the neighbour points in a point quadtree. For the PMR quadtree I had nothing to do (for a search within a specified rectangular interval): no code change, just iterate the result and find the closest.
I think that there are even better solutions than a quadtree or an R-Tree for your specific question, but the point is that the PMR quadtree always works. Just implement it once and use it for all spatial searches.
Since there are so many more points to test than polygons, you could consider doing some fairly extensive pre-processing of the polygons in order to accelerate the average number of tests to find the nearest line segment per point.
Consider an approach like this (assumes polygons have no holes):
- Walk the edges of the polygon and define line segments along each equidistant line
- Test which side of the line segment a point is on to restrict the potential set of closest line segments (see the sketch after this list)
- Build an arithmetic coding tree with each test weighted by the amount of space that is culled by the half-space of the line segment. This should give good average performance in determining the closest segment for a point, and open up the possibility of parallel testing over multiple points at once
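The per-point culling test in the second step is just the sign of a 2D cross product; a minimal sketch:

```cpp
struct V2 { double x, y; };

// Sign of the 2D cross product (b - a) x (p - a): positive if p lies to
// the left of the directed line a->b, negative to the right, zero if on it.
double side(V2 a, V2 b, V2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}
```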
This diagram should illustrate the concept. The blue lines define the polygon and the red lines are the equidistant lines.
Notice that needing to support concave polygons greatly increases the complexity, as illustrated by the 6-7-8 region. Concave regions mean that the line segments that extend to infinity may be defined by vertices that are arbitrarily far apart.
You could decompose this problem by fitting a convex hull to the polygon and then doing a fast, convex test for most points and only doing additional work on points that are within the "region of influence" of the concave region, but I am not sure if there is a fast way to calculate that test.
I am not sure how good the quadtree algorithm you proposed would be, so I will let someone else comment on that, but I had a thought on something that might be fast and robust.
My thought is you could represent a polygon by a KD-Tree (assuming the vertices are static in time) and then find the nearest two vertices, doing a nearest 2 neighbor search, to whatever the point is that lies in this polygon. These two vertices should be the ones that create the nearest line segment, regardless of convexity, if my thinking is correct.

Test if square overlaps poly in C++ w/ DirectX (optional)

How would I go about checking to see if a triangular poly is present within a square area? (I.e., picture a grid of squares overlaying a group of 2D polys.)
Or even better, how can I determine the percentage of one of these squares that is occupied by a given poly (if at all).
I've used DirectX before but can't seem to find the right combination of functions in its documentation - though it feels like something with ray tracing might be relevant.
I use C++ and can use DirectX if helpful.
Thanks for any suggestions or ideas. :)
You might consider the clipper library for doing generic 2D polygon clipping, area computation, intersection testing, etc. It is fairly compact and easy to deal with, and has decent examples of how to use it.
It is an implementation of the Vatti clipping algorithm and will handle many odd edge cases (which may be overkill for you)
There are a few ways to do this and it's essentially a clipping problem.
One way is to use the Cohen–Sutherland algorithm: http://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland
You would run the algorithm 3 times (once for each triangle edge).
You can then find the percentage of area occupied by calculating area(clipped_triangle) / area(square_region).
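A sketch of that computation using a Sutherland-Hodgman style polygon clip (a close relative of the Cohen-Sutherland line clipper linked above); it clips the square once against each triangle edge, with counter-clockwise winding assumed for the triangle:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Shoelace formula: unsigned area of a simple polygon.
double area(const std::vector<Pt>& poly) {
    double s = 0.0;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Pt& a = poly[i];
        const Pt& b = poly[(i + 1) % poly.size()];
        s += a.x * b.y - b.x * a.y;
    }
    return std::fabs(s) / 2.0;
}

// Clip a polygon against the half-plane to the left of the directed
// edge a->b (so the clip polygon must be wound counter-clockwise).
std::vector<Pt> clipAgainstEdge(const std::vector<Pt>& poly, Pt a, Pt b) {
    auto side = [&](const Pt& p) {  // >0 left, <0 right, 0 on the line
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    };
    std::vector<Pt> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Pt p = poly[i], q = poly[(i + 1) % poly.size()];
        double sp = side(p), sq = side(q);
        if (sp >= 0.0) out.push_back(p);       // keep inside vertices
        if ((sp >= 0.0) != (sq >= 0.0)) {      // edge crosses the clip line
            double t = sp / (sp - sq);
            out.push_back({p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)});
        }
    }
    return out;
}

// Fraction of the square covered by the triangle: clip the square once
// against each of the triangle's three edges, then compare areas.
double coverage(const std::vector<Pt>& square, const std::vector<Pt>& tri) {
    std::vector<Pt> clipped = square;
    for (std::size_t i = 0; i < tri.size() && !clipped.empty(); ++i)
        clipped = clipAgainstEdge(clipped, tri[i], tri[(i + 1) % tri.size()]);
    return clipped.empty() ? 0.0 : area(clipped) / area(square);
}
```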

Trajectory interpolation and derivative

I'm working on the analysis of a particle's trajectory in a 2D plane. This trajectory typically consists of 5 to 50 (in rare cases more) points (discrete integer coordinates). I have already matched the points of my dataset to form a trajectory (thus I have time resolution).
I'd like to perform some analysis on the curvature of this trajectory; unfortunately the analysis framework I'm using has no support for fitting a trajectory. From what I've heard one can use splines/Bézier curves to get this done, but I'd like your opinion and/or suggestions on what to use.
As this is only an optional part of my work, I cannot invest a vast amount of time in implementing a solution on my own or in understanding a complex framework. The solution has to be as simple as possible.
Let me specify the features I need from a possible library:
- create trajectory from varying number of points
- as the points are discrete it should interpolate their position; no need for exact matches for all points as long as the resulting distance between trajectory and point is less than a threshold
- it is essential that the library can yield the derivative of the trajectory for any given point
- it would be beneficial if the library could report a quality level (like chiSquare for fits) of the interpolation
EDIT: After reading the comments I'd like to add some more:
It is not necessary that the trajectory exactly matches the points. The points are created from values of a pixel matrix, and thus they form a discrete matrix of coordinates with a spatial resolution limited by the number of pixels per given distance. Therefore the points (which are placed at the center of the firing pixel) do not (exactly) match the actual trajectory of the particle. Either interpolation or a fit is fine for me, as long as the solution can cope with a trajectory which may (and most probably will) be neither bijective nor injective.
Thus most traditional fit approaches (like fitting with polynomials or exponential functions using a least-squares fit) can't fulfil my criteria.
Additionally, all traditional fit approaches I have tried yield a function which seems to describe the trajectory quite well, but when looking at its first derivative (or at higher resolution) one can find numerous "micro-oscillations" which (from my interpretation) are a result of fitting non-straight functions to (nearly) straight parts of the trajectory.
Edit 2: There has been some discussion in the comments about what those trajectories may look like. Essentially they may have any shape, length and "curliness", although I try to exclude trajectories which overlap or cross in the previous steps. I have included two examples below; ignore the colored boxes, they're just a representation of the values of the raw pixel matrix. The black, circular dots are the points which I'd like to match to a trajectory; as you can see they are always centered on the pixels and therefore may only have discrete (integer) values.
Thanks in advance for any help & contribution!
This MIGHT be the way to go
http://alglib.codeplex.com/
From your description I would say that a parametric spline interpolation may suit your requirements. I have not used the above library myself, but it does have support for spline interpolation. Using an interpolant means you will not have to worry about goodness of fit - the curve will pass through every point that you give it.
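If pulling in a library turns out to be a problem, a parametric interpolant with an analytic derivative is also small enough to hand-roll. A self-contained sketch using a uniform Catmull-Rom spline (this is not ALGLIB's API, just an illustration of the idea; note it passes exactly through the points, so it reproduces the pixel quantization rather than smoothing it):

```cpp
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Uniform Catmull-Rom spline through the trajectory points: segment i
// interpolates between pts[i] and pts[i+1], t in [0, 1]. Endpoints are
// handled by clamping the neighbour indices.
struct CatmullRom {
    std::vector<Pt> pts;

    static double h(double p0, double p1, double p2, double p3, double t) {
        return 0.5 * (2.0 * p1 + (-p0 + p2) * t +
                      (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t +
                      (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
    }
    static double dh(double p0, double p1, double p2, double p3, double t) {
        return 0.5 * ((-p0 + p2) +
                      2.0 * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t +
                      3.0 * (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t);
    }

    void four(std::size_t i, const Pt*& a, const Pt*& b,
              const Pt*& c, const Pt*& d) const {
        std::size_t n = pts.size();
        a = &pts[i == 0 ? 0 : i - 1];
        b = &pts[i];
        c = &pts[i + 1 < n ? i + 1 : n - 1];
        d = &pts[i + 2 < n ? i + 2 : n - 1];
    }

    Pt eval(std::size_t i, double t) const {   // position on segment i
        const Pt *a, *b, *c, *d; four(i, a, b, c, d);
        return { h(a->x, b->x, c->x, d->x, t), h(a->y, b->y, c->y, d->y, t) };
    }
    Pt deriv(std::size_t i, double t) const {  // tangent dC/dt on segment i
        const Pt *a, *b, *c, *d; four(i, a, b, c, d);
        return { dh(a->x, b->x, c->x, d->x, t), dh(a->y, b->y, c->y, d->y, t) };
    }
};
```

The tangent gives the heading via atan2(d.y, d.x) and, where it exists, the slope dy/dx = d.y / d.x; being parametric, it has no trouble with non-injective trajectories.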
If you don't mind using matrix libraries, linear least squares is the easiest solution (look at the end of the General Problem section for the equation to use). You can also use linear/polynomial regression to solve something like this.
Linear least squares will always give the best solution, but it doesn't scale well, because matrix multiplication is moderately expensive. Regression is an iterative heuristic method, so you can just run it until you have a "sufficiently good" answer. I've seen guidelines placing the cutoff at about 1000-10000 dimensions in your data. So, with your data set, I'd recommend linear least squares, unless you decide to make it highly dimensional for some reason.
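A self-contained sketch of a polynomial least-squares fit via the normal equations; apply it per coordinate against a parameter such as the point index (fit x(t) and y(t) separately) so non-injective trajectories are not a problem. For large or ill-conditioned systems a QR-based solver is preferable:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Fit y = c0 + c1*x + ... + c_deg*x^deg by linear least squares, solving
// the normal equations (A^T A) c = A^T y with Gauss-Jordan elimination.
std::vector<double> polyFit(const std::vector<double>& x,
                            const std::vector<double>& y, int deg) {
    int m = deg + 1;
    // Augmented normal-equation matrix: m x (m + 1).
    std::vector<std::vector<double>> M(m, std::vector<double>(m + 1, 0.0));
    for (std::size_t k = 0; k < x.size(); ++k) {
        std::vector<double> pw(m, 1.0);
        for (int i = 1; i < m; ++i) pw[i] = pw[i - 1] * x[k];
        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < m; ++j) M[i][j] += pw[i] * pw[j];
            M[i][m] += pw[i] * y[k];
        }
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < m; ++col) {
        int piv = col;
        for (int r = col + 1; r < m; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        std::swap(M[col], M[piv]);
        for (int r = 0; r < m; ++r) {
            if (r == col || M[col][col] == 0.0) continue;
            double f = M[r][col] / M[col][col];
            for (int c = col; c <= m; ++c) M[r][c] -= f * M[col][c];
        }
    }
    std::vector<double> coef(m);
    for (int i = 0; i < m; ++i) coef[i] = M[i][m] / M[i][i];
    return coef;
}
```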

Finding the nearest XY coordinates

I've got a point in a 2D image, for example the red dot (x0, y0) in the given picture, and a set of n points (blue dots) (x1, y1)...(xn, yn), and I want to find the nearest point to (x0, y0) in a way better than trying all points. I'd like the best possible solution, and I would appreciate it if you could share a similar class if you have one.
There are many approaches to this, the most common probably being using some form of space partitioning to speed up the search so that it is not O(n). For details, see Nearest neighbor search on Wikipedia.
Most solutions that we could suggest would depend on a little bit more knowledge. I am going to go out on a limb and say that unless you already know that you are short on time (i.e. there are tens of thousands of blue dots, or you have to do thousands of these calculations in a short time), a linear search will serve you well enough.
Don't bother calculating the actual distance; save yourself the square root and use the squared distance as the "distance".
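A minimal sketch of that linear search with squared distances:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Dot { double x, y; };

// Linear-search nearest neighbour; compares squared distances, so no
// square root is ever taken.
std::size_t nearest(const std::vector<Dot>& pts, double x0, double y0) {
    std::size_t best = 0;
    double bestD2 = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < pts.size(); ++i) {
        double dx = pts[i].x - x0, dy = pts[i].y - y0;
        double d2 = dx * dx + dy * dy;
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}
```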
Most other methods use more complex data structures that sort the points with respect to their geometric arrangement, but they are a lot harder to implement.

Data structure for fast line queries?

I know that I can use a KD-Tree to store points and iterate quickly over a fraction of them that are close to another given point. I'm wondering whether there is something similar for lines.
Given a set of lines L in 3D (to be stored in that data structure) and another "query line" q, I'd like to be able to quickly iterate through all lines in L that "are close enough" to q. The distance I'm planning to use is the minimal Euclidean distance between two points u and v where u is some point on the first line and v is some point on the second line. Computing that distance is not a problem (there's a nice trick involving the cross product).
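For reference, a sketch of that cross-product distance computation (infinite lines, with a fallback for the parallel case; finite segments would need additional endpoint clamping):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

V3 sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V3 cross(V3 a, V3 b) { return {a.y * b.z - a.z * b.y,
                               a.z * b.x - a.x * b.z,
                               a.x * b.y - a.y * b.x}; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
double norm(V3 a)      { return std::sqrt(dot(a, a)); }

// Minimal distance between the infinite lines p1 + t*d1 and p2 + s*d2.
double lineLineDistance(V3 p1, V3 d1, V3 p2, V3 d2) {
    V3 n = cross(d1, d2);
    V3 w = sub(p2, p1);
    double nn = norm(n);
    if (nn < 1e-12)                          // (nearly) parallel lines:
        return norm(cross(w, d1)) / norm(d1); // point-to-line distance
    return std::fabs(dot(w, n)) / nn;
}
```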
Maybe you guys have a good idea or know where to look for papers, descriptions, etc...
TIA,
s.
Another option - and the most commonly used one for spatial indexing in disk-based database systems - is the R-Tree. It's a bit more complicated to implement than a KD-Tree, but it's generally considered to be faster, and has no problem indexing lines and polygons.
You can use KD-Trees for this as well.
It's possible to build a KD-Tree that works on primitives, not points. Many ray tracers do this to make triangle hit testing much faster. The best description I've seen is in this ray tracing tutorial.
A potentially faster, though not 100% accurate, solution is to just keep a list of points per line segment and insert these into a standard point-based KD-Tree. Find the nearest points, have them tagged with their line segment, and use that to get the nearest lines. It's crude, but often very fast compared to other options. The "trick" is to find the right balance between keeping large spaces between points along the segment (faster) and breaking the segment into more points (slower, but more accurate).
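A sketch of that sampling step (the segmentId tag is what lets you map a point hit in the KD-Tree back to its source line):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct P3 { double x, y, z; };
struct Sample { P3 p; std::size_t segmentId; };  // point tagged with its segment

// Break each segment into samples no more than `spacing` apart; insert
// the result into any point-based KD-Tree and map hits back via segmentId.
std::vector<Sample> sampleSegments(const std::vector<std::pair<P3, P3>>& segs,
                                   double spacing) {
    std::vector<Sample> out;
    for (std::size_t i = 0; i < segs.size(); ++i) {
        P3 a = segs[i].first, b = segs[i].second;
        double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        double len = std::sqrt(dx * dx + dy * dy + dz * dz);
        std::size_t n = static_cast<std::size_t>(std::ceil(len / spacing)) + 1;
        for (std::size_t k = 0; k < n; ++k) {
            double t = (n == 1) ? 0.0 : static_cast<double>(k) / (n - 1);
            out.push_back({{a.x + t * dx, a.y + t * dy, a.z + t * dz}, i});
        }
    }
    return out;
}
```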