I have two point clouds in 3D coordinates. One is a subset of the other, containing far fewer points. They are in the same scale.
What I need to do is find the translation and rotation between the two. I have looked at the Point Cloud Library, "Iterative Closest Point", and Coherent Point Drift, but these matching approaches both seem to expect the two point sets to contain mostly the same points, not to have one be a smaller subset of the other.
Can I use either of these, with adjustments? Or is there another algorithm to match a subset point cloud to a set?
Thank you.
Without access to sample data, it is kind of hard to recommend a specific registration algorithm.
However, I'm pretty excited nowadays about all the new "data-driven" registration approaches.
From my personal experience, I'm getting excellent registration results using the approach of this recent paper:
https://arxiv.org/abs/1603.08182
Which has source code available here:
https://github.com/andyzeng/3dmatch-toolbox
As reported in the paper, it outperforms PCL descriptor-based registration approaches, and I think it may be suitable for your needs.
Terminology:
- PC: principal component
- loading_score[i,j]: the loading of feature j in PC[i]
Question:
I know the question of feature selection with PCA has been asked several times here on Stack Overflow (SO) and on other tech pages, with different answers/discussions proposed. That is why I want to open a discussion of the different solutions, rather than post it as a general question, since that has been done.
Different methods are proposed for feature selection using PCA: for instance, using the dot product between the original features and the components (here) to get their correlation; a discussion at SO here suggests that you can only talk about important features as loading scores within a component (and not use that importance in the input space); and another discussion at SO (which I cannot find at the moment) suggests that the importance of feature[j] would be sum(abs(loading_score[:,j])), i.e. the sum of the absolute values of loading_score[i,j] over all components i.
I personally would think that a way to get the importance of a feature would be an absolute sum where each loading_score[i,j] is weighted by the explained variance of component i, i.e.
imp_feature[j] = sum_i(abs(loading_score[i,j]) * explained_variance[i])
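For concreteness, here is a minimal sketch of what I mean in plain C++ (the loading matrix is stored as components x features; the function name is just illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch of the proposed importance measure:
// imp_feature[j] = sum_i |loading_score[i][j]| * explained_variance[i]
std::vector<double> feature_importance(
    const std::vector<std::vector<double>>& loading_score,   // [component][feature]
    const std::vector<double>& explained_variance)           // one value per component
{
    const std::size_t n_features = loading_score.front().size();
    std::vector<double> importance(n_features, 0.0);
    for (std::size_t i = 0; i < loading_score.size(); ++i)    // components
        for (std::size_t j = 0; j < n_features; ++j)          // features
            importance[j] += std::abs(loading_score[i][j]) * explained_variance[i];
    return importance;
}
```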
Well, there is no universal way to select features; it totally depends on the dataset and the insights available about it. I will provide some examples which might be helpful.
Since you asked about PCA: it decomposes the dataset into components ordered by the variance they explain. ICA (Independent Component Analysis), on the other hand, is able to extract multiple independent sources simultaneously. Look at this example:
In this example, we mix three independent signals and try to separate them out using ICA and PCA. In this case, ICA does a better job than PCA. In general, if you search for Blind Source Separation (BSS) you will find more information about this. Besides, in this example we know the number of independent components, so separation is easy; in general we do not know that number, and you may have to guess based on some prior information about the dataset. You may also use LDA (Linear Discriminant Analysis) to reduce the number of features.
Once you extract the principal components using any of these techniques, you can visualize them by treating the extracted components as random variables, e.g. x, y, z.
For more information you may refer to the original source from which I took the two figures.
Coming back to your proposition,
imp_feature[j] = sum_i(abs(loading_score[i,j]) * explained_variance[i])
I would not recommend this way due to the following reasons:
- abs(loading_score[i,j]): by taking absolute values you may lose the positive or negative correlations of the considered features.
- explained_variance[i] may be used to find the correlation between features, but multiplying the two does not make much sense.
Edit:
In PCA, each component has its explained variance. Explained variance is the ratio between an individual component's variance and the total variance (the sum of all individual component variances). Feature significance can be measured by the magnitude of explained variance.
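As a small sketch of that definition, assuming the per-component variances (the eigenvalues of the covariance matrix) have already been computed:

```cpp
#include <numeric>
#include <vector>

// Sketch: explained-variance ratio of each component, given the per-component
// variances (eigenvalues of the covariance matrix).
std::vector<double> explained_variance_ratio(const std::vector<double>& component_variance)
{
    const double total = std::accumulate(component_variance.begin(),
                                         component_variance.end(), 0.0);
    std::vector<double> ratio;
    for (double v : component_variance)
        ratio.push_back(v / total);
    return ratio;
}
```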
All in all, what I want to say is that feature selection totally depends on the dataset and the significance of the features. PCA is just one technique. First understand the properties of the features and the dataset, then try to extract features. Hope this helps. If you can provide us with an exact example, we may be able to provide more insights.
I have been looking for methods to register (align) organized point clouds with normal information.
I could only find generic point cloud registration methods (for example in PCL).
I am using Microsoft Kinect to get my point clouds, but the problem is that they are quite big.
What I would like to know:
Are there fast ways to register organized point clouds?
Are there down-sampling methods that are very fast (and that may also exploit the fact that the point clouds are organized)?
I was also thinking about using OpenCV filters, since an organized point cloud can be thought of as an image with grey values (a 2D matrix with depth values), for example using the OpenCV resize method on the matrix, and some derivative-type filters (because edges are important for me in the scene). Is that a good idea?
Also, down-sampling looks like a data-parallel problem, which could be a great candidate for GPU implementation. Do you know about any such implementation?
What I have done so far is the following.
- I tried several down-sampling methods (random, voxel-based, uniform), but the problem is that they all took a lot of time (in PCL). Voxel-based was the best.
- Then I ran ICP, which was pretty fast and accurate enough for me on the down-sampled point clouds.
So for me, currently, a good solution would be a fast way of down-sampling my point clouds. For example a GPU-based implementation for it.
Thinking of an organized point cloud as an image with grey values (a simple 2D matrix) turns out to be a good idea.
Downsampling methods for 2D matrices implemented on the GPU are available in, for example, OpenCV's CUDA module.
It is also easy to implement your own fast downsampling methods on 2D matrices, depending on how important accuracy is. For example, simply take every k-th element. If needed, you can average around these elements to blur, or apply derivative-type filters to sharpen (edge enhancement). You can also come up with special picking methods, depending on what you know about the frames (e.g. if you know your objects tend to be in the centre, you can pick more points around that area).
All three of the above will give faster results and are probably more tuned to your problem (especially #3), though "more tuned" also implies less robust.
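For example, a minimal sketch of suggestion #3 using PCL's organized clouds (the function name and the stride k are just placeholders):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Placeholder sketch: keep every k-th column and row of an organized cloud.
// The organized (width x height) layout is what makes this a simple O(n) pass
// with no search structure; "k" is a stride you would tune yourself.
pcl::PointCloud<pcl::PointXYZ>::Ptr
stride_downsample(const pcl::PointCloud<pcl::PointXYZ>& in, int k)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
    out->width    = in.width / k;
    out->height   = in.height / k;
    out->is_dense = false;
    out->points.reserve(out->width * out->height);

    for (int row = 0; row < static_cast<int>(out->height); ++row)
        for (int col = 0; col < static_cast<int>(out->width); ++col)
            out->points.push_back(in.at(col * k, row * k));  // at(column, row) on organized clouds
    return out;
}
```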
So I have an iterative closest point (ICP) algorithm that has been written and will fit a model to a point cloud. As a quick tutorial for those not in the know: ICP is a simple algorithm that fits points to a model, ultimately providing a homogeneous transformation matrix between the model and the points.
Here is a quick picture tutorial.
Step 1. Find the closest point in the model set to your data set:
Step 2: Using a bunch of fun maths (sometimes based on gradient descent or SVD), pull the clouds closer together and repeat until a pose is formed:
Now that bit is simple and working. What I would like help with is:
How do I tell if the pose that I have is a good one?
So currently I have two ideas, but they are kind of hacky:
The number of points used in the ICP algorithm, i.e. if I am fitting to almost no points, I assume that the pose will be bad:
But what if the pose is actually good? It could be, even with few points. I don't want to reject good poses:
So what we see here is that few points can actually give a very good pose if they are in the right place.
So the other metric investigated was the ratio of the supplied points to the used points. Here's an example:
We exclude points that are too far away because they will be outliers; this means we need a good starting position for ICP to work, but I am OK with that. In the above example the check will say NO, this is a bad pose, and it would be right, because the ratio of included points to supplied points is:
2/11 < SOME_THRESHOLD
So that's good, but it will fail in the case shown above where the triangle is upside down: it will say the upside-down pose is good because all of the points are used by ICP.
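For reference, a rough sketch of that ratio check (the names and thresholds are placeholders; the closest-point distances are assumed to come out of the ICP iteration):

```cpp
#include <cstddef>
#include <vector>

// Placeholder sketch of the ratio check: points farther than outlier_distance
// from their closest model point are excluded, and the pose is rejected when
// too small a fraction of the supplied points survives.
bool enough_points_used(const std::vector<double>& closest_distances,
                        double outlier_distance, double ratio_threshold)
{
    std::size_t used = 0;
    for (double d : closest_distances)
        if (d <= outlier_distance) ++used;
    return static_cast<double>(used) / closest_distances.size() >= ratio_threshold;
}
```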
You don't need to be an expert on ICP to answer this question; I am looking for good ideas. Using knowledge of the points, how can we classify whether a pose solution is good or not?
Using both of these metrics together in tandem is a reasonable suggestion, but it's a pretty weak solution if you ask me; simply thresholding them feels crude.
What are some good ideas for how to do this?
PS. If you want to add some code, please go for it. I am working in C++.
PPS. Can someone help me with tagging this question? I am not sure where it should fall.
One possible approach might be comparing poses by their shapes and their orientation.
Shape comparison can be done with the Hausdorff distance up to isometry, that is, poses have the same shape if
d(I(actual_pose), calculated_pose) < d_threshold
where d_threshold should be found from experiments. As isometric modifications I of the actual pose, I would consider rotations by different angles; that seems to be sufficient in this case.
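A brute-force sketch of the symmetric Hausdorff distance between two 3D point sets; for the "up to isometry" part you would evaluate it over candidate rotations of one set and keep the minimum:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using Point = std::array<double, 3>;

// Directed Hausdorff distance: worst-case distance from a point of p to its
// nearest neighbour in q.
static double directed_hausdorff(const std::vector<Point>& p, const std::vector<Point>& q)
{
    double worst = 0.0;
    for (const auto& x : p) {
        double best = std::numeric_limits<double>::max();
        for (const auto& y : q) {
            const double dx = x[0] - y[0], dy = x[1] - y[1], dz = x[2] - y[2];
            best = std::min(best, std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        worst = std::max(worst, best);
    }
    return worst;
}

// Symmetric Hausdorff distance d(A, B), to be compared against d_threshold.
double hausdorff(const std::vector<Point>& a, const std::vector<Point>& b)
{
    return std::max(directed_hausdorff(a, b), directed_hausdorff(b, a));
}
```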
If poses have the same shape, we should compare their orientation. To compare orientation we could use a somewhat simplified Freksa model. For each pose we should calculate the values
{x_y min, x_y max, x_z min, x_z max, y_z min, y_z max}
and then make sure that each difference between corresponding values for the two poses does not exceed another_threshold, derived from experiments as well.
Hopefully this makes some sense, or at least you can draw something useful for your purpose from this.
ICP attempts to minimize the distance between your point-cloud and a model, yes? Wouldn't it make the most sense to evaluate it based on what that distance actually is after execution?
I'm assuming it tries to minimize the sum of squared distances between each point you try to fit and the closest model point. So if you want a metric for quality, why not just normalize that sum by dividing by the number of points it's fitting? Yes, outliers will disrupt it somewhat, but they're also going to disrupt your fit somewhat.
It seems like any calculation you can come up with that provides more insight than whatever ICP is minimizing would be more useful incorporated into the algorithm itself, so it can minimize that too. =)
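For instance, if you happen to be using PCL's ICP (just one possible implementation, since you mention C++), the normalized sum of squared distances is already exposed as the fitness score; the threshold here is a made-up placeholder:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Sketch: after alignment, getFitnessScore() returns the mean squared distance
// between the aligned source points and their nearest model points, i.e. the
// normalized sum suggested above. "fitness_threshold" is a made-up tuning value.
bool pose_is_acceptable(pcl::PointCloud<pcl::PointXYZ>::Ptr data,
                        pcl::PointCloud<pcl::PointXYZ>::Ptr model,
                        double fitness_threshold = 1e-4)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(data);
    icp.setInputTarget(model);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);

    return icp.hasConverged() && icp.getFitnessScore() < fitness_threshold;
}
```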
Update
I think I didn't quite understand the algorithm. It seems that it iteratively selects a subset of points, transforms them to minimize error, and then repeats those two steps? In that case your ideal solution selects as many points as possible while keeping error as small as possible.
You said combining the two terms seemed like a weak solution, but it sounds to me like an exact description of what you want, and it captures the two major features of the algorithm (yes?). Evaluating with something like error + B * (1 - selected / total) seems spiritually similar to how regularization is used to address the overfitting problem with gradient descent (and similar) ML algorithms. Selecting a good value for B would take some experimentation.
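A one-liner sketch of that combined score (all names are placeholders):

```cpp
#include <cstddef>

// Sketch of the combined score: residual error plus a penalty for using only a
// small fraction of the supplied points. B trades off the two terms and would
// have to be tuned experimentally, as noted above. Lower scores are better.
double pose_score(double mean_sq_error,
                  std::size_t used_points, std::size_t total_points, double B)
{
    const double coverage = static_cast<double>(used_points) / total_points;
    return mean_sq_error + B * (1.0 - coverage);
}
```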
Looking at your examples, it seems that one of the things that determines whether the match is good or not, is the quality of the points. Could you use/calculate a weighting factor in calculating your metric?
For example, you could weight down points which are co-linear / co-planar, or spatially close, as they probably define the same feature. That would perhaps allow your upside-down triangle to be rejected (as the points are in a line, and that is not a great indicator of the overall pose), but the corner case would be OK, as those points roughly define the hull.
Alternatively, maybe the weighting should be on how distributed the points are around the pose, again trying to ensure you have good coverage, rather than matching small indistinct features.
Currently I'm implementing a CBIR system for object recognition (object classification, to be precise), and since I have some working feature detectors and descriptors, I am trying to find the best way to handle these features for the task of content-based image retrieval.
As far as I know there are two main trends for this task, the discrete and the continuous approach. "Discrete" stands for methods like bag-of-visual-words and codebooks for building up inverted indices so that text-retrieval methods can be applied, and "continuous" stands for methods like best-bin-first search with k-d trees and nearest-neighbour classification.
So one main difference between the two approaches is that one works with an extra representation of the features (visual words), while the other works with the raw n-D features calculated by the descriptor.
My question now is: is there any comparison between the two methods for CBIR that could help me find the best approach for my task?
The full answer to this question would be quite complex and long.
But generally, a continuous method can give you more accurate results; however, it is slower, since you can't build a search index as effectively and you need to work with large descriptors.
You should consider a combination that uses discrete features (visual words) for the initial results, and afterwards filters the result set using continuous methods.
Anybody out there using BGL for large production servers?
How many nodes does your network consist of?
How do you handle community detection?
Does BGL have any cool ways to detect communities?
Sometimes two communities might be linked together by one or two edges, but these edges are not reliable and can fade away. Sometimes there are no edges at all.
Could someone speak briefly on how to solve this problem?
Please open my mind and inspire me.
So far I have managed to work out whether two nodes are on an island (in a community) in a least expensive manner, but now I need to work out which two nodes on separate islands are closest to each other. We can only make minimal use of unreliable geographical data.
If we figuratively compare it to a mainland and an island, taking it out of the social-distance context: I want to work out which two bits of land are closest together across a body of water.
I've used the BGL for graphs with millions of nodes, but the size of graph you can use depends on what algorithm you are trying to run. You can quickly compute distances between nodes. There are four shortest-path algorithms; which is most applicable depends on your data (single pairs of points, all pairs of points, sparse or dense graphs, ...).
As for community detection, there aren't any algorithms built into the BGL specifically for that (but maybe you can contribute one when you are finished with your project). There are a few algorithms that might be helpful in building a community detection algorithm. The max-flow/min-cut algorithms are typically used in community detection (if there is a lot of flow possible between two nodes, they are likely to be in the same community; if there isn't much flow, the min-cut is likely to represent the roads between communities). There are also heuristics to order the nodes of the graph to reduce bandwidth; nodes making up "communities" are likely to be close to each other in such an ordering.
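For the "islands" part specifically, connected_components is built in and labels each vertex with the index of its disconnected subgraph; a minimal sketch:

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>
#include <vector>

// Sketch: connected_components labels each vertex with the index of the
// disconnected subgraph ("island") it belongs to. Community detection inside a
// connected graph still needs something like the min-cut idea described above.
int main()
{
    using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS>;

    Graph g(6);
    boost::add_edge(0, 1, g);
    boost::add_edge(1, 2, g);   // island {0, 1, 2}
    boost::add_edge(3, 4, g);   // island {3, 4}; vertex 5 is isolated

    std::vector<int> component(boost::num_vertices(g));
    const int num_islands = boost::connected_components(g, &component[0]);
    // num_islands == 3; component[v] tells which island vertex v is on.
    return 0;
}
```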
As far as I know BGL doesn't have any algorithms specifically for community detection.
By "island" do you mean a disconnected subgraph?
Also, graphs do not have any notion of 'distance'.
This 'social distance' is something that you are going to have to define. Once you've done that, a large part of the work is done.
There are numerous methods listed on the page you linked to; most of them only require you to define something like a 'distance' metric and then plug your definitions into the algorithm.
@David Nehme:
Graphs without edge weights are only about connectedness; they have no notion of distance. If you want to talk about a network, then you can talk about distance. But a graph with no edge weights does not have any distance, unless you assume an implied edge weight of 1 for all edges, and that is really just turning the graph into a network.
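A minimal sketch of that point with the BGL: once you attach an edge-weight property, the graph becomes a network and, for example, Dijkstra gives you distances:

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <vector>

// Sketch: attaching an edge-weight property gives the graph a notion of
// distance, and Dijkstra then computes weighted shortest paths from a source.
int main()
{
    using Graph = boost::adjacency_list<
        boost::vecS, boost::vecS, boost::undirectedS,
        boost::no_property,
        boost::property<boost::edge_weight_t, double>>;

    Graph g(4);
    boost::add_edge(0, 1, 1.0, g);
    boost::add_edge(1, 2, 2.5, g);
    boost::add_edge(0, 3, 4.0, g);

    std::vector<double> dist(boost::num_vertices(g));
    boost::dijkstra_shortest_paths(g, boost::vertex(0, g),
                                   boost::distance_map(&dist[0]));
    // dist[v] now holds the weighted distance from vertex 0 to vertex v.
    return 0;
}
```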
Also, he is talking about the distance between two disconnected graphs. To model this, you have to introduce an external concept for distance between nodes, separate from the edge-distance.