What distance measure should I use to match RIFT descriptors? - c++

I'm currently trying to find the nearest neighbours (NNs) of a point in its feature space, using the FLANN module from the PCL library.
The points I'm trying to compare are RIFT descriptors in the form of pcl::Histogram<32>, as I use 4 bins for the distance and 8 for the gradient orientation (as in the original article).
I'm wondering which distance measure I should use: the default is the L2 norm, which seems rather weak for matching points in a high-dimensional feature space.
I use the KdTreeMultiIndexCreator index of FLANN to speed up the search.
I'm bound to use the flann module as below:
#include <pcl/point_types.h>
#include <pcl/point_representation.h>
#include <pcl/search/flann_search.h>

// Useful types
typedef pcl::Histogram<32> FeatureT;
typedef flann::L2<float> DistanceT;
typedef pcl::search::FlannSearch<FeatureT, DistanceT> SearchT;
typedef typename SearchT::FlannIndexCreatorPtr CreatorPtrT;
typedef typename SearchT::KdTreeMultiIndexCreator IndexT;
typedef typename SearchT::PointRepresentationPtr RepresentationPtrT;
// Instantiate the search object with 4 randomized trees and 128 checks
SearchT search (true, CreatorPtrT (new IndexT (4)));
search.setPointRepresentation (RepresentationPtrT (new pcl::DefaultFeatureRepresentation<FeatureT>));
search.setChecks (128); // The more checks, the more precise the solution
// search_cloud is filled with the keypoints to match
search.setInputCloud (search_cloud);
search.nearestKSearch (point_to_match, 1, indices, distances);
So, what is the best distance measure available in FLANN for this problem?
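For what it's worth, histogram descriptors like RIFT are often compared with histogram-specific metrics, and FLANN ships several (flann::ChiSquareDistance, flann::HellingerDistance, flann::HistIntersectionDistance). Since FlannSearch takes the metric as a template parameter, trying one is a small change; a minimal sketch, assuming the same setup as above (whether it actually beats L2 for RIFT should be validated on your own data):

// Hypothetical variant of the setup above with a chi-square metric,
// a common choice for comparing normalized histograms.
typedef flann::ChiSquareDistance<float> HistDistanceT;
typedef pcl::search::FlannSearch<FeatureT, HistDistanceT> HistSearchT;

HistSearchT hist_search (true, HistSearchT::FlannIndexCreatorPtr (
    new HistSearchT::KdTreeMultiIndexCreator (4)));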

Related

Point Feature Histograms in PCL library output interpretation

I am using a library called the Point Cloud Library (PCL). In particular, I am trying to compute point feature histograms. I followed this code from the website:
#include <pcl/point_types.h>
#include <pcl/features/pfh.h>
{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal> ());
// ... read, pass in or create a point cloud with normals ...
// (note: you can create a single PointCloud<PointNormal> if you want) ...
// Create the PFH estimation class, and pass the input dataset+normals to it
pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
pfh.setInputCloud (cloud);
pfh.setInputNormals (normals);
// alternatively, if cloud is of type PointNormal, do pfh.setInputNormals (cloud);
// Create an empty kdtree representation, and pass it to the PFH estimation object.
// Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
//pcl::KdTreeFLANN<pcl::PointXYZ>::Ptr tree (new pcl::KdTreeFLANN<pcl::PointXYZ> ()); -- older call for PCL 1.5-
pfh.setSearchMethod (tree);
// Output datasets
pcl::PointCloud<pcl::PFHSignature125>::Ptr pfhs (new pcl::PointCloud<pcl::PFHSignature125> ());
// Use all neighbors in a sphere of radius 5cm
// IMPORTANT: the radius used here has to be larger than the radius used to estimate the surface normals!!!
pfh.setRadiusSearch (0.05);
// Compute the features
pfh.compute (*pfhs);
// pfhs->points.size () should have the same size as the input cloud->points.size ()
}
The output I get is an array of 125 values per point from the original point cloud. For example, if I have a point cloud with 1000 points, where each point contains XYZ, then there will be 1000 * 125 values. I was able to understand why there are 125 entries, each corresponding to a bin (3 features with 5 divisions each gives 5^3 = 125 bins).
This post helped some: PCL Point Feature Histograms - binning
Unfortunately I still have a few questions:
1) Why do I have a 125-bin histogram per point? Is it because it measures what percentage of the points in the k-nearest neighborhood of the current point have similar features, with each point having its own neighborhood?
2) I see that for some points all 125 entries are zero. Why?
3) Graphing point feature histogram values as shown in the paper and website:
Website:
http://pointclouds.org/documentation/tutorials/pfh_estimation.php#pfh-estimation
Paper:
https://pdfs.semanticscholar.org/5aee/411f0b4228ba63c85df0e8ed64cab5844aed.pdf
The graphs shown have bin number on the X axis (in my case 125 bins), so the natural question is: how do we consolidate the 125 values per point into one graph?
I tried a simple summation of the corresponding columns, scaled by a constant, but I do not think it is right. By summation I mean adding up bin[0] over every point, then bin[1] over every point, and so on up to bin[124].
I would really appreciate any help to clarify this.
Thank you.
The PFH descriptor is a local descriptor, so a histogram is computed for each point it is given. You will likely want to use only a keypoint or a set of keypoints.
A histogram will have all entries equal to 0 if the point has no neighbors within the search radius.
As for graphing, try viewing the histogram for one point at a time; I don't think it makes sense to consolidate them into one graph.
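In case it helps, here is a minimal sketch of dumping a single point's 125-bin histogram so it can be plotted externally (assuming pfhs was filled by pfh.compute as above; redirect the output to a file and plot it with your tool of choice):

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Print "bin value" pairs for the i-th PFH descriptor.
void printHistogram (const pcl::PointCloud<pcl::PFHSignature125> &pfhs, std::size_t i)
{
  for (int bin = 0; bin < 125; ++bin)
    std::cout << bin << " " << pfhs.points[i].histogram[bin] << "\n";
}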
If you are interested in global descriptors that take into account all points, take a look at the CVFH descriptor (Clustered Viewpoint Feature Histogram) or other global ones.

Error in calculating exact nearest neighbors in radius with FLANN

I am trying to find the exact number of neighbour nodes in a big 3D point dataset. The goal is, for each point of the dataset, to retrieve all the possible neighbours within a region of a given radius. FLANN claims that for low-dimensional data it can retrieve the exact neighbors, but when I compare against a brute-force search this does not seem to be the case. The neighbors are essential for further calculations, and therefore I need the exact set. I tested increasing the radius a little bit, but this doesn't seem to be the problem. Is anyone aware of how to calculate the exact neighbors with FLANN or another C++ library?
The code:
// All nodes to be tested for inclusion in support domain.
flann::Matrix<double> query_nodes = flann::Matrix<double>(&nodes_pos[0].x, nodes_pos.size(), 3);
// Set default search parameters
flann::SearchParams search_parameters = flann::SearchParams();
search_parameters.checks = -1; // -1 == FLANN_CHECKS_UNLIMITED
search_parameters.sorted = false;
search_parameters.use_heap = flann::FLANN_True;
flann::KDTreeSingleIndexParams index_parameters = flann::KDTreeSingleIndexParams();
flann::KDTreeSingleIndex<flann::L2_3D<double> > index(query_nodes, index_parameters);
index.buildIndex();
// FLANN's L2 metrics work with squared distances, so the radius must be squared.
double l2_radius = (this->support_layer_*grid.spacing)*(this->support_layer_*grid.spacing);
double extension = l2_radius/10.;
l2_radius+= extension;
index.radiusSearch(query_nodes, indices, dists, l2_radius, search_parameters);
Try nanoflann. It is designed for low-dimensional spaces and gives exact nearest neighbors. Furthermore, it is just one header file that you can either "install" or simply copy into your project.
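In case a sketch helps, this is roughly what the same radius query looks like with nanoflann, following its point-cloud adaptor example. Caveat: the exact names (SearchParams vs. SearchParameters, the match container) differ between nanoflann versions, so check the header you actually use:

#include <nanoflann.hpp>
#include <vector>

// Dataset adaptor implementing the interface nanoflann expects.
struct NodeCloud
{
    struct Pt { double x, y, z; };
    std::vector<Pt> pts;

    inline std::size_t kdtree_get_point_count () const { return pts.size (); }
    inline double kdtree_get_pt (std::size_t i, std::size_t dim) const
    { return dim == 0 ? pts[i].x : dim == 1 ? pts[i].y : pts[i].z; }
    template <class BBOX> bool kdtree_get_bbox (BBOX &) const { return false; }
};

typedef nanoflann::KDTreeSingleIndexAdaptor<
    nanoflann::L2_Simple_Adaptor<double, NodeCloud>, NodeCloud, 3> KdTree3d;

// Usage (pre-1.5 API); note the radius is squared, exactly as with FLANN's L2:
//   KdTree3d index (3, cloud, nanoflann::KDTreeSingleIndexAdaptorParams (10));
//   index.buildIndex ();
//   std::vector<std::pair<std::size_t, double> > matches;
//   index.radiusSearch (&query_pt.x, l2_radius, matches, nanoflann::SearchParams ());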
You should check pages 6 and onward of the FLANN manual to fine-tune your search parameters, such as target_precision, which should be set to 1 for "maximum" accuracy.
That parameter is often called epsilon (ε) in approximate nearest neighbor search (ANNS), which is used in high-dimensional spaces in order to try to beat the curse of dimensionality. FLANN is usually used with high-dimensional data (e.g. 128-dimensional descriptors), not 3D points, as far as I can tell, which may explain the inexact results you are experiencing.
A C++ library that works well in 3 dimensions is CGAL. However, it is much larger than FLANN, because it is a library for computational geometry and thus provides functionality for many problems, not just NNS.
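Whatever library you pick, it is worth validating it against an exact brute-force scan on a small subset of the data; a minimal self-contained sketch, assuming plain 3D points and a squared radius as in the FLANN code above:

#include <cstddef>
#include <vector>

struct P3 { double x, y, z; };

// Exact O(n) reference query that any approximate index can be checked against.
std::vector<std::size_t> radiusNeighborsExact (const std::vector<P3> &pts,
                                               const P3 &q, double l2_radius)
{
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < pts.size (); ++i)
    {
        const double dx = pts[i].x - q.x;
        const double dy = pts[i].y - q.y;
        const double dz = pts[i].z - q.z;
        if (dx * dx + dy * dy + dz * dz <= l2_radius) // squared radius, as in FLANN
            out.push_back (i);
    }
    return out;
}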

Matching features of images using PCA-SIFT

I want to match features between two images to detect copy-move forgery. I used the PCA-SIFT code to detect image features, but I am having trouble matching the PCA-SIFT features. According to several papers, a similar matching process is used for PCA-SIFT as for SIFT. I have used the following code snippet to match features.
%des1 and des2 are the PCA-SIFT descriptors obtained from two images
% Precompute matrix transpose (note: des2t is not actually used in the loop below)
des2t = des2';
matchTable = zeros(1,size(des1,1));
cnt = 0; % number of matches
% ratio of distances
distRatio = 0.5;
%normalising features
m1=max(max(des1));
m2=max(max(des2));
m=max(m1,m2);
des1=des1./m;
des2=des2./m;
for i = 1 : size(des1,1)
% find the Euclidean distance from a descriptor in one image to all descriptors in the second image
A=des1(i,:);
D = des2-repmat(A,size(des2,1),1);
[vals,indx] = sort((sum(D.^2,2)).^(1/2)); %sort distances
% Accept the match only if the nearest neighbor is closer than distRatio times the 2nd nearest.
if (vals(1) < distRatio * vals(2))
matchTable(i) = indx(1);
cnt=cnt+1;
else
matchTable(i) = 0;
end
end
cnt
The above code works fine for SIFT features, but I am not able to get correct results for PCA-SIFT features, even after trying several values of distRatio (0-1). I'm also not sure whether the MATLAB Central code for PCA-SIFT (mentioned above) performs the exact process described in this paper.
If somebody has any idea about the above problem, please comment.
The problem is that PCA does not, in general, preserve the Euclidean distance between two vectors. PCA is an orthogonal projection, so distances along the components you keep are preserved, but whatever variance lies in the discarded components is lost. Take a simple example where the principal component is the line y = x: the points (1,1) and (2,0) are sqrt(2) apart in the original space, yet both project to the same point on that line, so after reduction to one component their distance collapses to 0.
However, if you normalize the features by their Euclidean norm, nearest-neighbor search using Euclidean distance is equivalent to ranking by cosine similarity (dot product) between features.
https://en.wikipedia.org/wiki/Cosine_similarity
Therefore, I would first recommend testing whether matching works for SIFT features when you normalize them by their L2 norm. If it does, you can apply PCA to those features, normalize the PCA features by their L2 norm again, and then compute Euclidean distances. As far as I remember, the L2 norm of a SIFT vector is 1, so you only need to normalize your PCA-SIFT features by their L2 norm and compute Euclidean distances.
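A minimal sketch of that normalization in C++ (the thread's main language; the snippet above is MATLAB). The point is that for unit vectors ||a - b||^2 = 2 - 2*dot(a, b), so after L2 normalization, nearest neighbors under Euclidean distance are exactly the maxima of cosine similarity:

#include <cmath>
#include <vector>

// L2-normalize a descriptor in place.
void normalizeL2 (std::vector<float> &d)
{
    float norm2 = 0.f;
    for (float v : d) norm2 += v * v;
    const float norm = std::sqrt (norm2);
    if (norm > 0.f)
        for (float &v : d) v /= norm;
}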

ITK - Calculate texture features for segmented 3D brain MRI

I'm trying to calculate texture features for a segmented 3D brain MRI using the ITK library with C++, so I followed this example. The example takes a 3D image and extracts 3 different features for all 13 possible spatial directions. In my program, I just want, for a given 3D image, to get:
Energy
Correlation
Inertia
Haralick Correlation
Inverse Difference Moment
Cluster Prominence
Cluster Shade
Here is what I have so far:
//definitions of used types
typedef itk::Image<float, 3> InternalImageType;
typedef itk::Image<unsigned char, 3> VisualizingImageType;
typedef itk::Neighborhood<float, 3> NeighborhoodType;
typedef itk::Statistics::ScalarImageToCooccurrenceMatrixFilter<InternalImageType>
Image2CoOccuranceType;
typedef Image2CoOccuranceType::HistogramType HistogramType;
typedef itk::Statistics::HistogramToTextureFeaturesFilter<HistogramType> Hist2FeaturesType;
typedef InternalImageType::OffsetType OffsetType;
typedef itk::AddImageFilter <InternalImageType> AddImageFilterType;
typedef itk::MultiplyImageFilter<InternalImageType> MultiplyImageFilterType;
void calcTextureFeatureImage (OffsetType offset, InternalImageType::Pointer inputImage)
{
// principal variables
// Gray-level co-occurrence matrix generator
Image2CoOccuranceType::Pointer glcmGenerator = Image2CoOccuranceType::New();
glcmGenerator->SetOffset(offset);
glcmGenerator->SetNumberOfBinsPerAxis(16); // reasonable number of bins
glcmGenerator->SetPixelValueMinMax(0, 255); // for input UCHAR pixel type
Hist2FeaturesType::Pointer featureCalc=Hist2FeaturesType::New();
//Region Of Interest
typedef itk::RegionOfInterestImageFilter<InternalImageType,InternalImageType> roiType;
roiType::Pointer roi=roiType::New();
roi->SetInput(inputImage);
InternalImageType::RegionType window;
InternalImageType::RegionType::SizeType size;
size.Fill(50);
window.SetSize(size);
window.SetIndex(0,0);
window.SetIndex(1,0);
window.SetIndex(2,0);
roi->SetRegionOfInterest(window);
roi->Update();
glcmGenerator->SetInput(roi->GetOutput());
glcmGenerator->Update();
featureCalc->SetInput(glcmGenerator->GetOutput());
featureCalc->Update();
std::cout<<"\n Entropy : ";
std::cout<<featureCalc->GetEntropy()<<"\n Energy";
std::cout<<featureCalc->GetEnergy()<<"\n Correlation";
std::cout<<featureCalc->GetCorrelation()<<"\n Inertia";
std::cout<<featureCalc->GetInertia()<<"\n HaralickCorrelation";
std::cout<<featureCalc->GetHaralickCorrelation()<<"\n InverseDifferenceMoment";
std::cout<<featureCalc->GetInverseDifferenceMoment()<<"\nClusterProminence";
std::cout<<featureCalc->GetClusterProminence()<<"\nClusterShade";
std::cout<<featureCalc->GetClusterShade();
}
The program works. However, I have a problem: it gives the same results for different 3D images, even when I change the window size.
Has anyone used ITK to do this? If there is another method to achieve it, could anyone point me to a solution?
Any help will be much appreciated.
I think your images have only one gray-scale level. For example, if you segment your images using the itk-snap tool, then when you save the result of the segmentation, itk-snap saves it with a single gray-scale level. So if you try to calculate texture features for images segmented with itk-snap, you will always get the same results, even if you change the image or the window size, because the co-occurrence matrix contains only one gray level. Try running your program on unsegmented images; you will certainly get different results.
EDIT:
To calculate texture features for segmented images, try another segmentation method, one that preserves the original gray-scale levels of the unsegmented image.
Something else that looks strange in your code is size.Fill(50); the example shows that this should hold the window size:
size.Fill(3); //window size=3x3x3
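For completeness, a small hedged sketch of driving the calcTextureFeatureImage function above over several co-occurrence directions (here only the three axis-aligned offsets out of the 13 possible 3D directions; loading the image is omitted):

// Hypothetical driver for calcTextureFeatureImage: evaluate the
// axis-aligned subset of the 13 possible 3D co-occurrence directions.
void computeAxisAlignedFeatures (InternalImageType::Pointer image)
{
  const int dirs[3][3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
  for (const auto &d : dirs)
  {
    OffsetType offset;
    offset[0] = d[0]; offset[1] = d[1]; offset[2] = d[2];
    calcTextureFeatureImage (offset, image);
  }
}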

To implement FlannBasedMatcher

I am doing a project on face recognition from video images. I extracted the features, and now I need to compare them. I found that FlannBasedMatcher is a good method and that it is very fast. FlannBasedMatcher is already in OpenCV (which I am using), but I would like to implement it myself without any OpenCV help. Please help me understand what exactly happens inside FlannBasedMatcher. Any response will be greatly appreciated.
Features are typically compared using some distance metric, such as the Euclidean distance between features considered as points in some multi-dimensional space; one can use the angle between two (feature) vectors, which is independent of vector scaling; one can use the Hamming distance for comparing binary strings; etc. The best choice depends on the structure and the meaning of your feature vector. For faces it can be the angle between two vectors, expressed through a dot product.
Now, FLANN is used for finding nearest neighbors and as such is not directly related to feature comparison, though it can help to speed up finding similar features that are worth comparing (FLANN = Fast Library for Approximate Nearest Neighbors). Thus you won't need to search through all your vectors trying to select the one with the highest dot product against the query vector; instead you can directly compare a given face (vector) with just a few of the closest faces (vectors).
Finally, addressing a previous answer, in some cases one can use sparse arrays instead of kd-trees. They are part of OpenCV too and can be implemented through hash tables or trees. In sparse arrays you can check the indices of neighboring elements, which is analogous to FLANN's nearest neighbors. Of course, sparse arrays are more limited than FLANN - for example, they require an exhaustive search in the neighborhood to get a nearest-neighbor list - but this is still faster than a global search. Here is an example:
#include <opencv2/core/core.hpp>
using namespace cv;

int dims = 3;
int sz[] = {1000, 1000, 1000}; // only occupied elements consume memory
SparseMat M3d(dims, sz, CV_32FC3); // 3-channel float, so Vec3f elements fit
Point3i idx_sparse;
Vec3f p;
// set an element of the sparse 3D Mat
M3d.ref<Vec3f>(idx_sparse.x, idx_sparse.y, idx_sparse.z) = p;
// iterate over the occupied elements
SparseMatIterator it = M3d.begin();
SparseMatIterator it_end = M3d.end();
for (; it != it_end; ++it) {
    // access the current element through the iterator
    Vec3f vec = it.value<Vec3f>();
    // check a neighboring element; work on a copy of the index so the
    // iterator's internal node is not modified
    const int* cur = it.node()->idx;
    int idx[] = {cur[0] + 1, cur[1] - 1, cur[2] + 2};
    if (const Vec3f* neighbor = M3d.find<Vec3f>(idx)) {
        Vec3f vec_neighbor = *neighbor;
    }
}
It is not that easy. You have to implement a kd-tree with approximate nearest-neighbor search, as described in the paper "An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions" by Arya et al.
If you don't want to do it from scratch and just want to get rid of OpenCV, you can take the original FLANN implementation.
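For reference, here is a rough sketch of the core of what a FlannBasedMatcher-style pipeline does when written against plain FLANN: build a randomized kd-tree index over the train descriptors, run a 2-NN query per query descriptor, and keep matches that pass a distance-ratio test. The 0.7 threshold is an assumption (the usual Lowe-style value), and note that FLANN's L2 metric reports squared distances:

#include <flann/flann.hpp>

// train and query are row-major matrices of float descriptors,
// one descriptor per row.
void matchDescriptors (const flann::Matrix<float> &train, const flann::Matrix<float> &query)
{
    // 4 randomized kd-trees, as FlannBasedMatcher uses by default
    flann::Index<flann::L2<float> > index (train, flann::KDTreeIndexParams (4));
    index.buildIndex ();

    flann::Matrix<int> indices (new int[query.rows * 2], query.rows, 2);
    flann::Matrix<float> dists (new float[query.rows * 2], query.rows, 2);
    index.knnSearch (query, indices, dists, 2, flann::SearchParams (128));

    const float ratio = 0.7f; // assumption: typical ratio-test threshold
    for (size_t i = 0; i < query.rows; ++i)
        // dists are squared L2 distances, hence ratio * ratio
        if (dists[i][0] < ratio * ratio * dists[i][1])
            ; // indices[i][0] is an accepted match for query row i

    delete[] indices.ptr ();
    delete[] dists.ptr ();
}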