OpenCV 2 Centroid - C++

I am trying to find the centroid of a contour but am having trouble implementing the example code in C++ (OpenCV 2.3.1). Can anyone help me out?

To find the centroid of a contour, you can use the method of moments, and the necessary functions are implemented in OpenCV.
Check out the moments function (it computes central and spatial moments).
The code below is taken from the OpenCV 2.3 docs tutorial. Full code here.
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Get the moments
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
/// Get the mass centers:
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Also check out this SO question; although it is in Python, it is useful because it finds all the parameters of a contour.

If you have the mask of the contour area, you can find the centroid location as follows:
cv::Point computeCentroid(const cv::Mat &mask) {
    cv::Moments m = moments(mask, true);
    cv::Point center(m.m10/m.m00, m.m01/m.m00);
    return center;
}
This approach is useful when you have the mask but not the contour. In that case it is computationally more efficient than calling cv::findContours(...) and then finding the mass center.
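For example, a minimal usage sketch (the file name and threshold value are just illustrations):
cv::Mat gray = cv::imread("blob.png", 0);   // load as grayscale
cv::Mat mask = gray > 128;                  // binary mask of the region of interest
cv::Point center = computeCentroid(mask);   // centroid of all non-zero pixels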
Here's the source

Given the contour points, and the formula from Wikipedia, the centroid can be efficiently computed like this:
template <typename T>
cv::Point_<T> computeCentroid(const std::vector<cv::Point_<T> >& in) {
    if (in.size() > 2) {
        T doubleArea = 0;
        cv::Point_<T> p(0,0);
        cv::Point_<T> p0 = in.back();
        for (const cv::Point_<T>& p1 : in) { // C++11
            T a = p0.x * p1.y - p0.y * p1.x; // cross product, (signed) double area of triangle of vertices (origin, p0, p1)
            p += (p0 + p1) * a;
            doubleArea += a;
            p0 = p1;
        }
        if (doubleArea != 0)
            return p * (1 / (3 * doubleArea)); // operator/ does not exist for cv::Point
    }
    /// If we get here, all points lie on one line.
    /// You can compute a fallback value,
    /// e.g. the average of the input vertices.
    [...]
}
Note:
This formula works with vertices given both in clockwise and
counterclockwise order.
If the points have integer coordinates, it
might be convenient to adapt the type of p and of the return value to Point2f or Point2d,
and to add a cast to float or double to the denominator in the return statement.
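For example, a sketch of the adaptation the note suggests, taking integer input points and returning a floating-point centroid (the fallback branch simply averages the vertices, as suggested above):
cv::Point2d computeCentroid(const std::vector<cv::Point2i>& in) {
    if (in.size() > 2) {
        double doubleArea = 0;
        cv::Point2d p(0, 0);
        cv::Point2d p0 = in.back();
        for (const cv::Point2i& p1 : in) {
            double a = p0.x * p1.y - p0.y * p1.x;   // signed double area of triangle (origin, p0, p1)
            p += (p0 + cv::Point2d(p1)) * a;
            doubleArea += a;
            p0 = p1;
        }
        if (doubleArea != 0)
            return p * (1.0 / (3.0 * doubleArea));  // denominator is now a double
    }
    // degenerate case (collinear points): average of the input vertices
    cv::Point2d mean(0, 0);
    for (const cv::Point2i& q : in) mean += cv::Point2d(q);
    return in.empty() ? mean : mean * (1.0 / in.size());
}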

If all you need is an approximation of the centroid, here are a couple of simple ways to do it. The first just averages the points:
cv::Point2f centroid(0.f, 0.f);
if (!array_points.empty()) {
    for (const auto& point : array_points) {
        centroid.x += point.x;
        centroid.y += point.y;
    }
    centroid.x /= array_points.size();
    centroid.y /= array_points.size();
}
Or with the help of OpenCV's boundingRect (this gives the center of the bounding box, which is only an approximation of the centroid):
cv::Rect bRect = cv::boundingRect(array_points);
cv::Point2f centroid(bRect.x + bRect.width / 2.0f,
                     bRect.y + bRect.height / 2.0f);

Related

Extract corner (extreme corner points) of quadrangle from black/white image using OpenCV C++

As part of a bigger project, I need to extract the extreme bottom corners of a quadrangle. I have an image and a corresponding binary Mat with 1s where the image is white (the image) and 0 where black (the background).
I've found ways to find the extreme left, right, bottom and top points but they may not give the points I want as the quadrangles are not perfectly rectangular.
https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/
Finding Top Left and Bottom Right Points (C++)
Finding extreme points in contours with OpenCV C++
The only way I can think of doing it is not very good. I'm hoping you guys can think of a better way than to just cycle through the matrix for the bottom-most row, then the left-most point, and then keep points within a certain radius of that bottom-left point.
And the same for the right, but this is not very computationally efficient.
This is an example quadrangle and the corners of interest.
The ideal output is two Mats, similar to the original one, that have 1s only in the regions of interest and 0s everywhere else.
Any and all help will be greatly appreciated!!
There are at least two possible approaches. Both start from the binarized image and the extracted contour of the quadrangle:
1. Use the approxPolyDP function to approximate the contour and get the 4 vertices of the quadrangle.
2. Fit a rectangle to the contour and then find the nearest points in your contour to the bottom vertices of this rectangle.
// bin - your binarized image
std::vector<std::vector<cv::Point2i>> contours;
cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int biggestContourIdx = -1;
double biggestContourArea = 0;
for (int i = 0; i < contours.size(); ++i)
{
auto area = cv::contourArea(contours[i]);
if (area > biggestContourArea)
{
biggestContourArea = area;
biggestContourIdx = i;
}
}
//first solution:
std::vector<cv::Point2i> approx;
cv::approxPolyDP(contours[biggestContourIdx], approx, 30, true);
auto mean = cv::mean(approx);
std::vector<cv::Point2i> bottomCorners;
for (auto p : approx)
{
if (p.y > mean[1]) bottomCorners.push_back(p);
}
//second solution:
auto rect = cv::minAreaRect(cv::Mat(contours[biggestContourIdx]));
auto center = rect.center;
Point2f rect_points[4];
rect.points(rect_points);
std::vector<cv::Point2i> bottomRectCorners;
std::vector<double> distances(2, std::numeric_limits<double>::max());
for (int i = 0; i < 4; ++i)
{
if (rect_points[i].y > center.y)
bottomRectCorners.push_back(rect_points[i]);
}
bottomCorners.clear();
bottomCorners.resize(2);
for (auto p : contours[biggestContourIdx])
{
for (int i = 0; i < distances.size(); ++i)
{
auto dist = cv::norm(p - bottomRectCorners[i]);
if (dist < distances[i])
{
distances[i] = dist;
bottomCorners[i] = p;
}
}
}
Results of both approaches: red - first method, green - second method.

Get a single line representation for multiple close by lines clustered together in opencv

I detected lines in an image and drew them in a separate image file in OpenCV C++ using the HoughLinesP method. The following is a part of the resulting image. There are actually hundreds of small and thin lines which form a big single line.
But I want just a few lines that represent all of those lines. Closer lines should be merged together to form a single line. For example, the above set of lines should be represented by just 3 separate lines, as below.
The expected output is as above. How can I accomplish this task?
Progress so far, using akarsakov's answer
(the resulting classes of lines are drawn in different colors). Note that this result is from the original complete image I am working on, not the sample section I used in the question.
If you don't know the number of lines in the image, you can use the cv::partition function to split the lines into equivalence groups.
I suggest the following procedure:
Split your lines using cv::partition. You need to specify a good predicate function. It really depends on the lines you extract from the image, but I think it should check the following conditions:
The angle between the lines should be quite small (less than 3 degrees, for example). Use the dot product to calculate the cosine of the angle.
The distance between the centers of the segments should be less than half of the maximum length of the two segments.
For example, it can be implemented as follows:
bool isEqual(const Vec4i& _l1, const Vec4i& _l2)
{
    Vec4i l1(_l1), l2(_l2);

    float length1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
    float length2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));

    float product = (l1[2] - l1[0])*(l2[2] - l2[0]) + (l1[3] - l1[1])*(l2[3] - l2[1]);

    if (fabs(product / (length1 * length2)) < cos(CV_PI / 30))
        return false;

    float mx1 = (l1[0] + l1[2]) * 0.5f;
    float mx2 = (l2[0] + l2[2]) * 0.5f;

    float my1 = (l1[1] + l1[3]) * 0.5f;
    float my2 = (l2[1] + l2[3]) * 0.5f;

    float dist = sqrtf((mx1 - mx2)*(mx1 - mx2) + (my1 - my2)*(my1 - my2));

    if (dist > std::max(length1, length2) * 0.5f)
        return false;

    return true;
}
Suppose you have your lines in vector<Vec4i> lines;. Next, you should call cv::partition as follows:
vector<Vec4i> lines;
std::vector<int> labels;
int numberOfLines = cv::partition(lines, labels, isEqual);
You only need to call cv::partition once, and it will cluster all the lines. The vector labels will store, for each line, the label of the cluster it belongs to. See the documentation for cv::partition.
After you get all the groups of lines you should merge them. I suggest calculating the average angle of all lines in a group and estimating the "border" points. For example, if the angle is zero (i.e. all lines are almost horizontal) these would be the left-most and right-most points. It then only remains to draw a line between these points.
I noticed that all lines in your examples are horizontal or vertical. In such a case you can calculate a point which is the average of all segment centers and "border" points, and then just draw a horizontal or vertical line limited by the "border" points through the center point.
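For the (almost) horizontal case, that merging step could look roughly like the sketch below. It assumes the lines, labels and numberOfLines from the cv::partition call; result is just some image to draw into, and INT_MAX/INT_MIN come from <climits>:
// connect the left-most and right-most endpoints of each equivalence class
std::vector<Vec4i> merged(numberOfLines, Vec4i(INT_MAX, 0, INT_MIN, 0));
for (size_t i = 0; i < lines.size(); i++) {
    Vec4i& m = merged[labels[i]];
    const Vec4i& l = lines[i];
    if (l[0] < m[0]) { m[0] = l[0]; m[1] = l[1]; }   // new left-most endpoint
    if (l[2] < m[0]) { m[0] = l[2]; m[1] = l[3]; }
    if (l[0] > m[2]) { m[2] = l[0]; m[3] = l[1]; }   // new right-most endpoint
    if (l[2] > m[2]) { m[2] = l[2]; m[3] = l[3]; }
}
for (const Vec4i& m : merged)
    line(result, Point(m[0], m[1]), Point(m[2], m[3]), Scalar(0, 0, 255), 2);
For lines at arbitrary angles you would instead project the endpoints onto the average direction of the group and keep the two extremes.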
Please note that cv::partition takes O(N^2) time, so if you process a huge number of lines it may take a lot of time.
I hope it helps. I used such an approach for a similar task.
First off I want to note that your original image is at a slight angle, so your expected output seems just a bit off to me. I'm assuming you are okay with lines that are not 100% vertical in your output because they are slightly off on your input.
Mat image;              // your input image, loaded as grayscale
Mat mask = image > 125; // convert to a binary image
// Combine similar lines
int size = 3;
Mat element = getStructuringElement( MORPH_ELLIPSE, Size( 2*size + 1, 2*size+1 ), Point( size, size ) );
morphologyEx( mask, mask, MORPH_CLOSE, element );
So far this yields this image:
These lines are not at 90 degree angles because the original image is not.
You can also choose to close the gap between the lines with:
Mat out = Mat::zeros(mask.size(), mask.type());
vector<Vec4i> lines;
HoughLinesP(mask, lines, 1, CV_PI/2, 50, 50, 75);
for( size_t i = 0; i < lines.size(); i++ )
{
Vec4i l = lines[i];
line( out, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(255), 5, CV_AA);
}
If these lines are too fat, I've had success thinning them with:
size = 15;
Mat eroded;
cv::Mat erodeElement = getStructuringElement( MORPH_ELLIPSE, cv::Size( size, size ) );
erode( mask, eroded, erodeElement );
Here is a refinement built upon @akarsakov's answer.
A basic issue with:
The distance between the centers of the segments should be less than half of the maximum length of the two segments.
is that parallel long lines that are visually far apart might end up in the same equivalence class (as demonstrated in the OP's edit).
Therefore, the approach that I found to work reasonably well for me:
Construct a window (bounding rectangle) around line1.
Two lines are equivalent if line2's angle is close enough to line1's and at least one endpoint of line2 is inside line1's bounding rectangle.
Often a long linear feature in the image that is quite weak will end up being recognized (by HoughP, LSD) as a set of line segments with considerable gaps between them. To alleviate this, the bounding rectangle is constructed around the line extended in both directions, where the extension is defined by a fraction of the original line length.
bool extendedBoundingRectangleLineEquivalence(const Vec4i& _l1, const Vec4i& _l2, float extensionLengthFraction, float maxAngleDiff, float boundingRectangleThickness){
Vec4i l1(_l1), l2(_l2);
// extend lines by percentage of line width
float len1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
float len2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));
Vec4i el1 = extendedLine(l1, len1 * extensionLengthFraction);
Vec4i el2 = extendedLine(l2, len2 * extensionLengthFraction);
// reject the lines that have wide difference in angles
float a1 = atan(linearParameters(el1)[0]);
float a2 = atan(linearParameters(el2)[0]);
if(fabs(a1 - a2) > maxAngleDiff * M_PI / 180.0){
return false;
}
// calculate window around extended line
// at least one point needs to inside extended bounding rectangle of other line,
std::vector<Point2i> lineBoundingContour = boundingRectangleContour(el1, boundingRectangleThickness/2);
return
pointPolygonTest(lineBoundingContour, cv::Point(el2[0], el2[1]), false) == 1 ||
pointPolygonTest(lineBoundingContour, cv::Point(el2[2], el2[3]), false) == 1;
}
where linearParameters, extendedLine and boundingRectangleContour are as follows:
Vec2d linearParameters(Vec4i line){
Mat a = (Mat_<double>(2, 2) <<
line[0], 1,
line[2], 1);
Mat y = (Mat_<double>(2, 1) <<
line[1],
line[3]);
Vec2d mc; solve(a, y, mc);
return mc;
}
Vec4i extendedLine(Vec4i line, double d){
// oriented left-to-right
Vec4d _line = line[2] - line[0] < 0 ? Vec4d(line[2], line[3], line[0], line[1]) : Vec4d(line[0], line[1], line[2], line[3]);
double m = linearParameters(_line)[0];
// solution of pythagorean theorem and m = yd/xd
double xd = sqrt(d * d / (m * m + 1));
double yd = xd * m;
return Vec4d(_line[0] - xd, _line[1] - yd , _line[2] + xd, _line[3] + yd);
}
std::vector<Point2i> boundingRectangleContour(Vec4i line, float d){
// finds coordinates of perpendicular lines with length d in both line points
// https://math.stackexchange.com/a/2043065/183923
Vec2f mc = linearParameters(line);
float m = mc[0];
float factor = sqrtf(
(d * d) / (1 + (1 / (m * m)))
);
float x3, y3, x4, y4, x5, y5, x6, y6;
// special case(vertical perpendicular line) when -1/m -> -infinity
if(m == 0){
x3 = line[0]; y3 = line[1] + d;
x4 = line[0]; y4 = line[1] - d;
x5 = line[2]; y5 = line[3] + d;
x6 = line[2]; y6 = line[3] - d;
} else {
// slope of perpendicular lines
float m_per = - 1/m;
// y1 = m_per * x1 + c_per
float c_per1 = line[1] - m_per * line[0];
float c_per2 = line[3] - m_per * line[2];
// coordinates of perpendicular lines
x3 = line[0] + factor; y3 = m_per * x3 + c_per1;
x4 = line[0] - factor; y4 = m_per * x4 + c_per1;
x5 = line[2] + factor; y5 = m_per * x5 + c_per2;
x6 = line[2] - factor; y6 = m_per * x6 + c_per2;
}
return std::vector<Point2i> {
Point2i(x3, y3),
Point2i(x4, y4),
Point2i(x6, y6),
Point2i(x5, y5)
};
}
To partition, call:
std::vector<int> labels;
int equilavenceClassesCount = cv::partition(linesWithoutSmall, labels, [](const Vec4i l1, const Vec4i l2){
return extendedBoundingRectangleLineEquivalence(
l1, l2,
// line extension length - as fraction of original line width
0.2,
// maximum allowed angle difference for lines to be considered in same equivalence class
2.0,
// thickness of bounding rectangle around each line
10);
});
Now, in order to reduce each equivalence class to a single line, we build a point cloud out of it and fit a line to it:
// fit line to each equivalence class point cloud
std::vector<Vec4i> reducedLines = std::accumulate(pointClouds.begin(), pointClouds.end(), std::vector<Vec4i>{}, [](std::vector<Vec4i> target, const std::vector<Point2i>& _pointCloud){
std::vector<Point2i> pointCloud = _pointCloud;
//lineParams: [vx,vy, x0,y0]: (normalized vector, point on our contour)
// (x,y) = (x0,y0) + t*(vx,vy), t -> (-inf; inf)
Vec4f lineParams; fitLine(pointCloud, lineParams, CV_DIST_L2, 0, 0.01, 0.01);
// derive the bounding xs of point cloud
decltype(pointCloud)::iterator minXP, maxXP;
std::tie(minXP, maxXP) = std::minmax_element(pointCloud.begin(), pointCloud.end(), [](const Point2i& p1, const Point2i& p2){ return p1.x < p2.x; });
// derive y coords of fitted line
float m = lineParams[1] / lineParams[0];
int y1 = ((minXP->x - lineParams[2]) * m) + lineParams[3];
int y2 = ((maxXP->x - lineParams[2]) * m) + lineParams[3];
target.push_back(Vec4i(minXP->x, y1, maxXP->x, y2));
return target;
});
Demonstration:
Detected partitioned lines (with small lines filtered out):
Reduced:
Demonstration code:
int main(int argc, const char* argv[]){
if(argc < 2){
    std::cout << "img filepath should be present in args" << std::endl;
    return -1;
}
Mat image = imread(argv[1]);
Mat smallerImage; resize(image, smallerImage, cv::Size(), 0.5, 0.5, INTER_CUBIC);
Mat target = smallerImage.clone();
namedWindow("Detected Lines", WINDOW_NORMAL);
namedWindow("Reduced Lines", WINDOW_NORMAL);
Mat detectedLinesImg = Mat::zeros(target.rows, target.cols, CV_8UC3);
Mat reducedLinesImg = detectedLinesImg.clone();
// detect lines in any reasonable way
Mat grayscale; cvtColor(target, grayscale, CV_BGRA2GRAY);
Ptr<LineSegmentDetector> detector = createLineSegmentDetector(LSD_REFINE_NONE);
std::vector<Vec4i> lines; detector->detect(grayscale, lines);
// remove small lines
std::vector<Vec4i> linesWithoutSmall;
std::copy_if (lines.begin(), lines.end(), std::back_inserter(linesWithoutSmall), [](Vec4f line){
float length = sqrtf((line[2] - line[0]) * (line[2] - line[0])
+ (line[3] - line[1]) * (line[3] - line[1]));
return length > 30;
});
std::cout << "Detected: " << linesWithoutSmall.size() << std::endl;
// partition via our partitioning function
std::vector<int> labels;
int equilavenceClassesCount = cv::partition(linesWithoutSmall, labels, [](const Vec4i l1, const Vec4i l2){
return extendedBoundingRectangleLineEquivalence(
l1, l2,
// line extension length - as fraction of original line width
0.2,
// maximum allowed angle difference for lines to be considered in same equivalence class
2.0,
// thickness of bounding rectangle around each line
10);
});
std::cout << "Equivalence classes: " << equilavenceClassesCount << std::endl;
// grab a random colour for each equivalence class
RNG rng(215526);
std::vector<Scalar> colors(equilavenceClassesCount);
for (int i = 0; i < equilavenceClassesCount; i++){
colors[i] = Scalar(rng.uniform(30,255), rng.uniform(30, 255), rng.uniform(30, 255));;
}
// draw original detected lines
for (int i = 0; i < linesWithoutSmall.size(); i++){
Vec4i& detectedLine = linesWithoutSmall[i];
line(detectedLinesImg,
cv::Point(detectedLine[0], detectedLine[1]),
cv::Point(detectedLine[2], detectedLine[3]), colors[labels[i]], 2);
}
// build point clouds out of each equivalence classes
std::vector<std::vector<Point2i>> pointClouds(equilavenceClassesCount);
for (int i = 0; i < linesWithoutSmall.size(); i++){
Vec4i& detectedLine = linesWithoutSmall[i];
pointClouds[labels[i]].push_back(Point2i(detectedLine[0], detectedLine[1]));
pointClouds[labels[i]].push_back(Point2i(detectedLine[2], detectedLine[3]));
}
// fit line to each equivalence class point cloud
std::vector<Vec4i> reducedLines = std::accumulate(pointClouds.begin(), pointClouds.end(), std::vector<Vec4i>{}, [](std::vector<Vec4i> target, const std::vector<Point2i>& _pointCloud){
std::vector<Point2i> pointCloud = _pointCloud;
//lineParams: [vx,vy, x0,y0]: (normalized vector, point on our contour)
// (x,y) = (x0,y0) + t*(vx,vy), t -> (-inf; inf)
Vec4f lineParams; fitLine(pointCloud, lineParams, CV_DIST_L2, 0, 0.01, 0.01);
// derive the bounding xs of point cloud
decltype(pointCloud)::iterator minXP, maxXP;
std::tie(minXP, maxXP) = std::minmax_element(pointCloud.begin(), pointCloud.end(), [](const Point2i& p1, const Point2i& p2){ return p1.x < p2.x; });
// derive y coords of fitted line
float m = lineParams[1] / lineParams[0];
int y1 = ((minXP->x - lineParams[2]) * m) + lineParams[3];
int y2 = ((maxXP->x - lineParams[2]) * m) + lineParams[3];
target.push_back(Vec4i(minXP->x, y1, maxXP->x, y2));
return target;
});
for(Vec4i reduced: reducedLines){
line(reducedLinesImg, Point(reduced[0], reduced[1]), Point(reduced[2], reduced[3]), Scalar(255, 255, 255), 2);
}
imshow("Detected Lines", detectedLinesImg);
imshow("Reduced Lines", reducedLinesImg);
waitKey();
return 0;
}
I would recommend that you use HoughLines from OpenCV.
void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
With rho and theta you can adjust the possible orientation and position of the lines you want to observe.
In your case, theta = 90° would be fine (only vertical and horizontal lines).
After this, you can get unique line equations with Plücker coordinates. From there you could apply K-means with 3 centers, which should fit approximately your 3 lines in the second image.
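A rough sketch of that idea, clustering the (rho, theta) parameters returned by HoughLines directly with K-means (rather than going through Plücker coordinates); edges is assumed to be your edge image and the thresholds are placeholders:
std::vector<cv::Vec2f> houghLines;
cv::HoughLines(edges, houghLines, 1, CV_PI / 2, 100);   // theta step of 90 deg: horizontal/vertical only
// cluster the (rho, theta) pairs into 3 groups
cv::Mat samples((int)houghLines.size(), 2, CV_32F);
for (size_t i = 0; i < houghLines.size(); i++) {
    samples.at<float>((int)i, 0) = houghLines[i][0];    // rho
    samples.at<float>((int)i, 1) = houghLines[i][1];    // theta (consider scaling it, since rho is in pixels)
}
cv::Mat labels, centers;
cv::kmeans(samples, 3, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 20, 1.0),
           5, cv::KMEANS_PP_CENTERS, centers);
// each row of 'centers' is now the (rho, theta) of one merged line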
PS: I will see if I can test the whole process with your image.
You can merge multiple close lines into a single line by clustering the lines using rho and theta and finally taking the average of rho and theta.
void contourLines(vector<cv::Vec2f> lines, const float rho_threshold, const float theta_threshold, vector< cv::Vec2f > &combinedLines)
{
vector< vector<int> > combineIndex(lines.size());
for (int i = 0; i < lines.size(); i++)
{
int index = i;
for (int j = i; j < lines.size(); j++)
{
float distanceI = lines[i][0], distanceJ = lines[j][0];
float slopeI = lines[i][1], slopeJ = lines[j][1];
float disDiff = abs(distanceI - distanceJ);
float slopeDiff = abs(slopeI - slopeJ);
if (slopeDiff < theta_threshold && disDiff < rho_threshold)
{
bool isCombined = false;
for (int w = 0; w < i; w++)
{
for (int u = 0; u < combineIndex[w].size(); u++)
{
if (combineIndex[w][u] == j)
{
isCombined = true;
break;
}
if (combineIndex[w][u] == i)
index = w;
}
if (isCombined)
break;
}
if (!isCombined)
combineIndex[index].push_back(j);
}
}
}
for (int i = 0; i < combineIndex.size(); i++)
{
if (combineIndex[i].size() == 0)
continue;
cv::Vec2f line_temp(0, 0);
for (int j = 0; j < combineIndex[i].size(); j++) {
line_temp[0] += lines[combineIndex[i][j]][0];
line_temp[1] += lines[combineIndex[i][j]][1];
}
line_temp[0] /= combineIndex[i].size();
line_temp[1] /= combineIndex[i].size();
combinedLines.push_back(line_temp);
}
}
Function call:
You can tune houghThreshold, rho_threshold and theta_threshold as per your application.
HoughLines(edge, lines_t, 1, CV_PI / 180, houghThreshold, 0, 0);
float rho_threshold= 15;
float theta_threshold = 3*DEGREES_TO_RADIANS;
vector< cv::Vec2f > lines;
contourLines(lines_t, rho_threshold, theta_threshold, lines);
@C_Raj made a good point: for lines like this, i.e. most likely extracted from table/form-like images, you should make full use of the fact that many of the line segments captured by the Hough transform from the same lines have very similar rho and theta.
After clustering these line segments based on their rho and theta, you can apply 2D line fitting to obtain estimates of the true lines in the image.
There is a paper describing this idea, which makes further assumptions about the lines in a page.
HTH.

OpenCV mass center point

I found the mass center of an irregular shape, but now I need to compute the distance to any given point.
I understand that mc is a vector of points, but how can I find the coordinates of mc so I can calculate the distance between the mass center and some other point? Thanks.
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{
mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 );
}
First you should get the point by index. Let:
int size = contours.size();
The indices are i = 0 ... size-1. The point at index i is
mc[i];
The coordinates of that point can be reached by:
float xCoor = mc[i].x;
float yCoor = mc[i].y;
Of course you can read those values in a loop from i = 0 to size-1 if you want to read the coordinates of all the mc points.
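For example:
for (int i = 0; i < (int)mc.size(); i++)
    std::cout << "center " << i << ": (" << mc[i].x << ", " << mc[i].y << ")" << std::endl;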
Edit:
I assumed that you knew how to find the mass center and that you were just asking how to get the coordinates. But if you want to get the mass center and the distance from the mass center to some other point, then you could do the following:
float distance;
float totalX=0.0, totalY=0.0;
for(int i=0; i<size; i++) {
totalX+=mc[i].x;
totalY+=mc[i].y;
}
Point2f massCenter(totalX/size, totalY/size); // condition: size != 0
Point2f someOtherPoint(someXVal, someYVal);
distance = cv::norm(massCenter - someOtherPoint);
distance is then the distance from the mass center to the other point.
Hope that helps!
mc[i].x and mc[i].y are the x and y coordinates of the point of index i.
To compute the center of mass:
cv::Point2f barycenter(0,0);
for( int i = 0; i < mc.size(); i++ )
    barycenter += mc[i];
barycenter.x /= mc.size();
barycenter.y /= mc.size();
Check that you have at least one point in your vector.
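The distance to any other point can then be obtained with cv::norm, for example (someOtherPoint is whatever point you are measuring against):
cv::Point2f someOtherPoint(10.f, 20.f);
double distance = cv::norm(barycenter - someOtherPoint);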

How to find euclidean distance between keypoints of a single image in opencv

I want to get a distance vector d for each key point in the image. The distance vector should consist of distances from that keypoint to all other keypoints in that image.
Note: Keypoints are found using SIFT.
I'm pretty new to OpenCV. Is there a library function in C++ that can make my task easy?
If you aren't interested in the position distance but rather the descriptor distance, you can use this:
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
cv::Mat selfDistances = cv::Mat::zeros(descr.rows,descr.rows, CV_64FC1);
for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
{
for(int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
{
double euclideanDistance = 0;
for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
{
double tmp = descr.at<float>(keyptNr,descrDim) - descr.at<float>(keyptNr2, descrDim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
}
}
return selfDistances;
}
which will give you an N x N matrix (N = number of keypoints) where Mat(i,j) = Euclidean distance between keypoints i and j.
with this input:
I get these outputs:
an image where the keypoints that have a distance of less than 0.05 are marked
an image that corresponds to the matrix; white pixels are dist < 0.05.
REMARK: you can optimize many things in the computation of the matrix, since distances are symmetric!
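For example, exploiting the symmetry roughly halves the work: start the inner loop at keyptNr + 1 and mirror each value (the diagonal stays zero because the matrix is initialized with zeros). A sketch of the modified loops:
for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
{
    for(int keyptNr2 = keyptNr + 1; keyptNr2 < descr.rows; ++keyptNr2)
    {
        double euclideanDistance = 0;
        for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
        {
            double tmp = descr.at<float>(keyptNr,descrDim) - descr.at<float>(keyptNr2, descrDim);
            euclideanDistance += tmp*tmp;
        }
        euclideanDistance = sqrt(euclideanDistance);
        selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
        selfDistances.at<double>(keyptNr2, keyptNr) = euclideanDistance; // mirror
    }
}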
UPDATE:
Here is another way to do it:
From your chat I know that you would need 13 GB of memory to hold that distance information for 41381 keypoints (which you tried). If instead you want only the N best matches, try this code:
// choose double here if you are worried about precision!
#define intermediatePrecision float
//#define intermediatePrecision double
//
void NBestMatches(cv::Mat descriptors1, cv::Mat descriptors2, unsigned int n, std::vector<std::vector<float> > & distances, std::vector<std::vector<int> > & indices)
{
// TODO: check whether descriptor dimensions and types are the same for both!
// clear vector
// get enough space to create n best matches
distances.clear();
distances.resize(descriptors1.rows);
indices.clear();
indices.resize(descriptors1.rows);
for(int i=0; i<descriptors1.rows; ++i)
{
// references to current elements:
std::vector<float> & cDistances = distances.at(i);
std::vector<int> & cIndices = indices.at(i);
// initialize:
cDistances.resize(n,FLT_MAX);
cIndices.resize(n,-1); // for -1 = "no match found"
// now find the n best matches for descriptor i:
for(int j=0; j<descriptors2.rows; ++j)
{
intermediatePrecision euclideanDistance = 0;
for( int dim = 0; dim < descriptors1.cols; ++dim)
{
intermediatePrecision tmp = descriptors1.at<float>(i,dim) - descriptors2.at<float>(j, dim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
float tmpCurrentDist = euclideanDistance;
int tmpCurrentIndex = j;
// update current best n matches:
for(unsigned int k=0; k<n; ++k)
{
if(tmpCurrentDist < cDistances.at(k))
{
int tmpI2 = cIndices.at(k);
float tmpD2 = cDistances.at(k);
// update current k-th best match
cDistances.at(k) = tmpCurrentDist;
cIndices.at(k) = tmpCurrentIndex;
// previous k-th best should be better than k+1-th best //TODO: a simple memcpy would be faster I guess.
tmpCurrentDist = tmpD2;
tmpCurrentIndex =tmpI2;
}
}
}
}
}
It computes the N best matches for each keypoint of the first descriptor set against the second descriptor set. So if you want to do that for the same keypoints, set descriptors1 = descriptors2 in your call, as shown below. Remember: the function doesn't know that both descriptor sets are identical, so the first best match (or at least one of them) will always be the keypoint itself with distance 0! Keep that in mind when using the results!
Here's sample code to generate an image similar to the one above:
int main()
{
cv::Mat input = cv::imread("../inputData/MultiLena.png");
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);
cv::SiftFeatureDetector detector( 7500 );
cv::SiftDescriptorExtractor describer;
std::vector<cv::KeyPoint> keypoints;
detector.detect( gray, keypoints );
// draw keypoints
cv::drawKeypoints(input,keypoints,input);
cv::Mat descriptors;
describer.compute(gray, keypoints, descriptors);
int n = 4;
std::vector<std::vector<float> > dists;
std::vector<std::vector<int> > indices;
// compute the N best matches between the descriptors and themselves.
// REMIND: ONE best match will always be the keypoint itself in this setting!
NBestMatches(descriptors, descriptors, n, dists, indices);
for(unsigned int i=0; i<dists.size(); ++i)
{
for(unsigned int j=0; j<dists.at(i).size(); ++j)
{
if(dists.at(i).at(j) < 0.05)
cv::line(input, keypoints[i].pt, keypoints[indices.at(i).at(j)].pt, cv::Scalar(255,255,255) );
}
}
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
Create a 2D vector (the size of which would be N x N):
std::vector< std::vector< float > > item;
Create 2 for loops to go up to the number of keypoints (N) you have.
Calculate distances as suggested by a-Jays:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
Add this to the vector using push_back for each keypoint, i.e. N times.
The keypoint class has a member called pt which in turn has x and y [the (x,y) location of the point] as its own members.
Given two keypoints kp1 and kp2, it's then easy to calculate the euclidean distance as:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
In your case, it is going to be a double loop iterating over all the keypoints.
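Putting this together, a minimal sketch of that double loop, filling a symmetric N x N distance matrix (keypoints is assumed to be the vector returned by your SIFT detector):
std::vector<cv::KeyPoint> keypoints;   // filled by the detector
size_t n = keypoints.size();
std::vector<std::vector<float> > distances(n, std::vector<float>(n, 0.0f));
for (size_t i = 0; i < n; ++i) {
    for (size_t j = i + 1; j < n; ++j) {
        cv::Point2f diff = keypoints[i].pt - keypoints[j].pt;
        float dist = std::sqrt(diff.x * diff.x + diff.y * diff.y);
        distances[i][j] = dist;
        distances[j][i] = dist;        // the matrix is symmetric
    }
}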

Is there an easy way/algorithm to match 2 clouds of 2D points?

I am wondering if there is an easy way to match (register) 2 clouds of 2d points.
Let's say I have an object represented by points and a cluttered 2nd image with the object points plus noise (noise in the sense of points that are useless).
Basically the object can be 2D rotated as well as translated and scaled.
I know there is the ICP algorithm, but I think that this is not a good approach due to the high noise.
I hope that you understand what I mean. Please ask if anything is unclear (I'm sure it is).
cheers
Here is a function that finds the translation and rotation. Generalization to scaling, weighted points, and RANSAC is straightforward. I used the OpenCV library for visualization and SVD. The function below combines data generation, a unit test, and the actual solution.
// rotation and translation in 2D from point correspondences
void rigidTransform2D(const int N) {
// Algorithm: http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
const bool debug = false; // print more debug info
const bool add_noise = true; // add noise to input and output
srand(time(NULL)); // randomize each time
/*********************************
* Create data with some noise
**********************************/
// Simulated transformation
Point2f T(1.0f, -2.0f);
float a = 30.0; // [-180, 180], see atan2(y, x)
float noise_level = 0.1f;
cout<<"True parameters: rot = "<<a<<"deg., T = "<<T<<
"; noise level = "<<noise_level<<endl;
// noise
vector<Point2f> noise_src(N), noise_dst(N);
for (int i=0; i<N; i++) {
noise_src[i] = Point2f(randf(noise_level), randf(noise_level));
noise_dst[i] = Point2f(randf(noise_level), randf(noise_level));
}
// create data with noise
vector<Point2f> src(N), dst(N);
float Rdata = 10.0f; // radius of data
float cosa = cos(a*DEG2RAD);
float sina = sin(a*DEG2RAD);
for (int i=0; i<N; i++) {
// src
float x1 = randf(Rdata);
float y1 = randf(Rdata);
src[i] = Point2f(x1,y1);
if (add_noise)
src[i] += noise_src[i];
// dst
float x2 = x1*cosa - y1*sina;
float y2 = x1*sina + y1*cosa;
dst[i] = Point2f(x2,y2) + T;
if (add_noise)
dst[i] += noise_dst[i];
if (debug)
cout<<i<<": "<<src[i]<<"---"<<dst[i]<<endl;
}
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
/*********************************
* Visualize data
**********************************/
// Visualization
namedWindow("data", 1);
float w = 400, h = 400;
Mat Mdata(w, h, CV_8UC3); Mdata = Scalar(0);
Point2f center_img(w/2, h/2);
float scl = 0.4*min(w/Rdata, h/Rdata); // compensate for noise
scl/=sqrt(2); // compensate for rotation effect
Point2f dT = (center_src+center_dst)*0.5; // compensate for translation
for (int i=0; i<N; i++) {
Point2f p1(scl*(src[i] - dT));
Point2f p2(scl*(dst[i] - dT));
// invert Y axis
p1.y = -p1.y; p2.y = -p2.y;
// add image center
p1+=center_img; p2+=center_img;
circle(Mdata, p1, 1, Scalar(0, 255, 0));
circle(Mdata, p2, 1, Scalar(0, 0, 255));
line(Mdata, p1, p2, Scalar(100, 100, 100));
}
/*********************************
* Get 2D rotation and translation
**********************************/
markTime();
// subtract centroids from data
for (int i=0; i<N; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<N; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
if (debug)
cout<<"Covariance Matrix "<<Mcov<<endl;
// SVD
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation = V*Ut
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation = mean_dst - R*mean_src
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// RMSE
double RMSE = 0.0;
for (int i=0; i<N; i++) {
Point2f dst_est(
cos_est*src[i].x - sin_est*src[i].y,
sin_est*src[i].x + cos_est*src[i].y);
RMSE += SQR(dst[i].x - dst_est.x) + SQR(dst[i].y - dst_est.y);
}
if (N>0)
RMSE = sqrt(RMSE/N);
// Final estimate msg
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<"; RMSE = "<<RMSE<<endl;
// show image
printTime(1);
imshow("data", Mdata);
waitKey(-1);
return;
} // rigidTransform2D()
// --------------------------- 3DOF
// calculates squared error from two point mapping; assumes rotation around Origin.
inline float sqErr_3Dof(Point2f p1, Point2f p2,
float cos_alpha, float sin_alpha, Point2f T) {
float x2_est = T.x + cos_alpha * p1.x - sin_alpha * p1.y;
float y2_est = T.y + sin_alpha * p1.x + cos_alpha * p1.y;
Point2f p2_est(x2_est, y2_est);
Point2f dp = p2_est-p2;
float sq_er = dp.dot(dp); // squared distance
//cout<<dp<<endl;
return sq_er;
}
// calculate RMSE for point-to-point metrics
float RMSE_3Dof(const vector<Point2f>& src, const vector<Point2f>& dst,
const float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float ang_rad = param[0];
Point2f T(param[1], param[2]);
float cos_alpha = cos(ang_rad);
float sin_alpha = sin(ang_rad);
double RMSE = 0.0;
int ninliers = 0;
for (unsigned int i=0; i<n; i++) {
if (all_inliers || inliers[i]) {
RMSE += sqErr_3Dof(src[i]-center, dst[i]-center, cos_alpha, sin_alpha, T);
ninliers++;
}
}
//cout<<"RMSE = "<<RMSE<<endl;
if (ninliers>0)
return sqrt(RMSE/ninliers);
else
return LARGE_NUMBER;
}
// Sets inliers and returns their count
inline int setInliers3Dof(const vector<Point2f>& src, const vector <Point2f>& dst,
bool* inliers,
const float* param,
const float max_er,
const Point2f center) {
float ang_rad = param[0];
Point2f T(param[1], param[2]);
// set inliers
unsigned int ninliers = 0;
unsigned int n = src.size();
assert(n>0 && n==dst.size());
float cos_ang = cos(ang_rad);
float sin_ang = sin(ang_rad);
float max_sqErr = max_er*max_er; // comparing squared values
if (inliers==NULL) {
// just get the number of inliers (e.g. after QUADRATIC fit only)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr)
ninliers++;
}
} else {
// get the number of inliers and set them (e.g. for RANSAC)
for (unsigned int i=0; i<n; i++) {
float sqErr = sqErr_3Dof(src[i]-center, dst[i]-center, cos_ang, sin_ang, T);
if ( sqErr < max_sqErr) {
inliers[i] = 1;
ninliers++;
} else {
inliers[i] = 0;
}
}
}
return ninliers;
}
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATICold(const vector<Point2f>& src, const vector<Point2f>& dst,
float* param, const bool* inliers, const Point2f center) {
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
unsigned int n = src.size();
assert(dst.size() == n);
// count inliers
int ninliers;
if (all_inliers) {
ninliers = n;
} else {
ninliers = 0;
for (unsigned int i=0; i<n; i++){
if (inliers[i])
ninliers++;
}
}
// under-determined system
if (ninliers<2) {
// param[0] = 0.0f; // ?
// param[1] = 0.0f;
// param[2] = 0.0f;
return LARGE_NUMBER;
}
/*
* x1*cos(a)-y1*sin(a) + Tx = X1
* x1*sin(a)+y1*cos(a) + Ty = Y1
*
* approximation for small angle a (radians) sin(a)=a, cos(a)=1;
*
* x1*1 - y1*a + Tx = X1
* x1*a + y1*1 + Ty = Y1
*
* in matrix form M1*h=M2
*
* 2n x 4 4 x 1 2n x 1
*
* -y1 1 0 x1 * a = X1
* x1 0 1 y1 Tx Y1
* Ty
* 1=Z
* ----------------------------
* src1 res src2
*/
// 4 x 1
float res_ar[4]; // alpha, Tx, Ty, 1
Mat res(4, 1, CV_32F, res_ar); // 4 x 1
// 2n x 4
Mat src1(2*ninliers, 4, CV_32F); // 2n x 4
// 2n x 1
Mat src2(2*ninliers, 1, CV_32F); // 2n x 1: [X1, Y1, X2, Y2, X3, Y3]'
for (unsigned int i=0, row_cnt = 0; i<n; i++) {
// use inliers only
if (all_inliers || inliers[i]) {
float x = src[i].x - center.x;
float y = src[i].y - center.y;
// first row
// src1
float* rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = -y;
rowPtr[1] = 1.0f;
rowPtr[2] = 0.0f;
rowPtr[3] = x;
// src2
src2.at<float>(row_cnt, 0) = dst[i].x - center.x;
// second row
row_cnt++;
// src1
rowPtr = src1.ptr<float>(row_cnt);
rowPtr[0] = x;
rowPtr[1] = 0.0f;
rowPtr[2] = 1.0f;
rowPtr[3] = y;
// src2
src2.at<float>(row_cnt, 0) = dst[i].y - center.y;
row_cnt++;
}
}
cv::solve(src1, src2, res, DECOMP_SVD);
// estimators
float alpha_est;
Point2f T_est;
// original
alpha_est = res.at<float>(0, 0);
T_est = Point2f(res.at<float>(1, 0), res.at<float>(2, 0));
float Z = res.at<float>(3, 0);
if (abs(Z-1.0) > 0.1) {
//cout<<"Bad Z in fit3DOF(), Z should be close to 1.0 = "<<Z<<endl;
//return LARGE_NUMBER;
}
param[0] = alpha_est; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src, dst, param, inliers, center);
return RMSE;
} // fit3DofQUADRATICOLd()
// fits 3DOF (rotation and translation in 2D) with least squares.
float fit3DofQUADRATIC(const vector<Point2f>& src_, const vector<Point2f>& dst_,
float* param, const bool* inliers, const Point2f center) {
const bool debug = false; // print more debug info
const bool all_inliers = (inliers==NULL); // handy when we run QUADRATIC with all inliers
assert(dst_.size() == src_.size());
int N = src_.size();
// collect inliers
vector<Point2f> src, dst;
int ninliers;
if (all_inliers) {
ninliers = N;
src = src_; // copy constructor
dst = dst_;
} else {
ninliers = 0;
for (int i=0; i<N; i++){
if (inliers[i]) {
ninliers++;
src.push_back(src_[i]);
dst.push_back(dst_[i]);
}
}
}
if (ninliers<2) {
param[0] = 0.0f; // default return when there is not enough points
param[1] = 0.0f;
param[2] = 0.0f;
return LARGE_NUMBER;
}
/* Algorithm: Least-Square Rigid Motion Using SVD by Olga Sorkine
* http://igl.ethz.ch/projects/ARAP/svd_rot.pdf
*
* Subtract centroids, calculate SVD(cov),
* R = V[1, det(VU')]'U', T = mean_q-R*mean_p
*/
// Calculate data centroids
Scalar centroid_src = mean(src);
Scalar centroid_dst = mean(dst);
Point2f center_src(centroid_src[0], centroid_src[1]);
Point2f center_dst(centroid_dst[0], centroid_dst[1]);
if (debug)
cout<<"Centers: "<<center_src<<", "<<center_dst<<endl;
// subtract centroids from data
for (int i=0; i<ninliers; i++) {
src[i] -= center_src;
dst[i] -= center_dst;
}
// compute a covariance matrix
float Cxx = 0.0, Cxy = 0.0, Cyx = 0.0, Cyy = 0.0;
for (int i=0; i<ninliers; i++) {
Cxx += src[i].x*dst[i].x;
Cxy += src[i].x*dst[i].y;
Cyx += src[i].y*dst[i].x;
Cyy += src[i].y*dst[i].y;
}
Mat Mcov = (Mat_<float>(2, 2)<<Cxx, Cxy, Cyx, Cyy);
Mcov /= (ninliers-1);
if (debug)
cout<<"Covariance-like Matrix "<<Mcov<<endl;
// SVD of covariance
cv::SVD svd;
svd = SVD(Mcov, SVD::FULL_UV);
if (debug) {
cout<<"U = "<<svd.u<<endl;
cout<<"W = "<<svd.w<<endl;
cout<<"V transposed = "<<svd.vt<<endl;
}
// rotation (V*Ut)
Mat V = svd.vt.t();
Mat Ut = svd.u.t();
float det_VUt = determinant(V*Ut);
Mat W = (Mat_<float>(2, 2)<<1.0, 0.0, 0.0, det_VUt);
float rot[4];
Mat R_est(2, 2, CV_32F, rot);
R_est = V*W*Ut;
if (debug)
cout<<"Rotation matrix: "<<R_est<<endl;
float cos_est = rot[0];
float sin_est = rot[2];
float ang = atan2(sin_est, cos_est);
// translation (mean_dst - R*mean_src)
Point2f center_srcRot = Point2f(
cos_est*center_src.x - sin_est*center_src.y,
sin_est*center_src.x + cos_est*center_src.y);
Point2f T_est = center_dst - center_srcRot;
// Final estimate msg
if (debug)
cout<<"Estimate = "<<ang*RAD2DEG<<"deg., T = "<<T_est<<endl;
param[0] = ang; // rad
param[1] = T_est.x;
param[2] = T_est.y;
// calculate RMSE
float RMSE = RMSE_3Dof(src_, dst_, param, inliers, center);
return RMSE;
} // fit3DofQUADRATIC()
// RANSAC fit in 3DOF: 1D rot and 2D translation (maximizes the number of inliers)
// NOTE: no data normalization is currently performed
float fit3DofRANSAC(const vector<Point2f>& src, const vector<Point2f>& dst,
float* best_param, bool* inliers,
const Point2f center ,
const float inlierMaxEr,
const int niter) {
const int ITERATION_TO_SETTLE = 2; // iterations to settle inliers and param
const float INLIERS_RATIO_OK = 0.95f; // stopping criterion
// size of data vector
unsigned int N = src.size();
assert(N==dst.size());
// unrealistic case
if(N<2) {
best_param[0] = 0.0f; // ?
best_param[1] = 0.0f;
best_param[2] = 0.0f;
return LARGE_NUMBER;
}
unsigned int ninliers; // current number of inliers
unsigned int best_ninliers = 0; // number of inliers
float best_rmse = LARGE_NUMBER; // error
float cur_rmse; // current distance error
float param[3]; // rad, Tx, Ty
vector <Point2f> src_2pt(2), dst_2pt(2);// min set of 2 points (1 correspondence generates 2 equations)
srand (time(NULL));
// iterations
for (int iter = 0; iter<niter; iter++) {
#ifdef DEBUG_RANSAC
cout<<"iteration "<<iter<<": ";
#endif
// 1. Select a random set of 2 points (not obligatory inliers but valid)
int i1, i2;
i1 = rand() % N; // [0, N[
i2 = i1;
while (i2==i1) {
i2 = rand() % N;
}
src_2pt[0] = src[i1]; // corresponding points
src_2pt[1] = src[i2];
dst_2pt[0] = dst[i1];
dst_2pt[1] = dst[i2];
bool two_inliers[] = {true, true};
// 2. Quadratic fit for 2 points
cur_rmse = fit3DofQUADRATIC(src_2pt, dst_2pt, param, two_inliers, center);
// 3. Recalculate to settle params and inliers using a larger set
for (int iter2=0; iter2<ITERATION_TO_SETTLE; iter2++) {
ninliers = setInliers3Dof(src, dst, inliers, param, inlierMaxEr, center); // changes inliers
cur_rmse = fit3DofQUADRATIC(src, dst, param, inliers, center); // changes cur_param
}
// potential ill-condition or large error
if (ninliers<2) {
#ifdef DEBUG_RANSAC
cout<<" !!! less than 2 inliers "<<endl;
#endif
continue;
} else {
#ifdef DEBUG_RANSAC
cout<<" "<<ninliers<<" inliers; ";
#endif
}
#ifdef DEBUG_RANSAC
cout<<"; recalculate: RMSE = "<<cur_rmse<<", "<<ninliers <<" inliers";
#endif
// 4. found a better solution?
if (ninliers > best_ninliers) {
best_ninliers = ninliers;
best_param[0] = param[0];
best_param[1] = param[1];
best_param[2] = param[2];
best_rmse = cur_rmse;
#ifdef DEBUG_RANSAC
cout<<" --- Solution improved: "<<
best_param[0]<<", "<<best_param[1]<<", "<<best_param[2]<<endl;
#endif
// exit condition
float inlier_ratio = (float)best_ninliers/N;
if (inlier_ratio > INLIERS_RATIO_OK) {
#ifdef DEBUG_RANSAC
cout<<"Breaking early after "<< iter+1<<
" iterations; inlier ratio = "<<inlier_ratio<<endl;
#endif
break;
}
} else {
#ifdef DEBUG_RANSAC
cout<<endl;
#endif
}
} // iterations
// 5. recreate inliers for the best parameters
ninliers = setInliers3Dof(src, dst, inliers, best_param, inlierMaxEr, center);
return best_rmse;
} // fit3DofRANSAC()
Let me first make sure I'm interpreting your question correctly. You have two sets of 2D points, one of which contains all "good" points corresponding to some object of interest, and one of which contains those points under an affine transformation with noisy points added. Right?
If that's correct, then there is a fairly reliable and efficient way to both reject noisy points and determine the transformation between your points of interest. The algorithm that is usually used to reject noisy points ("outliers") is known as RANSAC, and the algorithm used to determine the transformation can take several forms, but the most current state of the art is known as the five-point algorithm and can be found here -- a MATLAB implementation can be found here.
Unfortunately I don't know of a mature implementation of both of those combined; you'll probably have to do some work of your own to implement RANSAC and integrate it with the five point algorithm.
Edit:
Actually, OpenCV has an implementation that is overkill for your task (meaning it will work but will take more time than necessary) but is ready to work out of the box. The function of interest is called cv::findFundamentalMat.
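A minimal sketch of that out-of-the-box call, assuming you already have putative correspondences between the two clouds (e.g. from descriptor matching); the RANSAC parameters are only placeholders:
std::vector<cv::Point2f> pts1, pts2;   // putative correspondences, same size
std::vector<uchar> inlierMask;
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC,
                                   3.0,    // max distance to the epipolar line, in pixels
                                   0.99,   // confidence
                                   inlierMask);
// inlierMask[i] != 0 marks correspondence i as an inlier, i.e. not noise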
I believe you are looking for something like David Lowe's SIFT (Scale Invariant Feature Transform). Another option is SURF (SIFT is patent protected). The OpenCV computer vision library provides a SURF implementation.
I would try to use distance geometry (http://en.wikipedia.org/wiki/Distance_geometry) for this.
Generate a scalar for each point by summing its distances to all neighbors within a certain radius. Though not perfect, this will be a good discriminator for each point.
Then put all the scalars in a map that allows a point (p) to be retrieved by its scalar (s) plus/minus some delta:
M(s + delta) = p (e.g. a k-d tree) (http://en.wikipedia.org/wiki/Kd-tree)
Put the whole reference set of 2D points in the map.
Then, on the other (test) set of 2D points:
for each test scaling (especially if you have a good idea what typical scaling values are)
    scale each point by S
    recompute the scalars of the test set of points
        for each point P in the test set (or perhaps a sample, for a faster method)
            look the point up in the reference scalar map within some delta
            discard P if no mapping is found
            else, for each point P' found
                examine the neighbors of P and see if they have corresponding scalars in the reference map within some delta (i.e. the reference point has neighbors with approximately the same value)
            if all points tested have a mapping in the reference set, you have found a mapping of test point P onto reference point P' -> record the mapping of the test point to the reference point
        discard the scaling if no mappings were recorded
Note this is trivially parallelized in several different places
This is off the top of my head, drawing from research I did years ago. It lacks fine details, but the general idea is clear: find points in the noisy (test) graph whose distances to their closest neighbors are roughly the same as in the reference set. Noisy graphs will have to measure the distances with a larger allowed error than less noisy graphs.
The algorithm works perfectly for graphs with no noise.
Edit: there is a refinement of the algorithm that doesn't require looking at different scalings. When computing the scalar for each point, use a relative distance measure instead. This will be invariant to the transform.
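To make the per-point scalar concrete, here is a small sketch of it (the radius is just an illustrative value); normalizing the result, e.g. by the distance to the nearest neighbour, would give a relative measure in the spirit of the edit above:
// scalar for each point: sum of distances to all neighbours closer than 'radius'
std::vector<float> pointScalars(const std::vector<cv::Point2f>& pts, float radius) {
    std::vector<float> s(pts.size(), 0.0f);
    for (size_t i = 0; i < pts.size(); ++i) {
        for (size_t j = 0; j < pts.size(); ++j) {
            if (i == j) continue;
            float d = (float)cv::norm(pts[i] - pts[j]);
            if (d < radius)
                s[i] += d;
        }
    }
    return s;
}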
From C++, you could use ITK to do the image registration. It includes many registration functions that will work in the presence of noise.
The KLT (Kanade-Lucas-Tomasi) Feature Tracker performs an Affine Consistency Check of tracked features. The Affine Consistency Check takes translation, rotation and scaling into account. I don't know if it is of help to you, because you can't use the function (which calculates the affine transformation of a rectangular region) directly. But maybe you can learn from the documentation and source code how the affine transformation can be calculated, and adapt it to your problem (clouds of points instead of a rectangular region).
You want the Denton-Beveridge point matching algorithm. Source code is at the bottom of the page linked below, and there is also a paper that explains the algorithm and why RANSAC is a bad choice for this problem.
http://jasondenton.me/pntmatch.html