opencv euclidean clustering vs findContours - c++

I have the following image mask:
I want to apply something similar to cv::findContours, but that algorithm only joins points that are actually connected into the same groups. I want to do this with some tolerance, i.e., I want to group pixels that lie near each other within a given radius tolerance: this is similar to Euclidean-distance hierarchical clustering.
Is this implemented in OpenCV? Or is there any fast approach for implementing this?
What I want is something similar to this,
http://www.pointclouds.org/documentation/tutorials/cluster_extraction.php
applied to the white pixels of this mask.
Thank you.

You can use partition for this:
partition splits an element set into equivalence classes. You can define your equivalence class as all points within a given Euclidean distance (the radius tolerance) of each other.
If you have C++11, you can simply use a lambda function:
int th_distance = 18; // radius tolerance
int th2 = th_distance * th_distance; // squared radius tolerance
vector<int> labels;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
otherwise, you can just build a functor (see details in the code below).
With an appropriate radius tolerance (I found 18 works well on this image), I got:
Full code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
struct EuclideanDistanceFunctor
{
int _dist2;
EuclideanDistanceFunctor(int dist) : _dist2(dist*dist) {}
bool operator()(const Point& lhs, const Point& rhs) const
{
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < _dist2;
}
};
int main()
{
// Load the image (grayscale)
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Get all non black points
vector<Point> pts;
findNonZero(img, pts);
// Define the radius tolerance
int th_distance = 18; // radius tolerance
// Apply partition
// All pixels within the radius tolerance distance will belong to the same class (same label)
vector<int> labels;
// With functor
//int n_labels = partition(pts, labels, EuclideanDistanceFunctor(th_distance));
// With lambda function (requires C++11)
int th2 = th_distance * th_distance;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
// You can save all points in the same class in a vector (one for each class), just like findContours
vector<vector<Point>> contours(n_labels);
for (int i = 0; i < pts.size(); ++i)
{
contours[labels[i]].push_back(pts[i]);
}
// Draw results
// Build a vector of random color, one for each class (label)
vector<Vec3b> colors;
for (int i = 0; i < n_labels; ++i)
{
colors.push_back(Vec3b(rand() & 255, rand() & 255, rand() & 255));
}
// Draw the labels
Mat3b lbl(img.rows, img.cols, Vec3b(0, 0, 0));
for (int i = 0; i < pts.size(); ++i)
{
lbl(pts[i]) = colors[labels[i]];
}
imshow("Labels", lbl);
waitKey();
return 0;
}

I suggest using the DBSCAN algorithm. It is exactly what you are looking for. Use a simple Euclidean distance, or even Manhattan distance may work better.
The input is all the white points (thresholded). The output is a set of groups of points (your connected components).
Here is a DBSCAN C++ implementation
EDIT:
I tried DBSCAN myself and here is the result:
As you can see, only the really connected points are considered as one cluster.
This result was obtained using the standard DBSCAN algorithm with EPS=3 (static, no need to tune it), MinPoints=1 (also static) and Manhattan distance.
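For reference, here is a minimal, unoptimized sketch of that setup (brute-force neighbour search, Manhattan distance, MinPoints=1 as above; with MinPoints=1 every point is a core point, so DBSCAN reduces to chaining points that lie within EPS of each other). The linked implementation is more complete; function and variable names here are only illustrative.
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <queue>
#include <vector>
// Minimal DBSCAN-style clustering of the white pixels of a mask.
// MinPoints = 1, Manhattan distance, brute-force O(N^2) neighbour search.
std::vector<std::vector<cv::Point>> dbscanMask(const cv::Mat1b& mask, int eps)
{
    std::vector<cv::Point> pts;
    cv::findNonZero(mask, pts);
    std::vector<int> label(pts.size(), -1);              // -1 = not assigned yet
    std::vector<std::vector<cv::Point>> clusters;
    for (size_t i = 0; i < pts.size(); ++i)
    {
        if (label[i] != -1) continue;                    // already in a cluster
        int id = static_cast<int>(clusters.size());
        clusters.emplace_back();
        std::queue<size_t> frontier;
        frontier.push(i);
        label[i] = id;
        while (!frontier.empty())                        // region growing
        {
            size_t cur = frontier.front(); frontier.pop();
            clusters[id].push_back(pts[cur]);
            for (size_t j = 0; j < pts.size(); ++j)
            {
                if (label[j] != -1) continue;
                int d = std::abs(pts[cur].x - pts[j].x) + std::abs(pts[cur].y - pts[j].y);
                if (d <= eps) { label[j] = id; frontier.push(j); }
            }
        }
    }
    return clusters;
}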

Related

How to merge bounding boxes with groupRectangle?

I have an image with bounding boxes like so:
I want to merge overlapping bounding boxes.
I tried: cv::groupRectangles(detected, 1, 0.8)
My expectation was that I get a single box for each cluster.
But I got this:
As you can see, the problem is, there is no box for the dartboard in the middle and for the right one.
How do I resolve this? I would preferably like to use the OpenCV api rather than coding my own merging algorithm.
I see that it eliminates regions bounded by exactly one box. I want it to not do that.
I have tried tweaking the parameters randomly but I've gotten much worse results. I would love some guidance in the right direction.
How to define overlapping rectangles?
We need a way to define when two rectangles overlap. We can use the & intersection operator to find the intersection of the two rectangles, and check that it's not empty:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs) {
return (lhs & rhs).area() > 0;
}
If we want to ignore small intersections, we can use a threshold over the intersection area:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, int th) {
return (lhs & rhs).area() > th;
}
But now the threshold depends on the dimensions of the rectangles. We can use the "Intersection over Union" metric (IoU) which is in the range [0, 1], and apply a threshold in that interval.
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, double th) {
double i = static_cast<double>((lhs & rhs).area());
double u = static_cast<double>((lhs | rhs).area());
double iou = i / u;
return iou > th;
}
This works well in general, but may show unexpected results if the two rectangles have a very different size. Another approach could be to check if the first rectangle intersects with the second one for most of its area, and vice versa:
bool overlap(const cv::Rect& lhs, const cv::Rect& rhs, double th) {
double i = static_cast<double>((lhs & rhs).area());
double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
}
Ok, now we have a few ways to define when two rectangles overlap. Pick one.
How to find overlapping rectangles?
We can cluster the rectangles with cv::partition, using a predicate that puts overlapping rectangles in the same cluster. This will put into the same cluster even two rectangles that do not directly overlap each other, but are linked by one or more overlapping rectangles. The output of this function is a vector of clusters, where each cluster consists of a vector of rectangles:
std::vector<std::vector<cv::Rect>> cluster_rects(const std::vector<cv::Rect>& rects, const double th)
{
std::vector<int> labels;
int n_labels = cv::partition(rects, labels, [th](const cv::Rect& lhs, const cv::Rect& rhs) {
double i = static_cast<double>((lhs & rhs).area());
double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
});
std::vector<std::vector<cv::Rect>> clusters(n_labels);
for (size_t i = 0; i < rects.size(); ++i) {
clusters[labels[i]].push_back(rects[i]);
}
return clusters;
}
For example, from the rectangles in this image:
we obtain these clusters (with a threshold of 0.2). Note that:
in the top-left cluster the three rectangles do not overlap with each other;
the rectangle on the top right is in its own cluster, because it doesn't intersect enough with the other rectangles.
How to find a rectangle that represents a cluster?
Well, that's really application dependent. It can be the union of all rectangles:
cv::Rect union_of_rects(const std::vector<cv::Rect>& cluster)
{
cv::Rect one;
if (!cluster.empty())
{
one = cluster[0];
for (const auto& r : cluster) { one |= r; }
}
return one;
}
Or it can be the maximum inscribed rectangle (code below):
Or something else. For example, if you have a score associated with each rectangle (e.g. it's a detection with a confidence), you can sort each cluster by score and take only the first one. This is an example of non-maximum suppression (NMS): you keep only the highest-scoring rectangle for each cluster (a minimal sketch of this follows below).
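For illustration only (this part is not in the working code below), a minimal sketch of that score-based selection, assuming a hypothetical ScoredRect pairing of rectangle and confidence:
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>
// Hypothetical pairing of a rectangle with its detection score.
struct ScoredRect {
    cv::Rect rect;
    double score;
};
// Keep only the highest-scoring rectangle of a cluster (simple NMS-style pick).
cv::Rect best_of_cluster(const std::vector<ScoredRect>& cluster)
{
    cv::Rect best;
    double best_score = -std::numeric_limits<double>::infinity();
    for (const auto& sr : cluster) {
        if (sr.score > best_score) {
            best_score = sr.score;
            best = sr.rect;
        }
    }
    return best;
}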
Pick one.
Below is the working code I used for creating these images. Please play with it :)
#include <opencv2/opencv.hpp>
std::vector<cv::Rect> create_some_rects()
{
std::vector<cv::Rect> rects
{
{20, 20, 20, 40},
{30, 40, 40, 40},
{50, 46, 30, 40},
{100, 120, 30, 40},
{110, 130, 36, 20},
{104, 124, 50, 30},
{200, 80, 40, 50},
{220, 90, 50, 30},
{240, 84, 30, 70},
{260, 60, 20, 30},
};
return rects;
}
void draw_rects(cv::Mat3b& img, const std::vector<cv::Rect>& rects)
{
for (const auto& r : rects) {
cv::Scalar random_color(rand() & 255, rand() & 255, rand() & 255);
cv::rectangle(img, r, random_color);
}
}
void draw_rects(cv::Mat3b& img, const std::vector<cv::Rect>& rects, const cv::Scalar& color)
{
for (const auto& r : rects) {
cv::rectangle(img, r, color);
}
}
void draw_clusters(cv::Mat3b& img, const std::vector<std::vector<cv::Rect>>& clusters)
{
for (const auto& cluster : clusters) {
cv::Scalar random_color(rand() & 255, rand() & 255, rand() & 255);
draw_rects(img, cluster, random_color);
}
}
std::vector<std::vector<cv::Rect>> cluster_rects(const std::vector<cv::Rect>& rects, const double th)
{
std::vector<int> labels;
int n_labels = cv::partition(rects, labels, [th](const cv::Rect& lhs, const cv::Rect& rhs) {
double i = static_cast<double>((lhs & rhs).area());
double ratio_intersection_over_lhs_area = i / static_cast<double>(lhs.area());
double ratio_intersection_over_rhs_area = i / static_cast<double>(rhs.area());
return (ratio_intersection_over_lhs_area > th) || (ratio_intersection_over_rhs_area > th);
});
std::vector<std::vector<cv::Rect>> clusters(n_labels);
for (size_t i = 0; i < rects.size(); ++i) {
clusters[labels[i]].push_back(rects[i]);
}
return clusters;
}
cv::Rect union_of_rects(const std::vector<cv::Rect>& cluster)
{
cv::Rect one;
if (!cluster.empty())
{
one = cluster[0];
for (const auto& r : cluster) { one |= r; }
}
return one;
}
// https://stackoverflow.com/a/30418912/5008845
// https://stackoverflow.com/a/34905215/5008845
cv::Rect findMaxRect(const cv::Mat1b& src)
{
cv::Mat1f W(src.rows, src.cols, float(0));
cv::Mat1f H(src.rows, src.cols, float(0));
cv::Rect maxRect(0, 0, 0, 0);
float maxArea = 0.f;
for (int r = 0; r < src.rows; ++r)
{
for (int c = 0; c < src.cols; ++c)
{
if (src(r, c) == 0)
{
H(r, c) = 1.f + ((r > 0) ? H(r - 1, c) : 0);
W(r, c) = 1.f + ((c > 0) ? W(r, c - 1) : 0);
}
float minw = W(r, c);
for (int h = 0; h < H(r, c); ++h)
{
minw = std::min(minw, W(r - h, c));
float area = (h + 1) * minw;
if (area > maxArea)
{
maxArea = area;
maxRect = cv::Rect(cv::Point(c - minw + 1, r - h), cv::Point(c + 1, r + 1));
}
}
}
}
return maxRect;
}
cv::Rect largest_inscribed_of_rects(const std::vector<cv::Rect>& cluster)
{
cv::Rect roi = union_of_rects(cluster);
cv::Mat1b mask(roi.height, roi.width, uchar(255));
for (const auto& r : cluster) {
cv::rectangle(mask, r - roi.tl(), cv::Scalar(0), cv::FILLED);
}
cv::Rect largest_rect = findMaxRect(mask);
largest_rect += roi.tl();
return largest_rect;
}
std::vector<cv::Rect> find_one_for_cluster(const std::vector<std::vector<cv::Rect>>& clusters)
{
std::vector<cv::Rect> one_for_cluster;
for (const auto& cluster : clusters) {
//cv::Rect one = union_of_rects(cluster);
cv::Rect one = largest_inscribed_of_rects(cluster);
one_for_cluster.push_back(one);
}
return one_for_cluster;
}
int main()
{
cv::Mat3b img(200, 300, cv::Vec3b(0, 0, 0));
std::vector<cv::Rect> rects = create_some_rects();
cv::Mat3b initial_rects_img = img.clone();
draw_rects(initial_rects_img, rects, cv::Scalar(127, 127, 127));
std::vector<std::vector<cv::Rect>> clusters = cluster_rects(rects, 0.2);
cv::Mat3b clustered_rects_img = initial_rects_img.clone();
draw_clusters(clustered_rects_img, clusters);
std::vector<cv::Rect> single_rects = find_one_for_cluster(clusters);
cv::Mat3b single_rects_img = initial_rects_img.clone();
draw_rects(single_rects_img, single_rects);
return 0;
}
Unfortunately, you cannot fine-tune groupRectangles(). The second parameter for your example should be 0, though: with 1, any rectangle that is not grouped with at least one other rectangle gets discarded.
You could first grow small rectangles and stay with a conservative threshold parameter if you want a better clustering of the small ones. Not an optimal solution though.
If you want to cluster based on an overlap condition, I would suggest writing your own simple algorithm for that. groupRectangles() simply does not do that. It finds rectangles similar in size and position; it does not accumulate rectangles that form a cluster.
You could fill a mask cv::Mat1b mask(image.size(), uchar(0)); with the rectangles and then use cv::connectedComponents() to find merged regions. Note that filling is trivial: loop over all rectangles and call mask(rect).setTo(255);. If the overlap is not always reliable, you could use cv::dilate() to grow rectangles in the mask before the connected-components step.
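A rough sketch of that mask-based idea, assuming OpenCV 3+ (the connectedComponentsWithStats variant is used so the merged bounding boxes come out directly; image_size and dilate_px are illustrative parameters):
#include <opencv2/opencv.hpp>
#include <vector>
// Paint the rectangles into a binary mask; overlapping/touching rectangles merge
// into one blob, and each blob's bounding box becomes the merged rectangle.
std::vector<cv::Rect> merge_rects_via_mask(const std::vector<cv::Rect>& rects,
                                           cv::Size image_size,
                                           int dilate_px = 0)   // optional growth
{
    cv::Mat1b mask(image_size, uchar(0));
    for (const auto& r : rects) {
        mask(r & cv::Rect(cv::Point(0, 0), image_size)).setTo(255);  // clip to image
    }
    if (dilate_px > 0) {
        cv::dilate(mask, mask, cv::Mat(), cv::Point(-1, -1), dilate_px);
    }
    cv::Mat1i labels;
    cv::Mat stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    std::vector<cv::Rect> merged;
    for (int i = 1; i < n; ++i) {   // label 0 is the background
        merged.emplace_back(stats.at<int>(i, cv::CC_STAT_LEFT),
                            stats.at<int>(i, cv::CC_STAT_TOP),
                            stats.at<int>(i, cv::CC_STAT_WIDTH),
                            stats.at<int>(i, cv::CC_STAT_HEIGHT));
    }
    return merged;
}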
You could test all rectangles for overlaps and associate them accordingly. For a huge number of rectangles, I suggest a disjoint-set/union-find data structure for efficiency.
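And a compact sketch of that disjoint-set idea; it is essentially what cv::partition does for you, and the pairwise overlap test here is still quadratic unless you add a spatial index:
#include <opencv2/opencv.hpp>
#include <numeric>
#include <vector>
// Minimal union-find structure with path compression.
struct DisjointSet {
    std::vector<int> parent;
    explicit DisjointSet(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};
// Assign a cluster label (representative index) to each rectangle based on overlap.
std::vector<int> label_overlapping(const std::vector<cv::Rect>& rects)
{
    DisjointSet ds(static_cast<int>(rects.size()));
    for (size_t i = 0; i < rects.size(); ++i)
        for (size_t j = i + 1; j < rects.size(); ++j)
            if ((rects[i] & rects[j]).area() > 0)       // overlap test
                ds.unite(static_cast<int>(i), static_cast<int>(j));
    std::vector<int> labels(rects.size());
    for (size_t i = 0; i < rects.size(); ++i)
        labels[i] = ds.find(static_cast<int>(i));
    return labels;
}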

How to use ceres::evaluation_callbacks for inner iteration of ceres::cost functions

I am calculating a virtual image based on a paper and then optimizing my parameters, which are focal length, rotation and translation. For that reason I am creating the cost function by traversing all the pixels between the real image and the virtual image. In my Ceres cost functions I basically subtract the normalized virtual image from the normalized real image. The virtual image is calculated in the evaluation_callback functor and the cost is calculated in the cost function functor. The problem stems from the cost functor: optimization terminates at the first iteration because the gradient equals 0. I am using ceres::CENTRAL for gradient calculation, but the virtual image creator functor is called only once per iteration. However, I need that functor to be called for f(x+h) and f(x-h) separately. When I calculate the normalized real image and normalized virtual image over 9 neighbours, the iterations continue, but every iteration takes 25 seconds, which is not acceptable for my case. I need this evaluation_callback function, but I could not make it work.
I looked at the evaluation_callbacks definition; it says: "NOTE: Evaluation callbacks are incompatible with inner iterations."
struct RcpAndFpOptimizer {
RcpAndFpOptimizer(cv::Mat &V, const cv::Mat I, int i,int j,double width, double height) : V_(V), I_(I), i_(i),
j_(j), width_(width), height_(height){}
bool operator()(const double* const fp, const double* const rotation, const double* const translation, double* residuals) const {
double intensity = V_.at<double>(j_, i_);
double tmp = (double)I_.at<double>(j_,i_)-(double)intensity;
residuals[0] = tmp;
//std::cout<<"pixels(i,j): "<<i_<<" "<<j_<<" residual: "<<residuals[0]<<std::endl;
return true;
}
const cv::Mat S_;
cv::Mat& V_;
const cv::Mat I_;
const int i_,j_;
double width_, height_;
};
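// Note: the following snippet is a member function of the evaluation-callback
// struct (evaluation_callback_functor); the struct declaration is omitted in the question.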
virtual void PrepareForEvaluation(bool evaluateJacobians, bool newEvaluationPoint)
{
if(evaluateJacobians){
std::cout<<"evaluation jacobian is called"<<std::endl;
}
if (newEvaluationPoint)
{
// do your stuff here, e.g. calculate integral image
//Mat V(height_, width_, CV_8UC1);
std::cout<<"preperation is called"<<std::endl;
Intrinsic<double> intrinsicC = INTRINSIC_CAMERA;
Intrinsic<double> intrinsicP= {(double)fP_[0],(double)fP_[0], double(width_/2), double(height_/2), 0, 0};
//Convertion of array to point3d
Point3d bDist = Point3d(translation_[0],translation_[1], translation_[2]);
//Convertion euler array to rotation matrix
const Mat eulerAngles = (cv::Mat_<double>(3,1) << rotArray_[0], rotArray_[1], rotArray_[2]);
Mat rotM = rcpFinder::euler2rot(eulerAngles);
Mat tempVImg(height_, width_, CV_8UC1);
for (int i = 0; i < width_; ++i) {
for (int j = 0; j < height_ ; ++j) {
//std::cout<<"Virtual current x and y pixels: "<<i<<" "<<j<<std::endl;
Point3d unprojPRay = rcpFinder::unprojectPoints(Point2i(i,j),intrinsicC);
//Assigning the intensity from images
tempVImg.at<uchar>(j, i)= rcpFinder::genVirtualImg(S_, intrinsicP, bDist, unprojPRay,
planeNormalAndDistance_, rotM);
auto pixelIntensity = tempVImg.at<uchar>(Point(j, i));
//std::cout<<"pixel intensity "<< pixelIntensity<<std::endl;
}
}
//imshow("Virtual", tempVImg);
Mat integralV;
cv::integral(tempVImg, integralV);
//std::cout<<"integral image type is "<<integralV.type()<<std::endl;
rcpFinder::normalizePixelsImg(tempVImg, integralV, V_);
/*imshow("Normalized Img", V_);
waitKey(0);*/
}
}
// stuff here
const cv::Mat S_;
cv::Mat& V_;
int width_, height_;
map<int, vector<Point3d>> planeNormalAndDistance_;
double *translation_;
double* rotArray_;
double* fP_;
};
//Calling the functors looks like the following
cv::Mat integralImgI;
cv::integral(im1, integralImgI);
cv::Mat normalizedRealImg;
rcpFinder::normalizePixelsImg(im1, integralImgI, normalizedRealImg);
Mat normalizedVirtualImg;
//ceres::CostFunction* total_cost_function = 0;
for (int i = 1; i < width-1; ++i) {
for (int j = 1; j < height-1 ; ++j) {
ceres::CostFunction* cost_function =
new ceres::NumericDiffCostFunction<RcpAndFpOptimizer, ceres::CENTRAL, 1, 1, 3, 3>(
new RcpAndFpOptimizer(normalizedVirtualImg, normalizedRealImg, i, j, width, height));
problem.AddResidualBlock(cost_function, NULL, fp, rotationArray, translation);
}
}
ceres::Solver::Options options;
options.minimizer_progress_to_stdout = true;
options.max_num_iterations = 50;
options.update_state_every_iteration = true;
options.evaluation_callback = (new evaluation_callback_functor(S, normalizedVirtualImg,width, height,
mapNormalAndDist, translation,rotationArray, fp));
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
I expected the Ceres solver to run more than one iteration at least, and the gradient should start from some value and decrease with each iteration.
I normalized the pixels with 9 neighbours. The current solution I have found calculates just 9 pixels of the virtual image in the cost functor and uses them for one-pixel normalization, but that is too slow. I have 640x480 pixels and 9 calculations for every pixel. Plus, the Jacobian and gradient calculation in NumericDiffCostFunction is too much. That's why I want to calculate the virtual image in the evaluation_callback functor, normalize it inside that function, and use the normalized image in the cost functor.
Thank you for your help.
You cannot use evaluation callbacks together with inner iterations:
https://groups.google.com/forum/#!topic/ceres-solver/zjQLIaSuAdQ
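As a minimal sketch (reusing the variables from the question and assuming a Ceres version where the callback is passed through Solver::Options, as in the posted code), keep inner iterations disabled when an evaluation callback is set:
// Sketch only: evaluation callbacks and inner iterations cannot be combined.
ceres::Solver::Options options;
options.update_state_every_iteration = true;   // as in the question's setup
options.use_inner_iterations = false;          // must remain false with a callback
options.evaluation_callback = new evaluation_callback_functor(
    S, normalizedVirtualImg, width, height, mapNormalAndDist,
    translation, rotationArray, fp);           // same callback object as in the question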

How to find euclidean distance between keypoints of a single image in opencv

I want to get a distance vector d for each key point in the image. The distance vector should consist of distances from that keypoint to all other keypoints in that image.
Note: Keypoints are found using SIFT.
I'm pretty new to OpenCV. Is there a library function in C++ that can make my task easy?
If you aren't interested in the position distance but rather the descriptor distance, you can use this:
cv::Mat SelfDescriptorDistances(cv::Mat descr)
{
cv::Mat selfDistances = cv::Mat::zeros(descr.rows,descr.rows, CV_64FC1);
for(int keyptNr = 0; keyptNr < descr.rows; ++keyptNr)
{
for(int keyptNr2 = 0; keyptNr2 < descr.rows; ++keyptNr2)
{
double euclideanDistance = 0;
for(int descrDim = 0; descrDim < descr.cols; ++descrDim)
{
double tmp = descr.at<float>(keyptNr,descrDim) - descr.at<float>(keyptNr2, descrDim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
selfDistances.at<double>(keyptNr, keyptNr2) = euclideanDistance;
}
}
return selfDistances;
}
which will give you an N x N matrix (N = number of keypoints) where Mat(i,j) = the Euclidean distance between keypoints i and j.
with this input:
I get these outputs:
an image where the keypoints that have a distance of less than 0.05 are marked
an image that corresponds to the matrix; white pixels are dist < 0.05.
REMARK: you can optimize many things in the computation of the matrix, since distances are symmetric!
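For instance, here is a sketch of one such optimization: compute each distance once for the upper triangle and mirror it, halving the work (the function name is illustrative):
#include <opencv2/opencv.hpp>
#include <cmath>
// Same N x N self-distance matrix as above, but each pair (i, j) is computed only once.
cv::Mat SelfDescriptorDistancesSymmetric(const cv::Mat& descr)
{
    cv::Mat dists = cv::Mat::zeros(descr.rows, descr.rows, CV_64FC1);
    for (int i = 0; i < descr.rows; ++i)
    {
        for (int j = i + 1; j < descr.rows; ++j)        // upper triangle only
        {
            double d2 = 0;
            for (int dim = 0; dim < descr.cols; ++dim)
            {
                double tmp = descr.at<float>(i, dim) - descr.at<float>(j, dim);
                d2 += tmp * tmp;
            }
            double d = std::sqrt(d2);
            dists.at<double>(i, j) = d;
            dists.at<double>(j, i) = d;                 // mirror: distance is symmetric
        }
    }
    return dists;
}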
UPDATE:
Here is another way to do it:
From your chat I know that you would need 13 GB of memory to hold that distance information for 41381 keypoints (which you tried). If you instead want only the N best matches, try this code:
// choose double here if you are worried about precision!
#define intermediatePrecision float
//#define intermediatePrecision double
//
void NBestMatches(cv::Mat descriptors1, cv::Mat descriptors2, unsigned int n, std::vector<std::vector<float> > & distances, std::vector<std::vector<int> > & indices)
{
// TODO: check whether descriptor dimensions and types are the same for both!
// clear vector
// get enough space to create n best matches
distances.clear();
distances.resize(descriptors1.rows);
indices.clear();
indices.resize(descriptors1.rows);
for(int i=0; i<descriptors1.rows; ++i)
{
// references to current elements:
std::vector<float> & cDistances = distances.at(i);
std::vector<int> & cIndices = indices.at(i);
// initialize:
cDistances.resize(n,FLT_MAX);
cIndices.resize(n,-1); // for -1 = "no match found"
// now find the n best matches for descriptor i:
for(int j=0; j<descriptors2.rows; ++j)
{
intermediatePrecision euclideanDistance = 0;
for( int dim = 0; dim < descriptors1.cols; ++dim)
{
intermediatePrecision tmp = descriptors1.at<float>(i,dim) - descriptors2.at<float>(j, dim);
euclideanDistance += tmp*tmp;
}
euclideanDistance = sqrt(euclideanDistance);
float tmpCurrentDist = euclideanDistance;
int tmpCurrentIndex = j;
// update current best n matches:
for(unsigned int k=0; k<n; ++k)
{
if(tmpCurrentDist < cDistances.at(k))
{
int tmpI2 = cIndices.at(k);
float tmpD2 = cDistances.at(k);
// update current k-th best match
cDistances.at(k) = tmpCurrentDist;
cIndices.at(k) = tmpCurrentIndex;
// previous k-th best should be better than k+1-th best //TODO: a simple memcpy would be faster I guess.
tmpCurrentDist = tmpD2;
tmpCurrentIndex =tmpI2;
}
}
}
}
}
It computes the N best matches from each keypoint of the first descriptor set to the second descriptor set. So if you want to do that for the same keypoints, you'll set descriptors1 = descriptors2 in your call as shown below. Remember: the function doesn't know that both descriptor sets are identical, so the first best match (or at least one of them) will always be the keypoint itself with distance 0! Keep that in mind when using the results!
Here's sample code to generate an image similar to the one above:
int main()
{
cv::Mat input = cv::imread("../inputData/MultiLena.png");
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGR2GRAY);
cv::SiftFeatureDetector detector( 7500 );
cv::SiftDescriptorExtractor describer;
std::vector<cv::KeyPoint> keypoints;
detector.detect( gray, keypoints );
// draw keypoints
cv::drawKeypoints(input,keypoints,input);
cv::Mat descriptors;
describer.compute(gray, keypoints, descriptors);
int n = 4;
std::vector<std::vector<float> > dists;
std::vector<std::vector<int> > indices;
// compute the N best matches between the descriptors and themselves.
// REMIND: ONE best match will always be the keypoint itself in this setting!
NBestMatches(descriptors, descriptors, n, dists, indices);
for(unsigned int i=0; i<dists.size(); ++i)
{
for(unsigned int j=0; j<dists.at(i).size(); ++j)
{
if(dists.at(i).at(j) < 0.05)
cv::line(input, keypoints[i].pt, keypoints[indices.at(i).at(j)].pt, cv::Scalar(255,255,255) );
}
}
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
Create a 2D vector (whose size would be N x N) -->
std::vector< std::vector< float > > item;
Create two for loops that run up to the number of keypoints (N) you have
Calculate the distance as suggested by a-Jays:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
Add this to the vector using push_back for each keypoint --> N times.
The keypoint class has a member called pt which in turn has x and y [the (x,y) location of the point] as its own members.
Given two keypoints kp1 and kp2, it's then easy to calculate the euclidean distance as:
Point diff = kp1.pt - kp2.pt;
float dist = std::sqrt( diff.x * diff.x + diff.y * diff.y );
In your case, it is going to be a double loop iterating over all the keypoints, as sketched below.
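A minimal sketch of that double loop (the function name is illustrative):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>
// For each keypoint, build the vector of Euclidean distances from its pt member
// to every other keypoint's pt. The result is an N x N table of floats.
std::vector<std::vector<float>> keypointDistances(const std::vector<cv::KeyPoint>& kps)
{
    std::vector<std::vector<float>> dists(kps.size(), std::vector<float>(kps.size(), 0.f));
    for (size_t i = 0; i < kps.size(); ++i)
    {
        for (size_t j = 0; j < kps.size(); ++j)
        {
            cv::Point2f diff = kps[i].pt - kps[j].pt;
            dists[i][j] = std::sqrt(diff.x * diff.x + diff.y * diff.y);
        }
    }
    return dists;
}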

sorting points: concave polygon

I have a set of points that I'm trying to sort in CCW or CW order by their angle. I want the points to be sorted in a way that they could form a polygon with no splits in its region and no intersections. This is difficult because in most cases it would be a concave polygon.
point centroid;
int main( int argc, char** argv )
{
// I read a set of points into a struct point array: points[n]
// Find centroid
double sx = 0; double sy = 0;
for (int i = 0; i < n; i++)
{
sx += points[i].x;
sy += points[i].y;
}
centroid.x = sx/n;
centroid.y = sy/n;
// sort points using in polar order using centroid as reference
std::qsort(&points, n, sizeof(point), polarOrder);
}
// -1 ccw, 1 cw, 0 collinear
int orientation(point a, point b, point c)
{
double area2 = (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x);
if (area2 < 0) return -1;
else if (area2 > 0) return +1;
else return 0;
}
// compare other points relative to polar angle they make with this point
// (where the polar angle is between 0 and 2pi)
int polarOrder(const void *vp1, const void *vp2)
{
point *p1 = (point *)vp1;
point *p2 = (point *)vp2;
// translation
double dx1 = p1->x - centroid.x;
double dy1 = p1->y - centroid.y;
double dx2 = p2->x - centroid.x;
double dy2 = p2->y - centroid.y;
if (dy1 >= 0 && dy2 < 0) { return -1; } // p1 above and p2 below
else if (dy2 >= 0 && dy1 < 0) { return 1; } // p1 below and p2 above
else if (dy1 == 0 && dy2 ==0) { // 3-collinear and horizontal
if (dx1 >= 0 && dx2 < 0) { return -1; }
else if (dx2 >= 0 && dx1 < 0) { return 1; }
else { return 0; }
}
else return -orientation(centroid,*p1,*p2); // both above or below
}
It looks like the points are sorted accurately (pink) until they "cave" in, in which case the algorithm skips over these points and then continues. Can anyone point me in the right direction to sort the points so that they form the polygon I'm looking for?
Raw Point Plot - Blue, Pink Points - Sorted
Point List: http://pastebin.com/N0Wdn2sm (You can ignore the 3rd component, since all these points lie on the same plane.)
The code below (sorry it's C rather than C++) sorts correctly as you wish with atan2.
The problem with your code may be that it attempts to use the included angle between the two vectors being compared. This is doomed to fail: the array is not circular; it has a first and a final element. Sorting an array requires a total polar order with respect to the centroid, i.e. a range of angles such that each point corresponds to a unique angle regardless of the other point. The angles themselves provide that total order, and comparing them as scalars gives the sort comparison function.
In this manner, the algorithm you proposed is guaranteed to produce a star-shaped polyline. It may oscillate wildly between different radii (...which your data do! Is this what you meant by "caved in"? If so, it's a feature of your algorithm and data, not an implementation error), and points corresponding to exactly the same angle might produce edges that coincide (lie directly on top of each other), but the edges won't cross.
I believe that your choice of centroid as the polar origin is sufficient to guarantee that connecting the ends of the polyline generated as above will produce a full star-shaped polygon, however, I don't have a proof.
Result plotted with Excel
Note you can guess from the nearly radial edges where the centroid is! This is the "star shape" I referred to above.
To illustrate this is really a star-shaped polygon, here is a zoom in to the confusing lower left corner:
If you want a polygon that is "nicer" in some sense, you will need a fancier (probably much fancier) algorithm, e.g. the Delaunay triangulation-based ones others have referred to.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
struct point {
double x, y;
};
void print(FILE *f, struct point *p) {
fprintf(f, "%f,%f\n", p->x, p->y);
}
// Return polar angle of p with respect to origin o
double to_angle(const struct point *p, const struct point *o) {
return atan2(p->y - o->y, p->x - o->x);
}
void find_centroid(struct point *c, struct point *pts, int n_pts) {
double x = 0, y = 0;
for (int i = 0; i < n_pts; i++) {
x += pts[i].x;
y += pts[i].y;
}
c->x = x / n_pts;
c->y = y / n_pts;
}
static struct point centroid[1];
int by_polar_angle(const void *va, const void *vb) {
double theta_a = to_angle(va, centroid);
double theta_b = to_angle(vb, centroid);
return theta_a < theta_b ? -1 : theta_a > theta_b ? 1 : 0;
}
void sort_by_polar_angle(struct point *pts, int n_pts) {
find_centroid(centroid, pts, n_pts);
qsort(pts, n_pts, sizeof pts[0], by_polar_angle);
}
int main(void) {
FILE *f = fopen("data.txt", "r");
if (!f) return 1;
struct point pts[10000];
int n_pts, n_read;
for (n_pts = 0;
(n_read = fscanf(f, "%lf%lf%*f", &pts[n_pts].x, &pts[n_pts].y)) != EOF;
++n_pts)
if (n_read != 2) return 2;
fclose(f);
sort_by_polar_angle(pts, n_pts);
for (int i = 0; i < n_pts; i++)
print(stdout, pts + i);
return 0;
}
Well, first and foremost, I see centroid declared as a local variable in main. Yet inside polarOrder you are also accessing some centroid variable.
Judging by the code you posted, that second centroid is a file-scope variable that you never initialized to any specific value. Hence the meaningless results from your comparison function.
The second strange detail in your code is that you do return -orientation(centroid,*p1,*p2) if both points are above or below. Since orientation returns -1 for CCW and +1 for CW, it should be just return orientation(centroid,*p1,*p2). Why did you feel the need to negate the result of orientation?
Your original points don't appear to form a convex polygon, so simply ordering them by angle around a fixed centroid will not necessarily result in a clean polygon. This is a non-trivial problem; you may want to research Delaunay triangulation and/or gift-wrapping algorithms, although both would have to be modified because your polygon is concave. The answer here is an interesting example of a modified gift-wrapping algorithm for concave polygons. There is also a C++ library called PCL that may do what you need.
But...if you really do want to do a polar sort, your sorting functions seem more complex than necessary. I would sort using atan2 first, then optimize it later once you get the result you want if necessary. Here is an example using lambda functions:
#include <algorithm>
#include <math.h>
#include <vector>
int main()
{
struct point
{
double x;
double y;
};
std::vector< point > points;
point centroid;
// fill in your data...
auto sort_predicate = [&centroid] (const point& a, const point& b) -> bool {
return atan2 (a.x - centroid.x, a.y - centroid.y) <
atan2 (b.x - centroid.x, b.y - centroid.y);
};
std::sort (points.begin(), points.end(), sort_predicate);
}

OpenCV 2 Centroid

I am trying to find the centroid of a contour but am having trouble implementing the example code in C++ (OpenCV 2.3.1). Can anyone help me out?
To find the centroid of a contour, you can use the method of moments, and the required functions are implemented in OpenCV.
Check out the moments function (central and spatial moments).
The code below is taken from the OpenCV 2.3 docs tutorial. Full code here.
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Get the moments
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
/// Get the mass centers:
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Also check out this SO post; although it is in Python, it would be useful. It finds all the parameters of a contour.
If you have the mask of the contour area, you can find the centroid location as follows:
cv::Point computeCentroid(const cv::Mat &mask) {
cv::Moments m = moments(mask, true);
cv::Point center(m.m10/m.m00, m.m01/m.m00);
return center;
}
This approach is useful when one has the mask but not the contour. In that case the above method is computationally more efficient than using cv::findContours(...) and then finding the mass center.
Here's the source
Given the contour points, and the formula from Wikipedia, the centroid can be efficiently computed like this:
template <typename T>
cv::Point_<T> computeCentroid(const std::vector<cv::Point_<T> >& in) {
if (in.size() > 2) {
T doubleArea = 0;
cv::Point_<T> p(0,0);
cv::Point_<T> p0 = in.back();
for (const cv::Point_<T>& p1 : in) {//C++11
T a = p0.x * p1.y - p0.y * p1.x; //cross product, (signed) double area of triangle of vertices (origin,p0,p1)
p += (p0 + p1) * a;
doubleArea += a;
p0 = p1;
}
if (doubleArea != 0)
return p * (1 / (3 * doubleArea) ); //Operator / does not exist for cv::Point
}
///If we get here,
///All points lie on one line; you can compute a fallback value,
///e.g. the average of the input vertices
[...]
}
Note:
This formula works with vertices given both in clockwise and counterclockwise order.
If the points have integer coordinates, it might be convenient to adapt the type of p and of the return value to Point2f or Point2d, and to add a cast to float or double to the denominator in the return statement.
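For illustration, here is a non-template variant along the lines of that note: integer input points, double-precision accumulation, Point2d result (a sketch only, with a simple average fallback for degenerate input):
#include <opencv2/opencv.hpp>
#include <vector>
// Centroid of a polygon given by integer contour points, accumulated in double.
cv::Point2d computeCentroid2d(const std::vector<cv::Point>& in)
{
    if (in.size() > 2)
    {
        cv::Point2d acc(0, 0);
        double doubleArea = 0;
        cv::Point2d p0 = in.back();
        for (const cv::Point& pi : in)
        {
            cv::Point2d p1 = pi;
            double a = p0.x * p1.y - p0.y * p1.x;   // signed double area of (origin, p0, p1)
            acc += (p0 + p1) * a;
            doubleArea += a;
            p0 = p1;
        }
        if (doubleArea != 0)
            return acc * (1.0 / (3.0 * doubleArea));
    }
    // Fallback (e.g. all points collinear): average of the input vertices.
    cv::Point2d mean(0, 0);
    if (!in.empty())
    {
        for (const cv::Point& p : in) mean += cv::Point2d(p);
        mean *= 1.0 / static_cast<double>(in.size());
    }
    return mean;
}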
If all you need is an approximation of the centroid, here are a couple of simple ways to do it:
sumX = 0; sumY = 0;
size = array_points.size;
if(size > 0){
foreach(point in array_points){
sumX += point.x;
sumY += point.y;
}
centroid.x = sumX/size;
centroid.y = sumY/size;
}
Or with the help of OpenCV's boundingRect:
//pseudo-code:
Rect bRect = Imgproc.boundingRect(array_points);
centroid.x = bRect.x + (bRect.width / 2);
centroid.y = bRect.y + (bRect.height / 2);
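In C++ the same approximation might look like this (a sketch; cv::boundingRect accepts a vector of points):
#include <opencv2/opencv.hpp>
#include <vector>
// Approximate the centroid as the center of the bounding rectangle of the points.
cv::Point approxCentroidFromBoundingRect(const std::vector<cv::Point>& pts)
{
    cv::Rect bRect = cv::boundingRect(pts);
    return cv::Point(bRect.x + bRect.width / 2, bRect.y + bRect.height / 2);
}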