Find intersection points for vector construct - C++

In my software I have two vectors. The first vector, matrix, stores the shape of a given 3D model as a vector of arrays holding the x, y, z coordinates of its points.
std::vector<std::array<double, 3>> matrix;
This vector is already sorted, so that consecutive points trace the contour of the model.
In the second vector, boundingbox, I store the information of a bounding box.
std::vector<std::array<double, 3>> boundingbox;
In this vector the first four elements describe the bounding box around the contour. To fill the outline I have placed a grid on it. The grid is defined by the software based on a variable, infill, which is set by the user at run time. So currently my program creates the following image.
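For illustration, such a grid could be generated like this (a sketch based on assumptions: infill is the spacing between vertical grid lines, and each grid line is stored as a consecutive pair of endpoints, matching the i += 2 stride in the pseudocode below):

#include <array>
#include <vector>

// Hypothetical helper: append vertical grid lines to boundingbox,
// one pair of endpoints per line, spaced `infill` apart.
void addVerticalGrid(std::vector<std::array<double, 3>>& boundingbox,
                     double min_x, double max_x,
                     double min_y, double max_y,
                     double infill)
{
    for (double x = min_x + infill; x < max_x; x += infill)
    {
        boundingbox.push_back({x, min_y, 0.0}); // start point of one grid line
        boundingbox.push_back({x, max_y, 0.0}); // end point of the same line
    }
}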
Now the next step would be to find the intersection points between the grid and the contour. My approach would be the usual mathematical one.
I would use two for-loops. The first loop iterates over the grid so that each line of the grid is handled once.
The second loop iterates over the vector matrix. I developed pseudocode in which I describe my procedure.
// needs <vector>, <array> and <cfloat> (for DBL_MAX)
std::vector<std::array<double, 3>> intersects; // two intersection points per grid line
int fillingStart; // index of the first element of boundingbox that belongs to the grid
int n;            // number of lines in the grid
for (size_t i = fillingStart; i < fillingStart + 2 * n; i += 2) // one grid line per pair of points
{
    double A_x = boundingbox[i][0];
    double A_y = boundingbox[i][1];
    double B_x = boundingbox[i + 1][0];
    double B_y = boundingbox[i + 1][1];
    double AB_x = B_x - A_x;
    double AB_y = B_y - A_y;
    double intersectionpoint_y = DBL_MAX;
    double intersectionpoint_x = DBL_MAX;
    double intersectionpoint2_y = DBL_MAX;
    double intersectionpoint2_x = DBL_MAX;
    for (size_t j = 0; j < matrix.size() - 1; j++)
    {
        double C_x = matrix[j][0];
        double C_y = matrix[j][1];
        double D_x = matrix[j + 1][0];
        double D_y = matrix[j + 1][1];
        double CD_x = D_x - C_x;
        double CD_y = D_y - C_y;
        // Cramer's rule: solve A + s*AB = C + t*CD for the parameters s and t
        double denom = (AB_x * (-CD_y)) - ((-CD_x) * AB_y);
        if (denom == 0) continue; // parallel segments never intersect
        double s = (((C_x - A_x) * (-CD_y)) - ((-CD_x) * (C_y - A_y))) / denom;
        double t = ((AB_x * (C_y - A_y)) - ((C_x - A_x) * AB_y)) / denom;
        double point_x = A_x + s * AB_x;
        double point_y = A_y + s * AB_y;
        if (point_x < intersectionpoint_x && point_y < intersectionpoint_y)
        {
            intersectionpoint_x = point_x;
            intersectionpoint_y = point_y;
        }
        else if (point_x < intersectionpoint2_x && point_y < intersectionpoint2_y)
        {
            intersectionpoint2_x = point_x;
            intersectionpoint2_y = point_y;
        }
    }
    intersects.push_back({intersectionpoint_x, intersectionpoint_y, 0.0});
    intersects.push_back({intersectionpoint2_x, intersectionpoint2_y, 0.0});
}
With these two loops I would find the intersection points of each grid line with each segment (between two points) of the contour. Then I would have to find the two intersection points closest to the grid line and store these points. The special case would be if there is something inside the contour, like a hole; in that case I would find four points.
EDIT: Why I want to use intersection points is shown in the following figures
Here we have the contour of a rectangle. As you can see there are just a few points to describe the figure.
The next image shows the filling of the model
Because of the few points of the contour I have to calculate the intersection points of the contour and the grid.
EDIT2: I now got the code working and updated it here, but the problem is that it always saves the same point in intersectionpoint. That's because of the if-statement, but I can't figure out how to get it working.
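A minimal sketch of an alternative selection, using the s and t parameters from Cramer's rule above (and std::sort from <algorithm>): keep only intersections whose parameters lie on both segments, then order them along the grid line instead of comparing x and y independently.

std::vector<std::array<double, 3>> hits; // all intersections on the current grid line
// inside the inner loop, after computing s, t, point_x and point_y:
if (s >= 0.0 && s <= 1.0 && t >= 0.0 && t <= 1.0)
    hits.push_back({point_x, point_y, s}); // store s in the unused third slot
// after the inner loop, sort the hits by their position along the grid line:
std::sort(hits.begin(), hits.end(),
          [](const std::array<double, 3>& a, const std::array<double, 3>& b)
          { return a[2] < b[2]; });
// hits.front() and hits.back() are the two outermost intersection points;
// any further inner pairs belong to holes in the contour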

You could iterate over the contour and, for each two consecutive points, check if there is a grid line between them; if there is one, compute the intersection point.
std::vector<std::array<double, 3>> intersects;
auto it = matrix.begin();
while (it != matrix.end() - 1) {
    auto &p1 = *it;
    auto &p2 = *(++it);
    double x;
    // Check if there is a vertical line between p1 and p2
    if (has_vertical_line(p1, p2, &x)) {
        // The equation of the line joining p1 and p2 gives:
        // y = p1[1] + (p2[1] - p1[1]) / (p2[0] - p1[0]) * (x - p1[0])
        double y = p1[1] + (p2[1] - p1[1]) / (p2[0] - p1[0]) * (x - p1[0]);
        intersects.push_back({x, y, 0.0});
    }
}
Where has_vertical_line is something like:
bool has_vertical_line (std::array<double, 3> const& p1,
                        std::array<double, 3> const& p2,
                        double *px) {
    double x1 = p1[0], x2 = p2[0];
    if (x2 <= x1) {
        std::swap(x1, x2);
    }
    size_t lx2 = closest_from_below(x2),
           lx1 = closest_from_above(x1);
    if (lx1 == lx2) {
        *px = lines[lx1]; // Assuming abscissa
        return true;
    }
    return false;
}
Where closest_from_below and closest_from_above are simple functions that find the line just below / above the given abscissa (trivial since your lines are vertical).
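For example, with the grid-line abscissas kept sorted in a std::vector, both helpers reduce to binary searches. A sketch (boundary handling, e.g. a point left of the first line, is omitted):

#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<double> lines; // sorted abscissas of the vertical grid lines

// index of the first line with abscissa >= x
size_t closest_from_above(double x) {
    return std::lower_bound(lines.begin(), lines.end(), x) - lines.begin();
}

// index of the last line with abscissa <= x
size_t closest_from_below(double x) {
    return std::upper_bound(lines.begin(), lines.end(), x) - lines.begin() - 1;
}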

Related

Sorting RHS/LHS Objects in Vehicle Path C++

So I am currently trying to create a method which, taking in a simulated vehicle's position, direction, and an object's position, will determine whether the object lies on the right hand side or the left hand side of that vehicle's direction. An image is shown here: Simple Diagram of Problem Situation
So far I have tried to use the cross product and some other methods to solve the problem. I will include the relevant code blocks here:
void Class::sortCones()
{
    // Clear both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (size_t i = 0; i < _cones.size(); i++)
    {
        double side = indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw);
        if (side > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        else if (side == 0)
        {
            continue; // collinear cone: belongs to neither side
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
}
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector magnitude = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // Normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // - old method
    // Using the transformation matrix with theta = yaw (angle in radians), transform the axes to the augmented 2D space
    // Take the cross product of < Ex, 0 > x < x', y' > where x', y' have the same location in the simulation space.
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    // z component of the cross product of the heading unit vector and the car-to-cone unit vector:
    // positive -> cone on the left-hand side, negative -> cone on the right-hand side
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
The car currently just seems to run off in a straight line, and one of the funny elements is the sorting method for left and right. Any direction is GREATLY appreciated.
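For a quick sanity check of the sign convention, here is a minimal standalone sketch (the member function above reproduced as a free function, with hypothetical test values):

#include <cmath>
#include <cstdio>

double indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    double iOne = std::cos(yawCar);
    double jOne = std::sin(yawCar);
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    double magTwo = std::sqrt(iTwo * iTwo + jTwo * jTwo);
    // z component of heading x car-to-cone
    return iOne * (jTwo / magTwo) - jOne * (iTwo / magTwo);
}

int main()
{
    // car at the origin facing along +x (yaw = 0)
    std::printf("%f\n", indicateSide(0, 0, 1, 1, 0));  // positive -> left hand side
    std::printf("%f\n", indicateSide(0, 0, 1, -1, 0)); // negative -> right hand side
}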

3-D Plane Filtering EVD RANSAC... where am I going wrong?

Background
For a computer vision assignment I've been given the task of implementing RANSAC to fit a plane to a given set of points and filter that input list of points by the consensus model using Eigenvalue Decomposition.
I have spent days trying to tweak my code to achieve correct plane filtering behavior on an input set of test data. All you algorithm junkies, this one's for you.
My implementation uses a vector of a ROS data structure (Point32) as inputs, but this is transparent to the problem at hand.
What I've done
When I test for expected plane filtering behavior (correct elimination of outliers >95-99% of the time), I see in my implementation that I only eliminate outliers and extract the main plane of a test point cloud ~30-40% of the time. Other times, I filter a plane that ~somewhat~ fits the expected model, but leaves a lot of obvious outliers inside the consensus model. The fact that this works at all suggests that I'm doing some things right, and some things wrong.
I've tweaked my constants (distance threshold, max iterations, estimated % points fit) to London and back, and I only see small differences in the consensus model.
Implementation (long)
const float RANSAC_ESTIMATED_FIT_POINTS = .80f; // % points estimated to fit the model
const size_t RANSAC_MAX_ITER = 500; // max RANSAC iterations
const size_t RANDOM_MAX_TRIES = 100; // max RANSAC random point tries per iteration
const float RANSAC_THRESHOLD = 0.0000001f; // threshold to determine what constitutes a close point to a plane
/*
Helper to randomly select an item from a STL container, from stackoverflow.
*/
template <typename I>
I random_element(I begin, I end)
{
const unsigned long n = std::distance(begin, end);
const unsigned long divisor = ((long)RAND_MAX + 1) / n;
unsigned long k;
do { k = std::rand() / divisor; } while (k >= n);
std::advance(begin, k);
return begin;
}
bool run_RANSAC(const std::vector<Point32> all_points,
Vector3f *out_p0, Vector3f *out_n,
std::vector<Point32> *out_inlier_points)
{
for (size_t iterations = 0; iterations < RANSAC_MAX_ITER; iterations ++)
{
Point32 p1,p2,p3;
Vector3f v1;
Vector3f v2;
Vector3f n_hat; // keep track of the current plane model
Vector3f P0;
std::vector<Point32> points_agree; // list of points that agree with model within
bool found = false;
// try RANDOM_MAX_TRIES times to get random 3 points
for (size_t tries = 0; tries < RANDOM_MAX_TRIES; tries ++) // try to get unique random points 100 times
{
// get 3 random points
p1 = *random_element(all_points.begin(), all_points.end());
p2 = *random_element(all_points.begin(), all_points.end());
p3 = *random_element(all_points.begin(), all_points.end());
v1 = Vector3f (p2.x - p1.x,
p2.y - p1.y,
p2.z - p1.z ); //Vector P1P2
v2 = Vector3f (p3.x - p1.x,
p3.y - p1.y,
p3.z - p1.z); //Vector P1P3
if (v1.cross(v2).norm() > 1e-6f) // a nonzero cross product means the 3 points are not collinear
{
found = true;
break;
}
} // end try random element loop
if (!found) // could not find 3 random non-collinear points in 100 tries, go to next iteration
{
ROS_ERROR("run_RANSAC(): Could not find 3 random non-collinear points in %ld tries, going on to iteration %ld", RANDOM_MAX_TRIES, iterations + 1);
continue;
}
// non-collinear random points exist past here
// fit a plane to p1, p2, p3
Vector3f n = v1.cross(v2); // calculate normal of plane
n_hat = n / n.norm();
P0 = Vector3f(p1.x, p1.y, p1.z);
// at some point, the original p0, p1, p2 will be iterated over and added to agreed points
// loop over all points, find points that are inliers to plane
for (std::vector<Point32>::const_iterator it = all_points.begin();
it != all_points.end(); it++)
{
Vector3f M (it->x - P0.x(),
it->y - P0.y(),
it->z - P0.z()); // M = (P - P0)
float d = M.dot(n_hat); // calculate distance
if (std::abs(d) <= RANSAC_THRESHOLD) // compare the absolute point-to-plane distance
{ // add to inlier points list
points_agree.push_back(*it);
}
} // end points loop
ROS_DEBUG("run_RANSAC() POINTS AGREED: %li=%f, RANSAC_ESTIMATED_FIT_POINTS: %f", points_agree.size(),
(float) points_agree.size() / all_points.size(), RANSAC_ESTIMATED_FIT_POINTS);
if (((float) points_agree.size()) / all_points.size() > RANSAC_ESTIMATED_FIT_POINTS)
{ // if points agree / total points > estimated % points fitting
// fit to points_agree.size() points
size_t n = points_agree.size();
Vector3f sum(0.0f, 0.0f, 0.0f);
for (std::vector<Point32>::iterator iter = points_agree.begin();
iter != points_agree.end(); iter++)
{
sum += Vector3f(iter->x, iter->y, iter->z);
}
Vector3f centroid = sum / n; // calculate centroid
Eigen::MatrixXf M(points_agree.size(), 3);
for (size_t row = 0; row < points_agree.size(); row++)
{ // build distance vector matrix
Vector3f point(points_agree[row].x,
points_agree[row].y,
points_agree[row].z);
for (size_t col = 0; col < 3; col ++)
{
M(row, col) = point(col) - centroid(col);
}
}
Matrix3f covariance_matrix = M.transpose() * M;
Eigen::EigenSolver<Matrix3f> eigen_solver;
eigen_solver.compute(covariance_matrix);
Vector3f eigen_values = eigen_solver.eigenvalues().real();
Matrix3f eigen_vectors = eigen_solver.eigenvectors().real();
// find eigenvalue that is closest to 0
size_t idx;
// find minimum eigenvalue, get index
float closest_eval = eigen_values.cwiseAbs().minCoeff(&idx);
// find corresponding eigenvector
Vector3f closest_evec = eigen_vectors.col(idx);
std::stringstream logstr;
logstr << "Closest eigenvalue : " << closest_eval << std::endl <<
"Corresponding eigenvector : " << std::endl << closest_evec << std::endl <<
"Centroid : " << std::endl << centroid;
ROS_DEBUG("run_RANSAC(): %s", logstr.str().c_str());
Vector3f all_fitted_n_hat = closest_evec / closest_evec.norm();
// invoke copy constructors for outbound
*out_n = Vector3f(all_fitted_n_hat);
*out_p0 = Vector3f(centroid);
*out_inlier_points = std::vector<Point32>(points_agree);
ROS_DEBUG("run_RANSAC():: Success, total_size: %li, inlier_size: %li, %% agreement %f",
all_points.size(), out_inlier_points->size(), (float) out_inlier_points->size() / all_points.size());
return true;
}
} // end iterations loop
return false;
}
Pseudocode from wikipedia for reference:
Given:
data – a set of observed data points
model – a model that can be fitted to data points
n – minimum number of data points required to fit the model
k – maximum number of iterations allowed in the algorithm
t – threshold value to determine when a data point fits a model
d – number of close data points required to assert that a model fits well to data
Return:
bestfit – model parameters which best fit the data (or nul if no good model is found)
iterations = 0
bestfit = nul
besterr = something really large
while iterations < k {
maybeinliers = n randomly selected values from data
maybemodel = model parameters fitted to maybeinliers
alsoinliers = empty set
for every point in data not in maybeinliers {
if point fits maybemodel with an error smaller than t
add point to alsoinliers
}
if the number of elements in alsoinliers is > d {
% this implies that we may have found a good model
% now test how good it is
bettermodel = model parameters fitted to all points in maybeinliers and alsoinliers
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
}
increment iterations
}
return bestfit
The only difference between my implementation and the Wikipedia pseudocode is that mine omits the following:
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
My guess is that I need to do something related to comparing the (closest_eval) with some sentinel value for the expected minimum eigenvalue corresponding to a normal for planes that tend to fit the model. However this was not covered in class and I have no idea where to start figuring out what's wrong.
Heh, it's funny how thinking about how to present the problem to others can actually solve the problem I'm having.
Solved by simply implementing this with a std::numeric_limits<float>::max() starting best-fit eigenvalue. This is because the best-fit plane extracted on any n-th iteration of RANSAC is not guaranteed to be THE best-fit plane and may have a huge error in consensus amongst its constituent points, so I need to converge on that across iterations. Whoops.
thiserr = a measure of how well model fits these points
if thiserr < besterr {
bestfit = bettermodel
besterr = thiserr
}
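In code, the shape of that fix looks roughly like this (a sketch; fit_candidate() is a hypothetical stand-in for one RANSAC iteration that returns the smallest absolute eigenvalue of its candidate plane as the error measure):

#include <cstddef>
#include <limits>

float fit_candidate(); // hypothetical: one RANSAC iteration, returns the candidate's error

bool run_RANSAC_best_fit(std::size_t max_iter)
{
    float best_error = std::numeric_limits<float>::max(); // sentinel starting best fit
    bool found = false;
    for (std::size_t i = 0; i < max_iter; ++i)
    {
        float this_error = fit_candidate();
        if (this_error < best_error) // converge on the best model across iterations
        {
            best_error = this_error;
            // copy the candidate plane and inlier list to the output parameters here
            found = true;
        }
    }
    return found; // true only if some candidate beat the sentinel
}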

Vector of double to save the length of every side in a polygon

Everyone, I want to write a function in a class Polygon that will save the length of every side of the polygon in a vector of double. My polygon is built from the class Point. I have managed to find out how many points my polygon has and to print a drawing of the polygon to the screen, but I have not yet succeeded in writing the function that computes the length of every side from the points.
This is my class Point :
Point::Point(double x, double y)
{
_x = x;
_y = y;
}
Point::Point(const Point& other)
{
_x = other._x;
_y = other._y;
}
double Point::getX() const
{
return _x;
}
double Point::getY() const
{
return _y;
}
double Point::distance(const Point& other)
{
return sqrt((getX() - other._x) * (getX() - other._x) + (getY() - other._y) *(getY() - other._y));
}
This is my header of class Polygon :
class Polygon
{
public:
Polygon();
~Polygon();
int numOfPoints() const;
vector<Point> getPoints() const;
vector<double> getSides() const;
protected:
std::vector<Point> _points;
};
and the cpp of Polygon :
Polygon::Polygon(){}
Polygon::~Polygon(){}
int Polygon::numOfPoints() const
{
return _points.size();
}
vector<Point> Polygon::getPoints() const
{
return _points;
}
vector<double> Polygon::getSides() const
{
    vector<double> sides;
    return sides; // the side lengths still need to be computed here
}
So I don't know how I can get the length of every side using the class Point. I think it can be done with the distance function of Point, but I don't know how. If you can help me, thank you!
First the small point: the following avoids calculating the differences twice (the compiler might optimise this anyway, but it's better not to rely on it doing so...).
double Point::distance(const Point& other) const // const, so it can be called on const points
{
    double dx = _x - other._x;
    double dy = _y - other._y;
    return sqrt(dx * dx + dy * dy);
}
Then you have to iterate over all the points; you need at least two to have any distances at all, but two is the degenerate case (one distance only, all other numbers n result in n distances...):
vector<double> Polygon::getSides() const
{
    vector<double> sides;
    if (_points.size() > 2)
    {
        sides.reserve(_points.size());
        std::vector<Point>::const_iterator end = _points.end() - 1;
        for (std::vector<Point>::const_iterator i = _points.begin(); i != end; ++i)
            sides.push_back(i->distance(*(i + 1)));
    }
    if (_points.size() >= 2)
        sides.push_back(_points.front().distance(_points.back()));
    return sides;
}
Explanation:
if (_points.size() > 2)
Only if we have more than two points, i.e. a triangle at least, do we have a true polygon. We now calculate its distances, e.g. for a square ABCD the distances AB, BC, CD. Note that the distance DA is still missing...
sides.reserve(_points.size());
A polygon with n points has n sides. This prevents reallocation.
std::vector<Point>::const_iterator end = _points.end() - 1;
end() points one past the end. We want to calculate the distances (i, i+1), so the last element must be skipped.
for (std::vector<Point>::const_iterator i = _points.begin(); i != end; ++i)
    sides.push_back(i->distance(*(i + 1)));
Now calculating the distances...
if (_points.size() >= 2)
    sides.push_back(_points.front().distance(_points.back()));
This catches two cases: for true polygons it adds the last side, closing the polygon (in the example above: DA). Additionally, it handles the degenerate case of a single line (n = 2).
Actually, this could just as well have been placed in front of the for loop. My variant calculates for points ABCD the sides AB, BC, CD, DA; the alternative gives DA, AB, BC, CD.
You might have noticed that we reserve only in the case of a true polygon. In the degenerate case we insert only a single element, so it does not matter whether we allocate the inner array beforehand via reserve or when inserting the element...
Oh, and if you want to save a line of code:
for (std::vector<Point>::const_iterator i = _points.begin() + 1; i != _points.end(); ++i)
    sides.push_back(i->distance(*(i - 1)));
Effectively the same, just with the roles of the points reversed (calculating BA instead of AB).
You should iterate over the points in the polygon, calculating the distance to the previous point.
Something like the following should work (untested):
vector<double> Polygon::getSides() const {
    vector<double> sides;
    for (auto it = this->_points.begin(); it != this->_points.end(); it++) {
        if (it == this->_points.begin())
            sides.push_back(it->distance(*(this->_points.end() - 1)));
        else
            sides.push_back(it->distance(*(it - 1)));
    }
    return sides;
}
This starts at the first point and calculates the distance to the last point. For each point after that it calculates the distance to the previous point. Each time adding the distance to the output vector.
Note that I have assumed that the polygon is closed, i.e. the first point is connected to the last point. If the polygon contains no points, the return vector will be empty. If it contains only one point, it will contain a single element [0]. This results from calculating the distance from a point to the same point.
See this tutorial for more info on iterating over vectors: http://www.cprogramming.com/tutorial/stl/iterators.html
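For reference, a more compact variant of the same idea using modular indexing (a sketch; it assumes Point::distance is declared const, as in the first answer):

vector<double> Polygon::getSides() const
{
    vector<double> sides;
    const size_t n = _points.size();
    if (n == 2) // degenerate case: a single line segment has one "side"
    {
        sides.push_back(_points[0].distance(_points[1]));
    }
    else if (n > 2)
    {
        sides.reserve(n);
        for (size_t i = 0; i < n; ++i) // (i + 1) % n wraps around, closing the polygon
            sides.push_back(_points[i].distance(_points[(i + 1) % n]));
    }
    return sides;
}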

sorting points: concave polygon

I have a set of points that I'm trying to sort in CCW or CW order by their angle. I want the points sorted so that they form a polygon with no splits in its region or self-intersections. This is difficult because in most cases it would be a concave polygon.
point centroid;
int main( int argc, char** argv )
{
// I read a set of points into a struct point array: points[n]
// Find centroid
double sx = 0; double sy = 0;
for (int i = 0; i < n; i++)
{
sx += points[i].x;
sy += points[i].y;
}
centroid.x = sx/n;
centroid.y = sy/n;
// sort points in polar order using the centroid as reference
std::qsort(&points, n, sizeof(point), polarOrder);
}
// -1 ccw, 1 cw, 0 collinear
int orientation(point a, point b, point c)
{
double area2 = (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x);
if (area2 < 0) return -1;
else if (area2 > 0) return +1;
else return 0;
}
// compare other points relative to polar angle they make with this point
// (where the polar angle is between 0 and 2pi)
int polarOrder(const void *vp1, const void *vp2)
{
point *p1 = (point *)vp1;
point *p2 = (point *)vp2;
// translation
double dx1 = p1->x - centroid.x;
double dy1 = p1->y - centroid.y;
double dx2 = p2->x - centroid.x;
double dy2 = p2->y - centroid.y;
if (dy1 >= 0 && dy2 < 0) { return -1; } // p1 above and p2 below
else if (dy2 >= 0 && dy1 < 0) { return 1; } // p1 below and p2 above
else if (dy1 == 0 && dy2 ==0) { // 3-collinear and horizontal
if (dx1 >= 0 && dx2 < 0) { return -1; }
else if (dx2 >= 0 && dx1 < 0) { return 1; }
else { return 0; }
}
else return -orientation(centroid,*p1,*p2); // both above or below
}
It looks like the points are sorted accurately (pink) until they "cave in", in which case the algorithm skips over these points and then continues. Can anyone point me in the right direction to sort the points so that they form the polygon I'm looking for?
Raw Point Plot - Blue, Pink Points - Sorted
Point List: http://pastebin.com/N0Wdn2sm (You can ignore the 3rd component, since all these points lie on the same plane.)
The code below (sorry it's C rather than C++) sorts correctly as you wish with atan2.
The problem with your code may be that it attempts to use the included angle between the two vectors being compared. This is doomed to fail: the array is not circular; it has a first and a final element. With respect to the centroid, sorting an array requires a total polar order: a range of angles such that each point corresponds to a unique angle regardless of the other point. Those angles form the total polar order, and comparing them as scalars provides the sort comparison function.
In this manner, the algorithm you proposed is guaranteed to produce a star-shaped polyline. It may oscillate wildly between different radii (...which your data do! Is this what you meant by "caved in"? If so, it's a feature of your algorithm and data, not an implementation error), and points corresponding to exactly the same angle might produce edges that coincide (lie directly on top of each other), but the edges won't cross.
I believe that your choice of centroid as the polar origin is sufficient to guarantee that connecting the ends of the polyline generated as above will produce a full star-shaped polygon, however, I don't have a proof.
Result plotted with Excel
Note you can guess from the nearly radial edges where the centroid is! This is the "star shape" I referred to above.
To illustrate this is really a star-shaped polygon, here is a zoom in to the confusing lower left corner:
If you want a polygon that is "nicer" in some sense, you will need a fancier (probably much fancier) algorithm, e.g. the Delaunay triangulation-based ones others have referred to.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
struct point {
double x, y;
};
void print(FILE *f, struct point *p) {
fprintf(f, "%f,%f\n", p->x, p->y);
}
// Return polar angle of p with respect to origin o
double to_angle(const struct point *p, const struct point *o) {
return atan2(p->y - o->y, p->x - o->x);
}
void find_centroid(struct point *c, struct point *pts, int n_pts) {
double x = 0, y = 0;
for (int i = 0; i < n_pts; i++) {
x += pts[i].x;
y += pts[i].y;
}
c->x = x / n_pts;
c->y = y / n_pts;
}
static struct point centroid[1];
int by_polar_angle(const void *va, const void *vb) {
double theta_a = to_angle(va, centroid);
double theta_b = to_angle(vb, centroid);
return theta_a < theta_b ? -1 : theta_a > theta_b ? 1 : 0;
}
void sort_by_polar_angle(struct point *pts, int n_pts) {
find_centroid(centroid, pts, n_pts);
qsort(pts, n_pts, sizeof pts[0], by_polar_angle);
}
int main(void) {
FILE *f = fopen("data.txt", "r");
if (!f) return 1;
struct point pts[10000];
int n_pts, n_read;
for (n_pts = 0;
(n_read = fscanf(f, "%lf%lf%*f", &pts[n_pts].x, &pts[n_pts].y)) != EOF;
++n_pts)
if (n_read != 2) return 2;
fclose(f);
sort_by_polar_angle(pts, n_pts);
for (int i = 0; i < n_pts; i++)
print(stdout, pts + i);
return 0;
}
Well, first and foremost, I see centroid declared as a local variable in main. Yet inside polarOrder you are also accessing some centroid variable.
Judging by the code you posted, that second centroid is a file-scope variable that you never initialized to any specific value. Hence the meaningless results from your comparison function.
The second strange detail in your code is that you do return -orientation(centroid,*p1,*p2) if both points are above or below. Since orientation returns -1 for CCW and +1 for CW, it should be just return orientation(centroid,*p1,*p2). Why did you feel the need to negate the result of orientation?
Your original points don't appear to form a convex polygon, so simply ordering them by angle around a fixed centroid will not necessarily result in a clean polygon. This is a non-trivial problem; you may want to research Delaunay triangulation and/or gift wrapping algorithms, although both would have to be modified because your polygon is concave. The answer here is an interesting example of a modified gift wrapping algorithm for concave polygons. There is also a C++ library called PCL that may do what you need.
But... if you really do want to do a polar sort, your sorting functions seem more complex than necessary. I would sort using atan2 first, then optimize later once you get the result you want, if necessary. Here is an example using lambda functions:
#include <algorithm>
#include <math.h>
#include <vector>
int main()
{
struct point
{
double x;
double y;
};
std::vector< point > points;
point centroid;
// fill in your data...
auto sort_predicate = [&centroid] (const point& a, const point& b) -> bool {
return atan2 (a.x - centroid.x, a.y - centroid.y) <
atan2 (b.x - centroid.x, b.y - centroid.y);
};
std::sort (points.begin(), points.end(), sort_predicate);
}
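Note that centroid in the snippet above is only declared ("fill in your data..."); it would be computed from the points before the sort, along these lines (a sketch):

// average the coordinates to get the centroid (assumes points is non-empty)
centroid = {0.0, 0.0};
for (const point& p : points)
{
    centroid.x += p.x;
    centroid.y += p.y;
}
centroid.x /= points.size();
centroid.y /= points.size();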

OpenCV templates in 2D point data set

I was wondering what the best approach would be for detecting 'figures' in an array of 2D points.
In this example I have two 'templates': figure 1 is one template and figure 2 is another.
Each of these templates exists only as a vector of points with an x,y coordinate.
Let's say we have a third vector with points with x,y coordinates.
What would be the best way to find and isolate the points in the third vector that match one of the first two templates (including scaling and rotation)?
I have been trying nearest neighbours (FlannBasedMatcher) and even an SVM implementation, but it doesn't seem to get me any result; template matching doesn't seem to be the way to go either, I think. I am not working on images but only on 2D points in memory...
Especially because the input vector always has more points than the original data set to be compared with.
All it needs to do is find points in array that match a template.
I am not a 'specialist' in machine learning or opencv. I guess I am overlooking something from the beginning...
Thank you very much for your help/suggestions.
just for fun I tried this:
Choose two points of the point dataset and compute the transformation mapping the first two pattern points to those points.
Test whether all transformed pattern points can be found in the data set.
This approach is very naive and has a complexity of O(m*n²) with n data points and a single pattern of size m (points). This complexity might increase for some nearest neighbor search methods. So you have to consider whether it's efficient enough for your application.
Some improvements could include a heuristic to avoid trying all n² combinations of points, but for that you need background information such as the maximal pattern scaling or something like that.
For evaluation I first created a pattern:
Then I create random points and add the pattern somewhere within (scaled, rotated and translated):
After some computation this method recognizes the pattern. The red line shows the chosen points for transformation computation.
Here's the code:
// draw a set of points on a given destination image
void drawPoints(cv::Mat & image, std::vector<cv::Point2f> points, cv::Scalar color = cv::Scalar(255,255,255), float size=10)
{
for(unsigned int i=0; i<points.size(); ++i)
{
cv::circle(image, points[i], 0, color, size);
}
}
// assumes a 2x3 (affine) transformation (CV_32FC1). does not change the input points
std::vector<cv::Point2f> applyTransformation(std::vector<cv::Point2f> points, cv::Mat transformation)
{
for(unsigned int i=0; i<points.size(); ++i)
{
const cv::Point2f tmp = points[i];
points[i].x = tmp.x * transformation.at<float>(0,0) + tmp.y * transformation.at<float>(0,1) + transformation.at<float>(0,2) ;
points[i].y = tmp.x * transformation.at<float>(1,0) + tmp.y * transformation.at<float>(1,1) + transformation.at<float>(1,2) ;
}
return points;
}
const float PI = 3.14159265359;
// similarity transformation uses same scaling along both axes, rotation and a translation part
cv::Mat composeSimilarityTransformation(float s, float r, float tx, float ty)
{
cv::Mat transformation = cv::Mat::zeros(2,3,CV_32FC1);
// compute rotation matrix and scale entries
float rRad = PI*r/180.0f;
transformation.at<float>(0,0) = s*cosf(rRad);
transformation.at<float>(0,1) = s*sinf(rRad);
transformation.at<float>(1,0) = -s*sinf(rRad);
transformation.at<float>(1,1) = s*cosf(rRad);
// translation
transformation.at<float>(0,2) = tx;
transformation.at<float>(1,2) = ty;
return transformation;
}
// create random points
std::vector<cv::Point2f> createPointSet(cv::Size2i imageSize, std::vector<cv::Point2f> pointPattern, unsigned int nRandomDots = 50)
{
// subtract center of gravity to allow more intuitive rotation
cv::Point2f centerOfGravity(0,0);
for(unsigned int i=0; i<pointPattern.size(); ++i)
{
centerOfGravity.x += pointPattern[i].x;
centerOfGravity.y += pointPattern[i].y;
}
centerOfGravity.x /= (float)pointPattern.size();
centerOfGravity.y /= (float)pointPattern.size();
pointPattern = applyTransformation(pointPattern, composeSimilarityTransformation(1,0,-centerOfGravity.x, -centerOfGravity.y));
// create random points
//unsigned int nRandomDots = 0;
std::vector<cv::Point2f> pointset;
srand (time(NULL));
for(unsigned int i =0; i<nRandomDots; ++i)
{
pointset.push_back( cv::Point2f(rand()%imageSize.width, rand()%imageSize.height) );
}
cv::Mat image = cv::Mat::ones(imageSize,CV_8UC3);
image = cv::Scalar(255,255,255);
drawPoints(image, pointset, cv::Scalar(0,0,0));
cv::namedWindow("pointset"); cv::imshow("pointset", image);
// add point pattern to a random location
float scaleFactor = rand()%30 + 10.0f;
float translationX = rand()%(imageSize.width/2)+ imageSize.width/4;
float translationY = rand()%(imageSize.height/2)+ imageSize.height/4;
float rotationAngle = rand()%360;
std::cout << "s: " << scaleFactor << " r: " << rotationAngle << " t: " << translationX << "/" << translationY << std::endl;
std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(scaleFactor,rotationAngle,translationX,translationY));
//std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,trans);
drawPoints(image, transformedPattern, cv::Scalar(0,0,0));
drawPoints(image, transformedPattern, cv::Scalar(0,255,0),3);
cv::imwrite("dataPoints.png", image);
cv::namedWindow("pointset + pattern"); cv::imshow("pointset + pattern", image);
for(unsigned int i=0; i<transformedPattern.size(); ++i)
pointset.push_back(transformedPattern[i]);
return pointset;
}
void programDetectPointPattern()
{
cv::Size2i imageSize(640,480);
// create a point pattern, this can be in any scale and any relative location
std::vector<cv::Point2f> pointPattern;
pointPattern.push_back(cv::Point2f(0,0));
pointPattern.push_back(cv::Point2f(2,0));
pointPattern.push_back(cv::Point2f(4,0));
pointPattern.push_back(cv::Point2f(1,2));
pointPattern.push_back(cv::Point2f(3,2));
pointPattern.push_back(cv::Point2f(2,4));
// transform the pattern so it can be drawn
cv::Mat trans = cv::Mat::ones(2,3,CV_32FC1);
trans.at<float>(0,0) = 20.0f; // scale x
trans.at<float>(1,1) = 20.0f; // scale y
trans.at<float>(0,2) = 20.0f; // translation x
trans.at<float>(1,2) = 20.0f; // translation y
// draw the pattern
cv::Mat drawnPattern = cv::Mat::ones(cv::Size2i(128,128),CV_8U);
drawnPattern *= 255;
drawPoints(drawnPattern,applyTransformation(pointPattern, trans), cv::Scalar(0),5);
// display and save pattern
cv::imwrite("patternToDetect.png", drawnPattern);
cv::namedWindow("pattern"); cv::imshow("pattern", drawnPattern);
// draw the points and the included pattern
std::vector<cv::Point2f> pointset = createPointSet(imageSize, pointPattern);
cv::Mat image = cv::Mat(imageSize, CV_8UC3);
image = cv::Scalar(255,255,255);
drawPoints(image,pointset, cv::Scalar(0,0,0));
// normally we would have to use some nearest neighbor distance computation, but to make it easier here,
// we create a small area around every point, which allows to test for point existence in a small neighborhood very efficiently (for small images)
// in the real application this "inlier" check should be performed by k-nearest neighbor search and threshold the distance,
// efficiently evaluated by a kd-tree
cv::Mat pointImage = cv::Mat::zeros(imageSize,CV_8U);
float maxDist = 3.0f; // how exact must the pattern be recognized, can there be some "noise" in the position of the data points?
drawPoints(pointImage, pointset, cv::Scalar(255),maxDist);
cv::namedWindow("pointImage"); cv::imshow("pointImage", pointImage);
// choose two points from the pattern (can be arbitrary so just take the first two)
cv::Point2f referencePoint1 = pointPattern[0];
cv::Point2f referencePoint2 = pointPattern[1];
cv::Point2f diff1; // difference vector
diff1.x = referencePoint2.x - referencePoint1.x;
diff1.y = referencePoint2.y - referencePoint1.y;
float referenceLength = sqrt(diff1.x*diff1.x + diff1.y*diff1.y);
diff1.x = diff1.x/referenceLength; diff1.y = diff1.y/referenceLength;
std::cout << "reference: " << std::endl;
std::cout << referencePoint1 << std::endl;
// now try to find the pattern
for(unsigned int j=0; j<pointset.size(); ++j)
{
cv::Point2f targetPoint1 = pointset[j];
for(unsigned int i=0; i<pointset.size(); ++i)
{
cv::Point2f targetPoint2 = pointset[i];
cv::Point2f diff2;
diff2.x = targetPoint2.x - targetPoint1.x;
diff2.y = targetPoint2.y - targetPoint1.y;
float targetLength = sqrt(diff2.x*diff2.x + diff2.y*diff2.y);
diff2.x = diff2.x/targetLength; diff2.y = diff2.y/targetLength;
// with nearest-neighborhood search this line will be similar or the maximal neighbor distance must be relative to targetLength!
if(targetLength < maxDist) continue;
// scale:
float s = targetLength/referenceLength;
// rotation:
float r = -180.0f/PI*(atan2(diff2.y,diff2.x) + atan2(diff1.y,diff1.x));
// scale and rotate the reference point to compute the translation needed
std::vector<cv::Point2f> origin;
origin.push_back(referencePoint1);
origin = applyTransformation(origin, composeSimilarityTransformation(s,r,0,0));
// compute the translation which maps the two reference points on the two target points
float tx = targetPoint1.x - origin[0].x;
float ty = targetPoint1.y - origin[0].y;
std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,composeSimilarityTransformation(s,r,tx,ty));
// now test if all transformed pattern points can be found in the dataset
bool found = true;
for(unsigned int i=0; i<transformedPattern.size(); ++i)
{
cv::Point2f curr = transformedPattern[i];
// here we check whether there is a point drawn in the image. If you have no image you will have to perform a nearest neighbor search.
// this can be done with a balanced kd-tree in O(log n) time
// building such a balanced kd-tree has to be done once for the whole dataset and needs O(n*(log n)) afair
if((curr.x >= 0)&&(curr.x <= pointImage.cols-1)&&(curr.y>=0)&&(curr.y <= pointImage.rows-1))
{
if(pointImage.at<unsigned char>(curr.y, curr.x) == 0) found = false;
// if working with kd-tree: if nearest neighbor distance > maxDist => found = false;
}
else found = false;
}
if(found)
{
std::cout << composeSimilarityTransformation(s,r,tx,ty) << std::endl;
cv::Mat currentIteration;
image.copyTo(currentIteration);
cv::circle(currentIteration,targetPoint1,5, cv::Scalar(255,0,0),1);
cv::circle(currentIteration,targetPoint2,5, cv::Scalar(255,0,255),1);
cv::line(currentIteration,targetPoint1,targetPoint2,cv::Scalar(0,0,255));
drawPoints(currentIteration, transformedPattern, cv::Scalar(0,0,255),4);
cv::imwrite("detectedPattern.png", currentIteration);
cv::namedWindow("iteration"); cv::imshow("iteration", currentIteration); cv::waitKey(-1);
}
}
}
}
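As the comments in the code note, the image-based membership test only works for small images; with a kd-tree the inlier check becomes a nearest-neighbour query. A sketch of how that could look with OpenCV's FLANN wrapper (cv::flann::Index; the tree count and search parameters are placeholders):

#include <opencv2/core.hpp>
#include <opencv2/flann.hpp>
#include <vector>

// Build a kd-tree over the dataset once, then query it per pattern point.
bool patternFound(const std::vector<cv::Point2f>& pointset,
                  const std::vector<cv::Point2f>& transformedPattern,
                  float maxDist)
{
    cv::Mat features((int)pointset.size(), 2, CV_32F);
    for (int i = 0; i < (int)pointset.size(); ++i)
    {
        features.at<float>(i, 0) = pointset[i].x;
        features.at<float>(i, 1) = pointset[i].y;
    }
    cv::flann::Index kdtree(features, cv::flann::KDTreeIndexParams(4));

    for (unsigned int i = 0; i < transformedPattern.size(); ++i)
    {
        std::vector<float> query;
        query.push_back(transformedPattern[i].x);
        query.push_back(transformedPattern[i].y);
        std::vector<int> indices(1);
        std::vector<float> dists(1); // squared L2 distances
        kdtree.knnSearch(query, indices, dists, 1, cv::flann::SearchParams(32));
        if (dists[0] > maxDist * maxDist)
            return false; // nearest data point is too far away -> no match
    }
    return true;
}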