Uniform scaling of an array of points around the average point (C++)

What I am trying to do:
1. Uniformly scale an array of points around a point.
2. That point has to be the average point of the array of points.
The code below seems to work, but I do not know if it is the proper way of doing it.
I know that uniform scaling is simply multiplying points by some value, but that scales around the origin (0,0,0); how do I do it around the mean point?
The code can be subdivided into the following steps:
1. Get the average point of the array of points by summing up all positions and dividing by the number of points.
2. ratio is the scaling value.
3. Do a vector subtraction to get a vector pointing from the point to the average point.
4. Normalize that vector (to get a unit vector).
5. Add that normalized vector, multiplied by (1 - ratio)*0.5, to the current point.
This last bit (the 5th step) came purely from checking the total length of the result.
All examples I found before used matrices, and I'm really not capable of reading matrix operations.
Is this a correct uniform scaling method, and if it's not, could you point out what I am doing wrong?
//Get the center of the curve,
//that is, the average of all points
MatMxN curveCenter = MatMxN::Zero(2, 1); //this is just one vector/point with x and y coordinates
for (int i = 0; i < n; i++)
    curveCenter += points.col(i);
curveCenter /= n;

//Scaling value
float ratio = 1.3f;

//Get a vector pointing to the center and move by ratio
for (int i = 0; i < n; i++) {
    MatMxN vector = curveCenter - points.col(i);
    MatMxN unit = vector.normalized();
    points.col(i) += unit * (1 - ratio) * 0.5; // points.col(i) is a point in the array
}

In order to scale points around a specific center point (other than the origin), follow these steps:
1. Subtract the center from the point: MatMxN vector = points.col(i) - curveCenter;
2. Multiply the vector by the scaling factor: vector *= ratio;
3. Add the center back to the scaled vector to get the new point: points.col(i) = vector + curveCenter;
This approach can be reduced to something remotely similar to your formula. Let's call the center C, the point to be scaled P0, the scaled point P1 and the scaling factor s. The above 3 steps can be written as:
v0 = P0 - C
v1 = s * v0
P1 = v1 + C
=>
P1 = s * P0 + C * (1 - s)
Now we define P1 = P0 + x for some x:
P0 + x = s * P0 + C * (1 - s)
=>
x = s * P0 + C * (1 - s) - P0
= C * (1 - s) - P0 * (1 - s)
= (C - P0) * (1 - s)
So the update could be written as follows instead of using the 3 steps mentioned:
MatMxN vector = curveCenter - points.col(i);
points.col(i) += vector * (1 - ratio);
However, I prefer to write the subtraction in reverse, because it is closer to the original steps and easier to understand intuitively:
MatMxN vector = points.col(i) - curveCenter;
points.col(i) += vector * (ratio - 1);
I don't know where you found the normalize and *0.5 ideas.
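For completeness, here is a minimal self-contained sketch of the whole operation. It assumes Eigen's MatrixXf in place of the question's MatMxN typedef (points stored as 2 x n columns); the function name is my own:

#include <Eigen/Dense>

// Scale the columns of `points` uniformly around their centroid.
void scaleAroundCentroid(Eigen::MatrixXf& points, float ratio)
{
    // centroid = the average of all columns (the curveCenter from the question)
    Eigen::VectorXf curveCenter = points.rowwise().mean();
    for (Eigen::Index i = 0; i < points.cols(); i++) {
        Eigen::VectorXf v = points.col(i) - curveCenter; // vector from the center to the point
        points.col(i) = curveCenter + ratio * v;         // scale it, then add the center back
    }
}

This is just the three steps above folded into one line per point: points.col(i) = curveCenter + (points.col(i) - curveCenter) * ratio.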

Related

How to check if a point is in a triangle (or on its edge)

I'm trying to write an algorithm to determine if a point is located inside a triangle or on its edge in 3D coordinate space.
For example, I try to reach such results for different cases:
I've figured out how to check if point P is inside the triangle: I calculated normal vectors for the triangles ABP, BCP, CAP and checked if they are similar.
Can someone explain how to check if a point is on the edge of a triangle (but not outside of the triangle)? You can provide formulas or code as you wish.
Make vectors:
r = p - A (r.x = p.x - A.x, r.y = p.y - A.y, r.z = p.z - A.z)
s = B - A
q = C - A
Calculate normal to ABC plane:
n = s x q (vector product)
Check if p lies in ABC plane using dot product:
dp = n.dot.r
If dp is zero (or has a very small value like 1.0e-10, due to floating point errors), then p is in the plane and we can continue.
Decompose vector r by the base vectors s and q. First check whether the z-component of the normal (n.z) is non-zero. If so, use the following pair of equations (otherwise choose the equations for the x/z or y/z components):
rx = a * sx + b * qx
ry = a * sy + b * qy
Solve this system:
a = (ry * qx - rx * qy) / (sy * qx - sx * qy)
b = (rx - a * sx) / qx
If the resulting coefficients a and b fulfill the limits:
a >= 0
b >= 0
a + b <= 1.0
then point p lies in the triangle's plane and inside the triangle; it is on an edge when a = 0, b = 0, or a + b = 1 (within tolerance).
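To make the recipe concrete, here is a minimal C++ sketch of the whole check. The Vec3 struct and all helper names are mine, not from any particular library, and eps is a tolerance you pick for your coordinate scale (note that the plane test uses the unnormalized normal):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true if p lies in the plane of triangle ABC and inside it or on its edge.
bool pointInTriangle(Vec3 p, Vec3 A, Vec3 B, Vec3 C, double eps = 1e-9)
{
    Vec3 r = sub(p, A), s = sub(B, A), q = sub(C, A);
    Vec3 n = cross(s, q);                              // normal of the ABC plane
    if (std::fabs(dot(n, r)) > eps) return false;      // p is not in the plane

    // Decompose r = a*s + b*q, picking the two coordinates where |n| is largest.
    double a, b;
    if (std::fabs(n.z) >= std::fabs(n.x) && std::fabs(n.z) >= std::fabs(n.y)) {
        a = (r.y*q.x - r.x*q.y) / (s.y*q.x - s.x*q.y);                         // x/y equations
        b = (std::fabs(q.x) > eps) ? (r.x - a*s.x) / q.x : (r.y - a*s.y) / q.y;
    } else if (std::fabs(n.y) >= std::fabs(n.x)) {
        a = (r.z*q.x - r.x*q.z) / (s.z*q.x - s.x*q.z);                         // x/z equations
        b = (std::fabs(q.x) > eps) ? (r.x - a*s.x) / q.x : (r.z - a*s.z) / q.z;
    } else {
        a = (r.z*q.y - r.y*q.z) / (s.z*q.y - s.y*q.z);                         // y/z equations
        b = (std::fabs(q.y) > eps) ? (r.y - a*s.y) / q.y : (r.z - a*s.z) / q.z;
    }
    return a >= -eps && b >= -eps && a + b <= 1.0 + eps;
}

A point is on an edge (rather than strictly inside) when a, b, or 1 - (a + b) is zero within the tolerance.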

Reverse engineering - Is this a cheap 3D distance function?

I am reverse engineering a game from 1999 and I came across a function which appears to check whether the player is within range of a 3D point, for triggering audio sources. The decompiler mangles the code pretty badly, but I think I understand it.
// Position Y delta
v1 = *(float *)(this + 16) - LocalPlayerZoneEntry->y;
// Position X delta
v2 = *(float *)(this + 20) - LocalPlayerZoneEntry->x;
// Absolute value
if (v1 < 0.0)
    v1 = -v1;
// Absolute value
if (v2 < 0.0)
    v2 = -v2;
// What is going on here?
if (v1 <= v2)
    v1 = v1 * 0.5;
else
    v2 = v2 * 0.5;
// Z position delta
v3 = *(float *)(this + 24) - LocalPlayerZoneEntry->z;
// Absolute value
if (v3 < 0.0)
    v3 = -v3;
result = v3 + v2 + v1;
// Radius
if (result > *(float *)(this + 28))
    return 0.0;
return result;
Interestingly enough, in game the triggering seemed pretty inconsistent and would sometimes be quite a bit off, depending on which side I approached the trigger from.
Does anyone have any idea if this was a common algorithm used back in the day?
Note: The types were all added by me so they may be incorrect. I assume that this is a function of type bool.
The best way to visualize a distance function (a metric) is to plot its unit sphere (the set of points at unit distance from the origin -- the metric in question is norm-induced).
First rewrite it in a more mathematical form:
N(x, y, z) = 0.5*|x| + |y| + |z|   when |x| <= |y|
           = |x| + 0.5*|y| + |z|   otherwise
Let's do that for 2D (assume that z = 0). The absolute values make the function symmetric in the four quadrants. The |x| <= |y| condition makes it symmetric in all eight sectors. Let's focus on the sector x > 0, y > 0, x <= y. We want to find the curve where N(x, y, 0) = 1. For that sector it reduces to 0.5x + y = 1, or y = 1 - 0.5x. We can go and plot that line. For x > 0, y > 0, x > y, we get x = 1 - 0.5y. Plotting it all gives the following unit 'circle':
For comparison, here is an Euclidean unit circle overlaid:
In the third dimension it behaves like a taxicab metric, effectively giving you a 'diamond' shaped sphere:
So yes, it is a cheap distance function, though it lacks rotational symmetries.
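For reference, here is a cleaned-up sketch of the metric in plain C++; the function name is mine, and the game's routine additionally compares the result against the radius stored at offset 28 and returns 0.0 when out of range:

#include <cmath>

// N(x, y, z) as written above: halve whichever of |x|, |y| is smaller,
// then add |z| taxicab-style.
float cheapDistance(float dx, float dy, float dz)
{
    float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    if (ax <= ay) ax *= 0.5f; else ay *= 0.5f;   // octagonal approximation in the x/y plane
    return ax + ay + az;
}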

Color image boundary based on local curvature

I am searching for an algorithm (using OpenCV C or C++) which does this:
Given the boundary image, I want to find the local curvature at all points and color map it, which is what is done in the image displayed above. I got this image from Wikipedia but haven't been able to find out a way to color the boundary in this way. Kindly let me know how it can be done.
If you observe the boundary, red denotes boundary has high slope, yellow shows that the boundary is almost linear.
How can this be done?
Edit
Just to give you an idea of how I have been trying to do this for the last two days:
I used the OpenCV functions convexHull and convexityDefects, but realized that I am going in the wrong direction. I have to work only on the contours/boundaries of the binary image.
You can solve the problem by fitting a path of cubic Bezier curves to the boundary, then taking the curvature analytically.
The boundary consists of a list of x, y points at pixel centres, each point 1 px or √2 px from the next in the list. You need to fit a smooth cubic Bezier path to this, using the technique by Schneider in Graphics Gems ("An Algorithm for Automatically Fitting Digitized Curves", Graphics Gems 1, p. 612).
Then step along the curve, taking tiny steps which are always sub-pixel, and
take the curvature using:
double BezierCurve::Curvature(double t) const
{
    // Nice mathematically perfect formula:
    //Vector2 d1 = Tangent(t);
    //Vector2 d2 = Deriv2(t);
    //return (d1.x * d2.y - d1.y * d2.x) / pow(d1.x * d1.x + d1.y * d1.y, 1.5);

    // Get the cubic coefficients like this; I store them in the Bezier class.
    /*
       a = p3 + 3.0 * p1 - 3.0 * p2 - p0;
       b = 3.0 * p0 - 6.0 * p1 + 3.0 * p2;
       c = 3.0 * p1 - 3.0 * p0;
       d = p0;
    */
    double dx, dy, ddx, ddy;
    dx  = 3 * this->ax * t*t + 2 * this->bx * t + this->cx;
    ddx = 6 * this->ax * t + 2 * this->bx;
    dy  = 3 * this->ay * t*t + 2 * this->by * t + this->cy;
    ddy = 6 * this->ay * t + 2 * this->by;
    if (dx == 0 && dy == 0)
        return 0;
    return (dx*ddy - ddx*dy) / ((dx*dx + dy*dy) * sqrt(dx*dx + dy*dy));
}
OpenCV findContours, used with mode = CV_RETR_EXTERNAL and method = CV_CHAIN_APPROX_NONE, will give you all boundary pixels, ordered such that two subsequent points are neighbors.
To get the radius of the circle through three points, there is a lot of info on the Web. Because you only need the radius, not the center, this Stack Exchange answer is fast.
In pseudo code:
vector_of_points = OpenCV::findContours(...)
p1 = vector start
p2, p3 are the next points in the vector
// boundary is circular, so in the first loop pass we must adjust:
p2 = next point
p3 = last point
// Use p1 as our iterator
while (p1 <= vector.end)
{
    // curvature
    radius = calculateRadius(p1, p2, p3)
    // set color for pixel p2
    setColor(p2, radius)
    increment p1, p2, p3
    adjust for start point = end point
}
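For completeness, a minimal OpenCV C++ sketch of this circumradius approach. The input file name, the neighbour step, and the curvature-to-colour scaling constant are all arbitrary choices of mine:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

static double dist(cv::Point a, cv::Point b)
{
    return std::hypot(double(a.x - b.x), double(a.y - b.y));
}

// Radius of the circle through three points; returns a huge value when the
// points are (nearly) collinear, i.e. the boundary is locally straight.
static double circumradius(cv::Point a, cv::Point b, cv::Point c)
{
    double area2 = std::abs((double)(b.x - a.x) * (c.y - a.y) -
                            (double)(b.y - a.y) * (c.x - a.x)); // twice the triangle area
    if (area2 < 1e-9) return 1e9;
    return dist(a, b) * dist(b, c) * dist(c, a) / (2.0 * area2);
}

int main()
{
    cv::Mat binary = cv::imread("boundary.png", cv::IMREAD_GRAYSCALE);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    cv::Mat colored;
    cv::cvtColor(binary, colored, cv::COLOR_GRAY2BGR);
    const int step = 5; // look a few pixels ahead/behind to reduce pixel noise
    for (const std::vector<cv::Point>& c : contours) {
        const int n = (int)c.size();
        if (n < 2 * step + 1) continue;
        for (int i = 0; i < n; i++) {
            double r = circumradius(c[(i - step + n) % n], c[i], c[(i + step) % n]);
            double curvature = 1.0 / r;                            // large = sharp bend
            int amount = (int)std::min(255.0, curvature * 2000.0); // arbitrary scaling
            colored.at<cv::Vec3b>(c[i]) = cv::Vec3b(0, 255 - amount, 255); // yellow -> red
        }
    }
    cv::imwrite("curvature.png", colored);
    return 0;
}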

Suggestions to Compute the Intersetions of Multiple Convex 2D Polygons

I am writing this question fishing for any state-of-the-art software or methods that can quickly compute the intersections of N 2D polygons (the convex hulls of projected convex polyhedra) with M 2D polygons, where typically N >> M. N may be on the order of at least 1M polygons and M on the order of 50k. I've searched for some time now, but I keep coming up with the same answer, shown below.
Use boost and a loop to:
1. compute the projection of the polyhedron (not the bottleneck),
2. compute the convex hull of said projection (bottleneck),
3. compute the intersection of the projected polyhedron and an existing 2D polygon (major bottleneck).
This loop is repeated N*K times, where typically K << M and K is the average number of 2D polygons intersecting a single projected polyhedron. This is done to reduce the number of computations.
The problem with this is that if I have N = 262144 and M = 19456 it takes about 129 seconds (when multithreaded by polyhedron), and this must be done about 300 times. Ideally, I would like to reduce the computation time to about 1 second for the above sizes, so I was wondering if someone could point me to some software or literature that could improve efficiency.
[EDIT]
At @sehe's request, I'm posting the most relevant parts of the code. I haven't compiled it, so this is just to get the gist... This code assumes there are voxels and pixels, but the shapes can be anything. The order of the points in the grid can be arbitrary, but the indices of where the points reside in the grid are the same.
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/geometries/point.hpp>
#include <boost/geometry/geometries/ring.hpp>
const std::size_t Dimension = 2;
typedef boost::geometry::model::point<float, Dimension, boost::geometry::cs::cartesian> point_2d;
typedef boost::geometry::model::polygon<point_2d, false /* is cw */, true /* closed */> polygon_2d;
typedef boost::geometry::model::box<point_2d> box_2d;
std::vector<float> getOverlaps(std::vector<float> & projected_grid_vx, // projected voxels
std::vector<float> & pixel_grid_vx, // pixels
std::vector<int> & projected_grid_m, // number of voxels in each dimension
std::vector<int> & pixel_grid_m, // number of pixels in each dimension
std::vector<float> & pixel_grid_omega, // size of the pixel grid in cm
int projected_grid_size, // total number of voxels
int pixel_grid_size) { // total number of pixels
std::vector<float> overlaps(projected_grid_size * pixel_grid_size);
std::vector<float> h(pixel_grid_m.size());
for(int d=0; d < pixel_grid_m.size(); d++) {
h[d] = (pixel_grid_omega[2*d+1] - pixel_grid_omega[2*d]) / pixel_grid_m[d];
}
for(int i=0; i < projected_grid_size; i++){
std::vector<int> point_indices(8);
point_indices[0] = i;
point_indices[1] = i + 1;
point_indices[2] = i + projected_grid_m[0];
point_indices[3] = i + projected_grid_m[0] + 1;
point_indices[4] = i + projected_grid_m[0] * projected_grid_m[1];
point_indices[5] = i + projected_grid_m[0] * projected_grid_m[1] + 1;
point_indices[6] = i + (projected_grid_m[1] + 1) * projected_grid_m[0];
point_indices[7] = i + (projected_grid_m[1] + 1) * projected_grid_m[0] + 1;
std::vector<float> vx_corners(8 * projected_grid_m.size());
for(int vn = 0; vn < 8; vn++) {
for(int d = 0; d < projected_grid_m.size(); d++) {
vx_corners[vn + d * 8] = projected_grid_vx[point_indices[vn] + d * projected_grid_size];
}
}
polygon_2d proj_voxel;
for(int vn = 0; vn < 8; vn++) {
point_2d poly_pt(vx_corners[2 * vn], vx_corners[2 * vn + 1]);
boost::geometry::append(proj_voxel, poly_pt);
}
boost::geometry::correct(proj_voxel);
polygon_2d proj_voxel_hull;
boost::geometry::convex_hull(proj_voxel, proj_voxel_hull);
box_2d bb_proj_vox;
boost::geometry::envelope(proj_voxel_hull, bb_proj_vox);
point_2d min_pt = bb_proj_vox.min_corner();
point_2d max_pt = bb_proj_vox.max_corner();
// then get min and max indices of intersecting bins
std::vector<float> min_idx(projected_grid_m.size() - 1),
max_idx(projected_grid_m.size() - 1);
// compute min and max indices of incidence on the pixel grid
// this is easy assuming you have a regular grid of pixels
min_idx[0] = std::min((float)std::max(std::floor((min_pt.get<0>() - pixel_grid_omega[0]) / h[0] - 0.5), 0.), (float)(pixel_grid_m[0] - 1));
min_idx[1] = std::min((float)std::max(std::floor((min_pt.get<1>() - pixel_grid_omega[2]) / h[1] - 0.5), 0.), (float)(pixel_grid_m[1] - 1));
max_idx[0] = std::min((float)std::max(std::floor((max_pt.get<0>() - pixel_grid_omega[0]) / h[0] + 0.5), 0.), (float)(pixel_grid_m[0] - 1));
max_idx[1] = std::min((float)std::max(std::floor((max_pt.get<1>() - pixel_grid_omega[2]) / h[1] + 0.5), 0.), (float)(pixel_grid_m[1] - 1));
// iterate only over pixels which intersect the projected voxel
for(int iy = min_idx[1]; iy <= max_idx[1]; iy++) {
for(int ix = min_idx[0]; ix <= max_idx[0]; ix++) {
int idx = ix + iy * pixel_grid_m[0]; // `first' index of pixel corner point
polygon_2d pix_poly;
for(int pn = 0; pn < 4; pn++) {
point_2d pix_corner_pt(
pixel_grid_vx[idx + pn % 2 + (pn / 2) * pixel_grid_m[0]],
pixel_grid_vx[idx + pn % 2 + (pn / 2) * pixel_grid_m[0] + pixel_grid_size]
);
boost::geometry::append(pix_poly, pix_corner_pt);
}
boost::geometry::correct( pix_poly );
//make this into a convex hull since the order of the point may be any
polygon_2d pix_hull;
boost::geometry::convex_hull(pix_poly, pix_hull);
// on to perform intersection
std::vector<polygon_2d> vox_pix_ints;
polygon_2d vox_pix_int;
try {
boost::geometry::intersection(proj_voxel_hull, pix_hull, vox_pix_ints);
} catch (const std::exception& e) {
// skip since these may coincide at a point or line
continue;
}
// both are convex, so only one intersection region is expected
if (vox_pix_ints.empty())
    continue; // nothing to record if the intersection is empty
vox_pix_int = vox_pix_ints[0];
overlaps[i + idx * projected_grid_size] = boost::geometry::area(vox_pix_int);
}
} // end intersection for
} //end projected_voxel for
return overlaps;
}
You could compute the ratio of polygon area to bounding-box area:
This could be done computationally, once, to arrive at an average poly-area-to-BB-area ratio R, a constant.
Or you could do it with geometry, using a circle bounded by its BB. Since you're using only projected polyhedra:
R = 0.0;
count = 0;
for (each poly) {
    count++;
    R += polyArea / itsBoundingBoxArea;
}
R = R / count;
Then calculate the summation of intersection of bounding boxes.
Sbb = 0.0;
for (box1, box2 where box1.isIntersecting(box2)) {
    Sbb += box1.intersect(box2);
}
Then:
Approximation = R * Sbb
None of this would work if concave polys were allowed, because a concave poly can occupy less than 1% of its bounding box; in that case you would still have to find the convex hull.
Alternatively, if you can find a polygon's area quicker than its hull, you could use the actual computed average poly area. This would give you a decent approximation as well, while avoiding both polygon intersection and hull wrapping.
Hm, the problem seems similar to doing "collision detection" in game engines, or to "potentially visible sets".
While I don't know much about the current state of the art, I remember that one optimization was to enclose objects in spheres, since checking overlaps between spheres (or circles in 2D) is really cheap.
In order to speed up collision checks, objects were often put into search structures (e.g. a sphere tree, or a circle tree in the 2D case), basically organizing the space into a hierarchical structure to make overlap queries fast.
So basically my suggestion boils down to: try looking at algorithms for collision detection in game engines.
Assumption
I'm assuming that you mean "intersections" and not "intersection". Moreover, it is not the expected use case that most of the individual polys from M and N will overlap at the same time. If this assumption is true, then:
Answer
The way this is done in 2D game engines is by having a scene graph where every object has a bounding box. Then place all the polygons into the nodes of a quadtree according to their location, as determined by their bounding boxes. The task then becomes parallel, because each node can be processed separately for intersection.
Here is the wiki for quadtree:
Quadtree Wiki
An octree could be used when working in 3D.
It actually doesn't even have to be an octree. You could get the same results with any space partition. You could find the maximum separation of polys (let's call it S) and create, say, S/10 space partitions. Then you would have 10 separate spaces to process in parallel. Not only would it be concurrent, it would also no longer be M * N time, since not every poly must be compared against every other poly.
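The same pruning can also be done without a hand-rolled quadtree: Boost.Geometry ships an R-tree (boost::geometry::index::rtree). Below is a minimal sketch under the assumption that the pixel polygons and projected voxel hulls have already been built; the container names and the flat output layout mirror the question but are otherwise my own:

#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iterator>
#include <utility>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

typedef bg::model::point<float, 2, bg::cs::cartesian> point_2d;
typedef bg::model::box<point_2d> box_2d;
typedef bg::model::polygon<point_2d, false, true> polygon_2d;

std::vector<float> overlapAreas(const std::vector<polygon_2d>& pixel_polys,
                                const std::vector<polygon_2d>& voxel_hulls)
{
    typedef std::pair<box_2d, std::size_t> value_t;

    // Bulk-load the M pixel polygons into an R-tree once.
    std::vector<value_t> entries;
    entries.reserve(pixel_polys.size());
    for (std::size_t j = 0; j < pixel_polys.size(); ++j) {
        box_2d bb;
        bg::envelope(pixel_polys[j], bb);
        entries.push_back(std::make_pair(bb, j));
    }
    bgi::rtree<value_t, bgi::rstar<16> > tree(entries.begin(), entries.end());

    // For each projected voxel hull, intersect only the candidates whose
    // bounding boxes overlap its envelope.
    std::vector<float> areas(voxel_hulls.size() * pixel_polys.size(), 0.f);
    for (std::size_t i = 0; i < voxel_hulls.size(); ++i) {
        box_2d query_bb;
        bg::envelope(voxel_hulls[i], query_bb);
        std::vector<value_t> hits;
        tree.query(bgi::intersects(query_bb), std::back_inserter(hits));
        for (std::size_t h = 0; h < hits.size(); ++h) {
            std::vector<polygon_2d> pieces;
            bg::intersection(voxel_hulls[i], pixel_polys[hits[h].second], pieces);
            if (!pieces.empty())
                areas[i + hits[h].second * voxel_hulls.size()] = (float)bg::area(pieces[0]);
        }
    }
    return areas;
}

Building the tree is O(M log M) once, and each query only touches the handful of pixel polygons whose boxes actually overlap a given voxel, which is exactly the K << M factor mentioned in the question.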

Finding a square in a group of coordinates

OK, I'm having a bit of trouble finding a solution for what seems to be a simple geometry problem.
I have a list of coordinate triples, each of which forms a right angle.
Among all these triples I want to find a pair that forms a square.
I believe the best I can do to exemplify is show an image:
Cases 1 and 2 are irrelevant; 3 and 4 are what I'm looking for.
For each triple I have the middle point, where the angle is, and two other points that describe the two segments that form the angle.
Summing it up: given six points, 2 for the diagonal + 4 other points, how can I find out whether these make a square?
Note: the two lines that make the angle are consistent but don't have the same length.
Note 2: the lines from different triples may not intersect.
Thank you for your time and for any help and insight provided.
If any term I used is incorrect or just plain hard to understand, let me know; I'm not a native English speaker.
Edit: The code as it stands.
//for all triples
for (size_t i = 0; i < toTry.size() - 1; i++) {
    Vec2i center_i = toTry[i].avg;
    //NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
    Vec2i a = toTry[i].p, b = toTry[i].q;
    Vec2f normalized_i = normalizedDiagonal(center_i, toTry[i].p, toTry[i].q);
    for (size_t j = i + 1; j < toTry.size(); j++) {
        Vec2i center_j = toTry[j].avg;
        //If the points are close, they don't matter
        if (areClose(center_i, center_j, 25))
            continue;
        Vec2f normalized_j = normalizedDiagonal(center_j, toTry[j].p, toTry[j].q);
        line(src, Point(center_i[0], center_i[1]), Point(center_i[0] + 1 * normalized_i[0], center_i[1] + 1 * normalized_i[1]), Scalar(255, 255, 255), 1);
        //test if antiparallel
        if (abs(normalized_i[0] - normalized_j[0]) > 0.1 || abs(normalized_i[1] - normalized_j[1]) > 0.1)
            continue;
        Vec2f delta;
        delta[0] = center_j[0] - center_i[0]; delta[1] = center_j[1] - center_i[1];
        double dd = sqrt(pow((center_i[0] - center_j[0]), 2) + pow((center_i[1] - center_j[1]), 2));
        //delta[0] = delta[0] / dd;
        //delta[1] = delta[1] / dd;
        float dotProduct = normalized_i[0] * delta[0] + normalized_i[1] * delta[1];
        //test if dot product < 0
        if (dotProduct < 0)
            continue;
        float deltaDotDiagonal = delta[0] * normalized_i[0] + delta[1] * normalized_i[1];
        menor_d[0] = delta[0] - deltaDotDiagonal * normalized_i[0];
        menor_d[1] = delta[1] - deltaDotDiagonal * normalized_i[1];
        dd = sqrt(pow((center_j[0] - menor_d[0]), 2) + pow((center_j[1] - menor_d[1]), 2));
        if (dd < 25)
            [...]
Just to be clear, the actual lengths of the side segments are irrelevant, right? All you care about is whether the semi-infinite lines formed by the side segments of two triples form a square? Or do the actual segments need to intersect?
Assuming the former, a method to check whether two triples form a square is as follows. Let's use the Point3D and Vector3D from the System.Windows.Media.Media3D namespace to define some terminology, since these are decent general-purpose 3D double-precision points and vectors that support basic linear algebra methods. They are C#, so you can't use them directly, but I'd like to be able to refer to some of the basic methods mentioned there.
Here is the basic method to check whether two triples form a square:
Define a triple as follows: Center, Side1 and Side2 as three Point3D structures.
For each triple, define the normalized diagonal vector as
NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
NormalizedDiagonal.Normalize()
(You might want to cache this for performance.)
Check if the two centers are equal within some linear tolerance you define. If equal, return false -- it's a degenerate case.
Check if the two diagonal vectors are antiparallel within some angular tolerance you define. (I.e. NormalizedDiagonal1 == -NormalizedDiagonal2 with some tolerance.) If not, return false, not a square.
Compute the vector from triple1.Center to triple2.Center: delta = triple2.Center - triple1.Center.
If double deltaDotDiagonal = DotProduct(delta, triple1.NormalizedDiagonal) < 0, return false - the two triples point away from each other.
Finally, compute the distance from the center of triple2 to the (infinite) diagonal line passing through the center of triple1. If it is zero (within your linear tolerance), they form a square.
To compute that distance: distance = (delta - deltaDotDiagonal*triple1.NormalizedDiagonal).Length
Note: deltaDotDiagonal*triple1.NormalizedDiagonal is the projection of the delta vector onto triple1.NormalizedDiagonal, and thus delta - deltaDotDiagonal*triple1.NormalizedDiagonal is the component of delta that is perpendicular to that diagonal. Its length is the distance we seek.
Finally, If your definition of a square requires that the actual side segments intersect, you can add an extra check that the lengths of all the side segments are less than sqrt(2) * delta.Length.
This method checks whether two triples form a square. Finding all triples that form squares is, of course, O(N²). If this is a problem, you can put them in an array and sort them by angle = Atan2(NormalizedDiagonal.Y, NormalizedDiagonal.X). Having done that, you can find the triples that potentially form squares with a given triple by binary-searching the array for triples with angles +/- π away from the angle of the current triple, within your angular tolerance. (When the angle is near π you will need to check both the beginning and end of the array.)
Update
OK, let's see if I can do this with your classes. I don't have definitions for Vec2i and Vec2f so I could get this wrong...
double getLength(Vec2f vector)
{
    return sqrt(pow(vector[0], 2) + pow(vector[1], 2));
}

Vec2f scaleVector(Vec2f vec, float scale)
{
    Vec2f scaled;
    scaled[0] = vec[0] * scale;
    scaled[1] = vec[1] * scale;
    return scaled;
}

Vec2f subtractVectorsAsFloat(Vec2i first, Vec2i second)
{
    // return first - second as float.
    Vec2f diff;
    diff[0] = first[0] - second[0];
    diff[1] = first[1] - second[1];
    return diff;
}

Vec2f subtractVectorsAsFloat(Vec2f first, Vec2f second)
{
    // return first - second as float.
    Vec2f diff;
    diff[0] = first[0] - second[0];
    diff[1] = first[1] - second[1];
    return diff;
}

double dot(Vec2f first, Vec2f second)
{
    return first[0] * second[0] + first[1] * second[1];
}
//for all triples
for (size_t i = 0; i < toTry.size() - 1; i++) {
    Vec2i center_i = toTry[i].avg;
    //NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
    Vec2i a = toTry[i].p, b = toTry[i].q;
    Vec2f normalized_i = normalizedDiagonal(center_i, toTry[i].p, toTry[i].q);
    for (size_t j = i + 1; j < toTry.size(); j++) {
        Vec2i center_j = toTry[j].avg;
        //If the points are close, they don't matter
        if (areClose(center_i, center_j, 25))
            continue;
        Vec2f normalized_j = normalizedDiagonal(center_j, toTry[j].p, toTry[j].q);
        //test if antiparallel
        if (abs(normalized_i[0] - normalized_j[0]) > 0.1 || abs(normalized_i[1] - normalized_j[1]) > 0.1)
            continue;
        // get a vector pointing from center_i to center_j.
        Vec2f delta = subtractVectorsAsFloat(center_j, center_i);
        //test if dot product < 0
        float deltaDotDiagonal = dot(delta, normalized_i);
        if (deltaDotDiagonal < 0)
            continue;
        Vec2f deltaProjectedOntoDiagonal = scaleVector(normalized_i, deltaDotDiagonal);
        // Subtracting delta's projection onto normalized_i leaves the component
        // of delta which is perpendicular to normalized_i...
        Vec2f distanceVec = subtractVectorsAsFloat(delta, deltaProjectedOntoDiagonal);
        // ... the length of which is the distance from center_j
        // to the diagonal through center_i.
        double distance = getLength(distanceVec);
        if (distance < 25) {
        }
    }
}
There are two approaches to solving this. One is a very direct approach that involves finding the intersection of two line segments.
You just use the triple coordinates to figure out the midpoint and the two line segments that protrude from it (trivial). Do this for both triples.
Now calculate the intersection points, if they exist, for all four possible permutations of the extending line segments. From the original answer to a similar question:
You might look at the code I wrote for Computational Geometry in C,
which discusses this question in detail (Chapter 1, Section 5). The
code is available as SegSegInt from the links at that web site.
In a nutshell, I recommend a different approach, using signed area of
triangles. Then comparing appropriate triples of points, one can
distinguish proper from improper intersections, and all degenerate
cases. Once they are distinguished, finding the point of intersection
is easy.
An alternate, image-processing approach would be to render the lines, define one unique color for the lines, and then apply a seed/flood-fill algorithm to the first white zone found, applying a new unique color to subsequent zones, until you flood-fill an enclosed area that doesn't touch the border of the image.
Good luck!
References
finding the intersection of two line segments in 2d (with potential degeneracies), Accessed 2014-08-18, <https://math.stackexchange.com/questions/276735/finding-the-intersection-of-two-line-segments-in-2d-with-potential-degeneracies>
In a pair of segments, call one "the base segment"; the one obtained by rotating the base segment by π/2 counterclockwise is "the other segment".
For each triple, compute the angle between the base segment and the X axis. Call this its principal angle.
Sort triples by the principal angle.
Now, for each triple with a principal angle of α, any potential square-forming mate has a principal angle of α+π (mod 2π). This is easy to find by binary search.
Furthermore, for two candidate triples with vertices a and a' and principal angles α and α+π, the angle of vector aa' should be α+π/4.
Finally, if each of the four segments is at least |aa'|/√2 long, we have a square.
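To make the binary-search pairing concrete, here is a minimal C++ sketch; the Triple struct and every name in it are placeholders of mine, and the inner loop only marks where the remaining checks described above would run:

#include <algorithm>
#include <cmath>
#include <vector>

struct Triple {
    double cx, cy;   // vertex of the right angle
    double bx, by;   // unit direction of the base segment
};

const double PI = 3.14159265358979323846;

double principalAngle(const Triple& t) { return std::atan2(t.by, t.bx); } // in (-pi, pi]

void findSquareCandidates(std::vector<Triple>& triples, double angTol)
{
    // Sort once by principal angle.
    std::sort(triples.begin(), triples.end(),
              [](const Triple& a, const Triple& b) { return principalAngle(a) < principalAngle(b); });

    for (const Triple& t : triples) {
        // A square-forming mate has principal angle alpha + pi (mod 2*pi).
        double target = principalAngle(t) + PI;
        if (target > PI) target -= 2.0 * PI;

        // Binary-search for triples whose angle is within angTol of target.
        // (When target is near +/- pi, also check the other end of the array.)
        auto lo = std::lower_bound(triples.begin(), triples.end(), target - angTol,
                                   [](const Triple& a, double v) { return principalAngle(a) < v; });
        auto hi = std::upper_bound(triples.begin(), triples.end(), target + angTol,
                                   [](double v, const Triple& a) { return v < principalAngle(a); });
        for (auto it = lo; it != hi; ++it) {
            // Candidate pair (t, *it): now check that the angle of the vector aa'
            // is alpha + pi/4 and that all four segments are at least |aa'|/sqrt(2) long.
        }
    }
}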