I am implementing a marching cubes algorithm, generally based on Paul Bourke's implementation, with some major adjustments:
precalculation of the scalar field (floating-point values)
avoiding duplicate vertices in the final list using a std::map
vertex storage to visualize the final mesh in Ogre3D
Basically I changed nearly 80% of his code. My resulting mesh has some ugly terraces and I am not sure how to avoid them. I assumed that using floating-point values for the scalar field would do the job. Is this a common effect? How can you avoid it?
Here is how I calculate the vertex positions on the edges (cell.val[p1] contains the scalar value for the given vertex):
//if there is an intersection on this edge
if (cell.iEdgeFlags & (1 << iEdge))
{
const int* edge = a2iEdgeConnection[iEdge];
int p1 = edge[0];
int p2 = edge[1];
//find the approximate intersection point by linear interpolation between the two edge endpoints and their density values
float length = cell.val[p1] / (cell.val[p2] + cell.val[p1]);
asEdgeVertex[iEdge] = cell.p[p1] + length * (cell.p[p2] - cell.p[p1]);
}
You can find the complete sourcecode here: https://github.com/DieOptimistin/MarchingCubes
I use Ogre3D as library for this example.
As Andy Newmann said, the devil was in the linear interpolation. The original formula ignores the target iso-value, so the vertex is not placed where the scalar field actually crosses mTargetValue along the edge; that is what produced the terraces. Correct is:
float offset;
float delta = cell.val[p2] - cell.val[p1];
if (delta == 0) offset = 0.5f;
else offset = (mTargetValue - cell.val[p1]) / delta;
asEdgeVertex[iEdge] = cell.p[p1] + offset * (cell.p[p2] - cell.p[p1]);
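Wrapped up as a self-contained helper, the fix might look like this (a sketch only: the function name is mine, and Ogre::Vector3 is used since Ogre3D is already the rendering library here):
// Corrected edge interpolation as a standalone helper. val1 and val2 are
// the scalar field samples at the edge endpoints, target is the iso-level.
Ogre::Vector3 interpolateEdge(const Ogre::Vector3& pos1, const Ogre::Vector3& pos2,
                              float val1, float val2, float target)
{
    const float delta = val2 - val1;
    // Degenerate case: both samples are equal, so place the vertex at the midpoint.
    const float offset = (delta == 0.0f) ? 0.5f : (target - val1) / delta;
    return pos1 + offset * (pos2 - pos1);
}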
I am trying to generate uniform random points on the surface of a unit sphere for a Monte Carlo ray tracing program. When I say uniform I mean the points are uniformly distributed with respect to surface area. My current methodology is to calculate uniform random points on a hemisphere pointing in the positive z axis, with its base in the x-y plane.
The random point on the hemisphere represents the direction of emission of thermal radiation for a diffuse grey emitter.
I achieve the correct result when I use the following calculation :
Note: dsfmt_genrand_* returns a random number between 0 and 1.
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
zenith = asin(sqrt(dsfmt_genrand_close_open(&dsfmtt)));
// Calculate the cartesian point
osRay.c._x = sin(zenith)*cos(azimuthal);
osRay.c._y = sin(zenith)*sin(azimuthal);
osRay.c._z = cos(zenith);
However this is quite slow and profiling suggests that it takes up a large proportion of run time. Therefore I sought out some alternative methods:
The Marsaglia 1972 rejection method
do {
x1 = 2.0*dsfmt_genrand_open_open(&dsfmtt)-1.0;
x2 = 2.0*dsfmt_genrand_open_open(&dsfmtt)-1.0;
S = x1*x1 + x2*x2;
} while (S >= 1.0);
osRay.c._x = 2.0*x1*sqrt(1.0-S);
osRay.c._y = 2.0*x2*sqrt(1.0-S);
osRay.c._z = fabs(1.0-2.0*S); // fabs, not the integer abs, which would truncate
Analytical cartesian coordinate calculation
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
u = 2*dsfmt_genrand_close_open(&dsfmtt) -1;
w = sqrt(1-u*u);
osRay.c._x = w*cos(azimuthal);
osRay.c._y = w*sin(azimuthal);
osRay.c._z = fabs(u); // fabs, not the integer abs
While these last two methods run several times faster than the first, when I use them I get results which indicate that they are not generating uniform random points on the surface of a sphere, but rather a distribution which favours the equator.
Additionally, the last two methods give identical final results; however, I am certain that they are incorrect, as I am comparing against an analytical solution.
Every reference I have found indicates that these methods do produce uniform distributions, yet I do not achieve the correct result.
Is there an error in my implementation or have I missed a fundamental idea in the second and third methods?
The simplest way to generate a uniform distribution on the unit sphere (whatever its dimension is) is to draw independent normal distributions and normalize the resulting vector.
Indeed, for example in dimension 3, e^(-x^2/2) e^(-y^2/2) e^(-z^2/2) = e^(-(x^2 + y^2 + z^2)/2), so the joint distribution is invariant under rotations.
This is fast if you use a fast normal distribution generator (either Ziggurat or Ratio-Of-Uniforms) and a fast normalization routine (google for "fast inverse square root"). No transcendental function call is required.
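For illustration, here is a minimal C++ sketch of this approach using the standard library's normal distribution (the function name and choice of RNG are mine, not from the question's code):
#include <cmath>
#include <random>
// Draw a point uniformly distributed on the unit sphere by sampling three
// independent standard normals and normalizing the resulting vector.
void randomOnUnitSphere(std::mt19937& gen, double& x, double& y, double& z)
{
    std::normal_distribution<double> normal(0.0, 1.0);
    double nx, ny, nz, len2;
    do {
        nx = normal(gen);
        ny = normal(gen);
        nz = normal(gen);
        len2 = nx*nx + ny*ny + nz*nz;
    } while (len2 == 0.0); // an all-zero draw is virtually impossible, but be safe
    const double inv = 1.0 / std::sqrt(len2);
    x = nx * inv;
    y = ny * inv;
    z = nz * inv;
}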
Also, the Marsaglia method is not uniform on the half sphere. You'll have more points near the equator, since the correspondence between a point on the 2D disc and a point on the half sphere is not isometric. The last one seems correct, though (however, I didn't do the calculation to make sure).
If you take a horizontal slice of the unit sphere, of height h, its surface area is just 2 pi h. (This is how Archimedes calculated the surface area of a sphere.) So the z-coordinate is uniformly distributed in [0,1]:
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
osRay.c._z = dsfmt_genrand_close_open(&dsfmtt);
xyproj = sqrt(1 - osRay.c._z*osRay.c._z);
osRay.c._x = xyproj*cos(azimuthal);
osRay.c._y = xyproj*sin(azimuthal);
Also you might be able to save some time by calculating cos(azimuthal) and sin(azimuthal) together -- see this stackoverflow question for discussion.
Edited to add: OK, I see now that this is just a slight tweak of your third method. But it cuts out a step.
This should be quick if you have a fast RNG:
// RNG::draw() returns a uniformly distributed number between -1 and 1.
void drawSphereSurface(RNG& rng, double& x1, double& x2, double& x3)
{
while (true) {
x1 = rng.draw();
x2 = rng.draw();
x3 = rng.draw();
const double radius = sqrt(x1*x1 + x2*x2 + x3*x3);
if (radius > 0 && radius < 1) {
x1 /= radius;
x2 /= radius;
x3 /= radius;
return;
}
}
}
To speed it up, you can move the sqrt call inside the if block.
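That optimization might look like this (a sketch against the same hypothetical RNG interface; the acceptance test uses the squared radius, so sqrt is only paid for accepted samples):
#include <cmath>
// Same rejection sampling, but sqrt is deferred until a sample is accepted.
void drawSphereSurfaceFast(RNG& rng, double& x1, double& x2, double& x3)
{
    while (true) {
        x1 = rng.draw();
        x2 = rng.draw();
        x3 = rng.draw();
        const double r2 = x1*x1 + x2*x2 + x3*x3;
        if (r2 > 0 && r2 < 1) {
            const double radius = std::sqrt(r2);
            x1 /= radius;
            x2 /= radius;
            x3 /= radius;
            return;
        }
    }
}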
Have you tried getting rid of asin?
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
sin2_zenith = dsfmt_genrand_close_open(&dsfmtt);
sin_zenith = sqrt(sin2_zenith);
// Calculate the cartesian point
osRay.c._x = sin_zenith*cos(azimuthal);
osRay.c._y = sin_zenith*sin(azimuthal);
osRay.c._z = sqrt(1 - sin2_zenith);
I think the problem you are having with non-uniform results is that in polar coordinates, a random point on the circle is not uniformly distributed along the radial axis. If you look at the area of [theta, theta+dtheta] x [r, r+dr], for fixed theta and dtheta, the area will be different for different values of r. Intuitively, there is "more area" further out from the center, so you need to scale your random radius to account for this. The area element in polar coordinates is r dr dtheta, which makes the CDF of the radius (r/R)^2; inverting it gives the scaling r = R*sqrt(rand), with R being the radius of the circle and rand being the random number.
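In code, uniform disk sampling with that correction might look like this (a sketch with my own names):
#include <cmath>
#include <random>
// Uniform point inside a disk of radius R: the radius is scaled by sqrt(u)
// so that equal areas receive equal probability.
void randomInDisk(std::mt19937& gen, double R, double& x, double& y)
{
    const double PI = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double r = R * std::sqrt(uni(gen)); // inverse of the CDF (r/R)^2
    const double theta = 2.0 * PI * uni(gen);
    x = r * std::cos(theta);
    y = r * std::sin(theta);
}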
The second and third methods do in fact produce uniformly distributed random points on the surface of a sphere, with the second method (Marsaglia 1972) producing the fastest run times, at around twice the speed, on an Intel Xeon 2.8 GHz Quad-Core.
As noted by Alexandre C, there is an additional method using the normal distribution which extends to n-spheres better than the methods I have presented.
This link will give you further information on selecting uniformly distributed random points on the surface of a sphere.
My initial method, as pointed out by TonyK, does not produce uniformly distributed points; rather, it biases the points toward the poles. This is in fact what the problem I am trying to solve requires; I had simply assumed it would generate uniformly random points. As suggested by Pablo, this method can be optimised by removing the asin() call, reducing run time by around 20%.
1st try (wrong)
point=[rand(-1,1),rand(-1,1),rand(-1,1)];
len = length_of_vector(point);
This is wrong on its own: normalizing points drawn uniformly from a cube favours the cube's diagonal directions.
EDITED:
What about this?
while (1) {
    point = [rand(-1,1), rand(-1,1), rand(-1,1)];
    len = length_of_vector(point);
    if (len > 1)
        continue;
    point = point / len;
    break;
}
The acceptance rate here is the ratio of the unit ball's volume to the cube's, pi/6, roughly 0.52, which means you will reject about 48% of the candidates.
I am working on conventional Whitted ray tracing, and trying to interpolate the surface of the hit triangle as if it were convex instead of flat.
The idea is to treat triangle as a parametric surface s(u,v) once the barycentric coordinates (u,v) of hit point p are known.
This surface equation should be calculated using triangle's positions p0, p1, p2 and normals n0, n1, n2.
The hit point itself is calculated as
p = (1-u-v)*p0 + u*p1 + v*p2;
So far I have found three different solutions.
Solution 1. Projection
The first solution I came to. It is to project the hit point onto the planes that pass through each of the vertices p0, p1, p2 perpendicular to the corresponding normals, and then interpolate the result.
vec3 r0 = p0 + dot( p0 - p, n0 ) * n0;
vec3 r1 = p1 + dot( p1 - p, n1 ) * n1;
vec3 r2 = p2 + dot( p2 - p, n2 ) * n2;
p = (1-u-v)*r0 + u*r1 + v*r2;
Solution 2. Curvature
Suggested in the paper by Takashi Nagata, "Simple local interpolation of surfaces using normal vectors", and discussed in the question "Local interpolation of surfaces using normal vectors", but it seems overcomplicated and not very fast for real-time ray tracing (unless you precompute all the necessary coefficients). The triangle here is treated as a second-order surface.
Solution 3. Bezier curves
This solution is inspired by Brett Hale's answer. It uses a higher-order interpolation, cubic Bezier curves in my case.
E.g., for the edge p0p1 the Bezier curve should look like
B(t) = (1-t)^3*p0 + 3(1-t)^2*t*(p0+n0*adj) + 3(1-t)*t^2*(p1+n1*adj) + t^3*p1,
where adj is some adjustment parameter.
Computing Bezier curves for edges p0p1 and p0p2 and interpolating them gives the final code:
float u1 = 1 - u;
float v1 = 1 - v;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*(u1*n0 + u*n1)*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*(v1*n0 + v*n2)*adj;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
p = (1-w)*b1 + w*b2;
Alternatively, one can interpolate between three edges:
float u1 = 1.0 - u;
float v1 = 1.0 - v;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
float w1 = 1.0 - w;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*( u1*n0 + u*n1 )*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*( v1*n0 + v*n2 )*adj;
vec3 b0 = w1*w1*(3-2*w1)*p1 + w*w*(3-2*w)*p2 + 3*w*w1*( w1*n1 + w*n2 )*adj;
p = (1-u-v)*b0 + u*b1 + v*b2;
Maybe I messed something up in the code above, but this option does not seem very robust inside a shader.
P.S. The intention is to get more correct origins for shadow rays when they are cast from low-poly models. Here you can find the resulting images from the test scene. The big white numbers indicate the number of the solution (zero for the original image).
P.P.S. I still wonder if there is another efficient solution which can give a better result.
Keeping triangles 'flat' has many benefits and simplifies several stages required during rendering. Approximating a higher-order surface, on the other hand, introduces quite significant tracing overhead and requires adjustments to your BVH structure.
When the geometry is treated as a collection of facets, the shading information can still be interpolated to achieve smooth shading while remaining very efficient to process.
There are adaptive tessellation techniques which approximate the limit surface (OpenSubdiv is a great example). Pixar's Photorealistic RenderMan has a long history of using subdivision surfaces. When they switched their rendering algorithm to path tracing, they also introduced a pretessellation step for their subdivision surfaces. This stage is executed right before rendering begins and builds an adaptive triangulated approximation of the limit surface. This seems to be more efficient to trace and tends to use fewer resources, especially for the high-quality assets used in this industry.
So, to answer your question. I think the most efficient way to achieve what you're after is to use an adaptive subdivision scheme which spits out triangles instead of tracing against a higher order surface.
Dan Sunday describes an algorithm that calculates the barycentric coordinates on the triangle once the ray-plane intersection has been calculated. The point lies inside the triangle if:
(s >= 0) && (t >= 0) && (s + t <= 1)
You can then use, say, n(s, t) = nu * s + nv * t + nw * (1 - s - t) to interpolate a normal, as well as the point of intersection, though n(s, t) will not, in general, be normalized, even if (nu, nv, nw) are. You might find higher order interpolation necessary. PN-triangles were a similar hack for visual appeal rather than mathematical precision. For example, true rational quadratic Bezier triangles can describe conic sections.
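As a sketch of that interpolation with renormalization (my own minimal vec3 type; the blended normal must be renormalized because the blend is generally not unit length):
#include <cmath>
struct vec3 { double x, y, z; };
vec3 operator+(const vec3& a, const vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator*(double s, const vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
// Interpolate the vertex normals at barycentric coordinates (s, t), then renormalize.
vec3 shadingNormal(const vec3& nu, const vec3& nv, const vec3& nw, double s, double t)
{
    const vec3 n = s * nu + t * nv + (1.0 - s - t) * nw;
    const double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return (1.0 / len) * n;
}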
I am trying to apply a random texture with a number on it based on the tile input. My efforts thus far cause patterns to emerge in the number selected.
My method for going about it has been multiplying each tile value by a random scalar and then either multiplying or adding them together. Call the operations I do to each tile "salt".
Example:
float seed = fract(fract((tile.x+23.42f)*189.28148f) + (tile.y*92001.302+1.235801f));
When I add the two "salted" tile values together, a pattern emerges: the numbers are the same along the diagonal.
When I multiply the two "salted" tile values together, the numbers are mirrored along the diagonal.
I want a little more randomness out of these tile values; can someone decent at math help?
Shader code segment:
vec2 tile = vec2(floor(texCoord_vs.x), floor(texCoord_vs.y));
float seed = fract(fract((tile.x+23.42f)*189.28148f) + (tile.y*92001.302+1.235801f));
vec2 offset = offset_vs;
offset.x += floor(seed*7.0f)*TYPE_UNIT_SIZE;
vec3 diffuseColor = materialDiffuseColor * texture(
typesheet, vec2((texCoord_vs.x - tile.x)*TYPE_UNIT_SIZE,
(texCoord_vs.y - tile.y)*TYPE_UNIT_SIZE) + offset);
Picture:
https://gyazo.com/aada74e81fdc3dbd0220263993ed7d01
You might be able to use the technique described in this question for hashing 2D vectors, which in turn was taken from Optimized Spatial Hashing for Collision Detection of Deformable Objects:
index = (x * p1 xor y * p2) mod n
where p1 and p2 are large prime numbers (they used 73856093 and 83492791) and n is the hash table size, or in this case the maximum random index you wish to generate.
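A minimal sketch of that hash in C++ (the primes come from the paper; the function name is mine, and a GLSL port would be analogous using uint arithmetic):
#include <cstdint>
// Spatial hash (x * p1 xor y * p2) mod n from Teschner et al.,
// "Optimized Spatial Hashing for Collision Detection of Deformable Objects".
uint32_t tileHash(int32_t x, int32_t y, uint32_t n)
{
    const uint32_t p1 = 73856093u;
    const uint32_t p2 = 83492791u;
    return ((static_cast<uint32_t>(x) * p1) ^ (static_cast<uint32_t>(y) * p2)) % n;
}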
I am trying to produce random equilateral triangles on the console screen.
The method I am using is: create a center point for the triangle (randomly positioned), move the center point to the origin (0,0), and then create 3 points from the center (adding the radius (a random number) of the triangle to the Y axis of each point). Then I rotate 2 of the points, one by 120 degrees and the other by 240, making an equilateral triangle, and draw lines between the points. Then I bring the points back to the original plot relative to the centroid.
This works most of the time and I get an equilateral triangle; however, other times I don't quite get an equilateral triangle, and I am at a complete loss as to why.
I am using Bresenham's line algorithm to draw the lines between the points.
Image of working triangle: http://imgur.com/GpF406O
Image of broken triangle: http://imgur.com/Oa2BYun
Here is the code that plots the coords for the triangle:
void Triangle::createVertex(Vertex cent)
{
// angle of 120 in radians
double s120 = sin(2.0943951024);
double c120 = cos(2.0943951024);
// angle of 240 in radians
double s240 = sin(4.1887902048);
double c240 = cos(4.1887902048);
// bringing centroid to the origin and saving old pos to move later on
int x = cent.getX();
int y = cent.getY();
cent.setX(0);
cent.setY(0);
// creating the points, all at an equal distance from the centroid
Vertex v1(cent.getX(), cent.getY() + radius);
Vertex v2(cent.getX(), cent.getY() + radius);
Vertex v3(cent.getX(), cent.getY() + radius);
// rotate points
double newx = v1.getX() * c120 - v1.getY() * s120;
double newy = v1.getY() * c120 + v1.getX() * s120;
double xnew = v2.getX() * c240 - v2.getY() * s240;
double ynew = v2.getY() * c240 + v2.getX() * s240;
// giving the points their actual locations in relation to the old pos of the centroid
v1.setX(newx + x);
v1.setY(newy + y);
v2.setX(xnew + x);
v2.setY(ynew + y);
v3.setX(x);
v3.setY(y + radius);
// adding them to a list (the list is used in a function to draw the lines)
vertices.push_back(v1);
vertices.push_back(v2);
vertices.push_back(v3);
}
Looking at the images of your two triangles (and at the line drawing algorithm), you are drawing lines as a series of discrete pixels. That means a vertex must fall in a pixel (it can't be on a boundary), like in this image.
So what happens if your vertex falls on* a border between pixels? Your line drawing algorithm has to decide which pixel to put the vertex in.
Looking at the algorithm description on Wikipedia and the C++ implementation on a page at www.cs.helsinki.fi,
I see that both listed implementations use integer arithmetic**, which in this case is not unreasonable given you have discrete rows of pixels. This means that if your floating-point calculations put one vertex above the threshold of the integer label for the next row of pixels when the floor (conversion from float to int) is done, but the other vertex is below that threshold, then the two vertices will be placed on different rows.
Think v1.y = 5.00000000000000000001 and v2.y = 4.99999999999999999999, which leads to v1 being placed on row 5 and v2 being placed on row 4.
This explains why you only see the issue occurring occasionally: you only occasionally have your vertices land on a boundary like this.
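For instance, this tiny program shows the truncation (the values are hypothetical, chosen to straddle the row-5 boundary):
#include <cstdio>
int main()
{
    // Two y values that are equal on paper but differ by accumulated error:
    double y1 = 5.0000000001; // truncates to row 5
    double y2 = 4.9999999999; // truncates to row 4
    std::printf("%d %d\n", (int)y1, (int)y2); // prints: 5 4
    return 0;
}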
In order to fix this, a couple of things come to mind:
Fix it when you assign values to your vertices; the y values are the same anyway.
given:
v1.getX() = v2.getX() = 0 (defined by your code)
v1.getY() = v2.getY() = radius (defined by your code)
cos(120 degrees) = cos(240 degrees) ('tis true)
This reduces your two y values to
double newy = v1.getY() * c120
double ynew = v1.getY() * c120
ergo:
v1.setY(newy + y);
v2.setY(newy + y);
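Extending the same idea to the x values (a sketch reusing the question's names; since sin(240°) = -sin(120°) and v1 starts at (0, radius), the rotated x offsets are exact mirrors):
// The two rotated vertices are exact mirrors in x and share one y value:
double newx = -radius * s120; // 120-degree rotation of (0, radius)
double newy =  radius * c120; // shared y offset (c120 == c240)
v1.setX(newx + x);  v1.setY(newy + y);
v2.setX(-newx + x); v2.setY(newy + y);
v3.setX(x);         v3.setY(y + radius);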
If you wrote your own Bresenham's algorithm implementation, you could add a check in that code to make sure your vertices are at the same height, but that seems like a really bad place to put that kind of check, since the height of the endpoints is specific to your problem and not to drawing lines in general.
*Or not exactly on, but close enough you can't tell the difference after accounting for floating point error
**The algorithm is not restricted to integer arithmetic, but I suspect that, given the irregularity of your problem and the way the algorithm has been presented, along with the fact that you are using discrete characters for the lines in your images, the integer arithmetic is the issue.
Ok, I'm having a bit of trouble finding a solution for what seems to be a simple geometry problem.
I have a list of triple coordinates that form a square angle.
Between all these triple-coordinates I want to find a pair that forms up a square.
I believe the best I can do to exemplify is show an image:
1. and 2. are irrelevant; 3. and 4. are what I'm looking for.
For each triple coordinate I have the middle point, where the angle is, and two other points that describe the two segments that form the angle.
Summing it up: given six points, 2 for the diagonal + 4 other points, how can I find out whether these make a square?
Note: the two lines that make the angle are consistent but don't have the same length.
Note 2: the lines from different triples may not intersect.
Thank you for time and any help and insight provided.
If any term I used is incorrect or just plain hard to understand, let me know; I'm not a native English speaker.
Edit: The code as it stands.
//for all triples
for (size_t i = 0; i < toTry.size() - 1; i++) {
Vec2i center_i = toTry[i].avg;
//NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
Vec2i a = toTry[i].p, b = toTry[i].q;
Vec2f normalized_i = normalizedDiagonal(center_i, toTry[i].p, toTry[i].q);
for (size_t j = i + 1; j < toTry.size(); j++) {
Vec2i center_j = toTry[j].avg;
//If the points are close, they don't matter
if (areClose(center_i, center_j, 25))
continue;
Vec2f normalized_j = normalizedDiagonal(center_j, toTry[j].p, toTry[j].q);
line(src, Point(center_i[0], center_i[1]), Point(center_i[0] + 1 * normalized_i[0], center_i[1] + 1 * normalized_i[1]), Scalar(255, 255, 255), 1);
//test if antiparallel (normalized_i should be approximately -normalized_j)
if (abs(normalized_i[0] + normalized_j[0]) > 0.1 || abs(normalized_i[1] + normalized_j[1]) > 0.1)
continue;
Vec2f delta;
delta[0] = center_j[0] - center_i[0]; delta[1] = center_j[1] - center_i[1];
double dd = sqrt(pow((center_i[0] - center_j[0]), 2) + pow((center_i[1] - center_j[1]), 2));
//delta[0] = delta[0] / dd;
//delta[1] = delta[1] / dd;
float dotProduct = normalized_i[0] * delta[0] + normalized_i[1] * delta[1];
//test if dot product < 0
if (dotProduct < 0)
continue;
float deltaDotDiagonal = delta[0] * normalized_i[0] + delta[1] * normalized_i[1];
Vec2f menor_d; // the component of delta perpendicular to the diagonal
menor_d[0] = delta[0] - deltaDotDiagonal * normalized_i[0];
menor_d[1] = delta[1] - deltaDotDiagonal * normalized_i[1];
//the distance to the diagonal is the length of the perpendicular component
dd = sqrt(pow(menor_d[0], 2) + pow(menor_d[1], 2));
if (dd < 25)
[...]
Just to be clear, the actual lengths of the side segments are irrelevant, right? All you care about is whether the semi-infinite lines formed by the side segments of two triples form a square? Or do the actual segments need to intersect?
Assuming the former, a method to check whether two triples form a square is as follows. Let's use the Point3D and Vector3D types from the System.Windows.Media.Media3D namespace to define some terminology, since these are decent general-purpose 3D double-precision points and vectors that support basic linear algebra methods. They are C#, so you can't use them directly, but I'd like to be able to refer to some of the basic methods mentioned there.
Here is the basic method to check whether two triples form a square:
Define a triple as follows: Center, Side1 and Side2 as three Point3D structures.
For each triple, define the normalized diagonal vector as
NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
NormalizedDiagonal.Normalize()
(You might want to cache this for performance.)
Check if the two centers are equal within some linear tolerance you define. If equal, return false -- it's a degenerate case.
Check if the two diagonal vectors are antiparallel within some angular tolerance you define. (I.e. NormalizedDiagonal1 == -NormalizedDiagonal2 with some tolerance.) If not, return false, not a square.
Compute the vector from triple1.Center to triple2.Center: delta = triple2.Center - triple1.Center.
If double deltaDotDiagonal = DotProduct(delta, triple1.NormalizedDiagonal) < 0, return false - the two triples point away from each other.
Finally, compute the distance from the center of triple2 to the (infinite) diagonal line passing through the center of triple1. If it is zero (within your linear tolerance), they form a square.
To compute that distance: distance = (delta - deltaDotDiagonal*triple1.NormalizedDiagonal).Length
Note: deltaDotDiagonal*triple1.NormalizedDiagonal is the projection of the delta vector onto triple1.NormalizedDiagonal, and thus delta - deltaDotDiagonal*triple1.NormalizedDiagonal is the component of delta that is perpendicular to that diagonal. Its length is the distance we seek.
Finally, If your definition of a square requires that the actual side segments intersect, you can add an extra check that the lengths of all the side segments are less than sqrt(2) * delta.Length.
This method checks if two triples form a square. Finding all triples that form squares is, of course, O(N-squared). If this is a problem, you can put them in an array and sort them by angle = Atan2(NormalizedDiagonal.Y, NormalizedDiagonal.X). Having done that, you can find triples that potentially form squares with a given triple by binary-searching the array for triples whose angle is π away from the angle of the current triple, within your angular tolerance. (When the angle is near π you will need to check both the beginning and the end of the array.)
Update
OK, let's see if I can do this with your classes. I don't have definitions for Vec2i and Vec2f so I could get this wrong...
double getLength(Vec2f vector)
{
return sqrt(pow(vector[0], 2) + pow(vector[1], 2));
}
Vec2f scaleVector(Vec2f vec, float scale)
{
Vec2f scaled;
scaled[0] = vec[0] * scale;
scaled[1] = vec[1] * scale;
return scaled;
}
Vec2f subtractVectorsAsFloat(Vec2i first, Vec2i second)
{
// return first - second as float.
Vec2f diff;
diff[0] = first[0] - second[0];
diff[1] = first[1] - second[1];
return diff;
}
Vec2f subtractVectorsAsFloat(Vec2f first, Vec2f second)
{
// return first - second as float.
Vec2f diff;
diff[0] = first[0] - second[0];
diff[1] = first[1] - second[1];
return diff;
}
double dot(Vec2f first, Vec2f second)
{
return first[0] * second[0] + first[1] * second[1];
}
//for all triples
for (size_t i = 0; i < toTry.size() - 1; i++) {
Vec2i center_i = toTry[i].avg;
//NormalizedDiagonal = ((Side1 - Center) + (Side2 - Center));
Vec2i a = toTry[i].p, b = toTry[i].q;
Vec2f normalized_i = normalizedDiagonal(center_i, toTry[i].p, toTry[i].q);
for (size_t j = i + 1; j < toTry.size(); j++) {
Vec2i center_j = toTry[j].avg;
//If the points are close, they don't matter
if (areClose(center_i, center_j, 25))
continue;
Vec2f normalized_j = normalizedDiagonal(center_j, toTry[j].p, toTry[j].q);
//test if antiparallel (normalized_i should be approximately -normalized_j)
if (abs(normalized_i[0] + normalized_j[0]) > 0.1 || abs(normalized_i[1] + normalized_j[1]) > 0.1)
continue;
// get a vector pointing from center_i to center_j.
Vec2f delta = subtractVectorsAsFloat(center_j, center_i);
//test if dot product < 0
float deltaDotDiagonal = dot(delta, normalized_i);
if (deltaDotDiagonal < 0)
continue;
Vec2f deltaProjectedOntoDiagonal = scaleVector(normalized_i, deltaDotDiagonal);
// Subtracting the dot product of delta projected onto normalized_i will leave the component
// of delta which is perpendicular to normalized_i...
Vec2f distanceVec = subtractVectorsAsFloat(delta, deltaProjectedOntoDiagonal);
// ... the length of which is the distance from center_j
// to the diagonal through center_i.
double distance = getLength(distanceVec);
if (distance < 25) {
// found a square-forming pair
}
}
}
There are two approaches to solving this. One is a very direct approach that involves finding the intersection of two line segments.
You just use the triple coordinates to figure out the midpoint, and the two line segments that protrude from it (trivial). Do this for both triple-sets.
Now calculate the intersection points, if they exist, for all four possible permutations of the extending line segments. From the original answer to a similar question:
You might look at the code I wrote for Computational Geometry in C,
which discusses this question in detail (Chapter 1, Section 5). The
code is available as SegSegInt from the links at that web site.
In a nutshell, I recommend a different approach, using signed area of
triangles. Then comparing appropriate triples of points, one can
distinguish proper from improper intersections, and all degenerate
cases. Once they are distinguished, finding the point of intersection
is easy.
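For reference, the signed-area predicate at the heart of that approach is tiny; here is a sketch (proper intersections only, with degenerate cases omitted):
// Twice the signed area of triangle (a, b, c): positive if counterclockwise,
// negative if clockwise, zero if the points are collinear.
double signedArea2(double ax, double ay, double bx, double by, double cx, double cy)
{
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}
// Proper intersection of segments ab and cd: the endpoints of each segment
// must lie strictly on opposite sides of the line through the other segment.
bool properIntersect(double ax, double ay, double bx, double by,
                     double cx, double cy, double dx, double dy)
{
    const double d1 = signedArea2(cx, cy, dx, dy, ax, ay);
    const double d2 = signedArea2(cx, cy, dx, dy, bx, by);
    const double d3 = signedArea2(ax, ay, bx, by, cx, cy);
    const double d4 = signedArea2(ax, ay, bx, by, dx, dy);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}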
An alternative, image-processing approach would be to render the lines, define one unique color for the lines, and then apply a seed/flood-fill algorithm to the first white zone found, applying a new unique color to subsequent zones, until you flood-fill an enclosed area that doesn't touch the border of the image.
Good luck!
References
finding the intersection of two line segments in 2d (with potential degeneracies), Accessed 2014-08-18, <https://math.stackexchange.com/questions/276735/finding-the-intersection-of-two-line-segments-in-2d-with-potential-degeneracies>
In a pair of segments, call one "the base segment"; the one obtained by rotating the base segment by π/2 counterclockwise is "the other segment".
For each triple, compute the angle between the base segment and the X axis. Call this its principal angle.
Sort triples by the principal angle.
Now, for each triple with principal angle α, any potential square-forming mate has a principal angle of α+π (mod 2π). This is easy to find by binary search.
Furthermore, for two candidate triples with vertices a and a' and principal angles α and α+π, the angle of the vector aa' should be α+π/4.
Finally, if each of the four segments is at least |aa'|/√2 long, we have a square.
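A rough sketch of that pairing test for two candidate triples (my own names; a and a2 are the corner vertices, alpha and alpha2 the principal angles):
#include <cmath>
// Absolute difference between two angles, wrapped into [0, pi].
double angleDiff(double a, double b)
{
    const double PI = 3.14159265358979323846;
    double d = std::fmod(std::fabs(a - b), 2.0 * PI);
    return d > PI ? 2.0 * PI - d : d;
}
// Two triples can be square-forming mates only if their principal angles
// differ by pi and the vector from a to a2 lies at alpha + pi/4.
bool couldFormSquare(double ax, double ay, double alpha,
                     double a2x, double a2y, double alpha2, double tol)
{
    const double PI = 3.14159265358979323846;
    if (angleDiff(alpha2, alpha + PI) > tol)
        return false;
    const double dir = std::atan2(a2y - ay, a2x - ax);
    return angleDiff(dir, alpha + PI / 4.0) <= tol;
}
The remaining check, that each of the four segments is at least |aa'|/√2 long, then only needs to run on the pairs that pass this filter.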