I have a
boost::geometry::model::polygon<Point> Algorithm::poly
and I'm looking for the area of the polygon with
area = bg::area(poly);
the result is
1.10434e+08
When reading the documentation, I see
"The units are the square of the units used for the points defining the surface". I really don't understand what that means.
https://www.boost.org/doc/libs/1_65_0/libs/geometry/doc/html/geometry/reference/algorithms/area/area_1.html
I would like to know if there is a way to transform the return value of bg::area into a result in m².
With another tool (which I can't use in my code) I can see that the total area of the polygon is 11043 m².
How can I get 11043 from 1.10434e+08?
The points in poly are Cartesian (x, y) coordinates. What is their unit? Are they in mm, cm, attoparsecs?
The resulting unit is the square of that. But we can work that out from the data given:
sqrt(1.10434e+08 / 11043) ≈ sqrt(10000) = 100
So it seems your points are in centimeters (0.01 m), and the area returned is in cm²
(1 m² = 10000 cm²).
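A minimal sketch of the conversion under that assumption (the type aliases are mine, not from the question; only the final division is the actual fix):
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;
using Point   = bg::model::d2::point_xy<double>;
using Polygon = bg::model::polygon<Point>;

// bg::area returns squared point units; with points in cm the result is in cm².
double areaInSquareMeters(const Polygon& poly)
{
    return bg::area(poly) / 10000.0;   // 1 m² = 10000 cm²
}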
I am trying to generate uniform random points on the surface of a unit sphere for a Monte Carlo ray tracing program. When I say uniform I mean the points are uniformly distributed with respect to surface area. My current methodology is to calculate uniform random points on a hemisphere pointing along the positive z axis, with its base in the x-y plane.
The random point on the hemisphere represents the direction of emission of thermal radiation for a diffuse grey emitter.
I achieve the correct result when I use the following calculation :
Note: dsfmt_* will return a random number between 0 and 1.
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
zenith = asin(sqrt(dsfmt_genrand_close_open(&dsfmtt)));
// Calculate the cartesian point
osRay.c._x = sin(zenith)*cos(azimuthal);
osRay.c._y = sin(zenith)*sin(azimuthal);
osRay.c._z = cos(zenith);
However this is quite slow and profiling suggests that it takes up a large proportion of run time. Therefore I sought out some alternative methods:
The Marsaglia 1972 rejection method
do {
x1 = 2.0*dsfmt_genrand_open_open(&dsfmtt)-1.0;
x2 = 2.0*dsfmt_genrand_open_open(&dsfmtt)-1.0;
S = x1*x1 + x2*x2;
} while(S > 1.0f);
osRay.c._x = 2.0*x1*sqrt(1.0-S);
osRay.c._y = 2.0*x2*sqrt(1.0-S);
osRay.c._z = abs(1.0-2.0*S);
Analytical cartesian coordinate calculation
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
u = 2*dsfmt_genrand_close_open(&dsfmtt) -1;
w = sqrt(1-u*u);
osRay.c._x = w*cos(azimuthal);
osRay.c._y = w*sin(azimuthal);
osRay.c._z = abs(u);
While these last two methods run several times faster than the first, when I use them I get results which indicate that they are not generating uniform random points on the surface of a sphere, but rather a distribution which favours the equator.
Additionally, the last two methods give identical final results; however, I am certain that they are incorrect, as I am comparing against an analytical solution.
Every reference I have found indicates that these methods do produce uniform distributions however I do not achieve the correct result.
Is there an error in my implementation or have I missed a fundamental idea in the second and third methods?
The simplest way to generate a uniform distribution on the unit sphere (whatever its dimension is) is to draw independent normal distributions and normalize the resulting vector.
Indeed, for example in dimension 3, e^(-x^2/2) e^(-y^2/2) e^(-z^2/2) = e^(-(x^2 + y^2 + z^2)/2), so the joint distribution is invariant under rotations.
This is fast if you use a fast normal distribution generator (either Ziggurat or Ratio-Of-Uniforms) and a fast normalization routine (google for "fast inverse square root"). No transcendental function call is required.
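A minimal sketch of that approach, using the C++11 <random> generator instead of the dSFMT one from the question (a production version would use Ziggurat and a fast inverse square root, as described above):
#include <cmath>
#include <random>

// Uniform random point on the unit sphere: draw three independent normals
// and normalize the resulting vector.
void randomUnitVector(std::mt19937& gen, double& x, double& y, double& z)
{
    std::normal_distribution<double> gauss(0.0, 1.0);
    x = gauss(gen);
    y = gauss(gen);
    z = gauss(gen);
    const double invLen = 1.0 / std::sqrt(x*x + y*y + z*z);
    x *= invLen;
    y *= invLen;
    z *= invLen;
}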
Also, the Marsaglia method is not uniform on the half sphere. You'll have more points near the equator, since the correspondence between a point on the 2D disc and a point on the half sphere is not isometric. The last one seems correct, though (however, I didn't do the calculation to verify this).
If you take a horizontal slice of the unit sphere, of height h, its surface area is just 2 pi h. (This is how Archimedes calculated the surface area of a sphere.) So the z-coordinate is uniformly distributed in [0,1]:
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
osRay.c._z = dsfmt_genrand_close_open(&dsfmtt);
xyproj = sqrt(1 - osRay.c._z*osRay.c._z);
osRay.c._x = xyproj*cos(azimuthal);
osRay.c._y = xyproj*sin(azimuthal);
Also you might be able to save some time by calculating cos(azimuthal) and sin(azimuthal) together -- see this stackoverflow question for discussion.
Edited to add: OK, I see now that this is just a slight tweak of your third method. But it cuts out a step.
This should be quick if you have a fast RNG:
#include <cmath>

// RNG::draw() returns a uniformly distributed number between -1 and 1.
void drawSphereSurface(RNG& rng, double& x1, double& x2, double& x3)
{
while (true) {
x1 = rng.draw();
x2 = rng.draw();
x3 = rng.draw();
const double radius = sqrt(x1*x1 + x2*x2 + x3*x3);
if (radius > 0 && radius < 1) {
x1 /= radius;
x2 /= radius;
x3 /= radius;
return;
}
}
}
To speed it up, you can move the sqrt call inside the if block by testing the squared radius first.
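Something like this, so the square root is only taken for accepted samples (roughly 52% of draws; same variables as above):
// Test the squared radius first; r2 < 1 exactly when radius < 1.
const double r2 = x1*x1 + x2*x2 + x3*x3;
if (r2 > 0 && r2 < 1) {
    const double radius = sqrt(r2);
    x1 /= radius;
    x2 /= radius;
    x3 /= radius;
    return;
}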
Have you tried getting rid of asin?
azimuthal = 2*PI*dsfmt_genrand_close_open(&dsfmtt);
sin2_zenith = dsfmt_genrand_close_open(&dsfmtt);
sin_zenith = sqrt(sin2_zenith);
// Calculate the cartesian point
osRay.c._x = sin_zenith*cos(azimuthal);
osRay.c._y = sin_zenith*sin(azimuthal);
osRay.c._z = sqrt(1 - sin2_zenith);
I think the problem you are having with non-uniform results is because, in polar coordinates, a random point on the circle is not uniformly distributed along the radial axis. If you look at the area of [theta, theta+dtheta] x [r, r+dr], for fixed theta and dtheta, the area will be different for different values of r. Intuitively, there is "more area" further out from the center. Thus, you need to scale your random radius to account for this. I haven't got the proof lying around, but the scaling is r = R*sqrt(rand), with R being the radius of the circle and rand being the random number.
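A tiny sketch of that scaling, for uniformly sampling a disc of radius R (the names are mine; any uniform generator works):
#include <cmath>
#include <random>

const double PI = 3.14159265358979323846;

// Uniform random point on a disc of radius R centred at the origin.
void uniformDiscPoint(std::mt19937& gen, double R, double& x, double& y)
{
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    const double theta = 2.0 * PI * u01(gen);
    const double r = R * std::sqrt(u01(gen)); // sqrt compensates for area growing with r
    x = r * std::cos(theta);
    y = r * std::sin(theta);
}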
The second and third methods do in fact produce uniformly distributed random points on the surface of a sphere, with the second method (Marsaglia 1972) producing the fastest run times, at around twice the speed, on an Intel Xeon 2.8 GHz quad-core.
As noted by Alexandre C, there is an additional method using the normal distribution which extends to n-spheres better than the methods I have presented.
This link will give you further information on selecting uniformly distributed random points on the surface of a sphere.
My initial method, as pointed out by TonyK, does not produce uniformly distributed points, but rather biases the poles when generating the random points. This is what the problem I am trying to solve requires; however, I had simply assumed it would generate uniformly random points. As suggested by Pablo, this method can be optimised by removing the asin() call, reducing run time by around 20%.
1st try (wrong; normalizing without rejection biases directions toward the cube's corners):
point = [rand(-1,1), rand(-1,1), rand(-1,1)];
len = length_of_vector(point);
point = point / len;
EDITED:
What about this?
while(1)
    point = [rand(-1,1), rand(-1,1), rand(-1,1)];
    len = length_of_vector(point);
    if( len > 1 || len == 0 )
        continue;
    point = point / len;
    break;
The acceptance rate here is about 0.52 (the volume of the unit ball over the volume of the cube, π/6), which means you will reject roughly 48% of the candidates.
In CGAL I need to compute the exact intersection points between a set of lines and a set of circles. Starting from the circles (which can have an irrational radius but a rational squared_radius), I should compute the vertical lines passing through the x_extremal_points of each circle (not segments but lines) and calculate the intersection point of each circle with each line.
I’m using CircularKernel and Circle_2 for the circles and Line_2 for the lines.
Here’s an example of how I compute the circles and the lines and how I check if they intersect.
int main()
{
Point_2 a = Point_2(250.5, 98.5);
Point_2 b = Point_2(156, 139);
//Radius is half the distance ab, so the squared radius is squared_distance(a, b)/4
Circular_k::FT aRad = CGAL::squared_distance(a, b);
Circle_2 circle_a = Circle_2(a, aRad/4);
Circular_arc_point_2 a_left_point = CGAL::x_extremal_point(circle_a, false);
Circular_arc_point_2 a_right_point = CGAL::x_extremal_point(circle_a, true);
//for example use only left extremal point of circle a
CGAL::Bbox_2 a_left_point_bb = a_left_point.bbox();
Line_2 a_left_line = Line_2(Point_2(a_left_point_bb.xmin(), a_left_point_bb.ymin()),
Point_2(a_left_point_bb.xmin(), a_left_point_bb.ymax()));
if ( do_intersect(a_left_line, circle_a) ) {
std::cout << "intersect";
}
else {
std::cout << " do not intersect ";
}
return 0;
}
This code raises this exception:
CGAL error: precondition violation!
Expression : y != 0
File : c:\dev\cgal-4.7\include\cgal\gmp\gmpq_type.h
Line : 371
Explanation:
Refer to the bug-reporting instructions at http://www.cgal.org/bug_report.html
I can’t figure out how I can calculate the intersection points.
Also, is there a better way to compute the lines? I know about the x_extremal_point function, but it returns a Circular_arc_point and I'm not able to construct a vertical line passing through it directly without using a bounding box.
In your code, you seem to compute the intersection of a circle with the vertical line that passes through the extremal point of that same circle (ignoring the bounding box). Well, then the (double) intersection is the extremal point itself...
More globally, you say in your introduction that you want to compute exact intersections. Then you should certainly not use bounding boxes, which by definition introduce some approximation.
If I understand your text correctly,
* for testing the intersection of your vertical lines with the other circles, you don't need to construct the lines, you only need to compare the abscissae of the extremal points of two circles, which you can do with the CGAL circular kernel.
* for computing the intersection of a vertical line that has non-rational coefficients (since its equation is of the form x = ±sqrt(r)) with another circle, the CGAL circular kernel will not give you a pre-cooked solution. The kernel will help, but you must still compute a few things by hand.
If you don't want to bother, then you can also just take a standard CGAL kernel with CORE::Expr as the underlying number type. It can do "anything", but it will be slower.
For efficiency, you should look at the underlying 1D problem: projecting the lines and the circles on the X axis, you have a set of points and a set of intervals [Xc - R, Xc + R].
If the L points are sorted increasingly, you can locate the left bound of an interval in O(log L) time by dichotomy, and then scan the list of points until the right bound. This results in O(C log L + I) behavior (with C circle intervals), where I is the number of intersections reported.
I guess that with a merge-like process using an active list, if the interval bounds are also sorted, you can lower this to O(L + C + I).
The extension to 2D is elementary.
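A rough sketch of the sorted-points variant described above (all names are mine, not from any library):
#include <algorithm>
#include <vector>

struct Interval { double lo, hi; };  // [Xc - R, Xc + R] for one circle

// xs: sorted abscissae of the vertical lines. Visits each (line, circle)
// pair whose X projections overlap, in O(C log L + I) time.
void stabIntervals(const std::vector<double>& xs, const std::vector<Interval>& circles)
{
    for (const Interval& c : circles) {
        // Locate the left bound by dichotomy...
        auto it = std::lower_bound(xs.begin(), xs.end(), c.lo);
        // ...then scan until the right bound.
        for (; it != xs.end() && *it <= c.hi; ++it) {
            // *it is the abscissa of a line meeting this circle's interval.
        }
    }
}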
I'm trying to program a simulation. Originally I'd randomly create points like so...
for (int c = 0; c < number; c++){
for(int d = 0; d < 3; d++){
coordinate[c][d] = randomrange(low, high);
}
}
Where randomrange() is an arbitrary range randomizer, number is the number of points to create, and d indexes the x, y, z coordinates. It works; however, I want to take things further. How would I define a known shape? Say I want 80 points on a circle's circumference, or 500 that form the edges of a cube. I can explain it well on paper, but have a problem describing the process as code. This doesn't pertain to the question, but I end up writing the points to a txt file and then using MATLAB's scatter3 to plot them. Creating the "shape" points is my issue.
Both a circle and the set of a cube's edges are one-dimensional sets, so you can represent them as real intervals. For a circle it's straightforward: use the interval (0, 2pi) and transform a random value phi from the interval into a point:
xcentre + R cos(phi), ycentre + R sin(phi)
For a cube you have 12 segments, so use the interval (0, 12) and split a random number from the interval into an integer part and a fraction. Then use the integer as an edge number and the fraction as a position along that edge, as in the sketch below.
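A small sketch of both mappings (all names are mine; the RNG is just a placeholder):
#include <cmath>
#include <random>

const double PI = 3.14159265358979323846;

double rand01() // uniform in [0, 1)
{
    static std::mt19937 gen(12345);
    static std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(gen);
}

// Random point on a circle of radius R centred at (xc, yc).
void circlePoint(double xc, double yc, double R, double& x, double& y)
{
    const double phi = 2.0 * PI * rand01();
    x = xc + R * std::cos(phi);
    y = yc + R * std::sin(phi);
}

// Random point on the edges of the unit cube: the integer part of t picks
// the edge, the fraction is the position along it.
void cubeEdgePoint(double& x, double& y, double& z)
{
    // The 12 edges as pairs of corner indices; each corner is a 3-bit pattern
    // (bit 0 = x, bit 1 = y, bit 2 = z), and edge endpoints differ in one bit.
    static const int edge[12][2] = {
        {0,1},{0,2},{0,4},{3,1},{3,2},{3,7},
        {5,1},{5,4},{5,7},{6,2},{6,4},{6,7}
    };
    double t = 12.0 * rand01();
    int e = (int)t; if (e == 12) e = 11; // guard the upper bound
    const double f = t - e;
    double p[3];
    for (int a = 0; a < 3; ++a) {
        const int u0 = (edge[e][0] >> a) & 1, u1 = (edge[e][1] >> a) & 1;
        p[a] = u0 + f * (u1 - u0);
    }
    x = p[0]; y = p[1]; z = p[2];
}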
Easy variant:
First think of the min/max x/y values (separately, to reduce the number of rejected values in the step below), generate some coordinates within this range, and then check whether they satisfy, e.g., a^2 + b^2 = r^2 (circle).
If not, try again.
Better, but only possible for certain shapes:
Generate a radius between (0, max) and an angle (0, 360)
(or just an angle, if the point should be on the circle border),
and use some math (sin/cos...) to transform them into x and y.
http://en.wikipedia.org/wiki/Polar_coordinate_system
I'm trying to find an algorithm to recognize a circle in an array of points.
Let's say I've got an array of points in which a circle may or may not be stored (which also means the array doesn't have to contain only the circle's points; there could be some "extra" points before or after the circle's data).
I've already tried some algorithms, but none of them work properly with those "extra" points. Have you got any ideas how to deal with this problem?
EDIT// I didn't mention this before: I want to use this algorithm for circle gesture recognition. The idea is that I would keep the data from the last few seconds in an array, and by analysing this data in every tracking frame I would be able to say whether or not there was a circle gesture.
First I calculate the geometric mean (not the arithmetic mean) of the X and Y components.
I chose the geometric mean because one of its features is that small values (relative to the arithmetic mean) are much more influential than large values.
This leads me to the theoretical center of all points: circ_center.
Then I calculate the standard deviation of the distance of each point to the center: stddev. This gives me an "indicator" to quantify the amount of variation. One property of a circle is that every point on its circumference is at the same distance from its center. With the standard deviation I test whether your points are equally distant (within a maximum variance threshold: max_dispersion).
Last, I calculate the average distance from the center for the points inside the max_dispersion threshold; this gives me the radius of the circle: avg_dist.
Parameters:
max_dispersion represents the "circle precision". Smaller means more precise.
min_points_needed is the minimum number of points required for the set to be considered a circumference.
This is just an attempt, I have not tried. Let me know.
I will try this (in pseudo language)
points_size = 100; //number_of_user_points
all_points[points_size]; //coordinates of points
//thresholds to be defined by user
max_dispersion = 20; //value of max stddev accepted, expressed in geometric units
min_points_needed = 5; //minimum number of points near the circumference
stddev = 0; //standard deviation of points from center
circ_center; //estimated circumference center, using Geometric mean
num_ok_points = 0; //points with distance from center under max_dispersion
avg_dist = 0; //distance from center of "ok points"
all_x = 1; all_y = 1;
for(i = 0 ; i < points_size ; i++)
{
all_x = all_x * all_points[i].x;
all_y = all_y * all_points[i].y;
}
//pow(x, 1.0/n) = nth root
all_x = pow(all_x, 1.0 / points_size); //Geometric mean
all_y = pow(all_y, 1.0 / points_size); //Geometric mean
circ_center = make_point(all_x, all_y);
for(i = 0 ; i < points_size ; i++)
{
dist = distance(all_points[i], circ_center);
stddev = stddev + (dist * dist);
}
stddev = square_root(stddev / points_size);
for(i = 0 ; i < points_size ; i++)
{
if( distance(all_points[i], circ_center) < max_dispersion )
{
num_ok_points++;
avg_dist = avg_dist + distance(all_points[i], circ_center);
}
}
avg_dist = avg_dist / num_ok_points;
if(stddev <= max_dispersion && num_ok_points >= min_points_needed)
{
circle recognized; its center is circ_center; its radius is avg_dist;
}
Can we assume the array of points is mostly on or near the circumference of the circle?
A circle has a center and a radius. If you can determine the circle's center coordinates, via the intersection of the perpendicular bisectors of two chords, then all the true circle points should be equidistant (r) from the center point.
The false points can then be eliminated for not being equidistant (within a ± tolerance) from the center point.
The weakness of this approach is how well you can determine the center and radius. You may want to try a least-squares approach to computing the center coordinates.
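If you want to go the least-squares route, here is a hedged sketch of the classic algebraic (Kasa-style) circle fit: minimize sum (x^2 + y^2 + D*x + E*y + F)^2 over D, E, F, then read off the center and radius. All names are mine.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static double det3(const double m[3][3])
{
    return m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}

// Fit x^2 + y^2 + D*x + E*y + F = 0; center = (-D/2, -E/2),
// r = sqrt(D^2/4 + E^2/4 - F). Returns false for degenerate input.
bool fitCircle(const std::vector<Pt>& pts, double& cx, double& cy, double& r)
{
    double sx = 0, sy = 0, sz = 0, sxx = 0, syy = 0, sxy = 0, sxz = 0, syz = 0;
    for (const Pt& p : pts) {
        const double z = p.x*p.x + p.y*p.y;
        sx += p.x; sy += p.y; sz += z;
        sxx += p.x*p.x; syy += p.y*p.y; sxy += p.x*p.y;
        sxz += p.x*z; syz += p.y*z;
    }
    const double n = (double)pts.size();
    const double A[3][3] = { {sxx, sxy, sx}, {sxy, syy, sy}, {sx, sy, n} };
    const double b[3] = { -sxz, -syz, -sz };
    const double det = det3(A);
    if (std::fabs(det) < 1e-12) return false; // e.g., collinear points
    // Cramer's rule: replace each column of A by b in turn.
    double M[3][3], sol[3];
    for (int col = 0; col < 3; ++col) {
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                M[i][j] = (j == col) ? b[i] : A[i][j];
        sol[col] = det3(M) / det;
    }
    cx = -sol[0] / 2; cy = -sol[1] / 2;
    r = std::sqrt(cx*cx + cy*cy - sol[2]);
    return true;
}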
To answer the initially stated question, my approach would be to iterate through the points and derive the center of a circle from each consecutive set of three points. Then take the longest contiguous subset of points that create circles with centers falling within some absolute range, and determine whether the points wind consistently around the average of those centers. You can always perform some basic heuristics on any discarded data to determine whether a circle is actually what the user wanted to make, though.
Now, since you say that you want to perform gesture recognition, I would suggest you think of a completely different method. Personally, I would first create a basic sort of language that can be used to describe gestures. It should be very simple; the only words I would consider having are:
Start - Denotes the start of a stroke
    Angle - The starting angle of the stroke. This should be one of the eight major cardinal directions (N, NW, W, SW, S, SE, E, NE) or Any for unaligned gestures. You could also add combining mechanisms, or perhaps "Axis Aligned" or other such things.
End - Denotes the end of a stroke
Travel - Denotes a straight path in the stroke
    Distance - The percentage of the total length of the path that this particular operation will consume.
Turn - Denotes a turn in the stroke
    Direction - The direction to turn in. Choices would be Left, Right, Any, Previous, or Opposite.
    Angle - The angle of the turn. I would suggest you use just three angles (90 deg, 180 deg, 270 deg).
    Tolerance - The maximum tolerance for deviation from the specified angle. This should default to somewhere around 45 degrees in either direction for a high chance of matching the angle in a signature.
    Type - Hard or Radial. Radial angles would be a stroke along a radius. Hard angles would be a turn about a point.
    Radius - If the turn is radial, this is the radius of the turn (units are in percentage of total path length, with appropriate conversions of course).
Obviously you can make the angles much finer, but the coarser the ranges are, the more tolerant of input error the matching will be. Being too tolerant can lead to misinterpretation, though.
If you apply some fuzzy logic, it wouldn't be hard to break just about any gesture down into a language like this. You could then create a bunch of gesture "signatures" that describe various gestures that can be performed. For instance:
//Circle
Start Angle=Any
Turn Type=Radial Direction=Any Angle=180deg Radius=50%
Turn Type=Radial Direction=Previous Angle=180deg Radius=50%
End
//Box
Start Angle=AxisAligned
Travel Distance=25%
Turn Type=Hard Direction=Any Angle=90deg Tolerance=10deg
Travel Distance=25%
Turn Type=Hard Direction=Previous Angle=90deg Tolerance=10deg
Travel Distance=25%
Turn Type=Hard Direction=Previous Angle=90deg Tolerance=10deg
Travel Distance=25%
End
If you want, I could work on an algorithm that could take a point cloud and degenerate it into a series of commands like this so you can compare them with pre-generated signatures.
I am trying to calculate scale, rotation and translation between two consecutive frames of a video. So basically I matched keypoints and then used opencv function findHomography() to calculate the homography matrix.
homography = findHomography(feature1, feature2, CV_RANSAC); // feature1 and feature2 are matched keypoints
My question is: how can I use this matrix to calculate scale, rotation and translation?
Can anyone provide me the code or explanation as to how to do it?
If you can use OpenCV 3.0, this decomposition method is available:
http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#decomposehomographymat
The right answer is to use the homography as it is defined, dst = H ⋅ src, and explore what it does to small segments around a particular point.
Translation
Given a single point, for translation do
T = dst - (H ⋅ src)
Rotation
Given two points p1 and p2
p1' = H ⋅ p1
p2' = H ⋅ p2
Now just calculate the angle between vectors p1 p2 and p1' p2'.
Scale
You can use the same trick but now just compare the lengths: |p1 p2| and |p1' p2'|.
To be fair, use another segment orthogonal to the first and average the results. You will see that there is neither a constant scale factor nor a constant translation: they depend on the src location.
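A sketch of the segment trick with OpenCV (the function and variable names are mine; cv::perspectiveTransform performs the dst = H ⋅ src mapping):
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Local scale, rotation and translation of H around point p, probed with a
// small horizontal segment of length eps.
void localParams(const cv::Mat& H, cv::Point2f p, float eps,
                 double& scale, double& angle, cv::Point2f& translation)
{
    std::vector<cv::Point2f> src = { p, cv::Point2f(p.x + eps, p.y) }, dst;
    cv::perspectiveTransform(src, dst, H);           // dst[i] = H . src[i]
    const cv::Point2f v = dst[1] - dst[0];           // image of the segment p1 p2
    scale = std::sqrt(v.x * v.x + v.y * v.y) / eps;  // |p1' p2'| / |p1 p2|
    angle = std::atan2(v.y, v.x);                    // p1 p2 was horizontal, so this is the rotation
    translation = dst[0] - p;                        // displacement of p itself under H
}
Repeating this with a vertical segment and averaging, as suggested above, shows how the values drift with p.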
Given Homography matrix H:
    |H_00, H_01, H_02|
H = |H_10, H_11, H_12|
    |H_20, H_21, H_22|
Assumptions:
H_20 = H_21 = 0 (i.e., the transform is affine), and H is normalized to H_22 = 1 to obtain 8 DOF.
The translation along x and y axes are directly calculated from H:
tx = H_02
ty = H_12
The 2x2 sub-matrix in the top-left corner is decomposed to calculate shear, scaling and rotation. An easy and quick decomposition method is explained here.
Note: this method assumes an invertible matrix.
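A hedged sketch of one such decomposition (QR-style; the convention A = R(angle) * Shear(k) * diag(sx, sy) and all names are mine):
#include <cmath>

// Decompose the top-left 2x2 block A = |a b| of an affine H (H_20 = H_21 = 0)
//                                      |c d|
// into rotation, scales and shear, with Shear(k) = |1 k|.
//                                                  |0 1|
void decompose2x2(double a, double b, double c, double d,
                  double& angle, double& sx, double& sy, double& k)
{
    angle = std::atan2(c, a);       // rotation
    sx = std::sqrt(a*a + c*c);      // scale along x
    const double det = a*d - b*c;   // non-zero for an invertible matrix
    sy = det / sx;                  // scale along y (sign encodes reflection)
    k = (a*b + c*d) / det;          // shear factor
}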
Since I had to struggle for a couple of days to create my homography transformation function, I'm going to put it here for the benefit of everyone.
Here you can see the main loop, where every input position is multiplied by the homography matrix h. The result is then used to copy the pixel from the original position to the destination position.
for (tempIn[0] = 0; tempIn[0] < stride; tempIn[0]++)
{
    for (tempIn[1] = 0; tempIn[1] < rows; tempIn[1]++)
    {
        double w = h[6] * tempIn[0] + h[7] * tempIn[1] + 1; // very important!
        //H_20 = H_21 = 0 and normalized to H_22 = 1 to obtain 8 DOF. <-- this is wrong
        tempOut[0] = ((h[0] * tempIn[0]) + (h[1] * tempIn[1]) + h[2]) / w;
        tempOut[1] = ((h[3] * tempIn[0]) + (h[4] * tempIn[1]) + h[5]) / w;
        if (tempOut[1] < destSize && tempOut[0] < destSize && tempOut[0] >= 0 && tempOut[1] >= 0)
            dest_[destStride * tempOut[1] + tempOut[0]] = src_[stride * tempIn[1] + tempIn[0]];
    }
}
After this process, the image will contain a grid-like pattern of holes, because forward mapping leaves some destination pixels unassigned. Some kind of filter is needed to fill them; in my code I used a simple linear filter.
Note: Only the central part of the original image is really required for producing a correct image. Some rows and columns can be safely discarded.
For estimating the three-dimensional translation and rotation induced by a homography, multiple approaches exist. One of them provides closed-form formulas for decomposing the homography, but they are very complex. Also, the solutions are never unique.
Luckily, OpenCV 3 already implements this decomposition (decomposeHomographyMat). Given a homography and a correctly scaled intrinsics matrix, the function provides a set of four possible rotations and translations.
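A rough usage sketch (homography is the matrix from the question; K, the camera intrinsics matrix, is assumed known from calibration):
#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Mat> rotations, translations, normals;
int n = cv::decomposeHomographyMat(homography, K, rotations, translations, normals);
// n candidate (R, t, plane-normal) triples come back; an extra check, such as
// requiring reconstructed points to lie in front of the camera, is needed to
// select the physically valid one.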
The question seems to be about 2D parameters. The homography matrix captures perspective distortion. If the application does not create much perspective distortion, one can approximate the real-world transformation using an affine transformation matrix (using only scale, rotation and translation, with no shearing/flipping). The following link gives an idea of how to decompose an affine transformation into those parameters:
https://math.stackexchange.com/questions/612006/decomposing-an-affine-transformation