Converting an ellipse into a polyline - C++

I currently have several ellipses. Each is defined by a centre point and two vectors, one pointing to the minimum axis and the other to the maximum axis.
However, for the program I'm creating I need to be able to deal with these shapes as polylines. I'm fairly sure there must be a formula for generating a set of points from the data I have, but I'm unsure how to go about doing it.
Does anyone have any ideas of how to go about this?
Thanks.

(Under the assumption that both vectors representing the ellipse axes are parallel to the coordinate axes.)
If you take a parameter angle sweeping around the centre of the ellipse, the corresponding point of the ellipse, relative to the centre, is
x = x_half_axis * cos(angle);
y = y_half_axis * sin(angle);
where x_half_axis and y_half_axis are just the lengths (magnitudes) of your half-axis vectors. (Strictly speaking, angle is the parametric angle of the ellipse: unless the ellipse is a circle, it is not the geometric angle of the ray from the centre to the point, but it still sweeps out the whole outline.)
So, just choose some sufficiently small angle step delta. Sweep around your centre point through the entire [0...2*Pi] range with that step, beginning with angle 0, then delta, then 2 * delta and so on. For each angle value, the coordinates of the ellipse point are given by the above formulas, offset by the centre point. That way you will generate your polygonal representation of the ellipse.
If your delta is relatively large (few points on the ellipse), then it should be chosen carefully to make sure your "elliptical polygon" closes nicely: 2*Pi should split into a whole number of delta steps. For small delta values it matters much less.
If your minimum-maximum axis vectors are not parallel to the coordinate axes, you can still use the above approach and then transform the resultant points to their proper final positions by applying the corresponding rotation.
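As an illustration, here is a minimal C++ sketch of this sweep. Instead of a separate rotation step it uses the two half-axis vectors a and b directly as a basis (cos(t)*a + sin(t)*b), which handles both the axis-aligned and the rotated case; the Point struct and function name are mine, not from the question:

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Generate a closed polyline approximating the ellipse with centre c and
// orthogonal half-axis vectors a and b.
std::vector<Point> ellipseToPolyline(Point c, Point a, Point b, int steps)
{
    std::vector<Point> poly;
    poly.reserve(steps);
    const double kPi = 3.14159265358979323846;
    const double delta = 2.0 * kPi / steps; // whole number of steps: closes nicely
    for (int i = 0; i < steps; ++i) {
        const double t = i * delta;
        // cos(t)*a + sin(t)*b sweeps the ellipse; adding c places it.
        poly.push_back({ c.x + std::cos(t) * a.x + std::sin(t) * b.x,
                         c.y + std::cos(t) * a.y + std::sin(t) * b.y });
    }
    return poly; // consecutive points (and last-to-first) form the outline
}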
Fixed-delta angle stepping has a subtlety, though. Because angle is the parametric angle rather than the geometric one, uniform steps do not space the points uniformly along the outline: the spacing near a given point is roughly sqrt(x_half_axis^2 * sin^2(angle) + y_half_axis^2 * cos^2(angle)) * delta, so the points come out denser near the ends of the major axis (where the curvature is greater) and sparser near the ends of the minor axis (where the curvature is smaller). That is actually the desirable direction (higher point density in the regions of higher curvature), but the density ratio is dictated by the axis ratio rather than by any error tolerance.
If that is an issue for you, you can update the algorithm to use adaptive stepping: shrink the angle delta wherever the chord deviates from the true ellipse by more than your tolerance. Recursive midpoint subdivision, as described in the next answer, is one clean way to do this.

Assuming the center at (Xc,Yc) and the half-axis vectors (Xm,Ym), (XM,YM) (these two should be orthogonal), the formula is
X = XM cos(t) + Xm sin(t) + Xc
Y = YM cos(t) + Ym sin(t) + Yc
with t in [0,2Pi].
To get an efficient distribution of the vertices on the outline, I recommend using the maximum-deviation criterion, applied recursively: to draw the arc corresponding to the range [t0,t2], try the midpoint value t1=(t0+t2)/2. If the corresponding points are such that the distance of P1 to the line P0P2 is below a constant threshold (such as one pixel), you can approximate the arc by the segment P0P2. Otherwise, repeat the operation for the arcs [t0,t1] and [t1,t2].
Preorder recursion allows you to emit the polyline vertices in sequence.
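A minimal recursive sketch of this criterion in C++ (the Point struct, the eval callback evaluating the parametric formula above, and the function names are mine):

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Distance from point p to the (infinite) line through a and b.
static double lineDistance(Point p, Point a, Point b)
{
    const double dx = b.x - a.x, dy = b.y - a.y;
    const double len = std::sqrt(dx * dx + dy * dy);
    return std::abs(dx * (p.y - a.y) - dy * (p.x - a.x)) / len;
}

// Preorder recursion over [t0, t2]: emits vertices in sequence.
// eval(t) evaluates the parametric ellipse formula; tol is e.g. one pixel.
template <class Eval>
void flattenArc(const Eval& eval, double t0, double t2, double tol,
                std::vector<Point>& out)
{
    const double t1 = 0.5 * (t0 + t2);
    const Point p0 = eval(t0), p1 = eval(t1), p2 = eval(t2);
    if (lineDistance(p1, p0, p2) < tol) {
        out.push_back(p2);                  // flat enough: keep segment P0P2
    } else {
        flattenArc(eval, t0, t1, tol, out); // refine left half first (preorder)
        flattenArc(eval, t1, t2, tol, out); // then the right half
    }
}

Seed the output with eval(0) and call flattenArc over a few initial sub-ranges (for example the four quarters [0,Pi/2], [Pi/2,Pi], [Pi,3Pi/2], [3Pi/2,2Pi]), so that P0 and P2 never coincide; for the full [0,2Pi] range they would, and the line P0P2 would be undefined.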


How a feature descriptor for traffic sign detection works in OpenCV

I am trying to understand how the Centroid to Contour (CtC) detector works. Here is a sample code that I found on GitHub, and I am trying to understand the idea behind it. In this sample, the author is trying to detect a speed sign using CtC. There are just two important functions:
pre_process()
CtC_features()
I have understood part of the code and how it works, but I have some problems understanding how the CtC_features function works.
If you can help me, I would like to understand the following parts (just 3 points):
Why, if centroid.x > curr.x, do we need to add PI to the angle result? ( if (centroid.x > curr.x) ang += 3.14159; //PI )
When we start selecting the features on line 97, we set the start angle ( double ang = - 1.57079; ). Why is this half of PI, and negative? How was this value chosen?
And a more general question: how can you know that the features you select are related to the speed limit sign? You find the centroid of the image and adjust the angle in the first step, but in the second step, whenever the current point's angle is bigger than the hardcoded bin boundary ( while (feature_v[i].first > ang), initially ang = - 1.57079 ), we add that distance as a feature. How does that select the right features?
I would like to understand the idea behind this code, and if someone with more experience and some knowledge of trigonometry could help me, that would be just great.
Thank you.
The code you provided is not the best, but let's see what happens.
I took this starting image of a sign:
Then, pre_process is called, which basically runs a Canny edge detector along with some tricks that should lead to better edge detection. I won't look into them, but this is what it returns:
Not the greatest. Maybe some parameter tuning would help.
But now, CtC_features is called, which is the scope of the question. The role of CtC_features is to obtain features for a machine learning algorithm. This amounts to finding a numerical description of the image which would help the ML algorithm detect the sign. Such a description can be anything. Think about how someone who has never seen a STOP sign and does not know how to read would describe it. They would say something like "a red, flat plate, with 8 sides and some white stuff in the middle". Based on this description, someone might be able to tell it's a STOP sign. We want to do the same, but since computers are computers, we look for numerical features. And, with them, some algorithm can be trained to "learn" what features each sign has.
So, let's see what features CtC_features obtains from the contours.
The first thing it does is call findContours. This function takes a binary image and returns arrays of points representing the contours of the image. Basically, it takes the edges and puts them into arrays of points, with connectivity, so we know which points are connected. If we use the code from here for visualization, we can see what happens:
So, the array contours is a std::vector<std::vector<cv::Point>>, and each sub-array holds one continuous contour, here drawn with a different color.
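For reference, a minimal call producing that structure looks like the following sketch (the wrapper function is mine, and the retrieval flags are illustrative, not necessarily the ones the repository uses):

#include <opencv2/imgproc.hpp>
#include <vector>

// edges: binary edge image (CV_8UC1), e.g. the output of pre_process.
std::vector<std::vector<cv::Point>> extractContours(const cv::Mat& edges)
{
    std::vector<std::vector<cv::Point>> contours;
    // RETR_LIST: all contours, no hierarchy; CHAIN_APPROX_NONE: keep every edge pixel.
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    return contours; // contours[i] is one connected contour
}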
Next, we compute the number of points (edge pixels) and average their coordinates to find the centroid of the edge image. The centroid is the filled circle:
Then, we iterate over all points and create a vector of std::pair<double, double>, recording for each point its distance from the centroid and its angle. The angle function is defined at the bottom of the file as
double angle(Point2f a, Point2f b) {
    return atan((a.y - b.y) / (a.x - b.x));
}
It basically computes the angle of the line from a to b with respect to the x axis, using the arctangent function. I'll let you watch a video on arctangent, but the tl;dr is that it gives you an angle from a ratio, in radians (a circle is 2 PI radians, half a circle is PI radians). The problem is that the function is periodic, with a period of PI. This means that there are 2 angles on the circle (the circle of all points at the same distance around the centroid) which give you the same value. So, we compute the ratio (the ratio is, by the way, known as the tangent of the angle), apply the inverse function (arctangent) and get an angle (corresponding to a point). But what if it's the other point? Well, we know that the other point is offset by exactly PI radians (it is diametrically opposite), so we add PI if we detect that it's the other point.
The picture below also helps understand why there are 2 points:
The tangent of the angle is the highlighted vertical distance. But the angle on the other side of the diagonal line, which intersects the circle in the bottom left, has the same tangent. On its own, atan only yields angles corresponding to directions on one side of the centre (the right side, where x is positive); among those, no 2 directions have the same tangent.
What the check does is ask whether the point is to the left of the centroid (centroid.x > curr.x). If it is, half a circle (PI radians or 180 degrees) is added to correct the result of atan.
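For reference, the same corrected angle can be obtained in one step with atan2, which resolves the two-candidates ambiguity internally. A sketch, not the repository's code:

#include <cmath>
#include <opencv2/core.hpp>

// Angle of the direction from the centroid to the point, in (-PI, PI].
// Equivalent, modulo 2*PI, to atan plus the manual PI correction above.
double angleFromCentroid(cv::Point2f centroid, cv::Point2f curr)
{
    return std::atan2(curr.y - centroid.y, curr.x - centroid.x);
}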
Now we know the distance (a simple formula) and we have found (and corrected) the angle. We insert this pair into the vector feature_v, and we sort it. Called like that, the sort function sorts by the first element of the pair, so we sort by angle, then by distance.
The interval variable:
int degree = 10;
double interval = double((double(degree) / double(360)) * 2 * 3.14159); //5 degrees interval
is simply the value of degree converted from degrees into radians. We need radians since the angles have been computed in radians so far; degrees are just more user friendly. Yep, the comment is wrong: the interval is 10 degrees, not 5.
The ang variable defined below it is -PI/2 (minus a quarter of a circle):
double ang = - 1.57079;
Now, what it does is divide the points around the centroid into bins, based on the angle. Each bin is 10 degrees wide. This is done by iterating over the points sorted by angle, accumulating them until we get to the next bin. We are only interested in the largest distance of a point in each bin. The starting angle should be small enough that all the directions (points) are captured.
In order to understand why it starts from -PI/2, we have to go back to the trigonometric diagram above. What happens if the angle goes like this:
See how the highlighted vertical segment goes "downwards" on the y axis. This means that its length (and implicitly the tangent) is negative here, and the angle is considered to be negative (otherwise there would be 2 angles on the same side of the centre with the same tangent). Now, consider the range of angles we can get: all the angles on the right side of the centroid, starting from -PI/2 at the bottom up to PI/2 at the top. That is a range of PI radians, or 180 degrees. This is also written in the documentation of atan:
If no errors occur, the arc tangent of arg (arctan(arg)) in the range [-PI/2, +PI/2] radians, is returned.
So, we simply split all the possible directions (360 degrees) into buckets of 10 degrees, and take the distance of the farthest point in each bin. Since the circle has 360 degrees, we get 360 / 10 = 36 bins. Then, these are normalized such that the greatest value is 1. This helps the machine learning algorithm a bit.
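Put together, the binning and normalization steps amount to something like this sketch (feature_v as described above; the function name and the direct bin-index computation are mine, the original code walks the sorted vector with a while loop instead):

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// feature_v: (angle, distance) pairs; after the PI correction the angles
// lie in (-PI/2, 3*PI/2).
std::vector<double> binFeatures(const std::vector<std::pair<double, double>>& feature_v)
{
    const double pi = 3.14159265358979;
    const double interval = 10.0 / 360.0 * 2.0 * pi;  // 10-degree bins
    std::vector<double> features(36, 0.0);            // 360 / 10 = 36 bins

    for (const auto& [ang, dist] : feature_v) {
        int bin = static_cast<int>((ang + pi / 2.0) / interval); // shift start to -PI/2
        bin = std::min(std::max(bin, 0), 35);
        features[bin] = std::max(features[bin], dist); // farthest point per bin
    }
    // Normalize so the greatest value is 1.
    const double maxVal = *std::max_element(features.begin(), features.end());
    if (maxVal > 0.0)
        for (double& f : features) f /= maxVal;
    return features;
}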
How can we know if the point we selected belongs to the sign? We don't. Most computer vision algorithms make some assumptions about the image in order to simplify the problem. The idea of this algorithm is to capture the shape of the sign by recording the distance from the centroid to the edges. This assumes that the centroid is roughly in the middle of the sign. Depending on the ML algorithm used and on the training data, different levels of robustness can be obtained.
Also, it assumes that (some of) the edges can be reliably identified. See how in my image, the algorithm was not able to detect the upper left edge?
The good news is that this doesn't have to be perfect. ML algorithms know how to handle this variation (up to some extent), provided that they are appropriately trained. It doesn't have to be perfect, but it has to be good enough. To answer what good enough means, and what the actual limitations of the algorithm are, some more testing needs to be done, as well as some understanding of the ML algorithm used. But this is also why ML is so popular in vision: it can handle a lot of variation quite well.
At the end, we basically get an array of 36 numbers, one for each of the 36 bins of 10 degrees, representing the maximum distance of a point in each bin. I assume the developer of the algorithm wanted a way to capture the shape of the sign by looking at the distances from the centre in various directions. This assumes that no edges are detected in the background, and that the sign looks something like:
The max distance is used to pick up the border, and not the digits or other symbols on the sign.
It is not directly used here, but possibly related reading is the Hough transform, which uses a similar parametrization to detect straight lines in an image.

Make character look at player?

I need to make a function that will calculate the angles necessary to make an NPC look at the center of the player. However, I have not been able to find anything covering 3 dimensions, which is what I need; only 2-dimensional equations. I'm programming in C++.
Info:
Data Type: Float.
Vertical-Axis: 90 is looking straight up, -90 is looking straight down and 0 is looking straight ahead.
Horizontal-Axis: Positive value between 0 and 360, North is 0, East is 90, South 180, West 270.
See these transformation equations from Wikipedia. But note since you want "elevation" or "vertical-axis" to be zero on the xy-plane, you need to make the changes noted after "if theta measures elevation from the reference plane instead of inclination from the zenith".
First, find a vector from the NPC to the player to get the values x, y, z, where x is positive to the East, y is positive to the North, and z is positive upward.
Then you have:
float r = sqrtf(x*x+y*y+z*z);
float theta = asinf(z/r);
float phi = atan2f(x,y);
Or you might get better precision from replacing the first declaration with
float r = hypotf(hypotf(x,y), z);
Note asinf and atan2f return radians, not degrees. If you need degrees, start with:
theta *= 180./M_PI;
and theta is now your "vertical axis" angle.
Also, Wikipedia's phi = arctan(y/x) assumes an azimuth of zero at the positive x-axis and pi/2 at the positive y-axis. Since you want an azimuth of zero at the North direction and 90 at the East direction, I've switched to atan2f(x,y) (instead of the more common atan2f(y,x)). Also, atan2f returns a value from -pi to pi inclusive, but you want strictly positive values. So:
if (phi < 0) {
    phi += 2*M_PI;
}
phi *= 180./M_PI;
and now phi is your desired "horizontal-axis" angle.
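Putting the pieces together, here is a self-contained sketch (the function name, the output-by-reference choice and the inlined degree conversion are mine):

#include <cmath>

// x: positive East, y: positive North, z: positive up; (x,y,z) is the
// vector from the NPC to the player. Outputs are in degrees.
void lookAtAngles(float x, float y, float z, float& vertical, float& horizontal)
{
    const float pi = 3.14159265f;
    const float r = std::hypot(std::hypot(x, y), z); // distance NPC -> player
    vertical = std::asin(z / r) * 180.0f / pi;       // -90 (down) .. 90 (up)
    horizontal = std::atan2(x, y) * 180.0f / pi;     // 0 = North, 90 = East
    if (horizontal < 0.0f)
        horizontal += 360.0f;                        // map (-180,180] to [0,360)
}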
I'm not too familiar with math involving rotation and 3D environments, but couldn't you draw a line from your coordinates to the NPC's coordinates (or vice versa) and have a function approximate the proper rotation to that line until it is within an accepted +/- range? It would do this by just increasing and decreasing the vertical and horizontal values until they fall into the range; it's just a matter of what causes them to increase or decrease first, and you could determine that based on the position state of the NPC. But I feel like this is a really lame way to go about it.
Use 4x4 homogeneous transform matrices instead of Euler angles for this. You create the matrix anyway, so why not use it ...
create/use the NPC transform matrix M
My bet is you've got it somewhere near your mesh and you are using it for rendering. In case you use Euler angles, you are doing a set of rotations and a translation, and the result is M.
convert the player's GCS Cartesian position to the NPC's LCS Cartesian position
GCS means global coordinate system and LCS means local coordinate system. So if the position is the 3D vector xyz = (x,y,z,1), the transformed position is one of these (depending on the conventions you use):
xyz'=M*xyz
xyz'=Inverse(M)*xyz
xyz'=Transpose(xyz*M)
xyz'=Transpose(xyz*Inverse(M))
either rotate by angle or construct a new NPC matrix
You know your NPC's old coordinate system, so you can extract the X,Y,Z,O vectors from it. Now you just set the axis that is your viewing direction (usually -Z) to the direction to the player. That is easy:
-Z = normalize( xyz' - (0,0,0) )
Z = -xyz' / |xyz'|
Now just exploit the cross product and make the other axes perpendicular to Z again, so:
X = cross(Y,Z)
Y = cross(Z,X)
And feed the vectors back into your NPC's matrix. This way it is also much, much easier to move the objects. Also, to lock the side rotation (roll), you can set one of the vectors to the Up direction prior to this.
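A hedged sketch of that reconstruction with a tiny hand-rolled vector type (all names are mine); it seeds the construction with Up first, so the roll stays locked:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x }; }
static Vec3 normalize(Vec3 v)
{
    const double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// playerLcs: player position already transformed into the NPC's local space.
// Rebuilds the NPC basis so that -Z points at the player.
void lookAtBasis(Vec3 playerLcs, Vec3 up, Vec3& X, Vec3& Y, Vec3& Z)
{
    Z = normalize({ -playerLcs.x, -playerLcs.y, -playerLcs.z }); // Z = -view dir
    X = normalize(cross(up, Z));  // X perpendicular to Up and Z (locks roll)
    Y = cross(Z, X);              // Y re-orthogonalized, unit by construction
}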
If you still want to compute the rotation, then it is:
ang = acos(dot(Z,-xyz')/(|Z|*|xyz'|))
axis = cross(Z,-xyz')
but to convert that into Euler angles is another story ...
With transform matrices you can easily do cool stuff like camera follow, easy conversion between objects' coordinate systems, easy physics motion simulation and much more.

Reconstruct boundaries and compute length in Paraview

I have a set of points on the unit sphere and a corresponding set of values equal, for simplicity, to 0 and 1. Thus I'm constructing the characteristic function of a set on the sphere. Typically I have several such sets, which form a partition of the sphere. An example is given in the figure.
I was wondering if ParaView can find the boundaries between the cells and compute the length and the curvature of those boundaries.
I read in a paper that, using gradient reconstruction, the authors managed to find the curvature of such contours. I imagine that if the curvature can be found, the length should be somewhat simpler. If the answer to the above question is yes, where should I look for the corresponding documentation?
For points on the sphere: if the boundaries are built on the great-circle distance principle, then every line connecting two points is a shortest path and its plane goes through the sphere's centre. In that case the angle can be computed as the arccosine of the scalar product:
R = 1;
angle = arccos(x1*x2 + y1*y2 + z1*z2);
length = R*angle;
And a parametric line from p1 to p2 can be built using slerp interpolation:
slerp(t) = sin((1.0-t)*angle)/sin(angle)*p1 + sin(t*angle)/sin(angle)*p2;
where t is in the [0...1] range.
In this case the curvature is 1/R for all great-circle lines. That is the first thing I would try: build boundaries with the great-circle approach and match them against the actual ones. If they match, that's the answer.
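A small C++ sketch of both formulas, assuming unit-sphere points (the struct and function names are mine):

#include <cmath>

struct P3 { double x, y, z; };   // point on the unit sphere (R = 1)

double dot(P3 a, P3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Great-circle arc length between two unit vectors (R = 1, so length = angle).
// (In production code, clamp the dot product to [-1, 1] against rounding.)
double arcLength(P3 p1, P3 p2)
{
    return std::acos(dot(p1, p2));
}

// Point on the great-circle arc from p1 to p2, for t in [0, 1].
P3 slerp(P3 p1, P3 p2, double t)
{
    const double ang = std::acos(dot(p1, p2));
    const double w1 = std::sin((1.0 - t) * ang) / std::sin(ang);
    const double w2 = std::sin(t * ang) / std::sin(ang);
    return { w1 * p1.x + w2 * p2.x, w1 * p1.y + w2 * p2.y, w1 * p1.z + w2 * p2.z };
}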
Links
https://en.wikipedia.org/wiki/Great_circle
https://en.wikipedia.org/wiki/Great-circle_distance
https://en.wikipedia.org/wiki/Slerp
UPDATE
In the case of non-great arcs, I would propose the following modification. Build the great-arc plane that goes through the sphere's centre and, on intersection with the surface, makes the great arc between the two points. Fix an axis as the line going through those two points. Rotate the great-arc plane around that axis until you get exactly your arc of a circle connecting the two points. At that moment you can get the rotation angle, compute your circle's plane position and radius r, the curvature as 1/r, etc.

Is this the correct method of using dot product to find the angle between two vectors? C++ SFML

So in another question I was told that you could use a simplified formula of the dot product to find the angle between two vectors:
angle = atan2(mouseY, mouseX) - atan2(yPos, xPos); //xPos, yPos is position of player
Except, from what I understood, it simply takes points as vectors, which is why the mouse and player positions are plugged in as the parameters. However, when I move the mouse around the player in a full 360-degree circle, the angle (and therefore the rotation of the player) only ranges from about -0.3 to -1.4.
Is this not the correct method? Am I supposed to be representing vectors not as X, Y positions but as something else?
There's also another formula I found which also doesn't seem to work for me:
angle = atan2(mouseY - yPos, mouseX - xPos);
The first method is correct, the second is wrong. The function atan2(y, x) computes the angle in radians of a vector pointing from (0,0) to (x,y) with respect to the x-axis (note the argument order: y comes first). It is thus correct to calculate the two angles (in radians) that the vectors make with respect to the x-axis and then subtract them from each other to obtain the angle between the two vectors.
The result of the function atan2 is a value in the half-open interval (-pi, pi]. Expressed in degrees, this corresponds to (-180°, 180°]. 0° denotes a vector pointing right (along the x-axis), 90° a vector pointing up, 180° a vector pointing left and -90° a vector pointing down.
You can transform a result in radians to degrees via the formula
degrees = radians * 180 / pi
So, if you want to transform your resulting values into degrees, just multiply them by 180/pi.
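A hedged sketch putting this together (the function name is mine; positions are taken relative to the same origin, as in the question):

#include <cmath>

// Angle between the vector to the mouse and the vector to the player,
// normalized to [0, 360) degrees.
float angleBetween(float mouseX, float mouseY, float xPos, float yPos)
{
    const float pi = 3.14159265f;
    float rad = std::atan2(mouseY, mouseX) - std::atan2(yPos, xPos);
    float deg = rad * 180.0f / pi;
    if (deg < 0.0f) deg += 360.0f;   // map the (-360, 360) difference to [0, 360)
    return deg;
}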

Create dataset of XYZ positions on a given plane

I need to create a list of XYZ positions given a starting point and an offset between the positions, based on a plane. On just a flat plane this is easy. Let's say the offset I need is to move down 3, then right 2, from position 0,0,0.
The output would be:
0,0,0 (starting position)
0,-3,0 (move down 3)
2,-3,0 (then move right 2)
The same goes for a different start position, let's say 5,5,1:
5,5,1 (starting position)
5,2,1 (move down 3)
7,2,1 (then move right 2)
The problem comes when the plane is no longer on this flat grid.
I'm able to calculate the equation of the plane and the normal vector given 3 points.
But now what can I do to create this dataset of XYZ locations given this equation?
I know I can solve for XYZ given two values. Say I know x=1 and y=1; I can solve for Z. But moving down is no longer just subtracting from y. I believe I need a linear equation on both the x and y axes to increment the positions and move parallel to the x and y of this new plane, then just solve for Z. I'm not sure how to accomplish this.
The other issue is that I need to calculate the angle, tilt and rotation of this plane in relation to the base plane.
For example:
P1=0,0,0 and P2=1,1,0 the tilt=0deg angle=0deg rotation=45deg.
P1=0,0,0 and P2=0,1,1 the tilt=0deg angle=45deg rotation=0deg.
P1=0,0,0 and P2=1,0,1 the tilt=45deg angle=0deg rotation=0deg.
P1=0,0,0 and P2=1,1,1 the tilt=0deg angle=45deg rotation=45deg.
I've searched for hours on both of these problems, and I always come to a stop at the equation of the plane: manipulating x,y correctly to follow parallel to the plane, and then taking that information to find the angles. This is a lot of geometry to solve, and I can't find any further information on how to calculate this list of points, let alone how to calculate the 3 angles to the base plane.
I would appreciate any help or insight on this. Just plain old math or a reference to C++ would be perfect to shed some light on this issue I'm facing.
Thank you,
Matt
You can think of your plane as being defined by a point and a pair of orthonormal basis vectors (which just means two vectors of length 1, at 90 degrees from one another). Your most basic plane can be defined as:
p0 = (0, 0, 0) #Origin point
vx = (1, 0, 0) #X basis vector
vy = (0, 1, 0) #Y basis vector
To find point p1 that's offset by dx in the X direction and dy in the Y direction, you use this formula:
p1 = p0 + dx * vx + dy * vy
This formula will always work if your offsets are along the given axes (which it sounds like they are). This is still true after the vectors have been rotated; that is exactly the property you're going to use.
So to find a point that's been offset along a rotated plane:
Take the default basis vectors (vx and vy, above).
Rotate them until they define the plane you want (you may or may not need to rotate the origin point as well, depending on how the problem is defined).
Apply the formula, and get your answer.
Now, there are some quirks when you're doing rotation (order matters!), but that's the basic idea, and it should be enough to put you on the right track. Good luck!
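A minimal C++ sketch of the recipe, deriving the basis vectors directly from the three points that define the plane (as in the question) instead of rotating the default ones; all names are mine:

#include <cmath>

struct V3 { double x, y, z; };

static V3 add(V3 a, V3 b)      { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static V3 sub(V3 a, V3 b)      { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 mul(V3 a, double s)  { return { a.x * s, a.y * s, a.z * s }; }
static V3 cross(V3 a, V3 b)    { return { a.y * b.z - a.z * b.y,
                                          a.z * b.x - a.x * b.z,
                                          a.x * b.y - a.y * b.x }; }
static V3 normalize(V3 v)
{
    const double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Offset point p0 by dx along the plane's X basis and dy along its Y basis.
// The plane is defined by three non-collinear points p0, p1, p2.
V3 offsetOnPlane(V3 p0, V3 p1, V3 p2, double dx, double dy)
{
    const V3 vx = normalize(sub(p1, p0));            // in-plane X basis vector
    const V3 n  = normalize(cross(vx, sub(p2, p0))); // plane normal
    const V3 vy = cross(n, vx);                      // in-plane Y basis, unit length
    return add(p0, add(mul(vx, dx), mul(vy, dy)));   // p0 + dx*vx + dy*vy
}

For the flat plane p0=(0,0,0), p1=(1,0,0), p2=(0,1,0), calling offsetOnPlane(p0, p1, p2, 2, -3) reproduces the point (2,-3,0) from the question's first example.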