Fastest way to calculate cubic bezier curves? - c++

Right now I calculate it like this:
double dx1 = a.RightHandle.x - a.UserPoint.x;
double dy1 = a.RightHandle.y - a.UserPoint.y;
double dx2 = b.LeftHandle.x - a.RightHandle.x;
double dy2 = b.LeftHandle.y - a.RightHandle.y;
double dx3 = b.UserPoint.x - b.LeftHandle.x;
double dy3 = b.UserPoint.y - b.LeftHandle.y;
float len = sqrt(dx1 * dx1 + dy1 * dy1) +
            sqrt(dx2 * dx2 + dy2 * dy2) +
            sqrt(dx3 * dx3 + dy3 * dy3);
int NUM_STEPS = int(len * 0.05);
if(NUM_STEPS > 55)
{
    NUM_STEPS = 55;
}
double subdiv_step = 1.0 / (NUM_STEPS + 1);
double subdiv_step2 = subdiv_step*subdiv_step;
double subdiv_step3 = subdiv_step*subdiv_step*subdiv_step;
double pre1 = 3.0 * subdiv_step;
double pre2 = 3.0 * subdiv_step2;
double pre4 = 6.0 * subdiv_step2;
double pre5 = 6.0 * subdiv_step3;
double tmp1x = a.UserPoint.x - a.RightHandle.x * 2.0 + b.LeftHandle.x;
double tmp1y = a.UserPoint.y - a.RightHandle.y * 2.0 + b.LeftHandle.y;
double tmp2x = (a.RightHandle.x - b.LeftHandle.x)*3.0 - a.UserPoint.x + b.UserPoint.x;
double tmp2y = (a.RightHandle.y - b.LeftHandle.y)*3.0 - a.UserPoint.y + b.UserPoint.y;
double fx = a.UserPoint.x;
double fy = a.UserPoint.y;
// control points in order: a.UserPoint, a.RightHandle, b.LeftHandle, b.UserPoint
double dfx = (a.RightHandle.x - a.UserPoint.x)*pre1 + tmp1x*pre2 + tmp2x*subdiv_step3;
double dfy = (a.RightHandle.y - a.UserPoint.y)*pre1 + tmp1y*pre2 + tmp2y*subdiv_step3;
double ddfx = tmp1x*pre4 + tmp2x*pre5;
double ddfy = tmp1y*pre4 + tmp2y*pre5;
double dddfx = tmp2x*pre5;
double dddfy = tmp2y*pre5;
int step = NUM_STEPS;
while(step--)
{
    fx += dfx;
    fy += dfy;
    dfx += ddfx;
    dfy += ddfy;
    ddfx += dddfx;
    ddfy += dddfy;
    temp[0] = fx;
    temp[1] = fy;
    Contour[currentcontour].DrawingPoints.push_back(temp);
}
temp[0] = (GLdouble)b.UserPoint.x;
temp[1] = (GLdouble)b.UserPoint.y;
Contour[currentcontour].DrawingPoints.push_back(temp);
I'm wondering if there is a faster way to interpolate cubic beziers?
Thanks

Look into forward differencing for a faster method. Care must be taken to deal with rounding errors.
The adaptive subdivision method, with some checks, can be fast and accurate.

There is another point that is also very important: you are approximating your curve with a fixed number of straight-line segments. This is inefficient in areas where your curve is nearly straight, and can lead to a nasty angular poly-line where the curve is very curvy. There is no simple compromise that works for both high and low curvatures.
To get around this, you can dynamically subdivide the curve (e.g. split it into two pieces at the half-way point and then check whether the two line segments are within a reasonable distance of the curve. If a segment is a good fit for the curve, stop there; if it is not, subdivide it in the same way and repeat). You have to be careful to subdivide enough that you don't miss any small, localised features when sampling the curve in this way.
This will not always draw your curve "faster", but it will guarantee that it always looks good while using the minimum number of line segments necessary to achieve that quality.
Once you are drawing the curve "well", you can then look at how to make the necessary calculations "faster".
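The dynamic subdivision described above can be sketched like this: split the cubic at t = 0.5 with de Casteljau until the control points are within a tolerance of the chord. The Point struct, the particular flatness test, and the tolerance value are illustrative assumptions, not the poster's types.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// True when both control points stay within roughly `tol` of the chord p0->p3.
static bool flatEnough(Point p0, Point p1, Point p2, Point p3, double tol) {
    double ux = 3.0 * p1.x - 2.0 * p0.x - p3.x, uy = 3.0 * p1.y - 2.0 * p0.y - p3.y;
    double vx = 3.0 * p2.x - 2.0 * p3.x - p0.x, vy = 3.0 * p2.y - 2.0 * p3.y - p0.y;
    ux *= ux; uy *= uy; vx *= vx; vy *= vy;
    return std::max(ux, vx) + std::max(uy, vy) <= 16.0 * tol * tol;
}

static void subdivide(Point p0, Point p1, Point p2, Point p3,
                      double tol, std::vector<Point>& out) {
    if (flatEnough(p0, p1, p2, p3, tol)) {
        out.push_back(p3);                       // segment p0->p3 is good enough
        return;
    }
    auto mid = [](Point a, Point b) { return Point{(a.x + b.x) / 2, (a.y + b.y) / 2}; };
    Point p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
    Point p012 = mid(p01, p12), p123 = mid(p12, p23);
    Point c = mid(p012, p123);                   // point on the curve at t = 0.5
    subdivide(p0, p01, p012, c, tol, out);       // left half
    subdivide(c, p123, p23, p3, tol, out);       // right half
}
```

Seed the output with the start point, then call `subdivide`: the resulting poly-line is dense where the curve bends and sparse where it is straight.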

Actually you should continue splitting until the two lines joining points on the curve (end nodes) and their farthest control points are "flat enough":
- either fully aligned, or
- their intersection is at a position whose "square distance" from both end nodes is below half a "square pixel" (note that you don't need to compute the actual distance, as that would require computing a square root, which is slow).
When you reach this situation, ignore the control points and join the two end-points with a straight segment.
This is faster, because rapidly you'll get straight segments that can be drawn directly as if they were straight lines, using the classic Bresenham algorithm.
Note: you should take the fractional bits of the endpoints into account to properly set the initial value of the error variable that accumulates differences in the incremental Bresenham algorithm; this gives better results, notably when the final segment to draw is very nearly horizontal, vertical, or diagonal. Otherwise you'll get visible artefacts.
The classic Bresenham algorithm for drawing lines between points aligned on an integer grid initializes this error variable to zero at the first end node. A minor modification of the algorithm scales the two distance variables and the error value up by a constant power of two, while the 0/+1 increments of the x and y variables remain unscaled.
The high order bits of the error variable also allows you compute an alpha value that can be used to draw two stacked pixels with the correct alpha-shading. In most cases, your images will be using 8-bit color planes at most, so you will not need more that 8 bits of extra precision for the error value, and the upscaling can be limited to the factor of 256: you can use it to draw "smooth" lines.
But you could limit yourself to the scaling factor of 16 (four bits): typical bitmap images you have to draw are not extremely wide and their resolution is far below +/- 2 billions (the limit of a signed 32-bit integer): when you scale up the coordinates by a factor of 16, it will remain 28 bits to work with, but you should already have "clipped" the geometry to the view area of your bitmap to render, and the error variable of the Bresenham algorithm will remain below 56 bits in all cases and will still fit in a 64-bit integer.
If your error variable is 32-bit, you must keep the scaled coordinates below 2^15 (no more than 15 bits) in the worst case (otherwise the sign test on the error variable used by Bresenham breaks due to integer overflow), and with an upscaling factor of 16 (4 bits) you'll be limited to images no larger than 11 bits in width or height, i.e. 2048x2048 images.
But if your draw area is effectively below 2048x2048 pixels, there's no problem drawing lines smoothed by 16 alpha-shaded values of the draw color. (To draw alpha-shaded pixels, you need to read the original pixel value in the image before mixing in the alpha-shaded color, unless the computed shade is 0% for the first stacked pixel, which you don't need to draw, or 100% for the second stacked pixel, which you can overwrite directly with the plain draw color.)
If your computed image also includes an alpha-channel, your draw color can also have its own alpha value that you'll need to shade and combine with the alpha value of the pixels to draw. But you don't need any intermediate buffer just for the line to draw because you can draw directly in the target buffer.
With the error variable used by the Bresenham algorithm, there's no problem at all caused by rounding errors because they are taken into account by this variable. So set its initial value properly (the alternative, by simply scaling up all coordinates by a factor of 16 before starting subdividing recursively the spline is 16 times slower in the Bresenham algorithm itself).
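To make the error-variable initialization concrete, here is a minimal sketch of the x-major octant only (dx >= dy >= 0, non-negative coordinates), with endpoints in 28.4 fixed point (4 fractional bits). The function name and the output container are illustrative; a real renderer would plot pixels instead of collecting them.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// x-major Bresenham with 28.4 fixed-point endpoints.  The initial value of
// the error variable is derived from the fractional parts of the endpoints,
// as described above; classic integer Bresenham would start it at zero.
void drawLineFixed(int x0, int y0, int x1, int y1,
                   std::vector<std::pair<int,int>>& out) {
    const int F = 16;                              // one pixel = 16 subpixel units
    long long dx = x1 - x0, dy = y1 - y0;          // deltas in subpixel units
    if (dx <= 0) return;                           // degenerate for this sketch
    int px    = (x0 + F / 2) >> 4;                 // first pixel column
    int pxEnd = (x1 + F / 2) >> 4;                 // last pixel column
    long long xc = (long long)px * F + F / 2 - x0; // first column centre, rel. to x0
    long long py = (y0 + (2 * dy * xc + dx) / (2 * dx)) >> 4;  // nearest row
    // Error term: dy*(column centre - x0) - dx*(boundary - y0), where the
    // boundary is the edge between rows py and py+1, in subpixel units.
    long long err = dy * xc - dx * (py * F + F - y0);
    for (;;) {
        out.push_back({px, (int)py});
        if (px == pxEnd) break;
        ++px;
        err += dy * F;                             // advance one whole column
        if (err > 0) { ++py; err -= dx * F; }      // line crossed into next row
    }
}
```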

Note how "flat enough" can be calculated. "Flatness" is a measure of the minimum absolute angle (between 0 and 180°) between two successive segments, but you don't need to compute the actual angle, because this flatness is equivalent to setting a minimum value for the cosine of their relative angle.
That cosine value does not need to be computed directly either, because all you need is a vector product of the two vectors, compared against the product of their square lengths.
Note also that the "square of the cosine" is "one minus the square of the sine": a maximum square cosine value is also a minimum square sine value... Now you know which kind of "vector product" to use: the fastest and simplest to compute here is the 2D cross product (perp-dot product), whose square is proportional to the square sine of the angle between the two vectors times the product of the square lengths of both vectors.
So checking whether the curve is "flat enough" is simple: compute the ratio between the squared cross product and the product of square lengths, and see whether this ratio is below the "flatness" constant value (the minimum square sine). There's no division by zero, because you first determine which of the two vectors is the longest, and if that one has a square length below 1/4, your curve is already flat enough for the rendering resolution; otherwise check this ratio between the longest and the shortest vector (formed by the crossing diagonals of the convex hull containing the end points and control points):
with quadratic beziers, the convex hull is a triangle and you choose two pairs
with cubic beziers, the convex hull is a 4-sided convex polygon, and the diagonals may either join an end point with one of the two control points, or join together the two end-points and the two control points, so you have six possibilities
Use the combination giving the maximum length for the first vector (between the 1st end-point and one of the three other points, with the second vector joining two other points):
All you need is to determine the "minimum square length" of the segments going from one end-point or control-point to the next control-point or end-point in the sequence (with a quadratic Bezier you just compare two segments; with a cubic Bezier you check 3 segments).
If this "minimum square length" is below 1/4 you can stop there, the curve is "flat enough".
Then determine the "maximum square length" of the segments starting with one end-point to any one of the other end-point or control-point (with a quadratic Bezier you can safely use the same 2 segments as above, with a cubic Bezier you discard one of the 3 segments used above joining the 2 control-points, but you add the segment joining the two end-nodes).
Then check that the "minimum square length" is lower than the product of the constant "flatness" (minimum square sine) times the "maximum square length" (if so, the curve is "flat enough").
In both cases, when your curve is "flat enough", you just need to draw the segment joining the two end-points. Otherwise you split the spline recursively.
You may include a limit on the recursion, but in practice it will never be reached unless the convex hull of the curve covers a very large area in a very large draw area; even with 32 levels of recursion, it will never explode in a rectangular draw area whose diagonal is shorter than 2^32 pixels. (The limit would be reached only if you are splitting a "virtual Bezier" in a virtually infinite space with floating-point coordinates that you don't intend to draw directly, because you won't have the 1/2-pixel limitation in such a space, and only if you have set an extreme value for the "flatness" such that your "minimum square sine" constant parameter is 1/2^32 or lower.)
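The sqrt-free angle test can be sketched like this: for 2D vectors u and v, (u×v)² = |u|²·|v|²·sin²θ, so comparing the squared cross product against a "minimum square sine" constant times the product of squared lengths needs no square root, no division, and no trigonometry. The struct and function name are illustrative.

```cpp
#include <cassert>

struct Vec { double x, y; };

// True when the angle between u and v is "flat enough": its squared sine is
// below minSin2, or either vector is shorter than half a pixel.
bool angleFlatEnough(Vec u, Vec v, double minSin2) {
    double len2u = u.x * u.x + u.y * u.y;
    double len2v = v.x * v.x + v.y * v.y;
    if (len2u < 0.25 || len2v < 0.25)        // below half a "square pixel"
        return true;
    double cross = u.x * v.y - u.y * v.x;    // scalar 2D cross product
    return cross * cross <= minSin2 * len2u * len2v;
}
```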

Related

Given n points, how can I find the number of points with given distance

I have an input of n unique points (X,Y) that are between 0 and 2^32 inclusive. The coordinates are integers.
I need to create an algorithm that finds the number of pairs of points with a distance of exactly 2018.
I have thought of checking with every other point but it would be O(n^2) and I have to make it more efficient. I also thought of using a set or a vector and sort it using a comparator based on the distance with the origin point but it wouldn't help at all.
So how can I do it efficiently?
There is one Pythagorean triple with a hypotenuse of 2018: 1118² + 1680² = 2018².
Since all coordinates are integers, the only possible differences between the coordinates (both X and Y) of the two points are 0, 1118, 1680, and 2018.
Finding all pairs of points with a given difference between X (or Y) coordinates is a simple n log n operation.
Numbers other than 2018 might need a bit more work because they might be members of more than one Pythagorean triple (for example 2015 is a hypotenuse of 3 triples). If the number is not given as a constant, but provided at run time, you will have to generate all triples with this hypotenuse. This may require some sqrt(N) effort (N is the hypotenuse, not the number of points). One can find a recipe on the math stackexchange, e.g. here (there are many others).
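A sketch of that lookup, with the constant 2018 hard-coded (so the only difference vectors, up to sign and order, come from 1118² + 1680² = 2018² plus the axis-aligned cases). Using a sorted set makes the whole thing O(n log n), matching the estimate above; names are illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

// Count unordered pairs of distinct points at Euclidean distance exactly 2018.
// Only half of the 12 possible difference vectors are probed, so each pair is
// found exactly once (the other half are their negations).
long long countPairsAt2018(const std::vector<std::pair<int64_t,int64_t>>& pts) {
    std::set<std::pair<int64_t,int64_t>> seen(pts.begin(), pts.end());
    const int64_t offs[6][2] = {
        {0, 2018}, {2018, 0},
        {1118, 1680}, {1680, 1118},
        {1118, -1680}, {1680, -1118},
    };
    long long pairs = 0;
    for (const auto& p : pts)
        for (const auto& o : offs)
            if (seen.count({p.first + o[0], p.second + o[1]}))
                ++pairs;
    return pairs;
}
```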
You could try using a quadtree. First, sort your points into the quadtree. You should specify a lower limit for the cell size, e.g. 2048, which is a power of 2. Then iterate through the points and calculate distances to the points in the same cell and in adjacent cells. That way you should be able to decrease the number of distance calculations drastically.
The main difficulty will probably be implementing the tree structure. You also have to find a way to find adjacent cells (you must include the possibility of traversing upwards in the tree).
The complexity of this is probably O(n*log(n)) in the best case but don't pin me down on that.
One additional word on the distance calculation: You are probably much faster if you don't do
dx = p1x - p2x;
dy = p1y - p2y;
if ( sqrt(dx*dx + dy*dy) == 2018 ) {
    ...
}
but
dx = p1x - p2x;
dy = p1y - p2y;
if ( dx*dx + dy*dy == 2018*2018 ) {
    ...
}
Squaring is faster than taking the square root. So just compare the square of the distance with the square of 2018.

Is there a way to generate the corners of a regular N-gon without division and without trig, given N as input

Edit: So I found a page related to rasterizing a trapezoid https://cse.taylor.edu/~btoll/s99/424/res/ucdavis/GraphicsNotes/Rasterizing-Polygons/Rasterizing-Polygons.html but am still trying to figure out if I can just do the edges
I am trying to generate points for the corners of an arbitrary N-gon, where N is a non-zero positive integer, and to do so efficiently without needing division or trig. I am thinking there is probably some sort of Bresenham-type algorithm for this, but I cannot seem to find anything.
The only thing I can find on stackoverflow is this but the internal angle increments are found by using 2*π/N:
How to draw a n sided regular polygon in cartesian coordinates?
Here was the algorithm from that page in C
float angle_increment = 2*PI / n_sides;
for(int i = 0; i < n_sides; ++i)
{
float x = x_centre + radius * cos(i*angle_increment +start_angle);
float y = y_centre + radius * sin(i*angle_increment +start_angle);
}
So is it possible to do so without any division?
There's no solution which avoids calling sin and cos, aside from precomputing them for all useful values of N. However, you only need to do the computation once, regardless of N. (Provided you know where the centre of the polygon is. You might need a second computation to figure out the coordinates of the centre.) Every vertex can be computed from the previous vertex using the same rotation matrix.
The lookup table of precomputed values is not an unreasonable solution, since at some sufficiently large value of N the polygon becomes indistinguishable from a circle. So you probably only need a lookup table of a few hundred values.
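The "compute the trig once" idea can be sketched as follows: two trig calls for the step angle (plus two for placing the first vertex), then every subsequent vertex is the previous one rotated by the same 2x2 rotation matrix. Names and parameters are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Vertices of a regular n-gon.  cos/sin of the step angle are computed once;
// each vertex is obtained from the previous one via a 2x2 rotation matrix.
std::vector<std::pair<double,double>> ngonVertices(int n, double cx, double cy,
                                                   double radius, double startAngle) {
    const double twoPi = 6.283185307179586;
    const double c = std::cos(twoPi / n);       // the only per-polygon trig
    const double s = std::sin(twoPi / n);
    double x = radius * std::cos(startAngle);   // first vertex, centre-relative
    double y = radius * std::sin(startAngle);
    std::vector<std::pair<double,double>> verts;
    verts.reserve(n);
    for (int i = 0; i < n; ++i) {
        verts.push_back({cx + x, cy + y});
        double nx = c * x - s * y;              // rotate by the step angle
        y = s * x + c * y;
        x = nx;
    }
    return verts;
}
```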

C++ recognize shape from points

I'm trying to find an algorithm to recognize a circle in an array of points.
Let's say I've got a points array where a circle may or may not be stored (which also means the array doesn't have to store only the circle's points; there could be some "extra" points before or after the circle's data).
I've already tried some algorithms, but none of them work properly with those "extra" points. Have you got any ideas how to deal with this problem?
EDIT// I didn't mention that before. I want this algorithm to be used on circle gesture recognition. I've thought I would have data in array (for last few seconds) and by analysing this data in every tracking frame I would be able to say if there was or was not a circle gesture.
First I calculate the geometric mean (not the arithmetic mean) of the X and Y components.
I chose the geometric mean because one of its features is that small values (with respect to the arithmetic mean) are much more influential than large values.
This leads me to the theoretical center of all points: circ_center.
Then I calculate the standard deviation of the distance of each point to the center: stddev. This gives me an "indicator" to quantify the amount of variation. One property of a circle is that all circumference points are at the same distance from its center. With the standard deviation I test whether your points are equally distant (within a max variance threshold: max_dispersion).
Finally, I calculate the average distance from the center of the points inside the max_dispersion threshold; this gives me the radius of the circle: avg_dist.
Parameters:
max_dispersion represents the "circle precision". Smaller means more precise.
min_points_needed is the minimum number of points near the circumference required to consider it a circle.
This is just an attempt, I have not tried. Let me know.
I will try this (in pseudo language)
points_size = 100;   //number_of_user_points
all_points[points_size];   //coordinates of points
//thresholds to be defined by user
max_dispersion = 20;      //max stddev accepted, expressed in geometric units
min_points_needed = 5;    //minimum number of points near the circumference
stddev = 0;          //standard deviation of points from center
circ_center;         //estimated circumference center, using geometric mean
num_ok_points = 0;   //points with distance under standard deviation
avg_dist = 0;        //distance from center of "ok points"

all_x = 1; all_y = 1;
for(i = 0 ; i < points_size ; i++)
{
    all_x = all_x * all_points[i].x;
    all_y = all_y * all_points[i].y;
}
//pow(x, 1.0/y) = nth root; note 1.0/points_size, not the integer division 1/points_size
all_x = pow(all_x, 1.0 / points_size);   //geometric mean
all_y = pow(all_y, 1.0 / points_size);   //geometric mean
circ_center = make_point(all_x, all_y);

for(i = 0 ; i < points_size ; i++)
{
    dist = distance(all_points[i], circ_center);
    stddev = stddev + (dist * dist);
}
stddev = square_root(stddev / points_size);

for(i = 0 ; i < points_size ; i++)
{
    if( distance(all_points[i], circ_center) < max_dispersion )
    {
        num_ok_points++;
        avg_dist = avg_dist + distance(all_points[i], circ_center);
    }
}
avg_dist = avg_dist / num_ok_points;

if(stddev <= max_dispersion && num_ok_points >= min_points_needed)
{
    circle recognized; its center is circ_center; its radius is avg_dist;
}
Can we assume the array of points are mostly on or near to the circumference of the circle?
A circle has a center and radius. If you can determine the circle's center coordinates, via the intersection of the perpendicular bisectors of two chords, then all the true circle points should be equidistant (r) from the center point.
The false points can be eliminated by not being equidistant, within a (+-) tolerance, from the center point.
The weakness of this approach is how well can you determine the center and radius? You may want to try a least squares approach to computing the center coordinates.
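One least-squares option mentioned above is an algebraic (Kåsa-style) fit: center the data, then solve a small linear system for the center and take the radius from the residual sums. This is a sketch under the assumption that the points are not collinear; the struct and function name are mine.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

struct Circle { double cx, cy, r; };

// Least-squares circle fit (Kasa method).  Points are (x, y) pairs; assumes
// a non-empty, non-collinear input so the 2x2 system is solvable.
Circle fitCircleLeastSquares(const std::vector<std::pair<double,double>>& pts) {
    const size_t n = pts.size();
    double mx = 0, my = 0;
    for (const auto& p : pts) { mx += p.first; my += p.second; }
    mx /= n; my /= n;                          // centre the data first
    double suu = 0, svv = 0, suv = 0, suuu = 0, svvv = 0, suvv = 0, svuu = 0;
    for (const auto& p : pts) {
        double u = p.first - mx, v = p.second - my;
        suu += u * u;  svv += v * v;  suv += u * v;
        suuu += u * u * u;  svvv += v * v * v;
        suvv += u * v * v;  svuu += v * u * u;
    }
    // Solve the 2x2 normal equations for the centred centre (uc, vc).
    double a = (suuu + suvv) / 2, b = (svvv + svuu) / 2;
    double det = suu * svv - suv * suv;        // zero only for collinear points
    double uc = (a * svv - b * suv) / det;
    double vc = (suu * b - suv * a) / det;
    double r = std::sqrt(uc * uc + vc * vc + (suu + svv) / n);
    return {uc + mx, vc + my, r};
}
```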
To answer the initially stated question, my approach would be to iterate through the points and derive the center of a circle from each consecutive set of three points. Then, take the longest contiguous subset of points that create circles with centers that fall within some absolute range. Then determine if the points wind consistently around the average of the circles. You can always perform some basic heuristics on any discarded data to determine if a circle is actually what the user wanted to make though.
Now, since you say that you want to perform gesture recognition, I would suggest you think of a completely different method. Personally, I would first create a basic sort of language that can be used to describe gestures. It should be very simple; the only words I would consider having are:
Start - Denotes the start of a stroke
Angle - The starting angle of the stroke. This should be one of the eight major cardinal directions (N, NW, W, SW, S, SE, E, NE) or Any for unaligned gestures. You could also add combining mechanisms, or perhaps "Axis Aligned" or other such things.
End - Denotes the end of a stroke
Travel - Denotes a straight path in the stroke
Distance - The percentage of the total length of the path that this particular operation will consume.
Turn - Denotes a turn in the stroke
Direction - The direction to turn in. Choices would be Left, Right, Any, Previous, or Opposite.
Angle - The angle of the turn. I would suggest you use just three directions (90 deg, 180 deg, 270 deg)
Tolerance - The maximum tolerance for deviation from the specified angle. This should have a default of somewhere around 45 degrees in either direction for a high chance of matching the angle in a signature.
Type - Hard or Radial. Radial angles would be a stroke along a radius. Hard angles would be a turn about a point.
Radius - If the turn is radial, this is the radius of the turn (units are in percentage of total path length, with appropriate conversions of course)
Obviously you can make the angles much more fine, but the coarser the ranges are, the more tolerant of input error it can be. Being too tolerant can lead to misinterpretation though.
If you apply some fuzzy logic, it wouldn't be hard to break just about any gesture down into a language like this. You could then create a bunch of gesture "signatures" that describe various gestures that can be performed. For instance:
//Circle
Start Angle=Any
Turn Type=Radial Direction=Any Angle=180deg Radius=50%
Turn Type=Radial Direction=Previous Angle=180deg Radius=50%
End
//Box
Start Angle=AxisAligned
Travel Distance=25%
Turn Type=Hard Direction=Any Angle=90deg Tolerance=10deg
Travel Distance=25%
Turn Type=Hard Direction=Previous Angle=90deg Tolerance=10deg
Travel Distance=25%
Turn Type=Hard Direction=Previous Angle=90deg Tolerance=10deg
Travel Distance=25%
End
If you want, I could work on an algorithm that could take a point cloud and degenerate it into a series of commands like this so you can compare them with pre-generated signatures.

Number of Sides Required to draw a circle in OpenGL

Does anyone know some algorithm to calculate the number of sides required to approximate a circle using polygon, if radius, r of the circle and maximum departure of the polygon from circularity, D is given? I really need to find the number of sides as I need to draw the approximated circle in OpenGL.
Also, we have the resolution of the screen in NDC coordinates per pixel given by P and solving D = P/2, we could guarantee that our circle is within half-pixel of accuracy.
What you're describing here is effectively a quality factor, which often goes hand-in-hand with error estimates.
A common way to handle this is to calculate the error for a small portion of the circumference of the circle. The most trivial measure is the difference between the arc length of a slice of the circle and the length of the line segment joining the same two points on the circumference. You could use more effective measures, like the difference in area, radius, etc., but this method should be adequate.
Think of an octagon, circumscribed with a perfect circle. In this case, the error is the difference in length of the line between two adjacent points on the octagon, and the arc length of the circle joining those two points.
The arc length is easy enough to calculate: r * theta, where r is your radius and theta is the angle, in radians, between the two points, assuming you draw lines from each of these points to the center of the circle/polygon. For a closed polygon with n sides, the angle is just (2*PI/n) radians. Let the arc length corresponding to this value of n be equal to A, i.e. A = 2*PI*r/n.
The line length between the two points is easily calculated. Just divide your circle into n isosceles triangles, and each of those into two right-triangles. You know the angle in each right triangle is theta/2 = (2*PI/n)/2 = (PI/n), and the hypotenuse is r. So, you get your equation of sin(PI/n)=x/r, where x is half the length of the line segment joining two adjacent points on your circumscribed polygon. Let this value be B (ie: B=2x, so B=2*r*sin(PI/n)).
Now, just calculate the relative error, E = |A-B| / A (i.e. |TrueValue-ApproxValue|/|TrueValue|), and you get a nice little percentage, represented in decimal, of your error. You can use the above equations to set a constraint on E (i.e. it cannot be greater than some value, say, 0.05) in order for the polygon to "look good".
So, you could write a function that calculates A, B, and E from the above equations, and loop through values of n, and have it stop looping when the calculated value of E is less than your threshold.
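Such a function might look like the sketch below. One thing worth noting: with this particular error measure, r cancels out of E = |A-B|/A, so the answer depends only on n; r is kept as a parameter only to mirror the formulas above.

```cpp
#include <cassert>
#include <cmath>

// Grow n until the relative arc-vs-chord error E drops below maxErr
// (e.g. maxErr = 0.05 for 5%).
int sidesForError(double r, double maxErr) {
    const double pi = 3.14159265358979323846;
    for (int n = 3; n < 100000; ++n) {
        double A = 2.0 * pi * r / n;              // arc length per side
        double B = 2.0 * r * std::sin(pi / n);    // chord length per side
        double E = std::fabs(A - B) / A;          // relative error
        if (E < maxErr) return n;
    }
    return 100000;                                // defensive cap
}
```

If you instead bound the absolute departure D from the question, the analogous condition is r*(1 - cos(PI/n)) <= D, which does depend on the radius.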
I would say that you need to set the number of sides depending on two variables the radius and the zoom (if you allow zoom)
A circle of radius 20 pixels can look OK with 32 to 56 sides, but if you use the same number of sides for a radius of 200 pixels, that number of sides will not be enough
numberOfSides = radius * 3
If you allow zoom in and out you will need to do something like this
numberOfSides = radiusOfPaintedCircle * 3
When you zoom in, radiusOfPaintedCircle will be bigger than the "property" of the circle being drawn
I've got an algorithm to draw a circle using fixed function opengl, maybe it'll help?
It's hard to know what you mean when you say you want to "approximate a circle using polygon"
You'll notice in my algorithm below that I don't calculate the number of lines needed to draw the circle, I just iterate between 0 .. 2Pi, stepping the angle by 0.1 each time, drawing a line with glVertex2f to that point on the circle, from the previous point.
void Circle::Render()
{
    glLoadIdentity();
    glPushMatrix();
    glBegin(GL_LINES);
    glColor3f(_vColour._x, _vColour._y, _vColour._z);
    glVertex3f(_State._position._x, _State._position._y, 0);
    glVertex3f(
        (_State._position._x + (sinf(_State._angle)*_rRadius)),
        (_State._position._y + (cosf(_State._angle)*_rRadius)),
        0
    );
    glEnd();
    glTranslatef(_State._position._x, _State._position._y, 0);
    glBegin(GL_LINE_LOOP);
    glColor3f(_vColour._x, _vColour._y, _vColour._z);
    for(float angle = 0.0f; angle < g_k2Pi; angle += 0.1f)
        glVertex2f(sinf(angle)*_rRadius, cosf(angle)*_rRadius);
    glEnd();
    glPopMatrix();
}

fastest way to compute angle with x-axis

What is the fastest way to calculate angle between a line and the x-axis?
I need to define a function which is injective on the [PI, 2PI] interval (I need the angle between the uppermost point and any point below it).
PointType * top = UPPERMOST_POINT;
PointType * targ = TARGET_POINT;
double targetVectorX = targ->x - top->x;
double targetVectorY = targ->y - top->y;
first try
//#1
double magnitudeTarVec = sqrt(targetVectorX*targetVectorX + targetVectorY*targetVectorY);
angle = targetVectorX / magnitudeTarVec; // cosine of the angle
second try
//#2 slower
angle = atan2(targetVectorY, targetVectorX);
I do not need the angle directly (radians or degrees); any value is fine, as long as by comparing these values for 2 points I can tell which angle is bigger (for example, the value in the first try is between -1 and 1: it is the cosine of the angle).
Check for y being zero, as atan2 does; then the quotient x/y will be plenty fine (assuming I understand you correctly).
I just wrote an answer to "Fastest way to sort vectors by angle without actually computing that angle", about the general question of finding functions monotonic in the angle, without any code or connection to C++ or the like. Based on the currently accepted answer there, I'd now suggest
double angle = copysign( // magnitude of first argument with sign of second
1. - targetVectorX/(fabs(targetVectorX) + fabs(targetVectorY)),
targetVectorY);
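For completeness, here is how that pseudoangle might be wrapped for comparison (a sketch; the helper name is mine). It is monotonic in the true angle, so comparing or sorting by it gives the same order as comparing atan2 results, without any trig.

```cpp
#include <cassert>
#include <cmath>

// Monotonic pseudoangle in [-2, 2]: same ordering as atan2(dy, dx) for any
// non-zero vector, computed with one division and no trig.
double pseudoAngle(double dx, double dy) {
    return std::copysign(1.0 - dx / (std::fabs(dx) + std::fabs(dy)), dy);
}
```

Two target points can then be compared with `pseudoAngle(ax, ay) < pseudoAngle(bx, by)`.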
The great benefit compared to the currently accepted answer here is the fact that you won't have to worry about infinite values, since all non-zero vectors (i.e. targetVectorX and targetVectorY are not both equal to zero at the same time) will result in finite pseudoangle values. The resulting pseudoangles will be in the range [−2 … 2] for real angles [−π … π], so the signs and the discontinuity are just like you'd get them from atan2.