Given 2 points with known speed direction and location, compute a path composed of (circle) arcs - c++

So, I have two points, say A and B, each with a known (x, y) coordinate and a velocity vector in the same coordinate system. I want to write a function that generates a set of arcs (each given by a radius and an angle) that lead A to the state of B.
The angle difference is known, since I can get it from the two unit velocity vectors. Say I move a certain distance along an arc (radius=r, angle=theta); then I am in the exact same kind of situation again. Does this have a unique solution? I only need one solution, or even an approximation.
Of course I could solve it with one particular circle plus a line (radius = infinity), but that's not what I want. I suspect there's a library with a function for this, since it seems like a common problem.

A biarc is a smooth curve consisting of two circular arcs. Given two points with tangents, it is almost always possible to construct a biarc passing through them (with the correct tangents).
This is a very basic routine in geometric modelling, and it is indispensable for smoothly approximating an arbitrary curve (Bezier, NURBS, etc.) with arcs. Approximation with arcs and lines is heavily used in CAM, because modellers work with NURBS without a problem, but machine controllers usually understand only lines and arcs. So I strongly suggest reading up on this topic.
In particular, here is a great article on biarcs; I seriously advise reading it. It even contains some working code and an interactive demo.
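For reference, here is a minimal 2D sketch of the common equal-parameter biarc construction (the same idea the linked article develops). The names `Vec2`, `Biarc`, and `computeBiarc` are mine, and the degenerate case of parallel tangents (where the quadratic's leading coefficient vanishes and a line or single arc is needed) is deliberately left unhandled:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };
static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static Vec2 operator*(double s, Vec2 a) { return {s * a.x, s * a.y}; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

struct Biarc {
    Vec2 joint;     // point where the two arcs meet with a common tangent
    double r1, r2;  // signed radii of the two arcs
};

// p1, p2: endpoints; t1, t2: unit tangents at the endpoints.
// Equal-parameter choice d1 == d2 == d leads to the quadratic
//   2(1 - t1.t2) d^2 + 2 (v.(t1+t2)) d - v.v = 0,  v = p2 - p1,
// and the junction point J = (p1 + p2 + d (t1 - t2)) / 2.
Biarc computeBiarc(Vec2 p1, Vec2 t1, Vec2 p2, Vec2 t2) {
    Vec2 v = p2 - p1;
    double a = 2.0 * (1.0 - dot(t1, t2));
    double b = 2.0 * dot(v, t1 + t2);
    double c = -dot(v, v);
    double d = (-b + std::sqrt(b * b - 4.0 * a * c)) / (2.0 * a);  // positive root
    Vec2 joint = 0.5 * (p1 + p2 + d * (t1 - t2));
    // Signed radius of the arc through p with unit tangent t that also
    // passes through q: the center lies at p + r*n with n perpendicular to t.
    auto radius = [](Vec2 p, Vec2 t, Vec2 q) {
        Vec2 n{-t.y, t.x};
        Vec2 w = q - p;
        return dot(w, w) / (2.0 * dot(n, w));
    };
    return {joint, radius(p1, t1, joint), radius(p2, t2, joint)};
}
```

The sweep angle of each arc follows from its center (at p + r·n) and the two endpoints; a negative r just means the center lies on the opposite side of the tangent.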

Related

What is the fastest algorithm to find the point from a set of points, which is closest to a line?

I have:
- a set of points of known size (in my case, only 6 points)
- a line characterized by x = s + t * r, where x, s and r are 3D vectors
I need to find the point closest to the given line. The actual distance does not matter to me.
I had a look at several different questions that seem related (including this one) and I know how to solve this on paper from my high school math classes. But I cannot find a solution that avoids calculating every distance, and I am sure there has to be a better/faster way. Performance is absolutely crucial in my application.
One more thing: All numbers are integers (coordinates of points and elements of s and r vectors). Again, for performance reasons I would like to keep the floating-point math to a minimum.
You have to process every point at least once to know their distance. Unless you want to repeat the process many times with different lines, simply computing the distance of every point is unavoidable. So the algorithm has to be O(n).
Since you don't care about the actual distance, we can simplify the point–line distance computation. The exact distance is given by (source):
d^2 = |r⨯(p-s)|^2 / |r|^2
where ⨯ is the cross product and |r|^2 is the squared length of vector r. Since |r|^2 is constant for all points, we can omit it without changing which point comes out closest:
d^2 = |r⨯(p-s)|^2
Compare these squared pseudo-distances and keep the minimum. The advantage of this formula is that everything stays in integer arithmetic, since you mentioned that all coordinates are integers.
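A sketch of that comparison in pure integer arithmetic (the function name is mine; note that squaring the cross product can overflow 64-bit integers if the coordinates are large):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { long long x, y, z; };

// Returns the index of the point minimizing |r x (p - s)|^2.
// All arithmetic stays in integers, so there is no floating-point math
// at all -- but beware of overflow for large coordinates, since the
// cross product squares the magnitudes involved.
std::size_t closestToLine(const std::vector<Vec3>& pts, Vec3 s, Vec3 r) {
    std::size_t best = 0;
    long long bestD2 = -1;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        Vec3 d{pts[i].x - s.x, pts[i].y - s.y, pts[i].z - s.z};
        long long cx = r.y * d.z - r.z * d.y;   // r x d, component-wise
        long long cy = r.z * d.x - r.x * d.z;
        long long cz = r.x * d.y - r.y * d.x;
        long long d2 = cx * cx + cy * cy + cz * cz;
        if (bestD2 < 0 || d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}
```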
I'm afraid you can't get away with computing fewer than 6 distances (if you could, at least one point would be left out -- possibly the nearest one).
See if it makes sense to preprocess: Is the line fixed and the points vary? Consider rotating coordinates to make the line horizontal.
As there are few points, it is doubtful that this is your bottleneck. Measure where the hot spots are, redesign algorithms/data representation, spice up compiler optimization, compile to assembly and bum that. Strictly in that order.
Jon Bentley's "Writing Efficient Programs" (sadly long out of print) and "Programming Pearls" (2nd edition) are full of advice on practical programming.

k-way triangle set intersection and triangulation

If we have K sets of potentially overlapping triangles, what is a computationally efficient way of computing a new, non-overlapping set of triangles?
For example, consider this problem:
Here we have 3 triangle sets A, B, C, with some mutual overlap, and wish to obtain the non-overlapping sets A', B', C', AB, AC, BC, ABC, where, for example, the triangles in AC would cover the surfaces where there is overlap exclusively between A and C, and A' would cover the surfaces of A that do not overlap any other set.
I (also) propose a two step approach.
1. Find the intersection points of all triangle sides.
As pointed out in the comments, this is a well-researched problem, typically approached with line sweep methods. Here is a very nice overview, look especially at the Bentley-Ottmann algorithm.
2. Triangulate with Constrained Delaunay.
I think Polygon Triangulation as suggested by #Claudiu cannot solve your problem as it cannot guarantee that all original edges are included. Therefore, I suggest you look at Constrained Delaunay triangulations. These allow you to specify edges that must be included in your triangulation, even if they would not be included in an unconstrained Delaunay or polygon triangulation. Furthermore, there are implementations that allow you to specify a non-convex border of your triangulation outside of which no triangles are generated. This also seems to be a requirement in your case.
Implementing Constrained Delaunay is non-trivial. There is, however, a somewhat dated but very nice C implementation available from a CMU researcher (including a command-line tool). See here for the theory behind this specific algorithm. This algorithm also supports specification of a border. Note that the linked algorithm can do more than just Constrained Delaunay (namely quality mesh generation), but it can be configured not to add new points, which amounts to Constrained Delaunay.
Edit See comments for another implementation.
If you want something a bit more straightforward, faster to implement, and with significantly less code... I'd recommend just doing some simple polygon clipping like the old software-rendering algorithms used to do (especially since you're only dealing with triangles as input). As briefly mentioned by a couple of other people, it involves splitting each triangle wherever another triangle's edge crosses it.
Triangles are easy, because splitting a triangle along a line always results in just 1 or 2 new ones (2 or 3 in total). If your data set is rather large, you could introduce a quad-tree or another form of spatial organization to find the intersecting triangles faster as new ones get added.
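As a sketch of that splitting step (all helper names here are mine): one Sutherland–Hodgman clip of a triangle against a half-plane, followed by a trivial fan triangulation of the convex result. Clipping a triangle against a half-plane yields at most 4 vertices, hence at most 2 triangles per side, matching the "1 or 2 new ones" count above:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Clip a convex polygon against the half-plane a*x + b*y + c >= 0.
// Standard Sutherland-Hodgman step; a triangle yields <= 4 vertices.
std::vector<Pt> clipHalfPlane(const std::vector<Pt>& poly,
                              double a, double b, double c) {
    std::vector<Pt> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Pt p = poly[i];
        Pt q = poly[(i + 1) % poly.size()];
        double fp = a * p.x + b * p.y + c;
        double fq = a * q.x + b * q.y + c;
        if (fp >= 0) out.push_back(p);              // p is inside: keep it
        if ((fp < 0) != (fq < 0)) {                 // edge crosses the line
            double t = fp / (fp - fq);
            out.push_back({p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)});
        }
    }
    return out;
}

// Fan-triangulate a convex polygon.
std::vector<std::array<Pt, 3>> fan(const std::vector<Pt>& poly) {
    std::vector<std::array<Pt, 3>> tris;
    for (std::size_t i = 1; i + 1 < poly.size(); ++i)
        tris.push_back({poly[0], poly[i], poly[i + 1]});
    return tris;
}
```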
Granted, this would generate more polygons than the suggested Constrained Delaunay algorithm. But many of those algorithms don't do well with overlapping shapes and would require you to know your silhouette segments, so you'd be doing much of the same work anyhow.
And if fewer resulting triangles is a requirement, you can always do a merging pass at the end (adding neighbor information during the clipping to speed that portion up).
Anyway, good luck!
Your example is a special case of what computational geometers call "an arrangement." The CGAL library has extensive and efficient arrangement-handling routines. If you check this part of the documentation, you'll see that you can declare an empty arrangement, then insert triangles to divide the 2D plane into disjoint faces. As others have said, you'll then need to triangulate the faces that aren't already triangles. Happily, CGAL also provides routines to do this. This example of constrained Delaunay triangulation is a good place to start.
CGAL attempts to use the most efficient algorithms available that are practical to implement. In this case it looks like you can achieve O((n + k) log n) for an arrangement with n edges (3 times the number of triangles in your case) and k intersections. The algorithm uses a general technique called a "sweep line": a vertical line is swept left to right, with "events" computed and processed along the way. Events are edge endpoints and intersections. As each event is processed, a cell of the arrangement is updated.
Delaunay algorithms are typically O(n log n) for n vertices. There are several common algorithms, easily looked up or found in the CGAL references.
Even if you can't use CGAL in your work (e.g. for licensing reasons), the documentation is full of sources on the underlying algorithms: arrangements and constrained Delaunay algorithms.
Beware, however, that both arrangements and triangulations are notoriously hard to implement correctly due to floating-point error. Robust versions often depend on rational arithmetic (available in CGAL).
To expand a bit on the comment from Ted Hopp, this should be possible by first computing a planar subdivision in which each bounded face of the output is associated with one of the sets A', B', C', AB, AC, BC, ABC, or "none". The second phase is then to triangulate the (possibly non-convex) faces in each set.
Step 1 could be performed in O((N + K) log N) time using a variation of the Bentley-Ottmann sweep line algorithm in which the current set is maintained as part of the algorithm's state. This can be determined from the line segments that have already been crossed and their direction.
Once that's done, the disjoint polygons for each set can then be broken into monotone pieces in O(N log N) time which in turn can be triangulated in O(N) time.
If you haven't already, pick up a copy of "Computational Geometry: Algorithms and Applications" by de Berg et al.
I can think of two approaches.
A more general approach is treating your input as just a set of lines and splitting the problem in two:
Polygon Detection. Take the set of line segments your initial triangles make and compute a set of non-overlapping polygons. This paper offers an O((N + M)^4) approach, where N is the number of line segments and M the number of intersections, which unfortunately does seem a bit slow...
Polygon Triangulation. Take each polygon from step 1 and triangulate it. This takes O(n log* n) which is practically O(n).
Another approach is to write a custom algorithm. Solve the problem for two intersecting triangles, apply it to the first two input triangles, then for each new triangle apply the algorithm to all the current triangles together with the new one. It seems even for two triangles this isn't that straightforward, though, as there are several cases (this might not be exhaustive):
No points of either triangle are inside the other
No intersection
Jewish star
Two perpendicular spikes
One point of one triangle is contained in the other
Each triangle contains one point of the other
Two points of one triangle are in the other
Three points of one are in the other - one is fully contained
etc... no, it doesn't seem like that is the right approach. Leaving it here anyway for posterity.

Polynomial Least Squares for Image Curve Fitting

I am trying to fit a curve to a number of pixels in an image so I can do further processing regarding its shape. Does anyone know how to implement a least-squares method in C/C++, preferably using the following parameters: an x array, a y array, and an answers array (the length of the answers array should tell how many coefficients need to be calculated)?
If this is not some exercise in implementing this yourself, I would suggest you use a ready-made library like GNU gsl. Have a look at the functions whose names start with gsl_multifit_, see e.g. the second example here.
If you are trying to fit ordered points (x, y), as in a graph, you can use linear least-squares methods, but with such methods you always need to specify the degree of the polynomial used for the approximation (presumably the length of your answers array). If your points are general ordered points in the plane that can form a closed loop or the outline of a structure (for example, points that describe an ellipse, a circle, or some other closed or more complex geometry), then you will need something more sophisticated. You can still use least squares, but you will need a parametric curve such as a spline. Take a look at the PDF at this link, which may give you what you need (or at the very least illustrate what I am saying): http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CE0QFjAA&url=http%3A%2F%2Ffolk.uio.no%2Fin329%2Fnchap6.pdf&ei=Yp8CUNvHC8Kg0QX6r_mEBw&usg=AFQjCNHBUZ5t2Y7C8eONYSosRydLs4Zu4A
Without seeing an image of exactly what you are trying to fit it is hard to say - it is quite possible that your data can be fit non-parametrically with linear least-squares polynomials. If so, all you need is a linear algebra library, and you can code the approximation yourself: http://en.wikipedia.org/wiki/Ordinary_least_squares
Even so, all forms of approximation require you to decide on the form (function basis, degree, etc.) before you fit. For example, to decide whether a 4th, 5th, 6th, or 7th degree polynomial best fits your data, you would need to fit each one and assess the suitability yourself. There is no generic way (at least none that I know of) to tell you the degree of approximation your data needs.
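If GSL is overkill, a self-contained polynomial least-squares fit via the normal equations looks roughly like this (a sketch following the asker's array convention; the name `polyFit` is mine):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Least-squares fit of y ~ c0 + c1*x + ... + c_{m-1}*x^(m-1).
// The degree is implied by coeffs.size(), matching the question's
// "answers array" convention. Builds the normal equations (V^T V) c = V^T y
// and solves them by Gaussian elimination with partial pivoting; fine for
// low degrees, but numerically ill-conditioned for high-degree fits.
void polyFit(const std::vector<double>& x, const std::vector<double>& y,
             std::vector<double>& coeffs) {
    const std::size_t m = coeffs.size();
    std::vector<std::vector<double>> A(m, std::vector<double>(m + 1, 0.0));
    for (std::size_t k = 0; k < x.size(); ++k) {
        std::vector<double> xpow(m);                // 1, x, x^2, ...
        double p = 1.0;
        for (std::size_t i = 0; i < m; ++i) { xpow[i] = p; p *= x[k]; }
        for (std::size_t i = 0; i < m; ++i) {
            for (std::size_t j = 0; j < m; ++j) A[i][j] += xpow[i] * xpow[j];
            A[i][m] += xpow[i] * y[k];              // right-hand side V^T y
        }
    }
    for (std::size_t i = 0; i < m; ++i) {           // forward elimination
        std::size_t piv = i;
        for (std::size_t r = i + 1; r < m; ++r)
            if (std::fabs(A[r][i]) > std::fabs(A[piv][i])) piv = r;
        std::swap(A[i], A[piv]);
        for (std::size_t r = i + 1; r < m; ++r) {
            const double f = A[r][i] / A[i][i];
            for (std::size_t j = i; j <= m; ++j) A[r][j] -= f * A[i][j];
        }
    }
    for (std::size_t i = m; i-- > 0;) {             // back substitution
        double s = A[i][m];
        for (std::size_t j = i + 1; j < m; ++j) s -= A[i][j] * coeffs[j];
        coeffs[i] = s / A[i][i];
    }
}
```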

All k nearest neighbors in 2D, C++

I need to find, for each point of the data set, all its nearest neighbors. The data set contains approx. 10 million 2D points. The data are close to a grid, but do not form a precise grid...
This (in my opinion) excludes the use of k-d trees, where the basic assumption is that no two points share the same x coordinate or y coordinate.
I need a fast algorithm, O(n) or better (but not too difficult to implement :-)), to solve this problem... Since Boost is not standardized, I do not want to use it...
Thanks for your answers or code samples...
I would do the following:
Create a larger grid on top of the points.
Go through the points linearly, and for each one of them, figure out which large "cell" it belongs to (and add the points to a list associated with that cell).
(This can be done in constant time for each point: just do an integer division of the point's coordinates.)
Now go through the points linearly again. To find the 10 nearest neighbors you only need to look at the points in the adjacent, larger, cells.
Since your points are fairly evenly scattered, you can do this in time proportional to the number of points in each (large) cell.
Here is an (ugly) pic describing the situation:
The cells must be large enough that the center cell and its adjacent cells contain the closest 10 points, but small enough to speed up the computation. You can see it as a "hash function" where the closest points end up in the same bucket.
(Note that strictly speaking it's not O(n) but by tweaking the size of the larger cells, you should get close enough. :-)
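The bucketing scheme above can be sketched like this (all names are mine; the packed 64-bit cell key assumes cell indices fit in 32 bits):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct P2 { double x, y; };

// Points are bucketed into square cells of side `cell`; a query then scans
// only the 3x3 block of cells around it. This assumes `cell` was chosen
// large enough that the k nearest neighbors always lie in that block.
struct PointGrid {
    double cell;
    std::unordered_map<std::uint64_t, std::vector<P2>> cells;

    std::uint64_t key(double x, double y) const {
        // pack the two 32-bit signed cell indices into one 64-bit map key
        std::uint32_t cx = (std::uint32_t)(std::int32_t)std::floor(x / cell);
        std::uint32_t cy = (std::uint32_t)(std::int32_t)std::floor(y / cell);
        return ((std::uint64_t)cx << 32) | cy;
    }

    void insert(P2 p) { cells[key(p.x, p.y)].push_back(p); }

    std::vector<P2> kNearest(P2 q, std::size_t k) const {
        std::vector<P2> cand;
        for (int dx = -1; dx <= 1; ++dx)            // gather the 3x3 block
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = cells.find(key(q.x + dx * cell, q.y + dy * cell));
                if (it != cells.end())
                    cand.insert(cand.end(), it->second.begin(), it->second.end());
            }
        auto d2 = [&](P2 p) {
            return (p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y);
        };
        std::sort(cand.begin(), cand.end(),
                  [&](P2 a, P2 b) { return d2(a) < d2(b); });
        if (cand.size() > k) cand.resize(k);
        return cand;
    }
};
```

When computing each point's own neighbors, the point itself shows up at distance 0 and can simply be skipped. Each query costs roughly the number of points in the 3x3 block, which is about constant for evenly scattered data -- matching the "not strictly O(n)" caveat above.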
I have used a library called ANN (Approximate Nearest Neighbor) with great success. It uses a k-d-tree approach, although there is more than one algorithm to try. I used it for point location on a triangulated surface. You might have some luck with it. It is minimal, and it was easy to include in my library just by dropping in its source.
Good luck with this interesting task!

Why do we need a Unit Vector (in other words, why do we need to normalize vectors)?

I am reading a book on game AI.
One of the terms it uses is normalizing a vector, which means turning it into a unit vector. To do so you divide each component x, y and z by the vector's magnitude.
The book says we must turn a vector into a unit vector before we do anything with it. Why?
And could anyone give some scenarios where we must use a unit vector?
Thanks!
You don't have to normalize vectors, but it makes a lot of equations a little simpler when you do. It can also make APIs smaller: any form of standardization has the potential to reduce the number of functions needed.
Here's a simple example. Suppose you want to find the angle between two vectors u and v. If they are unit vectors, the angle is just arccos(u·v). If they're not unit vectors, the angle is arccos(u·v / (|u| |v|)), so you end up computing the norms of u and v anyway.
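In code, the difference is just whether the division by the norms is needed (a trivial sketch; `angleUnit` assumes its inputs are already normalized):

```cpp
#include <cassert>
#include <cmath>

struct V2 { double x, y; };

double dot(V2 a, V2 b) { return a.x * b.x + a.y * b.y; }
double len(V2 a) { return std::sqrt(dot(a, a)); }

// General case: the norms must be divided out first.
double angle(V2 u, V2 v) { return std::acos(dot(u, v) / (len(u) * len(v))); }

// If u and v are already unit vectors, the division disappears.
double angleUnit(V2 u, V2 v) { return std::acos(dot(u, v)); }
```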
As John D. Cook says - mainly you're doing this because you care about the direction, not the vector itself. Depending on context, you more than likely don't want / need the magnitude information - just the direction itself. You normalize to strip away the magnitude so that it doesn't skew other calculations, which in turn simplifies many other things.
In terms of AI: imagine you take the vector V between P1 (the AI bad guy) and P2 (your hero) as the direction for the bad guy to move, and you want the bad guy to move at a speed N per beat. How do you calculate this? Either you normalize the vector each beat and multiply by N to figure out how far they moved, or you pre-normalize the direction in the first place and just multiply the unit vector by N each time; otherwise the bad guy would move further the further it was from the hero. If the hero doesn't change position, that's one less calculation per beat to worry about.
In that context, it's not a big deal - but what if you have a hundred bad guys? Or a thousand? What if your AI needs to deal with combinations of bad guys? Suddenly it's a hundred or thousand normalizations you're saving per beat. Since this is a handful of multiplies and a square root for each, eventually you reach the point where not normalizing the data ahead of time means you're going to kill your AI processing rate.
More broadly, this kind of math is really common; people are doing here what they do for things like 3D rendering. If you didn't normalize, for instance, the normals for your surfaces, you'd have potentially thousands of completely unnecessary normalizations per render. You have two options: one, make each function perform the calculation, or two, pre-normalize the data.
From the framework designer's perspective, the latter is inherently faster. With the former, even if your user thinks to normalize the data, they still have to go through the same normalization routine, or you have to provide two versions of each function, which is a headache. But at the point where you're making people think about which version of the function to call, you may as well make them think enough to call the correct one, and provide only that one in the first place, making them do the right thing for performance.
You are often normalizing a vector because you only care about the direction the vector points and not the magnitude.
A concrete scenario is normal mapping. By combining light striking the surface with vectors that are perpendicular to the surface, you can give an illusion of depth. The vectors from the surface define the direction, and any magnitude on them would actually make the calculations wrong.
"We must turn a vector into a unit vector before we do anything with it."
This statement is incorrect. Not all vectors are unit vectors.
The vectors that form the basis for a coordinate space have two very nice properties that make them easy to work with:
They're orthogonal
They're unit vectors - magnitude = 1
This lets you write any vector in 3D space as a linear combination of the basis unit vectors:
v = v_x i + v_y j + v_z k
I can choose to turn this vector into a unit vector if I need to by dividing each component by the magnitude:
u = v / |v| = (v_x / |v|) i + (v_y / |v|) j + (v_z / |v|) k
If you don't know what coordinate spaces or basis vectors are, I'd recommend learning a little more about the mathematics of graphics before you go much further.
In addition to the answers already provided, I would mention two important aspects.
Trigonometry is defined on a unit circle
All trigonometric functions are defined on the unit circle, and the number pi itself is defined in terms of it.
When your vectors are normalized, you can use all trigonometric functions directly, without any extra scaling. As mentioned earlier, the angle between two unit vectors is simply acos(dot(u, v)), with no further scaling.
Unit vectors allow us to separate magnitude from direction
A vector can be interpreted as a quantity carrying two types of information: magnitude and direction. Force, velocity, and acceleration are important examples.
If you wish to deal separately with the magnitude and direction, a representation of the form vector = magnitude * direction, where magnitude is a scalar and direction a unit vector, is often very convenient: Changes in magnitude entail scalar manipulations, and changes in direction do not modify the magnitude. The direction has to be a unit vector to ensure that the magnitude of vector is exactly equal to magnitude.
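A small sketch of that decomposition (the helper names are hypothetical): split a velocity into speed and heading, change the speed, and reassemble without touching the direction:

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

double length(V3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
V3 scale(V3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }

// vector = magnitude * direction: the unit vector carries the direction,
// the scalar carries the magnitude, and each can be changed independently.
V3 withSpeed(V3 velocity, double newSpeed) {
    V3 dir = scale(velocity, 1.0 / length(velocity));  // unit direction
    return scale(dir, newSpeed);
}
```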