Determine polygon with specified property - C++

I am creating a graphics project in which, at some point, I need to determine whether there exists a point x inside a polygon such that every line segment joining x to a vertex of the polygon lies completely inside the polygon.
I wonder if there is a well-known algorithm for this, or whether one of you could describe such an algorithm.
I am looking for a linear-time algorithm.

You are asking how to compute the kernel of a star-shaped polygon. This problem was solved in 1979 by Lee and Preparata in a paper entitled An Optimal Algorithm for Finding the Kernel of a Polygon. From their abstract:
The kernel K(P) of a simple polygon P with n vertices is the locus of the points internal to P from which all vertices of P are visible. Equivalently, K(P) is the intersection of appropriate half-planes determined by the polygon's edges. Although it is known that to find the intersection of n generic half-planes requires time O(n log n), we show that one can exploit the ordering of the half-planes corresponding to the sequence of the polygon's edges to obtain a kernel finding algorithm which runs in time O(n) and is therefore optimal.
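
If the optimal algorithm is more than you need, the half-plane characterization from the abstract translates almost directly into code: walk a counter-clockwise polygon and clip a large box against the half-plane to the left of each directed edge. That is O(n^2) rather than O(n), but it is a handful of lines. A minimal sketch (the point type and helpers are illustrative, not taken from the paper):

    #include <vector>

    struct Pt { double x, y; };

    // Cross product (b - a) x (c - a); > 0 means c lies left of the line a->b.
    static double cross(const Pt& a, const Pt& b, const Pt& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // One Sutherland-Hodgman step: clip a convex polygon against the
    // half-plane to the left of the directed line a->b.
    static std::vector<Pt> clipLeft(const std::vector<Pt>& poly, Pt a, Pt b) {
        std::vector<Pt> out;
        for (size_t i = 0; i < poly.size(); ++i) {
            Pt cur = poly[i], nxt = poly[(i + 1) % poly.size()];
            double dc = cross(a, b, cur), dn = cross(a, b, nxt);
            if (dc >= 0) out.push_back(cur);
            if ((dc >= 0) != (dn >= 0)) {           // edge crosses the line
                double t = dc / (dc - dn);
                out.push_back({cur.x + t * (nxt.x - cur.x),
                               cur.y + t * (nxt.y - cur.y)});
            }
        }
        return out;
    }

    // Kernel of a counter-clockwise simple polygon: intersect the half-planes
    // to the left of every edge, starting from a box containing the polygon.
    std::vector<Pt> kernel(const std::vector<Pt>& poly, double big = 1e9) {
        std::vector<Pt> k = { {-big, -big}, {big, -big}, {big, big}, {-big, big} };
        for (size_t i = 0; i < poly.size() && !k.empty(); ++i)
            k = clipLeft(k, poly[i], poly[(i + 1) % poly.size()]);
        return k;   // empty => the polygon is not star-shaped
    }

If the result is non-empty, any point of it (its centroid, say) is a valid x.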

Related

Finding the best algorithm for nearest neighbor search in a 2D plane with moving points

I am looking for an efficient way to perform nearest neighbor searches within a specified radius in a two-dimensional plane. According to Wikipedia, space-partitioning data structures, such as:
k-d trees,
r-trees,
octrees,
quadtrees,
cover trees,
metric trees,
BBD trees,
locality-sensitive hashing,
and bins,
are often used for organizing points in a multi-dimensional space and can provide O(log n) performance for search and insert operations. However, in my case, the points in the two-dimensional plane are moving at each iteration, so I need to update the tree accordingly. Rebuilding the tree from scratch at each iteration seems easier, but I would like to avoid it if possible because the points only move slightly between iterations.
I have read that k-d trees are not naturally balanced, which could be an issue in my case. R-trees are better suited for storing rectangles, while bin algorithms are easy to implement and provide near-linear search performance within local bins.
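For example, here is a rough sketch of the bin approach I am considering (the Agent type and the cell-size choice are just placeholders): rebuilding the grid is O(n) per iteration, and with the cell size equal to the search radius a query only has to visit the 3x3 block of cells around the query point.

    #include <cmath>
    #include <unordered_map>
    #include <vector>

    struct Agent { float x, y; };

    // Uniform grid ("bins") with cell size equal to the search radius r.
    // Rebuilding is O(n), so doing it every iteration is cheap even though the
    // points move; a radius query then only visits the 3x3 surrounding cells.
    struct Grid {
        float cell;
        std::unordered_map<long long, std::vector<int>> bins;

        static long long key(int gx, int gy) {
            return (static_cast<long long>(gx) << 32) ^ static_cast<unsigned int>(gy);
        }
        void build(const std::vector<Agent>& a) {
            bins.clear();
            for (int i = 0; i < static_cast<int>(a.size()); ++i)
                bins[key(static_cast<int>(std::floor(a[i].x / cell)),
                         static_cast<int>(std::floor(a[i].y / cell)))].push_back(i);
        }
        void query(const std::vector<Agent>& a, float x, float y, float r,
                   std::vector<int>& out) const {
            int gx = static_cast<int>(std::floor(x / cell));
            int gy = static_cast<int>(std::floor(y / cell));
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    auto it = bins.find(key(gx + dx, gy + dy));
                    if (it == bins.end()) continue;
                    for (int i : it->second) {
                        float ex = a[i].x - x, ey = a[i].y - y;
                        if (ex * ex + ey * ey <= r * r) out.push_back(i);
                    }
                }
        }
    };

The circular-sector constraint (angle θ) could then be applied as a filter over the returned candidates.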
I am working on an autonomous agent simulation where 1,000,000 agents are rendered on the GPU, and the CPU is responsible for computing the next movement of each agent. Each agent is influenced by other agents within its line of sight, in other words, other agents within a circular sector of angle θ and radius r. So here are the specific requirements for my use case:
The search space is a 2D plane.
Each object is a point identified by its (x, y) coordinates.
All points are frequently updated by a small factor.
I cannot afford any O(n^2) algorithm.
Search within a radius (circular sector).
Search for all candidates within the search surface.
Given these considerations, what would be the best algorithms for my use case?
I think you could potentially solve this with a sort of scheduling approach. If you know that no object moves more than d per iteration, and you want to know which objects are within X of each other on each iteration, note that the distance between any pair changes by at most 2d per iteration (each of the two objects can move up to d toward or away from the other). So, given the current pairwise distances, the only pairs that could change their neighbor status on the next iteration are those with a distance between X - 2d and X + 2d; the iteration after that, between X - 4d and X + 4d, and so on.
So I'm thinking that you could do an initial distance calculation between all pairs of objects, and then, based on each distance's gap from X, build an NxN matrix whose cells record the iteration at which that pair next needs to be re-checked. When you re-check a pair during that iteration, you update its cell with the next iteration at which it needs checking.
The only problem is whether calculating an initial NxN distance matrix is feasible.
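Here is a rough sketch of the initial scheduling pass for a small N (the Agent type and the ceil-based wait formula are my own reading of the scheme; for N = 1,000,000 the matrix itself is the obstacle, as noted):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Agent { double x, y; };

    // next[i][j] = earliest iteration at which pair (i, j) could change its
    // "within distance X" status, given that a pair's distance changes by at
    // most 2*d per iteration.
    std::vector<std::vector<int>> scheduleRechecks(const std::vector<Agent>& a,
                                                   double X, double d, int now) {
        const size_t n = a.size();
        std::vector<std::vector<int>> next(n, std::vector<int>(n, now + 1));
        for (size_t i = 0; i < n; ++i)
            for (size_t j = i + 1; j < n; ++j) {
                double dist = std::hypot(a[i].x - a[j].x, a[i].y - a[j].y);
                double gap  = std::fabs(dist - X);   // distance to the threshold
                int wait = std::max(1, static_cast<int>(std::ceil(gap / (2.0 * d))));
                next[i][j] = next[j][i] = now + wait;
            }
        return next;
    }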

How can I tell if a QPolygon is simple? [duplicate]

For a polygon defined as a sequence of (x, y) points, how can I detect whether it is complex? A complex polygon has edges that intersect each other.
Is there a better solution than checking every pair of edges, which would have a time complexity of O(N^2)?
There are sweep methods which can determine this much faster than a brute force approach. In addition, they can be used to break a non-simple polygon into multiple simple polygons.
For details, see this article, in particular, this code to test for a simple polygon.
See the Bentley-Ottmann algorithm for a sweep-based O((N + I) log N) method, where N is the number of line segments and I is the number of intersection points.
In fact, this can be done in linear time using Chazelle's triangulation algorithm: it either triangulates the polygon or reports that the polygon is not simple.
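
For reference, the brute-force O(N^2) baseline these methods improve on is short: a polygon is simple iff no two non-adjacent edges intersect. A sketch with a standard orientation-based segment test (collinear overlaps and touching endpoints are not handled; the types are illustrative):

    #include <vector>

    struct Pt { double x, y; };

    static double cross(Pt o, Pt a, Pt b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Proper-crossing test for segments ab and cd.
    static bool segmentsCross(Pt a, Pt b, Pt c, Pt d) {
        double d1 = cross(a, b, c), d2 = cross(a, b, d);
        double d3 = cross(c, d, a), d4 = cross(c, d, b);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    // O(N^2) simplicity test: a polygon is simple iff no two
    // non-adjacent edges intersect.
    bool isSimple(const std::vector<Pt>& p) {
        size_t n = p.size();
        for (size_t i = 0; i < n; ++i)
            for (size_t j = i + 1; j < n; ++j) {
                if ((i + 1) % n == j || (j + 1) % n == i) continue;  // adjacent
                if (segmentsCross(p[i], p[(i + 1) % n], p[j], p[(j + 1) % n]))
                    return false;
            }
        return true;
    }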

Path finding on a large list of nodes? Around 100,000 nodes

I have a list of nodes as 2D coordinates (arrays of floats), and the goal is to find how many nodes are linked to a given source node.
Two nodes are defined as linked if the distance between them is less than or equal to 10. Also, if the distance between A and B is <= 10, the distance between B and C is <= 10, and the distance between A and C is > 10, then A and C are still linked, as the path would be A->B->C. So, in theory, it is a typical graph search problem.
Here is the problem: I have around 100,000 nodes in a list, each a 2D coordinate point. Since the list is enormous, conventional traversal and path-finding algorithms like DFS or BFS would take O(n^2) time just to construct the adjacency list, which is not ideal and not what I am looking for.
I researched on the internet and found that a quadtree or k-d tree would probably be the best to implement in this case. I have written my own quadtree class, but I just don't understand how to implement a search algorithm like DFS on it. Or is there something else that I am missing?
A quadtree groups points by splitting 2D space into quarters, either until each point has a quadrant to itself, or until you reach a minimum size, after which you lump all points within the quadrant into a list.
Since you're trying to find all points within a maximum distance of each point in your source list, you don't need to go all the way down to one point per cell. To pick a cutoff, I would run performance tests with a few different values, but as a starting point, the maximum connection distance (10 here) is probably a good guess for the minimum quadrant size.
So now you have all of your points grouped into a tree and you need to know how to actually find nearby ones.
Since the quadtree encodes spatial information, to find points within a certain distance of any given point, you would descend the quadtree and use that spatial information to exclude entire quadrants from your search. To do this, you would check whether the nearest bound of each quadrant is beyond the maximum distance from the point you are searching from. If the closest edge of that quadrant is beyond the maximum distance, then none of the points in that quadrant can possibly be within the maximum distance, so there is no need to explore that part of the tree. (This is similar to how a binary search doesn't need to search parts of a sorted array or tree, because it knows that those parts cannot possibly contain the value being searched for).
Once you get down to the level of the quadtree where you have a single point or a list of points, you would do a regular Euclidean distance check with those points to see if they are actually within the maximum distance. (Don't forget to check for equality; otherwise you'll find the same point you're searching around.)
So, for example, if you were searching for points near a point deep inside the bottom-right top-level quadrant, there would be no need to search the other three top-level quadrants, because all three of them would be beyond the maximum distance. This would save you from exploring all of the sub-quadrants in those parts of the tree and avoid doing distance comparisons against all of those points.
If, however, you are searching for a point near the edge of a quadrant, you do need to check neighboring quadrants, because the nearest bound will be close enough that you cannot exclude the possibility of a valid point being in that quadrant.
In your particular case, you would make use of this by building the quadtree once, and then looping over the original list of points and doing the search I described above to find all points near that point. You would then use the found-points to build a connectivity graph, which could be efficiently traversed by Depth/Breadth-First-Search or could be given edge-weights to be used with a more complex, weighted search like Dijkstra's Algorithm or A*.
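Here is a sketch of the pruned radius search described above (the node layout and the leaf policy are illustrative assumptions; building the tree is omitted):

    #include <algorithm>
    #include <cmath>
    #include <memory>
    #include <vector>

    struct Pt { float x, y; };

    struct Quad {
        float cx, cy, half;               // square cell: centre and half-size
        std::vector<Pt> pts;              // points (leaf cells only)
        std::unique_ptr<Quad> child[4];   // null in leaves

        bool isLeaf() const { return !child[0]; }
    };

    // Smallest distance from (x, y) to the cell's square; 0 if inside.
    static float cellDist(const Quad& q, float x, float y) {
        float dx = std::max(std::fabs(x - q.cx) - q.half, 0.0f);
        float dy = std::max(std::fabs(y - q.cy) - q.half, 0.0f);
        return std::sqrt(dx * dx + dy * dy);
    }

    // Collect all points within radius r of (x, y), skipping whole quadrants
    // whose nearest bound is already farther away than r.
    void radiusSearch(const Quad& q, float x, float y, float r,
                      std::vector<Pt>& out) {
        if (cellDist(q, x, y) > r) return;        // prune this entire quadrant
        if (q.isLeaf()) {
            for (const Pt& p : q.pts) {
                float dx = p.x - x, dy = p.y - y;
                if (dx * dx + dy * dy <= r * r && (dx != 0.0f || dy != 0.0f))
                    out.push_back(p);             // skip the query point itself
            }
            return;
        }
        for (const auto& c : q.child)
            if (c) radiusSearch(*c, x, y, r, out);
    }

Running this once per node with r = 10 gives you the adjacency lists for the subsequent BFS/DFS.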

What kind of algorithm for generating a height map from contour lines?

I'm looking to interpolate some contour lines to generate a 3D view. The contours are not stored in a picture; the coordinates of each point of a contour are simply stored in a std::vector.
For convex contours, it seems (I didn't check it myself) that the height can easily be calculated by linear interpolation, using the distance between the two closest points of the two closest contours.
But my contours are not necessarily convex, so it's trickier... actually, I don't have any idea what kind of algorithm I can use.
UPDATE (26 Nov. 2013): I finished writing a discrete Laplace example; you can get the code here.
What you have is basically the classical Dirichlet problem:
Given the values of a function on the boundary of a region of space, assign values to the function in the interior of the region so that it satisfies a specific equation (such as Laplace's equation, which essentially requires the function to have no arbitrary "bumps") everywhere in the interior.
There are many ways to calculate approximate solutions to the Dirichlet problem. A simple approach, which should be well suited to your problem, is to start by discretizing the system; that is, you take a finite grid of height values, assign fixed values to those points that lie on a contour line, and then solve a discretized version of Laplace's equation for the remaining points.
Now, what Laplace's equation actually specifies, in plain terms, is that every point should have a value equal to the average of its neighbors. In the mathematical formulation of the equation, we require this to hold true in the limit as the radius of the neighborhood tends towards zero, but since we're actually working on a finite lattice, we just need to pick a suitable fixed neighborhood. A few reasonable choices of neighborhoods include:
the four orthogonally adjacent points surrounding the center point (a.k.a. the von Neumann neighborhood),
the eight orthogonally and diagonally adjacent grid points (a.k.a. the Moore neighborhood), or
the eight orthogonally and diagonally adjacent grid points, weighted so that the orthogonally adjacent points are counted twice (essentially the sum or average of the above two choices).
(Out of the choices above, the last one generally produces the nicest results, since it most closely approximates a Gaussian kernel, but the first two are often almost as good, and may be faster to calculate.)
Once you've picked a neighborhood and defined the fixed boundary points, it's time to compute the solution. For this, you basically have two choices:
Define a system of linear equations, one per each (unconstrained) grid point, stating that the value at each point is the average of its neighbors, and solve it. This is generally the most efficient approach, if you have access to a good sparse linear system solver, but writing one from scratch may be challenging.
Use an iterative method, where you first assign an arbitrary initial guess to each unconstrained grid point (e.g. using linear interpolation, as you suggest) and then loop over the grid, replacing the value at each point with the average of its neighbors. Then keep repeating this until the values stop changing (much).
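For concreteness, a minimal sketch of the iterative option on a W x H grid with the von Neumann neighborhood (the flat array layout, the tolerance, and leaving the outer border untouched are my simplifications):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Gauss-Seidel relaxation of the discrete Laplace equation on a W x H grid
    // (von Neumann neighborhood). 'fixed' marks cells lying on a contour line;
    // those keep their given height. Border cells are left untouched for brevity.
    void relax(std::vector<double>& h, const std::vector<bool>& fixed,
               int W, int H, double tol = 1e-4) {
        double maxChange;
        do {
            maxChange = 0.0;
            for (int y = 1; y < H - 1; ++y)
                for (int x = 1; x < W - 1; ++x) {
                    int i = y * W + x;
                    if (fixed[i]) continue;
                    double avg = 0.25 * (h[i - 1] + h[i + 1] + h[i - W] + h[i + W]);
                    maxChange = std::max(maxChange, std::fabs(avg - h[i]));
                    h[i] = avg;
                }
        } while (maxChange > tol);   // i.e. until the values stop changing (much)
    }

Seeding h with a linear interpolation, as suggested, makes this converge much faster than starting from zero.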
You can generate the Constrained Delaunay Triangulation of the vertices and line segments describing the contours, then use the height defined at each vertex as a Z coordinate.
The resulting triangulation can then be rendered like any other triangle soup.
Despite the name, you can use TetGen to generate the triangulations, though it takes a bit of work to set up.

k-way triangle set intersection and triangulation

If we have K sets of potentially overlapping triangles, what is a computationally efficient way of computing a new, non-overlapping set of triangles?
For example, consider this problem:
Here we have 3 triangle sets A, B, C with some mutual overlap, and we wish to obtain the non-overlapping sets A', B', C', AB, AC, BC, ABC, where, for example, AC would contain the triangles covering the surface where exactly A and C overlap, and A' would contain the triangles covering the parts of A that overlap no other set.
I (also) propose a two-step approach.
1. Find the intersection points of all triangle sides.
As pointed out in the comments, this is a well-researched problem, typically approached with line-sweep methods. Here is a very nice overview; look especially at the Bentley-Ottmann algorithm.
2. Triangulate with Constrained Delaunay.
I think Polygon Triangulation as suggested by @Claudiu cannot solve your problem, as it cannot guarantee that all original edges are included. Therefore, I suggest you look at Constrained Delaunay triangulations. These allow you to specify edges that must be included in your triangulation, even if they would not be included in an unconstrained Delaunay or polygon triangulation. Furthermore, there are implementations that allow you to specify a non-convex border of your triangulation outside of which no triangles are generated. This also seems to be a requirement in your case.
Implementing Constrained Delaunay is non-trivial. There is, however, a somewhat dated but very nice C implementation available from a CMU researcher (including a command-line tool). See here for the theory of this specific algorithm. This algorithm also supports specification of a border. Note that the linked algorithm can do more than just Constrained Delaunay (namely quality mesh generation), but it can be configured not to add new points, which amounts to Constrained Delaunay.
Edit See comments for another implementation.
If you want something a bit more straightforward, faster to implement, and significantly less code... I'd recommend just doing some simple polygon clipping, like the old software rendering algorithms used to do (especially since you're only dealing with triangles as your input). As briefly mentioned by a couple of other people, it involves splitting each triangle wherever another segment intersects it.
Triangles are easy, because splitting a triangle along a line always results in just 1 or 2 new ones (2 or 3 total); see the sketch after this answer. If your data set is rather large, you could introduce a quadtree or another form of spatial organization in order to find the intersecting triangles faster as the new ones get added.
Granted, this would generate more polygons than the suggested Constrained Delaunay algorithm. But many of those algorithms don't do well with overlapping shapes and would require you to know your silhouette segments, so you'd be doing much of the same work anyhow.
And if fewer resulting triangles is a requirement, you can always do a merging pass at the end (adding neighbor information during the clipping to speed that portion up).
Anyway, good luck!
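
For concreteness, here is a sketch of the one-triangle-versus-one-line splitting step referenced above (Sutherland-Hodgman style; the point type and the fan triangulation are my assumptions):

    #include <vector>

    struct Pt { double x, y; };

    static double side(Pt a, Pt b, Pt p) {        // > 0 left of a->b, < 0 right
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    static Pt lerp(Pt p, Pt q, double t) {
        return {p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)};
    }

    // Split triangle t by the infinite line through a->b and fan-triangulate
    // the piece on each side. Produces 1 triangle (no cut) or 2-3 (cut).
    void splitTriangle(const Pt t[3], Pt a, Pt b,
                       std::vector<std::vector<Pt>>& outTris) {
        std::vector<Pt> left, right;
        for (int i = 0; i < 3; ++i) {
            Pt cur = t[i], nxt = t[(i + 1) % 3];
            double dc = side(a, b, cur), dn = side(a, b, nxt);
            if (dc > 0)      left.push_back(cur);
            else if (dc < 0) right.push_back(cur);
            else { left.push_back(cur); right.push_back(cur); }  // on the line
            if ((dc > 0 && dn < 0) || (dc < 0 && dn > 0)) {
                Pt x = lerp(cur, nxt, dc / (dc - dn));
                left.push_back(x);      // the crossing point belongs to both
                right.push_back(x);
            }
        }
        for (auto* poly : {&left, &right})   // each half is convex: fan it
            for (size_t i = 1; i + 1 < poly->size(); ++i)
                outTris.push_back({(*poly)[0], (*poly)[i], (*poly)[i + 1]});
    }

A merging pass at the end, as mentioned, would then recombine the excess triangles this produces.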
Your example is a special case of what computational geometers call an "arrangement." The CGAL library has extensive and efficient arrangement-handling routines. If you check this part of the documentation, you'll see that you can declare an empty arrangement, then insert triangles to divide the 2D plane into disjoint faces. As others have said, you'll then need to triangulate the faces that aren't already triangles. Happily, CGAL also provides routines to do this. This example of constrained Delaunay triangulation is a good place to start.
CGAL attempts to use the most efficient algorithms available that are practical to implement. In this case it looks like you can achieve O((n + k) log n) for an arrangement with n edges (3 times the number of triangles in your case) and k intersections. The algorithm uses a general technique called a "sweep line": a vertical line is swept left to right, with "events" computed and processed along the way. Events are edge endpoints and intersections. As each event is processed, a cell of the arrangement is updated.
Delaunay algorithms are typically O(n log n) for n vertices. There are several common algorithms, easily looked up or found in the CGAL references.
Even if you can't use CGAL in your work (e.g. for licensing reasons), the documentation is full of sources on the underlying algorithms: arrangements and constrained Delaunay algorithms.
Beware, however, that both arrangements and triangulations are notoriously hard to implement correctly due to floating-point error. Robust versions often depend on rational arithmetic (available in CGAL).
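
For a feel of the CGAL side, a minimal constrained Delaunay sketch (the kernel choice and the toy two-triangle input are placeholders; the arrangement classes follow the same declare-then-insert pattern):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Constrained_Delaunay_triangulation_2.h>
    #include <iostream>
    #include <utility>
    #include <vector>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
    typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;

    int main() {
        CDT cdt;
        // Force the edges of two overlapping triangles into the triangulation.
        K::Point_2 a(0, 0), b(4, 0), c(2, 3);
        K::Point_2 d(2, 1), e(6, 1), f(4, 4);
        std::vector<std::pair<K::Point_2, K::Point_2>> edges = {
            {a, b}, {b, c}, {c, a}, {d, e}, {e, f}, {f, d}
        };
        for (const auto& pq : edges)
            cdt.insert_constraint(pq.first, pq.second);  // edges survive intact

        // The resulting faces respect the original triangle boundaries, so the
        // overlap regions come out subdivided into triangles.
        std::cout << "faces: " << cdt.number_of_faces() << "\n";
        return 0;
    }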
To expand a bit on the comment from Ted Hopp, this should be possible by first computing a planar subdivision in which each bounded face of the output is associated with one of the sets A', B', C', AB, AC, BC, ABC, or "none". The second phase is then to triangulate the (possibly non-convex) faces in each set.
Step 1 could be performed in O((N + K) log N) time using a variation of the Bentley-Ottmann sweep line algorithm in which the current set is maintained as part of the algorithm's state. This can be determined from the line segments that have already been crossed and their direction.
Once that's done, the disjoint polygons for each set can then be broken into monotone pieces in O(N log N) time which in turn can be triangulated in O(N) time.
If you haven't already, pick up a copy of "Computational Geometry: Algorithms and Applications" by de Berg et al.
I can think of two approaches.
A more general approach is treating your input as just a set of lines and splitting the problem in two:
Polygon Detection. Take the set of lines your initial triangles make and get a set of non-overlapping polygons. This paper offers an O((N + M)^4) approach, where N is the number of line segments and M the number of intersections, which does seem a bit slow, unfortunately...
Polygon Triangulation. Take each polygon from step 1 and triangulate it. This takes O(n log* n), which is practically O(n).
Another approach is to write a custom algorithm: solve the problem for intersecting two triangles, apply it to the first two input triangles, then, for each new triangle, apply the algorithm to all the current triangles with the new one. Even for two triangles this isn't that straightforward, though, as there are several cases (this might not be exhaustive):
No points of either triangle are inside the other:
No intersection
Jewish star
Two perpendicular spikes
One point of one triangle is contained in the other
Each triangle contains one point of the other
Two points of one triangle are in the other
Three points of one are in the other - one is fully contained
etc... no, it doesn't seem like that is the right approach. Leaving it here anyway for posterity.