How to detect boundary points - c++

I have a set of points in a 2D plane. I need to detect only the points that belong to the shape formed by the union of these points so that it covers the largest possible area: that is, the boundary points.
The following figure is an example:
The red points are the ones I need to detect.

What you need is called the convex hull. Many algorithms exist to compute it.
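For example, Andrew's monotone chain algorithm is simple and runs in O(n log n); here is a minimal sketch (the Point struct is only a placeholder for whatever point type you use):

    // Minimal sketch of Andrew's monotone chain convex hull (O(n log n)).
    // The Point struct is just a placeholder for your own point type.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Point { double x, y; };

    // Cross product of (b - a) and (c - a); > 0 means a counter-clockwise turn.
    static double cross(const Point& a, const Point& b, const Point& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // Returns the hull vertices in counter-clockwise order.
    std::vector<Point> convexHull(std::vector<Point> pts) {
        std::sort(pts.begin(), pts.end(), [](const Point& p, const Point& q) {
            return p.x < q.x || (p.x == q.x && p.y < q.y);
        });
        const std::size_t n = pts.size();
        if (n < 3) return pts;

        std::vector<Point> hull(2 * n);
        std::size_t k = 0;
        // Build the lower hull.
        for (std::size_t i = 0; i < n; ++i) {
            while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
            hull[k++] = pts[i];
        }
        // Build the upper hull.
        for (std::size_t i = n - 1, t = k + 1; i-- > 0; ) {
            while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
            hull[k++] = pts[i];
        }
        hull.resize(k - 1);   // the last point repeats the first, so drop it
        return hull;
    }

The <= in the cross-product tests also discards collinear points, so only the corners of the hull are returned.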

Related

Path finding on a large list of nodes? Around 100,000 nodes

I have a list of nodes as 2D coordinates (arrays of floats), and the goal is to find how many nodes are linked to a given source node.
Two nodes are defined as linked if the distance between them is less than or equal to 10. Also, if the distance between A and B is <= 10, the distance between B and C is <= 10, and the distance between A and C is > 10, then A and C are still linked, because the path would be A->B->C. So, in theory, it is a typical graph search problem.
Here is the problem: I have around 100,000 nodes in the list, each a 2D coordinate point. Since the list is enormous, conventional traversal and path-finding algorithms like DFS or BFS would take O(n^2) time just to construct the adjacency list, which is not ideal and not what I am looking for.
I researched on the internet and found that a quad tree or k-d tree is probably the best structure to use in this case. I have also written my own Quad Tree class; I just don't understand how to implement a search algorithm like DFS on it. Or is there something else that I am missing?
A quadtree groups points by splitting 2D space into quarters, either until each point has a quadrant to itself, or until you reach a minimum size, after which you lump all points within the quadrant into a list.
Since you're trying to find all points within a maximum distance of each point in your source list, you don't need to go all the way down to one point per cell. To pick a cutoff I would run performance tests on a few different values, but as a starting point, the maximum connection distance between points is probably a good guess for the minimum quadrant size.
So now you have all of your points grouped into a tree and you need to know how to actually find nearby ones.
Since the quadtree encodes spatial information, to find points within a certain distance of any given point, you would descend the quadtree and use that spatial information to exclude entire quadrants from your search. To do this, you would check whether the nearest bound of each quadrant is beyond the maximum distance from the point you are searching from. If the closest edge of that quadrant is beyond the maximum distance, then none of the points in that quadrant can possibly be within the maximum distance, so there is no need to explore that part of the tree. (This is similar to how a binary search doesn't need to search parts of a sorted array or tree, because it knows that those parts cannot possibly contain the value being searched for).
Once you get down to a level of the quadtree where you have a single point or a list of points, you do a regular Euclidean distance check against those points to see whether they are actually within the maximum distance. (Don't forget to check for equality, otherwise you'll also find the same point you're searching around.)
So, for example, if you were searching for points near one of the points in the bottom-right corner of this image, there would be no need to search the other three top-level quadrants because all three of them would be beyond the maximum distance. This would save you from exploring all of the sub-quadrants in those parts of the tree and avoid doing distance comparisons against all of those points.
If, however, you are searching for a point near the edge of a quadrant, you do need to check neighboring quadrants, because the nearest bound will be close enough that you cannot exclude the possibility of a valid point being in that quadrant.
In your particular case, you would make use of this by building the quadtree once, then looping over the original list of points and doing the search described above to find all points near each one. You would then use the found points to build a connectivity graph, which can be traversed efficiently by depth-first or breadth-first search, or given edge weights and used with a more complex, weighted search like Dijkstra's algorithm or A*.
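For illustration, a pruned radius search over such a tree might look roughly like this in C++ (the QuadNode layout is invented for the example and will differ from your own Quad Tree class):

    // Rough sketch of the pruned radius search described above. The QuadNode
    // layout here is invented for the example and will differ from your own
    // Quad Tree class.
    #include <algorithm>
    #include <memory>
    #include <vector>

    struct Pt { float x, y; };

    struct QuadNode {
        float minX, minY, maxX, maxY;        // bounds of this quadrant
        std::vector<Pt> points;              // filled only in leaf nodes
        std::unique_ptr<QuadNode> child[4];  // all null in leaf nodes

        // Squared distance from p to the nearest point of this quadrant's bounds
        // (zero if p lies inside the quadrant).
        float distSqToBounds(const Pt& p) const {
            float dx = std::max({minX - p.x, 0.0f, p.x - maxX});
            float dy = std::max({minY - p.y, 0.0f, p.y - maxY});
            return dx * dx + dy * dy;
        }
    };

    // Collect all points within maxDist of 'center', skipping any quadrant whose
    // nearest bound is already farther away than maxDist.
    void radiusQuery(const QuadNode* node, const Pt& center, float maxDist,
                     std::vector<Pt>& out) {
        if (!node || node->distSqToBounds(center) > maxDist * maxDist) return;
        if (!node->child[0]) {               // leaf: brute-force distance checks
            for (const Pt& p : node->points) {
                float dx = p.x - center.x, dy = p.y - center.y;
                if (dx * dx + dy * dy <= maxDist * maxDist &&
                    !(p.x == center.x && p.y == center.y))   // skip the query point
                    out.push_back(p);
            }
            return;
        }
        for (const auto& c : node->child)    // internal node: recurse into children
            radiusQuery(c.get(), center, maxDist, out);
    }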

How can I find the closest location on a map to an arbitrary point?

Note: For the remainder of this question I will call this arbitrary point "myPoint" to avoid confusion.
Problem: There are several points on the map (it is not practical to calculate the distance between each point and myPoint).
Attempt at a Solution: I tried doing a radius search, but in order to know which points lie within the circle, I would have to go through all the points and check that the distance from each one to myPoint is less than the search circle's radius.
Question: How can I find the closest point to myPoint in an efficient way? Please ask questions if clarification is needed.
You could use a technique to "partition" the search space (i.e., the map).
You could consider defining a regular grid that covers the map and storing all map locations within the cells of the grid.
With this it is easy to determine which cell contains myPoint. Then it is just a matter of considering the points within the same cell.
Note: You may have to consider neighboring cells too, in the event that the cell containing myPoint doesn't have any map locations, or when the distance to a point in a neighboring cell is shorter than the distance to the closest point within the same cell (e.g., when myPoint is near a cell border).
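As a rough illustration (all names here are made up, and coordinates are assumed to lie within the map bounds), such a grid could look like this in C++:

    // Sketch of the grid-partition idea described above. All names (Grid, Loc,
    // cellSize, ...) are made up for the example; coordinates are assumed to
    // lie in [0, mapW) x [0, mapH).
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Loc { double x, y; };

    struct Grid {
        double cellSize;
        int cols, rows;
        std::vector<std::vector<Loc>> cells;   // one bucket per grid cell

        Grid(double mapW, double mapH, double cell)
            : cellSize(cell),
              cols(static_cast<int>(std::ceil(mapW / cell))),
              rows(static_cast<int>(std::ceil(mapH / cell))),
              cells(static_cast<std::size_t>(cols) * rows) {}

        int cellIndex(const Loc& p) const {
            int c = std::min(cols - 1, static_cast<int>(p.x / cellSize));
            int r = std::min(rows - 1, static_cast<int>(p.y / cellSize));
            return r * cols + c;
        }

        void insert(const Loc& p) { cells[cellIndex(p)].push_back(p); }

        // Nearest stored location to myPoint, checking the containing cell and
        // its 8 neighbours; if they are all empty, the search must be widened.
        Loc nearest(const Loc& myPoint) const {
            int c0 = std::min(cols - 1, static_cast<int>(myPoint.x / cellSize));
            int r0 = std::min(rows - 1, static_cast<int>(myPoint.y / cellSize));
            Loc best{};
            double bestD = std::numeric_limits<double>::max();
            for (int r = r0 - 1; r <= r0 + 1; ++r)
                for (int c = c0 - 1; c <= c0 + 1; ++c) {
                    if (r < 0 || c < 0 || r >= rows || c >= cols) continue;
                    for (const Loc& p : cells[r * cols + c]) {
                        double dx = p.x - myPoint.x, dy = p.y - myPoint.y;
                        double d = dx * dx + dy * dy;
                        if (d < bestD) { bestD = d; best = p; }
                    }
                }
            return best;   // bestD still at "max" means the 3x3 block was empty
        }
    };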
Calculating the distance between points is inevitable. What I suggest is to slice the map into smaller pieces (not necessarily regular) and then only calculate the distance between points that lie in the same slice.

What kind of algorithm can generate a height-map from contour lines?

I'm looking to interpolate some contour lines in order to generate a 3D view. The contours are not stored in a picture; the coordinates of each point of a contour are simply stored in a std::vector.
For convex contours, it seems (though I haven't checked myself) that the height can be easily calculated by linear interpolation, using the distance between the two closest points on the two closest contours.
My contours, however, are not necessarily convex, so it's trickier... actually, I don't have any idea what kind of algorithm I can use.
UPDATE: 26 Nov. 2013
I have finished writing a Discrete Laplace example:
you can get the code here
What you have is basically the classical Dirichlet problem:
Given the values of a function on the boundary of a region of space, assign values to the function in the interior of the region so that it satisfies a specific equation (such as Laplace's equation, which essentially requires the function to have no arbitrary "bumps") everywhere in the interior.
There are many ways to calculate approximate solutions to the Dirichlet problem. A simple approach, which should be well suited to your problem, is to start by discretizing the system; that is, you take a finite grid of height values, assign fixed values to those points that lie on a contour line, and then solve a discretized version of Laplace's equation for the remaining points.
Now, what Laplace's equation actually specifies, in plain terms, is that every point should have a value equal to the average of its neighbors. In the mathematical formulation of the equation, we require this to hold true in the limit as the radius of the neighborhood tends towards zero, but since we're actually working on a finite lattice, we just need to pick a suitable fixed neighborhood. A few reasonable choices of neighborhoods include:
the four orthogonally adjacent points surrounding the center point (a.k.a. the von Neumann neighborhood),
the eight orthogonally and diagonally adjacent grid points (a.k.a. the Moore neighborhood), or
the eight orthogonally and diagonally adjacent grid points, weighted so that the orthogonally adjacent points are counted twice (essentially the sum or average of the above two choices).
(Out of the choices above, the last one generally produces the nicest results, since it most closely approximates a Gaussian kernel, but the first two are often almost as good, and may be faster to calculate.)
Once you've picked a neighborhood and defined the fixed boundary points, it's time to compute the solution. For this, you basically have two choices:
Define a system of linear equations, one for each (unconstrained) grid point, stating that the value at each point is the average of its neighbors, and solve it. This is generally the most efficient approach if you have access to a good sparse linear system solver, but writing one from scratch may be challenging.
Use an iterative method, where you first assign an arbitrary initial guess to each unconstrained grid point (e.g. using linear interpolation, as you suggest) and then loop over the grid, replacing the value at each point with the average of its neighbors. Then keep repeating this until the values stop changing (much).
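As a rough sketch of the second option (with assumed names for the grid arrays and the 4-point neighborhood):

    // Rough sketch of the second (iterative) option, using the 4-point von Neumann
    // neighborhood. 'height' is a W x H grid stored row by row and 'fixed' marks
    // the cells that lie on a contour line; names and layout are assumptions.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    void relaxLaplace(std::vector<double>& height, const std::vector<bool>& fixed,
                      int W, int H, int maxIters = 10000, double tolerance = 1e-4) {
        for (int iter = 0; iter < maxIters; ++iter) {
            double maxChange = 0.0;
            // Border cells are skipped; assume they are fixed or handled separately.
            for (int y = 1; y < H - 1; ++y)
                for (int x = 1; x < W - 1; ++x) {
                    int i = y * W + x;
                    if (fixed[i]) continue;          // contour points keep their value
                    double avg = 0.25 * (height[i - 1] + height[i + 1] +
                                         height[i - W] + height[i + W]);
                    maxChange = std::max(maxChange, std::fabs(avg - height[i]));
                    height[i] = avg;                 // in-place update (Gauss-Seidel style)
                }
            if (maxChange < tolerance) break;        // values have (almost) stopped changing
        }
    }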
You can generate the Constrained Delaunay Triangulation of the vertices and line segments describing the contours, then use the height defined at each vertex as a Z coordinate.
The resulting triangulation can then be rendered like any other triangle soup.
Despite the name, you can use TetGen to generate the triangulations, though it takes a bit of work to set up.

How can I compare points with different numbers of dimensions?

I have data points of different dimensions and I want to compare them so that I can remove redundant points. I tried to make the points the same dimension using PCA, but the problem is that PCA reduced the dimensions and I lost what each dimension means, since the resulting points are different from the points I had. I wonder if there is any other way to do this; in other words, is there any way to compare points with different numbers of dimensions?
Assume relevant null values for the missing dimensions? For instance, if you want to compare a 2D point (x, y) with a 3D one (x, y, z), you can assume a z-value of 0 for the 2D point. That corresponds to the x-y plane.
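As a small illustration of that idea (plain C++, with an illustrative function name), a distance function can simply pad the shorter vector with zeros:

    // Small illustration of the padding idea: compare vectors of different
    // dimensions by treating any missing component as 0.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    double distancePadded(const std::vector<double>& a, const std::vector<double>& b) {
        const std::size_t n = std::max(a.size(), b.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double ai = i < a.size() ? a[i] : 0.0;   // missing dimension -> 0
            double bi = i < b.size() ? b[i] : 0.0;
            sum += (ai - bi) * (ai - bi);
        }
        return std::sqrt(sum);
    }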

Parallelization of neighborhood point deletion

I am implementing the Good Features To Track/Shi-Tomasi corner detection algorithm on CUDA and need to find a way to parallelize the following part of the algorithm:
I start with an array of points obtained from an image sorted according to a certain intensity value (an eigenvalue of a previous calculation).
Starting with the first point of the array, I remove any point in the array that is within a certain physical distance of the first point. (This distance is calculated on the image plane, not on the array).
On the resulting array, we repeat step two for the remaining points.
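In serial form, the procedure is roughly the following (a sketch with illustrative names, not actual CUDA code):

    // Serial sketch of the suppression step described above (types and names are
    // illustrative only). 'points' is assumed to be sorted by decreasing corner
    // response.
    #include <vector>

    struct Corner { float x, y; };

    std::vector<Corner> suppressByDistance(const std::vector<Corner>& points,
                                           float minDist) {
        std::vector<Corner> kept;
        for (const Corner& p : points) {
            bool tooClose = false;
            for (const Corner& q : kept) {   // compare only against survivors
                float dx = p.x - q.x, dy = p.y - q.y;
                if (dx * dx + dy * dy < minDist * minDist) { tooClose = true; break; }
            }
            if (!tooClose) kept.push_back(p);
        }
        return kept;
    }

Keeping a point only when it survives against the already-kept points is equivalent to the remove-then-repeat formulation above.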
Is this somehow parallelizable, specifically on CUDA? I'm suspecting not, since there will obviously be dependencies across the image.
I think the article Accelerated Corner-Detector Algorithms describes the way to solve this problem.