In a coordinate plane, you are given a set of points, say 10 points; let's consider them to be integers for the sake of simplicity. How do I find out whether a square lies within these 10 points? ... and if not, how many points have to be added to this set to have at least one square?
Just use brute force. For each pair of points in the set, work out where the other two corners of a square on that side would have to lie, and check whether points close enough to those corners are in the set. If the coordinates are integers this is really simple (although with quadratic complexity, assuming constant time for the point lookup); with floating point it is a bit less simple, since you need a tolerance for "close enough", and probably a little higher complexity.
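For the integer case, a minimal sketch of this brute force could look like the following; the `Pt` struct and `containsSquare` name are just illustrative, and only exact integer matches are handled:

```cpp
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

struct Pt { int64_t x, y; };

// Returns true if any four points in `pts` form a square (axis-aligned or tilted).
// For each ordered pair (a, b) taken as one side, the remaining two corners are
// fixed by rotating the side vector by 90 degrees; we just look them up in a set.
// Quadratic in the number of points, with logarithmic lookup here.
bool containsSquare(const std::vector<Pt>& pts) {
    std::set<std::pair<int64_t, int64_t>> lookup;
    for (const Pt& p : pts) lookup.insert({p.x, p.y});

    for (size_t i = 0; i < pts.size(); ++i) {
        for (size_t j = 0; j < pts.size(); ++j) {
            if (i == j) continue;
            const Pt& a = pts[i];
            const Pt& b = pts[j];
            int64_t dx = b.x - a.x, dy = b.y - a.y;   // side vector a -> b
            if (dx == 0 && dy == 0) continue;         // skip coincident points
            // Rotate (dx, dy) by +90 degrees to get the perpendicular offset;
            // iterating ordered pairs covers the square on the other side too.
            Pt c{b.x - dy, b.y + dx};
            Pt d{a.x - dy, a.y + dx};
            if (lookup.count({c.x, c.y}) && lookup.count({d.x, d.y}))
                return true;
        }
    }
    return false;
}
```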
Related
I have:
- a set of points of known size (in my case, only 6 points)
- a line characterized by x = s + t * r, where x, s and r are 3D vectors
I need to find the point closest to the given line. The actual distance does not matter to me.
I had a look at several different questions that seem related (including this one) and I know how to solve this on paper from my high school math classes. But I cannot find a solution without calculating every distance, and I am sure there has to be a better/faster way. Performance is absolutely crucial in my application.
One more thing: All numbers are integers (coordinates of points and elements of s and r vectors). Again, for performance reasons I would like to keep the floating-point math to a minimum.
You have to process every point at least once to know its distance. Unless you want to repeat the process many times with different lines, simply computing the distance of every point is unavoidable, so the algorithm cannot do better than O(n).
Since you don't care about the actual distance, we can simplify the point-to-line distance computation. The exact squared distance is computed by (source):
d^2 = |r⨯(p-s)|^2 / |r|^2
where ⨯ is the cross product and |r|^2 is the squared length of vector r. Since |r|^2 is constant for all points, we can omit it from the distance computation without changing the result:
d^2 = |r⨯(p-s)|^2
Compare these scaled squared distances and keep the minimum. The advantage of this formula is that you can do everything with integers, since you mentioned that all coordinates are integers.
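As a rough illustration (the `Vec3` struct and `closestToLine` name are assumptions, not from the question), everything stays in 64-bit integer arithmetic; with very large coordinates you would have to watch for overflow:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

struct Vec3 { int64_t x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static int64_t dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Index of the point nearest to the line x = s + t * r, comparing only
// |r x (p - s)|^2, which differs from the true squared distance by the
// constant factor |r|^2 and therefore yields the same minimum.
size_t closestToLine(const std::vector<Vec3>& points, const Vec3& s, const Vec3& r) {
    size_t best = 0;
    int64_t bestVal = std::numeric_limits<int64_t>::max();
    for (size_t i = 0; i < points.size(); ++i) {
        Vec3 c = cross(r, sub(points[i], s));
        int64_t val = dot(c, c);            // squared length, all integer math
        if (val < bestVal) { bestVal = val; best = i; }
    }
    return best;
}
```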
I'm afraid you can't get away with computing fewer than 6 distances (if you could, at least one point would never be looked at -- and it might be the nearest one).
See if it makes sense to preprocess: Is the line fixed and the points vary? Consider rotating coordinates to make the line horizontal.
As there are so few points, it is doubtful that this is your bottleneck. Measure where the hot spots are, redesign algorithms/data representation, turn up compiler optimization, compile to assembly and hand-tune ("bum") that. Strictly in that order.
Jon Bentley's "Writing Efficient Programs" (sadly long out of print) and "Programming Pearls" (2nd edition) are full of advice on practical programming.
I have two arrays, currPoints and prevPoints. They are not necessarily the same size. For each element in currPoints I want to find the value in prevPoints that is closest to it and replace that value with the one from currPoints.
Example:
prevPoints{2,5,10,13,84,22}
currPoints{1,15,9,99}
After applying the algorithm
prevPoints{1,5,9,15,99,22}
So what is the best algorithm/method for this? It needs to be fast.
Context: If it helps, I am trying to work on a tracking algorithm that takes points from two consecutive frames in a video and tries to figure out which points in the first frame correspond to points in the second frame. I hope to track objects and tag them with an ID this way. Speed is crucial as processing is to be done in realtime.
You need to sort both arrays first. But remember the original order of the prevPoints array, as you need to restore it again at the end.
So after sorting:
prevPoints{2,5,10,13,22,84}
currPoints{1,9,15,99}
Now you basically need to figure out which of the currPoints should get into prevPoints. The algorithm will be similar to merging 2 sorted arrays, except that you don't merge; instead you replace values.
Initially both pointers are at the start of the corresponding arrays. 1 from currPoints should replace 2 in prevPoints, because the value in currPoints is smaller and you know the later values in prevPoints will only be higher than 2 (sorted array, remember). Replace and move both pointers on.
Now the currPointer is at 9 and the prevPointer is at 5. Calculate the absolute difference and keep track of the minimum absolute difference seen so far, together with the value that produced it (4 in this case). Move the prevPointer forward, as the currPointer is pointing to a higher value.
Now the prevPointer is at 10 and the currPointer at 9. 9 is less than 10, so a replacement has to happen here. As this absolute difference is smaller than the earlier one (1 < 4), 10 gets replaced by 9.
Now the prevPointer is at 13 and the currPointer at 15.
Proceed in the same fashion.
Finally, rearrange the prevPoints array back into its original order.
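A minimal sketch of that two-pointer pass (names are mine; it assumes both arrays are non-empty and does not handle ties or the case where currPoints has more elements than prevPoints):

```cpp
#include <algorithm>
#include <cstdlib>
#include <numeric>
#include <vector>

void replaceClosest(std::vector<int>& prevPoints, std::vector<int> currPoints) {
    // Sort prevPoints indirectly so the original order can be kept.
    std::vector<size_t> order(prevPoints.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](size_t a, size_t b) { return prevPoints[a] < prevPoints[b]; });
    std::sort(currPoints.begin(), currPoints.end());

    size_t p = 0;
    for (int c : currPoints) {
        // Advance while the next prev value is at least as close to c.
        while (p + 1 < order.size() &&
               std::abs(prevPoints[order[p + 1]] - c) <=
               std::abs(prevPoints[order[p]] - c))
            ++p;
        prevPoints[order[p]] = c;          // replace in the original array
        if (p + 1 < order.size()) ++p;     // each slot is used at most once
    }
}
```

On the example arrays above this yields prevPoints{1,5,9,15,99,22}.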
Hope this helps!!!
We sort the first list by x position and the second list by y position, so each point has a position in each list. The way you do a nearest-neighbor search with this (at least what I came up with) is: find the position of the query point in each list through a binary search. That gives 4 directions to walk, +x, -x, +y and -y, and you travel in each of these directions until the gap in that single coordinate alone exceeds the best distance found so far.
So we search in each direction. Say the closest point found so far is at a distance of 25; then if the next candidate in the +X direction is more than 25 away in X alone, we can stop in that direction, because even if its change in Y were 0 it could not be closer.
This makes for a highly effective and quick O(n log n) closest-point search for a single query (building the two sorted lists dominates). And since the two sorted lists only need to be built once, after that setup every remaining point can find its nearest neighbor in something like log(n) time: find its position in the x-sorted list, find its position in the y-sorted list, then spiral out until the cutoff above triggers and you have certainly found the nearest point. Since the scaffolding is the same for every query, it ends up being quite quick.
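As a simplified illustration of that cutoff, here is a sketch that uses only the x-sorted list and walks left and right from the binary-search position; the full scheme would treat the y-sorted list the same way. The names (`Pt2`, `nearestByX`) are mine, and the list is assumed non-empty:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

struct Pt2 { int64_t x, y; };

static int64_t dist2(const Pt2& a, const Pt2& b) {
    int64_t dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// `byX` must be sorted by x and non-empty.
Pt2 nearestByX(const std::vector<Pt2>& byX, const Pt2& q) {
    auto it = std::lower_bound(byX.begin(), byX.end(), q,
                               [](const Pt2& a, const Pt2& b) { return a.x < b.x; });
    size_t mid = it - byX.begin();
    int64_t best = std::numeric_limits<int64_t>::max();
    Pt2 bestPt{0, 0};
    // Walk right while the x-gap alone cannot beat the best distance so far.
    for (size_t i = mid; i < byX.size(); ++i) {
        int64_t dx = byX[i].x - q.x;
        if (dx * dx >= best) break;
        int64_t d = dist2(byX[i], q);
        if (d < best) { best = d; bestPt = byX[i]; }
    }
    // Walk left with the same cutoff.
    for (size_t i = mid; i-- > 0; ) {
        int64_t dx = q.x - byX[i].x;
        if (dx * dx >= best) break;
        int64_t d = dist2(byX[i], q);
        if (d < best) { best = d; bestPt = byX[i]; }
    }
    return bestPt;
}
```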
Though given your actual test case you might want to come up with something that is simply a very effective heuristic.
Simply matching the closest points seems really naive. If we are tracking the same thing from frame to frame, the distance a point travels from F1 to F2 should be roughly equal to the distance it travelled from F0 to F1. If we assume all these points are traveling in roughly straight lines, we can do a much better job than just taking closest points: we can estimate the paths the points are taking. If we guess a point's position in F2 by extrapolating from F0 and F1 and, lo and behold, there really is a point very close to that guess, then we can be quite sure we nailed that match.
Equally, one would assume that all the points belonging to one object travel in roughly the same direction. If each of a group of points moves by +5,+5 from F0 to F1, not only can we guess their positions in F2, we can also conclude rather confidently that those points make up the same object.
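A minimal sketch of that prediction idea, assuming roughly linear motion, that f0[i] and f1[i] are the same tracked point in the two previous frames, and that f2 is non-empty (all names are illustrative):

```cpp
#include <limits>
#include <vector>

struct P { double x, y; };

static double dist2(const P& a, const P& b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// For each tracked point, returns the index of the best-matching detection in f2.
std::vector<size_t> matchByPrediction(const std::vector<P>& f0,
                                      const std::vector<P>& f1,
                                      const std::vector<P>& f2) {
    std::vector<size_t> match(f1.size());
    for (size_t i = 0; i < f1.size(); ++i) {
        // Linear extrapolation: assume the same displacement happens again.
        P predicted{2 * f1[i].x - f0[i].x, 2 * f1[i].y - f0[i].y};
        double best = std::numeric_limits<double>::max();
        for (size_t j = 0; j < f2.size(); ++j) {
            double d = dist2(predicted, f2[j]);
            if (d < best) { best = d; match[i] = j; }
        }
    }
    return match;
}
```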
I'm looking at interpolating some contour lines to generate a 3D view. The contours are not stored in a picture; the coordinates of each point of a contour are simply stored in a std::vector.
For convex contours it seems (I didn't check it myself) that the height can be easily calculated (linear interpolation) by using the distance between the two closest points of the two closest contours.
But my contours are not necessarily convex, so it's more tricky... Actually I don't have any idea what kind of algorithm I can use.
UPDATE : 26 Nov. 2013
I finished writing a discrete Laplace example:
you can get the code here
What you have is basically the classical Dirichlet problem:
Given the values of a function on the boundary of a region of space, assign values to the function in the interior of the region so that it satisfies a specific equation (such as Laplace's equation, which essentially requires the function to have no arbitrary "bumps") everywhere in the interior.
There are many ways to calculate approximate solutions to the Dirichlet problem. A simple approach, which should be well suited to your problem, is to start by discretizing the system; that is, you take a finite grid of height values, assign fixed values to those points that lie on a contour line, and then solve a discretized version of Laplace's equation for the remaining points.
Now, what Laplace's equation actually specifies, in plain terms, is that every point should have a value equal to the average of its neighbors. In the mathematical formulation of the equation, we require this to hold true in the limit as the radius of the neighborhood tends towards zero, but since we're actually working on a finite lattice, we just need to pick a suitable fixed neighborhood. A few reasonable choices of neighborhoods include:
the four orthogonally adjacent points surrounding the center point (a.k.a. the von Neumann neighborhood),
the eight orthogonally and diagonally adjacent grid points (a.k.a. the Moore neighborhood), or
the eight orthogonally and diagonally adjacent grid points, weighted so that the orthogonally adjacent points are counted twice (essentially the sum or average of the above two choices).
(Out of the choices above, the last one generally produces the nicest results, since it most closely approximates a Gaussian kernel, but the first two are often almost as good, and may be faster to calculate.)
Once you've picked a neighborhood and defined the fixed boundary points, it's time to compute the solution. For this, you basically have two choices:
Define a system of linear equations, one per each (unconstrained) grid point, stating that the value at each point is the average of its neighbors, and solve it. This is generally the most efficient approach, if you have access to a good sparse linear system solver, but writing one from scratch may be challenging.
Use an iterative method, where you first assign an arbitrary initial guess to each unconstrained grid point (e.g. using linear interpolation, as you suggest) and then loop over the grid, replacing the value at each point with the average of its neighbors. Then keep repeating this until the values stop changing (much).
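A minimal sketch of the iterative option, assuming the contours have already been rasterized onto a width x height grid, where `fixed[i]` marks cells lying on a contour line and `h[i]` holds their (fixed) height. It uses the 4-point von Neumann neighborhood and a fixed iteration count for brevity; in practice you would iterate until the values stop changing much:

```cpp
#include <vector>

void relaxLaplace(std::vector<double>& h, const std::vector<bool>& fixed,
                  int width, int height, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                int i = y * width + x;
                if (fixed[i]) continue;   // contour cells keep their value
                // Each free cell becomes the average of its 4 neighbors
                // (Gauss-Seidel style: updated values are reused immediately).
                h[i] = 0.25 * (h[i - 1] + h[i + 1] + h[i - width] + h[i + width]);
            }
        }
    }
}
```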
You can generate the Constrained Delaunay Triangulation of the vertices and line segments describing the contours, then use the height defined at each vertex as a Z coordinate.
The resulting triangulation can then be rendered like any other triangle soup.
Despite the name, you can use TetGen to generate the triangulations, though it takes a bit of work to set up.
I get a point (x, y) in the 2-D plane as input. Now I have to check which quadrant it is in, do some reflections about the X-axis and Y-axis, and check again which quadrant it is in, repeatedly, a large number of times.
I have two approaches but am not sure which is better:
I can initially store the signs of x and y as booleans and then do boolean operations when reflecting, which makes this easy. To tell which quadrant the point is in, I just check whether the values are true or false.
Or I can take the normal approach with ints and then check the sign bit to find which quadrant the point is in.
Neither, just compare your coordinates to 0.
If you store them as boolean, besides losing information, you'll probably get some overhead because of the conversion.
If you check the first bit, it will be less readable.
The compiler will optimize these comparisons on its own; I doubt you'll gain anything from a different approach.
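For illustration, "just compare to 0" stays this simple; the struct and function names below are mine, not from the question:

```cpp
struct Point { int x, y; };

// Returns 1..4 for the usual quadrants; 0 if the point lies on an axis.
int quadrant(const Point& p) {
    if (p.x > 0 && p.y > 0) return 1;
    if (p.x < 0 && p.y > 0) return 2;
    if (p.x < 0 && p.y < 0) return 3;
    if (p.x > 0 && p.y < 0) return 4;
    return 0;
}

Point reflectAcrossX(Point p) { return {p.x, -p.y}; }   // reflection about the X-axis
Point reflectAcrossY(Point p) { return {-p.x, p.y}; }   // reflection about the Y-axis
```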
I need to find, for each point of the data set, all its nearest neighbors. The data set contains approx. 10 million 2D points. The data are close to a grid, but do not form a precise grid...
In my opinion this rules out the use of k-d trees, where the basic assumption is that no two points share the same x coordinate or the same y coordinate.
I need a fast algorithm, O(n) or better (but not too difficult to implement :-)), to solve this problem... Since boost is not standardized, I do not want to use it...
Thanks for your answers or code samples...
I would do the following:
Create a larger grid on top of the points.
Go through the points linearly, and for each one of them, figure out which large "cell" it belongs to (and add the points to a list associated with that cell).
(This can be done in constant time for each point, just do an integer division of the coordinates of the points.)
Now go through the points linearly again. To find the 10 nearest neighbors of a point, you only need to look at the points in its own large cell and in the adjacent large cells.
Since your points are fairly evenly scattered, you can do this in time proportional to the number of points in each (large) cell.
The cells must be large enough for the center cell and its adjacent cells to contain the closest 10 points, but small enough to speed up the computation. You can see it as a "hash function" where you'll find the closest points in the same or a neighboring bucket.
(Note that strictly speaking this is not O(n), but by tweaking the size of the larger cells you should get close enough. :-)
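A rough sketch of that bucketing (all names are illustrative; `cellSize` has to be tuned as described above):

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct P2 { double x, y; };

using Cell = std::pair<long long, long long>;

static Cell cellOf(const P2& p, double cellSize) {
    return { (long long)std::floor(p.x / cellSize),
             (long long)std::floor(p.y / cellSize) };
}

// Pass 1: bucket every point index by the large cell it falls into.
std::map<Cell, std::vector<size_t>> buildGrid(const std::vector<P2>& pts, double cellSize) {
    std::map<Cell, std::vector<size_t>> grid;
    for (size_t i = 0; i < pts.size(); ++i)
        grid[cellOf(pts[i], cellSize)].push_back(i);
    return grid;
}

// Pass 2: the neighbor candidates for pts[i] are everything in its own cell
// and the 8 adjacent cells; run the exact k-nearest selection on these only.
std::vector<size_t> candidates(const std::vector<P2>& pts,
                               const std::map<Cell, std::vector<size_t>>& grid,
                               size_t i, double cellSize) {
    Cell c = cellOf(pts[i], cellSize);
    std::vector<size_t> out;
    for (long long dx = -1; dx <= 1; ++dx)
        for (long long dy = -1; dy <= 1; ++dy) {
            auto it = grid.find({c.first + dx, c.second + dy});
            if (it != grid.end())
                out.insert(out.end(), it->second.begin(), it->second.end());
        }
    return out;
}
```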
I have used a library called ANN (Approximate Nearest Neighbour) with great success. It does use a Kd-tree approach, although there was more than one algorithm to try. I used it for point location on a triangulated surface. You might have some luck with it. It is minimal and was easy to include in my library just by dropping in its source.
Good luck with this interesting task!