The longest arithmetic progression of a set of numbers {a1, a2, a3, ..., an} is defined as a subset {b1, b2, b3, ..., bm} such that b(i+1) - b(i) is constant.
I would like to extend this problem to a set of two-dimensional points lying on a straight line.
Let's define Dist(P1, P2), the distance between two points P1(X1, Y1) and P2(X2, Y2) on a line, as
Dist(P1, P2) = Dist((X1, Y1), (X2, Y2)) = (X2 - X1)^2 + (Y2 - Y1)^2
Now, for a given set of points, I need to find the largest arithmetic progression such that Dist(Pi, P(i+1)) is constant, assuming they all lie on the same line (i.e. m and c of y = mx + c are constant).
I researched a bit but could not figure out an algorithm better than O(n^2).
In fact, what I am currently doing is maintaining a dictionary, say
DistDict = dict()
and say Points are defined in a List as
Points = [(X1,Y1),(X2,Y2),....]
then this is what I am doing
def dist(p, q):  # squared distance, per the definition above
    return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2

for i, pi in enumerate(Points):
    for pj in Points[i + 1:]:
        DistDict.setdefault(dist(pi, pj), set()).add((pi, pj))
So all I end up with is a dictionary of point pairs that lie at equal distances. The only thing left to do is scan through it to find the longest set.
I can't help feeling this ought to have a better solution, but somehow I can't figure one out. I have also seen a couple of similar older SO posts, but none of them gives anything more efficient than O(n^2). Is this somehow an NP-hard problem, so that we can never do better, or if not, what approach could be taken?
Please note I came across a post which claims an efficient divide-and-conquer algorithm, but I couldn't make head or tail of it.
Any help in this regard?
Note: I am tagging this Python because I understand Python better than, say, Matlab or Ruby. C/C++/Java is also fine, as I am somewhat proficient in those too :-)
To sum things up: as @TonyK pointed out, if you assume that the points lie on a straight line, you can reduce the problem to the one-dimensional case that has already been discussed extensively here. The solution uses Fast Fourier Transforms, as mentioned by @YochaiTimmer.
Additional note: the problem is almost certainly not NP-hard, since it has an efficient O(n log n) solution; if it were NP-hard, that would imply P = NP.
You could study the Fast Fourier Transform methods for multiplication, which run in O(N log N).
You might be able to do something similar with your problem.
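For intuition, here is a hedged sketch of how the FFT idea applies to the one-dimensional case: convolving the set's indicator vector with itself counts, for every sum s, the ordered pairs (a, b) with a + b = s, so you can read off which values are the middle of a 3-term progression. This assumes small non-negative integer values, and extending it to the longest progression needs further work:

import numpy as np

def ap_middles(values):
    # values that are the middle of some 3-term AP (non-negative ints)
    values = set(values)
    m = max(values)
    f = np.zeros(m + 1)
    f[list(values)] = 1.0
    size = 1
    while size < 2 * (m + 1):  # pad so circular convolution equals linear
        size *= 2
    conv = np.fft.irfft(np.fft.rfft(f, size) ** 2, size)
    # conv[2x] counts ordered pairs summing to 2x; the trivial pair (x, x)
    # contributes exactly 1, and any genuine pair (a, b), a != b, adds 2
    return [x for x in values if conv[2 * x] > 1.5]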
Firstly, your definition of distance is wrong. You have to take the square root. Secondly, if you know that all the points lie on a straight line, you can simply ignore the y-coordinates (unless the line is vertical) or the x-coordinates (unless the line is horizontal). Then it reduces to the problem in your first paragraph.
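To make the reduction concrete, here is a hedged sketch: project onto the x-coordinates (assuming a non-vertical line and distinct x values) and run the classic O(n^2) dynamic program for the one-dimensional longest-AP problem, where L[i][j] is the length of the longest progression whose first two terms are xs[i] and xs[j]:

def longest_ap_on_line(points):
    xs = sorted(x for x, _ in points)  # the reduction: drop the y-coordinates
    n = len(xs)
    if n <= 2:
        return n
    L = [[2] * n for _ in range(n)]    # any two values form a 2-term AP
    best = 2
    for j in range(n - 2, 0, -1):      # try xs[j] as the middle element
        i, k = j - 1, j + 1
        while i >= 0 and k < n:
            s = xs[i] + xs[k]
            if s < 2 * xs[j]:
                k += 1
            elif s > 2 * xs[j]:
                i -= 1
            else:                      # xs[i], xs[j], xs[k] are in AP
                L[i][j] = L[j][k] + 1
                best = max(best, L[i][j])
                i -= 1
                k += 1
    return best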
Related
I have:
- a set of points of known size (in my case, only 6 points)
- a line characterized by x = s + t * r, where x, s and r are 3D vectors
I need to find the point closest to the given line. The actual distance does not matter to me.
I had a look at several different questions that seem related (including this one), and I know how to solve this on paper from my high-school math classes. But I cannot find a solution without calculating every distance, and I am sure there has to be a better/faster way. Performance is absolutely crucial in my application.
One more thing: all numbers are integers (the coordinates of the points and the elements of the s and r vectors). Again, for performance reasons I would like to keep floating-point math to a minimum.
You have to process every point at least once to know its distance. Unless you want to repeat the process many times with different lines, simply computing the distance of every point is unavoidable, so the algorithm has to be O(n).
Since you don't care about the actual distance, we can simplify the point-distance computation. The exact distance is computed by (source):
d^2 = |r⨯(p-s)|^2 / |r|^2
where ⨯ is the cross product and |r|^2 is the squared length of the vector r. Since |r|^2 is constant for all points, we can omit it from the distance computation without changing the result:
d^2 = |r⨯(p-s)|^2
Compare these approximated squared distances and keep the minimum. The advantage of this formula is that you can do everything with integers, since you mentioned that all coordinates are integers.
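As a hedged illustration of that formula (the function and variable names are mine, not from the question), everything stays in integer arithmetic:

def cross(a, b):  # 3-D cross product of integer tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def closest_to_line(points, s, r):
    def score(p):  # |r x (p - s)|^2, proportional to the squared distance
        d = (p[0] - s[0], p[1] - s[1], p[2] - s[2])
        c = cross(r, d)
        return c[0] ** 2 + c[1] ** 2 + c[2] ** 2
    return min(points, key=score)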
I'm afraid you can't get away with computing fewer than 6 distances (if you could, at least one point would be left out, and that might be the nearest one).
See if it makes sense to preprocess: Is the line fixed and the points vary? Consider rotating coordinates to make the line horizontal.
As there are few points, it is doubtful that this is your bottleneck. Measure where the hot spots are, redesign algorithms/data representation, spice up compiler optimization, compile to assembly and bum that. Strictly in that order.
Jon Bentley's "Writing Efficient Programs" (sadly long out of print) and "Programming Pearls" (2nd edition) are full of advice on practical programming.
I have a list of points with x,y coordinates:
List_coord = [(462, 435), (491, 953), (617, 285), (657, 378)]
This list's length (4 elements here) can be very large, from a few hundred up to 35,000 elements.
I want to remove the points in this list that are too close to each other, as defined by a threshold.
Note: points are never at the exact same position.
My current code for that:
iteration = 0
while iteration < 5:
    for pt in List_coord:
        for PT in List_coord:
            if (abs(pt[0] - PT[0]) + abs(pt[1] - PT[1])) != 0 and abs(pt[0] - PT[0]) < threshold and abs(pt[1] - PT[1]) < threshold:
                List_coord.remove(PT)
    iteration = iteration + 1
Explanation of my terrible code :) :
I check whether the distance is 0, which means I am comparing the same point;
then I check the distance in x and in y.
Iteration:
I need a few iterations to avoid missing a removal, because the list changes inside the loop itself...
This code works, but it is a very slow process!
I am sure there is a much easier method, but I wasn't able to find one, even though some already-answered questions are close to mine.
Note: I would like to avoid using an extra library for this code if possible.
Python will be a bit slow at this ;-)
The solution you will probably want is called quad-trees, but I'll mention a simpler approach first, in case it's preferable.
The usual approach is to group the points so that you can easily reject points that are clearly far away from each other.
One approach might be to sort the list twice, once by x and once by y. You can prove that if two points are too close, they must be close in one dimension or the other. Thus your inner loop can break out early: if it sees a point that is too far away from the outer point in the sorted direction, it knows for a fact that all later points in that list are also too far away, so it doesn't have to look any further. Do this in X and Y and you're set!
This approach is going to tend to be dominated by the O(n log n) sort times. However, if all of your points share a single x value, you'll end up doing the same slow O(n^2) iteration that you're doing right now because you never terminate the inner loop early.
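A hedged sketch of that early-exit idea, using a single sort by x and scanning backwards over the points kept so far (the same worst case applies when many points share an x value):

def remove_close(points, threshold):
    pts = sorted(points)          # sort by x, then y
    kept = []
    for p in pts:
        too_close = False
        for q in reversed(kept):  # kept is in ascending x order
            if p[0] - q[0] >= threshold:
                break             # all earlier points are even farther in x
            if abs(p[1] - q[1]) < threshold:
                too_close = True
                break
        if not too_close:
            kept.append(p)
    return kept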
The more robust solution is to use quadtrees. Quadtrees are designed to solve the kind of problem you are looking at. The idea is to build a tree such that you can rapidly exclude large numbers of points. I'd recommend this.
If your number of points gets too large, I'd recommend getting a clustering library. Efficient clustering is a very difficult task, and often done in C++ or another fast language.
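For completeness, here is a simpler cousin of the quadtree idea that works here because the threshold is fixed (a hedged sketch of grid bucketing, named plainly since it is not quite what the answer above recommends): put points into a uniform grid of threshold-sized cells, so any too-close pair must land in the same or an adjacent cell:

def remove_close_grid(points, threshold):
    grid = {}                     # (cell_x, cell_y) -> points kept in that cell
    kept = []
    for x, y in points:
        cx, cy = int(x // threshold), int(y // threshold)
        conflict = False
        for nx in (cx - 1, cx, cx + 1):
            for ny in (cy - 1, cy, cy + 1):
                for qx, qy in grid.get((nx, ny), ()):
                    if abs(x - qx) < threshold and abs(y - qy) < threshold:
                        conflict = True
        if not conflict:
            kept.append((x, y))
            grid.setdefault((cx, cy), []).append((x, y))
    return kept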
I have a vector of pointers to a very simple Point class:
class Point{
public:
float x;
float y;
float z;
};
How do I find the closest object to a reference point using the STL?
Do I need to sort the vector first, or is there a more efficient way?
Sorting takes O(N log N), so it's not very efficient. You can do it in O(N) by just iterating through the elements and remembering the best match.
Using for_each from <algorithm>, you can define a function that keeps track of the closest element and completes in O(N).
Or, you can probably even use min_element, also from <algorithm>.
The basic question here is how often you'll be doing queries against a single set of points.
If you're going to find one nearest point in the set one time, then @Luchian is right: you might as well leave the points unsorted and do a linear search to find the right point.
If you'll do a relatively large number of queries against the same set of points, it's worthwhile to organize the point data to improve query speed. @izomorphius has already mentioned a k-d tree, and that's definitely a good suggestion. Another possibility (admittedly, quite similar) is an oct-tree. Between the two, I find an oct-tree quite a bit easier to understand. In theory, a k-d tree should be slightly more efficient (on average), but I've never seen much difference -- though perhaps with different data the difference would become significant.
Note, however, that building something like a k-d tree or oct-tree isn't terribly slow, so you don't need to do an awful lot of queries against a set of points to justify building one. One query clearly doesn't justify it, and two probably won't either -- but contrary to what Luchian implies, O(N log N) (just for example) isn't really very slow. Roughly speaking, log(N) is the number of digits in the number N, so O(N log N) isn't really a whole lot slower than O(N). That, in turn, means you don't need a particularly large number of queries to justify organizing the data to speed up each one.
You cannot go faster than a linear comparison if you only know that there are points in a vector. However, if you have additional knowledge, a lot can be improved. For instance, if you know all the points are ordered and lie on the same line, there is a logarithmic solution.
Also, there are better data structures to solve your problem, for instance a k-d tree. It is not part of the STL and you will have to implement it yourself, but it is THE data structure for the problem you have.
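To give a feel for it, here is a minimal k-d tree sketch (in Python for brevity, though the question is C++; an illustration under those assumptions, not a drop-in implementation): build once, then query nearest neighbours with subtree pruning:

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 3                        # cycle through x, y, z
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    if node is None:
        return best
    point, left, right = node
    d2 = sum((p - t) ** 2 for p, t in zip(point, target))
    if best is None or d2 < best[0]:
        best = (d2, point)
    axis = depth % 3
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    if diff * diff < best[0]:               # far side may still hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best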
You can try to use a quadtree (http://en.wikipedia.org/wiki/Quadtree) or something similar.
The problem:
N nodes are related to each other by a 'closeness' factor ranging from 0 to 1, where a factor of 1 means that the two nodes have nothing in common and 0 means the two nodes are exactly alike.
If two nodes are both close to another node (i.e. they have a factor close to 0) then this doesn't mean that they will be close together, although probabilistically they do have a much higher chance of being close together.
The question:
If another node is placed in the set, find the node that it is closest to in the shortest possible amount of time.
This isn't a homework question; it is a real-world problem that I need to solve, but I've never taken any algorithms courses, so I don't have a clue what sort of algorithm I should be researching.
I can index all of the nodes before another one is added and gather closeness data between each pair of nodes, but short of comparing all nodes to the new node I haven't been able to come up with an efficient solution. Any ideas or help would be much appreciated :)
Because your 'closeness' metric obeys the triangle inequality, you should be able to use a variant of BK-trees to organize your elements. Adapting them to real numbers should simply be a matter of choosing an interval on which to quantize your numbers, and otherwise using the standard BK-tree procedure. Some experimentation may be required; you might want to increase the resolution of the quantization as you progress down the tree, for instance.
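A hedged sketch of what that might look like, assuming a user-supplied closeness(a, b) function and an arbitrary choice of 100 quantization buckets:

class BKNode:
    def __init__(self, item):
        self.item = item
        self.children = {}        # quantized distance -> child node

def quantize(d, buckets=100):
    return int(d * buckets)

def insert(root, item, closeness):
    d = quantize(closeness(root.item, item))
    if d in root.children:
        insert(root.children[d], item, closeness)
    else:
        root.children[d] = BKNode(item)

def nearest(root, query, closeness, best=None):
    d = closeness(root.item, query)
    if best is None or d < best[0]:
        best = (d, root.item)
    qd = quantize(d)
    for cd, child in root.children.items():
        # triangle inequality prunes subtrees; the +1 absorbs rounding
        if abs(cd - qd) <= quantize(best[0]) + 1:
            best = nearest(child, query, closeness, best)
    return best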
"but short of comparing all nodes to the new node I haven't been able to come up with an efficient solution"
Without any other information about the relationships between nodes, this is the only way you can do it, since you have to figure out the closeness factor between the new node and each existing node. An O(n) algorithm can be a perfectly decent solution.
One addition you might consider - keep in mind we have no idea what data structure you are using for your objects - is to organize all present nodes into a graph, where nodes with factors below a certain threshold can be considered connected, so you can first check nodes that are more likely to be similar/related.
If you want the optimal algorithm in terms of speed, but O(n^2) space, then for each node create a sorted list of other nodes (ordered by closeness).
When you get a new node, you have to add it to the sorted list of every other node, and every other node needs to be added to its list.
To find the node closest to a given node, just take the first entry on that node's list.
Since you already need O(n^2) space (to store all the closeness information you basically need an NxN matrix, where A[i,j] represents the closeness between i and j), you might as well sort it and get O(1) retrieval.
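A hedged sketch of the bookkeeping, assuming nodes are simple comparable labels and closeness(a, b) is given:

import bisect

def add_node(lists, nodes, new, closeness):
    lists[new] = []
    for other in nodes:
        bisect.insort(lists[other], (closeness(other, new), new))
        bisect.insort(lists[new], (closeness(new, other), other))
    nodes.append(new)

def closest(lists, node):
    # each list is sorted by closeness, so the best match is first
    return lists[node][0][1] if lists[node] else None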
If this closeness forms a linear spectrum (such that being close to something implies being close to the things it is close to, and not being close implies not being close to them), then you can simply do a binary or interpolation search when inserting for closeness, handling one extra complexity: at each point you have to see whether closeness increases or decreases below or above.
For example, if we consider letters (A is close to B but far from Z) then the pre-existing elements can be kept sorted, say: A, B, E, G, K, M, Q, Z. To insert, say, 'F', you start by comparing with the middle element, [3] G, and the one following it, [4] K. You establish that F is closer to G than to K, so the best match is either at G or to its left, and we move halfway into the unexplored region to the left (index 3/2 = [1]): B, followed by E, and we find E is closer to F, so the match is either at E or to its right. Halving the space between our earlier checks at [3] and [1], we test at [2] and find it equidistant, so we insert F in between.
EDIT: it may work better in probabilistic situations, and require fewer comparisons, to start at the ends of the spectrum and work your way in (e.g. compare F to A and Z, decide it's closer to A, then see whether A or the halfway point [3] G is closer). Also, it might be good to finish with a comparison to the closest few points on either side of where the binary/interpolation search led you.
ACM Surveys September 2001 carried two papers that might be relevant, at least for background. "Searching in Metric Spaces", lead author Chavez, and "Searching in High Dimensional Spaces - Index Structures for Improving the Performance of Multimedia Databases", lead author Bohm. From memory, if all you have is the triangle inequality, you can use it to some effect, but if you can trim your data down to a sensible number of dimensions, you can do better by using a search structure that knows about this dimensional structure.
Facebook has this thing where it puts you and all of your friends in a graph, then slowly moves everyone around until people are grouped together based on mutual friends and so on.
It looked to me like they just made anything <0.5 an attractive force, anything >0.5 a repulsive force, and moved people with every iteration based on the net force. After a couple hundred iterations, it was looking pretty darn good.
Note: this is not an algorithm, it is a heuristic. In the Facebook implementation I saw, two people were not able to reach equilibrium and kept dancing around each other. It turned out they were actually the same person with two different accounts.
Also, it took about 15 minutes on a decent computer and ~100 nodes. YMMV.
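A rough sketch of that heuristic as described (a toy illustration; the parameters, and closeness(i, j) returning the factor for nodes i and j, are my guesses, not Facebook's actual implementation):

import random

def layout(n, closeness, iterations=200, step=0.01):
    pos = [(random.random(), random.random()) for _ in range(n)]
    for _ in range(iterations):
        new_pos = []
        for i in range(n):
            fx = fy = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                sign = 1.0 if closeness(i, j) < 0.5 else -1.0  # attract or repel
                fx += sign * dx
                fy += sign * dy
            new_pos.append((pos[i][0] + step * fx, pos[i][1] + step * fy))
        pos = new_pos
    return pos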
It looks suspiciously like a Nearest Neighbor Search problem (also called a similarity search).
Click here to view the problem.
I can't come up with a solution better than O(n^2), but with n <= 500,000 that won't work!
My idea is to sort them by (beauty + intellect + richness) and test each of them against those after it.
Please help!
If you restrict the problem to two attributes (e.g. only B_i and R_i, just for illustration purposes), you can think of these attributes as points in a 2D plane. For each point (corresponding to a Lady) you'll have to count the number of points in the (semi-infinite) rectangle 'right and above' the given point.
I think a faster-than-O(n^2) solution would involve a range tree, although I have not thought through the details. See also the illustration here.
EDIT: and you would store with each node (or update while building) the number of points below it, so you can, e.g., easily get the number of points below or above the splitting point of a given node.
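A hedged sketch of the two-attribute case, using a Fenwick (binary indexed) tree in place of a full range tree, and assuming distinct coordinate values for brevity: sweep x in decreasing order and, for each point, count already-seen points with a larger y:

def count_dominating(points):
    ys = sorted(y for _, y in points)
    rank = {y: i + 1 for i, y in enumerate(ys)}    # 1-based y-ranks
    tree = [0] * (len(points) + 1)

    def add(i):
        while i < len(tree):
            tree[i] += 1
            i += i & -i

    def count_le(i):          # how many inserted points have y-rank <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    counts = {}
    inserted = 0
    for x, y in sorted(points, reverse=True):      # decreasing x
        # all inserted points have a larger x; those with y-rank > rank[y]
        # lie strictly right-and-above
        counts[(x, y)] = inserted - count_le(rank[y])
        add(rank[y])
        inserted += 1
    return counts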
I think you can solve this in O(n log n) by considering each lady as a point in 3-space and computing the convex hull of the points (see, e.g., here). Then, any point inside the hull is a potential suicide case.
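A hedged sketch of the hull computation this answer suggests, using scipy (an external dependency; note ConvexHull raises an error on degenerate input, e.g. if all points are coplanar):

import numpy as np
from scipy.spatial import ConvexHull

def interior_candidates(points):
    # indices of ladies whose (B, I, R) point is not a vertex of the hull
    pts = np.asarray(points, dtype=float)
    on_hull = set(ConvexHull(pts).vertices)
    return [i for i in range(len(pts)) if i not in on_hull]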