I recently added the third version of Dijkstra's single-source shortest-path algorithm to my project.
I realize that there are many different implementations, which vary strongly in performance and also vary in the quality of the result on large graphs. With my data set (> 100,000 vertices) the runtime varies from 20 minutes to a few seconds. The shortest paths also vary by 1-2%.
Which is the best implementation you know?
EDIT:
My data is a hydraulic network, with 1 to 5 edges per node. It's comparable to a street map. I made some modifications to an already accelerated algorithm (using a sorted list for all remaining nodes) and now get the same results in a fraction of the time. I have been searching for such a thing for quite a while. I wonder if such an implementation already exists.
I cannot explain the slight differences in results. I know that Dijkstra is not a heuristic, but all the implementations seem to be correct. The faster solutions are the ones that produce the shorter paths. I use double-precision math exclusively.
EDIT 2:
I found out that the differences in the paths found are indeed my fault. I had inserted special handling for some vertices (only valid in one direction) and forgot about that in the other implementation.
BUT I'm still more than surprised that Dijkstra can be accelerated dramatically by the following change:
In general, a Dijkstra implementation contains a loop like:
MyListType toDoList; // list sorted by smallest distance
InsertAllNodes(toDoList); // every node of the graph is inserted up front
while (!toDoList.empty())
{
    MyNodeType *node = *toDoList.first(); // node with the smallest distance
    toDoList.erase(toDoList.first());
    ...
}
If you change this a little bit, it works the same, but performs better:
MyListType toDoList; // list sorted by smallest distance
toDoList.insert(startNode); // only the start node goes in up front
while (!toDoList.empty())
{
    MyNodeType *node = *toDoList.first(); // node with the smallest distance
    toDoList.erase(toDoList.first());
    for (MyNeighborType *x = node->Neighbors; x != NULL; x++)
    {
        ...
        toDoList.insert(x->Node); // nodes enter the list only once they are reached
    }
}
It seems that this modification reduces the runtime not just by an order of magnitude, but by an order of exponent. It reduced my runtime from 30 seconds to less than 2. I cannot find this modification in any literature. It's also very clear that the reason lies in the sorted list: insert/erase performs much worse with 100,000 elements than with a handful.
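For comparison, here is a rough sketch of how the same idea looks when the sorted list is replaced by a std::priority_queue (a binary heap). The Graph/Edge layout below is only illustrative and not my actual MyListType code:

#include <limits>
#include <queue>
#include <utility>
#include <vector>

using Edge = std::pair<int, double>;             // (neighbor index, edge length)
using Graph = std::vector<std::vector<Edge>>;    // adjacency list

std::vector<double> dijkstra(const Graph &g, int start)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    // min-heap of (distance, node); only reached nodes are ever inserted
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> toDo;
    dist[start] = 0.0;
    toDo.push({0.0, start});
    while (!toDo.empty()) {
        auto [d, node] = toDo.top();
        toDo.pop();
        if (d > dist[node]) continue;            // stale entry, already improved
        for (auto [next, len] : g[node]) {
            if (dist[node] + len < dist[next]) {
                dist[next] = dist[node] + len;
                toDo.push({dist[next], next});   // insert only when reached/improved
            }
        }
    }
    return dist;
}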
ANSWER:
After a lot of googling I found it myself. The answer is clearly:
boost graph lib. Amazing - I had not found this for quite a while. If you think that there is no performance variation between Dijkstra implementations, see Wikipedia.
The best implementations known for road networks (> 1 million nodes) have query times expressed in microseconds. For more details, see the 9th DIMACS Implementation Challenge (2006). Note that these are not simply Dijkstra, of course, as the whole point was to get results faster.
Maybe I am not answering your question. My point is: why use Dijkstra when there are much more efficient algorithms for your problem? If your graph fulfills the triangle inequality (it is a Euclidean graph)
|ab| + |bc| >= |ac|
(the distance from node a to node b plus the distance from node b to node c is at least the distance from node a to node c), then you can apply the A* algorithm.
This algorithm is pretty efficient. Otherwise consider using heuristics.
The implementation is not the major issue. The algorithm to be used does matter.
Two points I'd like to make:
1) Dijkstra vs A*
Dijkstra's algorithm is a dynamic programming algorithm, not a heuristic. A* is a heuristic search because it also uses a heuristic function (let's say h(x)) to "estimate" how close a point x is getting to the end point. This information is exploited in subsequent decisions about which nodes to explore next.
For cases such as a Euclidean graph, A* works well because the heuristic function is easy to define (one can simply use the Euclidean distance, for example). However, for non-Euclidean graphs it may be harder to define the heuristic function, and a wrong definition can lead to a non-optimal path.
Therefore, Dijkstra has the advantage over A* that it works for any general graph (notwithstanding that A* is faster in some cases). It could well be that certain implementations use these algorithms interchangeably, resulting in different results.
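To make the Euclidean case concrete, here is a minimal sketch of such a heuristic; the Node type with coordinates is an assumption for illustration:

#include <cmath>

// Hypothetical node with 2-D coordinates, purely illustrative.
struct Node { double x, y; };

// The straight-line distance never overestimates the true path length in a
// Euclidean graph, so it is an admissible A* heuristic h(x).
double heuristic(const Node &current, const Node &goal)
{
    return std::hypot(goal.x - current.x, goal.y - current.y);
}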
2) Dijkstra's algorithm (and others such as A*) uses a priority queue to obtain the next node to explore. A good implementation may use a heap instead of a plain queue, and an even better one may use a Fibonacci heap. This could explain the different run times.
The last time I checked, Dijkstra's Algorithm returns an optimal solution.
All "true" implementations of Dijkstra's should return the same result each time.
Similarly, asymptotic analysis shows us that minor optimisations to particular implementations are not going to affect performance significantly as the input size increases.
It's going to depend on a lot of things. How much do you know about your input data? Is it dense, or sparse? That will change which versions of the algorithm are the fastest.
If it's dense, just use a matrix. If it's sparse, you might want to look at more efficient data structures for finding the next closest vertex. If you have more information about your data set than just the graph connectivity, then see whether a different algorithm, like A*, would work better.
Problem is, there isn't "one fastest" version of the algorithm.
Related
I am trying to find the shortest path between two nodes in a dataset. I implemented Dijkstra's algorithm and am using it to prove that, given two nodes (like Andrew_Card and Dick_Cheney), there does not exist a path between the source and destination. However, I am finding that my program is getting killed by the operating system.
After debugging I found that the problem could be related to resource allocation in RAM. For Dijkstra's algorithm, if the number of nodes is n = 16,375,503, then the space requirement is n*n = 16,375,503 * 16,375,503 > 10^14. To run this algorithm in memory we need at least (10^14 * 4) / (1024 * 1024 * 1024) ≈ 10^5 GB of RAM.
So, it is not possible to find the shortest path using Dijkstra if we intend to keep a large connected graph in memory.
Please correct me if I am wrong, as I have been stuck on this for a long time. Or, if there could be some other possible reason which I should check, please point me to it too.
I implemented the program in C++.
No. of edges = 25,908,132
If the number of edges is relatively low (so that all edges can fit into main memory), you can just store the graph using an adjacency list. It requires O(V + E) memory, instead of O(V^2). Moreover, you can use Dijkstra's algorithm with a priority queue. It works well for sparse graphs (it has O(E log V) time complexity). This approach should work fine for a graph with about 2 * 10^7 vertices and edges (a good implementation can easily fit into main memory and run for no more than several minutes).
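As a rough sketch of that layout (the Arc type and the sizes below are assumptions chosen for a graph of roughly this size, not a prescription):

#include <cstddef>
#include <cstdint>
#include <tuple>
#include <vector>

// O(V + E) adjacency-list representation: one small record per directed edge.
struct Arc { std::int32_t to; float weight; };   // 8 bytes per arc

std::vector<std::vector<Arc>> buildGraph(std::size_t nodeCount,
                                         const std::vector<std::tuple<int, int, float>> &edges)
{
    std::vector<std::vector<Arc>> adj(nodeCount);
    for (const auto &[u, v, w] : edges)
        adj[u].push_back({v, w});                // add {u, w} to adj[v] too if undirected
    return adj;
}

At roughly 8 bytes per arc plus the per-node vector overhead, ~1.6 * 10^7 nodes and ~2.6 * 10^7 edges come to well under a few gigabytes, instead of the ~10^5 GB an n*n matrix would need.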
If you need JUST the distance between two nodes, use something like A*.
But if you're doing all-pairs shortest paths, then you're definitely stuck with O(n^2) space. You're finding O(n^2) answers - so you can't really do better than having to store all of them.
In terms of making sure that your program is indeed running out of memory, wrap your call site in a try-catch block and see if you are getting a std::bad_alloc exception. Until you see the exception you are catching, don't make assumptions about which part of your program is failing.
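For example, a minimal sketch of that check; the matrix allocation below is only a stand-in for your own construction code:

#include <iostream>
#include <new>
#include <vector>

int main()
{
    try {
        // Hypothetical allocation of the full n*n matrix -- replace with the
        // code that actually builds your graph.
        std::vector<std::vector<float>> matrix(16375503, std::vector<float>(16375503));
    } catch (const std::bad_alloc &e) {
        std::cerr << "allocation failed: " << e.what() << '\n';
        return 1;
    }
    return 0;
}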
In terms of finding the shortest route between two nodes, you should probably look up more literature to find the most suitable algorithm for your use case.
A*: http://en.wikipedia.org/wiki/A*_search_algorithm
Contraction hierarchy: http://algo2.iti.kit.edu/schultes/hwy/contract.pdf
You should find a way to reduce the number of nodes; yours is quite high. You could use a Voronoi diagram to reduce the number of nodes.
An improvement could be to store each vertex in the priority queue just once, and to update the priority queue instead of adding the same vertex to it again and again.
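A rough sketch of that idea using an ordered std::set as the priority queue, where an improved distance erases the stale entry and re-inserts the vertex; the dist array and the surrounding Dijkstra loop are assumed to exist as usual:

#include <set>
#include <utility>
#include <vector>

// Poor man's decrease-key: each vertex appears in the queue at most once.
void relax(std::set<std::pair<double, int>> &queue,
           std::vector<double> &dist, int u, int v, double len)
{
    if (dist[u] + len < dist[v]) {
        queue.erase({dist[v], v});   // remove the old entry, if the vertex is queued
        dist[v] = dist[u] + len;
        queue.insert({dist[v], v});  // re-insert with the updated key
    }
}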
I have a vector of pointers to a very simple Point class:
class Point {
public:
    float x;
    float y;
    float z;
};
How do I find the closest object to a reference point using the STL?
Do I need to sort the vector first, or is there a more efficient way?
Sorting takes O(n log n), so it's not very efficient. You can do it in O(n) by just iterating through the elements and remembering the best match.
Using for_each from <algorithm>, you can define a function that keeps track of the closest element and completes in O(n).
Or, you can probably even use min_element, also from <algorithm>.
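For instance, a minimal sketch with std::min_element and the Point class above, comparing squared distances so no square root is needed:

#include <algorithm>
#include <vector>

float sqDist(const Point &a, const Point &b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// One O(n) pass: returns the pointer to the Point closest to 'ref',
// or nullptr if the vector is empty.
Point *closest(const std::vector<Point *> &points, const Point &ref)
{
    auto it = std::min_element(points.begin(), points.end(),
        [&](const Point *a, const Point *b) {
            return sqDist(*a, ref) < sqDist(*b, ref);
        });
    return it == points.end() ? nullptr : *it;
}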
The basic question here is how often you'll be doing queries against a single set of points.
If you're going to find one nearest point in the set one time, then #Lucian is right: you might as well leave the points un-sorted, and do a linear search to find the right point.
If you'll do a relatively large number of queries against the same set of points, it's worthwhile to organize the point data to improve query speed. #izomorphius has already mentioned a k-d tree, and that's definitely a good suggestion. Another possibility (admittedly, quite similar) is an oct-tree. Between the two, I find an oct-tree quite a bit easier to understand. In theory, a k-d tree should be slightly more efficient (on average), but I've never seen much difference -- though perhaps with different data the difference would become significant.
Note, however, that building something like a k-d tree or oct-tree isn't terribly slow, so you don't need to do an awful lot of queries against a set of points to justify building one. One query clearly doesn't justify it, and two probably won't either -- but contrary to what Luchian implies, O(N log N) (just for example) isn't really very slow. Roughly speaking, log(N) is the number of digits in the number N, so O(N log N) isn't really a whole lot slower than O(N). That, in turn, means you don't need a particularly large number of queries to justify organizing the data to speed up each one.
You cannot go faster than a linear scan if all you know is that the points are in a vector. However, if you have additional knowledge, a lot can be improved. For instance, if you know all the points are ordered and lie on the same line, there is a logarithmic solution.
Also, there are better data structures for this problem, for instance a k-d tree. It is not part of the STL - you will have to implement it yourself - but it is THE data structure to use for the problem you have.
You can try to use a quadtree:
http://en.wikipedia.org/wiki/Quadtree
or something similar.
The problem:
N nodes are related to each other by a 'closeness' factor ranging from 0 to 1, where a factor of 1 means that the two nodes have nothing in common and 0 means the two nodes are exactly alike.
If two nodes are both close to another node (i.e. they have a factor close to 0) then this doesn't mean that they will be close together, although probabilistically they do have a much higher chance of being close together.
The question:
If another node is placed in the set, find the node that it is closest to in the shortest possible amount of time.
This isn't a homework question, this is a real world problem that I need to solve - but I've never taken any algorithm courses etc so I don't have a clue what sort of algorithm I should be researching.
I can index all of the nodes before another one is added and gather closeness data between each node, but short of comparing all nodes to the new node I haven't been able to come up with an efficient solution. Any ideas or help would be much appreciated :)
Because your 'closeness' metric obeys the triangle inequality, you should be able to use a variant of BK-Trees to organize your elements. Adapting them to real numbers should simply be a matter of choosing an interval to quantize your number on, and otherwise using the standard Bk-Tree procedure. Some experimentation may be required - you might want to increase the resolution of the quantization as you progress down the tree, for instance.
but short of comparing all nodes to the new node I haven't been able to come up with an efficient solution
Without any other information about the relationships between nodes, this is the only way you can do it, since you have to figure out the closeness factor between the new node and each existing node. An O(n) algorithm can be a perfectly decent solution.
One addition you might consider - keep in mind we have no idea what data structure you are using for your objects - is to organize all present nodes into a graph, where nodes with factors below a certain threshold can be considered connected, so you can first check nodes that are more likely to be similar/related.
If you want the optimal algorithm in terms of speed, but O(n^2) space, then for each node create a sorted list of other nodes (ordered by closeness).
When you get a new node, you have to add it to the indexed list of all the other nodes, and all the other nodes need to be added to its list.
To find the closest node, just find the first node on any node's list.
Since you already need O(n^2) space (in order to store all the closeness information you need basically an NxN matrix where A[i,j] represents the closeness between i and j) you might as well sort it and get O(1) retrieval.
If this closeness forms a linear spectrum (such that closeness to something implies closeness to other things that are close to it, and not being close implies not being close to the things close to it), then you can simply use a binary or interpolation search on insertion, ordered by closeness, handling one extra complexity: at each point you have to see whether closeness increases or decreases below or above.
For example, if we consider letters - A is close to B but far from Z - then the pre-existing elements can be kept sorted, say: A, B, E, G, K, M, Q, Z. To insert say 'F', you start by comparing with the middle element, [3] G, and the one following that: [4] K. You establish that F is closer to G than K, so the best match is either at G or to the left, and we move halfway into the unexplored region to the left... 3/2=[1] B, followed by E, and we find E's closer to F, so the match is either at E or to its right. Halving the space between our earlier checks at [3] and [1], we test at [2] and find it equally-distant, so insert it in between.
EDIT: it may work better in probabilistic situations, and require less comparisons, to start at the ends of the spectrum and work your way in (e.g. compare F to A and Z, decide it's closer to A, see if A's closer or the halfway point [3] G). Also, it might be good to finish with a comparison to the closest few points either side of where the binary/interpolation led you.
ACM Surveys September 2001 carried two papers that might be relevant, at least for background. "Searching in Metric Spaces", lead author Chavez, and "Searching in High Dimensional Spaces - Index Structures for Improving the Performance of Multimedia Databases", lead author Bohm. From memory, if all you have is the triangle inequality, you can use it to some effect, but if you can trim your data down to a sensible number of dimensions, you can do better by using a search structure that knows about this dimensional structure.
Facebook has this thing where it puts you and all of your friends in a graph, then slowly moves everyone around until people are grouped together based on mutual friends and so on.
It looked to me like they just made anything <0.5 an attractive force, anything >0.5 a repulsive force, and moved people with every iteration based on the net force. After a couple hundred iterations, it was looking pretty darn good.
Note: this is not an algorithm; it is a heuristic. In the Facebook implementation I saw, two people were not able to reach equilibrium and kept dancing around each other. It turned out they were actually the same person with two different accounts.
Also, it took about 15 minutes on a decent computer and ~100 nodes. YMMV.
It looks suspiciously like a Nearest Neighbor Search problem (also called a similarity search)
I am starting to work with the Boost Graph Library. I need a best-first search, which I could implement using astar_search by having zero costs. (Please correct me if I'm wrong.)
However, I wonder if there is another possibility of doing this? If the costs are not considered, the algorithm should be slightly more efficient.
EDIT: Sorry for the unclear description. I am actually implementing a potential field search, so I don't have any costs/weights associated with the edges but rather need to do a steepest-descent-search (which can overcome local minima).
Thanks for any hints!
You could definitely use A* to tackle this; you'd need h(x) to be 0 though, not g(x). A* rates nodes based on F which is defined by
F(n) = g(n) + h(n).
TotalCost = PathCost + Heuristic.
g(n) = Path cost, the distance from the initial to the current state
h(n) = Heuristic, the estimation of cost from current state to end state.
From Wikipedia:
Dijkstra's algorithm, as another example of a best-first search algorithm, can be viewed as a special case of A* where h(x) = 0 for all x.
If you are comfortable with C++, I would suggest trying out YAGSBPL.
As suggested by Aphex's answer, you might want to use Dijkstra's algorithm; one way to set the edge weights is to set w(u, v) to potential(v) - potential(u), assuming that is nonnegative. Dijkstra's algorithm assumes that edge weights are nonnegative, so that distances only increase as you move away from the source node. If you are searching for the smallest potential, flip the sides of the subtraction; if you have potentials that go both up and down you might need to use something like Bellman-Ford, which is not best-first.
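A tiny sketch of that weight assignment; the potential function is passed in here because how you evaluate the field is entirely up to you:

#include <functional>

// w(u, v) = potential(v) - potential(u); this must come out nonnegative for
// Dijkstra to be applicable, otherwise consider Bellman-Ford as noted above.
double edgeWeight(int u, int v, const std::function<double(int)> &potential)
{
    return potential(v) - potential(u);
}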
I wonder what the advantages and disadvantages of these two algorithms are. I want to write an AddEmUp solver in C++, but I'm not sure which algorithm (IDA* or DFID) I should use.
The best article I found is this one, but it seems too old - '93. Any newer?
I think IDA* would be better, but.. ? Any other ideas?
Any ideas and info would be helpful.
Thanks ! (:
EDIT: Is there a good article about IDA* with a good explanation of the algorithm?
EDIT 2: Or a good heuristic function for that game? I have no idea how to come up with one :/
The Russell & Norvig book is an excellent reference on these algorithms, and I'll give larsmans a virtual high-five for suggesting it; however, I disagree that IDA* is in any appreciable way harder to program than A*. I've done it for a project where I had to write an AI to solve a sliding-block puzzle - the familiar problem of having an N x N grid of numbered tiles, and using the single free space to slide tiles around until they are in ascending order.
Recall:
F(n) = g(n) + h(n).
TotalCost = PathCost + Heuristic.
g(n) = Path cost, the distance from the initial to the current state
h(n) = Heuristic, the estimated cost from the current state to the end state. To be admissible (and thus ensure A*'s optimality), the heuristic must never overestimate the cost. See this question for more info on the effects of overestimating/underestimating heuristics on A*.
Remember that Iterative Deepening A* is just A* with a limit on the F value of nodes you are allowed to traverse. This FLimit increases with each outer iteration; with each iteration you are deepening the search.
Here's my C++ code implementing both A* and IDA* to solve the aforementioned sliding block puzzle. You can see that I use a std::priority_queue with a custom Comparator to store Puzzle states in the queue prioritized by their F value. You will also note that the only difference between A* and IDA* is the addition of an FLimit check and an outer loop that increments this FLimit. I hope this helps shed some light on this subject.
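For readers who prefer the textbook recursive formulation of IDA*, here is a small self-contained sketch on a toy problem (walking along the integers toward a target). It only illustrates the bounded depth-first search plus the outer loop that raises the bound; it is not the sliding-puzzle code referred to above:

#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <initializer_list>
#include <limits>

namespace {
constexpr double FOUND = -1.0;
constexpr int GOAL = 7;

double heuristic(int x) { return std::abs(x - GOAL); }   // admissible: exact distance here

// Depth-first search bounded by fLimit; returns FOUND on success, otherwise
// the smallest f value that exceeded the limit (the next iteration's bound).
double search(int x, double g, double fLimit)
{
    double f = g + heuristic(x);
    if (f > fLimit) return f;
    if (x == GOAL) return FOUND;
    double nextLimit = std::numeric_limits<double>::infinity();
    for (int next : {x - 1, x + 1}) {                     // successor states
        double t = search(next, g + 1.0, fLimit);         // unit step cost
        if (t == FOUND) return FOUND;
        nextLimit = std::min(nextLimit, t);
    }
    return nextLimit;
}
}  // namespace

int main()
{
    double fLimit = heuristic(0);                         // start state: 0
    while (true) {                                        // the outer deepening loop
        double t = search(0, 0.0, fLimit);
        if (t == FOUND) { std::puts("goal reached"); break; }
        if (t == std::numeric_limits<double>::infinity()) break;  // search space exhausted
        fLimit = t;                                       // raise the f bound
    }
    return 0;
}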
Check out Russell & Norvig, chapters 3 and 4, and realize that IDA* is hard to program correctly. You might want to try recursive best-first search (RBFS), also described by R&N, or plain old A*. The latter can be implemented using a std::priority_queue.
IIRC, R&N described IDA* in the first edition, then replaced it with RBFS in the second. I haven't seen the third edition yet.
As regards your second edit, I haven't looked into the game, but a good procedure for deriving heuristics is that of relaxed problems. You take away the rules of the game until you derive a version for which the heuristic is easily expressed and implemented (and cheap to compute). Or, following a bottom-up approach, you check the main rules to see which one admits an easy heuristic, then try that and add in other rules as you need them.
DFID is just a special case of IDA* where the heuristic function is the constant 0; in other words, it has no provision for introducing heuristics. If the problem is not small enough that it can be solved without using heuristics, it seems you have no choice but to use IDA* (or some other member of the A* family). That said, IDA* is really not that hard: the implementation provided by the authors of AIMA is only about half a page of Lisp code; I imagine a C++ implementation shouldn't take more than twice that.