I have a compile-time directed acyclic graph of asynchronous tasks. The DAG shows the dependencies between the tasks: by analyzing it, it's possible to understand what tasks can run in parallel (in separate threads) and what tasks need to wait for other tasks to finish before they can begin (dependencies).
I want to generate a callback chain from the DAG, using boost::future and the .then(...) and when_all(...) continuation helper functions. The result of this generation will be a function that, when called, starts the callback chain and executes the tasks as described by the DAG, running as many tasks as possible in parallel.
I'm having trouble, however, finding a general algorithm that can work for all cases.
I made a few drawings to make the problem easier to understand. This is a legend that will show you what the symbols in the drawings mean:
Let's begin with a simple, linear DAG:
This dependency graph consists of three tasks (A, B, and C). C depends on B. B depends on A. There is no possibility of parallelism here - the generation algorithm would build something similar to this:
boost::future<void> A, B, C, end;
A.then([]
{
B.then([]
{
C.get();
end.get();
});
});
(Note that the code samples are not 100% valid - I'm ignoring move semantics, forwarding, and lambda captures.)
There are many approaches to solve this linear DAG: either by starting from the end or the beginning, it's trivial to build a correct callback chain.
Things start to get more complicated when forks and joins are introduced.
Here's a DAG with a fork/join:
It's difficult to think of a callback chain that matches this DAG. If I try to work backwards, starting from the end, my reasoning is as follows:
end depends on B and D. (join)
D depends on C.
B and C depend on A. (fork)
A possible chain looks something like this:
boost::future<void> A, B, C, D, end;
A.then([]
{
boost::when_all(B, C.then([]
{
D.get();
}))
.then([]
{
end.get();
});
});
I found it difficult to write this chain by hand, and I'm also doubtful about its correctness. I could not think of a general way to implement an algorithm that could generate this - additional difficulties are also present due to the fact that when_all needs its arguments to be moved into it.
Let's see one last, even more complex, example:
Here we want to exploit parallelism as much as possible. Consider task E: E can be run in parallel with any of [B, C, D].
This is a possible callback chain:
boost::future<void> A, B, C, D, E, F, end;
A.then([]
{
boost::when_all(boost::when_all(B, C).then([]
{
D.get();
}),
E)
.then([]
{
F.then([]
{
end.get();
});
});
});
I've tried to come up with a general algorithm in several ways:
Starting from the beginning of the DAG, trying to build up the chain using .then(...) continuations. This doesn't work with joins, as the target join task would be repeated multiple times.
Starting from the end of the DAG, trying to generate the chain using when_all(...) continuations. This fails with forks, as the node that creates the fork is repeated multiple times.
Obviously the "breadth-first traversal" approach doesn't work well here. From the code samples that I have hand-written, it seems that the algorithm needs to be aware of forks and joins, and needs to be able to correctly mix .then(...) and when_all(...) continuations.
Here are my final questions:
Is it always possible to generate a future-based callback chain from a DAG of task dependencies, where every task appears only once in the callback chain?
If so, how can a general algorithm that, given a task dependency DAG builds a callback chain, be implemented?
EDIT 1:
Here's an additional approach I'm trying to explore.
The idea is to generate a ([dependencies...] -> [dependents...]) map data structure from the DAG, and to generate the callback chain from that map.
If len(dependencies...) > 1, then value is a join node.
If len(dependents...) > 1, then key is a fork node.
All the key-value pairs in the map can be expressed as when_all(keys...).then(values...) continuations.
The difficult part is figuring out the correct order in which to "expand" (think about something similar to a parser) the nodes and how to connect the fork/join continuations together.
Consider the following map, generated by image 4.
dependencies    | dependents
----------------|-------------
[F] : [end]
[D, E] : [F]
[B, C] : [D]
[A] : [E, C, B]
[begin] : [A]
By applying some sort of parser-like reductions/passes, we can get a "clean" callback chain:
// First pass:
// Convert everything to `when_all(...).then(...)` notation
when_all(F).then(end)
when_all(D, E).then(F)
when_all(B, C).then(D)
when_all(A).then(E, C, B)
when_all(begin).then(A)
// Second pass:
// Solve linear (trivial) transformations
when_all(D, E).then(
when_all(F).then(end)
)
when_all(B, C).then(D)
when_all(
when_all(begin).then(A)
).then(E, C, B)
// Third pass:
// Solve fork/join transformations
when_all(
when_all(begin).then(A)
).then(
when_all(
E,
when_all(B, C).then(D)
).then(
when_all(F).then(end)
)
)
The third pass is the most important one, but also the one that looks really difficult to design an algorithm for.
Notice how [B, C] have to be found inside the [E, C, B] list, and how, in the [D, E] dependency list, D must be interpreted as the result of when_all(B, C).then(D) and chained together with E in when_all(E, when_all(B, C).then(D)).
Maybe the entire problem can be simplified as:
Given a map consisting of [dependencies...] -> [dependents...] key value pairs, how could an algorithm that transforms those pairs to a when_all(...)/.then(...) continuation chain be implemented?
EDIT 2:
Here's some pseudocode I came up with for the approach described above. It seems to work for the DAG I tried, but I need to spend more time on it and "mentally" test it with other, trickier, DAG configurations.
The easiest way is to start from the entry node of the graph, as if you were writing the code by hand. To solve the join problem, you cannot use a plain recursive solution: you need a topological ordering of your graph, and then you build the chain according to that ordering.
This gives the guarantee that when you build a node all of its predecessors have already been created.
To achieve this goal we can use a DFS, with reverse postordering.
Once you have a topological ordering, you can forget the original node IDs and refer to nodes by their position in the list. To do that, you need to create a compile-time map that allows retrieving a node's predecessors using its index in the topological ordering instead of its original node ID.
EDIT: Following up on how to implement topological sorting at compile time, I refactored this answer.
To be on the same page I will assume that your graph looks like this:
struct mygraph
{
template<int Id>
static constexpr auto successors(node_id<Id>) ->
list< node_id<> ... >; //List of successors for the input node
template<int Id>
static constexpr auto predecessors(node_id<Id>) ->
list< node_id<> ... >; //List of predecessors for the input node
//Get the task associated with the given node.
template<int Id>
static constexpr auto task(node_id<Id>);
using entry_node = node_id<0>;
};
Step 1: topological sort
The basic ingredient you need is a compile-time set of node ids. In TMP a set is also a list, simply because in set<Ids...> the order of the Ids matters. This means you can use the same data structure to encode both whether a node has been visited AND the resulting ordering at the same time.
/** Topological sort using DFS with reverse-postordering **/
template<class Graph>
struct topological_sort
{
private:
struct visit;
// If we reach a node that we already visited, do nothing.
template<int Id, int ... Is>
static constexpr auto visit_impl( node_id<Id>,
set<Is...> visited,
std::true_type )
{
return visited;
}
// This overload kicks in when node has not been visited yet.
template<int Id, int ... Is>
static constexpr auto visit_impl( node_id<Id> node,
set<Is...> visited,
std::false_type )
{
// Get the list of successors for the current node
constexpr auto succ = Graph::successors(node);
// Reverse postordering: we call insert *after* visiting the successors
// This will call "visit" on each successor, updating the
// visited set after each step.
// Then we insert the current node in the set.
// Notice that if the graph is cyclic we end up in an infinite
// recursion here.
return fold( succ,
visited,
visit() ).insert(node);
// Conventional DFS would be:
// return fold( succ, visited.insert(node), visit() );
}
struct visit
{
// Dispatch to visit_impl depending on the result of visited.contains(node)
// Note that "contains" returns a type convertible to
// integral_constant<bool,x>
template<int Id, int ... Is>
constexpr auto operator()( set<Is...> visited, node_id<Id> node ) const
{
return visit_impl(node, visited, visited.contains(node) );
}
};
public:
template<int StartNodeId>
static constexpr auto compute( node_id<StartNodeId> node )
{
// Start visiting from the entry node
// The set of visited nodes is initially empty.
// "as_list" converts set<Is ... > to list< node_id<Is> ... >.
return reverse( visit()( set<>{}, node ).as_list() );
}
};
This algorithm, applied to the graph from your last example (assuming A = node_id<0>, B = node_id<1>, etc.), produces list<A,B,C,D,E,F>.
Step 2: graph map
This is simply an adapter that modifies the Id of each node in your graph according to a given ordering. So assuming that previous steps returned list<C,D,A,B>, this graph_map would map the index 0 to C, index 1 to D, etc.
template<class Graph, class List>
class graph_map
{
// Convert a node_id from underlying graph.
// Use a function-object so that it can be passed to algorithms.
struct from_underlying
{
template<int I>
constexpr auto operator()(node_id<I> id)
{ return node_id< find(id, List{}) >{}; }
};
struct to_underlying
{
template<int I>
constexpr auto operator()(node_id<I> id)
{ return get<I>(List{}); }
};
public:
template<int Id>
static constexpr auto successors( node_id<Id> id )
{
constexpr auto orig_id = to_underlying()(id);
constexpr auto orig_succ = Graph::successors( orig_id );
return transform( orig_succ, from_underlying() );
}
template<int Id>
static constexpr auto predecessors( node_id<Id> id )
{
constexpr auto orig_id = to_underlying()(id);
constexpr auto orig_pred = Graph::predecessors( orig_id );
return transform( orig_pred, from_underlying() );
}
template<int Id>
static constexpr auto task( node_id<Id> id )
{
return Graph::task( to_underlying()(id) );
}
using entry_node = decltype( from_underlying()( typename Graph::entry_node{} ) );
};
Step 3: assemble the result
We can now iterate over each node id in order. Thanks to the way we built the graph map, we know that all the predecessors of I have a node id which is less than I, for every possible node I.
// Returns a tuple<> of futures
template<class GraphMap, class ... Ts>
auto make_cont( std::tuple< future<Ts> ... > && pred )
{
// The next node to work with is N:
constexpr auto current_node = node_id< sizeof ... (Ts) >();
// Get a list of all the predecessors for the current node.
auto indices = GraphMap::predecessors( current_node );
// "select" is some magic function that takes a tuple of Ts
// and an index_sequence, and returns a tuple of references to the elements
// from the input tuple that are in the indices list.
auto futures = select( pred, indices );
// Assuming you have an overload of when_all that takes a tuple,
// otherwise use C++17 apply.
auto join = when_all( futures );
// Note: when_all with an empty parameter list returns a future< tuple<> >,
// which is always ready.
// In general this has to be a shared_future, but you can avoid that
// by checking if this node has only one successor.
auto next = join.then( GraphMap::task( current_node ) ).share();
// Return a new tuple of futures, pushing the new future at the back.
return std::tuple_cat( std::move(pred),
std::make_tuple(std::move(next)) );
}
// Returns a tuple of futures, you can take the last element if you
// know that your DAG has only one leaf, or do some additional
// processing to extract only the leaf nodes.
template<class Graph>
auto make_callback_chain()
{
constexpr auto entry_node = typename Graph::entry_node{};
constexpr auto sorted_list =
topological_sort<Graph>::compute( entry_node );
using map = graph_map< Graph, decltype(sorted_list) >;
// Note: we are not really using the "index" in the functor here,
// we only want to call make_cont once for each node in the graph
return fold( sorted_list,
std::make_tuple(), //Start with an empty tuple
[]( auto && tuple, auto index )
{
return make_cont<map>(std::move(tuple));
} );
}
Full live demo
If redundant dependencies may occur, remove them first (see e.g. https://mathematica.stackexchange.com/questions/33638/remove-redundant-dependencies-from-a-directed-acyclic-graph).
Then perform the following graph transformations (building sub-expressions in merged nodes) until you are down to a single node (in a way similar to how you'd calculate a network of resistors):
*: Additional incoming or outgoing dependencies, depending on placement
(...): Expression in a single node
Java code including setup for your more complex example:
public class DirectedGraph {
/** Set of all nodes in the graph */
static Set<Node> allNodes = new LinkedHashSet<>();
static class Node {
/** Set of all preceding nodes */
Set<Node> prev = new LinkedHashSet<>();
/** Set of all following nodes */
Set<Node> next = new LinkedHashSet<>();
String value;
Node(String value) {
this.value = value;
allNodes.add(this);
}
void addPrev(Node other) {
prev.add(other);
other.next.add(this);
}
/** Returns one of the next nodes */
Node anyNext() {
return next.iterator().next();
}
/** Merges this node with other, then removes other */
void merge(Node other) {
prev.addAll(other.prev);
next.addAll(other.next);
for (Node on: other.next) {
on.prev.remove(other);
on.prev.add(this);
}
for (Node op: other.prev) {
op.next.remove(other);
op.next.add(this);
}
prev.remove(this);
next.remove(this);
allNodes.remove(other);
}
public String toString() {
return value;
}
}
/**
* Merges sequential or parallel nodes following the given node.
* Returns true if any node was merged.
*/
public static boolean processNode(Node node) {
// Check if we are the start of a sequence. Merge if so.
if (node.next.size() == 1 && node.anyNext().prev.size() == 1) {
Node then = node.anyNext();
node.value += " then " + then.value;
node.merge(then);
return true;
}
// See if any of the next nodes has a parallel node with
// the same one level indirect target.
for (Node next : node.next) {
// Nodes must have only one in and out connection to be merged.
if (next.prev.size() == 1 && next.next.size() == 1) {
// Collect all parallel nodes with only one in and out connection
// and the same target; the same source is implied by iterating over
// node.next again.
Node target = next.anyNext();
Set<Node> parallel = new LinkedHashSet<Node>();
for (Node other: node.next) {
if (other != next && other.prev.size() == 1
&& other.next.size() == 1 && other.anyNext() == target) {
parallel.add(other);
}
}
// If we have found any "parallel" nodes, merge them
if (parallel.size() > 0) {
StringBuilder sb = new StringBuilder("all(");
sb.append(next.value);
for (Node other: parallel) {
sb.append(", ").append(other.value);
next.merge(other);
}
sb.append(")");
next.value = sb.toString();
return true;
}
}
}
return false;
}
public static void main(String[] args) {
Node a = new Node("A");
Node b = new Node("B");
Node c = new Node("C");
Node d = new Node("D");
Node e = new Node("E");
Node f = new Node("F");
f.addPrev(d);
f.addPrev(e);
e.addPrev(a);
d.addPrev(b);
d.addPrev(c);
b.addPrev(a);
c.addPrev(a);
boolean anyChange;
do {
anyChange = false;
for (Node node: allNodes) {
if (processNode(node)) {
anyChange = true;
// We need to leave the inner loop here because changes
// invalidate the for iteration.
break;
}
}
// We are done if we can't find any node to merge.
} while (anyChange);
System.out.println(allNodes.toString());
}
}
Output: A then all(E, all(B, C) then D) then F
This seems reasonably easy if you stop thinking about it in terms of explicit dependencies and organizing a DAG. Every task can be organized in something like the following (C# because it's so much simpler to explain the idea):
class MyTask
{
    // A list of all tasks that depend on this one being finished.
    private readonly ICollection<MyTask> _dependents;
    // Number of not-yet-finished dependencies of this task.
    private int _nrDependencies;
    public int NrDependencies
    {
        get { return _nrDependencies; }
        private set { _nrDependencies = value; }
    }
}
If you have organized your DAG in such a form, the problem is actually really simple: every task where _nrDependencies == 0 can be executed. So we need a run method that looks something like the following:
public async Task RunTask()
{
    // Execute the actual code of the task.
    var tasks = new List<Task>();
    foreach (var dependent in _dependents)
    {
        if (Interlocked.Decrement(ref dependent._nrDependencies) == 0)
        {
            tasks.Add(Task.Run(() => dependent.RunTask()));
        }
    }
    await Task.WhenAll(tasks);
}
Basically as soon as our task finished, we go through all our dependents and execute all of those that have no more unfinished dependencies.
To start the whole thing off, the only thing you have to do is call RunTask() for all tasks that start out with zero unfinished dependencies (at least one such task must exist, since we have a DAG). As soon as all of those tasks have finished, we know that the whole DAG has been executed.
This graph is not constructed at compile time, but it is not clear to me whether that is a requirement. The graph is held in a Boost.Graph adjacency_list<vecS, vecS, bidirectionalS>. A single dispatch will start the tasks. We just need the in-edges at each node, so that we know exactly what we are waiting on. Those are pre-calculated at instantiation in the scheduler below.
I contend that a full topological sort is not needed.
For example, if the dependency graph were:
use scheduler_driver.cpp
For a join as in
just redefine the Graph to define the directed edges.
So, to answer your 2 questions:
1. Yes, for a DAG. Only the unique immediate dependencies are needed for each node, and these can be pre-computed as below. The dependency chain can then be initiated with a single dispatch, and the domino chain falls.
2. Yes, see the algorithm below (using C++11 threads, not boost::thread). For forks, a shared_future is required for the communication, while joins are supported with future-based communication.
scheduler_driver.hpp:
#ifndef SCHEDULER_DRIVER_HPP
#define SCHEDULER_DRIVER_HPP
#include <iostream>
#include <ostream>
#include <iterator>
#include <vector>
#include <chrono>
#include <memory> // std::make_unique is used by the driver
#include "scheduler.h"
#endif
scheduler_driver.cpp:
#include "scheduler_driver.hpp"
enum task_nodes
{
task_0,
task_1,
task_2,
task_3,
task_4,
task_5,
task_6,
task_7,
task_8,
task_9,
N
};
int basic_task(int a, int d)
{
std::chrono::milliseconds sleepDuration(d);
std::this_thread::sleep_for(sleepDuration);
std::cout << "Result: " << a << "\n";
return a;
}
using namespace SCHEDULER;
int main(int argc, char **argv)
{
using F = std::function<int()>; // basic_task returns int
Graph deps(N);
boost::add_edge(task_0, task_1, deps);
boost::add_edge(task_0, task_2, deps);
boost::add_edge(task_0, task_3, deps);
boost::add_edge(task_1, task_4, deps);
boost::add_edge(task_1, task_5, deps);
boost::add_edge(task_1, task_6, deps);
boost::add_edge(task_2, task_7, deps);
boost::add_edge(task_2, task_8, deps);
boost::add_edge(task_2, task_9, deps);
std::vector<F> tasks =
{
std::bind(basic_task, 0, 1000),
std::bind(basic_task, 1, 1000),
std::bind(basic_task, 2, 1000),
std::bind(basic_task, 3, 1000),
std::bind(basic_task, 4, 1000),
std::bind(basic_task, 5, 1000),
std::bind(basic_task, 6, 1000),
std::bind(basic_task, 7, 1000),
std::bind(basic_task, 8, 1000),
std::bind(basic_task, 9, 1000)
};
auto s = std::make_unique<scheduler<int>>(std::move(deps), std::move(tasks));
s->doit();
return 0;
}
scheduler.h:
#ifndef SCHEDULER_H
#define SCHEDULER_H
#include <iostream>
#include <vector>
#include <list>
#include <iterator>
#include <functional>
#include <algorithm>
#include <mutex>
#include <thread>
#include <future>
#include <boost/graph/graph_traits.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <boost/graph/visitors.hpp>
using namespace boost;
namespace SCHEDULER
{
using Graph = adjacency_list<vecS, vecS, bidirectionalS>;
using Edge = graph_traits<Graph>::edge_descriptor;
using Vertex = graph_traits<Graph>::vertex_descriptor;
using VertexCont = std::vector<Vertex>;
using outIt = graph_traits<Graph>::out_edge_iterator;
using inIt = graph_traits<Graph>::in_edge_iterator;
template<typename R>
class scheduler
{
public:
using ret_type = R;
using fun_type = std::function<R()>;
using prom_type = std::promise<ret_type>;
using fut_type = std::shared_future<ret_type>;
scheduler() = default;
scheduler(const Graph &deps_, const std::vector<fun_type> &tasks_) :
g(deps_),
tasks(tasks_) { init_();}
scheduler(Graph&& deps_, std::vector<fun_type>&& tasks_) :
g(std::move(deps_)),
tasks(std::move(tasks_)) { init_(); }
scheduler(const scheduler&) = delete;
scheduler& operator=(const scheduler&) = delete;
void doit();
private:
void init_();
std::list<Vertex> get_sources(const Vertex& v);
auto task_thread(fun_type&& f, int i);
Graph g;
std::vector<fun_type> tasks;
std::vector<prom_type> prom;
std::vector<fut_type> fut;
std::vector<std::thread> th;
std::vector<std::list<Vertex>> sources;
};
template<typename R>
void
scheduler<R>::init_()
{
size_t num_tasks = tasks.size();
prom.resize(num_tasks);
fut.resize(num_tasks);
// Get the futures
for(size_t i=0;
i<num_tasks;
++i)
{
fut[i] = prom[i].get_future();
}
// Predetermine in_edges for faster traversal
sources.resize(num_tasks);
for(size_t i=0;
i<num_tasks;
++i)
{
sources[i] = get_sources(i);
}
}
template<typename R>
std::list<Vertex>
scheduler<R>::get_sources(const Vertex& v)
{
std::list<Vertex> r;
Vertex v1;
inIt j, j_end;
boost::tie(j,j_end) = in_edges(v, g);
for(;j != j_end;++j)
{
v1 = source(*j, g);
r.push_back(v1);
}
return r;
}
template<typename R>
auto
scheduler<R>::task_thread(fun_type&& f, int i)
{
// Wait on the predecessors *inside* the worker thread, so that all
// threads can be launched up front and independent tasks run in parallel.
return std::thread([this](fun_type f, int i)
{
for(const auto& v : sources[i])
fut[v].get();
prom[i].set_value(f());
}, std::move(f), i);
}
template<typename R>
void
scheduler<R>::doit()
{
size_t num_tasks = tasks.size();
th.resize(num_tasks);
for(size_t i=0; i<num_tasks; ++i)
{
th[i] = task_thread(std::move(tasks[i]), static_cast<int>(i));
}
for_each(th.begin(), th.end(), mem_fn(&std::thread::join));
}
} // namespace SCHEDULER
#endif
I'm not sure what your setup is or why you need to build a DAG, but I think a simple greedy algorithm may suffice:
when (some task have finished) {
mark output resources done;
find all tasks that can be run;
post them to thread pool;
}
Consider using Intel's TBB Flow Graph library.
I would like to obtain two sets of shortest paths (one-to-all) derived from one graph, defined as adjacency_list with internal properties (as opposed to bundles)
In theory I could run dijkstra_shortest_paths on two reference nodes, n1 and n2. If I create two property_maps and pass them in sequence to dijkstra_... I get what looks like two views of the same map. Both point to the result of the last run of dijkstra_shortest_paths, so that the older result is gone. What should I do to achieve the desired result?
// Define some property maps
property_map<ugraph_t,edge_weight_t>::type Weight=get(edge_weight,G);
property_map<ugraph_t,vertex_distance_t>::type Dist1=get(vertex_distance,G);
// One line later, I expect this to be mapped to the SPs w.r.t n1
// Run SP on the first node
dijkstra_shortest_paths(G,n1,predecessor_map(Prev1).distance_map(Dist1).weight_map(Weight));
// New property maps
property_map<ugraph_t,vertex_distance_t>::type Dist2(Dist1); // And now to the second set
property_map<ugraph_t,vertex_predecessor_t>::type Prev2(Prev1); // But no two sets will result...
// Run SP on the second node
// This will run fine, but I will lose the first SP set (with or without a copy constructor above)
dijkstra_shortest_paths(G,n2,predecessor_map(Prev2).distance_map(Dist2).weight_map(Weight));
CONCLUSION: If I am not mistaken, a property_map can be thought of as an interface (with an iterator) over existing data, so copying property_maps makes no sense. The solution is to pass a custom container, constructed on the fly. That solution is detailed in the answer by @sehe below, for which my many thanks!
NOTE: This only works if the vertex container type is vecS. With listS one has to "manually" copy vertex-by-vertex.
The distance map is not supposed to be an interior property.
Same goes for the predecessor map.
They are not logically properties of the graph. They are the result of a query. As such they're property of a combination of query parameters, including the graph, starting node etc.
If you want to save the value of an interior property, just save it in any way you normally would:
std::vector<double> saved_distances(num_vertices(G));
BGL_FORALL_VERTICES(v, G, ugraph_t)
    saved_distances[v] = Dist1[v]; // with vecS, v is a valid vector index
Workaround
The workaround with copying the maps:
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/properties.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <boost/graph/iteration_macros.hpp>
using namespace boost;
using ugraph_traits = graph_traits<adjacency_list<vecS, vecS, directedS> >;
using ugraph_t = adjacency_list<
vecS, vecS, directedS,
property<vertex_distance_t, double,
property<vertex_predecessor_t, ugraph_traits::vertex_descriptor>
>,
property<edge_weight_t, double>
>;
int main() {
ugraph_t G(10);
ugraph_t::vertex_descriptor n1 = 0, n2 = 1;
(void) n1;
(void) n2;
// ...
property_map<ugraph_t, edge_weight_t>::type Weight = get(edge_weight,G);
property_map<ugraph_t, vertex_distance_t>::type Dist1 = get(vertex_distance,G);
property_map<ugraph_t, vertex_predecessor_t>::type Prev1 = get(vertex_predecessor,G);
dijkstra_shortest_paths(G, n1,
predecessor_map(Prev1)
.distance_map(Dist1)
.weight_map(Weight)
);
std::vector<double> saved_distances(num_vertices(G));
std::vector<ugraph_t::vertex_descriptor> saved_predecessors(num_vertices(G));
BGL_FORALL_VERTICES(v, G, ugraph_t) {
    saved_distances[v] = Dist1[v];
    saved_predecessors[v] = Prev1[v];
}
/*
* // C++11 style
* for(auto v : make_iterator_range(vertices(G)))
* saved_distances[v] = Dist1[v];
*/
// Run SP on the second node
dijkstra_shortest_paths(G,n2,predecessor_map(Prev1).distance_map(Dist1).weight_map(Weight));
}
Suggested
I'd suggest making the result maps separate containers, leaving only the edge weight interior:
Live On Coliru
Better Yet: refactor to remove duplicated code
So it just becomes
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
using namespace boost;
using ugraph_t = adjacency_list<vecS, vecS, directedS, no_property, property<edge_weight_t, double> >;
using Vertex = ugraph_t::vertex_descriptor;
struct ShortestPaths {
ShortestPaths(size_t num_vertices);
std::vector<double> distances;
std::vector<Vertex> predecessors;
};
ShortestPaths GetShortestPaths(ugraph_t const& G, Vertex start);
int main() {
ugraph_t G(10);
Vertex n1 = 0, n2 = 1;
ShortestPaths sp1 = GetShortestPaths(G, n1);
ShortestPaths sp2 = GetShortestPaths(G, n2);
}
// some other cpp file...:
ShortestPaths::ShortestPaths(size_t num_vertices)
: distances(num_vertices), predecessors(num_vertices)
{ }
ShortestPaths GetShortestPaths(ugraph_t const& G, Vertex start) {
ShortestPaths result(num_vertices(G));
dijkstra_shortest_paths(G, start,
predecessor_map(make_container_vertex_map(result.predecessors, G))
.distance_map (make_container_vertex_map(result.distances, G))
.weight_map (get(edge_weight, G))
);
return result;
}
Note there is no more need to copy the results. In fact, you don't even need to keep the graph around to keep the result of your query.
I want my map to be searchable, and I want to be able to kick out the elements that were inserted into it the longest time ago (with an API like map.remove(map.get_iterator_to_oldest_inserted_element())), like a mix of a queue and a map. Is there any such container in the STL or Boost?
You can use boost::multi_index using the ordered_unique and sequence indices, as in this example.
#include <iostream>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/sequenced_index.hpp>
#include <boost/multi_index/member.hpp>
// Element type to store in container
struct Element
{
std::string key;
int value;
Element(const std::string& key, int value) : key(key), value(value) {}
};
namespace bmi = boost::multi_index;
// Boost multi_index container
typedef bmi::multi_index_container<
Element,
bmi::indexed_by<
bmi::ordered_unique< bmi::member<Element,std::string,&Element::key> >,
bmi::sequenced<> >
>
MyContainer;
typedef MyContainer::nth_index<1>::type BySequence;
// Helper function that returns a sequence view of the container.
BySequence& bySequence(MyContainer& container) {return container.get<1>();}
int main()
{
MyContainer container;
// Access container by sequence. Push back elements.
BySequence& sequence = bySequence(container);
sequence.push_back(Element("one", 1));
sequence.push_back(Element("two", 2));
sequence.push_back(Element("three", 3));
// Access container by key. Find an element.
// By default the container is accessed as nth_index<0>
MyContainer::const_iterator it = container.find("two");
if (it != container.end())
std::cout << it->value << "\n";
// Access container by sequence. Pop elements in a FIFO manner.
while (!sequence.empty())
{
std::cout << sequence.front().value << "\n";
sequence.pop_front();
}
}
You can set up a boost multi-index container to do this.
However, I have trouble understanding multi-index containers. I think it would be easier to roll my own class that has as members a std::queue and a std::map, and manage it myself.
There is nothing like that in Boost or the STL, but you can make a hybrid one yourself:
map<Key, Value> Map;
deque<Key> Queue;
void insert(const Key &k, const Value &v)
{
Map[k] = v;
Queue.push_back(k);
}
void pop_front()
{
const Key &k = Queue.front();
Map.erase(k);
Queue.pop_front();
}
If you only ever want to find the oldest (or newest -- or more generally the least/greatest by some specified criteria, and be able to remove that element), then a priority_queue will do the job.
boost::bimap can be used to create a unidirectional map using a list-based relation set.
typedef bimap<set_of<A>, unconstrained_set_of<B>, list_of_relation> custom_map_type;
custom_map_type map;
map.push_back(custom_map_type::value_type(A(), B()));
// delete the front of the list (the first inserted element if only using push_back)
map.pop_front();
// otherwise, map.left behaves a lot like a std::map<A,B>, with minor changes.