Using remove_edge() in boost graph library and push_back to vector container - boost-graph

I have just started learning the Boost Graph Library and have a basic problem using its remove_edge function.
I would like to remove an edge in a multigraph. For a graph with two edges connecting vertices 0 and 1, I would like to remove one of them but keep the other.
According to the user guide, remove_edge(u, v, g) removes all occurrences of (u, v), so I should use remove_edge(e, g) instead, where e is a valid edge descriptor.
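To illustrate the difference on a multigraph, here is a minimal self-contained sketch (an editorial illustration based only on the documented behavior of the two overloads):
#include <boost/graph/adjacency_list.hpp>
#include <cassert>
using namespace boost;
typedef adjacency_list<vecS, vecS, undirectedS> UndirectedGraph;
typedef graph_traits<UndirectedGraph>::edge_descriptor edge_descriptor;

int main()
{
    UndirectedGraph g;
    add_edge(0, 1, g);
    add_edge(0, 1, g);                           // parallel edge
    remove_edge(0, 1, g);                        // removes BOTH (0,1) edges
    assert(num_edges(g) == 0);

    UndirectedGraph h;
    add_edge(0, 1, h);
    edge_descriptor e = add_edge(0, 1, h).first; // descriptor for the second edge
    remove_edge(e, h);                           // removes only the edge described by e
    assert(num_edges(h) == 1);
    return 0;
}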
For a single graph, g, I am able to perform both the remove_edge(u, v, g) and remove_edge(e, g) operations without problems.
However, when I generalize the implementation to fit my needs by putting graphs in a vector container, I get a segmentation fault from remove_edge(e, graph_list[0]). (The compiler shows no warning messages.) On the other hand, remove_edge(u, v, g) works perfectly. I am not sure what difference between the two forms causes my problem.
I really want remove_edge(e, g) and the vector container to work at the same time, so any suggestion that helps me bypass this difficulty is appreciated.
Thanks a lot!!
Following is my test code.
#include <iostream>
#include <utility>
#include <algorithm>
#include <vector>
#include "boost/graph/graph_traits.hpp"
#include "boost/graph/adjacency_list.
hpp"
using namespace boost;
typedef adjacency_list<vecS, vecS, undirectedS> UndirectedGraph;
typedef boost::graph_traits<UndirectedGraph>::edge_iterator edge_iterator;
int graph_show(UndirectedGraph &g);
int main(int argc, char *argv[])
{
UndirectedGraph g;
typedef boost::graph_traits<UndirectedGraph>::edge_descriptor edge_descriptor;
edge_descriptor ed;
std::vector<edge_descriptor> edge_list;
std::vector<std::vector<edge_descriptor> > graph_edge_list;
std::vector<UndirectedGraph> graph_list;
int nb=0;
int Nl=4;
bool inserted;
while(nb<Nl)
{
tie(ed,inserted)=add_edge(nb,nb+1,g);
edge_list.push_back(ed);
tie(ed,inserted)=add_edge(nb,nb+1,g);
edge_list.push_back(ed);
graph_edge_list.push_back(edge_list);
nb=nb+1;
graph_list.push_back(g);
}
std::cout<<"size of the graph vector is: "<<graph_list.size()<<std::endl;
remove_edge(graph_edge_list[0][0],graph_list[0]);//This is where the problem shows.
//I got Segmentation fault (core dumped)
//remove_edge(0,1,graph_list[0]);
/*Remove edges by assigning vertices works fine, but that is not what I need.*/
for (int ig = 0; ig < Nl; ++ig) {
std::cout<<"graph#"<<ig<<std::endl;
std::cout<<"Size of edge_list is: "<<graph_edge_list[ig].size()<<std::endl;
graph_show(graph_list[ig]);
}
std::cout<<"Success"<<std::endl;
return 0;
}
int graph_show(UndirectedGraph &g)
{
std::cout<<"Number of edges is : "<<boost::num_edges(g)<<std::endl;
std::cout<<"Number of vertices is : "<<boost::num_vertices(g)<<std::endl;
std::pair<edge_iterator,edge_iterator> ei=edges(g);
for (edge_iterator edge_iter = ei.first; edge_iter!=ei.second; ++edge_iter) {
std::cout<<"("<< boost::source(*edge_iter,g)<<","<<boost::target(*edge_iter,g)<<")"<<std::endl;
}
typedef boost::graph_traits<UndirectedGraph>::vertex_iterator iter_v;
std::cout<<"vertices(g)={ ";
for (std::pair<iter_v,iter_v> p = vertices(g); p.first != p.second; ++p.first) {
std::cout<< *p.first;
std::cout<<" ";
}
std::cout<<"}"<<std::endl;
return 0;
}

This is not an answer, but I tried the code and it seems like the problem comes at:
g.m_edges.erase(edge_iter_to_erase);
in adjacency_list.hpp on line 771.
This seems like a bug. You may want to check the boost mailing list.
The context is:
template <>
struct remove_undirected_edge_dispatch<no_property> {
// O(E/V)
template <class edge_descriptor, class Config>
static void
apply(edge_descriptor e,
undirected_graph_helper<Config>& g_,
no_property&)
{
typedef typename Config::global_edgelist_selector EdgeListS;
BOOST_STATIC_ASSERT((!is_same<EdgeListS, vecS>::value));
typedef typename Config::graph_type graph_type;
graph_type& g = static_cast<graph_type&>(g_);
no_property* p = (no_property*)e.get_property();
typename Config::OutEdgeList& out_el = g.out_edge_list(source(e, g));
typename Config::OutEdgeList::iterator out_i = out_el.begin();
typename Config::EdgeIter edge_iter_to_erase;
for (; out_i != out_el.end(); ++out_i)
if (&(*out_i).get_property() == p) {
edge_iter_to_erase = (*out_i).get_iter();
out_el.erase(out_i);
break;
}
typename Config::OutEdgeList& in_el = g.out_edge_list(target(e, g));
typename Config::OutEdgeList::iterator in_i = in_el.begin();
for (; in_i != in_el.end(); ++in_i)
if (&(*in_i).get_property() == p) {
in_el.erase(in_i);
break;
}
g.m_edges.erase(edge_iter_to_erase);
}
};
When I step through with the debugger, it shows that edge_iter_to_erase = (*out_i).get_iter(); is never executed, so erase is called with an uninitialized (invalid) iterator. This is what causes the undefined behavior (I am guessing).
If you use just g instead of graph_list[0] it works fine as well.
Take a look here for a similar problem on the mailing list. They use a different adjacency_list.
If I alter the graph to be:
typedef adjacency_list<vecS, vecS, undirectedS,no_property,no_property,no_property,setS> UndirectedGraph;
as in that post, I get an access read violation instead of a seg fault.
Seems like a bug, but I have no idea why this happens for the graph stored in the vector and not for the variable g.
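A plausible explanation, judging from the dispatch code quoted above (an assumption on my part, not confirmed beyond this snippet): an edge descriptor identifies its edge by the address of the edge property (the &(*out_i).get_property() == p comparison), and graph_list.push_back(g) stores a copy of g whose properties live at different addresses. A descriptor taken from g therefore matches nothing in the copy, the loop never assigns edge_iter_to_erase, and the final erase runs on an uninitialized iterator, which fits the debugger observation. Under that assumption, a minimal workaround is to look the edge up in the copy itself instead of reusing a descriptor from the original:
UndirectedGraph &gc = graph_list[0];   // the copy we actually want to modify
typedef boost::graph_traits<UndirectedGraph>::out_edge_iterator out_iter;
out_iter oi, oi_end;
for (boost::tie(oi, oi_end) = out_edges(0, gc); oi != oi_end; ++oi) {
    if (target(*oi, gc) == 1) {        // first (0,1) edge found in the copy
        remove_edge(*oi, gc);          // this descriptor belongs to gc, so it is valid
        break;                         // stop: iterators are invalidated by the removal
    }
}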

Related

"Cannot access memory at address" on array of vectors

I am trying to make a graph in C++, and I have almost no code, but I get a weird error. If I just run the code, I get Process finished with exit code 0, but if I look at the debugger (specifically, if I try to inspect my graph object), I see Cannot access memory at address 0x....
I am new to C++, so I cannot really tell what is giving me this error. Also, I barely have any code yet, and these few lines are taken from my previous program, which worked without this problem.
Anyway, I have a vertex class:
#ifndef P2_VERTEX_H
#define P2_VERTEX_H
class vertex {
private:
int id;
public:
explicit vertex(int id) { this->id = id; }
int get_id() { return id; }
};
#endif //P2_VERTEX_H
And then a graph header:
#ifndef P2_GRAPH_H
#define P2_GRAPH_H
#include <vector>
#include "vertex.h"
class graph {
private:
int N; // number of vertices
int M; // number of edges
std::vector<vertex*> vertices; // vector of vertices
std::vector<vertex*> *adj; // ARRAY of vectors of vertices
public:
graph(int n_vert);
graph(bool from_file, const char *file_name);
};
with implementation of graph:
#include "graph.h"
#include <iostream>
#include <fstream>
#include <sstream>
graph::graph(int n_vert) {
N = n_vert;
}
And I instantiate graph as:
#include "graph.h"
int main() {
graph g = graph(4);
return 0;
}
Specifically, I get this error if I uncomment std::vector<vertex*> *adj; in the graph header. While I realise that this is probably not the perfect way of storing an adjacency list, I fail to see why it gives me the error I mentioned, especially since I used it before, just with std::vector<edge*> instead of std::vector<vertex*>, where edge was some struct. I also tried std::vector<vertex> instead of std::vector<vertex*>, but I get the same error.
Update:
If I initialize adj in constructor:
adj = new std::vector<vertex*>[N];
I get Duplicate variable object name in debugger after reaching this line.
The issue is that you never initialized adj, so it points to a random location in memory.
Initialize the pointer to nullptr to get rid of the error.
For example, in the constructor initialization list:
graph::graph(int n_vert) : N(n_vert), adj(nullptr)
{}
By the way, you also forgot to initialize the other data members.
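For completeness, a fuller sketch (my own, folding in the allocation from the update) that initializes every member:
graph::graph(int n_vert)
    : N(n_vert),
      M(0),                                   // no edges yet
      vertices(),                             // empty vector of vertex*
      adj(new std::vector<vertex*>[n_vert])   // one adjacency bucket per vertex
{}
// Remember to delete[] adj in a destructor, or better, replace the raw
// pointer with std::vector<std::vector<vertex*>> and drop new/delete entirely.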

Building asynchronous `future` callback chain from compile-time dependency graph (DAG)

I have a compile-time directed acyclic graph of asynchronous tasks. The DAG shows the dependencies between the tasks: by analyzing it, it's possible to understand what tasks can run in parallel (in separate threads) and what tasks need to wait for other tasks to finish before they can begin (dependencies).
I want to generate a callback chain from the DAG, using boost::future and the .then(...), when_all(...) continuation helper functions. The result of this generation will be a function that, when called, will start the callback chain and execute the tasks as described by the DAG, running as many tasks as possible in parallel.
I'm having trouble, however, finding a general algorithm that can work for all cases.
I made a few drawings to make the problem easier to understand, along with a legend for the symbols they use.
Let's begin with a simple, linear DAG:
This dependency graph consists of three tasks (A, B, and C). C depends on B. B depends on A. There is no possibility of parallelism here - the generation algorithm would build something similar to this:
boost::future<void> A, B, C, end;
A.then([]
{
B.then([]
{
C.get();
end.get();
});
});
(Note that all code samples are not 100% valid - I'm ignoring move semantics, forwarding and lambda captures.)
There are many approaches to solve this linear DAG: either by starting from the end or the beginning, it's trivial to build a correct callback chain.
Things start to get more complicated when forks and joins are introduced.
Here's a DAG with a fork/join:
It's difficult to think of a callback chain that matches this DAG. If I try to work backwards, starting from the end, my reasoning is as follows:
end depends on B and D. (join)
D depends on C.
B and C depend on A. (fork)
A possible chain looks something like this:
boost::future<void> A, B, C, D, end;
A.then([]
{
boost::when_all(B, C.then([]
{
D.get();
}))
.then([]
{
end.get();
});
});
I found it difficult to write this chain by hand, and I'm also doubtful about its correctness. I could not think of a general way to implement an algorithm that could generate this; an additional difficulty is that when_all needs its arguments to be moved into it.
Let's see one last, even more complex, example:
Here we want to exploit parallelism as much as possible. Consider task E: E can be run in parallel with any of [B, C, D].
This is a possible callback chain:
boost::future<void> A, B, C, D, E, F, end;
A.then([]
{
boost::when_all(boost::when_all(B, C).then([]
{
D.get();
}),
E)
.then([]
{
F.then([]
{
end.get();
});
});
});
I've tried to come up with a general algorithm in several ways:
Starting from the beginning of the DAG, trying to build up the chain using .then(...) continuations. This doesn't work with joins, as the target join task would be repeated multiple times.
Starting from the end of the DAG, trying to generate the chain using when_all(...) continuations. This fails with forks, as the node that creates the fork is repeated multiple times.
Obviously the "breadth-first traversal" approach doesn't work well here. From the code samples that I have hand-written, it seems that the algorithm needs to be aware of forks and joins, and needs to be able to correctly mix .then(...) and when_all(...) continuations.
Here are my final questions:
Is it always possible to generate a future-based callback chain from a DAG of task dependencies, where every task appears only once in the callback chain?
If so, how can a general algorithm be implemented that, given a task-dependency DAG, builds such a callback chain?
EDIT 1:
Here's an additional approach I'm trying to explore.
The idea is to generate a ([dependencies...] -> [dependents...]) map data structure from the DAG, and to generate the callback chain from that map.
If len(dependencies...) > 1, then value is a join node.
If len(dependents...) > 1, then key is a fork node.
All the key-value pairs in the map can be expressed as when_all(keys...).then(values...) continuations.
The difficult part is figuring out the correct order in which to "expand" (think about something similar to a parser) the nodes and how to connect the fork/join continuations together.
Consider the following map, generated by image 4.
dependencies | dependents
-------------|------------
[F] : [end]
[D, E] : [F]
[B, C] : [D]
[A] : [E, C, B]
[begin] : [A]
By applying some sort of parser-like reductions/passes, we can get a "clean" callback chain:
// First pass:
// Convert everything to `when_all(...).then(...)` notation
when_all(F).then(end)
when_all(D, E).then(F)
when_all(B, C).then(D)
when_all(A).then(E, C, B)
when_all(begin).then(A)
// Second pass:
// Solve linear (trivial) transformations
when_all(D, E).then(
when_all(F).then(end)
)
when_all(B, C).then(D)
when_all(
when_all(begin).then(A)
).then(E, C, B)
// Third pass:
// Solve fork/join transformations
when_all(
when_all(begin).then(A)
).then(
when_all(
E,
when_all(B, C).then(D)
).then(
when_all(F).then(end)
)
)
The third pass is the most important one, but also the one that looks really difficult to design an algorithm for.
Notice how [B, C] have to be found inside the [E, C, B] list, and how, in the [D, E] dependency list, D must be interpreted as the result of when_all(B, C).then(D) and chained together with E in when_all(E, when_all(B, C).then(D)).
Maybe the entire problem can be simplified as:
Given a map consisting of [dependencies...] -> [dependents...] key value pairs, how could an algorithm that transforms those pairs to a when_all(...)/.then(...) continuation chain be implemented?
EDIT 2:
Here's some pseudocode I came up with for the approach described above. It seems to work for the DAG I tried, but I need to spend more time on it and "mentally" test it with other, trickier, DAG configurations.
The easiest way is to start from the entry node of the graph, as if you were writing the code by hand. To solve the join problem, you cannot use a recursive solution: you need a topological ordering of your graph, and you then build the chain according to that ordering.
This guarantees that when you build a node, all of its predecessors have already been created.
To achieve this goal we can use a DFS with reverse postordering.
Once you have a topological ordering, you can forget the original node IDs and refer to nodes by their position in the list. To do that you need to create a compile-time map that lets you retrieve a node's predecessors using its index in the topological ordering instead of its original node index.
EDIT: Following up on how to implement topological sorting at compile time, I refactored this answer.
To be on the same page I will assume that your graph looks like this:
struct mygraph
{
template<int Id>
static constexpr auto successors(node_id<Id>) ->
list< node_id<> ... >; //List of successors for the input node
template<int Id>
static constexpr auto predecessors(node_id<Id>) ->
list< node_id<> ... >; //List of predecessors for the input node
//Get the task associated with the given node.
template<int Id>
static constexpr auto task(node_id<Id>);
using entry_node = node_id<0>;
};
Step 1: topological sort
The basic ingredient that you need is a compile-time set of node ids. In TMP a set is also a list, simply because in set<Ids...> the order of the Ids matters. This means that you can use the same data structure to encode both whether a node has been visited AND the resulting ordering.
/** Topological sort using DFS with reverse-postordering **/
template<class Graph>
struct topological_sort
{
private:
struct visit;
// If we reach a node that we already visited, do nothing.
template<int Id, int ... Is>
static constexpr auto visit_impl( node_id<Id>,
set<Is...> visited,
std::true_type )
{
return visited;
}
// This overload kicks in when node has not been visited yet.
template<int Id, int ... Is>
static constexpr auto visit_impl( node_id<Id> node,
set<Is...> visited,
std::false_type )
{
// Get the list of successors for the current node
constexpr auto succ = Graph::successors(node);
// Reverse postordering: we call insert *after* visiting the successors
// This will call "visit" on each successor, updating the
// visited set after each step.
// Then we insert the current node in the set.
// Notice that if the graph is cyclic we end up in an infinite
// recursion here.
return fold( succ,
visited,
visit() ).insert(node);
// Conventional DFS would be:
// return fold( succ, visited.insert(node), visit() );
}
struct visit
{
// Dispatch to visit_impl depending on the result of visited.contains(node)
// Note that "contains" returns a type convertible to
// integral_constant<bool,x>
template<int Id, int ... Is>
constexpr auto operator()( set<Is...> visited, node_id<Id> node ) const
{
return visit_impl(node, visited, visited.contains(node) );
}
};
public:
template<int StartNodeId>
static constexpr auto compute( node_id<StartNodeId> node )
{
// Start visiting from the entry node
// The set of visited nodes is initially empty.
// "as_list" converts set<Is ... > to list< node_id<Is> ... >.
return reverse( visit()( set<>{}, node ).as_list() );
}
};
This algorithm with the graph from your last example (assuming A = node_id<0>, B = node_id<1>, etc.), produces list<A,B,C,D,E,F>.
Step 2: graph map
This is simply an adapter that modifies the Id of each node in your graph according to a given ordering. So assuming that previous steps returned list<C,D,A,B>, this graph_map would map the index 0 to C, index 1 to D, etc.
template<class Graph, class List>
class graph_map
{
// Convert a node_id from underlying graph.
// Use a function-object so that it can be passed to algorithms.
struct from_underlying
{
template<int I>
constexpr auto operator()(node_id<I> id)
{ return node_id< find(id, List{}) >{}; }
};
struct to_underlying
{
template<int I>
constexpr auto operator()(node_id<I> id)
{ return get<I>(List{}); }
};
public:
template<int Id>
static constexpr auto successors( node_id<Id> id )
{
constexpr auto orig_id = to_underlying()(id);
constexpr auto orig_succ = Graph::successors( orig_id );
return transform( orig_succ, from_underlying() );
}
template<int Id>
static constexpr auto predecessors( node_id<Id> id )
{
constexpr auto orig_id = to_underlying()(id);
constexpr auto orig_succ = Graph::predecessors( orig_id );
return transform( orig_succ, from_underlying() );
}
template<int Id>
static constexpr auto task( node_id<Id> id )
{
return Graph::task( to_underlying()(id) );
}
using entry_node = decltype( from_underlying()( typename Graph::entry_node{} ) );
};
Step 3: assemble the result
We can now iterate over each node id in order. Thanks to the way we built the graph map, we know that all the predecessors of I have a node id which is less than I, for every possible node I.
// Returns a tuple<> of futures
template<class GraphMap, class ... Ts>
auto make_cont( std::tuple< future<Ts> ... > && pred )
{
// The next node to work with is N:
constexpr auto current_node = node_id< sizeof ... (Ts) >();
// Get a list of all the predecessors for the current node.
auto indices = GraphMap::predecessors( current_node );
// "select" is some magic function that takes a tuple of Ts
// and an index_sequence, and returns a tuple of references to the elements
// from the input tuple that are in the indices list.
auto futures = select( pred, indices );
// Assuming you have an overload of when_all that takes a tuple,
// otherwise use C++17 apply.
auto join = when_all( futures );
// Note: when_all with an empty parameter list returns a future< tuple<> >,
// which is always ready.
// In general this has to be a shared_future, but you can avoid that
// by checking if this node has only one successor.
auto next = join.then( GraphMap::task( current_node ) ).share();
// Return a new tuple of futures, pushing the new future at the back.
return std::tuple_cat( std::move(pred),
std::make_tuple(std::move(next)) );
}
// Returns a tuple of futures, you can take the last element if you
// know that your DAG has only one leaf, or do some additional
// processing to extract only the leaf nodes.
template<class Graph>
auto make_callback_chain()
{
constexpr auto entry_node = typename Graph::entry_node{};
constexpr auto sorted_list =
topological_sort<Graph>::compute( entry_node );
using map = graph_map< Graph, decltype(sorted_list) >;
// Note: we are not really using the "index" in the functor here,
// we only want to call make_cont once for each node in the graph
return fold( sorted_list,
std::make_tuple(), //Start with an empty tuple
[]( auto && tuple, auto index )
{
return make_cont<map>(std::move(tuple));
} );
}
Full live demo
If redundant dependencies may occur, remove them first (see e.g. https://mathematica.stackexchange.com/questions/33638/remove-redundant-dependencies-from-a-directed-acyclic-graph).
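For illustration, a naive sketch of such a reduction (my own, assuming the DAG is small and held as adjacency sets of integer ids): drop every edge u->v for which some longer u->v path exists.
#include <cstddef>
#include <set>
#include <vector>

// True if "to" can be reached from "from" by a path of length >= 2,
// i.e. without using the direct edge.
static bool reachable_indirectly(const std::vector<std::set<int>> &adj,
                                 int from, int to)
{
    std::vector<int> stack;
    std::vector<bool> seen(adj.size(), false);
    for (int s : adj[from])
        if (s != to) stack.push_back(s);   // skip the direct edge
    while (!stack.empty()) {
        int u = stack.back(); stack.pop_back();
        if (u == to) return true;
        if (seen[u]) continue;
        seen[u] = true;
        for (int s : adj[u]) stack.push_back(s);
    }
    return false;
}

// In a DAG the transitive reduction is unique, and redundancy can be
// judged against the original edge set, so we iterate over a copy.
void transitive_reduction(std::vector<std::set<int>> &adj)
{
    const std::vector<std::set<int>> orig = adj;
    for (std::size_t u = 0; u < orig.size(); ++u)
        for (int v : orig[u])
            if (reachable_indirectly(orig, static_cast<int>(u), v))
                adj[u].erase(v);
}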
Then perform the following graph transformations (building sub-expressions in merged nodes) until you are down to a single node (in a way similar to how you'd calculate a network of resistors):
*: Additional incoming or outgoing dependencies, depending on placement
(...): Expression in a single node
Java code including setup for your more complex example:
public class DirectedGraph {
/** Set of all nodes in the graph */
static Set<Node> allNodes = new LinkedHashSet<>();
static class Node {
/** Set of all preceeding nodes */
Set<Node> prev = new LinkedHashSet<>();
/** Set of all following nodes */
Set<Node> next = new LinkedHashSet<>();
String value;
Node(String value) {
this.value = value;
allNodes.add(this);
}
void addPrev(Node other) {
prev.add(other);
other.next.add(this);
}
/** Returns one of the next nodes */
Node anyNext() {
return next.iterator().next();
}
/** Merges this node with other, then removes other */
void merge(Node other) {
prev.addAll(other.prev);
next.addAll(other.next);
for (Node on: other.next) {
on.prev.remove(other);
on.prev.add(this);
}
for (Node op: other.prev) {
op.next.remove(other);
op.next.add(this);
}
prev.remove(this);
next.remove(this);
allNodes.remove(other);
}
public String toString() {
return value;
}
}
/** 
* Merges sequential or parallel nodes following the given node.
* Returns true if any node was merged.
*/
public static boolean processNode(Node node) {
// Check if we are the start of a sequence. Merge if so.
if (node.next.size() == 1 && node.anyNext().prev.size() == 1) {
Node then = node.anyNext();
node.value += " then " + then.value;
node.merge(then);
return true;
}
// See if any of the next nodes has a parallel node with
// the same one level indirect target.
for (Node next : node.next) {
// Nodes must have only one in and out connection to be merged.
if (next.prev.size() == 1 && next.next.size() == 1) {
// Collect all parallel nodes with only one in and out connection
// and the same target; the same source is implied by iterating over
// node.next again.
Node target = next.anyNext();
Set<Node> parallel = new LinkedHashSet<Node>();
for (Node other: node.next) {
if (other != next && other.prev.size() == 1
&& other.next.size() == 1 && other.anyNext() == target) {
parallel.add(other);
}
}
// If we have found any "parallel" nodes, merge them
if (parallel.size() > 0) {
StringBuilder sb = new StringBuilder("all(");
sb.append(next.value);
for (Node other: parallel) {
sb.append(", ").append(other.value);
next.merge(other);
}
sb.append(")");
next.value = sb.toString();
return true;
}
}
}
return false;
}
public static void main(String[] args) {
Node a = new Node("A");
Node b = new Node("B");
Node c = new Node("C");
Node d = new Node("D");
Node e = new Node("E");
Node f = new Node("F");
f.addPrev(d);
f.addPrev(e);
e.addPrev(a);
d.addPrev(b);
d.addPrev(c);
b.addPrev(a);
c.addPrev(a);
boolean anyChange;
do {
anyChange = false;
for (Node node: allNodes) {
if (processNode(node)) {
anyChange = true;
// We need to leave the inner loop here because changes
// invalidate the for iteration.
break;
}
}
// We are done if we can't find any node to merge.
} while (anyChange);
System.out.println(allNodes.toString());
}
}
Output: A then all(E, all(B, C) then D) then F
This seems reasonably easy if you stop thinking about it in the form of explicit dependencies and organizing a DAG. Every task can be organized in something like the following (C#, because it's so much simpler to explain the idea):
class MyTask
{
// a list of all tasks that depend on this to be finished
private readonly ICollection<MyTask> _dependenants;
// number of not finished dependencies of this task
private int _nrDependencies;
public int NrDependencies
{
get { return _nrDependencies; }
private set { _nrDependencies = value; }
}
}
If you have organized your DAG in such a form, the problem is actually really simple: every task where _nrDependencies == 0 can be executed. So we need a run method that looks something like the following:
public async Task RunTask()
{
// Execute actual code of the task.
var tasks = new List<Task>();
foreach (var dependent in _dependenants)
{
if (Interlocked.Decrement(ref dependent._nrDependencies) == 0)
{
tasks.Add(Task.Run(() => dependent.RunTask()));
}
}
await Task.WhenAll(tasks);
}
Basically, as soon as our task has finished, we go through all our dependents and execute all of those that have no more unfinished dependencies.
To start the whole thing off, the only thing you have to do is call RunTask() for all tasks that have zero dependencies to start with (at least one of those must exist since we have a DAG). As soon as all of these tasks have finished, we know that the whole DAG has been executed.
This graph is not constructed at compile time, but it is not clear to me whether that is a requirement. The graph is held in a Boost graph implemented as adjacency_list<vecS, vecS, bidirectionalS>. A single dispatch will start the tasks. We just need the in-edges at each node, so that we know what we are waiting on; these are pre-calculated at instantiation in the scheduler below.
I contend that a full topological sort is not needed.
For an example of a fork-style dependency graph, see the one set up in scheduler_driver.cpp below. For a join, just redefine the Graph to define the directed edges accordingly.
So, to answer your two questions:
1. Yes, for a DAG. Only the unique immediate dependencies are needed for each node, and these can be pre-computed as below. The dependency chain can then be initiated with a single dispatch, and the domino chain falls.
2. Yes, see the algorithm below (using C++11 threads, not boost::thread). For forks, a shared_future is required for the communication, while joins are supported with future-based communication.
scheduler_driver.hpp:
#ifndef __SCHEDULER_DRIVER_HPP__
#define __SCHEDULER_DRIVER_HPP__
#include <iostream>
#include <ostream>
#include <iterator>
#include <vector>
#include <chrono>
#include "scheduler.h"
#endif
scheduler_driver.cpp:
#include "scheduler_driver.hpp"
enum task_nodes
{
task_0,
task_1,
task_2,
task_3,
task_4,
task_5,
task_6,
task_7,
task_8,
task_9,
N
};
int basic_task(int a, int d)
{
std::chrono::milliseconds sleepDuration(d);
std::this_thread::sleep_for(sleepDuration);
std::cout << "Result: " << a << "\n";
return a;
}
using namespace SCHEDULER;
int main(int argc, char **argv)
{
using F = std::function<int()>;
Graph deps(N);
boost::add_edge(task_0, task_1, deps);
boost::add_edge(task_0, task_2, deps);
boost::add_edge(task_0, task_3, deps);
boost::add_edge(task_1, task_4, deps);
boost::add_edge(task_1, task_5, deps);
boost::add_edge(task_1, task_6, deps);
boost::add_edge(task_2, task_7, deps);
boost::add_edge(task_2, task_8, deps);
boost::add_edge(task_2, task_9, deps);
std::vector<F> tasks =
{
std::bind(basic_task, 0, 1000),
std::bind(basic_task, 1, 1000),
std::bind(basic_task, 2, 1000),
std::bind(basic_task, 3, 1000),
std::bind(basic_task, 4, 1000),
std::bind(basic_task, 5, 1000),
std::bind(basic_task, 6, 1000),
std::bind(basic_task, 7, 1000),
std::bind(basic_task, 8, 1000),
std::bind(basic_task, 9, 1000)
};
auto s = std::make_unique<scheduler<int>>(std::move(deps), std::move(tasks));
s->doit();
return 0;
}
scheduler.h:
#ifndef __SCHEDULER2_H__
#define __SCHEDULER2_H__
#include <iostream>
#include <vector>
#include <iterator>
#include <functional>
#include <algorithm>
#include <mutex>
#include <thread>
#include <future>
#include <boost/graph/graph_traits.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <boost/graph/visitors.hpp>
using namespace boost;
namespace SCHEDULER
{
using Graph = adjacency_list<vecS, vecS, bidirectionalS>;
using Edge = graph_traits<Graph>::edge_descriptor;
using Vertex = graph_traits<Graph>::vertex_descriptor;
using VertexCont = std::vector<Vertex>;
using outIt = graph_traits<Graph>::out_edge_iterator;
using inIt = graph_traits<Graph>::in_edge_iterator;
template<typename R>
class scheduler
{
public:
using ret_type = R;
using fun_type = std::function<R()>;
using prom_type = std::promise<ret_type>;
using fut_type = std::shared_future<ret_type>;
scheduler() = default;
scheduler(const Graph &deps_, const std::vector<fun_type> &tasks_) :
g(deps_),
tasks(tasks_) { init_();}
scheduler(Graph&& deps_, std::vector<fun_type>&& tasks_) :
g(std::move(deps_)),
tasks(std::move(tasks_)) { init_(); }
scheduler(const scheduler&) = delete;
scheduler& operator=(const scheduler&) = delete;
void doit();
private:
void init_();
std::list<Vertex> get_sources(const Vertex& v);
auto task_thread(fun_type&& f, int i);
Graph g;
std::vector<fun_type> tasks;
std::vector<prom_type> prom;
std::vector<fut_type> fut;
std::vector<std::thread> th;
std::vector<std::list<Vertex>> sources;
};
template<typename R>
void
scheduler<R>::init_()
{
int num_tasks = tasks.size();
prom.resize(num_tasks);
fut.resize(num_tasks);
// Get the futures
for(size_t i=0;
i<num_tasks;
++i)
{
fut[i] = prom[i].get_future();
}
// Predetermine in_edges for faster traversal
sources.resize(num_tasks);
for(size_t i=0;
i<num_tasks;
++i)
{
sources[i] = get_sources(i);
}
}
template<typename R>
std::list<Vertex>
scheduler<R>::get_sources(const Vertex& v)
{
std::list<Vertex> r;
Vertex v1;
inIt j, j_end;
boost::tie(j,j_end) = in_edges(v, g);
for(;j != j_end;++j)
{
v1 = source(*j, g);
r.push_back(v1);
}
return r;
}
template<typename R>
auto
scheduler<R>::task_thread(fun_type&& f, int i)
{
auto j_beg = sources[i].begin(),
j_end = sources[i].end();
for(;
j_beg != j_end;
++j_beg)
{
R val = fut[*j_beg].get();
}
return std::thread([this](fun_type f, int i)
{
prom[i].set_value(f());
},f,i);
}
template<typename R>
void
scheduler<R>::doit()
{
size_t num_tasks = tasks.size();
th.resize(num_tasks);
for(int i=0;
i<num_tasks;
++i)
{
th[i] = task_thread(std::move(tasks[i]), i);
}
for_each(th.begin(), th.end(), mem_fn(&std::thread::join));
}
} // namespace SCHEDULER
#endif
I'm not sure what your setup is or why you need to build a DAG, but I think a simple greedy algorithm may suffice:
when (some task has finished) {
    mark its output resources done;
    find all tasks that can now be run;
    post them to the thread pool;
}
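A minimal C++11 sketch of that greedy loop (my own illustration; all names here are hypothetical, not an established API). Each task carries an atomic count of unfinished dependencies, and whichever finishing task drops a dependent's count to zero posts that dependent:
#include <atomic>
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <vector>

struct GreedyTask {
    std::function<void()> work;
    std::vector<GreedyTask*> dependents;  // tasks waiting on this one
    std::atomic<int> unfinished_deps{0};  // dependencies not yet finished
};                                        // note: tasks must live at stable addresses

class GreedyScheduler {
    std::mutex m;
    std::condition_variable cv;
    int remaining;                        // tasks not yet finished
    std::vector<std::future<void>> inflight;
public:
    explicit GreedyScheduler(int total_tasks) : remaining(total_tasks) {}
    void submit(GreedyTask &t) {          // post a ready task "to the thread pool"
        std::lock_guard<std::mutex> lk(m);
        inflight.push_back(std::async(std::launch::async, [this, &t] { run(t); }));
    }
    void wait() {                         // block until the whole DAG is done
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return remaining == 0; });
    }
private:
    void run(GreedyTask &t) {
        t.work();                           // "some task has finished"
        for (GreedyTask *d : t.dependents)  // find all tasks that can now run
            if (--d->unfinished_deps == 0)
                submit(*d);
        std::lock_guard<std::mutex> lk(m);
        if (--remaining == 0) cv.notify_one();
    }
};
To start it, call submit() for every task whose unfinished_deps is initially zero, then call wait().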
Consider using Intel's TBB Flow Graph library.
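For what it's worth, a minimal sketch of the fork/join example from the question expressed with TBB's flow graph (assuming the classic tbb/flow_graph.h header; a continue_node body runs once the node has received a message from every predecessor):
#include <tbb/flow_graph.h>
#include <iostream>

int main()
{
    using namespace tbb::flow;
    graph g;
    continue_node<continue_msg> a(g, [](const continue_msg &) { std::cout << "A\n"; });
    continue_node<continue_msg> b(g, [](const continue_msg &) { std::cout << "B\n"; });
    continue_node<continue_msg> c(g, [](const continue_msg &) { std::cout << "C\n"; });
    continue_node<continue_msg> d(g, [](const continue_msg &) { std::cout << "D\n"; });
    continue_node<continue_msg> e(g, [](const continue_msg &) { std::cout << "E\n"; });
    continue_node<continue_msg> f(g, [](const continue_msg &) { std::cout << "F\n"; });
    // The DAG from the question: A forks to B, C and E;
    // B and C join into D; D and E join into F.
    make_edge(a, b); make_edge(a, c); make_edge(a, e);
    make_edge(b, d); make_edge(c, d);
    make_edge(d, f); make_edge(e, f);
    a.try_put(continue_msg());  // start the chain
    g.wait_for_all();           // block until the whole DAG has run
    return 0;
}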

How to copy boost::property_map?

I would like to obtain two sets of shortest paths (one-to-all) derived from one graph, defined as an adjacency_list with internal properties (as opposed to bundles).
In theory I could run dijkstra_shortest_paths on two reference nodes, n1 and n2. If I create two property_maps and pass them in sequence to dijkstra_..., I get what looks like two views of the same map: both point to the result of the last run of dijkstra_shortest_paths, so the older result is gone. What should I do to achieve the desired result?
// Define some property maps
property_map<ugraph_t,edge_weight_t>::type Weight=get(edge_weight,G);
property_map<ugraph_t,vertex_distance_t>::type Dist1=get(vertex_distance,G);
property_map<ugraph_t,vertex_predecessor_t>::type Prev1=get(vertex_predecessor,G);
// One line later, I expect this to be mapped to the SPs w.r.t. n1
// Run SP on the first node
dijkstra_shortest_paths(G,n1,predecessor_map(Prev1).distance_map(Dist1).weight_map(Weight));
// New property maps
property_map<ugraph_t,vertex_distance_t>::type Dist2(Dist1); // And now to the second set
property_map<ugraph_t,vertex_predecessor_t>::type Prev2(Prev1); // But no two sets will result...
// Run SP on the second node
// This will run fine, but I will lose the first SP set (with or without a copy constructor above)
dijkstra_shortest_paths(G,n2,predecessor_map(Prev2).distance_map(Dist2).weight_map(Weight));
CONCLUSION: If I am not mistaken, a property_map can be thought of as an interface with an iterator, so copying property_maps makes no sense. The solution is to pass a custom container, constructed on the fly. That solution is detailed in the answer by @sehe below, for which many thanks!
NOTE: This only works if the vertex container type is vecS. With listS one has to "manually" copy vertex-by-vertex.
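For the listS case mentioned in the note, a sketch of that manual copy (my own, assuming a hypothetical variant of ugraph_t declared with listS vertex storage, so descriptors rather than indices key the results):
// typedef adjacency_list<vecS, listS, directedS, ...> ugraph_t;  // hypothetical
std::map<ugraph_t::vertex_descriptor, double> saved_distances;   // needs <map>
BGL_FORALL_VERTICES(v, G, ugraph_t)                              // needs iteration_macros.hpp
    saved_distances[v] = Dist1[v];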
The distance map is not supposed to be an interior property.
Same goes for the predecessor map.
They are not logically properties of the graph; they are the result of a query. As such, they are a property of a combination of query parameters, including the graph, the starting node, etc.
If you want to save the value of an interior property, just save it in any way you normally would:
std::vector<double> saved_distances(num_vertices(G));
BGL_FORALL_VERTICES(v, G, ugraph_t)
    saved_distances[v] = Dist1[v];
Workaround
The workaround with copying the maps:
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/properties.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <boost/graph/iteration_macros.hpp>
using namespace boost;
using ugraph_traits = graph_traits<adjacency_list<vecS, vecS, directedS> >;
using ugraph_t = adjacency_list<
vecS, vecS, directedS,
property<vertex_distance_t, double,
property<vertex_predecessor_t, ugraph_traits::vertex_descriptor>
>,
property<edge_weight_t, double>
>;
int main() {
ugraph_t G(10);
ugraph_t::vertex_descriptor n1 = 0, n2 = 1, v;
(void) n1;
(void) n2;
// ...
property_map<ugraph_t, edge_weight_t>::type Weight = get(edge_weight,G);
property_map<ugraph_t, vertex_distance_t>::type Dist1 = get(vertex_distance,G);
property_map<ugraph_t, vertex_predecessor_t>::type Prev1 = get(vertex_predecessor,G);
dijkstra_shortest_paths(G, n1,
predecessor_map(Prev1)
.distance_map(Dist1)
.weight_map(Weight)
);
std::vector<double> saved_distances(num_vertices(G));
std::vector<ugraph_t::vertex_descriptor> saved_predecessors(num_vertices(G));
BGL_FORALL_VERTICES(v, G, ugraph_t) {
    saved_distances[v] = Dist1[v];
    saved_predecessors[v] = Prev1[v];
}
/*
* // C++11 style
* for(auto v : make_iterator_range(vertices(G)))
* saved_distances[v] = Dist1[v];
*/
// Run SP on the second node
dijkstra_shortest_paths(G,n2,predecessor_map(Prev1).distance_map(Dist1).weight_map(Weight));
}
Suggested
I'd suggest making the result maps separate containers, leaving only the edge weight interior:
Live On Coliru
Better Yet: refactor to remove duplicated code
So it just becomes
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
using namespace boost;
using ugraph_t = adjacency_list<vecS, vecS, directedS, no_property, property<edge_weight_t, double> >;
using Vertex = ugraph_t::vertex_descriptor;
struct ShortestPaths {
ShortestPaths(size_t num_vertices);
std::vector<double> distances;
std::vector<Vertex> predecessors;
};
ShortestPaths GetShortestPaths(ugraph_t const& G, Vertex start);
int main() {
ugraph_t G(10);
Vertex n1 = 0, n2 = 1;
ShortestPaths sp1 = GetShortestPaths(G, n1);
ShortestPaths sp2 = GetShortestPaths(G, n2);
}
// some other cpp file...:
ShortestPaths::ShortestPaths(size_t num_vertices)
: distances(num_vertices), predecessors(num_vertices)
{ }
ShortestPaths GetShortestPaths(ugraph_t const& G, Vertex start) {
ShortestPaths result(num_vertices(G));
dijkstra_shortest_paths(G, start,
predecessor_map(make_container_vertex_map(result.predecessors, G))
.distance_map (make_container_vertex_map(result.distances, G))
.weight_map (get(edge_weight, G))
);
return result;
}
Note there is no more need to copy the results. In fact, you don't even need to keep the graph around to keep the result of your query.

Compiler is giving errors when I try to print the contents of a vector?

Basically, I have a .h file in which I have defined a function to generate an element buffer and a squashed list of vertices (for OpenGL), given an unoptimized list of vertices. I ran into some problems though: it turns out that I can't actually access the contents of a vector that I pass to the method. My code is as follows:
#ifndef LEARNOPENGL_COMMON_H
#define LEARNOPENGL_COMMON_H
#include "ContextBase.h" // this includes all the OpenGL stuff
#include "vector"
#include "iostream"
class common {
public:
template<typename V>
static bool are_equal(int size, V* v1, V* v2) {
for (int x = 0; x < size; x++) {
//if (v1[x] != v2[x]) return false;
}
return true;
}
template<typename V, typename E>
static void GenOptimizedArrays(const int vertex_size, std::vector<V>* vertex_source,
std::vector<V>* vertex_out, std::vector<E>* ebo_out) {
std::vector<V> * vertex_vector = new std::vector<V>();
std::vector<E> * element_vector = new std::vector<E>();
std::cout << vertex_source[0] << std::endl;
}
};
#endif //LEARNOPENGL_COMMON_H
However, my compiler is telling me that trying to print (access?) vertex_source[0] causes an error. The exact (relevant) message is:
error: cannot bind ‘std::ostream {aka std::basic_ostream<char>}’ lvalue to ‘std::basic_ostream<char>&&’
I tried to search for this online but, while I found similar problems, everything just said to use an iterator without explaining why. I figured out how from the solutions, but found no good explanation. Can you help with this?
std::vector<int> *vec = new std::vector<int>;
//std::cout << vec[0];   // (1) error
std::cout << &vec[0];    // (2) compiles
If you uncomment (1) and run the snippet above, you get almost the same error. Since vec is a pointer, vec[0] dereferences it and yields the std::vector<int> object itself, not its first element. There is no operator<< for std::vector, so the only stream overload the compiler can even consider is the generic one taking an rvalue stream, and std::cout is an lvalue, hence the "cannot bind" message.
Line (2) compiles because it prints the address of the vector object rather than the vector itself.
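Applied to the code from the question, a minimal sketch of the fix (assuming V is a streamable type): dereference the pointer before indexing, or better, take the vector by reference.
std::cout << (*vertex_source)[0] << std::endl;   // first element of the pointed-to vector
// or, more idiomatically, change the parameters to references:
// static void GenOptimizedArrays(int vertex_size,
//                                const std::vector<V> &vertex_source,
//                                std::vector<V> &vertex_out,
//                                std::vector<E> &ebo_out);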

Is there in the STL or Boost a map-like container with find and pop operations?

I want my map to be searchable, and I want to be able to kick out the elements that were inserted into it longest ago (with an API like map.remove(map.get_iterator_to_oldest_inserted_element())), like a mix of a queue and a map. Is there any such container in the STL or Boost?
You can use boost::multi_index using the ordered_unique and sequence indices, as in this example.
#include <iostream>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/sequenced_index.hpp>
#include <boost/multi_index/member.hpp>
// Element type to store in container
struct Element
{
std::string key;
int value;
Element(const std::string& key, int value) : key(key), value(value) {}
};
namespace bmi = boost::multi_index;
// Boost multi_index container
typedef bmi::multi_index_container<
Element,
bmi::indexed_by<
bmi::ordered_unique< bmi::member<Element,std::string,&Element::key> >,
bmi::sequenced<> >
>
MyContainer;
typedef MyContainer::nth_index<1>::type BySequence;
// Helper function that returns a sequence view of the container.
BySequence& bySequence(MyContainer& container) {return container.get<1>();}
int main()
{
MyContainer container;
// Access container by sequence. Push back elements.
BySequence& sequence = bySequence(container);
sequence.push_back(Element("one", 1));
sequence.push_back(Element("two", 2));
sequence.push_back(Element("three", 3));
// Access container by key. Find an element.
// By default the container is accessed as nth_index<0>
MyContainer::const_iterator it = container.find("two");
if (it != container.end())
std::cout << it->value << "\n";
// Access container by sequence. Pop elements in a FIFO manner,
while (!sequence.empty())
{
std::cout << sequence.front().value << "\n";
sequence.pop_front();
}
}
You can set up a Boost multi-index container to do this.
However, I have trouble understanding multi-index containers; I think it would be easier to roll my own class that has a std::queue and a std::map as members, and manage them myself.
There is nothing like that in Boost or the STL, but you can make a hybrid yourself, sketched here as a small class template (note that re-inserting an existing key leaves a stale entry in the queue):
template <typename Key, typename Value>
class fifo_map
{
    std::map<Key, Value> Map;
    std::deque<Key> Queue;
public:
    void insert(const Key &k, const Value &v)
    {
        Map[k] = v;
        Queue.push_back(k);
    }
    void pop_front()   // drop the oldest-inserted element
    {
        const Key &k = Queue.front();
        Map.erase(k);
        Queue.pop_front();
    }
};
If you only ever want to find the oldest (or newest, or more generally the least/greatest by some specified criterion) and be able to remove that element, then a priority_queue will do the job; a sketch follows.
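As an illustration (my own sketch): tag each element with a monotonically increasing insertion stamp and use a min-heap on the stamp, so the top is always the oldest insertion. Note this covers only the pop-oldest half of the question; it does not give you lookup by key.
#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main()
{
    typedef std::pair<unsigned long, std::string> Entry;  // (stamp, value)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
    unsigned long stamp = 0;
    pq.push(Entry(stamp++, "first"));
    pq.push(Entry(stamp++, "second"));
    pq.push(Entry(stamp++, "third"));
    std::cout << pq.top().second << "\n";  // "first": the oldest insertion
    pq.pop();                              // remove it
    return 0;
}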
Boost::bimap can be used to create a unidirectional map using a list-based relation set.
typedef bimap<set_of<A>, unconstrained_set_of<B>, list_of_relation> custom_map_type;
custom_map_type map;
map.push_back(custom_map_type::value_type(A(), B()));
// delete the front of the list (the first inserted element if only using push_back)
map.pop_front();
// otherwise, map.right behaves a lot like a std::map<A,B>, with minor changes.