C/C++ graph interface for representation of a partial order

In my code I use a class which represents a directed acyclic graph. I wrote the code myself, and it wasn't hard. But later I realized my app has more requirements: the graph must be transitively reduced, i.e. a unique representation of a partial order. Every time the user does drag-and-drop or cut/copy/paste on the visual GUI representation of the graph, it has to be validated and adapted to this requirement. Now things become more complicated. So I did plan how to perform all graph operations safely, etc., but before I really dive into the code, I'd like to know:
Is there a known C/C++ interface for partial orders? (Preferably C++)
I found many, many libraries for graphs, but I already have my simple acyclic digraph code. I couldn't find anything which deals specifically with transitively-reduced graphs. (I don't need an adjacency matrix; the data comes from the user, so it would be inefficient here. It's a small graph for user data, not something for mathematical use.)
I'm looking for an interface which automatically detects unnecessary connections and removes them, tests whether a node copy/move operation would be valid partial-order-wise, i.e. preserve the properties of a partial order, etc.

I would recommend adding a partial-order validation method. When an edit is being made, make a copy of the whole graph, apply the edit to one copy, then validate it. If it passes, keep the modified copy. If it doesn't pass, revert to the saved copy.
Perhaps the validator could find all bottom nodes and, for each one, build a multiset of its ancestors (or descendants, depending on your terminology) and check for duplicate entries. I would resort to recursion for the search if you expect only small graphs.
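As a rough illustration of the copy, edit, validate, commit pattern described above, here is a minimal sketch built around a hypothetical adjacency-list Graph (node ids, member names and the edit callback are all assumptions, not an existing API). The validator shown checks the transitive-reduction property directly, treating an edge u->v as redundant if v is also reachable through another child of u; that is one possible way to implement the validation step.

```cpp
#include <set>
#include <utility>
#include <vector>

// Hypothetical minimal DAG: nodes are 0..n-1, children[u] lists direct successors.
struct Graph {
    std::vector<std::vector<int>> children;

    bool reaches(int from, int to, std::set<int>& seen) const {
        if (from == to) return true;
        if (!seen.insert(from).second) return false;   // already visited
        for (int c : children[from])
            if (reaches(c, to, seen)) return true;
        return false;
    }

    // A DAG is transitively reduced iff no edge u->v is also reachable
    // from u through one of u's other children (a path of length >= 2).
    bool isTransitivelyReduced() const {
        for (int u = 0; u < static_cast<int>(children.size()); ++u)
            for (int v : children[u])
                for (int w : children[u]) {
                    if (w == v) continue;
                    std::set<int> seen;
                    if (reaches(w, v, seen)) return false;  // edge u->v is redundant
                }
        return true;
    }
};

// Copy, apply the edit, validate, and commit only if the result is still valid.
template <typename Edit>
bool tryEdit(Graph& g, Edit edit) {
    Graph scratch = g;                      // cheap for small, user-sized graphs
    edit(scratch);                          // the GUI edit, e.g. adding/moving an edge
    if (!scratch.isTransitivelyReduced())   // acyclicity should be checked here too
        return false;                       // reject: keep the original graph
    g = std::move(scratch);                 // commit the modified copy
    return true;
}
```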

As far as I know, programs usually have their own graph classes when graphs are used for non-mathematical purposes. This happens because graphs may be much more complicated than linear containers such as the STL containers (vector, list, etc.).
You don't have any special needs in the field of math or algorithms: a search in your case would be a simple loop, and in most cases you don't need more than that, certainly not for the sake of (premature) optimization. If you do, there is boost::graph, but I suspect it would complicate things more than help you.
So I say: write a good graph/node class, and if it's good enough and written for general-purpose use, we can all benefit from it. Nobody is answering the question because there's really no existing public code which matches your needs. Write good libre code once, and it can then be used everywhere. Good luck.
P.S. Your own search algorithm may be much faster than those written for general-purpose graph libraries, e.g. boost::graph, because you can take advantage of the known restrictions and rules of your specific graph, thus making searches much faster. For example, in a transitively-reduced graph, if A is a parent of B, then A cannot also have B as a non-child descendant (e.g. grandchild), so you can optimize your search using this knowledge. The price you pay is doing lots of tests when changing the graph, but you gain a lot back because searching/scanning can become much faster.
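To make that point concrete: in a transitively-reduced graph, the question "is A an immediate predecessor of B?" collapses to a single adjacency lookup, because the reduction guarantees a direct edge is never shadowed by a longer path. A tiny sketch against the same hypothetical adjacency-list layout as above:

```cpp
#include <algorithm>
#include <vector>

// children[u] lists the direct successors of node u. In a transitively-reduced
// DAG, an edge A->B implies B is *not* reachable from A by any longer path,
// so the direct-edge test alone answers the "immediate predecessor" question.
bool coversImmediately(const std::vector<std::vector<int>>& children, int a, int b) {
    const auto& ch = children[a];
    return std::find(ch.begin(), ch.end(), b) != ch.end();
}
```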

Related

Compare two QAbstractItemModels

I'm trying to figure out an efficient algorithm that takes two QAbstractItemModels (trees) (A, B) and computes the differences between them, such that I get a list of items that are not present in A (but are in B, i.e. added), or items that have been modified/deleted.
The only way I can currently think of is doing a breadth-first search of A for every item in B. But this doesn't seem very efficient. Any ideas are welcome.
Have you tried using magic?
Seriously though, this is a very broad question, especially if we consider the fact that it is a QAbstractItemModel and not a QAbstractListModel. For a list it would be much simpler, but an abstract item model implements a tree structure, so there are a lot of variables:
Do you check for the total item count?
Do you check for the item count per level?
Do you check whether an item is contained in both models?
If so, is it contained at the same level?
If so, is it contained at the same index?
Is the item in its original state, or has it been modified?
You need to make all those considerations and come up with an efficient solution. And don't expect it will be as simple as a "by the book" algorithm. The good news is that since you are dealing with isolated items, it will be easier than trying to do that for text; with text you can't hope to get anywhere near as concise a result as with isolated items. I've had my fair share of absurdly mindless GitHub diff results.
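If it helps as a starting point, here is a naive recursive walk that only answers "are these two trees identical in structure and display data". It assumes column 0 carries the tree structure and that Qt::DisplayRole is enough to identify an item; both are assumptions you would need to adapt, and turning the mismatch cases into an actual diff list is left open.

```cpp
#include <QAbstractItemModel>
#include <QModelIndex>
#include <QVariant>

// Naive recursive comparison of two item models, starting at parent indices pa/pb.
// Returns false at the first structural or data mismatch.
bool sameSubtree(const QAbstractItemModel& a, const QModelIndex& pa,
                 const QAbstractItemModel& b, const QModelIndex& pb)
{
    if (a.rowCount(pa) != b.rowCount(pb))
        return false;                                   // item added or removed
    for (int row = 0; row < a.rowCount(pa); ++row) {
        const QModelIndex ia = a.index(row, 0, pa);
        const QModelIndex ib = b.index(row, 0, pb);
        if (a.data(ia, Qt::DisplayRole) != b.data(ib, Qt::DisplayRole))
            return false;                               // item modified (or moved)
        if (!sameSubtree(a, ia, b, ib))
            return false;                               // difference deeper down
    }
    return true;
}
```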
And just in case that's your actual goal, it will be much easier to achieve by tracking the history of the derived data set than by doing a blind comparison. Tracking history is much easier if you want to establish what is added, what is deleted, what is moved and what is modified, because it considers the actual event flow rather than just the end result. Especially if you don't have any persistent ID scheme implemented, there is otherwise no way to tell whether item X has been deleted, or moved to a new level/index and modified, and so on.
Also, worry about efficiency only after you have empirically established a performance issue. Some algorithms may seem overly complex, but modern machines are overly fast, and unless you are running that in a tight loop you shouldn't really worry about it. In the end, it doesn't boil down to how complex it is, it boils down to whether it is fast enough or not.

Why do C++ data structures for graphs hide contiguous integer indices?

Data structures for directed and undirected graphs are of fundamental importance. Well-known and widely-used implementations such as the Boost Graph Library and Lemon are designed such that the contiguous integer indices of nodes and edges are not exposed to the user via the interface.
Instead, the user identifies nodes and edges by (small) representative objects. One advantage is that these objects are updated automatically when the indices of nodes and edges change due to the removal of edges or nodes from the graph.
In my opinion (!), this advantage is overrated. Users will typically store the representative objects of nodes and/or edges in a container, e.g., an std::vector. Now, if nodes or edges are removed from the graph and their representative objects become invalid, the user needs to either ignore this or rearrange the vector so as to keep valid integer indices contiguous, i.e., do exactly the bookkeeping that the design was supposed to make unnecessary.
Hence, my question is: Does the design choice (of hiding the contiguous integer indices of nodes and edges from the user) have other advantages?
(I'm at home in the Java world, but hope that it is OK to give an answer that is not focussed on the particular libraries in question)
There are several possible advantages of such an abstraction. One of the most important ones was already mentioned in the question: The consistency when performing modifications in a graph is much harder to accomplish when indices have to be maintained.
The reason why this may be hard lies in the different possible graph representations: maintaining consistent indices would be easy if the internal representation always consisted of random-access lists of Vertex and Edge objects. But for other representations, determining an index may be difficult.
This is directly related to the second main advantage: one is free to use different implementations of the graph interface. The section "Graph Data Structures" in the Review of Elementary Graph Theory of the Boost documentation lists several data structures that are already offered by the BGL (and anyone may add their own implementation). The running times for certain operations are given in Big-O notation, and one can see that they vary greatly between the different data structures.
So one can easily imagine that different implementations are better suited for certain tasks. For example, consider an algorithm that frequently has to check whether a particular vertex is contained in a graph. With an indexed (that is, list-based) vertex storage, this would require O(n) for each test. With a set-based storage of the vertices, this could be done in O(1), but there simply are no sensible "indices" in this case.
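For illustration only (the container choices here are just examples), the two storage strategies boil down to something like this:

```cpp
#include <algorithm>
#include <unordered_set>
#include <vector>

// "Is vertex v in the graph?" with two different vertex storages.

bool containsListBased(const std::vector<int>& vertices, int v) {
    // O(n): has to scan the list, but the positions give natural contiguous indices.
    return std::find(vertices.begin(), vertices.end(), v) != vertices.end();
}

bool containsSetBased(const std::unordered_set<int>& vertices, int v) {
    // O(1) on average: fast membership test, but no meaningful index exists.
    return vertices.count(v) != 0;
}
```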
Additionally, as mentioned in the Graph Concepts overview:
In fact, the BGL interface need not even be implemented using a data-structure, as for some problems it is easier or more efficient to define a graph implicitly based on some functions.
So suggesting that there is "indexed access" even when the graph does not exist explicitly in memory may hinder such a purely functional implementation.
I can't speak for Lemon graph, but for boost graph I think the main goal is to be generic. So abstracting away the vertex (edge) access helps to achieve that goal.
It is stated in the documentation that boost graph is based on Dietmar Kühl's Master's Thesis on generic graph algorithms. (See my answer to Do property maps remain necessary for BGL?). So the main goal behind the library is to be generic and extensible. The choice of encapsulating access is part of abstracting the algorithms from the graph representation. To me, contiguous integer indices are an implementation detail.
Boost doesn't make any assumptions on how you will use the graph or what performance trade-offs are important to you. It lets you choose (or implement) the container that will best fit your needs.
If you want to break this encapsulation, you are free to do so. In fact, my most common use of boost graph involves vecS containers and a vector of structs. I usually work with graphs where the size is fixed. I could just as easily use a map of vertex_descriptors (or edge_descriptors) to objects to achieve the same goal.
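For what it's worth, a minimal sketch of that pattern might look like the following (the NodeData struct and the graph size are placeholders). With vecS vertex storage, the vertex descriptors are integral indices, so they can double as indices into a side vector of per-vertex data:

```cpp
#include <vector>
#include <boost/graph/adjacency_list.hpp>

// Per-vertex payload kept outside the graph; contents are illustrative.
struct NodeData { double weight = 0.0; };

using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS>;

int main() {
    Graph g(3);                                        // fixed-size graph
    std::vector<NodeData> data(boost::num_vertices(g));

    auto u = boost::vertex(0, g);
    auto v = boost::vertex(1, g);
    boost::add_edge(u, v, g);

    data[u].weight = 1.5;                              // descriptor used as a plain index
    return 0;
}
```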
So in summary, I would say that this is not so much a design choice, but rather a consequence of achieving the broader goal of being generic. The hiding of access has the benefit of being more generic.

Testing important implementation details

I'm implementing a key -> value associative container. Internally it's a sorted binary tree, and I want to make sure it's balanced so that find operations are sure to be O(log n). The problem is that this is an implementation detail that is entirely private to the class, and I can't readily measure it from outside.
The best I can think to do is to benchmark my find operations - if they operate in linear time it's probably because the tree is unbalanced - but that seems far too inexact, and I'd feel better if I had a more direct way to measure.
What design/testing patterns are out there that might be helpful in these sorts of situations?
You could extract the balanced tree into its own class and test that class. In that class, balance is a feature, and it could expose something like its depth, which would let you inspect it and assert that the tree remains balanced.
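As a sketch of what such a test could look like, assuming a hypothetical tree class that exposes size() and depth() for exactly this purpose (the 2*log2(n+1) bound below is the red-black-tree guarantee; substitute whatever bound your balancing scheme promises):

```cpp
#include <cassert>
#include <cmath>

// White-box balance check for a hypothetical tree type exposing size() and depth().
template <typename Tree>
void checkBalanced(const Tree& tree) {
    if (tree.size() < 2)
        return;                                            // trivially balanced
    const double bound =
        2.0 * std::log2(static_cast<double>(tree.size()) + 1.0);
    assert(static_cast<double>(tree.depth()) <= bound);    // red-black-style height bound
}
```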
You are correct in saying that you are testing an implementation detail. The problem here is that a bad implementation also produces the correct output, it just takes longer. This means that the only measurable unit is time.
What I would do is similar to what you propose: create a big collection of data and structure it in a way that a good implementation should be able to find what you're looking for in a matter of moments and a bad implementation has to go through your entire collection before finding it.
This could translate to having thousands of elements and searching for the element that's last in line. You could structure it in a way that a good implementation should find it at the top of the tree and thus find it very quickly while a bad implementation should find it somewhere at the bottom, thus taking time to find it.
Many frameworks have an option to specify a timeout so if you set this to a low enough value and you have plenty of data in your collection, you can weed out slow-running implementations like that.

Managing large spatial data set with attributes in C++

I have a data set with about 700 000 entries, and each entry is a set of 3D coordinates with attributes such as name, timestamp, ID, and so on.
Right now I'm just reading the coordinates and render them as points in OpenGL. However I want to associate each point with its corresponding attributes and I want to be able to sort and pick them during runtime based on their attributes. How would I go about to achieve this in an efficient manner?
I know I can put the data in a struct and use std::sort for sorting, but is that a good design choice, or is there a more efficient/elegant way of handling the problem?
The way I tend to look at these design choices is to first use one of the standard library containers (by the way, if you need to "just" do lookup you don't necessarily have to sort, but you do need a container that allows lookup), then check whether this is an "efficient enough" solution for the problem.
You can usually come up with a custom solution that is more efficient and maybe more elegant but you tend to run into two issues with that:
1) You end up having to implement some type of a container, which will cost you time both in implementation and debugging compared to a well understood and tested container that is already out there. Most of the time you're better off trying to solve the problem at hand rather than make it bigger by adding more code.
2) If someone else will have to maintain your code at some point, chances are they are familiar with standard library components both from a design and implementation perspective, but they won't be familiar with your custom container, thus increasing the learning curve.
If you consider each attribute of your point class as a component of a vector, then your selection process is a region query. Your example of a string attribute being equal to something means that the region is actually a line in your data space. However, there won't be any sorting on the other attributes within that selection; you will have to implement it yourself, but it should be relatively straightforward for octrees, which partition data into ordered regions.
As advocated in another answer, try existing standard solutions first. If you can find an off-the-shelf implementation of one of these data structures:
R-tree
KD tree
BSP
Octree, or more likely, an n-dimensional version of the quadtree or octree principle (I will use the term octree herein to denote the general data structure)
then go for it. These are the data structures I recommend for spatial data management.
You could also use an embedded RDBMS capable of working with spatial data (they usually implement R-tree for spatial indexing), but it may not be interesting if your dataset isn't dynamic.
If your dataset falls within the 10,000 entries range, then by today's standards it isn't that large, so simpler structures should suffice. In that perimeter, I would first go for a simple std::vector, and use std::sort and std::find to filter the data into a smaller set and sort it afterwards.
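A minimal sketch of that first attempt (the field names and the filtering predicate are placeholders for the attributes mentioned in the question; I've used std::copy_if for the filtering step):

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <string>
#include <vector>

// One entry per 3D point, carrying the attributes from the question.
struct Entry {
    float x = 0.0f, y = 0.0f, z = 0.0f;
    std::string name;
    std::int64_t timestamp = 0;
    int id = 0;
};

int main() {
    std::vector<Entry> entries;  // loaded elsewhere, e.g. ~700k points

    // Sort by the most queried attribute, here the timestamp.
    std::sort(entries.begin(), entries.end(),
              [](const Entry& a, const Entry& b) { return a.timestamp < b.timestamp; });

    // Filter into a smaller working set, e.g. everything with a given name.
    std::vector<Entry> selection;
    std::copy_if(entries.begin(), entries.end(), std::back_inserter(selection),
                 [](const Entry& e) { return e.name == "target"; });
    return 0;
}
```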
I would probably try an ordered set or map on the most queried attribute in a second attempt, then do some benchmarks to pick the better-performing solution.
For a more efficient one-dimensional indexing algorithm (in essence, that's what sets and maps are), you might want to try B-trees: there's a C++ implementation available from Google.
My third attempt would go toward an OpenCL solution (although if you are doing heavy OpenGL rendering, you might prefer doing the work on the CPU instead, but that depends on your framerate needs).
If your dataset is much larger, as it seems to be, then consider one of the more complex solutions I listed initially.
At any rate, without more details about your dataset and how you plan to use it, it will be difficult to provide a good solution, so the only real advice we can give is: try everything you can and benchmark.
If you're dealing with point clouds, take a look at PCL, it could save you a lot of time and effort without having to dig into the intricacies of spatial indexing yourself. It also includes visualisation.

Can I implement potential field/depth first method for obstacle avoidance using boost graph?

I implemented an obstacle avoidance algorithm in Matlab which assigns every node in a graph a potential and tries to descend this potential (the goal of the pathplanning is in the global minimum). Now there might appear local minima, so the (global) planning needs a way to get out of these. I used the strategy to have a list of open nodes which are reachable from the already visited nodes. I visit the open node which has the smallest potential next.
I want to implement this in C++ and I am wondering if Boost Graph already has such algorithms. If not, is there any benefit in using this library if I have to write the algorithm myself, and I will also have to create my own graph class because the graph is too big to be stored as an adjacency list/edge list in memory?
Any advice appreciated!
boost::graph provides a list of shortest-path / cost-minimization algorithms. You might be interested in the following: Dijkstra shortest path, A*.
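For instance, a bare-bones Dijkstra call over a weighted adjacency_list might look roughly like this (the graph layout and weights are placeholders, not your potential field):

```cpp
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>

using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS,
                                    boost::no_property,
                                    boost::property<boost::edge_weight_t, double>>;
using Vertex = boost::graph_traits<Graph>::vertex_descriptor;

int main() {
    Graph g(4);
    boost::add_edge(0, 1, 1.0, g);   // edge weights stand in for traversal cost
    boost::add_edge(1, 2, 2.0, g);
    boost::add_edge(0, 3, 4.0, g);
    boost::add_edge(3, 2, 1.0, g);

    std::vector<double> dist(boost::num_vertices(g));
    std::vector<Vertex> pred(boost::num_vertices(g));

    // Shortest paths from vertex 0; a visitor could be attached to hook into
    // events such as vertex examination if the algorithm needs customization.
    boost::dijkstra_shortest_paths(
        g, 0,
        boost::predecessor_map(&pred[0]).distance_map(&dist[0]));
    return 0;
}
```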
The algorithms can be easily customized. If they don't exactly fit your needs, take a look at the visitor concepts, which allow you to customize an algorithm at predefined event points.
Finally, the Distributed BGL handles huge graphs (potentially millions of nodes). It will work for you if your graph does not fit in memory.
You can find a good overview of the Boost Graph Library here.
And of course, do not hesitate to ask more specific question about BGL on stackoverflow.
To my mind, boost::graph is really awesome for implementing new algorithms, because it provides various data holders, adaptors and commonly used utilities (which can obviously be used as parts of newly constructed algorithms). These are also customizable thanks to visitors and other smart patterns.
Actually, boost::graph may take some time to get used to, but in my opinion it's really worth it.