Compare two QAbstractItemModels - C++

I'm trying to figure out an efficient algorithm that takes two QAbstractItemModels (trees) A and B and computes the differences between them, so that I get a list of items that are present in B but not in A (added), as well as items that have been modified or deleted.
The only way I can currently think of is doing a breadth-first search of A for every item in B, but that doesn't seem very efficient. Any ideas are welcome.

Have you tried using magic?
Seriously though, this is a very broad question, especially if we consider the fact that it is a QAbstractItemModel and not a QAbstractListModel. For a list it would be much simpler, but an abstract item model implements a tree structure, so there are a lot of variables:
- Do you check the total item count?
- Do you check the item count per level?
- Do you check whether an item is contained in both models?
- If so, is it contained at the same level?
- If so, is it contained at the same index?
- Is the item in its original state, or has it been modified?
You need to make all those considerations and come up with an efficient solution. And don't expect it will be as simple as a "by the book" algorithm. The good news is that since you are dealing with isolated items, it will be easier than trying to do the same for text; with text you can't hope to get anywhere near as concise a result as with isolated items. I've had my fair share of absurdly mindless GitHub diff results.
And just in case diffing is your actual goal, it will be much easier to achieve by tracking the history of the derived data set than by doing a blind comparison. Tracking history makes it much easier to establish what is added, what is deleted, what is moved, and what is modified, because it considers the actual event flow rather than just comparing end results. This matters especially if you don't have any persistent ID scheme implemented; without one, there is no way to tell whether item X has been deleted, or moved to a new level/index and modified, and so on.
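If you do go the blind-comparison route, a minimal sketch of the "added items" half might look like the following. It assumes every item exposes a persistent ID through a custom role (IdRole is a hypothetical name; as said above, without such a scheme you can't distinguish a move from a delete plus add):
#include <QAbstractItemModel>
#include <QSet>
#include <QString>
// Hypothetical custom role under which each item reports a persistent ID.
static const int IdRole = Qt::UserRole + 1;
// Recursively collect the IDs of all items below 'parent' in model 'm'.
void collectIds(const QAbstractItemModel *m, const QModelIndex &parent,
                QSet<QString> &out)
{
    for (int row = 0; row < m->rowCount(parent); ++row) {
        const QModelIndex idx = m->index(row, 0, parent);
        out.insert(idx.data(IdRole).toString());
        collectIds(m, idx, out); // descend into children
    }
}
// IDs present in B but missing from A ("added"); the symmetric call
// gives deletions, and comparing data under matching IDs gives edits.
QSet<QString> addedInB(const QAbstractItemModel *a, const QAbstractItemModel *b)
{
    QSet<QString> idsA, idsB;
    collectIds(a, QModelIndex(), idsA);
    collectIds(b, QModelIndex(), idsB);
    return idsB - idsA;
}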
Also, worry about efficiency only after you have empirically established a performance issue. Some algorithms may seem overly complex, but modern machines are overly fast, and unless you are running that in a tight loop you shouldn't really worry about it. In the end, it doesn't boil down to how complex it is; it boils down to whether it is fast enough or not.

Related

Testing important implementation details

I'm implementing a key -> value associative container. Internally it's a sorted binary tree, and I want to make sure it's balanced so that find operations are sure to be O(log n). The problem is that this is an implementation detail that is entirely private to the class, and I can't readily measure it from outside.
The best I can think to do is to benchmark my find operations - if they operate in linear time it's probably because the tree is unbalanced - but that seems far too inexact, and I'd feel better if I had a more direct way to measure.
What design/testing patterns are out there that might be helpful in these sorts of situations?
You could extract the balanced tree into its own class and test that class. There, balance is a feature of the class, and it could expose something like its depth, which would let you inspect it and assert that the tree remains balanced.
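A minimal sketch of that idea, with a hypothetical Tree class exposing size() and depth(); the bound used here is the classic red-black tree guarantee, so adjust it to whatever balancing scheme you actually use:
#include <cassert>
#include <cmath>
// 'Tree' is a stand-in for your extracted tree class; it must expose
// size() and depth(). For a red-black tree, depth <= 2*log2(n+1).
template <typename Tree>
void assertBalanced(const Tree &t)
{
    if (t.size() == 0)
        return;
    const double bound = 2.0 * std::log2(static_cast<double>(t.size()) + 1.0);
    assert(static_cast<double>(t.depth()) <= bound); // depth stays O(log n)
}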
You are correct in saying that you are testing an implementation detail. The problem here is that a bad implementation also produces the correct output, it just takes longer. This means that the only measurable unit is time.
What I would do is similar to what you propose: create a big collection of data and structure it in a way that a good implementation should be able to find what you're looking for in a matter of moments and a bad implementation has to go through your entire collection before finding it.
This could translate to having thousands of elements and searching for the element that's last in line. You could structure it in a way that a good implementation should find it at the top of the tree and thus find it very quickly while a bad implementation should find it somewhere at the bottom, thus taking time to find it.
Many frameworks have an option to specify a timeout so if you set this to a low enough value and you have plenty of data in your collection, you can weed out slow-running implementations like that.

How to fetch patterns from a game board in a fast way?

For my current project I'm looking for an efficient way to structure and store the board information with pattern matching in mind.
I have a square board, and for pattern matching I'm using bitfields, with 2 bits representing one field of the board. The patterns to match have a diamond shape that could be centered around any possible field on the board (so the center is not static; I need to be able to do it for any center).
Example of diamond area around O:
..X..
.XXX.
XXOXX
.XXX.
..X..
If parts of the diamond are outside the playing area, the bits will be set to 11. The diamonds can have differing radii; the example above has a radius of 2.
Another important thing for the efficiency of the system is that I have to be able to quickly rotate/mirror the pattern into all 8 possible symmetries.
For this, it may be beneficial to NOT store the information of the central point in the pattern; as this is not required for my algorithm anyway, it may be a valuable time saver, because it makes some bitshifting magic possible for quickly rotating/mirroring the patterns.
As this kind of pattern matching has to be done at a high frequency, it can prove to be a severe bottleneck of my overall project if implemented badly.
While trying to come up with a nice model for doing all this work, I figured there are 3 important key areas that require thinking about, although they are of course tightly connected.
A. How is the data stored in the board implementation.
Currently this is done in a rather convoluted manner that would be too slow to read from at such high frequency. But it would be no problem or time loss to store and update the 2-bit data in any possible way for the entire board.
The easiest approach would be to store the entire board in a bitset twice the size of the board, so that each pair of bits represents the value of a single field. But there is no necessity to do it in one particular sequence or in only one bitset, even though at first it may look natural to do so.
Anyway, this is the part I'm most flexible about, as it can be done without performance issues in whatever way serves the other 2 critical parts of the problem best.
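Just to make the "store it however serves B and C best" point concrete, here is one straightforward 2-bits-per-field packing; the names and layout are illustrative, not taken from my current code:
#include <cstdint>
#include <vector>
// n x n board, 2 bits per field, 32 fields packed into each 64-bit word.
struct Board {
    int n;
    std::vector<std::uint64_t> words;
    explicit Board(int size) : n(size), words((size * size + 31) / 32, 0) {}
    int get(int x, int y) const {
        const int i = y * n + x;
        return static_cast<int>(words[i / 32] >> (2 * (i % 32))) & 3;
    }
    void set(int x, int y, int v) {
        const int i = y * n + x;
        std::uint64_t &w = words[i / 32];
        w &= ~(std::uint64_t(3) << (2 * (i % 32)));  // clear the field
        w |= std::uint64_t(v & 3) << (2 * (i % 32)); // write new value
    }
};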
B. How is the data stored in the pattern.
This is already more difficult. As said, my intention is to store them in a bitset of the appropriate size, but the question is in what order.
There seem to be two ways, that quickly come to mind:
a) (this could be done with or without the central point C)
...0...
..123..
.45678.
9ABCDEF
.GHIJK.
..LMN..
...O...
b)
...0...
..N14..
.ML235.
KJI.678
.HFC9A.
..GDB..
...E...
If we are just talking about the patterns, b) seems clearly superior. A rotation of the pattern is done by a simple rotate-shift (3 bit ops total per rotation), and even mirroring the pattern can be done with about a dozen bit ops. These kinds of operations are much more time-consuming with a).
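As a sketch of that rotate-shift, under the assumption that one ring of radius k is stored contiguously in ordering b), 2 bits per cell, in the low 8k bits of a 64-bit word (so k < 8; larger rings would need a wider type or bitset):
#include <cstdint>
// One 90-degree rotation of a radius-k ring: a circular shift by
// k cells (= 2k bits) within the ring's 8k used bits.
std::uint64_t rotateRing90(std::uint64_t ring, int k)
{
    const int bits = 8 * k;                          // 4k cells * 2 bits
    const std::uint64_t mask = (std::uint64_t(1) << bits) - 1;
    return ((ring << (2 * k)) | (ring >> (bits - 2 * k))) & mask;
}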
But b) has also some severe drawback... And this leads to:
C. How is the data read from the board implementation to the pattern.
Looking at the two potential bit orders above, a) is now clearly superior. a) can be read with a handful of bit ops from a potential array, as discussed in A: you bitshift each line (getting the line by ANDing with a bitset that nulls all other bits) to the appropriate place and put the lines together with some OR operations. Even near the board edges this is very quick.
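A sketch of that extraction for a single diamond line, assuming a whole board row fits in one 64-bit word (n <= 32), the line lies fully inside the board, and the assembled pattern fits in 64 bits; edge handling is left out:
#include <cstdint>
// Take 'count' fields (count < 32) starting at column 'x0' out of a
// packed board row (2 bits per field) and move them to bit offset
// 'dst' of the pattern. OR the results of all lines together to
// assemble the layout-a) pattern.
std::uint64_t extractLine(std::uint64_t boardRow, int x0, int count, int dst)
{
    const std::uint64_t mask = (std::uint64_t(1) << (2 * count)) - 1;
    return ((boardRow >> (2 * x0)) & mask) << dst;
}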
The problem, of course, is that this would still only get me one possible symmetry of the pattern; rotations/mirrors are not that easily done. This could be circumvented by saving each pattern to match against 8 times, but that would be very crude and might cause trouble elsewhere.
With b) this is much more difficult... Honestly, I don't see a way to do it quickly without checking every single bit individually. But when increasing the pattern size (like radius 15) this takes forever when done very often, especially as the [] operator of bitsets is rather slow.
One possible solution I thought of is writing it in CUDA, with each thread generating a pattern around one field and each block of threads checking one fixed position around this center. But as I haven't used CUDA before, I don't know how reasonable this is; still, if done in parallel, it sounds more reasonable than iterating over all positions serially.
As I still haven't found a satisfying solution to the problem, I wanted to ask here whether someone knows how it can be done better:
- either rotate/mirror patterns of type a)
- or quickly read patterns of type b) (possibly by arranging the data in a better way in step A.; I'm flexible here)
- or whether the CUDA idea may actually solve the problem
- or maybe some completely different way I didn't think of, as I'm sure this has been done before by smarter people
If it matters: I'm coding with VS Pro 2013 and don't mind using boost. If CUDA could solve this effectively, I would also use it.
EDIT:
Okay... So I continued thinking about the whole thing. Maybe there are other ways to make it more efficient by doing some of the work in batches.
First of all, what I usually need: for a given board position (and we are talking about 10k positions per second), I need, for a large set of fields (every empty field of the board, so most fields), all patterns from radius 15 down to radius 3. I only need the biggest pattern matched by my database, but in any case I may often need most of them. So there are 2 things that could make some time savings possible:
1) some efficient way to use the larger pattern to generate the pattern one size smaller. This should actually be possible when using the bit ordering from b), if it is done the proper way: then it would only need a few bit ops to cut out the outer ring (see the sketch after this list).
2) As neighboring fields often need their own specific patterns, perhaps there is some way to create their patterns in some sort of batch operation... But I admit I don't see how this could be done very well. Still, there may be some time savings.
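A sketch of point 1), assuming the b)-ordered pattern stores its rings innermost-first in one bitset (center omitted), so that rings 1..r occupy the low 4r(r+1) bits; cutting off the outer ring is then just clearing the high bits:
#include <bitset>
constexpr int kMaxBits = 960; // radius 15: 2*15*16 cells * 2 bits each
// Reduce a radius-r pattern to radius r-1 by clearing everything above
// the bits of rings 1..r-1 (which span 4*(r-1)*r bits).
std::bitset<kMaxBits> shrink(std::bitset<kMaxBits> p, int r)
{
    const int keep = 4 * (r - 1) * r;
    return (p << (kMaxBits - keep)) >> (kMaxBits - keep);
}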
Oh, and another additional comment, as I had a discussion about this earlier today with a friend: no, it is not an option to reverse the matching and, instead of matching the board position against the pattern database, check whether each DB pattern matches some board position; I have way too many patterns for that. When doing it the first way, I can just look up whether the bitstring exists in my database and be done.
Edit2:
Another update... First, I looked into CUDA, and as it seems incompatible with VS2013, this is a severe blow to that idea. Second, I thought about the process by which patterns are matched. In fact, it may be possible to go in reverse: instead of going from the large patterns down to the small ones, go from small to large. Suddenly my pattern library is less of a dictionary and more of a search tree, as larger patterns certainly have their inner core saved as a pattern as well. This should speed up any lookups, but sadly it still does not solve my problem of pattern generation.
Edit3:
As I felt it was worth an answer rather than an edit, I posted my own new idea (which is different from what I had in mind when posting this question) below.
Okay, as I was thinking about this more and more, I now believe the following solution may be the best way to tackle the problem. It is certainly not final, but it is my current best idea, so any criticism is welcome and improvements can surely be made.
As the discussion in the comments led me to believe that the approach imagined in the question is not practical for the problem at hand, I have drastically changed my idea: instead of trying to read the pattern around each empty intersection after each move, I will now update the surrounding pattern of each empty intersection after each move is made.
This can be done efficiently because we can use 2 very important features of our patterns:
1) each larger pattern's core (i.e. the pattern reduced to a lower radius) is guaranteed to be in the database;
2) most patterns will have a rather low radius, and in most cases not many positions on the board change with each move, so not too many positions need a recheck of their patterns.
My idea is to store the currently largest matched pattern, its radius, and its evaluation with each empty intersection. While a move is made, I generate a list of all positions changed during that move (usually one). Once the move is finished, I iterate over all empty positions on the board and look at their distance to the closest change. There are 3 possible cases:
a) the distance is smaller than or equal to the radius of the currently matched pattern: we have to recheck the pattern.
b) the distance is one bigger than the radius of the currently matched pattern: we have to check whether a matching radius-(r+1) pattern actually exists. If it does, we check r+2, and so on, until we have found the largest.
c) the distance is even bigger: we can keep everything as it is.
As we now basically have a tree of patterns, with each pattern having many child patterns of incremented radius, it is actually practical to store the pattern information in a series of bitsets, each representing a ring of a certain radius around the center.
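A sketch of that per-ring storage; the sizes follow from the diamond geometry (ring k has 4k cells at 2 bits each, so a 128-bit bitset covers any single ring up to radius 15):
#include <bitset>
#include <vector>
// One bitset per ring; rings[k-1] holds the ring of radius k (8k bits
// used). Growing or shrinking a pattern is appending or dropping a ring.
struct RingPattern {
    std::vector<std::bitset<128>> rings;
    int radius() const { return static_cast<int>(rings.size()); }
    void addRing(const std::bitset<128> &ring) { rings.push_back(ring); }
    void shrink() { rings.pop_back(); } // cut out the outer ring
};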
I hope that this system maximizes the reusability of all the information at hand and is fast enough for my needs. As mentioned before, I welcome criticism and suggestions for improvement, and if no better solution is found, I will probably implement it in the near future. Once done, I can report back on the results.

Managing large spatial data set with attributes in C++

I have a data set with about 700 000 entries, and each entry is a set of 3D coordinates with attributes such as name, timestamp, ID, and so on.
Right now I'm just reading the coordinates and rendering them as points in OpenGL. However, I want to associate each point with its corresponding attributes, and I want to be able to sort and pick the points at runtime based on their attributes. How would I go about achieving this in an efficient manner?
I know I can put the data in a struct and use std::sort for sorting, but is that a good design choice, or is there a more efficient/elegant way of handling the problem?
The way I tend to look at these design choices is to first use one of the standard library containers (by the way, if you "just" need lookup you don't necessarily have to sort, but you do need a container that allows lookup), then check whether this is an "efficient enough" solution for the problem.
You can usually come up with a custom solution that is more efficient and maybe more elegant but you tend to run into two issues with that:
1) You end up having to implement some type of a container, which will cost you time both in implementation and debugging compared to a well understood and tested container that is already out there. Most of the time you're better off trying to solve the problem at hand rather than make it bigger by adding more code.
2) If someone else will have to maintain your code at some point, chances are they are familiar with standard library components both from a design and implementation perspective, but they won't be familiar with your custom container, thus increasing the learning curve.
If you consider each attribute of your point class as a component of a vector, then your selection process is a region query. Your example of a string attribute being equal to something means that the region is actually a line in your data space. However, no sorting on other attributes will be done within that selection; you will have to implement it yourself, but that should be relatively straightforward for octrees, which partition data into ordered regions.
As advocated in another answer, try existing standard solutions first. If you can find an off-the-shelf implementation of one of these data structures:
R-tree
KD tree
BSP
Octree, or more likely an n-dimensional version of the quadtree/octree principle (I will use the term octree herein to denote the general data structure)
then go for it. These are the data structures I recommend for spatial data management.
You could also use an embedded RDBMS capable of working with spatial data (they usually implement R-tree for spatial indexing), but it may not be interesting if your dataset isn't dynamic.
If your dataset falls within the 10,000-entry range, then by today's standards it isn't that large, so simpler structures should suffice. In that perimeter, I would first go for a simple std::vector, and use std::sort and std::find to filter the data into a smaller set and sort it afterward.
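A minimal sketch of that first attempt; the field names are illustrative, not taken from the question:
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <string>
#include <vector>
struct Entry {
    float x, y, z;          // the 3D coordinates
    std::string name;       // plus the attributes
    std::int64_t timestamp;
    int id;
};
// Filter into a smaller set, then sort that set by some attribute.
std::vector<Entry> selectAndSort(const std::vector<Entry> &all,
                                 const std::string &wantedName)
{
    std::vector<Entry> out;
    std::copy_if(all.begin(), all.end(), std::back_inserter(out),
                 [&](const Entry &e) { return e.name == wantedName; });
    std::sort(out.begin(), out.end(), [](const Entry &a, const Entry &b) {
        return a.timestamp < b.timestamp;
    });
    return out;
}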
As a second attempt, I would probably try an ordered set or map keyed on the most queried attribute, then do some benchmarks to pick the better-performing solution.
For a more efficient one-dimensional indexing algorithm (in essence, that's what sets and maps are), you might want to try B-trees; there's a C++ implementation available from Google.
My third attempt would go toward an OpenCL solution (although if you are doing heavy OpenGL rendering, you might prefer doing the work on the CPU instead, but that depends on your framerate needs).
If your dataset is much larger, as it seems to be, then consider one of the more complex solutions I listed initially.
At any rate, without more details about your dataset and how you plan to use it, it is difficult to provide a good solution, so the only real advice we can give is: try everything you can and benchmark.
If you're dealing with point clouds, take a look at PCL, it could save you a lot of time and effort without having to dig into the intricacies of spatial indexing yourself. It also includes visualisation.

C/C++ graph interface for representation of partial order

In my code I use a class which represents a directed acyclic graph. I wrote the code myself; it wasn't hard. But later I realized my app has a further requirement: the graph must be transitively reduced, i.e. a unique representation of a partial order. Every time the user does drag-and-drop or cut/copy/paste on the visual GUI representation of the graph, it has to be validated and adapted to this requirement. Now things become more complicated. So I did plan how to perform all graph operations safely, etc., but before I really dive into the code, I'd like to know:
Is there a known C/C++ interface for partial orders? (Preferably C++)
I found many many libraries for graphs, but I already have my simple acyclic digraph code. I couldn't find anything which deals specifically with transitively-reduced graphs (I don't need an adjacency matrix, the data comes from the user so it would be inefficient here... It's a small graph for user data, not something for mathematical use)
I'm looking for an interface which automatically detects unnecessary connections and removes them, does tests to see if a node copy/move operation would be valid partial-order-wise, i.e. preserve the properties of a partial order, etc.
I would recommend adding a partial-order validation method. When an edit is being made, make a copy of the whole graph, apply the edit to one copy, then validate it. If it passes, keep the modified copy. If it doesn't pass, revert to the saved copy.
Perhaps the validator could find all bottom nodes and, for each one, build a multiset of its ancestors (or descendants, if you call them that) and check for duplicate entries. I would resort to recursion for the search if you expect only small graphs.
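A sketch of one such validator over a plain adjacency-list graph (the types are illustrative; acyclicity would be checked separately in the same pass or before it):
#include <set>
#include <vector>
using Graph = std::vector<std::vector<int>>; // adj[u] = children of u
// Collect every node reachable below u (the 'seen' guard keeps this
// terminating even if a bad edit introduced a cycle).
void collectDescendants(const Graph &g, int u, std::set<int> &seen)
{
    for (int v : g[u])
        if (seen.insert(v).second)
            collectDescendants(g, v, seen);
}
// True iff no edge u->v is implied by a longer path u->w->...->v,
// i.e. the graph is its own transitive reduction.
bool isTransitivelyReduced(const Graph &g)
{
    for (int u = 0; u < static_cast<int>(g.size()); ++u)
        for (int w : g[u]) {
            std::set<int> below;
            collectDescendants(g, w, below);
            for (int v : g[u])
                if (v != w && below.count(v))
                    return false; // edge u->v is redundant
        }
    return true;
}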
As far as I know, usually programs have their own graph classes when used for non-mathematical purposes. This happens because graphs may be much more complicated than linear containers such as the STL containers (vector, list, etc.).
You don't seem to have any special needs in the field of math or algorithms: a search algorithm in your case would be a simple loop, and in most cases you don't need more than that, certainly not for the sake of (premature) optimization. If you do, there is boost::graph, but I suspect it would complicate things more than help you.
So I say: write a good graph/node class, and if it's good enough and written to be general-purpose, we can all benefit from it. Nobody is answering the question because there's really no existing public code that matches your needs. Write good libre code once, and it can then be used everywhere. Good luck.
P.S. Your own search algorithm may be much faster than ones written for general-purpose graph libraries, e.g. boost::graph, because you can take advantage of the known restrictions and rules of your specific graph, thus making searches much faster. For example, in a transitively-reduced graph, if A is a parent of B, then A cannot also have B as a non-child descendant (e.g. a grandchild), so you can optimize your search using this knowledge. The price you pay is doing lots of tests when changing the graph, but you gain a lot back because searching/scanning can become much faster.

Help me understand how the conflict between immutability and running time is handled in Clojure

Clojure truly piqued my interest, and I started going through a tutorial on it:
http://java.ociweb.com/mark/clojure/article.html
Consider these two lines mentioned under "Set":
(def stooges (hash-set "Moe" "Larry" "Curly")) ; not sorted
(def more-stooges (conj stooges "Shemp")) ; -> #{"Moe" "Larry" "Curly" "Shemp"}
My first thought was that the second operation should take constant time to complete; otherwise functional languages might have little benefit over object-oriented ones. One can easily imagine needing to start with a [nearly] empty set and populating and shrinking it as we go along. So, instead of assigning the new result to more-stooges, we could re-assign it to itself.
Now, by the marvelous promise of functional languages, side effects are not a concern, so the sets stooges and more-stooges should never work on top of each other. So either the creation of more-stooges is a linear operation, or they share a common buffer (like Java's StringBuffer), which would seem like a very bad idea and conflict with immutability (since stooges could subsequently drop elements one by one).
I am probably reinventing a wheel here. It seems like a hash-set in Clojure would be more performant when you start with the maximum number of elements and remove them one at a time until the set is empty, as opposed to starting with an empty set and growing it one element at a time.
The examples above might not seem terribly practical, or might have workarounds, but object-oriented languages like Java/C#/Python have no problem growing or shrinking a set one element (or a few) at a time, while also doing it fast.
A [functional] language which guarantees (or just promises?) immutability would not be able to grow a set as fast. Is there another idiom one can use that somehow helps avoid doing that?
For someone familiar with Python, I would mention set comprehensions versus an equivalent loop approach. The running time of the two is a tiny bit different, but that has to do with the relative speeds of C and the Python interpreter, and is not rooted in complexity. The problem I see is that a set comprehension is often a better approach, but NOT ALWAYS the best approach, as readability might suffer a great deal.
Let me know if the question is not clear.
The core immutable data structures are one of the most fascinating parts of the language for me as well. There is a lot to answering this question, and Rich does a really great job of it in this video:
http://blip.tv/file/707974
The core data structures:
are actually fully immutable
the old copies are also immutable
performance does not degrade for the old copies
access is constant time (actually bounded by a constant)
all support efficient appending, concatenating (except lists and seqs) and chopping
How do they do this???
the secret: it's pretty much all trees under the hood (actually a trie).
But what if I really want to edit something in place?
You can use Clojure's transients to edit a structure in place and then produce an immutable version (in constant time) when you are ready to share it.
As a little background: a trie is a tree where all the common elements of the key are hoisted up to the top of the tree. The sets and maps in Clojure use a trie where the indexes are a hash of the key you are looking for. It breaks the hash up into small chunks and uses each chunk as the key to one level of the hash-trie. This allows the common parts of the new and old maps to be shared, and the access time is bounded because there can only be a fixed number of branches, since the hash used as input has a fixed size.
Using these hash tries also helps prevent the big slowdowns during rebalancing that a lot of other persistent data structures suffer from, so you actually get fairly constant wall-clock access time.
I really recommend the (relatively short) book: Purely Functional Data Structures
In it he covers a lot of really interesting structures and concepts like "removing amortization" to allow true constant-time access for queues, and things like lazy persistent queues. The author even offers a free copy in PDF here.
Clojure's data structures are persistent, which means that they are immutable but use structural sharing to support efficient "modifications". See the section on immutable data structures in the Clojure docs for a more thorough explanation. In particular, it states
Specifically, this means that the new version can't be created using a full copy, since that would require linear time. Inevitably, persistent collections are implemented using linked data structures, so that the new versions can share structure with the prior version.
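To make "share structure with the prior version" concrete, here is the simplest possible illustration of the idea: not Clojure's actual trie, just a persistent cons list, sketched in C++ (the language of the other questions in this thread). "Adding" an element builds one new node and shares the entire old list as its tail, so both versions stay valid and immutable:
#include <memory>
// A persistent singly-linked list: nodes are immutable and shared.
template <typename T>
struct Node {
    T head;
    std::shared_ptr<const Node> tail;
};
// O(1) "update": allocate one node, share everything else.
template <typename T>
std::shared_ptr<const Node<T>> cons(T value,
                                    std::shared_ptr<const Node<T>> list)
{
    return std::make_shared<const Node<T>>(
        Node<T>{std::move(value), std::move(list)});
}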
These posts, as well as some of Rich Hickey's talks, give a good overview of the implementation of persistent data structures.