I'm looking for a (space) efficient implementation of an LCS algorithm for use in a C++ program. Inputs are two random access sequences of integers.
I'm currently using the dynamic programming approach from the wikipedia page about LCS. However, that has O(mn) behaviour in memory and time and dies on me with out of memory errors for larger inputs.
I have read about Hirschberg's algorithm, which improves memory usage considerably, as well as Hunt-Szymanski and Masek-Paterson. Since it isn't trivial to implement these, I'd prefer to try them on my data with an existing implementation. Does anyone know of such a library? I'd imagine that since text diff tools are pretty common, there ought to be some open source libraries around.
When searching for things like that, try scholar.google.com; it is much better for finding scholarly works. It turned up this document, "A survey of longest common subsequence algorithms":
http://www.biotec.icb.ufmg.br/cabi/artigos/seminarios2/subsequence_algorithm.pdf
Not C++ but Python, though I think it's usable:
http://wordaligned.org/articles/longest-common-subsequence
The Hirschberg's Algorithm page embeds a JavaScript implementation: it's almost C.
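For what it's worth, the LCS length (though not the subsequence itself) can already be computed in O(min(m, n)) space by keeping only two rows of the DP table; Hirschberg's algorithm recovers the actual subsequence in the same space by splitting at the midpoint and recursing. A minimal sketch of the two-row version (the function name and types are just for illustration):

#include <algorithm>
#include <cstddef>
#include <vector>

// Length of the longest common subsequence of a and b in O(min(m, n)) memory.
// Only the previous and current rows of the usual O(mn) table are kept.
std::size_t lcs_length(const std::vector<int>& a, const std::vector<int>& b) {
    const std::vector<int>& shorter = (a.size() < b.size()) ? a : b;
    const std::vector<int>& longer  = (a.size() < b.size()) ? b : a;

    std::vector<std::size_t> prev(shorter.size() + 1, 0), curr(shorter.size() + 1, 0);
    for (std::size_t i = 1; i <= longer.size(); ++i) {
        for (std::size_t j = 1; j <= shorter.size(); ++j) {
            if (longer[i - 1] == shorter[j - 1])
                curr[j] = prev[j - 1] + 1;
            else
                curr[j] = std::max(prev[j], curr[j - 1]);
        }
        std::swap(prev, curr);
    }
    return prev[shorter.size()];
}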
Related
I was reading through the string searching algorithm Wikipedia article, and it made me wonder which algorithm strstr uses in Visual Studio. Should I try to use another implementation, or is strstr fairly fast?
Thanks!
I don't know which implementation Visual Studio's strstr uses, and I'm not sure it is publicly documented. However, I found these interesting sources and an example implementation. The latter shows that the algorithm runs in worst-case quadratic time with respect to the size of the searched string; the average cost should be less than that, and quadratic should be the worst-case limit for non-stochastic solutions.
In practice, depending on the size of the input, different algorithms may be used, mainly ones optimized down to the metal. However, one cannot really bet on that. If you are doing DNA sequencing, strstr and its family are very important, and you will most probably have to write your own customized version. Standard implementations are usually optimized for the general case, but on the other hand the people working on compilers and runtimes know their stuff. At any rate, you should not lightly bet your own skills against the pros.
That said, all this discussion about development time can hurt the effort to write good software. Before you embark on this task, be certain that the benefit of rewriting a custom strstr outweighs the effort that will be needed to maintain and tune it for your specific case.
As others have recommended: Profile. Perform valid performance tests.
Without the profile data, you could be optimizing a part of the code that accounts for only 20% of the run time, which is a poor return on investment.
Development costs are the prime concern with modern computers, not execution time. The best use of time is to develop the program to operate correctly with few errors before entering System Test. This is where the focus should be. Also due to this reasoning, most people don't care how Visual Studio implements strstr as long as the function works correctly.
Be aware that there is a line, or point, where a linear search outperforms other searches. Where that line falls depends on the size of the data and the search criteria. For example, a linear search on a processor with branch prediction and a large instruction cache may outperform other techniques for small and medium data sizes. A more complicated algorithm may have more branches that cause reloading of the instruction cache or data cache, wasting execution time.
Another method for optimizing your program is to make the data organization easier to search. For example, make the strings small enough to fit into a cache line. This also depends on the quantity of searching: for a large number of searches, optimizing the data structure may gain some performance.
In summary, optimize if and only if the program is not working correctly, the User is complaining about speed, it is missing timing constraints or it doesn't fit in the allocated memory. Next step is then to profile and optimize the areas where most of the time is spent. Any other optimization is futile.
The C++ standard refers to the C standard for the description of what strstr does. The C standard doesn't seem to put any restrictions on the complexity, so pretty much any algorithm that finds the first instance of the substring would be compliant.
Thus different implementations may choose different algorithms. You'd have to look at your particular implementation to determine which it uses.
The simple, brute-force approach is likely O(m×n) where m and n are the lengths of the strings. If you need better than that, you can try other libraries, like Boost, or implement one of the sub-linear searches yourself.
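As a point of reference, the brute-force approach and one of the sub-quadratic alternatives look roughly like this; these are sketches of the general techniques, not what any particular strstr implementation actually does:

#include <cstddef>
#include <cstring>

// Brute force: worst case O(m * n).
const char* find_naive(const char* haystack, const char* needle) {
    std::size_t n = std::strlen(haystack), m = std::strlen(needle);
    if (m == 0) return haystack;
    for (std::size_t i = 0; i + m <= n; ++i)
        if (std::memcmp(haystack + i, needle, m) == 0)
            return haystack + i;
    return nullptr;
}

// Boyer-Moore-Horspool: sub-linear on average thanks to the bad-character skip table.
const char* find_horspool(const char* haystack, const char* needle) {
    std::size_t n = std::strlen(haystack), m = std::strlen(needle);
    if (m == 0) return haystack;
    std::size_t skip[256];
    for (std::size_t c = 0; c < 256; ++c) skip[c] = m;
    for (std::size_t k = 0; k + 1 < m; ++k)
        skip[static_cast<unsigned char>(needle[k])] = m - 1 - k;
    std::size_t i = 0;
    while (i + m <= n) {
        if (std::memcmp(haystack + i, needle, m) == 0)
            return haystack + i;
        i += skip[static_cast<unsigned char>(haystack[i + m - 1])];
    }
    return nullptr;
}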
dawgdic is a great DAWG library, but it has a significant drawback: it is static (not updateable) and has to be constructed from strings sorted in alphabetical order. If the raw data from which the DAWG is constructed is big (several gigabytes), the initial construction, which involves sorting a huge array of strings, can demand too many resources.
Is there a library that provides a memory-efficient structure like dawgdic but allows construction from a non-sorted dictionary?
Currently, I don't think there's any library that allows construction of DAWGs from a non-sorted dictionary.
But after a lot of searching, I've found this paper, "Incremental Construction of Minimal Acyclic Finite-State Automata", which I think has exactly what you want. Maybe you could make your own library after reading it, and share it with everyone!
EDIT: Have you looked at this question?
I've found some great libraries that allow online construction from non-sorted data, although they are not based on a DAWG:
cedar - a very fast double-array trie
marisa-trie - a very space-efficient string matching library
I currently know of no C++ implementations of a DAWG which support the construction from non-sorted data, but if you're open to creating your own solution which has such a feature, Incremental Construction of Minimal Acyclic Finite-State Automata (2000) is a paper which basically lays out the algorithm behind it.
Alternatively, if you're open to porting solutions from other languages, it may be worth your while to check out MDAG, a Java implementation of the data structure. It supports both on-the-fly string addition and string removal, which is exactly what you are looking for. The code is also easy to follow, and extensively commented, so porting it should be a fairly simple task.
Disclaimer: I am the author of MDAG :) .
Does anyone know of a production-ready K-shortest-paths implementation for C++?
The only available implementation (k-shortest-paths), unfortunately, leaks memory, has counter-intuitive interfaces, and reinvents the wheel with its own Graph class.
I'm looking for something better, probably, boost::graph-based.
There are two possible algorithms available - the simple Yen's algorithm and the optimized Yen's algorithm; either would suit me.
Thanks in advance.
There is another one, but you'll have to check if this also leaks memory.
http://sourceforge.net/projects/ksp/files/ksp/ksp-1.0/
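If you do end up rolling your own on top of boost::graph, the Dijkstra call that Yen's algorithm invokes repeatedly (on progressively modified copies of the graph) looks roughly like this; the graph data below is purely illustrative:

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS,
                                        boost::no_property,
                                        boost::property<boost::edge_weight_t, int>>;
    using Vertex = boost::graph_traits<Graph>::vertex_descriptor;

    // A small example graph: 4 vertices, 4 weighted edges.
    std::pair<int, int> edges[] = {{0, 1}, {1, 2}, {0, 2}, {2, 3}};
    int weights[] = {1, 2, 5, 1};
    Graph g(edges, edges + 4, weights, 4);

    std::vector<Vertex> pred(boost::num_vertices(g));
    std::vector<int> dist(boost::num_vertices(g));

    // Shortest paths from vertex 0; Yen's algorithm reruns this after removing
    // edges/vertices of already-found paths to obtain each spur path.
    boost::dijkstra_shortest_paths(
        g, 0, boost::predecessor_map(&pred[0]).distance_map(&dist[0]));

    std::cout << "distance 0 -> 3: " << dist[3] << '\n';
}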
I want to make my own text file compression program. I don't know much about C++ programming, but I have learned all the basics, including reading and writing files.
I have searched on Google a lot about compression and saw many different kinds of methods for compressing a file, like LZW and Huffman. The problem is that most of them don't come with source code, or the source is very complicated.
Do you know any good webpages where I can learn about compression and make a compression program myself?
EDIT:
I will let this topic be open for a little longer, since I plan to study this the next few days, and if I have any questions, I'll ask them here.
Most of the algorithms are pretty complex. But they all have in common that they take data that repeats, store it only once, and have a scheme for knowing how to uncompress it (putting the repeated segments back in place).
Here is a simple example you can try to implement.
We have this data file
XXXXFGGGJJ
DDDDDDDDAA
XXXXFGGGJJ
Here we have characters that repeat and two lines that repeat, so you could start by finding a way to reduce the file size.
Here's the output of a simple compression scheme:
4XF3G2J
8D2A
4XF3G2J
So we have 4 of X, one of F, 3 of G etc.
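A run-length encoder along those lines is only a few lines of C++. This sketch counts consecutive repeats and omits the count when a character occurs once, matching the example above (a real format would also need a rule for literal digits in the input, e.g. escaping them or storing counts in binary):

#include <cstddef>
#include <iostream>
#include <string>

// Run-length encode a line: "XXXXFGGGJJ" -> "4XF3G2J".
std::string rle_encode(const std::string& line) {
    std::string out;
    for (std::size_t i = 0; i < line.size(); ) {
        std::size_t run = 1;
        while (i + run < line.size() && line[i + run] == line[i])
            ++run;
        if (run > 1)
            out += std::to_string(run);   // counts of 1 are omitted
        out += line[i];
        i += run;
    }
    return out;
}

int main() {
    std::cout << rle_encode("XXXXFGGGJJ") << '\n';  // 4XF3G2J
    std::cout << rle_encode("DDDDDDDDAA") << '\n';  // 8D2A
}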
You can try this page; it contains a clear walk-through of the basics of compression and the first principles.
Compression is not the easiest task. I took a college class to learn about compression algorithms like LZW and Huffman, and I can tell you that they're not that easy. If C++ is your first language and you're just starting out with this sort of thing, I wouldn't recommend trying to write your own compression algorithm just yet. If you are more experienced, then I would try writing the code yourself without any source being provided to you - this shows that you truly understand the compression algorithm.
This is how I was taught - the professor explained the algorithm in very broad terms, and then either we would implement it (in Java, mind you) or we would answer questions about how the algorithm would behave under certain circumstances. If we could do either of those, then we really knew the algorithm - without him showing us any source at all - it's a good skill to develop ;)
Huffman encoding trees are not too complicated; I'd start with them. Here's a link: Example: Huffman Encoding Trees
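To get a feel for how a Huffman tree is built before diving into the linked example, here is a minimal sketch: count frequencies, merge the two least frequent nodes with a priority queue until one tree remains, then read off a bit string per character. It deliberately skips the bit-level file I/O a real compressor needs:

#include <cstddef>
#include <iostream>
#include <map>
#include <memory>
#include <queue>
#include <string>
#include <vector>

struct Node {
    char ch;                             // meaningful only for leaves
    std::size_t freq;
    std::unique_ptr<Node> left, right;
};

// Walk the tree and record the bit string for each leaf.
void collect_codes(const Node* n, const std::string& prefix,
                   std::map<char, std::string>& codes) {
    if (!n->left && !n->right) { codes[n->ch] = prefix.empty() ? "0" : prefix; return; }
    if (n->left)  collect_codes(n->left.get(),  prefix + "0", codes);
    if (n->right) collect_codes(n->right.get(), prefix + "1", codes);
}

int main() {
    std::string text = "XXXXFGGGJJ";

    // 1. Count character frequencies.
    std::map<char, std::size_t> freq;
    for (char c : text) ++freq[c];

    // 2. One leaf per character in a min-heap ordered by frequency.
    auto cmp = [](const Node* a, const Node* b) { return a->freq > b->freq; };
    std::priority_queue<Node*, std::vector<Node*>, decltype(cmp)> heap(cmp);
    for (const auto& p : freq)
        heap.push(new Node{p.first, p.second, nullptr, nullptr});

    // 3. Repeatedly merge the two least frequent nodes.
    while (heap.size() > 1) {
        Node* a = heap.top(); heap.pop();
        Node* b = heap.top(); heap.pop();
        heap.push(new Node{'\0', a->freq + b->freq,
                           std::unique_ptr<Node>(a), std::unique_ptr<Node>(b)});
    }

    // 4. Frequent characters get short codes, rare ones get long codes.
    std::unique_ptr<Node> root(heap.top());
    std::map<char, std::string> codes;
    collect_codes(root.get(), "", codes);
    for (const auto& p : codes)
        std::cout << p.first << " -> " << p.second << '\n';
}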
I have found a way that improves (as far as I have tested) upon the quicksort algorithm beyond what has already been done. I am working on testing it and then I want to get the word out about it. However, I would appreciate some help with some things. So here are my questions. All of my code is in C++ by the way.
One of the sorts I have been comparing to my quicksort is the std::sort from the C++ Standard Library. However, it appears to be extremely slow. I am only sorting arrays of ints and longs, but it appears to be around 8-10 times slower than both my quicksort and a standard quicksort by Bentley and McIlroy (and maybe Sedgewick). Does anyone have any ideas as to why it is so slow? The code I use for the sort is just
std::sort(a,a+numelem);
where a is the array of longs or ints and numelem is the number of elements in the array. The numbers are very random, and I have tried different sizes as well as different amounts of repeated elements. I also tried qsort, but, as I expected, it is even worse.
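For what it's worth, a gap that large is often a benchmarking artifact rather than a property of std::sort itself - for example an unoptimized or debug build, or timing a single run on a cold cache. A minimal harness along these lines, compiled with optimizations enabled, makes the comparison fairer; my_quicksort here is just a placeholder for your own routine:

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// Placeholder for the custom quicksort being benchmarked.
void my_quicksort(long* first, long* last) { std::sort(first, last); }

template <typename F>
double time_ms(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const std::size_t numelem = 1 << 20;
    std::mt19937_64 rng(42);
    std::vector<long> data(numelem);
    for (auto& x : data) x = static_cast<long>(rng());

    // Sort identical copies so both algorithms see exactly the same input.
    std::vector<long> a = data, b = data;
    std::cout << "std::sort:    " << time_ms([&] { std::sort(a.data(), a.data() + numelem); }) << " ms\n";
    std::cout << "my_quicksort: " << time_ms([&] { my_quicksort(b.data(), b.data() + numelem); }) << " ms\n";
}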
Edit: Ignore this first question - it's been resolved.
I would like to find more good quicksort implementations to compare with my quicksort. So far I have a Bentley-McIlroy one and I have also compared with the first published version of Vladimir Yaroslavskiy's dual-pivot quicksort. In addition, I plan on porting timsort (which is a merge sort, I believe) and the optimized dual-pivot quicksort from the JDK 7 source. What other good quicksort implementations do you know about? If they aren't in C or C++ that might be okay, because I am pretty good at porting, but I would prefer C or C++ ones if you know of them.
How would you recommend getting out the word about my additions to the quicksort? So far my quicksort seems to be significantly faster than all other quicksorts that I've tested it against. The main source of its speed is that it handles repeated elements much more efficiently than other methods that I've found. It almost completely eradicates worst case behavior without adding much time in checking for repeated elements. I posted about it on the Java forums, but got no response. I also tried writing to Jon Bentley because he was working with Vladimir on his dual-pivot quicksort and got no response (though I wasn't terribly surprised by this). Should I write a paper about it and put it on arxiv.org? Should I post in some forums? Are there some mailing lists to which I should post? I have been working on this for some time now and my method is legit. I do have some experience with publishing research because I am a PhD candidate in computational physics. Should I try approaching someone in the Computer Science department of my university? By the way, I have also developed a different dual-pivot quicksort, but it isn't better than my single-pivot quicksort (though it is better than Vladimir's dual-pivot quicksort with some datasets).
I really appreciate your help. I just want to add what I can to the computing world. I'm not interested in patenting this or any absurd thing like that.
If you have confidence in your work, definitely try to discuss it with someone knowledgeable at your university as soon as possible. It's not enough to show that your code runs faster than another procedure on your machine. You have to mathematically prove whatever performance gain you claim to have achieved through analysis of your algorithm. I'd say the first thing to do is make sure both algorithms you are comparing are implemented and compiled optimally - you may just be fooling yourself here. The likelihood of an individual achieving such a marked improvement upon such an important sorting method without already having thorough knowledge of its accepted variants just seems minuscule. However, don't let me discourage you. It should be interesting anyway. Would you be willing to post the code here?
...Also, since quicksort is especially vulnerable to worst-case scenarios, the tests you choose to run may have a huge effect, as will the choice of pivots. In general, I would say that any data set with a large number of equal elements, or one that is already highly sorted, is a poor fit for a naive quicksort - there are already well-known ways of combating that situation, as well as better alternative sorting methods.
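For reference, the best-known remedy for inputs with many repeated keys is three-way ("fat pivot" / Dutch national flag) partitioning, which places every element equal to the pivot in its final position in a single pass. A minimal sketch:

#include <algorithm>
#include <vector>

// Quicksort with three-way partitioning: after the loop, a[lo..lt-1] < pivot,
// a[lt..gt] == pivot, and a[gt+1..hi] > pivot, so the block of elements equal
// to the pivot is never touched again by the recursion.
void quicksort3(std::vector<long>& a, long lo, long hi) {
    if (lo >= hi) return;
    long pivot = a[lo + (hi - lo) / 2];
    long lt = lo, i = lo, gt = hi;
    while (i <= gt) {
        if (a[i] < pivot)      std::swap(a[lt++], a[i++]);
        else if (a[i] > pivot) std::swap(a[i], a[gt--]);
        else                   ++i;
    }
    quicksort3(a, lo, lt - 1);
    quicksort3(a, gt + 1, hi);
}

// Usage: quicksort3(v, 0, static_cast<long>(v.size()) - 1);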
If you have truly made a breakthrough and have the math to prove it, you should try to get it published in the Journal of the ACM. It's definitely one of the more prestigious journals for computer science.
The second best would be one of the IEEE journals such as Transactions on Software Engineering.