Let's say I have many objects containing strings of non-trivial length (around 3-4 kB each). The strings are all different from each other, yet at the same time they contain lots of common parts/subsequences. On average, maybe 80-90% of any individual string is contained within the others as well. Is there an easy way to automatically exploit this huge redundancy to compress the data?
Ideally the solution would be C++ and transparent for the user (i.e. I can use it as if I were accessing a regular read-only const std::string, but reading from compressed storage instead).
Algorithmically, Lempel–Ziv–Welch with one dictionary for all objects/strings might be a good start.
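LZW itself is rarely packaged as a reusable library, but the same shared-dictionary idea is easy to try with zlib's deflate, which accepts a preset dictionary via deflateSetDictionary. A rough sketch, assuming zlib is available; the dictionary contents (your common substrings, up to 32 KB, most useful ones placed last) and the compression level are placeholders:

#include <zlib.h>
#include <stdexcept>
#include <string>
#include <vector>

// Compress one string against a shared dictionary of common substrings.
std::vector<unsigned char> compressWithDictionary(const std::string& text,
                                                  const std::string& dict)
{
    z_stream strm{};
    if (deflateInit(&strm, Z_BEST_COMPRESSION) != Z_OK)
        throw std::runtime_error("deflateInit failed");
    deflateSetDictionary(&strm,
                         reinterpret_cast<const Bytef*>(dict.data()),
                         static_cast<uInt>(dict.size()));

    std::vector<unsigned char> out(deflateBound(&strm, text.size()));
    strm.next_in   = reinterpret_cast<Bytef*>(const_cast<char*>(text.data()));
    strm.avail_in  = static_cast<uInt>(text.size());
    strm.next_out  = out.data();
    strm.avail_out = static_cast<uInt>(out.size());

    if (deflate(&strm, Z_FINISH) != Z_STREAM_END)   // whole input in one call
        throw std::runtime_error("deflate failed");
    out.resize(strm.total_out);
    deflateEnd(&strm);
    return out;
}

Decompression mirrors this: inflate reports Z_NEED_DICT, at which point you call inflateSetDictionary with the same shared dictionary.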
You can use Huffman coding; the implementation is not hard. There are also zip/deflate implementations built into many languages (like C# and Java) that you can use.
Also, if you are sure that 80-90% of the content is repeated across all strings, you can create a dictionary of all words, and then for each string store which dictionary words it contains: keep a bit array of a fixed size (say 10,000) and set bits[i] to 1 if words[i] exists in the current string. If each word is around 5 characters long, this abbreviation takes roughly 1/5 of the original size.
If the common parts of the strings are common because they are composed from other strings, then you might get some traction by using the STLport rope class, which looks for all the world like a std::string but uses a substring-tree representation with copy-on-write. That makes ropes both very space efficient (common substrings are shared) and very good at inserts and deletes (O(log n)).
When to use rope:
you are making a template engine. Document instances are made from a template by substituting varying data into the template and are then cached for future use. Parts that are common to templates and instances are stored only once and shared across instances, and inserts and deletes are cheap.
When not to use rope:
you are loading many documents from outside the domain of your application (from disk, or over a network) and using them without modification. rope doesn't share strings unless they are copied from one rope to another. If you can afford to do the work of finding the common substrings, rope can still be used to improve your final representation.
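For illustration, a small sketch using GCC's copy of the same SGI rope (header <ext/rope>, type __gnu_cxx::crope); STLport's rope has the same interface. The header/footer strings are just example data:

#include <ext/rope>   // GCC's SGI STL extension; STLport provides the same class
#include <iostream>

int main()
{
    __gnu_cxx::crope header("<html><head><title>Page</title></head><body>");
    __gnu_cxx::crope footer("</body></html>");

    // Each page shares the header/footer tree nodes instead of copying them;
    // only the varying middle part is stored per document.
    __gnu_cxx::crope page1 = header + __gnu_cxx::crope("first body") + footer;
    __gnu_cxx::crope page2 = header + __gnu_cxx::crope("second body") + footer;

    std::cout << page1 << '\n' << page2 << '\n';
}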
Like @Saeed mentioned, simple Huffman coding will perform well here.
There is no need for a dictionary if the common words are known a priori (you've mentioned that it's HTML). Just precompute a Huffman table using statistical data from many HTML files. (Note that you can encode a whole tag as a single symbol, and you can have as many symbols as you want.)
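A rough sketch of the encoding side, assuming the input has already been tokenised into symbols (single characters or whole tags) and that the precomputed codes are available; HuffCode and the table contents are placeholders:

#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical precomputed Huffman table: symbol -> (code bits, code length).
// In practice you would build it once from the statistics of many HTML files.
struct HuffCode { uint32_t bits; uint8_t length; };

std::vector<uint8_t> encode(const std::vector<std::string>& symbols,
                            const std::map<std::string, HuffCode>& table)
{
    std::vector<uint8_t> out;
    uint32_t buffer = 0;   // pending bits, most significant first
    int pending = 0;

    for (const auto& s : symbols) {
        const HuffCode& c = table.at(s);
        for (int i = c.length - 1; i >= 0; --i) {
            buffer = (buffer << 1) | ((c.bits >> i) & 1u);
            if (++pending == 8) {
                out.push_back(static_cast<uint8_t>(buffer));
                buffer = 0;
                pending = 0;
            }
        }
    }
    if (pending > 0)   // left-align and flush the last partial byte
        out.push_back(static_cast<uint8_t>(buffer << (8 - pending)));
    return out;
}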
The usual way is to store the characters in a string, but because the user often deletes or adds characters in the middle of the text while writing, perhaps it is better to use std::list<char> to contain the characters; then adding characters in the middle of the list is not a costly operation.
The following paper summarizes the data structures used in word processors: http://www.cs.unm.edu/~crowley/papers/sds.pdf
Data Structures for Text Sequences.
Charles Crowley, University of New Mexico, 1998
The data structure used to maintain the sequence of characters is an
important part of a text editor. This paper investigates and evaluates
the range of possible data structures for text sequences. The ADT
interface to the text sequence component of a text editor is examined.
Six common sequence data structures (array, gap, list, line pointers,
fixed size buffers and piece tables) are examined and then a general
model of sequence data structures that encompasses all six structures
is presented. The piece table method is explained in detail and its
advantages are presented. The design space of sequence data structures
is examined and several variations on the ones listed above are
presented. These sequence data structures are compared experimentally
and evaluated based on a number of criteria. The experimental
comparison is done by implementing each data structure in an editing
simulator and testing it using a synthetic load of many thousands of
edits. We also report on experiments on the sensitivity of the results
to variations in the parameters used to generate the synthetic editing
load.
First, word processors do much more than string manipulation. You will need a rich-text data structure. If you need pagination you will also need some metadata, such as the page setup. Do some research on Word and you will have your answer.
For the rich-text part, your data structure has to store two things: characters and attributes. In other words, you need some kind of markup language. HTML/DOM is one choice, but most of the time it is overkill because of its complexity.
There are many data structures that can handle the character part: Rope, Gap Buffer, and Piece Table. But none of them provides attribute support directly; you have to build that yourself.
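For the character part, here is a minimal sketch of the piece table idea (no attribute support, as noted above; a real editor would layer attribute runs on top of it):

#include <cstddef>
#include <string>
#include <vector>

// Minimal piece table: the document is a sequence of "pieces", each pointing
// into either the original (read-only) buffer or an append-only "add" buffer.
// Inserting text never moves existing characters.
class PieceTable {
    struct Piece { bool fromAdd; std::size_t start; std::size_t length; };
    std::string original_, add_;
    std::vector<Piece> pieces_;
public:
    explicit PieceTable(std::string original) : original_(std::move(original))
    {
        if (!original_.empty())
            pieces_.push_back({false, 0, original_.size()});
    }

    void insert(std::size_t pos, const std::string& text)
    {
        Piece added{true, add_.size(), text.size()};
        add_ += text;

        std::size_t offset = 0;
        for (std::size_t i = 0; i < pieces_.size(); ++i) {
            if (pos <= offset + pieces_[i].length) {
                std::size_t within = pos - offset;
                Piece left  = pieces_[i]; left.length = within;
                Piece right = pieces_[i];
                right.start += within; right.length -= within;

                std::vector<Piece> repl;
                if (left.length)  repl.push_back(left);
                repl.push_back(added);
                if (right.length) repl.push_back(right);

                pieces_.erase(pieces_.begin() + i);
                pieces_.insert(pieces_.begin() + i, repl.begin(), repl.end());
                return;
            }
            offset += pieces_[i].length;
        }
        pieces_.push_back(added);   // insertion at (or past) the end
    }

    std::string text() const
    {
        std::string out;
        for (const Piece& p : pieces_)
            out += (p.fromAdd ? add_ : original_).substr(p.start, p.length);
        return out;
    }
};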
AbiWord used a list-based Piece Table before, but now uses a tree-based one. Go to the AbiWord wiki and you will find more.
OpenOffice uses a different approach. Basically, it keeps a list of paragraphs, and inside each paragraph it keeps a string (or some other, more efficient data structure) and a list of attributes. I prefer this approach because a paragraph is naturally a small enough unit to edit, and it is much easier than a tree-based piece table.
SGI STL has a Rope class, you may want to check it out:
http://www.sgi.com/tech/stl/Rope.html
Using std::list<char> would require about nine times more storage per character than using std::string. That's probably not a good tradeoff. My first inclination would be to use a std::vector<std::string>, where each string object holds the text of a paragraph. Insertions and deletions within a paragraph will be fast enough.
I am dealing with a collection of objects whose reasonable size could be anywhere between 1 and 50K (but there's no set upper limit). Each object contains a handful of strings.
I want to implement a search function that can partially, exactly, or regex-match any one of these strings and subsequently return a list of matching objects.
If each object contained only a single string, then I could simply sort them lexicographically and pull out ranges fairly easily, but I am reluctant to implement a map-like structure for each of the contained strings due to speed/memory concerns.
Is there a data structure well suited to this kind of operation for speed and memory efficiency? I'm sensing a database maybe on the horizon, but I know little about them, so I want to hold off researching until someone more knowledgeable can nudge me in the right direction!
A map-like collection is probably your best bet: the key will be the string, and the value a reference to the containing object. If your strings are held inside the objects as STL strings, then you could store a reference to that data in the key part of the map instead (alternatively, use a shared_ptr for the strings and reference them from both the object and the map).
Searching and sorting then just become a matter of implementing a custom functor that uses the dereferenced data. The size of each map entry will be two references plus the map overhead, which isn't going to be that bad when you consider that the alternatives will be as large, if not larger.
partially, exactly, or RegEx match any of one these strings and subsequently return a list of objects
Well, for exact matches, you could have a std::map<std::string, std::vector<object*> >. The key would be the exact string, and the vector holds pointers to the matching objects; many of these pointers may point to a single object instance.
You could have a front-end map from partial strings to full strings: say the string is "dogged", you'd sadly have to put entries in for "dogged", "ogged", "gged", "ged", "ed" and "d" (stop wherever you like if you want a minimum match size)... then use lower_bound to search. That way, if you search on "dog", you can still see that there is a match for "dogged" (it doesn't matter if it first lands on, say, "dogfood" instead). This could be a simple std::map<std::string, std::string>. While you increment forwards from the lower_bound position and the key still matches (i.e. from "dogfood" to "dogged" to ... until it no longer starts with "dog"), you look each full string up in the exact-match map and aggregate the results.
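A rough sketch of those two maps; Object is a placeholder for your own type, and a multimap is used for the suffix index so that several full strings can share a suffix:

#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Object;   // hypothetical user type

// Exact-match index: full string -> objects containing it.
using ExactIndex  = std::map<std::string, std::vector<Object*> >;
// Partial-match index: every suffix of every string -> the full string.
using SuffixIndex = std::multimap<std::string, std::string>;

void indexString(const std::string& s, Object* obj,
                 ExactIndex& exact, SuffixIndex& suffixes)
{
    exact[s].push_back(obj);
    for (std::size_t i = 0; i < s.size(); ++i)   // "dogged", "ogged", "gged", ...
        suffixes.emplace(s.substr(i), s);
}

// Find every indexed string that contains `query` as a substring.
std::vector<std::string> partialMatch(const std::string& query,
                                      const SuffixIndex& suffixes)
{
    std::vector<std::string> hits;
    for (auto it = suffixes.lower_bound(query);
         it != suffixes.end() &&
         it->first.compare(0, query.size(), query) == 0;
         ++it)
        hits.push_back(it->second);   // look these up in the exact-match map
    return hits;
}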
For regular expressions, I have no good suggestion... I'd start with a brute-force search through all the full strings. If that really isn't good enough, you could do some rough optimisations like checking for a constant substring to filter by before doing the brute-force matching, but it's beyond me to imagine how to do this very thoroughly and fast.
(substitute your favourite smart pointers for object*s if useful)
Thanks for all the replies, but following on from techniques mentioned in this post, I've decided to use an enhanced suffix array from the header-only SeqAn project.
I am starting to read about tries. I also got references from friends here: Tutorials on Trie.
I am not clear on the following:
It seems that to go ahead and use a Trie, one assumes that all the input strings that form the search space and are used to build the Trie are already split at distinct word boundaries.
E.g. all the example tutorials I have seen use input such as:
S={ball, bid, byte, car, cat, mac, map etc...}
Then we build the trie from S and do our searches (really fast)
My question is: How did we end up with S to begin with?
I mean before starting to read about tries I imagined that S would be an arbitrarily long text e.g. A Shakespeare passage.
Then using a Trie we could find things really fast.
But it seems this is not the case.
Is the assumption here that the input passage (of Shakespeare for example) is pre-processed first extracting all the words to get S?
So if one wants to search for patterns (the same way you do when you Google a query that contains spaces and still see all matching pages), is a Trie not appropriate?
When can we know if a Trie is the data structure that we can actually use?
Tries are useful where you have a fixed dictionary you want to look up quickly. Compared to a hash table, a trie may require less storage for a large dictionary but may well take longer to look up. One place I have used one is for mapping URLs to operations on a web server, where there may be inheritance of functionality based on the prefix. Here, recursing down the trie enables lookup of all of the methods that need to be called for a particular URL. It would also be efficient for storing a dictionary.
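For illustration, a minimal trie with longest-prefix lookup, which is the operation the URL-dispatch example relies on; handler ids are plain ints here:

#include <map>
#include <memory>
#include <string>

// Minimal trie: maps fixed dictionary strings to a value (here a handler id)
// and supports longest-prefix lookup, e.g. for URL -> handler dispatch.
struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> children;
    int value = -1;   // -1 means "no entry ends here"
};

class Trie {
    TrieNode root_;
public:
    void insert(const std::string& key, int value)
    {
        TrieNode* node = &root_;
        for (char c : key) {
            auto& child = node->children[c];
            if (!child) child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->value = value;
    }

    // Returns the value of the longest inserted prefix of `key`, or -1.
    int longestPrefix(const std::string& key) const
    {
        const TrieNode* node = &root_;
        int best = node->value;
        for (char c : key) {
            auto it = node->children.find(c);
            if (it == node->children.end()) break;
            node = it->second.get();
            if (node->value != -1) best = node->value;
        }
        return best;
    }
};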
For doing text searches you would typically represent documents as a vector of lexeme tokens with weights (perhaps based on occurrence frequency), and then search against that to get a ranking of documents against a particular search vector. There are a number of standard libraries that do this, which I would suggest using rather than writing your own - particularly for removing stopwords, dealing with synonyms and stemming.
We can use tries for substring searching in linear time, without preprocessing the string every time. You can find a very good tutorial on suffix tree construction at:
Ukkonen's suffix tree algorithm in plain English?
As the other answers have said, a trie is useful because it provides fast string look-ups (or, more generally, look-ups for any sequence). Some examples of where I've used tries:
My answer to this question uses a (slightly modified) trie for matching sentences: it is a trie based on a sequence of words, rather than a sequence of characters. (The other answers to that question probably demonstrate the trie in action more clearly.)
I've also used a trie in a game which had a large number of rooms with names (the total number and the names were defined at run time); each of these names had to be unique, and one had to be able to search for a room with a given name quickly. A hash table could also have been used, but in some ways a trie is simpler to implement and faster when using strings. (My trie implementation ended up being ~50 lines of C.)
The trie tag probably has many more examples.
There are multiple ways to use tries. The typical example is a lookup such as the one you have presented. However, tries can also be used to fully index a complete text. Either you use Ukkonen's suffix tree algorithm to produce a suffix trie, or you explicitly construct the suffix trie by storing suffixes (much slower than Ukkonen's algorithm, but also much simpler). As this is preprocessing that needs to be done only once, speed is not that crucial.
For this you would just take your text, insert the full text, then chop off the first letter and insert the resulting text, chop off the second letter and insert, and so on.
So if we have the text "The Text" we would insert the following set:
{"The Text", "he Text", "e Text", " Text", "Text", "ext", "xt", "t"}
In the resulting suffix trie we can easily search for any kind of prefix. This is also space efficient, because we do not need to store each whole string separately: common prefixes are stored only once.
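As a compact illustration of that insertion pattern, here the suffixes are generated and dropped into a std::set, which stands in for the trie (it also supports prefix queries via lower_bound, though without the sharing of common prefixes):

#include <cstddef>
#include <set>
#include <string>

// Build the set of suffixes of a text ("The Text" -> "The Text", "he Text", ...).
std::set<std::string> buildSuffixes(const std::string& text)
{
    std::set<std::string> suffixes;
    for (std::size_t i = 0; i < text.size(); ++i)
        suffixes.insert(text.substr(i));
    return suffixes;
}

// Does any suffix start with `pattern`? Equivalent to asking whether
// `pattern` occurs anywhere in the original text.
bool containsSubstring(const std::set<std::string>& suffixes,
                       const std::string& pattern)
{
    auto it = suffixes.lower_bound(pattern);
    return it != suffixes.end() &&
           it->compare(0, pattern.size(), pattern) == 0;
}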
If you need to store much longer strings space-efficiently, it is best not only to share prefixes but also suffixes. In that case you could build a directed acyclic word graph (DAWG), which is very similar in concept to a trie.
So a trie in that sense allows finding arbitrary substrings, including partial words. If you are only interested in storing words, a different data structure should be used, for example an inverted list (if order is important) or a vector-space-based retrieval algorithm (in case word order does not matter).
Usually, entities, components, and other parts of the game code in data-driven design will have names that get checked when you want to find out exactly which object you're dealing with.
void Player::Interact(Entity *myEntity)
{
    if (myEntity->isNearEnough(this) && myEntity->GetFamilyName() == "guard")
    {
        static_cast<Guard*>(myEntity)->Say("No mention of arrows and knees here");
    }
}
If you ignore the possibility that this might be premature optimization, it's pretty clear that looking up entities would be a lot faster if their "name" were a simple 32-bit value instead of an actual string.
Computing hashes of the string names is one possible option. I haven't actually tried it, but with a 32-bit range and a good hash function, the risk of collision should be minimal.
The question is this: obviously we need some way to convert the string names used in code (or in some kind of external file) to those integers, since the person working on these named objects will still want to refer to the object as "guard" instead of "0x2315f21a".
Assuming we're using C++ and want to replace all strings that appear in the code, can this even be achieved with built-in language features, or do we have to build an external tool that manually looks through all files and exchanges the values?
Jason Gregory wrote this in his book:
At Naughty Dog, we used a variant of the CRC-32 algorithm to hash our strings, and we didn't encounter a single collision in over two years of development on Uncharted: Drake's Fortune.
So you may want to look into that.
He also talked about the build step you mentioned. They basically wrap the strings that need to be hashed in something like:
_ID("string literal")
And use an external tool at build time to hash all the occurrences. This way you avoid any runtime costs.
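If a build-time tool is too much machinery, a similar effect can be had in-language with a constexpr hash (C++14 or later). The sketch below uses FNV-1a rather than the CRC-32 variant mentioned in the book, purely to keep it short:

#include <cstdint>
#include <cstdio>

// Compile-time string hash (FNV-1a). With constexpr, hashString("guard")
// is folded to a constant, so no hashing happens at run time.
constexpr std::uint32_t hashString(const char* s)
{
    std::uint32_t h = 2166136261u;             // FNV offset basis
    while (*s) {
        h ^= static_cast<unsigned char>(*s++);
        h *= 16777619u;                        // FNV prime
    }
    return h;
}

constexpr std::uint32_t kGuardId = hashString("guard");

void interact(std::uint32_t familyId)
{
    if (familyId == kGuardId) {
        std::puts("No mention of arrows and knees here");
    }
}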
This is what enums are for. I wouldn't dare to decide which resource is best for the topic, but there are plenty to choose from: https://www.google.com/search?q=c%2B%2B+enum
I'd say go with enums!
But if you already have a lot of code using strings, well, either just keep it that way (simple, and usually fast enough on a PC anyway) or hash the strings into an integer using some kind of CRC or MD5.
This is basically solved by adding an indirection on top of a hash map.
Say you want to convert strings to integers:
Write a class that wraps both an array and a hash map. I call these classes dictionaries.
The array contains the strings.
The hash map's key is the string (shared pointers, or stable arrays where raw pointers are safe, work as well).
The hash map's value is the index into the array where the string is located, which is also the opaque handle returned to calling code.
When adding a new string to the system, it is first looked up in the hash map; if it is already there, the existing handle is returned.
If it is not present, add the string to the array; its index is the handle.
Set the string and the handle in the map, and return the handle.
Notes/Caveats:
This strategy makes getting the string back from a handle run in constant time (it is merely an array dereference).
Handle identifiers are assigned first come, first served, but if you serialize the strings instead of the handle values, that won't matter.
operator[] overloads for both the key and the value are fairly simple (registering new strings, or getting the string back), but wrapping the handle in a user-defined class (which wraps an integer) adds much-needed type safety, and also avoids ambiguity if you want the key and the value to be the same type (the overloaded operator[]s won't compile, etc.).
You have to store the strings in RAM, which can be a problem.
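A minimal sketch of the dictionary class described above; note that, unlike the pointer-sharing variants suggested, it stores each string twice (once in the array and once as the map key):

#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Array of strings plus a hash map from string to its index.
// The index doubles as the opaque handle.
class StringDictionary {
    std::vector<std::string> strings_;
    std::unordered_map<std::string, std::size_t> indexOf_;
public:
    std::size_t intern(const std::string& s)
    {
        auto it = indexOf_.find(s);
        if (it != indexOf_.end())
            return it->second;             // already registered, return handle
        std::size_t handle = strings_.size();
        strings_.push_back(s);
        indexOf_.emplace(s, handle);
        return handle;
    }

    // Constant time: just an array dereference.
    const std::string& lookup(std::size_t handle) const
    {
        return strings_[handle];
    }
};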
What's the best way to store the following message into a data structure for easy access?
"A=abc,B=156,F=3,G=1,H=10,G=2,H=20,G=3,H=30,X=23.50,Y=xyz"
The above consists of key/value pairs of the following:
A=abc
B=156
F=3
G=1
H=10
G=2
H=20
G=3
H=30
X=23.50
Y=xyz
The tricky part is the keys F, G and H. F indicates the number of items in a group, where each item consists of G and H.
For example if F=3, there are three items in this group:
Item 1: G=1, H=10
Item 2: G=2, H=20
Item 3: G=3, H=30
In the above example, each item consists of two key/value pairs: G and H. I would like the data structure to be flexible enough to handle items gaining additional key/value pairs. As much as possible, I would like to maintain the order in which the pairs appear in the string.
UPDATE: I would like to store the keys and values as strings, as a map would, even though the values often look like floats or other data types.
This may not be what you're looking for, but I'd simply recommend using QuickFIX (quickfixengine.org), which is a very high-quality C++ FIX library. It has the type FIX::Message, which I believe does everything you're looking for.
I work with FIX a lot in Python and Perl, and I tend to use a dictionary or hash. Your keys should be unique within the message. For C++, you could look at std::map or the non-standard hash_map extension.
If you only have a subset of FIX messages to support (most exchanges usually use 10-20 types), you can roll your own classes to parse messages into. If you're trying to be more generic, I would suggest creating something like a FIXChunk class. The entirety of the message could be stored in this class, organized into keys and their values, as well as lists of repeating groups; each of the repeating groups would itself be a FIXChunk.
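A rough sketch of what such a FIXChunk might look like for the message in the question. The group-boundary rule is an assumption (F carries the item count, the key right after F leads every item, and all items have the same keys as the first one); a real FIX parser would drive this from its data dictionary instead:

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Ordered key/value pairs plus repeating groups; each group item is a chunk.
struct FIXChunk {
    std::vector<std::pair<std::string, std::string>> fields;
    std::vector<FIXChunk> groupItems;
};

static std::vector<std::pair<std::string, std::string>>
splitPairs(const std::string& msg)   // assumes every segment is key=value
{
    std::vector<std::pair<std::string, std::string>> pairs;
    std::size_t pos = 0;
    while (pos < msg.size()) {
        std::size_t comma = msg.find(',', pos);
        if (comma == std::string::npos) comma = msg.size();
        std::size_t eq = msg.find('=', pos);
        pairs.emplace_back(msg.substr(pos, eq - pos),
                           msg.substr(eq + 1, comma - eq - 1));
        pos = comma + 1;
    }
    return pairs;
}

FIXChunk parseMessage(const std::string& msg)   // assumes a well-formed message
{
    auto pairs = splitPairs(msg);
    FIXChunk root;
    std::size_t i = 0;
    while (i < pairs.size()) {
        if (pairs[i].first == "F") {                    // a repeating group follows
            std::size_t count = std::stoul(pairs[i].second);
            root.fields.push_back(pairs[i++]);
            const std::string leader = pairs[i].first;  // first key of each item, e.g. "G"

            // First item: everything up to the next occurrence of the leader key.
            // NB: with count == 1 this greedily absorbs trailing fields.
            FIXChunk first;
            first.fields.push_back(pairs[i++]);
            while (i < pairs.size() && pairs[i].first != leader)
                first.fields.push_back(pairs[i++]);
            const std::size_t itemSize = first.fields.size();
            root.groupItems.push_back(std::move(first));

            // Remaining items: assumed to have the same keys as the first one.
            for (std::size_t item = 1; item < count && i < pairs.size(); ++item) {
                FIXChunk chunk;
                for (std::size_t k = 0; k < itemSize && i < pairs.size(); ++k)
                    chunk.fields.push_back(pairs[i++]);
                root.groupItems.push_back(std::move(chunk));
            }
        } else {
            root.fields.push_back(pairs[i++]);
        }
    }
    return root;
}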
A simple solution: you could use a std::multimap<std::string, std::string> to store the data. That allows you to have multiple entries with the same key.
In my experience, FIX messages are usually stored either in their original form (as a stream of bytes) or as a complex data structure providing a full API that can handle their intricacies. After all, a FIX message can sometimes represent a tree of data.
The problem with the latter approach is that the conversion is computationally expensive in high-speed trading systems. If you are building a trading system, you may prefer to lazily parse only the parts of the FIX message that you need, which is admittedly easier said than done.
I am not familiar with efficient open-source implementations; companies like the one I work for usually have proprietary implementations.