Lempel-Ziv 76 complexity - compression

Can someone explain to me Lempel-Ziv 76 complexity? I was under the impression that you initialize with the first letter of the string in your dictionary, and then check subsequent blocks for existence in the previous substring, growing one letter each time a substring is found. If no substring exists in the previous substring, that substring is called a block and the next letter becomes the next substring to be searched.
For example,
01011010001101110010
0|1
since 1 is not in 0, we get 0|1|0
since 0 is in 01, we get 0|1|01
since 01 is in 01, we grow to 011; since 011 is not in 01, we get 0|1|011|0
since 0 is in 01011, we get 0|1|011|01
since 01 is in 01011, we get 0|1|011|010
since 010 is in 01011, we grow to 0100; since 0100 is not in 01011, we get 0|1|011|0100|0
and so on until, we get 0|1|011|0100|011011|1001|0,
where the last letter can be a repeat if necessary.
What am I doing wrong? Because I am being told that for a string 1111111, the decomposition is 1|111111. Thanks!

I agree with your decomposition:
01011010001101110010 -> 0.1.011.0100.011011.1001.0
I also believe that what you were told is correct:
1111111 -> 1.111111
However, how you arrived at your original decomposition is not quite right! Hence the confusion about decomposing 1111111. I think, according to your reasoning:
1111111 -> 1.11.1111
which I'm pretty sure is not correct.
Extending the existing sequence of words (blocks) is not as simple as checking whether the extension previously appears as a substring of the previous history. It boils down to the concept of reproducibility of an extension that Lempel and Ziv describe in their paper On the Complexity of Finite Sequences (I'm assuming that's what you're working from!). An extension is formed such that it is the shortest word that is not reproducible from the previous history. The concept of reproducibility that they describe is rather complicated, but it boils down to being able to find a starting position in the previous history, from which you can sequentially copy symbols, one at a time, onto the end of the history to form the extension.
From your original sequence, assume the symbols have positions from 1 to 20. The first symbol is always a word/block by itself:
0.
The next extension is not reproducible from the previous history:
0.1.
The next extension is:
0.1.011.
The reason why it can't be 0 or 01 is as follows: 0 is reproducible from the previous history, by copying one symbol from position 1 to the end; 01 is reproducible by copying two symbols from position 1 to the end; 011 is not reproducible.
The next extension is:
0.1.011.0100.
0 is reproducible by copying one symbol from position 1 to the end; 01 is reproducible by copying two symbols from position 1 to the end; 010 is reproducible by copying three symbols from position 1 to the end; 0100 is not reproducible.
And so on.
The decomposition of 1111111 is as follows: the first symbol is a block by itself:
1.
The next extension is:
1.111111
1 is reproducible by copying one symbol from position 1 to the end of the previous history. 11 is reproducible by copying two symbols from position 1 to the end. This is where it gets complicated - in this case, when you copy two symbols, the source of the copy actually extends past the end of the previous history! In other words, the second 1 that you copy is actually part of the extension that results from copying the first 1 onto the end. Similarly, 111, 1111, 11111, 111111 are all reproducible, due to this recursive copying process. However, since we have now reached the end of the original sequence, the last extension is deemed to be 111111.
Hopefully my explanation has made some sense.
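If it helps to see the rule in code, here is a minimal sketch (my own, not from the paper) of the parsing just described. It uses the fact that an extension ending at position j is reproducible from the history exactly when it occurs as a substring of the sequence with its last symbol dropped, which covers the self-referential copying case:

def lz76_factorize(s):
    # Each new word is the shortest extension that is NOT reproducible from
    # the history; an extension s[h:j] is reproducible iff it occurs in s[:j-1],
    # which allows the copy source to run into the extension itself.
    words = []
    h = 0                                   # number of symbols already parsed
    while h < len(s):
        j = h + 1
        while j < len(s) and s[h:j] in s[:j - 1]:
            j += 1
        words.append(s[h:j])                # the last word may be a repeat
        h = j
    return words

print(lz76_factorize('01011010001101110010'))  # ['0', '1', '011', '0100', '011011', '1001', '0']
print(lz76_factorize('1111111'))               # ['1', '111111']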

This paper does not agree with your description of the algorithm. According to the paper, you have a new partition if it doesn't match any previous partition. You don't get to make partitions based on the entire unpartitioned preceding sequence, as you have done. So for your examples (I am using . instead of | to partition, since that's easier to read):
01011010001101110010
partitions into:
    0.1.01.10.100.011.0111.00.10
so the LZ76 weight is 9 (not 7).
1111111
partitions into:
1.11.111.1
Both examples illustrate the case where the final partition is contained in a previous one; hence the < r instead of <= r in the definition.
I don't have the original paper, so I can't check whether this paper got it wrong or not. However I doubt that they incorrectly copied the definition from the original paper.

Related

How is Huffman compression decoded?

I'm struggling to understand how to decode, say, a text file that has been compressed using Huffman's method. Let's say I'm reading a text file, I get a list of all the characters and the frequency with which they occur, I create a Huffman tree, and all the characters have a specific code assigned to them. Say,
a: 110
b: 11
c: 010
etc.
When I want to decompress this text file and print/read its contents, how do I do that? How do I know if the file reads "abc" or "bac"?
A small solution I came up with: after the Huffman tree has been created, I read the file all over again and build an array by inserting every character's code as I read it.
Say, a while loop where I read a character until I've reached EOF.
character = a: insert 110 into the array; character = b: insert 11; and so on, until we are left with 11011010.
But I feel like there should be a better way.
EDIT: The codes for a,b, and c are random, not actual Huffman codes. I put in random ones as it's irrelevant for the question, I'm only interested in how it would be decoded with or without a real life example. But here's an example of Huffman code for "Hello World."
l: 11
o: 001
H: 100
e: 0101
spacebar: 0000
w: 0001
r: 101
d: 011
.: 0100
A Huffman code is a prefix code, which means that no code can be a prefix of any other code. Your example of a Huffman code is most definitely not a Huffman code. There you have 11 (b), which is a prefix of 110 (a). That cannot be the result of a correct implementation of Huffman's algorithm.
Update for question update:
You are incorrect. The codes are extremely relevant for the question. The examples you gave cannot be unambiguously decoded.
Second update of question:
It is still not clear what you're asking, but here is an answer to the question: "How do I decode a stream of bits that are a Huffman-coded sequence of symbols?"
Here is the tree for the example prefix code:
You see that if you follow any sequence of branches to a symbol, the branches you followed are the bits in that code. That is exactly how you decode the incoming stream of bits.
1. Start at the node at the top of the tree.
2. Read one bit from the stream.
3. Follow the branch for that value, left for 0, right for 1.
4. If you arrive at another node, go to step 2.
5. Otherwise, emit the symbol in the leaf, and go to step 1.
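To make the walk concrete, here is a small sketch (my own illustration, not part of the original answer) that builds a decoding tree from the "Hello World." table above, using nested dicts as inner nodes; the table's "w" is taken as the capital 'W' that appears in the text:

codes = {
    'l': '11', 'o': '001', 'H': '100', 'e': '0101', ' ': '0000',
    'W': '0001', 'r': '101', 'd': '011', '.': '0100',
}

# Build the tree: follow '0'/'1' branches; a leaf holds the decoded symbol.
root = {}
for symbol, code in codes.items():
    node = root
    for bit in code[:-1]:
        node = node.setdefault(bit, {})
    node[code[-1]] = symbol

def decode(bits):
    out, node = [], root
    for bit in bits:
        node = node[bit]             # step 3: follow the branch for this bit
        if isinstance(node, str):    # reached a leaf: emit the symbol...
            out.append(node)
            node = root              # ...and go back to the top of the tree
    return ''.join(out)

encoded = ''.join(codes[c] for c in 'Hello World.')
print(decode(encoded))               # -> Hello World.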

regex for number between numbers

I'm in need of a regex which takes a minimum and a maximum number to determine valid input, and I want the maximum and minimum to be dynamic.
I have been trying to get this done using this link
https://stackoverflow.com/a/13473595/1866676
But couldn't get it to work. Can someone please let me know how to do this.
Let's say I want to make an HTML5 input box, and I want it to only accept numbers from 100 to 1999.
What would a regex for this look like?
First off, while it is possible to do this, I think if there is a simpler way to choose a number range such as <input type="number" min="1" max="100">, that way would be preferred.
Having said that, here's how the kind of regex you requested works:
ones: ^[0-9]$ // just set the numbers -- matches 0 to 9
tens: ^[1-3]?[0-9]$ //set max tens and max ones -- matches 0 to 39
tens where max does not end in 9: ^[1-2]?[0-9]$|^[3][0-4]$ // 0 to 34
only tens: ^[1][5-9]$|^[2-3][0-9]$|^[4][0-5]$ // 15 to 45
Here, let's pick an arbitrary range, 1234 to 2345:
^[1][2][3][4-9]$|
^[1][2][4-9][0-9]$|
^[1][3-9][0-9][0-9]$|
^[2][0-2][0-9][0-9]$|
^[2][3][0-3][0-9]$|
^[2][3][4][0-5]$
https://regex101.com/r/pP8rQ7/4
Basically, the ending of the middle series always needs to be a straight range that can reach 9 (unless we are dealing with the ones place). If it can't, you have to build it upwards toward the middle each time you have a value that can't start at 0, and once you reach a value that can't end in 9, break early and handle it in the next alternative.
Notice the pattern, as each place solidifies. Also keep in mind that when dealing with going from lower to higher places, optional operators ? should be used.
It's a bit complex, but it's nowhere near impossible to design a custom range with a bit of thought.
If you are more specific, we can craft an exact example, but this is generally how it is done: beginning-range|middle-range|end-range
You should only need beginning or end ranges in certain cases, such as when the min or max does not end in 9. The ? makes the range before it optional (so, for example, in the first tens case it lets us match both single- and double-digit numbers).
So for 100-1999 it's actually quite simple, because you have lots of 9s and 0s:
/^[1-9][0-9][0-9]$|^[1][0-9][0-9][0-9]$/
https://regex101.com/r/pP8rQ7/1
Note: Single values don't need ranges [n] I just added them for readability.
Edit: There used to be a regex range generator at: http://gamon.webfactional.com/regexnumericrangegenerator/. It appears to be offline now.
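As a quick sanity check of the 100-1999 pattern, here is a tiny sketch using Python's re module (illustrative only; in a real form you would rely on the min/max attributes instead):

import re

pattern = re.compile(r'^[1-9][0-9][0-9]$|^1[0-9][0-9][0-9]$')

for value in (99, 100, 1500, 1999, 2000):
    print(value, bool(pattern.match(str(value))))
# 99 False, 100 True, 1500 True, 1999 True, 2000 False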
Essentially, you can't.
For every numeric range, there exists a regex that will match numbers in that range, so it is possible to write code that can generate such a regex. But such a regex is not a simple reformatting of the range ends.
However, such code would require colossal effort and complexity to write compared to code that simply checks the number using numeric methods.
With HTML5, simply use a number input:
<form>
Quantity (between 100 and 1999):
<input type="number" name="quantity" min="100" max="1999">
</form>
with regex:
^([1-9])(\d)(\d)$|^(1)(\d)(\d)(\d)$
So if you need to create the regex dynamically, it's possible, but a bit tricky and complex.

Find all partial matches to vector of unsigned

For an AI project of mine, I need to apply to a factored state all rules that apply to its partial components. This needs to be done very frequently so I'm looking for a way to make this as fast as possible.
I'm going to describe my problem with strings, however the true problem works in the same way with vectors of unsigned integers.
I have a bunch of entries (of length N) like this which I need to store in some way:
__a_b
c_e__
___de
abcd_
fffff
__a__
My input is a single entry ciede for which I must find, as fast as possible, all stored entries that match it. For example, in this case the matches would be c_e__ and ___de. Adding and removing entries should also be supported, but I don't care how slow those operations are. What I would like to be as fast as possible is:
for ( const auto & entry : matchedEntries(input) )
My problem, as I said, is one where each letter is actually an unsigned integer, and the vector is of an unspecified (but known) length. I have no requirements for how entries should be stored, or what type of metadata is going to be associated with them. The naive algorithm of checking the input against every stored entry is linear in the number of entries; is it possible to do better? The number of entries I reasonably need stored is <= 100k.
I'm thinking some kind of sorting might help, or some weird-looking tree structure, but I can't seem to figure out a good way to approach this problem. It also looks like something word processors already need to do, so someone might be able to help.
The easiest solution is to build a trie containing your entries. When searching the trie, you start at the root and recursively follow every edge that matches the current character of your input. There will be at most two such edges at each node, one for the wildcard _ and one for the actual letter.
In the worst case you have to follow two edges from each node, which would add up to O(2^n) complexity, where n is the length of the input, while the space complexity is linear.
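A minimal sketch of this idea (my own illustration; nested dicts serve as trie nodes and a '$entries' key marks where a stored pattern ends):

WILDCARD = '_'

def insert(trie, entry):
    node = trie
    for ch in entry:
        node = node.setdefault(ch, {})
    node.setdefault('$entries', []).append(entry)

def matches(trie, word):
    # Yield every stored pattern that matches the concrete input word.
    def walk(node, i):
        if i == len(word):
            yield from node.get('$entries', [])
            return
        for key in (word[i], WILDCARD):      # at most two edges to follow
            if key in node:
                yield from walk(node[key], i + 1)
    yield from walk(trie, 0)

trie = {}
for e in ['__a_b', 'c_e__', '___de', 'abcd_', 'fffff', '__a__']:
    insert(trie, e)

print(list(matches(trie, 'ciede')))          # ['c_e__', '___de']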
A different approach would be to preprocess the entries, to allow for linear search. This is basically what compiling regular expressions does. For your example, consider following regular expression, which matches your desired input:
(..a.b|c.e..|...de|abcd.|fffff|..a..)
This expression can be implemented as a nondeterministic finite state automaton, with initial state having ε-moves to a deterministic automaton for each of the single entries. This NFSA can then be turned to a deterministic FSA, using the standard powerset construction.
Although this construction can increase the number of states substantially, searching the input word can then be done in linear time, simply simulating the deterministic automaton.
Below is an example for entries ab, a_, ba, _a and __. First start with a nondeterministic automaton, which upon removing ε-moves and joining equivalent states is actually a trie for the set.
Then turn it into a deterministic machine, with states corresponding to subsets of states of the NFSA. Start in the state 0 and for each edge, other than _, create the next state as the union of the states in the original machine, that are reachable from any state in the current set.
For example, when DFSA is in state 16, that means the NFSA could be either in state 1 or 6. Upon transition on a, the NFSA could get to states 3 (from 1), 7 or 8 (from 6) - that will be your next state in the DFSA.
The standard construction would preserve the _-edges, but we can omit them, as long as the input does not contain _.
Now if you have a word ab on the input, you simulate this automaton (i.e. traverse its transition graph) and end up in state 238, from which you can easily recover the original entries.
Store the data in a tree: the 1st layer represents the 1st element (character or integer), and so on. This means the tree will have a constant depth of 5 (excluding the root) in your example. Don't worry about wildcards ("_") at this point; just store them like the other elements.
When searching for the matches, traverse the tree by doing a breadth first search and dynamically build up your result set. Whenever you encounter a wildcard, add another element to your result set for all other nodes of this layer that do not match. If no subnode matches, remove the entry from your result set.
You should also skip redundant entries when building up the tree: in your example, __a_b is redundant, because whenever it matches, __a__ also matches.
I've got an algorithm in mind which I plan to implement and benchmark, but I'll describe the approach already. It needs n_templates * template_length * n_symbols bits of storage (so 100k templates of length 100 over 256 distinct symbols need 2.56 Gbit = 320 MB of RAM). This does not scale nicely to a large number of symbols unless a succinct data structure is used.
A query takes O(n_templates * template_length) bit operations, but since 64 templates are processed per machine word with bit-wise operations, it should perform quite well in practice.
Let's say we have the given set of templates:
__a_b
c_e__
___de
abcd_
_ied_
bi__e
The set of symbols is abcdei, for each symbol we pre-calculate a bit mask indicating whether the template differs from the symbol at that location or not:
aaaaa bbbbb ccccc ddddd eeeee iiiii
....b ..a.. ..a.b ..a.b ..a.b ..a.b
c.e.. c.e.. ..e.. c.e.. c.... c.e..
...de ...de ...de ....e ...d. ...de
.bcd. a.cd. ab.d. abc.. abcd. abcd.
.ied. .ied. .ied. .ie.. .i.d. ..ed.
bi..e .i..e bi..e bi..e bi... b...e
Same tables expressed in binary:
aaaaa bbbbb ccccc ddddd eeeee iiiii
00001 00100 00101 00101 00101 00101
10100 10100 00100 10100 10000 10100
00011 00011 00011 00001 00010 00011
01110 10110 11010 11100 11110 11110
01110 01110 01110 01100 01010 00110
11001 01001 11001 11001 11000 10001
These are stored in columnar order, 64 templates per unsigned integer. To determine which templates match ciede we check the 1st column of the c table, the 2nd column of the i table, the 3rd of e, and so forth:
ciede ciede
__a_b ..a.b 00101
c_e__ ..... 00000
___de ..... 00000
abcd_ abc.. 11100
_ied_ ..... 00000
bi__e b.... 10000
We find matching templates as rows of zeros, which indicates that no differences were found. We can check 64 templates at once, and the algorithm itself is very simple (python-like code):
# assumes n_templates is a multiple of 64 for simplicity
for i_block in range(n_templates // 64):
    mask = 0
    for i in range(template_length):
        # Accumulate difference-indicating bits
        mask |= tables[i_block][word[i]][i]
        if mask == 0xFFFFFFFFFFFFFFFF:
            # All 64 templates differ, we can stop early
            break
    for i in range(64):
        if mask & (1 << i) == 0:
            print('Match at template ' + str(i_block * 64 + i))
As I said I haven't yet actually tried implementing this, so I have no clue how fast it is in practice.
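For what it's worth, here is a rough sketch (my own, untested, matching the layout tables[block][symbol][position] used above and packing 64 templates per integer) of how the difference tables could be precomputed:

def build_tables(templates, symbols, wildcard='_'):
    length = len(templates[0])
    n_blocks = (len(templates) + 63) // 64
    # tables[block][symbol][position]: bit t is set if template t of this
    # block differs from `symbol` at `position`.
    tables = [{s: [0] * length for s in symbols} for _ in range(n_blocks)]
    for t, template in enumerate(templates):
        block, bit = divmod(t, 64)
        for pos, ch in enumerate(template):
            if ch == wildcard:
                continue                     # a wildcard never differs
            for s in symbols:
                if s != ch:
                    tables[block][s][pos] |= 1 << bit
    return tables

tables = build_tables(['__a_b', 'c_e__', '___de', 'abcd_', '_ied_', 'bi__e'], 'abcdei')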

Checking if a string contains an English sentence

As of right now, I take a dictionary and iterate through the entire thing. Every time I see a newline, I make a string containing everything from that newline to the next newline, then I use string.find() to see if that English word is somewhere in the string being checked. This takes a VERY long time, each word taking about a quarter to half a second to verify.
It is working perfectly, but I need to check thousands of words a second. I can run several windows, which doesn't affect the speed (Multithreading), but it still only checks like 10 a second. (I need thousands)
I'm currently writing code to pre-compile a large array containing every word in the English language, which should speed it up a lot, but still not get the speed I want. There has to be a better way to do this.
The strings I'm checking will look like this:
"hithisisastringthatmustbechecked"
but most of them contained complete garbage, just random letters.
I can't check for impossible combinations of letters, because that string would be thrown out due to the 'tm' in 'thatmust'.
You can speed up the search by employing the Knuth–Morris–Pratt (KMP) algorithm.
Go through every dictionary word, and build a search table for it. You need to do it only once. Now your search for individual words will proceed at faster pace, because the "false starts" will be eliminated.
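For illustration, a minimal sketch of that suggestion (my own code; the dictionary and test string are just examples). Each word's failure table is built once up front and reused for every string you check:

def failure_table(word):
    # table[i] = length of the longest proper prefix of word[:i+1] that is also a suffix
    table = [0] * len(word)
    k = 0
    for i in range(1, len(word)):
        while k and word[i] != word[k]:
            k = table[k - 1]
        if word[i] == word[k]:
            k += 1
        table[i] = k
    return table

def kmp_contains(text, word, table):
    k = 0
    for ch in text:
        while k and ch != word[k]:
            k = table[k - 1]
        if ch == word[k]:
            k += 1
            if k == len(word):
                return True
    return False

dictionary = ['this', 'string', 'must']
tables = {w: failure_table(w) for w in dictionary}   # built only once
text = 'hithisisastringthatmustbechecked'
print([w for w in dictionary if kmp_contains(text, w, tables[w])])  # all three are found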
There are a lot of strategies for doing this quickly.
Idea 1
Take the string you are searching and make a copy of each possible substring beginning at some column and continuing through the whole string. Then store each one in an array indexed by the letter it begins with. (If a letter is used twice, store the longer substring.)
So the array looks like this:
a - substr[0] = "astringthatmustbechecked"
b - substr[1] = "bechecked"
c - substr[2] = "checked"
d - substr[3] = "d"
e - substr[4] = "echecked"
f - substr[5] = null // since there is no 'f' in it
... and so forth
Then, for each word in the dictionary, search in the array element indicated by its first letter. This limits the amount of stuff that has to be searched. Plus you can't ever find a word beginning with, say 'r', anywhere before the first 'r' in the string. And some words won't even do a search if the letter isn't in there at all.
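A small sketch of Idea 1 (my own illustration, assuming lowercase letters):

def build_index(s):
    # For each letter, keep the suffix starting at its first occurrence
    # (the first occurrence gives the longest such substring).
    index = {}
    for i, ch in enumerate(s):
        if ch not in index:
            index[ch] = s[i:]
    return index

s = 'hithisisastringthatmustbechecked'
index = build_index(s)
dictionary = ['this', 'must', 'zebra']
found = [w for w in dictionary if w[0] in index and w in index[w[0]]]
print(found)   # ['this', 'must']; 'zebra' never triggers a search since there is no 'z'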
Idea 2
Expand upon that idea by noting the longest word in the dictionary and get rid of letters from those strings in the arrays that are longer than that distance away.
So you have this in the array:
a - substr[0] = "astringthatmustbechecked"
But if the longest word in the list is 5 letters, there is no need to keep any more than:
a - substr[0] = "astri"
If the letter is present several times you have to keep more letters. So this one has to keep the whole string because the "e" keeps showing up less than 5 letters apart.
e - substr[4] = "echecked"
You can expand upon this by using the longest words starting with any particular letter when condensing the strings.
Idea 3
This has nothing to do with 1 and 2. It's an idea that you could use instead.
You can turn the dictionary into a sort of regular expression stored in a linked data structure. It is possible to write the regular expression too and then apply it.
Assume these are the words in the dictionary:
arun
bob
bill
billy
body
jose
Build this sort of linked structure. (It's a binary tree, really, represented in such a way that I can explain how to use it.)
a -> r -> u -> n -> *
|
b -> i -> l -> l -> *
| | |
| o -> b -> * y -> *
| |
| d -> y -> *
|
j -> o -> s -> e -> *
The arrows denote a letter that has to follow another letter. So "r" has to be after an "a" or it can't match.
The lines going down denote an option. You have the "a or b or j" possible letters and then the "i or o" possible letters after the "b".
The regular expression looks sort of like: /(arun)|(b(ill(y?)|o(b|dy)))|(jose)/ (though I might have slipped a paren). This gives the gist of creating it as a regex.
Once you build this structure, you apply it to your string starting at the first column. Try to run the match by checking for the alternatives, and if one matches, move forward tentatively and try the letter after the arrow and its alternatives. If you reach the star/asterisk, it matches. If you run out of alternatives, including backtracking, you move to the next column.
This is a lot of work but can, sometimes, be handy.
Side note: I built one of these some time back by writing a program that wrote the code that ran the algorithm directly, instead of having code looking at the binary tree data structure.
Think of each set of vertical-bar options as a switch statement against a particular character column, and each arrow turning into a nesting. If there is only one option, you don't need a full switch statement, just an if.
That was some fast character matching and really handy for some reason that eludes me today.
How about a Bloom Filter?
A Bloom filter, conceived by Burton Howard Bloom in 1970, is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not; i.e. a query returns either "inside set (may be wrong)" or "definitely not in set". Elements can be added to the set, but not removed (though this can be addressed with a "counting" filter). The more elements that are added to the set, the larger the probability of false positives.
The approach could work as follows: you create the set of words that you want to check against (this is done only once), and then you can quickly run the "in/not-in" check for every sub-string. If the outcome is "not-in", you are safe to continue (Bloom filters do not give false negatives). If the outcome is "in", you then run your more sophisticated check to confirm (Bloom filters can give false positives).
It is my understanding that some spell-checkers rely on bloom filters to quickly test whether your latest word belongs to the dictionary of known words.
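A toy sketch of that approach (a hand-rolled filter built on hashlib, purely illustrative; a real one would size the bit array and hash count from the desired false-positive rate):

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f'{i}:{item}'.encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

words = {'this', 'is', 'a', 'string', 'that', 'must', 'be', 'checked'}
bloom = BloomFilter()
for w in words:
    bloom.add(w)

candidate = 'thatm'
if candidate in bloom:          # "maybe in the set": confirm with the exact dictionary
    print(candidate in words)
else:
    print('definitely not a word')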
This code was modified from How to split text without spaces into list of words?:
from math import log

words = open("english125k.txt").read().split()
wordcost = dict((k, log((i+1)*log(len(words)))) for i,k in enumerate(words))
maxword = max(len(x) for x in words)

def infer_spaces(s):
    """Uses dynamic programming to infer the location of spaces in a string
    without spaces."""

    # Find the best match for the i first characters, assuming cost has
    # been built for the i-1 first characters.
    # Returns a pair (match_cost, match_length).
    def best_match(i):
        candidates = enumerate(reversed(cost[max(0, i-maxword):i]))
        return min((c + wordcost.get(s[i-k-1:i], 9e999), k+1) for k,c in candidates)

    # Build the cost array.
    cost = [0]
    for i in range(1,len(s)+1):
        c,k = best_match(i)
        cost.append(c)

    # Backtrack to recover the minimal-cost string.
    costsum = 0
    i = len(s)
    while i>0:
        c,k = best_match(i)
        assert c == cost[i]
        costsum += c
        i -= k

    return costsum
Using the same dictionary as that answer and testing with your string outputs:
>>> infer_spaces("hithisisastringthatmustbechecked")
294.99768817854056
The trick here is finding out what threshold you can use, keeping in mind that using smaller words makes the cost higher (if the algorithm can't find any usable word, it returns inf, since it would split everything to single-letter words).
In theory, I think you should be able to train a Markov model and use that to decide if a string is probably a sentence or probably garbage. There's another question about doing this to recognize words, not sentences: How do I determine if a random string sounds like English?
The only difference for training on sentences is that your probability tables will be a bit larger. In my experience, though, a modern desktop computer has more than enough RAM to handle Markov matrices unless you are training on the entire Library of Congress (which is unnecessary- even 5 or so books by different authors should be enough for very accurate classification).
Since your sentences are mashed together without clear word boundaries, it's a bit tricky, but the good news is that the Markov model doesn't care about words, just about what follows what. So, you can make it ignore spaces, by first stripping all spaces from your training data. If you were going to use Alice in Wonderland as your training text, the first paragraph would, perhaps, look like so:
alicewasbeginningtogetverytiredofsittingbyhersisteronthebankandofhavingnothingtodoonceortwiceshehadpeepedintothebookhersisterwasreadingbutithadnopicturesorconversationsinitandwhatistheuseofabookthoughtalicewithoutpicturesorconversation
It looks weird, but as far as a Markov model is concerned, it's a trivial difference from the classical implementation.
I see that you are concerned about time: Training may take a few minutes (assuming you have already compiled gold standard "sentences" and "random scrambled strings" texts). You only need to train once, you can easily save the "trained" model to disk and reuse it for subsequent runs by loading from disk, which may take a few seconds. Making a call on a string would take a trivially small number of floating point multiplications to get a probability, so after you finish training it, it should be very fast.
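As a rough illustration of that idea, here is a sketch of a character-bigram model with add-one smoothing (the corpus file name and the threshold are hypothetical; both would be chosen from your own training data):

from math import log
from collections import defaultdict

ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def train(text):
    # Strip everything but letters, exactly as suggested above
    text = ''.join(ch for ch in text.lower() if ch in ALPHABET)
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    # log P(b | a) with add-one smoothing
    return {a: {b: log((counts[a][b] + 1) / (sum(counts[a].values()) + len(ALPHABET)))
                for b in ALPHABET}
            for a in ALPHABET}

def avg_log_prob(model, s):
    s = ''.join(ch for ch in s.lower() if ch in ALPHABET)
    pairs = list(zip(s, s[1:]))
    return sum(model[a][b] for a, b in pairs) / max(len(pairs), 1)

model = train(open('training_text.txt').read())   # hypothetical English corpus
score = avg_log_prob(model, 'hithisisastringthatmustbechecked')
print(score > -3.0)   # threshold picked empirically from known-English and known-garbage samples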

Search Large Text File for Thousands of strings

I have a large text file that is 20 GB in size. The file contains lines of text that are relatively short (40 to 60 characters per line). The file is unsorted.
I have a list of 20,000 unique strings. I want to know the offset for each string each time it appears in the file. Currently, my output looks like this:
netloader.cc found at offset: 46350917
netloader.cc found at offset: 48138591
netloader.cc found at offset: 50012089
netloader.cc found at offset: 51622874
netloader.cc found at offset: 52588949
...
360doc.com found at offset: 26411474
360doc.com found at offset: 26411508
360doc.com found at offset: 26483662
360doc.com found at offset: 26582000
I am loading the 20,000 strings into a std::set (to ensure uniqueness), then reading a 128MB chunk from the file, and then using string::find to search for the strings (start over by reading another 128MB chunk). This works and completes in about 4 days. I'm not concerned about a read boundary potentially breaking a string I'm searching for. If it does, that's OK.
I'd like to make it faster. Completing the search in 1 day would be ideal, but any significant performance improvement would be nice. I prefer to use standard C++ with Boost (if necessary) while avoiding other libraries.
So I have two questions:
Does the 4 day time seem reasonable considering the tools I'm using and the task?
What's the best approach to make it faster?
Thanks.
Edit: Using the Trie solution, I was able to shorten the run-time to 27 hours. Not within one day, but certainly much faster now. Thanks for the advice.
Algorithmically, I think the best way to approach this problem would be to use a tree to store the strings you want to search for, one character per level. For example, if you have the following patterns you would like to look for:
hand, has, have, foot, file
The resulting tree would look something like this:
The generation of the tree is worst case O(n), and has a sub-linear memory footprint generally.
Using this structure, you can begin processing your file by reading in a character at a time from your huge file and walking the tree.
If you get to a leaf node (the ones shown in red), you have found a match, and can store it.
If there is no child node corresponding to the letter you have read, you can discard the current line and begin checking the next line, starting from the root of the tree.
This technique would result in linear time O(n) to check for matches and scan the huge 20 GB file only once.
Edit
The algorithm described above is certainly sound (it doesn't give false positives) but not complete (it can miss some results). However, with a few minor adjustments it can be made complete, assuming that we don't have search terms with common roots like go and gone. The following is pseudocode of the complete version of the algorithm
tree = construct_tree(['hand', 'has', 'have', 'foot', 'file'])
# Keeps track of where I currently am in the tree
nodes = []
for character in huge_file:
    next_nodes = []
    for node in nodes:
        if node.has_child(character):
            child = node.follow_edge(character)
            if child.is_leaf():
                # You found a match!!
            next_nodes.append(child)
        # nodes without a matching child are simply dropped
    if tree.has_child(character):
        next_nodes.append(tree.get_child(character))
    nodes = next_nodes
Note that the list of nodes that has to be checked each time is at most as long as the longest word being checked against. Therefore it should not add much complexity.
The problem you describe looks more like a problem with the selected algorithm, not with the technology of choice. 20000 full scans of 20GB in 4 days doesn't sound too unreasonable, but your target should be a single scan of the 20GB and another single scan of the 20K words.
Have you considered looking at some string matching algorithms? Aho–Corasick comes to mind.
Rather than searching the file 20,000 times, once for each string, you can tokenize the input and do a lookup in your std::set of strings to be found; that will be much faster. This assumes your strings are simple identifiers, but something similar can be implemented for strings that are sentences. In that case you would keep a set of the first word of each sentence, and after a successful match verify that it's really the beginning of the whole sentence with string::find.
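A minimal sketch of that idea (in Python for brevity; the same structure applies to a C++ std::set or std::unordered_set). The file names are hypothetical, and it assumes the strings are whitespace-delimited tokens in the big file:

targets = set(open('strings_20k.txt', 'rb').read().split())   # the 20,000 strings

offset = 0
with open('huge_file.txt', 'rb') as f:
    for line in f:
        col = 0
        for token in line.split():
            col = line.index(token, col)       # byte offset of this token within the line
            if token in targets:               # one lookup per token
                print(token.decode(), 'found at offset:', offset + col)
            col += len(token)
        offset += len(line)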