Levenshtein Automata

I implemented a Levenshtein trie to find words similar to a given word.
My goal was to have a fast way to do spell correction.
However, I found out that there is an even faster way to do that:
Levenshtein Automata
I just have a problem... I understand not a word of what's written
here.
Can someone explain the idea and the basic functionality of a
Levenshtein automaton in simple words?

Can someone explain the idea and the basic functionality of a Levenshtein automaton in simple words?
A deterministic finite automaton (DFA) is
- an alphabet (a set of possible input characters)
- a set of states (just abstract objects with no special properties)
- a transition function (given any state and an input character, it returns a unique state)
- a distinguished start state
- a set of accepting states.
You can draw a DFA as a diagram like those in the paper. Conventionally, circular nodes are states. Directed edges each labeled with one character are transitions. Accepting states are marked as double-line circles. The start state has an inward pointing arrow with nothing at its tail.
A DFA accepts word W if and only if you can move a marker from the start state along transitions whose concatenated labels equal W to an accepting state. That is, if T is the transition function, and W = "cat", then T(T(T(Start, 'c'), 'a'), 't') must be an accepting state. Cycles in the transition function allow DFAs to accept strings of arbitrary length even though the DFA is finite.
In software a DFA is a simple loop and a table T(state, char) that implements the transition function.
def run_dfa(T, start, accepting, word):
    state = start
    for c in word:              # consume the input one character at a time
        state = T[(state, c)]   # table lookup implementing the transition function
    return state in accepting   # ACCEPT if we end in an accepting state, else REJECT
The Wikipedia page on DFAs is not bad.
DFAs have nice properties. Accepting/rejecting an input of length N requires O(N) time (as long as the transition function runs in constant time). There is a unique minimum version of every DFA (among all those accepting the same set of words) and an efficient algorithm to find that minimum DFA. It's easy to compare DFAs for equality in time linear in the DFAs' size.
A Levenshtein Automaton L(W, d) for word W and Levenshtein distance d is just a DFA that accepts all words having Levenshtein distance at most d from W. That is, the automaton accepts W plus a bunch of other words that are W with no more than d "mistakes" in the usual sense of Levenshtein distance.
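For intuition only, here is a brute-force sketch (plain Python; the names are mine, nothing here is from the paper) of what L(W, d) accepts. The whole point of the automaton is that it gives the same yes/no answer in O(N) time per candidate word instead of recomputing the edit distance each time.
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                     # deletion
                           cur[j - 1] + 1,                  # insertion
                           prev[j - 1] + (ca != cb)))       # substitution or match
        prev = cur
    return prev[-1]

def accepted_by_levenshtein_automaton(W, d, word):
    # semantically equivalent to running the DFA L(W, d) on `word`
    return levenshtein(word, W) <= d

# e.g. accepted_by_levenshtein_automaton("food", 1, "good")  ->  True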
The contribution of the paper is a fast algorithm for computing Levenshtein DFAs and then a more advanced algorithm that accomplishes the same thing without computing the DFA explicitly.

Gene has provided a fantastic high-level description of Levenshtein Automata!
With that said, if you're looking for some code to further your understanding, you may find the Java LevenshteinAutomaton library useful. It implements both algorithms described in the paper you stumbled upon (among others), and is well-structured, easy to follow, and extensively commented. It is also maintained by yours truly :) .

Related

Given Two Regex, Determine if One is a Complement of Other

I'd like to know how you can tell if some regular expression is the complement of another regular expression. Let's say I have 2 regular expressions r_1 and r_2. I can certainly create a DFA out of each of them and then check to make sure that L(r_1) != L(r_2). But that doesn't necessarily mean that r_1 is the complement of r_2 and vice versa. Also, it seems that many different regular expressions could all be the complement of a single regular expression.
So I'm wondering how, given two regular expressions, I can determine if one is the complement of another. This is also new to me, so perhaps I'm missing something that should be apparent.
Edit: I should point out that I am not simply trying to find the complement of a regular expression. I am given two regular expressions, and I am to determine if they are the complement of each other.
Here is one approach that is conceptually simple, if not terribly efficient (not that there is necessarily a more efficient solution...):
Construct NFAs M and N for regular expressions r and s, respectively. You can do this using the construction introduced in the proof that finite automata and regular expressions describe the same languages.
Determinize M and N to get M' and N'. We might as well go ahead and minimize them at this point... giving M'' and N''.
Construct a machine C using the Cartesian product machine construction on machines M'' and N''. Acceptance will be determined by the symmetric difference, or XOR, criterion: accepting states in the product machine correspond to pairs of states (m, n) where exactly one of the two states is accepting in its automaton.
Minimize C and call the result C'.
If L(r) = L(s)', then the initial state of C' will be accepting and C' will have all transitions originating in the initial state also terminating in the initial state. If this is the case, r and s are complements of each other; otherwise, they are not.
Why should this work? The symmetric difference of two sets is the set of everything in exactly one (not both, not neither). If L(s) and L(r) are complementary, then it is not difficult to see that the symmetric difference includes all strings (by definition, the complement of a set contains everything not in the set). Suppose now there were non-complementary sets whose symmetric difference were the universe of all strings. Since the sets are not complementary, either (1) their intersection is non-empty or (2) their union is not the universe of all strings. In case (1), the symmetric difference will not include the shared elements; in case (2), the symmetric difference will not include the missing strings. So only complementary sets have a symmetric difference equal to the universe of all strings; and a minimal DFA for the set of all strings will always have an accepting initial state with self-loops.
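If you just want working code rather than the automaton construction, the greenery library mentioned further down this page exposes the needed set operations. Here is a hedged sketch, assuming both inputs are "true" regular expressions that greenery can parse; note that greenery takes complements over its full character set, so "complementary" here means complementary over all strings, not just over the characters appearing in the patterns.
from greenery.lego import parse

def are_complements(r1, r2):
    a, b = parse(r1), parse(r2)
    # complementary means: they share no string, and together they cover everything,
    # i.e. the intersection of their complements is also empty
    disjoint = (a & b).empty()
    cover_everything = (a.everythingbut() & b.everythingbut()).empty()
    return disjoint and cover_everything

# e.g. are_complements("a.*", "([^a].*)?") should report True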
For complement: L(r_1) == !L(r_2)

Check if 2 minimum DFA are equivalent

I have 2 minimized DFAs and I need to check whether they are equivalent.
If they are equivalent, the problem is to find an efficient comparison of states regardless of their different labels. In my case the DFAs are tables, so I need to find the permutation that matches the rows of the first DFA with the rows of the second DFA.
I also thought about doing a breadth-first search of each DFA, creating the minimum access string for each state, and then comparing the first list with the second (this should be independent of the particular labels; for example, 001 and 110 could be interchangeable).
I'm interested both in direct, inefficient algorithms and in more sophisticated ones.
The right approach is to construct another DFA recognizing
L3 = (L1 - L2) ∪ (L2 - L1)
and test whether L3 is empty. If L3 is empty then L1 = L2; otherwise L1 ≠ L2.
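A minimal sketch of that test in Python, assuming each DFA is given as a transition-table dict, a start state, and a set of accepting states (the representation and names are mine, not from any library): walk the product machine breadth-first and look for a reachable pair of states on which the two DFAs disagree.
from collections import deque

def equivalent(dfa1, dfa2, alphabet):
    # each dfa is (transitions, start, accepting) with transitions[state][symbol] -> state
    (t1, s1, f1), (t2, s2, f2) = dfa1, dfa2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in f1) != (q in f2):   # a string reaching this pair lies in L1-L2 or L2-L1
            return False             # so L3 = (L1-L2) U (L2-L1) is non-empty
        for c in alphabet:
            nxt = (t1[p][c], t2[q][c])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                      # no disagreement reachable: L3 is empty, L1 = L2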
I found these algorithms:
- Symmetric difference
- Table-filling algorithm
- Faster Table-Filling algorithm O(n^2)
- Hopcroft algorithm
- Nearly Linear algorithm by Hopcroft and Karp
A complete reference is:
Algorithms for testing equivalence of finite automata, with a grading tool for Jflap - Norton, 2009
I accepted my own answer because the one by #abbaasi is too incomplete.
I will accept any other answer with a significant contribution.
I remember that the minimum DFA is unique (up to the naming of its states). So if you have 2 minimized DFAs, I think you only need to check whether they're the same machine.
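To address the "find the permutation" part of the question: since the minimal DFA is unique up to renaming of states, two minimized (complete) DFAs accept the same language exactly when a single breadth-first walk from the two start states yields a consistent, acceptance-preserving renaming. A rough sketch, using the same table-style representation assumed in the sketch above:
from collections import deque

def same_minimal_dfa(dfa1, dfa2, alphabet):
    # each dfa is (transitions, start, accepting) with transitions[state][symbol] -> state;
    # both machines are assumed to be complete and already minimized
    (t1, s1, f1), (t2, s2, f2) = dfa1, dfa2
    rename = {s1: s2}                  # the state permutation being discovered
    queue = deque([s1])
    while queue:
        p = queue.popleft()
        q = rename[p]
        if (p in f1) != (q in f2):     # acceptance must agree under the renaming
            return False
        for c in alphabet:
            p2, q2 = t1[p][c], t2[q][c]
            if p2 in rename:
                if rename[p2] != q2:   # inconsistent renaming: different machines
                    return False
            else:
                rename[p2] = q2
                queue.append(p2)
    return True                        # a consistent acceptance-preserving renaming exists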

Determining whether a regex is a subset of another

I have a large collection of regular expressions that, when matched, call a particular HTTP handler. Some of the older regexes are unreachable (e.g. a.c* ⊃ abc*) and I'd like to prune them.
Is there a library that given two regex's will tell me if the second is subset of the first?
I wasn't sure this was decidable at first (it smelled like the halting problem by a different name). But it turns out it's decidable.
Trying to find the complexity of this problem led me to this paper.
The formal definition of the problem can be found within: this is generally called the inclusion problem
The inclusion problem for R, is to test for two given expressions r, r′ ∈ R,
whether r ⊆ r′.
That paper has some great information (summary: all but the simplest expressions are fairly complex), however searching for information on the inclusion problem leads one directly back to StackOverflow. That answer already had a link to a paper describing a passable polynomial time algorithm which should cover a lot of common cases.
I found a python regex library that provides set operations.
http://github.com/ferno/greenery
The proof says Sub ⊆ Sup ⇔ Sub ∩ ¬Sup is {}. I can implement this with the python library:
import sys
from greenery.lego import parse

subregex = parse(sys.argv[1])
supregex = parse(sys.argv[2])
s = subregex & (supregex.everythingbut())
if s.empty():
    print("%s is a subset of %s" % (subregex, supregex))
else:
    print("%s is not a subset of %s, it also matches %s" % (subregex, supregex, s))
examples:
subset.py abcd.* ab.*
abcd.* is a subset of ab.*
subset.py a[bcd]f* a[cde]f*
a[bcd]f* is not a subset of a[cde]f*, it also matches abf*
The library may not be robust because as mentioned in the other answers you need to use the minimal DFA in order for this to work. I'm not sure ferno's library makes (or can make) that guarantee.
As an aside: playing with the library to calculate inverse or simplify regexes is lots of fun.
a(b|.).* simplifies to a.+, which is pretty minimal.
The inverse of abf* is ([^a]|a([^b]|bf*[^f])).*|a?. Try to come up with that on your own!
If the regular expressions use "advanced features" of typical procedural matchers (like those in Perl, Java, Python, Ruby, etc.) that allow accepting languages that aren't regular, then you are out of luck. The problem is in general undecidable. E.g. the problem of whether one pushdown automaton recognizes the same context free (CF) language as another is undecidable. Extended regular expressions can describe CF languages.
On the other hand, if the regular expressions are "true" in the theoretical sense, consisting only of concatenation, alternation, and Kleene star over strings with a finite alphabet, plus the usual syntactic sugar on these (character classes, +, ?, etc), then there is a simple polynomial time algorithm.
I can't give you libraries, but this:
For each pair of regexes r and s for languages L(r) and L(s):
- Find the corresponding deterministic finite automata M(r) and M(s).
- Compute the cross-product machine M(r x s) and assign accepting states so that it computes L(r) - L(s).
- Use a DFS or BFS of the M(r x s) transition table to see if any accepting state can be reached from the start state.
- If not, you can eliminate r because L(r) is a subset of L(s).
- Reassign accepting states so that M(r x s) computes L(s) - L(r) and repeat the steps above to see if it's possible to eliminate s.
Converting a regex to a DFA generally uses Thompson's construction to get a non-deterministic automaton. This is converted to a DFA using the Subset Construction. The cross-product machine is another standard algorithm.
This was all worked out in the 1960's and is now part of any good undergrad computer science theory course. The gold standard for the topic is Hopcroft and Ullman, Automata Theory.
There is an answer in the mathematics section: https://math.stackexchange.com/questions/283838/is-one-regular-language-subset-of-another.
Basic idea:
Compute the minimal DFA for both languages.
Calculate the cross product of both automata M1 and M2, which means that each state consists of a pair [m1, m2] where m1 is from M1 and m2 is from M2, for all possible combinations.
The new transition function F12 is: F12([m1, m2], x) => [F1(m1, x), F2(m2, x)]. This means that if there was a transition in M1 from state m1 to m1' while reading x, and in M2 from state m2 to m2' while reading x, then there is one transition in M12 from [m1, m2] to [m1', m2'] while reading x.
At the end you look at the reachable states:
If there is a reachable pair [accepting, rejecting] then M1 is not a subset of M2.
If there is a reachable pair [rejecting, accepting] then M2 is not a subset of M1.
It would be beneficial to compute only the new transitions and the resulting states, omitting all unreachable states from the beginning.
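Here is a compact sketch of that reachable-pairs test, assuming complete DFAs given as transition-table dicts (the representation and names are mine): a single walk over the cross-product machine reports which of the two inclusions, if either, fails.
from collections import deque

def compare_languages(dfa1, dfa2, alphabet):
    # each dfa is (transitions, start, accepting); returns (L1 subset of L2, L2 subset of L1)
    (t1, s1, f1), (t2, s2, f2) = dfa1, dfa2
    l1_in_l2, l2_in_l1 = True, True
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        m1, m2 = queue.popleft()
        if m1 in f1 and m2 not in f2:
            l1_in_l2 = False           # a reachable [accepting, rejecting] pair
        if m1 not in f1 and m2 in f2:
            l2_in_l1 = False           # a reachable [rejecting, accepting] pair
        for x in alphabet:
            nxt = (t1[m1][x], t2[m2][x])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return l1_in_l2, l2_in_l1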

Is it possible to calculate the edit distance between a regexp and a string?

If so, please explain how.
Re: what is distance -- "The distance between two strings is defined as the minimal number of edits required to convert one into the other."
For example, xyz to XYZ would take 3 edits, so the string xYZ is closer to XYZ than xyz is.
If the pattern is [0-9]{3} or for instance 123, then a23 would be closer to the pattern than ab3.
How can you find the shortest distance between a regexp and a non-matching string?
The above is the Damerau–Levenshtein distance algorithm.
You can use Finite State Machines to do this efficiently (that is, linear in time). If you use a transducer, you can even write the specification of the transformation fairly compactly and do far more nuanced transformations than simply inserts or deletes - see wikipedia for Finite State Transducer as a starting point, and software such as the FSA toolkit or FSA6 (which has a not entirely stable web-demo) too. There are lots of libraries for FSA manipulation; I don't want to suggest the previous two are your only or best options, just two I've heard of.
If, however, you merely want efficient approximate searching, a less flexible but already-implemented-for-you option exists: TRE, which has an approximate matching function that returns the cost of the match - i.e., the distance to the match, from your perspective.
If you mean the smallest Levenshtein distance between the sample string and any string the regex matches, then I'm pretty sure it can be done, but you'd have to convert the regex to a DFA yourself, then try to match, and whenever something fails, non-deterministically continue as if it had passed while keeping track of the number of differences. You could use A* search or something similar for this; it would be quite inefficient though (O(2^n) worst case).
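To make the idea concrete, here is a hedged sketch that skips the regex-to-DFA step and assumes the DFA is already given as a transition dict (everything below, including the tiny example automaton, is made up for illustration). Instead of backtracking with A*, it treats each pair (characters of the sample consumed, DFA state) as a node in a graph and runs Dijkstra: matching a character costs 0, and inserting, deleting, or substituting costs 1.
import heapq

def edit_distance_to_dfa(s, transitions, start, accepting, alphabet):
    # transitions[state][symbol] -> state (a complete DFA);
    # returns the minimum number of edits turning s into some word the DFA accepts
    best = {(0, start): 0}
    heap = [(0, 0, start)]              # (edits so far, chars of s consumed, DFA state)
    while heap:
        d, i, q = heapq.heappop(heap)
        if d > best.get((i, q), float("inf")):
            continue                    # stale heap entry
        if i == len(s) and q in accepting:
            return d                    # cheapest way to reach an accepting configuration
        moves = []
        if i < len(s):
            moves.append((d + 1, i + 1, q))                  # delete s[i]
        for c in alphabet:
            q2 = transitions[q][c]
            moves.append((d + 1, i, q2))                     # insert c into s
            if i < len(s):
                moves.append((d + (s[i] != c), i + 1, q2))   # match or substitute
        for nd, ni, nq in moves:
            if nd < best.get((ni, nq), float("inf")):
                best[(ni, nq)] = nd
                heapq.heappush(heap, (nd, ni, nq))
    return None                                              # the DFA accepts no word at all

# toy usage with a hand-built DFA for "exactly three digits", over a reduced
# alphabet {'d': any digit, 'x': anything else}; "a23" becomes "xdd":
# T = {0: {'d': 1, 'x': 4}, 1: {'d': 2, 'x': 4}, 2: {'d': 3, 'x': 4},
#      3: {'d': 4, 'x': 4}, 4: {'d': 4, 'x': 4}}
# edit_distance_to_dfa("xdd", T, 0, {3}, "dx")  ->  1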

How can you detect if two regular expressions overlap in the strings they can match?

I have a container of regular expressions. I'd like to analyze them to determine if it's possible to generate a string that matches more than 1 of them. Short of writing my own regex engine with this use case in mind, is there an easy way in C++ or Python to solve this problem?
There's no easy way.
As long as your regular expressions use only standard features (Perl lets you embed arbitrary code in matching, I think), you can produce from each one a nondeterministic finite-state automaton (NFA) that compactly encodes all the strings that the RE matches.
Given any pair of NFA, it's decidable whether their intersection is empty. If the intersection isn't empty, then some string matches both REs in the pair (and conversely).
The standard decidability proof is to determinize them into DFAs first, and then construct a new DFA whose states are pairs of the two DFAs' states, and whose final states are exactly those in which both states in the pair are final in their original DFA. Alternatively, if you've already shown how to compute the complement of an NFA, then you can (De Morgan's law style) get the intersection by complement(union(complement(A), complement(B))).
Unfortunately, NFA->DFA involves a potentially exponential size explosion (because states in the DFA are subsets of states in the NFA). From Wikipedia:
Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here are the languages L_k consisting of all strings over the alphabet {a,b} whose kth-last letter equals a.
By the way, you should definitely use OpenFST. You can create automata as text files and play around with operations like minimization, intersection, etc. in order to see how efficient they are for your problem. There already exist open source regexp->nfa->dfa compilers (I remember a Perl module); modify one to output OpenFST automata files and play around.
Fortunately, it's possible to avoid the subset-of-states explosion, and intersect two NFA directly using the same construction as for DFA:
if A ->a B (in one NFA, you can go from state A to B outputting the letter 'a')
and X ->a Y (in the other NFA)
then (A,X) ->a (B,Y) in the intersection
(C,Z) is final iff C is final in the one NFA and Z is final in the other.
To start the process off, you start in the pair of start states for the two NFAs e.g. (A,X) - this is the start state of the intersection-NFA. Each time you first visit a state, generate an arc by the above rule for every pair of arcs leaving the two states, and then visit all the (new) states those arcs reach. You'd store the fact that you expanded a state's arcs (e.g. in a hash table) and end up exploring all the states reachable from the start.
If you allow epsilon transitions (that don't output a letter), that's fine:
if A ->epsilon B in the first NFA, then for every state (A,Y) you reach, add the arc (A,Y) ->epsilon (B,Y) and similarly for epsilons in the second-position NFA.
Epsilon transitions are useful (but not necessary) in taking the union of two NFAs when translating a regexp to an NFA; whenever you have alternation regexp1|regexp2|regexp3, you take the union: an NFA whose start state has an epsilon transition to each of the NFAs representing the regexps in the alternation.
Deciding emptiness for an NFA is easy: if you ever reach a final state in doing a depth-first-search from the start state, it's not empty.
This NFA-intersection is similar to finite state transducer composition (a transducer is an NFA that outputs pairs of symbols, that are concatenated pairwise to match both an input and output string, or to transform a given input to an output).
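Here is a small sketch of that intersection construction in Python, assuming each NFA is given as a dict mapping state -> character -> set of next states, with epsilon transitions already eliminated (the representation and names are mine, not from any library); it uses a breadth-first walk, though the depth-first search described above works equally well.
from collections import deque

def intersection_nonempty(nfa1, nfa2):
    # each nfa is (transitions, start, finals) with transitions[state][char] -> set of states
    (t1, s1, f1), (t2, s2, f2) = nfa1, nfa2
    seen = {(s1, s2)}
    queue = deque([(s1, s2)])
    while queue:
        a, x = queue.popleft()
        if a in f1 and x in f2:              # (C,Z) is final iff C and Z are both final
            return True                      # some string matches both regexes
        for c in t1.get(a, {}):              # only characters both NFAs can move on matter
            if c not in t2.get(x, {}):
                continue
            for b in t1[a][c]:
                for y in t2[x][c]:
                    if (b, y) not in seen:   # (A,X) ->c (B,Y) in the intersection
                        seen.add((b, y))
                        queue.append((b, y))
    return False                             # no final pair reachable: the intersection is empty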
This regex inverter (written using pyparsing) works with a limited subset of re syntax (no * or + allowed, for instance) - you could invert two re's into two sets, and then look for a set intersection.
In full theoretical generality (if you allow the extended features of real-world regex engines), the problem you describe is undecidable.
In practice, if you have a manageable number of regular expressions that use a limited subset of regexp syntax, and/or a limited selection of strings that can be used to match against the container of regular expressions, you might be able to solve it.
Assuming you're not trying to solve the abstract general case, there might be something you can do to solve a practical application. Perhaps if you provided a representative sample of the regexps, and described the strings you'd be matching with, a heuristic could be created to solve the problem.