I recently found out about Kleene algebra for manipulating and simplifying regular expressions.
I'm wondering if this has been built into any computational software like Mathematica. It would be great to have a computational tool for taking unions and concatenations of large expressions and having the computer simplify them.
If you are not aware of any programs with this algebra built in, do you know any programs that allow extending their engines with new algebras?
On http://www.maplesoft.com/msw/program/MSW04FinalProgram.pdf, it states:
One of the basic results of the theory of finite automata is the famous Kleene theorem, which states that a language is acceptable by a finite automaton if and only if it can be represented by a regular expression.
and
The main difficulty of the algorithmic treatment of regular expressions is, however, their simplification. Although several identities are known concerning regular expressions, e.g., the rules of Kleene algebra, there does not exist an effective algorithm for solving the simplification problem of regular expressions.
and
Under the circumstances, the only way left is to develop heuristic algorithms for simplifying regular expressions. For the aut package, this paper outlines the Maple procedures Rsimplify, Rabsorb and Rexpand.
I'm wondering if open-source implementations of Kleene algebra algorithms exist.
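For a flavour of what such heuristics do, here is a toy simplifier (entirely my own illustration, unrelated to the aut package) that applies a few Kleene algebra identities bottom-up, in Python:

    # Regular expressions as nested tuples:
    # ('sym', a), ('eps',), ('empty',), ('cat', r, s), ('alt', r, s), ('star', r)
    def simplify(r):
        if r[0] in ('sym', 'eps', 'empty'):
            return r
        if r[0] == 'star':
            s = simplify(r[1])
            if s[0] in ('eps', 'empty'):
                return ('eps',)              # ε* = ∅* = ε
            if s[0] == 'star':
                return s                     # (r*)* = r*
            return ('star', s)
        a, b = simplify(r[1]), simplify(r[2])
        if r[0] == 'cat':
            if ('empty',) in (a, b):
                return ('empty',)            # r∅ = ∅r = ∅
            if a == ('eps',):
                return b                     # εr = r
            if b == ('eps',):
                return a                     # rε = r
            return ('cat', a, b)
        if r[0] == 'alt':
            if a == b or a == ('empty',):
                return b                     # r + r = r, ∅ + r = r
            if b == ('empty',):
                return a                     # r + ∅ = r
            return ('alt', a, b)

    # simplify(('cat', ('eps',), ('star', ('star', ('sym', 'a')))))  ->  ('star', ('sym', 'a'))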
Related
As the title suggests, how are DFAs and NFAs related to regular expressions? Would learning about DFAs and NFAs be useful for gaining a better understanding of regular expressions?
Finite automata (FA), regular expressions (RE), and regular grammars are all finite representations of regular languages. The purpose of each is to describe a regular set/language (the same holds for other classes of languages, e.g. CFLs, CSLs, etc.).
Automata are comparatively more useful for theoretical purposes, such as analyzing language properties and complexity classes.
In the case of finite automata, the deterministic (DFA) and non-deterministic (NFA) models recognize the same class of languages, the "regular languages" (this is not true for every machine model; for example, deterministic and non-deterministic pushdown automata are not equivalent).
A regular expression (RE) is another way to represent a regular language, in textual/algebraic form. This form is very helpful for describing sets of valid strings in programming languages (where automata can't be used directly), whereas regular expressions are not much help for analyzing language properties (e.g. applying the pumping lemma).
How are DFA and NFA related to regular expressions?
Both represent the same class of languages: the regular languages.
It is not possible to construct an automaton or a regular expression algorithmically from an English description of a language directly. However, if we have either representation (FA or RE), we can systematically derive the other; e.g. we can write a regular expression for a DFA/NFA in a step-by-step, systematic manner using Arden's theorem.
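To illustrate Arden's theorem (an equation X = XA + B, where A does not contain ε, has the unique solution X = BA*), take the two-state DFA for "even number of a's" over {a, b}: q0 is the accepting start state, q1 the other state. Writing qi for the set of strings that take the machine from the start state to state i:

    q0 = ε + q0 b + q1 a
    q1 = q0 a + q1 b

Applying the theorem to the second equation gives q1 = q0 a b*. Substituting into the first gives q0 = ε + q0 (b + a b* a), and applying the theorem once more yields q0 = (b + a b* a)*, a regular expression for the language.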
Let's take an example language: L = "strings with an even number of a's and an even number of b's".
A regular expression for L is:
(
(a + b(aa)*ab)(bb)*(ba(aa)*ab(bb)*)*a +
(b + a(bb)*ba)(aa)*(ab(bb)*ba(aa)*)*b
)*
It is very tough to write a regular expression for this language directly (and even a bit tricky to understand this RE quickly).
But starting from a DFA and using Arden's theorem, it is straightforward to write a regular expression for L.
The important point is that drawing a DFA for this language is comparatively simple (and easy to remember).
One more example is the language over symbols 0 and 1 consisting of binary strings whose decimal equivalent is divisible by 5; writing an RE for this is very hard compared to writing the DFA, as the sketch below suggests.
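To show how little machinery that DFA needs (a stand-alone sketch of my own, not taken from any package): the state is simply the value of the bits read so far, modulo 5.

    def divisible_by_5(bits):
        # DFA over {'0', '1'}: accept iff the binary number read so far is divisible by 5
        state = 0                              # remainder of the empty prefix
        for b in bits:
            state = (2 * state + int(b)) % 5   # shift left and add a bit, mod 5
        return state == 0                      # accepting state: remainder 0

    print(divisible_by_5("1010"))  # True  (10 in decimal)
    print(divisible_by_5("110"))   # False (6 in decimal)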
We can also construct a DFA from a regular expression algorithmically.
Would learning DFA and NFA be useful for a better understanding of regular expressions?
Yes, for the following reasons:
Sometimes it is hard to write the RE directly.
A regular expression written directly from an English description can be buggy. The chances of a buggy DFA are lower than those of a buggy regex; that is why, when writing a compiler, the preferred steps are to draw the DFA for each token first and then write its equivalent regex. The DFA serves as a proof of correctness, since DFAs are more descriptive and make the language's structure easier to grasp (if the DFA is correct, the derived RE will be correct).
If an RE is complex and you need to find out what language it describes, you can draw the DFA from the RE and then read off the language description.
Sometimes, to find a better RE, you can draw the DFA, minimize it, and then write the RE from the minimized DFA; this may give you a better solution. (It is not a general technique, but it can help.)
If it is hard to compare two regular expressions, you can compare their corresponding DFAs to check for equivalence (see the sketch after this list).
Note: sometimes writing the regular expression is much simpler than drawing the DFA.
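To make the equivalence point concrete, here is a minimal sketch (my own illustration, assuming complete DFAs given as (delta, start, accepting) triples): two DFAs accept the same language iff no reachable state pair of the product automaton disagrees on acceptance.

    from collections import deque

    def dfas_equivalent(dfa1, dfa2, alphabet):
        # Each DFA is (delta, start, accepting); delta maps (state, symbol) -> state.
        (t1, s1, a1), (t2, s2, a2) = dfa1, dfa2
        seen = {(s1, s2)}
        queue = deque([(s1, s2)])
        while queue:
            p, q = queue.popleft()
            if (p in a1) != (q in a2):
                return False               # some string is accepted by only one DFA
            for ch in alphabet:
                pair = (t1[(p, ch)], t2[(q, ch)])
                if pair not in seen:
                    seen.add(pair)
                    queue.append(pair)
        return True                        # no distinguishing string exists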
A non-deterministic finite automaton (NFA) is a machine that can recognize a regular language.
A regular expression is a string that describes a regular language.
It is possible to algorithmically build an NFA that will recognize the language described by a given regular expression. Running the NFA on an input string will tell you if the regular expression matches the input string or not.
So NFAs can be used to implement regular expression engines, but knowledge of them is not required to use regular expressions to their full potential.
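As a small illustration (a hand-built NFA, not one generated from a regex), here is the standard way to run an NFA: track the set of states the machine could currently be in. This one accepts strings over {a, b} ending in "ab".

    # Transition table: (state, symbol) -> set of possible next states.
    # State 0 is the start state; state 2 is accepting.
    delta = {
        (0, 'a'): {0, 1},    # on 'a', guess whether this begins the final "ab"
        (0, 'b'): {0},
        (1, 'b'): {2},
    }

    def nfa_accepts(s):
        states = {0}                       # every state the NFA could be in so far
        for ch in s:
            states = set().union(*(delta.get((q, ch), set()) for q in states))
        return 2 in states                 # accept if some possible run ends in state 2

    print(nfa_accepts("abab"))  # True
    print(nfa_accepts("aba"))   # False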
Can you implement the shunting yard algorithm in terms of regular expressions?
I do not think so. Regular expressions can only match regular languages (See Regular language), while infix expressions are a kind of context-free language (See Context-free language). For example, you cannot match expressions made of properly matched parentheses with a regular expression.
I believe this has been answered here: Can the shunting yard algorithm parse POSIX regular expressions?
I will say that the answer to your question is "no, you cannot implement the shunting yard algorithm using a regular expression." This is for the same reason you cannot parse arbitrary HTML using regular expressions. Which boils down to this:
Regular expressions do not have a stack. Because the shunting yard algorithm relies on a stack (to push and pop operands as you convert from infix to RPN), regular expressions do not have the computational "power" to perform this task.
This glosses over many details, but a "regular expression" is one way to define a regular language. When you "use" a regular expression, you are asking the computer to say: "Look at a body of text and tell me whether or not any of those strings are in my language, the language that I defined using a regular expression." I'll point to this most excellent answer, which you and everyone reading this should upvote, for more on regular languages.
So now you need some mathematical concept to augment "regular languages" in order to create more powerful languages. If you were to characterize the shunting yard algorithm as a realization of a model of computational power, then you might say that the algorithm would be described by a context-free grammar (hey, what do you know, that link uses an expression parse tree as an example): a push-down automaton. Something with a stack.
If you are less than familiar with automata theory and complexity classes, then those Wikipedia articles are probably not that helpful without explaining them from the ground up.
The point being: you may be able to use regexes to help write shunting yard, but regexes are not very good at operations that have arbitrary depth, which this problem has. So I would not spend too much time going down the regex avenue for this problem.
I'm not new to using regular expressions, and I understand the basic theory they're based on: finite state machines.
I'm not so good at algorithmic analysis, though, and don't understand how a regex compares to, say, a basic linear search. I'm asking because on the surface it seems like a linear array scan (if the regex is simple).
Where could I go to learn more about implementing a regex engine?
This is one of the most popular write-ups on the topic: Regular Expression Matching Can Be Simple And Fast. Running a DFA-compiled regular expression against a string is indeed O(n), but the construction can require up to O(2^m) time/space (where m is the size of the regular expression).
Are you familiar with the terms deterministic/non-deterministic finite automaton (DFA/NFA)?
Real regular expressions (by "real" I mean those that recognize regular languages, not the regexes with backreferences etc. that almost every programming language includes) can be converted into a DFA/NFA, and both can be implemented in a mechanical way in a programming language (an NFA can be converted into a DFA).
What you have to do is:
Find a way to convert a regex into an automaton
Implement the recognition of the automaton in the programming language of your preference
That way, given a regex, you can convert it to a DFA and run it to see whether or not it matches a specified text.
This can be implemented in O(n), because a DFA doesn't go backward (unlike a Turing machine), so it either matches the string or it doesn't. That assumes you don't count overlapping matches; otherwise you will have to go back and start matching again... (A sketch of the conversion follows below.)
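As a minimal sketch of the NFA-to-DFA half (my own illustration; the transition-table format is an assumption): the subset construction turns an NFA into a DFA whose states are frozensets of NFA states, after which matching costs one table lookup per input character.

    def nfa_to_dfa(nfa_delta, start, accepting, alphabet):
        # nfa_delta maps (state, symbol) -> set of NFA states
        start_set = frozenset([start])
        dfa_delta, todo, seen = {}, [start_set], {start_set}
        while todo:
            S = todo.pop()
            for ch in alphabet:
                # the DFA successor of S is the union of all NFA successors
                T = frozenset().union(*(nfa_delta.get((q, ch), ()) for q in S))
                dfa_delta[(S, ch)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
        return dfa_delta, start_set, {S for S in seen if S & accepting}

    def dfa_match(dfa_delta, start, accepting, text):
        state = start
        for ch in text:
            state = dfa_delta[(state, ch)]   # O(1) per character, O(n) overall
        return state in accepting

    # Example with an NFA for strings ending in "ab" (state 2 accepting):
    # d, s0, acc = nfa_to_dfa({(0,'a'): {0,1}, (0,'b'): {0}, (1,'b'): {2}}, 0, {2}, "ab")
    # dfa_match(d, s0, acc, "aab")  -> True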
Classic regular expressions can be implemented in a way which is fast in practice but has really bad worst-case behaviour (the standard backtracking approach) or in a way which has guaranteed reasonable worst-case behaviour (simulating the NFA directly). The standard backtracking implementation can be extended to support lots of extra matching constructs and flags, exploiting the fact that it is basically a backtracking search.
Examples of the standard approach are everywhere (e.g. built into Perl). There is an implementation that claims good worst-case behaviour at http://code.google.com/p/re2/ - in fact, it is even better than I expected in the worst case, so they may have found an extra trick or two.
If you are at all interested in this, or care about writing programs that can be made to lock up solid given pathological inputs, read http://swtch.com/~rsc/regexp/regexp1.html.
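To see the pathological behaviour for yourself, here is a small demonstration (illustrative; exact timings vary by machine and Python version) using Python's backtracking re module with the classic alternation-plus-repetition blowup:

    import re, time

    pattern = re.compile(r'(a|aa)*b')      # alternation inside unbounded repetition
    for n in (24, 28, 32):
        s = 'a' * n                        # no 'b': the engine tries every way of
                                           # splitting the a's before giving up
        t0 = time.perf_counter()
        pattern.match(s)
        print(n, round(time.perf_counter() - t0, 3))

    # The running time grows exponentially with n, while a DFA-based engine
    # such as RE2 answers the same query in time linear in the input.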
Classical regular expressions are equivalent to finite automata. Most current implementations of "regular expressions" are not strictly speaking regular expressions but are more powerful. Some people have started using the term "pattern" rather than "regular expression" to be more accurate.
What is the formal language classification of what can be described with a modern "regular expression" such as the patterns supported in Perl 5?
Update: By "Perl 5" I mean that pattern matching functionality implemented in Perl 5 and adopted by numerous other languages (C#, JavaScript, etc) and not anything specific to Perl. I don't want to consider, for example, tricks for embedding Perl code in a pattern.
Perl regexps, like those of any pattern language where "backreferences" are allowed, are not actually "regular".
A backreference is a mechanism for matching the same string that a sub-pattern matched before. For example, /^(a*)\1$/ matches only strings with an even number of a's, because after some number of a's the same number of a's must follow.
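A quick check with a backtracking engine (Python's re, whose backreference syntax agrees with Perl's for this example):

    import re

    even_as = re.compile(r'^(a*)\1$')    # \1 must repeat exactly what group 1 matched
    print(bool(even_as.match('aaaa')))   # True:  'aa' + 'aa'
    print(bool(even_as.match('aaa')))    # False: an odd count has no equal split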
It's easy to prove that, for instance, the pattern /^((a|b)*)\1$/ matches words from a non-regular language(*), so it is more expressive than any finite automaton. Regular expressions can't "remember" a string of arbitrary length and then match it again (the string may be very long, while a finite-state machine can only simulate a finite amount of "memory").
A formal proof would use the pumping lemma. (By the way, this language can't be described by a context-free grammar either.)
That's without even counting the tricks that allow Perl code inside Perl regexps (which can match, e.g., the non-regular language of balanced parentheses).
(*) "Regular languages" are sets of words that are matched by finite automata. I already wrote an answer about that.
There was a recent discussion on this topic at PerlMonks: Turing completeness and regular expressions
I've always heard Perl's regex implementation described as an NFA with backtracking. Wikipedia has a short section on this; it is possibly slightly too fuzzy, but informative nonetheless. From Wikipedia:
There are at least three different algorithms that decide if and how a given regular expression matches a string.
The oldest and fastest two rely on a result in formal language theory that allows every nondeterministic finite state machine (NFA) to be transformed into a deterministic finite state machine (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has the time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step, possibly with caching. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(nm). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. As both can be seen as different ways of executing the same DFA, they are also often called the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky.[12][13]
The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. More complex implementations will often identify and speed up or abort common cases where they would otherwise run slowly. Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must use a backtracking implementation.
Some implementations try to provide the best of both algorithms by first running a fast DFA match to see if the string matches the regular expression at all, and only in that case perform a potentially slower backtracking match.
Does anyone know any examples of the following?
Proof developments about regular expressions (possibly extended with backreferences) in proof assistants (such as Coq).
Programs in dependently-typed languages (such as Agda) about regular expressions.
Certified Programming with Dependent Types has a section on creating a verified regular expression matcher. Coq Contribs has an automata contribution that might be useful. Jan-Oliver Kaiser formalized the equivalence between regular expressions, finite automata, and the Myhill-Nerode characterization in Coq for his bachelor's thesis.
Moreira, Pereira & de Sousa, On the Mechanisation of Kleene Algebra in Coq gives a nice verified construction of the Antimirov derivative of regexps in Coq. It is pretty easy to read off an NFA from this construction, and to compute the intersection of regexps.
I'm not sure why you separate Coq from dependently typed programming: Coq essentially is programming in a polymorphic dependently typed lambda calculus with inductive types (i.e., CIC, the calculus of inductive constructions).
I've never heard of a formalisation of regexps in a dependently typed language, nor have I heard of something such as an Antimirov-like derivative for regexps with backtracking, but Becchi & Crowley, Extending Finite Automata to Efficiently Match Perl-Compatible Regular Expressions, provide a notion of finite-state automaton that matches Perl-like regexp languages. That might be attractive to formalisers in the near future.
See Perl Regular Expression Matching is NP-Hard
Regex matching is NP-hard when regexes are allowed to have backreferences.
Reduction of 3-CNF-SAT to Perl Regular Expression Matching
[...] 3-CNF-SAT is NP-complete. If there were an efficient (polynomial-time) algorithm for computing whether or not a regex matched a certain string, we could use it to quickly compute solutions to the 3-CNF-SAT problem, and, by extension, to the knapsack problem, the travelling salesman problem, etc. etc.
I don't know of any development that treats regular expressions by themselves.
Finite automata, however, are relevant, since NFAs are the standard way to match those regular expressions; they have been studied in NuPRL. Have a look at: Robert L. Constable, Paul B. Jackson, Pavel Naumov, Juan Uribe, Constructively Formalizing Automata Theory.
Should you be interested in approaching those formal languages through algebra, especially by developing finite semigroup theory, there are a number of algebra libraries developed in various theorem provers that you could consider using, one of which is particularly efficient in a finite setting.
The proof assistant Isabelle/HOL ships with a number of formalized proofs regarding regular expressions (without backreferences):
http://afp.sourceforge.net/browser_info/devel/HOL/Regular-Sets/
(here is a paper by the authors about what exactly they did).
Another approach is to characterize regular expressions via the Myhill-Nerode theorem:
http://www.dcs.kcl.ac.uk/staff/urbanc/Publications/itp-11.pdf