What is the power of regular expressions?

As the name suggests, we might think that regular expressions can match only regular languages. But the regular expressions we use in practice contain features that I am not sure can be implemented with their theoretical counterparts. How, for example, would you simulate a back-reference?
So the question arises: what is the theoretical power of the regular expressions we use in practice? Can you think of a way to match {(a^n)(b^n)|n>=0}? What about {(a^n)(b^n)(c^n)|n>=0}?

The answer to your question is that "regular expression" languages which allow back-references are neither regular nor context-free. (In other words, as you pointed out, you cannot simulate a back-reference with a regular language, nor with a CFL.) In fact, Wikipedia notes that pattern matching in many of the "regular expression" languages we use in practice is NP-complete:
Pattern matching with an unbounded number of back references, as supported by numerous modern tools, is NP-complete (see [11], Theorem 6.2).
As others have suggested, the regular expression languages commonly supported in computer languages and libraries are a different animal from regular expressions in formal language theory. Larry Wall wrote, in regard to Perl "regexes":
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes"
You asked,
Can you think of a way to match {(a^n)(b^n)|n>=0}? What about {(a^n)(b^n)(c^n)|n>=0}?
I'm not sure here whether you're trying to test if theoretical regular expression languages can match the "language of squares", or whether you're looking for an implementation in a (practical) regex language. Here's the proof of why the former is not possible; and here's a long explanation and implementation of the latter for Java regexes.
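To make that concrete, here is a sketch of the well-known nested-backreference trick, assuming Java's java.util.regex (the class name and test strings are mine, purely for illustration). It matches a^n b^n for n >= 1; covering the empty string (n = 0) would need an extra alternative such as ^$.

import java.util.regex.Pattern;

public class AnBn {
    // Each pass of the outer loop consumes one 'a' and, inside the lookahead,
    // grows group 1 by one more 'b' (\1?+ re-matches the b's captured so far).
    // After the loop, \1 must supply exactly as many b's as a's were consumed.
    private static final Pattern AN_BN = Pattern.compile("^(?:a(?=a*(\\1?+b)))+\\1$");

    public static void main(String[] args) {
        for (String s : new String[] {"ab", "aabb", "aaabbb", "aabbb", "abab"}) {
            // prints true only for strings of the form a^n b^n
            System.out.println("\"" + s + "\" -> " + AN_BN.matcher(s).matches());
        }
    }
}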

The basic difficulty with regular expressions that you are hinting at is the fact that regular expressions don't have any "memory". In their purest form, no real regular expression should be able to recognize either of these languages; any regular expression that could parse these sorts of languages would be, by definition, not regular. I think what you mean by "regular expressions we use in practice" is extended regular expressions, which are not technically regular expressions.
The problem with your question is that you are asking to apply a specifically contrived theoretical scenario to a practical situation, which almost always ends in disaster.
So my answer is sort of a non-answer, in that I am saying you would have to rephrase the question to ask about extended regular expressions for it to have an answer.
A couple of resources that might help in this matter:
Helpful wikipedia article
Similar StackOverflow question
Good book with a chapter on this topic
I'm also making my answer a community wiki for anyone else who wants to contribute to this line of thought.

Related

Special construct of expression use

Regarding the if-then-else conditional construct in regular expressions, I would like to know what happens when several such constructs are combined into a single expression to perform multiple matches.
Let's take this example below.
foo(bar)?(?(1)baz|quz)
Now we combine it with another expression, so that it still matches the previous conditions and then adds the following conditions on top:
foo(bar)?(?(1)baz|quz)|(?(?=.*baz)bar|foo)
Mainly I am asking: should you construct a regular expression in this way, and what purpose would ever require you to use it in this way?
should you construct a regular expression in this way, and what purpose would ever require you to use it in this way?
In this case, and probably most cases, I would say "no".
I often find that conditionals can be rewritten as lookarounds or simplified within alternations.
For instance, it seems to me that the regex you supplied,
foo(bar)?(?(1)baz|quz)|(?(?=.*baz)bar|foo)
could be replaced for greater clarity by
bar(?=.*baz)|foo(?:quz|barbaz)?
which gives us two simple match paths.
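As a rough check of that rewrite: the conditional syntax (?(1)...) and (?(?=...)...) comes from Perl/PCRE (and .NET) and is not supported by java.util.regex, so the sketch below, with made-up test strings, only exercises the simplified alternation.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConditionalRewrite {
    public static void main(String[] args) {
        // The simplified form of the conditional expression discussed above.
        Pattern p = Pattern.compile("bar(?=.*baz)|foo(?:quz|barbaz)?");
        for (String s : new String[] {"foobarbaz", "fooquz", "bar then baz", "bar alone"}) {
            Matcher m = p.matcher(s);
            System.out.println(s + " -> " + (m.find() ? "matched \"" + m.group() + "\"" : "no match"));
        }
    }
}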
But it's been six months since you posted the question. During this time, answering a great many questions on SO, have you ever felt the need for that kind of construction?
I believe the answer to this would ultimately be specific to the regex library being used or a language's implementation. ("Comparison of regular expression engines", Wikipedia.)
There isn't an official RFC or specification for regular expressions, and the diversity of implementations leads to frustration even with "simple" expressions; the nuances you're considering are probably implementation-specific rather than regex-specific.
And even beyond your specific question, I think most of the regex-related questions on StackOverflow would be improved if people were more specific about the language being used or the library employed by the application they're using. When troubleshooting my own regular expressions in various applications (text editors, for example), it took me a while before I realized the need to understand the specific library being used.

When a string is being matched against a regular expression, what's going on behind the scenes?

I'd be interested to know what kind of algorithms are used for matching, and how they are optimised, because I imagine that some regexes could produce a vast number of possible matches and cause serious problems for a poorly written regex parser.
Also, I recently discovered the concept of a ReDoS; why do regexes such as (a|aa)+ or (a|a?)+ cause problems?
EDIT: I have used them mostly in C# and Python, so that's what I had in mind when I was considering the question. I assume Python's is written in C like the rest of the interpreter, but I have no idea about C#'s.
I find http://www.regular-expressions.info has really useful info about regular expressions.
The author specifically talks about catastrophic uses of regular expressions.
Regex Buddy has this debug page which "offers you a unique view inside a regular expression engine".
http://www.regexbuddy.com/debug.html
There are two kinds of regular expression engines: NFA and DFA. I am quite rusty, so I don't dare go into specifics from memory. Here is a page that goes through the algorithms, though. Some parsers will perform better than others on poorly-written expressions. A good book on the subject (that is sitting on my shelf) is Mastering Regular Expressions.
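To see why a pattern like (a|aa)+ from the question is troublesome for a backtracking engine, here is a small sketch using java.util.regex (timings depend entirely on your machine and JVM, but they grow roughly exponentially with n):

import java.util.regex.Pattern;

public class Redos {
    public static void main(String[] args) {
        // (a|aa)+ can never match a run of a's followed by 'b', but a backtracking
        // engine tries every way of splitting the a's into "a" and "aa" pieces
        // before it can report failure.
        Pattern evil = Pattern.compile("(a|aa)+");
        for (int n = 20; n <= 32; n += 4) {
            StringBuilder input = new StringBuilder();
            for (int i = 0; i < n; i++) input.append('a');
            input.append('b');
            long start = System.nanoTime();
            boolean matched = evil.matcher(input).matches();
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("n=" + n + " matched=" + matched + " time=" + ms + "ms");
        }
    }
}

A DFA-based engine answers the same question in linear time, which is exactly the trade-off the NFA/DFA distinction above is about.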

What is the Perl regular expression dialect/implementation called?

The string-matching engine that Perl calls "regular expressions" is very different from what is known by the term "regular expressions" in textbooks.
So, my question is: is there some document describing Perl's regexp implementation, how it works, and in what ways it really differs from the classic one (by classic I mean regular expressions that can actually be transformed into an ordinary DFA/NFA)?
Thank you.
Perl regular expressions are of course called Perl regular expressions, or regexes for short. They may also be called patterns or rules. But what they are, or at least can be, is recursive descent parsers. They’re implemented using a recursive backtracker, although you can swap in a DFA engine if you prefer to offload DFA‐solvable tasks to it.
Here are some relevant citations about these matters, with all emboldening — and some of the text :) — mine:
You specify a pattern by creating a regular expression (or regex),
and Perl’s regular expression engine (the “Engine”, for the rest of this
chapter) then takes that expression and determines whether (and how) the
pattern matches your data. While most of your data will probably be
text strings, there’s nothing stopping you from using regexes to search
and replace any byte sequence, even what you’d normally think of as
“binary” data. To Perl, bytes are just characters that happen to have
an ordinal value less than 256.
If you’re acquainted with regular expressions from some other venue, we
should warn you that regular expressions are a bit different in Perl.
First, they aren’t entirely “regular” in the theoretical sense of the
word, which means they can do much more than the traditional regular
expressions taught in computer science classes. Second, they are used
so often in Perl that they have their own special variables, operators,
and quoting conventions which are tightly integrated into the language,
not just loosely bolted on like any other library.
      — Programming Perl, by Larry Wall, Tom Christiansen, and Jon Orwant
This is the Apocalypse on Pattern Matching, generally having to do with
what we call “regular expressions”, which are only marginally related to
real regular expressions. Nevertheless, the term has grown with the
capabilities of our pattern matching engines, so I’m not going to try to
fight linguistic necessity here. I will, however, generally call them
“regexes” (or “regexen”, when I’m in an Anglo‐Saxon mood).
      — Perl6 Apocalypse 5: Pattern Matching, by Larry Wall
There’s a lot of new syntax there, so let’s step through it slowly, starting with:
$file = rx/ ^ <$hunk>* $ /;
This statement creates a pattern object. Or, as it’s known in Perl 6, a
“rule”. People will probably still call them “regular expressions” or
“regexes” too (and the keyword rx reflects that), but Perl patterns long
ago ceased being anything like “regular”, so we’ll try and avoid those
terms.
[Update: We’ve resurrected the term “regex” to refer to these patterns in
general. When we say “rule” now, we’re specifically referring to the kind
of regex that you would use in a grammar. See S05.]
      — Perl6 Exegesis 5: Pattern Matching, by Damian Conway
This document summarizes Apocalypse 5, which is about the new regex syntax.
We now try to call them regex rather than “regular expressions” because
they haven’t been regular expressions for a long time, and we think the
popular term “regex” is in the process of becoming a technical term with a
precise meaning of: “something you do pattern matching with, kinda like a regular
expression”. On the other hand, one of the purposes of the redesign
is to make portions of our patterns more amenable to analysis under
traditional regular expression and parser semantics, and that involves
making careful distinctions between which parts of our patterns and
grammars are to be treated as declarative, and which parts as procedural.
In any case, when referring to recursive patterns within a grammar, the
terms rule and token are generally preferred over regex.
      — Perl6 Synopsis 5: Regexes and Rules,
by Damian Conway, Allison Randal, Patrick Michaud, Larry Wall, and Moritz Lenz
The O'Reilly book 'Mastering Regular Expressions' has a very good explanation of Perl's and other engines. For me this is the reference book on the topic.
There is no formal mathematical name for the language accepted by PCREs.
The term "regular expressions with backtracking" or "regular expressions with backreferences" is about as close as you will get. Anybody familiar with the difference will know what you mean.
(There are only two common types of regexp implementations: DFA-based, and backtracking-based. The former generally accept the "regular languages" in the traditional Computer Science sense. The latter generally accept... more, and it depends on the specific implementation, but backreferences are always one of the non-DFA features.)
I asked the same question on the theoretical CS Stack Exchange (Regular expressions aren't), and the answer that got the most upvotes was “regex.”
The dialect is called PCRE (Perl-compatible Regular Expressions).
It's documented in the Perl manual.
Or in "Programming Perl" by Wall, Orwant and Christiansen

what does regular in regex/"regular expression" mean?

What does the "regular" in the phrase "regular expression" mean?
I have heard that regexes were regular at one time, but no longer.
The "regular" in "regular expression" comes from the fact that it matches a regular language.
The concept of regular expressions used in formal language theory is quite different from what engines like PCRE call regular expressions. PCRE and other similar engines have features like lookahead, conditionals and recursion, which make them able to match non-regular languages.
It comes from regular language. This is part of formal language theory. Check out the Chomsky hierarchy for other formal languages.
It's signifying that it's a regular language.
Regexes are still popular. Some people frown on them but they remain a quick and easy (if you know how to use them) way of matching certain types of strings. The alternative is often a good few lines of code looping through strings and extracting the bits that you need which is much nastier!
I still use them on a regular (pun fully intended) basis. To give you a use case, the other day I used one to match lines of guitar chords as opposed to lyrics. They're also commonly used for things like basic validation of email addresses and the like.
They're certainly not dead.
I think it comes from the term for the class of grammars that regular expressions describe: regular grammars (or "regular" languages). Where that term comes from is likely answered by a trip to Wikipedia.
Modern regex engines that implement all those fancy look-ahead, pattern re-match, and subexpression counting features, well, those are recognizing a class of grammar that's a superset of regular grammars. "Classical" regular expressions correspond in mechanical ways to theoretical machines called "finite automata". That's a really fun subject in and of itself.
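A tiny illustration of that superset, sketched with Java's engine (the patterns are just examples): the first pattern describes a regular language that a finite automaton can recognize, while the second uses a backreference to recognize the language of "squares" (a word repeated twice), which no finite automaton can.

import java.util.regex.Pattern;

public class BeyondRegular {
    public static void main(String[] args) {
        // Classical, genuinely "regular" pattern.
        System.out.println(Pattern.matches("(?:ab|ba)*", "abba"));   // true

        // Backreference: matches exactly the strings of the form w + w.
        System.out.println(Pattern.matches("(.+)\\1", "abcabc"));    // true
        System.out.println(Pattern.matches("(.+)\\1", "abcabd"));    // false
    }
}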

library for converting regular expressions to NFAs?

Is there a good library for converting Regular Expressions into NFAs? I see lots of academic papers on the subject, which are helpful, but not much in the way of working code.
My question is due partially to curiosity, and partially to an actual need to speed up regular expression matching on a production system I'm working on. Although it might be fun to explore this subject for learning's sake, I'm not sure it's a "practical" solution to speeding up our pattern matching. We're a Java shop, but would happily take pointers to good code in any language.
Edit:
Interesting, I did not know that Java's regexps were already NFAs. The title of this paper led me to believe otherwise. Incidentally, we are currently doing our regexp matching in Postgres; if the simple solution is to move the matching into the Java code, that would be great.
Addressing your need to speed up your regexes:
Java's implementation of its regex engine is NFA based. As such, to tune your regexes, I would say that you would benefit from a deeper understanding of how the engine is implemented.
And as such I direct you to Mastering Regular Expressions. The book gives substantial treatment to the NFA engine and how it performs matches, including how to tune your regexes specifically for an NFA engine.
Additionally, look into Atomic Grouping for tuning your regex.
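As a quick sketch of what atomic grouping actually changes (a classic textbook example, not specific to your regexes): once an atomic group has matched, the engine throws away the backtracking positions saved inside it.

import java.util.regex.Pattern;

public class AtomicGroupDemo {
    public static void main(String[] args) {
        // Ordinary group: "bc" is tried first, the trailing 'c' then fails, and the
        // engine backtracks into the alternation and retries with just "b".
        System.out.println(Pattern.matches("a(?:bc|b)c", "abc")); // true

        // Atomic group: after "bc" succeeds, the saved positions are discarded,
        // so the trailing 'c' has nothing left to match and the whole match fails.
        System.out.println(Pattern.matches("a(?>bc|b)c", "abc")); // false
    }
}

Giving up those positions is what makes atomic groups (and possessive quantifiers) useful for cutting off needless backtracking when you know a partial match cannot be salvaged.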
Disclaimer: I'm not an expert on java+regexes. But, if I understand correctly...
If Java's regular expression matcher is similar to most others, it does use NFAs, but not in the way you might expect. Instead of the forward-only implementation you may have heard about, it uses a backtracking solution, which simplifies subexpression matching and is probably required for backreference support. However, it performs alternation poorly.
You want to see: http://swtch.com/~rsc/regexp/regexp1.html (concerning edge cases which perform poorly on this altered architecture).
I've also written a question which I suppose comes down to the same thing:
Regex implementation that can handle machine-generated regex's: *non-backtracking*, O(n)?
But basically, it looks like, for some very odd reason, all common major-vendor regex implementations have terrible performance on certain regexes, even though this is unnecessary.
Disclaimer: I'm a googler, not an expert on regexes.
There are a bunch of faster-than-JDK regex libraries, one of which is dk.brics.automaton. According to the benchmark linked in the article, it is approximately 20x faster than the JDK implementation.
This library was written by Anders Møller and has also been mavenized.
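For reference, a minimal sketch of how the library is typically used (check the javadoc of the version you pull in for the exact API; note that it accepts only classical regex syntax, with no backreferences or lookarounds):

import dk.brics.automaton.Automaton;
import dk.brics.automaton.RegExp;
import dk.brics.automaton.RunAutomaton;

public class BricsDemo {
    public static void main(String[] args) {
        RegExp re = new RegExp("ab(c|d)*");        // parse the pattern
        Automaton a = re.toAutomaton();            // build the automaton
        RunAutomaton run = new RunAutomaton(a);    // table-based runner: linear-time matching

        System.out.println(run.run("abcdcd"));     // true
        System.out.println(run.run("abe"));        // false
    }
}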