Why do most languages implement wildcard regular expressions inefficiently? - regex

I was given a link to the following article regarding the implementation of regular expressions in many modern languages.
http://swtch.com/~rsc/regexp/regexp1.html
TL;DR: Certain regular expressions, such as (a?)^n a^n for fixed n, take exponential time to match against, say, a^n, because the match is implemented by backtracking over the string when trying the optional ? parts. Implementing these as an NFA, by keeping lists of possible states, makes this much more efficient for obvious reasons.
The article doesn't go into much detail about how each language actually implements matching (and it is old), but I'm curious: what, if any, are the drawbacks of using an NFA as opposed to other implementation techniques? The only thing I can come up with is that, with all the bells and whistles of most libraries, either a) building an NFA for all those features is impractical, or b) there is some conflicting performance issue between the expression above and some other, possibly more common, operation.
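For anyone who wants to see the blow-up concretely, here is a small timing sketch in Python, whose re module is one of the backtracking implementations the article discusses; the exact timings will obviously vary by machine:

import re
import time

# n copies of "a?" followed by n copies of "a", matched against n a's: the
# pathological case described above. Each extra "a?" roughly doubles the work
# for a backtracking engine, so the times below grow exponentially with n.
for n in (18, 20, 22):
    pattern = 'a?' * n + 'a' * n
    start = time.time()
    re.match(pattern, 'a' * n)
    print(n, round(time.time() - start, 3), 'seconds')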

While it is possible to construct DFAs that handle these complex cases well (the Tcl RE engine, which was written by Henry Spencer, is a proof by example; the article linked indicated this with its performance data), it's also exceptionally hard.
One key thing though is that if you can detect that you never need the matching group information, you can then (for many REs, especially those without internal backreferences) transform the RE into one that only uses parentheses for grouping allowing a more efficient RE to be generated (so (a?){n}a{n} — I'm using modern conventional syntax — becomes effectively equivalent to a{n,2n}). Backreferences break that major optimisation; it's not for nothing that in Henry's RE code (alluded to above) there is a code comment describing them as the “Feature from the Black Lagoon”. It is one of the best comments I've ever read in code (with the exception of references to academic papers that describe the algorithm encoded).
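A quick hedged check of that rewrite in Python's backtracking re module: the capture-free a{n,2n} form accepts exactly the same strings (as a full match), but matches instantly even for values of n far beyond what the original form could cope with.

import re

n = 500
fast = re.compile('a{%d,%d}' % (n, 2 * n))   # the rewritten, capture-free form

print(bool(fast.fullmatch('a' * n)))         # True: n a's is the minimum
print(bool(fast.fullmatch('a' * 2 * n)))     # True: 2n a's is the maximum
print(bool(fast.fullmatch('a' * (n - 1))))   # False: too few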
On the other hand, the Perl/PCRE-style engines, with their recursive-descent evaluation schemes, can ascribe a much saner set of semantics to mixed-greediness REs, and many other things besides. (At the extreme end of this, recursive patterns — (?R) et al — are completely impossible with automata-theoretic approaches. They require a stack to match, which formally makes them not regular expressions.)
On a practical level, the cost of building the NFA and the DFA you then compile that to can be quite high. You need clever caching to make it not too expensive. And also on a practical level, the PCRE and Perl implementations have had a lot more developer effort applied to them.

My understanding is that the main reason is we're not just interested in whether a string matches, but in how it matches, e.g. with capturing groups. For example, (x*)x needs to know how many xs were in the group so it can be returned as a capturing group. Similarly it "promises" to consume as many x characters as possible, which matters if we continue matching more things against the remaining string.
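A tiny illustration in Python (the snippet is mine, not from the article):

import re

m = re.match(r'(x*)x', 'xxxx')
# The engine must report where the group ended, not merely that the pattern matched:
print(m.group(1))   # 'xxx' -- the greedy x* takes three of the four x's
print(m.end())      # 4     -- and the match as a whole consumed all four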
Some simpler types of expressions could be matched in the efficient way the article describes, and I have no special knowledge of why this isn't done. Presumably it's more effort to write two separate engines, and perhaps the extra time analyzing an expression to determine which engine to use on it is expensive enough that it's better to skip that step for the common case, and live with the very poor performance in the worst case.

Here:
http://haifux.org/lectures/156/PCRE-Perl_Compatible_Regular_Expression_Library.pdf
They write that PCRE uses an NFA-based implementation. But this link is also not the youngest thing on the web...
Around page 36 there is a comparison between engines. It may also be relevant to the original question.

Related

Optimization techniques for backtracking regex implementations

I'm trying to implement a regular expression matcher based on the backtracking approach sketched in Exploring Ruby's Regular Expression Algorithm. The compiled regex is translated into an array of virtual machine commands; for the backtracking, the current command and input string indices as well as capturing group information are maintained on a stack.
In Regular Expression Matching: the Virtual Machine Approach Cox gives more detailed information about how to compile certain regex components into VM commands, though the discussed implementations are a bit different. Based on those articles my implementation works quite well for the standard grouping, character class and repetition components.
Now I would like to see what extensions and optimization options there are for this type of implementation. Cox gives in his article a lot of useful information on the DFA/NFA approach, but the information about extensions or optimization techniques for the backtracking approach is a bit sparse.
For example, about backreferences he states
Backreferences are trivial in backtracking implementations.
and gives an idea for the DFA approach. But it's not clear to me how this can be "trivially" done with the VM approach. When the backreference command is reached, you'd have to compile the previously matched string from the corresponding group into another list of VM commands and somehow incorporate those commands into the current VM, or maintain a second VM and switch execution temporarily to that one.
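For what it's worth, one way I can imagine reading "trivial" is that the backreference command does not have to compile anything at match time: it could simply compare the text saved for the group against the input at the current position, character by character. A minimal sketch follows; the instruction set and encoding are my own toy versions, only loosely in the spirit of the VMs Cox describes.

# Toy backtracking regex VM: programs are lists of tuples.
CHAR, SPLIT, SAVE, BACKREF, MATCH = range(5)

def run(prog, text, pc=0, sp=0, saved=None):
    saved = dict(saved or {})          # copy, so failed branches don't leak captures
    while True:
        op, *args = prog[pc]
        if op == CHAR:
            if sp < len(text) and text[sp] == args[0]:
                pc, sp = pc + 1, sp + 1
            else:
                return None            # this thread fails; the caller backtracks
        elif op == SPLIT:              # try args[0] first, fall back to args[1]
            result = run(prog, text, args[0], sp, saved)
            if result is not None:
                return result
            pc = args[1]
        elif op == SAVE:               # record a capture boundary (slot -> position)
            saved[args[0]] = sp
            pc += 1
        elif op == BACKREF:            # the "trivial" part: a plain text comparison
            start, end = saved[2 * args[0]], saved[2 * args[0] + 1]
            chunk = text[start:end]
            if text.startswith(chunk, sp):
                pc, sp = pc + 1, sp + len(chunk)
            else:
                return None
        elif op == MATCH:              # require the whole string, like fullmatch
            return saved if sp == len(text) else None

# Program for (a+)\1; slots 2 and 3 hold the boundaries of group 1.
prog = [(SAVE, 2), (CHAR, 'a'), (SPLIT, 1, 3), (SAVE, 3), (BACKREF, 1), (MATCH,)]
print(run(prog, 'aaaa'))   # {2: 0, 3: 2}: group 1 is text[0:2], i.e. 'aa'
print(run(prog, 'aaa'))    # None: an odd number of a's cannot be a group plus its copy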
He also mentions a possible optimization in repetitions by using look-aheads, but doesn't elaborate on how that would work. It seems to me this could be used to reduce the number of items on the backtracking stack.
tl;dr What general optimization techniques exist for VM-based backtracking regex implementations and how do they work? Note that I'm not looking for optimizations specific to a certain programming language, but general techniques for this type of regex implementation.
Edit: As mentioned in the first link, the Oniguruma library implements a regex matcher with exactly that stack-based backtracking approach. Perhaps someone can explain the optimizations done by that library which can be generalized to other implementations. Unfortunately, the library doesn't seem to provide any documentation on the source code and the code also lacks comments.
Edit 2: When reading about parsing expression grammars (PEGs), I stumbled upon a paper on a Lua PEG implementation which makes use of a similar VM-based approach. The paper mentions several optimization options to reduce the number of executed VM commands and an unnecessary growth of the backtracking stack.
I suggest you watch the full lecture, as it is very interesting, but here is an outline:
Complexity explosion in backtracking. This happens when the pattern has ambiguity in it ([a-x]*[a-x0-9]*z in the video, as an example), so the engine has to backtrack and test all alternatives until it becomes certain the pattern did (or didn't) match.
It can take up to O(Nᵖ), where p is a "measure of ambiguity" of the pattern.
To get O(pN), we need to avoid evaluating equivalent threads again and again.
...
Solution:
At each step, advance all threads by one character; "breadth-first" execution results in linear complexity (a sketch follows after this outline).
Tricks to save every bit of performance
Inside std::regex
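To make the "breadth-first" point above concrete, here is a minimal lockstep simulation sketch in Python; the instruction encoding is my own and is not taken from the lecture or from std::regex. Instead of backtracking one possibility at a time, it keeps the set of all live program counters and advances every thread by one input character per step, so the work per character is bounded by the program size.

CHAR, SPLIT, JMP, MATCH = 'CHAR', 'SPLIT', 'JMP', 'MATCH'

def add_thread(prog, pc, threads):
    # Follow SPLIT/JMP eagerly so `threads` only ever holds CHAR/MATCH states.
    if pc in threads:
        return
    op, *args = prog[pc]
    if op == JMP:
        add_thread(prog, args[0], threads)
    elif op == SPLIT:
        add_thread(prog, args[0], threads)
        add_thread(prog, args[1], threads)
    else:
        threads[pc] = True             # dict used as an insertion-ordered set

def full_match(prog, text):
    threads = {}
    add_thread(prog, 0, threads)
    for ch in text:
        next_threads = {}
        for pc in threads:
            op, *args = prog[pc]
            if op == CHAR and args[0] == ch:
                add_thread(prog, pc + 1, next_threads)
        threads = next_threads         # every live thread advanced by one character
    return any(prog[pc][0] == MATCH for pc in threads)

# Program for the ambiguous pattern a*ab (same flavour as [a-x]*[a-x0-9]*z):
# 0: SPLIT 1,3   1: CHAR a   2: JMP 0   3: CHAR a   4: CHAR b   5: MATCH
prog = [(SPLIT, 1, 3), (CHAR, 'a'), (JMP, 0), (CHAR, 'a'), (CHAR, 'b'), (MATCH,)]
print(full_match(prog, 'aaab'))   # True, with no exponential re-exploration
print(full_match(prog, 'aaa'))    # False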
Hope this helps!
P.S. Lecturer's repository

Optimization techniques used by std::regex_constants::optimize

I am working with std::regex, and whilst reading about the various constants defined in std::regex_constants, I came across std::regex_constants::optimize. Reading about it, it sounds like it is useful in my application (I only need one instance of the regex, initialized at the beginning, but it is used multiple times throughout the loading process).
According to the working paper n3126 (pg. 1077), std::regex_constants::optimize:
Specifies that the regular expression engine should pay more attention to the speed with which regular expressions are matched, and less to the speed with which regular expression objects are constructed. Otherwise it has no detectable effect on the program output.
I was curious as to what type of optimization would be performed, but there doesn't seem to be much literature about it (indeed, it seems to be implementation-defined), and one of the only things I found was at cppreference.com, which stated that std::regex_constants::optimize:
Instructs the regular expression engine to make matching faster, with the potential cost of making construction slower. For example, this might mean converting a non-deterministic FSA to a deterministic FSA.
However, I have no formal background in computer science, and whilst I'm aware of the basics of what an FSA is, and understand the basic difference between a deterministic FSA (each state only has one possible next state), and a non-deterministic FSA (with multiple potential next states); I do not understand how this improves matching time. Also, I would be interested to know if there are any other optimizations in various C++ Standard Library implementations.
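From what I've since gathered, the "converting a non-deterministic FSA to a deterministic FSA" remark can be made concrete with the textbook subset construction; the sketch below is only an illustration of the general idea in Python, not a claim about how any particular std::regex implementation behaves. The determinisation work is paid once, up front, and afterwards matching needs exactly one table lookup per input character, with no set of alternative states to track.

# Hand-written NFA for (a|b)*abb over the alphabet {a, b}; states are my own labels.
nfa = {
    0: {'a': {0, 1}, 'b': {0}},   # state 0 loops, and may guess the final "abb"
    1: {'b': {2}},
    2: {'b': {3}},
    3: {},                        # accepting
}
nfa_accepting = {3}

def subset_construction(nfa, start, accepting):
    # DFA states are frozensets of NFA states; built once, before any matching.
    start_set = frozenset([start])
    dfa, work = {}, [start_set]
    while work:
        state = work.pop()
        if state in dfa:
            continue
        dfa[state] = {}
        for ch in 'ab':
            nxt = frozenset(s for q in state for s in nfa[q].get(ch, ()))
            dfa[state][ch] = nxt
            if nxt not in dfa:
                work.append(nxt)
    return dfa, start_set, {s for s in dfa if s & accepting}

dfa, dfa_start, dfa_accepting = subset_construction(nfa, 0, nfa_accepting)

def dfa_match(text):
    state = dfa_start
    for ch in text:                   # a single lookup per character, no backtracking
        state = dfa[state][ch]
    return state in dfa_accepting

print(dfa_match('aabb'))              # True
print(dfa_match('abab'))              # False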
There's some useful information on the topic of regex engines and performance trade-offs (far more than can fit in a Stack Overflow answer) in Mastering Regular Expressions by Jeffrey Friedl.
It's worth noting that Boost.Regex, which was the source for N3126, documents optimize as "This currently has no effect for Boost.Regex."
P.S.
indeed, it seems to be implementation-defined
No, it's unspecified. Implementation-defined means an implementation is required to define the choice of behaviour. Implementations are not required to document how their regex engines are implemented or what (if any) difference the optimize flag makes.
P.S. 2
in various STL implementations
std::regex is not part of the STL, the C++ Standard Library is not the same thing as the STL.
See http://swtch.com/~rsc/regexp/regexp1.html for a nice explanation of how NFA-based regex implementations can avoid the exponential backtracking that occurs in backtracking matchers in certain circumstances.

Why it's not possible to use regex to parse HTML/XML: a formal explanation in layman's terms

There is no day on SO that passes without a question about parsing (X)HTML or XML with regular expressions being asked.
While it's relatively easy to come up with examples that demonstrate the non-viability of regexes for this task, or with a collection of expressions to represent the concept, I could still not find on SO a formal explanation, done in layman's terms, of why this is not possible.
The only formal explanations I could find so far on this site are probably extremely accurate, but also quite cryptic to the self-taught programmer:
the flaw here is that HTML is a Chomsky Type 2 grammar (context free
grammar) and RegEx is a Chomsky Type 3 grammar (regular expression)
or:
Regular expressions can only match regular languages but HTML is a
context-free language.
or:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
or:
The Pumping lemma for regular languages is the reason why you can't do
that.
[To be fair: the majority of the explanations above link to Wikipedia pages, but these are not much easier to understand than the answers themselves.]
So my question is: could somebody please provide a translation in layman's terms of the formal explanations given above of why it is not possible to use regex for parsing (X)HTML/XML?
EDIT: After reading the first answer I thought that I should clarify: I am looking for a "translation" that also briefly explains the concepts it tries to translate: at the end of an answer, the reader should have a rough idea - for example - of what "regular language" and "context-free grammar" mean...
Concentrate on this one:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
The definition of regular expressions is equivalent to the fact that a test of whether a string matches the pattern can be performed by a finite automaton (one different automaton for each pattern). A finite automaton has no memory - no stack, no heap, no infinite tape to scribble on. All it has is a finite number of internal states, each of which can read a unit of input from the string being tested, and use that to decide which state to move to next. As special cases, it has two termination states: "yes, that matched", and "no, that didn't match".
HTML, on the other hand, has structures that can nest arbitrarily deep. To determine whether a file is valid HTML or not, you need to check that all the closing tags match a previous opening tag. To understand it, you need to know which element is being closed. Without any means to "remember" what opening tags you've seen, no chance.
Note however that most "regex" libraries actually permit more than just the strict definition of regular expressions. If they can match back-references, then they've gone beyond a regular language. So the reason why you shouldn't use a regex library on HTML is a little more complex than the simple fact that HTML is not regular.
The fact that HTML doesn't represent a regular language is a red herring. Regular expressions and regular languages sound sort of similar, but are not - they do share the same origin, but there's a notable distance between the academic "regular languages" and the current matching power of engines. In fact, almost all modern regular expression engines support non-regular features - a simple example is (.*)\1, which uses backreferencing to match a repeated sequence of characters - for example 123123, or bonbon. Matching of recursive/balanced structures makes these even more fun.
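A quick illustration in Python, whose re module is one of these more-than-regular engines (the snippet is mine):

import re

# A backreference: some text, immediately followed by a copy of itself.
print(re.fullmatch(r'(.*)\1', 'bonbon').group(1))   # 'bon'
print(re.fullmatch(r'(.*)\1', 'bonbo'))             # None -- no finite automaton can do this check in general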
Wikipedia puts this nicely, in a quote by Larry Wall:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).
"Regular expression can only match regular languages", as you can see, is nothing more than a commonly stated fallacy.
So, why not then?
A good reason not to match HTML with regular expressions is that "just because you can doesn't mean you should". While it may be possible, there are simply better tools for the job. Consider:
Valid HTML is harder/more complex than you may think.
There are many types of "valid" HTML - what is valid in HTML, for example, isn't valid in XHTML.
Much of the free-form HTML found on the internet is not valid anyway. HTML libraries do a good job of dealing with these as well, and were tested for many of these common cases.
Very often it is impossible to match a part of the data without parsing it as a whole. For example, you might be looking for all titles, and end up matching inside a comment or a string literal. <h1>.*?</h1> may be a bold attempt at finding the main title, but it might find:
<!-- <h1>not the title!</h1> -->
Or even:
<script>
var s = "Certainly <h1>not the title!</h1>";
</script>
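A short demonstration of that last point in Python (the sample markup is mine):

import re

html = '<!-- <h1>not the title!</h1> --> <h1>The real title</h1>'
# The naive pattern happily reports the commented-out heading as a match too:
print(re.findall(r'<h1>.*?</h1>', html))
# ['<h1>not the title!</h1>', '<h1>The real title</h1>']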
Last point is the most important:
Using a dedicated HTML parser is better than any regex you can come up with. Very often, XPath allows a better expressive way of finding the data you need, and using an HTML parser is much easier than most people realize.
A good summary of the subject, and an important comment on when mixing Regex and HTML may be appropriate, can be found in Jeff Atwood's blog: Parsing Html The Cthulhu Way.
When is it better to use a regular expression to parse HTML?
In most cases, it is better to use XPath on the DOM structure a library can give you. Still, against popular opinion, there are a few cases when I would strongly recommend using a regex and not a parser library:
Given a few of these conditions:
When you need a one-time update of your HTML files, and you know the structure is consistent.
When you have a very small snippet of HTML.
When you aren't dealing with an HTML file, but a similar templating engine (it can be very hard to find a parser in that case).
When you want to change parts of the HTML, but not all of it - a parser, to my knowledge, cannot answer this request: it will parse the whole document, and save a whole document, changing parts you never wanted to change.
Because HTML can have unlimited nesting of <tags><inside><tags and="<things><that><look></like></tags>"></inside></each></other> and regex can't really cope with that because it can't track a history of what it's descended into and come out of.
A simple construct that illustrates the difficulty:
<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>
99.9% of generalized regex-based extraction routines will be unable to correctly give me everything inside the div with the ID foo, because they can't tell the closing tag for that div from the closing tag for the bar div. That is because they have no way of saying "okay, I've now descended into the second of two divs, so the next div close I see brings me back out one, and the one after that is the close tag for the first". Programmers typically respond by devising special-case regexes for the specific situation, which then break as soon as more tags are introduced inside foo and have to be unsnarled at tremendous cost in time and frustration. This is why people get mad about the whole thing.
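A concrete illustration of exactly that failure in Python (the snippet is mine):

import re

html = '<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>'
# The lazy pattern stops at the *first* </div>, which closes bar, not foo:
print(re.search(r'<div id="foo">(.*?)</div>', html).group(1))
# Hi there! <div id="bar">Bye!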
A regular language is a language that can be matched by a finite state machine.
(Understanding Finite State machines, Push-down machines, and Turing machines is basically the curriculum of a fourth year college CS Course.)
Consider the following machine, which recognizes the string "hi".
(Start) --Read h--> (A) --Read i--> (Succeed)
   \                 \
    \                 --read any other value--> (Fail)
     --read any other value--> (Fail)
This is a simple machine to recognize a regular language; Each expression in parenthesis is a state, and each arrow is a transition. Building a machine like this will allow you to test any input string against a regular language -- hence, a regular expression.
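Here is the same machine written out as a transition table in Python, just to show how mechanical it is (the encoding is mine):

# Any (state, character) pair not listed falls through to the absorbing Fail state.
transitions = {
    ('Start', 'h'): 'A',
    ('A', 'i'): 'Succeed',
}

def run_fsm(text):
    state = 'Start'
    for ch in text:
        state = transitions.get((state, ch), 'Fail')
    return state == 'Succeed'

print(run_fsm('hi'))   # True
print(run_fsm('ha'))   # False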
HTML requires you to know more than just what state you are in -- it requires a history of what you have seen before, to match tag nesting. You can accomplish this if you add a stack to the machine, but then it is no longer "regular". This is called a Push-down machine, and recognizes a grammar.
A regular expression is a machine with a finite (and typically rather small) number of discrete states.
To parse XML, C, or any other language with arbitrary nesting of language elements, you need to remember how deep you are. That is, you must be able to count braces/brackets/tags.
You cannot count with finite memory. There may be more brace levels than you have states! You might be able to parse a subset of your language that restricts the number of nesting levels, but it would be very tedious.
A grammar is a formal definition of where words can go. For example, adjectives precede nouns in English grammar, but follow nouns in Spanish grammar.
Context-free means that the grammar works universally in all contexts. Context-sensitive means there are additional rules in certain contexts.
In C#, for example, using means something different in using System; at the top of files, than using (var sw = new StringWriter (...)). A more relevant example is the following code within code:
void Start ()
{
    string myCode = @"
    void Start()
    {
        Console.WriteLine (""x"");
    }
    ";
}
There's another practical reason for not using regular expressions to parse XML and HTML that has nothing to do with the computer science theory at all: your regular expression will either be hideously complicated, or it will be wrong.
For example, it's all very well writing a regular expression to match
<price>10.65</price>
But if your code is to be correct, then:
It must allow whitespace after the element name in both start and end tag
If the document is in a namespace, then it should allow any namespace prefix to be used
It should probably allow and ignore any unknown attributes appearing in the start tag (depending on the semantics of the particular vocabulary)
It may need to allow whitespace before and after the decimal value (again, depending on the detailed rules of the particular XML vocabulary).
It should not match something that looks like an element, but is actually in a comment or CDATA section (this becomes especially important if there is a possibility of malicious data trying to fool your parser).
It may need to provide diagnostics if the input is invalid.
Of course some of this depends on the quality standards you are applying. We see a lot of problems on StackOverflow with people having to generate XML in a particular way (for example, with no whitespace in the tags) because it is being read by an application that requires it to be written in a particular way. If your code has any kind of longevity then it's important that it should be able to process incoming XML written in any way that the XML standard permits, and not just the one sample input document that you are testing your code on.
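To make the first two bullets concrete, a small Python sketch (the naive pattern and the sample documents are mine):

import re

naive = re.compile(r'<price>([^<]*)</price>')

print(naive.search('<price>10.65</price>').group(1))   # '10.65' -- the happy case
print(naive.search('<price >10.65</price >'))          # None: legal whitespace breaks it
print(naive.search('<p:price>10.65</p:price>'))        # None: a namespace prefix breaks it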
So others have gone and given brief definitions for most of these things, but I don't really think they cover WHY normal regexes are what they are.
There are some great resources on what a finite state machine is, but in short, a seminal paper in computer science proved that the basic grammar of regexes (the standard ones, used by grep, not the extended ones, like PCRE) can always be manipulated into a finite-state machine, meaning a 'machine' where you are always in a box, and have a limited number of ways to move to the next box. In short, you can always tell what the next 'thing' you need to do is just by looking at the current character. (And yes, even when it comes to things like 'match at least 4, but no more than 5 times', you can still create a machine like this.) (I should note that the machine I describe here is technically only a subtype of finite-state machines, but it can implement any other subtype, so...)
This is great because you can always very efficiently evaluate such a machine, even for large inputs. Studying these sorts of questions (how does my algorithm behave when the number of things I feed it gets big) is called studying the computational complexity of the technique. If you're familiar with how a lot of calculus deals with how functions behave as they approach infinity, well, that's pretty much it.
So what's so great about a standard regular expression? Well, any given regex can match a string of length N in no more than O(N) time (meaning that doubling the length of your input doubles the time it takes: it says nothing about the speed for a given input) (of course, some are faster: the regex * could match in O(1), meaning constant, time). The reason is simple: remember, because the system has only a few paths from each state, you never 'go back', and you only need to check each character once. That means even if I pass you a 100 gigabyte file, you'll still be able to crunch through it pretty quickly: which is great!
Now, it's pretty clear why you can't use such a machine to parse arbitrary XML: you can have infinite tags-in-tags, and to parse correctly you need an infinite number of states. But, if you allow recursive patterns, a PCRE is Turing complete: so it could totally parse HTML! Even if you don't, a PCRE can parse any context-free grammar, including XML. So the answer is "yeah, you can". Now, it might take exponential time (you can't use our neat finite-state machine, so you need to use a big fancy parser that can rewind, which means that a crafted expression will take centuries on a big file), but still. Possible.
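As a hedged aside on the recursion point: in Python this needs the third-party regex module, which supports the (?R) recursion syntax that the built-in re module lacks; the pattern below is mine.

import regex   # third-party: pip install regex

# Balanced parentheses -- a classic thing no finite automaton can recognise.
balanced = regex.compile(r'\((?:[^()]|(?R))*\)')

print(bool(balanced.fullmatch('(a(b(c))d)')))   # True
print(bool(balanced.fullmatch('(a(b(c)d)')))    # False: unbalanced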
But let's talk real quick about why that's an awful idea. First of all, while you'll see a ton of people saying "omg, regexes are so powerful", the reality is... they aren't. What they are is simple. The language is dead simple: you only need to know a few meta-characters and their meanings, and you can understand (eventually) anything written in it. However, the issue is that those meta-characters are all you have. See, they can do a lot, but they're meant to express fairly simple things concisely, not to try and describe a complicated process.
And XML sure is complicated. It's pretty easy to find examples in some of the other answers: you can't match stuff inside comment fields, etc. Representing all of that in a programming language takes work: and that's with the benefits of variables and functions! PCREs, for all their features, can't come close to that. Any hand-made implementation will be buggy: scanning blobs of meta-characters to check matching parentheses is hard, and it's not like you can comment your code. It'd be easier to define a meta-language, and compile that down to a regex: and at that point, you might as well just take the language you wrote your meta-compiler with and write an XML parser. It'd be easier for you, faster to run, and just better overall.
For more neat info on this, check out this site. It does a great job of explaining all this stuff in layman's terms.
Don't parse XML/HTML with regex, use a proper XML/HTML parser and a powerful xpath query.
theory :
According to compiler theory, XML/HTML can't be parsed using regexes based on a finite state machine. Due to the hierarchical construction of XML/HTML, you need to use a pushdown automaton and manipulate an LALR grammar using a tool like yacc.
realLife©®™ everyday tool in a shell :
You can use one of the following :
xmllint, often installed by default with libxml2; xpath1 (check my wrapper to have newline-delimited output)
xmlstarlet can edit, select, transform... Not installed by default, xpath1
xpath installed via perl's module XML::XPath, xpath1
xidel xpath3
saxon-lint, my own project, a wrapper over Michael Kay's Saxon-HE Java library, xpath3
or you can use high-level languages and proper libs. I think of:
python's lxml (from lxml import etree)
perl's XML::LibXML, XML::XPath, XML::Twig::XPath, HTML::TreeBuilder::XPath
ruby nokogiri, check this example
php DOMXpath, check this example
Check: Using regular expressions with HTML tags
In a purely theoretical sense, it is impossible for regular expressions to parse XML. They are defined in a way that allows them no memory of any previous state, thus preventing the correct matching of an arbitrary tag, and they cannot penetrate to an arbitrary depth of nesting, since the nesting would need to be built into the regular expression.
Modern regex parsers, however, are built for their utility to the developer, rather than their adherence to a precise definition. As such, we have things like back-references and recursion that make use of knowledge of previous states. Using these, it is remarkably simple to create a regex that can explore, validate, or parse XML.
Consider for example,
(?:
    <!\-\-[\S\s]*?\-\->
    |
    <([\w\-\.]+)[^>]*?
    (?:
        \/>
        |
        >
        (?:
            [^<]
            |
            (?R)
        )*
        <\/\1>
    )
)
This will find the next properly formed XML tag or comment, and it will only find it if its entire contents are properly formed. (This expression has been tested using Notepad++, which uses Boost C++'s regex library, which closely approximates PCRE.)
Here's how it works:
The first chunk matches a comment. It's necessary for this to come first so that it will deal with any commented-out code that otherwise might cause hang ups.
If that doesn't match, it will look for the beginning of a tag. Note that it uses parentheses to capture the name.
This tag will either end in a />, thus completing the tag, or it will end with a >, in which case it will continue by examining the tag's contents.
It will continue parsing until it reaches a <, at which point it will recurse back to the beginning of the expression, allowing it to deal with either a comment or a new tag.
It will continue through the loop until it arrives at either the end of the text or at a < that it cannot parse. Failing to match will, of course, cause it to start the process over. Otherwise, the < is presumably the beginning of the closing tag for this iteration. Using the back-reference inside a closing tag <\/\1>, it will match the opening tag for the current iteration (depth). There's only one capturing group, so this match is a simple matter. This makes it independent of the names of the tags used, although you could modify the capturing group to capture only specific tags, if you need to.
At this point it will either kick out of the current recursion, up to the next level or end with a match.
This example solves problems dealing with whitespace or identifying relevant content through the use of character groups that merely negate < or >, or, in the case of the comments, by using [\S\s], which will match anything, including carriage returns and new lines, even in single-line mode, continuing until it reaches a -->. Hence, it simply treats everything as valid until it reaches something meaningful.
For most purposes, a regex like this isn't particularly useful. It will validate that XML is properly formed, but that's all it will really do, and it doesn't account for properties (although this would be an easy addition). It's only this simple because it leaves out real world issues like this, as well as definitions of tag names. Fitting it for real use would make it much more of a beast. In general, a true XML parser would be far superior. This one is probably best suited for teaching how recursion works.
Long story short: use an XML parser for real work, and use this if you want to play around with regexes.

do regex comparisons consume lots of resources?

I dunno, but will your machine suffer a great slowdown if you use a very complex regex?
Like, for example, the famous email validation module proposed just recently, which can be found here: RFC822
Update: sorry, I had to ask this question in a hurry; anyway, I posted the link to the email regex I was talking about.
It highly depends on the individual regex: features like look-behind or look-ahead can get very expensive, while simple regular expressions are fine for most situations.
Tutorials on http://www.regular-expressions.info/ offer performance advice, so that can be a good start.
Regexes are usually implemented as one of two algorithms (NFA or DFA) that correspond to two different FSMs. Different languages and even different versions of the same language may use a different type of regex engine. Naturally, some regexes work faster in one and some work faster in the other. If it's really critical, you might want to find out what type of regex FSM is implemented.
I'm no expert here. I got all this from reading Mastering Regular Expressions by Jeffrey E. F. Friedl. You might want to look that up.
It also depends on how well you optimise your query, and on knowing the internal workings of the regex engine.
Using a negated character class, for example, saves the cost of having the engine backtrack over characters (i.e. /<[^>]+>/ instead of /<.+?>/)(*). Trivial in small matches, but it saves a lot of cycles when you have to match inside a big chunk of text.
And there are many other ways to save resources in regex operations, so performance can vary wildly.
example taken from http://www.regular-expressions.info/repeat.html
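A small timing sketch of that point in Python; the pattern pair mirrors the example above, the test string is mine, and exact numbers depend on the engine, but the negated-class version is typically the clear winner in backtracking engines.

import re
import timeit

text = '<' + 'a' * 10000 + '>'

lazy    = re.compile(r'<.+?>')     # expands one character at a time, testing for > at each step
negated = re.compile(r'<[^>]+>')   # runs straight to the closing > in one pass

print(timeit.timeit(lambda: lazy.search(text), number=1000))
print(timeit.timeit(lambda: negated.search(text), number=1000))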
You might be interested by articles like: Regular Expression Matching Can Be Simple And Fast or Understanding Regular Expressions.
It is, alas, easy to write inefficient REs, which can match quite quickly on success but can search for hours if no match is found, because the engine stupidly tries a long match at every position of a long string!
There are a few recipes for this, like anchoring whenever it is possible, avoiding greediness if possible, etc.
Note that the giant e-mail expression isn't recent, and not necessarily slow: a short, simple expression can be slower than a more convoluted one!
Note also that in some situations (like e-mail, precisely), it can be more efficient (and maintainable!) to use a mix of regexes and code to handle cases, like splitting at @, handling different cases (first part starts with " or not, second part is an IP address or a domain, etc.).
Regexes are not the ultimate tool able to do everything, but they are a very useful tool well worth mastering!
It depends on your regexp engine. As explained here (Regular Expression Matching Can Be Simple And Fast) there may be some important difference in the performance depending on the implementation.
You can't talk about regexes in general any more than you can talk about code in general.
Regular expressions are little programs on their own. Just as any given program may be fast or slow, any given regex may be fast or slow.
One thing to remember, however, is that the regular expression handler is very well optimized to do its job and run the regex quickly.
I once made a program that analyzed a lot of text (a big code base, >300k lines). First I used regex but when I switched to using regular string functions it got a lot faster, like taking 40% of the time of the regex version. So while of course it depends, my thing got a lot faster.
Once I wrote a greedy - accidentally, of course :-) - multi-line regex and had it search/replace on 10 * 200 GB of text files. It was damn slow... So it depends on what you write, and what you check.
Depends on the complexity of the expression and the language the expression is used with.
In JavaScript, you have to optimize everything. In C#, not so much.

library for converting regular expressions to NFAs?

Is there a good library for converting Regular Expressions into NFAs? I see lots of academic papers on the subject, which are helpful, but not much in the way of working code.
My question is due partially to curiosity, and partially to an actual need to speed up regular expression matching on a production system I'm working on. Although it might be fun to explore this subject for learning's sake, I'm not sure it's a "practical" solution to speeding up our pattern matching. We're a Java shop, but would happily take pointers to good code in any language.
Edit:
Interesting, I did not know that Java's regexes were already NFAs. The title of this paper led me to believe otherwise. Incidentally, we are currently doing our regexp matching in Postgres; if the simple solution is to move the matching into the Java code, that would be great.
Addressing your need to speed up your regexes:
Java's implementation of its regex engine is NFA based. As such, to tune your regexes, I would say that you would benefit from a deeper understanding of how the engine is implemented.
And as such I direct you to Mastering Regular Expressions. The book gives substantial treatment to the NFA engine and how it performs matches, including how to tune your regex specifically for the NFA engine.
Additionally, look into Atomic Grouping for tuning your regex.
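A quick hedged illustration of why atomic grouping helps, shown in Python for brevity; Java's java.util.regex accepts the same (?>...) syntax, and in Python it needs 3.11+ (or the third-party regex module). The pattern is the classic nested-quantifier pathological case, not anything specific to the book.

import re   # atomic groups require Python 3.11+

text = 'a' * 30 + '!'

# slow = re.compile(r'(a+)+$')   # classic catastrophic backtracking on this input
fast = re.compile(r'(?>a+)+$')   # once a+ has matched, the engine may not re-split it

print(fast.search(text))         # None, and it returns immediately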
Disclaimer: I'm not an expert on java+regexes. But, if I understand correctly...
If Java's regular expression matcher is similar to most others, it does use NFAs - but not the way you might expect. Instead of the forward-only implementation you may have heard about, it uses a backtracking solution which simplifies subexpression matching, and is probably required for backreference support. However, it performs alternation poorly.
You want to see: http://swtch.com/~rsc/regexp/regexp1.html (concerning edge cases which perform poorly on this altered architecture).
I've also written a question which I suppose comes down to the same thing:
Regex implementation that can handle machine-generated regex's: *non-backtracking*, O(n)?
But basically, it looks like for some very odd reason all common major-vendor regex implementations have terrible performance when used on certain regexes, even though this is unnecessary.
Disclaimer: I'm a googler, not an expert on regexes.
There are a bunch of faster-than-JDK regex libraries, one of which is dk.brics.automaton. According to the benchmark linked in the article, it is approximately 20x faster than the JDK implementation.
This library was written by Anders Møller and has also been mavenized.