How can I safely validate an untrusted regex in Perl?

This answer explains that to validate an arbitrary regular expression, one simply uses eval:
while (<>) {
    eval "qr/$_/;";
    print $@ ? "Not a valid regex: $@\n" : "That regex looks valid\n";
}
However, this strikes me as very unsafe, for what I hope are obvious reasons. Someone could input, say:
foo/; system('rm -rf /'); qr/
or whatever devious scheme they can devise.
The natural way to prevent such things is to escape special characters, but if I escape too many characters, I severely limit the usefulness of the regex in the first place. A strong argument can be made, I believe, that at least []{}()/-,.*?^$! and white space characters ought to be permitted (and probably others), un-escaped, in a user regex interface, for the regexes to have minimal usefulness.
Is it possible to secure myself from regex injection, without limiting the usefulness of the regex language?

The solution is simply to change
eval("qr/$_/")
to
eval("qr/\$_/")
This can be written more clearly as follows:
eval('qr/$_/')
But that's still not optimal. The following would be far better as it doesn't involve generating and compiling Perl code at run-time:
eval { qr/$_/ }
Note that neither solution protects you from denial-of-service attacks. It's quite easy to write a pattern that will take longer than the life of the universe to complete. To handle that situation, you could execute the regex match in a child process for which a CPU ulimit has been set.
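Here is a minimal sketch of that child-process approach, in Python rather than Perl (Unix-only, since it relies on resource limits; the one-second budget and the helper names are illustrative, not a hardened implementation):
import multiprocessing
import re
import resource

def match_with_limit(pattern, text, queue, cpu_seconds=1):
    # Cap this child's CPU time; the kernel kills the process on overrun.
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    queue.put(bool(re.search(pattern, text)))

def safe_match(pattern, text, cpu_seconds=1):
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(
        target=match_with_limit, args=(pattern, text, queue, cpu_seconds))
    child.start()
    child.join()
    if queue.empty():  # the child was killed before it could answer
        return None    # treat as "timed out"
    return queue.get()

if __name__ == '__main__':
    print(safe_match(r'(a+)+$', 'a' * 10))        # True
    print(safe_match(r'(a+)+$', 'a' * 30 + 'b'))  # None: killed by the limit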

There is some discussion about this over at The Monastery.
TLDR: use re::engine::RE2 (-strict => 1);
Make sure to add (-strict => 1) to your use statement, or re::engine::RE2 will fall back to Perl's built-in engine.
The following is a quote from junyer, owner of the project on github.
RE2 was designed and implemented with an explicit goal of being able to handle regular expressions from untrusted users without risk. One of its primary guarantees is that the match time is linear in the length of the input string. It was also written with production concerns in mind: the parser, the compiler and the execution engines limit their memory usage by working within a configurable budget – failing gracefully when exhausted – and they avoid stack overflow by eschewing recursion.
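The same escape hatch exists outside Perl. For instance, a sketch using Google's RE2 binding for Python (this assumes the google-re2 package, imported as re2; the exact exception class raised for unsupported patterns varies by binding):
import re2  # pip install google-re2

# Compiles fine, and matching runs in time linear in the input length:
pattern = re2.compile(r'(a|aa)+$')
print(bool(pattern.match('a' * 100000)))

# Back-references cannot be expressed as a finite automaton,
# so RE2 rejects them at compile time:
try:
    re2.compile(r'(a)\1')
except Exception as e:  # exact exception class depends on the binding
    print('rejected:', e)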

How to evaluate the performance of a certain regex with a certain engine?

I recently became aware of Regular expression Denial of Service attacks, and decided to root out so-called 'evil' regex patterns wherever I could find them in my codebase - or at least those that are used on user input. The examples given at the OWASP link above and Wikipedia are helpful, but they don't do a great job of explaining the problem in simple terms.
A description of evil regexes, from Wikipedia:
the regular expression applies repetition ("+", "*") to a complex subexpression;
for the repeated subexpression, there exists a match which is also a suffix of another valid match.
With examples, again from Wikipedia:
(a+)+
([a-zA-Z]+)*
(a|aa)+
(a|a?)+
(.*a){x} for x > 10
Is this a problem that just doesn't have a simpler explanation? I'm looking for something that would make it easier to avoid this problem while writing regexes, or to find them within an existing codebase.
Why Are Evil Regexes A Problem?
Because computers do exactly what you tell them to do, even if it's not what you meant or is totally unreasonable. If you ask a regex engine to prove that, for some given input, there either is or is not a match for a given pattern, then the engine will attempt to do that no matter how many different combinations must be tested.
Here is a simple pattern inspired by the first example in the OP's post:
^((ab)*)+$
Given the input:
abababababababababababab
The regex engine tries something like (abababababababababababab) and a match is found on the first try.
But then we throw the monkey wrench in by adding one extra character to the end:
ababababababababababababa
The engine will first try (abababababababababababab) but that fails because of that extra a. This causes catastrophic backtracking, because our pattern (ab)*, in a show of good faith, will release one of its captures (it will "backtrack") and let the outer pattern try again. For our regex engine, that looks something like this:
(abababababababababababab) - Nope
(ababababababababababab)(ab) - Nope
(abababababababababab)(abab) - Nope
(abababababababababab)(ab)(ab) - Nope
(ababababababababab)(ababab) - Nope
(ababababababababab)(abab)(ab) - Nope
(ababababababababab)(ab)(abab) - Nope
(ababababababababab)(ab)(ab)(ab) - Nope
(abababababababab)(abababab) - Nope
(abababababababab)(ababab)(ab) - Nope
(abababababababab)(abab)(abab) - Nope
(abababababababab)(abab)(ab)(ab) - Nope
(abababababababab)(ab)(ababab) - Nope
(abababababababab)(ab)(abab)(ab) - Nope
(abababababababab)(ab)(ab)(abab) - Nope
(abababababababab)(ab)(ab)(ab)(ab) - Nope
(ababababababab)(ababababab) - Nope
(ababababababab)(abababab)(ab) - Nope
(ababababababab)(ababab)(abab) - Nope
(ababababababab)(ababab)(ab)(ab) - Nope
(ababababababab)(abab)(abab)(ab) - Nope
(ababababababab)(abab)(ab)(abab) - Nope
(ababababababab)(abab)(ab)(ab)(ab) - Nope
(ababababababab)(ab)(abababab) - Nope
(ababababababab)(ab)(ababab)(ab) - Nope
(ababababababab)(ab)(abab)(abab) - Nope
(ababababababab)(ab)(abab)(ab)(ab) - Nope
(ababababababab)(ab)(ab)(ababab) - Nope
(ababababababab)(ab)(ab)(abab)(ab) - Nope
(ababababababab)(ab)(ab)(ab)(abab) - Nope
(ababababababab)(ab)(ab)(ab)(ab)(ab) - Nope
                              ...
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(abababab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ababab)(ab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(abab)(abab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(abab)(ab)(ab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ababab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(abab)(ab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(abab) - Nope
(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab)(ab) - Nope
The number of possible combinations scales exponentially with the length of the input and, before you know it, the regex engine is eating up all your system resources trying to solve this thing until, having exhausted every possible combination of terms, it finally gives up and reports "There is no match." Meanwhile your server has turned into a burning pile of molten metal.
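You can watch that exponential growth directly on any backtracking engine. Here is a small timing sketch using Python's re module and the pattern above; expect the time to roughly double with each extra "ab" (exact numbers will vary by machine):
import re
import time

pattern = re.compile(r'^((ab)*)+$')

for n in range(16, 23):
    subject = 'ab' * n + 'a'  # the trailing 'a' forces a full failure
    start = time.perf_counter()
    pattern.match(subject)
    print(n, round(time.perf_counter() - start, 3), 'seconds')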
How to Spot Evil Regexes
It's actually very tricky. Catastrophic backtracking in modern regex engines is similar in nature to the halting problem, which Alan Turing proved was impossible to solve. I have written problematic regexes myself, even though I know what they are and generally how to avoid them. Wrapping everything you can in an atomic group can help to prevent the backtracking issue. It basically tells the regex engine not to revisit a given expression - "lock whatever you matched on the first try". Note, however, that atomic expressions don't prevent backtracking within the expression, so ^(?>((ab)*)+)$ is still dangerous, but ^(?>(ab)*)+$ is safe (it'll match (abababababababababababab) and then refuse to give up any of its matched characters, thus preventing catastrophic backtracking).
Unfortunately, once it's written, it's actually very hard to immediately or quickly find a problem regex. In the end, recognizing a bad regex is like recognizing any other bad code - it takes a lot of time and experience and/or a single catastrophic event.
Interestingly, since this answer was first written, a team at the University of Texas at Austin published a paper describing the development of a tool capable of performing static analysis of regular expressions with the express purpose of finding these "evil" patterns. The tool was developed to analyse Java programs, but I suspect that in the coming years we'll see more tools developed around analysing and detecting problematic patterns in JavaScript and other languages, especially as the rate of ReDoS attacks continues to climb.
Static Detection of DoS Vulnerabilities in Programs that use Regular Expressions, by Valentin Wüstholz, Oswaldo Olivo, Marijn J. H. Heule, and Isil Dillig (The University of Texas at Austin).
Detecting evil regexes
Try Nicolaas Weideman's RegexStaticAnalysis project.
Try my ensemble-style vuln-regex-detector which has a CLI for Weideman's tool and others.
Rules of thumb
Evil regexes are always due to ambiguity in the corresponding NFA, which you can visualize with tools like regexper.
Here are some forms of ambiguity. Don't use these in your regexes.
Nesting quantifiers like (a+)+ (aka "star height > 1"). This can cause exponential blow-up. See substack's safe-regex tool.
Quantified Overlapping Disjunctions like (a|a)+. This can cause exponential blow-up.
Quantified Overlapping Adjacencies like \d+\d+. This can cause polynomial blow-up.
Additional resources
I wrote this paper on super-linear regexes. It includes loads of references to other regex-related research.
What you call an "evil" regex is a regex that exhibits catastrophic backtracking. The linked page (which I wrote) explains the concept in detail. Basically, catastrophic backtracking happens when a regex fails to match and different permutations of the same regex can find a partial match. The regex engine then tries all those permutations. If you want to go over your code and inspect your regexes these are the 3 key issues to look at:
Alternatives must be mutually exclusive. If multiple alternatives can match the same text then the engine will try both if the remainder of the regex fails. If the alternatives are in a group that is repeated, you have catastrophic backtracking. A classic example is (.|\s)* to match any amount of any text when the regex flavor does not have a "dot matches line breaks" mode. If this is part of a longer regex then a subject string with a sufficiently long run of spaces (matched by both . and \s) will break the regex. The fix is to use (.|\n)* to make the alternatives mutually exclusive or even better to be more specific about which characters are really allowed, such as [\r\n\t\x20-\x7E] for ASCII printables, tabs, and line breaks.
Quantified tokens that are in sequence must either be mutually exclusive with each other or be mutually exclusive with what comes between them. Otherwise both can match the same text, and all combinations of the two quantifiers will be tried when the remainder of the regex fails to match. A classic example is a.*?b.*?c to match 3 things with "anything" between them. When c can't be matched, the first .*? will expand character by character until the end of the line or file. For each expansion, the second .*? will expand character by character to match the remainder of the line or file. The fix is to realize that you can't have "anything" between them. The first run needs to stop at b and the second run needs to stop at c. With single characters, a[^b]*+b[^c]*+c is an easy solution. Since we now stop at the delimiter, we can use possessive quantifiers to further increase performance.
A group that contains a token with a quantifier must not have a quantifier of its own unless the quantified token inside the group can only be matched with something else that is mutually exclusive with it. That ensures that there is no way that fewer iterations of the outer quantifier with more iterations of the inner quantifier can match the same text as more iterations of the outer quantifier with fewer iterations of the inner quantifier. This is the problem illustrated in JDB's answer.
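These overlaps are easy to feel in practice. Here is a timing sketch in Python comparing the lazy-dot pattern from the second point against its negated-class rewrite (Python's re only gained possessive quantifiers in 3.11, but the negated classes alone already remove the overlap):
import re
import time

subject = 'a' + 'b' * 3000  # lots of overlap, and no final 'c' to match

for pat in (r'a.*?b.*?c', r'a[^b]*b[^c]*c'):
    start = time.perf_counter()
    re.search(pat, subject)
    print(pat, round(time.perf_counter() - start, 4), 'seconds')
The first pattern fails in quadratic time because both .*? runs expand over the same b's; the second fails after a single linear pass.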
While I was writing my answer I decided that this merited a full article on my website. This is now online too.
I would sum it up as "A repetition of a repetition". The first example you listed is a good one, as it states "the letter a, one or more times in a row. This can again happen one or more times in a row".
What to look for in this case is combination of the quantifiers, such as * and +.
A somewhat more subtle thing to look out for is the third and fourth one. Those examples contain an OR operation, in which both sides can be true. This combined with a quantifier of the expression can result in a LOT of potential matches depending on the input string.
To sum it up, TLDR-style:
Be careful how quantifiers are used in combination with other operators.
I have surprisingly come across ReDOS quite a few times performing source code reviews. One thing I would recommend is to use a timeout with whatever Regular Expression engine that you are using.
For example, in C# I can create the regular expression with a TimeSpan argument.
string pattern = @"^<([a-z]+)([^<]+)*(?:>(.*)<\/\1>|\s+\/>)$";
Regex regexTags = new Regex(pattern, RegexOptions.None, TimeSpan.FromSeconds(1.0));
try
{
    string noTags = regexTags.Replace(description, "");
    System.Console.WriteLine(noTags);
}
catch (RegexMatchTimeoutException ex)
{
    System.Console.WriteLine("RegEx match timeout");
}
This regex is vulnerable to denial of service and, without the timeout, will spin and eat resources. With the timeout, it will throw a RegexMatchTimeoutException after the given timeout instead of consuming resources until a denial-of-service condition is reached.
You will want to experiment with the timeout value to make sure it works for your usage.
I would say this is related to the regex engine in use. You may not always be able to avoid these types of regexes, but if your regex engine is built right, then it is less of a problem. See this blog series for a great deal of information on the topic of regex engines.
Note the caveat at the bottom of the article, namely that backtracking is an NP-complete problem. There is currently no way to process such patterns efficiently, and you might want to disallow them in your input.
I don't think you can recognize such regexes, at least not all of them or not without restrictively limiting their expressiveness. If you really care about ReDoS, I'd try to sandbox them and kill their processing with a timeout. It's also possible that there are regex implementations that let you limit the maximum amount of backtracking.
There are some ways I can think of that you could implement some simplification rules by running them on small test inputs or analyzing the regex's structure.
(a+)+ can be reduced using some sort of rule for replacing redundant operators to just (a+)
([a-zA-Z]+)* could also be simplified with our new redundancy combining rule to ([a-zA-Z]*)
The computer could run tests by running the small subexpressions of the regex against randomly-generated sequences of the relevant characters, and seeing which groups they all end up in. For the first one, the computer is like, hey, the regex wants a's, so let's try it with 6aaaxaaq. It then sees that all the a's end up in one group, and concludes that no matter how many a's it puts in, it won't matter, since + gets them all into the group. For the second one, it's like, hey, the regex wants a bunch of letters, so let's try it with -fg0uj=, and then it sees that again each bunch ends up in one group, so it gets rid of the + at the end.
Now we need a new rule to handle the next ones: The eliminate-irrelevant-options rule.
With (a|aa)+, the computer takes a look at it and is like, we like that big second one, but we can use that first one to fill in more gaps; let's get as many aa's as we can, and see if we can get anything else after we're done. It could run it against another test string, like 'eaaa#a~aa.', to determine that.
You can protect yourself from (a|a?)+ by having the computer realize that the strings matched by a? are not the droids we are looking for: since it can always match anywhere, we decide that we don't like things like (a?)+, and throw them out.
We protect from (.*a){x} by getting it to realize that the characters matched by a would have already been grabbed by .*. We then throw out that part and use another rule to replace the redundant quantifiers in (.*){x}.
While implementing a system like this would be very complicated, this is a complicated problem, and a complicated solution may be necessary. You should also use techniques other people have brought up, like only allowing the regex some limited amount of execution resources before killing it if it doesn't finish.

Worst input for given regular expression

I want to automate testing of regular expressions in my code base.
I'd like to protect against (a+)+ evil regexps and their kin.
For that I'm looking for an approach or existing library that generates "worst case" inputs for a given regular expression and engine (both NFA and DFA-based engines are in scope).
Granted, regular expressions are a powerful language and it's clearly [computationally] hard to find the worst input for an arbitrary regular expression, especially if back references are used; perhaps it might even be undecidable.
For my use-case, I'm fine with finding inputs that are terrible (as opposed to worst possible), yet quite short.
The worst input for a regular expression will vary from engine to engine. The same regex and string may take no time at all on one engine, but never finish on another.
Differences between engines
Engine Type
For certain engines, the "evilest" regex is still benign, running in linear time (or O(n*m) time when both the length of the regex and the length of the string may vary.) Of course, the reason for this is the implementation. These engines don't backtrack; instead they use a finite state machine (FSM).
Note that some backtracking implementations use FSM, but only as an intermediate step. Don't let this confuse you; they're not FSM.
Most of the old regex engines (like sed) use FSM matching. There are a few new engines that use this implementation, such as Go. PCRE even has DFA functions (search for "DFA" here) that use this type of matching.
Another answer of mine also addresses the potential speed difference between the two implementations.
It would be wise to consider using a FSM implementation if you are really worried about malicious input affecting the speed of your regex. Unfortunately, FSM is not as powerful as the other implementation; it lacks support for some features, such as back references.
Optimizations
Evil is actually a bit subjective. Something evil to one regex engine may not be evil to a different engine. An evil plot can be thwarted if the engine is optimized. Optimizations are particularly important to backtracking engines, given their potential exponential run time.
Short-circuiting
Under certain conditions, the engine may be able to quickly determine a match is impossible. While running the regex a*b?a*x against the string aaaaaaaaaaaaaaaaaaaaaaaaaa, Regex101 says:
Your match failed outright. What this means is the engine, due to its internal optimizations, understood that your pattern would never match at any position, and thus did not even attempt to.
Keep in mind that Wikipedia says the regex is evil, especially when paired with that string.
Of course, the engine is smart to not need to backtrack to determine the match wouldn't work. It saw something pretty obvious: the regex needs an x in order to match, but no x was present in the string.
Modifiers
I mention this because you might not expect modifiers to be a factor in regex performance. But they are.
Even PCRE, one of the more optimized implementations, may take considerably more steps with both the u and i modifiers enabled. See my question here for more information about this. In the end, I figured out that only certain characters trigger this behavior.
Analyzing Strings
String length
In general, a long string will be slower than a short string. In fact, if you find a string of length x that causes catastrophic backtracking, you can make it backtrack a bit more by increasing the length of the string.
Greedy vs. Lazy
Compare the speeds of these regexes:
.*b on aaaaaaaa...aaaaab
.*?b on aaaaaaaa...aaaaab
.*b on abaaaaaaa...aaaaa
.*?b on abaaaaaaa...aaaaa
Essentially, greedy matching is best when you think you will need to match a lot. Lazy matching is best when you need to match only a little.
Note that if you change the regex to a*b or a*?b, then the engine may optimize things considerably.
Brute force testing
There are several frameworks that are specifically designed to try to find vulnerabilities in your regexes. It may be worthwhile to try one out.
There's really one thing that I will suggest if you want to try making your own algorithm. It's not practical to try all characters in the dictionary, especially if you want to test long strings.
Instead, look at your regex to determine what characters you should test. If you have (a+)+ as your regex, there are really only two things that go into the match: a and not a. You could really just imagine that there are only two characters: a and b (aka not a) when you generate your strings to brute force with.
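A sketch of that idea in Python: shrink the alphabet to the characters the regex cares about, generate suspicious strings (long runs with a poisoned tail), and flag any input that blows past a time budget. The string shapes tried and the budget here are illustrative, not exhaustive:
import re
import time

def probe(pattern, alphabet='ab', max_len=30, budget=0.25):
    # Return the first generated string that exceeds the time budget.
    compiled = re.compile(pattern)
    for n in range(1, max_len + 1):
        for tail in alphabet:
            s = alphabet[0] * n + tail
            start = time.perf_counter()
            compiled.search(s)
            if time.perf_counter() - start > budget:
                return s
    return None

print(probe(r'(a+)+$'))  # finds a slow input of the form 'aaa...ab'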
Setting timeouts
It would be fantastic to be able to ensure your regex finishes before the heat death of the universe, right? Some regex engines do have a way to set a time out.
.NET:
AppDomain domain = AppDomain.CurrentDomain;
// Set a timeout interval of 2 seconds.
domain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromSeconds(2));
Java:
Runnable runnable = new Runnable() {
    @Override
    public void run()
    {
        long startTime = System.currentTimeMillis();
        Matcher interruptableMatcher = pattern.matcher(new InterruptibleCharSequence(input));
        interruptableMatcher.find(); // runs for a long time!
        System.out.println("Regex took:" + (System.currentTimeMillis() - startTime) + "ms");
    }
};
Thread thread = new Thread(runnable);
thread.start();
Thread.sleep(500);
thread.interrupt();
PHP:
bool set_time_limit ( int $seconds )
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
When called, set_time_limit() restarts the timeout counter from zero. In other words, if the timeout is the default 30 seconds, and 25 seconds into script execution a call such as set_time_limit(20) is made, the script will run for a total of 45 seconds before timing out.
Perl:
You might as well visit the link, since it's right on Stack Overflow.

Using regular expression for validating data is correct or not?

I have been finding some articles and posts which suggest not to use regular expressions to validate user data. I am not sure about all cases, but I usually find this advice for email address verification.
So I want to be clear: is using regular expressions for validating user input good or not? And if it is good, then what is bad about using it for validating email addresses?
Edit:
So can we say that for basic, primary validation of data types we can use regex, and it is good, while for full validation we need to combine it with another parser?
And for the second part, for email validation: in general usage we can use it, but per the standard it is not appropriate. Is that right?
Now I am confused about which answer to accept.
It’s good because you can use regular expressions to express and test complex patterns in an easy way.
It’s bad because regular expressions can be complicated and there is much you can do wrong.
Edit    Well, ok. Here’s some real advice: First make sure that the expected valid values can be expressed using a regular expression at all. That is, check that the language of valid values is a regular language. Otherwise you simply cannot use regular expressions (or at least not regular expressions only)!
Now that we know what can be validated using regular expressions, we should discuss what is viable to be validated using regular expressions. If we take an e-mail address as an example (like many others did), we should know what a valid e-mail address may look like (see RFC 5322):
addr-spec = local-part "@" domain
local-part = dot-atom / quoted-string / obs-local-part
domain = dot-atom / domain-literal / obs-domain
domain-literal = [CFWS] "[" *([FWS] dtext) [FWS] "]" [CFWS]
dtext = %d33-90 / ; Printable US-ASCII
%d94-126 / ; characters not including
obs-dtext ; "[", "]", or "\"
Here we see that the local-part may consist of a quoted-string that may contain any printable US-ASCII character (excluding \ and ", but including @). So it is not sufficient to test whether the e-mail address contains just one @ if we want to allow addresses according to RFC 5322.
On the other hand, if we want to allow any valid e-mail address according to RFC 5322, we would also allow addresses that probably do not exist or are just senseless in most cases (e.g. ""@localhost).
Your question seems to have two parts: (1) is using regular expressions for data validation bad, and (2) is using them for validating email addresses bad?
Re (1), this really depends upon the situation. In many situations a regular expression will be more than adequate to validate user input; for example, validating that a username has only alphanumeric characters. Where a set of regular expressions will probably be inadequate is when the input might be passed to something like a database query or an eval() statement. In these instances there may be language constructs like recursion that cannot be handled with regular expressions, and, more generally, you will want something that knows a lot about the target language to do the validation (and sanitization).
In most cases you'll want to escape the input so that it will be an innocuous string in the target language.
If you are validating the correctness of code, you will want a full-blown parser for this. A parser may make use of regular expressions, but typically parsers use other things to do the heavy lifting.
Regular expressions can be bad for three reasons:
They can get really complicated, and eventually unmaintainable. It's very easy to make mistakes.
There are certain types of text that cannot be parsed with regular expressions at all (e.g. HTML). Basically, anything with nested patterns cannot be parsed with regular expressions. You wouldn't be able to parse a programming language with regex, for example.
Depending on what kind of text you are working with, it may be easier and clearer if you just write your own code to parse it.
But if none of these is an issue for whatever you are working with, then there is nothing wrong with using regular expressions. I would say validating email addresses is a good use of regex.
Regular expressions are a tool like any other, albeit a very powerful one.
They are so powerful that people using them tend to suffer from the problem of everything looking like a nail (when you have a hammer). This leads to them being used in situations where another method would be more verbose but more efficient and more maintainable.
In the specific case of email addresses, the main problem here is that there are a very large number of regular expressions out there which claim to validate email address syntax, but are loaded up with problems that cause false negatives.
The main problems with them include:
Disallowing plus characters in the first half of the address (despite them being relatively common)
Limiting the TLD to three characters (thus blocking out the .museum TLD)
Limiting the TLD to two character country code TLDs or a list of specific TLDs (thus forcing it to be updated whenever a new TLD comes into play — guess what never happens?)
Email addresses are so complex that a regular expression shouldn't really try to check for anything more than the following (a sketch implementing exactly this follows the list):
Something that doesn't include an @
An @
Something that doesn't include an @
A .
Something that doesn't include an @
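That recipe translates directly into a pattern. A sketch in Python that is nothing more than the literal translation of the list above:
import re

# something without @, an @, something without @, a dot, something without @
EMAIL = re.compile(r'^[^@]+@[^@]+\.[^@]+$')

for addr in ('user+tag@example.museum', 'no-at-sign', 'a@b@c.com'):
    print(addr, '->', bool(EMAIL.match(addr)))
It happily accepts plus signs and any TLD, and rejects only what is structurally hopeless.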
For e-mail addresses it is good to use regular expressions. That will work in most cases.
In general: you should validate with regular expressions whatever can be expressed as a regular language
If the pattern of the data you are validating can be expressed completely and correctly using regular expressions, you can use them safely with no worries. However not all textual patterns can be expressed using regular expressions (e.g. context free grammars). In such cases you might need to write a parser or a custom method for validating the data.
The concerns are probably about the fact that the regular expressions in use often do not cover all the possible (valid) inputs and/or restrict the user too much in what he can input.
I see no other way to validate if some user input matches a certain schema (I mean, that is what regular expressions are for), so they are essential (imo) for user input validation. But you definitely have to put some time into designing an expression, to make sure it really works, also in extreme cases.
Take credit card numbers. You have to consider the ways a user might enter them:
1234-5678
// or
1234 5678
// or
1234 - 5678
And now you have two possibilities:
You restrict the input to the first case which will result in an easier expression but will restrict (and maybe annoy) the user the most.
You create an expression that accepts any of these possibilities, making the expression more complicated (hence harder to maintain) but more user-friendly.
It is a trade-off.
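As a sketch of the second option in Python, one expression that accepts all three styles shown above (note it also accepts the digits with no separator at all, which may or may not be what you want):
import re

CARD_CHUNKS = re.compile(r'^\d{4}\s*-?\s*\d{4}$')

for s in ('1234-5678', '1234 5678', '1234 - 5678', '12345678'):
    print(repr(s), '->', bool(CARD_CHUNKS.match(s)))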
Regexes aren't bad for validating most data, if it's a Regular Language.
But, as has been noted, sometimes they can become difficult to maintain, and the programmers introduce errors.
The simplest way to mitigate the situation is with tests/TDD. These tests should call a method that uses the regular expression to validate email addresses (I currently use the regex /^[A-Z0-9._%+-]+@(?:[A-Z0-9-]+\.)+[A-Z]{2,4}$/i, which is working well enough). This way, when you get a false positive or false negative, you can add another test for that case, adjust your regular expression, and ensure you didn't break some other condition.
If TDD seems a bit much, a tool like Expresso lets you save regexes with test data, and that can aid in keeping track of values that should pass/fail and aid in creating and understanding your regex.
WARNING:
Take some care in constructing regular expressions. There is potential for introducing ReDoS vulnerabilities.
See: http://msdn.microsoft.com/en-us/magazine/ff646973.aspx
In short, a poorly constructed regex, given the right input, can take hours to execute, effectively killing your server's performance.

In "aa67bc54c9", is there any way to print "aa" 67 times, "bc" 54 times and so on, using regular expressions?

I was asked this question in an interview for an internship, and the first solution I suggested was to try and use a regular expression (I usually am a little stumped in interviews). Something like this
(?P<str>[a-zA-Z]+)(?P<n>[0-9]+)
I thought it would match the strings and store them in the variable "str" and the numbers in the variable "n". How, I was not sure.
So it matches strings of type "a1b2c3", but a problem here is that it also matches strings of type "a1b". Could anyone suggest a solution to deal with this problem?
Also, is there any other regular expression that could solve this problem?
Do you know why "regular expressions" are called "regular"? :-)
That would be too long to explain, so I'll just outline the way. To match a pattern (i.e. decide whether a given string is "valid" or "invalid"), a theoretical computer scientist would use a finite state automaton. That's an abstract machine that has a finite number of states; each tick it reads a char from the input and jumps to another state. The pattern of where to jump from a particular state when a particular character is read is fixed. Some states are marked as "OK", some as "FAIL", so that by examining the state of the machine you can check whether your text is "valid" (i.e. a valid e-mail).
For example, this machine accepts only "nice" as its "valid" word (there is a diagram of it on Wikipedia).
A set of "valid" words that such a machine can distinguish from invalid ones is called a "regular language". Not every set of words is a regular language: for example, finite state automata are incapable of checking whether the parentheses in a string are balanced.
But constructing state machines was a complex task, compared to the complexity of defining what "valid" is. So the mathematicians (mainly S. Kleene) noted that every regular language could be described with a "regular expression". They had *s and |s and were the prototypes of what we know as regexps now.
What does it have to do with the problem? The problem in subject is essentially non-regular. It can't be expressed with anything that works like a finite automaton.
The essence is that it should contain a memory cell that is capable of holding an arbitrary number (the repetition count in your case). Finite automata and classical regular expressions cannot do this.
However, modern regexps are more expressive and are said to be able to check balanced parentheses! But this may serve as a good example that you shouldn't use regexps for tasks they don't suit. Let alone that it contains code snippets; this makes the expression far from being "regular".
Answering the initial question: no, you can't solve your problem using anything "regular" only. However, regexps can aid you in solving this problem, as in tster's answer.
Perhaps I should look closer at tster's answer (do a "+1" there, please!) and show why it's not a "regular expression" solution. One may think that it is, since it just contains a print statement (not essential) and a loop; the loop concept is compatible with the expressive power of a finite state automaton. But there is one more elusive thing:
while ($line =~ s/^([a-z]+)(\d+)//i)
{
    print $1
          x    # <--- this one
          $2;
}
The task of reading a string and a number and printing that string repeatedly the given number of times, where the number is an arbitrary integer, is undoable on a finite state machine without additional memory. You need a memory cell to keep that number, decrease it, and check whether it is still greater than zero. But this number may be arbitrarily big, and that contradicts the finite memory available to a finite state machine.
However, there's nothing wrong with the classical pattern /([abc]*){5}/ that matches something "regular" repeated a fixed number of times. We essentially have states that correspond to "matched pattern once", "matched pattern twice" ... "matched pattern 5 times". There's a finite number of them, and that's the gist of the difference.
how about:
while ($line =~ s/^([a-z]+)(\d+)//i)
{
    print $1 x $2;
}
Answering your question directly:
No, regular expressions match text and don't print anything, so there is no way to do it solely using regular expressions.
The regular expression you gave will match one string/number pair; you can then print that repeatedly using an appropriate mechanism. The Perl solution from @tster is about as compact as it gets. (It doesn't use the names that you applied in your regex; I'm pretty sure that doesn't matter.)
The remaining details depend on your implementation language.
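For comparison, a sketch of the same idea in Python, where the regex only does the matching and ordinary string repetition does the printing:
import re

line = 'aa67bc54c9'
print(''.join(s * int(n) for s, n in re.findall(r'([a-zA-Z]+)(\d+)', line)))
# prints "aa" 67 times, then "bc" 54 times, then "c" 9 times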
Nope, this is your basic 'trick question': no matter how you answer it, that answer is wrong unless you give exactly the answer the interviewer was trained to parrot. See the workup of the issue given by Pavel Shved: note that all invocations have 'not' as a common condition; the tool just keeps sliding, and even when it changes state there is no counter in that state.
I have a rather advanced book by Kenneth C. Louden, a college professor on the matter, in which it is stated that the issue at hand is codified as "regexes can't count." The obvious answer to the question seems to me, at the moment, to be using the lookahead feature of regexes...
Probably it depends on what build of what brand of regex the interviewer is using, which probably depends on the flight dynamics of golf balls.
Nice answers so far. Regular expressions alone are generally thought of as a way to match patterns, not generate output in the manner you mentioned.
Having said that, there is a way to use regex as part of the solution. @Jonathan Leffler made a good point in his comment to tster's reply: "... maybe you need a better regex library in your language."
Depending on your language of choice and the library available, it is possible to pull this off. Using C# and .NET, for example, this could be achieved via the Regex.Replace method. However, the solution is not 100% regex since it still relies on other classes and methods (StringBuilder, String.Join, and Enumerable.Repeat) as shown below:
string input = "aa67bc54c9";
string pattern = @"([a-z]+)(\d+)";
string result = Regex.Replace(input, pattern, m =>
    // can be achieved using StringBuilder or String.Join/Enumerable.Repeat
    // don't use both
    //new StringBuilder().Insert(0, m.Groups[1].Value, Int32.Parse(m.Groups[2].Value)).ToString()
    String.Join("", Enumerable.Repeat(m.Groups[1].Value, Int32.Parse(m.Groups[2].Value)).ToArray())
    + Environment.NewLine // comment out to prevent line breaks
);
Console.WriteLine(result);
Console.WriteLine(result);
A clearer solution would be to identify the matches, loop over them and insert them using the StringBuilder rather than rely on Regex.Replace. Other languages may have compact idioms to handle the string multiplication that doesn't rely on other library classes.
To answer the interview question, I would reply with, "it's possible, however the solution would not be a stand-alone 100% regex approach and would rely on other language features and/or libraries to handle the generation aspect of the question since the regex alone is helpful in matching patterns, not generating them."
And based on the other responses here you could beef up that answer further if needed.

Efficiently querying one string against multiple regexes

Let's say that I have 10,000 regexes and one string, and I want to find out if the string matches any of them and get all the matches.
The trivial way to do it would be to just query the string one by one against all regexes. Is there a faster, more efficient way to do it?
EDIT:
I have tried substituting it with DFAs (lex).
The problem here is that it would only give you one single pattern. If I have a string "hello" and patterns "[H|h]ello" and ".{0,20}ello", the DFA will only match one of them, but I want both of them to hit.
This is the way lexers work.
The regular expressions are converted into a single nondeterministic automaton (NFA) and possibly transformed into a deterministic automaton (DFA).
The resulting automaton will try to match all the regular expressions at once and will succeed on one of them.
There are many tools that can help you here, they are called "lexer generator" and there are solutions that work with most of the languages.
You don't say which language you are using. For C programmers I would suggest having a look at the re2c tool. Of course, the traditional (f)lex is always an option.
I've come across a similar problem in the past. I used a solution similar to the one suggested by akdom.
I was lucky in that my regular expressions usually had some substring that must appear in every string they match. I was able to extract these substrings using a simple parser and index them in an FSA using the Aho-Corasick algorithm. The index was then used to quickly eliminate all the regular expressions that trivially don't match a given string, leaving only a few regular expressions to check.
I released the code under the LGPL as a Python/C module. See esmre on Google code hosting.
We had to do this on a product I worked on once. The answer was to compile all your regexes together into a Deterministic Finite State Machine (also known as a deterministic finite automaton or DFA). The DFA could then be walked character by character over your string and would fire a "match" event whenever one of the expressions matched.
Advantages are it runs fast (each character is compared only once) and does not get any slower if you add more expressions.
Disadvantages are that it requires a huge data table for the automaton, and there are many types of regular expressions that are not supported (for instance, back-references).
The one we used was hand-coded by a C++ template nut in our company at the time, so unfortunately I don't have any FOSS solutions to point you toward. But if you google regex or regular expression with "DFA" you'll find stuff that will point you in the right direction.
Martin Sulzmann has done quite a bit of work in this field.
He has a HackageDB project, explained briefly here, which uses partial derivatives and seems to be tailor-made for this.
The language used is Haskell and thus will be very hard to translate to a non-functional language if that is the desire (I would think translation to many other FP languages would still be quite hard).
The code is not based on converting to a series of automata and then combining them; instead it is based on symbolic manipulation of the regexes themselves.
Also the code is very much experimental and Martin is no longer a professor but is in 'gainful employment' (1), so he may be uninterested/unable to supply any help or input.
(1) This is a joke - I like professors; the less the smart ones try to work, the more chance I have of getting paid!
10,000 regexen eh? Eric Wendelin's suggestion of a hierarchy seems to be a good idea. Have you thought of reducing the enormity of these regexen to something like a tree structure?
As a simple example: All regexen requiring a number could branch off of one regex checking for such, all regexen not requiring one down another branch. In this fashion you could reduce the number of actual comparisons down to a path along the tree instead of doing every single comparison in 10,000.
This would require decomposing the regexen provided into genres, each genre having a shared test which would rule them out if it fails. In this way you could theoretically reduce the number of actual comparisons dramatically.
If you had to do this at run time you could parse through your given regular expressions and "file" them into either predefined genres (easiest to do) or comparative genres generated at that moment (not as easy to do).
Your example of comparing "hello" to "[H|h]ello" and ".{0,20}ello" won't really be helped by this solution. A simple case where this could be useful would be: if you had 1000 tests that would only return true if "ello" exists somewhere in the string and your test string is "goodbye;" you would only have to do the one test on "ello" and know that the 1000 tests requiring it won't work, and because of this, you won't have to do them.
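Here is a sketch of that pre-filtering idea in Python, with the required literal for each regex assigned by hand (deriving the literals automatically from arbitrary regexes is the hard part, and is what tools like the esmre module mentioned above try to do):
import re

# Each regex is filed under a literal that must appear in any of its matches.
FILED = [
    ('ello', re.compile(r'[Hh]ello')),
    ('ello', re.compile(r'.{0,20}ello')),
    ('bye',  re.compile(r'good(bye)+')),
]

def matches(text):
    # Run only the regexes whose required literal is present at all.
    return [rx.pattern for lit, rx in FILED if lit in text and rx.search(text)]

print(matches('goodbye'))  # the two 'ello' regexes are never executed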
If you're thinking in terms of "10,000 regexes" you need to shift your thought processes. If nothing else, think in terms of "10,000 target strings to match". Then look for non-regex methods built to deal with "boatloads of target strings" situations, like Aho-Corasick machines. Frankly, though, it seems like something's gone off the rails much earlier in the process than the choice of which machine to use, since 10,000 target strings sounds a lot more like a database lookup than a string match.
Aho-Corasick was the answer for me.
I had 2000 categories of things that each had lists of patterns to match against. String length averaged about 100,000 characters.
Main Caveat: The patterns to match were all literal patterns, not regex patterns, e.g. 'cat' vs r'\w+'.
I was using python and so used https://pypi.python.org/pypi/pyahocorasick/.
import ahocorasick

A = ahocorasick.Automaton()

patterns = [
    [['cat', 'dog'], 'mammals'],
    [['bass', 'tuna', 'trout'], 'fish'],
    [['toad', 'crocodile'], 'amphibians'],
]

for row in patterns:
    vals = row[0]
    for val in vals:
        A.add_word(val, (row[1], val))

A.make_automaton()

_string = 'tom loves lions tigers cats and bass'

def test():
    vals = []
    for item in A.iter(_string):
        vals.append(item)
    return vals
Running %timeit test() on my 2000 categories, with about 2-3 traces per category and a _string length of about 100,000, got me 2.09 ms versus 631 ms doing sequential re.search(): 315x faster!
You'd need to have some way of determining whether a given regex was "additive" compared to another one, creating a regex "hierarchy" of sorts that allows you to determine that all regexes of a certain branch did not match.
You could combine them in groups of maybe 20.
(?=(regex1)?)(?=(regex2)?)(?=(regex3)?)...(?=(regex20)?)
As long as each regex has zero (or at least the same number of) capture groups, you can look at what was captured to see which pattern(s) matched.
If regex1 matched, capture group 1 would have its matched text. If not, it would be undefined/None/null/...
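A sketch of that trick in Python; each wrapped pattern here contributes exactly one capture group (its wrapper), so group i corresponds to pattern i. Note it only reports patterns that match starting at the tested position:
import re

patterns = ['[Hh]ello', r'\d+', 'ello']
combined = re.compile(''.join('(?=(%s)?)' % p for p in patterns))

m = combined.match('hello world')
for i, p in enumerate(patterns, start=1):
    print(p, '->', m.group(i))
# [Hh]ello -> hello
# \d+ -> None
# ello -> None (nothing matches 'ello' at position 0)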
If you're using real regular expressions (the ones that correspond to regular languages from formal language theory, and not some Perl-like non-regular thing), then you're in luck, because regular languages are closed under union. In most regex languages, pipe (|) is union. So you should be able to construct a string (representing the regular expression you want) as follows:
(r1)|(r2)|(r3)|...|(r10000)
where parentheses are for grouping, not matching. Anything that matches this regular expression matches at least one of your original regular expressions.
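A sketch of the union approach in Python, using named groups so you can tell which alternative fired; one caveat is that alternation stops at the first branch that succeeds, so overlapping patterns still yield only one hit per position:
import re

patterns = {'greeting': '[Hh]ello', 'number': r'\d+'}
union = re.compile('|'.join('(?P<%s>%s)' % kv for kv in patterns.items()))

for m in union.finditer('hello 42 world'):
    print(m.lastgroup, '->', m.group())
# greeting -> hello
# number -> 42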
I would recommend using Intel's Hyperscan if all you need is to know which regular expressions match. It is built for this purpose. If the actions you need to take are more sophisticated, you can also use Ragel, although it produces a single DFA, which can result in many states and consequently a very large executable program. Hyperscan takes a hybrid NFA/DFA/custom approach to matching that handles large numbers of expressions well.
I'd say that it's a job for a real parser. A midpoint might be a Parsing Expression Grammar (PEG). It's a higher-level abstraction of pattern matching, one feature is that you can define a whole grammar instead of a single pattern. There are some high-performance implementations that work by compiling your grammar into a bytecode and running it in a specialized VM.
Disclaimer: the only one I know is LPEG, a library for Lua, and it wasn't easy (for me) to grasp the base concepts.
I'd almost suggest writing an "inside-out" regex engine - one where the 'target' was the regex, and the 'term' was the string.
However, it seems that your solution of trying each one iteratively is going to be far easier.
You could compile the regexes into a hybrid DFA/Büchi automaton where each time the BA enters an accept state you flag which regex rule "hit".
Büchi is a bit of overkill for this, but modifying the way your DFA works could do the trick.
I use Ragel with a leaving action:
action hello {...}
action ello {...}
action ello2 {...}
main := /[Hh]ello/ % hello |
/.+ello/ % ello |
any{0,20} "ello" % ello2 ;
The string "hello" would call the code in the action hello block, then in the action ello block and lastly in the action ello2 block.
Ragel's regular expressions are quite limited, and using its machine language is preferred instead; the braces from your example only work in the more general language.
Try combining them into one big regex?
I think that the short answer is that yes, there is a way to do this, and that it is well known to computer science, and that I can't remember what it is.
The short answer is that you might find that your regex interpreter already deals with all of these efficiently when |'d together, or you might find one that does. If not, it's time for you to google string-matching and searching algorithms.
The fastest way to do it seems to be something like this (code is C#):
public static List<Regex> FindAllMatches(string s, List<Regex> regexes)
{
    List<Regex> matches = new List<Regex>();
    foreach (Regex r in regexes)
    {
        if (r.IsMatch(s))
        {
            matches.Add(r);
        }
    }
    return matches;
}
Oh, you meant the fastest code? I don't know then...