Regular Expression to find words in varying orders - regex

I am searching for a way to model a RegEx which would give a match for both of these strings when searched for "sun shining".
the sun is shining
a shining sun is nice

I'd use positive lookaheads for each word, like this (and you can add as many as you like):
(?=.*?\bsun\b)(?=.*?\bshining\b).*
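For example, a minimal sketch of this approach in Python (my own test strings, assuming you only need a yes/no answer per line):
import re

pattern = re.compile(r"(?=.*?\bsun\b)(?=.*?\bshining\b).*")

print(bool(pattern.search("the sun is shining")))     # True
print(bool(pattern.search("a shining sun is nice")))  # True
print(bool(pattern.search("the sun is bright")))      # False - "shining" is missing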

Basic regular expressions don't handle differing orders of words very well. There are ways to do it but the regular expressions become ugly and unreadable to all but the regex gurus. I prefer to opt for readability in most cases myself.
My advice would be to use a simple "or" variant (alternation), something like:
sun.+shining|shining.+sun
with word boundaries if necessary:
\bsun\b.+\bshining\b|\bshining\b.+\bsun\b
As Lucero points out, this will become unwieldy as the number of words you're searching for increases, in which case I would go for the multiple regex match solution:
import re

def has_all_words(string, words):
    # Check each word separately; the \b word boundaries stop "sun" from matching "sunny".
    for word in words:
        if not re.search(r"\b" + re.escape(word) + r"\b", string):
            return False
    return True
That function runs a check for each word and ensures that all of them appear.

You will need to use a regular expression that considers every permutation like this:
\b(sun\b.+\bshining|shining\b.+\bsun)\b
Here the word boundaries \b are used to match only the words sun and shining, and not sub-words as in "sunny".

You use two regexes.
if ( ( $line =~ /\bsun\b.+\bshining\b/ ) ||
( $line =~ /\bshining\b.+\bsun\b/ ) ) {
# do whatever
}
Sometimes you have to do what seems to be low-tech. Other answers to this question will have you building complex regexes with alternation and lookahead and whatever, but sometimes the best way is to do it the simplest way, and in this case, it's to use two different regexes.
Don't worry about execution speed. Unless you benchmark this solution against other more complicated single-expression solutions, you don't know which is faster. It's incredibly easy to write slow regexes.

Related

Regular Expression matching Sentence that MAY contain Parantheses

I am using this expression as a regex to capture words being sent to our data quality systems. This should be a FULL match - i.e. all the words in a sentence:
(^$|^\w+(\s\w+)*$)
This works for all scenarios like this:
A sheep jumped over a fence
But not for this
A sheep jumped over a fence (And Tripped)
I understand that \w takes care of only alphanumerics and the underscore. But I would also want this to match sentences with the brackets ( ) like in the example above. Is there a way to ADDITIONALLY allow the ( ) characters so both scenarios can be satisfied?
I might be misunderstanding this (always take whatever Wiktor says over anybody else) but maybe you are looking for something simple to match each word like this?
^$|([\w]+)
or a full match like this
^$|([ \w()]+)
Good luck! A good place to try this stuff out is at https://regex101.com/ :) What is neat with regexes is you can make them really clever and small, but I lean towards the side of being able to read easily later. Use whichever one gets it done and is easy to understand.
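If it helps, here is a quick sketch (Python, my own test strings) of the second pattern; using fullmatch is my way of enforcing the FULL-match requirement from the question:
import re

pattern = re.compile(r"^$|([ \w()]+)")

print(bool(pattern.fullmatch("A sheep jumped over a fence")))                # True
print(bool(pattern.fullmatch("A sheep jumped over a fence (And Tripped)")))  # True
print(bool(pattern.fullmatch("No stray symbols @ here")))                    # False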

Look behinds: all the rage in regex?

Many regex questions lately have some kind of look-around element in the query that, as far as I can tell, is not necessary for the match to succeed. Is there some teaching resource that is promoting them? I am trying to figure out in what kinds of cases you would be better off using a positive look ahead/behind. The main application I can see is when trying to not match an element. But, for example, this query from a recent question has a simple solution to capturing the .*, but why would you use a look behind?
(?<=<td><a href="\/xxx\.html\?n=[0-9]{0, 5}">).*(?=<\/a><span
And this one from another question:
$url = "www.example.com/id/1234";
preg_match("/\d+(?<=id\/[\d])/",$url,$matches);
When is it truly better to use a positive look-around? Can you give some examples?
I realize this is bordering on an opinion-based question, but I think the answers would be really instructive. Regex is confusing enough without making things more complicated... I have read this page and am more interested in some simple guidelines for when to use them rather than how they work.
Thanks for all the replies. In addition to those below, I recommend checking out m.buettner's great answer here.
You can capture overlapping matches, and you can find matches which could lie in the lookarounds of other matches.
You can express complex logical assertions about your match (because many engines let you use multiple lookbehind/lookahead assertions which all must match in order for the match to succeed).
Lookaround is a natural way to express the common constraint "matches X, if it is followed by/preceded by Y". It is (arguably) less natural to add extra "matching" parts that have to be thrown out by postprocessing.
Negative lookaround assertions, of course, are even more useful. Combined with #2, they can allow you to do some pretty wizard tricks, which may even be hard to express in usual program logic.
Examples, by popular request:
Overlapping matches: suppose you want to find all candidate genes in a given genetic sequence. Genes generally start with ATG, and end with TAG, TAA or TGA. But, candidates could overlap: false starts may exist. So, you can use a regex like this:
ATG(?=((?:...)*(?:TAG|TAA|TGA)))
This simple regex looks for the ATG start-codon, followed by some number of codons, followed by a stop codon. It pulls out everything that looks like a gene (sans start codon), and properly outputs genes even if they overlap.
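A small sketch (Python, with a made-up sequence) of how only the start codon is consumed, so overlapping candidates are still reported:
import re

seq = "ATGAAAATGCCCTGA"  # hypothetical sequence; the second candidate starts inside the first

for m in re.finditer(r"ATG(?=((?:...)*(?:TAG|TAA|TGA)))", seq):
    print("ATG" + m.group(1))
# ATGAAAATGCCCTGA
# ATGCCCTGA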
Zero-width matching: suppose you want to find every tr with a specific class in a computer-generated HTML page. You might do something like this:
<tr class="TableRow">.*?</tr>(?=<tr class="TableRow">|</table>)
This deals with the case in which a bare </tr> appears inside the row. (Of course, in general, an HTML parser is a better choice, but sometimes you just need something quick and dirty).
Multiple constraints: suppose you have a file with data like id:tag1,tag2,tag3,tag4, with tags in any order, and you want to find all rows with tags "green" and "egg". This can be done easily with two lookaheads:
(.*):(?=.*\bgreen\b)(?=.*\begg\b)
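A quick sketch (Python, made-up rows) of how the two lookaheads enforce both tags regardless of order:
import re

rows = ["r1:green,egg,ham", "r2:egg,bacon", "r3:spam,green,egg"]
pattern = re.compile(r"(.*):(?=.*\bgreen\b)(?=.*\begg\b)")

for row in rows:
    if pattern.match(row):
        print(row)   # prints r1 and r3, the rows where both tags are present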
There are two great things about lookaround expressions:
They are zero-width assertions. They must match, but they consume nothing of the input string. This makes it possible to describe parts of the string that will not be contained in the match result. With capturing groups inside lookaround expressions, they are also the only way to capture parts of the input multiple times.
They simplify a lot of things. While they do not extend regular languages, they make it easy to combine (intersect) multiple expressions that must match the same part of a string.
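A small sketch (Python, made-up input) of capturing the same characters twice by putting one group inside a lookahead:
import re

m = re.search(r"(?=(\d{4}))(\d{2})", "2024-01-15")
print(m.group(1))  # 2024 - captured inside the lookahead, nothing consumed
print(m.group(2))  # 20   - the same leading digits, captured again while consuming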
Well, one simple case where they are handy is when you are anchoring the pattern to the start or end of a line and just want to make sure that something is right ahead of or behind the pattern you are matching.
I try to address your points:
some kind of look-around element in the query that appears to me is not necessary to the success of the match
Of course they are necessary for the match. As soon as a lookaround assertion fails, there is no match. They can be used to ensure conditions around the pattern that must additionally hold. The whole regex only matches if:
The pattern fits, and
The lookaround assertions are true.
==> But the returned match is only the pattern.
When is it truly better to use a positive look-around?
Simple answer: when you want stuff to be there, but you don't want to match it!
As Bergi mentioned in his answer, they are zero-width assertions: they don't match a character sequence, they just ensure it is there. So the characters inside a lookaround expression are not "consumed"; the regex engine continues after the last "consumed" character.
Regarding your first example:
(?<=<td><a href="\/xxx\.html\?n=[0-9]{0, 5}">).*(?=<\/a><span
I think there is a misunderstanding on your side when you write "has a simple solution to capturing the .*". The .* is not "captured"; it is the only thing the expression matches at all. But only those characters are matched that have a "<td><a href="\/xxx\.html\?n=[0-9]{0, 5}">" before them and a "<\/a><span" after them (those two are not part of the match!).
"Captured" is only something that has been matched by a capturing group.
The second example
\d+(?<=id\/[\d])
is interesting. It matches a sequence of digits (\d+), and after the sequence the lookbehind assertion checks that the position is preceded by "id/" plus a single digit. That means it fails if more than one digit was consumed, or if the text "id/" before the digit is missing. So this regex matches only one digit, and only when the right text precedes it.
teaching resources
www.regular-expressions.info
perlretut on Looking ahead and looking behind
I'm assuming you understand the good uses of lookarounds, and ask why they are used with no apparent reason.
I think there are four main categories of how people use regular expressions:
Validation
Validation is usually done on the whole text. Lookarounds like you describe are not possible.
Match
Extracting a part of the text. Lookarounds are used mainly due to developer laziness: avoiding captures.
For example, if we have in a settings file with the line Index=5, we can match /^Index=(\d+)/ and take the first group, or match /(?<=^Index=)\d+/ and take everything.
As other answers said, sometimes you need overlapping between matches, but these are relatively rare.
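For the Index example above, a short sketch (Python) of the two styles:
import re

line = "Index=5"

# Capturing group: the value is in group 1.
print(re.match(r"^Index=(\d+)", line).group(1))     # 5
# Lookbehind: the whole match is already the value.
print(re.search(r"(?<=^Index=)\d+", line).group())  # 5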
Replace
This is similar to match with one difference: the whole match is removed and is being replaced with a new string (and some captured groups).
Example: we want to highlight the name in "Hi, my name is Bob!".
We can replace /(name is )(\w+)/ with $1<b>$2</b>,
but it is neater to replace /(?<=name is )\w+/ with <b>$&</b> - and no captures at all.
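A short sketch of both replacements in Python ($& is written \g<0> there):
import re

s = "Hi, my name is Bob!"

print(re.sub(r"(name is )(\w+)", r"\1<b>\2</b>", s))    # capture both pieces and rebuild
print(re.sub(r"(?<=name is )\w+", r"<b>\g<0></b>", s))  # lookbehind: wrap the whole match
# both print: Hi, my name is <b>Bob</b>!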
Split
split takes the text and breaks it into an array of tokens, with your pattern being the delimiter. This is done as follows:
Find a match. Everything before this match is a token.
The content of the match is discarded, but:
In most flavors, each captured group in the match is also a token (notably not in Java).
When there are no more matches, the rest of the text is the last token.
Here, lookarounds are crucial. Matching a character means removing it from the result, or at least separating it from its token.
Example: We have a comma-separated list of quoted strings: "Hello","Hi, I'm Jim."
Splitting by comma /,/ is wrong: {"Hello", "Hi, I'm Jim."}
We can't add the quote mark, /",/: {"Hello, "Hi, I'm Jim."}
The only good option is lookbehind, /(?<="),/: {"Hello", "Hi, I'm Jim."}
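A quick sketch of the three attempts with Python's re.split:
import re

line = '"Hello","Hi, I\'m Jim."'

print(re.split(r",", line))         # wrong: splits inside the second string
print(re.split(r'",', line))        # wrong: the closing quote of "Hello" is consumed
print(re.split(r'(?<="),', line))   # right: the quotes stay attached to their tokens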
Personally, I prefer to match the tokens rather than split by the delimiter, whenever that is possible.
Conclusion
To answer the main question - these lookarounds are used because:
Sometimes you can't match the text that you need.
Developers are shiftless.
Lookaround assertions can also be used to reduce backtracking, which is often the main cause of bad regex performance.
For example, the regex ^[0-9A-Z]([-.\w]*[0-9A-Z])*# (1) can also be written ^[0-9A-Z][-.\w]*(?<=[0-9A-Z])# (2) using a positive lookbehind (simple validation of the user name in an e-mail address).
Regex (1) can cause a lot of backtracking, essentially because [0-9A-Z] is a subset of [-.\w] and because of the nested quantifiers. Regex (2) avoids the excessive backtracking; more information here: Backtracking, section Controlling Backtracking > Lookbehind Assertions.
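As a quick sanity check (Python, hypothetical test strings), the two forms accept and reject the same inputs:
import re

re1 = re.compile(r"^[0-9A-Z]([-.\w]*[0-9A-Z])*#")    # regex (1)
re2 = re.compile(r"^[0-9A-Z][-.\w]*(?<=[0-9A-Z])#")  # regex (2), lookbehind variant

for s in ["A.B#", "AB-CD1#", "A.b#", "A-#"]:
    print(s, bool(re1.match(s)), bool(re2.match(s)))  # both columns agree for every string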
For more information about backtracking
Best Practices for Regular Expressions in the .NET Framework
Optimizing Regular Expression Performance, Part II: Taking Charge of Backtracking
Runaway Regular Expressions: Catastrophic Backtracking
I typed this a while back but got busy (still am, so I might take a while to reply back) and didn't get around to posting it. If you're still open to answers...
Is there some teaching resource that is promoting them?
I don't think so, it's just a coincidence I believe.
But, for example, this query from a recent question has a simple solution to capturing the .*, but why would you use a look behind?
(?<=<td><a href="\/xxx\.html\?n=[0-9]{0, 5}">).*(?=<\/a><span
This is most probably a C# regex, since variable-width lookbehinds are not supported by many regex engines. Well, the lookarounds could certainly be avoided here, because for this, I believe it's really simpler to have capture groups (and make the .* lazy while we're at it):
(<td><a href="\/xxx\.html\?n=[0-9]{0,5}">).*?(<\/a><span)
If it's for a replace, or
<td><a href="\/xxx\.html\?n=[0-9]{0,5}">(.*?)<\/a><span
for a match. Though an HTML parser would definitely be more advisable here.
Lookarounds in this case, I believe, are slower. See the regex101 demo, where the match is 64 steps for the capture groups but 94 + 19 = 113 steps for the lookarounds.
When is it truly better to use a positive look-around? Can you give some examples?
Well, lookarounds have the property of being zero-width assertions, which means they don't really contribute to the matched text, while they do contribute to deciding what to match, and they also allow overlapping matches.
Thinking a bit about it, I think, too, that negative lookarounds get used much more often, but that doesn't make positive lookarounds less useful!
Some 'exploits' I can find browsing some old answers of mine (the links below are demos from regex101) follow. When/if you see something you're not familiar with, I probably won't be explaining it here, since the question is focused on positive lookarounds, but you can always look at the demo links I provided, where there's a description of the regex, and if you still want some explanation, let me know and I'll try to explain as much as I can.
To get matches between certain characters:
In some matches, a positive lookahead makes things easier where a negative lookahead could do as well, or where it's not so practical to avoid lookarounds altogether:
Dog sighed. "I'm no super dog, nor special dog," said Dog, "I'm an ordinary dog, now leave me alone!" Dog pushed him away and made his way to the other dog.
We want to get all the occurrences of dog (regardless of case) outside the quotes. With a positive lookahead, we can do this:
\bdog\b(?=(?:[^"]*"[^"]*")*[^"]*$)
to ensure that there is an even number of quotes ahead. With a negative lookahead, it would look like this:
\bdog\b(?!(?:[^"]*"[^"]*")*[^"]*"[^"]*$)
to ensure that there is not an odd number of quotes ahead. Or use something like this if you don't want a lookahead, but then you'll have to extract the group 1 matches:
(?:"[^"]+"[^"]+?)?(\bdog\b)
Okay, now say we want the opposite: find 'dog' inside the quotes. The regexes with the lookarounds just need to have their signs inverted, first and second:
\bdog\b(?!(?:[^"]*"[^"]*")*[^"]*$)
\bdog\b(?=(?:[^"]*"[^"]*")*[^"]*"[^"]*$)
But without the lookaheads, it's not possible. The closest you can get is maybe this:
"[^"]*(\bdog\b)[^"]*"
But this doesn't get all the matches, or you can maybe use this:
"[^"]*?(\bdog\b)[^"]*?(?:(\bdog\b)[^"]*?)?"
But it's just not practical for more occurrences of dog, and you get the results in capture groups with increasing numbers... This is indeed easier with lookarounds: because they are zero-width assertions, you don't have to worry about whether the expression inside the lookaround consumes a dog or not; otherwise the regex wouldn't have found all the occurrences of dog inside the quotes.
Of course now, this logic can be extended to groups of characters, such as getting specific patterns between words such as start and end.
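A rough check (Python) of the two lookahead versions against the quoted sentence:
import re

text = ('Dog sighed. "I\'m no super dog, nor special dog," said Dog, '
        '"I\'m an ordinary dog, now leave me alone!" Dog pushed him away '
        'and made his way to the other dog.')

outside = r'\bdog\b(?=(?:[^"]*"[^"]*")*[^"]*$)'
inside = r'\bdog\b(?!(?:[^"]*"[^"]*")*[^"]*$)'

print(len(re.findall(outside, text, re.IGNORECASE)))  # 4 occurrences outside the quotes
print(len(re.findall(inside, text, re.IGNORECASE)))   # 3 occurrences inside the quotes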
Overlapping matches
If you have a string like:
abcdefghijkl
And want to extract every possible run of 3 consecutive characters inside it, you can use this:
(?=(...))
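For example (Python), since the lookahead succeeds at every position without consuming anything:
import re

print(re.findall(r"(?=(...))", "abcdefghijkl"))
# ['abc', 'bcd', 'cde', 'def', 'efg', 'fgh', 'ghi', 'hij', 'ijk', 'jkl']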
If you have something like:
1A Line1 Detail1 Detail2 Detail3 2A Line2 Detail 3A Line3 Detail Detail
And want to extract these, knowing that each line starts with #A Line# (where # is a number):
1A Line1 Detail1 Detail2 Detail3
2A Line2 Detail
3A Line3 Detail Detail
You might try this, which fails because of greediness...
[0-9]+A Line[0-9]+(?: \w+)+
Or this, which no longer works once made lazy...
[0-9]+A Line[0-9]+(?: \w+)+?
But with a positive lookahead, you get this:
[0-9]+A Line[0-9]+(?: \w+)+?(?= [0-9]+A Line[0-9]+|$)
And appropriately extracts what's needed.
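A sketch of that last pattern in Python:
import re

data = "1A Line1 Detail1 Detail2 Detail3 2A Line2 Detail 3A Line3 Detail Detail"

for m in re.finditer(r"[0-9]+A Line[0-9]+(?: \w+)+?(?= [0-9]+A Line[0-9]+|$)", data):
    print(m.group())
# 1A Line1 Detail1 Detail2 Detail3
# 2A Line2 Detail
# 3A Line3 Detail Detail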
Another possible situation is one where you have something like this:
#ff00fffirstword#445533secondword##008877thi#rdword#
Which you want to convert into three pairs of values (the first of each pair being a # and six hex digits, the second being whatever characters come after them):
#ff00ff and firstword
#445533 and secondword#
#008877 and thi#rdword#
If there were no hashes inside the 'words', it would have been enough to use (#[0-9a-f]{6})([^#]+), but unfortunately, that's not the case and you have to resort to .*? instead of [^#]+, which doesn't quite yet solve the issue of stray hashes. Positive lookaheads however make this possible:
(#[0-9a-f]{6})(.+?)(?=#[0-9a-f]{6}|$)
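A quick sketch (Python) on the sample string:
import re

s = "#ff00fffirstword#445533secondword##008877thi#rdword#"

for color, word in re.findall(r"(#[0-9a-f]{6})(.+?)(?=#[0-9a-f]{6}|$)", s):
    print(color, word)
# #ff00ff firstword
# #445533 secondword#
# #008877 thi#rdword#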
Validation & Formatting
Not recommended, but you can use positive lookaheads for quick validations. The following regex, for instance, allows the entry of a string containing at least 1 digit and 1 lowercase letter.
^(?=[^0-9]*[0-9])(?=[^a-z]*[a-z])
This can be useful when you're checking for character length but have patterns of varying length in the string, for example, a 4-character-long string with valid formats where # indicates a digit and the hyphen/dash/minus - must be in the middle:
##-#
#-##
A regex like this does the trick:
^(?=.{4}$)\d+-\d+
Otherwise, you'd do ^(?:[0-9]{2}-[0-9]|[0-9]-[0-9]{2})$; now imagine that the max length was 15, and think of the number of alternations you'd need.
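A quick check (Python, my own test strings) of the length-lookahead trick:
import re

pattern = re.compile(r"^(?=.{4}$)\d+-\d+")

for s in ["12-3", "1-23", "12-34", "1-2"]:
    print(s, bool(pattern.match(s)))
# 12-3 True, 1-23 True, 12-34 False (too long), 1-2 False (too short)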
If you want a quick and dirty way to rearrange some dates in the 'messed up' formats mmm-yyyy and yyyy-mmm into a more uniform format mmm-yyyy, you can use this:
(?=.*(\b\w{3}\b))(?=.*(\b\d{4}\b)).*
Input:
Oct-2013
2013-Oct
Output:
Oct-2013
Oct-2013
An alternative might be to use a regex (normal match) and process all the non-conforming formats separately.
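For what it's worth, here is how that might look in Python; the replacement string \1-\2 is my assumption about how the two captures get put back together:
import re

for s in ["Oct-2013", "2013-Oct"]:
    print(re.sub(r"(?=.*(\b\w{3}\b))(?=.*(\b\d{4}\b)).*", r"\1-\2", s))
# Oct-2013
# Oct-2013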
Something else I came across on SO was the Indian currency format, which is ##,##,###.### (3 digits to the left of the decimal and all other digits grouped in pairs). If you have an input of 122123123456.764244, you expect 1,22,12,31,23,456.764244, and if you want to use a regex, this one does it:
\G\d{1,2}\K\B(?=(?:\d{2})*\d{3}(?!\d))
(The (?:\G|^) in the link is only used because \G matches only at the start of the string and after a match.) I don't think this could work without the positive lookahead, since it looks forward without moving the point of replacement.
Trimming
Suppose you have (quotes added here only so the stray spaces are visible):
"  this  is a   sentence "
And want to trim all the extra spaces with a single regex. You might be tempted to do a general replace on spaces:
\s+
But this yields "thisisasentence". Well, maybe replace with a single space? It now yields " this is a sentence " (double quotes used because backticks eat spaces). Something you can however do is this:
^\s*|\s$|\s+(?=\s)
Which makes sure to leave one space behind so that you can replace with nothing and get "this is a sentence".
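In Python, for instance:
import re

print(re.sub(r"^\s*|\s$|\s+(?=\s)", "", "  this  is a   sentence "))
# this is a sentence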
Splitting
Somewhere else where positive lookarounds might be useful is splitting: say you have a string ABC12DE3456FGHI789 and want to take the letters+digits groups apart, that is, you want to get ABC12, DE3456 and FGHI789. You can easily use the regex:
(?<=[0-9])(?=[A-Z])
While if you use ([A-Z]+[0-9]+) (i.e. the captured groups are put back in the resulting list/array/etc.), you will be getting empty elements as well.
Note that this could be done with a match as well, with [A-Z]+[0-9]+
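In Python (3.7+, where re.split accepts zero-width patterns) that looks like:
import re

print(re.split(r"(?<=[0-9])(?=[A-Z])", "ABC12DE3456FGHI789"))
# ['ABC12', 'DE3456', 'FGHI789']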
If I had to mention negative lookarounds, this post would have been even longer :)
Keep in mind that a positive/negative lookaround is the same for a regex engine. The goal of lookarounds is to perform a check somewhere in your "regular expression".
One of the main interests is to capture something without using capturing parentheses (the whole pattern is the capture), for example:
string: aaabbbccc
regex: (?<=aaa)bbb(?=ccc)
(you obtain the result with the whole pattern)
instead of: aaa(bbb)ccc
(you obtain the result with the capturing group.)
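A tiny sketch (Python) of the two variants:
import re

s = "aaabbbccc"

print(re.search(r"(?<=aaa)bbb(?=ccc)", s).group())  # bbb - taken from the whole match
print(re.search(r"aaa(bbb)ccc", s).group(1))        # bbb - taken from capturing group 1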

A successor to regex? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 12 years ago.
Looking at some of the regex questions commonly asked on SO, it seems to me there's a number of areas where the traditional regex syntax is falling short of the kind of tasks people are looking for it to do nowadays. For instance:
I want to match a number between 1 and 31, how do I do that ?
The usual answer is don't use regex for this, use normal conditional comparisons. That's fine if you've got just the number by itself, but not so great when you want to match the number as part of a longer string. Why can't we write something like \d{1~31}, and either modify the regex to do some form of counting or have the regex engine internally translate it into [1-9]|[12]\d|3[01] ?
How do I match an even/odd number of occurrences of a specific string ?
This results in a very messy regex; it would be great to be able to just do (mytext){Odd}.
How do I parse XML with regex ?
We all know that's a bad idea, but this and similar tasks would be easier if the [^ ] operator wasn't limited to just a single character. It'd be nice to be able to do <name>(.*)[^(</name>)]
How do I validate an email with regex ?
Very commonly done and yet very complex to do correctly with regex. It'd save everyone having to re-invent the wheel if a syntax like {IsEmail} could be used instead.
I'm sure there are others that would be useful too. I don't know enough about regex internals to know how easy these would be to implement, or if it would even be possible. Implementing some form of counting (to solve the first two problems) may mean it's not technically a 'regular expression' anymore, but it sure would be useful.
Is a 'regex 2.0' syntax desirable, technically possible, and is there anyone working on anything like this ?
I believe Larry Wall covered this with Perl 6 regexes. The basic idea is to replace simple regular expressions with more useful grammar rules. They're easier to read, and it's easier to put code in for things like making sure that you have the right number of matches. Plus, you can name rules like IsEmail. I can't possibly list all the details here, but suffice it to say, it sounds like what you're suggesting.
Here are some examples from http://dev.perl.org/perl6/doc/design/exe/E05.html:
Matching IP address:
token quad { (\d**1..3) <?{ $1 < 256 }> }
$str ~~ m/ <quad> <dot> <quad> <dot> <quad> <dot> <quad> /;
Matching nested parentheses:
$str =~ m/ \( [ <-[()]> + : | <self> ]* \) /;
Annotated:
$str =~ m/ <'('> # Match a literal '('
[ # Start a non-capturing group
<-[()]> + # Match a non-paren (repeatedly)
: # ...and never backtrack that match
| # Or
<self> # Recursively match entire pattern
]* # Close group and match repeatedly
<')'> # Match a literal ')'
/;
Don't blame the tool, blame the user.
Regular Expressions were built for matching patterns in strings. That's it.
It was not made for:
Integer validation
Markup language parsing
Very complex validation (ie.: RFC 2822)
Exact string comparison
Spelling correction
Vector computation
Genetic decoding
Miracle making
Baby saving
Finance administering
Sub-atomic partitioning
Flux capacitor activating
Warp core engaging
Time traveling
Headache inducing
Never-mind that last one. It seems that regular expressions are very well adapted to doing that last task when they are being used where they shouldn't.
Should we redesign the screwdriver because it can't nail? NO, use a hammer.
Simply use the proper tool for the task. Stop using regular expressions for tasks which they don't qualify for.
I want to match a number between 1 and 31, how do I do that?
Use your language constructs to try to convert the string to an integer and do the appropriate comparisons.
How do I match an even/odd number of occurrences of a specific string?
Regular expressions are not a string parser. You can however extract the relevant part with a regular expression if you only need to parse a sub-section of the original string.
How do I parse XML with regex?
You don't. Use an XML or an HTML parser depending on your need. Also, an XML parser can't do the job of an HTML parser (unless you have a perfectly formed XHTML document), and the reverse is also true.
How do I validate an email with regex?
You either use this large abomination or you do it properly with a parser.
All of those are reasonably possible in Perl.
To match a 1..31 with a regex pattern:
/( [0-9]+ ) (?(?{ $^N < 1 || $^N > 31 })(*FAIL)) /x
To generate something like [1-9]|[12]\d|3[01]:
use Regexp::Assemble qw( );
my $ra = Regexp::Assemble->new();
$ra->add($_) for (1..31);
my $re = $ra->re; # qr/(?:[456789]|3[01]?|1\d?|2\d?)/
Perl 5.10+ uses tries to optimise alternations, so the following should be sufficient:
my $re = join '|', 1..31;
$re = qr/$re/;
To match an even number of occurrences:
/ (?: pat{2} )* /x
To match an odd number of occurrences:
/ pat (?: pat{2} )* /x
Pattern negative match:
/<name> (.*?) </name>/x # Non-greedy matching
/<name> ( (?: (?!</name>). )* ) </name>/x
To get a pattern matching email addresses:
use Regexp::Common qw( Email::Address );
/$RE{Email}{Address}/
Probably it is already there, and from a long time ago. It's called "grammars". Ever heard of yacc and lex? Now there is a need for something simple. As strange as it may appear, the big strength of regexes is that they are very simple to write on the spot.
I believe that in some specialized (but large) areas, what is needed already exists. I'm thinking of XPath syntax.
Is there a larger (not limited to XML but still simple) alternative around that could cover all cases? Maybe you should take a look at Perl 6 grammars.
No. We should leave regular expressions as is. They are already far too complicated. When was the last time you thought you had nailed it, i.e., got the whole extended regex syntax (choose your flavour) loaded in your squashy memory?
The theory behind regexes is nice and simple. But then we wanted this and that to go with it. The tool is useful, but falls short on non-regular matching. That is ok!
What most people miss is that context-free grammars and little specialized interpreters are really easy to write.
Instead of making regexes more difficult, we should be rooting for parser support in standard libraries for our languages of choice!

How can I "inverse match" with regex?

I'm processing a file, line-by-line, and I'd like to do an inverse match. For instance, I want to match lines where there is a string of six letters, but only if these six letters are not 'Andrea'. How should I do that?
I'm using RegexBuddy, but still having trouble.
(?!Andrea).{6}
Assuming your regexp engine supports negative lookaheads...
...or maybe you'd prefer to use [A-Za-z]{6} in place of .{6}
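As a quick sketch of that suggestion in Python (my own test strings):
import re

pattern = re.compile(r"(?!Andrea)[A-Za-z]{6}")

print(bool(pattern.search("Andrea")))   # False - the only six letters are exactly "Andrea"
print(bool(pattern.search("Andrew")))   # True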
Note that lookaheads and lookbehinds are generally not the right way to "inverse" a regular expression match. Regexps aren't really set up for doing negative matching; they leave that to whatever language you are using them with.
For Python/Java,
^(.(?!(some text)))*$
http://www.lisnichenko.com/articles/javapython-inverse-regex.html
In PCRE and similar variants, you can actually create a regex that matches any line not containing a value:
^(?:(?!Andrea).)*$
This is called a tempered greedy token. The downside is that it doesn't perform well.
The capabilities and syntax of the regex implementation matter.
You could use look-ahead. Using Python as an example,
import re
not_andrea = re.compile(r'(?!Andrea)\w{6}', re.IGNORECASE)
To break that down:
(?!Andrea) means 'match if the next 6 characters are not "Andrea"'; if so then
\w means a "word character" - alphanumeric characters plus the underscore. This is equivalent to the class [a-zA-Z0-9_]
\w{6} means exactly six word characters.
re.IGNORECASE means that you will exclude "Andrea", "andrea", "ANDREA" ...
Another way is to use your program logic - use all lines not matching Andrea and put them through a second regex to check for six characters. Or first check for at least six word characters, and then check that it does not match Andrea.
Negative lookahead assertion
(?!Andrea)
This is not exactly an inverted match, but it's the best you can directly do with regex. Not all platforms support them though.
If you want to do this in RegexBuddy, there are two ways to get a list of all lines not matching a regex.
On the toolbar on the Test panel, set the test scope to "Line by line". When you do that, an item List All Lines without Matches will appear under the List All button on the same toolbar. (If you don't see the List All button, click the Match button in the main toolbar.)
On the GREP panel, you can turn on the "line-based" and the "invert results" checkboxes to get a list of non-matching lines in the files you're grepping through.
I just came up with this method, which may be resource-intensive, but it works:
You can replace all characters which match the regex with an empty string.
This is a one-liner:
notMatched = re.sub(regex, "", string)
I used this because I was forced to use a very complex regex and couldn't figure out how to invert every part of it within a reasonable amount of time.
This will only return you the string result, not any match objects!
(?! is useful in practice, although strictly speaking, lookahead is not part of regular expressions as defined mathematically.
You can write an inverted regular expression manually.
Here is a program to calculate the result automatically.
Its result is machine-generated, which is usually much more complex than a hand-written one. But the result works.
If you have the possibility to do two regex matches for the inverse and join them together, you can use two capturing groups to first capture everything before your regex
^((?!yourRegex).)*
and then capture everything after your regex
(?<=yourRegex).*
This works for most regexes. One problem I discovered was when I had a quantifier like {2,4} at the end. Then you gotta get creative.
In Perl you can do:
process($line) if ($line !~ /Andrea/);

What are good regular expressions?

I have worked for 5 years, mainly on Java desktop applications accessing Oracle databases, and I have never used regular expressions. Now I'm on Stack Overflow and I see a lot of questions about them; I feel like I missed something.
For what do you use regular expressions?
P.S. Sorry for my bad English.
Consider an example in Ruby:
puts "Matched!" unless /\d{3}-\d{4}/.match("555-1234").nil?
puts "Didn't match!" if /\d{3}-\d{4}/.match("Not phone number").nil?
The "/\d{3}-\d{4}/" is the regular expression, and as you can see it is a VERY concise way of finding a match in a string.
Furthermore, using groups you can extract information, as such:
match = /([^@]*)@(.*)/.match("myaddress@domain.com")
name = match[1]
domain = match[2]
Here, the parentheses in the regular expression mark capturing groups, so you can see exactly WHAT the data is that you matched, and you can do further processing on it.
This is just the tip of the iceberg... there are many many different things you can do in a regular expression that makes processing text REALLY easy.
Regular Expressions (or Regex) are used to pattern match in strings. You can thus pull out all email addresses from a piece of text because they follow a specific pattern.
In some cases regular expressions are enclosed in forward-slashes and after the second slash are placed options such as case-insensitivity. Here's a good one :)
/(bb|[^b]{2})/i
Spoken, it can be read as "2 be or not 2 be".
The first part is the (brackets); they are split by the pipe | character, which equates to an or statement, so (a|b) matches "a" or "b". The first half of the piped area matches "bb". The second half's name I don't know, but it's the square brackets; they match anything that is not "b", which is why there is a roof symbol thingie (technical term) there. The squiggly brackets match a count of the things before them, in this case two characters that are not "b".
After the second / is an "i" which makes it case insensitive. Use of the start and end slashes is environment specific; sometimes you need them and sometimes you do not.
Two links that I think you will find handy for this are
regular-expressions.info
Wikipedia - Regular expression
Coolest regular expression ever:
/^1?$|^(11+?)\1+$/
It tests if a number is prime. And it works!!
N.B.: to make it work, a bit of set-up is needed; the number that we want to test has to be converted into a string of “1”s first, then we can apply the expression to test if the string does not contain a prime number of “1”s:
def is_prime(n)
str = "1" * n
return str !~ /^1?$|^(11+?)\1+$/
end
There’s a detailed and very approachable explanation over at Avinash Meetoo’s blog.
If you want to learn about regular expressions, I recommend Mastering Regular Expressions. It goes from the very basic concepts all the way up to how different engines work underneath. The last four chapters also give a dedicated chapter to each of PHP, .NET, Perl, and Java. I learned a lot from it, and still use it as a reference.
If you're just starting out with regular expressions, I heartily recommend a tool like The Regex Coach:
http://www.weitz.de/regex-coach/
I've also heard good things about RegexBuddy:
http://www.regexbuddy.com/
As you may know, Oracle now has regular expressions: http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/rischert_regexp_pt1.html. I have used the new functionality in a few queries, but it hasn't been as useful as in other contexts. The reason, I believe, is that regular expressions are best suited for finding structured data buried within unstructured data.
For instance, I might use a regex to find Oracle messages that are stuffed in a log file. It isn't possible to know where the messages are--only what they look like. So a regex is the best solution to that problem. When you work with a relational database, the data is usually pre-structured, so a regex doesn't shine in that context.
A regular expression (regex or regexp for short) is a special text string for describing a search pattern. You can think of regular expressions as wildcards on steroids. You are probably familiar with wildcard notations such as *.txt to find all text files in a file manager. The regex equivalent is .*\.txt$.
A great resource for regular expressions: http://www.regular-expressions.info
These RE's are specific to Visual Studio and C++ but I've found them helpful at times:
Find all occurrences of "routineName" with non-default params passed:
routineName\(:a+\)
Conversely to find all occurrences of "routineName" with only defaults:
routineName\(\)
To find code enabled (or disabled) in a debug build:
\#if._DEBUG*
Note that this will catch all the variants: ifdef, if defined, ifndef, if !defined
Validating strong passwords:
This one will validate a password of 5 to 10 alphanumeric characters, with at least one uppercase letter, one lowercase letter and one digit:
^(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])[a-zA-Z0-9]{5,10}$
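A quick check of that pattern in Python (my own test strings):
import re

pattern = re.compile(r"^(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])[a-zA-Z0-9]{5,10}$")

print(bool(pattern.match("Abc12")))          # True
print(bool(pattern.match("abc12")))          # False - no uppercase letter
print(bool(pattern.match("Abcdefgh12345")))  # False - longer than 10 characters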