Under what situations are regular expressions really the best way to solve the problem? - regex

I'm not sure if Jeff coined it, but it's the joke/saying that people who say "oh, I know, I'll use regular expressions!" now have two problems. I've always taken this to mean that people reach for regular expressions in very inappropriate contexts.
However, under what circumstances are regular expressions really the best answer? What problems are they really the best or maybe only way to solve a situation?

Regexes are good for:
Text format validation (email, URL, numbers)
Text search/substitution
Mappings (e.g. URL pattern to function call)
Filtering text (related to substitution)
Lexical analysis during parsing
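A couple of these in miniature - a rough Python sketch; the routes and handler names here are made up for illustration:

```python
import re

# Search/substitution: collapse runs of whitespace.
print(re.sub(r'\s+', ' ', 'too   much   whitespace'))  # 'too much whitespace'

# Mapping: URL pattern to function call (hypothetical routes/handlers).
routes = [
    (re.compile(r'^/users/(\d+)$'), 'show_user'),
    (re.compile(r'^/posts/(\d+)/comments$'), 'list_comments'),
]

def dispatch(path):
    for pattern, handler in routes:
        m = pattern.match(path)
        if m:
            return handler, m.groups()
    return None

print(dispatch('/users/42'))  # ('show_user', ('42',))
```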

They can be used to validate anything that has a pattern, like:
Social Security numbers
Telephone numbers (555-555-5555)
Email addresses (something@example.com)
IP addresses (but making sure one is valid is more complex)
All of those have patterns and are easily verifiable by regex (see the sketch at the end of this answer).
They are harder to use for input that follows logic rather than a pattern, such as a credit card number (which is checked with a checksum), but they can still be used for some client-side validation.
So the best ways?
To sanitize data entry on the client side before sanitizing it again on the server.
To do search-and-replace on strings that contain a pattern.
I'm sure I am missing a lot of other cases.
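For instance, here's a minimal Python sketch of the validation case. The patterns are deliberately naive; real-world email and IP validation needs far more care than this:

```python
import re

# Rough validation patterns for the examples above (not production-grade).
patterns = {
    'ssn':   re.compile(r'^\d{3}-\d{2}-\d{4}$'),
    'phone': re.compile(r'^\d{3}-\d{3}-\d{4}$'),
    'email': re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$'),
}

print(bool(patterns['phone'].match('555-555-5555')))           # True
print(bool(patterns['email'].match('something@example.com')))  # True
print(bool(patterns['ssn'].match('123-45-678')))               # False
```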

Regular expressions are a great way to parse text that doesn't already have a parser (unlike, say, XML, which does). I have used them to create a parser for the mod_rewrite syntax in .htaccess files, for example in my URL Rewriter project: http://www.codeplex.com/urlrewriter
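A toy Python sketch of the idea - pulling the pieces out of one mod_rewrite line. (A real .htaccess parser has to handle conditions, quoting, and comments; this deliberately doesn't.)

```python
import re

# Named groups for the three parts of a RewriteRule directive.
RULE = re.compile(
    r'^RewriteRule\s+(?P<pattern>\S+)\s+(?P<target>\S+)'
    r'(?:\s+\[(?P<flags>[^\]]*)\])?'
)

line = r'RewriteRule ^blog/(\d+)$ /blog.php?id=$1 [L,QSA]'
m = RULE.match(line)
print(m.group('pattern'), m.group('target'), m.group('flags'))
# ^blog/(\d+)$ /blog.php?id=$1 L,QSA
```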

They are really good when you want to be more specific than "*" or "?", like "3 letters, then 2 numbers, then a $ sign, then a period" - e.g. `[A-Za-z]{3}[0-9]{2}\$\.` in most flavors.
The quote is from an anti-Perl rant by Jamie Zawinski. I think Perl used to do regex really badly, but now its engine seems to be the standard for a lot of programs.
But the same sentiment still applies. If you don't know how to use regex, you'd better not try anything really fancy, otherwise you get one of these tags too (see the bronze list) ;o)

They are good for matching or finding text that takes a very specific and simple format. By "simple" I mean not nested and smaller than the entire HTML spec, for example.

They are primarily of value for highly structured text parsing. If you use named groups (an option in most mature regex engines), you get a phenomenally powerful and crisp way to handle strings.
Here's an example. netstat, across its various iterations on different Linux OSes and across versions, can return different results. Sometimes there is an extra column, sometimes there is a shift in the date/time format. Regexes give you a powerful way to handle that with a single expression. Couple that with named groups, and you can retrieve the data without hacks like:
1) split on spaces
2) ok, this netstat version is X, so I need to add 1 to all array references past column 5
3) ok, this netstat version is Y, so I need to make sure I use multiple array references for the date info
YUCK. Simple to fix in a regex :-)
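For example, here is a minimal Python sketch using named groups. The column layout and the sample line are illustrative assumptions, not a complete netstat parser:

```python
import re

# One pattern, with each field named; variations in spacing don't matter.
LINE = re.compile(
    r'^(?P<proto>tcp6?)\s+'
    r'(?P<recvq>\d+)\s+(?P<sendq>\d+)\s+'
    r'(?P<local>\S+)\s+(?P<remote>\S+)\s+'
    r'(?P<state>[A-Z_]+)'
)

sample = 'tcp        0      0 10.0.0.5:22      10.0.0.9:51514      ESTABLISHED'
m = LINE.match(sample)
print(m.group('local'), m.group('state'))  # 10.0.0.5:22 ESTABLISHED
```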

Related

How to store regex "literals" in Postgres?

I want to store regex pattern/option "literals" in a Postgres database, like:
/<pattern>/options
I think it's helpful to indicate the expected format and use of the text. Also, the application framework I'm using can coerce this kind of text into the proper Regex type.
I looked through the data types and provided extensions and didn't see anything specific. Am I missing one?
If there is no specialized type, is there a reasonable way to constrain TEXT so that it likely contains a regex (not to validate the regex itself, just to ensure there is text between forward slashes)? Does this work?
pattern TEXT CONSTRAINT is_regex CHECK (pattern LIKE '/%/%')
At the moment, I'm only using these literals in application code, which is why the TEXT to Regex transformation is very helpful. At some point, I might get better at CTEs and transform them back to regular TEXT (without forward-slashes or options) to be used in Postgres pattern matching functions.
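For what it's worth, the application-side coercion I mean is only a few lines. A rough Python sketch (the handled option letters are an assumption, and it assumes the options part contains no slash):

```python
import re

# Map /pattern/options literals to a compiled regex (illustrative flags only).
FLAG_MAP = {'i': re.IGNORECASE, 'm': re.MULTILINE, 's': re.DOTALL}

def parse_regex_literal(literal: str) -> re.Pattern:
    body, _, options = literal.rpartition('/')   # split off trailing options
    if not body.startswith('/'):
        raise ValueError(f'not a /pattern/options literal: {literal!r}')
    flags = 0
    for ch in options:
        flags |= FLAG_MAP[ch]
    return re.compile(body[1:], flags)

print(parse_regex_literal('/^ab+c$/i').match('ABBC'))  # matches
```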
PostgreSQL doesn't offer such a type (as of now), but generally speaking you have a few options to preserve database integrity (I assume you want this so you don't have to worry that data read from the database breaks your application because it isn't a valid regular expression).
Your best bet (which you already figured out) is to use a CHECK constraint, one way or another. If you plan to use this pattern in multiple places, I suggest using a domain type; that way, you don't have to define the constraint on multiple columns. Ironically, the best way to write such a CHECK constraint is to write a regexp pattern to match your regexp patterns (because there are multiple regexp implementations with slight differences). It obviously won't be perfect, but it might be good enough. E.g.
create domain likely_regexp as text
check (value ~ '^/([^/]*(\\/[^/]*)*[^\\])?/[a-z]*$');
But if you're okay with checking against PostgreSQL's own implementation, you can (ab)use the fact that CHECK constraints fail not only when the evaluated expression is false, but also when the expression throws (raises) an error. So you can call a regexp function to detect whether the value is actually a valid regular expression or not. Although you still have to split off the pattern and the options parts.
create domain pg_regexp as text
  check (regexp_replace('', replace(substring(value from '^/(.*)/'), '\/', '/'),
                        '', substring(value from '/([^/]*)$')) = '');
https://rextester.com/YFG18381

Combine multiple regexes into one / build small regex to match a set of fixed strings

The situation:
We created a tool, Google Analytics Referrer Spam Killer, which automatically adds filters to Google Analytics to filter out spam.
These filters exclude traffic that comes from certain spammy domains. Right now we have 400+ spammy domains in our list.
To remove the spam, we add a regex (like domain1.com|domain2.com|...) as a filter to Analytics and tell Analytics to ignore all traffic matching that filter.
The problem:
Google Analytics has a 255-character limit for each regex (one regex per filter). Because of that, we must create a lot of filters to cover all 400+ domains (currently 30+ filters). The problem is, there is another limit: the number of write operations per day. Each new filter costs 3 more write operations.
The question:
I want to find the shortest regex that exactly matches another regex.
For example, say you need to match the following strings:
`abc`, `abbc` and `aac`
You could match them with the following regexes: /^abc|abbc|aac$/, /^a(b|bb|a)c$/, /^a(bb?|a)c$/, etc..
Basically I'm looking for an expression which exactly matches /^abc|abbc|aac$/, but is shorter in length.
I found multiregexp, but as far as I can tell, it doesn't create a new regex out of another expression which I can use in Analytics.
Is there a tool which can optimize regexes for length?
I found this C tool which compiles on Linux: http://bisqwit.iki.fi/source/regexopt.html
Super easy:
$ ./regex-opt '123.123.123.123'
(123.){3}123
$ ./regex-opt 'abc|abbc|aac'
(aa|ab{1,2})c
$ ./regex-opt 'aback|abacus|abacuses|abaft|abaka|abakas|abalone|abalones|abamp'
aba(ck|ft|ka|lone|mp|(cu|ka|(cus|lon)e)s)
I wasn't able to run the tool suggested by @sln. It looks like it produces an even shorter regex.
I'm not aware of an existing tool for combining / compressing / optimising regexes. There may be one. Maybe you could build a finite-state machine out of the regex and then generate a regex back out of that?
You don't need to solve the problem for the general case of arbitrary regexes. I think it's a better bet to look at creating compact regexes that match any of a given set of fixed strings.
There may already be some existing code for making an optimised regex to match a given set of fixed strings; again, IDK.
To do it yourself, the simplest approach would be to sort your strings and look for common prefixes / suffixes ((afoo|bbaz|c)bar.com). Looking for common strings in the middle is less easy. You might want to look at algorithms used for lossless data compression for finding redundancy.
You'd ideally want to spot cases where you could use a range like foo[a-d] instead of foo(a|b|c|d), and various other things. A sketch of the prefix-sharing idea follows.
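To illustrate, here's a rough Python sketch (not an existing tool) of that prefix-sharing idea: build a trie from the fixed strings, then emit an alternation with common prefixes factored out.

```python
import re

def trie_regex(words):
    # Build a character trie; '' marks end-of-word.
    trie = {}
    for w in words:
        node = trie
        for ch in w:
            node = node.setdefault(ch, {})
        node[''] = {}

    def emit(node):
        end = '' in node
        branches = sorted(k for k in node if k)
        alts = [re.escape(ch) + emit(node[ch]) for ch in branches]
        if not alts:
            return ''
        body = alts[0] if len(alts) == 1 else '(?:' + '|'.join(alts) + ')'
        if end and body:
            return '(?:' + body + ')?'  # a word may also end here
        return body

    return emit(trie)

print(trie_regex(['abc', 'abbc', 'aac']))  # a(?:ac|b(?:bc|c))
print(trie_regex(['ab', 'abc']))           # ab(?:c)?
# Anchor it yourself when using: '^' + trie_regex(domains) + '$'
```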

How to author and manage very long regex patterns and reuse pattern blocks?

Is there any proven way to overcome the difficulty of authoring and managing large regex patterns in your code? Preferably in a visual tool? Is there any way to build up a pattern from smaller reusable pieces? I could not find any web-based regex visualizer that supports multiline regexes, for instance.
We are currently using a technique of splitting patterns and storing the pieces in variables, but this mixes languages - an architectural no-no for us - and also hinders the ability to paste the pattern into a visualizer.
I am using .NET/PowerShell/JavaScript - but I am interested in the flavor-agnostic perspective as well.
At my old job we used regex for everything. The best tools I found were the ones below:
Best regex editor in my opinion (it explains each segment and has a reference sheet): http://regex101.com
Best web multi-line regex editor: http://regexpal.com/
Best regex editor overall (a download, for the price of $40): http://www.regexbuddy.com/
As far as managing regexes goes, we used to keep all regexes in a properties file separate from the code, and the code loaded the property (regex) at run time. We also shared RegexBuddy files for exchanging regex patterns. There was one file in source control that had lines and lines of simple patterns for matching certain things. It helped in creating larger ones, using the smaller pieces. However, what I have learned is that basically all regexes need to be tweaked for your specific purpose. It is not as simple as piecing small ones together. The small ones just help you get started in the right direction.
Over here in flavor agnostic land, I sometimes do something like this (actual working code I just happened to be revisiting):
street = "(#{names}[A-Za-z0-9']+)((?:\\s+(?:#{StreetType.regexp}))?)"
space = '[\s.,]+'
at_a_street =
  '(?:and|&|&amp;|at|@|by|just\s+\w+\s+of|just\s+past|looking(?:\s+\w+)?\s+(?:at|to|towards?)|near)' +
  "#{space}#{street}"
between_streets =
  "(?:between|(?:betw?|btwn)\\.?)#{space}#{street}#{space}(?:and|&|&amp;)#{space}#{street}"
address = '(\b\d+)(?:\s*-\s*\d+|[a-z])?\s+' + street
@regexps = [
  /#{street}#{space}#{at_a_street}/i,
  /#{street}#{space}#{between_streets}/i,
  /#{address}/i,
  /#{address}#{space}#{at_a_street}/i,
  /#{address}#{space}#{between_streets}/i
]
Namely, break the regexp up into meaningful bits, give them comprehensible names, and concatenate them as necessary. (You need to think a little extra about whether each bit can be safely concatenated with others, e.g. watch out for greedy expressions at the end.)
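A rough Python rendering of the same approach, for comparison (the pieces here are simplified, not the Ruby ones above). re.VERBOSE lets the assembled pattern carry its own comments:

```python
import re

# Named pieces, assembled into one pattern.
SPACE  = r'[\s.,]+'
NUMBER = r'(?P<number>\b\d+)'
STREET = r"(?P<street>[A-Za-z0-9']+\s+(?:St|Ave|Blvd|Rd))"

ADDRESS = re.compile(
    f"""
    {NUMBER}    # house number
    {SPACE}
    {STREET}    # street name and type
    """,
    re.VERBOSE,
)

m = ADDRESS.search('221 Baker St, London')
print(m.group('number'), m.group('street'))  # 221 Baker St
```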

Need to create a gmail like search syntax; maybe using regular expressions?

I need to enhance the search functionality on a page listing user accounts. Rather than have multiple search boxes for each possible field, or a drop-down menu where the user can only search against one field, I'd like a single search box that uses a Gmail-like syntax. That's the best way I can describe it, and what I mean by a Gmail-like search syntax is being able to type the following into the input box:
username:bbaggins type:admin "made up plc"
When the form is submitted, the search string should be split into its separate parts, which will allow me to construct a SQL query. So, for example, type:admin would form part of the WHERE clause, so that it would find any record where the type field equals admin, and the same for username. The text in quotes may be a free-text search, but I'm not sure about that yet.
I'm thinking that a regular expression or two would be the best way to do this, but it's something I'm really not good at. Can anyone help construct a regular expression that could be used for this purpose? I've searched around for pointers, but either I don't know what to search for or it's not out there, as I couldn't find anything obvious. Maybe if I understood regular expressions better it would be easier :-)
Cheers,
Adam
No, you would not use regular expressions for this. Just split the string on spaces in whatever language you're using.
You don't necessarily have to use a regex. Regexes are powerful, but in many cases they're also slow, and they don't handle nested parameters very well. It would be easier for you to write a script that uses string manipulation to split the string and extract the keywords and field names.
If you want to experiment with regex, try an online regex tester. Find a tutorial and play around - it's fun, and you should quickly be able to produce useful regexes that find any words before or after a : character, or any phrases between " quotation marks.
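For instance, a minimal Python sketch of such a pattern; the field names are just the ones from the question, and this ignores edge cases like escaped quotes:

```python
import re

# key:value pairs, or double-quoted phrases.
TOKEN = re.compile(r'(\w+):(\S+)|"([^"]*)"')

def parse_query(q):
    fields, free_text = {}, []
    for key, value, phrase in TOKEN.findall(q):
        if key:
            fields[key] = value
        else:
            free_text.append(phrase)
    return fields, free_text

print(parse_query('username:bbaggins type:admin "made up plc"'))
# ({'username': 'bbaggins', 'type': 'admin'}, ['made up plc'])
```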
Thanks for the answers... I did start doing it without regex and just wondered if a regex would be simpler. It sounds like it wouldn't be, so I'll go back to the way I was doing it and test it again.
Good old Mr Bilbo is my go-to guy for any naming needs :-)
Cheers,
Adam

Regex misspellings

I have a regex created from a list in a database to match names of types of buildings in a game. The problem is typos: sometimes those writing instructions for their team in the game will misspell a building name, and obviously the regex will then not pick it up (e.g. spelling "University" as "Unversity").
Are there any suggestions on making a regex match misspellings of 1 or 2 letters?
The regex is dynamically generated and run on a local machine that can handle a lot more load, so as a last resort I could algorithmically create versions of each word with a letter missing, and then another set with letters added in.
I'm using PHP but I'd hope that any solution to this issue would not be PHP specific.
Allow me to introduce you to the Levenshtein distance, a measure of the difference between two strings: the number of single-character edits needed to transform one string into the other.
It's also built into PHP.
So, I'd split the input file by non-word characters, and measure the distance between each word and your target list of buildings. If the distance is below some threshold, assume it was a misspelling.
I think you'd have more luck matching this way than trying to craft regexes for each special case.
Google's implementation of "did you mean" by looking at previous results might also help:
How do you implement a "Did you mean"?
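A rough sketch of that split-and-compare approach, in Python for illustration (PHP's built-in levenshtein() would replace the small implementation below; the building list and threshold are made up):

```python
import re

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

BUILDINGS = ['University', 'Barracks', 'Stable']  # example target list

def find_buildings(text, max_distance=2):
    hits = []
    for word in re.split(r'\W+', text):
        for name in BUILDINGS:
            if word and levenshtein(word.lower(), name.lower()) <= max_distance:
                hits.append((word, name))
    return hits

print(find_buildings('build the Unversity next to the stable'))
# [('Unversity', 'University'), ('stable', 'Stable')]
```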
What is Soundex()? – Teifion
Soundex is similar to the levenshtein function Triptych mentions; it is a means of comparing strings by how they sound. See: http://us3.php.net/soundex
You could also look at metaphone and similar_text. I would have put this in a comment but I don't have enough rep yet to do that. :D
Back in the day, we sometimes used Soundex() for these problems.
You're in luck; the algorithms folks have done lots of work on approximate matching of regular expressions. The oldest of these tools is probably agrep originally developed at the University of Arizona and now available in a nice open-source version. You simply tell agrep how many mistakes you are willing to tolerate and it matches from there. It can also match other blocks of text besides lines. The link above has links to a newer, GPLed version of agrep and also a number of language-specific libraries for approximate matching of regular expressions.
This might be overkill, but Peter Norvig of Google has written an excellent article on writing a spell checker in Python. It's definitely worth a read and might apply to your case.
At the end of the article, he's also listed contributed implementations of the algorithm in various other languages.