Perl matching multiple capitalized words - regex

I'm doing a perl program (script?) that reads through a text file and identifies all names and categorizes them as either person, location, organization, or miscellaneous. I'm having trouble with things like New York or Pacific First Financial Corp. where there are multiple capitalized words in a row. I've been using:
/([A-Z][a-z]+)+/
to capture as many capitalized words in a row as there are on a given line. From what I understand the + will match 1 or more instances of such a pattern, but it's only matching one (i.e. New in New York). For New York, I can just repeat the [A-Z][a-z]+ twice but it doesn't find patterns with more than 2 capitalized words in a row. What am I doing wrong?
PS: Sorry if my use of vocabulary is off; I'm always so bad with that.

You were just missing the spacing between words.
The following matches whitespace before each word, except the first, so covers the cases you've described:
use strict;
use warnings;

while (<DATA>) {
    while (/(?=\w)((?:\s*[A-Z][a-z]+)+)/g) {
        print "$1\n";
    }
}
__DATA__
I'm doing a perl program (script?) that reads through a text file and identifies all names and categorizes them as either person, location, organization, or miscellaneous. I'm having trouble with things like New York or Pacific First Financial Corp. where there are multiple capitalized words in a row. I've been using:
to capture as many capitalized words in a row as there are on a given line. From what I understand the + will match 1 or more instances of such pattern, but it's only matching one (i.e. New in New York). For New York, I can just repeate the [A-Z][a-z]+ twice but it doesn't find patterns with more than 2 capitalized words in a row. What am I doing wrong?
PS Sorry if my use of vocabulary is off I'm always so bad with that.
Outputs:
New York
Pacific First Financial Corp
From
New
New York
For New York
What
Sorry

There's a CPAN module called Lingua::EN::NamedEntity which seems to do what you want. Might be worth taking a quick look at it.

The How
The pattern you provide in your question, /([A-Z][a-z]+)+/, matches one or more capitalised words given consecutively, like this:
This
ThisAndThat
but it won't match this:
Not This
It actually matches each of these individually:
Not
This
So let's modify the regex to /(?:[A-Z][a-z]+)(?:\s*[A-Z][a-z]+)*/. That is a bit of a mouthful, so let's break it down a piece at a time:
(?: ... ) groups without capturing, which is more efficient
[A-Z][a-z]+ matches a capitalised word
\s*[A-Z][a-z]+ matches a subsequent capitalised word, optionally preceded by whitespace
The What - TL;DR
Putting this all together, we now have a regex that matches a capitalised word, then any subsequent ones with or without whitespace separation. So it matches:
This
ThisAndThat
Not This
We can now abstract this regex a bit to avoid repetition and use it in code like so:
use feature 'say';    # enables say

my $CAPS_WORD = qr/[A-Z][a-z]+/;
my $FULL_RE   = qr/(?:$CAPS_WORD)(?:\s*$CAPS_WORD)*/;

$string =~ /$FULL_RE/;
say $&;
The Why
This answer gives an alternative to the already great one given by @Miller. Both will work fine, but this solution is quite a bit faster since it doesn't use a lookahead. On a simple benchmark it is faster by a factor of about 7:
$ time ./bench-simple.pl
Running 100000 runs
800000 matches
real 0m2.869s
user 0m2.860s
sys 0m0.008s
$ time ./bench-lookahead.pl
Running 100000 runs
800000 matches
real 0m19.845s
user 0m19.831s
sys 0m0.012s

Related

Regex match characters when not preceded by a string

I am trying to match spaces just after punctuation marks so that I can split up a large corpus of text, but I am seeing some common edge cases with places, titles and common abbreviations:
I am from New York, N.Y. and I would like to say hello! How are you today? I am well. I owe you $6. 00 because you bought me a No. 3 burger. -Sgt. Smith
I am using this with the re.split function in Python 3. I want to get this:
["I am from New York, N.Y. and I would like to say hello!",
"How are you today?",
"I am well.",
"I owe you $6. 00 because you bought me a No. 3 burger."
"-Sgt. Smith"]
This is currently my regex:
(?<=[\.\?\!])(?<=[^A-Z].)(?<=[^0-9].)(?<=[^N]..)(?<=[^o].)
I decided to try to fix the No. first, with the last two conditions. But it relies on matching the N and the o independently, which I think is going to cause false positives elsewhere. I cannot figure out how to get it to match just the string No before the period. I will then use a similar approach for Sgt. and any other "problem" strings I come across.
I am trying to use something like:
(?<=[\.\?\!])(?<=[^A-Z].)(?<=[^0-9].)^(?<=^No$)
But it doesn't capture anything after that. How can I get it to exclude certain strings which I expect to have a period in them, and not capture them?
Here is a regexr of my situation: https://regexr.com/4sgcb
This is the closest regex I could get (the trailing space is the one we match):
(?<=(?<!(No|\.\w))[\.\?\!])(?! *\d+ *)
which will split also after Sgt. for the simple reason that a lookbehind assertion has to be fixed width in Python (what a limitation!).
This is how I would do it in vim, which has no such limitation (the trailing space is the one we match):
\(\(No\|Sgt\|\.\w\)\#<![?.!]\)\( *\d\+ *\)\#!\zs
For the OP as well as the casual reader, this question and the answers to it are about lookarounds and are very interesting.
You may consider a matching approach; it will offer you better control over the entities you want to count as single words, not as sentence break signals.
Use a pattern like
\s*((?:\d+\.\s*\d+|(?:No|M[rs]|[JD]r|S(?:r|gt))\.|\.(?!\s+-?[A-Z0-9])|[^.!?])+(?:[.?!]|$))
See the regex demo
It is very similar to what I posted here, but it contains a pattern to match poorly formatted float numbers, adds No. and Sgt. abbreviation support, and handles strings that do not end with final sentence punctuation better.
Python demo:
import re

p = re.compile(r'\s*((?:\d+\.\s*\d+|(?:No|M[rs]|[JD]r|S(?:r|gt))\.|\.(?!\s+-?[A-Z0-9])|[^.!?])+(?:[.?!]|$))')
s = "I am from New York, N.Y. and I would like to say hello! How are you today? I am well. I owe you $6. 00 because you bought me a No. 3 burger. -Sgt. Smith"
for m in p.findall(s):
    print(m)
Output:
I am from New York, N.Y. and I would like to say hello!
How are you today?
I am well.
I owe you $6. 00 because you bought me a No. 3 burger.
-Sgt. Smith
Pattern details
\s* - matches 0 or more whitespace characters (used to trim the results)
(?:\d+\.\s*\d+|(?:No|M[rs]|[JD]r|S(?:r|gt))\.|\.(?!\s+-?[A-Z0-9])|[^.!?])+ - one or more occurrences of several alternatives:
\d+\.\s*\d+ - 1+ digits, ., 0+ whitespaces, 1+ digits
(?:No|M[rs]|[JD]r|S(?:r|gt))\. - abbreviated strings like No., Mr., Ms., Jr., Dr., Sr., Sgt.
\.(?!\s+-?[A-Z0-9]) - matches a dot not followed by 1 or more whitespaces and then an optional - and an uppercase letter or digit
| - or
[^.!?] - any character but ., !, or ?
(?:[.?!]|$) - a ., !, or ?, or the end of the string.
As mentioned in my comment above, if you are not able to define a fixed set of edge cases, this might not be possible without false positives or false negatives. Again, without context you are not able to distinguish between abbreviations like "-Sgt. Smith" and ends of sentences like "Sergeant is often times abbreviated as Sgt. This makes it shorter.".
However, if you can define a fixed set of edge cases, it's probably easier and much more readable to do this in multiple steps (a short sketch of these steps follows below).
1. Identify your edge cases
For example, you can distinguish "I'll have a No. 3" and "No. I am your father" by checking for a subsequent number. So you would identify that edge case with a regex like this: No. \d. (Again, context matters. Sentences like "Is 200 enough? No. 200 is not enough." will still give you a false positive.)
2. Mask your edge cases
For each edge case, mask the string with a respective string that will 100% not be part of the original text. E.g. "No." => "======NUMBER======"
3. Run your algorithm
Now that you got rid of your unwanted punctuations, you can run a simpler regex like this to identify the true positives: [\.\!\?]\s
4. Unmask your edge cases
Turn "======NUMBER======" back into "No."
Doing it with only one regex will be tricky - as stated in comments, there are lots of edge cases.
Myself, I would do it in three steps:
Replace spaces that should stay with some special character (re.sub)
Split the text (re.split)
Replace the special character with space
For example:
import re
from pprint import pprint

zero_width_space = '\u200B'
s = 'I am from New York, N.Y. and I would like to say hello! How are you today? I am well. I owe you $6. 00 because you bought me a No. 3 burger. -Sgt. Smith'
s = re.sub(r'(?<=\.)\s+(?=[\da-z])|(?<=,)\s+|(?<=Sgt\.)\s+', zero_width_space, s)
s = re.split(r'(?<=[.?!])\s+', s)
pprint([line.replace(zero_width_space, ' ') for line in s])
Prints:
['I am from New York, N.Y. and I would like to say hello!',
'How are you today?',
'I am well.',
'I owe you $6. 00 because you bought me a No. 3 burger.',
'-Sgt. Smith']

Regular expressions combined with list for numbers written as words

I am trying to extract information about people wounded from several articles. The issue is that there are different ways of conveying that information in journalistic language, since it can be written in numbers or in words.
For instance:
`Security forces had *wounded two* gunmen inside the museum but that two or three accomplices might still be at large.`
`The suicide bomber has wounded *four men* last night.`
`*Dozens* were wounded in a terrorist attack.`
I noticed that most of the time numbers from 1-10 are written in words rather than in digits. I was wondering how to extract them without writing any convoluted code, just a regular expression listing the words from one to ten.
Shall I use a list? And how would it be included?
This is the pattern I used so far for extracting the number of people wounded with digit:
import re

text_open = open("News")
text_read = text_open.read()
pattern = r"wounded (\d+)|(\d+) were wounded|(\d+) injured|(\d+) people were wounded|wounding (\d+)|wounding at least (\d+)"
result = re.findall(pattern, text_read)
print(result)
Try this:
import re

regex = r"(\w)+\s(?=were)|(?<=wounded|injured)\s[\w]{3,}"
test_str = ("`Security forces had wounded two gunmen inside the museum but that two or three accomplices might still be at large.`\n\n"
            "`The suicide bomber has wounded four men last night.`\n\n"
            "`Dozens were wounded in a terrorist attack.")
matches = re.finditer(regex, test_str)
for match in matches:
    print(match.group().strip())
Output:
two
four
Dozens
\w+\s(?=were) : the ?= looks ahead for were, and the word before it is captured using \w
| : or
(?<=wounded|injured)\s\w{3,} : the ?<= looks behind, capturing a word only if wounded or injured occurs before it; {3,} means the word must be 3 or more characters long, simply to avoid capturing short words like in (and every numeric word has at least 3 letters, so it's fine to use).
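If you then want to turn the captured number words into actual integers, one option is to keep the words from one to ten in a list or dict and build the regex alternation from it. The following is only a sketch of that idea; the word_to_num mapping and the sample text are just for illustration:
import re

# Hypothetical mapping from number words to integers (illustration only).
word_to_num = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
}

# Build the alternation from the word list, and also accept plain digits.
number_pattern = r"\d+|" + "|".join(word_to_num)
pattern = re.compile(r"wounded\s+(%s)|(%s)\s+were wounded" % (number_pattern, number_pattern),
                     re.IGNORECASE)

text = ("Security forces had wounded two gunmen inside the museum but that two or three "
        "accomplices might still be at large. The suicide bomber has wounded four men last night.")
for m in pattern.finditer(text):
    token = (m.group(1) or m.group(2)).lower()
    count = int(token) if token.isdigit() else word_to_num[token]
    print(token, "->", count)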

Regex for words that don't differ by only one letter

I want to create a series of puzzle games where you change one letter in a word to create a new word, with the aim of reaching a given target word. For example, to change "this" to "that":
this
thin
than
that
What I want to do is create a regex which will scan a list of words and choose all those that do not match the current word by all but one letter. For example, if my starting word is "pale" and my list of words is...
pale
male
sale
tale
pile
pole
pace
page
pane
pave
palm
peal
leap
play
help
pack
... I want all the words from "peal" to "pack" to be selected. This means that I can delete them from my list, leaving only the words that could be the next match. (It's OK for "pale" itself to be unselected.)
I can do this in parts:
^.(?!ale).{3}\n selects words not like "*ale"
^.(?<!p).{3}\n|^.{2}(?!le).{2}\n selects words not like "p*le"
^.{2}(?<!pa).{2}\n|^.{3}(?!e).\n selects words not like "pa*e"
^.{3}(?<!pal).\n selects words not like "pal*".
However, when I put them together...
^.(?!ale).{3}\n|^.(?<!p).{3}\n|^.{2}(?!le).{2}\n|^.{2}(?<!pa).{2}\n|^.{3}(?!e).\n|^.{3}(?<!pal).\n
... everything but "pale" is matched.
I need some way to create an AND relationship between the different regexes, or (more likely) a completely different approach.
You can use the Python regex module that allows fuzzy matching:
>>> import regex
>>> regex.findall(r'(?:pale){s<=1}', "male sale tale pile pole pace page pane pave palm peal leap play help pack")
['male', 'sale', 'tale', 'pile', 'pole', 'pace', 'page', 'pane', 'pave', 'palm']
In this case, you want a substitution count of 0 or 1 to count as a match.
Or consider the TRE library and the command line agrep which supports a similar syntax.
Given:
$ echo $s
male sale tale pile pole pace page pane pave palm peal leap play help pack
You can filter to a list of a single substitution:
$ echo $s | tr ' ' '\n' | agrep '(?:pale){ 1s <2 }'
male
sale
tale
pile
pole
pace
page
pane
pave
palm
Here's a solution that uses cool Python tricks and no regex:
def almost_matches(word1, word2):
    return sum(map(str.__eq__, word1, word2)) == 3

for word in "male sale tale pile pole pace page pane pave palm peal leap play help pack".split():
    print(almost_matches("pale", word))
A completely different approach: Levenshtein distance
...the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other.
PHP example:
$words = array(
    "pale",
    "male",
    "sale",
    "tale",
    "pile",
    "pole",
    "pace",
    "page",
    "pane",
    "pave",
    "palm",
    "peal",
    "leap",
    "play",
    "help",
    "pack"
);

foreach ($words as $word)
    if (levenshtein("pale", $word) > 1)
        echo $word."\n";
This assumes the word on the first line is the keyword. Just a brute force parallel letter-match and count gets the job done:
awk 'BEGIN{FS=""}
NR==1{n=NF;for(i=1;i<=n;++i)c[i]=$i}
NR>1{j=0;for(i=1;i<=n;++i)j+=c[i]==$i;if(j<n-1)print}'
A general regexp solution would need to be a two-stepper, I think: generate the regexp from the keyword in the first step, then run that regexp against the file in the second step.
By the way, the way to do an "and" of regexps is to chain lookaheads together (and the lookaheads don't need to be as complicated as the ones you had above, I think):
^(?!.ale)(?!p.le)(?!pa.e)(?!pal.)
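To illustrate that two-step idea, here is a minimal Python sketch; the helper name build_filter_regex is made up for this example, and it simply generates the stacked-lookahead pattern from a keyword and applies it to the word list:
import re

def build_filter_regex(keyword):
    # One lookahead per position: reject any word that matches the keyword
    # everywhere except possibly at that one position.
    lookaheads = "".join(
        "(?!%s.%s)" % (re.escape(keyword[:i]), re.escape(keyword[i + 1:]))
        for i in range(len(keyword))
    )
    # Anchor, and require the candidate to be the same length as the keyword.
    return re.compile("^%s.{%d}$" % (lookaheads, len(keyword)))

words = "pale male sale tale pile pole pace page pane pave palm peal leap play help pack".split()
pattern = build_filter_regex("pale")
print([w for w in words if pattern.match(w)])
# Expected output under these assumptions: ['peal', 'leap', 'play', 'help', 'pack']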

Regex parse with Alteryx

One of the columns has the data as below and I only need the suburb name, not the state or postcode.
I'm using Alteryx and tried the regex (\<\w+\>)\s\<\w+\>, but only a few records make it into the new column.
Input:
CABRAMATTA
CANLEY HEIGHTS
ST JOHNS PARK
Parramatta NSW 2150
Claymore 2559
CASULA
Output:
CABRAMATTA
CANLEY HEIGHTS
ST JOHNS PARK
Parramatta
Claymore
CASULA
This regex matches all letter-words up to but not including an Australian state abbreviation (since the addresses are clearly Australian):
( ?(?!(VIC|NSW|QLD|TAS|SA|WA|ACT|NT)\b)\b[a-zA-Z]+)+
See demo
The negative look ahead includes a word boundary to allow suburbs that start with a state abbreviation (see demo).
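As a quick sanity check of that pattern outside Alteryx, here is a small Python sketch (assuming the pattern behaves the same way in Alteryx's regex engine for this simple case):
import re

# The suggested pattern, tried here with Python's re against the sample rows.
pattern = re.compile(r"( ?(?!(VIC|NSW|QLD|TAS|SA|WA|ACT|NT)\b)\b[a-zA-Z]+)+")

rows = [
    "CABRAMATTA",
    "CANLEY HEIGHTS",
    "ST JOHNS PARK",
    "Parramatta NSW 2150",
    "Claymore 2559",
    "CASULA",
]

for row in rows:
    match = pattern.search(row)
    # group(0) is the whole run of words up to the state abbreviation / postcode
    print(match.group(0) if match else "")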
Expanding on Bohemian's answer, you can use groupings to do a REGEX_Replace in Alteryx. So:
REGEX_Replace([Field1], "(.*)(\VIC|NSW|QLD|TAS|SA|WA|ACT|NT)+(\s*\d+)" , "\1")
This will grab anything that matches in the first group (so just the suburb). The second and third groups match the state and the zip. Not a perfect regex, but should get you most of the way there.

Regex: match when string has repeated letter pattern

I'm using the Regex interpreter found in XYplorer file browser. I want to match any string (in this case a filename) that has repeated groups of 'several' characters. More specifically, I want a match on the string:
jack johnny - mary joe ken johnny bill
because it has 'johnny' at least twice. Note that it has spaces and a dash too.
It would be nice to be able to specify the length of the group to match, but in general 4, 5 or 6 will do.
I have looked at several previous questions here, but either they are for specific patterns or involve some language as well. The one that almost worked is:
RegEx: words with two letters repeated twice (eg. ABpoiuyAB, xnvXYlsdjsdXYmsd)
where the answer was:
\b\w*(\w{2})\w*\1
However, this fails when there are spaces in the strings.
I'd also like to limit my searches to .jpg files, but XYplorer has a built-in filter to only look at image files so that isn't so important to me here.
Any help will be appreciated, thanks.
EDIT -
The regex by OnlineCop below answered my original question, thanks very much:
(\b\w+.*\b).*(\1)
I see that it matches words, not arbitrary string chunks, but that works for my present need. And I am not interested in capturing anything, just in detecting a match.
As a refinement, I wonder if it can be changed or extended to allow me to specify the length of words (or string chunks) that must be the same in order to declare a match. So, if I specified a match length of 5 and my filenames are:
1) jack john peter paul mary johnnie.jpg
2) jack johnnie peter paul mary johnnie.jpg
the first one would not match since no substring of five characters or more is repeated. The second one would match since 'johnnie' is repeated and is more than 5 chars long.
Do you wish to capture the word 'johnny' or the stuff between them (or both)?
This example shows that it selects everything from the first 'johnny' to the last, but it does not capture the stuff between:
Re: (\b\w+\b).*(\1)
Result: jack bill
This example allows some whitespace between names/words:
Re: (\b\w+.*\b).*(\1)
String: Jackie Chan fought The Dragon who was fighting Jackie Chan
Result: Jackie Chan Jackie Chan
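Regarding the refinement about a minimum length: a small tweak to the word-based idea is to require at least N word characters in the captured chunk. This is only a sketch (tested with Python's re rather than XYplorer, and it still matches whole words, not arbitrary string chunks):
import re

# One possible refinement for the length requirement: demand at least min_len
# word characters in the repeated chunk via the {5,} quantifier. Shown with
# Python's re only to demonstrate the pattern; XYplorer's engine may differ
# in details.
min_len = 5
pattern = re.compile(r"(\b\w{%d,}\b).*\b\1\b" % min_len)

names = [
    "jack john peter paul mary johnnie.jpg",      # no 5+ character chunk repeats
    "jack johnnie peter paul mary johnnie.jpg",   # "johnnie" repeats
]

for name in names:
    print(name, "->", bool(pattern.search(name)))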
Use Perl:
#!/usr/bin/perl
use strict;
use warnings;

while ( my $line = <STDIN> ) {
    chomp $line;
    my @words = split( /\s+/, $line );
    my %seen;
    foreach my $word (@words) {
        if ( $seen{$word} ) { print "Match: $line\n"; last }
        $seen{$word}++;
    }
}
And yes, it's not as neat as a one-line regexp, but it's also hopefully a bit clearer about what's going on.