Exception handling for regular expressions in R - regex

I've found several related questions, but none that solves my problem; please let me know if I'm missing a question that addresses this.
Essentially, I want to use a regular expression to find a pattern, but with an exception based on the preceding characters. For example, I have the following character vector ("muffins") and I want to match the names ("Sarah", "Muffins", and "Bob"):
muffins
[1] "Dear Sarah,"
[2] "I love your dog, Muffins, who is adorable and very friendly. However, I cannot say I enjoy the \"muffins\" he regularly leaves in my front yard. Please consider putting him on a leash outside and properly walking him like everyone else in the neighborhood."
[3] "Sincerely,"
[4] "Bob"
My approach was to search for capitalized words and then exclude words capitalized for grammatical reasons, such as those at the beginning of a sentence.
pattern = "\\b[[:upper:]]\\w+\\b"
m = gregexpr(pattern,muffins)
regmatches(muffins,m)
This pattern gets me most of the way, returning:
[[1]]
[1] "Dear" "Sarah"
[[2]]
[1] "Muffins" "However" "Please"
[[3]]
[1] "Sincerely"
[[4]]
[1] "Bob"
and I can identify some of the sentence beginnings with:
pattern2 = "[.]\\s[[:upper:]]\\w+\\b"
m = gregexpr(pattern2,muffins)
regmatches(muffins,m)
but I can't seem to do both simultaneously, where I say I want pattern where pattern2 is not the case.
I've tried several combinations that I thought would work, but with little success. A few of the ones I tried:
pattern2 = "(?<![.]\\s[[:upper:]]\\w+\\b)(\\b[[:upper:]]\\w+\\b)"
pattern2 = "(^[.]\\s[[:upper:]]\\w+\\b)(\\b[[:upper:]]\\w+\\b)"
Any advice or insight would be greatly appreciated!

You may be looking for a negative lookbehind.
pattern = "(?<!\\.\\s)\\b[[:upper:]]\\w+\\b"
m = gregexpr(pattern,muffins, perl=TRUE)
regmatches(muffins,m)
# [[1]]
# [1] "Dear" "Sarah"
#
# [[2]]
# [1] "Muffins"
#
# [[3]]
# [1] "Sincerely"
#
# [[4]]
# [1] "Bob"
The lookbehind part (?<!\\.\\s) makes sure there is not a period and a space immediately before the match.
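One detail worth flagging: the perl=TRUE matters, since lookbehind is PCRE syntax and base R's default (TRE) engine rejects it. A minimal sketch with made-up input showing the lookbehind excluding a sentence-initial capital:

```r
x <- c("End. Start of a sentence", "Dear Sarah,")
# perl = TRUE selects the PCRE engine, which understands (?<!...)
m <- gregexpr("(?<!\\. )\\b[[:upper:]]\\w+\\b", x, perl = TRUE)
regmatches(x, m)
# [[1]]
# [1] "End"
#
# [[2]]
# [1] "Dear"  "Sarah"
```

"Start" is dropped because it is immediately preceded by ". ", while "End" and the two names are kept.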

The regex below matches only the names Bob, Sarah and Muffins:
(?<=^)[A-Z][a-z]+(?=$)|(?<!\. )[A-Z][a-z]+(?=,[^\n])|(?<= )[A-Z][a-z]+(?=,$)

Trying to use regular expressions to identify names is a problem in itself: it cannot be done reliably, because matching names in arbitrary data is very complicated. If extracting these names is your goal, you need to approach this differently instead of simply matching an uppercase letter followed by word characters.
Considering your vector is as you posted in your question:
x <- c('Dear Sarah,',
'I love your dog, Muffins, who is adorable and very friendly. However, I cannot say I enjoy the "muffins" he regularly leaves in my front yard. Please consider putting him on a leash outside and properly walking him like everyone else in the neighborhood.',
'Sincerely',
'Bob')
m = regmatches(x, gregexpr('(?<!\\. )[A-Z][a-z]{1,7}\\b(?! [A-Z])', x, perl=T))
Filter(length, m)
# [[1]]
# [1] "Sarah"
# [[2]]
# [1] "Muffins"
# [[3]]
# [1] "Bob"

Related

Positive look-behind in R that includes non-ascii characters

I am trying to extract the first group of non-whitespace characters that follows an Arabic string for each text in a set of about 2,100 total texts. Some of these texts contain the string, while others do not. This would be a very easy task, using str_extract from the stringr package, if the string were in English. However, for some reason this function doesn't work when using an Arabic string within the look-behind pattern:
library(stringr)
test_texts <- c(
"My text كلمة containing some Arabic",
"My text كلمة again containing some Arabic",
"My text that doesn't contain any Arabic"
)
str_extract(test_texts, "(?<=text )\\S+")
# [1] "كلمة" "كلمة" "that"
str_extract(test_texts, "(?<=containing )\\S+")
# [1] "some" "some" NA
str_extract(test_texts, "(?<=كلمة )\\S+") #returns NAs even though string is there
# [1] NA NA NA
Note that this works if I'm not using a look-behind pattern:
str_extract(test_texts, "كلمة \\S+")
# [1] "كلمة containing" "كلمة again" NA
Why does the Arabic mess things up only when using a look-behind pattern?
I am using R version 3.2.3, on OS X 10.11.3, and stringr version 1.0.0.
It seems there is some issue with how str_extract processes the right-to-left text inside the positive lookbehind. As a workaround, you may use str_match with a regex that has a capturing group around the subpattern capturing the value you need:
> res <- str_match(test_texts, "كلمة +(\\S+)")
> res[,2]
[1] "containing" "again" NA
This solution allows matching the non-whitespace chunk even if there is more than one space after the Arabic word.
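For reference, the same capturing-group workaround can be done in base R with regexec, which returns the full match plus each group (a sketch under the same assumption that the group sidesteps the lookbehind issue):

```r
test_texts <- c(
  "My text كلمة containing some Arabic",
  "My text كلمة again containing some Arabic",
  "My text that doesn't contain any Arabic"
)
m <- regexec("كلمة +(\\S+)", test_texts)
# Element 2 of each match is the first capturing group; non-matches
# come back as length-0 vectors, so default those to NA:
sapply(regmatches(test_texts, m),
       function(g) if (length(g) >= 2) g[2] else NA_character_)
# [1] "containing" "again"      NA
```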
You can grep for non-ascii characters like this:
str_extract(test_texts, "[^\001-\177]+")
[1] "كلمة" "كلمة" NA
str_extract(test_texts, "(?<=[^\001-\177] )\\S+")
[1] "containing" "again" NA
And this seems to work: just add brackets to what you had. This may not be sufficient either, since a character class matches its characters in any order:
str_extract(test_texts, "(?<=[كلمة] )\\S+")
[1] "containing" "again" NA

Unable to replace string with back reference using gsub in R

I am trying to replace some text in a character vector using regex in R where, if there is a set of letters inside brackets, the bracket content should replace the whole thing. So, given the input:
tst <- c("85", "86 (TBA)", "87 (LAST)")
my desired output would be equivalent to c("85", "TBA", "LAST")
I tried gsub("\\(([[:alpha:]])\\)", "\\1", tst) but it didn't replace anything. What do I need to correct in my regular expression here?
I think you want
gsub(".*\\(([[:alpha:]]+)\\)", "\\1", tst)
# [1] "85" "TBA" "LAST"
Your first expression was trying to match exactly one alpha character rather than one-or-more. I also added the ".*" to capture the beginning part of the string so it gets replaced as well, otherwise, it would be left untouched.
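To see why the leading ".*" matters: without it, gsub only replaces the matched bracketed span and leaves the prefix untouched. A quick illustration:

```r
tst <- c("85", "86 (TBA)", "87 (LAST)")
# Without ".*", only "(TBA)" etc. is replaced; "86 " survives:
gsub("\\(([[:alpha:]]+)\\)", "\\1", tst)
# [1] "85"      "86 TBA"  "87 LAST"
```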
gsub("(?=.*\\([^)]*\\)).*\\(([^)]*)\\)", "\\1", tst, perl=TRUE)
## [1] "85" "TBA" "LAST"
You can try this. See the demo; replace by \1.
https://regex101.com/r/sH8aR8/38
The following would work. Note that whitespace within the brackets may be problematic:
A<-sapply(strsplit(tst," "),tail,1)
B<-gsub("\\(|\\)", "", A)
I like the purely regex answers better, but I'm showing a solution using the qdapRegex package that I maintain, as the result is pretty speedy and easy to remember and generalize. It pulls out the strings that are in brackets and then replaces any NA (no bracket) with the original value. Note that the result is a list; you'd need unlist to match your desired output.
library(qdapRegex)
m <- rm_round(tst, extract=TRUE)
m[is.na(m)] <- tst[is.na(m)]
## [[1]]
## [1] "85"
##
## [[2]]
## [1] "TBA"
##
## [[3]]
## [1] "LAST"

Can I use an OR statement to indicate the pattern in stringr's str_extract_all function?

I'm looking at a number of cells in a data frame and am trying to extract any one of several sequences of characters; there's only one of these sequences per cell.
Here's what I mean:
dF$newColumn = str_extract_all(string = "dF$column1", pattern ="sequence_1|sequence_2")
Am I screwing the syntax up here? Can I pull this sort of thing with stringr? Please rectify my ignorance!
Yes, you can use | since it denotes logical or in regex. Here's an example:
vec <- c("abc text", "text abc", "def text", "text def text")
library(stringr)
str_extract_all(string = vec, pattern = "abc|def")
The result:
[[1]]
[1] "abc"
[[2]]
[1] "abc"
[[3]]
[1] "def"
[[4]]
[1] "def"
However, in your command, you should replace "dF$column1" with dF$column1 (without quotes).
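Putting that together with a hypothetical data frame (dF and its columns here are made up for illustration): since str_extract_all returns a list, str_extract is often the better fit when there is exactly one match per cell, because it returns a plain character vector that drops straight into a column:

```r
library(stringr)
dF <- data.frame(
  column1 = c("x sequence_1 y", "sequence_2 z", "no match here"),
  stringsAsFactors = FALSE
)
# One match per cell, so str_extract gives a character vector (NA when absent)
dF$newColumn <- str_extract(dF$column1, "sequence_1|sequence_2")
dF$newColumn
# [1] "sequence_1" "sequence_2" NA
```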

Split on first/nth occurrence of delimiter

I am trying something I thought would be easy. I'm looking for a single regex solution (though others are welcomed for completeness). I want to split on n occurrences of a delimiter.
Here is some data:
x <- "I like_to see_how_too"
pat <- "_"
Desired outcome
Say I want to split on first occurrence of _:
[1] "I like" "to see_how_too"
Say I want to split on second occurrence of _:
[1] "I like_to see" "how_too"
Ideally, the solution would be a regex one-liner generalizable to the nth occurrence, used with strsplit as a single regex.
Here's a solution that doesn't fit my parameter of a single regex that works with strsplit:
x <- "I like_to see_how_too"
y <- "_"
n <- 1
loc <- gregexpr("_", x)[[1]][n]
c(substr(x, 1, loc-1), substr(x, loc + 1, nchar(x)))
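That substring approach generalizes to a small helper (split_at_nth is a made-up name; it assumes a single-character delimiter matched literally):

```r
split_at_nth <- function(x, delim, n) {
  # Position of the nth literal occurrence of delim in x
  loc <- gregexpr(delim, x, fixed = TRUE)[[1]][n]
  if (is.na(loc) || loc < 0) return(x)  # fewer than n delimiters: leave unsplit
  c(substr(x, 1, loc - 1), substr(x, loc + 1, nchar(x)))
}
split_at_nth("I like_to see_how_too", "_", 1)
# [1] "I like"         "to see_how_too"
split_at_nth("I like_to see_how_too", "_", 2)
# [1] "I like_to see" "how_too"
```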
Here is another solution using the gsubfn package and some regex-fu. To change the nth occurrence of the delimiter, simply swap the number placed inside the range quantifier, {n}.
library(gsubfn)
x <- 'I like_to see_how_too'
strapply(x, '((?:[^_]*_){1})(.*)', c, simplify =~ sub('_$', '', x))
# [1] "I like" "to see_how_too"
If you would like the nth occurrence to be user defined, you could use the following:
n <- 2
re <- paste0('((?:[^_]*_){',n,'})(.*)')
strapply(x, re, c, simplify =~ sub('_$', '', x))
# [1] "I like_to see" "how_too"
Non-Solution
Since R is using PCRE, you can use \K to remove everything that matches the pattern before \K from the main match result.
Below is the regex to split the string at the 3rd _
^[^_]*(?:_[^_]*){2}\K_
If you want to split at the nth occurrence of _, just change 2 to (n - 1).
Demo on regex101
That was the plan. However, strsplit seems to think differently.
Actual execution
Demo on ideone.com
x <- "I like_to see_how_too but_it_seems to_be_impossible"
strsplit(x, "^[^_]*(?:_[^_]*)\\K_", perl=TRUE)
strsplit(x, "^[^_]*(?:_[^_]*){1}\\K_", perl=TRUE)
strsplit(x, "^[^_]*(?:_[^_]*){0}\\K_", perl=TRUE)
# strsplit(x, "^[^_]*(?:_[^_]*)\\K_", perl=TRUE)
# [[1]]
# [1] "I like_to see" "how_too but" "it_seems to" "be_impossible"
# strsplit(x, "^[^_]*(?:_[^_]*){1}\\K_", perl=TRUE)
# [[1]]
# [1] "I like_to see" "how_too but" "it_seems to" "be_impossible"
# strsplit(x, "^[^_]*(?:_[^_]*){0}\\K_", perl=TRUE)
# [[1]]
# [1] "I like" "to see" "how" "too but" "it"
# [6] "seems to" "be" "impossible"
It still fails to work with the stronger assertion \A:
strsplit(x, "\\A[^_]*(?:_[^_]*){0}\\K_", perl=TRUE)
# [[1]]
# [1] "I like" "to see" "how" "too but" "it"
# [6] "seems to" "be" "impossible"
Explanation?
This behavior hints at the fact that strsplit finds the first match, takes a substring to extract the first token and the remainder, and then looks for the next match in the remainder.
This discards all the state from the previous matches and leaves a clean slate each time the regex is matched against the remainder, which makes it impossible to stop strsplit at the first match while achieving the task. There is not even a parameter in strsplit to limit the number of splits.
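As an aside, if one is willing to step outside strsplit, stringr::str_split has an n argument that caps the number of pieces, which handles the first-occurrence case directly (though not the general nth-occurrence case):

```r
library(stringr)
# n = 2 means "at most two pieces": split only at the first delimiter
str_split("I like_to see_how_too", "_", n = 2)[[1]]
# [1] "I like"         "to see_how_too"
```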
Rather than split, you can match to get your split strings.
Try this regex:
^((?:[^_]*_){1}[^_]*)_(.*)$
Replace 1 with n-1 to split on the nth occurrence of the underscore.
RegEx Demo
Update: It seems R also supports PCRE, in which case you can do the split as well using this PCRE regex:
^((?:[^_]*_){1}[^_]*)(*SKIP)(*F)|_
Replace 1 with n-1 to split on the nth occurrence of the underscore.
(*FAIL) behaves like a failing negative assertion and is a synonym for (?!)
(*SKIP) defines a point beyond which the regex engine is not allowed to backtrack when the subpattern fails later
(*SKIP)(*FAIL) together provide a nice alternative to the restriction that you cannot have a variable-length lookbehind in the above regex.
RegEx Demo2
x <- "I like_to see_how_too"
strsplit(x, "^((?:[^_]*_){0}[^_]*)(*SKIP)(*F)|_", perl=TRUE)
strsplit(x, "^((?:[^_]*_){1}[^_]*)(*SKIP)(*F)|_", perl=TRUE)
## > strsplit(x, "^((?:[^_]*_){0}[^_]*)(*SKIP)(*F)|_", perl=TRUE)
## [[1]]
## [1] "I like" "to see" "how" "too"
## > strsplit(x, "^((?:[^_]*_){1}[^_]*)(*SKIP)(*F)|_", perl=TRUE)
## [[1]]
## [1] "I like_to see" "how_too"
This uses gsubfn to preprocess the input string so that strsplit can handle it. The main advantage is that one can specify a vector of numbers, k, indicating which underscores to split on.
It replaces the occurrences of underscore defined by k by a double underscore and then splits on double underscore. In this example we split at the 2nd and 4th underscore:
library(gsubfn)
k <- c(2, 4) # split at 2nd and 4th _
p <- proto(fun = function(., x) if (count %in% k) "__" else "_")
strsplit(gsubfn("_", p, "aa_bb_cc_dd_ee_ff"), "__")
giving:
[[1]]
[1] "aa_bb" "cc_dd" "ee_ff"
If empty fields are allowed, then use any other character sequence not in the string, e.g. "\01", in place of the double underscore.
See section 4 of the gsubfn vignette for more info on using gsubfn with proto objects to retain state between matches.

R gsub( ) , Regular Expression

I have the following data
Names[]
[1] John Simon is a great player
[2] Chi-Twi is from china
[3] O'Konnor works hard
[4] R.F is a swimmer
I need to extract only the names from all these rows and store them, so the desired output is:
[1] John Simon
[2] Chi-Twi
[3] O'Konnor
[4] R.F
names = gsub("(^[A-Z|a-z|.|-|']+[ ]+[A-Z|a-z|.|-|]+)[ ]+.*", "\\1",names)
Can someone help me out?
Here's a regex that will work for this sample data:
names = gsub("(^[A-Za-z]+[^A-Za-z][A-Za-z]+)", "\\1", names)
If underscores are valid characters in a first or last name, you could shorten it to:
names = gsub("(^\\w+\\W\\w+)", "\\1", names)
It simply takes one or more letters, a non-letter, and then one or more letters again.
Some things I noticed wrong in your regex:
[A-Z|a-z|.|-|']+ actually matches A-Z, |, a-z, | (again), ., |-| (that's a range), and '. You really wanted [A-Za-z.\\-']+.
In any case, that's wrong, you don't want to include dots or dashes in the first name.
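The pipe-inside-a-class pitfall is easy to confirm: inside brackets, | is just a literal member of the class, not alternation:

```r
# "|" is part of the class, so a string containing pipes still matches:
grepl("^[A-Z|a-z]+$", "a|B")
# [1] TRUE
# Remove the pipes from the class and the same string is rejected:
grepl("^[A-Za-z]+$", "a|B")
# [1] FALSE
```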
Based on #nhahtdh's comment, you can use
sub("(^\\w+\\W\\w+).*", "\\1", Names)
# [1] "John Simon" "Chi-Twi" "O'Konnor" "R.F"