gsub returns \n (newline) - regex

I've run into regex behaviour I can't explain. My goal is to extract only the text after the #, yet when my string contains \n preceded by some words, gsub also returns the \n:
string <- ".#address something \n"
gsub("^\\.?#([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", string, perl=T);
# [1] "address\n"
string <- ".#address \n"
gsub("^\\.?#([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", string, perl=T);
# [1] "address"

In Perl-compatible regular expressions, . does not match \n. This is in contrast to R's default ("normal") regular expressions. Have a look at this example:
grepl(".", "\n", perl = FALSE)
# [1] TRUE
grepl(".", "\n", perl = TRUE)
# [1] FALSE
Your code will work if you specify perl = FALSE:
gsub("^\\.?#([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", string, perl = FALSE)
# [1] "address"

To extract address, you could also use:
library(stringr)
str_extract(string, perl('(?<=#)[a-z0-9_]+(?= )'))
#[1] "address"

Related

Trying to use a regular expression in R to capture some data

So I have a table in R, and an example of the string I am trying to capture is this:
C.Hale (79-83)
I want to write a regular expression to extract the (79-83).
How do I go about doing this?
We can use sub. We match one or more characters that are not a space ([^ ]+) from the beginning of the string (^), followed by a space (\\s), and replace it with ''.
sub('^[^ ]+\\s', '', str1)
#[1] "(79-83)"
Or another option is stri_extract_all from stringi
library(stringi)
stri_extract_all_regex(str1, '\\([^)]+\\)')[[1]]
#[1] "(79-83)"
data
str1 <- 'C.Hale (79-83)'
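A base R alternative that needs no extra packages is regmatches() with the same parenthesis pattern (a small sketch):
regmatches(str1, regexpr('\\([^)]+\\)', str1))
#[1] "(79-83)"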
One possibility using the qdapRegex package I maintain:
x <- "C.Hale (79-83)"
library(qdapRegex)
rm_round(x, extract = TRUE, include.markers = TRUE)
## [[1]]
## [1] "(79-83)"

Combining regex with a literal string

I have the following code:
input <- "1-FA-1-I2-1-I2-1-I2-1-EX-1-I2-1-I3-1-FA-1-I1-1-I2-1-TR-1-I1-1-I2-1-FA-1-I3-1-I1-1-FA-1-FA-1-NR-1-I3-1-I2-1-TR-1-I1-1-I2-1-I1-1-I2-1-FA-1-I2-1-I1-1-I3-1-FA-1-QU-1-I1-1-I2-1-I2-1-I2-1-NR-1-I2-1-I2-1-NR-1-I1-1-I2-1-I1-1-NR-1-I3-1-QU-1-I2-1-I3-1-QU-1-NR-1-I2-1-I1-1-NR-1-QU-1-QU-1-I2-1-I1-1-EX"
library(stringr)
innovation_patterns <- gsub(input, pattern = "-1-", replacement = "-")
innovation_patterns <- lapply(innovation_patterns, str_extract_all, '(?:I\\d-?)*I3(?:-?I\\d)*')
This outputs:
"I2-I3" "I3-I1" "I3-I2" "I2-I1-I3" "I3" "I2-I3"
However, I only want to extract matches to the regex that are following immediately to a specific string, e.g.:
only match the regex when it's preceded by the literal string FA-I2-I2-I2-EX.
This, for example, would be the first match of the regex, while the second match is preceded by FA-I1-I2-TR-I1-I2-FA.
The expected output is roughly the same as in the regex above, but only selecting one of the 5 matches, because it needs to be preceded by a specific literal string.
How can I modify this regex to achieve this purpose? I assume it needs to use a positive lookbehind to first identify the literal string, then execute the regex.
I don't know if I'm fully understanding what you mean, but it seems you could use a positive lookbehind.
For instance:
(?<=a)b (positive lookbehind) matches the b (and only the b) in cab, but does not match bed or debt
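A quick way to check that in R itself (toy strings, just to illustrate the lookbehind):
grepl("(?<=a)b", c("cab", "bed", "debt"), perl = TRUE)
# [1]  TRUE FALSE FALSE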
There should be something more intuitive, but I think this will do the job:
literal <- "FA-I2-I2-I2-EX"
innovation_patterns <- gsub(input, pattern = "-1-", replacement = "-")
a <- lapply(strsplit(innovation_patterns, literal )[[1]], str_extract_all, '(?:I\\d-?)*I3(?:-?I\\d)*')
b <- lapply(2:length(a), function(x){
a[[x]][[1]][1]
})
print(b)
Use (*SKIP)(*F)
innovation_patterns <- gsub(input, pattern = "-1-", replacement = "-")
innovation_patterns <- lapply(innovation_patterns, str_extract_all, perl('FA-I1-I2-TR-I1-I2-FA.*(*SKIP)(*F)|(?:I\\d-?)*I3(?:-?I\\d)*'))
The general syntax is:
partIDontWant.*(*SKIP)(*F)|what you want to choose from the part of the string before partIDontWant
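As a toy illustration of the idea (hypothetical string, not the OP's data): everything from "bar" onward is consumed and discarded by (*SKIP)(*F), so only the runs of a that come before it are returned.
regmatches("aa-bar-aaa", gregexpr("bar.*(*SKIP)(*F)|a+", "aa-bar-aaa", perl = TRUE))
## [[1]]
## [1] "aa"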
Here's another way you could go about this.
x <- "1-FA-1-I2-1-I2-1-I2-1-EX-1-I2-1-I3-1-FA-1-I1-1-I2-1-TR-1-I1-1-I2-1-FA-1-I3-1-I1-1-FA-1-FA-1-NR-1-I3-1-I2-1-TR-1-I1-1-I2-1-I1-1-I2-1-FA-1-I2-1-I1-1-I3-1-FA-1-QU-1-I1-1-I2-1-I2-1-I2-1-NR-1-I2-1-I2-1-NR-1-I1-1-I2-1-I1-1-NR-1-I3-1-QU-1-I2-1-I3-1-QU-1-NR-1-I2-1-I1-1-NR-1-QU-1-QU-1-I2-1-I1-1-EX"
CODE
substr <- 'FA-I2-I2-I2-EX'
regex <- paste0(substr, '-?((?:I\\d-?)*I3(?:-?I\\d)*)')
gsubfn::strapply(gsub('-1-', '-', x), regex, simplify = c)
## [1] "I2-I3"
Here's how to implement it:
lapply(innovation_patterns, str_extract_all, '(?<=FA-I2-I2-I2-EX-?)(?:I\\d-?)*I3(?:-?I\\d)*');
## [[1]]
## [[1]][[1]]
## [1] "I2-I3"

remove leading zeroes from timestamp %j%Y %H:%M

My timestamp is in the form
0992006 09:00
I need to remove the leading zeros to get this form:
992006 9:00
Here's the code I'm using now, which doesn't remove leading zeros:
prediction$TIMESTAMP <- as.character(format(prediction$TIMESTAMP, '%j%Y %H:%M'))
The simplest way is to create your own boundary that asserts that either the start of the string or a space precedes the zeros.
gsub('(^| )0+', '\\1', '0992006 09:00')
# [1] "992006 9:00"
You could do the same without a backreference in the replacement by using a trick: \K resets the starting point of the reported match, so any previously consumed characters are no longer included in it.
gsub('(^| )\\K0+', '', '0992006 09:00', perl=T)
# [1] "992006 9:00"
Or you could use sub and match until the second set of leading zeros.
sub('^0+([0-9]+ )0+', '\\1', '0992006 09:00')
# [1] "992006 9:00"
And to cover all possibilities: if you might ever have a format like 0992006 00:00, simply drop the + quantifier after the zero so the regex only removes the first leading zero.
str1 <- "0992006 09:00"
gsub("(?<=^| )0+", "", str1, perl=TRUE)
#[1] "992006 9:00"
For situations like below, it could be:
str2 <- "0992006 00:00"
gsub("(?<=^| )0", "", str2, perl=TRUE)
#[1] "992006 0:00"
Explanation
The idea is to use the lookbehind in (?<=^| )0+ to match 0s that occur either at the beginning of the string (the ^ alternative) or right after a space (the " " alternative), and to replace those matched 0s with "" in the second argument of gsub.
In the second string, the hour and minutes are all 0's. So, using the first code would result in:
gsub("(?<=^| )0+", "", str2, perl=TRUE)
#[1] "992006 :00"
Here it is unclear what the OP would accept as a result. So instead of removing all of the 0s before the :, I thought it would be better to leave one 0, and changed the 0+ in the pattern to a single 0 before replacing it with "".
Here's another option using a lookbehind
gsub("(^0)|(?<=\\s)0", "", "0992006 09:00", perl = TRUE)
## [1] "992006 9:00"
With sub:
sub("^[0]+", "", prediction$TIMESTAMP)
[1] "992006 09:00"
You can also use stringr without a regular expression, by working with substrings (here x holds the timestamp from the question):
> x <- "0992006 09:00"
> library(stringr)
> str_c(str_sub(word(x, 1:2), 2), collapse = " ")
# [1] "992006 9:00"
Some more Perl regexes,
> gsub("(?<!:)\\b0+", "", "0992006 09:00", perl=T)
[1] "992006 9:00"
> gsub("(?<![\\d:])0+", "", "0992006 09:00", perl=T)
[1] "992006 9:00"

Get Twitter #Username with Regex in R

How can I use regex in R to extract Twitter usernames from a string of text?
I've tried
library(stringr)
theString <- '#foobar Foobar! and #foo (#bar) but not foo#bar.com'
str_extract_all(string=theString,pattern='(?:^|(?:[^-a-zA-Z0-9_]))#([A-Za-z]+[A-Za-z0-9_]+)')
But I end up with #foobar, #foo and (#bar which contains an unwanted parenthesis.
How can I get just #foobar, #foo and #bar as output?
Here's one method that works in R:
theString <- '#foobar Foobar! and #foo (#bar) but not foo#bar.com'
theString1 <- unlist(strsplit(theString, " "))
regex <- "(^|[^#\\w])#(\\w{1,15})\\b"
idx <- grep(regex, theString1, perl = T)
theString1[idx]
[1] "#foobar" "#foo" "(#bar)"
If you want to use #Jerry's answer in R:
regex <- "#([A-Za-z]+[A-Za-z0-9_]+)(?![A-Za-z0-9_]*\\.)"
idx <- grep(regex, theString1, perl = T)
theString1[idx]
[1] "#foobar" "#foo" "(#bar)"
Both of these methods include the parenthesis that you don't want, however.
UPDATE: This will get you there start-to-finish with no parentheses or any other kind of punctuation (except underscores, since they're allowed in usernames):
theString <- '#foobar Foobar! and #fo_o (#bar) but not foo#bar.com'
theString1 <- unlist(strsplit(theString, " "))
regex1 <- "(^|[^#\\w])#(\\w{1,15})\\b" # get strings with #
regex2 <- "[^[:alnum:]#_]" # remove all punctuation except _ and #
users <- gsub(regex2, "", theString1[grep(regex1, theString1, perl = T)])
users
[1] "#foobar" "#fo_o" "#bar"
#[a-zA-Z0-9_]{0,15}
Where:
# matches the character # literally (case sensitive).
[a-zA-Z0-9_] matches a single character present in the list
{0,15} is a quantifier that matches between 0 and 15 times, as many times as possible, giving back as needed
It works fine for selecting Twitter usernames from a mixed dataset.
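A quick check with stringr (hypothetical usage; note this simple pattern has no guard on what precedes the #, so it would also pick up the #bar in foo#bar.com, which the other answers avoid by anchoring the left side):
library(stringr)
str_extract_all("#foobar Foobar! and #foo (#bar)", "#[a-zA-Z0-9_]{0,15}")
## [[1]]
## [1] "#foobar" "#foo"    "#bar"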
Try using a negative lookbehind so that characters are not consumed in your match:
(?:^|(?<![-a-zA-Z0-9_]))#([A-Za-z]+[A-Za-z0-9_]+)
EDIT: Since it seems lookbehinds don't work in R (I found somewhere here that lookbehinds worked in R, but apparently not...), try this one:
#([A-Za-z]+[A-Za-z0-9_]+)(?![A-Za-z0-9_]*\\.)
Edit: double escaped the dot
EDITv3: Try turning on PCRE:
str_extract_all(string = theString, perl("(?:^|(?<![-a-zA-Z0-9_]))#([A-Za-z]+[A-Za-z0-9_]+)"))

R-regex: match strings not beginning with a pattern

I'd like to use regex to see if a string does not begin with a certain pattern. While I can use [^...] to blacklist certain characters, I can't figure out how to blacklist a pattern.
> grepl("^[^abc].+$", "foo")
[1] TRUE
> grepl("^[^abc].+$", "afoo")
[1] FALSE
I'd like to do something like grepl("^[^(abc)].+$", "afoo") and get TRUE, i.e. to match if the string does not start with abc sequence.
Note that I'm aware of this post, and I also tried using perl = TRUE, but with no success:
> grepl("^((?!hede).)*$", "hede", perl = TRUE)
[1] FALSE
> grepl("^((?!hede).)*$", "foohede", perl = TRUE)
[1] FALSE
Any ideas?
Yeah. Put the zero width lookahead /outside/ the other parens. That should give you this:
> grepl("^(?!hede).*$", "hede", perl = TRUE)
[1] FALSE
> grepl("^(?!hede).*$", "foohede", perl = TRUE)
[1] TRUE
which I think is what you want.
Alternately if you want to capture the entire string, ^(?!hede)(.*)$ and ^((?!hede).*)$ are both equivalent and acceptable.
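To actually pull the captured string out rather than just test with grepl, regmatches() also works; a small sketch on the same toy strings:
strs <- c("hede", "foohede")
regmatches(strs, regexpr("^(?!hede).*$", strs, perl = TRUE))
# [1] "foohede"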
There is now (years later) another possibility with the stringr package.
library(stringr)
str_detect("dsadsf", "^abc", negate = TRUE)
#> [1] TRUE
str_detect("abcff", "^abc", negate = TRUE)
#> [1] FALSE
Created on 2020-01-13 by the reprex package (v0.3.0)
I got stuck on the following special case, so I thought I would share...
What if there are multiple instances of the regular expression, but you still only want the first segment?
Apparently you can turn off the implicit greediness of the search with a specific Perl wildcard modifier.
Suppose the string I wanted to process was
myExampleString = paste0(c(letters[1:13], "_", letters[14:26], "__",
                           LETTERS[1:13], "_", LETTERS[14:26], "__",
                           "laksjdl", "_", "lakdjlfalsjdf"),
                         collapse = "")
myExampleString
"abcdefghijklm_nopqrstuvwxyz__ABCDEFGHIJKLM_NOPQRSTUVWXYZ__laksjdl_lakdjlfalsjdf"
and that I wanted only the first segment before the first "__".
I cannot simply search on "_", because a single underscore is an allowable non-delimiter in this example string.
The following doesn't work. It instead gives me the first and second segments because of the default greediness (but not the third, because of the forward-look).
gsub("^(.+(?=__)).*$", "\\1", myExampleString, perl = TRUE)
"abcdefghijklm_nopqrstuvwxyz__ABCDEFGHIJKLM_NOPQRSTUVWXYZ"
But this does work
gsub("^(.+?(?=__)).*$", "\\1", myExampleString, perl = TRUE)
"abcdefghijklm_nopqrstuvwxyz"
The difference is the laziness modifier "?" after the wildcard ".+" in the (perl) regular expression, which makes the match non-greedy.