How to split an R data.frame column based on a regular expression condition - regex

I have a data.frame and I want to split one of its columns into two based on a regular expression. More specifically, the strings have a suffix in parentheses that needs to be extracted into a column of its own.
So e.g. I want to get from here:
dfInit <- data.frame(VAR = paste0(c(1:10),"(",c("A","B"),")"))
to here:
dfFinal <- data.frame(VAR1 = c(1:10), VAR2 = c("A","B"))

1) gsubfn::read.pattern: read.pattern in the gsubfn package can do that. The matches to the parenthesized portions of the regular expression are regarded as the fields:
library(gsubfn)
read.pattern(text = as.character(dfInit$VAR), pattern = "(.*)[(](.*)[)]$")
giving:
V1 V2
1 1 A
2 2 B
3 3 A
4 4 B
5 5 A
6 6 B
7 7 A
8 8 B
9 9 A
10 10 B
2) sub: Another way is to use sub:
data.frame(V1=sub("\\(.*", "", dfInit$VAR), V2=sub(".*\\((.)\\)$", "\\1", dfInit$VAR))
giving the same result.
3) read.table: This solution does not use a regular expression:
read.table(text = as.character(dfInit$VAR), sep = "(", comment = ")")
giving the same result.
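To get the VAR1/VAR2 names from dfFinal directly, col.names can be supplied as well (a small sketch; comment.char is the full name of the comment argument partially matched above):
read.table(text = as.character(dfInit$VAR), sep = "(",
           comment.char = ")", col.names = c("VAR1", "VAR2"))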

You could also use extract from tidyr:
library(tidyr)
extract(dfInit, VAR, c("VAR1", "VAR2"), "(\\d+).([[:alpha:]]+).", convert = TRUE) # convert=TRUE added as per @aosmith's comments
# VAR1 VAR2
#1 1 A
#2 2 B
#3 3 A
#4 4 B
#5 5 A
#6 6 B
#7 7 A
#8 8 B
#9 9 A
#10 10 B
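A closely related option (a sketch) is tidyr::separate, using extra = "drop" to discard the empty piece left after the closing parenthesis:
library(tidyr)
separate(dfInit, VAR, into = c("VAR1", "VAR2"), sep = "[()]", convert = TRUE, extra = "drop")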

See Split column at delimiter in data frame
dfFinal <- within(dfInit, VAR <- data.frame(do.call('rbind', strsplit(as.character(VAR), '[[:punct:]]'))))
> dfFinal
VAR.X1 VAR.X2
1 1 A
2 2 B
3 3 A
4 4 B
5 5 A
6 6 B
7 7 A
8 8 B
9 9 A
10 10 B

An approach with regmatches and gregexpr:
as.data.frame(do.call(rbind, regmatches(dfInit$VAR, gregexpr("\\w+", dfInit$VAR))))
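To match dfFinal's column names and types, the result can be renamed and converted afterwards (a sketch):
out <- as.data.frame(do.call(rbind, regmatches(dfInit$VAR, gregexpr("\\w+", dfInit$VAR))),
                     stringsAsFactors = FALSE)
names(out) <- c("VAR1", "VAR2")
out$VAR1 <- as.integer(out$VAR1)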

You can also use cSplit from splitstackshape.
library(splitstackshape)
cSplit(dfInit, "VAR", "[()]", fixed=FALSE)
# VAR_1 VAR_2
# 1: 1 A
# 2: 2 B
# 3: 3 A
# 4: 4 B
# 5: 5 A
# 6: 6 B
# 7: 7 A
# 8: 8 B
# 9: 9 A
#10: 10 B

Related

sort dataframe columns in R

Is there a way to sort data frame columns in R? I tried the code below, but the result is returned as a character vector instead of a data frame:
> asd <- data.frame(a = c("fsd","sdfsd"))
> asd <- with(asd, asd[order(a) , ])
> asd
[1] "fsd" "sdfsd"
Can I get the result back as a data frame?
Try this:
a <- data.frame(x=LETTERS[1:5],y=c(5:1))
a[order(a$x),]
a[order(a$y),]
> a[order(a$x),]
x y
1 A 5
2 B 4
3 C 3
4 D 2
5 E 1
> a[order(a$y),]
x y
5 E 1
4 D 2
3 C 3
2 B 4
1 A 5
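For the original single-column data frame, the result collapses to a vector because [ drops dimensions when only one column is left; adding drop = FALSE keeps it as a data frame, e.g.:
asd <- data.frame(a = c("fsd", "sdfsd"))
asd[order(asd$a), , drop = FALSE]
#       a
# 1   fsd
# 2 sdfsd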

Assigning groups using grepl with multiple inputs

I have a dataframe:
df <- data.frame(name=c("john", "david", "callum", "joanna", "allison", "slocum", "lisa"), id=1:7)
df
name id
1 john 1
2 david 2
3 callum 3
4 joanna 4
5 allison 5
6 slocum 6
7 lisa 7
I have a vector containing regex that I wish to find in the df$name variable:
vec <- c("lis", "^jo", "um$")
The output I want to get is as follows:
name id group
1 john 1 2
2 david 2 NA
3 callum 3 3
4 joanna 4 2
5 allison 5 1
6 slocum 6 3
7 lisa 7 1
I could do this with the following:
df$group <- ifelse(grepl("lis", df$name), 1,
              ifelse(grepl("^jo", df$name), 2,
                ifelse(grepl("um$", df$name), 3,
                  NA)))
However, I want to do this directly from vec, since I am generating different values for vec reactively in a Shiny app. Can I assign groups based on the index in vec?
Further, if something like the below happens, the group should be the first pattern that matches; e.g. 'callum' matches both 'all' and 'um$' but should get group 1 here:
vec <- c("all", "^jo", "um$")
Here are several options:
df$group <- apply(Vectorize(grepl, "pattern")(vec, df$name),
                  1,
                  function(ii) which(ii)[1])
# name id group
#1 john 1 2
#2 david 2 NA
#3 callum 3 3
#4 joanna 4 2
#5 allison 5 1
#6 slocum 6 3
#7 lisa 7 1
Use a named vector and merge on it:
names(vec) <- seq_along(vec)
df <- merge(df, stack(Vectorize(grep, "pattern", SIMPLIFY = FALSE)(vec, df$name)),
            by.x = "id", by.y = "values", all.x = TRUE)
df[!duplicated(df$id),] # to keep only the first match
# id name ind
#1 1 john 2
#2 2 david <NA>
#3 3 callum 3
#4 4 joanna 2
#5 5 allison 1
#6 6 slocum 3
#7 7 lisa 1
A for loop, iterating over vec in reverse so that earlier patterns overwrite later ones and the first match wins:
df$group <- NA
for (i in rev(seq_along(vec))) {
  TFvec <- grepl(vec[i], df$name)
  df$group[TFvec] <- i
}
df
# name id group
#1 john 1 2
#2 david 2 NA
#3 callum 3 3
#4 joanna 4 2
#5 allison 5 1
#6 slocum 6 3
#7 lisa 7 1
Or you can use outer with stri_match_first_regex from stringi:
library(stringi)
match.mat <- outer(df$name, vec, stri_match_first_regex)
df$group <- apply(match.mat, 1, function(ii) which(!is.na(ii))[1])
# [1] for first match in `vec`
# name id group
#1 john 1 2
#2 david 2 NA
#3 callum 3 3
#4 joanna 4 2
#5 allison 5 1
#6 slocum 6 3
#7 lisa 7 1
A vectorised solution, using rebus and stringi.
library(rebus)
library(stringi)
Create a regular expression that captures any of the values in vec.
vec <- c("lis", "^jo", "um$")
(rx <- or1(vec, capture = TRUE))
## <regex> (lis|^jo|um$)
Match the regex, then convert to factor and integer.
matches <- stri_match_first_regex(df$name, rx)[, 2]
df$group <- as.integer(factor(matches, levels = c("lis", "jo", "um")))
df now looks like this:
name id group
1 john 1 2
2 david 2 NA
3 callum 3 3
4 joanna 4 2
5 allison 5 1
6 slocum 6 3
7 lisa 7 1

How to remove non-alphabetic characters and convert all letters to lowercase in R?

In the following string:
"I may opt for a yam for Amy, May, and Tommy."
How can I remove the non-alphabetic characters, convert all letters to lowercase, and sort the letters within each word in R?
I would also like to sort the words in the sentence and remove the duplicates.
You could use stringi:
library(stringi)
txt <- "I may opt for a yam for Amy, May, and Tommy."
unique(stri_sort(stri_trans_tolower(stri_extract_all_words(txt, simplify = TRUE))))
Which gives:
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"
Update
As mentioned by @DavidArenburg, I overlooked the "sort the letters within words" part of your question. You didn't provide a desired output and no immediate application comes to mind but, assuming you want to identify which words have a matching counterpart (string distance of 0):
library(stringdist)
unique(stri_sort(stri_trans_tolower(stri_extract_all_words(txt, simplify = TRUE)))) %>%
  stringdistmatrix(., ., useNames = "strings", method = "qgram") %>%
# a amy and for i may opt tommy yam
# a 0 2 2 4 2 2 4 6 2
# amy 2 0 4 6 4 0 6 4 0
# and 2 4 0 6 4 4 6 8 4
# for 4 6 6 0 4 6 4 6 6
# i 2 4 4 4 0 4 4 6 4
# may 2 0 4 6 4 0 6 4 0
# opt 4 6 6 4 4 6 0 4 6
# tommy 6 4 8 6 6 4 4 0 4
# yam 2 0 4 6 4 0 6 4 0
apply(., 1, function(x) sum(x == 0, na.rm=TRUE))
# a amy and for i may opt tommy yam
# 1 3 1 1 1 3 1 1 3
Words with more than one 0 per row ("amy", "may", "yam") have a scrambled counterpart.
str <- "I may opt for a yam for Amy, May, and Tommy."
## Clean the words (just keep letters and convert to lowercase)
words <- strsplit(tolower(gsub("[^A-Za-z ]", "", str)), " ")[[1]]
## split the words into characters and sort them
sortedWords <- sapply(words, function(word) sort(unlist(strsplit(word, ""))))
## Join the sorted letters back together
sapply(sortedWords, paste, collapse="")
# i may opt for a yam for amy may and
# "i" "amy" "opt" "for" "a" "amy" "for" "amy" "amy" "adn"
# tommy
# "mmoty"
## If you want to convert result back to string
do.call(paste, lapply(sortedWords, paste, collapse=""))
# [1] "i amy opt for a amy for amy amy adn mmoty"
stringr lets you work with all character sets in R at C speed, and magrittr provides a piping idiom that works well for this:
library(stringr)
library(magrittr)
txt <- "I may opt for a yam for Amy, May, and Tommy."
txt %>%
  str_to_lower %>%                                          # lowercase
  str_replace_all("[[:punct:][:digit:][:cntrl:]]", "") %>%  # only alpha
  str_replace_all("[[:space:]]+", " ") %>%                  # single spaces
  str_split(" ") %>%                                        # tokenize
  extract2(1) %>%                                           # str_split returns a list
  sort %>%                                                  # sort
  unique                                                    # unique words
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"
The qdap package that I maintain has the bag_o_words function that works well for this:
txt <- "I may opt for a yam for Amy, May, and Tommy."
library(qdap)
unique(sort(bag_o_words(txt)))
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"

Removing brackets from a dataset

I have imported a dataset in R with 10 columns and 100 rows, but in a few columns there are brackets ([]), parentheses, and commas along with the values. How can I get rid of them?
As an example, consider 4 of the columns and 2 rows:
V1 V2 V3 V4
3( [4 ([5 8
(1 5 9 [10,
And what i want is
V1 V2 V3 V4
3 4 5 8
1 5 9 10
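For reproducibility, a minimal construction of the example as character columns (illustrative only; the real data would come from your import):
mydf <- data.frame(V1 = c("3(", "(1"), V2 = c("[4", "5"),
                   V3 = c("([5", "9"), V4 = c("8", "[10,"),
                   stringsAsFactors = FALSE)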
Just use gsub:
mydf[] <- lapply(mydf, function(x) gsub("[][(),]", "", x))
mydf
# V1 V2 V3 V4
# 1 3 4 5 8
# 2 1 5 9 10
Instead of lapply, you can also use as.matrix:
mydf[] <- gsub("[][(),]", "", as.matrix(mydf))
mydf
# V1 V2 V3 V4
# 1 3 4 5 8
# 2 1 5 9 10
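Note that the cleaned columns are still character; if numeric columns are wanted, type.convert can be applied per column afterwards (a sketch):
mydf[] <- lapply(mydf, type.convert, as.is = TRUE)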

Working with dataframes in a list: Rename variables

Define:
dats <- list(df1 = data.frame(A = sample(1:3), B = sample(11:13)),
             df2 = data.frame(AA = sample(1:3), BB = sample(11:13)))
so that
> dats
$df1
A B
1 2 12
2 3 11
3 1 13
$df2
AA BB
1 1 13
2 2 12
3 3 11
I would like to change all the variable names from all caps to lowercase. I can do this with a loop, but somehow cannot get this lapply call to work:
dats <- lapply(dats, function(x)
  names(x) <- tolower(names(x)))
which results in:
> dats
$df1
[1] "a" "b"
$df2
[1] "aa" "bb"
while the desired result is:
> dats
$df1
a b
1 2 12
2 3 11
3 1 13
$df2
aa bb
1 1 13
2 2 12
3 3 11
If you don't use return at the end of a function, the last evaluated expression is returned. So you need to return x:
dats <- lapply(dats, function(x) {
  names(x) <- tolower(names(x))
  x
})
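An equivalent one-liner uses setNames, which returns the renamed object directly (a sketch):
dats <- lapply(dats, function(x) setNames(x, tolower(names(x))))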