I'm analyzing a set of URLs and values extracted using a crawler. While I could extract substrings from the URL, I'd really rather not bother with the regex to do so. Is there a simple way to do a lookup-table-style replacement using subset/grepl, without resorting to dplyr (doing a conditional mutate on the variables)?
My current process:
test <- data.frame(
url = c('google.com/testing/duck', 'google.com/evaluating/dog', 'google.com/analyzing/cat'),
content = c(1, 2, 3),
subdir = NA
)
test[grepl('testing', test$url), ]$subdir <- 'testing'
test[grepl('evaluating', test$url), ]$subdir <- 'evaluating'
test[grepl('analyzing', test$url), ]$subdir <- 'analyzing'
Obviously, this is a little clumsy and doesn't scale well. With dplyr, I'd be able to do something with conditionals like:
test %<>% tbl_df() %>%
  mutate(subdir = ifelse(
    grepl('testing', url),
    'test r',
    ifelse(
      grepl('evaluating', url),
      'eval r',
      ifelse(
        grepl('analyzing', url),
        'anal r',
        NA
      ))))
But, again, really goofy and I don't want to incur a package dependency if at all possible. Is there any way to do regex-based subsetting with some sort of lookup table?
Edit: Just a few clarifications:
For extracting subdirectories, yes, regex would be most efficient; however, I was hoping for a more general pattern that could match a dictionary-like structure mapping strings to other, arbitrary values.
Of course, nested ifelse is ugly and prone to error; I just wanted to get a quick-and-dirty dplyr example up.
Edit 2: Thought I'd loop back and post what I ended up with based upon BondedDust's approach. Decided to practice some mapping and non-standard eval while at it:
test <- data.frame(
url = c(
'google.com/testing/duck',
'google.com/testing/dog',
'google.com/testing/cat',
'google.com/evaluating/duck',
'google.com/evaluating/dog',
'google.com/evaluating/cat',
'google.com/analyzing/duck',
'google.com/analyzing/dog',
'google.com/analyzing/cat',
'banana'
),
content = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
subdir = NA
)
# Named vector used for key/value lookup; names can be regex patterns
lookup <- c(
"testing" = "Testing is important",
"Eval.*" = 'eval in R',
"analy(z|s)ing" = 'R is fun'
)
# Dumb test for error handling:
# lookup <- c('test', 'hey')
# Defining new lookup function
regexLookup <- function(data, dict, searchColumn, targetColumn, ignore.case = TRUE) {
  # Basic validation; error handling could be split out further
  if (is.null(names(dict)) || is.null(dict[[1]])) {
    stop("Not a valid replacement value; use a key/value store for `dict`.")
  }
  # Non-standard evaluation for the column names; not sure if I should
  # add safety/type checks for these
  searchColumn <- eval(substitute(searchColumn), data)
  targetColumn <- deparse(substitute(targetColumn))
  # Define find-and-replace utility
  findAndReplace <- function(key, val) {
    data[grepl(key, searchColumn, ignore.case = ignore.case), targetColumn] <- val
    # `<<-` writes the modified copy back to `data` in regexLookup's frame
    data <<- data
  }
  # Map over the key/value store
  mapply(findAndReplace, names(dict), dict)
  # Return result, with non-matching rows preserved
  return(data)
}
regexLookup(test, lookup, url, subdir, ignore.case = FALSE)
for (target in c('testing', 'evaluating', 'analyzing')) {
  test[grepl(target, test$url), 'subdir'] <- target
}
test
url content subdir
1 google.com/testing/duck 1 testing
2 google.com/evaluating/dog 2 evaluating
3 google.com/analyzing/cat 3 analyzing
The vector of targets could have instead been the name of a vector that is in the workspace.
targets <- c('testing','evaluating','analyzing')
for( target in targets ) { ...}
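For instance, a minimal sketch of that variant, driven by a named lookup vector so that the matched pattern and the stored value can differ (the replacement values below just reuse the ones from the dplyr example in the question):
# names are the regex patterns, values are the replacements
lookup <- c(testing = 'test r', evaluating = 'eval r', analyzing = 'anal r')
for (pattern in names(lookup)) {
  test[grepl(pattern, test$url), 'subdir'] <- lookup[[pattern]]
}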
Try this:
test$subdir <- gsub('.*\\/(.*)\\/.*', '\\1', test$url)
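On the example data this fills subdir with 'testing', 'evaluating', and 'analyzing'. As a side note, a hedged alternative that avoids regex entirely is to treat the URL path like a file path:
# same result on the example data, using path helpers instead of a regex
test$subdir <- basename(dirname(test$url))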
Related
Is there a way in R to add all variables into a regression that match a certain pattern? For example, I have a bunch of variables in my dataset that correspond to holidays with the prefix h_, and I have other variables with other prefixes such as a_.
Is there a way to do something like this:
lm(homicide ~ w_* + a_*, data= df)
To programmatically construct a formula, have a look at reformulate().
Here's an example that uses grep() to find all variables that begin with a "d" and then uses reformulate() to plug them in as the regressor variables on the RHS of a formula object.
vv <- grep("^d.*", names(mtcars), value=TRUE)
ff <- reformulate(termlabels=vv, response="mpg")
lm(ff, data=mtcars)
#
# Call:
# lm(formula = ff, data = mtcars)
#
# Coefficients:
# (Intercept) disp drat
# 21.84488 -0.03569 1.80203
A string can be turned into a formula.
data(iris)
fmla <- as.formula(paste("Species ~",
paste(grep("Width", names(iris), value = TRUE), collapse = " + ")))
glm(fmla, data = iris, family = binomial(link = "logit"))
I have a long list of city names together with their state names. This is a partial list of my data:
data <- c('Ranchi_Capital_State_Jharkhand', 'Bokaro_State_Jharkhand', 'Tata Nagar_State_Jharkhand', 'Ramgarh_State_Jharkhand',
'Pune_State_Maharashtra', 'Mumbai_Capital_State_Maharashtra', 'Nagpur_State_Maharashtra')
I want to rearrange it so that the state comes first, like this: State_Jharkhand_Bokaro. If the city is a capital, then State_Jharkhand_Capital_Ranchi. Also note that a city or state name may consist of one word or more than one word (e.g. Tata Nagar).
What is the most efficient way to do this (without using any loop)?
You could use the gsub call below.
> data <- c('Ranchi_Capital_State_Jharkhand', 'Bokaro_State_Jharkhand', 'Tata Nagar_State_Jharkhand', 'Ramgarh_State_Jharkhand',
+ 'Pune_State_Maharashtra', 'Mumbai_Capital_State_Maharashtra', 'Nagpur_State_Maharashtra')
> gsub("^(?:(.*?)(_Capital))?(.*?)_(State.*)", "\\4\\2_\\1\\3", data)
[1] "State_Jharkhand_Capital_Ranchi" "State_Jharkhand_Bokaro"
[3] "State_Jharkhand_Tata Nagar" "State_Jharkhand_Ramgarh"
[5] "State_Maharashtra_Pune" "State_Maharashtra_Capital_Mumbai"
[7] "State_Maharashtra_Nagpur"
This doesn't really use much regex, but is mostly based on the expected position of the information. Split the strings by "_" and then reorder them as required:
data
# [1] "Ranchi_Capital_State_Jharkhand" "Bokaro_State_Jharkhand"
# [3] "Tata Nagar_State_Jharkhand" "Ramgarh_State_Jharkhand"
# [5] "Pune_State_Maharashtra" "Mumbai_Capital_State_Maharashtra"
# [7] "Nagpur_State_Maharashtra"
A <- strsplit(data, "_", TRUE)
sapply(A, function(x) {
if (length(x) == 3) {
paste(x[c(2, 3, 1)], collapse = "_")
} else if (length(x) == 4) {
paste(x[c(3, 4, 2, 1)], collapse = "_")
} else {
stop("unexpected length")
}
})
# [1] "State_Jharkhand_Capital_Ranchi" "State_Jharkhand_Bokaro"
# [3] "State_Jharkhand_Tata Nagar" "State_Jharkhand_Ramgarh"
# [5] "State_Maharashtra_Pune" "State_Maharashtra_Capital_Mumbai"
# [7] "State_Maharashtra_Nagpur"
I don't know if using sapply breaks your requirement of "without using any loop" though.
I have big data like this:
> Data[1:7,1]
[1] mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5
[2] mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
[3] mature=hsa-miR-448|mir_Family=mir-448|Gene=OR4F5
[4] mature=hsa-miR-659-3p|mir_Family=-|Gene=OR4F5
[5] mature=hsa-miR-5197-3p|mir_Family=-|Gene=OR4F5
[6] mature=hsa-miR-5093|mir_Family=-|Gene=OR4F5
[7] mature=hsa-miR-650|mir_Family=mir-650|Gene=OR4F5
What I want to do is, for every row, select the name after mature= and also the word after Gene=, and then paste them together with
paste(a, b, sep = "-")
For example, the expected output from the first two rows would be:
hsa-miR-5087-OR4F5
hsa-miR-26a-1-3p-OR4F9
So, the full implementation is like this:
for(i in 1:nrow(Data)){
Data[i,3] <- sub("mature=([^|]*).*Gene=(.*)", "\\1-\\2", Data[i,1])
Name <- strsplit(as.vector(Data[i,2]),"\\|")[[1]][2]
Data[i,4] <- as.numeric(sub("pvalue=","",Name))
print(i)
}
This works, but it's very slow: Data has about 200,000,000 rows, and this row-by-row implementation doesn't scale to that size. How can I speed it up?
If you can guarantee that the format is exactly as you specified, then a regular expression can capture (denoted by the parentheses below) everything from the equals sign up to the pipe symbol, and from Gene= to the end, and paste the two captures together with a hyphen:
sub("mature=([^|]*).*Gene=(.*)", "\\1-\\2", Data[,1])
Another option is to use read.table with = as the separator and then paste the two relevant columns:
res <- read.table(text = as.character(Data[, 1]), sep = '=')
paste(sub('[|].*', '', res$V2),   ## drop everything from the first pipe onwards
      sub('^ +| +$', '', res$V4), ## remove surrounding spaces
      sep = '-')
[1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5" "hsa-miR-659-3p-OR4F5"
[5] "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5" "hsa-miR-650-OR4F5"
The simple sub solution already given looks quite nice, but just in case, here are some other approaches:
1) read.pattern Using read.pattern in the gsubfn package we can parse the data into a data.frame. This intermediate form, DF, can then be manipulated in many ways. In this case we use paste in essentially the same way as in the question:
library(gsubfn)
DF <- read.pattern(text = Data[, 1], pattern = "(\\w+)=([^|]*)")
paste(DF$V2, DF$V6, sep = "-")
giving:
[1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5"
[4] "hsa-miR-659-3p-OR4F5" "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5"
[7] "hsa-miR-650-OR4F5"
The intermediate data frame, DF, that was produced looks like this:
> DF
V1 V2 V3 V4 V5 V6
1 mature hsa-miR-5087 mir_Family - Gene OR4F5
2 mature hsa-miR-26a-1-3p mir_Family mir-26 Gene OR4F9
3 mature hsa-miR-448 mir_Family mir-448 Gene OR4F5
4 mature hsa-miR-659-3p mir_Family - Gene OR4F5
5 mature hsa-miR-5197-3p mir_Family - Gene OR4F5
6 mature hsa-miR-5093 mir_Family - Gene OR4F5
7 mature hsa-miR-650 mir_Family mir-650 Gene OR4F5
The capture pattern used here is (\w+)=([^|]*).
1a) names We could make DF look nicer by reading the three columns of data and the three names separately. This also improves the paste statement:
DF <- read.pattern(text = Data[, 1], pattern = "=([^|]*)")
names(DF) <- unlist(read.pattern(text = Data[1,1], pattern = "(\\w+)=", as.is = TRUE))
paste(DF$mature, DF$Gene, sep = "-") # same answer as above
The DF produced in this section looks like this. It has 3 columns instead of 6, and the column names were read separately from the first row of the data:
> DF
mature mir_Family Gene
1 hsa-miR-5087 - OR4F5
2 hsa-miR-26a-1-3p mir-26 OR4F9
3 hsa-miR-448 mir-448 OR4F5
4 hsa-miR-659-3p - OR4F5
5 hsa-miR-5197-3p - OR4F5
6 hsa-miR-5093 - OR4F5
7 hsa-miR-650 mir-650 OR4F5
2) strapplyc
Another approach using the same package. This extracts the fields coming after an = and not containing a |, producing a list. We then sapply over that list, pasting the first and third fields together:
sapply(strapplyc(Data[, 1], "=([^|]*)"), function(x) paste(x[1], x[3], sep = "-"))
giving the same result.
The capture pattern used here is =([^|]*).
Here is one approach:
Data <- readLines(n = 7)
mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5
mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
mature=hsa-miR-448|mir_Family=mir-448|Gene=OR4F5
mature=hsa-miR-659-3p|mir_Family=-|Gene=OR4F5
mature=hsa-miR-5197-3p|mir_Family=-|Gene=OR4F5
mature=hsa-miR-5093|mir_Family=-|Gene=OR4F5
mature=hsa-miR-650|mir_Family=mir-650|Gene=OR4F5
df <- read.table(sep = "|", text = Data, stringsAsFactors = FALSE)
l <- lapply(df, strsplit, "=")
trim <- function(x) gsub("^\\s*|\\s*$", "", x)
paste(trim(sapply(l[[1]], "[", 2)), trim(sapply(l[[3]], "[", 2)), sep = "-")
# [1] "hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9" "hsa-miR-448-OR4F5" "hsa-miR-659-3p-OR4F5" "hsa-miR-5197-3p-OR4F5" "hsa-miR-5093-OR4F5"
# [7] "hsa-miR-650-OR4F5"
Maybe not the most elegant, but you can try:
sapply(Data[,1],function(x){
parts<-strsplit(x,"\\|")[[1]]
y<-paste(gsub("(mature=)|(Gene=)","",parts[grepl("mature|Gene",parts)]),collapse="-")
return(y)
})
Example
Data<-data.frame(col1=c("mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5","mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9"),col2=1:2,stringsAsFactors=F)
> Data[,1]
[1] "mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5" "mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9"
> sapply(Data[,1],function(x){
+ parts<-strsplit(x,"\\|")[[1]]
+ y<-paste(gsub("(mature=)|(Gene=)","",parts[grepl("mature|Gene",parts)]),collapse="-")
+ return(y)
+ })
mature=hsa-miR-5087|mir_Family=-|Gene=OR4F5 mature=hsa-miR-26a-1-3p|mir_Family=mir-26|Gene=OR4F9
"hsa-miR-5087-OR4F5" "hsa-miR-26a-1-3p-OR4F9"
I have the following data.table:
id fShort
1 432-12 1245
2 3242-12 453543
3 324-32 45543
4 322-34 45343
5 2324-34 13543
DT <- data.table(
id=c("432-12", "3242-12", "324-32", "322-34", "2324-34"),
fShort=c("1245", "453543", "45543", "45343", "13543"))
and the following list:
filenames <- list("3242-124342345.png", "432-124343.png", "135-13434.jpeg")
I would like to create a new column "fComplete" that includes the complete filename from the list. For this the values of column "id" need to be matched with the filename-list. If the filename starts with the "id" string, the complete filename should be returned. I use the following regex
t <- grep("432-12","432-124343.png",value=T)
which returns the correct filename.
This is how the final table should look:
id fShort fComplete
1 432-12 1245 432-124343.png
2 3242-12 453543 3242-124342345.png
3 324-32 45543 NA
4 322-34 45343 NA
5 2324-34 13543 NA
DT2 <- data.table(
id=c("432-12", "3242-12", "324-32", "322-34", "2324-34"),
fshort=c("1245", "453543", "45543", "45343", "13543"),
fComplete = c("432-124343.png", "3242-124342345.png", NA, NA, NA))
I tried using apply and data.table approaches but I always get warnings like
argument 'pattern' has length > 1 and only the first element will be used
What is a simple approach to accomplish this?
Here's a data.table solution:
DT[ , fComplete := lapply(id, function(x) {
m <- grep(x, filenames, value = TRUE)
if (!length(m)) NA else m})]
id fShort fComplete
1: 432-12 1245 432-124343.png
2: 3242-12 453543 3242-124342345.png
3: 324-32 45543 NA
4: 322-34 45343 NA
5: 2324-34 13543 NA
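Since the question states that the filename must start with the id, it may also be worth anchoring the pattern; a hedged variant of the same idea (not part of the original answer):
# anchor each id at the start of the filename so matches elsewhere in the name are excluded
DT[, fComplete := sapply(id, function(x) {
  m <- grep(paste0('^', x), filenames, value = TRUE)
  if (length(m)) m[1] else NA_character_
})]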
In my experience with similar functions, the regex functions sometimes return a list, so you have to account for that in the apply; I usually work through an example manually.
Also, in my experience apply on its own will not always return something that drops cleanly into a data.frame; sometimes I have had to use lapply, and/or unlist and data.frame, to massage it.
Here is an answer. I am not familiar with data.tables, and I was having issues with the filenames being in a list, but with some transformations this works. I worked it out by inspecting what apply was outputting and adding the [1] to get the piece I needed.
DT <- data.frame(
id=c("432-12", "3242-12", "324-32", "322-34", "2324-34"),
fShort=c("1245", "453543", "45543", "45343", "13543"))
filenames <- list("3242-124342345.png", "432-124343.png", "135-13434.jpeg")
filenames1 <- unlist(filenames)
x <- apply(DT[1], 1, function(x) grep(x, filenames1)[1])
DT$fComplete <- filenames1[x]
I have a pattern list
patternlist <- list('one' = paste(c('a','b','c'),collapse="|"), 'two' = paste(1:5,collapse="|"), 'three' = paste(c('k','l','m'),collapse="|"))
that I want to select from to extract rows from a data frame
dataframez <- data.frame('letters' = c('a','b','c'), 'numbers' = 1:3, 'otherletters' = c('k','l','m'))
with this function
pattern.record <- function(x, column="letters", value="one")
{
if (column %in% names(x))
{
result <- x[grep(patternlist$value, x$column, ignore.case=T),]
}
else
{
result <- NA
}
return(result)
}
Oddly enough, I get an error when I run it:
> pattern.record(dataframez)
Error in grep(patternlist$value, x$column, ignore.case = T) :
invalid 'pattern' argument
The problem is your use of the `$` operator.
In your function, it is looking for a column / named element literally called "column".
It is far simpler here to use `[[`.
Then x[[column]] uses whatever column is defined as, not "column" as a literal name.
The relevant lines in ?`$` are
Both [[ and $ select a single element of the list. The main difference is that $ does not allow computed indices, whereas [[ does. x$name is equivalent to x[["name", exact = FALSE]]. Also, the partial matching behavior of [[ can be controlled using the exact argument.
You are trying to use value and column as computed indices (i.e. computing what value and column are defined as), thus you need `[[`.
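A minimal illustration of the difference, using a throwaway data frame:
df <- data.frame(letters = c('a', 'b', 'c'), stringsAsFactors = FALSE)
column <- 'letters'
df$column    # NULL: `$` looks for a column literally named "column"
df[[column]] # "a" "b" "c": `[[` uses the value stored in `column`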
The function becomes
pattern.record <- function(x, column="letters", value="one", pattern_list)
{
if (column %in% names(x))
{
result <- x[grep(pattern_list[[value]], x[[column]], ignore.case=T),]
}
else
{
result <- NA
}
return(result)
}
pattern.record(dataframez, pattern_list = patternlist)
## letters numbers otherletters
## 1 a 1 k
## 2 b 2 l
## 3 c 3 m
Note that I've also added an argument pattern_list so the function does not depend on an object named patternlist existing somewhere in the parent environments (in your case, the global environment).