I have two data frames. One is the target data frame (targetframe) and the other works as a library (word.library) with some key values. I need the following algorithm: look up an approximate match between word.library$mainword and targetframe$words; once a match is found, replace the matched substring in targetframe$words with the corresponding word.library$keyID.
Here are the two data frames mentioned above:
targetframe <- data.frame(words = c("This is sentence one with the important word",
                                    "This is sentence two with the inportant woord",
                                    "This is sentence three with crazy sayings"))
word.library <- data.frame(mainword = c("important word",
                                        "crazy sayings"),
                           keyID = c("1001",
                                     "2001"))
Here is my solution, which works:
for (i in 1:nrow(word.library)) {
  positions <- aregexec(word.library[i, 1], targetframe$words, max.distance = 0.1)
  res <- regmatches(targetframe$words, positions)
  res[lengths(res) == 0] <- "XXXX" # deal with zero-length matches somehow
  targetframe$words <- Vectorize(gsub)(unlist(res), word.library[i, 2], targetframe$words)
}
targetframe$words
However, I use a for loop, which is not efficient (imagine I have two huge data frames). Does anyone have an idea how to resolve this issue more efficiently?
I believe there must be some related questions in the community, but I failed to find one that covers my case:
Basically, I am trying to produce three plots with the lapply function. Below is my code.
p_grid <- seq(0, 1, length.out = 20)
prior_uni <- rep(1, 20)
prior_bi <- ifelse(p_grid < 0.5, 0, 1)
prior_exp <- exp(-5 * abs(p_grid - 0.5))
prior_list <- list(prior_uni, prior_bi, prior_exp)
ggs <- lapply(prior_list, function(x){
  likelihood <- dbinom(6, 9, prob = p_grid)
  unstd.post <- likelihood * x
  std.post <- unstd.post / sum(unstd.post)
  plot_post <- plot(p_grid, std.post, type = "b", ylim = c(0, max(x)))
  mtext(paste0(x))
})
By doing so, I get the plots, but the mtext call does not work well. Instead of showing the titles prior_uni, prior_bi, and prior_exp respectively, it prints every single value of the current vector (e.g., all twenty elements of prior_uni), overlapping one another.
This is a bit confusing to me. Judging from the plots, the function inside lapply takes the three elements of prior_list one at a time, not every single value; in other words, x is each of the three vectors, not each of the sixty (3*20) numbers. Yet mtext behaves the opposite way.
I hope I have expressed this clearly. Looking forward to your responses.
Best regards,
Jilong
I have been given a large text file and want to find the number of different words that start with each letter. I am trying to understand input and output values for map and reduce functions.
I understand a simpler problem that does not need to deal with duplicate words: determine the frequency with which each letter of the alphabet starts a word in the text using MapReduce.
Map input: <0, "everyday i am city in tomorrow easy over school i iterate tomorrow city community">
Map output: [<e,1>,<i,1>,<a,1>,<c,1>,<i,1>,<t,1>,<e,1>,<o,1>,<s,1>,<i,1>,<i,1>,<t,1>,<c,1>,<c,1>]
Reduce input: <a,[1]>,<c,[1,1,1]>,<e,[1,1]>,<i,[1,1,1,1]>,<o,[1]>,<s,[1]>,<t,[1,1]>
Reduce output: [<a,1>,<c,3>,<e,2>,<i,4>,<o,1>,<s,1>,<t,2>]
For the above problem, the words 'i', 'city' and 'tomorrow' appear more than once, so my final output should be:
Reduce output: [<a,1>,<c,2>,<e,2>,<i,3>,<o,1>,<s,1>,<t,1>]
I am unsure of how I would ensure duplicate words are removed in the above problem (would it be done in a pre-processing phase, or could it be implemented in either the map or reduce function?). If I could get help understanding the map and reduce outputs of the new problem, I would appreciate it.
You can do it in two map-reduce passes:
1. Find all the distinct words, using the word itself as the map output key and, in reduce, outputting each word once.
2. The problem you already solved: find the frequency of each initial letter over those unique words.
Alternatively, since there are not many unique words, you might think of caching them in the mapper and outputting each one (or its first letter) only once, so the reduce would be identical to your simpler problem. That alone won't work, though, because the same word can appear in different mappers. But you can still cache the words in the mapper in the first solution and output each word only once per mapper, which means a little less traffic between map and reduce.
Maybe something like this would help,
let str = "everyday i am city in tomorrow easy over school i iterate tomorrow city community"
let duplicatesRemoved = Set(str.split(separator: " "))
Output:
["city", "community", "tomorrow", "easy", "everyday", "over", "in", "iterate", "i", "am", "school"]
And maybe you don't need those map statements and can achieve something like this,
Code:
var varCount = [Character: Int]()
for subStr in duplicatesRemoved {
    if let firstChar = subStr.first {
        varCount[firstChar] = (varCount[firstChar] ?? 0) + 1
    }
}
Output:
["i": 3, "t": 1, "e": 2, "c": 2, "s": 1, "a": 1, "o": 1]
I have the following data as an example:
fruit.region <- data.frame(full = c("US red apple", "bombay Asia mango", "gold kiwi New Zealand"),
                           name = c("apple", "mango", "kiwi"),
                           country = c("US", "Asia", "New Zealand"),
                           type = c("red", "bombay", "gold"))
I would like R to look at other items in the "full" column that don't have values for "name", "country" and "type" and see if they match existing items. For instance, if "full" had a fourth row with "bombay US mango", it should identify that the country should read US, that bombay should go under type, and that mango should go under name.
This is what I have so far, which merely identifies (logically) where the items match:
new.entry <- c("bombay US mango")
split.new.entry <- strsplit(new.entry, " ")
lapply(split.new.entry, function(x){
  check = grepl(x, fruit.region, ignore.case = TRUE)
  print(check)
})
I'm at a bit of a standstill... I've read through a number of regex posts and the R help guides on grepl but am not able to find a great solution. What I have doesn't fully identify a logical "match" vector, so I'm unable to subset and use an if statement to concatenate the different elements. Ideally, I'd like to be able to replace these elements in data.table form, as my fruit.region will actually be a data table. Does anyone have any suggestions on the best approach?
Using the str_detect function from the stringr library. This gives a list, ready to rbind:
library(stringr)
addnewrow <- function(newfruit){
  z <- lapply(fruit.region[, 2:4], function(x) x[str_detect(newfruit, x)])
  z$full <- newfruit
  z
}
addnewrow(new.entry)
$name
[1] "mango"
$country
[1] "US"
$type
[1] "bombay"
$full
[1] "bombay US mango"
The next step would depend on your desired outcome - if you only want to add one, try:
rbind(fruit.region, addnewrow(new.entry))
If you have a lot:
z <- do.call(rbind, lapply(c(new.entry, new.entry), addnewrow))
rbind(fruit.region, z)
NB make sure your columns are character first:
fruit.region[] <- lapply(fruit.region, as.character)
I have a unique set of words in a character vector (that have been 'stemmed') and I want to know how many of them appear in a string.
Here's what I have so far:
library(RTextTools)
string <- "Players Information donation link controller support years fame glory addition champion Steer leader gang ghosts life Power Pellets tables gobble ghost"
wordstofind <- c("player","fame","field","donat")
# I created a stemmed list of the string
string.stem <- colnames(create_matrix(string, stemWords = T, removeStopwords = F))
I know the next step probably involves grepl("\\bword\\b", value) or some usage of regex, but I'm not sure what the fastest option is in this case.
Here are my criteria:
I have to do this many times, so it being as fast as possible is a concern.
It should match the entire word ("es" shouldn't match "test").
Any push in the right direction would be great.
Well, I never work with huge datasets, so time is never of the essence for me, but given the data you've provided, this will give you a count of how many words exactly match something in the string. Might be a good starting point.
> sum(wordstofind %in% unlist(strsplit(string, " ")))
[1] 1
Edit: using the stems to get the proper 3 matches, thanks to @Anthony Bissel:
> sum(wordstofind %in% unlist(string.stem))
[1] 3
Take a look at stringr by Hadley Wickham. You are probably looking for the function str_count.
There certainly might be a faster option, but this works:
length(wordstofind) - length(setdiff(wordstofind, string.stem)) # 3
But it looks like Andrew Taylor's answer is faster:
microbenchmark(sum(wordstofind %in% unlist(string.stem)),
               length(wordstofind) - length(setdiff(wordstofind, string.stem)))
Unit: microseconds
                                                            expr    min     lq     mean median     uq    max neval
                      sum(wordstofind %in% unlist(string.stem))  4.016  4.909  6.55562  5.355  5.801 37.485   100
length(wordstofind) - length(setdiff(wordstofind, string.stem)) 16.511 16.958 21.85303 17.404 18.296 81.218   100
I'll have strings of two forms:
"Initestimate" or "L#estimate", with # being a 1- or 2-digit number
"Nameestimate", with Name being the name of the actual symbol. In the example below, the name of our symbol is "6JU4".
And I have a matrix containing, among other things, columns named "InitSymb" and "L#Symb". I want to return the name of the column whose first row holds the substring that comes before "estimate".
I'm using stringr. Right now I have it coded with a bunch of calls to str_sub, but it's really sloppy and I wanted to clean it up and do it right.
example code:
> examplemat <- matrix(c("RYU4","6JU4","6EU4",1,2,3),ncol=6)
> colnames(examplemat) <- c("InitSymb","L1Symb","L2Symb","RYU4estimate","6JU4estimate","6EU4estimate")
> examplemat
InitSymb L1Symb L2Symb RYU4estimate 6JU4estimate 6EU4estimate
[1,] "RYU4" "6JU4" "6EU4" "1" "2" "3"
> searchStr <- "L1estimate"
So, with answer denoting the result I'm looking for, I want to be able to write examplemat[, answer] to extract the data column (in this case, "2").
I don't really know how to do regex, but I think the answer looks something like
examplemat[,paste0(**some regex function**("[(Init)|(L[:digit:]+)]",searchStr),"estimate")]
What function goes there, and is my regex code right?
Maybe you can try:
library(stringr)
Extr <- str_extract(searchStr, '^[A-Za-z]\\d+')
Extr
[1] "L1"
#If the searchStr is `Initestimate`
#Extr <- str_extract(searchStr, '^[A-Za-z]{4}')
pat1 <- paste0("(?<=", Extr, ").*")
indx1 <- examplemat[, str_detect(colnames(examplemat), perl(pat1))]
pat2 <- paste0("(?<=", indx1, ").*")
examplemat[, str_detect(colnames(examplemat), perl(pat2))]
#6JU4estimate
# "2"
#For searchStr using Initestimate;
#examplemat[,str_detect(colnames(examplemat), perl(pat2))]
#RYU4estimate
# "1"
The question is a bit confusing, so I am not quite sure whether my interpretation is correct.
First, extract the part of the string "coolSymb" that comes before "Symb".
Second, detect which column name contains "cool" and return its location (column index) with a which() statement.
Finally, extract the value using simple matrix indexing.
library(stringr)
a = str_split("coolSymb", "Symb")[[1]][1]
b = which(str_detect(colnames(examplemat), a))
examplemat[1, b]
Hope this helps,
won782's use of str_split inspired me to find an answer that works, although I still want to know how to do this by matching the prefix instead of excluding the suffix, so I'll accept an answer that does that.
Here's the step-by-step:
> str_split("L1estimate","estimate")[[1]][1]
[1] "L1"
(For bonus points, replace the above step with one that matches the prefix L1 directly instead of capturing "everything before estimate"; a sketch of that appears after this walkthrough.)
> paste0(str_split("L1estimate","estimate")[[1]][1],"Symb")
[1] "L1Symb"
> examplemat[1,paste0(str_split("L1estimate","estimate")[[1]][1],"Symb")]
L1Symb
[1,] "6JU4"
> paste0(examplemat[1,paste0(str_split("L1estimate","estimate")[[1]][1],"Symb")],"estimate")
[1] "6JU4estimate"
> examplemat[,paste0(examplemat[1,paste0(str_split("L1estimate","estimate")[[1]][1],"Symb")],"estimate")]
6JU4estimate
[1,] "2"