R function(): how to pass parameters which contain characters and regular expression - regex

My data is as follows:
> df2
  id calmonth product
1 101      01 apple
2 102      01 apple&nokia&htc
3 103      01 htc
4 104      01 apple&htc
5 104      02 nokia
Now I want to calculate the number of ids whose products contain both 'apple' and 'htc' when calmonth == '01'. And I need this not only for 'apple' and 'htc', but also for 'apple' and 'nokia', etc.
So I want to do this with a function like this:
xandy = function(a, b) data.frame(product = paste(a, b, sep = '&'),
                                  csum = length(grep('a.*b', x = df2$product)))
I also made a parameter vector like this:
para = c('apple', 'htc', 'nokia')
But here is the problem. When I pass parameters like
xandy(para[1], para[2])
the result is as follows:
  product   csum
1 apple&htc    0
My expected result would be:
  product   csum calmonth
1 apple&htc    2       01
2 apple&htc    0       02
So what is wrong with the parameter passing?
And how can I correctly add calmonth into the function xandy()?
FYI, this question stems from an earlier question of mine:
What's the R statement responding to SQL's 'in' statement
EDIT AFTER COMMENT
My expected result would be:
  product   csum calmonth
1 apple&htc    2       01
2 apple&htc    0       02

My answer is another way to tackle your problem.
library(stringr)
The function contains() splits the elements of a string vector according to the split character and checks whether all target words are contained.
contains <- function(x, target, split = "&") {
  l <- str_split(x, split)
  sapply(l, function(x, y) all(y %in% x), y = target)
}
contains(df2$product, c("apple", "htc"))
[1] FALSE  TRUE FALSE  TRUE FALSE
The rest is just subsetting and summarizing (ddply() comes from the plyr package, loaded below):
get_data <- function(a, b) {
  e <- subset(df2, contains(product, c(a, b)))
  e$product2 <- paste(a, b, sep = "&")
  ddply(e, .(calmonth, product2), summarise, csum = length(id))
}
Using the data below, the order of the arguments no longer plays a role (see the comment below).
get_data("apple", "htc")
calmonth product2 csum
1 1 apple&htc 1
2 2 apple&htc 2
get_data("htc", "apple")
calmonth product2 csum
1 1 htc&apple 1
2 2 htc&apple 2
I know this is not a direct answer to your question, but I find this approach quite clean.
EDIT AFTER COMMENT
The reason you get csum=0 is simply that you are searching for the wrong regex pattern: 'a.*b' matches a literal a, then anything, then a literal b. It does not use the values of the variables a and b (i.e. apple ... htc). You need to construct the pattern from the arguments, i.e. paste0(a, ".*", b).
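A minimal illustration of the difference, using example strings:
a <- "apple"; b <- "htc"
grepl("a.*b", "apple&htc")               # FALSE: looks for a literal 'a' ... 'b'
grepl(paste0(a, ".*", b), "apple&htc")   # TRUE: the pattern "apple.*htc" is built from the arguments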
Here is a complete solution. I would not call it beautiful code, but anyway (note that I changed the data to show that it generalizes across months).
library(plyr)
df2 <- read.table(text="
id calmonth product
101 01 apple
102 01 apple&nokia&htc
103 01 htc
104 02 apple&htc
104 02 apple&htc", header = TRUE)
xandy <- function(a, b) {
  pattern <- paste0(a, ".*", b)
  d1 <- df2[grep(pattern, df2$product), ]
  d1$product <- paste0(a, "&", b)
  ddply(d1, .(calmonth), summarise,
        csum = length(calmonth),
        product = unique(product))
}
xandy("apple", "htc")
calmonth csum product
1 1 1 apple&htc
2 2 2 apple&htc
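Since the question also asks for other pairs ('apple' and 'nokia', etc.), here is a hedged sketch applying xandy() to every pair in para, assuming xandy() and para as defined above:
para <- c("apple", "htc", "nokia")
# combn() runs the function on each pair of products and returns a list of data frames
combn(para, 2, function(p) xandy(p[1], p[2]), simplify = FALSE)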

Exclude a few columns from a grouped selection by `dplyr::contains`

Suppose a data frame with several groups of columns (linked by their names, here Bla and D):
df = data.frame(A=1, BlaTata=2, BlaTato=3, BlaTota=4, BlaToto=5,
                C=6, D1=7, D2=8, D3=9, D4=10)
#   A BlaTata BlaTato BlaTota BlaToto C D1 D2 D3 D4
# 1 1       2       3       4       5 6  7  8  9 10
How can I easily drop all columns containing Bla (i.e., select(-contains('Bla'))) except for a few of them that I would explicitly "protect" from the (de)selection procedure?
Supposing I want to "protect" BlaTato and BlaToto:
df %>% mutate(saveBlaToto=BlaToto, saveBlaTato=BlaTato) %>%
select(-starts_with('Bla')) %>%
mutate(BlaToto=saveBlaToto, BlaTato=saveBlaTato) %>%
select(-contains('save')) %>%
select(order(colnames(.)))
#   A BlaTato BlaToto C D1 D2 D3 D4
# 1 1       3       5 6  7  8  9 10
There must be an easier and more elegant way ;-)
Supposing it is not handy to select by column index etc.
Something like select(-contains('Bla' but keep c('BlaTato','BlaToto'))) possibly for several columns to be preserved...
EDIT
This question is answered in Frank's "New Question" below.
The original question, simpler and answered in his "First Question", was "How to drop all columns containing B except for B2 in the following data frame":
df = data.frame(A=1, B1=2, B2=3, B3=4, B4=5, C=6, D1=7, D2=8, D3=9, D4=10)
First question. If you look at ?select, you'll see that you can enter a regular expression, like
# example
df = data.frame(A=1, B1=2, B2=3, B3=4, B4=5, C=6, D1=7, D2=8, D3=9, D4=10)
# goal: drop B, protect B2
df %>% select(-matches('^B[^2]$'))
A B2 C D1 D2 D3 D4
1 1 3 6 7 8 9 10
Reading the regex:
^ and $ indicate start and end of the string.
[^x] means any character except x.
New question. It looks like dplyr doesn't support Perl-style regexes yet, so...
# example
df = data.frame(A=1, BlaTata=2, BlaTato=3, BlaTota=4, BlaToto=5,
C=6, D1=7, D2=8, D3=9, D4=10)
# goal: drop Bla, protect BlaTato, BlaToto
df %>% select(-grep('^Bla(?!Tato|Toto)', names(.), perl=TRUE))
A BlaTato BlaToto C D1 D2 D3 D4
1 1 3 5 6 7 8 9 10
Reading the regex:
(?!xyz) means "don't be followed by xyz"
x|y means x or y
For more info on regular expressions and the base R functions for using them, read ?regex and ?grep. Really, though, you shouldn't name your columns like this. If you find yourself in a position where you need to parse column names, you probably made a mistake earlier on.
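For comparison, a hedged base-R sketch of the same goal, using the protected names from the example above:
# Drop all 'Bla' columns except a protected few, via plain set operations on names
keep <- c("BlaTato", "BlaToto")
drop <- setdiff(grep("^Bla", names(df), value = TRUE), keep)
df[, setdiff(names(df), drop)]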

R: How to group and aggregate list elements using regex?

I want to aggregate (sum up) the following product list by groups (see below):
prods <- list("101.2000" = data.frame(1,2,3),
              "102.2000" = data.frame(4,5,6),
              "103.2000" = data.frame(7,8,9),
              "104.2000" = data.frame(1,2,3),
              "105.2000" = data.frame(4,5,6),
              "106.2000" = data.frame(7,8,9),
              "101.2001" = data.frame(1,2,3),
              "102.2001" = data.frame(4,5,6),
              "103.2001" = data.frame(7,8,9),
              "104.2001" = data.frame(1,2,3),
              "105.2001" = data.frame(4,5,6),
              "106.2001" = data.frame(7,8,9))
test <- list("100.2000" = data.frame(2,3,5),
             "100.2001" = data.frame(4,5,6))
names <- c("A", "B", "C")
prods <- lapply(prods, function (x) {colnames(x) <- names; return(x)})
Each element of the product list (prods) has a name combining the product number and the year (e.g. 101.2000 --> 101 = product number and 2000 = year). The groups contain only product numbers for the aggregation.
group1 <- c(101, 106)
group2 <- c(102, 104)
group3 <- c(105, 103)
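As a hedged illustration of how each name decomposes into product number and year:
strsplit(names(prods)[1], ".", fixed = TRUE)[[1]]
# [1] "101"  "2000"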
My expected result shows the aggregated product groups by year:
$group1.2000
A B C
1 8 10 12
$group2.2000
A B C
1 5 7 9
$group3.2000
A B C
1 11 13 15
$group1.2001
A B C
1 8 10 12
$group2.2001
A B C
1 5 7 9
$group3.2001
A B C
1 11 13 15
So far, I tried this way: First I decomposed the names of prods into product numbers:
prodnames <- names(prods)
prodnames_sub <- gsub("\\..*.","", prodnames)
And then I tried to aggregate using lapply:
lapply(prods, function(x) aggregate(..., FUN = sum))
However, I couldn't work out how to bring the product numbers into the aggregation. Ideas? Thanks
Here are two approaches. No packages are used in either one.
1) Using lists. Create a two-column data.frame S from the groups, whose columns are the products (values column) and associated groups (ind column). Then create the list to split by, By. In the code producing By, sub("\\..*", "", names(prods)) extracts the products, and match is then used to find the associated group; sub(".*\\.", "", names(prods)) extracts the year. Next perform the split and lapply over it to run the summations. The two components of By (group and year) can be reversed to change the order of the output, if desired.
S <- stack(list(group1 = group1, group2 = group2, group3 = group3))
By <- list(group = S$ind[match(sub("\\..*", "", names(prods)), S$values)],
           year = sub(".*\\.", "", names(prods)))
lapply(split(prods, By), function(x) colSums(do.call(rbind, x)))
2) Using data.frames. Convert the groups and prods each to a data frame, merge them, perform an aggregate and split back into a list. The output is the same as requested except for order. (Reverse the two right-hand variables in the aggregate formula to get the order shown in the question, but that will also reverse the two parts of each component name in the output list.)
S <- stack(list(group1 = group1, group2 = group2, group3 = group3))
DF0 <- do.call(rbind, prods)
DF <- cbind(do.call(rbind, strsplit(rownames(DF0), ".", fixed = TRUE)), DF0)
M <- merge(DF, S, all.x = TRUE, by = 1)
Ag <- aggregate(cbind(A, B, C) ~ ind + `2`, M, sum)
lapply(split(Ag, paste(Ag[[1]], Ag[[2]], sep = ".")), "[", 3:5)
giving:
$group1.2000
A B C
1 8 10 12
$group1.2001
A B C
4 8 10 12
$group2.2000
A B C
2 5 7 9
$group2.2001
A B C
5 5 7 9
$group3.2000
A B C
3 11 13 15
$group3.2001
A B C
6 11 13 15

Which pattern occurs the most in a matrix - R (UPDATE)

UPDATE 2
I've added some code (and an explanation) I wrote myself at the end of this question. It is, however, a suboptimal solution (both in coding efficiency and in the resulting output), but it manages to select items that adhere to the constraints. If you have any ideas on how to improve it (again, both in efficiency and in the resulting output), please let me know.
1. Updated Post
Please look below for the initial question and sample code. Thanks to alexis_laz's answer, the problem was solved for a small number of items. However, when the number of items becomes too large, the combn function in R can no longer handle it because of the invalid 'ncol' value (too large or NA) error. Since my dataset does indeed have a lot of items, I was wondering whether replacing some of his code (shown below) with C++ would solve this, and if so, what code I should use for that. Thanks!
This is the code as provided by alexis_laz:
ff = function(x, No_items, No_persons)
{
  do.call(rbind,
          lapply(No_items:ncol(x),
                 function(n) {
                   col_combs = combn(seq_len(ncol(x)), n, simplify = F)
                   persons = lapply(col_combs, function(j) rownames(x)[rowSums(x[, j, drop = F]) == n])
                   keep = unlist(lapply(persons, function(z) length(z) >= No_persons))
                   data.frame(persons = unlist(lapply(persons[keep], paste, collapse = ", ")),
                              items = unlist(lapply(col_combs[keep], function(z) paste(colnames(x)[z], collapse = ", "))))
                 }))
}
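For a sense of scale, a hedged illustration of why combn() hits that error here: with 220 items the number of column combinations explodes.
choose(220, 15)  # roughly 6e22 subsets of size 15 alone, far beyond what combn() can enumerate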
2. Initial Post
Currently I'm working on a set of data coming from adaptive measurement, which means that not all persons have made all of the same items. For my analysis, however, I need a dataset that contains only items that have been made by all persons (or a subset of these persons).
I have a matrix object in R with rows = persons (100000), and columns = items(220), and a 1 in a cell if the person has made the item and a 0 if the person has not made the item.
How can I use R to determine which combination of at least 15 items was made by the largest number of persons?
Hopefully the question is clear (if not please ask me for more details and I will gladly provide those).
Tnx in advance.
Joost
Edit:
Below is a sample matrix with the items (A:E) as columns and persons (1:5) as rows.
mat <- matrix(c(1,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,1,1,0,0,1,1,1,1,0),5,5,byrow=T)
colnames(mat) <- c("A","B","C","D","E")
rownames(mat) <- 1:5
> mat
  A B C D E
1 1 1 1 0 0
2 1 1 0 1 1
3 1 1 1 0 1
4 0 1 1 0 0
5 1 1 1 1 0
mat[1,1] = 1 means that person 1 has given a response to item 1.
Now (in this example) I'm interested in finding out which set of at least 3 items is made by at least 3 people. So here I can just go through all possible combinations of 3, 4 and 5 items to check how many people have a 1 in the matrix for each item in a combination.
This will result in me choosing the item combination A, B and C, since it is the only combination of items that has been made by 3 people (namely persons 1, 3 and 5).
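As a hedged illustration of this exhaustive check, assuming mat as defined above:
combos <- combn(colnames(mat), 3, simplify = FALSE)
counts <- sapply(combos, function(cols) sum(rowSums(mat[, cols]) == length(cols)))
setNames(counts, sapply(combos, paste, collapse = ","))  # only "A,B,C" reaches 3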
Now for my real dataset I want to do this but then for a combination of at least 10 items that a group of at least 75 people all responded to. And since I have a lot of data preferably not by hand as in the example data.
I'm thus looking for a function/code in R that lets me specify the minimum number of items and persons, and then gives me all combinations of items and persons that adhere to these constraints or exceed them.
Thus for the example matrix it would be something like;
f <- function(data,no.items,no.persons){
#code
}
> f(mat,3,3)
no.item no.pers items persons
1 3 3 A, B, C 1, 3, 5
Or in case of at least 2 items that are made by at least 3 persons;
> f(mat,2,3)
no.item no.pers items persons
1 2 4 A, B 1, 2, 3, 5
2 2 3 A, C 1, 3, 5
3 2 4 B, C 1, 3, 4, 5
4 3 3 A, B, C 1, 3, 5
Hopefully this clears up what my question actually is about. Tnx for the quick replies that I already received!
3. Written Code
Below is the code I wrote today. It takes each item once as a starting point and then looks for the item that was answered most often by the people who also responded to the start item. It then takes these two items and looks for a third, and repeats this until the number of people who responded to all selected items drops below the given limit. One drawback of the code is that it takes some time to run (the runtime grows roughly exponentially with the number of items). The second drawback is that it still does not evaluate all possible combinations of items: the start item and the subsequently chosen item may have many persons in common, but if the chosen item has almost no overlap with the remaining (not yet chosen) items, the sample might shrink very fast. If instead an item with somewhat fewer persons in common with the start item were chosen, and that item had many connections to other items, the final collection of selected items might be much bigger than the one produced by the code below. So again, suggestions and improvements in both directions are welcome!
set.seed(512)
mat <- matrix(rbinom(1000000, 1, .6), 10000, 100)
colnames(mat) <- 1:100
fff <- function(data, persons, items){
  xx <- list()
  for(j in 1:ncol(data)){                    # each item once as a starting point
    d <- matrix(c(j, length(which(data[,j] == 1))), 1, 2)
    colnames(d) <- c("item", "n")
    t <- persons + 1
    a <- j
    while(t >= persons){                     # greedily add items until too few persons remain
      b <- numeric(0)
      for(i in 1:ncol(data)){
        z <- c(a, i)
        if(i %in% a){
          b[i] <- 0
        } else {
          b[i] <- length(which(rowSums(data[,z]) == length(z)))
        }
      }
      c <- c(which.max(b), max(b))           # best next item and its person count
      d <- rbind(d, c)
      a <- c(a, c[1])
      t <- max(b)
    }
    print(j)
    xx[[j]] <- d
  }
  x <- y <- z <- numeric(0)
  zz <- matrix(c(0, 0, rep(NA, ncol(data))), length(xx), ncol(data) + 2, byrow = TRUE)
  colnames(zz) <- c("n.pers", "n.item", rep("I", ncol(data)))
  for(i in 1:length(xx)){                    # note: 1:nrow(...)-1 is (1:n)-1, i.e. rows 1..n-1 when indexing
    zz[i,1] <- xx[[i]][nrow(xx[[i]]) - 1, 2]
    zz[i,2] <- length(unname(xx[[i]][1:nrow(xx[[i]]) - 1, 1]))
    zz[i,3:(zz[i,2] + 2)] <- unname(xx[[i]][1:nrow(xx[[i]]) - 1, 1])
  }
  zz <- zz[, colSums(is.na(zz)) < nrow(zz)]
  zz <- zz[which((rowSums(zz, na.rm = TRUE) / rowMeans(zz, na.rm = TRUE)) - 2 >= items), ]
  zz <- as.data.frame(zz)
  return(zz)
}
zz <- fff(mat, 110, 8)
head(zz)
n.pers n.item I I I I I I I I I I
1 156 9 1 41 13 80 58 15 91 12 39 NA
2 160 9 2 27 59 13 81 16 15 6 92 NA
3 158 9 3 59 83 32 25 80 14 41 16 NA
4 160 9 4 24 27 71 32 10 63 42 51 NA
5 114 10 5 59 66 27 47 13 44 63 30 52
6 158 9 6 13 56 61 12 59 8 45 81 NA
#col 1 = number of persons in sample
#col 2 = number of items in sample
#col 3:12 = which items create this sample (NA if n.item is less than 10)
to follow up on my comment, something like:
set.seed(1618)
mat <- matrix(rbinom(1000, 1, .6), 100, 10)
colnames(mat) <- sample(LETTERS, 10)
rownames(mat) <- sprintf('person%s', 1:100)
mat1 <- mat[rowSums(mat) > 5, ]
head(mat1)
# A S X D R E Z K P C
# person1 1 1 1 0 1 1 1 1 1 1
# person3 1 0 1 1 0 1 0 0 1 1
# person4 1 0 1 1 1 1 1 0 1 1
# person5 1 1 1 1 1 0 1 1 0 0
# person6 1 1 1 1 0 1 0 1 1 0
# person7 0 1 1 1 1 1 1 1 0 0
table(rowSums(mat1))
# 6 7 8 9
# 24 23 21 5
tab <- table(sapply(1:nrow(mat1), function(x)
  paste(names(mat1[x, ][mat1[x, ] == 1]), collapse = ',')))
data.frame(tab[tab > 1])
# tab.tab...1.
# A,S,X,D,R,E,P,C 2
# A,S,X,D,R,E,Z,P,C 2
# A,S,X,R,E,Z,K,C 3
# A,S,X,R,E,Z,P,C 2
# A,S,X,Z,K,P,C 2
Here is another idea that matches your output:
ff = function(x, No_items, No_persons)
{
  do.call(rbind,
          lapply(No_items:ncol(x),
                 function(n) {
                   col_combs = combn(seq_len(ncol(x)), n, simplify = F)
                   persons = lapply(col_combs, function(j) rownames(x)[rowSums(x[, j, drop = F]) == n])
                   keep = unlist(lapply(persons, function(z) length(z) >= No_persons))
                   data.frame(persons = unlist(lapply(persons[keep], paste, collapse = ", ")),
                              items = unlist(lapply(col_combs[keep], function(z) paste(colnames(x)[z], collapse = ", "))))
                 }))
}
ff(mat, 3, 3)
# persons items
#1 1, 3, 5 A, B, C
ff(mat, 2, 3)
# persons items
#1 1, 2, 3, 5 A, B
#2 1, 3, 5 A, C
#3 1, 3, 4, 5 B, C
#4 1, 3, 5 A, B, C

R - regex index of start position and then add it to a string?

So far I have been able to merge two files and get the following data frame (df1):
ID someLength someLongerSeq someSeq someMOD someValue
A 16 XCVBNMHGFDSTHJGF NMH T3(P) 7
A 16 XCVBNMHGFDSTHJGF NmH M3(O); S4(P); S6(P) 1
B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S9(P) 5
C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q16(D); S19(P) 7
D 19 HSEKDFGSFDKELJGFZZX KELJ S7(P); C9(C); S10(P) 1
I am looking for a way to do a regex match based on the "someSeq" column: look for that substring in the "someLongerSeq" column, get the start location of the match, and then add that offset to the whole numbers attached to the characters, such as T3(P).
Example:
For the second row, ID A, "someSeq" "NmH" matches at location 4 of someLongerSeq (after upper-case conversion of NmH). So I want to add that number 4 to the someMOD fields M3(O);S4(P);S6(P) so that I get M7(O);S8(P);S10(P), and then overwrite the new value in the someMOD column.
And do that for each row; the regex match is on a per-row basis.
Any help is really appreciated. Thanks.
First of all, I should mention that your data is hard to read. I modified it slightly (I removed the spaces from the someMOD column) in order to read it in. This is not a problem since you already have your data in a data.frame. So I read the data like this:
dat <- read.table(text='ID someLength someLongerSeq someSeq someMOD someValue
A 16 XCVBNMHGFDSTHJGF NMH T3(P) 7
A 16 XCVBNMHGFDSTHJGF NmH M3(O);S4(P);S6(P) 1
B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S9(P) 5
C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q16(D);S19(P) 7
D 19 HSEKDFGSFDKELJGFZZX KELJ S7(P);C9(C);S10(P) 1',header=TRUE)
Then the idea is:
process row by row using apply
use gregexpr to get the index of someSeq within someLongerSeq (note that gregexpr returns a 1-based position, so for row 2 it gives 5 rather than the 4 assumed in the question; the results below are therefore one higher than the question's expected values)
use gsubfn to add that index to each number in someMOD
Here is the whole solution:
library(gsubfn)
res <- t(apply(dat, 1, function(x){
  idx <- gregexpr(x['someSeq'], x['someLongerSeq'],
                  ignore.case = TRUE)[[1]][1]
  x[['someMOD']] <- gsubfn("[[:digit:]]+",
                           function(x) as.numeric(x) + idx,
                           x[['someMOD']])
  x
}))
as.data.frame(res)
ID someLength someLongerSeq someSeq someMOD someValue
1 A 16 XCVBNMHGFDSTHJGF NMH T8(P) 7
2 A 16 XCVBNMHGFDSTHJGF NmH M8(O);S9(P);S11(P) 1
3 B 24 HDFGKJSDHFGKJSDFHGKLSJDF HFGKJSDFH S18(P) 5
4 C 22 QIOWEURQOIWERERQWEFFFF RQoIWERER Q23(D);S26(P) 7
5 D 19 HSEKDFGSFDKELJGFZZX KELJ S18(P);C20(C);S21(P) 1
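A quick hedged check of where the offset comes from (row 1 of the data above):
gregexpr("NMH", "XCVBNMHGFDSTHJGF", ignore.case = TRUE)[[1]][1]
# [1] 5  -- the 1-based match position, which is why T3(P) becomes T8(P) above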

improve my code for collapsing a list of data.frames

Dear StackOverFlowers (flowers for short),
I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (by adding another column) which rows came from which element of the list. This is what I've got so far.
This is the list that needs to be collapsed/stacked:
> walk.sample
[[1]]
walker x y
1073 3 228.8756 -726.9198
1086 3 226.7393 -722.5561
1081 3 219.8005 -728.3990
1089 3 225.2239 -727.7422
1032 3 233.1753 -731.5526
[[2]]
walker x y
1008 3 205.9104 -775.7488
1022 3 208.3638 -723.8616
1072 3 233.8807 -718.0974
1064 3 217.0028 -689.7917
1026 3 234.1824 -723.7423
[[3]]
[1] 3
[[4]]
walker x y
546 2 629.9041 831.0852
524 2 627.8698 873.3774
578 2 572.3312 838.7587
513 2 633.0598 871.7559
538 2 636.3088 836.6325
1079 3 206.3683 -729.6257
1095 3 239.9884 -748.2637
1005 3 197.2960 -780.4704
1045 3 245.1900 -694.3566
1026 3 234.1824 -723.7423
I have written a function that adds a column denoting which element the rows came from and then appends everything to an existing data.frame.
collapseToDataFrame <- function(x) { # collapse list to a data.frame with a twist
  walk.df <- data.frame()
  for (i in 1:length(x)) {
    n.rows <- nrow(x[[i]])
    if (length(x[[i]]) > 1) {
      temp.df <- cbind(x[[i]], rep(i, n.rows))
      names(temp.df) <- c("walker", "x", "y", "session")
      walk.df <- rbind(walk.df, temp.df)
    } else {
      cat("Empty list", "\n")
    }
  }
  return(walk.df)
}
> collapseToDataFrame(walk.sample)
Empty list
Empty list
walker x y session
3 1 -604.5055 -123.18759 1
60 1 -562.0078 -61.24912 1
84 1 -594.4661 -57.20730 1
9 1 -604.2893 -110.09168 1
43 1 -632.2491 -54.52548 1
1028 3 240.3905 -724.67284 1
1040 3 232.5545 -681.61225 1
1073 3 228.8756 -726.91980 1
1091 3 209.0373 -740.96173 1
1036 3 248.7123 -694.47380 1
I'm curious whether this can be done more elegantly, perhaps with do.call() or some other more generic function?
I think this will work...
lengths <- sapply(walk.sample, function(x) if (is.null(nrow(x))) 0 else nrow(x))
cbind(do.call(rbind, walk.sample[lengths > 1]),
      session = rep(1:length(lengths), ifelse(lengths > 1, lengths, 0)))
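A hedged illustration of the rep() trick that builds the session column, using toy lengths (element 3 stands in for the non-data.frame element):
lengths <- c(5, 5, 0, 10)
rep(1:4, ifelse(lengths > 1, lengths, 0))  # five 1s, five 2s, no 3s, ten 4s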
I'm not claiming this to be the most elegant approach, but I think it works:
library(plyr)
ldply(sapply(1:length(walk.sample), function(i)
  if (length(walk.sample[[i]]) > 1)
    cbind(walk.sample[[i]], session = rep(i, nrow(walk.sample[[i]])))
), rbind)
EDIT
After applying Marek's apt remarks
do.call(rbind, lapply(1:length(walk.sample), function(i)
  if (length(walk.sample[[i]]) > 1)
    cbind(walk.sample[[i]], session = i)))
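For completeness, a hedged modern sketch of the same idea with dplyr::bind_rows(), assuming walk.sample as above:
library(dplyr)
dfs <- Filter(is.data.frame, walk.sample)  # drop the non-data.frame element
bind_rows(dfs, .id = "session")            # .id labels each row with its list position
# note: positions renumber after Filter(), so 'session' refers to the filtered list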