conditional list selection and appending of variable in R - list

Say I have a list mn like this:
i<-c(w=5,n="oes")
p<-c(w=9,n="ty",j="ooe")
mn<-list(i,p,i,p,i,p,i)
Now I'd like to select the list elements with the shortest length (the i's) and append "unknown" to them before creating a data frame. How can I do this?
EDIT: In the end I'd like every i element in mn to read w=5, n="oes", j="unknown" before mn (including the p elements) is turned into a data frame.

To find the length of each element in your list, use length wrapped in sapply:
len <- sapply(mn, length)
len
[1] 2 3 2 3 2 3 2
Now, to identify only those elements that have lengths equal to the shortest length:
which(len==min(len))
[1] 1 3 5 7
Use subsetting and as.data.frame to create your data.frame. This data.frame will get somewhat arbitrary automatic column names, so I rename them:
df <- as.data.frame(mn[which(len==min(len))])
names(df) <- seq_len(ncol(df))
df
1 2 3 4
w 5 5 5 5
n oes oes oes oes
You will have to clarify what you mean by "appending unknown" to this data.frame.
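If it means padding each short element with a j entry before the conversion, a minimal sketch (assuming the extra field should be named j, as in p) would be:
short <- which(len == min(len))
mn[short] <- lapply(mn[short], function(x) c(x, j = "unknown"))
After this, every element of mn has the entries w, n and j, and the whole list converts cleanly with as.data.frame(mn).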

Another possibility is:
all.names <- unique(unlist(lapply(mn, names)))
do.call('rbind', lapply(mn, function(r) {
  data.frame(sapply(all.names, function(v) r[v], simplify = FALSE))
}))
which gives:
w n j
w 5 oes <NA>
w1 9 ty ooe
w2 5 oes <NA>
w3 9 ty ooe
w4 5 oes <NA>
w5 9 ty ooe
w6 5 oes <NA>
But I get the feeling there's a much neater route to this solution...
EDIT
If you want unknown rather than <NA>, you can change the inner sapply to:
sapply(all.names, function(v) if (is.na(r[v])) 'unknown' else r[v], simplify = FALSE)
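Putting the pieces together, the complete call with that substitution (the same code as above, just with the modified inner sapply) is:
all.names <- unique(unlist(lapply(mn, names)))
do.call('rbind', lapply(mn, function(r) {
  data.frame(sapply(all.names,
                    function(v) if (is.na(r[v])) 'unknown' else r[v],
                    simplify = FALSE))
}))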

This is not very elegant, but it might do the trick:
maxlength <- max(sapply(mn,length))
## make a new list, with the "missing" entries replaced with "unknown"
mn2 <- lapply(mn,function(x)c(x,rep('unknown',maxlength - length(x))))
## convert to a data.frame
mn3 <- data.frame(matrix(unlist(mn2), nrow = maxlength))
This gives the following:
> mn3
X1 X2 X3 X4 X5 X6 X7
1 5 9 5 9 5 9 5
2 oes ty oes ty oes ty oes
3 unknown ooe unknown ooe unknown ooe unknown
However, it is better practice to use NA rather than "unknown".
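For reference, the NA-based variant only changes the fill value in the padding step:
mn2 <- lapply(mn, function(x) c(x, rep(NA, maxlength - length(x))))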

Related

Error: std::bad_alloc Rstudio

After running this code:
t1 <-Sys.time()
df.m <- left_join(df.h,daRta3,by=c("year","month","MA","day"))
t2 <- Sys.time()
difftime(t2,t1)
I get this error:
Error: std::bad_alloc
The result I am trying to create has 74495 × 2695 ≈ 200×10^6 rows.
The computer on which I run the code has 20 GB of RAM.
I tried memory.limit(), but it did not solve my issue.
Examine the cardinality of your join key:
Is the c("year","month","MA","day") unique in both df.h and daRta3?
What are the most frequent values?
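A quick way to check both, as a sketch (assuming dplyr is loaded and the names from the question):
library(dplyr)
count(df.h, year, month, MA, day, sort = TRUE) %>% head()
count(daRta3, year, month, MA, day, sort = TRUE) %>% head()
If the top counts are greater than 1 in both tables, the join multiplies the matching rows, and the result can blow up well past the available RAM.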
Also check NA values: left_join can treat NA values as equal or different:
> tibble(x = c(NA, NA, NA)) %>% left_join(., ., by = 'x')
# A tibble: 9 x 1
x
<lgl>
1 NA
2 NA
3 NA
4 NA
5 NA
6 NA
7 NA
8 NA
9 NA
> tibble(x = c(NA, NA, NA)) %>% left_join(., ., by = 'x', na_matches = 'never')
# A tibble: 3 x 1
x
<lgl>
1 NA
2 NA
3 NA
If the order and values of c("year","month","MA","day") are guaranteed to be the same in both tables, then a simple cbind or bind_cols might be an efficient alternative to the join.
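A minimal sketch of that alternative (assuming both frames really do hold the same rows in the same order):
library(dplyr)
df.m <- bind_cols(df.h, select(daRta3, -year, -month, -MA, -day))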

R Wildcard data frame merging

I'm trying to merge a data frame and a vector not by exact string matches in a column, but by wildcard string matches. To clarify, say you have this data frame:
v <-data.frame(X1=c("AGTACAGT","AGTGAAGT","TGTA","GTTA","GAT","GAT"),X2=c(1,1,1,1,1,1))
# X1 X2
# 1 AGTACAGT 1
# 2 AGTGAAGT 1
# 3     TGTA 1
# 4     GTTA 1
# 5      GAT 1
# 6      GAT 1
I want to create a data frame by assigning a different color to each of the patterns AGT.{3}GT, .(T|G)TA and GAT, and adding a new column X3 that holds that color. So something like this:
# X1 X2 X3
# 1 AGTACAGT 1 "#FE7F01"
# 2 AGTGAAGT 2 "#FE7F01"
# 3 TGTA 3 "#FE7F00"
# 4 GTTA 4 "#FE7F00"
# 5 GAT 5 "#FE8002"
# 6 GAT 6 "#FE8002"
So far I am using this to create colors for each level, but I don't know how to count how many "wildcard levels", as opposed to singular levels, there are:
x <- nlevels(v$X1)
x.colors2 <- colorRampPalette(brewer.pal(8,"Paired"))(x)
G <- data.frame("X1"=levels(v$X1),"X3"=x.colors2)
v <- merge(v,G)
Here's a solution.
Find patterns:
pat <- c("^AGT.{3}GT$", "^.(T|G)TA$", "^GAT$")
n <- length(pat)
indList <- lapply(pat, grep, v$X1)
Generate colors:
library(RColorBrewer)
col <- colorRampPalette(brewer.pal(8, "Paired"))(n)
Add the colors to the data frame (colFull holds the colors in pattern order; indexing it with order(unlist(indList)) rearranges them into row order):
colFull <- rep(col, sapply(indList, length))
v$color <- colFull[order(unlist(indList))]
The result:
v
# X1 X2 color
# 1 AGTACAGT 1 #A6CEE3
# 2 AGTGAAGT 1 #A6CEE3
# 3 TGTA 1 #979C62
# 4 GTTA 1 #979C62
# 5 GAT 1 #FF7F00
# 6 GAT 1 #FF7F00
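If the order(unlist(indList)) indexing feels opaque, an equivalent and more explicit sketch assigns each pattern's color directly to its matching rows:
v$color <- NA
for (k in seq_len(n)) v$color[indList[[k]]] <- col[k]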

Which pattern occurs the most in a matrix - R (UPDATE)

UPDATE 2
I've added some code (and an explanation) that I wrote myself at the end of this question. It is, however, a suboptimal solution (both in coding efficiency and in the resulting output), but it more or less manages to make a selection of items that adheres to the constraints. If you have any ideas on how to improve it (again, both in efficiency and in output), please let me know.
1. Updated Post
Please look below for the initial question and sample code. Thanks to alexis_laz's answer, the problem was solved for a small number of items. However, when the number of items becomes too large, the combn function in R can no longer handle it and fails with the error "invalid 'ncol' value (too large or NA)". Since my dataset does have a lot of items, I was wondering whether replacing some of his code (shown below) with C++ would provide a solution, and if so, what code I should use for this? Thanks!
This is the code as provided by alexis_laz:
ff = function(x, No_items, No_persons)
{
  do.call(rbind,
          lapply(No_items:ncol(x),
                 function(n) {
                   col_combs = combn(seq_len(ncol(x)), n, simplify = F)
                   persons = lapply(col_combs, function(j) rownames(x)[rowSums(x[, j, drop = F]) == n])
                   keep = unlist(lapply(persons, function(z) length(z) >= No_persons))
                   data.frame(persons = unlist(lapply(persons[keep], paste, collapse = ", ")),
                              items = unlist(lapply(col_combs[keep], function(z) paste(colnames(x)[z], collapse = ", "))))
                 }))
}
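For a sense of scale, the error is no surprise: combn must materialize every column combination, and the counts explode combinatorially. A quick check with the item counts mentioned below:
choose(220, 15)
# roughly 6.4e+22 candidate sets of 15 items out of 220
No reimplementation in C++ can enumerate that many sets; the search space itself has to be pruned.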
2. Initial Post
Currently I'm working on a set of data coming from adaptive measurement, which means that not all persons have answered the same items. For my analysis, however, I need a dataset that contains only items that have been answered by all persons (or by a subset of them).
I have a matrix object in R with rows = persons (100000) and columns = items (220), and a 1 in a cell if the person answered the item and a 0 if not.
How can I use R to determine which combination of at least 15 items was answered by the largest number of persons?
Hopefully the question is clear (if not please ask me for more details and I will gladly provide those).
Thanks in advance.
Joost
Edit:
Below is a sample matrix with the items (A:E) as columns and persons (1:5) as rows.
mat <- matrix(c(1,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,1,1,0,0,1,1,1,1,0),5,5,byrow=T)
colnames(mat) <- c("A","B","C","D","E")
rownames(mat) <- 1:5
> mat
  A B C D E
1 1 1 1 0 0
2 1 1 0 1 1
3 1 1 1 0 1
4 0 1 1 0 0
5 1 1 1 1 0
mat[1,1] = 1 means that person 1 has given a response to item 1.
Now (in this example) I'm interested in finding out which set of at least 3 items was answered by at least 3 people. So here I can just go through all possible combinations of 3, 4 and 5 items and check, for each combination, how many people have a 1 in the matrix for every item in it.
This results in choosing the item combination A, B and C, since it is the only combination of three items that was answered by 3 people (namely persons 1, 3 and 5).
For my real dataset I want to do the same, but for a combination of at least 10 items that a group of at least 75 people all responded to. And since I have a lot of data, preferably not by hand as in the example.
I'm thus looking for a function in R that lets me specify the minimum number of items and the minimum number of persons, and then gives me all combinations of items and persons that meet these constraints or exceed them.
Thus for the example matrix it would be something like:
f <- function(data,no.items,no.persons){
#code
}
> f(mat,3,3)
no.item no.pers items persons
1 3 3 A, B, C 1, 3, 5
Or in case of at least 2 items that are made by at least 3 persons;
> f(mat,2,3)
no.item no.pers items persons
1 2 4 A, B 1, 2, 3, 5
2 2 3 A, C 1, 3, 5
3 2 4 B, C 1, 3, 4, 5
4 3 3 A, B, C 1, 3, 5
Hopefully this clears up what my question is about. Thanks for the quick replies I have already received!
3. Written Code
Below is the code I wrote today. It takes each item once as a starting point and then looks for the item that was answered most often by the people who also responded to the starting item. It then takes these two items, looks for a third, and repeats this until the number of people who responded to all selected items drops below the given limit. One drawback of the code is that it takes some time to run (the runtime grows roughly exponentially with the number of items). A second drawback is that it still does not evaluate all possible combinations of items: the starting item and the subsequently chosen item may have many respondents in common, but if the chosen item has almost no overlap with the other (not yet chosen) items, the sample might shrink very fast. Conversely, choosing an item with somewhat fewer respondents in common with the starting item, but with many connections to other items, might yield a much larger final collection of selected items than the code below produces. So again, suggestions and improvements in both directions are welcome!
set.seed(512)
mat <- matrix(rbinom(1000000, 1, .6), 10000, 100)
colnames(mat) <- 1:100
fff <- function(data, persons, items){
  xx <- list()
  for(j in 1:ncol(data)){
    ## start from item j and record how many persons answered it
    d <- matrix(c(j, length(which(data[,j] == 1))), 1, 2)
    colnames(d) <- c("item", "n")
    t <- persons + 1
    a <- j
    ## greedily add the item that retains the most respondents,
    ## until the respondent count drops below the limit
    while(t >= persons){
      b <- numeric(0)
      for(i in 1:ncol(data)){
        z <- c(a, i)
        if(i %in% a){
          b[i] <- 0
        } else {
          b[i] <- length(which(rowSums(data[,z]) == length(z)))
        }
      }
      best <- c(which.max(b), max(b))
      d <- rbind(d, best)
      a <- c(a, best[1])
      t <- max(b)
    }
    print(j)
    xx[[j]] <- d
  }
  ## summarise, per starting item, the selected items and respondent count
  ## (the last row of each d fell below the limit, so it is dropped)
  zz <- matrix(c(0, 0, rep(NA, ncol(data))), length(xx), ncol(data) + 2, byrow = TRUE)
  colnames(zz) <- c("n.pers", "n.item", rep("I", ncol(data)))
  for(i in 1:length(xx)){
    keep <- 1:(nrow(xx[[i]]) - 1)
    zz[i,1] <- xx[[i]][nrow(xx[[i]]) - 1, 2]
    zz[i,2] <- length(keep)
    zz[i, 3:(zz[i,2] + 2)] <- unname(xx[[i]][keep, 1])
  }
  zz <- zz[, colSums(is.na(zz)) < nrow(zz)]
  ## keep only the runs that selected at least `items` items
  zz <- zz[which(zz[,2] >= items), ]
  zz <- as.data.frame(zz)
  return(zz)
}
zz <- fff(mat, 110, 8)
> head(zz)
n.pers n.item I I I I I I I I I I
1 156 9 1 41 13 80 58 15 91 12 39 NA
2 160 9 2 27 59 13 81 16 15 6 92 NA
3 158 9 3 59 83 32 25 80 14 41 16 NA
4 160 9 4 24 27 71 32 10 63 42 51 NA
5 114 10 5 59 66 27 47 13 44 63 30 52
6 158 9 6 13 56 61 12 59 8 45 81 NA
#col 1 = number of persons in sample
#col 2 = number of items in sample
#col 3:12 = which items create this sample (NA if n.item is less than 10)
To follow up on my comment, something like:
set.seed(1618)
mat <- matrix(rbinom(1000, 1, .6), 100, 10)
colnames(mat) <- sample(LETTERS, 10)
rownames(mat) <- sprintf('person%s', 1:100)
mat1 <- mat[rowSums(mat) > 5, ]
head(mat1)
# A S X D R E Z K P C
# person1 1 1 1 0 1 1 1 1 1 1
# person3 1 0 1 1 0 1 0 0 1 1
# person4 1 0 1 1 1 1 1 0 1 1
# person5 1 1 1 1 1 0 1 1 0 0
# person6 1 1 1 1 0 1 0 1 1 0
# person7 0 1 1 1 1 1 1 1 0 0
table(rowSums(mat1))
# 6 7 8 9
# 24 23 21 5
tab <- table(sapply(1:nrow(mat1), function(x)
paste(names(mat1[x, ][mat1[x, ] == 1]), collapse = ',')))
data.frame(tab[tab > 1])
# tab.tab...1.
# A,S,X,D,R,E,P,C 2
# A,S,X,D,R,E,Z,P,C 2
# A,S,X,R,E,Z,K,C 3
# A,S,X,R,E,Z,P,C 2
# A,S,X,Z,K,P,C 2
Here is another idea that matches your output:
ff = function(x, No_items, No_persons)
{
  do.call(rbind,
          lapply(No_items:ncol(x),
                 function(n) {
                   col_combs = combn(seq_len(ncol(x)), n, simplify = F)
                   persons = lapply(col_combs, function(j) rownames(x)[rowSums(x[, j, drop = F]) == n])
                   keep = unlist(lapply(persons, function(z) length(z) >= No_persons))
                   data.frame(persons = unlist(lapply(persons[keep], paste, collapse = ", ")),
                              items = unlist(lapply(col_combs[keep], function(z) paste(colnames(x)[z], collapse = ", "))))
                 }))
}
ff(mat, 3, 3)
# persons items
#1 1, 3, 5 A, B, C
ff(mat, 2, 3)
# persons items
#1 1, 2, 3, 5 A, B
#2 1, 3, 5 A, C
#3 1, 3, 4, 5 B, C
#4 1, 3, 5 A, B, C
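A closing thought, not from the original answers: this problem is essentially frequent itemset mining, with persons as transactions and answered items as the items, so a dedicated miner should scale far better than combn. A hedged sketch with the arules package:
library(arules)
trans <- as(mat == 1, "transactions")  # logical matrix -> transactions
# itemsets of at least 3 items answered by at least 3 persons
sets <- eclat(trans, parameter = list(support = 3/nrow(mat), minlen = 3))
inspect(sets)
This finds the qualifying item sets; recovering which persons support each set is then a rowSums check, as in ff above.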

improve my code for collapsing a list of data.frames

Dear StackOverFlowers (flowers in short),
I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (by adding another column) which rows came from which element of the list. This is what I've got so far.
This is the list that needs to be collapsed/stacked.
> walk.sample
[[1]]
walker x y
1073 3 228.8756 -726.9198
1086 3 226.7393 -722.5561
1081 3 219.8005 -728.3990
1089 3 225.2239 -727.7422
1032 3 233.1753 -731.5526
[[2]]
walker x y
1008 3 205.9104 -775.7488
1022 3 208.3638 -723.8616
1072 3 233.8807 -718.0974
1064 3 217.0028 -689.7917
1026 3 234.1824 -723.7423
[[3]]
[1] 3
[[4]]
walker x y
546 2 629.9041 831.0852
524 2 627.8698 873.3774
578 2 572.3312 838.7587
513 2 633.0598 871.7559
538 2 636.3088 836.6325
1079 3 206.3683 -729.6257
1095 3 239.9884 -748.2637
1005 3 197.2960 -780.4704
1045 3 245.1900 -694.3566
1026 3 234.1824 -723.7423
I have written a function that adds a column denoting which element the rows came from, and then appends each piece to a growing data.frame.
collapseToDataFrame <- function(x) { # collapse list to a data.frame with a twist
  walk.df <- data.frame()
  for (i in 1:length(x)) {
    n.rows <- nrow(x[[i]])
    if (length(x[[i]]) > 1) {
      temp.df <- cbind(x[[i]], rep(i, n.rows))
      names(temp.df) <- c("walker", "x", "y", "session")
      walk.df <- rbind(walk.df, temp.df)
    } else {
      cat("Empty list", "\n")
    }
  }
  return(walk.df)
}
> collapseToDataFrame(walk.sample)
Empty list
Empty list
walker x y session
3 1 -604.5055 -123.18759 1
60 1 -562.0078 -61.24912 1
84 1 -594.4661 -57.20730 1
9 1 -604.2893 -110.09168 1
43 1 -632.2491 -54.52548 1
1028 3 240.3905 -724.67284 1
1040 3 232.5545 -681.61225 1
1073 3 228.8756 -726.91980 1
1091 3 209.0373 -740.96173 1
1036 3 248.7123 -694.47380 1
I'm curious whether this can be done more elegantly, with perhaps do.call() or some other more generic function?
I think this will work...
lengths <- sapply(walk.sample, function(x) if (is.null(nrow(x))) 0 else nrow(x))
cbind(do.call(rbind, walk.sample[lengths > 1]),
session = rep(1:length(lengths), ifelse(lengths > 1, lengths, 0)))
I'm not claiming this is the most elegant approach, but I think it works:
library(plyr)
ldply(sapply(1:length(walk.sample), function(i)
if (length(walk.sample[[i]]) > 1)
cbind(walk.sample[[i]],session=rep(i,nrow(walk.sample[[i]])))
),rbind)
EDIT
After applying Marek's apt remarks
do.call(rbind,lapply(1:length(walk.sample), function(i)
if (length(walk.sample[[i]]) > 1)
cbind(walk.sample[[i]],session=i) ))
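For what it's worth, a compact modern alternative (a sketch, not from the original thread): keep the original indices as names, drop the non-data.frame elements, and let dplyr label the pieces:
library(dplyr)
names(walk.sample) <- seq_along(walk.sample)
walk.df <- bind_rows(Filter(is.data.frame, walk.sample), .id = "session")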

R: What's the easiest way to print out pairs of values from a data.frame?

I have a data.frame:
df<-data.frame(a=c("x","x","y","y"),b=c(1,2,3,4))
> df
a b
1 x 1
2 x 2
3 y 3
4 y 4
What's the easiest way to print out each pair of values as a list of strings like this:
"x1", "x2", "y1", "y2"
apply(df, 1, paste, collapse="")
with(df, paste(a, b, sep=""))
And this should be faster than apply.
About timing
For 10000 rows we get:
df <- data.frame(
a = sample(c("x","y"), 10000, replace=TRUE),
b = sample(1L:4L, 10000, replace=TRUE)
)
N = 100
mean(replicate(N, system.time( with(df, paste(a, b, sep="")) )["elapsed"]), trim=0.05)
# 0.005778
mean(replicate(N, system.time( apply(df, 1, paste, collapse="") )["elapsed"]), trim=0.05)
# 0.09611
So the speed increase is visible even for a few thousand rows.
That's because Shane's solution calls paste separately for each row, so there are nrow(df) calls to paste, while my solution makes only one call.
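As a small aside: since R 2.15.0 there is also paste0, which drops the sep argument:
with(df, paste0(a, b))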
Also, you can use the sqldf library:
library("sqldf")
df<-data.frame(a=c("x","x","y","y"),b=c(1,2,3,4))
result <- sqldf("SELECT a || cast(cast(b as integer) as text) as concat FROM df")
You will get the following result:
concat
1 x1
2 x2
3 y3
4 y4