KableExtra using row_spec in collapsed rows - r-markdown

I have a dataset that is set up like the following mtcars summary:
library(dplyr)
library(kableExtra)

mtcarssumm <- mtcars %>%
  group_by(cyl, gear) %>%
  summarise(meanMPG = mean(mpg)) %>%
  mutate(gear = as.character(gear)) %>%
  bind_rows(group_by(mtcars, cyl) %>%
              summarise(meanMPG = mean(mpg)) %>%
              mutate(gear = 'Total')) %>%
  arrange(cyl)
cyl gear meanMPG
<dbl> <chr> <dbl>
1 4 3 21.5
2 4 4 26.9
3 4 5 28.2
4 4 Total 26.7
5 6 3 19.8
6 6 4 19.8
7 6 5 19.7
8 6 Total 19.7
9 8 3 15.0
10 8 5 15.4
11 8 Total 15.1
What I would like is to create a table using kableExtra where the rows with gear = Total are shaded. I'm currently using row_spec to do this. I'm also using collapse_rows to collapse down the repeated values in cyl:
mtcarssumm %>%
kbl(booktabs = TRUE) %>%
collapse_rows(1, latex_hline = "major", valign = "middle") %>%
kable_styling() %>%
row_spec(which(mtcarssumm$gear == "Total"), background = "#e5e5e5")
When I convert to PDF, the shading spans the entire row, including the collapsed cyl column.
Is there a way to use collapse_rows and row_spec so the shading appears only in the rows that are not collapsed? (Note: this code gives the desired result in HTML output but not in PDF output.)
Thank you!

I figured out a fix: I changed the background of the first (collapsed) column to white, so the row shading no longer shows through the merged cyl cells:
mtcarssumm %>%
kbl(booktabs = TRUE) %>%
collapse_rows(1,latex_hline = "major", valign = "middle") %>%
kable_styling() %>%
row_spec(which(mtcarssumm$gear == "Total"), background = "#e5e5e5") %>%
column_spec(1, background = "white")

How to remove non-alphabetic characters and convert all letters to lowercase in R?

In the following string:
"I may opt for a yam for Amy, May, and Tommy."
How can I remove non-alphabetic characters, convert all letters to lowercase, and sort the letters within each word in R?
I would also like to sort the words in the sentence and remove the duplicates.
You could use stringi:
library(stringi)
txt <- "I may opt for a yam for Amy, May, and Tommy."
unique(stri_sort(stri_trans_tolower(stri_extract_all_words(txt, simplify = TRUE))))
Which gives:
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"
Update
As mentioned by @DavidArenburg, I overlooked the "sort the letters within words" part of your question. You didn't provide a desired output and no immediate application comes to mind, but assuming you want to identify which words have a matching counterpart (string distance of 0):
library(stringdist)
library(magrittr)
unique(stri_sort(stri_trans_tolower(stri_extract_all_words(txt, simplify = TRUE)))) %>%
  stringdistmatrix(., ., useNames = "strings", method = "qgram") %>%
  #       a amy and for i may opt tommy yam
  # a     0   2   2   4 2   2   4     6   2
  # amy   2   0   4   6 4   0   6     4   0
  # and   2   4   0   6 4   4   6     8   4
  # for   4   6   6   0 4   6   4     6   6
  # i     2   4   4   4 0   4   4     6   4
  # may   2   0   4   6 4   0   6     4   0
  # opt   4   6   6   4 4   6   0     4   6
  # tommy 6   4   8   6 6   4   4     0   4
  # yam   2   0   4   6 4   0   6     4   0
  apply(., 1, function(x) sum(x == 0, na.rm = TRUE))
# a amy and for i may opt tommy yam
# 1 3 1 1 1 3 1 1 3
Words with more than one 0 per row ("amy", "may", "yam") have a scrambled counterpart.
str <- "I may opt for a yam for Amy, May, and Tommy."
## Clean the words (just keep letters and convert to lowercase)
words <- strsplit(tolower(gsub("[^A-Za-z ]", "", str)), " ")[[1]]
## split the words into characters and sort them
sortedWords <- sapply(words, function(word) sort(unlist(strsplit(word, ""))))
## Join the sorted letters back together
sapply(sortedWords, paste, collapse="")
# i may opt for a yam for amy may and
# "i" "amy" "opt" "for" "a" "amy" "for" "amy" "amy" "adn"
# tommy
# "mmoty"
## If you want to convert result back to string
do.call(paste, lapply(sortedWords, paste, collapse=""))
# [1] "i amy opt for a amy for amy amy adn mmoty"
stringr will let you work on all character sets in R and at C-speed, and magrittr will let you use a piping idiom that works well for your needs:
library(stringr)
library(magrittr)
txt <- "I may opt for a yam for Amy, May, and Tommy."
txt %>%
str_to_lower %>% # lowercase
str_replace_all("[[:punct:][:digit:][:cntrl:]]", "") %>% # only alpha
str_replace_all("[[:space:]]+", " ") %>% # single spaces
str_split(" ") %>% # tokenize
extract2(1) %>% # str_split returns a list
sort %>% # sort
unique # unique words
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"
The qdap package that I maintain has the bag_o_words function that works well for this:
txt <- "I may opt for a yam for Amy, May, and Tommy."
library(qdap)
unique(sort(bag_o_words(txt)))
## [1] "a" "amy" "and" "for" "i" "may" "opt" "tommy" "yam"

Removing brackets from a dataset

I have imported a dataset in R with 10 columns and 100 rows, but in a few columns there are brackets ([]) and commas along with the values. How can I get rid of them?
As an example, consider 4 of the columns and 2 of the rows:
V1 V2 V3 V4
3( [4 ([5 8
(1 5 9 [10,
And what I want is:
V1 V2 V3 V4
3 4 5 8
1 5 9 10
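For a reproducible example, a small data frame matching the table above might look like the following (the values are typed in by hand here; in practice they come from the imported file):
mydf <- data.frame(V1 = c("3(", "(1"),
                   V2 = c("[4", "5"),
                   V3 = c("([5", "9"),
                   V4 = c("8", "[10,"),
                   stringsAsFactors = FALSE)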
Just use gsub:
mydf[] <- lapply(mydf, function(x) gsub("[][(),]", "", x))
mydf
# V1 V2 V3 V4
# 1 3 4 5 8
# 2 1 5 9 10
Instead of lapply, you can also use as.matrix:
mydf[] <- gsub("[][(),]", "", as.matrix(mydf))
mydf
# V1 V2 V3 V4
# 1 3 4 5 8
# 2 1 5 9 10
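Note that gsub returns character values, so the cleaned columns are character. If you need them as numbers afterwards, you can convert them in an extra step, for example:
mydf[] <- lapply(mydf, as.numeric)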

Parsing irregular character strings for numbers and put into structured format using regular expressions in R

I have a vector of irregularly structured character data from which I want to find and extract particular numbers. For example, take this piece of a much larger dataset:
x <- c("2001 Tax # $25.19/Widget, 2002 Est Tax # $10.68/Widget; 2000 Est Int # $55.67/Widget",
"1999 Tax # $81.16/Widget",
"1998 Tax # $52.72/Widget; 2001 Est Int # $62.49/Widget",
"1994 Combined Tax/Int # $68.33/widget; 1993 Est Int # $159.67/Widget",
"1993 Combined Tax/Int # $38.33/widget; 1992 Est Int # $159.67/Widget",
"2006 Tax # $129.21/Widget, 1991 Est Tax # $58.19/Widget; 1991 Est Int # $30.95/Widget")
and so on. Reading the table for a larger vector shows that most of the entries are separated by semi-colons or commas, and that there are only a limited number of terms used -- the year, Tax, Int, Combined, Est -- with occasional variations in entries (like ";" versus ",", or "Widget" versus "widget").
I'd like to extract each of the numbers related to the terms above into a more structured data table, such as:
[id] [year] [number] [cat] [est]
row1 2001 25.19 Tax
row1 2002 10.68 Tax Est
row1 2000 55.67 Int Est
row2 1999 81.16 Tax
row3 1998 52.72 Tax
row3 2001 62.49 Int Est
....
or else maybe a more compact / sparse representation like:
[id] [1999tax] [2001tax] [2002esttax] [2000estint]
row1 0 25.19 10.68 55.67
row2 81.16 0 0 0
If that makes sense -- I ultimately need to put this into a regression model.
My first approach has been to write the following pseudocode:
split strings into list using strsplit() on ";" or ","
extract all years
operate on list elements using function that extracts numbers between "$" and "/"
return structured table columns
So far, I've only gotten this far:
pieces.of.x <- strsplit(x, "[;,]"); head(pieces.of.x)
which gives:
[[1]]
[1] "2001 Tax # $25.19/Widget" " 2002 Est Tax # $10.68/Widget" " 2000 Est Int # $55.67/Widget"
[[2]]
[1] "1999 Tax # $81.16/Widget"
[[3]]
[1] "1998 Tax # $52.72/Widget" " 2001 Est Int # $62.49/Widget"
[[4]]
[1] "1994 Combined Tax/Int # $68.33/widget" " 1993 Est Int # $159.67/Widget"
[[5]]
[1] "1993 Combined Tax/Int # $38.33/widget" " 1992 Est Int # $159.67/Widget"
[[6]]
[1] "2006 Tax # $129.21/Widget" " 1991 Est Tax # $58.19/Widget" " 1991 Est Int # $30.95/Widget"
Unfortunately, I don't know enough about lapply() and regular expressions ("regex") in R to write a procedure robust enough to extract the years, operate on each sub-vector of elements, and then return them.
Thanks in advance for reading.
The stringr package is pretty useful when dealing with strings, and I bet someone could even write a single regular expression with named capture groups to get a similar solution...
[edit: missed the combined entries]
library(stringr)
library(data.table)
# Split the row entries
x <- strsplit(x, "[,;]")
# Generate the entry identifiers.
i <- 0
id <- unlist( sapply( x, function(r) rep(i<<-i+1, length(r) ) ) )
# Extract the desired values
x <- unlist( x, recursive = FALSE )
year.re <- "(^\\s?([[:digit:]]{4})\\s)"
value.re <- "[$]([[:digit:]]+[.][[:digit:]]{2})[/]"
object.re <- "[/]([[:alnum:]]+)$"
Cats<- c("Tax","Int","Combination")
x <- lapply( x, function(str) {
c( Year=str_extract( str, year.re),
Category=Cats[ grepl( "Tax", str)*1 + grepl( "Int", str)*2 ],
Estimate=grepl( "Est", str),
Value=str_match( str, value.re)[2],
Object=str_match( str, object.re)[2] )
})
# Create a data object.
data.table( ID=id, do.call(rbind,x), key=c("Year") )
## ID Year Category Estimate Value Object
## 1: 6 1991 Tax TRUE 58.19 Widget
## 2: 6 1991 Int TRUE 30.95 Widget
## 3: 5 1992 Int TRUE 159.67 Widget
## 4: 4 1993 Int TRUE 159.67 Widget
## 5: 5 1993 Combination FALSE 38.33 widget
## 6: 4 1994 Combination FALSE 68.33 widget
## 7: 3 1998 Tax FALSE 52.72 Widget
## 8: 2 1999 Tax FALSE 81.16 Widget
## 9: 1 2000 Int TRUE 55.67 Widget
## 10: 3 2001 Int TRUE 62.49 Widget
## 11: 1 2001 Tax FALSE 25.19 Widget
## 12: 1 2002 Tax TRUE 10.68 Widget
## 13: 6 2006 Tax FALSE 129.21 Widget
This is similar to one of the other answers, and it also keeps track of the line number (your [id] column).
matches <- regmatches(x,gregexpr("[0-9]{4} [^#]+# \\$[0-9.]+",x))
lengths <- sapply(matches,length)
z <- unlist(matches)
z <- regmatches(z,regexec("([0-9]{4}) ([^#]+) # \\$([0-9.]+)",z))
df <- t(sapply(z,function(x)c(year=x[2], number=x[4], cat=x[3])))
df <- data.frame(id=rep(1:length(x),times=lengths),df, stringsAsFactors=F)
df$est <- ifelse(grepl("Est",df$cat),"Est","")
df$cat <- regmatches(df$cat,regexpr("[^ /]+$",df$cat))
df
# id year number cat est
# 1 1 2001 25.19 Tax
# 2 1 2002 10.68 Tax Est
# 3 1 2000 55.67 Int Est
# 4 2 1999 81.16 Tax
# 5 3 1998 52.72 Tax
# 6 3 2001 62.49 Int Est
# 7 4 1994 68.33 Int
# 8 4 1993 159.67 Int Est
# 9 5 1993 38.33 Int
# 10 5 1992 159.67 Int Est
# 11 6 2006 129.21 Tax
# 12 6 1991 58.19 Tax Est
# 13 6 1991 30.95 Int Est
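If you ultimately need the compact / sparse layout from the question for the regression model, one option is to pivot this long data frame to wide form. A minimal sketch using base R's reshape, where the key column naming is just illustrative:
df$number <- as.numeric(df$number)
df$key <- tolower(paste0(df$year, df$est, df$cat))   # e.g. "2002esttax"
wide <- reshape(df[, c("id", "key", "number")],
                idvar = "id", timevar = "key", direction = "wide")
wide[is.na(wide)] <- 0   # absent combinations become 0, as in the example layout
# columns come out as number.<key>, one per year/est/cat combination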
To create exactly the dataframe you are asking for, you can use a few tricks like strsplit, regular expressions, and rbind.
x <- unlist(strsplit(x, ',|;'))
bits <- regmatches(x,gregexpr('(\\d|\\.)+|(Tax|Int|Est)', x))
df <- do.call(rbind, lapply(bits, function(info) {
data.frame(year = info[[1]], number = tail(info, 1)[[1]],
cat = if ('Tax' %in% info) 'Tax' else 'Int',
est = if ('Est' %in% info) 'Est' else '')
}))
df$cat <- factor(df$cat); df$est <- factor(df$est);
which gives us
year number cat est
1 2001 25.19 Tax
2 2002 10.68 Tax Est
3 2000 55.67 Int Est
4 1999 81.16 Tax
5 1998 52.72 Tax
You can extract the numbers out using:
regmatches(x, gregexpr('(\\d|\\.)+', x))
which yields
[[1]]
[1] "2001" "25.19" "2002" "10.68" "2000" "55.67"
[[2]]
[1] "1999" "81.16"
[[3]]
[1] "1998" "52.72" "2001" "62.49"
[[4]]
[1] "1994" "68.33" "1993" "159.67"
[[5]]
[1] "1993" "38.33" "1992" "159.67"
[[6]]
[1] "2006" "129.21" "1991" "58.19" "1991" "30.95"
However, if you can assume every year's info is separated by a , or ;, try this:
x <- unlist(strsplit(x, ',|;'))
nums <- regmatches(x,gregexpr('(\\d|\\.)+', x))
df <- data.frame(matrix(as.numeric(unlist(nums)), ncol = 2, byrow = TRUE))
colnames(df) <- c('Year', 'Number')
which looks like
Year Number
1 2001 25.19
2 2002 10.68
3 2000 55.67
4 1999 81.16
5 1998 52.72

Dataframes in a list; adding a new variable with name of dataframe

I have a list of dataframes which I eventually want to merge while maintaining a record of their original dataframe name or list index. This will allow me to subset etc across all the rows. To accomplish this I would like to add a new variable 'id' to every dataframe, which contains the name/index of the dataframe it belongs to.
Edit: "In my real code the dataframe variables are created from reading multiple files using the following code, so I don't have actual names only those in the 'files.to.read' list which I'm unsure if they will align with the dataframe order:
mylist <- llply(files.to.read, read.csv)
A few methods have been highlighted in several posts:
Working-with-dataframes-in-a-list-drop-variables-add-new-ones and
Using-lapply-with-changing-arguments
I have tried two similar methods, the first using the index list:
df1 <- data.frame(x=c(1:5),y=c(11:15))
df2 <- data.frame(x=c(1:5),y=c(11:15))
mylist <- list(df1,df2)
# Adds a new column 'id' with a value of 5 to every row in every dataframe.
# I WANT to change the value based on the list index.
mylist1 <- lapply(mylist,
function(x){
x$id <- 5
return (x)
}
)
#Example of what I WANT, instead of '5'.
#> mylist1
#[[1]]
#x y id
#1 1 11 1
#2 2 12 1
#3 3 13 1
#4 4 14 1
#5 5 15 1
#
#[[2]]
#x y id
#1 1 11 2
#2 2 12 2
#3 3 13 2
#4 4 14 2
#5 5 15 2
The second attempts to pass the names() of the list.
# I WANT it to add a new column 'id' with the name of the respective dataframe
# to every row in every dataframe.
mylist2 <- lapply(names(mylist),
function(x){
portfolio.results[[x]]$id <- "dataframe name here"
return (portfolio.results[[x]])
}
)
#Example of what I WANT, instead of 'dataframe name here'.
# mylist2
#[[1]]
#x y id
#1 1 11 df1
#2 2 12 df1
#3 3 13 df1
#4 4 14 df1
#5 5 15 df1
#
#[[2]]
#x y id
#1 1 11 df2
#2 2 12 df2
#3 3 13 df2
#4 4 14 df2
#5 5 15 df2
But the names() function doesn't work on a list of dataframes; it returns NULL.
Could I use seq_along(mylist) in the first example?
Any ideas, or a better way to handle the whole "merge with source id" problem?
Edit - Added solution below: I've implemented a solution using Hadley's suggestion and Tommy's nudge, which looks something like this:
files.to.read <- list.files(datafolder, pattern="\\_D.csv$", full.names=FALSE)
mylist <- llply(files.to.read, read.csv)
all <- do.call("rbind", mylist)
all$id <- rep(files.to.read, sapply(mylist, nrow))
I used the files.to.read vector as the id for each dataframe.
I also stopped using merge_recurse(), shown below, as it was very slow for some reason:
all <- merge_recurse(mylist)
Thanks everyone.
Personally, I think it's easier to add the names after collapse:
df1 <- data.frame(x=c(1:5),y=c(11:15))
df2 <- data.frame(x=c(1:5),y=c(11:15))
mylist <- list(df1 = df1, df2 = df2)
all <- do.call("rbind", mylist)
all$id <- rep(names(mylist), sapply(mylist, nrow))
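The combined data frame then carries the source name in id (the row names come from rbinding the named list):
head(all, 3)
#       x  y  id
# df1.1 1 11 df1
# df1.2 2 12 df1
# df1.3 3 13 df1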
Your first attempt was very close. By using indices instead of values it will work. Your second attempt failed because you didn't name the elements in your list.
Both solutions below use the fact that lapply can pass extra parameters (mylist) to the function.
df1 <- data.frame(x=c(1:5),y=c(11:15))
df2 <- data.frame(x=c(1:5),y=c(11:15))
mylist <- list(df1=df1,df2=df2) # Name each data.frame!
# names(mylist) <- c("df1", "df2") # Alternative way of naming...
# Use indices - and pass in mylist
mylist1 <- lapply(seq_along(mylist),
    function(i, x){
        x[[i]]$id <- i
        return (x[[i]])
    }, mylist
)
# Now the names work - but I pass in mylist instead of using portfolio.results.
mylist2 <- lapply(names(mylist),
function(n, x){
x[[n]]$id <- n
return (x[[n]])
}, mylist
)
names() could work if it had names, but you didn't give it any; it's an unnamed list. You will need to use numeric indices:
> for(i in 1:length(mylist) ){ mylist[[i]] <- cbind(mylist[[i]], id=rep(i, nrow(mylist[[i]]) ) ) }
> mylist
[[1]]
x y id
1 1 11 1
2 2 12 1
3 3 13 1
4 4 14 1
5 5 15 1
[[2]]
x y id
1 1 11 2
2 2 12 2
3 3 13 2
4 4 14 2
5 5 15 2
The ldply function from the plyr package could be an answer:
library('plyr')
df1 <- data.frame(x=c(1:5),y=c(11:15))
df2 <- data.frame(x=c(1:5),y=c(11:15))
mylist <- list(df1 = df1, df2 = df2)
all <- ldply(mylist)
You could also use the tidyverse: lst instead of list automatically names the list for you, and then you can use imap:
library(tidyverse)
mylist <- dplyr::lst(df1, df2)
purrr::imap(mylist, ~mutate(.x, id = .y))
# $df1
# x y id
# 1 1 11 df1
# 2 2 12 df1
# 3 3 13 df1
# 4 4 14 df1
# 5 5 15 df1
# $df2
# x y id
# 1 1 11 df2
# 2 2 12 df2
# 3 3 13 df2
# 4 4 14 df2
# 5 5 15 df2

Applying predicates on a list in R

Given a list of values in R, what is a nice way to filter values in a list by a given predicate function?
It's not entirely clear whether you have a proper list object in R, or another type of object such as a data.frame or vector. Assuming you have a true list object, we can combine lapply and subset to do what you want. If you don't have a list, then there's no need for lapply.
set.seed(1)
#Fake data
dat <- list(a = data.frame(x = sample(1:10, 20, TRUE))
, b = data.frame(x = sample(1:10, 20, TRUE)))
#Apply the subset function over the list
lapply(dat, subset, x < 3)
$a
x
10 1
12 2
$b
x
4 2
7 1
14 2
18 2
#Example two
lapply(dat, subset, x %in% c(1,7,9))
$a
x
6 9
8 7
9 7
10 1
13 7
$b
x
3 7
7 1
9 9
15 9
16 7
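If the list holds plain values rather than data frames, base R's Filter() applies a predicate function directly; a minimal sketch:
vals <- list(1, 8, 3, 10)
Filter(function(v) v < 5, vals)
[[1]]
[1] 1

[[2]]
[1] 3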