rhandsontable does not display anything - r-markdown

I am using the rhandsontable library in R to create a table in my R Markdown file. Just to test, I tried the example code from the package:
library(rhandsontable)
DF = data.frame(val = 1:10, bool = TRUE, big = LETTERS[1:10],
                small = letters[1:10],
                dt = seq(from = Sys.Date(), by = "days", length.out = 10),
                stringsAsFactors = FALSE)
rhandsontable(DF) %>%
  hot_cols(columnSorting = TRUE)
However, this does not display anything in the Viewer tab of RStudio, and I don't get any error message either. Inserting the rhandsontable code into a chunk in the R Markdown document has the same result: I get a blank output. I am not sure what is missing; any help here would be very useful. My system info is as follows:
> R.Version()
$platform
[1] "x86_64-w64-mingw32"
$arch
[1] "x86_64"
$os
[1] "mingw32"
$system
[1] "x86_64, mingw32"
$status
[1] ""
$major
[1] "3"
$minor
[1] "4.1"
$year
[1] "2017"
$month
[1] "06"
$day
[1] "30"
$`svn rev`
[1] "72865"
$language
[1] "R"
$version.string
[1] "R version 3.4.1 (2017-06-30)"
$nickname
[1] "Single Candle"
and the rhandsontable version is 0.3.5.

It seems the issue is resolved when I install the GitHub version of the package:
devtools::install_github("jrowen/rhandsontable", dependencies = T, upgrade_dependencies = T)


How to PDF render Quarto books with dynamic content?

I am writing my thesis using a Quarto book in HTML, which has some dynamic content (leaflet maps, plotly dynamic graphs). However, eventually, I will need to export the book in PDF/LaTeX, or at least Word (and then I can copy and paste into LaTeX).
When I try to export to PDF I of course run into this error:
Functions that produce HTML output found in document targeting pdf
output. Please change the output type of this document to HTML.
Alternatively, you can allow HTML output in non-HTML formats by adding
this option to the YAML front-matter of your rmarkdown file:
always_allow_html: true
Note however that the HTML output will not be visible in non-HTML
formats.
I did try adding always_allow_html: true to my YAML, but I get the exact same error. I also tried conditional rendering with {.content-hidden unless-format="pdf"}, but I can't seem to get it working.
Has anyone experienced the same issue?
Using .content-visible when-format="html" and .content-visible when-format="pdf" works very smoothly.
---
title: "Conditional Rendering"
format:
  html: default
  pdf: default
---
## Conditional Content in Quarto
::: {.content-visible when-format="html"}
```{r}
#| message: false
library(plotly)
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg))
p <- p + geom_point(aes(colour = factor(cyl)))
ggplotly(p)
```
```{r}
#| message: false
#| fig-pos: "H"
#| fig-width: 4
#| fig-height: 3
library(leaflet)
# took this example from leaflet docs
m <- leaflet() %>%
  addTiles() %>%  # Add default OpenStreetMap map tiles
  addMarkers(lng = 174.768, lat = -36.852, popup = "The birthplace of R")
m  # Print the map
```
:::
::: {.content-visible when-format="pdf"}
```{r}
library(plotly)
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg))
p <- p + geom_point(aes(colour = factor(cyl)))
p
```
:::
I use constructs like the one below (knitr is needed for opts_knit, plotly for ggplotly()):
library(knitr)
library(ggplot2)
library(plotly)
p <- ggplot()
if (interactive() || opts_knit$get("rmarkdown.pandoc.to") == "html") {
  ggplotly(p)
} else {
  p
}
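A similar check can be written with knitr's output helpers instead of reading the pandoc target string directly; this is a hedged sketch assuming a knitr version that provides is_html_output():

```r
library(knitr)
library(ggplot2)
library(plotly)

p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()

# is_html_output() is TRUE when knitting to an HTML-based format
if (interactive() || knitr::is_html_output()) {
  ggplotly(p)
} else {
  p
}
```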
Stumbled across this one too. I'm currently checking the output format of pandoc globally
```{r, echo = F}
output <- knitr::opts_knit$get("rmarkdown.pandoc.to")
```
and then evaluate chunks conditionally:
(leaflet example from here.)
```{r, echo = F, eval = output != "latex"}
library(leaflet)
leaflet() %>%
  addTiles() %>%  # Add default OpenStreetMap map tiles
  addMarkers(lng = 174.768, lat = -36.852, popup = "The birthplace of R")
```
Optionally, you can add a note about the missing component in the PDF version:
```{r, echo = F, eval = output == "latex", results = "asis"}
cat("\\textit{Please see the HTML version for interactive content.}")
```
Edit
I just checked: this also works for me with Quarto documents, using the YAML header below.
---
title: "Untitled"
format:
  html:
    theme: cosmo
  pdf:
    documentclass: scrreprt
---

Flextable table links not linking to my tables

I have generated tables in docx using Rmarkdown (Output: word_document), with captions and links to the tables. For some reason clicking on the links takes me to the top of my word file and not to the relevant table.
Here is some example code (first time having an R Markdown related issue, please bear with me):
library(officer)
library(flextable)
library(knitr)
library(magrittr)
# set chunk defaults
knitr::opts_chunk$set(echo = FALSE, message = FALSE, warning = FALSE)
# create example data frames
df1 <- data.frame(col1 = c("a", "b", "c"),
                  col2 = c("1", "2", "3"))
df2 <- data.frame(col1 = c("d", "e", "f"),
                  col2 = c("4", "5", "6"))
(Link to table 2) <--- this link should take me to flextable 2, but it doesn't.
# create flextable 1
ft1 <- flextable(df1)
ft1 <- set_caption(ft1, "Caption 1.",
                   style = "Table Caption",
                   autonum = run_autonum(seq_id = "tab", bkm = "tab1"))
ft1
(Link to Table 1) <---- This link should take me to table 1, but it doesn't.
I set these links in this order to hopefully make the problem clearer.
#links created using format [(Link)](#tab:tab2)
# second flextable
ft2 <- flextable(df2)
ft2 <- set_caption(ft2, "Caption 2.",
                   style = "Table Caption",
                   autonum = run_autonum(seq_id = "tab", bkm = "tab2"))
ft2
If this is possible using flextable, I am probably making a small mistake somewhere but can't spot it. So far this is the only function I have found that renders docx table output using R Markdown.
Please help.
I made this example for you:
---
title: "Untitled"
author: "Your Name"
date: "December 16, 2021"
output:
  bookdown::word_document2: default
---
```{r setup}
library(flextable)
# create example data frames
df1 <- data.frame(col1 = c("a", "b", "c"),
                  col2 = c("1", "2", "3"))
df2 <- data.frame(col1 = c("d", "e", "f"),
                  col2 = c("4", "5", "6"))
```
```{r Table1}
ft1 <- flextable(df1)
set_caption(ft1, "I love R language")
```
\newpage
```{r Table2}
ft2 <- flextable(df2)
set_caption(ft2, "I hope, it loves me too")
```
\newpage
[(Click, here is the first table)](#tab:Table1)
[(Here is the second)](#tab:Table2)
Good luck and great success with learning R ;)
An addition:
```{r Table3, tab.cap = "Another table"}
flextable(df2)
```
\newpage
Ok, here is a third table [(Here is the third)](#tab:Table3).
For anyone else who might run into this problem: using the example @manro made above, I got it to work by editing the YAML:
title: "Untitled"
author: "Your Name"
date: "December 16, 2021"
output:
  officedown::rdocx_document:
    reference_docx: template.docx
library(flextable)
library(officedown)
library(officer)
# create example data frames
df1 <- data.frame(col1 = c("a", "b", "c"),
                  col2 = c("1", "2", "3"))
df2 <- data.frame(col1 = c("d", "e", "f"),
                  col2 = c("4", "5", "6"))
ft1 <- flextable(df1)
set_caption(ft1, "I love R language")
\newpage
ft2 <- flextable(df2)
set_caption(ft2, "I hope, it loves me too")
\newpage
And by removing "tab:" from [(Link)](#tab:tab2) from the link examples below.
[(Click, here is the first table)](#tab:Table1)
[(Here is the second)](#tab:Table2)
leaving me with:
[(Click, here is the first table)](#Table1)
[(Here is the second)](#Table2)

Extracting pattern from the nested list in R using regex

I have the following sorted list (lst) of time periods, and I want to split the periods into specific dates and then extract the maximum time period, without altering the order of the list.
$`1`
[1] "01.12.2015 - 21.12.2015"
$`2`
[1] "22.12.2015 - 05.01.2016"
$`3`
[1] "14.09.2015 - 12.10.2015" "29.09.2015 - 26.10.2015"
Therefore, after adjustment the list should look like this:
$`1`
[1] "01.12.2015" "21.12.2015"
$`2`
[1] "22.12.2015" "05.01.2016"
$`3`
[1] "14.09.2015" "12.10.2015" "29.09.2015" "26.10.2015"
In order to do so, I began by splitting the list:
lst_split <- str_split(lst, pattern = " - ")
which leads to the following:
[[1]]
[1] "01.12.2015" "21.12.2015"
[[2]]
[1] "22.12.2015" "05.01.2016"
[[3]]
[1] "c(\"14.09.2015" "12.10.2015\", \"29.09.2015" "26.10.2015\")"
Then, I tried to extract the pattern:
lapply(lst_split, function(x) str_extract(pattern = c("\\d+\\.\\d+\\.\\d+"),x))
but my output is missing one date (29.09.2015)
[[1]]
[1] "01.12.2015" "21.12.2015"
[[2]]
[1] "22.12.2015" "05.01.2016"
[[3]]
[1] "14.09.2015" "12.10.2015" "26.10.2015"
Does anyone have an idea how I could make it work, and maybe propose a more efficient solution? Thank you in advance.
Thanks to the comments of @WiktorStribiżew and @akrun, it is enough to use str_extract_all.
In this example:
> str_extract_all(lst,"\\d+\\.\\d+\\.\\d+")
[[1]]
[1] "01.12.2015" "21.12.2015"
[[2]]
[1] "22.12.2015" "05.01.2016"
[[3]]
[1] "14.09.2015" "12.10.2015" "29.09.2015" "26.10.2015"
1) Use strsplit, flatten each component using unlist, convert the dates to "Date" class and then use range to get the maximum time span. No packages are used.
> lapply(lst, function(x) range(as.Date(unlist(strsplit(x, " - ")), "%d.%m.%Y")))
$`1`
[1] "2015-12-01" "2015-12-21"
$`2`
[1] "2015-12-22" "2016-01-05"
$`3`
[1] "2015-09-14" "2015-10-26"
2) This variation using a magrittr pipeline also works:
library(magrittr)
lapply(lst, function(x)
  x %>%
    strsplit(" - ") %>%
    unlist %>%
    as.Date("%d.%m.%Y") %>%
    range
)
Note: The input lst in reproducible form is:
lst <- structure(list(`1` = "01.12.2015 - 21.12.2015",
                      `2` = "22.12.2015 - 05.01.2016",
                      `3` = c("14.09.2015 - 12.10.2015", "29.09.2015 - 26.10.2015")),
                 .Names = c("1", "2", "3"))

How would I turn a multivalue string into a usable frequency table in R?

I have a field in a data frame called plugins_Apache_module. It contains strings like:
c("mod_perl/1.99_16,mod_python/3.1.3,mod_ssl/2.0.52",
"mod_auth_passthrough/2.1,mod_bwlimited/1.4,mod_ssl/2.2.23",
"mod_ssl/2.2.9")
I need a frequency table on the modules, and also their versions.
What is the best way to do this in R? Being rather new to R, I've looked at strsplit and gsub, and some chatrooms also suggested I use the qdap package.
Ideally I would want the string transformed into a data frame with a column for every mod; if the module is there, then its version goes in that particular field. How would I accomplish such a transform?
What data frame format would be suggested if I want top-level frequencies - say mod_ssl (all versions) - as well as relational options (mod_perl is very often used with mod_ssl)?
I'm not too sure how to handle such variable-length data when pushing it into a data frame for processing. Any advice is welcome.
I consider the right answer to look like:
mod_perl   mod_python   mod_ssl   mod_auth_passthrough   mod_bwlimited
1.99_16    3.1.3        2.0.52
                        2.2.23    2.1                    1.4
                        2.2.9
So basically the first bit becomes a column, and the version(s) that follow become the row entries.
st <- c("mod_perl/1.99_16,mod_python/3.1.3,mod_ssl/2.0.52", "mod_auth_passthrough/2.1,mod_bwlimited/1.4,mod_ssl/2.2.23", "mod_ssl/2.2.9")
scan(text=st, what="", sep=",")
Read 7 items
[1] "mod_perl/1.99_16" "mod_python/3.1.3" "mod_ssl/2.0.52"
[4] "mod_auth_passthrough/2.1" "mod_bwlimited/1.4" "mod_ssl/2.2.23"
[7] "mod_ssl/2.2.9"
strsplit( scan(text=st, what="", sep=","), "/")
Read 7 items
[[1]]
[1] "mod_perl" "1.99_16"
[[2]]
[1] "mod_python" "3.1.3"
[[3]]
[1] "mod_ssl" "2.0.52"
[[4]]
[1] "mod_auth_passthrough" "2.1"
[[5]]
[1] "mod_bwlimited" "1.4"
[[6]]
[1] "mod_ssl" "2.2.23"
[[7]]
[1] "mod_ssl" "2.2.9"
table( sapply(strsplit( scan(text=st, what="", sep=","), "/"), "[",1) )
#----------------
Read 7 items
mod_auth_passthrough        mod_bwlimited             mod_perl           mod_python
                   1                    1                    1                    1
             mod_ssl
                   3
table( scan(text=st, what="", sep=",") )
#-----------
Read 7 items
mod_auth_passthrough/2.1        mod_bwlimited/1.4         mod_perl/1.99_16
                       1                        1                        1
        mod_python/3.1.3           mod_ssl/2.0.52           mod_ssl/2.2.23
                       1                        1                        1
           mod_ssl/2.2.9
                       1
You ask for at minimum two different things; adding the desired output greatly helped. I'm not sure if what you ask for is what you really want, but you asked and it seemed like a fun problem. OK, here's how I would approach this using qdap (this requires qdap version 1.1.0, though):
## load qdap
library(qdap)
## your data
x <- c("mod_perl/1.99_16,mod_python/3.1.3,mod_ssl/2.0.52",
"mod_auth_passthrough/2.1,mod_bwlimited/1.4,mod_ssl/2.2.23",
"mod_ssl/2.2.9")
## strsplit on commas and slashes
dat <- unlist(lapply(x, strsplit, ",|/"), recursive=FALSE)
## make just a list of mods per row
mods <- lapply(dat, "[", c(TRUE, FALSE))
## make a string of versions
ver <- unlist(lapply(dat, "[", c(FALSE, TRUE)))
## make a lookup key and split it into lists
key <- data.frame(mod = unlist(mods), ver,
                  row = rep(seq_along(mods), sapply(mods, length)))
key2 <- split(key[, 1:2], key$row)
## make it into freq. counts
freqs <- mtabulate(mods)
## assign the freq table to vers (keeping freqs in case you want the counts) and replace 0 with NA
vers <- freqs
vers[vers==0] <- NA
## loop through and fill the ones in each row using an env. lookup (%l%)
for(i in seq_len(nrow(vers))) {
    x <- vers[i, !is.na(vers[i, ]), drop = FALSE]
    vers[i, !is.na(vers[i, ])] <- colnames(x) %l% key2[[i]]
}
## Don't print the NAs
print(vers, na.print = "")
##   mod_auth_passthrough mod_bwlimited mod_perl mod_python mod_ssl
## 1                                      1.99_16      3.1.3  2.0.52
## 2                  2.1           1.4                       2.2.23
## 3                                                           2.2.9
## the frequency counts per mods
freqs
##   mod_auth_passthrough mod_bwlimited mod_perl mod_python mod_ssl
## 1                    0             0        1          1       1
## 2                    1             1        0          0       1
## 3                    0             0        0          0       1
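As a side note, a similar wide table and the module frequencies can be sketched with tidyr/dplyr, if those packages are available. This is an alternative to the qdap approach above, not a replacement for it:

```r
library(dplyr)
library(tidyr)

x <- c("mod_perl/1.99_16,mod_python/3.1.3,mod_ssl/2.0.52",
       "mod_auth_passthrough/2.1,mod_bwlimited/1.4,mod_ssl/2.2.23",
       "mod_ssl/2.2.9")

# one row per module/version pair, keeping track of the original row
long <- tibble(row = seq_along(x), mods = x) %>%
  separate_rows(mods, sep = ",") %>%
  separate(mods, into = c("mod", "version"), sep = "/")

# one column per module, versions as cell values
pivot_wider(long, names_from = mod, values_from = version)

# frequency of each module across all rows
count(long, mod)
```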

Parsing out a line in R to pick different objects

I have this line:
system<-c("System configuration: type=Shared mode=Uncapped smt=4 lcpu=96 mem=393216MB psize=64 ent=16.00")
I need to parse this out and pick smt, lcpu, mem, mpsize and ent into separate objects.
For example, I am doing this to pick out smt, but it picks up the whole line. Any ideas what I am doing wrong here?
smt<-sub('^.* smt=([[:digit:]])', '\\1', system)
smt needs to contain the number 4 in this case.
I would use strsplit a couple times, and type.convert:
parse.config <- function(x) {
  clean  <- sub("System configuration: ", "", x)
  pairs  <- strsplit(clean, " ")[[1]]
  items  <- strsplit(pairs, "=")
  keys   <- sapply(items, `[`, 1)
  values <- sapply(items, `[`, 2)
  values <- lapply(values, type.convert, as.is = TRUE)
  setNames(values, keys)
}
config <- parse.config(system)
# $type
# [1] "Shared"
#
# $mode
# [1] "Uncapped"
#
# $smt
# [1] 4
#
# $lcpu
# [1] 96
#
# $mem
# [1] "393216MB"
#
# $psize
# [1] 64
#
# $ent
# [1] 16
The output is a list so you can access any of the parsed items, for example:
config$smt
# [1] 4
Using strapplyc from the gsubfn package, the following creates a list L whose names are the left-hand sides, such as smt, and whose values are the right-hand sides.
library(gsubfn)
LHS <- strapplyc( system, "(\\w+)=" )[[1]]
RHS <- strapplyc( system, "=(\\w+)" )[[1]]
L <- setNames( as.list(RHS), LHS )
For example we can now get smt like this (and similarly for the other left hand sides):
> L$smt
[1] "4"
UPDATE: Simplified.
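Since strapplyc returns character values, you could optionally convert them afterwards; this is a small add-on to the answer above, not part of it:

```r
# "4" becomes integer 4, "16.00" becomes numeric 16, other values stay character
L <- lapply(L, type.convert, as.is = TRUE)
L$smt
# [1] 4
```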
Add .* to the end of your matching expression and you'll get "4".
sub('^.* smt=([[:digit:]]+).*', '\\1', system)
You may want to keep the + I included, for the case where there is more than a single digit.
You could also approach this by splitting on spaces and then finding the matches:
splits <- unlist(strsplit(system, ' '))
sub('smt=', '', grep('smt=', splits, value=TRUE))
# [1] "4"
or wrapping it in a function:
matchfun <- function(string, to_match, splitter = ' ') {
  splits <- unlist(strsplit(string, splitter))
  sub(to_match, '', grep(to_match, splits, value = TRUE))
}
matchfun(system, 'smt=')
# [1] "4"
Well, I'm voting for @GaborGrothendieck's answer, but am offering this as a more pedestrian alternative:
inp <- c("System configuration: type=Shared mode=Uncapped smt=4 lcpu=96 mem=393216MB psize=64 ent=16.00")
inparsed <- read.table(text=inp, stringsAsFactors=FALSE)
vals <- unlist(inparsed)[grep("\\=", unlist(inparsed))]
vals
#            V3            V4     V5       V6            V7        V8         V9
#   type=Shared mode=Uncapped  smt=4  lcpu=96  mem=393216MB  psize=64  ent=16.00
vals[grep("smt|lcpu|mem|mpsize|ent", vals)]
       V5        V6             V7          V9
  "smt=4" "lcpu=96" "mem=393216MB" "ent=16.00"
I would note that choosing the name 'system' for a variable seems most unwise in light of the system function's existence.