Suppressing messages in Rmarkdown using sink() - r-markdown

I am writing an Rmarkdown document about a package called SparseTSCGM and am having trouble suppressing the messages generated by functions in this package. The messages I'm referring to can be reproduced with the following code:
if (!require(SparseTSCGM)) install.packages("SparseTSCGM")
library(SparseTSCGM)
datas <- sim.data(model = "ar1", time = 10, n.obs = 10, n.var = 5, prob0 = 0.35,
                  network = "random")
res.tscgm <- sparse.tscgm(data = datas$data, lam1 = NULL, lam2 = NULL, nlambda = NULL,
                          model = "ar1", penalty = "scad", optimality = "bic_mod",
                          control = list(maxit.out = 5, maxit.in = 5))
I have tried using the functions invisible() and suppressMessages(), but these do not help in either Rmarkdown or the R console. I also tried adding the chunk option message = FALSE as follows:
```{r message=FALSE}
library(SparseTSCGM)
datas <- sim.data(model = "ar1", time = 10, n.obs = 10, n.var = 7, prob0 = 0.35,
                  network = "random")
res.tscgm <- sparse.tscgm(data = datas$data, lam1 = NULL, lam2 = NULL, nlambda = NULL,
                          model = "ar1", penalty = "lasso", optimality = "bic",
                          control = list(maxit.out = 10, maxit.in = 100))
```
but this does not help.
I have found that I can suppress output in the R console using sink('NUL') (I'm working on a Windows system), but this approach does not work in Rmarkdown. When I try it, the output I want to suppress is still there, and the Rmarkdown console gives a warning: "Warning message: In sink() : no sink to remove".
Does sink() not work with Rmarkdown, or is there another way to do this? If there is no solution I can always manually remove the section from the HTML file, but that's a last resort.
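One workaround worth trying, sketched below and not verified against SparseTSCGM itself: the output that message = FALSE misses is often plain cat()/print() output written to stdout rather than a real message, and that can be swallowed with capture.output() (base R) inside the chunk, with the actual messages and warnings handled by the chunk options:
```{r message=FALSE, warning=FALSE}
library(SparseTSCGM)
datas <- sim.data(model = "ar1", time = 10, n.obs = 10, n.var = 5, prob0 = 0.35,
                  network = "random")
# capture.output() collects anything the call prints to stdout (cat()/print());
# discarding its return value with invisible() keeps it out of the knitted
# document. The assignment still happens in the calling environment.
invisible(capture.output(
  res.tscgm <- sparse.tscgm(data = datas$data, lam1 = NULL, lam2 = NULL,
                            nlambda = NULL, model = "ar1", penalty = "scad",
                            optimality = "bic_mod",
                            control = list(maxit.out = 5, maxit.in = 5))
))
```
If the package writes to stderr instead, capture.output(..., type = "message") can be tried the same way.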

Related

Difference in PDFs when manually generated with Knit-Button vs rmarkdown::render() - e.g. section numbers not always included

I am trying to generate multiple reports (one per group) with rmarkdown::render(), but the output contains no section numbers.
I use the following file structure, in case you want to reproduce the example (I tried to simplify each file down to the code necessary to reproduce the error):
*) groups.R: defines the groups and subgroups in a nested list (all groups are stored in a list and each group is itself a list)
groups <- list(
  list(c("subgroup1", "subgroup2"), "maingroup1"),
  list(c("subgroup3", "subgroup4"), "maingroup2")
)
*) main.Rmd: template for the reports
---
output:
  pdf_document:
    number_sections: true
classoption: a4paper
geometry: left=2cm,right=1cm,top=1.5cm,bottom=1cm,includeheadfoot
fontfamily: helvet
fontsize: 11pt
lang: en
header-includes:
  - \usepackage{lastpage}
  - \usepackage{fancyhdr}
  - \pagestyle{fancy}
  - \fancyhf{}
  - \fancyhead[R]{\fontsize{9}{11} \selectfont \leftmark}
  - \fancyhead[L]{\fontsize{9}{11} \selectfont Special report xxx}
  - \fancyfoot[R]{\fontsize{9}{0} \selectfont Page \thepage\ of \pageref{LastPage}}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(echo = FALSE, comment = NA, warning = FALSE, message = FALSE)
```
\thispagestyle{empty}
\tableofcontents
\newpage
\setcounter{page}{1}
# Introduction
Some text...
```{r results="asis"}
source("graphics.R")
```
*) graphics.R: generates graphics for each subgroup (sections/section numbers are produced with cat() for each subgroup)
load("actgroup.RData")
source("template_graphics.R")
for (g in 1:length(act.group[[1]][[1]])) {
subgroup.name <- act.group[[1]][[1]][g]
cat("\\clearpage")
cat("\n# ",subgroup.name, "\n")
template_graphics(cars)
cat("\n\n")
cat("\\clearpage")
template_graphics(iris)
cat("\n\n")
cat("\\clearpage")
template_graphics(airquality)
cat("\n\n")
cat("\\clearpage")
cat("\n")
}
*) template_graphics.R: template for plotting
template_graphics <- function(data) {
  plot(data)
}
*) loop.R: used for generating all reports as PDF - 1 per group
setwd("YOUR DIRECTORY HERE")
library(rmarkdown)
source("groups.R")
for(i in 1:length(groups)) {
act.group = list(groups[[i]])
save(act.group,file="actgroup.RData")
rmarkdown::render("main.Rmd",
output_format=pdf_document(),
output_file=paste0("Special Report ",act.group[[1]][[2]],".pdf"),
output_dir="~/Reports")
}
The problem is that the final documents do not show the section numbers. When I knit main.Rmd manually (pressing the Knit button), the section numbers are printed.
(Screenshots: output produced via rmarkdown::render() vs. output produced via the Knit button.)
I thought that pressing the Knit button also starts the rendering process via rmarkdown::render(), so it's surprising that the reports are not identical.
Beforehand I installed TinyTeX via tinytex::install_tinytex(); the LaTeX packages used in main.Rmd were installed automatically the first time I rendered the document.
I am not sure what the problem is. I use R 4.1.0 and RStudio 2022.02.2.
Thanks for your help!!
The behaviour of pdf_document() as the output_format in rmarkdown::render() is what caused the missing section numbers.
In the YAML header of main.Rmd I requested section numbers with number_sections: true. For this to take effect when rendering with rmarkdown::render(), it has to be passed as an argument to the output format:
pdf_document(number_sections=TRUE)
The code of loop.R now produces PDFs with section numbers:
library(rmarkdown)
source("groups.R")

for (i in 1:length(groups)) {
  act.group <- list(groups[[i]])
  save(act.group, file = "actgroup.RData")
  rmarkdown::render("main.Rmd",
                    output_format = pdf_document(number_sections = TRUE),
                    output_file = paste0("Special Report ", act.group[[1]][[2]], ".pdf"),
                    output_dir = "~/Reports")
}
More information on pdf_document() can be found here:
https://pkgs.rstudio.com/rmarkdown/reference/pdf_document.html
Alternatively, pass just a character string as the output format: output_format = "pdf_document". Set this way, the options in the YAML header are not overridden and the section numbers are included as well.
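For completeness, a minimal sketch of that string-based call, reusing the file names and the act.group object from the loop above; because the format is referenced by name, number_sections: true from the YAML header stays in effect:
```r
rmarkdown::render("main.Rmd",
                  output_format = "pdf_document",  # string: YAML options (incl. number_sections) are kept
                  output_file   = paste0("Special Report ", act.group[[1]][[2]], ".pdf"),
                  output_dir    = "~/Reports")
```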

Is it possible to list all available names which could be loaded via `.Call()`? Error in .Call("function")

A very common question on Stack Overflow with regard to C++/R or C/R package integration concerns an error like the following after dyn.load(), e.g.
> ## within R
> Error in .Call("function_c") : C symbol name "function_c" not in load table
where function_c is some C function like
SEXP function_c() {
  Rprintf("Hello World!\n");
  return(R_NilValue);
}
This error comes up due to many types of mistakes, e.g. incorrect compilation, misnamed functions, forgetting extern "C" for C++ code, etc.
Question: Is there any way to view all "available" objects which the user could load via dyn.load() after compilation?
How about the following? I'm not sure it covers everything, but it should be close:
# manipulate search() to get all loaded packages
loadedPkgs = grep('^package:', search(), value = TRUE)
loadedPkgs = gsub('package:', '', loadedPkgs, fixed = TRUE)
# add names here to make the results of lapply pretty
names(loadedPkgs) = loadedPkgs

allCRoutines = lapply(loadedPkgs, function(pkg) {
  # see: https://stackoverflow.com/questions/8696158/
  pkg_env = asNamespace(pkg)
  # this works at a glance
  check_CRoutine = function(vname) {
    'CallRoutine' %in% attr(get(vname, envir = pkg_env), 'class')
  }
  names(which(sapply(ls(envir = pkg_env, all = TRUE), check_CRoutine)))
})
The object is a bit long, so I'll just show the result for one package:
allCRoutines[['utils']]
# $utils
# [1] "C_crc64" "C_flushconsole" "C_menu" "C_nsl" "C_objectSize" "C_octsize" "C_processevents"
# [8] "C_sockclose" "C_sockconnect" "C_socklisten" "C_sockopen" "C_sockread" "C_sockwrite"
What I'm not sure of is whether check_CRoutine catches everything we'd consider relevant to your question. I'm also not sure this covers your main interest (whether these objects can successfully be fed to dyn.load); perhaps the routines returned here could be passed to dyn.load with a try wrapper?
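For what it's worth, base R also exposes the registration table directly, which may be a shorter route for registered routines. A minimal sketch; getLoadedDLLs(), getDLLRegisteredRoutines(), is.loaded() and getNativeSymbolInfo() are all base/utils functions, the file name function_c.so is hypothetical, and note that getDLLRegisteredRoutines() only reports routines that were explicitly registered:
```r
# List the .C/.Call/.Fortran/.External routines registered by an already
# loaded DLL, here the one belonging to the utils package:
getDLLRegisteredRoutines(getLoadedDLLs()[["utils"]])$.Call

# For a freshly compiled shared object (file name hypothetical), dyn.load()
# returns a DLLInfo object that can be inspected the same way; individual
# symbols can also be probed directly:
info <- dyn.load("function_c.so")      # "function_c.dll" on Windows
getDLLRegisteredRoutines(info)
is.loaded("function_c")                # is the symbol in the load table?
getNativeSymbolInfo("function_c")      # errors if the symbol cannot be found
```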

tensorboard embeddings show no data

I am trying to use the TensorBoard embeddings page in order to visualize the results of word2vec. After debugging and digging through lots of code, I got to the point where TensorBoard runs successfully, reads the configuration file and reads the tsv files, but the embeddings page does not show any data.
(The page opens and I can see the menus, items etc.) This is my config file:
embeddings {
  tensor_name: 'word_embedding'
  metadata_path: 'c:\data\metadata.tsv'
  tensor_path: 'c:\data\tensors2.tsv'
}
What could be the problem?
The tensor file is originally 1 GB in size; if I try that file, the app crashes because of the memory usage. So I copied and pasted one or two pages of the original file into tensors2.tsv and used that file instead. Maybe this is the problem; maybe I need to create more data by copying and pasting.
Thanks,
tolga
Try the following code snippet to get a visualized word embedding in TensorBoard. Open TensorBoard with the log directory and check localhost:6006 to view your embedding.
tensorboard --logdir="visual/1"
# code
# imports (TF1-era API)
import gensim
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

fname = "word2vec_model_1000"
model = gensim.models.keyedvectors.KeyedVectors.load(fname)

# project part of vocab, max of 100 dimension
max = 1000
w2v = np.zeros((max, 100))
with open("prefix_metadata.tsv", 'w+') as file_metadata:
    for i, word in enumerate(model.wv.index2word[:max]):
        w2v[i] = model.wv[word]
        file_metadata.write(word + '\n')

# define the model without training
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
    embedding = tf.Variable(w2v, trainable=False, name='prefix_embedding')
tf.global_variables_initializer().run()

path = 'visual/1'
saver = tf.train.Saver()
writer = tf.summary.FileWriter(path, sess.graph)

# adding into projector
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.tensor_name = 'prefix_embedding'
embed.metadata_path = 'prefix_metadata.tsv'

# Specify the width and height of a single thumbnail.
projector.visualize_embeddings(writer, config)
saver.save(sess, path + '/prefix_model.ckpt', global_step=max)

break a slide of reveal.js (xaringan) when it's long

I'm actually using xaringan, but it uses reveal.js, so it should be the same.
I have a slide which prints the bibliography using RefManageR, and I'd like it to use as many slides as needed:
---
```{r results = "asis", echo = FALSE}
PrintBibliography(bib, .opts = list(check.entries = FALSE, sorting = "ynt"))
```
---
I guess I'm looking for some type of allowframebreaks, but I couldn't manage to find one.
xaringan uses remark.js, not reveal.js.

MAPI, HrQueryAllRows: Filter messages on subject

I'm pretty new to MAPI and haven't written much C++ code.
Basically I want to read all emails in the inbox and filter them based on their subject text. So far I'm using the source code provided on the Microsoft MSDN website, which reads all emails from the inbox. What I want now is to not fetch all emails but to filter them on the subject, let's say: I want all emails in my inbox with the subject "test".
So far I have figured out that the following line of code retrieves all the emails:
hRes = HrQueryAllRows(lpContentsTable, (LPSPropTagArray) &sptCols, &sres, NULL, 0, &pRows);
The parameter &sres is of type SRestriction.
I tried to implement a filter on 'test' in the subject:
sres.rt = RES_CONTENT;
sres.res.resContent.ulFuzzyLevel = FL_FULLSTRING;
sres.res.resContent.ulPropTag = PR_SUBJECT;
sres.res.resContent.lpProp = &SvcProps;
SvcProps.ulPropTag = PR_SUBJECT;
SvcProps.Value.lpszA = "test";
SvcProps is of type SPropValue.
If I execute the application, I get 0 rows returned. If I change the string "test" to an empty string, I get all emails.
I'm assuming I'm using the "filter" option wrong. Any ideas?
Edit: When I change the FuzzyLevel to:
sres.res.resContent.ulFuzzyLevel = FL_SUBSTRING;
then I can search for subjects that contain a single character, but as soon as I add a second character I get 0 rows as a result. I'm pretty sure this is just some C++ detail I don't understand that causes all these problems...
I figured the problem out.
Replacing
sres.res.resContent.ulFuzzyLevel = FL_FULLSTRING;
sres.res.resContent.ulPropTag = PR_SUBJECT;
SvcProps.ulPropTag = PR_SUBJECT;
with
sres.res.resContent.ulFuzzyLevel = FL_SUBSTRING;
sres.res.resContent.ulPropTag = PR_SUBJECT_A;
SvcProps.ulPropTag = PR_SUBJECT_A;
fixed the problem. (Most likely because PR_SUBJECT resolves to the Unicode property on a Unicode build, so comparing it against the ANSI string in Value.lpszA never matches; the _A variants keep both the property and the search string ANSI.)