I am in serious trouble; I have spent a day on this issue and would appreciate any help.
I am trying to build a beamer presentation using R Markdown. It was working perfectly until a few days ago, but now I get this error every time I knit. I could not find any missing LaTeX packages in the error log df.log.
! Undefined control sequence.
<write> ...ing: Your graphic driver \pgfsysdriver
\space does not support tr...
l.532 ...lt}{0}{0}{\pgfmathresult}{\pgf#x}{\pgf#y}
%
Error: LaTeX failed to compile df.tex. See https://yihui.org/tinytex/r/#debugging for debugging tips. See df.log for more info.
Execution halted
Thank you in advance for your help.
---
title: "Comparaison de performance de modèles"
subtitle: "Présentation de la problématique"
date: 2/6/2021
author: |
  | amirrea...
  | \scriptsize Master 2, Chargé d'études économiques et statistiques
output:
  beamer_presentation:
    latex_engine: xelatex
    theme: "Montpellier"
    colortheme: "beaver"
    toc: FALSE
    slide_level: 4
    fig_width: 1
    fig_height: 1
    fig_caption: false
    includes:
      in_header: ../Presentation/header.tex
fontsize: 11pt
bibliography: ../Literature/bib.bib
---
```{r}
options(tinytex.verbose = TRUE)
```
# hello
some notes
# References
Here is the header.tex:
% Colors
% \usepackage{colortbl}
% Change section names style
\titlegraphic{
\vspace{-10mm}
\flushright
\includegraphics[height = 8mm]{../images/lig_logo_1.png}
\includegraphics[height = 8mm]{../images/gael_logo_3.jpg}
% \includegraphics[height = 8mm]{../images/uga_logo.png}
\includegraphics[height = 8mm]{../images/inp_logo.png}
}
%%%%%%%%%%%%%%%%%
% Load Packages %
%%%%%%%%%%%%%%%%%
% Spacings
\usepackage{setspace}
% Tables
\usepackage{longtable}
\usepackage{tabu}
% Floats
\usepackage{morefloats}
\usepackage{float}
\usepackage{placeins}
% Highlighting
\usepackage{soul}
% Horizontal page position
\usepackage{pdflscape}
% Append pdfs
\usepackage{pdfpages}
% Add latex chunks
\usepackage{docmute}
% Short toc
\usepackage{shorttoc}
%\setcounter{tocdepth}{1}
%\usepackage{minitoc} - incompatible with document class
% Referencing multiple things with a single command - \cref
\usepackage{cleveref}
% Array
\usepackage{array}
% Multiple columns
\usepackage{multicol}
% Image insertion and colors
\usepackage{graphicx}
% Latex comments
\newenvironment{dummy}{}{}
% Fonts
% \usepackage{fontspec}
% \setmainfont{Museo}
% drawing
\usepackage{tikz}
\usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows}
\usepackage{dcolumn}
% \usepackage{subfig}
\usepackage[export]{adjustbox}
% \usepackage[demo]{graphicx}
\usepackage{subcaption}
\usetikzlibrary{shapes,arrows}
\usetikzlibrary{arrows.meta}
\usepackage[edges]{forest}
\usepackage{calc}
% Add footline
\makeatletter
\setbeamertemplate{footline}{%
\leavevmode%
\hbox{%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}%
\usebeamerfont{author in head/foot}{amirreza}
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}%
\usebeamerfont{institute in head/foot}\insertshortdate
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,right]{date in head/foot}%
\usebeamerfont{date in head/foot}\insertshortinstitute{}\hspace*{1em}
%\insertframenumber{} / \inserttotalframenumber\hspace*{2ex} % old version
\insertframenumber{} \hspace*{2ex} % new version without total frames
\end{beamercolorbox}}%
\vskip0pt%
}
\makeatother
Here is the .tex file produced by the R Markdown rendering, which I also ran in TeXworks to render the PDF:
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[
11pt,
ignorenonframetext,
]{beamer}
\usepackage{pgfpages}
\setbeamertemplate{caption}[numbered]
\setbeamertemplate{caption label separator}{: }
\setbeamercolor{caption name}{fg=normal text.fg}
\beamertemplatenavigationsymbolsempty
% Prevent slide breaks in the middle of a paragraph
\widowpenalties 1 10000
\raggedbottom
\setbeamertemplate{part page}{
\centering
\begin{beamercolorbox}[sep=16pt,center]{part title}
\usebeamerfont{part title}\insertpart\par
\end{beamercolorbox}
}
\setbeamertemplate{section page}{
\centering
\begin{beamercolorbox}[sep=12pt,center]{part title}
\usebeamerfont{section title}\insertsection\par
\end{beamercolorbox}
}
\setbeamertemplate{subsection page}{
\centering
\begin{beamercolorbox}[sep=8pt,center]{part title}
\usebeamerfont{subsection title}\insertsubsection\par
\end{beamercolorbox}
}
\AtBeginPart{
\frame{\partpage}
}
\AtBeginSection{
\ifbibliography
\else
\frame{\sectionpage}
\fi
}
\AtBeginSubsection{
\frame{\subsectionpage}
}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % provide euro and other symbols
\else % if luatex or xetex
\usepackage{unicode-math}
\defaultfontfeatures{Scale=MatchLowercase}
\defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1}
\fi
\usetheme[]{Montpellier}
\usecolortheme{beaver}
% Use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
\IfFileExists{microtype.sty}{% use microtype if available
\usepackage[]{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\makeatletter
\@ifundefined{KOMAClassName}{% if non-KOMA class
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}}
}{% if KOMA class
\KOMAoptions{parskip=half}}
\makeatother
\usepackage{xcolor}
\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available
\IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}}
\hypersetup{
pdftitle={Comparaison de performance de modèles},
hidelinks,
pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\newif\ifbibliography
\usepackage{color}
\usepackage{fancyvrb}
\newcommand{\VerbBar}{|}
\newcommand{\VERB}{\Verb[commandchars=\\\{\}]}
\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}}
% Add ',fontsize=\small' for more characters per line
\usepackage{framed}
\definecolor{shadecolor}{RGB}{248,248,248}
\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}}
\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}}
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}}
\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\BuiltInTok}[1]{#1}
\newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}}
\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}}
\newcommand{\ExtensionTok}[1]{#1}
\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}}
\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\ImportTok}[1]{#1}
\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}}
\newcommand{\NormalTok}[1]{#1}
\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}}
\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}}
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}}
\newcommand{\RegionMarkerTok}[1]{#1}
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering
% Colors
% \usepackage{colortbl}
% Change section names style
\titlegraphic{
\vspace{-10mm}
\flushright
\includegraphics[height = 8mm]{../images/lig_logo_1.png}
\includegraphics[height = 8mm]{../images/gael_logo_3.jpg}
% \includegraphics[height = 8mm]{../images/uga_logo.png}
\includegraphics[height = 8mm]{../images/inp_logo.png}
}
%%%%%%%%%%%%%%%%%
% Load Packages %
%%%%%%%%%%%%%%%%%
% Spacings
\usepackage{setspace}
% Tables
\usepackage{longtable}
\usepackage{tabu}
% Floats
\usepackage{morefloats}
\usepackage{float}
\usepackage{placeins}
% Highlighting
\usepackage{soul}
% Horizontal page position
\usepackage{pdflscape}
% Append pdfs
\usepackage{pdfpages}
% Add latex chunks
\usepackage{docmute}
% Short toc
\usepackage{shorttoc}
%\setcounter{tocdepth}{1}
%\usepackage{minitoc} - incompatible with document class
% Referencing multiple things with a single command - \cref
\usepackage{cleveref}
% Array
\usepackage{array}
% Multiple columns
\usepackage{multicol}
% Image insertion and colors
\usepackage{graphicx}
% Latex comments
\newenvironment{dummy}{}{}
% Fonts
% \usepackage{fontspec}
% \setmainfont{Museo}
% drawing
\usepackage{tikz}
\usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows}
\usepackage{dcolumn}
% \usepackage{subfig}
\usepackage[export]{adjustbox}
% \usepackage[demo]{graphicx}
\usepackage{subcaption}
\usetikzlibrary{shapes,arrows}
\usetikzlibrary{arrows.meta}
\usepackage[edges]{forest}
\usepackage{calc}
% Add footline
\makeatletter
\setbeamertemplate{footline}{%
\leavevmode%
\hbox{%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{author in head/foot}%
\usebeamerfont{author in head/foot}{amirreza}
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,center]{title in head/foot}%
\usebeamerfont{institute in head/foot}\insertshortdate
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=1ex,right]{date in head/foot}%
\usebeamerfont{date in head/foot}\insertshortinstitute{}\hspace*{1em}
%\insertframenumber{} / \inserttotalframenumber\hspace*{2ex} % old version
\insertframenumber{} \hspace*{2ex} % new version without total frames
\end{beamercolorbox}}%
\vskip0pt%
}
\makeatother
\title{Comparaison de performance de modèles}
\subtitle{Présentation de la problématique}
\author{amirrea\ldots{}\\
\scriptsize Master 2, Chargé d'études économiques et statistiques}
\date{2/6/2021}
\begin{document}
\frame{\titlepage}
\begin{frame}[fragile]
\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{options}\NormalTok{(}\DataTypeTok{tinytex.verbose =} \OtherTok{TRUE}\NormalTok{)}
\end{Highlighting}
\end{Shaded}
\end{frame}
\hypertarget{hello}{%
\section{hello}\label{hello}}
\begin{frame}{hello}
some notes
\end{frame}
\hypertarget{references}{%
\section{References}\label{references}}
\end{document}
I would like to include a table in an R Markdown document (Bookdown/Huskydown) which should meet the following requirements. Ideally, the table works with several output formats, e.g. LaTeX/PDF and HTML.
Requirements:

- Table width: fixed
- Cell width: fixed
- Vertical alignment: cell content aligned to the top
- Text formatting: such as bold or italics (best would be Markdown formatting support, so the code is output agnostic), and line breaks in longer texts
- Citations: should be rendered
- URLs: as clickable links in both HTML and LaTeX/PDF
- Figures: include
  - figures stored locally, either
    - the Markdown way, `![](Rlogo.png)`, or
    - the knitr way, `knitr::include_graphics("Rlogo.png")`
  - figures taken straight from the web
- Caption for the table
- Caption text formatting: the caption should also allow text formatting
- Footnotes: include footnotes in the table
- Table numbering: tables should be numbered
- Referencing the table: needed in the document
Notes regarding the different approaches:

- Fixed cell width: in Markdown, the number of `-`s in the table header determines the cell width
- Line breaks:
  - LaTeX: `\\linebreak`
  - All others: `<br/>`
- Referencing:
  - LaTeX: add `\label{foo}` => `\ref{foo}` (`\@ref(foo)`)
  - Markdown: add `Table: (\#tab:md-table) Caption` ==> `\@ref(tab:md-table)`
Comments on the different approaches:

- Markdown: easy coding of tables in Markdown
- kable & kableExtra: versatile R Markdown coding of the table, but vertical text alignment is obscure and figures are not included in PDF
- pander: achieves the most, but no vertical alignment or footnotes
- huxtable: most promising, but figures are not included in PDF

This is less an answer than a set of MWEs for the tables shown above.
```{r}
# create some random text
library(stringi)
some_text <- stri_rand_lipsum(1)
some_text <- substr(some_text, 1, 75)
# create dataframe with some stuff
figpath <- "figure/"
df <- data.frame(
Citation = c("#R-base", "#R-bookdown"),
Textfield = c("**Formatted** string<br/> -- _Everyone_ needs H^2^O", some_text),
URL = c("[R-url](https://www.r-project.org/)", "[bookdown](https://bookdown.org/)"),
fig_local_md = c(
paste0("![](", figpath, "Rlogo.png){ width=10% height=5% }"),
paste0("![](", figpath, "bookdownlogo.png){ height='36px' width='36px' }")
)#,
# not working:
# fig_local_knitr = c("knitr::include_graphics('figure/Rlogo.png')", "knitr::include_graphics('figure/bookdownlogo.png')")
)
# only include if output format is HTML, else pander throws error
if (knitr::is_html_output()) {
df$fig_web <- c("![](https://www.picgifs.com/glitter-gifs/a/arrows/picgifs-arrows-110130.gif)")
output_format <- "html"
}
if (knitr::is_latex_output()) {
output_format <- "latex"
}
```
markdown
Table: markdown table: Markdown styling works in HTML (*italics*, **bold**), LaTeX styling in PDF (\\textbf{bold})
| Image | Description |
| :--------------------------------------------------------- | :----------------------------------------------------------- |
| ![](figure/Rlogo.png){ width=10% height=5% } | **Image description** [#R-base] <br/>Lorem ipsum dolor sit amet, ... [R-url](https://www.r-project.org/) |
| ![](figure/bookdownlogo.png){ height='36px' width='36px' } | **Image description** [#R-bookdown] <br/>Lorem ipsum dolor sit amet, ... [bookdown](https://bookdown.org/) |
kable table
```{r kable-table, echo=FALSE, out.width='90%', fig.align = "center", results='asis'}
library(knitr)
kable(df,
caption = "kable table: markdown styling works in HTML (*italics*, **bold**), LaTex styling in PDF (\\textbf{bold})",
caption.short = "md styling works in HTML (*italics*, **bold**), LaTex styling in PDF (\\textbf{bold})"
)
```
kableExtra table
```{r kableExtra-table, echo=FALSE, out.width='90%', fig.align = "center", results='asis'}
library(kableExtra)
# http://haozhu233.github.io/kableExtra/awesome_table_in_pdf.pdf
kable(
df,
caption = "kableExtra table: markdown styling works in HTML (*italics*, **bold**), LaTex styling in PDF (\\textbf{bold})",
output_format, booktabs = T, # output_format = latex, html (specify above)
# align = "l",
valign = "top"
) %>%
kable_styling(full_width = F,
latex_options = c(#"striped",
"hold_position", # stop table floating
"repeat_header") # for long tables
) %>%
column_spec(1, bold = T, border_right = T, width = "30em") %>%
column_spec(2, width = "50em") %>%
column_spec(3, width = "5em") %>%
column_spec(4, width = "10em") %>%
column_spec(5, width = "10em") %>%
footnote(general = "Here is a general comments of the table. ",
number = c("Footnote 1; ", "Footnote 2; "),
alphabet = c("Footnote A; ", "Footnote B; "),
symbol = c("Footnote Symbol 1; ", "Footnote Symbol 2"),
general_title = "General: ", number_title = "Type I: ",
alphabet_title = "Type II: ", symbol_title = "Type III: ",
footnote_as_chunk = T, title_format = c("italic", "underline")
)
```
pander table
```{r pander-table, echo=FALSE, out.width='90%', fig.align = "center", results='asis'}
library(pander)
# https://cran.r-project.org/web/packages/pander/vignettes/pandoc_table.html
pander(
df,
caption = "pander table: markdown styling works in HTML and PDF (*italics*, **bold**), LaTex styling in PDF (\\textbf{bold})",
# style = "multiline", # simple
split.table = Inf, # default = 80 characters; Inf = turn off table splitting
split.cells = c(15, 50, 5, 5, 5), # default = 30
# split.cells = c("25%", "50%", "5%", "10%", "10%"), # no difference
justify = "left"
)
```
huxtable table
```{r huxtable-table, echo=FALSE, out.width='90%', fig.align = "center", results='asis'}
library(dplyr)
library(huxtable)
# https://hughjonesd.github.io/huxtable/
hux <- as_hux(df) %>%
# huxtable::add_rownames(colname = '') %>%
huxtable::add_colnames() %>%
set_top_border(1, everywhere, 1) %>%
set_bottom_border(1, everywhere, 1) %>%
set_bottom_border(final(), everywhere, 1) %>%
set_bold(1, everywhere, TRUE) %>% # bold headlines
set_italic(-1, 1, TRUE) %>% # italics in first column (except the first row)
set_valign("top") %>%
set_width(1) %>%
set_col_width(c(0.10,0.45,0.05,0.10,0.10)) %>%
set_wrap(TRUE) %>%
set_position('left') %>% # fix table alignment (default is center)
add_footnote("Sample Footnote") %>%
set_font_size(4)
table_caption <- 'huxtable table: markdown styling works in HTML (*italics*, **bold**), LaTex styling in PDF (\\textbf{bold})'
# Print table conditional on output type
if (knitr::is_html_output()) {
caption(hux) <- paste0('(#tab:huxtable-table-explicit) ', table_caption)
print_html(hux) # output table html friendly (requires in chunk options "results='asis'")
}
if (knitr::is_latex_output()) {
caption(hux) <- paste0('(\\#tab:huxtable-table-explicit) ', table_caption)
hux # if using chunk option "results='asis'" simply output the table with "hux", i.e. do not use print_latex(hux)
}
```
Referencing the tables works differently for the different table types.
Adding a short caption for the LoT:
Finally, adding a short caption for the list of tables is not really working as desired:
(ref:huxtable-table-caption) huxtable-table caption
(ref:huxtable-table-scaption) huxtable-table short caption
```{r huxtable-table, echo=FALSE, out.width='90%', fig.align = "center", fig.cap='(ref:huxtable-table-caption)', fig.scap='(ref:huxtable-table-scaption)', results='asis'}
...
```
I have text data inside a column of a dataset, as shown below:
Record Note 1
1 Amount: $43,385.23
Mode: Air
LSP: Panalpina
2 Amount: $1,149.32
Mode: Ocean
LSP: BDP
3 Amount: $1,149.32
LSP: BDP
Mode: Road
4 Amount: U$ 3,234.01
Mode: Air
5 No details
I need to extract each of the details inside the text data and write them into new columns, as shown below. How can I do this in Python?
Expected Output
Record Amount Mode LSP
1 $43,385.23 Air Panalpina
2 $1,149.32 Ocean BDP
3 $1,149.32 Road BDP
4 $3,234.01 Air
5
Is this possible? How can this be done?
Write a custom function and then use `DataFrame.apply()`:
def parse_rec(x):
    note = x['Note']
    details = note.split('\n')
    x['Amount'] = None
    x['Mode'] = None
    x['LSP'] = None
    if len(details) > 1:
        for detail in details:
            if 'Amount' in detail:
                x['Amount'] = detail.split(':')[1].strip()
            if 'Mode' in detail:
                x['Mode'] = detail.split(':')[1].strip()
            if 'LSP' in detail:
                x['LSP'] = detail.split(':')[1].strip()
    return x

df = df.apply(parse_rec, axis=1)
import re

Amount = []
Mode = []
LSP = []

def extract_info(txt):
    Amount_lst = re.findall(r"amounts?\s*:\s*(.*)", txt, re.I)
    Mode_lst = re.findall(r"modes?\s*:\s*(.*)", txt, re.I)
    LSP_lst = re.findall(r"LSP\s*:\s*(.*)", txt, re.I)
    Amount.append(Amount_lst[0].strip() if Amount_lst else "No details")
    Mode.append(Mode_lst[0].strip() if Mode_lst else "No details")
    LSP.append(LSP_lst[0].strip() if LSP_lst else "No details")

df["Note"].apply(lambda x: extract_info(x))
df["Amount"] = Amount   # the accumulated lists, not the per-call *_lst locals
df["Mode"] = Mode
df["LSP"] = LSP
df = df[["Record", "Amount", "Mode", "LSP"]]
Using regex we can extract the information as in the code above and write it to separate columns.
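As an alternative to accumulator lists, pandas' vectorized string methods can do each extraction in a single pass. A minimal sketch, assuming the column is named `Note` as above (the sample frame here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "Record": [1, 2, 5],
    "Note": ["Amount: $43,385.23\nMode: Air\nLSP: Panalpina",
             "Amount: $1,149.32\nMode: Ocean\nLSP: BDP",
             "No details"],
})

# str.extract returns the first capture-group match per row (NaN when absent)
for col in ["Amount", "Mode", "LSP"]:
    df[col] = df["Note"].str.extract(rf"{col}\s*:\s*(.*)", expand=False).str.strip()

print(df[["Record", "Amount", "Mode", "LSP"]])
```

Rows without a matching field simply come out as NaN, which matches the blank cells in the expected output.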
I'm using a nested list to hold data in a Cartesian coordinate type system.
The data is a list of categories which could be 0,1,2,3,4,5,255 (just 7 categories).
The data is held in a list formatted thus:
stack = [[0,1,0,0],
[2,1,0,0],
[1,1,1,3]]
Each list represents a row and each element of a row represents a data point.
I'm keen to hang on to this format because I am using it to generate images and thus far it has been extremely easy to use.
However, I have run into problems running the following code:
for j in range(len(stack)):
    stack[j].append(255)
    stack[j].insert(0, 255)
This is intended to iterate through each row adding a single element 255 to the start and end of each row. Unfortunately it adds 12 instances of 255 to both the start and end!
This makes no sense to me. Presumably I am missing something very trivial but I can't see what it might be. As far as I can tell it is related to the loop: if I write stack[0].append(255) outside of the loop it behaves normally.
The code is obviously part of a much larger script. The script runs multiple for loops, a couple of which use range(12), but those should have finished by the time this loop runs.
So - am I missing something trivial or is it more nefarious than that?
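The symptom (one append showing up 12 times) is characteristic of list aliasing: if the same row object occurs multiple times in `stack`, every occurrence reflects each mutation. A minimal reproduction, assuming something earlier put 12 references to one list into `stack`:

```python
row = [0, 1, 0, 0]
stack = []
for b in range(12):
    stack.append(row)        # twelve references to the SAME list object

for j in range(len(stack)):
    stack[j].append(255)     # every pass mutates the one shared list

print(stack[0])              # [0, 1, 0, 0] followed by twelve 255s
print(stack[0] is stack[11]) # True: the "rows" are a single object
```

`stack[0].append(255)` outside the loop looks normal only because a single append on the shared list still appears once per row.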
Edit: full code.
step_size = 12; the code above is the part that inserts the "right and left borders".
def classify(target_file, output_file):
    import numpy
    import cifar10_eval  # want to hijack functions from the evaluation script

    target_folder = "Binaries/"            # finds target file in "Binaries"
    destination_folder = "Binaries/Maps/"  # destination for output file

    # open the meta file to retrieve x,y dimensions
    file = open(target_folder + target_file + "_meta" + ".txt", "r")
    new_x = int(file.readline())
    new_y = int(file.readline())
    orig_x = int(file.readline())
    orig_y = int(file.readline())
    segment_dimension = int(file.readline())
    step_size = int(file.readline())
    file.close()

    # run cifar10_eval and create predictions vector (formatted as a list)
    predictions = cifar10_eval.map_interface(new_x * new_y)
    del predictions[(new_x * new_y):]  # get rid of excess predictions (an artefact of the fixed batch size)
    print("# of predictions: " + str(len(predictions)))

    # check that we are mapping the whole picture! (evaluation functions don't necessarily use the full data set)
    if len(predictions) != new_x * new_y:
        print("Error: number of predictions from cifar10_eval does not match metadata for this file")
        return

    # copy predictions to a nested list to make extraction of x/y data easy
    # also eliminates need to keep metadata - x/y dimensions are stored via the shape of the output vector
    stack = []
    for j in range(new_y):
        stack.append([])
        for i in range(new_x):
            stack[j].append(predictions[j*new_x + i])
    predictions = None  # clear the variable to free up memory

    # iterate through map list and explode each category to cover more pixels
    # assigns a step_size x step_size area to each classification input to achieve correspondence with original image
    new_stack = []
    for j in range(len(stack)):
        row = stack[j]
        new_row = []
        for i in range(len(row)):
            for a in range(step_size):
                new_row.append(row[i])
        for b in range(step_size):
            new_stack.append(new_row)
    stack = new_stack
    new_stack = None
    new_row = None  # clear the variables to free up memory

    # add a border to the image to indicate that some information has been lost
    # border also ensures that map has 1-1 correspondence with original image which makes processing easier
    # calculate border dimensions
    top_and_left_thickness = int((segment_dimension - step_size) / 2)
    right_thickness = int(top_and_left_thickness + (orig_x - (top_and_left_thickness * 2 + step_size * new_x)))
    bottom_thickness = int(top_and_left_thickness + (orig_y - (top_and_left_thickness * 2 + step_size * new_y)))
    print(top_and_left_thickness)
    print(right_thickness)
    print(bottom_thickness)
    print(len(stack[0]))

    # add the right then left borders
    for j in range(len(stack)):
        for b in range(right_thickness):
            stack[j].append(255)
        for b in range(top_and_left_thickness):
            stack[j].insert(0, 255)
    print(stack[0])
    print(len(stack[0]))

    # add the top and bottom borders
    row = []
    for i in range(len(stack[0])):
        row.append(255)  # create a blank row
    for b in range(top_and_left_thickness):
        stack.insert(0, row)  # append the blank row to the top x many times
    for b in range(bottom_thickness):
        stack.append(row)  # append the blank row to the bottom of the map

    # we have our final output
    # repackage this as a numpy array and save for later use
    output = numpy.asarray(stack, numpy.uint8)
    numpy.save(destination_folder + output_file + ".npy", output)
    print("Category mapping complete, map saved as numpy pickle: " + output_file + ".npy")
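The 12-fold duplication comes from the explode step: `new_stack.append(new_row)` adds the same list object `step_size` (= 12) times, so a later `append`/`insert` on any row mutates all of its aliases (the top/bottom border `row` is shared the same way). A minimal sketch of the fix, appending copies instead of references (variable names here are illustrative, not taken from the script):

```python
# Sketch: copy the row when duplicating it, so each row is independent.
step_size = 3
row_data = [[0, 1], [2, 1]]

new_stack = []
for row in row_data:
    new_row = []
    for value in row:
        new_row.extend([value] * step_size)   # explode horizontally
    for b in range(step_size):
        new_stack.append(list(new_row))       # copy: no shared references

# appending a border value no longer leaks into sibling rows
new_stack[0].append(255)
print(new_stack[0])
print(new_stack[1])
```

The same applies to the blank border row: `stack.insert(0, list(row))` keeps the top and bottom borders independent of each other.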
I have the following data frame df, which I converted from an SFrame:
URI name text
0 <http://dbpedia.org/resource/Digby_M... Digby Morrell digby morrell born 10 october 1979 i...
1 <http://dbpedia.org/resource/Alfred_... Alfred J. Lewy alfred j lewy aka sandy lewy graduat...
2 <http://dbpedia.org/resource/Harpdog... Harpdog Brown harpdog brown is a singer and harmon...
3 <http://dbpedia.org/resource/Franz_R... Franz Rottensteiner franz rottensteiner born in waidmann...
4 <http://dbpedia.org/resource/G-Enka> G-Enka henry krvits born 30 december 1974 i...
I have done the following:
from textblob import TextBlob as tb
import math

def tf(word, blob):
    return blob.words.count(word) / len(blob.words)

def n_containing(word, bloblist):
    return sum(1 for blob in bloblist if word in blob.words)

def idf(word, bloblist):
    return math.log(len(bloblist) / (1 + n_containing(word, bloblist)))

def tfidf(word, blob, bloblist):
    return tf(word, blob) * idf(word, bloblist)

bloblist = []
for i in range(0, df.shape[0]):
    bloblist.append(tb(df.iloc[i, 2]))

for i, blob in enumerate(bloblist):
    print("Top words in document {}".format(i + 1))
    scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
    sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    for word, score in sorted_words[:3]:
        print("\tWord: {}, TF-IDF: {}".format(word, round(score, 5)))
But this is taking a lot of time as there are 59000 documents.
Is there a better way to do it?
I was confused about this subject too, but I found a few solutions on the internet that use Spark. Here you can look at:
https://www.linkedin.com/pulse/understanding-tf-idf-first-principle-computation-apache-asimadi
On the other hand, I tried the following method and the results were not bad. Maybe you want to try it:

- I have a word list. This list contains each word and its count.
- I computed the average of these word counts.
- I selected a lower limit and an upper limit based on the average value
  (e.g. lower bound = average / 2 and upper bound = average * 5).
- Then I created a new word list using the upper and lower bounds.

With these I got this result:

Before normalization, word vector length: 11880
Mean: 19, lower bound: 9, upper bound: 95
After normalization, word vector length: 1595

The cosine similarity results were also better.
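The pruning described above can be sketched in a few lines (synthetic counts for illustration; the /2 and *5 factors are the suggested heuristics, not canonical values):

```python
def prune_vocabulary(counts, low_factor=0.5, high_factor=5.0):
    """Keep words whose count lies between mean*low_factor and mean*high_factor."""
    mean = sum(counts.values()) / len(counts)
    lower, upper = mean * low_factor, mean * high_factor
    return {w: c for w, c in counts.items() if lower <= c <= upper}

# 18 ordinary words, one stop-word-like outlier, one hapax
counts = {"w%d" % i: 30 for i in range(18)}
counts["the"] = 400   # far above 5 * mean -> dropped
counts["rare"] = 1    # far below mean / 2 -> dropped
pruned = prune_vocabulary(counts)
print(len(counts), "->", len(pruned))
```

Shrinking the vocabulary this way speeds up any downstream TF-IDF or cosine-similarity computation, at the cost of discarding very common and very rare terms.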
I am trying to read a bunch of files of this type using R, to parse out the information and put the data into a data-frame-like format.
These are the contents of the file:
last_run current_run seconds
------------------------------- ------------------------------- -----------
Jul 4 2016 7:17AM Jul 4 2016 7:21AM 226
Engine Utilization (Tick %) User Busy System Busy I/O Busy Idle
------------------------- ------------ ------------ ---------- ----------
ThreadPool : syb_default_pool
Engine 0 5.0 % 0.4 % 22.4 % 72.1 %
Engine 1 3.9 % 0.5 % 22.8 % 72.8 %
Engine 2 5.6 % 0.3 % 22.5 % 71.6 %
Engine 3 5.1 % 0.4 % 22.7 % 71.8 %
------------------------- ------------ ------------ ---------- ----------
Pool Summary Total 336.1 % 25.6 % 1834.6 % 5803.8 %
Average 4.2 % 0.3 % 22.9 % 72.5 %
------------------------- ------------ ------------ ---------- ----------
Server Summary Total 336.1 % 25.6 % 1834.6 % 5803.8 %
Average 4.2 % 0.3 % 22.9 % 72.5 %
Transaction Profile
-------------------
Transaction Summary per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Committed Xacts 137.3 n/a 41198 n/a
Average Runnable Tasks 1 min 5 min 15 min % of total
------------------------- ------------ ------------ ---------- ----------
ThreadPool : syb_default_pool
Global Queue 0.0 0.0 0.0 0.0 %
Engine 0 0.0 0.1 0.1 0.6 %
Engine 1 0.0 0.0 0.0 0.0 %
Engine 2 0.2 0.1 0.1 2.6 %
------------------------- ------------ ------------ ----------
Pool Summary Total 7.2 5.9 6.1
Average 0.1 0.1 0.1
------------------------- ------------ ------------ ----------
Server Summary Total 7.2 5.9 6.1
Average 0.1 0.1 0.1
Device Activity Detail
----------------------
Device:
/dev/vx/rdsk/sybaserdatadg/datadev_125
datadev_125 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %
-----------------------------------------------------------------------------
Device:
/dev/vx/rdsk/sybaserdatadg/datadev_126
datadev_126 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 n/a
------------------------- ------------ ------------ ---------- ----------
Total I/Os 0.0 0.0 0 0.0 %
-----------------------------------------------------------------------------
Device:
/dev/vx/rdsk/sybaserdatadg/datadev_127
datadev_127 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 0.0 0.0 5 0.4 %
Non-APF 0.0 0.0 1 0.1 %
Writes 3.8 0.0 1128 99.5 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 3.8 0.0 1134 0.1 %
Mirror Semaphore Granted 3.8 0.0 1134 100.0 %
Mirror Semaphore Waited 0.0 0.0 0 0.0 %
-----------------------------------------------------------------------------
Device:
/sybaser/database/sybaseR/dev/sybaseR.datadev_000
GPS_datadev_000 per sec per xact count % of total
------------------------- ------------ ------------ ---------- ----------
Reads
APF 7.9 0.0 2372 55.9 %
Non-APF 5.5 0.0 1635 38.6 %
Writes 0.8 0.0 233 5.5 %
------------------------- ------------ ------------ ---------- ----------
Total I/Os 14.1 0.0 4240 0.3 %
Mirror Semaphore Granted 14.1 0.0 4239 100.0 %
Mirror Semaphore Waited 0.0 0.0 2 0.0 %
I need to capture "Jul 4 2016 7:21AM" as Date;
from the "Engine Utilization (Tick %)" section, the Server Summary -> Average value "4.2 %";
and from the "Transaction Profile" section, the Transaction Summary "count" entry.
So my data frame should look something like this:
Date Cpu Count
Jul 4 2016 7:21AM 4.2 41198
Can somebody help me parse this file to get this output?
I have tried something like this:
read.table(text=readLines("file.txt")[count.fields("file.txt", blank.lines.skip=FALSE) == 9])
to get this line:
Average 4.2 % 0.3 % 22.9 % 72.5 %
But I want to extract only the Average right after "Engine Utilization (Tick %)", since there could be many lines that start with Average. The Average line that shows up right after "Engine Utilization (Tick %)" is the one I want.
How do I put that in this line to extract this information from this file:
read.table(text=readLines("file.txt")[count.fields("file.txt", blank.lines.skip=FALSE) == 9])
Can I use grep in this read.table line to search for certain characters?
%%%% Shot 1 -- got something working
extract <- function(filenam="file.txt"){
  txt <- readLines(filenam)
  ## date of current run:
  ## assumed to be on 2nd line following the first line matching "current_run"
  ii <- 2 + grep("current_run", txt, fixed=TRUE)[1]
  line_current_run <- Filter(function(v) v != "", strsplit(txt[ii], " ")[[1]])
  date_current_run <- paste(line_current_run[5:8], collapse=" ")
  ## Cpu:
  ## assumed to be on line following the first line matching "Server Summary"
  ## which comes after the first line matching "Engine Utilization ..."
  jj <- grep("Engine Utilization (Tick %)", txt, fixed=TRUE)[1]
  ii <- grep("Server Summary", txt, fixed=TRUE)
  ii <- 1 + min(ii[ii > jj])
  line_Cpu <- Filter(function(v) v != "", strsplit(txt[ii], " ")[[1]])
  Cpu <- line_Cpu[2]
  ## Count:
  ## assumed to be on 2nd line following the first line matching "Transaction Summary"
  ii <- 2 + grep("Transaction Summary", txt, fixed=TRUE)[1]
  line_count <- Filter(function(v) v != "", strsplit(txt[ii], " ")[[1]])
  count <- line_count[5]
  data.frame(Date=date_current_run, Cpu=Cpu, Count=count, stringsAsFactors=FALSE)
}
print(extract("file.txt"))
##file.list <- dir("./")
file.list <- rep("file.txt",3)
merged <- do.call("rbind", lapply(file.list, extract))
print(merged)
file.list <- rep("file.txt",2000)
print(system.time(merged <- do.call("rbind", lapply(file.list, extract))))
## runs in about 2.5 secs on my laptop
%%% Shot 2: 1st attempt to extract a (potentially variable) number of device columns
extractv2 <- function(filenam="file2.txt"){
txt <- readLines(filenam)
## date of current run:
## assumed to be on 2nd line following the first line matching "current_run"
ii <- 2 + grep("current_run",txt, fixed=TRUE)[1]
line_current_run <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
date_current_run <- paste(line_current_run[5:8], collapse=" ")
## Cpu:
## assumed to be on line following the first line matching "Server Summary"
## which comes after the first line matching "Engine Utilization ..."
jj <- grep("Engine Utilization (Tick %)", txt, fixed=TRUE)[1]
ii <- grep("Server Summary",txt, fixed=TRUE)
ii <- 1 + min(ii[ii>jj])
line_Cpu <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
Cpu <- line_Cpu[2]
## Count:
## assumed to be on 2nd line following the first line matching "Transaction Summary"
ii <- 2 + grep("Transaction Summary",txt, fixed=TRUE)[1]
line_count <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
count <- line_count[5]
## Total I/Os
## 1. Each line "Device:" is assumed to be the header of a block of lines
## containing info about a single device (there are 4 such blocks
## in your example);
## 2. each block is assumed to contain one or more lines matching
## "Total I/Os";
## 3. the relevant count data is assumed to be contained in the last
## of such lines (at column 5), for each block.
## Approach: loop on the line numbers of those lines matching "Device:"
## to get: A. counts; B. device names
ii_block_dev <- grep("Device:", txt, fixed=TRUE)
ii_lines_IOs <- grep("Total I/Os", txt, fixed=TRUE)
nblocks <- length(ii_block_dev)
## A. get counts for each device
## for each block, select *last* line matching "Total I/Os"
ii_block_dev_aux <- c(ii_block_dev, Inf) ## just a hack to get a clean code
ii_lines_IOs_dev <- sapply(1:nblocks, function(block){
## select lines matching "Total I/Os" within each block
IOs_per_block <- ii_lines_IOs[ ii_lines_IOs > ii_block_dev_aux[block ] &
ii_lines_IOs < ii_block_dev_aux[block+1]
]
tail(IOs_per_block, 1) ## get the last line of each block (if more than one match)
})
lines_IOs <- lapply(txt[ii_lines_IOs_dev], function(strng){
Filter(function(v) v!="", strsplit(strng," ")[[1]])
})
IOs_counts <- sapply(lines_IOs, function(v) v[5])
## B. get device names:
## assumed to be on lines following each "Device:" match
ii_devices <- 1 + ii_block_dev
device_names <- sapply(ii_devices, function(ii){
Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
})
## Create a data.frame with "device_names" as column names and "IOs_counts" as
## the values of a single row.
## Sorting the device names by order() will help produce the same column names
## if different sysmon files list the devices in different order
ord <- order(device_names)
devices <- as.data.frame(structure(as.list(IOs_counts[ord]), names=device_names[ord]),
check.names=FALSE) ## Prevent R from messing with our device names
data.frame(stringsAsFactors=FALSE, check.names=FALSE,
Date=date_current_run, Cpu=Cpu, Count=count, devices)
}
print(extractv2("file2.txt"))
## WATCH OUT:
## merging will ONLY work if all devices have the same names across sysmon files!!
file.list <- rep("file2.txt",3)
merged <- do.call("rbind", lapply(file.list, extractv2))
print(merged)
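If the device sets do differ across files, one workaround (a sketch, not part of the answer above; rbind_fill and df_list are made-up names) is to pad each per-file data frame with NA columns before rbind():

```r
## Pad each data frame with NA columns so that all of them share the same
## column set, then rbind() them. "df_list" stands in for
## lapply(file.list, extractv2).
rbind_fill <- function(df_list) {
  all_cols <- unique(unlist(lapply(df_list, names)))
  filled <- lapply(df_list, function(df) {
    missing <- setdiff(all_cols, names(df))
    if (length(missing)) df[missing] <- NA  # absent device columns become NA
    df[all_cols]                            # enforce a common column order
  })
  do.call("rbind", filled)
}

df_list <- list(
  data.frame(Date = "Jul 4 2016 7:21AM", devA = 4240, check.names = FALSE),
  data.frame(Date = "Jul 5 2016 7:21AM", devB = 1000, check.names = FALSE)
)
merged <- rbind_fill(df_list)
```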
%%%%%%% Shot 3: extract two tables, one with a single row, and a second with a variable number of rows (depending on the which devices are listed in each sysmon file).
extractv3 <- function(filenam="file2.txt"){
txt <- readLines(filenam)
## date of current run:
## assumed to be on 2nd line following the first line matching "current_run"
ii <- 2 + grep("current_run",txt, fixed=TRUE)[1]
line_current_run <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
date_current_run <- paste(line_current_run[5:8], collapse=" ")
## Cpu:
## assumed to be on line following the first line matching "Server Summary"
## which comes after the first line matching "Engine Utilization ..."
jj <- grep("Engine Utilization (Tick %)", txt, fixed=TRUE)[1]
ii <- grep("Server Summary",txt, fixed=TRUE)
ii <- 1 + min(ii[ii>jj])
line_Cpu <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
Cpu <- line_Cpu[2]
## Count:
## assumed to be on 2nd line following the first line matching "Transaction Summary"
ii <- 2 + grep("Transaction Summary",txt, fixed=TRUE)[1]
line_count <- Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
count <- line_count[5]
## first part of output: fixed three-column structure
fixed <- data.frame(stringsAsFactors=FALSE,
Date=date_current_run, Cpu=Cpu, Count=count)
## Total I/Os
## 1. Each line "Device:" is assumed to be the header of a block of lines
## containing info about a single device (there are 4 such blocks
## in your example);
## 2. each block is assumed to contain one or more lines matching
## "Total I/Os";
## 3. the relevant count data is assumed to be contained in the last
## of such lines (at column 5), for each block.
## Approach: loop on the line numbers of those lines matching "Device:"
## to get: A. counts; B. device names
ii_block_dev <- grep("Device:", txt, fixed=TRUE)
if(length(ii_block_dev)==0){
variable <- data.frame(stringsAsFactors=FALSE,
date_current_run=date_current_run,
device_names=NA, IOs_counts=NA)
}else{
ii_lines_IOs <- grep("Total I/Os", txt, fixed=TRUE)
nblocks <- length(ii_block_dev)
if(length(ii_lines_IOs)==0){
warning(sprintf("WEIRD datapoint at date %s: I have %d devices but 0 I/O lines??",
date_current_run, nblocks))
##stop()
}
## A. get counts for each device
## for each block, select *last* line matching "Total I/Os"
ii_block_dev_aux <- c(ii_block_dev, Inf) ## just a hack to get a clean code
ii_lines_IOs_dev <- sapply(1:nblocks, function(block){
## select lines matching "Total I/Os" within each block
IOs_per_block <- ii_lines_IOs[ ii_lines_IOs > ii_block_dev_aux[block ] &
ii_lines_IOs < ii_block_dev_aux[block+1]
]
tail(IOs_per_block, 1) ## get the last line of each block (if more than one match)
})
lines_IOs <- lapply(txt[ii_lines_IOs_dev], function(strng){
Filter(function(v) v!="", strsplit(strng," ")[[1]])
})
IOs_counts <- sapply(lines_IOs, function(v) v[5])
## B. get device names:
## assumed to be on lines following each "Device:" match
ii_devices <- 1 + ii_block_dev
device_names <- sapply(ii_devices, function(ii){
Filter(function(v) v!="", strsplit(txt[ii]," ")[[1]])
})
## Create a data.frame with three columns: date, device, counts
variable <- data.frame(stringsAsFactors=FALSE,
date_current_run=rep(date_current_run, length(IOs_counts)),
device_names=device_names, IOs_counts=IOs_counts)
}
list(fixed=fixed, variable=variable)
}
print(extractv3("file2.txt"))
file.list <- c("file.txt","file2.txt","file3.txt")
res <- lapply(file.list, extractv3)
fixed.merged <- do.call("rbind", lapply(res, function(r) r$fixed))
print(fixed.merged)
variable.merged <- do.call("rbind", lapply(res, function(r) r$variable))
print(variable.merged)
Manipulating text files can sometimes be easier using dedicated programs, e.g. gawk, which is specifically designed for finding patterns in text files and outputting data from them. We can use a short gawk script to get the required data to load into R. Note that each line of the script consists of a pattern to look for, followed by an action to take enclosed in {}. NR is a counter of the number of lines read so far.
BEGIN {OFS = ""; ORS = ""}
/current_run/ {dat_line = NR+2; cpu_done = 0}
/Server Summary/ {cpu_line = NR+1}
/Transaction Summary/ {cnt_line = NR+2}
NR == dat_line {print "'",$5," ",$6," ",$7," ",$8,"' "}
NR == cpu_line && cpu_done==0 {print $2," "; cpu_done = 1}
NR == cnt_line {print $5,"\n"}
Save this script with the name "ext.awk", then extract all the data files into an R data frame (assuming they are all located in one folder and have the extension .txt) with
df <- read.table(text=system("gawk -f ext.awk *.txt", T), col.names = c("Date","Cpu","Count"))
NB, gawk comes preinstalled on most Linux distributions. On Windows you may need to install it from http://gnuwin32.sourceforge.net/packages/gawk.htm
For reading the files, I am assuming CSV as the file type.
For other formats, please see
http://www.r-tutor.com/r-introduction/data-frame/data-import
>utilization <- read.csv(file="", header=TRUE)
>serverSummary <- read.csv(file="", header=TRUE)
>transcProfile <- read.csv(file="", header=TRUE)
==> merge() only accepts two data frames at a time
>data <- merge(utilization, serverSummary)
>dataframe <- merge(data, transcProfile)
Now you will have all the columns in dataframe:
>dataframe
You can see all the columns in dataframe.
Extract the columns as required.
==> The subset() function is the easiest way to select variables and observations
>subset(dataframe, select=c("last_run","Average","Transaction Profile"))
Now you can write it to CSV or any file type
>write.csv(dataframe, file = "MyData.csv")
For merging all the files together
multmerge = function(mypath){
filenames = list.files(path=mypath, full.names=TRUE)
datalist = lapply(filenames, function(x){read.csv(file=x, header=TRUE)})
Reduce(function(x,y) {merge(x,y)}, datalist)
}
After running the code to define the function, you are all set to use it. The function takes a path. This path should be the name of a folder that contains all of the files you would like to read and merge together and only those files you would like to merge. With this in mind, I have two tips:
Before you use this function, my suggestion is to create a new folder in a short directory (for example, the path for this folder could be "C://R//mergeme") and save all of the files you would like to merge in that folder.
In addition, make sure that the column that will do the matching is formatted the same way (and has the same name) in each of the files.
Suppose you saved your 20 files into the mergeme folder at "C://R//mergeme" and you would like to read and merge them. To use my function, you use the following syntax:
mymergeddata = multmerge("C://R//mergeme")
After running this command, you have a fully merged data frame with all of your variables matched to each other.
Now you can subset the data frame as per the required columns.
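To see what the Reduce(merge, ...) step does, here is a minimal in-memory illustration (d1/d2/d3 are toy data frames standing in for the files): merge() joins on the shared column names, and Reduce() chains the two-way merges across the whole list.

```r
## Toy data frames sharing an "id" column; merge() joins pairwise on the
## shared column names, and Reduce() folds the whole list into one frame.
d1 <- data.frame(id = 1:3, cpu = c(4.2, 5.0, 3.1))
d2 <- data.frame(id = 1:3, count = c(41198, 39000, 42500))
d3 <- data.frame(id = 1:3, reads = c(2372, 2100, 2600))
merged <- Reduce(function(x, y) merge(x, y), list(d1, d2, d3))
```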
Use readLines or stringi::stri_read_lines to read the contents of the file as a character vector. The latter is typically faster, but not as mature, and occasionally breaks on unusual content.
lines <- readLines("the file name")
For fast regular expression matching, stringi is typically the best choice. rebus.datetimes allows you to generate a regular expression from a strptime date format string.
Finding the current run date
The line on which current_run appears is found with:
library(stringi)
library(rebus.datetimes)
i_current_run <- which(stri_detect_fixed(lines, "current_run"))
To extract the dates, this code only looks at the 2nd line after the one where current_run is found, but the code is vectorizable, so you can easily look at all the lines if you have files where that assumption doesn't hold.
date_format <- "%b%t%d%t%Y%t%H:%M%p"
rx_date <- rebus.datetimes::datetime(date_format, io = "input")
extracted_dates <- stri_extract_all_regex(lines[i_current_run + 2], rx_date)
current_run_date <- strptime(
extracted_dates[[1]][2], date_format, tz = "UTC"
)
## [1] "2016-07-04 07:21:00 UTC"
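As a base-R illustration of that vectorisation, the following sketch scans every line at once with regexpr()/regmatches(); the hand-written rx_date_base pattern is only a stand-in for the regex that rebus.datetimes generates.

```r
## Extract the first date-like match from every line at once, then parse
## the surviving matches. rx_date_base is a hand-written stand-in for the
## rebus.datetimes-generated pattern.
rx_date_base <- "[A-Z][a-z]{2}\\s+[0-9]{1,2}\\s+[0-9]{4}\\s+[0-9]{1,2}:[0-9]{2}[AP]M"
lines <- c(
  "  current_run:",
  "  ===============",
  "  Sampling Started at: Jul  4 2016  7:21AM",
  "  no date here"
)
m <- regmatches(lines, regexpr(rx_date_base, lines))  # vectorised over all lines
dates <- strptime(m, "%b %d %Y %I:%M%p", tz = "UTC")
```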
Finding the % user busy
The "Engine Utilization" section is found via
i_engine_util <- which(
stri_detect_fixed(lines, "Engine Utilization (Tick %)")
)[1]
We want the first instance of "Server Summary" that comes after this line.
n_lines <- length(lines)
i_server_summary <- i_engine_util +
min(which(
stri_detect_fixed(lines[(i_engine_util + 1):n_lines], "Server Summary")
))
Use a regular expression to extract the number from the next line.
user_busy <- as.numeric(
stri_extract_first_regex(lines[i_server_summary + 1], "[0-9]+(?:\\.[0-9])")
)
## [1] 4.2
Finding the count of committed xacts
The "Committed Xacts" line is
i_comm_xacts <- which(stri_detect_fixed(lines, "Committed Xacts"))
The count value is a set of digits surrounded by spaces.
xacts_count <- as.integer(
stri_extract_first_regex(lines[i_comm_xacts], "(?<= )[0-9]+(?= )")
)
## [1] 41198
Combining the results
data.frame(
Date = current_run_date,
CPU = user_busy,
Count = xacts_count
)