I am using R Markdown with Shiny to generate Word file reports, and RStudio Server for development. On executing the Shiny application, it halts due to an error in one of the R Markdown files.
The error says...
Quitting from lines 11-486 (/home/KS127/dev/shiny_apps/pashiny/inst/shiny/dataframe_source.Rmd)
Quitting from lines NA-486 (/home/KS127/dev/shiny_apps/pashiny/inst/shiny/dataframe_source.Rmd)
The line numbers it reports are not useful for identifying the root cause. Adding print statements does not help either: because I am generating a Word file report, I cannot see any print output unless the complete .Rmd executes successfully.
I tried changing the RStudio setting from chunk output inline to chunk output in console, as mentioned here as well, but it is of no use.
Is there any way to print the .Rmd file's print statements or output to the console, or any other way to debug the .Rmd file?
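One general technique that helps here (a minimal sketch, reusing the path from the error message above): render the document directly from the R console rather than through the Shiny app, so knitr prints its chunk-by-chunk progress and any print()/message() output as it runs. Setting error = TRUE in the setup chunk additionally lets the render continue past a failing chunk.

# In the .Rmd setup chunk: keep knitting past errors so output up to
# (and including) the failure appears in the rendered document.
knitr::opts_chunk$set(error = TRUE)

# From the R console (not via the Shiny app): chunk progress and any
# console output are printed live while the document knits.
rmarkdown::render("/home/KS127/dev/shiny_apps/pashiny/inst/shiny/dataframe_source.Rmd")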
In addition to my comment above, Abhinandan, I've recently stumbled across a new package called testrmd.
Although it is new, it seems to work with a number of different test packages and provides a useful front-end for Rmarkdown documents. (I'm certainly going to use it.)
You might want to check it out. Here's the link: https://github.com/ropenscilabs/testrmd.
I hope this helps you.
See: My .Rmd file becomes very lengthy. Is it possible to split it and source() its smaller portions from the main .Rmd?
That's what I do: split your code chunks into separate files and add them one by one.
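A minimal sketch of that approach (file names here are made up): shared plain-R code is source()d from a setup chunk, and the main .Rmd pulls the pieces in with knitr's child chunk option.

```{r setup}
source("helpers.R")  # plain R functions shared across the child documents
```

```{r child = "01-import.Rmd"}
```

```{r child = "02-analysis.Rmd"}
```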
When my colleague and I run the same Rmd file on our respective computers, they produce different .tex files. This is a problem, because the .tex file my computer produces doesn't compile. Apparently there is some invisible local setting that differs between our computers, but what could it be? I updated all the R packages I use, but to no avail.
The Rmd file starts with
output:
  bookdown::pdf_document2:
    keep_tex: yes
    toc: false
And both of us compile it by simply hitting the Knit button in RStudio.
Noticeable differences in the tex-files are:
extra linebreaks in different places
a line that is commented out in the rmd-file (<!-- blabla -->) appears in my tex-file, not in his, but some other out-commented lines appear in neither (as they should)
at the end of lines in tables there is a \strut inserted in my tex-file but not in his
section heads read \hypertarget{blabla} in his file but not in mine
For none of these differences can I find any place in the Rmd file where any choice about this is made; apparently some local settings file I am not aware of is used in the process?
Please let me know if you need more information.
EDIT: We found a partial answer and full solution, but I am still interested in what the underlying mechanism is. It turned out that I was using an older version of RStudio. (It took me a long time to find that out because the check-for-updates tool in RStudio kept telling me that I was using the newest version, but that is a separate issue.) Using the same version of RStudio, we get the same result.
The translation from Rmd to tex has multiple steps:
All the code chunks are extracted and executed via knitr, resulting in an .md file.
The .md file is translated to .tex via pandoc.
For most people, pandoc comes bundled with RStudio, so when you updated RStudio you also got a more recent pandoc version. You can check which pandoc version is being used with rmarkdown::pandoc_version().
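As a quick check, run the following on both machines and compare the results (a minimal sketch, assuming you both render from within RStudio):

rmarkdown::pandoc_version()  # version of the pandoc binary actually used
packageVersion("rmarkdown")
packageVersion("knitr")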
I have a C++ project in which the source code comments are in Chinese, and now I want to convert them into English.
I tried Google Translate but ran into issues: whole .cpp or header files did not get converted, names of structs, classes, etc. got changed, and sometimes the code itself also got modified.
Note: Each .cpp or .h file is less than 1000 lines of code, but there are multiple C++ projects, each having around 10 files. Thus I have around 50 files for which I need to translate Chinese text to English.
Well, what did you expect? Google Translate doesn't know what a CPP file is and how to treat it. You'll have to write your own program that extracts comments from them (not that hard), runs just those through Google Translate, and then puts them back in.
Mind you, if there is commented out code, or the comments reference variable names, those will get translated too. Detecting and handling these cases is a lot harder already.
Extracting comments is a lexical issue, and mostly a quite simple one.
In a few hours, you could write (e.g. with flex) some simple command line program extracting them. And a good editor (such as GNU emacs) could even be configured to run that filter on selected code chunks.
(handling a few corner cases, such as raw string literals, might be slightly more difficult, but these don't happen often and you might handle them manually)
BTW, if you are assigned to work on that code, you'll need to understand it, and that takes much more time than copying & pasting or editing each comment manually.
Lastly, I am not sure of the quality of automatic translation of code comments. You might be disappointed. Also, the code names (of functions, of classes, of variables, etc.) matter a lot more.
Perhaps adding your comments in English could be wiser.
Don't forget to use some version control system. You really need one (e.g. git).
(I am not convinced that extracting comments for automatic translation would help your work)
First, separate the comment and code parts into different files using a Python script like the one below:
import sys

# Split the input file line by line: lines containing "//" go to
# comment.txt, everything else to code.txt. A blank line is written
# to the other file each time, so the two files stay aligned
# line-for-line and can be rejoined later with paste.
path = sys.argv[1]
with open(path, "r") as f:
    lines = f.readlines()

with open("comment.txt", "w") as comment, open("code.txt", "w") as code:
    for l in lines:
        if "//" in l:
            comment.write(l)
            code.write("\n")
        else:
            code.write(l)
            comment.write("\n")
Now translate comment.txt with Google Translate and then use
paste code.txt comment_en > source
where comment_en is the translated comment file in English.
There is a 'Code Coverage Results' window in Visual Studio which allows you to view the contents of a *.coverage file (generated by one of the VS performance tools). I was wondering if there is a way to export the Code Coverage Results to Excel for further analysis. The tools in the Code Coverage Results window seem somewhat limited, and I was wondering if I was missing something.
I've searched quite a bit and cannot find the answers I was hoping for. There are three main questions which do not seem to have answers:
Can you search the data within the Code Coverage Results? The typical VS search will not let you search within the Code Coverage Results window.
Can the Code Coverage Results be exported to Excel, or as a *.csv file? If not, can the *.coveragexml file (which seems to be the only export option) be imported into Excel in a way that would give a table similar to the one in the Code Coverage Results window?
Is there an 'Expand All'/'Collapse All' button for the Code Coverage Results window? It would be nice to be able to expand the whole Code Coverage Results tree, or at least a selected group of branches, at once.
Any suggestions/input would be useful.
What you can do is this:
Export to XML (I renamed it to ...coverage.xml so it is recognized as an XML file, but I'm not sure if that's necessary)
Load with Visual Studio
Format in VS (Ctrl+K, Ctrl+D)
Now you can open this in Notepad++, for example (or any other good XML editor), where you can collapse or expand all text blocks.
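That XML route also bears on the Excel/CSV part of the question above: once exported, any XML-to-table pass produces something Excel can open. A rough sketch in R with the xml2 package (any scripting language with an XML parser works); the node names here (Module, ModuleName, LinesCovered) are guesses, so inspect your exported file and adjust the XPath accordingly:

library(xml2)

# Node names are assumptions; open the exported XML and adjust.
doc  <- read_xml("results.coveragexml")
mods <- xml_find_all(doc, "//Module")
tbl  <- data.frame(
  module        = xml_text(xml_find_first(mods, ".//ModuleName")),
  lines_covered = xml_text(xml_find_first(mods, ".//LinesCovered"))
)
write.csv(tbl, "coverage.csv", row.names = FALSE)  # opens directly in Excel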
I'm a great fan of R Markdown, finding it even easier than weaving LaTeX for quick project documentation (less than 15 pages). However, I sometimes also have to support other statistics packages (SPSS, Stata, and SAS) and was wondering about equivalent solutions for these.
To some extent this might go back to using some kind of original Noweb code + markdown file to be compiled over the command line. I guess calling the other packages from R is another option (sketched below).
I have had a look at this example by John Muschelli: http://rpubs.com/muschellij2/3888 and it looks as though he knitted Stata code into an R markdown file.
Can someone provide specific examples of how this can be done in Stata, SAS or SPSS?
I do know of SASweave and StatWeave (the latter is apparently broken?), but think that a markdown solution would be far more advantageous in our case.
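On the "calling the other packages from R" option mentioned above, a bare-bones sketch from an R chunk (file names are placeholders, and the executable names depend on your installation; both Stata and SAS have batch modes that write a plain-text log you can pull back into the knitted document):

# Placeholders throughout; adjust executables and paths to your setup.
system("stata -b do analysis.do")  # Stata batch mode writes analysis.log
cat(readLines("analysis.log"), sep = "\n")

system("sas analysis.sas")         # SAS batch mode writes analysis.lst and analysis.log
cat(readLines("analysis.lst"), sep = "\n")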
Stata has its own SMCL for annotation of logs, the M standing for mark-up. The main reason for a different language is that SMCL has to be created and interpreted line by line in situations where no end of document is in sight, namely within interactive sessions. This is created by Stata automatically as annotation when you ask for it and can be stipulated by users or programmers as a way of tuning Stata's display choices.
The possible connection to your question is that SMCL can be translated to HTML, which opens various doors. So, something that is easy in Stata is to do some work, keep a log file in SMCL, and then translate the log file to HTML. You would not get anything really nice without further work, but the further work is easy and amounts to doing what you would have done anyway, but in your favourite text editor or word processor rather than within Stata.
This is made easier by log2html which Stata users can install using ssc inst log2html. It exploits a feature undocumented in Stata.
Stata's help files can also be translated to HTML in the same way (but consider copyright issues if doing this with official help files; it's fair play with your own help files).
John Muschelli pointed me to this Stata program:
https://github.com/amarder/stata-tutorial/blob/master/knitr.do
It parses a .domd file which contains markdown and Stata code and produces a .md file with executed Stata code. The name of the file to be parsed is at the end of the knitr.do file.
More specifically:
Download the knitr.do file from https://github.com/amarder/stata-tutorial/blob/master/knitr.do
Download the clustered-standard-errors.domd file from https://github.com/amarder/stata-tutorial/blob/master/clustered-standard-errors.domd
Save them both in some directory.
Modify the last line of knitr.do to reflect the complete path of its directory (e.g.
D:\Desktop\knit_example\clustered-standard-errors.domd).
Run knitr.do to get your markdown (.md) file (and an intermediate .md1 file).
Note that knitr.do contains the programs that do the work and a line (the last one):
knit "whatever-file.domd"
that calls the program.
So you basically write a .domd file [that of step (2) is only an example] containing Markdown syntax and Stata commands, run knitr.do adjusting the file name, and get a Markdown file with executed Stata commands.
There are several caveats:
Only one-liner Stata commands are allowed. A loop, for example, won't work.
".domd" can't be part of the file name.
If there is an error with a Stata command, the user gets no return code.
File handles need to be manually closed if user hits the Break button when the program is running or if there is a Stata command error.
I'm not sure if this is what you want, but if you're looking to create .html files in SAS that contain statistical reports within them, then you can use the Output Delivery System (ODS).
Example syntax is:
ods html file='pathofdirectory\filename.html' <additional options>;
proc print... (SAS code that generates output)
proc means...
proc freq...
proc gchart...
proc gplot...
...
ods html close;
SPSS (and SAS, I presume) has some overhead from the need to write everything to disk, which makes compiling everything in one fell swoop less appealing. Similar to what Yick mentioned, SPSS has an output system with which one can write automated reports to begin with and export to HTML, PDF, or Word. It isn't the easiest thing to make look nice, but it is possible, and additions to ease automated editing (mainly via Python scripts) are being rolled out on a regular basis.
Basically, the automated reports I write now using SPSS and R have HTML shells. The code then just updates or inserts the needed tables and graphs. They are entirely self-contained, reproducible, and run on weekly or monthly timers without human intervention. They just don't have inline code blocks exactly defining how the tables are produced (you would have to trace the code slightly further back to figure it out, but that isn't too onerous IMO).
Because SPSS allows you to run SPSS code from the Python command prompt, you could theoretically knit a document with Python code calling SPSS. I'm not quite sure I see the advantage of this over having more segmented code in separate places, though. Do you really want to read 100 lines of SPSS code that begins with an SQL query, does some transformations, and produces a table and a graph? Wouldn't you rather see the table and graph, and then, if interested in the nitty-gritty, go back to DataPrep.sps, which prepares all of the data, and then Table1.sps, Figure1.sps, etc. to see how each was exactly produced?
I'm trying to write a simple GUI for Wget. I'm looking for advice on how to read the information from the command-line output that Wget generates during a run. I'd like to update that download information in real time to a list box or some equivalent. The GUI will be in Visual Basic. I know programs like WinWget do this, and their source code is available, but I don't know the language it's written in well enough to find what I'm looking for.
tl;dr: I need to update a list box in real time with command-line output.
There are two ways to use the output of one console application as the input of another:
The first way is to use the | operator; for example:
dir |more
The second way is to write the data into a file and process it later.
dir > data.txt
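Neither of those gives you real-time updates in a GUI, though. The usual pattern in any language is to start the process yourself and read its output stream line by line as it arrives; in VB.NET that would be the Process class with RedirectStandardOutput/RedirectStandardError and the OutputDataReceived event. A minimal sketch of the same idea (shown in R only for concreteness; the URL is a placeholder, and note that wget writes progress to stderr, hence the 2>&1):

# Stream wget's progress line by line as it arrives.
# --progress=dot avoids carriage-return progress bars that never end a line.
con <- pipe("wget --progress=dot https://example.com/file.zip 2>&1", open = "r")
while (length(line <- readLines(con, n = 1)) > 0) {
  cat(line, "\n")  # in a GUI, append the line to the list box instead
}
close(con)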