I am following the hints given for this question, but I'm impatient and would like to run my tests more quickly, without waiting for the 30+ checks that R CMD check src performs before it gets to the tests.
What I thought I could do was add a --standalone option to the doRUnit.R suggested on that R-wiki page, so that I could run the unit tests independently of R CMD check.
I added these lines to the script:
opt <- list(standalone=NULL)
if(require("getopt", quietly=TRUE)) {
    ## path to unit tests may be given on command line, in which case
    ## we also want to move the cwd to this script
    opt <- getopt(matrix(c('standalone', 's', 0, "logical"),
                         ncol=4, byrow=TRUE))
    if(!is.null(opt$standalone)) {
        ## switch the cwd to the dir of this script
        args <- commandArgs()
        script.name <- substring(args[substring(args, 1, 7)=="--file="], 8, 1000)
        if(!is.null(script.name))
            setwd(dirname(script.name))
    }
}
With this change, the script finds the test.*\.R files regardless of the directory from which I invoke it.
The remaining problem is that the doRUnit.R script loads the installed library; it does not source() the files that make up the library.
Assuming that I want to load each and every file in the R directory, how would I do that?
And assuming you have a better testing scheme (one that satisfies the requirements "quick" and "uninstalled"), what is it?
You may have to manually loop over the files in the R directory and source() them, maybe with something like source(dir("/some/Path", pattern="*.R", full.names=TRUE)).
But I have the feeling that R CMD INSTALL does a little more. You may be better off working from the installed code. And just running your unit tests directly, as you do and as the wiki page suggests, is already pretty good. So no better scheme from me. But keep us posted.
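If you want to try that loop quickly from the shell, here is a minimal sketch; it assumes you run it from the package root with the sources under R/ (adjust the path to your layout), and note that source() has to be called once per file:

# call source() once per file under R/ (run from the package root; the path is an assumption)
Rscript -e 'invisible(lapply(dir("R", pattern="\\.R$", full.names=TRUE), source))'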
Edit: Also note that R 2.10.1 gives us new options to accelerate R CMD INSTALL:
2.10.1 NEW FEATURES

R CMD INSTALL has new options --no-R, --no-libs, --no-data, --no-help, --no-demo, --no-exec, and --no-inst to suppress installation of the specified part of the package. These are intended for special purposes (e.g. building a database of help pages without fully installing all packages).
That should help too.
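For example, a quick test install could skip the parts you don't need (the package directory name mypkg is just a placeholder):

# skip help pages, demos and inst/ for a faster test install
R CMD INSTALL --no-help --no-demo --no-inst mypkg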
Further additions/corrections to the script. I can now invoke it as doRUnit.R --standalone, or have it invoked by R CMD check:
if(!is.null(script.name)) {
    setwd(dirname(script.name))
    path <- '../inst/RUnit/'
}

...
if (is.null(opt$standalone)) {
    cat("\nRunning unit tests of installed library\n")
    library(package=pkg, character.only=TRUE)
} else {
    cat("\nRunning unit tests of uninstalled library\n")
    ## source() takes one file at a time, so loop over the files in ../R/
    invisible(sapply(dir("../R/", pattern="\\.R$", full.names=TRUE), source))
}
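For reference, the two invocations look roughly like this (the paths are only examples):

# standalone: Rscript sets --file=..., which the script uses to locate itself
Rscript doRUnit.R --standalone

# or let it be run as part of the full package check
R CMD check src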
Related
I am writing an R package that includes C++ code, written by an external developer who is unable to help out.
The C++ code currently appears to have a memory leak: R's memory usage keeps increasing when the C++ code is run, and is not released until R is quit. It is my task to neutralize this leak.
Because I am using Windows and calling the C++ code through R, it is not clear how best to track down this leak. My cunning plan was to use valgrind in a Linux environment on Travis CI, but this finds no problems.
What is the best way to track down memory leaks?
I have had partial success by adding a separate call to R with valgrind to my .travis.yml.
addons:
  apt:
    packages: valgrind
after_success:
  - R -d "valgrind --leak-check=full --track-origins=yes" --vanilla < tests/testthat/valgrind.R
Ideally I'd have run tests/testthat.R, but because R -d runs interactively, I've had to create a separate file, tests/testthat/valgrind.R, for the tests:
library(testthat)

# Load the package
library(pkgload)
load_all()

# This may run during build in the root working directory,
# then again with R -d from the tests working directory.
if (dir.exists('testthat')) setwd('testthat/tests')

testFiles <- list.files(pattern = 'test\\-.*\\.R', full.names = TRUE)

# Test files using expect_doppelganger will fail in interactive mode.
# Remove them.
lapply(testFiles[-c(3, 6)], source)
This doesn't feel like an optimal solution... but it is sufficient for my immediate needs.
I'm new to running SonarQube scans and I get this error message in the log in Jenkins:
16:17:39 16:17:36.926 ERROR - The only way to get an accurate analysis of your C/C++/Objective-C project is by using the SonarSource build-wrapper. If for any reason, the use of the build-wrapper is not possible on your project, you can bypass it with the help of the "sonar.cfamily.build-wrapper-output.bypass=true" property. By using that property, you'll switch to an "at best" mode that could result in false-positives and false-negatives.
Can someone please advise where I can find and run this SonarSource build-wrapper?
Thanks a lot for your help!
To solve this issue, download the Build Wrapper directly from your SonarQube Server, so that its version perfectly matches your version of the plugin:
The Build Wrapper for Linux can be downloaded from the URL
http://localhost:9000/static/cpp/build-wrapper-linux-x86.zip
Unzip the downloaded Build Wrapper and add it to your PATH, because that is simply more convenient:
export PATH=$PATH:/path/where/you/unzip
Once that is done, run the commands below.
build-wrapper-linux-x86-64 --out-dir <dir-name> <build-command>
build-wrapper-linux-x86-64 --out-dir build_output make clean all
Once all of this is done, add the following line to your sonar-project.properties file. Note that <dir-name> is the same directory we used in the previous command.
sonar.cfamily.build-wrapper-output=<dir-name>
and then you can run the sonar scanner command.
sonar-scanner
This will run the analysis against your code. For more details, you can check this link.
I contacted support; it turns out this was caused by a missing sonar.cfamily.build-wrapper-output argument in the scanner begin command.
Build wrapper downloads:
Linux: https://sonarcloud.io/static/cpp/build-wrapper-linux-x86.zip
macOS: https://sonarcloud.io/static/cpp/build-wrapper-macosx-x86.zip
Windows: https://sonarcloud.io/static/cpp/build-wrapper-win-x86.zip
Some links covering how to run the build wrapper:
https://docs.sonarqube.org/latest/analysis/languages/cfamily/
https://blog.sonarsource.com/with-great-power-comes-great-configuration/
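Putting the pieces from the answers above together, the flow on a build agent ends up looking something like this (the install path and the output directory name are just examples):

export PATH=$PATH:/opt/build-wrapper-linux-x86    # wherever you unzipped the build wrapper
build-wrapper-linux-x86-64 --out-dir build_output make clean all
sonar-scanner -Dsonar.cfamily.build-wrapper-output=build_output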
I have a C++ project in NetBeans using generated Makefiles. I set up a job in Jenkins (continuous integration server) to run the tests configured in NetBeans. Now Jenkins runs the tests and captures their output, but it considers the build successful even when a test fails.
I'm using the Boost Unit Test Framework which of course returns a non-zero code on failure as any proper *nix program would. So I wondered why Jenkins didn't understand when a test failed. Then I found this in the generated Makefile-Debug.mk from NetBeans:
# Run Test Targets
.test-conf:
	@if [ "${TEST}" = "" ]; \
	then \
	    ${TESTDIR}/TestFiles/f1 || true; \
	    ${TESTDIR}/TestFiles/f2 || true; \
	else \
	    ./${TEST} || true; \
	fi
So it seems like they deliberately ignore the return value of all tests. But this doesn't make sense, because then what are your tests testing?
I tried to find a setting in NetBeans to say "Let failing tests break the build" but didn't find anything. I also tried to find a bug in the NetBeans tracker for this but didn't see any in my brief search.
Is there any other reasonable solution? I want Jenkins to fail my build if any test fails. Right now it only fails if a test fails to build, but if it builds and fails to run, success is reported.
It turns out that NetBeans (up to version 8 at least) cannot support this. What I did to work around it is to do make build-tests rather than make test in Jenkins, followed by a loop over all the generated test files (TestFiles/f* in the build directory) to run them.
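The loop itself can be as simple as the sketch below (run as a shell step in Jenkins); the exact path to the test binaries depends on your build configuration, so treat it as a placeholder:

# run every generated test binary and fail the build if any of them fails
status=0
for t in build/Debug/*/tests/TestFiles/f*; do    # adjust to your configuration/platform
    echo "Running $t"
    "$t" || status=1
done
exit $status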
This is a major shortcoming in NetBeans' Makefile generator, as it is fundamentally incompatible with running tests outside of NetBeans itself. Thanks to @HEKTO for the link which led me to this page about writing NetBeans testing plugins: http://wiki.netbeans.org/CND69UnitTestsPluginTutotial
What that page tells you is basically that NetBeans relies on parsing the textual output of tests to determine success or failure. What it doesn't tell you is that NetBeans generates defective Makefiles which ignore critical failures in tests, including aborts, segmentation faults, assertion failures, uncaught exceptions, etc. It assumes you will use a test framework that it knows about (which is only CppUnit), or manually write magic strings at the right moments in your test programs.
I thought about taking the time to write a NetBeans unit test plugin for the Boost Unit Test Framework, but it won't help Jenkins at all: the plugins are only used when tests are run inside NetBeans itself, to display pretty status indicators.
I recently found out about CMake testing possibilities. I wrote several test-clients using it, they work ok, but to perform tests I need to:
cmake .. -> make -> run my program in the background or in another terminal -> make test (which runs all test clients / test scenarios)
Let's say I want the make test command not only to run the tests, but also to run the executable that is being tested in the background and kill it after the tests complete. How can I pass a bash command via CMakeLists? I haven't found a straightforward way to achieve this yet.
You can do so by using ADD_CUSTOM_COMMAND. (CMake ADD_CUSTOM_COMMAND docs)
There is not a way to run a process in the background from CTest. To handle this for projects like ParaView that use MPI, we write a C driver program that launches the processes and performs the test/tests. Basically, each CTest test needs to be something that runs and returns a value. However, there is of course nothing keeping that test from starting and stopping as many processes as it needs.
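A low-tech variant of that driver idea is a small wrapper script that the test runs, with CMake registering the wrapper itself via add_test(); all of the names below are placeholders:

#!/bin/sh
# Start the program under test in the background, run the test client against it,
# then shut the background process down and report the client's exit status to CTest.
./my_server &                   # the executable that must be running during the tests
server_pid=$!
./my_test_client                # the test client the wrapper runs on CTest's behalf
result=$?
kill "$server_pid" 2>/dev/null
wait "$server_pid" 2>/dev/null
exit $result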
I am interested in generating a C++ header using Apache Avro's code generation tool (i.e. the python script). According to the documentation it should be fairly easy to do, but I don't usually use python, so things look kinda strange to me.
The instructions state:
To generate the code is a two step process:
precompile < imaginary > imaginary.flat
The precompile step converts the schema into an intermediate format that is used by the code generator. This intermediate file is just a text-based representation of the schema, flattened by a depth-first-traverse of the tree structure of the schema types.
python scripts/gen-cppcode.py --input=example.flat --output=example.hh --namespace=Math
This tells the code generator to read your flattened schema as its input, and generate a C++ header file in example.hh. The optional argument namespace will put the objects in that namespace...
My Issue (no, I can't see a doctor or use a cream for it):
I don't see anything that explains in detail how to precompile. The documentation makes it seem as if I can just type "precompile" at the command prompt and supply the command-line arguments, and things will magically work, but precompile is not a valid Windows command. So what's the proper way to precompile on Windows? If anybody knows how to do it, then PLEASE let me know!
I also tried to run the gen-cppcode.py script, but it gets an error in line 316 (which, I suspect, may be happening because I didn't precompile the schema):
def doEnum(args):
    structDef = enumTemplate;
    typename = args[1]
    structDef = structDef.replace('$name$', typename)
    end = False
    symbols = '';
    firstsymbol = '';
    while not end:
        line = getNextLine()
        if line[0] == 'end': end = True
        elif line[0] == 'name':
            if symbols == '':
                firstsymbol = line[1]
            else:
                symbols += ', '
            symbols += line[1]
        else: print "error"  # <-- Syntax Error: invalid syntax
    structDef = structDef.replace('$enumsymbols$', symbols);
    structDef = structDef.replace('$firstsymbol$', firstsymbol);
    addStruct(typename, structDef)
    return (typename, typename)
About the only way I figured out how to do this is to:
Download VirtualBox.
Install Ubuntu (or another distro).
Download Avro.
Install cmake.
Install the C++ compilers (build-essential).
Install boost, flex, bison (sudo apt-get install boost flex bison); btw, you will specifically need these boost libraries:
-- regex
-- filesystem
-- system
-- program_options
Build Avro:
$ tar xf avro-cpp-1.5.1.tar.gz
$ cd avro-cpp-1.5.1
$ cmake -G "Unix Makefiles"
$ make -j3
$ build/precompile file.input file.flatoutput
You can now generate a header file (still in the terminal window of the VM):
python scripts/gen-cppcode.py --input=example.flat --output=example.hh
Note that even after you generate the C++ file, you will still be unable to build with it on Windows (even if you have the right dependency includes to avro-cpp-1.5.1/api). Avro has dependencies on GNU libraries (such as sys/uio.h), and I'm not sure how to resolve them yet.
I found that Python version 2 is required to run gen-cppcode.py:
https://www.python.org/downloads/
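If both Python versions are installed, you can call the version 2 interpreter explicitly (the python2 command name depends on how Python is installed on your system):

# run the generator with Python 2 explicitly; the arguments are the same as above
python2 scripts/gen-cppcode.py --input=example.flat --output=example.hh --namespace=Math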