tools for testing vim plugins - unit-testing

I'm looking for tools for testing Vim scripts: either Vim scripts that do
unit/functional testing, or
classes for some other library (e.g. Python's unittest module) that make it convenient to
run Vim with parameters that cause it to run some tests on its environment, and
to determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
purports "To provide vim scripts with a simple unit testing framework and tools"
first and only version (v0.1) was released in 2004
documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
This one also seems pretty experimental, and may not be particularly reliable.
May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
No tagged revisions have been created (ie no releases)
So information from people who are using either of those two existing modules, and/or links to other, more clearly usable options, would be very welcome.
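To make the second idea concrete, here is roughly the kind of harness I have in mind -- a minimal sketch using Python's unittest and subprocess, where tests.vim is a hypothetical script that signals failure with :cquit:

import subprocess
import unittest

class VimScriptTest(unittest.TestCase):
    def test_plugin(self):
        # Run Vim headlessly: -N (nocompatible), -u NONE (skip vimrc),
        # -es (silent Ex mode).  tests.vim is a hypothetical script that
        # calls :cquit on any failure and :qall! on success.
        result = subprocess.run(
            ["vim", "-N", "-u", "NONE", "-es",
             "-c", "source tests.vim", "-c", "qall!"],
            stdin=subprocess.DEVNULL, capture_output=True,
            text=True, timeout=60)
        self.assertEqual(result.returncode, 0, result.stderr)

if __name__ == "__main__":
    unittest.main()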

vader.vim is easy and amazing. It has no external dependencies (it doesn't require Ruby or Rake); it's a pure Vimscript plugin. Here's a fully specified test:
Given (description of test):
  foo bar baz
Do (move around, insert some text):
  2Wiab\<Enter>c
Expect:
  foo bar ab
  cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader

I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem so you'll need at least some familiarity with Ruby but if you put the time in then you get the full power of RSpec in order to write your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.

There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and it displays the failures in the quickfix window. Exception call stacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with RSpec (+ Vimrunner).
Since v2.0 (May 2020), the plugin can also test buffer content -- after it has been altered with mappings/snippets/.... Until then I had been using other plugins; for instance, I used to test my C++ snippets (from lh-cpp) on Travis with Vimrunner + RSpec.
Regarding the syntax, the following, for instance:
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
      \, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new') " work around possible E36
try
  " :SetBufferContent a/file/name.txt
  " or
  SetBufferContent << trim EOF
    1
    3
    2
  EOF
  %sort
  " AssertBufferMatch a/file/NAME.txt
  " or
  AssertBufferMatch << trim EOF
    1
    4
    3
  EOF
finally
  silent bw!
endtry
which results in:
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| ## -1,3 +1,3 ##
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)

I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other XUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has a way to check for Vim-specific features - you can't change buffers and then check expectations on the result - so you will have to write your Vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, return strings instead of using setline(), that sort of thing.

There is vim-vspec.
Your tests are written in vimscript and you can write them using a BDD-style (describe, it, expect, ...)
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]

For functional testing, there's a tool called vroom. It has some limitations and can take seconds to minutes to get through thorough tests for a good-sized project, but it has a nice literate testing/documentation format with Vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.

Another few candidates:
VimBot - Similar to Vimrunner in that it's written in Ruby and allows you to control a Vim instance remotely. It's built to be used with the RSpec unit testing framework.
VimDriver - The same as VimBot, except written in Python instead of Ruby (it started as a direct port of VimBot), so you can use Python's unit testing framework if you're more familiar with that.

Related

How to integrate tcltest test suite in git pre-commit hook

I'm working on a Tcl project whose version control is based on Git. So, in order to produce good-quality code, I set up execution of the tests in a pre-commit hook.
However, even though they are executed (the trace is shown on the command line) and one of the tests fails, Git performs the commit. So I launched the hook manually to check the exit code, and I found that it is zero, which explains why Git does not stop:
$ .git/hooks/pre-commit
++++ FlattenResult-test PASSED
(...)
==== CheckF69F70 FAILED
==== Content of test case:
(...)
==== CheckF69F70 FAILED
$ echo $?
0
(Launching the test script with tclsh also results in $? being 0.)
So my question is about this last line: why is $? equal to 0 when one of the Tcl tests fails? And how can I write a simple pre-commit hook that stops on failure?
I have read and reread the tcltest documentation, but saw no setting or information about this exit code. And I would really rather not have to parse the test output to check whether ERROR or FAILED is present...
Edit: versions
TCL version : 8.5
tcltest version: 2.3.4
This depends on how you run your test suite. Normally you run a file called tests/all.tcl which may look something like this:
package require Tcl 8.6
package require tcltest 2.5
namespace import tcltest::*
configure -testdir [file dirname [file normalize [info script]]] {*}$argv
runAllTests
That final runAllTests returns a boolean indicating success (0) or failure (1). You can use that to generate an exit code by changing the last line to:
exit [runAllTests]
I use this redefinition in some of my test scripts:
# Exit non-zero if any tests fail.
# tcltest's `cleanupTests` resets the numTests array, so capture it first.
proc cleanupTests {} {
    set failed [expr {$::tcltest::numTests(Failed) > 0}]
    uplevel 1 ::tcltest::cleanupTests
    if {$failed} then {exit 1}
}
After some research, I managed to make it work, even though several factors were against me:
I have to use an old Tcl version (8.5) with tcltest version 2.3.4, in which runAllTests returns nothing;
I forgot to write cleanupTests at the end of the test scripts, as the documentation is not really clear about its usage. (It is no clearer now; I just figured out that it is needed if you want your tests to be run by runAllTests, which is really not obvious.)
And here is my solution, mostly based on Hai's DevBits blog post:
all.tcl
package require tcltest
::tcltest::configure (...)

proc ::tcltest::cleanupTestsHook {} {
    variable numTests
    set ::exitCode [expr {$numTests(Total) == 0 || $numTests(Failed) > 0}]
}

::tcltest::runAllTests
exit $exitCode
Some thoughts about it:
I added $numTests(Total) == 0 as a failure condition: this means that no tests were found, which is clearly an erroneous condition;
This doesn't catch exceptions in the configuration of the tests, for instance a source command that points to a non-existent file, revealing a failure in the test scaffolding. This would be caught as an error in other test frameworks (ah, pytest, I miss you!)
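For reference, the Git side of the hook then just needs to propagate that exit status. A minimal sketch of .git/hooks/pre-commit in Python (assuming the suite entry point is tests/all.tcl, as above):

#!/usr/bin/env python3
# Git pre-commit hook: run the tcltest suite and block the commit on failure.
import subprocess
import sys

# tests/all.tcl is assumed to exit non-zero when any test fails (see above).
result = subprocess.run(["tclsh", "tests/all.tcl"])
sys.exit(result.returncode)

Don't forget to make the hook executable (chmod +x .git/hooks/pre-commit); Git aborts the commit whenever the hook exits non-zero.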

When running "go test", is there a way to get a count of how many tests were run?

I'd like to get the number of tests that were run with go test, as kind of a checksum to detect if all the tests are running. Since Go relies on file names and function names to determine what's a test, it's easy to mistype something, which would mean the test would silently be skipped.
I think that the gotestsum tool is close to what you are looking for.
It is a wrapper around go test that prints formatted test output and a summary of the test run.
Default go test:
go test ./...
? github.com/marco-m/timeit/cmd/sleepit [no test files]
ok github.com/marco-m/timeit/cmd/timeit 0.601s
Default gotestsum:
gotestsum
∅ cmd/sleepit
✓ cmd/timeit (cached)
DONE 11 tests in 0.273s <== see here
Check out the documentation and the built-in help; they are well written.
In my experience, gotestsum (and the other tools by the same organization) is good. For me, it is also very important to be able to use the standard Go test package, without other "test frameworks". gotestsum allows me to do so.
On the other hand, to really satisfy your requirement (print the number of declared tests and verify that that number actually ran), you would need something like TAP, the Test Anything Protocol, which works for any programming language:
1..4 <== see here
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet
TAP actually is very nice and simple. I remember there was a Go port, tap-go, but it is now marked as archived.
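If you only need the count and don't mind a small wrapper script, the JSON stream from go test -json (Go 1.10+) can be tallied directly. A rough sketch in Python (a hypothetical helper, not part of gotestsum):

#!/usr/bin/env python3
# Count the tests that `go test` actually ran, using its JSON event stream.
import json
import subprocess

proc = subprocess.run(["go", "test", "-json", "./..."],
                      capture_output=True, text=True)

finished = set()
for line in proc.stdout.splitlines():
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip build errors and other non-JSON lines
    # Each completed test (including subtests) emits one pass/fail/skip
    # event that carries a Test name.
    if event.get("Test") and event.get("Action") in ("pass", "fail", "skip"):
        finished.add((event.get("Package"), event["Test"]))

print(len(finished), "tests finished")

Comparing that number against an expected value gives you the checksum-style safety net the question asks about, although it still cannot see a function that was mistyped so badly that go test never treats it as a test in the first place.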

'Assert Failed' message incomplete using CppUnit and TFS2015

Using: MSTest / CppUnit / TFS2015 / VS2013 / C++
I'm debugging a test that runs fine locally and fails on the build machine (which I don't have access to). This morning I sat down and was presented with almost all of my tests passing -- except one. The test happens to be comparing two rather large strings, and the (usually) very helpful "Assert failed. Expected:<..." message never made it to the "Actual:<..." part because the string was too long. It's just a simple: Assert::AreEqual(expectedStr, actualStr);.
Right now my workaround is to write a file to a network path that I have access to from within the test (which is already an integration type test luckily -- but still...). Oh -- and did I mention that I have to run a build that will take 40 minutes even if I set Clean Workspace to None in my build process parameters to even get the test to run? That's a whole other question for another post =/.
Is there a way to look at the full results of a test assertion failure (without, for example, a string comparison being cut off)? A test run log file maybe?
According to your description, you want to see the full assertion failure messages in C++. Checking this case may help you:
"
A common solution for this problem is to create an assert macro. For an example see this question. The final form of their macro in that answer was the following:
#define dbgassert(EX,...) \
(void)((EX) || (realdbgassert (#EX, __FILE__, __LINE__, ## __VA_ARGS__),0))
In your case, the realdbgassert would be a function that prints any relevant information to stderr or other output console, and then calls the assert function itself. Depending on how much information you want, you could also do a stack dump, or log any other relevant information that will help you identify the issue. However, it can be as simple as passing a printf-esque format string, and relevant parameter value(s).
Note that if your compiler doesn't support variadic macros, you can create macros that take a specific number of parameters instead. This is slightly more cumbersome, but an option if your compiler lacks the support, e.g.:
#define dbgassert0(EX) \ ...
#define dbgassert1(EX,p0) \ ...
#define dbgassert2(EX,p0,p1) \ ...
"

How to unit test a command line interface

I've written a command line tool that I want to test (I'm not looking to run unit tests from the command line). I want to map a specific set of input options to a specific output. I haven't been able to find any existing tools for this. The application is just a binary and could be written in any language, but it accepts POSIX options and writes to standard output.
Something along the lines of:
For each known set of input options:
Launch application with specified input.
Pipe output to a file.
Diff output to stored (desired) output.
If diff is not empty, record error.
(Btw, is this what you call an integration test rather than a unit test?)
Edit: I know how I would go about writing my own tool for this, I don't need help with the code. What I want to learn is if this has already been done.
DejaGnu is a mature and somewhat standard framework for writing test suites for CLI programs.
Here is a sample test taken from this tutorial:
# send a string to the running program being tested:
send "echo Hello world!\n"
# inspect the output and determine whether the test passes or fails:
expect {
  -re "Hello world.*$prompt $" {
    pass "Echo test"
  }
  -re "$prompt $" {
    fail "Echo test"
  }
  timeout {
    fail "(timeout) Echo test"
  }
}
Using a well-established framework like this is probably going to be better in the long run than anything you can come up with yourself, unless your needs are very simple.
You are looking for BATS (Bash Automated Testing System):
https://github.com/bats-core/bats-core
From the docs:
example.bats contains
#!/usr/bin/env bats

@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}

@test "addition using dc" {
  result="$(echo 2 2+p | dc)"
  [ "$result" -eq 4 ]
}
$ bats example.bats
✓ addition using bc
✓ addition using dc
2 tests, 0 failures
Well, I think every language should have a way to execute an external process.
In C#, you could do something like:
var p = new Process();
p.StartInfo = new ProcessStartInfo(@"C:\file-to-execute.exe");
... //You can set parameters here, etc.
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.Start();
//To read the standard output:
var output = p.StandardOutput.ReadToEnd();
I have never had to write to the standard input, but I believe it can be done by accessing p.StandardInput as well. The idea is to treat both inputs as Stream objects, because that's what they are.
In Python there is the subprocess module. According to its documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
I had to do the same when writing unit tests for the code generation part of a compiler I wrote some months ago: Writing unit tests in my compiler (which generates IL)
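To sketch the loop from the question with Python's subprocess module (the tool name, options, and expected-output paths below are placeholders):

import subprocess
from pathlib import Path

# Each case: a list of POSIX options and a file holding the expected stdout.
CASES = [
    (["--help"], Path("expected/help.txt")),
    (["-n", "3", "input.txt"], Path("expected/n3.txt")),
]

def run_cases(tool="./mytool"):
    failures = 0
    for options, expected_file in CASES:
        result = subprocess.run([tool, *options],
                                capture_output=True, text=True)
        if result.stdout != expected_file.read_text():
            failures += 1
            print("FAIL", options, "output differs from", expected_file)
    return failures

if __name__ == "__main__":
    raise SystemExit(run_cases())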
We wrote should, a single-file Python program to test any CLI tool. The default usage is to check that a line of the output contains some pattern. From the docs:
# A .should file launches any command it encounters.
echo "hello, world"
# Lines containing a `:` are test lines.
# The `test expression` is what is found at the right of the `:`.
# Here 'world' should be found on stdout, at least in one line.
:world
# What is at the left of the `:` are modifiers.
# One can specify the exact number of lines where the test expression has to appear.
# 'moon' should not be found on stdout.
0:moon
Should can check occurrence counts, look for regular expressions, use variables, filter tests, parse JSON data, and check exit codes.
Sure, it's been done literally thousands of times. But writing a tool to run simple shell scripts or batch files like what you propose is a trivial task, hardly worth trying to turn into a generic tool.

What core packages should a professional R developer have, and why? [closed]

What are the specific utilities that can help R developers code and debug more efficiently?
I'm looking to set up an R development environment, and would like an overview of the tools that would be useful to me in crafting a unit testing infrastructure with code coverage, debugging, generation of package files and help files and maybe UML modeling.
Note: Please justify your answers with reasons and examples based on your experience with the tools you recommend. Don't just link.
Related
Recommendations for Windows text editor for R
What IDEs are available for R in Linux?
Tools Commonly used to Program in R
I have written way too many packages, so to keep things manageable I've invested a lot of time in infrastructure packages: packages that help me make my code more robust and help make it easier for others to use. These include:
roxygen2 (with Manuel Eugster and Peter Danenberg), which allows you to keep documentation next to the function it documents, which makes it much more likely that I'll keep it up to date. roxygen2 also has a number of new features designed to minimise documentation duplication: templates (@template), parameter inheritance (@inheritParams), and function families (@family), to name a few.
testthat automates the testing of my code. This is becoming more and more important as I have less and less time to code: automated tests remember how the function should work, even when I don't.
devtools automates many common development tasks (as Andrie mentioned). The eventual goal for devtools is for it to act like an R CMD check that runs continuously in the background and notifies you the instant something goes wrong.
profr, particularly the unreleased interactive explorer, makes it easy for me to find bottlenecks in my code.
helpr (with Barret Schloerke), which will soon power http://had.co.nz/ggplot2, provides an elegant html interface to R documentation.
Useful R functions:
apropos: I'm always forgetting the names of useful functions, and apropos helps me find them, even if I only remember a fragment
Outside of R:
I use TextMate to edit R (and other) files, but I don't think the choice of editor is really that important. Pick one and learn all its nooks and crannies.
Spend some time learning the command line. Anything you can do to automate any part of your workflow will pay off in the long run. Running R from the command line leads to a natural process where each project has its own instance of R; I often have 2-5 instances of R running at a time.
Use version control. I like git and github. Again, it doesn't matter exactly which system you use, but master it!
Things I wish R had:
code coverage tools
a dependency management framework like rake or jake
better memory profiling tools
a metadata standard for describing data frames (and other data sources)
better tools for describing and rendering tables in a variety of output formats
a package for markdown rendering
As I recall this has been asked before and my answer remains the same: Emacs.
Emacs can
do just about anything you want to do with R thanks to ESS, including
code execution of various snippets (line, region, function, buffer, ...)
inspection of workspaces,
display of variables,
multiple R sessions and easy switching between them
transcript mode for re-running (parts of) previous sessions
access to the help system
and much more
handles LaTeX with similar ease via the AUCTeX mode, which helps with Sweave for R
has modes for whichever other programming languages you combine with R, be it C/C++, Python, shell, SQL, ... covering automatic indentation and colour highlighting
can access databases with sql-* mode
can work remotely with tramp mode: access remote files as if they were local (uses ssh/scp)
can be run as a daemon, which makes it stateful so you can reconnect to the same Emacs session, be it on the workstation under X11 (or equivalent) or remotely via ssh (with or without X11) or screen.
has org-mode, which together with babel, provides a powerful sweave alternative as discussed in this paper discussing workflow apps for (social) scientists
can run a shell via M-x shell and/or M-x eshell, has nice directory access functionality with dired mode, has ssh mode for remote access
interfaces all source code repositories with ease via specific modes (eg psvn for svn)
is cross-platform just like R so you have similar user-interface experiences on all relevant operating systems
is widely used, widely available and under active development for both code and extensions, see the emacswiki.org site for the latter
<tongueInCheek>is not Eclipse and does not require Java</tongueInCheek>
You can of course combine it with whichever CRAN packages you like: RUnit or testthat, the different profiling support packages, the debug package, ...
Additional tools that are useful:
R CMD check really is your friend as this is what CRAN uses to decide whether you are "in or out"; use it and trust it
the tests/ directory can offer a simplified version of unit tests by saving output to be compared against (from a prior R CMD check run); this is useful, but proper unit tests are better
particularly for packages with object code, I prefer to launch fresh R sessions and littler makes that easy: r -lfoo -e'bar(1, "ab")' starts an R session, loads the foo package and evaluates the given expression (here a function bar() with two arguments). This, combined with R CMD INSTALL, provides a full test cycle.
Knowledge of, and ability to use, the basic R debugging tools is an essential first step in learning to quickly debug R code. If you know how to use the basic tools you can debug code anywhere without having to need all the extra tools provided in add-on packages.
traceback() allows you to see the call stack leading to an error
foo <- function(x) {
    d <- bar(x)
    x[1]
}

bar <- function(x) {
    stopifnot(is.matrix(x))
    dim(x)
}
foo(1:10)
traceback()
yields:
> foo(1:10)
Error: is.matrix(x) is not TRUE
> traceback()
4: stop(paste(ch, " is not ", if (length(r) > 1L) "all ", "TRUE",
sep = ""), call. = FALSE)
3: stopifnot(is.matrix(x))
2: bar(x)
1: foo(1:10)
So we can clearly see that the error happened in function bar(); we've narrowed down the scope of the bug hunt. But what if the code generates warnings, not errors? That can be handled by turning warnings into errors via the warn option:
options(warn = 2)
will turn warnings into errors. You can then use traceback() to track them down.
Linked to this is getting R to recover from an error in the code so you can debug what went wrong. options(error = recover) will drop us into a debugger frame whenever an error is raised:
> options(error = recover)
> foo(1:10)
Error: is.matrix(x) is not TRUE
Enter a frame number, or 0 to exit
1: foo(1:10)
2: bar(x)
3: stopifnot(is.matrix(x))
Selection: 2
Called from: bar(x)
Browse[1]> x
[1] 1 2 3 4 5 6 7 8 9 10
Browse[1]> is.matrix(x)
[1] FALSE
You see we can drop into each frame on the call stack and see how the functions were called, what the arguments are etc. In the above example, we see that bar() was passed a vector not a matrix, hence the error. options(error = NULL) resets this behaviour to normal.
Another key function is trace(), which allows you to insert debugging calls into an existing function. The benefit of this is that you can tell R to debug from a particular line in the source:
> x <- 1:10; y <- rnorm(10)
> trace(lm, tracer = browser, at = 10) ## debug from line 10 of the source
Tracing function "lm" in package "stats"
[1] "lm"
> lm(y ~ x)
Tracing lm(y ~ x) step 10
Called from: eval(expr, envir, enclos)
Browse[1]> n ## must press n <return> to get the next line step
debug: mf <- eval(mf, parent.frame())
Browse[2]>
debug: if (method == "model.frame") return(mf) else if (method != "qr") warning(gettextf("method = '%s' is not supported. Using 'qr'",
method), domain = NA)
Browse[2]>
debug: if (method != "qr") warning(gettextf("method = '%s' is not supported. Using 'qr'",
method), domain = NA)
Browse[2]>
debug: NULL
Browse[2]> Q
> untrace(lm)
Untracing function "lm" in package "stats"
This allows you to insert the debugging calls at the right point in the code without having to step through the preceding function calls.
If you want to step through a function as it is executing, then debug(foo) will turn on the debugger for function foo(), whilst undebug(foo) will turn off the debugger.
A key point about these options is that I haven't needed to modify/edit any source code to insert debugging calls etc. I can try things out and see what the problem is directly from the session where the error has occurred.
For a different take on debugging in R, see Mark Bravington's debug package on CRAN