I've written a command line tool that I want to test (I'm not looking to run unit tests from command line). I want to map a specific set of input options to a specific output. I haven't been able to find any existing tools for this. The application is just a binary and could be written in any language but it accepts POSIX options and writes to standard output.
Something along the lines of:
For each known set of input options:
Launch application with specified input.
Pipe output to a file.
Diff output to stored (desired) output.
If diff is not empty, record error.
(Btw, is this what you call an integration test rather than a unit test?)
Edit: I know how I would go about writing my own tool for this, I don't need help with the code. What I want to learn is if this has already been done.
DejaGnu is a mature and somewhat standard framework for writing test suites for CLI programs.
Here is a sample test taken from this tutorial:
# send a string to the running program being tested:
send "echo Hello world!\n"
# inspect the output and determine whether the test passes or fails:
expect {
    -re "Hello world.*$prompt $" {
        pass "Echo test"
    }
    -re "$prompt $" {
        fail "Echo test"
    }
    timeout {
        fail "(timeout) Echo test"
    }
}
Using a well-established framework like this is probably going to be better in the long run than anything you can come up with yourself, unless your needs are very simple.
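If you would rather drive this send/expect style of testing from Python rather than Tcl, the pexpect library provides similar primitives. Here is a rough sketch of the same Hello-world check (just an illustration, not DejaGnu itself; an interactive shell stands in for the program under test):

import pexpect

# Spawn an interactive shell as a stand-in for the program under test.
child = pexpect.spawn("/bin/sh", encoding="utf-8")
child.sendline("echo Hello world!")

# expect() returns the index of whichever pattern matched first.
index = child.expect(["Hello world", pexpect.TIMEOUT], timeout=5)
print("Echo test:", "pass" if index == 0 else "fail (timeout)")
child.close()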
You are looking for BATS (Bash Automated Testing System):
https://github.com/bats-core/bats-core
From the docs:
example.bats contains
#!/usr/bin/env bats
#test "addition using bc" {
result="$(echo 2+2 | bc)"
[ "$result" -eq 4 ]
}
#test "addition using dc" {
result="$(echo 2 2+p | dc)"
[ "$result" -eq 4 ]
}
$ bats example.bats
✓ addition using bc
✓ addition using dc
2 tests, 0 failures
Well, I think every language should have a way to execute an external process.
In C#, you could do something like:
var p = new Process();
p.StartInfo = new ProcessStartInfo(@"C:\file-to-execute.exe");
// ... you can set parameters here, etc.
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.Start();
//To read the standard output:
var output = p.StandardOutput.ReadToEnd();
I have never had to write to the standard input, but I believe it can be done by accessing p.StandardInput as well. The idea is to treat both inputs as Stream objects, because that's what they are.
In Python there is the subprocess module. According to its documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
I had to do the same when writing unit tests for the code generation part of a compiler I wrote some months ago: Writing unit tests in my compiler (which generates IL)
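For instance, here is a minimal Python sketch of that idea (the binary name, option, and input are made up): treat the child's stdin and stdout as pipes, write to one and read the other.

import subprocess

# Hypothetical binary; replace with the program under test.
p = subprocess.Popen(
    ["./my-tool", "--some-option"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Write to the child's stdin, then read all of its stdout.
output, _ = p.communicate(input="some input\n")
print(output)
print("exit code:", p.returncode)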
We wrote should, a single-file Python program to test any CLI tool. The default usage is to check that a line of the output contains some pattern. From the docs:
# A .should file launches any command it encounters.
echo "hello, world"
# Lines containing a `:` are test lines.
# The `test expression` is what is found at the right of the `:`.
# Here 'world' should be found on stdout, at least in one line.
:world
# What is at the left of the `:` are modifiers.
# One can specify the exact number of lines where the test expression has to appear.
# 'moon' should not be found on stdout.
0:moon
Should can check occurrence counts, look for regular expressions, use variables, filter tests, parse JSON data, and check exit codes.
Sure, it's been done literally thousands of times. But writing a tool to run simple shell scripts or batch files like what you propose is a trivial task, hardly worth trying to turn into a generic tool.
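For what it's worth, a minimal sketch of the loop described in the question, in Python (the tool name, option sets, and expected-output files are all hypothetical):

import subprocess

# Map each known set of input options to a file with the desired output.
cases = {
    ("--verbose",): "expected/verbose.txt",
    ("-o", "out.txt"): "expected/output.txt",
}

failures = []
for options, expected_file in cases.items():
    result = subprocess.run(["./my-tool", *options],
                            capture_output=True, text=True)
    with open(expected_file) as f:
        expected = f.read()
    # Record an error whenever actual and desired output differ.
    if result.stdout != expected:
        failures.append(options)

for options in failures:
    print("FAIL:", " ".join(options))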
I'd like to get the number of tests that were run with go test, as kind of a checksum to detect if all the tests are running. Since Go relies on filenames and method names to determine what's a test, it's easy to mistype something, which would mean the test would silently be skipped.
I think that the gotestsum tool is close to what you are looking for.
It is a wrapper around go test that prints formatted test output and a summary of the test run.
Default go test:
go test ./...
? github.com/marco-m/timeit/cmd/sleepit [no test files]
ok github.com/marco-m/timeit/cmd/timeit 0.601s
Default gotestsum:
gotestsum
∅ cmd/sleepit
✓ cmd/timeit (cached)
DONE 11 tests in 0.273s <== see here
Check out the documentation and the built-in help; they are well written.
In my experience, gotestsum (and the other tools by the same organization) is good. For me, it is also very important to be able to use the standard Go test package, without other "test frameworks". gotestsum allows me to do so.
On the other hand, to really satisfy your requirement (print the number of declared tests and verify that that number is actually run), you would need something like TAP, the Test Anything Protocol, which works for any programming language:
1..4 <== see here
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet
TAP actually is very nice and simple. I remember there was a Go port, tap-go, but it is now marked as archived.
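To illustrate why the plan line matters, here is a rough Python sketch (not a real TAP consumer) that compares the 1..N plan against the number of test results actually seen:

import re
import sys

# Read TAP output on stdin and compare the plan against reality.
plan = None
seen = 0
for line in sys.stdin:
    m = re.match(r"1\.\.(\d+)", line)
    if m:
        plan = int(m.group(1))
    elif re.match(r"(not )?ok\b", line):
        seen += 1

if plan is None:
    sys.exit("no plan line found")
if seen != plan:
    sys.exit("planned %d tests, saw %d" % (plan, seen))
print("all %d tests ran" % plan)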
I'm new to the Python world. I have four Python scripts, and during the testing phase I have to run each of them from a different console instance. My question is: is it possible to create a single Python script that executes the four scripts at the same time?
I'm working with a publisher/subscriber architecture, so I have one publisher and three subscribers.
Personally, I wouldn't run them from Python. Create a batch file (Windows) or a bash script (Linux) and run all four of them as background processes so that they don't have to wait for each other to complete.
In Python you can try something like this:
import os

# list with the names of your scripts
scripts = ["script_1.py", "script_2.py", "script_3.py"]

for script in scripts:
    print("Executing", script)
    # string with the script parameters
    # (in this case they are identical for all of them)
    script_parameters = "-p parameters"
    # build the command as if you had typed it in the terminal
    my_command = "python " + script + " " + script_parameters
    print(my_command)
    # run the command on the operating system; note that os.system
    # blocks until each script finishes, so they run one after another
    os.system(my_command)
I do not know if this is what you were looking for, but I hope you find it useful.
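Since the goal is to run the scripts at the same time, here is a sketch using subprocess.Popen instead (the script names are the placeholders from above), which starts all of them without waiting for each to finish:

import subprocess

scripts = ["script_1.py", "script_2.py", "script_3.py"]

# Start every script; Popen returns immediately, so they run concurrently.
processes = [subprocess.Popen(["python", s, "-p", "parameters"])
             for s in scripts]

# Optionally wait for all of them at the end and report exit codes.
for proc in processes:
    proc.wait()
    print(proc.args, "exited with", proc.returncode)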
I am running a Perl script that runs a Linux system command, namely service --status-all, in order to list services which have stopped and services which are currently running. The code that I am using runs the system command and then chomps the input into an array. After this, the code is supposed to check whether the service that the user prompts for is running or has stopped; this is done using a regex that does yield the correct result. The problem lies in the additional information that the program lists, namely lines such as dnsdomainname: Unknown host, and it sometimes prints several lines of the same result.
The code that I am running is as follows:
use warnings;
use strict;

# Looping variables
my $service_query = 1;
# Command
my $command_3 = "service --status-all";
# Input variables
my $service = 0;

while ($service_query == 1) {
    print "Please choose the service that you wish to analyze (Named service):\n";
    $service = <>;
    if ($service =~ /^[a-zA-Z0-9.:_-]+$/) {
        $service_query = 0;
    }
    else {
        print "Argument not allowed.\n";
    }
}

chomp(my @service_data_1 = qx/$command_3/);
chomp($service);

foreach my $line4 (@service_data_1) {    # Filter output according to "$service".
    if ($line4 =~ /$service\s\Spid\s[0-9]+\S\s(\S+)/) {
        print "1 $service is running\n";
    }
    elsif ($line4 =~ /$service\sis\s(\S+)/) {
        print "0 $service has stopped\n";
    }
}
The system command yields results such as these:
sandbox is stopped
rpc.svcgssd is stopped
rpcbind (pid 1284) is running...
saslauthd is stopped
openssh-daemon (pid 1593) is running...
wpa_supplicant (pid 1444) is running...
What I want to be listed:
User inputs sandbox, program should print: 0 sandbox has stopped
What I actually get:
User inputs sandbox, program prints:
dnsdomainname: Unknown host
0 sandbox has stopped
I appreciate any help on the matter, thank you.
Most likely the service command clears the environment and does other things to ensure consistent results from what are essentially shell scripts in your service.d directory. In fact, service itself may be a shell script (it usually was when I used Linux long ago). You can inspect it to help figure out why you are seeing dnsdomainname: Unknown host (it may be due to an environment variable like $HOSTNAME being cleared, or a missing name in /etc/hosts that normally wouldn't be an issue for the service command, for example).
Following on @choroba's query, the output you see on your console when you run service --status-all in your shell might not be the same output you get from qx// in your Perl script, because of the way service itself handles STDOUT and STDERR. In particular, messages such as dnsdomainname: Unknown host typically go to STDERR, which qx// does not capture, so they leak straight to your terminal; redirecting STDERR (for example qx/$command_3 2>\/dev\/null/) should make them disappear.
From perldoc -f qx (cf. @TLP's remarks):
To read both a command's STDOUT and its STDERR separately, it's
easiest to redirect them separately to files, and then read from
those files when the program is done.
Another approach, if you are willing to use a CPAN module, is Capture::Tiny which is a great tool for grabbing what you want from perl code or external commands.
I'm looking for some tools for testing vim scripts. Either vim scripts that
do unit/functional testing, or
classes for some other library (eg Python's unittest module) that make it convenient to
run vim with parameters that cause it to do some tests on its environment, and
determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
purports "To provide vim scripts with a simple unit testing framework and tools"
first and only version (v0.1) was released in 2004
documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
This one also seems pretty experimental, and may not be particularly reliable.
May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
No tagged revisions have been created (ie no releases)
So information from people who are using one of those two existing modules, and/or links to other, more clearly usable options, is very welcome.
vader.vim is easy, and amazing. It has no external dependencies (doesn't require ruby/rake), it's a pure vimscript plugin. Here's a fully specified test:
Given (description of test):
  foo bar baz

Do (move around, insert some text):
  2Wiab\<Enter>c

Expect:
  foo bar ab
  cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader
I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem so you'll need at least some familiarity with Ruby but if you put the time in then you get the full power of RSpec in order to write your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.
There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and displaying the failures in the quickfix window. Exception callstacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with RSpec (+Vimrunner).
Since v2.0 (May 2020), the plugin can also test buffer content -- after it has been altered with mappings/snippets/.... Until then, I had been using other plugins. For instance, I used to test my C++ snippets (from lh-cpp) on Travis with VimRunner+RSpec.
Regarding the syntax, for instance the following
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
\, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new')  " work around possible E36
try
  " :SetBufferContent a/file/name.txt
  " or
  SetBufferContent << trim EOF
  1
  3
  2
  EOF
  %sort
  " AssertBufferMatch a/file/NAME.txt
  " or
  AssertBufferMatch << trim EOF
  1
  4
  3
  EOF
finally
  silent bw!
endtry
which results in:
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| @@ -1,3 +1,3 @@
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)
I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other XUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has a way to check for Vim-specific features - you can't change buffers and then check expectations on the result - so you will have to write your vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, and return strings instead of using setline(), that sort of thing.
There is vim-vspec.
Your tests are written in vimscript and you can write them using a BDD-style (describe, it, expect, ...)
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]
For functional testing, there's a tool called vroom. It has some limitations and can take seconds-to-minutes to get through thorough tests for a good size project, but it has a nice literate testing / documentation format with vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.
Another few candidates:
VimBot - Similar to VimRunner in that it's written in Ruby and allows you to control a vim instance remotely. Is built to be used with the unit testing framework RSpec.
VimDriver - Same as VimBot except done in Python instead of Ruby (started as a direct port from VimBot) so you can use Python's unit testing framework if you're more familiar with that.
I am running a shell script on Windows with Cygwin, in which I execute a program multiple times with different arguments each time. Sometimes, the program generates a segmentation fault for some input arguments. I want to generate a text file in which the shell script records which of the inputs the program failed for. Basically, I want to check the return value of the program each time it runs. Here I am assuming that when the program fails, it returns a value different from the one it returns on success. I am not sure about this. The executable is a C++ program.
Is it possible to do this? Please guide. If possible, please provide a code snippet for the shell script.
Also, please tell me what values can be returned.
My script is a .sh file.
The return value of the last program that finished is available in the special shell variable $?.
You can test the return value using the shell's if command:
if program; then
    echo Success
else
    echo Fail
fi
or by using "and" or "or" lists to do extra commands only if yours succeeds or failed:
program && echo Success
program || echo Fail
Note that the test succeeds if the program returns 0 for success, which is slightly counterintuitive if you're used to C/C++ conditions succeeding for non-zero values.
If it is a .bat file, you can use %ERRORLEVEL%.
Assuming no significant spaces in your command line arguments:
cat <<'EOF' |
-V
-h
-:
-a whatnot peezat
EOF
while read args
do
    if program $args
    then : OK
    else echo "!! FAIL !! ($?) $args" >> logfile
    fi
done
This takes a bit more effort (to be polite about it) if you must retain spaces; you'd probably put an eval in front of the 'program'.
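If you would rather drive this from Python than from the shell, a rough sketch follows (the program path and argument sets are made up). On POSIX-ish systems, Python reports death-by-signal as a negative return code, so a segmentation fault shows up as -11 (SIGSEGV), where the shell would show 139 (128 + 11):

import subprocess

# Hypothetical argument sets for the program under test.
arg_sets = [["-V"], ["-h"], ["-a", "whatnot", "peezat"]]

with open("logfile", "w") as log:
    for args in arg_sets:
        rc = subprocess.run(["./program"] + args).returncode
        if rc != 0:
            # Negative rc means the process died from a signal,
            # e.g. -11 for SIGSEGV.
            log.write("FAIL (%d): %s\n" % (rc, " ".join(args)))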