I'm working on a TCL project whose version control is based on Git. So, in order to produce good-quality code, I set up execution of the tests in a pre-commit hook.
However, even though the tests are executed (the trace is shown on the command line) and one of them fails, Git performs the commit anyway. So I launched the hook manually to check the exit code, and I found that it is zero, which explains why Git does not stop:
$ .git/hooks/pre-commit
++++ FlattenResult-test PASSED
(...)
==== CheckF69F70 FAILED
==== Content of test case:
(...)
==== CheckF69F70 FAILED
$ echo $?
0
(Launching the test script with tclsh also results in $? being 0.)
So my question is about this last line: why is $? equal to 0 when one of the Tcl tests has failed? And how can I achieve a simple pre-commit hook that stops on failure?
I read and reread the tcltest documentation, but saw no setting or information about this exit code. And I would really like not to have to parse the test output to check whether ERROR or FAILED is present...
Edit: versions
TCL version: 8.5
tcltest version: 2.3.4
This depends on how you run your test suite. Normally you run a file called tests/all.tcl which may look something like this:
package require Tcl 8.6
package require tcltest 2.5
namespace import tcltest::*
configure -testdir [file dirname [file normalize [info script]]] {*}$argv
runAllTests
That final runAllTests returns a boolean indicating success (0) or failure (1). You can use that to generate an exit code by changing the last line to:
exit [runAllTests]
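With the test script exiting non-zero on failure, the pre-commit hook itself can stay minimal. A sketch, assuming tests/all.tcl is the entry point (adjust the path and the tclsh invocation to your layout):
#!/bin/sh
# Run the Tcl test suite; any non-zero exit status aborts the commit.
tclsh tests/all.tcl || {
    echo "Tests failed; aborting commit." >&2
    exit 1
}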
I use this redefinition in some of my test scripts:
# Exit non-zero if any tests fail.
# tcltest's `cleanupTests` resets the numTests array, so capture it first.
proc cleanupTests {} {
    set failed [expr {$::tcltest::numTests(Failed) > 0}]
    uplevel 1 ::tcltest::cleanupTests
    if {$failed} then {exit 1}
}
After some research, I managed to make it work, even though several factors were against me:
I have to use an old TCL version (8.5) with tcltest version 2.3.4, in which runAllTests returns nothing;
I forgot to call cleanupTests at the end of my test scripts, as the documentation is not really clear about its usage. (It still isn't much clearer to me; I just figured out it is needed if you want your tests to be run by runAllTests, which is really not obvious.)
And here is my solution, mostly based on Hai's DevBits blog post:
all.tcl
package require tcltest
::tcltest::configure (...)
proc ::tcltest::cleanupTestsHook {} {
    variable numTests
    set ::exitCode [expr {$numTests(Total) == 0 || $numTests(Failed) > 0}]
}
::tcltest::runAllTests
exit $exitCode
Some thoughts about it:
I added $numTests(Total) == 0 as a failure condition: this means that no tests were found, which is clearly an erroneous condition;
This doesn't catch exceptions in the configuration of the tests, for instance a source command that points to a non-existing file, revealing some failure in the test scaffolding. This would be caught as an error in other test frameworks (ah, pytest, I miss you!)
Related
We have a project managed in GitLab, with a CI pipeline for builds and tests (pytest, Google Test). Two or three of our Google Test cases fail, but GitLab considers the test stage successful. Is it because the success percentage is more than 90% (an arbitrary value)? Is there a way to make the stage (and thus the complete pipeline) fail if we don't get 100% success?
Here is the yml script of the stage:
test_unit_test:
  stage: test
  needs: ["build", "build_unit_test"]
  image: $DOCKER_IMAGE
  rules:
    - if: '$CI_PIPELINE_SOURCE != "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script: |
    ZIPNAME=`cat _VERSION_.txt`
    ./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
  artifacts:
    reports:
      junit: test_unit_test_report.xml
    expire_in: 1 week
Thank you for any help.
Regards.
GitLab CI/CD jobs don't care what the script sections are doing (so they don't look at, for example, test pass percentages). The only things they use to determine whether a job passed or failed are exit codes and the allow_failure keyword.
After each command in the before_script, script, and after_script sections is executed, the GitLab Runner checks its exit code. If it is non-zero, the command is considered a failure, and if the allow_failure keyword is not set to true for the job, the job fails.
So, for your job, even though the tests are failing, somehow the script is exiting with exit code 0, meaning the command itself finished successfully. The command in this case is:
ZIPNAME=$(cat _VERSION_.txt)
./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
NOTE: I replaced your backticks '`' with the $(command) syntax explained here, which does the same thing (execute this command) but has some advantages over '`command`', including nesting and easier use in markdown, where '`' indicates code formatting.
So, since you are calling a script (./scripts/gitlab-ci/stage-unittests.sh) to run your tests, that script itself is finishing successfully, so the job finishes successfully. Take a look at that script to see if you can tell why it finishes successfully even though the tests fail.
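I can't see the content of stage-unittests.sh, but a common culprit is a final post-processing step (copying the JUnit XML, printing a summary) that succeeds and thereby masks the test runner's status, since the last command's exit code is what the job reports. A hedged sketch of how the script could capture and re-raise the real exit code; ./run_google_tests is a placeholder for however your script actually launches the Google Test binary:
#!/bin/sh
report="$1"

# Run the tests and remember their exit status instead of letting the
# last command in the script decide the job result.
./run_google_tests --gtest_output="xml:${report}"
rc=$?

# ... any report post-processing here ...

exit "$rc"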
Is there a way to rerun the entire test suite if a single particular test case fails?
For example, a Robot test suite contains a test case that checks a cookie value. If the cookie matches a particular pattern, execution of the rest of the suite continues; if it fails, the entire suite should be rerun, and this should be repeated up to 3 times. If the cookie value is still not right after three runs, the suite should fail completely.
You can run the original tests, rerun the failed ones, and merge the results of both runs. If some tests fail in the first run and then pass in the second, you will see that in the results.
There is often a need to re-execute a subset of tests, for example,
after fixing a bug in the system under test or in the tests
themselves. This can be accomplished by selecting test cases by names
(--test and --suite options), tags (--include and --exclude), or by
previous status (--rerunfailed or --rerunfailedsuites).
robot --output original.xml tests # first execute all tests
robot --rerunfailedsuites original.xml --output rerun.xml tests # then re-execute failing
rebot --merge original.xml rerun.xml # finally merge results
You can read more about this here: https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#merging-re-executed-tests
As for your specific example, I am not sure you can do that directly. But you can save the exit code of the run and act based on it:
robot "your robot options" $#
if [ $? -eq 0 ]; then
"evaluation options when passed"
fi
else
"evaluation options when failed"
fi
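For the "retry the whole suite up to 3 times" requirement specifically, a small wrapper around robot can do it from the shell. A sketch, assuming the suite lives in tests/ and that any non-zero robot exit code counts as a failed run:
#!/bin/sh
# Re-run the whole suite up to 3 times; succeed as soon as one run passes.
for attempt in 1 2 3; do
    robot --output "run${attempt}.xml" tests && exit 0
done
echo "Suite failed on all 3 attempts" >&2
exit 1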
I have been using various xUnit-style test frameworks for years (starting from CppUnit in the early 2000s). In all cases it was very easy to set a breakpoint on failure: there was a function that indicated a detected failure:
b 'atf::tests::tc::fail(std::string const&)'
command
up 1
end
It seems that gtest is quite different, so what is the established practice for doing the same with gtest?
If you need to break at the start of the test to observe something, first get the symbol names present in the executable, and grep for the test name of interest, e.g.:
nm -C myclass_test | grep MyTest0
if you want to break at:
TEST(MainTest, MyTest0) {
EXPECT_EQ(1, 1);
}
Of the results of that grep, the most promising one seemed to be:
0000000000407c64 T MainTest_MyTest0_Test::TestBody()
and so:
gdb myclass_test
and:
b MainTest_MyTest0_Test::TestBody
r
and then this leaves me at the start of the desired test.
Tested with this setup at revision 2.
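As a side note, the test binary itself can tell you which tests it contains and can run just one of them, which pairs nicely with the approach above; both flags below are standard gtest flags:
./myclass_test --gtest_list_tests
./myclass_test --gtest_filter=MainTest.MyTest0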
what is the established practice of doing the same with gtest?
Reading gtest.cc, the closest I see is --gunit_break_on_failure, which should cause the code to execute an INT3 trap on x86/Linux, and to call DebugBreak on Windows.
Update: the flag appears to have been renamed to --gtest_break_on_failure in the latest public releases.
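In practice that means running the test binary under the debugger, so the trap raised at the failing assertion drops you straight to the gdb prompt. Something like the following, reusing the myclass_test binary from the other answer:
gdb --args ./myclass_test --gtest_break_on_failure
(gdb) run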
When I run an Execute shell build step to execute a script and that script returns 0, Jenkins flags the build as SUCCESS; otherwise it flags it as FAILURE. That is the expected default behaviour, since 0 means no errors and any other value represents an error.
Is there a way to mark a build as SUCCESS only if the return value matches a specific value other than 0 (e.g. 1,2,3...)?
PS: in case you're wondering why I'm looking for that, this will allow me to perform unit testing of Jenkins itself as my scripts are written to return different exit values depending on various factors, thus allowing me to expect certain values depending on certain setup mistakes and making sure my whole Jenkins integration picks up on those.
Alright, I went on IRC #jenkins and no-one knew of a plugin to set a particular job status depending on a particular exit code :( I managed to do what I wanted by creating an Execute shell step with the following content:
bash -c "/path/to/myscript.sh; if [ "\$?" == "$EXPECTED_EXIT_CODE" ]; then exit 0; else exit 1; fi"
- Running the script under bash -c allows catching the exit code and prevents Jenkins from stopping build execution when that exit code is different than 0 (which it normally does).
- \$? is interpreted as $? after the script execution and represents its exit code.
- $EXPECTED_EXIT_CODE is one of my job parameters which defines the exit code I'm expecting.
- The if statement simply does the following: if I get the expected exit code, exit with 0 so that the build is marked as SUCCESS, else exit with 1 so that the build is marked as FAILURE.
/path/to/myscript.sh || if [ "$?" == "$EXPECTED_EXIT_CODE" ]; then continue; else exit 1; fi
I would use continue instead of exit 0 in case you have other items below that you need to run through.
You can handle it via the Text-finder Plugin:
Have your script print the exit code it is about to exit with, like: Failed on XXX - Exiting with RC 2
Use the Text-finder Plugin to catch that error message and mark the build as 'Failed' or 'Unstable'. For example, if you decide RC 2, 3 and 4 should mark the build as 'Unstable', look for text matching this pattern: Exiting with RC [2-4].
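A hedged sketch of what the script side of that could look like (./run_tests.sh is a placeholder): the shell step exits 0 so it never fails on its own, and the printed marker plus the Text-finder pattern decide the final build status.
#!/bin/sh
./run_tests.sh
rc=$?
echo "Exiting with RC $rc"
# Always exit 0 here; the Text-finder pattern (e.g. 'Exiting with RC [2-4]') sets the result.
exit 0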
Create a wrapper for your shell script. Have that wrapper execute your tests and then set the return value according to whatever criteria you want.
I do it like this:
set +e
./myscript.sh
rc="$?"
set -e
if [ "$rc" == "$EXPECTED_CODE_1" ]; then
#...actions 1 (if required)
exit 0
elif [ "$rc" == "$EXPECTED_CODE_2" ]; then
#...actions 2 (if required)
exit 0
else
#...actions else (if required)
exit "$rc"
fi
echo "End of script" #Should never happen, just to indicate there's nothing further
Here set +e avoids Jenkins' default behaviour of reporting FAILURE on any non-zero exit during your script's execution; set -e then restores the normal behaviour. This lets you handle the exit code as appropriate, and otherwise fail with the returned code.
robocopy "srcDir" "destDir" /"copyOption" if %ERRORLEVEL% LEQ 2 exit 0
If robocopy exit code is less than or equal to 2 then it will exit successfully.
Robocopy exit codes:
0x00 (0)  No errors occurred, and no copying was done. The source and destination directory trees are completely synchronized.
0x01 (1)  One or more files were copied successfully (that is, new files have arrived).
0x02 (2)  Some Extra files or directories were detected. No files were copied. Examine the output log for details.
0x04 (4)  Some Mismatched files or directories were detected. Examine the output log. Housekeeping might be required.
0x08 (8)  Some files or directories could not be copied (copy errors occurred and the retry limit was exceeded). Check these errors further.
0x10 (16) Serious error. Robocopy did not copy any files. Either a usage error or an error due to insufficient access privileges on the source or destination directories.
I'm looking for some tools for testing vim scripts: either
- vim scripts that do unit/functional testing, or
- classes for some other library (eg Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
- purports "To provide vim scripts with a simple unit testing framework and tools"
- first and only version (v0.1) was released in 2004
- documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
- This one also seems pretty experimental, and may not be particularly reliable.
- May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
- No tagged revisions have been created (ie no releases)
So information from people who are using one of those two existing modules, and/or links to other, more clearly usable options, would be very welcome.
vader.vim is easy, and amazing. It has no external dependencies (it doesn't require ruby/rake); it's a pure vimscript plugin. Here's a fully specified test:
Given (description of test):
  foo bar baz

Do (move around, insert some text):
  2Wiab\<Enter>c

Expect:
  foo bar ab
  cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader
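vader.vim can also be driven non-interactively (e.g. in CI): as far as I remember its README, the bang form of the command makes Vim exit with a status that reflects the result, along these lines (treat the exact invocation as an assumption and check :help vader or the README):
vim '+Vader! test/*.vader' && echo PASSED || echo FAILED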
I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem so you'll need at least some familiarity with Ruby but if you put the time in then you get the full power of RSpec in order to write your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
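Assuming a standard RSpec setup with the vimrunner gem installed, you then run the spec from the shell in the usual way, for example:
rspec spec/runspec.vim_spec.rb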
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.
There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and displaying the failures in the quickfix window. Exception callstacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with rspec (+Vimrunner).
Since v2.0 (May 2020), the plugin can also test buffer content -- after it has been altered with mappings/snippets/.... Until then, I had been using other plugins. For instance, I used to test my C++ snippets (from lh-cpp) on travis with VimRunner+RSpec.
Regarding the syntax, for instance the following
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
\, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new') " work around possible E36
try
    " :SetBufferContent a/file/name.txt
    " or
    SetBufferContent << trim EOF
    1
    3
    2
    EOF
    %sort
    " AssertBufferMatch a/file/NAME.txt
    " or
    AssertBufferMatch << trim EOF
    1
    4
    3
    EOF
finally
    silent bw!
endtry
which results in:
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| ## -1,3 +1,3 ##
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)
I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other XUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has ways to check for vim-specific features - you can't change buffers and then check expectations on the result - so you will have to write your vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, return strings instead of using setline(), that sort of thing.
There is vim-vspec.
Your tests are written in vimscript and you can write them in a BDD style (describe, it, expect, ...):
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]
For functional testing, there's a tool called vroom. It has some limitations and can take seconds to minutes to get through thorough tests for a good-sized project, but it has a nice literate testing / documentation format with vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.
Another few candidates:
VimBot - Similar to VimRunner in that it's written in Ruby and allows you to control a vim instance remotely. It's built to be used with the unit testing framework RSpec.
VimDriver - Same as VimBot except done in Python instead of Ruby (started as a direct port from VimBot) so you can use Python's unit testing framework if you're more familiar with that.