I have a Cabal test target:
test-suite Tests
  type:             exitcode-stdio-1.0
  main-is:          Main.hs
  hs-source-dirs:   test, src
  build-depends:    base, …
  default-language: Haskell2010
And a simple testing Main.hs:
import Test.HUnit
testSanity = TestCase $ assertEqual "Should fail" 2 1
main = runTestTT testSanity
Now running cabal test passes:
Test suite Tests: RUNNING...
Test suite Tests: PASS
Test suite logged to: dist/test/Project-0.1.0-Tests.log
1 of 1 test suites (1 of 1 test cases) passed.
Even though there are failures correctly logged in the test suite log:
Test suite Tests: RUNNING...
Cases: 1 Tried: 0 Errors: 0 Failures: 0
### Failure:
Should fail
expected: 2
but got: 1
Cases: 1 Tried: 1 Errors: 0 Failures: 1
Test suite Tests: PASS
Test suite logged to: dist/test/Project-0.1.0-Tests.log
What am I doing wrong?
main has to be of type IO a, where a can be anything, and that value is not tied in any way to the POSIX exit status of the program. You need to look at the Counts returned by runTestTT and explicitly choose a success or failure exit code.
import System.Exit
import Test.HUnit

main :: IO ()
main = do
  -- testSanity as defined in the question
  cs@(Counts _ _ errs fails) <- runTestTT testSanity
  putStrLn (showCounts cs)
  if errs > 0 || fails > 0
    then exitFailure
    else exitSuccess
I believe exitcode-stdio-1.0 expects the test to communicate failure via the program's exit code. However, if you run the test program manually from the shell (look in the dist/build directory) I think you'll find that the exit code is always 0.
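For example, something like this (a sketch; the exact path under dist/build depends on your project layout, and the counts line is what HUnit would print for the failing test above):
$ ./dist/build/Tests/Tests
Cases: 1  Tried: 1  Errors: 0  Failures: 1
$ echo $?
0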
Here is a related SO question which suggests that your test should call exitFailure if your suite doesn't pass:
QuickCheck exit status on failures, and cabal integration
FWIW, it works correctly when using stack:
$ mkdir new-dir
$ cd new-dir
$ stack new
(edit new-template.cabal to add HUnit as a test dependency)
(add test case to test/Spec.hs)
$ stack test
Perhaps stack is also scanning the test output.
I'm relatively new to Haskell, so apologies in advance if my terminology is not quite correct.
I would like to implement some plain unit test for a very simple project, managed through cabal. I noticed this very similar question, but it didn't really help. This one didn't either (and it mentions tasty, see below).
I think I can accomplish this by using only HUnit - however, I admit I am a bit confused by all other "things" that guides on the net talk about:
I don't quite appreciate the difference between the interfaces exitcode-stdio-1.0 and detailed-0.9
I am not sure about the differences (or the mid- and long-term implications) of using HUnit or QuickCheck or others.
What's the role of tasty that the HUnit guide mentions?
So, I tried to leave all "additional" packages out of the equation, keep everything else as "default" as possible, and did the following:
$ mkdir example ; mkdir example/test
$ cd example
$ cabal init
Then edited example.cabal and added this section:
Test-Suite test-example
  type: exitcode-stdio-1.0
  hs-source-dirs: test, app
  main-is: Main.hs
  build-depends: base >=4.15.1.0,
                 HUnit
  default-language: Haskell2010
Then I created test/Main.hs with this content:
module Main where
import Test.HUnit
tests = TestList [
    TestLabel "test2"
        (TestCase $ assertBool "Why is this not running," False)
    ]

main :: IO ()
main = do
    runTestTT tests
    return ()
Finally, I tried to run the whole lot:
$ cabal configure --enable-tests && cabal build && cabal test
Up to date
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (additional components to build)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Build profile: -w ghc-9.2.4 -O1
In order, the following will be built (use -v for more details):
- example-0.1.0.0 (test:test-example) (ephemeral targets)
Preprocessing test suite 'test-example' for example-0.1.0.0..
Building test suite 'test-example' for example-0.1.0.0..
Running 1 test suites...
Test suite test-example: RUNNING...
Test suite test-example: PASS
Test suite logged to:
/home/jir/workinprogress/haskell/example/dist-newstyle/build/x86_64-linux/ghc-9.2.4/example-0.1.0.0/t/test-example/test/example-0.1.0.0-test-example.log
1 of 1 test suites (1 of 1 test cases) passed.
And the output is not what I expected.
I'm clearly doing something fundamentally wrong, but I don't know what it is.
In order for the exitcode-stdio-1.0 test type to recognize a failed suite, you need to arrange for your test suite's main function to exit with failure in case there are any test failures. Fortunately, there's a runTestTTAndExit function to handle this, so if you replace your main with:
main = runTestTTAndExit tests
it should work fine.
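For reference, the complete test/Main.hs would then be (same test as above, only main changed):
module Main where

import Test.HUnit

tests :: Test
tests = TestList [
    TestLabel "test2"
        (TestCase $ assertBool "Why is this not running," False)
    ]

-- runTestTTAndExit (available from Test.HUnit in recent HUnit
-- versions) runs the suite and exits with a failure status if
-- any assertion failed or errored, which is exactly what the
-- exitcode-stdio-1.0 interface looks at.
main :: IO ()
main = runTestTTAndExit tests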
We are currently migrating from a CMake-based build to Bazel. For unit testing, we are using our own home-grown framework.
When dealing with a SEGFAULT, ctest gives the following output:
The following tests FAILED:
19 - SomeTest (SEGFAULT)
Errors while running CTest
However, when executing the exact same test with the exact same build options and sources, the Bazel output looks like:
//services/SomeTest:test FAILED in 0.2s
/root/.cache/bazel/_bazel_root/b343aed36e4de4757a8e698762574e37/execroot/repo/bazel-out/k8-fastbuild/testlogs/SomeTest/test/test.log
The other output is just the regular printout from the test, nothing regarding the SEGFAULT. Same goes for the contents of SomeTest/test/test.log.
I tried the following options to bazel test: --test_output=all, --test_output=errors, --verbose_test_summary, and --verbose_failures.
What am I missing here?
The output you're seeing comes from CTest, not from your application under test. If you want to see helpful information like that you'll need some testing framework to provide it to you. Here's a comparison between a vanilla test and a Catch2 test.
Setup
test_vanilla.cc
int main() { return 1 / 0; }
test_catch2.cc
#include <catch2/catch.hpp>
TEST_CASE("Hello") { REQUIRE(1 / 0); }
WORKSPACE
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "catch2",
sha256 = "3cdb4138a072e4c0290034fe22d9f0a80d3bcfb8d7a8a5c49ad75d3a5da24fae",
strip_prefix = "Catch2-2.13.7",
urls = ["https://github.com/catchorg/Catch2/archive/v2.13.7.tar.gz"],
)
BUILD
cc_test(
    name = "test_vanilla",
    srcs = ["test_vanilla.cc"],
)

cc_test(
    name = "test_catch2",
    srcs = ["test_catch2.cc"],
    defines = ["CATCH_CONFIG_MAIN"],
    deps = ["@catch2"],
)
Testing
No test framework
Now let's run the tests.
❯ bazel test //:test_vanilla
[...]
//:test_vanilla FAILED in 0.3s
test.log
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:test_vanilla
-----------------------------------------------------------------------------
You can see that the test failed because it did not exit with status 0 (it crashed by illegally dividing by zero).
If you have systemd-coredump installed (and coredumps enabled), you can get some info with
❯ coredumpctl -1 debug
[...]
Core was generated by `/home/laurenz/.cache/bazel/_bazel_laurenz/be59967ad4f5a83f16e874b5d49a28d5/sand'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0 0x0000561398132668 in main ()
(gdb)
Catch2
If you have a test runner or framework like CTest or Catch2, it will provide more info, so you don't even need to check the coredump yourself. The test log will show the problematic file and line as well as the signal.
❯ bazel test //:test_catch2
[...]
//:test_catch2 FAILED in 0.2s
test.log
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:test_catch2
-----------------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
test_catch2 is a Catch v2.13.7 host application.
Run with -? for options
-------------------------------------------------------------------------------
Hello
-------------------------------------------------------------------------------
test_catch2.cc:3
...............................................................................
test_catch2.cc:3: FAILED:
due to a fatal error condition:
SIGFPE - Floating point error signal
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
I have a project with unit tests, and when I run ctest (CMake 3.18.2 on macOS), success is reported for all tests. However, if I run one of the tests by itself, it has exit status 1. As far as I know, this shouldn't happen, so what is causing this and how can I fix it?
The issue was a small careless mistake; for the benefit of others running into it, I provided an answer below, so the rest of the question can be skipped.
Unfortunately, I cannot reproduce this behavior with a smaller minimal working example. I will try to provide as much relevant information as possible; please let me know if I am missing something. Here is the code of the unit test:
#include "alignment_reader.h"
#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_COLOUR_NONE
#include "catch.h"
#include "string_conversions.h"
#include "exceptions.h"
namespace paste_alignments {
namespace test {
namespace {
SCENARIO("Test correctness of AlignmentReader::FromFile.",
"[AlignmentReader][FromFile][correctness]") {
GIVEN("The name of a valid input file.") {
std::string input_file{"test_data/valid_alignment_file.tsv"};
WHEN("Constructed with default number of fields and chunk size.") {
CHECK_NOTHROW(AlignmentReader::FromFile(input_file));
/*AlignmentReader reader{AlignmentReader::FromFile(input_file)};
THEN("Field number and chunk size are at default values.") {
CHECK(reader.NumFields() == 12);
CHECK(reader.ChunkSize() == 128 * 1000 * 1000);
}*/
}
}
}
} // namespace
} // namespace test
} // namespace paste_alignments
When I uncomment the commented part, the exit code changes to 2 and ctest still reports success.
Here is what happens when I run ctest (both with and without the commented portion):
$ ctest
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Start 1: alignment_reader_test
1/1 Test #1: alignment_reader_test ............ Passed 0.16 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.17 sec
Here is what I get if I run the test individually and check exit status (with the stuff commented out):
$ ./test/alignment_reader_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
alignment_reader_test is a Catch v2.12.2 host application.
Run with -? for options
-------------------------------------------------------------------------------
Scenario: Test correctness of AlignmentReader::FromFile.
Given: The name of a valid input file.
When: Constructed with default number of fields and chunk size.
-------------------------------------------------------------------------------
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:64
...............................................................................
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:65: FAILED:
CHECK_NOTHROW( AlignmentReader::FromFile(input_file) )
due to unexpected exception with message:
Unable to open file: 'test_data/valid_alignment_file.tsv'.
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
$ echo $?
1
The only things that change when I include the commented portion are that two tests fail and the exit status is 2 instead of 1.
Here it is with the --verbose flag:
$ ctest --verbose
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: alignment_reader_test
1: Test command: /Users/Jasper/cpp_projects/PasteAlignments/debug/test/alignment_reader_test
1: Test timeout computed to be: 1500
1: ===============================================================================
1: All tests passed (1 assertion in 1 test case)
1:
1/1 Test #1: alignment_reader_test ............ Passed 0.36 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.36 sec
I didn't change the ctest configurations (as suggested here). CMakeLists.txt looks like this:
add_executable(alignment_reader_test
  "${PROJECT_SOURCE_DIR}/test/alignment_reader_test.cc"
  "${PROJECT_SOURCE_DIR}/src/alignment_reader.cc"
  "${PROJECT_SOURCE_DIR}/src/alignment_batch.cc"
  "${PROJECT_SOURCE_DIR}/src/scoring_system.cc"
  "${PROJECT_SOURCE_DIR}/src/alignment.cc"
  "${PROJECT_SOURCE_DIR}/src/helpers.cc")
target_include_directories(alignment_reader_test PUBLIC
  "${PROJECT_SOURCE_DIR}/test"
  "${PROJECT_SOURCE_DIR}/include"
  "${PROJECT_SOURCE_DIR}/lib/catch/include")
add_test(NAME alignment_reader_test COMMAND alignment_reader_test)
I found the issue thanks to @Tsyvarev.
The build directory structure is as follows:
build/
|--test/
|  |--test_executable
|  |--test_data/
|  |  |--test_datafile
When CTest runs test_executable, it runs it from within the directory build/test.
When I ran the executable separately, I ran it from the build directory (./test/test_executable).
Inside the unit-test code, test_datafile is referred to by the relative path test_data/test_datafile. That path does not resolve when the test is run from the build directory (as I did), as opposed to when it is run from the build/test directory.
Therefore, when ctest ran the test, it actually succeeded as it should.
Indeed, if I cd into test first, the test has exit code 0, as it should.
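If you want to make that working-directory assumption explicit, CMake lets you pin it on the test itself; a sketch, reusing the names from this question:
add_test(NAME alignment_reader_test
         COMMAND alignment_reader_test
         WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
With WORKING_DIRECTORY pinned, the directory the test runs in is documented in the build script rather than implied, so the relative path test_data/valid_alignment_file.tsv always resolves against build/test.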
I am trying to use googletest with Bazel and CMake.
CMake with CLion works perfectly for the test part: it fails where it should fail and passes where it should pass.
However, bazel test will pass all the tests even when they should not.
Bazel test not working
For example, I have the stupid_test.cc file:
#include "gtest/gtest.h"
TEST(StupidTests, Stupid1) {
EXPECT_EQ(100, 20);
EXPECT_TRUE(false);
}
which should fail always.
The bazel BUILD file:
cc_test(
name = "stupid_test",
srcs = ["stupid_test.cc"],
deps = [
"//bazel_build/googletest:gtest",
],
)
where bazel_build/googletest contains exactly the files downloaded from the googletest GitHub repository.
Then I run bazel test :stupid_test, and the output is:
⇒ blaze test :stupid_test
DEBUG: /private/var/tmp/_bazel_myusername/741c62b201e51840aa320b156e05fd70/external/bazel_tools/tools/osx/xcode_configure.bzl:87:9: Invoking xcodebuild failed, developer dir: /Users/honghaoli/Downloads/Xcode.app/Contents/Developer ,return code 1, stderr: xcrun: error: invalid DEVELOPER_DIR path (/Users/honghaoli/Downloads/Xcode.app/Contents/Developer), missing xcrun at: /Users/honghaoli/Downloads/Xcode.app/Contents/Developer/usr/bin/xcrun
, stdout:
INFO: Analysed target //:stupid_test (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
Target //:stupid_test up-to-date:
bazel-bin/stupid_test
INFO: Elapsed time: 1.840s, Critical Path: 1.55s
INFO: 4 processes: 4 darwin-sandbox.
INFO: Build completed successfully, 4 total actions
//:stupid_test PASSED in 0.1s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 4 total actions
I have no idea why it passed.
CMake Works
The same test file fails with a very clear message when using CMake and CLion. For example, here is part of the CMakeLists.txt:
add_executable(stupid_test stupid_test.cc)
target_link_libraries(stupid_test gtest gtest_main)
add_test(NAME stupid_test COMMAND stupid_test)
and the output fail message:
Testing started at 19:16 ...
/Users/myusername...blabla/cmake-build-debug/stupid_test --gtest_filter=* --gtest_color=no
Running main() from /Users/myusername...blabla/cmake-build-debug/googletest-src/googletest/src/gtest_main.cc
[==========] Running 1 test from 1 test suite.
/Users/myusername...blabla/stupid_test.cc:11: Failure
Expected equality of these values:
100
20
/Users/myusername...blabla/stupid_test.cc:12: Failure
Value of: false
Actual: false
Expected: true
/Users/myusername...blabla/stupid_test.cc:13: Failure
Value of: true
Actual: true
Expected: false
Process finished with exit code 1
I am using macOS 10.14, if that helps.
I figured out the reason.
Just change
"//bazel_build/googletest:gtest",
in the BUILD file into
"//bazel_build/googletest:gtest_main",
I'm still confused!
If gtest does not work, it should at least fail the build or emit errors/warnings. It should never report:
Executed 1 out of 1 test: 1 test passes.
as the output.
I consider it a bug.
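For what it's worth, gtest_main only contributes a stock main function; if you'd rather keep the dependency on :gtest alone, you can supply the equivalent entry point yourself. A minimal sketch using the standard googletest API:
#include "gtest/gtest.h"

// This is essentially what linking gtest_main gives you:
// initialize googletest, run all registered tests, and return
// a nonzero exit code if any of them failed.
int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}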
I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu#ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run cargo rustc --test hash::vec, but I get error: no test target named hash::vec. Running cargo rustc -- --test works, but creates a binary that runs all tests. If I try cargo rustc -- --test hash::vec, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
cargo rustc -h says that you can pass NAME with the --test flag (--test NAME Build only the specified test target), so I'm wondering what "NAME" is and how to pass it in so I get a binary that only runs the specified 9 tests in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact whatsoever on which tests get compiled, only on which tests run. In fact, this parameter is passed to the test runner itself; Cargo doesn't even interpret it itself.
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
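For example (the file name hash_vec is hypothetical), a file placed under tests/ is its own test target, and --test selects it by file name:
// tests/hash_vec.rs -- compiled as a separate integration-test
// binary; `cargo test --test hash_vec` builds and runs only
// the tests in this file.

#[test]
fn get_offset_value() {
    assert_eq!(1 + 1, 2);
}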
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
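A sketch of that approach (the feature name hash-tests is hypothetical): declare [features] hash-tests = [] in Cargo.toml, then gate the test module on the feature:
// Only compiled when testing with the feature enabled,
// e.g. `cargo test --features hash-tests`.
#[cfg(all(test, feature = "hash-tests"))]
mod hash_vec_tests {
    #[test]
    fn get_offset_tombstone() {
        assert_eq!(2 + 2, 4);
    }
}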
This functionality has found its way into Cargo. cargo build now has a parameter
--test [<NAME>] Build only the specified test target
which builds a binary for the specified test target only.
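For example, assuming the subset lives in a test target named hash_vec (i.e. tests/hash_vec.rs):
$ cargo build --test hash_vec
The resulting binary under target/debug/deps/ then contains only that target's tests.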