The application of interest is a compiler which returns a non-zero exit code when it encounters an error in the source. The unit tests for the compiler are composed of small snippets which intentionally trigger errors.
The function used to add a test is:
function(add_compiler_test test_name options)
  add_test(NAME ${test_name}
           COMMAND $<TARGET_FILE:pawncc> ${DEFAULT_COMPILER_OPTIONS} ${options})
  set_tests_properties(${test_name} PROPERTIES
                       ENVIRONMENT PATH=$<TARGET_FILE_DIR:pawnc>)
endfunction()
This causes the test to be reported as failed whenever the compiler returns a non-zero exit code, despite that being the correct behaviour for these tests.
How can the exit status of the program be tested?
If you want the test(s) to be reported as SUCCESS when they return a non-zero exit status and FAILED otherwise, set the WILL_FAIL property for the test(s):
set_tests_properties(<test1> <test2> ... PROPERTIES WILL_FAIL TRUE)
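In the helper from the question, the property can be set alongside the ENVIRONMENT property. A sketch based on the function above, which inverts the pass/fail logic for every test the function registers:

set_tests_properties(${test_name} PROPERTIES
                     ENVIRONMENT PATH=$<TARGET_FILE_DIR:pawnc>
                     # A non-zero exit code from pawncc now counts as a pass.
                     WILL_FAIL TRUE)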
We are currently migrating from a CMake-based build to Bazel. For unit testing, we use our own home-grown framework.
When dealing with a SEGFAULT, ctest gives the following output:
The following tests FAILED:
19 - SomeTest (SEGFAULT)
Errors while running CTest
However, when executing the exact same test with the exact same build-options and sources, the bazel output looks like:
//services/SomeTest:test FAILED in 0.2s
/root/.cache/bazel/_bazel_root/b343aed36e4de4757a8e698762574e37/execroot/repo/bazel-out/k8-fastbuild/testlogs/SomeTest/test/test.log
The rest of the output is just the regular printout from the test; there is nothing regarding the SEGFAULT. The same goes for the contents of SomeTest/test/test.log.
I tried the following options with bazel test: --test_output=all, --test_output=errors, --verbose_test_summary, and --verbose_failures.
What am I missing here?
The output you're seeing comes from CTest, not from your application under test. If you want to see helpful information like that you'll need some testing framework to provide it to you. Here's a comparison between a vanilla test and a Catch2 test.
Setup
test_vanilla.cc
int main() { return 1 / 0; }
test_catch2.cc
#include <catch2/catch.hpp>
TEST_CASE("Hello") { REQUIRE(1 / 0); }
WORKSPACE
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "catch2",
sha256 = "3cdb4138a072e4c0290034fe22d9f0a80d3bcfb8d7a8a5c49ad75d3a5da24fae",
strip_prefix = "Catch2-2.13.7",
urls = ["https://github.com/catchorg/Catch2/archive/v2.13.7.tar.gz"],
)
BUILD
cc_test(
    name = "test_vanilla",
    srcs = ["test_vanilla.cc"],
)

cc_test(
    name = "test_catch2",
    srcs = ["test_catch2.cc"],
    defines = ["CATCH_CONFIG_MAIN"],
    deps = ["@catch2"],
)
Testing
No test framework
Now let's run the tests.
❯ bazel test //:test_vanilla
[...]
//:test_vanilla FAILED in 0.3s
test.log
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:test_vanilla
-----------------------------------------------------------------------------
You can see that the test failed because it did not return 0 (it failed by illegally dividing by zero).
If you have systemd-coredump installed (and coredumps enabled), you can get some info with
❯ coredumpctl -1 debug
[...]
Core was generated by `/home/laurenz/.cache/bazel/_bazel_laurenz/be59967ad4f5a83f16e874b5d49a28d5/sand'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0 0x0000561398132668 in main ()
(gdb)
Catch2
If you have a test framework like CTest or Catch2, it will provide more info, so you don't even need to check the coredump yourself. The test log will provide the problematic file and line as well as the signal.
❯ bazel test //:test_catch2
[...]
//:test_catch2 FAILED in 0.2s
test.log
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:test_catch2
-----------------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
test_catch2 is a Catch v2.13.7 host application.
Run with -? for options
-------------------------------------------------------------------------------
Hello
-------------------------------------------------------------------------------
test_catch2.cc:3
...............................................................................
test_catch2.cc:3: FAILED:
due to a fatal error condition:
SIGFPE - Floating point error signal
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
I have a project with unit tests, and when I run ctest (CMake version 3.18.2 on macOS), success is reported for all tests. However, if I run one of the tests by itself, it exits with status 1. As far as I know, this shouldn't happen, so what is causing it and how can I fix it?
The issue was a small careless mistake; for the benefit of others running into it, I have provided an answer below, so the rest of the question can be skipped.
Unfortunately, I cannot reproduce this behavior with a smaller minimum working example. I will try to provide as much relevant information as possible, please let me know if I am missing something. Here is the code of the unit test:
#include "alignment_reader.h"
#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_COLOUR_NONE
#include "catch.h"
#include "string_conversions.h"
#include "exceptions.h"
namespace paste_alignments {
namespace test {
namespace {
SCENARIO("Test correctness of AlignmentReader::FromFile.",
"[AlignmentReader][FromFile][correctness]") {
GIVEN("The name of a valid input file.") {
std::string input_file{"test_data/valid_alignment_file.tsv"};
WHEN("Constructed with default number of fields and chunk size.") {
CHECK_NOTHROW(AlignmentReader::FromFile(input_file));
/*AlignmentReader reader{AlignmentReader::FromFile(input_file)};
THEN("Field number and chunk size are at default values.") {
CHECK(reader.NumFields() == 12);
CHECK(reader.ChunkSize() == 128 * 1000 * 1000);
}*/
}
}
}
} // namespace
} // namespace test
} // namespace paste_alignments
When I uncomment the commented part, the exit code changes to 2 and ctest still reports success.
Here is what happens when I run ctest (both with and without the commented portion):
$ ctest
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Start 1: alignment_reader_test
1/1 Test #1: alignment_reader_test ............ Passed 0.16 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.17 sec
Here is what I get if I run the test individually and check exit status (with the stuff commented out):
$ ./test/alignment_reader_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
alignment_reader_test is a Catch v2.12.2 host application.
Run with -? for options
-------------------------------------------------------------------------------
Scenario: Test correctness of AlignmentReader::FromFile.
Given: The name of a valid input file.
When: Constructed with default number of fields and chunk size.
-------------------------------------------------------------------------------
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:64
...............................................................................
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:65: FAILED:
CHECK_NOTHROW( AlignmentReader::FromFile(input_file) )
due to unexpected exception with message:
Unable to open file: 'test_data/valid_alignment_file.tsv'.
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
$ echo $?
1
The only things that change when I include the commented portion are that two tests fail and the exit status is 2 instead of 1.
Here is the output with the --verbose flag:
$ ctest --verbose
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: alignment_reader_test
1: Test command: /Users/Jasper/cpp_projects/PasteAlignments/debug/test/alignment_reader_test
1: Test timeout computed to be: 1500
1: ===============================================================================
1: All tests passed (1 assertion in 1 test case)
1:
1/1 Test #1: alignment_reader_test ............ Passed 0.36 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.36 sec
I didn't change the ctest configurations (as suggested here). CMakeLists.txt looks like this:
add_executable(alignment_reader_test
               "${PROJECT_SOURCE_DIR}/test/alignment_reader_test.cc"
               "${PROJECT_SOURCE_DIR}/src/alignment_reader.cc"
               "${PROJECT_SOURCE_DIR}/src/alignment_batch.cc"
               "${PROJECT_SOURCE_DIR}/src/scoring_system.cc"
               "${PROJECT_SOURCE_DIR}/src/alignment.cc"
               "${PROJECT_SOURCE_DIR}/src/helpers.cc")
target_include_directories(alignment_reader_test PUBLIC
                           "${PROJECT_SOURCE_DIR}/test"
                           "${PROJECT_SOURCE_DIR}/include"
                           "${PROJECT_SOURCE_DIR}/lib/catch/include")
add_test(NAME alignment_reader_test COMMAND alignment_reader_test)
I found the issue thanks to @Tsyvarev.
The build directory structure is as follows:
build/
|-- test/
    |-- test_executable
    |-- test_data/
        |-- test_datafile
When CTest runs test_executable, it runs it from within the directory test.
When I ran the executable separately, I ran it from the build directory (./test/test_executable).
Inside the unit-test code, test_datafile is referred to as test_data/test_datafile. This relative path does not resolve when the executable is run from the build directory (as I did), as opposed to from the build/test directory.
Therefore, when ctest ran the test, it actually succeeded, as it should.
Indeed, if I cd into test first, the test has exit code 0, as it should.
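For what it's worth, the dependence on the caller's working directory can also be removed in CMake itself: add_test accepts a WORKING_DIRECTORY option. A sketch, assuming add_test is invoked from the test/ subdirectory so that ${CMAKE_CURRENT_BINARY_DIR} is the directory containing test_data/:

# Pin the working directory so the relative path test_data/... resolves
# no matter where ctest or the user launches the test from.
add_test(NAME alignment_reader_test
         COMMAND alignment_reader_test
         WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")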
How can I fail the GoCD build based on the Sonar quality gate?
I expect the stage to fail when the quality gate fails. Is there any configuration that can be done to fail the build?
Based on @moritz's answer, this worked for me.
I created a bat file that checks the quality gate status of the Sonar project after the Sonar build has run and, based on the status in the response, returns the exit code.
REM Query the quality gate and store the status in %test%
REM (%%~A strips the surrounding quotes from jq's output).
For /F "Delims=" %%A In ('"curl http://mysonarserver/api/qualitygates/project_status?projectKey=com.mypackage:sampleproject | jq ".projectStatus.status""') Do Set "test=%%~A"
echo %test%
REM Fail the task on a red quality gate, succeed on a green one.
If /I "%test%"=="ERROR" exit -1
If /I "%test%"=="OK" exit 0
In my case, the SONAR server would return ERROR and OK based on the status of the build.
I used curl and jq to make the HTTP request from the command line and to parse the JSON response, respectively.
I had to do some tweaking to make it work on Windows, hopefully it should work on Linux as well.
You can also add the call to the Maven build for Sonar from this script if needed.
I hope it helps!
GoCD fails the task (and thus stage and job, unless configured otherwise) when the exit code is non-zero.
So you need to bring whatever command you are executing to indicate a failure through a non-zero exit code (which most UNIX commands do).
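Putting the two answers together, a rough POSIX shell equivalent of the batch script above might look like this (an untested sketch; the server URL and project key are copied from the Windows version):

#!/bin/sh
# Query the quality gate; jq -r strips the quotes from the JSON string.
status=$(curl -s "http://mysonarserver/api/qualitygates/project_status?projectKey=com.mypackage:sampleproject" | jq -r '.projectStatus.status')
echo "$status"
# GoCD fails the task on any non-zero exit code.
if [ "$status" = "ERROR" ]; then
  exit 1
fi
exit 0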
I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu@ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run `cargo rustc --test hash::vec` but I get `error: no test target named hash::vec`. `cargo rustc -- --test` works, but creates a binary that runs all tests. If I try `cargo rustc -- --test hash::vec`, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
`cargo rustc -h` says that you can pass NAME with the `--test` flag (`--test NAME` Build only the specified test target), so I'm wondering what "NAME" is and how to pass it in so I get a binary that only runs the specified 9 tests in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact whatsoever on which tests get compiled, only on which tests run. In fact, this parameter is passed to the test runner itself; Cargo doesn't even interpret it itself.
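Since the filter is just a runtime argument, the already-compiled binary accepts it directly. For example (using the hashed binary name from the question):

# Runs only the tests whose full path contains "hash::vec":
./target/debug/deps/ox-824a031ff1732165 hash::vec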
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
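A rough sketch of that approach (the feature name hash-tests is made up for illustration):

# Cargo.toml
[features]
hash-tests = []

// src/hash/vec.rs: this test module only exists when the feature is enabled.
#[cfg(all(test, feature = "hash-tests"))]
mod tests {
    #[test]
    fn test_get_offset_value() {
        // ...
    }
}

Running cargo test --features hash-tests then compiles and runs that subset; without the flag, the tests are not compiled at all.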
This functionality has found its way into Cargo. cargo build now has a parameter
--test [<NAME>] Build only the specified test target
which builds a binary with the specified set of tests only.
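cargo test accepts the same flag, so the target name can be combined with a runtime filter (a sketch, reusing the hypothetical file name tests/blah.rs from above):

# Build only tests/blah.rs and run just the tests matching the substring:
cargo test --test blah hash::vec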
I currently have a custom workflow activity that runs an external process after the solution has compiled successfully and returns an ExitCode <> 0 if the external process failed. Once I know that the process has failed, I want to set the build status to FAIL (as you would see if code had not compiled), so I added a SetBuildProperties activity that sets the Status property to BuildStatus.Failed, but this only seems to result in a Partially Succeeded build when the build is finished.
I have tried setting the build's compilation status to failed inside my custom activity, which does result in a Failed build, but I don't really want to do that, as it is misleading when the solution has compiled.
Can anyone tell me how I can force a build to fail? (Preferably without having to set the compilation status to failed!)
Thanks
What happens here is that when the build finishes, the workflow manager overwrites the build status with a value that depends on the combination of statuses of the build process. In your case, the CompilationStatus is Succeeded but a custom activity failed (you set the BuildStatus to Failed), so the overall status becomes PartiallySucceeded.
The only workaround here is to set the CompilationStatus to either Failed or Unknown; then the build status will be Failed.
I haven't tried setting the CompilationStatus to Unknown, though. But if it can be done, you can later go back and change it to Passed, just as a way to distinguish these from builds with real compilation failures.
Not a great workaround, I know :(
UPDATE: Using the SetBuildProperties activity to set the build status to Failed, I was able to fail the build even though the compilation succeeded.
In TFS 2013, using a customized build template, you can just set the build status to Failed while leaving the CompilationStatus and TestStatus at their rightful values.
You have to do it after the test run, though; otherwise it gets updated back. Use the SetBuildProperties activity and set the "Status" property.