I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu@ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run cargo rustc --test hash::vec but I get
error: no test target named hash::vec
cargo rustc -- --test works, but it creates a binary that runs all tests. If I try cargo rustc -- --test hash::vec, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
cargo rustc -h says that you can pass NAME with the --test flag (--test NAME Build only the specified test target), so I'm wondering what "NAME" is and how to pass it in so I get a binary that only runs the specified 9 tests in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact on which tests get compiled, only on which tests run. In fact, this parameter is passed straight through to the test runner; Cargo doesn't interpret it at all.
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
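Concretely, the mapping described above corresponds to this file layout:

tests/blah.rs          built and run by --test blah
src/bin/NAME.rs        built by --bin NAME
examples/NAME.rs       built by --example NAME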
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
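A minimal sketch of that approach, assuming a feature named vec_tests that you declare yourself (neither the feature nor the module below comes from the question):

// In Cargo.toml you would declare the feature:
//
//   [features]
//   vec_tests = []

// In the module containing the tests, gate the test module on that feature
// in addition to cfg(test):
#[cfg(all(test, feature = "vec_tests"))]
mod feature_gated_tests {
    #[test]
    fn test_get_from_hash() {
        // placeholder assertion; the real hash-table tests would go here
        assert_eq!(2 + 2, 4);
    }
}

With this, cargo test --features vec_tests compiles and runs the gated tests, while a plain cargo test does not even compile them. Each other subset of tests would get its own feature in the same way.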
This functionality has since found its way into Cargo. cargo build now has a parameter
--test [<NAME>]    Build only the specified test target
which builds a binary containing only the tests in the specified test target.
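For example, assuming an integration test file tests/hash_vec.rs (a purely hypothetical name):

cargo build --test hash_vec

The resulting binary under target/debug/deps/ then contains only the tests from that file.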
Related
I wrote some simple Boost tests.
The test compiles, but running it does not execute the test body.
Below is an example of a test.
// Windows uses Boost static libraries
#ifndef _WIN32
#define BOOST_TEST_DYN_LINK
#endif
#define BOOST_TEST_MODULE "SimpleTest"
#include <boost/test/unit_test.hpp>
#include "radarInterface/ObjConfiguration.h"
#include "radarInterface/ObjIF.h"
#include "CommonFunctions.h"
#ifndef BOOST_TEST
#define BOOST_TEST(A)
#endif
BOOST_AUTO_TEST_SUITE(_obj_interface_)
BOOST_AUTO_TEST_CASE(init_string)
{
    BOOST_TEST_MESSAGE("init_string");

    ObjConfiguration conf;
    conf.mcastAddress("225.0.0.40");
    conf.mcastPort(6310);
    conf.ipAddress("127.0.0.1");
    conf.tcpPort(6312);
    BOOST_TEST(conf.isComplete() == true);

    ObjIF objIf;
    BOOST_CHECK_NO_THROW(objIf.init(conf));
    usleep(3000000);
    ri.fini();
}
BOOST_AUTO_TEST_SUITE_END()
When I run it, everything looks fine, but in truth the test body is not executed.
I use CMake to compile and run tests.
The following is the result of running the tests with CMake (ctest) after their compilation.
Test project C:/Users/kongrosian/SimpleTest/build
Start 1: ObjInterface_test
1/1 Test #1: ObjInterface_test .............. Passed 0.03 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.05 sec
Even running it from the command line doesn't seem to work. Using the command
".\ObjInterface_test.exe --log_level=message --run_test=init_string"
I would expect to see at least the "init_string" message. Instead I simply get
Process PID: xxxx
If I use the test code written in the comments by sehe in my project the result is the same.
When executing commands
./sotest --log_level=message --run_test=_obj_interface_/init_string
or
./sotest --list_content
I get Process PID: xxxx as command line output.
Do you have any idea why this is happening?
The Boost version I'm using is 1.72.
I hope I have clarified things better.
It's unclear how you arrived at the conclusion that the test is not running. The output explicitly states that it is running the test (and that it passes).
However, the time consumed doesn't match your expectation, which leads to the guess that you may be running the wrong target (an old build, e.g. one from a build configuration that is not up to date).
Command Line
Given the test suite name, the command line argument should be like:
./sotest --log_level=message --run_test=_obj_interface_/init_string
Which for me prints:
Running 1 test case...
init_string
*** No errors detected
You can use some wildcards if you want:
./sotest -l all -t "*/init_string"
Running 1 test case...
Entering test module "SimpleTest"
/home/sehe/Projects/stackoverflow/test.cpp(20): Entering test suite "_obj_interface_"
/home/sehe/Projects/stackoverflow/test.cpp(22): Entering test case "init_string"
init_string
/home/sehe/Projects/stackoverflow/test.cpp(45): info: check conf.isComplete() == true has passed
/home/sehe/Projects/stackoverflow/test.cpp(53): info: check 'no exceptions thrown by objIf.init(conf)' has passed
/home/sehe/Projects/stackoverflow/test.cpp(22): Leaving test case "init_string"; testing time: 3000212us
/home/sehe/Projects/stackoverflow/test.cpp(20): Leaving test suite "_obj_interface_"; testing time: 3000270us
Leaving test module "SimpleTest"; testing time: 3000294us
*** No errors detected
Use --list_content to ... list the contents:
sehe ~ Projects stackoverflow ./sotest --list_content
_obj_interface_*
    init_string*
Warning About Naming
Names starting with an underscore (or containing a double underscore __) are reserved for the implementation. Using them makes your program ill-formed, and you may run into issues. See What are the rules about using an underscore in a C++ identifier?
So you should probably change the name of your test suite.
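For example, simply dropping the leading and trailing underscores keeps you out of reserved-identifier territory (the replacement name here is just a suggestion):

BOOST_AUTO_TEST_SUITE(obj_interface)   // instead of _obj_interface_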
I'm writing a project to learn how to use Rust and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized all the green was indicative not of passing tests, but of a failure to even run them. I just want to run the unit tests. Running cargo test alone, without a separate file argument, fails as well. Why can't I run any test in this project with my current setup?
It can't find your tests because the Rust compiler doesn't know about them. You need to add mod aggregates to your main.rs:
mod aggregates;

fn main() {
    println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file, to run a specific test:
cargo test min_works
cargo test aggregates
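For reference, here is a minimal sketch of what src/aggregates/mod.rs could contain so that the two commands above match something; the function and its body are assumptions, only the test name min_works comes from the commands:

// src/aggregates/mod.rs (hypothetical contents)
pub fn min(values: &[f64]) -> Option<f64> {
    values.iter().copied().fold(None, |acc, v| match acc {
        Some(m) if m <= v => Some(m),
        _ => Some(v),
    })
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn min_works() {
        // matched by `cargo test min_works` and by `cargo test aggregates`
        assert_eq!(min(&[3.0, 1.0, 2.0]), Some(1.0));
    }
}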
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run
I have a project with unit tests, and when I run ctest (CMake version 3.18.2 on macOS), success is reported for all tests. However, if I run one of the tests by itself, it has exit status 1. As far as I know, this shouldn't happen, so what is causing this and how can I fix it?
The issue was a small careless mistake, but for the benefit of others running into it, I provided an answer; the rest of the question can be skipped.
Unfortunately, I cannot reproduce this behavior with a smaller minimum working example. I will try to provide as much relevant information as possible, please let me know if I am missing something. Here is the code of the unit test:
#include "alignment_reader.h"
#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_COLOUR_NONE
#include "catch.h"
#include "string_conversions.h"
#include "exceptions.h"
namespace paste_alignments {
namespace test {
namespace {
SCENARIO("Test correctness of AlignmentReader::FromFile.",
"[AlignmentReader][FromFile][correctness]") {
GIVEN("The name of a valid input file.") {
std::string input_file{"test_data/valid_alignment_file.tsv"};
WHEN("Constructed with default number of fields and chunk size.") {
CHECK_NOTHROW(AlignmentReader::FromFile(input_file));
/*AlignmentReader reader{AlignmentReader::FromFile(input_file)};
THEN("Field number and chunk size are at default values.") {
CHECK(reader.NumFields() == 12);
CHECK(reader.ChunkSize() == 128 * 1000 * 1000);
}*/
}
}
}
} // namespace
} // namespace test
} // namespace paste_alignments
When I uncomment the commented part, the exit code changes to 2 and ctest still reports success.
Here is what happens when I run ctest (both with and without the commented portion):
$ ctest
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Start 1: alignment_reader_test
1/1 Test #1: alignment_reader_test ............ Passed 0.16 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.17 sec
Here is what I get if I run the test individually and check exit status (with the stuff commented out):
$ ./test/alignment_reader_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
alignment_reader_test is a Catch v2.12.2 host application.
Run with -? for options
-------------------------------------------------------------------------------
Scenario: Test correctness of AlignmentReader::FromFile.
Given: The name of a valid input file.
When: Constructed with default number of fields and chunk size.
-------------------------------------------------------------------------------
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:64
...............................................................................
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:65: FAILED:
CHECK_NOTHROW( AlignmentReader::FromFile(input_file) )
due to unexpected exception with message:
Unable to open file: 'test_data/valid_alignment_file.tsv'.
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
$ echo $?
1
The only things that change when I include the commented portion are that two tests fail and the exit status is 2 instead of 1.
Here with --verbose flag:
$ ctest --verbose
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: alignment_reader_test
1: Test command: /Users/Jasper/cpp_projects/PasteAlignments/debug/test/alignment_reader_test
1: Test timeout computed to be: 1500
1: ===============================================================================
1: All tests passed (1 assertion in 1 test case)
1:
1/1 Test #1: alignment_reader_test ............ Passed 0.36 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.36 sec
I didn't change the ctest configurations (as suggested here). CMakeLists.txt looks like this:
add_executable(alignment_reader_test
"${PROJECT_SOURCE_DIR}/test/alignment_reader_test.cc"
"${PROJECT_SOURCE_DIR}/src/alignment_reader.cc"
"${PROJECT_SOURCE_DIR}/src/alignment_batch.cc"
"${PROJECT_SOURCE_DIR}/src/scoring_system.cc"
"${PROJECT_SOURCE_DIR}/src/alignment.cc"
"${PROJECT_SOURCE_DIR}/src/helpers.cc")
target_include_directories(alignment_reader_test PUBLIC
"${PROJECT_SOURCE_DIR}/test"
"${PROJECT_SOURCE_DIR}/include"
"${PROJECT_SOURCE_DIR}/lib/catch/include")
add_test(NAME alignment_reader_test COMMAND alignment_reader_test)
I found the issue thanks to @Tsyvarev.
The build directory structure is as follows:
build/
|-- test/
|   |-- test_executable
|   |-- test_data/
|       |-- test_datafile
When CTest runs test_executable, it runs it from within the directory build/test.
When I ran the executable separately, I ran it from the build directory (./test/test_executable).
Inside the unit-test code, test_datafile is referred to as test_data/test_datafile. That relative path resolves when the test is run from build/test, but not when it is run from the build directory (as I did).
Therefore, when ctest ran the test, it actually succeeded as it should.
Indeed, if I cd into test first, the test has exit code 0, as it should.
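If you want the expected working directory to be explicit rather than implicit, one option (a sketch, not part of the original answer, assuming add_test() lives in test/CMakeLists.txt) is to pin it in the add_test() call, so it is documented which directory the test expects to run from:

# Run the test from its own binary directory so that the relative path
# test_data/test_datafile always resolves when CTest invokes it.
add_test(NAME alignment_reader_test
         COMMAND alignment_reader_test
         WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")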
I am working on GitLab and I would like to set up CI (it is the first time I am configuring something like this, so please assume that I am a beginner).
I wrote some code in C with a simple CUnit test, and I configured CI with a "build" job and a "test" job. My test job succeeds even though I wrote a failing (KO) test: when I open the job on GitLab I see the failed output, but the job is marked "Passed".
How can I configure GitLab to understand that the test failed?
I think there is a parsing configuration somewhere; I tried "CI/CD Settings -> Test coverage parsing", but I think that is the wrong setting, and it did not work.
Here is the output of my test:
CUnit - A unit testing framework for C - Version 2.1-2
http://cunit.sourceforge.net/
Suite: TEST SUITE FUNCTION
Test: Test of function::triple ...FAILED
1. main.c:61 - CU_ASSERT_EQUAL(triple(3),1)
Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests      1      1      0      1        0
             asserts      3      3      2      1      n/a
Elapsed time = 0.000 seconds
GitLab supports test reports in JUnit format and coverage reports in Cobertura XML format.
See the links for C++ examples that may help you. As an example for CUnit, the .gitlab-ci.yml file should look like:
cunit:
  stage: test
  script:
    - ./my-cunit-test
  artifacts:
    when: always
    reports:
      junit: ./my-cunit-test.xml
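Independent of the JUnit report, note that GitLab marks the job itself as failed based on the exit status of the script commands. A minimal sketch of a CUnit main that returns a non-zero status when any test fails (an illustration, not the question's code):

/* Run the registered CUnit suites and exit non-zero if any test failed,
 * so the CI job is marked as failed. */
#include <CUnit/Basic.h>

int main(void) {
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    /* ... CU_add_suite() / CU_add_test() calls go here ... */

    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();

    unsigned int failed = CU_get_number_of_tests_failed();
    CU_cleanup_registry();
    return failed == 0 ? 0 : 1;
}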
I am trying to run a Bamboo build which includes a batch script to run the TestNG test suite. Out of 6 test suites, 4 pass and 2 fail, but the build is reported as successful. I want the build to be marked as failed if any test in the test suites fails. Can anyone tell me how to configure this?
If your test suite fails, does it return a non-zero exit code?
One possible solution is to use a wrapper script that checks the status of your suite and returns a non-zero status code so that Bamboo will catch it.
Depending on how your test suites run, you can use something as simple as a shell script.
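A minimal sketch of such a wrapper, assuming the suite is currently started by a command called run-testng-suite.sh (a placeholder name):

#!/bin/sh
# Run the TestNG suite and propagate its failure to Bamboo via the exit code.
./run-testng-suite.sh
status=$?

if [ "$status" -ne 0 ]; then
    echo "Test suite failed with exit code $status"
    exit 1
fi

exit 0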