Boost unit test starts but does not execute - C++

I wrote some simple Boost tests.
The test compiles, but running it does not execute the test body.
Below is an example of a test.
// Windows uses Boost static libraries
#ifndef _WIN32
#define BOOST_TEST_DYN_LINK
#endif
#define BOOST_TEST_MODULE "SimpleTest"
#include <boost/test/unit_test.hpp>
#include "radarInterface/ObjConfiguration.h"
#include "radarInterface/ObjIF.h"
#include "CommonFunctions.h"
#ifndef BOOST_TEST
#define BOOST_TEST(A)
#endif
BOOST_AUTO_TEST_SUITE(_obj_interface_)
BOOST_AUTO_TEST_CASE(init_string)
{
    BOOST_TEST_MESSAGE("init_string");

    ObjConfiguration conf;
    conf.mcastAddress("225.0.0.40");
    conf.mcastPort(6310);
    conf.ipAddress("127.0.0.1");
    conf.tcpPort(6312);
    BOOST_TEST(conf.isComplete() == true);

    ObjIF objIf;
    BOOST_CHECK_NO_THROW(objIf.init(conf));
    usleep(3000000);
    objIf.fini();
}
BOOST_AUTO_TEST_SUITE_END()
When I run it, everything looks fine, but in truth the test body is not executed.
I use CMake to compile and run the tests.
The following is the result of running the tests with CTest after compiling them:
Test project C:/Users/kongrosian/SimpleTest/build
Start 1: ObjInterface_test
1/1 Test #1: ObjInterface_test .............. Passed 0.03 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.05 sec
Even running it from the command line doesn't seem to work. Using the command
".\ObjInterface_test.exe --log_level=message --run_test=init_string"
I would expect to see at least the "init_string" message. Instead I simply get
Process PID: xxxx
If I use the test code sehe posted in the comments in my project, the result is the same.
When executing the commands
./sotest --log_level=message --run_test=_obj_interface_/init_string
or
./sotest --list_content
I get Process PID: xxxx as the command-line output.
Do you have any idea why this happens?
The Boost version I'm using is 1.72.
I hope I have clarified things.

It's unclear how you arrive at the conclusion that the test is not running. The output explicitly states that it ran the test (and that it passed).
However, the time consumed doesn't match your expectation (the test body sleeps for three seconds, yet CTest reports 0.03 sec), which leads to the guess that you may be running the wrong target (an old build, e.g. one from a build configuration that is not up to date).
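For reference, here is roughly the self-contained version the comments referred to; since I don't have your radarInterface headers, ObjConfiguration and ObjIF are stand-in stubs below (their behaviour is assumed purely for illustration):
#ifndef _WIN32
#define BOOST_TEST_DYN_LINK
#endif
#define BOOST_TEST_MODULE "SimpleTest"
#include <boost/test/unit_test.hpp>
#include <unistd.h> // usleep (POSIX)

// Stand-in stubs so the example compiles without the radarInterface headers.
struct ObjConfiguration {
    void mcastAddress(const char*) {}
    void mcastPort(int) {}
    void ipAddress(const char*) {}
    void tcpPort(int) {}
    bool isComplete() const { return true; }
};

struct ObjIF {
    void init(const ObjConfiguration&) {}
    void fini() {}
};

BOOST_AUTO_TEST_SUITE(_obj_interface_)

BOOST_AUTO_TEST_CASE(init_string)
{
    BOOST_TEST_MESSAGE("init_string");

    ObjConfiguration conf;
    conf.mcastAddress("225.0.0.40");
    conf.mcastPort(6310);
    conf.ipAddress("127.0.0.1");
    conf.tcpPort(6312);
    BOOST_TEST(conf.isComplete() == true);

    ObjIF objIf;
    BOOST_CHECK_NO_THROW(objIf.init(conf));
    usleep(3000000);
    objIf.fini();
}

BOOST_AUTO_TEST_SUITE_END()
With these stubs the suite builds and runs as expected.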
Command Line
Given the test suite name, the command line argument should be like:
./sotest --log_level=message --run_test=_obj_interface_/init_string
Which for me prints:
Running 1 test case...
init_string
*** No errors detected
You can use some wildcards if you want:
./sotest -l all -t "*/init_string"
Running 1 test case...
Entering test module "SimpleTest"
/home/sehe/Projects/stackoverflow/test.cpp(20): Entering test suite "_obj_interface_"
/home/sehe/Projects/stackoverflow/test.cpp(22): Entering test case "init_string"
init_string
/home/sehe/Projects/stackoverflow/test.cpp(45): info: check conf.isComplete() == true has passed
/home/sehe/Projects/stackoverflow/test.cpp(53): info: check 'no exceptions thrown by objIf.init(conf)' has passed
/home/sehe/Projects/stackoverflow/test.cpp(22): Leaving test case "init_string"; testing time: 3000212us
/home/sehe/Projects/stackoverflow/test.cpp(20): Leaving test suite "_obj_interface_"; testing time: 3000270us
Leaving test module "SimpleTest"; testing time: 3000294us
*** No errors detected
Use --list_content to ... list the contents:
 sehe  ~  Projects  stackoverflow  ./sotest --list_content
_obj_interface_*
    init_string*
Warning About Naming
Names starting with an underscore (or containing double underscores, __) are reserved. Using them makes your program ill-formed, and you may run into issues. See What are the rules about using an underscore in a C++ identifier?
So you should probably change the name of your test suite.
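For example (obj_interface below is just an illustrative replacement name):
// Same suite, renamed to avoid reserved identifiers
BOOST_AUTO_TEST_SUITE(obj_interface)

BOOST_AUTO_TEST_CASE(init_string)
{
    // ... test body unchanged ...
}

BOOST_AUTO_TEST_SUITE_END()
The --run_test filter then becomes obj_interface/init_string (a wildcard such as */init_string still matches).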

Related

Rust tests fail to even run

I'm writing a project to learn how to use Rust, and I'm calling my project future-finance-labs. After writing some basic functions and verifying the app can be built, I wanted to include some tests, located in aggregates/mod.rs. [The tests are in the same file as the actual code, as per the documentation.] I'm unable to get the tests to run despite following the documentation to the best of my ability. I have tried to build the project using PowerShell as well as Bash. [It fails to run on Fedora Linux as well.]
Here is my output on Bash:
~/future-finance-labs$ cargo test -- src/formatters/mod.rs
Finished test [unoptimized + debuginfo] target(s) in 5.98s
Running target/debug/deps/future_finance_labs-16ed066e1ea3b9a1
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Using PowerShell I get the same output with some errors like the following:
error: failed to remove C:\Users\jhale\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs\home\jhale\future-finance-labs\target\debug\build\mime_guess-890328c8763afc22\build_script_build-890328c8763afc22.build_script_build.c22di3i8-cgu.0.rcgu.o: The system cannot find the path specified. (os error 3)
After my initial excitement at the prospect of writing a few tests that passed on the first attempt, I quickly realized that all the green was indicative, rather, of a failure to even run the tests. I just want to run the unit tests. Running cargo test on its own, without a separate file, fails as well. Why can't I run any test in this project with my current setup?
It can't find your test because the Rust compiler doesn't know about it. You need to add mod aggregates to main:
mod aggregates;

fn main() {
    println!("Hello, world!");
}
After you do that, you'll see that your aggregates/mod.rs doesn't compile, for many reasons.
And as Mihir was trying to say, you need to use the name of the test, not the name of the file, to run a specific test:
cargo test min_works
cargo test aggregates
See also:
How do I “use” or import a local Rust file?
Rust Book: Controlling How Tests Are Run

ctest reports success for test with exit status 1

I have a project with unit tests, and when I run ctest (CMake version 3.18.2 on macOS), success is reported for all tests. However, if I run one of the tests by itself, it has exit status 1. As far as I know, this shouldn't happen, so what is causing this and how can I fix it?
The issue was a small careless mistake, but for the benefit of others running into it, I have provided an answer below; the rest of the question can be skipped.
Unfortunately, I cannot reproduce this behavior with a smaller minimum working example. I will try to provide as much relevant information as possible, please let me know if I am missing something. Here is the code of the unit test:
#include "alignment_reader.h"
#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_COLOUR_NONE
#include "catch.h"
#include "string_conversions.h"
#include "exceptions.h"
namespace paste_alignments {
namespace test {
namespace {
SCENARIO("Test correctness of AlignmentReader::FromFile.",
"[AlignmentReader][FromFile][correctness]") {
GIVEN("The name of a valid input file.") {
std::string input_file{"test_data/valid_alignment_file.tsv"};
WHEN("Constructed with default number of fields and chunk size.") {
CHECK_NOTHROW(AlignmentReader::FromFile(input_file));
/*AlignmentReader reader{AlignmentReader::FromFile(input_file)};
THEN("Field number and chunk size are at default values.") {
CHECK(reader.NumFields() == 12);
CHECK(reader.ChunkSize() == 128 * 1000 * 1000);
}*/
}
}
}
} // namespace
} // namespace test
} // namespace paste_alignments
When I uncomment the commented part, the exit code changes to 2 and ctest still reports success.
Here is what happens when I run ctest (both with and without the commented portion):
$ ctest
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Start 1: alignment_reader_test
1/1 Test #1: alignment_reader_test ............ Passed 0.16 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.17 sec
Here is what I get if I run the test individually and check exit status (with the stuff commented out):
$ ./test/alignment_reader_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
alignment_reader_test is a Catch v2.12.2 host application.
Run with -? for options
-------------------------------------------------------------------------------
Scenario: Test correctness of AlignmentReader::FromFile.
Given: The name of a valid input file.
When: Constructed with default number of fields and chunk size.
-------------------------------------------------------------------------------
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:64
...............................................................................
/Users/Jasper/cpp_projects/PasteAlignments/test/alignment_reader_test.cc:65: FAILED:
CHECK_NOTHROW( AlignmentReader::FromFile(input_file) )
due to unexpected exception with message:
Unable to open file: 'test_data/valid_alignment_file.tsv'.
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
$ echo $?
1
The only things that change when I include the commented portion are that two tests fail and the exit status is 2 instead of 1.
Here with --verbose flag:
$ ctest --verbose
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
UpdateCTestConfiguration from :/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Parse Config file:/Users/Jasper/cpp_projects/PasteAlignments/debug/DartConfiguration.tcl
Test project /Users/Jasper/cpp_projects/PasteAlignments/debug
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: alignment_reader_test
1: Test command: /Users/Jasper/cpp_projects/PasteAlignments/debug/test/alignment_reader_test
1: Test timeout computed to be: 1500
1: ===============================================================================
1: All tests passed (1 assertion in 1 test case)
1:
1/1 Test #1: alignment_reader_test ............ Passed 0.36 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.36 sec
I didn't change the ctest configurations (as suggested here). CMakeLists.txt looks like this:
add_executable(alignment_reader_test
    "${PROJECT_SOURCE_DIR}/test/alignment_reader_test.cc"
    "${PROJECT_SOURCE_DIR}/src/alignment_reader.cc"
    "${PROJECT_SOURCE_DIR}/src/alignment_batch.cc"
    "${PROJECT_SOURCE_DIR}/src/scoring_system.cc"
    "${PROJECT_SOURCE_DIR}/src/alignment.cc"
    "${PROJECT_SOURCE_DIR}/src/helpers.cc")

target_include_directories(alignment_reader_test PUBLIC
    "${PROJECT_SOURCE_DIR}/test"
    "${PROJECT_SOURCE_DIR}/include"
    "${PROJECT_SOURCE_DIR}/lib/catch/include")

add_test(NAME alignment_reader_test COMMAND alignment_reader_test)
I found the issue thanks to @Tsyvarev.
The build directory structure is as follows:
build/
|-- test/
|   |-- test_executable
|   |-- test_data/
|       |-- test_datafile
When CTest runs test_executable, it runs it from within the test directory.
When I ran the executable separately, I ran it from the build directory (./test/test_executable).
Inside the unit-test code, test_datafile is referred to as test_data/test_datafile. This path is not found when the test is run from the build directory (as I did), as opposed to when it is run from the build/test directory.
Therefore, when ctest ran the test, it actually succeeded, as it should.
Indeed, if I cd into test first, the test has exit code 0, as it should.
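For others hitting the same thing: one way to make such a test independent of the directory it is launched from is to probe a couple of likely locations for the data file. A minimal sketch (assuming C++17; the helper name and the candidate prefixes are illustrative, not part of the project):
#include <filesystem>
#include <stdexcept>
#include <string>

// Hypothetical helper: look for a test data file under a few plausible
// prefixes, so the test passes whether it is started from build/ or
// from build/test/.
inline std::string FindTestData(const std::string& relative_path) {
    namespace fs = std::filesystem;
    for (const fs::path& prefix : {fs::path{"."}, fs::path{"test"}}) {
        fs::path candidate = prefix / relative_path;
        if (fs::exists(candidate)) {
            return candidate.string();
        }
    }
    throw std::runtime_error("test data not found: " + relative_path);
}

// Usage in the test (illustrative):
//   std::string input_file{FindTestData("test_data/valid_alignment_file.tsv")};
Alternatively, add_test accepts a WORKING_DIRECTORY argument, which makes the directory CTest starts the test from explicit in CMakeLists.txt.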

Compile specific tests into binary

I would like to compile a binary which runs a certain subset of tests. When I run the following, it works:
ubuntu@ubuntu-xenial:/ox$ cargo test hash::vec
Finished dev [unoptimized + debuginfo] target(s) in 0.11 secs
Running target/debug/deps/ox-824a031ff1732165
running 9 tests
test hash::vec::test_hash_entry::test_get_offset_tombstone ... ok
test hash::vec::test_hash_entry::test_get_offset_value ... ok
test hash::vec::test_hash_table::test_delete ... ok
test hash::vec::test_hash_table::test_delete_and_set ... ok
test hash::vec::test_hash_table::test_get_from_hash ... ok
test hash::vec::test_hash_table::test_get_non_existant_from_hash ... ok
test hash::vec::test_hash_table::test_override ... ok
test hash::vec::test_hash_table::test_grow_hash ... ok
test hash::vec::test_hash_table::test_set_after_filled_with_tombstones ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 8 filtered out
When I try to run target/debug/deps/ox-824a031ff1732165, it runs all my tests, not just the 9 specified in hash::vec.
I've tried to run `cargo rustc --test hash::vec` but I get `error: no test target named hash::vec`. `cargo rustc -- --test` works, but creates a binary that runs all tests. If I try `cargo rustc -- --test hash::vec`, I get:
Compiling ox v0.1.0 (file:///ox)
error: multiple input filenames provided
error: Could not compile `ox`.
`cargo rustc -h` says that you can pass NAME with the `--test` flag (`--test NAME`: Build only the specified test target), so I'm wondering what NAME is and how to pass it in so I get a binary that only runs the 9 tests specified in hash::vec.
You can't, at least not directly.
In the case of cargo test hash::vec, the hash::vec is just a substring matched against the full path of each test function when the test runner is executed. That is, it has absolutely no impact whatsoever on which tests get compiled, only on which tests run. In fact, this parameter is passed to the test runner itself; Cargo doesn't even interpret it itself.
In the case of --test NAME, NAME is the name of the test source. As in, passing --test blah tells Cargo to build and run the tests in tests/blah.rs. It's the same sort of argument as --bin NAME (for src/bin/NAME.rs) and --example NAME (for examples/NAME.rs).
If you really want to only compile a particular subset of tests, the only way I can think of is to use conditional compilation via features. You'd need a package feature for each subset of tests you want to be able to enable/disable.
This functionality has found its way into Cargo. cargo build now has a parameter
--test [<NAME>] Build only the specified test target
which builds a binary with the specified set of tests only.

Using Boost test detect_memory_leaks option on macOS

I am currently trying to use the detect_memory_leaks option with Boost on my Mac (OS: El Capitan 10.11.3). So far, every time I execute my test binary with the option --detect_memory_leaks=1, no matter how much I leak, Boost doesn't complain. If you want to reproduce my problem, here is a way to do it:
I'm using Boost version 1.59 and compile the unit test framework as static libraries. Then, I create two sample programs:
main.cpp:
#define BOOST_TEST_MODULE test module name
#include <boost/test/unit_test.hpp>
test.cpp:
#include <boost/test/unit_test.hpp>
BOOST_AUTO_TEST_SUITE( TestSuiteSample )
BOOST_AUTO_TEST_CASE( TestCaseSample )
{
    int * a = new int[3]; // This will leak memory, boost should complain
    BOOST_CHECK(true);
}
BOOST_AUTO_TEST_SUITE_END()
I compile my binary the following way:
g++ -I../boost_1_59_0 -L../boost_1_59_0/stage/lib -lboost_unit_test_framework -lboost_chrono -lboost_prg_exec_monitor -lboost_system -lboost_test_exec_monitor -lboost_timer -lboost_unit_test_framework main.cpp test.cpp
As you can see, I've included all the Boost libraries generated when compiling Boost.Test, but using only -lboost_unit_test_framework also compiles fine.
Now I have an executable a.out that I launch this way:
./a.out --detect_memory_leaks=1 --log_level=all --report_level=detailed
and I get the following result:
Running 1 test case...
Entering test module "test module name"
test.cpp:3: Entering test suite "TestSuiteSample"
test.cpp:5: Entering test case "TestCaseSample"
test.cpp:8: info: check true has passed
test.cpp:5: Leaving test case "TestCaseSample"; testing time: 60us
test.cpp:3: Leaving test suite "TestSuiteSample"; testing time: 82us
Leaving test module "test module name"; testing time: 105us
Test module "test module name" has passed with:
1 test case out of 1 passed
1 assertion out of 1 passed
Test suite "TestSuiteSample" has passed with:
1 test case out of 1 passed
1 assertion out of 1 passed
Test case "TestSuiteSample/TestCaseSample" has passed with:
1 assertion out of 1 passed
As you can see, there is no complaint from Boost about the new int[3] I didn't delete. At first, I thought the compiler was optimizing the code away and not even allocating my variable, but valgrind sees the leak as definitely lost:
==2571== by 0x10004E5BD: TestSuiteSample::TestCaseSample::test_method() (test.cpp:7)
I don't get what I'm doing wrong, but if anyone knows how to get the error generated by the leak in test.cpp, I'd be glad to hear it. I tried several ways to call the option and tried to figure out what to do from the Boost documentation, but so far, nothing seems to work.
Help is welcome :)
According to the documentation [1] [2] (emphasis mine):
On platforms where memory leak detection is possible inside of running application (at the moment this is only Windows family) you can switch this feature on and off using this interface.
Since you're on a Mac and building with g++, it won't do anything.
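For completeness, the interface the quoted documentation refers to can also be driven from code via <boost/test/debug.hpp>. A minimal sketch of how that might look on a platform that supports it (treat the exact call as an assumption and check the linked docs for your Boost version):
#define BOOST_TEST_MODULE leak_demo
#include <boost/test/unit_test.hpp>
#include <boost/test/debug.hpp>

// Global fixture that switches the leak detector on programmatically.
// On macOS/Linux this has no effect; only the Windows CRT-based
// detector reports leaks.
struct EnableLeakDetection {
    EnableLeakDetection() { boost::debug::detect_memory_leaks(true); }
};
BOOST_GLOBAL_FIXTURE(EnableLeakDetection);

BOOST_AUTO_TEST_CASE(LeakyCase)
{
    int* a = new int[3]; // intentionally leaked
    a[0] = 42;
    BOOST_CHECK(true);
}
Either way, on macOS you would need an external tool (as you already did with valgrind) to see the leak.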

Boost unit test failure detected in wrong test suite

I'm learning how to use the Boost Test Library at the moment, and I can't seem to get test suites to work correctly. In the following code 'test_case_1' fails correctly but it's reported as being in the Master Test Suite instead of 'test_suite_1'.
Anyone know what I'm doing wrong?
#define BOOST_AUTO_TEST_MAIN
#include <boost/test/auto_unit_test.hpp>
BOOST_AUTO_TEST_SUITE(test_suite_1);
BOOST_AUTO_TEST_CASE(test_case_1) {
    BOOST_REQUIRE_EQUAL(1, 2);
}
BOOST_AUTO_TEST_SUITE_END();
edit:
Ovanes' answer led me to understand the suite hierarchy better - in this case test_suite_1 is a sub-suite of the root suite, which by default is named 'Master Test Suite'. The default logging only shows the root suite, which isn't what I expected, but I can deal with it :)
You can set the root suite name by defining BOOST_TEST_MODULE - so an alternative version of the above example, which gives the expected error message, is:
#define BOOST_TEST_MODULE test_suite_1
#define BOOST_AUTO_TEST_MAIN
#include <boost/test/auto_unit_test.hpp>
BOOST_AUTO_TEST_CASE(test_case_1) {
    BOOST_REQUIRE_EQUAL(1, 2);
}
It depends on how you configure your logger to produce the report. For example, passing --log_level=all to your example will result in the following output:
Running 1 test case...
Entering test suite "Master Test Suite"
Entering test suite "test_suite_1"
Entering test case "test_case_1"
d:/projects/cpp/test/main.cpp(9): fatal error in "test_case_1": critical check 1 == 2 failed [1 != 2]
Leaving test case "test_case_1"
Leaving test suite "test_suite_1"
Leaving test suite "Master Test Suite"
*** 1 failure detected in test suite "Master Test Suite"
Here is the link to the command line config options of Boost Test Framework.
Regards,
Ovanes
Also, once you define BOOST_TEST_MODULE, you don't need to define BOOST_AUTO_TEST_MAIN.