How to tell Boost.Test to stop on first failing test case? - c++

I've got a number of Boost test cases organized into several test suites. Some test cases have one check, some have more than one.
However, when I execute all tests, every one of them runs, no matter how many fail or pass. I know that I can stop the execution of a single test case with several checks by using BOOST_REQUIRE instead of BOOST_CHECK, but that's not what I want.
How can I tell Boost to stop the whole execution after the first test case fails? I would prefer a compiled solution (e.g. realized with a global fixture) over a runtime solution (i.e. runtime parameters).

BOOST_REQUIRE will stop the current test case in a test suite, but the other test cases will still run.
I don't really see what you mean by a "compiled solution", but here is a trick that should work: I use a boolean to track the stability of the whole test suite. If it is unstable, i.e. a BOOST_REQUIRE has been triggered, then I stop the whole thing.
Hope it can help you.
#define BOOST_TEST_MODULE stop_on_first_failure
#include <boost/test/included/unit_test.hpp>
#include <cstdlib> // std::exit
//FIXTURES ZONE
struct fixture
{
    fixture() : x(0.0), y(0.0) {}
    double x;
    double y;
};
//HELPERS ZONE
// true as long as the previous test case reached its out_strategy() call,
// i.e. no BOOST_REQUIRE aborted it half-way through
static bool test_suite_stable = true;
void in_strategy(bool& stable)
{
    if (stable)
    {
        stable = false;          // mark "in flight"; cleared again on clean completion
    }
    else
    {
        std::exit(EXIT_FAILURE); // previous test case never finished: stop everything
    }
}
void out_strategy(bool& stable)
{
    if (!stable)
    {
        stable = true;           // test case completed without a fatal failure
    }
}
BOOST_AUTO_TEST_SUITE(my_test_suite)
//TEST CASES ZONE
BOOST_FIXTURE_TEST_CASE(my_test_case, fixture)
{
    in_strategy(test_suite_stable);
    //...
    //BOOST_REQUIRE(...) -> triggered, so out_strategy() below never runs
    out_strategy(test_suite_stable);
}
BOOST_FIXTURE_TEST_CASE(another_test_case, fixture)
{
    in_strategy(test_suite_stable); // -> exit(), because the previous case never reset the flag
    //...
    //BOOST_REQUIRE(...)
    out_strategy(test_suite_stable);
}
BOOST_AUTO_TEST_SUITE_END()
Benoit.

Why not just use assert? Not only do you abort the whole program immediately, you can also see the stack if necessary.
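For completeness, a minimal sketch of that idea (module name and values are made up): a plain assert() aborts the whole test binary on the first violation, at the cost of bypassing Boost.Test's reporting, and it disappears entirely in NDEBUG builds.
#define BOOST_TEST_MODULE abort_on_first_failure
#include <boost/test/included/unit_test.hpp>
#include <cassert>
BOOST_AUTO_TEST_CASE(first_case)
{
    int result = 41 + 1;   // illustrative computation
    assert(result == 42);  // failure calls abort(): no further test cases run
    BOOST_CHECK_EQUAL(result, 42);
}
BOOST_AUTO_TEST_CASE(second_case)
{
    // only reached if the assert above held
    BOOST_CHECK(true);
}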

Related

Missing capabilities for unit test

I've implemented a C++ class that executes something in a timed cycle using a thread. The thread is scheduled with the SCHED_DEADLINE scheduler of the Linux kernel. To set up the scheduler, the process running this must have certain Linux capabilities.
My question is: how do I test this?
I can of course write a unit test that creates the thread, does some counting and exits the test after a time to validate the cycle counter, but that only works if the unit test is allowed to apply the right scheduler. If not, the default scheduler applies, the timing of the cyclic loops becomes immediate, and the code therefore exhibits different behaviour.
How would you test this scenario?
Some Code Example:
void thread_handler() {
    // setup SCHED_DEADLINE parameters
    while (running) {
        // execute application logic
        sched_yield();
    }
}
There are two separate units to test here: first, the cyclic execution of code, and second, the strategy wrapping the OS interface. The first unit could look like this:
#include <atomic>
#include <functional>
#include <thread>
class CyclicThread : public std::thread {
public:
    // Caveat: the base-class thread starts before 'strategy' and 'running'
    // are initialised, so a real implementation should delay the start.
    CyclicThread(Strategy& strategy) :
        std::thread(std::bind(&CyclicThread::worker, this)),
        strategy(strategy) { }
    void add_task(std::function<void()> handler) {
        ...
    }
private:
    Strategy& strategy;
    std::atomic<bool> running{true};
    void worker() {
        while (running) {
            execute_handler();   // runs the queued handlers (not shown)
            strategy.yield();
        }
    }
};
This is fairly easy to test with a mock object of the strategy.
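For illustration, a hand-rolled mock could look roughly like this (the Strategy interface and names are assumptions, since the answer does not spell them out):
#include <atomic>
// Assumed interface behind the Strategy& used above.
struct Strategy {
    virtual void yield() = 0;
    virtual ~Strategy() {}
};
// Test double: records how often the cyclic worker yielded.
struct CountingStrategy : Strategy {
    std::atomic<int> yields{0};
    void yield() override { ++yields; }
};
// In a test: construct CyclicThread with a CountingStrategy, let it run
// briefly, stop it, then assert that yields (and the cycle counter) advanced.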
The Deadline scheduling strategy looks like this:
#include <sched.h>
class DeadlineStrategy {
public:
    void yield() {
        sched_yield();
    }
};
This class can also be tested fairly easily by mocking the sched_yield() system call.
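One way to do that, sketched here as an assumption rather than part of the original design, is to route the call through a replaceable function pointer so a test can substitute a fake:
#include <sched.h>
// Seam: defaults to the real system call, tests may repoint it.
static int (*yield_fn)(void) = &sched_yield;
class DeadlineStrategy {
public:
    void yield() {
        yield_fn();
    }
};
// In a test:
//   static int fake_calls = 0;
//   int fake_yield() { ++fake_calls; return 0; }
//   yield_fn = &fake_yield;
//   DeadlineStrategy s; s.yield();   // fake_calls is now 1, no real yield happened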

How to stop / kill my current test case?

I want to stop the execution of a test if it matches a certain scenario in order to avoid code duplication.
Consider the following situation:
CoreProviderTest
public void executeCoreSuccess(Object responseModel) {
    assertNotNull("Response successful", responseModel);
    if (responseModel == null) {
        // Kill test
    }
}
ChildProviderTest - extends CoreProviderTest
@Test
public void responseTester() {
    new Provider().getServiceResponse(new Provider.Interface() {
        @Override
        public void onSuccess(Object responseModel) {
            executeCoreSuccess(responseModel);
            // Continue assertions
        }
        @Override
        public void onFailure(ErrorResponseModel error) {
            executeCoreFailure(error);
        }
    });
}
For a null response, I would like to kill my current test case inside CoreProviderTest; otherwise further assertions might trigger exceptions. I wanted to avoid something like:
CoreProviderTest
if (responseModel == null) {
    return true;
}
ChildProviderTest
@Override
public void onSuccess(Object responseModel) {
    if (executeCoreSuccess(responseModel))
        return;
    // Continue assertions
}
Is there a way to kill the current test execution with Mockito, JUnit or Robolectric? I have had no luck so far googling an answer.
Thanks in advance
If you are using JUnit 5, it has features like assumptions, disabling tests, and conditional test execution.
Here's the link :
https://junit.org/junit5/docs/current/user-guide/#writing-tests-assumptions
In your case, it looks like assumingThat should work. Here's the API:
https://junit.org/junit5/docs/5.0.0/api/org/junit/jupiter/api/Assumptions.html#assumingThat-boolean-org.junit.jupiter.api.function.Executable-
JUnit assumptions suit the given case perfectly.
The code snippet now becomes:
CoreProvider
public void executeCoreSuccess(Object responseModel) {
    assumeTrue("Response successful", responseModel != null);
}
According to JUnit's documentation:
"A failed assumption does not mean the code is broken, but that the test provides no useful information. Assume basically means 'don't run this test if these conditions don't apply'. The default JUnit runner skips tests with failing assumptions."
+1 Adelin and Dossani

Parallelizing C-Code Module in C++ Program

My situation:
I have C code running on a microcontroller. To test this code I have written a test program in C++ that checks the C functions. Since the test functions are very slow, I want to run the whole thing in parallel. However, I don't have much experience with this.
For example, I have a program module in C that looks like this:
/* c-code: */
static int a = 0;
void set_a(int value){
    a = value;
}
void inc_a(void){
    a++;
}
int get_a(void){
    return a;
}
Now I want to parallelize calls to these functions from C++. However, I am bothered by the global variable a, which cannot be avoided in my situation.
In the Qt environment I want to perform an "asynchronous run" of the function inc_a. This works, but brings no improvement:
int foo(int somevalue){
    set_a(somevalue);
    inc_a();
    return get_a();
}
int myinput = 1,myoutput;
QFuture<int> future = QtConcurrent::run(foo,myinput);
future.waitForFinished();
myoutput = future.result();
This is what I want:
int myinput1 = 1,myoutput1;
int myinput2 = 8,myoutput2;
QFuture<int> future1 = QtConcurrent::run(foo,myinput1);
QFuture<int> future2 = QtConcurrent::run(foo,myinput2);
future1.waitForFinished();
future2.waitForFinished();
myoutput1 = future1.result();
myoutput2 = future2.result();
So my first question is (just to be sure): is it correct that the variable a (in C) is now shared by both threads? If not, I have to look over my code again. If yes, how do I solve the problem as elegantly as possible? I thought of creating two C program modules with the same functionality, but that makes the program very maintenance-unfriendly:
/* c-code: */
static int a1 = 0;
void set_a1(int value){
    a1 = value;
}
void inc_a1(void){
    a1++;
}
int get_a1(void){
    return a1;
}
static int a2 = 0;
void set_a2(int value){
    a2 = value;
}
void inc_a2(void){
    a2++;
}
int get_a2(void){
    return a2;
}
Is there a better way?
You are out of luck.
Ideally, rewrite your testable asset so that it carries round a state struct containing all those pesky globals, and maybe you will get away with it.
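As a rough sketch of that refactor (names are illustrative), the global a moves into a context object that every function takes explicitly, so each thread can own its own instance:
/* c-code: */
typedef struct {
    int a;
} module_state;
void ms_init(module_state *s)          { s->a = 0; }
void set_a(module_state *s, int value) { s->a = value; }
void inc_a(module_state *s)            { s->a++; }
int  get_a(const module_state *s)      { return s->a; }
/* Each QtConcurrent::run() task then works on its own module_state:
   int foo(int somevalue) {
       module_state s; ms_init(&s);
       set_a(&s, somevalue); inc_a(&s); return get_a(&s);
   }
*/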
Vroomfondel also suggests that wrapping the offending C code in a namespace might hide the issue, if the code can be made to compile as C++.
You could create as many namespaces as you want parallel threads:
namespace TEST1
{
    #include "offender.c"
}
namespace TEST2
{
    #include "offender.c"
}
RetCode DoTest(int instance, TestId testid)
{
    switch (instance)
    {
    case 1: return TEST1::DoTest(testid);
    case 2: return TEST2::DoTest(testid);
    }
    return OUT_OF_RANGE;
}
If your target really uses global state and can't be changed, then you could consider using forks.
In a fork, a complete copy of the current state is made for the child to run in, and both processes resume with just enough information to know which is the child and which is the parent. You can also set up a pipe for them to communicate with each other. When a test completes, it transmits its status and exits its forked process.
Forks can be really good for test suites because each fork starts with a completely clean environment.
There is a /lot/ more to getting forking right than I think is reasonable to put as an answer to this question.
The third option is to drive the program externally, so that some monitor script or program launches multiple parallel instances that each run linearly through a subset of the test list. Ideally build in features so the monitor can dispatch tests on demand and load-balance.
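A bare-bones sketch of such a driver (binary name and test slices are placeholders; a real monitor would add on-demand dispatch and load balancing):
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>
int main()
{
    // Each worker launches one instance of the test binary with its own slice.
    const std::vector<std::string> slices = {
        "./c_module_tests --slice=1",
        "./c_module_tests --slice=2",
    };
    std::vector<int> results(slices.size(), -1);
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < slices.size(); ++i)
        workers.emplace_back([&, i] { results[i] = std::system(slices[i].c_str()); });
    for (auto& w : workers) w.join();
    for (int rc : results)
        if (rc != 0) return 1;   // report failure if any slice failed
    return 0;
}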

Simultaneously running 2 or more boost testcases belonging to different test suites via cmd

Consider the following scenario:
BOOST_AUTO_TEST_SUITE(suite1)
BOOST_AUTO_TEST_CASE(case1)
{
    //my test code here
}
BOOST_AUTO_TEST_SUITE_END()
BOOST_AUTO_TEST_SUITE(suite2)
BOOST_AUTO_TEST_CASE(case1)
{
    //my test code here
}
BOOST_AUTO_TEST_CASE(case2)
{
    //my test code here
}
BOOST_AUTO_TEST_SUITE_END()
Now, if I want to run suite1/case1 and suite2/case2 at once, I try the following command line argument:
MyProject.exe --run_test="suite1/case1, suite2/case2"
But this doesn't seem to work.
I know that I can separately run these test cases, as:
MyProject.exe --run_test="suite1/case1"
and
MyProject.exe --run_test="suite2/case2"
But I want to run them together at one go. What should I do?
Thanks in advance :)
This is not a feature currently supported by Boost.Test. The documentation states that you can use a comma-separated list only if the tests are in the same suite:
"Running multiple test cases residing within the same test suite by listing their names in a comma separated list."
You can also use wildcards to select suites and test cases, but depending on the names of your suites and cases, you may not be able to limit the selection to just the two cases you desire.
http://www.boost.org/doc/libs/1_55_0/libs/test/doc/html/utf/user-guide/runtime-config/run-by-name.html
Edit It seems I might have taken the question title a bit too literally. Running tests simultaneously means "in parallel" to me.
Anyways, if you are happy to run suite2/case1 as well, you can just
MyProject.exe --run_test="suite1,suite2"
See it Live On Coliru too.
Old answer: What is wrong with running the two processes in parallel? By all means, uncomplicate!
However, if you insist, you can fork copies of the main process:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>
#include <string>
#include <vector>
static int relay_unit_test_main(std::vector<std::string> args);
int main()
{
    if (int const child_pid = fork())
    {
        int exit_code = relay_unit_test_main({"--run_test=suite1"});
        int child_status;
        while (-1 == waitpid(child_pid, &child_status, 0));
        if (!WIFEXITED(child_status)) {
            std::cerr << "Child process (" << child_pid << ") failed" << std::endl;
            return 1;
        }
        return exit_code ? exit_code : WEXITSTATUS(child_status);
    } else
    {
        return relay_unit_test_main({"--run_test=suite2"});
    }
}
See it Live On Coliru
The function relay_unit_test_main is really nothing more than a convenience wrapper around unit_test_main that avoids meddling with argv[] manually:
static bool init_function() { return true; }
static int relay_unit_test_main(std::vector<std::string> args)
{
    std::vector<char const*> c_args;
    c_args.push_back("fake_program_name");
    std::transform(args.begin(), args.end(), std::back_inserter(c_args), std::mem_fn(&std::string::data));
    c_args.push_back(nullptr);
    return unit_test_main( &init_function, c_args.size()-1, const_cast<char**>(c_args.data()) );
}
This actually spawns a child process, and even tries to combine the exit-code information usefully. Having a separate process prevents the problems you would get from using code that wasn't designed for multi-threaded use on different threads.
One caveat remains: if your program does static initializations before entry of main(), and these use external resources (log files, for example), there might be conflicts. See man fork(3), and see Does Boost Log support process forking? for an example of a library that has potential issues with fork().

optimizing branching by re-ordering

I have this sort of C function -- that is being called a zillion times:
void foo ()
{
    if (/*condition*/)
    {
    }
    else if (/*another_condition*/)
    {
    }
    else if (/*another_condition_2*/)
    {
    }
    /* And so on; I have 4 of them, but we can generalize it */
    else
    {
    }
}
I have a good test-case that calls this function, causing certain if-branches to be called more than the others.
My goal is to figure out the best way to arrange the if statements to minimize the branching.
The only way I can think of is to write to a file for every if condition branched to, thereby creating a histogram. This seems a tedious way to do it. Is there a better way, or better tools?
I am building it on AS3 Linux, using gcc 3.4, and using oprofile (opcontrol) for profiling.
It's not portable, but many versions of GCC support a function called __builtin_expect() that can be used to tell the compiler what we expect a value to be:
if (__builtin_expect(condition, 0)) {
    // We expect condition to be false (0), so we're less likely to get here
} else {
    // We expect to get here more often, so GCC produces better code
}
The Linux kernel uses these as macros to make them more intuitive, cleaner, and more portable (i.e. redefine the macros on non-GCC systems):
#ifdef __GNUC__
# define likely(x) __builtin_expect((x), 1)
# define unlikely(x) __builtin_expect((x), 0)
#else
# define likely(x) (x)
# define unlikely(x) (x)
#endif
With this, we can rewrite the above:
if(unlikely(condition)) {
// we're less likely to get here
} else {
// we expect to get here more often
}
Of course, this is probably unnecessary unless you're aiming for raw speed and/or you've profiled and found that this is a problem.
Try a profiler (gprof?) - it will tell you how much time is spent. I don't recall if gprof counts branches, but if not, just call a separate empty method in each branch.
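A sketch of that trick, with made-up marker names and placeholder conditions: each branch calls its own no-op function that the optimizer is prevented from removing, so the profiler's call counts become a per-branch histogram.
__attribute__((noinline)) static void hit_condition_1() { asm volatile(""); }
__attribute__((noinline)) static void hit_condition_2() { asm volatile(""); }
__attribute__((noinline)) static void hit_condition_3() { asm volatile(""); }
__attribute__((noinline)) static void hit_fallthrough() { asm volatile(""); }
void foo(int c)   // the parameter stands in for whatever the real conditions test
{
    if (c == 1)      { hit_condition_1(); /* original branch body */ }
    else if (c == 2) { hit_condition_2(); /* ... */ }
    else if (c == 3) { hit_condition_3(); /* ... */ }
    else             { hit_fallthrough(); /* ... */ }
}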
Running your program under Callgrind will give you branch information. Also, I hope you have profiled and actually determined that this piece of code is problematic, as this seems like a micro-optimization at best. The compiler will generate a jump table from the if/else-if chain if it is able to, which requires no branching (this depends on what the conditionals are, obviously), and even failing that, the branch predictor on your processor (assuming this is not embedded work; if it is, feel free to ignore me) is pretty good at determining the target of branches.
It doesn't actually matter what order you change them to, IMO. The branch predictor will store the most common branch and take it automatically anyway.
That said, there is something you could try: you could maintain a set of job queues and then, based on the if statements, assign the work to the correct job queue, executing the queues one after another at the end.
This could be optimised further by using conditional moves and so forth (this does require assembler, though, AFAIK). It could be done by conditionally moving a 1 into a register that is initialised to 0, depending on condition a. Place the pointer value at the end of the queue and then decide whether to increment the queue counter by adding that conditional 1 or 0 to it.
Suddenly you have eliminated all branches, and it becomes immaterial how many branch mispredictions there are. Of course, as with any of these things, you are best off profiling, because though it seems like it would provide a win, it may not.
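A rough sketch of that job-queue idea (Job, kind, and the queue count are invented for illustration; whether it actually beats plain branches is something only profiling can decide):
#include <cstddef>
#include <vector>
struct Job { int kind; /* ...payload... */ };
void dispatch(const std::vector<Job*>& jobs)
{
    const std::size_t kQueues = 4;
    std::vector<std::vector<Job*> > queue(kQueues, std::vector<Job*>(jobs.size()));
    std::size_t count[4] = {0, 0, 0, 0};
    for (std::size_t j = 0; j < jobs.size(); ++j)
        for (std::size_t q = 0; q < kQueues; ++q)
        {
            // Store unconditionally, advance the counter only on a match;
            // the compiler can lower the comparison to a conditional move.
            queue[q][count[q]] = jobs[j];
            count[q] += (jobs[j]->kind == static_cast<int>(q));
        }
    // Each queue is now processed straight through, with no unpredictable branches:
    // for (q...) for (i < count[q]) run(queue[q][i]);
}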
We use a mechanism like this:
// pseudocode
class ProfileNode
{
public:
    inline ProfileNode( const char * name ) : m_name(name)
    { }
    inline ~ProfileNode()
    {
        s_ProfileDict.Find(m_name).Value() += 1; // as if Value() returns a non-const ref
    }
    static DictionaryOfNodesByName_t s_ProfileDict;
    const char * m_name;
};
And then in your code
void foo ()
{
    if (/*condition*/)
    {
        ProfileNode("Condition A");
        // ...
    }
    else if (/*another_condition*/)
    {
        ProfileNode("Condition B");
        // ...
    } // etc..
    else
    {
        ProfileNode("Condition C");
        // ...
    }
}
void dumpinfo()
{
    ProfileNode::s_ProfileDict.PrintEverything();
}
And you can see how it's easy to put a stopwatch timer in those nodes too and see which branches are consuming the most time.
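A possible way to add that stopwatch, sketched with std::chrono and a plain std::map standing in for the original dictionary (note the node must be a named local so it lives until the end of the branch):
#include <chrono>
#include <cstdio>
#include <map>
#include <string>
#include <utility>
class TimedProfileNode
{
public:
    explicit TimedProfileNode(const char* name)
        : m_name(name), m_start(std::chrono::steady_clock::now()) {}
    ~TimedProfileNode()
    {
        const long long us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - m_start).count();
        std::pair<long long, long long>& e = s_totals[m_name];
        e.first += 1;     // hit count
        e.second += us;   // accumulated time spent in this branch
    }
    static void print_everything()
    {
        for (const auto& kv : s_totals)
            std::printf("%-16s %8lld hits %12lld us\n",
                        kv.first.c_str(), kv.second.first, kv.second.second);
    }
private:
    std::string m_name;
    std::chrono::steady_clock::time_point m_start;
    static std::map<std::string, std::pair<long long, long long> > s_totals;
};
std::map<std::string, std::pair<long long, long long> > TimedProfileNode::s_totals;
// Usage inside a branch:  TimedProfileNode node("Condition A");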
Some counters may help. After you see the counters, if there are large differences, you can sort the conditions in decreasing order.
static int cond_1, cond_2, cond_3, ...
void foo (){
    if (condition){
        cond_1++;
        ...
    }
    else if (/*another_condition*/){
        cond_2++;
        ...
    }
    else if (/*another_condition_2*/){
        cond_3++;
        ...
    }
    else{
        cond_N++;
        ...
    }
}
EDIT: a "destructor" can print the counters at the end of a test run:
void cond_print(void) __attribute__((destructor));
void cond_print(void){
printf( "cond_1: %6i\n", cond_1 );
printf( "cond_2: %6i\n", cond_2 );
printf( "cond_3: %6i\n", cond_3 );
printf( "cond_4: %6i\n", cond_4 );
}
I think it is enough to modify only the file that contains the foo() function.
Wrap the code in each branch into a function and use a profiler to see how many times each function is called.
Line-by-line profiling gives you an idea of which branches are called more often.
Using something like LLVM could make this optimization automatic.
As a profiling technique, this is what I rely on.
What you want to know is: is the time spent evaluating those conditions a significant fraction of execution time?
The samples will tell you that, and if it isn't, it just doesn't matter.
If it does matter, for example if the conditions include function calls that are on the stack a significant part of the time, then what you want to avoid is spending much time in comparisons that turn out false. The way you tell this is: if you often see it calling a comparison function from, say, the first or second if statement, catch it in such a sample and step out of it to see whether it returns false or true. If it typically returns false, it should probably move farther down the list.