I have some test code written using gmock. Due to some code changes, the test is no longer executing completely and finishes prematurely (I know this because I can see failure messages in the logs saying that many functions were expected to execute once but never ran). However, the compilation/execution does not fail, because the test gets an exception that it is expecting (the same exception is thrown in multiple places). So the test appears to pass, but it is not executing completely. How can I make gmock treat all warnings/failures as errors?
Using
::testing::GTEST_FLAG(throw_on_failure) = true
in the method where the tests were failing helped catch these failures while running the tests. The throw_on_failure flag causes GMock to throw an exception when a mock-related assertion fails.
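For illustration, a minimal sketch of setting the flag at the start of a test (the test and suite names here are made up):

#include <gmock/gmock.h>
#include <gtest/gtest.h>

TEST(WorkerTest, RunsEveryStep) {
  // Mock-related failures (e.g. an unmet EXPECT_CALL) now surface as
  // exceptions instead of being logged while the test still "passes".
  ::testing::GTEST_FLAG(throw_on_failure) = true;

  // ... set up mocks, EXPECT_CALL(...)s, and exercise the code under test.
}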
Assertions raise an error in test code, signalling that a test failed. If I expect a test to pass without raising an exception, why would I use assert_not_raised (or an equivalent) instead of just letting the raised exception fail the test?
I suppose it may be more explicit for others reading the test code, i.e. it signals that the thing being tested could raise an exception, but it still seems unnecessary.
I want to unit test a C++ function which throws aborting assertion error on invalid input.
The function is as follows:
uint64_t FooBar::ReadTimeStamp(std::string& name) {
  auto iter = hash_table_.find(name);
  assert(iter != hash_table_.end());
  ....
}
In unit test, I use CPPUNIT_ASSERT_ASSERTION_FAIL to assert on assertion failure:
void FooBarTest::TestReadNonexistentTimestamp() {
  CPPUNIT_ASSERT_ASSERTION_FAIL(ReadTimeStamp("NON_EXISTENT"));
}
But I got an abort message and the unit test failed.
I read this page. It's not clear to me whether I need to throw an exception here, or what the correct way to unit test this scenario would be. Thanks!
Firstly, your misunderstanding is caused by the different ways the term "assertion" is used. The test framework talks about assertions, but it does not mean the assert() macro provided by the standard library. Since a standard assertion failure causes program termination, you get those results.
Now, how to fix that:
Don't use assert(). Instead, you could throw an exception.
Don't test this code path. Since this is a programming error that's not recoverable anyway, it can only be caused by misuse of your code (i.e. violating preconditions). Since it's not your code that's at fault, not testing it doesn't have any negative impact on its quality.
Hijack assert() to throw a failure that CppUnit understands (see the sketch below). This could be tricky: for one, assert() is part of C++ and shouldn't be redefined carelessly (perhaps substituting it with a different macro would be better). Further, you now have three different behaviours: a throw in tests, abort() in regular use, and undefined behaviour (the check compiled out) with NDEBUG defined.
Which of these works best is up to you to decide, based on your use case and, admittedly, personal preference.
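For the third option, here is a minimal sketch of what the hijacking could look like (MY_ASSERT and the UNIT_TESTING flag are made-up names, not part of CppUnit):

#include <stdexcept>

#ifdef UNIT_TESTING
// Under test: a violated check becomes an exception the framework can catch.
#define MY_ASSERT(cond) \
  do { if (!(cond)) throw std::logic_error("assertion failed: " #cond); } while (0)
#else
#include <cassert>
// In regular builds the macro falls back to the standard assert().
#define MY_ASSERT(cond) assert(cond)
#endif

The code under test then uses MY_ASSERT instead of assert(), and the test can check for the exception with CPPUNIT_ASSERT_THROW(..., std::logic_error).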
Here it's discussed how to catch a failing assert(), e.g. you set up your fixture so that assert() fails and you get nice output. But what I need is the opposite. I want to test that assert() succeeds, but in case it fails I want to get nice output. As it stands, the run just terminates when it hits the failing assert().
#include <cassert>

#define LIMIT 5

struct Obj {
  int getIndex(int index) {
    assert(index < LIMIT);
    // do stuff;
    return index;
  }
};

Obj obj;

TEST(ObjTest, Fails_whenOutOfRange) {
  ASSERT_DEATH(obj.getIndex(6), "");
}

TEST(ObjTest, Succeeds_whenInRange) {
  obj.getIndex(4);
}
The above is a contrived example. I want the second test not to terminate the run if it fails, for example if I set LIMIT to 3. After all, ASSERT_DEATH somehow suppresses termination when assert() fails.
You should try using the command line option --gtest_break_on_failure
It is meant to run tests within a debugger, so you get a breakpoint upon test failure. If you don't use a debugger you'll just get a SEGFAULT and execution will stop.
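For reference, the same flag can also be set programmatically; a minimal sketch, assuming a standard gtest main():

#include <gtest/gtest.h>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);          // also parses --gtest_break_on_failure
  ::testing::GTEST_FLAG(break_on_failure) = true;  // equivalent to passing the flag
  return RUN_ALL_TESTS();
}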
The following is just my opinion, but it seems to me that you are either testing the wrong thing or using the wrong tool.
Assert (the C assert()) is not for verifying input; it is for catching impossible situations. It will disappear from release code, for example, so you can't rely on it.
What you should test is your function's specification rather than its implementation. And you should decide what your specification is for invalid input values:
Undefined behavior, so assert is fine, but you can't test it with a unit test, because undefined behavior is, well, undefined.
Defined behavior. Then you should be consistent regardless of whether NDEBUG is present. And throwing an exception, in my opinion, is the right thing to do here, instead of calling std::abort, which is almost useless for the user (it can't be intercepted and processed properly). See the sketch below.
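As a sketch of the second option, reusing the Obj example from the question (std::out_of_range is just one reasonable choice of exception):

#include <stdexcept>
#include <gtest/gtest.h>

#define LIMIT 5

struct Obj {
  int getIndex(int index) {
    if (index >= LIMIT)
      throw std::out_of_range("index out of range");  // defined behaviour in every build
    // do stuff;
    return index;
  }
};

TEST(ObjTest, Throws_whenOutOfRange) {
  Obj obj;
  EXPECT_THROW(obj.getIndex(6), std::out_of_range);
}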
If assert triggers (fails) you get "nice output" (or a crash or whatever assert does in your environment). If assert does not trigger then nothing happens and execution continues.
What more do you need to know?
This (hack) adds an EXPECT_NODEATH macro to Google Test. It is the "opposite" of EXPECT_DEATH in that it will pass if the statement does not assert, abort, or otherwise fail.
The general idea was simple, but I did not take the time to make the error messages any nicer. I tried to leave Google Test as untouched as possible and just piggy-back on what is already there. You should be able to include this without any side effects on the rest of Google Test.
For your case:
TEST(ObjTest, Succeeds_whenInRange) {
  EXPECT_NODEATH(obj.getIndex(4), "");
}
GTestNoDeath.h
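The linked header contains the author's actual implementation; as a rough sketch of the general idea only (not necessarily how GTestNoDeath.h does it), "does not die" can be expressed by reusing EXPECT_EXIT with a statement that exits cleanly afterwards:

#include <cstdlib>
#include <gtest/gtest.h>

// Passes if `statement` neither aborts nor otherwise kills the process:
// the death-test child runs the statement and then exits with code 0,
// which is exactly what EXPECT_EXIT is told to expect here.
#define EXPECT_NODEATH(statement, regex) \
  EXPECT_EXIT({ statement; std::exit(0); }, ::testing::ExitedWithCode(0), regex)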
In the developer code, there are many places where it calls assert(xyz):
(from assert.h)
#define assert(_Expression) (void)( (!!(_Expression)) || (_wassert(_CRT_WIDE(#_Expression), _CRT_WIDE(__FILE__), __LINE__), 0) )
When I run my tests through gtest and one of these asserts fails, then my executable completely shuts down.
I want a way for gtest to just catch this assert, fail the test, and then continue execution. Is this possible?
From Google Test's reference documentation:
How to Write a Death Test
Google Test has the following macros to support death tests: ASSERT_DEATH(statement, regex), ASSERT_EXIT(statement, predicate, regex), and the corresponding EXPECT_ variants,
where statement is a statement that is expected to cause the process to die, predicate is a function or function object that evaluates an integer exit status, and regex is a regular expression that the stderr output of statement is expected to match. Note that statement can be any valid statement (including a compound statement) and doesn't have to be an expression.
You can use these test macros to intercept native exit() or _exit() calls in the code under test, if these exit with a status different from 0.
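For example, a minimal sketch (ParseOrDie is a made-up function used only for illustration):

#include <cstdio>
#include <cstdlib>
#include <gtest/gtest.h>

// Hypothetical function under test that bails out on bad input.
void ParseOrDie(int value) {
  if (value < 0) {
    std::fprintf(stderr, "negative value\n");
    std::exit(2);
  }
}

TEST(ParseTest, DiesOnNegativeInput) {
  EXPECT_EXIT(ParseOrDie(-1), ::testing::ExitedWithCode(2), "negative value");
}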
As for your comment
"What if the test itself doesn't expect it, but it happens anyway? I don't want the rest of my execution to stop. Just that test to fail, then continue on."
Sorry, you can't prevent that. That's what assert() statements are designed for: they act as self-checks inside functions, verifying their inputs or the conditions they establish.
You may try to compile your test code and the code under test with the -DNDEBUG compiler option, but this will leave you with even more obscure issues, such as hitting undefined behavior.
If a test case is likely to hit an unexpected assertion, there's either something wrong with your test case's input values, or with the code being tested.
So you should set up reproducible conditions, such that either the test case fails with the assert (and the unit test runner carries on), or the whole thing blows up (exits the test runner process), which means your tested input didn't pass (and you'll need to change the test case, or fix the code under test).
Basically, if the code you are testing is broken, the test cannot continue.
To keep gtest from crashing, make sure the code you are testing at least compiles properly and that the input it is gathering is valid.
I am saying this not to be mean, but rather out of personal experience. I use gtest and gmock for my own projects. I have been playing around with code lately that was a bit out of my league (after all, the only way to grow is to stretch beyond your perceived limits).
The code was taking data from a data file, and this was crashing my tests, not because there was anything wrong with the tests themselves, but because I wasn't yet doing proper error checking in the functions reading from the file, and it threw a wrench into things when the program got a string where it wanted an integer.
Believe it or not, exceptions are a GOOD thing in tests. You don't want to just ignore them and move on, you want to figure out what is causing them and make it stop. That is the entire reason for testing.
I use the Boost Test framework to unit test my C++ code and wondered if it is possible to test if a function will assert? Yes, sounds a bit strange but bear with me! Many of my functions check the input parameters upon entry, asserting if they are invalid, and it would be useful to test for this. For example:
void MyFunction(int param)
{
  assert(param > 0); // param cannot be less than 1
  ...
}
I would like to be able to do something like this:
BOOST_CHECK_ASSERT(MyFunction(0), true);
BOOST_CHECK_ASSERT(MyFunction(-1), true);
BOOST_CHECK_ASSERT(MyFunction(1), false);
...
You can check for exceptions being thrown using Boost Test so I wondered if there was some assert magic too...
Having the same problem, I dug through the documentation (and code) and found a "solution".
The Boost UTF uses boost::execution_monitor (in <boost/test/execution_monitor.hpp>). This is designed with the aim of catching everything that could happen during test execution. When a failed assert is encountered, execution_monitor intercepts it and throws boost::execution_exception. Thus, by using BOOST_REQUIRE_THROW you may assert the failure of an assert.
so:
#include <boost/test/unit_test.hpp>
#include <boost/test/execution_monitor.hpp>  // for execution_exception

BOOST_AUTO_TEST_CASE(case_1)
{
  BOOST_REQUIRE_THROW(function_w_failing_assert(),
                      boost::execution_exception);
}
Should do the trick. (It works for me.)
However (some disclaimers):
It works for me. That is, on Windows XP, MSVC 7.1, Boost 1.41.0. It might be unsuitable or broken on your setup.
It might not be the intention of the author of Boost Test (although it seems to be the purpose of execution_monitor).
It will treat every form of fatal error the same way. I.e. it could be that something other than your assert is failing, in which case you could miss, e.g., a memory corruption bug, and/or miss a failed assert.
It might break on future boost versions.
I expect it would fail if run in a Release config, since the assert will be disabled and the code that the assert was meant to prevent will run, resulting in very undefined behavior.
If, in a Release config for MSVC, some assert-like or other fatal error occurred anyway, it would not be caught (see the execution_monitor docs).
Whether you use assert or not is up to you. I like them.
See:
http://www.boost.org/doc/libs/1_41_0/libs/test/doc/html/execution-monitor/reference.html#boost.execution_exception
the execution-monitor user-guide.
Also, thanks to Gennadiy Rozental (Author of Boost Test), if you happen to
read this, Great Work!!
There are two kinds of errors I like to check for: invariants and run-time errors.
Invariants are things that should always be true, no matter what. For those, I use asserts. Things like you shouldn't be passing me a zero pointer for the output buffer you're giving me. That's a bug in the code, plain and simple. In a debug build, it will assert and give me a chance to correct it. In a retail build, it will cause an access violation and generate a minidump (Windows, at least in my code) or a coredump (Mac/unix). There's no catch that I can do that makes sense to deal with dereferencing a zero pointer. On Windows catch (...) can suppress access violations and give the user a false sense of confidence that things are OK when they've already gone horribly, horribly wrong.
This is one reason why I've come to believe that catch (...) is generally a code smell in C++ and the only reasonable place where I can think of that being present is in main (or WinMain) right before you generate a core dump and politely exit the app.
Run-time errors are things like "I can't write this file because of permissions" or "I can't write this file because the disk is full". For these sorts of errors throwing an exception makes sense because the user can do something about it like change the permission on a directory, delete some files or choose an alternate location to save the file. These run-time errors are correctable by the user. A violation of an invariant can't be corrected by the user, only by a programmer. (Sometimes the two are the same, but typically they aren't.)
Your unit tests should force code to throw the run-time error exceptions that your code could generate. You might also want to force exceptions from your collaborators to ensure that your system under test is exception safe.
However, I don't believe there is value in trying to force your code to assert against invariants with unit tests.
I don't think so. You could always write your own assert which throws an exception and then use BOOST_CHECK_THROW() / BOOST_CHECK_NO_THROW() for that exception.
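A minimal sketch of that idea (MY_ASSERT and MyAssertFailed are made-up names, not Boost facilities):

#define BOOST_TEST_MODULE AssertSketch
#include <boost/test/included/unit_test.hpp>
#include <stdexcept>

// Hypothetical throwing replacement for assert() in the code under test.
struct MyAssertFailed : std::logic_error {
  using std::logic_error::logic_error;
};

#define MY_ASSERT(cond) \
  do { if (!(cond)) throw MyAssertFailed("assertion failed: " #cond); } while (0)

void MyFunction(int param) {
  MY_ASSERT(param > 0);  // param cannot be less than 1
}

BOOST_AUTO_TEST_CASE(my_function_asserts_on_bad_input)
{
  BOOST_CHECK_THROW(MyFunction(0), MyAssertFailed);  // the assert fires
  BOOST_CHECK_NO_THROW(MyFunction(1));               // the assert does not fire
}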
I think this question, and some of the replies, confuse run-time error detection with bug detection. They also confuse intent and mechanism.
A run-time error is something that can happen in a 100% correct program. It needs detection, it needs proper reporting and handling, and it should be tested. Bugs also happen, and for the programmer's convenience it's better to catch them early using precondition checks, invariant checks, or plain asserts. But these are a programmer's tool. The error message will make no sense to an ordinary user, and it does not seem reasonable to test a function's behaviour on data that a properly written program will never pass to it.
As for intent and mechanism, it should be noted that an exception is nothing magic. Some time ago, Peter Dimov said on the Boost mailing list (approximately) that "exceptions are just a non-local jump mechanism". And this is very true. If you have an application where it's possible to continue after some internal error, without the risk that something will be corrupted before the repair, you can implement a custom assert that throws a C++ exception. But it would not change the intent, and it won't make testing for asserts much more reasonable.
At work I ran into the same problem. My solution is to use a compile flag. When my flag GROKUS_TESTABLE is on, my GROKUS_ASSERT is turned into an exception, and with Boost you can test code paths that throw exceptions. When GROKUS_TESTABLE is off, GROKUS_ASSERT is translated to the C++ assert().
#if GROKUS_TESTABLE
#define GROKUS_ASSERT ... // exception
#define GROKUS_CHECK_THROW BOOST_CHECK_THROW
#else
#define GROKUS_ASSERT ... // assert
#define GROKUS_CHECK_THROW(statement, exception) {} // no-op
#endif
My original motivation was to aid debugging, i.e. assert() can be debugged quickly, while exceptions are often harder to debug in gdb. My compile flag seems to balance debuggability and testability pretty well.
Hope this helps
Sorry, but you're attacking your problem the wrong way.
"assert" is the spawn of the devil (a.k.a. "C") and is useless with any language that has proper exceptions. It's waaaaaay better to reimplement an assert-like functionality with exceptions. This way you actually get a chance of handling errors the right way (incl proper cleanup procedures) or triggering them at will (for unit testing).
Besides, if your code ever runs in Windows, when you fail an assertion you get a useless popup offering you to debug/abort/retry. Nice for automated unit tests.
So do yourself a favor and re-code an assert function that throws exceptions. There's one here:
How can I assert() without using abort()?
Wrap it in a macro so you get __FILE__ and __LINE__ (useful for debugging) and you're done.
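A minimal sketch of such a wrapper (the names are made up; adapt it to whichever throwing assert function you end up with):

#include <sstream>
#include <stdexcept>

// Hypothetical throwing assert: unlike abort(), the failure can be caught,
// cleaned up after, or checked for in a unit test.
inline void AssertOrThrow(bool ok, const char* expr, const char* file, int line) {
  if (!ok) {
    std::ostringstream msg;
    msg << "Assertion failed: " << expr << " at " << file << ":" << line;
    throw std::logic_error(msg.str());
  }
}

#define ASSERT_THROWING(cond) AssertOrThrow((cond), #cond, __FILE__, __LINE__)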