I have been using CppUnit as a unit testing framework and am now trying to use it in an automated build-and-package system. However, a problem holding me back is that if a crash occurs while the unit tests are running, e.g. a null pointer dereference, it halts the remainder of the automation.
Is there any way for CppUnit to recover from the exception, record the test failure and then exit gracefully rather than terminating the unit test process? Even an approach specific to null pointer dereferences would be useful, as they make up about 90% of the issues I have had.
To be technology-specific, I am using makefiles on a Windows system.
You're automating the execution of your CppUnit-based unit tests during your build process, right?
If you were trying to use CppUnit to execute the build process, I would be tempted to say: don't do that!
Could you tell us what is stopping the build process when the unit tests crash? And how are your unit tests started: by a makefile, a script of your own, or a continuous integration framework?
To try to answer your question: CppUnit cannot recover from access violations or segmentation faults. On Unix-like systems you should be able to catch SIGSEGV and continue, but in what state?
If your crashes occur in your unit tests and not in your product, then I'd recommend relying on assertion guards to avoid dereferencing NULL pointers:
#include <cppunit/TestCase.h>
#include <cppunit/extensions/HelperMacros.h>

class TestObject : public CPPUNIT_NS::TestCase
{
    CPPUNIT_TEST_SUITE(TestObject);
    CPPUNIT_TEST(testObjectIsReady);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp(void) {}
    void tearDown(void) {}

protected:
    void testObjectIsReady(void)
    {
        Object *theObject = GetObject();
        CPPUNIT_ASSERT_MESSAGE("check pointer is not null", theObject != NULL);
        //--- now you can play with your object without dereferencing a NULL pointer
        CPPUNIT_ASSERT_MESSAGE("check object is ready", theObject->isReady());
    }
};
Sorry to say this, but the previous answers you received on this are ridiculous.
CppUnit is really lacking in this regard. CppUnit should implement an EXIT_ON_FAIL macro which allows you to trap the access violation on Windows (using SetUnhandledExceptionFilter); then you can do any clean-up, allow CppUnit to report the failure via EXIT_ON_FAIL, and, after reporting, exit the application.
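For illustration, a minimal sketch of such a filter, assuming a Windows build (crashFilter and the exit code 3 are made-up; the reporting hook is left as a comment):

#include <windows.h>
#include <cstdio>
#include <cstdlib>

// Sketch: log the crash and exit with a distinctive code instead of
// letting Windows show the crash dialog and hang the automation.
LONG WINAPI crashFilter(EXCEPTION_POINTERS* info)
{
    std::fprintf(stderr, "Fatal exception 0x%08lX during test run\n",
                 static_cast<unsigned long>(info->ExceptionRecord->ExceptionCode));
    // ...report the currently running test as failed here...
    std::_Exit(3);  // the process state is unreliable; skip destructors
    return EXCEPTION_EXECUTE_HANDLER;  // not reached
}

int main(int argc, char* argv[])
{
    SetUnhandledExceptionFilter(&crashFilter);
    // ...create and run the CppUnit test runner here...
    return 0;
}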
In C/C++, the best way to recover from errors like that is to run each test in a separate process and then monitor it from a parent process. This is very easy on UNIX -- just fork() before the test begins. Check supports this, and you could likely patch CppUnit to have this behavior without much fuss.
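A rough sketch of that idea under POSIX (runIsolated is an illustrative name, not CppUnit API):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

// Run one test body in a child process; a crash (e.g. SIGSEGV from a
// null dereference) kills only the child, and the parent records a failure.
bool runIsolated(void (*test)())
{
    pid_t pid = fork();
    if (pid == 0) {
        test();      // child: run the test body
        _exit(0);    // report success via the exit status
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {
        std::fprintf(stderr, "test died with signal %d\n", WTERMSIG(status));
        return false;
    }
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}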
As an additional note to anyone perusing this question later, I've found UnitTest++ can catch exceptions in tests and just fail the test with appropriate information rather than resulting in a process exit.
I didn't try it, but if you're on Windows, I guess using SEH would help:
__try
{
    // run your test case here
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
    // record the failure here
}
Integrate it into the CppUnit framework, and every time you receive an unknown structured exception, mark the case as failed. A minimal helper is sketched below.
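For instance, a hedged sketch (runGuarded is a made-up name; MSVC requires that a function using __try contain no C++ objects needing unwinding, hence the plain function pointer):

#include <windows.h>

// Sketch: report any structured exception (access violation, etc.)
// raised by the test body as a failure instead of killing the process.
static bool runGuarded(void (*testBody)())
{
    __try {
        testBody();
        return true;
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        return false;  // the caller marks the current test case as failed
    }
}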
I want to unit test a C++ function which triggers an aborting assertion on invalid input.
The function is as follows:
uint64_t FooBar::ReadTimeStamp(std::string& name) {
    auto iter = hash_table_.find(name);
    assert(iter != hash_table_.end());
    ....
}
In the unit test, I use CPPUNIT_ASSERT_ASSERTION_FAIL to assert on the assertion failure:
void FooBarTest::TestReadNonexistentTimestamp() {
    CPPUNIT_ASSERT_ASSERTION_FAIL(ReadTimeStamp("NON_EXISTENT"));
}
But I got an abort message and the unit test failed.
I read this page. It's not clear to me whether I need to throw an exception here, or what the correct way to unit test this scenario would be. Thanks!
Firstly, your misunderstanding is caused by the different ways the term "assertion" is used. The test framework itself talks about assertions, but it does not mean the assert() macro provided by the standard library. Since a standard assertion failure causes program termination, you get the results you observed.
Now, how to fix that:
Don't use assert(). Instead, you could throw an exception.
Don't test this code path. Since this is a programming error that's not recoverable anyway, it can only be caused by misuse of your code (i.e. violating its preconditions). Since it's not your code that's at fault, not testing it doesn't have any negative impact on its quality.
Hijack assert() to throw a failure that CppUnit understands (see the sketch after this list). This could be tricky, because for one thing assert() is part of C++ and shouldn't be redefined carelessly (substituting a different macro of your own would be better). Furthermore, you then have three different behaviours: throw in tests, abort() in regular use, and UB with NDEBUG defined.
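To illustrate the third option, a minimal sketch assuming a project-specific macro (MY_ASSERT and the UNIT_TESTING flag are made-up names):

#include <sstream>
#include <stdexcept>

#ifdef UNIT_TESTING
// In test builds, failed checks throw instead of aborting.
#define MY_ASSERT(cond)                                              \
    do {                                                             \
        if (!(cond)) {                                               \
            std::ostringstream oss;                                  \
            oss << "assertion failed: " #cond                        \
                << " (" << __FILE__ << ":" << __LINE__ << ")";       \
            throw std::logic_error(oss.str());                       \
        }                                                            \
    } while (0)
#else
#include <cassert>
#define MY_ASSERT(cond) assert(cond)
#endif

The test can then use CPPUNIT_ASSERT_THROW(ReadTimeStamp("NON_EXISTENT"), std::logic_error); instead of CPPUNIT_ASSERT_ASSERTION_FAIL.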
Which of these works best is up to you to decide, based on your use case and, admittedly, personal preference.
I'm experiencing very weird behaviour. I was unable to isolate the problem in an MCVE, but I will as my investigation progresses.
I have a program, based on the CPPUNIT library and Qt, that runs ~900 unit tests. This program is deployed on Android using Qt Creator. It links with ~80 libraries, each one defining some tests.
On PC, the program runs perfectly. When deployed on Android, after some tests have run (~100), I start getting std::bad_cast exceptions for every dynamic_cast done within my tests. I can see it comes from places where I call dynamic_cast on a pointer, not on a reference. According to the documentation, std::bad_cast should only be thrown when dynamic_cast is called on a reference...
void validate( ParentTestHelper& testHelper )
{
    const ChildTestHelper* child = dynamic_cast<const ChildTestHelper*>( &testHelper );
    ...
}
However, my code throws std::bad_cast.
If I run only the test doing the dynamic_cast, it works. It only fails when run after other ones... and running the tests manually one by one does not let me reproduce the problem. There must be something weird somewhere leading to this issue, and I'm still investigating.
If anyone has an idea why dynamic_cast called on a pointer could throw std::bad_cast, that may help...
After a lot of investigation, I found out that the root cause of the problem was the fact that I had applied a custom facet to std::cout. At some point, this makes Boost code throw std::bad_cast while doing a use_facet inside its boost::posix_time::ptime operator<<.
See this other topic where hopefully someone will explain why: Why imbue with boost::posix_time::time_facet on std::cout crashs my app?
I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks. There is, however, a workaround for Microsoft Visual C++, but what about Linux?
If memory management is crucial for me, is it better to use another C++ unit-testing framework?
"If memory management is crucial for me, is it better to use another C++ unit-testing framework?"
I don't know about C++ unit testing, but I have used Dr. Memory; it works on Linux, Windows and Mac.
If you have the symbols, it even tells you on what line the memory leak happened! Really useful. :D More info:
http://drmemory.org/
Even though this thread is very old, I was searching for this lately.
I have now come up with a simple solution (inspired by https://stackoverflow.com/a/19315100/8633816).
Just write the following header:
#include "gtest/gtest.h"
#include <crtdbg.h>
class MemoryLeakDetector {
public:
MemoryLeakDetector() {
_CrtMemCheckpoint(&memState_);
}
~MemoryLeakDetector() {
_CrtMemState stateNow, stateDiff;
_CrtMemCheckpoint(&stateNow);
int diffResult = _CrtMemDifference(&stateDiff, &memState_, &stateNow);
if (diffResult)
reportFailure(stateDiff.lSizes[1]);
}
private:
void reportFailure(unsigned int unfreedBytes) {
FAIL() << "Memory leak of " << unfreedBytes << " byte(s) detected.";
}
_CrtMemState memState_;
};
Then just add a local MemoryLeakDetector to your Test:
TEST(TestCase, Test) {
    // Do memory leak detection for this test
    MemoryLeakDetector leakDetector;
    // Your test code
}
Example:
A test like:
TEST(MEMORY, FORCE_LEAK) {
    MemoryLeakDetector leakDetector;
    int* dummy = new int;  // never deleted, so the detector reports a leak
}
Produces a test failure with the message "Memory leak of 4 byte(s) detected."
I am sure there are better tools out there, but this is a very easy and simple solution.
"I'm surprised that Google C++ Testing Framework does not explicitly support checking for memory leaks."
It is not (and never was) intended to do so.
You can actually do some verification yourself, e.g. using Google Mock and setting up expected calls (e.g. for destructors). But a tool specialized in this aspect will certainly do better than anything you're able to write yourself.
"is it better to use another C++ unit-testing framework?"
So why bother looking for a different unit testing framework? It won't support such a feature either; at least, there's none I know of that does.
There are tools like Valgrind you can use: run your UnitTester executable under their control to detect memory leaks.
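For example (assuming Valgrind is installed and your test binary is called UnitTester):

valgrind --leak-check=full ./UnitTester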
Note:
The above advice to do this with the UnitTester executable won't catch all of the possible memory leaks from the final executable produced from your code; it just helps to find bugs/flaws in the actually tested code.
Not sure whether this worked in 2015, but since 2018 or so we have used GoogleTest with Clang's sanitizers, including LeakSanitizer, AddressSanitizer and UndefinedBehaviorSanitizer.
Just build the tests with sanitizers enabled; an example for a CMake-based project:
add_compile_options(-fsanitize=leak,address,undefined -fno-omit-frame-pointer -fno-common -O1)
link_libraries(-fsanitize=leak,address,undefined)
Memory leaks are a result of incorrect use of system interfaces. The unit test should check whether those interfaces are being used correctly by your unit under test, not what the implementation-specific result of any of those interfaces is. It should check that the memory allocation and deallocation interfaces used directly by your unit are being used as designed. Testing the system-specific results would be part of component or integration testing. In the unit test, the memory management interfaces are external to the unit under test and thus should be stubbed out with a test implementation. A minimal sketch of such a stub follows.
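A possible shape for such a seam (the Allocator interface and CountingAllocator are illustrative names, not from any framework):

#include <cstddef>
#include <new>

// The unit under test allocates through this seam instead of calling
// new/delete directly, so a test double can observe the usage.
struct Allocator {
    virtual void* allocate(std::size_t n) = 0;
    virtual void deallocate(void* p) = 0;
    virtual ~Allocator() {}
};

// Test implementation: counts live allocations so the test can assert
// that everything the unit allocated was released.
struct CountingAllocator : Allocator {
    int live = 0;
    void* allocate(std::size_t n) override { ++live; return ::operator new(n); }
    void deallocate(void* p) override { --live; ::operator delete(p); }
};

A test then passes a CountingAllocator to the unit and asserts that live == 0 after tearing the unit down.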
I use the Boost Test framework to unit test my C++ code, and I wondered whether it is possible to test if a function will assert. Yes, it sounds a bit strange, but bear with me! Many of my functions check their input parameters upon entry, asserting if they are invalid, and it would be useful to test for this. For example:
void MyFunction(int param)
{
    assert(param > 0); // param cannot be less than 1
    ...
}
I would like to be able to do something like this:
BOOST_CHECK_ASSERT(MyFunction(0), true);
BOOST_CHECK_ASSERT(MyFunction(-1), true);
BOOST_CHECK_ASSERT(MyFunction(1), false);
...
You can check for exceptions being thrown using Boost Test so I wondered if there was some assert magic too...
Having the same problem, I dug through the documentation (and code) and found a "solution".
The Boost UTF uses boost::execution_monitor (in <boost/test/execution_monitor.hpp>). This is designed with the aim of catching everything that could happen during test execution. When an assert fires, execution_monitor intercepts it and throws boost::execution_exception. Thus, by using BOOST_REQUIRE_THROW you may assert the failure of an assert.
so:
#include <boost/test/unit_test.hpp>
#include <boost/test/execution_monitor.hpp>  // for execution_exception

BOOST_AUTO_TEST_CASE(case_1)
{
    BOOST_REQUIRE_THROW(function_w_failing_assert(),
                        boost::execution_exception);
}
This should do the trick. (It works for me.)
However (or disclaimers):
It works for me. That is, on Windows XP, MSVC 7.1, Boost 1.41.0. It might be unsuitable or broken on your setup.
It might not be the intention of the author of Boost Test (although it seems to be the purpose of execution_monitor).
It will treat every form of fatal error the same way. I.e., it could be that something other than your assert is failing. In that case you could miss, e.g., a memory corruption bug, and/or miss a failed assert.
It might break in future Boost versions.
I expect it would fail if run in a Release configuration, since the assert will be disabled and the code that the assert was meant to guard will run, resulting in very undefined behavior.
If, in a Release configuration for MSVC, some assert-like or other fatal error occurs anyway, it will not be caught (see the execution_monitor docs).
Whether you use asserts or not is up to you. I like them.
See:
http://www.boost.org/doc/libs/1_41_0/libs/test/doc/html/execution-monitor/reference.html#boost.execution_exception
and the execution-monitor user guide.
Also, thanks to Gennadiy Rozental (author of Boost Test), if you happen to read this: Great work!!
There are two kinds of errors I like to check for: invariants and run-time errors.
Invariants are things that should always be true, no matter what. For those, I use asserts. Things like: you shouldn't be passing me a zero pointer for the output buffer you're giving me. That's a bug in the code, plain and simple. In a debug build, it will assert and give me a chance to correct it. In a retail build, it will cause an access violation and generate a minidump (Windows, at least in my code) or a coredump (Mac/Unix). There's no catch I can write that makes sense for dealing with dereferencing a zero pointer. On Windows, catch (...) can suppress access violations and give the user a false sense of confidence that things are OK when they've already gone horribly, horribly wrong.
This is one reason why I've come to believe that catch (...) is generally a code smell in C++ and the only reasonable place where I can think of that being present is in main (or WinMain) right before you generate a core dump and politely exit the app.
Run-time errors are things like "I can't write this file because of permissions" or "I can't write this file because the disk is full". For these sorts of errors throwing an exception makes sense because the user can do something about it like change the permission on a directory, delete some files or choose an alternate location to save the file. These run-time errors are correctable by the user. A violation of an invariant can't be corrected by the user, only by a programmer. (Sometimes the two are the same, but typically they aren't.)
Your unit tests should force code to throw the run-time error exceptions that your code could generate. You might also want to force exceptions from your collaborators to ensure that your system under test is exception safe.
However, I don't believe there is value in trying to force your code to assert against invariants with unit tests.
I don't think so. You could always write your own assert which throws an exception, then use BOOST_CHECK_THROW() for that exception (and BOOST_CHECK_NOTHROW() for the cases that must not assert).
I think this question, and some of the replies, confuse run-time error detection with bug detection. They also confuse intent and mechanism.
A run-time error is something that can happen in a 100% correct program. It needs detection, it needs proper reporting and handling, and it should be tested. Bugs also happen, and for the programmer's convenience it's better to catch them early using precondition checks, invariant checks or random asserts. But this is a programmer's tool. The error message will make no sense to an ordinary user, and it does not seem reasonable to test a function's behaviour on data that a properly written program will never pass to it.
As for intent and mechanism, it should be noted that an exception is nothing magic. Some time ago, Peter Dimov said on the Boost mailing list (approximately) that "exceptions are just a non-local jump mechanism". And this is very true. If you have an application where it's possible to continue after some internal error, without the risk that something will be corrupted before repair, you can implement a custom assert that throws a C++ exception. But that would not change the intent, and it won't make testing for asserts much more reasonable.
At work I ran into the same problem. My solution is to use a compile flag. When my flag GROKUS_TESTABLE is on, my GROKUS_ASSERT turns into an exception, and with Boost you can test code paths that throw exceptions. When GROKUS_TESTABLE is off, GROKUS_ASSERT translates to the C++ assert().
#if GROKUS_TESTABLE
#define GROKUS_ASSERT ... // exception
#define GROKUS_CHECK_THROW BOOST_CHECK_THROW
#else
#define GROKUS_ASSERT ... // assert
#define GROKUS_CHECK_THROW(statement, exception) {} // no-op
#endif
My original motivation was to aid debugging, i.e. assert() can be debugged quickly, while exceptions are often harder to debug in gdb. My compile flag seems to balance debuggability and testability pretty well.
Hope this helps.
Sorry, but you're attacking your problem the wrong way.
"assert" is the spawn of the devil (a.k.a. "C") and is useless with any language that has proper exceptions. It's waaaaaay better to reimplement an assert-like functionality with exceptions. This way you actually get a chance of handling errors the right way (incl proper cleanup procedures) or triggering them at will (for unit testing).
Besides, if your code ever runs in Windows, when you fail an assertion you get a useless popup offering you to debug/abort/retry. Nice for automated unit tests.
So do yourself a favor and re-code an assert function that throws exceptions. There's one here:
How can I assert() without using abort()?
Wrap it in a macro so you get __FILE__ and __LINE__ (useful for debugging) and you're done. A possible shape is sketched below.
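A minimal sketch of that idea (assertThrow and ASSERT_THROWING are illustrative names):

#include <stdexcept>
#include <string>

// Throwing replacement for assert(); the macro below captures the
// failing expression and the call site.
inline void assertThrow(bool ok, const char* expr, const char* file, int line)
{
    if (!ok)
        throw std::logic_error(std::string(expr) + " failed at "
                               + file + ":" + std::to_string(line));
}

#define ASSERT_THROWING(expr) assertThrow((expr), #expr, __FILE__, __LINE__)

With that in place, BOOST_CHECK_THROW(MyFunction(0), std::logic_error) expresses the BOOST_CHECK_ASSERT from the question.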
I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement as sort of a todo-list. To make sure I don't forget I put an Assert.Fail() in the body.
When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false) but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so why?
#Martin Meredith
That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once. Or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test I neatly work test-first.
#Jimmeh
That looks like a good idea. Ignored tests don't fail but they still show up in a separate list. Have to try that out.
#Matt Howells
Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.
#Mitch Wheat
That's what I was looking for. It seems it was left out to prevent it being abused in another way I abuse it.
For this scenario, rather than calling Assert.Fail, I do the following (in C# / NUnit)
[Test]
public void MyClassDoesSomething()
{
    throw new NotImplementedException();
}
It is more explicit than an Assert.Fail.
There seems to be general agreement that it is preferable to use more explicit assertions than Assert.Fail(). Most frameworks have to include it though because they don't offer a better alternative. For example, NUnit (and others) provide an ExpectedExceptionAttribute to test that some code throws a particular class of exception. However in order to test that a property on the exception is set to a particular value, one cannot use it. Instead you have to resort to Assert.Fail:
[Test]
public void ThrowsExceptionCorrectly()
{
    const string BAD_INPUT = "bad input";
    try
    {
        new MyClass().DoSomething(BAD_INPUT);
        Assert.Fail("No exception was thrown");
    }
    catch (MyCustomException ex)
    {
        Assert.AreEqual(BAD_INPUT, ex.InputString);
    }
}
The xUnit.Net method Assert.Throws makes this a lot neater without requiring an Assert.Fail method. By not including an Assert.Fail() method xUnit.Net encourages developers to find and use more explicit alternatives, and to support the creation of new assertions where necessary.
It was deliberately left out. This is Brad Wilson's reply as to why there is no Assert.Fail():
We didn't overlook this, actually. I find Assert.Fail is a crutch which implies that there is probably an assertion missing. Sometimes it's just the way the test is structured, and sometimes it's because Assert could use another assertion.
I've always used Assert.Fail() for handling cases where you've detected that a test should fail through logic beyond simple value comparison. As an example:
try
{
    // Some code that should throw ExceptionX
    Assert.Fail("ExceptionX should be thrown");
}
catch (ExceptionX ex)
{
    // test passed
}
Thus the lack of Assert.Fail() in the framework looks like a mistake to me. I'd suggest patching the Assert class to include a Fail() method, and then submitting the patch to the framework developers, along with your reasoning for adding it.
As for your practice of creating tests that intentionally fail in your workspace, to remind yourself to implement them before committing, that seems like a fine practice to me.
I use MbUnit for my Unit Testing. They have an option to Ignore tests, which show up as Orange (rather than Green or Red) in the test suite. Perhaps xUnit has something similar, and would mean you don't even have to put any assert into the method, because it would show up in an annoyingly different colour making it hard to miss?
Edit:
In MbUnit it is in the following way:
[Test]
[Ignore]
public void YourTest()
{ }
This is the pattern that I use when writing a test for code that I want to throw an exception by design:
[TestMethod]
public void TestForException()
{
    Exception _Exception = null;
    try
    {
        // Code that I expect to throw the exception.
        MyClass _MyClass = null;
        _MyClass.SomeMethod();
        // Code that I expect to throw the exception.
    }
    catch (Exception _ThrownException)
    {
        _Exception = _ThrownException;
    }
    finally
    {
        Assert.IsNotNull(_Exception);
        // Replace NullReferenceException with the expected exception.
        Assert.IsInstanceOfType(_Exception, typeof(NullReferenceException));
    }
}
IMHO this is a better way of testing for exceptions than using Assert.Fail(). The reason is that not only do I test that an exception is thrown at all, but I also test for the exception type. I realise this is similar to the answer from Matt Howells, but IMHO using the finally block is more robust.
Obviously it would still be possible to include other Assert methods to test the exception's input string, etc. I would be grateful for your comments and views on my pattern.
Personally I have no problem with using a test suite as a todo list like this as long as you eventually get around to writing the test before you implement the code to pass.
Having said that, I used to use this approach myself, although now I'm finding that doing so leads me down a path of writing too many tests upfront, which in a weird way is like the reverse problem of not writing tests at all: you end up making decisions about design a little too early IMHO.
Incidentally in MSTest, the standard Test template uses Assert.Inconclusive at the end of its samples.
AFAIK the xUnit.NET framework is intended to be extremely lightweight and yes they did cut Fail deliberately, to encourage the developer to use an explicit failure condition.
Wild guess: withholding Assert.Fail is intended to stop you thinking that a good way to write test code is as a huge heap of spaghetti leading to an Assert.Fail in the bad cases. [Edit to add: other people's answers broadly confirm this, but with quotations]
Since that's not what you're doing, it's possible that xUnit.Net is being over-protective.
Or maybe they just think it's so rare and so unorthogonal as to be unnecessary.
I prefer to implement a function called ThisCodeHasNotBeenWrittenYet (actually something shorter, for ease of typing). You can't communicate intention more clearly than that, and you have a precise search term.
Whether that fails, is not implemented (to provoke a linker error), or is a macro that doesn't compile can be changed to suit your current preference. For instance, when you want to run something that is finished, you want a failure. When you're sitting down to get rid of them all, you may want a compile error.
With the good code I usually do:
void goodCode() {
    // TODO void goodCode()
    throw new UnsupportedOperationException("void goodCode()");
}
With the test code I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    Assert.fail("Some descriptive text about what to test");
}
If using JUnit, and I don't want to get a failure but an error, then I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    throw new UnsupportedOperationException("Some descriptive text about what to test");
}
Beware Assert.Fail and its corrupting influence: it leads developers to write silly or broken tests. For example:
[TestMethod]
public void TestWork()
{
    try {
        Work();
    }
    catch {
        Assert.Fail();
    }
}
This is silly, because the try-catch is redundant. A test fails if it throws an exception.
Also
[TestMethod]
public void TestDivide()
{
    try {
        Divide(5, 0);
        Assert.Fail();
    } catch { }
}
This is broken: the test will always pass, whatever the outcome of the Divide function, because the bare catch also swallows the exception Assert.Fail() throws. Again, a test fails if and only if it throws an exception.
If you're writing a test that just fails, then writing the code for it, and only then writing the real test, that isn't Test Driven Development.
Technically, Assert.Fail() shouldn't be needed if you're doing test-driven development correctly.
Have you thought of using a Todo List, or applying a GTD methodology to your work?
MSTest has Assert.Fail(), but it also has Assert.Inconclusive(). I think the most appropriate use for Assert.Fail() is when you have some in-line logic that would be awkward to put in an assertion, although I can't even think of any good examples. For the most part, if the test framework supports something other than Assert.Fail(), use that.
I think you should ask yourself what (upfront) testing should do.
First, you write a (set of) test(s) without an implementation.
Maybe also the rainy-day scenarios.
All those tests must fail in order to be correct tests:
So you want to achieve two things:
1) Verify that your implementation is correct;
2) Verify that your unit tests are correct.
Now, if you do upfront TDD, you want to execute all your tests, including the NYI (not yet implemented) parts.
The result of your total test run passes if:
1) All implemented stuff succeeds;
2) All NYI stuff fails.
After all, it would be a unit test omission if your unit tests succeeded while there is no implementation, wouldn't it?
You want to end up with something like an email from your continuous integration test that checks all implemented and not-implemented code, and is sent if any implemented code fails or any not-implemented code succeeds. Both are undesired results.
Just writing [Ignore] tests won't do the job.
Neither will an assert that stops at the first assert failure, without running the other test lines in the test.
Now, how to achieve this?
I think it requires some more advanced organisation of your testing.
And it requires some mechanism other than asserts to achieve these goals.
I think you have to split up your tests and create some tests that run completely but must fail, and vice versa.
Ideas are to split your tests over multiple assemblies and to use grouping of tests (ordered tests in MSTest may do the job).
Still, a CI build that emails you unless all tests in the NYI department fail is not easy or straightforward.
Why would you use Assert.Fail for saying that an exception should be thrown? That is unnecessary. Why not just use the ExpectedException attribute?
This is our use case for Assert.Fail().
One important goal for our Unit tests is that they don't touch the database.
Sometimes mocking doesn't happen properly, or application code is modified and a database call is inadvertently introduced.
This can be quite deep in the call stack. The exception may be caught so it won't bubble up, or, because the tests initially run with a database, the call will work.
What we've done is add a config value to the unit test project so that, when the database connection is first requested, we can call Assert.Fail("Database accessed");
Assert.Fail() acts globally, even in different libraries, so this acts as a catch-all for all of the unit tests.
If any one of them hits the database in a unit test project then they will fail.
We therefore fail fast.