C++ Google Test: division by zero

I'm learning to write unit tests, and have started with an easy "Calculator"-class that I wanted to test.
I figured out how to use the EXPECT/ASSERT functions, and what test cases etc. are, but I got a problem when I wanted to test the division by zero. Is there any possibility to test it? I mean, what should I write as test result? Is there anything like "ERROR"? Or do I have to use exceptions?
These are my tests so far:
TEST(TestCalc, TestPos)
{
    Calc calculate;
    EXPECT_EQ(10.0, calculate.add(5.0, 5.0));
    EXPECT_EQ(9, calculate.mul(3, 3));
    EXPECT_EQ(9, calculate.div(27, 3));
    EXPECT_EQ(9, calculate.sub(12, 3));
}
TEST(TestCalc, TestNeg)
{
    Calc calculate;
    EXPECT_EQ(-1.0, calculate.add(5.0, -6.0));
    EXPECT_EQ(-9, calculate.mul(3, -3));
    EXPECT_EQ(-9, calculate.div(27, -3));
    EXPECT_EQ(15, calculate.sub(12, -3));
}
TEST(TestCalc, TestZero)
{
    Calc calculate;
    EXPECT_EQ(10.0, calculate.add(5.0, 0));
    EXPECT_EQ(9, calculate.mul(3, 0));
    EXPECT_EQ(, calculate.div(27, 0));
    EXPECT_EQ(12, calculate.sub(12, 0));
}

I don't agree with @Ketzu. You have an expectation of how the calculator should behave when dividing by zero.
EXPECT_EQ(, calculate.div(27,0));
This expectation is perhaps not well formulated in this test.
If calculate.div(27,0) throws an exception, then you can catch this exception, and your test fails if it is not thrown. You can write something like this:
TEST(ExceptionTest, ExpectThrowsSpecificException) {
    try {
        calculate.div(27, 0);
        FAIL() << "calculate.div(27,0) should throw an error, since a division by zero is not valid\n";
    } catch (TestException& exception) {
        EXPECT_THAT(std::string(exception.what()), Eq("VALID_SETTING"));
        EXPECT_THAT(exception.errorCode, Eq(20));
    }
}
See here for a detailed discussion.
If no exception is thrown, how do you detect the abnormal usage of calculate.div?

Is there any possibility to test it? I mean, what should I write as test result? Is there anything like "ERROR"? Or do I have to use exceptions?
This is a core question you need to ask (and answer) yourself: how should your calculator (in this case the function Calc::div()) behave if invalid input is given? There are several ways it can behave:
Crash. This (usually) is the behavior you get if you divide by zero in C++. (Technically the behaviour is undefined, so the compiler may do ANYTHING. Fortunately for us, most compilers agree that terminating the entire process is the "correct" ANYTHING to do here.)
Return some value. You could return (in the case of division by zero) infinity, or possibly NaN, or any other value that makes sense in your context. This approach however mixes the result with error handling, which is discouraged nowadays, as it forces you to check each and every invocation of the function for "error results" - and if you forget a single check you get nasty bugs as you continue operations with invalid/bogus values.
Throw an exception. You can throw an exception, signalling that something went wrong. This is the method usually used nowadays (except for extremely performance-sensitive code, where every microsecond counts), as it neatly separates the normal path (return a result value) from the error path (exception). And if you forget to handle the exception you will notice at once, instead of continuing with a bad value as in option 2.
Once you have decided on what your behaviour should be you can test it.
For Option 1 gtest provides death tests.
For Option 2 you can simply validate that you get your expected result.
For Option 3 you can catch and evaluate the exception, either via exception assertions or a homemade try { } catch { } with a FAIL() at the end of the try block (so you notice if the function fails to throw an exception when you expect it to). A sketch of all three follows below.
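For illustration, here is a rough sketch of how each option could be tested in Google Test, assuming a hypothetical Calc whose div() returns double; you would keep only the test matching the behaviour you decided on:
#include <cmath>      // std::isnan
#include <stdexcept>
#include "gtest/gtest.h"

// Option 1: death test - assumes div() terminates the process when b == 0.
TEST(TestCalc, DivByZeroDies)
{
    Calc calculate;
    EXPECT_DEATH(calculate.div(27, 0), "");
}

// Option 2: error value - assumes div() is specified to return NaN when b == 0.
TEST(TestCalc, DivByZeroReturnsNaN)
{
    Calc calculate;
    EXPECT_TRUE(std::isnan(calculate.div(27, 0)));
}

// Option 3: exception - assumes div() throws std::invalid_argument when b == 0.
TEST(TestCalc, DivByZeroThrows)
{
    Calc calculate;
    EXPECT_THROW(calculate.div(27, 0), std::invalid_argument);
}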

Your comment
I thought I could test it
suggests a misunderstanding about tests.
Your tests only test if the behaviour of your classes is the way you want it to be. Tests are in no way error handling mechanisms!
If one of your functions is creating an application crash with certain parameters, that is not something your tests should mark as correct.
For how to solve it (exception and suitable macro) see john's comment.

Related

How to suppress termination in Google test when assert() unexpectedly triggers?

Here it's discussed how to catch a failing assert, e.g. you set up your fixture so that assert() fails and you see nice output. But what I need is the opposite. I want to test that assert() succeeds, but in case it fails I want to have nice output. At the moment it just terminates when it snags on assert().
#define LIMIT 5
struct Obj {
    int getIndex(int index) {
        assert(index < LIMIT);
        // do stuff;
        return index;
    }
};
Obj obj;
TEST(ObjTest, Fails_whenOutOfRange) {
    ASSERT_DEATH(obj.getIndex(6), "");
}
TEST(ObjTest, Succeeds_whenInRange) {
    obj.getIndex(4);
}
Above is a contrived example. I want the second test not to terminate in case it fails, for example if I set LIMIT to 3. After all, ASSERT_DEATH somehow suppresses termination when assert() fails.
You should try using the command line option --gtest_break_on_failure.
It is meant for running tests within a debugger, so you get a breakpoint upon test failure. If you don't use a debugger, you'll just get a SEGFAULT and execution will stop.
The following is just my opinion, but it seems to me that you are either testing the wrong thing or using the wrong tool.
Assert (C assert()) is not for verifying input, it is for catching impossible situations. It will disappear from release code, for example, so you can't rely on it.
What you should test is your function's specification rather than its implementation. And you should decide what your specification is for invalid input values:
Undefined behavior, so assert is fine, but you can't test it with a unit test, because undefined behavior is, well, undefined.
Defined behavior. Then you should be consistent regardless of NDEBUG presence. And throwing an exception, in my opinion, is the right thing to do here (see the sketch below), instead of calling std::abort, which is almost useless for the user (it can't be intercepted and processed properly).
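As a rough sketch of that second option, reusing the getIndex example from the question (the exception type choice is illustrative):
#include <stdexcept>

int getIndex(int index)
{
    if (index >= LIMIT)                 // checked in debug AND release builds
        throw std::out_of_range("index must be less than LIMIT");
    // do stuff;
    return index;
}

TEST(ObjTest, Throws_whenOutOfRange)
{
    // The test fails cleanly (no termination) if the exception is not thrown.
    EXPECT_THROW(getIndex(6), std::out_of_range);
}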
If assert triggers (fails) you get "nice output" (or a crash or whatever assert does in your environment). If assert does not trigger then nothing happens and execution continues.
What more do you need to know?
This (hack) adds an EXPECT_NODEATH macro to Google Test. It is the "opposite" of EXPECT_DEATH in that it will pass if the statement does not assert, abort, or otherwise fail.
The general idea was simple, but I did not take the time to make the error messages any nicer. I tried to leave Google Test as untouched as possible and just piggy-back on what is already there. You should be able to include this without any side effects on the rest of Google Test.
For your case:
TEST(ObjTest, Succeeds_whenInRange) {
    EXPECT_NODEATH(obj.getIndex(4), "");
}
GTestNoDeath.h

How to assert in cppunit that a statement throws an exception either of type Excp1 or Excp2?

CPPUNIT_ASSERT_THROW(Expression, ExceptionType) does not seem to allow checking for exceptions of multiple types, i.e. for a statement that can throw more than one kind of exception.
For example, an expression may throw Excp1 on one platform, or Excp2 on another. Is there a workaround to test such statements using CPPUNIT_ASSERT_THROW?
For the first test, you make your test conditions such that it throws exception 1.
If it fails to throw, that is a test failure.
If it does throw, you catch it as exception 1, and accept it as passing.
If it throws something else, the framework catches it.
For the second test, you use conditional compilation to enable the code for platform 2 only. You make your test conditions such that it throws exception 2.
If it fails to throw, that is a test failure.
If it does throw, you catch it as exception 2, and accept it as passing.
If it throws something else, the framework catches it.
On the first platform the test simply passes, as there is nothing for it to do.
On the second platform you catch exception 2 as expected.
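A rough sketch of that two-test scheme, assuming a hypothetical PLATFORM2 macro that identifies the second platform, a MyTest fixture, and a statementUnderTest() standing in for the real expression:
void MyTest::testThrowsExcp1()
{
#ifndef PLATFORM2
    // Platform 1: the statement is expected to throw Excp1;
    // anything else escapes and is caught by the framework.
    CPPUNIT_ASSERT_THROW(statementUnderTest(), Excp1);
#endif
}

void MyTest::testThrowsExcp2()
{
#ifdef PLATFORM2
    // Compiled for platform 2 only; there the statement should throw Excp2.
    CPPUNIT_ASSERT_THROW(statementUnderTest(), Excp2);
#endif
}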
There is no direct support for this feature in cppunit, but there are basically two ways you can implement it easily in your own code.
So the basic idea behind this assert is the following code:
bool expected_exception_thrown = false;
try
{
    yourExpression();
}
catch (const ExpectedException&)
{
    expected_exception_thrown = true;
}
catch (...)
{
}
if (!expected_exception_thrown)
    CPPUNIT_FAIL("yourExpression() did not throw the expected exception");
Of course the actual implementation is a bit fancier and involves some additional features (like better messages for an unexpected std::exception, and support for an error message, which the simplified version above lacks), but the general idea is the same.
So now you can easily extend that pattern to support as many exceptions as you need. You can have a look at the existing implementation in include/cppunit/TestAssert.h and either use that implementation and extend it, or use the simplified one that I posted above.
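Extended to the two exceptions from the question (Excp1 and Excp2 standing in for your real types), the same pattern might look like this:
bool expected_exception_thrown = false;
try
{
    yourExpression();
}
catch (const Excp1&)   // first acceptable exception type
{
    expected_exception_thrown = true;
}
catch (const Excp2&)   // second acceptable exception type
{
    expected_exception_thrown = true;
}
catch (...)
{
    // anything else leaves the flag false and is reported below
}
if (!expected_exception_thrown)
    CPPUNIT_FAIL("yourExpression() threw neither Excp1 nor Excp2");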

Assert() - what is it good for?

I don't understand the purpose of assert().
My lecturer says that the purpose of assert is to find bugs.
For example:
double divide(int a, int b)
{
    assert(0 != b);
    return a / b;
}
Is the above assert justified? I think that the answer is yes, because if my program is not supposed to work with 0 (the number zero), but somehow a zero does find its way into the b variable, then something is wrong with the code.
Am I correct?
Can you show me some examples of a justified assert()?
assert is used to validate things that should always be true if the program is correct. Whether assert is justified in your example depends on the specification of divide: if b != 0 is a precondition, then the assert is usually the preferred way of verifying it: if someone calls the function without fulfilling the preconditions, it is a programming error, and you should terminate the program with extreme prejudice, doing as little additional work as possible. (Usually. There are applications where this is not the case, and where it is better to throw an exception, and stumble along, hoping for the best.) If, however, the specification of divide defines some behavior when b == 0 (e.g. return +/-Inf), then you should implement this instead of using assert.
Also, it's possible to turn the assert off, if it turns out that it takes too much runtime. Generally, however, this should only be done in critical sections of code, and only if the profiler shows that you really need it.
FWIW: not related to your question, but the code you've posted will return 0.0 for divide(1, 3). Somehow, I don't think that this is what you wanted.
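For completeness, a rough sketch of that "defined behavior" variant (returning +/-Inf for b == 0; the cast also avoids the integer division mentioned above):
#include <limits>

double divide(int a, int b)
{
    if (b == 0)
        return a < 0 ? -std::numeric_limits<double>::infinity()
                     :  std::numeric_limits<double>::infinity();
    return static_cast<double>(a) / b;
}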
Another aspect of assertions: they are also a kind of documentation.
Instead of comments like
// ptr is never NULL
// vec now has n elements
better write
assert(ptr != 0);
assert(vec.size() == n);
Comments may become outdated over time and will cause confusion, but assertions are verified all the time.
Comments can be ignored. Assertions cannot.
You're pretty much spot-on in your assessment of assert, except that you typically use assert during the debug phase ... This is because you don't want an assert to trigger in production code ... throwing exceptions (and properly handling them) is the proper method for run-time error management in production-level code.
In general though, assert is used for testing an assumption. If an assumed condition is not met in the code during the debugging phase, especially when you are getting values that are out-of-bound for the desired input, you want your program to bail out at the point that the error is encountered so you can fix it. For instance, suppose you were calling a function that returned a pointer, and that function should never return a NULL pointer value. In other words returning a NULL value is not just some indicator of an error-condition, but it means that the assumption of how you imagine your code works is wrong. That is a good place to use assert ... you assume your program will work one way, and if it doesn't then you don't want that error propagating to cause some crazy hard-to-find bug somewhere else ... you want to nix it right when it occurs.
Finally, you can combine assert with built-in macros such as __LINE__ and __FILE__, which give you the file and line number where the assert took place, to help you quickly identify the problem area.
The purpose of an assert is to signal unexpected behavior during debugging (as it's only available in a debug build). Your example is a justified case of assert. The next line would probably crash, but with the assert there you have the option to break execution right before the line is hit, and do some debugging.
This is usually done in parallel with exceptions - you assert to signal that something is wrong, and throw an exception to treat the case gracefully (even exiting the program):
double divide(int a, int b)
{
    assert(0 != b);
    if (b)
        return static_cast<double>(a) / b;
    throw division_by_0_exception();
}
There are cases where you want to continue execution, but still want to signal that something went wrong.
Assert is used to test assumptions about your code in a debug environment. Asserts generally have no effect on your final build.
Whether or not it is a valid test is another matter entirely. We can't answer that without intimate knowledge of your application.
Asserts should never fail. If you see any possibility that the assertion could fail, then you need an if statement instead to handle those cases where the condition is not true. Assertions are only for conditions that you believe will never fail.
Asserts are used to check invariants during code execution, those are the conditions that are assumed by programmer to always stay the same, if they differ from assumptions then there is a bug in the code.
Asserts can be also used for checking preconditions and postconditions, the first is checked before some code block and verifies if provided data/state is correct, the second one checks whether the outcome of some calculations are correct. This helps to narrow where problems/bugs might be located:
assert( /* preconditions */ );
/* here some algorithm - and maybe more asserts checking invariants */
assert( /* postconditions */ );
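As a small made-up illustration of that pattern (the function and its contract are hypothetical):
#include <algorithm>
#include <cassert>
#include <vector>

// Replaces every element with its absolute value.
void makeAbsolute(std::vector<int>& values)
{
    assert(!values.empty());                 // precondition: caller provides data
    const std::size_t n = values.size();
    std::transform(values.begin(), values.end(), values.begin(),
                   [](int v) { return v < 0 ? -v : v; });
    assert(values.size() == n);              // postcondition: size unchanged
    assert(std::all_of(values.begin(), values.end(),
                       [](int v) { return v >= 0; }));  // postcondition: non-negative
}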
Some examples of justified asserts:
Checking function return value, for example if you call some external API function and you know that it returns some error value only in case of programming error:
The WinAPI Thread32First function requires that the provided LPTHREADENTRY32 structure has a properly assigned dwSize field, and fails otherwise. This failure should be caught by an assert.
If function accepts pointer to some data, then add assert at the start of function to verify that it is non-null. This makes sense if this function cannot work on null pointer.
If you have a lock on a mutex with a set timeout, then if the timeout expires you can use assert to indicate a possible race condition / deadlock.
... and many many more
A nice trick with asserts is to add some info inside, e.g.:
assert(false && "Reason for this assert");
The "Reason for this assert" text will then show up in the message box (or wherever your environment reports assertion failures).
You might also want to know that there are also static asserts, which indicate errors at compile time.
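For example, a static assert is checked by the compiler rather than at run time (C++11 syntax; the size assumption here is purely illustrative):
static_assert(sizeof(long) >= 8, "this code assumes long is at least 64 bits");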

Should it be "Arrange-Assert-Act-Assert"?

Regarding the classic test pattern of Arrange-Act-Assert, I frequently find myself adding a counter-assertion that precedes Act. This way I know that the passing assertion is really passing as the result of the action.
I think of it as analogous to the red in red-green-refactor, where only if I've seen the red bar in the course of my testing do I know that the green bar means I've written code that makes a difference. If I write a passing test, then any code will satisfy it; similarly, with respect to Arrange-Assert-Act-Assert, if my first assertion fails, I know that any Act would have passed the final Assert - so that it wasn't actually verifying anything about the Act.
Do your tests follow this pattern? Why or why not?
Update Clarification: the initial assertion is essentially the opposite of the final assertion. It's not an assertion that Arrange worked; it's an assertion that Act hasn't yet worked.
This is not the most common thing to do, but still common enough to have its own name. This technique is called Guard Assertion. You can find a detailed description of it on page 490 in the excellent book xUnit Test Patterns by Gerard Meszaros (highly recommended).
Normally, I don't use this pattern myself, since I find it more correct to write a specific test that validates whatever precondition I feel the need to ensure. Such a test should always fail if the precondition fails, and this means that I don't need it embedded in all the other tests. This gives a better isolation of concerns, since one test case only verifies one thing.
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and only one) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
It could also be specified as Arrange-Assume-Act-Assert.
There is a technical handle for this in NUnit, as in the example here:
http://nunit.org/index.php?p=theory&r=2.5.7
Here's an example.
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
    range.encompass(7);
    assertTrue(range.includes(7));
}
It could be that I wrote Range.includes() to simply return true. I didn't, but I can imagine that I might have. Or I could have written it wrong in any number of other ways. I would hope and expect that with TDD I actually got it right - that includes() just works - but maybe I didn't. So the first assertion is a sanity check, to ensure that the second assertion is really meaningful.
Read by itself, assertTrue(range.includes(7)); is saying: "assert that the modified range includes 7". Read in the context of the first assertion, it's saying: "assert that invoking encompass() causes it to include 7." And since encompass is the unit we're testing, I think that's of some (small) value.
I'm accepting my own answer; a lot of the others misconstrued my question to be about testing the setup. I think this is slightly different.
An Arrange-Assert-Act-Assert test can always be refactored into two tests:
1. Arrange-Assert
and
2. Arrange-Act-Assert
The first test will only assert on that which was set up in the Arrange phase, and the second test will only assert for that which happened in the Act phase.
This has the benefit of giving more precise feedback on whether it's the Arrange or the Act phase that failed, while in the original Arrange-Assert-Act-Assert these are conflated and you would have to dig deeper and examine exactly what assertion failed and why it failed in order to know if it was the Arrange or Act that failed.
It also satisfies the intention of unit testing better, as you are separating your test into smaller independent units.
I am now doing this. A-A-A-A of a different kind
Arrange - setup
Act - what is being tested
Assemble - what is optionally needed to perform the assert
Assert - the actual assertions
Example of an update test:
Arrange:
New object as NewObject
Set properties of NewObject
Save the NewObject
Read the object as ReadObject
Act:
Change the ReadObject
Save the ReadObject
Assemble:
Read the object as ReadUpdated
Assert:
Compare ReadUpdated with ReadObject properties
The reason the ACT does not contain the reading of ReadUpdated is that the read is not part of the act; the act is only changing and saving. So really, the arranging of ReadUpdated for the assertion is what I am calling ASSEMBLE. This is to prevent confusing the ARRANGE section.
ASSERT should only contain assertions. That leaves ASSEMBLE between ACT and ASSERT, which sets up the assert.
Lastly, if you are failing in the Arrange, your tests are not correct, because you should have other tests to prevent/find these trivial bugs. For the scenario I present, there should already be other tests which test READ and CREATE. If you create a "Guard Assertion", you may be breaking DRY and creating maintenance work.
I don't use that pattern, because I think doing something like:
Arrange
Assert-Not
Act
Assert
may be pointless, because supposedly you know your Arrange part works correctly, which means that whatever is in the Arrange part must be tested as well or be simple enough to need no tests.
Using your answer's example:
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7)); // <-- Pointless and against DRY if there
                                    //     are unit tests for Range(int, int)
    range.encompass(7);
    assertTrue(range.includes(7));
}
Tossing in a "sanity check" assertion to verify state before you perform the action you're testing is an old technique. I usually write them as test scaffolding to prove to myself that the test does what I expect, and remove them later to avoid cluttering tests with test scaffolding. Sometimes, leaving the scaffolding in helps the test serve as narrative.
I've already read about this technique - possibly from you btw - but I do not use it; mostly because I'm used to the triple A form for my unit tests.
Now I'm getting curious, and have some questions: how do you write your test? Do you cause this assertion to fail, following a red-green-red-green-refactor cycle, or do you add it afterwards?
Do you sometimes see it fail, perhaps after you refactor the code? What does that tell you? Perhaps you could share an example where it helped. Thanks.
I have done this before when investigating a test that failed.
After considerable head scratching, I determined that the cause was that the methods called during "Arrange" were not working correctly. The test failure was misleading. I added an Assert after the Arrange. This made the test fail in a place which highlighted the actual problem.
I think there is also a code smell here if the Arrange part of the test is too long and complicated.
In general, I like "Arrange, Act, Assert" very much and use it as my personal standard. The one thing it fails to remind me to do, however, is to dis-arrange what I have arranged when the assertions are done. In most cases this doesn't cause much annoyance, as most things auto-magically go away via garbage collection, etc. If you have established connections to external resources, however, you will probably want to close those connections when you're done with your assertions, or you may have a server or expensive resource somewhere holding on to connections or vital resources that it should be able to give away to someone else. This is particularly important if you're one of those developers who does not use TearDown or TestFixtureTearDown to clean up after one or more tests. Of course, "Arrange, Act, Assert" is not responsible for my failure to close what I open; I only mention this "gotcha" because I have not yet found a good "A-word" synonym for "dispose" to recommend! Any suggestions?
Have a look at Wikipedia's entry on Design by Contract. The Arrange-Act-Assert holy trinity is an attempt to encode some of the same concepts and is about proving program correctness. From the article:
The notion of a contract extends down to the method/procedure level; the
contract for each method will normally contain the following pieces of
information:
Acceptable and unacceptable input values or types, and their meanings
Return values or types, and their meanings
Error and exception condition values or types that can occur, and their meanings
Side effects
Preconditions
Postconditions
Invariants
(more rarely) Performance guarantees, e.g. for time or space used
There is a tradeoff between the amount of effort spent on setting this up and the value it adds. A-A-A is a useful reminder for the minimum steps required but shouldn't discourage anyone from creating additional steps.
Depends on your testing environment/language, but usually if something in the Arrange part fails, an exception is thrown and the test fails displaying it instead of starting the Act part. So no, I usually don't use a second Assert part.
Also, if your Arrange part is quite complex and doesn't always throw an exception, you might consider wrapping it inside some method and writing a separate test for it, so you can be sure it won't fail (without throwing an exception).
If you really want to test everything in the example, try more tests... like:
public void testIncludes7() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
}
public void testIncludes5() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(5));
}
public void testIncludes0() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(0));
}
public void testEncompassInc7() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(7));
}
public void testEncompassInc5() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(5));
}
public void testEncompassInc0() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(0));
}
Because otherwise you are missing so many possibilities for error... e.g. after encompass, the range only includes 7, etc.
There are also tests needed for the length of the range (to ensure it didn't also encompass a random value), and another set of tests entirely for trying to encompass 5 in the range... what would we expect - an exception in encompass, or the range to be unaltered?
Anyway, the point is if there are any assumptions in the act that you want to test, put them in their own test, yes?
I use:
1. Setup
2. Act
3. Assert
4. Teardown
Because a clean setup is very important.

Test Cases VS ASSERTION statement

In most of my C++ projects I have heavily used ASSERTION statements, as follows:
int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    if (!fantasticData)
        return -1;
    // ...
    return WOW_VALUE;
}
But the TDD community seems to enjoy doing something like this:
int doMoreWonderfulThings(const int* fantasticData)
{
    if (!fantasticData)
        return ERROR_VALUE;
    // ...
    return AHA_VALUE;
}
TEST(TDD_Enjoy)
{
    ASSERT_EQ(ERROR_VALUE, doMoreWonderfulThings(0L));
    int data = 42;  // any valid int pointer works here
    ASSERT_EQ(AHA_VALUE, doMoreWonderfulThings(&data));
}
Just from my experience, the first approach let me remove many subtle bugs.
But the TDD approach is a very smart idea for handling legacy code.
Googling around, I found the first method compared to "walking the shore with a life-vest, swimming the ocean without any safeguard".
Which one is better?
Which one makes software robust?
In my (limited) experience the first option is quite a bit safer. In a test case you only test predefined input and compare the outcome; this works well as long as every possible edge case has been checked. The first option checks every input, and thus tests the "live" values; it filters out bugs really quickly, however it comes with a performance penalty.
In Code Complete, Steve McConnell teaches us that the first method can be used successfully to filter out bugs in a debug build. In a release build you can filter out all assertions (for instance with a compiler flag) to get the extra performance.
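For illustration, a minimal sketch of how that flag works: the standard <cassert> header honors the NDEBUG macro, which is normally supplied by the build system (e.g. g++ -DNDEBUG) rather than defined in source as done here:
#define NDEBUG   // normally passed by the build system for release builds
#include <cassert>

int main()
{
    assert(1 == 2); // would abort in a debug build; expands to ((void)0) here
    return 0;
}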
In my opinion the best way is to use both methods:
Method 1 to catch illegal values
int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    ASSERTNOTEQUAL(0, *fantasticData);
    return WOW_VALUE / *fantasticData;
}
and method 2 to test edge-cases of an algorithm.
int doMoreWonderfulThings(const int fantasticNumber)
{
    int count = 100;
    for (int i = 0; i < fantasticNumber; ++i) {
        count += 10 * fantasticNumber;
    }
    return count;
}
TEST(TDD_Enjoy)
{
    // Test lower edge (the loop body never runs, so count stays at 100)
    ASSERT_EQ(100, doMoreWonderfulThings(-1));
    ASSERT_EQ(100, doMoreWonderfulThings(0));
    ASSERT_EQ(110, doMoreWonderfulThings(1));
    // Test some random values
    ASSERT_EQ(350, doMoreWonderfulThings(5));
    ASSERT_EQ(2350, doMoreWonderfulThings(15));
    ASSERT_EQ(225100, doMoreWonderfulThings(150));
}
Both mechanisms have value. Any decent test framework will catch the standard assert() anyway, so a test run that causes the assert to fail will result in a failed test.
I typically have a series of asserts at the start of each c++ method with a comment '// preconditions'; it's just a sanity check on the state I expect the object to have when the method is called. These dovetail nicely into any TDD framework because they not only work at runtime when you're testing functionality but they also work at test time.
There is no reason why your test package cannot catch asserts such as the one in doWonderfulThings. This can be done either by having your ASSERT handler support a callback mechanism, or by having your test asserts contain a try/catch block.
I don't know which particular TDD subcommunity you're referring to, but the TDD patterns I've come across either use Assert.AreEqual() for positive results or otherwise use an ExpectedException mechanism (e.g., attributes in .NET) to declare the error that should be observed.
In C++, I prefer method 2 when using most testing frameworks. It usually makes for easier-to-understand failure reports. This is invaluable when a test fails months or years after it was written.
My reason is that most C++ testing frameworks will print out the file and line number of where the assert occurred without any kind of stack trace information. So most of the time you will get the reporting line number inside of the function or method and not inside of the test case.
Even if the assert is caught and re-asserted from the caller, the reporting line will be at the catch statement, and may not be anywhere close to the test case line which called the method or function that asserted. This can be really annoying when the function that asserted may have been used multiple times in the test case.
There are exceptions, though. For example, Google's test framework has a scoped trace statement which will print as part of the trace if an exception occurs. So you can wrap a call to a generalized test function with the trace scope and easily tell, within a line or two, which line in the exact test case failed.
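That facility is SCOPED_TRACE; a minimal sketch of the wrapping it describes, reusing the Calc class from the first question (the helper name is made up):
#include "gtest/gtest.h"

// Generalized helper: a bare failure report would only point here...
void checkDiv(Calc& calculate, int a, int b, int expected)
{
    EXPECT_EQ(expected, calculate.div(a, b));
}

TEST(TestCalc, DivWithTraces)
{
    Calc calculate;
    {
        SCOPED_TRACE("div(27, 3)");   // ...but this is appended to any failure in scope
        checkDiv(calculate, 27, 3, 9);
    }
    {
        SCOPED_TRACE("div(27, -3)");
        checkDiv(calculate, 27, -3, -9);
    }
}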