Test cases vs ASSERT statements - C++

In most of my C++ projects I have heavily used ASSERT statements, like this:
int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    if (!fantasticData)
        return -1;
    // ...
    return WOW_VALUE;
}
But the TDD community seems to prefer doing something like this:
int doMoreWonderfulThings(const int* fantasticData)
{
    if (!fantasticData)
        return ERROR_VALUE;
    // ...
    return AHA_VALUE;
}
TEST(TDD_Enjoy)
{
    ASSERT_EQ(ERROR_VALUE, doMoreWonderfulThings(0L));
    int foo = 42;
    ASSERT_EQ(AHA_VALUE, doMoreWonderfulThings(&foo));
}
In my experience, the first approach has let me remove many subtle bugs.
But the TDD approach seems like a very smart idea for handling legacy code.
Google, for instance, compares the first method to "walking the shore with a life-vest, then swimming the ocean without any safeguard".
Which one is better?
Which one makes software more robust?

In my (limited) experience the first option is quite a bit safer. With a test case you only test predefined inputs and compare the outcomes, which works well as long as every possible edge case has been checked. The first option checks every actual input, so it tests the 'live' values and filters out bugs very quickly; however, it comes with a performance penalty.
In Code Complete, Steve McConnell teaches us that the first method can be used successfully to filter out bugs in a debug build. In a release build you can compile out all assertions (for instance with a compiler flag) to get the extra performance back.
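For reference, a minimal sketch of such a debug-only macro follows; the name ASSERT and the abort-on-failure behaviour are assumptions for illustration, not any particular project's implementation. Defining NDEBUG for the release build turns it into a no-op, just like the standard assert() from <cassert>:
#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
    // Release build: the check disappears entirely.
    #define ASSERT(cond) ((void)0)
#else
    // Debug build: report the failed condition and stop immediately.
    #define ASSERT(cond) \
        do { \
            if (!(cond)) { \
                std::fprintf(stderr, "%s:%d: ASSERT(%s) failed\n", \
                             __FILE__, __LINE__, #cond); \
                std::abort(); \
            } \
        } while (false)
#endif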
In my opinion the best way is to use both methods:
Method 1 to catch illegal values
int doWonderfulThings(const int* fantasticData)
{
    ASSERT(fantasticData);
    ASSERTNOTEQUAL(0, *fantasticData);
    return WOW_VALUE / *fantasticData;
}
and method 2 to test edge-cases of an algorithm.
int doMoreWonderfulThings(const int fantasticNumber)
{
    int count = 100;
    for (int i = 0; i < fantasticNumber; ++i) {
        count += 10 * fantasticNumber;
    }
    return count;
}
TEST(TDD_Enjoy)
{
    // Test the lower edge
    ASSERT_EQ(100, doMoreWonderfulThings(-1));
    ASSERT_EQ(100, doMoreWonderfulThings(0));
    ASSERT_EQ(110, doMoreWonderfulThings(1));
    // Test some other values
    ASSERT_EQ(350, doMoreWonderfulThings(5));
    ASSERT_EQ(2350, doMoreWonderfulThings(15));
    ASSERT_EQ(225100, doMoreWonderfulThings(150));
}

Both mechanisms have value. Any decent test framework will catch the standard assert() anyway, so a test run that causes the assert to fail will result in a failed test.
I typically have a series of asserts at the start of each C++ method with a comment '// preconditions'; it's just a sanity check on the state I expect the object to have when the method is called. These dovetail nicely into any TDD framework, because they not only work at runtime when you're exercising functionality, they also work at test time.
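As a small illustration (the Account class and its invariants are invented for this example, not taken from any real codebase), such a block of precondition asserts might look like this:
#include <cassert>

class Account {
public:
    explicit Account(double balance) : balance_(balance) {}

    void withdraw(double amount)
    {
        // preconditions
        assert(amount > 0.0);        // caller contract: only positive amounts
        assert(amount <= balance_);  // object state this method expects

        balance_ -= amount;
    }

private:
    double balance_;
};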

There is no reason why your test package cannot catch asserts such as the one in doWonderfulThings. This can be done either by having your ASSERT handler support a callback mechanism, or by having your test asserts contain a try/catch block.
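Here is one hedged sketch of the callback idea: the ASSERT macro delegates to a handler which, in a test build, throws an exception the test framework can catch (AssertionFailed, assert_handler and the default behaviour are all invented for this example):
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical exception type thrown on assertion failure in test builds.
struct AssertionFailed : std::logic_error {
    using std::logic_error::logic_error;
};

// Pluggable handler; a production build could install one that aborts instead.
inline std::function<void(const std::string&)>& assert_handler()
{
    static std::function<void(const std::string&)> handler =
        [](const std::string& msg) { throw AssertionFailed(msg); };
    return handler;
}

#define ASSERT(cond) \
    do { if (!(cond)) assert_handler()(#cond); } while (false)

// A test can then treat the failed assert like any other exception, e.g. with
// Google Test: EXPECT_THROW(doWonderfulThings(nullptr), AssertionFailed);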

I don't know which particular TDD subcommunity you're referring to, but the TDD patterns I've come across either use Assert.AreEqual() for positive results or use an ExpectedException mechanism (e.g., attributes in .NET) to declare the error that should be observed.

In C++, I prefer method 2 when using most testing frameworks. It usually makes for easier-to-understand failure reports. This is invaluable when a test fails months to years after it was written.
My reason is that most C++ testing frameworks will print out the file and line number of where the assert occurred, without any kind of stack trace information. So most of the time the reported line number will be inside the function or method, not inside the test case.
Even if the assert is caught and re-asserted from the caller, the reported line will be the one with the catch statement, and may not be anywhere close to the test case line which called the method or function that asserted. This can be really annoying when the function that asserted is used multiple times in the test case.
There are exceptions, though. For example, Google's test framework has a scoped trace statement which is printed as part of the failure output if a failure occurs inside its scope. So you can wrap a call to a generalized test function in a scoped trace and easily tell, within a line or two, which line in the exact test case failed.
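For example, a minimal sketch using Google Test's SCOPED_TRACE (the CheckDoubling helper is invented for illustration):
#include <gtest/gtest.h>

// Hypothetical helper shared by several test cases.
void CheckDoubling(int input, int expected)
{
    EXPECT_EQ(expected, input * 2);
}

TEST(TraceDemo, ReportsWhichCallFailed)
{
    {
        SCOPED_TRACE("first call");   // printed alongside any failure in this scope
        CheckDoubling(2, 4);
    }
    {
        SCOPED_TRACE("second call");
        CheckDoubling(3, 6);
    }
}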

C++ Google Test division by zero

I'm learning to write unit tests, and have started with a simple "Calculator" class that I wanted to test.
I figured out how to use the EXPECT/ASSERT functions and what test cases etc. are, but I ran into a problem when I wanted to test division by zero. Is there any possibility to test it? I mean, what should I write as the test result? Is there anything like "ERROR"? Or do I have to use exceptions?
These are my tests so far:
TEST(TestCalc, TestPos)
{
    Calc calculate;
    EXPECT_EQ(10.0, calculate.add(5.0, 5.0));
    EXPECT_EQ(9, calculate.mul(3, 3));
    EXPECT_EQ(9, calculate.div(27, 3));
    EXPECT_EQ(9, calculate.sub(12, 3));
}
TEST(TestCalc, TestNeg)
{
    Calc calculate;
    EXPECT_EQ(-1.0, calculate.add(5.0, -6.0));
    EXPECT_EQ(-9, calculate.mul(3, -3));
    EXPECT_EQ(-9, calculate.div(27, -3));
    EXPECT_EQ(15, calculate.sub(12, -3));
}
TEST(TestCalc, TestZero)
{
    Calc calculate;
    EXPECT_EQ(5.0, calculate.add(5.0, 0));
    EXPECT_EQ(0, calculate.mul(3, 0));
    EXPECT_EQ(, calculate.div(27,0));
    EXPECT_EQ(12, calculate.sub(12,0));
}
I don't agree with @Ketzu. You have an expectation of how the calculator should behave when dividing by zero.
EXPECT_EQ(, calculate.div(27,0));
This expectation is perhaps just not well formulated in this test.
If calculate.div(27,0) throws an exception, then you can catch that exception, and your test fails if it is not thrown. You can write something like this:
TEST(ExceptionTest, ExpectThrowsSpecificException) {
    try {
        calculate.div(27, 0);
        FAIL() << "calculate.div(27,0) should throw an error, since a division by zero is not valid\n";
    } catch (TestException& exception) {
        EXPECT_THAT(std::string(exception.what()), Eq("VALID_SETTING"));
        EXPECT_THAT(exception.errorCode, Eq(20));
    }
}
See here for a detailed discussion.
If no exception is thrown, how do you detect the abnormal usage of calculate.div?
Is there any possibility to test it? I mean, what should I write as test result? Is there anything like "ERROR"? Or do I have to use exceptions?
This is a core question you need to ask (and answer) yourself: how should your calculator (in this case the function Calc::div()) behave if an invalid input is given? There are several ways it can behave:
Crash. This is (usually) the behaviour you get if you divide by zero in C++ (technically the behaviour is undefined, so the compiler may do ANYTHING; fortunately for us, most compilers agree that terminating the entire process is the "correct" ANYTHING to do here).
Return some value. You could return (in the case of division by zero) infinity, or possibly NaN, or any other value that makes sense in your context. This approach however mixes the result with error handling, which is discouraged nowadays (as it would force you to check each and every invocation of the function for "error results", and if you forget a single check you get nasty bugs as you continue operating on invalid/bogus values).
Throw an exception. You can throw an exception, signalling that something went wrong. This is the most commonly used method nowadays (except for extremely performance-sensitive code, where every microsecond counts), as it neatly separates the normal path (return the result value) from the error path (exception), and if you forget to handle the exception you will notice at once instead of continuing with a bad value as in option 2.
Once you have decided on what your behaviour should be you can test it.
For Option 1, gtest provides death tests.
For Option 2 you can simply validate that you get your expected result.
For Option 3 you can catch and evaluate the exception, either via exception assertions or a homemade try { } catch { } with a FAIL() at the end of the try block (so you notice if the function fails to throw an exception when you expect it to). A short sketch of all three follows below.
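As a hedged sketch, here is what this could look like, assuming a hypothetical Calc class that follows Option 3 (the class, method names and exception type are assumptions for illustration):
#include <stdexcept>
#include <gtest/gtest.h>

// Hypothetical calculator that throws on invalid input (Option 3).
struct Calc {
    double div(double a, double b)
    {
        if (b == 0.0)
            throw std::invalid_argument("division by zero");
        return a / b;
    }
};

TEST(CalcTest, DivByZeroThrows)
{
    Calc calculate;
    // Option 3: exception assertion.
    EXPECT_THROW(calculate.div(27, 0), std::invalid_argument);
}

// Option 1 (crash) would use a death test instead:
//     EXPECT_DEATH(calculate.div(27, 0), "");
// Option 2 (sentinel value) would just compare the returned value:
//     EXPECT_EQ(std::numeric_limits<double>::infinity(), calculate.div(27, 0));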
Your comment
I thought I could test it
suggests a misunderstanding about tests.
Your tests only test if the behaviour of your classes is the way you want it to be. Tests are in no way error handling mechanisms!
If one of your functions is crashing the application with certain parameters, that is not something your tests should mark as correct.
For how to solve it (exception and suitable macro) see john's comment.

Should I use assert to verify a third party function?

I use a third-party function which, based on the filter, returns a specified number of objects:
// void GetObjects(std::vector<T>&, Filter, int /*objectsNumber*/)
GetObjects(vec, filter, 1);
if (vec.empty())
{
    throw ObjectNotFound();
}
assert(vec.size() == 1);
Should I use assert like above? Is it a typical assert scenario?
How to handle errors in your program depends on you and on your program's nature.
In a production environment you usually try not to assert, because an assert means your application dies. In some setups, the process that executes your program will notice that it died and restart it.
If it's just for learning/training, asserting with a proper message is a good way to find your problem easily and fast.
Bottom line - it's really up to you. There's no right or wrong here.
If you do want to assert, usually you do it only when some very basic invariant/condition is not met, when your program just cannot know how to proceed from this point.
Well, assert() is a macro whose checking code can be disabled for production builds, so using assert to ensure that the foreign function complies with its interface contract is a good habit; it makes sure the function's specification is met.
Anyway, using mock instances and a unit-testing framework will give you better results, as it lets you verify the interface contract with finer exposure to foreign mistakes. I recommend both, and I'm happy to see those ideas circulating in these environments :)
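A hedged sketch of that idea: put a thin seam in front of the third-party call so that a test can substitute a fake and exercise both the found and not-found paths. The names ObjectSource, FakeObjectSource and findExactlyOne are invented for this example, and the Filter parameter is omitted for brevity:
#include <stdexcept>
#include <string>
#include <vector>
#include <gtest/gtest.h>

struct ObjectNotFound : std::runtime_error {
    ObjectNotFound() : std::runtime_error("object not found") {}
};

// Thin seam wrapping the third-party GetObjects call.
struct ObjectSource {
    virtual ~ObjectSource() = default;
    virtual void GetObjects(std::vector<std::string>& out, int objectsNumber) = 0;
};

// The code under test: expects exactly one object back.
std::string findExactlyOne(ObjectSource& source)
{
    std::vector<std::string> vec;
    source.GetObjects(vec, 1);
    if (vec.empty())
        throw ObjectNotFound();
    return vec.front();
}

// Hand-rolled fake standing in for the real library in tests.
struct FakeObjectSource : ObjectSource {
    std::vector<std::string> canned;
    void GetObjects(std::vector<std::string>& out, int) override { out = canned; }
};

TEST(FindExactlyOne, ThrowsWhenNothingMatches)
{
    FakeObjectSource fake;   // returns no objects
    EXPECT_THROW(findExactlyOne(fake), ObjectNotFound);
}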

Exceptions vs assert for a scientific computing guy (I am the sole user of my code)?

Exceptions vs assert has been asked here before: Design by contract using assertions or exceptions?, Assertion VS Runtime exception, C++ error-codes vs ASSERTS vs Exceptions choices choices :(, Design by contract using assertions or exceptions?, etc. (*) There are also books, like Herb Sutter's Coding Standards that talk about this. The general consensus seems to be this:
Use assertions for internal errors, in the sense that the user of the module and the developer are one and the same person/team. Use exceptions for everything else. (**)
This rule makes a lot of sense to me, except for one thing. I am a scientist, using C++ for scientific simulations. In my particular context, this means that I am the sole user of most of my code. If I apply this rule, does it mean I never have to use exceptions? I guess not; for example, there are still I/O errors or memory allocation failures where exceptions are still necessary. But apart from those interactions of my program with the "outside world", are there other scenarios where I should be using exceptions?
In my experience, many good programming practices have been very useful to me, in spite of those practices being designed mostly for large complex systems or for large teams, while my programs are mostly small scientific simulations which are written mostly by me alone. Hence this question. What good practices of exception use apply in my context? Or should I use only asserts (and exceptions for I/O, memory allocation, and other interactions with the "outside world")?
(*) I hope that after reading the complete question, you agree that this is not a duplicate. The topic of exceptions vs assert has been dealt with before in general, but, as I try to explain here, I don't feel that any of those questions addresses my particular situation.
(**) I wrote this in my own words, trying to summarize what I've read. Feel free to criticize this statement if you feel it does not reflect the majority's consensus.
assert() is a safeguard against programmer mistakes, while exceptions are safeguards against the rest of existence.
Let's explain this with an example:
double divide(double a, double b) {
    return a / b;
}
The obvious problem with this function is that if b == 0, you'll get an error.
Now, let's assume this function is called with arguments whose values are decided by you and only you. You can detect the problem by changing the function into this:
double divide(double a, double b) {
    ASSERT(b != 0);
    return a / b;
}
If you have accidentally made a mistake in your code so that b can take a 0 value, you're covered and can fix the calling code, either by testing explicitly for 0 or by making sure such a condition never occurs in the first place.
As long as this assertion is in place, you will get some level of protection as the developer of the code.
It is a contract that makes it easy to see what kind of problem can occur in the function, especially while you are testing your code.
Now, what happens if you have no control over the values that are passed to the function?
The assertion will just disrupt the flow of the program without any protection whatsoever.
The sensible thing to do is this:
double divide(double a, double b) {
    ASSERT(b != 0);
    if (b == 0)
        throw DivideByZeroException();
    return a / b;
}

try {
    result = divide(num, user_val);
} catch (DivideByZeroException& e) {
    display_informative_message_to_user(e);
}
Note that the assertion is still in place because it is the most readable indication of what can go wrong.
The addition of the exception, however, allows you to recover more easily from the problem.
It can be argued that such an approach is redundant, but in a release build, the assertions will usually be NOOPs without generated code, so the exception remains the sole protection.
Also, this function is very simple, so the assertion and the exception throw are immediately visible, but with a few dozens of lines of code added, that would not be the case anymore.
Now, when you are developing and likely to make mistakes, the assertion failure will be visible at exactly the line where it occurred, while the exception might bubble up into an unrelated try/catch block that would make it harder to pinpoint the problem exactly, especially if the catch block does not log stack traces.
So, if you want to be safe and mitigate the risks of mistakes during development and during normal execution, you can never be too careful, and might want to provide both mechanisms in a complementary way.
I'm in a similar situation; engineering software, sole developer, very few users of my programs. My rule of thumb is to use exceptions when the program could feasibly recover from an error, or when a user should be expected to react to the error in some way. An example is checking for negative numbers where only positive numbers are allowed: the program doesn't need to terminate because the user typed in a negative value for mass, they just need to recheck their inputs and try again.
On the other hand, I use asserts to catch major bugs in the software. In the event that some error occurs from which the program has no hope of recovering (or that the user has no hope of fixing themselves), I just let the assert print out the file name and line number so that the user can report it to me, and I can fix it. An example of where I would use an assert is checking that the number of rows and columns of a matrix are equal when I'm expecting a square matrix. If num_rows != num_cols, then something is seriously broken with the code and some debugging is required. In my opinion, this is easier than trying to imagine all the possible ways that a matrix could become invalid, and test for them all.
As far as performance, I only disable or remove asserts and other error checks in critical sections of code, and then only when that section has been thoroughly tested and debugged.
My approach is probably not good for production software though. I can't imagine some program like Microsoft Excel bombing out with an "assertion failed" message. Ha ha. It's one thing if the three coworkers who use your software complain about your error-handling strategy, but quite another if you have thousands of unhappy customers who paid cash for it.
I'd use assertions where I expect the check to have a performance impact. For example, when writing a vector or matrix class of simple types (e.g. double, complex<double>) and I want a bounds check, I'd use assert(), because the check has a potentially large performance impact since it happens on every element access. I can then turn off this check in production builds with -DNDEBUG.
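A minimal sketch of such a bounds check (the Vector wrapper here is purely illustrative):
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative vector wrapper: the bounds check runs on every element access
// in debug builds and is compiled away entirely with -DNDEBUG.
class Vector {
public:
    explicit Vector(std::size_t n) : data_(n, 0.0) {}

    double& operator[](std::size_t i)
    {
        assert(i < data_.size());
        return data_[i];
    }

private:
    std::vector<double> data_;
};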
If the cost of the check does not matter (e.g. a check that an initial solution does not contain NaN values before you pass it to an iterative scheme), I would use an exception or another mechanism that is also active in production builds. If your job aborts after waiting in the queue of a cluster for three days and running for 10 hours, you at least want to have a diagnostic better than "killed (SIGSEGV)", so you can avoid rebuilding in debug-mode, waiting another 3 days and spending another 10 hours of expensive computation time.
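A sketch of such an always-on check (the function name and message are assumptions):
#include <cmath>
#include <stdexcept>
#include <string>
#include <vector>

// One-off, cheap check that stays active in release builds, unlike assert().
void checkInitialSolution(const std::vector<double>& x)
{
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (std::isnan(x[i]))
            throw std::runtime_error(
                "initial solution contains NaN at index " + std::to_string(i));
    }
}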
There are situations where neither exceptions nor asserts are appropriate. An example would be an error where the cost of checking does not matter, but that is nevertheless fatal enough that the program cannot continue under any circumstances. An assertion is not appropriate, because it only triggers in debug mode; an exception is not appropriate, because it can (accidentally) be caught, obscuring the problem. In such a case I'd use a custom assert macro that does not depend on NDEBUG, e.g.:
// This assert macro does not depend on the value of NDEBUG
#define assert_always(expr) \
    do \
    { \
        if (!(expr)) \
        { \
            std::cerr << __FILE__ << ":" << __LINE__ << ": assert_always(" \
                      << #expr << ") failed" << std::endl; \
            std::abort(); \
        } \
    } while (false)
(This example was taken from here with a modified name to indicate the slightly broader purpose).
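Usage then looks just like an ordinary assert (the surrounding function is made up for illustration):
#include <vector>

double firstEigenvalue(const std::vector<double>& spectrum)
{
    // Fatal in every build type, not only in debug builds.
    assert_always(!spectrum.empty());
    return spectrum.front();
}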

Should it be "Arrange-Assert-Act-Assert"?

Regarding the classic test pattern of Arrange-Act-Assert, I frequently find myself adding a counter-assertion that precedes Act. This way I know that the passing assertion is really passing as the result of the action.
I think of it as analogous to the red in red-green-refactor, where only if I've seen the red bar in the course of my testing do I know that the green bar means I've written code that makes a difference. If I write a passing test, then any code will satisfy it; similarly, with respect to Arrange-Assert-Act-Assert, if my first assertion fails, I know that any Act would have passed the final Assert - so that it wasn't actually verifying anything about the Act.
Do your tests follow this pattern? Why or why not?
Update Clarification: the initial assertion is essentially the opposite of the final assertion. It's not an assertion that Arrange worked; it's an assertion that Act hasn't yet worked.
This is not the most common thing to do, but still common enough to have its own name. This technique is called Guard Assertion. You can find a detailed description of it on page 490 in the excellent book xUnit Test Patterns by Gerard Meszaros (highly recommended).
Normally, I don't use this pattern myself, since I find it more correct to write a specific test that validates whatever precondition I feel the need to ensure. Such a test should always fail if the precondition fails, and this means that I don't need it embedded in all the other tests. This gives a better isolation of concerns, since one test case only verifies one thing.
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and only one) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
It could also be specified as Arrange-Assume-Act-Assert.
There is a technical handle for this in NUnit, as in the example here:
http://nunit.org/index.php?p=theory&r=2.5.7
Here's an example.
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
    range.encompass(7);
    assertTrue(range.includes(7));
}
It could be that I wrote Range.includes() to simply return true. I didn't, but I can imagine that I might have. Or I could have written it wrong in any number of other ways. I would hope and expect that with TDD I actually got it right - that includes() just works - but maybe I didn't. So the first assertion is a sanity check, to ensure that the second assertion is really meaningful.
Read by itself, assertTrue(range.includes(7)); is saying: "assert that the modified range includes 7". Read in the context of the first assertion, it's saying: "assert that invoking encompass() causes it to include 7". And since encompass is the unit we're testing, I think that's of some (small) value.
I'm accepting my own answer; a lot of the others misconstrued my question to be about testing the setup. I think this is slightly different.
An Arrange-Assert-Act-Assert test can always be refactored into two tests:
1. Arrange-Assert
and
2. Arrange-Act-Assert
The first test will only assert on that which was set up in the Arrange phase, and the second test will only assert for that which happened in the Act phase.
This has the benefit of giving more precise feedback on whether it's the Arrange or the Act phase that failed, while in the original Arrange-Assert-Act-Assert these are conflated and you would have to dig deeper and examine exactly what assertion failed and why it failed in order to know if it was the Arrange or Act that failed.
It also satisfies the intention of unit testing better, as you are separating your test into smaller independent units.
I am now doing this. A-A-A-A of a different kind
Arrange - setup
Act - what is being tested
Assemble - what is optionally needed to perform the assert
Assert - the actual assertions
Example of an update test:
Arrange:
New object as NewObject
Set properties of NewObject
Save the NewObject
Read the object as ReadObject
Act:
Change the ReadObject
Save the ReadObject
Assemble:
Read the object as ReadUpdated
Assert:
Compare ReadUpdated with ReadObject properties
The reason the Act does not contain the reading of ReadUpdated is that the read is not part of the act; the act is only changing and saving. So what really "arranges" ReadUpdated for the assertion I call Assemble, to avoid confusing it with the Arrange section.
ASSERT should only contain assertions. That leaves ASSEMBLE, between ACT and ASSERT, which sets up the assert.
Lastly, if you are failing in the Arrange, your tests are not correct, because you should have other tests to prevent/find these trivial bugs. For the scenario I present, there should already be other tests which test READ and CREATE. If you create a "Guard Assertion", you may be breaking DRY and creating maintenance overhead.
I don't use that pattern, because I think doing something like:
Arrange
Assert-Not
Act
Assert
May be pointless, because supposedly you know your Arrange part works correctly, which means that whatever is in the Arrange part must be tested as well, or be simple enough to not need tests.
Using your answer's example:
public void testEncompass() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7)); // <-- Pointless and against DRY if there
                                    //     are unit tests for Range(int, int)
    range.encompass(7);
    assertTrue(range.includes(7));
}
Tossing in a "sanity check" assertion to verify state before you perform the action you're testing is an old technique. I usually write them as test scaffolding to prove to myself that the test does what I expect, and remove them later to avoid cluttering tests with test scaffolding. Sometimes, leaving the scaffolding in helps the test serve as narrative.
I've already read about this technique - possibly from you btw - but I do not use it; mostly because I'm used to the triple A form for my unit tests.
Now I'm getting curious and have some questions: how do you write your test - do you cause this assertion to fail, following a red-green-red-green-refactor cycle, or do you add it afterwards?
Does it fail sometimes, perhaps after you refactor the code? What does that tell you? Perhaps you could share an example where it helped. Thanks.
I have done this before when investigating a test that failed.
After considerable head scratching, I determined that the cause was that the methods called during "Arrange" were not working correctly. The test failure was misleading. I added an Assert after the Arrange. This made the test fail in a place which highlighted the actual problem.
I think there is also a code smell here if the Arrange part of the test is too long and complicated.
In general, I like "Arrange, Act, Assert" very much and use it as my personal standard. The one thing it fails to remind me to do, however, is to dis-arrange what I have arranged when the assertions are done. In most cases this doesn't cause much annoyance, as most things auto-magically go away via garbage collection, etc. If you have established connections to external resources, however, you will probably want to close those connections when you're done with your assertions, or you may have a server or expensive resource out there somewhere holding on to connections or vital resources that it should be able to give away to someone else. This is particularly important if you're one of those developers who does not use TearDown or TestFixtureTearDown to clean up after one or more tests. Of course, "Arrange, Act, Assert" is not responsible for my failure to close what I open; I only mention this "gotcha" because I have not yet found a good "A-word" synonym for "dispose" to recommend! Any suggestions?
Have a look at Wikipedia's entry on Design by Contract. The Arrange-Act-Assert holy trinity is an attempt to encode some of the same concepts and is about proving program correctness. From the article:
The notion of a contract extends down to the method/procedure level; the contract for each method will normally contain the following pieces of information:
Acceptable and unacceptable input values or types, and their meanings
Return values or types, and their meanings
Error and exception condition values or types that can occur, and their meanings
Side effects
Preconditions
Postconditions
Invariants
(more rarely) Performance guarantees, e.g. for time or space used
There is a tradeoff between the amount of effort spent on setting this up and the value it adds. A-A-A is a useful reminder for the minimum steps required but shouldn't discourage anyone from creating additional steps.
Depends on your testing environment/language, but usually if something in the Arrange part fails, an exception is thrown and the test fails displaying it instead of starting the Act part. So no, I usually don't use a second Assert part.
Also, in the case that your Arrange part is quite complex and doesn't always throw an exception, you might consider wrapping it inside some method and writing its own test for it, so you can be sure it won't fail (without throwing an exception).
If you really want to test everything in the example, try more tests... like:
public void testIncludes7() throws Exception {
    Range range = new Range(0, 5);
    assertFalse(range.includes(7));
}
public void testIncludes5() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(5));
}
public void testIncludes0() throws Exception {
    Range range = new Range(0, 5);
    assertTrue(range.includes(0));
}
public void testEncompassInc7() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(7));
}
public void testEncompassInc5() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(5));
}
public void testEncompassInc0() throws Exception {
    Range range = new Range(0, 5);
    range.encompass(7);
    assertTrue(range.includes(0));
}
Because otherwise you are missing so many possibilities for error... e.g. after encompass, does the range only include 7, etc.
There are also tests for the length of the range (to ensure it didn't also encompass a random value), and another set of tests entirely for trying to encompass 5 in the range... what would we expect - an exception in encompass, or the range to be unaltered?
Anyway, the point is if there are any assumptions in the act that you want to test, put them in their own test, yes?
I use:
1. Setup
2. Act
3. Assert
4. Teardown
Because a clean setup is very important.

Is Assert.Fail() considered bad practice?

I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of todo list. To make sure I don't forget, I put an Assert.Fail() in the body.
When trying out xUnit.Net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false) but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so why?
@Martin Meredith
That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once, or I think about a test to write while I'm working on something else. That's when I write an empty failing test to remember it. By the time I get around to writing the test, I neatly work test-first.
@Jimmeh
That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. Have to try that out.
@Matt Howells
Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.
@Mitch Wheat
That's what I was looking for. It seems it was left out to prevent it being abused in another way I abuse it.
For this scenario, rather than calling Assert.Fail, I do the following (in C# / NUnit)
[Test]
public void MyClassDoesSomething()
{
    throw new NotImplementedException();
}
It is more explicit than an Assert.Fail.
There seems to be general agreement that it is preferable to use more explicit assertions than Assert.Fail(). Most frameworks have to include it though because they don't offer a better alternative. For example, NUnit (and others) provide an ExpectedExceptionAttribute to test that some code throws a particular class of exception. However in order to test that a property on the exception is set to a particular value, one cannot use it. Instead you have to resort to Assert.Fail:
[Test]
public void ThrowsExceptionCorrectly()
{
    const string BAD_INPUT = "bad input";
    try
    {
        new MyClass().DoSomething(BAD_INPUT);
        Assert.Fail("No exception was thrown");
    }
    catch (MyCustomException ex)
    {
        Assert.AreEqual(BAD_INPUT, ex.InputString);
    }
}
The xUnit.Net method Assert.Throws makes this a lot neater without requiring an Assert.Fail method. By not including an Assert.Fail() method xUnit.Net encourages developers to find and use more explicit alternatives, and to support the creation of new assertions where necessary.
It was deliberately left out. This is Brad Wilson's reply as to why there is no Assert.Fail():
We didn't overlook this, actually. I find Assert.Fail is a crutch which implies that there is probably an assertion missing. Sometimes it's just the way the test is structured, and sometimes it's because Assert could use another assertion.
I've always used Assert.Fail() for handling cases where you've detected that a test should fail through logic beyond simple value comparison. As an example:
try
{
    // Some code that should throw ExceptionX
    Assert.Fail("ExceptionX should be thrown");
}
catch (ExceptionX ex)
{
    // test passed
}
Thus the lack of Assert.Fail() in the framework looks like a mistake to me. I'd suggest patching the Assert class to include a Fail() method, and then submitting the patch to the framework developers, along with your reasoning for adding it.
As for your practice of creating tests that intentionally fail in your workspace, to remind yourself to implement them before committing, that seems like a fine practice to me.
I use MbUnit for my unit testing. It has an option to Ignore tests, which show up as orange (rather than green or red) in the test suite. Perhaps xUnit has something similar, which would mean you don't even have to put any assert into the method, because it would show up in an annoyingly different colour, making it hard to miss?
Edit:
In MbUnit it is done in the following way:
[Test]
[Ignore]
public void YourTest()
{ }
This is the pattern that I use when writing a test for code that I want to throw an exception by design:
[TestMethod]
public void TestForException()
{
    Exception _Exception = null;
    try
    {
        // Code that I expect to throw the exception.
        MyClass _MyClass = null;
        _MyClass.SomeMethod();
        // Code that I expect to throw the exception.
    }
    catch (Exception _ThrownException)
    {
        _Exception = _ThrownException;
    }
    finally
    {
        Assert.IsNotNull(_Exception);
        // Replace NullReferenceException with expected exception.
        Assert.IsInstanceOfType(_Exception, typeof(NullReferenceException));
    }
}
IMHO this is a better way of testing for exceptions than using Assert.Fail(). The reason is that I not only test that an exception is thrown at all, but also test the exception type. I realise this is similar to the answer from Matt Howells, but IMHO using the finally block is more robust.
Obviously it would still be possible to include other Assert methods to test the exception's input string, etc. I would be grateful for your comments and views on my pattern.
Personally I have no problem with using a test suite as a todo list like this as long as you eventually get around to writing the test before you implement the code to pass.
Having said that, I used to use this approach myself, although now I'm finding that doing so leads me down a path of writing too many tests upfront, which in a weird way is like the reverse problem of not writing tests at all: you end up making decisions about design a little too early IMHO.
Incidentally in MSTest, the standard Test template uses Assert.Inconclusive at the end of its samples.
AFAIK the xUnit.NET framework is intended to be extremely lightweight and yes they did cut Fail deliberately, to encourage the developer to use an explicit failure condition.
Wild guess: withholding Assert.Fail is intended to stop you thinking that a good way to write test code is as a huge heap of spaghetti leading to an Assert.Fail in the bad cases. [Edit to add: other people's answers broadly confirm this, but with quotations]
Since that's not what you're doing, it's possible that xUnit.Net is being over-protective.
Or maybe they just think it's so rare and so unorthogonal as to be unnecessary.
I prefer to implement a function called ThisCodeHasNotBeenWrittenYet (actually something shorter, for ease of typing). Can't communicate intention more clearly than that, and you have a precise search term.
Whether that fails, or is not implemented (to provoke a linker error), or is a macro that doesn't compile, can be changed to suit your current preference. For instance when you want to run something that is finished, you want a fail. When you're sitting down to get rid of them all, you may want a compile error.
With the production code I usually do:
void goodCode() {
    // TODO void goodCode()
    throw new UnsupportedOperationException("void goodCode()");
}
With the test code I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    Assert.fail("Some descriptive text about what to test");
}
If using JUnit, and I don't want to get a failure but an error, then I usually do:
@Test
void testSomething() {
    // TODO void testSomething
    throw new UnsupportedOperationException("Some descriptive text about what to test");
}
Beware Assert.Fail and its corrupting influence, which leads developers to write silly or broken tests. For example:
[TestMethod]
public void TestWork()
{
    try {
        Work();
    }
    catch {
        Assert.Fail();
    }
}
This is silly, because the try-catch is redundant. A test fails if it throws an exception.
Also
[TestMethod]
public void TestDivide()
{
    try {
        Divide(5, 0);
        Assert.Fail();
    } catch { }
}
This is broken: the test will always pass whatever the outcome of the Divide function, because the bare catch also swallows the exception that Assert.Fail() throws. Again, a test fails if and only if it throws an exception.
If you're writing a test that just fails, then writing the code for it, and only then writing the real test, that isn't Test Driven Development.
Technically, Assert.Fail() shouldn't be needed if you're using test-driven development correctly.
Have you thought of using a Todo List, or applying a GTD methodology to your work?
MS Test has Assert.Fail() but it also has Assert.Inconclusive(). I think that the most appropriate use for Assert.Fail() is if you have some in-line logic that would be awkward to put in an assertion, although I can't even think of any good examples. For the most part, if the test framework supports something other than Assert.Fail() then use that.
I think you should ask yourself what (upfront) testing should do.
First, you write a (set of) test(s) without the implementation.
Maybe also the rainy-day scenarios.
All those tests must fail to be correct tests.
So you want to achieve two things:
1) Verify that your implementation is correct;
2) Verify that your unit tests are correct.
Now, if you do upfront TDD, you want to execute all your tests, including the NYI (not yet implemented) parts.
The result of your total test run passes if:
1) All implemented stuff succeeds
2) All NYI stuff fails
After all, it would be a unit-test omission if your unit tests succeeded while there is no implementation, wouldn't it?
You want to end up with something like a mail from your continuous integration build that checks all implemented and not-implemented code, and is sent if any implemented code fails or any not-implemented code succeeds. Both are undesired results.
Just writing [Ignore] tests won't do the job.
Neither will an assert that stops at the first failure, without running the other test lines in the test.
Now, how to achieve this?
I think it requires a somewhat more advanced organisation of your testing.
And it requires some mechanism other than asserts to achieve these goals.
I think you have to split up your tests and create some tests that run completely but must fail, and vice versa.
Ideas are to split your tests over multiple assemblies, or to use grouping of tests (ordered tests in MSTest may do the job).
Still, a CI build that mails you if not all tests in the NYI department fail is not easy or straightforward.
Why would you use Assert.Fail for saying that an exception should be thrown? That is unnecessary. Why not just use the ExpectedException attribute?
This is our use case for Assert.Fail().
One important goal for our Unit tests is that they don't touch the database.
Sometimes mocking doesn't happen properly, or application code is modified and a database call is inadvertently made.
This can happen quite deep in the call stack. The exception may be caught so it won't bubble up, or, because the tests were initially running with a database, the call will simply work.
What we've done is add a config value to the unit test project so that when the database connection is first requested we can call Assert.Fail("Database accessed");
Assert.Fail() acts globally, even across different libraries, so this acts as a catch-all for all of the unit tests.
If any one of them hits the database in a unit test project then they will fail.
We therefore fail fast.