Asserting setup and pre-conditions in Unit Tests - unit-testing

Is having more than one assert per test a really bad smell? I usually try to follow the “arrange, act, assert” pattern as well as the single assert per test guideline. I think having clean, small, isolated tests is pure awesomeness. For the most part I manage to do this. However, sometimes I find myself asserting “pre-conditions” right after my arrange like so:
' arrange:
' pre-conditions:
Assert that the arrange worked
' act:
' assert:
Is my test testing too much? Is it caring about things it shouldn’t care about? I’d love to hear some opinions on this.

As I said here, I think perhaps our best practice should be not Arrange-Act-Assert but rather Arrange-Assume-Act-Assert: before Acting, we assert that the desired result of the Action is not already in effect. This isn't exactly the same as what you are asking; generally I don't think it's important to verify the setup, because setup errors tend to manifest themselves pretty "loudly" in any case; but it's a good reason to have a second assert in the test.
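Something like the following sketch is what I have in mind, using NUnit's Assume.That; the Account class and its members are hypothetical stand-ins:

using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    [Test]
    public void Deposit_IncreasesBalance()
    {
        // Arrange: a hypothetical account starting empty.
        var account = new Account();

        // Assume: the desired end state is not already in effect.
        // If this fails, NUnit reports the test as inconclusive rather than failed.
        Assume.That(account.Balance, Is.Not.EqualTo(100m));

        // Act
        account.Deposit(100m);

        // Assert
        Assert.That(account.Balance, Is.EqualTo(100m));
    }
}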

I think having to use Assert to validate the setup is suboptimal, as it muddies the results (it might be hard to figure out what you're actually testing if you're just looking at the output).
I recognize that there are times when this is necessary, or you just want to make sure that everything is being tested in the way you intend.
My practice is to use Debug.Assert (in C#) for this purpose, so that the setup validation does not become part of the test output.
You might be able to achieve the same in other languages by throwing an exception if the setup does not put the system into an expected state.
Different test runners may handle this differently, so you should make sure the approach has the desired effect (tests fail when the setup is wrong, but there is no extra report output as long as the Debug.Assert passes or no exception is thrown).
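As a rough illustration of that approach (the class under test and its members are hypothetical), the setup guards its own wiring with Debug.Assert so the check never shows up as a test assertion:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class OrderProcessorTests
{
    private OrderProcessor processor;   // hypothetical class under test

    [SetUp]
    public void SetUp()
    {
        processor = new OrderProcessor();
        processor.LoadDefaultRules();

        // Setup validation only; Debug.Assert is compiled only into debug builds,
        // and how a failure surfaces depends on the test runner.
        Debug.Assert(processor.RuleCount > 0, "Setup failed: no rules loaded");
    }

    [Test]
    public void Process_ValidOrder_IsAccepted()
    {
        Assert.That(processor.Process(new Order()), Is.True);
    }
}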

I would not use a vanilla "Assert" for that but rather Assert.Inconclusive (MSTest)¹. The test has not failed, so you don't want to fail the test run.
¹ Assume in JUnit and NUnit, I believe.
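For instance (MSTest shown; the importer class and the data file are made up for illustration), the pre-condition check marks the run inconclusive instead of failing it:

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ImportTests
{
    [TestMethod]
    public void Import_ReadsAllRows()
    {
        // Pre-condition on the arranged environment, not on the behavior under test.
        if (!File.Exists("fixtures/sample.csv"))
            Assert.Inconclusive("Test data file missing; the environment is not set up.");

        var importer = new CsvImporter();                  // hypothetical class under test
        var rows = importer.Import("fixtures/sample.csv");

        Assert.AreEqual(3, rows.Count);
    }
}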

Related

nUnit Test interdependencies/hierarchies

Is there a way to give nUnit a hierarchy of tests, so that later tests aren't run if earlier tests fail?
Suppose I have a set of data that is expected to be processed and ordered, and I’ve got a UT that tests that the ordering is done, but also a bunch of other tests that test the processing and which assume that the ordering has been successful, and will hence fail spuriously if that’s false.
(And suppose that breaking this dependency is not feasible - I know that is the ideal solution!)
Then it would be nice to tell nUnit … "If the ordering test has failed, then don’t even bother with these ones (throw inconclusive, maybe?) ‘cos we know they ain’t gonna work."
Obviously I could do it manually, but that's going to add a lot of essentially irrelevant cruft to the tests (we have nice short tests, so a "call the head test and catch Exceptions" block would double the length of some tests)
I'm hoping that nUnit might JustDoThis already, but I can’t see it anywhere?
As mentioned in your question, the most 'proper' fix is to break the dependency between the tests. If you really want to do this though, NUnit does support assumptions (which are a bit like assertions, but at the start of your test), which could be used to check for some precondition that affects the validity of your test. Searching for "NUnit Assume" brought up the following StackOverflow question, which looks similar to this one: How do I ignore a test based on another test in NUnit?
I believe hierarchy in unit tests is a bad idea. As you said, your unit tests are small, tidy and independent, and they should stay that way. Adding dependencies between them breaks that.
To offer a workaround, I would suggest using the Category attribute combined with the /stoponerror option. You could group tests by importance and run them by category, and the stoponerror option will stop executing the remaining tests when the first one fails. I'm not sure whether this also works inside the Visual Studio integration, but you can achieve it with the NUnit command-line tool. See this
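A sketch of that grouping idea (the category names and test bodies are made up):

using NUnit.Framework;

[TestFixture]
public class OrderingTests
{
    [Test, Category("Ordering")]
    public void Items_AreOrderedByDate()
    {
        // ... the ordering check that the other tests depend on
    }
}

[TestFixture]
public class ProcessingTests
{
    [Test, Category("Processing")]
    public void Items_AreProcessed()
    {
        // ... assumes ordering already holds
    }
}

Running the Ordering category first and stopping on the first error (something like nunit-console /include:Ordering /stoponerror, though the exact switches depend on your console runner version) approximates the hierarchy without wiring the tests together.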

Should I create a new test method for each assertion?

I know that this is subjective, but I'd like to follow the most common practice.
Do you normally create one test method for each class method and stuff it with multiple assertions, or do you create one test method per assertion?
For example, if I am testing a bank account's withdraw method, and I want to make sure that an exception is thrown if the user tries to overdraw the account or withdraw a negative amount, should I create testOverdraw and testNegativeWithdrawal, or would I just combine those two assertions in a method called testWithdraw?
Think of it this way: each test should stand on its own and exercise a relatively discrete set of functionality. If you want to assert whether three things are true about some method that you have created, then you should create a test that includes those three things.
Thus, I have to strongly disagree with the others who have answered. Arbitrarily limiting yourself to one assertion per test will do nothing for you except make your testing unwieldy and tedious. Ultimately it may put you off testing altogether - which would certainly be a shame: bad for your project and career.
Now, that does not mean you have license to write large, unwieldy or multi-purpose testing routines. Indeed, I don't think I've ever written one that is more than 20 lines or so.
As far as knowing which assertion fails when there are several in one function: both nUnit and MSTest give you a description and a link that takes you right to the offending line when an assertion fails (nUnit will require an integration tool such as TestDriven.net). This makes figuring out the failure point trivial. Both will also stop on the first failure in a function, and both give you the ability to do a debug walkthrough as well.
Personally I would create one test for each assertion otherwise you have to dig to find the reason for the failure rather than it being obvious from the test name.
If you have to write a few lines of code to set up a test and don't want to duplicate that then depending on your language and test framework you should be able to create a set of tests where each test will execute a block of code before it runs.
Make multiple test methods; do not combine them into one. Each of your test methods should test one behavior of the method. As you are saying, testing with a negative balance is a different behavior than testing with a positive balance. So, that would be two tests.
You want to do it this way so that when a test fails, you are not stuck trying to figure out which part in that test failed.
One way to do it is have one separate method for each different scenario or setup. In your case you'd probably want one scenario where there is sufficient funds, and one scenario where there is insufficient funds. And assert that in the first one everything works, and in the second one the same operations won't work.
I would recommend having one test method per assertion, or rather per expected behavior. That makes it much faster to localize the erroneous code if a test fails.
I would make those two separate assertions.
The first represents a valid operation that would happen if a user was using the account regularly, the second would represent a case where data sanitizing was not done, or not properly done.
You want separate test cases so that you can logically implement the test cases as needed, especially in regression scenarios where running all tests can be prohibitively expensive.
testOverdraw and testNegativeWithdrawal are two separate behaviors. They shall be tested separately.
A good rule of thumb is to have only one action on the method under test in one unit test (not counting setup and assertions).
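To make the split concrete, a sketch along those lines (NUnit; the BankAccount class and its exception types are hypothetical):

using System;
using NUnit.Framework;

[TestFixture]
public class WithdrawTests
{
    [Test]
    public void Withdraw_MoreThanBalance_Throws()
    {
        var account = new BankAccount(initialBalance: 50m);   // hypothetical

        Assert.Throws<InsufficientFundsException>(() => account.Withdraw(100m));
    }

    [Test]
    public void Withdraw_NegativeAmount_Throws()
    {
        var account = new BankAccount(initialBalance: 50m);

        Assert.Throws<ArgumentOutOfRangeException>(() => account.Withdraw(-10m));
    }
}

When the second test goes red you know immediately that negative-amount handling broke, without reading any assertion output.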
From the NUnit documentation:
"If an assertion fails, the method call does not return and an error is reported. If a test contains multiple assertions, any that follow the one that failed will not be executed. For this reason, it's usually best to try for one assertion per test."
http://www.nunit.org/index.php?p=assertions&r=2.4.6
However, nothing forces you to follow best practices. If it isn't worth your time and effort for this particular project to have 100% granular tests, where one bug means exactly one test failure (and vice-versa), then don't do it. Ultimately it is a tool for you to use at the best cost/benefit balance that you can. That balance depends on a lot of variables that are specific to your scenario, and specific to each project.

Unit testing: Is it a good practice to have assertions in setup methods?

In unit testing, the setup method is used to create the objects needed for testing.
In those setup methods, I like using assertions: I know what values I want to see in those
objects, and I like to document that knowledge via an assertion.
In a recent post on unit tests calling other unit tests here on stackoverflow, the general feeling seems to be that unit tests should not call other tests:
The answer to that question seems to be that you should refactor your setup, so
that test cases do not depend on each other.
But there isn't much difference between a "setup-with-asserts" and a unit test calling other unit tests.
Hence my question: Is it good practice to have assertions in setup methods?
EDIT:
The answer turns out to be: this is not a good practice in general. If the setup results need to be tested, it is recommended to add a separate test method with the assertions (the answer I ticked); for documenting intent, consider using Java asserts.
Instead of assertions in the setup to check the result, I used a simple test (a test method alongside the others, but positioned as the first test method).
I have seen several advantages:
The setup stays short and focused, which helps readability.
The assertions are run only once, which is more efficient.
Usage and discussion:
For example, I name the method testSetup().
To use it: when I have several test errors in that class, I know that if testSetup() has an error, I don't need to bother with the other errors; I need to fix this one first.
If someone is bothered by this, and wants to make this dependency explicit, the testSetup() could be called in the setup() method. But I don't think it matters. My point is that, in JUnit, you can already have something similar in the rest of your tests:
some tests that test local code,
and some tests that call more global code, which indirectly calls the same code as the previous test.
When you read the test result where both fail, you already have to take care of this dependency that is not in the test, but in the code being called. You have to fix the simple test first, and then rerun the global test to see if it still fails.
This is the reason why I'm not bothered by the implicit dependency I explained before.
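Roughly, the idea looks like this (shown here as an NUnit analogue of the JUnit original; all the names are invented):

using NUnit.Framework;

[TestFixture]
public class ReportTests
{
    private ReportContext context;   // hypothetical shared fixture

    [SetUp]
    public void SetUp()
    {
        // Keep the setup itself short and assertion-free.
        context = ReportContext.LoadSample();
    }

    [Test]
    public void TestSetup()
    {
        // The one place that asserts that the fixture itself is sane.
        Assert.That(context.Entries, Is.Not.Empty);
    }

    [Test]
    public void Report_TotalsAllEntries()
    {
        Assert.That(new Report(context).Total, Is.GreaterThan(0));
    }
}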
Having assertions in the Setup/TearDown methods is not advisable. It makes the test less readable if the user needs to "understand" that some of the test logic is not in the test method.
There are times when you do not have a choice but to use the setup/teardown methods for something other than what they were intended for.
There is a bigger issue in this question: a test that calls another test is a smell of some problem in your tests.
Each test should test a specific aspect of your code and should only have one or two assertions in it, so if your test calls another test you might be testing too many things in that test.
For more information read: Unit Testing: One Test, One Assertion - Why It Works
They're different scenarios; I don't see the similarity.
Setup methods should contain code that is common to (ideally) all tests in a fixture. As such, there's nothing inherently wrong with putting asserts in a test setup method if certain things must be true before the rest of the test code executes. The setup is an extension of the test; it is part of the test as a whole. If the assert trips, people will discover which pre-requisite failed.
On the other hand, if the setup is complicated enough that you feel the need to assert it is correct, it may be a warning sign. Furthermore, if all tests do not require the setup's full output, then it is a sign that the fixture has poor cohesion and should be split up based on scenarios and/or refactored.
It's partly because of this that I tend to stay away from using Setup methods. Where possible, I use private factory methods or similar to set things up. It makes the test more readable and avoids confusion. Sometimes this is not practical (e.g. working with tightly coupled classes and/or when writing integration tests), but for the majority of my tests it does the job.
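For instance, a small sketch of the factory-method style (all names invented):

using NUnit.Framework;

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void Overdue_Invoice_AccruesInterest()
    {
        var invoice = CreateOverdueInvoice(days: 30);

        Assert.That(invoice.Interest, Is.GreaterThan(0m));
    }

    // Each test calls exactly the setup it needs, so the intent stays
    // visible in the test body instead of hiding in a shared [SetUp].
    private static Invoice CreateOverdueInvoice(int days)
    {
        var invoice = new Invoice(amount: 100m);   // hypothetical
        invoice.MarkOverdue(days);
        return invoice;
    }
}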
Follow your heart / Blink decisions. Asserts within a Setup method can document intent; improve readability. So personally I'd back you up on this.
It is different from a test calling other tests - which is bad. No test isolation. A test should not influence the outcome of another test.
Although it is not a frequent use case, I sometimes use Asserts inside a Setup method so that I know if the test setup has not taken place as I intended it to; usually when I'm dealing with components that I didn't write myself. An assertion failure which reads 'Setup failed!' in the errors tab quickly helps me zone in on the setup code instead of having to look at a bunch of failed tests.
A Setup failure usually should cause all tests in that fixture to fail, which is a smell that your nose should soon pick up ('all tests failed' usually implies the Setup broke), so assertions are not always needed. That said, be pragmatic: look at your specific context and 'add to taste.'
I use Java asserts, rather than JUnit ones, in the cases where something like this is necessary, e.g. when you use some other utility class to set up test data:
byte[] pkt = pktFactory.makePacket(TIME, 12, "23, F2");
assert pkt.length == 15;
Failing has the implication 'system is not in a state to even try to run this test'.

Does YAGNI also apply when writing tests?

When I write code I only write the functions I need as I need them.
Does this approach also apply to writing tests?
Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?
I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.
YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.
In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or repeating tests that only differ with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information to people who use your method.
In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes a sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can be sure when a test is really needed or not.
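For that sum example, the edge-case test might look something like this (the Adder class is hypothetical, and purely for illustration it is assumed to be specified to use checked arithmetic, so overflow should throw):

using System;
using NUnit.Framework;

[TestFixture]
public class AdderTests
{
    [Test]
    public void Add_TwoSmallInts_ReturnsSum()
    {
        Assert.That(Adder.Add(2, 3), Is.EqualTo(5));
    }

    [Test]
    public void Add_BothParametersAreMaxInt_Throws()
    {
        // Without this test, a silent wrap-around could go unnoticed.
        Assert.Throws<OverflowException>(() => Adder.Add(int.MaxValue, int.MaxValue));
    }
}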
Yes YAGNI absolutely applies to writing tests.
As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.
You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.
Also, would I write tests to validate input? Absolutely. However, I would do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
You should write the tests for the use cases you are going to implement during this phase of development.
This gives the following benefits:
Your tests help define the functionality of this phase.
You know when you've completed this phase because all of your tests pass.
You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will end up debugging that piece of code repeatedly.
So, no. YAGNI does not include tests :)
There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.
For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.
You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as you come upon them does you no real good, and may in fact cause harm.
What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?
(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)
Mild Update
I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.
You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.
However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).
Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.
If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.
Test code is similar to "code" itself: you won't write code in advance for every use case your app has, so why would you write test code in advance?

Unit Testing without Assertions

Occasionally I come across a unit test that doesn't Assert anything. The particular example I came across this morning was testing that a log file got written to when a condition was met. The assumption was that if no error was thrown, the test passed.
I personally don't have a problem with this, however it seems to be a bit of a "code smell" to write a unit test that doesn't have any assertions associated with it.
Just wondering what people's views on this are?
It's simply a very minimal test, and should be documented as such. It only verifies that it doesn't explode when run. The worst part about tests like this is that they present a false sense of security. Your code coverage will go up, but it's illusory. Very bad odor.
This would be the official way to do it:
// Act
Exception ex = Record.Exception(() => someCode());
// Assert
Assert.Null(ex);
If there is no assertion, it isn't a test.
Quit being lazy -- it may take a little time to figure out how to get the assertion in there, but well worth it to know that it did what you expected it to do.
These are known as smoke tests and are common. They're basic sanity checks. But they shouldn't be the only kinds of tests you have. You'd still need some kind of verification in another test.
Such a test smells. It should check that the file was actually written to, or at the very least, perhaps, that its modified time was updated.
I've seen quite a few tests written this way that ended up not testing anything at all i.e. the code didn't work, but it didn't blow up either.
If you have some explicit requirement that the code under test doesn't throw an exception and you want to explicitly call out this fact (tests as requirements docs) then I would do something like this:
try
{
    unitUnderTest.DoWork();
}
catch
{
    Assert.Fail("code should never throw exceptions but failed with ...");
}
... but this still smells a bit to me, probably because it's trying to prove a negative.
In some sense, you are making an implicit assertion - that the code doesn't throw an exception. Of course it would be more valuable to actually grab the file and find the appropriate line, but I suppose something's better than nothing.
It can be a good pragmatic solution, especially if the alternative is no test at all.
The problem is that the test would pass if all the functions called were no-ops. But sometimes it just isn't feasible to verify the side effects are what you expected. In the ideal world there would be enough time to write the checks for every test ... but I don't live there.
The other place I've used this pattern is for embedding some performance tests in with unit tests because that was an easy way to get them run every build. The tests don't assert anything, but measure how long the test took and log that.
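That performance pattern might be as simple as the following sketch (the measured work and the class names are made up; it records a timing rather than asserting one):

using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceProbes
{
    [Test]
    public void MeasureBulkImport()
    {
        var stopwatch = Stopwatch.StartNew();

        new Importer().ImportSampleBatch();   // hypothetical work under measurement

        stopwatch.Stop();

        // No assertion: the point is just to get a number logged on every build.
        Console.WriteLine("Bulk import took " + stopwatch.ElapsedMilliseconds + " ms");
    }
}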
If the test exists only to show that the call doesn't throw, the name of the test should document that:
void TestLogDoesNotThrowException(void) {
    log("blah blah");
}
How does the test verify that the log is written, without an assertion?
In general, I see this occurring in integration testing, where just the fact that something ran to completion is good enough. In that case I'm cool with it.
I guess if I saw it over and over again in unit tests I would be curious as to how useful the tests really were.
EDIT: In the example given by the OP, there is some testable outcome (the logfile result), so assuming it worked just because no error was thrown is lazy.
We do this all the time. We mock our dependencies using JMock, so I guess in a sense the JMock framework is doing the assertion for us... but it goes something like this. We have a controller that we want to test:
class Controller {
    private Validator validator;
    public void control() {
        validator.validate();
    }
    public void setValidator(Validator validator) { this.validator = validator; }
}
Now, when we test Controller we don't want to test Validator because it has its own tests. So we have a test with JMock just to make sure we call validate:
public void testControlShouldCallValidate() {
    mockValidator.expects(once()).method("validate");
    controller.control();
}
And that's all; there is no visible "assertion", but when you call control and the "validate" method is not called, the JMock framework throws an exception (something like "expected method not invoked").
We have those all over the place. It's a little backwards since you basically setup your assertion THEN make the call to the tested method.
I've seen something like this before and I think this was done just to prop up code coverage numbers. It's probably not really testing code behaviour. In any case, I agree that it (the intention) should be documented in the test for clarity.
I sometimes use my unit testing framework of choice (NUnit) to build methods that act as entry points into specific parts of my code. These methods are useful for profiling performance, memory consumption and resource consumption of a subset of the code.
These methods are definitely not unit tests (even though they're marked with the [Test] attribute) and are always flagged to be ignored and explicitly documented when they're checked into source control.
I also occasionally use these methods as entry points for the Visual Studio debugger. I use Resharper to step directly into the test and then into the code that I want to debug. These methods either don't make it as far as source control, or they acquire their very own asserts.
My "real" unit tests are built during normal TDD cycles, and they always assert something, although not always directly - sometimes the assertions are part of the mocking framework, and sometimes I'm able to refactor similar assertions into a single method. The names of those refactored methods always start with the prefix "Assert" to make it obvious to me.
I have to admit that I have never written a unit test that verified I was logging correctly. But I did think about it and came across this discussion of how it could be done with JUnit and Log4J. It's not too pretty, but it looks like it would work.
Tests should always assert something, otherwise what are you proving and how can you consistently reproduce evidence that your code works?
I would say that a test with no assertions indicates one of two things:
a test that isn't testing the code's important behavior, or
code without any important behaviors, that might be removed.
Thing 1
Most of the comments in this thread are about thing 1, and I would agree that if code under test has any important behavior, then it should be possible to write tests that make assertions about that behavior, either by
asserting on a function/method return value,
asserting on calls to 'test double' dependencies, or
asserting on changes to visible state.
If the code under test has important behavior, but there aren't assertions on the correctness of that behavior, then the test is deficient.
Your question appears to belong in this category. The code under test is supposed to log when a condition is met. So there are at least two tests:
Given that the condition is met, when we call the method, then does the logging occur?
Given that the condition is not met, when we call the method, then does the logging not occur?
The test would need a way to arrange the state of the code so that the condition was or was not met, and it would need a way to confirm that the logging either did or did not occur, probably with some logging 'test double' that just records the logging calls (people often use mocking frameworks for this).
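A hand-rolled test double for the logger makes those two tests straightforward. Everything below is a hypothetical sketch rather than the OP's code, with the threshold and class names invented:

using System.Collections.Generic;
using NUnit.Framework;

public interface ILogger { void Write(string message); }

// Test double that just records calls so the tests can assert on them.
public class RecordingLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();
    public void Write(string message) { Messages.Add(message); }
}

[TestFixture]
public class AuditingTests
{
    [Test]
    public void ConditionMet_WritesToLog()
    {
        var logger = new RecordingLogger();
        var sut = new Auditor(logger);      // hypothetical class under test

        sut.Handle(amount: 10000m);         // over the (assumed) auditing threshold

        Assert.That(logger.Messages, Is.Not.Empty);
    }

    [Test]
    public void ConditionNotMet_DoesNotWriteToLog()
    {
        var logger = new RecordingLogger();
        var sut = new Auditor(logger);

        sut.Handle(amount: 5m);             // under the threshold

        Assert.That(logger.Messages, Is.Empty);
    }
}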
Thing 2
So how about those other tests, that lack assertions, but it's because the code under test doesn't do anything important? I would say that a judgment call is required. In large code bases with high code velocity (many commits per day) and with many simultaneous contributors, it is necessary to deliver code incrementally in small commits. This is so that:
your code reviewers are not overwhelmed by large complicated commits
you avoid merge conflicts
it is easy to revert your commit if it causes a fault.
In these situations, I have added 'placeholder' classes, which don't do anything interesting, but which provide the structure for the implementation that will follow. Adding this class now, and even using it from other classes, can help show reviewers how the pieces will fit together even if the important behavior of the new class is not yet implemented.
So, if we assume that such placeholders are appropriate to add, should we test them? It depends. At the least, you will want to confirm that the class is syntactically valid, and perhaps that none of its incidental behaviors cause uncaught exceptions.
For examples:
Python is an interpreted language, and so your continuous build may not have a way to confirm that your placeholder class is syntactically valid unless it executes the code as part of a test.
Your placeholder may have incidental behavior, such as logging statements. These behaviors are not important enough to assert on because they are not an essential part of the class's behavior, but they are potential sources of exceptions. Most test frameworks treat uncaught exceptions as errors, and so by executing this code in a test, you are confirming that the incidental behavior does not cause uncaught exceptions.
Personally I believe that this reasoning supports the temporary inclusion of assertion-free tests in a code base. That said, the situation should be temporary, and the placeholder class should soon receive a more complete implementation, or it should be removed.
As a final note, I don't think it's a good idea to include asserts on incidental behavior just to satisfy a formalism that 'all tests must have assertions'. You or another author may forget to remove these formalistic assertions, and then they will clutter the tests with assertions of non-essential behavior, distracting focus from the important assertions. Many of us are probably familiar with the situation where you come upon a test, see something that looks like it doesn't belong, and say, "I'd really like to remove this... but it makes no sense why it's there. It might be there for some potentially obscure and important reason that the original author forgot to document. I should probably just leave it so that I 1) respect the intentions of the original author, and 2) don't end up breaking anything and making my life more difficult." (See Chesterton's fence.)