Unit Testing without Assertions - unit-testing

Occasionally I come across a unit test that doesn't Assert anything. The particular example I came across this morning was testing that a log file got written to when a condition was met. The assumption was that if no error was thrown, the test passed.
I personally don't have a problem with this; however, it seems to be a bit of a "code smell" to write a unit test that doesn't have any assertions associated with it.
Just wondering what people's views on this are?

It's simply a very minimal test, and should be documented as such. It only verifies that it doesn't explode when run. The worst part about tests like this is that they present a false sense of security. Your code coverage will go up, but it's illusory. Very bad odor.

This would be the official way to do it (using xUnit's Record.Exception):
// Act
Exception ex = Record.Exception(() => someCode());
// Assert
Assert.Null(ex);

If there is no assertion, it isn't a test.
Quit being lazy -- it may take a little time to figure out how to get the assertion in there, but it's well worth it to know that the code did what you expected it to do.

These are known as smoke tests and are common. They're basic sanity checks. But they shouldn't be the only kinds of tests you have. You'd still need some kind of verification in another test.

Such a test smells. It should check that the file was actually written to, or at the very least that its modified time was updated.
I've seen quite a few tests written this way that ended up not testing anything at all i.e. the code didn't work, but it didn't blow up either.
If you have some explicit requirement that the code under test doesn't throw an exception and you want to explicitly call out this fact (tests as requirements docs) then I would do something like this:
try
{
    unitUnderTest.DoWork();
}
catch
{
    Assert.Fail("code should never throw exceptions but failed with ...");
}
... but this still smells a bit to me, probably because it's trying to prove a negative.

In some sense, you are making an implicit assertion - that the code doesn't throw an exception. Of course it would be more valuable to actually grab the file and find the appropriate line, but I suppose something's better than nothing.
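A minimal sketch of what that stronger check could look like, assuming NUnit and a hypothetical unitUnderTest that exposes the path of its log file (all names here are illustrative, not from the original post):
// Requires: using System.IO; using NUnit.Framework;
[Test]
public void DoWork_WhenConditionMet_WritesToLogFile()
{
    // Arrange: remember how big the log file was before acting
    string logPath = unitUnderTest.LogFilePath;                 // hypothetical property
    long lengthBefore = File.Exists(logPath) ? new FileInfo(logPath).Length : 0;

    // Act
    unitUnderTest.DoWork();

    // Assert: the file exists, grew, and contains the expected line
    Assert.IsTrue(File.Exists(logPath));
    Assert.Greater(new FileInfo(logPath).Length, lengthBefore);
    StringAssert.Contains("expected log entry", File.ReadAllText(logPath));
}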

It can be a good pragmatic solution, especially if the alternative is no test at all.
The problem is that the test would pass if all the functions called were no-ops. But sometimes it just isn't feasible to verify the side effects are what you expected. In the ideal world there would be enough time to write the checks for every test ... but I don't live there.
The other place I've used this pattern is for embedding some performance tests in with unit tests because that was an easy way to get them run every build. The tests don't assert anything, but measure how long the test took and log that.
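Something along these lines, as a sketch (the repository call is a stand-in for whatever is being timed):
// Requires: using System; using System.Diagnostics; using NUnit.Framework;
[Test]
public void MeasureBulkInsertDuration()
{
    var stopwatch = Stopwatch.StartNew();

    repository.BulkInsert(testData);   // hypothetical operation being timed

    stopwatch.Stop();
    // No assertion: we only record how long the run took
    Console.WriteLine("BulkInsert took {0} ms", stopwatch.ElapsedMilliseconds);
}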

The name of the test should document this.
void TestLogDoesNotThrowException(void) {
    log("blah blah");
}
How does the test verify that the log was written, without an assertion?

In general, I see this occurring in integration testing, where just the fact that something ran to completion successfully is good enough. In that case I'm cool with it.
I guess if I saw it over and over again in unit tests I would be curious as to how useful the tests really were.
EDIT: In the example given by the OP, there is a testable outcome (the log file result), so assuming it worked just because no error was thrown is lazy.

We do this all the time. We mock our dependencies using JMock, so I guess in a sense the JMock framework is doing the assertion for us... but it goes something like this. We have a controller that we want to test:
class Controller {
    private Validator validator;
    public void control() {
        validator.validate();
    }
    public void setValidator(Validator validator) { this.validator = validator; }
}
Now, when we test Controller we don't want to test Validator, because it has its own tests, so we have a test with JMock just to make sure we call validate:
public void testControlShouldCallValidate(){
    mockValidator.expects(once()).method("validate");
    controller.control();
}
And that's all. There is no visible "assertion", but when you call control and the "validate" method is not called, the JMock framework throws an exception (something like "expected method not invoked").
We have those all over the place. It's a little backwards, since you basically set up your assertion THEN make the call to the tested method.
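For anyone on .NET, a rough equivalent of the same idea with Moq might look like this (IValidator, Controller and the member names are a C# rendering of the example above, not the poster's actual code):
// Requires: using Moq; using NUnit.Framework;
[Test]
public void Control_ShouldCallValidate()
{
    var mockValidator = new Mock<IValidator>();
    var controller = new Controller();
    controller.SetValidator(mockValidator.Object);

    controller.Control();

    // The Verify call plays the role of the assertion
    mockValidator.Verify(v => v.Validate(), Times.Once());
}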

I've seen something like this before and I think this was done just to prop up code coverage numbers. It's probably not really testing code behaviour. In any case, I agree that it (the intention) should be documented in the test for clarity.

I sometimes use my unit testing framework of choice (NUnit) to build methods that act as entry points into specific parts of my code. These methods are useful for profiling performance, memory consumption and resource consumption of a subset of the code.
These methods are definitely not unit tests (even though they're marked with the [Test] attribute) and are always flagged to be ignored and explicitly documented when they're checked into source control.
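As a sketch, such an entry point might look like this (I'm using NUnit's [Explicit] attribute here as one way of flagging it; the profiled code is a placeholder):
// Requires: using NUnit.Framework;
[Test, Explicit("Not a real unit test: entry point for profiling/debugging only")]
public void ProfileReportGeneration()
{
    // No assertions: run this manually under a profiler or the debugger
    var generator = new ReportGenerator();   // hypothetical code being investigated
    generator.GenerateLargeReport();
}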
I also occasionally use these methods as entry points for the Visual Studio debugger. I use Resharper to step directly into the test and then into the code that I want to debug. These methods either don't make it as far as source control, or they acquire their very own asserts.
My "real" unit tests are built during normal TDD cycles, and they always assert something, although not always directly - sometimes the assertions are part of the mocking framework, and sometimes I'm able to refactor similar assertions into a single method. The names of those refactored methods always start with the prefix "Assert" to make it obvious to me.

I have to admit that I have never written a unit test that verified I was logging correctly. But I did think about it and came across this discussion of how it could be done with JUnit and Log4J. It's not too pretty, but it looks like it would work.

Tests should always assert something, otherwise what are you proving and how can you consistently reproduce evidence that your code works?

I would say that a test with no assertions indicates one of two things:
a test that isn't testing the code's important behavior, or
code without any important behaviors, that might be removed.
Thing 1
Most of the comments in this thread are about thing 1, and I would agree that if code under test has any important behavior, then it should be possible to write tests that make assertions about that behavior, either by
asserting on a function/method return value,
asserting on calls to 'test double' dependencies, or
asserting on changes to visible state.
If the code under test has important behavior, but there aren't assertions on the correctness of that behavior, then the test is deficient.
Your question appears to belong in this category. The code under test is supposed to log when a condition is met. So there are at least two tests:
Given that the condition is met, when we call the method, then does the logging occur?
Given that the condition is not met, when we call the method, then does the logging not occur?
The test would need a way to arrange the state of the code so that the condition was or was not met, and it would need a way to confirm that the logging either did or did not occur, probably with some logging 'test double' that just recorded the logging calls (people often use mocking frameworks for this).
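A minimal sketch of such a logging test double, assuming the code under test logs through an interface it owns (ILogger, ThingUnderTest and the method names are illustrative):
// Requires: using System.Collections.Generic; using NUnit.Framework;
public class FakeLogger : ILogger
{
    public List<string> Messages = new List<string>();
    public void Log(string message) { Messages.Add(message); }
}

[Test]
public void Logs_WhenConditionIsMet()
{
    var logger = new FakeLogger();
    var sut = new ThingUnderTest(logger);

    sut.DoWork(conditionMet: true);

    Assert.IsNotEmpty(logger.Messages);   // the logging occurred
}

[Test]
public void DoesNotLog_WhenConditionIsNotMet()
{
    var logger = new FakeLogger();
    var sut = new ThingUnderTest(logger);

    sut.DoWork(conditionMet: false);

    Assert.IsEmpty(logger.Messages);      // the logging did not occur
}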
Thing 2
So what about those other tests that lack assertions because the code under test doesn't do anything important? I would say that a judgment call is required. In large code bases with high code velocity (many commits per day) and with many simultaneous contributors, it is necessary to deliver code incrementally in small commits. This is so that:
your code reviewers are not overwhelmed by large complicated commits
you avoid merge conflicts
it is easy to revert your commit if it causes a fault.
In these situations, I have added 'placeholder' classes, which don't do anything interesting, but which provide the structure for the implementation that will follow. Adding this class now, and even using it from other classes, can help show reviewers how the pieces will fit together even if the important behavior of the new class is not yet implemented.
So, if we assume that such placeholders are appropriate to add, should we test them? It depends. At the least, you will want to confirm that the class is syntactically valid, and perhaps that none of its incidental behaviors cause uncaught exceptions.
For examples:
Python is an interpreted language, and so your continuous build may not have a way to confirm that your placeholder class is syntactically valid unless it executes the code as part of a test.
Your placeholder may have incidental behavior, such as logging statements. These behaviors are not important enough to assert on because they are not an essential part of the class's behavior, but they are potential sources of exceptions. Most test frameworks treat uncaught exceptions as errors, and so by executing this code in a test, you are confirming that the incidental behavior does not cause uncaught exceptions.
Personally I believe that this reasoning supports the temporary inclusion of assertion-free tests in a code base. That said, the situation should be temporary, and the placeholder class should soon receive a more complete implementation, or it should be removed.
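In that spirit, a temporary placeholder test might be nothing more than this (the class name is illustrative):
// Requires: using NUnit.Framework;
[Test]
public void PlaceholderReportSink_CanBeConstructedAndUsed()
{
    // No assertions yet: this only confirms the placeholder is wired up
    // and that its incidental behavior (e.g. logging) doesn't throw
    var sink = new PlaceholderReportSink();   // hypothetical placeholder class
    sink.Accept("nothing interesting yet");
}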
As a final note, I don't think it's a good idea to include asserts on incidental behavior just to satisfy a formalism that 'all tests must have assertions'. You or another author may forget to remove these formalistic assertions, and then they will clutter the tests with assertions of non-essential behavior, distracting focus from the important assertions. Many of us are probably familiar with the situation where we come upon a test, see something that looks like it doesn't belong, and say, "I'd really like to remove this... but it makes no sense why it's there. It might be there for some obscure and important reason that the original author forgot to document. I should probably just leave it so that I 1) respect the intentions of the original author, and 2) don't end up breaking anything and making my life more difficult." (See Chesterton's fence.)

Related

Is a postcondition a (type of) unit test?

I'm trying to incorporate some design-by-contract techniques into my coding style. Postconditions look a lot to me like embedded unit tests and I'm wondering if my thinking here is on the right track or way off-base.
Wikipedia defines a postcondition as "a condition or predicate that must always be true just after the execution of some section of code or after an operation in a formal specification. Postconditions are sometimes tested using assertions within the code itself".
Is that not very similar to what you do in a unit test that verifies state directly (doesn't use mocks)?
If that's the case:
1) By using post-conditions, aren't I now sort of embedding testing code in my production code, and isn't that frowned upon?
2) Should using postconditions change the structure of my unit tests? My first thought is that the assertion logic is moved from the tests to the postconditions. That is, tests will use the same inputs and I'm still testing everything I was testing before, but now instead of making assertions in the unit tests I'm making a simple binary assertion about the postconditions passing or not.
3) My second thought is that postcondition code might have control flow and is therefore not ideal for test code, which is supposed to be simple and avoid control flow. But, if I test the postconditions, can I then rely on them in my unit tests?
4) It seems difficult to test postconditions because if I understand them correctly they basically pass or fail and you would have to repeat the logic of the postcondition itself to check that it did the right thing. So, how do you test a postcondition? Do you check them by not utilizing them in your unit testing and ensuring your unit tests and postconditions pass or fail together?
5) My unit tests sometimes verify that a method has caused changes to state in collaborators. In standard practice, do postconditions cover collaborator state or just the state of the class they are defined on?
You are on the right track.
It is true that post-conditions serve a similar purpose to unit tests. The key difference is that the post-condition always runs, while the unit test only runs against a known set of data. This means that the post-condition is less likely to overlook the corner case you didn't think of, but is more expensive at run time.
Here are answers to your specific questions.
There is a run-time penalty to post-conditions. However (depending on your environment), it may be possible to drop the assertions for speed. (In C you can use an #ifdef, in Java look up AOP, in Python assert statements are skipped when you run with the -O flag, etc.) Should your assertions cause a performance problem, it is solvable. However, my preference would be to leave them on until you have a reason not to.
Some of your logic will naturally move from the unit test to the post-condition. However it is worthwhile to make sure that you have unit tests that run through all of the cases of interest for your post-condition. This is particularly true if you are dropping assertions in production for speed.
Post-conditions are not unit tests. Write them in whatever way that makes sense for what they do. (In general they should be somewhat simple.)
In general you test post-conditions as described in #2, by passing in a set of inputs of interest where the post-condition might possibly be violated, and check that it isn't. If you want to test the logic of the post-condition itself, then you can set up code that can violate the post-condition, but which will only run during tests. For instance have a global variable that tests can set which, if it is set, replaces the data to be returned with whatever you want. Now you can cause the post-condition to receive any input you want.
I'm not going to give you a hard and fast rule. They are your contracts. They should say what makes sense for what the function is doing. That said, what you are describing can lead to tight coupling between those objects. Tight coupling is something you should only do with good reason.
Contracts aren't a form of unit-testing. Rather they're a way of specifying (in an executable format) what conditions should hold before and after a particular function or method is called, and may also specify invariants of objects.
You still need tests when you have contracts, since just because you've specified what the functions are supposed to do doesn't mean they'll actually do it. But you'll find that your contracts help you debug, because having code that checks at run time that what's happening is what was expected means any logic or programming error will cause a failure near the code that contains the error.
You may find that with contracts you're happy to have fewer smaller tests and more larger-scale tests since the contracts will let you narrow down the source of an error even if the test is broad. Also, there's less need for unit tests to play the role of a specification of how the logic is supposed to work, further limiting the value of the smaller tests.
Contracts are like assertions in that you may choose to or choose not to have them enabled in production code. My opinion is that contracts tend to be more expensive than assertions and so you'll tend to have them disabled in production.
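For a concrete example of that enable/disable switch: in C#, Debug.Assert calls are stripped from builds that don't define the DEBUG symbol, because the method is marked [Conditional("DEBUG")]. A minimal sketch (the Withdraw method and balance field are illustrative):
// Requires: using System.Diagnostics;
public decimal Withdraw(decimal amount)
{
    decimal newBalance = balance - amount;

    // Post-condition check: compiled out of builds without DEBUG defined
    Debug.Assert(newBalance <= balance, "Balance must not increase on a withdrawal");

    balance = newBalance;
    return balance;
}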
As with any methodology or coding style - there is no single correct answer. However, one thing I found to be true so far is that there is never a 'one size fits all' solution.
So, if you implement these assertions in the logic of every single postcondition in your design, I'd consider it wrong.
My own opinion is that such assertions should be used only if failure to meet a postcondition leaves the entire system in a dangerously inconsistent state. If something like that happens, I'd definitely like the system to do something such as: send an email/SMS to the admin, halt production execution, run diagnostics, or whatever should be done for that particular system. Note that this would be an actual feature whose purpose is increased safety; it is not unit test code.
On the other hand, if you're coding assertions after every single method call, then as you noticed the only thing you are doing is hardcoding test cases into production code. That doesn't serve any real purpose other than making your codebase a big mess.

Are assertions redundant when you have unit tests?

I'm not yet used to writing unit tests, and I want to start doing so on a full framework of little tools (making it more secure to use). That way I'll certainly learn more about unit tests than I have until now.
However, I'm really used to adding assertions systematically everywhere I see a context I need to be sure about (assertions that are removed in the final release), mostly as preconditions in function implementations and each time I retrieve information that has to be correct (like C/C++ pointer validity, to take a famous example).
Now I'm asking: are assertions redundant when you have unit tests? It looks redundant, since you're testing the behaviour of a bit of code, but at the same time it's not the same execution context.
Should I do both?
Assertions that check preconditions can help detect and locate integration bugs. That is, whereas unit tests demonstrate that a method operates correctly when it is used (called) correctly, assertions that check preconditions can detect incorrect uses (calls) to the method. Using assertions causes faulty code to fail fast, which assists debugging.
Assertions not only validate the code but serve as a form of documentation that informs readers about properties of various structures that they can be sure will be satisfied at that point in execution (e.g. node->next != NULL). This helps to create a mental model of the code when you're reading through it.
Assertions also serve to prevent disaster scenarios during runtime. e.g.
assert(fuel);
launch_rocket();
Trying to launch when there is no fuel might be disastrous. Your unit tests might have caught this scenario, but it's always possible that you've missed it, and an abort because a condition is unmet is way better than a crash and burn at runtime.
So, in short, I'd keep them there. It's a good habit to add them and there's no gain in unlearning it.
I would say that you need both. Unit tests test that your in-method asserts are correct ;)
Unit testing and assertions are vastly different things, yet they complement each other.
Assertions deal with the proper input and output of methods.
Unit tests deal with ensuring a unit works according to some governing rules, e.g. a specification. A unit may be a method, a class, or a set of collaborating classes (sometimes such tests go by the name "integration tests").
So the assertions for a square root method are along the lines of the input and output both being non-negative numbers.
Unit tests, on the other hand, may check that the square root of 9 is 3 and that the square root of a negative number yields an exception.
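A small sketch of that split (names are illustrative; here the pre-condition is enforced with a guard that throws, so the negative-input test can observe it, while the post-condition is a plain assertion):
// Requires: using System; using System.Diagnostics; using NUnit.Framework;
public static double SquareRoot(double input)
{
    if (input < 0)
        throw new ArgumentOutOfRangeException("input", "input must be non-negative");

    double result = Math.Sqrt(input);

    // Post-condition assertion: the output is non-negative
    Debug.Assert(result >= 0, "result must be non-negative");
    return result;
}

[Test]
public void SquareRootOfNineIsThree()
{
    Assert.AreEqual(3.0, SquareRoot(9.0), 1e-9);
}

[Test]
public void SquareRootOfNegativeNumberThrows()
{
    Assert.Throws<ArgumentOutOfRangeException>(() => SquareRoot(-1.0));
}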

Asserting set up and pre-conditions in Unit Tests

Is having more than one assert per test a really bad smell? I usually try to follow the “arrange, act, assert” pattern as well as the single assert per test guideline. I think having clean, small, isolated tests is pure awesomeness. For the most part I manage to do this. However, sometimes I find myself asserting “pre-conditions” right after my arrange like so:
'arrange:
'pre-conditions:
Assert the arrange worked
'act:
'assert:
Is my test testing too much? Is it caring about things it shouldn’t care about? I’d love to hear some opinions on this.
As I said here, I think perhaps our best practice should be not Arrange-Act-Assert but rather Arrange-Assume-Act-Assert: before Acting, we assert that the desired result of the Action is not already in effect. This isn't exactly the same as what you are asking; generally I don't think it's important to verify the setup, because setup errors tend to manifest themselves pretty "loudly" in any case; but it is a good reason to have a second assert in the test.
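A small sketch of what Arrange-Assume-Act-Assert can look like (the Account class and amounts are illustrative):
// Requires: using NUnit.Framework;
[Test]
public void Deposit_IncreasesBalance()
{
    // Arrange
    var account = new Account();          // hypothetical class under test

    // Assume: the desired result is not already in effect
    Assert.AreNotEqual(100m, account.Balance);

    // Act
    account.Deposit(100m);

    // Assert
    Assert.AreEqual(100m, account.Balance);
}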
I think having to use Assert to validate the setup is suboptimal, as it muddies the results (it might be hard to figure out what you're testing, if just looking at the output).
I recognize that there are times when this is necessary, or you just want to make sure that everything is being tested in the way you intend.
My practice is to use Debug.Assert (in C#) for this purpose, so that the setup validation does not become part of the test output.
You might be able to achieve the same in other languages by throwing an exception if the setup does not put the system into an expected state.
Different test runners may handle this differently, so you should make sure that this approach has the desired effect (tests fail, but no extra report output as long as the Debug.Assert passes or no exception thrown).
I would not use a vanilla "Assert" for that but rather Assert.Inconclusive (MSTest)¹. The test has not failed, so you don't want to fail the test run.
1) Assume in JUnit and NUnit I believe.
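With NUnit, for example, that check can be written with Assume.That, which reports the test as inconclusive rather than failed when the arranged state isn't what you expected (the accounts and helper are illustrative):
// Requires: using NUnit.Framework;
[Test]
public void Transfer_MovesFundsBetweenAccounts()
{
    // Arrange
    var source = CreateFundedAccount(500m);   // hypothetical helper
    var target = CreateFundedAccount(0m);

    // Assume: a broken arrange makes the test inconclusive, not failed
    Assume.That(source.Balance, Is.EqualTo(500m));

    // Act
    source.Transfer(target, 200m);

    // Assert
    Assert.AreEqual(300m, source.Balance);
    Assert.AreEqual(200m, target.Balance);
}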

Unit testing: Is it a good practice to have assertions in setup methods?

In unit testing, the setup method is used to create the objects needed for testing.
In those setup methods, I like using assertions: I know what values I want to see in those
objects, and I like to document that knowledge via an assertion.
In a recent post on unit tests calling other unit tests here on stackoverflow, the general feeling seems to be that unit tests should not call other tests:
The answer to that question seems to be that you should refactor your setup, so
that test cases do not depend on each other.
But there isn't much difference between a "setup-with-asserts" and a unit test calling other unit tests.
Hence my question: Is it good practice to have assertions in setup methods?
EDIT:
The answer turns out to be: this is not a good practice in general. If the setup results need to be tested, it is recommended to add a separate test method with the assertions (the answer I ticked); for documenting intent, consider using Java asserts.
Instead of assertions in the setup to check its result, I used a simple test (a test method alongside the others, but positioned as the first test method).
I have seen several advantages:
The setup stays short and focused, for readability.
The assertions are run only once, which is more efficient.
Usage and discussion:
For example, I name the method testSetup().
To use it: when I have test errors in that class, I know that if testSetup() has an error, I don't need to bother with the other errors; I need to fix this one first.
If someone is bothered by this, and wants to make this dependency explicit, the testSetup() could be called in the setup() method. But I don't think it matters. My point is that, in JUnit, you can already have something similar in the rest of your tests:
some tests that test local code,
and some tests that call more global code, which indirectly calls the same code as the previous test.
When you read the test result where both fail, you already have to take care of this dependency that is not in the test, but in the code being called. You have to fix the simple test first, and then rerun the global test to see if it still fails.
This is the reason why I'm not bothered by the implicit dependency I explained before.
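A rough sketch of that testSetup() idea, in NUnit terms (the fixture contents are illustrative; the point is only that the setup's expected result gets its own test method instead of assertions inside SetUp):
// Requires: using NUnit.Framework;
private OrderProcessor processor;   // hypothetical object built by the setup

[SetUp]
public void SetUp()
{
    processor = new OrderProcessor();
    processor.LoadRules("test-rules.xml");
}

[Test]
public void TestSetup()
{
    // If this test fails, fix it before looking at any other failures
    Assert.IsNotNull(processor);
    Assert.AreEqual(3, processor.RuleCount);
}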
Having assertions in the Setup/TearDown methods is not advisable. It makes the test less readable if the user needs to "understand" that some of the test logic is not in the test method.
There are times when you have no choice but to use the setup/teardown methods for something other than what they were intended for.
There is a bigger issue in this question: a test that calls another test is a smell indicating some problem in your tests.
Each test should test a specific aspect of your code and should only have one or two assertions in it, so if your test calls another test you might be testing too many things in that test.
For more information read: Unit Testing: One Test, One Assertion - Why It Works
They're different scenarios; I don't see the similarity.
Setup methods should contain code that is common to (ideally) all tests in a fixture. As such, there's nothing inherently wrong with putting asserts in a test setup method if certain things must be true before the rest of the test code executes. The setup is an extension of the test; it is part of the test as a whole. If the assert trips, people will discover which pre-requisite failed.
On the other hand, if the setup is complicated enough that you feel the need to assert it is correct, it may be a warning sign. Furthermore, if all tests do not require the setup's full output, then it is a sign that the fixture has poor cohesion and should be split up based on scenarios and/or refactored.
It's partly because of this that I tend to stay away from using Setup methods. Where possible, I use private factory methods or similar to set things up. It makes the test more readable and avoids confusion. Sometimes this is not practical (e.g. working with tightly coupled classes and/or when writing integration tests), but for the majority of my tests it does the job.
Follow your heart / blink decisions. Asserts within a Setup method can document intent and improve readability, so personally I'd back you up on this.
It is different from a test calling other tests, which is bad: there is no test isolation, and a test should not influence the outcome of another test.
Although it is not a frequent use case, I sometimes use asserts inside a Setup method so that I can tell when the test setup has not taken place as I intended, usually when I'm dealing with components that I didn't write myself. An assertion failure that reads 'Setup failed!' in the errors tab quickly helps me zone in on the setup code instead of having to look at a bunch of failed tests.
A Setup failure usually causes all tests in that fixture to fail, which is a smell your nose should soon pick up ('all tests failed' usually implies the Setup broke), so assertions are not always needed. That said, be pragmatic, look at your specific context, and 'add to taste.'
I use Java asserts, rather than JUnit ones, in the cases where something like this is necessary, e.g. when you use some other utility class to set up test data:
byte[] pkt = pktFactory.makePacket(TIME, 12, "23, F2");
assert pkt.length == 15;
Failing has the implication 'system is not in a state to even try to run this test'.

Is unit-testing of accessors a must?

For classes that have several setters and getters besides other methods, is it reasonable to save time on writing unit tests for the accessors, taking into account that they will be called while testing the rest of the interface anyway?
I would only unit test them if they do more than set or return a variable. At some point, you need to trust that the compiler is going to generate the right program for you.
Absolutely. The idea of unit tests is to ensure that changes do not affect behavior in unknown ways. You might save some time by not writing a test for getFoo(). If you change the type of Foo to be something a little more complex then you could easily forget to test the accessor. If you are questioning whether you should write a test or not, you are better off writing it.
IMHO, if you are thinking about skipping tests for a method, you might want to ask yourself whether the method is necessary at all. In the interest of full disclosure, I am one of those people who only adds a setter or getter when it is proven necessary. You would be surprised how often you really don't need access to a specific member after construction, or how often you only want to give access to the result of some calculation that really depends on the member. But I digress.
A good mantra is to always add tests. If you don't think that you need one because the method is trivial, consider removing the method instead. I realize that the common advice is that it is okay to skip tests for "trivial" methods but you have to ask yourself if the method is even necessary. If you skip the test, you are saying that the method will always be trivial. Remember that unit tests also function as documentation of the intent and the contract offered. Hence tests of a trivial method state that the method is indeed meant to be trivial.
My criteria for testing is that every piece of code containing conditional logic (while, if, for, etc) be tested. If the accessors are simple getters/setters, I'd say testing them is wasting your time.
You don't have to write tests for properties that contain no logic.
The only reason to test simple properties is to boost test coverage, but that's just silly.
I think it's reasonable to save time and not write unit tests that you don't think will be particularly helpful.
While 100% test coverage is an admirable ideal, at some point you run into diminishing returns where the time you spent writing the test isn't worth the benefit you get out of having it.
You can always go back and add more unit tests later if you find situations where you decide they would be useful.
Our company has both kinds of people and opinions. I tend not to test them specifically, as they are usually:
automatically generated
tested in the context of another test (e.g. there's some other code making use of these accessors)
not containing any code that might break
There are exceptions though:
When they are not simply generated 'getters' and 'setters'
When they are part of an important API that's just provided for other users and not really tested in the context you're currently in
Both these cases might cause me to test them. The first one more than the second.
No-friggin way!
Waste of time!
Even Bob Martin, the grandfather of Agile, says no (see SO podcast 41).
If your IDE generates and manages modifications to member accessors (you won't be doing anything special), then testing them really isn't important; types will match up, naming will follow a template, etc.
I think most people will say testing them is a waste of your time. In the 99% case that is true. If there's a bug in an accessor and the rest of your unit tests don't catch it indirectly then I'd start questioning why that property is there at all.
On the other hand, testing an accessor takes less typing than asking this question :)
Personally I test them. But this is a gray area for me and I don't press other people in my group to test them as long as they have sufficient coverage around the functionality of the class.
Usually when I consider writing unit tests I ask myself the following:
Is the getter/setter accessing anything on the DAL (Data Access Layer)?
If so, then I would include a unit test, just in case: if at some point in the future you decide to implement lazy loading, or something more advanced than a simple get/set, you'll need to make sure it is still working properly.
Is it foreseeable that the getter/setter will throw an exception?
The best practice for getters is to not allow them to throw exceptions at all. Setters are another matter. Either way, if you decide that a property might possibly throw an exception, then write a unit test for that property, both for a successful access and for purposefully generating the exception (see the sketch below).
Other than that I wouldn't bother, as Dan pointed out, "At some point, you need to trust that the compiler is going to generate the right program for you."
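A quick sketch of that second case, a property whose setter validates its input (the Person class and rule are illustrative):
// Requires: using System; using NUnit.Framework;
public class Person
{
    private int age;
    public int Age
    {
        get { return age; }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException("value", "Age cannot be negative");
            age = value;
        }
    }
}

[Test]
public void Age_AcceptsValidValue()
{
    var person = new Person();
    person.Age = 30;
    Assert.AreEqual(30, person.Age);
}

[Test]
public void Age_RejectsNegativeValue()
{
    var person = new Person();
    Assert.Throws<ArgumentOutOfRangeException>(() => person.Age = -1);
}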
I like to have unit tests for them. If an accessor does any kind of work besides simply return a field then that code will be tested appropriately.
Even if a given accessor doesn't do anything other than return a field, it might be modified later to do something extra.
Also, it's an easy way to up the number of tests being run, which many managers like.