What to test when writing Unit Tests? - unit-testing

I want to begin unit testing our application, because I believe that this is the first step to developing a good relationship with testing and will allow me to branch into other forms of testing, most interestingly BDD with Cucumber.
We currently generate all of our Base classes using CodeSmith, and they are based entirely on the tables in a database. I am curious about the benefits of generating test cases for these Base classes. Is this poor testing practice?
This leads me to the ultimate question of my post. What do we test when using Unit Tests?
Do we test the examples we know we want out, or do we test the examples we do not want?
There can be methods that have multiple ways of failing and multiple ways of succeeding; how do we know when to stop?
Take a summing function, for example. Give it 1 and 2 and expect 3 in the only unit test; how do we know that 5 and 6 isn't coming back as 35?
Question Recap
Generating unit tests (Good/Bad)
What/How much do we test?

Start with your requirements and write tests that test the expected behavior. From that point on, how many other scenarios you test can be driven by your schedule, or maybe by your recognizing non-success scenarios that are particularly high-risk.
You might consider writing non-success tests only in response to defects you (or your users) discover (the idea being that you write a test that tests the defect fix before you actually fix the defect, so that your test will fail if that defect is re-introduced into your code in future development).

The point of unit tests is to give you confidence (but only in special cases does it give you certainty) that the actual behavior of your public methods matches the expected behavior. Thus, if you have a class Adder
class Adder { public int Add(int x, int y) { return x + y; } }
and a corresponding unit test
[Test]
public void Add_returns_that_one_plus_two_is_three() {
Adder a = new Adder();
int result = a.Add(1, 2);
Assert.AreEqual(3, result);
}
then this gives you some (but not 100%) confidence that the method under test is behaving appropriately. It also gives you some defense against breaking the code upon refactoring.
What do we test when using Unit Tests?
The actual behavior of your public methods against the expected (or specified) behavior.
Do we test the examples we know we want out?
Yes, one way to gain confidence in the correctness of your method is to take some input with a known expected output, execute the public method on that input, and compare the actual output to the expected output.
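For instance, building on the Adder class above, a few more known input/output pairs (the extra cases and test names below are only illustrative) speak directly to the "how do we know 5, 6 isn't coming back as 35" worry:

[Test]
public void Add_returns_that_five_plus_six_is_eleven() {
    Adder a = new Adder();
    Assert.AreEqual(11, a.Add(5, 6));
}

[Test]
public void Add_handles_negative_numbers() {
    Adder a = new Adder();
    Assert.AreEqual(-3, a.Add(-1, -2));
}

Each additional known pair raises confidence a little; none of them provides certainty, which is the trade-off described above.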

What to test: Everything that has ever gone wrong.
When you find a bug, write a test for the buggy behavior before you fix the code. Then, when the code is working correctly, the test will pass, and you'll have another test in your arsenal.
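As a sketch of that workflow (the OrderCalculator class and its bug are hypothetical, purely for illustration): suppose a bug report says the total goes negative when a discount exceeds the order amount. You first write a test that captures the correct behaviour; it fails until the fix is in, and afterwards it stays in your arsenal as a regression guard.

[Test]
public void GetTotal_ClampsToZero_WhenDiscountExceedsAmount() {
    // Reproduces the reported bug: written (and failing) before the fix.
    var calc = new OrderCalculator();            // hypothetical class under test
    decimal total = calc.GetTotal(amount: 10m, discount: 15m);
    Assert.AreEqual(0m, total);
}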

1) To start, I'd recommend testing your app's core logic.
2) Then, use a code coverage tool in VS to see whether all of your code is exercised by the tests (all branches of if-else and case conditions are invoked). This is something of an answer to your question about testing 1 + 2 = 3 and 5 + 6 = 35: once the code is covered, you can feel safer about further experiments.
3) It's good practice to cover 80-90% of the code; testing the rest is usually inefficient (getters/setters, one-line exception handling, etc.).
4) Learn about separation of concerns.
5) Generating unit tests: try it, and you'll see that you can save yourself quite a few lines of code compared to writing everything by hand. I prefer generating the test file with VS, then writing the remaining TestMethods myself.

You unit-test things where you
want to make sure your algorithm works
want to safeguard against accidental changes in the future
So in your example it would not make much sense to test the generated classes. Test the generator instead.
It's good practice to test the main use cases (what the tested function was designed for) first. Then you test the main error cases. Then you write tests for corner cases (i.e. lower and upper bounds). The unusual error cases are normally so hard to produce that it doesn't make sense to unit-test them.
If you need to verify a large range of parameter sets, use data-driven testing.
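In NUnit, for example, data-driven tests can be written with [TestCase] attributes; a minimal sketch reusing the Adder class from an earlier answer:

[TestCase(1, 2, 3)]
[TestCase(5, 6, 11)]
[TestCase(-1, 1, 0)]
[TestCase(0, 0, 0)]
public void Add_returns_the_expected_sum(int x, int y, int expected) {
    Assert.AreEqual(expected, new Adder().Add(x, y));
}

Each attribute row becomes its own test case in the runner, so adding another parameter set is a one-line change.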
How many things you test is a matter of effort vs. return, so it really depends on the individual project. Normally you try to follow the 80/20 rule, but there may be applications where you need more test coverage because a failure would have very serious consequences.
You can dramatically reduce the time you need to write tests if you use a test-driven approach (TDD). That's because code that isn't written with testability in mind is much harder, sometimes nearly impossible, to test. But since nothing in life is free, code developed with TDD tends to be more complex itself.

I'm also beginning the process of more consistently using unit tests and what I've found is that the biggest task in unit testing is structuring my code to support testing. As I start to think about how to write tests, it becomes clear where classes have become overly coupled, to the point that the complexity of the 'unit' makes defining tests difficult. I spend as much or more time refactoring my code as I do writing tests. Once the boundaries between testable units become clearer, the question of where to start testing resolves itself; start with your smallest isolated dependencies (or at least the ones you're worried about) and work your way up.

There are three basic events I test for:
min, max, and somewhere between min and max.
And where appropriate two extremes: below min, and above max.
There are obvious exceptions (some code may not have a min or max for example) but I've found that unit testing for these events is a good start and captures a majority of "common" issues with the code.
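A sketch of those events as tests (the Volume class and its 0..100 range are hypothetical, used only to make the min/max idea concrete):

[TestCase(0)]    // min
[TestCase(100)]  // max
[TestCase(50)]   // somewhere between min and max
public void SetLevel_accepts_values_in_range(int level) {
    var volume = new Volume();
    volume.SetLevel(level);
    Assert.AreEqual(level, volume.Level);
}

[TestCase(-1)]   // below min
[TestCase(101)]  // above max
public void SetLevel_rejects_values_out_of_range(int level) {
    var volume = new Volume();
    Assert.Throws<ArgumentOutOfRangeException>(() => volume.SetLevel(level));
}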

Related

Is there such a thing as a bad unit test?

Given that you can't write tests for dynamic content, is there ever a reason, outside of that, that you should not add a unit test? Maybe a test of project integrity might be considered unnecessary, but I could argue this both ways.
Example:
Objective-C/Xcode: write a test to make sure the fonts listed in your constants are also listed in your project's Info.plist UIAppFonts array.
Technically, tests are supposed to adhere to a few key metrics: they should be fast, easy to read and interpret, give consistent results, etc.
If any of these (and more) qualities of a good unit test are not met, you end up with a cost. If the unit tests are slow, you spend time twiddling your thumbs; if they are too hard to read, you spend time interpreting tests instead of writing new tests and code; the same goes for tests that give inconsistent results.
Therefore we can say that bad unit tests exist.
However if we look into your concrete example of "should we test X" then that is a lot more subjective.
If something is easy to test, like a getter/setter (aka trivial code), then some might not find it worth their time, while others consider it no problem: by adding these quick, small tests, you will never run into an unexpected problem just because someone added logic to their getter/setter and there were no tests to catch the mistake.
I have no knowledge about Objective-C but at first glance that seems like a reasonable concept to test.
General rule: unless you have an explicit reason not to test something, test it.
Unit tests are really just a tool to create a lower watermark for quality of your code.
If you're 100% confident that your code works as intended, then you have enough unit tests. Adding more tests in this case is just a waste of time.
Think "hello world". How many unit tests would you write for that? 1 or 0?
If you're unsure about something, then you need more unit tests. Reasons for this feeling can be:
You or someone else just found a bug. Always write unit tests for bugs.
Someone asked for a new feature and you're not sure how to implement it: write tests to design the API and to be sure the final result will meet the expectation (and to make sure that everyone knows and agrees on the expectations).
You are using a new technology and want to document a) how it works and b) how you use it. These tests work as a kind of template when you wonder later "how did I do this?" (a sketch of such a learning test follows this list).
You just found a bug in a library that you use. When you fix the bug, you should also add a test case that tells you "this bug has now been fixed!" so you don't hunt in the wrong place later.
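As a sketch of such a learning test, pinning down (and documenting) your understanding of an API you are adopting; the library call here is just an everyday example:

// Learning test: records our understanding of how string.Split behaves
// when the separator does not occur in the input.
[Test]
public void Split_returns_the_whole_string_when_the_separator_is_absent() {
    string[] parts = "a,b".Split(';');
    Assert.AreEqual(1, parts.Length);
    Assert.AreEqual("a,b", parts[0]);
}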
Examples for bad unit tests:
Integration test hiding inside of a unit test
Testing setters and getters
Disabled unit tests
Commented out unit tests
Unit tests that break once per day or week (they erode your confidence and your willingness to write unit tests)
Any test that takes more than 10s to execute
Unit tests that are longer than 50 lines (incl. all the setup code)
My answer would be yes, writing tests is still writing code and the best way to avoid bugs is to not write the code in the first place.
IMHO, writing good tests is generally harder than writing good code. You can't write a useable test until you understand the problem, both in how it should work and how it can fail. The former is generally much easier to understand than the latter.
Having said all that, you have to start somewhere, and sometimes it's easiest to write the simplest tests first, even if they don't really test anything useful.
However, you should winnow those tests out as you work through the TDD process. Work towards having a test set that documents just the external interfaces of an object. This is important, as when you come back to the object for later refactoring, you want a set of tests that defines the responsibilities of the object to the rest of the program, not the responsibilities of the object to itself.
(i.e. you want to test the inputs and outputs of the object as a "black box", not the internal wiring of the object. This gives you freedom to change the internals without causing damage outside of the object.)

How do I write a unit test when the class to test is complicated?

I am trying to employ TDD in writing a backgammon game in C++ using VS 2010.
I have set up CxxTest to write the test cases.
The first class to test is
class Position
{
public:
    ...
    ...
    bool IsSingleMoveValid(.....)
    ...
    ...
};
I'd like to write a test for the function IsSingleMoveValid(), and I guess the test should prove that the function works correctly. Unfortunately there are so many cases to test, and even if I test several cases, some might escape.
What do you suggest ? How does TDD handle these problems ?
A few guidelines:
Test regular cases. In your problem: test legal moves that you KNOW are valid. You can either take the easy way and have only a handful of test cases, or you can write a loop generating all possible legal moves that can occur in your application and test them all.
Test boundary cases. This is not really applicable to your problem, but for testing simple numerical functions of the form f(x) where you know that x has to lie in a range [x_min, x_max), you would typically also test f(x_min-1), f(x_min), f(x_max-1), f(x_max). (It could be relevant for board games if you have an internal board representation with an overflow edge around it)
Test known bugs. If you ever come across a legal move that is not recognized by your IsSingleMoveValid(), you add this as a testcase and then fix your code. It's useful to keep such test cases to guard against future regressions (some future code additions/modifications could re-introduce this bug, and the test will catch it).
Test coverage (the percentage of code lines covered by tests) is a metric that can be calculated by tools such as gcov. You should do your own cost-benefit analysis of how thoroughly you want to test your code. But for something as essential as legal move detection in a game program, I'd suggest you be vigilant here.
Others have already commented on breaking up the tests into smaller subtests. The nomenclature for that is that such isolated functions are tested with unit testing, whereas the collaboration between such functions in higher-level code is tested with integration testing.
Generally, by breaking complex classes into multiple simpler classes, each doing a well-defined task that is easy to test.
If you are writing the tests then the easiest thing to do is to break your IsSingleMoveValid function down into smaller functions and test them individually.
As you can see on Wikipedia, TDD - Test Driven Development means writing the test first.
In your case, it would mean establishing all valid moves and writing a test function for them. Then you write code for each of those failing tests until all the tests pass.
... Unfortunately there are so many cases to test and even if I test several cases, some might escape.
As other said, when a function is too complex it is time for Refactoring!
I strongly suggest the book Refactoring: Improving the Design of Existing Code by Martin Fowler, with contributions from Kent Beck and others. It is both a learning and a reference book, which makes it very valuable in my opinion.
This is probably the best book on refactoring and it will teach you how to split your function without breaking everything. Also, refactoring is a really important asset for TDD. :)
There is no such thing as "too many cases to test". If the code handling a set of cases can be written, those cases had to be thought through. If they can be written and thought through, the code that tests them can be written as well. On average, for every 10 lines of (testable) code that you write, you add a roughly constant amount of testing code associated with it.
Of course, the whole trick is knowing how to write code that matches the testable description.
Hence, you need to start by writing a test for all the cases.
Even if there is a big number of cases, let's say for the sake of discussion that you have a countable set of possible cases to test (e.g. that add(n,m) == n+m for all integers n and m), but your actual code is really simple: return n+m. This is of course trivially true, but don't miss the point: you don't need to test all the possible moves on the board. TDD aims for your tests to cover all the code (i.e. the tests exercise all the if branches in your code), not necessarily all possible values or combinations of states (which are exponentially many).
A project with 80-90% line coverage means that your tests exercise roughly 9 out of every 10 lines of your code. In general, if there is a bug in your code, it will in the majority of circumstances show up when a particular code path is walked.

Unit Testing : what to test / what not to test?

A few days ago I started getting interested in unit testing and TDD in C# and VS2010. I've read blog posts, watched YouTube tutorials, and plenty more that explains why TDD and unit testing are so good for your code, and how to do them.
But the biggest problem I find is that I don't know what to check in my tests and what not to check.
I understand that I should check all the logical operations, and problems with references and dependencies, but for example, should I create a unit test for a string formatting routine that's supposed to handle user input? Or is it just a waste of my time when I can simply check it in the actual code?
Is there any guide to clarify this problem?
In TDD every line of code must be justified by a failing test-case written before the code.
This means that you cannot develop any code without a test-case. If you have a line of code (condition, branch, assignment, expression, constant, etc.) that can be modified or deleted without causing any test to fail, it means this line of code is useless and should be deleted (or you have a missing test to support its existence).
That is a bit extreme, but this is how TDD works. That being said if you have a piece of code and you are wondering whether it should be tested or not, you are not doing TDD correctly. But if you have a string formatting routine or variable incrementation or whatever small piece of code out there, there must be a test case supporting it.
UPDATE (use-case suggested by Ed.):
Like for example, adding an object to a list and creating a test to see if it is really inside or there is a duplicate when the list shouldn't allow them.
Here is a counterexample, you would be surprised how hard it is to spot copy-paste errors and how common they are:
private Set<String> inclusions = new HashSet<String>();
private Set<String> exclusions = new HashSet<String>();

public void include(String item) {
    inclusions.add(item);
}

public void exclude(String item) {
    inclusions.add(item);
}
On the other hand, testing the include() and exclude() methods alone is overkill, because they do not represent any use cases by themselves. However, they are probably part of some business use case, which you should test instead.
Obviously you shouldn't test whether x in x = 7 is really 7 after assignment, and testing generated getters/setters is overkill too. But it is often the simplest code that breaks, all too often due to copy-and-paste errors or typos (especially in dynamic languages).
See also:
Mutation testing
Your first few TDD projects are going to probably result in worse design/redesign and take longer to complete as you are learning (at least in my experience). This is why you shouldn't jump into using TDD on a large critical project.
My advice is to use "pure" TDD (acceptance/unit test everything test-first) on a few small projects (100-10,000 LOC). Either do the side projects on your own or if you don't code in your free time, use TDD on small internal utility programs for your job.
After you do "pure" TDD on about 6-12 projects, you will start to understand how TDD affects design and learn how to design for testability. Once you know how to design for testability, you will need to TDD less and maximize the ROI of unit, regression, acceptance, etc. tests rather than test everything up front.
For me, TDD is more of a teaching method for good code design than a practical methodology. However, I still TDD logic code and unit test instead of debugging.
There is no simple answer to this question. There is the law of diminishing returns in action, so achieving perfect coverage is seldom worth it. Knowing what to test is a thing of experience, not rules. It’s best to consciously evaluate the process as you go. Did something break? Was it feasible to test? If not, is it possible to rewrite the code to make it more testable? Is it worth it to always test for such cases in the future?
If you split your code into models, views and controllers, you’ll find that most of the critical code is in the models, and those should be fairly testable. (That’s one of the main points of MVC.) If a piece of code is critical, I test it, even if it means that I would have to rewrite it to make it more testable. If a piece of code is easy to get wrong or get broken by future updates, it gets a test. I seldom test controllers and views, as it’s not proving worth the trouble for me.
The way I see it all of your code falls into one of three buckets:
Code that is easy to test: This includes your own deterministic public methods.
Code that is difficult to test: This includes GUI, non-deterministic methods, private methods, and methods with complex setup.
Code that you don't want to test: This includes 3rd party code, and code that is difficult to test and not worth the effort.
Of the three, you should focus on testing the easy code. The difficult-to-test code should be refactored into two parts: code that you don't want to test, and easy code. And of course, you should test the refactored easy code.
I think you should only unit test entry points to the behavior of the system. This includes public methods, public accessors and public fields, but not constants (constant fields, enums, methods, etc.). It also includes any code which directly deals with IO; I explain why further below.
My reasoning is as follows:
Everything that's public is basically an entry point to a behavior of the system. A unit test should therefore be written that guarantees that the expected behavior of that entry point works as required. You shouldn't test all possible ways of calling the entry point, only the ones that you explicitly require. Your unit tests are therefore also the specs of what behavior your system supports and your documentation of how to use it.
Things that are not public can basically be deleted/re-factored at will with no impact to the behavior of the system. If you were to test those, you'd create a hard dependency from your unit test to that code, which would prevent you from doing refactoring on it. That's why you should not test anything else but public methods, fields and accessors.
Constants by design are not behavior, but axioms. A unit test that verifies a constant is itself a constant, so it would only be duplicated code and useless effort to write a test for constants.
So to answer your specific example:
should I create a unit test for a string formatting routine that's supposed to handle user input?
Yes, absolutely. All methods which receive or send external input/output (which can be summed up as receiving IO), should be unit tested. This is probably the only case where I'd say non-public things that receive IO should also be unit tested. That's because I consider IO to be a public entry. Anything that's an entry point to an external actor I consider public.
So unit test public methods, public fields, public accessors, even when those are static constructs and also unit test anything which receives or sends data from an external actor, be it a user, a database, a protocol, etc.
NOTE: You can write temporary unit tests on non public things as a way for you to help make sure your implementation works. This is more of a way to help you figure out how to implement it properly, and to make sure your implementation works as you intend. After you've tested that it works though, you should delete the unit test or disable it from your test suite.
Kent Beck, in Extreme Programming Explained, said you only need to test the things that need to work in production.
That's a brusque way of encapsulating both test-driven development, where every change in production code is supported by a test that fails when the change is not present; and You Ain't Gonna Need It, which says there's no value in creating general-purpose classes for applications that only deal with a couple of specific cases.
I think you have to change your point of view.
In a pure form TDD requires the red-green-refactor workflow:
write test (it must fail) RED
write code to satisfy test GREEN
refactor your code
So the question "What do I have to test?" has an answer like: "You have to write a test that corresponds to a feature or a particular requirement."
In this way you get good code coverage and also a better code design (remember that TDD also stands for Test-Driven "Design").
Generally speaking, you have to test ALL public methods/interfaces.
should I create a unit test for a string formatting routine that's supposed to handle user input? Or is it just a waste of my time when I can simply check it in the actual code?
Not sure I understand what you mean, but the tests you write in TDD are supposed to test your production code. They aren't tests that check user input.
To put it another way, there can be TDD unit tests that test the user input validation code, but there can't be TDD unit tests that validate the user input itself.
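To make that distinction concrete (the UserNameValidator class and its rules below are hypothetical): the unit test exercises the validation code with representative accepted and rejected inputs; it does not, and cannot, validate whatever a real user will type at runtime.

[TestCase("alice", true)]
[TestCase("", false)]        // empty input must be rejected
[TestCase("   ", false)]     // whitespace-only input must be rejected
public void IsValid_returns_the_expected_result(string input, bool expected) {
    var validator = new UserNameValidator();     // hypothetical class under test
    Assert.AreEqual(expected, validator.IsValid(input));
}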

How deep are your unit tests?

The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?
My question is: at what level of granularity do you write your unit tests?
..and is there a case of testing too much?
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
The bug is fixed;
The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems, if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to re-occur, or where their re-occurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up codepaths can cover the codepaths lower down.
Everything should be made as simple as possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it. Test. That's why BDD came along: because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the Testing, and a little bit too little about the driving of design. And I guess that this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a much bigger and more problematic issue.
To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
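A sketch of what such a focused subset might look like (the MathUtils.Square method is hypothetical; the point is picking a handful of representative and boundary values rather than all four billion):

[TestCase(0, 0)]
[TestCase(1, 1)]
[TestCase(-3, 9)]                   // sign should not matter
[TestCase(46340, 2147395600)]       // largest int whose square still fits in Int32
public void Square_returns_the_expected_value(int x, int expected) {
    Assert.AreEqual(expected, MathUtils.Square(x));
}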
The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.
I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
Part of the problem with skipping simple tests now is in the future refactoring could make that simple property very complicated with lots of logic. I think the best idea is that you can use Tests to verify requirements for the module. If when you pass X you should get Y back, then that's what you want to test. Then when you change the code later on, you can verify that X gives you Y, and you can add a test for A gives you B, when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.
Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value and takes lists of valid and invalid values and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;

    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }

    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }

    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MbUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.
I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.
After finishing this blind test writing, I usually write one test case reproducing each bug as it is found.
I'm used to separating code testing from integration testing. During integration testing (which is also unit testing, but on groups of components, so not exactly what unit tests are for), I test that the requirements are implemented correctly.
So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.
Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.
I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (it always depends on your boss), it's good to test the other business logic and all the controllers that call these objects (moving slowly from unit tests to integration tests).
I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work, but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that cycle.
You didn't say which architecture you are coding for, but for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.
The more I read about it the more I think some unit tests are just like some patterns: A smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you may mix up the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. It's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.
Test the source code that you are worried about.
It is not useful to test portions of code that you are very, very confident in, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.
This answer is more for figuring out how many unit tests to use for a given method you know you want to unit test due to its criticality/importance. Using Basis Path Testing technique by McCabe, you could do the following to quantitatively have better code coverage confidence than simple "statement coverage" or "branch coverage":
Determine Cyclomatic Complexity value of your method that you want to unit test (Visual Studio 2010 Ultimate for example can calculate this for you with static analysis tools; otherwise, you can calculate it by hand via flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
List the basis set of independent paths that flow thru your method - see link above for flowgraph example
Prepare unit tests for each independent basis path determined in step 2
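A small sketch of basis path testing under those steps (the Classify method below is hypothetical and kept deliberately tiny): it has two decision points, so its cyclomatic complexity is 3, and the basis set contains three independent paths, each of which gets its own test case.

// Hypothetical method with cyclomatic complexity 3 (two decision points).
public static string Classify(int value) {
    if (value < 0) return "negative";    // path 1
    if (value == 0) return "zero";       // path 2
    return "positive";                   // path 3
}

[TestCase(-5, "negative")]
[TestCase(0, "zero")]
[TestCase(7, "positive")]
public void Classify_covers_each_basis_path(int value, string expected) {
    Assert.AreEqual(expected, Classify(value));
}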

What Makes a Good Unit Test? [closed]

I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing.
My question is do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests or how do you write your tests?
Language agnostic suggestions are encouraged.
Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (there's a version with C#/NUnit too, but I have this one; it's agnostic for the most part. Recommended.)
Good tests should be A TRIP (the acronym isn't sticky enough - I have a printout of the cheatsheet in the book that I had to pull out to make sure I got this right)
Automatic: invoking the tests as well as checking the results for PASS/FAIL should be automatic.
Thorough: coverage; although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios. Use coverage tools if you must, to find untested regions.
Repeatable: tests should produce the same results each time, every time. Tests should not rely on uncontrollable parameters.
Independent: Very important.
Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
Professional: in the long run you'll have as much test code as production code (if not more), so follow the same standard of good design for your test code: well-factored methods and classes with intention-revealing names, no duplication, tests with good names, etc.
Good tests also run Fast. Any test that takes over half a second to run needs to be worked on. The longer the test suite takes to run, the less frequently it will be run, and the more changes the dev will try to sneak in between runs; if anything breaks, it will take longer to figure out which change was the culprit.
Update 2010-08:
Readable: this can be considered part of Professional, however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and ask him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code, so make them easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details" - become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects, rather than recreating too much of the typical user environment manually.
Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
Womp's 5 Laws of Writing Tests:
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
While this organizational strategy has been around for a while and called many things, the introduction of the "AAA" acronym recently has been a great way to get this across. Making all your tests consistent with AAA style makes them easy to read and maintain.
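As a sketch of how an AAA-structured test typically reads (the BankAccount class here is hypothetical):

[Test]
public void Withdraw_ReducesBalance_ByRequestedAmount() {
    // Arrange
    var account = new BankAccount(initialBalance: 100m);   // hypothetical class

    // Act
    account.Withdraw(30m);

    // Assert
    Assert.AreEqual(70m, account.Balance);
}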
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element
processing events was raised by the XElementSerializer");
A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.
If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
Use mocks where possible to avoid dealing with real resources.
Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)
Tests should reduce risk, not introduce it.
Tests should be easy to run.
Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
Tests should only fail because of one reason.
Tests should only test one thing.
Minimize test dependencies (no dependencies on databases, files, UI, etc.)
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
Some properties of great unit tests:
When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
When you refactor, no tests should fail.
Tests should run so fast that you never hesitate to run them.
All tests should pass always; no non-deterministic results.
Unit tests should be well-factored, just like your production code.
#Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario, and then want to make multiple assertions about it, but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
    [TestFixture]
    public class EmptyTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
        }

        [Test]
        public void PopFails()
        {
            Assert.Throws<InvalidOperationException>(() => _stack.Pop());
        }

        [Test]
        public void IsEmpty()
        {
            Assert.AreEqual(0, _stack.Count);
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
What you're after is delineation of the behaviours of the class under test.
Verification of expected behaviours.
Verification of error cases.
Coverage of all code paths within the class.
Exercising all member functions within the class.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob
Tests should fail originally. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is itself buggy and always passes.
I like the Right BICEP acronym from the aforementioned Pragmatic Unit Testing book:
Right: Are the results right?
B: Are all the boundary conditions correct?
I: Can we check inverse relationships?
C: Can we cross-check results using other means?
E: Can we force error conditions to happen?
P: Are performance characteristics within bounds?
Personally I feel that you can get pretty far by checking that you get the right results (1 + 1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function), and forcing error conditions such as network failures.
Good tests need to be maintainable.
I haven't quite figured out how to do this for complex environments. All the textbooks start to come unglued as your code base starts reaching into the hundreds of thousands or millions of lines of code.
Team interactions explode.
The number of test cases explodes.
Interactions between components explode.
The time to build all the unit tests becomes a significant part of the build time.
An API change can ripple out to hundreds of test cases, even though the production code change was easy.
The number of events required to sequence processes into the right state increases, which in turn increases test execution time.
Good architecture can control some of the interaction explosion, but inevitably, as systems become more complex, the automated testing system grows with it.
This is where you start having to deal with trade-offs:
only test the external API, otherwise refactoring internals results in significant test case rework
setup and teardown of each test gets more complicated as an encapsulated subsystem retains more state
nightly compilation and automated test execution grows to hours
increased compilation and execution times mean designers don't or won't run all the tests
to reduce test execution times, you consider sequencing tests to reduce setup and teardown
You also need to decide:
where do you store test cases in your code base?
how do you document your test cases?
can test fixtures be re-used to save test case maintenance?
what happens when a nightly test case execution fails? Who does the triage?
How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change, but the 20 mock objects change. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
Individuals and their teams understand the value of automated tests; they just don't like how the other team is doing it. :-)
I could go on forever, but my point is that:
Tests need to be maintainable.
I covered these principles a while back in This MSDN Magazine article which I think is important for any developer to read.
The way I define "good" unit tests, is if they posses the following three properties:
They are readable (naming, asserts, variables, length, complexity...)
They are maintainable (no logic, not over-specified, state-based, refactored...)
They are trustworthy (they test the right thing, are isolated, are not integration tests...)
Unit testing just tests the external API of your unit; you shouldn't test internal behaviour.
Each test of a TestCase should test one (and only one) method inside this API.
Additional test cases should be included for failure cases.
Check the coverage of your tests: once a unit is tested, 100% of the lines inside that unit should have been executed.
Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important points. There you will read that you should think critically about your context and judge whether the advice is worth it to you. You get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and just refactor if it smells bad to you.
Kind Regards
Never assume that a trivial 2 line method will work. Writing a quick unit test is the only way to prevent the missing null test, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now.
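A sketch of the kind of quick test meant here (the FirstInitial helper is hypothetical; the point is that even a near-trivial method earns a test for its null/empty inputs):

// Hypothetical trivial helper: first character of a name, upper-cased.
public static string FirstInitial(string name) {
    if (string.IsNullOrEmpty(name)) return "";
    return name.Substring(0, 1).ToUpper();
}

[Test]
public void FirstInitial_returns_empty_string_for_null_or_empty_input() {
    Assert.AreEqual("", FirstInitial(null));
    Assert.AreEqual("", FirstInitial(""));
}

[Test]
public void FirstInitial_returns_the_uppercased_first_character() {
    Assert.AreEqual("B", FirstInitial("bob"));
}

Without the guard clause, and the quick test that demands it, the empty-string case would be exactly the kind of subtle failure the answer above warns about.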
I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, this only holds if your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
Follow up http://www.iam.unibe.ch/~scg/Research/JExample/
Often unit tests are based on mock object or mock data.
I like to write three kind of unit tests:
"transient" unit tests: they create their own mock objects/data and test their function with it, but destroy everything and leave no trace (like no data in a test database)
"persistent" unit test: they test functions within your code creating objects/data that will be needed by more advanced function later on for their own unit test (avoiding for those advanced function to recreate every time their own set of mock objects/data)
"persistent-based" unit tests: unit tests using mock objects/data that are already there (because created in another unit test session) by the persistent unit tests.
The point is to avoid to replay everything in order to be able to test every functions.
I run the third kind very often because all mock objects/data are already there.
I run the second kind whenever my model change.
I run the first one to check the very basic functions once in a while, to check to basic regressions.
Think about the 2 types of testing and treat them differently - functional testing and performance testing.
Use different inputs and metrics for each. You may need to use different software for each type of test.
I use a consistent test naming convention, described by Roy Osherove's Unit Test Naming Standards. Each method in a given test case class has the following naming style: MethodUnderTest_Scenario_ExpectedResult.
The first test name section is the name of the method in the system under test.
Next is the specific scenario that is being tested.
Finally is the results of that scenario.
Each section uses Upper Camel Case and is delimited by an underscore.
I have found this useful because when I run the tests, they are grouped by the name of the method under test, and having a convention allows other developers to understand the test's intent.
I also append parameters to the method name if the method under test has been overloaded.
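A few hypothetical names following that convention (bodies elided; only the naming pattern matters here), including one for an overloaded method with the parameter type appended:

[Test]
public void Withdraw_AmountGreaterThanBalance_ThrowsInvalidOperationException() { /* ... */ }

[Test]
public void Withdraw_ValidAmount_ReducesBalance() { /* ... */ }

// Overloaded method under test: the parameter type is appended to the method name.
[Test]
public void ParseString_InvalidFormat_ThrowsFormatException() { /* ... */ }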