What are some of the tricks, tools, or policies (besides having a unit testing standard) that you are using to write better unit tests? By better I mean 'covers as much of your code in as few tests as possible'. I'm talking about things you have actually used and then saw your unit tests improve by leaps and bounds.
As an example I was trying out Pex the other day and I thought it was really really good. There were tests I was missing out and Pex easily showed me where. Unfortunately it has a rather restrictive license.
So what are some of the other great stuff you guys are using/doing?
EDIT: Lots of good answers. I'll be marking as correct the answer that I'm currently not practicing but will definitely try and that hopefully gives the best gains. Thanks to all.
1. Write many tests per method.
2. Test the smallest thing possible. Then test the next smallest thing.
3. Test all reasonable input and output ranges. IOW: if your method returns a boolean, make sure to test both the false and true returns. For an int? -1, 0, 1, n, n+1 (proof by mathematical induction). Don't forget to check for all exceptions (assuming Java).
4a. Write an abstract interface first.
4b. Write your tests second.
4c. Write your implementation last.
5. Use Dependency Injection. (For Java: Guice - supposedly better, Spring - probably good enough.)
6. Mock your unit's collaborators with a good toolkit like Mockito (assuming Java, again) - see the sketch after this list.
7. Google much.
8. Keep banging away at it. (It took me 2 years - without much help but for Google - to start "getting it".)
9. Read a good book about the topic.
10. Rinse, repeat...
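To make items 3, 5, and 6 concrete, here is a minimal JUnit 4/Mockito sketch; the DiscountCalculator and RateProvider names are hypothetical, invented purely for illustration. The collaborator is injected through the constructor, mocked in the test, and one boundary case (a negative amount) is checked explicitly.

```java
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical collaborator: looks up the discount rate for a customer tier.
    interface RateProvider {
        double rateFor(int tier);
    }

    // Hypothetical unit under test: the collaborator is injected via the constructor.
    static class DiscountCalculator {
        private final RateProvider rates;

        DiscountCalculator(RateProvider rates) {
            this.rates = rates;
        }

        double discount(int tier, double amount) {
            if (amount < 0) {
                throw new IllegalArgumentException("amount must be non-negative");
            }
            return amount * rates.rateFor(tier);
        }
    }

    @Test
    public void discount_appliesRateFromCollaborator() {
        RateProvider rates = mock(RateProvider.class);   // mock the collaborator
        when(rates.rateFor(2)).thenReturn(0.10);          // stub its answer

        DiscountCalculator calc = new DiscountCalculator(rates);

        assertEquals(10.0, calc.discount(2, 100.0), 0.0001);
        verify(rates).rateFor(2);                         // the interaction we care about
    }

    @Test(expected = IllegalArgumentException.class)
    public void discount_rejectsNegativeAmount() {
        DiscountCalculator calc = new DiscountCalculator(mock(RateProvider.class));
        calc.discount(2, -1.0);                           // boundary just below zero
    }
}
```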
Write tests before you write the code (i.e. Test-Driven Development). If for some reason you are unable to write tests before, write them as you write the code. Make sure that all the tests fail initially. Then go down the list and fix each broken one in sequence. This approach will lead to better code and better tests.
If you have time on your side, then you may even consider writing the tests, forgetting about it for a week, and then writing the actual code. This way you have taken a step away from the problem and can see the problem more clearly now. Our brains process tasks differently if they come from external or internal sources and this break makes it an external source.
And after that, don't worry about it too much. Unit tests offer you a sanity check and stable ground to stand on -- that's all.
On my current project we use a little generation tool to produce skeleton unit tests for various entities and accessors. It provides a fairly consistent approach for each modular unit of work that needs to be tested, and gives developers a ready-made place to test their implementations from (i.e. the unit test class is added by default when the rest of the entities and other dependencies are added).
The structure of the (templated) tests follows a fairly predictable syntax, and the template allows for implementation of module/object-specific setup/teardown (we also use a base class for all the tests to encapsulate some common logic).
We also create instances of entities (and assign test data values) in static factory functions so that objects can be created programmatically and reused across different test scenarios and test classes, which is proving to be very helpful.
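A sketch of what such static creation functions might look like, assuming a hypothetical Customer entity (all names invented for illustration); each test class calls the factory instead of assembling test data by hand:

```java
// Hypothetical entity, kept minimal so the example is self-contained.
class Customer {
    private final String name;
    private final String country;

    Customer(String name, String country) {
        this.name = name;
        this.country = country;
    }

    String getName()    { return name; }
    String getCountry() { return country; }
}

// Shared test-data factory: one place to build valid entities for many test classes.
final class TestCustomers {

    private TestCustomers() {}

    // A sensible default instance; tests override only what they care about.
    static Customer defaultCustomer() {
        return new Customer("Ada Lovelace", "UK");
    }

    static Customer customerFrom(String country) {
        return new Customer("Ada Lovelace", country);
    }
}
```

A test scenario can then call TestCustomers.defaultCustomer() wherever a valid entity is needed, which keeps test data consistent across test classes.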
Reading a book like The Art of Unit Testing will definitely help.
As far as policy goes read Kent Beck's answer on SO, particularly:
to test as little as possible to reach a given level of confidence
Write pragmatic unit tests for tricky parts of your code and don't lose sight of the fact that it's the program you are testing that's important, not the unit tests.
I have a Ruby script that generates test stubs for "brown" code that wasn't built with TDD. It writes my build script, sets up includes/usings, and writes a setup/teardown to instantiate the test class in the stub. It helps to have a consistent starting point without all the typing tedium when I hack at code written in the Dark Times.
One practice I've found very helpful is the idea of making your test suite isomorphic to the code being tested. That means that the tests are arranged in the same order as the lines of code they are testing. This makes it very easy to take a piece of code and the test suite for that code, look at them side-by-side and step through each line of code to verify there is an appropriate test. I have also found that the mere act of enforcing isomorphism like this forces me to think carefully about the code being tested, such as ensuring that all the possible branches in the code are exercised by tests, or that all the loop conditions are tested.
For example, given code like this:
void MyClass::UpdateCacheInfo(CacheInfo *info)
{
    // Nothing to do if the cache info is unchanged.
    if (mCacheInfo == info) {
        return;
    }

    // Take a reference on the new info before releasing the old one.
    info->incrRefCount();
    mCacheInfo->decrRefCount();
    mCacheInfo = info;
}
The test suite for this function would have the following tests, in order:
test UpdateCacheInfo_identical_info
test UpdateCacheInfo_increment_new_info_ref_count
test UpdateCacheInfo_decrement_old_info_ref_count
test UpdateCacheInfo_update_mCacheInfo
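The same ordering idea, sketched as a rough JUnit 4/Mockito analog (a hypothetical Java port of the C++ class above; CacheInfo and CacheHolder are invented names): the four tests appear in the same order as the lines of the method they exercise.

```java
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class CacheHolderTest {

    // Hypothetical Java port of the C++ code above.
    interface CacheInfo {
        void incrRefCount();
        void decrRefCount();
    }

    static class CacheHolder {
        private CacheInfo cacheInfo;

        CacheHolder(CacheInfo initial) { this.cacheInfo = initial; }

        void updateCacheInfo(CacheInfo info) {
            if (cacheInfo == info) {
                return;
            }
            info.incrRefCount();
            cacheInfo.decrRefCount();
            cacheInfo = info;
        }

        CacheInfo cacheInfo() { return cacheInfo; }
    }

    @Test
    public void updateCacheInfo_identicalInfo_doesNothing() {
        CacheInfo info = mock(CacheInfo.class);
        new CacheHolder(info).updateCacheInfo(info);    // the early-return branch
        verify(info, never()).incrRefCount();
        verify(info, never()).decrRefCount();
    }

    @Test
    public void updateCacheInfo_incrementsNewInfoRefCount() {
        CacheInfo oldInfo = mock(CacheInfo.class);
        CacheInfo newInfo = mock(CacheInfo.class);
        new CacheHolder(oldInfo).updateCacheInfo(newInfo);
        verify(newInfo).incrRefCount();                 // info->incrRefCount()
    }

    @Test
    public void updateCacheInfo_decrementsOldInfoRefCount() {
        CacheInfo oldInfo = mock(CacheInfo.class);
        CacheInfo newInfo = mock(CacheInfo.class);
        new CacheHolder(oldInfo).updateCacheInfo(newInfo);
        verify(oldInfo).decrRefCount();                 // mCacheInfo->decrRefCount()
    }

    @Test
    public void updateCacheInfo_storesNewInfo() {
        CacheInfo oldInfo = mock(CacheInfo.class);
        CacheInfo newInfo = mock(CacheInfo.class);
        CacheHolder holder = new CacheHolder(oldInfo);
        holder.updateCacheInfo(newInfo);
        assertSame(newInfo, holder.cacheInfo());        // mCacheInfo = info
    }
}
```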
I recently spent about 70% of the time coding a feature writing integration tests. At one point, I was thinking "Damn, all this hard work testing it, I know I don't have bugs here, why do I work so hard on this? Let's just skimp on the tests and finish it already…"
Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.
So … where do you draw your line on what to test on what to take on faith? Do you test everything, or the code where you expect most of the bugs?
In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing efforts on the things that are most likely to fail, and/or the things that it is most important that do not fail (i.e. take probability and consequence into consideration).
Think, instead of blindly following one metric such as code coverage.
Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.
When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.
Good question!
Firstly - it sounds like your extensive integration testing paid off :)
From my personal experience:
If its a "green fields" new project,
I like to enforce strict unit testing
and have a thorough (as thorough as
possible) integration test plan
designed.
If its an existing piece of software
that has poor test coverage, then I
prefer to design a set integration
tests that test specific/known
functionality. I then introduce
tests (unit/integration) as I
progress further with the code base.
How much is enough? Tough question - I don't think that there can ever be enough!
"Too much of everything is just enough."
I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.
I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity": "Bill" asks a co-worker to review his code and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs, and the more eyes you use the better chance you have of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know if everything is still kosher.
How many tests are enough does depend on whether you are developing from scratch or maintaining an existing system.
When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code that is visible in some form (public/package/protected), they can be considered to be covered by the tests for the more visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.
Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously working code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Writing unit test code forces you to do things like pass in important dependencies, such as database connections or instance factories, as arguments instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails it means one of two things: either the code no longer does what was expected, or the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
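A small illustration of that "pass the dependency in" point, as a hedged sketch (ReportGenerator and OrderRepository are hypothetical names, not from the answer above): because the repository arrives through the constructor, the test can hand it a trivial stub instead of a real database connection.

```java
import static org.junit.Assert.*;

import java.util.List;
import org.junit.Test;

public class ReportGeneratorTest {

    // Hypothetical dependency: something that knows how to load order totals.
    interface OrderRepository {
        List<Double> totalsForCustomer(String customerId);
    }

    // The class under test receives the repository instead of opening a connection itself.
    static class ReportGenerator {
        private final OrderRepository orders;

        ReportGenerator(OrderRepository orders) {
            this.orders = orders;
        }

        double totalSpend(String customerId) {
            return orders.totalsForCustomer(customerId)
                         .stream()
                         .mapToDouble(Double::doubleValue)
                         .sum();
        }
    }

    @Test
    public void totalSpend_sumsAllOrderTotals() {
        // A throwaway stub stands in for the database-backed implementation.
        OrderRepository stub = customerId -> List.of(10.0, 2.5, 7.5);

        ReportGenerator generator = new ReportGenerator(stub);

        assertEquals(20.0, generator.totalSpend("any-id"), 0.0001);
    }
}
```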
When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along you will be free to dig into whatever module you see fit without the nagging worry "what if I break something", which leads to spending more time testing even small fixes than it took to actually change the code.
So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage that increases your confidence in your ability to keep making changes or adding features; otherwise you've probably reached the point of diminishing returns.
If you or your team has been tracking metrics, you could see how many bugs are found for every test as the software life-cycle progresses. If you've defined an acceptable threshold where the time spent testing does not justify the number of bugs found, then THAT is the point at which you should stop.
You will probably never find 100% of your bugs.
I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way, and afterwards you have some nice documentation and regression tests that can be run on every build.
Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.
As with everything in life it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course you can be wrong in your estimate, but overtesting to ensure that your assumptions are right depends on how significant a bug would be vs. the need to move on to the next feature/release/project.
Note: My answer primarily addresses integration testing. TDD is very different. It was covered on SO before; there, you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.
I prefer to unit test as much as possible. One of the greatest side-effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require one to change the way they write code for the better. At least, that's how it worked out for me.
My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.
That said, I default to a unit test coverage requirement of 90% (line and branch) using JUnit and Cobertura (for Java). When I feel that this requirement cannot be met due to the nature of a specific class (or bugs in Cobertura), I make exceptions.
Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.
I test Everything. I hate it, but it's an important part of my work.
I worked in QA for 1.5 years before becoming a developer.
You can never test everything (I was told when trained all the permutations of a single text box would take longer than the known universe).
As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and the quality of the final product are a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given this responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.
Testing is a risk reduction exercise and the client/user will know what is important and what isn't. Using a test first driven development from Extreme Programming will be helpful, so you have a good test base and can regression test after a change.
It's important to note that, due to natural selection, code can become "immune" to tests. Code Complete says that when fixing a defect you should write a test case for it and look for similar defects; it's also a good idea to write test cases for those similar defects.
I was wondering when most people wrote their unit tests, if at all. I usually write tests after writing my initial code to make sure it works like it's supposed to. I then fix what is broken.
I have been pretty successful with this method but have been wondering if maybe switching to writing the test first would be advantageous?
Whenever possible I try to follow a pure TDD approach:
Write the unit tests for the feature being developed; this forces me to decide on the public interface(s).
Code the feature ASAP (as simple as possible, but not simpler).
Correct/refactor/retest.
Add additional tests if required for better coverage, exceptional paths, etc. [rare but worth consideration]
Repeat with the next feature.
It is easy to get excited and start coding the feature first, but this often means that you will not think through all of the public interfaces in advance.
EDIT: note that if you write the code first, it is easy to unintentionally write the test to conform to the code, instead of the other way 'round!
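A compressed, hypothetical illustration of that cycle in JUnit 4 (PriceFormatter is an invented example, not from the answer above): write the failing test first, add just enough code to make it pass, then refactor while it stays green.

```java
import static org.junit.Assert.*;

import java.util.Locale;
import org.junit.Test;

public class PriceFormatterTest {

    // Step 1 (red): this test is written before the implementation and fails
    // until PriceFormatter.format does the right thing.
    @Test
    public void format_addsCurrencySymbolAndTwoDecimals() {
        assertEquals("$3.50", PriceFormatter.format(3.5));
    }

    // Step 2 (green): the simplest implementation that makes the test pass.
    // Step 3 (refactor): clean it up while the test keeps passing.
    static class PriceFormatter {
        static String format(double amount) {
            return String.format(Locale.US, "$%.2f", amount);
        }
    }
}
```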
I really want to write the code first, and often do. But the more I do real TDD, where I refuse to write any code without a test, the more I find I write more testable code and better code.
So, yes, write the test first. It takes willpower and determination, but it really produces better results.
As an added bonus, TDD has really helped me keep focused in an environment with distractions.
I follow a TDD approach, but I'm not as much of a purist as some. Typically, I will rough in the class/method with a stub that simply throws a NotImplementedException. Then I will start writing the test(s). One feature at a time, one test at a time. Frequently, I'll find that I've missed a test -- perhaps when writing other tests or when I find a bug -- then I'll go back and write a test (and the bug fix, if necessary).
Using TDD helps keep your code in line with YAGNI (you aren't gonna need it) as long as you only write tests for features that you need to develop AND only write the simplest code that will satisfy your tests.
We try to write them beforehand, but I will fully admit that our environment is chaotic at times, and sometimes this step is initially passed over and the tests are written later.
I usually write a list of the unit tests I will be writing before I write the code, and actually code the unit tests after, but that's because I use a program to generate the unit test stubs and then modify them to test the appropriate cases.
I tend to use unit tests to exercise the code as I'm writing it; that way I'm using what I'm writing at the same time, much like I would with a throwaway console app.
Sometimes I've even written tests without asserts, just debug outputs, just to test that how I use something is working without exceptions.
So my answer is, after a little bit of coding.
I use Test-Driven Development (TDD) to produce most of my production code, so I write my unit tests first unless I'm working on untestable legacy code (when the bar to write the tests is too high).
Otherwise, I write functional-level acceptance tests that the code shall satisfy.
Writing the unit tests first allows me to know exactly "where I am": I know that what has been coded so far is operational and can be integrated or sent to the test team; it's not bug-free, but it works.
I'll offer that I'm fairly new to writing unit tests, and that I wish I had written them before I wrote my code. Or, at least had a better understanding of how to write more testable code as well as a better understanding of concepts like dependency injection which seems to be critical to writing testable code.
Normally I write unit tests before the code. They are very beneficial and writing them before the code just makes sense. Dependency injection lends itself well to unit testing. Dependency injection allows for parts of your code to be mocked out so you only test a particular unit i.e. your code is loosely coupled and it’s therefore easy to swap out for a different (in this case mocked) implementation.
I have been building, IMO, a really cool RIA. But it's now close to completion and I need to test it to see if there are any bugs or counter-intuitive parts or anything like that. But how? Anytime I ask someone to try to break it, they look at it for like 3 minutes and say "it's solid". How do you guys test things? I have never used a unit test before; actually, about 3 months ago I had never even heard of a unit test, and I still don't really understand what it is. Would I have to build a whole new application to run every function? That would take forever, plus some functions may only produce errors in certain situations, so I do not understand unit tests.
The question is pretty open-ended so this post won't answer all your question. If you can refine what you are looking for, that would help.
There are two major pieces of testing you likely want to do. The first is unit testing and the second is what might be called acceptance testing.
Unit testing is trying each of the classes/methods in relative isolation and making sure they work. You can use something like jUnit, nUnit, etc. as a framework to hold your tests. Take a method and look at what the different inputs it might expect and what its outcome is. Then write a test case for each of these input/output pairs. This will tell you that most of the parts work as intended.
Acceptance testing (or end-to-end testing as it is sometimes called) is running the whole system and making sure it works. Come up with a list of scenarios you expect users to do. Now systematically try them all. Try variations of them. Do they work? If so, you are likely ready to roll it out to at least a limited audience.
Also, check out How to Break Software by James Whittaker. It's one of the better testing books and is a short read.
First, systematically make sure everything works in the manner you expect it to. Then try it against every realistic combination of hardware and installed software that is feasible and appropriate. Then take every point of human interaction and try putting too much data in, no data in, and special data that may cause exceptions. Then try doing things in an order or workflow you did not expect; sometimes certain actions depend on others, and while you and your friends will naturally do those steps in order, what happens when someone doesn't? Also, having complete novices use it is a good way to see odd things users might try.
Release it in beta?
It's based on Xcode and Cocoa development, but this video is still a great introduction to unit testing. Unit testing is really something that should be done alongside development, so if your application is almost finished it's going to take a while to implement.
Firebug has a good profiler for web apps. As for testing JS files, I use Scriptaculous. Whatever backend you are using needs to be fully tested too.
But before you do that, you need to understand what unit testing is. Unit testing is verifying that all of the individual units of source code function as they are intended. This means that you verify the output of all of your functions/methods. Basically, read this. There are different testing strategies beyond unit testing such as integration testing, which is testing that different modules integrate with one another. What you are asking people to do is Acceptance testing, which is verifying that it looks and behaves according to the original plan. Here is more on various testing strategies.
PS: always test boundary conditions
I'm sure most of you are writing lots of automated tests and that you also have run into some common pitfalls when unit testing.
My question is do you follow any rules of conduct for writing tests in order to avoid problems in the future? To be more specific: What are the properties of good unit tests or how do you write your tests?
Language agnostic suggestions are encouraged.
Let me begin by plugging sources - Pragmatic Unit Testing in Java with JUnit (there's a version with C#/NUnit too.. but I have this one.. it's agnostic for the most part. Recommended.)
Good tests should be A TRIP (the acronym isn't sticky enough - I have a printout of the cheat sheet in the book that I had to pull out to make sure I got this right..)
Automatic : Invoking of tests as well as checking results for PASS/FAIL should be automatic
Thorough: Coverage; Although bugs tend to cluster around certain regions in the code, ensure that you test all key paths and scenarios.. Use tools if you must to know untested regions
Repeatable: Tests should produce the same results each time.. every time. Tests should not rely on uncontrollable params.
Independent: Very important.
Tests should test only one thing at a time. Multiple assertions are okay as long as they are all testing one feature/behavior. When a test fails, it should pinpoint the location of the problem.
Tests should not rely on each other - Isolated. No assumptions about order of test execution. Ensure 'clean slate' before each test by using setup/teardown appropriately
Professional: In the long run you'll have as much test code as production (if not more), therefore follow the same standard of good-design for your test code. Well factored methods-classes with intention-revealing names, No duplication, tests with good names, etc.
Good tests also run Fast. Any test that takes over half a second to run needs to be worked on. The longer the test suite takes to run, the less frequently it will be run, and the more changes the dev will try to sneak in between runs.. if anything breaks, it will take longer to figure out which change was the culprit.
Update 2010-08:
Readable : This can be considered part of Professional - however it can't be stressed enough. An acid test would be to find someone who isn't part of your team and asking him/her to figure out the behavior under test within a couple of minutes. Tests need to be maintained just like production code - so make it easy to read even if it takes more effort. Tests should be symmetric (follow a pattern) and concise (test one behavior at a time). Use a consistent naming convention (e.g. the TestDox style). Avoid cluttering the test with "incidental details".. become a minimalist.
Apart from these, most of the others are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (e.g. third-party DLLs). Don't go about testing getters and setters. Keep an eye on cost-to-benefit ratio or defect probability.
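As a small sketch of the "Independent" and "Repeatable" points (JUnit 4; the ShoppingCart class is hypothetical and kept inline so the example is self-contained): each test gets a clean fixture from the setup method, so no test depends on what another one left behind.

```java
import static org.junit.Assert.*;

import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;

public class ShoppingCartTest {

    // Hypothetical class under test.
    static class ShoppingCart {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        int size()            { return items.size(); }
    }

    private ShoppingCart cart;

    @Before
    public void setUp() {
        // A fresh fixture for every test: no shared state, no ordering assumptions.
        cart = new ShoppingCart();
    }

    @Test
    public void newCart_isEmpty() {
        assertEquals(0, cart.size());
    }

    @Test
    public void add_increasesSizeByOne() {
        cart.add("book");
        assertEquals(1, cart.size());
    }
}
```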
Don't write ginormous tests. As the 'unit' in 'unit test' suggests, make each one as atomic and isolated as possible. If you must, create preconditions using mock objects, rather than recreating too much of the typical user environment manually.
Don't test things that obviously work. Avoid testing the classes from a third-party vendor, especially the one supplying the core APIs of the framework you code in. E.g., don't test adding an item to the vendor's Hashtable class.
Consider using a code coverage tool such as NCover to help discover edge cases you have yet to test.
Try writing the test before the implementation. Think of the test as more of a specification that your implementation will adhere to. Cf. also behavior-driven development, a more specific branch of test-driven development.
Be consistent. If you only write tests for some of your code, it's hardly useful. If you work in a team, and some or all of the others don't write tests, it's not very useful either. Convince yourself and everyone else of the importance (and time-saving properties) of testing, or don't bother.
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
Womp's 5 Laws of Writing Tests:
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
While this organizational strategy
has been around for a while and
called many things, the introduction
of the "AAA" acronym recently has
been a great way to get this across.
Making all your tests consistent with
AAA style makes them easy to read and
maintain.
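For instance, a test laid out in the three AAA blocks might look like this (a JUnit 4 sketch; the Account class is a hypothetical stand-in, not part of the original answer):

```java
import static org.junit.Assert.*;
import org.junit.Test;

public class AccountTest {

    // Hypothetical class under test.
    static class Account {
        private double balance;
        void deposit(double amount)  { balance += amount; }
        void withdraw(double amount) { balance -= amount; }
        double balance()             { return balance; }
    }

    @Test
    public void Withdraw_ReducesBalanceByAmount() {
        // Arrange
        Account account = new Account();
        account.deposit(100.0);

        // Act
        account.withdraw(30.0);

        // Assert
        assertEquals(70.0, account.balance(), 0.0001);
    }
}
```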
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2, "An incorrect number of begin/end element
processing events was raised by the XElementSerializer");
A simple yet rewarding practice that makes it obvious in your runner application what has failed. If you don't provide a message, you'll usually get something like "Expected true, was false" in your failure output, which makes you have to actually go read the test to find out what's wrong.
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
This may seem obvious, but this practice will protect the integrity of your tests from people who don't understand the reason behind the test in the first place. I've seen many tests get removed or modified that were perfectly fine, simply because the person didn't understand the assumptions that the test was verifying.

If the test is trivial or the method name is sufficiently descriptive, it can be permissible to leave the comment off.
5. Every test must always revert the state of any resource it touches
Use mocks where possible to avoid dealing with real resources.
Cleanup must be done at the test level. Tests must not have any reliance on order of execution.
Keep these goals in mind (adapted from the book xUnit Test Patterns by Meszaros)
Tests should reduce risk, not introduce it.
Tests should be easy to run.
Tests should be easy to maintain as the system evolves around them.
Some things to make this easier:
Tests should only fail because of one reason.
Tests should only test one thing.
Minimize test dependencies (no dependencies on databases, files, UI, etc.).
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate.
Tests should be isolated. One test should not depend on another. Even further, a test should not rely on external systems. In other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
Some properties of great unit tests:
When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
When you refactor, no tests should fail.
Tests should run so fast that you never hesitate to run them.
All tests should pass always; no non-deterministic results.
Unit tests should be well-factored, just like your production code.
#Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario and then want to make multiple assertions about it, but only have one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
    [TestFixture]
    public class EmptyTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            // Fresh, empty stack for every test in this scenario.
            _stack = new Stack<int>();
        }

        [Test]
        public void PopFails()
        {
            // Popping an empty Stack<int> throws InvalidOperationException.
            Assert.Throws<InvalidOperationException>(() => _stack.Pop());
        }

        [Test]
        public void IsEmpty()
        {
            Assert.That(_stack.Count, Is.EqualTo(0));
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
What you're after is delineation of the behaviours of the class under test.
Verification of expected behaviours.
Verification of error cases.
Coverage of all code paths within the class.
Exercising all member functions within the class.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob
Tests should fail initially. Then you should write the code that makes them pass; otherwise you run the risk of writing a test that is buggy and always passes.
I like the Right BICEP acronym from the aforementioned Pragmatic Unit Testing book:
Right: Are the results right?
B: Are all the boundary conditions correct?
I: Can we check inverse relationships?
C: Can we cross-check results using other means?
E: Can we force error conditions to happen?
P: Are performance characteristics within bounds?
Personally I feel that you can get pretty far by checking that you get the right results (1+1 should return 2 in an addition function), trying out all the boundary conditions you can think of (such as using two numbers whose sum is greater than the integer max value in the add function), and forcing error conditions such as network failures.
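A hedged sketch of the boundary ("B") and error ("E") checks in Java; SafeMath is a hypothetical utility invented for the example and assumed to reject sums that overflow int:

```java
import static org.junit.Assert.*;
import org.junit.Test;

public class SafeMathTest {

    // Hypothetical utility under test: adds two ints, refusing to overflow silently.
    static final class SafeMath {
        static int add(int a, int b) {
            long result = (long) a + (long) b;
            if (result > Integer.MAX_VALUE || result < Integer.MIN_VALUE) {
                throw new ArithmeticException("int overflow");
            }
            return (int) result;
        }
    }

    @Test
    public void add_returnsTheRightResultForSimpleInputs() {
        assertEquals(2, SafeMath.add(1, 1));          // "Right": are the results right?
    }

    @Test
    public void add_handlesTheLargestNonOverflowingSum() {
        assertEquals(Integer.MAX_VALUE, SafeMath.add(Integer.MAX_VALUE - 1, 1)); // boundary
    }

    @Test(expected = ArithmeticException.class)
    public void add_rejectsSumsBeyondIntegerMaxValue() {
        SafeMath.add(Integer.MAX_VALUE, 1);           // "E": force the error condition
    }
}
```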
Good tests need to be maintainable.
I haven't quite figured out how to do this for complex environments.
All the textbooks start to come unglued as your code base starts reaching into the hundreds of thousands or millions of lines of code.
Team interactions explode.
The number of test cases explodes.
Interactions between components explode.
The time to build all the unit tests becomes a significant part of the build time.
An API change can ripple to hundreds of test cases, even though the production code change was easy.
The number of events required to sequence processes into the right state increases, which in turn increases test execution time.
Good architecture can control some of the interaction explosion, but inevitably, as systems become more complex, the automated testing system grows with it.
This is where you start having to deal with trade-offs:
Only test the external API; otherwise refactoring internals results in significant test case rework.
Setup and teardown of each test get more complicated as an encapsulated subsystem retains more state.
Nightly compilation and automated test execution grows to hours.
Increased compilation and execution times mean designers don't or won't run all the tests.
To reduce test execution times, you consider sequencing tests to reduce setup and teardown.
You also need to decide:
where do you store test cases in your code base?
how do you document your test cases?
can test fixtures be re-used to save test case maintenance?
what happens when a nightly test case execution fails? Who does the triage?
How do you maintain the mock objects? If you have 20 modules all using their own flavor of a mock logging API, changing the API ripples quickly. Not only do the test cases change, but the 20 mock objects change. Those 20 modules were written over several years by many different teams. It's a classic re-use problem.
Individuals and their teams understand the value of automated tests; they just don't like how the other team is doing it. :-)
I could go on forever, but my point is that:
Tests need to be maintainable.
I covered these principles a while back in This MSDN Magazine article which I think is important for any developer to read.
The way I define "good" unit tests, is if they posses the following three properties:
They are readable (naming, asserts, variables, length, complexity..)
They are maintainable (no logic, not over-specified, state-based, refactored..)
They are trustworthy (test the right thing, isolated, not integration tests..)
Unit testing just tests the external API of your unit; you shouldn't test internal behaviour.
Each test of a TestCase should test one (and only one) method inside this API.
Additional test cases should be included for failure cases.
Test the coverage of your tests: once a unit is tested, 100% of the lines inside it should have been executed.
Jay Fields has a lot of good advice about writing unit tests, and there is a post where he summarizes the most important points. There you will read that you should think critically about your context and judge whether the advice is worth following for you. You get a ton of amazing answers here, but it is up to you to decide which is best for your context. Try them, and refactor if it smells bad to you.
Kind Regards
Never assume that a trivial 2 line method will work. Writing a quick unit test is the only way to prevent the missing null test, misplaced minus sign and/or subtle scoping error from biting you, inevitably when you have even less time to deal with it than now.
I second the "A TRIP" answer, except that tests SHOULD rely on each other!!!
Why?
DRY - Don't Repeat Yourself - applies to testing as well! Test dependencies can help to 1) save setup time, 2) save fixture resources, and 3) pinpoint failures. Of course, this only holds if your testing framework supports first-class dependencies. Otherwise, I admit, they are bad.
Follow-up: http://www.iam.unibe.ch/~scg/Research/JExample/
Often unit tests are based on mock objects or mock data.
I like to write three kinds of unit tests:
"transient" unit tests: they create their own mock objects/data, test their function with them, then destroy everything and leave no trace (like no data in a test database).
"persistent" unit tests: they test functions within your code by creating objects/data that will be needed later by more advanced functions for their own unit tests (so those advanced functions don't have to recreate their own set of mock objects/data every time).
"persistent-based" unit tests: unit tests using mock objects/data that are already there (because they were created in another unit test session by the persistent unit tests).
The point is to avoid replaying everything in order to be able to test every function.
I run the third kind very often because all the mock objects/data are already there.
I run the second kind whenever my model changes.
I run the first kind to check the very basic functions once in a while, to check for basic regressions.
Think about the 2 types of testing and treat them differently - functional testing and performance testing.
Use different inputs and metrics for each. You may need to use different software for each type of test.
I use a consistent test naming convention described by Roy Osherove's Unit Test Naming Standards. Each method in a given test case class has the following naming style: MethodUnderTest_Scenario_ExpectedResult.
The first test name section is the name of the method in the system under test.
Next is the specific scenario that is being tested.
Finally is the results of that scenario.
Each section uses Upper Camel Case and is delimited by an underscore.
I have found this useful because when I run the tests they are grouped by the name of the method under test, and having a convention allows other developers to understand the test intent.
I also append parameters to the method name if the method under test is overloaded.
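For example, tests for a hypothetical Withdraw method might be named like this (illustrative only, JUnit 4; the bodies are omitted because the naming is the point):

```java
import org.junit.Test;

// MethodUnderTest_Scenario_ExpectedResult
public class AccountWithdrawTests {

    @Test
    public void Withdraw_AmountLessThanBalance_ReducesBalance() {
        // ...
    }

    @Test
    public void Withdraw_AmountGreaterThanBalance_ThrowsInsufficientFundsException() {
        // ...
    }

    // Parameters appended to the method name because Withdraw is overloaded
    // in this hypothetical API.
    @Test
    public void WithdrawWithCurrency_UnknownCurrency_ThrowsIllegalArgumentException() {
        // ...
    }
}
```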
OK, I know there have already been questions about getting started with TDD, and I guess I kind of know the general consensus is to just do it. However, I seem to have the following problems getting my head into the game:
When working with collections, do we still test the obvious adds/removes/inserts, even when based on generics etc. where we kind of "know" it's going to work?
Some tests seem to take forever to implement.. Such as when working with string output, is there a "better" way to go about this sort of thing? (e.g. test the object model before parsing, break parsing down into small ops and test there) In my mind you should always test the "end result" but that can vary wildly and be tedious to set up.
I don't have a testing framework to use (work won't pay for one) so I can "practice" more. Are there any good ones that are free for commercial use? (At the moment I am using good ol' Debug.Assert :)
Probably the biggest: sometimes I don't know what to expect NOT to happen. I mean, you get your green light, but I am always concerned that I may be missing a test. Do you dig deeper to try and break the code, or leave it be and wait for it all to fall over later (which will cost more)?
So basically what I am looking for here is not a " just do it " but more " I did this, had problems with this, solved them by this ".. The personal experience :)
First, it is alright and normal to feel frustrated when you first start trying to use TDD in your coding style. Just don't get discouraged and quit, you will need to give it some time. It is a major paradigm shift in how we think about solving a problem in code. I like to think of it like when we switched from procedural to object oriented programming.
Secondly, I feel that test-driven development is first and foremost a design activity that is used to flesh out the design of a component by creating a test that first describes the API it is going to expose and how you are going to consume its functionality. The test will help shape and mold the system under test until you have been able to encapsulate enough functionality to satisfy whatever task you happen to be working on.
Taking the above paragraph in mind, let's look at your questions:
If I am using a collection in my system under test, then I will set up an expectation to make sure that the code that inserts the item was called, and then assert the count of the collection. I don't necessarily test the Add method on my internal list. I just make sure it was called when the method that adds the item is called. I do this by adding a mocking framework into the mix, alongside my testing framework.
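A sketch of that approach with Mockito (OrderBook and its injected List are hypothetical): rather than re-testing List.add itself, the tests only verify that the method under test calls it and that the count comes out right.

```java
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class OrderBookTest {

    // Hypothetical class under test: wraps an injected list of order ids.
    static class OrderBook {
        private final List<String> orders;
        OrderBook(List<String> orders) { this.orders = orders; }
        void place(String orderId)     { orders.add(orderId); }
        int count()                    { return orders.size(); }
    }

    @Test
    public void place_delegatesToTheUnderlyingList() {
        @SuppressWarnings("unchecked")
        List<String> list = mock(List.class);

        new OrderBook(list).place("order-42");

        verify(list).add("order-42");   // the expectation: the insert call happened
    }

    @Test
    public void place_increasesTheCount() {
        OrderBook book = new OrderBook(new ArrayList<>());

        book.place("order-42");

        assertEquals(1, book.count());  // then assert the count of the collection
    }
}
```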
Testing strings as output can be tedious. You cannot account for every outcome. You can only test what you expect based on the functionality of the system under test. You should always break your tests down to the smallest element that it is testing. Which means you will have a lot of tests, but tests that are small and fast and only test what they should, nothing else.
There are a lot of open source testing frameworks to choose from. I am not going to argue which is best. Just find one you like and start using it.
MbUnit
nUnit
xUnit
All you can do is setup your tests to account for what you want to happen. If a scenario comes up that introduces a bug in your functionality, at least you have a test around the functionality to add that scenario into the test and then change your functionality until the test passes. One way to find where we may have missed a test is to use code coverage.
I introduced you to the mocking term in the answer to question one. When you introduce mocking into your arsenal for TDD, it dramatically eases testing by abstracting away the parts that are not part of the system under test. Here are some of the mocking frameworks out there:
Moq: Open Source
RhinoMocks: Open Source
TypeMock: Commercial Product
NSubstitute: Open Source
One way to help in using TDD, besides reading about the process, is to watch people do it. I recommend in watching the screen casts by JP Boodhoo on DNRTV. Check these out:
Jean Paul Boodhoo on Test Driven Development Part 1
Jean Paul Boodhoo on Test Driven Development Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 1
Jean Paul Boodhoo on Demystifying Design Patterns Part 2
Jean Paul Boodhoo on Demystifying Design Patterns Part 3
Jean Paul Boodhoo on Demystifying Design Patterns Part 4
Jean Paul Boodhoo on Demystifying Design Patterns Part 5
OK, these will help you see how the terms I introduced are used. They will also introduce another tool called ReSharper and show how it can facilitate the TDD process. I can't recommend this tool enough when doing TDD. It seems like you are learning the process and just running into some of the problems that have already been solved by using other tools.
I think I would be doing an injustice to the community, if I didn't update this by adding Kent Beck's new series on Test Driven Development on Pragmatic Programmer.
From my own experience:
Only test your own code, not the underlying framework's code. So if you're using a generic list then there's no need to test Add, Remove etc.
There is no 2. Look over there! Monkeys!!!
NUnit is the way to go.
You definitely can't test every outcome. I test for what I expect to happen, and then test a few edge cases where I expect to get exceptions or invalid responses. If a bug comes up down the track because of something you forgot to test, the first thing you should do (before trying to fix the bug) is write a test to prove that the bug exists.
My take on this is following:
+1 for not testing framework code, but you may still need to test classes derived from framework classes.
If some class/method is cumbersome to test, it may be a strong indication that something is wrong with the design. I try to follow the "1 class - 1 responsibility, 1 method - 1 action" principle. That way you will be able to test complex methods much more easily by doing it in smaller portions.
+1 for xUnit. For Java you may also consider TestNG.
TDD is not a single event, it is a process. So do not try to envision everything from the beginning, but make sure that every bug found in the code is actually covered by a test once discovered.
I think the most important thing with (and actually one of the great outcomes of, in a somewhat recursive manner) TDD is successful management of dependencies. You have to make sure that modules are tested in isolation with no elaborate setup needed. For example, if you're testing a component that eventually sends an email, make the email sender a dependency so that you can mock it in your tests.
This leads to a second point - mocks are your friends. Get familiarized with mocking frameworks and the style of tests they promote (behavioral, as opposed to the classic state based), and the design choices they encourage (The "Tell, don't ask" principle).
I found that the principles illustrated in the Three Index Cards to Easily Remember the Essence of TDD is a good guide.
Anyway, to answer your questions
You don't have to test something you "know" is going to work, unless you wrote it. You didn't write generics, Microsoft did ;)
If you need to do so much for your test, maybe your object/method is doing too much as well.
Download TestDriven.NET to immediately start unit testing on your Visual Studio, (except if it's an Express edition)
Just test the correct thing that will happen. You don't need to test everything that can go wrong: you have to wait for your tests to fail for that.
Seriously, just do it, dude. :)
I am no expert at TDD, by any means, but here is my view:
If it is completely trivial (getters/setters etc) do not test it, unless you don't have confidence in the code for some reason.
If it is a quite simple, but non-trivial method, test it. The test is probably easy to write anyway.
When it comes to what to expect not to happen, I would say that if a certain potential problem is the responsibility of the class you are testing, you need to test that it handles it correctly. If it is not the current class' responsibility, don't test it.
The xUnit testing frameworks are often free to use, so if you are a .Net guy, check out NUnit, and if Java is your thing check out JUnit.
The above advice is good, and if you want a list of free frameworks you have to look no farther than the xUnit Frameworks List on Wikipedia. Hope this helps :)
In my opinion (your mileage may vary):
1- If you didn't write it, don't test it. If you wrote it and you don't have a test for it, it doesn't exist.
3- As everyone's said, xUnit's free and great.
2 & 4- Deciding exactly what to test is one of those things you can debate about with yourself forever. I try to draw this line using the principles of design by contract. Check out 'Object Oriented Software Construction" or "The Pragmatic Programmer" for details on it.
Keep tests short and "atomic". Test the smallest assumption in each test. Make each TestMethod independent; for integration tests I even create a new database for each method. If you need to build some data for each test, use an "Init" method. Use mocks to isolate the class you're testing from its dependencies.
I always think "what's the minimum amount of code I need to write to prove this works for all cases ?"
Over the last year I have become more and more convinced of the benefits of TDD.
The things that I have learned along the way:
1) dependency injection is your friend. I'm not talking about inversion of control containers and frameworks to assemble plugin architectures, just passing dependencies into the constructor of the object under test. This pays back huge dividends in the testability of your code.
2) I set out with the passion/zealotry of the convert, grabbed a mocking framework, and set about using mocks for everything I could. This led to brittle tests that required lots of painful setup and would fall over as soon as I started any refactoring. Use the correct kind of test double: fakes where you just need to honour an interface, stubs to feed data back to the object under test, and mocks only where you care about the interaction.
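A compact illustration of those three kinds of test double, using a hypothetical Notifier collaborator (Mockito for the stub and mock; the fake is hand-rolled):

```java
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

public class TestDoubleExamples {

    // Hypothetical collaborator of some object under test.
    interface Notifier {
        void send(String message);
        boolean isOnline();
    }

    // FAKE: a real but simplified implementation that just honours the interface.
    static class FakeNotifier implements Notifier {
        final List<String> sent = new ArrayList<>();
        public void send(String message) { sent.add(message); }
        public boolean isOnline()        { return true; }
    }

    // STUB: exists only to feed canned data back to the object under test.
    static Notifier offlineStub() {
        Notifier stub = mock(Notifier.class);
        when(stub.isOnline()).thenReturn(false);
        return stub;
    }

    // MOCK: used when the interaction itself is what the test cares about.
    static void mockInteractionExample() {
        Notifier notifier = mock(Notifier.class);

        notifier.send("order shipped");          // the code under test would do this

        verify(notifier).send("order shipped");  // assert that the interaction happened
    }
}
```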
3) Test should be small. Aim for one assertion or interaction being tested in each test. I try to do this and mostly I'm there. This is about robustness of test code and also about the amount of complexity in a test when you need to revisit it later.
The biggest problem I have had with TDD has been working with a specification from a standards body and a third-party implementation of that standard that was the de facto standard. I coded lots of really nice unit tests to the letter of the specification, only to find that the implementation on the other side of the fence saw the standard as more of an advisory document. They played quite loose with it. The only way to fix this was to test with the implementation as well as the unit tests and refactor the tests and code as necessary. The real problem was the belief on my part that as long as I had code and unit tests, all was good. Not so. You need to be building actual outputs and performing functional testing at the same time as you are unit testing, delivering small pieces of benefit all the way through the process into users' or stakeholders' hands.
Just as an addition to this, I thought I would say I have put a blog post up on my thoughts on getting started with testing (following this discussion and my own research), since it may be useful to people viewing this thread.
"TDD – Getting Started with Test-Driven Development" - I have got some great feedback so far and would really appreciate any more that you guys have to offer.
I hope this helps! :)