What do I, as a programmer, need to know about Behavior Driven Development?

To put this in context. I like TDD. I like writing my tests first, and expressing what I need my code to do using assertEquals and assertTrue etc.
But everyone seems to be getting with the BDD programme. I see a lot of talk about RSpec and Cucumber and Lettuce. When I look at these, they look overly verbose, almost like COBOL in their naive assumption that writing long "pseudo-English" somehow makes formal specifications legible to the layman.
Some of the writing about BDD makes it sound like it's for people who found TDD too hard to do in practice. I don't feel I have this problem. Or, at least, where I have, it's been due to problems with doing TDD against databases or in interactive environments, not because I couldn't formulate or prioritise my tests.
So my question is this. What value is BDD for me as a programmer? a) In the context of projects I'm writing for myself (or with other programmers). b) In the context of working with non-technical customers.
For people who've used BDD for a number of projects, what did it buy you over and above TDD?
Are you finding customers, product owners, or project managers who can write sufficiently rigorous test cases in BDD but couldn't write them as ordinary tests?

I've tried BDD on a very simple internal project, then applied it to a complex one.
I found that the main difference is the kind of test you run in BDD.
The BDD outer tests are based on acceptance tests, which do not deal with classes or any internal code, but rely on testing the system as a whole.
The BDD inner tests are exactly the same unit tests you write in TDD.
In this way you can run the same red-green-refactor approach on two levels.
I found the external tests extremely helpful on the complex project.
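To make the two levels concrete, here is a minimal sketch in C# with NUnit; the Shop and StockLevel names and their members are hypothetical, invented only to show the shape of the two kinds of test.

```csharp
using NUnit.Framework;

// Outer BDD-style test: drives the system as a whole through its public
// entry point, with no knowledge of the classes behind it.
[TestFixture]
public class ShopAcceptanceTests
{
    [Test]
    public void Placing_an_order_reserves_stock_and_confirms()
    {
        var shop = Shop.StartWithStock("book", quantity: 2);

        var confirmation = shop.PlaceOrder("book", quantity: 1);

        Assert.That(confirmation.Accepted, Is.True);
        Assert.That(shop.StockLevel("book"), Is.EqualTo(1));
    }
}

// Inner TDD-style unit test: exactly the unit tests you already write,
// exercising a single class in isolation.
[TestFixture]
public class StockLevelTests
{
    [Test]
    public void Reserving_more_than_available_is_rejected()
    {
        var stock = new StockLevel(initial: 1);

        Assert.That(stock.TryReserve(2), Is.False);
    }
}
```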

To answer question (a), if you don't have any non-technical stakeholders, then maybe Cucumber isn't appropriate, but are you confident you have sufficient integration testing in place? Unit tests usually aren't enough.

I like this video's discussion of the difference between TDD and BDD: http://channel9.msdn.com/Series/mvcConf/mvcConf-2-Brandom-Satrom-BDD-in-ASPNET-MVC-using-SpecFlow-WatiN-and-WatiN-Test-Helpers (it covers .NET tools, and not necessarily the .NET tools I use, but the concepts are right)
Generally, you might say it improves your feedback loop by changing it from checking whether the software is implemented as you'd expect (with TDD) to checking whether the software meets requirements from your users' perspective (with BDD).


Is TDD based on unit tests?

I searched a lot but couldn't find a satisfying answer to this question.
Some articles define TDD so broadly that any sort of test fits into it.
Others say TDD is just about functional tests, and when it comes to acceptance tests it is BDD, not TDD.
So...
Is TDD really just unit testing?
There's no universally accepted definition of what a unit test is, so it follows that there can't be a universally accepted answer to that question.
Modern-day TDD is an invention (or rediscovery) of Kent Beck. If you read his book Test Driven Development: By Example, you'll see that he uses small deterministic tests without dependencies. This is a common way to do TDD, and seems to fit most people's definition of a unit test.
On the other hand, just because Kent Beck originally used unit tests to demonstrate the TDD technique, it doesn't exclude other types of tests. Another great resource that uses a slightly wider kind of test is Growing Object-Oriented Software, Guided by Tests by Nat Pryce and Steve Freeman. While they don't use Gherkin, you can view that approach as congenial with BDD - at least, I'd call it a sort of outside-in TDD.
I once had the opportunity to talk to Dan North (the inventor of BDD) about the overall purpose of these kinds of techniques, and I think that we agreed that the overall motivation is to get fast feedback. With unit tests, you can run a test suite in mere seconds. That gives you almost immediate feedback on your API design and implementation.
If other types of test can give you similar feedback, it fits into the overall motivational framework of TDD. Exactly what you call the tests is of less importance.
But to answer the explicit question:
Is TDD really just unit testing?
No, test-driven development (TDD) is a process in which you write (unit) tests and let the feedback you receive from these tests guide you to figure out what to do next. A common TDD workflow is the red-green-refactor cycle.
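As a minimal illustration of that cycle, in C# with NUnit (the PriceCalculator class and its discount rule are invented for the example):

```csharp
using NUnit.Framework;

// Red: write a failing test that states the next requirement.
[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Applies_ten_percent_discount_above_100()
    {
        var calculator = new PriceCalculator();
        Assert.That(calculator.Total(200m), Is.EqualTo(180m));
    }
}

// Green: write just enough production code to make the test pass.
public class PriceCalculator
{
    public decimal Total(decimal amount) =>
        amount > 100m ? amount * 0.9m : amount;
}

// Refactor: with the suite green, rename, extract, and simplify freely,
// rerunning the tests after each change to confirm behaviour is unchanged.
```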
Is TDD really just unit testing?
No
The problem with driving development with small-scale tests (I call them "unit tests", but they don't match the accepted definition of unit tests very well).... -- Kent Beck, 2003
Michael Feathers, writing in 2005:
...there is a failure case for teams that attempt to get test infected; you can end up writing very slow tests that take so long to run that they essentially start to feel like baggage even though they help you catch errors.
The important idea was that tests be fast and reliable (so that they aren't hindering the "refactor" task). But that doesn't necessarily mean that the test subject needs to be small.
That said, the tests we are talking about are programmer tests: they are there to support the making of the product code. The tests that support other stakeholders are a different thing, subject to different constraints.

Do I need NUnit now that I've migrated all my unit tests to MSpec?

I was doing TDD using NUnit. I was naming my NUnit tests in a behavioral style (like given, when, then). However I am now using MSpec for all my unit tests. I'm still writing tests first, using mocks, etc... so, they're still unit tests. But, I don't see a need for NUnit.
I am nervous to throw away all the effort I put into learning NUnit. Should I abandon TDD/NUnit completely, taking into consideration that BDD is TDD done right?
Now that you have embraced BDD you are following an "Outside-In" development approach.
A nice succinct definition of this development technique can be found at programmers.stackexchange.com. I quote:
"Outside-In (London school, top-down or "mockist TDD" as Martin Fowler
would call it): you know about the interactions and collaborators
upfront (especially those at top levels) and start there (top level),
mocking necessary dependencies. With every finished component, you
move to the previously mocked collaborators and start with TDD again
there, creating actual implementations (which, even though used, were
not needed before thanks to abstractions). Note that outside-in
approach goes well with YAGNI principle."
When using BDD, you develop in a top-down manner and mock dependencies to satisfy your test. Once your BDD test passes you then revert to using TDD to implement concrete versions of the dependencies you encountered during your BDD test (using an "Inside-Out" approach).
Hence both your TDD and BDD tests are valuable, as they test different aspects of your code i.e. the BDD tests ensure that a user's interaction is tested against all of the layers in your system, whilst the TDD tests cover the individual components in detail and in isolation (via mocking).
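To sketch what that outside-in step looks like in code, assuming NUnit and Moq (the PriceConverter and IRateProvider names are invented for the example):

```csharp
using Moq;
using NUnit.Framework;

// Outer test: the top-level collaborator is designed first; its
// dependency is only an interface here, satisfied by a mock.
public interface IRateProvider
{
    decimal RateFor(string currency);
}

public class PriceConverter
{
    private readonly IRateProvider rates;
    public PriceConverter(IRateProvider rates) => this.rates = rates;

    public decimal Convert(decimal amount, string currency) =>
        amount * rates.RateFor(currency);
}

[TestFixture]
public class PriceConverterTests
{
    [Test]
    public void Converts_using_the_provided_rate()
    {
        var rates = new Mock<IRateProvider>();
        rates.Setup(r => r.RateFor("EUR")).Returns(1.1m);

        var converter = new PriceConverter(rates.Object);

        Assert.That(converter.Convert(10m, "EUR"), Is.EqualTo(11m));
        // Next step: drop down a level and TDD a concrete IRateProvider.
    }
}
```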
So don't abandon your NUnit tests!
To end my answer, you say that:
BDD is TDD done right
As I've explained above, the major difference between BDD and TDD is the scope of code they cover. Dan North has a good article on this.
NUnit and MSpec are, basically, test frameworks. They can both be used to write unit, integration, or acceptance tests. You implement the test at the right intersection of layers, behaviors, or whatever your definition is. Both frameworks support BDD-style naming. MSpec does it up front with its custom delegates. NUnit makes it a little more challenging (you have to fiddle with constructors and setup and test methods).
You're still writing tests first (TDD), but now you're using a test framework that directly supports context/specification-grammar and behavioral testing (BDD) vs. object-structure testing.
The question isn't really about TDD vs. BDD, Arrange-Act-Assert grammar vs. context/specification-grammar, or any of the other structural differences in the test framework (one setup per context, one assertion per spec, etc), but of your skills with a particular framework!
I say, embrace your new knowledge! Do you like mspec? Are you likely to engage your colleagues to switch to mspec? Will you completely forget your NUnit skills (the API or the command-line runner)?
If you inherit some old projects or have team-members who like NUnit, the two frameworks can exist side-by-side in your solution and in your build script with little trouble. It's just not great to have many different ways to write tests and report results.
From my experience there are some cases where NUnit is still a good choice. For example, MSpec currently does not support examples, whereas NUnit has TestCase and TestCaseSource. These are useful in unit-testing scenarios, so there might still be a use for xUnit-style tools. No need to "forget" anything; I think it's good to be aware of all the tools in your toolbelt and choose the right one for the task at hand.
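For instance, a minimal sketch of those two NUnit features (the Roman.From converter is a hypothetical class under test):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class RomanNumeralTests
{
    // TestCase: inline examples; the test body runs once per row.
    [TestCase(1, "I")]
    [TestCase(4, "IV")]
    [TestCase(9, "IX")]
    public void Converts_known_values(int number, string expected)
    {
        Assert.That(Roman.From(number), Is.EqualTo(expected));
    }

    // TestCaseSource: examples generated or loaded from code.
    private static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData(10, "X");
        yield return new TestCaseData(40, "XL");
    }

    [TestCaseSource(nameof(Cases))]
    public void Converts_generated_values(int number, string expected)
    {
        Assert.That(Roman.From(number), Is.EqualTo(expected));
    }
}
```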

Is BDD a replacement of TDD?

I wonder whether BDD is a replacement for TDD. What I understand now is that in an ultimate BDD we don't have unit tests any more. Instead there are stories/scenarios/features and "test steps". And it looks like a complete replacement for TDD to me. Is TDD dead?
Not at all. BDD is just a variant of TDD.
In TDD, you formulate your requirements as an executable test, then write the production code to fulfill the test. BDD does nothing but re-formulate these requirements into a more human-readable form and thus makes the tests somewhat more verbose to a human reader who looks at the test report. (Btw: To achieve this, BDD requires way more code than traditional data-driven unit testing...)
That's all.
Thomas
I have a different viewpoint on this than other responders.
Dan North created BDD during his consulting work on TDD. When he saw that many people were confused by the "test" part because of their previous testing experience, he decided to change the name. So at first, BDD was exactly what TDD is, explained correctly.
After that, Dan started to extend the idea of using executable specifications (unit tests) to drive the implementation by adding another level of specification. He was inspired by user stories, so the simplest BDD implemented by most tools lets you write requirements as user story scenarios; then you write code which generates unit tests, and then from those unit tests you work on the implementation. So now you see that, compared to TDD, there is another level of specification: user stories. Many tools include prepared translations of user stories to tests, so many people forget about them as you did, but they are still there and cannot be fully omitted - practically and also theoretically since, as noted, programming in user stories is not efficient. But that is not the point; you use user stories to gather requirements from stakeholders and to prove you implemented them by executing acceptance tests.
There are many other small things in BDD - you'd better read Dan's blog to understand them - but the main point is that BDD is an extension of TDD even outside the implementation phase, so they cannot be interchanged or rendered useless by each other.
Gabriel is almost right.
The fundamental difference at a unit level is that BDD uses the word "should" instead of "test". It turns out that when you say "test", most people start thinking about what their code does, and how they can test it. With BDD, we consider - and question - what our code should do. It's a subtle but important point, and if you want to know why that's powerful, go read up on Neuro-linguistic Programming - particularly around the way in which words affect thoughts and the model of the world. As a brief example, many people who are new to TDD start pinning their code down so that nobody can break it. BDDers tend to provide examples which demonstrate the value of their code so that people can change their code safely.
Dan realised while he was talking with Chris Matts and writing JBehave that he could take this up to a scenario level (scenarios aren't quite the same as stories). Because we were already using "should" at a unit level, it made sense to start writing things in English (I tend to use "should give me" rather than "should return", for instance). Acceptance Test Driven Development - ATDD - has been around for a long time, but this was AFAIK the first time anyone had written them in English with business stakeholders involved.
It's more than just a replacement for TDD. It's a different way of thinking about testing - very much focused on learning, deliberately discovering areas where we perhaps thought we knew what we were doing but didn't, uncovering and helping us to resolve ignorance and misunderstanding. It works at many levels. Chris Matts' Feature Injection takes this into the higher level space, right the way up to project visioning.
We still do write examples - or specifications if you like - at a unit level too, but really, it's a pattern which goes far higher than even scenarios. If you want to know more you might find my blog useful, Dan's is even better. Also, Chris has a comic book on Real Options which outlines some of the patterns I've mentioned.
BDD is not about replacing TDD. It is about giving some more structure and discipline to your TDD practices. The hardest thing about TDD is that developers without the bigger picture hardly have a clue what to test and how much to test. BDD provides a very concrete guideline for this gray area. Check out this post:
http://codingcraft.wordpress.com/2011/11/12/bdd-get-your-tdd-right/
As far as I understand, the advantages of BDD over TDD are:
Decoupling the tests from the implementation details. So the feature files won't break, just the step files, if you modify the implementation but not the behavior.
Reusing existing testing code. You can do the same with TDD, if you define custom assertions, fixtures, helpers, etc... But we (at least I) usually copy-paste testing code (bad habit). It is much easier to reuse code with BDD. There will still be some repetition, but at least it will be in Gherkin.
Everything else goes the same way as it normally does with TDD. So you can use any assertion library in the step definitions that you would use in unit tests. The only difference is that you added another abstraction level by separating the what (feature description in Gherkin) from the how (step definitions in a programming language) in your testing code.
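As a small sketch of that what/how separation, assuming SpecFlow as the Gherkin runner on .NET (the Account class and the scenario itself are invented for the example):

```csharp
using NUnit.Framework;
using TechTalk.SpecFlow;

// The "what" lives in a .feature file, in Gherkin:
//
//   Scenario: Withdrawing within the balance
//     Given an account with a balance of 100
//     When I withdraw 30
//     Then the balance should be 70

// The "how" is bound to those steps in a programming language:
[Binding]
public class AccountSteps
{
    private Account account;  // hypothetical domain class

    [Given(@"an account with a balance of (\d+)")]
    public void GivenAnAccountWithABalanceOf(decimal balance) =>
        account = new Account(balance);

    [When(@"I withdraw (\d+)")]
    public void WhenIWithdraw(decimal amount) =>
        account.Withdraw(amount);

    [Then(@"the balance should be (\d+)")]
    public void ThenTheBalanceShouldBe(decimal expected) =>
        Assert.That(account.Balance, Is.EqualTo(expected));
}
```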
You can use the term "Specification by Example" for BDD, which emphasizes an important aspect of this methodology: specifying collaboratively - through all-team specification workshops, smaller meetings, or teleconference reviews. Within these sessions with different stakeholders, concrete examples are used to illustrate requirements. Discussing requirements in the form of examples helps to create a shared understanding of the problem domain and possible solutions.
By accident specifications with examples are well suited for test automation. As a result you usually improve test coverage. But this methodology also helps to involve non-technical stakeholders. The tools that help you create business readable input are by nature not related to programming languages, but often based on simple document formats that are easily understandable by many people.
BDD should emphasize behavior from a user perspective and is ideally suited to drive end-to-end tests, a kind of poor man's DSL for acceptance test driven development. It can complement TDD but it definitely is not a substitute. TDD is as much a design activity as it is a testing activity (Code that is poorly designed is difficult to test -> unit tests encourage good design). BDD has nothing to do with design. It is a kind of testing that abstracts away from the code altogether.
In practice, BDD results in a lot more boilerplate code under the hood than normal acceptance tests, so I prefer creating an internal DSL in a normal programming language to drive my acceptance tests. As for unit tests, BDD emphasizes behavior from a user perspective and therefore should not be used at the unit level.
BDD is an attempt to bridge the communication gap between business stakeholders and programmers. In some areas it can be useful, such as banking applications where attention to detail on things like interest rate calculations is important and requires direct input from domain experts. IMHO BDD is not the panacea that some of its acolytes claim it is, and it should only be used if there is a compelling reason to do so.

How to write good Unit Tests? [closed]

Could anyone suggest books or materials for learning unit testing?
Some people consider code without unit tests to be legacy code. Nowadays, Test Driven Development is the approach for managing big software projects with ease. I like C++ a lot; I learnt it on my own without any formal education. I never looked into unit testing before, so I feel left out. I think unit tests are important and would be helpful in the long run. I would appreciate any help on this topic.
My main points of concern are:
What is a unit test? Is it a comprehensive list of test cases which should be analyzed? So let us say I have a class called "ComplexNumber" with some methods in it (let's say finding the conjugate, an overloaded assignment operator, and an overloaded multiplication operator). What should be typical test cases for such a class? Is there any methodology for selecting test cases?
Are there any frameworks which can create unit tests for me, or do I have to write my own test classes? I see an option of "Test" in Visual Studio 2008, but never got it working.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
An important point (that I didn't realise in the beginning) is that Unit Testing is a testing technique that can be used by itself, without the need to apply the full Test Driven methodology.
For example, you have a legacy application that you want to improve by adding unit tests to problem areas, or you want to find bugs in an existing app. Now you write a unit test to expose the problem code and then fix it. These are semi test-driven, but can completely fit in with your current (non-TDD) development process.
Two books I've found useful are:
Test Driven Development in Microsoft .NET
A very hands-on look at Test Driven Development, following on from Kent Beck's original TDD book.
Pragmatic Unit Testing in C# with NUnit
It comes straight to the point about what unit testing is and how to apply it.
In response to your points:
A unit test, in practical terms, is a single method in a class that contains just enough code to test one aspect / behaviour of your application. Therefore you will often have many very simple unit tests, each testing a small part of your application code. In NUnit, for example, you create a TestFixture class that contains any number of test methods. The key point is that the tests "test a unit" of your code, i.e. the smallest (sensible) unit possible. You don't test the underlying APIs you use, just the code you have written.
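For example, a minimal fixture along those lines, using the standard library's Stack<T> as the unit under test:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// One fixture, many small tests: each test method checks a single
// behaviour, so a failure points straight at its cause.
[TestFixture]
public class StackTests
{
    [Test]
    public void New_stack_is_empty() =>
        Assert.That(new Stack<int>().Count, Is.EqualTo(0));

    [Test]
    public void Push_then_pop_returns_the_pushed_value()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.That(stack.Pop(), Is.EqualTo(42));
    }
}
```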
There are frameworks that can take some of the grunt work out of creating test classes, however I don't recommend them. To create useful unit tests that actually provide a safety net for refactoring, there is no alternative but for a developer to put thought into what and how to test their code. If you start becoming dependent on generating unit tests, it is all too easy to see them as just another task that has to be done. If you find yourself in this situation you're doing it completely wrong.
There are no simple rules as to how many unit tests per class, per method etc. You need to look at your application code and make an educated assessment of where the complexity exists and write more tests for these areas. Most people start by testing public methods only because these in turn usually exercise the remainder of the private methods. However this is not always the case and sometimes it is necessary to test private methods.
In short, even experienced unit testers start by writing obvious unit tests, then look for more subtle tests that become clearer once they have written the obvious tests. They don't expect to get every test up-front, but instead add them as they come to their mind.
While you've already accepted an answer to your question I'd like to recommend a few other books not yet mentioned:
Working Effectively with Legacy Code - Michael Feathers - As far as I know this is the only book to adequately tackle the topic of turning existing code that wasn't designed for testability into testable code. Written as more of a reference manual, it's broken down into three sections: an overview of the tools and techniques, a series of topical guides to common roadblocks in legacy code, and a set of specific dependency-breaking techniques referenced throughout the rest of the book.
Agile Principles, Patterns, and Practices - Robert C. Martin - Examples in java, there is a sequel with examples in C#. Both are easy to adapt to C++
Clean Code: A Handbook of Agile Software Craftsmanship - Robert C. Martin - Martin describes this as a prequel to his APPP books and I would agree. This book makes a case for professionalism and self-discipline, two essential qualities in any serious software developer.
The two books by Robert (Uncle Bob) Martin cover much more material than just Unit testing but they drive home just how beneficial unit testing can be to code quality and productivity. I find myself referring to these three books on a regular basis.
In .NET I strongly recommend "The Art of Unit Testing" by Roy Osherove, it is very comprehensive and full of good advice.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
That is because TDD allows you to make sure, after each change, that everything that worked before the change still works; and if it doesn't, it allows you to pinpoint what was broken much more easily. (See the end of this answer.)
What is a unit test? Is it a comprehensive list of test cases which should be analyzed?
A Unit Test is a piece of code that asks a "unit" of your code to perform an operation, then verifies that the operation was indeed performed and the result is as expected. If the result is not correct, it raises / logs an error.
So let us say I have a class called "ComplexNumber" with some methods in it (let's say finding the conjugate, an overloaded assignment operator, and an overloaded multiplication operator). What should be typical test cases for such a class? Is there any methodology for selecting test cases?
Ideally, you would test all the code:
when you create an instance of the class, it is created with the correct default values
when you ask it to find the conjugates, it finds the correct ones (also test border cases, like the conjugate of zero)
when you assign a value, the value is assigned and displayed correctly
when you multiply a complex number by a value, it is multiplied correctly
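Sketched as tests, that list might look like the following; it is shown in C# with NUnit rather than C++ since the shape is identical in any xUnit-style framework, and the ComplexNumber class (with value equality and an overloaded * operator) is assumed for the example.

```csharp
using NUnit.Framework;

[TestFixture]
public class ComplexNumberTests
{
    [Test]
    public void Default_instance_is_zero()
    {
        var z = new ComplexNumber();
        Assert.That(z.Real, Is.EqualTo(0));
        Assert.That(z.Imaginary, Is.EqualTo(0));
    }

    [Test]
    public void Conjugate_negates_the_imaginary_part()
    {
        var conj = new ComplexNumber(3, 4).Conjugate();
        Assert.That(conj.Real, Is.EqualTo(3));
        Assert.That(conj.Imaginary, Is.EqualTo(-4));
    }

    [Test]
    public void Conjugate_of_zero_is_zero()  // border case
    {
        var zero = new ComplexNumber(0, 0);
        Assert.That(zero.Conjugate(), Is.EqualTo(zero));
    }

    [Test]
    public void Multiplication_follows_the_definition()
    {
        // (1 + 2i)(3 + 4i) = -5 + 10i
        var product = new ComplexNumber(1, 2) * new ComplexNumber(3, 4);
        Assert.That(product, Is.EqualTo(new ComplexNumber(-5, 10)));
    }
}
```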
Are there any frameworks which can create unit tests for me, or do I have to write my own test classes?
See CppUnit
I see an option of "Test" in Visual Studio 2008, but never got it working.
Not sure on that. I haven't used VS 2008 but it may be available just for .NET.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
Yes, it does. While that is an awful lot of code to write (and maintain with every change) the price is worth paying for large projects: It guarantees that your changes to the code base do what you want them to and nothing else.
Also, when you make a change, you need to update the unit-tests for that change (so that they pass again).
In TDD, you first decide what you want the code to do (say, your complex numbers class), then write the tests that verify those operations, then write the class so that the tests compile and execute correctly (and nothing more).
This ensures that you write the minimal code possible (and don't over-complicate the design of the complex class), and it also ensures that your code does what it should. At the end of writing the code, you have a way to test its functionality and ensure its correctness.
You also have an example of using the code that you will be able to access at any point.
For further reading/documentation, look into "dependency injection" and method stubs as used in unit testing and TDD.
With test driven design, you normally want to write the tests first. They should cover the operations you're actually using/going to use. I.e. unless they're necessary for the client code to do its job, they shouldn't exist. Selecting test cases is something of an art. There are obvious things like testing boundary conditions, but in the end, nobody's found a really reliable, systematic way of assuring that tests (unit or otherwise) cover all the conditions that matter.
Yes, there are frameworks. A couple of the best known are:
Boost Unit Test Framework
CppUnit
CppUnit is a port of JUnit, so those who've used JUnit previously will probably find it comfortable. Otherwise, I'd tend to recommend Boost - they also have a Test Library to help write the individual tests - rather a handy addition.
Unit tests should be sufficient to ensure that the code works. If (for example) you have a private function that's used internally, you generally don't need to test it directly. Instead, you test whatever provides the public interface. As long as that works correctly, it's no business of the outside world how it does its job. Of course, in some cases it's easier to test little pieces, and when it is, that's perfectly legitimate -- but ultimately you care about the visible interface, not the internals. Certainly the whole external interface should be exercised, and test cases generally chosen to exercise the paths through the code. Again, there's nothing massively different about unit tests versus other kinds. It's mostly just a more systematic way of applying normal testing techniques.
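A small sketch of that idea (the PhoneNumber class is hypothetical; its private helper is covered entirely through the public method):

```csharp
using NUnit.Framework;

// Normalize() is private, but it is fully exercised through the public
// Parse() method, so it needs no test of its own.
public class PhoneNumber
{
    public static string Parse(string raw) => Normalize(raw);

    private static string Normalize(string raw) =>
        raw.Replace(" ", "").Replace("-", "");
}

[TestFixture]
public class PhoneNumberTests
{
    [TestCase("555-123 456", "555123456")]
    [TestCase("555123456", "555123456")]
    public void Parse_strips_separators(string raw, string expected) =>
        Assert.That(PhoneNumber.Parse(raw), Is.EqualTo(expected));
}
```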
Unit tests are simply a way to exercise a given body of code to ensure that a defined set of conditions leads to the expected set of outcomes. As Steven points out, these "exercises" should check across a range of criteria ("BICEP"). Yes, ideally you should test all of your classes and all of the methods in these classes, although there is always some room for judgement: testing shouldn't be an end in itself but rather should support the wider project goals.
Ok, so...theory is nice but to really understand Unit Testing, my recommendation would be to pull together the appropriate tools and just get started. Like most things in programming, if you have the right tools, it is easy to learn by doing.
First, pick up a copy of NUnit. It is free, easy to install and easy to work with. If you'd like some documentation, check out Pragmatic Unit Testing in C# with NUnit.
Next, go to http://www.testdriven.net/ and get a copy of TestDriven.net. It installs into Visual Studio 2008 and gives you right-click access to a full range of testing tools including the ability to run NUnit tests against a file, directory or project (typically, tests are written in a separate project). You can also run tests with debugging or, coolest of all, run all the tests against a copy of NCover. NCover will show you exactly what code is being exercised so you can figure out where you need to improve your test coverage. TestDriven.net costs $170 for a professional license but, if you are like me, it will very quickly become an integral tool in your toolbox. Anyway, I've found it to be an excellent professional investment.
Good luck!
I can't answer your question for Visual Studio 2008, but I know that Netbeans has a few integrated tools for you to use.
The code coverage tool allows you to see which paths have been checked, and how much of the code is actually covered by the unit tests.
It has the support for the unit tests built in.
As far as the quality of tests goes, I'm borrowing a bit from "Pragmatic Unit Testing in Java with JUnit" by Andrew Hunt and David Thomas:
Unit testing should check for BICEP:
Boundary, Inverse relationships, Cross-checking, Error conditions, and Performance.
Also quality of the tests are determined by A-TRIP:
Automatic, Thorough, Repeatable, Independent, and Professional.
Here's something on when not to write unit tests (i.e. on when it's viable and even preferable to skip unit testing): Should one test internal implementation, or only test public behaviour?
The short answer is:
When you can automate integration tests (because it's important to have automated tests, but those tests don't have to be unit tests)
When it's cheap to run the integration test suite (no good if it takes two days to run, or if you can't afford to let every developer have access to integration test equipment)
When it isn't necessary to find bugs before integration testing (which depends in part on whether components are developed separately or incrementally)
Buy the book "xUnit Test Patterns: Refactoring Test Code". It's excellent. It covers high-level strategy decisions as well as low-level test patterns.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
TDD builds on unit tests, but they are different. You don't need to use TDD to make use of unit tests. My personal preference is to write tests first, but I don't feel I do the whole TDD thing.
What is a Unit Test?
A Unit Test is a bit of code that tests the behaviour of one unit. How one unit is defined differs between people. But in general they are:
Quick to run
Independent from each other
Test only a small part (a unit ;) of your code base.
Binary outcome - That is it passes or fails.
Should only test one outcome of the unit (for each outcome create a different unit test)
Repeatable
Are there any frameworks which can create unit tests?
To write the tests for you - yes, but I've never seen anyone say anything nice about them.
To help you write & run tests - a whole bunch of them.
Should there be a unit test for each and every function in a class?
You have a few different camps on this - the 100%ers would say yes: every method must be tested and you should have 100% code coverage. The other extreme is that unit tests should only cover areas where you have encountered bugs or expect to find bugs. The middle ground (and the stand I take) is to unit test everything that is not "too simple to break": setters/getters and anything that just calls a single other method. I aim to have 80% code coverage and a low CRAP factor (so a low chance I've been naughty and decided not to test something because it was "too complex to test").
The book that helped me "get" unit tests was JUnit in Action. Sorry, I don't do much in the C++ world, so I cannot suggest a C++-based alternative.

When to unit-test vs manual test

While unit-testing seems effective for larger projects where the APIs need to be industrial strength (for example development of the .Net framework APIs, etc.), it seems possibly like overkill on smaller projects.
When is the automated TDD approach the best way, and when might it be better to just use manual testing techniques, log the bugs, triage, fix them, etc.?
Another issue--when I was a tester at Microsoft, it was emphasized to us that there was a value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end. Can TDD break this idea and create a situation where a developer might not be the right person to rigorously find their own mistakes? It may be automated, but it would seem that there are many ways to write the tests, and that it is questionable whether a given set of tests will "prove" that quality is acceptable.
The effectiveness of TDD is independent of project size. I will practice the three laws of TDD even on the smallest programming exercise. The tests don't take much time to write, and they save an enormous amount of debugging time. They also allow me to refactor the code without fear of breaking anything.
TDD is a discipline similar to the discipline of double-entry bookkeeping practiced by accountants. It prevents errors in-the-small. Accountants will enter every transaction twice; once as a credit, and once as a debit. If no simple errors were made, then the balance sheet will sum to zero. That zero is a simple spot check that prevents the executives from going to jail.
By the same token programmers write unit tests in advance of their code as a simple spot check. In effect, they write each bit of code twice; once as a test, and once as production code. If the tests pass, the two bits of code are in agreement. Neither practice protects against larger and more complex errors, but both practices are nonetheless valuable.
The practice of TDD is not really a testing technique, it is a development practice. The word "test" in TDD is more or less a coincidence. As such, TDD is not a replacement for good testing practices, and good QA testers. Indeed, it is a very good idea to have experienced testers write QA test plans independently (and often in advance of) the programmers writing the code (and their unit tests).
It is my preference (indeed my passion) that these independent QA tests are also automated using a tool like FitNesse, Selenium, or Watir. The tests should be easy to read by business people, easy to execute, and utterly unambiguous. You should be able to run them at a moment's notice, usually many times per day.
Every system also needs to be tested manually. However, manual testing should never be rote. A test that can be scripted should be automated. You only want to put humans in the loop when human judgement is needed. Therefore humans should be doing exploratory testing, not blindly following test plans.
So, the short answer to the question of when to unit-test versus manual test is that there is no "versus". You should write automated unit tests first for the vast majority of the code you write. You should have automated QA acceptance tests written by testers. And you should also practice strategic exploratory manual testing.
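As a flavour of such an automated acceptance test, here is a minimal sketch using Selenium WebDriver from C# with NUnit; the URL and element IDs are placeholders invented for the example.

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class LoginAcceptanceTest
{
    [Test]
    public void Valid_user_sees_a_welcome_message()
    {
        // Drives the system through the browser, end to end,
        // the way a user (or a scripted QA test) would.
        using var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://example.com/login");
        driver.FindElement(By.Id("username")).SendKeys("alice");
        driver.FindElement(By.Id("password")).SendKeys("secret");
        driver.FindElement(By.Id("submit")).Click();

        Assert.That(driver.FindElement(By.Id("welcome")).Text,
                    Does.Contain("Welcome"));
    }
}
```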
Unit tests aren't meant to replace functional/component tests. Unit tests are really focused, so they won't be hitting the database, external services, etc. Integration tests do that, but you can have them really focused. The bottom line is that, on the specific question, the answer is that they don't replace those manual tests.
Now, automated functional tests + automated component tests can certainly replace manual tests. Who actually writes those will depend a lot on the project and the approach to it.
Update 1: Note that if developers are creating automated functional tests you still want to review that those have the appropriate coverage, complementing them as appropriate. Some developers create automated functional tests with their "unit" test framework, because they still have to do smoke tests regardless of the unit tests, and it really helps having those automated :)
Update 2: Unit testing isn't overkill for a small project, nor is automating the smoke tests or using TDD. What is overkill is having the team do any of that for the first time on the small project. Doing any of those has an associated learning curve (especially unit testing or TDD), and it won't always be done right at first. You also want someone who has been doing it for a while involved, to help avoid pitfalls and get past some coding challenges that aren't obvious when starting out. The issue is that it isn't common for teams to have these skills.
TDD is the best approach whenever it is feasible. TDD testing is an automatic, quantifiable (through code coverage), and reliable method of ensuring code quality.
Manual testing requires a huge amount of time (compared to TDD) and suffers from human error.
There is nothing saying that TDD means only developers test. Developers should be responsible for coding a percentage of the test framework. QA should be responsible for a much larger portion. Developers test APIs the way they want to test them. QA tests APIs in ways that I really wouldn't have ever thought to and do things that, while seemingly crazy, are actually done by customers.
I would say that unit tests are a programmer's aid to answer the question:
Does this code do what I think it does?
This is a question they need to ask themselves a lot. Programmers like to automate anything they do a lot, where they can.
The separate test team needs to answer a different question:
Does this system do what I (and the end users) expect it to do? Or does it surprise me?
There is a whole massive class of bugs related to the programmer or designers having a different idea about what is correct, which unit tests will never pick up.
According to studies of various projects (1), unit tests find 15..50% of the defects (average of 30%). This doesn't make them the worst bug finder in your arsenal, but they are not a silver bullet either. There are no silver bullets; any good QA strategy consists of multiple techniques.
A test that is automated runs more often, thus it will find defects earlier and reduce total cost of these immensely - that is the true value of test automation.
Invest your resources wisely and pick the low-hanging fruit first.
I find that automated tests are easiest to write and to maintain for small units of code - isolated functions and classes. End user functionality is easier tested manually - and a good tester will find many oddities beyond the required tests. Don't set them up against each other, you need both.
Dev vs. Testers: Developers are notoriously bad at testing their own code. The reasons are psychological, technical, and last but not least economical - testers are usually cheaper than developers. But developers can do their part and make testing easier. TDD makes testing an intrinsic part of program construction, not just an afterthought; that is the true value of TDD.
Another interesting point about testing: there's no point in 100% coverage. Statistically, bugs follow an 80:20 rule - the majority of bugs are found in small sections of code. Some studies suggest that this is even sharper - and tests should focus on the places where bugs turn up.
(1) Programming Productivity, Jones 1986, among others; quoted from Code Complete, 2nd ed. But as others have said, unit tests are only one part of testing; integration, regression, and system tests can be - at least partially - automated as well.
My interpretation of the results: "many eyes" has the best defect detection, but only if you have some formal process that makes them actually look.
Every application gets tested.
Some applications get tested in the form of does my code compile and does the code appear to function.
Some applications get tested with Unit tests. Some developers are religious about Unit tests, TDD and code coverage to a fault. Like everything, too much is more often than not bad.
Some applications are luckily enough to get tested via a QA team. Some QA teams automate their testing, others write test cases and manually test.
Michael Feathers, who wrote Working Effectively with Legacy Code, says that code not wrapped in tests is legacy code. Until you have experienced the Big Ball of Mud, I don't think any developer truly understands the benefit of good application architecture and a suite of well-written unit tests.
Having different people test is a great idea. The more people that can look at an application the more likely all the scenarios will get covered, including the ones you didn't intend to happen.
TDD has gotten a bad rap lately. When I think of TDD I think of dogmatic developers meticulously writing tests before they write the implementation. While this is true, what has been overlooked is that by writing the tests (first or shortly after), the developer experiences the method/class in the shoes of the consumer. Design flaws and shortcomings are immediately apparent.
I argue that the size of the project is irrelevant. What is important is the lifespan of the project. The longer a project lives, the more likely it is that a developer other than the one who wrote it will work on it. Unit tests are documentation of the expectations of the application -- a manual of sorts.
Unit tests can only go so far (as can all other types of testing). I look on testing as a kind of "sieve" process. Each different type of testing is like a sieve that you are placing under the outlet of your development process. The stuff that comes out is (hopefully) mostly features for your software product, but it also contains bugs. The bugs come in lots of different shapes and sizes.
Some of the bugs are pretty easy to find because they are big or get caught in basically any kind of sieve. On the other hand, some bugs are smooth and shiny, or don't have a lot of hooks on the sides so they would slip through one type of sieve pretty easily. A different type of sieve might have different shape or size holes so it will be able to catch different types of bugs. The more sieves you have, the more bugs you will catch.
Obviously, the more sieves you have in the way, the slower it is for the features to get through as well, so you'll want to try to find a happy medium where you aren't spending so much time testing that you never get to release any software.
The nicest point (IMO) of automated unit tests is that when you change (improve, refactor) the existing code, it's easy to test that you didn't break it. It would be tedious to test everything manually again and again.
Your question seems to be more about automated testing vs manual testing. Unit testing is a form of automated testing but a very specific form.
Your remark about having separate testers and developers is right on the mark though. But that doesn't mean developers shouldn't do some form of verification.
Unit testing is a way for developers to get fast feedback on what they're doing. They write tests to quickly run small units of code and verify their correctness. It's not really testing in the sense in which you seem to use the word, just as a syntax check by a compiler isn't testing. Unit testing is a development technique. Code that's been written using this technique is probably of higher quality than code written without, but it still has to go through quality control.
The question about automated testing vs manual testing for the test department is easier to answer. Whenever the project gets big enough to justify the investment of writing automated tests you should use automated tests. When you've got lots of small one-time tests you should do them manually.
Having been on both sides, QA and development, I would assert that someone should always manually test your code. Even if you are using TDD, there are plenty of things that you as a developer may not be able to cover with unit tests, or may not think about testing. This especially includes usability and aesthetics. Aesthetics includes proper spelling, grammar, and formatting of output.
Real life example 1:
A developer was creating a report we display on our intranet for managers. There were many formulas, all of which the developer tested before the code came to QA. We verified that the formulas were, indeed, producing the correct output. What we asked development to correct, almost immediately, was the fact that the numbers were displayed in pink on a purple background.
Real life example 2:
I write code in my spare time, using TDD. I like to think I test it thoroughly. One day my wife walked by when I had a message dialog up, read it, and promptly asked, "What on Earth is that message supposed to mean?" I thought the message was rather clear, but when I reread it I realized it was talking about parent and child nodes in a tree control, and probably wouldn't make sense to the average user. I reworded the message. In this case, it was a usability issue, which was not caught by my own testing.
unit-testing seems effective for larger projects where the APIs need to be industrial strength, it seems possibly like overkill on smaller projects.
It's true that unit tests of a moving API are brittle, but unit testing is also effective on API-less projects such as applications. Unit testing is meant to test the units a project is made of. It allows you to ensure every unit works as expected. This is a real safety net when modifying - refactoring - the code.
As far as the size of the project is concerned, it's true that writing unit tests for a small project can be overkill. And here, I would define a small project as a small program that can be tested manually, but very easily and quickly, in no more than a few seconds. Also, a small project can grow, in which case it might be advantageous to have unit tests at hand.
there was a value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end.
Whatever the development process, unit testing is not meant to supersede any other stage of testing, but to complement them with tests at the development level, so that developers can get very early feedback without having to wait for an official build and official test. With unit testing, the development team delivers code that works to those downstream: not bug-free code, but code that can be tested by the test team(s).
To sum up, I test manually when it's really very easy, or when writing unit tests is too complex, and I don't aim to 100% coverage.
I believe it is possible to combine the expertise of QA/testing staff (defining the tests / acceptance criteria) with the TDD concept of using a developer-owned API (as opposed to a GUI or HTTP/messaging interface) to drive an application under test.
It is still critical to have independent QA staff, but we don't need huge manual test teams anymore with modern test tools like FitNesse, Selenium and Twist.
Just to clarify something many people seem to miss:
TDD, in the sense of "write failing test, write code to make test pass, refactor, repeat", is usually most efficient and useful when you write unit tests.
You write a unit test around just the class/function/unit of code you are working on, using mocks or stubs to abstract out the rest of the system.
"Automated" testing usually refers to higher level integration/acceptance/functional tests - you can do TDD around this level of testing, and it's often the only option for heavily ui-driven code, but you should be aware that this sort of testing is more fragile, harder to write test-first, and no substitute for unit testing.
TDD gives me, as the developer, confidence that the change I am making to the code has the intended consequences and ONLY the intended consequences, and thus the metaphor of TDD as a "safety net" is useful; change any code in a system without it and you can have no idea what else you may have broken.
Engineering tension between developers and testers is really bad news; developers cultivate a "well, the testers are paid to find the bugs" mindset (leading to laziness) and the testers -- feeling as if they aren't being seen to do their jobs if they don't find any faults -- throw up as many trivial problems as they can. This is a gross waste of everyone's time.
The best software development, in my humble experience, is where the tester is also a developer and the unit tests and code are written together as part of a pair programming exercise. This immediately puts the two people on the same side of the problem, working together towards the same goal, rather than putting them in opposition to each other.
Unit testing is not the same as functional testing. As far as automation is concerned, it should normally be considered when the testing cycle will be repeated more than 2 or 3 times; it is preferred for regression testing. If the project is small, or it will not have frequent changes or updates, then manual testing is the better and less costly option. In such cases automation will prove to be more costly, given the script writing and maintenance.