I have searched a lot but couldn't find a definitive answer to this question.
Some articles define TDD in a way that allows any sort of test.
Other articles say TDD is only about unit/functional tests, and that once you get to acceptance tests it becomes BDD, not TDD.
So...
Is TDD really just unit testing?
There's no universally accepted definition of what a unit test is, so it follows that there can't be a universally accepted answer to that question.
Modern-day TDD is an invention (or rediscovery) of Kent Beck. If you read his book Test-Driven Development: By Example, you'll see that he uses small deterministic tests without dependencies. This is a common way to do TDD, and seems to fit most people's definition of a unit test.
On the other hand, just because Kent Beck originally used unit tests to demonstrate the TDD technique, it doesn't exclude other types of tests. Another great resource that uses a slightly wider kind of test is Growing Object-Oriented Software, Guided by Tests by Nat Pryce and Steve Freeman. While they don't use Gherkin, you can view that approach as congenial with BDD - at least, I'd call it a sort of outside-in TDD.
I once had the opportunity to talk to Dan North (the inventor of BDD) about the overall purpose of these kinds of techniques, and I think that we agreed that the overall motivation is to get fast feedback. With unit tests, you can run a test suite in mere seconds. That gives you almost immediate feedback on your API design and implementation.
If other types of tests can give you similar feedback, they fit into the overall motivational framework of TDD. Exactly what you call the tests is of less importance.
But to answer the explicit question:
Is TDD really just unit testing?
No, test-driven development (TDD) is a process in which you write (unit) tests and let the feedback you receive from these tests guide you to figure out what to do next. A common TDD workflow is the red-green-refactor cycle.
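As a rough illustration (JUnit 5 is assumed, and the class and method names are made up for the example, not taken from any particular book), one pass through red-green-refactor might look like this:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Red: write a failing test first, for a class that doesn't exist yet.
    class PriceCalculatorTest {
        @Test
        void appliesTenPercentDiscount() {
            assertEquals(90.0, new PriceCalculator().discounted(100.0, 0.10), 0.001);
        }
    }

    // Green: write the simplest production code that makes the test pass.
    class PriceCalculator {
        double discounted(double price, double rate) {
            return price * (1 - rate);
        }
    }

    // Refactor: with the test staying green, rename, extract, or simplify as needed.

The feedback from the failing test tells you what to build next; the passing test tells you when to stop and clean up.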
Is TDD really just unit testing?
No
The problem with driving development with small-scale tests (I call them "unit tests", but they don't match the accepted definition of unit tests very well).... -- Kent Beck, 2003
Michael Feathers, writing in 2005:
...there is a failure case for teams that attempt to get test infected; you can end up writing very slow tests that take so long to run that they essentially start to feel like baggage even though they help you catch errors.
The important idea was that tests be fast and reliable (so that they aren't hindering the "refactor" task). But that doesn't necessarily mean that the test subject needs to be small.
That said, the tests we are talking about are programmer tests: they are there to support the making of the product code. The tests that support other stakeholders are a different thing, subject to different constraints.
Related
We're running a project on which we started adopting test-driven design long after development had started.
We have both unit tests and integration tests. Integration tests are run on a real database, initialized in a known state before the tests are run.
As we write tests, we're noticing that even for classes that could be tested in the "standard way", in isolation and with mock objects, it has actually become faster, and cleaner (read: shorter and easier-to-understand code), to just use real objects/services talking directly to the database, rather than cluttering the test class with complicated mock-object setup.
Is there anything wrong with this approach?
Nothing wrong with it, at all. On the contrary, based on my experience I would say that it's wrong to favour unit testing. The perception your team had, that such tests "become faster, and cleaner", is the same one I arrived at many years ago, and it has only been reinforced since then.
I would even suggest that you drop unit tests altogether, and continue investing in integration tests. I say this as someone who develops a testing library with a very sophisticated mocking API, who gave isolated unit tests the benefit of the doubt for years, and then finally came to the conclusion that integration tests (after having written many tests of both kinds) are so much better.
One correction, though: unit tests "in isolation & with mock objects" are not the "standard way". As you probably know, Kent Beck is the "father" of TDD and the inventor of JUnit. But guess what: Kent did not use mocks at all in his TDD unit tests. Strictly speaking, the "unit" tests written by the guy who invented TDD are closer to integration tests, as they make no effort to isolate the tested unit from its dependencies. (This is a common misunderstanding about unit testing. For an accurate definition see this article by Martin Fowler.)
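To make the contrast concrete, here is a sketch of such a "sociable" test (the classes are invented for illustration, not taken from Beck's book): the unit is exercised together with its real collaborator instead of a mock of it.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical production classes, shown inline so the example is self-contained.
    class TaxPolicy {
        private final double rate;
        TaxPolicy(double rate) { this.rate = rate; }
        double withTax(double amount) { return amount * (1 + rate); }
    }

    class Order {
        private final TaxPolicy taxPolicy;
        private double subtotal;
        Order(TaxPolicy taxPolicy) { this.taxPolicy = taxPolicy; }
        void addItem(String name, double price) { subtotal += price; }
        double total() { return taxPolicy.withTax(subtotal); }
    }

    // A "sociable" unit test: the real TaxPolicy is used, no mocks involved.
    class OrderTest {
        @Test
        void totalIncludesTax() {
            Order order = new Order(new TaxPolicy(0.25));
            order.addItem("book", 100.0);
            assertEquals(125.0, order.total(), 0.001);
        }
    }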
If you are using real objects/services, then you don't know why your tests failed. For example, you change something in a service implementation, or rename a database field, or your network is down, and suddenly you have 50 failing "unit" tests. Also, you can't do test-driven development in this case, because the dependencies required by the class you are testing have to be implemented before you write the test. Can you foresee which API will be required by the consumers of your service? Guessing will lead to a less usable API and to code that is never used at all (parameters, methods, etc.).
If your tests take a lot of effort to arrange mocks, then:
tests should be short and simple
don't verify what other tests already verified
dependencies should be easy to interact with
try to follow the Single Responsibility Principle for your classes (both the dependencies and the SUT, the system under test)
try to follow the Tell, Don't Ask principle: instead of asking a dependency many questions, tell it what you want (see the sketch after this list)
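As a rough, hedged sketch (the Account and Notifier types and all names are invented, and Mockito plus JUnit 5 are assumed rather than anything this answer prescribes), Tell, Don't Ask tends to shrink mock setup because the test only has to verify one outgoing call instead of stubbing a chain of getters:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import org.junit.jupiter.api.Test;

    // Hypothetical types, defined inline so the sketch is self-contained.
    interface Notifier { void purchaseRejected(String itemName); }

    class Account {
        private final double balance;
        Account(double balance) { this.balance = balance; }

        // Tell, Don't Ask: the caller tells the account to attempt the purchase;
        // nobody queries the balance from outside and decides on its behalf.
        void purchase(String itemName, double price, Notifier notifier) {
            if (price > balance) {
                notifier.purchaseRejected(itemName);
            }
        }
    }

    class AccountTest {
        @Test
        void overdrawnPurchaseIsRejected() {
            Notifier notifier = mock(Notifier.class);   // the only mock the test needs
            new Account(50.0).purchase("book", 100.0, notifier);
            verify(notifier).purchaseRejected("book");  // verify the single outgoing message
        }
    }

In the ask-style alternative, the test would have to stub several getters on mocks just to steer the code down the branch it wants to exercise.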
To put this in context. I like TDD. I like writing my tests first, and expressing what I need my code to do using assertEquals and assertTrue etc.
But everyone seems to be getting with the BDD programme. I see a lot of talk about rSpec and Cucumber and Lettuce. When I look at these, they look overly verbose, almost like Cobol in their naive assumption that somehow writing long "pseudo-English" makes formal specifications legible to the layman.
Some of the writings about BDD make it sound like it's for people who found TDD too hard to do in practice. I don't feel I have this problem. Or, at least, where I have, it's been due to problems with doing TDD against databases or in interactive environments, not because I couldn't formulate or prioritise my tests.
So my question is this. What value is BDD for me as a programmer? a) In the context of projects I'm writing for myself (or with other programmers). b) In the context of working with non-technical customers.
For people who've used BDD for a number of projects, what did it buy you over and above TDD?
Are you finding customers, product or project managers who can write sufficiently rigorous test cases in BDD but couldn't have written them as ordinary tests?
I've tried BDD on a very simple internal project, then applied it to a complex one.
I found that the main difference is the kind of tests you run in BDD.
The BDD outer tests are based on acceptance tests, which do not deal with the classes or any internal code, but rely on testing the system as a whole.
The BDD inner tests are exactly the same unit tests you write in TDD.
In this way you can run the same red-green-refactor approach on two levels.
I found the external tests extremely helpful on the complex project.
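A sketch of what the two levels can look like (all names are invented and JUnit is assumed; real outer tests would usually go through a UI, HTTP API or similar rather than a toy in-memory class):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    // A toy "whole system", defined inline so the sketch is self-contained.
    class Basket {
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        int itemCount() { return items.size(); }
    }

    class ShopApplication {
        private final Basket basket = new Basket();
        void addToBasket(String item) { basket.add(item); }
        String checkout() { return basket.itemCount() > 0 ? "Order confirmed" : "Basket is empty"; }
    }

    // Outer loop: an acceptance-style test that exercises the system as a whole,
    // with no knowledge of its internal classes.
    class CheckoutAcceptanceTest {
        @Test
        void customerSeesOrderConfirmation() {
            ShopApplication app = new ShopApplication();
            app.addToBasket("book");
            assertEquals("Order confirmed", app.checkout());
        }
    }

    // Inner loop: the ordinary TDD unit tests for the classes you discover while
    // making the outer test pass.
    class BasketTest {
        @Test
        void newBasketIsEmpty() {
            assertEquals(0, new Basket().itemCount());
        }
    }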
To answer question (a), if you don't have any non-technical stakeholders, then maybe Cucumber isn't appropriate, but are you confident you have sufficient integration testing in place? Unit tests usually aren't enough.
I like this video's discussion on the difference between TDD and BDD: http://channel9.msdn.com/Series/mvcConf/mvcConf-2-Brandom-Satrom-BDD-in-ASPNET-MVC-using-SpecFlow-WatiN-and-WatiN-Test-Helpers (it's .NET tooling, and not necessarily the .NET tools I use, but the concepts are right)
Generally, you might say it improves your feedback loop by changing it from checking whether you think the software is implemented as you'd expect (with TDD) to checking whether the software meets the requirements from your users' perspective (with BDD).
Could TDD be oriented toward a kind of testing other than unit testing?
While that might be possible under some interpretation of TDD, I think the main point of TDD is to write the tests before any production code. Given that, you won't have a large system to write integration or functional tests for, so the testing is necessarily going to be on the unit level.
Behavior-Driven Development (BDD) applies the ideas of TDD at the integration testing and functional testing level.
The red-green-refactor cycle of TDD is supposed to be quick, really quick. Fast feedback keeps you in the groove. I've seen approaches to TDD that take a full story, express it as a test, then drive development to pass that (large-ish) test. It's nominally TDD (or maybe BDD), but it doesn't feel right to me. Tiny steps, unit tests, is how I learned TDD, how I think of it, and how it works best for me.
Technically, TDD is a way of doing things, not just unit testing; in theory it should drive the whole development process.
In theory the philosophy is that testing drives development. For a more complex scenario, like integration between systems, you should define the integration tests first, then code to pass those integration tests (even if the tests are not automated)...
Of course YES. TDD relies on automated tests, which is a concern orthogonal to the 'type' of the tests.
If you concentrate on the idea, not the technical realization, then yes. What I'm saying is: if, just for a moment, you forget about unit testing and focus on the idea of writing tests first, before writing the implementation, in order to achieve a clearer design, then it can be done even at the system level.
Imagine this: you have some requirements. Based on them you write User Acceptance Tests (UATs), high-level tests that capture functionality. Next you start development; you already have the use cases in the form of UATs. You know exactly what is expected, so it is easier to implement the desired functionality.
Another example is a project based on Scrum. In the planning meeting you discuss/create/have user stories that are later developed during the sprint. Those user stories can actually be UATs.
Anyway, I see TDD as a way of specifying design up front, not as an application-testing cycle/phase/methodology. The reason TDD is perceived as a synonym for unit testing is that unit tests are as close to the developer as possible. They seem the natural way for a developer to express the functional design of a class/method.
Certainly! TDD does not require unit tests, not at all. Unfortunately, this seems to be a common misunderstanding.
For a concrete example, I drive the development of an open source mocking library of mine (for Java) entirely with integration tests. I don't write separate unit tests for internal classes. Instead, for every new feature or enhancement I first add a failing acceptance (integration) test, and then change or add to existing production code until the test passes. With the eventual refactoring step, this is pure TDD, even if no unit tests get written.
I want to introduce unit testing to some colleagues who have little or no experience with it. I'll start with a presentation of about an hour to explain the concept and give lots of examples. I'll follow up with pair programming sessions and code reviews.
What are the key points that should be focused on in the introduction?
To keep it really short: unit testing is about two things:
a tool for verifying intentions
a necessary safety net for refactoring
Obviously, it is a lot more than that, but to me that pretty much sums it up.
Unit tests test small things
Another thing to remember is that unit tests test small things, "units". So if your test runs against a resource like a live server or a database, most people call that a system or integration test. To unit test just the code that talks to a resource like that, people often use mock objects (often called mocks).
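For example (a sketch only; the repository and service names are made up here, and Mockito is assumed as the mocking library), a mock lets you unit test code that would otherwise need a live database:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.jupiter.api.Test;

    // Hypothetical types: UserRepository would normally talk to a database.
    interface UserRepository { String findName(String userId); }

    class GreetingService {
        private final UserRepository repository;
        GreetingService(UserRepository repository) { this.repository = repository; }
        String greet(String userId) { return "Hello, " + repository.findName(userId) + "!"; }
    }

    class GreetingServiceTest {
        @Test
        void greetsUserByName() {
            // The mock stands in for the database-backed repository, so the test stays a unit test.
            UserRepository repository = mock(UserRepository.class);
            when(repository.findName("u1")).thenReturn("Ada");

            assertEquals("Hello, Ada!", new GreetingService(repository).greet("u1"));
        }
    }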
Unit tests should run fast and be run often
When unit tests test small things, the tests run fast. That's a good thing. Frequently running unit tests helps you catch problems soon after they occur. The ultimate in frequently running unit tests is having them automated as part of continuous integration.
Unit tests work best when coverage is high
People have different views as to whether 100% unit test coverage is desirable. I'm of the belief that high coverage is good, but that there's a point of diminishing return. As a very rough rule of thumb, I would be happy with a code base that had 85% coverage with good unit tests.
Unit tests aren't a substitute for other types of tests
As important as unit tests are, other types of testing, like integration tests, acceptance tests, and others, are also part of a well-tested system.
Unit testing existing code poses special challenges
If you're looking to add unit tests to existing code, you may want to look at Working Effectively with Legacy Code by Michael Feathers. Code that wasn't designed with testing in mind may have characteristics that make testing difficult and Feathers writes about ways of carefully refactoring code to make it easier to test. And when you're familiar with certain patterns that make testing code difficult, you and your team can write code that tries to avoid/minimize those patterns.
You might get some inspiration here too https://stackoverflow.com/questions/581589/introducing-unit-testing-to-a-wary-team/581610#581610
Remember to point out that unit testing is not a silver bullet and shouldn't replace other forms of traditional testing (functional tests etc.) but should be used in conjunction with them.
Unit testing works better in some areas than others, so the only way to have truly comprehensive testing is to combine it with other forms.
This seems to be one of the biggest criticisms I see of Unit Testing as a lot of people don't seem to 'get' that it shouldn't be replacing other forms of testing in totality.
Main points:
unit tests help both to design code (by expressing intent) and to regression-test it (by never going away);
unit tests are for lazy programmers who don't want to debug their code again;
tests have no business influencing or affecting business logic and functionality, but they do test it fully;
unit tests demand the same qualities as regular code: theory and strategy, organization, patterns, smells, refactoring;
Unit tests should be FAIR.
F - Fast
A - Automatable (can be easily automated)
I - Independent (can be run independently)
R - Repeatable
Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
Running the tests becomes automatable and repeatable
You can test at a much more granular level than point-and-click testing via a GUI
-- Rytmis
My question is, what are the current "best practices" in terms of tools as well as when and where to use unit testing as part of your daily coding?
Let's try to be somewhat language-agnostic and cover all the bases.
OK, here are some best practices from someone who doesn't unit test as much as he should... cough.
Make sure your tests test one thing and one thing only.
Write unit tests as you go. Preferably before you write the code you are testing against.
Do not unit test the GUI.
Separate your concerns.
Minimise the dependencies of your tests.
Mock behaviour with mocks (see the sketch after this list).
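As a hedged illustration of the first and last points (the Alarm and Siren types are made up, and Mockito is assumed as the mocking library), each behaviour gets its own focused test, and a mock stands in for the collaborator whose behaviour we only want to observe:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.never;
    import static org.mockito.Mockito.verify;
    import org.junit.jupiter.api.Test;

    // Hypothetical types for illustration only.
    interface Siren { void sound(); }

    class Alarm {
        private final Siren siren;
        private final int threshold;
        Alarm(Siren siren, int threshold) { this.siren = siren; this.threshold = threshold; }
        void report(int reading) { if (reading > threshold) siren.sound(); }
    }

    class AlarmTest {
        // One thing only: the alarm fires above the threshold.
        @Test
        void firesAboveThreshold() {
            Siren siren = mock(Siren.class);
            new Alarm(siren, 100).report(150);
            verify(siren).sound();
        }

        // The opposite behaviour gets its own test, rather than extra assertions in the first one.
        @Test
        void staysQuietBelowThreshold() {
            Siren siren = mock(Siren.class);
            new Alarm(siren, 100).report(50);
            verify(siren, never()).sound();
        }
    }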
You might want to look at TDD on Three Index Cards and Three Index Cards to Easily Remember the Essence of Test-Driven Development:
Card #1: Uncle Bob’s Three Laws
Write no production code except to pass a failing test.
Write only enough of a test to demonstrate a failure.
Write only enough production code to pass the test.
Card #2: FIRST Principles
Fast: Mind-numbingly fast, as in hundreds or thousands per second.
Isolated: The test isolates a fault clearly.
Repeatable: I can run it repeatedly and it will pass or fail the same way each time.
Self-verifying: The Test is unambiguously pass-fail.
Timely: Produced in lockstep with tiny code changes.
Card #3: Core of TDD
Red: test fails
Green: test passes
Refactor: clean code and tests
The so-called xUnit framework is widely used. It was originally developed for Smalltalk as SUnit, evolved into JUnit for Java, and now has many other implementations such as NUnit for .NET. It's almost a de facto standard - if you say you're using unit tests, a majority of other developers will assume you mean xUnit or similar.
A great resource for 'best practices' is the Google Testing Blog, for example a recent post on Writing Testable Code is a fantastic resource. Specifically their 'Testing on the Toilet' series weekly posts are great for posting around your cube, or toilet, so you can always be thinking about testing.
The xUnit family are the mainstay of unit testing. They are integrated into the likes of NetBeans, Eclipse and many other IDEs. They offer a simple, structured solution to unit testing.
One thing I always try to do when writing a test is to minimise external code usage. By that I mean: I try to minimise the setup and teardown code for the test as much as possible and try to avoid using other modules/code blocks as much as possible. Well-written modular code shouldn't require too much external code in its setup and teardown.
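As a small sketch of that idea (the Invoice class is made up for illustration), keeping the setup in a tiny local helper keeps each test short and avoids pulling in external fixture code:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical class, defined inline to keep the sketch self-contained.
    class Invoice {
        private double outstanding;
        Invoice(double amount) { this.outstanding = amount; }
        void markPaid() { this.outstanding = 0.0; }
        double outstandingBalance() { return outstanding; }
    }

    class InvoiceTest {
        // A tiny local helper instead of a heavyweight shared fixture or external setup code.
        private Invoice paidInvoice(double amount) {
            Invoice invoice = new Invoice(amount);
            invoice.markPaid();
            return invoice;
        }

        @Test
        void paidInvoiceHasNoOutstandingBalance() {
            assertEquals(0.0, paidInvoice(250.0).outstandingBalance(), 0.001);
        }
    }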
NUnit is a good tool for any of the .NET languages.
Unit tests can be used in a number of ways:
Test Logic
Increase separation of code units. If you can't fully test a function or section of code, then the parts that make it up are too interdependent.
Drive development: some people write tests before they write the code to be tested. This forces you to think about what you want the code to do, and then gives you a definite guideline for when you have achieved that.
Don't forget refactoring support. ReSharper on .NET provides automatic refactoring and quick fixes for missing code. That means if you write a call to something that does not exist, ReSharper will ask if you want to create the missing piece.