Is a class that is hard to unit test badly designed? [closed] - unit-testing

I am now doing unit testing on an application which was written over the past year, before I started unit-testing diligently. I have realized that the classes I wrote are hard to unit test, for the following reasons:
They rely on loading data from the database, which means I have to set up a row in a table just to run the unit test (and I am not testing database functionality).
They require a lot of other external classes just to get the class under test into its initial state.
On the whole, there doesn't seem to be anything wrong with the design except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated tests alongside each class, thereby ensuring that I didn't heap extra dependencies or coupling onto it, the classes might have been better designed.
Does this reasoning hold water? What are your experiences?

Yes, you are right. A class which is hard to unit test is (almost always) not well designed (there are exceptions, as always, but they are rare - IMHO one had better not try to explain the problem away this way). Lack of unit tests means that it is harder to maintain - you have no way of knowing whether you have broken existing functionality whenever you modify anything in it.
Moreover, if it is (co)dependent with the rest of the program, any changes in it may break things even in seemingly unrelated, far away parts of the code.
TDD is not simply a way to test your code - it is also a different way of design. Effectively using - and thinking about using - your own classes and interfaces from the very first moment may result in a very different design than the traditional way of "code and pray". One concrete result is that typically most of your critical code is insulated from the boundaries of your system, i.e. there are wrappers/adapters in place to hide e.g. the concrete DB from the rest of the system, and the "interesting" (i.e. testable) code is not within these wrappers - these are as simple as possible - but in the rest of the system.
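As a hypothetical illustration of that wrapper/adapter idea (the names ICustomerStore, SqlCustomerStore and DiscountCalculator are invented here, not taken from the question), the boundary wrapper stays as dumb as possible while the "interesting" logic depends only on an interface and can therefore be unit tested without a database:

using System;

public class Customer { public int YearsActive { get; set; } }

// The rest of the system only knows about this abstraction.
public interface ICustomerStore
{
    Customer FindById(int id);
}

// The wrapper around the concrete DB is kept trivial - no business rules in here.
public class SqlCustomerStore : ICustomerStore
{
    public Customer FindById(int id)
    {
        // plain ADO.NET / ORM call would go here
        throw new NotImplementedException();
    }
}

// The testable, "interesting" code lives outside the wrapper.
public class DiscountCalculator
{
    private readonly ICustomerStore store;
    public DiscountCalculator(ICustomerStore store) { this.store = store; }

    public decimal DiscountFor(int customerId)
    {
        var customer = store.FindById(customerId);
        return customer.YearsActive > 5 ? 0.10m : 0.0m;   // pure logic, no database needed
    }
}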
Now, if you have a bunch of code without unit tests and want to cover it, you have a challenge. Mocking frameworks may help a lot, but it is still a pain in the ass to write unit tests for such code. A good source of techniques for dealing with such issues (commonly known as legacy code) is Working Effectively with Legacy Code, by Michael Feathers.

Yes, the design could be better with looser coupling, but ultimately you need data to test against.
You should look into mocking frameworks to simulate the database and the other classes that this class relies on, so you can have better control over the testing.

I've found that dependency injection is the design pattern that most helps make my code testable (and, often, also reusable and adaptable to contexts that are different from the original one that I had designed it for). My Java-using colleagues adore Guice; I mostly program in Python, so I generally do my dependency injection "manually", since duck typing makes it easy; but it's the right fundamental DP for either static or dynamic languages (don't let me get started on "monkey patching"... let's just say it's not my favorite;-).
Once your class is ready to accept its dependencies "from the outside", instead of having them hard-coded, you can of course use fake or mock versions of the dependencies to make testing easier and faster to run -- but this also opens up other possibilities. For example, if the state of the class as currently designed is complex and hard to set up, consider the State design pattern: you can refactor the design so that the state lives in a separate dependency (which you can set up and inject as desired) and the class itself is mostly responsible for behavior (updating the state).
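A minimal sketch of that refactoring, with invented names (AccountState, Account): the hard-to-set-up state becomes its own object that a test can construct and inject directly, while the class itself is mostly behavior.

using System;

public class AccountState
{
    public decimal Balance { get; set; }
    public bool IsFrozen { get; set; }
}

public class Account
{
    private readonly AccountState state;

    // The state arrives "from the outside" instead of being built internally.
    public Account(AccountState state) { this.state = state; }

    public void Withdraw(decimal amount)
    {
        if (state.IsFrozen) throw new InvalidOperationException("Account is frozen");
        state.Balance -= amount;
    }
}

// In a test there is no complex setup left:
//   var account = new Account(new AccountState { Balance = 100m, IsFrozen = false });
//   account.Withdraw(40m);   // then assert that the injected state's Balance is 60m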
Of course, by refactoring in this way you'll be introducing more and more interfaces (abstract classes, if you're using C++) -- but that's perfectly all right: it's an excellent principle to "program to an interface, not an implementation".
So, to address your question directly, you're right: the difficulty in testing is definitely the design equivalent of what extreme programming calls a "code smell". On the plus side, though, there's a pretty clear path to refactor this problem away -- you don't have to have a perfect design to start with (fortunately!-), but can enhance it as you go. I'd recommend the book Refactoring to Patterns as good guidance to this purpose.

For me, code should be designed for testability. The other way around, I consider non-testable or hard to test code as badly designed (regardless of its beauty).
In your case, maybe you can mock external dependencies to run real unit tests (in isolation).

I'll take a different tack: the code just isn't designed for testability, but that does not mean it's necessarily badly designed. A design is the product of competing *ilities, of which testability is only one. Every coding decision increases some of the *ilities while decreasing others. For example, designing for testability generally harms simplicity/readability/understandability (because it adds complexity). A good design favors the most important *ilities for your situation.
Your code isn't bad, it just maximizes *ilities other than testability. :-)
Update: Let me add this before I get accused of saying designing for testability isn't important.
The trick, of course, is to design and code to maximize the good *ilities, particularly the important ones. Which ones are important depends on your situation. In my experience and in my situations, designing for testability has been one of the more important *ilities.

Ideally, the large majority of your classes will be unit-testable, but you'll never get to 100% since you're bound to have at least one bit that is dedicated to binding the other classes together to an overall program. (It's best if that can be exactly one place that is obviously and trivially correct, but not all code is as pleasant to work with.)

While there isn't a way to establish if a class is "well designed" or not, at least the first thing you mention is usually a sign of a questionable design.
Instead of relying on the database, the class you are testing should have a dependency on an object whose only responsibility is getting that data, maybe using a pattern like Repository or DAO.
As for the second reason, it doesn't necessarily highlight a bad design of the class; it can be a problem with the design of the tests (not having a fixture supertype or helpers where you can set up the dependencies used in several tests) and/or the overall architecture (e.g. not using factories or inversion of control to inject the corresponding dependencies).
Also, you probably shouldn't be using "real" objects for your dependencies, but test doubles. This helps you make sure you are testing the behavior of that one class, and not that of its dependencies. I suggest you look into mocking frameworks.
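To make that concrete, here is a minimal sketch assuming xUnit and the Moq mocking framework are available; the types IOrderRepository, Order and OrderService are invented for illustration. The database dependency is replaced by a mock, so only the behavior of the class under test is exercised.

using Moq;
using Xunit;

public class Order { public decimal Total { get; set; } }

public interface IOrderRepository
{
    Order GetById(int id);
}

public class OrderService
{
    private readonly IOrderRepository repository;
    public OrderService(IOrderRepository repository) { this.repository = repository; }

    public bool IsLargeOrder(int id) => repository.GetById(id).Total > 1000m;
}

public class OrderServiceTests
{
    [Fact]
    public void Orders_over_1000_are_large()
    {
        var repository = new Mock<IOrderRepository>();   // test double instead of the real data access
        repository.Setup(r => r.GetById(42)).Returns(new Order { Total = 1500m });

        var service = new OrderService(repository.Object);

        Assert.True(service.IsLargeOrder(42));
    }
}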

I might have an ideal solution for you to consider... the Private Accessor
Recently I served in a role where we used them prolifically to avoid the very situation you're describing -- reliance upon artificial data maintained within the primary data-store.
While not the simplest technique to implement, once it is in place you'll be able to set the private members of the class under test to whatever conditions you feel they should have, right from the unit-test code (so no loading from the database). You'll also have accomplished the goal without violating the class's protection levels.
Then it's basic assert and verify for the desired conditions, as normal.
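As an approximation of that private-accessor idea (a sketch only - Widget and its _threshold field are hypothetical, and xUnit is assumed), plain .NET reflection can hard-set a private member from the test without touching the database:

using System.Reflection;
using Xunit;

public class Widget
{
    private int _threshold = LoadThresholdFromDatabase();    // normally populated from the data-store
    private static int LoadThresholdFromDatabase() => 0;     // stand-in for the real DB call

    public bool Accepts(int value) => value >= _threshold;
}

public class WidgetTests
{
    [Fact]
    public void Accepts_values_at_or_above_the_threshold()
    {
        var widget = new Widget();

        // Hard-define the private member to the condition we want - no database involved.
        typeof(Widget)
            .GetField("_threshold", BindingFlags.NonPublic | BindingFlags.Instance)
            .SetValue(widget, 5);

        Assert.True(widget.Accepts(5));
        Assert.False(widget.Accepts(4));
    }
}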

Related

Is creating "testable code" always consistent with following the best OOP design principles?

This is perhaps too general/subjective a question for StackOverflow, but I've been dying to ask it somewhere.
Maybe it's because I'm new to the software engineering world, but it seems like the buzzwords I've been hearing the past couple years are like
"testable code"
"test coverage"
"pure functions"
"every code path in your entire application covered by a test that is a pure in-memory test -- doesn't open database connections or anything. Then we'll know that the code we deploy is guaranteed to work" (yea, right lol)
and sometimes I find this hard to reconcile with the way I want to design my application.
For example, one thing that happens often is that I have a complex algorithm inside one or more private methods:
private static void DoFancyAlgorithm(string first, string second)
{
// ...
}
and I want or need it to have test coverage.
Well, since you're not supposed to test private methods directly, I have three options:
make the method accessible for testing (maybe InternalsVisibleTo in C# or friend in C++) - see the sketch after this list
move the logic to a separate class whose "single responsibility" is dealing with this logic, even though from an OOP perspective I believe the logic should stay confined to the class it is currently inside, and test that separate class
leave the code as-is and spend 100+ hours figuring out how to set up a scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
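For the first option, a minimal C# sketch (the test assembly name "MyApp.Tests" and the enclosing FancyAlgorithms class are placeholders): change the method from private to internal and expose the assembly's internals to the test project.

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.Tests")]

public static class FancyAlgorithms
{
    internal static void DoFancyAlgorithm(string first, string second)
    {
        // ... same logic as before, now callable from the MyApp.Tests test project
    }
}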
So my question is:
Is creating "testable code" always consistent with the best OOP
practice or is there sometimes a tradeoff?
Creating testable code of course has consequences for the application's design.
So you may end up making some trade-offs in the design, but these are generally limited.
Besides, unit testing of a component's API focuses on the inputs and outputs of the tested functions.
So I have difficulty understanding how you could end up with such a bad smell as this:
leave the code as-is and spend 100+ hours figuring out how to set up the scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
In most cases, if the setup or understanding of the unit test scenarios consumes that much time, it very probably means that:
the component API is not clear.
and/or you mock too many things and so you should wonder whether you should not favor integration tests (without mocks or by limiting them) instead of unit tests (with mocks).
Ideally, in the case of legacy code which is running in production, refactoring the code in order to write new unit test cases is NOT the way to go.
Instead, it is better to first write the unit test cases for the existing code and check them in. With that safety net in place (you have to keep all the unit test cases passing at every step), you can then refactor your code (and test cases) in small steps. The goal of these refactoring steps should be to make the code follow the best OOP design principles.
Spending time writing new unit test cases for a legacy codebase is the biggest disadvantage of working with a legacy codebase.
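As a small illustration of that first safety-net step (a sketch only - LegacyPriceCalculator is hypothetical and xUnit is assumed), a "characterization" test simply records what the existing code does today, so the later refactoring steps have something to keep passing:

using Xunit;

public class LegacyPriceCalculator
{
    // Imagine this is the existing, untested legacy code.
    public decimal Calculate(decimal basePrice, int quantity)
        => basePrice * quantity * (quantity >= 10 ? 0.9m : 1.0m);
}

public class LegacyPriceCalculatorCharacterizationTests
{
    [Fact]
    public void Ten_or_more_items_currently_get_a_ten_percent_discount()
    {
        var calculator = new LegacyPriceCalculator();

        // The expected value pins down today's observed behavior, right or wrong;
        // it becomes the safety net for the refactoring that follows.
        Assert.Equal(90m, calculator.Calculate(10m, 10));
    }
}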

Is it acceptable as a professional developer not to write unit tests? [closed]

Just wondering about the pros and cons of TDD/automated unit testing, and looking for the community's view on whether it's acceptable for professional developers to write applications without supporting unit tests?
Re-asked on Programmers: https://softwareengineering.stackexchange.com/questions/159572/is-it-acceptable-as-a-professional-developer-not-to-write-unit-tests
I bet I'll get -1'ed for this, but I still say: if you have other measures to ensure quality, including avoiding regressions, program validation and program verification, then no.
The only problem is usually that people don't have any tools other than unit testing to achieve this.
If you have formally tested models (there's a tool that actually tests the model, or it was constructed in a way which ensures it's valid), and you have formally tested ways to ensure that the actually running software conforms to that model, then it's fine.
Example: if you are sure that the code you wrote in Ruby will act as you expect (because you or someone else tested the Ruby interpreter and it doesn't have bugs, or you use only a subset of features known to be safe), then it's fine. Usually, we trust C compilers and CPUs in this manner.
Also, if a program is only to be used once, there's no regression problem! If I write a one-liner in bash which will calculate something for me, I might test it first manually on fake data, then run it on the real data - no need to write an automated test.
If you take the blame, you can also go along with assumptions: I usually assume that Eclipse is pretty good at creating setters and getters, and I don't test those. Also, I assume that if there were any problem with the collection classes in Java 7, it would have turned up by now. But if there is trouble, it's your personal trouble. Don't blame anyone.
Personally, I rarely use unit testing on certain code, as I formally test it while it is still a flowchart on a piece of paper, and I ensure that I only use subsets of the language/libraries which are known to work in such situations. Also, I never let code out without peer review. Still, it's sometimes better if there's someone who runs an acceptance test on it...
It is up to you. The question is more philosophical in nature.
Unit tests are just a tool to help you. You can choose to ignore them. However, if you are going to work on a more than trivial project, I would advise you to use unit tests.
Yes, they also take time to write. But in the end you will save a lot of time whenever refactoring is done or some parts of the code need to be changed.
As always: It depends.
Generally speaking, unit tests are a good thing: they catch a whole class of errors, they verify that particular parts of your code work as expected under given circumstances, and they make it easier to track down errors when something does go wrong. So unless you have good reasons not to, you should write unit tests.
Good reasons not to write unit tests include:
Making relatively small changes to a codebase that is structured badly and hardly testable because of this (usually, the reason is that there is little separation of concerns, and testable units cannot be isolated for testing without intrusive changes to the codebase itself).
The nature of the problem domain makes the code inherently untestable. This is rare, but it happens - for example, it is very hard to come up with meaningful unit tests for a routine that draws a GUI: you'd have to make it render to a mocked surface and then check individual pixels, but you'd also have to mock all the parameters that influence layout decisions, etc.; while this is theoretically possible, it's not usually worth the effort, and one should opt for manual or semi-automatic testing in such cases.
The project is a tiny throwaway program with such a small scope and such a short lifecycle that the benefit gained from unit testing (increased maintainability, decreased complexity) is marginal. Keep in mind, however, that software tends to live longer than it was designed for, and your one-off throwaway script might very well end up becoming a mission-critical part of the company's processes.

Goal of unit testing and TDD: find/minimize bugs or improve design?

I'm fairly green to unit testing and TDD, so please bear with me as I ask what some may consider newbie questions, or if this has been debated before. If this turns out to be considered a "bad question" (too subjective and open for debate), I will happily close it. However, I've searched for a couple days, and am not getting a definitive answer, and I need a better understand of this, so I know no better way to get more info than to post here.
I've started reading an older book on unit testing (because a colleague had it on hand), and its opening chapter talks about why to unit test. One of the points it makes is that in the long run, your code is much more reliable and cleaner, and less prone to bugs. It also points out that effective unit testing will make tracking and fixing bugs much easier. So it seems to focus quite a bit on the overall prevention/reduction of bugs in your code.
On the other hand, I also found an article about writing great unit tests, and it states that the goal of unit testing is to make your design more robust, and conversely, finding bugs is the goal of manual testing, not unit testing.
So being the newbie to TDD that I am, I'm a little confused as to the state of mind with which I should go into TDD and building my unit tests. I'll admit that part of the reason I'm taking this on now with my recently started project is because I'm tired of my changes breaking previously existing code. And admittedly, the linked article above does at least point this out as an advantage of TDD. But my hope is that by going back and adding unit tests to my existing code (and then continuing TDD from this point forward), I can help prevent these bugs in the first place.
Are this book and this article really saying the same thing in different tones, or is there some subjectivity on this subject, and what I'm seeing is just two people having somewhat different views on how to approach TDD?
Thanks in advance.
Unit tests, and automated tests generally, are for both better design and verified code.
A unit test should exercise a single execution path in a very small unit, usually a public or internal method exposed on your object. The method itself can still use many other protected or private methods of the same object instance. You can have a single method and several unit tests for that method to cover its different execution paths. (By "execution path" I mean something controlled by if, switch, etc.) Writing unit tests this way will validate that your code really does what you expect. This can be especially important in corner cases where you expect an exception to be thrown in some rare scenarios. You can also test how the method behaves if you pass different parameters - for example null instead of an object instance, a negative value for an integer used for indexing, etc. That is especially useful for a public API.
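A small hypothetical sketch of that idea (xUnit assumed; the Formatter type is invented): one method, several tests, each pinning down a different execution path, including the null corner case.

using System;
using Xunit;

public static class Formatter
{
    public static string Shorten(string text, int maxLength)
    {
        if (text == null) throw new ArgumentNullException(nameof(text));
        if (text.Length <= maxLength) return text;            // path 1: short enough
        return text.Substring(0, maxLength) + "...";          // path 2: truncate
    }
}

public class FormatterTests
{
    [Fact]
    public void Short_text_is_returned_unchanged()
        => Assert.Equal("abc", Formatter.Shorten("abc", 5));

    [Fact]
    public void Long_text_is_truncated_with_an_ellipsis()
        => Assert.Equal("abcde...", Formatter.Shorten("abcdefgh", 5));

    [Fact]
    public void Null_text_throws()
        => Assert.Throws<ArgumentNullException>(() => Formatter.Shorten(null, 5));
}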
Now suppose that your tested method also uses instances of other classes. How do you deal with that? Should you still test only your single method and trust that the other class works? What if that class is not implemented yet? What if it has some complex logic inside? Should you test those execution paths as well through your current method? There are two approaches to dealing with this:
For some cases you will simply let the real class instance be tested together with your method. This is, for example, very common in the case of logging (it is not bad to have logs available during the test as well).
For other scenarios you would like to take these dependencies away from your method, but how do you do it? The solution is dependency injection and implementing against an abstraction instead of an implementation. What does that mean? It means that your method/class will not create instances of these dependencies itself; instead, it will receive them through method parameters, the class constructor, or class properties. It also means that it will not expect a concrete implementation, but an abstract base class or interface. This allows you to pass a fake, dummy, or mock implementation to your tested object. These special implementations simply don't do any processing: they take some data and return an expected result. This lets you test your method without its dependencies and leads to a much better and more extensible design.
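A minimal sketch of that shape (the names IExchangeRateProvider, PriceConverter and FixedRateProvider are invented): the class receives an abstraction through its constructor, and a hand-rolled fake returns canned data in tests.

public interface IExchangeRateProvider
{
    decimal GetRate(string currency);
}

public class PriceConverter
{
    private readonly IExchangeRateProvider rates;

    // Constructor injection: the dependency is not created in here.
    public PriceConverter(IExchangeRateProvider rates) { this.rates = rates; }

    public decimal ToEuro(decimal amount, string currency) => amount * rates.GetRate(currency);
}

// A fake implementation does no real processing; it just returns expected data.
public class FixedRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string currency) => 2m;
}

// In a test: new PriceConverter(new FixedRateProvider()).ToEuro(10m, "USD") returns 20m.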
What is the disadvantage? Once you start using fakes/mocks you are testing a single method/class, but you don't have a test which grabs all the real implementations and puts them together to check whether the whole system really works. You can have thousands of unit tests validating that each of your methods works, but that doesn't mean they will work together. This is the scenario for more complex tests - integration or end-to-end tests.
Unit tests should usually be very easy to write - if they are not, it means that your design is probably too complicated and you should think about refactoring. They should also be very fast to execute, so that you can run them very often. Other kinds of tests can be more complex and very slow, and they should run mostly on the build server.
How does this fit into the software development process? The worst part of the development process is stabilization and bug fixing, because it is very hard to estimate. To estimate how much time a bug fix will take, you must know what causes the bug, but that investigation cannot be estimated. You can have a bug which takes one hour to fix but two weeks of debugging your application to find. With good code coverage you will most probably find such a bug early during development.
Automated testing doesn't mean that the software contains no bugs. It only means that you did your best to find and solve them during development, and because of that your stabilization phase should be much less painful and much shorter. It also doesn't mean that your software does what it should - that is more about the application logic itself, which must be checked by separate tests going through each use case / user story - acceptance tests (and they can also be automated).
How does this fit with TDD? TDD takes it to the extreme, because in TDD you write your tests first, to drive your quality, code coverage, and design.
It's a false choice. "Find/minimize bugs" OR improve design.
TDD, in particular (and as opposed to "just" unit testing) is all about giving you better design.
And when your design is better, what are the consequences?
Your code is easier to read
Your code is easier to understand
Your code is easier to test
Your code is easier to reuse
Your code is easier to debug
Your code has fewer bugs in the first place
With well-designed code, you spend less time finding and fixing bugs, and more time adding features and polish. So TDD gives you a savings on bugs and bug-hunting, by giving you better design. These things are not separate; they are dependent and interrelated.
There can be many different reasons why you might want to test your code. Personally, I test for a number of reasons:
I usually design an API using a combination of the normal design patterns (top-down) and test-driven development (TDD; bottom-up) to ensure that I have a sound API, both from a best-practices point of view and from an actual-usage point of view. The focus of the tests is both on the major use-cases for the API and on the completeness of the API and its behavior - so they are primarily "black box" tests. The development sequence is often:
main API based on design patterns and "gut feeling"
TDD tests for the major use-cases according to the high-level specification for the API - primarily to make sure the API is "natural" and easy to use
fleshed out API and behavior
all the needed test cases to ensure the completeness and correct behavior
Whenever I fix an error in my code, I try to write a test to make sure it stays fixed. Somehow, the error got into my original design and past my original testing of the code, so it is probably not all that trivial. I have noticed that many of these tests are "white box" tests.
In order to be able to make any sort of major re-factoring of the code, you need an extensive set of API tests to make sure the behavior of the code stays the same after the re-factoring. For any non-trivial API, I want the test suite to be in place and working for a long time before the re-factoring, to be sure that all the major use-cases are covered in a good way. As often as not, you are forced to throw away most of your "white box" tests, as they - by their very definition - make too many assumptions about the internals. I usually try to "translate" as many of these tests as possible, since the same non-trivial problems tend to survive re-factoring of the code.
In order to transfer any code between developers, I usually also want a good test suite with focus on the API and the major use-cases. So basically the tests from the initial TDD...
I think that the answer to your question is: both.
You will improve design because there is one particular thing about TDD that is great: while you write tests you put yourself in the position of the client code that will be using the system under test - and this alone makes you think about certain design choices.
For example: UI. When you start writing the tests, you will see that those God-Forms are impossible to test, so you separate the logic behind the screens into a presenter/controller, and you get MVP/MVC/whatever.
Having the concept of unit testing a class and mocking its dependencies brings you to the Single Responsibility Principle. There is a similar point to be made about every one of the SOLID principles.
As for bugs, well, if you unit test every method of every class you write (except properties, very simple methods, and such), you will catch most bugs from the start. Write integration tests as well and you cover almost all of them.
I'll take my stab at this using a remix of a previous answer I wrote. In short, I don't see this as a dichotomy between driving good design and minimizing bugs. I see it more as one (good design) leading to the other (minimizing bugs).
I tend towards saying TDD is a design process that happens to involve unit testing. It's a design process because within each Red-Green-Refactor iteration, you write the test first for code that doesn't exist. You're designing as you're going.
The first beauty of TDD is that the design of your code is guaranteed to be testable. Testable code tends to have loose coupling and high cohesion. Loose coupling and high cohesion are important because they make the code easy to change when requirements change. The second beauty of TDD is that after you're done implementing your system, you happen to have a huge regression suite to catch any bugs and changes in assumptions. Thus, TDD makes your code easy to change because of the design it creates and it makes your code safe to change because of the test harness it creates.
Trying to retrospectively add unit tests can be quite painful and expensive. If the code doesn't support unit testing, you may be better off looking at integration tests to test your code.
Don't mix Unit Testing with TDD.
Unit Testing is just the fact of "testing" your code to ensure quality and maintainability.
TDD is a full blown development methodology in which you first write your tests (based on requirements), and only then you write the needed code (and just the needed code) to make that test pass. This means that you only write code to repair a broken test.
Once that is done, you write another test, and the code needed to make it pass. Along the way, you may be forced to refactor the code to allow a new test to run without breaking another. This way, the "design" arises from the tests.
The purpose of this methodology is of course reduce bugs and improve design, but the main goal of it is to improve productivity because you write exactly the code you need. And you don't write documentation: the tests are the documentation. If a requirement changes, then you change the tests and the code afterwards. If new requirements appear, just add new tests.

How to determine if an existing class can be unit-tested?

Recently, I took ownership of some C++ code. I am going to maintain this code, and add new features later on.
I know many people say that it is usually not worth adding unit tests to existing code, but I would still like to add some tests which will at least partially cover the code. In particular, I would like to add tests which reproduce bugs which I have fixed.
Some of the classes are constructed with some pretty complex state, which can make it more difficult to unit-test.
I am also willing to refactor the code to make it easier to test.
Is there any good article you recommend on guidelines which help to identify classes which are easier to unit-test? Do you have any advice of your own?
While Martin Fowler's book on refactoring is a treasure trove of information, why not take a look at "Working Effectively with Legacy Code."
Also, if you're going to be dealing with classes where there are a ton of global variables or huge amounts of state transitions, I'd put in a lot of integration checks. Separate out as much of the code which interacts with the code you're refactoring, to make sure that all expected inputs, in the order they are received, continue to produce the same outputs. This is critical, as it's very easy to "fix" a subtle bug that might have been addressed somewhere else.
Take notes too. If you do find that there is a bug which another function/class expects and handles properly you'll want to change both at the same time. That's difficult unless you keep thorough records.
Presumably the code was written for a purpose, and a unit test will check if the purpose is met, i.e. the pre-conditions and post-conditions hold for the methods.
If the public class methods are such that you can externally check the state it can be unit tested easily enough (black-box test). If the class state is invisible or if you have to test tricky private methods, your test class may need to be a friend (white-box test).
A class that is hard to unit test will be one that
Has enormous dependencies, i.e. tightly coupled
Is intended to work in a high-volume or multi-threaded environment. There you would use a system test rather than a unit test, and the actual output may not be totally deterministic.
I've written a fair number of blog posts about unit testing non-trivial C++ code: http://www.lenholgate.com/blog/2004/05/practical-testing.html
I've also written quite a lot about adding tests to existing code: http://www.lenholgate.com/blog/testing/
Almost everything can and should be unit tested. If not directly, then by using mock classes.
Since you have decided to refactor your classes, try to use a BDD or TDD approach.
To prevent breaking existing functionality, the only way is to have good integration tests, but it usually takes time to execute them all for a complex system.
Without more details on what you are doing, it is not that easy to give more implementation details. Some suggestions are:
use MVP or presenter first for developing gui
use design patterns where appropriate
use function and member pointers, or observer design pattern to break dependencies
I think that if you're having to come up with some "measure" to test if a class is testable, you're already fscked. You should be able to tell just by looking at it: can you write an independent program that links to this class alone and makes sure it works?
If a class is so huge that you can't be sure just by looking at it... chances are it probably isn't testable. People who don't know how to make small, distinct interfaces generally don't know how to adhere to any other principle either.
In the end though, the way to find out if a class is testable is to try to put it in a harness. If you end up having to pull in half your program to do it, try refactoring. If you find that you can't even perform the most basic refactor without having to rewrite the entire program, analyze the expense of doing so.
We at IPL published a paper It's testing Jim, but not as we know it which explores the practical problems of testing C++ and suggests some techniques to address them that may well be of use given your question. These techniques are also well supported in Cantata++ - our C/C++ unit and integration testing tool.

What is test-driven development (TDD)? Is an initial design required?

I am very new to test-driven development (TDD), not yet started using it.
But I know that we have to write tests first and then the actual code to pass the test and refactor it till the design is good.
My concern over TDD is where it fits in our systems development life cycle (SDLC).
Suppose I get a requirement of making an order processing system.
Now, without having any model or design for this system, how can I start writing tests?
Shouldn't we need to define the entities and their attributes before proceeding?
If not, is it possible to develop a big system without any design?
There are two levels of TDD: ATDD, or acceptance-test-driven development, and normal TDD, which is driven by unit tests.
I guess the relationship between TDD and design is influenced by the somewhat "agile" concept that source code IS the design of a software product. A lot of people reinforce this by translating TDD as Test Driven Design rather than development. This makes a lot of sense as TDD should be seen as having a lot more to do with driving the design than testing. Having acceptance and unit tests at the end of it is a nice side effect.
I cannot really say too much about where it fits into your SDLC without knowing more about it, but one nice workflow is:
For every user story:
Write acceptance tests using a tool like FitNesse or Cucumber, this would specify what the desired outputs are for the given inputs, from a perspective that the user understands. This level automates the specifications, or can even replace specification documentation in ideal situations.
Now you will probably have a vague idea of the sort of software design you might need as far as classes / behaviour etc goes.
For each behaviour:
Write a failing test that shows how you would like calling code to use the class.
Implement the behaviour that makes the test pass
Refactor both the test and actual code to reflect good design.
Go onto the next behaviour.
Go onto the next user story.
Of course, the whole time you will be thinking of the evolving high-level design of the system. Ideally TDD will lead to a flexible design at the lower levels that permits the appropriate high-level design to evolve as you go, rather than trying to guess it all at the beginning.
It should be called Test Driven Design, because that is what it is.
There is no practical reason to separate the design into a specific phase of the project. Design happens all the time. From the initial discussion with the stakeholder, through user story creation, estimation, and then of course during your TDD sessions.
If you want to formalize the design using UML or whatever, that is fine, just keep in mind that the code is the design. Everything else is just an approximation.
And remember that You Aren't Gonna Need It (YAGNI) applies to everything, including design documents.
Writing tests first forces you to think about the problem domain first, and acts as a kind of specification. Then, in a second step, you move to the solution domain and implement the functionality.
TDD works well iteratively:
Define your initial problem domain (can be small, evolutionary prototype)
Implement it
Grow the problem domain (add features, grow the prototype)
Refactor and implement it
Repeat step 3.
Of course you need to have a vague architectural vision upfront (technologies, layers, non-functional requirements, etc.). But the features that bring added value to your application can be introduced nicely with TDD.
See related question TDD: good for a starter?
With TDD, you don't care much about design. The idea is that you must first learn what you need before you can start with a useful design. The tests make sure that you can easily and reliably change your application when the time comes that you need to decide on your design.
Without TDD, this happens: you make a design (which is probably too complex in some areas, plus you forgot to take some important facts into account since you didn't know about them). Then you start implementing the design. With time, you realize all the shortcomings of your design, so you change it. But changing the design doesn't change your program. Now you try to change your code to fit the new design. Since the code wasn't written to be changed easily, this will eventually fail, leaving you with two designs (one broken and the other in an unknown state) and code which doesn't fit either.
To start with TDD, turn your requirements into tests. To do this, ask "How would I know that this requirement is fulfilled?" When you can answer this question, write a test that implements the answer. This gives you the API which your (to-be-written) code must adhere to. It's a very simple design, but one that a) always works and b) is flexible (because you can't test inflexible code).
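A hypothetical sketch of that first step for the order-processing example (xUnit assumed; the requirement and the Order API are invented): answering "How would I know the total is right?" produces a test, and the test dictates the API before any production code exists.

using Xunit;

public class OrderTests
{
    [Fact]
    public void Total_is_the_sum_of_the_line_amounts()
    {
        var order = new Order();
        order.AddLine(10m);
        order.AddLine(15m);

        Assert.Equal(25m, order.Total);
    }
}

// The simplest code that makes the test pass:
public class Order
{
    public decimal Total { get; private set; }

    public void AddLine(decimal amount) => Total += amount;
}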
Also starting with the test will turn you into your own customer. Since you try hard to make the test as simple as possible, you will create a simple API that makes the test work.
And over time, you'll learn enough about your problem domain to be able to make a real design. Since you have plenty of tests, you can then change your code to fit the design. Without terminally breaking anything on the way.
That's the theory :-) In practice, you will encounter a couple of problems but it works pretty well. Or rather, it works better than anything else I've encountered so far.
Well of course you need a solid functional analysis first, including a domain model, without knowing what you'll have to create in the first place it's impossible to write your unit tests.
I use test-driven development to program, and I can say from experience that it helps create more robust, focused and simpler code. My recipe for TDD goes something like this:
Using a unit-test framework (I've written my own) write code as you wish to use it and tests to ensure return values etc. are correct. This ensures you only write the code you're actually going to use. I also add a few more tests to check for edge cases.
Compile - you will get compiler errors!!!
For each error add declarations until you get no compiler errors. This ensures you have the minimum declarations for your code.
Link - you will get linker errors!!!
Write enough implementation code to remove the linker errors.
Run - your unit tests will fail. Write enough code to make the tests succeed.
You've finished at this point. You have written the minimum code you need to implement your feature, and you know it is robust because of your tests. You will also be able to detect if you break things in the future. If you find any bugs, add a unit test to test for that bug (you may not have thought of an edge case, for example). And you know that if you add more features to your code you won't make it incompatible with existing code that uses your feature.
I love this method. Makes me feel warm and fuzzy inside.
TDD implies that there is some existing design (external interface) to start with. You have to have some kind of design in mind in order to start writing a test. Some people will say that TDD itself requires less detailed design, since the act of writing tests provides feedback to the design process, but these concepts are generally orthogonal.
You need some form of specification, rather than a form of design -- design is about how you go about implementing something, specification is about what you're going to implement.
The most common form of specs I've seen used with TDD (and other agile processes) is user stories -- an informal kind of "use case" which tends to be expressed in somewhat stereotyped English sentences like "As a <kind of user>, I can <do something>" (the form of user stories is more or less rigid depending on the exact style/process in use).
For example, "As a customer, I can start a new order", "As a customer, I can add an entry to an existing order of mine", and so forth, might be typical if that's what your "order entry" system is about (the user stories would be pretty different if the system wasn't "self-service" for users but rather intended to be used by sales reps entering orders on behalf of users, of course -- without knowing what kind of order-entry system is meant, it's impossible to proceed sensibly, which is why I say you do need some kind of specification about what the system's going to do, though typically not yet a complete idea about how it's going to do it).
Let me share my view:
If you want to build an application, along the way you need to test it, e.g. check the values of the variables you create by code inspection, or quickly drop in a button that you can click on to execute a part of the code and pop up a dialog showing the result of the operation, etc. On the other hand, TDD changes your mindset.
Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile, and somewhere in your head you know the requirements, and you just code and test via buttons and pop-ups or code inspection. This is syntax-debugging-driven development. But when you are doing TDD, it is "semantic-debugging-driven development", because you write down your thoughts/goals for your application first by using tests (which are a more dynamic and repeatable version of a whiteboard). The tests check the logic (or "semantics") of your application and fail whenever you have a semantic error, even if your application is free of syntax errors (and compiles).
In practice you may not know or have all the information required to build the application. Since TDD kind of forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).