How does Test Driven Development help a one-man software project?

I've spent a lot of time building out tests for my latest project, and I'm really not sure what the ROI was on the time spent.
I'm a one-man operation, and I'm building web applications. I don't necessarily have to "prove" that my software works to anyone (except my users), and I'm worried that I spent a good deal of time needlessly rebugging test code over the past months.
My question is, while I like the idea of TDD for small to large software teams, how does it help a one-man team build high-quality code quickly?
Thanks
=> Ran across this today on the blog of Joel Spolsky, one of the founders of Stack Overflow:
http://www.joelonsoftware.com/items/2009/09/23.html
"Zawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical. And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”"
As I'm getting older I think I'm realizing more and more that it's just all about speed and functionality. I'd love to build unit tests, but since we only have so much time at our disposal, I'd rather build it faster and rely on beta testing and good automated error reporting to weed out problems as they crop up. If the project eventually gets big enough that this bites me in the a**, it will be generating enough revenue that I can justify a rebuild.

I think that in a situation like yours it helps greatly when you have to change/refactor/optimize something on which a lot of code depends... By unit testing you can quickly ensure that everything that worked before the change still works afterwards :) In other words, it gives you confidence.

TDD doesn't really have anything to do with team size. It has to do with creating the smallest amount of software needed with the right interface that works correctly.
The TDD process requires you to write only just enough code to satisfy a test, so you don't end up creating code you don't need.
Using TDD to design a class makes you think as a client of the class, so you end up creating a better interface more often than if you developed it without TDD.
TDD, by its nature, will achieve 100% code coverage, proving your code works. A side effect of this is that you and others can now more safely change your class because it has a full suite of automated tests.
I should add that its iterative nature creates a positive feedback loop as well, so as you iterate you gain more and more confidence in your code.
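To make that cycle concrete, here is a minimal sketch of one red-green iteration using JUnit (the Stack class and its API are invented for illustration, not taken from the question):

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class StackTest {

    // Step 1 (red): write the test first, as a client of the class.
    // This is where the interface gets designed: push() then pop()
    // should return the last value pushed.
    @Test
    public void popReturnsLastPushedValue() {
        Stack stack = new Stack();
        stack.push(42);
        assertEquals(42, stack.pop());
    }

    @Test
    public void newStackIsEmpty() {
        assertTrue(new Stack().isEmpty());
    }
}

// Step 2 (green): write only enough code to make the tests pass.
class Stack {
    private final java.util.Deque<Integer> items = new java.util.ArrayDeque<>();

    void push(int value) { items.push(value); }
    int pop()            { return items.pop(); }
    boolean isEmpty()    { return items.isEmpty(); }
}
```

Each new behavior repeats the loop: a failing test, just enough code to pass it, then a refactoring pass with the tests as a safety net.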

TDD is not only about testing; it is also about designing your classes / API.
I mean: by writing a test first, you are forced to think about how you want to use your class. So you first think about the interface of your class and how you want to use it, and hence your object model becomes more usable and readable.

rebugging is always needless - just don't delete the bugs in the first place...
For a real answer, you can't do better than 'it depends'. If:
you don't tend to have the kind of problems that automated unit testing can find (as opposed to performance, visual or aesthetic ones)
you have some other way of designing the code (e.g. UML)
you don't tend to have cause to change things while keeping them working
then it could well be the case that TDD doesn't really work out for you.
Or maybe you are doing it wrong, and if you did it differently it would work better.
Or, just maybe, it is actually working but you don't realise it. One thing about working solo is that self-assessment is difficult.
In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager—indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are "above average" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments.

In theory team size should not matter. TDD is supposed to pay off because:
You test the code so you get the bugs out
When you maintain or refactor the code you know you didn't break it, because you can easily test it
You produce better code because you focus on edge cases as you write the code
You produce better code because you can refactor the code with confidence
And generally I do find that the approach is valuable. I must admit to being in two minds about the ongoing maintenance of some tests.

Even before the term TDD became popular, I've always written a little main function to test whatever piece I was working on. I'd throw it away right afterwards. I have trouble understanding the mentality of programmers who could write code that has never been executed and plug it in. When you find a "bug" after doing that a few times, it can take days or even weeks to track down.
Unit testing is a slightly better way to go because your tests hang around and you can see the intent.
Unit testing will find the bug much faster than testing from a UI after integration--it'll just save you time.
Now, saving all your unit tests and creating a suite can be of less value if you are working solo, especially if you like to refactor a lot (you can easily spend more time refactoring tests than code), but the tests are still worthwhile to create.
Plus, test-driven development is kinda fun to a degree.

I've found that people concentrate too much on the TEST in TDD. There are many who don't believe that this is what the originators had in mind.
I've found that BDD is quite useful, no matter what the problem size. It concentrates on the gathering of how the system is supposed to behave.
Some people take it all the way to the creation of automated unit tests. I use them as both specifications and test cases. Because they're in English, it is easy for the business to understand them, as well as the QA department.
In general, it is a formalized way of recording specs, so that code can be written. Isn't that what the ultimate goal is?
Here are a few links
What's in a Story
Introducing BDD
Designing Klingon Warships Using Behaviour Driven Development

Pros and cons of unit testing after the fact

I have a largish, complex app of around 27k lines. It's essentially a rule-driven multithreaded processing engine, without giving too much away. It's been partially tested as it's been built, certain components at least.
The question I have is: what are the pros and cons of doing unit testing after the fact, so to speak, after it's been implemented? It is clear that traditional testing is going to take 2-3+ months to test every facet, and it all needs to work, and that time is not really available.
I've done a fair bit of unit testing in the past, but generally it's been on desktop automation or LOB apps, which are fairly simple. The app itself is highly componentized internally, really interface-driven. I've not decided on which particular framework to use. Any advice would be appreciated.
What say you.
I think there are several advantages to unit testing existing code
Regression management
Better understanding of the code. Testing it will reveal cases you did not anticipate and will help define the behavior of the code
It will point out design deficiencies in the code as you struggle to test poorly defined methods.
But I think it's more interesting to consider the cons of unit testing code. AFAIK, there are no cons. All of the time spent adding tests will pay for itself in everything but the shortest of time cycles.
There are many reasons to unit test code. The main reason I would advocate unit testing after the fact is simple. Your code is broken, you just don't know it yet.
There is a very simple rule in software. If the code is not tested, it's broken. This may not be immediately obvious at first, but as you begin testing, you will find bugs. It's up to you to determine how much you care about finding these bugs.
Besides this, there are several other important benefits of unit testing,
regression testing will be made simpler
other developers, that are less knowledgeable, can't break your desired behavior
the tests are a form of self documentation
can reduce time in future modifications (no more manual testing?, less bugs?)
The list can go on and on. The only real drawback is the time it takes to write these tests. I believe that drawback is always offset by the time it would take you to debug problems you could have found while unit testing!
Depending on how many bugs "manual testing" turns up, you could simply do test-driven bug fixing, which in my experience is far more effective than simply driving up code coverage by writing "post-mortem" unit tests.
(Which is not to say writing unit tests afterwards is a bad idea, it's just that TDD is almost always a better idea.)
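A sketch of what test-driven bug fixing can look like in JUnit (the AmountParser class and the thousands-separator bug are hypothetical, invented for illustration):

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class AmountParserTest {

    // A user reports that "1,000" is parsed wrong. Before touching the
    // code, capture the bug as a failing test; once it passes, it stays
    // in the suite as a permanent regression guard.
    @Test
    public void parsesAmountsWithThousandsSeparator() {
        assertEquals(1000, AmountParser.parse("1,000"));
    }

    @Test
    public void parsesPlainAmounts() {
        assertEquals(7, AmountParser.parse("7"));
    }
}

class AmountParser {
    // The fix: strip separators before parsing rather than stopping
    // at the first non-digit character.
    static int parse(String s) {
        return Integer.parseInt(s.replace(",", ""));
    }
}
```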
Here's a few of each to my mind:
Pro:
Time is saved in not having to test methods that have been removed as the design evolved over time. What is left is what really has to get tested.
By adding tests, this allows an opportunity to review all the aspects in the app and determine what other optimizations one could add now that a working prototype is ready.
Con:
Large time investment to get the tests written, new functionality may be delayed for some time to generate all the tests.
Bugs may have been introduced that the tests will discover that may cause this to be longer than initially planned.
The main point would be that adding unit tests allows for refactoring and putting more polish on the application.
I think one of the biggest cons of testing "after the fact" is that you will probably have a harder time testing. If you write code without tests, you usually don't have testability in mind and end up writing code that is hard to test.
But after you've spent this extra time writing tests and changing your code for better testability, you'll be much more confident about making changes, since you won't need a lot of time to debug and check that everything is OK.
Finally, you might find new bugs which weren't caught before, and spend some time fixing them. But hey, that's what tests are for =)
Pro post facto unit testing:
Get documentation you can trust.
Improve understanding of the code.
Push toward refactoring and improving the code itself.
Fix bugs that lurk in the code.
Con post facto unit testing:
Waste time fixing bugs you can live with. (If you wrote 27KLOC, we hope it does something, right?)
Spend time understanding and refactoring code you don't need to understand.
Lose time that could go into the next project.
The unasked question is just how important an asset is this code to your organization, long term? The answer to this question determines how much you should invest. I have plenty of (successful) competitors where the major purpose of their code is to get out numbers to evaluate some new technique or idea. Once they have the numbers, the code is of little marginal value. They (rightly) test very carefully to make sure the numbers are meaningful. After that, if there are fifty open bugs that don't affect the numbers, they don't care. And why should they? The code has served its purpose.
If you are doing any refactoring, those tests will help you detect any bugs that will appear in the process.
Unit testing "after the fact" is still valuable, and provides most of the same advantages of unit testing during development.
That being said, I find it's more work to test after the fact (if you want to get the same level of testing). It's still valuable, and still worth while.
Personally, when trying to tackle something with limited time, I try to focus my testing efforts as much as possible. Any time you fix a bug, add tests to help prevent it in the future. Any time you're going to refactor, try to put enough testing in place to feel confident you're not going to break something.
The only con of adding unit testing is that it does take some development time. Personally, I find that the development time spent on testing is far outweighed by the time saved in maintenance, but this is something you need to determine on your own.
Unit testing is still definitely useful. Check out http://en.wikipedia.org/wiki/Unit_testing for a full list and explanation of the benefits.
The main benefits you will gain are documentation, making change easier, and it simplifies future integration.
There are really no costs to adding unit testing except your time. Realize though that the time you spend adding unit testing will reduce the amount of time you will need to spend in other areas of development by at least the same amount and most likely more.
Unit testing doesn't prove that a system works. It proves that each unit works as an independent unit. It doesn't prove that the integrated system will work.
Unit testing "after the fact" is useful for two things - finding bugs that you've missed so far and won't find using any other kind of testing (especially for rare conditions - there's huge numbers of rare conditions that can happen in particular units for any real world system), and as regression tests during maintenance.
Neither of these is going to help much in your situation - you need to do other forms of testing either way. If you don't have time to do what you need to do, taking on even more work is unlikely to help.
That said, without unit testing, I guarantee you will have nasty surprises when the customers start using the code. It's all those rare conditions - there are so many of them that some are bound to occur soon. Black-box testers tend to get into habitual patterns, which means they only test so many rare cases - and they have no way of knowing what rare cases there are in particular units and how to trigger them anyway. More users means more variations in usage patterns.
I'm with those who say unit tests should be written as part of the programming process - one of the programmer's responsibilities. As a rule, code gets written faster that way, as you get fewer and less complex bugs to track down as you go, and you tend to find out about them while you're still familiar with the code that has the bug.
If development is "done" I would say that there is not too much point in unit testing.
This is one of these difficult value judgement types of questions.
I would mostly agree with Epaga, that writing new tests as you fix bugs (perhaps with a couple of extra tests thrown in) is a good approach.
I would add two further comments:
Doing backed-off, black-box testing of a unit before making large changes can be a good idea
Consistency testing isn't unit testing, but certain types of program lend themselves to the easy generation of consistency tests. This might be one approach to making sure you don't break things.
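For instance, a round-trip check is one easy-to-generate consistency test: encode then decode and assert you get the original back, without pinning down what the encoded form looks like. A minimal sketch in JUnit (the Codec class here is hypothetical and just wraps Base64):

```java
import org.junit.Test;
import static org.junit.Assert.*;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CodecConsistencyTest {

    // Consistency test: decode(encode(x)) == x for a spread of inputs.
    // It doesn't specify the encoding itself, only that the two halves
    // agree - exactly the "don't break things" guarantee wanted here.
    @Test
    public void encodeDecodeRoundTrip() {
        String[] samples = { "", "hello", "line\nbreak", "unicode: ü" };
        for (String s : samples) {
            assertEquals(s, Codec.decode(Codec.encode(s)));
        }
    }
}

class Codec {
    static String encode(String s) {
        return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(String s) {
        return new String(Base64.getDecoder().decode(s), StandardCharsets.UTF_8);
    }
}
```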

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all the companies I know, very few proper unit tests existed (if any). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines talk a lot about EJB 3.1, OSGi, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, or how some nasty beast of legacy code was brought under test... (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing and CI and such have a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible.
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "fewer bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now", even when shown pretty graphs demonstrating the opposite.
Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF) which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects like the software engineering schools want to make you think. He just adds another parameter to control which is "Do I want to control this project? Will it be needed?"
Laziness / considered boring.
Management feeling it's unnecessary - the ignorant "just do it right" attitude.
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project".
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find such-minded people for example from the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to cope with code quality if there are thousands of new feature requests, when the programmers could simply implement something instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers, unless something goes wrong. At which point managers and customers are in an uproar and demand to know why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes on a life of its own (and becomes prominent) as different business units start depending on it for their own business processes.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless: they become a time-consuming hindrance. I see this situation all too often in companies where IT teams exist to support the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs to have an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning how to test what. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework - see the sketch after this list. Learning techniques and patterns that increase testability.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
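On the database point in particular, a mocking framework lets you test logic without standing up a real database. A minimal sketch using JUnit and Mockito (the UserStore interface and Greeter class are hypothetical, invented for illustration):

```java
import org.junit.Test;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

public class GreeterTest {

    // The dependency to abstract away: in production this would be
    // backed by the database; in the test it is replaced by a mock.
    interface UserStore {
        String findName(int userId);
    }

    static class Greeter {
        private final UserStore store;
        Greeter(UserStore store) { this.store = store; }

        String greet(int userId) {
            String name = store.findName(userId);
            return name == null ? "Hello, guest" : "Hello, " + name;
        }
    }

    @Test
    public void greetsKnownUserByName() {
        UserStore store = mock(UserStore.class);
        when(store.findName(7)).thenReturn("Ada");
        assertEquals("Hello, Ada", new Greeter(store).greet(7));
    }

    @Test
    public void greetsUnknownUserAsGuest() {
        // An unstubbed mock returns null, which models "user not found".
        UserStore store = mock(UserStore.class);
        assertEquals("Hello, guest", new Greeter(store).greet(99));
    }
}
```

The design choice that makes this possible is constructor injection: because Greeter receives its UserStore, the test can hand it a mock instead of a live database connection.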
So until you find the right way, yeah, it's dull, unrewarding, hard to do, time-consuming, etc.
EDIT:
In this blog post I go into depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. Even if you set up mock data, can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered. That is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen in a power outage, or when the network link sends bad data that is still CRC-correct, then you won't have written tests for it.
I am all in favour of code inspections as they let programmers share experience and code style from other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (i.e., without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and not in need of unit tests. So we have a few tests that remain as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It takes time and probably a very deep knowledge of most of the application's modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues do not manage proper tests, and there is no point in creating bad tests.
It's 'dull' to chase some random 'feature' of extreme importance for more than a day through a mysterious code jungle written by someone else x years ago, without any clue as to what's going wrong, why it's going wrong, and with absolutely no idea of what could fix it, when it was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check; why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is overrated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, says that I have too many methods in my class. So I should cut the class into abstract classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting and says you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on detecting when code is too complicated, and not care so much about the little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you change anything you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place; for example, I did a lot of unit testing in my webpage parser, because all the time I found different bugs or unsupported tags. Testing database programs is really hard if you want to also test database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person that writes tests (I want to get better at it; that's the only reason I do it). I understand that for others, writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change on a webpage. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

What's the best argument to convince developers to learn TDD?

Let me first come out of the closet. I'm a TDD believer. I'm trying to practice Test Driven Development as much as I can.
Some developers at my work refuse to even try it. I myself started TDD by trying to prove to one of my peers that Test Driven Development is a bad idea. The arguments are:
Why? I've been a pretty successful developer so far.
It's going to slow me down.
What's the best pro-TDD argument you've heard or used?
See also: What is the best reason for unit testing?
Perhaps they know better.
Unit testing by developers is an extremely useful practice and I cannot overemphasize its benefits, not only during initial development but also during refactoring when unit tests can catch early not only ordinary code defects but also the break of assumptions made by developers that were never captured in formal documentation and thus are likely lost by the time refactoring occurs.
That being said, TDD is no magic pixie dust:
The 'just write enough code to pass the test' approach gives false positives. There are often known fallacies and problems that the 'just enough' approach fails to address; quick examples that come to mind are distributed-systems fallacies or NUMA performance problems. Just capturing those requirements as test cases for TDD would turn into a full-time job in itself.
The explosion of mocks goes out of control for any serious-size project. Mocks are code like any other code; they need to be maintained and don't just write themselves out of the blue.
TDD is often used as an excuse to eliminate QA testing. 'Our developers have already written and tested it, let's ship it' completely neglects the end-to-end, feature-oriented testing QA should cover.
I don't trust the fox guarding the hen house. A wrong algorithm can still pass TDD with flying colors if the same mistakes are made in both the test and in the implementation.
All methodologies in the end try to use process to substitute talent.
My main quarrel with TDD is that it is presented as a magic solution to most development problems, but its cost is kept under the table by its advocates. Doubling or tripling your code base with mocks does not come for free. I would much rather see a few comprehensive unit tests written during development. As for the test-first TDD approach, I have yet to see its benefits in a real-size project.
I understand I'll be egg-ed to death now for posting this, but what the heck, who cares...
No amount of argument will convince anyone to use TDD.
You have to SHOW them, and demonstrate the benefits. It's easier to make someone's 'light go on' by showing rather than telling.
TDD is a "pay me now or pay me later" tradeoff. If you only count the time from starting coding to checking in your code then TDD often does take longer, especially when first learning TDD. The payoff comes later during the testing phase, and also in future rounds of coding.
For the testing phase, I found that with TDD:
I had substantially fewer bugs. In my last TDD code, I had bugs only due to requirements misunderstandings (or changes), or in areas where I wasn't able to bring the code under test (PHP code in that case).
The bugs I had were generally easier to reproduce under test, because I had already gotten the system under test.
Fixing the bugs was faster, and with the tests I could have a greater belief that I didn't introduce new bugs.
The code itself had the following properties:
As I started out thinking like a client of the code, the code tended to be easier to use. (This is one of the benefits of writing tests first).
The code is easier to test.
Writing unit tests is easier (and in many cases more fun) just before rather than after, so more tests are written.
The code is easier to refactor and clean up. This was particularly true with Python, where automatic refactoring tools have a harder time.
Because of that, when it came time to revisit the code, it was easier to understand and easier to change, plus we had at least some regression tests already in place.
What this means is that the payback for TDD time may be months later. Furthermore, starting TDD with legacy code is particularly hard. Then there is time needed to both learn how to write good tests (a bad test set can either be insufficient or worse be brittle making it harder, not easier, to do refactorings) and how to get a complex system under test.
I have to admit I haven't been really able to get too many other people to switch to TDD. I think I switched largely because I wanted an easier way of testing and also I had the opportunity to learn how with a small code base and personal project.
Different people will be convinced (or not) in different ways, so the only honest answer is "it depends".
One way I've seen work several times is to sit with someone after they've been struggling with a chunk of code, and recreate it using TDD. The resulting code is usually smaller and clearer.
I don't practice TDD. Although I see how it is good if you have complex projects in which you have many different test cases to test, I don't see a great benefit in using it in, say, a simple web application.
One way someone could convince me to use TDD would be if we took the same project and did them side by side, see who comes up with better results and who completes the task faster.
Pair with them. You don't have to call it "pair programming" - that's scary to someone who's reluctant to even consider "radical" techniques like TDD - but if the two of you sit at a desk and work together on the same problem, it's easy to demonstrate the value of TDD. That can't be the end of the conversation, but it's one hell of a start. Gives you credibility for the rest of the conversation, and gives you something real as a basis for further discussion.
The "aha" moment for me was reading chapter 2 of "Test-Driven Development in Microsoft.Net" by James Newkirk. (Not that the rest of the book wasn't important...he dedicates several chapters to building a multi-tiered application in TDD).
He builds a simple stack, but you get to see the code "evolve" its complexity instead of starting out complex.
Even then, you will still have trouble convincing naysayers, because it appears that TDD requires a lot more work than traditional programming. Most anti-TDD developers, however, forget to factor in the development time for unit tests at the end, at least in my experience.
The arguments you listed are not rational, logical arguments. They have no reasoning behind them (unless you've actually just summarized much longer real arguments.)
As such, I don't think that you will be able to convince anyone who makes those claims with rational arguments of your own. The best way will be to appeal to the source of their arguments; experience. Either get them to use TDD for a while on a provisional basis to see what they think of it, or else do TDD work yourself that is clearly very good work, and present it as an example to them.
(I'm not a TDD believer. This is a practical way you could convince me that it was a good idea.)
As a professional developer for 10+ years, the best argument I can put forward is that even I found my bugs before I got to a point of actually being able to "run" the application.
I also found that the design of my code was more robust and easier to change, and it gave me greater confidence to refactor.
"Pretty successful" doesn't equal "Really successful".
The other great advantage is that I don't have to write test harnesses anymore as the Unit Test runners effectively become my test harness.
Show them this presentation. It sold me.
http://www.slideshare.net/sebastian_bergmann/testing-with-phpunit-and-selenium-156187
Any programmer who's ever been faced with a really complex task with a lot of edge conditions should be able to see the value of TDD. When it comes to something like making sure a search engine will match certain strings, TDD is the only way you'll be able to stay sane during maintenance -- the only way to be sure you've fixed one case without breaking a few others is with automated testing.
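As an illustration, edge-condition tests for a string matcher might look like this in JUnit (the Matcher class is hypothetical; the point is that every case you fix stays fixed while you work on the next one):

```java
import org.junit.Test;
import static org.junit.Assert.*;

public class MatcherEdgeCaseTest {

    // Each test pins down one edge condition. When you fix case N,
    // the suite tells you immediately whether you broke cases 1..N-1.
    @Test public void emptyQueryMatchesNothing()    { assertFalse(Matcher.matches("", "abc")); }
    @Test public void matchingIsCaseInsensitive()   { assertTrue(Matcher.matches("FOO", "foobar")); }
    @Test public void punctuationIsIgnoredInText()  { assertTrue(Matcher.matches("cant", "can't stop")); }
}

class Matcher {
    static boolean matches(String query, String text) {
        if (query.isEmpty()) return false;
        String normalized = text.toLowerCase().replaceAll("[^a-z0-9 ]", "");
        return normalized.contains(query.toLowerCase());
    }
}
```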
Thorough unit tests reduce the occurrence of bugs, but they also reduce the recurrence of bugs (regressions) and the scope of the damage a regression causes.

How to reduce the time spent on testing?

I just looked back through a project that was nearly finished recently and found a very serious problem: I spent most of the bank's time on testing the code, reproducing the different situations that "may" cause code errors.
Do you have any idea or experience to share on how to reduce the time spent on testing, so that makes the development much more smoothly?
I tried to follow the test-driven concept for all my code, but I found it really hard to achieve; I really need some help from the senior guys here.
Thanks
Re: all
Thanks for the answers here. Initially my question was how to reduce the time spent on general testing, but now the problem comes down to how to write efficient automated test code.
I will try to improve my skills in writing a test suite to cut down this part of the time.
However, I still really struggle with how to reduce the time I spend on reproducing the errors. For instance, in a standard blog project it is easy to reproduce the situations that may cause errors, but a complicated bespoke internal system may "never" be testable throughout so easily. Is it worth it? Do you have any idea how to build a test plan for this kind of project?
Thanks for the further answers still.
Test driven design is not about testing (quality assurance). It has been poorly named from the outset.
It's about having machine runnable assumptions and specifications of program behavior and is done by programmers during programming to ensure that assumptions are explicit.
Since those tasks have to be done at some point in the product lifecycle, it's simply a shift of the work. Whether it's more or less efficient is a debate for another time.
What you refer to I would not call testing. Having strong TDD does mean that the testing phase does not have to be relied upon as heavily for errors, which would be caught long before they reach a test build (as they are by experienced programmers with a good spec and responsive stakeholders in a non-TDD environment).
If you think the upfront tests (the runnable spec) are a serious problem, I guess it comes down to how much the relative stages of development are expected to cost in time and money.
I think I understand. Above the developer-test level, you have the customer test level, and it sounds like, at that level, you are finding a lot of bugs.
For every bug you find, you have to stop, take your testing hat off, put your reproduction hat on, and figure out a precise reproduction strategy. Then you have to document the bug, perhaps put it in a bug-tracking system. Then you have to put the testing hat on. In the mean time, you've lost whatever setup you were working on and lost track of where you were on whatever test plan you were following.
Now - if that didn't have to happen - if you had far fewer bugs - you could zip along right through testing, right?
It's doubtful that GUI-driving test automation will help with this problem. You'll spend a great amount of time recording and maintaining the tests, and those regression tests will take a fair amount of time to return the investment. Initially, you'll go much slower with GUI-driving, customer-facing tests.
So I submit that what might really help is higher /initial/ code quality coming out of development activities. Micro-tests - also called developer tests, or test-driven development in the original sense - might really help with that. Another thing that can help is pair programming.
Assuming you can't grab someone else to pair, I'd spend an hour looking at your bug tracking system. I would look at the past 100 defects and try to categorize them into root causes. "Training issue" is not a cause, but "off by one error" might be.
Once you have them categorized and counted, put them in a spreadsheet and sort. Whatever root cause occurs most often is the root cause you prevent first. If you really want to get fancy, multiply each root cause by some number representing the amount of pain it causes. (Example: If in those 100 bugs you have 30 typos on menus, which are easy to fix, and 10 hard-to-reproduce JavaScript errors, you may want to fix the JavaScript issue first.)
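A sketch of that weighting idea (the categories, counts, and pain scores below are invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RootCausePriority {
    public static void main(String[] args) {
        // root cause -> { occurrences in the last 100 bugs, pain weight }
        Map<String, int[]> causes = new LinkedHashMap<>();
        causes.put("menu typos",                new int[]{30, 1});
        causes.put("off-by-one errors",         new int[]{15, 3});
        causes.put("hard-to-reproduce JS bugs", new int[]{10, 8});

        // score = count * pain; prevent the highest score first
        causes.forEach((cause, v) ->
            System.out.printf("%-26s score = %d%n", cause, v[0] * v[1]));
        // Prints scores 30, 45, and 80: the JS bugs win despite being
        // the least frequent, matching the example in the text above.
    }
}
```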
This assumes you can apply some magical 'fix' to each of those root causes, but it's worth a shot. For example: Transparent icons broken in IE6 may be because IE6 cannot easily process .png files. So have a version-control trigger that rejects .pngs on checkin, and the issue is fixed.
I hope that helps.
The Subversion team has developed some pretty good test routines, by automating the whole process.
I've begun using this process myself, for example by writing tests before implementing the new features. It works very well, and generates consistent testing through the whole programming process.
SQLite also have a decent test system with some very good documentation about how it's done.
In my experience with test-driven development, the time saving comes well after you have written out the tests, or at least after you have written the base test cases. The key thing here is that you actually have to write your automated tests. The way you phrased your question leads me to believe you weren't actually writing automated tests. After you have your tests written, you can easily go back later and update them to cover bugs they didn't previously find (for better regression testing), and you can easily and relatively quickly refactor your code with the peace of mind that it will still work as expected after you have substantially changed it.
You wrote:
"Thanks for the answers above here,
initially my question was how to
reduce the time on general testing,
but now, the problem is down to how to
write the efficient automate test
code."
One method that has been shown in multiple empirical studies to work extremely well for maximizing testing efficiency is combinatorial testing. In this approach, a tester identifies WHAT KINDS of things should be tested (and inputs them into a simple tool), and the tool identifies HOW to test the application. Specifically, the tool generates test cases that specify which combinations of test conditions should be executed in which test script, and the order in which each test script should be executed.
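To give a feel for what such a tool computes, here is a toy greedy all-pairs generator in Python. It is only a sketch of the general idea under my own simplifications; it is not the algorithm of any particular tool, and the parameter values are invented:

    from itertools import combinations, product

    def covered_pairs(combo):
        # Every (position, value, position, value) pair this test case exercises.
        return {(i, combo[i], j, combo[j])
                for i, j in combinations(range(len(combo)), 2)}

    def pairwise_suite(values):
        # values is a list of lists: the possible settings of each parameter.
        # Greedily pick test cases until every pair of settings is covered.
        # Brute force over the full cartesian product, so toy-sized models only.
        uncovered = set()
        for combo in product(*values):
            uncovered |= covered_pairs(combo)
        suite = []
        while uncovered:
            best = max(product(*values),
                       key=lambda c: len(covered_pairs(c) & uncovered))
            uncovered -= covered_pairs(best)
            suite.append(best)
        return suite

    # 3 browsers x 2 OSes x 2 locales = 12 exhaustive combinations,
    # but every pair of settings is usually covered by about 6 test cases.
    print(pairwise_suite([["IE6", "Firefox", "Chrome"],
                          ["Windows", "Linux"],
                          ["en", "de"]]))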
In the August 2009 IEEE Computer article I co-wrote with Dr. Rick Kuhn, Dr. Raghu Kacker, and Dr. Jeff Lei, for example, we highlight a 10-project study I led in which one group of testers used their standard test design methods and a second group of testers, testing the same application, used a combinatorial test case generator to identify test cases for them. The teams using the combinatorial test case generator found, on average, more than twice as many defects per tester hour. That is strong evidence for efficiency. In addition, the combinatorial testers found 13% more defects overall. That is strong evidence for quality/thoroughness.
Those results are not unusual. Additional information about this approach can be found at http://www.combinatorialtesting.com/clear-introductions-1, along with our tool overview. It contains screen shots and an explanation of how the tool makes testing more efficient by identifying a subset of tests that maximizes coverage.
Also, a free version of our Hexawise test case generator can be found at www.hexawise.com/users/new
There is nothing inherently wrong with spending a lot of time testing if you are testing productively. Keep in mind, test-driven development means writing the (mostly automated) tests first (this can legitimately take a long time if you write a thorough test suite). Running the tests shouldn't take much time.
It sounds like your problem is that you are not doing automated testing. Automated unit and integration tests can greatly reduce the amount of time you spend testing.
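For anyone stuck at the "not automated yet" stage, the entry cost really is small. Here is a minimal sketch using Python's standard unittest module; the function under test is invented for illustration:

    import unittest

    def monthly_total(line_items):
        # Stand-in for real application logic.
        return sum(item["amount"] for item in line_items)

    class MonthlyTotalTest(unittest.TestCase):
        def test_sums_amounts(self):
            self.assertEqual(monthly_total([{"amount": 10}, {"amount": 5}]), 15)

        def test_empty_list_is_zero(self):
            self.assertEqual(monthly_total([]), 0)

    if __name__ == "__main__":
        unittest.main()  # run with: python test_totals.py

Once the whole suite runs in one command, rerunning it after every change costs seconds, which is where the time savings come from.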
First, it's good that you recognise that you need help -- now go and find some :)
The idea is to use the tests to help you think about what the code should do; they're part of your design time.
You should also think about the total cost of ownership of the code. What is the cost of a bug making it through to production rather than being fixed first? If you're in a bank, are there serious implications about getting the numbers wrong? Sometimes, the right stuff just takes time.
One of the hardest things about any project of significant size is designing the underlying architecture and the API. All of this is exposed at the level of unit tests. If you write your tests first, that aspect of design happens while you're coding your tests rather than the program logic. This is compounded by the added effort of making code testable. Once you've got your tests, the program logic is usually quite obvious.
That being said, there seem to be some interesting automatic test builders on the horizon.

Disadvantages of Test Driven Development? [closed]

What do I lose by adopting test driven design?
List only negatives; do not list benefits written in a negative form.
If you want to do "real" TDD (read: test first with the red, green, refactor steps) then you also have to start using mocks/stubs, when you want to test integration points.
When you start using mocks, after a while you will want to start using Dependency Injection (DI) and an Inversion of Control (IoC) container. To do that, you need to use interfaces for everything (which have a lot of pitfalls themselves).
At the end of the day, you have to write a lot more code than if you just did it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration, and a few tests.
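To illustrate the artifact count being complained about here, a small Python sketch: one piece of behavior drags in an interface, an injected dependency and a mocked test. All of the names are invented:

    import unittest
    from abc import ABC, abstractmethod
    from unittest import mock

    class CustomerRepository(ABC):
        # The interface, which exists largely so tests can substitute a fake.
        @abstractmethod
        def find(self, customer_id): ...

    class WelcomeMailer:
        def __init__(self, repository):
            # Dependency injection: the collaborator is passed in, not created here.
            self.repository = repository

        def greeting(self, customer_id):
            customer = self.repository.find(customer_id)
            return "Welcome, %s!" % customer["name"]

    class WelcomeMailerTest(unittest.TestCase):
        def test_greets_customer_by_name(self):
            repo = mock.Mock(spec=CustomerRepository)
            repo.find.return_value = {"name": "Ada"}
            self.assertEqual(WelcomeMailer(repo).greeting(42), "Welcome, Ada!")
            repo.find.assert_called_once_with(42)

    if __name__ == "__main__":
        unittest.main()

That is four artifacts where the "plain old way" had one class, which is exactly the trade-off being described.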
And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code.
Many developers don't quite understand how to do all this "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can.
It is much harder than one might think. Projects done with TDD often end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, the wrong way. And nobody agrees on what a good test should look like, not even the so-called gurus.
All those tests make it a lot harder to change (as opposed to refactor) the behavior of your system, and simple changes just become too hard and time consuming.
If you read the TDD literature, there are always some very good examples, but in real-life applications you usually have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge and... you have to write even more code.
So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road.
Several downsides (and I'm not claiming there are no benefits - especially when laying the foundation of a project, it can save a lot of time at the end):
Big time investment. For the simple case you lose about 20% of the actual implementation time, but for complicated cases you lose much more.
Additional complexity. For complex cases your test cases are harder to design. I'd suggest, in cases like that, using automatic reference code that runs in parallel in the debug version / test run instead of unit tests of the simplest cases (see the sketch after this list).
Design impacts. Sometimes the design is not clear at the start and evolves as you go along; this will force you to redo your tests, which can generate a big time loss. I would suggest postponing unit tests in this case until you have some grasp of the design in mind.
Continuous tweaking. For data structures and black-box algorithms unit tests would be perfect, but for algorithms that tend to be changed, tweaked or fine-tuned, this can cause a big time investment that one might claim is not justified. So use it when you think it actually fits the system, and don't force the design to fit TDD.
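For the "automatic reference code" idea in the second point, a minimal sketch of what that can look like: the clever implementation is checked against a naive oracle during debug runs. Both functions are invented stand-ins:

    import random

    def unique_sorted_reference(xs):
        # Naive oracle: obviously correct, too slow or simple for production.
        return sorted(set(xs))

    def fast_unique_sorted(xs):
        # Stand-in for the clever, frequently tweaked implementation.
        out, seen = [], set()
        for x in sorted(xs):
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    def unique_sorted(xs):
        result = fast_unique_sorted(xs)
        if __debug__:  # the check disappears when Python runs with -O
            assert result == unique_sorted_reference(xs), "implementations diverge"
        return result

    for _ in range(1000):
        unique_sorted([random.randint(0, 20) for _ in range(50)])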
When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one.
Also, you might start making design decisions based more on TDD than on actually good design principles. Whereas you may have had a very simple, easy solution that is impossible to test the way TDD demands, you now have a much more complex system that is actually more prone to mistakes.
I think the biggest problem for me is the HUGE loss of time it takes "getting into it". I am still very much at the beginning of my journey with TDD (see my blog for updates on my testing adventures if you are interested) and I have literally spent hours getting started.
It takes a long time to get your brain into "testing mode" and writing "testable code" is a skill in itself.
TBH, I respectfully disagree with Jason Cohen's comments on making private methods public; that's not what it is about. I have made no more methods public in my new way of working than before. It does, however, involve architectural changes that allow you to "hot plug" modules of code to make everything else easier to test. You should not be making the internals of your code more accessible to do this. Otherwise we are back to square one with everything being public, and where is the encapsulation in that?
So, (IMO) in a nutshell:
The amount of time taken to think (i.e. actually grok'ing testing).
The new knowledge required of knowing how to write testable code.
Understanding the architectural changes required to make code testable.
Increasing your skill as a "TDD coder" while trying to improve all the other skills required for our glorious programming craft :)
Organising your code base to include test code without screwing up your production code.
PS: If you would like links to positives, I have asked and answered several questions on it, check out my profile.
In the few years that I've been practicing Test Driven Development, I'd have to say the biggest downsides are:
Selling it to management
TDD is best done in pairs. For one, it's tough to resist the urge to just write the implementation when you KNOW how to write an if/else statement. But a pair will keep you on task because you keep each other on task. Sadly, many companies/managers don't think this is a good use of resources. Why pay for two people to write one feature when I have two features that need to be done at the same time?
Selling it to other developers
Some people just don't have the patience for writing unit tests. Some are very proud of their work. Or, some just like seeing convoluted methods/functions bleed off the end of the screen. TDD isn't for everyone, but I really wish it were. It would make maintaining stuff so much easier for those poor souls who inherit code.
Maintaining the test code along with your production code
Ideally, your tests will only break when you make a bad code decision. That is, you thought the system worked one way, and it turns out it didn't. When a test, or a (small) set of tests, breaks, that is actually good news: you know exactly how your new code will affect the system. However, if your tests are poorly written, tightly coupled or, worse yet, generated (cough VS Test), then maintaining them can become a chore quickly. And once tests start to cause more work than the perceived value they create, they will be the first thing deleted when schedules become compressed (e.g. when it gets to crunch time).
Writing tests so that you cover everything (100% code coverage)
Ideally, again, if you adhere to the methodology, your code will be 100% tested by default. Typically, though, I end up with code coverage upwards of 90%. This usually happens when I have some template-style architecture, the base is tested, and I try to cut corners and not test the template customizations. Also, I have found that when I encounter a new barrier I hadn't previously encountered, I have a learning curve in testing it. I will admit to writing some lines of code the old skool way, but I really like to have that 100%. (I guess I was an over-achiever in school, er, skool).
However, with that said, the benefits of TDD far outweigh the negatives, for the simple reason that if you can achieve a good set of tests that cover your application but aren't so fragile that one change breaks them all, you will be able to keep adding new features on day 300 of your project as you did on day 1. This doesn't happen for all those who try TDD thinking it's a magic bullet for all their bug-ridden code, and who then conclude it can't work, period.
Personally I have found that with TDD, I write simpler code, I spend less time debating whether a particular code solution will work or not, and I have no fear of changing any line of code that doesn't meet the criteria set forth by the team.
TDD is a tough discipline to master, and I've been at it for a few years, and I still learn new testing techniques all the time. It is a huge time investment up front, but, over the long term, your sustainability will be much greater than if you had no automated unit tests. Now, if only my bosses could figure this out.
On your first TDD project there are two big losses: time and personal freedom.
You lose time because:
Creating a comprehensive, refactored, maintainable suite of unit and acceptance tests adds major time to the first iteration of the project. This may be time saved in the long run but equally it can be time you don't have to spare.
You need to choose and become expert in a core set of tools. A unit testing tool needs to be supplemented by some kind of mocking framework and both need to become part of your automated build system. You also want to pick and generate appropriate metrics.
You lose personal freedom because:
TDD is a very disciplined way of writing code that tends to rub raw against those at the top and bottom of the skills scale. Always writing production code in a certain way and subjecting your work to continual peer review may freak out your worst and best developers and even lead to loss of headcount.
Most Agile methods that embed TDD require that you talk to the client continually about what you propose to accomplish (in this story/day/whatever) and what the trade-offs are. Once again, this isn't everyone's cup of tea, on both the developers' and the clients' side of the fence.
Hope this helps
TDD requires you to plan out how your classes will operate before you write code to pass those tests. This is both a plus and a minus.
I find it hard to write tests in a "vacuum", before any code has been written. In my experience I tend to trip over my tests whenever I inevitably think of something while writing my classes that I forgot while writing my initial tests. Then it's time to refactor not only my classes but ALSO my tests. Repeat this three or four times and it can get frustrating.
I prefer to write a draft of my classes first then write (and maintain) a battery of unit tests. After I have a draft, TDD works fine for me. For example, if a bug is reported, I will write a test to exploit that bug and then fix the code so the test passes.
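That bug-report workflow, as a hedged sketch; the rounding bug, the numbers and the scenario are all invented for illustration:

    import unittest

    def price_with_vat(net_cents, rate=0.19):
        # The old code truncated: int(net_cents * (1 + rate)).
        # The test below failed against it; rounding fixes it.
        return round(net_cents * (1 + rate))

    class VatRegressionTest(unittest.TestCase):
        def test_reported_rounding_bug(self):
            # Written first, straight from the bug report: 999 * 1.19 = 1188.81,
            # so the gross price must be 1189 cents, not the truncated 1188.
            self.assertEqual(price_with_vat(999), 1189)

    if __name__ == "__main__":
        unittest.main()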
Prototyping can be very difficult with TDD - when you're not sure what road you're going to take to a solution, writing the tests up-front can be difficult (other than very broad ones). This can be a pain.
Honestly I don't think that for "core development" for the vast majority of projects there's any real downside, though; it's talked down a lot more than it should be, usually by people who believe their code is good enough that they don't need tests (it never is) and people who just plain can't be bothered to write them.
Well, and this is a stretch, you need to debug your tests. Also, there is a certain cost in time for writing the tests, though most people agree that it's an up-front investment that pays off over the lifetime of the application, both in time saved debugging and in stability.
The biggest problem I've personally had with it, though, is getting up the discipline to actually write the tests. In a team, especially an established team, it can be hard to convince them that the time spent is worthwhile.
The downside to TDD is that it is usually tightly associated with "Agile" methodology, which places no importance on documentation of a system; rather, the understanding of why a test "should" return one specific value rather than any other resides only in the developer's head.
As soon as the developer leaves, or forgets the reason the test returns one specific value and not some other, you're screwed. TDD is fine IF it is adequately documented and surrounded by human-readable (i.e. pointy-haired-manager-friendly) documentation that can be referred to in 5 years when the world changes and your app needs to as well.
When I speak of documentation, this isn't a blurb in code, this is official writing that exists external to the application, such as use cases and background information that can be referred to by managers, lawyers and the poor sap who has to update your code in 2011.
I've encountered several situations where TDD makes me crazy. To name a few:
Test case maintainability:
If you're in a big enterprise, chances are that you don't have to write the test cases yourself, or at least most of them were written by someone else before you entered the company. An application's features change from time to time, and if you don't have a system in place to track them, such as HP Quality Center, you'll go crazy in no time.
This also means that it'll take new team members a fair amount of time to grasp what's going on with the test cases. In turn, this can be translated into more money needed.
Test automation complexity:
If you automate some or all of the test cases into machine-runnable test scripts, you will have to make sure these scripts stay in sync with their corresponding manual test cases and in line with the application changes.
Also, you'll spend time debugging the code that helps you catch bugs. In my opinion, most of these bugs come from the testing team's failure to reflect the application changes in the automation test scripts. Changes in business logic, the GUI and other internal stuff can make your scripts stop running or run unreliably. Sometimes the changes are very subtle and difficult to detect. Once, all of my scripts reported failure because they based their calculations on information from table 1, while table 1 was now table 2 (because someone had swapped the names of the table objects in the application code).
If your tests are not very thorough, you might fall into a false sense of "everything works" just because your tests pass. Theoretically, if your tests pass, the code is working; but if we could write code perfectly the first time, we wouldn't need tests. The moral here is to do a sanity check of your own before calling something complete; don't just rely on the tests.
On that note, if your sanity check finds something that is not tested, make sure to go back and write a test for it.
The biggest problem is the people who don't know how to write proper unit tests. They write tests that depend on each other (which work great when run with Ant, but then suddenly fail when run from Eclipse, just because they run in a different order). They write tests that don't test anything in particular; they just debug the code, check the result, and turn it into a test called "test1". They widen the scope of classes and methods just because it makes them easier to unit test. The unit test code is terrible, with all the classic programming problems (heavy coupling, methods that are 500 lines long, hard-coded values, code duplication), and it is hell to maintain. For some strange reason, people treat unit tests as something inferior to the "real" code, and they don't care about their quality at all. :-(
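The order dependence described here almost always comes down to shared state. A minimal Python sketch of the smell and its fix; the registry example is invented:

    import unittest

    registry = []  # shared module state: the root of the order dependence

    class BadTests(unittest.TestCase):
        # Passes only when test_add happens to run before test_count;
        # a runner with a different ordering makes it fail mysteriously.
        def test_add(self):
            registry.append("user")
            self.assertIn("user", registry)

        def test_count(self):
            self.assertEqual(len(registry), 1)  # leans on test_add's leftovers

    class GoodTests(unittest.TestCase):
        def setUp(self):
            # A fresh fixture per test: any execution order gives the same result.
            self.registry = ["user"]

        def test_membership(self):
            self.assertIn("user", self.registry)

        def test_count(self):
            self.assertEqual(len(self.registry), 1)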
You lose the ability to say you are "done" before testing all your code.
You lose the capability to write hundreds or thousands of lines of code before running it.
You lose the opportunity to learn through debugging.
You lose the flexibility to ship code that you aren't sure of.
You lose the freedom to tightly couple your modules.
You lose the option to skip writing low-level design documentation.
You lose the stability that comes with code that everyone is afraid to change.
You lose a lot of time spent writing tests. Of course, this might be saved by the end of the project by catching bugs faster.
Refocusing on difficult, unforeseen requirements is the constant bane of the programmer. Test-driven development forces you to focus on the already-known, mundane requirements, and limits your development to what has already been imagined.
Think about it: you are likely to end up designing to specific test cases, so you won't get creative and start thinking "it would be cool if the user could do X, Y, and Z". Therefore, when that user starts getting all excited about potential cool requirements X, Y, and Z, your design may be too rigidly focused on already-specified test cases, and it will be difficult to adjust.
This, of course, is a double edged sword. If you spend all your time designing for every conceivable, imaginable, X, Y, and Z that a user could ever want, you will inevitably never complete anything. If you do complete something, it will be impossible for anyone (including yourself) to have any idea what you're doing in your code/design.
You will lose large classes with multiple responsibilities.
You will also likely lose large methods with multiple responsibilities.
You may lose some ability to refactor, but you will also lose some of the need to refactor.
Jason Cohen said something like:
TDD requires a certain organization for your code. This might be architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable.
I say this indicates a missed abstraction -- if the private code really needs to be tested, it should probably be in a separate class.
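A small Python sketch of that "missed abstraction" point, with invented names: rather than making the private helper public for the test's sake, promote it to its own class and test it directly.

    # Before: the rule hides in a private method, and the temptation is to
    # make _parse_discount public just so a test can reach it.
    class InvoiceBefore:
        def total(self, lines, code):
            return sum(lines) - self._parse_discount(code)

        def _parse_discount(self, code):
            return int(code.lstrip("D")) if code.startswith("D") else 0

    # After: the rule is a first-class object with its own tests, and the
    # Invoice class keeps its private surface private.
    class DiscountCode:
        def __init__(self, code):
            self.amount = int(code.lstrip("D")) if code.startswith("D") else 0

    class Invoice:
        def total(self, lines, code):
            return sum(lines) - DiscountCode(code).amount

    assert Invoice().total([100, 50], "D10") == 140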
Dave Mann
The biggest downside is that if you really want to do TDD properly, you will have to fail a lot before you succeed. Given how many software companies work (dollars per KLOC), you will eventually get fired. Even if your code is faster, cleaner, easier to maintain, and has fewer bugs.
If you are working in a company that pays you by the KLOCs (or requirements implemented -- even if not tested) stay away from TDD (or code reviews, or pair programming, or Continuous Integration, etc. etc. etc.).
I second the answer about initial development time. You also lose the ability to comfortably work without the safety of tests. I've also been described as a TDD nutbar, so you could lose a few friends ;)
It's perceived as slower. Long term that's not true in terms of the grief it will save you down the road, but you'll end up writing more code, so arguably you're spending time on "testing, not coding". It's a flawed argument, but you did ask!
It can be hard and time-consuming to write tests for "random" data like XML feeds and databases (not that hard, though). I've spent some time lately working with weather data feeds. It's quite confusing writing tests for that, at least while I don't have much experience with TDD.
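One thing that helps with feed data is splitting fetching from parsing, then testing the parser against a canned sample. A sketch; the XML shape here is invented, not any real weather feed:

    import unittest
    import xml.etree.ElementTree as ET

    SAMPLE = """
    <observations>
      <station id="osl"><temp unit="C">-3.5</temp></station>
      <station id="bgo"><temp unit="C">7.0</temp></station>
    </observations>
    """

    def parse_temperatures(xml_text):
        # Pure parsing, no network: trivially testable with fixtures.
        root = ET.fromstring(xml_text)
        return {s.get("id"): float(s.find("temp").text)
                for s in root.findall("station")}

    class FeedParserTest(unittest.TestCase):
        def test_parses_station_temperatures(self):
            self.assertEqual(parse_temperatures(SAMPLE),
                             {"osl": -3.5, "bgo": 7.0})

    if __name__ == "__main__":
        unittest.main()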
You have to write applications in a different way: one which makes them testable. You'd be surprised how difficult this is at first.
Some people find the concept of thinking about what they're going to write before they write it too hard. Concepts such as mocking can be difficult for some, too. TDD in legacy apps can be very difficult if they weren't designed for testing. TDD around frameworks that are not TDD-friendly can also be a struggle.
TDD is a skill so junior devs may struggle at first (mainly because they haven't been taught to work this way).
Overall, though, the cons get solved as people become skilled, and you end up abstracting away the "smelly" code and having a more stable system.
unit tests are more code to write, thus a higher upfront cost of development
it is more code to maintain
additional learning is required
Good answers all. I would add a few ways to avoid the dark side of TDD:
I've written apps to do their own randomized self-test. The problem with writing specific tests is that even if you write lots of them, they only cover the cases you think of. Random-test generators find problems you didn't think of (see the sketch after this list).
The whole concept of lots of unit tests implies that you have components that can get into invalid states, like complex data structures. If you stay away from complex data structures there's a lot less to test.
To the extent your application allows it, be shy of designs that rely on the proper ordering of notifications, events, and side-effects. Those can easily get dropped or scrambled, so they need a lot of testing.
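A sketch of the randomized self-test idea from the first point; here the stdlib heapq stands in for whatever component you would actually exercise, and the one invariant checked is that items drain in sorted order:

    import heapq
    import random

    def random_selftest(rounds=200):
        # Feed random inputs to the component and check the property that
        # must always hold; random generation finds cases you didn't think of.
        for _ in range(rounds):
            items = [random.randint(-100, 100)
                     for _ in range(random.randint(0, 50))]
            heap = []
            for x in items:
                heapq.heappush(heap, x)
            drained = [heapq.heappop(heap) for _ in range(len(heap))]
            assert drained == sorted(items), (items, drained)

    random_selftest()
    print("randomized self-test passed")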
Let me add that if you apply BDD principles to a TDD project, you can alleviate a few of the major drawbacks listed here (confusion, misunderstandings, etc.). If you're not familiar with BDD, you should read Dan North's introduction. He came up with the concept in answer to some of the issues that arose from applying TDD in the workplace. Dan's intro to BDD can be found here.
I only make this suggestion because BDD addresses some of these negatives and acts as a stop-gap. You'll want to consider this when collecting your feedback.
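For a flavor of what BDD changes at the test level: names become behavior sentences, so a failing test reads as a broken requirement. A hedged sketch in plain unittest (Dan North's own examples use JBehave; the account class is invented):

    import unittest

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class AccountBehaviour(unittest.TestCase):
        def test_should_reduce_balance_on_withdrawal(self):
            account = Account(balance=100)          # given
            account.withdraw(30)                    # when
            self.assertEqual(account.balance, 70)   # then

        def test_should_refuse_withdrawal_beyond_balance(self):
            account = Account(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(130)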
It takes some time to get into it and some time to start doing it in a project, but... I always regret not taking a test-driven approach when I find silly bugs that an automated test could have found very fast. In addition, TDD improves code quality.
You have to make sure your tests are always up to date; the moment you start ignoring red lights is the moment the tests become meaningless.
You also have to make sure the tests are comprehensive, or the moment a big bug appears, the stuffy management type you finally convinced to let you spend time writing more code will complain.
The person who taught my team agile development didn't believe in planning; you only wrote as much as the tiniest requirement demanded.
His motto was refactor, refactor, refactor. I came to understand that refactor meant 'not planning ahead'.
Development time increases: every method needs testing, and if you have a large application with dependencies, you need to prepare and clean up your data for tests.
TDD requires a certain organization for your code. This might be inefficient or difficult to read. Or even architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable, which is just wrong.
When code changes, you have to change the tests as well. With refactoring this can be a lot of extra work.