Study about the benefits of unit testing

Good afternoon.
Is there a study about the benefits of automated (unit and integration) tests?
I am trying to convince my colleagues that such tests are useful and will
bring profit to the company, and among my arguments I would
like to cite the results of research.

At one of my previous employers we performed such a study ourselves. It was never published, but the results clearly indicated that unit-testing was beneficial - in the sense that it more than paid for itself when the whole development cycle was taken into account. The people involved in the study, however, were trained and experienced in unit-testing and eager to continuously improve. That means they knew how to apply unit-testing in a sensible way (including dealing with dependencies etc.) and, maybe more importantly, they knew where unit-testing does not make sense. There were also no formal coverage goals, so there was no pressure to apply unit-testing to areas where it did not make sense.
When measuring the cost and the benefit, we tried our best to be neutral and to consider all the effort involved, including time to write and run tests and analyze results, refactoring time for testability, training time, discussion time (asking colleagues how to unit-test some function), maintenance time for tests when the SUT changes, etc. On the benefits side we recorded the number of bugs that could be found earlier because of the unit-testing.
We saw, however, that it has a tremendous impact whether or not unit-testing is done right. That means, you can apply it in ways that will increase your development efforts without bringing the intended defect reduction. For example, in a different organizational unit, a development team was forced to apply unit-testing. However, they did their unit-testing after they had already heavily debugged the code. Not surprisingly, they found that with unit-testing they would not find as many bugs as expected. The importance of "doing it right" is also nicely discussed by Meszaros at http://xunitpatterns.com/Goals%20of%20Test%20Automation.html.
What our study consequently showed was that unit-testing is beneficial if performed by people skilled in and motivated to "do it right", where doing it right includes "non-dogmatically". If it is performed by people who are only doing it because they are forced to, possibly driven by formal coverage goals, you will likely find that unit-testing is not beneficial.

Related

Why do many developers focus on increasing code coverage for unit tests instead of integration or functional tests?

It's impossible to have 100% code coverage for unit tests in real life. But it seems like my colleagues like to write unit test cases and happy-path functional tests only. In the end, tickets are rejected by QA and they have to spend more time on rework again.
It's impossible to say what motivates the specific developers you're working with, but unit tests can be:
Faster
Easier to set up / tear down (because they tend to rely less on external resources)
Easier to write since you have access to internals
Less flaky (because they tend to rely less on external resources)
Having solid regression tests is important, but any test that can be written as a unit test probably should be.
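To make those points concrete, here is a minimal sketch of such a test in JUnit (the PriceCalculator and its behaviour are made up for the example); it needs no database, no network and no setup beyond constructing the object, so it runs in milliseconds:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, shown inline to keep the example self-contained.
class PriceCalculator {
    int totalCents(int unitCents, int quantity, int discountPercent) {
        int gross = unitCents * quantity;
        return gross - (gross * discountPercent) / 100;
    }
}

public class PriceCalculatorTest {
    @Test
    public void appliesDiscountToGrossAmount() {
        // 3 items at 2.00 each with 10% off = 5.40
        assertEquals(540, new PriceCalculator().totalCents(200, 3, 10));
    }

    @Test
    public void zeroDiscountLeavesGrossUnchanged() {
        assertEquals(600, new PriceCalculator().totalCents(200, 3, 0));
    }
}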
There is a risk of being unjust to your colleagues when answering your question, because you do not give a specific code example to discuss. So there is certainly a chance that your colleagues are truly overdoing unit-testing, as you see it, but it is also possible that they are doing it properly.
However, it is also my observation that there are certain people who, generally speaking, try to manage the difficulties of the craft of software development (including testing as well as lots of other activities) by setting up simple rules. Since the habitually prescribed rules give good guidance for a certain fraction of the cases, it requires quite some self-confidence, experience, will and capability to argue against them in the cases where the rules are too simplistic.
Talking about code coverage, the more the better, right? At least many people think like this at first. At this stage they may not even be aware that there are different coverage criteria (statement coverage, branch coverage, ...). They may not even have a clear understanding of what distinguishes unit-testing from interaction-testing (integration-testing) and subsystem-testing (component-testing or functional-testing). Only as you gain more experience and take an interest in extending your knowledge of testing methodology do you learn more and more, and possibly understand at some point which kind of approach is most beneficial where.
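As a made-up aside on why the coverage distinction matters: the single test below executes every statement of apply(), so statement coverage is 100%, yet the false branch of the if is never exercised, so branch coverage is only 50%.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

class Discount {
    static int apply(int cents, boolean isPremium) {
        int result = cents;
        if (isPremium) {
            result = cents - cents / 10;  // 10% off for premium customers
        }
        return result;
    }
}

public class DiscountTest {
    @Test
    public void premiumCustomersGetTenPercentOff() {
        // Runs every statement (100% statement coverage), but the
        // isPremium == false path is never taken (50% branch coverage).
        assertEquals(900, Discount.apply(1000, true));
    }
}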
But, even if you reach that point, you still have to be willing to argue with those who set up the organizational or project rules in the first place: these are often parties that do not have much practical experience with software development themselves, such as customers or quality assurance people. Convincing these people that the rules they have chosen will not fit in one case or another requires teaching them a lot - which takes a certain effort and can be completely useless if they are not interested in extending their knowledge.
But, even if at some point they agree with your view and accept that in your case an exception is OK, they are hesitant to relax or refine the rules: often these parties consider the group of developers a bunch of morons who can only be trusted to write software free of fundamental mistakes if firm rules are set. For example, developers who write only a few unit-tests, or none at all, unless a high test coverage goal is set for everyone. And, unfortunately, their experience proves them right on that.
Maybe those lines above seem a bit pessimistic. What is the solution? A solution is an organisation that takes quality seriously and understands that the essence of software quality (in your case test quality) is fundamentally different from fulfilling simple metric criteria or following simple rules. For tests to have high quality, an organisation has to be willing to invest in testing in the right way, by ensuring all parties understand the respective goals of the different kinds of tests and by having developers and testers argue about test cases, the way test code is written, and so on. "Rule driven software development" needs to be transformed into "software development driven by understanding", but such a transformation is hard.

How much additional time does unit testing take? [closed]

I am looking for a study, if one exists, of the time difference between regular coding and coding plus unit tests (not strict TDD just yet). I know the whole "saves you time in the long run" angle, but from a project planning perspective, for a team that has never done it before, I need to be able to roughly estimate how much additional time to allocate.
Does such a study exist? Can anyone comment from experience?
I know that you're not interested in going for full TDD just yet, but I think the best performance and planning markers are going to be found in Test Driven Development case studies, such as those conducted by both Microsoft and IBM.
Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.
Source.
That is part of the preface to a full comparison of normal development vs development using unit testing as a core principle. The upfront planning addition that we use is 20%, which includes both unit and integration tests. In all honesty, testing is vital to any successful piece of software; unit testing just removes some of the hard yards and manual effort. The ability to run a few hundred unit and integration tests upon finishing a bit of functionality, and to verify the system within a few seconds, is invaluable.
There are a number of different 'magic' numbers that people add to their estimates to incorporate the additional time to write unit tests, but it really just comes down to a simple trade-off. Essentially you're balancing the increase in development time against the increase in bug fixing/error checking time, also taking into account the likelihood of downtime and the criticality of the system (whether it is a primary revenue system or a non-essential one).
If you’re interested in doing a little more reading, here is the complete Microsoft study. It's pretty short but gives some interesting findings. And if you’re really keen, here is a unit testing slideshow that outlines the concepts and benefits in reasonable detail (previously this was a link to the source content of this presentation, but sadly that content is now gone).
I can't comment on studies for this topic.
From experience I'd say your magic number range is 30-40%, since your team is new to this. Your team will need to learn how to create mocks and fakes and get used to writing tests, in addition to setting up infrastructure; there are lots of up-front costs until your team gets up to speed. If your main language is C++, then it takes more effort to write mocks and fakes than with C# (from my experience). If your project is all brand new code, then it will take less effort than working with existing code. If you can get your team quickly up to speed on testing, then TDD will prove less effort than writing tests after the fact. With enough experience, the time for tests is probably around 20%, yet another magic number. Pardon my lack of exact numbers; I don't have precise metrics from my experience.
In all states of affairs, for all teams, this should hold:
TimeOf(Coding+Testing) < TimeOf(CodingWithoutTesting)
It shouldn't take additional time at all, or it becomes useless.
As someone who is currently working on his first project using unit tests (not full-blown TDD, but close), I would say that you should double the time it would usually take for your team to do their initial implementation.
Roy Osherove's book, The Art of Unit Testing, has an informal study that shows this, but also shows that when QA cycles are included, the overall release time was slightly lower when using unit tests, and there were far fewer defects in the code developed with unit tests.
I would make these suggestions:
Have your programmers read everything they can get on unit tests and TDD, especially anything they can find on how to design code that is test friendly (use of dependency injection, interfaces, etc.). Osherove's book would be a great start.
Start evaluating unit testing frameworks (NUnit, xUnit, etc.) and mocking frameworks (Rhino Mocks, Moq, etc.), and have them choose ones to standardize on. The most popular are probably NUnit and Rhino Mocks, but I chose xUnit and Moq. (A minimal sketch of what such a test looks like follows after this list.)
Don't let your programmers give up. It can be tough to change your mindset and overcome the natural resistance to change, but they need to work through that. Initially it may feel like unit tests just get in the way, but the first time they refactor sections of code on the fly and use unit tests to know they didn't break anything (as opposed to hoping they didn't), it will be a revelation.
Finally, if possible, don't start unit tests on a large, high-pressure project with a tight deadline; this would likely lead to them not overcoming their initial difficulties and ditching unit tests in favor of just getting the code done.
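Following up on the framework suggestion above: the shape of such a test is much the same whichever stack you standardize on. As a hedged illustration on the Java side of the fence (this thread's other questions mention JUnit), here is JUnit paired with Mockito; the OrderRepository and OrderService names are invented for the example.

import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class OrderServiceTest {

    // Collaborator that would normally talk to a database.
    interface OrderRepository {
        boolean save(String order);
    }

    // Class under test, depending only on the interface.
    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        boolean place(String order) { return repository.save(order); }
    }

    @Test
    public void placingAnOrderStoresIt() {
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.save("two lattes")).thenReturn(true);

        assertTrue(new OrderService(repository).place("two lattes"));
        verify(repository).save("two lattes");
    }
}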
Hope this helps!
I need to be able to roughly estimate how much additional time to allocate.
That's silly. There's no additional time.
Here's what you do. Take your existing testing budget. The 40% of development effort at the end of the project.
Spread most of this testing effort through the life of the project as unit testing. Call it 30% of the total effort, allocated everywhere.
Leave some of the testing effort at the end for "integration" and "performance" testing. Call it 10% of the total effort, allocated just at the end and just for integration and performance testing. Or User Acceptance Testing or whatever is left over that you didn't unit test.
There's no "additional". You have to do the testing anyway. You can either do it as first-class software, during the project, or you can scramble around at the end of the project doing a bad job.
The "cost" of TDD is -- at best -- a subjective impression. Read the excellent study summary carefully. "Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD". [Emphasis added.]
Actually there is zero cost to TDD. TDD simply saves time.
It's much less expensive to do TDD than it is to create software other ways.
Why?
You have to test anyway. Since you have to test, you may as well plan for testing by driving all of your development around the testing.
When you have a broken test case, you have a focused set of tasks. When you are interrupted by phone calls, meetings, product demos, operational calls to support previous releases, it's easy to get distracted. When you have tests that fail, you get focus back immediately.
You have to test anyway. It's either Code then Test or it's Test then Code. You can't escape the cost of testing. Since you can't escape, do it first.
The idea that TDD has "additional" or "incremental" cost is crazy. Even a well-documented study like the one linked above can't -- actually -- compare the same project done two ways. The idea of "additional development time" cannot actually be measured. That's why it's a "subjective impression". And they're simply wrong about it.
When you ask programmers -- who are new to TDD -- if TDD slowed them down, they lie about it.
Yes. They Lie.
One. You made them change. Change is bad. Everyone knows that. They can't say it was easier, because it was "different".
Two. It calls management into question. We can't say bad things about our managers, that's bad. Everyone knows that. They can't say it was easier because previous things our managers demanded are now obviously wrong.
Three. Most programmers (and managers) think that there's a distinction between "real" code and "test" code. TDD takes longer to get to the "real" code because you spend your up-front time doing "test" code.
This "real" code vs. "test" code is a false distinction. Anyone who says this doesn't get how important testing is. Since testing is central to demonstrating that an application works, test code should be first-class. Making this distinction is wrong. Test code is real code.
The time spent writing test code is -- effectively -- time taken away from design of the real code. Rather than create "paper" designs, you are creating a working, living design in the form of test cases.
TDD saves time.
Folks who say otherwise are resisting change and/or trying to protect management from appearing to be wrong and/or making a false distinction between real code and test code. All things that are simply wrong.
I wouldn't rely on a study to tell you that, you can just work it out yourself.
Normally a developer would break down major functionality into smaller tasks, and then they would estimate the time each task will take. As part of this task breakdown they can also evaluate which tasks (or pieces of functionality) are unit testable, and at that point they can also log tasks and estimates for writing those particular tests. The estimates for the unit tests should have the same margin of error applied as the code they are testing - the more complex the tested code, the more complex the test (and therefore the more likely the estimate to write the test will go over schedule).
Of course none of this is a perfect science; if it were, there would be no need for project managers :) If you are just looking for high-level estimates, then any figure you give for the unit testing portion is going to be as imprecise as the figure you give for the actual code. You won't have any certainty or accuracy until you break it down into tasks and then apply relevant scales to it (like factoring in a developer's true coding velocity, which might only be 60% - 70%).

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.
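To make the static code analysis point concrete, this is the kind of finding that typically gets ignored; the code is purely illustrative. Most analyzers (and many IDEs) flag the possible null dereference in the first method; the second shows the one-line fix the warning is asking for.

import java.util.Map;

class ConfigReader {
    // Flagged: Map.get() may return null, so value.trim() can throw a NullPointerException.
    String readTimeout(Map<String, String> settings) {
        String value = settings.get("timeout");
        return value.trim();
    }

    // The fix the warning is asking for: handle the missing key explicitly.
    String readTimeoutSafely(Map<String, String> settings) {
        String value = settings.get("timeout");
        return value == null ? "30" : value.trim();
    }
}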
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided wherever possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or codebases of different sizes than the one he's working on,
get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure"
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want, and it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control and software engineering as a whole is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code that's for sure, but it's not everyone who can be a programmer.
EDIT:
I've come across this paper (PDF) which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects like the software engineering schools want to make you think. He just adds another parameter to control which is "Do I want to control this project? Will it be needed?"
Laziness / Considered boring
Management feeling it's unnecessary - an ignorant "just do it right" attitude
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests and the programmers could simply implement something new instead of looking after the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand to know why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, these tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams are a support function for the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs to have an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning how to test what. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learn techniques and patterns that increase testability. A small sketch of the file-access case appears below.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
So until you find the right way: yeah, it's dull, non-rewarding, hard to do, time-consuming, etc.
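For the "accessing files" item above, here is a sketch of where that learning tends to end up (all names are hypothetical): the code under test depends on a tiny abstraction instead of the real file system, so the test can hand it the file contents directly.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class BannerCheckerTest {

    // The abstraction the production code depends on instead of java.io.File.
    interface FileSource {
        String read(String path);
    }

    // Class under test.
    static class BannerChecker {
        private final FileSource files;
        BannerChecker(FileSource files) { this.files = files; }
        boolean hasLicenseHeader(String path) {
            return files.read(path).startsWith("/* Copyright");
        }
    }

    @Test
    public void detectsLicenseHeader() {
        // A fake file system: just returns canned content.
        FileSource fakeFiles = path -> "/* Copyright 2010 */\nclass A {}";
        assertTrue(new BannerChecker(fakeFiles).hasLicenseHeader("A.java"));
    }
}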
EDIT:
In this blogpost I go in depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and testing techniques and tools. Technical conference such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the observation that developers do not (in general) do good, or enough, tests. I have a reasonably simple explanation for that.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small, or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. And even if you set up mock data, can you be certain the database would behave the same way?
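One common compromise for the database case is sketched below, under the assumption that an in-memory engine such as H2 is available on the test classpath: the test creates and seeds its own schema, so there is no shared mock-data fixture to maintain. As noted above, though, an in-memory engine will not necessarily behave exactly like the production database.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CustomerQueryTest {
    @Test
    public void countsActiveCustomers() throws Exception {
        // Private in-memory database, created fresh for this test run.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement sql = db.createStatement()) {
            sql.execute("CREATE TABLE customer(id INT, active BOOLEAN)");
            sql.execute("INSERT INTO customer VALUES (1, TRUE), (2, FALSE), (3, TRUE)");

            try (ResultSet rs = sql.executeQuery(
                    "SELECT COUNT(*) FROM customer WHERE active = TRUE")) {
                rs.next();
                assertEquals(2, rs.getInt(1));
            }
        }
    }
}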
Unit testing is also weak at finding problems that haven't been considered. That is, unit testing is bad at simulating the unexpected: if you haven't considered what could happen in a power outage, or when the network link sends bad data that is still CRC-correct, writing tests for it is futile.
I am all in favour of code inspections, as they let programmers share experience and code style with other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third party consequences for poor code quality (unless you're making space shuttle or train switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit Testing takes extra work. If a programmer sees that his product "works" even without unit testing, why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and felt no need for unit tests. So we have a few tests that remain as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is the lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically ordered to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It takes time and probably very deep knowledge of most of the application's modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project is needed, but two or three is more realistic. So new colleagues do not manage to write proper tests. And there is no point in creating bad tests.
What is really 'dull' is chasing some random, extremely important 'feature' for more than a day through a mysterious code jungle written by someone else x years ago, with no clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when the task was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing to make me more productive, but I've seen lots of badly formatted, unreadable poorly designed code which passed all its tests ( generally long-in-the-tooth code which had been patched many times ). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but, for example, PMD says that I have too many methods in my class. So I should cut the class into abstract class/classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting, and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
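For illustration, this is the kind of cast such a tool complains about, and the instanceof-guarded version that makes the warning go away (the classes are made up):

class Circle {
    double radius;
}

class ShapeArea {
    // Analyzer warning: the cast to Circle is unconfirmed and may fail at runtime.
    static double area(Object shape) {
        Circle c = (Circle) shape;
        return Math.PI * c.radius * c.radius;
    }

    // Guarding the cast with instanceof is what the tool wants to see.
    static double areaChecked(Object shape) {
        if (shape instanceof Circle) {
            Circle c = (Circle) shape;
            return Math.PI * c.radius * c.radius;
        }
        return 0.0;
    }
}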
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on finding when code is too complicated and not care so much about the little things (you can refactor those away with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything. Because when you change anything you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit-testing has its place; for example, I did a lot of unit-testing in my webpage parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit-testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit-testing; however, I am the only person who writes tests (I want to get better at it, and that's the only reason I do it). I understand that for the others, writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but the feature might still not work on the web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to a web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it, and test data for 5 joins is a lot of work with no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

How does Test Driven Design help a one man software project?

I've spent a lot of time building out tests for my latest project, and I'm really not sure what the ROI was on the time spent.
I'm a one man operation, and I'm building web applications. I don't necessarily have to "prove" that my software works to anyone (except my users), and I'm worried that I spent a good deal of time needlessly rebugging test code in the past months.
My question is, while I like the idea of TDD for small to large software teams, how does it help a one man team build high quality code quickly?
Thanks
Ran across this today on the blog of Joel Spolsky, one of the founders of Stack Overflow:
http://www.joelonsoftware.com/items/2009/09/23.html
"Zawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical. And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”"
As I'm getting older I think I'm realizing more and more that it's just all about speed and functionality. I'd love to build unit tests. But since we only have so much time at our disposal, I'd rather build it faster and rely on beta testing and good automated error reporting to weed out any problems as they crop up. If the project eventually gets big enough that this bites me in the a**, it will be generating enough revenue that I can justify a rebuild.
I think that in a situation like yours it helps greatly when you have to change/refactor/optimize something on which a lot of code depends... By using unit-testing you can quickly ensure that everything that worked before the change still works afterwards :) In other words, it gives you confidence.
TDD doesn't really have anything to do with team size. It has to do with creating the smallest amount of software needed with the right interface that works correctly.
The TDD process requires you to write only just enough code to satisfy a test, so you don't end up creating code you don't need.
Using TDD to design a class makes you think as a client of the class, so you end up creating a better interface more often than if you developed it without TDD.
TDD, by its nature, will achieve 100% code coverage, proving your code works. A side effect of this is that you and others can now change your class more safely because it has a full suite of automated tests.
I should add that its iterative nature creates a positive feedback loop as well, so as you iterate you gain more and more confidence in your code.
TDD is not only about testing, it is also about designing your classes / API.
I mean: by writing a test first, you are forced to think about how you want to use your class. So you first think about the interface of your class and how you want to use it, and hence your object model becomes more usable and readable.
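A small sketch of that idea (all names invented): the test below is written before ShoppingCart exists, so the class ends up with exactly the interface its caller wanted; the implementation is then written only to make the test pass.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {
    @Test
    public void totalReflectsAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 1200);   // prices in cents
        cart.add("pen", 150);
        assertEquals(1350, cart.totalInCents());
    }
}

// Written second, just enough to satisfy the test above.
class ShoppingCart {
    private int total;
    void add(String name, int priceInCents) { total += priceInCents; }
    int totalInCents() { return total; }
}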
rebugging is always needless - just don't delete the bugs in the first place...
For a real answer, you can't do better than 'it depends'. If:
you don't tend to have the kind of problems that automated unit testing can find (as opposed to performance, visual or aesthetic ones)
you have some other way of designing the code (e.g. UML)
you don't tend to have cause to change things while keeping them working
It could well be the case that TDD doesn't really work out for you.
Or maybe you are doing it wrong, and if you did it differently it would work better.
Or, just maybe, it is actually working but you don't realise it. One thing about working solo is that self-assessment is difficult.
In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager—indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are "above average" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments.
In theory team size should not matter. TDD is supposed to pay off because:
You test the code so you get the bugs out
When you maintain or refactor the code you know you didn't break it because you easily can test it
You produce better code because you focus on edge cases as you write the code
You produce better code because you can refactor the code with confidence
And generally I do find that the approach is valuable. I must admit to being in two minds about the ongoing maintenance of some tests.
Even before the term TDD became popular, I always wrote a little main function to test whatever piece I was working on. I'd throw it away right afterwards. I have trouble understanding the mentality of programmers who can write code that has never been executed and plug it in. When you find a "bug" after doing that a few times, it can take days or even weeks to track down.
Unit testing is a slightly better way to go because your tests hang around and you can see the intent.
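The difference between the two habits, sketched with a made-up Parser: the throwaway main() checks the result by eye and is then deleted, while the unit test keeps the same check around and states the intent.

import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;

public class ParserTest {

    // Stand-in for whatever piece is being worked on.
    static class Parser {
        static String[] parse(String csv) { return csv.split(","); }
    }

    // The throwaway habit: run it, eyeball the output, delete it.
    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(Parser.parse("1,2,3")));
    }

    // The kept habit: the same check, but it hangs around and documents intent.
    @Test
    public void splitsCommaSeparatedValues() {
        assertArrayEquals(new String[] {"1", "2", "3"}, Parser.parse("1,2,3"));
    }
}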
Unit testing will find the bug much faster than testing from a UI after integration--it'll just save you time.
Now, saving all your unit tests and creating a suite can be of less value if you are working solo, especially if you like to refactor a lot (you can easily spend more time refactoring tests than code), but the tests are still worthwhile to create.
Plus, test-driven development is kinda fun to a degree.
I've found that people concentrate too much on the TEST in TDD. There are many who don't believe that this is what the originators had in mind.
I've found that BDD is quite useful, no matter what the problem size. It concentrates on capturing how the system is supposed to behave.
Some people take it all the way to the creation of automated unit tests. I use them as both specifications and test cases. Because they're in English, it is easy for the business to understand them, as well as the QA department.
In general, it is a formalized way of recording specs, so that code can be written. Isn't that what the ultimate goal is?
Here are a few links
What's in a Story
Introducing BDD
Designing Klingon Warships Using Behaviour Driven Development

Disadvantages of Test Driven Development? [closed]

What do I lose by adopting test driven design?
List only negatives; do not list benefits written in a negative form.
If you want to do "real" TDD (read: test first with the red, green, refactor steps) then you also have to start using mocks/stubs, when you want to test integration points.
When you start using mocks, after a while you will want to start using Dependency Injection (DI) and an Inversion of Control (IoC) container. To do that you need to use interfaces for everything (which have a lot of pitfalls themselves).
At the end of the day, you have to write a lot more code, than if you just do it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration and a few tests.
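A hedged sketch of that "extra code", with invented names: where the plain version might have been a single class, the test-first version grows an interface and a hand-written stub (the IoC wiring is left out).

// The extra interface, introduced so the dependency can be replaced in tests.
interface CustomerGateway {
    boolean isCreditworthy(int customerId);
}

// The class you actually wanted to write.
class OrderService {
    private final CustomerGateway customers;
    OrderService(CustomerGateway customers) { this.customers = customers; }
    boolean accept(int customerId) { return customers.isCreditworthy(customerId); }
}

// The extra test double (a mocking framework could generate this instead).
class StubCustomerGateway implements CustomerGateway {
    public boolean isCreditworthy(int customerId) { return true; }
}

// A test would then exercise: new OrderService(new StubCustomerGateway()).accept(42)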
And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code.
Many developers don't quite understand how to do all these "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can.
It is much harder than one might think. Often projects done with TDD end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, the wrong way. And nobody agrees on what a good test should look like, not even the so-called gurus.
All those tests make it a lot harder to "change" (as opposed to refactor) the behavior of your system, and simple changes just become too hard and time-consuming.
If you read the TDD literature, there are always some very good examples, but often in real life applications, you must have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge, and... you have to write even more code.
So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road.
Several downsides (and I'm not claiming there are no benefits - especially when writing the foundation of a project - it'd save a lot of time at the end):
Big time investment. For the simple case you lose about 20% of the actual implementation, but for complicated cases you lose much more.
Additional Complexity. For complex cases your test cases are harder to calculate; in cases like that I'd suggest using automatic reference code that runs in parallel in the debug version / test run, instead of unit tests of the simplest cases.
Design Impacts. Sometimes the design is not clear at the start and evolves as you go along - this will force you to redo your tests, which will generate a big loss of time. I would suggest postponing unit tests in this case until you have some grasp of the design in mind.
Continuous Tweaking. For data structures and black box algorithms unit tests would be perfect, but for algorithms that tend to be changed, tweaked or fine tuned, this can cause a big time investment that one might claim is not justified. So use it when you think it actually fits the system and don't force the design to fit to TDD.
When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one.
Also, you might start making design decisions based more on TDD than on actually good design principles. Whereas you may have had a very simple, easy solution that is impossible to test the way TDD demands, you now have a much more complex system that is actually more prone to mistakes.
I think the biggest problem for me is the HUGE loss of time it takes "getting in to it". I am still very much at the beginning of my journey with TDD (see my blog for updates on my testing adventures if you are interested) and I have literally spent hours getting started.
It takes a long time to get your brain into "testing mode" and writing "testable code" is a skill in itself.
TBH, I respectfully disagree with Jason Cohen's comments on making private methods public; that's not what it is about. I have made no more public methods in my new way of working than before. It does, however, involve architectural changes and allows you to "hot plug" modules of code to make everything else easier to test. You should not be making the internals of your code more accessible to do this. Otherwise we are back to square one with everything being public, and where is the encapsulation in that?
So, (IMO) in a nutshell:
The amount of time taken to think (i.e. actually grok'ing testing).
The new knowledge required to write testable code.
Understanding the architectural changes required to make code testable.
Increasing your skill of "TDD-Coder" while trying to improve all the other skills required for our glorious programming craft :)
Organising your code base to include test code without screwing up your production code.
PS: If you would like links to positives, I have asked and answered several questions on it, check out my profile.
In the few years that I've been practicing Test Driven Development, I'd have to say the biggest downsides are:
Selling it to management
TDD is best done in pairs. For one, it's tough to resist the urge to just write the implementation when you KNOW how to write an if/else statement. But a pair will keep you on task because you keep each other on task. Sadly, many companies/managers don't think that this is a good use of resources. Why pay for two people to write one feature, when I have two features that need to be done at the same time?
Selling it to other developers
Some people just don't have the patience for writing unit tests. Some are very proud of their work. Or, some just like seeing convoluted methods/functions bleed off the end of the screen. TDD isn't for everyone, but I really wish it were. It would make maintaining stuff so much easier for those poor souls who inherit code.
Maintaining the test code along with your production code
Ideally, your tests will only break when you make a bad code decision. That is, you thought the system worked one way, and it turns out it didn't. Breaking a test, or a (small) set of tests, is actually good news: you know exactly how your new code will affect the system. However, if your tests are poorly written, tightly coupled or, worse yet, generated (cough VS Test), then maintaining your tests can quickly become a chore. And once enough tests cause more work than the perceived value they create, the tests will be the first thing to be deleted when schedules become compressed (e.g. when it gets to crunch time).
Writing tests so that you cover everything (100% code coverage)
Ideally, again, if you adhere to the methodology, your code will be 100% tested by default. Typically, though, I end up with code coverage upwards of 90%. This usually happens when I have some template-style architecture, and the base is tested, and I try to cut corners and not test the template customizations. Also, I have found that when I encounter a new barrier I hadn't previously encountered, I have a learning curve in testing it. I will admit to writing some lines of code the old skool way, but I really like to have that 100%. (I guess I was an over-achiever in school, er skool.)
However, with that said, I'd say that the benefits of TDD far outweigh the negatives, for the simple reason that if you can achieve a good set of tests that cover your application but aren't so fragile that one change breaks them all, you will be able to keep adding new features on day 300 of your project as easily as you did on day 1. This doesn't happen for those who try TDD thinking it's a magic bullet for all their bug-ridden code; when it isn't, they conclude it can't work, period.
Personally I have found that with TDD, I write simpler code, I spend less time debating if a particular code solution will work or not, and that I have no fear to change any line of code that doesn't meet the criteria set forth by the team.
TDD is a tough discipline to master, and I've been at it for a few years, and I still learn new testing techniques all the time. It is a huge time investment up front, but, over the long term, your sustainability will be much greater than if you had no automated unit tests. Now, if only my bosses could figure this out.
On your first TDD project there are two big losses: time and personal freedom.
You lose time because:
Creating a comprehensive, refactored, maintainable suite of unit and acceptance tests adds major time to the first iteration of the project. This may be time saved in the long run but equally it can be time you don't have to spare.
You need to choose and become expert in a core set of tools. A unit testing tool needs to be supplemented by some kind of mocking framework and both need to become part of your automated build system. You also want to pick and generate appropriate metrics.
You lose personal freedom because:
TDD is a very disciplined way of writing code that tends to rub raw against those at the top and bottom of the skills scale. Always writing production code in a certain way and subjecting your work to continual peer review may freak out your worst and best developers and even lead to loss of headcount.
Most Agile methods that embed TDD require that you talk to the client continually about what you propose to accomplish (in this story/day/whatever) and what the trade-offs are. Once again, this isn't everyone's cup of tea, on both the developers' side of the fence and the clients'.
Hope this helps
TDD requires you to plan out how your classes will operate before you write code to pass those tests. This is both a plus and a minus.
I find it hard to write tests in a "vacuum" - before any code has been written. In my experience I tend to trip over my tests whenever I inevitably think of something while writing my classes that I forgot while writing my initial tests. Then it's time to not only refactor my classes, but ALSO my tests. Repeat this three or four times and it can get frustrating.
I prefer to write a draft of my classes first then write (and maintain) a battery of unit tests. After I have a draft, TDD works fine for me. For example, if a bug is reported, I will write a test to exploit that bug and then fix the code so the test passes.
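For illustration, here is a tiny made-up example of that workflow (JUnit 5 assumed; PriceFormatter and the bug itself are hypothetical): the test is written to reproduce the reported bug, then the fix makes it pass and it stays around as a regression guard.

    import java.util.Locale;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical fix: suppose the bug report said 1000 was rendered as "1000.0"
    // instead of "1,000.00".
    class PriceFormatter {
        static String format(double amount) {
            return String.format(Locale.US, "%,.2f", amount);   // the corrected implementation
        }
    }

    class PriceFormatterRegressionTest {
        @Test
        void reportedBug_thousandsSeparatorAndTwoDecimals() {
            // Written to fail against the buggy version first; now it pins the behaviour
            // down so the bug cannot quietly return.
            assertEquals("1,000.00", PriceFormatter.format(1000.0));
        }
    }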
Prototyping can be very difficult with TDD - when you're not sure what road you're going to take to a solution, writing the tests up-front can be difficult (other than very broad ones). This can be a pain.
Honestly I don't think that for "core development" for the vast majority of projects there's any real downside, though; it's talked down a lot more than it should be, usually by people who believe their code is good enough that they don't need tests (it never is) and people who just plain can't be bothered to write them.
Well, and this is stretching it a bit, you also need to debug your tests. There is also a certain cost in time for writing the tests, though most people agree that it's an up-front investment that pays off over the lifetime of the application, both in time saved debugging and in stability.
The biggest problem I've personally had with it, though, is getting up the discipline to actually write the tests. In a team, especially an established team, it can be hard to convince them that the time spent is worthwhile.
The downside to TDD is that it is usually tightly associated with 'Agile' methodology, which places no importance on documentation of a system; instead, the understanding of why a test 'should' return one specific value rather than any other resides only in the developer's head.
As soon as the developer leaves or forgets the reason that the test returns one specific value and not some other, you're screwed. TDD is fine IF it is adequately documented and surrounded by human-readable (ie. pointy-haired manager) documentation that can be referred to in 5 years when the world changes and your app needs to as well.
When I speak of documentation, this isn't a blurb in code, this is official writing that exists external to the application, such as use cases and background information that can be referred to by managers, lawyers and the poor sap who has to update your code in 2011.
I've encountered several situations where TDD makes me crazy. To name some:
Test case maintainability:
If you're in a big enterprise, chances are that you don't have to write the test cases yourself, or at least most of them were written by someone else before you joined the company. An application's features change from time to time, and if you don't have a system in place to track them, such as HP Quality Center, you'll turn crazy in no time.
This also means that it'll take new team members a fair amount of time to grasp what's going on with the test cases. In turn, this translates into more money needed.
Test automation complexity:
If you automate some or all of the test cases into machine-runnable test scripts, you will have to make sure these test scripts are in sync with their corresponding manual test cases and in line with the application changes.
Also, you'll spend time debugging the code that is supposed to help you catch bugs. In my opinion, most of these bugs come from the testing team's failure to reflect the application changes in the automation test scripts. Changes in business logic, GUI and other internal stuff can make your scripts stop running or run unreliably. Sometimes the changes are very subtle and difficult to detect. Once, all of my scripts reported failure because they based their calculations on information from table 1 while table 1 was now table 2 (because someone swapped the names of the table objects in the application code).
If your tests are not very thorough, you might fall into a false sense of "everything works" just because your tests pass. Theoretically, if your tests pass, the code is working; but if we could write code perfectly the first time we wouldn't need tests. The moral here is to make sure to do a sanity check on your own before calling something complete; don't just rely on the tests.
On that note, if your sanity check finds something that is not tested, make sure to go back and write a test for it.
The biggest problem is the people who don't know how to write proper unit tests. They write tests that depend on each other (which work great running with Ant, but then all of a sudden fail when I run them from Eclipse, just because they run in a different order). They write tests that don't test anything in particular - they just debug the code, check the result, and turn it into a test, calling it "test1". They widen the scope of classes and methods just because it makes them easier to unit-test. The code of the unit tests is terrible, with all the classical programming problems (heavy coupling, methods that are 500 lines long, hard-coded values, code duplication), and it is hell to maintain. For some strange reason people treat unit tests as something inferior to the "real" code, and they don't care about their quality at all. :-(
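By contrast, here is a minimal sketch of an order-independent test (JUnit 5 assumed, names made up): each test builds its own fixture, so it passes the same way under Ant, Maven or Eclipse regardless of execution order.

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Each test gets a fresh fixture in @BeforeEach, so no state leaks between tests
    // and the run order chosen by the build tool or IDE does not matter.
    class OrderIndependentStackTest {
        private Deque<String> stack;

        @BeforeEach
        void freshStackForEveryTest() {
            stack = new ArrayDeque<>();
        }

        @Test
        void pushThenPopReturnsTheSameElement() {
            stack.push("a");
            assertEquals("a", stack.pop());
        }

        @Test
        void sizeReflectsNumberOfPushes() {
            stack.push("a");
            stack.push("b");
            assertEquals(2, stack.size());
        }
    }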
You lose the ability to say you are "done" before testing all your code.
You lose the capability to write hundreds or thousands of lines of code before running it.
You lose the opportunity to learn through debugging.
You lose the flexibility to ship code that you aren't sure of.
You lose the freedom to tightly couple your modules.
You lose the option to skip writing low-level design documentation.
You lose the stability that comes with code that everyone is afraid to change.
You lose a lot of time spent writing tests. Of course, this might be saved by the end of the project by catching bugs faster.
Refocusing on difficult, unforeseen requirements is the constant bane of the programmer. Test-driven development forces you to focus on the already-known, mundane requirements, and limits your development to what has already been imagined.
Think about it, you are likely to end up designing to specific test cases, so you won't get creative and start thinking "it would be cool if the user could do X, Y, and Z". Therefore, when that user starts getting all excited about potential cool requirements X, Y, and Z, your design may be too rigidly focused on already specified test cases, and it will be difficult to adjust.
This, of course, is a double-edged sword. If you spend all your time designing for every conceivable, imaginable X, Y, and Z that a user could ever want, you will inevitably never complete anything. If you do complete something, it will be impossible for anyone (including yourself) to have any idea what you're doing in your code/design.
You will lose large classes with multiple responsibilities.
You will also likely lose large methods with multiple responsibilities.
You may lose some ability to refactor, but you will also lose some of the need to refactor.
Jason Cohen said something like:
TDD requires a certain organization for your code. This might be architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable.
I say this indicates a missed abstraction - if the private code really needs to be tested, it should probably be in a separate class.
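A small made-up sketch of what that extraction can look like (JUnit 5 assumed): the private parsing helper becomes its own class, the original class keeps its narrow public surface, and the parser gets tested directly.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Instead of making a private parsing helper public just to test it,
    // the logic moves into its own small class.
    class CsvLineParser {
        String[] parse(String line) {
            return line.trim().split("\\s*,\\s*");
        }
    }

    // The original class keeps its narrow public surface and simply uses the new class.
    class ImportService {
        private final CsvLineParser parser = new CsvLineParser();

        int countColumns(String line) {
            return parser.parse(line).length;
        }
    }

    class CsvLineParserTest {
        @Test
        void splitsOnCommasAndTrimsWhitespace() {
            String[] parts = new CsvLineParser().parse(" a , b ,c");
            assertEquals(3, parts.length);
            assertEquals("b", parts[1]);
        }
    }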
Dave Mann
The biggest downside is that if you really want to do TDD properly you will have to fail a lot before you succeed. Given how many software companies work (dollars per KLOC), you will eventually get fired. Even if your code is faster, cleaner, easier to maintain, and has fewer bugs.
If you are working in a company that pays you by the KLOCs (or requirements implemented -- even if not tested) stay away from TDD (or code reviews, or pair programming, or Continuous Integration, etc. etc. etc.).
I second the answer about initial development time. You also lose the ability to comfortably work without the safety of tests. I've also been described as a TDD nutbar, so you could lose a few friends ;)
It's perceived as slower. Long term that's not true in terms of the grief it will save you down the road, but you'll end up writing more code, so arguably you're spending time on "testing, not coding". It's a flawed argument, but you did ask!
It can be hard and time-consuming to write tests for "random" data like XML feeds and databases (not that hard). I've spent some time lately working with weather data feeds. It's quite confusing writing tests for that, at least as I don't have too much experience with TDD.
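One thing that helped me (just a sketch, with made-up names and JUnit 5 assumed) is to test the parsing logic against a small canned XML snippet instead of the live feed, so the "random" data becomes fixed and repeatable:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;

    // Hypothetical parser for a weather feed; in production it would be fed the live XML.
    class WeatherFeedParser {
        static double readTemperature(Document doc) {
            String text = doc.getElementsByTagName("temperature").item(0).getTextContent();
            return Double.parseDouble(text);
        }
    }

    class WeatherFeedParserTest {
        // A small canned snippet makes the "random" feed fixed and repeatable in tests.
        private static final String CANNED_FEED =
                "<observation><temperature unit=\"C\">12.5</temperature></observation>";

        @Test
        void extractsTemperatureFromCannedFeed() throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(CANNED_FEED.getBytes(StandardCharsets.UTF_8)));

            assertEquals(12.5, WeatherFeedParser.readTemperature(doc), 0.0001);
        }
    }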
You have to write applications in a different way: one which makes them testable. You'd be surprised how difficult this is at first.
Some people find the concept of thinking about what they're going to write before they write it too hard. Concepts such as mocking can be difficult for some too. TDD in legacy apps can be very difficult if they weren't designed for testing. TDD around frameworks that are not TDD friendly can also be a struggle.
TDD is a skill so junior devs may struggle at first (mainly because they haven't been taught to work this way).
Overall, though, the cons get resolved as people become skilled, and you end up abstracting away the "smelly" code and having a more stable system.
Unit tests are more code to write, and thus a higher upfront cost of development.
It is more code to maintain.
Additional learning is required.
Good answers all. I would add a few ways to avoid the dark side of TDD:
I've written apps to do their own randomized self-test. The problem with writing specific tests is even if you write lots of them they only cover the cases you think of. Random-test generators find problems you didn't think of.
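A minimal sketch of what I mean (made-up example, JUnit 5 assumed): generate many random inputs with a fixed seed and check an invariant that must hold for all of them, here with Arrays.sort standing in for the code under test.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.Arrays;
    import java.util.Random;

    // Randomized self-test sketch: many random inputs, one invariant, fixed seed so
    // any failure is reproducible. Arrays.sort stands in for the real code under test.
    class RandomizedSortTest {

        @Test
        void sortedOutputIsNeverDecreasing() {
            Random random = new Random(12345);
            for (int run = 0; run < 500; run++) {
                int[] input = random.ints(random.nextInt(50), -1000, 1000).toArray();

                int[] output = input.clone();
                Arrays.sort(output);   // the routine being exercised

                for (int i = 1; i < output.length; i++) {
                    assertTrue(output[i - 1] <= output[i],
                            "not sorted for input " + Arrays.toString(input));
                }
            }
        }
    }

The point is that 500 random arrays cover corner cases (empty input, duplicates, negatives) that a handful of hand-picked examples tends to miss.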
The whole concept of lots of unit tests implies that you have components that can get into invalid states, like complex data structures. If you stay away from complex data structures there's a lot less to test.
To the extent your application allows it, be wary of designs that rely on the proper ordering of notifications, events and side effects. Those can easily get dropped or scrambled, so they need a lot of testing.
Let me add that if you apply BDD principles to a TDD project, you can alleviate a few of the major drawbacks listed here (confusion, misunderstandings, etc.). If you're not familiar with BDD, you should read Dan North's introduction. He came up with the concept in answer to some of the issues that arose from applying TDD in the workplace. Dan's intro to BDD can be found here.
I only make this suggestion because BDD addresses some of these negatives and acts as a stop-gap. You'll want to consider this when collecting your feedback.
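Purely as an illustration of the BDD flavour (JUnit 5 assumed, the account example is made up): tests are named as behaviours and their bodies follow a given / when / then shape, which helps with the confusion and misunderstanding issues mentioned above.

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // BDD-style naming: the test reads as a behaviour specification,
    // and the body is structured as given / when / then.
    class AccountBehaviourTest {

        @Test
        @DisplayName("should apply the welcome bonus to a newly opened account")
        void shouldApplyWelcomeBonusToNewAccount() {
            // given
            int openingBalance = 100;
            int welcomeBonus = 25;

            // when
            int balance = openingBalance + welcomeBonus;   // stands in for real account logic

            // then
            assertEquals(125, balance);
        }
    }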
It takes some time to get into it and some time to start doing it in a project but... I always regret not doing a Test Driven approach when I find silly bugs that an automated test could have found very fast. In addition, TDD improves code quality.
You have to make sure your tests are always up to date, the moment you start ignoring red lights is the moment the tests become meaningless.
You also have to make sure the tests are comprehensive, or the moment a big bug appears, the stuffy management type you finally convinced to let you spend time writing more code will complain.
The person who taught my team agile development didn't believe in planning; you only wrote as much as the tiniest requirement needed.
His motto was refactor, refactor, refactor. I came to understand that refactor meant 'not planning ahead'.
Development time increases: every method needs testing, and if you have a large application with dependencies you need to prepare and clean up your data for tests.
TDD requires a certain organization for your code. This might be inefficient or difficult to read. Or even architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable, which is just wrong.
When code changes, you have to change the tests as well. With refactoring this can be a lot of extra work.