How do you decide what to test in your test suites? - unit-testing

I'm an intern working on a project that has the potential to introduce a lot of bugs at a company with an extremely large code base. Currently the company has no automated testing implemented for any of their projects, so I want to begin writing tests for the code as I go so that I can tell when I break something, but I have a hard time developing an intuition for what is worth testing and how to test it. Some things are more obvious than others: testing string manipulation functions isn't too tough, but what to write for a multithreaded custom memory manager is trickier.
How do you go about designing tests for an existing codebase and what do you test for? How do you figure out what underlying assumptions the code is making?

The answer to most of your questions:
http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052

No easy answers for you I'm afraid. That is just a tough spot to be in.
The method to apply:
1. Identify regions that deliver the most bang for the testing buck. (This is something that you have to come up with - it's unique to your situation.)
2. Spend time getting to know each region. Identify its interactions with the rest of the code base.
3. Document those interactions using tests - these act as a regression "vice" that will hold your software in place while you make subsequent changes.
4. Now you have a safety net to work above. You can start making your enhancements/fixes/changes using a TDD approach.
The idea is that islands of the codebase will slowly emerge above the safety net until you reach a point of diminishing returns. Michael Feathers' WELC book, posted by Pangea above, is a must-read if you're venturing into this area.

A similar question has been asked and answered here
Some quick thoughts from me:
- At the beginning, add tests for newly written code, whether in a new project or for changes to an existing project.
- Don't touch code that is running and is not being changed.
- Concentrate on functionality that is often used or that is critical.
The subject is really manifold, and maybe you should try to get some training to get an overview. Assuming you are in the US, you can have a closer look here. Here is their course content.
They have also a long list of useful resources.

Related

So.. I need to train the team on Unit Testing - could use C&C on lesson plan

So - management is looking to do a push towards unit-testing in all applications moving forward - and eventually to get into full TDD/Continuous Integration/automated build mode (I hope). At this point, however, we are just concerned with getting everyone writing unit tests for the apps they develop going forward. I'd like to just start with the basics.
I won't lie - I'm no expert by any means in unit-testing, but I do have a good enough understanding to start the initiative with the basics and allow us to grow together as a team. I'd really love to get some comments & criticism from all you experts on my plan of attack on this thing. It's a team of about 10 developers in a small shop, which makes for a great opportunity to move forward with agile development methodologies and best practices.
First off - the team consists mainly of mid-level developers with a couple of junior devs and one senior, all with minimal to no exposure to unit testing. The training will be a semi-monthly meeting of about 30-60 minutes each time (it'll probably wind up running an hour, I'd guess, and maybe we'll hold them more often). We will continue these meetings until it makes sense to stop them to allow others to catch up with their own 'homework' and experience - but the push will always be on.
Anyway - here is the lesson plan I have come up with. Well, the first two lessons at least. Any advice from you experts out there on the actual content or structure of the lessons, etc., would be great. Comments & criticism greatly appreciated. Thanks very much.
I apologize if this is 'too much' to post in here or read through. I think this would be a great thread for SO users looking to get into unit testing in the first place as well. Perhaps you could just skip to the 'lesson plans' section - thanks again everyone.
CLIFF NOTES - I realize this post is incredibly long and ugly, so here are the cliff notes - Lesson 1 will be 'hello world unit tests' - Lesson 2 will be opening the solution to my most recent application, and showing how to apply each 'hello world' example in real life... Thanks so much everyone for the feedback you've given me so far... just wanted to highlight that lesson 2 is going to have real-life production unit tests in it, since many suggested I do that when it was my plan from the beginning =)
Unit Testing Lesson Plan
Overview
Why unit test? Seems like a bunch of extra work - so why do it?
• Become the master of your own destiny. Most of our users do not do true UATs, and unfortunately tend to do their testing once in production. With unit-tests, we greatly decrease risk associated with this, especially when we create enough test data and take into account as many top level inputs as we possibly can. While not being a ‘silver bullet’ that prevents all bugs – it is your first line of defense – a huge front line, comparable to that of the SB championship Giants.
• Unit-Testing enforces good design and architecture practices. It is ‘the violent psychopath who maintains your code and knows where you live’, so to speak. You simply can’t write poor-quality code that is well unit-tested.
• How many times have you not refactored smelly code because you were too scared of breaking something? Automated testing removes this fear and makes refactoring much easier, in turn making code more readable and easier to maintain.
• Bottom line – maintenance becomes much easier and cheaper. The time spent writing unit tests might be costly now – but the time it saves you down the road has been proven time and time again to be far more valuable. This is the #1 reason to automate testing your code. It gives us confidence that allows us to take on more ambitious changes to systems that we might have otherwise had to reduce requirements on, or maybe even not take on at all.
Terminology Review
• Unit testing - testing the lowest level, single unit of work, e.g. testing all possible code paths that a single function can flow through.
• Integration testing - testing how your units work together. E.g. – run a ‘job’ (or series of function calls) that does a bunch of work, with known inputs - and then query the database at the end and assert the values are what you expect from those known inputs (instead of having to eye-ball a grid on a web page somewhere, e.g. doing a functional test).
• Fakes – a fake is a type of object whose purpose is to be used for testing. It lets you avoid exercising code that you do not want a test to touch. Instead of having to call code that you do not want to run – like a database call – you use a fake object to ‘fake’ that DB call, perhaps reading the data from an XML/Excel file or a mocking framework. (Both kinds are sketched below.)
o Mock – a type of fake that you make assert statements against.
o Stub – a type of fake that you use as placeholder code, so you can skip the database call, but that you do not make asserts against.
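To make the distinction concrete, here is a minimal hand-rolled sketch in Java/JUnit (Java rather than the VS2008/NMock stack the lesson plan targets, and all the names here - CustomerStore, CustomerRenamer - are invented for illustration):

    import org.junit.Test;
    import static org.junit.Assert.*;

    // The dependency we don't want to hit for real (e.g., the database).
    interface CustomerStore {
        String findName(int id);
        void save(int id, String name);
    }

    // Stub: canned placeholder so the code under test can run; we never assert on it.
    class StubCustomerStore implements CustomerStore {
        public String findName(int id) { return "Alice"; }
        public void save(int id, String name) { /* deliberately does nothing */ }
    }

    // Mock: records what happened so the test can assert against it.
    class MockCustomerStore implements CustomerStore {
        String savedName;
        public String findName(int id) { return "Alice"; }
        public void save(int id, String name) { savedName = name; }
    }

    // Code under test: reads a name through the store and saves it back uppercased.
    class CustomerRenamer {
        private final CustomerStore store;
        CustomerRenamer(CustomerStore store) { this.store = store; }
        void upcase(int id) { store.save(id, store.findName(id).toUpperCase()); }
    }

    public class FakesTest {
        @Test
        public void upcasesAndSavesTheName() {
            MockCustomerStore mock = new MockCustomerStore();
            new CustomerRenamer(mock).upcase(42);
            assertEquals("ALICE", mock.savedName); // the assert goes against the mock
        }
    }

Note that the constructor takes the interface, not a concrete store - that is the dependency injection hook the later lessons rely on.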
Lessons
Lesson one – Hello Worlds
• Hello World unit test - I will create a ‘hello world’ console application that is unit tested. Will create this application on the fly during the meeting, showing the tools in Visual Studio 2008 (test-project, test tools toolbar, etc.) that we are going to use along the way while explaining what they do. This will have only a single unit-test. (OK, maybe I won’t create it ‘on the fly’ =), have to think about it more). Will also explain the Assert class and its purpose and the general flow of a unit-test.
• Hello World, a bit more complicated. Now our function has different paths/logical branches the code can flow through. We will have ~3 unit tests for this function. This will be a pre-written solution I make before the meeting. (A rough sketch of these first two bullets follows after this list.)
• Hello World, dependency injection. (Not using any DI frameworks). A pre-written solution that builds off the previous one, this time using dependency injection. Will explain what DI is and show a sample of how it works.
• Hello World, Mock Objects. A pre-written solution that builds off the previous one, this time using our newly added DI code to inject a mock object into our class to show how mocking works. Will use NMock2.0 as this is the only one I have exposure to. Very simple example to just display the use of mock objects. (Perhaps put this one in a separate lesson?).
• Hello World, (non-automated) Integration Test. Building off the previous solution, we create an integration test to show how to test 2 functions, or an entire class, together (perhaps put this one in a separate lesson?).
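As a rough sketch of what the first two bullets might look like (in Java/JUnit here rather than the VS2008 test tools the lesson actually uses; the greet function is invented): a function with a few logical branches, one test per path, exercising the Assert class:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class GreeterTest {
        // Function under test: two logical branches, so we want a test per path.
        static String greet(String name) {
            if (name == null || name.isEmpty()) {
                return "Hello, World!";
            }
            return "Hello, " + name + "!";
        }

        @Test
        public void emptyNameFallsBackToWorld() {
            assertEquals("Hello, World!", greet(""));
        }

        @Test
        public void nullNameFallsBackToWorld() {
            assertEquals("Hello, World!", greet(null));
        }

        @Test
        public void realNameIsGreetedPersonally() {
            assertEquals("Hello, Ada!", greet("Ada"));
        }
    }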
Lesson two – now we know the basics – how to apply this in real life?
• General overview of best practices
o Rule #1 - Single Responsibility Principle. Single Responsibility Principle. Single Responsibility Principle. Facetiously stating that this is the single most important thing to keep in mind while writing your code. A class should have only one purpose; a function should do only one thing. The key word here is ‘unit’ test – and the SRP will keep your classes & functions encapsulated into units.
o Dependency Injection is your second best friend. DI allows you to ‘plug’ behavior into classes you have, at run time. Among other things, this is how we use mocking frameworks to make our bigger classes more easily testable.
o Always think ‘how will I test this?’ as you are writing code. If it seems ‘too hard to test’, it is likely that your code is too complicated. Refactor it into more logical units of classes/functions - take that one class that does 5 things and turn it into 5 classes, one of which calls the other 4. Now your code will be much easier to test – and also easier to read and refactor as well.
o Test behavior, not implementation. In a nutshell, this means that for the most part we test only the public functions on our classes. We don’t care about testing the private ones (implementation), because the public ones (behavior) are what our calling code uses. For example... I’m a millionaire software developer and go to the Aston Martin dealership to buy myself a brand new DB9. The sales guy tells me that it can do 0-60 in 3 seconds. How would you test this? Would you lift the engine out and perform diagnostic tests, etc.? No... you would take it onto the parkway and do 160 MPH =). This is testing behavior vs. implementation. (A minimal sketch follows at the end of this list.)
• Reviewing a real life unit-tested application. Here we will go over each of the above ‘hello world’ examples – but the real life versions, using my most recent project as an example. I'll open a simple unit test, a more complex one, one that uses DI, and one that uses Mocks (probably coupled to the DI one). This project is fairly small & simple so it is really a perfect fit. This will also include testing the DAL and how to setup a test database to run these tests against.
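To make the 'test behavior, not implementation' rule concrete, a minimal sketch (again Java/JUnit for brevity; PriceCalculator and the 8% rate are invented): the test drives only the public method and asserts on the observable result, leaving the private helper free to change:

    import org.junit.Test;
    import static org.junit.Assert.*;

    class PriceCalculator {
        // Public behavior: what callers rely on, and the only thing we test.
        public double totalWithTax(double subtotal) {
            return round(subtotal * 1.08);
        }

        // Private implementation detail: free to change without breaking tests.
        private double round(double value) {
            return Math.round(value * 100) / 100.0;
        }
    }

    public class PriceCalculatorTest {
        @Test
        public void addsEightPercentTax() {
            // 100.00 * 1.08 = 108.00; we assert on the outcome only.
            assertEquals(108.00, new PriceCalculator().totalWithTax(100.00), 0.001);
        }
    }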
I like the commitment and thoroughness you're expressing here, but my feeling is your adoption plan is going to take longer this way than if you started straight in, paired up and wrote some actual production tests. You'll need infrastructure anyway -- why not bootstrap your actual production code with a CI server and a TDD framework? (A lot of times this is a bigger pain point than learning "asserts." Get this part out of the way sooner rather than later). Then sit down and solve an actual problem with a fellow coder. Pick a simple bug or small feature and try to identify what the failing tests would look like and try to write them.
I like Hello Worlds for a brown bag lunch or something. But I can't think of any reason not to just jump right in and solve some problems.
I helped design and run a Test Driven Development training in house at my company for Java developers. We ended up doing it as a full day training, and we broke it up similar to how you have here.
Why do testing?
Simple example.
More complex example.
One thing I must stress though, is people need to learn by doing.
For my training, we set up a lab environment where each student could start with the same snapshot of code, and develop and run the tests themselves, with instructors walking around the training room helping individuals who were confused or weren't getting it.
For the "Simple Example" we had a "cooking show" version of code that was on a projector and went through the TDD process step by step. Developers would have to go through the process of writing a test, then creating implementation that was just sufficient to pass the test, then repeat. At each phase, a prepared solution to the current phase was shown on the projector after the students had some time to try it on their own.
For the "Complex Example" we created a set of requirements, and then allowed the students to come up with their own solutions using TDD to do so. We even had an exercise where requirements surprise, surprise changed suddenly part way through the exercise.
I like your idea of doing it over a longer period of time with regular checkups. One downfall of our training was a lack of follow-up. Developers benefited from the training, I'm sure, but many, I think, did not take the practice back to their ordinary work. Regular checkups would help instill unit testing as a matter of habit.
For more ideas, check out my answer to this question.
Getting people interested in unit testing on someone else's/sample code will not be as effective as having folks "start with the testcase" on some new code they have to write.
Cover the basics of starting with a test case and triangulating to a valid result (all the while emphasizing that a failing test is progress).
That said, my personal experience is that things like this are better done in a pair programming environment. It's extra work for you, but the combination of one-on-one, some ownership and a working example they can reference at the end of it will make adoption much easier.
Getting a coverage tool running at some point later on can, given the right environment, provide a fun, competitive and gratifying way of marking progress. Making this in any way part of compensation/bonuses is a real bad idea. Done correctly, folks will adopt it because it works.
I tried to introduce TDD in a small team (7-9 programmers). Lectures failed. People are skeptical, TDD sounds like snake oil. Plus, there's always the proverbial Willy.
In the end, instead of TDD, people on the team do occasional DDT (Development-Driven Tests): they write unit tests after the code to confirm that it does what they meant it to. It looks like utter failure until you realize that even this form of developer-driven QA is much better than what you had before because, even if it doesn't really improve the code for future revisits, it cuts into deployed bugs.
There was, however, one key difference compared to your setting: the push didn't come from the management, they were just good enough to let the programmers decide for themselves.
The lectures failed, but I didn't put all the eggs in a single basket. I wrote a bunch of tests for many of the smaller, more focused functions and classes used across the project - those little buggers you occasionally need to bend a bit to fit exactly into the next call site, hoping you didn't break any of the existing ones (you looked around the code, the change seemed safe). It was running the tests after every other svn up (not only by me), and the subsequent replies to the guilty commit emails ("you broke it, pls fix ASAP!"), that finally convinced them of at least the partial utility of unit tests.
No matter what examples you come up with for the lectures, they'll tell you it only worked because the CUT was unrealistically (or unusually) isolated from the rest of the code. They'll ask you to write unit tests for a convoluted, buggy God Class (preferably a Singleton, everybody loves global variables, no?) because, to the best of their knowledge, that's what real-world code looks like, and they think it'll look like that with TDD as well.
Screw lectures. Have the management buy everybody a few books for Christmas: at the very least TDD By Example (Beck) and Refactoring (Fowler). Speaking from experience, the books won't necessarily have any impact on most of them (more of the "that wasn't real-world enough" bullshit), but it's bound to stick with someone, and, no offense intended, I'd bet $1000 that Beck is the better evangelist of you two. ;) Practice TDD yourself in code you share with them. If it really works, they'll follow. Sss-looowww-lyyyy, but they will.
Then again, maybe you don't have that many Willies on your team; maybe your colleagues are eager, just in need of a leader - I don't really know, you didn't say.
First commandment: don't be put off by slow and/or partial intake: every inch counts! (I'm preaching to myself here, ya know...)
I think it will be harder to start with test-after type of testing. A major difference compared to test-first is that test-after does not guide you in writing code which is easy to test. Unless you have first done TDD, it will be very hard to write code for which it's easy to write tests.
I recommend starting with TDD/BDD and after getting the hang of that, continue learning how to do test-after on legacy code (also PDF). IMO, doing test-after well is much harder than test-first.
In order to learn TDD, I recommend finishing this tutorial. That should get you started on what kind of tests to write.
I think it's a good outline for a session or two. But I'd strongly recommend standing up a testing infrastructure first and setting the developers up for early success, rather than shoving the developers head-first into the unit testing world and shouting "ready, set, GO!"
Assuming you develop in an IDE, a one-click test generator tool will really help with step one, test creation. Beyond that:
- You'll want a GUI-based test runner integrated with your IDE, and you really need an automated test runner integrated with your CI builds.
- Having a code coverage tool integrated with the test runner will help developers increase their coverage.
- You need to provide developers with a complete framework for writing the tests, and that includes documenting how to organize the test source code (test naming standards, project settings, folder locations, etc.). Tools that automate any of the manual steps will go a long way.
- Provide a mock framework, including mock objects for any in-house or third-party library APIs you depend on.
- If you're going to recommend a database full of fake data (instead of a suite of fake data access objects), create one with a few key tables, populate it with test data, and provide whatever it takes to reset it between testing runs. If not, provide a few fake DAOs of your own to serve as examples.
- You should have a small suite of tests of production code already in your source control system to show as examples.
- And you need to have all this documented so you can hand it out in meeting #1. (Don't forget to test the document!)
One of our teams tried the ad hoc approach a couple years ago, but adoption was poor. Without the definitions or the organization of the tests, individual developers kind of did their own thing. Lack of a naming standard made it difficult to identify the right tests to run for which modules, and few people bothered. We had no IDE integration for any of it. Nobody wanted to write mocks, and we didn't provide mocks in advance for the framework we use. And it's really hard to add tests to legacy code that wasn't written with Dependency Injection in mind.
I'm now trying to get our infrastructure reorganized and ready for another shot at the task. Only after it's standing up will I work on trying to get the developers to use it again.
However, you have the one thing we were kind of lacking last time, and that's strong management support. Our previous managers were kind of lukewarm to the idea because they didn't want to delay coding by having to write all these tests. We now have new management and they are focusing on code quality -- automated unit tests are now in vogue, so it's a good time for us to try again.
Is a classroom-style approach going to work well? That's a question that comes to my mind, as what may happen here is so much separation between knowledge and application of the ideas that, while they may know it, they don't quite get how to use it.
Just to give a different approach: one could jump into whatever projects are going on and start trying to apply some of this. It may take a while to trickle down through everyone, as the idea would be that you'd train one person, and then as a pair you'd train another pair, and so on, to try to get everyone up to the same point. This way the theoretical stuff that isn't relevant gets taken out early.
Another idea would be to see if it makes sense to try to form teams within the group of 10, such as two groups of 3 and a group of 4, or 5 pairs, so that each person can present what they've done and how well they are using this.
I'd second the boundary testing, as well as showing why it can be useful to test with invalid input to see that this doesn't cause the system to roll over and die.
Lastly, it may make sense to have time devoted to handling questions and other 1:1 issues that people have as not everyone will want to ask questions in front of a large group.
EDIT: A caution on Lesson 2, which is a good idea but has a few hazards to note. First, be humble in doing this lesson so that you aren't coming across as a show-off wanting to slam everyone else. Second, I'd suggest leaving some tests incomplete when going over things so that there is an interactive aspect here that prevents the other developers from merely tuning out or rolling their eyes. Third, a follow-up lesson or two of others showing what they did or you going over what they did may be more useful to show that this is a team effort in some ways.
If management is serious about this, then they have to take the bull by the horns and take action themselves, not just have a little training. Start having them ask to see the tests in code reviews. Once this is common and people know they will not pass the code review without a test, then have them refuse to deploy new code until there are tests written. It will only take a couple of iterations of this before people know that they can't get away without writing the tests, and then they will do so fairly reliably. Yes, the first time the manager refuses to deploy, something will probably be late; they have to take that into account in their own planning. They also need to start adding test-writing time to their task estimates. No one wants to do this if they think they will be blamed because it takes longer to write tests and code than just to write code. Have a published schedule for how this will be implemented (you need to give them some time to learn how to do this before it is required for all new development), and on the drop-dead date, nothing gets to prod without tests.
I like it. One comment I have is it's probably worth dropping in some guidelines/advice on how to go about testing threaded/asynchronous code.
Also, I'd make a comment about boundary testing i.e. using unit tests to state your assumptions on any 3rd party API you utilize.
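One possible shape for such a boundary test - shown here against the JDK's own String.split rather than a real third-party API - is to put the surprising behaviors on record, so that an upgrade which changes them fails loudly in your suite first:

    import org.junit.Test;
    import static org.junit.Assert.*;

    // "Learning tests": state our assumptions about an API we don't own.
    public class StringSplitAssumptionsTest {
        @Test
        public void splittingAnEmptyStringYieldsOneEmptyElement() {
            assertEquals(1, "".split(",").length);
        }

        @Test
        public void trailingEmptyStringsAreDropped() {
            assertEquals(2, "a,b,,,".split(",").length);
        }

        @Test
        public void aLoneDelimiterYieldsAnEmptyArray() {
            assertEquals(0, ",".split(",").length);
        }
    }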
Just because I have been doing something where TDD (and unit tests in general) has been invaluable, and because it might make a good demo, I'd suggest an example using Regex; especially if you aren't a Regex expert, it's quite easy to make mistakes.
It'd also make for a good 'show, not tell' demo. If you just:
step 1: Tell them just to write some code with a Regex to match a certain pattern.
step 2: expand the pattern to something else.
step 3: show how unit tests immediately let them be sure that a) it matches the new pattern and b) it hasn't broken any of the previous matches or let through results that it shouldn't.
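A sketch of that demo in JUnit (the ORD-nnnn order-code pattern is invented for illustration): step 2 extends the pattern with an optional suffix, and the step-1 tests stay in place to prove nothing broke:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class OrderCodeRegexTest {
        // Step 1's pattern matched "ORD-1234"; step 2 extends it to allow
        // an optional two-letter region suffix, e.g. "ORD-1234-EU".
        private static final String ORDER_CODE = "ORD-\\d{4}(-[A-Z]{2})?";

        @Test
        public void stillMatchesTheOriginalPattern() {
            assertTrue("ORD-1234".matches(ORDER_CODE)); // (b): old matches unbroken
        }

        @Test
        public void matchesTheNewSuffixForm() {
            assertTrue("ORD-1234-EU".matches(ORDER_CODE)); // (a): new form matches
        }

        @Test
        public void stillRejectsMalformedCodes() {
            assertFalse("ORD-12".matches(ORDER_CODE));       // too few digits
            assertFalse("ORD-1234-eu".matches(ORDER_CODE));  // lowercase suffix
        }
    }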
As general C&C of your plan: it's hard to get an idea of exactly what sequence events will go in. Also, semi-monthly seems far too infrequent if there's just a void in between with no push to apply their knowledge. The last thing you want is for it to become some meeting they go to and forget about as soon as they are out of the room!
Do you have sufficient downtime in your current workload to address technical debt and get the team to actually pair up and start looking at adding unit tests to some existing functionality and refactoring where necessary to allow mocking?
I would say that will prove far more effective than lectures and potted workshops, YMMV.

Why do code quality discussions evoke strong reactions? [closed]

I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.)
Code quality does not seem popular. Some examples from my experience include
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
"There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible."
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horrible bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?
Real software projects quality control goes far beyond simply checking if the code is correct. If you check the V-Model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason why so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now". Even when shown pretty graphics demonstrating the opposite.
Moreover, software quality control and software engineering as a whole is a relatively new discipline. A lot of the programming space so far has been taken by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code that's for sure, but it's not everyone who can be a programmer.
EDIT:
I've come across this paper (PDF) which is from the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects like the software engineering schools want to make you think. He just adds another parameter to control which is "Do I want to control this project? Will it be needed?"
- Laziness / considered boring.
- Management feeling it's unnecessary - ignorant "just do it right" attitude.
- "This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project".
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests, and the programmers could simply implement something new instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams act as support to the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like Thought Works can continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs to have an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of the value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning how to test what. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learn techniques and patterns that increase testability.)
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug (a sketch of this follows below).
You'll have to practice to get good at it.
So until you find the right way: yeah, it's dull, non-rewarding, hard to do, time-consuming, etc.
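As a sketch of the 'write the test that proves the bug, then fix the bug' tip above (the discount bug and all names are invented): the test is written first from the bug report and fails, the fix makes it pass, and it stays behind as a regression guard:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class DiscountRegressionTest {
        // Code under test, after the fix. The reported bug: a 100% discount
        // still charged a nonzero amount.
        static long discountedPriceInCents(long cents, int percentOff) {
            return cents * (100 - percentOff) / 100;
        }

        // Written first, straight from the bug report; it failed until the fix.
        @Test
        public void fullDiscountChargesNothing() {
            assertEquals(0, discountedPriceInCents(999, 100));
        }

        @Test
        public void halfDiscountHalvesThePrice() {
            assertEquals(500, discountedPriceInCents(1000, 50));
        }
    }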
EDIT:
In this blog post I go into depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and testing techniques and tools. Technical conference such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I do have a reasonably simple explanation to that fact.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes a back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small, or carries no really important data at a large scale, the time needed to write the tests is better used to write the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mockup data to be fed to the methods. Even if you set up mockup data - can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered; that is, unit testing is bad at simulating the unexpected - say, what could happen in a power outage, or if the network link sends bad data that is still CRC-correct. Writing tests for these is futile.
I am all in favour of code inspections, as they let programmers share experience and code style with other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines & ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" (e.g., without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't quite a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was changing too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that everyone was happy and felt no need for unit tests. So we have a few tests that stay there as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you just change something somewhere a little bit you have no way of knowing which tests to update beyond those that will crash. Who knows, the old tests may now not cover all possibilities and it takes time to recollect what was written five years ago.
The other reason is the lack of time. When you have a task assigned that says "Completion time: 0,5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other project parts and write all the necessary tests. It may really take 0,5 days to implement something and a couple of weeks to write the tests. Unless you are specifically ordered to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think up good test coverage for a task in five minutes. It will take time and probably a very deep knowledge of most application modules.
So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, but really two or three are needed. So new colleagues do not manage to write proper tests. And there is no point in creating bad tests.
It's 'dull' to chase some random 'feature' of extreme importance for more than a day through a mysterious code jungle written by someone else x years ago, without any clue as to what's going wrong, why it's going wrong, and with absolutely no idea what could fix it, when the task was supposed to be done in a few hours. And when it's done, no one is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing to make me more productive, but I've seen lots of badly formatted, unreadable poorly designed code which passed all its tests ( generally long-in-the-tooth code which had been patched many times ). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated; the more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but, for example, PMD says that I have too many methods in my class. So I should cut the class into abstract class(es) (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since they might still have too many methods - been there).
Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting, and says you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on detecting when code is too complicated and not care so much about the little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage, and it was a real nightmare to change anything, because whenever you change anything you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place; for example, I did a lot of unit testing in my webpage parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Less bugs == less work? No, there's always something to do. Less bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person who writes tests (I want to get better at it, and that's the only reason I do it). I understand that for others, writing tests is just more work and you get nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on the web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change on a webpage. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

How to reduce the time spent on testing?

I just looked back through a project that recently finished and found a very serious problem: I spent most of my time on testing the code, reproducing the different situations that "may" cause errors.
Do you have any idea or experience to share on how to reduce the time spent on testing, so that makes the development much more smoothly?
I tried to follow the test-driven concept for all my code, but I found it really hard to achieve; I really need some help from the senior guys here.
Thanks
Re: all
Thanks for the answers above here; initially my question was how to reduce the time on general testing, but now the problem is down to how to write efficient automated test code.
I will try to improve my skills in writing test suites to cut down this part of the time.
However, I still really struggle with how to reduce the time I spend on reproducing the errors. For instance, in a standard blog project it is easy to reproduce the situations that may cause errors, but a complicated bespoke internal system may "never" be easy to test throughout - is it worth it? Do you have any ideas on how to build a test plan for this kind of project?
Thanks for the further answers still.
Test driven design is not about testing (quality assurance). It has been poorly named from the outset.
It's about having machine runnable assumptions and specifications of program behavior and is done by programmers during programming to ensure that assumptions are explicit.
Since those tasks have to be done at some point in the product lifecycle, it's simply a shift of the work. Whether it's more or less efficient is a debate for another time.
What you refer to I would not call testing. Having strong TDD does mean that the testing phase does not have to be relied upon as heavily for errors, which would be caught long before they reach a test build (as they are by experienced programmers with a good spec and responsive stakeholders in a non-TDD environment).
If you think the upfront tests (the runnable spec) are a serious problem, I guess it comes down to how much work the relative stages of development are expected to cost in time and money.
I think I understand. Above the developer-test level, you have the customer test level, and it sounds like, at that level, you are finding a lot of bugs.
For every bug you find, you have to stop, take your testing hat off, put your reproduction hat on, and figure out a precise reproduction strategy. Then you have to document the bug, perhaps put it in a bug-tracking system. Then you have to put the testing hat on. In the mean time, you've lost whatever setup you were working on and lost track of where you were on whatever test plan you were following.
Now - if that didn't have to happen - if you had far few bugs - you could zip along right through testing, right?
It's doubtful that GUI-driving test automation will help with this problem. You'll spend a great amount of time recording and maintaining the tests, and those regression tests will take a fair amount of time to return the investment. Initially, you'll go much slower with GUI-Driving customer-facing tests.
So (I submit) what might really help is higher /initial/ code quality coming out of development activities. Micro-tests - also called developer-tests, or test-driven development in the original sense - might really help with that. Another thing that can help is pair programming.
Assuming you can't grab someone else to pair, I'd spend an hour looking at your bug tracking system. I would look at the past 100 defects and try to categorize them into root causes. "Training issue" is not a cause, but "off by one error" might be.
Once you have them categorized and counted, put them in a spreadsheet and sort. Whatever root cause occurs the most often is the root cause you prevent first. If you really want to get fancy, multiply each root cause's count by some number representing the pain it causes. (Example: if in those 100 bugs you have 30 typos on menus, which are easy to fix, and 10 hard-to-reproduce javascript errors, you may want to fix the javascript issue first.)
This assumes you can apply some magical 'fix' to each of those root causes, but it's worth a shot. For example: transparent icons in IE6 may be broken because IE6 cannot easily process .png files. So have a version control trigger that rejects .png files on checkin, and the issue is fixed.
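To make that weighting concrete, here is a minimal sketch of the spreadsheet math in Java; the root causes and the pain scores are invented for illustration:

    import java.util.*;

    public class RootCausePriority {
        public static void main(String[] args) {
            // root cause -> { occurrences in the last 100 bugs, pain per bug (1-10) }
            Map<String, int[]> causes = new LinkedHashMap<>();
            causes.put("menu typo", new int[] {30, 1});
            causes.put("hard-to-reproduce javascript error", new int[] {10, 8});
            causes.put("off by one error", new int[] {15, 4});

            // score = count * pain; attack the highest score first
            causes.entrySet().stream()
                  .sorted((a, b) -> Integer.compare(
                          b.getValue()[0] * b.getValue()[1],
                          a.getValue()[0] * a.getValue()[1]))
                  .forEach(e -> System.out.printf("%-40s score %d%n",
                          e.getKey(), e.getValue()[0] * e.getValue()[1]));
            // prints: javascript errors 80, off-by-ones 60, menu typos 30
        }
    }

With these made-up numbers, the 10 javascript errors outrank the 30 typos, which is exactly the point of multiplying by pain.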
I hope that helps.
The Subversion team has developed some pretty good test routines, by automating the whole process.
I've begun using this process myself, for example by writing tests before implementing the new features. It works very well, and generates consistent testing through the whole programming process.
SQLite also has a decent test system, with some very good documentation about how it's done.
In my experience with test-driven development, the time saving comes well after you have written out the tests, or at least after you have written the base test cases. The key thing here is that you actually have to write out your automated tests. The way you phrased your question leads me to believe you weren't actually writing automated tests. After you have your tests written, you can easily go back later and update them to cover bugs they didn't previously find (for better regression testing), and you can relatively quickly refactor your code with the peace of mind that it will still work as expected after you have substantially changed it.
You wrote:
"Thanks for the answers above here,
initially my question was how to
reduce the time on general testing,
but now, the problem is down to how to
write the efficient automate test
code."
One method that has been proven in multiple empirical studies to work extremely well to maximize testing efficiency is combinatorial testing. In this approach, a tester will identify WHAT KINDS of things should be tested (and input it into a simple tool) and the tool will identify HOW to test the application. Specifically, the tool will generate test cases that specify what combinations of test conditions should be executed in which test script and the order that each test script should be executed in.
In the August 2009 IEEE Computer article I co-wrote with Dr. Rick Kuhn, Dr. Raghu Kacker, and Dr. Jeff Lei, for example, we highlight a 10-project study I led where one group of testers used their standard test design methods and a second group of testers, testing the same application, used a combinatorial test case generator to identify test cases for them. The teams using the combinatorial test case generator found, on average, more than twice as many defects per tester hour. That is strong evidence for efficiency. In addition, the combinatorial testers found 13% more defects overall. That is strong evidence for quality/thoroughness.
Those results are not unusual. Additional information about this approach can be found at http://www.combinatorialtesting.com/clear-introductions-1 and in our tool overview here. It contains screen shots and an explanation of how the tool makes testing more efficient by identifying a subset of tests that maximizes coverage.
Also, a free version of our Hexawise test case generator can be found at www.hexawise.com/users/new.
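To make the idea concrete without any particular tool, here is a minimal sketch (the parameters and values are invented, and this is not the Hexawise algorithm) showing that four hand-picked test cases can cover every pair of values, where the full cartesian product would need eight:

    public class PairwiseCheck {
        // parameter 0: browser, 1: os, 2: locale
        static final String[][] VALUES = {
            {"IE", "Firefox"}, {"Windows", "Linux"}, {"en", "de"},
        };
        // four hand-picked cases; the cartesian product is 2 * 2 * 2 = 8
        static final String[][] SUITE = {
            {"IE",      "Windows", "en"},
            {"Firefox", "Linux",   "en"},
            {"IE",      "Linux",   "de"},
            {"Firefox", "Windows", "de"},
        };

        public static void main(String[] args) {
            // every pair of values across every pair of parameters must
            // appear together in at least one test case
            for (int p = 0; p < VALUES.length; p++)
                for (int q = p + 1; q < VALUES.length; q++)
                    for (String v : VALUES[p])
                        for (String w : VALUES[q]) {
                            boolean covered = false;
                            for (String[] t : SUITE)
                                covered |= t[p].equals(v) && t[q].equals(w);
                            System.out.printf("(%s, %s) %s%n", v, w,
                                    covered ? "covered" : "MISSING");
                        }
        }
    }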
There is nothing inherently wrong with spending a lot of time testing if you are testing productively. Keep in mind, test-driven development means writing the (mostly automated) tests first (this can legitimately take a long time if you write a thorough test suite). Running the tests shouldn't take much time.
It sounds like your problem is that you are not doing automated testing. Using automated unit and integration tests can greatly reduce the amount of time you spend testing.
First, it's good that you recognise that you need help -- now go and find some :)
The idea is to use the tests to help you think about what the code should do, they're part of your design time.
You should also think about the total cost of ownership of the code. What is the cost of a bug making it through to production rather than being fixed first? If you're in a bank, are there serious implications about getting the numbers wrong? Sometimes, the right stuff just takes time.
One of the hardest things about any project of significant size is designing the underlying architecture and the API. All of this is exposed at the level of unit tests. If you're writing your tests first, then that aspect of design happens when you're coding your tests rather than the program logic. This is compounded by the added effort of making code testable. Once you've got your tests, the program logic is usually quite obvious.
That being said, there seem to be some interesting automatic test builders on the horizon.

Why should I use Test Driven Development? [duplicate]

Duplicate:
Why should I practice Test Driven Development and how should I start?
For a developer that doesn't know about Test-Driven Development, what problem(s) will be solved by adopting TDD?
[EDIT] Let's assume that the developer already (ab)uses a unit testing framework.
Here are three reasons that TDD might help a developer/team:
Better understanding of what you're going to write
Enforces the policy of writing tests a little better
Speeds up development
One reason to write the tests first is to have a better understanding of the actual code before you write it. To me, this is the main benefit of test-driven development. When you write the test cases first, you think more critically about the corner cases. It's then easier to address them when you write the code and ensure that they're handled correctly.
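As a sketch of what that looks like in practice (JUnit 4, with a hypothetical Splitter class): the corner cases - empty input, trailing delimiter - are pinned down as tests before the real code exists.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class SplitterTest {
        @Test
        public void emptyInputGivesNoFields() {
            assertArrayEquals(new String[0], Splitter.split(""));
        }

        @Test
        public void trailingDelimiterAddsNoEmptyField() {
            assertArrayEquals(new String[] {"a", "b"}, Splitter.split("a,b,"));
        }
    }

    // In TDD this class would not exist yet; a later "green" step might be:
    class Splitter {
        static String[] split(String s) {
            return s.isEmpty() ? new String[0] : s.split(",");
        }
    }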
Another reason is to actually enforce writing the tests. Often when people do unit testing without TDD, they have a testing framework set up, write some new code, and then quit. They think that the code already works just fine, so why write tests? It's simple enough that it won't break, right? But now you've lost the advantages of doing unit tests in the first place (a completely different discussion). Write them first, and they're already there.
Writing these tests first could mean that you don't need to launch the program in a debugging environment (slow — especially for larger projects) to test if a few small things work. Of course there's no excuse for not doing so before committing changes.
Convincing yourself or other people to write the tests first may be difficult. You may have better luck getting them to write both at the same time which may be just as beneficial.
Presumably you test code that you've written before you commit it to a repository.
If that's not true you have other issues to deal with.
If it is true, you can look at writing tests using a framework as a way to automate those main routines or drivers that you currently write so you can run all of them automatically at the push of a button. You don't have to pore over output to decide if the test passed or failed; you embed the success or failure of the test in the code and get a thumbs up or down decision right away. Running all the tests at once reduces the chances of a "whack a mole" situation where you fix something in one class and break something else. All the tests have to pass.
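As a sketch of that shift (the class under test here is just java.util.Stack), the embedded assertion replaces the printout you would otherwise have to read:

    import static org.junit.Assert.*;
    import java.util.Stack;
    import org.junit.Test;

    public class StackTest {
        // Where a hand-written driver would println() the value for you to
        // eyeball, the assertion embeds the expected answer, so the framework
        // gives the thumbs up or down itself.
        @Test
        public void popReturnsLastPushedValue() {
            Stack<Integer> s = new Stack<>();
            s.push(42);
            assertEquals(42, (int) s.pop());   // no output to pore over
        }
    }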
Sounds good so far, yes?
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
If you're already using a test framework, getting good value out of the tests you write, and have meaningful code coverage up around 70%, then I think you're doing well. I'm not sure that TDD will give you much more value. It's up to you to decide whether or not you go that extra mile. Personally, I don't do it. I write tests after the class and refactor if I feel the need. Some people might find it helpful to write the test first knowing it'll fail, but I don't.
(This is more of a comment agreeing with duffymo's answer than an answer of its own.)
duffymo answers:
The TDD folks just take it one step further by demanding that you write the test FIRST before you write the class. It fails, of course, because you haven't written the class. It's their way of guaranteeing that you write test classes.
I think it's actually to force coders to think about what their code is doing. Having to think about a test makes one consider what the code is supposed to do: what the pre-conditions and post-conditions are, which functions are primitive and which are composed of primitive functions, what the minimal necessary public interface is, and what's an implementation detail.
These are all things I routinely think about, so like you, "test first" doesn't add a whole lot, for me. And frankly (I know this is heresy in some circles) I like to "anchor" the core ideas of a class by sketching out the public interface first; that way I can look at it, mentally use it, and see if it's as clean as I thought it was. (A class or a library should be easy and intuitive for client programmers to use.)
In other words, I do what TDD tries to ensure happens by writing tests first, but like duffymo, I get there a different way.
And the real point of "test first" is to get a coder to pause and think like a designer. It's silly to make a fetish of how the programmer enters that state; for those who don't do it naturally, "test first" serves as a ritual to get them there. For those who do, "test first" doesn't add much -- and can get in the way of the programmer's habitual way of getting into that state.
Again, we want to look at results, not rituals. If a junior guy needs a ritual, a "stations of the cross" or a rosary* to "get in the groove", "test first" serves that purpose. If someone has their own way to get there, that's great too.
Note that I'm not saying that code shouldn't be tested. It should. It gives us a safety net, which in turn allows us to concentrate our attention on writing good code, even audacious code, because we know the net is there to catch errors.
All I am saying is that fetishistic insistence on "test first" confuses the method (one of many) with the goal, making the programmer think about what he's coding.
* To be ecumenical, I'll note that both Catholics and Muslims use rosaries. And again, it's a mechanical, muscle-memory way to put oneself into a certain frame of mind. It's a fetish (in the original sense of a magic object, not the "sexual fetish" meaning) or good-luck charm. So is saying "Om mani padme hum", or sitting zazen, or stroking a "lucky" rabbit's foot (not so lucky for the rabbit). The philosopher Jerry Fodor, when thinking about hard problems, has a similar ritual: he repeats to himself, "C'mon, Jerry, you can do it!" (I tried that too, but since my name is not Jerry, it didn't work for me. ;) )
Ideally:
You won't waste time writing features you don't need. You'll have a comprehensive unit test suite to serve as a safety net for refactoring. You'll have executable examples of how your code is intended to be used. Your development flow will be smoother and faster; you'll spend less time in the debugger.
But most of all, your design will be better. Your code will be better factored - loosely coupled, highly cohesive - and better formed - smaller, better-named methods & classes.
For my current project (which runs on a relatively heavyweight process), I have adopted a peculiar form of TDD that consists of writing skeleton test cases based on requirements documents and GUI mockups. I write dozens, sometimes hundreds of those before starting to implement anything (this runs totally against "pure" TDD which says you should write a few tests, then immediately start on a skeleton implementation).
I have found this to be an excellent way to review the requirements documents. I have to think about the behaviour described in them much more intensively than if I were just to read them. As a consequence, I find many more inconsistencies and gaps, which I would otherwise only have found during implementation. This way, I can ask for clarification earlier and have better requirements when I start implementing.
Then, during implementation, the tests are a way to measure how far I've yet to go. And they prevent me from forgetting anything (don't laugh, that's a real problem when you work on larger use cases).
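A minimal sketch of such skeletons (JUnit 4; the requirement IDs and behaviours are invented for illustration):

    import org.junit.Ignore;
    import org.junit.Test;

    // Transcribed from the requirements document before any implementation
    // exists; each name records one behaviour the document promises.
    public class CheckoutRequirementsTest {
        @Test @Ignore("REQ-104: not implemented yet")
        public void emptyCartCannotProceedToPayment() { }

        @Test @Ignore("REQ-105: not implemented yet")
        public void removingLastItemReturnsUserToCatalog() { }

        @Test @Ignore("REQ-106: spec unclear -- what about expired vouchers?")
        public void expiredVoucherIsRejectedWithMessage() { }
    }

Writing the third test is exactly where a gap in the requirements shows up, and the question can go back to the analysts before a line of code is written.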
And the moral is: even when your dev process doesn't really support TDD, it can still be done in a way, and improve quality and productivity.
I personally do not use TDD, but one of the biggest pros I can see with the methodology is the assurance of customer satisfaction. Basically, the idea is that the steps of your development process are these:
1) Talk to customer about what the application is supposed to do, and how it is supposed to react to different situations.
2) Translate the outcome of 1) into unit tests, each of which tests one feature or scenario.
3) Write simple, "sloppy" code that (barely) passes the tests. When this is done, you have met your customer's expectations.
4) Refactor the code you wrote in 3) until you think you've done it in the most effective way possible.
When this is done you have hopefully produced high-quality code, that meets your customer's needs. If the customer now wants a new feature, you start the cycle over - discuss the feature, write a test that makes sure it works, write code that passes the test, refactor.
And as others have said, each time you run your tests you ensure that the old code still works, and that you can add new functionality without breaking the old.
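A minimal sketch of steps 2) and 3) above, with invented names: the tests encode the customer's rule, and the implementation barely passes them.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class ShippingTest {
        @Test
        public void ordersOverOneHundredShipFree() {
            assertEquals(0.0, Shipping.costFor(150.0), 0.001);
        }

        @Test
        public void smallOrdersPayFlatRate() {
            assertEquals(5.95, Shipping.costFor(20.0), 0.001);
        }
    }

    class Shipping {
        // step 3: the simplest thing that passes -- refactor in step 4
        static double costFor(double orderTotal) {
            return orderTotal > 100.0 ? 0.0 : 5.95;
        }
    }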
Most of the people I have talked to don't use a complete TDD model. They usually find the best testing model that works for them. Find yours: play with TDD and find where you are most productive.
TDD (Test Driven Development/ Design) provides the following advantages
ensures you know the story card's acceptance criteria before you start
ensures that you know when to stop coding (i.e., when the acceptance criteria have been met, which prevents gold-plating)
As a result you end up with code that is
testable
cleanly designed
able to be refactored with confidence
the minimal code necessary to satisfy the story card
a living specification of how the code works
able to support a sustainable pace of new features
I made a big effort to learn TDD for Ruby on Rails development. It took several days before I really got into it. I was very skeptical, but I made the effort because programmers I respect support it.
At this point I feel it was definitely worth the effort. There are several benefits which I'm sure others will be happy to list for you. To me the most important advantage is that it helps avoid that nightmare situation late in a project where something suddenly breaks for no apparent reason and then you're spending a day and a half with the debugger. It helps prevent your code base from deteriorating as you add more and more logic to it.
It is common knowledge that writing tests and having a large number of automated tests are a Good Thing.
However, without TDD, it often just becomes tedious. People write tests and then leave them; the tests do not get updated as they should, nor do new features get tested as often as they should.
A big part of this is because the code has become a pain to test - TDD will influence your design so that it is much easier to test. Because you've used TDD, you have a good number of tests, which makes it much easier to find regressions whenever your code or requirements change, simplifying debugging dramatically. This causes an appreciation of good TDD and encourages more tests to be written when changes are needed - and we're back to the start of the cycle.
There are many advantages:
Higher code quality
Fewer bugs
Less wasted time
Any of those alone would be sufficient justification to implement TDD.

Disadvantages of Test Driven Development? [closed]

What do I lose by adopting test driven design?
List only negatives; do not list benefits written in a negative form.
If you want to do "real" TDD (read: test first, with the red, green, refactor steps) then you also have to start using mocks/stubs when you want to test integration points.
When you start using mocks, after a while you will want to start using Dependency Injection (DI) and an Inversion of Control (IoC) container. To do that you need to use interfaces for everything (which have a lot of pitfalls themselves).
At the end of the day, you have to write a lot more code, than if you just do it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration and a few tests.
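As a sketch of that extra code, with invented names: instead of just a customer class, you get an interface, a hand-rolled stub, constructor injection, and a test.

    import static org.junit.Assert.*;
    import org.junit.Test;

    interface CustomerRepository {                 // the extra interface
        boolean exists(String customerId);
    }

    class OrderService {                           // depends on the interface,
        private final CustomerRepository repo;     // injected via constructor
        OrderService(CustomerRepository repo) { this.repo = repo; }

        boolean accept(String customerId) {
            return repo.exists(customerId);        // no database touched in tests
        }
    }

    public class OrderServiceTest {
        @Test
        public void rejectsUnknownCustomer() {
            CustomerRepository stub = id -> false; // the extra stub/mock
            assertFalse(new OrderService(stub).accept("C-42"));
        }
    }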
And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code.
Many developers don't quite understand how to do all these "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can.
It is much harder than one might think. Projects done with TDD often end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, in the wrong way. And nobody agrees on what a good test should look like, not even the so-called gurus.
All those tests make it a lot harder to "change" (opposite to refactoring) the behavior of your system and simple changes just becomes too hard and time consuming.
If you read the TDD literature, there are always some very good examples, but often in real life applications, you must have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge, and... you have to write even more code.
So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road.
Several downsides (and I'm not claiming there are no benefits - especially when writing the foundation of a project - it'd save a lot of time at the end):
Big time investment. For the simple case you lose about 20% of the actual implementation, but for complicated cases you lose much more.
Additional Complexity. For complex cases your test cases are harder to calculate. In cases like that, I'd suggest trying automatic reference code that runs in parallel in the debug version / test run, instead of unit tests of the simplest cases.
Design Impacts. Sometimes the design is not clear at the start and evolves as you go along - this will force you to redo your tests, which will generate a big loss of time. I would suggest postponing unit tests in this case until you have some grasp of the design in mind.
Continuous Tweaking. For data structures and black box algorithms unit tests would be perfect, but for algorithms that tend to be changed, tweaked or fine tuned, this can cause a big time investment that one might claim is not justified. So use it when you think it actually fits the system and don't force the design to fit to TDD.
When you get to the point where you have a large number of tests, changing the system might require re-writing some or all of your tests, depending on which ones got invalidated by the changes. This could turn a relatively quick modification into a very time-consuming one.
Also, you might start making design decisions based more on TDD than on actually good design principles. Whereas you may have had a very simple, easy solution that is impossible to test the way TDD demands, you now have a much more complex system that is actually more prone to mistakes.
I think the biggest problem for me is the HUGE loss of time it takes "getting into it". I am still very much at the beginning of my journey with TDD (see my blog for updates on my testing adventures if you are interested) and I have literally spent hours getting started.
It takes a long time to get your brain into "testing mode" and writing "testable code" is a skill in itself.
TBH, I respectfully disagree with Jason Cohen's comments on making private methods public; that's not what it is about. I have made no more public methods in my new way of working than before. It does, however, involve architectural changes and allowing you to "hot plug" modules of code to make everything else easier to test. You should not be making the internals of your code more accessible to do this. Otherwise we are back to square one with everything being public - where is the encapsulation in that?
So, (IMO) in a nutshell:
The amount of time taken to think (i.e. actually grok'ing testing).
The new knowledge required of knowing how to write testable code.
Understanding the architectural changes required to make code testable.
Increasing your skill of "TDD-Coder" while trying to improve all the other skills required for our glorious programming craft :)
Organising your code base to include test code without screwing your production code.
PS: If you would like links to positives, I have asked and answered several questions on it, check out my profile.
In the few years that I've been practicing Test Driven Development, I'd have to say the biggest downsides are:
Selling it to management
TDD is best done in pairs. For one, it's tough to resist the urge to just write the implementation when you KNOW how to write an if/else statement. But a pair will keep you on task because you keep him on task. Sadly, many companies/managers don't think that this is a good use of resources. Why pay for two people to write one feature, when I have two features that need to be done at the same time?
Selling it to other developers
Some people just don't have the patience for writing unit tests. Some are very proud of their work. Or, some just like seeing convoluted methods/functions bleed off the end of the screen. TDD isn't for everyone, but I really wish it were. It would make maintaining stuff so much easier for those poor souls who inherit code.
Maintaining the test code along with your production code
Ideally, your tests will only break when you make a bad code decision. That is, you thought the system worked one way, and it turns out it didn't. By breaking a test, or a (small) set of tests, this is actually good news. You know exactly how your new code will affect the system. However, if your tests are poorly written, tightly coupled or, worse yet, generated (cough VS Test), then maintaining your tests can quickly become a chore. And after enough tests start to cause more work than the perceived value they are creating, the tests will be the first thing to be deleted when schedules become compressed (e.g. it gets to crunch time).
Writing tests so that you cover everything (100% code coverage)
Ideally, again, if you adhere to the methodology, your code will be 100% tested by default. Typically, though, I end up with code coverage upwards of 90%. This usually happens when I have some template-style architecture, and the base is tested, and I try to cut corners and not test the template customizations. Also, I have found that when I encounter a new barrier I hadn't previously encountered, I have a learning curve in testing it. I will admit to writing some lines of code the old skool way, but I really like to have that 100%. (I guess I was an over-achiever in school, er, skool.)
However, with that I'd say that the benefits of TDD far outweigh the negatives for the simple idea that if you can achieve a good set of tests that cover your application but aren't so fragile that one change breaks them all, you will be able to keep adding new features on day 300 of your project as you did on day 1. This doesn't happen with all those who try TDD thinking it's a magic bullet to all their bug-ridden code, and so they think it can't work, period.
Personally I have found that with TDD, I write simpler code, I spend less time debating if a particular code solution will work or not, and that I have no fear to change any line of code that doesn't meet the criteria set forth by the team.
TDD is a tough discipline to master, and I've been at it for a few years, and I still learn new testing techniques all the time. It is a huge time investment up front, but, over the long term, your sustainability will be much greater than if you had no automated unit tests. Now, if only my bosses could figure this out.
On your first TDD project there are two big losses: time and personal freedom.
You lose time because:
Creating a comprehensive, refactored, maintainable suite of unit and acceptance tests adds major time to the first iteration of the project. This may be time saved in the long run but equally it can be time you don't have to spare.
You need to choose and become expert in a core set of tools. A unit testing tool needs to be supplemented by some kind of mocking framework and both need to become part of your automated build system. You also want to pick and generate appropriate metrics.
You lose personal freedom because:
TDD is a very disciplined way of writing code that tends to rub raw against those at the top and bottom of the skills scale. Always writing production code in a certain way and subjecting your work to continual peer review may freak out your worst and best developers and even lead to loss of headcount.
Most Agile methods that embed TDD require that you talk to the client continually about what you propose to accomplish (in this story/day/whatever) and what the trade-offs are. Once again, this isn't everyone's cup of tea, on both the developers' side of the fence and the clients'.
Hope this helps
TDD requires you to plan out how your classes will operate before you write code to pass those tests. This is both a plus and a minus.
I find it hard to write tests in a "vacuum" --before any code has been written. In my experience I tend to trip over my tests whenever I inevitably think of something while writing my classes that I forgot while writing my initial tests. Then it's time to not only refactor my classes, but ALSO my tests. Repeat this three or four times and it can get frustrating.
I prefer to write a draft of my classes first then write (and maintain) a battery of unit tests. After I have a draft, TDD works fine for me. For example, if a bug is reported, I will write a test to exploit that bug and then fix the code so the test passes.
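A sketch of that bug-first workflow, against an invented bug report ("age is off by one when the birthday hasn't happened yet this year"):

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class AgeCalculatorTest {
        // Written to fail against the buggy code; once the fix below is in,
        // it passes and guards against regression.
        @Test
        public void birthdayNotYetReachedThisYear() {
            // born 2000-12-31, "today" 2009-06-15 -> 8; the buggy code said 9
            assertEquals(8, AgeCalculator.age(2000, 12, 31, 2009, 6, 15));
        }
    }

    class AgeCalculator {
        // fixed version: subtract one if this year's birthday is still ahead
        static int age(int by, int bm, int bd, int ty, int tm, int td) {
            int years = ty - by;
            boolean beforeBirthday = tm < bm || (tm == bm && td < bd);
            return beforeBirthday ? years - 1 : years;
        }
    }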
Prototyping can be very difficult with TDD - when you're not sure what road you're going to take to a solution, writing the tests up-front can be difficult (other than very broad ones). This can be a pain.
Honestly I don't think that for "core development" for the vast majority of projects there's any real downside, though; it's talked down a lot more than it should be, usually by people who believe their code is good enough that they don't need tests (it never is) and people who just plain can't be bothered to write them.
Well, and this is a stretch: you need to debug your tests. Also, there is a certain cost in time for writing the tests, though most people agree that it's an up-front investment that pays off over the lifetime of the application in both time saved debugging and in stability.
The biggest problem I've personally had with it, though, is getting up the discipline to actually write the tests. In a team, especially an established team, it can be hard to convince them that the time spent is worthwhile.
The downside to TDD is that it is usually tightly associated with 'Agile' methodology, which places no importance on documentation of a system; rather, the understanding of why a test 'should' return one specific value rather than any other resides only in the developer's head.
As soon as the developer leaves or forgets the reason that the test returns one specific value and not some other, you're screwed. TDD is fine IF it is adequately documented and surrounded by human-readable (ie. pointy-haired manager) documentation that can be referred to in 5 years when the world changes and your app needs to as well.
When I speak of documentation, this isn't a blurb in code, this is official writing that exists external to the application, such as use cases and background information that can be referred to by managers, lawyers and the poor sap who has to update your code in 2011.
I've encountered several situations where TDD makes me crazy. To name some:
Test case maintainability:
If you're in a big enterprise, chances are that you don't have to write the test cases yourself, or at least most of them were written by someone else before you joined the company. An application's features change from time to time, and if you don't have a system in place to track them, such as HP Quality Center, you'll go crazy in no time.
This also means that it'll take new team members a fair amount of time to grasp what's going on with the test cases. In turn, this translates into more money needed.
Test automation complexity:
If you automate some or all of the test cases into machine-runnable test scripts, you will have to make sure these test scripts are in sync with their corresponding manual test cases and in line with the application changes.
Also, you'll spend time debugging the code that helps you catch bugs. In my opinion, most of these bugs come from the testing team's failure to reflect the application changes in the automation test script. Changes in business logic, GUI and other internal stuff can make your scripts stop running or run unreliably. Sometimes the changes are very subtle and difficult to detect. Once, all of my scripts reported failure because they based their calculations on information from table 1, while table 1 was now table 2 (because someone had swapped the names of the table objects in the application code).
If your tests are not very thorough, you might fall into a false sense of "everything works" just because your tests pass. Theoretically, if your tests pass, the code is working; but if we could write code perfectly the first time, we wouldn't need tests. The moral here is to make sure to do a sanity check on your own before calling something complete; don't just rely on the tests.
On that note, if your sanity check finds something that is not tested, make sure to go back and write a test for it.
The biggest problem is people who don't know how to write proper unit tests. They write tests that depend on each other (and they work great running with Ant, but then all of a sudden fail when I run them from Eclipse, just because they run in a different order). They write tests that don't test anything in particular - they just debug the code, check the result, and change it into a test, calling it "test1". They widen the scope of classes and methods just because it will be easier to write unit tests for them. The code of the unit tests is terrible, with all the classic programming problems (heavy coupling, methods that are 500 lines long, hard-coded values, code duplication), and it is hell to maintain. For some strange reason, people treat unit tests as something inferior to the "real" code, and they don't care about their quality at all. :-(
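The order-dependence problem in particular has a simple cure: build fresh state before every test instead of letting one test lean on another's leftovers. A sketch with an invented Account class:

    import static org.junit.Assert.*;
    import org.junit.Before;
    import org.junit.Test;

    public class AccountTest {
        private Account account;

        @Before
        public void freshFixture() {
            account = new Account();   // each test starts from a known state,
            account.deposit(100);      // so Ant and Eclipse ordering both work
        }

        @Test
        public void withdrawalReducesBalance() {
            account.withdraw(30);
            assertEquals(70, account.balance());
        }

        @Test
        public void cannotOverdraw() {
            assertFalse(account.withdraw(200));
        }
    }

    class Account {
        private int balance;
        void deposit(int amount) { balance += amount; }
        boolean withdraw(int amount) {
            if (amount > balance) return false;
            balance -= amount;
            return true;
        }
        int balance() { return balance; }
    }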
You lose the ability to say you are "done" before testing all your code.
You lose the capability to write hundreds or thousands of lines of code before running it.
You lose the opportunity to learn through debugging.
You lose the flexibility to ship code that you aren't sure of.
You lose the freedom to tightly couple your modules.
You lose the option to skip writing low-level design documentation.
You lose the stability that comes with code that everyone is afraid to change.
You lose a lot of time spent writing tests. Of course, this might be saved by the end of the project by catching bugs faster.
Refocusing on difficult, unforeseen requirements is the constant bane of the programmer. Test-driven development forces you to focus on the already-known, mundane requirements, and limits your development to what has already been imagined.
Think about it, you are likely to end up designing to specific test cases, so you won't get creative and start thinking "it would be cool if the user could do X, Y, and Z". Therefore, when that user starts getting all excited about potential cool requirements X, Y, and Z, your design may be too rigidly focused on already specified test cases, and it will be difficult to adjust.
This, of course, is a double edged sword. If you spend all your time designing for every conceivable, imaginable, X, Y, and Z that a user could ever want, you will inevitably never complete anything. If you do complete something, it will be impossible for anyone (including yourself) to have any idea what you're doing in your code/design.
You will lose large classes with multiple responsibilities.
You will also likely lose large methods with multiple responsibilities.
You may lose some ability to refactor, but you will also lose some of the need to refactor.
Jason Cohen said something like:
TDD requires a certain organization for your code. This might be architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable.
I say this indicates a missed abstraction -- if the private code really needs to be tested, it should probably be in a separate class.
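A sketch of that extraction, with invented names: rather than exposing Invoice's private tax logic to the tests, the logic moves into its own small class that can be tested directly.

    class TaxRule {                                  // extracted, now testable on its own
        private final double rate;
        TaxRule(double rate) { this.rate = rate; }
        double taxOn(double amount) { return amount * rate; }
    }

    class Invoice {
        private final TaxRule taxRule = new TaxRule(0.20);
        double totalWithTax(double net) {
            return net + taxRule.taxOn(net);         // Invoice stays encapsulated
        }
    }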
Dave Mann
The biggest downside is that if you really want to do TDD properly, you will have to fail a lot before you succeed. Given how many software companies work (dollars per KLOC), you will eventually get fired. Even if your code is faster, cleaner, easier to maintain, and has fewer bugs.
If you are working in a company that pays you by the KLOCs (or requirements implemented -- even if not tested) stay away from TDD (or code reviews, or pair programming, or Continuous Integration, etc. etc. etc.).
I second the answer about initial development time. You also lose the ability to comfortably work without the safety of tests. I've also been described as a TDD nutbar, so you could lose a few friends ;)
It's perceived as slower. Long term that's not true in terms of the grief it will save you down the road, but you'll end up writing more code, so arguably you're spending time on "testing not coding". It's a flawed argument, but you did ask!
It can be hard and time-consuming to write tests for "random" data like XML feeds and databases (not that hard, though). I've spent some time lately working with weather data feeds. It's quite confusing writing tests for that, at least as I don't have too much experience with TDD.
You have to write applications in a different way: one which makes them testable. You'd be surprised how difficult this is at first.
Some people find the concept of thinking about what they're going to write before they write it too hard. Concepts such as mocking can be difficult for some too. TDD in legacy apps can be very difficult if they weren't designed for testing. TDD around frameworks that are not TDD friendly can also be a struggle.
TDD is a skill so junior devs may struggle at first (mainly because they haven't been taught to work this way).
Overall, though, the cons get solved as people become skilled, and you end up abstracting away the 'smelly' code and having a more stable system.
unit tests are more code to write, and thus a higher upfront cost of development
they are more code to maintain
additional learning is required
Good answers all. I would add a few ways to avoid the dark side of TDD:
I've written apps to do their own randomized self-test. The problem with writing specific tests is even if you write lots of them they only cover the cases you think of. Random-test generators find problems you didn't think of.
The whole concept of lots of unit tests implies that you have components that can get into invalid states, like complex data structures. If you stay away from complex data structures there's a lot less to test.
To the extent your application allows it, be shy of design that relies on the proper ordering of notifications, events and side-effects. Those can easily get dropped or scrambled so they need a lot of testing.
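A minimal sketch of the randomized idea (the property here - sorted output is ordered - is just a stand-in for whatever invariant your own component guarantees):

    import static org.junit.Assert.*;
    import java.util.*;
    import org.junit.Test;

    public class RandomizedSortTest {
        @Test
        public void sortedOutputIsOrderedForRandomInputs() {
            Random rng = new Random(1234);          // fixed seed -> reproducible
            for (int run = 0; run < 1000; run++) {
                List<Integer> data = new ArrayList<>();
                int n = rng.nextInt(50);
                for (int i = 0; i < n; i++) data.add(rng.nextInt());
                Collections.sort(data);
                for (int i = 1; i < data.size(); i++)
                    assertTrue(data.get(i - 1) <= data.get(i));
            }
        }
    }

A thousand random inputs will wander into cases no hand-written list of examples would ever include.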
Let me add that if you apply BDD principles to a TDD project, you can alleviate a few of the major drawbacks listed here (confusion, misunderstandings, etc.). If you're not familiar with BDD, you should read Dan North's introduction. He came up with the concept in answer to some of the issues that arose from applying TDD in the workplace. Dan's intro to BDD can be found here.
I only make this suggestion because BDD addresses some of these negatives and acts as a stop-gap. You'll want to consider this when collecting your feedback.
It takes some time to get into it and some time to start doing it in a project but... I always regret not doing a Test Driven approach when I find silly bugs that an automated test could have found very fast. In addition, TDD improves code quality.
You have to make sure your tests are always up to date, the moment you start ignoring red lights is the moment the tests become meaningless.
You also have to make sure the tests are comprehensive, or the moment a big bug appears, the stuffy management type you finally convinced to let you spend time writing more code will complain.
The person who taught my team agile development didn't believe in planning; you only wrote as much as was needed for the tiniest requirement.
His motto was refactor, refactor, refactor. I came to understand that refactor meant 'not planning ahead'.
Development time increases: every method needs testing, and if you have a large application with dependencies, you need to prepare and clean up your data for tests (see the fixture sketch below).
TDD requires a certain organization for your code. This might be inefficient or difficult to read. Or even architecturally wrong; for example, since private methods cannot be called outside a class, you have to make methods non-private to make them testable, which is just wrong.
When code changes, you have to change the tests as well. With refactoring this can be a lot of extra work.
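As a sketch of that preparation/cleanup cost (the in-memory store and the names are invented, standing in for a real database):

    import static org.junit.Assert.assertEquals;
    import java.util.*;
    import org.junit.*;

    public class ReportTest {
        private FakeOrderStore store;

        @Before
        public void prepareData() {
            store = new FakeOrderStore();  // fresh, known data before every test
            store.add("order-1", 100);
            store.add("order-2", 250);
        }

        @After
        public void cleanUp() {
            store.clear();                 // leave nothing behind for the next test
        }

        @Test
        public void totalsAllOrders() {
            assertEquals(350, new Report(store).total());
        }
    }

    class FakeOrderStore {
        private final Map<String, Integer> orders = new HashMap<>();
        void add(String id, int amount) { orders.put(id, amount); }
        void clear() { orders.clear(); }
        Collection<Integer> amounts() { return orders.values(); }
    }

    class Report {
        private final FakeOrderStore store;
        Report(FakeOrderStore store) { this.store = store; }
        int total() { return store.amounts().stream().mapToInt(Integer::intValue).sum(); }
    }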