Are you really using unit tests? [closed]

I have been involved in a lot of projects, both old and new, and one thing they have in common is that almost none of them have been using unit testing. I prefer to use it, but often the customer isn't ready to pay for that and assumes that the code just works as it should.
So, do you use unit testing in your projects, or do you rely on your coding skills?

Using unit-testing is a coding skill.
I find it adds very little overhead to coding time. On top of that the code produced tends to be much easier to understand and to refactor, with an untold reduction in maintenance time.
A full discussion of the benefits here: Is Unit Testing worth the effort?

I'll be honest. I've only recently started writing unit tests, and when I go back to modify an old DLL from my bad old days, I'll hunker down and write unit tests to get it near 100% code coverage. I say "near" because the last few percent can be hard to reach, given how the code is structured: it wasn't written with mocking or unit testing in mind (this happens a lot with abstract classes or classes that P/Invoke into native code). And while I understand that 100% code coverage is not the same thing as covering 100% of all code execution paths, I've found that having the tests there is a good way to tell when I'm doing something that's going to break a lot of things.
To be honest, this is probably one reason why it has taken many developers (including myself) so long to get around to adopting unit testing (let's not get into TDD just yet).
I'm not saying that any of these are legitimate reasons, but they did more or less go through my head, and I bet they went through some of yours too:
First, there's a thought along the lines of "Oh, I've written mountains of code with zero tests, and seeing as I'm already crushed by that mountain, I don't possibly have the time to add several more mountains of test code on top of it."
Second, nobody likes to talk about the fact that the classic code-run-debug cycle actually works well enough when you're:
writing new code that does not alter the behavior of old code
working on a relatively small software project or a throwaway utility
the sole developer on a project (you can keep most of it in your head, up to a point)
Third, unit tests are easier to appreciate when you're maintaining existing code, and if you're always writing new code, well, the benefits aren't immediately obvious. When your job is to churn out a utility and never touch it again, a developer can see little value in the unit test because they probably won't be working on that utility or at the company by the time a maintenance programmer (who wishes there was a unit test) comes around.
Fourth, unit testing is a fairly recent innovation. (Oh, hush ... I know the idea has been around forever, especially in mission-critical and DoD applications, but for the "Joe the Plumber Developer" types like me? Unit testing wasn't mentioned at all during my entire CS undergraduate career in the early part of this decade; in 2008, I hear it's a requirement for all projects from level 101 up. Visual Studio didn't even get a built-in testing framework until this year, and even then only in the professional edition.) Was it blissful ignorance on my part? Sure, and I think a lot of people who code because it's their day job (which is fine) simply aren't aware. If they are, then they know that they have to stay current, but learning a new methodology (particularly one that involves writing more code and taking more up-front time when you needed that new feature done yesterday) is exactly the kind of thing that gets pushed back. This is the long tail, and in a decade or so our talks about unit testing will seem as quaint as our mutterings about object-oriented programming enabling a new era of development during the 1990s. Heck, if I've started unit testing, the end must be near, because I'm usually an idiot.
Fifth, unit testing some areas of code is really difficult, and this scared me away for a while. Multi-threaded event queues, graphical rendering, and GUIs come to mind. Many developers realize this and think, "Well heck, unit testing would be great for library code, but when was the last time I wrote just a class library?" It takes a while to come around. To that end, the absence of unit tests does not necessarily mean bad code lies ahead.
Finally, the level of evangelism that sometimes comes from unit testing advocates can be intimidating and off-putting. Seriously, I thought I was a moron because I just couldn't grasp how to mock a certain interface and why I would want to. Selfishly, I wanted to hate unit testing if just for the fact that everybody would not shut UP about it. No silver bullet.
So, apologies for the rambling post. In summary:
I did not unit test for a long time. Now I do. Unit tests are a great tool, especially during maintenance.
For existing projects, you can at least go in and write a test that documents a current input and its output. When you change that big mess of murky code in the future, you'll be able to see whether you've altered that little snapshot. This is also an excellent way to change the development culture in your company.
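For instance, a characterization test can be as small as the following JUnit sketch (LegacyPricing, its method, and its numbers are hypothetical stand-ins for whatever your murky code actually does today):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical stand-in for some murky legacy code.
class LegacyPricing {
    double discountedTotal(double total, String tier) {
        // Nobody remembers why GOLD only gets the discount above 50...
        if ("GOLD".equals(tier) && total > 50) return total * 0.9;
        return total;
    }
}

public class LegacyPricingCharacterizationTest {
    // Pin down what the code does TODAY, odd or not. If a future
    // change alters this snapshot, the test will tell you.
    @Test
    public void goldDiscountMatchesCurrentBehavior() {
        assertEquals(90.00, new LegacyPricing().discountedTotal(100.00, "GOLD"), 0.001);
    }
}
```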
Let the flames begin! =)

Unit testing can't be an afterthought; it has to be something that factors into your design. I would go so far as to say that even if you never write or call a single unit test, the benefits it has in leading to tight, component-driven software are worth the effort 95% of the time.

Writing and running unit tests is part of a healthy coding process; it is not an add-on the customer should get to pay or not pay for.
Testing strategy is a coding issue just like any other: what data structures to use, variable naming conventions, comment standards, etc.

I use unit testing, and TDD, whenever I can. However, in my experience, for every unit-test advocate there are three or more developers who don't really believe writing unit tests is worth the effort and think that it slows down development. These people tend to keep quiet, though, so you are unlikely to see many of them here.

I like Bob Martin's analogy: imagine you're a surgeon. What would you say to a patient who wanted to pay for the surgery but told you to skip washing up beforehand?
When a client hires me to code they are hiring me as a professional, as someone with the skills and discipline of a professional. My job is to give them "code just works as it should", and I know the best way for me to do that is to use TDD.

After coming onto a couple of projects that were in production but needed major new functionality, one of my bottom lines as a technical lead starting up a project is that unit-tests are a must.
It just costs too much to try to rewrite code that has been written without unit tests. The code is invariably poorly structured (a multi-thousand-line web service all in a single code-behind, anyone?), and making changes to it (even when it is well structured) without introducing bugs is a really painful process.
This becomes particularly true when a project enters fire-fighting mode (and not having unit tests contributes to projects getting into that state, too): customers are getting grumpy and have lost faith in the project, and there are few things worse than being the poor guy trying to get the next fix in without introducing a whole pile of regression bugs - without even having unit tests to lean on.
Those situations can be so easily avoided or at least mitigated by explaining the value of tests up front. Of course there are situations where unit tests aren't so important but they are truly the exception.
So yes - I insist on unit tests and have spent a lot of time fixing the messes made by other developers who relied on their coding skills.

I often use unit testing for complex mathematical algorithms, for example for functions like LineIntersectsLine, where you can define some important examples to test. After that, the function is much easier to keep under control: you can simply rewrite or optimize it and check that it still works, or you can add new tests as you encounter bugs and then correct them.
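A sketch of what those tests might look like in JUnit, assuming a hypothetical Geometry.lineIntersectsLine(ax, ay, bx, by, cx, cy, dx, dy) that reports whether two line segments intersect:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Geometry.lineIntersectsLine is the (hypothetical) function under test.
public class LineIntersectionTest {

    // Crossing diagonals of the unit square must intersect.
    @Test
    public void crossingSegmentsIntersect() {
        assertTrue(Geometry.lineIntersectsLine(0, 0, 1, 1,
                                               0, 1, 1, 0));
    }

    // Parallel segments never intersect.
    @Test
    public void parallelSegmentsDoNotIntersect() {
        assertFalse(Geometry.lineIntersectsLine(0, 0, 1, 0,
                                                0, 1, 1, 1));
    }

    // Collinear but disjoint segments: a classic edge case worth
    // pinning down before any rewrite or optimization pass.
    @Test
    public void collinearDisjointSegmentsDoNotIntersect() {
        assertFalse(Geometry.lineIntersectsLine(0, 0, 1, 0,
                                                2, 0, 3, 0));
    }
}
```

With a handful of examples like these in place, an optimized rewrite either keeps the bar green or tells you exactly which geometric case it broke.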

I find newer developers benefit more from unit testing than older ones, who learned in the school of hard knocks about the pitfalls that can make something fail. Unit testing does not lead to good design - it just leads to a design that is more testable. You need to test your code, but the way unit testing is preached is dangerous: "It will force you to design your code better", "How can you test whether it is correct?".
I prefer to have well written structured code that when you read it automatically tells you simply what it is trying to accomplish and how it does it. Functions/classes should be small and concise. They should only have one responsibility. Unit tests don't protect against logical errors.
Unit tests give more false positives than anything else, particularly when a project is first written. Good design trumps tests - tests should be the verification stage, nothing more. I never bought into the tests-come-before-everything-else concept. In my experience this line of thinking favours testability at the expense of extensible code (which may be fine for throwaway projects or one-off utilities, but ironically unit testing isn't as important for those).

I agree with Ken that unit testing is part of software development.
About the cost question, it's true that writing the code plus the unit tests takes longer than writing just the code. However, when you write the code along with its tests - which is called TDD, Test-Driven Development - you end up with "clean code that works". When you just write the code, you then have to make it work, which can be long and painful.
Another benefit is that you know where you are, as the code that has been written has already been unit-tested.
To answer your question: yes, I'm using unit testing on my projects whenever it's possible. I write unit tests for all the new code, and I strive to do the same for legacy code.

I'm a big fan of unit testing, mainly because it's completely addictive - the "code-test-see horrible red bar-fix-test-see lovely green bar" cycle in JUnit seriously gets the endorphins pumping.

My company makes system components for various companies in the Aerospace Industry.
The FAA requires Modified Condition/Decision Coverage for Level A safety-critical flight software (DO-178B). So for verification we do (again from DO-178B):
Analysis of all code and traceability from tests and results to all requirements is typically required (depending on software level).
This process typically also involves:
Requirements based test tools
Code coverage analyser tools
Other names for tests performed in this process can be:
Unit testing
Integration testing
Black box and acceptance testing
Unit testing reveals code errors all the time.

In regards to relying on my coding skills, I find that my coding skills actually improve when I use TDD and rely on my unit tests to correct me when I break a test. You learn how to use many language features because you can make a tweak to test out an assumption.

I know from experience that when I write code without unit tests, I'll get a few issues that come up to fix, but when I've written the unit tests I rarely hear of issues. Also, from time spent writing unit tests I write better code to begin with. I look at methods and think of the ways they could fail and build them from the start not to fail or at least to fail in a better way.
Unit tests ARE essential to writing better code, but it also will make you a better developer because you SEE areas where things could go wrong and fix them before you ever get to testing.

Unit testing is an essential part of development, and (I have found) will actually reduce the time to completion of a project while improving overall quality, especially when done in a TDD fashion.

I do unit test. I am currently building a constraint validation engine and my customers want it bulletproof. Well, without unit tests, I would die from stress...
So yes, I use it !

Most functions that I write have tests. Like many have said up there, unit testing is an essential part of software engineering. So, to answer your question, yes, I REALLY write and run tests.
Also, with continuous integration, regression tests will be automated and constantly reported.

Another very useful reminder of why you want to unit test wherever you can just showed up in this post: Top 12 Reasons to Write Unit Tests. I need to have that engraved on my retinas.
I can testify personally to the value of thinking about how something will be tested from the beginning, with tests developed through each iteration. I need to be more systematic, myself. I am printing out that list right now.

Your customers would probably save money overall if unit testing was in place. Some of the errors prevented by unit testing are much more of a liability if found later in the development stage rather than during unit testing. It saves so much headache in the future, now that I use it I don't think I could ever go back.


Any good arguments for not writing unit tests? [closed]

I have read a lot of threads about the joys and awesome points of unit testing. Is there a good argument against unit testing?
In the places I have previously worked, unit testing is usually used as a reason to run with a smaller testing department; the logic is "we have UNIT TESTS!! Our code can't possibly fail!! Because we have unit tests, we don't need real testers!!"
Of course that logic is flawed. I have seen many cases where you cannot trust the tests. I have also seen many cases where the tests become out of date due to tight time schedules - when you have a week to do a big job, most developers would spend the week doing the real code and shipping the product, rather than refactoring the unit tests for that first week, then pleading for at least another week to do the real code, and then spending a final week bringing the unit tests up to date with what they actually wrote.
I have also seen cases where the business logic involved in the unit test was more monstrous and hard to understand than the logic buried in the application. When these tests fail, you have to spend twice as long trying to work out the problem - is the test flawed, or the real code?
Now the zealots are not going to like this bit: the place where I work has largely escaped using unit tests because the developers are of a high enough calibre that it is hard to justify the time and resource to write unit tests (not to mention we use REAL testers). For real. Writing unit tests would have only given us minimal value, and the return on investment is just not there. Sure it would give people a nice warm fuzzy feeling - "I can sleep at night because my code is PROTECTED by unit tests and consequently the universe is at a nice equilibrium", but the reality is we are in the business of writing software, not giving managers warm fuzzy feelings.
Sure, there absolutely are good reasons for having unit tests. The trouble with unit testing and TDD is:
Too many people bet the family farm on it.
Too many people use it as a religion rather than just a tool or another methodology.
Too many people have tried to make money out of it, which has skewed how it should be used.
In reality, it should be used as one of the tools or methodologies that you use on a day to day basis, and it should never become the single methodology.
It's important to understand that it's not free. Tests require effort to write - and, more importantly, maintain.
Project managers and development teams need to be aware of this.
Virtually none of my bugs would have been found by unit testing. My bugs are mostly integration or unexpected-use-case bugs, which in order to have found them earlier, more extensive (and ideally automated) system tests would have been the best bet.
I'm waiting for more evidence-based and less religion-based arguments for unit testing, as dummymo said. And I don't mean some experiment in some academic setting; I mean an argument that for my development scenario and programming ability, cost-benefit would be positive.
So, to agree with other answers to the OP: because they cost time and cost-benefit is not shown.
You have a data access layer that isn't easily adapted for mocking.
The simple truth is, when you write some code, you have to make sure it works before you say it's done. Which means you exercise it - build some scaffolding to call the function, passing some arguments, checking to make sure it returns what you expect. Is it so much extra work, to keep the scaffolding around, so you can run the tests again?
Well yes, actually, it can be. More often than not the tests will fail, even when the code is right, because the data you were using is no longer consistent, etc.
But if you have a unit testing framework in place, the cost of keeping the test code around can be only marginally more work than throwing it away. And while yes, you'll find that many of your test cases will fail because of problems with the data you are using, instead of problems with the code, that will happen less as you learn how to structure your tests so as to minimize the problem.
True, passing your unit tests does not guarantee that your system works. But it does provide some assurance that certain subsystems are working, which isn't nothing. And the test cases provide useful examples of how the functions were meant to be called.
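As a rough illustration (names hypothetical), the jump from throwaway scaffolding to a kept test can be this small:

```java
import static org.junit.Assert.assertArrayEquals;
import org.junit.Test;

// Hypothetical utility under test.
class Tokenizer {
    static String[] split(String line) {
        return line.split(",", -1); // -1 keeps empty fields
    }
}

public class TokenizerTest {
    // Yesterday this check lived in a throwaway main() that printed
    // the result to the console; kept as a test, it re-runs on every build.
    @Test
    public void emptyFieldsArePreserved() {
        assertArrayEquals(new String[] {"a", "b", "", "c"},
                          Tokenizer.split("a,b,,c"));
    }
}
```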
It's not a panacea, but it's a useful practice.
Formal verification.
If you can formally prove the correctness of code, there is no reason to unit test it unless the test condition brings in new variables, in which case, you'd still only have a small amount of unit tests (or prove for the new variables).
Unit tests will tell you whether one specific class method sets a variable correctly (or some variation on that). That does not, in any way, shape, or form, indicate that your application will behave properly or that it will handle the circumstances it will need to handle.
Any problem you can think to write a test for, you are going to handle in your code, and that problem is never going to show up. So then you have 300 tests passing but real-world scenarios you just didn't think to test for. The effort required to create and maintain the tests, then, isn't necessarily worth it.
It's the usual cost/benefit analysis.
Cost: You need to spend time developing and maintaining the tests, and put resources into actually running them.
Benefits are well known (mostly cheaper maintenance/refactoring with less bugs).
So, you balance one against the other in the context of the project.
If it's a throwaway quick hack you know will never be re-used, unit tests might not make sense. Although, to be honest, if I had a dollar for every throwaway quick hack that I saw running years later - or worse, that I had to maintain/refactor years later - I'd probably be one of the venture capitalists investing in SO :)
Non-deterministic outcomes.
In simple cases you can seed the random generator(s) (or mock them somehow) to get reproducible results but when the algorithm is complex this becomes impossible as code changes will alter the demand for random numbers and thus alter the results.
This would rarely be encountered in a business situation but it's quite possible in games.
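A minimal sketch of the simple case (the LootTable class is hypothetical; the point is that the Random is injected, not created internally):

```java
import static org.junit.Assert.assertEquals;
import java.util.Random;
import org.junit.Test;

// Hypothetical game code: injecting the Random is what makes
// seeding (and therefore reproducibility) possible.
class LootTable {
    private static final String[] DROPS = {"rusty sword", "healing potion", "gold coin"};
    private final Random rng;
    LootTable(Random rng) { this.rng = rng; }
    String roll() { return DROPS[rng.nextInt(DROPS.length)]; }
}

public class LootTableTest {
    // Two tables built from the same seed must produce the same
    // sequence of "random" drops, so exact assertions are safe.
    @Test
    public void dropsAreReproducibleForAFixedSeed() {
        LootTable first = new LootTable(new Random(42L));
        LootTable second = new LootTable(new Random(42L));
        for (int i = 0; i < 100; i++) {
            assertEquals(first.roll(), second.roll());
        }
    }
}
```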
Testing is like insurance.
You don't put all your money into it.
But you don't skip your life insurance. (People from the US should still remember the Health Insurance Bill.)
Insurance is an ESSENTIAL evil.
BUT BUT BUT...
You don't get insured expecting a Fatal accident to recover all the money you put into your insurance plan.
In summary,
There is SOME reason to write tests.
Unit tests are sometimes one of the many ways to go forward.
BUT there is NO REASON to focus entirely on writing (unit) tests.
It can discourage experimenting with several variations, especially in early stages of a project. (But it can also encourage experimenting in later stages!)
It can't replace system testing, because it doesn't cover the relationship between components. So if you have to split the available testing time between system testing and unit testing, too much unit testing can have a negative impact on the amount of system testing.
I want to add, that I usually encourage unit testing!
It is impossible to generalize where unit tests are going to provide cost-benefit and where they are not. I see a lot of people arguing strongly in favor of unit testing and blaming people who don't for not using TDD enough, while completely ignoring the fact that applications can differ as much as the real world does.
For instance, it is incredibly hard to get anything useful out of unit tests when you have a lot of integration points, either between systems, and/or between processes and threads of your own application.
If all you ever did were websites like Stack Overflow, where the problem domain is well understood and most solutions are fairly trivial, then yes, writing unit tests has a lot of benefits, but there are lots of applications out there that simply can't be unit tested properly, as they lack, well, "units".
Laziness; sometimes I'm lazy and just don't want to do it!
But seriously, unit testing is great, but if I'm just coding for my own enjoyment I generally don't do it, because the projects are short lived, I'm the only one working on it, and they're not that big.
There is never a reason to never write unit tests.
There are good reasons to not write specific unit tests. (Especially if you use code generation. Of course you could code generate the unit tests to make sure nobody has mucked with the generated code. But that is dependent upon trusting the team.)
Edit:
Oh, and from what I understand, some things in functional programming either compile and thus work, or don't compile.
Would those things need unit tests?
I agree with the notion that there are no good arguments against unit testing in general. There are some specific situations, however, where unit testing may not be a viable option or is at least problematic and/or poses a difficult return-on-investment proposition for the level of effort involved to create and maintain tests.
Here are some examples:
Real-time-dependent behavior in response to external conditions. Some purists may argue that this is not unit testing but rather involves scenarios at an integration or system testing level. However, I've written code for simple, low-level functionality for quasi-embedded applications where it would be useful to at least partially test real-time response via a unit-testing framework for build/regression testing purposes.
Testing behavioral and/or policy-level functionality that requires a complex data description of the environmental state to which the tested code module is responding. This is related to an earlier poster's comment regarding the difficulty of doing unit testing involving a data access layer that isn't easily adapted for mocking. Although the behavior/policy being tested may be relatively simple, it needs to be tested across a complex state description. The value of doing unit testing here is in assuring that rare yet key conditions are handled correctly for a mission-critical application. One wants to mock the latter and create a simulated environment/state, but the cost of doing so may be high.
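When the slice of state a behavior needs is small, a mocking library can keep that cost down. A minimal sketch using Mockito (AccountRepository and OverdraftPolicy are hypothetical; only the stubbing calls are real Mockito API):

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

// Hypothetical slice of a data access layer.
interface AccountRepository {
    double balanceOf(String accountId);
    double overdraftLimitOf(String accountId);
}

// Hypothetical policy whose logic we want to test in isolation.
class OverdraftPolicy {
    private final AccountRepository repo;
    OverdraftPolicy(AccountRepository repo) { this.repo = repo; }
    boolean isInViolation(String accountId) {
        return -repo.balanceOf(accountId) > repo.overdraftLimitOf(accountId);
    }
}

public class OverdraftPolicyTest {
    // Stub only the two facts the policy needs, instead of standing
    // up a database and loading a full environmental state.
    @Test
    public void flagsAccountsThatExceedTheOverdraftLimit() {
        AccountRepository repo = mock(AccountRepository.class);
        when(repo.balanceOf("ACC-1")).thenReturn(-500.00);
        when(repo.overdraftLimitOf("ACC-1")).thenReturn(250.00);

        assertTrue(new OverdraftPolicy(repo).isInViolation("ACC-1"));
    }
}
```

The trouble described above starts when the state description is not two facts but two hundred; at that point the alternatives below start to look like better value.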
There are at least two alternatives to unit testing for the above scenarios:
For real-time or quasi-real-time functionality testing, extensive system testing can be done to try to compensate for the lack of good unit testing. For some applications this may be the only option, e.g., embedded systems involving hardware and/or physical systems.
Create a test harness or system-level simulator that facilitates extensive testing across a range of randomly simulated conditions. This can be useful for testing the policy/behavior scenarios described earlier involving a complex environmental state. Although significant work may be involved in creating the test harness or simulator, the return-on-investment may be a much better value than for isolated unit tests since a much broader range of conditions can be tested.
Since the test environment involves random rather than specific conditions, this approach may not offer quite the level of assurance desired for some mission-critical scenarios. Conducting extensive tests may help make up for this. Alternatively, creating a test harness or system simulator for random conditions may also help with reducing the overall cost of testing specific complex state scenarios since the development cost is now shared across a broader range of testing needs.
In the end, how to best approach testing any given application comes down to cost vs. value. Unit testing is one of the best options and should always be used where feasible, but it is not universally applicable to all scenarios. Like many things in software, sometimes it will just be a judgment call one has to make and then be prepared to make adjustments based on the outcome.
Unit testing is a trade-off. I see two problems:
It takes time. Not only to write the tests, but also (and this is the most annoying part) if you want to make an important change, you now have two places you need to modify. In the worst case it could even discourage you from re-architecting your codebase.
It only protects against problems that you think could arise, and you mostly cannot test against side-effects. This can lead to a false sense of security as mentioned before.
I agree unit testing is a valuable tool for increasing the reliability of enterprise software with a relatively stable codebase. But for personal projects or infant projects, I think a generous use of asserts in your code is a much better trade-off.
I wouldn't say it's an argument against it, but for us we have a legacy application with a TON of code, and written in COBOL. It is virtually impossible at this point to say we want to implement unit testing and do it with any degree of accuracy or within a reasonable time frame for business as pointed out by duffymo.
So I guess to add onto that, maybe one argument would be the inability (in some cases) of trying to implement unit tests after development has been completed (and maintained for years).
Instead of completely getting rid of them, we write unit tests only for core functionality such as payment authorization, user authentication, and so on. It is very useful, as there will always be some touch points that are very sensitive to code changes in a large code base, and you want some way to verify those touch points work without failing in QA.
For writing unit tests in general, learning curve is the biggest reason I know of to not bother. I have been trying to learn good unit testing for about 1.5 years now, and I feel like I'm just getting good at it (writing audit log spies, mocking, testing 1 constraint per test, etc.), although I feel it has sped up development for me for about 1 year of that time. So call it 6 months of struggling through it before it really started paying off. (I was still doing "real" work during that time, of course.)
Most of the pain experienced during that time was due to failure to follow the guidelines of good unit testing.
For a variety of specific cases, ability to unit test may be blocked; others have commented on some of those.
In Test Driven Development, the unit tests are actually more importantly a way to design your code to be testable to begin with. As it turns out your code tends to be more modular and writing the tests helps to flesh out APIs and so forth.
Too often though, you find yourself developing the code then writing the tests, commenting out the code you just wrote to ensure that the tests fail, then selectively removing the comment tokens to make the tests pass. Why? Well because it's much harder to write tests than it is to write code in some cases. It's also often much more difficult to write code that can be tested in a completely automated way. Think about user interfaces, code that generates images or pdfs, database transactions.
So unit tests do help a lot, but expect to write about three times as much code for the tests as you will write for the actual code. Plus, all of this will need to be maintained. Significant changes to the application will invalidate huge chunks of tests - a good measure of the impact to the system, but still... Are you prepared for that? Are you on a small team where you are doing the job of five developers? If so, then automated TDD development just won't fly - you won't have time to get stuff done fast enough. So then you end up relying on your own manual testing, QA testing stuff as well, and just living with bugs slipping through and then fixing things up ASAP. It's unfortunate, high-pressure, and exasperating, but it's reality in small companies that don't hire enough developers for the work that needs to be done.
No, really I don't. In my experience people who do are presenting a straw man's argument or just don't know how to unit test things that are not obvious how to unit test.
#Khorkak - If you change a feature in your production code, only a handful of your unit tests should be affected. If not, that means you are not decoupling your production code and testing in isolation, but instead exercising large chunks of your production code in integration. That's just poor unit testing skills, and yes, it's a BIG problem. Not only because your unit test code base will become hard to maintain, but also because your production code base will suck and have the same problem.
Unit tests make no sense for disposable code: if the code is a quick-and-dirty proof of concept, something created during a spike to investigate various approaches, or anything else you are SURE will almost always be thrown away, then writing unit tests won't bring much return on the investment. In fact, they could hurt you, as the time spent there is time spent not trying a different approach, etc. (opportunity cost).
The key is being sure that's the case, or having enough of an understanding with teammates that if someone says "that's great, use that one", you then invest the time to bring the code up to the standards for NON-disposable code.
For those who asked for better proof I'd refer you to this page. http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment
Note that many of these are studies by academic types, but done with groups doing real production software work, so personally it seems like they have a pretty good amount of validity.

TDD vs. Unit testing [closed]

My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s).
There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits?
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written?
EDIT
What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pickup the concept, start to move more towards TDD and testing first. Thanks for everyone's input.
FOLLOW UP
We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!
If the team is floundering at implementing TDD, but they weren't creating any Unit Tests before...then start them off by creating Unit Tests after their code is written. Even Unit tests written after the code are better than no Unit Tests at all!
Once they're proficient at Unit Testing (and everything that comes with it), then you can work on getting them to create the tests first...and code second.
It is absolutely still worth writing the unit tests after code is written. It's just that sometimes it's often harder because your code wasn't designed to be testable, and you may have overcomplicated it.
I think a good pragmatic way to bring a team into TDD is to provide the alternative method of "test-during development" in the transition period, or possibly in the long-term. They should be encouraged to TDD sections of code that seem natural to them. However, in sections of code that seem difficult to approach test-first or when using objects that are predetermined by a non-agile A&D process, developers can be given the option of writing a small section of the code, then writing tests to cover that code, and repeating this process. Writing unit tests for some code immediately after writing that code is better than not writing any unit tests at all.
It's in my humble opinion better to have 50% test coverage with "code first, test after" and a 100% completed library, than 100% test coverage and a 50% completed library with TDD. After a while, your fellow developers will hopefully find it entertaining and educational to write tests for all of the public code they write, so TDD will sneak its way into their development routine.
TDD is about design! So if you use it, you will be sure to have a testable design of your code, making it easier to write your tests. If you write tests after the code is written they are still valuable but IMHO you will be wasting time since you will probably not have a testable design.
One suggestion I can give to you to try to convince your team to adopt TDD is using some of the techniques described in Fearless Change: Patterns for Introducing New Ideas, by Mary Lynn Manns and Linda Rising.
I just read this on a calendar: "Every rule, executed to its utmost, becomes ridiculous or even dangerous." So my suggestion is not to be religious about it. Every member of your team must find a balance between what they feel "right" when it comes to testing. This way, every member of your team will be most productive (instead of, say, thinking "why do I have to write this sti**** test??").
So some tests are better than none, tests after the code are better than few tests and testing before the code is better than after. But each step has its own merits and you shouldn't frown upon even small steps.
If they're new to testing, then IMO start off testing code that's already been written and slowly graduate to writing tests first. As someone trying to learn TDD and new to unit testing, I've found it kind of hard to do a complete 180 and change my mindset to write tests before code, so the approach I'm taking is sort of a 50-50 mix: when I know exactly what the code will look like, I'll write the code and then write a test to verify it. For situations where I'm not entirely sure, I'll start with a test and work my way backwards.
Also remember that there is nothing wrong with writing tests to verify code, instead of writing code to satisfy tests. If your team doesn't want to go the TDD route then don't force it on them.
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
There is absolutely no doubt about the fact that there is value in unit tested code (regardless of when tests were written) and I include "the code is unit tested" in the "Definition of Done". People may use TDD or not, as long as they test.
Regarding version control, I like to use "development branches" with a unit tested policy (i.e. the code compiles and builds, all unit tests pass). When features are done, they are published from development branches to the trunk. In other words, the trunk branch is the "Done branch" (No junk on the trunk!) and has a shippable policy (can release at any time) which is more strict and includes more things than "unit tested".
This is something that your team will have to have its own successes with before they begin to believe it in. I'll rant about my nUnit epiphany for anyone who cares:
About 5 years ago I discovered nUnit when working on a project. We had almost completed V1.0 and I created a few tests just to try out this new tool. We had a lot of bugs (obviously!) because we were a new team, on a tight deadline, high expectations (sound familiar?) etc. Anyway we got 1.0 in and started on 1.1. We re-orged the team a little bit and I got 2 devs assigned to me. I did a 1-hour demo for them and told them that everything we wrote had to have a test case with it. We constantly ran "behind" the rest of the team during the 1.1 dev cycle because we were writing more code, the unit tests. We ended up working more but here's the payoff -- when we finally got into testing we had exactly 0 bugs in our code. We helped everyone else debug and repair their bugs. In the postmortem, when the bug counts showed up, it got EVERYONE's attention.
I'm not dumb enough to think you can test your way to success but I am a true believer when it comes to unit tests. The project adopted nUnit and it soon spread to the company for all .Net projects as a result of 1 success. Total time period for our V1.1 release was 9 dev weeks so it was definitely NOT an overnight success. But long term, it proved successful for our project and the company we built solutions for.
There is no doubt that testing (First, While or even After) will save your bacon, and improve your productivity and confidence. I recommend adopting it!
I was in a similar situation. Because I was a "noob" developer, I was often frustrated when working on team projects by the fact that a contribution had broken the build. I did not know if I was to blame or even, in some cases, who to blame. But I was more concerned that I was doing the same thing to my fellow developers. This realisation then motivated me to adopt some TDD strategies. Our team started to have silly games and rules, like you cannot go home till all your tests pass, or if you submit something without a test then you have to buy everyone beer/lunch/etc., and it made TDD more fun.
One of the most useful aspects of unit testing is ensuring the continuing correctness of already working code. When you can refactor at will, let an IDE remind you of compile-time errors, and then click a button to let your tests spot any potential runtime errors (sometimes arising in previously trivial blocks of code), I think you will find your team starting to appreciate TDD. So starting with testing existing code is definitely useful.
Also, to be blunt, I have learned more about how to write testable code by trying to test written code than from starting with TDD. It can be too abstract at first if you are trying to think of contracts that will both accomplish the end goal and allow testing. But when you look at code and can say "This singleton here completely spoils dependency injection and makes testing this impossible," you start to develop an appreciation for what patterns make your testing life easier.
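A before-and-after sketch of that exact smell (all names hypothetical):

```java
// Hard to test: the collaborator is reached through a singleton,
// so every test silently talks to the real system date.
class SystemClock {
    private static final SystemClock INSTANCE = new SystemClock();
    static SystemClock getInstance() { return INSTANCE; }
    String today() { return java.time.LocalDate.now().toString(); }
}

class ReportHeader {
    String title() {
        return "Report for " + SystemClock.getInstance().today();
    }
}

// Easier to test: the clock is injected, so a test can hand in a
// fixed implementation and assert an exact string.
interface Clock {
    String today();
}

class TestableReportHeader {
    private final Clock clock;
    TestableReportHeader(Clock clock) { this.clock = clock; }
    String title() { return "Report for " + clock.today(); }
}

class ReportHeaderTest {
    @org.junit.Test
    public void titleUsesTheInjectedDate() {
        TestableReportHeader header = new TestableReportHeader(() -> "2008-10-01");
        org.junit.Assert.assertEquals("Report for 2008-10-01", header.title());
    }
}
```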
Well, if you do not write tests first, it's not "test driven"; it's just testing. It has benefits in itself, and if you already have a code base, adding tests for it is certainly useful even if it's not TDD but merely testing.
Writing tests first is about focusing on what the code should do before writing it. Yes you also get a test doing that and it's good, but some may argue it's not even the most important point.
What I would do is train the team on toy projects like these (see Coding Dojo, Katas) using TDD (if you can get experienced TDD programmers to participate in such a workshop, it would be even better). When they see the benefits, they will use TDD for the real project. But meanwhile do not force them; if they do not see the benefit, they won't do it right.
If you have design sessions before writing code or have to produce a design doc, then you could add Unit Tests as the tangible outcome of a session.
This could then serve as a specification for how your code should work. Encourage pairing on the design session, to get people talking about how something should work and what it should do in given scenarios. Work out the edge cases, with explicit test cases for them, so everyone knows what the code is going to do if given a null argument, for example.
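For example, that null-argument decision can be written down as an executable test on the spot (Slugify is a hypothetical helper from such a session):

```java
import org.junit.Test;

// Hypothetical helper agreed on in a design session.
class Slugify {
    static String toSlug(String title) {
        if (title == null) {
            throw new IllegalArgumentException("title must not be null");
        }
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }
}

public class SlugifyTest {
    // The edge case as a specification: a null input is a programming
    // error and should fail fast, not quietly return an empty string.
    @Test(expected = IllegalArgumentException.class)
    public void nullInputIsRejected() {
        Slugify.toSlug(null);
    }
}
```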
As an aside, BDD may also be of interest.
You may find some traction by showing an example or two where TDD results in less code being written - because you only write code required to make the test pass, the temptation to gold-plate or build things you aren't gonna need (YAGNI) is easier to resist. Code you don't write doesn't need to be maintained, refactored, etc., so it's a "real savings" that can help sell the concept of TDD.
If you can clearly demonstrate the value in terms of time, cost, code and bugs saved, you may find it's an easier sell.
Starting to build JUnit test classes is the way to start; for existing code it's the only way to start. In my experience it is very useful to create test classes for existing code. If management thinks this will take too much time, you can propose to only write test classes when the corresponding class is found to contain a bug or is in need of cleanup.
For the maintenance process the approach to get the team over the line would be to write JUnit tests to reproduce bugs before you fix them, i.e.
bug is reported
create JUnit test class if needed
add a test that reproduces the bug
fix your code
run the test to show the current code does not reproduce the bug
You can explain that "documenting" bugs in this way will prevent those bugs from creeping back in later. That is a benefit the team can experience immediately.
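A sketch of what the reproducing test might look like in JUnit (the Invoice class and the bug are hypothetical; the shape of the test is the point):

```java
import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Hypothetical production class, shown after the fix.
class Invoice {
    private final List<double[]> lines = new ArrayList<>();
    void addLine(String sku, int quantity, double unitPrice) {
        lines.add(new double[] {quantity, unitPrice});
    }
    double total() {
        double sum = 0;
        for (double[] line : lines) {
            sum += line[0] * line[1];
        }
        return sum;
    }
}

public class InvoiceTotalsTest {
    // Reproduces the reported bug: totals went wrong when an invoice
    // contained a zero-quantity line. This failed before the fix and
    // passes after it, documenting the bug permanently.
    @Test
    public void zeroQuantityLinesDoNotCorruptTheTotal() {
        Invoice invoice = new Invoice();
        invoice.addLine("widget", 2, 10.00);
        invoice.addLine("sample", 0, 99.00); // the line that triggered the bug
        assertEquals(20.00, invoice.total(), 0.001);
    }
}
```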
I have done this in many organizations and I have found the single best way to get TDD started and followed is to set up pair programming. If you have someone else you can count on that knows TDD then the two of you can split up and pair with other developers to actually do some paired programming using TDD. If not I would train someone who will help you to do this before presenting it to the rest of the team.
One of the major hurdles with unit testing, and especially TDD, is that developers don't know how to do it, so they cannot see how it can be worth their time. Also, when you first start out, it is much slower and doesn't seem to provide benefits. It only really provides benefits when you are good at it. By setting up paired programming sessions you can get developers to learn it quickly and get good at it quicker. Additionally, they will be able to see immediate benefits from it as you work through it together.
This approach has worked many times for me in the past.
One powerful way to discover the benefits of TDD is to do a significant rewrite of some existing functionality, perhaps for performance reasons. By creating a suite of tests that do a good job covering all the functionality of the existing code, this then gives you the confidence to refactor to your heart's content with full confidence that your changes are safe.
Note that in this case I'm talking about testing the design or contract - unit tests that test implementation details will not be suitable here. But then again, TDD can't test implementation by definition, as the tests are supposed to be written before the implementation.
TDD is a tool that developers can use to produce better code. I happen to feel that the exercise of writing testable code is at least as valuable as the tests themselves. Isolating the IUT (Implementation Under Test) for testing purposes has the side effect of decoupling your code.
TDD isn't for everyone, and there's no magic that will get a team to choose to do it. The risk is that unit test writers that don't know what's worth testing will write a lot of low value tests, which will be cannon fodder for the TDD skeptics in your organization.
I usually make automated Acceptance Tests non-negotiable, but allow developers to adopt TDD as it suits them. I have my experienced TDDers train/mentor the rest and "prove" the usefulness by example over a period of many months.
This is as much a social/cultural change as it is a technical one.

How much testing is enough? [closed]

I recently spent about 70% of the time on a feature writing integration tests. At one point, I was thinking, "Damn, all this hard work testing it, I know I don't have bugs here, why do I work so hard on this? Let's just skimp on the tests and finish it already…"
Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.
So … where do you draw your line on what to test and what to take on faith? Do you test everything, or just the code where you expect most of the bugs?
In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing efforts on the things that are most likely to fail, and/or the things that it is most important that do not fail (i.e. take probability and consequence into consideration).
Think, instead of blindly following one metric such as code coverage.
Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.
When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.
Good question!
Firstly - it sounds like your extensive integration testing paid off :)
From my personal experience:
If its a "green fields" new project,
I like to enforce strict unit testing
and have a thorough (as thorough as
possible) integration test plan
designed.
If its an existing piece of software
that has poor test coverage, then I
prefer to design a set integration
tests that test specific/known
functionality. I then introduce
tests (unit/integration) as I
progress further with the code base.
How much is enough? Tough question - I dont think that there ever can be enough!
"Too much of everything is just enough."
I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.
I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity": "Bill" asks a co-worker to review his code, and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs; the more eyes you use, the better chance you have of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know if everything is still kosher.
How many tests are enough does depend on whether you are developing from scratch or maintaining an existing system.
When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code that is visible in some form (public/package/protected), private methods can be considered to be covered by the tests for the more-visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.
Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously existing code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Using unit test code forces you to do things like pass in as arguments those important things like database connections or instance factories instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails it means one of two things, either the code no longer does what was expected or that the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
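As a small sketch of that "pass it in as an argument" point (the class and query are hypothetical; the JDBC calls are standard):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// The connection arrives through the constructor instead of being
// conjured up internally, so a test can hand in an in-memory
// database (or a fake) and the class keeps a single responsibility.
class CustomerLookup {
    private final Connection connection;

    CustomerLookup(Connection connection) {
        this.connection = connection;
    }

    boolean exists(String customerId) throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(
                "SELECT 1 FROM customers WHERE id = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```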
When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along, you will be free to dig into whatever module you see fit without the nagging worry, "What if I break something?", which leads to spending more time testing even small fixes than it took to actually change the code.
So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage that increases your confidence in your ability to keep making changes or adding features; otherwise you've probably reached the point of diminishing returns.
If you or your team has been tracking metrics, you could see how many bugs are found for every test as the software life-cycle progresses. If you've defined an acceptable threshold where the time spent testing does not justify the number of bugs found, then THAT is the point at which you should stop.
You will probably never find 100% of your bugs.
I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way, and now you have some nice documentation and regression tests that can be run on every build.
Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.
As with everything in life it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course you can be wrong in your estimate, but overtesting to ensure that your assumptions are right depends on how significant a bug would be vs. the need to move on to the next feature/release/project.
Note: My answer primarily address integration testing. TDD is very different. It was covered on SO before, and there you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.
I prefer to unit test as much as possible. One of the greatest side-effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require one to change the way they write code for the better. At least, that's how it worked out for me.
My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.
That said, I default to unit test coverage requirements of 90% (line and branch) using JUnit and Cobertura (for Java). When I feel that these requirements cannot be met due to the nature of a specific class (or bugs in Cobertura), then I make exceptions.
Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.
I test Everything. I hate it, but it's an important part of my work.
I worked in QA for 1.5 years before becoming a developer.
You can never test everything (I was told in training that testing all the permutations of a single text box would take longer than the lifetime of the known universe).
As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and quality of the final product is a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given this responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.
Testing is a risk reduction exercise, and the client/user will know what is important and what isn't. Using test-first development from Extreme Programming will be helpful, so you have a good test base and can regression test after a change.
It's important to note that, due to natural selection, code can become "immune" to tests. Code Complete says that when fixing a defect you should write a test case for it, and that it's also a good idea to look for similar defects and write test cases for those.

When to unit-test vs manual test

While unit-testing seems effective for larger projects where the APIs need to be industrial strength (for example development of the .Net framework APIs, etc.), it seems possibly like overkill on smaller projects.
When is the automated TDD approach the best way, and when might it be better to just use manual testing techniques, log the bugs, triage, fix them, etc.?
Another issue: when I was a tester at Microsoft, it was emphasized to us that there was value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end. Can TDD break this idea and create a situation where a developer might not be the right person to rigorously find their own mistakes? It may be automated, but it would seem that there are many ways to write the tests, and it is questionable whether a given set of tests will "prove" that quality is acceptable.
The effectiveness of TDD is independent of project size. I will practice the three laws of TDD even on the smallest programming exercise. The tests don't take much time to write, and they save an enormous amount of debugging time. They also allow me to refactor the code without fear of breaking anything.
TDD is a discipline similar to the discipline of double-entry bookkeeping practiced by accountants. It prevents errors in the small. Accountants enter every transaction twice: once as a credit, and once as a debit. If no simple errors were made, the balance sheet will sum to zero. That zero is a simple spot check that prevents the executives from going to jail.
By the same token programmers write unit tests in advance of their code as a simple spot check. In effect, they write each bit of code twice; once as a test, and once as production code. If the tests pass, the two bits of code are in agreement. Neither practice protects against larger and more complex errors, but both practices are nonetheless valuable.
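To make the double-entry analogy concrete, here is a minimal, hypothetical sketch (the Invoice class and its discount rule are invented for the example): the rule is stated once as production code and once as NUnit tests, and a failing test signals that the two statements disagree.

using NUnit.Framework;

// Production code: one statement of the rule.
public class Invoice
{
    public decimal Subtotal { get; set; }

    // Orders of 100.00 or more get a 10% discount.
    public decimal Total() =>
        Subtotal >= 100m ? Subtotal * 0.90m : Subtotal;
}

// Test code: the same rule stated a second time, as examples.
// If the two statements disagree, a test fails - the "balance sheet"
// doesn't sum to zero.
[TestFixture]
public class InvoiceTests
{
    [Test]
    public void NoDiscountBelowThreshold() =>
        Assert.AreEqual(99m, new Invoice { Subtotal = 99m }.Total());

    [Test]
    public void TenPercentDiscountAtThreshold() =>
        Assert.AreEqual(90m, new Invoice { Subtotal = 100m }.Total());
}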
The practice of TDD is not really a testing technique; it is a development practice. The word "test" in TDD is more or less a coincidence. As such, TDD is not a replacement for good testing practices and good QA testers. Indeed, it is a very good idea to have experienced testers write QA test plans independently of (and often in advance of) the programmers writing the code (and their unit tests).
It is my preference (indeed my passion) that these independent QA tests are also automated using a tool like FitNesse, Selenium, or Watir. The tests should be easy to read by business people, easy to execute, and utterly unambiguous. You should be able to run them at a moment's notice, usually many times per day.
Every system also needs to be tested manually. However, manual testing should never be rote. A test that can be scripted should be automated. You only want to put humans in the loop when human judgement is needed. Therefore humans should be doing exploratory testing, not blindly following test plans.
So, the short answer to the question of when to unit-test versus manual test is that there is no "versus". You should write automated unit tests first for the vast majority of the code you write. You should have automated QA acceptance tests written by testers. And you should also practice strategic exploratory manual testing.
Unit tests aren't meant to replace functional/component tests. Unit tests are narrowly focused, so they won't hit the database, external services, and so on. Integration tests do that, though you can keep them focused as well. The bottom line, on the specific question, is that unit tests don't replace those manual tests.
Now, automated functional tests plus automated component tests can certainly replace manual tests. Who actually writes those will depend a lot on the project and the approach taken to it.
Update 1: Note that if developers are creating automated functional tests, you still want to review that those have the appropriate coverage, complementing them as needed. Some developers create automated functional tests with their "unit" test framework, because they have to do smoke tests regardless of the unit tests, and it really helps to have those automated :)
Update 2: Unit testing isn't overkill for a small project, nor is automating the smoke tests or using TDD. What is overkill is having the team do any of that for the first time on the small project. Each of these has an associated learning curve (especially unit testing and TDD), and it won't always be done right at first. You also want someone who has been doing it for a while involved, to help avoid pitfalls and get past some coding challenges that aren't obvious when you're starting out. The problem is that it isn't common for teams to have these skills.
TDD is the best approach whenever it is feasible. TDD testing is automatic, quantifiable through code coverage, and a reliable method of ensuring code quality.
Manual testing requires a huge amount of time (as compared to TDD) and suffers from human error.
There is nothing saying that TDD means only developers test. Developers should be responsible for coding a percentage of the test framework; QA should be responsible for a much larger portion. Developers test APIs the way they want to test them, while QA tests APIs in ways I never would have thought to, doing things that, while seemingly crazy, are actually done by customers.
I would say that unit tests are a programmer's aid to answer the question:
Does this code do what I think it does?
This is a question they need to ask themselves a lot, and programmers like to automate anything they do a lot where they can.
The separate test team needs to answer a different question:
Does this system do what I (and the end users) expect it to do? Or does it surprise me?
There is a whole massive class of bugs, stemming from the programmer or designer having a different idea of what is correct, that unit tests will never pick up.
According to studies of various projects (1), unit tests find 15-50% of the defects (30% on average). That doesn't make them the worst bug finder in your arsenal, but they're not a silver bullet either. There are no silver bullets; any good QA strategy combines multiple techniques.
A test that is automated runs more often, so it finds defects earlier and reduces their total cost immensely - that is the true value of test automation.
Invest your resources wisely and pick the low-hanging fruit first.
I find that automated tests are easiest to write and maintain for small units of code - isolated functions and classes. End-user functionality is more easily tested manually - and a good tester will find many oddities beyond the required tests. Don't set them up against each other; you need both.
Dev vs. Testers: Developers are notoriously bad at testing their own code. The reasons are psychological, technical, and, last but not least, economical - testers are usually cheaper than developers. But developers can do their part and make testing easier. TDD makes testing an intrinsic part of program construction, not just an afterthought; that is the true value of TDD.
Another interesting point about testing: there's no point in 100% coverage. Statistically, bugs follow an 80:20 rule - the majority of bugs are found in a small portion of the code. Some studies suggest the skew is even sharper - so tests should focus on the places where bugs turn up.
(1) Programming Productivity, Jones 1986, et al., quoted from Code Complete, 2nd ed. But as others have said, unit tests are only one kind of test; integration, regression, and system tests can be - at least partially - automated as well.
My interpretation of the results: "many eyes" has the best defect detection, but only if you have some formal process that makes them actually look.
Every application gets tested.
Some applications get tested in the form of does my code compile and does the code appear to function.
Some applications get tested with unit tests. Some developers are religious about unit tests, TDD, and code coverage, to a fault. Like everything else, too much is more often than not a bad thing.
Some applications are lucky enough to get tested by a QA team. Some QA teams automate their testing; others write test cases and test manually.
Michael Feathers, the author of Working Effectively with Legacy Code, wrote that code not wrapped in tests is legacy code. Until you have experienced the Big Ball of Mud, I don't think any developer truly understands the benefit of good application architecture and a suite of well-written unit tests.
Having different people test is a great idea. The more people that can look at an application the more likely all the scenarios will get covered, including the ones you didn't intend to happen.
TDD has gotten a bad rap lately. When I think of TDD I think of dogmatic developers meticulously writing tests before they write the implementation. That is the letter of it, but what gets overlooked is that by writing the tests (first, or shortly after), the developer experiences the method/class in the shoes of its consumer. Design flaws and shortcomings become immediately apparent.
I would argue that the size of the project is irrelevant. What matters is the lifespan of the project: the longer a project lives, the greater the likelihood that a developer other than the one who wrote it will work on it. Unit tests are documentation of the expectations of the application - a manual of sorts.
Unit tests can only go so far (as can all other types of testing). I look on testing as a kind of "sieve" process. Each different type of testing is like a sieve that you are placing under the outlet of your development process. The stuff that comes out is (hopefully) mostly features for your software product, but it also contains bugs. The bugs come in lots of different shapes and sizes.
Some of the bugs are pretty easy to find because they are big or get caught in basically any kind of sieve. On the other hand, some bugs are smooth and shiny, or don't have a lot of hooks on the sides so they would slip through one type of sieve pretty easily. A different type of sieve might have different shape or size holes so it will be able to catch different types of bugs. The more sieves you have, the more bugs you will catch.
Obviously, the more sieves you have in the way, the slower it is for the features to get through as well, so you'll want to find a happy medium where you aren't spending so much time testing that you never get to release any software.
The nicest point (IMO) of automated unit tests is that when you change (improve, refactor) the existing code, it's easy to test that you didn't break it. It would be tedious to test everything manually again and again.
Your question seems to be more about automated testing vs manual testing. Unit testing is a form of automated testing but a very specific form.
Your remark about having separate testers and developers is right on the mark though. But that doesn't mean developers shouldn't do some form of verification.
Unit testing is a way for developers to get fast feedback on what they're doing. They write tests to quickly run small units of code and verify their correctness. It's not really testing in the sense you seem to use the word, just as a syntax check by a compiler isn't testing. Unit testing is a development technique. Code that's been written using this technique is probably of higher quality than code written without it, but it still has to go through quality control.
The question about automated testing vs manual testing for the test department is easier to answer. Whenever the project gets big enough to justify the investment of writing automated tests you should use automated tests. When you've got lots of small one-time tests you should do them manually.
Having been on both sides, QA and development, I would assert that someone should always manually test your code. Even if you are using TDD, there are plenty of things that you as a developer may not be able to cover with unit tests, or may not think about testing. This especially includes usability and aesthetics. Aesthetics includes proper spelling, grammar, and formatting of output.
Real life example 1:
A developer was creating a report we display on our intranet for managers. There were many formulas, all of which the developer tested before the code came to QA. We verified that the formulas were, indeed, producing the correct output. What we asked development to correct, almost immediately, was the fact that the numbers were displayed in pink on a purple background.
Real life example 2:
I write code in my spare time, using TDD. I like to think I test it thoroughly. One day my wife walked by when I had a message dialog up, read it, and promptly asked, "What on Earth is that message supposed to mean?" I thought the message was rather clear, but when I reread it I realized it was talking about parent and child nodes in a tree control, and probably wouldn't make sense to the average user. I reworded the message. In this case, it was a usability issue, which was not caught by my own testing.
unit-testing seems effective for larger projects where the APIs need to be industrial strength, it seems possibly like overkill on smaller projects.
It's true that unit tests of a moving API are brittle, but unit-testing is also effective on API-less projects such as applications. Unit-testing is meant to test the units a project is made of, ensuring that every unit works as expected. This is a real safety net when modifying - refactoring - the code.
As far as the size of the project is concerned, it's true that writing unit tests for a small project can be overkill - where I'd define a small project as a small program that can be tested manually, very easily and quickly, in no more than a few seconds. But a small project can grow, in which case it's an advantage to have unit tests at hand.
there was a value in having the developers and testers be different people, and that the tension between these two groups could help create a great product in the end.
Whatever the development process, unit-testing is not meant to supersede any other stage of testing, but to complement them with tests at the development level, so that developers get very early feedback without having to wait for an official build and official test. With unit-testing, the development team delivers code that works downstream - not bug-free code, but code solid enough for the test team(s) to test.
To sum up, I test manually when it's really very easy, or when writing unit tests is too complex, and I don't aim to 100% coverage.
I believe it is possible to combine the expertise of QA/testing staff (defining the tests / acceptance criteria) with the TDD concept of using a developer-owned API (as opposed to a GUI or HTTP/messaging interface) to drive an application under test.
It is still critical to have independent QA staff, but we don't need huge manual test teams anymore with modern test tools like FitNesse, Selenium and Twist.
Just to clarify something many people seem to miss:
TDD, in the sense of "write a failing test, write code to make the test pass, refactor, repeat", is usually most efficient and useful when you write unit tests.
You write a unit test around just the class/function/unit of code you are working on, using mocks or stubs to abstract out the rest of the system.
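For example (all names hypothetical), a test can stub out a payment gateway so that only the unit's own logic is exercised - no network, no real payment provider:

using NUnit.Framework;

public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

public class OrderProcessor
{
    private readonly IPaymentGateway _gateway;
    public OrderProcessor(IPaymentGateway gateway) => _gateway = gateway;

    // The unit under test: succeeds only for positive amounts that charge OK.
    public bool PlaceOrder(decimal amount) => amount > 0 && _gateway.Charge(amount);
}

// A hand-rolled stub standing in for the rest of the system.
class AlwaysApprovesGateway : IPaymentGateway
{
    public bool Charge(decimal amount) => true;
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void RejectsNonPositiveAmounts() =>
        Assert.IsFalse(new OrderProcessor(new AlwaysApprovesGateway()).PlaceOrder(0m));

    [Test]
    public void AcceptsPositiveAmountWhenChargeSucceeds() =>
        Assert.IsTrue(new OrderProcessor(new AlwaysApprovesGateway()).PlaceOrder(10m));
}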
"Automated" testing usually refers to higher level integration/acceptance/functional tests - you can do TDD around this level of testing, and it's often the only option for heavily ui-driven code, but you should be aware that this sort of testing is more fragile, harder to write test-first, and no substitute for unit testing.
TDD gives me, as the developer, confidence that the change I am making to the code has the intended consequences and ONLY the intended consequences, and thus the metaphor of TDD as a "safety net" is useful; change any code in a system without it and you can have no idea what else you may have broken.
Engineering tension between developers and testers is really bad news; developers cultivate a "well, the testers are paid to find the bugs" mindset (leading to laziness), and the testers - feeling that they aren't seen to be doing their jobs unless they find faults - raise as many trivial problems as they can. This is a gross waste of everyone's time.
The best software development, in my humble experience, is where the tester is also a developer and the unit tests and code are written together as part of a pair programming exercise. This immediately puts the two people on the same side of the problem, working together towards the same goal, rather than putting them in opposition to each other.
Unit testing is not the same as functional testing. As far as automation is concerned, it should normally be considered when the testing cycle will be repeated more than two or three times, and it is preferred for regression testing. If the project is small, or won't see frequent changes or updates, then manual testing is the better and more cost-effective option; in such cases automation will prove more costly once script writing and maintenance are taken into account.

Is Unit Testing worth the effort? [closed]

I am working to integrate unit testing into the development process on the team I work on, and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of unit testing? In my specific case we would be adding unit tests as we add functionality or fix bugs. Unfortunately our code base does not lend itself to easy testing.
Every day in our office there is an exchange which goes something like this:
"Man, I just love unit tests, I've just been able to make a bunch of changes to the way something works, and then was able to confirm I hadn't broken anything by running the test over it again..."
The details change daily, but the sentiment doesn't. Unit tests and test-driven development (TDD) have so many hidden and personal benefits as well as the obvious ones that you just can't really explain to somebody until they're doing it themselves.
But, ignoring that, here's my attempt!
Unit tests allow you to make big changes to code quickly. You know it works now because you've run the tests; when you make the changes you need to make, you need to get the tests working again. This saves hours.
TDD helps you to realise when to stop coding. Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad or buggy; your TEST could be bad or buggy; in TDD you are banking on the chances of both being bad being low. Often it's the test that needs fixing, but that's still a good outcome.
TDD helps with coding constipation. When you're faced with a large and daunting piece of work, writing the tests will get you moving quickly.
Unit Tests help you really understand the design of the code you are working on. Instead of writing code to do something, you are starting by outlining all the conditions you are subjecting the code to and what outputs you'd expect from that.
Unit tests give you instant visual feedback; we all like the feeling of all those green lights when we're done. It's very satisfying. It's also much easier to pick up where you left off after an interruption, because you can see where you got to - that next red light that needs fixing.
Contrary to popular belief unit testing does not mean writing twice as much code, or coding slower. It's faster and more robust than coding without tests once you've got the hang of it. Test code itself is usually relatively trivial and doesn't add a big overhead to what you're doing. This is one you'll only believe when you're doing it :)
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful even if the rest of my code coverage is woefully incomplete.
Good unit tests can help document and define what something is supposed to do.
Unit tests help with code re-use. Migrate both your code and your tests to your new project. Tweak the code till the tests run again.
A lot of work I'm involved with doesn't Unit Test well (web application user interactions etc.), but even so we're all test infected in this shop, and happiest when we've got our tests tied down. I can't recommend the approach highly enough.
Unit testing is a lot like going to the gym. You know it is good for you, all the arguments make sense, so you start working out. There's an initial rush, which is great, but after a few days you start to wonder if it is worth the trouble. You're taking an hour out of your day to change your clothes and run on a hamster wheel and you're not sure you're really gaining anything other than sore legs and arms.
Then, after maybe one or two weeks, just as the soreness is going away, a Big Deadline begins approaching. You need to spend every waking hour trying to get "useful" work done, so you cut out extraneous stuff, like going to the gym. You fall out of the habit, and by the time Big Deadline is over, you're back to square one. If you manage to make it back to the gym at all, you feel just as sore as you were the first time you went.
You do some reading, to see if you're doing something wrong. You begin to feel a little bit of irrational spite toward all the fit, happy people extolling the virtues of exercise. You realize that you don't have a lot in common. They don't have to drive 15 minutes out of the way to go to the gym; there is one in their building. They don't have to argue with anybody about the benefits of exercise; it is just something everybody does and accepts as important. When a Big Deadline approaches, they aren't told that exercise is unnecessary any more than your boss would ask you to stop eating.
So, to answer your question, Unit Testing is usually worth the effort, but the amount of effort required isn't going to be the same for everybody. Unit Testing may require an enormous amount of effort if you are dealing with spaghetti code base in a company that doesn't actually value code quality. (A lot of managers will sing Unit Testing's praises, but that doesn't mean they will stick up for it when it matters.)
If you are trying to introduce Unit Testing into your work and are not seeing all the sunshine and rainbows that you have been led to expect, don't blame yourself. You might need to find a new job to really make Unit Testing work for you.
Best way to convince... find a bug, write a unit test for it, fix the bug.
That particular bug is unlikely to ever appear again, and you can prove it with your test.
If you do this enough, others will catch on quickly.
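A hypothetical sketch of that workflow: suppose the bug was a crash on names without a space. You write the failing test while the bug still reproduces, fix the code, and keep the test forever:

using NUnit.Framework;

public static class NameParser
{
    public static string FirstName(string fullName)
    {
        // The fix: the old code called Substring(0, IndexOf(' ')) and
        // threw when the name contained no space at all.
        int space = fullName.IndexOf(' ');
        return space < 0 ? fullName : fullName.Substring(0, space);
    }
}

[TestFixture]
public class NameParserRegressionTests
{
    // Written while the bug still reproduced; the fix made it pass,
    // and now it proves the bug can't silently come back.
    [Test]
    public void SingleWordNameDoesNotCrash() =>
        Assert.AreEqual("Madonna", NameParser.FirstName("Madonna"));

    [Test]
    public void TakesFirstWordWhenSpacePresent() =>
        Assert.AreEqual("Ada", NameParser.FirstName("Ada Lovelace"));
}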
thetalkingwalnut asks:
What are some good ways to convince the skeptical developers on the team of the value of Unit Testing?
Everyone here is going to pile on lots of reasons out of the blue why unit testing is good. However, I find that often the best way to convince someone of something is to listen to their argument and address it point by point. If you listen and help them verbalize their concerns, you can address each one and perhaps convert them to your point of view (or at the very least, leave them without a leg to stand on). Who knows? Perhaps they will convince you why unit tests aren't appropriate for your situation. Not likely, but possible. Perhaps if you post their arguments against unit tests, we can help identify the counterarguments.
It's important to listen to and understand both sides of the argument. If you try to adopt unit tests too zealously without regard to people's concerns, you'll end up with a religious war (and probably really worthless unit tests). If you adopt it slowly and start by applying it where you will see the most benefit for the least cost, you might be able to demonstrate the value of unit tests and have a better chance of convincing people. I realize this isn't as easy as it sounds - it usually requires some time and careful metrics to craft a convincing argument.
Unit tests are a tool, like any other, and should be applied in such a way that the benefits (catching bugs) outweigh the costs (the effort writing them). Don't use them if/where they don't make sense and remember that they are only part of your arsenal of tools (e.g. inspections, assertions, code analyzers, formal methods, etc). What I tell my developers is this:
They can skip writing a test for a method if they have a good argument why it isn't necessary (e.g. too simple to be worth it or too difficult to be worth it) and how the method will be otherwise verified (e.g. inspection, assertions, formal methods, interactive/integration tests). They need to consider that some verifications like inspections and formal proofs are done at a point in time and then need to be repeated every time the production code changes, whereas unit tests and assertions can be used as regression tests (written once and executed repeatedly thereafter). Sometimes I agree with them, but more often I will debate about whether a method is really too simple or too difficult to unit test.
If a developer argues that a method seems too simple to fail, isn't it worth taking the 60 seconds necessary to write up a simple 5-line unit test for it? These 5 lines of code will run every night (you do nightly builds, right?) for the next year or more and will be worth the effort if even just once it happens to catch a problem that may have taken 15 minutes or longer to identify and debug. Besides, writing the easy unit tests drives up the count of unit tests, which makes the developer look good.
On the other hand, if a developer argues that a method seems too difficult to unit test (not worth the significant effort required), perhaps that is a good indication that the method needs to be divided up or refactored to test the easy parts. Usually, these are methods that rely on unusual resources like singletons, the current time, or external resources like a database result set. These methods usually need to be refactored into a method that gets the resource (e.g. calls getTime()) and a method that takes the resource as an argument (e.g. takes the timestamp as a parameter). I let them skip testing the method that retrieves the resource, and they instead write a unit test for the method that now takes the resource as an argument. Usually, this makes writing the unit test much simpler and therefore worthwhile to write.
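A minimal sketch of that refactoring (class and rule invented for the example): the original method reads the clock itself; the overload that takes the time as an argument becomes trivially testable, and the one-liner that calls DateTime.Now is now too simple to fail.

using System;
using NUnit.Framework;

public class DiscountService
{
    // Hard to unit test: the result changes with the wall clock.
    public bool IsHappyHour() => IsHappyHour(DateTime.Now);

    // Easy to unit test: the time is just an argument.
    public bool IsHappyHour(DateTime now) =>
        now.Hour >= 17 && now.Hour < 19;
}

[TestFixture]
public class DiscountServiceTests
{
    [Test]
    public void FivePmIsHappyHour() =>
        Assert.IsTrue(new DiscountService().IsHappyHour(new DateTime(2024, 1, 1, 17, 30, 0)));

    [Test]
    public void NoonIsNot() =>
        Assert.IsFalse(new DiscountService().IsHappyHour(new DateTime(2024, 1, 1, 12, 0, 0)));
}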
The developer needs to draw a "line in the sand" in how comprehensive their unit tests should be. Later in development, whenever we find a bug, they should determine if more comprehensive unit tests would have caught the problem. If so and if such bugs crop up repeatedly, they need to move the "line" toward writing more comprehensive unit tests in the future (starting with adding or expanding the unit test for the current bug). They need to find the right balance.
It's important to realize that unit tests are not a silver bullet and there is such a thing as too much unit testing. At my workplace, whenever we do a lessons-learned, I inevitably hear "we need to write more unit tests". Management nods in agreement, because it's been banged into their heads that "unit tests" == "good".
However, we need to understand the impact of "more unit tests". A developer can only write ~N lines of code a week, and you need to figure out what percentage of that code should be unit test code vs production code. A lax workplace might have 10% of the code as unit tests and 90% as production code, resulting in a product with a lot of (albeit very buggy) features (think MS Word). On the other hand, a strict shop with 90% unit tests and 10% production code will have a rock-solid product with very few features (think "vi"). You may never hear reports about the latter product crashing, but that likely has as much to do with the product not selling very well as it does with the quality of the code.
Worse yet, perhaps the only certainty in software development is that "change is inevitable". Assume the strict shop (90% unit tests/10% production code) creates a product with exactly 2 features (assuming 5% of production code == 1 feature). If the customer comes along and changes 1 of the features, then that change trashes 50% of the code (45% of unit tests and 5% of the production code). The lax shop (10% unit tests/90% production code) has a product with 18 features, none of which work very well. Their customer completely revamps the requirements for 4 of their features. Even though the change is 4 times as large, less than half as much of the code base gets trashed (~22% = ~2.2% unit tests + 20% of production code).
My point is that you have to communicate that you understand that balance between too little and too much unit testing - essentially that you've thought through both sides of the issue. If you can convince your peers and/or your management of that, you gain credibility and perhaps have a better chance of winning them over.
I have toyed with unit testing a number of times, and I am still to be convinced that it is worth the effort given my situation.
I develop websites, where much of the logic involves creating, retrieving or updating data in the database. When I have tried to "mock" the database for unit testing purposes, it has got very messy and seemed a bit pointless.
When I have written unit tests around business logic, it has never really helped me in the long run. Because I largely work on projects alone, I tend to know intuitively which areas of code may be affected by something I am working on, and I test these areas manually. I want to deliver a solution to my client as quickly as possible, and unit testing often seems a waste of time. I list manual tests and walk through them myself, ticking them off as I go.
I can see that it may be beneficial when a team of developers are working on a project and updating each other's code, but even then I think that if the developers are of a high quality, good communication and well-written code should often be enough.
One great thing about unit tests is that they serve as documentation for how your code is meant to behave. Good tests are kind of like a reference implementation, and teammates can look at them to see how to integrate their code with yours.
Unit-testing is well worth the initial investment. Since starting to use unit-testing a couple of years ago, I've found some real benefits:
regression testing removes the fear of making changes to code (there's nothing like the warm glow of seeing code work or explode every time a change is made)
executable code examples for other team members (and yourself in six months' time...)
merciless refactoring - this is incredibly rewarding, try it!
Code snippets can be a great help in reducing the overhead of creating tests. It isn't difficult to create snippets that enable the creation of a class outline and an associated unit-test fixture in seconds.
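For instance, a snippet might expand to an outline along these lines (NUnit shown; MyClass and the test name are placeholders):

using NUnit.Framework;

public class MyClass { /* class outline generated alongside the fixture */ }

[TestFixture]
public class MyClassTests
{
    private MyClass _sut; // the "system under test"

    [SetUp]
    public void SetUp() => _sut = new MyClass();

    [Test]
    public void MethodName_Scenario_ExpectedResult()
    {
        // Arrange ... Act ... Assert
        Assert.Fail("Test not yet written");
    }
}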
You should test as little as possible!
Meaning, you should write just enough unit tests to reveal intent. This often gets glossed over. Unit testing costs you: if you make changes and you have to change tests, you will be less agile. Keep unit tests short and sweet; then they have a lot of value.
Too often I see lots of tests that will never break, are big and clumsy, and don't offer a lot of value; they just end up slowing you down.
I didn't see this in any of the other answers, but one thing I noticed is that I could debug so much faster. You don't need to drill down through your app with just the right sequence of steps to get to the code you're fixing, only to find you've made a boolean error and need to do it all again. With a unit test, you can just step directly into the code you're debugging.
[I have a point to make that I can't see above]
"Everyone unit tests, they don't necessarily realise it - FACT"
Think about it, you write a function to maybe parse a string and remove new line characters. As a newbie developer you either run a few cases through it from the command line by implementing it in Main() or you whack together a visual front end with a button, tie up your function to a couple of text boxes and a button and see what happens.
That is unit testing - basic and badly put together but you test the piece of code for a few cases.
You write something more complex. It throws errors when you run a few cases through it (unit testing), so you debug into the code and trace through, looking at values as you go and deciding whether they are right or wrong. This is unit testing to some degree.
Unit testing here really means taking that behaviour, formalising it into a structured pattern, and saving it so that you can easily re-run the tests. If you write a "proper" unit test case rather than testing manually, it takes the same amount of time - or maybe less, once you're experienced - and you have it available to repeat again and again.
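To make that concrete (the function itself is hypothetical), here is the newline-stripping example in both forms - the throwaway Main() check and the saved, repeatable NUnit version:

using System;
using NUnit.Framework;

public static class Text
{
    // The unit under test: remove newline characters from a string.
    public static string StripNewlines(string s) =>
        s.Replace("\r", "").Replace("\n", "");
}

class AdHocCheck
{
    // The newbie version: run it, eyeball the output, throw it away.
    static void Main() => Console.WriteLine(Text.StripNewlines("one\ntwo"));
}

// The formalised version: the same check, saved and repeatable forever.
[TestFixture]
public class TextTests
{
    [Test]
    public void RemovesNewlineCharacters() =>
        Assert.AreEqual("onetwo", Text.StripNewlines("one\ntwo"));
}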
For years, I've tried to convince people that they needed to write unit tests for their code. Whether they wrote the tests first (as in TDD) or after coding the functionality, I always tried to explain to them all the benefits of having unit tests. Hardly anyone disagreed with me. You cannot disagree with something that is obvious, and any smart person will see the benefits of unit tests and TDD.
The problem with unit testing is that it requires a behavioral change, and it is very hard to change people's behavior. With words, you will get a lot of people to agree with you, but you won't see many changes in the way they do things.
You have to convince people by doing. Your personal success will attract more people than all the arguments you may have. If they see you are not just talking about unit tests or TDD but are doing what you preach, and you are successful, people will try to imitate you.
You should also take on a lead role, because no one writes unit tests right the first time; you may need to coach them on how to do it, show them the way, and show them the tools available to them. Help them while they write their first tests, review the tests they write on their own, and show them the tricks, idioms, and patterns you've learned through your own experience. After a while, they will start seeing the benefits on their own, and they will change their behavior to incorporate unit tests or TDD into their toolbox.
Changes won't happen overnight, but with a little patience, you may achieve your goal.
A major part of test-driven development that is often glossed over is the writing of testable code. It seems like some kind of a compromise at first, but you'll discover that testable code is also ultimately modular, maintainable and readable.
If you still need help convincing people, this is a nice simple presentation about the advantages of unit testing.
If your existing code base doesn't lend itself to unit testing, and it's already in production, you might create more problems than you solve by trying to refactor all of your code so that it is unit-testable.
You may be better off putting efforts towards improving your integration testing instead. There's lots of code out there that's just simpler to write without a unit test, and if a QA can validate the functionality against a requirements document, then you're done. Ship it.
The classic example of this in my mind is a SqlDataReader embedded in an ASPX page linked to a GridView. The code is all in the ASPX file. The SQL is in a stored procedure. What do you unit test? If the page does what it's supposed to do, should you really redesign it into several layers so you have something to automate?
One of the best things about unit testing is that your code will become easier to test as you do it. Preexisting code created without tests is always a challenge: because it wasn't meant to be unit-tested, it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your class - like a reference to an e-mail sending service - and so on. But don't let this bring you down! You'll see that your overall code design will become better as you start to write unit tests, and the more you test, the more confident you'll become about making even more changes to it without fear of breaking your application or introducing bugs.
There are several reasons to unit-test your code, but as time progresses, you'll find that the time you save on testing is one of the best reasons to do it. In a system I've just delivered, I insisted on doing automated unit testing in spite of claims that I'd spend way more time doing the tests than I would testing the system manually. With all my unit tests done, I run more than 400 test cases in less than 10 minutes, and every time I had to make a small change in the code, all it took to be sure the code still worked without bugs was ten minutes. Can you imagine the time one would spend running those 400+ test cases by hand?
When it comes to automated testing - be it unit testing or acceptance testing - everyone thinks it's a wasted effort to code what you can do manually, and sometimes that's true, if you plan to run your tests only once. The best part of automated testing is that you can run the tests several times without effort; after the second or third run, the time and effort you've invested has already paid for itself.
One last piece of advice: don't just unit test your code, but start doing tests first (see TDD and BDD for more).
Unit tests are also especially useful when it comes to refactoring or rewriting a piece of code. If you have good unit test coverage, you can refactor with confidence; without unit tests, it is often hard to ensure that you didn't break anything.
In short - yes. They are worth every ounce of effort... to a point. Tests are, at the end of the day, still code, and much like typical code, your tests will eventually need to be refactored in order to stay maintainable and sustainable. There's a tonne of GOTCHAS! when it comes to unit testing, but man oh man oh man, nothing, and I mean NOTHING, empowers a developer to make changes more confidently than a rich set of unit tests.
I'm working on a project right now... it's somewhat TDD, and we have the majority of our business rules encapsulated as tests... we have about 500 or so unit tests right now. This past iteration I had to revamp our datasource and how our desktop application interfaces with it. Took me a couple of days; the whole time I just kept running unit tests to see what I broke and fixing it. Make a change; build and run your tests; fix what you broke. Wash, rinse, repeat as necessary. What would traditionally have taken days of QA and boatloads of stress was instead a short and enjoyable experience.
Prep up front, a little bit of extra effort, and it pays 10-fold later on when you have to start dicking around with core features/functionality.
I bought this book - it's a Bible of xUnit testing knowledge - it's probably one of the most referenced books on my shelf, and I consult it daily.
Occasionally either I or one of my co-workers will spend a couple of hours getting to the bottom of a slightly obscure bug, and once the cause is found, 90% of the time that code isn't unit tested. The unit test doesn't exist because the dev is cutting corners to save time, but then loses this time and more to debugging.
Taking the small amount of time to write a unit test can save hours of future debugging.
I'm working as a maintenance engineer on a poorly documented, awful, big code base. I wish the people who wrote it had written unit tests for it.
Each time I make a change and update the production code I'm scared that I might introduce a bug for not having considered some condition.
If they had written tests, making changes to the code base would be easier and faster (and at the same time the code base would be in a better state).
I think unit tests prove very useful when writing APIs or frameworks that have to last for many years and be used/modified/evolved by people other than the original coders.
Unit Testing is definitely worth the effort. Unfortunately you've chosen a difficult (but unfortunately common) scenario into which to implement it.
The best benefit from unit testing comes when you use it from the ground up - on a few select small projects I've been fortunate enough to write my unit tests before implementing my classes (the interface was already complete at that point). With proper unit tests, you will find and fix bugs in your classes while they're still in their infancy, not buried in the complex system they'll undoubtedly be integrated into later.
If your software is solidly object oriented, you should be able to add unit testing at the class level without too much effort. If you aren't that fortunate, you should still try to incorporate unit testing wherever you can. Make sure when you add new functionality the new pieces are well defined with clear interfaces and you'll find unit testing makes your life much easier.
When you said, "our code base does not lend itself to easy testing" is the first sign of a code smell. Writing Unit Tests means you typically write code differently in order to make the code more testable. This is a good thing in my opinion as what I've seen over the years in writing code that I had to write tests for, it forced me to put forth a better design.
I don't know. A lot of places don't do unit testing, yet the quality of their code is good. Microsoft does unit testing, but Bill Gates still got a blue screen during his presentation.
I wrote a very large blog post about the topic. I've found that unit testing alone isn't worth the work and usually gets cut when deadlines get closer.
Instead of talking about unit testing from the "test-after" verification point of view, we should look at the true value found when you set out to write a spec/test/idea before the implementation.
This simple idea has changed the way I write software and I wouldn't go back to the "old" way.
How test first development changed my life
Yes - unit testing is definitely worth the effort, but you should know it's not a silver bullet. Unit testing is work, and you will have to work to keep the tests updated and relevant as the code changes, but the value offered is worth the effort. The ability to refactor with impunity is a huge benefit, as you can always validate functionality by running your tests after any code change.
The trick is not to get too hung up on exactly the unit of work you're testing, or on how you scaffold test requirements, or on when a unit test is really a functional test. People will argue about this stuff for hours on end; the reality is that any testing you do as you write code is better than not doing it.
The other axiom is about quality rather than quantity: I have seen code bases with thousands of tests that are essentially meaningless, because they don't really test anything useful or domain-specific, such as the business rules of the particular domain. I've also seen code bases with 30% code coverage where the tests were relevant, meaningful, and really awesome, because they tested the core functionality of the code and expressed how the code should be used.
One of my favorite tricks when exploring new frameworks or codebases is to write unit-tests for 'it' to discover how things work. It's a great way to learn more about something new instead of reading a dry doc :)
I recently went through the exact same experience in my workplace and found most of them knew the theoretical benefits but had to be sold on the benefits to them specifically, so here were the points I used (successfully):
They save time when performing negative testing, where you handle unexpected inputs (null pointers, out-of-bounds values, etc.), as you can do all of these in a single process (see the sketch after this list).
They provide immediate feedback at compile time regarding the standard of the changes.
They are useful for testing internal data representations that may not be exposed during normal runtime.
and the big one...
You might not need unit testing, but when someone else comes in and modifies the code without a full understanding it can catch a lot of the silly mistakes they might make.
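A sketch of the negative-testing point above (the validator class is invented for the example) - the unexpected inputs are all checked in one automated pass rather than by hand:

using System;
using NUnit.Framework;

public static class AgeValidator
{
    public static int Parse(string input)
    {
        if (input == null) throw new ArgumentNullException(nameof(input));
        int age = int.Parse(input);
        if (age < 0 || age > 150)
            throw new ArgumentOutOfRangeException(nameof(input));
        return age;
    }
}

[TestFixture]
public class AgeValidatorNegativeTests
{
    [Test]
    public void NullInputThrows() =>
        Assert.Throws<ArgumentNullException>(() => AgeValidator.Parse(null));

    // One attribute per unexpected input; all run in a single pass.
    [TestCase("-1")]
    [TestCase("151")]
    public void OutOfRangeValuesThrow(string input) =>
        Assert.Throws<ArgumentOutOfRangeException>(() => AgeValidator.Parse(input));
}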
I discovered TDD a couple of years ago, and have since written all my pet projects using it. I have estimated that it takes roughly the same time to TDD a project as it takes to cowboy it together, but I have such increased confidence in the end product that I can't help a feeling of accomplishment.
I also feel that it improves my design style (much more interface-oriented, in case I need to mock things out) and, as the green post at the top writes, it helps with "coding constipation": when you don't know what to write next, or you have a daunting task in front of you, you can start small.
Finally, I find that by far the most useful application of TDD is in the debugging, if only because you've already developed an interrogatory framework with which you can prod the project into producing the bug in a repeatable fashion.
One thing no one has mentioned yet is getting the commitment of all developers to actually run and update any existing automated tests. Automated tests that you come back to and find broken because of new development lose a lot of their value and make automated testing really painful. Most of those tests won't be indicating bugs, since the developer has already tested the code manually, so the time spent updating them is just waste.
Convincing the skeptics not to destroy the work others are doing on unit tests is a lot more important for getting value from the testing, and it might be easier.
Spending hours updating tests that have broken because of new features, every time you update from the repository, is neither productive nor fun.
If you are using NUnit, one simple but effective demo is to run NUnit's own test suite(s) in front of them. Seeing a real test suite giving a codebase a workout is worth a thousand words...
Unit testing helps a lot in projects that are larger than any one developer can hold in their head. They allow you to run the unit test suite before checkin and discover if you broke something. This cuts down a lot on instances of having to sit and twiddle your thumbs while waiting for someone else to fix a bug they checked in, or going to the hassle of reverting their change so you can get some work done. It's also immensely valuable in refactoring, so you can be sure that the refactored code passes all the tests that the original code did.
With a unit test suite, one can make changes to the code while leaving the rest of the features intact. That's a great advantage. Use a unit test suite and a regression test suite whenever you finish coding a new feature.
I agree with the point of view opposite to the majority here:
It's OK Not to Write Unit Tests
Prototype-heavy programming (AI, for example) is especially difficult to combine with unit testing.