I'm strongly considering adding unit testing to an existing project that is in production. It was started 18 months ago, before I could really see any benefit of TDD (face palm), so now it's a rather large solution with a number of projects and I haven't the foggiest idea where to start in adding unit tests. What's making me consider this is that occasionally an old bug seems to resurface, or a bug is checked in as fixed without really being fixed. Unit testing would reduce or prevent these issues from occurring.
By reading similar questions on SO, I've seen recommendations such as starting at the bug tracker and writing a test case for each bug to prevent regression. However, I'm concerned that I'll miss the big picture and end up omitting fundamental tests that would have been included if I'd used TDD from the get-go.
Are there any processes/steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in? How can I ensure that the tests are of good quality and aren't just a case of "any test is better than no tests"?
So I guess what I'm also asking is:
Is it worth the effort for an existing solution that's in production?
Would it be better to ignore the testing for this project and add it in a possible future re-write?
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
(Obviously the answer to the third point is entirely dependent on whether you're speaking to management or a developer.)
Reason for Bounty
Adding a bounty to try to attract a broader range of answers that not only confirm my existing suspicion that it is a good thing to do, but also offer some good reasons against.
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD. I want to approach this challenge and develop my reasoning without my own biased point of view.
I've introduced unit tests to code bases that did not have them previously. The last big project I was involved with where I did this, the product was already in production with zero unit tests when I arrived on the team. When I left - 2 years later - we had 4,500+ tests yielding about 33% code coverage in a code base with 230,000+ production LOC (a real-time financial WinForms application). That may sound low, but the result was a significant improvement in code quality and defect rate - plus improved morale and profitability.
It can be done when you have both an accurate understanding and commitment from the parties involved.
First of all, it is important to understand that unit testing is a skill in itself. You can be a very productive programmer by "conventional" standards and still struggle to write unit tests in a way that scales in a larger project.
Also, and specifically for your situation, adding unit tests to an existing code base that has none is a specialized skill in itself. Unless you or somebody on your team has successful experience introducing unit tests to an existing code base, I would say reading Feathers' book is a requirement (not optional or merely strongly recommended).
Making the transition to unit testing your code is an investment in people and skills just as much as in the quality of the code base. Understanding this is very important in terms of mindset and managing expectations.
Now, for your comments and questions:
However, I'm concerned that I'll miss the big picture and end up omitting fundamental tests that would have been included if I'd used TDD from the get-go.
Short answer: Yes, you will miss tests, and yes, they might not initially look like what they would have in a greenfield situation.
The deeper answer is this: it does not matter. You start with no tests. Start adding tests, and refactor as you go. As skill levels improve, start raising the bar for all newly written code added to your project. Keep improving, and so on.
Now, reading between the lines here, I get the impression that this is coming from a mindset of "perfection as an excuse for not taking action". A better mindset is to focus on self-trust. You may not know how to do it yet, but you will figure it out as you go and fill in the blanks. Therefore, there is no reason to worry.
Again, it's a skill. You cannot go from zero tests to TDD perfection in one linear "process" or "step by step" cookbook approach. It will be a process. Your expectation should be to make gradual, incremental progress and improvement. There is no magic pill.
The good news is that as the months (and even years) pass, your code will gradually start to become "proper", well-factored and well-tested code.
As a side note: you will find that the primary obstacle to introducing unit tests into an old code base is lack of cohesion and excessive dependencies. You will therefore probably find that the most important skill becomes breaking existing dependencies and decoupling code, rather than writing the actual unit tests themselves.
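As a rough illustration of what that dependency-breaking looks like (the class and member names below are hypothetical, not taken from any code base mentioned here), the typical move is to extract an interface over a hard-wired dependency so the class can be exercised without it:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical example: InvoiceService originally did
//     var repository = new SqlInvoiceRepository();
// inside the method, which made it impossible to test without a real database.
// Extracting an interface and injecting it breaks that dependency.

public class Invoice
{
    public decimal Amount { get; set; }
}

public interface IInvoiceRepository
{
    IEnumerable<Invoice> GetOpenInvoices(int customerId);
}

public class InvoiceService
{
    private readonly IInvoiceRepository _repository;

    public InvoiceService(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOutstandingBalance(int customerId)
    {
        // Same logic as before; only the source of the data has been decoupled.
        return _repository.GetOpenInvoices(customerId).Sum(invoice => invoice.Amount);
    }
}
```

With the interface in place, a test can hand the service a trivial in-memory implementation (or a mock) and assert on the result, without ever touching a real database.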
Are there any processes/steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in?
Unless you already have it, set up a build server and set up a continuous integration build that runs on every checkin including all unit tests with code coverage.
Train your people.
Start somewhere and start adding tests while you make progress from the customer's perspective (see below).
Use code coverage as a guiding reference of how much of your production code base is under test.
Build time should always be FAST. If your build time is slow, your unit testing skills are lagging. Find the slow tests and improve them (decouple production code and test in isolation). Well written, you should easily be able to have several thousand unit tests and still complete a build in under 10 minutes (roughly 1 to a few milliseconds per test is a good but very rough guideline; a few exceptions may apply, such as code using reflection). See the sketch after this list.
Inspect and adapt.
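To make the "fast and isolated" point from the list above concrete, here is a minimal sketch of such a test. It reuses the hypothetical InvoiceService from the earlier snippet and uses NUnit-style attributes purely as an example; the answer itself does not prescribe a framework:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Trivial in-memory stand-in for the repository interface shown earlier.
public class InMemoryInvoiceRepository : IInvoiceRepository
{
    private readonly List<Invoice> _invoices;

    public InMemoryInvoiceRepository(params decimal[] amounts)
    {
        _invoices = amounts.Select(a => new Invoice { Amount = a }).ToList();
    }

    public IEnumerable<Invoice> GetOpenInvoices(int customerId) => _invoices;
}

[TestFixture]
public class InvoiceServiceTests
{
    [Test]
    public void OutstandingBalance_IsSumOfOpenInvoices()
    {
        // No database, no file system, no network: the test is fast by construction.
        var service = new InvoiceService(new InMemoryInvoiceRepository(100m, 250m));

        Assert.AreEqual(350m, service.GetOutstandingBalance(customerId: 42));
    }
}
```

Because the test touches no database, file system or network, thousands of tests in this style can run comfortably within the build-time budget described above.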
How can I ensure that the tests are of good quality and aren't just a case of "any test is better than no tests"?
Your own judgement must be your primary source of reality. There is no metric that can replace skill.
If you don't have that experience or judgement, consider contracting someone who does.
Two rough secondary indicators are total code coverage and build speed.
Is it worth the effort for an existing solution that's in production?
Yes. The vast majority of the money spent on a custom built system or solution is spent after it is put in production. And investing in quality, people and skills should never be out of style.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
You would have to take into consideration, not only the investment in people and skills, but most importantly the total cost of ownership and the expected life time of the system.
My personal answer would be "yes, of course" in the majority of cases because I know it's just so much better, but I recognize that there might be exceptions.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
Neither. Your approach should be to add tests to your code base WHILE you are making progress in terms of functionality.
Again, it is an investment in people, skills AND the quality of the code base, and as such it will require time. Team members need to learn how to break dependencies, write unit tests, learn new habits, improve discipline and quality awareness, design software better, etc. It is important to understand that when you start adding tests, your team members probably don't yet have these skills at the level needed for that approach to succeed, so stopping progress to spend all your time adding a lot of tests simply won't work.
Also, adding unit tests to an existing code base of any sizeable project is a LARGE undertaking which requires commitment and persistence. You can't change something fundamental, expect a lot of learning along the way, and then ask your sponsor not to expect any ROI while you halt the flow of business value. That won't fly, and frankly it shouldn't.
Thirdly, you want to instill sound, business-focused values in your team. Quality never comes at the expense of the customer, and you can't go fast without quality. Also, the customer lives in a changing world, and your job is to make it easier for them to adapt. Customer alignment requires both quality and the flow of business value.
What you are doing is paying off technical debt, and you are doing so while still serving your customers' ever-changing needs. Gradually, as debt is paid off, the situation improves, and it becomes easier to serve the customer better and deliver more value. This positive momentum is what you should aim for, because it underlines the principle of sustainable pace and will maintain and improve morale - for your development team, your customer and your stakeholders.
Is it worth the effort for an existing solution that's in production?
Yes!
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No!
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
Adding testing (especially automated testing) makes it much easier to keep the project working in the future, and it makes it significantly less likely that you'll ship stupid problems to the user.
The tests to put in a priori are ones that check that the public interface of your code (and of each module in it) works the way you think it does. If you can, also try to induce each isolated failure mode that your code modules should have (note that this can be non-trivial, and you should be careful not to check too precisely how things fail; e.g., you don't really want to count the number of log messages produced on a failure, since verifying that it is logged at all is enough).
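As an illustration of testing a failure mode without over-specifying it, here is a hedged sketch (NUnit and Moq are used as stand-ins for whatever framework you prefer, and all names are invented): the test asserts that a bad input is rejected and that something was logged, but deliberately does not pin down how many messages were written:

```csharp
using System;
using Moq;
using NUnit.Framework;

public interface ILog
{
    void Error(string message);
}

public class PaymentProcessor
{
    private readonly ILog _log;

    public PaymentProcessor(ILog log) { _log = log; }

    public void Process(decimal amount)
    {
        if (amount <= 0)
        {
            _log.Error("Rejected non-positive payment amount.");
            throw new ArgumentOutOfRangeException(nameof(amount));
        }
        // ... real processing would go here ...
    }
}

[TestFixture]
public class PaymentProcessorFailureTests
{
    [Test]
    public void NonPositiveAmount_IsRejectedAndLogged()
    {
        var log = new Mock<ILog>();
        var processor = new PaymentProcessor(log.Object);

        // The failure mode we care about: the call is rejected...
        Assert.Throws<ArgumentOutOfRangeException>(() => processor.Process(-5m));

        // ...and it was logged. Deliberately loose: we don't count messages.
        log.Verify(l => l.Error(It.IsAny<string>()), Times.AtLeastOnce());
    }
}
```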
Then put in a test for each current bug in your bug database that induces exactly the bug and which will pass when the bug is fixed. Then fix those bugs! :-)
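One of those per-bug tests might look like the following sketch (the bug number, class and off-by-one behaviour are invented for illustration). The test is written so that it fails on the buggy code, passes once the fix is in, and then stays in the suite as a regression guard:

```csharp
using NUnit.Framework;

public class DiscountCalculator
{
    public decimal GetVolumeDiscount(int quantity)
    {
        // Fixed version: the original (buggy) code used `quantity > 100`.
        return quantity >= 100 ? 0.05m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorRegressionTests
{
    // Hypothetical bug #1234: "orders of exactly 100 units get no volume discount".
    // This test reproduces the report, fails against the buggy code,
    // and keeps the bug from resurfacing once it is fixed.
    [Test]
    public void Bug1234_OrderOfExactly100Units_GetsVolumeDiscount()
    {
        var calculator = new DiscountCalculator();

        decimal discount = calculator.GetVolumeDiscount(quantity: 100);

        Assert.AreEqual(0.05m, discount);
    }
}
```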
It does cost time up front to add tests, but you get paid back many times over at the back end as your code ends up being of much higher quality. That matters enormously when you're trying to ship a new version or carry out maintenance.
The problem with retrofitting unit tests is you'll realise you didn't think of injecting a dependency here or using an interface there, and before long you'll be rewriting the entire component. If you have the time to do this, you'll build yourself a nice safety net, but you could have introduced subtle bugs along the way.
I've been involved with many projects which really needed unit tests from day one, and there is no easy way to get them in there short of a complete rewrite, which usually cannot be justified when the code is working and already making money. Recently I have resorted to writing PowerShell scripts that exercise the code in a way that reproduces a defect as soon as it is raised, and then keeping those scripts as a suite of regression tests for further changes down the line. That way you can at least start to build up some tests for the application without changing it too much; however, these are more like end-to-end regression tests than proper unit tests.
I do agree with what most everyone else has said. Adding tests to existing code is valuable. I will never disagree with that point, but I would like to add one caveat.
Although adding tests to existing code is valuable, it does come at a cost. It comes at the cost of not building out new features. How these two things balance out depends entirely on the project, and there are a number of variables.
How long will it take you to put all that code under test? Days? Weeks? Months? Years?
Who are you writing this code for? Paying customers? A professor? An open source project?
What is your schedule like? Do you have hard deadlines you must meet? Do you have any deadlines at all?
Again, let me stress: tests are valuable and you should work to put your old code under test. This is really more a matter of how you approach it. If you can afford to drop everything and put all your old code under test, do it. If that's not realistic, here's what you should do at the very least:
Any new code you write should be completely under unit test
Any old code you happen to touch (bug fix, extension, etc.) should be put under unit test
Also, this is not an all or nothing proposition. If you have a team of, say, four people, and you can meet your deadlines by putting one or two people on legacy testing duty, by all means do that.
Edit:
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD.
This is like asking "What are the pros and cons to using source control?" or "What are the pros and cons to interviewing people before hiring them?" or "What are the pros and cons to breathing?"
Sometimes there is only one side to the argument. You need to have automated tests of some form for any project of any complexity. No, tests don't write themselves, and, yes, it will take a little extra time to get things out the door. But in the long run it will take more time and cost more money to fix bugs after the fact than write tests up front. Period. That's all there is to it.
When we started adding tests, it was to a ten-year-old, approximately million-line codebase, with far too much logic in the UI and in the reporting code.
One of the first things we did (after setting up a continuous build server) was to add regression tests. These were end-to-end tests.
Each test suite starts by initializing the database to a known state. We actually have dozens of regression datasets that we keep in Subversion (in a separate repository from our code, because of the sheer size). Each test's FixtureSetUp copies one of these regression datasets into a temp database, and then runs from there.
The test fixture setup then runs some process whose results we're interested in. (This step is optional -- some regression tests exist only to test the reports.)
Then each test runs a report, outputs the report to a .csv file, and compares the contents of that .csv to a saved snapshot. These snapshot .csvs are stored in Subversion next to each regression dataset. If the report output doesn't match the saved snapshot, the test fails.
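Stripped of the database-restore plumbing, the comparison step of such a regression test might look roughly like this (paths and names are invented; the real harness described above is considerably more involved):

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class CustomerAgingReportRegressionTests
{
    [Test]
    public void Report_MatchesSavedSnapshot()
    {
        // In the real harness, the fixture setup has already restored a known
        // regression dataset into a temp database and run the report before this point.
        string actualCsv = File.ReadAllText(@"output\CustomerAgingReport.csv");
        string expectedCsv = File.ReadAllText(@"snapshots\CustomerAgingReport.csv");

        // If the report output drifts from the snapshot, the test fails.
        // A deliberate change means the snapshot file gets updated instead.
        Assert.AreEqual(expectedCsv, actualCsv);
    }
}
```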
The purpose of regression tests is to tell you if something changes. That means they fail if you broke something, but they also fail if you changed something on purpose (in which case the fix is to update the snapshot file). You don't know that the snapshot files are even correct -- there might be bugs in the system (and then when you fix those bugs, the regression tests will fail).
Nevertheless, regression tests were a huge win for us. Just about everything in our system has a report, so by spending a few weeks getting a test harness around the reports, we were able to get some level of coverage over a huge part of our code base. Writing the equivalent unit tests would have taken months or years. (Unit tests would have given us far better coverage, and would have been far less fragile; but I'd rather have something now, rather than waiting years for perfection.)
Then we went back and started adding unit tests when we fixed bugs, or added enhancements, or needed to understand some code. Regression tests in no way remove the need for unit tests; they're just a first-level safety net, so that you get some level of test coverage quickly. Then you can start refactoring to break dependencies, so you can add unit tests; and the regression tests give you a level of confidence that your refactoring isn't breaking anything.
Regression tests have problems: they're slow, and there are too many reasons why they can break. But at least for us, they were so worth it. They've caught countless bugs over the last five years, and they catch them within a few hours, rather than waiting for a QA cycle. We still have those original regression tests, spread over seven different continuous-build machines (separate from the one that runs the fast unit tests), and we even add to them from time to time, because we still have so much code that our 6,000+ unit tests don't cover.
It's absolutely worth it. Our app has complex cross-validation rules, and we recently had to make significant changes to the business rules. We ended up with conflicts that prevented the user from saving. I realized it would take forever to sort it out in the application (it takes several minutes just to get to the point where the problems occurred). I'd wanted to introduce automated unit tests and had the framework installed, but I hadn't done anything beyond a couple of dummy tests to make sure things were working. With the new business rules in hand, I started writing tests. The tests quickly identified the conditions that caused the conflicts, and we were able to get the rules clarified.
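The kind of test that flushes those conflicts out might look like the following sketch. The rule and all names are invented, since the actual business rules aren't described here; the point is that a table of allowed and disallowed combinations quickly pinpoints where the rules contradict each other:

```csharp
using NUnit.Framework;

public class Order
{
    public int Quantity { get; set; }
    public string CustomsCode { get; set; }
    public bool IsInternational { get; set; }
}

public class OrderValidator
{
    public bool Validate(Order order)
    {
        // Hypothetical rule: international orders over 1000 units need a customs code.
        if (order.IsInternational && order.Quantity > 1000 && string.IsNullOrEmpty(order.CustomsCode))
            return false;
        return true;
    }
}

[TestFixture]
public class OrderValidationRuleTests
{
    // Each row encodes a combination the new business rules are supposed to
    // allow or reject; a failing row points straight at a conflicting rule.
    [TestCase(1500, "CUST-42", true)]
    [TestCase(1500, null, false)]
    [TestCase(500, null, true)]
    public void InternationalOrders_ValidateAsExpected(int quantity, string customsCode, bool expectedValid)
    {
        var order = new Order { Quantity = quantity, CustomsCode = customsCode, IsInternational = true };

        bool isValid = new OrderValidator().Validate(order);

        Assert.AreEqual(expectedValid, isValid);
    }
}
```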
If you write tests that cover the functionality you're adding or modifying, you'll get an immediate benefit. If you wait for a re-write, you may never have automated tests.
You shouldn't spend a lot of time writing tests for existing things that already work. Most of the time, you don't have a specification for the existing code, so the main thing you're testing is your reverse-engineering ability. On the other hand, if you're going to modify something, you need to cover that functionality with tests so you'll know you made the changes correctly. And of course, for new functionality, write tests that fail, then implement the missing functionality.
I'll add my voice and say yes, it's always useful!
There are some distinctions you should keep in mind, though: black-box vs white-box, and unit vs functional. Since definitions vary, here's what I mean by these:
Black-box = tests that are written without special knowledge of the implementation, typically poking around at the edge cases to make sure things happen as a naive user would expect.
White-box = tests that are written with knowledge of the implementation, which often try to exercise well-known failure points.
Unit tests = tests of individual units (functions, separable modules, etc). For example: making sure your array class works as expected, and that your string comparison function returns the expected results for a wide range of inputs.
Functional tests = tests of the entire system all at once. These tests will exercise a big chunk of the system all at once. For example: init, open a connection, do some real-world stuff, close down, terminate. I like to draw a distinction between these and unit tests, because they serve a different purpose.
When I've added tests to a shipping product late in the game, I found that I got the most bang for the buck from white-box and functional tests. If there's any part of the code that you know is especially fragile, write white-box tests to cover the problem cases to help make sure it doesn't break the same way twice. Similarly, whole-system functional tests are a useful sanity check that helps you make sure you never break the 10 most common use cases.
Black-box and unit tests of small units are useful too, but if your time is limited, it's better to add them early. By the time you're shipping, you've generally found (the hard way) the majority of the edge cases and problems that these tests would have found.
Like the others, I'll also remind you of the two most important things about TDD:
Creating tests is a continuous job. It never stops. You should try to add new tests every time you write new code, or modify existing code.
Your test suite is never infallible! Don't let the fact that you have tests lull you into a false sense of security. Just because it passes the test suite doesn't mean it's working correctly, or that you haven't introduced a subtle performance regression, etc.
You don't mention the implementation language, but if in Java then you could try this approach:
In a separate source tree, build regression or 'smoke' tests using a tool to generate them, which might get you close to 80% coverage. These tests execute all the code logic paths and verify, from that point on, that the code still does exactly what it does currently (even if a bug is present). This gives you a safety net against inadvertently changing behaviour when doing the refactoring necessary to make the code easily unit-testable by hand.
Product suggestion - I used to use the free web-based product JUnit Factory, but sadly it's closed now. However, the product lives on in the commercially licensed AgitarOne JUnit Generator at http://www.agitar.com/solutions/products/automated_junit_generation.html
For each bug you fix, or feature you add from now on, use a TDD approach to ensure new code is designed to be testable and place these tests in a normal test source tree.
Existing code will also likely need to be changed, or refactored to make it testable as part of adding new features; your smoke tests will give you a safety net against regressions or inadvertent subtle changes to behaviour.
When making changes (bug fixes or features) via TDD, it's likely that the companion smoke test will be failing when you're done. Verify that the failures are expected consequences of the changes you made, and then remove the less readable smoke test, since your hand-written unit tests now fully cover that improved component. Ensure that your test coverage does not decline; it should only stay the same or increase.
When fixing bugs write a failing unit test that exposes the bug first.
Whether it's worth adding unit tests to an app that's in production depends on the cost of maintaining the app. If the app has few bugs and enhancement requests, then maybe it's not worth the effort. OTOH, if the app is buggy or frequently modified then unit tests will be hugely beneficial.
At this point, remember that I'm talking about adding unit tests selectively, not trying to generate a suite of tests similar to those that would exist if you had practiced TDD from the start. Therefore, in response to the second half of your second question: make a point of using TDD on your next project, whether it's a new project or a re-write (apologies, but here is a link to another book that you really should read: Growing Object Oriented Software Guided by Tests)
My answer to your third question is the same as the first: it depends on the context of your project.
Embedded within your post is a further question about ensuring that any retrofitted testing is done properly. The important thing is to ensure that unit tests really are unit tests, and this (more often than not) means that retrofitting tests requires refactoring existing code to allow decoupling of your layers/components (cf. dependency injection; inversion of control; stubbing; mocking). If you fail to enforce this, then your tests become integration tests, which are useful, but less targeted and more brittle than true unit tests.
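To make the distinction concrete, here is a hedged sketch of a true unit test (Moq is used purely for illustration, and all names are invented): the collaborator that would otherwise hit an external service is replaced with a mock, so only the logic of the class under test is exercised:

```csharp
using Moq;
using NUnit.Framework;

public interface IExchangeRateFeed
{
    decimal GetRate(string currencyCode);
}

public class PriceConverter
{
    private readonly IExchangeRateFeed _feed;

    public PriceConverter(IExchangeRateFeed feed) { _feed = feed; }

    public decimal ToLocalCurrency(decimal amount, string currencyCode)
    {
        return amount * _feed.GetRate(currencyCode);
    }
}

[TestFixture]
public class PriceConverterTests
{
    [Test]
    public void ToLocalCurrency_MultipliesByTheFeedRate()
    {
        // The real feed would call an external service; the mock keeps this a true unit test.
        var feed = new Mock<IExchangeRateFeed>();
        feed.Setup(f => f.GetRate("USD")).Returns(1.5m);

        var converter = new PriceConverter(feed.Object);

        Assert.AreEqual(150m, converter.ToLocalCurrency(100m, "USD"));
    }
}
```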
I would like to start this answer by saying that unit testing is really important because it will help you arrest bugs before they creep into production.
Identify the projects/modules where bugs have been re-introduced and start writing tests there. It also makes perfect sense to write tests for new functionality and for bug fixes.
Is it worth the effort for an existing solution that's in production?
Yes. You will see the number of bugs come down and maintenance become easier.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
I would recommend starting it now.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
You are asking the wrong question. Functionality is certainly more important than anything else, but you should instead ask whether spending a few weeks adding tests will make your system more stable. Will it help your end users? Will it help a new developer on the team understand the project, and ensure that they don't introduce a bug through a lack of understanding of the overall impact of a change?
I'm very fond of Refactor the Low-hanging Fruit as an answer to the question of where to begin refactoring. It's a way to ease into better design without biting off more than you can chew.
I think the same logic applies to TDD - or just unit tests: write the tests you need, as you need them; write tests for new code; write tests for bugs as they appear. You're worried about neglecting harder-to-reach areas of the code base, and it's certainly a risk, but as a way to get started: get started! You can mitigate the risk down the road with code coverage tools, and the risk isn't (in my opinion) that big, anyway: if you're covering the bugs, covering the new code, covering the code you're looking at, then you're covering the code that has the greatest need for tests.
Yes, it is. When you start adding new functionality, it can require modifying old code, and as a result it is a source of potential bugs.
(See the first point.) Before you start adding new functionality, all (or almost all) of the code should ideally be covered by unit tests.
(See the first and second points.) :) A grand new piece of functionality can "destroy" the old, working code.
Yes it can: Just try to make sure all code you write from now has a test in place.
If the code that is already in place needs to be modified and can be tested, then do so, but it is better not to be too vigorous in trying to get tests in place for stable code. That sort of thing tends to have a knock-on effect and can spiral out of control.
Is it worth the effort for an existing solution that's in production?
Yes. But you don't have to write all unit tests to get started. Just add them one by one.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No. First time you are adding code which breaks the functionality, you will regret it.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
For new functionality (code) it is simple. You write the unit test first and then the functionality.
For old code you decide on the way. You don't have to have all unit tests in place... Add the ones that hurt you most not having... Time (and errors) will tell on which one you have to focus ;)
Update
6 years after the original answer, I have a slightly different take.
I think it makes sense to add unit tests to all new code you write - and then refactor places where you make changes to make them testable.
Writing tests in one go for all your existing code will not help - but not writing tests for new code you write (or areas you modify) also doesn't make sense. Adding tests as you refactor/add things is probably the best way to add tests and make the code more maintainable in an existing project with no tests.
Earlier answer
I'm going to raise a few eyebrows here :)
First of all, what is your project? If it is a compiler, a language, a framework, or anything else that is not going to change functionally for a long time, then I think it's absolutely fantastic to add unit tests.
However, if you are working on an application that is probably going to require changes in functionality (because of changing requirements) then there is no point in taking that extra effort.
Why?
Unit tests only test the code - whether it does what it was designed to do - they are not a replacement for manual testing, which has to be done anyway (to uncover functional bugs, usability issues and all other kinds of issues).
Unit tests cost time! Now where I come from, that's a precious commodity - and business generally picks better functionality over a complete test suite.
If your application is even remotely useful to users, they are going to request changes - so you will have versions that will do things better, faster and probably do new things - there may also be a lot of refactoring as your code grows. Maintaining a full grown unit test suite in a dynamic environment is a headache.
Unit tests are not going to affect the perceived quality of your product - the quality that the user sees. Sure, your methods might work exactly as they did on day 1, the interface between presentation layer and business layer might be pristine - but guess what? The user does not care! Get some real testers to test your application. And more often than not, those methods and interfaces have to change anyways, sooner or later.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality? - There are a hell of a lot of things that you can do better than writing tests: write new functionality, improve performance, improve usability, write better help manuals, resolve pending bugs, etc.
Now don't get me wrong - if you are absolutely positive that things are not going to change for the next 100 years, go ahead, knock yourself out and write those tests. Automated tests are also a great idea for APIs, where you absolutely do not want to break third-party code. Everywhere else, it's just something that makes me ship later!
It's unlikely you'll ever have significant test coverage, so you must be tactical about where you add tests:
As you mentioned, when you find a bug, it's a good time to write a test to reproduce it, and then fix the bug. If you see the test reproduce the bug, you can be sure it's a good, valid test. Given that such a large portion of bugs are regressions (50%?), it's almost always worth writing regression tests.
When you dive into an area of code to modify it, it's a good time to write tests around it. Depending on the nature of the code, different tests are appropriate. One good set of advice is found here.
OTOH, it's not worth just sitting around writing tests around code that people are happy with-- especially if nobody is going to modify it. It just doesn't add value (except maybe understanding the behavior of the system).
Good luck!
You say you don't want to buy another book. So just read Michael Feathers' article on working effectively with legacy code. Then buy the book :)
If I were in your place, I would probably take an outside-in approach, starting with functional tests that exercise the whole system. I would try to re-document the system's requirements using a BDD specification language like RSpec, and then write tests to verify those requirements by automating the user interface.
Then I would do defect driven development for newly discovered bugs, writing unit tests to reproduce the problems, and work on the bugs until the tests pass.
For new features, I would stick with the outside-in approach: Start with features documented in RSpec and verified by automating the user interface (which will of course fail initially), then add more finely-grained unit tests as the implementation moves along.
I'm no expert on the process, but from what little experience I have I can tell you that BDD via automated UI testing is not easy, but I think it's worth the effort, and probably would yield the most benefit in your case.
I'm not a seasoned TDD expert by any means, but of course I would say that it's incredibly important to unit test as much as you can. Since the code is already in place, I would start by getting some sort of unit test automation in place. I use TeamCity to exercise all of the tests in my projects, and it gives you a nice summary of how the components did.
With that in place, I'd move on to those really critical business-logic components that can't fail. In my case, there are some basic trigonometry problems that need to be solved for various inputs, so I test the heck out of those. The reason I do this is that when I'm burning the midnight oil, it's very easy to waste time digging down to depths of code that really don't need to be touched, because you know they are tested for all of the possible inputs (in my case, there is a finite number of inputs).
Ok, so now you hopefully feel better about those critical pieces. Instead of sitting down and banging out all of the tests, I would attack them as they come up. If you hit a bug that's a real PITA to fix, write the unit tests for it and get them out of the way.
There are cases where you'll find that testing is tough because you can't instantiate a particular class from the test, so you have to mock it. Oh, but maybe you can't mock it easily because you didn't write to an interface. I take these "whoops" scenarios as an opportunity to implement said interface, because, well, it's a Good Thing.
From there, I'd get your build server or whatever automation you have in place configured with a code coverage tool. They create nasty bar graphs with big red zones where you have poor coverage. Now 100% coverage isn't your goal, nor would 100% coverage necessarily mean your code is bulletproof, but the red bar definitely motivates me when I have free time. :)
There are so many good answers that I will not repeat their content. I checked your profile and it seems you are a C# .NET developer. Because of that, I'm adding a reference to the Microsoft Pex and Moles project, which can help you auto-generate unit tests for legacy code. I know that auto-generation is not the best way, but at least it is a way to start. Check this very interesting article from MSDN Magazine about using Pex for legacy code.
I suggest reading a brilliant article by a TopTal Engineer, that explains where to start adding tests: it contains a lot of maths, but the basic idea is:
1) Measure your code's Afferent Coupling (CA) (how much a class is used by other classes, meaning breaking it would cause widespread damage)
2) Measure your code's Cyclomatic Complexity (CC) (higher complexity = higher change of breaking)
You need to identify the classes with high CA and high CC, i.e. define a function f(CA, CC), and the classes where both metrics are high (with the smallest difference between the two) should be given the highest priority for test coverage.
Why? Because classes with high CA but very low CC are very important but unlikely to break. On the other hand, classes with low CA but high CC are likely to break, but will cause less damage. So you want to balance.
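As a rough sketch of that prioritisation (the article defines its own scoring; this is just one plausible choice of f(CA, CC), ranking a class by the smaller of its two normalised metrics so that only classes scoring high on both rise to the top):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ClassMetrics
{
    public string Name { get; set; }
    public int AfferentCoupling { get; set; }      // CA: how many other classes depend on this one
    public int CyclomaticComplexity { get; set; }  // CC: how complex, and therefore how likely to break
}

public static class TestPriority
{
    // One plausible f(CA, CC): normalise each metric across the code base, then take
    // the smaller of the two, so a class only ranks highly when BOTH values are high.
    public static IEnumerable<KeyValuePair<string, double>> Rank(IList<ClassMetrics> classes)
    {
        double maxCa = Math.Max(1, classes.Max(c => c.AfferentCoupling));
        double maxCc = Math.Max(1, classes.Max(c => c.CyclomaticComplexity));

        return classes
            .Select(c => new KeyValuePair<string, double>(
                c.Name,
                Math.Min(c.AfferentCoupling / maxCa, c.CyclomaticComplexity / maxCc)))
            .OrderByDescending(pair => pair.Value);
    }
}
```

The classes at the top of that ranking are where retrofitted tests buy the most protection per hour spent.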
It depends...
It's great to have unit tests but you need to consider who your users are and what they are willing to tolerate in order to get a more bug-free product. Inevitably by refactoring your code which has no unit tests at present, you will introduce bugs and many users will find it hard to understand that you are making the product temporarily more defective to make it less defective in the long run. Ultimately it's the users who will have the final say...
Yes.
No.
Adding tests.
Going towards a more TDD approach will actually better inform your efforts to add new functionality and make regression testing much easier. Check it out!
Ok let me be honest, I haven't written more than 10 unit tests in my life probably.
I am embarking on a new project, and being the sole programmer means I should be scared ... very scared.
The idea that I can pseudo guarantee that my software works brings about a sense of joy.
Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on.
Unit testing will help me sleep better at night, which is better for my health.
My code will fail, but at least I will have a better idea when it will.
How has unit testing made your life better (or has it?), despite the rest of your team not jumping on the bandwagon?
By far the biggest value that unit tests bring to my project is confidence. With that confidence it's much easier to add new features that weren't planned at the beginning, and to tear code apart to change something or turn it around.
With tests I know that I (or anyone else!) haven't broken something that already worked.
Without tests you are brave (or stupid) when you make big changes and deploy them to production the next minute.
By unit testing, I've reduced the number of "stupid-bugs" that get reported during the testing stage. It's also given me a higher level of confidence in my code.
As Elie said, unit-testing is a great way to flush out "stupid" or simple bugs very early.
For me, it changes the way I think about code; making my code testable makes it less brittle and more flexible (e.g. dependency injection/inversion of control came quite naturally to me because it's something I did for testing purposes anyway).
The greatest benefit I personally get from a thorough test-suite is the confidence to change complicated code even months after I wrote it without fear of accidentally breaking something.
I'm not yet there, but with some discipline while writing them, unit-tests are a great way to document your code, also.
Okay, as nobody else has stepped up to the plate to play devil's advocate, I'll do it.
Automated unit testing can have good benefits for some projects, but can also have many of the following issues:
It sucks down a ton of engineering resources.
It can take many man-hours to remove environment and setup issues from the equation.
It has a low ROI for some project types, especially GUIs.
It won't catch many errors, because it's impossible to evaluate all execution paths for all but the most trivial programs.
It doesn't catch integration errors.
It doesn't catch broader system-level errors such as functions performed across multiple units, or non-functional areas such as performance.
Test coverage and coverage gates can become an increasingly useless mantra
It requires a sustainable process for ensuring that test case failures are reviewed and addressed immediately. Otherwise the app will evolve out of sync with the unit test suite.
It has a significant opportunity cost - there are other activities such as code reviews that have an equally good or even better ROI.
It can involve a significant culture change.
So developers shouldn't adopt a dogmatic (yes or no) approach towards unit testing, but instead do an ROI calculation for every project.
Using unit testing in conjunction with TDD provides me with motivation and drive to complete the task at hand. Without the small progress of writing test, fixing test, writing test, fixing test I can become unmotivated.
Unit tests are really helpful when you start debugging as well. If your tests fail, then you know where the bug is almost immediately. If they all run, then you know where the bugs aren't (most of the time).
Another area where unit tests help: migrating software. I'm finding that it's a lot easier to prepare Python code that has unit tests to be migrated to Python 3 than code that doesn't have unit tests.
Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on.
This is where Test Driven Development really shines. You don't have to worry quite as much about having the proper tests because that question will be answered for you beforehand.
Of course, just to make sure we're on the same page, "test driven development" means the process of coding where you write the test, verify that the test fails, and then write the code.
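A minimal sketch of that cycle with an invented example: write the test first, watch it fail (red), then write just enough code to make it pass (green), and let the next failing test drive the implementation further:

```csharp
using NUnit.Framework;

// Step 1 (red): this test is written first and fails,
// because RomanNumeral.FromInt doesn't exist yet (or returns the wrong value).
[TestFixture]
public class RomanNumeralTests
{
    [Test]
    public void One_IsRenderedAs_I()
    {
        Assert.AreEqual("I", RomanNumeral.FromInt(1));
    }
}

// Step 2 (green): the simplest code that makes the test pass.
// The next failing test (e.g. for 2 -> "II") forces the implementation to grow.
public static class RomanNumeral
{
    public static string FromInt(int value)
    {
        return "I";
    }
}
```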
For me, it wasn't just unit tests that changed my life, but Test Driven Development (TDD). I liken it to a religious experience in my blog post (shameless, I know) My Year with TDD.
Getting into testing has been a career changing experience for me. I write less bugs, I write more readable code, I write more cohesive code, I know when something is broken (usually), etc, etc. I owe it all to Test Driven Development.
Try it, you'll like it :)
I've been a TDD(eveloper) since my 2nd* project at uni (back in 1988). I don't know if the term was even in use then.
The best thing is the ability to change things and very quickly check you haven't broken anything else. Easy regression testing.
They are good documentation of object/method usage as well.
*and that was directly because of how the first project went....
The biggest thing unit tests give you is confidence in your code. You know stuff works at a certain level of quality, and you know you can go in and change or rewrite something without having things fail all over the place where you don't expect them to. Verification is only a test run away.
Now that I make a point of test-driving my code, I know when I'm done with a function, a component, or a feature. Therefore I can report progress accurately.
I know it's not bug free code, but it is functional enough to be integrated, built and delivered to QA. I'm confident they'll be able to start testing without being blocked by a segmentation error, or any other silly problem.
I also have an environment ready so that I can quickly write a new test to reproduce any problem that will be reported, and I have a safety net to detect side effects and regressions when I modify or fix the code.
Can't speak for everyone, but I started writing tests because of a fellow developer that wrote tests that were a big help to me when I was learning the system.
I've also found that tests are a good way to verify assumptions when working with a new code base.
Unit testing is not a silver bullet. But it does have benefits, and it is very satisfying.
I find it means my code gets executed a lot earlier, since before I wrote unit tests, it took a lot of coding before there was enough functionality to try out in the application.
I'm also very grateful for my suite of unit tests when I come to refactor. I rewrote a whole date handling module a while ago, and there's no way I would have had the confidence to do that without regression tests.
I must say that I think the improvements in VS 2010, such as Ctrl+Enter (I think that was it), which lets you quickly stub the interface of a class while writing the test (first), are going to make this a lot easier for me.
Unit testing has many advantages. To be more specific about how it makes my life better: it increases my confidence when making changes and lets me change code much faster later on. It improves my life in the long run.
Of course, in the short term it's a little more painful because it takes more time, but that's quickly forgotten once the tests start validating your work for you.
I'm a big fan of unit testing, though my tests don't provide complete coverage nowadays... mostly because I'm working on a web site and much of my code just grabs data from the database, manipulates it, and spits it out. The manipulation code is generally well tested, but it's a real pain to test the database code.
That being said, I can point out a case where unit tests saved me weeks of work...
I was working on a smallish project (4-6 devs) a while back and, after months of work we had reached a state of near completion. At this point, the folks in charge of the product decided that, instead of storing dates (and generating reports using them) in GMT, they wanted everything in EST. Given the product was built to handle large amounts of data/logs and generate information about that data based on timeframes, this was a fairly major change.
Over the course of the next few days, the development team went in and changed everything to deal with EST timestamps. What would have taken us weeks to do had we not had such extensive automated tests took us just 3 days, allowing us to meet an aggressive schedule. We were able to jump into the code and start changing whatever we needed to, the unit tests giving us courage by knowing the system would complain quickly if we broke something. To this day, I use that experience as an example of how you can never truly understand the benefits of automated testing until it saves you... and it certainly did that for my team.
I'm currently in the process of trying to jump on the bandwagon. Work mates are already doing it before they've written a line of functional code. I'm still writing a full program before I've even run it through the main, let alone unit test it :/
I'll get there in the end I'm sure. But at the moment, I am of ill health through spending 90% of my life debugging :(
I've done quite a bit of work in java, and a year in Ruby.
In Ruby we used extensive testing (TDD). This was ABSOLUTELY REQUIRED. You can write nearly complete garbage into a Ruby file, and if you don't hit that specific line of code, you'll never know, so your tests need near 100% coverage.
In Java, I've never needed much more than a single, simple success path test--and that can usually be thrown away after the code runs. It's really the static type checking, strong typing and using coding patterns like strong encapsulation and parameter checking that makes this possible. You can actually get very close to proving that a small class can't be broken (is bug free) without tests, and when correctly designed, all classes should be small.
Another point of interest: On the Ruby project, we had a refactor that took us 2 days of real code work (split a prime model class into two classes) and 2 weeks of test repairs.
At some point all those tests have a price, they are still code you have to maintain.
That said, I find TDD fun and a good way to get things started, even in Java, and I also would reiterate that I ALWAYS have some success path testing at the very least (even if it's just a quick main method) in virtually every class I write.
Good unit tests that provide sufficient coverage can make you sleep better at night.
If you use assertions, you can catch potential bugs that are missed by the unit tests (since the tests alone are sometimes not good enough), and you can sleep even better at night.
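For example (a hedged sketch with invented names), an assertion inside the production code can catch a violated assumption on inputs that no unit test happened to cover:

```csharp
using System.Diagnostics;

public class SalaryCalculator
{
    public decimal MonthlyFromAnnual(decimal annualSalary)
    {
        // A unit test might only cover "typical" salaries; the assertion
        // still fires in a debug build if a negative value ever sneaks through.
        Debug.Assert(annualSalary >= 0, "Annual salary should never be negative.");

        return annualSalary / 12m;
    }
}
```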
It saves me time, because when I run code TDD, it usually just works when it comes to integration time, so no need to spend agonizing hours debugging.
It also gives me confidence having a conversation with other developers claiming that API I created has bugs.
When you get to some critical mass of tests, a nice side-effect is often that if you are about to introduce a bug, it is likely to make some test fail, even if the tests aren't directly against the new, buggy code you are writing.
So you will be alerted to the bug you are about to commit before doing so, and you can then write tests against it and fix it right away.
(This is of course only true if you have tests that are not "just" very narrow test-one-thing-only tests, but I think that is more often the case than not.)
I have been involved in a lot of projects, both old and new, and one thing they have in common is that almost none of them have used unit testing. I prefer to use it, but often the customer isn't ready to pay for that and assumes the code will just work as it should.
So, do you use unit testing in your projects, or do you rely on your coding skills?
Using unit-testing is a coding skill.
I find it adds very little overhead to coding time. On top of that the code produced tends to be much easier to understand and to refactor, with an untold reduction in maintenance time.
A full discussion of the benefits here: Is Unit Testing worth the effort?
I'll be honest. I've only recently started writing unit tests, and when I go back to modify an old DLL that's from my bad old days, I'll hunker down and write unit tests to get it near 100% code coverage. I say "near" because the last few percent can be hard to get to due to the way that the code is structured, that it didn't have mocking or unit testing in mind (this happens a lot with the abstract classes or classes that P/Invoke into native code). And while I understand that 100% code coverage is not the same thing as 100% of all code execution paths, I've found that having the tests there is a good way to tell when I'm doing something that's going to break a lot of things.
To be honest, this is probably one reason why it has taken many developers (including myself) so long to get around to adopting unit testing (let's not get into TDD just yet).
I'm not saying that any of these are legitimate reasons, but they did more or less go through my head, and I bet they went through some of yours too:
First, there's a thought along the lines of "Oh, I've written mountains of code with zero tests, and seeing as I'm already crushed by that mountain, I don't possibly have the time to add several more mountains of test code on top of it."
Second, nobody likes to talk about the fact that the classic code-run-debug cycle actually works "good enough" when you're
writing new code that does not alter the behavior of old code
working on a relatively small software project or a throwaway utility
the sole developer on a project (you can keep most of it in your head, up to a point)
Third, unit tests are easier to appreciate when you're maintaining existing code, and if you're always writing new code, well, the benefits aren't immediately obvious. When your job is to churn out a utility and never touch it again, a developer can see little value in the unit test because they probably won't be working on that utility or at the company by the time a maintenance programmer (who wishes there was a unit test) comes around.
Fourth, unit testing is a fairly recent innovation. (Oh, hush ... I know the idea has been around forever, especially in mission-critical and DoD applications, but for the "Joe the Plumber Developer" types like me? Unit testing wasn't mentioned at all during my entire CS undergraduate career in the early part of this decade; in 2008, I hear it's a requirement for all projects from level 101 up. Visual Studio didn't even get a built-in testing framework until this year, and even then only in the professional edition.) Was it blissful ignorance on my part? Sure, and I think a lot of people who code because it's their day job (which is fine) simply aren't aware. If they are, then they know that they have to stay current, but when it comes to learning a new methodology (particularly one that involves writing more code and taking more up-front time when you needed that new feature done yesterday) means that it'll get pushed back. This is the long tail, and in a decade or so our talks about unit testing will seem as quaint as our mutterings about object-oriented programming enabling a new era of development during the 1990s. Heck, if I've started unit testing, the end must be near, because I'm usually an idiot.
Fifth, unit testing some areas of code is really difficult, and this scared me away for a while. Multi-threaded event queues, graphical rendering, and GUIs come to mind. Many developers realize this and think "well heck, unit testing would be great for library code but when was the last time I wrote just a class library." It takes a while to come around. To that end, no unit tests does not necessarily mean bad code lies ahead.
Finally, the level of evangelism that sometimes comes from unit testing advocates can be intimidating and off-putting. Seriously, I thought I was a moron because I just couldn't grasp how to mock a certain interface and why I would want to. Selfishly, I wanted to hate unit testing if just for the fact that everybody would not shut UP about it. No silver bullet.
So, apologies for the rambling post. In summary:
I did not unit test for a long time. Now I do. Unit tests are a great tool, especially during maintenance.
For existing projects, you can at least go in and write a test that documents a current input and its output. When you change that big mess of murky code in the future, you'll be able to see if you've altered that little snapshot. This is also an excellent way to change the development culture in your company.
Let the flames begin! =)
Unit testing can't be an afterthought; it is, and has to be, something which factors into your design. I would go so far as to say that even if you never write or call a single unit test, the benefits it has in leading to tight, component-driven software are worth the effort 95% of the time.
Writing and running unit-tests is part of a healthy coding process, it is not an addition the customer should have the choice to pay or not pay for.
Testing strategy is a coding issue just as any other: what datastructures to use, variable naming conventions, comment standards, etc.
I use unit testing, and tdd, whenever I can. However in my experience for every unit test advocate there are three or more developers who don't really believe writing unit tests is worth the effort, and that it slows down development. However these people tend to keep quiet, so you are unlikely to see many here.
I like Bob Martin's analogy: imagine you're a surgeon. What would you say to a patient who wanted to pay for the surgery but told you to skip washing up ahead of time?
When a client hires me to code they are hiring me as a professional, as someone with the skills and discipline of a professional. My job is to give them "code just works as it should", and I know the best way for me to do that is to use TDD.
After coming onto a couple of projects that were in production but needed major new functionality, one of my bottom lines as a technical lead starting up a project is that unit-tests are a must.
It just costs too much to try and rewrite code that has been written without unit tests. The code is invariably poorly structured (A multi-thousand line web-service all in a single code behind anyone?) and making changes to it (even when it is well structured) without introducing bugs is a really painful process.
This becomes particularly true when a project enters fire-fighting mode (and not having unit tests contributes to projects getting into that state too) - customers are getting grumpy, they've lost faith in the project, few things worse than being the poor guy trying to get the next fix in without introducing a whole pile of regression bugs, and not even having unit tests.
Those situations can be so easily avoided or at least mitigated by explaining the value of tests up front. Of course there are situations where unit tests aren't so important but they are truly the exception.
So yes - I insist on unit tests and have spent a lot of time fixing the messes made by other developers who relied on their coding skills.
I often use unit testing for complex mathematical algorithms, for example for functions like LineIntersectsLine, where you can define some important examples to test. After that, it is much easier to keep the function under control. You can simply rewrite or optimize it and check that it still works, or you can add new tests when you encounter bugs and then correct them.
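A hedged sketch of what such tests might look like (the intersection routine below is a simplified illustration rather than anyone's actual LineIntersectsLine; it only detects proper crossings, not touching or collinear overlap):

```csharp
using NUnit.Framework;

public static class Geometry
{
    // Returns true if segment p1-p2 strictly crosses segment q1-q2.
    public static bool LineIntersectsLine(
        (double X, double Y) p1, (double X, double Y) p2,
        (double X, double Y) q1, (double X, double Y) q2)
    {
        double d1 = Cross(q2.X - q1.X, q2.Y - q1.Y, p1.X - q1.X, p1.Y - q1.Y);
        double d2 = Cross(q2.X - q1.X, q2.Y - q1.Y, p2.X - q1.X, p2.Y - q1.Y);
        double d3 = Cross(p2.X - p1.X, p2.Y - p1.Y, q1.X - p1.X, q1.Y - p1.Y);
        double d4 = Cross(p2.X - p1.X, p2.Y - p1.Y, q2.X - p1.X, q2.Y - p1.Y);
        return ((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0)) &&
               ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0));
    }

    private static double Cross(double ax, double ay, double bx, double by) => ax * by - ay * bx;
}

[TestFixture]
public class LineIntersectionTests
{
    // A handful of "important examples" pin the behaviour down.
    [Test]
    public void CrossingDiagonals_Intersect()
    {
        Assert.IsTrue(Geometry.LineIntersectsLine((0, 0), (2, 2), (0, 2), (2, 0)));
    }

    [Test]
    public void ParallelSegments_DoNotIntersect()
    {
        Assert.IsFalse(Geometry.LineIntersectsLine((0, 0), (2, 0), (0, 1), (2, 1)));
    }

    [Test]
    public void DisjointSegments_DoNotIntersect()
    {
        Assert.IsFalse(Geometry.LineIntersectsLine((0, 0), (1, 1), (5, 5), (6, 7)));
    }
}
```

With a handful of cases like these in place, the function can be rewritten or optimized freely and the tests will catch a regression immediately.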
I find newer developers benefit more from unit testing than older ones, who learned in the school of hard knocks about the pitfalls that can make something fail. Unit testing does not lead to good design - it just leads to a design that is more testable. You need to test your code, but the way unit testing is preached is dangerous: "It will force you to design your code better", "How else can you test whether it is correct?".
I prefer to have well written structured code that when you read it automatically tells you simply what it is trying to accomplish and how it does it. Functions/classes should be small and concise. They should only have one responsibility. Unit tests don't protect against logical errors.
Unit tests give more false positives than anything else particularly when a project is first written. Good design trumps tests - tests should be the verification stage nothing more. I never bought into the testing comes before everything else concept. In my experience this line of thinking favours testability at the expense of extensible code (may be ok for throwaway projects or one off utilities but ironically unit testing isn't as important for these).
I agree with Ken that unit testing is part of software development.
About the cost question, it's true that writing the code plus the unit tests takes longer than writing just the code. However, when you write the code along with its tests - which is called TDD, Test-Driven Development - you end up with "clean code that works". When you just write the code, you then have to make it work, which can be long and painful.
Another benefit is that you know where you are, as the code that has been written has already been unit-tested.
To answer your question: yes, I'm using unit testing on my projects whenever it's possible. I write unit tests for all new code, and I strive to do the same for legacy code.
I'm a big fan of unit testing, mainly because it's completely addictive - the "code-test-see horrible red bar-fix-test-see lovely green bar" cycle in JUnit seriously gets the endorphins pumping.
My company makes system components for various companies in the Aerospace Industry.
The FAA requires Modified Condition/Decision Coverage (MC/DC) for Level A safety-critical flight software (DO-178B). So for verification we do (again from DO-178B):
Analysis of all code and traceability from tests and results to all requirements is typically required (depending on software level).
This process typically also involves:
Requirements based test tools
Code coverage analyser tools
Other names for tests performed in this process can be:
Unit testing
Integration testing
Black box and acceptance testing
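To make the MC/DC requirement concrete, here is a small illustrative sketch (the shouldDeploy decision is invented purely for the example; JUnit 4 is assumed). For a decision with n independent conditions, MC/DC needs at least n + 1 cases, each showing that one condition on its own can flip the outcome:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class DeploymentDecisionTest {

        // Invented decision for illustration: deploy only when a crash is
        // detected AND the system is armed.
        private boolean shouldDeploy(boolean crashDetected, boolean armed) {
            return crashDetected && armed;
        }

        @Test
        public void bothConditionsTrueDeploys() {
            assertTrue(shouldDeploy(true, true));   // baseline: T, T -> true
        }

        @Test
        public void crashDetectedIndependentlyAffectsTheDecision() {
            assertFalse(shouldDeploy(false, true)); // only crashDetected changed -> false
        }

        @Test
        public void armedIndependentlyAffectsTheDecision() {
            assertFalse(shouldDeploy(true, false)); // only armed changed -> false
        }
    }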
Unit testing reveals code errors all the time.
In regards to relying on my coding skills, I find that my coding skills actually improve when I use TDD and rely on my unit tests to correct me when I break a test. You learn how to use many language features because you can make a tweak to test out an assumption.
I know from experience that when I write code without unit tests, I'll get a few issues that come up to fix, but when I've written the unit tests I rarely hear of issues. Also, from time spent writing unit tests I write better code to begin with. I look at methods and think of the ways they could fail and build them from the start not to fail or at least to fail in a better way.
Unit tests ARE essential to writing better code, but they will also make you a better developer because you SEE areas where things could go wrong and fix them before you ever get to testing.
Unit testing is an essential part of development, and (I have found) will actually reduce the time to completion of a project while improving overall quality, especially when done in a TDD fashion.
I do unit test. I am currently building a constraint-validation engine, and my customers want it bulletproof. Well, without unit tests, I would die from stress...
So yes, I use it !
Most functions that I write have tests. Like many have said up there, unit testing is an essential part of software engineering. So, to answer your question, yes, I REALLY write and run tests.
Also, with continuous integration, regression tests will be automated and constantly reported.
Another very useful reminder of why you want to unit test wherever you can just showed up in this post: Top 12 Reasons to Write Unit Tests. I need to have that engraved on my retinas.
I can testify personally to the value of thinking about how something will be tested from the beginning, with tests developed through each iteration. I need to be more systematic, myself. I am printing out that list right now.
Your customers would probably save money overall if unit testing was in place. Some of the errors prevented by unit testing are much more of a liability if found later in the development stage rather than during unit testing. It saves so much headache in the future, now that I use it I don't think I could ever go back.
This certainly presupposes that unit testing is a good thing. Our projects have some level of unit testing, but it's inconsistent at best.
What are the most convincing ways that you have used or have had used with you to convince everyone that formalized unit testing is a good thing and that making it required is really in the best interest of the 'largeish' projects we work on. I am not a developer, but I am in Quality Assurance and would like to improve the quality of the work delivered to ensure it is ready to test.
By formalized unit tests, I'm simply talking about
Identifying the Unit Tests to be written
Identifying the test data (or describe it)
Writing these tests
Tracking these tests (and re-using as needed)
Making the results available
A very convincing way is to do formalized unit test yourself, regardless of what your team/company does. This might take some extra effort on your side, especially if you're not experienced with this sort of practice.
When you can then show your code is better and you are being more productive than your fellow developers, they are going to want to know why. Then feed them your favorite unit testing methods.
Once you've convinced your fellow developers, convince management together.
I use Maven with the Surefire and Cobertura plugins for all my builds. The actual test cases are created with JUnit, DbUnit and EasyMock.
Identifying Unit Tests
I try to follow Test-Driven Development, but to be honest I usually just do that for a handful of the test cases and then come back and create tests for the edge and exception cases later.
Identifying Test Data
DbUnit is great for loading test data for your unit tests.
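As a rough sketch of the kind of setup this involves (the JDBC URL, dataset file path and test class are made up for illustration; the API usage assumes DbUnit 2.4+ with its FlatXmlDataSetBuilder):

    import java.io.File;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;
    import org.junit.Before;

    public class CustomerDaoTest {

        private IDatabaseConnection dbUnitConnection;

        @Before
        public void loadTestData() throws Exception {
            // Plain JDBC connection to the test database (the URL is illustrative).
            Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            dbUnitConnection = new DatabaseConnection(jdbc);

            // customers.xml is a flat XML dataset: one element per row,
            // attributes are the column values.
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new File("src/test/resources/customers.xml"));

            // Wipe the tables named in the dataset and insert the known rows,
            // so every test starts from the same database state.
            DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
        }

        // ... tests that exercise the DAO against this known data ...
    }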
Writing Test Cases
I use JUnit to create the test cases. I try to write self documenting test cases but will use Javadocs to comment something that is not obvious.
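A self-documenting test case with a mocked collaborator might look roughly like this (the MailService and WelcomeMailer types are invented and kept inline so the sketch is self-contained; EasyMock's static API and JUnit 4 are assumed):

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class WelcomeMailerTest {

        // Hypothetical collaborator we do not want to hit for real in a unit test.
        interface MailService {
            boolean send(String to, String subject);
        }

        // Hypothetical class under test.
        static class WelcomeMailer {
            private final MailService mailService;

            WelcomeMailer(MailService mailService) {
                this.mailService = mailService;
            }

            boolean welcome(String emailAddress) {
                return mailService.send(emailAddress, "Welcome!");
            }
        }

        @Test
        public void welcomeSendsExactlyOneMailToTheNewUser() {
            MailService mailService = createMock(MailService.class);
            expect(mailService.send("alice@example.com", "Welcome!")).andReturn(true);
            replay(mailService);

            assertTrue(new WelcomeMailer(mailService).welcome("alice@example.com"));

            verify(mailService); // fails if the expected call never happened
        }
    }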
Tracking & Making The Results Available
I integrate the unit testing into my Maven build cycle using the Surefire plugin, and I use the Cobertura plugin to measure the coverage achieved by those tests. I always generate and publish a web site including the Surefire and Cobertura reports as part of my daily build so I can see which tests failed/passed.
The event which convinced me was when we managed to regress a bug three times, in three consecutive releases. Once I realised how much more productive I was as a programmer when I wasn't constantly fixing trivial mistakes after they had gone to the client, and I could have a warm fuzzy feeling that colleagues code would do what they claimed it would, I became a convert.
Back in the day when I did COBOL development on mainframes, we did this religiously in the several companies I worked in, and it was accepted as the way you did things because the environment enforced it. I think it was a very typical scheme for the era, and maybe some of the reasons might be applicable to you:
Like most mainframe environments we had three realms, development, Quality Assurance and Production. Programmers developed in development and unit tested there, and once they signed off and were happy the unit was migrated to the QA environment (with the test and results docs) where it was system tested by dedicated QA staff. The development to QA migration was a formal step which happened overnight. Once QA'ed the code was migrated to Production - and we had very few bugs.
The motivation to get the unit testing done and right was that if you didn't and a bug was found by QA staff it was obvious that you hadn't done the work. Consequently your reputation depended on how rigorous you were. Of course most people would end up with the occasional bug, but coders who produced solid tested code all the time soon got a star reputation and those who produced buggy code got noticed too. The push would be always to up your game, and consequently the culture produced was one that pushed towards bug free code delivered first time.
Extracting pertinent points -
Coder reputation tied up with delivery of bug free tested code
Significant overhead associated with moving unit tested code to the next level, so motivation not to repeat this and get it right first time.
System testing performed by different people to unit testing - ideally a different team.
I'm sure your environment will differ, but the principles might be translatable.
Sometimes leading by example is the best way. I also find it helps to remind people that certain things just don't happen when code is under test. Next time somebody asks you to write something, do it with tests regardless. Eventually your peers will be jealous of the ease with which you can change your code and know that it still works.
As for management, you need to emphasise how much time gets wasted due to the nuclear explosion that occurs when you need to make a change to codebase X that isn't under test.
Many developers don't realise just how much they refactor without ensuring they are preserving behaviour across the entire system. For me, this is the biggest benefit of unit testing and TDD.
Software requirements change
Software changes to suit the requirements
The only certainty is change. Changing code that is not under test requires the developer to be aware of every possible behavioural side effect. The reality is that coders who think they can reason through every permutation do so by a painstaking process of trial and error until nothing obviously breaks. At that point they check in.
The pragmatic programmer recognizes that he/she is not perfect and all knowing, and that tests are like a safety net that allows them to walk the refactoring tightrope quickly and safely.
As for when to write tests on greenfield code, I'd advocate doing it as much as possible. Spend the time defining the behaviours that you want out of your system and write tests initially to express those higher-level constructs. Unit tests can come as thoughts crystallize.
Hope this helps.
Education and/or certification.
Give your team members a formal training in the field of testing - maybe with certification exam (depending on your team members and your own attitude towards certification). You'll take testing to a higher level that way, and your team members will be more likely to take a professional attitude towards testing.
There is a big difference between convincing and requiring.
If you find a way to convince your colleagues to write them - great. However, if you create some formalized rules and require them to write unit tests, they will find a way to work around them. As a result you will get a bunch of unit tests which are worth nothing: there will be a unit test for every single class, and they will test setters and getters.
Think twice before creating and enforcing rules. Developers are good at overcoming them.
First time around you just need to go ahead and write them and show people that it's worth it. I've found on three projects that it's the only way to convince people. Some people who don't code (e.g. junior project managers) won't be able to see the value until it's staring them right in the face.
On my software team, we tend to write a small business case on these issues and present them to management in order to have the time available to create and track tests. We explain that the time taken to test is well made up for when crunch time comes and everything is on the line. We also set up a Hudson build server to centralize the tracking of the unit tests. This makes it a lot easier for the developers to keep track of failing tests and to discover recurring problems.
Remind your team or the other developers that they're professionals, not amateurs. Worked for me!
Also, it's an industry standard these days. Without unit testing experience, they are less desirable and less valuable as employees to potential future employers.
As a team lead, it is my responsibility to ensure that my programmers are doing unit testing on all the modules they work on. I suppose at this point, it's not even a question of how to convince them, it's required. Not sometimes, not on largish projects, all the time. Unit testing is the first line of defense against putting something in production that you will have to maintain. If something is put into production that has not been completely unit and system tested, then it will come back to bite you. I guess one of the policies we have here to support this is that if it blows in production, or causes problems, then the programmer responsible for coding and testing that module will be the one that has to take care of the problems, do the cleanup, etc. That alone is a fairly good motivator.
The other is that it is about pride. I work in a shop of about 75 coders; although that is large by some standards, it's really small enough for all of us to know one another. It's also small enough that we know what one another is working on, and when something does move to production, we are aware of any abends, failures, etc. If you are careful and do the unit and system testing, the chances of moving something to production without causing failures increase significantly. It may take a failure or two in production to drive the point home, but there are great rewards involved in not messing up. It's really nice to hear congratulations in the hallway when you move a project in and it doesn't screw up.
Write a bunch of them and demonstrate that unit testing has improved your productivity and the quality of your code. Without some kind of proof, sometimes people won't believe it's worth it.
So, two years after I asked this question, I find that one unexpected answer was that moving to a new SDLC was what was needed. Five years ago, we established our first formal SDLC. It improved our situation, but left out some important things, such as automation. We are now in the process of establishing a new SDLC (under new management) where one of the tenets is automation. Not just automated unit tests, but automated functional tests as well.
I guess the lesson is that I was thinking too small. If you are going to change how you create software, go 'whole hog' and make a drastic change rather than propose incremental change if you are not used to that.
You could take some inspiration from an initiative at Google. Their test team started putting up examples, tips and benefits inside the toilet cubicles to raise the profile of the merits of test automation.
https://testing.googleblog.com/2007/01/introducing-testing-on-toilet.html
I am working to integrate unit testing into the development process on the team I work on and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of Unit Testing? In my specific case we would be adding Unit Tests as we add functionality or fixed bugs. Unfortunately our code base does not lend itself to easy testing.
Every day in our office there is an exchange which goes something like this:
"Man, I just love unit tests, I've just been able to make a bunch of changes to the way something works, and then was able to confirm I hadn't broken anything by running the test over it again..."
The details change daily, but the sentiment doesn't. Unit tests and test-driven development (TDD) have so many hidden and personal benefits as well as the obvious ones that you just can't really explain to somebody until they're doing it themselves.
But, ignoring that, here's my attempt!
Unit Tests allow you to make big changes to code quickly. You know it works now because you've run the tests; when you make the changes you need to make, you need to get the tests working again. This saves hours.
TDD helps you to realise when to stop coding. Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
The tests and the code work together to achieve better code. Your code could be bad / buggy. Your TEST could be bad / buggy. In TDD you are banking on the chances of both being bad / buggy being low. Often it's the test that needs fixing but that's still a good outcome.
TDD helps with coding constipation. When faced with a large and daunting piece of work ahead writing the tests will get you moving quickly.
Unit Tests help you really understand the design of the code you are working on. Instead of writing code to do something, you are starting by outlining all the conditions you are subjecting the code to and what outputs you'd expect from that.
Unit Tests give you instant visual feedback, we all like the feeling of all those green lights when we've done. It's very satisfying. It's also much easier to pick up where you left off after an interruption because you can see where you got to - that next red light that needs fixing.
Contrary to popular belief unit testing does not mean writing twice as much code, or coding slower. It's faster and more robust than coding without tests once you've got the hang of it. Test code itself is usually relatively trivial and doesn't add a big overhead to what you're doing. This is one you'll only believe when you're doing it :)
I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful even if the rest of my code coverage is woefully incomplete.
Good unit tests can help document and define what something is supposed to do
Unit tests help with code re-use. Migrate both your code and your tests to your new project. Tweak the code till the tests run again.
A lot of work I'm involved with doesn't Unit Test well (web application user interactions etc.), but even so we're all test infected in this shop, and happiest when we've got our tests tied down. I can't recommend the approach highly enough.
Unit testing is a lot like going to the gym. You know it is good for you, all the arguments make sense, so you start working out. There's an initial rush, which is great, but after a few days you start to wonder if it is worth the trouble. You're taking an hour out of your day to change your clothes and run on a hamster wheel and you're not sure you're really gaining anything other than sore legs and arms.
Then, after maybe one or two weeks, just as the soreness is going away, a Big Deadline begins approaching. You need to spend every waking hour trying to get "useful" work done, so you cut out extraneous stuff, like going to the gym. You fall out of the habit, and by the time Big Deadline is over, you're back to square one. If you manage to make it back to the gym at all, you feel just as sore as you were the first time you went.
You do some reading, to see if you're doing something wrong. You begin to feel a little bit of irrational spite toward all the fit, happy people extolling the virtues of exercise. You realize that you don't have a lot in common. They don't have to drive 15 minutes out of the way to go to the gym; there is one in their building. They don't have to argue with anybody about the benefits of exercise; it is just something everybody does and accepts as important. When a Big Deadline approaches, they aren't told that exercise is unnecessary any more than your boss would ask you to stop eating.
So, to answer your question, Unit Testing is usually worth the effort, but the amount of effort required isn't going to be the same for everybody. Unit Testing may require an enormous amount of effort if you are dealing with a spaghetti code base in a company that doesn't actually value code quality. (A lot of managers will sing Unit Testing's praises, but that doesn't mean they will stick up for it when it matters.)
If you are trying to introduce Unit Testing into your work and are not seeing all the sunshine and rainbows that you have been led to expect, don't blame yourself. You might need to find a new job to really make Unit Testing work for you.
Best way to convince... find a bug, write a unit test for it, fix the bug.
That particular bug is unlikely to ever appear again, and you can prove it with your test.
If you do this enough, others will catch on quickly.
thetalkingwalnut asks:
What are some good ways to convince the skeptical developers on the team of the value of Unit Testing?
Everyone here is going to pile on lots of reasons out of the blue why unit testing is good. However, I find that often the best way to convince someone of something is to listen to their argument and address it point by point. If you listen and help them verbalize their concerns, you can address each one and perhaps convert them to your point of view (or at the very least, leave them without a leg to stand on). Who knows? Perhaps they will convince you why unit tests aren't appropriate for your situation. Not likely, but possible. Perhaps if you post their arguments against unit tests, we can help identify the counterarguments.
It's important to listen to and understand both sides of the argument. If you try to adopt unit tests too zealously without regard to people's concerns, you'll end up with a religious war (and probably really worthless unit tests). If you adopt it slowly and start by applying it where you will see the most benefit for the least cost, you might be able to demonstrate the value of unit tests and have a better chance of convincing people. I realize this isn't as easy as it sounds - it usually requires some time and careful metrics to craft a convincing argument.
Unit tests are a tool, like any other, and should be applied in such a way that the benefits (catching bugs) outweigh the costs (the effort writing them). Don't use them if/where they don't make sense and remember that they are only part of your arsenal of tools (e.g. inspections, assertions, code analyzers, formal methods, etc). What I tell my developers is this:
They can skip writing a test for a method if they have a good argument why it isn't necessary (e.g. too simple to be worth it or too difficult to be worth it) and how the method will be otherwise verified (e.g. inspection, assertions, formal methods, interactive/integration tests). They need to consider that some verifications like inspections and formal proofs are done at a point in time and then need to be repeated every time the production code changes, whereas unit tests and assertions can be used as regression tests (written once and executed repeatedly thereafter). Sometimes I agree with them, but more often I will debate about whether a method is really too simple or too difficult to unit test.
If a developer argues that a method seems too simple to fail, isn't it worth taking the 60 seconds necessary to write up a simple 5-line unit test for it? These 5 lines of code will run every night (you do nightly builds, right?) for the next year or more and will be worth the effort if even just once it happens to catch a problem that may have taken 15 minutes or longer to identify and debug. Besides, writing the easy unit tests drives up the count of unit tests, which makes the developer look good.
On the other hand, if a developer argues that a method seems too difficult to unit test (not worth the significant effort required), perhaps that is a good indication that the method needs to be divided up or refactored to test the easy parts. Usually, these are methods that rely on unusual resources like singletons, the current time, or external resources like a database result set. These methods usually need to be refactored into a method that gets the resource (e.g. calls getTime()) and a method that takes the resource as an argument (e.g. takes the timestamp as a parameter). I let them skip testing the method that retrieves the resource, and they instead write a unit test for the method that now takes the resource as an argument. Usually, this makes writing the unit test much simpler and therefore worthwhile to write.
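A minimal sketch of that refactoring (the Session class and its fields are invented purely for illustration; JUnit 4 is assumed). The thin wrapper that fetches the current time stays untested, while the logic that takes the timestamp as an argument gets the tests:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class SessionExpiryTest {

        // Hypothetical class under test. Before refactoring, isExpired() called
        // System.currentTimeMillis() internally and was awkward to unit test.
        static class Session {
            private final long createdAtMillis;
            private final long timeToLiveMillis;

            Session(long createdAtMillis, long timeToLiveMillis) {
                this.createdAtMillis = createdAtMillis;
                this.timeToLiveMillis = timeToLiveMillis;
            }

            // Thin, untested wrapper that fetches the resource (the current time).
            boolean isExpired() {
                return isExpiredAt(System.currentTimeMillis());
            }

            // Pure logic that takes the resource as an argument - easy to test.
            boolean isExpiredAt(long nowMillis) {
                return nowMillis - createdAtMillis > timeToLiveMillis;
            }
        }

        @Test
        public void sessionWithinItsTimeToLiveIsNotExpired() {
            assertFalse(new Session(1000, 500).isExpiredAt(1400));
        }

        @Test
        public void sessionPastItsTimeToLiveIsExpired() {
            assertTrue(new Session(1000, 500).isExpiredAt(1600));
        }
    }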
The developer needs to draw a "line in the sand" in how comprehensive their unit tests should be. Later in development, whenever we find a bug, they should determine if more comprehensive unit tests would have caught the problem. If so and if such bugs crop up repeatedly, they need to move the "line" toward writing more comprehensive unit tests in the future (starting with adding or expanding the unit test for the current bug). They need to find the right balance.
It's important to realize that unit tests are not a silver bullet and there is such a thing as too much unit testing. At my workplace, whenever we do a lessons-learned session, I inevitably hear "we need to write more unit tests". Management nods in agreement because it's been banged into their heads that "unit tests" == "good".
However, we need to understand the impact of "more unit tests". A developer can only write ~N lines of code a week, and you need to figure out what percentage of that code should be unit test code vs production code. A lax workplace might have 10% of the code as unit tests and 90% as production code, resulting in a product with a lot of (albeit very buggy) features (think MS Word). On the other hand, a strict shop with 90% unit tests and 10% production code will have a rock-solid product with very few features (think "vi"). You may never hear reports about the latter product crashing, but that likely has as much to do with the product not selling very well as it has to do with the quality of the code.
Worse yet, perhaps the only certainty in software development is that "change is inevitable". Assume the strict shop (90% unit tests/10% production code) creates a product that has exactly 2 features (assuming 5% of production code == 1 feature). If the customer comes along and changes 1 of the features, then that change trashes 50% of the code (45% of unit tests and 5% of the production code). The lax shop (10% unit tests/90% production code) has a product with 18 features, none of which work very well. Their customer completely revamps the requirements for 4 of their features. Even though the change is 4 times as large, only half as much of the code base gets trashed (~25% = ~4.4% unit tests + 20% of production code).
My point is that you have to communicate that you understand that balance between too little and too much unit testing - essentially that you've thought through both sides of the issue. If you can convince your peers and/or your management of that, you gain credibility and perhaps have a better chance of winning them over.
I have toyed with unit testing a number of times, and I am still to be convinced that it is worth the effort given my situation.
I develop websites, where much of the logic involves creating, retrieving or updating data in the database. When I have tried to "mock" the database for unit testing purposes, it has got very messy and seemed a bit pointless.
When I have written unit tests around business logic, it has never really helped me in the long run. Because I largely work on projects alone, I tend to know intuitively which areas of code may be affected by something I am working on, and I test these areas manually. I want to deliver a solution to my client as quickly as possible, and unit testing often seems a waste of time. I list manual tests and walk through them myself, ticking them off as I go.
I can see that it may be beneficial when a team of developers are working on a project and updating each other's code, but even then I think that if the developers are of a high quality, good communication and well-written code should often be enough.
One great thing about unit tests is that they serve as documentation for how your code is meant to behave. Good tests are kind of like a reference implementation, and teammates can look at them to see how to integrate their code with yours.
Unit-testing is well worth the initial investment. Since starting to use unit-testing a couple of years ago, I've found some real benefits:
regression testing removes the fear of making changes to code (there's nothing like the warm glow of seeing code work or explode every time a change is made)
executable code examples for other team members (and yourself in six months time..)
merciless refactoring - this is incredibly rewarding, try it!
Code snippets can be a great help in reducing the overhead of creating tests. It isn't difficult to create snippets that enable the creation of a class outline and an associated unit-test fixture in seconds.
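For example, a snippet can expand into a skeleton like the following and leave you to fill in the specifics (here the JDK's StringBuilder stands in for whatever class you are actually testing; JUnit 4 is assumed):

    import static org.junit.Assert.assertEquals;

    import org.junit.Before;
    import org.junit.Test;

    // The kind of outline an editor snippet can stamp out in a couple of
    // keystrokes; only the class under test and the assertions change each time.
    public class StringBuilderTest {

        private StringBuilder subject;

        @Before
        public void setUp() {
            subject = new StringBuilder();
        }

        @Test
        public void appendConcatenatesText() {
            subject.append("foo").append("bar");
            assertEquals("foobar", subject.toString());
        }
    }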
You should test as little as possible!
Meaning: you should write just enough unit tests to reveal intent. This often gets glossed over. Unit testing costs you. If you make changes and you have to change tests, you will be less agile. Keep unit tests short and sweet. Then they have a lot of value.
Too often I see lots of tests that will never break, are big and clumsy and don't offer a lot of value, they just end up slowing you down.
I didn't see this in any of the other answers, but one thing I noticed is that I could debug so much faster. You don't need to drill down through your app with just the right sequence of steps to get to the code you're fixing, only to find you've made a boolean error and need to do it all again. With a unit test, you can just step directly into the code you're debugging.
[I have a point to make that I can't see above]
"Everyone unit tests, they don't necessarily realise it - FACT"
Think about it: you write a function to maybe parse a string and remove new line characters. As a newbie developer you either run a few cases through it from the command line by implementing it in Main(), or you whack together a visual front end, tie your function up to a couple of text boxes and a button, and see what happens.
That is unit testing - basic and badly put together but you test the piece of code for a few cases.
You write something more complex. It throws errors when you throw a few cases through (unit testing), and you debug into the code and trace through. You look at values as you go and decide if they are right or wrong. This is unit testing to some degree.
Unit testing here is really taking that behaviour, formalising it into a structured pattern and saving it so that you can easily re-run those tests. If you write a "proper" unit test case rather than testing manually, it takes the same amount of time, or maybe less as you get experienced, and you have it available to repeat again and again.
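Formalised, the newline-stripping example above might look like this (stripNewLines is a stand-in implementation written just for the sketch; JUnit 4 is assumed):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class StripNewLinesTest {

        // Stand-in for the kind of function a newbie would poke at from Main()
        // or a throwaway button handler.
        static String stripNewLines(String input) {
            return input.replace("\r", "").replace("\n", "");
        }

        @Test
        public void removesUnixNewLines() {
            assertEquals("ab", stripNewLines("a\nb"));
        }

        @Test
        public void removesWindowsNewLines() {
            assertEquals("ab", stripNewLines("a\r\nb"));
        }

        @Test
        public void leavesTextWithoutNewLinesAlone() {
            assertEquals("abc", stripNewLines("abc"));
        }
    }

The same cases you would have typed in at the command line are now saved and re-runnable with one click.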
For years, I've tried to convince people that they needed to write unit tests for their code. Whether they wrote the tests first (as in TDD) or after they coded the functionality, I always tried to explain to them all the benefits of having unit tests for code. Hardly anyone disagreed with me. You cannot disagree with something that is obvious, and any smart person will see the benefits of unit tests and TDD.
The problem with unit testing is that it requires a behavioral change, and it is very hard to change people's behavior. With words, you will get a lot of people to agree with you, but you won't see many changes in the way they do things.
You have to convince people by doing. Your personal success will attract more people than all the arguments you may have. If they see you are not just talking about unit tests or TDD, but you are doing what you preach, and you are successful, people will try to imitate you.
You should also take on a lead role, because no one writes unit tests right the first time, so you may need to coach them on how to do it, show them the way, and the tools available to them. Help them while they write their first tests, review the tests they write on their own, and show them the tricks, idioms and patterns you've learned through your own experiences. After a while, they will start seeing the benefits on their own, and they will change their behavior to incorporate unit tests or TDD into their toolbox.
Changes won't happen overnight, but with a little patience, you may achieve your goal.
A major part of test-driven development that is often glossed over is the writing of testable code. It seems like some kind of a compromise at first, but you'll discover that testable code is also ultimately modular, maintainable and readable.
If you still need help convincing people this is a nice simple presentation about the advantages of unit testing.
If your existing code base doesn't lend itself to unit testing, and it's already in production, you might create more problems than you solve by trying to refactor all of your code so that it is unit-testable.
You may be better off putting efforts towards improving your integration testing instead. There's lots of code out there that's just simpler to write without a unit test, and if a QA can validate the functionality against a requirements document, then you're done. Ship it.
The classic example of this in my mind is a SqlDataReader embedded in an ASPX page linked to a GridView. The code is all in the ASPX file. The SQL is in a stored procedure. What do you unit test? If the page does what it's supposed to do, should you really redesign it into several layers so you have something to automate?
One of the best things about unit testing is that your code will become easier to test as you do it. Preexisting code created without tests is always a challenge: since it wasn't meant to be unit-tested, it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your classes - like a reference to an e-mail sending service - and so on. But don't let this bring you down! You'll see that your overall code design will become better as you start to write unit tests, and the more you test, the more confident you'll become about making even more changes to it without fear of breaking your application or introducing bugs.
There are several reasons to unit-test your code, but as time progresses, you'll find out that the time you save on testing is one of the best reasons to do it. In a system I've just delivered, I insisted on doing automated unit-testing in spite of the claims that I'd spend way more time doing the tests than I would by testing the system manually. With all my unit tests done, I run more than 400 test cases in less than 10 minutes, and every time I had to do a small change in the code, all it took me to be sure the code was still working without bugs was ten minutes. Can you imagine the time one would spend to run those 400+ test cases by hand?
When it comes to automated testing - be it unit testing or acceptance testing - everyone thinks it's a wasted effort to code what you can do manually, and sometimes it's true - if you plan to run your tests only once. The best part of automated testing is that you can run the tests several times without effort, and after the second or third run, the time and effort you've invested has already paid for itself.
One last piece of advice would be to not only unit test your code, but to start doing test-first development (see TDD and BDD for more).
Unit tests are also especially useful when it comes to refactoring or re-writing a piece of code. If you have good unit test coverage, you can refactor with confidence. Without unit tests, it is often hard to ensure that you didn't break anything.
In short - yes. They are worth every ounce of effort... to a point. Tests are, at the end of the day, still code, and much like typical code growth, your tests will eventually need to be refactored in order to be maintainable and sustainable. There's a tonne of GOTCHAS! when it comes to unit testing, but man oh man oh man, nothing, and I mean NOTHING empowers a developer to make changes more confidently than a rich set of unit tests.
I'm working on a project right now.... it's somewhat TDD, and we have the majority of our business rules encapsulated as tests... we have about 500 or so unit tests right now. This past iteration I had to revamp our datasource and how our desktop application interfaces with that datasource. Took me a couple of days; the whole time I just kept running unit tests to see what I broke and fixed it. Make a change; build and run your tests; fix what you broke. Wash, rinse, repeat as necessary. What would have traditionally taken days of QA and boatloads of stress was instead a short and enjoyable experience.
Prep up front, a little bit of extra effort, and it pays 10-fold later on when you have to start dicking around with core features/functionality.
I bought this book - it's a Bible of xUnit Testing knowledge - tis probably one of the most referenced books on my shelf, and I consult it daily: link text
Occasionally either I or one of my co-workers will spend a couple of hours getting to the bottom of a slightly obscure bug, and once the cause of the bug is found, 90% of the time that code isn't unit tested. The unit test doesn't exist because the dev is cutting corners to save time, but then loses that time and more in debugging.
Taking the small amount of time to write a unit test can save hours of future debugging.
I'm working as a maintenance-engineer of a poorly documented, awful and big code base. I wish the people who wrote the code had written the unit tests for it.
Each time I make a change and update the production code I'm scared that I might introduce a bug for not having considered some condition.
If they had written the tests, making changes to the code base would be easier and faster (and at the same time the code base would be in a better state).
I think unit tests prove very useful when writing APIs or frameworks that have to last for many years and be used/modified/evolved by people other than the original coders.
Unit Testing is definitely worth the effort. Unfortunately you've chosen a difficult (but unfortunately common) scenario into which to implement it.
The best benefit from unit testing you'll get is when using it from the ground up - on a few select, small projects I've been fortunate enough to write my unit tests before implementing my classes (the interface was already complete at that point). With proper unit tests, you will find and fix bugs in your classes while they're still in their infancy and not anywhere near the complex system that they'll undoubtedly be integrated into in the future.
If your software is solidly object oriented, you should be able to add unit testing at the class level without too much effort. If you aren't that fortunate, you should still try to incorporate unit testing wherever you can. Make sure when you add new functionality the new pieces are well defined with clear interfaces and you'll find unit testing makes your life much easier.
When you said "our code base does not lend itself to easy testing", that's the first sign of a code smell. Writing unit tests means you typically write code differently in order to make it more testable. This is a good thing in my opinion, as what I've seen over the years in writing code that I had to write tests for is that it forced me to put forth a better design.
I do not know. A lot of places do not do unit testing, but the quality of their code is good. Microsoft does unit testing, but Bill Gates got a blue screen at his presentation.
I wrote a very large blog post about the topic. I've found that unit testing alone isn't worth the work and usually gets cut when deadlines get closer.
Instead of talking about unit testing from the "test-after" verification point of view, we should look at the true value found when you set out to write a spec/test/idea before the implementation.
This simple idea has changed the way I write software and I wouldn't go back to the "old" way.
How test first development changed my life
Yes - unit testing is definitely worth the effort, but you should know it's not a silver bullet. Unit testing is work, and you will have to work to keep the tests updated and relevant as the code changes, but the value offered is worth the effort you have to put in. The ability to refactor with impunity is a huge benefit, as you can always validate functionality by running your tests after any code change. The trick is to not get too hung up on exactly the unit of work you're testing, or how you are scaffolding test requirements, or when a unit test is really a functional test, etc. People will argue about this stuff for hours on end, and the reality is that any testing you do as you write code is better than not doing it. The other axiom is about quality and not quantity - I have seen code bases with thousands of tests that are essentially meaningless, as the tests don't really verify anything useful or anything domain-specific like the business rules of the particular domain. I've also seen code bases with 30% code coverage where the tests were relevant, meaningful and really awesome, as they tested the core functionality of the code and expressed how the code should be used.
One of my favorite tricks when exploring new frameworks or codebases is to write unit-tests for 'it' to discover how things work. It's a great way to learn more about something new instead of reading a dry doc :)
I recently went through the exact same experience in my workplace and found most of them knew the theoretical benefits but had to be sold on the benefits to them specifically, so here were the points I used (successfully):
They save time when performing negative testing, where you handle unexpected inputs (null pointers, out-of-bounds values, etc.), as you can do all of these in a single process (see the sketch after this list).
They provide immediate feedback at compile time regarding the standard of the changes.
They are useful for testing internal data representations that may not be exposed during normal runtime.
and the big one...
You might not need unit testing, but when someone else comes in and modifies the code without a full understanding it can catch a lot of the silly mistakes they might make.
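As a sketch of that negative-testing point (the Account class and its rules are invented for illustration and kept inline so the example is self-contained; JUnit 4's expected-exception support is assumed):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AccountNegativeTest {

        // Invented class under test.
        static class Account {
            private int balance;

            Account(int openingBalance) {
                this.balance = openingBalance;
            }

            int withdraw(Integer amount) {
                if (amount == null) {
                    throw new IllegalArgumentException("amount must not be null");
                }
                if (amount < 0 || amount > balance) {
                    throw new IllegalArgumentException("amount out of range: " + amount);
                }
                balance -= amount;
                return balance;
            }
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNullAmount() {
            new Account(100).withdraw(null);
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsAmountLargerThanBalance() {
            new Account(100).withdraw(150);
        }

        @Test
        public void allowsWithdrawalWithinBalance() {
            assertEquals(60, new Account(100).withdraw(40));
        }
    }

All of the unhappy paths run in one batch, instead of being clicked through one at a time in the UI.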
I discovered TDD a couple of years ago, and have since written all my pet projects using it. I have estimated that it takes roughly the same time to TDD a project as it takes to cowboy it together, but I have such increased confidence in the end product that I can't help a feeling of accomplishment.
I also feel that it improves my design style (much more interface-oriented in case I need to mock things together) and, as the green post at the top writes, it helps with "coding constipation": when you don't know what to write next, or you have a daunting task in front of you, you can write small.
Finally, I find that by far the most useful application of TDD is in the debugging, if only because you've already developed an interrogatory framework with which you can prod the project into producing the bug in a repeatable fashion.
One thing no one has mentioned yet is getting the commitment of all developers to actually run and update any existing automated tests. Automated tests that you come back to and find broken because of new development lose a lot of their value and make automated testing really painful. Most of those tests will not be indicating bugs, since the developer has tested the code manually, so the time spent updating them is just waste.
Convincing the skeptics not to destroy the work others are doing on unit tests is a lot more important for getting value from the testing, and might be easier.
Spending hours updating tests that have broken because of new features each time you update from the repository is neither productive nor fun.
If you are using NUnit one simple but effective demo is to run NUnit's own test suite(s) in front of them. Seeing a real test suite giving a codebase a workout is worth a thousand words...
Unit testing helps a lot in projects that are larger than any one developer can hold in their head. They allow you to run the unit test suite before checkin and discover if you broke something. This cuts down a lot on instances of having to sit and twiddle your thumbs while waiting for someone else to fix a bug they checked in, or going to the hassle of reverting their change so you can get some work done. It's also immensely valuable in refactoring, so you can be sure that the refactored code passes all the tests that the original code did.
With a unit test suite one can make changes to code while leaving the rest of the features intact. It's a great advantage. Do use a unit test suite and a regression test suite whenever you finish coding a new feature.
I agree with the point of view opposite to the majority here:
It's OK Not to Write Unit Tests
Prototype-heavy programming (AI, for example) is especially difficult to combine with unit testing.