I'm strongly considering adding unit testing to an existing project that is in production. It was started 18 months ago, before I could really see any benefit of TDD (face palm), so now it's a rather large solution with a number of projects and I haven't the foggiest idea where to start in adding unit tests. What's making me consider this is that occasionally an old bug seems to resurface, or a bug is checked in as fixed without really being fixed. Unit testing would reduce or prevent these issues from occurring.
By reading similar questions on SO, I've seen recommendations such as starting at the bug tracker and writing a test case for each bug to prevent regression. However, I'm concerned that I'll miss the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get-go.
Are there any processes or steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in? How can I ensure that the tests are of good quality and aren't just a case of "any test is better than no tests"?
So I guess what I'm also asking is:
Is it worth the effort for an existing solution that's in production?
Would it be better to ignore the testing for this project and add it in a possible future re-write?
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
(Obviously the answer to the third point is entirely dependent on whether you're speaking to management or a developer.)
Reason for Bounty
Adding a bounty to try and attract a broader range of answers that not only confirm my existing suspicion that this is a good thing to do, but also offer some good reasons against it.
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD. I want to approach this challenge and develop my reasoning without my own biased point of view.
I've introduced unit tests to code bases that did not have them previously. The last big project where I did this, the product was already in production with zero unit tests when I arrived on the team. When I left, two years later, we had 4,500+ tests yielding about 33% code coverage in a code base with 230,000+ production LOC (a real-time financial WinForms application). That may sound low, but the result was a significant improvement in code quality and defect rate, plus improved morale and profitability.
It can be done when you have both an accurate understanding and commitment from the parties involved.
First of all, it is important to understand that unit testing is a skill in itself. You can be a very productive programmer by "conventional" standards and still struggle to write unit tests in a way that scales in a larger project.
Also, and specifically for your situation, adding unit tests to an existing code base that has none is a specialized skill in itself. Unless you or somebody on your team has successful experience with introducing unit tests to an existing code base, I would say reading Feathers' book (Working Effectively with Legacy Code) is a requirement (not optional or merely strongly recommended).
Making the transition to unit testing your code is an investment in people and skills just as much as in the quality of the code base. Understanding this is very important in terms of mindset and managing expectations.
Now, for your comments and questions:
However, I'm concerned that I'll miss the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get-go.
Short answer: Yes, you will miss tests and yes they might not initially look like what they would have in a green field situation.
Deeper level answer is this: It does not matter. You start with no tests. Start adding tests, and refactor as you go. As skill levels get better, start raising the bar for all newly written code added to your project. Keep improving etc...
Now, reading between the lines here, I get the impression that this is coming from the mindset of "perfection as an excuse for not taking action". A better mindset is to focus on self-trust. You may not know how to do it yet, but you will figure it out as you go and fill in the blanks. Therefore, there is no reason to worry.
Again, it's a skill. You cannot go from zero tests to TDD perfection in one linear "process" or "step by step" cookbook approach. It will be a process. Your expectation should be to make gradual, incremental progress and improvement. There is no magic pill.
The good news is that as the months (and even years) pass, your code will gradually start to become "proper", well-factored and well-tested code.
As a side note: you will find that the primary obstacle to introducing unit tests in an old code base is lack of cohesion and excessive dependencies. You will therefore probably find that the most important skill becomes how to break existing dependencies and decouple code, rather than how to write the actual unit tests themselves.
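To make that concrete, here is a minimal sketch of the most common dependency-breaking move: extracting an interface so a class that used to call a concrete component directly can be tested in isolation. All names are hypothetical, not taken from the question's code base.

    // Hypothetical legacy class: it used to new up SmtpMailSender internally,
    // which made it impossible to test Confirm() without a mail server.
    public class OrderProcessor
    {
        private readonly IMailSender _mail;

        // The dependency now comes in through the constructor, so a test can
        // pass a fake while production code passes the real sender.
        public OrderProcessor(IMailSender mail)
        {
            _mail = mail;
        }

        public void Confirm(string customerEmail)
        {
            // ...business logic that is now testable in isolation...
            _mail.Send(customerEmail, "Your order is confirmed.");
        }
    }

    // The extracted interface is the "seam" that lets tests cut the dependency.
    public interface IMailSender
    {
        void Send(string to, string body);
    }

    // The original concrete class simply implements the new interface.
    public class SmtpMailSender : IMailSender
    {
        public void Send(string to, string body)
        {
            // real SMTP call lives here
        }
    }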
Are there any processes or steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in?
Unless you already have it, set up a build server and set up a continuous integration build that runs on every checkin including all unit tests with code coverage.
Train your people.
Start somewhere and start adding tests while you make progress from the customer's perspective (see below).
Use code coverage as a guiding reference of how much of your production code base is under test.
Build time should always be FAST. If your build time is slow, your unit testing skills are lagging. Find the slow tests and improve them (decouple production code and test in isolation). Written well, you should easily be able to have several thousand unit tests and still complete a build in under 10 minutes (roughly 1 ms to a few ms per test is a good but very rough guideline; a few exceptions may apply, such as code using reflection). There's a sketch of separating out slow tests after these steps.
Inspect and adapt.
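As promised above, here is a small sketch (assuming NUnit; the calculator class is invented) of tagging the unavoidable slow tests with a category, so the continuous build can run the fast suite on every check-in and the slow suite less often.

    using NUnit.Framework;

    [TestFixture]
    public class PricingTests
    {
        [Test]
        public void Margin_is_applied_to_base_price()
        {
            // Fast, isolated unit test: runs on every check-in.
            Assert.AreEqual(110m, PriceCalculator.WithMargin(100m, 0.10m));
        }

        [Test, Category("Slow")]
        public void Prices_can_be_imported_from_the_test_database()
        {
            // Tagged "Slow" so the main build can exclude this category and
            // keep its feedback loop well under ten minutes.
            // ...integration-style body omitted...
        }
    }

    public static class PriceCalculator
    {
        public static decimal WithMargin(decimal basePrice, decimal margin)
        {
            return basePrice * (1 + margin);
        }
    }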
How can I ensure that the tests are of good quality and aren't just a case of "any test is better than no tests"?
Your own judgement must be your primary source of reality. There is no metric that can replace skill.
If you don't have that experience or judgement, consider contracting someone who does.
Two rough secondary indicators are total code coverage and build speed.
Is it worth the effort for an existing solution that's in production?
Yes. The vast majority of the money spent on a custom built system or solution is spent after it is put in production. And investing in quality, people and skills should never be out of style.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
You would have to take into consideration, not only the investment in people and skills, but most importantly the total cost of ownership and the expected life time of the system.
My personal answer would be "yes, of course" in the majority of cases because I know it's just so much better, but I recognize that there might be exceptions.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
Neither. Your approach should be to add tests to your code base WHILE you are making progress in terms of functionality.
Again, it is an investment in people, skills AND the quality of the code base, and as such it will require time. Team members need to learn how to break dependencies, write unit tests, learn new habits, improve discipline and quality awareness, design software better, etc. It is important to understand that when you start adding tests, your team members likely don't yet have these skills at the level they need for the approach to be successful, so stopping progress to spend all your time adding a lot of tests simply won't work.
Also, adding unit tests to an existing code base of any sizeable project is a LARGE undertaking which requires commitment and persistence. You can't change something fundamental, expect a lot of learning along the way, and at the same time ask your sponsor to accept zero ROI while you halt the flow of business value. That won't fly, and frankly it shouldn't.
Thirdly, you want to instill sound, business-focused values in your team. Quality never comes at the expense of the customer, and you can't go fast without quality. Also, the customer lives in a changing world, and your job is to make it easier for them to adapt. Customer alignment requires both quality and the flow of business value.
What you are doing is paying off technical debt, and you are doing so while still serving your customers' ever-changing needs. Gradually, as debt is paid off, the situation improves and it becomes easier to serve the customer better and deliver more value. This positive momentum is what you should aim for, because it underlines the principle of sustainable pace and will maintain and improve morale - for your development team, your customer and your stakeholders.
Is it worth the effort for an existing solution that's in production?
Yes!
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No!
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
Adding testing (especially automated testing) makes it much easier to keep the project working in the future, and it makes it significantly less likely that you'll ship stupid problems to the user.
Tests to put in a priori are ones that check whether what you believe to be the public interface to your code (and each module in it) works the way you think it does. If you can, also try to induce each isolated failure mode that your code modules should have. Note that this can be non-trivial, and you should be careful not to check too closely how things fail; for example, you don't really want to count the number of log messages produced on failure, since verifying that the failure is logged at all is enough.
Then put in a test for each current bug in your bug database that induces exactly the bug and which will pass when the bug is fixed. Then fix those bugs! :-)
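Such a bug-reproducing test might look like the sketch below (NUnit assumed; the bug number and the Invoice class are invented for illustration). Written before the fix it fails, reproducing the defect; once the fix is in it passes and keeps the bug from resurfacing.

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class InvoiceRegressionTests
    {
        [Test]
        public void Bug1234_total_is_rounded_once_not_per_line()
        {
            var invoice = new Invoice();
            invoice.AddLine(0.333m);
            invoice.AddLine(0.333m);
            invoice.AddLine(0.334m);

            // The buggy version rounded each line first and returned 0.99.
            Assert.AreEqual(1.00m, invoice.Total());
        }
    }

    public class Invoice
    {
        private readonly List<decimal> _lines = new List<decimal>();

        public void AddLine(decimal amount)
        {
            _lines.Add(amount);
        }

        public decimal Total()
        {
            decimal sum = 0m;
            foreach (var line in _lines)
            {
                sum += line;
            }
            return Math.Round(sum, 2);   // round the total once, not per line
        }
    }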
It does cost time up front to add tests, but you get paid back many times over at the back end as your code ends up being of much higher quality. That matters enormously when you're trying to ship a new version or carry out maintenance.
The problem with retrofitting unit tests is you'll realise you didn't think of injecting a dependency here or using an interface there, and before long you'll be rewriting the entire component. If you have the time to do this, you'll build yourself a nice safety net, but you could have introduced subtle bugs along the way.
I've been involved with many projects which really needed unit tests from day one, and there is no easy way to get them in there short of a complete rewrite, which usually cannot be justified when the code is working and already making money. Recently, I have resorted to writing PowerShell scripts that exercise the code in a way that reproduces a defect as soon as it is raised, and then keeping these scripts as a suite of regression tests for further changes down the line. That way you can at least start to build up some tests for the application without changing it too much; however, these are more like end-to-end regression tests than proper unit tests.
I do agree with what most everyone else has said. Adding tests to existing code is valuable. I will never disagree with that point, but I would like to add one caveat.
Although adding tests to existing code is valuable, it does come at a cost. It comes at the cost of not building out new features. How these two things balance out depends entirely on the project, and there are a number of variables.
How long will it take you to put all that code under test? Days? Weeks? Months? Years?
Who are you writing this code for? Paying customers? A professor? An open source project?
What is your schedule like? Do you have hard deadlines you must meet? Do you have any deadlines at all?
Again, let me stress, tests are valuable and you should work to put your old code under test. This is really more a matter of how you approach it. If you can afford to drop everything and put all your old code under test, do it. If that's not realistic, here's what you should do at the very least:
Any new code you write should be completely under unit test
Any old code you happen to touch (bug fix, extension, etc.) should be put under unit test
Also, this is not an all or nothing proposition. If you have a team of, say, four people, and you can meet your deadlines by putting one or two people on legacy testing duty, by all means do that.
Edit:
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD.
This is like asking "What are the pros and cons to using source control?" or "What are the pros and cons to interviewing people before hiring them?" or "What are the pros and cons to breathing?"
Sometimes there is only one side to the argument. You need to have automated tests of some form for any project of any complexity. No, tests don't write themselves, and, yes, it will take a little extra time to get things out the door. But in the long run it will take more time and cost more money to fix bugs after the fact than write tests up front. Period. That's all there is to it.
When we started adding tests, it was to a ten-year-old, approximately million-line codebase, with far too much logic in the UI and in the reporting code.
One of the first things we did (after setting up a continuous build server) was to add regression tests. These were end-to-end tests.
Each test suite starts by initializing the database to a known state. We actually have dozens of regression datasets that we keep in Subversion (in a separate repository from our code, because of the sheer size). Each test's FixtureSetUp copies one of these regression datasets into a temp database, and then runs from there.
The test fixture setup then runs some process whose results we're interested in. (This step is optional -- some regression tests exist only to test the reports.)
Then each test runs a report, outputs the report to a .csv file, and compares the contents of that .csv to a saved snapshot. These snapshot .csvs are stored in Subversion next to each regression dataset. If the report output doesn't match the saved snapshot, the test fails.
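A single such report test might look roughly like the sketch below (NUnit assumed; SalesReport.RunToCsv is a placeholder for whatever actually runs the report, and the file paths are invented).

    using System;
    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class SalesReportRegressionTests
    {
        [Test]
        public void Sales_report_matches_the_approved_snapshot()
        {
            // Run the report against the regression dataset (restored by the
            // fixture setup) and compare its CSV output with the snapshot that
            // is kept under version control next to the dataset.
            string actual = SalesReport.RunToCsv("RegressionDataset01");
            string expected = File.ReadAllText(@"Snapshots\SalesReport_RegressionDataset01.csv");

            Assert.AreEqual(Normalize(expected), Normalize(actual));
        }

        // Normalising line endings avoids spurious failures between machines.
        private static string Normalize(string csv)
        {
            return csv.Replace("\r\n", "\n").Trim();
        }
    }

    // Placeholder for whatever actually runs the report in the real code base.
    public static class SalesReport
    {
        public static string RunToCsv(string regressionDatasetName)
        {
            throw new NotImplementedException("Wire this up to the real report engine.");
        }
    }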
The purpose of regression tests is to tell you if something changes. That means they fail if you broke something, but they also fail if you changed something on purpose (in which case the fix is to update the snapshot file). You don't know that the snapshot files are even correct -- there might be bugs in the system (and then when you fix those bugs, the regression tests will fail).
Nevertheless, regression tests were a huge win for us. Just about everything in our system has a report, so by spending a few weeks getting a test harness around the reports, we were able to get some level of coverage over a huge part of our code base. Writing the equivalent unit tests would have taken months or years. (Unit tests would have given us far better coverage, and would have been far less fragile; but I'd rather have something now, rather than waiting years for perfection.)
Then we went back and started adding unit tests when we fixed bugs, or added enhancements, or needed to understand some code. Regression tests in no way remove the need for unit tests; they're just a first-level safety net, so that you get some level of test coverage quickly. Then you can start refactoring to break dependencies, so you can add unit tests; and the regression tests give you a level of confidence that your refactoring isn't breaking anything.
Regression tests have problems: they're slow, and there are too many reasons why they can break. But at least for us, they were so worth it. They've caught countless bugs over the last five years, and they catch them within a few hours, rather than waiting for a QA cycle. We still have those original regression tests, spread over seven different continuous-build machines (separate from the one that runs the fast unit tests), and we even add to them from time to time, because we still have so much code that our 6,000+ unit tests don't cover.
It's absolutely worth it. Our app has complex cross-validation rules, and we recently had to make significant changes to the business rules. We ended up with conflicts that prevented the user from saving. I realized it would take forever to sort it out in the application (it takes several minutes just to get to the point where the problems were). I'd wanted to introduce automated unit tests and had the framework installed, but I hadn't done anything beyond a couple of dummy tests to make sure things were working. With the new business rules in hand, I started writing tests. The tests quickly identified the conditions that caused the conflicts, and we were able to get the rules clarified.
If you write tests that cover the functionality you're adding or modifying, you'll get an immediate benefit. If you wait for a re-write, you may never have automated tests.
You shouldn't spend a lot of time writing tests for existing things that already work. Most of the time, you don't have a specification for the existing code, so the main thing you're testing is your reverse-engineering ability. On the other hand, if you're going to modify something, you need to cover that functionality with tests so you'll know you made the changes correctly. And of course, for new functionality, write tests that fail, then implement the missing functionality.
I'll add my voice and say yes, it's always useful!
There are some distinctions you should keep in mind, though: black-box vs white-box, and unit vs functional. Since definitions vary, here's what I mean by these:
Black-box = tests that are written without special knowledge of the implementation, typically poking around at the edge cases to make sure things happen as a naive user would expect.
White-box = tests that are written with knowledge of the implementation, which often try to exercise well-known failure points.
Unit tests = tests of individual units (functions, separable modules, etc). For example: making sure your array class works as expected, and that your string comparison function returns the expected results for a wide range of inputs.
Functional tests = tests of the entire system all at once. These tests will exercise a big chunk of the system all at once. For example: init, open a connection, do some real-world stuff, close down, terminate. I like to draw a distinction between these and unit tests, because they serve a different purpose.
When I've added tests to a shipping product late in the game, I found that I got the most bang for the buck from white-box and functional tests. If there's any part of the code that you know is especially fragile, write white-box tests to cover the problem cases to help make sure it doesn't break the same way twice. Similarly, whole-system functional tests are a useful sanity check that helps you make sure you never break the 10 most common use cases.
Black-box and unit tests of small units are useful too, but if your time is limited, it's better to add them early. By the time you're shipping, you've generally found (the hard way) the majority of the edge cases and problems that these tests would have found.
Like the others, I'll also remind you of the two most important things about TDD:
Creating tests is a continuous job. It never stops. You should try to add new tests every time you write new code, or modify existing code.
Your test suite is never infallible! Don't let the fact that you have tests lull you into a false sense of security. Just because it passes the test suite doesn't mean it's working correctly, or that you haven't introduced a subtle performance regression, etc.
You don't mention the implementation language, but if in Java then you could try this approach:
In a separate source tree, build regression or 'smoke' tests using a tool to generate them, which might get you close to 80% coverage. These tests execute all the code logic paths and verify, from that point on, that the code still does exactly what it does currently (even if a bug is present). This gives you a safety net against inadvertently changing behaviour when doing the refactoring necessary to make the code easily unit testable by hand.
Product suggestion - I used to use the free web-based product JUnit Factory, but sadly it's closed now. However, the product lives on in the commercially licensed AgitarOne JUnit Generator at http://www.agitar.com/solutions/products/automated_junit_generation.html
For each bug you fix, or feature you add from now on, use a TDD approach to ensure new code is designed to be testable and place these tests in a normal test source tree.
Existing code will also likely need to be changed, or refactored to make it testable as part of adding new features; your smoke tests will give you a safety net against regressions or inadvertent subtle changes to behaviour.
When making changes (bug fixes or features) via TDD, it's likely that the companion smoke test will be failing once you're done. Verify that the failures are as expected due to the changes made, then remove the less readable smoke test, as your hand-written unit tests now fully cover that improved component. Ensure that your test coverage does not decline; it should only stay the same or increase.
When fixing bugs write a failing unit test that exposes the bug first.
Whether it's worth adding unit tests to an app that's in production depends on the cost of maintaining the app. If the app has few bugs and enhancement requests, then maybe it's not worth the effort. OTOH, if the app is buggy or frequently modified then unit tests will be hugely beneficial.
At this point, remember that I'm talking about adding unit tests selectively, not trying to generate a suite of tests similar to those that would exist if you had practiced TDD from the start. Therefore, in response to the second half of your second question: make a point of using TDD on your next project, whether it's a new project or a re-write (apologies, but here is a link to another book that you really should read: Growing Object-Oriented Software, Guided by Tests).
My answer to your third question is the same as the first: it depends on the context of your project.
Embedded within your post is a further question about ensuring that any retrofitted testing is done properly. The important thing is to ensure that unit tests really are unit tests, and this (more often than not) means that retrofitting tests requires refactoring existing code to allow decoupling of your layers/components (cf. dependency injection, inversion of control, stubbing, mocking). If you fail to enforce this, your tests become integration tests, which are useful, but less targeted and more brittle than true unit tests.
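As a rough sketch of that distinction (all names invented, NUnit assumed; a mocking library such as Moq would do the same job as the hand-rolled stub below), the class under test depends on an abstraction, so the test exercises it without touching any external system.

    using NUnit.Framework;

    // The class under test depends on an abstraction rather than on the real
    // exchange-rate service, so the test below is a unit test, not an
    // integration test.
    public interface IExchangeRateProvider
    {
        decimal GetRate(string fromCurrency, string toCurrency);
    }

    public class PriceConverter
    {
        private readonly IExchangeRateProvider _rates;

        public PriceConverter(IExchangeRateProvider rates)
        {
            _rates = rates;
        }

        public decimal Convert(decimal amount, string from, string to)
        {
            return amount * _rates.GetRate(from, to);
        }
    }

    // A hand-rolled stub; a mocking library would do the same job.
    public class FixedRateStub : IExchangeRateProvider
    {
        public decimal GetRate(string fromCurrency, string toCurrency)
        {
            return 1.5m;
        }
    }

    [TestFixture]
    public class PriceConverterTests
    {
        [Test]
        public void Converts_using_the_injected_rate()
        {
            var converter = new PriceConverter(new FixedRateStub());
            Assert.AreEqual(15m, converter.Convert(10m, "GBP", "USD"));
        }
    }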
I would like to start this answer by saying that unit testing is really important, because it will help you catch bugs before they creep into production.
Identify the projects/modules where bugs have been reintroduced and start writing tests for those. It makes perfect sense to write tests for new functionality and for bug fixes.
Is it worth the effort for an existing solution that's in production?
Yes. You will see the number of bugs going down and maintenance becoming easier.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
I would recommend starting it now.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
You are asking the wrong question. Functionality is definitely more important than anything else. But you should instead ask whether spending a few weeks adding tests will make the system more stable. Will it help your end users? Will it help a new developer on the team understand the project, and ensure that he or she doesn't introduce a bug through a lack of understanding of the overall impact of a change?
I'm very fond of Refactor the Low-hanging Fruit as an answer to the question of where to begin refactoring. It's a way to ease into better design without biting off more than you can chew.
I think the same logic applies to TDD - or just unit tests: write the tests you need, as you need them; write tests for new code; write tests for bugs as they appear. You're worried about neglecting harder-to-reach areas of the code base, and it's certainly a risk, but as a way to get started: get started! You can mitigate the risk down the road with code coverage tools, and the risk isn't (in my opinion) that big, anyway: if you're covering the bugs, covering the new code, covering the code you're looking at, then you're covering the code that has the greatest need for tests.
Yes, it is. When you start adding new functionality, it can require modifying old code, and as a result it is a source of potential bugs.
(See the first point.) Before you start adding new functionality, all (or almost all) of the code should ideally be covered by unit tests.
(See the first and second points.) :) A grandiose new piece of functionality can "break" the old, working code.
Yes it can: just try to make sure all code you write from now on has a test in place.
If the code that is already in place needs to be modified and can be tested, then do so, but it is better not to be too vigorous in trying to get tests in place for stable code. That sort of thing tends to have a knock-on effect and can spiral out of control.
Is it worth the effort for an existing solution that's in production?
Yes. But you don't have to write all unit tests to get started. Just add them one by one.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No. The first time you add code which breaks the functionality, you will regret it.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality?
For new functionality (code) it is simple. You write the unit test first and then the functionality.
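As a minimal illustration of that test-first step (invented names, NUnit assumed): write the failing test, watch it go red, then add the simplest implementation that makes it green.

    using NUnit.Framework;

    [TestFixture]
    public class DiscountTests
    {
        // Written before Discount.Apply existed: red first, then green.
        [Test]
        public void Orders_of_100_or_more_get_a_five_percent_discount()
        {
            Assert.AreEqual(95m, Discount.Apply(100m));
        }
    }

    // The simplest implementation that makes the test pass.
    public static class Discount
    {
        public static decimal Apply(decimal orderTotal)
        {
            return orderTotal >= 100m ? orderTotal * 0.95m : orderTotal;
        }
    }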
For old code you decide on the way. You don't have to have all unit tests in place... Add the ones that hurt you most not having... Time (and errors) will tell on which one you have to focus ;)
Update
6 years after the original answer, I have a slightly different take.
I think it makes sense to add unit tests to all new code you write - and then refactor places where you make changes to make them testable.
Writing tests in one go for all your existing code will not help - but not writing tests for new code you write (or areas you modify) also doesn't make sense. Adding tests as you refactor/add things is probably the best way to add tests and make the code more maintainable in an existing project with no tests.
Earlier answer
I'm going to raise a few eyebrows here :)
First of all, what is your project? If it is a compiler, a language, a framework, or anything else that is not going to change functionally for a long time, then I think it's absolutely fantastic to add unit tests.
However, if you are working on an application that is probably going to require changes in functionality (because of changing requirements) then there is no point in taking that extra effort.
Why?
Unit tests only test the code - whether it does what it was designed to do. They are not a replacement for manual testing, which has to be done anyway (to uncover functional bugs, usability issues and all other kinds of issues).
Unit tests cost time! Now where I come from, that's a precious commodity - and business generally picks better functionality over a complete test suite.
If your application is even remotely useful to users, they are going to request changes - so you will have versions that will do things better, faster and probably do new things - there may also be a lot of refactoring as your code grows. Maintaining a full grown unit test suite in a dynamic environment is a headache.
Unit tests are not going to affect the perceived quality of your product - the quality that the user sees. Sure, your methods might work exactly as they did on day 1, the interface between presentation layer and business layer might be pristine - but guess what? The user does not care! Get some real testers to test your application. And more often than not, those methods and interfaces have to change anyways, sooner or later.
What will be more beneficial: spending a few weeks adding tests or a few weeks adding functionality? - There are a hell of a lot of things that you can do better than writing tests: write new functionality, improve performance, improve usability, write better help manuals, resolve pending bugs, etc.
Now don't get me wrong - if you are absolutely positive that things are not going to change for the next 100 years, go ahead, knock yourself out and write those tests. Automated tests are also a great idea for APIs, where you absolutely do not want to break third-party code. Everywhere else, it's just something that makes me ship later!
It's unlikely you'll ever have significant test coverage, so you must be tactical about where you add tests:
As you mentioned, when you find a bug, it's a good time to write a test (to reproduce it), and then fix the bug. If you see the test reproduce the bug, you can be sure it's a good, valid test. Given that such a large portion of bugs are regressions (50%?), it's almost always worth writing regression tests.
When you dive into an area of code to modify it, it's a good time to write tests around it. Depending on the nature of the code, different tests are appropriate. One good set of advice is found here.
OTOH, it's not worth just sitting around writing tests around code that people are happy with-- especially if nobody is going to modify it. It just doesn't add value (except maybe understanding the behavior of the system).
Good luck!
You say you don't want to buy another book. So just read Michael Feathers' article on working effectively with legacy code. Then buy the book :)
If I were in your place, I would probably take an outside-in approach, starting with functional tests that exercise the whole system. I would try to re-document the system's requirements using a BDD specification language like RSpec, and then write tests to verify those requirements by automating the user interface.
Then I would do defect driven development for newly discovered bugs, writing unit tests to reproduce the problems, and work on the bugs until the tests pass.
For new features, I would stick with the outside-in approach: Start with features documented in RSpec and verified by automating the user interface (which will of course fail initially), then add more finely-grained unit tests as the implementation moves along.
I'm no expert on the process, but from what little experience I have I can tell you that BDD via automated UI testing is not easy, but I think it's worth the effort, and probably would yield the most benefit in your case.
I'm not a seasoned TDD expert by any means, but of course I would say that it's incredibly important to unit test as much as you can. Since the code is already in place, I would start by getting some sort of unit test automation in place. I use TeamCity to exercise all of the tests in my projects, and it gives you a nice summary of how the components did.
With that in place, I'd move on to those really critical business-logic components that can't fail. In my case, there are some basic trigonometry problems that need to be solved for various inputs, so I test the heck out of those. The reason I do this is that when I'm burning the midnight oil, it's very easy to waste time digging down to depths of code that really don't need to be touched, because you know they are tested for all of the possible inputs (in my case, there is a finite number of inputs).
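For that kind of pure-calculation code, parameterised tests make it cheap to cover a whole table of inputs. Here is a rough sketch using NUnit's TestCase attribute; AngleMath is a made-up stand-in for the real trigonometry code.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class AngleMathTests
    {
        // One test method covers a table of inputs; a newly discovered edge
        // case becomes just one more TestCase line.
        [TestCase(0.0, 0.0)]
        [TestCase(90.0, Math.PI / 2)]
        [TestCase(180.0, Math.PI)]
        [TestCase(360.0, 2 * Math.PI)]
        [TestCase(-90.0, -Math.PI / 2)]
        public void Degrees_convert_to_radians(double degrees, double expectedRadians)
        {
            Assert.AreEqual(expectedRadians, AngleMath.ToRadians(degrees), 1e-10);
        }
    }

    // Stand-in for the real calculation code under test.
    public static class AngleMath
    {
        public static double ToRadians(double degrees)
        {
            return degrees * Math.PI / 180.0;
        }
    }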
Ok, so now you hopefully feel better about those critical pieces. Instead of sitting down and banging out all of the tests, I would attack them as they come up. If you hit a bug that's a real PITA to fix, write the unit tests for it and get them out of the way.
There are cases where you'll find that testing is tough because you can't instantiate a particular class from the test, so you have to mock it. Oh, but maybe you can't mock it easily because you didn't write to an interface. I take these "whoops" scenarios as an opportunity to implement said interface, because, well, it's a Good Thing.
From there, I'd get your build server or whatever automation you have in place configured with a code coverage tool. They create nasty bar graphs with big red zones where you have poor coverage. Now 100% coverage isn't your goal, nor would 100% coverage necessarily mean your code is bulletproof, but the red bar definitely motivates me when I have free time. :)
There are so many good answers that I will not repeat their content. I checked your profile and it seems you are a C# .NET developer, so I'm adding a reference to the Microsoft Pex and Moles project, which can help you auto-generate unit tests for legacy code. I know that auto-generation is not the best way, but at least it is a way to start. Check out this very interesting article from MSDN Magazine about using Pex for legacy code.
I suggest reading a brilliant article by a Toptal engineer that explains where to start adding tests. It contains a lot of maths, but the basic idea is:
1) Measure your code's Afferent Coupling (CA): how much a class is used by other classes, meaning breaking it would cause widespread damage.
2) Measure your code's Cyclomatic Complexity (CC): higher complexity = higher chance of breaking.
You need to identify the classes with high CA and CC, i.e. have a function f(CA, CC), and the classes with the smallest differences between the two metrics should be given the highest priority for test coverage.
Why? Because classes with high CA but very low CC are very important but unlikely to break. On the other hand, classes with low CA but high CC are likely to break, but will cause less damage. So you want to balance the two.
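As a very rough sketch of that prioritisation - this is a made-up scoring, not the formula from the article, and the metrics would come from a static-analysis tool - you could rank classes so that those where both CA and CC are high float to the top.

    using System.Collections.Generic;
    using System.Linq;

    public class ClassMetrics
    {
        public string Name;
        public int AfferentCoupling;      // CA: how many classes depend on this one
        public int CyclomaticComplexity;  // CC: how likely it is to contain bugs
    }

    public static class TestPriority
    {
        // Crude ranking: the product rewards classes where both metrics are
        // high, which is the balance described above. Any monotonic
        // combination of CA and CC would do for a first pass.
        public static List<ClassMetrics> RankForTesting(IEnumerable<ClassMetrics> classes)
        {
            return classes
                .OrderByDescending(c => (long)c.AfferentCoupling * c.CyclomaticComplexity)
                .ToList();
        }
    }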
It depends...
It's great to have unit tests but you need to consider who your users are and what they are willing to tolerate in order to get a more bug-free product. Inevitably by refactoring your code which has no unit tests at present, you will introduce bugs and many users will find it hard to understand that you are making the product temporarily more defective to make it less defective in the long run. Ultimately it's the users who will have the final say...
Yes.
No.
Adding tests.
Going towards a more TDD approach will actually better inform your efforts to add new functionality and make regression testing much easier. Check it out!
First, a little background. The company I work for writes web-based software that is a hosted solution for our customers (i.e. we are an ASP, an Application Service Provider). We are adopting agile practices such as Scrum, and we execute sprints to build new features for our product.
I am a proponent of TDD (Test-Driven Development), and as part of what I deliver in a sprint I always write tests and I always get them integrated with the build (i.e. CruiseControl.NET); however, the other developers do not follow this practice and it is not enforced.
Is it a good practice to force a development group into providing unit tests as a part of what is delivered in a sprint?
Unless you are a position of authority, the best thing you can do is to convince them of the value of the test suite.
It's very difficult to get developers to see the light on this issue if they aren't seeing it done right.
Try to pair with another developer and show them the benefits and the clarity that comes from writing the tests FIRST. If you don't do this, they are likely to write all of their code, get it working, and then write tests. So, from their point of view, it will feel like simply an extra task that doesn't help them get things done.
Also keep in mind that people often do not understand how to write good tests. Even more, some do not know how to make use of tools like jmock, which can lead to them getting stuck and giving up on writing a test.
Forcing anything onto anybody is not good practice in my view. I would show them the benefits of TDD at every opportunity I get. This should, in time, get the rest of the team to practise TDD voluntarily.
You don't want lip-service unit testing, you want whole-hearted unit testing. That isn't something that can be forced. What you need to do is influence your teammates over time to see the benefits of unit testing and to develop a unit testing culture.
To start with, you need to understand that different people change for different reasons. In Crossing the Chasm terms, visionaries adopt new techniques because they are better, but pragmatists adopt new techniques either because they solve a problem/pain they currently have or because everyone else is adopting them.
Your mission, then, is to show how unit testing can solve a pain your team currently feels. As you win people over one by one, you eventually reach a tipping point where unit testing is the norm and everyone goes along with it. However, if you can't tie unit testing to a pain your team feels, your efforts to convince them will likely fail.
Considering that unit testing improves code and software quality in the long term, I would say that, yes, it is good practice to have your developers provide unit tests - whether as part of some kind of sprint or not.
The two main barriers to unit tests I've seen are:
the developers don't get the point: "why would we write more code just to test?"
"we don't have time to write unit tests"
To answer the first point, you'll have to provide some sort of demonstration / training, I suppose; if you can get developers to see why unit tests are useful, they will like them and use/develop them; but they need to see why they are useful. Unless you are their boss, you cannot force people to write unit tests.
And even if you are their boss, they will probably not do the best possible job if they are being forced: unit testing is often done better when people understand why and how!
To answer the second point... Well, you obviously need to give your developers some "special" time to develop unit tests; it can mean less time spent on manual testing, by the way.
Another thing: it is hard to know "what to test" and "how to test", and you will need to explain/demonstrate that to your colleagues. Some things cannot be tested, some things don't need to be, and some things are not "unit-testable" - well, unless your software is really well engineered ^^
I've gotten in that position on many jobs and contracts in the past, so I've finally gotten discouraged and embraced the darkness by advocating Development Driven Development.
I've found that when most managers embrace XP, they're embracing throwing out the documentation, not really doing TDD. Programmers on most teams are rewarded for quick hacks that get the defect out of their queue, and it's one manager in about ten who has the guts to stand up to senior management as an advocate of the overall quality of the product as opposed to the bottom-line-for-this-quarter way of doing business. After all, most software jobs that pay anything are corporate sponsored, and Freud proved that corporations are insane.
Or at least, he should have.
There is a great Joel on Software article on this called Getting Things Done When You're Only a Grunt. His strategies, applied to unit testing, would be the following:
Just Do It
Most important, I think. Regularly write tests, do not make it appear as if you yourself see them as a minor part of development.
Harness the Power of Viral Marketing
If you see an error in someone else's code, write a test that triggers it and present both to him. Maybe he sees your point.
Create a Pocket of Excellence
Identify the team members who are open to the idea but not quite sure how to work with unit tests and set up a number of test classes until everyone of them knows how to do it. Then turn towards the other ones. Things are a lot easier to establish once you are not the only one anymore.
Neutralize The Bozos
There will be team members who are almost impossible to get to write tests. Maybe those should be dealt with by regularly breaking their code with a new commit - and then pointing out that since they don't have tests it was hard for you to notice.
It depends on your version control system, but some version control software lets you run scripts before check-in or before merging to the production branch, and will not let the developer check in if the unit tests fail.
First, talk to your manager. If he is convinced that testing is a good thing, add a coverage check to your build system. If the coverage of your unit tests falls below a certain level, you can treat it as a failed build. This gives your colleagues a measure, a way to see when they fail to deliver tested code.
In your case, NCover seems to integrate nicely into CC.NET.
Create a matrix that you want developers to achieve.
Make sure there is a subtask for Unit test creation for every story.
No. In my experience, TDD isn't all that useful in practice. I sometimes use it for really general fundamental classes (like geometry or generic data structures) that lend themselves naturally to automated tests. But for UI components or business logic, I find it's more trouble than it's worth.
A new project we began introduced a lot of new technologies we weren't so familiar with, and an architecture that we don't have a lot of practice in. In other words, the interfaces and interactions between service classes etc of what we're building are fairly volatile, even more so due to internal and customer feedback. Though I've always been frustrated by the ever-moving specification, I see this to some degree a necessary part of building something we've never built before - if we just stuck to the original design and scope, the end product would probably be a whole lot less innovative and useful than it's becoming.
I also introduced test-driven development (TDD), as the benefits are well-documented and conceptually I loved the idea. Two more new things to learn - NUnit and mocking - but seeing all those green circles made it all worthwhile.
Over time, however, those constant changes in design seemed to mean I was spending a whole lot more time changing my tests than I was on writing the code itself. For this reason alone, I've gone back to the old ways of testing - that is, not automated.
While I have no doubt that the application would be far more robust with hundreds of excellent unit tests, I've found the trade-off of time to launch the product to be mostly unacceptable. My question is, then - have any of you also found that TDD can be a hassle if you're prototyping something / building a beta version? Does TDD go much more naturally hand-in-hand with something where the specifications are more fixed, or where the developers have more experience in the language and technologies? Or have I done something fundamentally wrong?
Note that I'm not trying to criticise TDD here - just I'm not sure it's always the best fit for all situations.
The short answer is that TDD is very valuable for beta versions, but may be less so for prototyping.
I think it is very important to distinguish between beta versions and prototyping.
A beta version is essentially a production version that is just still in development, so you should definitely use TDD in that scenario.
A prototype/proof of concept is something you build with the express intent of throwing it away once you've gotten the answers out of it that you wanted.
It's true that project managers will tend to push for the prototype to be used as a basis for production code, but it is very important to resist that. If you know that's not possible, treat the prototype code as you would your production code, because you know it is going to become your production code in the future - and that means you should use TDD with it as well.
When you are learning a new technology, most code samples etc. are not written with unit tests in mind, so it can be difficult to translate the new technology to the unit testing mindset. It most definitely feels like a lot of overhead.
In my experience, however, unit testing often really forces you to push the boundaries of the new technology that you are learning. Very often, you need to research and learn all the different hooks the new technology provides, because you need to be able to isolate the technology via DI or the like.
Instead of only following the beaten path, unit testing frequently forces you to learn the technology in much more depth, so what may feel like overhead is actually just a more in-depth prototype - one that is often more valuable, because it covers more ground.
Personally, I think unit testing a new technology is a great learning tool.
The symptoms you seem to experience regarding test maintainability is a bit orthogonal, I think. Your tests may be Overspecified, which is something that can happen just as well when working with known technologies (but I think it is probably easier to fall into this trap when you are also learning a new technology at the same time).
The book xUnit Test Patterns describes the Overspecified Test antipattern and provides a lot of guidance and patterns that can help you write more maintainable tests.
I've found that thoroughly testing early results in lots of code thrown away and an empty feeling in the pit of your stomach.
Test what needs to be tested and not a line of code more. When you figure out how much that is, let me know.
When prototyping, I would say it depends on the type of prototyping. In evolutionary prototyping, where the prototype evolves into the final application, I would utilize unit testing as early as possible. If you are using throw-away prototyping, I wouldn't bother with unit testing - the final application is going to be nothing like the prototype.
I'm not sure what you mean by "beta", since a beta is almost a finished product. As soon as you start working on code that is going to be shipped, or has a potential to be shipped, make sure everything is well tested.
Now, pure test-driven development might be extreme, but it is important to make sure that all shippable code is as tested as possible, at the unit, integration, and system level.
have any of you also found that TDD can be a hassle if you're prototyping something / building a beta version?
I have.. Tons of times :)
Does TDD go much more naturally hand-in-hand with something where the specifications are more fixed, or where the developers have more experience in the language and technologies?
Not really. TDD works quite nicely with changing requirements, but TDD is really about ensuring a stable, contract-driven design: two things which prototypes don't really need that badly.
Or have I done something fundamentally wrong?
Doesn't look like it :) You've just seen that TDD consists of other things than golden trees..
"Over time, however, those constant changes in design seemed to mean I was spending a whole lot more time changing my tests than I was on writing the code itself"
Good. You should spend a lot of time on testing. It's important, and it's how you demonstrate that your code is right. "Less code than test" is a good benchmark.
That means that you were writing effective tests that demonstrated your expectations for the underlying technology.
You may want to consider doing this.
Some tests cover "essential", "core" or "enduring" features of the application, independent of any technology choices. Focus on these. They should never change.
Some tests confirm the technology or implementation choices. These change all the time. Perhaps you should segregate these so that the technology changes lead to focused changes here.
Being given a roadmap with a moving X is frustrating.
TDD or no TDD.. 'having to spend majority of the time changing the tests instead of the code' indicates either that the specs were changed radically or you just over-mocked yourself a.k.a "fragile tests". I'd need more input from you to comment further.
A spike/prototype means trying to build something as a proof of concept or to validate a design. With this definition, I'd say that you don't need to TDD your prototype, because your primary goal is learning / reducing the unknowns. Once you've done that, you should throw away your prototype and build your production version with automated tests (use TDD now). You ship that to the customer, not 'tested prototypes'.
However if manual testing has been working well for you, persist with it. I like to prove to myself and others that my code works at the push of a button - avoid human boredom of repetitive tasks and get thorough testing.
Shipping Prototypes will bite you sooner and harder than you ever imagine. Take it from me.
Prototyping is meant to be used for "would this kind of thing work?" exploration.
So there is no need for testing. BUT: always throw your prototype away and code from the ground up!
In agile development there's the concept of a "spike" - a deep but narrow investigation of a solution or a technology. Once you're comfortable with how things should work, you start over with a new implementation at a higher quality level.
There's a pitfall with software labeled as "beta": all of a sudden you end up with something not intended for production as a vital part of your application, and you haven't got the time to redo it. This will most likely come back and bite you later on. A prototype should really be just a prototype - no more, no less.
I don't have a prescription for you, but I do have a diagnosis:
If you ever end up in a debugger, you shoulda had more tests. Even in the very earliest prototypes, you want the code you write to work, right? If I don't do TDD and I run into a bug, it's hard to retrofit unit tests to find the bug. It's tempting to go to the debugger. So, I aim my TDD efforts at producing a good enough set of tests that I don't need the debugger. Doing this well requires lots of practice.
If you want to refactor but don't because of risk, you shoulda had more tests. If I'm going to be working with some code for more than a couple hours, I want to refactor to keep my velocity high. If I hesitate to refactor, it's gonna hurt my productivity, so I do everything I can to make refactoring safe and easy. Again, a good selection of tests is exactly what I need.
For pure prototyping, as people have said, not so useful. But often a prototype has elements of the application that will follow.
As you build it, what are the solid concepts that you expect to continue? What are the key domain models? It makes sense to build tests around these. Then, they provide a solid base on which to explore ideas. By having part of the code base solid and tested, it allows you to go further with the prototype than you could with no tests.
It's always a balance of picking the right things to test when. It sounds like from your description you are adopting quite a few new things at once-- sometimes this can be too much, and you need to retreat a little bit to optimize your progress.
What I typically do for this prototyping code is to write it without tests (or not many), but do the work under my src/test/java or wherever I'm putting my test code. That way, I won't inadvertently put it into production. If there's something I've prototyped that I like, then I'll create something in src/main/java (my prod code) and TDD it, pulling over code from the prototype one piece at a time as I add tests.
Over time, however, those constant changes in design seemed to mean I was spending a whole lot more time changing my tests than I was on writing the code itself.
I write (and run) automated tests of the system's I/O. The system's I/O depends on the functional requirements, and doesn't change when the system's implementation changes, so I can refactor the implementation without changing the automated tests: Should one test internal implementation, or only test public behaviour?
When I get in contact with a new technology, I usually write a couple of tests which I use as the basis of the prototype. The main advantage here is that it allows me to get used to the technology in small, digestible parts and that I document my progress in the only valid form of documentation: code.
That way, I can test assumptions (what happens when I persist an empty string in a database? DB2 will return an empty string, Oracle will return NULL) and make sure they don't change while I work on the prototype.
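Such a "learning test" might look like the sketch below; CustomerRepository is a hypothetical stand-in for the real data-access layer, and NUnit is assumed. The point is that the test documents an assumption about the technology rather than about my own code.

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class PersistenceAssumptionTests
    {
        // Documents an assumption about the underlying database/driver, and
        // rings an alarm if a version upgrade or database switch changes it.
        [Test]
        public void An_empty_string_round_trips_as_empty_not_null()
        {
            var repository = new CustomerRepository("connection string for the test database");

            repository.SaveName(42, "");
            string roundTripped = repository.LoadName(42);

            Assert.AreEqual("", roundTripped);
        }
    }

    // Hypothetical stand-in for the real data-access code.
    public class CustomerRepository
    {
        public CustomerRepository(string connectionString) { }

        public void SaveName(int customerId, string name)
        {
            throw new NotImplementedException("Calls the real persistence layer.");
        }

        public string LoadName(int customerId)
        {
            throw new NotImplementedException("Calls the real persistence layer.");
        }
    }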
Think of this work as many, tiny prototypes all rolled into a couple of unit tests.
Now the design of my product will change over time but these spike tests are still valid: The underlying framework doesn't change just because I now pursue a different solution. In fact, these tests will allow me to migrate to the next version of the technology since they will ring an alarm when one of the assumptions I've been using has changed. Otherwise, I'd have to stay with the current version or be very careful with any upgrade since I couldn't be sure what it would break.
I would be cautious about dismissing TDD when prototyping, primarily because prototypes tend to (but don't always) evolve into the final product. If you can be sure that the prototype will be thrown out once it has established whatever it was started for, then you can be a bit more lax. If, however, there is a chance that either the prototype will evolve into the final product, or parts of its code will be ported into the final product, then I would endeavor to follow TDD principles.
Not only does this make your code more testable, but more importantly (in my mind) it encourages you to follow good software design principles.
If it's actually a prototype and you are going to throw it away when you're done, then take whatever route gets you through that phase. But ask yourself: how often do your prototypes actually get thrown away, versus how often do they creep into your final product?
There was a good presentation at JavaOne on TDD. One of the really nice outcomes was that you tended to understand your actual requirements much, much better when you had to author tests to meet them. This is in addition to all of the refactoring and code quality benefits that came along with the approach.
I have been using TDD to drive the project that I am currently working on, and the results have been fairly satisfying. I did run into a problem (described here; still without a solution or any suggestions!) where some aspects of a particular method may not be testable (as in my example; briefly, I want to be able to handle a ManagementException which has a specific ErrorCode, but it doesn't seem possible for me to set up a test which throws a ManagementException like that).
So, how does one deal with that? Do we simply accept the fact that some logical paths are untestable (because of the framework that we are working in or limitations in the testing framework(s) that are currently available)?
Some designs do not lend themselves to testability - especially ones that do not have testability as one of the design goals. Generally, TDDed designs do not fall into this category.
To answer your original question, I've posted a response which involves using reflection to slot in the requested error code. However this may not work in all situations and is not a general solution.
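To make the shape of that approach concrete, here is a sketch against a hypothetical stand-in exception; the real ManagementException keeps its state in members whose names I haven't verified, so inspect them with a decompiler or the debugger before relying on this.

    using System;
    using System.Reflection;
    using NUnit.Framework;

    // Hypothetical stand-in for a framework exception whose error code cannot
    // be set through the public API.
    public class ThirdPartyException : Exception
    {
        private int errorCode;                       // no public setter
        public int ErrorCode { get { return errorCode; } }
    }

    [TestFixture]
    public class ErrorHandlingTests
    {
        [Test]
        public void Handler_recognises_the_specific_error_code()
        {
            var ex = new ThirdPartyException();

            // Reflection reaches the otherwise unsettable private field, which
            // lets the test arrange the exception with the code it needs.
            typeof(ThirdPartyException)
                .GetField("errorCode", BindingFlags.NonPublic | BindingFlags.Instance)
                .SetValue(ex, 1234);

            Assert.AreEqual(1234, ex.ErrorCode);
        }
    }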
The tradeoff here is the effort of writing the test vs. the benefit of having that particular piece of code under automated tests. If you feel that the cost-to-benefit ratio is huge and the probability of failure is minuscule, you may write it up as an exceptional manual test, leave a comment for future developers, and verify it manually for now. I'd say be pragmatic: if you've spent 30-40 minutes of a couple of developers' brain time trying to get it under test, maybe you need to step back and rethink your strategy. Have a look at Michael Feathers' 'Working Effectively with Legacy Code' for some suggestions on overcoming barriers to testability.
I don't think you could say that anything is logically untestable, but you will certainly find areas of code where the effort required to test them would be better spent elsewhere.
This is a great question, and one which I also found myself contemplating recently.
So first, I wouldn't say some logical paths are "untestable" - at most they are probably very hard to test with automatic unit testing. You could probably still test most of these problematic paths with some serious heavy duty system tests.
Consider this - anything you test can be thought to run inside a virtual machine under your control and you can (theoretically) simulate every aspect of its operation in order to test your software. Whether or not this is practical for most applications is another question.
I've just tried answering your original question (and collided in mid-flight with somebody else saying the same thing more concisely, or most of it at least ;-). Anyway, there surely exist frameworks that are way too rigid (thanks to private and friends), and if you can't use introspection to get around that (despite having done all the proper incantations), then you're just using a language that's too rigid as well as a framework that is.
I'd be astonished if that was the case in an overall system that supports dynamic languages (as .NET now does) such as IronRuby and IronPython -- maybe if C# won't let you go around accessibility limitations via introspection, the dynamic languages could serve.
That said, it is surely possible for the overall environment to be designed so badly and so rigidly as to make it impossible to unit-test certain things -- even though I'm not entirely convinced that this is the case in your current situation.
Some things cannot be tested in an automated unit test because the language/framework/situation is just not open to it. The way to handle that is to reduce that area as much as possible and keep it so simple that it is highly unlikely to be a source of bugs or behavior changes later on.
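One common way to shrink that untestable area (a sketch, not from the original answer; all names below are invented for illustration) is to hide the framework call behind a trivially thin wrapper and keep every decision you actually care about in plain code that a test can reach with a fake:

    // Sketch with invented names. The gateway wrapper stays too simple to break;
    // the retry logic is ordinary code that a unit test can exercise with a fake.
    using System;

    public interface IManagementGateway
    {
        // Thin wrapper over the framework call we cannot unit test directly.
        void InvokeMethod(string name);
    }

    public class RetryingInvoker
    {
        private readonly IManagementGateway _gateway;

        public RetryingInvoker(IManagementGateway gateway)
        {
            _gateway = gateway;
        }

        // All the interesting behaviour lives here, against the interface.
        public bool TryInvoke(string name, int maxAttempts)
        {
            for (var attempt = 0; attempt < maxAttempts; attempt++)
            {
                try
                {
                    _gateway.InvokeMethod(name);
                    return true;          // succeeded on this attempt
                }
                catch (Exception)
                {
                    // swallow and retry until the attempts are exhausted
                }
            }
            return false;                 // every attempt failed
        }
    }

The production implementation of IManagementGateway is a few untested lines around the real framework call; everything worth testing sits in RetryingInvoker.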
There is also more to testing than just unit testing, and those other areas (such as acceptance testing and QA) are not covered by unit tests anyway.
This certainly presupposes that unit testing is a good thing. Our projects have some level of unit testing, but it's inconsistent at best.
What are the most convincing ways you have used, or have seen used, to convince everyone that formalized unit testing is a good thing, and that requiring it is really in the best interest of the 'largeish' projects we work on? I am not a developer; I work in Quality Assurance and would like to improve the quality of the work delivered to ensure it is ready to test.
By formalized unit tests, I'm simply talking about
Identifying the Unit Tests to be written
Identifying the test data (or describe it)
Writing these tests
Tracking these tests (and re-using as needed)
Making the results available
A very convincing way is to do formalized unit testing yourself, regardless of what your team/company does. This might take some extra effort on your side, especially if you're not experienced with this sort of practice.
When you can then show your code is better and you are being more productive than your fellow developers, they are going to want to know why. Then feed them your favorite unit testing methods.
Once you've convinced your fellow developers, convince management together.
I use Maven with the Surefire and Cobertura plugins for all my builds. The actual test cases are created with JUnit, DbUnit and EasyMock.
Identifying Unit Tests
I try to follow Test-Driven Development, but to be honest I usually only do that for a handful of the test cases and then come back and create tests for the edge and exception cases later.
Identifying Test Data
DbUnit is great for loading test data for your unit tests.
Writing Test Cases
I use JUnit to create the test cases. I try to write self documenting test cases but will use Javadocs to comment something that is not obvious.
Tracking & Making The Results Available
I integrate the unit testing into my Maven build cycle using the Surefire plugin, and I use the Cobertura plugin to measure the coverage achieved by those tests. I always generate and publish a web site including the Surefire and Cobertura reports as part of my daily build, so I can see which tests failed or passed.
The event which convinced me was when we managed to reintroduce the same bug three times, in three consecutive releases. Once I realised how much more productive I was as a programmer when I wasn't constantly fixing trivial mistakes after they had gone to the client, and could have a warm fuzzy feeling that colleagues' code would do what they claimed it would, I became a convert.
Back in the day, when I did COBOL development on mainframes, we did this religiously in the several companies I worked in, and it was accepted as the way you did things because the environment enforced it. I think it was a very typical scheme for the era, and maybe some of the reasons might be applicable to you:
Like most mainframe environments, we had three realms: development, Quality Assurance, and Production. Programmers developed in development and unit tested there, and once they signed off and were happy, the unit was migrated to the QA environment (with the test and results docs) where it was system tested by dedicated QA staff. The development-to-QA migration was a formal step which happened overnight. Once QA'd, the code was migrated to Production - and we had very few bugs.
The motivation to get the unit testing done, and done right, was that if you didn't and a bug was found by QA staff, it was obvious that you hadn't done the work. Consequently your reputation depended on how rigorous you were. Of course most people would end up with the occasional bug, but coders who consistently produced solid, tested code soon got a star reputation, and those who produced buggy code got noticed too. The push was always to up your game, and consequently the culture it produced was one that pushed towards bug-free code delivered first time.
Extracting pertinent points -
Coder reputation tied up with delivery of bug free tested code
Significant overhead associated with moving unit-tested code to the next level, so strong motivation not to repeat the process and to get it right first time.
System testing performed by different people to unit testing - ideally a different team.
I'm sure your environment will differ, but the principles might be translatable.
Sometimes leading by example is the best way. I also find it helps to remind people that certain kinds of bugs just don't happen when the code is under test. Next time somebody asks you to write something, do it with tests regardless. Eventually your peers will be jealous of the ease with which you can change your code and know that it still works.
As for management, you need to emphasise how much time gets wasted due to the nuclear explosion that occurs when you need to make a change to codebase X that isn't under test.
Many developers don't realise just how much they refactor without ensuring they are preserving behaviour across the entire system. For me, this is the biggest benefit of unit testing and TDD.
Software requirements change
Software changes to suit the requirements
The only certainty is change. Changing code that is not under test requires the developer to be aware of every possible behavioural side effect. The reality is that coders who think they can reason through every permutation do so by a painstaking process of trial and error until nothing obviously breaks. At that point they check in.
The pragmatic programmer recognizes that he/she is not perfect and all knowing, and that tests are like a safety net that allows them to walk the refactoring tightrope quickly and safely.
As for when to write tests on greenfield code, I'd advocate doing so as much as possible. Spend the time defining the behaviours that you want out of your system, and write tests initially to express those higher-level constructs. Unit tests can come as thoughts crystallize.
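As an illustration (all names here are hypothetical, and NUnit is just one choice of framework), a first behaviour-level test on a greenfield system might read like this. In real TDD the Order class would be written after the test; a minimal implementation is included only so the sketch stands on its own:

    // Hypothetical example: the test names a behaviour of the system,
    // not the internals of any particular class.
    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    public class OrderLine
    {
        public OrderLine(int quantity, decimal unitPrice)
        {
            Quantity = quantity;
            UnitPrice = unitPrice;
        }
        public int Quantity { get; private set; }
        public decimal UnitPrice { get; private set; }
    }

    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();

        public void Add(OrderLine line) { _lines.Add(line); }

        // Assumed business rule for the sake of the example: 5% off any line over ten units.
        public decimal Total
        {
            get
            {
                return _lines.Sum(l =>
                {
                    var gross = l.Quantity * l.UnitPrice;
                    return l.Quantity > 10 ? gross * 0.95m : gross;
                });
            }
        }
    }

    [TestFixture]
    public class OrderPricingBehaviour
    {
        [Test]
        public void Order_total_includes_quantity_discount_over_ten_units()
        {
            var order = new Order();
            order.Add(new OrderLine(quantity: 12, unitPrice: 10m));

            // 12 units at 10 each = 120, less the assumed 5% discount = 114.
            Assert.AreEqual(114m, order.Total);
        }
    }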
Hope this helps.
Education and/or certification.
Give your team members a formal training in the field of testing - maybe with certification exam (depending on your team members and your own attitude towards certification). You'll take testing to a higher level that way, and your team members will be more likely to take a professional attitude towards testing.
There is a big difference between convincing and requiring.
If you find a way to convince your colleagues to write them - great. However, if you create formalized rules and require them to write unit tests, they will find a way to work around the rules. As a result you will get a bunch of unit tests which are worth nothing: there will be a unit test for every single class, and they will test setters and getters.
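For illustration, here is a made-up example of the kind of test that enforced quotas tend to produce - it inflates the test count and coverage figures while telling you nothing about behaviour:

    // Made-up example of a near-worthless test: it only proves that an
    // auto-implemented property hands back the value it was given.
    using NUnit.Framework;

    public class Customer
    {
        public string Name { get; set; }
    }

    [TestFixture]
    public class CustomerTests
    {
        [Test]
        public void Name_returns_what_was_assigned()
        {
            var customer = new Customer();
            customer.Name = "Alice";
            Assert.AreEqual("Alice", customer.Name);
        }
    }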
Think twice before creating and enforcing rules. Developers are good at overcoming them.
First time around you just need to go ahead and write them and show people that it's worth it. I've found on three projects that it's the only way to convince people. Some people who don't code (e.g. junior project managers) won't be able to see the value until it's staring them right in the face.
On my software team, we tend to write a small business case on these issues and present them to management in order to have the time available to create and track tests. We explain that the time taken to test is well made up for when crunch time comes and everything is on the line. We also set up a Hudson build server to centralize the tracking of the unit tests. This makes it a lot easier for the developers to keep track of failing tests and to discover recurring problems.
Remind your team or the other developers that they're professionals, not amateurs. Worked for me!
Also, it's an industry standard these days. Without unit testing experience, they are less desirable and less valuable as employees to potential future employers.
As a team lead, it is my responsibility to ensure that my programmers are doing unit testing on all the modules they work on. I suppose at this point it's not even a question of how to convince them; it's required. Not sometimes, not on largish projects, all the time. Unit testing is the first line of defense against putting something into production that you will have to maintain. If something is put into production that has not been completely unit and system tested, then it will come back to bite you. One of the policies we have here to support this is that if it blows up in production, or causes problems, then the programmer responsible for coding and testing that module will be the one that has to take care of the problems, do the cleanup, etc. That alone is a fairly good motivator.
The other is that it is about pride. I work in a shop of about 75 coders; although that is large by some standards, it's really small enough for all of us to know one another. It's also small enough that we know what one another is working on, and when something does move to production, we are aware of any abends, failures, etc. If you are careful and do the unit and system testing, the chances of moving something to production without causing failures increase significantly. It may take a time or two of moving something to production and failing before that sinks in, but there are great rewards involved in not messing up. It's really nice to hear congratulations in the hallway when you move a project in and it doesn't screw up.
Write a bunch of them and demonstrate that unit testing has improved your productivity and the quality of your code. Without some kind of proof, sometimes people won't believe it's worth it.
So, two years after I asked this question, I find that one unexpected answer was that moving to a new SDLC was what was needed. Five years ago, we established our first formal SDLC. It improved our situation, but left out some important things, such as automation. We are now in the process of establishing a new SDLC (under new management) where one of the tenets is automation - not just automated unit tests, but automated functional tests too.
I guess the lesson is that I was thinking too small. If you are going to change how you create software, go 'whole hog' and make a drastic change rather than propose incremental change if you are not used to that.
You could take some inspiration from an initiative at Google. Their test team started putting up examples, tips and benefits inside the toilet cubicles to raise the profile of the merits of test automation.
https://testing.googleblog.com/2007/01/introducing-testing-on-toilet.html