How to reduce the time spent on testing? - unit-testing

I just looked back through a project that was recently finished and found a very serious problem: I spent most of the bank's time on testing the code, reproducing the different situations that "may" cause errors.
Do you have any ideas or experience to share on how to reduce the time spent on testing, so that development goes much more smoothly?
I tried to follow the test-driven concept for all my code, but I found it really hard to achieve; I really need some help from the senior folks here.
Thanks
Re: all
Thanks for the answers so far. Initially my question was how to reduce the time spent on testing in general, but now the problem comes down to how to write efficient automated test code.
I will try to improve my skills at writing a test suite to cut down on this part of the time.
However, I still really struggle with how to reduce the time I spend on reproducing errors. For instance, in a standard blog project it is easy to reproduce the situations that may cause errors, but a complicated bespoke internal system may "never" be testable throughout so easily. Is it worth it? Do you have any ideas on how to build a test plan for this kind of project?
Thanks for the further answers still.

Test driven design is not about testing (quality assurance). It has been poorly named from the outset.
It's about having machine runnable assumptions and specifications of program behavior and is done by programmers during programming to ensure that assumptions are explicit.
Since those tasks have to be done at some point in the product lifecycle, it's simply a shift of the work. Whether it's more or less efficient is a debate for another time.
What you refer to I would not call testing. Having strong TDD does mean that the testing phase does not have to be relied upon as heavily for errors, which would be caught long before they reach a test build (as they also are by experienced programmers with a good spec and responsive stakeholders in a non-TDD environment).
If you think the upfront tests (the runnable spec) are a serious problem, I guess it comes down to how much work the relative stages of development are expected to cost in time and money.

I think I understand. Above the developer-test level, you have the customer test level, and it sounds like, at that level, you are finding a lot of bugs.
For every bug you find, you have to stop, take your testing hat off, put your reproduction hat on, and figure out a precise reproduction strategy. Then you have to document the bug, perhaps put it in a bug-tracking system. Then you have to put the testing hat back on. In the meantime, you've lost whatever setup you were working on and lost track of where you were in whatever test plan you were following.
Now - if that didn't have to happen - if you had far fewer bugs - you could zip along right through testing, right?
It's doubtful that GUI-driving test automation will help with this problem. You'll spend a great amount of time recording and maintaining the tests, and those regression tests will take a fair amount of time to return the investment. Initially, you'll go much slower with GUI-Driving customer-facing tests.
So (I submit) that what might really help is higher /initial/ code quality coming out of development activities. Micro-tests -- also called developer-tests or test-driven-development in the original sense - might really help with that. Another thing that can help is pair programming.
Assuming you can't grab someone else to pair, I'd spend an hour looking at your bug tracking system. I would look at the past 100 defects and try to categorize them into root causes. "Training issue" is not a cause, but "off by one error" might be.
Once you have them categorized and counted, put them in a spreadsheet and sort. Whichever root cause occurs most often is the root cause you prevent first. If you really want to get fancy, multiply each root cause by some number representing the amount of pain it causes. (Example: if in those 100 bugs you have 30 typos on menus, which are easy to fix, and 10 hard-to-reproduce JavaScript errors, you may want to fix the JavaScript issue first.)
This assumes you can apply some magical 'fix' to each of those root causes, but it's worth a shot. For example: transparent icons breaking in IE6 may be because IE6 cannot easily process .png files. So have a version-control trigger that rejects .png's on check-in and the issue is fixed.
I hope that helps.

The Subversion team has developed some pretty good test routines, by automating the whole process.
I've begun using this process myself, for example by writing tests before implementing new features. It works very well, and produces consistent testing throughout the whole programming process.
SQLite also has a decent test system, with some very good documentation about how it's done.

In my experience with test-driven development, the time savings come well after you have written out the tests, or at least after you have written the base test cases. The key thing here is that you actually have to write out your automated tests. The way you phrased your question leads me to believe you weren't actually writing automated tests. After you have your tests written, you can easily go back later and update them to cover bugs they didn't previously find (for better regression testing), and you can easily and relatively quickly refactor your code with peace of mind that it will still work as expected after you have substantially changed it.
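To make "writing automated tests" concrete, here is a minimal sketch of what a test-first developer test can look like in JUnit 5. The PriceCalculator class and its discount rule are invented purely for illustration; the point is that the assertions pin the behaviour down first, and the production code is then written just to make them pass.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical example: the tests are written first and describe the behaviour we want.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAtOrAboveThreshold() {
        PriceCalculator calc = new PriceCalculator(100.0);
        assertEquals(90.0, calc.finalPrice(100.0), 0.001);
    }

    @Test
    void leavesSmallOrdersUntouched() {
        PriceCalculator calc = new PriceCalculator(100.0);
        assertEquals(99.0, calc.finalPrice(99.0), 0.001);
    }

    @Test
    void rejectsNegativePrices() {
        PriceCalculator calc = new PriceCalculator(100.0);
        assertThrows(IllegalArgumentException.class, () -> calc.finalPrice(-1.0));
    }
}

// The "just enough to make the tests pass" implementation, written afterwards.
class PriceCalculator {
    private final double discountThreshold;

    PriceCalculator(double discountThreshold) {
        this.discountThreshold = discountThreshold;
    }

    double finalPrice(double price) {
        if (price < 0) throw new IllegalArgumentException("price must be non-negative");
        return price >= discountThreshold ? price * 0.9 : price;
    }
}

Once a suite like this exists, "testing" mostly means re-running it after every change, which takes seconds rather than hours of manual reproduction.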

You wrote:
"Thanks for the answers above here,
initially my question was how to
reduce the time on general testing,
but now, the problem is down to how to
write the efficient automate test
code."
One method that has been proven in multiple empirical studies to work extremely well for maximizing testing efficiency is combinatorial testing. In this approach, a tester identifies WHAT KINDS of things should be tested (and inputs them into a simple tool), and the tool identifies HOW to test the application. Specifically, the tool generates test cases that specify which combinations of test conditions should be executed in which test script, and the order in which each test script should be executed.
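To make this concrete, here is a deliberately naive greedy sketch in Java of what "cover every pair of parameter values at least once" means. This is not how Hexawise or any commercial tool works internally, and all parameter names and values below are invented; it only shows why a pairwise suite can be much smaller than the full cross product while still exercising every pairing.

import java.util.*;

// Naive greedy pairwise ("all-pairs") generator, for illustration only.
public class PairwiseSketch {

    public static void main(String[] args) {
        // Hypothetical test dimensions for a login screen.
        Map<String, List<String>> params = new LinkedHashMap<>();
        params.put("browser", List.of("Firefox", "Chrome", "IE"));
        params.put("account", List.of("admin", "standard", "guest"));
        params.put("locale", List.of("en", "de"));

        // Prints the generated suite; real tools aim for near-minimal sizes,
        // this naive version only guarantees that every pair appears somewhere.
        generate(params).forEach(System.out::println);
    }

    static List<Map<String, String>> generate(Map<String, List<String>> params) {
        List<String> names = new ArrayList<>(params.keySet());

        // Every pair of parameter values that must show up in at least one test.
        Set<Set<String>> uncovered = new HashSet<>();
        for (int i = 0; i < names.size(); i++)
            for (int j = i + 1; j < names.size(); j++)
                for (String a : params.get(names.get(i)))
                    for (String b : params.get(names.get(j)))
                        uncovered.add(Set.of(names.get(i) + "=" + a, names.get(j) + "=" + b));

        List<Map<String, String>> tests = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            // Seed a test from one uncovered pair, then fill in the remaining
            // parameters with whichever value covers the most uncovered pairs.
            Map<String, String> test = new LinkedHashMap<>();
            for (String bound : uncovered.iterator().next()) {
                String[] kv = bound.split("=", 2);
                test.put(kv[0], kv[1]);
            }
            for (String name : names) {
                if (test.containsKey(name)) continue;
                String best = null;
                long bestScore = -1;
                for (String value : params.get(name)) {
                    test.put(name, value);
                    long score = uncovered.stream().filter(p -> covers(test, p)).count();
                    if (score > bestScore) { bestScore = score; best = value; }
                }
                test.put(name, best);
            }
            uncovered.removeIf(p -> covers(test, p));
            tests.add(test);
        }
        return tests;
    }

    static boolean covers(Map<String, String> test, Set<String> pair) {
        return pair.stream().allMatch(bound -> {
            String[] kv = bound.split("=", 2);
            return kv[1].equals(test.get(kv[0]));
        });
    }
}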
In the August, 2009 IEEE Computer article I co-wrote with Dr. Rick Kuhn, Dr. Raghu Kacker, and Dr. Jeff Lei, for example, we highlight a 10 project study I led where one group of testers used their standard test design methods and a second group of testers, testing the same application, used a combinatorial test case generator to identify test cases for them. The teams using the combinatorial test case generator found, on average, more than twice as many defects per tester hour. That is strong evidence for efficiency. In addition, the combinatorial testers found 13% more defects overall. That is strong evidence for quality/thoroughness.
Those results are not unusual. Additional information about this approach can be found at http://www.combinatorialtesting.com/clear-introductions-1 and in our tool overview here. It contains screenshots and an explanation of how the tool makes testing more efficient by identifying a subset of tests that maximizes coverage.
Also, a free version of our Hexawise test case generator can be found at www.hexawise.com/users/new

There is nothing inherently wrong with spending a lot of time testing if you are testing productively. Keep in mind, test-driven development means writing the (mostly automated) tests first (this can legitimately take a long time if you write a thorough test suite). Running the tests shouldn't take much time.

It sounds like your problem is that you are not doing automated testing. Using automated unit and integration tests can greatly reduce the amount of time you spend testing.

First, it's good that you recognise that you need help -- now go and find some :)
The idea is to use the tests to help you think about what the code should do, they're part of your design time.
You should also think about the total cost of ownership of the code. What is the cost of a bug making it through to production rather than being fixed first? If you're in a bank, are there serious implications about getting the numbers wrong? Sometimes, the right stuff just takes time.

One of the hardest things about any project of significant size is designing the underlying architecture and the API. All of this is exposed at the level of unit tests. If you're writing your tests first, then that aspect of design happens when you're coding your tests, rather than the program logic. This is compounded by the added effort of making code testable. Once you've got your tests, the program logic is usually quite obvious.
That being said, there seem to be some interesting automatic test builders on the horizon.

Related

Can unit testing be successfully added into an existing production project? If so, how and is it worth it?

I'm strongly considering adding unit testing to an existing project that is in production. It was started 18 months ago, before I could really see any benefit of TDD (face palm), so now it's a rather large solution with a number of projects and I haven't the foggiest idea where to start adding unit tests. What's making me consider this is that occasionally an old bug seems to resurface, or a bug is checked in as fixed without really being fixed. Unit testing would reduce or prevent these issues occurring.
By reading similar questions on SO, I've seen recommendations such as starting at the bug tracker and writing a test case for each bug to prevent regression. However, I'm concerned that I'll end up missing the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get go.
Are there any processes/steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in? How can I ensure that the tests are of good quality and aren't just a case of any test being better than no tests?
So I guess what I'm also asking is;
Is it worth the effort for an existing solution that's in production?
Would it be better to ignore the testing for this project and add it in a possible future re-write?
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality?
(Obviously the answer to the third point is entirely dependent on whether you're speaking to management or a developer.)
Reason for Bounty
Adding a bounty to try and attract a broader range of answers that not only confirm my existing suspicion that it is a good thing to do, but also give some good reasons against.
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD. I want to approach this challenge and develop my reasoning without my own biased point of view.
I've introduced unit tests to code bases that did not have them previously. The last big project I was involved with where I did this, the product was already in production with zero unit tests when I arrived on the team. When I left - 2 years later - we had 4,500+ tests yielding about 33% code coverage in a code base with 230,000+ production LOC (a real-time financial WinForms application). That may sound low, but the result was a significant improvement in code quality and defect rate - plus improved morale and profitability.
It can be done when you have both an accurate understanding and commitment from the parties involved.
First of all, it is important to understand that unit testing is a skill in itself. You can be a very productive programmer by "conventional" standards and still struggle to write unit tests in a way that scales in a larger project.
Also, and specifically for your situation, adding unit tests to an existing code base that has no tests is also a specialized skill in itself. Unless you or somebody on your team has successful experience with introducing unit tests to an existing code base, I would say reading Feathers' book is a requirement (not optional or strongly recommended).
Making the transition to unit testing your code is an investment in people and skills just as much as in the quality of the code base. Understanding this is very important in terms of mindset and managing expectations.
Now, for your comments and questions:
However, I'm concerned that I'll end up missing the big picture and end up missing fundamental tests that would have been included if I'd used TDD from the get go.
Short answer: Yes, you will miss tests, and yes, they might not initially look like what they would have in a greenfield situation.
The deeper answer is this: it does not matter. You start with no tests. Start adding tests, and refactor as you go. As skill levels get better, start raising the bar for all newly written code added to your project. Keep improving, etc...
Now, reading between the lines here, I get the impression that this is coming from the mindset of "perfection as an excuse for not taking action". A better mindset is to focus on self-trust. So while you may not know how to do it yet, you will figure it out as you go and fill in the blanks. Therefore, there is no reason to worry.
Again, it's a skill. You cannot go from zero tests to TDD perfection in one "process" or "step by step" cookbook approach in a linear fashion. It will be a process. Your expectation must be to make gradual and incremental progress and improvement. There is no magic pill.
The good news is that as the months (and even years) pass, your code will gradually start to become "proper", well-factored and well-tested code.
As a side note, you will find that the primary obstacle to introducing unit tests in an old code base is lack of cohesion and excessive dependencies. You will therefore probably find that the most important skill will become how to break existing dependencies and decouple code, rather than writing the actual unit tests themselves.
Are there any processes/steps that should be adhered to in order to ensure that an existing solution is properly unit tested and not just bodged in?
Unless you already have it, set up a build server and set up a continuous integration build that runs on every checkin including all unit tests with code coverage.
Train your people.
Start somewhere and start adding tests while you make progress from the customer's perspective (see below).
Use code coverage as a guiding reference of how much of your production code base is under test.
Build time should always be FAST. If your build time is slow, your unit testing skills are lagging. Find the slow tests and improve them (decouple production code and test in isolation). Well written, you should easily be able to have several thousand unit tests and still complete a build in under 10 minutes (roughly 1 to a few ms per test is a good but very rough guideline; a few exceptions may apply, like code using reflection, etc.).
Inspect and adapt.
How can I ensure that the tests are of good quality and aren't just a case of any test being better than no tests?
Your own judgement must be your primary source of reality. There is no metric that can replace skill.
If you don't have that experience or judgement, consider contracting someone who does.
Two rough secondary indicators are total code coverage and build speed.
Is it worth the effort for an existing solution that's in production?
Yes. The vast majority of the money spent on a custom built system or solution is spent after it is put in production. And investing in quality, people and skills should never be out of style.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
You would have to take into consideration, not only the investment in people and skills, but most importantly the total cost of ownership and the expected life time of the system.
My personal answer would be "yes, of course" in the majority of cases because I know it's just so much better, but I recognize that there might be exceptions.
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality?
Neither. Your approach should be to add tests to your code base WHILE you are making progress in terms of functionality.
Again, it is an investment in people, skills AND the quality of the code base, and as such it will require time. Team members need to learn how to break dependencies, write unit tests, learn new habits, improve discipline and quality awareness, learn how to better design software, etc. It is important to understand that when you start adding tests, your team members likely don't yet have these skills at the level they need to be for that approach to be successful, so stopping progress to spend all your time adding a lot of tests simply won't work.
Also, adding unit tests to an existing code base of any sizeable project is a LARGE undertaking which requires commitment and persistence. You can't change something fundamental, expect a lot of learning along the way, and ask your sponsor not to expect any ROI while you halt the flow of business value. That won't fly, and frankly it shouldn't.
Thirdly, you want to instill sound business focus values in your team. Quality never comes at the expense of the customer and you can't go fast without quality. Also, the customer is living in a changing world, and your job is to make it easier for him to adapt. Customer alignment requires both quality and the flow of business value.
What you are doing is paying off technical debt, and you are doing so while still serving your customer's ever-changing needs. Gradually, as debt is paid off, the situation improves, and it is easier to serve the customer better and deliver more value. This positive momentum is what you should aim for, because it underlines the principles of sustainable pace and will maintain and improve morale - for your development team, your customer and your stakeholders.
Is it worth the effort for an existing solution that's in production?
Yes!
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No!
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality?
Adding testing (especially automated testing) makes it much easier to keep the project working in the future, and it makes it significantly less likely that you'll ship stupid problems to the user.
The tests to put in a priori are ones that check whether the public interface to your code (and to each module in it) works the way you think it does. If you can, try also to induce each isolated failure mode that your code modules should have (note that this can be non-trivial, and you should be careful not to check too precisely how things fail; e.g., you don't really want to do things like counting the number of log messages produced on failure, since verifying that the failure is logged at all is enough).
Then put in a test for each current bug in your bug database that induces exactly the bug and which will pass when the bug is fixed. Then fix those bugs! :-)
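As a concrete (and entirely hypothetical) illustration of the "one test per tracked bug" idea in JUnit: name the test after the ticket, make it fail for exactly the reported defect, and keep it in the suite forever as a regression guard. The Order class, the ticket number and the off-by-one fix below are all invented.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical regression test for bug #1423: "order total drops the last
// item when the cart contains exactly 10 entries". Written to fail first,
// it passes once the off-by-one is fixed and then stays in the suite.
class Bug1423OrderTotalTest {

    @Test
    void totalIncludesTenthItem() {
        Order order = new Order();
        for (int i = 0; i < 10; i++) {
            order.addItem(1.0);          // ten items of 1.0 each
        }
        assertEquals(10.0, order.total(), 0.001);
    }
}

// Minimal stand-in for the production class so the sketch is self-contained;
// the real fix would live in the existing Order class.
class Order {
    private final java.util.List<Double> items = new java.util.ArrayList<>();

    void addItem(double price) { items.add(price); }

    double total() {
        double sum = 0;
        for (double p : items) sum += p;   // fixed: previously iterated to size() - 1
        return sum;
    }
}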
It does cost time up front to add tests, but you get paid back many times over at the back end as your code ends up being of much higher quality. That matters enormously when you're trying to ship a new version or carry out maintenance.
The problem with retrofitting unit tests is you'll realise you didn't think of injecting a dependency here or using an interface there, and before long you'll be rewriting the entire component. If you have the time to do this, you'll build yourself a nice safety net, but you could have introduced subtle bugs along the way.
I've been involved with many projects which really needed unit tests from day one, and there is no easy way to get them in there, short of a complete rewrite, which cannot usually be justified when the code is working and already making money. Recently, I have resorted to writing powershell scripts that exercise the code in a way that reproduces a defect as soon as it is raised and then keeping these scripts as a suite of regression tests for further changes down the line. That way you can at least start to build up some tests for the application without changing it too much, however, these are more like end to end regression tests than proper unit tests.
I do agree with what most everyone else has said. Adding tests to existing code is valuable. I will never disagree with that point, but I would like to add one caveat.
Although adding tests to existing code is valuable, it does come at a cost. It comes at the cost of not building out new features. How these two things balance out depends entirely on the project, and there are a number of variables.
How long will it take you to put all that code under test? Days? Weeks? Months? Years?
Who are you writing this code for? Paying customers? A professor? An open source project?
What is your schedule like? Do you have hard deadlines you must meet? Do you have any deadlines at all?
Again, let me stress, tests are valuable and you should work to put your old code under test. This is really more a matter of how you approach it. If you can afford to drop everything and put all your old code under test, do it. If that's not realistic, here's what you should do at the very least
Any new code you write should be completely under unit test
Any old code you happen to touch (bug fix, extension, etc.) should be put under unit test
Also, this is not an all or nothing proposition. If you have a team of, say, four people, and you can meet your deadlines by putting one or two people on legacy testing duty, by all means do that.
Edit:
I'm aiming to write this question up later with pros and cons to try and show management that it's worth spending the man hours on moving the future development of the product to TDD.
This is like asking "What are the pros and cons to using source control?" or "What are the pros and cons to interviewing people before hiring them?" or "What are the pros and cons to breathing?"
Sometimes there is only one side to the argument. You need to have automated tests of some form for any project of any complexity. No, tests don't write themselves, and, yes, it will take a little extra time to get things out the door. But in the long run it will take more time and cost more money to fix bugs after the fact than write tests up front. Period. That's all there is to it.
When we started adding tests, it was to a ten-year-old, approximately million-line codebase, with far too much logic in the UI and in the reporting code.
One of the first things we did (after setting up a continuous build server) was to add regression tests. These were end-to-end tests.
Each test suite starts by initializing the database to a known state. We actually have dozens of regression datasets that we keep in Subversion (in a separate repository from our code, because of the sheer size). Each test's FixtureSetUp copies one of these regression datasets into a temp database, and then runs from there.
The test fixture setup then runs some process whose results we're interested in. (This step is optional -- some regression tests exist only to test the reports.)
Then each test runs a report, outputs the report to a .csv file, and compares the contents of that .csv to a saved snapshot. These snapshot .csvs are stored in Subversion next to each regression dataset. If the report output doesn't match the saved snapshot, the test fails.
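For anyone who wants to picture that comparison step, here is a rough JUnit-style sketch. The paths, the report name and the runReportToCsv helper are stand-ins rather than our actual harness, and the fixture that restores a regression dataset into a temp database is omitted; only the snapshot-versus-output idea is shown.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import org.junit.jupiter.api.Test;

// Rough sketch of the "report output vs. saved snapshot" comparison only.
class SalesReportRegressionTest {

    @Test
    void salesReportMatchesSnapshot() throws Exception {
        Path actualCsv = runReportToCsv("sales-summary");             // hypothetical helper
        Path expectedCsv = Path.of("snapshots", "sales-summary.csv"); // snapshot kept in version control

        // Any difference fails the test: either a regression, or an intended
        // change that requires updating the snapshot file.
        assertEquals(Files.readAllLines(expectedCsv), Files.readAllLines(actualCsv));
    }

    private Path runReportToCsv(String reportName) throws Exception {
        // Stand-in: the real implementation would invoke the application's
        // reporting engine and write its output to a temp .csv file.
        Path out = Files.createTempFile(reportName, ".csv");
        Files.write(out, List.of("header1,header2", "value1,value2"));
        return out;
    }
}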
The purpose of regression tests is to tell you if something changes. That means they fail if you broke something, but they also fail if you changed something on purpose (in which case the fix is to update the snapshot file). You don't know that the snapshot files are even correct -- there might be bugs in the system (and then when you fix those bugs, the regression tests will fail).
Nevertheless, regression tests were a huge win for us. Just about everything in our system has a report, so by spending a few weeks getting a test harness around the reports, we were able to get some level of coverage over a huge part of our code base. Writing the equivalent unit tests would have taken months or years. (Unit tests would have given us far better coverage, and would have been far less fragile; but I'd rather have something now, rather than waiting years for perfection.)
Then we went back and started adding unit tests when we fixed bugs, or added enhancements, or needed to understand some code. Regression tests in no way remove the need for unit tests; they're just a first-level safety net, so that you get some level of test coverage quickly. Then you can start refactoring to break dependencies, so you can add unit tests; and the regression tests give you a level of confidence that your refactoring isn't breaking anything.
Regression tests have problems: they're slow, and there are too many reasons why they can break. But at least for us, they were so worth it. They've caught countless bugs over the last five years, and they catch them within a few hours, rather than waiting for a QA cycle. We still have those original regression tests, spread over seven different continuous-build machines (separate from the one that runs the fast unit tests), and we even add to them from time to time, because we still have so much code that our 6,000+ unit tests don't cover.
It's absolutely worth it. Our app has complex cross-validation rules, and we recently had to make significant changes to the business rules. We ended up with conflicts that prevented the user from saving. I realized it would take forever to sort it out in the application (it takes several minutes just to get to the point where the problems were). I'd wanted to introduce automated unit tests and had the framework installed, but I hadn't done anything beyond a couple of dummy tests to make sure things were working. With the new business rules in hand, I started writing tests. The tests quickly identified the conditions that caused the conflicts, and we were able to get the rules clarified.
If you write tests that cover the functionality you're adding or modifying, you'll get an immediate benefit. If you wait for a re-write, you may never have automated tests.
You shouldn't spend a lot of time writing tests for existing things that already work. Most of the time, you don't have a specification for the existing code, so the main thing you're testing is your reverse-engineering ability. On the other hand, if you're going to modify something, you need to cover that functionality with tests so you'll know you made the changes correctly. And of course, for new functionality, write tests that fail, then implement the missing functionality.
I'll add my voice and say yes, it's always useful!
There are some distinctions you should keep in mind, though: black-box vs white-box, and unit vs functional. Since definitions vary, here's what I mean by these:
Black-box = tests that are written without special knowledge of the implementation, typically poking around at the edge cases to make sure things happen as a naive user would expect.
White-box = tests that are written with knowledge of the implementation, which often try to exercise well-known failure points.
Unit tests = tests of individual units (functions, separable modules, etc). For example: making sure your array class works as expected, and that your string comparison function returns the expected results for a wide range of inputs.
Functional tests = tests of the entire system all at once. These tests will exercise a big chunk of the system all at once. For example: init, open a connection, do some real-world stuff, close down, terminate. I like to draw a distinction between these and unit tests, because they serve a different purpose.
When I've added tests to a shipping product late in the game, I found that I got the most bang for the buck from white-box and functional tests. If there's any part of the code that you know is especially fragile, write white-box tests to cover the problem cases to help make sure it doesn't break the same way twice. Similarly, whole-system functional tests are a useful sanity check that helps you make sure you never break the 10 most common use cases.
Black-box and unit tests of small units are useful too, but if your time is limited, it's better to add them early. By the time you're shipping, you've generally found (the hard way) the majority of the edge cases and problems that these tests would have found.
Like the others, I'll also remind you of the two most important things about TDD:
Creating tests is a continuous job. It never stops. You should try to add new tests every time you write new code, or modify existing code.
Your test suite is never infallible! Don't let the fact that you have tests lull you into a false sense of security. Just because it passes the test suite doesn't mean it's working correctly, or that you haven't introduced a subtle performance regression, etc.
You don't mention the implementation language, but if in Java then you could try this approach:
In a separate source tree, build regression or 'smoke' tests, using a tool to generate them, which might get you close to 80% coverage. These tests execute all the code logic paths and verify, from that point on, that the code still does exactly what it does currently (even if a bug is present). This gives you a safety net against inadvertently changing behaviour when doing the necessary refactoring to make the code easily unit testable by hand.
Product suggestion - I used to use the free web-based product JUnit Factory, but sadly it's closed now. However, the product lives on in the commercially licensed AgitarOne JUnit Generator at http://www.agitar.com/solutions/products/automated_junit_generation.html
For each bug you fix, or feature you add from now on, use a TDD approach to ensure new code is designed to be testable and place these tests in a normal test source tree.
Existing code will also likely need to be changed, or refactored to make it testable as part of adding new features; your smoke tests will give you a safety net against regressions or inadvertent subtle changes to behaviour.
When making changes (bug fixes or features) via TDD, it's likely that the companion smoke test will be failing once you're done. Verify that the failures are as expected due to the changes made, and remove the less readable smoke test, as your hand-written unit test now has full coverage of that improved component. Ensure that your test coverage does not decline; it should only stay the same or increase.
When fixing bugs write a failing unit test that exposes the bug first.
Whether it's worth adding unit tests to an app that's in production depends on the cost of maintaining the app. If the app has few bugs and enhancement requests, then maybe it's not worth the effort. OTOH, if the app is buggy or frequently modified then unit tests will be hugely beneficial.
At this point, remember that I'm talking about adding unit tests selectively, not trying to generate a suite of tests similar to those that would exist if you had practiced TDD from the start. Therefore, in response to the second half of your second question: make a point of using TDD on your next project, whether it's a new project or a re-write (apologies, but here is a link to another book that you really should read: Growing Object Oriented Software Guided by Tests)
My answer to your third question is the same as the first: it depends on the context of your project.
Embedded within your post is a further question about ensuring that any retrofitted testing is done properly. The important thing to ensure is that unit tests really are unit tests, and this (more often than not) means that retrofitting tests requires refactoring existing code to allow decoupling of your layers/components (cf. dependency injection; inversion of control; stubbing; mocking). If you fail to enforce this then your tests become integration tests, which are useful, but less targeted and more brittle than true unit tests.
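A small invented before/after sketch of what that decoupling usually looks like in practice: code that used to construct its own data access gets the dependency extracted behind an interface and injected, so the test can substitute a hand-rolled stub instead of a database. The InvoiceService and InvoiceRepository names are made up for illustration.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// 1. Extract the dependency behind an interface.
interface InvoiceRepository {
    List<Double> amountsForCustomer(String customerId);
}

// 2. Inject it instead of constructing a concrete DAO inside the class.
class InvoiceService {
    private final InvoiceRepository repository;

    InvoiceService(InvoiceRepository repository) {
        this.repository = repository;
    }

    double outstandingTotal(String customerId) {
        return repository.amountsForCustomer(customerId)
                         .stream()
                         .mapToDouble(Double::doubleValue)
                         .sum();
    }
}

// 3. In the test, substitute a hand-rolled stub; no database needed.
class InvoiceServiceTest {

    @Test
    void sumsAllOutstandingInvoices() {
        InvoiceRepository stub = customerId -> List.of(10.0, 2.5, 7.5);
        InvoiceService service = new InvoiceService(stub);

        assertEquals(20.0, service.outstandingTotal("any-customer"), 0.001);
    }
}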
I would like to start this answer by saying that unit testing is really important because it will help you arrest bugs before they creep into production.
Identify the projects/modules where bugs have been re-introduced and start writing tests for those. It makes perfect sense to write tests for new functionality and for bug fixes.
Is it worth the effort for an existing solution that's in production?
Yes. You will see the effect of bugs going down and maintenance becoming easier.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
I would recommend starting now.
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality?
You are asking the wrong question. Definitely, functionality is more important than anything else. But rather, you should ask whether spending a few weeks adding tests will make the system more stable. Will it help your end user? Will it help a new developer on the team understand the project, and ensure that he/she doesn't introduce a bug due to a lack of understanding of the overall impact of a change?
I'm very fond of Refactor the Low-hanging Fruit as an answer to the question of where to begin refactoring. It's a way to ease into better design without biting off more than you can chew.
I think the same logic applies to TDD - or just unit tests: write the tests you need, as you need them; write tests for new code; write tests for bugs as they appear. You're worried about neglecting harder-to-reach areas of the code base, and it's certainly a risk, but as a way to get started: get started! You can mitigate the risk down the road with code coverage tools, and the risk isn't (in my opinion) that big, anyway: if you're covering the bugs, covering the new code, covering the code you're looking at, then you're covering the code that has the greatest need for tests.
Yes, it is. When you start adding new functionality, it can require modifying some old code, and as a result it is a source of potential bugs.
(See the first point.) Before you start adding new functionality, all (or almost all) of the code should ideally be covered by unit tests.
(See the first and second points.) :) A grandiose new piece of functionality can "destroy" the old, working code.
Yes, it can: just try to make sure all the code you write from now on has a test in place.
If the code that is already in place needs to be modified and can be tested, then do so, but it is better not to be too vigorous in trying to get tests in place for stable code. That sort of thing tends to have a knock-on effect and can spiral out of control.
Is it worth the effort for an existing solution that's in production?
Yes. But you don't have to write all unit tests to get started. Just add them one by one.
Would it be better to ignore the testing for this project and add it in a possible future re-write?
No. The first time you add code that breaks the functionality, you will regret it.
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality?
For new functionality (code) it is simple. You write the unit test first and then the functionality.
For old code, you decide on the way. You don't have to have all the unit tests in place... Add the ones whose absence hurts you most... Time (and errors) will tell which ones you have to focus on ;)
Update
6 years after the original answer, I have a slightly different take.
I think it makes sense to add unit tests to all new code you write - and then refactor places where you make changes to make them testable.
Writing tests in one go for all your existing code will not help - but not writing tests for new code you write (or areas you modify) also doesn't make sense. Adding tests as you refactor/add things is probably the best way to add tests and make the code more maintainable in an existing project with no tests.
Earlier answer
I'm going to raise a few eyebrows here :)
First of all, what is your project - if it is a compiler or a language or a framework or anything else that is not going to change functionally for a long time, then I think it's absolutely fantastic to add unit tests.
However, if you are working on an application that is probably going to require changes in functionality (because of changing requirements) then there is no point in taking that extra effort.
Why?
Unit tests only cover code tests - whether the code does what it is designed to do - they are not a replacement for manual testing, which has to be done anyway (to uncover functional bugs, usability issues and all other kinds of issues).
Unit tests cost time! Now, where I come from, that's a precious commodity - and business generally picks better functionality over a complete test suite.
If your application is even remotely useful to users, they are going to request changes - so you will have versions that will do things better, faster and probably do new things - there may also be a lot of refactoring as your code grows. Maintaining a full grown unit test suite in a dynamic environment is a headache.
Unit tests are not going to affect the perceived quality of your product - the quality that the user sees. Sure, your methods might work exactly as they did on day 1, the interface between presentation layer and business layer might be pristine - but guess what? The user does not care! Get some real testers to test your application. And more often than not, those methods and interfaces have to change anyways, sooner or later.
What will be more beneficial; spending a few weeks adding tests or a few weeks adding functionality? - There are a hell of a lot of things that you can do better than writing tests - write new functionality, improve performance, improve usability, write better help manuals, resolve pending bugs, etc., etc.
Now don't get me wrong - if you are absolutely positive that things are not going to change for the next 100 years, go ahead, knock yourself out and write those tests. Automated tests are also a great idea for APIs, where you absolutely do not want to break third-party code. Everywhere else, it's just something that makes me ship later!
It's unlikely you'll ever have significant test coverage, so you must be tactical about where you add tests:
As you mentioned, when you find a bug, it's a good time to write a test (to reproduce it), and then fix the bug. If you see the test reproduce the bug, you can be sure it's a good, valid test. Given that such a large portion of bugs are regressions (50%?), it's almost always worth writing regression tests.
When you dive into an area of code to modify it, it's a good time to write tests around it. Depending on the nature of the code, different tests are appropriate. One good set of advice is found here.
OTOH, it's not worth just sitting around writing tests for code that people are happy with - especially if nobody is going to modify it. It just doesn't add value (except maybe understanding the behavior of the system).
Good luck!
You say you don't want to buy another book. So just read Michael Feathers' article on working effectively with legacy code. Then buy the book :)
If I were in your place, I would probably take an outside-in approach, starting with functional tests that exercise the whole system. I would try to re-document the system's requirements using a BDD specification language like RSpec, and then write tests to verify those requirements by automating the user interface.
Then I would do defect driven development for newly discovered bugs, writing unit tests to reproduce the problems, and work on the bugs until the tests pass.
For new features, I would stick with the outside-in approach: Start with features documented in RSpec and verified by automating the user interface (which will of course fail initially), then add more finely-grained unit tests as the implementation moves along.
I'm no expert on the process, but from what little experience I have I can tell you that BDD via automated UI testing is not easy, but I think it's worth the effort, and probably would yield the most benefit in your case.
I'm not a seasoned TDD expert by any means, but of course I would say that it's incredibly important to unit test as much as you can. Since the code is already in place, I would start by getting some sort of unit test automation in place. I use TeamCity to exercise all of the tests in my projects, and it gives you a nice summary of how the components did.
With that in place, I'd move on to those really critical business-logic-like components that can't fail. In my case, there are some basic trigonometry problems that need to be solved for various inputs, so I test the heck out of those. The reason I do this is that when I'm burning the midnight oil, it's very easy to waste time digging down into depths of code that really don't need to be touched; knowing they are tested for all of the possible inputs (in my case, there is a finite number of inputs) saves me from that.
Ok, so now you hopefully feel better about those critical pieces. Instead of sitting down and banging out all of the tests, I would attack them as they come up. If you hit a bug that's a real PITA to fix, write the unit tests for it and get them out of the way.
There are cases where you'll find that testing is tough because you can't instantiate a particular class from the test, so you have to mock it. Oh, but maybe you can't mock it easily because you didn't write to an interface. I take these "whoops" scenarios as an opportunity to implement said interface, because, well, it's a Good Thing.
From there, I'd get your build server or whatever automation you have in place configured with a code coverage tool. They create nasty bar graphs with big red zones where you have poor coverage. Now 100% coverage isn't your goal, nor would 100% coverage necessarily mean your code is bulletproof, but the red bar definitely motivates me when I have free time. :)
There are so many good answers here that I will not repeat their content. I checked your profile and it seems you are a C# .NET developer, so I'm adding a reference to the Microsoft Pex and Moles project, which can help you autogenerate unit tests for legacy code. I know that autogeneration is not the best way, but at least it is a way to start. Check this very interesting article from MSDN Magazine about using Pex for legacy code.
I suggest reading a brilliant article by a TopTal Engineer, that explains where to start adding tests: it contains a lot of maths, but the basic idea is:
1) Measure your code's Afferent Coupling (CA) (how much a class is used by other classes, meaning breaking it would cause widespread damage)
2) Measure your code's Cyclomatic Complexity (CC) (higher complexity = higher chance of breaking)
You need to identify classes with high CA and CC, i.e. have a function f(CA,CC) and the classes with the smallest differences between the two metrics should be given the highest priority for test coverage.
Why? Because classes with high CA but very low CC are very important but unlikely to break. On the other hand, classes with low CA but high CC are likely to break, but will cause less damage. So you want to balance the two.
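A toy sketch of the prioritization step, with invented metric values and an invented scoring function (the article derives its own f(CA, CC)); the only point is that once you have the two numbers per class, ranking candidates for test coverage is a few lines of code.

import java.util.Comparator;
import java.util.List;

// Toy ranking of classes for test coverage by coupling and complexity.
public class TestPriorityRanking {

    record ClassMetrics(String name, int afferentCoupling, int cyclomaticComplexity) {
        // One possible score: classes that are both widely used and complex
        // float to the top; min() rewards being high on both axes.
        double score() {
            return Math.min(afferentCoupling, cyclomaticComplexity);
        }
    }

    public static void main(String[] args) {
        List<ClassMetrics> classes = List.of(
            new ClassMetrics("PricingEngine", 42, 35),  // high CA, high CC -> test first
            new ClassMetrics("StringUtils",   60,  3),  // widely used but simple
            new ClassMetrics("ImportWizard",   2, 48),  // complex but isolated
            new ClassMetrics("AboutDialog",    1,  2)); // neither

        classes.stream()
               .sorted(Comparator.comparingDouble(ClassMetrics::score).reversed())
               .forEach(c -> System.out.printf("%-15s priority=%.1f%n", c.name(), c.score()));
    }
}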
It depends...
It's great to have unit tests but you need to consider who your users are and what they are willing to tolerate in order to get a more bug-free product. Inevitably by refactoring your code which has no unit tests at present, you will introduce bugs and many users will find it hard to understand that you are making the product temporarily more defective to make it less defective in the long run. Ultimately it's the users who will have the final say...
Yes.
No.
Adding tests.
Going towards a more TDD approach will actually better inform your efforts to add new functionality and make regression testing much easier. Check it out!

Why do code quality discussions evoke strong reactions? [closed]

I like my code to be in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact, I am fanatical about it. (Maybe even more than fanatical...) But in my experience, actions that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.)
Code quality does not seem to be popular. Some examples from my experience:
Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually, even compiler warnings like potential null pointer access are ignored.
Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it probably looks different for conferences on agile topics, as unit testing and CI and such has a higher value there.)
So why is code quality so unpopular/considered boring?
EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.
One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.
How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".
Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.
I've also seen more than enough articles about continuous integration.
There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible.
Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"?
Even if you're working on some embedded platform with no unit testing frameworks?
Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?
No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.
Personally, I get tired of "code quality" discussions because they tend to
be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
tend to get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
be dominated by people who write horribly bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".
I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.
Code review is not an exact science. The metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".
Suppose you have one huge function of 5,000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and so on. Have you ever wondered why software maintenance is such a nightmare?
Quality control on real software projects goes far beyond simply checking whether the code is correct. If you look at the V-model of software development, you'll notice that coding is only a small part of the whole equation.
Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason so little time is dedicated to software quality is that software quality isn't well understood.
What is there to measure?
How do we measure it?
Who will measure it?
What will I gain/lose from measuring it?
Lots of coder sweatshops do not realise the relation between "fewer bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now", even when shown pretty graphs demonstrating the opposite.
Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been occupied by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.
EDIT:
I've come across this paper (PDF), which is by the guy who said "You can't control what you can't measure". Basically, he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools want to make you think. He just adds another parameter to control, which is "Do I want to control this project? Will it be needed?"
Laziness / considered boring.
Management feeling it's unnecessary - an ignorant "just do it right" attitude.
"This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project".
I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.
Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.
Like anything it can get boring if you do too much of it but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life from you.
Why is code quality so unpopular?
Because our profession is unprofessional.
However, there are people who do care about code quality. You can find such-minded people for example from the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in software business do not understand the value of code quality, or do not even know what makes up good code.
I guess the answer is the same as to the question 'Why is code quality not popular?'
I believe the top reasons are:
Laziness of the developers. Why invest time in preparing unit tests and reviewing the solution if it's already implemented?
Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests and the programmers could simply implement something new instead of taking care of the quality of something already implemented?
Short answer: It's one of those intangibles only appreciated by other, mainly experienced, developers and engineers unless something goes wrong. At which point managers and customers are in an uproar and demand why formal processes weren't in place.
Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.
It's also harder to justify formal engineering processes when projects start their life as one-off or throw-away. Of course, long after the project is done, it takes a life of its own (and becomes prominent) as different business units start depending on it for their own business process.
At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless. They become a time-consuming hindrance. I see this situation all too often in companies where IT teams exist to support the business, and where development is often reactionary rather than proactive.
Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks continue to thrive as well as they do.
One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout within the organization, and is willing to do the work to convince others of its value.
For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.
Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.
I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time and be committed to maintaining the tool configuration over time.
Probably every Java developer knows JUnit...
While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And from those, very few have a good understanding of how to make testing a part of the solution.
I've known about unit testing and unit test frameworks for at least 7 years. I tried using it in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right. (ie. found a way that works for me and my team...)
For me some of those things were:
Finding a workflow that accommodates unit testing.
Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
Learning what to test, and how. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learning techniques and patterns that increase testability.) See the sketch after this list for a mocking example.
Having some tests is better than having no tests at all.
More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug.
You'll have to practice to get good at it.
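As promised above, a small invented example of the "abstract yourself from the database" point, using Mockito as the mocking framework. The LoginService and UserStore types are made up; the pattern is what matters: the real store would hit a database, while the test talks to a mock instead.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical abstraction over the user database.
interface UserStore {
    String passwordHashFor(String username);
}

class LoginService {
    private final UserStore users;

    LoginService(UserStore users) {
        this.users = users;
    }

    boolean login(String username, String passwordHash) {
        return passwordHash.equals(users.passwordHashFor(username));
    }
}

class LoginServiceTest {

    @Test
    void acceptsMatchingPasswordHash() {
        // The mock stands in for the real database-backed store.
        UserStore store = mock(UserStore.class);
        when(store.passwordHashFor("alice")).thenReturn("abc123");

        LoginService service = new LoginService(store);

        assertTrue(service.login("alice", "abc123"));
        verify(store).passwordHashFor("alice");   // the collaboration we care about
    }
}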
So until you find the right way: yeah, it's dull, non-rewarding, hard to do, time-consuming, etc.
EDIT:
In this blog post I go into more depth on some of the reasons given here against unit testing.
Code Quality is unpopular? Let me dispute that fact.
Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration, and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects.
There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents).
In fact, my personal feeling is that those talks are so common, that they are on the verge of being totally boring to me.
And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).
That said, though I think that Code Quality is a popular subject to discuss, I would rather agree that developers do not (in general) write good, or enough, tests. I have a reasonably simple explanation for that.
Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers and 2) developers whose ways "have worked well enough so far" (whatever that means).
In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.
The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience 98% of the time, it's Fast and Cheap, and by necessity the other must suffer.
It's the basic psychology of pain. When you're running to meet a deadline, code quality takes the back seat. We hate it because it's dull and boring.
It reminds me of this Monty Python skit:
"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "
I'd say for many reasons.
First of all, if the application/project is small, or carries no really important data at a large scale, the time needed to write the tests is better spent writing the actual application.
There is a threshold where the quality requirements are of such a level that unit testing is required.
There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. Even if you set up mock data - can you be certain the database would behave the same way?
Unit testing is also weak at finding problems that haven't been considered. That is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen in a power outage, or when the network link delivers bad data that is still CRC-correct, then writing tests for it is futile.
I am all in favour of code inspections, as they let programmers share experience and code style with other programmers.
"There are the common excuses for not writing tests, but they are only excuses."
Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s era Computer Scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.
Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned about meeting their immediate deadlines and ship date than the long-term quality of the project.
Code quality is subjective. Subjective topics are always tedious.
Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)
99% of the time, there are no third-party consequences for poor code quality (unless you're making space-shuttle or train-switching software).
Does it work? = Concrete.
Is it pretty? = In the eye of the beholder.
Read Fred Brooks' The Mythical Man Month. There is no silver bullet.
Unit testing takes extra work. If a programmer sees that his product "works" even without unit testing, why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program. Most people just tend to be lazy when it comes down to it, which isn't a good thing...
Code quality is context specific and hard to generalize no matter how much effort people try to make it so.
It's similar to the difference between theory and application.
I also have not seen unit tests written on a regular basis. The reason given was that the code was changing too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that, everyone was happy and felt no need for unit tests. So we have a few tests hanging around as history, but they are not used and are probably not compatible with the current code.
I personally see writing unit tests for big projects as not feasible, although I admit I have not tried it nor talked to people who did. There are so many rules in business logic that if you change something somewhere a little bit, you have no way of knowing which tests to update beyond those that will simply crash. Who knows, the old tests may no longer cover all possibilities, and it takes time to recall what was written five years ago.
The other reason is lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think through all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling and bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It takes time, and probably a very deep knowledge of most of the application's modules.
So, the reasons as I see them are the loss of time that yields no useful features, and the nightmare of maintaining and updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project is needed, and really two or three. So new colleagues cannot write proper tests. And there is no point in creating bad tests.
What's really 'dull' is spending more than a day chasing some random, extremely important 'feature' through a mysterious code jungle written by someone else x years ago, with no clue what's going wrong, why it's going wrong, or what could fix it, on a task that was supposed to be done in a few hours. And when it's done, nobody is satisfied because of the huge delay.
Been there - seen that.
A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.
Some people don't feel like they have time to learn the latest fad in software engineering, and that they can write high-quality code already. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.
Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.
I find unit testing to make me more productive, but I've seen lots of badly formatted, unreadable poorly designed code which passed all its tests ( generally long-in-the-tooth code which had been patched many times ). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.
My conclusion is that developers do not want to write tests.
I'm not sure. Partly, the whole education process in software isn't test driven, and probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check, why not in software engineering?
The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, so making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.
It's also hard to test distributed or GUI driven applications, due to inherent coupling. I've only been in one team that did that well, and that had as large a test department as a development department.
Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.
Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.
Usually even compiler warnings like potential null pointer access are ignored.
I've never worked in a shop where compiler warnings were tolerated.
One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".
That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.
A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or themselves months/years down the line).
Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.
Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?
Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than trying to get the tests done, so you can commit the code...
The likelihood of you being replaced by a cheaper fresh-out-of-college student or outsourced worker is directly proportional to the readability of your code.
People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."
We need to have some kind of a shared sense of what good code is, and whether it matters. For the first part of that, I have written up some thoughts:
http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html
As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.
But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.
I think code quality is over-rated. The more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, tells me that I have too many methods in my class. So I should cut the class into abstract classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since it might still have too many methods - been there).
Static analysis is really cool, but it's just warnings. For example, FindBugs has a problem with casting, and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
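For reference, the rewrite that kind of warning is asking for looks roughly like this (a sketch with made-up names, not the poster's code): guard the downcast with instanceof instead of casting blindly.

```java
// The rewrite a static-analysis tool typically wants: check the type before
// the downcast. CastExample and describe() are invented for illustration.
class CastExample {
    static String describe(Object input) {
        if (input instanceof Integer) {
            Integer number = (Integer) input; // safe: checked on the line above
            return "a number: " + number;
        }
        return "something else: " + input;
    }
}
```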
I think code is too complicated not when a method has 500 lines, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on detecting when code is too complicated, and not care so much about little things (you can refactor those with the right tools really quickly).
I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you change anything you have to worry about broken (poorly written) unit tests, and you never know what to do with them; many times we just commented them out and added a TODO to fix them later.
I think unit testing has its place. For example, I did a lot of unit testing in my web-page parser, because I kept finding different bugs and unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.
I don't know. Have you seen Sonar? Sure it is Maven specific, but point it at your build and boom, lots of metrics. That's the kind of project that will facilitate these code quality metrics going mainstream.
I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic: you only do it to feel better about yourself.
I work at a place where management encourages unit testing; however, I am the only person who writes tests (I want to get better at it; that's the only reason I do it). I understand that for others, writing tests is just more work and you get nothing in return. Surfing the web sounds cooler than writing tests.
Someone might break your tests and say he doesn't know how to fix them, or just comment them out (if you use Maven).
Frameworks are not there for real web-app integration testing (a unit test might pass, but it might not work on a web page), so even if you write tests you still have to test manually.
You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to a web page. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

How does Test Driven Design help a one man software project?

I've spent a lot of time building out tests for my latest project, and I'm really not sure what the ROI was on the time spent.
I'm a one man operation, and I'm building web applications. I don't necessarily have to "prove" that my software works to anyone (except my users), and I'm worried that I spent a good deal of time needlessly rebugging test code in the past months.
My question is, while I like the idea of TDD for small to large software teams, how does it help a one man team build high quality code quickly?
Thanks
=> Ran across this today, from the blog of Joel Spolsky, one of the founders of Stack Overflow:
http://www.joelonsoftware.com/items/2009/09/23.html
"Zawinski didn’t do many unit tests. They “sound great in principle. Given a leisurely development pace, that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go from zero to done in six weeks,’ well, I can’t do that unless I cut something out. And what I’m going to cut out is the stuff that’s not absolutely critical. And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”"
As I'm getting older, I think I'm realizing more and more that it's just all about speed and functionality. I'd love to build unit tests. But since we only have so much time at our disposal, I'd rather build it faster, and rely on beta testing and good automated error reporting to weed out any problems as they crop up. If the project eventually gets big enough that this bites me in the a**, it will be generating enough revenue that I can justify a rebuild.
I think that in a situation like yours it helps greatly when you have to change/refactor/optimize something on which a lot of code depends... By using unit testing you can quickly ensure that everything that worked before the change still works afterwards :) In other words, it gives you confidence.
TDD doesn't really have anything to do with team size. It has to do with creating the smallest amount of software needed with the right interface that works correctly.
The TDD process requires you to write only just enough code to satisfy a test, so you don't end up creating code you don't need.
Using TDD to design a class makes you think as a client of the class, so you end up creating a better interface more often than if you developed it without TDD.
TDD, by its nature, drives you toward very high code coverage, giving you strong evidence that your code works. A side-effect of this is that you and others can now more safely change your class, because it has a full suite of automated tests.
I should add that its iterative nature creates a positive feedback loop as well, so as you iterate you gain more and more confidence in your code.
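As a rough illustration of "just enough code to satisfy a test" (JUnit 4, with a PriceCalculator invented for the example): write the failing test first, then the smallest implementation that makes it pass, then refactor and repeat.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): write the test first; it fails because PriceCalculator does not exist yet.
public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscountFromOneHundredUpwards() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
        assertEquals(50.0, calc.discountedPrice(50.0), 0.001);
    }
}

// Step 2 (green): write only enough production code to make the test pass.
class PriceCalculator {
    double discountedPrice(double price) {
        return price >= 100.0 ? price * 0.9 : price;
    }
}
// Step 3 (refactor): clean up with the test as a safety net, then move on to the next test.
```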
TDD is not only about testing, it is also about designing your classes / API.
I mean: by writing a test first, you are forced to think on how you want to use your class. So, you first think about the interface of your class, how you want to use your class, and hence, your object model becomes more usable and readable.
rebugging is always needless - just don't delete the bugs in the first place...
For a real answer, you can't do better than 'it depends'. If:
you don't tend to have the kind of problems that automated unit testing can find (as opposed to performance, visual or aesthetic ones)
you have some other way of designing the code (e.g. UML)
you don't tend to have cause to change things while keeping them working
It could well be the case that TDD doesn't really work out for you.
Or maybe you are doing it wrong, and if you did it differently it would work better.
Or, just maybe, it is actually working but you don't realise it. One thing about working solo is that self-assessment is difficult.
"In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager - indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are "above average" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments."
In theory team size should not matter. TDD is supposed to pay off because:
You test the code so you get the bugs out
When you maintain or refactor the code, you know you didn't break it, because you can easily test it
You produce better code because you focus on edge cases as you write the code
You produce better code because you can refactor the code with confidence
And generally I do find that the approach is valuable. I must admit to being in two minds about the ongoing maintenance of some tests.
Even before the term TDD became popular, I've always written a little main function to test whatever piece I was working on. I'd throw it away right afterwards. I have trouble understanding the mentality of programmers who can write code that has never been executed and plug it in. When you find a "bug" after doing that a few times, it can take days or even weeks to track down.
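The throwaway check described above might look something like this (a hypothetical sketch; the slugify helper is invented for the example, not from the original post):

```java
// A quick scratch main that exercises the piece being worked on and is deleted afterwards.
public class ScratchCheck {
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    public static void main(String[] args) {
        // Eyeball the output rather than asserting; this is a scratch check, not a test suite.
        System.out.println(slugify("  TDD in 2009  ")); // expect "tdd-in-2009"
        System.out.println(slugify("Hello, World"));    // expect "hello-world"
    }
}
```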
Unit testing is a slightly better way to go because your tests hang around and you can see the intent.
Unit testing will find the bug much faster than testing from a UI after integration--it'll just save you time.
Now, saving all your unit tests and creating a suite can be of less value if you are a one-man team, especially if you like to refactor a lot (you can easily spend more time refactoring tests than code), but the tests are still worthwhile to create.
Plus, test-driven development is kinda fun to a degree.
I've found that people concentrate too much on the TEST in TDD. There are many who don't believe that this is what the originators had in mind.
I've found that BDD is quite useful, no matter what the problem size. It concentrates on capturing how the system is supposed to behave.
Some people take it all the way to the creation of automated unit tests. I use them as both specifications and test cases. Because they're in English, it is easy for the business to understand them, as well as the QA department.
In general, it is a formalized way of recording specs, so that code can be written. Isn't that what the ultimate goal is?
Here are a few links
What's in a Story
Introducing BDD
Designing Klingon Warships Using Behaviour Driven Development
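One way this plays out at the unit-test level, sketched here with invented names, is a test whose name reads like a behaviour specification and whose body follows a given/when/then structure:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

// A BDD-flavoured unit test: the method name reads like a behaviour
// specification and the body is structured as given / when / then.
public class AccountBehaviourTest {
    @Test
    public void shouldRejectWithdrawalWhenBalanceIsInsufficient() {
        // Given an account with a balance of 50
        Account account = new Account(50);
        // When a withdrawal of 100 is attempted
        boolean accepted = account.withdraw(100);
        // Then the withdrawal is rejected and the balance is unchanged
        assertFalse(accepted);
        assertEquals(50, account.balance());
    }
}

class Account {
    private int balance;

    Account(int openingBalance) {
        this.balance = openingBalance;
    }

    boolean withdraw(int amount) {
        if (amount > balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    int balance() {
        return balance;
    }
}
```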

How has unit testing made your life better?

Ok let me be honest, I haven't written more than 10 unit tests in my life probably.
I am embarking on a new project, and being the sole programmer means I should be scared ... very scared.
The idea that I can pseudo guarantee that my software works brings about a sense of joy.
Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on.
Unit testing will help me sleep better at night, which is better for my health.
My code will fail, but at least I will have a better idea when it will.
How has unit testing made your life better (or has it?), despite the rest of your team not jumping on the bandwagon?
By far the biggest value that unit tests have on my project is confidence. With that confidence it's much easier to add new features that weren't planned at the beginning, and to tear code apart to change something or turn things around.
With tests I know that I (or anyone else!) haven't broken something that already worked.
Without tests you are brave (or stupid) when you make big changes and deploy them to production the next minute.
By unit testing, I've reduced the number of "stupid-bugs" that get reported during the testing stage. It's also given me a higher level of confidence in my code.
As Elie said, unit-testing is a great way to flush out "stupid" or simple bugs very early.
For me, it changes the way I think about code; making my code testable makes it less brittle and more flexible (e.g. dependency injection/inversion of control came quite naturally to me because it's something I did for testing purposes anyway).
The greatest benefit I personally get from a thorough test-suite is the confidence to change complicated code even months after I wrote it without fear of accidentally breaking something.
I'm not yet there, but with some discipline while writing them, unit-tests are a great way to document your code, also.
Okay, as nobody else has stepped up to the plate to play devil's advocate, I'll do it.
Automated unit testing can have good benefits for some projects, but can also have many of the following issues:
It sucks down a ton of engineering resources.
It can take many man-hours to remove environment and setup issues from the equation.
It has a low ROI for some project types, especially GUIs.
It won't catch many errors, because it's impossible to evaluate all execution paths for all but the most trivial programs.
It doesn't catch integration errors.
It doesn't catch broader system-level errors such as functions performed across multiple units, or non-functional areas such as performance.
Test coverage and coverage gates can become an increasingly useless mantra
It requires a sustainable process for ensuring that test case failures are reviewed and addressed immediately. Otherwise the app will evolve out of sync with the unit test suite.
It has a significant opportunity cost - there are other activities such as code reviews that have an equally good or even better ROI.
It can involve a significant culture change.
So developers shouldn't adopt a dogmatic (yes or no) approach towards unit testing, but instead do an ROI calculation for every project.
Using unit testing in conjunction with TDD provides me with motivation and drive to complete the task at hand. Without the small progress of writing test, fixing test, writing test, fixing test I can become unmotivated.
Unit tests are really helpful when you start debugging as well. If your tests fail, then you know where the bug is almost immediately. If they all run, then you know where the bugs aren't (most of the time).
Another area where unit tests help: migrating software. I'm finding that it's a lot easier to prepare Python code that has unit tests to be migrated to Python 3 than code that doesn't have unit tests.
"Sure I will miss a ton of cases where I should have tested, but that's where I will learn as time goes on."
This is where Test Driven Development really shines. You don't have to worry quite as much about having the proper tests because that question will be answered for you beforehand.
Of course, just to make sure we're on the same page, "test driven development" means the process of coding where you write the test, verify that the test fails, and then write the code.
For me, it wasn't just unit tests that changed my life, but Test Driven Development (TDD). I liken it to a religious experience in my blog post (shameless, I know) My Year with TDD.
Getting into testing has been a career changing experience for me. I write less bugs, I write more readable code, I write more cohesive code, I know when something is broken (usually), etc, etc. I owe it all to Test Driven Development.
Try it, you'll like it :)
I've been a TDD(eveloper) since my 2nd* project at uni (back in 1988). I don't know if the term was even in use then.
The best thing is the ability to change things and very quickly check you haven't broken anything else. Easy regression testing.
They are good documentation of object/method usage as well.
*and that was directly because of how the first project went....
The biggest thing unit tests give you is confidence in your code. You know stuff works at a certain level of quality, and you know you can go in and change or rewrite something without having things fail all over the place where you don't expect it. Verification is only a test run away.
Now that I strive to test-drive my code, I know when I'm done with a function, a component, or a feature. Therefore I can report progress accurately.
I know it's not bug free code, but it is functional enough to be integrated, built and delivered to QA. I'm confident they'll be able to start testing without being blocked by a segmentation error, or any other silly problem.
I also have an environment ready so that I can quickly write a new test to reproduce any problem that will be reported, and I have a safety net to detect side effects and regressions when I modify or fix the code.
Can't speak for everyone, but I started writing tests because of a fellow developer that wrote tests that were a big help to me when I was learning the system.
I've also found that tests are a good way to verify assumptions when working with a new code base.
Unit testing is not a silver bullet. But it does have benefits, and it is very satisfying.
I find it means my code gets executed a lot earlier, since before I wrote unit tests, it took a lot of coding before there was enough functionality to try out in the application.
I'm also very grateful for my suite of unit tests when I come to refactor. I rewrote a whole date handling module a while ago, and there's no way I would have had the confidence to do that without regression tests.
I must say that I think the improvements in VS 2010, such as Ctrl+Enter (I think that was what it was), which lets you quickly stub the interface of a class while writing the test (first), are going to make this a LOT easier for me.
Unit test advantages are multiple. To be more specific about how it makes my life better: it increases my confidence in making changes and lets me change code a lot faster later on. It improves my life in the long run.
Of course, in the short term it's a little more painful because it requires more time, but that is quickly forgotten when the tests validate your work for you.
I'm a big fan of unit testing, though my tests don't provide complete coverage nowadays... mostly because I'm working on a web site and much of my code just grabs data from the database, manipulates it, and spits it out. The manipulation code is generally well tested, but it's a real pain to test the database code.
That being said, I can point out a case where unit tests saved me weeks of work...
I was working on a smallish project (4-6 devs) a while back and, after months of work we had reached a state of near completion. At this point, the folks in charge of the product decided that, instead of storing dates (and generating reports using them) in GMT, they wanted everything in EST. Given the product was built to handle large amounts of data/logs and generate information about that data based on timeframes, this was a fairly major change.
Over the course of the next few days, the development team went in and changed everything to deal with EST timestamps. What would have taken us weeks had we not had such extensive automated tests took us just 3 days, allowing us to meet an aggressive schedule. We were able to jump into the code and start changing whatever we needed to, the unit tests giving us courage because we knew the system would complain quickly if we broke something. To this day, I use that experience as an example of how you can never truly understand the benefits of automated testing until it saves you... and it certainly did that for my team.
I'm currently in the process of trying to jump on the bandwagon. Work mates are already doing it before they've written a line of functional code. I'm still writing a full program before I've even run it through the main, let alone unit test it :/
I'll get there in the end I'm sure. But at the moment, I am of ill health through spending 90% of my life debugging :(
I've done quite a bit of work in java, and a year in Ruby.
In Ruby we used extensive testing (TDD). This was ABSOLUTELY REQUIRED. You can write nearly complete garbage into a Ruby file, and if you don't hit that specific line of code, you'll never know, so your tests need near 100% coverage.
In Java, I've never needed much more than a single, simple success path test--and that can usually be thrown away after the code runs. It's really the static type checking, strong typing and using coding patterns like strong encapsulation and parameter checking that makes this possible. You can actually get very close to proving that a small class can't be broken (is bug free) without tests, and when correctly designed, all classes should be small.
Another point of interest: On the Ruby project, we had a refactor that took us 2 days of real code work (split a prime model class into two classes) and 2 weeks of test repairs.
At some point all those tests have a price, they are still code you have to maintain.
That said, I find TDD fun and a good way to get things started, even in Java, and I also would reiterate that I ALWAYS have some success path testing at the very least (even if it's just a quick main method) in virtually every class I write.
Good unit tests that provide sufficient coverage can make you sleep better at night.
If you use assertions, you can find potential bugs that are missed by the unit tests (sometimes the tests alone aren't enough), and you can sleep even better at night.
It saves me time, because when I run code TDD, it usually just works when it comes to integration time, so no need to spend agonizing hours debugging.
It also gives me confidence when having a conversation with other developers who claim that an API I created has bugs.
When you get to some critical mass of tests, a nice side-effect is often that if you are about to introduce a bug, it is likely to make some test fail, even if the tests aren't directly against the new, buggy code you are writing.
So you will be alerted to the bug you are about to commit before doing so, and you can then write tests against it and fix it right away.
(This is of course only true if you have tests that are not "just" very narrow test-one-thing-only tests, but I think that is more often the case than not.)

Does anyone have metrics on the utility of formal Unit Testing?

Does anyone have metrics on the utility of formal Unit Testing? I see a lot of attention being paid to unit testing tools and I was curious why?
I stopped formal unit testing over 5 or 6 years ago and the net gain in productivity seems quite high. I stopped unit testing because I noticed that it never caught anything - let alone anything useful. The type of errors that unit testing detects seem to be preventable by not drinking more than 2 glasses of wine/beer per hour (or 2 joints per hour). Also - unit testing seems to create risk by allowing the developer to think that there is some safeguard to catch their mistakes.
I do test to ensure that the code works as it should, but I do not use any tools. I test based on the changes being made. My production error rate for new code is approximately zero. My error rate for changes to code is about 2-3 bugs per quarter. The above measures are based on 4 production applications that I develop/support.
I acknowledge your superiority as a human being and a coder.
I, however, am a mere moron, and without Python unittest, I would be lost.
I cannot refactor without unit tests, it just takes too much thinking.
I can barely code without unit tests, it's too hard to be absolutely sure I absolutely understand absolutely every nuance.
I unit test because I'm an idiot. Since you don't make mistakes, you clearly don't need to unit test. I salute you.
Edit. For me, unit tests aren't about metrics or costs. I don't need any randomized, controlled experiments to show me the value. I cannot work without them. Indeed, I refuse to work without them. In a similar vein, I won't work without a compiler, a text editor, or source code control; I won't work without requirements; I refuse to program without doing design first.
I do not see unit testing as a replacement for traditional testing, but rather as an extra step to ensure correctness of code. Some particular areas where I find unit testing useful are:
When refactoring/changing existing code. Unit tests will verify that at least those cases still work as expected. The more tests you have, the more sure you can be that the code changes did not break existing code.
When submitting bug reports. Having a unit test which exposes a bug is a great way of demonstrating the bug AND knowing when it has been fixed (see the sketch after this list).
A means of designing interfaces. You have some test code to check the interfaces out with.
Probably a few others I've forgotten about :-P
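A sketch of the "unit test that exposes a bug" idea from the list above (the issue number, MoneyParser and its behaviour are all hypothetical):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical regression test attached to a bug report: it fails on the broken
// build, documents the expected behaviour, and proves the moment the fix lands.
public class Issue1234RegressionTest {
    @Test
    public void parsesNegativeAmountsWithoutLosingTheSign() {
        // Reported bug: "-12.50" was being parsed as +1250 cents.
        assertEquals(-1250L, MoneyParser.toCents("-12.50"));
    }
}

class MoneyParser {
    // Naive fix for illustration: assumes the input always has exactly two decimal places.
    static long toCents(String amount) {
        return Long.parseLong(amount.replace(".", ""));
    }
}
```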
PS: How do you know you make no bugs? I don't think that I introduce bugs into code I work on, but that certainly doesn't make it so. IMHO, it is naive to think that your code is bug free.
(Regarding unit testing, if you know your code may contain bugs - and I would say most code does - wouldn't you want to use every possible means to catch them?)
Here are some white papers about unit testing that might help you:
White paper #1
White paper #2
White paper #3
But, as Martin Fowler put it, the anecdotal evidence in support of unit tests and TDD is overwhelming, yet you cannot measure productivity.
Unit testing is good because you can change a part and know whether it has affected something somewhere else. Some people are "in love" with unit testing and should calm themselves. I believe in unit testing, but people who try to cover everything are as dangerous as people who do not unit test.
Here is a thread that has some research about the TDD approach
Research on TDD
Is there hard evidence of the ROI of unit testing?
I don't have any metrics to point at, but I think the rise in popularity is because the rest of us have had experience that's the opposite of yours. :)
With unit tests, I can fix bugs in production code and install the new version within the hour the bug was found and I can be sure that the new version isn't worse than what we had before - because the tests tell me so. It might be better, though.
They give me a lower watermark below which the quality of my code can never sink. They allow me to keep track of the bigger picture and have the tests find the small mistakes that I tend to make. They also allow me to develop in a very relaxed style.
Since I test, I tend to deliver on time, my code quality has improved a lot and the result usually works as expected. Also, I'm much faster since I can cut corners which would be too dangerous to try if I didn't have the tests.
That said, I also don't have any hard numbers, nor do I know of any source, despite having done unit testing and TDD for years. My love for tests is based on pure word of mouth and personal experience.
I've found that unit testing helps me when adding new functionality. In this scenario I used to worry that what I was adding was going to break something in some remote part of the application. But with appropriate unit tests I know whether or not I've broken something the moment I run the tests.
Here's an interesting discussion on the utility of unit tests.
If you don't like unit tests, another concept you might want to look into is Design by Contract, which basically asserts that if certain input conditions are met, then the output is guaranteed according to the contract.
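A minimal Design-by-Contract flavoured sketch using plain Java assertions (enabled at runtime with the JVM's -ea flag; the BoundedCounter class and its contract are invented for illustration):

```java
// Pre- and postconditions expressed as assertions; a dedicated DbC tool would
// go further, but this conveys the idea.
class BoundedCounter {
    private final int max;
    private int value;

    BoundedCounter(int max) {
        assert max > 0 : "precondition: max must be positive";
        this.max = max;
    }

    int increment() {
        assert value < max : "precondition: counter must not already be full";
        value++;
        assert value <= max : "postcondition/invariant: value never exceeds max";
        return value;
    }
}
```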
I'm a development manager. For my organization, setting up and migrating to nhibernate involved some setup costs and added to our development time. Some of the developers liked it, some thought it was a waste of time.
There hasn't been a noticeable change in error rates, but perhaps it's too early to tell.
From my perspective, I think it helps junior developers who aren't sure of their work, but for the senior developers, it seems to slow them down - it's one more thing to keep updated. I'm not sure if we'll continue using this, revert back to our old ways (ad hoc unit testing), or let developers make a personal choice.
There are several tools that measure the code coverage with unit tests.
They are an essential part, together with unit tests, of ensuring the code is not only tested, but completely tested.
Everything else is just pure magic ;)
If you want to refactor code, by definition you need some way of telling if the changes broke the code. Lacking divine insight, I find that unit testing is a pretty good tool, but ymmv.
I've specifically had a lot of gain using Test Driven Development (TDD) with C++ on a huge monolithic server application.
When I'm working on an area of code, I first ensure that that area is covered with tests before I change or write new code.
In this use case I have huge gains in productivity.
To build and run my test: 10 seconds.
To build, run and manually test the full server: min 5 minutes.
This means I'm able to iterate an area of code very quickly and only test fully when I need to.
In addition to this I have also used integration tests, which take longer to build and run but exercise only the specific piece of functionality I am interested in.
I have some sympathy for what you're saying. My experience is similar in that my new bugs are seldom caught by unit tests. If at all, the unit tests are modified after the bug has been found to ensure that it doesn't reappear.
Where unit tests have helped me is in the design of my (Java) classes. I have often refactored the classes to make them testable (removal of singletons for example) which, I think, has improved the overall design. For me, that's reason enough to use them.
In my experience, unit testing has helped me with the following:
Now I can give all of my focus to the code block / function / class that I'm writing without worrying about anything else, because I know that if I do something stupid or cause a side effect, my tests will tell me.
I can refactor stuff by knowing that I'm not breaking stuff
I'm sure that my app is working as expected
Before a release I don't have to check every single piece of functionality manually to confirm that the app still works.
I'm sure my application is always stable at some level.
However, this is somewhat specific to me, since I'm really bad at "managing multiple things at a time" and "focusing". Therefore unit testing works for me, and it has literally saved my day many times when I introduced a new feature and it broke some old functionality.
If this is not the case for you just don't use it. If you are still having the same outcome, performance and quality with the same amount of bugs then that means you don't need unit testing. Or you need to revise your unit testing methodologies.
P.S. Based on your bug rate and what you've said, you sound like a genius anyway (assuming these are medium or big projects), so I'd say don't use unit tests; it looks like you are doing fine. But for the rest of the world who are not geniuses, such as me, I strongly recommend unit testing, because it worked for me.
Dunno about you, but I've just checked in two fixes for two core dumps caused by changes in existing code that my unit tests caught. And I just had a major production issue that would have been caught by a unit test if I had more trust in its results (our unit tests are a bit more on the functional side than I'd like to admit).
It seems to me that formal Unit Testing with the popular testing tools is pretty much like U.S. airport security.
It provides the illusion of security
It makes people 'feel' good
It's very inconvenient (extremely inconvenient if you got the wrong color skin)
People will angrily wave their fist at you if you criticise it...
These same people will be left befuddled when their process fails them and they will jump on the next band wagon...
I think people have different perspectives on software. In my opinion, software is a means of making money (hopefully by providing increased revenue or saving money). I've seen the posts about TDD research, which is the closest thing I've seen to a scientific way of answering the question, but the methodology lacks rigor. None of the articles cited had a baseline or a fairly contrasted alternative method.
I guess, the fans of formal unit testing will continue to feel secure in their ways. I will continue to write my spec on a scrap of paper and put in the bowl at the feet of my statue of St. Anthony and say a prayer. Who is to say which way is more effective, but my way sure feels good... Maybe I'll write a white paper about it.
Remember the rise in popularity of 70's and 80's haircuts and clothes... that didn't work out so well for those of us who lived in those decades.
Formal unit testing takes considerable work and effort to maintain. I'd guess that it takes 20-50% of the time it takes to actually develop the software. What I'm asking is: for the known price of adding 20-50% overhead to every development effort, is the gain noteworthy and/or provable?
By not doing formal unit testing, you are forcing the developer to think through the appropriate things to test. The developer takes more ownership of the product.
Formal unit testing sounds like snake oil... Everyone and his brother say it is good, useful, cool, etc., but there has not been a randomized controlled trial to prove that it actually saves time or money. All the responses thus far are subjective testimonies.
What I'd like to know is if there is a software manager who can demonstrate higher productivity (or even higher quality) after the introduction of unit testing.
lol - the facetious rant by S.Lott is the highest ranked response... Given this forum is anonymous (for me anyway), your respect is not what I'm seeking. I'd consider myself barely above mediocre. I've worked with exceptional developers... those guys generally don't even do basic tests of their code - they just know it will work.