Let's say that I am refactoring some classes that have unit tests already written. Let's assume that the test coverage is reasonable in the sense that it covers most use cases.
While refactoring I change some implementations: move some variables, add or remove a few, abstract things out into functions, and so on. The API of the classes and their functions has remained the same, though.
Should I be adding tests when I am refactoring these classes? Or should I add a new test for every bit of refactoring? This is what I usually do when I am building up code rather than refactoring.
PS: Apologies if this is really vague.
Usually unit tests are working specifications of the design and use cases, describing how your refactored System Under Test/Class Under Test (e.g. your classes) should really behave. With that in mind, I would simply:
Write the test according to your specification
Refactor the code to adhere to the specification
Check the assertion results of the test
In practice I have come to the conclusion that you don't need to test each and every line of your code just for the sake of a high code-coverage percentage, but make sure you always test the parts of your code where the behaviour or logic lies.
If your changes are verified by current tests, there's no need to add new ones. Check your code coverage. If there are holes in your new code, that means you made an unverified functional change.
New tests might be valuable if a newly extracted class is moved to another project, where you don't have the original tests.
Is it generally accepted that you cannot test code unless the code is set up to be tested?
A hypothetical bit of code:
public void QueueOrder(SalesOrder order)
{
    if (order.Date < DateTime.Now.AddDays(-20))
        throw new Exception("Order is too old to be processed");
    ...
}
Some would consider refactoring it into:
protected DateTime MinOrderAge
{
    get { return DateTime.Now.AddDays(-20); }
}

public void QueueOrder(SalesOrder order)
{
    if (order.Date < MinOrderAge)
        throw new Exception("Order is too old to be processed");
    ...
}
Note: You can come up with even more complicated solutions involving an IClock interface and a factory; it doesn't affect my question.
The issue with changing the above code is that the code has changed. The code has changed without the customer asking for it to be changed. And any change requires meetings and conference calls. And so I'm at the point where it's easier not to test anything.
If I'm not willing or able to make changes, does that make me unable to perform testing?
Note: The above pseudo-code might look like C#, but that's only so it's readable. The question is language agnostic.
Note: The hypothetical code snippet, problem, need for refactoring, and refactoring are hypothetical. You can insert your own hypothetical code sample if you take umbrage with mine.
Note: The above hypothetical code is hypothetical. Any relation to any code, either living or dead, is purely coincidental.
Note: The code is hypothetical, but any answers are not. The question is not subjective, as I believe there is an answer.
Update: The problem here, of course, is that I cannot guarantee that the change in the above example didn't break anything. Sure, I refactored one piece of code out to a separate method, and the code is logically identical.
But I cannot guarantee that adding a new protected method didn't offset the virtual method table of the object, and if this class is in a DLL then I've just introduced an access violation.
The answer is yes, some code will need to change to make it testable.
But there is likely lots of code that can be tested without having to change it. I would focus on writing tests for that stuff first, then writing tests for the rest when other customer requirements give you the opportunity to refactor it in a testable way.
Code can be written from the start to be testable. If it is not written from the start with testability in mind, you can still test it; you may just run into some difficulties.
In your hypothetical code, you could test the original code by creating a SalesOrder with a date far in the past, or you could mock out DateTime.Now. Having the code refactored as you showed is nicer for testing, but it isn't absolutely necessary.
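As an illustration of that "nicer for testing" refactoring, here is a minimal sketch in Java using java.time.Clock; the OrderProcessor name and the 20-day cutoff are just assumptions carried over from the hypothetical code, not anything from the original post:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.temporal.ChronoUnit;

// Hypothetical processor that asks for its clock instead of reading the system time directly.
class OrderProcessor {
    private final Clock clock;

    OrderProcessor(Clock clock) {
        this.clock = clock;
    }

    void queueOrder(Instant orderDate) {
        Instant cutoff = clock.instant().minus(20, ChronoUnit.DAYS);
        if (orderDate.isBefore(cutoff)) {
            throw new IllegalArgumentException("Order is too old to be processed");
        }
        // ... queue the order ...
    }
}

class OrderProcessorDemo {
    public static void main(String[] args) {
        // A fixed clock makes "now" deterministic for the test.
        Clock fixed = Clock.fixed(Instant.parse("2020-01-31T00:00:00Z"), ZoneId.of("UTC"));
        OrderProcessor processor = new OrderProcessor(fixed);

        processor.queueOrder(Instant.parse("2020-01-20T00:00:00Z")); // 11 days old: accepted

        try {
            processor.queueOrder(Instant.parse("2019-12-01T00:00:00Z")); // far too old: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("Old order rejected, as expected");
        }
    }
}

The production code would construct the processor with Clock.systemUTC(); only the test supplies a fixed clock.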
If your code is not designed to be tested, then it is more difficult to test. In your example, you would have to replace DateTime.Now, which is probably no easy task.
If you think it adds little value to add tests to your code, or if changing existing code is not allowed, then you should not do it.
However, if you believe in TDD, then you should write new code with tests.
You can unit test your original example using a Mock object framework. In this case I would mock the SalesOrder object several times, configuring a different Date value each time, and test. This avoids changing any code that ships and allows you to validate the algorithm in question that the order date is not too far in the past.
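A rough sketch of that approach in Java with Mockito; the SalesOrder, getDate() and OrderQueue names are assumptions standing in for the hypothetical C#-style code above:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Date;
import org.junit.Test;

public class QueueOrderTest {

    @Test(expected = RuntimeException.class)
    public void rejectsOrderOlderThanTwentyDays() {
        // Mock the complex input and configure only the attribute under test.
        SalesOrder oldOrder = mock(SalesOrder.class);
        long twentyFiveDaysMillis = 25L * 24 * 60 * 60 * 1000;
        when(oldOrder.getDate()).thenReturn(new Date(System.currentTimeMillis() - twentyFiveDaysMillis));

        new OrderQueue().queueOrder(oldOrder); // expected to throw: order is too old
    }

    @Test
    public void acceptsRecentOrder() {
        SalesOrder freshOrder = mock(SalesOrder.class);
        when(freshOrder.getDate()).thenReturn(new Date()); // dated "now"

        new OrderQueue().queueOrder(freshOrder); // expected to pass without throwing
    }
}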
For a better overall view of what's possible given the dependencies you're dealing with, and the language features you have at your disposal, I recommend Working Effectively with Legacy Code.
This is easy to accomplish in some dynamic languages. For example, I can hook into the import/using statements and replace an actual dependency with a stub, even if the SUT (System Under Test) uses it as an implicit dependency. Or I can redefine those symbols (classes, methods, functions, etc.). I'm not saying this is the way to go. Things should still be refactored, but this makes it easier to write some characterization tests first.
The problem with this sort of code is always that it creates and depends on a lot of static classes, framework types, and so on.
A very good solution to 'inject' fakes for all these objects is Typemock Isolator (which is commercial, but worth every penny). So yes, you certainly can test legacy code, which was written without testability in mind. I've done it on a big project with Typemock and had very good results.
Alternatively to Typemock, you may use the free MS Moles framework, which does basically the same. It's only that it has a quite unintuitive API and is much harder to learn and use.
HTH.
Thomas
Mockito + PowerMock for Mockito.
You'll be able to test almost everything without dramatically changing your code. But some setters will be needed to inject the mocks.
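A setter is usually enough for ordinary collaborators; PowerMock is what lets you stub out a static call you cannot inject. A hedged sketch follows, where TimeSource and its static now() are invented names standing in for whatever static your code calls:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(TimeSource.class) // the class owning the static must be prepared for instrumentation
public class FrozenTimeTest {

    @Test
    public void staticTimeCanBeFrozen() {
        PowerMockito.mockStatic(TimeSource.class);
        PowerMockito.when(TimeSource.now()).thenReturn(1_000_000L); // freeze "now"

        assertEquals(1_000_000L, TimeSource.now());
    }
}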
I was reading the Joel Test 2010 and it reminded me of an issue I had with unit testing.
How do I really unit test something? Don't I unit test functions, only full classes? What if I have 15 classes that are each under 20 lines? Should I write a 35-line unit test for each class, bringing 15*20 lines to 15*(20+35) lines (that's from 300 to 825, nearly 3x more code)?
If a class is used by only two other classes in the module, should I unit test it, or would the tests against the other two classes suffice? What if they are all under 30 lines of code; should I bother?
If I write code to dump data and I never need to read it back because another app consumes it, and that other app isn't command line (or it is, but there's no way to verify whether the data is good), do I still need to unit test it?
What if the app is a utility and the total is under 500 lines of code? Or it is used this week and will be used in the future, but always needs reconfiguring because it is meant for a quick batch process and each project requires tweaks since the desired output is never quite the same (I'm trying to say there's no way around it; for valid reasons it will always be tweaked). Do I unit test it, and if so, how? (Maybe we don't care if we break a feature used in the past but not in the present or future.)
etc.
I think this should be a wiki. Maybe people would like to give examples of exactly what they should (or should not) unit test? Maybe links to books would be good. I tried one, but it never clarified what should be unit tested, just the problems of writing unit tests and their solutions.
Also, if classes are meant to be used only in that project (by design, spec or whatever other reason) and a class isn't useful alone (let's say it generates the HTML using data that returns HTML-ready comments), do I really need to test it? Say, by checking whether all public functions allow null comment objects when my project doesn't ever use a null comment? It's those kinds of things that make me wonder if I am unit testing the wrong code. Also, tons of classes are throwaway when the project ends. It's the borderline-throwaway or not-very-useful-alone code that bothers me.
Here's what I'm hearing, whether you meant it this way or not: a whole litany of issues and excuses why unit testing might not be applicable to your code. In other words: "I don't see what I'll be getting out of unit tests, and they're a lot of bother to write; maybe they're not for me?"
You know what? You may be right. Unit tests are not a panacea. There are huge, wide swaths of testing that unit testing can't cover.
I think, though, that you're misestimating the cost of maintenance, and what things can break in your code. So here are my thoughts:
Should I test small classes? Yes, if there are things in that class that can possibly break.
Should I test functions? Yes, if there are things in this function that can possibly break. Why wouldn't you? Or is your concern over whether it's considered a unit or not? That's just quibbling over names, and shouldn't have any bearing on whether you should write unit tests for it! But it's common in my experience to see a method or function described as a unit under test.
Should I unit test a class if it's used by two other classes? Yes, if there's anything that can possibly break in that class. Should I test it separately? The advantage of doing so is to be able to isolate breakages straight down to the shared class, instead of hunting through the using classes to see if it was they that broke or one of their dependencies.
Should I test data output from my class if another program will read it? Hell yes, especially if that other program is a 3rd-party one! This is a great application of unit tests (or perhaps system tests, depending on the isolation involved in the test): to prove to yourself that the data you output is precisely what you think you should have output. I think you'll find that has the power to simplify support calls immeasurably. (Though please note it's not a substitute for good acceptance testing on that customer's end.)
Should I test throwaway code? Possibly. Will pursuing a TDD strategy get your throwaway code out the door faster? It might. Will having solid unit-tested chunks that you can adapt to new constraints reduce the need to throw code away? Perhaps.
Should I test code that's constantly changing? Yes. Just make sure all applicable tests are brought up to date and pass! Constantly changing code can be particularly susceptible to errors, after all, and enabling safe change is another of unit testing's great benefits. Plus, it probably puts a burden on your invariant code to be as robust as possible, to enable this velocity of change. And you know how you can convince yourself whether a piece of code is robust...
Should I test features that are no longer needed? No, you can remove the test, and probably the code as well (testing to ensure you didn't break anything in the process, of course!). Don't leave unit test rot around, especially if the test no longer works or runs, or people in your org will move away from unit tests and you'll lose the benefit. I've seen this happen. It's not pretty.
Should I test code that doesn't get used by my project, even if it was written in the context of my project? Depends on what the deliverable of your project is, and what the priorities of your project are. But are you sure nobody outside of your project will use it? If they won't, and you aren't, perhaps it's just dead code, in which case see above. From my point of view, I wouldn't feel I'd done a complete job with a class if my testing didn't cover all its important functionality, whether the project used all that functionality or not. I like classes that feel complete, but I keep an eye towards not overengineering a bunch of stuff I don't need. If I put something in a class, then, I intend for it to be used, and will therefore want to make sure it works. It's an issue of personal quality and satisfaction to me.
Don't get fixated on counting lines of code. Write as much test code as you need to convince yourself that every key piece of functionality is being thoroughly tested. As an extreme example, the SQLite project has a tests:source-code ratio of more than 600:1. I use the term "extreme" in a good sense here; the ludicrous amount of testing that goes on is possibly the predominant reason that SQLite has taken over the world.
How can you do all those calculations? Ideally you should never be in a situation where you count the lines of your completed class and then start writing the unit test from scratch. Those two types of code (real code and test code) should be developed and evolved together, and the only LOC metric that should really worry you in the end is zero lines of test code.
Relative LOC counts for code and tests are pointless. What matters more is test coverage. What matters most is finding the bugs.
When I'm writing unit tests, I tend to focus my efforts on testing complicated code that is more likely to contain bugs. Simple stuff (e.g. simple getter and setter methods) is unlikely to contain bugs, and can be tested indirectly by higher-level unit tests.
Some time ago, I had the same question you have posted in mind. I studied a lot of articles, tutorials, books and so on. Although these resources gave me a good starting point, I was still insecure about how to apply unit testing efficiently. After coming across xUnit Test Patterns: Refactoring Test Code and leaving it on my shelf for about a year (you know, we have a lot of stuff to study), it gave me what I needed to apply unit testing efficiently. With a lot of useful patterns (and advice), you will see how you can become a unit-testing coder. Topics such as:
Test strategy patterns
Basic patterns
Fixture setup patterns
Result verification patterns
Test double patterns
Test organization patterns
Database patterns
Value patterns
And so on...
I will show you, for instance, the Derived Value pattern:
A derived input is often employed when we need to test a method that takes a complex object as an argument. For example, thorough input-validation testing requires we exercise the method with each of the attributes of the object set to one or more possible invalid values. Because the first rejected value could cause termination of the method, we must verify each bad attribute in a separate call. We can instantiate the invalid object easily by first creating a valid object and then replacing one of its attributes with an invalid value.
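A hedged sketch of that idea in Java/JUnit (Customer, CustomerValidator and ValidationException are invented names): start from one known-good object and derive each invalid input by breaking exactly one attribute per test.

import org.junit.Test;

public class CustomerValidationTest {

    // One known-good object that every test starts from.
    private Customer validCustomer() {
        Customer c = new Customer();
        c.setName("Alice");
        c.setEmail("alice@example.com");
        c.setAge(30);
        return c;
    }

    @Test(expected = ValidationException.class)
    public void rejectsMissingName() {
        Customer c = validCustomer();
        c.setName(null); // derive the invalid input from the valid one
        new CustomerValidator().validate(c);
    }

    @Test(expected = ValidationException.class)
    public void rejectsNegativeAge() {
        Customer c = validCustomer();
        c.setAge(-1); // each bad attribute gets its own call, since the first rejection ends the method
        new CustomerValidator().validate(c);
    }
}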
A test organization pattern related to your question is Testcase Class per Feature:
As the number of test methods grows, we need to decide on which Testcase class to put each test method... Using a Testcase class per feature gives us a systematic way to break up a large Testcase class into several smaller ones without having to change our test methods.
My advice: read it carefully (source: xunitpatterns.com).
You seem to be concerned that there could be more test-code than the code-under-test.
I think the ratios could well be higher than you say. I would expect any serious test to exercise a wide range of inputs, so your 20-line class might well have 200 lines of test code.
I do not see that as a problem. The interesting thing for me is that writing tests doesn't seem to slow me down. Rather it makes me focus on the code as I write it.
So, yes test everything. Try not to think of testing as a chore.
I am part of a team that have just started adding test code to our existing, and rather old, code base.
I use 'test' here because I feel it can be very vague as to whether it is a unit test, a system test, an integration test, or whatever. The differences between the terms have large grey areas and don't add a lot of value.
Because we live in the real world, we don't have time to add test code for all of the existing functionality. We still have Dave the test guy, who finds most bugs. Instead, as we develop we write tests. You know how you run your code before you tell your boss that it works? Well, use a unit framework (we use Junit) to do those runs. And just keep them all, rather than deleting them. Whatever you normally do to convince yourself that it works. Do that.
If it is easy to write the test code, do it. If not, leave it to Dave until you think of a good way to automate it, or until you get that spare time between projects when 'they' are trying to decide what to put into the next release.
For Java you can use JUnit.
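A minimal JUnit 4 test looks roughly like this (Calculator and add() are hypothetical names used only for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void addsTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}

What follows is the overview from the JUnit homepage: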
JUnit
JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks.
Getting Started
To get started with unit testing and JUnit read the article: JUnit Cookbook.
This article describes basic test writing using JUnit 4.
You find additional samples in the org.junit.samples package:
* SimpleTest.java - some simple test cases
* VectorTest.java - test cases for java.util.Vector
JUnit 4.x only comes with a textual TestRunner. For graphical feedback, most major IDEs support JUnit 4. If necessary, you can run JUnit 4 tests in a JUnit 3 environment by adding the following method to each test class:
public static Test suite() {
    return new JUnit4TestAdapter(ThisClass.class);
}
Documentation
* JUnit Cookbook - a cookbook for implementing tests with JUnit.
* Javadoc - API documentation generated with javadoc.
* Frequently asked questions - some frequently asked questions about using JUnit.
* Release notes - the latest JUnit release notes.
* License - the terms of the Common Public License used for JUnit.
The following documents still describe JUnit 3.8:
* The JUnit 3.8 version of this homepage
* Test Infected - Programmers Love Writing Tests - an article demonstrating the development process with JUnit.
* JUnit - A Cook's Tour
JUnit Related Projects/Sites
* junit.org - a site for software developers using JUnit. It provides instructions for how to integrate JUnit with development tools like JBuilder and VisualAge/Java, as well as articles about and extensions to JUnit.
* XProgramming.com - various implementations of the xUnit testing framework architecture.
Mailing Lists
There are three JUnit mailing lists:
* JUnit announce: junit-announce@lists.sourceforge.net
* JUnit users: junit@yahoogroups.com
* JUnit developers: junit-devel@lists.sourceforge.net
Get Involved
JUnit celebrates programmers testing their own software. As a result bugs, patches, and feature requests which include JUnit TestCases have a better chance of being addressed than those without.
JUnit source code is now hosted on GitHub.
One possibility is to reduce the 'test code' to a language that describes your tests, and an interpreter to run the tests. Teams I have been a part of have used this to wonderful ends, allowing us to write significantly more tests than the "lines of code" would have indicated.
This allowed our tests to be written much more quickly and greatly increased the test legibility.
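A hedged sketch of the idea in Java (Calculator is an invented name): the "test language" here is just a table of "a + b = expected" strings, and a single JUnit test acts as the interpreter that runs every case, so adding a test case means adding one line of data rather than a new method.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTableTest {

    // The "language" that describes the tests.
    private static final String[] CASES = {
        "2 + 3 = 5",
        "0 + 0 = 0",
        "-4 + 9 = 5",
    };

    // The interpreter that runs them.
    @Test
    public void runsAllTableCases() {
        Calculator calculator = new Calculator();
        for (String line : CASES) {
            String[] parts = line.split("[+=]");
            int a = Integer.parseInt(parts[0].trim());
            int b = Integer.parseInt(parts[1].trim());
            int expected = Integer.parseInt(parts[2].trim());
            assertEquals(line, expected, calculator.add(a, b));
        }
    }
}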
I am going to answer what I believe are the main points of your question. First, how much test-code should you write? Well, Test-Driven Development can be of some help here. I do not use it as strictly as it is proposed in theory, but I find that writing a test first often helps me to understand the problem I want to solve much better. Also, it will usually lead to good test-coverage.
Secondly, which classes should you test? Again, TDD (or more precisely some of the principles behind it) can be of help. If you develop your system top down and write your tests first, you will have tests for the outer class when writing the inner class. These tests should fail if the inner class has bugs.
TDD is also tightly coupled with the idea of Design for Testability.
My answer is not intended to solve all your problems, but to give you some ideas.
I think it's impossible to write a comprehensive guide of exactly what you should and shouldn't unit test. There are simply too many permutations and types of objects, classes, and functions, to be able to cover them all.
I suggest applying personal responsibility to the testing, and determining the answer yourself. It's your code, and you're responsible for it working. If it breaks, you have to pay the consequences of fixing the code, repairing the data, taking responsibility for the lost revenue, and apologizing to the people whose application broke while they were trying to use it. Bottom line - your code should never break. So what do you have to do to ensure this?
Sometimes unit testing can work well to help you test out all of the specific methods in a library. Sometimes unit testing is just busy-work, because you can tell the code is working based on your use of the code during higher-level testing. You're the developer, you're responsible for making sure the code never breaks - what do you think is the best way to achieve that?
If you think unit testing is a waste of time in a specific circumstance - it probably is. If you've tested the code in all of the application use-case scenarios and they all work, the code is probably good.
If anything is happening in the code that you don't understand - even if the end result is acceptable - then you need to do some more testing to make sure there's nothing you don't understand.
To me, this seems like common sense.
Unit testing is mostly for testing your units from the point of view of functionality: you can check whether, for a specific input, you receive the expected value or the right exception is thrown.
Unit tests are very useful, and I recommend you write them. However, not everything needs to be tested; for example, you don't need to test simple getters and setters.
If you want to write your unit tests in Java via Eclipse, please look at "How To Write Java Unit Tests". I hope it helps.
We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations i.e. Framework calls Initialize so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
Test at the right level to what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
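For the Arrange, Act, Assert tip above, a minimal hedged sketch (ShoppingCart and its methods are assumed names, not from the original answer):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {

    @Test
    public void totalReflectsAddedItem() {
        // Arrange: build the object under test and its inputs
        ShoppingCart cart = new ShoppingCart();

        // Act: perform exactly one operation
        cart.addItem("book", 12.50);

        // Assert: one logical assertion about the outcome
        assertEquals(12.50, cart.total(), 0.001);
    }
}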
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets. Stop at mocking these interactions. Verify you talk to your collaborators correctly, not that the proper result from this method call was "42".
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework are propagated through.
It is worth it, stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
* Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.
* Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing; that should be in a separate test). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
* Just as with normal code, use factories and builders in your unit tests (see the builder sketch after this list). I learned that one when about 40 tests needed a constructor call updated after an API change...
* Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. no state to verify against).
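As promised in the builders bullet above, a hedged Test Data Builder sketch (Order and its constructor are invented names): tests go through the builder, so a constructor change is absorbed in one place instead of rippling through forty tests.

import java.util.Date;

// Hypothetical builder with sensible defaults; tests override only what they care about.
class OrderBuilder {
    private String customer = "default-customer";
    private Date date = new Date();
    private double amount = 10.0;

    OrderBuilder withCustomer(String customer) { this.customer = customer; return this; }
    OrderBuilder withDate(Date date) { this.date = date; return this; }
    OrderBuilder withAmount(double amount) { this.amount = amount; return this; }

    Order build() {
        return new Order(customer, date, amount); // the only line that knows the constructor
    }
}

// Usage in a test: Order freeOrder = new OrderBuilder().withAmount(0.0).build();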
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist you stick with TDD. Review your unit testing framework, do a root cause analysis (RCA) with your team, and identify the problem area.
Fix the unit testing code at the suite level, and do not change your code frequently, especially function names or other modules.
I would appreciate it if you could share your case study; then we could dig into the problem area further.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of unit tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bringing to the surface any breaks in behavior due to changes in the framework, the immediate code, or other external sources. This helps you determine whether the behavior genuinely changed and the unit tests need to be modified accordingly, or whether a bug was introduced, causing the unit test to fail and needing to be corrected.
Don't give up; while it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
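A hedged sketch of "asking for data dependencies" (all names invented; the answer mentions EasyMock, but Mockito is used here purely for illustration): the repository is passed in through the constructor, so the test can hand over a mock instead of hitting static data-access code.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class ReportServiceTest {

    // Hypothetical data dependency, expressed as an interface so it can be mocked.
    interface OrderRepository {
        List<Double> amountsFor(String customer);
    }

    static class ReportService {
        private final OrderRepository repository;

        ReportService(OrderRepository repository) { // the dependency is asked for, not looked up statically
            this.repository = repository;
        }

        double totalFor(String customer) {
            return repository.amountsFor(customer).stream().mapToDouble(Double::doubleValue).sum();
        }
    }

    @Test
    public void sumsAmountsFromTheRepository() {
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.amountsFor("alice")).thenReturn(Arrays.asList(10.0, 2.5));

        assertEquals(12.5, new ReportService(repository).totalFor("alice"), 0.001);
    }
}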
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with a price, of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create less dependencies etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too black-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify a new item was actually stored, or do you manage to get a handle to the actual storage to retrieve the item directly where it is stored? The latter makes for brittle tests: when you change the implementation, you're breaking the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework you'd better off trying to change the tests first, and then the framework.
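To illustrate the point about testing against the interface, a small hedged sketch; a plain java.util.Deque stands in for the hypothetical container, since any implementation swap behind the interface would leave this test untouched.

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.Test;

public class ContainerInterfaceTest {

    @Test
    public void storedItemComesBackThroughThePublicApi() {
        Deque<String> container = new ArrayDeque<>();

        container.push("item");

        // Verify via the interface (peek/size), not by reaching into the backing array.
        assertEquals("item", container.peek());
        assertEquals(1, container.size());
    }
}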
I would suggest investing in a test automation tool. If you are using continuous integration you can make it work in tandem. There are tools out there which will scan your codebase, generate tests for you, and then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests and yes, I have to change them if the codebase changes.
There is a fine line: with an automation tool you would definitely have better code coverage.
However, with well-written, developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
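A hedged before/after sketch of that extract-method move (all names invented for illustration):

class Registration {

    // Before: validation buried inside a longer method, hard to exercise on its own.
    void register(String email, String name) {
        if (email == null || !email.contains("@")) {
            throw new IllegalArgumentException("bad email");
        }
        // ... many more lines: persistence, notifications, logging ...
    }

    // After: the one-thing block is extracted, dependency-free and easy to unit test directly.
    boolean isValidEmail(String email) {
        return email != null && email.contains("@");
    }

    void registerExtracted(String email, String name) {
        if (!isValidEmail(email)) {
            throw new IllegalArgumentException("bad email");
        }
        // ... rest unchanged ...
    }
}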
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
Sometimes designing the TDD tests prompts questions about the design of the application itself. Check whether your classes have been well designed and your methods are only performing one thing at a time... With good design it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that increases inertia against refactoring; it's painful to discard that additional code, and so on.
Having just read the first four chapters of Refactoring: Improving the Design of Existing Code, I embarked on my first refactoring and almost immediately came to a roadblock. It stems from the requirement that before you begin refactoring, you should put unit tests around the legacy code. That allows you to be sure your refactoring didn't change what the original code did (only how it did it).
So my first question is this: how do I unit-test a method in legacy code? How can I put a unit test around a 500 line (if I'm lucky) method that doesn't do just one task? It seems to me that I would have to refactor my legacy code just to make it unit-testable.
Does anyone have any experience refactoring using unit tests? And, if so, do you have any practical examples you can share with me?
My second question is somewhat hard to explain. Here's an example: I want to refactor a legacy method that populates an object from a database record. Wouldn't I have to write a unit test that compares an object retrieved using the old method, with an object retrieved using my refactored method? Otherwise, how would I know that my refactored method produces the same results as the old method? If that is true, then how long do I leave the old deprecated method in the source code? Do I just whack it after I test a few different records? Or, do I need to keep it around for a while in case I encounter a bug in my refactored code?
Lastly, since a couple people have asked...the legacy code was originally written in VB6 and then ported to VB.NET with minimal architecture changes.
For instructions on how to refactor legacy code, you might want to read the book Working Effectively with Legacy Code. There's also a short PDF version available here.
Good example of theory meeting reality. Unit tests are meant to test a single operation, and many pattern purists insist on Single Responsibility, so we have lovely clean code and tests to go with it. However, in the real (messy) world, code (especially legacy code) does lots of things and has no tests. What this needs is a dose of refactoring to clean the mess.
My approach is to build tests, using the unit test tools, that test lots of things in a single test. In one test, I may be checking the DB connection is open, changing lots of data, and doing a before/after check on the DB. I inevitably find myself writing helper classes to do the checking, and more often than not those helpers can then be added into the code base, as they have encapsulated emergent behaviour/logic/requirements. I don't mean I have a single huge test; what I do mean is that many tests are doing work which a purist would call an integration test (does such a thing still exist?). Also, I've found it useful to create a test template and then create many tests from that, to check boundary conditions, complex processing and so on.
BTW which language environment are we talking about? Some languages lend themselves to refactoring better than others.
From my experience, I'd write tests not for particular methods in the legacy code, but for the overall functionality it provides. These might or might not map closely to existing methods.
Write tests at whatever level of the system you can (if you can); if that means running a database and so on, then so be it. You will need to write a lot more code to assert what the code is currently doing, as a 500+ line method is likely to have a lot of behaviour wrapped up in it. As for comparing the old versus the new: if you write the tests against the old code, they pass, and they cover everything it does, then when you run them against the new code you are effectively checking the old against the new.
I did this to test a complex SQL trigger I wanted to refactor. It was a pain and took time, but a month later, when we found another issue in that area, it was worth having the tests there to rely on.
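To make the "test the old code, then run the same tests against the new code" approach concrete, here is a hedged characterization-test sketch; LegacyPricer and priceFor are invented names, and the expected values are simply whatever the old code actually returned when run once.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LegacyPricerCharacterizationTest {

    @Test
    public void recordsCurrentBehaviourForKnownInputs() {
        LegacyPricer pricer = new LegacyPricer();

        // Expected values were captured by running the *old* code once and copying its output.
        // The same test then runs unchanged against the refactored implementation.
        assertEquals(42.00, pricer.priceFor("STANDARD", 3), 0.001);
        assertEquals(0.00, pricer.priceFor("STANDARD", 0), 0.001);
        assertEquals(99.95, pricer.priceFor("PREMIUM", 1), 0.001);
    }
}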
In my experience this is the reality when working on legacy code. The book mentioned by Esko (Working Effectively with Legacy Code) is an excellent work describing various approaches that can take you there.
I have seen similar issues with our unit tests themselves, which had grown to become system/functional tests. The most important thing when developing tests for legacy or existing code is to define the term "unit". It can even be a functional unit like "reading from the database", etc. Identify the key functional units and maintain the tests that add value.
As an aside, there was a recent exchange between Joel S. and Martin F. on TDD/unit tests. My take is that it is important to define the unit and keep focused on it! (Links: the open letter, Joel's transcript, and the podcast.)
That really is one of the key problems of trying to retrofit tests onto legacy code. Are you able to break the problem domain down into something more granular? Does that 500+ line method do anything other than make system calls to JDK/Win32/.NET Framework JARs/DLLs/assemblies? That is, are there more granular function calls within that 500+ line behemoth that you could unit test?
The following book: The Art of Unit Testing contains a couple of chapters with some interesting ideas on how to deal with legacy code in terms of developing Unit Tests.
I found it quite helpful.