This has been driving me nuts lately...
What is refactoring?
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
And how do we make sure we don't break anything during refactoring?
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests are used to demonstrate that the behavior of the module is correct before the refactoring.
Okay fine. But how do I proceed if I find a code smell in the unit tests themselves? Say, a test method that does too much? How do I make sure I don't break anything while refactoring the unit tests?
Do I need some kind of meta-tests? Is it unit tests all the way down?
Or do unit tests simply not obey the normal rules of refactoring?
In my experience, there are two reasons to trust tests:
Review
You've seen it fail
Both of these are activities that happen when a test is written. If you keep tests immutable, you can keep trusting them.
Every time you modify a test, it becomes less trustworthy.
You can somewhat alleviate that problem by repeating the above process: review the changes to the tests, and temporarily change the System Under Test (SUT) so that you can see the tests fail as expected.
When modifying tests, keep the SUT unchanged. Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
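To make the "see it fail" part concrete, here is a minimal NUnit-style sketch; the Calculator class and its Add method are hypothetical stand-ins, not anything from the answers here:

[Test]
public void Add_ReturnsSumOfOperands()
{
    var calculator = new Calculator();

    int result = calculator.Add(2, 3);

    Assert.AreEqual(5, result);
}

// To regain trust after editing this test, temporarily sabotage the SUT
// (e.g. make Add return a + b + 1), run the suite, confirm this test fails
// for the expected reason, and then revert the sabotage.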
Although this is an older post, it was referenced in a comment on my post about TDD in practice, so upon review I'd like to throw in my two cents.
Mainly because I feel the accepted answer makes the slippery statement:
Every time you modify a test, it becomes less trustworthy.
I take issue with the word modify. When it comes to refactoring, words like change and modify are often avoided because they carry implications counter to refactoring.
If you modify a test in the traditional sense, there is a risk that you introduced a change that made the test less trustworthy.
However, if you modify a test in the refactoring sense, then the test should be no less trustworthy.
This brings me back to the original question:
How do I refactor unit tests?
Quite simply, the same as you would any other code - in isolation.
So, if you want to refactor your tests, don't change the code, just change your tests.
Do I need tests for my tests?
No. In fact, Kent Beck addresses this exact question in his Full Stack Radio interview, saying:
Your code is the test for your tests
Mark Seemann also notes this in his answer:
Tests and production code keep each other in check, so varying one while keeping the other locked is safest.
In the end, this is not so much about how to refactor tests as much as it is refactoring in general. The same principles apply, namely refactoring restructures code without changing its external behavior. If you don't change the external behavior, then no trust is lost.
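As a hedged sketch of refactoring a test in isolation (the Order type and its members are invented for illustration): the production code is untouched, the assertion stays the same, and only the test's structure changes by pulling duplicated arrange code into a helper.

// Before: arrange code duplicated across many tests
[Test]
public void Total_IncludesTax()
{
    var order = new Order();
    order.AddItem("book", 10.0m, quantity: 2);
    order.TaxRate = 0.25m;

    Assert.AreEqual(25.0m, order.Total());
}

// After: same SUT, same assertion, setup extracted into a builder-style helper
[Test]
public void Total_IncludesTax_Refactored()
{
    var order = CreateOrderWithTwoBooksAndTax();

    Assert.AreEqual(25.0m, order.Total());
}

private static Order CreateOrderWithTwoBooksAndTax()
{
    var order = new Order();
    order.AddItem("book", 10.0m, quantity: 2);
    order.TaxRate = 0.25m;
    return order;
}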
How do I make sure I don't break anything while refactoring the unit tests?
Keep the old tests as a reference.
To elaborate: unit tests with good coverage are worth their weight in results. You don't keep them for amazing program structure or lack of duplication; they're essentially a dataset of useful input/output pairs.
So when "refactoring" tests, it only really matters that the program tested with the new set shows the same behaviour. Every difference should be carefully, manually inspected, because new program bugs might have been found.
You might also accidentally reduce the coverage when refactoring. That's harder to find, and requires specialized coverage analysis tools.
You don't know that you won't break anything. To avoid the problem of 'who will test our tests?', you should keep tests as simple as possible to reduce the possibility of making an error.
When you refactor tests, you can always use automated refactorings or other 'trusted' techniques like method extraction.
You also typically use existing testing frameworks, which are tested by their creators. So when you start to build your own framework (even a simple one), complex helper methods, etc., you can always write tests for those as well.
Just wondering about the pros and cons of TDD/automated unit testing, and looking for the community's view on whether it's acceptable for professional developers to write applications without supporting unit tests.
Re-asked on Programmers: https://softwareengineering.stackexchange.com/questions/159572/is-it-acceptable-as-a-professional-developer-not-to-write-unit-tests
I bet I'll get -1'ed for this, but I still say: if you have other measures to ensure quality - avoiding regressions, program validation, program verification - then no.
The only problem is usually that people don't have any other tools than unit testing to achieve this.
If you have formally tested models (there's a tool that actually tests them, or they were constructed in a way that ensures they're valid), and you have formally tested ways to ensure that the actually running software conforms to that model, then it's fine.
Example: if you are sure that the code you wrote in Ruby will behave as you expect (because you or someone else tested the Ruby interpreter and it doesn't have bugs, or you use only a subset of features known to be safe), then it's fine. Usually, we trust C compilers and CPUs in this manner.
Also, if a program is only to be used once, there's no regression problem! If I write a one-liner in bash that calculates something for me, I might test it manually on fake data first, then run it on the real data - no need to write an automated test.
If you're willing to take the blame, you can also go along with assumptions: I usually assume that Eclipse is pretty good at creating setters and getters, and I don't test those. I also assume that if there were any problem with the Collection classes in Java 7, it would have turned up by now. But if there is trouble, it's your personal trouble. Don't blame anyone.
Personally, I rarely use unit testing on certain code, as I formally test it while it's still a flowchart on a piece of paper, and I ensure that I only use subsets of the language/libraries that are known to work in such situations. Also, I never let code out without peer review. Still, it's sometimes better if there's someone who runs an acceptance test on it...
It is up to you. The question is more philosophical in nature.
Unit tests are just a tool to help you. You can choose to ignore them. However, if you are going to work on a more-than-trivial project, I would advise you to use unit tests.
Yes, they take time to write, too. But in the end you will save a lot if any refactoring is done or some parts of the code need to be changed.
As always: It depends.
Generally speaking, unit tests are a good thing: they catch a whole class of errors, they verify that particular parts of your code work as expected under given circumstances, and they make it easier to track down errors when something does go wrong. So unless you have good reasons not to, you should write unit tests.
Good reasons not to write unit tests include:
Making relatively small changes to a codebase that is structured badly and hardly testable because of this (usually, the reason is that there is little separation of concerns, and testable units cannot be isolated for testing without intrusive changes to the codebase itself).
The nature of the problem domain makes the code inherently untestable. This is rare, but it happens - for example, it is very hard to come up with meaningful unit tests for a routine that draws a GUI: you'd have to make it render to a mocked surface and then check individual pixels, but you'd also have to mock all the parameters that influence layout decisions, etc.; while this is theoretically possible, it's not usually worth the effort, and one should opt for manual or semi-automatic testing in such cases.
The project is a tiny throwaway program with such a small scope and such a short lifecycle that the benefit gained from unit testing (increased maintainability, decreased complexity) is marginal. Keep in mind, however, that software tends to live longer than it was designed for, and your one-off throwaway script might very well end up becoming a mission-critical part of the company's processes.
I'm on a project where we don't do TDD, because our bosses and the client are very "old-school" people. Since I can't do design through TDD but I'm afraid of changes, I would like to write unit tests for my own safety. But what would those unit tests look like? Do I have to write a test for each method's specification, to verify that it does what it's supposed to do? Do I have to write a test for each new piece of functionality, like TDD but without the design part? My mind is a mess.
Thanks in advance.
You probably can't hurt anything by doing unit tests - regardless of how well they're done - except for one possible side-effect, and that is false confidence.
We tend not to do hardcore TDD, nevertheless the unit test coverage ranges from non-existent to moderate depending on project, and is becoming increasingly valuable as the idea settles in.
For general pointers, I'd say the following are key priorities for you right now:
Test what you know to be important
Test what you know to be fragile
Write tests to expose any new bugs, then solve the bug by making modifications that pass the test
Apply TDD where possible to any new features
Acknowledge that you can't TDD an existing project. By its nature, TDD only applies to new ground, whether that's a new product, or a new feature for a legacy product. Don't let this fact dishearten you.
Yes. TDD is just another software development technique that happens to utilize unit testing heavily. Unit testing as a practice is fine on its own without TDD behind it. Not to mention, sometimes it's not even possible to do TDD, yet you still write tests (think legacy systems and testing existing untested code).
But as for what you should test: depending on how deep you want (or can) go with testing, you can start with end-user-oriented functionality, move through system component testing (i.e. class contracts), and go all the way down to simply assuring your code does what you claim it does - the final, most fine-grained unit tests, of which you'll most likely have a lot.
In general, what to test is not an easy question. I've happened to answer several variants of it already; to give you a few tips:
Test what your code does, not what it does not.
If you have a certain requirement, have a test covering it.
Test a single feature at a time.
Focus on the public contract, skip the private bits.
Also, reading a few of the top-voted unit testing questions might give you some ideas on why you will benefit from testing, regardless of whether you use TDD or not.
When you say "we don't do TDD", do you mean that others don't practise TDD or that your bosses forbid you from practising TDD? If it's the first one, then you can practise TDD as much as you want, as long as you don't try to force other people to do it. If it's the other one, then tell your bosses that they pay you to write code the best way you know how, and TDD is part of how you do that.
You can certainly write tests without practising TDD. People do it all the time. Use the old saying, "Test until fear turns to boredom". Write tests for whatever you fear might not work correctly.
I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.
This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.
I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is just a feeling, and as said earlier, I have no experience of testing. If some more experienced testers out there could give me advices on how to write great tests for an existing app, that would be greatly appreciated.
Edit: I would love to thank Stack Overflow; I got great input in less than 15 minutes that answered more than the hours of online reading I just did.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
I think you are doing it wrong.
A unit test should:
test one method
provide some specific arguments to that method
test that the result is as expected
It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.
If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:
[Test]
public void TestAdd()
{
    // Arrange: hard-coded inputs and expected result
    int x = 5;
    int y = -2;
    int expectedResult = 3;
    Calculator calculator = new Calculator();

    // Act
    int actualResult = calculator.Add(x, y);

    // Assert: only the result is checked, not how it was computed
    Assert.AreEqual(expectedResult, actualResult);
}
Note that how the result is calculated is not checked - only that the result is correct. Keep adding more and more simple test cases like the above until you have have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.
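For the earlier point about verifying interface-guaranteed calls with a mocking framework, here is a small sketch using Moq; the IEmailSender interface and OrderProcessor class are assumptions made for illustration, not part of the answer above:

[Test]
public void Checkout_SendsConfirmationEmail()
{
    // The interface contract says a confirmation email must be sent,
    // so verifying the call is part of the observable behavior.
    var emailSender = new Mock<IEmailSender>();
    var processor = new OrderProcessor(emailSender.Object);

    processor.Checkout(orderId: 42);

    emailSender.Verify(s => s.SendConfirmation(42), Times.Once());
}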
For unit testing, I found both Test Driven (tests first, code second) and code first, test second to be extremely useful.
Instead of just writing code and then writing tests, write the code and then look at what you THINK the code should be doing. Think about all its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Thinking about the intentions, you also end up finding corner cases in the test-writing phase. And of course, while writing tests you might find that one of the few uses causes a bug (something I often find, and I am very glad when that bug did not corrupt data and go unnoticed).
Yet testing is almost like coding twice. In fact I had applications where there was more test code (quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing always worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up having such a good test suite for this machine that I was confident that it would not break even after making changes, and the tests saved my ass a few times. And as users or testers were finding bugs with the flow or corner cases unaccounted for, guess what, added to tests and never happened again. This really gave users confidence in my work in addition to making the whole thing super stable. And when it had to be re-written for performance reasons, guess what, it worked as expected on all inputs thanks to the tests.
All the simple examples like function square(number) are great and all, but they are probably bad candidates to spend lots of time testing. The functions that do important business logic - that's where testing is important. Test the requirements. Don't just test the plumbing. If the requirements change, then guess what: the tests must change too.
Testing should not be literally testing that function foo invoked function bar 3 times. That is wrong. Check if the result and side-effects are correct, not the inner mechanics.
It's worth noting that retro-fitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications... how to unit test? This has been asked many times before (so your question may be closed as a dupe), and people usually end up here:
Moving existing code to Test Driven Development
I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.
Don't write tests to get full coverage of your code. Write tests that guarantee your requirements. You may discover codepaths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).
Keep your tests small: one test per requirement.
Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.
Unit testing is about the output you get from a function/method/application.
It does not matter at all how the result is produced, it just matters that it is correct.
Therefore, your approach of counting calls to inner methods and such is wrong.
What I tend to do is sit down and write what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.
Try writing a Unit Test before writing the method it is going to test.
That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.
You should always be testing the results of the method, not how the method gets those results.
Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, then it shouldn't matter what is inside the method. The fact is that you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do then is to write an integration test - you load up a fully instantiated instance of the service under test and have it do its thing as it would when running in your app. Then you can treat it as a black box.
When I'm writing tests for a method, I have the feeling of rewriting a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing all code paths, expecting some inner methods to be called a number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the final behavior of the method did not change.
This is because you are writing your tests after you wrote your code. If you did it the other way around (wrote the tests first), it wouldn't feel this way.
We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations, i.e. the Framework calls Initialize, so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags (see the sketch at the end of this section).
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
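A rough sketch of the polymorphism-over-flags point above; the shipping example and all names are invented for illustration:

// Instead of one method full of ifs driven by flags...
// if (isExpress) ... else if (isInternational) ... else ...
// ...encapsulate what varies behind an interface:
public interface IShippingPolicy
{
    decimal CostFor(decimal weightKg);
}

public class StandardShipping : IShippingPolicy
{
    public decimal CostFor(decimal weightKg) => 5m + 0.5m * weightKg;
}

public class ExpressShipping : IShippingPolicy
{
    public decimal CostFor(decimal weightKg) => 15m + 1.0m * weightKg;
}

// Each policy is now a small, isolated unit with its own focused tests,
// and adding a new policy doesn't touch (or break) the existing ones.
[Test]
public void ExpressShipping_ChargesBaseRatePlusPerKilo()
{
    IShippingPolicy policy = new ExpressShipping();

    Assert.AreEqual(17.0m, policy.CostFor(2m));
}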
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
Test at the right level for what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets. Stop at mocking these interactions. Verify you talk to your collaborators correctly, not that the proper result from this method call was "42".
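A hedged sketch of the state-based vs. interaction-based distinction, using NUnit and Moq; the cart, checkout service, and payment gateway types are invented examples:

// State-based: assert on observable state, no test doubles needed.
[Test]
public void AddingTwoItems_UpdatesTotal()
{
    var cart = new ShoppingCart();
    cart.Add("apple", 1.50m);
    cart.Add("bread", 2.00m);

    Assert.AreEqual(3.50m, cart.Total);
}

// Interaction-based: verify the collaboration at a boundary
// (no real payment service or network involved).
[Test]
public void Checkout_ChargesThePaymentGateway()
{
    var gateway = new Mock<IPaymentGateway>();
    var checkout = new CheckoutService(gateway.Object);

    checkout.Complete(amount: 3.50m);

    gateway.Verify(g => g.Charge(3.50m), Times.Once());
}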
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing OO Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework are propagated through.
It is worth it; stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all), then it is better insulated from change.
Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing - that should be in a separate test).
I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change...
Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. there's no state to verify against).
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
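Here is a rough sketch of the factories/builders point above; the Customer type and its constructor are hypothetical. When the constructor changes, only the builder needs updating rather than dozens of tests:

// One place knows how to construct a valid Customer for tests.
public class CustomerBuilder
{
    private string _name = "Test Customer";
    private string _country = "NL";

    public CustomerBuilder Named(string name) { _name = name; return this; }
    public CustomerBuilder From(string country) { _country = country; return this; }

    public Customer Build() => new Customer(_name, _country);
}

[Test]
public void Discount_IsZeroForNewCustomer()
{
    var customer = new CustomerBuilder().From("DE").Build();

    Assert.AreEqual(0m, customer.Discount);
}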
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist that you stick with TDD. Check your unit testing framework, do a root cause analysis (RCA) with your team, and identify the problem area.
Fix the unit testing code at the suite level, and do not change your code frequently, especially function names or other modules.
It would help if you could share your case study; then we could dig into the problem area more.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily-written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time to improve the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of unit tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bringing to the surface any breaks in behavior due to changes in the framework, the immediate code, or other external sources. This is supposed to help you determine whether the behavior changed and the unit tests need to be modified accordingly, or whether a bug was introduced, causing the unit test to fail, and needs to be corrected.
Don't give up; while it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with price of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create less dependencies etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too white-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify that a new item was actually stored, or do you manage to get a handle to the actual storage to retrieve the item directly from where it is stored? The latter makes brittle tests: when you change the implementation, you break the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off trying to change the tests first, and then the framework.
I would suggest investing in a test automation tool. If you are using continuous integration, you can make it work in tandem. There are tools out there that will scan your codebase, generate tests for you, and then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests, and yes, I have to change them if the codebase changes.
It's a fine line: with an automation tool you would definitely have better code coverage.
However, with well-written developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
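As a hedged illustration of that extract-method suggestion (the discount example is invented): pull the dependency-free calculation out of a larger method and write tests against the extracted piece.

// Extracted from a larger, harder-to-test method: pure input -> output,
// no framework, database, or UI dependencies.
public static decimal ApplyDiscount(decimal subtotal, int loyaltyYears)
{
    decimal rate = loyaltyYears >= 5 ? 0.10m : 0.00m;
    return subtotal - subtotal * rate;
}

[Test]
public void ApplyDiscount_GivesTenPercentAfterFiveYears()
{
    Assert.AreEqual(90m, ApplyDiscount(100m, loyaltyYears: 5));
}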
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
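A minimal sketch of the IoC/DI point (all names invented): a small, single-responsibility class asks for its dependency through an interface, so a test can supply a hand-written fake instead of wiring up several real classes.

public interface IClock
{
    DateTime Now { get; }
}

public class GreetingService
{
    private readonly IClock _clock;
    public GreetingService(IClock clock) { _clock = clock; }

    public string Greet() => _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

// Hand-written fake keeps the test small and deterministic.
public class FixedClock : IClock
{
    public DateTime Now { get; set; }
}

[Test]
public void Greet_BeforeNoon_SaysGoodMorning()
{
    var service = new GreetingService(new FixedClock { Now = new DateTime(2020, 1, 1, 9, 0, 0) });

    Assert.AreEqual("Good morning", service.Greet());
}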
Sometimes designing the TDD tests raises questions about the design of the application itself. Check whether your classes have been well designed and your methods are only performing one thing at a time... With good design it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that's increased inertia against refactoring, painful to discard that additional code, etc.
People at my company see unit testing as a lot of extra work that offers fewer benefits than existing functional tests. Are unit and integration tests worth it? Note that there's a large existing codebase that wasn't designed with testing in mind.
Most people are unaware what automated unit tests are for:
To experiment with a new technology
To document how to use a part of the code
To make sure a dead bug stays dead
To allow you to refactor the code
To allow you to change any major parts of the code
Create a lower watermark below which the quality of your product cannot possibly drop
Increase development speed because now, you know that something works (instead of hoping it does until a customer reports a bug).
So if any of these reasons bring you a benefit, automated unit tests are for you. If not, then don't waste your time.
(I'm assuming that you're using "functional test" to mean a test involving the whole system or application being up and running.)
I would unit test new functionality as I wrote it, for three reasons:
It helps me get to working code quicker. The turnaround time for "unit test failed, fix code, unit test passed" is generally a lot shorter than "functional test failed, fix code, functional test passed".
It helps me to design my code in a cleaner way
It helps me understand my code and what it's meant to be doing when I come to maintain it. If I make a change, it will give me more confidence that I haven't broken anything.
(This includes bug fixes, as suggested by Epaga.)
I would strongly recommend Michael Feathers' "Working Effectively with Legacy Code" to give you tips on how to start unit testing a codebase which wasn't designed for it.
It depends on whether your functional tests are automated or done manually. If it's the latter, then any kind of automated test suite is useful since the cost of running those unit / integration tests is far lower than running manual functional tests. You can show real ROI there. I would recommend starting with writing some integration tests and if time / budget allows in the future, take a look at unit testing then.
Retroactively writing unit tests for legacy code can very often NOT be worth it. Stick with functional tests, and automate them.
Then what we've done is have the guideline that any bug fixes (or new features) must be accompanied by unit tests at least testing the fix. That way you get the project at least going in the right direction.
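A small sketch of that "bug fixes come with a test" guideline, with invented names and a hypothetical bug number; the test pins down the previously broken behaviour so the dead bug stays dead:

// Regression test for (hypothetical) bug report #1234:
// totalling an empty basket used to throw instead of returning zero.
[Test]
public void Total_IsZeroForEmptyBasket()
{
    var basket = new Basket();

    Assert.AreEqual(0m, basket.Total());
}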
And I have to agree with Jon Skeet (how could I not?) in recommending "Working Effectively With Legacy Code", it really was a helpful skim/read.
As it happens, I read a paper last night on this very subject. The authors compare projects within four groups at Microsoft and IBM, contrasting, in hindsight, projects which used both unit testing and functional testing and projects which used functional testing alone. To quote the authors:
"The results of the case studies
indicate that the preview release
defect density of the four products
decreased between 40% and 90% relative
to similar projects that did not use
the TDD practice. Subjectively, the
teams experienced a 15 to 35% increase
in initial development time after
adopting TDD."
This indicates that it is certainly worth doing unit testing when you add new functionality to your project.
Yes, they are worth it. I am now faster at coding since I started unit testing my code; I spend less time fixing bugs and more time thinking about what my code should do.
One application I was brought in to consult on the FAT (testing) of consisted of a 21,000-line switch statement. Most units of functionality were a few dozen to a couple of hundred lines in a case statement. The application was built in several variants, so there were many #ifdef sections of the switch.
It was not designed for unit testing - it was not factored at all.
(It was designed in the sense that there was a definite, easy-to-comprehend architecture - malloc a struct, send the main loop a user message with the pointer to the struct as the lparam, and then free it when the message is processed. But form did not follow function, which is the central tenet of good design.)
To add unit testing to new functionality would mean a major break with the pattern; either you would need to put your code somewhere other than the big switch, and double the complexity of the variant selection mechanism, or make a large amount of scaffolding to put the messages in the queue to trigger the new functionality.
So though it's certainly desirable to unit test new functionality, it's not always practical if a system isn't already well factored. Either there's a significant amount of work to refactor the system to allow unit testing, or you end up bench-testing the code and cutting and pasting it into the existing framework - and a copy of unit tested code isn't unit tested code.
You test when you want to know something about something. If you know that your product (system, unit, service, component...) is going to work, then there's no need to test it. If you're uncertain as to whether it will work, you probably have some questions about it. Whether those questions are worth answering is a matter of risk and priorities.
If you're sure that your product will work, and you don't have any questions about it, there is still one question that's worth asking: why don't I have any questions?
---Michael B.
Unit testing indeed is extra work, but it pays off in the long run. Here are the advantages over integration testing:
You get a regression suite that acts as a safety net in case of refactoring - the same can be said of integration tests, although it can be tough to say whether a given test covers a particular piece of code.
Unit tests give immediate feedback when modifying the code - and this feedback can be very accurate, pointing to the method where the anomaly is.
Those tests are cheap to run: they run very fast (a few seconds typically), without any installation or deployment, just compile and test. So they can be run often.
It is easy to add a new test to reproduce a problem once it is identified, which augments the regression suite, or to answer a question ("what happens if this function is called with a null parameter...?").
There clearly is some overlap between the two, but they are complementary as they both offer advantages.
Now, like any software engineering process, testing has to be tailored according to the project's needs.
With a large legacy codebase - legacy in the sense of not unit tested - I would recommend restricting unit tests to new features added to the code, as unit tests can be hard to introduce otherwise. In this regard, I can only second (third?) the recommendation of the "Working Effectively with Legacy Code" book to help bring unit testing into an existing codebase.
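To illustrate the "add a test to answer a question" point in a hedged way (the ReportParser name is invented), a quick NUnit test can document what the code does with a null argument and guard it going forward:

[Test]
public void Parse_NullInput_ThrowsArgumentNullException()
{
    var parser = new ReportParser();

    // The test both answers the question and protects the behaviour
    // against future regressions.
    Assert.Throws<ArgumentNullException>(() => parser.Parse(null));
}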