During a proposal about implementing unit tests for our projects, my manager argued that it could be more expensive to create unit tests since you will be maintaining two sets of code. His argument is that whenever there is a change in requirements or functionality, the unit tests can become obsolete, so the developers have to update the tests in order to pass the automated build. He said that this can be inconvenient for the developers since their coding time might be reduced and spent fixing the tests.
The reason I wanted to implement unit tests is to minimize bug occurrence, especially critical ones, before the code is turned over to the validation team for functional testing. I also believe that the cost of creating unit tests can be recovered by having better quality systems.
Right now, his challenge to me is to create easy-to-maintain tests that would be easy to modify or, if possible, tests that will not break easily. I am using xUnit testing frameworks like JUnit and mocking frameworks like Mockito and PowerMock to help in testing.
I'm looking for tips and techniques on how to write easy-to-maintain tests and how to avoid brittle tests. Are there other tools that can come in handy in creating such tests? I'm writing code in Java and C++. Thanks.
I think you're facing a difference in culture - your manager fears the potential time sink that testing represents. It is widely accepted that a TDD/BDD process is more expensive up-front, but as time goes on you start to reap the rewards: "changing just this one isolated thing..." no longer throws up painful/embarrassing/costly bugs.
My suggestion is that you do some research and put together a document that tries to sell the process to your manager, putting forward a business case based on what has happened in your business already that could/would have been solved with a solid test suite.
There is one book that goes into TDD better than any website, article etc. I've ever seen. I'd highly recommend it as reading for anyone wanting to practise TDD/BDD/OOP:
http://www.growing-object-oriented-software.com/ (I don't earn any money from linking this! - but it is a superb addition to my desk!).
From my experience, it is almost impossible to truly convince a "Unit Testing Skeptic".
My advice: Start adding tests and increase coverage on a selection of the most important and frequently changing parts of your product. Do this on your own time, and possibly with an "accomplice", and time your work.
After that, show the skeptics examples of regression bugs caught by these tests, and how your time spent was actually worth it. If you succeed in doing so, management will see value in devoting resources for unit tests.
As for the technical challenge, I agree with other answers here that practice makes perfect, but there are some guidelines that could help you:
Test only one thing per test. If you have 100 tests that assert the same condition, when this condition changes, you'll have to update 100 tests.
If your "Class Under Test" is huge, with a ton of logic in it, it will be very hard to test it. Try to refactor your classes into small, coherent units of logic. When one of these units will change, the number of broken tests will be relatively small.
Test your public interface, not implementation details. This will allow developers to fix bugs, improve performance, and refactor, with minimal effect on tests.
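For illustration, here is a minimal sketch of what such small, focused, interface-level tests can look like (JUnit 4 style; java.util.ArrayDeque is used purely as an example class under test):
import java.util.ArrayDeque;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

public class DequeContractTest {

    // One behaviour per test: if a behaviour changes, only the tests that
    // actually assert on it need to be updated.
    @Test
    public void pushMakesDequeNonEmpty() {
        ArrayDeque<String> deque = new ArrayDeque<>();
        deque.push("item");
        assertFalse(deque.isEmpty());
    }

    // Only the public interface is exercised; nothing about the internal
    // storage is asserted, so refactoring the internals cannot break this test.
    @Test
    public void popReturnsMostRecentlyPushedItem() {
        ArrayDeque<String> deque = new ArrayDeque<>();
        deque.push("first");
        deque.push("second");
        assertEquals("second", deque.pop());
    }
}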
There are many more guidelines on the first page of a Google search for "unit testing guidelines", and you can read The Art of Unit Testing for extensive coverage of writing good unit tests.
Good luck!
Whether you write unit tests or not should not concern your management. What usually matters is that functionality is implemented fast and with no bugs. How you achieve it is up to you, developers; unit testing is just one method. It works best for loosely coupled code where you can easily mock out dependencies.
But writing good unit tests is not just about the decision to write them, or the tools you use; it is mostly about experience, which you can only gain by practicing. There are no simple recipes that will let you write good unit tests, just as there are none for writing good code. If you just force people with no experience to write unit tests, productivity will surely slump, and there will be no apparent benefits. Ultimately, you should be writing unit tests because you believe it helps you, not because someone else thinks it should help you.
The answer is that the unit tests should test functionality, not implementation. So if the developers refactor the code, nothing should change in the unit tests, because they aren't testing the internals, just the results.
Of course, if you change the interface or the behavior drastically, the tests will change, but then you would want to be testing the new code anyway, so you'd still be writing tests.
Long story short, there is a lot of research out there showing that a good test suite saves time in the long run by a huge margin.
Having unified test data factories may reduce maintenance cost when the system under test changes.
Tests often have similar test setups with only small variations.
When I started unit testing I used copy & paste to create a new test from an existing one.
Every test had a long test setup.
After changes in the system under test I had to update many tests.
Today I use test data factories where only the difference from the standard is assigned.
Example:
@Test
public void customerUnder18ShouldNotBeGrantedAccess() {
    // Arrange: start from a valid standard customer, change only what matters here
    Customer customer = TestdataFactory.CreateStandardCustomer();
    customer.setAge(16);
    // Act: call the code under test (omitted in the original example)
    // Assert: verify that access is denied (omitted in the original example)
}
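A minimal sketch of such a factory might look like the following; the Customer fields and defaults here are invented purely for illustration:
// Illustrative only: a central factory owns the definition of a valid
// "standard" customer. When Customer grows a new mandatory field, only
// this method changes, not every test that needs a customer.
class TestdataFactory {

    static Customer CreateStandardCustomer() {
        Customer customer = new Customer();
        customer.setName("Standard Customer");   // hypothetical default
        customer.setAge(30);                     // old enough for access
        return customer;
    }
}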
Here are more useful tips.
My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s).
There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits?
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written?
EDIT
What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pick up the concept, start to move more towards TDD and testing first. Thanks for everyone's input.
FOLLOW UP
We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!
If the team is floundering at implementing TDD, but they weren't creating any unit tests before... then start them off by creating unit tests after their code is written. Even unit tests written after the code are better than no unit tests at all!
Once they're proficient at Unit Testing (and everything that comes with it), then you can work on getting them to create the tests first...and code second.
It is absolutely still worth writing the unit tests after the code is written. It's just that it's often harder, because your code wasn't designed to be testable and you may have overcomplicated it.
I think a good pragmatic way to bring a team into TDD is to provide the alternative method of "test-during development" in the transition period, or possibly in the long-term. They should be encouraged to TDD sections of code that seem natural to them. However, in sections of code that seem difficult to approach test-first or when using objects that are predetermined by a non-agile A&D process, developers can be given the option of writing a small section of the code, then writing tests to cover that code, and repeating this process. Writing unit tests for some code immediately after writing that code is better than not writing any unit tests at all.
It's in my humble opinion better to have 50% test coverage with "code first, test after" and a 100% completed library, than 100% test coverage and a 50% completed library with TDD. After a while, your fellow developers will hopefully find it entertaining and educational to write tests for all of the public code they write, so TDD will sneak its way into their development routine.
TDD is about design! So if you use it, you will be sure to have a testable design of your code, making it easier to write your tests. If you write tests after the code is written they are still valuable but IMHO you will be wasting time since you will probably not have a testable design.
One suggestion I can give to you to try to convince your team to adopt TDD is using some of the techniques described in Fearless Change: Patterns for Introducing New Ideas, by Mary Lynn Manns and Linda Rising.
I just read this on a calendar: "Every rule, executed to its utmost, becomes ridiculous or even dangerous." So my suggestion is not to be religious about it. Every member of your team must find a balance with what feels "right" to them when it comes to testing. This way, every member of your team will be most productive (instead of, say, thinking "why do I have to write this sti**** test??").
So some tests are better than none, tests written after the code are better than only a few tests, and testing before the code is better than testing after. But each step has its own merits and you shouldn't frown upon even small steps.
If they're new to testing then IMO start off testing code that's already been written and slowly graduate to writing tests first. As someone trying to learn TDD and new to unit testing, I've found it kind of hard to do a complete 180 and change my mindset to write tests before code, so the approach I'm taking is sort of a 50-50 mix; when I know exactly what the code will look like, I'll write the code and then write a test to verify it. For situations where I'm not entirely sure, I'll start with a test and work my way backwards.
Also remember that there is nothing wrong with writing tests to verify code, instead of writing code to satisfy tests. If your team doesn't want to go the TDD route then don't force it on them.
I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests.
There is absolutely no doubt that there is value in unit-tested code (regardless of when the tests were written), and I include "the code is unit tested" in the "Definition of Done". People may use TDD or not, as long as they test.
Regarding version control, I like to use "development branches" with a unit tested policy (i.e. the code compiles and builds, all unit tests pass). When features are done, they are published from development branches to the trunk. In other words, the trunk branch is the "Done branch" (No junk on the trunk!) and has a shippable policy (can release at any time) which is more strict and includes more things than "unit tested".
This is something that your team will have to have its own successes with before they begin to believe in it. I'll rant about my nUnit epiphany for anyone who cares:
About 5 years ago I discovered nUnit when working on a project. We had almost completed V1.0 and I created a few tests just to try out this new tool. We had a lot of bugs (obviously!) because we were a new team, on a tight deadline, with high expectations (sound familiar?) etc. Anyway we got 1.0 in and started on 1.1. We re-orged the team a little bit and I got 2 devs assigned to me. I did a 1-hour demo for them and told them that everything we wrote had to have a test case with it. We constantly ran "behind" the rest of the team during the 1.1 dev cycle because we were writing more code: the unit tests. We ended up working more, but here's the payoff -- when we finally got into testing we had exactly 0 bugs in our code. We helped everyone else debug and repair their bugs. In the postmortem, when the bug counts showed up, it got EVERYONE's attention.
I'm not dumb enough to think you can test your way to success but I am a true believer when it comes to unit tests. The project adopted nUnit and it soon spread to the company for all .Net projects as a result of 1 success. Total time period for our V1.1 release was 9 dev weeks so it was definitely NOT an overnight success. But long term, it proved successful for our project and the company we built solutions for.
There is no doubt that testing (First, While or even After) will save your bacon, and improve your productivity and confidence. I recommend adopting it!
I was in a similar situation. Because I was a "noob" developer, I was often frustrated when working on team projects by the fact that a contribution had broken the build. I did not know if I was to blame or even, in some cases, who to blame. But I was more concerned that I was doing the same thing to my fellow developers. This realisation then motivated me to adopt some TDD strategies. Our team started having silly games and rules, like you cannot go home until all your tests pass, or if you submit something without a test, then you have to buy everyone "beer/lunch/etc", and it made TDD more fun.
One of the most useful aspects of unit testing is ensuring the continuing correctness of already working code. When you can refactor at will, let an IDE remind you of compile-time errors, and then click a button to let your tests spot any potential runtime errors (sometimes arising in previously trivial blocks of code), I think you will find your team starting to appreciate TDD. So starting with testing existing code is definitely useful.
Also, to be blunt, I have learned more about how to write testable code by trying to test written code than from starting with TDD. It can be too abstract at first if you are trying to think of contracts that will both accomplish the end goal and allow testing. But when you look at code and can say "This singleton here completely spoils dependency injection and makes testing this impossible," you start to develop an appreciation for what patterns make your testing life easier.
Well, if you do not write tests first it's not "Test Driven", it's just testing. That has benefits in itself, and if you already have a code base, adding tests for it is certainly useful even if it's not TDD but merely testing.
Writing tests first is about focusing on what the code should do before writing it. Yes you also get a test doing that and it's good, but some may argue it's not even the most important point.
What I would do is train the team on toy projects like these (see Coding Dojo, Katas) using TDD (if you can get experienced TDD programmers to participate in such a workshop it would be even better). When they see the benefits they will use TDD for the real project. But meanwhile do not force them; if they do not see the benefit they won't do it right.
If you have design sessions before writing code or have to produce a design doc, then you could add Unit Tests as the tangible outcome of a session.
This could then serve as a specification as to how your code should work. Encourage pairing on the design session, to get people talking about how something should work and what it should do in given scenarios. What are the edge cases, with explicit test cases for them so everyone knows what it's going to do if given a null argument for example.
As an aside, BDD may also be of interest.
You may find some traction by showing an example or two where TDD results in less code being written - because you only write code required to make the test pass, the temptation to gold-plate or violate YAGNI is easier to resist. Code you don't write doesn't need to be maintained, refactored, etc, so it's a "real savings" that can help sell the concept of TDD.
If you can clearly demonstrate the value in terms of time, cost, code and bugs saved, you may find it's an easier sell.
Starting to build JUnit test classes is the way to start; for existing code it's the only way to start. In my experience it is very useful to create test classes for existing code. If management thinks this will take too much time, you can propose to only write test classes when the corresponding class is found to contain a bug, or is in need of cleanup.
For the maintenance process the approach to get the team over the line would be to write JUnit tests to reproduce bugs before you fix them, i.e.
bug is reported
create JUnit test class if needed
add a test that reproduces the bug
fix your code
run the test to show that the bug no longer occurs
You can explain that "documenting" bugs in this way will prevent those bugs from creeping back in later. That is a benefit the team can experience immediately.
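As a rough sketch, such a bug-reproducing test might look like this in JUnit (the InvoiceCalculator class and the bug itself are invented for illustration):
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceCalculatorRegressionTest {

    // Documents a (hypothetical) reported bug: a zero-quantity line used to
    // corrupt the total. The test fails on the buggy code and passes once
    // the fix is in, keeping the bug from creeping back later.
    @Test
    public void zeroQuantityLineContributesNothingToTotal() {
        InvoiceCalculator calculator = new InvoiceCalculator();
        calculator.addLine(10.0, 0);   // unit price, quantity
        assertEquals(0.0, calculator.total(), 0.001);
    }
}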
I have done this in many organizations and I have found the single best way to get TDD started and followed is to set up pair programming. If you have someone else you can count on that knows TDD then the two of you can split up and pair with other developers to actually do some paired programming using TDD. If not I would train someone who will help you to do this before presenting it to the rest of the team.
One of the major hurdles with unit testing, and especially TDD, is that developers don't know how to do it, so they cannot see how it can be worth their time. Also, when you first start out, it is much slower and doesn't seem to provide benefits. It only really provides benefits when you are good at it. By setting up paired programming sessions you can get developers to learn it and get good at it much faster. Additionally, they will be able to see immediate benefits from it as you work through it together.
This approach has worked many times for me in the past.
One powerful way to discover the benefits of TDD is to do a significant rewrite of some existing functionality, perhaps for performance reasons. By creating a suite of tests that do a good job covering all the functionality of the existing code, this then gives you the confidence to refactor to your heart's content with full confidence that your changes are safe.
Note that in this case I'm talking about testing the design or contract - unit tests that test implementation details will not be suitable here. But then again, TDD can't test implementation by definition, as the tests are supposed to be written before the implementation.
TDD is a tool that developers can use to produce better code. I happen to feel that the exercise of writing testable code is at least as valuable as the tests themselves. Isolating the IUT (Implementation Under Test) for testing purposes has the side effect of decoupling your code.
TDD isn't for everyone, and there's no magic that will get a team to choose to do it. The risk is that unit test writers that don't know what's worth testing will write a lot of low value tests, which will be cannon fodder for the TDD skeptics in your organization.
I usually make automated Acceptance Tests non-negotiable, but allow developers to adopt TDD as it suits them. I have my experienced TDDers train/mentor the rest and "prove" the usefulness by example over a period of many months.
This is as much a social/cultural change as it is a technical one.
I read a lot of posts that convinced me I should start writing unit tests, and I also started to use dependency injection (Unity) for the sake of easier mocking, but I'm still not quite sure at what stage I should start writing the unit tests and mocks, or how and where to start.
Would the preferred way be writing the unit tests before the methods as described in the TDD methodology?
Is there any different methodology or manner for unit testing?
Test first / test after:
It should be noted that 'test first' as part of TDD is just as much (if not more) to do with design as it is to do with unit testing. It's a software development technique in its own right -- writing the tests results in a constant refining of the design.
On a separate note: If there is one significant advantage to TDD from a pure unit testing perspective, it is that it is much harder (though not impossible) to write a test that's wrong when doing TDD. If you write the test beforehand, it should always fail because the logic required to make the test pass does not yet exist. If you write the test afterwards, the logic should be there, but if the test is bugged or is testing the wrong thing, it may pass regardless.
I.e. if you write a bad test before, you may get a green light when you expect a red (so you know the test is bad). If you write a bad test afterwards, you will get a green light when you expected a green (unaware of the bad test).
Books
The Pragmatic Unit Testing book is well worth a look, as is Roy Osherove's "The Art of Unit Testing". The Pragmatic book is more narrowly focused on the different types of test inputs you can use to find bugs, whereas TAOUT covers a wider spread of topics such as test doubles, strategies, maintainability etc. Either book is good; it depends what you want from it.
Also, here's a link to a talk Roy Osherove did on unit testing. It's worth a watch (so are some of the test review videos he recorded, as he points out various problems and dos/don'ts along with reasons why).
How to start
There's nothing better than writing code. Find a fairly simple class that doesn't reference much else. Then, start writing some tests.
Always ask yourself "what do I want to try and prove with this test?" before you write it, then give it a decent name (usually involving the method being called, the scenario and the expected result, e.g. on a stack: "Pop_WhenStackIsEmpty_ThrowsException").
Think of all the inputs you can throw at it, different combinations of methods that may yield interesting results and so forth.
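Sketched with JUnit 4 and the JDK's own java.util.Stack, that naming advice looks roughly like this:
import java.util.EmptyStackException;
import java.util.Stack;
import org.junit.Test;

public class StackTest {

    // The name records the method, the scenario and the expected result,
    // so a failure report is readable without opening the test body.
    @Test(expected = EmptyStackException.class)
    public void Pop_WhenStackIsEmpty_ThrowsException() {
        new Stack<String>().pop();
    }
}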
If you are curious about unit testing the best way to learn it is try it. You will probably start writing integration tests at first, but that is fine. When they seem too difficult to maintain or too much work to write, read more about what kind of tests there are (unit, functional, integration) and try to learn the difference.
If you are interested in starting with TDD, Uncle Bob is a good source. Particularly this.
More on unit testing
Ensure that you get consistent test results. Running the same test repeatedly should return the same results consistently.
The tests should not require configuration.
Testing order should not matter. This means partial test runs can work correctly. Also, if you keep this design philosophy in mind, it will likely aid in your test creation.
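A small sketch of what order-independent, configuration-free tests can look like in JUnit: each test gets a fresh fixture, so running them alone, together or in any order gives the same result.
import java.util.ArrayList;
import java.util.List;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderIndependentTest {

    private List<String> items;

    // Rebuilt before every test: no test can see what another test did.
    @Before
    public void setUp() {
        items = new ArrayList<>();
        items.add("seed");
    }

    @Test
    public void addingAnItemGrowsTheList() {
        items.add("extra");
        assertEquals(2, items.size());
    }

    @Test
    public void removingTheSeedLeavesAnEmptyList() {
        items.remove("seed");
        assertEquals(0, items.size());
    }
}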
Remember that any testing is better than no testing. The difficulty can be found in writing good clean unit tests that promote ease of creation and maintenance. The more difficult the testing framework, the more opposition there will be to employing it. Testing is your friend.
In C# and with Visual Studio I find the following procedure very helpful:
Think! Make a small up-front design. You need a clear picture of what classes you need and how objects should relate to each other. Concentrate on only one class/object (the class/object you want to implement) and one relationship; otherwise you end up with too heavyweight a design. I often end up with multiple sketches (only a few boxes and arrows) on a spare sheet of paper.
Create the class in the production code and name it appropriately.
Pick one behaviour of the class you want to implement and create a method stub for it. With Visual Studio, creating empty method stubs is a piece of cake.
Write a test for it. For this you will need to initialize the object, call the method and make an assert to verify the result. In the easiest case the assertion requires another method stub or a property in the production code.
Compile and let the test runner show you the red bar!
Code the required behavior in the production code to see the green bar appear.
Move to the next behaviour.
For this procedure two things are very important:
You need a clear picture of what you want and how the class/object should look. At least spend some time on it.
Think about behaviours of the class/object. This will make the tests and the production code behaviour-centric, which is a very natural approach to think about classes/objects.
Test first or not test first?
I usually find it harder to retrofit tests to existing production code. In most cases this is due to tangled dependencies on other objects when the object under test needs to be initialized. TDD usually avoids this, because you want to initialize the object in the test cases with as little effort as possible. This results in very loose coupling.
When I retrofit tests, the most cumbersome job is the task of initializing the object under test. Also, the assertions may be a lot of work because of tangled dependencies. For this you will need to change the production code and break dependencies. By using dependency injection properly this should not be an issue.
Yes, the preferred way of doing TDD is to write the test first (as implied by the name Test-Driven Development). When you start out with TDD it can be hard to know where to start writing the tests, so I would suggest to be pragmatic about it. After all, the most important part of our job is about delivering working code, not so much how the code was crafted.
So you can start by writing tests for existing code. Once you get the hang of how the unit tests are structured, which ones seem to do a good job and which ones seem not so good, you will find it easier to dive more into the test-first approach. I have found that I write tests first to a greater extent as time goes by. It simply becomes more natural with increased experience.
Steve Sanderson has a great writeup on TDD best practices.
http://feeds.codeville.net/~r/SteveCodeville/~3/DWmOM3O0M2s/
Also, there's a great set of tutorials for doing an ASP.net MVC project that discusses a lot of TDD principles (if you don't mind learning ASP.net MVC along the way)
http://www.asp.net/learn/mvc-videos/ Look for the "Storefront" series at the bottom of the page.
Moq seems to be the hot mocking framework lately; you may want to look into that as well.
In summary, try to write a test to validate something you're trying to achieve, then implement the code to make it work.
The pattern is known as Red - Green - Refactor. Also do your best to minimize dependencies so that your tests can focus on one component.
Personally, I use Visual Studio Unit Tests. I'm not a hardcore TDD developer, but what i like to do is this:
Create a new project and define a few of the fundamental classes based on the system design (that way I can at least get some IntelliSense)
Create a unit test project and start writing unit tests to satisfy the functionality I'm trying to implement.
Make them fail
Make them pass (implement)
Refactor
Repeat, trying to make the tests more stringent or creating more tests until I feel it's solid.
I also feel it's very useful for adding functionality onto an existing code base. If you want to add some new feature, first create the unit test for what you want to add, step through the code to see what you have to change, then go through the TDD process.
Choose a small non-critical application and implement it using TDD. At first the new way of thinking will feel weird, but maybe after a week or two of practice it will feel natural.
Here is a tutorial application (in branch "tutorial") that shows what kinds of tests to write. In that tutorial you write code to pass the predefined test cases, so that you get into the rhythm, and later you then write your own tests. The README file contains instructions. It's written in Java, but you can easily adapt it to C#.
I would take on TDD, test-first development, before mocks and dependency injection. To be sure, mocks can help you better isolate your units - and thus do better unit testing - but to my mind, mocking and DI are more advanced concepts that can interfere with the clarity of just writing your tests first.
Mocks, and DI, have their place; they're good tools to have in your toolbox. But they take some sophistication, a more advanced understanding, than the typical testing neophyte has. Writing your tests first, however, is exactly as simple as it sounds. So it's easier to take on, and it's powerful all by itself (without mocks and DI). You'll get earlier, easier wins by writing mock-free tests first, than by trying to begin with mocks, and TDD, and DI all at once.
Start with test-first; when you are very comfortable with it, and when your code is telling you you need mocks, then take on mocks.
I have worked for companies which take unit testing/integration testing too far and those that do too little so I like to think I have a good balance between the two.
I would recommend TDD - Test Driven Development. This ensures you have good coverage but it also keeps focusing your attention on the right place and problem.
So the first thing you do for every piece of new development is write a unit test - even if you don't have a single class to test.
Think about what you are testing. Now run the test. Why wouldn't it compile? Because you need classA. Create the class and run the test. Why doesn't it compile? Because it doesn't have methodA. Write the method and run the unit test again. Why does the test fail? Because methodA isn't implemented. Implement methodA and run the test. Why does it fail? Because methodA doesn't return the correct expected value... etc.
You continue like this writing unit tests as you develop and then eventually the test will pass and the piece of functionality will be complete.
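A rough sketch of that first cycle in Java/JUnit (the class and method names are invented for illustration):
// 1. Write the test first. It does not even compile yet, because
//    ClassA and methodA() do not exist.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ClassATest {
    @Test
    public void methodAReturnsExpectedValue() {
        ClassA a = new ClassA();
        assertEquals("expected", a.methodA());
    }
}

// 2. Create just enough production code to make it compile; the test now
//    runs and fails because the stub does not return the correct value.
class ClassA {
    String methodA() {
        return null;               // red bar
    }
}

// 3. Implement the behaviour so the test passes:
//    String methodA() { return "expected"; }   // green bar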
Extending on Steve Freeman's answer: Dave Astels' book is called "Test Driven Development: A Practical Guide". If the kind of application you're writing is a GUI application then this should be helpful. I read Kent Beck's book but I couldn't figure out how to start a project with TDD. Astels' book test-drives a complete non-trivial GUI application from start to finish using stories. It helped me a lot to actually start with TDD; it showed me where and how to start.
Test-driven development can be confusing for beginners. A lot of books that I read when I was learning TDD would teach you how to write unit tests for a Calculator class, but there seems to be very little help for building real-world apps, which are more data-centric if I dare say. For me the breakthrough was when I understood what Behaviour Driven Development (BDD) is and how to start testing from the outside in. Now I can simply advise you to focus on your application's behaviour and write unit tests to verify it. There is a lot of debate going on between TDD and BDD, but I think that well-written automated tests at every level add value, and to write them we need to focus on behaviour.
Hadi Hariri has an excellent post here
http://hadihariri.com/2012/04/11/what-bdd-has-taught-me/
I have also written some articles on the topic that I feel will help in understanding all the concepts related to TDD here
http://codecooked.com/introduction-to-unit-testing-and-test-driven-development/
http://codecooked.com/different-types-of-tests-and-which-ones-to-use/
Read Pragmatic Unit Testing in C# with NUnit. It has comprehensive information about starting to write tests and structuring the code to make it more unit-testing friendly.
If you haven't written unit tests before, then just pick some classes and begin to write your unit tests, and continue to work on developing more unit tests.
As you gain experience you can then begin to mock out the database for example, by using the Unity framework, but, I would suggest starting simply and gaining experience before making this leap.
Once you are comfortable with how to write unit tests, then you can try doing TDD.
I prefer Kent Beck's approach, which is nicely explained in the book Test Driven Development: By Example by Kent Beck.
From your question I can infer that you are not sure about the test framework - choosing the correct test framework is very important for TDD or for writing unit tests in general.
The only practical problem with TDD is that "refactoring" (we need to refactor test code as well) takes a lot of time.
I think Dave Astels' book is still one of the best introductions. It's for Java, but you should be able to translate.
We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template, calling into the child classes' abstract method implementations; i.e. the Framework calls Initialize, so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
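As a hedged illustration of the Test Data Builder pattern mentioned above (all names here are made up):
// A builder supplies sensible defaults; a test states only the detail that
// matters to it, which keeps the test's context small and focused.
class CustomerBuilder {
    private String name = "Default Name";
    private int age = 30;

    static CustomerBuilder aCustomer() { return new CustomerBuilder(); }

    CustomerBuilder named(String name) { this.name = name; return this; }
    CustomerBuilder aged(int age)      { this.age = age;   return this; }

    Customer build() {
        Customer customer = new Customer();   // hypothetical domain class
        customer.setName(name);
        customer.setAge(age);
        return customer;
    }
}

// In a test, only the relevant variation appears:
//     Customer minor = CustomerBuilder.aCustomer().aged(16).build();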
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
Test at the right level to what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets; mock those interactions instead. Verify that you talk to your collaborators correctly, not that the proper result from this method call was "42".
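A small sketch of the difference, using Mockito for the interaction-based case (the Invoice classes are invented for illustration):
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class StateVsInteractionTest {

    // State-based: assert on the value the unit computes.
    @Test
    public void totalIsSumOfLineItems() {
        Invoice invoice = new Invoice();          // hypothetical class
        invoice.addLine(40);
        invoice.addLine(2);
        assertEquals(42, invoice.total());
    }

    // Interaction-based: the collaborator is a mock, so no database or
    // socket is touched; we verify we talk to it correctly.
    @Test
    public void savingAnInvoiceDelegatesToTheRepository() {
        InvoiceRepository repository = mock(InvoiceRepository.class);  // hypothetical interface
        InvoiceService service = new InvoiceService(repository);       // hypothetical class
        Invoice invoice = new Invoice();

        service.save(invoice);

        verify(repository).persist(invoice);
    }
}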
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing OO Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework are propagated through.
It is worth it, stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.
Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing -- that should be in a separate test). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change...
Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. no state to verify against).
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist that you stick with TDD. Try to check your unit testing framework, do an RCA (Root Cause Analysis) with your team, and identify the problem area.
Fix the unit testing code at the suite level and do not change your code frequently, especially function names or other modules.
I would appreciate it if you could share your case study; then we can dig further into the problem area.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers not to pass code reviews for changes that don't update the unit tests.
Set up an automatic test box that runs the full set of units tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bringing to the surface any breaks in behavior due to changes in the framework, immediate code or other external sources. This helps you determine whether the behavior changed and the unit tests need to be modified accordingly, or whether a bug was introduced that causes the unit test to fail and needs to be corrected.
Don't give up; while it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
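A rough before/after sketch of that kind of refactoring (class names invented; the test snippet assumes Mockito):
// Before (hard to test): the dependency is reached through a singleton.
//     class ReportService {
//         String build(int id) {
//             return "Report: " + Database.getInstance().load(id);
//         }
//     }

// After (testable): the dependency is asked for through an interface.
interface DataSource {
    String load(int id);
}

class ReportService {
    private final DataSource dataSource;

    ReportService(DataSource dataSource) {   // constructor injection
        this.dataSource = dataSource;
    }

    String build(int id) {
        return "Report: " + dataSource.load(id);
    }
}

// The test supplies a mock, so no real database is needed and a change in
// the database code no longer breaks this test:
//     DataSource source = mock(DataSource.class);
//     when(source.load(7)).thenReturn("row 7");
//     assertEquals("Report: row 7", new ReportService(source).build(7));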
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with a price, of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some parts of your code to improve encapsulation, create fewer dependencies, etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too black-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify that a new item was actually stored, or do you manage to get a handle to the actual storage and retrieve the item directly from where it is stored? The latter makes brittle tests: when you change the implementation, you're breaking the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off changing the tests first, and then the framework.
I would suggest investing in a test automation tool. If you are using continuous integration you can make it work in tandem. There are tools out there which will scan your codebase, generate tests for you, and then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests and, yes, I have to change them if the codebase changes.
There is a fine line; with an automation tool you would definitely have better code coverage.
However, with well-written developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
Sometimes designing the TDD tests prompts questions about the design of the application itself. Check whether your classes are well designed and your methods only perform one thing at a time... With good design it should be simple to write tests for simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that adds inertia against refactoring, since it's painful to discard that additional code, etc.
I'm soon to do a 10-minute Grok talk on unit testing at my company. I've been trying it myself, and feel that it can certainly bring benefits to the company. We already do WebInject testing in our dedicated QA team, but I want to try to sell unit testing to the devs.
So with only 10mins what would you cover and why?
We're a Microsoft shop (C# web apps); I've used NUnit in my experience.
Unit testing is all about confidence.
It allows you to be confident that your code is solid, and that other people can rely on it when they're writing their own parts of a system. If you can get across that unit testing will help to eliminate the trepidation that comes with the first release of a new system, I would hope that your audience will soon become very interested.
I'd start with a problem a lot of programmers might be familiar with: the fear of making a change to existing code in case it breaks something. How that prevents work from happening, or prevents it being done properly (because they're afraid to refactor), and so leads to having to rewrite everything every x years.
Unit Testing -> Refactoring -> Living Code.
Edit:
btw, I would Not lead with the 'all code without unit tests is legacy code' quote from Michael Feathers. It certainly made me feel defensive the first time I heard it. By the time people stop feeling affronted the 10 minutes will be over :-) (personally I think that quote is more true than it is helpful).
Here's a good format for a short talk on a technique X:
why you decided to try X in the first place
what you personally have gained from using X
what limitations you've noticed, things that X doesn't address
Don't "sell" or spend lots of time on the theory. Do prepare beforehand and point people to books, URLs of articles or tutorials that you think are most helpful. Those who are interested after your talk can look up the details on the Web.
Try to briefly talk about the aspect of Test Driven Development: write tests first and the interfaces as you go, then implement everything.
Maybe also about continuous integration, this means that as soon as you check something into your source control system, the project gets compiled and all the tests run so the developer knows immediately if he has done something wrong.
If there are any project managers in the audience, also be fair and tell them that unit testing will make your project take 15-30% more time, but that it will be worth it in the long run.
You could mention that it will be a difficult learning curve, and it will feel like productivity is being impacted, but the benefits are worth it:
e.g. effectively the creation of an automated regression test suite, which in turn allows you to make bigger additions or modifications to existing code without worrying that you are breaking some existing functionality.
Creation of production code will be slower, but this should be offset by the higher quality of the code, i.e. fewer bugs, which in the long run means overall higher productivity.
I think 10 minutes is enough to present a simple example of how unit testing can save you time.
Implement a class (you can TDD if you feel like it) and show how a unit test can catch a modification that breaks the class.
Also, you can highlight how you can develop components faster if you test them in isolation (i.e. you don't need to bring up your web application, log in, go to your functionality and test); you can just run your tests.
You might be able to perform this on a piece of code from your company- and maybe show how a unit test might have caught a bug you have had recently.
If you give a demonstration, do it on a working piece of code from a project that everyone is familiar with. Avoid contrived examples. Books on TDD are already full of them, and they don't really sell how TDD can work for a real project.
For the love of god, emphasize that unit tests are for testing "units" of logic. I hate looking at a QA suite of NUnit tests that nobody expected to have to maintain, where each "unit test" tests valid outputs for 150 (binary) input files and then shits itself if one fails, without telling you which one.
I would demonstrate:
The confidence it gives in code you produce
The confidence it gives when you change code because it passes the unit tests
The benefits of code coverage, no more "Oh that else statement was never tested!"
The benefits of running unit tests per each build on a CI platform like Hudson
FWIW we run the crappy Visual Studio testing via MSTest on our Hudson box and I've got an XSLT that Hudson uses to convert the results to the NUnit format so Hudson can decipher them. Just putting that out there in case they want you to stick with a Microsoft testing platform.
Accountability, as highlighted by Kent Beck, is another trait that unit testing facilitates. Listen to his podcast at IT Conversations. (His point on accountability starts at 30:34.)
From a business perspective, you may want to highlight the fact that unit tests can "de-risk" any changes you make to your code. Once you have a suite of unit tests, you can make changes to the code base and know what breaks and what doesn't.
It might not be a bad idea to go over user testing. If you have a good set of tests, you can bring failing tests to the users after you make changes to have them validate that the new results are correct. Additionally, you can streamline requirements gathering if you have the users write new unit test definitions for you. They don't need to be able to code, but they do need to be able to give you the appropriate inputs and expected outputs (otherwise how would they know if the changes they asked for were working?).
Visual Studio has a pretty nice set of tools for unit testing, so an example or two may go a long way toward giving your group an idea of what unit testing is like in practice.
Wear this t-shirt ;-)
Well prepared live demo:
Find a "bug" in your "application"
Write a unit test that covers this bug.
Fix this bug
Show that your code is green.
So you can prove that there's no way this bug will occur once more!
Another way to do this:
Propose an issue that can be solved by creating an algorithm. Something relatively simple, of course. Next, code this algorithm in a DLL project. Try to sneak in some weaknesses (i <= array.Length is always a good one). Next, ask them how they would test this DLL.
Most devs run their apps to test them. But you can't run a DLL. You might get some suggesting you create a console app to create methods that exercise the algorithm. Show them how you can craft unit tests to do this.
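For instance, a hedged sketch of that kind of test (translated to Java for consistency with the other examples here; the library class is invented):
// The library code under test, with the classic off-by-one weakness
// mentioned above sneaked in.
class MathLib {
    static int sum(int[] values) {
        int total = 0;
        for (int i = 0; i <= values.length; i++) {   // bug: should be i < values.length
            total += values[i];
        }
        return total;
    }
}

// A unit test exercises the library directly -- no host application or
// console harness needed -- and the off-by-one surfaces immediately as an
// ArrayIndexOutOfBoundsException:
//     @Test
//     public void sumAddsAllValues() {
//         assertEquals(6, MathLib.sum(new int[] {1, 2, 3}));
//     }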
Have a good set of resources for follow up/self directed learning:
the Pragmatic Unit Testing books for Java/C# are good books on the subject
the Kent Beck paper on unit testing
links to any larger samples using the testing framework of choice