Unit test naming best practices

What are the best practices for naming unit test classes and test methods?
This was discussed on SO before, at What are some popular naming conventions for Unit Tests?
I don't know if this is a very good approach, but currently in my testing projects, I have one-to-one mappings between each production class and a test class, e.g. Product and ProductTest.
In my test classes I then have methods with the names of the methods I am testing, an underscore, and then the situation and what I expect to happen, e.g. Save_ShouldThrowExceptionWithNullName().

Update (Jul 2021)
It's been quite a while since my original answer (almost 12 years), and best practices have changed a lot during this time. So I feel inclined to update my own answer and offer different naming strategies to readers.
Many comments and answers point out that the naming strategy I propose in my original answer is not resilient to refactoring and ends up producing difficult-to-understand names, and I fully agree.
In recent years I have ended up using a more human-readable naming schema, where the test name describes what we want to test, along the lines described by Vladimir Khorikov.
Some examples would be:
Add_credit_updates_customer_balance
Purchase_without_funds_is_not_possible
Add_affiliate_discount
As you can see, it's quite a flexible schema; the most important thing is that by reading the name you know what the test is about, without including technical details that may change over time.
To name the projects and test classes I still adhere to the original answer schema.
Original answer (Oct 2009)
I like Roy Osherove's naming strategy. It's the following:
[UnitOfWork_StateUnderTest_ExpectedBehavior]
It carries all the information needed in the method name, in a structured manner.
The unit of work can be as small as a single method, a class, or as large as multiple classes. It should represent all the things that are to be tested in this test case and are under control.
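For illustration, here is a minimal sketch of the pattern in NUnit; the Account class and its behavior are hypothetical, chosen just to make the names concrete:

using System;
using NUnit.Framework;

// Minimal hypothetical class under test, just enough to make the names concrete.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) { Balance = balance; }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}

[TestFixture]
public class AccountTests
{
    // [UnitOfWork_StateUnderTest_ExpectedBehavior]
    [Test]
    public void Withdraw_AmountExceedsBalance_ThrowsInvalidOperationException()
    {
        var account = new Account(50);
        Assert.Throws<InvalidOperationException>(() => account.Withdraw(100));
    }

    [Test]
    public void Withdraw_ValidAmount_ReducesBalance()
    {
        var account = new Account(50);
        account.Withdraw(20);
        Assert.That(account.Balance, Is.EqualTo(30));
    }
}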
For assemblies, I use the typical .Tests ending, which I think is quite widespread and the same for classes (ending with Tests):
[NameOfTheClassUnderTestTests]
Previously I used Fixture as the suffix instead of Tests, but I think the latter is more common, so I changed my naming strategy.

I like to follow the "Should" naming standard for tests while naming the test fixture after the unit under test (i.e. the class).
To illustrate (using C# and NUnit):
[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Should_Increase_Balance_When_Deposit_Is_Made()
    {
        var bankAccount = new BankAccount();

        bankAccount.Deposit(100);

        Assert.That(bankAccount.Balance, Is.EqualTo(100));
    }
}
Why "Should"?
I find that it forces the test writers to name the test with a sentence along the lines of "Should [be in some state] [after/before/when] [action takes place]"
Yes, writing "Should" everywhere does get a bit repetitive, but as I said it forces writers to think in the correct way (so it can be good for novices). Plus, it generally results in a readable English test name.
Update:
I've noticed that Jimmy Bogard is also a fan of 'should' and even has a unit test library called Should.
Update (4 years later...)
For those interested, my approach to naming tests has evolved over the years. One of the issues with the Should pattern I describe above is that it's not easy to know at a glance which method is under test. For OOP, I think it makes more sense to start the test name with the method under test. For a well-designed class, this should result in readable test method names. I now use a format similar to <method>_Should<expected>_When<condition>. Depending on the context, you may want to substitute the Should/When verbs for something more appropriate. Example:
Deposit_ShouldIncreaseBalance_WhenGivenPositiveValue()

I like this naming style:
OrdersShouldBeCreated();
OrdersWithNoProductsShouldFail();
and so on.
It makes it really clear to a non-tester what the problem is.

Kent Beck suggests:
One test fixture per 'unit' (class of your program). Test fixtures are classes themselves. The test fixture name should be:
[name of your 'unit']Tests
Test cases (the test fixture methods) have names like:
test[feature being tested]
For example, having the following class:
class Person {
    int calculateAge() { ... }
    // other methods and properties
}
A test fixture would be:
class PersonTests {
    testAgeCalculationWithNoBirthDate() { ... }
    // or
    testCalculateAge() { ... }
}

Class Names. For test fixture names, I find that "Test" is quite common in the ubiquitous language of many domains. For example, in an engineering domain: StressTest, and in a cosmetics domain: SkinTest. Sorry to disagree with Kent, but using "Test" in my test fixtures (StressTestTest?) is confusing.
"Unit" is also used a lot in domains. E.g. MeasurementUnit. Is a class called MeasurementUnitTest a test of "Measurement" or "MeasurementUnit"?
Therefore I like to use the "Qa" prefix for all my test classes, e.g. QaSkinTest and QaMeasurementUnit. It is never confused with domain objects, and using a prefix rather than a suffix means that all the test fixtures live together visually (useful if you have fakes or other support classes in your test project).
Namespaces. I work in C# and I keep my test classes in the same namespace as the class they are testing. It is more convenient than having separate test namespaces. Of course, the test classes are in a different project.
Test method names. I like to name my methods WhenXXX_ExpectYYY. It makes the precondition clear, and helps with automated documentation (a la TestDox). This is similar to the advice on the Google testing blog, but with more separation of preconditions and expectations. For example:
WhenDivisorIsNonZero_ExpectDivisionResult
WhenDivisorIsZero_ExpectError
WhenInventoryIsBelowOrderQty_ExpectBackOrder
WhenInventoryIsAboveOrderQty_ExpectReducedInventory
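For example, a couple of these tests might look like the following minimal NUnit sketch (the Calculator class is hypothetical, and the fixture uses the "Qa" prefix described above):

using System;
using NUnit.Framework;

// Minimal hypothetical class under test.
public class Calculator
{
    public int Divide(int dividend, int divisor)
    {
        if (divisor == 0) throw new DivideByZeroException();
        return dividend / divisor;
    }
}

// The "Qa" prefix keeps test fixtures visually grouped and distinct from domain classes.
[TestFixture]
public class QaCalculator
{
    [Test]
    public void WhenDivisorIsNonZero_ExpectDivisionResult()
    {
        Assert.That(new Calculator().Divide(10, 2), Is.EqualTo(5));
    }

    [Test]
    public void WhenDivisorIsZero_ExpectError()
    {
        Assert.Throws<DivideByZeroException>(() => new Calculator().Divide(10, 0));
    }
}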

I use the Given-When-Then concept.
Take a look at this short article: http://cakebaker.42dh.com/2009/05/28/given-when-then/. The article describes the concept in terms of BDD, but you can use it in TDD as well without any changes.
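As a rough sketch of how this looks in a test (hypothetical BankAccount class, NUnit), the name and the body can both follow the Given-When-Then structure:

using NUnit.Framework;

// Minimal hypothetical class under test.
public class BankAccount
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount) => Balance += amount;
}

[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Given_an_empty_account_When_depositing_100_Then_balance_is_100()
    {
        // Given: an empty account
        var account = new BankAccount();

        // When: a deposit is made
        account.Deposit(100);

        // Then: the balance reflects the deposit
        Assert.That(account.Balance, Is.EqualTo(100));
    }
}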

I recently came up with the following convention for naming my tests, their classes, and their containing projects in order to maximize their descriptiveness:
Let's say I am testing the Settings class in a project in the MyApp.Serialization namespace.
First I will create a test project with the MyApp.Serialization.Tests namespace.
Within this project (and of course the namespace) I will create a class called IfSettings (saved as IfSettings.cs).
Let's say I am testing the SaveStrings() method. Then I will name the test CanSaveStrings().
When I run this test it will show the following heading:
MyApp.Serialization.Tests.IfSettings.CanSaveStrings
I think this tells me very well, what it is testing.
Of course it is useful that in English the noun "tests" is the same as the verb "tests".
There is no limit to your creativity in naming the tests, so that we get full sentence headings for them.
Usually the Test names will have to start with a verb.
Examples include:
Detects (e.g. DetectsInvalidUserInput)
Throws (e.g. ThrowsOnNotFound)
Will (e.g. WillCloseTheDatabaseAfterTheTransaction)
etc.
Another option is to use "that" instead of "if".
The latter saves me keystrokes, though, and describes more exactly what I am doing, since I don't know that the tested behavior is present; I am testing whether it is.
[Edit]
After using the above naming convention for a little longer now, I have found that the If prefix can be confusing when working with interfaces. It just so happens that the testing class IfSerializer.cs looks very similar to the interface ISerializer.cs in the "Open Files" tab.
This can get very annoying when switching back and forth between the tests, the class being tested, and its interface. As a result I would now choose That over If as a prefix.
Additionally, I now use "_" to separate words in my test method names (only for methods in my test classes, as it is not considered best practice anywhere else), as in:
[Test] public void detects_invalid_User_Input()
I find this to be easier to read.
[End Edit]
I hope this spawns some more ideas, since I consider naming tests to be of great importance; it can save you a lot of time that would otherwise be spent trying to understand what the tests are doing (e.g. when resuming a project after an extended hiatus).

See:
http://googletesting.blogspot.com/2007/02/tott-naming-unit-tests-responsibly.html
For test method names, I personally find using verbose and self-documented names very useful (alongside Javadoc comments that further explain what the test is doing).

I think one of the most important things is to be consistent in your naming convention (and to agree on it with the other members of your team). Too many times I see loads of different conventions used in the same project.

In VS + NUnit I usually create folders in my project to group functional tests together. Then I create unit test fixture classes and name them after the type of functionality I'm testing. The [Test] methods are named along the lines of Can_add_user_to_domain:
- MyUnitTestProject
+ FTPServerTests <- Folder
+ UserManagerTests <- Test Fixture Class
- Can_add_user_to_domain <- Test methods
- Can_delete_user_from_domain
- Can_reset_password

I should add that keeping your tests in the same package, but in a parallel directory to the source being tested, eliminates the bloat of the code once you're ready to deploy it, without having to do a bunch of exclude patterns.
I personally like the best practices described in "JUnit Pocket Guide" ... it's hard to beat a book written by the co-author of JUnit!

Related

To mock or not to mock?

As far as I know from eXtreme Programming and unit testing, tests should be written by one developer before another develops the tested method (or by the same developer, but the test must be written before the method implementation).
OK, that seems good: we just need to test whether a method behaves well when we give it some parameters.
But the difference between theory and practice is that in theory there isn't one, but in practice there is...
The first time I tried to test, I found it difficult in some cases due to the relations between objects. I discovered the practice of mocking and found it very useful, but some concepts make me doubt.
First, mocking implicitly says: "You know how the method runs, because you must know what other objects it needs...". Well, in theory it's my friend Bob who writes the test, and he just knows that the method must return true when I give it the string "john". It's me who codes this method using a DAO to access a database instead of using a hashtable in memory...
How will my poor friend Bob write his test? He would have to anticipate my work...
OK, that doesn't seem to be the pure theory, but no matter. But if I look at the documentation of a lot of mock frameworks, they allow me to test how many times a method is called, and in what order! Ouch...
But if my friend Bob must test the method like that to ensure good use of its dependencies, the method must be written before the test, mustn't it?
Hmm... Help my friend Bob...
When should we stop using mock mechanisms (order verification and so on)?
When are mock mechanisms useful?
Theory, practice, and mocks: what is the best balance?
What you seem to be missing from your description is the concept of separating contract from implementation. In C# and Java we have interfaces; in C++, a class composed only of pure virtual functions can fill this role. These aren't strictly necessary, but they are helpful in establishing the logical separation.
So instead of the confusion you seem to be experiencing, practice should go more like this: Bob writes the unit tests for one particular class/unit of functionality. In doing so, he defines one or more interfaces (contracts) for other classes/units that will be needed to support this one. Instead of needing to write those right now, he fills them in with mock objects to provide the indirect input and output required by his test and the system under test.
Thus the output of a set of unit tests is not just the tests to drive development of a single unit, but also the contracts required to be implemented by other code to support the unit currently under development.
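To make that concrete, here is a minimal sketch (hypothetical names, C# with Moq): Bob defines the contract his unit needs and fills it in with a mock, and a DAO-backed implementation can be written later without changing the test.

using Moq;
using NUnit.Framework;

// The contract Bob defines while writing the test; a DAO-backed
// implementation can be written later without touching the test.
public interface IUserRepository
{
    bool Exists(string userName);
}

// The unit under test depends only on the contract.
public class UserValidator
{
    private readonly IUserRepository repository;
    public UserValidator(IUserRepository repository) { this.repository = repository; }
    public bool IsValid(string userName) => repository.Exists(userName);
}

[TestFixture]
public class UserValidatorTests
{
    [Test]
    public void IsValid_KnownUser_ReturnsTrue()
    {
        // The mock stands in for the not-yet-written implementation.
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.Exists("john")).Returns(true);

        var validator = new UserValidator(repository.Object);

        Assert.That(validator.IsValid("john"), Is.True);
    }
}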
I'm not sure if I understand your question.
Use mocks to verify collaboration between objects. For example, suppose you have a Login() method that takes a username and password. Now suppose you want this method to log failed login attempts. In your unit test you would create a mock Logger object and set an expectation on it that it will be called. Then you would dependency-inject it into your login class and call your Login() method with a bad username and password to trigger a log message.
The other tool you have in your unit testing tool bag is stubs. Use stubs when you're not testing collaborations but just need to fake dependencies in order to get your class under test to run.
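A minimal sketch of the Login/Logger example above might look like this (the interfaces and the Moq usage here are illustrative assumptions, not code from the answer):

using Moq;
using NUnit.Framework;

public interface ILogger { void LogFailedLogin(string userName); }

public class LoginService
{
    private readonly ILogger logger;
    public LoginService(ILogger logger) { this.logger = logger; }

    public bool Login(string userName, string password)
    {
        // Hypothetical rule: only the password "secret" succeeds.
        if (password == "secret") return true;
        logger.LogFailedLogin(userName);
        return false;
    }
}

[TestFixture]
public class LoginServiceTests
{
    [Test]
    public void Login_BadCredentials_LogsFailedAttempt()
    {
        var logger = new Mock<ILogger>();        // mock: we verify the collaboration
        var service = new LoginService(logger.Object);

        service.Login("bob", "wrong-password");

        logger.Verify(l => l.LogFailedLogin("bob"), Times.Once());
    }
}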
Roy Osherove, the author of The Art of Unit Testing, has a good video on mocks: TDD - Understanding Mock Objects.
Also I recommend going to his website http://artofunittesting.com/ and watching the free videos on the right side under the heading "Unit Testing Videos".
When you are writing a unit test you are testing the outcome and/or behavior of the class under test against an expected outcome and/or behavior.
Expectations can change over the time that you develop the class: new requirements can come in that change how the class should behave, or what the outcome of calling a particular method is. It is never set in stone, and the unit tests and the class under test evolve together.
Initially you might start out with just a few basic tests on a very granular level, which then evolve into more and more tests, some of which might be very particular to the actual implementation of your class under test (at least as far as the observable behavior of that class is concerned).
To some degree you can write out many of your tests against a raw stub of your class under test, that produces the expected behavior but mostly has no implementation yet. Then you can refactor/develop the class "for real".
In my opinion, though, it is a pipe dream to write all your tests up front and then fully develop the class; in my experience, both the tests and the class under test evolve together. Both can be written by the same developer, too.
Then again I am certainly not a TDD purist, just trying to get the most out of unit tests in a pragmatic way.
I'm not sure exactly what the problem is, so I may not accurately answer the question, but I'll give it a try.
Suppose you are writing system A, where A needs to get data (let's say a String, for simplicity) from a provider B, and then A reverses that String and sends it to another system C.
B and C are provided to you, and they are actually interfaces; the real-life implementations may be BImpl and CImpl.
For the purposes of your work, you know that you need to call readData() on system B and sendData(String) on system C. Your friend Bob should know that as well: you shouldn't send the data before you get it. Also, if you get "abcd" you should send "dcba".
It looks like both you and Bob should know this; he writes the tests, and you write the code... where is the problem in that?
Of course real life is more complicated, but you should still be able to model it with simple interactions that you unit test.
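As a sketch of that interaction (hypothetical C# interfaces, using Moq), here is a test Bob could write before the real BImpl and CImpl exist:

using System.Linq;
using Moq;
using NUnit.Framework;

public interface IDataProvider { string ReadData(); }          // system B
public interface IDataReceiver { void SendData(string data); } // system C

public class SystemA
{
    private readonly IDataProvider provider;
    private readonly IDataReceiver receiver;

    public SystemA(IDataProvider provider, IDataReceiver receiver)
    {
        this.provider = provider;
        this.receiver = receiver;
    }

    public void Run()
    {
        // Get the data, reverse it, send it on.
        var data = provider.ReadData();
        var reversed = new string(data.Reverse().ToArray());
        receiver.SendData(reversed);
    }
}

[TestFixture]
public class SystemATests
{
    [Test]
    public void Run_ReversesDataBeforeSending()
    {
        var provider = new Mock<IDataProvider>();
        provider.Setup(p => p.ReadData()).Returns("abcd");
        var receiver = new Mock<IDataReceiver>();

        new SystemA(provider.Object, receiver.Object).Run();

        receiver.Verify(r => r.SendData("dcba"), Times.Once());
    }
}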

Faking a Method of the Object Under Test

Is there a reason why you shouldn't create a partial fake of an object, or fake just one method on the object you are testing, for the sake of testing another method? This might be helpful to save you from making an entire new mock object, or when there is an external dependency in the method you are faking that you can't reasonably get rid of and would like to keep out of all the other unit tests.
The objects you want to do this for are trying to do too many things. In particular, if you have an external dependency, you would normally create an object to isolate that dependency; the Façade pattern is one example of this. If your objects weren't designed with testability in mind, you may have to do some refactoring. Take a look at Michael Feathers' PDF on working with legacy code. He also has a book by the same title that goes into much more detail.
It is a very bad idea to mock/fake part of a class to test another.
Doing this, you are not testing what the real code does under the conditions being tested, which leads to unreliable test results.
It also increases the maintenance burden of the faked part of the class. If the fake is in effect for the whole test program, it also makes other tests on the faked method harder.
You need to ask yourself why you need to fake out the part under test.
If it is because the method is accessing a file or database, then you should define an interface and pass an instance of that interface to the class constructor or method. This allows you to test different scenarios in the same test application.
If it is because you are using singletons, you should rethink your design to make it more testable: removing singletons will remove implicit dependencies and maintenance nightmares.
If you are using static methods/free-standing functions to access data in a registry or settings file, you should really move that out of the function under test and pass the data as a parameter or provide a settings provider interface. This will make the code more flexible and robust.
If it is to break a dependency for the purpose of testing (e.g. faking out a vector method to test a method in a matrix class), then you should not be faking it. You should treat the code under test as defined by the class's public interface: its methods, pre-conditions, post-conditions, invariants, documentation, parameters, and exception specifications.
You can use knowledge of the implementation details to test special edge cases, but trigger those through the main API, not by faking an implementation detail.
For example, suppose you faked std::vector::at() but the implementation switched to use operator[] instead. Your test would break or silently pass.
If the method you want to fake is virtual (as in, not static and not final), then you can subclass your object in your test, override the method in the subclass, and exercise the subclass in the test. No mock-object libraries required.
(Ideally you should consider refactoring; this is not a great long-term solution. But it is a way to get legacy code under test so you can start the refactoring process more easily.)
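A minimal sketch of the idea (all names hypothetical): the test subclass overrides only the virtual method that carries the external dependency.

using NUnit.Framework;

public class ReportGenerator
{
    // Virtual so a test subclass can replace the external dependency.
    protected virtual string FetchRawData()
    {
        // Imagine a database or web-service call here.
        throw new System.NotImplementedException();
    }

    public string BuildReport() => "REPORT: " + FetchRawData().ToUpper();
}

// The test subclass overrides only the dependency-carrying method.
public class TestableReportGenerator : ReportGenerator
{
    protected override string FetchRawData() => "quarterly numbers";
}

[TestFixture]
public class ReportGeneratorTests
{
    [Test]
    public void BuildReport_FormatsFetchedData()
    {
        var generator = new TestableReportGenerator();
        Assert.That(generator.BuildReport(), Is.EqualTo("REPORT: QUARTERLY NUMBERS"));
    }
}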
The Extract and Override technique described in Chapter 3 of Roy Osherove's The Art of Unit Testing does seem to be a way to fake part of the class under test (pp. 71-77). Osherove does not address the concerns raised in some of the other answers to this question.
In addition, Michael Feathers discusses this in Working Effectively with Legacy Code. He terms the resulting class a testing subclass (227) and the technique Subclass and Override Method (401). Now, granted, Feathers is not giving an exposition of pristine techniques that are recommended on new code. But he still gives it serious treatment as a potentially helpful technique.
I also asked my former computer professor about this. He is well-read and currently works full-time in the software industry, where he has advanced rapidly. He said that this technique definitely has a good application, and that there are several dozen classes in the codebase at his company that are under test in this way. He said that, like any technique, it can be overused.
I originally wrote the question when I was new to unit testing and knew next to nothing about dependency injection. Now, after some experience with both, I would add that the need to use this testing technique could be a smell. It may be a sign that you need to rework your approach to dependencies. If the method that needs to be faked is one that is inherited from a base class, it may mean that you need to take the adage "favor composition over inheritance" more seriously. You should inject your dependencies rather than inheriting them.
There are some really nice packages for facilitating this kind of stuff. For instance, from the Mockito docs:
// You can mock concrete classes, not only interfaces
// (assumes a static import: import static org.mockito.Mockito.*;)
LinkedList mockedList = mock(LinkedList.class);

// stubbing
when(mockedList.get(0)).thenReturn("first");
does some real magic that's hard to believe at first. When you call
String firstMember = mockedList.get(0);
you'll get back "first", because of what you said in the "when" statement.

How to determine if an existing class can be unit-tested?

Recently, I took ownership of some C++ code. I am going to maintain this code, and add new features later on.
I know many people say that it is usually not worth adding unit tests to existing code, but I would still like to add some tests which will at least partially cover the code. In particular, I would like to add tests which reproduce bugs which I fixed.
Some of the classes are constructed with some pretty complex state, which can make it more difficult to unit-test.
I am also willing to refactor the code to make it easier to test.
Is there any good article you recommend on guidelines which help to identify classes which are easier to unit-test? Do you have any advice of your own?
While Martin Fowler's book on refactoring is a treasure trove of information, why not take a look at "Working Effectively with Legacy Code."
Also, if you're going to be dealing with classes where there's a ton of global variables or huge amounts of state transitions, I'd put in a lot of integration checks. Separate out as much as possible of the code which interacts with the code you're refactoring, to make sure that all expected inputs, in the order they are received, continue to produce the same outputs. This is critical, as it's very easy to "fix" a subtle bug that might have been addressed somewhere else.
Take notes too. If you do find that there is a bug which another function/class expects and handles properly you'll want to change both at the same time. That's difficult unless you keep thorough records.
Presumably the code was written for a purpose, and a unit test will check if the purpose is met, i.e. the pre-conditions and post-conditions hold for the methods.
If the public class methods are such that you can externally check the state it can be unit tested easily enough (black-box test). If the class state is invisible or if you have to test tricky private methods, your test class may need to be a friend (white-box test).
A class that is hard to unit test will be one that
Has enormous dependencies, i.e. tightly coupled
Is intended to work in a high-volume or multi-threaded environment. There you would use a system test rather than a unit test and the actual output may not be totally determinate.
I've written a fair number of blog posts about unit testing non-trivial C++ code: http://www.lenholgate.com/blog/2004/05/practical-testing.html
I've also written quite a lot about adding tests to existing code: http://www.lenholgate.com/blog/testing/
Almost everything can and should be unit tested. If not directly, then by using mock classes.
Since you decided to refactor your classes, try to use BDD or TDD approach.
To prevent breaking existing functionality, the only way is to have good integration tests, but usually it takes time to execute them all for a complex system.
Without more details on what you do, it is not that easy to give more implementation details. Some are :
use MVP or Presenter First for developing GUIs
use design patterns where appropriate
use function and member pointers, or observer design pattern to break dependencies
I think that if you have to come up with some "measure" of whether a class is testable, you're already fscked. You should be able to tell just by looking at it: can you write an independent program that links to this class alone and makes sure it works?
If a class is so huge that you can't be sure just by looking at it... chances are it probably isn't testable. People who don't know how to make small, distinct interfaces generally don't know how to adhere to any other principle either.
In the end, though, the way to find out whether a class is testable is to try to put it in a harness. If you end up having to pull in half your program to do it, try refactoring. If you find that you can't even perform the most basic refactoring without having to rewrite the entire program, analyze the expense of doing so.
We at IPL published a paper, "It's testing Jim, but not as we know it", which explores the practical problems of testing C++ and suggests some techniques to address them that may well be of use given your question. These techniques are also well supported in Cantata++, our C/C++ unit and integration testing tool.

Useful design patterns for unit testing/TDD?

Reading this question has helped me solidify some of the problems I've always had with unit-testing, TDD, et al.
Since coming across the TDD approach to development I knew that it was the right path to follow. Reading various tutorials helped me understand how to make a start, but they have always been very simplistic - not really something that one can apply to an active project. The best I've managed is writing tests around small parts of my code - things like libraries, that are used by the main app but aren't integrated in any way. While this has been useful it equates to about 5% of the code-base. There's very little out there on how to go to the next step, to help me get some tests into the main app.
Comments such as "Most code without unit tests is built with hard dependencies (i.e. 'new's all over the place) or static methods" and "...it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your class [...] and so on" have made me realise that the next step is understanding how to de-couple code to make it testable.
What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?
Here Mike Clifton describes 24 test patterns from 2004. It's a useful heuristic when designing unit tests.
http://www.codeproject.com/Articles/5772/Advanced-Unit-Test-Part-V-Unit-Test-Patterns
Pass/Fail Patterns
These patterns are your first line of defence (or attack, depending on your perspective) to guarantee good code. But be warned, they are deceptive in what they tell you about the code.
The Simple-Test Pattern
The Code-Path Pattern
The Parameter-Range Pattern
Data Transaction Patterns
Data transaction patterns are a start at embracing the issues of data persistence and communication. More on this topic is discussed under "Simulation Patterns". Also, these patterns intentionally omit stress testing, for example, loading on the server. This will be discussed under "Stress-Test Patterns".
The Simple-Data-I/O Pattern
The Constraint-Data Pattern
The Rollback Pattern
Collection Management Patterns
A lot of what applications do is manage collections of information. While there are a variety of collections available to the programmer, it is important to verify (and thus document) that the code is using the correct collection. This affects ordering and constraints.
The Collection-Order Pattern
The Enumeration Pattern
The Collection-Constraint Pattern
The Collection-Indexing Pattern
Performance Patterns
Unit testing should not just be concerned with function but also with form. How efficiently does the code under test perform its function? How fast? How much memory does it use? Does it trade off data insertion for data retrieval effectively? Does it free up resources correctly? These are all things that are under the purview of unit testing. By including performance patterns in the unit test, the implementer has a goal to reach, which results in better code, a better application, and a happier customer.
The Performance-Test Pattern
Process Patterns
Unit testing is intended to test, well, units... the basic functions of the application. It can be argued that testing processes should be relegated to the acceptance test procedures; however, I don't buy into this argument. A process is just a different type of unit. Testing processes with a unit tester provides the same advantages as other unit testing: it documents the way the process is intended to work, and the unit tester can aid the implementer by also testing the process out of sequence, rapidly identifying potential user interface issues as well. The term "process" also includes state transitions and business rules, both of which must be validated.
The Process-Sequence Pattern
The Process-State Pattern
The Process-Rule Pattern
Simulation Patterns
Data transactions are difficult to test because they often require a preset configuration, an open connection, and/or an online device (to name a few). Mock objects can come to the rescue by simulating the database, web service, user event, connection, and/or hardware with which the code is transacting. Mock objects also have the ability to create failure conditions that are very difficult to reproduce in the real world--a lossy connection, a slow server, a failed network hub, etc.
Mock-Object Pattern
The Service-Simulation Pattern
The Bit-Error-Simulation Pattern
The Component-Simulation Pattern
Multithreading Patterns
Unit testing multithreaded applications is probably one of the most difficult things to do, because you have to set up a condition that by its very nature is intended to be asynchronous and therefore non-deterministic. This topic is probably a major article in itself, so I will provide only a very generic pattern here. Furthermore, to perform many threading tests correctly, the unit tester application must itself execute tests as separate threads so that the unit tester isn't disabled when one thread ends up in a wait state.
The Signalled Pattern
The Deadlock-Resolution Pattern
Stress-Test Patterns
Most applications are tested in ideal environments--the programmer is using a fast machine with little network traffic, using small datasets. The real world is very different. Before something completely breaks, the application may suffer degradation and respond poorly or with errors to the user. Unit tests that verify the code's performance under stress should be met with equal fervor (if not more) than unit tests in an ideal environment.
The Bulk-Data-Stress-Test Pattern
The Resource-Stress-Test Pattern
The Loading-Test Pattern
Presentation Layer Patterns
One of the most challenging aspects of unit testing is verifying that information is getting to the user right at the presentation layer itself and that the internal workings of the application are correctly setting presentation layer state. Often, presentation layers are entangled with business objects, data objects, and control logic. If you're planning on unit testing the presentation layer, you have to realize that a clean separation of concerns is mandatory. Part of the solution involves developing an appropriate Model-View-Controller (MVC) architecture. The MVC architecture provides a means to develop good design practices when working with the presentation layer. However, it is easily abused. A certain amount of discipline is required to ensure that you are, in fact, implementing the MVC architecture correctly, rather than just in word alone.
The View-State Test Pattern
The Model-State Test Pattern
I'd say you need mainly two things to test, and they go hand in hand:
Interfaces, interfaces, interfaces.
Dependency injection: this, in conjunction with interfaces, will help you swap parts at will to isolate the modules you want to test. Want to test your cron-like system that sends notifications to other services? Instantiate it, and substitute components that obey the correct interfaces but are hard-wired to react in the way you want to test for your real-code implementation of everything else. Mail notification? Test what happens when the SMTP server is down by throwing an exception (see the sketch below).
I myself haven't mastered the art of unit testing (and I'm far from it), but this is where my main efforts are currently going. The problem is that I still don't design for tests, and as a result my code has to bend over backwards to accommodate...
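Here is a minimal sketch of that mail-notification example (hypothetical names, C# with Moq): the mailer sits behind an interface, so a test double can simulate the SMTP server being down.

using System;
using Moq;
using NUnit.Framework;

public interface IMailer { void Send(string message); }

public class Notifier
{
    private readonly IMailer mailer;
    public Notifier(IMailer mailer) { this.mailer = mailer; }

    // Returns false instead of crashing when the mail system is unavailable.
    public bool TryNotify(string message)
    {
        try { mailer.Send(message); return true; }
        catch (InvalidOperationException) { return false; }
    }
}

[TestFixture]
public class NotifierTests
{
    [Test]
    public void TryNotify_SmtpServerDown_ReportsFailure()
    {
        // The test double is hard-wired to behave like a dead SMTP server.
        var mailer = new Mock<IMailer>();
        mailer.Setup(m => m.Send(It.IsAny<string>()))
              .Throws(new InvalidOperationException("smtp server down"));

        var notifier = new Notifier(mailer.Object);

        Assert.That(notifier.TryNotify("hello"), Is.False);
    }
}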
Michael Feathers' book Working Effectively With Legacy Code is exactly what you're looking for. He defines legacy code as 'code without tests' and talks about how to get it under test.
As with most things, it's one step at a time. When you make a change or a fix, try to increase the test coverage. As time goes by you'll have a more complete set of tests. The book talks about techniques for reducing coupling and how to fit test pieces between application logic.
As noted in other answers dependency injection is one good way to write testable (and loosely coupled in general) code.
Arrange, Act, Assert is a good example of a pattern that helps you structure your testing code around particular use cases.
Here's some hypothetical C# code that demonstrates the pattern.
using Moq;
using NUnit.Framework;

[TestFixture]
public class TestSomeUseCases {
    // Service we want to test
    private TestableServiceImplementation service;

    // IoC-injected mock of the service that TestableServiceImplementation needs
    private Mock<ISomeService> dependencyMock;

    public void Arrange() {
        // Create a mock of the auxiliary service
        dependencyMock = new Mock<ISomeService>();
        dependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);

        // Create the tested service and inject the mock instance
        service = new TestableServiceImplementation(dependencyMock.Object);
    }

    public void Act() {
        service.ProcessFirstNumber();
    }

    [SetUp]
    public void Setup() {
        Arrange();
        Act();
    }

    [Test]
    public void Assert_That_First_Number_Was_Processed() {
        dependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
    }
}
If you have a lot of scenarios to test, you can extract a common abstract class with the concrete Arrange and Act bits (or just Arrange), and implement the remaining abstract bits and test functions in inherited classes that group related test functions, as sketched below.
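For instance, here is a sketch of that extraction, reusing the hypothetical ISomeService/TestableServiceImplementation from the example above: the base class owns Arrange and Act, and each scenario class supplies its own assertions.

using Moq;
using NUnit.Framework;

public abstract class SomeServiceTestBase {
    protected TestableServiceImplementation service;
    protected Mock<ISomeService> dependencyMock;

    // Shared Arrange: identical for every scenario in the group
    protected virtual void Arrange() {
        dependencyMock = new Mock<ISomeService>();
        dependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);
        service = new TestableServiceImplementation(dependencyMock.Object);
    }

    // Each concrete scenario decides which action to exercise
    protected abstract void Act();

    [SetUp]
    public void Setup() {
        Arrange();
        Act();
    }
}

[TestFixture]
public class WhenProcessingFirstNumber : SomeServiceTestBase {
    protected override void Act() {
        service.ProcessFirstNumber();
    }

    [Test]
    public void Assert_That_First_Number_Was_Processed() {
        dependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
    }
}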
Gerard Meszaros' xUnit Test Patterns: Refactoring Test Code is chock-full of patterns for unit testing. I know you're looking for patterns on TDD, but I think you will find a lot of useful material in this book.
The book is on safari so you can get a really good look at what's inside to see if it might be helpful:
http://my.safaribooksonline.com/9780131495050
have made me realise that the next step is understanding how to de-couple code to make it testable.
What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?
Right on! SOLID is what you are looking for (yes, really). I keep recommending these 2 ebooks, especially the one on SOLID, for the issue at hand.
You also have to understand that it's very hard to introduce unit testing to an existing code base; unfortunately, tightly coupled code is far too common. This doesn't mean you shouldn't do it, but for a good while it'll be just like you mentioned: tests will be concentrated in small pieces.
Over time these grow into a larger scope, but it does depend on the size of the existing code base, the size of the team and how many are actually doing it instead of adding to the problem.
Design patterns aren't directly relevant to TDD, as they are implementation details. You shouldn't try to fit patterns into your code just because they exist, but rather they tend to appear as your code evolves. They also become useful if your code is smelly, since they help resolve such issues. Don't develop code with design patterns in mind, just write code. Then get tests passing, and refactor.
A lot of problems like this can be solved with proper encapsulation. Or, you might have this problem if you are mixing your concerns. Say you've got code that validates a user, validates a domain object, then saves the domain object all in one method or class. You've mixed your concerns, and you aren't going to be happy. You need to separate those concerns (authentication/authorization, business logic, persistence) so you can test them in isolation.
Design patterns help, but a lot of the exotic ones have very narrow problems to which they can be applied. Patterns like composite, command, are used often, and are simple.
The guideline is: if it is very difficult to test something, you can probably refactor it into smaller problems and test the refactored bits in isolation. So if you have a 200 line method with 5 levels of if statements and a few for-loops, you might want to break that sucker up.
So, start by seeing if you can make complicated code simpler by separating your concerns, and then see if you can make complicated code simpler by breaking it up. Of course if a design pattern jumps out at you, then go for it.
Dependency Injection/IoC. Also read up on dependency injection frameworks such as SpringFramework and google-guice. They also target how to write testable code.
Test patterns are design patterns: both are intended to guide the construction of a piece of software. The basic test patterns used in software test automation are many in number; I have selected the following:
The fluent builder test pattern
It is defined as:
"A coding technique known as the 'fluent builder pattern' forces the developer or test automation engineer to create objects sequentially by invoking each setter function one at a time until all necessary properties are defined."
We also see this pattern in the field of software test automation, where we normally use automation frameworks for web-based testing such as Selenium, TestNG and Maven. Although the fluent builder pattern isn't expressly designed for unit tests, it is useful for organising setup steps. A builder class is made up of two components: a number of clearly defined Set methods, each of which is in charge of changing just one aspect of the produced object's state, and a Build method that uses the previously set state to create and return the object. Each setter returns the builder itself, so that all method calls can be chained together. This test pattern is used with a number of programming languages, such as C#, Java and JavaScript, and with testing frameworks like JUnit and NUnit for ASP.NET.
Complex objects are a typical occurrence in our test script solutions: objects with numerous fields, each of which is challenging to construct. The most important advantages of the fluent builder test pattern are:
1. The code is more maintainable.
2. The code of the automated script is readable and simple to understand for static reviews and static analysis, and is even helpful for white-box testing.
3. When you create an automated test script in a language like Java, the possibility of errors becomes smaller when you create objects for different methods.
Understanding when the builder pattern is beneficial is one of the most crucial considerations. As previously stated, if you design objects that are difficult to construct, your code will be of higher quality in those situations. When an object's constructor takes more than a few parameters, and those parameters are themselves objects with nested classes, you should consider using this pattern.
Fluent builder test patterns in Java:
Although it appears straightforward, there is a problem. If there are too many setter methods and the object's construction is complicated, the test engineer may frequently forget to call some of the setters while constructing the object. As a result, many significant object attributes will be null because their setters were never called.
A missed set property can be an expensive problem in terms of development and maintenance for the fundamental entities of many enterprise systems, such as Order or Loan, which may be instantiated in many different areas of the code. The remedy is to make the developer call all necessary setter methods before the build function is called.
In the STLC (Software Testing Lifecycle):
Any automation engineer knows that developing an automated test for an application is not particularly challenging; maintaining the existing tests is challenging instead, especially when you have a large number of automated tests and junior team members who are responsible for maintaining a suite of thousands of tests.
We collaborate with other QA engineers who have a variety of skill sets, and it can be really challenging for some team members to comprehend the code; the longer they spend trying to grasp it, the less productive they are. As the lead architect of the automation framework, you must develop an appropriate architecture for the page objects and tests, to make maintenance simple and code reusable, so that everyone in the STLC is familiar with and has a good understanding of the framework.
Try utilising the fluent builder pattern if setting up the state for multiple unit tests is messy or hard; it makes the test state very easy to read. Because a single function on the builder can encapsulate a lot of state-setup code that is written once and used by multiple tests, employing the fluent builder pattern in unit tests can help to prevent code duplication.
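As a minimal sketch of the pattern in a unit test (hypothetical Order domain, C#): each setter returns the builder so calls chain, and Build produces the object from the accumulated state.

using NUnit.Framework;

public class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

public class OrderBuilder
{
    private string customer = "default customer";
    private decimal total = 0m;

    // Each setter changes one aspect of state and returns the builder for chaining.
    public OrderBuilder WithCustomer(string customer) { this.customer = customer; return this; }
    public OrderBuilder WithTotal(decimal total) { this.total = total; return this; }

    // Build creates the object from the accumulated state.
    public Order Build() => new Order { Customer = customer, Total = total };
}

[TestFixture]
public class OrderTests
{
    [Test]
    public void Builder_ProducesOrderWithConfiguredState()
    {
        var order = new OrderBuilder()
            .WithCustomer("alice")
            .WithTotal(99.95m)
            .Build();

        Assert.That(order.Customer, Is.EqualTo("alice"));
        Assert.That(order.Total, Is.EqualTo(99.95m));
    }
}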

Large amounts of setup code: does it indicate an issue, and what strategies exist to deal with it?

A part of my project interacts with iTunes using COM. The goal of the test in question is to verify that my object asks the iTunes API to export all of the album artwork for a collection of tracks to a file.
I have successfully written a test which can prove my code is doing this; however, to accomplish that I have had to stub out a chunk of the iTunes implementation. While this is to be expected in a unit test, I am concerned at the ratio of stub setup code vs. code doing actual testing.
My questions:
Is the fact that there is more stub setup code than acting code indicative of another underlying problem in my code?
There is a lot of setup code, and I don't believe repeating it per test is a good idea. What is the best way to refactor this code so that the setup code is separate from, but available to, other tests that need to utilise the stubs?
This seems like the kind of question that might have been asked before, so I apologise in advance if I have created a duplicate.
For reference, here is the complete unit test that I am concerned about
[Fact]
public void Add_AddTrackCollection_AsksiTunesToExportArtForEachTrackInCollectionToAFile()
{
    var trackCollection = MockRepository.GenerateStub<IITTrackCollection>(null);
    var track = MockRepository.GenerateStub<IITTrack>(null);
    var artworkCollection = MockRepository.GenerateStub<IITArtworkCollection>(null);
    var artwork = MockRepository.GenerateMock<IITArtwork>(null);
    var artworkCache = new ArtworkCache();

    trackCollection.Stub<IITTrackCollection, int>(collection => { return collection.Count; }).Return(5);
    trackCollection.Stub<IITTrackCollection, IITTrack>(collection => { return trackCollection[0]; }).IgnoreArguments().Return(track);
    track.Stub<IITTrack, IITArtworkCollection>(stub => { return stub.Artwork; }).Return(artworkCollection);
    artworkCollection.Stub<IITArtworkCollection, int>(collection => { return collection.Count; }).Return(1);
    artworkCollection.Stub<IITArtworkCollection, IITArtwork>(collection => { return artworkCollection[0]; }).IgnoreArguments().Return(artwork);

    artwork.Expect<IITArtwork>(stub => { stub.SaveArtworkToFile(null); }).IgnoreArguments().Repeat.Times(trackCollection.Count - 1);
    artwork.Replay();

    artworkCache.Add(trackCollection);

    artwork.VerifyAllExpectations();

    // TODO: refactor all the iTunes fake-out that isn't specific to this test
    // into its own method and call that from the constructor.
}
The need for a lot of setup code indicates that your code under test interacts deeply with a complex interface; that's not necessarily a problem (it's not, per se, a "code smell") though it may suggest that eventually you might put a facade in front of that complex interface and interact with that (at that time testing the facade may require some setup, but your "substantial" application code will be simpler AND simpler to test), at least if you can identify any repetitive patterns of interaction you may be using.
Widespread unit-test frameworks such as junit, unittest, &c, recognize the need for repetitive setup code and thereby define "test case" classes with a setUp method (and a corresponding tearDown one in case you need to undo something there). If you have otherwise disparate groups of tests that yet need SOME common setup code, but also some setup that's per-group but different from other groups, you can share the "common setup code" by inheriting from an appropriate abstract base class, calling a global function, or as otherwise appropriate in your favorite language.
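In NUnit terms, the shared setup Alex describes might look like this sketch (all class names hypothetical): common fixture code lives in a base-class [SetUp]/[TearDown], and each test group layers its own setup on top; NUnit runs base-class [SetUp] methods before derived ones.

using System.Collections.Generic;
using NUnit.Framework;

// Minimal hypothetical fixture and class under test.
public class FakeDatabase
{
    public List<string> Orders { get; } = new List<string>();
    public void Dispose() { Orders.Clear(); }
}

public class OrderQuery
{
    private readonly FakeDatabase database;
    public OrderQuery(FakeDatabase database) { this.database = database; }
    public IEnumerable<string> All() => database.Orders;
}

// Common setup code shared by inheriting test groups.
public abstract class DatabaseTestBase
{
    protected FakeDatabase database;

    [SetUp]
    public void CommonSetUp() { database = new FakeDatabase(); }

    [TearDown]
    public void CommonTearDown() { database.Dispose(); }  // undo what setup created
}

[TestFixture]
public class OrderQueryTests : DatabaseTestBase
{
    private OrderQuery query;

    // Per-group setup; runs after the base-class [SetUp].
    [SetUp]
    public void GroupSetUp() { query = new OrderQuery(database); }

    [Test]
    public void EmptyDatabase_YieldsNoOrders()
    {
        Assert.That(query.All(), Is.Empty);
    }
}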
To address both of your questions:
In both common unit test patterns (Arrange/Act/Assert, and Four-Phase Test), there will almost always be more Arrange code than Act code, since by definition, Act should only contain a single statement. However, it is still a good idea to attempt to minimize the Arrange code as much as possible.
There is more than one way to refactor setup code into reusable code. As Alex writes in his answer, many unit testing frameworks support setup methods. This is called Implicit Fixture Setup, and it is in my opinion something that should be avoided, since it does not communicate intent very well. Instead, I prefer explicit setup methods, usually encapsulated in a Fixture Object.
In general, the need for complex Setup code should always cause you to consider if you can model your API differently. This is not always the case, but when it is, you often end up with a better and more concise API than you started out with. That's one of the advantages of TDD.
For the cases where the setup is complex simply because the input is complex, I recommend AutoFixture, which is a general-purpose Test Data Builder.
Many of the patterns I have used in this answer are described in xUnit Test Patterns, which is an excellent book.