Useful design patterns for unit testing/TDD? - unit-testing

Reading this question has helped me solidify some of the problems I've always had with unit-testing, TDD, et al.
Since coming across the TDD approach to development I knew that it was the right path to follow. Reading various tutorials helped me understand how to make a start, but they have always been very simplistic - not really something that one can apply to an active project. The best I've managed is writing tests around small parts of my code - things like libraries, that are used by the main app but aren't integrated in any way. While this has been useful it equates to about 5% of the code-base. There's very little out there on how to go to the next step, to help me get some tests into the main app.
Comments such as "Most code without unit tests is built with hard dependencies (i.e. 'new's all over the place) or static methods." and "...it's not rare to have a high level of coupling between classes, hard-to-configure objects inside your class [...] and so on." have made me realise that the next step is understanding how to de-couple code to make it testable.
What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?

In this article from 2004, Mike Clifton describes 24 test patterns. It's a useful heuristic when designing unit tests.
http://www.codeproject.com/Articles/5772/Advanced-Unit-Test-Part-V-Unit-Test-Patterns
Pass/Fail Patterns
These patterns are your first line of defence (or attack, depending on your perspective) to guarantee good code. But be warned, they are deceptive in what they tell you about the code.
The Simple-Test Pattern
The Code-Path Pattern
The Parameter-Range Pattern
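As a rough, hypothetical illustration of the Parameter-Range Pattern (the Divider class below is invented for this sketch), a parameterized NUnit test sweeps the unit under test across a range of valid inputs, with the invalid boundary getting its own test:

using NUnit.Framework;

public static class Divider
{
    public static int Divide(int a, int b) => a / b;
}

[TestFixture]
public class ParameterRangeTests
{
    // One test, exercised across a range of parameter values.
    [TestCase(10, 2, 5)]
    [TestCase(9, 3, 3)]
    [TestCase(0, 7, 0)]
    public void Divide_Handles_Valid_Inputs(int a, int b, int expected)
    {
        Assert.AreEqual(expected, Divider.Divide(a, b));
    }

    // The out-of-range value is checked explicitly.
    [Test]
    public void Divide_Rejects_Zero_Divisor()
    {
        Assert.Throws<System.DivideByZeroException>(() => Divider.Divide(1, 0));
    }
}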
Data Transaction Patterns
Data transaction patterns are a start at embracing the issues of data persistence and communication. More on this topic is discussed under "Simulation Patterns". Also, these patterns intentionally omit stress testing, for example, loading on the server. This will be discussed under "Stress-Test Patterns".
The Simple-Data-I/O Pattern
The Constraint-Data Pattern
The Rollback Pattern
Collection Management Patterns
A lot of what applications do is manage collections of information. While there are a variety of collections available to the programmer, it is important to verify (and thus document) that the code is using the correct collection. This affects ordering and constraints.
The Collection-Order Pattern
The Enumeration Pattern
The Collection-Constraint Pattern
The Collection-Indexing Pattern
Performance Patterns
Unit testing should not just be concerned with function but also with form. How efficiently does the code under test perform its function? How fast? How much memory does it use? Does it trade off data insertion for data retrieval effectively? Does it free up resources correctly? These are all things that are under the purview of unit testing. By including performance patterns in the unit test, the implementer has a goal to reach, which results in better code, a better application, and a happier customer.
The Performance-Test Pattern
Process Patterns
Unit testing is intended to test, well, units...the basic functions of the application. It can be argued that testing processes should be relegated to the acceptance test procedures; however, I don't buy into this argument. A process is just a different type of unit. Testing processes with a unit tester provides the same advantages as other unit testing--it documents the way the process is intended to work, and the unit tester can aid the implementer by also testing the process out of sequence, rapidly identifying potential user interface issues as well. The term "process" also includes state transitions and business rules, both of which must be validated.
The Process-Sequence Pattern
The Process-State Pattern
The Process-Rule Pattern
Simulation Patterns
Data transactions are difficult to test because they often require a preset configuration, an open connection, and/or an online device (to name a few). Mock objects can come to the rescue by simulating the database, web service, user event, connection, and/or hardware with which the code is transacting. Mock objects also have the ability to create failure conditions that are very difficult to reproduce in the real world--a lossy connection, a slow server, a failed network hub, etc.
Mock-Object Pattern
The Service-Simulation Pattern
The Bit-Error-Simulation Pattern
The Component-Simulation Pattern
Multithreading Patterns
Unit testing multithreaded applications is probably one of the most difficult things to do because you have to set up a condition that by its very nature is intended to be asynchronous and therefore non-deterministic. This topic is probably a major article in itself, so I will provide only a very generic pattern here. Furthermore, to perform many threading tests correctly, the unit tester application must itself execute tests as separate threads so that the unit tester isn't disabled when one thread ends up in a wait state.
The Signalled Pattern
The Deadlock-Resolution Pattern
Stress-Test Patterns
Most applications are tested in ideal environments--the programmer is using a fast machine with little network traffic, using small datasets. The real world is very different. Before something completely breaks, the application may suffer degradation and respond poorly or with errors to the user. Unit tests that verify the code's performance under stress should be pursued with equal (if not greater) fervor than unit tests in an ideal environment.
The Bulk-Data-Stress-Test Pattern
The Resource-Stress-Test Pattern
The Loading-Test Pattern
Presentation Layer Patterns
One of the most challenging aspects of unit testing is verifying that information is getting to the user right at the presentation layer itself and that the internal workings of the application are correctly setting presentation layer state. Often, presentation layers are entangled with business objects, data objects, and control logic. If you're planning on unit testing the presentation layer, you have to realize that a clean separation of concerns is mandatory. Part of the solution involves developing an appropriate Model-View-Controller (MVC) architecture. The MVC architecture provides a means to develop good design practices when working with the presentation layer. However, it is easily abused. A certain amount of discipline is required to ensure that you are, in fact, implementing the MVC architecture correctly, rather than just in word alone.
The View-State Test Pattern
The Model-State Test Pattern

I'd say you need mainly two things to test, and they go hand in hand:
Interfaces, interfaces, interfaces
Dependency injection; this, in conjunction with interfaces, will help you swap parts at will to isolate the modules you want to test. You want to test your cron-like system that sends notifications to other services? Instantiate it and substitute your real-code implementation of everything else with components obeying the correct interfaces but hard-wired to react in the way you want to test. Mail notification? Test what happens when the SMTP server is down by throwing an exception.
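For instance, here is a minimal C# sketch of that idea (INotifier, FailingNotifier and NotificationScheduler are invented names, not from any real codebase): the test substitutes a hard-wired implementation that throws, so the scheduler's error handling can be exercised without a real SMTP server.

using NUnit.Framework;

public interface INotifier
{
    void Send(string message);
}

// Test double hard-wired to behave like a downed SMTP server.
public class FailingNotifier : INotifier
{
    public void Send(string message) =>
        throw new System.InvalidOperationException("SMTP server down");
}

// The cron-like system under test; its dependency is injected via the interface.
public class NotificationScheduler
{
    private readonly INotifier _notifier;
    public bool LastRunFailed { get; private set; }

    public NotificationScheduler(INotifier notifier) => _notifier = notifier;

    public void Run()
    {
        try { _notifier.Send("tick"); LastRunFailed = false; }
        catch (System.InvalidOperationException) { LastRunFailed = true; }
    }
}

[TestFixture]
public class NotificationSchedulerTests
{
    [Test]
    public void Scheduler_Records_Failure_When_Smtp_Is_Down()
    {
        var scheduler = new NotificationScheduler(new FailingNotifier());
        scheduler.Run();
        Assert.IsTrue(scheduler.LastRunFailed);
    }
}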
I myself haven't mastered the art of unit testing (and I'm far from it), but this is where my main efforts are currently going. The problem is that I still don't design for tests, and as a result my code has to bend over backwards to accommodate...

Michael Feathers' book Working Effectively With Legacy Code is exactly what you're looking for. He defines legacy code as 'code without tests' and talks about how to get it under test.
As with most things, it's one step at a time. When you make a change or a fix, try to increase the test coverage. As time goes by you'll have a more complete set of tests. The book talks about techniques for reducing coupling and how to fit test pieces between application logic.
As noted in other answers dependency injection is one good way to write testable (and loosely coupled in general) code.

Arrange, Act, Assert is a good example of a pattern that helps you structure your testing code around particular use cases.
Here's some hypothetical C# code that demonstrates the pattern.
using Moq;
using NUnit.Framework;

[TestFixture]
public class TestSomeUseCases
{
    // Service we want to test
    private TestableServiceImplementation service;

    // IoC-injected mock of the service that TestableServiceImplementation needs
    private Mock<ISomeService> dependencyMock;

    public void Arrange()
    {
        // Create a mock of the auxiliary service
        dependencyMock = new Mock<ISomeService>();
        dependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);

        // Create the tested service and inject the mock instance
        service = new TestableServiceImplementation(dependencyMock.Object);
    }

    public void Act()
    {
        service.ProcessFirstNumber();
    }

    [SetUp]
    public void Setup()
    {
        Arrange();
        Act();
    }

    [Test]
    public void Assert_That_First_Number_Was_Processed()
    {
        dependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
    }
}
If you have a lot of scenarios to test, you can extract a common abstract class with concrete Arrange & Act bits (or just Arrange) and implement the remaining abstract bits & test functions in the inherited classes that group test functions.
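A hedged sketch of that extraction, reusing the hypothetical types from the example above: the base class owns the Arrange and Act bits, and each scenario subclass supplies only the assertions.

using Moq;
using NUnit.Framework;

public abstract class ProcessFirstNumberScenario
{
    protected TestableServiceImplementation Service;
    protected Mock<ISomeService> DependencyMock;

    [SetUp]
    public void Setup()
    {
        // Arrange: shared by every scenario that inherits this class.
        DependencyMock = new Mock<ISomeService>();
        DependencyMock.Setup(s => s.GetFirstNumber(It.IsAny<int>())).Returns(1);
        Service = new TestableServiceImplementation(DependencyMock.Object);

        // Act: also shared; subclasses only assert.
        Service.ProcessFirstNumber();
    }
}

[TestFixture]
public class WhenProcessingFirstNumber : ProcessFirstNumberScenario
{
    [Test]
    public void First_Number_Is_Fetched_Exactly_Once()
    {
        DependencyMock.Verify(d => d.GetFirstNumber(It.IsAny<int>()), Times.Exactly(1));
    }
}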

Gerard Meszaros' xUnit Test Patterns: Refactoring Test Code is chock full of patterns for unit testing. I know you're looking for patterns on TDD, but I think you will find a lot of useful material in this book.
The book is on safari so you can get a really good look at what's inside to see if it might be helpful:
http://my.safaribooksonline.com/9780131495050

have made me realise that the next step is understanding how to de-couple code to make it testable.
What should I be looking at to help me do this? Is there a specific set of design patterns that I need to understand and start to implement which will allow easier testing?
Right on! SOLID is what you are looking for (yes, really). I keep recommending these 2 ebooks, especially the one on SOLID, for the issue at hand.
You also have to understand that it's very hard if you are introducing unit testing to an existing code base. Unfortunately, tightly coupled code is far too common. This doesn't mean you shouldn't do it, but for a good while it'll be just like you mentioned: tests will be concentrated in small pieces.
Over time these grow into a larger scope, but it does depend on the size of the existing code base, the size of the team and how many are actually doing it instead of adding to the problem.

Design patterns aren't directly relevant to TDD, as they are implementation details. You shouldn't try to fit patterns into your code just because they exist, but rather they tend to appear as your code evolves. They also become useful if your code is smelly, since they help resolve such issues. Don't develop code with design patterns in mind, just write code. Then get tests passing, and refactor.

A lot of problems like this can be solved with proper encapsulation. Or, you might have this problem if you are mixing your concerns. Say you've got code that validates a user, validates a domain object, then saves the domain object all in one method or class. You've mixed your concerns, and you aren't going to be happy. You need to separate those concerns (authentication/authorization, business logic, persistence) so you can test them in isolation.
Design patterns help, but a lot of the exotic ones solve very narrow problems. Patterns like Composite and Command are used often, and are simple.
The guideline is: if it is very difficult to test something, you can probably refactor it into smaller problems and test the refactored bits in isolation. So if you have a 200 line method with 5 levels of if statements and a few for-loops, you might want to break that sucker up.
So, start by seeing if you can make complicated code simpler by separating your concerns, and then see if you can make complicated code simpler by breaking it up. Of course if a design pattern jumps out at you, then go for it.
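To illustrate the separation-of-concerns point, here is a hypothetical sketch (all names are invented) where validation and persistence sit behind their own interfaces, so the business logic can be tested in isolation with fakes for each:

// Invented names for illustration: the mixed-concern version would validate
// and save in one method; splitting the concerns lets each be tested alone.
public interface IOrderValidator
{
    bool IsValid(Order order);
}

public interface IOrderRepository
{
    void Save(Order order);
}

public class Order { public decimal Total; }

public class OrderService
{
    private readonly IOrderValidator _validator;
    private readonly IOrderRepository _repository;

    public OrderService(IOrderValidator validator, IOrderRepository repository)
    {
        _validator = validator;
        _repository = repository;
    }

    // Business logic only: validation and persistence live behind interfaces.
    public bool PlaceOrder(Order order)
    {
        if (!_validator.IsValid(order)) return false;
        _repository.Save(order);
        return true;
    }
}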

Dependency Injection/IoC. Also read up on dependency injection frameworks such as the Spring Framework and Google Guice. They also target how to write testable code.

Test patterns are design patterns: both are intended to guide the construction of a piece of software. The basic test patterns used in software test automation are many in number; I have selected the following:
Fluent builder test pattern
It is defined as:
"A coding technique known as the 'fluent builder pattern' forces the developer or test automation engineer to create objects sequentially by invoking each setter function one at a time until all necessary properties are defined."
This pattern appears in the field of software test automation for web-based testing, where we normally use frameworks such as Selenium, TestNG and Maven. Although the fluent builder pattern isn't used exclusively for unit tests, it is useful for organising setup steps. A builder class is made up of two kinds of components: a number of clearly defined Set methods, each of which is in charge of changing just one aspect of the produced object's state, and a Build method that uses the previously set state to create and return the object. Each Set method returns the builder itself, so all the method calls can be chained together. This pattern is used with a number of programming languages, such as C#, Java and JavaScript, and in testing frameworks such as JUnit and NUnit for ASP.NET.
Complex objects are a typical occurrence in our test-script solutions: objects with numerous fields, each of which is challenging to construct. The most important advantages of the fluent builder test pattern are:
1. The code is more maintainable.
2. The automated script's code is readable and simple to understand during static reviews and static analysis, and is even helpful for white-box testing.
3. When you create an automated test script in any language, like Java, the possibility of errors becomes smaller when you create objects for the different methods.
Understanding the benefits of the builder pattern is one of the most crucial considerations. As previously stated, if you have objects that are difficult to construct, this pattern will raise the quality of your code in those situations. When an object's constructor takes more than a few parameters, and those parameters are themselves objects with nested classes, you should consider using this pattern.
Fluent Builder Test Patterns in Java:
Although it appears straightforward, there is a problem. If there are too many setter methods and object construction is complicated, the test engineer may forget to call some of the setters while constructing the object. As a result, many significant object attributes will be left null because their setters were never called.
Missing a set property can be an expensive mistake, in terms of development and maintenance, for many enterprise systems' fundamental entities, such as Order or Loan, which may be instantiated in many different areas of the code. The fluent builder makes the developer call all necessary setter methods before the build function is called.
In the STLC (Software Testing Lifecycle):
If you are an automation engineer, you will know that developing an automated test for an application is not particularly challenging. Maintaining the existing tests is what's challenging - especially when you have a large number of automated tests, and junior team members responsible for maintaining a suite of thousands of tests.
We collaborate with other QA engineers who have a variety of skill sets, and it can be really challenging for some team members to comprehend the code. The longer they spend trying to grasp the code, the less productive they are. As the lead architect of the automation framework, you must develop an appropriate architecture for the page objects and tests, to make maintenance simple and code reusable, so that everyone in the STLC is familiar with the framework and has a good understanding of it.
Try utilising the fluent builder pattern if setting up the state for multiple unit tests is messy or hard; it makes the test state very easy to read. Because a single function on the builder can encapsulate a lot of state-setup code that is written once and used by multiple tests, employing the fluent builder pattern in unit tests can help prevent code duplication.
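As a minimal illustration (the Loan entity and its fields are invented for this sketch, not taken from any particular framework):

public class Loan
{
    public decimal Amount { get; set; }
    public int TermMonths { get; set; }
    public string Borrower { get; set; }
}

public class LoanBuilder
{
    // Sensible defaults keep each test focused on the one field it cares about.
    private decimal _amount = 1000m;
    private int _termMonths = 12;
    private string _borrower = "any borrower";

    public LoanBuilder WithAmount(decimal amount) { _amount = amount; return this; }
    public LoanBuilder WithTerm(int months) { _termMonths = months; return this; }
    public LoanBuilder ForBorrower(string name) { _borrower = name; return this; }

    // Build applies the accumulated state in one place.
    public Loan Build() => new Loan
    {
        Amount = _amount,
        TermMonths = _termMonths,
        Borrower = _borrower
    };
}

// Usage in a test: var loan = new LoanBuilder().WithAmount(5000m).WithTerm(24).Build();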

Related

Learning About Unit Testing Using When and Should and TDD

The tests at my new job are nothing like the tests I have encountered before.
When they're writing their unit tests (presumably before the code), they create a class starting with "When". The name describes the scenario under which the tests will run (the fixture). They'll create subclasses for each branch through the code. All of the tests within the class start with "should" and they test different aspects of the code after running. So, they will have a method for verifying that each mock (DOC) is called correctly and for checking the return value, if applicable. I am a little confused by this method because it means the exact same execution code is being run for each test and this seems wasteful. I was wondering if there is a technique similar to this that they may have adapted. A link explaining the style and how it is supposed to be implemented would be great. It sounds similar to some approaches of BDD I've seen.
I also noticed that they've moved the repeated calls to "execute" the SUT into the setup methods. This causes issues when they are expecting exceptions, because they can't use built-in tools for performing the check (Python unittest's assertRaises). This also means storing the return value as a backing field of the test class. They also have to store many of the mocks as backing fields. Across class hierarchies it becomes difficult to tell the configuration of each mock.
They also test code a little differently. It really comes down to what they consider an integration test. They mock out anything that steals the context away from the function being tested. This can mean private methods within the same class. I have always limited mocking to resources that can affect the results of the test, such as databases, the file system or dates. I can see some value in this approach. However, the way it is being used now, I can see it leading to fragile tests (tests that break with every code change). I get concerned because without an integration test, in this case, you could be using a 3rd party API incorrectly but your unit tests would still pass. I'd like to learn more about this approach as well.
So, any resources about where to learn more about some of these approaches would be nice. I'd hate to pass up a great learning opportunity just because I don't understand the way they are doing things. I would also like to stop focusing on the negatives of these approaches and see where the benefits come in.
If I understood your explanation in the first paragraph correctly, that's quite similar to what I often do. (Depending on whether the testing framework makes it easy or not. Also, many mocking frameworks don't support it, but spy frameworks like Mockito do better.)
For example see the stack example here which has a common setup (adding things to the stack) and then a bunch of independent tests which each check one thing. Here's still another example, this time one where none of the tests (@Test) modify the common fixture (@Before), but each of them focuses on checking just one independent thing that should happen. If the tests are very well focused, then it should be possible to change the production code to make any single test fail while all other tests pass (I wrote about that recently in Unit Test Focus Isolation).
The main idea is to have each test check a single feature/behavior, so that when tests fail it's easier to find out why it failed. See this TDD tutorial for more examples and to learn that style.
I'm not worried about the same code paths executed multiple times, when it takes a millisecond to run one test (if it takes more than a couple of seconds to run all unit tests, the tests are probably too big). From your explanation I'm more worried that the tests might be too tightly coupled to the implementation, instead of the feature, if it's systematic that there is one test for each mock. The name of the test would be a good indicator of how well structured or how fragile the tests are - does it describe a feature or how that feature is implemented.
About mocking, a good book to read is Growing Object-Oriented Software Guided by Tests. One should not mock 3rd party APIs (APIs which you don't own and can't modify), for the reason you already mentioned, but one should create an abstraction over it which better fits the needs of the system using it and works the way you want it. That abstraction needs to be integration tested with the 3rd party API, but in all tests using the abstraction you can mock it.
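A rough sketch of that advice (the geocoding names here are invented): the abstraction is shaped by your system's needs, the thin adapter over the vendor API is integration-tested, and everything else mocks or fakes the abstraction.

// Your own abstraction, shaped by what the system needs (names invented).
public interface IGeocoder
{
    (double Lat, double Lon) Locate(string address);
}

// Thin adapter over the third-party client; covered by integration tests,
// not unit tests, because we don't own the API behind it.
public class ThirdPartyGeocoder : IGeocoder
{
    public (double Lat, double Lon) Locate(string address)
    {
        // ...delegate to the vendor SDK here...
        throw new System.NotImplementedException();
    }
}

// Everything above the abstraction can be unit tested with a hand-rolled fake.
public class FixedGeocoder : IGeocoder
{
    public (double Lat, double Lon) Locate(string address) => (51.5, -0.1);
}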
First, the pattern that you are using is based on Cucumber - here's a link. The style is from the BDD (Behavior-driven development) approach. It has two advantages over traditional TDD:
Language - one of the tenets of BDD is that the language you use influences the thoughts you have; by forcing you to speak in the language of the end user, you will end up writing different tests than when you write tests from the perspective of a programmer.
Tests lock code - BDD locks the code at the appropriate level. One problem common in testing is that you write a large number of tests, which makes your codebase more brittle: when you change the code you must also change a large number of tests. BDD forces you to lock down the behavior of your code, rather than the implementation of your code. This way, when a test breaks, it is more likely to be meaningful.
It is worth noting that you do not have to use the Cucumber style of testing to achieve these effects, and using it does add an extra layer of overhead. But very few programmers have been successful in keeping the BDD mindset while using traditional xUnit tools (TDD).
It also sounds like you have some scenarios where you would like to say 'When I do X, then verify Y'. Because the current BDD xUnit frameworks only allow you to verify primitives (strings, ints, doubles, booleans...), this usually results in a large number of individual tests (one for each Assert). It is possible to do more complicated verifications using a Golden Master paradigm test tool, such as ApprovalTests. Here's a video example of this.
Finally, here's a link to Dan North's blog - he started it all.

Unit testing programs that mostly interact with external resources

I would like to start doing more unit testing in my applications, but it seems to me that most of the stuff I do is just not suitable to be unit tested. I know how unit tests are supposed to work in textbook examples, but in real-world applications they do not seem of much use.
Some applications I write have very simple logic and complex interactions with things that are outside my control. For instance I would like to write a daemon which reacts to signals sent by some applications, and changes some user settings in the OS. I can see three difficulties:
first I have to be able to talk with the applications and be notified of their events;
then I need to interact with OS whenever I receive a signal, in order to change the appropriate user settings;
finally all of this should work as a daemon.
All these things are potentially delicate: I will have to browse possibly complex APIs and I may introduce bugs, say by misinterpreting some parameters. What can unit testing do for me? I can mock both the external application and the OS, and check that given a signal from the application, I will call the appropriate API method on the OS. This is... well, the trivial part of the application.
Actually most of the things I do involve interaction with databases, the filesystem or other applications, and these are the most delicate parts.
For another example look at my build tool PHPmake. I would like to refactor it, as it is not very well-written, but I fear to do this as I have no tests. So I would like to add some. The point is that the things which may be broken by refactoring may not be caught by unit tests:
One of the things to do is deciding which things are to be built and which ones are already up to date, and this depends on the time of last modification of the files. This time is actually changed by external processes, when some build command is fired.
I want to be sure that the output of external processes is displayed correctly. Sometimes the build commands require some input, and that should also be managed correctly. But I do not know a priori which processes will be run - it may be anything.
Some logic is involved in pattern matching, and this may seem to be a testable part. But the functions which do the pattern matching use (in addition to their own logic) the PHP function glob, which works with the filesystem. If I just mock a tree in place of the actual filesystem, glob will not work.
I could go on with more examples, but the point is the following. Unless I have some delicate algorithms, most of what I do involves interaction with external resources, and this is not suitable for unit testing. More than this, often this interaction is actually the non-trivial part. Still many people see unit testing as a basic tool. What am I missing? How can I learn to be a better tester?
I think you open a number of issues in your question.
Firstly, when your application integrates with external environments such as the OS, other threads, etc., you have to separate (1) the logic that is tied to the external environment and (2) your business code - that is, the stuff your application does. This is no different to how you would separate GUI and server in an application (or web application).
Secondly, you ask if you should test simple logic. I'd say it depends. Often simple fetch/store functionality is nice to have tests for - it's like the foundation of your application, hence it's important. For other very simple business stuff built upon that foundation, you may easily find yourself feeling that you are wasting your time, and mostly you are :-)
Thirdly, refactoring an existing program and testing it in its existing state may be a problem. If your PHP program produces a set of files on the basis of some input, well, maybe that's your entry point to tests. Sure, the tests may be high-level, but it's an easy way to ensure that after the refactoring your program produces the same output. Hence, aim for higher-level tests in that situation during the start phase of your refactoring efforts.
I'd like to recommend some literature, but I can only come up with one title: "Working Effectively with Legacy Code" by Michael Feathers. It's a good start. Another would be "xUnit Test Patterns: Refactoring Test Code" by Gerard Meszaros (although that book is much more sloppy and FULL of copy-paste text).
As regards your issue about existing code bases that aren't currently covered by tests in which you would like to start refactoring, I would suggest reading:
Working Effectively with Legacy Code by Michael Feathers.
That book gives you techniques on how to deal with the issues you might be facing with PHPMake. It provides ways to introduce seams for testing, where there previously weren't any.
Additionally, with code that touches say the file systems, you can abstract the file system calls behind a thin wrapper, using the Adapter Pattern. The unit tests would be against a fake implementation of the abstract interface that the wrapping class implements.
At some point you get to a low enough level where a unit of code can't be isolated for unit testing as these depend on library or API calls (such as in the production implementation of the wrapper). Once this happens integration tests are really the only automated developer tests you can write.
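Putting those two paragraphs together, here is a hypothetical C# sketch of a file-system wrapper (names invented): the disk-backed adapter is covered by integration tests, while unit tests use the in-memory fake.

using System.Collections.Generic;
using System.IO;

// Thin wrapper so callers never touch System.IO directly.
public interface IFileSystem
{
    bool Exists(string path);
    string ReadAllText(string path);
}

// Production adapter: integration-tested, since it touches the real disk.
public class DiskFileSystem : IFileSystem
{
    public bool Exists(string path) => File.Exists(path);
    public string ReadAllText(string path) => File.ReadAllText(path);
}

// In-memory fake for unit tests: no disk, no setup scripts.
public class FakeFileSystem : IFileSystem
{
    private readonly Dictionary<string, string> _files = new Dictionary<string, string>();
    public void Add(string path, string contents) => _files[path] = contents;
    public bool Exists(string path) => _files.ContainsKey(path);
    public string ReadAllText(string path) => _files[path];
}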
I recommend this google tech-talk on unit testing.
The video boils down to
write your code so that it knows as little about how it will be used as possible. The less assumptions your code makes, the easier it is to test. Avoid complex logic in constructors, the use of singletons, static class members, and so on.
isolate your code from the external world (comms, databases, real time), and make sure that your code only talks to your isolation layer. Otherwise, writing tests will be a nightmare in terms of 'fake environment' setup.
unit tests should test stories; that is what we really understand and care for; given a class with a method foo(), testFoo() is uninformative. They actually recommend test names like itShouldCloseConnectionEvenWhenExceptionThrown(). Ideally, your stories should cover enough functionality that you can rebuild the spec from the stories.
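As a C# rendering of that naming idea (the connection and runner types below are invented), the test name states the story and the assertion pins down exactly that behavior:

using NUnit.Framework;

public interface IConnection
{
    void Close();
}

public class RecordingConnection : IConnection
{
    public bool Closed;
    public void Close() => Closed = true;
}

public class QueryRunner
{
    private readonly IConnection _connection;
    public QueryRunner(IConnection connection) => _connection = connection;

    public void Run(System.Action work)
    {
        try { work(); }
        finally { _connection.Close(); }  // the behavior the story pins down
    }
}

[TestFixture]
public class ConnectionHandlingStories
{
    [Test]
    public void It_Should_Close_Connection_Even_When_Exception_Thrown()
    {
        var connection = new RecordingConnection();
        var runner = new QueryRunner(connection);

        Assert.Throws<System.InvalidOperationException>(
            () => runner.Run(() => throw new System.InvalidOperationException()));
        Assert.IsTrue(connection.Closed);
    }
}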
NOTE: the video and this post use Java as an example; however, the main points stand for any language.
"Unit tests" tests one unit of your code. No external tools should be involved. This seems to be complicated for your first app (without knowing to much about it ;)) but the phpMake is unit-testable - I'm sure ... because ant, gradle and maven are unit-testable too ;)!
But of course you can test your first application automated too. There are several different layers one could test an application.
So the task for you is to find an automated way to test your app - be it integration testing or whatever.
E.g. you could write shell scripts which assert some output! With that you make sure your application behaves correctly...
Tests of interactions with external resources are integration tests, not unit tests.
Tests of your code to see how it would behave if particular external interactions had occurred can be unit tests. These should be done by writing your code to use dependency injection, and then, in the unit test, injecting mock objects as dependencies.
For example, consider a piece of code that adds the results of a call to one service to the results of a call to another service:
public interface IService1 { int Call(int parameter); }
public interface IService2 { int Call(int parameter); }

public int AddResults(IService1 svc1, IService2 svc2, int parameter)
{
    return svc1.Call(parameter) + svc2.Call(parameter);
}
You can test this by passing in mock objects for the two services:
private class Service1Returns1 : IService1
{
    public int Call(int parameter) { return 1; }
}

private class Service2Returns1 : IService2
{
    public int Call(int parameter) { return 1; }
}

[Test]
public void Test1And1()
{
    Assert.AreEqual(2, AddResults(new Service1Returns1(), new Service2Returns1(), 0));
}
First of all, if unit testing doesn't seem like it would be much use in your applications, why do you even want to start doing more of it? What is motivating you to care about it? It is definitely a waste of time if a) you do everything perfect the first time and nothing ever changes or b) you decide it's a waste of time and do it poorly.
If you do think that you really want to do unit testing, the answer to your questions is all the same: encapsulation. In your daemon example, you could create an ApplicationEventObservationProxy with a very narrow interface that just implements pass-through methods. The purpose of this class is to do nothing but completely encapsulate the rest of your code from the third-party event-observing library (nothing means nothing -- no logic here). Do the same thing for OS settings. Then you can completely unit test the class that performs actions based on events. I'd recommend having a separate class for the daemon-ness that just wraps your main class -- it will make the testing easier.
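A rough sketch of those seams (all names are invented for illustration):

// Narrow, logic-free seam over the third-party event library.
public interface IApplicationEventSource
{
    event System.Action<string> SettingChangeRequested;
}

// Narrow, logic-free seam over the OS settings API.
public interface IOsSettings
{
    void Apply(string setting);
}

// All the behavior worth unit testing lives here, behind the two seams.
public class SettingsDaemonCore
{
    public SettingsDaemonCore(IApplicationEventSource events, IOsSettings os)
    {
        events.SettingChangeRequested += os.Apply;
    }
}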
There are a couple of benefits to this approach outside of unit testing. One is that if you encapsulate the code that interacts directly with the OS, it's easier to switch it out. This kind of code is particularly prone to breakage outside of your control (i.e., MS patchsets). You will also probably want to support more than one OS, and if the OS specific logic is not tangled with the rest of your logic, it will be easier. The other benefit is that you'll be forced to realize that there is more business logic in your app than you think. :)
Finally, don't forget that unit testing is a foundation for a good product, but not the only ingredient. Having a set of tests that explore and verify the OS API calls you'll be using is a good strategy for the "hard" parts of this problem. You should also have end to end tests that ensure the events in your applications cause the OS setting changes.
As other answers suggested, Working Effectively with Legacy Code by Michael Feathers is a good read. If you have to deal with legacy code and you want to make sure that the systems' interactions work as expected, try writing integration tests first. Then it is more appropriate to write unit tests to test the behaviour of methods that are valued from the requirements point of view. Your tests serve a whole different purpose than the integration tests. Unit tests are more likely to improve the design of your system than to test how everything hangs together.

Makes sense to use TDD for library/API code and BDD as integration tests?

I am a complete newbie to BDD and I would like to understand where does it come into play in a development cycle. In TDD approaches, we would write unit-tests commonly for libraries or apis, we would mock objects and that was great because it could even drive our design. These tests would be written before the actual code which is nice.
I understand that BDD is more about spec/scenario testing and I can see that it is a perfect fit to test a business requirement against the actual code. But what is the best practice for writing these tests? Do we still keep writing individual tests (as in TDD) mocking out dependencies and writing unit-tests for every single thing that could go wrong? Then write our bdd tests? Do we write the bdd tests first? Do we write only bdd tests even against individual components?
I use .NET and usually write asp.net mvc apps but this is more of a theory question and independent of the underlying programming language.
Thanks a lot.
Don't know about the right way, but this is my experience. After analyzing the specification document I try to extract as many different 'stories' as possible and describe them using BDD story files. As you already know, every sentence should start with one of three keywords: given, when and then.
After translating the whole specification to BDD test stories, I write a class that implements the steps, i.e. executes each of the sentences used in the stories.
The next step is to start writing an implementation, which will be called by the methods that execute the script sentences: setting the initial state (given), performing state transitions (when) and checking the destination state of the app (then).
Whenever I need to implement an actual class method I use TDD to do a thorough testing in isolation.
Of course, in order to run the test, parts of the code that are not yet implemented may be temporarily mocked.
BDD is a set of patterns around describing and building models of the behaviour of a system (and, at a higher level, a project vision) which can help us to have conversations about that behaviour, for various levels of granularity.
We use conversational patterns like "Given a context, when this event happens then this outcome should occur", and encourage questions like, "Should it? Always? Are there any other contexts we're missing?"
We can do that at a unit level, a scenario level or even into the analysis space.
I tend to work from the highest level inwards. Here's an article I wrote which describes what that looks like, right the way from project vision to unit tests.
The first bit of code I write will be the scenarios. Here are some scenarios written without any BDD frameworks, just plain old NUnit, showing how you can make these English-readable with domain language, etc.
Then I start with the User Interface. This could be a GUI, web-page, or an interface for another system to use my system. When this is done I can get feedback on whether my users like it. I frequently hard-code data, etc., just so that I can get that feedback.
Once I know roughly what my GUI will look like, I can start creating the behaviour behind it. I usually start with the behaviour of the controller. I will write class-level examples which describe the class's behaviour and responsibilities. I use mocks in place of collaborating classes I haven't written yet. This is the equivalent of TDD, but rather than writing tests which pin the code down so that nobody breaks it, I'm writing examples of how you can use my code, which show how it behaves and why it's valuable so that you can change it safely. I also use Given / When / Then for this! But I tend to use more technical language and don't worry about it being English-readable. Frequently my Given / When / Then are just in comments. Here's an example of class behaviour from the same codebase as the scenarios, so you can see what the difference is.
Hope this helps, and good luck with the BDD!
You can use it similarly to TDD, unit testing or otherwise. The difference is going to come from testing "business behavior". SpecFlow and its Features are one option; the unit testing syntax would look something like this:
[Given(#"a user exists with username ""(.*)"" and password ""(.*)""")]
public void GivenAUserExistsWithUsernameAdminAndPasswordPassword(string userName, string password) {
}
[Then(#"that user should be able to login using ""(.*)"" and ""(.*)""")]
public void ThenThatUserShouldBeAbleToLoginUsingAdminAndPassword(string userName, string password) {
Assert.IsTrue(_authService.IsValidLogin(userName, password));
}

Unit test adoption [closed]

We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache: when our internal Framework changes, we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations i.e. Framework calls Initialize so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Encapsulate these in your codebase and your tests will become healthier.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
Test at the right level for what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets. Stop at mocking these interactions. Verify you talk to your collaborators correctly, not that the proper result from this method call was "42".
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing OO Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework are propagated through.
It is worth it, stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.
Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing -- that should be in a separate test; see the sketch after this list). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change...
Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. no state to verify against).
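Here's a hedged sketch of the stack example from the second point, using .NET's built-in Stack<T>: each test asserts only the one thing it is about.

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackContractTests
{
    [Test]
    public void Pop_Returns_Last_Pushed_Item()
    {
        var stack = new Stack<int>();
        stack.Push(42);

        // Assert only what this test is about; count is covered elsewhere.
        Assert.AreEqual(42, stack.Pop());
    }

    [Test]
    public void Push_Increments_Count()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(1, stack.Count);
    }
}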
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist you stick with TDD. Try to check your unit testing framework: do one RCA (Root Cause Analysis) with your team and identify the problem area.
Fix the unit testing code at the suite level, and do not change your code frequently, especially the function names or other modules.
I would appreciate it if you could share your case study; then we can dig more into the problem area.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing.
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of units tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bring to the surface any breaks in behavior due to changes in the framework, immediate code or other external sources. What this is supposed to do is help you determine whether the behavior changed and the unit tests need to be modified accordingly, or whether a bug was introduced, causing the unit test to fail, and needs to be corrected.
Don't give up; while it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with a price, of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create less dependencies etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too black-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify a new item was actually stored, or do you manage to get a handle to the actual storage to retrieve the item directly where it is stored? The latter makes brittle tests: when you change the implementation, you're breaking the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off trying to change the tests first, and then the framework.
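A small hypothetical example of the difference (the Container class is invented): the robust test verifies storage through the container's own interface rather than reaching into the backing store.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical container under test.
public class Container<T>
{
    private readonly List<T> _items = new List<T>();   // internal detail
    public void Add(T item) => _items.Add(item);
    public T Get(int index) => _items[index];
}

[TestFixture]
public class ContainerTests
{
    [Test]
    public void Stored_Item_Is_Retrievable_Through_The_Public_Interface()
    {
        var container = new Container<string>();
        container.Add("x");

        // Verify via Get(), not by poking at the private list; the test
        // survives a change of backing storage.
        Assert.AreEqual("x", container.Get(0));
    }
}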
I would suggest investing in a test automation tool. If you are using continuous integration, you can make it work in tandem. There are tools out there which will scan your codebase and generate tests for you, then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests and yes, I have to change them if the codebase changes.
It's a fine line: with an automation tool you would definitely have better code coverage. However, with well-written developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
Sometimes designing the TDD tests prompts questions about the design of the application itself. Check that your classes are well designed and your methods are only performing one thing at a time... With good design it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that's increased inertia against refactoring, painful to discard that additional code, etc.

How do I know when to use state based testing versus mock testing?

Which scenarios, areas of an application/system, etc. are best suited for 'classic' state based testing versus using mock objects?
I'm going to tackle it from a TDD/BDD perspective, where the tests are driving the designs.
First off it depends what style of design you buy into, because remember this is about design first. As Martin Fowler discusses in this excellent article Mocks Aren't Stubs there are two school of thoughts and they do produce different types of designs.
If you want to buy into the mockist approach then I highly suggest you start by looking at the mockobjects site and their article Mock Roles, Not Objects. They also have a book coming out.
The truth is even if you do believe that the mock-first style of design is not for you, or that you don't want to do it right through your application (e.g. not when testing your domain/service layer) then you will still want to use test doubles. xUnit Test Patterns explains the different types of test doubles and their purposes.
Personally I never mock domain classes, so I never mock entities/value objects. However I've been trying out the mock objects style approach a lot recently; it does change my designs a bit and I find the working style comfortable. Working in this style in an MVC app I'll probably start with an automated acceptance test, then I'll write a controller test that mocks out any non-domain objects (e.g. repositories/services), then I'll move down to testing those repositories/services, again mocking out their dependencies. I stop when I reach a class with no troublesome dependencies, such as a domain entity/value object. I could go on and test against specific role interfaces which are then implemented by my domain classes, which is what the mockobjects guys would recommend, but I don't currently see a lot of value in that approach.
Obviously it is worth adding that design for test is important here, but remember that although 90% of IoC/mocking/DIP examples show interface-implementation pairs (ICustomerRepository/CustomerRepository) there is a lot of value in instead looking for role interfaces.
You should be using mocks for dependencies. I don't think that it's an either-or; usually you will create mocks for dependencies, set expectations (whether on calls or state) on them, then run the unit under test. Then you would check its state, and verify the expectations on the mocks, afterwards.
Using mock objects doesn't mean you're not doing state-based testing.
When using services, whether my own or third party, I design to interfaces as a matter of course, so I tend to focus on the interactions there. This encourages me to design minimal interfaces.
I check state on anything focusing on value objects, such as simple calculations, lookups, and the like.
The next time you find yourself designing something that typically follows Model/View/Controller-or-Presenter, I highly recommend trying the Presenter First approach (Google it) using interfaces for the Model and View. This will give you a great feel for how to use stubs/mocks effectively.
It's a matter of style: Mockist vs. Classic TDDers.
Personally, I'd go with testing real classes as far as possible. Tone down on mocks as much as possible; use them only to decouple things like IO (filesystems, DB connections, network), third-party components, etc. - things that are slow or difficult to get under test.
As an experienced TDD'er, this is a question that I'm frequently asked by other developers. For me, it's a mistake to get sucked into a Mockist vs. Classicist debate, as such a discussion is misleading. State-based and behaviuour-based unit testing are two different tools in your toolbox, and there's no reason why they should be mutually exclusive.
State-based unit testing is suitable when you want to query the internal properties of an object after talking to its external interface. If one or more of its collaborators involve potentially expensive calls to other objects, then by all means stub those calls and simply disregard collaborators.
Behaviour-based unit testing is a good idea when you want to consider the "how" of a unit test and focus upon discovering relationships between objects, as opposed to the traditional "what" questions of a state-based unit test. If you wish to assert that collaborators are used in a certain way and/or sequential order, use mocks - stubs that provide assertions.
I suggest you focus upon standard unit testing practices - exercise as little production code as possible, and have one assertion per test. This will force you to think "what exactly is it that I want to exercise and assert in this test?", and the answer to that question will help you choose the correct unit testing tools.
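To make the contrast concrete, here is a hedged sketch (the Account/IAuditLog types are invented; Moq supplies the interaction side): the first test asks a state-based "what" question, the second a behaviour-based "how" question.

using Moq;
using NUnit.Framework;

public interface IAuditLog
{
    void Record(string entry);
}

public class Account
{
    private readonly IAuditLog _log;
    public decimal Balance { get; private set; }

    public Account(IAuditLog log) => _log = log;

    public void Deposit(decimal amount)
    {
        Balance += amount;
        _log.Record($"deposit {amount}");
    }
}

[TestFixture]
public class AccountTests
{
    [Test]  // State-based: query the resulting state.
    public void Deposit_Increases_Balance()
    {
        var account = new Account(Mock.Of<IAuditLog>());
        account.Deposit(100m);
        Assert.AreEqual(100m, account.Balance);
    }

    [Test]  // Interaction-based: verify how the collaborator was used.
    public void Deposit_Is_Audited()
    {
        var log = new Mock<IAuditLog>();
        new Account(log.Object).Deposit(100m);
        log.Verify(l => l.Record(It.IsAny<string>()), Times.Once());
    }
}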