Writing maintainable unit tests with mock objects

This is a simplified version of a class I'm writing a unit test for:
class SomeClass {
    void methodA() {
        methodB();
        methodC();
        methodD();
    }

    void methodB() {
        // does something
    }

    void methodC() {
        // does something
    }

    void methodD() {
        // does something
    }
}
While writing the unit tests for this class, I've used EasyMock to mock out the objects used in each method. It was easy to set up the mock objects and their expectations for methods B, C, and D. But to test method A, I have to set up a lot more mock objects and their expectations. Also, I'm testing method A under different conditions, meaning I have to set up the mock objects many times with different expectations.
In the end, my unit test becomes hard to maintain and pretty cluttered. I was wondering if anyone has, or has seen, a good solution to this problem.
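To make the pain concrete, every methodA test ends up repeating a block roughly like this (the Collaborator, its methods, and the constructor injection are made up for this simplified example):

Collaborator collaborator = EasyMock.createMock(Collaborator.class);
// different tests of methodA need slightly different variations of these expectations
EasyMock.expect(collaborator.loadForB()).andReturn(valueForB);
EasyMock.expect(collaborator.loadForC()).andReturn(valueForC);
EasyMock.expect(collaborator.loadForD()).andReturn(valueForD);
EasyMock.replay(collaborator);

new SomeClass(collaborator).methodA();

EasyMock.verify(collaborator);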

If I understand your question correctly, I think this is a matter of design. The nice thing about unit testing is that writing tests often forces you to improve your design. If you need to mock too many things while testing a method, it often means you should split your class into two smaller classes, which will be easier to test (and write, and maintain, and bugfix, and reuse, etc.).
In your case, methodA seems to be at a higher level than methods B, C, and D. You can consider moving it into a higher-level class that wraps SomeClass:
interface ISomeClass {
    void methodB();
    void methodC();
    void methodD();
}

class HigherLevelClass {
    private final ISomeClass someClass;

    public HigherLevelClass(ISomeClass someClass) {
        this.someClass = someClass;
    }

    void methodA() {
        someClass.methodB();
        someClass.methodC();
        someClass.methodD();
    }
}

class SomeClass implements ISomeClass {
    public void methodB() {
        // does something
    }

    public void methodC() {
        // does something
    }

    public void methodD() {
        // does something
    }
}
Now when you are testing methodA, all you need to mock is the small ISomeClass interface and the three method calls.
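A minimal sketch of the resulting test, using EasyMock as in your question (record the three void calls, then replay and verify):

ISomeClass someClass = EasyMock.createMock(ISomeClass.class);
// record the three expected calls
someClass.methodB();
someClass.methodC();
someClass.methodD();
EasyMock.replay(someClass);

new HigherLevelClass(someClass).methodA();

EasyMock.verify(someClass);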

You could extract common setup code into separate (possibly parameterized) methods, then call them whenever appropriate. If the tests for methodA have a very different fixture from the tests of the other methods, there may not be much to put into the @Before method itself, so you need to call the appropriate combination of setup helper methods from the test methods themselves. It is still a bit cumbersome, but better than duplicating code all over the place.
Depending on what unit test framework you use, there may be other options too, but the above should work with any framework.
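For example, with JUnit and EasyMock it could look something like this (the Collaborator and its fetch method are placeholders for whatever your real fixture needs):

import org.easymock.EasyMock;
import org.junit.Before;
import org.junit.Test;

public class SomeClassTest {
    private Collaborator collaborator;

    @Before
    public void setUp() {
        // only the setup that every test really shares
        collaborator = EasyMock.createMock(Collaborator.class);
    }

    // parameterized setup helper; each test calls the combination it needs
    private void expectFetch(String key, String value) {
        EasyMock.expect(collaborator.fetch(key)).andReturn(value);
    }

    @Test
    public void methodAWhenBothFetchesSucceed() {
        expectFetch("b", "valueForB");
        expectFetch("c", "valueForC");
        EasyMock.replay(collaborator);
        // ... exercise methodA and verify ...
    }
}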

This is an example of a Fragile Test: the mock setups have too intimate a knowledge of the SUT (system under test).
I don't know EasyMock, but with Moq you don't need to set up void methods. However, with Moq the methods would have to be public or protected, and virtual.

For each test you're writing, consider the behaviour which is valuable for that test. You'll have some contexts you're setting up which the behaviour relies on, and some outcomes as a result of the behaviour that you want to verify.
Set up relevant contexts, verify the outcomes, and use NiceMocks for everything else.
I prefer Mockito (Java) or Moq (.NET) which work this way by default. Here's Mockito's page on Mockito vs. EasyMock so you can get the idea (EasyMock didn't have NiceMock before Mockito came along):
http://code.google.com/p/mockito/wiki/MockitoVSEasyMock
You can probably use EasyMock's NiceMock in a similar way. Hopefully this will help you detangle your tests. You can always import both frameworks and use them alongside each other / incrementally switch over if it helps.
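As a sketch of that context/outcome style with Mockito (the Collaborator, its methods, and the constructor injection are invented for the example):

// a nice mock by default: unrelated calls are simply ignored
Collaborator collaborator = Mockito.mock(Collaborator.class);

// context the behaviour relies on
Mockito.when(collaborator.isEnabled()).thenReturn(true);

// exercise the behaviour
new SomeClass(collaborator).methodA();

// verify only the outcome this test cares about
Mockito.verify(collaborator).doSomething();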
Good luck!

I'm testing method A under different conditions, meaning I have to set up the mock objects many times with different expectations.
If you care about what methodA is doing and which collaborator functions have to be called, then you have to set up different expectations... I don't see how you can skip this step.
If you testLogout, you would expect a call to myCollaborator.logout(); if you testLogin, you would expect something like myCollaborator.login().
If you have many methods with lots of different expectations, it may be a sign that you should split your class into collaborators.
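Sketched with a mocking framework (here Mockito; underTest and myCollaborator stand for whatever your fixture sets up in @Before):

@Test
public void testLogin() {
    underTest.login();
    Mockito.verify(myCollaborator).login();   // expectation specific to this scenario
}

@Test
public void testLogout() {
    underTest.logout();
    Mockito.verify(myCollaborator).logout();  // a different scenario, a different expectation
}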

Ignoring mock calls during setup phase

I often face the problem that mock objects need to be brought into a certain state before the "interesting" part of a test can start.
For example, let's say I want to test the following class:
struct ToTest
{
    virtual void onEnable();
    virtual void doAction();
};
Therefore, I create the following mock class:
struct Mock : ToTest
{
    MOCK_METHOD0(onEnable, void());
    MOCK_METHOD0(doAction, void());
};
The first test is that onEnable is called when the system that uses a ToTest object is enabled:
TEST(SomeTest, OnEnable)
{
    Mock mock;
    // register mock somehow

    // interesting part of the test
    EXPECT_CALL(mock, onEnable());
    EnableSystem();
}
So far, so good. The second test is that doAction is called when the system performs an action and is enabled. Therefore, the system should be enabled before the interesting part of the test can start:
TEST(SomeTest, DoActionWhenEnabled)
{
    Mock mock;
    // register mock somehow

    // initialize system
    EnableSystem();

    // interesting part of the test
    EXPECT_CALL(mock, doAction());
    DoSomeAction();
}
This works but gives an annoying warning about an uninteresting call to onEnable. There seem to be two common fixes for this problem:
Using NiceMock<Mock> to suppress all such warnings; and
Adding an EXPECT_CALL(mock, onEnable()) statement.
I don't want to use the first method since there might be other uninteresting calls that really should not happen. I also don't like the second method since I already tested (in the first test) that onEnable is called when the system is enabled; hence, I don't want to repeat that expectation in all tests that work on enabled systems.
What I would like to be able to do is say that all mock calls up to a certain point should be completely ignored. In this example, I want expectations to be only checked starting from the "interesting part of the test" comment.
Is there a way to accomplish this using Google Mock?
The annoying thing is that the necessary functions are there: gmock/gmock-spec-builders.h defines Mock::AllowUninterestingCalls and others to control the generation of warnings for a specific mock object. Using these functions, it should be possible to temporarily disable warnings about uninteresting calls.
The catch is, however, that these functions are private. The good thing is that class Mock has some template friends (e.g., NiceMock) that can be abused. So I created the following workaround:
namespace testing
{
    // HACK: NiceMock<> is a friend of Mock, so we specialize it here to a type that
    // is never used, to be able to temporarily make a mock nice. If this feature
    // were just supported, we wouldn't need this hack...
    template<>
    struct NiceMock<void>
    {
        static void allow(const void* mock)
        {
            Mock::AllowUninterestingCalls(mock);
        }

        static void warn(const void* mock)
        {
            Mock::WarnUninterestingCalls(mock);
        }

        static void fail(const void* mock)
        {
            Mock::FailUninterestingCalls(mock);
        }
    };

    typedef NiceMock<void> UninterestingCalls;
}
This lets me access the private functions through the UninterestingCalls typedef.
The flexibility you're looking for is not possible in gmock, by design. From the gmock Cookbook (emphasis mine):
[...] you should be very cautious about when to use naggy or strict mocks, as they tend to make tests more brittle and harder to maintain. When you refactor your code without changing its externally visible behavior, ideally you shouldn't need to update any tests. If your code interacts with a naggy mock, however, you may start to get spammed with warnings as the result of your change. Worse, if your code interacts with a strict mock, your tests may start to fail and you'll be forced to fix them. Our general recommendation is to use nice mocks (not yet the default) most of the time, use naggy mocks (the current default) when developing or debugging tests, and use strict mocks only as the last resort.
Unfortunately, this is an issue that we, and many other developers, have encountered. In his book, Modern C++ Programming with Test-Driven Development, Jeff Langr writes (Chapter 5, on Test Doubles):
What about the test design? We split one test into two when we changed from a hand-rolled mock solution to one using Google Mock. If we expressed everything in a single test, that one test could set up the expectations to cover all three significant events. That’s an easy fix, but we’d end up with a cluttered test.
[...]
By using NiceMock, we take on a small risk. If the code later somehow changes to invoke another method on the [...] interface, our tests aren’t going to know about it. You should use NiceMock when you need it, not habitually. Seek to fix your design if you seem to require it often.
You might be better off using a different mock class for your second test.
class MockOnAction : public ToTest {
public:
    // This is a non-mocked function that does nothing
    virtual void onEnable() {}

    // Mocked function
    MOCK_METHOD0(doAction, void());
};
In order for this test to work, you can have onEnable do nothing (as shown above). Or it can do something special like calling the base class or doing some other logic.
virtual void onEnable() {
    // You could call the base class version of this function
    ToTest::onEnable();

    // or hardcode some other logic
    // isEnabled = true;
}

Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc

    var myInfo = GetFooInfo(filename);

    // create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties, and a change in either of them could break something, should unit tests be created for both to determine that the output is as expected?
I like to err on the side of caution and be more wary about things breaking, ensuring that maintenance later on down the road is easier, but I feel very skeptical about adding very similar tests to the test project. Would this be bad practice, or is there any way to do this more efficiently?
I'm marking this as language-agnostic, but just in case it matters I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question: if "you have two methods that return two different properties, and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        if (fooFileInfoProvider == null)
            throw new ArgumentNullException(nameof(fooFileInfoProvider));
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing the method itself. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.

Unit testing the public interface of a class when the internals are already covered?

I've covered the internal implementation of my class with unit tests. Is it still useful to test my public interface now (which is one function that is little more than a chain of calls to the internal methods)?
It feels like I would be adding additional tests that test the same thing(s).
Or is testing the public interface more of an integration test (even though I've stubbed my data access so it's all processed in memory), in that it tests whether all the unit-tested methods work well together?
Example:
internal bool internalCheck() {
    // complex logic that is being unit tested
}

internal void internalDoSomething() {
    // do stuff. is being unit tested
}

public void DoIt() {
    if (internalCheck()) {
        internalDoSomething();
    }
}
Now if I am going to add tests that test DoIt, I will basically end up retesting all the logic flows of internalCheck and asserting that, when it returns true, internalDoSomething is called.
Hmm, I think I've figured it out:
I need to mock the class itself and check just that the correct calls are being made, pretty much ignoring real inputs/outputs. To test the public method, rather than retesting internalCheck, I use a mocking framework to have internalCheck return whichever flow I want to test, and then verify that the correct methods have been called.
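Sketched with Java's Mockito spy, a partial mock of the real class (an equivalent partial-mock facility exists in .NET frameworks such as Moq; the class name is a placeholder, and the internal methods must be visible to the test and overridable for the framework to intercept them):

MyClass spy = Mockito.spy(new MyClass());

// steer the flow under test instead of re-testing internalCheck's logic
Mockito.doReturn(true).when(spy).internalCheck();

spy.DoIt();

Mockito.verify(spy).internalDoSomething();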
It depends on how you have defined your unit. Usually the unit is the class, so this would still be a unit test.
Even if you have tested the internal methods, it is still important to test the public interface, to make sure that the internal methods work together.

TDD Function Tests

Should I write unit tests for all nested methods, or is writing one test for the caller enough?
For instance:
void Main()
{
    var x = new A().AFoo();
}

public class A
{
    public int AFoo()
    {
        // some logic
        var x = new B().BFoo();
        // might have some logic
        return x;
    }
}

public class B
{
    public int BFoo()
    {
        // some logic
        return ???;
    }
}
Is it sufficient to write a unit test for the Main() method, or do I need to write tests for Main(), A.AFoo(), and B.BFoo()? How deep should I go?
Thanks in advance.
A testing purist would say that you need to create unit tests for classes A and B.
Each class should have all methods tested. If a method can do more than one thing (if you have an if statement, for example), then you should have a test for each path. If the tests are getting too complicated, it's probably a good idea to refactor the code to make the tests simpler.
Note that, as it stands right now, it's hard to test A in isolation because it depends on B. If B is simple, as it is right now, that's probably OK. You might want to name your tests for A integration tests, because technically they test both A and B together. Another option would be to have the method AFoo accept as a parameter the instance of B on which it operates. That way you could mock an instance of B and have a true unit test, as in the sketch below.
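For example, a Java sketch using Mockito and a JUnit assertion (the stubbed value is arbitrary):

public class A
{
    public int AFoo(B b)   // B is supplied by the caller instead of created inside
    {
        // some logic
        return b.BFoo();
    }
}

// in the test, B is replaced by a mock, so only A's logic is exercised
B b = Mockito.mock(B.class);
Mockito.when(b.BFoo()).thenReturn(42);
assertEquals(42, new A().AFoo(b));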
Unit tests are supposed to work on units; in the case of OOP, the units are classes and their methods. That means that you should write a separate test class for each class under consideration, and at least one testing method for each method provided in the class. What is more, it is important to isolate the classes as much as possible so that a bug in class B does not cause a failure in class A. This is why Inversion of Control (Dependency Injection) is so useful: if you can inject the instance of class B into the instance of class A, you can change B to be just a mock object.
One of the reasons we write unit tests is to explain, in code, exactly how the methods of each class are expected to behave under all conditions, including and especially edge cases. It is hard to detail the expected behaviour of class B by writing tests on the main method.
I would recommend reading some material online explaining test-driven development and how to mock objects, and perhaps using some of the excellent mocking libraries that exist, such as JMock. See this question for more links.
Unit tests should help you reduce your debugging effort. So if you write unit tests only for AFoo and none for BFoo, and one of your tests fails, you probably won't know whether the problem is in class A or class B. Writing tests for BFoo too will help you isolate the error in a smaller amount of time.

Should I mock here or use a real implementation?

I'm writing a program-arguments parser, just to get better at TDD, and I'm stuck on the following problem. Say I have my parser defined as follows:
class ArgumentsParser {
    private final ArgumentsConfiguration configuration;

    public ArgumentsParser(ArgumentsConfiguration configuration) {
        this.configuration = configuration;
    }

    public void parse(String[] programArguments) {
        // all the stuff for parsing
    }
}
and I imagine having an ArgumentsConfiguration implementation like:
class ArgumentsConfiguration {
    private Map<String, Class> map = new HashMap<String, Class>();

    public void addArgument(String argName, Class valueClass) {
        map.put(argName, valueClass);
    }

    // get configured arguments methods etc.
}
This is my current stage. For now, in the test I use:
@Test
public void shouldResultWithOneAvailableArgument() {
    ArgumentsConfiguration config = prepareSampleConfiguration();
    config.addArgument("mode", Integer.class);

    ArgumentsParser parser = new ArgumentsParser(config);
    parser.parse(new String[] { "mode", "1" });
    // ....
}
My question is whether such a way is correct. I mean, is it OK to use the real ArgumentsConfiguration in tests, or should I mock it out? The default (current) implementation is quite simple (just a wrapped Map), but I imagine it could be more complicated, like fetching the configuration from some kind of datasource. Then it'd be natural to mock such "expensive" behaviour. But what is the preferred way here?
EDIT:
Maybe more clearly: should I mock ArgumentsConfiguration even without writing any implementation (just defining its public methods), use the mock for testing, and deal with the real implementation(s) later? Or should I use the simplest implementation in tests, and let them cover it indirectly? But if so, what about testing another Configuration implementation provided later?
Then it'd be natural to mock such "expensive" behaviour.
That's not the point. You're not mocking complex classes.
You're mocking to isolate classes completely.
Complete isolation assures that the tests demonstrate that classes follow their interface and don't have hidden implementation quirks.
Also, complete isolation makes debugging a failed test much, much easier. It's either the test, the class under test or the mocked objects. Ideally, the test and mocks are so simple they don't need any debugging, leaving just the class under test.
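Concretely, complete isolation of ArgumentsParser would look something like this with Mockito (the getter on ArgumentsConfiguration is hypothetical, since the original only sketches that part of the class):

ArgumentsConfiguration config = Mockito.mock(ArgumentsConfiguration.class);
Mockito.when(config.getValueClass("mode")).thenReturn(Integer.class);  // hypothetical getter

ArgumentsParser parser = new ArgumentsParser(config);
parser.parse(new String[] { "mode", "1" });
// assert on the parsing result; a failure here can only come from ArgumentsParser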
The correct answer is that you should mock anything that you're not trying to test directly (e.g., any dependencies that the object under test has that do not pertain directly to the specific test case).
In this case, because your ArgumentsConfiguration is so simple, I'd recommend using the real implementation until your requirements demand something more complicated. There doesn't seem to be any logic in your ArgumentsConfiguration class, so it's safe to use the real object. If the time comes when the configuration is more complicated, the approach to take would probably be not to create a configuration that talks to some datasource, but instead to generate the ArgumentsConfiguration object from that datasource. Then you could have a test that makes sure the configuration is generated properly from the datasource, and you don't need unnecessary abstractions.
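A rough sketch of that last idea (ConfigDataSource and ArgumentDefinition are hypothetical names for whatever the datasource provides):

// Only this factory needs a test against the datasource; the parser itself
// keeps taking a plain, value-like ArgumentsConfiguration.
class ArgumentsConfigurationFactory {
    public ArgumentsConfiguration createFrom(ConfigDataSource dataSource) {
        ArgumentsConfiguration config = new ArgumentsConfiguration();
        for (ArgumentDefinition definition : dataSource.readArgumentDefinitions()) {
            config.addArgument(definition.getName(), definition.getValueClass());
        }
        return config;
    }
}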