Test: stub vs real implementation

I have been wondering about the general use of stubs for unit tests versus using real (production) implementations, and specifically whether we run into a rather nasty problem when using stubs, as illustrated here:
Suppose we have this (pseudo) code:
public class A {
    public int getInt() {
        if (..) {
            return 2;
        } else {
            throw new AException();
        }
    }
}
public class B {
    public void doSomething() {
        A a = new A();
        try {
            a.getInt();
        } catch (AException e) {
            throw new BException(e);
        }
    }
}
public class UnitTestB {
    @Test
    public void throwsBExceptionWhenFailsToReadInt() {
        // Stub A to throw AException() when getInt is called
        // verify that we get a BException on doSomething()
    }
}
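Fleshed out, a minimal sketch of that stubbed test with JUnit 4 and Mockito might look like this. It assumes B is refactored to take its A as a constructor argument so the stub can be injected (the snippet above constructs A internally):

import org.junit.Test;
import org.mockito.Mockito;

public class UnitTestB {
    @Test(expected = BException.class)
    public void throwsBExceptionWhenFailsToReadInt() {
        // Stub A to throw AException when getInt() is called
        A stubbedA = Mockito.mock(A.class);
        Mockito.when(stubbedA.getInt()).thenThrow(new AException());

        // doSomething() should translate the AException into a BException
        new B(stubbedA).doSomething();
    }
}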
Now suppose that at some point later, when we have written hundreds more tests, we realize that A shouldn't really throw AException but AOtherException instead. We correct this:
public class A {
    public int getInt() {
        if (..) {
            return 2;
        } else {
            throw new AOtherException();
        }
    }
}
We have now changed the implementation of A to throw AOtherException, and we then run all our tests. They pass. What's not so good is that the unit test for B passes but is wrong. If we put A and B together in production at this stage, B will propagate AOtherException because its implementation still assumes A throws AException.
If we instead had used the real implementation of A for our throwsBExceptionWhenFailsToReadInt test, then it would have failed after the change of A because B wouldn't throw the BException anymore.
It's just a frightening thought that if we had thousands of tests structured like the above example and we changed one tiny thing, all the unit tests would still pass even though the behavior of many of the units would be wrong! I may be missing something, and I'm hoping some of you clever folks could enlighten me as to what it is.

When you say
We have now changed the implementation of A to throw AOtherException and we then run all our tests. They pass.
I think that's incorrect. You obviously haven't implemented your unit test, but class B will not catch AException and thus not throw BException, because AException is now AOtherException. Maybe I'm missing something, but wouldn't your unit test fail in asserting that BException is thrown at that point? You would need to update your class code to appropriately handle the new exception type, AOtherException.

If you change the interface of class A then your stub code will not build (I assume you use the same header file for production and stub versions) and you will know about it.
But in this case you are changing the behaviour of your class, because the exception type is not really part of the interface. Whenever you change the behaviour of your class, you really have to find all the stub versions and check whether you need to change their behaviour as well.
The only solution I can think of for this particular example is to use a #define in the header file to define the exception type. This could get messy if you need to pass parameters to the exception's constructor.
Another technique I have used (again, not applicable to this particular example) is to derive both the production and stub classes from an abstract base class. This separates the interface from the implementation, but you still have to look at both implementations if you change the behaviour of the class.
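Translating that idea into Java terms (a rough sketch with made-up names): the contract is separated from the implementations, but note that the stub's behaviour still has to be kept in sync by hand:

// Shared contract: the exception behaviour is NOT expressed here,
// which is exactly why stubs can drift out of sync.
public interface IntSource {
    int getInt();
}

// Production implementation, after the change
public class RealIntSource implements IntSource {
    public int getInt() {
        throw new AOtherException();
    }
}

// Hand-rolled stub: still throws the old exception until someone notices
public class StubIntSource implements IntSource {
    public int getInt() {
        throw new AException();
    }
}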

It's normal that the test you wrote using stubs doesn't fail, since it is intended to verify that object B communicates well with A and can handle the response from getInt(), assuming that getInt() throws an AException. It is not intended to check whether getInt() really throws an AException at any point.
You can call that kind of test you wrote a "collaboration test".
Now what you need, to be complete, is the counterpart test that checks whether getInt() will ever throw an AException (or an AOtherException, for that matter) in the first place. That's a "contract test".
J. B. Rainsberger has a great presentation on the contract and collaboration tests technique.
With that technique, here's how you'd typically go about solving the whole "false green test" problem:
Identify that getInt() now needs to throw an AOtherException rather than an AException.
Write a contract test verifying that getInt() does throw an AOtherException under the given circumstances.
Write the corresponding production code to make the test pass.
Realize you need collaboration tests for that contract test: for each collaborator using getInt(), can it handle the AOtherException we're going to throw?
Implement those collaboration tests (let's say you don't yet notice there's already a collaboration test checking for AException).
Write production code that matches the tests, and realize that B expects an AException when calling getInt() but not an AOtherException.
Refer to the existing collaboration test containing the stubbed A throwing an AException, realize it's obsolete, and delete it.
This is if you start using that technique just now. Assuming you had adopted it from the start, there wouldn't be any real problem, since what you'd naturally do is change the contract test of getInt() to make it expect AOtherException, and change the corresponding collaboration tests just after that (the golden rule is that a contract test always goes with a collaboration test, so with time it becomes a no-brainer).
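As a rough sketch of that pairing (JUnit 4 and Mockito; how the failing condition is arranged is hypothetical):

// Contract test: does A actually throw AOtherException?
public class AContractTest {
    @Test(expected = AOtherException.class)
    public void getIntThrowsAOtherExceptionWhenReadFails() {
        A a = new A(); // assume the failing condition is arranged here
        a.getInt();
    }
}

// Matching collaboration test: can B handle what the contract promises?
public class BCollaborationTest {
    @Test(expected = BException.class)
    public void doSomethingWrapsAOtherException() {
        A stubbedA = Mockito.mock(A.class);
        Mockito.when(stubbedA.getInt()).thenThrow(new AOtherException());
        new B(stubbedA).doSomething(); // assumes B takes A as a dependency
    }
}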
If we instead had used the real implementation of A for our throwsBExceptionWhenFailsToReadInt test, then it would have failed after the change of A because B wouldn't throw the BException anymore.
Sure, but this would have been a whole other kind of test - an integration test, actually. An integration test verifies both sides of the coin: does object B handle response R from object A correctly, and does object A ever respond that way in the first place? It's only normal for a test like this to fail when the implementation of A used in the test starts to respond R' instead of R.
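For contrast, a sketch of such an integration-style test: it uses the real A, so it fails as soon as A's behaviour changes (again assuming B can be handed an A instance):

public class BIntegrationTest {
    @Test(expected = BException.class)
    public void doSomethingWrapsTheRealExceptionFromA() {
        // No stub: the real A decides which exception is thrown here,
        // so both sides of the coin are exercised in one test.
        A realA = new A(); // assume the failing condition is arranged here
        new B(realA).doSomething();
    }
}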

The specific example you have mentioned is a tricky one: the compiler cannot catch it or notify you. In this case, you'd have to be diligent and find all usages and update the corresponding tests.
That said, this type of issue should be a fraction of the tests - you cannot wave away the benefits just for this corner case.
See also: TDD how to handle a change in a mocked object - there was a similar discussion on the testdrivendevelopment forums (linked in the above question). To quote Steve Freeman (of GOOS fame and a proponent of interaction-based tests):
All of this is true. In practice, combined with a judicious combination of higher level tests, I haven't seen this to be a big problem. There's usually something bigger to deal with first.

Ancient thread, I know, but I thought I'd add that JUnit has a really handy feature for exception handling. Instead of doing try/catch in your test, tell JUnit that you expect a certain exception to be thrown by the class:
@Test(expected = AOtherException.class)
public void ensureCorrectExceptionForA() {
    A a = new A();
    a.getInt();
}
Extending this to your class B, you can omit some of the try/catch and let the framework detect the correct usage of exceptions.
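Applied to the B test from the question, that would look something like this (a sketch; it assumes B can be constructed around a failing A):

@Test(expected = BException.class)
public void throwsBExceptionWhenFailsToReadInt() {
    // No try/catch needed: JUnit fails the test
    // unless a BException escapes doSomething()
    new B(new A()).doSomething(); // assume A is arranged to fail
}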

Related

Can method call be tested without Mockito.verify?

If I need to test whether a method within the class under test has been called or not, can it be done without Mockito (or any mocking tool, for that matter)?
The reason I'm asking is that wherever I read about Mockito and similar tools, it says one should never mock the CUT, only its dependencies (that part is clear).
So, if that's the case, then there are only 2 options left:
there is some other way of testing it without mocking,
or
the fact that the method was called should not itself be tested, but rather some side effect or the method's return value.
For example (trivial and not real-world): class MyClass can have 2 methods, A() and B(). A() conditionally calls B() based on some internal state.
After arranging the state and acting by calling A(), we want to assert that B() was called.
Either it's not possible without mocking the whole CUT, or 2 methods like this in a single class are always an SRP-violation smell and call for a redesign where B() should actually be a (mocked) dependency of the MyClass CUT.
So, what's correct?
Usually I tend to not even use spies, instead I prefer to write my code in a way that for any class I write:
I test only non-private methods, since they're the entry points into the class under test. So, in your example, if a() calls b(), maybe b() should be private and, as a consequence, should not be tested. To generalize: a() is something that the class "can do" (a behavior), so I test the behavior, not the method itself. If this behavior internally calls other things, that's an internal matter of the class; if possible, I make no assumptions about how the class works internally, and always prefer "black-box" testing.
I only test "one" non-private method in a test.
All the methods should return something (best option), or at least call dependencies, or change the internal state of the object under test. The list of dependencies is always clear to understand: I can't instantiate the CUT without supplying its dependencies. For example, constructor dependency injection is a good way of doing this. I mock only dependencies, and never mock/spy the CUT. Dependencies are never static, but injected.
Now with these simple rules, the need to "test if a method within the class under test has been called or not" basically boils down to one of the following:
You're talking about a private method. In this case, don't test it; test only public things.
The method is public. In this case, you explicitly call it in the unit test, so it's irrelevant.
Now let's ask why you would want to test whether a method within the CUT has been called or not:
If you want to make sure that it changed something within the class - in other words, that its internal state has changed - check in the test that the change was indeed made to the state, by calling another method that allows you to query that state.
If this "something" is code that is managed by a dependency, create a mock of this dependency and verify that it was called with the expected parameters.
Take a look at the Mockito Documentation (https://static.javadoc.io/org.mockito/mockito-core/3.0.0/org/mockito/Mockito.html#13)
When using a Spy, you can "replace" a method in the same class that is under test:
@ExtendWith(MockitoExtension.class)
public class MyClassTest {

    static class MyClass {
        public void a() {
            b();
        }

        public void b() {
        }
    }

    @Test
    public void test() {
        MyClass testClass = new MyClass();
        MyClass spy = Mockito.spy(testClass);
        Mockito.doNothing().when(spy).b();

        spy.a();

        Mockito.verify(spy, Mockito.times(1)).b();
    }
}
So whether that is something that should be done is a different question ;)
I think it highly depends on what method B() is actually doing and whether that is supposed to be part of MyClass in the first place.
Either it's not possible without mocking the whole CUT
In this case we do not mock the whole CUT, only the method you do not want to be called.
Reason asking is that wherever i read about Mockito and similar tools, it says one should never mock CUT but its dependencies (that part is clear).
I believe this statement is not entirely accurate when it comes to spying.
The whole point of spying, in my eyes, is to use it on the class under test. Why would one want to spy on a dependency that is not even supposed to be part of the test in the first place?

Unit testing: Required to mock methods of the class itself?

I've been doing some unit testing and am just getting into the topic as a whole.
I stumbled upon the following scenario. Suppose I have a class like this:
class A {
    public B method_1(B b) {
        b = method_2(b);
        b = method_3(b);
        b += 1;
        return b;
    }

    public B method_2(B b) {
        // do something to b without external dependency
        return b;
    }

    public B method_3(B b) {
        // do something else to b without external dependency
        return b;
    }
}
I can write tests for method_2 and method_3 without a problem: do different tests by configuring B in different ways and asserting the expected transformation on B after the call. Those methods are atomic.
So my question is:
If I were to test method_1 in an atomic way, I would have to mock the calls to method_2 and method_3, since actually calling those methods would mean not testing method_1 in an atomic manner.
In the latter case, if method_2 was broken, then the tests for both method_1 and method_2 would break, which would be misleading. If I mocked the method_2 call inside the method_1 test, only the method_2 test would fail, giving a clearer indication of where the error is (namely somewhere in the business logic of method_1, given that all other invoked methods work as expected).
Did I understand the concept here correctly?
On the other hand, it would also be correct for both tests to fail, since in the real world method_1 cannot work without method_2 working.
My gut says that atomicity of tests is what is desired, meaning the first solution, where there is one test for method_1 for every possible outcome of method_2 and method_3 (statically mocked).
Is there a "correct"/common/best practice way?
Immediate answer: in case we are talking about Java here, and partial mocking is really of interest to you, you can look into Mockito's spy concept.
But beyond that: you are getting unit testing wrong. What you call atomicity, I call worrying about implementation details. It shouldn't matter "what exactly" the "method under test" actually does. You want to test the what, not the how.
Meaning: if that method has to call some other method(s) that work fine in your unit test environment without mocking, then there is no need to think about mocking them!
You see, you care about the contract of each of your methods. That contract is what you want to test: given these input parameters, I expect this result/side effect/exception...
Nonetheless, the fact that you have multiple public methods that somehow depend on each other might be an indication of a design problem (as in: does it make sense that they are all public? Is there some abstraction hiding in your interface that you should better express in other ways?). But that can only be decided given real code and real context.
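Put differently, a contract-style test for method_1 calls it directly and asserts only on the observable result; whether method_2 and method_3 run underneath is invisible to the test (a sketch; the input and expected values are placeholders):

@Test
public void method1TransformsItsInputAsPromised() {
    A a = new A();
    B input = someStartingB();     // placeholder setup
    B expected = expectedResult(); // placeholder expectation

    // Only the contract is asserted; no mocking of method_2 or method_3
    assertEquals(expected, a.method_1(input));
}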

Ignoring mock calls during setup phase

I often face the problem that mock objects need to be brought into a certain state before the "interesting" part of a test can start.
For example, let's say I want to test the following class:
struct ToTest
{
    virtual void onEnable();
    virtual void doAction();
};
Therefore, I create the following mock class:
struct Mock : ToTest
{
    MOCK_METHOD0(onEnable, void());
    MOCK_METHOD0(doAction, void());
};
The first test is that onEnable is called when the system that uses a ToTest object is enabled:
TEST(SomeTest, OnEnable)
{
    Mock mock;
    // register mock somehow

    // interesting part of the test
    EXPECT_CALL(mock, onEnable());
    EnableSystem();
}
So far, so good. The second test is that doAction is called when the system performs an action and is enabled. Therefore, the system should be enabled before the interesting part of the test can start:
TEST(SomeTest, DoActionWhenEnabled)
{
    Mock mock;
    // register mock somehow

    // initialize system
    EnableSystem();

    // interesting part of the test
    EXPECT_CALL(mock, doAction());
    DoSomeAction();
}
This works but gives an annoying warning about an uninteresting call to onEnable. There seem to be two common fixes for this problem:
Using NiceMock<Mock> to suppress all such warnings; and
Adding an EXPECT_CALL(mock, onEnable()) statement.
I don't want to use the first method, since there might be other uninteresting calls that really should not happen. I also don't like the second method, since I already tested (in the first test) that onEnable is called when the system is enabled; I don't want to repeat that expectation in every test that works on an enabled system.
What I would like to be able to say is that all mock calls up to a certain point should be completely ignored. In this example, I want expectations to be checked only from the "interesting part of the test" comment onward.
Is there a way to accomplish this using Google Mock?
The annoying thing is that the necessary functions are there: gmock/gmock-spec-builders.h defines Mock::AllowUninterestingCalls and others to control the generation of warnings for a specific mock object. Using these functions, it should be possible to temporarily disable warnings about uninteresting calls.
The catch, however, is that these functions are private. The good thing is that class Mock has some template friends (e.g., NiceMock) that can be abused. So I created the following workaround:
namespace testing
{
    // HACK: NiceMock<> is a friend of Mock, so we specialize it here to a type
    // that is never used, to be able to temporarily make a mock nice. If this
    // feature were simply supported, we wouldn't need this hack...
    template<>
    struct NiceMock<void>
    {
        static void allow(const void* mock)
        {
            Mock::AllowUninterestingCalls(mock);
        }

        static void warn(const void* mock)
        {
            Mock::WarnUninterestingCalls(mock);
        }

        static void fail(const void* mock)
        {
            Mock::FailUninterestingCalls(mock);
        }
    };

    typedef NiceMock<void> UninterestingCalls;
}
This lets me access the private functions through the UninterestingCalls typedef.
The flexibility you're looking for is not possible in gmock, by design. From the gmock Cookbook (emphasis mine):
[...] you should be very cautious about when to use naggy or strict mocks, as they tend to make tests more brittle and harder to maintain. When you refactor your code without changing its externally visible behavior, ideally you shouldn't need to update any tests. If your code interacts with a naggy mock, however, you may start to get spammed with warnings as the result of your change. Worse, if your code interacts with a strict mock, your tests may start to fail and you'll be forced to fix them. Our general recommendation is to use nice mocks (not yet the default) most of the time, use naggy mocks (the current default) when developing or debugging tests, and use strict mocks only as the last resort.
Unfortunately, this is an issue that we, and many other developers, have encountered. In his book, Modern C++ Programming with Test-Driven Development, Jeff Langr writes (Chapter 5, on Test Doubles):
What about the test design? We split one test into two when we changed from a hand-rolled mock solution to one using Google Mock. If we expressed everything in a single test, that one test could set up the expectations to cover all three significant events. That’s an easy fix, but we’d end up with a cluttered test.
[...]
By using NiceMock, we take on a small risk. If the code later somehow changes to invoke another method on the [...] interface, our tests aren’t going to know about it. You should use NiceMock when you need it, not habitually. Seek to fix your design if you seem to require it often.
You might be better off using a different mock class for your second test.
class MockOnAction : public ToTest {
public:
    // This is a non-mocked function that does nothing
    virtual void onEnable() {}

    // Mocked function
    MOCK_METHOD0(doAction, void());
};
In order for this test to work, you can have onEnable do nothing (as shown above), or it can do something special, like calling the base class version or some other logic:
virtual void onEnable() {
    // You could call the base class version of this function
    ToTest::onEnable();

    // or hardcode some other logic
    // isEnabled = true;
}

Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc

    var myInfo = GetFooInfo(filename);

    // create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to determine that the output is as expected?
I like to err on the side of caution, being wary about things breaking and ensuring that maintenance later down the road is easier, but I feel very skeptical about adding very similar tests to the test project. Would this be bad practice, or is there any way to do this more efficiently?
I'm marking this as language-agnostic, but just in case it matters, I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question: if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        // Add a null check
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing that method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.

What is the unit testing strategy for method call forwarding?

I have the following scenario:
public class CarManager
{
    ...

    public long AddCar(Car car)
    {
        try
        {
            string username = _authorizationManager.GetUsername();
            ...
            long id = _carAccessor.AddCar(username, car.Id, car.Name, ...);
            if (id == 0)
            {
                throw new Exception("Car was not added");
            }
            return id;
        }
        catch (Exception ex)
        {
            throw new AddCarException(ex);
        }
    }

    public List<long> AddCars(List<Car> cars)
    {
        List<long> ids = new List<long>();
        foreach (Car car in cars)
        {
            ids.Add(AddCar(car));
        }
        return ids;
    }
}
I am mocking out _carAccessor, _authorizationManager, etc.
Now I want to unit test the CarManager class.
Should I have multiple tests for AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
For AddCars(), should I repeat all the previous tests, since AddCars() calls AddCar()? It seems like repeating oneself. Should I perhaps not be calling AddCar() from AddCars()?
Please help.
There are two issues here:
Unit tests should do more than test methods one at a time. They should be designed to prove that your class can do the job it was designed for when integrated with the rest of the system. So you should mock out the dependencies and then write a test for each way in which your class will actually be used. For each (non-trivial) class you write, there will be scenarios that involve the client code calling methods in a particular pattern.
There is nothing wrong with AddCars calling AddCar. You should repeat tests for error handling, but only when it serves a purpose. One of the unofficial rules of unit testing is "test to the point of boredom" or (as I like to think of it) "test till the fear goes away". Otherwise you would be writing tests forever. So if you are confident a test will add no value, then don't write it. You may be wrong, of course, in which case you can come back later and add it. You don't have to produce a perfect test suite the first time round, just a firm basis on which you can build as you better understand what your class needs to do.
A unit test should focus only on its corresponding class under test. All attributes of that class that are instances of other classes should be mocked.
Suppose you have a class (CarRegistry) that uses some kind of data access object (for example, CarPlatesDAO) which loads/stores car plate numbers from a relational database.
When you are testing the CarRegistry, you should not care about whether CarPlatesDAO performs correctly, since the DAO has its own unit tests.
You just create a mock that behaves like the DAO and returns correct or wrong values according to the expected behavior. You plug this mock DAO into your CarRegistry and test only the target class, without caring whether all the aggregated classes are "green".
Mocking allows separation of testable classes and better focus on specific functionality.
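A sketch of that setup with Mockito (the CarRegistry and CarPlatesDAO methods shown here are hypothetical):

@Test
public void registryRejectsPlateWhenDaoFailsToStoreIt() {
    CarPlatesDAO dao = Mockito.mock(CarPlatesDAO.class);
    Mockito.when(dao.storePlate("ABC-123")).thenReturn(false); // DAO misbehaves on purpose

    CarRegistry registry = new CarRegistry(dao); // plug in the mock
    assertFalse(registry.register("ABC-123"));   // only CarRegistry's own logic is tested
}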
When unit testing AddCar, create tests that exercise every code path. If _authorizationManager.GetUsername() can throw an exception, create a test where your mock for this object throws. By the way: don't throw or catch instances of Exception; derive a meaningful exception class.
For the AddCars method, you definitely should call AddCar. But you might consider making AddCar virtual and overriding it just to test that it's called with all the cars in the list, as sketched below.
Sometimes you'll have to change the class design for testability.
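For example (sketched in Java, where methods are overridable by default; in the C# code above, AddCar would need to be marked virtual, and the method names are kept from that snippet), a test-only subclass can record the delegation:

// Test-only subclass that counts calls to AddCar
class RecordingCarManager extends CarManager {
    int addCarCalls = 0;

    @Override
    public long AddCar(Car car) {
        addCarCalls++;
        return 1; // skip the real work; we only care about the delegation
    }
}

@Test
public void addCarsDelegatesOncePerCar() {
    RecordingCarManager manager = new RecordingCarManager();
    manager.AddCars(Arrays.asList(new Car(), new Car()));
    assertEquals(2, manager.addCarCalls);
}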
Should I have multiple tests for AddCar() such as
AddCarTest()
AddCarTestAuthorizationManagerException()
AddCarTestCarAccessorNoId()
AddCarTestCarAccessorException()
Absolutely! This tells you valuable information.
For AddCars() should I repeat all previous tests as AddCars() calls AddCar() - it seems like repeating oneself? Should I perhaps not be calling AddCar() from AddCars()?
Calling AddCar from AddCars is a great idea; it avoids violating the DRY principle. Similarly, you should not be repeating the tests. Think of it this way: you already wrote tests for AddCar, so when testing AddCars you can assume AddCar does what it says on the tin.
Let's put it this way - imagine AddCar was in a different class. You would have no knowledge of an authorisation manager. Test AddCars without the knowledge of what AddCar has to do.
For AddCars, you need to test all the normal boundary conditions (does an empty list work, etc.). You probably don't need to test the situation where AddCar throws an exception, as you're not attempting to catch it in AddCars.
Writing tests that explore every possible scenario within a method is good practice. That's how I unit test in my projects. Tests like AddCarTestAuthorizationManagerException(), AddCarTestCarAccessorNoId(), or AddCarTestCarAccessorException() get you thinking about all the different ways your code can fail, which has led me to find new kinds of failures for a method that I might otherwise have missed, as well as to improve the overall design of the class.
In a situation like AddCars() calling AddCar(), I would mock the AddCar() method and count the number of times it's called by AddCars(). The mocking library I use allows me to create a mock of CarManager and mock only the AddCar() method, but not AddCars(). Then your unit test can set how many times it expects AddCar() to be called, which you would know from the size of the list of cars passed in.
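With Mockito, that partial mock is a spy: only AddCar is stubbed, while AddCars runs for real (a Java sketch; the constructor arguments are assumed to be stubbed dependencies):

@Test
public void addCarsCallsAddCarOncePerCar() {
    CarManager manager = Mockito.spy(new CarManager(/* stubbed dependencies */));
    Mockito.doReturn(1L).when(manager).AddCar(Mockito.any(Car.class)); // stub only AddCar

    List<Car> cars = Arrays.asList(new Car(), new Car(), new Car());
    manager.AddCars(cars);

    // One AddCar call per car in the list
    Mockito.verify(manager, Mockito.times(cars.size())).AddCar(Mockito.any(Car.class));
}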