Unit testing: Required to mock methods of the class itself?

I've been doing some unit testing and just getting into the topic as a whole.
I stumbled upon the following scenario, suppose I have a class like this:
class A {
    public B method_1(B b) {
        b = method_2(b);
        b = method_3(b);
        b += 1;
        return b;
    }
    public B method_2(B b) {
        // do something to b without external dependency
        return b;
    }
    public B method_3(B b) {
        // do something else to b without external dependency
        return b;
    }
}
I can write tests for method_2 and method_3 without a problem: I can run different tests by configuring B in different ways and asserting the expected transformation on B after the call. Those methods are atomic.
So my question is:
If I were to test method_1 in an atomic way, I would have to mock the calls to method_2 and method_3, since actually calling those methods would mean I'm not testing method_1 in an atomic manner.
In the latter case, if method_2 were broken, then the tests for both method_1 and method_2 would break, and that would be misleading. If I mocked the method_2 call inside the method_1 test, only the method_2 test would fail, giving a clearer indication of where the error is (a failing method_1 test would then point to the business logic of method_1 itself, given that all other invoked methods worked as expected).
Did I understand the concept here correctly?
On the other hand, it is also correct for both tests to fail, since in the real world method_1 cannot work without method_2 working.
My gut says that atomicity of tests is what is desired, meaning the first solution: one test for method_1 for every possible outcome of method_2 and method_3 (both statically mocked).
Is there a "correct"/common/best practice way?

Immediate answer: in case we are talking Java here, and partial mocking really is of interest to you, you can look into Mockito's spy concept.
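For illustration, here is a minimal sketch of that spy idea (assuming JUnit 4 and Mockito, and treating B as a plain int, since the snippet above is pseudocode):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class ATest {
    @Test
    public void method_1_adds_one_to_whatever_its_helpers_produce() {
        A a = spy(new A());

        // doReturn(..).when(..) stubs the helpers without invoking the real code
        doReturn(5).when(a).method_2(anyInt());
        doReturn(7).when(a).method_3(anyInt());

        // method_1's own logic still runs for real: 7 + 1
        assertEquals(8, a.method_1(3));
    }
}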
But beyond that: you are getting unit testing wrong. What you call atomicity, I call worrying about implementation details. It shouldn't matter "what exactly" that "method under test" actually does. You want to test the what, not the how.
Meaning: if that method has to call some other method(s) that work fine in your unit-test environment without mocking, then there is no need to think about mocking them!
You see: you care about the contract of each of your methods. That contract is what you want to test: given these input parameters, I expect that result/side effect/exception ...
Nonetheless, the fact that you have multiple public methods, and that they somehow depend on each other, might be an indication of a design problem (as in: does it make sense that they are all public? Is there some abstraction hiding in your interface that you should better express in other ways?). But that can only be decided given real code, real context.

Related

Unit testing composite functions

Say you have 3 functions, functionA, functionB, and functionC
functionC relies on functionA and functionB
function functionA(a) {
    return a;
}
function functionB(b) {
    return b;
}
function functionC(a, b) {
    return functionA(a) + functionB(b);
}
Now this is obviously a super-simplified example, but what's the correct way to test functionC? If I'm already testing functionA and functionB and they're passing, wouldn't testing functionC wind up being more than a unit test, since it would be relying on the returns of functionA and functionB?
For the first two functions you would focus on their public contract - you would write as many tests as required to ensure all results for different cases are as expected.
But for the third function it might be sufficient to understand that the function should be invoking the other two functions. So you probably need less tests. You don't need to test again all cases required for testing A and B. You only want to verify the expected "plumbing" that C is responsible for.
In my opinion your tests should not know that functionC is using functionA and functionB. Normally you create automated tests to support change (code maintenance). What if you change the implementation of C? All the tests of functionC become invalid as well; that is unnecessary and dangerous, because it means the refactorer must understand all the tests as well, even if he/she is convinced that he/she is not changing the contract. If you have great test coverage, why should he/she have to do that? So the public contract of functionC is to be tested in full!
There is a further danger: if the tests know too much about the inner workings of the SUT (functionC), they tend to reimplement the code inside. Then the same (probably faulty) code that does the implementation also checks whether the implementation is correct.
Just an example: how would you implement the (white-box) test of functionC? Mock functionA and functionB and check whether the sum of the mocked results is produced. That is good only for the test-coverage KPI, and it can also be quite misleading.
But what about the high extra effort of testing the functionality of functionA and functionB twice? If that really is the case, then reuse of the testing code is probably easy; if such reuse is not possible, I think that only confirms my earlier statements.
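For illustration, a black-box test of functionC's public contract might look like this (a hedged sketch in JUnit style, matching the Java-flavoured examples later in this thread; MathOps is a hypothetical class hosting the three functions as static methods):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FunctionCContractTest {
    // Exercises only functionC's contract; it neither mentions nor mocks functionA and functionB,
    // so it survives any refactoring of functionC's internals that preserves the contract.
    @Test
    public void functionC_returns_the_sum_of_its_arguments() {
        assertEquals(5, MathOps.functionC(2, 3));
    }
}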
The GhostCat answer is simple, fine, and focuses on the essential.
I will detail some other points to consider, particularly the refactoring question.
Unit tests focus on API
The class's API (public functions) has to be unit tested.
If these 3 functions are public, each one has to be tested.
Besides, unit tests don't focus on implementation but on expected behavior.
Today the composite function adds the individual function results; tomorrow it could subtract them or anything else.
Testing the C() composite function doesn't mean testing again all scenarios of A() and B(), it means testing the expected behavior for C().
In some cases, unit testing a composite function in integration with the individual functions doesn't generate much duplication concerning the individual functions.
In other cases, it does. I will present it in the next point.
Example where testing the C() composite function may cause a duplication concern in the tests.
Suppose that the A() function accepts two integers:
function A(int a, int b) { ... }
It has the following constraints on the input parameters:
they have to be >= 0
they have to be less than 100
their sum has to be less than 100
If one of these is not respected, an exception is thrown.
In the A() unit test, we will test each one of these scenarios, each one probably in a distinct test case:
@Test(expected = InvalidParamException.class)
void A_throws_exception_when_one_of_params_is_not_superior_or_equal_to_0() {
    ...
}
@Test(expected = InvalidParamException.class)
void A_throws_exception_when_one_of_params_is_not_inferior_to_100() {
    ...
}
@Test(expected = InvalidParamException.class)
void A_throws_exception_when_params_sum_is_not_inferior_to_100() {
    ...
}
Aside from the error cases, we could also have multiple nominal scenarios for the A() function according to the passed parameters.
Suppose that the B() function has also multiple nominal and error scenarios.
So what about the unit test of C(), which aggregates them?
You should of course not re-test each one of these cases. That would be a lot of duplication, and besides, it would produce even more combinations by crossing the cases of the two functions.
The next point presents how to prevent duplication.
Possible refactoring to improve the design and reduce the duplication in the unit tests of the composite function
As you write composite functions, the first thing you should ask yourself is whether the composite functions shouldn't be located in a specific component of their own:
composite component -> unitary component(s)
Decoupling them may improve the overall design and give more specific responsibilities to components.
In addition, it also provides a natural way to reduce duplication in the unit tests of the composite component.
Indeed, you can, if required, stub/mock unitary component behaviors and don't need to create detailed fixtures for them.
Composite component unit tests can so focus on the composite component behavior.
So in our previous example, instead of re-testing all cases of A() and B() while we unit-test the C() function, we could stub or mock A() and B() so that they behave as required for the C() scenarios.
For example, for the C() test scenarios with error cases related to A() and B(), we don't need to repeat each A() or B() scenario:
@Test(expected = InvalidParamException.class)
void C_throws_exception_when_a_param_is_invalid() {
    when(A(any(), any())).thenThrow(new InvalidParamException());
    C();
}
@Test(expected = InvalidParamException.class)
void C_throws_exception_when_b_param_is_invalid() {
    when(B(any(), any())).thenThrow(new InvalidParamException());
    C();
}
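To make that stubbing concrete, here is a hedged sketch of the decoupling (names and bodies invented for illustration): the composite logic lives in its own component and receives the unitary component through its constructor, so the C() tests above can simply inject a Mockito mock of it.
// Hypothetical unitary component owning A() and B().
class UnitaryOps {
    int A(int x, int y) { /* validation and logic as described above */ return x + y; }
    int B(int x, int y) { /* validation and logic */ return x * y; }
}

// Hypothetical composite component: only responsible for the "plumbing" between A() and B().
class CompositeOps {
    private final UnitaryOps ops;

    CompositeOps(UnitaryOps ops) {
        this.ops = ops;
    }

    int C(int x, int y) {
        // Unit tests of C() can pass in mock(UnitaryOps.class) and stub A()/B()
        // exactly as in the test cases above.
        return ops.A(x, y) + ops.B(x, y);
    }
}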

Do I need to verify interaction with mocks or just check the method inputs and outputs?

Is it necessary to verify the interactions with the Mock objects? So let's say I have a class:
class A {
    B b;
    public A(B b) {
        this.b = b;
    }
    int getObjectFromDatabase(int id) {
        Object o = b.get(id);
        // do some extra work
        return result;
    }
}
Now I'm testing the getObjectFromDatabase method and I have passed in a mock object of class B. Do I need to verify that b.get(id) is actually being called or not? Or is it good enough to just check the input and the output result I get?
In general, verifying that a stubbed call happened is not necessary, and leads to brittle tests (tests that fail even when the implementation remains correct). In your case, it probably doesn't matter whether get(id) is called, or how many times; it only matters that the object returned is correct, and it's likely that a correct result will require calling b.get(id) at some point.
Mockito has some explicit advice in its verify(T) Javadoc:
Although it is possible to verify a stubbed invocation, usually it's just redundant. Let's say you've stubbed foo.bar(). If your code cares what foo.bar() returns then something else breaks (often before even verify() gets executed). If your code doesn't care what get(0) returns then it should not be stubbed. Not convinced? See here.
Though it may make sense to verify(b).get(id) a certain number of times (including zero) when testing caching or lazy-load behavior, or in other circumstances where the interaction is a crucial part of the tested behavior, strive to test for the correct output/state instead of verifying your expected interactions.
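A minimal sketch of that output-based style (JUnit 4 and Mockito assumed; the nested types are simplified stand-ins for the A and B of the question, and the "extra work" is invented for illustration):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class GetObjectFromDatabaseTest {

    // Hypothetical collaborator matching the shape of B in the question.
    interface B {
        String get(int id);
    }

    // Hypothetical version of A whose "extra work" is just trimming the value.
    static class A {
        private final B b;
        A(B b) { this.b = b; }
        String getObjectFromDatabase(int id) {
            return b.get(id).trim();
        }
    }

    @Test
    public void returnsProcessedObjectFromDatabase() {
        B b = mock(B.class);
        when(b.get(42)).thenReturn("  value  ");  // stub the collaborator, don't verify it

        A a = new A(b);

        // Assert on the output; a correct result already implies b.get(42) was called.
        assertEquals("value", a.getObjectFromDatabase(42));
    }
}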

Test: stub vs real implementation

I have been wondering about the general use of stubs for unit tests vs using real (production) implementations, and specifically whether we don't run into a rather nasty problem when using stubs as illustrated here:
Suppose we have this (pseudo) code:
public class A {
    public int getInt() {
        if (..) {
            return 2;
        }
        else {
            throw new AException();
        }
    }
}
public class B {
    public void doSomething() {
        A a = new A();
        try {
            a.getInt();
        }
        catch (AException e) {
            throw new BException(e);
        }
    }
}
public class UnitTestB {
    @Test
    public void throwsBExceptionWhenFailsToReadInt() {
        // Stub A to throw AException() when getInt is called
        // verify that we get a BException on doSomething()
    }
}
Now suppose that at some later point, when we have written hundreds more tests, we realize that A shouldn't really throw AException but instead AOtherException. We correct this:
public class A {
    public int getInt() {
        if (..) {
            return 2;
        }
        else {
            throw new AOtherException();
        }
    }
}
We have now changed the implementation of A to throw AOtherException, and we then run all our tests. They pass. What's not so good is that the unit test for B passes but is wrong. If we put A and B together in production at this stage, B will propagate AOtherException, because its implementation thinks A throws AException.
If we instead had used the real implementation of A for our throwsBExceptionWhenFailsToReadInt test, then it would have failed after the change of A because B wouldn't throw the BException anymore.
It's just a frightening thought that if we had thousands of tests structured like the above example, and we changed one tiny thing, then all the unit tests would still pass even though the behavior of many of the units would be wrong! I may be missing something, and I'm hoping some of you clever folks could enlighten me as to what it is.
When you say
We have now changed the implementation of A to throw AOtherException and we then run all our tests. They pass.
I think that's incorrect. You obviously haven't implemented your unit test, but Class B will not catch AException and thus not throw BException because AException is now AOtherException. Maybe I'm missing something, but wouldn't your unit test fail in asserting that BException is thrown at that point? You will need to update your class code to appropriately handle the exception type of AOtherException.
If you change the interface of class A then your stub code will not build (I assume you use the same header file for production and stub versions) and you will know about it.
But in this case you are changing the behaviour of your class because the exception type is not really part of the interface. Whenever you change the behaviour of your class you really have to find all the stub versions and check if you need to change their behaviour as well.
The only solution I can think of for this particular example is to use a #define in the header file to define the exception type. This could get messy if you need to pass parameters to the exception's constructor.
Another technique I have used (again not applicable to this particular example) is to derive your production and stub classes from a virtual base class. This separates the interface from the implementation, but you still have to look at both implementations if you change the behaviour of the class.
It's normal that the test you wrote using stubs doesn't fail since it is intended to verify that object B communicates well with A and can handle the response from getInt() assuming that getInt() throws an AException. It is not intended to check if getInt() really throws an AException at any point.
You can call that kind of test you wrote a "collaboration test".
Now, to be complete, what you also need is the counterpart test that checks whether getInt() will ever throw an AException (or an AOtherException, for that matter) in the first place. That is a "contract test".
J B Rainsberger has a great presentation on the contract and collaboration tests technique.
With that technique, here's how you'd typically proceed, solving the whole "false green test" problem:
Identify that getInt() now needs to throw an AOtherException rather than an AException.
Write a contract test verifying that getInt() does throw an AOtherException under the given circumstances.
Write the corresponding production code to make the test pass.
Realize you need collaboration tests for that contract test: for each collaborator using getInt(), can it handle the AOtherException we're going to throw?
Implement those collaboration tests (let's say you don't yet notice that there's already a collaboration test checking for AException).
Write production code that matches the tests and realize that B still expects an AException when calling getInt(), but not an AOtherException.
Refer to the existing collaboration test containing the stubbed A throwing an AException, realize it's obsolete, and delete it.
This is if you start using that technique just now. Had you adopted it from the start, there wouldn't be any real problem, since what you'd naturally do is change the contract test of getInt() to make it expect AOtherException, and change the corresponding collaboration tests just after that (the golden rule is that a contract test always goes with a collaboration test, so with time it becomes a no-brainer).
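Here is a hedged sketch of what such a contract/collaboration pair could look like for the example above (JUnit 4 and Mockito assumed; getInt()'s failing condition stays elided just like in the question, and the collaboration test assumes B receives its A through the constructor so that a stub can be injected):
import static org.mockito.Mockito.*;
import org.junit.Test;

public class ContractAndCollaborationTests {

    // Contract test: does the real A ever respond this way?
    @Test(expected = AOtherException.class)
    public void getInt_throws_AOtherException_when_it_cannot_read_the_int() {
        A a = new A();
        // ...arrange whatever makes getInt()'s failing branch trigger (elided in the question)...
        a.getInt();
    }

    // Collaboration test: can B handle that response from A?
    // Assumes AOtherException is unchecked and that B takes its A as a constructor
    // argument instead of calling new A() itself (otherwise the stub cannot be injected).
    @Test(expected = BException.class)
    public void doSomething_wraps_the_failure_from_A_in_a_BException() {
        A stubbedA = mock(A.class);
        when(stubbedA.getInt()).thenThrow(new AOtherException());
        new B(stubbedA).doSomething();
    }
}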
If we instead had used the real implementation of A for our
throwsBExceptionWhenFailsToReadInt test, then it would have failed
after the change of A because B wouldn't throw the BException anymore.
Sure, but this would have been a whole other kind of test: an integration test, actually. An integration test verifies both sides of the coin: does object B handle response R from object A correctly, and does object A ever respond that way in the first place? It's only normal for a test like this to fail when the implementation of A used in the test starts to respond R' instead of R.
The specific example you have mentioned is a tricky one.. the compiler cannot catch it or notify you. In this case, you'd have to be diligent to find all usages and update the corresponding tests.
That said, this type of issue should be a fraction of the tests - you cannot wave away the benefits just for this corner case.
See also: TDD how to handle a change in a mocked object - there was a similar discussion on the testdrivendevelopment forums (linked in the above question). To quote Steve Freeman (of GOOS fame and a proponent of interaction-based tests):
All of this is true. In practice, combined with a judicious
combination of higher level tests, I haven't seen this to be a big
problem. There's usually something bigger to deal with first.
Ancient thread, I know, but I thought I'd add that JUnit has a really handy feature for exception handling. Instead of doing try/catch in your test, tell JUnit that you expect a certain exception to be thrown by the class.
@Test(expected = AOtherException.class)
public void ensureCorrectExceptionForA() {
    A a = new A();
    a.getInt();
}
Extending this to your class B you can omit some of the try/catch and let the framework detect the correct usage of exceptions.
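For example, a hedged sketch of the B test in that style (it exercises the real A, so per the earlier answer it is closer to an integration test; the condition that makes getInt() fail stays elided as in the question):
@Test(expected = BException.class)
public void ensureDoSomethingTranslatesFailureIntoBException() {
    B b = new B();
    // doSomething() internally calls new A().getInt(); when that call throws,
    // the expected BException should propagate and the test passes.
    b.doSomething();
}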

TDD Function Tests

Should I write unit tests for all nested methods, or is writing one test for the caller enough?
For instance:
void Main()
{
    var x = new A().AFoo();
}
public class A
{
    public int AFoo()
    {
        // some logic
        var x = new B().BFoo();
        // might have some logic
        return x;
    }
}
public class B
{
    public int BFoo()
    {
        // some logic
        return ???;
    }
}
Is it sufficient to write a unit test for the Main() method, or do I need to write tests for the Main, A.AFoo(), and B.BFoo() methods? How deep should I go?
Thanks in advance.
A testing purist would say that you need to create unit tests for classes A and B.
Each class should have all methods tested. If a method can do more than one thing (if you have an if statement, for example), then you should have a test for each path. If the tests are getting too complicated, it's probably a good idea to refactor the code to make the tests simpler.
Note that, as it stands right now, it's hard to test A in isolation because it depends on B. If B is simple, as it is right now, that's probably OK. You might want to name your tests for A integration tests, because technically they test both A and B together. Another option would be to have the method AFoo accept as a parameter the instance of B on which it operates; that way you could mock an instance of B and have a true unit test, as sketched below.
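A hedged sketch of that last option (written in Java with Mockito for consistency with the rest of this thread; the original snippet is pseudocode, so the types and bodies are simplified stand-ins):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class AFooTest {

    // Hypothetical B, kept as an interface so it is trivial to mock.
    interface B {
        int BFoo();
    }

    // Hypothetical A whose AFoo receives the B it operates on.
    static class A {
        public int AFoo(B b) {
            int x = b.BFoo();
            return x + 1;  // stand-in for "might have some logic"
        }
    }

    @Test
    public void AFoo_adds_one_to_whatever_B_returns() {
        B b = mock(B.class);
        when(b.BFoo()).thenReturn(41);

        // A true unit test of A, fully isolated from the real B.
        assertEquals(42, new A().AFoo(b));
    }
}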
Unit tests are supposed to work on units; in the case of OOP, the units are classes and the methods of the classes. That means that you should write a separate test class for each class under consideration, and at least one testing method for each method provided in the class. What is more, it is important to isolate the classes as much as possible so that a bug in class B does not cause a failure in class A. This is why Inversion of Control (Dependency Injection) is so useful, because if you can inject the instance of class B into the instance of class A, you can change B to be just a mock object.
One of the reasons we write unit tests is to explain, in code, exactly how the methods of each class are expected to behave under all conditions, including and especially edge cases. It is hard to detail the expected behaviour of class B by writing tests on the main method.
I would recommend reading some material online explaining test driven development and how to mock objects, and perhaps use some of the excellent mocking libraries that exist such as JMock. See this question for more links.
Unit tests should help you reduce your debugging effort. So when you write unit tests only for AFoo and none for BFoo, and one of your tests fails, you probably won't know whether the problem is in class A or class B. Writing tests for BFoo too will help you isolate the error in a smaller amount of time.

Writing maintainable unit tests with mock objects

This is a simplified version of a class I'm writing a unit test for
class SomeClass {
    void methodA() {
        methodB();
        methodC();
        methodD();
    }
    void methodB() {
        // does something
    }
    void methodC() {
        // does something
    }
    void methodD() {
        // does something
    }
}
While writing the unit tests for this class, I've mocked out the objects used in each method with EasyMock. It was easy to set up the mock objects and their expectations in methods B, C, and D. But to test method A, I have to set up A LOT more mock objects and their expectations. Also, I'm testing method A in different conditions, meaning I have to set up the mock objects many times with different expectations.
In the end, my unit test becomes hard to maintain and pretty cluttered. I was wondering if anyone has, or has seen, a good solution to this problem.
If I understand your question correctly, I think that this is a matter of design. The nice thing about unit testing is that writing tests often forces you to make your design better. If you need to mock too many things while testing a method it often means you should split your class into two smaller classes, which will be easier to test (and write, and maintain, and bugfix, and reuse, etc.).
In your case, the method A seems to be at a higher level than methods B, C, and D. You can consider moving it to a higher-level class that would wrap SomeClass:
class HigherLevelClass {
    ISomeClass someClass;

    public HigherLevelClass(ISomeClass someClass) {
        this.someClass = someClass;
    }

    void methodA() {
        someClass.methodB();
        someClass.methodC();
        someClass.methodD();
    }
}

class SomeClass implements ISomeClass {
    public void methodB() {
        // does something
    }
    public void methodC() {
        // does something
    }
    public void methodD() {
        // does something
    }
}
Now when you are testing methodA all you need to mock is the small ISomeClass interface and the three method calls.
You could extract common setup code into separate (possibly parametrized) methods, then call them whenever appropriate. If the tests for methodA have a very different fixture from the tests of the other methods, there may not be much to put into the @Before method itself, so you need to call the appropriate combination of setup helper methods from the test methods themselves. It is still a bit cumbersome, but better than duplicating code all over the place.
Depending on what unit test framework you use, there may be other options too, but the above should work with any framework.
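For illustration, a hedged sketch of such parametrized setup helpers (Mockito is shown for brevity and the collaborator name is invented; the same idea applies to EasyMock or any other framework):
import static org.mockito.Mockito.*;

import org.junit.Test;

public class MethodATest {

    // Hypothetical collaborator used by SomeClass.
    interface Repository {
        String load(int id);
    }

    // Parametrized setup helper: each test states only the behaviour it cares about.
    private Repository repositoryReturning(String value) {
        Repository repo = mock(Repository.class);
        when(repo.load(anyInt())).thenReturn(value);
        return repo;
    }

    @Test
    public void methodA_with_existing_record() {
        Repository repo = repositoryReturning("record");
        // ...exercise methodA with this collaborator and assert the outcome...
    }

    @Test
    public void methodA_with_missing_record() {
        Repository repo = repositoryReturning(null);
        // ...exercise methodA and assert the error handling...
    }
}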
This is an example of a Fragile test because the mock setups have too intimate knowledge of the SUT.
I don't know EasyMock, but with Moq you don't need to set up void methods. However, with Moq the methods would have to be public or protected and virtual.
For each test you're writing, consider the behaviour which is valuable for that test. You'll have some contexts you're setting up which the behaviour relies on, and some outcomes as a result of the behaviour that you want to verify.
Set up relevant contexts, verify the outcomes, and use NiceMocks for everything else.
I prefer Mockito (Java) or Moq (.NET) which work this way by default. Here's Mockito's page on Mockito vs. EasyMock so you can get the idea (EasyMock didn't have NiceMock before Mockito came along):
http://code.google.com/p/mockito/wiki/MockitoVSEasyMock
You can probably use EasyMock's NiceMock in a similar way. Hopefully this will help you detangle your tests. You can always import both frameworks and use them alongside each other / incrementally switch over if it helps.
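A small hedged sketch of what "nice mock by default" means in Mockito (using java.util.List purely as a stand-in dependency): unstubbed calls simply return defaults instead of failing the test, so you only set up the interactions the test actually cares about.
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import java.util.List;
import org.junit.Test;

public class NiceMockSketchTest {
    @Test
    public void only_the_relevant_interaction_is_stubbed() {
        @SuppressWarnings("unchecked")
        List<String> dependency = mock(List.class);

        // Only the call the test cares about is stubbed...
        when(dependency.get(0)).thenReturn("relevant");

        // ...every other call quietly returns a default instead of blowing up.
        assertEquals("relevant", dependency.get(0));
        assertEquals(0, dependency.size());
        assertNull(dependency.get(99));
    }
}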
Good luck!
I’m testing method A in different conditions, meaning I have to setup the mock objects many times with different expectations.
If you care about what methodA is doing and which collaborator function has to be called, then you have to set up different expectations... I don't see how you can skip this step?!
In testLogout you would expect a call to myCollaborator.logout(); in testLogin you would expect something like myCollaborator.login().
If you have many methods with lots of different expectations, maybe that is a sign that you should split your class into collaborators.