google mock expect_call to retire another expectation - c++

I have two mocks. Only one of them should be called during a single run, and I would like to use the expectations to figure out whether the execute() function succeeded, without knowing the outcome from the given preconditions.
How can this be achieved?
Mock1 successMock;
Mock2 failMock;
EXPECT_CALL(successMock, performOnSuccess()).Times(1);
EXPECT_CALL(failMock, performOnFail()).Times(0);
execute(successMock, failMock);
Either the set of expectations above or the one below should be satisfied, but only one of them.
EXPECT_CALL(successMock, performOnSuccess()).Times(0);
EXPECT_CALL(failMock, performOnFail()).Times(1);

What you expect is that one (and only one) of the collaborators has been called.
One viable solution (as presented in other languages) is to make these mocks increment a shared counter in the test scope.
You can achieve this with GoogleMock by defining actions. It would be something like this:
SuccessMock successMock;
FailMock failMock;
int callCounter = 0;
ON_CALL(successMock, performOnSuccess())
    .WillByDefault(InvokeWithoutArgs([&]()
    {
        callCounter++;
    }));
ON_CALL(failMock, performOnFail())
    .WillByDefault(InvokeWithoutArgs([&]()
    {
        callCounter++;
    }));
execute(successMock, failMock);
ASSERT_THAT(callCounter, Eq(1));
Having said this, this test has some randomness in it that I don't really like. You should end up having one test expecting failure and another one expecting success.
I wrote a simplified gist for this.
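For illustration, a deterministic pair of tests might look like the sketch below. It assumes you can arrange the preconditions of each run so that the outcome of execute() is known in advance:
#include <gmock/gmock.h>

TEST(ExecuteTest, InvokesSuccessHandlerWhenExecutionSucceeds)
{
    SuccessMock successMock;
    FailMock failMock;
    // ...arrange preconditions here so that execute() is known to succeed...
    EXPECT_CALL(successMock, performOnSuccess()).Times(1);
    EXPECT_CALL(failMock, performOnFail()).Times(0);
    execute(successMock, failMock);
}

TEST(ExecuteTest, InvokesFailHandlerWhenExecutionFails)
{
    SuccessMock successMock;
    FailMock failMock;
    // ...arrange preconditions here so that execute() is known to fail...
    EXPECT_CALL(successMock, performOnSuccess()).Times(0);
    EXPECT_CALL(failMock, performOnFail()).Times(1);
    execute(successMock, failMock);
}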

Related

How to best test a service whose effects get overwritten by a stubbed-out dependency

First off, my example is in PHP, but this is not a PHP question, just a question on testing best practices.
So I have this function that I would like to test
public function createNewTodo(CreateTodoQuery $query): TodoResponseObject
{
    $new_todo = TodoFactory::createNew($query->getUserId(), $query->getTitle())
        ->withDescription($query->getDescription());
    $new_todo = $this->todo_repository->save($new_todo);
    return TodoResponseObject::fromDomain($new_todo);
}
In order to test this function, I will need to stub out my dependency (todo_repository). I want to have one test that asserts that what I get back is an instance of a TodoResponseObject. Simple enough.
Now the challenging bit: I want to assert that the todo object gets created with the parameters set in the query. Since I'm going to be stubbing out the todo_repository, I can't actually do that, as my test would just assert on the values I configured my stub to return. I could do something like an assertCalledWith type deal, but at that point I'm falling into the testing anti-pattern of "testing implementation, not functionality".
So how best could I get around this, and what would be the best way to test this?
See Sandi Metz, Magic Tricks of Testing
If you want to test that your code sent the right message to the factory, then the usual answer is to use a test double (a mock, or a spy) that tracks the messages sent to it so that you can verify them later.
This might require changing the design of your code so that you can more easily substitute one factory implementation for another (for instance, by wrapping a decorator around the "real" factory method).
Another possibility is to split the factory invocation into a separate method, and test that method's handling of the parameters:
public function createTodo(CreateTodoQuery $query) {
    return TodoFactory::createNew($query->getUserId(), $query->getTitle())
        ->withDescription($query->getDescription());
}
Changing the design of your implementation so that it better fits your testing is normal in TDD.

Should we modify a function signature for unit testing?

Suppose we have a function add() as below:
void add(int a, int b) {
    int sum = a + b;
    cout << sum;
    sendSumToStorage(sum);
}
This simple function adds two input values, prints the sum to the console, and also sends it to some external storage (say, a file). This is how we ideally want it in the application (meaning, we don't want it to return anything).
For purposes of unit testing, is it valid (from a design perspective) if we modify the function signature so that it returns the sum? We could then have a test like:
bool checkAdd() {
    int res = add(3, 4);
    if (res == 7) return true;
    else return false;
}
Better yet, is this (returning a value) the only way we could unit test it? Is there some valid way in which we could unit test the add() function without changing the function signature?
A function like the one in your example is considered really bad practice.
Why do I say this?
Well, you have a method called add which adds two numbers AND calls something else. Basically your method doesn't do one thing, but two, which violates the Single Responsibility Principle.
This makes things much harder to test because you can't test just the add method in isolation.
So you would separate that into two methods with good names which reflect what they do and test them separately.
If you don't want to have issues with state between your methods, you will have to start returning results where it makes sense.
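A minimal sketch of that split, reusing cout and the sendSumToStorage() call from the question (reportSum is an illustrative name, not something from the original code):
#include <iostream>
using std::cout;

void sendSumToStorage(int sum); // assumed to exist, as in the question

// Pure computation: easy to unit test, and the return value is now meaningful.
int add(int a, int b) {
    return a + b;
}

// Interactions with the outside world, kept separate from the computation.
void reportSum(int sum) {
    cout << sum;
    sendSumToStorage(sum);
}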
Setting aside the fact that this example has bad design:
For cases like this, when you want to check some internal behaviour rather than the API, consider using testing libraries like gtest and gmock.
They allow you to describe more sophisticated expectations than just a function result.
For example, you can set an expectation that some method will be called during code execution, using the EXPECT_CALL macro.
More details here:
https://github.com/google/googletest/blob/master/googlemock/docs/ForDummies.md
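For instance, a sketch along these lines (the Storage interface, the add() variant that takes it, and all names here are assumptions made for the example, not part of the original code):
#include <gmock/gmock.h>

// Illustrative interface so that the storage interaction can be mocked.
class Storage {
public:
    virtual ~Storage() = default;
    virtual void store(int sum) = 0;
};

class MockStorage : public Storage {
public:
    MOCK_METHOD(void, store, (int sum), (override));
};

// Assumes add() has been changed to take the storage as an injected dependency.
TEST(AddTest, SendsSumToStorage) {
    MockStorage storage;
    EXPECT_CALL(storage, store(7)).Times(1); // expect the side effect, not a return value
    add(3, 4, storage);
}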
Answering your question: it's always bad practice to modify any part of the tested code for the purpose of testing. In that case you are no longer testing production code. As was suggested before, it's better to split the functionality into smaller parts and test them in isolation.
Changing the design of code to improve testability is very common and generally considered good practice. Obviously, not all such changes are necessarily real improvements - sometimes better solutions exist.
In your case, the code is difficult to test because it combines computations (the addition) with interactions with depended-on components (output and storing data). In your case (as Andrei has pointed out) the function also violates the SRP, but mixing computations and interactions generally makes testing more difficult, even in cases where the SRP is not violated.
As I understand it, you would change the function so that it returns the computed value in addition to printing it and writing it to storage. This would allow you to test the computational part of the function. The function would then, however, only be partially tested. And the purpose of the function would be further obfuscated.
If instead you split the function into one function doing the computation and one doing the interactions, you can thoroughly test the first with unit-testing, and use integration-testing for the other. Again, the usefulness of this approach is independent of whether the code violates the SRP or not.

What are strict and non-strict mocks?

I have started using Moq for mocking. Can someone explain the concept of strict and non-strict mocks to me? How can they be used in Moq?
Edit:
In which scenario do we use which type of mock?
I'm not sure about Moq specifically, but here's how strict mocks work in Rhino. I declare that I expect a call to foo.Bar on my object foo:
foo.Expect(f => f.Bar()).Returns(5);
If the calling code does
foo.Bar();
then I'm fine because the expectations are exactly met.
However, if the calling code is:
foo.Quux(12);
foo.Bar();
then my expectation failed because I did not explicitly expect a call to foo.Quux.
To summarize, a strict mock will fail immediately if anything differs from the expectations. On the other hand, a non-strict mock (or a stub) will gladly "ignore" the call to foo.Quux and it should return a default(T) for the return type T of foo.Quux.
The creator of Rhino recommends that you avoid strict mocks (and prefer stubs) because you generally don't want your test to fail when receiving an unexpected call, as above. It makes refactoring your code much more difficult when you have to fix dozens of tests that relied on the exact original behavior.
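The same strict/non-strict distinction exists in other frameworks. In GoogleMock, for example, it is spelled StrictMock and NiceMock; here is a C++ sketch, with Foo and MockFoo as illustrative types:
#include <gmock/gmock.h>

class Foo {
public:
    virtual ~Foo() = default;
    virtual int Bar() = 0;
    virtual void Quux(int n) = 0;
};

class MockFoo : public Foo {
public:
    MOCK_METHOD(int, Bar, (), (override));
    MOCK_METHOD(void, Quux, (int n), (override));
};

TEST(StrictVsNice, Demo) {
    ::testing::StrictMock<MockFoo> strict;
    EXPECT_CALL(strict, Bar()).WillOnce(::testing::Return(5));
    strict.Bar();       // fine: the call was expected
    // strict.Quux(12); // would fail the test: unexpected call on a strict mock

    ::testing::NiceMock<MockFoo> nice;
    EXPECT_CALL(nice, Bar()).WillOnce(::testing::Return(5));
    nice.Quux(12);      // silently ignored by a nice mock
    nice.Bar();
}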
Ever come across Given / When / Then?
Given a context
When I perform some events
Then an outcome should occur
This pattern appears in BDD scenarios, and is also relevant for unit tests.
If you're setting up context, you're going to use the information which that context provides. For instance, if you're looking up something by Id, that's context. If it doesn't exist, the test won't run. In this case, you want to use a NiceMock or a Stub or whatever - Moq's default way of running.
If you want to verify an outcome, you can use Moq's verify. In this case, you want to record the relevant interactions. Fortunately, this is also Moq's default way of running. It won't complain if something happens that you weren't interested in for that test.
StrictMock is there for when you want no unexpected interactions to occur. It's how old-style mocking frameworks used to run. If you're doing BDD-style examples, you probably won't want this. It has a tendency to make tests a bit brittle and harder to read than if you separate the aspects of behaviour you're interested in. You have to set up expectations for both the context and the outcome, for all outcomes which will occur, regardless of whether they're of interest or not.
For instance, if you're testing a controller and mocking out both your validator and your repository, and you want to verify that you've saved your object, with a strict mock you also have to verify that you've validated the object first. I prefer to see those two aspects of behaviour in separate examples, because it makes it easier for me to understand the value and behaviour of the controller.
In the last four years I haven't found a single example which required the use of a strict mock - either it was an outcome I wanted to verify (even if I verify the number of times it's called) or a context for which I can tell if I respond correctly to the information provided. So in answer to your question:
non-strict mock: usually
strict mock: preferably never
NB: I am strongly biased towards BDD, so hard-core TDDers may disagree with me, and it will be right for the way that they are working.
Here's a good article.
I usually end up having something like this
public class TestThis {

    private final Collaborator1 collaborator1;
    private final Collaborator2 collaborator2;
    private final Collaborator3 collaborator3;

    TestThis(Collaborator1 collaborator1, Collaborator2 collaborator2, Collaborator3 collaborator3) {
        this.collaborator1 = collaborator1;
        this.collaborator2 = collaborator2;
        this.collaborator3 = collaborator3;
    }

    public Login login(String username) {
        User user = collaborator1.getUser(username);
        collaborator2.notify(user);
        return collaborator3.login(user);
    }
}
...and I use strict mocks for the three collaborators to test login(username). I don't see why strict mocks should never be used.
I have a simple convention:
Use strict mocks when the system under test (SUT) is delegating the call to the underlying mocked layer without really modifying or applying any business logic to the arguments passed to itself.
Use loose mocks when the SUT applies business logic to the arguments passed to itself and passes on some derived/modified values to the mocked layer.
For example:
Let's say we have a database provider StudentDAL which has two methods.
The data access interface looks something like this:
public Student GetStudentById(int id);
public IList<Student> GetStudents(int ageFilter, int classId);
The implementation which consumes this DAL looks like this:
public Student FindStudent(int id)
{
    // StudentDAL dependency injected
    return StudentDAL.GetStudentById(id);
    // Use a strict mock to test this
}

public IList<Student> GetStudentsForClass(StudentListRequest studentListRequest)
{
    // StudentDAL dependency injected
    // The age filter is derived from the request and then passed on to the underlying layer
    int ageFilter = DateTime.Now.Year - studentListRequest.DateOfBirthFilter.Year;
    return StudentDAL.GetStudents(ageFilter, studentListRequest.ClassId);
    // Use a loose mock and Moq's Verify API to make sure the age filter is passed on correctly
}

Organizing unit test within a test class

Suppose I have several unit tests in a test class ([TestClass] in VSUnit in my case). I'm trying to test just one thing in each test (which doesn't mean just one Assert, though). Imagine there's one test (e.g. Test_MethodA()) that tests a method used in other tests as well. To avoid duplicity/maintainability issues, I do not want to put an assert on this method in the other tests that use it, so I have the assert in only this one test. Now when this test fails, all tests that depend on correct execution of that tested method fail as well. I want to be able to locate the problem faster, so I want to be somehow pointed to Test_MethodA. It would help, for example, if I could make some of the tests in the test class execute in a particular order; when they fail, I'd start looking for the cause of the failure in the first failing test. Do you have any idea how to do this?
Edit: By suggesting that a solution would be to execute the tests in a particular order, I have probably gone too far and in the wrong direction. I don't care about the order of the tests. It's just that some of the tests will always fail if a prerequisite isn't valid. E.g. I have a test class that tests a DAO class (OK, probably not a unit test, but there's logic in the database stored procedures that needs to be tested, though that's not the point here). I need to insert some records into a table in order to test that a method responsible for retrieving the records (let's call it GetAll()) gets them all in the correct order. I do the insert by using a method on the DAO class; let's call it Insert(). I have tests in place that verify that the Insert() method works as expected. Now I want to test the GetAll() method. In order to get the database into the desired state, I use the Insert() method. If Insert() doesn't work, most tests for GetAll() will fail. I'd prefer to mark the tests that can't pass because Insert() doesn't work as inconclusive rather than failed. It would ease finding the cause of the problem if I knew which method/test to look into first.
You can't (and shouldn't) execute unit tests in a specific order. The underlying reason for this is to prevent Interacting Tests - I realize that your motivation for requesting such a feature is different, but that's the reason why unit test frameworks don't allow you to order tests. In fact, last time I checked, xUnit.net even randomizes the order.
One could argue that the fact that some of your tests depend on a different method call on the same class is a symptom of tight coupling, but that's not always the case (state machines come to mind).
However, if possible, consider using a Back Door instead of the other method in question.
If you can't do either that or decouple the interdependency (e.g. by making the first method virtual and using the Extract and Override technique), you will have to live with it.
Here's an example:
public class MyClass
{
    public virtual void FirstMethod() { /* do something... */ }
    public void SecondMethod() { }
}
Since FirstMethod is virtual, you can derive from MyClass and override its behavior. You can also use a dynamic mock to do that for you. With Moq, it would look like this:
var sutStub = new Mock<MyClass>();
// By default, Moq overrides all virtual methods without calling base.
// Now invoke both methods in sequence:
sutStub.Object.FirstMethod(); // overridden by Moq, so it does nothing
sutStub.Object.SecondMethod();
I think I would indeed have the assertion on the method_A() result in every test relying on its result, even if this introduces some duplication. Then I would use the assertion message to point to the method_A() failure.
assert("method_A() returned true", true, rc);
Perhaps I will end up extracting the method_A() call and the assertion into a helper function to remove the duplication.
Now let's imagine method_A() queries an object and returns it, or NULL when no object is found. Then this assertion is a guard; it is necessary with languages such as C and C++ that do not have a NullPointerException.
I'm afraid you can't do this. The only solution is to redesign your code and break it up into smaller methods so that unit tests can call these one by one. Of course this isn't always desirable.
With Visual Studio you can order your tests: see here. But I'd like to advise you to stay away from this technique as much as possible: unit tests are meant to be run anywhere, anytime and in every order.
EDIT: why is this a problem for you? All failing tests point to the same method anyway...

What is the Pattern for Unit Testing flow control

I have a method that checks some assumptions and either follows the happy path or terminates along one of the unhappy paths. I've either designed it poorly, or I'm missing the method for testing the flow of control.
if (this.officeInfo.OfficeClosed)
{
    this.phoneCall.InformCallerThatOfficeIsClosedAndHangUp();
    return;
}
if (!this.operators.GetAllOperators().Any())
{
    this.phoneCall.InformCallerThatNoOneIsAvailableAndSendToVoicemail();
    return;
}
Call call = null;
foreach (var op in this.operators.GetAllOperators()) // "operator" is a C# keyword, hence "op"
{
    call = op.Call();
    if (call != null) { break; }
}
and so on. I've got my dependencies injected. I've got my mocks moq'd. I can make sure that this or that is called, but I don't know how to test that the "return" happens. If TDD means I don't write a line until I have a test that fails without it, I'm stuck.
How would you test it? Or is there a way to write it that makes it more testable?
Update: Several answers have been posted saying that I should test the resultant calls, not the flow control. The problem I have with this approach is that every test is required to set up and test the state and results of the other tests. This seems really unwieldy and brittle. Shouldn't I be able to test the first if clause alone, and then test the second one alone? Do I really need an exponentially expanding set of tests that start looking like Method_WithParameter_DoesntInvokeMethod8IfMethod7IsTrueandMethod6IsTrueAndMethod5IsTrueAndMethod4IsTrueAndMethod3IsFalseAndMethod2IsTrueAndMethod1isAaaaccck()?
I think you want to test the program's outputs: for example, that when this.officeInfo.OfficeClosed is true, the program does invoke this.phoneCall.InformCallerThatOfficeIsClosedAndHangUp() and does not invoke other methods such as this.operators.GetAllOperators().
I think your test does this by asking its mock objects (phoneCall, etc.) which of their methods were invoked, or by getting them to throw an exception if any of their methods are invoked unexpectedly.
One way to do it is to make a log file of the program's inputs (e.g. 'OfficeClosed returns true') and outputs: then run the test, let the test generate the log file, and then assert that the contents of the generated log file match the expected log file contents for that test.
I'm not sure that's really the right approach. You care about whether or not the method produced the expected result, not necessarily how control "flowed" through the particular method. For example, if phoneCall.InformCallerThatOfficeIsClosedAndHangUp is called, then I assume some result is recorded somewhere. So in your unit test, you would be asserting that result was indeed recorded (either by checking a database record, file, etc.).
With that said, it's important to ensure that your unit tests indeed cover your code. For that, you can use a tool like NCover to ensure that all of your code is being exercised. It'll generate a coverage report which will show you exactly which lines were executed by your unit tests and, more importantly, which ones weren't.
You could go ballistic and use a strategy pattern: something along the lines of having an interface IHandleCall with a single void method DoTheRightThing(), and three classes HandleOfficeIsClosed, HandleEveryoneIsBusy, and HandleGiveFirstOperatorAvailable, which implement the interface. Then have code like:
IHandleCall handleCall;
if (this.officeInfo.OfficeClosed)
{
    handleCall = new HandleOfficeIsClosed();
}
else if (!this.operators.GetAllOperators().Any()) // or whatever the other condition is
{
    handleCall = new HandleEveryoneIsBusy();
}
else
{
    handleCall = new HandleGiveFirstOperatorAvailable();
}
handleCall.DoTheRightThing();
return;
That way you can get rid of the multiple return points in your method. Note that this is a very dirty outline, but essentially at that point you should extract the if/else into some factory, and then the only thing you have to test is that your class calls the factory, and that handleCall.DoTheRightThing() is called - (and of course that the factory returns the right strategy).
In any case, because you have already guarded against no operator being available, you could simplify the end to:
var firstOperator = this.operators.FindFirst();
call = firstOperator.Call();
Don't test the flow control, just test the expected behavior. That is, unit testing does not care about the implementation details, only that the behavior of the method matches the specifications of the method. So if Add(int x, int y) should produce the result 4 on input x = 2, y = 2, then test that the output is 4 but don't worry about how Add produced the result.
To put it another way, unit testing should be invariant under implementation details and refactoring. But if you're testing implementation details in your unit tests, then you can't refactor without breaking them. For example, if you implement a method GetPrime(int k) to return the kth prime, then check that GetPrime(10) returns 29, but don't test the flow control inside the method. If you implement GetPrime using the Sieve of Eratosthenes, test the flow control inside the method, and later refactor to use the Sieve of Atkin, your unit tests will break. Again, all that matters is that GetPrime(10) returns 29, not how it does it.
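As a sketch, such an implementation-agnostic test (written here in GoogleTest, with GetPrime assumed from the example above) is just:
#include <gtest/gtest.h>

int GetPrime(int k); // assumed from the example above

// Passes whether GetPrime uses the Sieve of Eratosthenes or the Sieve of Atkin
// internally; only the observable result is asserted.
TEST(GetPrimeTest, TenthPrimeIs29) {
    EXPECT_EQ(29, GetPrime(10));
}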
If you are stuck while doing TDD, that's a good thing: it means TDD is driving your design and you are looking into how to change it so you can test it.
You can either:
1) verify state: check SUT state after SUT execution or
2) verify behavior: check that mock object calls complied with test expectations
If you don't like how either of these approaches look in your test it's time to refactor the code.
The pattern described by Aaron Feng and K. Scott Allen would solve my problem and its testability concerns. The only issue I see is that it requires all the computation to be performed up front: the decision data object needs to be populated before all of the conditionals. That's great unless it requires successive round trips to persistent storage.