Turtle Mock: How to ignore unexpected calls? - c++

Is it possible to ignore unexpected method calls with Turtle Mock? During my test the mocked method is called multiple times, but I want to check only one invocation with specific parameters per test. Right now I have to write one enormous test that specifies every method invocation.

The expectation selection algorithm describes how multiple expectations are matched:
Each method call is then handled by processing the expectations in the order they have been defined:
looking for a match with valid parameter constraints evaluated from left to right
checking that the invocation count for this match is not exhausted
So if you set the one you expect and a generic one such as
MOCK_EXPECT( v.display ).once().with( 0 );
MOCK_EXPECT( v.display );
it should quiet the other calls while still making sure the one you care about will be fulfilled.
Now if you wanted to enforce the order of the calls, for instance to make sure the one you’re interested in happens first, you would have to use a sequence, such as
mock::sequence s;
MOCK_EXPECT( v.display ).once().with( 0 ).in( s );
MOCK_EXPECT( v.display ).in( s );
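For completeness, here is a minimal, self-contained sketch of the kind of setup those snippets assume: a view interface with a one-argument display method, mocked with Turtle and driven from a Boost.Test case. The class name, test name, and call values are placeholders matching the v.display used above.

#define BOOST_TEST_MODULE turtle_example
#include <boost/test/included/unit_test.hpp>
#include <turtle/mock.hpp>

class view
{
public:
    virtual ~view() = default;
    virtual void display( int value ) = 0;
};

MOCK_BASE_CLASS( mock_view, view )
{
    MOCK_METHOD( display, 1 )
};

BOOST_AUTO_TEST_CASE( display_called_with_zero_first_then_anything )
{
    mock_view v;
    mock::sequence s;
    MOCK_EXPECT( v.display ).once().with( 0 ).in( s );
    MOCK_EXPECT( v.display ).in( s );

    v.display( 0 ); // fulfills the specific expectation
    v.display( 5 ); // absorbed by the catch-all expectation
    v.display( 7 ); // also absorbed
}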

Related

How can I have different global parameters for different constraint handlers in SCIP Optimization Suite?

Say I have three constraint handlers ConsHdlr1, ConsHdlr2 and ConsHdlr3, and I want them to have different parameter values; for example, ConsHdlr1's minefficacy should be 0.001, ConsHdlr2's should be 0.0001, while ConsHdlr3's is 0.005. I see that there is a separating/minefficacy = 0.0001 parameter, but I assume it applies to all three constraint handlers. Is there a way to specify parameters for each constraint handler separately? I was hoping to set the parameters to my desired values when the cut loop starts and reset them when it ends, but I am not sure where to put those calls.
I think the only thing you can do is to check, directly inside your constraint handler, whether the cut you generated fulfills the efficacy requirement you want, and then force the cut into the LP with the corresponding flag of SCIPaddRow.
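A rough sketch of that inside a separation callback might look like the following. The helper name, the myminefficacy threshold, and the surrounding callback are assumptions; SCIPgetCutEfficacy and SCIPaddRow are the actual SCIP calls.

#include <scip/scip.h>

/* Hypothetical helper, meant to be called from a constraint handler's
 * separation callback with a row that handler just generated. */
static SCIP_RETCODE addCutIfEfficaciousEnough(
   SCIP*      scip,           /* SCIP instance */
   SCIP_ROW*  row,            /* candidate cut */
   SCIP_Real  myminefficacy,  /* per-handler efficacy threshold (assumption) */
   SCIP_Bool* infeasible      /* output: adding the row made the LP infeasible */
   )
{
   *infeasible = FALSE;
   if( SCIPgetCutEfficacy(scip, NULL, row) >= myminefficacy )
   {
      /* forcecut = TRUE adds the row regardless of the global
       * separating/minefficacy filter */
      SCIP_CALL( SCIPaddRow(scip, row, TRUE, infeasible) );
   }
   return SCIP_OKAY;
}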

Implementing WillN in GoogleMock?

Is there a neater and/or briefer way to set multiple identical actions than repeated use of WillOnce? Is there a way for WillRepeatedly to have a cardinality, for example?
I can only find examples that chain WillOnce potentially followed by a single WillRepeatedly, which is less than ideal for situations where I may want to return a value the first N times a function is called and then return a different value the last time, e.g. using a mock to represent obj in the following example and have it loop N times:
while (!obj.IsDone())
{
    SomeAction(obj.NextItem());
}
You can use Times.
// This is the final call
EXPECT_CALL(obj, IsDone())
    .WillOnce(Return(true));
// These are the intermediate calls
EXPECT_CALL(obj, IsDone())
    .Times(N)
    .WillRepeatedly(Return(false))
    .RetiresOnSaturation();
The mock object's IsDone method will return false the first N times it's called. After that, the most recent expectation will have been satisfied, so we instruct it to no longer apply by using RetiresOnSaturation. Subsequent calls to IsDone will be handled by the first expectation, causing it to return true. If it's called any more times, the test will fail.
If you omit RetiresOnSaturation, then the second expectation will continue to apply; it will continue returning false, and you'll get messages alerting you that the "over-saturated and active" expectation is failing.
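A self-contained sketch of what the example above assumes; the MockIterator name and the method signatures are invented for illustration, using the current MOCK_METHOD macro.

#include <gmock/gmock.h>
#include <gtest/gtest.h>

using ::testing::Return;

class MockIterator {
public:
    MOCK_METHOD(bool, IsDone, ());
    MOCK_METHOD(int, NextItem, ());
};

TEST(WillNExample, ReturnsFalseNTimesThenTrue) {
    MockIterator obj;
    const int N = 3;

    // This is the final call (defined first; gMock matches newer expectations first)
    EXPECT_CALL(obj, IsDone()).WillOnce(Return(true));
    // These are the intermediate calls
    EXPECT_CALL(obj, IsDone())
        .Times(N)
        .WillRepeatedly(Return(false))
        .RetiresOnSaturation();
    EXPECT_CALL(obj, NextItem()).Times(N);

    while (!obj.IsDone())
        obj.NextItem();
}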

Defining Idempotence

So "idempotence" can be defined as:
An action that, if performed N times, has the same effect as performing it only once.
Got it, easy enough.
My question is about the subtlety of this definition: is an action considered idempotent by itself, or must you also consider the data being passed into the action?
Let me clarify with an example:
Suppose I have a PUT method that updates some resource, we'll call it f(x)
Obviously, f(3) is idempotent, as long as I supply 3 as the input. And equally obvious, f(5) will change the value of the resource (i.e., it will no longer be 3 or whatever value was there previously)
So when we talk about idempotence, are we referring to the generalization of the action/function (i.e., f(x)), or are we referring to the action/function plus the data being passed into it (i.e., f(3))?
Suppose I have a PUT method that updates some resource, we'll call it f(x). Obviously, f(3) is idempotent, as long as I supply 3 as the input. And equally obvious, f(5) will change the value of the resource (i.e., it will no longer be 3 or whatever value was there previously).
This is only obvious if the server implementation is such that PUT respects this idempotent property. In the context of HTTP, RFC 2616 says:
Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.
Going a bit off topic...
In a distributed system like the web, you may also want to consider commutativity and concurrent requests. For example, N+1 repetitions of the same PUT(x1) request should have the same effect as one, but you don't know whether another client made a different PUT(x2) request in between yours; so while nPUT(x1)=PUT(x1) and mPUT(x2)=PUT(x2), the two sets of requests could be interleaved.
Idempotence requires that the action holds for all values over its domain, i.e., f(f(x)) = f(x) for all x. Another way to think about it is that an operation is idempotent if the composition of the operation with itself is just that operation.
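A concrete (hypothetical) illustration of f(f(x)) = f(x): clamping a value into a range is idempotent over its whole domain, not just for values already inside the range.

#include <algorithm>
#include <cassert>

int clampToRange(int x) { return std::clamp(x, 0, 100); }  // f(x)

int main()
{
    // f(f(x)) == f(x) for every x in the tested domain
    for (int x = -50; x <= 150; ++x)
        assert(clampToRange(clampToRange(x)) == clampToRange(x));
    return 0;
}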
You're assuming idempotence means that the state of the server will be changed at most once by a series of invocations. Most of the time, people use this term to mean that the state on the server won't be changed at all by any number of invocations. Under these circumstances, the distinction between your two cases is immaterial.
This is not quite the definition of idempotence. A function is idempotent if for any item x, f(f(x)) == f(x).
PUT is a side effect of your f() function here, not the result of it.

Should I unit-test with data that should not be passed in a function (invalid input)?

I am trying to use TDD in my coding practice. I would like to ask whether I should test a function with data that should never be passed to it, but that could break the program if it were.
Here is a simple example to illustrate what I am asking:
a ROBOT function that takes one int parameter. I know that the valid range is only 0-100. If -1 or 101 is passed in, the function will break.
int ROBOT(int num) {
    // ...
    return result;
}
So I decided on some automated test cases for this function:
1. function ROBOT with input argument 0
2. function ROBOT with input argument 1
3. function ROBOT with input argument 10
4. function ROBOT with input argument 100
But should I write test cases with input argument -1 or 101 for this ROBOT function if I already guard against those values in the other function that calls ROBOT?
5. function ROBOT with input argument -1
6. function ROBOT with input argument 101
I don't know if it is necessary, because testing -1 and 101 feels redundant. And if it really is necessary to cover all the cases, I have to write more code to guard against -1 and 101.
So in common TDD practice, would you write test cases for -1 and 101 as well?
Yes, you should test those invalid inputs. BUT, if your language has accessibility modifiers and ROBOT() is private, you shouldn't be testing it; you should only test public functions/methods.
The functional testing technique is called Boundary Value Analysis.
If your range is 0-100, your boundary values are 0 and 100. You should test, at least:
below the boundary value
the boundary value
above the boundary value
In this case:
-1,0,1,
99,100,101
You assume everything from -1 down to negative infinity behaves the same, everything between 1 and 99 behaves the same, and everything above 101 behaves the same. This is called Equivalence Partitioning. The ranges outside and between the boundary values are called partitions, and you assume that they will have equivalent behaviour.
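A sketch of those boundary cases with Google Test; the ROBOT declaration and the choice to throw std::out_of_range for invalid input are assumptions here, not part of the question.

#include <gtest/gtest.h>
#include <stdexcept>

int ROBOT(int num);  // function under test, assumed declared elsewhere

TEST(RobotTest, AcceptsValuesAtAndAboveLowerBoundary) {
    EXPECT_NO_THROW(ROBOT(0));
    EXPECT_NO_THROW(ROBOT(1));
}

TEST(RobotTest, AcceptsValuesAtAndBelowUpperBoundary) {
    EXPECT_NO_THROW(ROBOT(99));
    EXPECT_NO_THROW(ROBOT(100));
}

TEST(RobotTest, RejectsValuesOutsideTheRange) {
    EXPECT_THROW(ROBOT(-1), std::out_of_range);
    EXPECT_THROW(ROBOT(101), std::out_of_range);
}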
You should always consider using -1 as a test case to make sure nothing funny happens with negative numbers, and a text string if the parameter is not strongly typed.
If the expected outcome is that an exception is thrown with invalid input values, then a test that the exceptions get properly thrown would be appropriate.
Edit:
As I noted in my comment below, if these cases will break your application, you should throw an exception. If it really is logically impossible for these cases to occur, then I would say no, you don't need to throw an exception, and you don't need test cases to cover it.
Note that if your system is well componentized, and this function is one component, the fact that it is logically impossible now doesn't mean it will always be logically impossible. It may be used differently down the road.
In short, if it can break, then you should test it. Also validate data at the earliest point possible.
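As a minimal sketch of validating at the earliest point, one option is a guard clause that fails loudly. The exception type is an assumption, and the question's actual ROBOT body stays elided.

#include <stdexcept>

int ROBOT(int num)
{
    // Guard clause: reject out-of-range input immediately so callers
    // (and tests) see a clear failure instead of silent breakage.
    if (num < 0 || num > 100)
        throw std::out_of_range("ROBOT: num must be in [0, 100]");
    // ... actual work ...
    return num;  // placeholder result
}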
The answer depends on whether you control the inputs passed to Robot. If Robot is an internal class (in C# terms) and values only flow in from RobotClientX, which is a public type, then I'd put the guard checks in RobotClientX and write tests for it. I'd not write tests for Robot, because invalid values cannot materialize in between.
e.g. if I put my validations in the GUI such that all invalid values are filtered off at the source, then I don't check for invalid values in all classes below the GUI (Unless I've also exposed a public API which bypasses the GUI).
On the other hand, if Robot is publicly visible, i.e. anyone can call Robot with any value they please, then I need tests that document its behavior given specific kinds of input, invalid input being one of them, e.g. if you pass an out-of-range value, it'd throw an ArgumentException.
You said your method will raise an exception if the argument is not valid.
So, yes you should, because you should test that the exception gets raised.
If other code guards against calling that method incorrectly, and no one else will be writing code to call that method, then I don't see a reason to test with invalid values. To me, it would seem a waste of time.
The programming by contract style of design and implementation draws attention to the fact that a single function (method) should be responsible for only some things, not for everything. The other functions that it calls (delegates to) and which call it also have responsibilities. This partition of responsibilities is at the heart of dividing the task of programming into smaller tasks that can be performed separately. The contract part of programming by contract is that the specification of a function says what a function must do if and only if the caller of the function fulfills the responsibilities placed on the caller by that specification. The requirement that the input integer is within the range [0,100] is that kind of requirement.
Now, unit tests should not test implementation details. They should test that the function conforms to its specification. This enables the implementation to change without the tests breaking. It makes refactoring possible.
Combining those two ideas, how can we write a test for a function that is given some particular invalid input? We should check that the function behaves according to the specification. But the specification does not say what the function must do in this case. So we can not write any checks of the program state after the invalid function call; the behaviour is undefined. So we can not write such a test at all.
My answer is that, no, you don't want exceptions, and you don't want to have ROBOT() check for out-of-range input. The clients should be so well behaved that they don't pass garbage values in.
You might want to document this: just say that clients must be careful about the values they pass in.
Besides, where are you going to get invalid values from? Well, user input, or by converting strings to numbers. But in those cases it should be the conversion routines that perform the checks and give feedback about whether the values are valid or not. The values should be guaranteed to be valid long before they get anywhere near ROBOT()!

Designing a robust unit test - testing the same logic in several different ways?

In unit test design, it is very easy to fall into the trap of actually just calling your implementation logic.
For example, if testing an array of ints which should all be two higher than the other (2, 4, 6, 8, etc), is it really enough to get the return value from the method and assert that this pattern is the case?
Am I missing something? It does seem like a single unit test method needs to be made more robust by testing the same expectation in several ways. So the above expectation can be asserted by checking the increase of two is happening but also the next number is divisible by 2. Or is this just redundant logic?
So in short, should a unit test test the one expectation in several ways? For example, if I wanted to test that my trousers fit me, I would/could measure the length, put it next to my leg and see the comparison, etc. Is this the sort of logic needed for unit testing?
Thanks
Your unit tests should check all of your assumptions. Whether you do that in 1 test or multiple tests is a personal preference.
In the example you stated above, you had two different assumptions: (1) Each value should increment by 2. (2) All values should be even.
Should (-8,-6,-4,-2) pass/fail?
Remember, ensuring your code fails when it's supposed to is just as important, if not more important, then making sure it passes when it's supposed to.
If you assert that your array contains 2,4,6,8, then your testing logic might be flawed: your test would pass if you just returned an array with exactly those elements, but not with, say, 6,8,10,12. You need to test that the calculation is correct. So you need to test it with multiple arrays, in this particular case.
I find that making sure the test fails, then making the test pass, in the true spirit of TDD, helps flush out what the correct test is...
The array you are testing must be generated by some sort of logic. Isn't it better to test this logic to ensure that the resulting array always meets your requirements?
For example, if testing an array of ints which should all be two higher than the other (2, 4, 6, 8, etc), is it really enough to get the return value from the method and assert that this pattern is the case?
Perhaps you need to think a little more about how the function would be used. Will it be used with very large numbers? If so, then you may want to try some tests with very large numbers. Will it be used with negative numbers?
Am I missing something? It does seem like a single unit test method needs to be made more robust by testing the same expectation in several ways. So the above expectation can be asserted by checking the increase of two is happening but also the next number is divisible by 2. Or is this just redundant logic?
Hmm... well 1,3,5,7 would pass the assertEachValueIncrementsByTwo test, but it would not pass the assertValuesDivisibleByTwo test. Does it matter that the values are divisible by 2? If so, then you really should test that. If not, then it's a pointless redundant test.
You should try to find more than 1 test for your methods, but redundant tests for the sake of more testing is not going to help you. Adding the assertValuesDivisibleByTwo test when that is not really required will just confuse later developers who are trying to modify your code.
If you can't think of any more tests, try writing a random input function that will generate 100 random test arrays each time you run your tests. You'd be surprised how many bugs escape under the radar when you only check one or two input sets.
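For example, a randomized check along those lines might look like this; generateSequence is an assumed function under test, and the fixed seed keeps any failures reproducible.

#include <gtest/gtest.h>
#include <cstddef>
#include <random>
#include <vector>

std::vector<int> generateSequence(std::size_t length);  // assumed function under test

TEST(SequenceTest, RandomLengthsStillStepByTwo) {
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<std::size_t> lengthDist(0, 1000);

    for (int run = 0; run < 100; ++run) {
        const auto v = generateSequence(lengthDist(rng));
        for (std::size_t i = 1; i < v.size(); ++i)
            EXPECT_EQ(v[i] - v[i - 1], 2);
    }
}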
I'd recommend multiple tests. If you ever need to change the behaviour, you'd like to have as few tests to change as possible. This also makes it easier to find what the problem is. If you really blow the implementation and get [1,3,4,5], your one test will fail, but you'll only get one failure, for the first thing you check, when there are actually two different problems.
Try naming your tests. If you can't say in one clear method name what you're testing break up the test.
testEntriesStepByTwo
testEntriesAllEven
Also don't forget the edge cases. The empty list will likely pass both the 'each entry is 2 more than the previous' and 'all entries are even' tests, but should it?
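A hypothetical split along the two names above, plus the empty-list edge case; generateSequence is again an assumed function under test, not anything from the question.

#include <gtest/gtest.h>
#include <cstddef>
#include <vector>

std::vector<int> generateSequence(std::size_t length);  // assumed function under test

TEST(Sequence, EntriesStepByTwo) {
    const auto v = generateSequence(4);
    for (std::size_t i = 1; i < v.size(); ++i)
        EXPECT_EQ(v[i] - v[i - 1], 2);
}

TEST(Sequence, EntriesAllEven) {
    for (int x : generateSequence(4))
        EXPECT_EQ(x % 2, 0);
}

TEST(Sequence, EmptyRequestYieldsEmptyList) {
    EXPECT_TRUE(generateSequence(0).empty());
}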