There is a rule about argument matchers for stubbed methods: if you use a matcher for one parameter, you must use matchers for all of them. An InvalidUseOfMatchersException is thrown if matchers are mixed with raw values. In that situation, the eq() matcher helps. For example, I want to verify that any integer divided by 0 throws MyException. The stubbed div() has two parameters: the first is given anyInt(), and the second eq(0) rather than the raw value 0.
public interface MatcherDemo {
double div(int x, int y) throws Exception;
}
@Test(expected = MyException.class)
public void test() throws Exception {
when(demo.div(anyInt(), eq(0))).thenThrow(new MyException());
demo.div(5, 0);
}
But I found that the eq() matcher cannot be applied to a double. If div()'s signature is changed to:
double div(double x, double y) throws Exception;
then Mockito throws an InvalidUseOfMatchersException.
I wonder whether this is because doubles cannot be compared precisely, and what can I do about it?
I am not sure what you tried, but using
when(demo.div(anyDouble(), eq(0d))).thenThrow(new MyException());
doesn't seem to have any problems.
Maybe you forgot to change the eq(0) expression to a double as well?
But I get an UnnecessaryStubbingException in that case, because the stubbing doesn't map to the correct method.
(Tested with JUnit 5 & Mockito 2.27.0)
I have a mock that represents an API wrapper.
class MockApiWrapper : public ApiWrapper {
public:
MockApiWrapper();
virtual ~MockApiWrapper();
MOCK_METHOD1(api_do, void(int param));
};
Let's assume that api_do should never be called with param = 0. Since I use this mock "everywhere", I would like to attach an assertion/expectation to every call made to api_do. Example:
void MyClass::InvalidCallsToApi(void) {
// api->api_do(0); // Fails "global assert"
// api->api_do(1); // Fails by specific test
api->api_do(2); // Valid call
}
TEST(MyTestCase, FirstTest) {
// Mock always checks that api_do is not called
// with argument of 0
EXPECT_CALL(api, api_do(Ne(1)));
uut->InvalidCallsToApi();
}
I tried doing this with an ON_CALL and Invoke in the constructor, but either it was overridden by the added EXPECT in the test, or I got compilation error (couldn't do ASSERT or EXPECT in invoked call).
I hope my problem statement is clear. Thanks in advance for any input!
I've come up with one solution; it's not the nicest, but acceptable IMO.
Code:
struct BInterface {
virtual void foo(int) = 0;
};
struct BMock : public BInterface {
MOCK_METHOD1(foo, void(int));
BMock() {
ON_CALL(*this, foo(0))
.WillByDefault(::testing::InvokeWithoutArgs([](){ADD_FAILURE() << "This function can't be called with argument 0";}));
}
};
void testedMethod(int a) {
BInterface* myB = new BMock;
myB->foo(a);
delete myB;
}
TEST(myTest, okCase) {
testedMethod(1);
}
TEST(myTest, notOkCase) {
testedMethod(0);
}
Explanation:
We add a default action to BMock, for every call of foo method with argument 0.
In this action, we call a lambda, which uses GTest macro ADD_FAILURE() to generate a non-fatal fail - equivalent of EXPECT_* macros. You can use FAIL() instead for a fatal failure like in ASSERT_* macros.
We use the ON_CALL macro in the mock's constructor, which lets us avoid repeating it for every mock object.
Limitations:
The same trick won't work with EXPECT_CALL, for example - I don't know the GMock implementation, but I assume EXPECT_CALL requires a fully initialized object.
A call with a matcher that accepts 0 will still pass (e.g. EXPECT_CALL(myB, foo(::testing::_));), but that's the case with every other GMock expectation too. GMock always lets newer expectations shadow older ones, so you have to write your expectations in such a way that they won't override the previous ones.
Adding .RetiresOnSaturation() to all your EXPECT_CALL will make sure that calls are forwarded to default action (set by ON_CALL), when they are not interesting.
Custom matchers will be helpful in cases when there are multiple disallowed values.
MATCHER(IsValidApiArg, ""){return arg != 0 && arg != 1;}
ON_CALL(*this, api_foo(::testing::Not(IsValidApiArg())))
.WillByDefault(::testing::InvokeWithoutArgs([](){ADD_FAILURE();}));
EXPECT_CALL(myMock, api_foo(IsValidApiArg()));
Note: I still can't believe that GMock doesn't provide a default action for simply generating a failure. Perhaps you can find something better suited buried in the documentation.
You can also create a custom action for that, to avoid all that Invoke and lambdas.
I'm working on a custom checker for the clang static analyzer that checks for incorrect use of CPython APIs. I've made some progress, but I'm stuck: how can I get a clang::QualType value given the name of a type?
For example, I'd like to write something like this:
QualType ty = findTheTypeNamed("Py_ssize_t");
I've spent time looking at the code for clang::ASTContext and clang::QualType, but I'm lost.
How can I get a clang::QualType from the name of a type?
The asString narrowing matcher matches a QualType against its string representation.
Here is the associated documentation:
Matches if the matched type is represented by the given string.
Given
class Y { public: void x(); };
void z() { Y* y; y->x(); }
cxxMemberCallExpr(on(hasType(asString("class Y *"))))
matches y->x()
I read through "Google Mock: Return() a list of values" and found out how to return a single element from a vector on each EXPECT_CALL. So I wrote the following code, which works:
{
testing::InSequence s1;
for (auto anElem:myVecCollection) {
EXPECT_CALL(myMockInstance, execute())
.WillOnce(testing::Return(anElem));
}
}
so far so good...
Now I've read that you shouldn't use EXPECT_CALL unless you need to: https://groups.google.com/forum/#!topic/googlemock/pRyZwyWmrRE
My use case, myMockInstance is really a stub providing data to the SUT(software under test).
However, a simple EXPECT_CALL-to-ON_CALL replacement will not work (I think?), since ON_CALL with WillByDefault evaluates the return value only once (I think?).
As such I tried setting up an ACTION.
ACTION_P(IncrementAndReturnPointee, p)
{
return (p)++;
}
ON_CALL(myMockInstance, execute())
.WillByDefault(testing::Return
(*(IncrementAndReturnPointee(myVecCollection.cbegin()))));
Clang gives
error: expected expression 'ACTION_P(IncrementAndReturnPointee, p)'
Then I tried setting up a functor and use the Invoke method on it.
struct Funct
{
Funct() : i(0){}
myClass mockFunc(std::vector<myClass> &aVecOfMyclass)
{
return aVecOfMyclass[i++];
}
int i;
};
Funct functor;
ON_CALL(myMockInstance, execute())
.WillByDefault(testing::Return(testing::Invoke(&functor, functor.mockFunc(myVecCollection))));
Clang gives
no matching function for call to 'ImplicitCast_'
: value_(::testing::internal::ImplicitCast_<Result>(value)) {}
Now, I am fairly new to Google Mock but have used Google Test extensively.
I am a bit lost in the Google Mock docs. I want to know whether I am on the right path in terms of what I need.
If one of you could point out which approach is the correct one, or whether I am even close, I can take it from there and debug the "close to right" approach further.
Thanks
testing::Return is an action. Your code should look like:
ACTION_P(IncrementAndReturnPointee, p)
{
return *(p++);
}
ON_CALL(myMockInstance, execute())
.WillByDefault(IncrementAndReturnPointee(myVecCollection.cbegin()));
As a side note, it doesn't look like a good idea to use a finite collection myVecCollection. You will probably get a more robust test if you figure out an implementation of the action that creates a new element to return on the fly.
Is there any way to "force" a function parameter to follow some rule in C++?
For the sake of example, let's say I want to write a function which computes the nth derivative of a mathematical function. Suppose the signature is this one:
double computeNthDerivative(double x, unsigned int n);
Now, let's say I want to forbid users from passing 0 for n. I could just use an assert, or test the value and throw an exception if the user input is 0.
But is there any other way of doing this kind of stuff ?
Edit: conditions would be set at compile time, but the check must be done at run time.
You can prevent the use of 0 at compile time, using templates.
template <int N>
double computeNthDerivative(double x)
{
// Disallow its usage for 0 by using static_assert.
static_assert(N != 0, "Using 0 is not allowed");
// Implement the logic for non-zero N
}
To prevent the use of the function for 0 at run time, it's best to throw an exception.
double computeNthDerivative(double x, unsigned int n)
{
if ( n == 0 )
{
throw std::out_of_range("Use of the function for n = 0 is not allowed.");
}
// Implement the logic for non-zero n
}
class Policy {
private:
    std::string myPolicy;
public:
    Policy(std::string regEx) : myPolicy(std::move(regEx)) {}
    void verify(int n) {
        // match n against myPolicy here (regex, sprintf, etc.)
        // and throw if it doesn't match
    }
};
class Asserted {
private:
    Policy policy;
public:
    Asserted(Policy policy, int n) : policy(std::move(policy)) {
        // throws if verification fails
        this->policy.verify(n);
    }
};
Then finally
Asserted asserted(Policy("[1-9]"), 8);  // throws if 8 violates the policy
double result = computeNthDerivative(2.6, 8);
I think the best way here is to throw an exception. This is what exceptions are for, even the name seems to suggest this.
As to the assert macro, there is one important caveat: if an assertion fails, the program aborts. However, in a release build where the NDEBUG macro is defined, all assertions are removed during compilation. This means you cannot rely on assert to validate user input, because release builds won't perform the check.
The only rules are the ones you build in. If users are the ones you want to restrict, you have to check what they pass in.
The same goes for functions. However, in the example you showed, it is better to check the variable right after the cin (or whatever input you prefer) rather than inside the function itself. For this I would just go:
if (n != 0)
    // call your function
else
    break;  // or otherwise bail out
So if you are looking for a "Policy" based solution you could create a separate class which accepts a defining regular expression (or whatever you define as a policy) and the input, in this case n, which would then be used as the input to your function.
I have come to something of a crossroads. I recently wrote a 10,000-line application with no TDD (a mistake, I know). I definitely ran into a very large number of errors, and now I want to retrofit the project with tests. Here is the problem I ran into, though. Let's take an example of a function that does division:
public int divide (int var1, int var2){
if (var1 == 0 || var2 == 0)
throw new RuntimeException("One of the parameters is zero");
return var1 / var2;
}
In this situation I'm throwing a runtime exception so that I can fail fast and at least find out that my code is broken somewhere. The question is twofold. First, am I making correct use of exceptions here? Second, how do I write a test that works with this exception? Obviously I want it to pass the test, but in this case it's going to throw.
Not too sure how one would work that out. Is there a different way that this is generally handled with TDD?
Thanks
First, your first argument (the numerator) being zero probably shouldn't cause an exception to be thrown. The answer should just be zero. Only throw an exception when a user tries to divide by zero.
Second, there are two ways (using JUnit) to test that exceptions are thrown when they should be. The first "classic" method:
@Test
public void testForExpectedExceptionWithTryCatch()
throws Exception {
try {
divide (1, 0);
fail("division by zero should throw an exception!");
} catch (RuntimeException expected) {
// this is exactly what you expect so
// just ignore it and let the test pass
}
}
The newer method in JUnit 4 uses annotations to cut down on the amount of code you need to write:
@Test(expected = RuntimeException.class)
public void testForExpectedExceptionWithAnnotation()
throws Exception {
divide (1, 0);
}
Here, because we added (expected = RuntimeException.class) to the annotation, the test will fail if the call to divide doesn't throw a RuntimeException.
To answer your first question:
If it's quite likely the denominator argument to divide will be 0 then you shouldn't be using exception handling to trap the error. Exceptions are expensive and shouldn't be used to control program flow. So you should still check, but return an error code (or use a nullable type as the return value) and your calling code should check on this and handle it appropriately.
public int? divide (int var1, int var2)
{
if (var2 == 0)
{
return null; // Calling method must check for this
}
return var1 / var2;
}
If zeros are truly the exception - e.g. there should be no way that they can be passed - then do as you do now.
To answer your second question:
In your test methods that check the failure code you need an exception handler:
try
{
divide (1, 0);
// If it gets here the test failed
}
catch (RuntimeException ex)
{
// If it gets here the test passed
}
I am not answering your main question.
I would suggest using ArgumentException instead of RuntimeException.
EDIT: I am assuming .net :)
Your question was language-agnostic, so my answer might not apply, but NUnit in .NET (and I believe JUnit too) have a specific notation for testing exceptions. In NUnit, your test would look like this:
[Test]
[ExpectedException(typeof(RuntimeException))]
public void DivideByZeroShouldThrow()
{
divide(1,0);
}
The test will fail if the right type of exception is not thrown during the execution.
The try/catch approach works too, and has its advantages (you can pinpoint exactly where you expect the exception to occur), but it can end up being pretty tedious to write.
The first question is answered well by ChrisF and Bill the Lizard.
I just want to add an alternative to the exception test, with C++11 you can use a lambda directly in your test.
Assert::ExpectException<std::invalid_argument>([] { return divide(1,0); }, "Division by zero should throw an exception.");
This is equivalent to:
try
{
divide(1,0);
Assert::Fail("Division by zero should throw an exception.");
}
catch (const std::invalid_argument&)
{
//test passed
}