I have this piece of code:
function aFunctionForUnitTesting(aParameter)
{
    return (aFirstCheck(aParameter) &&
            aOtherOne(aParameter) &&
            aLastOne(aParameter));
}
How can I unit test this?
My problem is the following: let's say I create this unit test:
FailWhenParameterDoesntFillFirstCheck()
{
Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameter));
}
How do I know that this test passes because of the first check? The function might have returned false because of the second or third check, so my firstCheck function could be buggy without the test catching it.
You need to pass a different parameter value: one that passes the first check and fails on the second.
FailWhenParameterDoesntFillSecondCheck()
{
Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameterThatPassesFirstCheckButFailsTheSecond));
}
You don't have to write all of these cases; it depends on what level of code coverage you want. You can get 'modified condition/decision coverage' with four tests (see the sketch below):
- one test where all checks pass
- one test for each check where only that check fails (three tests)
'Multiple condition coverage' requires eight tests here. Not sure I'd bother unless I was working for Airbus :-)
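A minimal sketch of those four tests in NUnit style. The parameter values allGood, failsFirstOnly, failsSecondOnly and failsThirdOnly are hypothetical; each of the failing ones is chosen so that exactly one check fails while the other two pass:

[Test]
public void PassesWhenAllChecksPass()
{
    Assert.IsTrue(new myObject().aFunctionForUnitTesting(allGood));
}

[Test]
public void FailsWhenOnlyFirstCheckFails()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(failsFirstOnly));
}

[Test]
public void FailsWhenOnlySecondCheckFails()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(failsSecondOnly));
}

[Test]
public void FailsWhenOnlyThirdCheckFails()
{
    Assert.IsFalse(new myObject().aFunctionForUnitTesting(failsThirdOnly));
}

Because && short-circuits, a value meant to exercise a later check has to pass all the earlier ones, otherwise that check is never reached.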
I have these two lines at different points of my code:
Message<T1> reply = (Message<T1>) template.sendAndReceive(channel1, message);
Message<T2> reply = (Message<T2>) template.sendAndReceive(channel2, message);
I am doing some unit testing and the test covers both statements. When I try to mock the behaviour, I define some behaviour like this:
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T1>>any() )).thenReturn(instance1);
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T2>>any() )).thenReturn(null);
When I execute the unit tests and do some debugging, the first statement returns null.
Do you have any idea why the matchers seem not to work, and why it always takes the last definition of the mock? I am using Mockito 1.1.10.
"When I execute the unit tests and do some debugging, the first statement returns null"
This happens because you stubbed the same method invocation twice with thenReturn(..), and the last stubbing (the one returning null) won. The type parameter in Matchers.<GenericMessage<T1>>any() exists only at compile time, so both of your stubbings match exactly the same call.
The proper way to achieve your goal is to provide a list of consecutive return values to be returned when the method is called:
Mockito.when(template.sendAndReceive(Matchers.any(MessageChannel.class), Matchers.any(GenericMessage.class)))
.thenReturn(instance1, null);
In this case, the returned value for the first invocation will be instance1, and all subsequent invocations will return null.
Another option, as Ashley Frieze suggested, would be to make template.sendAndReceive return different values based on its arguments:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Mockito.when(template.sendAndReceive(Matchers.same(channel2), Matchers.any(GenericMessage.class)))
.thenReturn(null);
Or, even shorter, we can omit the second stubbing, because the default return value for unstubbed mock method invocations is null:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Here we assume that channel1 and channel2 are in scope of the test class and are injected into the object under test (at least it seems so from the code snippet you provided in the question).
With regard to NUnit:
Is there a mechanism to conditionally ignore a specific test case?
Something along the lines of:
[TestCase(1,2)]
[TestCase(3,4, Ignore=true, IgnoreReason="Doesn't meet conditionA", Condition=IsConditionA())]
public void TestA(int a, int b)
So is there any such mechanism, or is the only way to create a separate test for each case and call Assert.Ignore in the test body?
You could add the following to the body of the test:
if (a == 3 && b == 4 && !IsConditionA()) { Assert.Ignore(); }
You would have to do this for every test case you want to ignore. You would not replicate the test body this way, but you would add to it for every ignored test case.
I think it helps test readability to minimize the conditional logic inside the test body. Instead, you can generate the test cases dynamically: put the TestCaseSource attribute on the test and, in a separate method, build the list of test cases to run using NUnit's TestCaseData object.
That way only the cases that are valid to execute are run, but you still have a chance to log or otherwise report the skipped cases.
http://www.nunit.org/index.php?p=testCaseSource&r=2.6.4
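A minimal sketch of that approach, assuming NUnit 3-style TestCaseData (older versions have a very similar API) and reusing the hypothetical IsConditionA() helper from the question (made static here so the source method can call it). This sits inside the test fixture class, with System.Collections.Generic and NUnit.Framework in scope:

private static IEnumerable<TestCaseData> TestACases()
{
    yield return new TestCaseData(1, 2);

    // Build the second case and mark it ignored when the condition is not met,
    // so it still shows up (with its reason) in the test results.
    var second = new TestCaseData(3, 4);
    if (!IsConditionA())
        second = second.Ignore("Doesn't meet conditionA");
    yield return second;
}

[TestCaseSource("TestACases")]
public void TestA(int a, int b)
{
    // assertions go here
}

You could also simply not yield cases that should not run, but marking them ignored keeps them visible in the output.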
I want to add a test case for functionality that is not yet implemented and mark it as "it's OK that this fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it fails as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework, and in LIT.
EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
    EXPECT_NONFATAL_FAILURE(
        // your code here, or just call:
        ADD_FAILURE()
        , "Some substring of the failure message you are expecting,"
          " if you want to be sure to catch the correct failure mode"
          " (pass \"\" to accept any message)" );
}
(Note: ADD_FAILURE() produces a non-fatal failure; a fatal failure such as FAIL() is not caught here and would need EXPECT_FATAL_FAILURE instead.)
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_ (for example, TEST_F( testclass, DISABLED_testname )); Google Test will then skip the test body but still report it as a disabled test in the summary.
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and they can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete even though the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false positives.
I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
Thank you for your help.
Best regards,
Pedro Magueija
EDIT: I'm using .NET, Visual Studio 2010 with VB.
If you find yourself writing many tests which vary only in initial input and final output, you should use a data-driven test. This allows you to define the test once, along with a mapping between inputs and outputs, and the unit testing framework will then interpret it as one test per case. How to actually do this depends on which framework you are using.
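For example, a minimal sketch using NUnit's TestCase attribute (the Calculator class, its Add method and the values are invented for illustration; MbUnit's RowTest and MSTest's data-driven tests work along the same lines):

[TestCase(0, 0, 0)]
[TestCase(1, 2, 3)]
[TestCase(-1, 1, 0)]
public void Add_ReturnsSum(int a, int b, int expected)
{
    // Each TestCase attribute runs and is reported as its own test.
    Assert.AreEqual(expected, new Calculator().Add(a, b));
}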
It's better to have a separate unit test for each input/output set, covering the full spectrum of possible values for the method you are trying to test (or at least those input/output sets that you want to unit test).
- Smaller tests are easier to read.
- The name is part of the documentation of the test.
- Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
// setup1
assert()
// setup2
assert()
// setup3
assert()
}
In my experience this gets very big very quickly, and so becomes hard to read and understand, so I would do:
void testDivideByZero() {
// setup
assert()
}
void testUnderflow() {
// setup
assert()
}
void testOverflow() {
// setup
assert()
}
"Should I write one test for each input value, or should I change the entryValues variable and call the assert method again for each one (covering the whole range of possible values)?"
If you have one code path, you typically do not test all possible inputs. What you usually want to test are "interesting" inputs that are good exemplars of the data you will get.
For example if I have a function
define add_one(num) {
return num+1;
}
I can't write a test for all possible values, so I may use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set, because they are good representatives of the interesting values I might get.
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser.
define execute(directive) {
if (directive == 'quit') { exit; }
elsif (directive == 'help') { print help; }
elsif (directive == 'connect') { initialize_connection(); }
else { warn("unknown directive"); }
}
For the purpose of clarity I used an if/elsif chain rather than a dispatch table. I think this makes it clear that each unique value that comes in has a different behavior, and therefore you would need to test every possible value.
Are you talking about this difference?
- (void) testSomething
{
[foo callBarWithValue:x];
assert…
}
- (void) testSomething2
{
[foo callBarWithValue:y];
assert…
}
vs.
- (void) testSomething
{
[foo callBarWithValue:x];
assert…
[foo callBarWithValue:y];
assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately. And of course, I only choose the latter when the test values really belong together and form a coherent unit.
You have two options really. You don't mention which test framework or language you are using, so one may not be applicable.
1) If your test framework supports it, use a RowTest. MbUnit and NUnit support this if you're using .NET; it allows you to put multiple attributes on your method, and each attribute line is executed as a separate test.
2) If not, write a test per condition and make sure you give each a meaningful name, so that if (when) a test fails you can find the problem easily and the name means something to you.
EDIT:
It's called TestCase in NUnit; see the NUnit TestCase documentation.
My question is about what we regard as a "unit" in unit testing.
Say I have a method like this:
public String myBigMethod()
{
String resultOne = moduleOneObject.someOperation();
String resultTwo = moduleTwoObject.someOtherOperation(resultOne);
return resultTwo;
}
(I have unit tests written for someOperation() and someOtherOperation() separately.)
This myBigMethod() essentially integrates ModuleOne and ModuleTwo by using them as above. Is myBigMethod() still considered a "unit"? Should I be writing a test for it?
Say I have written a test for myBigMethod(). If testSomeOperation() fails, it would also cause testMyBigMethod() to fail, and testMyBigMethod()'s failure might then point to the wrong location for the bug.
One cause making two tests fail doesn't look good to me, but I don't know if there's a better way. Is there?
Thanks!
You want to test the logic of myBigMethod without testing the dependencies.
It looks like the specification of myBigMethod is:
- call moduleOneObject.someOperation
- pass the result into moduleTwoObject.someOtherOperation
- return the result
The key to testing just this behavior is to break the dependencies on moduleOneObject and moduleTwoObject. Typically this is done by passing the dependencies into the class under test in the constructor (constructor injection) or setting them via properties (setter injection).
The question isn't just academic because in practice moduleOneObject and moduleTwoObject could go out and hit external systems such as a database. A true unit test doesn't hit external systems as that would make it an "integration test".
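A minimal sketch of the constructor-injection idea, using hand-written stubs (shown in C#; the interface, class and method names are invented for illustration, the same shape applies to the Java code above, and a mocking library could create the stubs instead):

public interface IModuleOne { string SomeOperation(); }
public interface IModuleTwo { string SomeOtherOperation(string input); }

public class BigClass
{
    private readonly IModuleOne moduleOne;
    private readonly IModuleTwo moduleTwo;

    // The dependencies are passed in, so a test can substitute stubs for the real modules.
    public BigClass(IModuleOne moduleOne, IModuleTwo moduleTwo)
    {
        this.moduleOne = moduleOne;
        this.moduleTwo = moduleTwo;
    }

    public string MyBigMethod()
    {
        return moduleTwo.SomeOtherOperation(moduleOne.SomeOperation());
    }
}

// Stubs with canned values let the test verify only the wiring inside MyBigMethod.
class StubModuleOne : IModuleOne { public string SomeOperation() { return "one"; } }
class StubModuleTwo : IModuleTwo { public string SomeOtherOperation(string input) { return input + "-two"; } }

[Test]
public void MyBigMethod_PassesResultOfModuleOneIntoModuleTwo()
{
    var sut = new BigClass(new StubModuleOne(), new StubModuleTwo());
    Assert.AreEqual("one-two", sut.MyBigMethod());
}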
The test for myBigMethod() should test the combination of the results of the other two methods. So yes, it should fail if either of the methods it depends on fails, but it should be testing more than that: there should be some case where someOperation() and someOtherOperation() both work correctly, yet myBigMethod() can still fail (for example, if it passed the wrong value into someOtherOperation() or returned the wrong result). If no such case is possible, then there's no need to test myBigMethod().