I'm implementing families of algorithms in Java and want to run the same set of algorithm family tests on each algorithm implementation. Let's look at the sorting algorithm case.
I have a few implementations of sorting algorithms. I want to run the same suite of parameterized tests on each implementation without copy/pasting the tests for each interface implementation.
An answer I've seen a few times to "how do I run the same set of tests on multiple class implementations" is to use a parameterized test with a list of the implementation classes to be tested as the input. However, these examples use the class as the only parameter. I want to run parameterized tests on the class under test.
The online examples of parameterized tests I've encountered use simple parameters (a single object/primitive or a list of objects/primitives). In my case, I would want to provide both the class to be tested and an array of values. I think this is possible, but it feels ugly, and I would have to repeat the same test cases for each class type, like so (not actual Java syntax):
BubbleSorter.class, [1,2,3]
BubbleSorter.class, [3,2,1]
BubbleSorter.class, [-1,2,0]
MergeSorter.class, [1,2,3]
MergeSorter.class, [3,2,1]
MergeSorter.class, [-1,2,0]
InsertionSorter.class, [1,2,3]
...
Ideally, there would be some way to set the sort implementation once and let all of the parameterized tests worry only about the lists of values to be sorted and not which sort class to use.
My gut feeling says to use a parent abstract SorterTest containing the parameterized tests, with a factory method that lets each subclass (e.g. MergeSorterTest) decide which sorter implementation is used (see the sketch below). However, a quick google about this seems to imply that using inheritance to reuse test cases in test code is frowned upon.
What is the recommended method for this situation? Is inheritance in test code allowable in some circumstances like this, or is there always a better alternative?
Also I'm using JUnit 5.
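For reference, a minimal sketch of the inheritance approach I have in mind, assuming a common Sorter interface (all names are illustrative):

import java.util.List;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;

abstract class AbstractSorterTest {

    // Factory method: each subclass picks the implementation under test.
    protected abstract Sorter createSorter();

    @ParameterizedTest
    @MethodSource("lists")
    void sortsList(List<Integer> input) {
        Sorter sorter = createSorter();
        // ... exercise sorter with input and assert on the result
    }

    static Stream<List<Integer>> lists() {
        return Stream.of(List.of(1, 2, 3), List.of(3, 2, 1), List.of(-1, 2, 0));
    }
}

class MergeSorterTest extends AbstractSorterTest {
    @Override
    protected Sorter createSorter() {
        return new MergeSorter();
    }
}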
In my case, I would want to provide the class to be tested and an array of values.
You can combine multiple sources within e.g. a @MethodSource. Let's assume you have something like a common Sorter interface:
class SorterTest {

    @ParameterizedTest
    @MethodSource("args")
    void test(Sorter sorter, List<Integer> list) {
        // ...
    }

    static Stream<Arguments> args() {
        // Combines each sorter implementation with each list to be sorted.
        return sorters().flatMap(sorter -> lists().map(list -> Arguments.of(sorter, list)));
    }

    static Stream<Sorter> sorters() {
        return Stream.of(new BubbleSorter(), new MergeSorter(), new InsertionSorter());
    }

    static Stream<List<Integer>> lists() {
        return Stream.of(List.of(1, 2, 3), List.of(3, 2, 1), List.of(-1, 2, 0));
    }
}
If you don't want to use a test oracle that provides the expected results (i.e. the sorted lists via a reference implementation), you can also combine three streams: sorters(), unsorted(), and sorted(); see the sketch after the next snippet. Moreover, you can use Suppliers if you want to create your classes under test lazily:
static Stream<Supplier<Sorter>> sorters() {
    return Stream.of(BubbleSorter::new, MergeSorter::new, InsertionSorter::new);
}
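For the three-stream variant, one way to keep inputs and expected outputs paired is to zip them by index before crossing with the sorters. A sketch, assuming the eager sorters() from the first snippet and a test method that takes (Sorter sorter, List<Integer> input, List<Integer> expected); IntStream comes from java.util.stream:

static Stream<Arguments> args() {
    // Inputs and expected outputs, matched by index.
    List<List<Integer>> unsorted = List.of(List.of(3, 2, 1), List.of(-1, 2, 0));
    List<List<Integer>> sorted   = List.of(List.of(1, 2, 3), List.of(-1, 0, 2));
    return sorters().flatMap(sorter ->
            IntStream.range(0, unsorted.size())
                     .mapToObj(i -> Arguments.of(sorter, unsorted.get(i), sorted.get(i))));
}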
I have a concern regarding usage of "improperly" initialized objects in unit tests.
Let's say I want to test a function:
void foo(SomeClass obj)
{
    // do some stuff based on obj.value
}
where:
class SomeClass
{
public:
    OtherClass* ptr;
    int uninterestingValue;
    int value;
};
What I want to do is create a function (available only in the unit test, namely in an anonymous namespace) like this:
SomeClass createDummy()
{
    // Initialize the uninteresting fields with nullptr/0;
    // 42 will be used for testing purposes.
    return SomeClass{nullptr, 0, 42};
}
Reason: creating an object of SomeClass type is complex, as it is used to represent the final state of data processing in the system. I'd like to simplify it a bit.
Would it be considered a bad approach?
Are there better ways to achieve this (speaking specifically about C++/googletest)?
You have three parts to your question. I won't address whether or not there are better ways, but the approach you're talking about is not bad.
The idea of using dummy objects with only the important parts implemented is standard. There are several variations on the idea of using test doubles. Test doubles include mocks, fakes, stubs, and spies. Using those terms to search google should give you plenty of information about that idea. Here's an article that explains the difference. Since you are using Google Test, you should look into how you can use the Google Mock extension to make your test doubles.
The second idea is about having a builder to create the dummy objects. Two relevant patterns are Object Mother and Test Data Builder. You can search those terms or get started with this article.
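To give a flavour of the Test Data Builder idea, here is a minimal sketch, written in Java for brevity (the same shape works in C++); the SomeClass constructor and the defaults are assumptions:

// Test-only builder: uninteresting fields get neutral defaults,
// and each test overrides only the fields it actually cares about.
class SomeClassBuilder {
    private OtherClass other = null;      // not relevant to most tests
    private int uninterestingValue = 0;   // neutral default
    private int value = 0;

    SomeClassBuilder withValue(int value) {
        this.value = value;
        return this;
    }

    SomeClass build() {
        return new SomeClass(other, uninterestingValue, value);
    }
}

// Usage in a test:
// SomeClass dummy = new SomeClassBuilder().withValue(42).build();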
I'm having trouble getting my unit tests to stay independent of each other. For instance, I have a linked list with two append methods, one that takes a single element and appends it to the list, and one that takes another list and appends the whole thing; but I can't test the second append method (the one that takes a whole list) without using the first append method to populate the list I'm passing in. How do I keep the unit tests for these two methods separate from each other?
The situation you describe happens everywhere in testing: You have some class or library to test. The class or library has certain methods / functions that need to be tested, and for the test of some of these, you have to call other methods / functions of that same library.
In other words, when breaking down the test according to the four phase test pattern (setup, exercise, evaluate, cleanup), you want to call your class / lib in the exercise phase. However, it seems annoying that you have to call some elements of it also in the setup phase, and, possibly, also in the evaluate and/or cleanup phases.
This is simply unavoidable: You mentioned that in the setup for the list append function you had to use the single-element append function. But, it is even worse: You also had to use the constructor of your list class - no chance to get away without that one. But, the constructor could also be buggy...
What can certainly happen is that tests fail (or mistakenly pass) because the functions called in the setup are defective. A proper test suite, however, should (as was mentioned in the comments) also have tests for those other (call them lower-level) functions.
For example, you should have a number of tests which check that the constructor of your class works correctly. If at some point you modify the constructor so it becomes defective, all tests that use the constructor in the setup phase are no longer trustworthy. But, some of the tests that test the constructor itself (and thus call it in the exercise phase) should fail now.
From the overview of the test results you will then be able to identify the root cause of the test failures. This requires some understanding about the dependencies: Which of the tests focus on the lower-level aspects and which are higher-level in the sense that they depend on some lower-level functionality to work.
There are some ways to make these dependencies more apparent and therefore make it easier to analyse test failures later - but none of these are essential:
In the test-code, you put the tests for the lower-level aspects at the top of the file, and the more dependent tests further to the bottom. Then, when several tests fail, first look at the test that is closest to the top of the file. Note that the order of tests in the test code does not necessarily imply an execution order: JUnit for example does not care in which order the test methods are written in the test class.
As it was suggested in the comments, you may in addition configure the test framework to run the lower level tests before the others.
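In JUnit 5, for instance, such an explicit ordering can be declared with @TestMethodOrder and @Order (a sketch; the test names and bodies are placeholders):

import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class LinkedListTest {

    @Test @Order(1)
    void constructorCreatesEmptyList() { /* ... */ }

    @Test @Order(2)
    void appendAddsSingleElement() { /* ... */ }

    @Test @Order(3)
    void appendListBuildsOnSingleAppend() { /* ... */ }
}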
You can create one method which itself is not a unit test method but instead creates the conditions for multiple tests, then performs verification of the results. Your actual unit test methods will call into this other method. So you can use the same data set for multiple tests, and not introduce dependencies between test methods.
I don't know what language you are using, but here is an example for Objective-C in Xcode 5 with the new XCTest framework. I would do something like this:
- (void)performTestWithArray:(NSArray *)list
{
    NSMutableArray *initialList = ...; // create the initial list you will use with multiple tests
    [initialList addObjectsFromArray:list];
    XCTAssertTrue(testCondition, @"message");
}

- (void)testAddSingleElement
{
    NSArray *array = @[ @"one element" ];
    [self performTestWithArray:array];
}

- (void)testAddList
{
    NSArray *array = @[ @"first element", @"second element", @"third element" ];
    [self performTestWithArray:array];
}
Consider the following class:
public class MyIntSet
{
    private List<int> _list = new List<int>();

    public void Add(int num)
    {
        if (!_list.Contains(num))
            _list.Add(num);
    }

    public bool Contains(int num)
    {
        return _list.Contains(num);
    }
}
Following the "only test one thing" principle, suppose I want to test the "Add" function.
Consider the following possibility for such a test:
[TestClass]
public class MyIntSetTests
{
    [TestMethod]
    public void Add_AddOneNumber_SetContainsAddedNumber()
    {
        MyIntSet set = new MyIntSet();
        int num = 0;
        set.Add(num);
        Assert.IsTrue(set.Contains(num));
    }
}
My problem with this solution is that it actually tests 2 methods: Add() and Contains().
Theoretically, there could be a bug in both that only manifests in scenarios where they are not called one after the other. Of course, Contains() now serves as a thin wrapper for List's Contains(), which shouldn't be tested in itself, but what if it changes to something more complex in the future? Perhaps a simple "thin wrapper" method should always be kept for testing purposes?
An alternative approach might suggest mocking out or exposing (possibly using InternalsVisibleTo or PrivateObject) the private _list member and have the test inspect it directly, but that could potentially create test maintainability problems if someday the internal list is replaced by some other collection (maybe C5).
Is there a better way to do this?
Are any of my arguments against the above implementations flawed?
Your test seems perfectly OK to me. You may have misunderstood a principle of unit testing.
A single test should (ideally) only test one thing, that is true, but that does not mean that it should test only one method; rather, it should only test one behaviour (an invariant, adherence to a certain business rule, etc.).
Your test tests the behaviour "a number added to the set is reported as contained", which is a single behaviour :-).
To address your other points:
Theoretically, there could be a bug in both that only manifests in scenarios where they are not called one after the other.
True, but that just means you need more tests :-). For example: add two numbers, then call Contains; or call Contains without Add (sketched below).
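A sketch of those extra cases, written here in Java/JUnit style (the original class is C#, so treat this as illustrative, assuming a hypothetical Java port of MyIntSet):

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class MyIntSetMoreTests {

    @Test
    void contains_withoutAdd_returnsFalse() {
        MyIntSet set = new MyIntSet();
        assertFalse(set.contains(0));
    }

    @Test
    void add_twoNumbers_bothAreContained() {
        MyIntSet set = new MyIntSet();
        set.add(1);
        set.add(2);
        assertTrue(set.contains(1));
        assertTrue(set.contains(2));
    }
}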
An alternative approach might suggest mocking out or exposing (possibly using InternalsVisibleTo) the private _list member and have the test inspect it directly, but that could potentially create test maintainability problems[...]
Very true, so don't do this. A unit test should always be against the public interface of the unit under test. That's why it's called a unit test, and not a "messing around inside a unit"-test ;-).
There are two possibilities.
You've exposed a flaw in your design. You should carefully consider whether the actions your Add method executes are clear to the consumer. If you don't want people adding duplicates to the list, why even have a Contains() method? The user is going to be confused when an element is not added to the list and no error is thrown. Even worse, they might duplicate the functionality by writing the exact same check before they call .Add() on their list collection. Perhaps it should be removed and replaced with an indexer? It's not clear from your list class that it's not meant to hold duplicates.
The design is fine, and your public methods should rely on each other. This is normal, and there is no reason you can't test both methods. The more test cases you have, theoretically the better.
As an example, say you have a function that just calls down into other layers, which may already be unit tested. That doesn't mean you don't write unit tests for that function, even if it's simply a wrapper.
In practice, your current test is fine. For something this simple, it's very unlikely that bugs in Add() and Contains() would mutually conspire to hide each other. In cases where you are really concerned about testing Add() and Add() alone, one solution is to make your _list variable available to your unit test code.
[TestMethod]
public void Add_AddOneNumber_ListContainsAddedNumber() {
    MyIntSet set = new MyIntSet();
    set.Add(0);
    Assert.IsTrue(set._list.Contains(0));
}
Doing this has two drawbacks. One: it requires access to the private _list variable, which is a little complex in C# (I recommend the reflection technique; see the sketch below). Two: it makes your test code dependent on the actual implementation of your set, which means you'll have to modify the test if you ever change the implementation. I'd never do this for something as simple as a collections class, but in some cases it may be useful.
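The reflection trick, sketched in Java for illustration (in C#, the analogue is Type.GetField with BindingFlags.NonPublic | BindingFlags.Instance); the field name _list comes from the example above, and a Java port of MyIntSet is assumed:

import static org.junit.jupiter.api.Assertions.assertTrue;
import java.lang.reflect.Field;
import java.util.List;
import org.junit.jupiter.api.Test;

class MyIntSetWhiteBoxTest {

    @Test
    void add_addOneNumber_internalListContainsIt() throws Exception {
        MyIntSet set = new MyIntSet();
        set.add(0);

        // Reach into the private _list field for white-box inspection.
        Field field = MyIntSet.class.getDeclaredField("_list");
        field.setAccessible(true);
        @SuppressWarnings("unchecked")
        List<Integer> internal = (List<Integer>) field.get(set);

        assertTrue(internal.contains(0));
    }
}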
Imagine a system of filters (maybe audio filters, or text stream filters).
A Filter base class has a do_filter() method, which takes some input, modifies it (perhaps), and returns that as output.
Several subclasses exist, built with TDD, and each has a set of tests which test them in isolation.
Along comes a composite class, of an unrelated type Widget, which has two members of different Filter types (a and b), which deal with quite different input - that is, certain input which would be modified by filter a is passed through unmodified by filter b, and vice versa. Its process_data() method calls each filter member's do_filter().
While developing the composite class, tests emerge that check the assumption that Widget's filters aren't both processing the same data.
The problem is, these sort of tests look identical to the individual filter's test. Although there might be other tests, which test input which should be modified by both filters, many of the tests could almost be copied and pasted from each of the filter's tests, with only small modifications needed to have them test with Widget (such as calling process_data()), but the input data and the assert checks are identical.
This duplication smells pretty bad. But it seems right to want to test the components' interactions. What sort of options will avoid this sort of duplication?
Within one test suite/class, have a method:
public void TestForFooBehaviour(IFilter filter)
{
    /* whatever you would normally have in a test method */
}
Then invoke this method from both the original test on the simple filter as well as from the composite filter. This also works for abstract base classes. Obviously FooBehaviour should be a meaningful description of the aspect of filters you are testing. Do this for each behaviour you want to test.
If your language supports duck typing or generics, feel free to use it if it helps.
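A sketch of what that can look like, in Java; Filter, FilterA, Data, and inputOnlyFilterBHandles are all hypothetical names:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Shared behaviour check: the given filter should pass this input through unchanged.
void assertPassesThroughUnchanged(Filter filter, Data input) {
    assertEquals(input, filter.doFilter(input));
}

@Test
void filterA_ignoresInputMeantForFilterB() {
    assertPassesThroughUnchanged(new FilterA(), inputOnlyFilterBHandles());
}

The composite test then invokes the same behaviour method (or reuses the same input data against Widget's process_data()), so the expected behaviour is written down exactly once.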
I fairly frequently extract test logic to separate classes, so I'd extract the filter test to a separate class that is essentially not a unit test by itself. Especially if your test classes are physically separated from your production code, this is a decent way to solve the problem (i.e. no one will think it is production code, since it lives in the test space).
I asked something similar about an abstract base class and unit testing here; it has some interesting points that you might find useful:
How to unit test abstract classes: extend with stubs?
Considering such code:
class ToBeTested {
public:
    void doForEach() {
        for (std::vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); ++it) {
            doOnce(*it);
            doTwice(*it);
            doTwice(*it);
        }
    }

    void doOnce(Contained& c) {
        // do something
    }

    void doTwice(Contained& c) {
        // do something
    }

    // other methods

private:
    std::vector<Contained> m_contained;
};
I want to test that if I fill the vector with 3 values, my functions will be called in the proper order and quantity. For example, my test could look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Are there any means to do this with the CppUnit or GoogleTest frameworks? Maybe some other unit test framework allows performing such tests?
I understand that this is probably impossible without calling some debug functions from these functions, but can it at least be done automatically by some test framework? I don't want to scan trace logs and check their correctness by hand.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to catch performance issues at the earliest possible stage (and in general I want to know that my code executes exactly as I expect).
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
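A minimal timing assertion of that shape, sketched in Java/JUnit (the 50 ms budget and the setup helper are placeholders):

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class ToBeTestedPerformanceTest {

    @Test
    void doForEach_staysWithinTimeBudget() {
        ToBeTested tested = makeWithThreeElements(); // hypothetical setup helper

        long start = System.nanoTime();
        tested.doForEach();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        assertTrue(elapsedMillis < 50, "doForEach took " + elapsedMillis + " ms");
    }
}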
The problem with checking that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following (sketched below):
Move them to another class, call it Collaborator
Add an instance of this other class to the ToBeTested class
Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collaborator class
Call the method under test
Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native C++ speaker, so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
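In Java, for example, those steps come out like this with Mockito (GoogleMock's InSequence serves a similar purpose in C++); the Collaborator extraction and constructor injection from the steps above are assumed:

import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;
import org.mockito.InOrder;

class ToBeTestedOrderTest {

    @Test
    void doForEach_callsDoOnceThenDoTwiceTwicePerElement() {
        Collaborator collaborator = mock(Collaborator.class);
        ToBeTested tested = new ToBeTested(collaborator); // injected dependency
        // ... add three elements to tested

        tested.doForEach();

        // Verify the exact sequence per element: one doOnce, then two doTwice.
        InOrder inOrder = inOrder(collaborator);
        for (int i = 0; i < 3; i++) {
            inOrder.verify(collaborator).doOnce(any());
            inOrder.verify(collaborator, calls(2)).doTwice(any());
        }
    }
}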
You could check out mockpp.
Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.
Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some fairly advanced material, but if you are not lazy and really want to understand this kind of thing, it will be very helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.