How do unit tests change when a base class is driven out?

This is in part a followup to this question.
I'm not sure the best way to ask this, so I'll try a short story to set the scene:
Once upon a time, there was a class ‘A’, which had a unit test class ‘ATests’ responsible for testing its behaviour through the public interface. They lived happily together for a while and then a change happened, and class ‘B’ came along, which as it turned out had a lot in common with class ‘A’, so a base class was introduced.
The public behaviour of the base class is already covered by the tests for class A. So, the question is what happens next?
• Does class B need to have tests for the common (base class) behaviour? It seems like the behaviour is part of B, so it should be tested, but should these tests be shared with those for class A? With the base class? If so, what’s the best way to share?
• Does the new base class need unit tests, or is it ok for base classes to be tested through the tests of their children? Does it matter if the base class is abstract?
• Is it enough to ensure that classes A & B derive from the base class and ‘trust’ the unit tests for the base class to test the common behaviour (so the tests don’t need to be replicated in the child classes)? Then the tests for A & B only need to cover their new/changed behaviour?
• Am I following totally the wrong approach having approximately one unit test class per real class?
I’ve taken different views at different times and the different approaches can have quite an impact on the ability to refactor the code, time taken to write tests etc. What approaches have people found works best?

Personally, given time, I tend to test all three (base and two derived). It shows that you're not inadvertently overriding the base methods and changing their behavior, and your inherited class still provides the expected semantics. If the behavior really doesn't change, then it could be as simple as a copy-paste job, but it provides more complete coverage.
Note the "given time" part, though. If time is an issue (and it always is), testing the base class or the inherited functionality would probably be lower priority. But testing is great inoculation against yourself, and makes you much more confident when refactoring later, so you're only shortchanging yourself, your customers, and/or your maintainers by not doing as complete coverage as you have time for.
However, pawning repetitive things like this off on dedicated testers or a QA team, if you have one, is perfectly acceptable. But buy them a beer sometimes :-) They make you look better!

You might look at code coverage tools; they can show you whether you're actually testing all of the code. Personally, if I have a test covering the base class behavior and I'm not overriding that, I won't test it again. The goal is to have a code change (potentially) break only one test.
Don't feel the need to stick to one unit test class per real class. One unit test class per unique setup fixture is one way, and then there's the whole BDD school...

Refactoring test code is just as important as refactoring production code; both should be treated as first-class citizens. So if you are extracting public methods to the base class, then the base class should have its own set of tests. If your test cases are designed properly, with each test testing one thing, then the test refactoring should be easy.
If you are extracting protected functionality, it's probably a slightly grey area. If the protected methods are new to the base class, I would expect them to be exercised through the derived class's tests, simply because they are probably there for the derived class to function properly. Ideally this should be kept to a minimum, as it makes the base class's functionality less obvious.
The derived classes then will have the remaining public methods tested in their own set of tests.
Again, if you change the production code but not the tests, incorporate a coverage tool so you can feel confident that your tests still cover enough.
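As a minimal sketch of what a dedicated test class for the extracted base class might look like (JUnit 4 assumed; CommonBase and formatLabel are invented stand-ins for whatever public behaviour was pulled up out of A and B):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The extracted public behaviour gets its own focused tests, so a change to the
// shared code (ideally) breaks only this test class.
public class CommonBaseTest {

    // Test-only subclass, useful when the extracted base class is abstract;
    // it would implement any abstract methods trivially.
    private static class TestableBase extends CommonBase {
    }

    @Test
    public void trimsLabelsTheSameWayForAllSubclasses() {
        CommonBase base = new TestableBase();
        // formatLabel() is a hypothetical example of a method pulled up into the base.
        assertEquals("label", base.formatLabel("  label  "));
    }
}

Classes A and B then only keep tests for the behaviour they add or change.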

Related

Should I use inherited tests?

I am reviewing some code where the developer has some classes ClassA and ClassB. They both inherit from ParentClass so both must implement a number of abstract methods (e.g. return5Values(2))
In ClassA the values are all double the previous value: [2,4,8,16,32]
In ClassB the values are all +1 the previous value [2,3,4,5,6]
There are also other constraints such as raising an error if the parameter is negative etc.
Other tests also exist, such as getting only the 3rd value, and so on.
(Obviously these are just fake examples to get my point across)
Now, instead of writing a lot of similar tests for both ClassA and ClassB, what the developer has done is create ParentClassChildTests, which contains some code along these lines:
@Test
public void testVariablesAreCorrect() {
    List<Integer> returnedValues = clazz.return5Values(2);
    // Does a bunch of other things as well
    // ...
    assertEquals(expectedValues, returnedValues);
}
ClassATests now inherits from ParentClassChildTests and must define expectedValues as a class variable.
The expectedValues are used within a few different tests as well, so they aren't being defined just for this single test.
Now when ClassATests and ClassBTests are run, it also runs all the tests inside ParentClassChildTests.
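For reference, the wiring presumably looks something like the sketch below (JUnit 4 assumed; the constructor-based setup and the ParentClass field are my guesses at the details):

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

// ParentClassChildTests.java: shared test methods only, no concrete class under test.
public abstract class ParentClassChildTests {
    protected ParentClass clazz;            // supplied by each subclass
    protected List<Integer> expectedValues; // supplied by each subclass

    @Test
    public void testVariablesAreCorrect() {
        assertEquals(expectedValues, clazz.return5Values(2));
    }
}

// ClassATests.java: inherits every @Test above and runs it against ClassA.
public class ClassATests extends ParentClassChildTests {
    public ClassATests() {
        clazz = new ClassA();
        expectedValues = Arrays.asList(2, 4, 8, 16, 32);
    }
}

When JUnit runs ClassATests, the inherited testVariablesAreCorrect() executes against ClassA, which is exactly the behaviour described above.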
My question is: Is this a good method to avoid a lot of duplicate tests and ensure everything works as expected in child classes? Are there any major issues this can lead to? Or a better way of handling this?
Whilst this is all Java code, my question isn't about any particular testing framework or language but the idea in general of inheriting from a parent class which also has tests in it.
Situations where it is possible and sensible to re-use tests for different implementations of an interface or base class are not very common. The following aspects limit the applicability:
Derived classes may have different dependencies, which may require different mocks to be created and set up. In such a case, the test methods cannot be identical. Even if ClassA and ClassB currently have no dependencies, or the same dependencies with (coincidentally) the same setup, this can change over time, or the next class, ClassC, will have different dependencies.
Each derived class will implement a different algorithm. In your case, return5Values performs different algorithms in ClassA and ClassB. Due to the different algorithms, the behaviour of the SUT for the same setup and the same inputs may differ: for example, each algorithm will run into overflows at different points. Even the call return5Values(2), which allows a derived test to be shared by ClassA and ClassB today, could lead to an overflow scenario, with possible exceptions thrown, for a potential future ClassC.
The different algorithms implemented in the derived classes will have different potential bugs and different corner cases. That is, the necessary setup and inputs for the SUT will have to differ to stimulate the respective boundaries. For some implementations, testing the call return5Values(2) may simply not bring any benefit, while tests with parameters other than 2 are necessary.
If you share test methods between the classes and only provide the parameters, it is not the test method that carries the test's intent - each parameter set has its own intent. The intent/scenario, however, should ideally be part of the output of each individual test.
Given all these problems, inheritance of test methods does not seem to be the best approach for re-use here. Instead, it may be more beneficial to have some common helper functions that can be used by the different derived test classes.
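A rough sketch of that helper-function alternative (JUnit 4 assumed; the helper names and the IllegalArgumentException for negative parameters are illustrative guesses):

import static org.junit.Assert.assertEquals;
import java.util.List;

// Shared assertions live in a plain utility class; each test class calls them
// with the inputs and expectations that fit its own implementation.
final class FiveValuesAssertions {
    private FiveValuesAssertions() {}

    static void assertReturns(ParentClass sut, int input, List<Integer> expected) {
        assertEquals(expected, sut.return5Values(input));
    }

    static void assertRejectsNegativeInput(ParentClass sut) {
        try {
            sut.return5Values(-1);
            throw new AssertionError("expected an exception for a negative parameter");
        } catch (IllegalArgumentException expected) {
            // the contract says negative parameters raise an error
        }
    }
}

Each test class then keeps its own intent-revealing test methods, e.g. a doublesEachValue() test in ClassATests that calls FiveValuesAssertions.assertReturns(new ClassA(), 2, Arrays.asList(2, 4, 8, 16, 32)).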
Having class hierarchies in tests creates dependencies between them. A unit test serves the purpose of testing a unit in isolation, where "unit" refers to a certain class. I'd argue that it is OK to have helpers and utils to avoid duplicating very basic functionality.
As much as possible, unit tests should allow for quick and independent changes of a certain unit. Having a commonly enforced structure for all tests increases the amount of work to be done if the implementation of unrelated parts of the application changes.
When it comes to integration testing, there will be shared functionality for setting up the infrastructure. So the answer is a very clear "it depends". Generally it is favorable to reduce dependencies between tests as much as possible, and having a base test that determines the inner workings of a derived test is detrimental to that goal.

reusing business logic in a test class to save time

Just having a conversation with someone in the office about using a business logic class to build up some data in order to test another class.
Basically, he has class A which takes a complex type as a parameter and then generates a collection of a different complex type as a result. He's written tests around this class already. Now he's moved on to testing another class (class B) which takes the result of class A then performs some logic on it.
He's asked the question, "should I use class A to build up a scenario to test class B with".
At first I said yes, as class A has tests around it. But then I figured, well, what if there are bugs in class A that we haven't found yet... so I guess there must be a better way to address this scenario.
Does anyone have any thoughts on this? Is it OK to use existing logic to save time writing other tests?
Regards,
James
Stances on that might differ. Generally, if code is tested and you assume it works, you're free to use it. This is especially true when using already-tested methods to help with testing others (within a single class/unit). However, as that happens within a single unit, it's a bit different from your case.
Now, when dealing with two separate classes, I'd say you should avoid such an approach, purely because those two classes might not be related in an obvious way, or their context/scope of usage might differ vastly. As in, somebody might change class A without even knowing class B exists. Class B's tests suddenly break even though no changes were made to B's code. This brings unnecessary confusion and is a situation you usually don't want to find yourself in.
Instead, I suggest creating a helper method within class B's test file. With stubs/fakes and tools like AutoFixture, you should be able to easily reproduce the generation logic used by class A and have your own "copy" contained in class B's tests.
In order to test class B, the result returned from class A should be replicated somewhere in your test. If class A returns a list of persons, you will have a helper function in your test returning a fake List<Person> to use in your test.
Because you are only testing the class B, class A should not be used in your test.
NUnit provides built-in functionality for supplying data to your tests; have a look at:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
Or you can simply create a DataFactory class with methods that return the data (simple objects, collections, etc.) you will consume in your tests.
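A Java-flavoured sketch of that DataFactory idea (Person and its constructor are made up; the point is only that class B's tests never invoke class A):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Test-only factory that replicates the shape of class A's output
// without running class A itself.
public final class PersonDataFactory {
    private PersonDataFactory() {}

    public static List<Person> threePersons() {
        return new ArrayList<>(Arrays.asList(
                new Person("Alice", 30),
                new Person("Bob", 42),
                new Person("Carol", 17)));
    }
}

A test for class B then calls PersonDataFactory.threePersons() instead of class A, so a change to class A can never break B's tests.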

Many Test classes or one Test class with many methods?

I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My Approach to testing this was to create a PersonDaoTest with about 18 test methods testing each of the method in PersonDao
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in anyway against the TDD best practices? I was told that this creates confusion and is against the best practices since this is non-standard. What was suggested is merging the two classes into PersonDaoTest instead.
As I understand it, the more your code is broken down into many classes, the better; please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filtering and for pagination. You may want to look at whether you can do a bit of design work to extract the pagination functionality into a separate class which could then be tested independently.
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
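One possible shape of that extraction, sketched with an invented Paginator class (the real split would depend on how PersonDao is built):

import java.util.Collections;
import java.util.List;

// Pagination as its own responsibility: it can be unit-tested once, with plain
// lists, instead of once per PersonDao query method.
public class Paginator {
    public <T> List<T> page(List<T> items, int pageNumber, int pageSize) {
        int from = pageNumber * pageSize;
        if (pageSize <= 0 || from < 0 || from >= items.size()) {
            return Collections.emptyList();
        }
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }
}

PersonDao methods would then delegate to it, and most of PersonDaoPaginationTest would collapse into a handful of Paginator tests.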
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture; if the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class... in practice I find I have n test classes per production class. The best practice would be to use good names and keep related tests close (in a named folder).
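For the PersonDao example, grouping by setup might look roughly like this (JUnit 4; the no-arg PersonDao constructor and the test names are placeholders):

import org.junit.Before;
import org.junit.Test;

// Fixture 1: tests that share the plain query setup.
public class PersonDaoQueryTest {
    private PersonDao dao;

    @Before
    public void setUp() {
        dao = new PersonDao(); // plus whatever common test data the query methods need
    }

    @Test
    public void getAllPersonsReturnsEveryRow() { /* assertions against dao.getAllPersons() */ }
}

// Fixture 2: the pagination tests need a different (larger) setup,
// so they live in their own class instead of bloating the one above.
public class PersonDaoPaginationTest {
    private PersonDao dao;

    @Before
    public void setUp() {
        dao = new PersonDao(); // larger data set, so page boundaries get exercised
    }

    @Test
    public void getAllPersonsHonoursPageSize() { /* assertions against a paginated call */ }
}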
My 2 cents: when you have a large class like that that has different "facets" to it, like pagination, I find it can often make for more understandable tests to not pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using some common test code with callbacks. As a trivial example, to illustrate testing that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}

public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 1);
}
This static method can be used by any class to test whether "some method" returns multiple hits. You could extend this to run lots of tests over the same callback, for example running it with and without a DB connection up, and testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. Our tests are very complex, and we figured out that the only way to implement them in a way that met all our requirements was to have one test per class. So we treat each class as an individual test, and we can then use its methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String, Object> data) {}

    public void step_openSignUpPage() {}
    public void step_fillForm() {}
    public void step_submitForm() {}
    public void step_verifySignUpWasSuccessfull() {}
}
All the steps are dependent: they follow the order specified, and if one fails the rest are not executed.
Of course, each step is a test by itself, but all together they form the sign-up test.
The requirements were something like:
• Tests must be data-driven, that is, the same test executes in parallel with different inputs.
• Tests must also run in different browsers in parallel, so each test will run "input_size x browsers_count" times in parallel.
• Tests focus on a web workflow, for example "sign up with valid data", and are split into smaller test units for each step of the workflow. This makes things easier to maintain and debug (when you have a failure, it will say SignUpTest.step_fillForm() and you'll know immediately what's wrong).
• Test steps share the same test input and state (for example, the id of the user created). Imagine if you put the steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage() {}
    public void signUpTest_step_fillForm() {}
    public void signUpTest_step_submitForm() {}
    public void signUpTest_step_verifySignUpWasSuccessfull() {}

    public void signUpNegativeTest_step_openSignUpPage() {}
    public void signUpNegativeTest_step_fillFormWithInvalidData() {}
    public void signUpNegativeTest_step_submitForm() {}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessfull() {}
}
Then, having state belonging to the two tests in the same class would be a mess.
I hope that was clear and that you find it useful. In the end, what represents a test - a class or a method - is just a decision that I think depends on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how you make a failure more precise and easier to debug when a test fails, what leads you to more readable code, etc.

What is wrong with making a unit test a friend of the class it is testing? [duplicate]

In C++, I have often made a unit test class a friend of the class I am testing. I do this because I sometimes feel the need to write a unit test for a private method, or maybe I want access to some private member so I can more easily set up the state of the object in order to test it. To me this helps preserve encapsulation and abstraction, because I am not modifying the public or protected interface of the class.
If I buy a third party library, I wouldn't want its public interface to be polluted with a bunch of public methods I don't need to know about simply because the vendor wanted to unit test!
Nor do I want have to worry about a bunch of protected members that I don't need to know about if I am inheriting from a class.
That is why I say it preserves abstraction and encapsulation.
At my new job they frown upon using friend classes, even for unit tests. They say this is because the class should not "know" anything about its tests, and you do not want tight coupling between the class and its test.
Can someone please explain these reasons to me more so that I may understand better? I just do not see why using a friend for unit tests is bad.
Ideally, you shouldn't need to unit test private methods at all. All a consumer of your class should care about is the public interface, so that's what you should test. If a private method has a bug, it should be caught by a unit test that invokes some public method on the class which eventually ends up calling the buggy private method. If a bug manages to slip by, this indicates that your test cases don't fully reflect the contract you wish your class to implement. The solution to this problem is almost certainly to test public methods with more scrutiny, not to have your test cases dig into the class's implementation details.
Again, this is the ideal case. In the real world, things may not always be so clear, and having a unit testing class be a friend of the class it tests might be acceptable, or even desirable. Still, it's probably not something you want to do all the time. If it seems to come up often enough, that might be a sign that your classes are too large and/or performing too many tasks. If so, further subdividing them by refactoring complex sets of private methods into separate classes should help remove the need for unit tests to know about implementation details.
You should consider that there are different styles and methods to test: Black box testing only tests the public interface (treating the class as a black box). If you have an abstract base class you can even use the same tests against all your implementations.
If you use White box testing, you might even look at the details of the implementation. Not only about which private methods a class has, but what kind of conditional statements are included (i.e. if you want to increase your condition coverage because you know that the conditions were hard to code). In white box testing, you definitely have "high coupling" between classes/implementation and the tests which is necessary because you want to test the implementation and not the interface.
As bcat pointed out, it's often helpful to use composition and more but smaller classes instead of many private methods. This simplifies white box testing because you can more easily specify the test cases to get a good test coverage.
I feel that bcat gave a very good answer, but I would like to expound on the exceptional case that he alludes to:
"In the real world, things may not always be so clear, and having a unit testing class be a friend of the class it tests might be acceptable, or even desirable."
I work in a company with a large legacy codebase which has two problems, both of which contribute to making a friend unit test desirable:
• We suffer from obscenely large functions and classes which require refactoring, but in order to refactor it is helpful to have tests.
• Much of our code is dependent on database access, which for various reasons should not be brought into the unit tests.
In some cases mocking is useful to alleviate the latter problem, but very often this just leads to unnecessarily complex design (class hierarchies where none would otherwise be needed), when one could very simply refactor the code in the following way:
class Foo {
public:
    void some_db_accessing_method() {
        // some line(s) of code with db dependence.
        // a bunch of code which is the real meat of the function
        // maybe a little more db access.
    }
};
Now we have the situation where the meat of the function needs refactoring, so we'd like a unit test, but it shouldn't be exposed publicly. There's a wonderful technique called mocking that could be used in this situation, but the fact is that in this case a mock is overkill. It would require me to increase the complexity of the design with an unnecessary hierarchy.
A far more pragmatic approach would be to do something like this:
class Foo {
public:
    void some_db_accessing_method() {
        // db code as before
        unit_testable_meat(data_we_got_from_db);
        // maybe more db code.
    }
private:
    void unit_testable_meat(...);
};
The latter gives me all of the benefits I need from unit testing, including that precious safety net to catch errors introduced when I refactor the meat of the code. In order to unit test it, I have to friend a UnitTest class, but I would strongly argue that this is far better than an otherwise useless code hierarchy just to allow me to use a mock.
I think this should become an idiom, and I think it's a suitable, pragmatic solution to increase the ROI of unit testing.
Like bcat suggested, as much as possible you should find bugs through the public interface itself. But if you want to do things like printing private variables and comparing them with expected results, etc. (helpful for developers debugging issues), then you can make the UnitTest class a friend of the class to be tested. But you may want to put it behind a macro, like below.
class Myclass
{
#if defined(UNIT_TEST)
friend class UnitTest;
#endif
};
Enable the UNIT_TEST flag only when unit testing is required; for other builds, disable it.
I don't see anything wrong with using a friend unit testing class in many cases. Yes, decomposing a large class into smaller ones is sometimes a better way to go. I think people are a bit too hasty to dismiss using the friend keyword for something like this - it might not be ideal object oriented design, but I can sacrifice a little idealism for better test coverage if that's what I really need.
Typically you only test the public interface so that you are free to redesign and refactor the implementation. Adding test cases for private members defines a requirement and restriction on the implementation of your class.
Make the functions you want to test protected.
Now in your unit test file, create a derived class.
Create public wrapper functions that call the class-under-test's protected functions.

what to unit test after refactoring common code into separate class

Let's say I've got two classes, each fully tested. There's duplication though, so I refactor the common code into a new class. Should I unit test that new class? If so, how?
Here are the options I can see:
don't unit test the new class (it's already fully tested by the original tests)
copy+paste the original tests to the new test class
move the original tests to the new test class and replace original tests with mocks
leave original tests alone, but write more fine-grained tests in the new test class
There's duplication though, so I refactor the common code into a new class.
I'm taking that to mean that the two old classes now inherit the common behavior from the new class. If that's the case, then the old test cases should already be testing the common behavior and there's probably no need to write separate tests.
If that's not the case (like if you're creating a utility class whose methods are called by the two original classes), then I'd probably move the tests to their own unit test class so they only need to be in one place.
If your refactored class is covered by existing tests, you are probably ok.
Looking at your options, I would also do number 4. If you did some refactoring, you probably made something more generic than it was before. In that case, you could test the generic functionality in a generic manner. So I would do 4 if your refactored solution is more generic; if it is just moving code around to be DRY, I would probably do 1.
If it were me, I'd give the new class its own set of unit tests. Those tests would be a copy + paste of the previous tests which ran against the same code.
Although you're duplicating work, in the long term you need to think about how this new class might change, and having those tests in a new unit test class/fixture will be cleaner for you.