What to unit test after refactoring common code into a separate class - unit-testing

Let's say I've got two classes, each fully tested. There's duplication though, so I refactor the common code into a new class. Should I unit test that new class? If so, how?
Here are the options I can see:
1. don't unit test the new class (it's already fully tested by the original tests)
2. copy+paste the original tests to the new test class
3. move the original tests to the new test class and replace the original tests with mocks
4. leave the original tests alone, but write more fine-grained tests in the new test class

There's duplication though, so I refactor the common code into a new class.
I'm taking that to mean that the two old classes now inherit the common behavior from the new class. If that's the case, then the old test cases should already be testing the common behavior and there's probably no need to write separate tests.
If that's not the case (like if you're creating a utility class whose methods are called by the two original classes), then I'd probably move the tests to their own unit test class so they only need to be in one place.
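For example, a minimal sketch of that second case, with hypothetical names:

#include <cstdio>
#include <string>

// Common formatting code pulled out of the two original report classes
// into a small utility class that both of them now call.
class CurrencyFormatter {
public:
    std::string format(double amount) const {
        char buf[32];
        std::snprintf(buf, sizeof(buf), "$%.2f", amount);
        return buf;
    }
};

class OrderReport {
public:
    std::string total_line(double total) const {
        return "Total: " + formatter_.format(total); // delegates to the shared class
    }
private:
    CurrencyFormatter formatter_;
};

With that shape, the formatting rules can be tested once in CurrencyFormatter's own test file, while the tests for the two original classes only need to cover their own behaviour.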

If your refactored class is covered by existing tests, you are probably ok.
Looking at your options, I would also do number 4. If you did some refactoring, you probably made something more generic than it was before. In that case, you can test the generic functionality in a generic manner. So I would do option 4 if your refactored solution is more generic. If it is just moving code around to be DRY, I would probably do option 1.

If it were me, I'd give the new class its own set of unit tests. Those tests would be a copy + paste of the previous tests that ran against the same code.
Although you're duplicating work, in the long term you need to think about how this new class might change, and having those tests in their own unit test class/fixture will be cleaner for you.

Related

Is there a standard for whether a class under test should be constructed in a fixture or in a test?

I'm just curious, are there any standard guidelines that state whether an instance of a class under test should be constructed in a fixture or in the actual test case?
Thanks!
I'm not aware of a standard reference on that topic. Here's what I'd do:
If I had only one test to write, or if I needed an instance of the class under test that was constructed differently than any other instance of that class in my test suite, I'd just instantiate it in the test. Why make it any more complicated than you have to? If I needed to use the same instance over and over again, I'd put it in a fixture.
I do think it's important to construct only the fixtures you need for a given test case, so that there's nothing to mislead the reader. That means either using whatever scoping mechanism your test framework provides (e.g. an rspec context block or a whole new xUnit TestCase) to construct a given fixture only before the tests that need it, or moving instance construction from fixtures into the tests themselves. To avoid duplication, you can always write a method that constructs an instance and call it from as many tests as you want.
I tend to avoid putting anything inside a fixture.
After a while, the CUT (class under test) state tends to get out of hand as the number of tests in the fixture increases. Each test requires similar but slightly different behavior, which may or may not have been added to some initialization/setup method.
Having the CUT at the fixture level creates shared state between the tests, which can cause test failures that depend on run order - a pain to find and fix.
Another readability issue happens when a test fails - people tend to forget the initialization that might have happened in another method.
There are better ways to avoid code duplication - using an auto-mocking container to create objects with fake parameters, or factory methods that enable a different initialization for each test (if required) and produce more readable and maintainable tests.
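As a rough illustration of that factory-method idea, here is a sketch assuming Google Test, with a hypothetical Order class defined inline so the example is self-contained:

#include <gtest/gtest.h>
#include <utility>
#include <vector>

// Hypothetical class under test, kept tiny so the example compiles.
class Order {
public:
    explicit Order(std::vector<double> item_prices) : prices_(std::move(item_prices)) {}
    double total() const {
        double sum = 0.0;
        for (double p : prices_) sum += p;
        return sum;
    }
private:
    std::vector<double> prices_;
};

// Factory helpers instead of a shared fixture: each test states the
// initialization it needs, so there is no hidden setup to remember
// and no state shared between tests.
static Order make_empty_order() { return Order({}); }
static Order make_order_with_items(std::vector<double> prices) { return Order(std::move(prices)); }

TEST(OrderTest, EmptyOrderHasZeroTotal) {
    EXPECT_DOUBLE_EQ(0.0, make_empty_order().total());
}

TEST(OrderTest, TotalIsSumOfItemPrices) {
    EXPECT_DOUBLE_EQ(6.0, make_order_with_items({1.0, 2.0, 3.0}).total());
}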

How do unit tests change when a base class is driven out?

This is in part a followup to this question.
I'm not sure the best way to ask this, so I'll try a short story to set the scene:
Once upon a time, there was a class ‘A’, which had a unit test class ‘ATests’ responsible for testing its behaviour through the public interface. They lived happily together for a while and then a change happened, and class ‘B’ came along, which as it turned out had a lot in common with class ‘A’, so a base class was introduced.
The public behaviour of the base class is already covered by the tests for class A. So, the question is what happens next?
• Does class B need to have tests for the common (base class) behaviour? It seems like the behaviour is a part of B, so it should be tested, but should these tests be shared with those for class A? With the base class? If so, what’s the best way to share?
• Does the new base class need unit tests, or is it ok for base classes to be tested through the tests of their children? Does it matter if the base class is abstract?
• Is it enough to ensure that classes A & B derive from the base class and ‘trust’ the unit tests for the base class to test the common behaviour (so the tests don’t need to be replicated in the child classes)? The tests for A & B then only need to test their new/changed behaviour?
• Am I following totally the wrong approach having approximately one unit test class per real class?
I’ve taken different views at different times, and the different approaches can have quite an impact on the ability to refactor the code, the time taken to write tests, etc. What approaches have people found work best?
Personally, given time, I tend to test all three (base and two derived). It shows that you're not inadvertently overriding the base methods and changing their behavior, and your inherited class still provides the expected semantics. If the behavior really doesn't change, then it could be as simple as a copy-paste job, but it provides more complete coverage.
Note the "given time" part, though. If time is an issue (and it always is), testing the base class or the inherited functionality would probably be lower priority. But testing is great inoculation against yourself, and makes you much more confident when refactoring later, so you're only shortchanging yourself, your customers, and/or your maintainers by not doing as complete coverage as you have time for.
However, pawning repetitive things like this off on dedicated testers or a QA team, if you have one, is perfectly acceptable. But buy them a beer sometimes :-) They make you look better!
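If straight copy-paste feels wasteful, one way to share the common-behaviour tests across the hierarchy is a typed test fixture. A minimal sketch assuming Google Test typed tests (the Shape/Circle/Square names are hypothetical):

#include <gtest/gtest.h>

// Hypothetical hierarchy: the base class behaviour we want to verify
// for every derived type.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
    bool has_positive_area() const { return area() > 0.0; } // common behaviour
};

class Circle : public Shape {
public:
    double area() const override { return 3.14159 * 2.0 * 2.0; }
};

class Square : public Shape {
public:
    double area() const override { return 4.0 * 4.0; }
};

// One test body, instantiated once per derived type.
template <typename T>
class ShapeContractTest : public ::testing::Test {};

using ShapeTypes = ::testing::Types<Circle, Square>;
TYPED_TEST_SUITE(ShapeContractTest, ShapeTypes);

TYPED_TEST(ShapeContractTest, ReportsPositiveArea) {
    TypeParam shape;
    EXPECT_TRUE(shape.has_positive_area());
}

Each derived type gets its own instantiation of the same test body, so the common behaviour is verified for both classes without duplicating the test code.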
You might look at code coverage tools; they can show you if you're actually testing all of the code. Personally, if I have a test covering the base class behavior and I'm not overriding that, I won't test it again. The goal is to have a code change (potentially) break only one test.
Don't feel the need to stick to one unit test class per real class. One unit test class per unique setup fixture is one way, and then there's the whole BDD school...
Refactoring test code is just as important as refactoring production code. Both should be treated as first-class citizens. So if you are extracting public methods to the base class, then that base class should have its own set of tests. If your test cases are designed properly, with each test testing one thing, then the test refactoring should be easy.
If you are extracting protected functionality, then it's probably a slightly grey area. If the methods are new to the base class, then I would expect them to be tested through the derived class, simply because they are probably there for the derived class to function properly. Ideally this should be kept to a minimum, as otherwise the base class functionality becomes less obvious.
The derived classes then will have the remaining public methods tested in their own set of tests.
Again, if you change the production code and not the tests, then you should incorporate a coverage tool so you can feel confident that your tests still cover enough of the code.

What is wrong with making a unit test a friend of the class it is testing? [duplicate]

This question already has answers here:
How do I test a class that has private methods, fields or inner classes?
(58 answers)
Closed 5 years ago.
In C++, I have often made a unit test class a friend of the class I am testing. I do this because I sometimes feel the need to write a unit test for a private method, or maybe I want access to some private member so I can more easily setup the state of the object so I can test it. To me this helps preserve encapsulation and abstraction because I am not modifying the public or protected interface of the class.
If I buy a third party library, I wouldn't want its public interface to be polluted with a bunch of public methods I don't need to know about simply because the vendor wanted to unit test!
Nor do I want to have to worry about a bunch of protected members that I don't need to know about if I am inheriting from a class.
That is why I say it preserves abstraction and encapsulation.
At my new job they frown on using friend classes, even for unit tests. They say this is because the class should not "know" anything about its tests, and that you do not want tight coupling between the class and its tests.
Can someone please explain these reasons to me more so that I may understand better? I just do not see why using a friend for unit tests is bad.
Ideally, you shouldn't need to unit test private methods at all. All a consumer of your class should care about is the public interface, so that's what you should test. If a private method has a bug, it should be caught by a unit test that invokes some public method on the class which eventually ends up calling the buggy private method. If a bug manages to slip by, this indicates that your test cases don't fully reflect the contract you wish your class to implement. The solution to this problem is almost certainly to test public methods with more scrutiny, not to have your test cases dig into the class's implementation details.
Again, this is the ideal case. In the real world, things may not always be so clear, and having a unit testing class be a friend of the class it tests might be acceptable, or even desirable. Still, it's probably not something you want to do all the time. If it seems to come up often enough, that might be a sign that your classes are too large and/or performing too many tasks. If so, further subdividing them by refactoring complex sets of private methods into separate classes should help remove the need for unit tests to know about implementation details.
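A small sketch of that last suggestion, with hypothetical names - the complicated private logic moves into its own class, which then gets ordinary black-box tests:

#include <string>

// Before: ReportGenerator::format_row() was a private method the tests
// wanted to reach. After: the formatting logic is its own small class
// with a public interface, so no friend declaration is needed.
class RowFormatter {
public:
    std::string format(const std::string& name, int count) const {
        return name + ": " + std::to_string(count);
    }
};

class ReportGenerator {
public:
    std::string header_row() const {
        return formatter_.format("items", item_count_); // delegates to the testable class
    }
private:
    RowFormatter formatter_;
    int item_count_ = 0;
};

RowFormatter can now be tested directly through format(), and ReportGenerator's tests only need to check that it wires things together.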
You should consider that there are different styles and methods to test: Black box testing only tests the public interface (treating the class as a black box). If you have an abstract base class you can even use the same tests against all your implementations.
If you use White box testing, you might even look at the details of the implementation. Not only about which private methods a class has, but what kind of conditional statements are included (i.e. if you want to increase your condition coverage because you know that the conditions were hard to code). In white box testing, you definitely have "high coupling" between classes/implementation and the tests which is necessary because you want to test the implementation and not the interface.
As bcat pointed out, it's often helpful to use composition and more but smaller classes instead of many private methods. This simplifies white box testing because you can more easily specify the test cases to get a good test coverage.
I feel that bcat gave a very good answer, but I would like to expound on the exceptional case that he alludes to:
In the real world, things may not always be so clear, and having a unit testing class be a friend of the class it tests might be acceptable, or even desirable.
I work in a company with a large legacy codebase, which has two problems both of which contribute to making a friend unit-test desirable.
We suffer from obscenely large functions and classes which require refactoring, but in order to refactor it is helpful to have tests.
Much of our code is dependent on database access, which for various reasons should not be brought into the unit tests.
In some cases mocking is useful to alleviate the latter problem, but very often this just leads to unnecessarily complex design (class hierarchies where none would otherwise be needed), while one could very simply refactor the code in the following way:
class Foo {
public:
    void some_db_accessing_method() {
        // some line(s) of code with db dependence.
        // a bunch of code which is the real meat of the function
        // maybe a little more db access.
    }
};
Now we have the situation where the meat of the function needs refactoring, so we'd like a unit test, but it shouldn't be exposed publicly. Now, there's a wonderful technique called mocking that could be used in this situation, but the fact is that in this case a mock is overkill. It would require me to increase the complexity of the design with an unnecessary hierarchy.
A far more pragmatic approach would be to do something like this:
class Foo {
public:
    void some_db_accessing_method() {
        // db code as before
        unit_testable_meat(data_we_got_from_db);
        // maybe more db access.
    }
private:
    void unit_testable_meat(...);
};
The latter gives me all of the benefits I need from unit testing, including giving me that precious safety net to catch errors produced when I refactor the code in the meat. In order to unit test it, I have to friend a UnitTest class, but I would strongly argue that this is far better than an otherwise useless code hierarchy just to allow me to use a mock.
I think this should become an idiom, and I think it's a suitable, pragmatic solution to increase the ROI of unit testing.
Like bcat suggested, as much as possible you should find bugs through the public interface itself. But if you want to do things like printing private variables and comparing them with expected results, etc. (helpful for developers to debug issues easily), then you can make the UnitTest class a friend of the class to be tested. You may want to put it under a macro like below.
class Myclass
{
#if defined(UNIT_TEST)
    friend class UnitTest;
#endif
};
Enable the UNIT_TEST flag only when unit testing is required.
For other (release) builds, leave this flag disabled.
I don't see anything wrong with using a friend unit testing class in many cases. Yes, decomposing a large class into smaller ones is sometimes a better way to go. I think people are a bit too hasty to dismiss using the friend keyword for something like this - it might not be ideal object oriented design, but I can sacrifice a little idealism for better test coverage if that's what I really need.
Typically you only test the public interface so that you are free to redesign and refactor the implementation. Adding test cases for private members defines a requirement and restriction on the implementation of your class.
Make the functions you want to test protected.
Now in your unit test file, create a derived class.
Create public wrapper functions that call the class under test's protected functions.
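A minimal sketch of that approach, with hypothetical names:

// Production class: the interesting logic is protected rather than private.
class Calculator {
protected:
    int clamp(int value) const { return value > 100 ? 100 : value; }
};

// In the unit test file only: a test-only subclass exposing the
// protected function through a public wrapper.
class TestableCalculator : public Calculator {
public:
    int clamp_for_test(int value) const { return clamp(value); }
};

// In a test body (any framework):
//   TestableCalculator calc;
//   assert(calc.clamp_for_test(150) == 100);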

Unit testing factory methods which have a concrete class as a return type

So I have a factory class and I'm trying to work out what the unit tests should do. From this question I could verify that the interface returned is of a particular concrete type that I would expect.
What should I check for if the factory is returning concrete types (because there is no need - at the moment - for interfaces to be used)? Currently I'm doing something like the following:
[Test]
public void CreateSomeClassWithDependencies()
{
    // m_factory is instantiated in the SetUp method
    var someClass = m_factory.CreateSomeClassWithDependencies();
    Assert.IsNotNull(someClass);
}
The problem with this is that the Assert.IsNotNull seems somewhat redundant.
Also, my factory method might be setting up the dependencies of that particular class like so:
public SomeClass CreateSomeClassWithDependencies()
{
    return new SomeClass(CreateADependency(), CreateAnotherDependency(),
                         CreateAThirdDependency());
}
And I want to make sure that my factory method sets up all these dependencies correctly. Is there no other way to do this than to make those dependencies public/internal properties, which I then check for in the unit test? (I'm not a big fan of modifying the test subjects to suit the testing.)
Edit: In response to Robert Harvey's question, I'm using NUnit as my unit testing framework (but I wouldn't have thought that it would make too much of a difference)
Often, there's nothing wrong with creating public properties that can be used for state-based testing. Yes: It's code you created to enable a test scenario, but does it hurt your API? Is it conceivable that other clients would find the same property useful later on?
There's a fine line between test-specific code and Test-Driven Design. We shouldn't introduce code that has no other potential than to satisfy a testing requirement, but it's quite alright to introduce new code that follow generally accepted design principles. We let the testing drive our design - that's why we call it TDD :)
Adding one or more properties to a class to give the user a better possibility of inspecting that class is, in my opinion, often a reasonable thing to do, so I don't think you should dismiss introducing such properties.
Apart from that, I second nader's answer :)
If the factory is returning concrete types, and you're guaranteeing that your factory always returns a concrete type, and not null, then no, there isn't too much value in the test. It does allow you to make sure, over time, that this expectation isn't violated and that things like exceptions aren't thrown.
This style of test simply makes sure that, as you make changes in the future, your factory behaviour won't change without you knowing.
If your language supports it, for your dependencies, you can use reflection. This isn't always the easiest to maintain, and couples your tests very tightly to your implementation. You have to decide if that's acceptable. This approach tends to be very brittle.
But you really seem to be trying to separate which classes are constructed, from how the constructors are called. You might just be better off with using a DI framework to get that kind of flexibility.
By new-ing up all your types as you need them, you don't give yourself many seams (a seam is a place where you can alter behaviour in your program without editing in that place) to work with.
With the example as you give it though, you could derive a class from the factory. Then override / mock CreateADependency(), CreateAnotherDependency() and CreateAThirdDependency(). Now when you call CreateSomeClassWithDependencies(), you are able to sense whether or not the correct dependencies were created.
Note: the definition of "seam" comes from Michael Feathers' book, "Working Effectively with Legacy Code". It contains examples of many techniques to add testability to untested code. You may find it very useful.
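The question's code is C#, but a minimal C++ sketch of the same subclass-and-override idea might look like this (all names are hypothetical):

#include <cassert>
#include <memory>

struct Dependency { virtual ~Dependency() = default; };

struct SomeClass {
    SomeClass(std::unique_ptr<Dependency>, std::unique_ptr<Dependency>) {}
};

class Factory {
public:
    virtual ~Factory() = default;
    std::unique_ptr<SomeClass> create_some_class_with_dependencies() {
        return std::make_unique<SomeClass>(create_a_dependency(), create_another_dependency());
    }
protected:
    // Virtual so a test subclass can observe or substitute them.
    virtual std::unique_ptr<Dependency> create_a_dependency() { return std::make_unique<Dependency>(); }
    virtual std::unique_ptr<Dependency> create_another_dependency() { return std::make_unique<Dependency>(); }
};

// Test-only subclass: records which dependency hooks were exercised.
class SensingFactory : public Factory {
public:
    bool made_a = false;
    bool made_another = false;
protected:
    std::unique_ptr<Dependency> create_a_dependency() override { made_a = true; return Factory::create_a_dependency(); }
    std::unique_ptr<Dependency> create_another_dependency() override { made_another = true; return Factory::create_another_dependency(); }
};

int main() {
    SensingFactory factory;
    auto obj = factory.create_some_class_with_dependencies();
    assert(obj && factory.made_a && factory.made_another); // both dependencies were created
}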
What we do is create the dependencies with factories, and we use a dependency injection framework to substitute mock factories for the real ones when the test is run. Then we set up the appropriate expectations on those mock factories.
You can always check stuff with reflection. There is no need to expose something just for unit tests. I find it quite rare that I need to reach in with reflection and it may be a sign of bad design.
Looking at your sample code: yes, the Assert.IsNotNull seems somewhat redundant, though it depends on how you designed your factory - some factories return null objects rather than throwing an exception.
As I understand it you want to test that the dependencies are built correctly and passed to the new instance?
If I were not able to use a framework like Google Guice, I would probably do something like this (here using JMock and Hamcrest):
@Test
public void CreateSomeClassWithDependencies()
{
    dependencyFactory = context.mock(DependencyFactory.class);
    classAFactory = context.mock(ClassAFactory.class);
    myDependency0 = context.mock(MyDependency0.class);
    myDependency1 = context.mock(MyDependency1.class);
    myDependency2 = context.mock(MyDependency2.class);
    myClassA = context.mock(ClassA.class);

    context.checking(new Expectations() {{
        oneOf(dependencyFactory).createDependency0(); will(returnValue(myDependency0));
        oneOf(dependencyFactory).createDependency1(); will(returnValue(myDependency1));
        oneOf(dependencyFactory).createDependency2(); will(returnValue(myDependency2));
        oneOf(classAFactory).createClassA(myDependency0, myDependency1, myDependency2);
        will(returnValue(myClassA));
    }});

    builder = new ClassABuilder(dependencyFactory, classAFactory);
    assertThat(builder.make(), equalTo(myClassA));
}
(if you cannot mock ClassA you can assign a non-mock version to myClassA using new)

Is it okay to store test-cases inside the corresponding class? (C++)

I have started to write my tests as static functions in the class I want to test. These functions usually test functionality: they create a lot of objects, measure memory consumption (to detect leaks), and so on.
Is this good or bad practice? Will it bite me in the long run?
I keep them separate, because I don't want test classes to be included in the deployable artifacts. No sense increasing the size of the .exe or making them available to clients.
I'd recommend writing unit tests with CppUnit.
No, you should write unit tests instead.
I would say that it isn't best-practice, but it is "okay", in that it won't break your program or cause it to crash. There are a lot of things that are okay to do, but you shouldn't do.
Test code is fundamentally different from production code in terms of ownership, deployment, non-functional requirements and so on. Therefore it is better to keep it separate from the code being tested, in separate files and probably even in separate directories.
To facilitate white-box unit testing of the class under test, you often need to declare the test class/test functions as friends. Some classes are unit-testable through their public members alone, so adding friends is not always necessary.
Combining test code and code under test is simple: you just link the object files together in the same project.
Sometimes you can see unit test code that #includes the code under test, but I would advise against that - for example, if you have test coverage measurement tooling in place (highly recommended!), the measures won't be correct for the code under test.
If you have your test cases inside your class, it's hard to have things like fixtures.
I am also going to give a shout out to Boost.Test. The learning curve is a little high but it is amazing once you get used to it.
That is bad practice.
You should have a separate class to test the class you are creating. The way you are doing it, you are bloating production code with test code, which is not what you should do.
The way you want to test the class Foo is like this:
//Foo.cpp
class Foo {
public:
    int GetInt() { return 15; }
};

//FooTest.cpp
TEST(FooTest, testGetIntShouldReturn15) {
    Foo foo;
    ASSERT_EQUAL(15, foo.GetInt());
}