Unit testing, built-in production dependencies

I'm reading "The Art of Unit Testing" and there is a specific paragraph I'm not sure about.
"One of the reasons you may want to avoid using a base class instead of an interface is that a base class from the production code may already have (and probably has) built-in production dependencies that you’ll have to know about and override. This makes implementing derived classes for testing harder than implementing an interface, which lets you know exactly what the underlying implementation is and gives you full control over it."
Can someone please give me an example of a built-in production dependency?
Thanks

My interpretation of this is basically anything where you have no control over the underlying implementation, but still rely on it. This could be in your own code or in third party libraries.
Something like:
class MyClass : BaseConfigurationProvider
{
}
abstract class BaseConfigurationProvider
{
string connectionString;
protected BaseConfigurationProvider()
{
connectionString = GetFromConfiguration();
}
}
This has a hidden dependency on wherever the connection string comes from - perhaps a config file, perhaps a random text file - and either way it is external state that is difficult for a unit test on MyClass to handle.
Whereas the same given an interface:
class MyClass
{
string connectionString;
public MyClass(IBaseConfigurationProvider provider)
{
connectionString = provider.GetConnectionString();
}
}
interface IBaseConfigurationProvider
{
string GetConnectionString();
}
You are at least in full control of the implementation, and using an interface means that test versions of the implementation can be substituted during unit tests, or you can inject dependencies into consuming classes (as I have done above). In this scenario, the dependency is the need to resolve a connection string; the tests can simply provide a different or empty one.
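For illustration, here is a minimal sketch of what a test version might look like; the stub class and the NUnit-style test are hypothetical additions of mine, not something from the book:

class StubConfigurationProvider : IBaseConfigurationProvider
{
    public bool WasAsked; // records that MyClass requested the connection string

    public string GetConnectionString()
    {
        WasAsked = true;
        return "Server=test;Database=fake"; // canned value, no config file involved
    }
}

[Test]
public void Constructor_resolves_connection_string_through_the_provider()
{
    var provider = new StubConfigurationProvider();
    var unused = new MyClass(provider); // no config file, registry or database needed
    Assert.IsTrue(provider.WasAsked);
}

The point is that the test controls the whole dependency; nothing outside the test process is touched.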

One example I can think of is the use of the Session variable inside ASP.NET (I'm a .NET guy).
Because you have no control over how ASP.NET populates the session variable, you cannot test it simply by writing a test case; you have to either override it somehow or use a mock object.
This happens because the request context and cookies aren't present when you are testing.
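A common way around this is to hide the session behind a small interface of your own, so production code reads session state through the abstraction and tests can supply an in-memory fake. A rough sketch (the interface and class names here are hypothetical):

using System.Collections.Generic;
using System.Web;

public interface ISessionStore
{
    object Get(string key);
    void Set(string key, object value);
}

// Production implementation: backed by the real ASP.NET session
public class AspNetSessionStore : ISessionStore
{
    public object Get(string key) { return HttpContext.Current.Session[key]; }
    public void Set(string key, object value) { HttpContext.Current.Session[key] = value; }
}

// Test implementation: a plain dictionary, so no HTTP context or cookies are required
public class FakeSessionStore : ISessionStore
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();
    public object Get(string key) { return _items.ContainsKey(key) ? _items[key] : null; }
    public void Set(string key, object value) { _items[key] = value; }
}

Classes that used to read HttpContext.Current.Session directly take an ISessionStore instead, and the tests hand them a FakeSessionStore.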


Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
// break up filename
// validate filename & extension
// retrieve info from file if it's a certain type
// some other general things you could do, etc
var myInfo = GetFooInfo(filename);
// create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to verify that the output is as expected?
I like to err on the side of caution and be wary of things breaking, making sure that maintenance later down the road is easier, but I feel very skeptical about adding very similar tests to the test project. Would this be bad practice, or is there any way to do this more efficiently?
I'm marking this as language agnostic, but just in case it matters I am using C# and NUnit - Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question: if "you have two methods that return two different properties - and a change in either of them could break something", you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
public interface IFooFileInfoProvider
{
FooInfo GetFooInfo(string filename);
}
public class Parser
{
private readonly IFooFileInfoProvider _fooFileInfoProvider;
public Parser(IFooFileInfoProvider fooFileInfoProvider)
{
// Add a null check
_fooFileInfoProvider = fooFileInfoProvider;
}
public Foo ParseMe(string filepath)
{
string filename = Path.GetFileName(filepath);
var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
return new Foo(myInfo);
}
}
public class FooFileInfoProvider : IFooFileInfoProvider
{
public FooInfo GetFooInfo(string filename)
{
// Do I/O
return new FooInfo(); // parameters...
}
}
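To show the payoff of that refactoring, here is a rough sketch of how ParseMe could now be tested with NUnit (which the question mentions); the stub class and the test values are hypothetical:

public class StubFooFileInfoProvider : IFooFileInfoProvider
{
    public string RequestedFilename; // records the argument ParseMe passed in
    public int Calls;                // lets the test check it was called exactly once

    public FooInfo GetFooInfo(string filename)
    {
        RequestedFilename = filename;
        Calls++;
        return new FooInfo(); // canned data, no file I/O
    }
}

[TestFixture]
public class ParserTests
{
    [Test]
    public void ParseMe_asks_the_provider_for_info_about_the_file()
    {
        var stub = new StubFooFileInfoProvider();
        var parser = new Parser(stub);

        Foo result = parser.ParseMe(@"c:\data\report.foo");

        Assert.AreEqual("report.foo", stub.RequestedFilename);
        Assert.AreEqual(1, stub.Calls);
        Assert.IsNotNull(result);
    }
}

This also covers the earlier point about at least verifying that GetFooInfo is actually called, without needing a mocking framework.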
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing that method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.

How can I refactor and unit test complex legacy Java EE5 EJB methods?

My colleagues and I are currently introducing unit tests to our legacy Java EE5 codebase. We use mostly JUnit and Mockito. In the process of writing tests, we have noticed that several methods in our EJBs were hard to test because they did a lot of things at once.
I'm fairly new to the whole testing business, and so I'm looking for insight in how to better structure the code or the tests. My goal is to write good tests without a headache.
This is an example of one of our methods and its logical steps in a service that manages a message queue:
consumeMessages
acknowledgePreviouslyDownloadedMessages
getNewUnreadMessages
addExtraMessages (depending on somewhat complex conditions)
markMessagesAsDownloaded
serializeMessageObjects
The top-level method is currently exposed in the interface, while all sub-methods are private. As far as I understand it, it would be bad practice to just start testing private methods, as only the public interface should matter.
My first reaction was to just make all the sub-methods public and test them in isolation, then in the top-level method just make sure that it calls the sub-methods. But then a colleague mentioned that it might not be a good idea to expose all those low-level methods at the same level as the other one, as it might cause confusion, and other developers might start using them when they should be using the top-level one. I can't fault his argument.
So here I am.
How do you reconcile exposing easily testable low-level methods with avoiding clutter in the interfaces? In our case, the EJB interfaces.
I've read in other unit test questions that one should use dependency injection or follow the single responsibility principle, but I'm having trouble applying it in practice. Would anyone have pointers on how to apply that kind of pattern to the example method above?
Would you recommend other general OO patterns or Java EE patterns?
At first glance, I would say that we probably need to introduce a new class, which would 1) expose public methods that can be unit tested but 2) not be exposed in the public interface of your API.
As an example, let's imagine that you are designing an API for a car. To implement the API, you will need an engine (with complex behavior). You want to fully test your engine, but you don't want to expose details to the clients of the car API (all I know about my car is how to push the start button and how to switch the radio channel).
In that case, what I would do is something like that:
public class Engine {
public void doActionOnEngine() {}
public void doOtherActionOnEngine() {}
}
public class Car {
private Engine engine;
// the setter is used for dependency injection
public void setEngine(Engine engine) {
this.engine = engine;
}
// notice that there is no getter for engine
public void doActionOnCar() {
engine.doActionOnEngine();
}
public void doOtherActionOnCar() {
engine.doActionOnEngine();
engine.doOtherActionOnEngine();
}
}
For the people using the Car API, there is no way to access the engine directly, so there is no risk to do harm. On the other hand, it is possible to fully unit test the engine.
Dependency Injection (DI) and Single Responsibility Principle (SRP) are highly related.
SRP basically states that each class should only do one thing and delegate all other matters to separate classes. For instance, your serializeMessageObjects method should be extracted into its own class -- let's call it MessageObjectSerializer.
DI means injecting (passing) the MessageObjectSerializer object as an argument to your MessageQueue object -- either in the constructor or in the call to the consumeMessages method. You can use DI frameworks to do this for you, but I recommend doing it manually, to get the concept.
Now, if you create an interface for the MessageObjectSerializer, you can pass that to the MessageQueue, and then you get the full value of the pattern, as you can create mocks/stubs for easy testing. Suddenly, consumeMessages doesn't have to pay attention to how serializeMessageObjects behaves.
Below, I have tried to illustrate the pattern. Note that when you want to test consumeMessages, you don't have to use the real MessageObjectSerializer object. You can make a mock or stub that does exactly what you want it to do, and pass it in instead of the concrete class. This really makes testing so much easier.
// THE MAIN CLASS
public class MyMessageQueue
{
IMessageObjectSerializer _serializer;
//Constructor that gets the serialization logic injected
public MyMessageQueue(IMessageObjectSerializer serializer)
{
_serializer = serializer;
//Also a lot of other injection
}
//Your main method. Now it calls an external object to serialize
public void consumeMessages()
{
//Do all the other stuff
_serializer.serializeMessageObjects();
}
}
//THE SERIALIZER CLASS
public class MessageObjectSerializer : IMessageObjectSerializer
{
public List<MessageObject> serializeMessageObjects()
{
// Do the serialization logic here and return the result
return new List<MessageObject>();
}
}
//THE INTERFACE FOR THE SERIALIZER
public interface IMessageObjectSerializer
{
List<MessageObject> serializeMessageObjects();
}
EDIT: Sorry, my example is in C#. I hope you can use it anyway :-)
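To illustrate the "full value of the pattern" mentioned above, a test can then use a hand-written stub in place of the real serializer. A sketch in the same C# style (the stub class and the NUnit-style test are my own additions):

public class StubMessageObjectSerializer : IMessageObjectSerializer
{
    public int TimesCalled; // lets the test verify the collaboration

    public List<MessageObject> serializeMessageObjects()
    {
        TimesCalled++;
        return new List<MessageObject>(); // no real serialization logic needed
    }
}

[Test]
public void consumeMessages_serializes_the_message_objects()
{
    var stubSerializer = new StubMessageObjectSerializer();
    var queue = new MyMessageQueue(stubSerializer);

    queue.consumeMessages();

    Assert.AreEqual(1, stubSerializer.TimesCalled);
}

On the Java side, since the question already uses Mockito, the hand-written stub could simply be replaced by a mock created with mock() and checked with verify().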
Well, as you have noticed, it's very hard to unit test a concrete, high-level program. You have also identified the two most common issues:
Usually the program is configured to use specific resources, such as a specific file, IP address, hostname, etc. To counter this, you need to refactor the program to use dependency injection. This is usually done by adding parameters to the constructor that replace the hardcoded values.
It's also very hard to test large classes and methods. This is usually due to the combinatorial explosion in the number of tests required to cover a complex piece of logic. To counter this, you will usually refactor first to get lots more (but shorter) methods, then try to make the code more generic and testable by extracting several classes from your original class, each with a single entry method (public) and several utility methods (private). This is essentially the single responsibility principle.
Now you can start working your way "up" by testing the new classes. This will be a lot easier, as the combinations are much easier to handle at this point.
At some point along the way you will probably find that you can simplify your code greatly by using these design patterns: Command, Composite, Adaptor, Factory, Builder and Facade. These are the most common patterns that cut down on clutter.
Some parts of the old program will probably be largely untestable, either because they are just too crufty, or because it's not worth the trouble. Here you can settle for a simple test that just checks that the output from known input has not changed. Essentially a regression test.
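To make that last point concrete, a characterization (regression) test can be as simple as pinning the current output for one known input. A rough sketch, written in C# like the earlier answer in this thread; the class, method, input file, and expected value are all hypothetical placeholders:

// Pinned to whatever the legacy code produced when the test was first written
private const string ExpectedOutput = "<previously captured output>";

[Test]
public void Output_for_a_known_input_has_not_changed()
{
    var service = new LegacyMessageService("known-messages.xml"); // hypothetical legacy entry point

    string actual = service.ConsumeAndSerialize();

    // Guards against unintended changes in behaviour, not against existing bugs
    Assert.AreEqual(ExpectedOutput, actual);
}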

How to write jUnit tests for private methods [duplicate]

How do I use JUnit to test a class that has internal private methods, fields or nested classes?
It seems bad to change the access modifier for a method just to be able to run a test.
If you have somewhat of a legacy Java application, and you're not allowed to change the visibility of your methods, the best way to test private methods is to use reflection.
Internally we're using helpers to get/set private and private static variables as well as invoke private and private static methods. The following patterns will let you do pretty much anything related to the private methods and fields. Of course, you can't change private static final variables through reflection.
Method method = TargetClass.class.getDeclaredMethod(methodName, argClasses);
method.setAccessible(true);
return method.invoke(targetObject, argObjects);
And for fields:
Field field = TargetClass.class.getDeclaredField(fieldName);
field.setAccessible(true);
field.set(object, value);
Notes:
TargetClass.class.getDeclaredMethod(methodName, argClasses) lets you look into private methods. The same thing applies for getDeclaredField.
The setAccessible(true) is required to play around with privates.
The best way to test a private method is via another public method. If this cannot be done, then one of the following conditions is true:
The private method is dead code
There is a design smell near the class that you are testing
The method that you are trying to test should not be private
When I have private methods in a class that are sufficiently complicated that I feel the need to test the private methods directly, that is a code smell: my class is too complicated.
My usual approach to addressing such issues is to tease out a new class that contains the interesting bits. Often, this method and the fields it interacts with, and maybe another method or two can be extracted in to a new class.
The new class exposes these methods as 'public', so they're accessible for unit testing. The new and old classes are now both simpler than the original class, which is great for me (I need to keep things simple, or I get lost!).
Note that I'm not suggesting that people create classes without using their brain! The point here is to use the forces of unit testing to help you find good new classes.
I have used reflection to do this for Java in the past, and in my opinion it was a big mistake.
Strictly speaking, you should not be writing unit tests that directly test private methods. What you should be testing is the public contract that the class has with other objects; you should never directly test an object's internals. If another developer wants to make a small internal change to the class which doesn't affect the class's public contract, he/she then has to modify your reflection-based test to ensure that it works. If you do this repeatedly throughout a project, unit tests stop being a useful measurement of code health and start to become a hindrance to development and an annoyance to the development team.
What I recommend doing instead is using a code coverage tool, such as Cobertura, to ensure that the unit tests you write provide decent coverage of the code in private methods. That way, you indirectly test what the private methods are doing, and maintain a higher level of agility.
From this article: Testing Private Methods with JUnit and SuiteRunner (Bill Venners), you basically have 4 options:
Don't test private methods.
Give the methods package access.
Use a nested test class.
Use reflection.
Generally a unit test is intended to exercise the public interface of a class or unit. Therefore, private methods are implementation details that you would not expect to test explicitly.
Just two examples of where I would want to test a private method:
Decryption routines - I would not want to make them visible to anyone just for the sake of testing, else anyone could use them to decrypt. But they are intrinsic to the code, complicated, and need to always work (the obvious exception is reflection, which can be used to view even private methods in most cases, when a SecurityManager is not configured to prevent this).
Creating an SDK for community consumption. Here public takes on a wholly different meaning, since this is code that the whole world may see (not just code internal to my application). I put code into private methods if I don't want the SDK users to see it - I don't see this as a code smell, merely as how SDK programming works. But of course I still need to test my private methods, and they are where the functionality of my SDK actually lives.
I understand the idea of only testing the "contract", but I don't see how one can advocate actually not testing code - your mileage may vary.
So my trade-off involves complicating the JUnit tests with reflection, rather than compromising my security and SDK.
The private methods are called by a public method, so the inputs to your public methods will also exercise the private methods that those public methods call. When a public method fails, that could be a failure in a private method.
In the Spring Framework you can test private methods using this method:
ReflectionTestUtils.invokeMethod()
For example:
ReflectionTestUtils.invokeMethod(TestClazz, "createTest", "input data");
Another approach I have used is to change a private method to package-private or protected and then complement it with the @VisibleForTesting annotation from the Google Guava library.
This tells anybody using the method to take caution and not access it directly, even from within the same package. The test class does not need to sit in the same physical folder, only in the same package, under the test source folder.
For example, if a method to be tested is in src/main/java/mypackage/MyClass.java, then your test class should be placed in src/test/java/mypackage/MyClassTest.java. That way, the test class has access to the package-private method.
To test legacy code with large and quirky classes, it is often very helpful to be able to test the one private (or public) method I'm writing right now.
I use the junitx.util.PrivateAccessor package for Java. It has lots of helpful one-liners for accessing private methods and private fields.
import junitx.util.PrivateAccessor;
PrivateAccessor.setField(myObjectReference, "myCrucialButHardToReachPrivateField", myNewValue);
PrivateAccessor.invoke(myObjectReference, "privateMethodName", java.lang.Class[] parameterTypes, java.lang.Object[] args);
Having tried Cem Catikkas' solution using reflection for Java, I'd have to say his was a more elegant solution than I have described here. However, if you're looking for an alternative to using reflection, and have access to the source you're testing, this will still be an option.
There is possible merit in testing private methods of a class, particularly with test-driven development, where you would like to design small tests before you write any code.
Creating a test with access to private members and methods can test areas of code which are difficult to target specifically with access only to public methods. If a public method has several steps involved, it can consist of several private methods, which can then be tested individually.
Advantages:
Can test to a finer granularity
Disadvantages:
Test code must reside in the same file as the source code, which can be more difficult to maintain
Similarly, .class output files must remain within the same package as declared in the source code
However, if continuous testing requires this method, it may be a signal that the private methods should be extracted, which could be tested in the traditional, public way.
Here is a convoluted example of how this would work:
// Import statements and package declarations
public class ClassToTest
{
private int decrement(int toDecrement) {
toDecrement--;
return toDecrement;
}
// Constructor and the rest of the class
public static class StaticInnerTest extends TestCase
{
public StaticInnerTest(){
super();
}
public void testDecrement(){
int number = 10;
ClassToTest toTest= new ClassToTest();
int decremented = toTest.decrement(number);
assertEquals(9, decremented);
}
public static void main(String[] args) {
junit.textui.TestRunner.run(StaticInnerTest.class);
}
}
}
The inner class would be compiled to ClassToTest$StaticInnerTest.
See also: Java Tip 106: Static inner classes for fun and profit
As others have said... don't test private methods directly. Here are a few thoughts:
Keep all methods small and focused (easy to test, easy to find what is wrong)
Use code coverage tools. I like Cobertura (oh happy day, it looks like a new version is out!)
Run the code coverage on the unit tests. If you see that methods are not fully tested add to the tests to get the coverage up. Aim for 100% code coverage, but realize that you probably won't get it.
If using Spring, ReflectionTestUtils provides some handy tools that help out here with minimal effort. For example, to set up a mock on a private member without being forced to add an undesirable public setter:
ReflectionTestUtils.setField(theClass, "theUnsettableField", theMockObject);
Private methods are consumed by public ones. Otherwise, they're dead code. That's why you test the public method, asserting the expected results of the public method and thereby, the private methods it consumes.
Private methods should be tested by debugging before running your unit tests on public methods.
They may also be debugged using test-driven development, debugging your unit tests until all your assertions are met.
I personally believe it is better to create classes using TDD; creating the public method stubs, then generating unit tests with all the assertions defined in advance, so the expected outcome of the method is determined before you code it. This way, you don't go down the wrong path of making the unit test assertions fit the results. Your class is then robust and meets requirements when all your unit tests pass.
If you're trying to test existing code that you're reluctant or unable to change, reflection is a good choice.
If the class's design is still flexible, and you've got a complicated private method that you'd like to test separately, I suggest you pull it out into a separate class and test that class separately. This doesn't have to change the public interface of the original class; it can internally create an instance of the helper class and call the helper method.
If you want to test difficult error conditions coming from the helper method, you can go a step further. Extract an interface from the helper class, add a public getter and setter to the original class to inject the helper class (used through its interface), and then inject a mock version of the helper class into the original class to test how the original class responds to exceptions from the helper. This approach is also helpful if you want to test the original class without also testing the helper class.
Testing private methods breaks the encapsulation of your class because every time you change the internal implementation you break client code (in this case, the tests).
So don't test private methods.
The answer from JUnit.org FAQ page:
But if you must...
If you are using JDK 1.3 or higher, you can use reflection to subvert the access control mechanism with the aid of the PrivilegedAccessor. For details on how to use it, read this article.
If you are using JDK 1.6 or higher and you annotate your tests with @Test, you can use Dp4j to inject reflection in your test methods. For details on how to use it, see this test script.
P.S. I'm the main contributor to Dp4j. Ask me if you need help. :)
If you want to test private methods of a legacy application where you can't change the code, one option for Java is jMockit, which will allow you to create mocks to an object even when they're private to the class.
PowerMockito is made for this.
Use a Maven dependency:
<dependency>
<groupId>org.powermock</groupId>
<artifactId>powermock-core</artifactId>
<version>2.0.7</version>
<scope>test</scope>
</dependency>
Then you can do
import org.powermock.reflect.Whitebox;
...
MyClass sut = new MyClass();
SomeType rval = Whitebox.invokeMethod(sut, "myPrivateMethod", params, moreParams);
I tend not to test private methods. There lies madness. Personally, I believe you should only test your publicly exposed interfaces (and that includes protected and internal methods).
If you're using JUnit, have a look at junit-addons. It has the ability to ignore the Java security model and access private methods and attributes.
Here is my generic function to test private fields:
protected <F> F getPrivateField(String fieldName, Object obj)
throws NoSuchFieldException, IllegalAccessException {
Field field =
obj.getClass().getDeclaredField(fieldName);
field.setAccessible(true);
return (F)field.get(obj);
}
Please see below for an example;
The following import statement should be added:
import org.powermock.reflect.Whitebox;
Now you can directly pass the object which has the private method, method name to be called, and additional parameters as below.
Whitebox.invokeMethod(obj, "privateMethod", "param1");
I would suggest refactoring your code a little bit. When you have to start thinking about using reflection or other kinds of tricks just to test your code, something is wrong with your code.
You mentioned different types of problems. Let's start with private fields. In the case of private fields, I would add a new constructor and inject the fields through it. Instead of this:
public class ClassToTest {
private final String first = "first";
private final List<String> second = new ArrayList<>();
...
}
I'd have used this:
public class ClassToTest {
private final String first;
private final List<String> second;
public ClassToTest() {
this("first", new ArrayList<>());
}
public ClassToTest(final String first, final List<String> second) {
this.first = first;
this.second = second;
}
...
}
This won't be a problem even with some legacy code: old code will keep using the empty constructor, and, if you ask me, the refactored code looks cleaner, and you'll be able to inject the necessary values in a test without reflection.
Now about private methods. In my personal experience, when you have to stub a private method for testing, that method has nothing to do in that class. A common pattern in that case would be to wrap it within an interface, like Callable, and then pass that interface in through the constructor as well (with the same multiple-constructor trick):
public ClassToTest() {
this(...);
}
public ClassToTest(final Callable<T> privateMethodLogic) {
this.privateMethodLogic = privateMethodLogic;
}
Almost everything I wrote above amounts to the dependency injection pattern. In my personal experience it's really useful while testing, and I think that this kind of code is cleaner and easier to maintain. I'd say the same about nested classes: if a nested class contains heavy logic, it would be better to move it out as a package-private class and inject it into the class that needs it.
There are also several other design patterns which I have used while refactoring and maintaining legacy code, but it all depends on the specifics of the code you need to test. Using reflection is mostly not a problem, but when you have an enterprise application which is heavily tested and the tests are run before every deployment, everything gets really slow (it's just annoying and I don't like that kind of stuff).
There is also setter injection, but I wouldn't recommend using it. I'd rather stick with a constructor and initialize everything when it's really necessary, leaving the possibility of injecting the necessary dependencies.
A private method is only accessible within the same class, so there is no way to test a "private" method of a target class from any test class. A way out is to perform unit testing manually or to change your method from "private" to "protected".
A protected method can then be accessed within the same package where the class is defined (and by subclasses), so testing a protected method of a target class means defining your test class in the same package as the target class.
If none of the above suits your requirements, use reflection to access the private method.
As many above have suggested, a good way is to test them via your public interfaces.
If you do this, it's a good idea to use a code coverage tool (like EMMA) to see if your private methods are in fact being executed from your tests.
Today, I pushed a Java library to help test private methods and fields. It has been designed with Android in mind, but it can really be used for any Java project.
If you have code with private methods, fields or constructors, you can use BoundBox. It does exactly what you are looking for.
Here below is an example of a test that accesses two private fields of an Android activity to test it:
@UiThreadTest
public void testCompute() {
// Given
boundBoxOfMainActivity = new BoundBoxOfMainActivity(getActivity());
// When
boundBoxOfMainActivity.boundBox_getButtonMain().performClick();
// Then
assertEquals("42", boundBoxOfMainActivity.boundBox_getTextViewMain().getText());
}
BoundBox makes it easy to test private/protected fields, methods and constructors. You can even access stuff that is hidden by inheritance. Indeed, BoundBox breaks encapsulation. It will give you access to all that through reflection, but everything is checked at compile time.
It is ideal for testing some legacy code. Use it carefully. ;)
First, I'll throw this question out: Why do your private members need isolated testing? Are they that complex, providing such complicated behaviors as to require testing apart from the public surface? It's unit testing, not 'line-of-code' testing. Don't sweat the small stuff.
If they are that big - big enough that each of these private members is a 'unit' in complexity - consider refactoring such private members out of this class.
If refactoring is inappropriate or infeasible, can you use the strategy pattern to replace access to these private member functions / member classes when under unit test? Under unit test, the strategy would provide added validation, but in release builds it would be simple passthrough.
I want to share a rule I have about testing which particularly is related to this topic:
I think that you should never adapt production code in order to make writing tests easier.
There are a few suggestions in other posts saying you should adapt the original class in order to test a private method - please read this warning first.
If we change the accessibility of a method/field to package-private or protected just in order to have it accessible to tests, then we defeat the purpose of the private access modifier.
Why should we have private fields/methods/classes at all when we want to have test-driven development? Should we declare everything as package-private, or even public, so we can test without any effort? I don't think so.
From another point of view: Tests should not burden performance and execution of the production application.
If we change production code just for the sake of easier testing, that may burden performance and the execution of the application in some way.
If someone starts changing private access to package-private, then a developer may eventually come up with other "ingenious ideas" about adding even more code to the original class. This adds noise to readability and can burden the performance of the application.
By changing private access to something less restrictive, we open up the possibility for a developer to misuse the new situation in the future development of the application. Instead of enforcing him/her to develop in the proper way, we tempt him/her with new possibilities and give him/her the ability to make wrong choices in the future.
Of course there might be a few exceptions to this rule, but then we should have a clear understanding of what is the rule and what is the exception. We need to be absolutely sure we know why that kind of exception is introduced.

How Do You Create Test Objects For Third Party Legacy Code

I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Working with these other divisions often feels like dealing with third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" //which contains deep #include dependencies...
#include "tictactoe.h" //...and need to exist at compile time to get into test...
class Something //which may or may not inherit from another class similar to this...
{
public:
virtual void fxn1(void); //which often calls into many other classes, similar to this
//...
int data1; //will be the only thing I can test against, but is often meaningless without fxn1 implemented
//...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers, on a need-to-know basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"
void CGlobalThermoNuclearWar::Simulate(void) {}; // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...
void Something::fxn1(void) { data1 = blah(); } //test specific functionality.
But before I start doing that, I was wondering if anyone has tried providing actual objects on a code base similar to mine, which would allow creating new test-specific classes to use in place of actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google have a good mocking framework for C++.
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing, creating a different file with fake implementations, and having my tests link against the fake behaviour while the production code links against the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
ConcreteProductionClass(); // I've found the 0 arg constructor useful
public:
void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
: public ConcreteProductionClass
, public ClassThatLetsMeSetExpectations
{
friend class ConcreteProductionClass;
MockTypes membersNeededToSetExpectations;
public:
MockProductionClass() : ConcreteProductionClass() {}
};
void ConcreteProductionClass::regularFunction() {
membersNeededToSetExpectations.PassOrFailTheTest();
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass c) {
c.regularFunction();
}
Test.cpp
TEST(myTest) {
MockProductionClass m;
m.SetExpectationsAndReturnValues();
doSomething(m);
ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this on my spare time, but it's not moving along very quickly. Even if I got my method working the way I want, and had the expectation setting code in place, this method still has a couple drawbacks, one of them being that your build commands can get to be kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it. Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic.
Define a test class using that same interface. This test class could have nothing but noops or fancy code that simulates the real classes.
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface. All interface calls are forwarded to the real object's methods.
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
I'm sure all this is in Feathers' Working Effectively with Legacy Code book - and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
One thing you did not indicate in your question is the reason why your classes derive from base classes from the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of their class in the unit tests.
Otherwise, an idea would be to write a script to extract and create the interfaces you need from the headers they provide, and integrate this into the compilation process so your unit tests can be checked in.

Should I use "glass box" testing when it leads to *fewer* tests?

For example, I'm writing tests against a CsvReader. It's a simple class that enumerates and splits rows of text. Its only raison d'être is ignoring commas within quotes. It's less than a page.
By "black box" testing the class, I've checked things like
What if the file doesn't exist?
What if I don't have permission on the file?
What if the file has non-Windows line-breaks?
But in fact, all of these things are the StreamReader's business. My class works without doing anything about these cases. So in essence, my tests are catching errors thrown by StreamReader, and testing behavior handled by the framework. It feels like a lot of work for nothing.
I've seen the related questions
Should QA test from a strictly black-box perspective?
Rigor in capturing test cases for unit testing
My question is, am I missing the point of "glass box" testing if I use what I know to avoid this kind of work?
This really depends on the interface of your CsvReader, you need to consider what the user of the class is expecting.
For example, if one of the parameters is a file name and the file does not exist, what should happen? This should not depend on whether you use a StreamReader or not. The unit tests should test the observable external behaviour of your class and, in some cases, dig slightly deeper to additionally ensure certain implementation details are covered, e.g. that the file is closed when the reader has finished.
However you don't want the Unit Tests to be dependent upon all of the details or to assume that because of an implementation detail something will happen.
All of the examples you mention in your question involve observable behaviour (in this case exceptional circumstances) of your class and therefore should have unit tests related to them.
I don't think you should waste time testing things that are not your code. It's a design choice, not a testing choice, whether to handle the errors of the underlying framework or let them propagate up to the caller. FWIW, I think you're right to let them propagate up. Once you've made the design decision, though, your unit testing should cover your code (and cover it well) without testing the underlying framework. Using dependency injection and a mock Stream is probably a good idea, too.
[EDIT] Example of dependency injection (see link above for more info)
Not using dependency injection we have:
public class CsvReader {
private string filename;
public CsvReader(string filename)
{
this.filename = filename;
}
public string Read()
{
StreamReader reader = new StreamReader( this.filename );
string contents = reader.ReadToEnd();
// ... do some stuff with contents ...
return contents;
}
}
With dependency injection (constructor injection) we do:
public class CsvReader {
private IStream stream;
public CsvReader( IStream stream )
{
this.stream = stream;
}
public string Read()
{
StreamReader reader = new StreamReader( this.stream );
string contents = reader.ReadToEnd();
// ... do some stuff with contents ...
return contents;
}
}
This allows the CsvReader to be more easily testable. We pass an instance implementing the interface we depend on into the constructor, in this case IStream. Because of this we can create another class (perhaps a mock class) that implements IStream but doesn't necessarily do file I/O. We can use this class to feed our reader whatever data we want without involving any of the underlying framework. In this case, I'd use a MemoryStream since we are just reading from it. If we wanted to, though, we could use a mock class and give it a richer interface that lets our tests configure the responses it gives. This way we can test the code that we write and not involve the underlying framework code at all. Alternatively we could also pass a TextReader, but the normal dependency injection pattern uses interfaces and I wanted to show the pattern with interfaces. Arguably passing in a TextReader would be better, since the code above is still dependent on the StreamReader implementation.
Yes, but that would strictly be for the purposes of unit testing:
You could abstract your CSV reader implementation from any particular StreamReader by defining an abstract stream reader interface and testing your own implementation with a mock stream reader that implements that interface. Your mock reader would obviously be immune to errors like non-existent files, permission problems, OS differences etc. You would therefore entirely be testing your own code and can achieve 100% code coverage.
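For example, here is a rough sketch of that idea; the interface, the fake, and the CsvReader members shown are hypothetical, assuming the reader takes the abstraction in its constructor:

using System.Collections.Generic;

public interface ILineSource
{
    string ReadLine(); // returns null at end of input, like TextReader.ReadLine()
}

// Test double: serves canned CSV lines with no file system involved
public class FakeLineSource : ILineSource
{
    private readonly Queue<string> _lines;
    public FakeLineSource(params string[] lines) { _lines = new Queue<string>(lines); }
    public string ReadLine() { return _lines.Count > 0 ? _lines.Dequeue() : null; }
}

[Test]
public void Commas_inside_quotes_are_not_treated_as_separators()
{
    var reader = new CsvReader(new FakeLineSource("1,\"Smith, John\",42"));

    string[] fields = reader.ReadRecord(); // hypothetical method splitting the next row

    CollectionAssert.AreEqual(new[] { "1", "Smith, John", "42" }, fields);
}

Every test then exercises only the quoted-comma logic, never the file system or StreamReader's error handling.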
I tend to agree with tvanfosson: If you inherit from a StreamReader and extend it in some way, your unit tests should only exercise the functionality you've added or altered. Otherwise, you're going to waste a lot of time and cognitive energy writing, reading and maintaining tests that don't add any value.
Although markj is correct that tests should cover the "observable external behaviour" of a class, I think it's appropriate to consider where that behaviour comes from. If it's behaviour via inheritance from another (presumably unit tested) class, then I see no benefit in adding your own unit tests. OTOH, if it's behaviour via composition then it might justify some tests to ensure the pass-throughs are working properly.
My preference would be to unit test the specific functionality you alter, and then write integration tests that check for error conditions, but in the context of the business need you're ultimately supporting.
Just an FYI, if this is .NET, you should consider not reinventing the wheel.
For C#
Add a reference to Microsoft.VisualBasic
Use the fantastic Microsoft.VisualBasic.FileIO.TextFieldParser class to handle your CSV parsing needs (see the sketch below).
Microsoft already tested it, so you won't have to.
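A minimal sketch of what that looks like (TextFieldParser lives in the Microsoft.VisualBasic assembly; the file path is just an example):

using Microsoft.VisualBasic.FileIO;

// Reads a CSV file, honouring commas inside quoted fields
using (var parser = new TextFieldParser(@"c:\data\input.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;

    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields();
        // ...use the fields...
    }
}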
Enjoy.
You should always be managing errors that your framework throws; that way your application is robust and doesn't crash on catastrophic errors...