How do you unit test different class access levels?

I admit - I'm a complete novice when it comes to unit testing. I can grasp the concepts easily enough (test one thing, break-fix-test-repeat, etc.), but I'm having a bit of a problem getting my mind around this one...
I've been tasked with rewriting a large section of our application, and I've got the class structure down pretty well. We have our test projects mixed right in with the rest of the solution, and all the references are lining up the way we want them to. Unfortunately, there are a few Friend classes that can only be accessed from inside the same assembly. As it stands, the test project is not part of that assembly, so I cannot get direct access to any of those underlying methods, which REALLY need to be tested.
From what I've been reading, I could create a public mockup of the classes in question and test it that way, but I'm concerned that down the road someone will make a change in the production code and not copy it out to the test code, defeating the purpose of testing entirely. Another option would be to change the access level on the classes themselves, but that would involve a lot of overhead and fiddling with the code already in place. The idea of writing an interface has also come up, but creating a whole structure of interfaces for the sake of testing hasn't flown with management.
Am I just missing something here? What would be the best way to make sure those underlying classes are indeed functioning correctly without changing the access to them?

I'm not sure if you're referring to .NET/C# projects, but if so, you can add the InternalsVisibleTo attribute to the AssemblyInfo.cs file to expose your internal classes to the unit test assembly.
Let's say you create a unit test project called "MyApplication.Tests", add this to the "MyApplication" project AssemblyInfo.cs file (located under "Properties"):
[assembly: InternalsVisibleTo("MyApplication.Tests")]
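One caveat: if your assemblies are strong-named, the attribute string must also include the full public key of the test assembly, not just its name.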

You can also make a subclass of the test-subject that is in the same namespace as the test-subject, and the subclass could expose whatever features necessary for testing.
Assuming you have some way of giving this subclass a "test" scope, you're home free. (You don't want this class in your regular code, since it breaks encapsulation.)
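The question is about .NET, but the technique itself is language-neutral. A minimal C++ sketch of such a test-only subclass (all names invented for illustration):
class Widget {
protected:
    int computeInternal(int x) { return x * 2; }  // hidden logic we want to test
};

// Lives only in the test code, never in the production build.
class TestableWidget : public Widget {
public:
    using Widget::computeInternal;  // re-expose the protected member to the test
};

// In a test: TestableWidget w; assert(w.computeInternal(2) == 4);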

I think your unit tests shouldn't require any changes to the source code, so the first answer certainly works. Have you considered using Reflection? I think it gets around changing the source code; there's a good discussion of this here: CodeProject

Faking a Method of the Object Under Test

Is there a reason why you shouldn't create a partial fake of an object, or just fake one method on the object that you are testing, for the sake of testing another method? This might be helpful to save you from making an entire new mock object, or when there is an external dependency in the method you are faking which you can't reasonably get rid of and would like to keep out of all the other unit tests.
The objects you want to do this for are trying to do too many things. In particular, if you have an external dependency, you would normally create an object to isolate that dependency. The Façade pattern is one example of this. If your objects weren't designed with testability in mind, you may have to do some refactoring. Take a look at Michael Feathers' PDF on working with legacy code. He also has a book by the same title that goes into much more detail.
It is a very bad idea to mock/fake part of a class to test another.
Doing this, you are not testing what the real code does under the conditions of the test, which leads to unreliable test results.
It also increases the maintenance burden of the faked part of the class, and if the fake is in effect for the whole test program, it makes other tests of the faked method harder to write.
You need to ask yourself why you need to fake out the part under test.
If it is because the method is accessing a file or database, then you should define an interface and pass an instance of that interface to the class constructor or method. This allows you to test different scenarios in the same test application.
If it is because you are using singletons, you should rethink your design to make it more testable: removing singletons will remove implicit dependencies and maintenance nightmares.
If you are using static methods/free-standing functions to access data in a registry or settings file, you should really move that out of the function under test and pass the data as a parameter or provide a settings provider interface. This will make the code more flexible and robust.
If it is to break a dependency for the purpose of testing (e.g. faking out a vector method to test a method in a matrix class), then you should not be faking that -- you should treat the code under test as what is defined by its public interface: methods, pre-conditions, post-conditions, invariants, documentation, parameters, and exception specifications.
You can use knowledge of the implementation details to test special edge cases, but trigger those through the main API, not by faking an implementation detail.
For example, suppose you faked std::vector::at(), but the implementation switched to use operator[] instead. Your test would either break, or silently pass without exercising the behavior you meant to check.
If the method you want to fake is virtual (as in, not static and not final), then you can subclass your object in your test, override the method in the subclass, and exercise the subclass in the test. No mock-object libraries required.
(Ideally you should consider refactoring; this is not a great long-term solution. But it is a way to get legacy code under test so you can start the refactoring process more easily.)
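A minimal C++ sketch of the subclass-and-override idea (names invented for illustration):
#include <cassert>
#include <string>

class ReportGenerator {
public:
    virtual ~ReportGenerator() = default;
    // The method under test; it calls a virtual dependency internally.
    std::string buildReport() { return "Report: " + loadData(); }
protected:
    // The slow/external piece we want to fake in tests.
    virtual std::string loadData() { return "live data"; /* imagine a real DB call */ }
};

// Test-only subclass: override the dependency, exercise the rest for real.
class FakeDataReportGenerator : public ReportGenerator {
protected:
    std::string loadData() override { return "fake data"; }
};

int main() {
    FakeDataReportGenerator gen;
    assert(gen.buildReport() == "Report: fake data");
}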
The Extract and Override technique described in Chapter 3 of Roy Osherove's The Art of Unit Testing does seem to be a way to fake part of the class under test (pp. 71-77). Osherove does not address the concerns raised in some of the other answers to this question.
In addition, Michael Feathers discusses this in Working Effectively with Legacy Code. He terms the resulting class a testing subclass (p. 227) and the technique Subclass and Override Method (p. 401). Now, granted, Feathers is not giving an exposition of pristine techniques that are recommended for new code. But he still gives it serious treatment as a potentially helpful technique.
I also asked my former computer professor about this. He is well-read and currently works full-time in the software industry, where he has advanced rapidly. He said that this technique definitely has a good application, and that there are several dozen classes in the codebase at his company that are under test in this way. He said that, like any technique, it can be overused.
I originally wrote the question when I was new to unit testing and knew next to nothing about dependency injection. Now, after some experience with both, I would add that the need to use this testing technique could be a smell. It may be a sign that you need to rework your approach to dependencies. If the method that needs to be faked is one that is inherited from a base class, it may mean that you need to take the adage "favor composition over inheritance" more seriously. You should inject your dependencies rather than inheriting them.
There are some really nice packages for facilitating this kind of stuff. For instance, from the Mockito docs:
import static org.mockito.Mockito.*;
import java.util.LinkedList;

// You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);

// stubbing
when(mockedList.get(0)).thenReturn("first");
does some real magic that's hard to believe at first. When you call
String firstMember = (String) mockedList.get(0);
you'll get back "first", because of what you said in the "when" statement.

Do I only have to mock out external dependencies in a unit test? What's about internal dependencies?

What if the method I want to test has a dependency on another class within the same assembly? Do I have to mock out that dependency to make sure I'm testing only one thing, and therefore writing a unit test rather than an integration test?
Is an integration test a test that exercises dependencies in general, or do I have to distinguish between internal and external dependencies?
An example would be a method that has 2000 lines of code with 5 method invocations (all methods coming from the same assembly).
Generally a proper unit test is testing only that single piece of code. So a scenario like this is where you start to ask yourself about the coupling of these two classes. Does Class A internally depend on the implementation of Class B? Or does it just need to be supplied an instance of Type B (notice the difference between a class and a type)?
If the latter, then mock it because you're not testing Class B, just Class A.
If the former, then it sounds like creating the test has identified some coupling that can (perhaps even should) be refactored.
Edit: (in response to your comment) I guess a key thing to remember while doing this (and retro-fitting unit tests into a legacy system is really, really difficult) is to mentally separate the concepts of a class and a type.
The unit tests are not for Class A, they are for Type A. Class A is an implementation of Type A which will either pass or fail the tests. Class A may have an internal dependency on Type B and need it to be supplied, but Type A might not. Type A is a contract of functionality, which is further expressed by its unit tests.
Does Type A specify in its contract that implementations will require an instance of Type B? Or does Class A resolve an instance of it internally? Does Type A need to specify this, or is it possible that different implementations of Type A won't need an instance of Type B?
If Type A requires an instance of Type B, then it should expose this externally and you'd supply the mock in your tests. If Class A internally resolves an instance of Type B, then you'd likely want to be using an IoC container where you'd bootstrap it with the mock of Type B before running the tests.
Either way, Type B should be a mock and not an implementation. It's just a matter of breaking that coupling, which may or may not be difficult in a legacy system. (And, additionally, may or may not have a good ROI for the business.)
Working with a code base like the one you're describing isn't easy: multiple problems are combined into something you don't know how to start changing, there are strong dependencies between classes as well as between problems, and maybe even no overall design.
In my experience, this takes a lot of effort and time, as well as skill in doing this kind of work. A very good resource for learning how to work with legacy code is Michael Feathers' book Working Effectively with Legacy Code.
In short, there are safe refactorings you can do without risking breaking things, which might help you get started. There are also other refactorings which require tests to protect how things work. Tests are essential when refactoring code. This doesn't, of course, come with a 100% guarantee that things won't break, because there might be hidden "features" and complexity you cannot be aware of when you start. Depending on the code base, the amount of work varies greatly, but for large code bases it is usually substantial.
You'll need to understand what the code does, either by already knowing it or by finding out. In either case, you start by writing "larger" tests which are not really unit tests; they just protect the current code. They might cover larger parts, more like integration/functional tests. These are your guards when you start to refactor. When you have such tests in place and you feel comfortable with what the code does, you can start refactoring the parts the "larger" tests cover. For the smaller parts you change, you write proper unit tests. Iterating through various refactorings will at some point make the initial large tests unnecessary, because you now have a much better code base and unit tests (or you can simply keep them as functional tests).
Now, coming back to your question.
I understand what you mean with your question, but I'd still like to change it slightly, because there are more important aspects than external versus internal. I believe a better question to ask is: which dependencies do I need to break to get a better design and to write unit tests?
The answer is that you should break all dependencies you are not in control of, that are slow or non-deterministic, or that pull in too much state for a single unit test. These are for sure all the external ones (filesystem, printer, network, etc.). Also note that multi-threading is not suitable for unit tests, because it is not deterministic. For internal dependencies I assume you mean classes with members, or functions calling other functions. The answer here is: maybe. You need to decide if you are in control and if the design is good. Probably in your case you are not in control and the code is not good, so you need to refactor to get things under control and into a better design. Michael Feathers' book is great here, but you need to find out how to apply the techniques to your own code base, of course.
One very good technique for breaking dependencies is dependency injection. In short, it changes the design so that you pass in the members a class uses instead of letting the class instantiate them itself. You define an interface (abstract base class) for each dependency you pass in, so you can easily change what you pass. For instance, this lets a class use different member implementations in production and under unit test. This is a great technique and also leads to good design if used wisely.
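A minimal C++ sketch of constructor injection, with invented names:
#include <cassert>
#include <string>

// Abstract base class (the interface) for the dependency.
struct Clock {
    virtual ~Clock() = default;
    virtual int currentHour() const = 0;
};

// Production implementation; a fixed value stands in for a real system call here.
struct SystemClock : Clock {
    int currentHour() const override { return 9; /* imagine querying the OS */ }
};

// Test implementation: fully deterministic.
struct FixedClock : Clock {
    explicit FixedClock(int hour) : hour_(hour) {}
    int currentHour() const override { return hour_; }
    int hour_;
};

// The class under test receives its dependency instead of instantiating it.
class Greeter {
public:
    explicit Greeter(const Clock& clock) : clock_(clock) {}
    std::string greeting() const {
        return clock_.currentHour() < 12 ? "Good morning" : "Good afternoon";
    }
private:
    const Clock& clock_;
};

int main() {
    FixedClock evening(18);
    Greeter greeter(evening);
    assert(greeter.greeting() == "Good afternoon");  // no real clock involved
}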
Good luck and take your time! ;)
Generally speaking, a method with 2000 lines of code is just plain BAD. I usually start to look for reasons to make new classes -- not even methods, but classes -- when I have to use the Page Down key more than three or four times to browse through it (and collapsible regions don't count).
So, yes, you do need to get rid of dependencies from outside and inside of the assembly, and you need to think about the responsibility of the class. It sounds like this one has way too much weight on its shoulders, and it sounds like it is very close to impossible to write unit tests for. If you think testability, you will automatically start to inject dependencies, and downsize your classes, and BAM!!! There you have it; nice and pretty code!! :-)

How to determine if an existing class can be unit-tested?

Recently, I took ownership of some C++ code. I am going to maintain this code and add new features later on.
I know many people say that it is usually not worth adding unit tests to existing code, but I would still like to add some tests which will at least partially cover the code. In particular, I would like to add tests which reproduce bugs which I fixed.
Some of the classes are constructed with some pretty complex state, which can make it more difficult to unit-test.
I am also willing to refactor the code to make it easier to test.
Is there any good article you recommend on guidelines which help to identify classes which are easier to unit-test? Do you have any advice of your own?
While Martin Fowler's book on refactoring is a treasure trove of information, why not take a look at "Working Effectively with Legacy Code."
Also, if you're going to be dealing with classes where there are tons of global variables or huge amounts of state transitions, I'd put in a lot of integration checks. Separate out as much of the code that interacts with the code you're refactoring as you can, to make sure that all expected inputs, in the order they are received, continue to produce the same outputs. This is critical, as it's very easy to "fix" a subtle bug that might have been addressed somewhere else.
Take notes too. If you do find that there is a bug which another function/class expects and handles properly you'll want to change both at the same time. That's difficult unless you keep thorough records.
Presumably the code was written for a purpose, and a unit test will check whether that purpose is met, i.e. that the pre-conditions and post-conditions hold for the methods.
If the public class methods are such that you can externally check the state it can be unit tested easily enough (black-box test). If the class state is invisible or if you have to test tricky private methods, your test class may need to be a friend (white-box test).
A class that is hard to unit test will be one that
Has enormous dependencies, i.e. tightly coupled
Is intended to work in a high-volume or multi-threaded environment. There you would use a system test rather than a unit test and the actual output may not be totally determinate.
I've written a fair number of blog posts about unit testing non-trivial C++ code: http://www.lenholgate.com/blog/2004/05/practical-testing.html
I've also written quite a lot about adding tests to existing code: http://www.lenholgate.com/blog/testing/
Almost everything can and should be unit tested. If not directly, then by using mock classes.
Since you decided to refactor your classes, try to use BDD or TDD approach.
To prevent breaking existing functionality, the only way is to have good integration tests, but usually it takes time to execute them all for a complex system.
Without more details on what you are doing, it is not that easy to give more implementation details. Some are:
use MVP or Presenter First for developing GUIs
use design patterns where appropriate
use function and member pointers, or observer design pattern to break dependencies
I think that if you're having to come up with some "measure" to test if a class is testable, you're already fscked. You should be able to tell just by looking at it: can you write an independent program that links to this class alone and makes sure it works?
If a class is too huge so that you can't be sure just by looking at it...chances are it probably isn't testable. People that don't know how to make small, distinct interfaces generally don't know how to adhere to any other principle either.
In the end though, the way to find out if a class is testable is to try to put it in a harness. If you end up having to pull in half your program to do it, try refactoring. If you find that you can't even perform the most basic refactor without having to rewrite the entire program, analyze the expense of doing so.
We at IPL published a paper, "It's testing Jim, but not as we know it", which explores the practical problems of testing C++ and suggests some techniques to address them that may well be of use given your question. These techniques are also well supported in Cantata++ -- our C/C++ unit and integration testing tool.

How do I prevent my unit tests from requiring knowledge about implementation internals when using mock objects?

I'm still in the learning stages regarding unit testing, and in particular regarding mocking (I'm using the PascalMock and DUnit frameworks). One thing I stumbled over is that I couldn't find a way around hard-coding implementation details of the tested class/interface into my unit test, and that just feels wrong...
For example: I want to test a class that implements a very simple interface for reading and writing application settings (basically name/value pairs). The interface that is presented to the consumer is completely agnostic to where and how the values are actually stored (e.g. registry, INI-file, XML, database, etc.). Naturally, the access layer is implemented by yet a different class that gets injected into the tested class on construction. I created a mock object for this access layer and I am now able to fully test the interface-implementing class without actually reading or writing anything to any registry/INI-file/whatever.
However, in order to ensure the mock behaves exactly like the real thing when accessed by the tested class, my unit tests have to set up the mock object by very explicitly defining expected method calls and the return values expected by the tested class. This means that if I ever have to make changes to the interface of the access layer, or to the way the tested class uses that layer, I will also have to change the unit tests for the class that internally uses that interface, even though the interface of the class I'm actually testing hasn't changed at all. Is this something I will just have to live with when using mocks, or is there a better way to design the class dependencies that would avoid this?
to ensure the mock behaves exactly like the real thing when accessed by the tested class, my unit tests have to set up the mock object by very explicitly defining expected method calls and the return values expected by the tested class.
Correct.
changes to the interface of the access layer or to the way that the tested class uses that layer I will also have to change the unit tests
Correct.
even though the interface of the class I'm actually testing hasn't changed at all.
"Actually testing"? You mean the exposed interface class? That's fine.
If you change the way the "tested" (interface) class uses the access layer, you've changed the internal interface to the access layer. Interface changes (even internal ones) require test changes, and may lead to breakage if you've done something wrong.
Nothing wrong with this. Indeed, the whole point is that any change to the access layer must require changes to the mocks to assure that the change "works".
Testing is not supposed to be "robust". It's supposed to be brittle. If you make a change that alters internal behavior, then things can break. If your tests were too robust they wouldn't test anything -- they'd just work. And that's wrong.
Tests should only work for the exact right reason.
Is this something I will just have to live with when using mocks or is there a better way to design the class-dependencies that would avoid this?
A lot of times mocks (particularly sensitive frameworks like JMock) force you to account for details that don't relate directly to the behavior you're trying to test, and sometimes this can even be helpful by exposing suspect code that is doing too much and has too many calls/dependencies.
However in your case, if I read your description right, it sounds like you really don't have a problem. If you design the read/write layer correctly and with an appropriate level of abstraction, you shouldn't have to change it.
This means that if I should ever have to make changes to the interface of the access layer or to the way that the tested class uses that layer I will also have to change the unit tests for the class that internally uses that interface even though the interface of the class I'm actually testing hasn't changed at all.
Isn't the point of writing the abstracted access layer to avoid this? In general, following the Open/Closed principle, an interface of this sort shouldn't change and shouldn't break the contract with the class that consumes it, and by extension it won't break your unit tests either. Now if you change the order of the method calls, or have to make new calls to the abstracted layer, then, yes, particularly with some frameworks, your mock expectations will break. This is just part of the cost of using mocks, and it's perfectly acceptable. But the interface itself should, in general, remain stable.
Just to put some names to your example,
RegistryBasedDictionary implements the Role (interface) Dictionary.
RegistryBasedDictionary has a dependency on the Role RegistryAccessor, implemented by RegistryWinAPIWrapper.
You are currently interested in testing RegistryBasedDictionary. The unit tests would inject a mock dependency for the RegistryAccessor Role and would test the expected interaction with the dependencies.
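To make that concrete, here is a hand-rolled sketch in C++ rather than the question's Delphi; the shape is the same, and only the class bodies are invented:
#include <cassert>
#include <string>

// Role: low-level registry access.
struct RegistryAccessor {
    virtual ~RegistryAccessor() = default;
    virtual std::string readValue(const std::string& name) = 0;
};

// Role: the dictionary the rest of the application consumes.
struct Dictionary {
    virtual ~Dictionary() = default;
    virtual std::string get(const std::string& name) = 0;
};

// Class under test: depends only on the RegistryAccessor role.
class RegistryBasedDictionary : public Dictionary {
public:
    explicit RegistryBasedDictionary(RegistryAccessor& accessor) : accessor_(accessor) {}
    std::string get(const std::string& name) override { return accessor_.readValue(name); }
private:
    RegistryAccessor& accessor_;
};

// Hand-rolled mock: records the interaction and returns a canned value.
struct MockRegistryAccessor : RegistryAccessor {
    std::string lastName;
    std::string readValue(const std::string& name) override {
        lastName = name;
        return "42";
    }
};

int main() {
    MockRegistryAccessor mock;
    RegistryBasedDictionary dict(mock);
    assert(dict.get("timeout") == "42");  // the behavior we care about
    assert(mock.lastName == "timeout");   // the expected interaction, and no more
}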
The trick to avoiding unnecessary test maintenance is to "specify precisely what should happen... and no more" (from the GOOS book, a must-read for mock-flavored TDD). So if the order of dependency method calls doesn't matter, don't specify it in the test. That leaves you free to change the order of calls in the implementation.
Design the Roles such that they do not contain any leaks from the actual implementations - keep the Roles implementation-agnostic.
The only reason to change the RegistryBasedDictionary tests would be a change in the behavior of RegistryBasedDictionary, not in any of its dependencies. If its interaction with its dependencies or the roles/contracts change, the tests would need to be updated. That is the price you pay for interaction-based tests, in exchange for the flexibility to test in isolation. In practice, however, it doesn't happen that often.

Should non-public functions be unit tested and how?

I am writing unit tests for some of my code and have run into a case where I have an object with a small exposed interface but complex internal structures, as each exposed method runs through a large number of internal functions, including dependencies on the object's state. This makes the methods on the external interface quite difficult to unit test.
My initial question is should I be aiming to unit test these internal functions as well, as they are simpler and hence easier to write tests for? My gut feeling says yes, which leads to the follow-up question of if so, how would I go about doing this in C++?
The options I've come up with are to change these internal functions from private to protected and use either a friend class or inheritance to access them. Is this the best/only method of doing this while preserving some of the semantics of keeping the internal methods hidden?
If your object is performing highly complex operations that are extremely hard to test through the limited public interface, an option is to factor out some of that complex logic into utility classes that encapsulate specific tasks. You can then unit test those classes individually. It's always a good idea to organize your code into easily digestible chunks.
Short answer: yes.
As to how, I caught a passing reference on SO a few days ago:
#define private public
in the unit testing code evaluated before the relevant headers are read...
Likewise for protected.
Very cool idea.
Slightly longer answer: Test if the code is not obviously correct. Which means essentially any code that does something non-trivial.
On consideration, I am wondering about this. You may not be able to link against the same object files that you use in the production build; redefining a keyword is formally undefined behavior, and some compilers (MSVC, notably) encode access levels in their mangled member names, so the trick can change the symbols a translation unit exports. Now, unit testing is a bit of an artificial environment, so perhaps this is not a deal-breaker. Can anyone shed some light on the pros and cons of this trick?
My feeling personally is that if testing the public interface is not sufficient to adequately test the private methods, you probably need to decompose the class further. My reasoning is: private methods should be doing only enough to support all use-cases of the public interface.
But my experience with unit testing is (unfortunately) slim; if someone can provide a compelling example where a large chunk of private code cannot be separated out, I'm certainly willing to reconsider!
There are several possible approaches. Presuming your class is X:
Only use the public interface of X. You will have extensive setup problems and may need a coverage tool to make sure that your code is covered, but there are no special tricks involved.
Use the "#define private public" or similar trick to link against a version of X.o that is exposed to everyone.
Add a public "static X::unitTest()" method. This means that your code will ship linked to your testing framework. (However, one company I worked with used this for remote diagnostics of the software.)
Add "class TestX" as a friend of X. TestX is not shipped in you production dll/exe. It is only defined in your test program, but it has access to X's internals.
Others...
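A minimal sketch of the friend-class approach, with invented members:
#include <cassert>

class X {
public:
    void poke() { ++counter_; }
private:
    friend class TestX;  // the one line of test support that ships with X
    int counter_ = 0;
};

// Defined only in the test program, not in the production dll/exe.
class TestX {
public:
    static void testPoke() {
        X x;
        x.poke();
        assert(x.counter_ == 1);  // direct access to X's internals
    }
};

int main() { TestX::testPoke(); }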
My opinion is no, generally they should not be tested directly.
Unit tests are white-box from the higher perspective of the system, but they should be black-box from the perspective of the tested class's interface (its public methods and their expected behavior).
So for example, a string class (that wouldn't need legacy char* support):
you should verify that its length() method is working correctly.
you should not have to verify that it puts the '\0' char at the end of its internal buffer. This is an implementation detail.
This allows you to refactor the implementation almost without touching the tests later on
This helps you reduce coupling by enforcing class responsibilities
This makes your tests easier to maintain
The exception is for quite complex helper methods you would like to verify more thoroughly.
But then, this may be a hint that this piece of code should be made "official" by making it public static, or extracted into its own class with public methods.
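As a tiny C++ illustration of that black-box principle (MyString is invented for the example):
#include <cassert>

class MyString {
public:
    MyString(const char* s) : len_(0) {
        while (s[len_] != '\0' && len_ < 255) { buf_[len_] = s[len_]; ++len_; }
        buf_[len_] = '\0';  // implementation detail: a NUL-terminated buffer
    }
    unsigned length() const { return len_; }
private:
    char buf_[256];
    unsigned len_;
};

int main() {
    // Test observable behavior only; the '\0' inside buf_ is nobody's business.
    assert(MyString("hello").length() == 5);
    assert(MyString("").length() == 0);
}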
I would say to use a code coverage tool to check whether these functions are already tested somehow.
Theoretically if your public API is passing all the tests then the private functions are working fine, as long as every possible scenario is covered. That's the main problem, I think.
I know there are tools for that which work with C/C++. CoverageMeter is one of them.
Unless you're making a general purpose library you should try and limit what you have built to what you will be using. Extend as you need it.
As such, you should have full code coverage, and you should test it all.
Perhaps your code is a bit smelly? Is it time to refactor?
If you have a large class doing lots of things internally, perhaps it should be broken into several smaller classes with interfaces you could test separately.
I've always thought this tends to fall into place if you use test-driven development. There are two ways of approaching the development: either you start with your public interface and write a new test before each addition to the complex private methods, or you start off working on the complex stuff as public, then refactor the code to make those methods private and rework the tests you've already written to use the new public interface. Either way you should get full coverage.
Of course, I've never managed to write a whole app (or even class) in a strict TDD way, and refactoring the complex stuff into utility classes is the way to go if possible.
You could always use a compile switch around the private: like
#if defined(UNIT_TEST)
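For instance, a sketch of how that might look (UNIT_TEST being whatever symbol your test build defines):
class Gadget {
#if defined(UNIT_TEST)
public:   // the test build sees the internals
#else
private:  // the production build keeps them hidden
#endif
    int internalState = 0;

public:
    void update() { ++internalState; }
};

#if defined(UNIT_TEST)
#include <cassert>
int main() {
    Gadget g;
    g.update();
    assert(g.internalState == 1);  // direct access, test builds only
}
#endif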
Or, with code coverage tools, verify that your unit testing of your public functions fully exercises the private ones.
Yes, you should. It's called white-box testing, which means that you have to know a lot about the internals of the program to test it properly.
I would create public 'stubs' that call the private functions for testing. #ifdef the stubs so that you can compile them out when the testing is complete.
You might find it productive to write a test program. In the test program, create a class which uses the class to be tested as a base.
Add methods to your new class to test the functions that aren't visible at the public interface. Have your test program simply call these methods to validate the functions you are concerned about.
If your class is performing complex internal calculations, a utility class or even an external function may be the best option for breaking out the calculation. But if the object has a complex internal structure, the object should have consistency-check functions. I.e., if the object represents a specialized tree, the class should have methods to check that the tree is still correct. Additional functions, such as tree depth, are often useful to users of the class. Some of these functions may be declared inside #ifdef DEBUG or similar constructs to limit run-time space in embedded applications. Using internal functions that are only compiled when DEBUG is set is much better from an encapsulation standpoint: you aren't breaking encapsulation, and the implementation-dependent tests are kept with the implementation, so it is obvious that the tests need to change when the implementation changes.
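A small C++ sketch of such a debug-only consistency check (the toy tree is invented for illustration):
#include <cassert>

// A toy binary search tree node with a debug-only invariant check.
struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;

#ifdef DEBUG
    // Compiled into debug builds only: verifies the BST ordering invariant.
    bool isConsistent() const {
        if (left && !(left->value < value && left->isConsistent())) return false;
        if (right && !(value < right->value && right->isConsistent())) return false;
        return true;
    }
#endif
};

int main() {
    Node root{10};
    Node child{5};
    root.left = &child;
#ifdef DEBUG
    assert(root.isConsistent());  // tests (or the class itself) can call this
#endif
}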