C++ Unit Tests, mocking objects

I'm currently looking at some unit test libraries in C++ and have some questions:
there seems to be no mocking facility in Boost.Test, but I can hardly imagine doing unit tests without creating mock objects/functions. How would you do that in Boost.Test? Are you doing it manually (how? I mean, there are several ways I can think of, and none of them seems nice), or are you simply doing without mock objects?
googletest and googlemock look like nice libraries with mocking support; however, they require every method that shall be mocked to be virtual. I don't really like this. It's not that I'm worried about the performance (I could define a macro to strip it out of production code anyway), but I find it very intrusive. I wonder if there's another solution which does not require that much change to the existing code? (I love Clojure there.)

Boost::Test does not have a mocking framework or library. If you want mocks, you have to do it yourself, or use something like GMock. Of course, you could use google mock with Boost::Test without problems.
How else would you expect something to be mockable? That's how it works in every other programming language! (Okay, not in languages with duck typing, but duck typing carries more overhead than virtual methods.) If you're concerned about performance:
Implement everything in terms of virtuals as specified in the general google mock docs.
Profile your code for places where that's not sufficient
Replace those profiled sections (or rather, the segments of your code which the profile shows are a problem) with high-perf dependency injection instead (a sketch contrasting the two styles follows this list).
Don't replace everything with high-perf DI, because that would send compile times through the roof.
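To make the trade-off concrete, here is a minimal sketch (not taken from the Google Mock docs; the Database/Report names are made up) contrasting the two styles: an interface-based dependency that Google Mock can mock, and a template-based "high-perf" version of the same class.

#include <cassert>

// Virtual-interface DI: mockable with Google Mock, one virtual call per use.
struct Database {
    virtual ~Database() = default;
    virtual int fetch(int key) = 0;
};

class ReportV {
    Database& db_;
public:
    explicit ReportV(Database& db) : db_(db) {}
    int total(int key) { return db_.fetch(key) * 2; }
};

// "High-perf" DI: the dependency is a template parameter, so calls can be
// inlined and there is no vtable, at the cost of longer compile times.
template <class Db>
class ReportT {
    Db& db_;
public:
    explicit ReportT(Db& db) : db_(db) {}
    int total(int key) { return db_.fetch(key) * 2; }
};

// A hand-written fake satisfies either style.
struct FakeDb : Database {
    int fetch(int) override { return 21; }
};

int main() {
    FakeDb fake;
    ReportV rv(fake);            // dynamic dispatch
    ReportT<FakeDb> rt(fake);    // static dispatch
    assert(rv.total(1) == 42);
    assert(rt.total(1) == 42);
}

The template version avoids the virtual call, but it must be instantiated for every concrete dependency, which is where the compile-time cost comes from.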
In all seriousness though, I don't think the virtual calls are going to make a huge difference in performance. The one case where virtuals really hurt is when they sit inside inner loops (such as in the iostream library, where they may be called for every character of input or output), and even then only in performance-sensitive code.
EDIT: I missed the very important word "not" in question #2 above -- that you're not worried about performance. If that's the case then my answer is: you're effectively screwed. A non-virtual function or method call in C++ compiles to a direct call, and there's no opportunity for you to change where that call points. In most cases introducing virtuals doesn't require too much code change, because correct C++ code passes objects by reference wherever possible, and those call sites won't need to be modified. You will have to watch out, however, for anyone using value semantics, because they will be subject to the slicing problem.
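To illustrate the slicing problem just mentioned, here is a tiny sketch with made-up names: passing by value copies only the base subobject, so the mock's override is lost, while passing by reference preserves dynamic dispatch.

#include <cassert>
#include <string>

struct Logger {
    virtual ~Logger() = default;
    virtual std::string tag() const { return "real"; }
};

struct MockLogger : Logger {
    std::string tag() const override { return "mock"; }
};

std::string byValue(Logger l)            { return l.tag(); }  // slices the argument
std::string byReference(const Logger& l) { return l.tag(); }  // dynamic dispatch

int main() {
    MockLogger mock;
    assert(byValue(mock) == "real");        // the override was sliced away
    assert(byReference(mock) == "mock");    // reference semantics keep the mock
}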

Turtle was designed explicitly for use with Boost.Test and looks very nice to me.

Disclaimer: I work at Typemock.
Typemock Isolator++ can mock anything! You don't need virtual methods; everything is mockable.
See the explanation in the Typemock documentation.
So you can fake public, private, abstract (without actually creating a concrete class) and non-virtual methods, out arguments, live instances, etc.
And...
It fakes everything recursively. For example, given this class:
class MyClass
{
public:
    int GetResult() { return -1; }
};
We'll use the following code
MyClass* fakeMyClass = FAKE<MyClass>(); // every call to fakeMyClass will be faked
WHEN_CALLED(fakeMyClass->GetResult()).Return(10);

Related

Should I test inheritance and implementation?

Are unit tests supposed to check whether a class implements an interface, using reflection (same question for inheritance)? If not, why not?
If the implementation is removed, the code may still compile, and the tests might still be successful (it depends on what the code does).
Unit tests should test anything that may not work. If the programming language doesn't ensure a class implements all methods in a contract, then you'd probably want to check this in tests.
You should test what is important to your code. Things like inheritance and interfaces are at best implementation details, which should be hidden behind the observable behaviour and results that your tests check.
That is to say, if your code passes without the inheritance, it probably didn't need the inheritance and it should be cleaned up as cruft.
A few times in my career I had a situation where I wrote this kind of unit test, mostly for testing conventions.
E.g. there was an implicit assumption that all unit tests should inherit from BaseTest (although technically everything worked fine without this, we wanted that for the sake of coherence) and we had a unit test that enforced exactly that :).
So yes, that makes perfect sense, if necessary.

Faking a Method of the Object Under Test

Is there a reason why you shouldn't create a partial fake of an object, or just fake one method on the object that you are testing, for the sake of testing another method? This might be helpful to save you from creating an entirely new mock object, or when there is an external dependency in the method you are faking which you can't reasonably get rid of and would like to keep out of all the other unit tests.
The objects you want to do this for are trying to do too many things. In particular, if you have an external dependency, you would normally create an object to isolate that dependency. The Façade pattern is one example of this. If your objects weren't designed with testability in mind you may have to do some refactoring. Take a look at Michael Feathers' PDF on working with legacy code. He also has a book by the same title that goes into much more detail.
It is a very bad idea to mock/fake part of a class to test another.
Doing this, you are not testing what the real code does in the conditions under test leading to unreliable test results.
It also increases the maintenance burden of the faked part of the class. If this is in effect for the whole test program, the fake implementation also makes other tests on the faked method harder.
You need to ask yourself why you need to fake out the part under test.
If it is because the method is accessing a file or database, then you should define an interface and pass an instance of that interface to the class constructor or method. This allows you to test different scenarios in the same test application.
If it is because you are using singletons, you should rethink your design to make it more testable: removing singletons will remove implicit dependencies and maintenance nightmares.
If you are using static methods/free-standing functions to access data in a registry or settings file, you should really move that out of the function under test and pass the data as a parameter or provide a settings provider interface. This will make the code more flexible and robust.
If it is to break a dependency purely for the purpose of testing (e.g. faking out a vector method to test a method in a matrix class), then you should not be faking that -- you should treat the code under test as whatever the class under test defines through its public interface: methods, pre-conditions, post-conditions, invariants, documentation, parameters and exception specifications.
You can use knowledge of the implementation details to test special edge cases, but trigger those through the main API, not by faking an implementation detail.
For example, suppose you faked std::vector::at(), but the implementation switched to use operator[] instead. Your test would either break or keep passing without actually exercising the behaviour it was written to check.
If the method you want to fake is virtual (as in, not static and not final), then you can subclass your object in your test, override the method in the subclass, and exercise the subclass in the test. No mock-object libraries required.
(Ideally you should consider refactoring, this is not a great long-term solution. But it is a way to get legacy code under test so you can start the refactoring process more easily.)
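As a minimal sketch of that subclass-and-override approach (all names here are hypothetical), assuming the inconvenient method is virtual:

#include <cassert>
#include <string>

class ReportGenerator {
public:
    virtual ~ReportGenerator() = default;

    std::string BuildReport() {                 // the method under test
        return "Report: " + FetchData();
    }

protected:
    virtual std::string FetchData() {           // talks to a database in production
        return "live data";
    }
};

// Testing subclass: overrides only the inconvenient method.
class TestableReportGenerator : public ReportGenerator {
protected:
    std::string FetchData() override { return "canned data"; }
};

int main() {
    TestableReportGenerator gen;
    assert(gen.BuildReport() == "Report: canned data");
}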
The Extract and Override technique described in Chapter 3 of Roy Osherove's The Art of Unit Testing does seem to be a way to fake part of the class under test (pp. 71-77). Osherove does not address the concerns raised in some of the other answers to this question.
In addition, Michael Feathers discusses this in Working Effectively with Legacy Code. He terms the resulting class a testing subclass (227) and the technique Subclass and Override Method (401). Now, granted, Feathers is not giving an exposition of pristine techniques that are recommended on new code. But he still gives it serious treatment as a potentially helpful technique.
I also asked my former computer professor about this. He is well-read and currently works full-time in the software industry, where he has advanced rapidly. He said that this technique definitely has a good application, and that there are several dozen classes in the codebase at his company that are under test in this way. He said that, like any technique, it can be overused.
I originally wrote the question when I was new to unit testing and knew next to nothing about dependency injection. Now, after some experience with both, I would add that the need to use this testing technique could be a smell. It may be a sign that you need to rework your approach to dependencies. If the method that needs to be faked is one that is inherited from a base class, it may mean that you need to take the adage "favor composition over inheritance" more seriously. You should inject your dependencies rather than inheriting them.
There are some really nice packages for facilitating this kind of stuff. For instance, from the Mockito docs:
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
Mockito does some real magic that's hard to believe at first. When you call
String firstMember = mockedList.get(0);
you'll get back "first", because of what you said in the "when" statement.

Do I only have to mock out external dependencies in a unit test? What's about internal dependencies?

Do I only have to mock out external dependencies in a unit test?
What if the method that I want to test has a dependency on another class within the same assembly? Do I have to mock out that dependency to be sure I'm testing only one thing, and therefore to end up with a unit test instead of an integration test?
Is an integration test a test that exercises dependencies in general, or do I have to differentiate between internal and external dependencies?
An example would be a method that has 2000 lines of code with 5 method invocations (all methods coming from the same assembly).
Generally a proper unit test is testing only that single piece of code. So a scenario like this is where you start to ask yourself about the coupling of these two classes. Does Class A internally depend on the implementation of Class B? Or does it just need to be supplied an instance of Type B (notice the difference between a class and a type)?
If the latter, then mock it because you're not testing Class B, just Class A.
If the former, then it sounds like creating the test has identified some coupling that can (perhaps even should) be re-factored.
Edit: (in response to your comment) I guess a key thing to remember while doing this (and retro-fitting unit tests into a legacy system is really, really difficult) is to mentally separate the concepts of a class and a type.
The unit tests are not for Class A, they are for Type A. Class A is an implementation of Type A which will either pass or fail the tests. Class A may have an internal dependency on Type B and need it to be supplied, but Type A might not. Type A is a contract of functionality, which is further expressed by its unit tests.
Does Type A specify in its contract that implementations will require an instance of Type B? Or does Class A resolve an instance of it internally? Does Type A need to specify this, or is it possible that different implementations of Type A won't need an instance of Type B?
If Type A requires an instance of Type B, then it should expose this externally and you'd supply the mock in your tests. If Class A internally resolves an instance of Type B, then you'd likely want to be using an IoC container where you'd bootstrap it with the mock of Type B before running the tests.
Either way, Type B should be a mock and not an implementation. It's just a matter of breaking that coupling, which may or may not be difficult in a legacy system. (And, additionally, may or may not have a good ROI for the business.)
Working with a code base like the one you're describing isn't easy: multiple problems are combined into something you don't know how to start changing. There are strong dependencies between classes as well as between problems, and maybe even no overall design.
In my experience, this takes a lot of effort and time as well as skill in doing this kind of work. A very good resource for learning how to work with legacy code is Michael Feathers' book Working Effectively with Legacy Code.
In short, there are safe refactorings you can do without risking breaking things, which might help you get started. There are also other refactorings which require tests to protect how things work. Tests are essential when refactoring code. This doesn't of course come with a 100% guarantee that things don't break, because there might be so many hidden "features" and so much complexity that you cannot be aware of it all when you start. Depending on the code base, the amount of work you need to do varies greatly, but for large code bases there is usually a lot of it.
You'll need to understand what the code does, either by simply knowing it or by finding out what the current code does. In either case, you start by writing "larger" tests which are not really unit tests; they just protect the current behaviour. They might cover larger parts, more like integration/functional tests. These are your guards when you start to refactor the code. When you have such tests in place and you feel comfortable with what the code does, you can start refactoring the parts the "larger" tests cover. For the smaller parts you change, you write proper unit tests. Iterating through various refactorings will at some point make the initial large tests unnecessary, because you now have a much better code base and unit tests (or you can simply keep them as functional tests).
Now, coming back to your question.
I understand what you mean with your question, but I'd still like to change it slightly, because there are more important aspects than external versus internal. I believe a better question to ask is: which dependencies do I need to break to get a better design and to write unit tests?
The answer is that you should break all dependencies that you are not in control of, that are slow or non-deterministic, or that pull in too much state for a single unit test. These are for sure all the external ones (filesystem, printer, network, etc.). Also note that multi-threading is not suitable for unit tests, because it is not deterministic. For internal dependencies I assume you mean classes with members, or functions calling other functions. The answer to this is: maybe. You need to decide whether you are in control and whether the design is good. Probably in your case you are not in control and the code is not good, so here you need to refactor to get things under control and into a better design. Michael Feathers' book is great here, but you need to find out how to apply the ideas to your own code base, of course.
One very good technique for breaking dependencies is dependency injection. In short, it changes the design so that you pass in the collaborators a class uses instead of letting the class instantiate them itself. You define an interface (an abstract base class) for each dependency you pass in, so you can easily change what you supply. For instance, this lets a class use one implementation of a member in production and another one in the unit tests. This is a great technique and also leads to good design if used wisely.
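As a minimal sketch of constructor injection (the Clock/Greeter names are made up, not from your code base): production code passes the real implementation, and the test passes a fake.

#include <cassert>
#include <string>

struct Clock {                                   // dependency expressed as an interface
    virtual ~Clock() = default;
    virtual int CurrentHour() const = 0;
};

class Greeter {
    const Clock& clock_;                         // injected, not instantiated internally
public:
    explicit Greeter(const Clock& clock) : clock_(clock) {}
    std::string Greeting() const {
        return clock_.CurrentHour() < 12 ? "Good morning" : "Good afternoon";
    }
};

// Test double used only in the unit test.
struct FixedClock : Clock {
    int hour;
    explicit FixedClock(int h) : hour(h) {}
    int CurrentHour() const override { return hour; }
};

int main() {
    assert(Greeter(FixedClock(9)).Greeting() == "Good morning");
    assert(Greeter(FixedClock(15)).Greeting() == "Good afternoon");
}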
Good luck and take your time! ;)
Generally speaking, a method with 2000 lines of code is just plain BAD. I usually start to look for reasons to make new classes -- not even methods, but classes -- when I have to hit the Page Down key more than three or four times to browse through it (and collapsible regions don't count).
So yes, you do need to get rid of dependencies from outside and inside the assembly, and you need to think about the responsibility of the class. It sounds like this one has way too much weight on its shoulders, and like it is very close to impossible to write unit tests for. If you think about testability, you will automatically start to inject dependencies and downsize your classes, and BAM! There you have it: nice and pretty code! :-)
Regards,
Morten

Possible to unit test code that wasn't initially designed to be tested, without changing any code?

Is it generally accepted that you cannot test code unless the code is setup to be tested?
A hypothetical bit of code:
public void QueueOrder(SalesOrder order)
{
    if (order.Date < DateTime.Now - 20)
        throw new Exception("Order is too old to be processed");
    ...
}
Some would consider refactoring it into:
protected DateTime MinOrderAge
{
    return DateTime.Now - 20;
}

public void QueueOrder(SalesOrder order)
{
    if (order.Date < MinOrderAge)
        throw new Exception("Order is too old to be processed");
    ...
}
Note: You can come up with even more complicated solutions, involving an IClock interface and a factory. It doesn't affect my question.
The issue with changing the above code is that the code has changed. The code has changed without the customer asking for it to be changed. And any change requires meetings and conference calls. And so I'm at the point where it's easier not to test anything.
If I'm not willing or able to make changes, does that make me unable to perform testing?
Note: The above pseudo-code might look like C#, but that's only so it's readable. The question is language agnostic.
Note: The hypothetical code snippet, problem, need for refactoring, and refactoring are hypothetical. You can insert your own hypothetical code sample if you take umbrage with mine.
Note: The above hypothetical code is hypothetical. Any relation to any code, either living or dead, is purely coincidental.
Note: The code is hypothetical, but any answers are not. The question is not subjective, as I believe there is an answer.
Update: The problem here, of course, is that I cannot guarantee that the change in the above example didn't break anything. Sure, I refactored one piece of code out to a separate method, and the code is logically identical.
But I cannot guarantee that adding a new protected method didn't offset the virtual method table of the object, and if this class is in a DLL then I've just introduced an access violation.
The answer is yes, some code will need to change to make it testable.
But there is likely lots of code that can be tested without having to change it. I would focus on writing tests for that stuff first, then writing tests for the rest when other customer requirements give you the opportunity to refactor it in a testable way.
Code can be written from the start to be testable. If it is not written from the start with testability in mind, you can still test it, you may just run into some difficulties.
In your hypothetical code, you could test the original code by creating a SalesOrder with a date far in the past, or you could mock out DateTime.Now. Having the code refactored as you showed is nicer for testing, but it isn't absolutely necessary.
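For instance, a C++ analogue of the hypothetical snippet can be tested as-is by constructing an order that is already too old. Only the SalesOrder/QueueOrder names and the 20-day cutoff come from the pseudo-code above; everything else is illustrative.

#include <cassert>
#include <chrono>
#include <stdexcept>

struct SalesOrder {
    std::chrono::system_clock::time_point date;
};

void QueueOrder(const SalesOrder& order) {
    using namespace std::chrono;
    if (order.date < system_clock::now() - hours(24 * 20))   // "too old" cutoff
        throw std::runtime_error("Order is too old to be processed");
    // ... queue the order ...
}

int main() {
    SalesOrder stale;
    stale.date = std::chrono::system_clock::now() - std::chrono::hours(24 * 30);
    bool threw = false;
    try { QueueOrder(stale); } catch (const std::runtime_error&) { threw = true; }
    assert(threw);                              // old orders are rejected

    SalesOrder fresh;
    fresh.date = std::chrono::system_clock::now();
    QueueOrder(fresh);                          // recent orders pass through
}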
If your code is not designed to be tested then it is more difficult to test. In your example you would have to override the DateTime.Now call, which is probably no easy task.
If you think it adds little value to add tests to your code, or changing the existing code is not allowed, then you should not do it.
However, if you believe in TDD then you should write new code with tests.
You can unit test your original example using a Mock object framework. In this case I would mock the SalesOrder object several times, configuring a different Date value each time, and test. This avoids changing any code that ships and allows you to validate the algorithm in question that the order date is not too far in the past.
For a better overall view of what's possible given the dependencies you're dealing with, and the language features you have at your disposal, I recommend Working Effectively with Legacy Code.
This is easy to accomplish in some dynamic languages. For example I can hook inside the import/using statements and replace an actual dependency with a stub one, even if the SUT (System Under Test) uses it as an implicit dependency. Or I can redefine those symbols (classes, methods, functions, etc.). I'm not saying this is the way to go. Things should be refactored, but it is easier to write some characterization tests.
The problem with this sort of code is always that it creates and depends on a lot of static classes, framework types, and so on.
A very good solution to 'inject' fakes for all these objects is Typemock Isolator (which is commercial, but worth every penny). So yes, you certainly can test legacy code, which was written without testability in mind. I've done it on a big project with Typemock and had very good results.
Alternatively to Typemock, you may use the free MS Moles framework, which does basically the same. It's only that it has a quite unintuitive API and is much harder to learn and use.
HTH.
Thomas
Mockito + PowerMock for Mockito.
You'll be able to test almost everything without dramatically changing your code. But some setters will be needed to inject the mocks.

Should non-public functions be unit tested and how?

I am writing unit tests for some of my code and have run into a case where I have an object with a small exposed interface but complex internal structures, as each exposed method runs through a large number of internal functions, including dependencies on the object's state. This makes the methods on the external interface quite difficult to unit test.
My initial question is should I be aiming to unit test these internal functions as well, as they are simpler and hence easier to write tests for? My gut feeling says yes, which leads to the follow-up question of if so, how would I go about doing this in C++?
The options I've come up with are to change these internal functions from private to protected and use either a friend class or inheritance to access them. Is this the best/only method of doing this while preserving some of the semantics of keeping the internal methods hidden?
If your object is performing highly complex operations that are extremely hard to test through the limited public interface, an option is to factor out some of that complex logic into utility classes that encapsulate specific tasks. You can then unit test those classes individually. It's always a good idea to organize your code into easily digestible chunks.
Short answer: yes.
As to how, I caught a passing reference on SO a few days ago:
#define private public
in the unit testing code evaluated before the relevant headers are read...
Likewise for protected.
Very cool idea.
Slightly longer answer: Test if the code is not obviously correct. Which means essentially any code that does something non-trivial.
On consideration, I am wondering about this. You won't be able to link against the same object file that you use in the production build. Now, unit testing is a bit of an artificial environment, so perhaps this is not a deal-breaker. Can anyone shed some light on the pros and cons of this trick?
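For what it's worth, here is a self-contained sketch of how the trick reads; the Counter class is hypothetical and is pasted inline where the production header would normally be #included. Keep in mind that redefining a keyword is formally undefined behaviour, and, as noted above, the test has to compile its own copy of the code rather than link against the production object file.

#include <cassert>

#define private public           // evaluated before the class definition is parsed
// In real use this would be: #include "counter.h"
class Counter {
public:
    void increment() { ++count_; }
private:
    int count_ = 0;              // normally inaccessible from tests
};
#undef private

int main() {
    Counter c;
    c.increment();
    assert(c.count_ == 1);       // compiles because "private" was rewritten to "public"
}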
My feeling personally is that if testing the public interface is not sufficient to adequately test the private methods, you probably need to decompose the class further. My reasoning is: private methods should be doing only enough to support all use-cases of the public interface.
But my experience with unit testing is (unfortunately) slim; if someone can provide a compelling example where a large chunk of private code cannot be separated out, I'm certainly willing to reconsider!
There are several possible approaches, presuming your class is X:
Only use the public interface of X. You will have extensive setup problems and may need a coverage tool to make sure that your code is covered, but there are no special tricks involved.
Use the "#define private public" or similar trick to link against a version of X.o that is exposed to everyone.
Add a public "static X::unitTest()" method. This means that your code will ship linked to your testing framework. (However, one company I worked with used this for remote diagnostics of the software.)
Add "class TestX" as a friend of X. TestX is not shipped in your production dll/exe. It is only defined in your test program, but it has access to X's internals (a sketch of this follows the list).
Others...
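A minimal sketch of option 4 (hypothetical names): the friend declaration lives in the production header, but TestX itself is only compiled into the test binary.

#include <cassert>

class X {
public:
    void accumulate(int v) { total_ += v; }
private:
    friend class TestX;          // grants the test access to internals
    int total_ = 0;
};

// Defined only in the test program:
class TestX {
public:
    static void run() {
        X x;
        x.accumulate(4);
        assert(x.total_ == 4);   // internals reachable through friendship
    }
};

int main() { TestX::run(); }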
My opinion is no, generally they should not be tested directly.
Unit tests are white box, from a higher perspective of the system, but they should be black box from the perspective of the tested class interface (its public methods and their expected behavior).
So for example, a string class (that wouldn't need legacy char* support):
you should verify that its length() method is working correctly.
you should not have to verify that it puts the '\0' char at the end of its internal buffer. This is an implementation detail.
This allows you to refactor the implementation almost without touching the tests later on
This helps you reduce coupling by enforcing class responsibilities
This makes your tests easier to maintain
The exception is for quite complex helper methods you would like to verify more thoroughly.
But then, this may be a hint that this piece of code should be made "official", by making it public static or by extracting it into its own class with its own public methods.
I would say to use a code coverage tool to check whether these functions are already tested somehow.
Theoretically if your public API is passing all the tests then the private functions are working fine, as long as every possible scenario is covered. That's the main problem, I think.
I know there are tools for that which work with C/C++. CoverageMeter is one of them.
Unless you're making a general purpose library you should try and limit what you have built to what you will be using. Extend as you need it.
As such, you should have full code coverage, and you should test it all.
Perhaps your code is a bit smelly? Is it time to refactor?
If you have a large class doing lots of things internally, perhaps it should be broken into several smaller classes with interfaces you could test separately.
I've always thought this would tend to fall into place if you use test-driven development. There are two ways of approaching it: either you start with your public interface and write a new test before each addition to the complex private methods, or you start off working on the complex stuff as public and then refactor so that the methods become private and the tests you've already written use the new public interface. Either way you should get full coverage.
Of course I've never managed to write a whole app (or even class) in a strict tdd way and the refactoring of the complex stuff into utility classes is the way to go if possible.
You could always use a compile switch around the private: section, like
#if defined(UNIT_TEST)
Or, with code coverage tools, verify that your unit testing of the public functions fully exercises the private ones.
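A minimal sketch of the compile-switch idea (hypothetical names): the test build defines UNIT_TEST so the helper becomes reachable, while the production build keeps it private.

#include <cassert>

class Parser {
public:
    int parse(int raw) { return normalize(raw) * 2; }

#if defined(UNIT_TEST)
public:                          // exposed only to the test build
#else
private:
#endif
    int normalize(int raw) { return raw < 0 ? 0 : raw; }
};

int main() {
#if defined(UNIT_TEST)
    Parser p;
    assert(p.normalize(-5) == 0);   // direct test of the internal helper
#endif
}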
Yes, you should. It's called white-box testing; it means that you have to know a lot about the internals of the program to test it properly.
I would create public 'stubs' that call the private functions for testing. #ifdef the stubs so that you can compile them out when the testing is complete.
You might find it productive to write a test program. In the test program, create a class which uses the class to be tested as a base.
Add methods to your new class to test the functions that aren't visible at the public interface. Have your test program call these methods to validate the functions you are concerned about.
If your class is performing complex internal calculations, a utility class or even a free function may be the best way to break out the calculation. But if the object has a complex internal structure, the object should have consistency-check functions. For example, if the object represents a specialized tree, the class should have methods to check that the tree is still correct. Additional functions such as tree depth are often useful to users of the class. Some of these functions may be declared inside #ifdef DEBUG or similar constructs to limit run-time footprint in embedded applications. Internal check functions that are only compiled when DEBUG is set are much better from an encapsulation standpoint: you aren't breaking encapsulation, and the implementation-dependent tests are kept with the implementation, so it is obvious the test needs to change when the implementation changes.
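As a rough sketch of such a debug-only consistency check (the tree class and all names are made up), the invariant checker is compiled only when DEBUG is defined:

#include <cassert>
#include <memory>

class IntTree {
public:
    void insert(int v) { insert(root_, v); }
    int depth() const { return depth(root_.get()); }   // also useful to callers

#ifdef DEBUG
    // Verifies the binary-search-tree ordering invariant; debug builds only.
    bool isConsistent() const { return isOrdered(root_.get(), nullptr, nullptr); }
#endif

private:
    struct Node { int value; std::unique_ptr<Node> left, right; };
    std::unique_ptr<Node> root_;

    static void insert(std::unique_ptr<Node>& n, int v) {
        if (!n) { n = std::make_unique<Node>(); n->value = v; return; }
        insert(v < n->value ? n->left : n->right, v);
    }

    static int depth(const Node* n) {
        if (!n) return 0;
        int l = depth(n->left.get()), r = depth(n->right.get());
        return 1 + (l > r ? l : r);
    }

#ifdef DEBUG
    static bool isOrdered(const Node* n, const int* lo, const int* hi) {
        if (!n) return true;
        if ((lo && n->value <= *lo) || (hi && n->value >= *hi)) return false;
        return isOrdered(n->left.get(), lo, &n->value) &&
               isOrdered(n->right.get(), &n->value, hi);
    }
#endif
};

int main() {
    IntTree t;
    t.insert(5); t.insert(2); t.insert(8);
    assert(t.depth() == 2);
#ifdef DEBUG
    assert(t.isConsistent());
#endif
}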