Mock non-virtual methods using the preprocessor - C++

I am working on an embedded project in C++ where the resources are a bit limited. This means I try to be careful about using too many virtual methods unless I really need them. I am planning to start implementing unit-tests perhaps with Gtest.
Because the project is not too big in code size, most of the code is written directly in the header files. The only penalty is a bit more compile time. As it takes ~20 seconds to compile, this is not an issue, and in my opinion it makes the code easier to write/read.
If the project was running in Windows I would be all for creating new abstraction layers (or adding seam classes) and having more virtual methods where I need them for unit-testing. Because of the nature of the project I don't want to add extra virtual methods just for unit-testing purposes. I think it would be valid to do it but seems a bit unreasonable when the resources are scarce.
The project is running well and unit-testing is nice to have so I would like to keep code modifications to a minimum. One of the solutions to the problem is to do some trickery with the templates using "hi-perf dependency injection". This to me seems like duck typing and still requires modifying the original source code where the class is used.
I was thinking: why not just use the preprocessor?
#ifdef TEST
#define TESTABLE virtual
#else
#define TESTABLE
#endif
Then, when the time comes that a method needs to be mocked, you could modify the original source code for that class and write something like:
TESTABLE void somePreviouslyUnmockeableMethod(void);
I am aware that one of the disadvantages is that I would now have two versions of the source code: one for production and one for testing. The same happens anyway when one decides to use a mock class instead of the real class when unit-testing. If the code changes were large, the preprocessor approach could be a real problem, but I think with this definition they are kept to a minimum.
Is this a valid solution? Are there negative consequences I may be overlooking?
Note that all the modules already include a ProjectSettings.h file where the TESTABLE definition could be added with ease.
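For illustration, a minimal sketch of how this could look in practice (Motor and MockMotor are hypothetical names; TESTABLE comes from ProjectSettings.h as described above):

#include "ProjectSettings.h"  // defines TESTABLE as virtual in test builds

class Motor {
public:
    TESTABLE void setSpeed(int rpm) { (void)rpm; /* write to a hardware register */ }
};

#ifdef TEST
// Test-only mock; it compiles because setSpeed is virtual in this build.
class MockMotor : public Motor {
public:
    void setSpeed(int rpm) override { lastRpm = rpm; }
    int lastRpm = 0;
};
#endif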

If making the methods virtual is really too costly (i.e. you measured that this would be too much overhead), you can use the hi-perf dependency injection technique. The cost is then only a longer compilation time, with no runtime overhead. In production you instantiate your class with the real object, and in tests with a mock class that has methods with the same signatures as the real class.
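A minimal sketch of that technique, assuming hypothetical RealSensor/MockSensor classes (the dependency is a template parameter, so there is no virtual dispatch):

class RealSensor {
public:
    int read() { return 0; /* real hardware access */ }
};

class MockSensor {
public:
    int read() { return fakeValue; }  // same signature, no common base class
    int fakeValue = 0;
};

template <typename Sensor = RealSensor>
class Controller {
public:
    explicit Controller(Sensor& s) : sensor(s) {}
    bool overheated() { return sensor.read() > 100; }
private:
    Sensor& sensor;
};

// Production: RealSensor s; Controller<> c(s);
// Test:       MockSensor m; m.fakeValue = 150; Controller<MockSensor> c(m);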


Test Concrete Wrapper Classes

If I created an interface which contains signatures for the static methods, I would have a way to test the classes that were dependent on those static methods.
But how would I test the concrete wrapper class that implements the interface and wraps those static methods?
Do I just not test the concrete wrapper class?
It seems like wrapping the static methods has just moved the problem elsewhere. I still have some class, wrapper class, that is hard to test.
You're on the right track, but you're not moving the problem somewhere else -- you're isolating it to prevent that problem from leaking throughout your code base and other tests. I would only view this as a problem if you've made the wrong assumptions about the wrapped behavior or neglected to prove those assumptions. Some testing is required, but at varying degrees of difficulty. How far you choose to test those assumptions will be a trade-off you need to make.
For some scenarios, the wrapper can be tested but will require physical dependencies that you can control. For example, a static method that modifies a file will require that you take the time to set up the file system, as will a helper class that downloads a file from a web server. I like to think of these tests as "integration tests" instead of "unit tests" as they act upon their dependencies instead of running in isolation. As these tests may take longer to execute than standard tests, you may want to exclude them and run them less frequently than your faster unit tests.
For other scenarios, you may not be able to write a test without considerable effort. For example, you may have wrapped the logic that shows a dialog in a separate window -- testing this logic in code would be a lot of work, but launching the application and verifying that it works may be enough. You'll take a hit on code coverage, but to me this is an acceptable trade-off.
The key challenge is understanding the assumptions of the wrapped behavior -- e.g., what type of exception is thrown by a remote server if a web request is malformed or the network connection is lost? If you don't capture these assumptions correctly, those faults will bubble up into other areas of your code.
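As a concrete sketch of this pattern (IFileSystem and FileSystemWrapper are hypothetical names): classes that depend on the interface get unit tested against a mock, while the thin wrapper that forwards to the underlying call is covered by an integration test against a real, controlled file system.

#include <filesystem>
#include <string>

class IFileSystem {
public:
    virtual ~IFileSystem() = default;
    virtual bool exists(const std::string& path) = 0;
};

class FileSystemWrapper : public IFileSystem {
public:
    bool exists(const std::string& path) override {
        // Thin forwarding call -- this is the part an integration test covers.
        return std::filesystem::exists(path);
    }
};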

Do I only have to mock out external dependencies in a unit test? What's about internal dependencies?

Do I only have to mock out external dependencies in a unit test?
What if the method that I want to test has a dependency on another class within the same assembly? Do I have to mock out that dependency to be sure I am testing only one thing, and therefore writing a unit test instead of an integration test?
Is an integration test a test that exercises dependencies in general, or do I have to differentiate between internal and external dependencies?
An example would be a method that has 2000 lines of code with 5 method invocations (all methods coming from the same assembly).
Generally a proper unit test is testing only that single piece of code. So a scenario like this is where you start to ask yourself about the coupling of these two classes. Does Class A internally depend on the implementation of Class B? Or does it just need to be supplied an instance of Type B (notice the difference between a class and a type)?
If the latter, then mock it because you're not testing Class B, just Class A.
If the former, then it sounds like creating the test has identified some coupling that can (perhaps even should) be re-factored.
Edit: (in response to your comment) I guess a key thing to remember while doing this (and retro-fitting unit tests into a legacy system is really, really difficult) is to mentally separate the concepts of a class and a type.
The unit tests are not for Class A, they are for Type A. Class A is an implementation of Type A which will either pass or fail the tests. Class A may have an internal dependency on Type B and need it to be supplied, but Type A might not. Type A is a contract of functionality, which is further expressed by its unit tests.
Does Type A specify in its contract that implementations will require an instance of Type B? Or does Class A resolve an instance of it internally? Does Type A need to specify this, or is it possible that different implementations of Type A won't need an instance of Type B?
If Type A requires an instance of Type B, then it should expose this externally and you'd supply the mock in your tests. If Class A internally resolves an instance of Type B, then you'd likely want to be using an IoC container where you'd bootstrap it with the mock of Type B before running the tests.
Either way, Type B should be a mock and not an implementation. It's just a matter of breaking that coupling, which may or may not be difficult in a legacy system. (And, additionally, may or may not have a good ROI for the business.)
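To make the Type/Class distinction concrete, a minimal sketch (all names are hypothetical): ClassA depends only on the contract TypeB, so a test supplies a mock implementation of that contract.

class TypeB {                        // the contract ("Type B")
public:
    virtual ~TypeB() = default;
    virtual int value() = 0;
};

class ClassA {                       // an implementation of "Type A"
public:
    explicit ClassA(TypeB& b) : b_(b) {}
    int doubled() { return 2 * b_.value(); }
private:
    TypeB& b_;
};

class MockB : public TypeB {         // test-only implementation of Type B
public:
    int value() override { return 21; }
};

// In a test: MockB mock; ClassA a(mock); /* expect a.doubled() == 42 */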
Working with a code base like the one you're describing isn't easy: multiple problems are combined into something you don't know how to start changing. There are strong dependencies between classes as well as between problems, and maybe even no overall design.
In my experience, this takes a lot of effort and time, as well as skill in doing this kind of work. A very good resource to learn how to work with legacy code is Michael Feathers' book: Working Effectively with Legacy Code.
In short, there are safe refactorings you can do without risking breaking things, which might help you get started. There are also other refactorings which require tests to protect how things work. Tests are essential when refactoring code. This doesn't, of course, come with a 100% guarantee that things won't break, because there might be so many hidden "features" and so much complexity that you cannot be aware of it all when you start. Depending on the code base the amount of work varies greatly, but for large code bases it is usually substantial.
You'll need to understand what the code does, either by simply knowing it or by finding out what the current code does. In either case, you start by writing "larger" tests which are not really unit tests; they just protect the current code. They might cover larger parts, more like integration/functional tests. These are your guards when you start to refactor the code. When you have such tests in place and you feel comfortable with what the code does, you can start refactoring the parts the "larger" tests cover. For the smaller parts you change, you write proper unit tests. Iterating through various refactorings will at some point make the initial large tests unnecessary, because you now have a much better code base and unit tests (or you simply keep them as functional tests).
Now, coming back to your question.
I understand what you mean with your question, but I'd still like to change it slightly, because there are more important aspects than external vs. internal. I believe a better question to ask is: which dependencies do I need to break to get a better design and to write unit tests?
The answer is that you should break all dependencies that you are not in control of, that are slow or non-deterministic, or that pull in too much state for a single unit test. These are for sure all the external ones (file system, printer, network, etc.). Also note that multi-threading is not suitable for unit tests, because it is not deterministic. By internal dependencies I assume you mean classes with members, or functions calling other functions. The answer to this is maybe. You need to decide whether you are in control and whether the design is good. Probably in your case you are not in control and the code is not good, so you need to refactor to get things under control and into a better design. Feathers' book is great here, but you need to work out how to apply the techniques to your own code base, of course.
One very good technique for breaking dependencies is dependency injection. In short, it changes the design so that you pass in the members a class uses instead of letting the class itself instantiate them. You define an interface (abstract base class) for each dependency you pass in, so you can easily change what you pass. For instance, using this you can have different member implementations for a class in production and in unit tests. This is a great technique and also leads to good design if used wisely.
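A before/after sketch of that change (Report, Database, and IDatabase are hypothetical names):

// Before: the class instantiates its own dependency -- hard to substitute in a test.
class Database {
public:
    int count() { return 0; /* real query */ }
};

class ReportBefore {
    Database db;                     // concrete member, created internally
public:
    int rowCount() { return db.count(); }
};

// After: the dependency is passed in through an abstract base class.
class IDatabase {
public:
    virtual ~IDatabase() = default;
    virtual int count() = 0;
};

class ReportAfter {
    IDatabase& db;
public:
    explicit ReportAfter(IDatabase& d) : db(d) {}
    int rowCount() { return db.count(); }
};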
Good luck and take your time! ;)
Generally speaking, a method with 2000 lines of code is just plain BAD. I usually start to look for reasons to make new classes -- not even methods, but classes -- when I have to use the page-down key more than three or four times to browse through it (and collapsible regions don't count).
So yes, you do need to get rid of dependencies from outside and inside the assembly, and you need to think about the responsibility of the class. It sounds like this one has way too much weight on its shoulders, and it sounds like it is very close to impossible to write unit tests for. If you think testability, you will automatically start to inject dependencies and downsize your classes, and BAM!!! There you have it: nice and pretty code!! :-)
Regards,
Morten

C++ Unit Tests, mocking objects

I'm currently looking at some unit test libraries in C++ and have some questions:
There seems to be no mocking facility in Boost.Test, but I can hardly imagine doing unit tests without creating mock objects/functions. How would you do that in Boost.Test? Are you doing it manually (how? I mean, there are several ways I can think of, none of which seem nice) or are you simply doing without mock objects?
googletest and googlemock look like nice libraries with mocking support; however, they require every method that is to be mocked to be virtual. I don't really like this. It is not that I'm worried about the performance (I could define a macro to get it out of production code anyway), but I find this very intrusive. I wonder if there's another solution which does not require that much change to the existing code? (love clojure there)
Boost::Test does not have a mocking framework or library. If you want mocks, you have to do it yourself, or use something like GMock. Of course, you could use google mock with Boost::Test without problems.
How else would you expect something to be mockable? That's how it works in every other programming language! (Okay, not with duck typing, but that carries more overhead than virtual methods) If you're concerned about performance:
Implement everything in terms of virtuals as specified in the general google mock docs.
Profile your code for places where that's not sufficient
Replace those profiled sections (or rather, the segment of your code which indicates performance is a problem) with high-perf dependency injection instead.
Don't replace everything with high-perf DI, because that would send compile times through the roof.
In all seriousness though, I don't think the virtual calls are going to make huge differences in performance. The one case where virtuals are bad is where they're located inside inner loops (such as in the iostream library, where they're called possibly for every character of input or output), and even then only in performance-sensitive code.
EDIT: I missed the very important word not in question #2 above -- that you're not worried about performance. If that's the case, then my answer is you're effectively screwed. A plain function or method call in C++ generates a plain call, and there's no opportunity for you to change where that call points. In most cases this doesn't require too much code change, because correct C++ code uses references wherever possible, and those won't need to be modified when virtuals are introduced. You will have to watch out, however, for anyone using value semantics, because they will be subject to the slicing problem.
Turtle was designed explicitly for use with Boost.Test and looks very nice to me.
Disclaimer: I work at Typemock.
Typemock Isolator++ can mock anything!! You don't need virtual - everything is mockable.
See explanation here
So you can fake public, private, abstract (without actually creating a concrete class), non-virtual methods, out arguments, live instances, etc...
And...
It fakes everything recursively
class MyClass
{
public:
    int GetResult() { return -1; }
};
We'll use the following code
MyClass* fakeMyClass = FAKE<MyClass>(); // every call to fakeMyClass will be faked
WHEN_CALLED(fakeMyClass->GetResult()).Return(10);

Should non-public functions be unit tested and how?

I am writing unit tests for some of my code and have run into a case where I have an object with a small exposed interface but complex internal structures, as each exposed method runs through a large number of internal functions, including dependencies on the object's state. This makes the methods on the external interface quite difficult to unit test.
My initial question is: should I be aiming to unit test these internal functions as well, as they are simpler and hence easier to write tests for? My gut feeling says yes, which leads to the follow-up question: if so, how would I go about doing this in C++?
The options I've come up with are to change these internal functions from private to protected and use either a friend class or inheritance to access them. Is this the best/only method of doing this while preserving some of the semantics of keeping the internal methods hidden?
If your object is performing highly complex operations that are extremely hard to test through the limited public interface, an option is to factor out some of that complex logic into utility classes that encapsulate specific tasks. You can then unit test those classes individually. It's always a good idea to organize your code into easily digestible chunks.
Short answer: yes.
As to how, I caught a passing reference on SO a few days ago:
#define private public
in the unit testing code evaluated before the relevant headers are read...
Likewise for protected.
Very cool idea.
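Sketched out, the trick looks like this (Widget.h is a hypothetical header under test; note that redefining keywords this way is technically undefined behavior, so treat it strictly as a test-only hack):

// test_widget.cpp -- a test-only translation unit, never production code
#define private public
#define protected public
#include "Widget.h"       // Widget's internals are now accessible
#undef private
#undef protected

// Tests in this file can read and call Widget's private members directly.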
Slightly longer answer: Test if the code is not obviously correct. Which means essentially any code that does something non-trivial.
On consideration, I am wondering about this. You won't be able to link against the same object file that you use in the production build. Now, unit testing is a bit of an artificial environment, so perhaps this is not a deal-breaker. Can anyone shed some light on the pros and cons of this trick?
My feeling personally is that if testing the public interface is not sufficient to adequately test the private methods, you probably need to decompose the class further. My reasoning is: private methods should be doing only enough to support all use-cases of the public interface.
But my experience with unit testing is (unfortunately) slim; if someone can provide a compelling example where a large chunk of private code cannot be separated out, I'm certainly willing to reconsider!
There are several possible approaches. Presuming your class is X:
Only use the public interface of X. You will have extensive setup problems and may need a coverage tool to make sure that your code is covered, but there are no special tricks involved.
Use the "#define private public" or similar trick to link against a version of X.o that is exposed to everyone.
Add a public "static X::unitTest()" method. This means that your code will ship linked to your testing framework. (However, one company I worked with used this for remote diagnostics of the software.)
Add "class TestX" as a friend of X. TestX is not shipped in you production dll/exe. It is only defined in your test program, but it has access to X's internals.
Others...
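A minimal sketch of the friend-class approach from the list above (X's header carries only the friend declaration; TestX is defined solely in the test program):

class X {
    friend class TestX;              // grants the test access to internals
    int secret_ = 7;
    int helper() { return secret_ + 1; }
};

// Defined only in the test build:
class TestX {
public:
    static void testHelper() {
        X x;
        // expect x.helper() == 8 -- the friend can reach private members
    }
};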
My opinion is no, generally they should not be tested directly.
Unit tests are white box, from a higher perspective of the system, but they should be black box from the perspective of the tested class interface (its public methods and their expected behavior).
So for example, a string class (that wouldn't need legacy char* support):
you should verify that its length() method is working correctly.
you should not have to verify that it puts the '\0' char at the end of its internal buffer. This is an implementation detail.
This allows you to refactor the implementation almost without touching the tests later on
This helps you reduce coupling by enforcing class responsibilities
This makes your tests easier to maintain
The exception is for quite complex helper methods you would like to verify more thoroughly.
But then, this may be a hint that this piece of code should be made "official" by making it public static or extracting it into its own class with its own public methods.
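For instance, a black-box test of the string class above, written with Google Test (MyString is a hypothetical name):

#include <gtest/gtest.h>
#include "MyString.h"     // hypothetical string class under test

TEST(MyStringTest, LengthMatchesContents) {
    MyString s("abc");
    EXPECT_EQ(3u, s.length());   // public contract only
    // No assertion about a trailing '\0' in the internal buffer --
    // that implementation detail remains free to change.
}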
I would say to use a code coverage tool to check whether these functions are already tested somehow.
Theoretically, if your public API is passing all the tests, then the private functions are working fine, as long as every possible scenario is covered. That's the main problem, I think.
I know there are tools for that which work with C/C++. CoverageMeter is one of them.
Unless you're making a general purpose library you should try and limit what you have built to what you will be using. Extend as you need it.
As such, you should have full code coverage, and you should test it all.
Perhaps your code is a bit smelly? Is it time to refactor?
If you have a large class doing lots of things internally, perhaps it should be broken into several smaller classes with interfaces you could test separately.
I've always thought this would tend to fall into place if you use test driven development. There are two ways of approaching the development, either you start with your public interface and write a new test before each addition to the complex private methods or you start off working on the complex stuff as public and then refactor the code to make the methods private and the tests you've already written to use the new public interface. Either way you should get full coverage.
Of course I've never managed to write a whole app (or even class) in a strict TDD way, and refactoring the complex stuff into utility classes is the way to go if possible.
You could always use a compile switch around the private: section, like
#if defined(UNIT_TEST)
Or verify with code coverage tools that your unit tests of the public functions fully exercise the private ones.
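Fleshed out, the compile switch might look like this (Widget is a hypothetical class; with UNIT_TEST defined, the internals become reachable from test code):

class Widget {
public:
    void doWork() { recompute(); }
#if defined(UNIT_TEST)
public:                  // internals exposed to the test build only
#else
private:
#endif
    int state = 0;
    void recompute() { ++state; }
};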
Yes, you should. It's called white-box testing; it means that you have to know a lot about the internals of the program to test it properly.
I would create public 'stubs' that call the private functions for testing. #ifdef the stubs so that you can compile them out when the testing is complete.
You might find it productive to write a test program. In the test program, create a class which uses the class to be tested as a base.
Add methods to your new class to test the functions that aren't visible at the public interface. Have your test program call these methods to validate the functions you are concerned about.
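A sketch of that approach (Widget and TestableWidget are hypothetical names; the members under test must be protected rather than private for the derived class to reach them):

class Widget {
protected:
    int helper(int x) { return 2 * x; }   // internal logic to verify
};

// Defined in the test program only.
class TestableWidget : public Widget {
public:
    using Widget::helper;                 // re-expose the member for testing
};

// In a test: TestableWidget w; /* expect w.helper(3) == 6 */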
If your class is performing complex internal calculations, a utility class or even an external function may be the best option to break out the calculation. But if the object has a complex internal structure, the object should have consistency-check functions. I.e., if the object represents a specialized tree, the class should have methods to check that the tree is still correct. Additional functions such as tree depth are often useful to users of the class. Some of these functions may be declared inside #ifdef DEBUG or similar constructs to limit run-time space in embedded applications. Using internal functions that are only compiled when DEBUG is set is much better from an encapsulation standpoint: you aren't breaking encapsulation, and the implementation-dependent tests are kept with the implementation, so it is obvious the tests need to change when the implementation changes.
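A sketch of such a consistency check (IntTree and checkInvariants are hypothetical names), compiled only when DEBUG is set:

class IntTree {
public:
#ifdef DEBUG
    // Checks that every node's children respect the ordering invariant.
    bool checkInvariants() const { return checkNode(root); }
#endif
private:
    struct Node { int value; Node* left; Node* right; };
    Node* root = nullptr;
#ifdef DEBUG
    static bool checkNode(const Node* n) {
        if (!n) return true;
        if (n->left  && !(n->left->value < n->value))  return false;
        if (n->right && !(n->value < n->right->value)) return false;
        return checkNode(n->left) && checkNode(n->right);
    }
#endif
};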

C++ Unit Testing Legacy Code: How to handle #include?

I've just started writing unit tests for a legacy code module with large physical dependencies using the #include directive. I've been dealing with them in a few ways that felt overly tedious (providing empty headers to break long #include dependency lists, and using #define to prevent classes from being compiled) and was looking for some better strategies for handling these problems.
I've been frequently running into the problem of duplicating almost every header file with a blank version in order to separate the class I'm testing in its entirety, and then writing substantial stub/mock/fake code for objects that will need to be replaced since they're now undefined.
Anyone know some better practices?
The depression in the responses is overwhelming... But don't fear, we've got the holy book to exorcise the demons of legacy C++ code. Seriously just buy the book if you are in line for more than a week of jousting with legacy C++ code.
Turn to page 127: the case of the horrible include dependencies. (Now I am not even within miles of Michael Feathers, but here is an as-short-as-I-could-manage answer..)
Problem: In C++, if ClassA needs to know about ClassB, ClassB's declaration is straight-lifted / textually included in ClassA's source file. And since we programmers love to take it to the wrong extreme, a file can recursively include a zillion others transitively. Builds take years.. but hey, at least it builds.. we can wait.
Now to say 'instantiating ClassA under a test harness is difficult' is an understatement. (Quoting MF's example - Scheduler is our poster problem child with deps galore.)
#include "TestHarness.h"
#include "Scheduler.h"
TEST(create, Scheduler) // your fave C++ test framework macro
{
Scheduler scheduler("fred");
}
This will bring out the includes dragon with a flurry of build errors.
Blow#1 Patience-n-Persistence: Take on each include one at a time and decide if we really need that dependency. Let's assume SchedulerDisplay is one of them, whose displayEntry method is called in Scheduler's ctor.
Blow#2 Fake-it-till-you-make-it (Thanks RonJ):
#include "TestHarness.h"
#include "Scheduler.h"
void SchedulerDisplay::displayEntry(const string& entryDescription) {}
TEST(create, Scheduler)
{
Scheduler scheduler("fred");
}
And pop goes the dependency and all its transitive includes.
You can also reuse the fake methods by encapsulating them in a Fakes.h file to be included in your test files.
Blow#3 Practice: It may not always be that simple.. but you get the idea. After the first few duels, the process of breaking deps will get easy-n-mechanical.
Caveats (Did I mention there are caveats? :)
We need a separate build for the test cases in this file; we can have only one definition of the SchedulerDisplay::displayEntry method in a program. So create a separate program for the Scheduler tests.
We aren't breaking any dependencies in the program, so we are not making the code cleaner.
We need to maintain those fakes for as long as we need the tests.
Your sense of aesthetics may be offended for a while.. just bite your lip and 'bear with us for a better tomorrow'
Use this technique for a huge class with severe dependency issues. Don't use it often or lightly.. Use it as a starting point for deeper refactorings. Over time this testing program can be taken behind the barn as you extract more classes (WITH their own tests).
For more.. please do read the book. Invaluable. Fight on bro!
Since you're testing legacy code, I'm assuming you can't refactor said code to have fewer dependencies (e.g. by using the pimpl idiom).
That leaves you with few options, I'm afraid. Every header that was included for a type or function will need a mock object for that type or function for everything to compile; there's little you can do...
I am not answering your question directly but I am afraid that unit testing just may not be the thing to do if you work with large amounts of legacy code.
After leading an XP team on a green field development project I really loved my Unit tests. Things happened and a few years later I find myself working on a large legacy code base that has lots of quality problems.
I tried to find a way to add units tests to the application but in the end just got stuck in a catch-22:
In order to write meaningful unit tests, the code would need to be refactored.
Without unit tests it will be too dangerous to refactor the code.
If you feel like a hero and drink the Kool-Aid on unit testing, then you may still give it a try, but there is a real risk that you end up with just more test code of little value that now also needs to be maintained.
Sometimes it is just best to work on the code in the way that is "designed" to be worked on.
I don't know if this will work for your project, but you might try to attack the problem from the link phase of your build.
This would completely eliminate your #include problem.
All you would need to do is re-implement the interfaces in the included files to do whatever you want, and then just link against the mock object files that you have created to implement those interfaces.
The big disadvantage of this method is a more complicated build system.
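A sketch of such a link seam (logMessage is a hypothetical free function): the test build links mock_logger.o in place of logger.o, so no #include changes are needed anywhere.

// logger.h -- shared by both builds
void logMessage(const char* msg);

// logger.cpp -- production implementation, linked into the real binary only

// mock_logger.cpp -- linked into the test binary instead of logger.cpp
#include <string>
#include <vector>
std::vector<std::string> g_loggedMessages;   // tests inspect this afterwards
void logMessage(const char* msg) { g_loggedMessages.push_back(msg); }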
If you keep writing stubs/mock/fake code, you risk unit testing a class that has different behavior than when it is compiled in the main project.
But if those includes are there and add no behavior, then it's OK.
I'd try not to change anything about the includes while doing the unit testing, so you're sure (as far as you can be with legacy code :) ) that you're testing the real code.
You're definitely between a rock and a hard place with legacy code with large dependencies. You've got a long hard slog ahead to sort it all out.
From what you say, it seems you are trying to keep the source code intact for each module in turn, placing it in a test harness with external dependencies mocked out. My suggestion here would be to take the even braver step of attempting some refactoring to eliminate (or invert) the dependencies, which is probably the very step you are trying to avoid.
I suggest this because I'm guessing the dependencies are going to kill you as you write tests. You will certainly be better off in the long term if you can eliminate the dependencies.