Currently my project is composed of various concrete classes. Now, as I'm getting into unit testing, it looks like I'm supposed to create an interface for each and every class (effectively doubling the number of classes in my project). I happen to be using Google Mock as a mocking framework; see the Google Mock Cookbook on interfaces. While before I might have had just the classes Car and Engine, now I would have abstract classes (aka C++ interfaces) Car and Engine, and then the implementation classes CarImplementation and EngineImpl or whatever. This would allow me to stub out Car's dependency on Engine.
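For example, a minimal sketch of that cookbook pattern with Google Mock (the class bodies here are my own illustration, not from a real project):

#include <gmock/gmock.h>

// The abstract "interface" extracted purely so Car's dependency can be stubbed.
class Engine {
public:
    virtual ~Engine() = default;
    virtual void start() = 0;
};

// The production implementation.
class EngineImpl : public Engine {
public:
    void start() override { /* spin up the real engine */ }
};

// The test double, generated by Google Mock.
class MockEngine : public Engine {
public:
    MOCK_METHOD(void, start, (), (override));
};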
There are two lines of thought I have come across in researching this:
1. Only use interfaces when you may need more than one implementation of a given abstraction and/or for use in public APIs; otherwise, don't create interfaces unnecessarily.
2. Unit test stubs/mocks often are the "other implementation", and so, yes, you should create interfaces.
When unit testing, should I create an interface for each class in my project? (I'm leaning towards creating interfaces for ease of testing)
I think you've got a number of options. As you say, one option is to create interfaces. Say you have these classes:
class Engine
{
public:
    void start() {}
};
class Car
{
public:
    void start()
    {
        // do car specific stuff
        e_.start();
    }
private:
    Engine e_;
};
To introduce interfaces, you would have to change Car to take an Engine:
class Car
{
public:
    Car(Engine* engine) :
        e_(engine)
    {}
    void start()
    {
        // do car specific stuff
        e_->start();
    }
private:
    Engine* e_; // Engine::start() must now be virtual so a mock can override it
};
If you've only got one type of engine, you've suddenly made your Car objects harder to use (who creates the engines? who owns the engines?). Cars have a lot of parts, so this problem will keep growing.
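One common way to contain the ownership question, as a sketch (this assumes Engine stays a concrete, default-constructible class with a virtual start()):

#include <memory>

// Car owns its Engine via unique_ptr; the default argument keeps the
// everyday "just give me a car" case easy, while tests can still inject
// a mock engine.
class Car
{
public:
    explicit Car(std::unique_ptr<Engine> engine = std::make_unique<Engine>())
        : e_(std::move(engine)) {}
    void start()
    {
        // do car specific stuff
        e_->start();
    }
private:
    std::unique_ptr<Engine> e_;
};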
If you want separate implementations, another way would be with templates. This removes the need for interfaces.
template <typename EngineType = Engine>
class Car
{
public:
    void start()
    {
        // do car specific stuff
        e_.start();
    }
private:
    EngineType e_;
};
In your mocks, you could then create Cars with specialised engines:
Car<MockEngine> testCar;
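A sketch of what the mock side could look like with Google Mock; note that nothing needs to be virtual here, and the engine() accessor on Car is my invention for the test's sake:

#include <gmock/gmock.h>

class MockEngine {
public:
    MOCK_METHOD(void, start, ()); // non-virtual: bound at compile time through the template
};

TEST(CarTest, StartStartsEngine) {
    Car<MockEngine> testCar;
    // Assumes Car exposes its engine for tests via a hypothetical engine()
    // accessor; otherwise keep a reference to the engine before injection.
    EXPECT_CALL(testCar.engine(), start());
    testCar.start();
}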
Another, different approach would be to add methods to Engine that allow it to be tested, something like:
class Engine
{
public:
    void start();
    bool hasStarted() const;
};
You could then either add a check method to Car, or inherit from Car to test.
class TestCar : public Car
{
public:
bool hasEngineStarted() { return e_.hasStarted(); }
};
This would require the Engine member to be changed from private to protected in the Car class.
Which solution is best will depend on the real-world situation. Also, each developer will have their own holy grail of how they believe code should be unit tested. My personal view is to keep the client/customer in mind. Let's assume your clients (perhaps other developers in your team) will be creating Cars and don't care about Engines. I would therefore not want to expose the concept of Engines (a class internal to my library) just so I can unit test the thing. I would opt for not creating interfaces and testing the two classes together (the third option I gave).
There are two categories of testing with regard to implementation visibility: black-box testing and white-box testing.
Black-box testing exercises an implementation through its interfaces and validates that it adheres to its spec.
White-box testing checks granular details of the implementation that SHOULD NOT in general be accessible from the outside. This sort of testing validates that the implementation's components work as intended, so its results are mostly of interest to developers trying to figure out what is broken or needs maintenance.
Mocks by definition fit into modular architectures, but it doesn't follow that all classes in a project need to be entirely modular themselves. It's perfectly fine to draw a line around a group of classes that will know about each other. As a group they can present themselves to other modules through some facade interface class. However, you'll still want white-box test drivers inside this module with knowledge of the implementation details, and this sort of testing is not a good fit for mocks.
It follows from this that you don't need mocks or interfaces for everything. Just take the high-level design components that implement facade interfaces and create mocks for those. That is the sweet spot where mock testing pays off, IMHO.
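As a sketch of the kind of facade seam I mean (all names invented):

// The facade interface: the only thing other modules (and their mocks) see.
class BillingFacade {
public:
    virtual ~BillingFacade() = default;
    virtual void chargeCustomer(int customerId, long cents) = 0;
};

// Behind the facade, a group of classes may know each other intimately;
// white-box tests live inside this module and don't need mocks.
class TaxCalculator { /* ... */ };
class LedgerWriter  { /* ... */ };

class BillingModule : public BillingFacade {
public:
    void chargeCustomer(int customerId, long cents) override {
        // uses TaxCalculator and LedgerWriter directly, no interfaces required
    }
};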
Having said that, try to make the tool serve your needs, rather than letting the tool force you into changes you think will not be beneficial in the long run.
Creating interfaces for every class within your project may or may not be necessary. This is entirely a design decision, and I've found that it mostly isn't. Often in n-tier design you wish to abstract the layer between data access and logic. I would argue that you should work towards this, as it lets you test the logic without much infrastructure behind the tests. Methods of abstraction like dependency injection and IoC require something like this and make it easier to test said logic.
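A minimal sketch of that kind of seam (PersonRepository and its method are made-up names for illustration):

#include <cstddef>
#include <string>
#include <vector>

// Hypothetical data-access abstraction separating logic from storage.
class PersonRepository {
public:
    virtual ~PersonRepository() = default;
    virtual std::vector<std::string> findNamesByCategory(const std::string& category) = 0;
};

// The logic layer receives the repository via constructor injection,
// so tests can pass a stub instead of a database-backed implementation.
class PersonService {
public:
    explicit PersonService(PersonRepository& repo) : repo_(repo) {}
    std::size_t countInCategory(const std::string& category) {
        return repo_.findNamesByCategory(category).size();
    }
private:
    PersonRepository& repo_;
};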
I would examine what you are trying to test and focus on the areas that you view as most prone to error. This can help you decide whether interfaces are necessary.
Suppose I have a struct Car and it has some methods I want to test, for example IgniteEngine, SwitchGear, and Drive. As you can see, Drive depends on the other methods. I need a way to mock IgniteEngine and SwitchGear.
I think I am supposed to use an interface, but I don't quite understand how to accomplish it.
Suppose Car is now an interface
type Car interface {
IgniteEngine()
SwitchGear()
Drive()
}
I can create a MockCar and two mocked functions for IgniteEngine and SwitchGear but now how do I test the source code for Drive?
Do I copy and paste my source code into the mock object? That seems silly. Did I get the idea wrong about how to perform mocking? Does mocking only work when I do dependency injection?
Now what if Drive actually depends on an external library, like a database or a message broker system?
Thank you
I don't think the problem is in the interface per se; it is more in how the Car is implemented. Good testable code favours composition, so if you have something like:
type Engine interface {
Ignite()
}
type Clutch interface {
SwitchGear()
}
Then you can have a Car like this:
type Car struct {
engine Engine
clutch Clutch
}
func (c *Car) IgniteEngine() {
c.engine.Ignite()
}
...
In this way you can substitute engines and clutches in the Car and create mock clutches and engines that produce exactly the behaviour that you need to test your Drive method.
Basically, it's a big juggling game that involves applying interfaces, encapsulation, and abstraction to varying degrees.
By making Car an interface and applying dependency injection, it allows your test to easily exercise the components that rely on car.
func GoToStore(car Honda) {
car.IgniteEngine()
car.Drive()
}
This silly function drives a Honda to the store. It has a tight coupling to the Honda class. Perhaps Hondas are really expensive and you can't afford to use one in your tests. Creating an interface and having GoToStore operate on that interface decouples you from the dependency on Honda. GoToStore becomes Honda-agnostic: it can operate on ANY car, maybe even ANY vehicle. Dependency injection here is amazingly powerful, maybe one of the most powerful things in OOP. It also allows you to trivially stub an in-memory car in a test suite and make assertions on it.
func GoToStore(car Car) {
car.IgniteEngine()
car.Drive()
}
type StubCar struct {
    ignited bool
    driven  bool
}

// SwitchGear is a no-op, needed so StubCar satisfies the full Car interface.
func (c *StubCar) IgniteEngine() { c.ignited = true }
func (c *StubCar) SwitchGear()   {}
func (c *StubCar) Drive()        { c.driven = true }

func TestGoToStore(t *testing.T) {
    c := &StubCar{}
    GoToStore(c)
    assert.True(t, c.ignited)
    assert.True(t, c.driven)
}
The same tricksiness can be applied to your concrete car classes. Suppose you have a Honda that you want to drive around, and the expensive part is the engine. By having your Honda operate on an engine interface, you can then swap the really expensive, powerful engine that you can only afford in production for a weedwacker engine during testing.
Dependencies can be pushed really far, to the boundaries of your application, but at some point something needs to configure the real engine, the real database drivers, the real expensive pieces, etc, and inject those into your production application.
Now what if Drive actually depends on an external library, like a database or a message broker system?
At some level these integrations have to be tested, hopefully in an automated way in your CI that's fast, reliable, and not flaky, but it could really be a one-time manual thing. What dependency injection and interfaces allow is for you to test many of the common use cases using a stub object in memory in a unit test. Even with interfaces and dependency injection, at some point you still have to integrate. Tools like docker-compose make it much more sane to run your database and message broker integration tests quickly in your CI pipeline :)
This could be extended to the concrete honda class.
type Honda struct {
mysql MySQL
}
func (h *Honda) Drive() {
h.mysql.UpdateState("driving")
}
The same principle can be applied here so that you can verify your Honda code works with a stub data store.
type EngineMetrics interface {
UpdateState(string)
}
type Honda struct {
engineMetrics EngineMetrics
}
func (h *Honda) Drive() {
h.engineMetrics.UpdateState("driving")
}
By using dependency injection and interfaces the honda is decoupled from the concrete metrics database implementation, allowing a test stub to be used to verify it works.
It is often said that when writing unit tests one must test only a single class and mock all the collaborators. I am trying to learn TDD to make my code design better, and now I am stuck in a situation where this rule should be broken. Or shouldn't it?
An example: class under test has a method that gets a Person, creates an Employee based on the Person and returns the Employee.
public class EmployeeManager {
private DataMiner dataMiner;
public Employee getCoolestEmployee() {
Person dankestPerson = dataMiner.getDankestPerson();
Employee employee = new Employee();
employee.setName(dankestPerson.getName()); // imagine a subtle bug sneaking in here
return employee;
}
// ...
}
Should Employee be considered a collaborator? If not, why not? If yes, how do I properly test that 'Employee' is created correctly?
Here is the test I have in mind (using JUnit and Mockito):
@Test
public void coolestEmployeeShouldHaveDankestPersonsName() {
when(dataMinerMock.getDankestPerson()).thenReturn(dankPersonMock);
when(dankPersonMock.getName()).thenReturn("John Doe");
Employee coolestEmployee = employeeManager.getCoolestEmployee();
assertEquals("John Doe", coolestEmployee.getName());
}
As you see, I have to use coolestEmployee.getName(), a method of the Employee class, which is not under test.
One possible solution that comes to mind is to extract the task of transforming Persons into Employees into a new method of the Employee class, something like
public Employee createFromPerson(Person person);
Am I overthinking the problem? What is the correct way?
The goal of a unit test is to quickly and reliably determine whether a single system is broken. That doesn't mean you need to simulate the entire world around it, just that you should ensure that collaborators you use are fast, deterministic, and well-tested.
Data objects—POJOs and generated value objects in particular—tend to be stable and well-tested, with very few dependencies. Like other heavily-stateful objects, they also tend to be very tedious to mock, because mocking frameworks don't tend to have powerful control over state (e.g. getX should return n after setX(n)). Assuming Employee is a data object, it is likely a good candidate for actual use in unit tests, provided that any logic it contains is well-tested.
Other collaborators not to mock in general:
JRE classes and interfaces. (Never mock a List, for instance. It'll be impossible to read, and your test won't be any better for it.)
Deterministic third-party classes. (If any classes or methods change to become final, your mock will break; besides, if you're using a stable version of the library, it's not a spurious source of failure either.)
Stateful classes, just because mocks are much better at testing interactions than state. Consider a fake instead, or some other test double.
Fast and well-tested other classes that have few dependencies. If you have confidence in a system, and there's no hazard to your test's determinism or speed, there's no need to mock it.
What does that leave? Non-deterministic or slow service classes or wrappers that you've written, that are stateless or that change very little during your test, and that may have many collaborators of their own. In these cases, it would be hard to write a fast and deterministic test using the actual class, so it makes a lot of sense to use a test double—and it'd be very easy to create one using a mocking framework.
See also: Martin Fowler's article "Mocks Aren't Stubs", which talks about all sorts of test doubles along with their advantages and disadvantages.
Getters that only read a private field are usually not worth testing. By default, you can rely on them pretty safely in other tests. Therefore I wouldn't worry about using dankestPerson.getName() in a test for EmployeeManager.
There's nothing wrong with your test as far as testing goes. The design of the production code might be different: mocking dankestPerson probably means that Person has an interface or abstract base class, which might be a sign of overengineering, especially for a business entity. What I would do instead is just new up a Person, set its name to the expected value, and set up dataMinerMock to return it.
Also, the use of "Manager" in a class name might indicate a lack of cohesion and too broad a range of responsibilities.
My colleagues and I are currently introducing unit tests to our legacy Java EE5 codebase. We use mostly JUnit and Mockito. In the process of writing tests, we have noticed that several methods in our EJBs were hard to test because they did a lot of things at once.
I'm fairly new to the whole testing business, and so I'm looking for insight in how to better structure the code or the tests. My goal is to write good tests without a headache.
This is an example of one of our methods and its logical steps in a service that manages a message queue:
consumeMessages
    acknowledgePreviouslyDownloadedMessages
    getNewUnreadMessages
    addExtraMessages (depending on somewhat complex conditions)
    markMessagesAsDownloaded
    serializeMessageObjects
The top-level method is currently exposed in the interface, while all sub-methods are private. As far as I understand it, it would be bad practice to just start testing private methods, as only the public interface should matter.
My first reaction was to just make all the sub-methods public and test them in isolation, then in the top-level method just make sure that it calls the sub-methods. But then a colleague mentioned that it might not be a good idea to expose all those low-level methods at the same level as the top-level one, as it might cause confusion and other developers might start using them when they should be using the top-level one. I can't fault his argument.
So here I am.
How do you reconcile exposing easily testable low-level methods versus avoiding to clutter the interfaces? In our case, the EJB interfaces.
I've read in other unit test questions that one should use dependency injection or follow the single responsibility principle, but I'm having trouble applying it in practice. Would anyone have pointers on how to apply that kind of pattern to the example method above?
Would you recommend other general OO patterns or Java EE patterns?
At first glance, I would say that we probably need to introduce a new class, which would 1) expose public methods that can be unit tested but 2) not be exposed in the public interface of your API.
As an example, let's imagine that you are designing an API for a car. To implement the API, you will need an engine (with complex behavior). You want to fully test your engine, but you don't want to expose details to the clients of the car API (all I know about my car is how to push the start button and how to switch the radio channel).
In that case, what I would do is something like this:
public class Engine {
public void doActionOnEngine() {}
public void doOtherActionOnEngine() {}
}
public class Car {
private Engine engine;
// the setter is used for dependency injection
public void setEngine(Engine engine) {
this.engine = engine;
}
// notice that there is no getter for engine
public void doActionOnCar() {
engine.doActionOnEngine();
}
public void doOtherActionOnCar() {
engine.doActionOnEngine();
engine.doOtherActionOnEngine();
}
}
For the people using the Car API, there is no way to access the engine directly, so there is no risk of doing harm. On the other hand, it is possible to fully unit test the engine.
Dependency Injection (DI) and Single Responsibility Principle (SRP) are highly related.
SRP basically states that each class should only do one thing and delegate all other matters to separate classes. For instance, your serializeMessageObjects method should be extracted into its own class -- let's call it MessageObjectSerializer.
DI means injecting (passing) the MessageObjectSerializer object as an argument to your MessageQueue object -- either in the constructor or in the call to the consumeMessages method. You can use DI frameworks to do this for you, but I recommend doing it manually to get the concept.
Now, if you create an interface for the MessageObjectSerializer, you can pass that to the MessageQueue, and then you get the full value of the pattern, as you can create mocks/stubs for easy testing. Suddenly, consumeMessages doesn't have to pay attention to how serializeMessageObjects behaves.
Below, I have tried to illustrate the pattern. Note that when you want to test consumeMessages, you don't have to use the MessageObjectSerializer object. You can make a mock or stub that does exactly what you want it to do, and pass it instead of the concrete class. This really makes testing so much easier. Please forgive any syntax errors; I did not have access to Visual Studio, so it is written in a text editor.
// THE MAIN CLASS
public class MyMessageQueue
{
IMessageObjectSerializer _serializer;
//Constructor that gets the serialization logic injected
public MyMessageQueue(IMessageObjectSerializer serializer)
{
_serializer = serializer;
//Also a lot of other injection
}
//Your main method. Now it calls an external object to serialize
public void consumeMessages()
{
//Do all the other stuff
_serializer.serializeMessageObjects();
}
}
//THE SERIALIZER CLASS
public class MessageObjectSerializer : IMessageObjectSerializer
{
public List<MessageObject> serializeMessageObjects()
{
//DO THE SERIALIZATION LOGIC HERE
}
}
//THE INTERFACE FOR THE SERIALIZER
public interface IMessageObjectSerializer
{
List<MessageObject> serializeMessageObjects();
}
EDIT: Sorry, my example is in C#. I hope you can use it anyway :-)
Well, as you have noticed, it's very hard to unit test a concrete, high-level program. You have also identified the two most common issues:
Usually the program is configured to use specific resources, such as a specific file, IP address, hostname, etc. To counter this, you need to refactor the program to use dependency injection. This is usually done by adding parameters to the constructor that replace the hardcoded values.
It's also very hard to test large classes and methods. This is usually due to the combinatorial explosion in the number of tests required to cover a complex piece of logic. To counter this, you will usually refactor first to get lots more (but shorter) methods, then try to make the code more generic and testable by extracting several classes from your original class, each with a single public entry method and several private utility methods. This is essentially the single responsibility principle.
Now you can start working your way "up" by testing the new classes. This will be a lot easier, as the combinatorics are much easier to handle at this point.
At some point along the way you will probably find that you can simplify your code greatly by using these design patterns: Command, Composite, Adapter, Factory, Builder and Facade. These are the most common patterns that cut down on clutter.
Some parts of the old program will probably be largely untestable, either because they are just too crufty, or because it's not worth the trouble. Here you can settle for a simple test that just checks that the output from known input has not changed. Essentially a regression test.
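For instance, a minimal sketch of such a regression ("golden master") check, here written with GoogleTest; runLegacyPath and the file names are hypothetical stand-ins:

#include <gtest/gtest.h>
#include <fstream>
#include <sstream>
#include <string>

// Stand-in for the crufty legacy entry point we refuse to refactor.
std::string runLegacyPath(const std::string& inputFile);

// Hypothetical helper: read a previously captured output snapshot.
static std::string readFile(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream out;
    out << in.rdbuf();
    return out.str();
}

TEST(LegacyPath, OutputUnchangedForKnownInput) {
    EXPECT_EQ(readFile("golden/known-input.expected.txt"),
              runLegacyPath("known-input.dat"));
}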
I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My approach to testing this was to create a PersonDaoTest with about 18 test methods, one for each of the methods in PersonDao.
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in any way against TDD best practices? I was told that this creates confusion and is against best practices since it is non-standard. What was suggested instead was merging the two classes into PersonDaoTest.
As I understand it, the more your code is broken down into many classes, the better. Please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests that your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filtering and for pagination. You may want to take a look at whether you can do a bit of design work to extract the pagination functionality into a separate class, which could then be tested independently.
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture. If the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class... in practice I find I have n test classes per production class. The best practice would be to use good names and keep related tests close (in a named folder).
My 2 cents: when you have a large class like that that has different "facets" to it, like pagination, I find it can often make for more understandable tests to not pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using common test code with callbacks. For a trivial example, to test that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}
public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 0);
}
This static method can be used by any class to test whether "some method" returns multiple hits. You could extend this to run lots of tests over the same callback, for example running it with and without a DB connection up, testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. Tests are very complex, and we figured out that the only way to implement them in a way that meets all our requirements was having one test per class. So we consider each class to be an individual test, and then we were able to use its methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String, Object> data) {}
    public void step_openSignUpPage() {}
    public void step_fillForm() {}
    public void step_submitForm() {}
    public void step_verifySignUpWasSuccessfull() {}
}
All the steps are dependent: they follow the order specified, and if one fails the following ones will not be executed.
Of course, each step is a test by itself, but all together they form the sign-up test.
The requirements were something like:
Tests must be data driven, that is, execute the same test in parallel with different inputs.
Tests must run in different browsers in parallel as well. So each test will run "input_size x browsers_count" times in parallel.
Tests will focus on a web workflow, for example "sign up with valid data", and will be split into smaller test units for each step of the workflow. This makes things easier to maintain and debug (when a failure says SignUpTest.step_fillForm(), you know immediately what's wrong).
Test steps share the same test input and state (for example, the id of the user created). Imagine if you put steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage() {}
    public void signUpTest_step_fillForm() {}
    public void signUpTest_step_submitForm() {}
    public void signUpTest_step_verifySignUpWasSuccessfull() {}
    public void signUpNegativeTest_step_openSignUpPage() {}
    public void signUpNegativeTest_step_fillFormWithInvalidData() {}
    public void signUpNegativeTest_step_submitForm() {}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessfull() {}
}
Then having state belonging to the two tests in the same class would be a mess.
I hope I was clear and you find this useful. In the end, choosing what will represent your test (a class or a method) is just a decision that I think will depend on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how you make a failure more precise and easier to debug, what will lead you to more readable code, etc.
I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Our working relationship with these divisions is often as though they were third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" //which contains deep #include dependencies...
#include "tictactoe.h" //...and need to exist at compile time to get into test...
class Something //which may or may not inherit from another class similar to this...
{
public:
virtual void fxn1(void); //which often calls into many other classes, similar to this
//...
int data1; //will be the only thing I can test against, but is often meaningless without fxn1 implemented
//...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers on a need-to-know basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"
void CGlobalThermoNuclearWar::Simulate(void) {}; // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...
void Something::fxn1(void) { data1 = blah(); } //test specific functionality.
But before I start doing that, I was wondering if anyone has tried providing actual objects on a code base similar to mine, which would allow creating new test-specific classes to use in place of the actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google has a good mocking framework for C++: Google Mock.
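Since the question's Something already declares fxn1 as virtual, Google Mock can override it directly; a minimal sketch (the test body is mine, for illustration):

#include <gmock/gmock.h>
#include "ACommonClass.h"

// Works because Something::fxn1 is already virtual in the third-party header.
class MockSomething : public Something {
public:
    MOCK_METHOD(void, fxn1, (), (override));
};

TEST(UsesSomething, CallsFxn1) {
    MockSomething mock;
    EXPECT_CALL(mock, fxn1());
    mock.fxn1(); // in real code, pass &mock to the code under test instead
}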
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing, creating a different file with fake implementations, and having my tests link the fake behaviour while production code links the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
    ConcreteProductionClass(); // I've found the 0-arg constructor useful
public:
    void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
    : public ConcreteProductionClass
    , public ClassThatLetsMeSetExpectations
{
    friend class ConcreteProductionClass;
    MockTypes membersNeededToSetExpectations;
public:
    MockProductionClass() : ConcreteProductionClass() {}
};

// The test build links this definition instead of the production one.
void ConcreteProductionClass::regularFunction() {
    membersNeededToSetExpectations.PassOrFailTheTest();
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass& c) { // by reference, so a mock isn't sliced
    c.regularFunction();
}
Test.cpp
TEST(myTest) {
MockProductionClass m;
m.SetExpectationsAndReturnValues();
doSomething(m);
ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this in my spare time, but it's not moving along very quickly. Even if I got my method working the way I want, and had the expectation-setting code in place, this method still has a couple of drawbacks. One of them is that your build commands can get kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it. Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic class.
Define a test class using that same interface. This test class could have nothing but no-ops, or fancy code that simulates the real classes (see the sketch below).
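A minimal sketch of that extraction, with all names invented for illustration:

// Hypothetical seam back to the untestable base class.
class IHardwareAccess {
public:
    virtual ~IHardwareAccess() = default;
    virtual int readSensor() = 0;
};

// Logic-only class: all the code I want to test lives here, reachable
// without ever constructing the real base class.
class SensorLogic {
public:
    explicit SensorLogic(IHardwareAccess& hw) : hw_(hw) {}
    bool isOverThreshold(int limit) { return hw_.readSensor() > limit; }
private:
    IHardwareAccess& hw_;
};

// Test double: nothing but canned answers.
class FakeHardware : public IHardwareAccess {
public:
    int value = 0;
    int readSensor() override { return value; }
};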
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface. All interface calls are forwarded to the real object's methods.
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
I'm sure all this is in Feathers' Working Effectively with Legacy Code book - and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
One thing you did not indicate in your question is the reason why your classes derive from base classes from the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of their class in the unit tests.
Otherwise, an idea would be to write a script to extract and create the interfaces you need from the headers they provide, and integrate this into the compilation process so your unit tests can be checked in.