How can I stub/mock non-pointer member variables in C++? - c++

Many websites on unit testing say to extract an interface and code to the interface (which makes sense), but that requires using polymorphism via pointers. Is it possible to accomplish this without pointers so I don't have to modify the production code? I would rather not use pointers and manage memory.
Conditional compilation is allowed.
I am specifically using gmock for my stubs/mocks.
Some things that I've researched are:
Using references
involves writing special copy constructors or making it non-copyable
still have to manage memory with new/delete
not sure if this will cause unforeseen problems down the line
Creating via code-generation a collection of pointer-wrapper classes. The interface stays the same with a few added methods for testing.
seems like it would work but require upkeep
example of what I mean down below
Please note that gmock mock objects are not copyable, therefore I cannot constructor inject them. (https://groups.google.com/forum/#!topic/googlemock/GD73UXjQowE/discussion)
Problem Example
class Example
{
public:
    Example();
    ~Example();

private:
    // I want to stub out _foo.
    Dependency _foo;
};
Pointer Wrapper Class Example
#ifndef UNIT_TEST
Foo _foo;
#else
PtrWrapFoo _foo;
#endif
...
_foo.setImpl(aStubFoo); // where aStubFoo is a StubFoo*
...
void PtrWrapFoo::doSomething()
{
    _impl->doSomething();
}
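A fuller sketch of what one generated wrapper might look like (assuming StubFoo derives from Foo and that the wrapper owns whichever implementation it holds):
class PtrWrapFoo
{
public:
    PtrWrapFoo() : _impl(new Foo()) {}            // defaults to the real implementation
    ~PtrWrapFoo() { delete _impl; }

    void setImpl(Foo *impl)                       // test hook: swap in a StubFoo/mock
    {
        delete _impl;
        _impl = impl;                             // the wrapper takes ownership
    }

    void doSomething() { _impl->doSomething(); }  // forward every public method

private:
    Foo *_impl;
};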

In the past I have implemented Dependency in a separate compilation unit and linked against that instead of the original.
This is what Michael Feathers calls a Link Seam.
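A minimal sketch of that idea (file names are illustrative; both .cpp files define the same member functions, and each binary links exactly one of them):
// dependency.h -- shared by both builds
class Dependency
{
public:
    int foo();
};

// dependency.cpp -- linked into the production binary
#include "dependency.h"
int Dependency::foo()
{
    // ... real implementation ...
    return 0;
}

// test_dependency.cpp -- linked into the test binary instead of dependency.cpp
#include "dependency.h"
int Dependency::foo()
{
    return 42;   // canned behaviour for tests
}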

After careful consideration, I have decided to abandon the idea. It becomes too difficult to try and manage the circumstances in which the dependency should use a particular implementation (e.g. real, mock, fake), given the testing scenario.
All dependencies that need testing now have interfaces, which are memberless. My production code uses pointers for dependencies, which is a reality I have to live with if I want testable code. I was persuaded to this notion after reading Roy Osherove's book, The Art of Unit Testing. My regular constructors instantiate the real, concrete class. I also have extra constructors/setters that are conditionally compiled for unit tests so I can properly set up dependencies using stubs/mocks.
I have reduced my need to write extra code by using a tool to extract an interface from a class.
Overall, the new design is good and adequately sidesteps the problem of mocking non-pointer member variables with minimal overhead.
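A rough sketch of the resulting shape (the UNIT_TEST macro, the simplified ownership, and the assumption that Dependency implements IDependency are illustrative details, not the actual production code):
class IDependency                 // memberless interface extracted from Dependency
{
public:
    virtual ~IDependency() {}
    virtual void doSomething() = 0;
};

class Example
{
public:
    Example() : _foo(new Dependency()) {}               // regular constructor: real concrete class
#ifdef UNIT_TEST
    explicit Example(IDependency *foo) : _foo(foo) {}   // test-only constructor: inject a stub/mock
#endif
    ~Example() { delete _foo; }                         // ownership kept deliberately simple here
private:
    IDependency *_foo;   // a pointer, so a non-copyable gmock mock can be passed in by address
};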

Related

In a C++ unit test context, should an abstract base class have other abstract base classes as function parameters?

I am trying to implement unit tests for our C++ legacy code base. I read through Michael Feathers' "Working Effectively with Legacy Code" and got some ideas on how to achieve my goal. I use GoogleTest/GoogleMock as the framework and have already implemented some first tests involving mock objects.
To do that, I tried the "Extract interface" approach, which worked quite well in one case:
class MyClass
{
    ...
    void MyFunction(std::shared_ptr<MyOtherClass> parameter);
};
became:
class MyClass
{
    ...
    void MyFunction(std::shared_ptr<IMyOtherClass> parameter);
};
and I passed a ProdMyOtherClass in production and a MockMyOtherClass in test. All good so far.
But now, I have another class using MyClass like:
class WorkOnMyClass
{
    ...
    void DoSomeWork(std::shared_ptr<MyClass> parameter);
};
If I want to test WorkOnMyClass and I want to mock MyClass during that test, I have to extract an interface again. And that leads to my question, which I couldn't find an answer to so far: what would the interface look like? My guess is that it should be all abstract, so:
class IMyClass
{
    ...
    virtual void MyFunction(std::shared_ptr<IMyOtherClass> parameter) = 0;
};
That leaves me with three files for every class: an all-virtual base interface class, a production implementation using production parameters, and a mock implementation using mock parameters. Is this the correct approach?
I only found simple examples, where function parameters are primitives, but not classes, which in turn need tests themselves (and may therefore require interfaces).
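For concreteness, the three files for MyClass might look roughly like this (a sketch assuming GoogleMock and showing only MyFunction):
// IMyClass.h -- the all-virtual interface
#include <memory>
class IMyOtherClass;   // itself an extracted interface

class IMyClass
{
public:
    virtual ~IMyClass() {}
    virtual void MyFunction(std::shared_ptr<IMyOtherClass> parameter) = 0;
};

// MyClass.h -- the production implementation
class MyClass : public IMyClass
{
public:
    void MyFunction(std::shared_ptr<IMyOtherClass> parameter) override;
};

// MockMyClass.h -- the test double
#include <gmock/gmock.h>
class MockMyClass : public IMyClass
{
public:
    MOCK_METHOD(void, MyFunction, (std::shared_ptr<IMyOtherClass> parameter), (override));
};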
TLDR in bold
As Jeffery Coffin has already pointed out, there is no one right way to do what you're seeking to accomplish. There is no "one-size fits all" in software, so take all these answers with a grain of salt, and use your best judgement for your project and circumstances. That being said, here's one potential alternative:
Beware of mocking hell:
The approach you've outlined will work, but it might not be best (or it might be; only you can decide). Typically the reason you're tempted to use mocks is that there's some dependency you're looking to break. Extract Interface is an okay pattern, but it's probably not resolving the core issue. I've leaned heavily on mocks in the past and have had situations where I really regretted it. They have their place, but I try to use them as infrequently as possible, and with the lowest-level and smallest possible class. You can get into mocking hell, which you're about to enter since you have to reason about your mocks having mocks. Usually when this happens it's because there's an inheritance/composition structure and the base/children share a dependency. If possible, you want to refactor so that the dependency isn't so heavily ingrained in your classes.
Isolating the "real" dependency:
A better pattern might be Parameterize Constructor (another Michael Feathers WEWLC pattern).
WLOG, let's say your rogue dependency is a database (maybe it's not a database, but the idea still holds). Maybe MyClass and MyOtherClass both need access to it. Instead of extracting an interface for both of these classes, try to isolate the dependency and pass it into the constructors of each class.
Example:
class MyClass {
public:
    MyClass(...) : ..., db(new ProdDatabase()) {} // Old constructor, but give it a "default" database now
    MyClass(..., Database* db) : ..., db(db) {}   // New constructor
    ...
private:
    Database* db; // Decide on semantics about owning a database object, maybe you want to have the destructor of this class handle it, or maybe not
    // MyOtherClass* moc; // Maybe, depends on what you're trying to do
};
and
class MyOtherClass {
public:
    // similar to above, but you might want to disallow this constructor if it's too risky to have two different dependency objects floating around.
    MyOtherClass(...) : ..., db(new ProdDatabase()) {}
    MyOtherClass(..., Database* db) : ..., db(db) {}
private:
    Database* db; // Ownership?
};
And now that we see this layout, it makes us realize that you might even want MyOtherClass to simply be a member of MyClass (depends what you're doing and how they're related). This will avoid mistakes in instantiating MyOtherClass and ease the burden of the dependency ownership.
Another alternative is to make the Database a singleton to ease the burden of ownership. This will work well for a Database, but in general the singleton pattern won't hold for all dependencies.
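To make the payoff concrete, a test against the parameterized constructor might look something like this (a sketch using GoogleMock; it assumes Database declares virtual methods, and the Query method is made up):
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

class MockDatabase : public Database
{
public:
    MOCK_METHOD(int, Query, (const std::string&), (override));   // hypothetical method
};

TEST(MyClassTest, UsesInjectedDatabase)
{
    MockDatabase db;
    EXPECT_CALL(db, Query(::testing::_)).WillOnce(::testing::Return(7));

    MyClass myClass(/* ..., */ &db);   // inject through the new constructor
    // ... exercise myClass and assert on the result ...
}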
Pros:
Allows for clean (standard) dependency injection, and it tackles the core issue of isolating the true dependency.
Isolating the real dependency makes it so that you avoid mocking hell and can just pass the dependency around.
Better future-proofed design, high reusability of the pattern, and likely less complexity. The next class that needs the dependency won't have to set up its own mocks; it just takes the dependency in as a parameter.
Cons:
This pattern will probably take more time/effort than Extract Interface. In legacy systems, sometimes this doesn't fly. I've committed all sorts of sins because we needed to move a feature out...yesterday. It's okay, it happens. Just be aware of the design gotchas and technical debt you accrue...
It's also a bit more error prone.
Some general legacy tips I use (the things WEWLC doesn't tell you):
Don't get hell-bent about avoiding a dependency if you don't need to avoid it. This is especially true when working with legacy systems where refactorings are risky in general. Instead, you can have your tests call an actual database (or whatever the dependency is), but have the test suite connect to a small "test" database instead of the "prod" database. The cost of standing up a small test db is usually quite small. The cost of crashing prod because you goofed up a mock or a mock fell out of sync with reality is typically a lot higher. This will also save you a lot of coding.
Avoid mocks (especially heavy mocking) where possible. I am becoming more and more convinced as I age as a software engineer that mocks are mini-design smells. They are the quick and dirty: but usually illustrate a larger problem.
Envision the ideal API, and try to build what you envision. You can't actually build the ideal API, but imagine you can refactor everything instantly and have the API you desire. This is a good starting point for improving a legacy system, and make tradeoffs/sacrifices with your current design/implementation as you go.
HTH, good luck!
The first point to keep in mind is that there probably is no one way that's right and the others wrong--any answer is a matter of opinion as much as fact (though the opinions can be informed by fact).
That said, I'd urge at least a little caution against the use of inheritance for this case. Most such books/authors are oriented pretty heavily toward Java, where inheritance is treated as the Swiss army knife (or perhaps Leatherman) of techniques, used for every task where it might sort of come close to making a little sense, regardless of whether it's really the right tool for the job or not. In C++, inheritance tends to be viewed much more narrowly, used only when/if/where there's nearly no alternative (and the alternative is to hand-roll what's essentially inheritance on your own anyway).
The primary unique feature of inheritance is run-time polymorphism. For example, we have a collection of (pointers to) objects, and the objects in the collection aren't all the same type (but are all related via inheritance). We use virtual functions to provide a common interface to the objects of the various types.
At least as I read things, that's not the case here at all though. In a given build, you'll deal with either mock objects or production objects, but you'll always know at compile time whether the objects in use are mock or production--you won't ever have a collection of a mixture of mock objects and production objects, and need to determine at run time whether a particular object is mock or production.
Assuming that's correct, inheritance is almost certainly the wrong tool for the job. When you're dealing with static polymorphism (i.e., the behavior is determined at compile time) there are better tools (albeit ones Feathers and company apparently feel obliged to ignore, simply because Java fails to provide them).
In this case, it's pretty trivial to handle all the work at build time, without polluting your production code with any extra complexity at all. For one example, you can create a source directory with mock and production subdirectories. In the mock directory you have foo.cpp, bar.cpp and baz.cpp that implement the mock versions of classes Foo, Bar and Baz respectively. In the production directory you have production versions of the same. At build time, you tell the build tool whether to build the production or mock version, and it chooses the directory where it gets the source code based on that.
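As a sketch of that layout (contents assumed; the build system simply compiles one directory or the other):
// include/foo.h -- one header shared by both configurations
class Foo
{
public:
    int value();
};

// production/foo.cpp -- compiled into production builds
#include "foo.h"
int Foo::value()
{
    // ... real implementation ...
    return 0;
}

// mock/foo.cpp -- compiled into test builds instead; it can expose knobs for tests
#include "foo.h"
namespace test_knobs { int foo_value = 0; }   // tests set this to steer behaviour
int Foo::value()
{
    return test_knobs::foo_value;
}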
Semi-unrelated aside
I also note that you're using a shared_ptr as a parameter. This is yet another huge red flag. I find uses for shared_ptr to be exceptionally rare. The vast majority of times I've seen it used, it wasn't really what should have been used. A shared_ptr is intended for cases of shared ownership of an object--but most use seems to be closer to "I haven't bothered to figure out the ownership of this object". The shared_ptr isn't all that huge of a problem in itself, but it's usually a symptom of larger problems.

Is it OK for code that gets a dependency by dependency injection to use a default implementation of the dependency unless told otherwise?

Suppose my production code starts off like this:
public class SmtpSender
{
    ....
}

public void DoCoolThing()
{
    var smtpSender = new SmtpSender("mail.blah.com"....);
    smtpSender.Send(...)
}
I have a brainwave and decide to unit test this function using a fake SmtpSender and dependency injection:
public interface ISmtpSender
{
    ...
}

public class SmtpSender : ISmtpSender
{
    ...
}

public class FakeSmtpSender : ISmtpSender
{
    ...
}

public void DoCoolThing(ISmtpSender smtpSender)
{
    smtpSender.Send(...)
}
Then I think: wait, all my production code will always want the real one. So why not make the parameter optional, and fill it in with a default SmtpSender if not provided?
public void DoCoolThing(ISmtpSender smtpSender = null)
{
    smtpSender = smtpSender ?? new SmtpSender("...");
    smtpSender.Send(...);
}
This allows the unit test to fake it while leaving the production code unaffected.
Is this an acceptable implementation of dependency injection?
If not, what are the pitfalls?
No, having callers depend on concrete implementations of dependencies is not the best way to do dependency injection. Component implementations should always depend only on component interfaces, not on other component implementations.
If one implementation depends on another,
It gives the caller a completely different kind of responsibility than it would otherwise have, that of configuring itself. The caller will thus be less coherent.
It gives the caller a dependency on a class that it would not otherwise have. That means that tests of the caller can break if the dependency is broken, classes needed by the dependency are loaded when the caller is loaded instead of when they're needed, and in compiled languages there would be a compilation-time dependency. All of these make developing large systems harder.
If more than one caller wants the same dependency, each such caller might use this same trick. Then you would have duplication of the concrete dependency among callers, instead of having the concrete dependency in one place (wherever you're specifying all your injected dependencies).
It prevents you from having a useful kind of circular dependency among implementations. Very occasionally I've needed implementations to depend on one another in a circular way. Pure dependency injection makes that relatively safe: implementations always depend on interfaces, so there isn't a compile-time circularity. (Dependency injection frameworks make it possible to construct all the implementations despite the circularity by interposing proxies.) With a concrete dependency, you'll never be able to do that.
I would address the issue you're trying to address by putting the production choice of implementation in the code or config file or whatever that specifies all of your implementations.
It's a valid approach to use, but needs to be strictly limited to certain scenarios. There are several pitfalls to watch out for.
Pros:
Establishes a seam in your code, allowing different implementations to be used as collaborators
Allows clients to use your code without having to specify a collaborator - particularly if most clients would end up using the exact same collaborator
Cons:
Introduces a hidden collaborator into the system
Tightly couples the SUT to the concrete class
Even though other implementations could be provided, if the concrete class changes it will directly affect the SUT (e.g., the constructor changes)
Can easily tightly couple the SUT to other classes as well
For example: not only do you create the dependency, you create all of its dependencies too
In general, it's something you should only reserve for very lightweight defaults (think Null Objects). Otherwise it's very easy for things to quickly spin out of control.
What you're trying to do is similar to using a default constructor in order to make things a little more convenient. You should determine if the convenience is going to have long term benefits, or if it will ultimately come back to haunt you.
Even though you're not using a default constructor, I think you should consider the parallels. Mark Seemann suggests that default constructors could be a design smell: (emphasis mine)
Default constructors are code smells. There you have it. That probably sounds outrageous, but consider this: object-orientation is about encapsulating behavior and data into cohesive pieces of code (classes). Encapsulation means that the class should protect the integrity of the data it encapsulates. When data is required, it must often be supplied through a constructor. Conversely, a default constructor implies that no external data is required. That's a rather weak statement about the invariants of the class.
With all of this in mind, I think the example that you provided would be a risky approach unless the default SMTP sender is a Null Object (for example: doesn't actually send anything). Otherwise there is too much information hidden from the composition root, which just opens the gate to unpleasant surprises.

fake/mock nonvirtual C++ methods

It is known that in C++, mocking/faking non-virtual methods for testing is hard. For example, the googlemock cookbook has two suggestions, and both involve modifying the original source code in some way (templating, or rewriting it against an interface).
This appears to be a very bad problem for C++ code. How can it best be handled if you can't modify the original code that needs to be faked/mocked? By duplicating the whole code/class (with its whole base class hierarchy?)
One way that we sometimes use is to split the original .cpp file into at least two parts.
Then the test apparatus can supply its own implementations; effectively using the linker to do the dirty work for us.
This is called the "Link Seam" in some circles.
I followed the Link Seam link from sdg's answer. There I read about different types of seams, but I was most impressed by Preprocessing Seams. This made me think about exploiting the preprocessor further. It turned out that it is possible to mock any external dependency without actually changing the calling code.
To do this, you have to compile the calling source file with a substitute dependency definition.
Here is an example how to do it.
dependency.h
#ifndef DEPENDENCY_H
#define DEPENDENCY_H

class Dependency
{
public:
    //...
    int foo();
    //...
};

#endif // DEPENDENCY_H
caller.cpp
#include "dependency.h"
int bar(Dependency& dependency)
{
return dependency.foo() * 2;
}
test.cpp
#include <assert.h>

// block original definition
#define DEPENDENCY_H

// substitute definition
class Dependency
{
public:
    int foo() { return 21; }
};

// include code under test
#include "caller.cpp"

// the test
void test_bar()
{
    Dependency mockDependency;
    int r = bar(mockDependency);
    assert(r == 42);
}
Notice that the mock does not need to implement the complete Dependency class, just the minimum used by caller.cpp, so the test can compile and execute.
This way you can mock non-virtual methods, static or global functions, or almost any dependency without changing the production code.
Another reason I like this approach is that everything related to the test is in one place. You don't have to tweak compiler and linker configurations here and there.
I have applied this technique successfully on a real world project with big fat dependencies.
I have described it in more detail in Include mock.
Code has to be written to be testable, by whatever test techniques you use. If you want to test using mocks, that means some form of dependency injection.
Non-virtual calls with no dependence on a template parameter pose the same problem as final and static methods in Java[*] - the code under test has explicitly said, "I want to call this code, not some unknown bit of code that's dependent in some way on an argument". You, the tester, want it to call different code under test from what it normally calls. If you can't change the code under test then you, the tester, will lose that argument. You might as well ask how to introduce a test version of line 4 of a 10-line function without changing the code under test.
If the class to be mocked is in a different TU from the class under test, you can write a mock with the same name as the original and link that instead. Whether you can generate that mock using your mocking framework in the normal way, I'm not so sure.
If you like, I suppose it's a "very bad problem for C++" that it's possible to write code that's hard to test. It shares this "problem" with a great number of other languages...
[*] My Java knowledge is quite low-power. There may be some clever way of mocking such methods in Java, which aren't applicable to C++. If so, please disregard them in order to see the analogy ;-)
I think it is not possible to do this with standard C++ right now (but let's hope that powerful compile-time reflection will come to C++ soon...). However, there are a number of options for doing so.
You might have a look at Injector++. It is Windows-only right now, but there are plans to add support for Linux & Mac.
Another option is CppFreeMock, which seems to work with GCC, but has had no recent activity.
HippoMocks also provides this ability, but only for free functions; it doesn't support it for class member functions.
I'm not completely sure, but it seems that all of the above achieve this by overwriting the target function at runtime so that it jumps to the faked function.
Then there is C-Mock, which is an extension to Google Mock that allows you to mock non-virtual functions by redefining them, relying on the fact that the original functions are in dynamic libraries. It is limited to the GNU/Linux platform.
Finally, you might also try PowerFake (for which, I'm the author) as introduced here.
It is not a mocking framework (currently), but it provides the ability to replace production functions with test ones. I hope to integrate it with one or more mocking frameworks; if not, it'll become one.
Update: It has an integration with FakeIt.
Update 2: Added support for Google Mock
It also overrides the original function during linking (so it won't work if a function is called in the same translation unit in which it is defined), but it uses a different trick than C-Mock: GNU ld's --wrap option. It also needs some changes to your build system for tests, but it doesn't affect the main code in any way (except that you may be forced to put a function in a separate .cpp file); support for easily integrating it into CMake projects is provided.
But, it is currently limited to GCC/GNU ld (works also with MinGW).
Update: It supports GCC & Clang compilers, and GNU ld & LLVM lld linkers (or any compatible linker).
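For reference, the underlying --wrap mechanism looks roughly like this for a free function (a hypothetical example; wrapping C++ member functions requires their mangled names, which is part of what such tools automate):
// the production code calls this free function (hypothetical dependency)
extern "C" int read_sensor();

// wrap.cpp -- linked into the test binary, built with: -Wl,--wrap=read_sensor
extern "C" int __real_read_sensor();   // the original symbol stays reachable under this name

extern "C" int __wrap_read_sensor()    // every call to read_sensor() is redirected here
{
    return 42;                         // canned value for the test
}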
@zaharpopov, you can use Typemock IsolatorPP to create mocks of non-virtual classes and methods without changing your code (or legacy code).
For example, if you have a non-virtual class called MyClass:
class MyClass
{
public:
    int GetResult() { return -1; }
};
you can mock it with typemock like so:
MyClass* fakeMyClass = FAKE<MyClass>();
WHEN_CALLED(fakeMyClass->GetResult()).Return(10);
By the way, the classes or methods that you want to test can also be private, as Typemock can mock them too. For example:
class MyClass
{
private:
    int PrivateMethod() { return -1; }
};

MyClass* myClass = new MyClass();
PRIVATE_WHEN_CALLED(myClass, PrivateMethod).Return(1);
for more information go here.
You very specifically say "if you can't modify original code", which the techniques you mention in your question (and all the other current "answers") do.
Without changing that source, you can still generally (for common OSes/tools) preload an object that defines its own version of the function(s) you wish to intercept. They can even call the original functions afterwards. I provided an example of doing this in (my) question Good Linux TCP/IP monitoring tools that don't need root access?.
That is easier than you think. Just pass the constructed object to the constructor of the class you are testing, and store a reference to that object in the class. Then it is easy to use mock classes.
EDIT:
The object that you are passing to the constructor needs an interface, and the class stores just a reference to that interface.
struct Abase
{
    virtual ~Abase() {}
    virtual void foo() = 0;
};

struct Aimp : public Abase
{
    virtual ~Aimp() {}
    virtual void foo() { /* do stuff */ }
};

struct B
{
    B( Abase &a ) : objA( a )
    {
    }
    void boo()
    {
        objA.foo();
    }
    Abase &objA;
};

int main()
{
    //...
    Aimp realObjA;
    B objB( realObjA );
    // ...
}
In the test, you can easily pass the mock object instead.
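For instance, with GoogleMock (a sketch building on the code above):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

struct MockA : public Abase
{
    MOCK_METHOD(void, foo, (), (override));
};

TEST(BTest, BooCallsFoo)
{
    MockA mockA;
    EXPECT_CALL(mockA, foo()).Times(1);

    B objB(mockA);   // B only sees the Abase interface, so the mock drops right in
    objB.boo();
}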
I used to create an interface for the parts I needed to mock. Then I simply created a stub class that derived from this interface and passed an instance of it to the classes under test. Yes, it is a lot of hard work, but I found it worth it in some circumstances.
Oh, by interface I mean a struct with only pure virtual methods. Nothing else!

Interfaces vs Templates for dependency injection in C++

To be able to unit test my C++ code I usually pass the constructor of the class under test one or several objects that can be either "production code" or fake/mock objects (let's call these injection objects). I have done this either by
Creating an interface that both the "production code" class and the fake/mock class inherits.
Making the class under test a template class that takes the types of the injection objects as template parameters, and instances of the injection objects as parameters to the constructor. (Both approaches are sketched just below.)
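A rough sketch of the two approaches, with hypothetical names:
#include <string>

// 1) Interface-based injection
class ILogger
{
public:
    virtual ~ILogger() {}
    virtual void Log(const std::string& msg) = 0;
};

class Worker
{
public:
    explicit Worker(ILogger& logger) : logger_(logger) {}
    void Run() { logger_.Log("running"); }
private:
    ILogger& logger_;
};

// 2) Template-based injection: Logger only needs a matching Log() member
template <typename Logger>
class TWorker
{
public:
    explicit TWorker(Logger& logger) : logger_(logger) {}
    void Run() { logger_.Log("running"); }
private:
    Logger& logger_;
};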
Some random thoughts:
Until we have concepts (C++0x), only documentation and parameter naming will hint at what to provide to the class under test (when using templates).
It is not always possible to create interfaces for legacy code
The interface is basically only created to be able to do dependency injection
In the same way: templating the class under test is done only to enable dependency injection
What are your thoughts? Are there other solutions to this problem?
With C++, there's another option - you give your mock classes the exact same names as the real classes, and when linking your unit tests, you just link them against the mock object/library files instead of the real ones.
I think the interface option is better, but you don't have to create a common base class just for tests. You can inherit your mock class from the production class and override the necessary methods. You'll have to make the methods virtual, though, but that's how tools like mockpp work, and they also help automate this process a little bit.
Templates will have slightly lower runtime performance penalties (fewer indirections, fewer calls, more inlining opportunities), but will make you suffer a very high penalty in compilation times...
I think that for this purpose, interfaces are better (until we have concepts in C++0x TR1)... unless you can't afford to slow down some "bottleneck code". Interfaces are more dynamic and switchable at run-time.
Remember that you can construct your class with default injection objects (the real ones), but you can have factories that inject the mock ones in your tests... you don't even need to subclass.
Don't know if it helps, but you can have template constructors:
struct Class_Under_Test
{
    template <typename Injected>
    Class_Under_Test(Injected& injected)
    {
        // ...
    }
};

// and even specialize them (full specializations live at namespace scope)
template <>
Class_Under_Test::Class_Under_Test(A_Specific_Injection_Class& injected)
{
    // ...
}

Only the constructor that is actually used will get instantiated.

How Do You Create Test Objects For Third Party Legacy Code

I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Working with these other divisions is often like working with third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" //which contains deep #include dependencies...
#include "tictactoe.h" //...and need to exist at compile time to get into test...
class Something //which may or may not inherit from another class similar to this...
{
public:
    virtual void fxn1(void); //which often calls into many other classes, similar to this
    //...
    int data1; //will be the only thing I can test against, but is often meaningless without fxn1 implemented
    //...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers, on a need-to-know basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"
void CGlobalThermoNuclearWar::Simulate(void) {}; // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...
void Something::fxn1(void) { data1 = blah(); } //test specific functionality.
But before I start doing that, I was wondering if anyone has tried providing actual objects on a code base similar to mine, which would allow creating new test-specific classes to use in place of the actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google have a good mocking framework for C++.
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing: I create a different file with fake implementations, have my tests link the fake behaviour, and have the production code link the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
    ConcreteProductionClass(); // I've found the 0 arg constructor useful
public:
    void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
    : public ConcreteProductionClass
    , public ClassThatLetsMeSetExpectations
{
    friend class ConcreteProductionClass;
    MockTypes membersNeededToSetExpectations;
public:
    MockProductionClass() : ConcreteProductionClass() {}
};

void ConcreteProductionClass::regularFunction() {
    membersNeededToSetExpectations.PassOrFailTheTest();
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass c) {
    c.regularFunction();
}
Test.cpp
TEST(myTest) {
    MockProductionClass m;
    m.SetExpectationsAndReturnValues();
    doSomething(m);
    ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this on my spare time, but it's not moving along very quickly. Even if I got my method working the way I want, and had the expectation setting code in place, this method still has a couple drawbacks, one of them being that your build commands can get to be kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it. Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic.
Define a test class using that same interface. This test class could have nothing but noops or fancy code that simulates the real classes.
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface; all interface calls are forwarded to the real object's methods. (This wrap-and-forward idea is sketched below.)
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
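A minimal sketch of the wrap-and-forward idea mentioned above (all names are hypothetical):
// The third-party class I can't change
class VendorWidget
{
public:
    int ComputeThing(int x) { return x * 2; }      // concrete, non-virtual
};

// Interface covering only the methods I actually call
class IWidget
{
public:
    virtual ~IWidget() {}
    virtual int ComputeThing(int x) = 0;
};

// Thin forwarding wrapper used in production
class RealWidget : public IWidget
{
public:
    explicit RealWidget(VendorWidget& real) : real_(real) {}
    int ComputeThing(int x) override { return real_.ComputeThing(x); }
private:
    VendorWidget& real_;
};

// Test double used in tests (could also be a GoogleMock mock of IWidget)
class FakeWidget : public IWidget
{
public:
    int ComputeThing(int) override { return 42; }  // canned behaviour
};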
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
I'm sure all this is in Feathers' Working Effectively with Legacy Code book - and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
One thing you did not indicate in your question is the reason why your classes derive from the base classes provided by the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of their class in the unit tests.
Otherwise, an idea would be to write a script to extract and create the interface you need from the headers they provide, and integrate this into the compilation process so your unit tests can be checked in.