I'm trying to get dependency inversion, or at least understand how to apply it, but the problem I have at the moment is how to deal with dependencies that are pervasive. The classic example of this is trace logging, but in my application I have many services that most if not all code will depend on (trace logging, string manipulation, user message logging etc).
None of the solutions to this would appear to be particularly palatable:
Using constructor dependency injection would mean that most constructors would take several, sometimes many, of the same standard injected dependencies, because most classes explicitly require those dependencies (they are not just passing them down to objects that they construct).
Service locator pattern just drives the dependencies underground: it removes them from the constructor but hides them, so it is not even explicit that the dependencies are required.
Singleton services are, well, Singletons, and also serve to hide the dependencies
Lumping all those common services together into a single CommonServices interface and injecting that as well (a) violates the Law of Demeter and (b) is really just another name for a Service Locator, albeit a specific rather than a generic one.
Does anyone have any other suggestions for how to structure these kinds of dependencies, or indeed any experience of any of the above solutions?
Note that I don't have a particular DI framework in mind, in fact we're programming in C++ and would be doing any injection manually (if indeed dependencies are injected).
Service locator pattern just drives the dependencies underground: it removes them from the constructor but hides them
Singleton services are, well, Singletons, and also serve to hide the dependencies
This is a good observation. Hiding the dependencies doesn't remove them. Instead you should address the number of dependencies a class needs.
Using constructor dependency injection would mean that most constructors would take several, sometimes many, of the same standard injected dependencies, because most classes explicitly require those dependencies
If this is the case, you are probably violating the Single Responsibility Principle. In other words, those classes are probably too big and do too much. Since you are talking about logging and tracing, ask yourself whether you aren't logging too much. More generally, logging and tracing are cross-cutting concerns, and you should not have to add them to many classes in the system. If you correctly apply the SOLID principles, this problem goes away.
The Dependency Inversion Principle is part of the SOLID principles and is important because, among other things, it promotes testability and reuse of the higher-level algorithm.
Background:
As indicated on Uncle Bob's web page, Dependency Inversion is about depending on abstractions, not on concretions.
In practice, what happens is that some places where your class instantiates another class directly need to be changed so that the implementation of the inner class can be specified by the caller.
For instance, if I have a Model class, I should not hard code it to use a specific database class. If I do that, I cannot use the Model class to use a different database implementation. This might be useful if you have a different database provider, or you may want to replace the database provider with a fake database for testing purposes.
Rather than the Model doing a "new" on the Database class, it will simply use an IDatabase interface that the Database class implements. The Model never refers to a concrete Database class. But then who instantiates the Database class? One solution is Constructor Injection (part of Dependency Injection). For this example, the Model class is given a new constructor that takes an IDatabase instance which it is to use, rather than instantiate one itself.
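A minimal C++ sketch of that shape, assuming a trivial IDatabase with a single load operation (the method names are illustrative, not from the original text):

#include <string>

class IDatabase {
public:
    virtual ~IDatabase() = default;
    virtual std::string load(const std::string& key) = 0;
};

class Model {
public:
    explicit Model(IDatabase& db) : db_(db) {}      // constructor injection
    std::string title() { return db_.load("title"); }
private:
    IDatabase& db_;                                 // only the abstraction
};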
This solves the original problem: the Model no longer references the concrete Database class, and it uses the database through the IDatabase abstraction. But it introduces the problem mentioned in the question, which is that it goes against the Law of Demeter. That is, the caller of Model now has to know about IDatabase, when previously it did not. The Model is now exposing to its clients some detail about how it gets its job done.
Even if you were okay with this, there's another issue that seems to confuse a lot of people, including some trainers. There's an assumption that any time a class, such as Model, instantiates another class concretely, it's breaking the Dependency Inversion Principle and is therefore bad. But in practice, you can't follow these kinds of hard-and-fast rules. There are times when you need to use concrete classes. For instance, if you're going to throw an exception you have to "new it up" (e.g. throw new BadArgumentException(...)). Or you use classes from the base system, such as strings, dictionaries, etc.
There's no simple rule that works in all cases. You have to understand what it is you're trying to accomplish. If you're after testability, then the fact that the Model class references the Database class directly is not itself a problem. The problem is that the Model class has no other means of using a different Database implementation. You solve this by implementing the Model class such that it uses IDatabase, and allowing a client to specify an IDatabase implementation. If the client does not specify one, the Model can fall back to a concrete implementation.
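Continuing the sketch above, one way to express "use a concrete default unless the client supplies one" is a pair of constructors; the Database class and the ownership scheme here are illustrative assumptions:

#include <memory>

class Database : public IDatabase {                 // concrete default
public:
    std::string load(const std::string& key) override { return "db:" + key; }
};

class Model {
public:
    Model() : owned_(std::make_unique<Database>()), db_(*owned_) {}  // default
    explicit Model(IDatabase& db) : db_(db) {}                       // injected
private:
    std::unique_ptr<IDatabase> owned_;   // set only when we created the default
    IDatabase& db_;
};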
This is similar to the design of many libraries, including the C++ Standard Library. For instance, look at the declaration of the std::set container:
template < class T,                     // set::key_type/value_type
           class Compare = less<T>,     // set::key_compare/value_compare
           class Alloc = allocator<T>   // set::allocator_type
         > class set;
You can see that it allows you to specify a comparer and an allocator, but most of the time you take the defaults, especially the allocator. The STL has many such facets, especially in the IO library, where detailed aspects of streaming can be augmented for localization, endianness, locales, etc.
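For example, overriding just the comparer while taking the default allocator:

#include <functional>
#include <set>

std::set<int> ascending = {3, 1, 2};                      // all defaults
std::set<int, std::greater<int>> descending = {3, 1, 2};  // custom Compare,
                                                          // default Alloc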
In addition to testability, this allows the reuse of the higher-level algorithm with entirely different implementations of the classes that the algorithm uses internally.
And finally, back to the assertion I made previously about scenarios where you would not want to invert the dependency: there are times when you need to instantiate a concrete class, such as when instantiating the exception class BadArgumentException. But if you're after testability, you can also argue that you do, in fact, want to invert this dependency as well. You may want to design the Model class such that all instantiations of exceptions are delegated to a class and invoked through an abstract interface. That way, code that tests the Model class can provide its own exception class whose usage the test can then monitor.
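A hedged sketch of that idea; the ErrorPolicy name and shape are my own illustration, not part of the original answer:

#include <stdexcept>
#include <string>

class ErrorPolicy {
public:
    virtual ~ErrorPolicy() = default;
    virtual void badArgument(const std::string& what) = 0;   // may throw
};

class ThrowingErrorPolicy : public ErrorPolicy {
public:
    void badArgument(const std::string& what) override {
        throw std::invalid_argument(what);    // production behaviour
    }
};

// A test can substitute a policy that records the call instead of throwing,
// letting it verify the Model's error paths directly.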
I've had colleagues give me examples where they abstract instantiation of even system calls, such as "getsystemtime" simply so they can test daylight savings and time-zone scenarios through their unit-testing.
Follow the YAGNI principle -- don't add abstractions simply because you think you might need them. If you're practicing test-first development, the right abstractions become apparent and only just enough abstraction is implemented to pass the tests.
class Base {
public:
void doX() {
doA();
doB();
}
virtual void doA() {/*does A*/}
virtual void doB() {/*does B*/}
};
class LoggedBase : public Base {
public:
LoggedBase(Logger& logger) : l(logger) {}
virtual void doA() {l.log("start A"); Base::doA(); l.log("Stop A");}
virtual void doB() {l.log("start B"); Base::doB(); l.log("Stop B");}
private:
Logger& l;
};
Now you can create the LoggedBase using an abstract factory that knows about the logger. Nobody else has to know about the logger, nor do they need to know about LoggedBase.
class BaseFactory {
public:
virtual Base& makeBase() = 0;
};
class BaseFactoryImp : public BaseFactory {
public:
BaseFactoryImp(Logger& logger) : l(logger) {}
virtual Base& makeBase() {return *(new LoggedBase(l));}
};
The factory implementation is held in a global variable:
BaseFactory* baseFactory;
And is initialized to an instance of BaseFactoryImp by 'main' or some function close to main. Only that function knows about BaseFactoryImp and LoggedBase. Everyone else is blissfully ignorant of them all.
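The wiring might look like this, assuming Logger is default-constructible (a sketch, not part of the original answer):

int main() {
    Logger logger;
    BaseFactoryImp factory(logger);   // only main sees the concrete factory
    baseFactory = &factory;           // everyone else uses the abstract pointer
    // ... run the application; clients call baseFactory->makeBase() ...
}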
Related
I quite often find myself creating interfaces whose only purpose is to let me inject a dependency, ending up with class AIface and class AImpl : public AIface. And quite often I never implement any other subclass of AIface.
Is there any advantage to this approach versus using the implementation directly, with all its public methods virtual?
Longer Explanation:
Say we have a zoo with a cleaning service. We do TDD, and we want to be able to test the Zoo with a fake FeedingSvc, so we go for dependency injection.
What is the difference between:
class FeedingSvcIface {
public:
    virtual void performFeeding() = 0;
};

class RoboticFeedingSvc : public FeedingSvcIface {
public:
    void performFeeding() override;
};

class Zoo {
public:
    Zoo(FeedingSvcIface&);
    //...
};
vs
class RoboticFeedingSvc {
public:
    virtual void performFeeding();
};

class Zoo {
public:
    Zoo(RoboticFeedingSvc&);
    //...
};
(And if ever needed, extract the interface in the future)
In terms of testing, the former seems easier.
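For instance, with the interface version a test can hand the Zoo a recording fake; this sketch assumes the exercised Zoo behaviour calls performFeeding() at some point:

#include <cassert>

class FakeFeedingSvc : public FeedingSvcIface {
public:
    void performFeeding() override { ++feedings; }   // record the interaction
    int feedings = 0;
};

void testZooFeedsTheAnimals() {
    FakeFeedingSvc fake;
    Zoo zoo(fake);
    // ... exercise the zoo so that feeding is triggered ...
    assert(fake.feedings > 0);       // verified without a real robot
}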
I usually find it natural to add interfaces when I am talking to a class that "crosses layers", but sometimes it is just about testing.
I know that in the future I might have to implement other types of FeedingSvcs, but why build the abstraction today if I don't really need it yet?
I might split two classes just to encapsulate some logic.
The advantage of sticking to best practices, design patterns or other idioms is that although you make a bit of extra effort now, you gain more in the long run.
Imagine the scenario where you work in a team, with multiple developers, some experienced, some not.
You are the creator of the Zoo mechanism, but you decide that, for the time being, you will implement the Zoo on the KISS principle without adding the extra abstraction. You set yourself a mental note (or even a nice little comment) stating: "If there shall be multiple distinct behaviours of the RoboticFeedingSvc, there shall be abstraction over the dependency injection!"
Now, because of your really awesome work, you get to go on vacation, and some junior developer will remain to maintain your code.
One of the developer's tasks will be to introduce a ManualFeeding option and a hybrid option. How many ways of doing this can you think of (disregarding any coding principles)?
Because you, the creator, didn't enforce the way the mechanism grows, the junior developer will look at your comment, add a "LoL u mad bro :)" comment, and then choose one of the following:
Create a base interface to be derived by other FeedingSvcs (you got lucky here)
Inject behaviour into the RobotFeedingSvc using a strategy pattern (have some functors that can be set to determine how to feed something)
Make the RobotFeedingSvc a composite of a Feeder, a Feedee, and some Action function
Make the RobotFeedingSvc a singleton factory (because singleton factories are awesome and fancy) that is somehow used inside the Zoo to return the appropriate feeding technique (the important thing here is that he used a singleton and a factory)
Create a templated version of the Zoo that takes a templated version of the RobotFeedingSvc, partially specialized according to a given FeedingPolicy and Feeder (because he just bumped into templates, and templates should be used everywhere).
I guess we could sum up the story in fewer lines:
Making the initial effort to properly build the abstraction layers your application requires, so that it can grow in functionality, will help other developers (including future you) to quickly understand the proper way to implement new features using the existing code, instead of hacking through it with wild ideas.
Forcing your Zoo class to take an interface instead of a concrete class is pretty much equivalent to leaving a comment saying that new functionality needs to implement this interface.
Allowing a concrete class to be passed as a parameter might shift the focus to changing the concrete class rather than implementing something on top of it.
Another, more technical, reason would be the following: the junior developer needs to add new functionality, but he's not allowed to change the Zoo implementation. What now?
Suppose my production code starts off like this:
public class SmtpSender
{
....
}
public void DoCoolThing()
{
var smtpSender = new SmtpSender("mail.blah.com"....);
smtpSender.Send(...)
}
I have a brainwave and decide to unit test this function using a fake SmtpSender and dependency injection:
public interface ISmtpSender
{
...
}
public class SmtpSender : ISmtpSender
{
...
}
public class FakeSmtpSender : ISmtpSender
{
...
}
public void DoCoolThing(ISmtpSender smtpSender)
{
smtpSender.Send(...)
}
Then I think: wait, all my production code will always want the real one. So why not make the parameter optional, and fill it in with a default SmtpSender if not provided?
public void DoCoolThing(ISmtpSender smtpSender = null)
{
smtpSender = smtpSender ?? new SmtpSender("...");
smtpSender.Send(...);
}
This allows the unit test to fake it while leaving the production code unaffected.
Is this an acceptable implementation of dependency injection?
If not, what are the pitfalls?
No, having callers depend on concrete implementations of dependencies is not the best way to do dependency injection. Component implementations should always depend only on component interfaces, not on other component implementations.
If one implementation depends on another,
It gives the caller a completely different kind of responsibility than it would otherwise have: that of configuring itself. The caller will thus be less cohesive.
It gives the caller a dependency on a class that it would not otherwise have. That means that tests of the caller can break if the dependency is broken, classes needed by the dependency are loaded when the caller is loaded instead of when they're needed, and in compiled languages there would be a compilation-time dependency. All of these make developing large systems harder.
If more than one caller wants the same dependency, each such caller might use this same trick. Then you would have duplication of the concrete dependency among callers, instead of having the concrete dependency in one place (wherever you're specifying all your injected dependencies).
It prevents you from having a useful kind of circular dependency among implementations. Very occasionally I've needed implementations to depend on one another in a circular way. Pure dependency injection makes that relatively safe: implementations always depend on interfaces, so there isn't a compile-time circularity. (Dependency injection frameworks make it possible to construct all the implementations despite the circularity by interposing proxies.) With a concrete dependency, you'll never be able to do that.
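A sketch of what such a safe circularity can look like in C++ (all names here are illustrative); each implementation sees only the other's interface, and one link is wired late to break the construction cycle:

#include <string>

class IReporter {
public:
    virtual ~IReporter() = default;
    virtual void report(const std::string& msg) = 0;
};

class IParser {
public:
    virtual ~IParser() = default;
    virtual void parse(const std::string& text) = 0;
};

class Parser : public IParser {          // depends on IReporter only
public:
    explicit Parser(IReporter& r) : r_(r) {}
    void parse(const std::string& text) override { r_.report("parsed " + text); }
private:
    IReporter& r_;
};

class Reporter : public IReporter {      // depends on IParser only
public:
    void setParser(IParser* p) { p_ = p; }    // late wiring breaks the cycle
    void report(const std::string& msg) override { /* ... may use p_ ... */ }
private:
    IParser* p_ = nullptr;
};

// wiring: Reporter reporter; Parser parser(reporter); reporter.setParser(&parser);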
I would address the issue you're trying to address by putting the production choice of implementation in the code or config file or whatever that specifies all of your implementations.
It's a valid approach to use, but needs to be strictly limited to certain scenarios. There are several pitfalls to watch out for.
Pros:
Establishes a seam in your code, allowing different implementations to be used as collaborators
Allows clients to use your code without having to specify a collaborator - particularly if most clients would end up using the exact same collaborator
Cons:
Introduces a hidden collaborator into the system
Tightly couples the SUT to the concrete class
Even though other implementations could be provided, if the concrete class changes it will directly affect the SUT (e.g., the constructor changes)
Can easily tightly couple the SUT to other classes as well
For example: not only do you create the dependency, you create all of its dependencies too
In general, it's something you should only reserve for very lightweight defaults (think Null Objects). Otherwise it's very easy for things to quickly spin out of control.
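To make the "lightweight default" concrete, here is the same shape sketched in C++ (the document's main language); all names are illustrative, and the default is a do-nothing Null Object:

#include <string>

class ISmtpSender {
public:
    virtual ~ISmtpSender() = default;
    virtual void send(const std::string& message) = 0;
};

class NullSmtpSender : public ISmtpSender {
public:
    void send(const std::string&) override {}   // deliberately does nothing
};

void doCoolThing(ISmtpSender* sender = nullptr) {
    NullSmtpSender none;                        // safe, trivial default
    ISmtpSender& s = sender ? *sender : none;
    s.send("hello");
}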
What you're trying to do is similar to using a default constructor in order to make things a little more convenient. You should determine if the convenience is going to have long term benefits, or if it will ultimately come back to haunt you.
Even though you're not using a default constructor, I think you should consider the parallels. Mark Seemann suggests that default constructors could be a design smell: (emphasis mine)
Default constructors are code smells. There you have it. That probably sounds outrageous, but consider this: object-orientation is about encapsulating behavior and data into cohesive pieces of code (classes). Encapsulation means that the class should protect the integrity of the data it encapsulates. When data is required, it must often be supplied through a constructor. Conversely, a default constructor implies that no external data is required. That's a rather weak statement about the invariants of the class.
With all of this in mind, I think the example that you provided would be a risky approach unless the default SMTP sender is a Null Object (for example: doesn't actually send anything). Otherwise there is too much information hidden from the composition root, which just opens the gate to unpleasant surprises.
My team has written several C++ classes which implement event handling via pure virtual callbacks - for example, when a message is received from another process, the base class which handles IPC messaging calls its own pure virtual function, and a derived class handles the event in an override of that function. The base class knows the event has occurred; the derived class knows what to do with it.
I now want to combine the features provided by these base classes in a higher-level class, so for example when a message arrives from another process, my new class can then forward it on over its network connection using a similar event-driven networking class. It looks like I have two options:
(1) composition: derive classes from each of the event-handling base classes and add objects of those derived classes to my new class as members, or:
(2) multiple inheritance: make my new class a derived class of all of the event-handling base classes.
I've tried both (1) and (2), and I'm not satisfied with my implementation of either.
There's an extra complication: some of the base classes have been written using initialisation and shutdown methods instead of using constructors and destructors, and of course these methods have the same names in each class. So multiple inheritance causes function name ambiguity. Solvable with using declarations and/or explicit scoping, but not the most maintainable-looking thing I've ever seen.
Even without that problem, using multiple inheritance and overriding every pure virtual function from each of several base classes is going to make my new class very big, bordering on "God Object"-ness. As requirements change (read: "as requirements are added") this isn't going to scale well.
On the other hand, using separate derived classes and adding them as members of my new class means I have to write lots of methods on each derived class to exchange information between them. This feels very much like "getters and setters" - not quite as bad, but there's a lot of "get this information from that class and hand it to this one", which has an inefficient feel to it - lots of extra methods, lots of extra reads and writes, and the classes have to know a lot about each other's logic, which feels wrong. I think a full-blown publish-and-subscribe model would be overkill, but I haven't yet found a simple alternative.
There's also a lot of duplication of data if I use composition. For example, if my class's state depends on whether its network connection is up and running, I have to either have a state flag in every class affected by this, or have every class query the networking class for its state every time a decision needs to be made. If I had just one multiply-inherited class, I could just use a flag which any code in my class could access.
So, multiple inheritance, composition, or perhaps something else entirely? Is there a general rule-of-thumb on how best to approach this kind of thing?
From your description I think you've gone for a "template method" style approach, where the base does work and then calls a pure virtual that the derived class implements, rather than a "callback interface" approach, which is pretty much the same except that the pure virtual method is on a completely separate interface that's passed into the "base" as a constructor parameter. I personally prefer the latter, as I find it considerably more flexible when the time comes to plug objects together and build higher-level objects.
I tend to go for composition with the composing class implementing the callback interfaces that the composed objects require and then potentially composing again in a similar style at a higher level.
You can then decide whether it's appropriate to compose by having the composing object implement the callback interfaces and pass itself to the "composed" objects in their constructors, or to implement the callback interface in its own object, possibly with a simpler and more precise callback interface that your composing object implements, and compose both the "base object" and the "callback implementation object".
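A rough sketch of the first variant, with the composing object implementing the callback interface its part requires (the class and method names are invented for illustration):

#include <string>
using Message = std::string;                 // stand-in for a real message type

class MessageCallback {
public:
    virtual ~MessageCallback() = default;
    virtual void onMessageReceived(const Message& m) = 0;
};

class IpcChannel {                           // "base object": knows the event
public:
    explicit IpcChannel(MessageCallback& cb) : cb_(cb) {}
    void deliver(const Message& m) { cb_.onMessageReceived(m); }
private:
    MessageCallback& cb_;
};

class Bridge : public MessageCallback {      // the higher-level composing object
public:
    Bridge() : ipc_(*this) {}                // plugs itself in as the callback
    void onMessageReceived(const Message& m) override {
        (void)m;  // forward the IPC message over the network (elided)
    }
private:
    IpcChannel ipc_;
};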
Personally I wouldn't go with an "abstract event handling" interface as I prefer my code to be explicit and clear even if that leads to it being slightly less generic.
I'm not totally clear on what your new class is trying to achieve, but it sounds like you're effectively having to provide a new implementation somewhere for all of these abstract event classes.
Personally I would plump for composition. Multiple inheritance quickly becomes a nightmare, especially when things have to change, and composition keeps the existing separation of concerns.
You state that each derived object will have to communicate with the network class, but can you try to reduce this to a minimum? For instance, each derived event object could be purely responsible for packaging up the event info into some kind of generic packet, and that packet is then passed to the network class, which does the guts of the sending.
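As a sketch of that division of labour (the types here are invented): the event object only builds a packet, and the network class alone knows how to send it:

#include <cstdint>
#include <vector>

struct Packet {
    std::vector<std::uint8_t> bytes;     // serialized event payload
};

class NetworkSender {
public:
    void send(const Packet& p);          // the only place that sends anything
};

class MessageEvent {
public:
    Packet toPacket() const {            // the event's only network-facing duty
        return Packet{{0x01, 0x02}};     // placeholder serialization
    }
};

// usage: sender.send(event.toPacket());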
Without knowing exactly what your new class is doing it's hard to comment or suggest better patterns, but the more I code, the more I find myself agreeing with the old adage "favour composition over inheritance".
I am relatively new to "design patterns" as they are referred to in a formal sense. I've not been a professional for very long, so I'm pretty new to this.
We've got a pure virtual interface base class. This interface class is obviously there to provide the definition of the functionality its derived children are supposed to implement. The current use and situation in the software dictates what type of derived child we want to use, so I recommended creating a wrapper that communicates which type of derived child we want and returns a Base pointer to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This troubles me for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong that the interface itself should have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first (and probably only) real derived class for this particular Base (though I want to use the same method for two other features that will have several different derived classes); the second is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means that I've got to add his headers and cpp files to my project, which seems wrong since I'm not even using his code while I implement my part (but he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?
You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation defined (in addition to the pure virtual functions), making a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
    virtual ~MyInterface() {}    // virtual destructor so deleting through the
                                 // interface pointer is safe
    virtual void MyFunc() = 0;
};
class MyImplementation : public MyInterface
{
public:
virtual void MyFunc() {}
};
class MyFactory
{
public:
static MyInterface* CreateImplementation(...);
};
I'd have to agree with you. Probably one of the most important principles of object-oriented programming is that a single piece of code (whether a method, class or namespace) should have a single responsibility. In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class violates that principle, opening the door to a world of shi... trouble.
Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. That way, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.
It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use has been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class will also buy you more flexibility to swap factories during testing.
When the interface is just for hiding the implementation details and there will be only one implementation of the Base interface ever, it could be ok to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Unless the design explicitly allows for only one implementation ever, you are better off assuming that multiple implementations will exist at some point, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.
I was wondering what would make a programmer choose either the Pimpl idiom or a pure virtual class and inheritance.
I understand that the pimpl idiom comes with one explicit extra indirection for each public method, plus the object creation overhead.
The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation, and I understand that there is no object creation overhead.
EDIT: But you'd need a factory if you create the object from the outside
What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be:
A Value Type
Copied by value; identity is never important. It's appropriate for it to be a key in a std::map. Examples: a "string" class, a "date" class, or a "complex number" class. "Copying" instances of such a class makes sense.
An Entity Type
Identity is important. Always passed by reference, never by "value". Often it doesn't make sense to "copy" instances of the class at all. When it does make sense, a polymorphic "Clone" method is usually more appropriate. Examples: a Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.
Both pImpl and pure abstract base class are techniques to reduce compile time dependencies.
However, I only ever use pImpl to implement Value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
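A minimal sketch of that type 2 recipe (the names are invented for illustration): a pure abstract base plus an associated factory method, with the concrete type hidden in the implementation file:

#include <cstddef>
#include <memory>

class Socket {                                 // entity type: pure abstract
public:
    virtual ~Socket() = default;
    virtual void send(const void* data, std::size_t len) = 0;
    static std::unique_ptr<Socket> create();   // associated factory method
};

class TcpSocket : public Socket {              // would live in the .cpp file
public:
    void send(const void*, std::size_t) override { /* ... */ }
};

std::unique_ptr<Socket> Socket::create() {
    return std::make_unique<TcpSocket>();      // clients never see TcpSocket
}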
Pointer to implementation is usually about hiding structural implementation details. Interfaces are about instancing different implementations. They really serve two different purposes.
The pimpl idiom helps you reduce build dependencies and times, especially in large applications, and minimizes header exposure of the implementation details of your class to one compilation unit. The users of your class should not even need to be aware of the existence of a pimpl (except as a cryptic pointer to which they are not privy!).
Abstract classes (pure virtuals) are something your clients must be aware of: if you try to use them to reduce coupling and circular references, you need to add some way of allowing them to create your objects (e.g. through factory methods or classes, dependency injection or other mechanisms).
I was searching for an answer to the same question.
After reading some articles and getting some practice, I prefer using "pure virtual class interfaces".
They are more straightforward (this is a subjective opinion). The pimpl idiom makes me feel I'm writing code "for the compiler", not for the "next developer" who will read my code.
Some testing frameworks have direct support for mocking pure virtual classes.
It's true that you need a factory to be accessible from the outside.
But if you want to leverage polymorphism, that's also a "pro", not a "con" ... and a simple factory method does not really hurt that much.
The only drawback (I'm trying to investigate this) is that the pimpl idiom could be faster:
when the proxy calls are inlined, whereas inheriting necessarily needs an extra access to the object's vtable at runtime
the memory footprint of the pimpl public proxy class is smaller (allowing easy optimizations, such as faster swaps)
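The "fast swap" point, for example, can be sketched like this; in real code Impl would be defined only in the .cpp file rather than inline:

#include <memory>
#include <utility>

class Object {
public:
    Object() : impl_(std::make_unique<Impl>()) {}
    friend void swap(Object& a, Object& b) noexcept {
        std::swap(a.impl_, b.impl_);   // O(1) pointer exchange, never throws
    }
private:
    struct Impl { /* heavy state lives here */ };
    std::unique_ptr<Impl> impl_;
};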
I hate pimpls! They make the class ugly and unreadable. All methods are redirected to the pimpl. You never see in the headers what functionality the class has, so you cannot refactor it (e.g. simply change the visibility of a method). The class feels "pregnant". I think using interfaces is better, and it is really enough to hide the implementation from the client. You can even let one class implement several interfaces to keep them thin. One should prefer interfaces!
Note: you do not necessarily need the factory class. What matters is that the class's clients communicate with its instances via the appropriate interface.
The hiding of private methods strikes me as a strange paranoia, and I do not see the reason for it, since we have interfaces.
There's a very real problem with shared libraries that the pimpl idiom circumvents neatly and that pure virtuals can't: you cannot safely modify or remove data members of a class without forcing users of the class to recompile their code. That may be acceptable under some circumstances, but not e.g. for system libraries.
To explain the problem in detail, consider the following code in your shared library/header:
// header
struct A
{
public:
A();
// more public interface, some of which uses the int below
private:
int a;
};
// library
A::A()
: a(0)
{}
The compiler emits code in the shared library that calculates the address of the integer to be initialized to be a certain offset (probably zero in this case, because it's the only member) from the pointer to the A object it knows to be this.
On the user side of the code, a new A will first allocate sizeof(A) bytes of memory, then hand a pointer to that memory to the A::A() constructor as this.
If in a later revision of your library you decide to drop the integer, or make it larger or smaller, or add members, there'll be a mismatch between the amount of memory the user's code allocates and the offsets the constructor code expects. The likely result is a crash, if you're lucky; if you're less lucky, your software behaves oddly.
By pimpl'ing, you can safely add and remove data members of the inner class, as the memory allocation and constructor call happen in the shared library:
// header
struct A
{
public:
A();
// more public interface, all of which delegates to the impl
private:
void * impl;
};
// library
A::A()
: impl(new A_impl())
{}
All you need to do now is keep your public interface free of data members other than the pointer to the implementation object, and you're safe from this class of errors.
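For completeness, a hedged sketch of what the library side might look like once a member function forwards through the pointer; A_impl's contents and the get() method are my own additions for illustration, and assume the header also declares int get() const:

// library
struct A_impl
{
    int a = 0;                    // members here can change between releases
    int get() const { return a; }
};

A::A()
    : impl(new A_impl())
{}

int A::get() const                // hypothetical forwarding method
{
    return static_cast<const A_impl*>(impl)->get();
}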
Edit: I should maybe add that the only reason I'm talking about the constructor here is that I didn't want to provide more code - the same argumentation applies to all functions that access data members.
We must not forget that inheritance is a stronger, closer coupling than delegation. I would also take into account all the issues raised in the answers given when deciding what design idioms to employ in solving a particular problem.
Although broadly covered in the other answers, maybe I can be a bit more explicit about one benefit of pimpl over pure virtual base classes:
A pimpl approach is transparent from the user's viewpoint, meaning you can, for example, create objects of the class on the stack and use them directly in containers. If you try to hide the implementation using an abstract virtual base class, you will need to return a shared pointer to the base class from a factory, complicating its use. Consider the following equivalent client code:
// Pimpl
Object pi_obj(10);
std::cout << pi_obj.SomeFun1();
std::vector<Object> objs;
objs.emplace_back(3);
objs.emplace_back(4);
objs.emplace_back(5);
for (auto& o : objs)
std::cout << o.SomeFun1();
// Abstract Base Class
auto abc_obj = ObjectABC::CreateObject(20);
std::cout << abc_obj->SomeFun1();
std::vector<std::shared_ptr<ObjectABC>> objs2;
objs2.push_back(ObjectABC::CreateObject(13));
objs2.push_back(ObjectABC::CreateObject(14));
objs2.push_back(ObjectABC::CreateObject(15));
for (auto& o : objs2)
std::cout << o->SomeFun1();
In my understanding these two things serve completely different purposes. The purpose of the pimpl idiom is basically to give you a handle to your implementation so you can do things like fast swaps for a sort.
The purpose of virtual classes is more along the lines of allowing polymorphism, i.e. you have a base pointer to an object of an unknown derived type, and when you call function x you always get the right function for whatever class the base pointer actually points to.
Apples and oranges really.
The most annoying problem with the pimpl idiom is that it makes existing code extremely hard to maintain and analyse. With pimpl you pay with developer time and frustration only to "reduce build dependencies and times" and "minimize header exposure of the implementation details". Decide for yourself whether it is really worth it.
Especially "build times" is a problem you can solve by better hardware or using tools like Incredibuild ( www.incredibuild.com, also already included in Visual Studio 2017 ), thus not affecting your software design. Software design should be generally independent of the way the software is built.