Factory Pattern in C++ -- doing this correctly?

I am relatively new to "design patterns" as they are referred to in a formal sense; I haven't been a professional developer for very long, so this is all fairly new to me.
We've got a pure virtual interface base class. This interface class exists to define the functionality its derived children are supposed to provide. The current use and situation in the software dictates which type of derived child we want, so I recommended creating a wrapper that communicates which type of derived child we want and returns a Base pointer pointing to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This causes me trouble for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong to me that the interface would itself need to have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first (and probably only real) derived class for this feature -- though I want to use the same approach for two other features that will have several different derived classes -- and the second is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means I have to add his headers and cpp files to my project, which seems wrong since I'm not even using his code while I implement my part (though he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?

You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation defined (in addition to the pure virtual functions), making a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
    virtual ~MyInterface() {} // virtual destructor so deleting through the interface is safe
    virtual void MyFunc() = 0;
};

class MyImplementation : public MyInterface
{
public:
    virtual void MyFunc() {}
};

class MyFactory
{
public:
    static MyInterface* CreateImplementation(...);
};
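CreateImplementation's definition then lives in the factory's own .cpp file, which becomes the only translation unit that needs the concrete headers. A minimal sketch of what it might look like (the string selector is an invented example -- any discriminator works -- and it assumes the classes above plus <string>):

// MyFactory.cpp -- the only file that includes MyImplementation's header;
// adding new implementations recompiles this file, not the interface.
MyInterface* MyFactory::CreateImplementation(const std::string& kind)
{
    if (kind == "default")
        return new MyImplementation();
    return 0; // unknown kind; a real factory might throw instead
}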

I'd have to agree with you. Probably one of the most important principles of object-oriented programming is that a piece of code (whether a method, class or namespace) should have a single responsibility. In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class violates that principle, opening the door to a world of shi... trouble.

Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. As a result, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.

It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use has been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class also buys you more flexibility to swap factories during testing.

When the interface is just for hiding the implementation details and there will be only one implementation of the Base interface ever, it could be ok to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Unless the class is explicitly designed to have only one implementation ever, you are better off assuming that multiple implementations will exist at some point in time, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.

Related

Is there any way to avoid declaring virtual methods when storing (children) pointers?

I have run into an annoying problem lately, and I am not satisfied with my own workaround: I have a program that maintains a vector of pointers to a base class, and I am storing there all kinds of child object pointers. Now, each child class has methods of its own, and the main program may or may not call these methods, depending on the type of object (note though that they all heavily use common methods of the base class, so this justifies inheritance).
I have found it useful to have an "object identifier" to check the class type (and then either call the method or not), which is already not very beautiful, but that is not the main inconvenience. The main inconvenience is that, if I want to actually call a derived-class method through the base-class pointer (or even just store the pointer in the pointer array), then one needs to declare the derived methods as virtual in the base class.
That makes sense from the C++ coding point of view, but it is not practical in my case (from the development point of view), because I am planning to create many different child classes in different files, perhaps written by different people, and I don't want to tweak/maintain the base class each time to add virtual methods!
How can I do this? Essentially, what I am asking (I guess) is how to implement something like Objective-C's NSArray behaviour: if you send a message to an object that does not implement the method, nothing happens.
Instead of this:
// variant A: declare everything in the base class
void DoStuff_A(Base* b) {
    if (b->TypeId() == DERIVED_1)
        b->DoDerived1Stuff();
    else if (b->TypeId() == DERIVED_2)
        b->DoDerived2Stuff();
}
or this:
// variant B: declare nothing in the base class
void DoStuff_B(Base* b) {
    if (b->TypeId() == DERIVED_1)
        dynamic_cast<Derived1*>(b)->DoDerived1Stuff();
    else if (b->TypeId() == DERIVED_2)
        dynamic_cast<Derived2*>(b)->DoDerived2Stuff();
}
do this:
// variant C: declare the right thing in the base class
b->DoStuff();
Note there's a single virtual function in the base per stuff that has to be done.
If you find yourself in a situation where you are more comfortable with variants A or B than with variant C, stop and rethink your design. You are coupling components too tightly, and in the end it will backfire.
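For completeness, here is a sketch of what variant C looks like in the base and derived classes (the names simply mirror the variants above):

#include <iostream>

class Base {
public:
    virtual ~Base() {}
    virtual void DoStuff() = 0; // one virtual per "stuff" to be done
};

class Derived1 : public Base {
public:
    virtual void DoStuff() { std::cout << "Derived1\n"; }
};

class Derived2 : public Base {
public:
    virtual void DoStuff() { std::cout << "Derived2\n"; }
};

void DoStuff_C(Base* b) {
    b->DoStuff(); // no type checks, no casts
}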
"I am planning to create many different children classes in different files, perhaps made by different people, and I don't want to tweak/maintain the base class each time, to add virtual methods!"
You are OK with tweaking DoStuff each time a derived class is added, but tweaking Base is a no-no. May I ask why?
If your design does not fit in either A, B or C pattern, show what you have, for clairvoyance is a rare feat these days.
You can do what you describe in C++, but not using functions. It is, by the way, kind of horrible but I suppose there might be cases in which it's a legitimate approach.
First way of doing this:
Define a function with a signature something like boost::variant parseMessage(std::string, std::vector<boost::variant>); and perhaps a set of convenience functions with common signatures on the base class, and include a message lookup table on the base class which maps message names to functors. In each class constructor, add that class's messages to the message table; parseMessage then parcels each message off to the right function on the class.
It's ugly and slow but it should work.
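A minimal sketch of that idea, simplified to no-argument handlers (std::function instead of boost::variant; all names here are made up):

#include <functional>
#include <iostream>
#include <map>
#include <string>

class Base {
public:
    virtual ~Base() {}
    // Dispatch: silently ignore unknown messages, NSArray-style.
    void parseMessage(const std::string& name) {
        auto it = handlers_.find(name);
        if (it != handlers_.end())
            it->second();
        // else: nothing happens
    }
protected:
    std::map<std::string, std::function<void()>> handlers_;
};

class Derived1 : public Base {
public:
    Derived1() {
        // Register this class's messages in the lookup table.
        handlers_["DoDerived1Stuff"] = [this] { doDerived1Stuff(); };
    }
    void doDerived1Stuff() { std::cout << "Derived1 stuff\n"; }
};

int main() {
    Derived1 d;
    Base* b = &d;
    b->parseMessage("DoDerived1Stuff"); // dispatched
    b->parseMessage("Unknown");         // silently ignored
}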
Second way of doing this:
Define the virtual functions further down the hierarchy: if you want to add int foo(bar*);, you first add a class that declares it as virtual, and then ensure every class that wants to define int foo(bar*); inherits from it. You can then use dynamic_cast to check that the pointer you are looking at inherits from this class before trying to call int foo(bar*);. Possibly these interface-adding classes could be pure virtual so they can be mixed in at various points using multiple inheritance, but that may have its own problems.
This is less flexible than the first way and requires the classes that implement a function to be linked to each other. Oh, and it's still ugly.
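A sketch of that mixin approach (the interface and method names are invented for illustration):

#include <iostream>

class Base {
public:
    virtual ~Base() {} // polymorphic base, so dynamic_cast works
};

// Interface-adding mixin: lives in its own header, not in Base.
class FooInterface {
public:
    virtual ~FooInterface() {}
    virtual int foo(int bar) = 0;
};

class Derived1 : public Base, public FooInterface {
public:
    virtual int foo(int bar) { return bar * 2; }
};

void callFooIfSupported(Base* b) {
    // Only call foo() on objects that actually mix in FooInterface.
    if (FooInterface* f = dynamic_cast<FooInterface*>(b))
        std::cout << f->foo(21) << "\n";
    // else: nothing happens
}

int main() {
    Derived1 d;
    callFooIfSupported(&d);
}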
But mostly I suggest you try and write C++ code like C++ code not Objective-C code.
This can be solved by adding some sort of introspection capability and meta-object system. The talk "Metadata and reflection in C++" by Jeff Tucker demonstrates how to do this using C++'s template metaprogramming.
If you don't want to go to the trouble of implementing one yourself, then it would be easier to use an existing one such as Qt's meta object system. Note that this solution does not work with multiple inheritance due to limitations in the meta object compiler: QObject Multiple Inheritance.
With that in place, you can query for the presence of methods and call them. This is quite tedious to do by hand, so the easiest way to call such methods is via the signal and slot mechanism.
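For example, with Qt's meta-object system you can attempt a call by name and simply get false back when the method doesn't exist -- roughly the NSArray behaviour asked about. A sketch (the class and method names are invented, and this needs the Q_OBJECT macro processed by moc to build):

#include <QObject>

class Derived1 : public QObject {
    Q_OBJECT
public:
    Q_INVOKABLE void doDerived1Stuff() { /* ... */ }
};

void doStuffIfPossible(QObject* obj) {
    // invokeMethod() returns false if obj has no such invokable
    // method -- nothing happens, as with Objective-C messaging.
    QMetaObject::invokeMethod(obj, "doDerived1Stuff");
}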
There is also GObject, which is quite similar, and there are others.
If you are planning to create many different child classes in different files, perhaps written by different people, then I would guess you also don't want to change your main code for every child class. In that case, I think what you need to do in your base class is define several (not too many) virtual functions with empty implementations, BUT those functions should mark a point in the logic where they are called, like "AfterInsert" or "BeforeSorting", etc.
Usually there are not too many places in the logic where you want derived classes to perform their own logic.
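A minimal sketch of such hook methods (the names are placeholders):

#include <iostream>
#include <vector>

class Container {
public:
    virtual ~Container() {}
    void insert(int value) {
        data_.push_back(value);
        afterInsert(value); // hook: derived classes may react here
    }
protected:
    // Empty default implementations: overriding is optional.
    virtual void afterInsert(int) {}
    virtual void beforeSorting() {}
    std::vector<int> data_;
};

class LoggingContainer : public Container {
protected:
    virtual void afterInsert(int value) {
        std::cout << "inserted " << value << "\n";
    }
};

int main() {
    LoggingContainer c;
    c.insert(42); // prints "inserted 42"
}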

Virtual event handlers from several classes: multiple inheritance or composition?

My team has written several C++ classes which implement event handling via pure virtual callbacks - for example, when a message is received from another process, the base class which handles IPC messaging calls its own pure virtual function, and a derived class handles the event in an override of that function. The base class knows the event has occurred; the derived class knows what to do with it.
I now want to combine the features provided by these base classes in a higher-level class, so for example when a message arrives from another process, my new class can then forward it on over its network connection using a similar event-driven networking class. It looks like I have two options:
(1) composition: derive classes from each of the event-handling base classes and add objects of those derived classes to my new class as members, or:
(2) multiple inheritance: make my new class a derived class of all of the event-handling base classes.
I've tried both (1) and (2), and I'm not satisfied with my implementation of either.
There's an extra complication: some of the base classes have been written using initialisation and shutdown methods instead of using constructors and destructors, and of course these methods have the same names in each class. So multiple inheritance causes function name ambiguity. Solvable with using declarations and/or explicit scoping, but not the most maintainable-looking thing I've ever seen.
Even without that problem, using multiple inheritance and overriding every pure virtual function from each of several base classes is going to make my new class very big, bordering on "God Object"-ness. As requirements change (read: "as requirements are added") this isn't going to scale well.
On the other hand, using separate derived classes and adding them as members of my new class means I have to write lots of methods on each derived class to exchange information between them. This feels very much like "getters and setters" - not quite as bad, but there's a lot of "get this information from that class and hand it to this one", which has an inefficient feel to it - lots of extra methods, lots of extra reads and writes, and the classes have to know a lot about each other's logic, which feels wrong. I think a full-blown publish-and-subscribe model would be overkill, but I haven't yet found a simple alternative.
There's also a lot of duplication of data if I use composition. For example, if my class's state depends on whether its network connection is up and running, I have to either have a state flag in every class affected by this, or have every class query the networking class for its state every time a decision needs to be made. If I had just one multiply-inherited class, I could just use a flag which any code in my class could access.
So, multiple inheritance, composition, or perhaps something else entirely? Is there a general rule-of-thumb on how best to approach this kind of thing?
From your description I think you've gone for a "template method" style approach, where the base does the work and then calls a pure virtual that the derived class implements, rather than a "callback interface" approach, which is pretty much the same except that the pure virtual method is on a completely separate interface that's passed into the "base" as a constructor parameter. I personally prefer the latter, as I find it considerably more flexible when the time comes to plug objects together and build higher-level objects.
I tend to go for composition with the composing class implementing the callback interfaces that the composed objects require and then potentially composing again in a similar style at a higher level.
You can then decide whether it's appropriate to compose by having the composing object implement the callback interfaces and pass itself to the "composed" objects in their constructors, OR to implement the callback interface in its own object (possibly with a simpler and more precise callback interface that your composing object implements) and compose both the "base object" and the "callback implementation object"...
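A rough sketch of the first variant (all names invented; the "network" side is reduced to a print statement):

#include <iostream>
#include <string>

// Callback interface, separate from the messaging class itself.
class MessageCallback {
public:
    virtual ~MessageCallback() {}
    virtual void onMessage(const std::string& msg) = 0;
};

// The "base" functionality takes the callback as a constructor
// parameter instead of forcing clients to derive from it.
class MessageReceiver {
public:
    explicit MessageReceiver(MessageCallback& cb) : cb_(cb) {}
    void simulateIncoming(const std::string& msg) { cb_.onMessage(msg); }
private:
    MessageCallback& cb_;
};

// The composing class implements the callbacks its members need.
class Bridge : public MessageCallback {
public:
    Bridge() : receiver_(*this) {}
    virtual void onMessage(const std::string& msg) {
        std::cout << "forwarding over network: " << msg << "\n";
    }
    MessageReceiver& receiver() { return receiver_; }
private:
    MessageReceiver receiver_;
};

int main() {
    Bridge b;
    b.receiver().simulateIncoming("hello");
}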
Personally I wouldn't go with an "abstract event handling" interface as I prefer my code to be explicit and clear even if that leads to it being slightly less generic.
I'm not totally clear on what your new class is trying to achieve, but it sounds like you're effectively having to provide a new implementation somewhere for all of these abstract event classes.
Personally I would plump for composition. Multiple inheritance quickly becomes a nightmare, especially when things have to change, and composition keeps the existing separation of concerns.
You state that each derived object will have to communicate with the network class, but can you try to reduce this to a minimum? For instance, each derived event object is purely responsible for packaging the event info into some kind of generic packet, and then that packet is passed to the network class, which does the guts of the sending.
Without knowing exactly what your new class is doing it's hard to comment or suggest better patterns, but the more I code, the more I am learning to agree with the old adage: "favour composition over inheritance".

Using non-abstract class as base

I need to finish another developer's work, but the problem is that he started in a different way...
So now I find myself in the situation of either using existing code where he chose to inherit from a non-abstract class (a very big class, without any virtual functions) that already implements a bunch of interfaces, or dismissing that code (which shouldn't be too much work) and writing another class that implements the interfaces I need.
What are the pros and cons that would help me to choose the better approach.
p.s. please note that I don't have too much experience
Many Thanks
Although it is very tempting to say write it from scratch again, don't do it! The existing code may be ugly, but it looks like it does work. Since the class is big, I assume there is a fair bit of history behind it as well. It might have solutions for some very obscure cases which you might not have imagined till now. What I suggest is, if possible, first talk to the person who developed that class, understand how it works, then derive from it (after making its destructor virtual, of course) and complete your work. Then, as and when time permits, slowly refactor parts of the class into smaller, more manageable classes. Also, don't forget to write a good unit-test suite before you start, so that you can validate the new behaviour against the existing class's behaviour. One more thing: there is nothing wrong with inheriting from a non-abstract base class, as long as it makes sense and the base class destructor is virtual.
If the other developer has written a base-class with no virtual functions, then those functions do not need to be overridden, and it is correct to define them in a non-abstract base class.
If those functions define functionality that all the child-classes require then it would be a mistake to get rid of the base class, as you would then need to implement those functions individually in each of the child classes.
I've seen a lot of developers go 'interface-mad' in the last couple of years, but base classes still serve a purpose over interfaces: to provide a concrete implementation that is common to all child classes. It would be a mistake to get rid of the base class and have separate implementations of these functions in each of the child classes.
HOWEVER, if the child classes are inheriting functionality that they do not require, or require a separate implementation of, then the Base class is a mistake and interfaces would seem like the better option to divide the functionality between the child classes.
Despite this, I would agree with Naveen that it's probably not worth the extra work this will give you; it may seem simple, but if this is a big class with a lot of inheritors then it could turn out to be a nightmare. Quite often in software engineering you have to deal with another developer's code that you might have implemented differently. If you re-implemented it every time, you would be a very unproductive developer. I say work with what you've got and get the project finished on time.
Is there anything at all you want to use from the base class or would you end up overriding everything?
Does it define some sort of type that you want to use for an "is-a" relationship?
(for example, base class is "animal" and you want to make "cat", but if it doesn't add any behavior to its interface, that doesn't seem likely)
Is the base class used in other interfaces you need to use? (like if someone is passing objects through a reference/pointer to the base class)
If not, I'd say there's no advantage in inheriting from that class over implementing the interface(s) yourself.
What are the pros and cons that would help me to choose the better approach.
It's legal to derive from a class with no virtual functions, but that doesn't make it a good idea. When you derive from a class with virtual functions, you often use that class through pointers (e.g., a class Derived that inherits from Base is often manipulated through Base*s). That doesn't work when you don't use virtual functions. Also, deleting a derived object through a pointer to a base class with no virtual destructor is undefined behaviour.
However, it sounds more like these classes aren't being used through pointers-to-the-base. Instead, the base class is simply used to get a lot of built-in functionality, although the classes aren't related in the normal sense. Inversion of control (and has-a relationships) is a more common way to do that nowadays: split the functionality of the base class into a number of interfaces (pure virtual base classes) and then have the objects that currently derive from the base class hold member variables of those interfaces instead.
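A sketch of that inversion (the names are hypothetical; imagine Logger was one slice of the big base class):

#include <iostream>

// One narrow interface carved out of the big base class.
class Logger {
public:
    virtual ~Logger() {}
    virtual void log(const char* msg) = 0;
};

class ConsoleLogger : public Logger {
public:
    virtual void log(const char* msg) { std::cout << msg << "\n"; }
};

// Formerly "class Widget : public BigBlobBase"; now it has-a Logger.
class Widget {
public:
    explicit Widget(Logger& logger) : logger_(logger) {}
    void doWork() { logger_.log("working"); }
private:
    Logger& logger_;
};

int main() {
    ConsoleLogger logger;
    Widget w(logger); // dependency injected from outside
    w.doWork();
}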
At the very least, you'll want to split the big base class into well-defined smaller classes and use those (like mixins), which sounds like your second option.
However, that doesn't mean rewrite all the other code that uses the blob base class all in one go. That's a big undertaking and you're likely to make small typos and similar mistakes. Instead, buy yourself copies of Working Effectively With Legacy Code and Large-Scale C++ Software Design, and do the work piecemeal.
From your question it is not too clear what the problem is. Looking at the title (Using non-abstract class as base), I can tell you that using an abstract class (non-pure-virtual -- when you talk about interfaces in C++ I am assuming pure virtual abstract classes) as a base makes sense only if there is common functionality you can share between subclasses, meaning that a number of classes extend the same abstract class, inheriting the common implementation. If that's not the case (and you're pretty confident it's never going to happen), then it doesn't make sense to use an abstract class.
If you can extract some of the functionality of your big class in a way that leads to (even potential) code reuse, then it could make sense; otherwise I wouldn't see the point.

Best way to use a C++ Interface

I have an interface class similar to:
class IInterface
{
public:
    virtual ~IInterface() {}
    virtual void methodA() = 0;
    virtual void methodB() = 0;
};
I then implement the interface:
class AImplementation : public IInterface
{
    // etc... implementation here
};
When I use the interface in an application, is it better to create an instance of the concrete class AImplementation? E.g.:
int main()
{
    AImplementation* ai = new AImplementation();
}
Or is it better to put a factory "create" member function in the Interface like the following:
class IInterface
{
public:
    virtual ~IInterface() {}
    static std::tr1::shared_ptr<IInterface> create(); // implementation in .cpp
    virtual void methodA() = 0;
    virtual void methodB() = 0;
};
Then I would be able to use the interface in main like so:
int main()
{
    std::tr1::shared_ptr<IInterface> test(IInterface::create());
}
The 1st option seems to be common practice (not to say it's right). However, the 2nd option was sourced from "Effective C++".
One of the most common reasons for using an interface is so that you can "program against an abstraction" rather than a concrete implementation.
The biggest benefit of this is that it allows changing of parts of your code while minimising the change on the remaining code.
Therefore although we don't know the full background of what you're building, I would go for the Interface / factory approach.
Having said this, in smaller applications or prototypes I often start with concrete classes until I get a feel for where/if an interface would be desirable. Interfaces can introduce a level of indirection that may just not be necessary for the scale of app you're building.
As a result in smaller apps, I find I don't actually need my own custom interfaces. Like so many things, you need to weigh up the costs and benefits specific to your situation.
There is yet another alternative which you haven't mentioned:
int main(int argc, char* argv[])
{
    //...
    boost::shared_ptr<IInterface> test(new AImplementation);
    //...
    return 0;
}
In other words, one can use a smart pointer without using a static "create" function. I prefer this method, because a "create" function adds nothing but code bloat, while the benefits of smart pointers are obvious.
There are two separate issues in your question:
1. How to manage the storage of the created object.
2. How to create the object.
Part 1 is simple: you should use a smart pointer like std::tr1::shared_ptr to prevent memory leaks that would otherwise require fancy try/catch logic.
Part 2 is more complicated.
You can't just write create() in main() like you want to; you'd have to write IInterface::create(), because otherwise the compiler will look for a global function called create, which isn't what you want. It might seem like having the std::tr1::shared_ptr initialized with the value returned by create() would do what you want, but that's not how C++ compilers work.
As to whether using a factory method on the interface is a better way to do this than just using new AImplementation(), it's possible it'd be helpful in your situation, but beware of speculative complexity - if you're writing the interface so that it always creates an AImplementation and never a BImplementation or a CImplementation, it's hard to see what the extra complexity buys you.
"Better" in what sense?
The factory method doesn't buy you much if you only plan to have, say, one concrete class. (But then again, if you only plan to have one concrete class, do you really need the interface class at all? Maybe yes, if you're using COM.) In any case, if you can foresee a small, fixed limit on the number of concrete classes, then the simpler implementation may be the "better" one, on the whole.
But if there may be many concrete classes, and if you don't want to have the base class be tightly coupled to them, then the factory pattern may be useful.
And yes, this can help reduce coupling -- if the base class provides some means for the derived classes to register themselves with the base class. This would allow the factory to know which derived classes exist, and how to create them, without needing compile-time information about them.
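A common way to implement that registration is a registry mapping a name to a creator function, with each derived class registering itself at static-initialization time. A sketch (the registry names are invented):

#include <iostream>
#include <map>
#include <string>

class IInterface {
public:
    virtual ~IInterface() {}
    virtual void methodA() = 0;
};

// Registry: name -> creator function. The factory itself needs no
// compile-time knowledge of the derived classes.
typedef IInterface* (*Creator)();

std::map<std::string, Creator>& registry() {
    static std::map<std::string, Creator> r;
    return r;
}

IInterface* create(const std::string& name) {
    std::map<std::string, Creator>::iterator it = registry().find(name);
    return it == registry().end() ? 0 : it->second();
}

class AImplementation : public IInterface {
public:
    virtual void methodA() { std::cout << "A\n"; }
    static IInterface* make() { return new AImplementation(); }
};

// Self-registration: one line per derived class, in its own .cpp.
namespace {
    const bool registeredA =
        (registry()["A"] = &AImplementation::make, true);
}

int main() {
    IInterface* p = create("A");
    if (p) { p->methodA(); delete p; }
}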
Use the 1st method. Your factory method in the 2nd option would have to be implemented per concrete class, and this is not possible to do in the interface. I.e., IInterface::create() has no idea which concrete class you actually wish to instantiate.
A static method cannot be virtual, and implementing a non-static create() method in your concrete classes has not really won you anything in this case.
Factory methods are certainly useful, but this is not the correct use.
Which item in Effective C++ recommends the 2nd option? I don't see it in mine (though I also don't have the second book). That may clear up a misunderstanding.
I would go with the first option just because it's more common and more understandable. It's really up to you, but if you're working on a commercial app then I would ask my peers what they use.
I do have a very simple question there:
Are you sure you want to use a pointer?
This question might seem illogical, but people coming from a Java background use new much more often than required. In your example, creating the variable on the stack would be amply sufficient.
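For instance (reusing AImplementation from the question, assuming it implements all the pure virtuals):

int main()
{
    AImplementation ai; // lives on the stack; destroyed automatically
    ai.methodA();       // no new, no delete, no leak
}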

Pimpl idiom vs Pure virtual class interface

I was wondering what would make a programmer to choose either Pimpl idiom or pure virtual class and inheritance.
I understand that pimpl idiom comes with one explicit extra indirection for each public method and the object creation overhead.
The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation, and I understand there is no object creation overhead.
EDIT: But you'd need a factory if you create the object from the outside
What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be
A Value Type
Copied by value; identity is never important. It's appropriate for it to be a key in a std::map. Examples: a "string" class, a "date" class, or a "complex number" class. "Copying" instances of such a class makes sense.
An Entity type
Identity is important. Always passed by reference, never by "value". Often, doesn't make sense to "copy" instances of the class at all. When it does make sense, a polymorphic "Clone" method is usually more appropriate. Examples: A Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.
Both pImpl and pure abstract base class are techniques to reduce compile time dependencies.
However, I only ever use pImpl to implement Value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
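To illustrate the type-1 case, here is a bare-bones pimpl'd value class (a sketch with invented names; copy control is omitted for brevity, though a real value type would need it):

// date.h -- no implementation details visible to clients
class Date
{
public:
    Date();
    ~Date();
    int year() const; // forwarding method, one per public function
private:
    struct Impl;   // defined only in date.cpp
    Impl* impl_;   // the pimpl
};

// date.cpp -- free to change without recompiling clients
struct Date::Impl
{
    int year;
};

Date::Date() : impl_(new Impl()) { impl_->year = 1970; }
Date::~Date() { delete impl_; }
int Date::year() const { return impl_->year; }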
Pointer to implementation is usually about hiding structural implementation details. Interfaces are about instancing different implementations. They really serve two different purposes.
The pimpl idiom helps you reduce build dependencies and times, especially in large applications, and minimizes header exposure of the implementation details of your class to one compilation unit. The users of your class should not even need to be aware of the existence of a pimpl (except as a cryptic pointer to which they are not privy!).
Abstract classes (pure virtuals) are something your clients must be aware of: if you try to use them to reduce coupling and circular references, you need to add some way of allowing clients to create your objects (e.g. through factory methods or classes, dependency injection or other mechanisms).
I was searching for an answer to the same question.
After reading some articles and getting some practice, I prefer using pure virtual class interfaces.
They are more straightforward (this is a subjective opinion). The pimpl idiom makes me feel like I'm writing code "for the compiler", not for the "next developer" who will read my code.
Some testing frameworks have direct support for Mocking pure virtual classes
It's true that you need a factory to be accessible from the outside.
But if you want to leverage polymorphism, that's also a "pro", not a "con"... and a simple factory method does not really hurt that much.
The only drawback (I'm trying to investigate this) is that the pimpl idiom could be faster: the proxy calls can be inlined, while inheritance necessarily needs an extra access to the object's vtable at runtime; and the memory footprint of the pimpl public proxy class is smaller (which makes optimizations such as fast swaps easier).
I hate pimples! They make the class ugly and unreadable. All methods are redirected to the pimpl. You never see in the header what functionality the class has, so you cannot refactor it (e.g. simply change the visibility of a method). The class feels "pregnant". I think using interfaces is better, and it's really enough to hide the implementation from the client. You can even let one class implement several interfaces to keep them thin. One should prefer interfaces!
Note: You do not necessarily need the factory class. What matters is that the class's clients communicate with its instances via the appropriate interface.
The hiding of private methods strikes me as strange paranoia; I do not see a reason for it, since we have interfaces.
There's a very real problem with shared libraries that the pimpl idiom circumvents neatly that pure virtuals can't: you cannot safely modify/remove data members of a class without forcing users of the class to recompile their code. That may be acceptable under some circumstances, but not e.g. for system libraries.
To explain the problem in detail, consider the following code in your shared library/header:
// header
struct A
{
public:
    A();
    // more public interface, some of which uses the int below
private:
    int a;
};

// library
A::A()
    : a(0)
{}
The compiler emits code in the shared library that calculates the address of the integer to be initialized to be a certain offset (probably zero in this case, because it's the only member) from the pointer to the A object it knows to be this.
On the user side of the code, a new A will first allocate sizeof(A) bytes of memory, then hand a pointer to that memory to the A::A() constructor as this.
If in a later revision of your library you decide to drop the integer, make it larger, smaller, or add members, there'll be a mismatch between the amount of memory user's code allocates, and the offsets the constructor code expects. The likely result is a crash, if you're lucky - if you're less lucky, your software behaves oddly.
By pimpl'ing, you can safely add and remove data members to the inner class, as the memory allocation and constructor call happen in the shared library:
// header
struct A
{
public:
    A();
    // more public interface, all of which delegates to the impl
private:
    void* impl;
};

// library
A::A()
    : impl(new A_impl())
{}
All you need to do now is keep your public interface free of data members other than the pointer to the implementation object, and you're safe from this class of errors.
Edit: I should maybe add that the only reason I'm talking about the constructor here is that I didn't want to provide more code - the same argumentation applies to all functions that access data members.
We must not forget that inheritance is a stronger, closer coupling than delegation. I would also take into account all the issues raised in the answers given when deciding what design idioms to employ in solving a particular problem.
Although broadly covered in the other answers maybe I can be a bit more explicit about one benefit of pimpl over virtual base classes:
A pimpl approach is transparent from the user's point of view, meaning you can e.g. create objects of the class on the stack and use them directly in containers. If you try to hide the implementation using an abstract virtual base class, you will need to return a shared pointer to the base class from a factory, complicating its use. Consider the following equivalent client code:
// Pimpl
Object pi_obj(10);
std::cout << pi_obj.SomeFun1();

std::vector<Object> objs;
objs.emplace_back(3);
objs.emplace_back(4);
objs.emplace_back(5);
for (auto& o : objs)
    std::cout << o.SomeFun1();

// Abstract Base Class
auto abc_obj = ObjectABC::CreateObject(20);
std::cout << abc_obj->SomeFun1();

std::vector<std::shared_ptr<ObjectABC>> objs2;
objs2.push_back(ObjectABC::CreateObject(13));
objs2.push_back(ObjectABC::CreateObject(14));
objs2.push_back(ObjectABC::CreateObject(15));
for (auto& o : objs2)
    std::cout << o->SomeFun1();
In my understanding these two things serve completely different purposes. The purpose of the pimpl idiom is basically to give you a handle to your implementation so you can do things like fast swaps for a sort.
The purpose of virtual classes is more along the lines of allowing polymorphism: you have an unknown pointer to an object of a derived type, and when you call function x you always get the right function for whatever class the base pointer actually points to.
Apples and oranges really.
The most annoying problem with the pimpl idiom is that it makes existing code extremely hard to maintain and analyse. So using pimpl you pay with developer time and frustration only to "reduce build dependencies and times and minimize header exposure of the implementation details". Decide for yourself whether it is really worth it.
In particular, "build times" is a problem you can solve with better hardware or with tools like IncrediBuild (www.incredibuild.com; also already included in Visual Studio 2017), thus not affecting your software design. Software design should generally be independent of the way the software is built.