C++ API Design Patterns

I have an API implementation / design question to ask. I need to design a modern, usable DLL API, but I am unsure which approach is best and why. I'm not even sure whether what I have below qualifies as either!
My implementation of the classes needs to be abstracted from the client, but the client also needs to be able to extend and define additional (supported) classes as needed. Many of the internal classes of the library will need references to these external components, necessitating the use of pure virtual classes or polymorphism.
I'm not too worried about GNU/Clang compatibility - that may come later (as in much later, or possibly never) - but the primary need is the ABI layers of MSVC 15/17.
The two ways I can see to implement this are to use either a Pimpl idiom or an external factory method. I'm not a big fan of either (Pimpl adds code complexity and hurts maintainability; a factory abstracts class creation away), so a well-thought-out third option is a possibility, if viable.
For the sake of simplicity in this post, I've put the "implementations" in the headers below, but they would live in their own respective .cpp files. I've also cut out any implied constructors, destructors, and includes.
I'll define two pure virtual classes here for the meantime:
//! IFoo.h
class IFoo
{
public:
virtual void someAction() noexcept = 0;
};
//! IBar.h
class IBar
{
public:
virtual void Foo(IFoo& other) noexcept = 0;
};
Edit, I'll put this here too:
//! BarImpl.h (Internal)
class BarImpl : public IBar
{
public:
virtual void Foo(IFoo& foo) noexcept
{
foo.someAction();
}
};
First up, we have the PIMPL approach
//! Bar.h (External)
class API_EXPORT Bar : public IBar
{
private:
BarImpl* impl;
public:
virtual void Foo(IFoo& foo) noexcept
{
impl->Foo(foo);
}
};
Ideally, the client code therefore looks a little something like this:
//! client_code.cpp
class CustomFoo : public IFoo { /* ... */ };
int
main(int argc, char** argv)
{
auto foo = CustomFoo();
auto bar = Bar();
bar.Foo(foo); // do a barrel roll
}
Secondly, we have a Factory approach
//! BarFactory.h (External)
class API_EXPORT BarFactory
{
private:
BarImpl impl;
public:
virtual IBar& Allocate() noexcept
{
return impl; // For simplicity
}
};
The resulting client code looking something like:
//! client_code.cpp
class CustomFoo : public IFoo { /* ... */ };
int
main(int argc, char** argv)
{
auto foo = CustomFoo();
auto barfactory = BarFactory();
IBar& bar = barfactory.Allocate();
bar.Foo(foo); // do a barrel roll
}
In my eyes, both approaches have their own merits. The Pimpl method, although doubly abstracted and possibly slower due to all the virtual dispatch, brings a humble simplicity to the client code. The factory avoids this but necessitates a separate object creation/management class. As these are exported functions, I don't think I can easily get away with templating them as you might with an internal class, so all this extra code will have to be written either way.
Both methods seem to keep the interface stable between versions, ensuring binary compatibility across future minor releases.
Would anyone be able to lend a hand to the conundrum I have created for myself above? It would be much appreciated!
Many Thanks,
Chris

Related

Simplify an extensible "Perform Operation X on Data Y" framework

tl;dr
My goal is to conditionally provide implementations for abstract virtual methods in an intermediate workhorse template class (depending on template parameters), but to leave them abstract otherwise so that classes derived from the template are reminded by the compiler to implement them if necessary.
I am also grateful for pointers towards better solutions in general.
Long version
I am working on an extensible framework to perform "operations" on "data". One main goal is to allow XML configs to determine program flow, and allow users to extend both allowed data types and operations at a later date, without having to modify framework code.
If either one (operations or data types) is kept fixed architecturally, there are good patterns to deal with the problem. If allowed operations are known ahead of time, use abstract virtual functions in your data types (new data have to implement all required functionality to be usable). If data types are known ahead of time, use the Visitor pattern (where the operation has to define virtual calls for all data types).
Now if both are meant to be extensible, I could not find a well-established solution.
My solution is to declare them independently from one another and then register "operation X for data type Y" via an operation factory. That way, users can add new data types, or implement additional or alternative operations and they can be produced and configured using the same XML framework.
If you create a matrix of (all data types) x (all operations), you end up with a lot of classes. Hence, they should be as minimal as possible, and eliminate trivial boilerplate code as far as possible, and this is where I could use some inspiration and help.
There are many operations that will often be trivial, but might not be in specific cases, such as Clone() and some more (omitted here for "brevity"). My goal is to conditionally provide implementations for abstract virtual methods if appropriate, but to leave them abstract otherwise.
Some solutions I considered
As in example below: provide default implementation for trivial operations. Consequence: Nontrivial operations need to remember to override with their own methods. Can lead to run-time problems if some future developer forgets to do that.
Do NOT provide defaults. Consequence: Nontrivial functions need to be basically copy & pasted for every final derived class. Lots of useless copy&paste code.
Provide an additional template class derived from cOperation base class that implements the boilerplate functions and nothing else (template parameters similar to specific operation workhorse templates). Derived final classes inherit from their concrete operation base class and that template. Consequence: both concreteOperationBase and boilerplateTemplate need to inherit virtually from cOperation. Potentially some run-time overhead, from what I found on SO. Future developers need to let their operations inherit virtually from cOperation.
std::enable_if magic. Didn't get the combination of virtual functions and templates to work.
Here is a (fairly) minimal compilable example of the situation:
//Base class for all operations on all data types. Will be inherited from. A lot. Base class does not define any concrete operation interface, nor does it necessarily know any concrete data types it might be performed on.
class cOperation
{
public:
virtual ~cOperation() {}
virtual std::unique_ptr<cOperation> Clone() const = 0;
virtual bool Serialize() const = 0;
//... more virtual calls that can be either trivial or quite involved ...
protected:
cOperation(const std::string& strOperationID, const std::string& strOperatesOnType)
: m_strOperationID(strOperationID)
, m_strOperatesOnType(strOperatesOnType)
{
//empty
}
private:
std::string m_strOperationID;
std::string m_strOperatesOnType;
};
//Base class for all data types. Will be inherited from. A lot. Does not know any operations that might be performed on it.
struct cDataTypeBase
{
virtual ~cDataTypeBase() {}
};
Now, I'll define an example data type.
//Some concrete data type. Still does not know any operations that might be performed on it.
struct cDataTypeA : public cDataTypeBase
{
static const std::string& GetDataName()
{
static const std::string strMyName = "cDataTypeA";
return strMyName;
}
};
And here is an example operation. It defines a concrete operation interface, but does not know the data types it might be performed on.
//Some concrete operation. Does not know all data types it might be expected to work on.
class cConcreteOperationX : public cOperation
{
public:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) = 0;
protected:
cConcreteOperationX(const std::string& strOperatesOnType)
: cOperation("concreteOperationX", strOperatesOnType)
{
//empty
}
};
The following template is meant to be the boilerplate workhorse. It implements as much trivial and repetitive code as possible and is provided alongside the concrete operation base class - concrete data types are still unknown, but are meant to be provided as template parameters.
//ConcreteOperationTemplate: absorb as much common/trivial code as possible, so concrete derived classes can have minimal code for easy addition of more supported data types
template <typename ConcreteDataType, typename DerivedOperationType, bool bHasTrivialCloneAndSerialize = false>
class cConcreteOperationXTemplate : public cConcreteOperationX
{
public:
//Can perform datatype cast here:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) override
{
const ConcreteDataType* pCastData = dynamic_cast<const ConcreteDataType*>(&dataBase);
if (pCastData == nullptr)
{
return false;
}
return doSomeConcreteOperationXOnCastData(*pCastData);
}
protected:
cConcreteOperationXTemplate()
: cConcreteOperationX(ConcreteDataType::GetDataName()) //requires ConcreteDataType to have a static method returning something appropriate
{
//empty
}
private:
//Clone can be implemented here via CRTP
virtual std::unique_ptr<cOperation> Clone() const override
{
return std::unique_ptr<cOperation>(new DerivedOperationType(*static_cast<const DerivedOperationType*>(this)));
}
//TODO: Some Magic here to enable trivial serializations, but leave non-trivials abstract
//Problem with current code is that virtual bool Serialize() override will also be overwritten for bHasTrivialCloneAndSerialize == false
virtual bool Serialize() const override
{
return true;
}
virtual bool doSomeConcreteOperationXOnCastData(const ConcreteDataType& castData) = 0;
};
Here are two implementations of the example operation on the example data type. One of them will be registered as the default operation, to be used if the user does not declare anything else in the config, and the other is a potentially much more involved non-default operation that might take many additional parameters into account (these would then have to be serialized in order to be correctly re-instantiated on the next program run). These operations need to know both the operation and the data type they relate to, but could potentially be implemented at a much later time, or in a different software component where the specific combination of operation and data type are required.
//Implementation of operation X on type A. Needs to know both of these, but can be implemented if and when required.
class cConcreteOperationXOnTypeADefault : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeADefault, true>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
};
//Different implementation of operation X on type A.
class cConcreteOperationXOnTypeASpecialSauce : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeASpecialSauce/*, false*/>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
//Problem: the compiler does not remind me that cConcreteOperationXOnTypeASpecialSauce might need to implement this method
//virtual bool Serialize() const override;
};
int main(int argc, char* argv[])
{
std::map<std::string, std::map<std::string, std::unique_ptr<cOperation>>> mapOpIDAndDataTypeToOperation;
//...fill map, e.g. via XML config / factory method...
const cOperation& requestedOperation = *mapOpIDAndDataTypeToOperation.at("concreteOperationX").at("cDataTypeA");
//...do stuff...
return 0;
}
If your data types are not polymorphic (i.e. for each operation call you know both the operation type and the data type at compile time), you may consider the following approach:
#include<iostream>
#include<string>
template<class T>
void empty(T t){
std::cout<<"warning about missing implementation"<<std::endl;
}
template<class T>
void simple_plus(T){
std::cout<<"simple plus"<<std::endl;
}
void plus_string(std::string){
std::cout<<"plus string"<<std::endl;
}
template<class Data, void Implementation(Data)>
class Operation{
public:
static void exec(Data d){
Implementation(d);
}
};
#define macro_def(OperationName) template<class T> class OperationName : public Operation<T, empty<T>>{};
#define macro_template_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName<TypeName>>{};
#define macro_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName>{};
// this part may be generated on base of .xml file and put into .h file, and then just #include generated.h
macro_def(Plus)
macro_template_inst(int, Plus, simple_plus)
macro_template_inst(double, Plus, simple_plus)
macro_inst(std::string, Plus, plus_string)
int main() {
Plus<int>::exec(2);
Plus<double>::exec(2.5);
Plus<float>::exec(2.5);
Plus<std::string>::exec("abc");
return 0;
}
A downside of this approach is that you have to compile the project in two steps: 1) transform the .xml into a .h, 2) compile the project using the generated .h file. On the plus side, the compiler/IDE (I use Qt Creator with MinGW) gives a warning about the unused parameter t in the function
void empty(T t)
along with a stack trace showing where it was called from.

Handle Body Idiom in C++

I have a class in my library which I want to expose to the users. I don't want to expose the whole class, as I might want to make binary-incompatible changes later. I am confused about which of the following ways would be best.
Case 1:
struct Impl1;
struct Handle1
{
// The definition will not be inline and will be defined in a C file
// Showing here for simplicity
void interface()
{
static_cast<Impl1*>(this)->interface();
}
};
struct Impl1 : public Handle1
{
void interface(){ /* Do ***actual*** work */ }
private:
int _data; // And other private data
};
Case 2:
struct Impl2;
struct Handle2
{
// Constructor/destructor to manage impl
void interface() // Will not be inline as above.
{
_impl->interface();
}
private:
Impl2* _impl;
};
struct Impl2
{
void interface(){ /* Do ***actual*** work */ }
private:
int _data; // And other private data
};
The Handle class is only for exposing functionality. The objects will be created and managed only inside the library. Inheritance is just for abstracting implementation details; there won't be multiple/different Impl classes. In terms of performance, I think both will be identical - or will they? I am thinking of going with the Case 1 approach. Are there any issues that need to be taken care of?
Your second approach looks very much like the compilation firewall idiom (sometimes known as the PIMPL idiom).
The only difference is that in the compilation firewall idiom, the implementation class is usually (but not always) defined as a member. Don't forget the constructor (which allocates the Impl) and the destructor (which frees it), along with the copy constructor and assignment operator.
The first approach also works, but it will require factory functions to create the objects. When I've used it, I've simply made all of the functions in the Handle pure virtual, and let the client code call them directly.
In this case, since client code actually has pointers to your object (in the compilation firewall idiom, the only pointers are in the Handle class itself), the client will have to worry about memory management; if no cycles are possible, this is one case where shared_ptr makes a lot of sense. (The factory function can return a shared_ptr, for example, and client code may never see a raw pointer.)

Singletons and factories with multiple libraries in C++

I've been reading and searching the web for a while now but haven't found a nice solution. Here's what I want to do:
I am writing a library that defines an abstract base class - lets call it IFoo.
class IFoo {
public:
virtual void doSomething() = 0;
virtual ~IFoo() {}
};
The library also defines a couple of implementations for that - lets call them FooLibOne and FooLibTwo.
In order to encapsulate the creation process and decide which concrete implementation is used depending on some runtime parameter, I use a factory FooFactory that maps std::string to factory methods (in my case boost::function, but that should not be the point here). It also allows new factory methods to be registered. It looks something like this:
class FooFactory {
public:
typedef boost::function<IFoo* ()> CreatorFunction;
IFoo* create(std::string name);
void registerCreator(std::string name, CreatorFunction f);
private:
std::map<std::string, CreatorFunction> mapping_;
};
For now, I added the implementations provided by the library (FooLibOne, FooLibTwo) directly in the constructor of FooFactory - thus they are always available. Some of the library code uses the FooFactory to initialize certain objects, etc. I have refrained from using the Singleton pattern for the factories so far, since the pattern is debated often enough and I wasn't sure how different implementations of the Singleton pattern would work in combination with possibly multiple shared libraries, etc.
However, passing around the factories can be a little cumbersome, and I still think this is one of the occasions where the Singleton pattern could be of good use - especially if I consider that users of the library should be able to add more implementations of IFoo which should also be accessible to the (already existing) library code. Of course, Dependency Injection - meaning I pass an instance of a factory through the constructor - could do the trick (and does it for now). But this approach kind of fails if I want to be even more flexible and introduce a second layer of dynamic object creation. Meaning: I want to dynamically create objects (see above) within dynamically created objects (say, implementations of an abstract base class IBar - BarOne and BarTwo - again via a factory BarFactory).
Let's say BarOne requires an IFoo object but BarTwo doesn't. I still have to provide the FooFactory to the BarFactory in any case, since one of the IBar implementations might need it. Having globally accessible factories would mitigate this problem, and I wouldn't be forced to foresee which factories may be needed by implementations of a specific interface. In addition, I could register the creation methods directly in the source file of the implementations.
FooFactory::Instance().registerCreator("new_creator", boost::bind(...));
Since I think it is a good idea, what would be the right way to implement it? I was going for a templated approach like the SingletonHolder from Modern C++ Design (see also the Loki library) to wrap the factories. However, I'd rather implement it as a Meyers singleton instead. But I still think there will be issues with shared libraries. The solution should work with GCC (and preferably MSVC). I'm also open to other ideas from a design point of view, but please avoid the common "Singletons are evil" rants. ;-)
Thanks in advance.
Hopefully, 76 lines of code speak more than a few words - (using C++11 version of these features instead of the boost ones, but they're pretty much the same anyway)
I would put the definition of the factory(ies) and the definition of the creators in the same (or nearby) scope, so that each of the creators can "see" any of their dependent factories - avoiding the need to pass factories around too much, and avoiding singletons
Cars & Sirens:
class ISiren {};
class Siren : public ISiren
{
public:
Siren() { std::cout << "Siren Created" << std::endl; }
};
class ICar{};
class EstateCar : public ICar
{
public:
EstateCar() { std::cout << "EstateCar created" << std::endl;}
};
class PoliceCar : public ICar
{
std::shared_ptr<ISiren> siren;
public:
PoliceCar( std::shared_ptr<ISiren> siren)
: siren( siren )
{
std::cout << "PoliceCar created" << std::endl;
}
};
Factories:
typedef std::function< std::shared_ptr<ICar> () > CreatorType;
class CarFactory
{
std::map<std::string, CreatorType> creators;
public:
void AddFactory( std::string type, CreatorType func )
{
creators.insert( std::make_pair(type, func) );
}
std::shared_ptr<ICar> CreateCar( std::string type )
{
CreatorType& create( creators.at(type) ); // at() throws for unknown types instead of silently inserting an empty creator
return create();
}
};
class SirenFactory
{
public: // Simple factory creating 1 siren type just for brevity
std::shared_ptr<ISiren> CreateSiren() { return std::make_shared<Siren>(); }
};
"Factory Root" (main function, wherever factories are defined) :
int main()
{
CarFactory car_factory; // Car factory unaware of Siren factory
SirenFactory siren_factory;
auto EstateCarLambda = []() {
return std::make_shared<EstateCar>();
}; // Estate car lambda knows nothing of the Siren Factory
auto PoliceCarLambda = [&siren_factory]() {
return std::make_shared<PoliceCar>( siren_factory.CreateSiren() );
}; // Police car creation lambda using the Siren Factory
car_factory.AddFactory( "EstateCar", EstateCarLambda );
car_factory.AddFactory( "PoliceCar", PoliceCarLambda );
std::shared_ptr<ICar> car1 = car_factory.CreateCar( "EstateCar" );
std::shared_ptr<ICar> car2 = car_factory.CreateCar( "PoliceCar" );
}

c++ singleton implementation : pimpl idiom for singletons, pros and cons

When implementing singletons in c++, I see two ways to store implementation data :
(A) put all implementation data in the private section and implement class as usual
(B) "pimpl idiom for singletons" : hide implementation data by placing it to the 'Impl' structure, which can be defined in the implementation file. Private section contains only a reference to the implementation structure.
Here is a concept code to clarify what I mean by (A) and (B) implementation options :
(A) SingletonClassMembers.hpp :
// a lot of includes required by private section
#include "HelperClass1.hpp"
#include "HelperClass2.hpp"
// some includes required by public section
// ...
class SingletonClassMembers {
public:
static SingletonClassMembers& getInstance();
// public methods
private:
SingletonClassMembers ();
~SingletonClassMembers();
SingletonClassMembers (const SingletonClassMembers&); //not implemented
SingletonClassMembers& operator=(const SingletonClassMembers&); //not implemented
HelperClass1 mMember1;
HelperClass2 mMember2; //and so on
(A) SingletonClassMembers.cpp :
#include "SingletonClassMembers.hpp"
SingletonClassMembers& SingletonClassMembers::getInstance() {
static SingletonClassMembers sImpl;
return sImpl;
}
(B) SingletonHiddenImpl.hpp :
// some includes required by public section
// ...
class SingletonHiddenImpl {
public:
static SingletonHiddenImpl& getInstance();
// public methods
private:
SingletonHiddenImpl ();
~SingletonHiddenImpl ();
SingletonHiddenImpl (const SingletonHiddenImpl&); //not implemented
SingletonHiddenImpl& operator=(const SingletonHiddenImpl&); //not implemented
struct Impl;
Impl& mImpl;
};
(B) SingletonHiddenImpl.cpp :
#include "SingletonHiddenImpl.hpp"
#include "HelperClass1.hpp"
#include "HelperClass2.hpp"
struct SingletonHiddenImpl::Impl {
HelperClass1 member1;
HelperClass2 member2;
};
static inline SingletonHiddenImpl::Impl& getImpl () {
static SingletonHiddenImpl::Impl sImpl;
return sImpl;
}
SingletonHiddenImpl::SingletonHiddenImpl ()
: mImpl (getImpl())
{
}
So, using the (B) approach, you can hide implementation details better, and (unlike the pimpl idiom for ordinary classes) there's no performance loss. I can't imagine conditions where the (A) approach would be more appropriate.
The question is: what are the advantages of storing implementation data as class members (A)?
Thank you
Using case A has the following benefits:
You reduce the dependency between the classes SingletonClassMembers and SingletonHiddenImpl.
You don't need to create a configurator pattern in SingletonClassMembers if you are trying to avoid restriction (1) via dependency injection.
This point is weak, but anyway: it is simpler to maintain a single class.
In a multithreaded environment you would need to support a synchronization mechanism in both classes, while with a single class only a single lock is needed.
As you only have one instance of your singleton, you can actually move your helpers into the implementation class as "statics" there, without requiring them to be private inside the header. Of course, you don't want to initialise them until you start your class, so you would use some kind of smart pointer - it could be auto_ptr here, or boost::scoped_ptr, or a pointer with boost::once initialisation (more thread-safe) - with deletes in your singleton's destructor.
You can call this model C, and it probably has the best of both worlds, as you completely hide your implementation.
As is the case with any singleton, you need to be extra careful not to throw in your constructor.
When considering efficiency with pimpl, it is not the heap that causes overhead, but the indirection (done by delegation). This delegation typically isn't optimized out (at least not at the time I was considering this ;-)), so there is not a big gain apart from the startup (1 time) penalty for creating the impl. (BTW, I didn't see any delegation functions in your example)
So I don't see that much difference in using pimpl in normal classes or in singletons. I think in both case, using pimpl for classes with limited interface and heavy implementation, it makes sense.

Call a C++ base class method automatically

I'm trying to implement the command design pattern, but I'm stumbling across a conceptual problem. Let's say you have a base class and a few subclasses, like in the example below:
class Command : public boost::noncopyable {
virtual ResultType operator()()=0;
//Restores the model state as it was before the command's execution.
virtual void undo()=0;
//Registers this command on the command stack. (Named registerCommand
//here because plain "register" is a reserved keyword in C++.)
void registerCommand();
};
class SomeCommand : public Command {
virtual ResultType operator()(); // Implementation doesn't really matter here
virtual void undo(); // Same
};
The thing is, every time operator() is called on a SomeCommand instance, I'd like to add *this to a stack (mostly for undo purposes) by calling the Command's registration method. I'd like to avoid calling it explicitly from SomeCommand::operator()(), and instead have it called automatically (somehow ;-) ).
I know that when you construct a subclass such as SomeCommand, the base class constructor is called automatically, so I could add the registration call there. The thing is, I don't want to register until operator()() is called.
How can I do this? I guess my design is somewhat flawed, but I don't really know how to make this work.
It looks as if you can benefit from the NVI (Non-Virtual Interface) idiom. There the interface of the command object would have no virtual methods, but would call into private extension points:
class command {
public:
void operator()() {
do_command();
add_to_undo_stack(this);
}
void undo() {
do_undo();
}
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
There are different advantages to this approach, first of which is that you can add common functionality in the base class. Other advantages are that the interface of your class and the interface of the extension points is not bound to each other, so you could offer different signatures in your public interface and the virtual extension interface. Search for NVI and you will get much more and better explanations.
Addendum: The original article by Herb Sutter where he introduces the concept (yet unnamed)
Split the operator in two different methods, e.g. execute and executeImpl (to be honest, I don't really like the () operator). Make Command::execute non-virtual, and Command::executeImpl pure virtual, then let Command::execute perform the registration, then call it executeImpl, like this:
class Command
{
public:
ResultType execute()
{
... // do registration
return executeImpl();
}
protected:
virtual ResultType executeImpl() = 0;
};
class SomeCommand : public Command
{
protected:
virtual ResultType executeImpl();
};
Assuming it's a 'normal' application with undo and redo, I wouldn't try and mix managing the stack with the actions performed by the elements on the stack. It will get very complicated if you either have multiple undo chains (e.g. more than one tab open), or when you do-undo-redo, where the command has to know whether to add itself to undo or move itself from redo to undo, or move itself from undo to redo. It also means you need to mock the undo/redo stack to test the commands.
If you do want to mix them, then you will have three template methods, each taking the two stacks (or the command object needs to have references to the stacks it operates on when created), and each performing the move or add, then calling the function. But if you do have those three methods, you will see that they don't actually do anything other than call public functions on the command and are not used by any other part of the command, so become candidates the next time you refactor your code for cohesion.
Instead, I'd create an UndoRedoStack class which has an execute_command(Command*command) function, and leave the command as simple as possible.
Basically Patrick's suggestion is the same as David's which is also the same as mine. Use NVI (non-virtual interface idiom) for this purpose. Pure virtual interfaces lack any kind of centralized control. You could alternatively create a separate abstract base class that all commands inherit, but why bother?
For detailed discussion about why NVIs are desirable, see C++ Coding Standards by Herb Sutter. There he goes so far as to suggest making all public functions non-virtual to achieve a strict separation of overridable code from public interface code (which should not be overridable so that you can always have some centralized control and add instrumentation, pre/post-condition checking, and whatever else you need).
class Command
{
public:
void operator()()
{
do_command();
add_to_undo_stack(this);
}
void undo()
{
// This might seem pointless now to just call do_undo but
// it could become beneficial later if you want to do some
// error-checking, for instance, without having to do it
// in every single command subclass's undo implementation.
do_undo();
}
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
If we take a step back and look at the general problem instead of the immediate question being asked, I think Pete offers some very good advice. Making Command responsible for adding itself to the undo stack is not particularly flexible. It can be independent of the container in which it resides. Those higher-level responsibilities should probably be a part of the actual container which you can also make responsible for executing and undoing the command.
Nevertheless, it should be very helpful to study NVI. I've seen too many developers write pure virtual interfaces like this out of the historical benefits they had only to add the same code to every subclass that defines it when it need only be implemented in one central place. It is a very handy tool to add to your programming toolbox.
I once had a project to create a 3D modelling application, and for that I had the same requirement. As far as I understood while working on it, no matter what, an operation should always know what it did and therefore should know how to undo it. So I created a base class for each operation and its operation state, as shown below.
class OperationState
{
protected:
Operation& mParent;
OperationState(Operation& parent);
public:
virtual ~OperationState();
Operation& getParent();
};
class Operation
{
private:
const std::string mName;
public:
Operation(const std::string& name);
virtual ~Operation();
const std::string& getName() const{return mName;}
virtual OperationState* operator ()() = 0;
virtual bool undo(OperationState* state) = 0;
virtual bool redo(OperationState* state) = 0;
};
Creating an operation and its state would look like:
class MoveState : public OperationState
{
public:
struct ObjectPos
{
Object* object;
Vector3 prevPosition;
};
MoveState(MoveOperation& parent):OperationState(parent){}
typedef std::list<ObjectPos> PrevPositions;
PrevPositions prevPositions;
};
class MoveOperation : public Operation
{
public:
MoveOperation():Operation("Move"){}
~MoveOperation();
// Implement the function and return the previous
// previous states of the objects this function
// changed.
virtual OperationState* operator ()();
// Implement the undo function
virtual bool undo(OperationState* state);
// Implement the redo function
virtual bool redo(OperationState* state);
};
There used to be a class called OperationManager. This registered different functions and created instances of them within it like:
OperationManager& opMgr = OperationManager::GetInstance();
opMgr.registerOperation<MoveOperation>();
The registration function (named registerOperation here, since plain "register" is a reserved keyword in C++) was like:
template <typename T>
void OperationManager::registerOperation()
{
T* op = new T();
const std::string& op_name = op->getName();
if(mOperations.count(op_name))
{
delete op;
}else{
mOperations[op_name] = op;
}
}
Whenever a function was to be executed, it would be based on the currently selected objects, or whatever it needs to work on. NOTE: In my case, I didn't need to send the details of how much each object should move, because that was being calculated by MoveOperation from the input device once it was set as the active function.
In the OperationManager, executing a function would be like:
void OperationManager::execute(const std::string& operation_name)
{
if(mOperations.count(operation_name))
{
Operation& op = *mOperations[operation_name];
OperationState* opState = op();
if(opState)
{
mUndoStack.push(opState);
}
}
}
When there's a necessity to undo, you do that from the OperationManager like:
OperationManager::GetInstance().undo();
And the undo function of the OperationManager looks like this:
void OperationManager::undo()
{
if(!mUndoStack.empty())
{
OperationState* state = mUndoStack.pop();
if(state->getParent().undo(state))
{
mRedoStack.push(state);
}else{
// Throw an exception or warn the user.
}
}
}
This kept the OperationManager unaware of what arguments each function needs, which made it easy to manage different functions.