Disclaimer: I haven't been able to clearly describe exactly what I am trying to do, so I hope the example will be clearer than my explanation! Please suggest any re-phrasing to make it clearer. :)
Is it possible to override functions with more specific versions than those required by an interface, in order to handle subclasses of the parameters of the interface's methods separately from the generic case? (Example and better explanation below.) If it can't be done directly, is there some pattern that achieves a similar effect?
Example
#include <iostream>
class BaseNode {};
class DerivedNode : public BaseNode {};
class NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) = 0;
};
class MyNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node)
{
std::cout << "Processing a node." << std::endl;
}
virtual void processNode(DerivedNode* node)
{
std::cout << "Special processing for a DerivedNode." << std::endl;
}
};
int main()
{
BaseNode* bn = new BaseNode();
DerivedNode* dn = new DerivedNode();
NodeProcessingInterface* processor = new MyNodeProcessor();
// Calls MyNodeProcessor::processNode(BaseNode) as expected.
processor->processNode(bn);
// Calls MyNodeProcessor::processNode(BaseNode).
// I would like this to call MyNodeProcessor::processNode(DerivedNode).
processor->processNode(dn);
delete bn;
delete dn;
delete processor;
return 0;
}
My motivation
I want to be able to implement several different concrete NodeProcessors some of which will treat all nodes the same (i.e. implement only what is shown in the interface) and some of which will distinguish between different types of node (as in MyNodeProcessor). So I would like the second call to processNode(dn) to use the implementation in MyNodeProcessor::processNode(DerivedNode) by overloading (some parts/subclasses of) the interface methods. Is that possible?
Obviously if I change processor to be of type MyNodeProcessor* then this works as expected, but I need to be able to use different node processors interchangeably.
I can also get around this by having a single method processNode(BaseNode) which checks the precise type of its argument at run-time and branches based on that. It seems inelegant to me to include this check in my code (especially as the number of node types grows and I have a giant switch statement). I feel like the language should be able to help.
I am using C++ but I'm interested in general answers as well if you prefer (or if this is easier/different in other languages).
No, that's not possible this way. Virtual dispatch happens at run time, but overload resolution happens at compile time, using the static type of the processor pointer, namely NodeProcessingInterface. Since that base type declares only one processNode, only that one virtual function (or its overriding implementations) will be called. The compiler has no way to determine that there might be a derived NodeProcessor class implementing more specialized overloads.
So, instead of diversifying the methods in derived classes, you'd have to do it the other way round: declare all the different virtual functions that you need in the base class and override them as needed:
class NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) = 0;
//simplify the method definition for complex node hierarchies:
#define PROCESS(_derived_, _base_) \
virtual void processNode(_derived_* node) { \
processNode(static_cast<_base_*>(node)); \
}
PROCESS(DerivedNode, BaseNode)
PROCESS(FurtherDerivedNode, DerivedNode)
PROCESS(AnotherDerivedNode, BaseNode)
#undef PROCESS
};
class BoringNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) override
{
std::cout << "It's all the same.\n";
}
};
class InterestingNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) override
{
std::cout << "A Base.\n";
}
virtual void processNode(DerivedNode* node) override
{
std::cout << "A Derived.\n";
}
};
You're correct that you don't want to do type-checking. That would violate the Open-Closed Principle -- because every time you added a specialized node type you'd have to modify this method.
What you're describing sounds similar to a plugin architecture, or the bridge pattern.
If you use inheritance rather than overloading -- i.e. move the specialized processNode into a subclass of MyNodeProcessor -- I think that will give you what you want.
EDIT:
Or, along slightly different lines, you could make the node processor a template class and use partial specialization to get the behavior you want.
Well, if drifting from C++ is fine: I think what you want is called "categories" in Objective-C. You might find this link interesting: http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/CustomizingExistingClasses/CustomizingExistingClasses.html
Related
I have a class that will serve as the base class for (many) other classes. The derived classes each have a slight variation in their logic around a single function, which itself will be one of a fixed set of external functions. I aim for something which is efficient and clear, and which requires a minimal amount of additional code per new derived class:
Here is what I have come up with:
// ctor omitted for brevity
class Base
{
public:
void process(batch_t &batch)
{
if (previous) previous->process(batch);
pre_process(batch);
proc.process(batch);
post_process(batch);
}
protected:
// no op unless overridden
virtual void pre_process(batch_t &batch) {}
virtual void post_process(batch_t &batch) {}
Processor proc;
Base* previous;
};
Expose the 'process' function which follows a set pattern
The core logic of the function is defined by a drop in class 'Processor'
Allow modification of this pattern via two virtual functions, which define additional work done before/after the call to Processor::process
Sometimes, this object has a handle to another which must do something else before it; for this I have a pointer 'previous'
Does this design seem good or are there some glaring holes I haven't accounted for? Or are there other common patterns used in situations like this?
Does this design seem good or are there some glaring holes I haven't accounted for? Or are there other common patterns used in situations like this?
Without knowing more about your goals, all I can say is that it seems quite sensible. So sensible, in fact, that there's a common name for this idiom: the "Non-Virtual Interface". The Gang of Four describe the same structure as the "Template Method" design pattern.
You are currently using the so called "Template Method" pattern (see, for instance, here). You have to note that it uses inheritance to essentially modify the behaviour of the process(batch) function by overriding the pre_process and post_process methods. This creates strong coupling. For instance, if you subclass your base class to use a particular pre_process implementation, then you can't use this implementation in any other subclass without duplicating code.
I personally would go with the "Strategy" pattern (see, for instance, here) which is more flexible and allows code re-use more easily, as follows:
struct PreProcessor {
virtual void process(batch_t&) = 0;
};
struct PostProcessor {
virtual void process(batch_t&) = 0;
};
class Base {
public:
//ctor taking pointers to subclasses of PreProcessor and PostProcessor
void process(batch_t &batch)
{
if (previous) previous->process(batch);
pre_proc->process(batch);
proc.process(batch);
post_proc->process(batch);
}
private:
PreProcessor* pre_proc;
Processor proc;
PostProcessor* post_proc;
Base* previous;
};
Now, you can create subclasses of PreProcessor and PostProcessor which you can mix and match and then pass to your Base class. You can of course apply the same approach for your Processor class.
Given your information, I don't see any benefit to using inheritance (one Base and many Derived classes) here. Writing a whole new class just because you have a new pair of pre/post-process steps is not a good idea. Not to mention, it will make these steps difficult to reuse.
I recommend a more composable design:
typedef void (*Handle)(batch_t&);
class Foo
{
public:
Foo(Handle pre, Handle post, Foo* previous) :
m_pre(pre),
m_post(post),
m_previous(previous) {}
void process(batch_t& batch)
{
if (m_previous) m_previous->process(batch);
(*m_pre)(batch);
m_proc.process(batch);
(*m_post)(batch);
}
private:
Processor m_proc;
Handle m_pre;
Handle m_post;
Foo* m_previous;
};
This way, you can create any customized Foo object with any logic of pre/post process you want. If the creation is repetitive, you can always extract it into a createXXX method of a FooFactory class.
P.S.: if you don't like function pointers, you can use anything that represents a function, such as an interface with one method, or a lambda expression, etc.
tl;dr
My goal is to conditionally provide implementations for abstract virtual methods in an intermediate workhorse template class (depending on template parameters), but to leave them abstract otherwise so that classes derived from the template are reminded by the compiler to implement them if necessary.
I am also grateful for pointers towards better solutions in general.
Long version
I am working on an extensible framework to perform "operations" on "data". One main goal is to allow XML configs to determine program flow, and allow users to extend both allowed data types and operations at a later date, without having to modify framework code.
If either one (operations or data types) is kept fixed architecturally, there are good patterns to deal with the problem. If allowed operations are known ahead of time, use abstract virtual functions in your data types (new data have to implement all required functionality to be usable). If data types are known ahead of time, use the Visitor pattern (where the operation has to define virtual calls for all data types).
Now if both are meant to be extensible, I could not find a well-established solution.
My solution is to declare them independently from one another and then register "operation X for data type Y" via an operation factory. That way, users can add new data types, or implement additional or alternative operations and they can be produced and configured using the same XML framework.
If you create a matrix of (all data types) x (all operations), you end up with a lot of classes. Hence they should be as minimal as possible, eliminating trivial boilerplate code as far as possible, and this is where I could use some inspiration and help.
There are many operations that will often be trivial, but might not be in specific cases, such as Clone() and some more (omitted here for "brevity"). My goal is to conditionally provide implementations for abstract virtual methods if appropriate, but to leave them abstract otherwise.
Some solutions I considered
As in example below: provide default implementation for trivial operations. Consequence: Nontrivial operations need to remember to override with their own methods. Can lead to run-time problems if some future developer forgets to do that.
Do NOT provide defaults. Consequence: Nontrivial functions need to be basically copy & pasted for every final derived class. Lots of useless copy&paste code.
Provide an additional template class derived from cOperation base class that implements the boilerplate functions and nothing else (template parameters similar to specific operation workhorse templates). Derived final classes inherit from their concrete operation base class and that template. Consequence: both concreteOperationBase and boilerplateTemplate need to inherit virtually from cOperation. Potentially some run-time overhead, from what I found on SO. Future developers need to let their operations inherit virtually from cOperation.
std::enable_if magic. Didn't get the combination of virtual functions and templates to work.
Here is a (fairly) minimal compilable example of the situation:
//Base class for all operations on all data types. Will be inherited from. A lot. Base class does not define any concrete operation interface, nor does it necessarily know any concrete data types it might be performed on.
class cOperation
{
public:
virtual ~cOperation() {}
virtual std::unique_ptr<cOperation> Clone() const = 0;
virtual bool Serialize() const = 0;
//... more virtual calls that can be either trivial or quite involved ...
protected:
cOperation(const std::string& strOperationID, const std::string& strOperatesOnType)
: m_strOperationID(strOperationID)
, m_strOperatesOnType(strOperatesOnType)
{
//empty
}
private:
std::string m_strOperationID;
std::string m_strOperatesOnType;
};
//Base class for all data types. Will be inherited from. A lot. Does not know any operations that might be performed on it.
struct cDataTypeBase
{
virtual ~cDataTypeBase() {}
};
Now, I'll define an example data type.
//Some concrete data type. Still does not know any operations that might be performed on it.
struct cDataTypeA : public cDataTypeBase
{
static const std::string& GetDataName()
{
static const std::string strMyName = "cDataTypeA";
return strMyName;
}
};
And here is an example operation. It defines a concrete operation interface, but does not know the data types it might be performed on.
//Some concrete operation. Does not know all data types it might be expected to work on.
class cConcreteOperationX : public cOperation
{
public:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) = 0;
protected:
cConcreteOperationX(const std::string& strOperatesOnType)
: cOperation("concreteOperationX", strOperatesOnType)
{
//empty
}
};
The following template is meant to be the boilerplate workhorse. It implements as much trivial and repetitive code as possible and is provided alongside the concrete operation base class - concrete data types are still unknown, but are meant to be provided as template parameters.
//ConcreteOperationTemplate: absorb as much common/trivial code as possible, so concrete derived classes can have minimal code for easy addition of more supported data types
template <typename ConcreteDataType, typename DerivedOperationType, bool bHasTrivialCloneAndSerialize = false>
class cConcreteOperationXTemplate : public cConcreteOperationX
{
public:
//Can perform datatype cast here:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) override
{
const ConcreteDataType* pCastData = dynamic_cast<const ConcreteDataType*>(&dataBase);
if (pCastData == nullptr)
{
return false;
}
return doSomeConcreteOperationXOnCastData(*pCastData);
}
protected:
cConcreteOperationXTemplate()
: cConcreteOperationX(ConcreteDataType::GetDataName()) //requires ConcreteDataType to have a static method returning something appropriate
{
//empty
}
private:
//Clone can be implemented here via CRTP
virtual std::unique_ptr<cOperation> Clone() const override
{
return std::unique_ptr<cOperation>(new DerivedOperationType(*static_cast<const DerivedOperationType*>(this)));
}
//TODO: Some Magic here to enable trivial serializations, but leave non-trivials abstract
//Problem with the current code: Serialize() gets overridden here even when bHasTrivialCloneAndSerialize == false
virtual bool Serialize() const override
{
return true;
}
virtual bool doSomeConcreteOperationXOnCastData(const ConcreteDataType& castData) = 0;
};
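One way to get the conditional override the TODO asks for, without enable_if, is to route the inheritance through an intermediate layer: its primary template supplies the trivial Serialize(), while its specialization supplies nothing, so the method stays pure virtual and the compiler issues the desired reminder. A reduced sketch with hypothetical class names:

```cpp
// Reduced stand-in for the operation base class.
struct cOperation {
    virtual ~cOperation() {}
    virtual bool Serialize() const = 0;
};

// Primary template: provides the trivial implementation.
template <bool TrivialSerialize, typename Base>
struct SerializeLayer : Base {
    bool Serialize() const override { return true; }
};

// Specialization: adds nothing, so Serialize() stays pure virtual and the
// compiler forces the final class to implement it.
template <typename Base>
struct SerializeLayer<false, Base> : Base {};

template <typename DerivedOperationType, bool TrivialSerialize>
struct cOperationTemplate : SerializeLayer<TrivialSerialize, cOperation> {};

// Trivial case: Serialize() comes for free.
struct TrivialOp : cOperationTemplate<TrivialOp, true> {};

// Non-trivial case: omitting Serialize() here would leave the class abstract,
// which is exactly the compile-time reminder the question asks for.
struct SpecialOp : cOperationTemplate<SpecialOp, false> {
    bool Serialize() const override { return false; }
};
```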
Here are two implementations of the example operation on the example data type. One of them will be registered as the default operation, to be used if the user does not declare anything else in the config, and the other is a potentially much more involved non-default operation that might take many additional parameters into account (these would then have to be serialized in order to be correctly re-instantiated on the next program run). These operations need to know both the operation and the data type they relate to, but could potentially be implemented at a much later time, or in a different software component where the specific combination of operation and data type are required.
//Implementation of operation X on type A. Needs to know both of these, but can be implemented if and when required.
class cConcreteOperationXOnTypeADefault : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeADefault, true>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
};
//Different implementation of operation X on type A.
class cConcreteOperationXOnTypeASpecialSauce : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeASpecialSauce/*, false*/>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
//Problem: Compiler does not remind me that cConcreteOperationXOnTypeASpecialSauce might need to implement this method
//virtual bool Serialize() override {}
};
int main(int argc, char* argv[])
{
std::map<std::string, std::map<std::string, std::unique_ptr<cOperation>>> mapOpIDAndDataTypeToOperation;
//...fill map, e.g. via XML config / factory method...
const cOperation& requestedOperation = *mapOpIDAndDataTypeToOperation.at("concreteOperationX").at("cDataTypeA");
//...do stuff...
return 0;
}
If your data types are not polymorphic (i.e. for each operation call you know both the operation type and the data type at compile time), you may consider the following approach:
#include<iostream>
#include<string>
template<class T>
void empty(T t){
std::cout<<"warning about missing implementation"<<std::endl;
}
template<class T>
void simple_plus(T){
std::cout<<"simple plus"<<std::endl;
}
void plus_string(std::string){
std::cout<<"plus string"<<std::endl;
}
template<class Data, void Implementation(Data)>
class Operation{
public:
static void exec(Data d){
Implementation(d);
}
};
#define macro_def(OperationName) template<class T> class OperationName : public Operation<T, empty<T>>{};
#define macro_template_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName<TypeName>>{};
#define macro_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName>{};
// this part may be generated on base of .xml file and put into .h file, and then just #include generated.h
macro_def(Plus)
macro_template_inst(int, Plus, simple_plus)
macro_template_inst(double, Plus, simple_plus)
macro_inst(std::string, Plus, plus_string)
int main() {
Plus<int>::exec(2);
Plus<double>::exec(2.5);
Plus<float>::exec(2.5);
Plus<std::string>::exec("abc");
return 0;
}
A downside of this approach is that you'd have to compile the project in two steps: 1) transform the .xml into a .h file; 2) compile the project using the generated .h file. On the plus side, the compiler/IDE (I use Qt Creator with MinGW) gives a warning about the unused parameter t in
void empty(T t)
along with the instantiation trace showing where it was called from.
I would like to make the runtime type of a local variable depend on some condition. Say we have this situation:
#include <iostream>
class Base{
public:
virtual void foo()=0;
};
class Derived1 : public Base {
virtual void foo(){
std::cout << "D1" << std::endl;
}
};
class Derived2 : public Base {
virtual void foo(){
std::cout << "D2" << std::endl;
}
};
In Java-like languages where objects are always handled through "references" the solution is simple (pseudocode):
Base x = condition ? Derived1() : Derived2();
The C++ solution will obviously involve pointers (at least behind the scenes), since there is no other way to bring two different types under the same variable (which must have a type). It cannot be simply Base as Base objects cannot be constructed (it has a pure virtual function).
The simplest way would be to use raw pointers:
Base* x = condition ? static_cast<Base*>(new Derived1()) : static_cast<Base*>(new Derived2());
(The casts are needed to make the two branches of the ternary operator have the same type)
Manual pointer handling is error-prone and old-school; this situation calls for a unique_ptr.
std::unique_ptr<Base> x{condition ? static_cast<Base*>(new Derived1()) : static_cast<Base*>(new Derived2())};
Eh... Not exactly what I'd call elegant. It uses explicit new and casting. I hoped to use something like std::make_unique to hide the new but it doesn't seem possible.
Is this just one of those situations where you conclude "C++ is like that, if you need elegance use other languages (perhaps making a trade-off on other aspects)"?
Or is this whole idea just totally un-C++-ish? Am I in the wrong mindset here, trying to force ideas from different languages on C++?
Is this just one of those situations where you conclude "C++ is like that, if you need elegance use other languages (perhaps making a trade-off on other aspects)"?
Or is this whole idea just totally un-C++-ish? Am I in the wrong mindset here, trying to force ideas from different languages on C++?
It really depends on what you are going to use x for.
Variants
The C++ solution will obviously involve pointers (at least behind the scenes), since there is no other way to bring two different types under the same variable (which must have a type).
You can also use boost::variant (or boost::any, but boost::variant might be better in this case). For example, given that Derived1 is default constructible:
boost::variant<Derived1, Derived2> x;
if (!condition) x = Derived2();
This will work even if Derived1 and Derived2 don't share a base class. Then you can use the visitor pattern to operate on x. Given, for example:
struct Derived1 {
void foo1(){
std::cout << "D1" << std::endl;
}
};
struct Derived2 {
void foo2(){
std::cout << "D2" << std::endl;
}
};
then you can define the visitor as:
class some_visitor : public boost::static_visitor<void> {
public:
void operator()(Derived1& x) const {
x.foo1();
}
void operator()(Derived2& x) const {
x.foo2();
}
};
and use it as:
boost::apply_visitor(some_visitor(), x);
Live demo
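As a side note, since C++17 the standard library offers the same facility: std::variant with std::visit. A sketch of the example above, restated so the visitor returns its result:

```cpp
#include <string>
#include <variant>

struct Derived1 { std::string foo1() { return "D1"; } };
struct Derived2 { std::string foo2() { return "D2"; } };

struct some_visitor {
    std::string operator()(Derived1& x) const { return x.foo1(); }
    std::string operator()(Derived2& x) const { return x.foo2(); }
};

std::string run(bool condition) {
    std::variant<Derived1, Derived2> x;  // default-constructs Derived1
    if (!condition) x = Derived2();
    return std::visit(some_visitor{}, x);
}
```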
Polymorphic calls
If you really need to use x polymorphically, then yes, std::unique_ptr is ok. And just call your polymorphic function as x->foo():
std::unique_ptr<Base> x = condition ? std::unique_ptr<Base>(new Derived1()) : std::unique_ptr<Base>(new Derived2());
Live demo
Concepts/Templates
If you just need to call a function, then you might be better off defining a concept and expressing it with templates:
template<class Type>
void my_func(Type& x) { x.foo(); }
You'll be able to define concepts explicitly in future C++ versions too.
Live demo
One 'radical' possibility is to create a new kind of make_unique that creates a correctly-typed return value:
template<typename TReal, typename TOutside, typename... Args>
auto make_base_unique(Args&&... args) -> std::unique_ptr<TOutside>
{
return std::unique_ptr<TOutside>(new TReal(std::forward<Args>(args)...));
}
Then use it like:
auto x = (condition ? make_base_unique<Derived1,Base>() : make_base_unique<Derived2,Base>());
I have a question about implementing interface in C++:
Suppose there is an interface:
class A
{
virtual void f() = 0;
};
When implementing this, I wonder if there's a way to do something like:
class B : public A {
void f(int arg=0) {....} // unfortunately it does not implement f() this way
};
I want to keep the interface clean. When client code calls through the public interface A, arg is always set to 0 automatically. However, when I call it through B, I have the flexibility to set arg to some different value. Is that achievable?
EDIT: Since I control the interface and implementation, I am open to any suggestions, Macros, templates, functors, or anything else that makes sense. I just want to have a minimal and clean code base. The class is big, and I don't want to write any code that not absolutely necessary - e.g. another function that simply forwards to the actual implementation.
EDIT2: Just want to clarify a bit: The public interface is provided to client. It is more restrictive than Class B interface, which is only used internally. However the function f() is essentially doing the same thing, other than minor different treatment based on input arg. The real class has quite some interface functions, and the signature is complex. Doing function forwarding quickly results in tedious code repetition, and it pollutes the internal interface for B. I wonder what is the best way to deal with this in C++.
Thanks!
Yes, just make two separate functions:
class B : public A {
void f() { return f(0); }
void f(int arg) { .... }
};
When you have an interface, the basic principle should be that a function ALWAYS takes the same arguments and ALWAYS operates in the same way, no matter what the derived class is doing. Adding extra arguments is not allowed, because that presumes that the "thing" that operates on the object "knows" what the argument is/does.
There are several ways around this problem; three that spring to mind immediately are:
Add the argument to the interface/baseclass.
Don't use an argument, but some extra function that [when the derived object is created or some other place that "knows the difference"] stores the extra information inside the object that needs it.
Add another class that "knows" what the argument is inside the class.
An example of the second one would be:
class B: public A
{
private:
int x;
public:
B() : x(0) { ... } // default is 0.
void f() { ... uses x ... }
void setX(int newX) { x = newX; };
int getX() { return x; }
};
So, when you want to use x with another value than zero, you call bobject->setX(42); or something like that.
From your descriptions I'd say you should provide two classes, each with a specific responsibility: one to implement the desired functionality, the other to provide the needed interface to the client. That way you separate concerns and don't violate the SRP:
class BImpl {
public:
void doF(int arg);
};
class B : public A {
BImpl impl;
public:
virtual void f() override {
impl.doF(0);
}
};
Doing function forwarding quickly results in tedious code repetition, and it pollutes the internal interface for B. I wonder what is the best way to deal with this in C++.
It sounds like you need to write a script to automate part of the process.
I'm trying to implement the command design pattern, but I'm stumbling across a conceptual problem. Let's say you have a base class and a few subclasses, like in the example below:
class Command : public boost::noncopyable {
virtual ResultType operator()()=0;
//Restores the model state as it was before the command's execution.
virtual void undo()=0;
//Registers this command on the command stack. (Named registerCommand
//because "register" is a reserved keyword in C++.)
void registerCommand();
};
class SomeCommand : public Command {
virtual ResultType operator()(); // Implementation doesn't really matter here
virtual void undo(); // Same
};
The thing is, every time operator() is called on a SomeCommand instance, I'd like to add *this to a stack (mostly for undo purposes) by calling the Command's registration method. I'd like to avoid calling it from SomeCommand::operator()(), but to have it called automatically (someway ;-) )
I know that when you construct a subclass such as SomeCommand, the base class constructor is called automatically, so I could add a call to the registration method there. The thing is, I don't want to register until operator()() is called.
How can I do this? I guess my design is somewhat flawed, but I don't really know how to make this work.
It looks as if you can benefit from the NVI (Non-Virtual Interface) idiom. There the interface of the command object would have no virtual methods, but would call into private extension points:
class command {
public:
void operator()() {
do_command();
add_to_undo_stack(this);
}
void undo() {
do_undo();
}
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
There are several advantages to this approach, the first being that you can add common functionality in the base class. Another is that the public interface of your class and the interface of the extension points are not bound to each other, so you could offer different signatures in your public interface than in the virtual extension interface. Search for NVI and you will find much more detailed explanations.
Addendum: The original article by Herb Sutter where he introduces the concept (yet unnamed)
Split the operator into two different methods, e.g. execute and executeImpl (to be honest, I don't really like the () operator). Make Command::execute non-virtual and Command::executeImpl pure virtual, then let Command::execute perform the registration and then call executeImpl, like this:
class Command
{
public:
ResultType execute()
{
... // do registration
return executeImpl();
}
protected:
virtual ResultType executeImpl() = 0;
};
class SomeCommand : public Command
{
protected:
virtual ResultType executeImpl();
};
Assuming it's a 'normal' application with undo and redo, I wouldn't try to mix managing the stack with the actions performed by the elements on the stack. It will get very complicated if you have multiple undo chains (e.g. more than one tab open), or when you do-undo-redo, where the command has to know whether to add itself to the undo stack, move itself from redo to undo, or move itself from undo to redo. It also means you need to mock the undo/redo stack to test the commands.
If you do want to mix them, then you will have three template methods, each taking the two stacks (or the command object needs references to the stacks it operates on when created), and each performing the move or add before calling the function. But if you do have those three methods, you will see that they don't actually do anything other than call public functions on the command, and are not used by any other part of the command, so they become candidates for removal the next time you refactor your code for cohesion.
Instead, I'd create an UndoRedoStack class which has an execute_command(Command*command) function, and leave the command as simple as possible.
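A sketch of that separation (names are illustrative): the stack owns the command lifetimes and all undo/redo bookkeeping, while commands only know how to execute and undo themselves.

```cpp
#include <memory>
#include <vector>

struct Command {
    virtual ~Command() {}
    virtual void execute() = 0;
    virtual void undo() = 0;
};

class UndoRedoStack {
public:
    void execute_command(std::unique_ptr<Command> cmd) {
        cmd->execute();
        undo_.push_back(std::move(cmd));
        redo_.clear();  // a fresh action invalidates the redo chain
    }
    void undo() {
        if (undo_.empty()) return;
        undo_.back()->undo();
        redo_.push_back(std::move(undo_.back()));
        undo_.pop_back();
    }
    void redo() {
        if (redo_.empty()) return;
        redo_.back()->execute();
        undo_.push_back(std::move(redo_.back()));
        redo_.pop_back();
    }
private:
    std::vector<std::unique_ptr<Command>> undo_;
    std::vector<std::unique_ptr<Command>> redo_;
};
```

The do-undo-redo bookkeeping the previous paragraphs warn about lives in exactly one place, and commands can be unit-tested without any stack at all.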
Basically Patrick's suggestion is the same as David's which is also the same as mine. Use NVI (non-virtual interface idiom) for this purpose. Pure virtual interfaces lack any kind of centralized control. You could alternatively create a separate abstract base class that all commands inherit, but why bother?
For detailed discussion about why NVIs are desirable, see C++ Coding Standards by Herb Sutter. There he goes so far as to suggest making all public functions non-virtual to achieve a strict separation of overridable code from public interface code (which should not be overridable so that you can always have some centralized control and add instrumentation, pre/post-condition checking, and whatever else you need).
class Command
{
public:
void operator()()
{
do_command();
add_to_undo_stack(this);
}
void undo()
{
// This might seem pointless now to just call do_undo but
// it could become beneficial later if you want to do some
// error-checking, for instance, without having to do it
// in every single command subclass's undo implementation.
do_undo();
}
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
If we take a step back and look at the general problem instead of the immediate question being asked, I think Pete offers some very good advice. Making Command responsible for adding itself to the undo stack is not particularly flexible; a command can be kept independent of the container in which it resides. Those higher-level responsibilities should probably belong to the actual container, which you can also make responsible for executing and undoing the command.
Nevertheless, it should be very helpful to study NVI. I've seen too many developers write pure virtual interfaces like this out of habit, only to add the same code to every subclass that overrides them when it need only be implemented in one central place. It is a very handy tool to add to your programming toolbox.
I once had a project to create a 3D modelling application, and for that I had the same requirement. What I came to understand while working on it was that, no matter what, an operation should always know what it did and therefore should know how to undo it. So I created a base class for each operation and its operation state, as shown below.
class OperationState
{
protected:
Operation& mParent;
OperationState(Operation& parent);
public:
virtual ~OperationState();
Operation& getParent();
};
class Operation
{
private:
const std::string mName;
public:
Operation(const std::string& name);
virtual ~Operation();
const std::string& getName() const{return mName;}
virtual OperationState* operator ()() = 0;
virtual bool undo(OperationState* state) = 0;
virtual bool redo(OperationState* state) = 0;
};
Creating an operation and its state would look like:
class MoveState : public OperationState
{
public:
struct ObjectPos
{
Object* object;
Vector3 prevPosition;
};
MoveState(MoveOperation& parent):OperationState(parent){}
typedef std::list<ObjectPos> PrevPositions;
PrevPositions prevPositions;
};
class MoveOperation : public Operation
{
public:
MoveOperation():Operation("Move"){}
~MoveOperation();
// Implement the function and return the previous
// previous states of the objects this function
// changed.
virtual OperationState* operator ()();
// Implement the undo function
virtual bool undo(OperationState* state);
// Implement the redo function
virtual bool redo(OperationState* state);
};
There was also a class called OperationManager. It registered the different operations and created instances of them internally, like:
OperationManager& opMgr = OperationManager::GetInstance();
opMgr.registerOp<MoveOperation>();
(The method cannot be named "register", since register is a reserved keyword in C++.) The registration function was like:
template <typename T>
void OperationManager::registerOp()
{
T* op = new T();
const std::string& op_name = op->getName();
if(mOperations.count(op_name))
{
delete op;
}else{
mOperations[op_name] = op;
}
}
Whenever an operation was executed, it would act on the currently selected objects or whatever else it needed to work on. NOTE: In my case, I didn't need to send the details of how much each object should move, because that was calculated by MoveOperation from the input device once it was set as the active operation.
In the OperationManager, executing an operation looked like:
void OperationManager::execute(const std::string& operation_name)
{
if(mOperations.count(operation_name))
{
Operation& op = *mOperations[operation_name];
OperationState* opState = op();
if(opState)
{
mUndoStack.push(opState);
}
}
}
When there's a necessity to undo, you do that from the OperationManager like:
OperationManager::GetInstance().undo();
And the undo function of the OperationManager looks like this:
void OperationManager::undo()
{
if(!mUndoStack.empty())
{
OperationState* state = mUndoStack.top(); // std::stack-style containers separate top() and pop()
mUndoStack.pop();
if(state->getParent().undo(state))
{
mRedoStack.push(state);
}else{
// Throw an exception or warn the user.
}
}
}
This meant the OperationManager didn't need to know what arguments each operation took, which made it easy to manage many different operations.