I'm trying to implement the command design pattern, but I'm stumbling across a conceptual problem. Let's say you have a base class and a few subclasses, as in the example below:
class Command : public boost::noncopyable {
public:
virtual ResultType operator()() = 0;
//Restores the model state as it was before the command's execution.
virtual void undo() = 0;
//Registers this command on the command stack.
void registerCommand(); // "register" itself is a C++ keyword, so the method needs another name
};
class SomeCommand : public Command {
virtual ResultType operator()(); // Implementation doesn't really matter here
virtual void undo(); // Same
};
The thing is, every time operator() is called on a SomeCommand instance, I'd like to add *this to a stack (mostly for undo purposes) by calling the Command's registerCommand method. I'd like to avoid calling registerCommand from SomeCommand::operator()(), but to have it called automatically (somehow ;-) )
I know that when you construct a subclass such as SomeCommand, the base class constructor is called automatically, so I could add a call to registerCommand there. The thing is, I don't want to call registerCommand until operator()() is called.
How can I do this? I guess my design is somewhat flawed, but I don't really know how to make this work.
It looks as if you can benefit from the NVI (Non-Virtual Interface) idiom. There the interface of the command object would have no virtual methods, but would call into private extension points:
class command {
public:
void operator()() {
do_command();
add_to_undo_stack(this);
}
void undo() { do_undo(); }
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
There are different advantages to this approach, the first of which is that you can add common functionality in the base class. Other advantages are that the interface of your class and the interface of the extension points are not bound to each other, so you could offer different signatures in your public interface than in the virtual extension interface. Search for NVI and you will get many more and better explanations.
Addendum: the original article by Herb Sutter where he introduces the concept (as yet unnamed).
Split the operator into two different methods, e.g. execute and executeImpl (to be honest, I don't really like the () operator). Make Command::execute non-virtual and Command::executeImpl pure virtual, then have Command::execute perform the registration and then call executeImpl, like this:
class Command
{
public:
ResultType execute()
{
... // do registration
return executeImpl();
}
protected:
virtual ResultType executeImpl() = 0;
};
class SomeCommand : public Command
{
protected:
virtual ResultType executeImpl();
};
Assuming it's a 'normal' application with undo and redo, I wouldn't try to mix managing the stack with the actions performed by the elements on the stack. It will get very complicated if you either have multiple undo chains (e.g. more than one tab open), or when you do-undo-redo, where the command has to know whether to add itself to undo, move itself from redo to undo, or move itself from undo to redo. It also means you need to mock the undo/redo stack to test the commands.
If you do want to mix them, then you will have three template methods, each taking the two stacks (or the command object needs to have references to the stacks it operates on when created), and each performing the move or add, then calling the function. But if you do have those three methods, you will see that they don't actually do anything other than call public functions on the command and are not used by any other part of the command, so become candidates the next time you refactor your code for cohesion.
Instead, I'd create an UndoRedoStack class which has an execute_command(Command*command) function, and leave the command as simple as possible.
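A rough sketch of that separation might look like the following; it assumes the question's Command interface (ResultType operator()() and void undo()) and deliberately leaves ownership of the command objects to the caller:

#include <vector>

class UndoRedoStack
{
public:
    // Runs the command and records it; the commands themselves know nothing about stacks.
    ResultType execute_command(Command* command)
    {
        ResultType result = (*command)();
        undo_stack_.push_back(command);
        redo_stack_.clear();            // a new action invalidates the redo chain
        return result;
    }

    void undo()
    {
        if (undo_stack_.empty()) return;
        Command* command = undo_stack_.back();
        undo_stack_.pop_back();
        command->undo();
        redo_stack_.push_back(command);
    }

private:
    std::vector<Command*> undo_stack_;  // non-owning, just for the sketch
    std::vector<Command*> redo_stack_;
};

A redo() would mirror undo() in the other direction; the point is that the commands stay unaware of how they are stored.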
Basically Patrick's suggestion is the same as David's which is also the same as mine. Use NVI (non-virtual interface idiom) for this purpose. Pure virtual interfaces lack any kind of centralized control. You could alternatively create a separate abstract base class that all commands inherit, but why bother?
For detailed discussion about why NVIs are desirable, see C++ Coding Standards by Herb Sutter. There he goes so far as to suggest making all public functions non-virtual to achieve a strict separation of overridable code from public interface code (which should not be overridable so that you can always have some centralized control and add instrumentation, pre/post-condition checking, and whatever else you need).
class Command
{
public:
void operator()()
{
do_command();
add_to_undo_stack(this);
}
void undo()
{
// This might seem pointless now to just call do_undo but
// it could become beneficial later if you want to do some
// error-checking, for instance, without having to do it
// in every single command subclass's undo implementation.
do_undo();
}
private:
virtual void do_command() = 0;
virtual void do_undo() = 0;
};
If we take a step back and look at the general problem instead of the immediate question being asked, I think Pete offers some very good advice. Making Command responsible for adding itself to the undo stack is not particularly flexible; a command can be kept independent of the container in which it resides. Those higher-level responsibilities should probably be a part of the actual container, which you can also make responsible for executing and undoing the command.
Nevertheless, it should be very helpful to study NVI. I've seen too many developers write pure virtual interfaces like this out of habit, only to add the same code to every subclass that implements them when it need only be written in one central place. It is a very handy tool to add to your programming toolbox.
I once had a project to create a 3D modelling application, and for that I had the same requirement. What I came to understand while working on it was that, no matter what, an operation should always know what it did and therefore should know how to undo it. So I created a base class for each operation and its operation state, as shown below.
class OperationState
{
protected:
Operation& mParent;
OperationState(Operation& parent);
public:
virtual ~OperationState();
Operation& getParent();
};
class Operation
{
private:
const std::string mName;
public:
Operation(const std::string& name);
virtual ~Operation();
const std::string& getName() const{return mName;}
virtual OperationState* operator ()() = 0;
virtual bool undo(OperationState* state) = 0;
virtual bool redo(OperationState* state) = 0;
};
Creating an operation and its state would look like this:
class MoveState : public OperationState
{
public:
struct ObjectPos
{
Object* object;
Vector3 prevPosition;
};
MoveState(MoveOperation& parent):OperationState(parent){}
typedef std::list<ObjectPos> PrevPositions;
PrevPositions prevPositions;
};
class MoveOperation : public Operation
{
public:
MoveOperation():Operation("Move"){}
~MoveOperation();
// Implement the operation and return the previous
// states of the objects this operation changed.
virtual OperationState* operator ()();
// Implement the undo function
virtual bool undo(OperationState* state);
// Implement the redo function
virtual bool redo(OperationState* state);
};
There was also a class called OperationManager. It registered the different operations and created instances of them internally, like this:
OperationManager& opMgr = OperationManager::GetInstance();
opMgr.registerOperation<MoveOperation>();
The registration function was like:
template <typename T>
void OperationManager::registerOperation()
{
T* op = new T();
const std::string& op_name = op->getName();
if(mOperations.count(op_name))
{
delete op;
}else{
mOperations[op_name] = op;
}
}
Whenever an operation was to be executed, it would work on the currently selected objects or whatever else it needed to work on. NOTE: In my case, I didn't need to pass in how much each object should move, because that was calculated by MoveOperation from the input device once it was set as the active operation.
In the OperationManager, executing an operation looked like this:
void OperationManager::execute(const std::string& operation_name)
{
if(mOperations.count(operation_name))
{
Operation& op = *mOperations[operation_name];
OperationState* opState = op();
if(opState)
{
mUndoStack.push(opState);
}
}
}
When you need to undo, you do that through the OperationManager, like:
OperationManager::GetInstance().undo();
And the undo function of the OperationManager looks like this:
void OperationManager::undo()
{
if(!mUndoStack.empty())
{
OperationState* state = mUndoStack.top();
mUndoStack.pop();
if(state->getParent().undo(state))
{
mRedoStack.push(state);
}else{
// Throw an exception or warn the user.
}
}
}
This kept the OperationManager unaware of what arguments each operation needed, which made it easy to manage many different operations.
Related
I came across some open source C++ code and got curious: why do people design classes this way?
So first things first, here is the abstract class:
class BaseMapServer
{
public:
virtual ~BaseMapServer(){}
virtual void LoadMapInfoFromFile(const std::string &file_name) = 0;
virtual void LoadMapFromFile(const std::string &map_name) = 0;
virtual void PublishMap() = 0;
virtual void SetMap() = 0;
virtual void ConnectROS() = 0;
};
Nothing special here; having an abstract class can have several well understood reasons. So from this point, I thought maybe the author wanted to share common features among other classes. Here is the next class, which is a separate class but actually returns a pointer of the abstract class type mentioned above (this one is in the actual cpp file; the other two classes are in header files):
class MapFactory
{
BaseMapServer *CreateMap(
const std::string &map_type,
rclcpp::Node::SharedPtr node, const std::string &file_name)
{
if (map_type == "occupancy") return new OccGridServer(node, file_name);
else
{
RCLCPP_ERROR(node->get_logger(), "map_factory.cpp 15: Cannot load map %s of type %s", file_name.c_str(), map_type.c_str());
throw std::runtime_error("Map type not supported");
}
}
};
And now comes the interesting part; here is the child class of the abstract class:
class OccGridServer : public BaseMapServer
{
public:
explicit OccGridServer(rclcpp::Node::SharedPtr node) : node_(node) {}
OccGridServer(rclcpp::Node::SharedPtr node, std::string file_name);
OccGridServer(){}
~OccGridServer(){}
virtual void LoadMapInfoFromFile(const std::string &file_name);
virtual void LoadMapFromFile(const std::string &map_name);
virtual void PublishMap();
virtual void SetMap();
virtual void ConnectROS();
protected:
enum MapMode { TRINARY, SCALE, RAW };
// Info got from the YAML file
double origin_[3];
int negate_;
double occ_th_;
double free_th_;
double res_;
MapMode mode_ = TRINARY;
std::string frame_id_ = "map";
std::string map_name_;
// In order to do ROS2 stuff like creating a service we need a node:
rclcpp::Node::SharedPtr node_;
// A service to provide the occupancy grid map and the message with response:
rclcpp::Service<nav_msgs::srv::GetMap>::SharedPtr occ_service_;
nav_msgs::msg::OccupancyGrid map_msg_;
// Publish map periodically for the ROS1 via bridge:
rclcpp::TimerBase::SharedPtr timer_;
};
So what is the purpose of the MapFactory class?
To be more specific: what is the advantage of creating a class which holds a method returning a pointer to the abstract class BaseMapServer (a constructor of sorts, I believe), where this weird "constructor" allocates memory for a new OccGridServer object and returns it? I got confused just writing this. I really want to become a better C++ coder and I am desperate to know the secret behind these code designs.
The MapFactory class is used to create the correct subclass instance of BaseMapServer based on the parameters passed to it.
In this particular case there is only one child class instance, but perhaps there are plans to add more. Then when more are added the factory method can look something like this:
BaseMapServer *CreateMap(
const std::string &map_type,
rclcpp::Node::SharedPtr node, const std::string &file_name)
{
if (map_type == "occupancy") return new OccGridServer(node, file_name);
// create Type2Server
else if (map_type == "type2") return new Type2Server(node, file_name);
// create Type3Server
else if (map_type == "type3") return new Type3Server(node, file_name);
else
{
RCLCPP_ERROR(node->get_logger(),
"map_factory.cpp 15: Cannot load map %s of type %s",
file_name.c_str(), map_type.c_str());
throw std::runtime_error("Map type not supported");
}
}
This has the advantage that the caller doesn't need to know the exact subclass being used, and in fact the underlying subclass could potentially change or even be replaced under the hood without the calling code needing to be modified. The factory method internalizes this logic for you.
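For illustration only, calling code might look roughly like this (assuming CreateMap is made public and that node is an existing rclcpp::Node::SharedPtr; the file name is a placeholder):

MapFactory factory;
// The caller programs against the abstract interface; the concrete type is the factory's business.
std::unique_ptr<BaseMapServer> map(factory.CreateMap("occupancy", node, "map.yaml"));
map->LoadMapInfoFromFile("map.yaml");
map->PublishMap();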
It's a Factory pattern. See https://en.wikipedia.org/wiki/Factory_method_pattern. It looks like the current code only supports one implementation (OccGridServer), but more could be added at a future date. Conversely, if there's only ever likely to be one concrete implementation, then it's overdesign.
This is an example of the factory design pattern. The use case is this: there are several types of very similar classes that will be used in code. In this case, OccGridServer is the only one actually shown, but a generic explanation might reference hypothetical Dog, Cat, Otter, etc. classes. Because of their similarity, some polymorphism is desired: if they all inherit from a base class Animal they can share virtual class methods like ::genus, ::species, etc., and the derived classes can be pointed to or referred to with base class pointers/references. In your case, OccGridServer inherits from BaseMapServer; presumably there are other derived classes as well, which are handled through base class pointers/references.
If you know which derived class is needed at compile time, you would normally just call its constructor. The point of the factory design pattern is to simplify selection of a derived class when the particular derived class is not known until runtime. Imagine that a user picks their favorite animal by selecting a button or typing in a name. This generally means that somewhere there's a big if/else block that maps from some type of I/O disambiguator (string, enum, etc.) to a particular derived class type, calling its constructor. It's useful to encapsulate this in a factory pattern, which can act like a named constructor that takes this disambiguator as a "constructor" parameter and finds the correct derived class to construct.
Typically, by the way, CreateMap would be a static method of BaseMapServer. I don't see why a separate class for the factory function is needed in this case.
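As a sketch, that static variant might look like this (purely illustrative, not the project's actual code):

class BaseMapServer
{
public:
    virtual ~BaseMapServer() {}
    // ... pure virtual interface as before ...

    // Named constructor: the abstract base hands out the right concrete subclass.
    static BaseMapServer* CreateMap(const std::string& map_type,
                                    rclcpp::Node::SharedPtr node,
                                    const std::string& file_name);
};

// Defined in a .cpp where OccGridServer is a complete type:
BaseMapServer* BaseMapServer::CreateMap(const std::string& map_type,
                                        rclcpp::Node::SharedPtr node,
                                        const std::string& file_name)
{
    if (map_type == "occupancy") return new OccGridServer(node, file_name);
    throw std::runtime_error("Map type not supported");
}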
I have a class that will serve as the base class for (many) other classes. The derived classes each have a slight variation in their logic around a single function, which itself will be one of a fixed set of external functions. I aim to have something which is efficient, clear, and will result in the minimal amount of additional code per new deriving class:
Here is what I have come up with:
// ctor omitted for brevity
class Base
{
public:
void process(batch_t &batch)
{
if (previous) previous->process(batch);
pre_process(batch);
proc.process(batch);
post_process(batch);
}
protected:
// no op unless overridden
virtual void pre_process(batch_t &batch) {}
virtual void post_process(batch_t &batch) {}
Processor proc;
Base* previous;
};
- Expose the 'process' function, which follows a set pattern.
- The core logic of the function is defined by a drop-in class, 'Processor'.
- Allow modification of this pattern via two virtual functions, which define additional work done before/after the call to Processor::process.
- Sometimes this object has a handle to another which must do something else before it; for this I have a pointer, 'previous'.
Does this design seem good or are there some glaring holes I haven't accounted for? Or are there other common patterns used in situations like this?
Does this design seem good or are there some glaring holes I haven't accounted for? Or are there other common patterns used in situations like this?
Without knowing more about your goals, all I can say is that it seems quite sensible. It's so sensible, in fact, that there's a common name for this idiom: a "Non-Virtual Interface". It is also described as the "Template Method" design pattern by the Gang of Four, if you are in the Java sphere.
You are currently using the so called "Template Method" pattern (see, for instance, here). You have to note that it uses inheritance to essentially modify the behaviour of the process(batch) function by overriding the pre_process and post_process methods. This creates strong coupling. For instance, if you subclass your base class to use a particular pre_process implementation, then you can't use this implementation in any other subclass without duplicating code.
I personally would go with the "Strategy" pattern (see, for instance, here) which is more flexible and allows code re-use more easily, as follows:
struct PreProcessor {
virtual void process(batch_t&) = 0;
};
struct PostProcessor {
virtual void process(batch_t&) = 0;
};
class Base {
public:
//ctor taking pointers to subclasses of PreProcessor and PostProcessor
void process(batch_t &batch)
{
if (previous) previous->process(batch);
pre_proc->process(batch);
proc.process(batch);
post_proc->process(batch);
}
private:
PreProcessor* pre_proc;
Processor proc;
PostProcessor* post_proc;
Base* previous;
};
Now, you can create subclasses of PreProcessor and PostProcessor which you can mix and match and then pass to your Base class. You can of course apply the same approach for your Processor class.
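To make the mix-and-match concrete, here is a small sketch; the strategy names and the Base constructor are assumptions, not part of the answer above:

// Hypothetical concrete strategies, reusable by any number of Base instances.
struct LoggingPreProcessor : PreProcessor {
    void process(batch_t& batch) override { /* log or validate the batch */ }
};
struct NoopPostProcessor : PostProcessor {
    void process(batch_t&) override {}   // deliberately does nothing
};

// Wiring, assuming Base gets the constructor hinted at in the comment above:
//   LoggingPreProcessor pre;
//   NoopPostProcessor post;
//   Base stage(&pre, &post /*, previous = nullptr */);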
Given your information, I don't see any benefit to using inheritance (one Base and many Derived classes) here. Writing a whole new class just because you have a new pair of pre/post-process steps is not a good idea. Not to mention, this will make it difficult to reuse that logic.
I recommend a more composable design:
typedef void (*Handle)(batch_t&);
class Foo
{
public:
Foo(Handle pre, Handle post, Foo* previous) :
m_pre(pre),
m_post(post),
m_previous(previous) {}
void process(batch_t& batch)
{
if (m_previous) m_previous->process(batch);
(*m_pre)(batch);
m_proc.process(batch);
(*m_post)(batch);
}
private:
Processor m_proc;
Handle m_pre;
Handle m_post;
Foo* m_previous;
};
This way, you can create any customized Foo object with any logic of pre/post process you want. If the creation is repetitive, you can always extract it into a createXXX method of a FooFactory class.
P.S.: if you don't like function pointers, you can use anything that represents a function, such as an interface with one method, or a lambda expression...
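As a sketch of that remark, here is the same class using std::function instead of raw function pointers (still relying on the hypothetical Processor and batch_t from the question):

#include <functional>
#include <utility>

class Foo
{
public:
    using Handle = std::function<void(batch_t&)>;

    Foo(Handle pre, Handle post, Foo* previous) :
        m_pre(std::move(pre)),
        m_post(std::move(post)),
        m_previous(previous) {}

    void process(batch_t& batch)
    {
        if (m_previous) m_previous->process(batch);
        if (m_pre) m_pre(batch);      // an empty handle simply means "no pre-processing"
        m_proc.process(batch);
        if (m_post) m_post(batch);
    }

private:
    Processor m_proc;
    Handle m_pre;
    Handle m_post;
    Foo* m_previous;
};

// Usage with lambdas:
// Foo foo([](batch_t& b) { /* pre */ }, [](batch_t& b) { /* post */ }, nullptr);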
tl;dr
My goal is to conditionally provide implementations for abstract virtual methods in an intermediate workhorse template class (depending on template parameters), but to leave them abstract otherwise so that classes derived from the template are reminded by the compiler to implement them if necessary.
I am also grateful for pointers towards better solutions in general.
Long version
I am working on an extensible framework to perform "operations" on "data". One main goal is to allow XML configs to determine program flow, and allow users to extend both allowed data types and operations at a later date, without having to modify framework code.
If either one (operations or data types) is kept fixed architecturally, there are good patterns to deal with the problem. If allowed operations are known ahead of time, use abstract virtual functions in your data types (new data have to implement all required functionality to be usable). If data types are known ahead of time, use the Visitor pattern (where the operation has to define virtual calls for all data types).
Now if both are meant to be extensible, I could not find a well-established solution.
My solution is to declare them independently from one another and then register "operation X for data type Y" via an operation factory. That way, users can add new data types, or implement additional or alternative operations and they can be produced and configured using the same XML framework.
If you create a matrix of (all data types) x (all operations), you end up with a lot of classes. Hence, they should be as minimal as possible, and eliminate trivial boilerplate code as far as possible, and this is where I could use some inspiration and help.
There are many operations that will often be trivial, but might not be in specific cases, such as Clone() and some more (omitted here for "brevity"). My goal is to conditionally provide implementations for abstract virtual methods if appropriate, but to leave them abstract otherwise.
Some solutions I considered
- As in the example below: provide default implementations for trivial operations. Consequence: nontrivial operations need to remember to override them with their own methods. Can lead to run-time problems if some future developer forgets to do that.
- Do NOT provide defaults. Consequence: nontrivial functions need to be basically copy & pasted for every final derived class. Lots of useless copy & paste code.
- Provide an additional template class derived from the cOperation base class that implements the boilerplate functions and nothing else (template parameters similar to the specific operation workhorse templates). Derived final classes inherit from their concrete operation base class and that template (a rough sketch of this option follows the list). Consequence: both the concrete operation base and the boilerplate template need to inherit virtually from cOperation. Potentially some run-time overhead, from what I found on SO. Future developers need to let their operations inherit virtually from cOperation.
- std::enable_if magic. I didn't get the combination of virtual functions and templates to work.
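For reference, here is a rough sketch of the third option. cOperation is simplified (no constructor arguments) so that the virtual-inheritance wiring stays visible, and every other name is made up for illustration:

class cOperation
{
public:
    virtual ~cOperation() {}
    virtual bool Serialize() const = 0;
};

// Concrete operation interface; inherits virtually so the boilerplate mixin can share the base.
class cConcreteOperationX : public virtual cOperation
{
public:
    virtual bool doSomeConcreteOperationX() = 0;
};

// Boilerplate-only mixin: implements the trivial methods and nothing else.
class cTrivialBoilerplate : public virtual cOperation
{
public:
    bool Serialize() const override { return true; }
};

// Final class: combines the operation interface with the trivial boilerplate.
class cOperationXOnTypeA : public cConcreteOperationX, public cTrivialBoilerplate
{
public:
    bool doSomeConcreteOperationX() override { return true; }
};

A final class that skips the mixin keeps Serialize pure, so the compiler still forces it to provide its own implementation, which is exactly the reminder asked for in the tl;dr.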
Here is a (fairly) minimal compilable example of the situation:
//Base class for all operations on all data types. Will be inherited from. A lot. Base class does not define any concrete operation interface, nor does it necessarily know any concrete data types it might be performed on.
class cOperation
{
public:
virtual ~cOperation() {}
virtual std::unique_ptr<cOperation> Clone() const = 0;
virtual bool Serialize() const = 0;
//... more virtual calls that can be either trivial or quite involved ...
protected:
cOperation(const std::string& strOperationID, const std::string& strOperatesOnType)
: m_strOperationID(strOperationID)
, m_strOperatesOnType(strOperatesOnType)
{
//empty
}
private:
std::string m_strOperationID;
std::string m_strOperatesOnType;
};
//Base class for all data types. Will be inherited from. A lot. Does not know any operations that might be performed on it.
struct cDataTypeBase
{
virtual ~cDataTypeBase() {}
};
Now, I'll define an example data type.
//Some concrete data type. Still does not know any operations that might be performed on it.
struct cDataTypeA : public cDataTypeBase
{
static const std::string& GetDataName()
{
static const std::string strMyName = "cDataTypeA";
return strMyName;
}
};
And here is an example operation. It defines a concrete operation interface, but does not know the data types it might be performed on.
//Some concrete operation. Does not know all data types it might be expected to work on.
class cConcreteOperationX : public cOperation
{
public:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) = 0;
protected:
cConcreteOperationX(const std::string& strOperatesOnType)
: cOperation("concreteOperationX", strOperatesOnType)
{
//empty
}
};
The following template is meant to be the boilerplate workhorse. It implements as much trivial and repetitive code as possible and is provided alongside the concrete operation base class - concrete data types are still unknown, but are meant to be provided as template parameters.
//ConcreteOperationTemplate: absorb as much common/trivial code as possible, so concrete derived classes can have minimal code for easy addition of more supported data types
template <typename ConcreteDataType, typename DerivedOperationType, bool bHasTrivialCloneAndSerialize = false>
class cConcreteOperationXTemplate : public cConcreteOperationX
{
public:
//Can perform datatype cast here:
virtual bool doSomeConcreteOperationX(const cDataTypeBase& dataBase) override
{
const ConcreteDataType* pCastData = dynamic_cast<const ConcreteDataType*>(&dataBase);
if (pCastData == nullptr)
{
return false;
}
return doSomeConcreteOperationXOnCastData(*pCastData);
}
protected:
cConcreteOperationXTemplate()
: cConcreteOperationX(ConcreteDataType::GetDataName()) //requires ConcreteDataType to have a static method returning something appropriate
{
//empty
}
private:
//Clone can be implemented here via CRTP
virtual std::unique_ptr<cOperation> Clone() const override
{
return std::unique_ptr<cOperation>(new DerivedOperationType(*static_cast<const DerivedOperationType*>(this)));
}
//TODO: Some Magic here to enable trivial serializations, but leave non-trivials abstract
//Problem with the current code is that Serialize() is also overridden here even when bHasTrivialCloneAndSerialize == false
virtual bool Serialize() const override
{
return true;
}
virtual bool doSomeConcreteOperationXOnCastData(const ConcreteDataType& castData) = 0;
};
Here are two implementations of the example operation on the example data type. One of them will be registered as the default operation, to be used if the user does not declare anything else in the config, and the other is a potentially much more involved non-default operation that might take many additional parameters into account (these would then have to be serialized in order to be correctly re-instantiated on the next program run). These operations need to know both the operation and the data type they relate to, but could potentially be implemented at a much later time, or in a different software component where the specific combination of operation and data type is required.
//Implementation of operation X on type A. Needs to know both of these, but can be implemented if and when required.
class cConcreteOperationXOnTypeADefault : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeADefault, true>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
};
//Different implementation of operation X on type A.
class cConcreteOperationXOnTypeASpecialSauce : public cConcreteOperationXTemplate<cDataTypeA, cConcreteOperationXOnTypeASpecialSauce/*, false*/>
{
virtual bool doSomeConcreteOperationXOnCastData(const cDataTypeA& castData) override
{
//...do stuff...
return true;
}
//Problem: Compiler does not remind me that cConcreteOperationXOnTypeASpecialSauce might need to implement this method
//virtual bool Serialize() override {}
};
int main(int argc, char* argv[])
{
std::map<std::string, std::map<std::string, std::unique_ptr<cOperation>>> mapOpIDAndDataTypeToOperation;
//...fill map, e.g. via XML config / factory method...
const cOperation& requestedOperation = *mapOpIDAndDataTypeToOperation.at("concreteOperationX").at("cDataTypeA");
//...do stuff...
return 0;
}
If your data types are not used polymorphically (i.e. for each operation call you know both the operation type and the data type at compile time), you may consider the following approach:
#include<iostream>
#include<string>
template<class T>
void empty(T t){
std::cout<<"warning about missing implementation"<<std::endl;
}
template<class T>
void simple_plus(T){
std::cout<<"simple plus"<<std::endl;
}
void plus_string(std::string){
std::cout<<"plus string"<<std::endl;
}
template<class Data, void Implementation(Data)>
class Operation{
public:
static void exec(Data d){
Implementation(d);
}
};
#define macro_def(OperationName) template<class T> class OperationName : public Operation<T, empty<T>>{};
#define macro_template_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName<TypeName>>{};
#define macro_inst( TypeName, OperationName, ImplementationName ) template<> class OperationName<TypeName> : public Operation<TypeName, ImplementationName>{};
// this part may be generated on base of .xml file and put into .h file, and then just #include generated.h
macro_def(Plus)
macro_template_inst(int, Plus, simple_plus)
macro_template_inst(double, Plus, simple_plus)
macro_inst(std::string, Plus, plus_string)
int main() {
Plus<int>::exec(2);
Plus<double>::exec(2.5);
Plus<float>::exec(2.5);
Plus<std::string>::exec("abc");
return 0;
}
The minus of this approach is that you'd have to compile the project in two steps: 1) transform the .xml into a .h file, 2) compile the project using the generated .h file. On the plus side, the compiler/IDE (I use Qt Creator with MinGW) gives a warning about the unused parameter t in the function
void empty(T t)
and a trace showing where it was called from.
One of the nice things in Java is implementing an interface anonymously, in place. For example, consider the following snippet:
interface SimpleInterface
{
void doThis();
}
...
SimpleInterface simple = new SimpleInterface()
{
@Override public void doThis() { /* Do something here */ }
};
The only way I could see this being done in C++ is through a lambda or by passing an instance of function<> to a class. But I am actually asking whether this is possible somehow. I have classes which implement a particular interface, and these interfaces just contain 1-2 methods. I can't write a new file for each of them, or add a method to a class which accepts a function<> or lambda so that it can determine what to do. Is this strictly a C++ limitation? Will it ever be supported?
Somehow, I wanted to write something like this:
thisClass.setAction(int i , new SimpleInterface()
{
protected:
virtual void doThis(){}
});
One thing, though, is that I haven't checked the latest spec for C++14, and I wanted to know if this is possible somehow.
Thank you!
Will it ever be supported?
You mean, will the language designers ever add a dirty hack where the only reason it ever existed in one language was because those designers were too stupid to add the feature they actually needed?
Not in this specific instance.
You can create a class that derives from the interface and wraps a std::function, and then use that at your various call sites. But you'd still need to create one converter for each interface.
struct FunctionalInterfaceImpl : SimpleInterface {
FunctionalInterfaceImpl(std::function<void()> f)
: func(f) {}
std::function<void()> func;
void doThis() { func(); }
};
You seem to think each class needs a separate .h and .cpp file. C++ allows you to define a class at any scope, including local to a function:
void foo() {
struct SimpleInterfaceImpl : SimpleInterface
{
protected:
void doThis() override {}
};
thisClass.setAction(i, new SimpleInterfaceImpl());
}
Of course, you have a naked new in there which is probably a bad idea. In real code, you'd want to allocate the instance locally, or use a smart pointer.
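For instance (ThisClass, setAction and its ownership semantics are assumptions here, mirroring the question's pseudocode):

void foo(ThisClass& thisClass, int i) {
    struct SimpleInterfaceImpl : SimpleInterface
    {
        void doThis() override {}
    };

    // If setAction merely borrows the object and does not keep it beyond this call:
    SimpleInterfaceImpl impl;
    thisClass.setAction(i, &impl);

    // If setAction takes ownership, hand it a smart pointer instead of a naked new:
    // thisClass.setAction(i, std::make_unique<SimpleInterfaceImpl>());
}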
This is indeed a "limitation" of C++ (and C#, as I was doing some research some time ago). Anonymous java classes are one of its unique features.
The closest way you can emulate this is with function objects and/or local types. C++11 and later offers lambdas which are semantic sugar of those two things, for this reason, and saves us a lot of writing. Thank goodness for that, before c++11 one had to define a type for every little thing.
Please note that for interfaces that are made up of a single method, then function objects/lambdas/delegates(C#) are actually a cleaner approach. Java uses interfaces for this case as a "limitation" of its own. It would be considered a Java-ism to use single-method interfaces as callbacks in C++.
Local types are actually a pretty good approximation, the only drawback being that you are forced to name the types (see edit) (a tiresome obligation, which one takes over when using static languages of the C family).
You don't need to allocate an object with new to use it polymorphically. It can be a stack object, which you pass by reference (or pointer, for extra anachronism). For instance:
struct This {};
struct That {};
class Handler {
public:
virtual ~Handler () { }
virtual void handle (This) = 0;
virtual void handle (That) = 0;
};
class Dispatcher {
Handler& handler;
public:
Dispatcher (Handler& handler): handler(handler) { }
template <typename T>
void dispatch (T&& obj) { handler.handle(std::forward<T>(obj)); }
};
void f ()
{
struct: public Handler {
void handle (This) override { }
void handle (That) override { }
} handler;
Dispatcher dispatcher { handler };
dispatcher.dispatch(This {});
dispatcher.dispatch(That {});
}
Also note the override specifier offered by C++11, which has more or less the same purpose as the @Override annotation (it generates a compile error if the member function does not actually override anything).
I have never heard about this feature being supported or even discussed, and I personally don't see it even being considered as a feature in C++ community.
EDIT: right after finishing this post, I realised that there is no need to name local types (naturally), so the example becomes even more Java-friendly. The only difference is that you cannot define a new type within an expression. I have updated the example accordingly.
In C++, interfaces are classes which have pure virtual functions in them, e.g.
class Foo {
public:
virtual void Function() = 0;
};
Every single class that inherits this class must implement this function in order to be instantiable.
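A minimal illustration (with invented names) of implementing that interface and using it through a base reference:

class Bar : public Foo {
public:
    void Function() override { /* concrete behaviour */ }
};

void use(Foo& f) { f.Function(); }   // callers only ever see the interface

// Bar bar;
// use(bar);   // calls Bar::Function through the Foo interface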
Disclaimer: I haven't been able to clearly describe exactly what I am trying to do, so I hope the example will be clearer than my explanation! Please suggest any re-phrasing to make it clearer. :)
Is it possible to override functions with more specific versions than those required by an interface, in order to handle subclasses of the parameters of methods in that interface separately from the generic case? (Example and better explanation below...) If it can't be done directly, is there some pattern which can be used to achieve a similar effect?
Example
#include <iostream>
class BaseNode {};
class DerivedNode : public BaseNode {};
class NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) = 0;
};
class MyNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node)
{
std::cout << "Processing a node." << std::endl;
}
virtual void processNode(DerivedNode* node)
{
std::cout << "Special processing for a DerivedNode." << std::endl;
}
};
int main()
{
BaseNode* bn = new BaseNode();
DerivedNode* dn = new DerivedNode();
NodeProcessingInterface* processor = new MyNodeProcessor();
// Calls MyNodeProcessor::processNode(BaseNode) as expected.
processor->processNode(bn);
// Calls MyNodeProcessor::processNode(BaseNode).
// I would like this to call MyNodeProcessor::processNode(DerivedNode).
processor->processNode(dn);
delete bn;
delete dn;
delete processor;
return 0;
}
My motivation
I want to be able to implement several different concrete NodeProcessors some of which will treat all nodes the same (i.e. implement only what is shown in the interface) and some of which will distinguish between different types of node (as in MyNodeProcessor). So I would like the second call to processNode(dn) to use the implementation in MyNodeProcessor::processNode(DerivedNode) by overloading (some parts/subclasses of) the interface methods. Is that possible?
Obviously if I change processor to be of type MyNodeProcessor* then this works as expected, but I need to be able to use different node processors interchangeably.
I can also get around this by having a single method processNode(BaseNode) which checks the precise type of its argument at run-time and branches based on that. It seems inelegant to me to include this check in my code (especially as the number of node types grows and I have a giant switch statement). I feel like the language should be able to help.
I am using C++ but I'm interested in general answers as well if you prefer (or if this is easier/different in other languages).
No, that's not possible this way. Overload resolution happens at compile time, using the static type of the processor pointer, namely NodeProcessingInterface. Since that base type declares only one virtual function, taking a BaseNode*, only that one virtual function (or its overriding implementations) will be called. The compiler has no way to determine that there might be a derived NodeProcessor class implementing more specialised overloads.
So, instead of diversifying the methods in derived classes, you'd have to do it the other way round: declare all the different virtual overloads you need in the base class and override them as needed:
class NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) = 0;
//simplify the method definition for complex node hierarchies:
#define PROCESS(_derived_, _base_) \
virtual void processNode(_derived_* node) { \
processNode(static_cast<_base_*>(node)); \
}
PROCESS(DerivedNode, BaseNode)
PROCESS(FurtherDerivedNode, DerivedNode)
PROCESS(AnotherDerivedNode, BaseNode)
#undef PROCESS
};
class BoringNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) override
{
std::cout << "It's all the same.\n";
}
};
class InterestingNodeProcessor : public NodeProcessingInterface
{
public:
virtual void processNode(BaseNode* node) override
{
std::cout << "A Base.\n";
}
virtual void processNode(DerivedNode* node) override
{
std::cout << "A Derived.\n";
}
};
You're correct that you don't want to do type-checking. That would violate the Open-Closed Principle, because every time you added a specialized node type you'd have to modify this method.
What you're describing sounds similar to a plugin architecture, or the bridge pattern.
If you use inheritance rather than overloading -- i.e. move the specialized processNode into a subclass of MyNodeProcessor -- I think that will give you what you want.
EDIT:
Or, along slightly different lines, you could make the node processor a template class and use partial specialization to get the behavior you want.
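A sketch of that idea, using a full specialization since there is only one template parameter (names are invented). Note that this selects behaviour at compile time from the static type, so it gives up the runtime polymorphism of the NodeProcessingInterface approach:

#include <iostream>

// Primary template: generic handling for any node type.
template <typename NodeT>
struct TemplatedNodeProcessor
{
    void processNode(NodeT*) { std::cout << "Processing a node." << std::endl; }
};

// Specialization: dedicated handling for DerivedNode.
template <>
struct TemplatedNodeProcessor<DerivedNode>
{
    void processNode(DerivedNode*) { std::cout << "Special processing for a DerivedNode." << std::endl; }
};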
Well, if drifting from C++ is fine, I think what you want is called "categories" in Objective-C. You might find this link interesting: http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/CustomizingExistingClasses/CustomizingExistingClasses.html