Correct pattern to configure objects built by Factory - C++

I've had this problem tickling me for the past weeks; my current implementation works, but I'm curious to know if there is a "good way" to do this. I'm new to design patterns, so this might be a stupid question.
Put simply, you have:
An object prototype providing an interface (let's call it abstract kernel);
Specific kernels implementing the above interface in various ways;
A concrete kernel Factory;
Another object Foo, which stores a pointer to an abstract kernel, as is returned by the Factory.
My problem is this: specific kernel implementations may define their own sets of parameters, which differ from one kernel to another.
Foo uses kernels to do some processing, but this processing ultimately depends on these parameters, and I don't know how to configure those in a nice way.
I don't want to go for an abstract factory, and configure the concrete factory before building, because this seems wrong to me; it's not the factory that has parameters, it's the kernel.
But on the other hand, even if I set the kernel pointer in Foo as public, I can't access the parameters of the underlying kernel since they're not part of the prototype's interface... I'm sure other people had this problem before, maybe there's a simple solution I don't see. :S
Thanks in advance!
NOTE: In my current implementation, there is no kernel Factory. I put the kernel's concrete type as a template parameter of Foo, and expose the kernel as a public member, which allows me to configure the kernel after the declaration and before starting the processing.

If a piece of code knows what concrete kind of kernel it works with, it should have a pointer to that specific concrete kernel type. If it doesn't, it cannot access its specific parameters (but can possibly access all parameters in a generic way, as suggested by @Jaywalker).
Your current implementation seems to go the first route, which is perfectly OK.
I have very limited info about your design, but it looks like you have several concrete kernel types, a separate builder for each type, and a separate configurator for each type. Packing all the builders into a Factory is problematic, as there's no clean and elegant way to forward concrete kernel types to their respective configurators (without things like *_cast<> or double dispatch). There are at least two ways to solve this and still have a Factory:
Bundle each builder with its respective configurator, and pack all the bundles into a Factory that churns out configured kernels.
Bundle each kernel with its configurator and make a Factory producing these bundles (this way a kernel may be configured any number of times during its life cycle).
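The first option might look something like the following minimal sketch. All names here (Kernel, GaussianKernel, KernelFactory) are illustrative, not from the question; the point is that each registered builder carries its own configuration, so the factory hands back an already-configured kernel behind the abstract interface:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical kernel hierarchy (illustrative names).
struct Kernel {
    virtual double process(double x) const = 0;
    virtual ~Kernel() = default;
};

struct GaussianKernel : Kernel {
    double sigma = 1.0;  // kernel-specific parameter
    double process(double x) const override { return x / sigma; }
};

// Each builder bundles construction *and* configuration, so the
// factory only ever returns ready-to-use kernels.
class KernelFactory {
public:
    using Builder = std::function<std::unique_ptr<Kernel>()>;

    void register_kernel(const std::string& name, Builder b) {
        builders_[name] = std::move(b);
    }

    std::unique_ptr<Kernel> create(const std::string& name) const {
        return builders_.at(name)();  // throws if name is unknown
    }

private:
    std::map<std::string, Builder> builders_;
};
```

A caller would register a lambda that builds and configures the concrete type, then work only through the `Kernel` interface.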

Anything which is not part of the prototype interface will not be available in Foo, as you have said. It simply doesn't make sense to use the factory pattern if Foo knows the specifics of each kernel implementation.
In some limited circumstances, adding something like following getters and setters in the prototype interface could get your work done:
virtual bool setParameter(const std::string &key, const std::string &value) = 0;
virtual std::string getParameter(const std::string &key) = 0;
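A minimal sketch of what a concrete kernel implementing that string-keyed interface could look like (AbstractKernel and BlurKernel are illustrative names, not from the question):

```cpp
#include <map>
#include <string>

// The prototype interface from the answer, sketched out.
class AbstractKernel {
public:
    virtual bool setParameter(const std::string &key, const std::string &value) = 0;
    virtual std::string getParameter(const std::string &key) = 0;
    virtual ~AbstractKernel() = default;
};

// One concrete kernel: accepts only the keys it understands,
// so Foo can probe parameters without knowing the concrete type.
class BlurKernel : public AbstractKernel {
public:
    bool setParameter(const std::string &key, const std::string &value) override {
        if (key != "radius") return false;  // reject unknown keys
        params_[key] = value;
        return true;
    }
    std::string getParameter(const std::string &key) override {
        auto it = params_.find(key);
        return it != params_.end() ? it->second : "";
    }
private:
    std::map<std::string, std::string> params_;
};
```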

Related

How to create a container storing functions with different signatures?

I implemented a class which depended on an interface for sending data.
Concrete versions of the interface were implemented for testing and for production, and they were injected at construction (depending on whether the class was being tested or used in production).
This works, but there is a maintenance overhead in keeping multiple overloaded send functions that do very similar things.
I would like to make the send function a template; however, a virtual function cannot be a template.
My next idea is rather than the class depending on an interface, it will contain a map of datatypes to callbacks. This means I can specify the functionality for each datatype and inject it into the class depending on if I want test functionality or real functionality.
The difficulty comes because the map has to store functions with different signatures, since the parameter type is different for every function.
How best can I do this? Is the idea sound or is there a better design?
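One possible sketch of the datatype-to-callback map the question describes, using C++17 type erasure via std::any keyed by std::type_index (CallbackMap is an illustrative name; the real interface and sender class are not in the question):

```cpp
#include <any>
#include <functional>
#include <map>
#include <string>
#include <typeindex>

// Stores one std::function<void(const T&)> per parameter type T.
class CallbackMap {
public:
    template <typename T>
    void set(std::function<void(const T&)> fn) {
        callbacks_[std::type_index(typeid(T))] = std::move(fn);
    }

    // Dispatch to the callback registered for T, if any.
    template <typename T>
    void send(const T& value) const {
        auto it = callbacks_.find(std::type_index(typeid(T)));
        if (it != callbacks_.end())
            std::any_cast<const std::function<void(const T&)>&>(it->second)(value);
    }

private:
    std::map<std::type_index, std::any> callbacks_;
};
```

Test and production builds can then inject different callback sets into the same class, which is the substitution the question is after.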

Given both types, can I determine if a virtual function has been overridden?

I am designing a system that is built from a collection of modules. The base Module class has a set of virtual functions that do nothing by default, but can be overridden in derived module classes if they are needed.
For example, one of these functions is OnUpdate(). Currently when an update occurs, I loop through all modules and run this function on each of them. Of course, this results in many "wasted" calls on any module that does not override the relevant function.
Given that most modules do not override most member functions (and profiling indicates that this is a bottleneck in my system), what I'd prefer to do is keep track of which modules DO override a member function and execute only those. At the moment, I am planning to keep an extra vector of only those module pointers that need to be run.
My problem is that collecting this information requires me to detect whether or not a module has overridden each member function. I can do this analysis on module creation when I know the exact derived type that I'm working with, so I think it should be possible, but the test that I expected to work does not.
Here's some simplified detection code that fails:
template <typename T>
T & AddModule() {
  static_assert(std::is_base_of<Module, T>::value,
                "AddModule() can only take module types.");
  // Create the new module and store it.
  auto mod_ptr = std::make_shared<T>();
  modules.push_back(mod_ptr);
  // If the module has OnUpdate(), track it!
  // *** This test is not working! ***
  if (&T::OnUpdate != &Module::OnUpdate) {
    on_update_mods.push_back(mod_ptr);
  }
  // ...lots more tests here...
  return *mod_ptr;
}
Is there some other way that I can handle this detection?
I'm also happy to hear alternate ideas for how to achieve my goal. I definitely want to keep it easy for module writers to simply implement any of the designated functions and have them called as appropriate.
The only other option I've come up with so far is to NOT have the virtual functions in the Module base class and then use SFINAE to detect them in the derived class, followed by wrapping them in lambdas and storing a vector of functors to run. I think that will work BUT it will be slightly slower (more indirection with the functors) and more error prone (right now if you make a typo in a member function name the override keyword will throw an error...)
This question is, indeed, similar to this one, but the solutions provided won't work for me. The first option requires all working member functions to be identified separately in the constructor, and I may be working with dozens or more, so this is error prone. The second option has the functions return a bool, but is only relevant when they are run, and I can't run them when they aren't needed.
I ended up finding a solution to this problem that gives me a reasonable speedup. Specifically, for each virtual member function I keep a bool in the base class indicating if that function has been overridden. Each of these bools defaults to true, but the base implementation of the member function sets it to false when executed (as this will only be run if the function is NOT overridden).
Periodically I scan through and update my vectors, removing any modules whose member functions have been shown not to be overridden.
Net result, ~10% speedup with no changes needed to the derived modules themselves. Not as fast as I hoped for, but given that this is for a piece of scientific software where some runs need to go for a month, shaving a few days off certainly helps.
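The accepted approach can be sketched as follows. Module names and members here are illustrative, assuming the real system also tracks one flag per hook: the base implementation flips the flag to record "not overridden", since it only runs when no override exists.

```cpp
#include <iterator>
#include <memory>
#include <vector>

class Module {
public:
    bool has_on_update = true;  // assume overridden until proven otherwise
    // Base implementation runs only if NOT overridden, so it can
    // self-report its own absence.
    virtual void OnUpdate() { has_on_update = false; }
    virtual ~Module() = default;
};

class PhysicsModule : public Module {
public:
    int updates = 0;
    void OnUpdate() override { ++updates; }
};

class IdleModule : public Module {};  // does not override OnUpdate

// Run OnUpdate on tracked modules, pruning ones whose base
// implementation has revealed itself.
void run_updates(std::vector<std::shared_ptr<Module>>& mods) {
    for (auto it = mods.begin(); it != mods.end();) {
        (*it)->OnUpdate();
        it = (*it)->has_on_update ? std::next(it) : mods.erase(it);
    }
}
```

After the first pass, non-overriding modules drop out of the hot loop, which matches the ~10% speedup described above.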

C++ Different subclasses need different parameters

I'm looking for the best way to accomplish the following:
Background
I have a base class with a virtual request() method, and different subclasses provide alternate implementations of performing the request. The idea is that I'd like to let the client instantiate one of these subclasses and pass the object to a subsystem, which will call request() when it needs to. The goal is to let the client decide how requests are handled by instantiating the desired subclass.
Problem
However, if a certain subclass implementation is chosen, it needs a piece of information from the subsystem which would most naturally be passed as an argument to request (i.e. request(special_info);). But other subclasses don't need this. Is there a clean way to hide this difference or appropriate design pattern that can be used here?
Thanks
Make the base request() method take the information as argument, and ignore the argument in subclass implementations that don't need it.
Or pass the SubSystem instance itself to the handler, and let the handler get the information it needs from the SubSystem (and ignore it if it doesn't need any information from the SubSystem). That would make the design more extensible: you wouldn't need to pass an additional argument and refactor all the methods each time a new subclass needing additional information is introduced.
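A minimal sketch of that second suggestion, with illustrative names (SubSystem, Handler, special_info are stand-ins, not from the question): each handler receives the subsystem and pulls only what it needs.

```cpp
#include <string>

class SubSystem;

// Base handler: every request() gets the SubSystem, whether or not
// the concrete handler uses it.
class Handler {
public:
    virtual void request(SubSystem& sys) = 0;
    virtual ~Handler() = default;
};

class SubSystem {
public:
    std::string special_info() const { return "extra"; }
    void run(Handler& h) { h.request(*this); }
};

// A subclass that needs nothing simply ignores the argument.
class SimpleHandler : public Handler {
public:
    int calls = 0;
    void request(SubSystem&) override { ++calls; }
};

// A subclass that needs info fetches it from the subsystem itself,
// so no signatures change when new needs appear.
class InfoHandler : public Handler {
public:
    std::string info;
    void request(SubSystem& sys) override { info = sys.special_info(); }
};
```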
JB Nizet's suggestion is one possible solution - it will certainly work.
What worries me a little is the rather vague notion that "some need more information". Where does this information come from, and what decides that? The general principle with inheritance is that you have a base class that does the right thing for all the objects. If you have to say "Is it a type A object or a type B object, then do this, else if it's a type C object do something slightly different, and if it's a type D object, do another kind of thing", then you're doing it wrong.
It may be that JB's suggestion is the right one for you, but I would also consider the option that "special_info" can be passed into the constructor, or be fetched via some helper function. The constructor solution is a sane one, because at construction time, obviously, you need to know if something is a A, B, C or D object that you are creating. The helper function is a good solution some other times, but if it's used badly, it can lead to a bit of a messy solution, so use with care.
Generally, when things end up like this, it's because you are splitting the classes up "the wrong way".

Flexible application configuration in C++

I am developing a C++ application used to simulate a real-world scenario. Based on this simulation, our team is going to develop, test and evaluate different algorithms working within such a real-world scenario.
We need the possibility to define several scenarios (they might differ in a few parameters, but a future scenario might also require creating objects of new classes) and the possibility to maintain a set of algorithms (which is, again, a set of parameters but also the definition which classes are to be created). Parameters are passed to the classes in the constructor.
I am wondering which is the best way to manage all the scenario and algorithm configurations. It should be easily possible to have one developer work on one scenario with "his" algorithm and another developer working on another scenario with "his" different algorithm. Still, the parameter sets might be huge and should be "sharable" (if I defined a set of parameters for a certain algorithm in Scenario A, it should be possible to use the algorithm in Scenario B without copy&paste).
It seems like there are two main ways to accomplish my task:
Define a configuration file format that can handle my requirements. This format might be XML based or custom. As there is no C#-like reflection in C++, it seems like I have to update the config-file parser each time a new algorithm class is added to project (in order to convert a string like "MyClass" into a new instance of MyClass). I could create a name for every setup and pass this name as command line argument.
The pros are: no compilation required to change a parameter and re-run, and I can easily store the whole config file with the simulation results.
The cons: it seems like a lot of effort, especially hard because I am using a lot of template classes that have to be instantiated with given template arguments. No IDE support for writing the file (at least without creating a whole XSD, which I would have to update every time a parameter/class is added).
Wire everything up in C++ code. I am not completely sure how I would do this to separate all the different creation logic but still be able to reuse parameters across scenarios. I think I'd also try to give every setup a (string) name and use this name to select the setup via command line arg.
pro: type safety, IDE support, no parser needed
con: how can I easily store the setup with the results (maybe some serialization?)?, needs compilation after every parameter change
Now here are my questions:
- What is your opinion? Did I miss important pros/cons?
- Did I miss a third option?
- Is there a simple way to implement the config file approach that gives me enough flexibility?
- How would you organize all the factory code in the second approach? Are there any good C++ examples for something like this out there?
Thanks a lot!
There is a way to do this without templates or reflection.
First, you make sure that all the classes you want to create from the configuration file have a common base class. Let's call this MyBaseClass and assume that MyClass1, MyClass2 and MyClass3 all inherit from it.
Second, you implement a factory function for each of MyClass1, MyClass2 and MyClass3. The signatures of all these factory functions must be identical. An example factory function is as follows.
MyBaseClass * create_MyClass1(Configuration & cfg)
{
    // Retrieve config variables and pass them as parameters
    // to the constructor
    int age = cfg.lookupInt("age");
    std::string address = cfg.lookupString("address");
    return new MyClass1(age, address);
}
Third, you register all the factory functions in a map.
typedef MyBaseClass* (*FactoryFunc)(Configuration &);
std::map<std::string, FactoryFunc> nameToFactoryFunc;
nameToFactoryFunc["MyClass1"] = &create_MyClass1;
nameToFactoryFunc["MyClass2"] = &create_MyClass2;
nameToFactoryFunc["MyClass3"] = &create_MyClass3;
Finally, you parse the configuration file and iterate over it to find all the entries that specify the name of a class. When you find such an entry, you look up its factory function in the nameToFactoryFunc table and invoke the function to create the corresponding object.
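That final lookup step could be sketched like this. The Configuration type here is a stand-in for whatever config API is actually used (only its class-name field matters for the dispatch); the map lookup and invocation are the point.

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct Configuration {        // illustrative stand-in for the real config API
    std::string class_name;
};

struct MyBaseClass { virtual ~MyBaseClass() = default; };
struct MyClass1 : MyBaseClass {};

MyBaseClass* create_MyClass1(Configuration&) { return new MyClass1; }

using FactoryFunc = MyBaseClass* (*)(Configuration&);

// Look up the factory registered under the class name and invoke it.
MyBaseClass* create_from_config(Configuration& cfg,
        const std::map<std::string, FactoryFunc>& table) {
    auto it = table.find(cfg.class_name);
    if (it == table.end())
        throw std::runtime_error("unknown class: " + cfg.class_name);
    return it->second(cfg);
}
```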
If you don't use XML, it's possible that boost::spirit could short-circuit at least some of the problems you are facing. Here's a simple example of how config data could be parsed directly into a class instance.
I found this website with a nice template supporting factory which I think will be used in my code.

Overriding / modifying C++ classes using DLLs

I have a project with a large codebase (>200,000 lines of code) I maintain ("The core").
Currently, this core has a scripting engine that consists of hooks and a script manager class that calls all hooked functions (registered via DLL) as they occur. To be quite honest, I don't know exactly how it works, since the core is mostly undocumented and spans several years and a multitude of developers (who are, of course, absent). An example of the current scripting engine is:
void OnMapLoad(uint32 MapID)
{
    if (MapID == 1234)
    {
        printf("Map 1234 has been loaded");
    }
}

void SetupOnMapLoad(ScriptMgr *mgr)
{
    mgr->register_hook(HOOK_ON_MAP_LOAD, (void*)&OnMapLoad);
}
A supplemental file named setup.cpp calls SetupOnMapLoad with the core's ScriptMgr.
This method is not what I'm looking for. To me, the perfect scripting engine would be one that will allow me to override core class methods. I want to be able to create classes that inherit from core classes and extend on them, like so:
// In the core:
class Map
{
protected:
    uint32 m_mapid;
    void Load();
    //...
};

// In the script:
class ExtendedMap : public Map
{
    void Load()
    {
        if (m_mapid == 1234)
            printf("Map 1234 has been loaded");
        Map::Load();
    }
};
And then I want every instance of Map in both the core and scripts to actually be an instance of ExtendedMap.
Is that possible? How?
The inheritance is possible. I don't see a solution for replacing the instances of Map with instances of ExtendedMap.
Normally, you could do that if you had a factory class or function that is always used to create a Map object, but this is a matter of the existing (or nonexistent) design.
The only solution I see is to search in the code for instantiations and try to replace them by hand. This is a risky one, because you might miss some of them, and it might be that some of the instantiations are not in the source code available to you (e.g. in that old DLL).
Later edit
This method overriding also has a side effect in case of using it in a polymorphic way.
Example:
Map* pMyMap = new ExtendedMap;
pMyMap->Load(); // This will call Map::Load, and not ExtendedMap::Load.
This sounds like a textbook case for the "Decorator" design pattern.
Although it's possible, it's quite dangerous: the system should be open for extension (i.e. hooks), but closed for change (i.e. overriding/redefining). When inheriting like that, you can't anticipate the behaviour your client code is going to show. As you see in your example, client code must remember to call the superclass' method, which it won't :)
An option would be to create a non-virtual interface: an abstract base class that has some template methods that call pure virtual functions. These must be defined by subclasses.
If you want no plain core Maps to be created, the script should give the core a factory that creates Map descendants.
If my experience with similar systems applies to your situation, there are several hooks registered, so a solution based on the Abstract Factory pattern will not really work. Your system is close to the Observer pattern, and that's what I'd use. You create one base class with all the possible hooks as virtual members (or several classes with related hooks, if the hooks are numerous). Instead of registering hooks one by one, you register one object whose type derives from that class and overrides what it needs. The object can have state, which advantageously replaces the void* user-data fields that such callback systems commonly have.
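A minimal sketch of that observer approach, modelled on the question's ScriptMgr (ScriptObserver, MapLogger, and the member names are illustrative assumptions):

```cpp
#include <cstdint>
#include <vector>

// One base class with all possible hooks as no-op virtuals;
// scripts override only the hooks they care about.
class ScriptObserver {
public:
    virtual void OnMapLoad(std::uint32_t /*map_id*/) {}
    virtual void OnShutdown() {}
    virtual ~ScriptObserver() = default;
};

// The core registers whole observer objects instead of raw
// function pointers, so no void* user data is needed.
class ScriptMgr {
public:
    void register_observer(ScriptObserver* obs) { observers_.push_back(obs); }
    void map_loaded(std::uint32_t map_id) {
        for (auto* obs : observers_) obs->OnMapLoad(map_id);
    }
private:
    std::vector<ScriptObserver*> observers_;
};

// A script observer carrying its own state.
class MapLogger : public ScriptObserver {
public:
    int loads_seen = 0;
    void OnMapLoad(std::uint32_t map_id) override {
        if (map_id == 1234) ++loads_seen;
    }
};
```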