I am developing a modular application and want to inject some helper classes into each module. This should happen automatically. Note that my helpers have state, so I can't just make them static and include them where needed.
I could store all helpers in a map with a string key and make it available to the abstract base class all modules inherit from.
std::unordered_map<std::string, void*> helpers;
RendererModule * renderer = new RendererModule(helpers); // argument is passed to
                                                         // the base class constructor
Then inside a module, I could access helpers like this.
std::string file = ((FileHelper*)helpers["file"])->Read("C:/file.txt");
But instead, I would like to access the helpers like this.
std::string file = File->Read("C:/file.txt");
To do so, at the moment I separately define members for all helpers in the module base class, and set them for each specific module.
FileHelper * file = new FileHelper(); // some helper instances are passed to
                                      // multiple modules, while others are
                                      // newly created for each one
RendererModule * renderer = new RendererModule();
renderer->File = file;
Is there a way to automate this, so that I don't have to change the module code when adding a new helper to the application, while keeping the second syntax? I am not that familiar with C macros, so I don't know whether they are capable of that.
I think I see what your dilemma is, but I have no good solution for it. However, since there are no other answers, I will contribute my two cents.
I use the combination of a few strategies to help me with these kinds of problems:
If the helper instance is truly module-specific, I let the module itself create and manage it inside.
If I don't want the module to know about the creation or destruction of the helper(s), or if the lifetime of the helper instance is not tied to the module that is using it, or if I want to share a helper instance among several modules, I create it outside and pass the reference to the entry-point constructor of the module. Passing it to the constructor has the advantage of making the dependency explicit.
If the number of helpers is high (say, more than 2-3), I create an encompassing struct (or simple class) that just contains all the pointers and pass that struct into the constructor of the module or subsystem. For example:
struct Platform { // I sometimes call it "Environment", etc.
    FileHelper *   file;
    LogHelper *    log;
    MemoryHelper * mem;
    StatsHelper *  stats;
};
Note: this is not a particularly nice or safe solution, but it's no worse than managing disparate pointers and it is straightforward.
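A module constructor can then just take the whole struct; a quick sketch of what I mean (RendererModule here is illustrative, and LogHelper::Write is a made-up call):
class RendererModule
{
public:
    explicit RendererModule(const Platform & platform) : env(platform) {}

    void Render()
    {
        env.log->Write("rendering..."); // hypothetical LogHelper method
    }

private:
    Platform env; // copies the pointers, not the helpers themselves
};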
All the above assumes that helpers have no dependency on modules (i.e. they sit at a lower abstraction level and know nothing about modules). If some helpers are closer to modules, that is, if you start wanting to inject modules into each other as dependencies, the above strategies really break down.
In these cases (which obviously happen a lot) I have found that a centralized ModuleManager singleton (probably a global object) is best. You explicitly register your modules with it, along with an explicit order of initialization, and it constructs all the modules. The modules can ask the ModuleManager for a reference to other modules by name (kind of like a map from strings to module pointers), but they do this once and store the pointers internally in any way they want for convenient and fast access.
However, to prevent messy lifetime and order-of-destruction issues, any time a module is constructed or destructed, the ModuleManager notifies all other modules via callbacks, so they have the chance to update their internal pointers to avoid dangling pointers and other problems.
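A minimal sketch of what I mean (all names here are illustrative; assume modules share an IModule base):
#include <map>
#include <string>

class IModule
{
public:
    virtual ~IModule() = default;
    // Callbacks so modules can refresh any cached pointers to each other.
    virtual void OnModuleRegistered(const std::string & name, IModule * module) {}
    virtual void OnModuleUnregistered(const std::string & name) {}
};

class ModuleManager
{
public:
    static ModuleManager & Instance()
    {
        static ModuleManager instance;
        return instance;
    }

    // Call in your desired initialization order; the map is the
    // "name -> module pointer" lookup that modules query once at startup.
    void Register(const std::string & name, IModule * module)
    {
        modules[name] = module;
        for (auto & entry : modules)
            entry.second->OnModuleRegistered(name, module);
    }

    void Unregister(const std::string & name)
    {
        modules.erase(name);
        for (auto & entry : modules)
            entry.second->OnModuleUnregistered(name); // drop dangling pointers
    }

    IModule * Find(const std::string & name)
    {
        auto it = modules.find(name);
        return it != modules.end() ? it->second : nullptr;
    }

private:
    std::map<std::string, IModule *> modules;
};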
That's it. By the way, you might want to investigate articles and implementations related to the "service locator" pattern.
I'm writing this, well, call it a library I guess. It offers a set of global variables of type MyType. Now, I want to write the source of each of these MyType objects in its own .cpp and .h files, unaware of all the rest, without needing some central header file saying MyType* offerings[] = { &global1, &global2, /*... */ };.
Now, had these been different classes I want to be able to instantiate, I would want to use a factory pattern; but here they're all of the same type, and I don't need to instantiate anything. I would think each variable needs to be 'registered' into a global array (or unordered set) from somewhere in its sources.
So, what's the idiomatic way to do this?
You could take a look at the Registry pattern and create a manager for your filesystem or folder that will manage these objects.
The registry could own everything related to filesystem handling: you describe your object names and properties in one config file or database, and the registry looks that up and instantiates your objects at runtime.
You would then need a mechanism to communicate these objects to the rest of the system. But if your objects are not going to change, a registry of compile-time objects would do.
The Registry is a pattern for handling global objects, similar in spirit to the singleton.
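If you want each .cpp to register its global without a central list, one common sketch is a small registrar object whose constructor appends to a function-local static container; the function-local static avoids the static initialization order problem. All names below are illustrative:
// registry.h
#include <vector>

struct MyType { /* ... */ };

// Function-local static: safely constructed on first use, even while
// other translation units are still being statically initialized.
inline std::vector<MyType *> & offerings()
{
    static std::vector<MyType *> instances;
    return instances;
}

struct Registrar
{
    explicit Registrar(MyType * instance) { offerings().push_back(instance); }
};

// global1.cpp -- knows nothing about the other globals
MyType global1;
static Registrar register_global1(&global1);
One caveat: if such a translation unit sits in a static library and nothing references its symbols, the linker may drop it, and the registration never runs.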
I have a game that consists of a few modules.
One of them is the database module.
I want to make it something like this:
class Database {
public:
    void save(Object & obj); // all my classes in all modules inherit from Object
    void load(Object & obj);
};
What would be the best way to make that module independent from other modules (other modules will store data in Database using save and load functions)?
I am considering a few solutions:
All objects have something like a serialize() method inherited from the Object class (analogous to Java). The Database uses that method to get a string and saves it. Obvious disadvantages: all objects have to implement a new method, and saving strings won't be optimal (the Database knows nothing about the classes' structure).
Make 'manifests' for all the classes (e.g. in a text file that is sent to the Database). These manifests would describe the structure of each class (e.g. one string, two doubles, one rarely used int). The disadvantage is flexibility: changing the classes in other modules will affect the manifests.
All classes have their own save and load methods, and the Database uses them. I don't want this, because every class would then have to know about the database type, and save and load should live in the Database class, not be distributed throughout the code (that's the main point of making such a module).
The Database knows about all other modules (and how to save all their objects). The bad thing here is the large number of dependencies: changes in any module will affect the Database.
Which way will be good? Or maybe there's a better option?
One solution I've come across is to have all Object subclasses implement a virtual void serialize(ISerializer& serializer) method.
ISerializer would have pure virtual methods like void onInt(int value), void onString(const char* string) etc., to be called by the Object subclass inside its serialize() method. Your Database module could implement ISerializer in two separate classes, DatabaseReader and DatabaseWriter. Later on you could add an ObjectInspectionFileDumper, OnScreenObjectStateDebugger or NetworkWriter that also implements ISerializer, but in other modules. Each object only needs to implement the serialize() method once to gain all those possibilities for extension.
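A rough sketch of that shape (Player and its fields are invented for illustration; passing the values by reference is one way to let the same serialize() serve both readers and writers):
#include <iostream>
#include <string>

class ISerializer
{
public:
    virtual ~ISerializer() = default;
    virtual void onInt(int & value) = 0;
    virtual void onString(std::string & value) = 0;
};

class Object
{
public:
    virtual ~Object() = default;
    virtual void serialize(ISerializer & serializer) = 0;
};

// Hypothetical subclass: lists its fields once, in a fixed order.
class Player : public Object
{
public:
    void serialize(ISerializer & s) override
    {
        s.onInt(health);
        s.onString(name);
    }
private:
    int health = 100;
    std::string name = "hero";
};

// One possible serializer; DatabaseReader/DatabaseWriter would look similar.
class DebugDumper : public ISerializer
{
public:
    void onInt(int & value) override { std::cout << value << '\n'; }
    void onString(std::string & value) override { std::cout << value << '\n'; }
};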
Pros:
Reading and writing are pretty much guaranteed to match up, as long as you don't read data written by an old version of an object without some kind of versioning scheme on top.
This is an orthogonal design, where the number of Object types and Serializer types can grow independently of each other.
Cons:
Mainly some virtual function overhead, if that is an issue for your project. This isn't something you will typically be doing much during regular gameplay though.
Later, you might come across things you want to call Objects which you don't want to serialize; then it could make sense to separate that out into an ISerializable interface class containing only the pure virtual serialize() method. To accommodate serializers where it matters (like debug serializers), you might want to change to void onInt(const char* name, int value) etc. instead.
HTH
I've run into a bit of an issue, and I'm looking for the best solution concept/theory.
I have a system that needs to use objects. Each object that the system uses has a known interface, likely implemented as an abstract class. The interfaces are known at build time, and will not change. The exact implementation to be used will vary and I have no idea ahead of time what module will be providing it. The only guarantee is that they will provide the interface. The class name and module (DLL) come from a config file or may be changed programmatically.
Now, I have all that set up at the moment using a relatively simple system, set up something like so (rewritten pseudo-code, just to show the basics):
struct ClassID
{
    Module * module;
    int number;
};

class Module
{
    HMODULE module;
    function<IObject * (int)> createfunc;

    static Module * Load(String filename);

    IObject * CreateClass(int number)
    {
        return createfunc(number);
    }
};

class ModuleManager
{
    bool LoadModule(String filename);

    IObject * CreateClass(String classname)
    {
        ClassID id = AvailableClasses[classname];
        return id.module->CreateClass(id.number);
    }

    vector<Module *> LoadedModules;
    map<String, ClassID> AvailableClasses;
};
Modules have a few exported functions to give the number of classes they provide and the names/IDs of those, which are then stored. All classes derive from IObject, which has a virtual destructor, stores the source module and has some methods to get the class' ID, what interface it implements and such.
The only issue with this is each module has to be manually loaded somewhere (listed in the config file, at the moment). I would like to avoid doing this explicitly (outside of the ModuleManager, inside that I'm not really concerned as to how it's implemented).
I would like to have a similar system without having to handle loading the modules, just create an object and (once it's all set up) it magically appears.
I believe this is similar to what COM is intended to do, in some ways. I looked into the COM system briefly, but it appears to be overkill beyond belief. I only need the classes known within my system and don't need all the other features it handles, just implementations of interfaces coming from somewhere.
My other idea is to use the registry and keep a key with all the known/registered classes and their source modules and numbers, so I can just look them up and it will appear that Manager::CreateClass finds and makes the object magically. This seems like a viable solution, but I'm not sure if it's optimal or if I'm reinventing something.
So, after all that, my question is: How to handle this? Is there an existing technology, if not, how best to set it up myself? Are there any gotchas that I should be looking out for?
COM very likely is what you want. It is very broad but you don't need to use all the functionality. For example, you don't need to require participants to register GUIDs, you can define your own mechanism for creating instances of interfaces. There are a number of templates and other mechanisms to make it easy to create COM interfaces. What's more, since it is a standard, it is easy to document the requirements.
One very important thing to bear in mind is that importing/exporting C++ objects requires all participants to be using the same compiler. If you think that ever could be a problem to you then you should use COM. If you are happy to accept that restriction then you can carry on as you are.
I don't know if any technology exists to do this.
I do know that I worked with a system very similar to this. We used XML files to describe the various classes that different modules made available. Our equivalent of ModuleManager would parse the XML files to determine what to create for the user at run time, based on the class name they provided and the configuration of the system. (Requesting an object that implemented interface 'I' could give back any of objects 'A', 'B' or 'C', depending on how the system was configured.)
The big gotcha we found was that the system was very brittle and at times hard to debug/understand. Just reading through the code, it was often near impossible to see what concrete class was being instantiated. We also found that maintaining the XML created more bugs and overhead than expected.
If I was to do this again, I would keep the design pattern of exposing classes from DLL's through interfaces, but I would not try to build a central registry of classes, nor would I derive everything from a base class such as IObject.
I would instead make each module responsible for exposing its own factory function(s) to instantiate objects.
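On Windows, for example, each DLL could export a single C-style factory that the host looks up with GetProcAddress. A sketch, not a definitive design; RendererImpl and the DLL name are made up:
// Inside each module DLL: a plain C export so the name isn't mangled.
#include <cstring>
extern "C" __declspec(dllexport)
IObject * CreateObject(const char * classname)
{
    if (std::strcmp(classname, "Renderer") == 0)
        return new RendererImpl(); // hypothetical concrete class
    return nullptr;                // unknown name: caller tries the next module
}

// In the host application:
#include <windows.h>
HMODULE module = LoadLibraryA("renderer.dll");
auto create = reinterpret_cast<IObject * (*)(const char *)>(
    GetProcAddress(module, "CreateObject"));
IObject * obj = create ? create("Renderer") : nullptr;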
I am currently refactoring big parts of my application. The main purpose is to remove as many dependencies between the different modules as possible. I have now stumbled on the following problem:
In my application I have a GUI module that defines an interface IDataProvider. The interface needs to be implemented by the application and is used to 'provide data' to the GUI module. E.g. a data grid can be given this IDataProvider and use it to loop over all the instances that should be shown in the grid and to get their data.
Now I have another module (in fact quite a few more modules) that all need something similar (like a reporting module, a database integration module, a mathematical solver module, ...). At this moment I can see two things I can do:
I could move IDataProvider from the GUI layer to a much lower-level layer and reuse this same interface in all the other modules.
This has the advantage that it becomes easier for the application to use all the modules (it only has to implement a data provider once).
The disadvantage is that I introduce a dependency between the modules and the central IDataProvider. If someone starts to extend IDataProvider with additional methods needed for one module, it also starts to pollute the other modules.
The other alternative is to give every module its own data provider, and force the application to implement all of them if it wants to use all the modules.
The advantage is that the modules are not dependent on a common part
The disadvantage is that I end up with IGridDataProvider, IReportDataProvider, IDatabaseDataProvider, ISolverDataProvider.
What's the best approach to use? Is it acceptable to make all modules dependent on the same common interface if they require [almost or completely] the same kind of interface?
If I use the same IDataProvider interface, can this give nasty problems in the future (which I am not aware of at this moment)?
Why don't you do an intermediate implementation? Have some class implement recurring parts of IDataProvider (as in the 1st case) in a factored-out library (or other layer). Also, everyone is required to "implement" their own IDataProvider (as in the 2nd case). Then, you can re-use your IDataProvider implementation all over the place and add specific methods in custom classes by creating a derived class...
I.e.:
// Common module.
class BasicDataProvider : public IDataProvider
{
public:
    // common overrides...
};

// For modules requiring no specific methods...
typedef BasicDataProvider ReportDataProvider;

// Database module requires "special" handling.
class DatabaseDataProvider : public BasicDataProvider
{
public:
    // custom overrides...
};
There is an alternative to the disadvantage you cite for moving IDataProvider to a lower-level layer.
A module that wants an extended interface could put those extensions in its own sub-interface of IDataProvider. You could encourage this by pro-actively creating those sub-interfaces.
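For example (a sketch using the question's names; the methods are invented for illustration):
#include <string>

// Shared, low-level interface.
class IDataProvider
{
public:
    virtual ~IDataProvider() = default;
    virtual int rowCount() const = 0; // hypothetical common method
};

// Module-specific extension, owned by the module that needs it.
class IReportDataProvider : public IDataProvider
{
public:
    virtual std::string reportTitle() const = 0; // hypothetical extra method
};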
I wouldn't mind having multiple modules depend on one interface, even if each module doesn't use all of the methods the interface publishes. You could also think of the interface in terms of what each part of it means, rather than which module it is intended for. Most of the modules you mention only need read access, so you could separate it that way and have another interface for writing, etc.
The data layer doesn't need to know what the data is used for(which is the job of the presentation layer). It only needs to know how to return it and how to modify it.
Moreover, there's absolutely no problem in moving the data provider (which could also be labeled a controller) to a lower level, because it's probably already implementing some business logic (like data consistency) that has nothing to do with the UI.
If you're worried that additional methods will be added to an interface, you can use the Adapter pattern. That is:
class myFoo {
public:
    virtual Bar getBar() = 0;
};
and in the other module:
class myBaz {
public:
    virtual Bar getBar() = 0;
};
Then to use one with the other:
class MyAdaptor : public myBaz {
public:
    MyAdaptor(myFoo * _input) {
        m_Foo = _input;
    }
    Bar getBar() override { return m_Foo->getBar(); }
private:
    myFoo * m_Foo;
};
That way you implement everything in your myBaz interface and only need to supply the glue in one place. myFoo can have as many additional methods added to it as they want; the rest of your application need not know or care about it.
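Usage then looks something like this (GetFooFromModuleA is a made-up source of a myFoo):
myFoo * foo = GetFooFromModuleA(); // module A hands out a myFoo
myBaz * baz = new MyAdaptor(foo);  // module B only ever sees a myBaz
Bar bar = baz->getBar();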
I am developing a C++ application used to simulate a real-world scenario. Based on this simulation, our team is going to develop, test and evaluate different algorithms working within such a real-world scenario.
We need the possibility to define several scenarios (they might differ in a few parameters, but a future scenario might also require creating objects of new classes) and the possibility to maintain a set of algorithms (which is, again, a set of parameters but also the definition which classes are to be created). Parameters are passed to the classes in the constructor.
I am wondering which is the best way to manage all the scenario and algorithm configurations. It should be easily possible to have one developer work on one scenario with "his" algorithm and another developer working on another scenario with "his" different algorithm. Still, the parameter sets might be huge and should be "sharable" (if I defined a set of parameters for a certain algorithm in Scenario A, it should be possible to use the algorithm in Scenario B without copy&paste).
It seems like there are two main ways to accomplish my task:
Define a configuration file format that can handle my requirements. This format might be XML-based or custom. As there is no C#-like reflection in C++, it seems like I have to update the config-file parser each time a new algorithm class is added to the project (in order to convert a string like "MyClass" into a new instance of MyClass). I could create a name for every setup and pass this name as a command line argument.
pros: no compilation required to change a parameter and re-run; I can easily store the whole config file with the simulation results
cons: seems like a lot of effort, especially since I am using a lot of template classes that have to be instantiated with given template arguments; no IDE support for writing the file (at least without creating a whole XSD, which I would have to update every time a parameter/class is added)
Wire everything up in C++ code. I am not completely sure how I would do this to separate all the different creation logic but still be able to reuse parameters across scenarios. I think I'd also try to give every setup a (string) name and use this name to select the setup via command line arg.
pro: type safety, IDE support, no parser needed
con: how can I easily store the setup with the results (maybe some serialization?); needs compilation after every parameter change
Now here are my questions:
- What is your opinion? Did I miss important pros/cons?
- Did I miss a third option?
- Is there a simple way to implement the config file approach that gives me enough flexibility?
- How would you organize all the factory code in the second approach? Are there any good C++ examples for something like this out there?
Thanks a lot!
There is a way to do this without templates or reflection.
First, you make sure that all the classes you want to create from the configuration file have a common base class. Let's call this MyBaseClass and assume that MyClass1, MyClass2 and MyClass3 all inherit from it.
Second, you implement a factory function for each of MyClass1, MyClass2 and MyClass3. The signatures of all these factory functions must be identical. An example factory function is as follows.
MyBaseClass * create_MyClass1(Configuration & cfg)
{
    // Retrieve config variables and pass them as parameters
    // to the constructor.
    int age = cfg.lookupInt("age");
    std::string address = cfg.lookupString("address");
    return new MyClass1(age, address);
}
Third, you register all the factory functions in a map.
typedef MyBaseClass * (*FactoryFunc)(Configuration &);

std::map<std::string, FactoryFunc> nameToFactoryFunc;
nameToFactoryFunc["MyClass1"] = &create_MyClass1;
nameToFactoryFunc["MyClass2"] = &create_MyClass2;
nameToFactoryFunc["MyClass3"] = &create_MyClass3;
Finally, you parse the configuration file and iterate over it to find all the entries that specify the name of a class. When you find such an entry, you look up its factory function in the nameToFactoryFunc table and invoke the function to create the corresponding object.
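The lookup itself then boils down to something like this (continuing the sketch above):
// Given a class name from the config file, create the corresponding object.
MyBaseClass * create_object(const std::string & classname, Configuration & cfg)
{
    std::map<std::string, FactoryFunc>::const_iterator it =
        nameToFactoryFunc.find(classname);
    if (it == nameToFactoryFunc.end())
        return nullptr; // or report an "unknown class" error
    return it->second(cfg); // e.g. invokes create_MyClass1(cfg)
}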
If you don't use XML, it's possible that boost::spirit could short-circuit at least some of the problems you are facing. Here's a simple example of how config data could be parsed directly into a class instance.
I found this website with a nice template-based factory which I think I will use in my code.