I have a project with a large codebase (>200,000 lines of code) that I maintain ("The core").
Currently, this core has a scripting engine that consists of hooks and a script manager class that calls all hooked functions (registered via DLLs) as the corresponding events occur. To be quite honest, I don't know exactly how it works, since the core is mostly undocumented and spans several years and a multitude of developers (who are, of course, absent). An example of the current scripting engine is:
void OnMapLoad(uint32 MapID)
{
    if (MapID == 1234)
    {
        printf("Map 1234 has been loaded");
    }
}

void SetupOnMapLoad(ScriptMgr *mgr)
{
    mgr->register_hook(HOOK_ON_MAP_LOAD, (void*)&OnMapLoad);
}
A supplemental file named setup.cpp calls SetupOnMapLoad with the core's ScriptMgr.
This method is not what I'm looking for. To me, the perfect scripting engine would be one that will allow me to override core class methods. I want to be able to create classes that inherit from core classes and extend on them, like so:
// In the core:
class Map
{
    uint32 m_mapid;
    void Load();
    //...
};

// In the script:
class ExtendedMap : public Map
{
    void Load()
    {
        if (m_mapid == 1234)
            printf("Map 1234 has been loaded");
        Map::Load();
    }
};
And then I want every instance of Map in both the core and scripts to actually be an instance of ExtendedMap.
Is that possible? How?
The inheritance is possible. I don't see a solution for replacing the instances of Map with instances of ExtendedMap.
Normally, you could do that if you had a factory class or function that is always used to create a Map object, but this is a matter of existing (or nonexistent) design.
The only solution I see is to search in the code for instantiations and try to replace them by hand. This is a risky one, because you might miss some of them, and it might be that some of the instantiations are not in the source code available to you (e.g. in that old DLL).
Later edit
This method overriding also has a side effect when it is used polymorphically.
Example:
Map* pMyMap = new ExtendedMap;
pMyMap->Load(); // This will call Map::Load, not ExtendedMap::Load, because Load is not virtual.
This sounds like a textbook case for the "Decorator" design pattern.
Although it's possible, it's quite dangerous: the system should be open for extension (i.e. hooks), but closed for change (i.e. overriding/redefining). When inheriting like that, you can't anticipate the behaviour your client code is going to show. As you see in your example, client code must remember to call the superclass' method, which it won't :)
An option would be to create a non-virtual interface: an abstract base class that has some template methods that call pure virtual functions. These must be defined by subclasses.
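For illustration, here is a minimal sketch of that non-virtual interface idea applied to the Map example; the OnBeforeLoad hook and the DoLoad split are my own inventions, not the core's actual API:
#include <cstdio>
typedef unsigned int uint32; // as in the core

class Map
{
public:
    void Load()                     // public and non-virtual: the core always calls this
    {
        OnBeforeLoad();             // extension point for scripts
        DoLoad();                   // the core's own loading logic
    }
    virtual ~Map() {}
protected:
    uint32 m_mapid;
private:
    virtual void OnBeforeLoad() {}  // scripts override this; the default is a no-op
    void DoLoad();                  // defined in the core
};

// In the script:
class ExtendedMap : public Map
{
    void OnBeforeLoad() override
    {
        if (m_mapid == 1234)
            printf("Map 1234 has been loaded\n");
    }
};
This way the script cannot forget to run the base behaviour: the template method Load always runs it.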
If you want no core Map's to be created, the script should give the core a factory to create Map descendants.
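A rough sketch of that factory hand-off, assuming the core can be changed to create maps only through the factory (MapFactory and CreateMap are illustrative names):
class MapFactory
{
public:
    virtual Map* CreateMap() { return new Map; }   // the core's default
    virtual ~MapFactory() {}
};

// Somewhere in the core, maps are only ever created through the factory:
Map* CreateMapInCore(MapFactory& factory)
{
    return factory.CreateMap();   // never 'new Map' directly
}

// The script installs its own factory:
class ExtendedMapFactory : public MapFactory
{
public:
    Map* CreateMap() override { return new ExtendedMap; }
};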
If my experience with similar systems is applicable to your situation, there are several hooks to register, so basing a solution on the Abstract Factory pattern will not really work. Your system is close to the Observer pattern, and that's what I'd use. You create one base class with all the possible hooks as virtual members (or several base classes grouping related hooks, if the hooks are numerous). Instead of registering hooks one by one, you register one object of a type descending from that class, with the needed overrides. The object can have state, which advantageously replaces the void* user-data fields that such callback systems commonly have.
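A rough sketch of that observer-style registration; the hook names and register_observer are hypothetical, and your ScriptMgr would need an equivalent:
#include <cstdio>
typedef unsigned int uint32; // as in the core

class ScriptObserver
{
public:
    virtual void OnMapLoad(uint32 mapId) {}        // defaults do nothing
    virtual void OnPlayerLogin(uint32 playerId) {} // ...one virtual member per hook
    virtual ~ScriptObserver() {}
};

class MyScript : public ScriptObserver
{
    int m_loadCount = 0;   // state replaces the usual void* user data
public:
    void OnMapLoad(uint32 mapId) override
    {
        ++m_loadCount;
        if (mapId == 1234)
            printf("Map 1234 has been loaded\n");
    }
};

void Setup(ScriptMgr* mgr)
{
    mgr->register_observer(new MyScript);   // one object instead of one function per hook
}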
Related
I'm trying to make a C++ interface class (pure virtual) declare a function that all derived classes must implement. However, because the interface class tries to be ignorant of implementation details, it doesn't know the type of the returned object, and would like to delegate that to the derived class. The specific type of the returned object is handled by the derived class.
class UIInterface
{
public:
    // Should not know about QWidget.
    // Would like to defer the return type to the derived class that implements the interface.
    virtual QWidget *getWindow() = 0;
};

class QUIManager : public UIInterface
{
public:
    QWidget *getWindow() override { return m_widget; }
};

class XUIManager : public UIInterface
{
public:
    XWidget *getWindow() override { return m_widget; } // won't compile unless XWidget derives from QWidget
};
Except UIInterface should not know about QWidget. In some future version, the UIManager might be an XUIManager which returns a different type of window. If possible, I'd like to avoid returning std::any or void * followed by casting.
This pattern keeps showing up in my code, so I'm probably doing something wrong.
Edit based on comments:
My code is experimental, so although I'm using Qt as the UI for now, it's conceivable that may change, for example to use an immediate mode package, or in any case to separate the core logic from the UI. The core logic, may, for example, be accessed from just a console with no UI. Likewise, I'm using Qt's model/view and database classes.
Some examples:
The core needs to tell the UI to open and close windows. I've concluded in most cases that the core does not need to blindly shuffle naked UI pointers, so perhaps this use case is no longer that important.
The core needs to be able to glue database, model, and view together, without these latter three items knowing about each other, even though all three latter items may be specific to Qt or some other framework, or split up, such as using sqlite3 standalone and delegating model/view to Qt. For example, core needs to tell database interface to open a sqlite3 file, ask the modelcreator to create a model based on this, then pass model to UIManager to create the view. In no case does the core need to know specific types, and it would probably suffice to pass pointers around, but this seems like it's not the C++ way these days.
Although for now the track is C++, at some point the core itself might be implemented in a language better suited to the core algorithmic functions, e.g. Julia or Common Lisp, which will introduce an impedance mismatch with Qt. So I'm trying my best to ensure the core can blindly call some high-level functions while still serving as the central hub for the application.
Two options come to mind, depending on which fits better in your project:
1) Use a placeholder for the return type:
class UIInterface
{
public:
    virtual Widget* getWindow() = 0;
};
You can define using Widget = QWidget in another file. You can change the alias or even implement your own Widget class later, and the whole UIInterface will not change. In this case you're just hiding the real type behind an alias.
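For example (widget_alias.h is an illustrative file name):
// widget_alias.h -- the only place that knows about Qt
#include <QWidget>
using Widget = QWidget;   // swap this one alias to retarget every interface that uses Widget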
2) Use a template class:
template<typename T>
class UIInterface
{
public:
    virtual T* getWindow() = 0;
};
BUT there are downsides to No. 2: you can no longer use UIInterface as an interface without specifying T, and somewhere in your code you still have to state that QWidget is the concrete type for T.
Since you wrote "the interface might change in future" and not "I would create an interface regardless of the concrete widget type", I guess the option that fits you better is No. 1.
I sometimes come across, mostly when working with old code, the situation where one class acts as a mere forwarder of calls to another class. Imagine there is an old controller which controls several things, but some of those responsibilities can be dedicated to a new class. Now, the old controller will call the new class's interface.
Example:
class Worker {
public:
    void addObject(const std::string & id,
                   const Object * obj) {
        // do the actual adding
    }
};

class Controller {
public:
    void addObject(const std::string & id,
                   const Object * obj) {
        m_Wrk.addObject(id, obj); // mere forwarding
    }
private:
    Worker m_Wrk; // Worker must be fully defined before Controller for this member
};
Now, when thinking about testing the software, the interface might need to be tested in both classes, and that is harder in the controller, since the controller tests mostly end up verifying changes that actually happen inside the worker.
Is this usage particularly bad, or is it okay to use this kind of design in already existing code, as explained above?
Thanks
It's hard to answer whether this code is good or bad - it depends on the circumstances. The code in question (except for the fact that the m_Wrk field should be a pointer) is the PImpl pattern, which is a kind of Bridge pattern. This pattern is used to split abstraction and implementation so that they can change independently.
For example, if you write a C++ library and you'd like to provide a stable ABI which does not change if there are no changes in the public interface, you could put the Controller interface in the header, together with a forward declaration of the Worker class, and put both the interface and the implementation of Worker in the .cpp file. When Worker changes, the Controller ABI remains the same. If the implementation details were placed directly in the Controller class, the ABI might change.
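A minimal sketch of that layout, assuming we also make m_Wrk a pointer as noted above:
// controller.h -- stable ABI: only a forward declaration of Worker leaks out.
#include <memory>
#include <string>

class Object;
class Worker;   // forward declaration; no implementation details here

class Controller {
public:
    Controller();
    ~Controller();   // defined in the .cpp, where Worker is a complete type
    void addObject(const std::string& id, const Object* obj);
private:
    std::unique_ptr<Worker> m_Wrk;   // pointer member, so Worker can change freely
};

// controller.cpp -- includes worker.h; Worker can change without touching controller.h.
Controller::Controller() : m_Wrk(new Worker) {}
Controller::~Controller() = default;
void Controller::addObject(const std::string& id, const Object* obj) {
    m_Wrk->addObject(id, obj);
}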
Also, such a pattern (Bridge in general) allows you to use different implementations of a given abstraction, so it's possible to separate inheritance in terms of interface extension from inheritance in terms of Liskov substitution.
Also, if the Controller implements the same interface as Worker, in future it can implement some additional behavior, like in the proxy pattern.
If one of these situations takes place in your project (or may take place in the future), it's probably good code. If not, it's just an unnecessary complication.
I've run into a bit of an issue, and I'm looking for the best solution concept/theory.
I have a system that needs to use objects. Each object that the system uses has a known interface, likely implemented as an abstract class. The interfaces are known at build time, and will not change. The exact implementation to be used will vary and I have no idea ahead of time what module will be providing it. The only guarantee is that they will provide the interface. The class name and module (DLL) come from a config file or may be changed programmatically.
Now, I have all that set up at the moment using a relatively simple system, set up something like so (rewritten pseudo-code, just to show the basics):
class Module; // forward declaration, since ClassID refers to it

struct ClassID
{
    Module * module;
    int number;
};

class Module
{
    HMODULE module;
    function<IObject * (int)> createfunc; // filled in when the module is loaded

    static Module * Load(String filename);

    IObject * CreateClass(int number)
    {
        return createfunc(number);
    }
};

class ModuleManager
{
    bool LoadModule(String filename);

    IObject * CreateClass(String classname)
    {
        ClassID id = AvailableClasses.find(classname);
        return id.module->CreateClass(id.number);
    }

    vector<Module*> LoadedModules;
    map<String, ClassID> AvailableClasses;
};
Modules have a few exported functions to give the number of classes they provide and the names/IDs of those, which are then stored. All classes derive from IObject, which has a virtual destructor, stores the source module and has some methods to get the class' ID, what interface it implements and such.
The only issue with this is that each module has to be manually loaded somewhere (listed in the config file, at the moment). I would like to avoid doing this explicitly (outside of the ModuleManager; inside it, I'm not really concerned with how it's implemented).
I would like to have a similar system without having to handle loading the modules, just create an object and (once it's all set up) it magically appears.
I believe this is similar to what COM is intended to do, in some ways. I looked into the COM system briefly, but it appears to be overkill beyond belief. I only need the classes known within my system and don't need all the other features it handles, just implementations of interfaces coming from somewhere.
My other idea is to use the registry and keep a key with all the known/registered classes and their source modules and numbers, so I can just look them up and it will appear that Manager::CreateClass finds and makes the object magically. This seems like a viable solution, but I'm not sure if it's optimal or if I'm reinventing something.
So, after all that, my question is: How to handle this? Is there an existing technology, if not, how best to set it up myself? Are there any gotchas that I should be looking out for?
COM very likely is what you want. It is very broad but you don't need to use all the functionality. For example, you don't need to require participants to register GUIDs, you can define your own mechanism for creating instances of interfaces. There are a number of templates and other mechanisms to make it easy to create COM interfaces. What's more, since it is a standard, it is easy to document the requirements.
One very important thing to bear in mind is that importing/exporting C++ objects requires all participants to be using the same compiler. If you think that ever could be a problem to you then you should use COM. If you are happy to accept that restriction then you can carry on as you are.
I don't know if any technology exists to do this.
I do know that I worked with a system very similar to this. We used XML files to describe the various classes that different modules made available. Our equivalent of ModuleManager would parse the XML files to determine what to create for the user at run time, based on the class name they provided and the configuration of the system. (Requesting an object that implemented interface 'I' could give back any of objects 'A', 'B' or 'C' depending on how the system was configured.)
The big gotcha we found was that the system was very brittle and at times hard to debug/understand. Just reading through the code, it was often near impossible to see what concrete class was being instantiated. We also found that maintaining the XML created more bugs and overhead than expected.
If I was to do this again, I would keep the design pattern of exposing classes from DLL's through interfaces, but I would not try to build a central registry of classes, nor would I derive everything from a base class such as IObject.
I would instead make each module responsible for exposing its own factory function(s) to instantiate objects.
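A rough sketch of that per-module factory approach (renderer.dll, CreateRenderer and ConcreteRenderer are all illustrative names):
// Inside the module (DLL): the concrete type stays private to the module.
extern "C" __declspec(dllexport) IObject* CreateRenderer()
{
    return new ConcreteRenderer;
}

// Inside the host application:
#include <windows.h>
IObject* LoadRenderer()
{
    HMODULE mod = LoadLibraryA("renderer.dll");
    auto create = reinterpret_cast<IObject* (*)()>(
        GetProcAddress(mod, "CreateRenderer"));
    return create();   // error handling omitted for brevity
}
Note the caveat from the answer above still applies: the exported function has C linkage, but the IObject it returns is a C++ object, so all participants still need compatible compilers.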
I am currently busy refactoring big parts of my application. The main purpose is to remove as many dependencies between the different modules as possible. I have now stumbled on the following problem:
In my application I have a GUI module that has defined an interface IDataProvider. The interface needs to be implemented by the application and is used to 'provide data' to the GUI module. E.g. a data grid can be given this IDataProvider and use it to loop over all the instances that should be shown in the data grid and to get their data.
Now I have other modules (in fact quite a few more) that all need something similar (like a reporting module, a database integration module, a mathematical solver module, ...). At this moment I can see two things I can do:
I could move IDataProvider from the GUI layer to a much lower-level layer and reuse this same interface in all the other modules.
This has the advantage that it becomes easier for the application to use all the modules (it only has to implement a data provider once).
The disadvantage is that I introduce a dependency between the modules and the central IDataProvider. If someone starts to extend IDataProvider with additional methods needed for one module, it also starts to pollute the other modules.
The other alternative is to give every module its own data provider, and force the application to implement all of them if it wants to use all the modules.
The advantage is that the modules are not dependent on a common part
The disadvantage is that I end up with IGridDataProvider, IReportDataProvider, IDatabaseDataProvider, ISolverDataProvider.
What's the best approach to use? Is it acceptable to make all modules dependent on the same common interface if they require [almost or completely] the same kind of interface?
If I use the same IDataProvider interface, can this give nasty problems in the future (which I am not aware of at this moment)?
Why don't you do an intermediate implementation? Have some class implement the recurring parts of IDataProvider (as in the first case) in a factored-out library (or other layer). In addition, everyone is required to "implement" their own IDataProvider (as in the second case). Then you can re-use your IDataProvider implementation all over the place and add specific methods in custom classes by creating a derived class...
I.e.:
// Common module.
class BasicDataProvider : public IDataProvider
{
public:
    // common overrides...
};

// For modules requiring no specific methods...
typedef BasicDataProvider ReportDataProvider;

// The database module requires "special" handling.
class DatabaseDataProvider : public BasicDataProvider
{
public:
    // custom overrides...
};
There is an alternative to the disadvantage you cite for moving IDataProvider to a lower-level layer.
A module that wants an extended interface could put those extensions in its own sub-interface of IDataProvider. You could encourage this by pro-actively creating those sub-interfaces.
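For example (the method names here are made up for illustration):
class IDataProvider
{
public:
    virtual int GetInstanceCount() = 0;   // the shared, minimal surface
    virtual ~IDataProvider() {}
};

// The solver module owns its extension; the other modules never see it.
class ISolverDataProvider : public IDataProvider
{
public:
    virtual double GetCoefficient(int index) = 0;
};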
I wouldn't mind having multiple modules depend on one interface, even if each one doesn't use all of the methods the interface publishes. You could also think of splitting the interface by what each part of it means, instead of by which module it is intended for. Most of the modules you mention only need read access, so you could separate it that way and have another interface for write access, etc.
The data layer doesn't need to know what the data is used for (which is the job of the presentation layer). It only needs to know how to return it and how to modify it.
Moreover, there's absolutely no problem in moving the data provider (which could also be labeled a controller) to a lower level, because it's probably already implementing some business logic (like data consistency) which has nothing to do with the UI.
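A tiny sketch of that read/write split (the method names are illustrative):
#include <string>

class IReadDataProvider
{
public:
    virtual int GetCount() = 0;
    virtual std::string GetValue(int index) = 0;
    virtual ~IReadDataProvider() {}
};

class IWriteDataProvider
{
public:
    virtual void SetValue(int index, const std::string& value) = 0;
    virtual ~IWriteDataProvider() {}
};

// The grid and report modules only need IReadDataProvider;
// the database module can require both.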
If you're worried that additional methods will be piled onto an interface, you can use an Adaptor pattern. That is:
class myFoo {
public:
    virtual Bar getBar() = 0;
};
and in the other module:
class myBaz {
public:
    virtual Bar getBar() = 0;
};
Then to use one with the other:
class MyAdaptor : public myBaz {
public:
    MyAdaptor(myFoo *_input) {
        m_Foo = _input;
    }
    Bar getBar() override { return m_Foo->getBar(); }
private:
    myFoo* m_Foo;
};
That way you implement everything in your myBaz interface and only need to supply the glue in one place. The myFoo can have as many additional methods added to it as they want, the rest of your application need not know or care about it.
I have a code base where many of the classes I implement derive from classes that are provided by other divisions of my company. Our working relationship with these other divisions is often as though they were third-party middleware vendors.
I'm trying to write test code without modifying these base classes. However, there are issues with creating meaningful test objects due to the lack of interfaces:
//ACommonClass.h
#include "globalthermonuclearwar.h" // which contains deep #include dependencies...
#include "tictactoe.h"              // ...and these need to exist at compile time to get into a test...

class Something // which may or may not inherit from another class similar to this...
{
public:
    virtual void fxn1(void); // which often calls into many other classes, similar to this
    //...
    int data1; // will be the only thing I can test against, but is often meaningless without fxn1 implemented
    //...
};
I'd normally extract an interface and work from there, but as these are "Third Party", I can't commit these changes.
Currently, I've created a separate file that holds fake implementations for functions that are defined in the third-party supplied base class headers, on a need-to-know basis, as described in the book "Working Effectively with Legacy Code".
My plan was to continue to use these definitions and provide alternative test implementations for each third party class that I needed:
//SomethingRequiredImplementations.cpp
#include "ACommonClass.h"

void CGlobalThermoNuclearWar::Simulate(void) {} // fake this and all other required functions...
// fake implementations for otherwise undefined functions in globalthermonuclearwar.h's #include files...

void Something::fxn1(void) { data1 = blah(); } // test-specific functionality.
But before I start doing that I was wondering if any one has tried providing actual objects on a code base similar to mine, which would allow creating new test specific classes to use in place of actual third-party classes.
Note all code bases in question are written in C++.
Mock objects are suitable for this kind of task. They allow you to simulate the existence of other components without needing them to be present. You simply define the expected input and output in your tests.
Google has a good mocking framework for C++ (Google Mock, now part of GoogleTest).
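A minimal sketch with Google Mock, assuming the base class exposes virtual functions as Something::fxn1 does in the question:
#include <gmock/gmock.h>
#include <gtest/gtest.h>

class MockSomething : public Something
{
public:
    MOCK_METHOD(void, fxn1, (), (override));   // possible because fxn1 is virtual
};

TEST(SomethingTest, CallsFxn1Once)
{
    MockSomething mock;
    EXPECT_CALL(mock, fxn1()).Times(1);
    mock.fxn1();   // in real code, pass &mock to the code under test
}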
I'm running into a very similar problem at the moment. I don't want to add a bunch of interfaces that are only there for the purpose of testing, so I can't use any of the existing mock object libraries. To get around this I do the same thing, creating a different file with fake implementations, and having my tests link the fake behaviour while the production code links the real behaviour.
What I wish I could do at this point, is take the internals of another mock framework, and use it inside my fake objects. It would look a little something like this:
Production.h
class ConcreteProductionClass { // regular everyday class
protected:
    ConcreteProductionClass(); // I've found the 0-arg constructor useful
public:
    void regularFunction(); // regular function that I want to mock
};
Mock.h
class MockProductionClass
    : public ConcreteProductionClass
    , public ClassThatLetsMeSetExpectations
{
    friend class ConcreteProductionClass;
    MockTypes membersNeededToSetExpectations;
public:
    MockProductionClass() : ConcreteProductionClass() {}
};

// The fake definition that the tests link instead of the real one:
void ConcreteProductionClass::regularFunction() {
    membersNeededToSetExpectations.PassOrFailTheTest(); // wishful, as described above
}
ProductionCode.cpp
void doSomething(ConcreteProductionClass& c) { // by reference, so a mock can be passed in
    c.regularFunction();
}
Test.cpp
TEST(myTest) {
MockProductionClass m;
m.SetExpectationsAndReturnValues();
doSomething(m);
ASSERT(m.verify());
}
The most painful part of all this is that the other mock frameworks are so close to this, but don't do it exactly, and the macros are so convoluted that it's not trivial to adapt them. I've begun looking into this in my spare time, but it's not moving along very quickly.

Even if I got my method working the way I want, and had the expectation-setting code in place, this method still has a couple of drawbacks. One of them is that your build commands can get to be kind of long if you have to link against a lot of .o files rather than one .a, but that's manageable. It's also impossible to fall through to the default implementation, since we're not linking it.

Anyway, I know this doesn't answer the question, or really even tell you anything you don't already know, but it shows how close the C++ community is to being able to mock classes that don't have a pure virtual interface.
You might want to consider mocking instead of faking as a potential solution. In some cases you may need to write wrapper classes that are mockable if the original classes aren't. I've done this with framework classes in C#/.Net, but not C++ so YMMV.
If I have a class that I need under test that derives from something I can't (or don't want to) run under test I'll:
Make a new logic-only class.
Move the code-i-wanna-test to the logic class.
Use an interface to talk back to the real class to interact with the base class and/or things I can't or won't put in the logic.
Define a test class using that same interface. This test class could have nothing but noops or fancy code that simulates the real classes.
If I have a class that I just need to use in testing, but using the real class is a problem (dependencies or unwanted behaviors):
I'll define a new interface that looks like all of the public methods I need to call.
I'll create a mock version of the object that supports that interface for testing.
I'll create another class that is constructed with a "real" version of that class. It also supports that interface. All interface calls are forwarded to the real object's methods.
I'll only do this for methods I actually call - not ALL the public methods. I'll add to these classes as I write more tests.
For example, I wrap MFC's GDI classes like this to test Windows GDI drawing code. Templates can make some of this easier - but we often end up not doing that for various technical reasons (stuff with Windows DLL class exporting...).
I'm sure all this is in Feathers' Working Effectively with Legacy Code book - and what I'm describing has actual terms. Just don't make me pull the book off the shelf...
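A rough sketch of that forwarding-wrapper idea (IDrawSurface, RealSurface and DrawLine are all illustrative stand-ins):
class RealSurface { // stand-in for the real class I can't run under test
public:
    void DrawLine(int x1, int y1, int x2, int y2);
};

class IDrawSurface { // only the methods I actually call
public:
    virtual void DrawLine(int x1, int y1, int x2, int y2) = 0;
    virtual ~IDrawSurface() {}
};

class RealSurfaceAdapter : public IDrawSurface { // forwards to the real object
    RealSurface& m_real;
public:
    explicit RealSurfaceAdapter(RealSurface& real) : m_real(real) {}
    void DrawLine(int x1, int y1, int x2, int y2) override {
        m_real.DrawLine(x1, y1, x2, y2);
    }
};

class FakeSurface : public IDrawSurface { // used in tests
public:
    int lineCount = 0;
    void DrawLine(int, int, int, int) override { ++lineCount; }
};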
One thing you did not indicate in your question is the reason why your classes derive from base classes from the other division. Is the relationship really an IS-A relationship?
Unless your classes need to be used by a framework, you could consider favoring delegation over inheritance. Then you can use dependency injection to provide your class with a mock of their class in the unit tests.
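A small sketch of that delegation-plus-injection shape (ITheirService and Compute are illustrative stand-ins for the other division's class):
class ITheirService { // interface owned by *your* code
public:
    virtual int Compute(int input) = 0;
    virtual ~ITheirService() {}
};

class MyClass { // delegates instead of inheriting
    ITheirService& m_service; // injected dependency
public:
    explicit MyClass(ITheirService& service) : m_service(service) {}
    int DoWork(int x) { return m_service.Compute(x) + 1; }
};

// In unit tests, inject a mock that implements ITheirService;
// in production, inject a thin adapter around their real class.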
Otherwise, an idea would be to write a script to extract and create the interface you need from the header they provide, and integrate this into the compilation process so your unit tests can be checked in.