Module and class handling (dynamic linking) - C++

I've run into a bit of an issue, and I'm looking for the best solution concept/theory.
I have a system that needs to use objects. Each object that the system uses has a known interface, likely implemented as an abstract class. The interfaces are known at build time, and will not change. The exact implementation to be used will vary and I have no idea ahead of time what module will be providing it. The only guarantee is that they will provide the interface. The class name and module (DLL) come from a config file or may be changed programmatically.
Now, I have all that working at the moment using a relatively simple system, set up something like so (rewritten pseudo-code, just to show the basics):
struct ClassID
{
    Module * module;
    int number;
};

class Module
{
    HMODULE module;
    function<IObject * (int)> createfunc;   // factory function exported by the DLL

    static Module * Load(String filename);

    IObject * CreateClass(int number)
    {
        return createfunc(number);
    }
};

class ModuleManager
{
    bool LoadModule(String filename);

    IObject * CreateClass(String classname)
    {
        ClassID id = AvailableClasses[classname];
        return id.module->CreateClass(id.number);
    }

    vector<Module*> LoadedModules;
    map<String, ClassID> AvailableClasses;
};
Modules have a few exported functions to give the number of classes they provide and the names/IDs of those, which are then stored. All classes derive from IObject, which has a virtual destructor, stores the source module and has some methods to get the class' ID, what interface it implements and such.
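For illustration, the per-module exports might look something like this (a sketch only; these names and signatures are placeholders, not the ones actually used):
extern "C" __declspec(dllexport) int         GetProvidedClassCount();          // how many classes this module offers
extern "C" __declspec(dllexport) const char* GetProvidedClassName(int index);  // name for the manager's lookup map
extern "C" __declspec(dllexport) int         GetProvidedClassId(int index);    // the ClassID::number value
extern "C" __declspec(dllexport) IObject*    CreateInstance(int classId);      // the createfunc stored per module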
The only issue with this is each module has to be manually loaded somewhere (listed in the config file, at the moment). I would like to avoid doing this explicitly (outside of the ModuleManager, inside that I'm not really concerned as to how it's implemented).
I would like to have a similar system without having to handle loading the modules, just create an object and (once it's all set up) it magically appears.
I believe this is similar to what COM is intended to do, in some ways. I looked into the COM system briefly, but it appears to be overkill beyond belief. I only need the classes known within my system and don't need all the other features it handles, just implementations of interfaces coming from somewhere.
My other idea is to use the registry and keep a key with all the known/registered classes and their source modules and numbers, so I can just look them up and it will appear that Manager::CreateClass finds and makes the object magically. This seems like a viable solution, but I'm not sure if it's optimal or if I'm reinventing something.
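For example, the lookup could be populated by enumerating a registry key, roughly like this (a sketch under assumptions: the key path and the "class name -> module path,class number" value layout are made up, and you'd link against Advapi32):
#include <windows.h>
#include <map>
#include <string>

// Each value under the key: name = class name, data = "module path,class number" (assumed layout).
std::map<std::string, std::string> ReadRegisteredClasses()
{
    std::map<std::string, std::string> classes;
    HKEY key = nullptr;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\MyApp\\Classes",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return classes;

    for (DWORD index = 0; ; ++index)
    {
        char  name[256]; DWORD nameLen = sizeof(name);
        BYTE  data[512]; DWORD dataLen = sizeof(data);
        DWORD type = 0;
        if (RegEnumValueA(key, index, name, &nameLen, nullptr,
                          &type, data, &dataLen) != ERROR_SUCCESS)
            break;                                    // ERROR_NO_MORE_ITEMS ends the loop
        if (type == REG_SZ)
            classes[name] = reinterpret_cast<const char*>(data);
    }
    RegCloseKey(key);
    return classes;
}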
So, after all that, my question is: how should I handle this? Is there an existing technology for it? If not, how do I best set it up myself? Are there any gotchas that I should be looking out for?

COM very likely is what you want. It is very broad but you don't need to use all the functionality. For example, you don't need to require participants to register GUIDs, you can define your own mechanism for creating instances of interfaces. There are a number of templates and other mechanisms to make it easy to create COM interfaces. What's more, since it is a standard, it is easy to document the requirements.
One very important thing to bear in mind is that importing/exporting C++ objects requires all participants to be using the same compiler. If you think that could ever be a problem for you then you should use COM. If you are happy to accept that restriction then you can carry on as you are.
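For instance, you can keep COM's binary rules (interfaces derived from IUnknown, reference counting) but skip registration entirely and hand out objects through a plain exported factory. A rough sketch, with a made-up interface and GUID:
#include <unknwn.h>

// An interface ID you generate yourself (e.g. with uuidgen); this one is a placeholder.
struct __declspec(uuid("9c7a1e6e-1234-4abc-8000-0123456789ab")) IRenderer
    : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE Draw() = 0;
};

// Exported from the module; no registry or CoCreateInstance involved.
// The caller calls Release() on the object when done with it.
extern "C" __declspec(dllexport) HRESULT CreateRenderer(IRenderer** out);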

I don't know if any technology exists to do this.
I do know that I worked with a system very similar to this. We used XML files to describe the various classes that different modules made available. Our equivalent of ModuleManager would parse the XML files to determine what to create for the user at run time based on the class name they provided and the configuration of the system. (Requesting an object that implemented interface 'I' could give back any of objects 'A', 'B' or 'C' depending on how the system was configured.)
The big gotcha we found was that the system was very brittle and at times hard to debug/understand. Just reading through the code, it was often near impossible to see what concrete class was being instantiated. We also found that maintaining the XML created more bugs and overhead than expected.
If I was to do this again, I would keep the design pattern of exposing classes from DLLs through interfaces, but I would not try to build a central registry of classes, nor would I derive everything from a base class such as IObject.
I would instead make each module responsible for exposing its own factory function(s) to instantiate objects.
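A rough sketch of what that could look like (the export name, signature and IObject interface are assumptions carried over from the question):
// In each module (DLL): one exported factory, no central registry.
extern "C" __declspec(dllexport) IObject* CreateObjectByName(const char* className);

// In the host application:
#include <windows.h>

typedef IObject* (*CreateFn)(const char*);

IObject* CreateFromModule(const char* dllPath, const char* className)
{
    HMODULE mod = LoadLibraryA(dllPath);   // error handling and unloading omitted
    if (!mod)
        return nullptr;
    CreateFn create = reinterpret_cast<CreateFn>(
        GetProcAddress(mod, "CreateObjectByName"));
    return create ? create(className) : nullptr;
}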

Related

Runtime interfaces and object composition in C++

I am searching for a simple, light-weight solution for interface-based runtime object composition in C++. I want to be able to specify interfaces (method declarations), and objects (creatable through the factory pattern) implementing these. At runtime I want mechanisms to instantiate these objects and interconnect them based on interface-connectors. The method calls at runtime should remain fairly cheap, i.e. only several more instructions per call, comparable to functor patterns.
The whole thing needs to be platform independent (at least MS Windows and Linux). And the solution needs to be licensed liberally, like open-source LGPL or (even better) BSD or something, especially allowing use in commercial products.
What I do not want are heavy things like networking, inter-process-communication, extra compiler steps (one-time code generation is ok though), or dependencies to some heavy libraries (like Qt).
The concrete scenario is: I have such a mechanism in a larger piece of software, but the mechanism is not very well implemented. Interfaces are realized by base classes exported by DLLs. These DLLs also export factory functions to instantiate the implementing objects, based on hand-written class IDs.
Before I now start to redesign and implement something better myself, I want to know if there is something out there which would be even better.
Edit: The solution also needs to support multi-threading environments. Additionally, as everything will happen inside the same process, I do not need data serialization mechanisms of any kind.
Edit: I know how such mechanisms work, and I know that several teaching books contain corresponding examples. I do not want to write it myself. The aim of my question is: Is there some sort of "industry standard" lib for this? It is a small problem (within a single process) and I am really only searching for a small solution.
Edit: I got the suggestion to add a pseudo-code example of what I really want to do. So here it is:
Somewhere I want to define interfaces. I do not care if it's C headers or some description language with code generation.
class interface1 {
public:
virtual void do_stuff(void) = 0;
};
class interface2 {
public:
virtual void do_more_stuff(void) = 0;
};
Then I want to provide (multiple) implementations. These may even be placed in DLL-based plugins. In particular, these two classes may be implemented in two different DLLs not knowing each other at compile time.
class A : public interface1 {
public:
virtual void do_stuff(void) {
// I even need to call further interfaces here
// This call should, however, not require anything heavy, like data serialization or something.
this->con->do_more_stuff();
}
// Interface connectors of some kind. Here I use something like a template
some_connector<interface2> con;
};
class B : public interface2 {
public:
virtual void do_more_stuff() {
// finally doing some stuff
}
};
Finally, in my application's main code I want to be able to compose my application logic at runtime (e.g. based on user input):
void main(void) {
// first I create my objects through a factory
some_object a = some_factory::create(some_guid<A>);
some_object b = some_factory::create(some_guid<B>);
// Then I want to connect the interface-connector 'con' of object 'a' to the instance of object 'b'
some_thing::connect(a, some_guid<A::con>, b);
// finally I want to call an interface-method.
interface1 *ia = a.some_cast<interface1>();
ia->do_stuff();
}
I am perfectly able to write such a solution myself (including all pitfalls). What I am searching for is a solution (e.g. a library) which is used and maintained by a wide user base.
While not widely used, I wrote a library several years ago that does this.
You can see it on GitHub (the zen-core library), and it's also available on Google Code.
The GitHub version only contains the core libraries, which is really all that you need. The Google Code version contains a LOT of extra libraries, primarily for game development, but it does provide a lot of good examples of how to use it.
The implementation was inspired by Eclipse's plugin system, using a plugin.xml file that indicates a list of available plugins, and a config.xml file that indicates which plugins you would like to load. I'd also like to change it so that it doesn't depend on libxml2, and to allow you to specify plugins using other methods.
The documentation has been destroyed thanks to some hackers, but if you think this would be useful then I can write enough documentation to get you started.
A co-worker gave me two further tips:
The Loki library (originating from the Modern C++ Design book):
http://loki-lib.sourceforge.net/
A boost-like library:
http://kifri.fri.uniza.sk/~chochlik/mirror-lib/html/
I still have not looked at all the ideas I got.

Is it safe to use strings as private data members in a class used across a DLL boundary?

My understanding is that exposing functions that take or return STL containers (such as std::string) across DLL boundaries can cause problems due to differences in the STL implementations of those containers in the two binaries. But is it safe to export a class like:
class Customer
{
public:
wchar_t * getName() const;
private:
wstring mName;
};
Without some sort of hack, mName is not going to be usable by the executable, so it won't be able to execute methods on mName, nor construct/destruct this object.
My gut feeling is "don't do this, it's unsafe", but I can't figure out a good reason.
It is not a problem, because it is trumped by the bigger problem: you cannot create an object of that class in code that lives in a module other than the one that contains the code for the class. Code in another module cannot accurately know the required object size; its implementation of the std::string class may well be different. Which, as declared, also affects the size of the Customer object. Even the same compiler cannot guarantee this; mixing optimized and debugging builds of these modules, for example. Albeit that this is usually pretty easy to avoid.
So you must create a class factory for Customer objects, a factory that lives in that same module. Which then automatically implies that any code that touches the "mName" member also lives in the same module. And is therefore safe.
Next step then is to not expose Customer at all but expose a pure abstract base class (aka interface). Now you can prevent the client code from creating an instance of Customer and shooting their leg off. And you'll trivially hide the std::string as well. Interface-based programming techniques are common in module interop scenarios. Also the approach taken by COM.
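A minimal sketch of that idea, assuming a stripped-down Customer: only the abstract interface and an exported factory cross the DLL boundary, and the std::wstring never does.
// Public header (seen by clients):
struct ICustomer
{
    virtual const wchar_t* GetName() const = 0;
    virtual void Release() = 0;            // destruction stays inside the DLL
protected:
    ~ICustomer() {}                        // clients cannot delete the pointer directly
};

extern "C" __declspec(dllexport) ICustomer* CreateCustomer(const wchar_t* name);

// Inside the DLL only:
#include <string>

class Customer : public ICustomer
{
public:
    explicit Customer(const wchar_t* name) : mName(name) {}
    const wchar_t* GetName() const override { return mName.c_str(); }
    void Release() override { delete this; }
private:
    std::wstring mName;                    // never visible across the boundary
};

ICustomer* CreateCustomer(const wchar_t* name) { return new Customer(name); }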
As long as the code that allocates instances of the class and the code that deallocates them are built with the same settings, you should be OK, but you are right to avoid this.
Differences between the .exe and .dll in debug/release settings or code generation (Multi-threaded DLL vs. single-threaded) could cause problems in some scenarios.
I would recommend using abstract classes in the DLL interface with creation and deletion done solely inside the DLL.
Interfaces like:
class A {
protected:
    virtual ~A() {}
public:
    virtual void func() = 0;
};

// exported create/delete functions
A* create_A();
void destroy_A(A*);
DLL Implementation like:
class A_Impl : public A {
public:
    ~A_Impl() {}
    void func() { do_something(); }
};

A* create_A() { return new A_Impl; }

void destroy_A(A* a) {
    A_Impl* ai = static_cast<A_Impl*>(a);
    delete ai;
}
Should be ok.
Even if your class has no data members, you cannot expect it to be usable from code compiled with a different compiler. There is no common ABI for C++ classes. You can expect differences in name mangling just for starters.
If you are prepared to constrain clients to use the same compiler as you, or provide source to allow clients to compile your code with their compiler, then you can do pretty much anything across your interface. Otherwise you should stick to C style interfaces.
If you want to provide an object oriented interface in a DLL that is truly safe, I would suggest building it on top of the COM object model. That's what it was designed for.
Any other attempt to share classes between code that is compiled by different compilers has the potential to fail. You may be able to get something that seems to work most of the time, but it can't be guaranteed to work.
The chances are that at some point you're going to be relying on undefined behaviour in terms of calling conventions or class structure or memory allocation.
The C++ standard does not say anything about the ABI provided by implementations. Even on a single platform changing the compiler options may change binary layout or function interfaces.
Thus, to ensure that standard types can be used across DLL boundaries, it is your responsibility to ensure that:
Resource acquisition/release for standard types is done by the same DLL. (Note: you can have multiple CRTs in a process, but a resource acquired by crt1.DLL must be released by crt1.DLL.)
This is not specific to C++. In C for example malloc/free, fopen/fclose call pairs must each go to a single C runtime.
This can be done by either of the below:
By explicitly exporting acquisition/release functions (Photon's answer). In this case you are forced to use a factory pattern and abstract types. Basically COM, or a COM clone.
Forcing a group of DLLs to link against the same dynamic CRT. In this case you can safely export any kind of functions/classes.
There are also two potential bugs (among others) you must take care of, since they relate to what is "under" the language.
The first is that std::string is a template, and hence it is instantiated in every translation unit. If the translation units are all linked into the same module (exe or dll), the linker will resolve identical functions as the same code, and inconsistent code (the same function with different bodies) is eventually treated as an error.
But if they are linked into different modules (an exe and a dll), there is nothing (compiler or linker) in common. So, depending on how the modules were compiled, you may have different implementations of the same class with different members and memory layout (for example, one may have debugging or profiling features added that the other has not). Accessing an object created on one side with methods compiled on the other side, if you have no other way to guarantee implementation consistency, may end in tears.
The second (more subtle) problem relates to allocation/deallocation of memory: because of the way Windows works, every module can have a distinct heap, but standard C++ does not specify how new and delete keep track of which heap an object comes from. So if the string buffer is allocated in one module and then moved to a string instance in another module, you risk (upon destruction) giving the memory back to the wrong heap. (It depends on how new/delete and malloc/free are implemented with respect to HeapAlloc/HeapFree; this merely relates to the level of "awareness" the STL implementation has of the underlying OS. The operation is not itself destructive, it just fails, but it leaks memory from the origin's heap.)
All that said, it is not impossible to pass a container. It is just up to you to guarantee a consistent implementation on both sides, since the compiler and linker have no way to cross-check.

Putting all code of a module behind 1 interface. Good idea or not?

I have several modules (mainly C) that need to be redesigned (using C++). Currently, the main problems are:
many parts of the application rely on the functions of the module
some parts of the application might want to overrule the behavior of the module
I was thinking about the following approach:
redesign the module so that it has a clear modern class structure (using interfaces, inheritance, STL containers, ...)
writing a global module interface class that can be used to access any functionality of the module
writing an implementation of this interface that simply maps the interface methods to the correct methods of the correct class
Other modules in the application that currently directly use the C functions of the module, should be passed [an implementation of] this interface. That way, if the application wants to alter the behavior of one of the functions of the module, it simply inherits from this default implementation and overrules any function that it wants.
An example:
Suppose I completely redesign my module so that I have classes like: Book, Page, Cover, Author, ... All these classes have lots of different methods.
I make a global interface, called ILibraryAccessor, with lots of pure virtual methods
I make a default implementation, called DefaultLibraryAccessor, that simply forwards all methods to the correct method of the correct class, e.g.
DefaultLibraryAccessor::printBook(book) calls book->print()
DefaultLibraryAccessor::getPage(book,10) calls book->getPage(10)
DefaultLibraryAccessor::printPage(page) calls page->print()
Suppose my application has 3 kinds of windows
The first one allows all functionality and as an application I want to allow that
The second one also allows all functionality (internally), but from the application I want to prevent printing separate pages
The third one also allows all functionality (internally), but from the application I want to prevent printing certain kinds of books
When constructing the window, the application passes an implementation of ILibraryAccessor to the window
The first window will get the DefaultLibraryAccessor, allowing everything
I will pass a special MyLibraryAccessor to the second window, and in MyLibraryAccessor, I will overrule the printPage method and let it fail
I will pass a special AnotherLibraryAccessor to the third window, and in AnotherLibraryAccessor, I will overrule the printBook method and check the type of book before I will call book->print().
The advantage of this approach is that, as shown in the example, an application can overrule any method it wants to overrule. The disadvantage is that I get a rather big interface, and the class structure is completely lost for all modules that want to access this other module.
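For reference, a rough sketch of what this looks like in code (names taken from the example above, method set trimmed down):
class ILibraryAccessor
{
public:
    virtual ~ILibraryAccessor() {}
    virtual void  printBook(Book* book) = 0;
    virtual Page* getPage(Book* book, int number) = 0;
    virtual void  printPage(Page* page) = 0;
};

class DefaultLibraryAccessor : public ILibraryAccessor
{
public:
    void  printBook(Book* book) override           { book->print(); }
    Page* getPage(Book* book, int number) override { return book->getPage(number); }
    void  printPage(Page* page) override           { page->print(); }
};

// Passed to the second window: identical, except printing single pages fails.
class MyLibraryAccessor : public DefaultLibraryAccessor
{
public:
    void printPage(Page* /*page*/) override { /* refuse / report an error */ }
};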
Good idea or not?
You could represent the class structure with nested interfaces. E.g. instead of DefaultLibraryAccessor::printBook(book), have DefaultLibraryAccessor::Book::print(book). Otherwise it looks like a good design to me.
Maybe look at the design pattern called "Facade". Use one facade per module. Your approach seems good.
ILibraryAccessor sounds like a known anti-pattern, the "god class".
Your individual windows are probably better off inheriting and overriding at Book/Page/Cover/Author level.
The only thing I'd worry about is a loss of granularity, partly addressed by suszterpatt previously. Your implementations might end up being rather heavyweight and inflexible. If you're sure that you can predict the future use of the module at this point then the design is probably ok.
It occurs to me that you might want to keep the interface fine-grained, but find some way of injecting this kind of display-specific behaviour rather than trying to incorporate it at top level.
If you have n methods in your interface class, and m behaviors per method, you get m*(nC1 + nC2 + nC3 + ... + nCn) implementations of your interface (I hope I got my math right :) ). Compare this with the m*n implementations you need if you were to have a single interface per function. And that method has added flexibility, which is more important. So, no - I don't think a single interface would do. But you don't have to be extreme about it.
EDIT: I am sure the math is wrong. :(

Reusing interfaces throughout your application

I am currently busy refactoring big parts of my application. The main purpose is to remove as many dependencies as possible between the different modules. I now stumble on the following problem:
In my application I have a GUI module that defines an interface IDataProvider. The interface needs to be implemented by the application and is used to 'provide data' to the GUI module. E.g. a data grid can be given this IDataProvider and use it to loop over all the instances that should be shown in the data grid, and get their data.
Now I have other modules (in fact quite a few more) that all need something similar (like a reporting module, a database integration module, a mathematical solver module, ...). At this moment I can see 2 things I can do:
I could move IDataProvider from the GUI layer to a much lower-level layer and reuse this same interface in all the other modules.
This has the advantage that it becomes easier for the application to use all the modules (it only has to implement a data provider once).
The disadvantage is that I introduce a dependency between the modules and the central IDataProvider. If someone starts to extend IDataProvider with additional methods needed for one module, it also starts to pollute the other modules.
The other alternative is to give every module its own data provider, and force the application to implement all of them if it wants to use all the modules.
The advantage is that the modules are not dependent on a common part
The disadvantage is that I end up with IGridDataProvider, IReportDataProvider, IDatabaseDataProvider, ISolverDataProvider.
What's the best approach to use? Is it acceptable to make all modules dependent on the same common interface if they require [almost or completely] the same kind of interface?
If I use the same IDataProvider interface, can this give nasty problems in the future (which I am not aware of at this moment)?
Why don't you do an intermediate implementation? Have some class implement recurring parts of IDataProvider (as in the 1st case) in a factored-out library (or other layer). Also, everyone is required to "implement" their own IDataProvider (as in the 2nd case). Then, you can re-use your IDataProvider implementation all over the place and add specific methods in custom classes by creating a derived class...
I.e.:
// Common module.
class BasicDataProvider : public IDataProvider
{
public:
    // common overrides...
};

// For modules requiring no specific methods...
typedef BasicDataProvider ReportDataProvider;

// Database module requires "special" handling.
class DatabaseDataProvider : public BasicDataProvider
{
public:
    // custom overrides...
};
There is an alternative to the disadvantage you cite for moving IDataProvider to a lower-level layer.
A module that wants an extended interface could put those extensions in its own sub-interface of IDataProvider. You could encourage this by pro-actively creating those sub-interfaces.
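For example (a sketch; the report-specific methods are made up):
#include <string>

// Extension interface owned by the reporting module; every other module keeps
// depending only on the plain IDataProvider.
class IReportDataProvider : public IDataProvider
{
public:
    virtual std::string getReportTitle() const = 0;
    virtual int getReportRowCount() const = 0;
};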
I wouldn't mind having multiple modules depend on one interface, even if none of them uses all of the methods the interface publishes. You could also think more in terms of what a part of the interface means, instead of which module it is intended for. Most of the modules you mention only need read access, so you could separate it that way and have another interface for write access, etc.
The data layer doesn't need to know what the data is used for(which is the job of the presentation layer). It only needs to know how to return it and how to modify it.
Moreover, there's absolutely no problem in moving the data provider (which could also be labeled a controller) to a lower level, because it's probably already implementing some business logic (like data consistency) which has nothing to do with the UI.
If you're worried that additional methods would be applied to an interface you can use an Adaptor pattern. That is:
class myFoo {
public:
    virtual Bar getBar() = 0;
};
and in the other module:
class myBaz {
public:
    virtual Bar getBar() = 0;
};
Then to use one with the other:
class MyAdaptor : public myBaz {
public:
    MyAdaptor(myFoo* _input) {
        m_Foo = _input;
    }
    Bar getBar() { return m_Foo->getBar(); }
private:
    myFoo* m_Foo;
};
That way you implement everything against your myBaz interface and only need to supply the glue in one place. myFoo can have as many additional methods added to it as its owners want; the rest of your application need not know or care about them.

Overriding / modifying C++ classes using DLLs

I have a project with a large codebase (>200,000 lines of code) I maintain ("The core").
Currently, this core has a scripting engine that consists of hooks and a script manager class that calls all hooked functions (registered via DLLs) as the corresponding events occur. To be quite honest I don't know exactly how it works, since the core is mostly undocumented and spans several years and a multitude of developers (who are, of course, absent). An example of the current scripting engine is:
void OnMapLoad(uint32 MapID)
{
    if (MapID == 1234)
    {
        printf("Map 1234 has been loaded");
    }
}

void SetupOnMapLoad(ScriptMgr *mgr)
{
    mgr->register_hook(HOOK_ON_MAP_LOAD, (void*)&OnMapLoad);
}
A supplemental file named setup.cpp calls SetupOnMapLoad with the core's ScriptMgr.
This method is not what I'm looking for. To me, the perfect scripting engine would be one that will allow me to override core class methods. I want to be able to create classes that inherit from core classes and extend on them, like so:
// In the core:
class Map
{
    uint32 m_mapid;
    void Load();
    //...
};

// In the script:
class ExtendedMap : public Map
{
    void Load()
    {
        if (m_mapid == 1234)
            printf("Map 1234 has been loaded");
        Map::Load();
    }
};
And then I want every instance of Map in both the core and scripts to actually be an instance of ExtendedMap.
Is that possible? How?
The inheritance is possible. I don't see a solution for replacing the instances of Map with instances of ExtendedMap.
Normally, you could do that if you had a factory class or function that is always used to create a Map object, but this is a matter of existing (or nonexistent) design.
The only solution I see is to search in the code for instantiations and try to replace them by hand. This is a risky one, because you might miss some of them, and it might be that some of the instantiations are not in the source code available to you (e.g. in that old DLL).
Later edit
This method overriding also has a side effect when the object is used polymorphically, because Map::Load is not virtual.
Example:
Map* pMyMap = new ExtendedMap;
pMyMap->Load(); // This will call Map::Load, and not ExtendedMap::Load.
This sounds like a textbook case for the "Decorator" design pattern.
Although it's possible, it's quite dangerous: the system should be open for extension (i.e. hooks), but closed for change (i.e. overriding/redefining). When inheriting like that, you can't anticipate the behaviour your client code is going to show. As you see in your example, client code must remember to call the superclass' method, which it won't :)
An option would be to create a non-virtual interface: an abstract base class that has some template methods that call pure virtual functions. These must be defined by subclasses.
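A small sketch of how that could be applied to the Map example (the hook names are made up):
class Map
{
public:
    virtual ~Map() {}

    void Load()                        // non-virtual: the core controls the sequence
    {
        OnBeforeLoad();                // customisation point
        // ... the core's actual loading work ...
        OnAfterLoad();                 // customisation point
    }

protected:
    virtual void OnBeforeLoad() = 0;   // must be defined by subclasses
    virtual void OnAfterLoad() = 0;
};

class ExtendedMap : public Map
{
protected:
    void OnBeforeLoad() override {}
    void OnAfterLoad() override
    {
        // script-specific behaviour; no need to remember to call Map::Load()
    }
};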
If you want no core Map's to be created, the script should give the core a factory to create Map descendants.
If my experience with similar systems is applicable to your situation, there are several hooks registered. So basing a solution on the pattern abstract factory will not really work. Your system is near of the pattern observer, and that's what I'd use. You create one base class with all the possible hooks as virtual members (or several one with related hooks if the hooks are numerous). Instead of registering hooks one by one, you register one object, of a type descendant of the class with the needed override. The object can have state, and replace advantageously the void* user data fields that such callbacks system have commonly.