Deciding about constructed objects at compilation time - c++

I have the following problem to solve.
I have a component A with sub-components B, C and D. Depending on the current platform configuration, CMake builds some or all of B, C and D. The generated makefiles link the executable (component A) only against the components that were built in the given CMake run: if component B was built, it is linked into the executable; if not, it is left out. The same goes for C and D.
B, C and D all provide implementations of an interface used in component A. Component A must manage the objects created by B, C and D, keep them in some map, and use the right object at the right time.
Question:
I want a simple and reliable mechanism for registering the objects implementing A's interface automatically, the same way linking works now: only the modules that were built get linked. Likewise, the objects should be registered in component A only if their component was compiled.
It is hard for me to explain, but the idea is simple: build a map of those objects at compilation time, so that only the components that were actually compiled deliver their object to the map.
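To illustrate, something along these lines is what I imagine - a sketch only, with all names made up:

    // Sketch only - all names are made up. Each sub-component registers its
    // implementation in a global map during static initialization, so only the
    // translation units that were actually compiled and linked add entries.
    #include <map>
    #include <memory>
    #include <string>

    struct IComponent {                       // interface provided by component A
        virtual ~IComponent() = default;
        virtual void doWork() = 0;
    };

    using Factory = std::unique_ptr<IComponent> (*)();

    std::map<std::string, Factory>& registry() {
        static std::map<std::string, Factory> r;   // function-local static avoids
        return r;                                   // initialization-order issues
    }

    struct Registrar {
        Registrar(const std::string& name, Factory f) { registry()[name] = f; }
    };

    // In component B's translation unit - only present if B was built:
    struct B : IComponent {
        void doWork() override {}
    };
    static Registrar registerB{"B", [] { return std::unique_ptr<IComponent>(new B); }};

What I am not sure about: if B, C and D are built as static libraries, the linker may drop the unreferenced object files together with their static registrars, so they may have to be forced in (whole-archive linking or similar).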

I have used designs similar to how Objective-C and Smalltalk implement methods.
In C++, methods are member functions and must be defined at compile time. So even though the interface can be extended with mechanisms such as the preprocessor, the same configuration must also affect any clients of the class, or they simply won't link.
So I use a message-passing system to invoke methods on objects. If A is the main class and you compile in C and D but not B, then A's message processor will only respond to messages that have handlers registered by C and D.
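A stripped-down sketch of the idea (not my actual system; all names here are invented):

    #include <functional>
    #include <map>
    #include <string>

    class MessageProcessor {
    public:
        using Handler = std::function<void(const std::string& payload)>;

        // C and D call this at startup; B never does because it isn't compiled in.
        void registerHandler(const std::string& message, Handler h) {
            handlers_[message] = std::move(h);
        }

        // Unknown messages are simply not handled.
        bool dispatch(const std::string& message, const std::string& payload) {
            auto it = handlers_.find(message);
            if (it == handlers_.end())
                return false;
            it->second(payload);
            return true;
        }

    private:
        std::map<std::string, Handler> handlers_;
    };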
This type of design does require a messaging system of some sort. There are numerous existing systems, such as Google Protocol Buffers and Apache Thrift. I chose to design my own since I wanted even more runtime configurability than most existing systems allow (many of them involve IDL compilers).
However, it did allow me to get closer to the OO realm than the mixed-paradigm language C++ typically permits.

Related

Logging of ATL class objects

I have a pretty large DLL library, which is developed in C++ using the Microsoft Active Template Library (ATL).
I'm trying to gather test data from this library during runtime, so I can build some good unit tests for it later. Since some of the classes are ridiculously big, it would be very tedious work to manually add logging to all the member functions and log the state of all hundreds of member data variables after every member function is run.
Of course, I don't need to log exactly everything. I could just log the state after the most frequent member functions have been run. However, this library needs to be ported to a more modern stack in the future so it would be good to have solid unit tests.
Is there a way to dump whole ATL objects from memory to file, which can later be easily analyzed? Are there any tools or libraries available for this kind of task?
Can I even find ATL classes that could help me with this?

Do I need to implicitly link between explicitly loaded shared libraries for interaction?

See the image below. The plugins implement an interface from the core library (QtPlugin). The concrete plugin class is exported. The plugins should be able to retrieve the concrete plugin class instance from the core and call its methods. If I want to implement this kind of interacting plugins, do I have to link the plugins against each other?
I don't know exactly what happens when symbols get resolved. As far as I can imagine, the process stores the resolved symbols, so as soon as the core library has resolved the symbols, the plugins can receive objects of other plugin classes and call methods on them, provided they have the headers. Is this true (on all platforms)?
Some generic information on where symbols get stored and who can access it would be nice too.
Generally you link against something, so plugin A links against the Core library since it needs to know about the core implementation to function. The implicit link does not exist: there is no knowledge of plugin A inside Core (and there shouldn't be), so Core does not know about plugin A or B, and that means plugins A and B won't know about each other either without linking against each other.
In this kind of model you want to keep the plugins agnostic of each other and use interfaces or abstract classes to communicate. (For example, if a plugin inherits from a Core class with some pure virtual functions, another plugin can hold a pointer to it and call functions on it without knowing the full implementation.)
Edit for comment:
In that case you could use interfaces which the plugins inherit from. In the core library you make a class called ITerminal, which has a set of virtual functions (Update, Init, Connect, Open, whatever you need) without an implementation, and plugin A can then inherit from it and give the functions implementations. That way other plugins can hold a handle to an ITerminal and call functions on it without knowing about the details of plugin A. To create it you need a factory, for example Core::CreateTerminal, which returns an ITerminal (ITerminal* object = new PluginA();). Now plugin B can call Core::CreateTerminal, which gives it a handle to an ITerminal whose implementation Core chose in this case. To expand on it, you can have plugins register themselves with Core, so Core just calls a create function in the plugin; for example plugin A could register itself as an ITerminal class with Core, and when CreateTerminal is called Core will ask that plugin to create the specific object. That way you can swap plugins in and out (have different terminals without changing Core or the other plugins).
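A rough sketch of what I mean (the exact signatures are invented; only ITerminal, CreateTerminal, PluginA and the virtual function names come from the description above):

    // core/ITerminal.h - lives in the Core library, no implementation here.
    class ITerminal {
    public:
        virtual ~ITerminal() = default;
        virtual void Init() = 0;
        virtual bool Open() = 0;
        virtual void Update() = 0;
    };

    // Plugin A implements the interface.
    class PluginA : public ITerminal {
    public:
        void Init() override { /* ... */ }
        bool Open() override { return true; }
        void Update() override { /* ... */ }
    };

    // Core owns the factory. In the extended form, plugin A registers a
    // creation function so Core never has to know the concrete type.
    namespace Core {
        using TerminalFactory = ITerminal* (*)();
        TerminalFactory g_terminalFactory = nullptr;   // set by whichever plugin registers

        void RegisterTerminal(TerminalFactory f) { g_terminalFactory = f; }

        ITerminal* CreateTerminal() {
            return g_terminalFactory ? g_terminalFactory() : nullptr;
        }
    }

    // Plugin B only ever sees ITerminal:
    //   ITerminal* terminal = Core::CreateTerminal();
    //   if (terminal) terminal->Open();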

Is there a way to implement dynamic factory pattern in c++?

The DYNAMIC FACTORY pattern describes how to create a factory that
allows the creation of unanticipated products derived from the same
abstraction by storing the information about their concrete type in
external metadata
from : http://www.wirfs-brock.com/PDFs/TheDynamicFactoryPattern.pdf
The PDF says:
Configurability. We can change the behavior of an application by just changing its configuration information. This can be done without the need to change any source code (just change the descriptive information about the type in the metadata repository) or to restart the application (if caching is not used; if caching is used, the cache will need to be flushed).
It is not possible to introduce new types to a running C++ program without modifying source code. At the very least, you'd need to write a shared library containing a factory to generate instances of the new type: but doing so is expressly ruled out by the PDF:
Extensibility / Evolvability. New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
This is not practical in C++.
Still, the functionality can be achieved by using metadata to guide some code-writing function, then invoking the compiler (whether as a subprocess or a library) to create a shared library. This is pretty much what the languages mentioned in the PDF are doing when they use reflection and metadata to ask the virtual machine to create new class instances: it's just more normal in those language environments to have bits of the compiler/interpreter hanging around in memory, so it doesn't seem such a big step.
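For example, a hedged sketch of just the loading side on POSIX (the symbol name create_product is an assumed convention, not something from the PDF):

    #include <dlfcn.h>          // POSIX dynamic loading
    #include <memory>
    #include <stdexcept>
    #include <string>

    struct Product { virtual ~Product() = default; };

    // Load a just-built shared library whose path comes from the metadata,
    // and pull a factory function out of it.
    std::unique_ptr<Product> loadProduct(const std::string& libPath) {
        void* lib = dlopen(libPath.c_str(), RTLD_NOW);
        if (!lib)
            throw std::runtime_error(dlerror());

        using CreateFn = Product* (*)();
        auto create = reinterpret_cast<CreateFn>(dlsym(lib, "create_product"));
        if (!create)
            throw std::runtime_error("create_product not found in " + libPath);

        return std::unique_ptr<Product>(create());
    }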
Yes...
Look at the factory classes in the Qtilities Qt library.
@TonyD, regarding
We can change the behavior of an application by just changing its configuration information.
It is 100% possible if you interpret the sentence in another way. What I read and understand is that you change a configuration file (XML in the doc) that gets loaded, in order to change the behaviour of the application. So perhaps your application has two loggers, one logging to file and one to a GUI. The config file can then be edited to choose one or both to be used. Thus nothing in the application changes, but its behaviour does. The requirement is that anything you can configure in the file is already available in the code, so asking it to log over the network will not work since that is not implemented.
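To make the logger example concrete - a sketch only, where the registry and the way the config is read are invented; only the file/GUI loggers come from what I wrote above:

    #include <memory>
    #include <string>
    #include <vector>

    struct ILogger {
        virtual ~ILogger() = default;
        virtual void log(const std::string& msg) = 0;
    };
    struct FileLogger : ILogger { void log(const std::string&) override { /* write to file */ } };
    struct GuiLogger  : ILogger { void log(const std::string&) override { /* update the GUI */ } };

    // Only types that exist in the code can be named in the config file.
    std::unique_ptr<ILogger> makeLogger(const std::string& name) {
        if (name == "file") return std::unique_ptr<ILogger>(new FileLogger);
        if (name == "gui")  return std::unique_ptr<ILogger>(new GuiLogger);
        return nullptr;                      // e.g. "network" - not implemented
    }

    // The names come from the loaded configuration file (XML or otherwise).
    std::vector<std::unique_ptr<ILogger>> makeLoggers(const std::vector<std::string>& configured) {
        std::vector<std::unique_ptr<ILogger>> active;
        for (const auto& name : configured)
            if (auto logger = makeLogger(name))
                active.push_back(std::move(logger));
        return active;
    }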
New product types should be easily added without requiring neither a new factory class nor modifying any existing one.
Yes, that sounds a bit impossible. I will settle for the ability to add new types without having to change the original application: one should be able to add them using plugins or some other method and leave the application/factory/existing classes intact and unchanged.
All of the above is supported by the example provided. Although Qtilities is a Qt library, the factories are not Qt specific.

What's the point of _MERGE_PROXYSTUB?

I have generated an ATL COM object using VS2008 and the code contains references to a definition called _MERGE_PROXYSTUB (because I chose the 'Merge proxy/stub' option when I initially ran the wizard.)
What is the point of a proxy/stub? If I don't select the merge option then I get a separate MyControlPS.DLL instead - when would this ever be used?
FWIW the control seems to register and work fine if I remove all the code surrounded by the _MERGE_PROXYSTUB defines. A debug build doesn't even define _MERGE_PROXYSTUB and it still works OK.
So, can I do without a proxy/stub?
You need a proxy/stub if you want your COM object to be called from an application using a different threading model than your COM object.
For example, we have a plug-in that gets loaded by an application that uses a particular threading model (can't remember which), but our COM object is multithreaded apartment (MTA) - so the proxy/stub is required to marshal the data between the objects when a function call is made, while still adhering to the rules of the threading model.
If these rules are broken, then COM will either throw an exception or return a failure HRESULT such as RPC_E_WRONG_THREAD.
If you don't check the merge proxy/stub option, then Visual Studio produces a separate project for the proxy/stubs, which gets built into a separate DLL. This makes deployment more difficult if the proxy/stubs are required, but you can basically just ignore them if you are not affected by threading-model issues.
So you can do without proxy/stubs if the application calling the COM object is using the same threading model as your object.
Larry Osterman provides a readable introduction to threading models on his blog.
Also, if your interfaces contain only type-library-friendly types (BSTR, VARIANT, etc) and appear in the library block of your IDL, you can elect to have them "type library marshalled" meaning that a system-provided proxy/stub uses the meta-data from the type library.
When interfaces are put inside the library block, and DllRegisterServer is customized to register the type library (pass TRUE to XxxModule::DllRegisterServer, if I recall correctly) your interfaces will be marshalled by the system, if necessary, as described by John Sibly.
At that point, the proxy/stub isn't even used, so _MERGE_PROXYSTUB has no effect.

How to implement monkey patch in C++?

Is it possible to implement monkey patching in C++?
Or any other similar approach to that?
Thanks.
Not portably, and given the dangers for larger projects you had better have a good reason.
The preprocessor is probably the best candidate, due to its ignorance of the language itself. It can be used to rename attributes, methods and other symbol names - but the replacement is global, at least for a single #include or sequence of code.
I've used that before to beat "library diamonds" into submission - libraries A and B both import an OS library S, but in different ways, so that some symbols of S would be identically named but different. (Namespaces were out of the question, as they'd have had much more far-reaching consequences.)
Similarly, you can replace symbol names with compatible-but-superior classes.
E.g. in VC, #import generates an import library that uses _bstr_t as a type adapter. In one project I've successfully replaced these _bstr_t uses with a compatible-enough class that interoperated better with other code, just by #define'ing _bstr_t as my replacement class for the #import.
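Roughly, the trick looked like this (header and library names invented, and it is MSVC-specific; treat it as a sketch rather than verified code):

    // Hypothetical header with the compatible-enough replacement class.
    #include "MyBstrAdapter.h"

    // Every use of _bstr_t in the wrapper code that #import generates and
    // includes below now refers to MyBstrAdapter instead.
    #define _bstr_t MyBstrAdapter
    #import "SomeComLibrary.tlb"        // MSVC-specific
    #undef _bstr_t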
Patching the virtual method table - either replacing the entire VMT or individual methods - is something else I've come across. It requires a good understanding of how your compiler implements VMTs. I wouldn't do that in a real-life project, because it depends on compiler internals, and you don't get any warning when things have changed. It's a fun exercise to learn about the implementation details of C++, though. One application would be switching at runtime from an initializer/loader stub to a full - or even data-dependent - implementation.
Generating code on the fly is common in certain scenarios, such as forwarding/filtering COM Interface calls or mapping OS Window Handles to library objects. I'm not sure if this is still "monkey-patching", as it isn't really toying with the language itself.
To add to other answers, consider that any function exposed through a shared object or DLL (depending on platform) can be overridden at run-time. Linux provides the LD_PRELOAD environment variable, which can specify a shared object to load after all others, which can be used to override arbitrary function definitions. It's actually about the best way to provide a "mock object" for unit-testing purposes, since it is not really invasive. However, unlike other forms of monkey-patching, be aware that a change like this is global. You can't specify one particular call to be different, without impacting other calls.
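A minimal sketch of the LD_PRELOAD approach, overriding time() as an arbitrary example (it only affects dynamically linked calls):

    // faketime.cpp - build as a shared object and preload it:
    //   g++ -shared -fPIC -o libfaketime.so faketime.cpp
    //   LD_PRELOAD=./libfaketime.so ./program_under_test
    #include <ctime>

    // Same symbol as the libc function; extern "C" prevents name mangling.
    extern "C" time_t time(time_t* out) {
        time_t fixed = 1234567890;      // deterministic value for the tests
        if (out)
            *out = fixed;
        return fixed;
    }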
Considering the "guerilla third-party library use" aspect of monkey-patching, C++ offers a number of facilities:
const_cast lets you work around zealous const declarations.
#define private public prior to header inclusion lets you access private members.
subclassing and using Parent::protected_field lets you access protected members (see the sketch after this list).
you can redefine a number of things at link time.
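A hedged sketch of the first three tricks in the list above; ThirdPartyWidget and its members are invented:

    #define private public              // nasty: exposes private members of the
    #include "third_party_widget.h"     // hypothetical third-party header
    #undef private

    #include <string>

    void pokeAround(const ThirdPartyWidget& w) {
        // const_cast around an overly zealous const declaration
        ThirdPartyWidget& writable = const_cast<ThirdPartyWidget&>(w);
        writable.reset();                          // assumed non-const member function

        // thanks to the #define above, a formerly private member is accessible
        std::string name = writable.secret_name_;  // assumed private member
    }

    // protected access via subclassing
    struct WidgetPeeker : ThirdPartyWidget {
        using ThirdPartyWidget::protected_field;   // assumed protected member
    };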
If the third-party content you're working around is provided already compiled, though, most of the things feasible in dynamic languages aren't as easy, and often aren't possible at all.
I suppose it depends what you want to do. If you've already linked your program, you're gonna have a hard time replacing anything (short of actually changing the instructions in memory, which might be a stretch as well). However, before this happens, there are options. If you have a dynamically linked program, you can alter the way the linker operates (e.g. LD_LIBRARY_PATH environment variable) and have it link something else than the intended library.
Have a look at valgrind, for example, which replaces (among a lot of other magic stuff it's dealing with) the standard memory allocation mechanisms.
As monkey patching refers to dynamically changing code, I can't imagine how this could be implemented in C++...