Using boost-di with configuration file and shared libraries - c++

I am planning a C++ project using dependency injection via Boost.DI. In my opinion I will also need a mechanism for dynamically loading libraries, to really benefit from dependency injection.
Therefore I am considering Boost.DLL as a platform-independent shared library mechanism.
For dependency configuration I am thinking about using INI files via Boost.PropertyTree.
Do you see any major drawback in this approach?
Or is there another platform independent mechanism/library?
Thanks for your opinions
Andreas

There is a mechanism to decide at runtime which implementation to use, but because of Boost.DI's compile-time approach it seems, by design, not intended for use with dynamic libraries.
For pure compile-time injection it looks very smart and pleasant to use; for my problem, it does not seem to be the right solution.
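For illustration, here is a minimal sketch of what such pure compile-time wiring looks like with Boost.DI (ilogger, console_logger and app are invented example types, and the exact bind syntax differs slightly between [Boost].DI releases):

    #include <boost/di.hpp>
    #include <iostream>
    #include <memory>

    // Invented example types, used only to illustrate the wiring.
    struct ilogger {
        virtual ~ilogger() = default;
        virtual void log(const char* msg) = 0;
    };

    struct console_logger : ilogger {
        void log(const char* msg) override { std::cout << msg << "\n"; }
    };

    struct app {
        explicit app(std::shared_ptr<ilogger> logger) : logger_(std::move(logger)) {}
        void run() { logger_->log("running"); }
        std::shared_ptr<ilogger> logger_;
    };

    int main() {
        namespace di = boost::di;
        // The binding below is resolved entirely at compile time; there is
        // no runtime registry that a config file or a dynamically loaded
        // module could feed bindings into.
        const auto injector = di::make_injector(
            di::bind<ilogger>().to<console_logger>());
        injector.create<app>().run();
    }

Everything is wired by the compiler, which is exactly the limitation described above: a module loaded at runtime has no way to contribute its bindings.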

While developing our C++ projects, I had to find a C++ DI framework.
After some investigation and evaluation based on our requirements (detailed below), we found that the Hypodermic framework suits our needs.
Our criteria (besides the basic one that the framework must recursively inject instances!) were:
Non-intrusive (must not require dirtying existing code with 'decoration' macros, as Google Fruit and some other libraries do)
Support for singleton instances
Support for configurable instantiation (using a lambda)
Support for container/injector composition
A generic, shareable container/injector, in order to support modules (static or dynamic)
By the way, I agree with Andreas's answer that Boost-DI is not suitable, because it is heavily templated, and with the answer to this question:
Boost-DI does not allow real composition of containers at run-time, and not even at compile-time unless you share the header files with the root injector. That violates the 'privacy' of modules (you need to publish them to be able to inject them).
Hypodermic lets you configure containers and create sub-containers. The container is type-agnostic (not a template), so it is possible to share it at run-time.
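For example, a minimal registration/resolution sketch with Hypodermic might look like this (ILogger and ConsoleLogger are invented example types; the ContainerBuilder/resolve calls follow Hypodermic's documented usage):

    #include <Hypodermic/Hypodermic.h>
    #include <iostream>
    #include <string>

    // Invented example types, used only to illustrate the container.
    struct ILogger {
        virtual ~ILogger() = default;
        virtual void log(const std::string& message) = 0;
    };

    struct ConsoleLogger : ILogger {
        void log(const std::string& message) override { std::cout << message << "\n"; }
    };

    int main() {
        Hypodermic::ContainerBuilder builder;

        // Register the concrete type against its interface, as a singleton.
        builder.registerType<ConsoleLogger>().as<ILogger>().singleInstance();

        auto container = builder.build();

        // The container is not a template, so it can be handed across
        // module boundaries, and sub-containers can be built on top of it.
        auto logger = container->resolve<ILogger>();
        logger->log("resolved through the container");
    }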
Incidentally, this also solves this question.

Related

Is it common practice to abstract library dependencies from implementation?

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild.) We are also using POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns - what if we want to use a different lib/framework like Qt or Boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP.) Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big, or go home. Now would be the time to refactor and it really shouldn't be that bad especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the POCO namespace all over the place. I argue that we should pick a library/framework/toolkit and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is FreeSWITCH (which provides its own interface to APR).
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, boost, qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use boost, but a different, incompatible, version, they would need to spend time to resolve the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) will need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries and established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you; a generic sketch of the idea appears at the end of this answer.
A classic example of this "least common interface" would be the semantics of rename in a filesystem abstraction: Windows cannot implement an atomic rename over an existing file, so your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever the next clever technique is, the mutex in tests, and decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or that are built-in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock etc., that precisely mimic the pthread_* functions and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers; you usually also want to inject the dependencies into your code.
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete Posix implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.
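To make point (1) concrete, a deliberately restricted mutex abstraction might look like the sketch below (the names are invented, and std::mutex merely stands in for whichever backend is currently chosen):

    #include <mutex>

    // A deliberately minimal abstraction: only the operations that every
    // backend we evaluated (pthreads, Windows, a third-party library) can
    // support. No timed locks, no recursive locking - those were not
    // common to all candidates, so they are not in the interface.
    class Mutex {
    public:
        Mutex() = default;
        Mutex(const Mutex&) = delete;
        Mutex& operator=(const Mutex&) = delete;

        void lock()   { m_impl.lock(); }
        void unlock() { m_impl.unlock(); }

    private:
        // Current backend. This member could be a pthread_mutex_t or a
        // Windows CRITICAL_SECTION instead, without the interface changing.
        std::mutex m_impl;
    };

    // A scoped guard keeps the rest of the code independent of the
    // backend's locking conventions.
    class ScopedLock {
    public:
        explicit ScopedLock(Mutex& m) : m_mutex(m) { m_mutex.lock(); }
        ~ScopedLock() { m_mutex.unlock(); }
    private:
        Mutex& m_mutex;
    };

The point is what is absent: code written against this interface cannot accidentally rely on a feature that some future backend lacks.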

Advantage of wrapping classes in DLLs

I've just finished a phase in my project where I wrote a small infrastructure to carry out a certain task, made of a core class with several auxiliary classes.
The C++'ness is quite basic - single inheritance, some STL containers, that's it.
No threads - client runs the show.
What I would like to do now is wrap it all up in a DLL, version it, and use
it as a standalone unit. I'd like that separation in order to track changes and
development better, and perhaps for other projects as well.
As I don't have experience with classes in DLLs, I would like to hear yours:
What's your approach to this problem?
Specifically:
Is it worth the trouble?
Do you do that often or not at all?
What about compatibility issues (like clients compiled using a different compiler)?
I'm not really asking for a debate (though that's the probable outcome), but rather an advice from experience.
Thanks for your time.
I find it hard to see any benefit with this. I can see plenty of problems:
No type checking across a DLL boundary. Any version mismatches will result in runtime failures, harder to detect than compile time failures.
Extra deployment headaches. You may be tempted to update some but not all modules and so deal with complex dependencies.
All clients that want to use these DLLs must use the same compiler.
Only make this change if you can identify benefits that outweigh the negatives.
C++ code is not binary compatible between compilers, so it's generally no use creating DLLs that expose C++ classes which aren't built as part of the project that uses them.
If you want to create a Windows DLL with a well-defined object-oriented interface that the rest of the world can use, make it a COM inproc server.
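Full COM aside, the essence of why that sidesteps the compiler-compatibility problem is that clients only ever see pure-virtual interfaces and obtain and release objects through functions with C linkage. A stripped-down sketch of that idiom (not actual COM - no IUnknown, GUIDs or registration - and all names are invented):

    // calculator_api.h - the only header a client ever sees. No data
    // members, no inline logic: just a vtable layout and a C entry point.
    struct ICalculator {
        virtual int add(int a, int b) = 0;
        // Destruction happens inside the module that created the object,
        // so the interface exposes release() instead of a public destructor.
        virtual void release() = 0;
    protected:
        ~ICalculator() = default;
    };

    // Exported from the DLL with C linkage (e.g. via __declspec(dllexport)
    // in the implementation file, or a .def file), so the symbol name is
    // unmangled and callable from code built with a different compiler.
    extern "C" ICalculator* createCalculator();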

How to dynamically load C++ objects and use them through a wrapper interface

I have a multithreaded dynamic library which exposes a basic API and which I use in a
couple of applications. Currently I'm using a custom (as in legacy) implementation of some basic threading and synchronization primitives, with which I'm not happy at all: it does not provide much flexibility or many features, and it's also a hassle to maintain (it has implementations for both Linux and Windows).
I would like to replace that with an existing threading library, but I would also like to provide some flexibility. I want to be able to try out a number of libraries to see how they perform on the different platforms I build my library for (I would like to try boost::thread, Poco::Thread and the new C++0x thread implementation), or even let the user to whom I provide my library plug in their own custom thread implementation if they want, so that the library and the user's app can use the same threading infrastructure. Ideally I would have a config file, or something along those lines, to let the user specify the desired implementation or fall back to a provided default.
I was thinking of the following:
making a thin wrapper (pimpl style) to be used inside my library that will use a dynamic class loader to fit the desired implementation at runtime. But how could I handle the case of C++0x threads? Those are in the standard library, which will already be linked against my library, so there is no point in custom-loading them at runtime.
using a dynamic loader coupled with the Prototype design pattern. But the pattern requires the implementation of a clone() method, which would mean changing the threading library's code. I might have poorly understood the pattern and I might be mistaken about this, so please correct me if I'm wrong.
As you can see I don't have many ideas to work with right now(but it's a start) so any pointers on the following would be of great help:
Is providing such a functionality a good idea? Do you see any caveats?
Is the dynamic loading facility a feasible idea? What are the downsides?
Any pointers/links on how to properly implement that?
If I'm going the "thin wrapper" way would it be a good idea to expose it as part of my library's API?
Are there any alternatives/patterns to achieve the same kind of functionality(I mean to achieve the same result but without dynamic loading)?
What is the benefit of providing the ability to dynamically load or delay-load a threading solution? Ideally you would pick a threading solution and create a library which has an API interface; if the solution was later found to be insufficient, you could write a new library using the same interface but a different underlying solution. I would even consider statically linking such a library, although a DLL is fine as well. I wouldn't bother with the ability to make it interchangeable at runtime or anything like that.
I highly recommend Boost threads. Cross-platform and based on POSIX, it's very easy to use in a variety of ways. I believe that C++0x threads are loosely based on this solution; however, since C++0x is not finalized or fully supported by all compilers yet, I would only consider it as a replacement for Boost in the future.
I think that by providing a wrapper for the threading library and initializing it at runtime, you are limiting yourself to the lowest common denominator. That is, your interface to the thread library calls will need to consist of operations that are implemented by all of the libraries.
If this is acceptable, then you should look into using the Adapter pattern to handle calls into the thread library that is chosen by the user. Basically, you would use the config file to determine which thread library is in use, and then wrap it in an adapter class that implements your threading-operations interface and delegates the calls to the appropriate methods on the underlying library. You could also use the adapter to make up for unimplemented functionality in a certain library (e.g. by implementing reader/writer locks using the mutexes provided by a library).
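A minimal sketch of such an adapter layer might look like this (all names are invented; std::mutex stands in for whichever library is being adapted, and a Boost or POCO backend would implement the same interfaces):

    #include <memory>
    #include <mutex>
    #include <string>

    // Lowest-common-denominator operations every candidate library supports.
    struct IMutex {
        virtual ~IMutex() = default;
        virtual void lock() = 0;
        virtual void unlock() = 0;
    };

    struct IThreadingBackend {
        virtual ~IThreadingBackend() = default;
        virtual std::unique_ptr<IMutex> createMutex() = 0;
    };

    // Adapter over the standard library; a BoostBackend or PocoBackend
    // would implement the same interfaces on top of boost::mutex or
    // Poco::Mutex respectively.
    class StdMutex : public IMutex {
    public:
        void lock() override   { m_mutex.lock(); }
        void unlock() override { m_mutex.unlock(); }
    private:
        std::mutex m_mutex;
    };

    class StdBackend : public IThreadingBackend {
    public:
        std::unique_ptr<IMutex> createMutex() override {
            return std::make_unique<StdMutex>();
        }
    };

    // Chosen once at startup, e.g. from the config file mentioned above.
    std::unique_ptr<IThreadingBackend> makeBackend(const std::string& name) {
        // Only the std backend is sketched here; other names would map to
        // the Boost or POCO adapters.
        (void)name;
        return std::make_unique<StdBackend>();
    }

Swapping libraries then means writing one new adapter; the rest of the code only ever sees the interfaces.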

What's the simplest way to write portable dynamically loadable libraries in C++?

I'm working on a project which has multiple similar code paths which I'd like to separate from the main project into plugins. The project must remain cross-platform compatible, and all of the dynamic library loading APIs I've looked into are platform specific.
What's the simplest way to create a dynamic library loading system which can be compiled and run on multiple operating systems without extra modification of the code? Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
Thanks.
You will have to use platform-dependent code for the loading system. Loading a DLL on Windows is different from loading a shared object on Unix. But with a couple of #ifdefs you will be able to have mostly the same code base in the loader.
Having said that, I think you can make your plugins platform-independent. Of course, you will have to compile them for every platform, but the code will be 99% the same.
Dynamic library loading on Windows and Unix/Linux works with three functions: a pair of functions to load/unload the libraries, and another function to get the address of a function in the library. You can easily write a wrapper around these three functions to provide cross-operating-system support.
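A minimal sketch of such a wrapper, assuming only Windows and POSIX targets (error handling omitted for brevity):

    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <dlfcn.h>
    #endif

    // The three operations every platform provides: load, look up, unload.
    using LibraryHandle = void*;

    LibraryHandle loadLibrary(const char* path) {
    #ifdef _WIN32
        return static_cast<LibraryHandle>(LoadLibraryA(path));
    #else
        return dlopen(path, RTLD_NOW);
    #endif
    }

    void* getSymbol(LibraryHandle handle, const char* name) {
    #ifdef _WIN32
        return reinterpret_cast<void*>(
            GetProcAddress(static_cast<HMODULE>(handle), name));
    #else
        return dlsym(handle, name);
    #endif
    }

    void unloadLibrary(LibraryHandle handle) {
    #ifdef _WIN32
        FreeLibrary(static_cast<HMODULE>(handle));
    #else
        dlclose(handle);
    #endif
    }

Everything above this layer - plugin discovery, symbol lookup logic, error handling - can then be written once for all platforms.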
Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
A few things off the top of my head:
Avoid static objects in the dynamic libraries. Provide proper initialization methods/functions to allocate the objects. Issues that occur while a library is being loaded by the OS (which is when the constructors of static objects are called) are very hard to debug - second only to multi-threading issues.
Interface headers must not contain code: no inline methods, no preprocessor defines. That is to avoid tainting the application with code from a particular version of the library, which would make it impossible to replace the library at a later time.
Interface headers must not contain the implementation classes themselves - only abstract classes and factory functions. As with the previous point, this avoids making the application depend on a particular version of the classes. The factories are needed as the way for the user application to instantiate the concrete implementation classes.
When introducing a new version of an interface, to keep things somewhat backward compatible, do not modify the existing abstract class - create a new abstract class inherited from it and add the new methods there, then change the factory to return the new version. (Recall Microsoft's IInterface, IInterface2, IInterface3 and so on.) In the implementation, use the newest version of the abstract class; through polymorphism, that makes the implementation backward compatible with the older interface versions. (This obviously calls for periodic interface maintenance and clean-ups, to remove the old cruft.)
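Putting the last three points together, a shared interface header might look like this sketch (the names are invented; the header holds only declarations):

    // codec_api.h - the only header shared with applications. It contains
    // no implementation classes and no preprocessor logic: only abstract
    // classes and factory declarations.
    struct ICodec {
        virtual ~ICodec() = default;
        virtual int encode(const char* in, char* out, int size) = 0;
    };

    // Version 2 extends version 1 instead of modifying it, so older
    // clients that only know ICodec keep working against the same
    // implementation object.
    struct ICodec2 : ICodec {
        virtual int decode(const char* in, char* out, int size) = 0;
    };

    // Factory functions with C linkage: the application never names the
    // concrete implementation class, and the unmangled symbols are easy
    // to look up when the library is loaded dynamically.
    extern "C" ICodec2* createCodec();
    extern "C" void destroyCodec(ICodec2* codec);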
Take a look at the Boost.Extension library; it is not really part of Boost, but you can find it in the sandbox. It is also somewhat frozen, but overall the library is stable and easy to use.

Can I un-singleton a singleton

I want to use a library that makes heavy use of singletons, but I actually need some of the manager classes to have multiple instances. The code is open source, but changing the code myself would make updating the lib to a newer version hard.
What tricks are there, if any, to force creation of a new instance of a singleton or even the whole library?
Find out who wrote the library in the first place, visit their home address and beat them to a bloody pulp, preferably with a book about software design. :)
Apart from that: maintain your changes to the original library as a patch set so you can (more or less) easily apply it to each new version. Also, try to get your changes into the library so you don’t have to maintain the patches yourself. ;)
Under Windows you can encapsulate each instance in a different DLL.
The answer is "it depends". A good example is the C++ heap used by Microsoft's C runtime. This is implemented as a singleton, of course. Now, when you statically link the CRT into multiple DLLs, you end up with multiple copies. The newer implementations have a single heap, whereas the older CRTs created one heap per library linked in.
Remember that the library itself also depends on the singleton nature of the singleton. If you want to update the library, you need to verify that your changes don't break its functioning. That can only be done manually, each time.