Can I un-singleton a singleton? - C++

I want to use a library that makes heavy use of singletons, but I actually need some of the manager classes to have multiple instances. The code is open source, but changing it myself would make updating the library to a newer version hard.
What tricks are there, if any, to force creation of a new instance of a singleton or even the whole library?

Find out who wrote the library in the first place, visit their home address and beat them to a bloody pulp, preferably with a book about software design. :)
Apart from that: maintain your changes to the original library as a patch set so you can (more or less) easily apply it to each new version. Also, try to get your changes into the library so you don’t have to maintain the patches yourself. ;)
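If you do end up patching, the change itself is often tiny. A minimal sketch of what the patched class might look like, assuming the library uses a classic Meyers-style singleton (the class name ResourceManager is hypothetical):

    // Patched version of the library's (hypothetical) manager class.
    // The only change: the constructor moves from private to public, so
    // extra instances can be created while instance() keeps working for
    // all the library-internal call sites.
    class ResourceManager {
    public:
        static ResourceManager& instance() {
            static ResourceManager inst;   // the original "one and only"
            return inst;
        }

        ResourceManager() {}               // was private in the original

    private:
        ResourceManager(const ResourceManager&);   // still non-copyable
    };

    int main() {
        ResourceManager& shared = ResourceManager::instance();  // library's view
        ResourceManager mine;   // your additional, private instance
        (void)shared; (void)mine;
        return 0;
    }

A one-line patch like this is also much easier to re-apply after each library update than a structural rewrite would be.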

Under Windows you can encapsulate each instance in its own DLL.
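That works because each module that statically links the library gets its own copy of the singleton's static data. A rough sketch under that assumption (TheManager stands in for the library's class; all names hypothetical):

    // manager_dll.cpp - compile this same file into several DLLs
    // (manager_a.dll, manager_b.dll, ...), statically linking the library.
    // Every module gets its own copy of the function-local static below,
    // so each DLL holds an independent "singleton".
    class TheManager {
    public:
        static TheManager& instance() {
            static TheManager inst;        // one copy PER MODULE
            return inst;
        }
        int counter;
    private:
        TheManager() : counter(0) {}
    };

    extern "C" __declspec(dllexport) int manager_bump() {
        return ++TheManager::instance().counter;  // THIS DLL's instance
    }

Load manager_a.dll and manager_b.dll from the application and the two counters advance independently, because each call resolves to that module's private instance.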

The answer is "it depends". A good example is the C++ heap used by Microsoft's C runtime, which is of course implemented as a singleton. Now, when you statically link the CRT into multiple DLLs, you end up with multiple copies: the newer implementations share a single heap, whereas the older CRTs created one heap per linked-in module.
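The practical symptom of that duplication is the classic "allocate in one module, free in another" failure; a hedged sketch (module and function names hypothetical):

    // producer.cpp - built into producer.dll, statically linked CRT.
    extern "C" __declspec(dllexport) char* make_buffer() {
        return new char[64];   // allocated on producer.dll's private heap
    }

    // main.cpp - built into main.exe, with its OWN statically linked CRT.
    extern "C" __declspec(dllimport) char* make_buffer();

    int main() {
        char* p = make_buffer();
        delete[] p;   // frees on the EXE's heap: undefined behaviour
        return 0;
    }

The usual workarounds are to let the allocating module also do the freeing, or to link every module against the shared CRT DLL so there is only one heap singleton again.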

Remember that the library itself also depends on the singleton nature of its singletons. If you want to update the library, you need to verify that your changes don't break the assumptions the library makes internally. That can only be done manually, each time.

Related

Using boost-di with configuration file and shared libraries

I am planning a C++ project using dependency injection via Boost.DI. In my opinion I will need a mechanism for dynamically loading libraries too, to really benefit from dependency injection.
Therefore I am considering Boost.DLL as a platform-independent shared-library mechanism.
For dependency configuration I am thinking about using INI files via Boost.PropertyTree.
Do you see any major drawback in this approach?
Or is there another platform independent mechanism/library?
Thanks for your opinions
Andreas
There is a mechanism for deciding at runtime which implementation to use, but given Boost.DI's compile-time approach, it seems by design not intended for use with dynamic libraries.
For pure compile-time injection it looks very smart and nice to use. For my problem, it does not seem to be the right solution.
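For what it's worth, the runtime part (picking an implementation based on a value from the INI file) can be done without a DI framework at all. A minimal hand-rolled sketch, with all service names hypothetical:

    #include <functional>
    #include <map>
    #include <memory>
    #include <stdexcept>
    #include <string>

    // Hypothetical service interface.
    struct ILogger {
        virtual ~ILogger() = default;
        virtual void log(const std::string& msg) = 0;
    };

    struct ConsoleLogger : ILogger {
        void log(const std::string& msg) override { /* write to stdout */ }
    };

    // Registry mapping the string found in the INI file to a factory.
    using LoggerFactory = std::function<std::unique_ptr<ILogger>()>;

    std::map<std::string, LoggerFactory> makeRegistry() {
        std::map<std::string, LoggerFactory> reg;
        reg["console"] = [] { return std::unique_ptr<ILogger>(new ConsoleLogger); };
        return reg;
    }

    std::unique_ptr<ILogger> makeLogger(const std::string& key) {
        static const std::map<std::string, LoggerFactory> registry = makeRegistry();
        auto it = registry.find(key);        // key comes from the INI file
        if (it == registry.end())
            throw std::runtime_error("unknown logger: " + key);
        return it->second();
    }

A shared library would then only need to register its factories into such a map when it is loaded.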
During development of our C++ projects, I had to find a C++ DI framework.
After some investigation and evaluation, and based on our requirements (detailed below), we found that the Hypodermic framework suits our needs.
Our criteria (besides the basic one that the framework must recursively inject instances!) were:
Non-intrusive (does not require dirtying existing code with 'decoration' macros...), unlike Google Fruit and some other libs.
Support for singleton instances
Support for configurable instantiation (using a lambda)
Support for container/injector composition
A generic, shareable container/injector, in order to support modules (static or dynamic).
BTW, I agree with Andreas's answer: Boost.DI is not suitable because it is heavily templated, and, as answered in this question, it does not allow real composition of containers at run time - and not even at compile time, unless you share the header files with the root injector. That violates the 'privacy' of modules (you need to publish them to be able to inject them).
Hypodermic lets you configure containers and create sub-containers. The container is agnostic to type (not a template), so it can be shared at run time, as sketched below.
BTW, this also solves this question.
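For reference, registration with Hypodermic looks roughly like this. This is from memory of its Autofac-style API, so treat the exact method names as assumptions and check the project README; the service names (IRenderer, GlRenderer) are hypothetical:

    #include <Hypodermic/Hypodermic.h>

    // Hypothetical services: GlRenderer implements the IRenderer interface.
    Hypodermic::ContainerBuilder builder;
    builder.registerType<GlRenderer>().as<IRenderer>().singleInstance();

    std::shared_ptr<Hypodermic::Container> container = builder.build();

    // The container resolves the whole dependency graph recursively.
    std::shared_ptr<IRenderer> renderer = container->resolve<IRenderer>();

Note that nothing here is a template on the container side, which is what makes passing the container across module boundaries feasible.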

What's the simplest way to write portable dynamically loadable libraries in C++?

I'm working on a project which has multiple similar code paths which I'd like to separate from the main project into plugins. The project must remain cross-platform compatible, and all of the dynamic library loading APIs I've looked into are platform specific.
What's the simplest way to create a dynamic library loading system which can be compiled and run on multiple operating systems without extra modification of the code? Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
Thanks.
You will have to use platform-dependent code for the loading system: loading a DLL on Windows is different from loading a shared object on Unix. But with a couple of #ifdefs you will be able to keep mostly the same code base in the loader.
Having said that, I think you can make your plugins platform independent. Of course, you will have to compile them for every platform, but the code will be 99% the same.
Dynamic library loading on Windows and on Unix/Linux works with three functions: a pair of functions to load/unload a library, and another function to get the address of a symbol in the library. You can easily write a wrapper around these three functions to provide cross-operating-system support.
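A minimal sketch of such a wrapper, using only the standard platform APIs (LoadLibrary/GetProcAddress/FreeLibrary on Windows, dlopen/dlsym/dlclose elsewhere); the wrapper names are my own:

    #ifdef _WIN32
      #include <windows.h>
      typedef HMODULE LibHandle;
    #else
      #include <dlfcn.h>   // link with -ldl on Linux
      typedef void* LibHandle;
    #endif

    // Returns 0 on failure on both platforms.
    LibHandle lib_open(const char* path) {
    #ifdef _WIN32
        return LoadLibraryA(path);
    #else
        return dlopen(path, RTLD_NOW);
    #endif
    }

    void* lib_symbol(LibHandle lib, const char* name) {
    #ifdef _WIN32
        return (void*)GetProcAddress(lib, name);
    #else
        return dlsym(lib, name);
    #endif
    }

    void lib_close(LibHandle lib) {
    #ifdef _WIN32
        FreeLibrary(lib);
    #else
        dlclose(lib);
    #endif
    }

The only platform-specific leftovers are the library file names themselves (plugin.dll vs libplugin.so), which you can hide behind a small naming helper.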
Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
A few things off the top of my head:
Avoid static objects in the dynamic libraries. Provide proper initialization methods/functions to allocate the objects. Issues that occur while the library is being loaded by the OS (which is when the constructors for static objects run) are very hard to debug - second only to multi-threading issues.
Interface headers may not contain code. No inline methods, no preprocessor defines. That is to avoid tainting the application with code from a particular version of the library, which would make it impossible to replace the library later.
Interface headers may not contain the implementation classes themselves - only abstract classes and factory functions (see the sketch after this list). Similar to the previous point: this avoids making the application depend on a particular version of the classes. The factories are needed as the way for the user application to instantiate the concrete implementation classes.
When introducing a new version of an interface, to keep things somewhat backward compatible, do not modify the existing abstract class - create a new abstract class inherited from it and add the new methods there. Change the factory to return the new version. (Recall MS's IInterface, IInterface2, IInterface3 and so on.) In the implementation, use the newest version of the abstract class; polymorphism then keeps the implementation backward compatible with the older interface versions. (That obviously calls for periodic interface maintenance and clean-ups, to remove the old cruft.)
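A minimal sketch combining these points - abstract interface, factory with C linkage, no code in the header; all names are hypothetical:

    // ---- plugin.h (shared between the application and every plugin) ----
    // Declarations only: no inline code, no implementation classes.
    class IPlugin {
    public:
        virtual ~IPlugin() {}
        virtual const char* name() = 0;
        virtual void run() = 0;
    };

    // C linkage keeps the symbol names unmangled, so they can be found
    // with GetProcAddress/dlsym. (On Windows, also dllexport them.)
    extern "C" IPlugin* create_plugin();
    extern "C" void destroy_plugin(IPlugin* p);

    // ---- plugin_impl.cpp (compiled into the plugin library only) ----
    class HelloPlugin : public IPlugin {   // concrete class stays hidden
    public:
        virtual const char* name() { return "hello"; }
        virtual void run() { /* the plugin's actual work goes here */ }
    };

    extern "C" IPlugin* create_plugin()        { return new HelloPlugin; }
    extern "C" void destroy_plugin(IPlugin* p) { delete p; }

Pairing create with destroy keeps allocation and deallocation in the same module, which avoids cross-module heap problems.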
Take a look at the Boost.Extension library. It is not really part of Boost - you can find it in the sandbox - and it is somewhat frozen as well, but overall the library is stable and easy to use.

Using stl containers without boost pointers

My company is currently not won over by the Boost libraries, and while I've used them and have been getting them pushed through for some work, some projects, due to their nature, will not be allowed to use Boost. Basically, libraries like Boost cannot be brought in for that work, so I am limited to the libraries available by default (currently Visual Studio 2005).
So... my question is: if I am unable to use boost::shared_ptr and its little brothers, what is the alternative when using STL containers with pointers?
One option I see is writing a smart-pointer class like shared_ptr that looks after a given pointer, but I'd like to know if there are other alternatives first.
If they're not going to accept Boost, I presume other "not developed here" libraries are out of the question.
It seems to me you're left with two options:
Roll your own shared_ptr.
Use raw pointers, and manage the memory yourself.
Neither is ideal, and each comes with its own pain. Your saving grace might be that you have all of the Boost source available to you: you can use it as a model for writing your own shared_ptr.
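A minimal sketch of what rolling your own might look like - non-thread-safe reference counting, no custom deleters, no weak pointers; a production version needs much more care:

    template <typename T>
    class SharedPtr {
    public:
        explicit SharedPtr(T* p = 0) : ptr_(p), count_(new long(1)) {}

        SharedPtr(const SharedPtr& other)
            : ptr_(other.ptr_), count_(other.count_) { ++*count_; }

        SharedPtr& operator=(const SharedPtr& other) {
            if (this != &other) {
                release();
                ptr_ = other.ptr_;
                count_ = other.count_;
                ++*count_;
            }
            return *this;
        }

        ~SharedPtr() { release(); }

        T& operator*() const  { return *ptr_; }
        T* operator->() const { return ptr_; }
        T* get() const        { return ptr_; }

    private:
        void release() {
            if (--*count_ == 0) { delete ptr_; delete count_; }
        }
        T* ptr_;
        long* count_;
    };

Because it is copyable with correct copy semantics, a SharedPtr like this can be stored in STL containers, which is exactly what auto_ptr famously cannot do.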
In Visual Studio 2008, std::tr1::shared_ptr is available. I'm not sure whether it is available in VS2005; you should check.
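If it is available to you, usage with containers is straightforward; a quick sketch (the Widget class is hypothetical):

    #include <memory>   // std::tr1::shared_ptr; on gcc it lives in <tr1/memory>
    #include <vector>

    struct Widget { /* ... */ };

    int main() {
        std::vector<std::tr1::shared_ptr<Widget> > widgets;
        widgets.push_back(std::tr1::shared_ptr<Widget>(new Widget));
        // Each Widget is freed automatically when its last shared_ptr goes away.
        return 0;
    }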
That definitely depends on what you want to do. It's not as if shared_ptr were absolutely necessary for a project that uses pointers.
If you really need them, import just the classes/templates/functions you actually need into your own project, if that is possible without importing the whole Boost lib.
Without knowing the background, it's hard to say why Boost's libraries aren't permitted. If the reason is to avoid complex dependencies, you can easily work around the issue: almost all Boost libraries work with only a simple #include - in short, they don't need linking, and thus avoid DLL hell or any variant thereof.
So, if external libraries aren't appreciated because of the complexities involved in linking them (whether statically or dynamically), you can simply copy the Boost headers you need into the project by hand and use them directly.
For clarity, and to make future upgrades and maintenance easier, I'd avoid renaming the Boost headers (so future coders know where the code came from). If "they" don't want such simple code inclusions, well, you could make the argument that quite a few Boost headers are headed for inclusion in the standard, and that they'll save everyone a bunch of headaches and time. Legally, the Boost license is specifically designed to be as easy and safe to integrate as possible: all files carry an explicit license which permits all relevant uses, and almost all libs have exactly the same license.
I'm curious though: why exactly aren't boost headers permitted?

Reorganize Classes into Static Libraries

I am about to attempt reorganizing the way my group builds a set of large applications that share about 90% of their source files. Right now, these applications are built without any libraries whatsoever, except for externally linked ones that are not under our control. The applications use the same common source files (we are not maintaining 5 versions of the same .h/.cpp files), but these are not built into any common library. So, at the moment, we are paying the price of building the same code over and over for each application, every time we intend to release a version. To me, this sounds like a prime candidate for using libraries to capture the shared code and reduce build times. I do not have the option of using DLLs, so the approach is to use static libraries.
I would like to know what tips you would have for how to approach this task. I have limited experience with creating/organizing static libraries, so even the basic suggestions towards organization/gotchas are welcome. Maybe even a good book recommendation?
I have done a brief exercise by finding the entire subset of files that each application share in common. As a proof of concept, I took these files and placed them in a single "Common Monster" static library. Building the full application using this single static library certainly improves the build time for all of the applications, but should I leave it at this? The purpose of the library in this form is not very focused and seems like a lazy attempt at modularity. There is ongoing development with these applications, and I'm afraid this setup will cause problems further down the line.
It's very hard to give general guidelines in this area - how you structure libraries depends very much on how you use them. Perhaps if I describe my own code libraries this may help:
One general-purpose library containing code that I expect all applications will have at least a 50/50 chance of needing. This includes string utilities, regexes, expression evaluation, XML parsing and ODBC support. Conceivably this should be split up a bit, but keeping it monolithic makes distributing my code in FOSS projects easier.
A library supporting multi-threading, providing wrappers around threads, mutexes, semaphores etc.
One supporting SQLite via its native interface, rather than via ODBC.
A C++ web server wrapper round the Mongoose C web server.
The general purpose library is used in all the stuff I write, the others in more specialised circumstances. Headers for each library are held in separate directories, as are the library binaries themselves (though they should probably be in a single lib directory).
Make sure that the dependencies of your libraries form a directed acyclic graph. While this is not necessarily a problem for static libs (I'm not sure, in fact), it will be a problem if you ever decide to switch to DLLs. Depending on your situation, this may require some redesign of interfaces.
Another thing I noticed (on MSVC, for sure), which you may want to consider if build speed is an important concern: DLLs link much faster than static libraries. I assume this is because they don't have to be copied into the new executable and there's no need to search for and eliminate unused code. Even if it's not an option for production, you may use this trick while developing.
I also have the habit of creating my solution files with CMake, because it is easier to get an overview of the entire build process that way than by clicking through an endless list of options in a GUI. It's up to you to decide whether you want to walk that path.

Best practices for creating an application which will be upgraded frequently - C++

I am developing a portable C++ application and looking for some best practices in doing that. This application will have frequent updates and I need to build it in such a way that parts of program can be updated easily.
For a frequently updated program, is building the program's parts into libraries the best practice? If the parts are in separate libraries, users can just replace a library when something changes.
If the answer to point 1 is "yes", what type of library do I have to use? On Linux, I know I can create a "shared library", but I am not sure how portable that is to Windows. What type of library should I use? I am aware of the DLL-hell issues on Windows as well.
Any help would be great!
Yes, using libraries is good, but the idea of "simply" replacing a library with a new one may be unrealistic, as library APIs tend to change and apps often need to be updated to take advantage of, or even be compatible with, different versions of a library. With a good amount of integration testing though, you'll be able to 'support' a range of different versions of the library. Or, if you control the library code yourself, you can make sure that changes to the library code never breaks the application.
On Windows, DLLs are the direct equivalent of shared libraries (.so) on Linux, and if you compile both in a common environment (either cross-compiling or using MinGW on Windows) then the linker will do it the same way - presuming, of course, that all the rest of your code is cross-platform and configures itself correctly for the target platform.
IMO, DLL hell was really more of a problem in the old days, when applications all installed their DLLs into a common directory like C:\WINDOWS\SYSTEM, which people don't really do anymore, precisely because it creates DLL hell. You can place your shared libraries in a more appropriate place where they won't interfere with other, non-aware apps, or - simplest of all - just keep them in the same directory as the executable that needs them.
I'm not entirely convinced that separating out the executable portions of your program in any way simplifies upgrades. It might, maybe, in some rare cases, make the update installer smaller, but the effort will be substantial, and certainly not worth it the one time you get it wrong. Replace all executable code as one unit in most cases.
On the other hand, you want to be very careful about messing with anything your users might have changed. Draw a bright line between the part of the application that is just code and the part that is user data. Handle the user data with care.
If it is an application, my first choice would be to ship a statically linked single executable. I had the opportunity to work on a product that was shipped to 5 platforms (Win2K, WinXP, Linux, Solaris, Tru64 Unix), and believe me, maintaining shared libraries or DLLs with a large codebase is a hell of a task.
Suppose this is a non-trivial application which involves the use of 3rd-party GUI libraries, threads, etc. Using C++, there is no single real way of doing this on all platforms, which means you will have to maintain different codebases for different platforms anyway. Then there are the weird behaviours (bugs) of 3rd-party libraries on different platforms. All this becomes a burden if the application is shipped using different library versions, i.e. different versions attached to different platforms. I have seen people ship libraries to all platforms when the fix was only for one particular platform, just to avoid the versioning confusion. But it is not that simple: the customer often has a different angle on how he/she wants to upgrade/patch, which also has to be considered.
Of course, if the binary you are building is huge, then one can consider DLLs/shared libraries. Even in that case, what I would suggest is to build your application in the form of layers, like:
Application-->GUI-->Platform-->Base-->Fundamental
So here, some libraries can contain common code for all platforms, while only specific libraries like 'Platform' need to be updated for platform-specific behaviours. This will make your life a lot easier.
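A rough sketch of how that 'Platform' layer can be isolated, with all names hypothetical - the header is shared, and the implementation is chosen per platform:

    // platform/file_system.h - the Platform layer's shared interface.
    #include <string>
    namespace platform {
        bool file_exists(const std::string& path);
    }

    // platform/file_system.cpp - one implementation per platform, chosen
    // either by the build system (separate _win32/_posix files) or, as
    // here, with an #ifdef inside a single file.
    #ifdef _WIN32
      #include <windows.h>
      namespace platform {
          bool file_exists(const std::string& path) {
              return GetFileAttributesA(path.c_str()) != INVALID_FILE_ATTRIBUTES;
          }
      }
    #else
      #include <sys/stat.h>
      namespace platform {
          bool file_exists(const std::string& path) {
              struct stat st;
              return stat(path.c_str(), &st) == 0;
          }
      }
    #endif

The layers above 'Platform' call platform::file_exists and never touch windows.h or sys/stat.h directly, so a platform-specific fix only ever rebuilds one library.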
IMHO a DLL/shared-library option is viable when you are building a product that acts as a complete solution rather than just an application. In such a case different subsystems use common logic simultaneously within your product framework whose logic can then be shared in memory using DLLs/shared-libraries.
HTH,
As soon as you're trying to deal with both Windows and a UNIX system like Linux, life gets more complicated.
What are the service requirements you have to satisfy? Can you control when client systems get upgraded? How many systems will you need to support? How much of a backward-compatibility requirement do you have.
To answer your question with a question, why are you making the application native if being portable is one of the key goals?
You could consider moving to a virtual platform like Java or .NET/Mono. You can still write C++ libraries (shared libraries on Linux, DLLs on Windows) for anything that would be better as native code, but the bulk of your application will be genuinely portable.