Generic interface for big projects - C++

Let's say you are writing a bigger project and you have to use 3rd-party libraries, so your complete project will depend on these libraries.
I thought that instead of using those 3rd-party libraries directly, I would write some sort of wrapper library or interface that would look something like this:
void myMainloop(...) {
    thirdPartyMainloop(...);  // delegate to the 3rd-party implementation
}
So if the 3rd-party library becomes outdated, I could switch to another library just by integrating the new one into my wrapper library.
Is this a good thing to do? What alternatives do I have?
I am a little worried that if I have two libraries that essentially do the same thing but are designed completely differently, it may not be possible to find a generic interface that fits both.

You could write a generic interface to which you could adapt one or more 3rd party libraries. However, it would require very careful design and planning, not to mention a huge amount of development effort if you're talking about a library of the size and scope of Qt. You'd first have to design your interfaces, then implement those interfaces for each library you wanted to be able to plug in.
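As a rough illustration of that adapter approach, a minimal sketch might look like the following; the IWindow interface, the Qt-flavoured adapter, and makeWindow are hypothetical names, not any real library's API:

    #include <memory>
    #include <string>

    // The "generic" interface the rest of the application codes against.
    class IWindow {
    public:
        virtual ~IWindow() = default;
        virtual void setTitle(const std::string& title) = 0;
        virtual void show() = 0;
    };

    // One adapter per 3rd-party library you want to be able to plug in.
    class QtWindowAdapter : public IWindow {
    public:
        void setTitle(const std::string& /*title*/) override {
            // forward to the real toolkit call, e.g. a widget's setWindowTitle()
        }
        void show() override {
            // forward to the toolkit's show()
        }
    };

    // A factory hides which adapter the application actually receives.
    std::unique_ptr<IWindow> makeWindow() {
        return std::make_unique<QtWindowAdapter>();
    }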
Bottom line, it's a very lofty goal, and you're not likely to end up with anything that's truly "generic". At best, you'd probably have to dumb down the interface to the point where you're not taking advantage of many of the features that each library has to offer. Also, you'd likely introduce overhead that would diminish your application's performance, so keep that in mind if performance is critical.
I'd say just choose the library that suits your needs the best, unless you want to roll your own, which I doubt.

Related

Is it common practice to abstract library dependencies from implementation?

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild.) We are also using POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns - what if we want to use a different lib/framework like Qt or boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP.) Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big, or go home. Now would be the time to refactor and it really shouldn't be that bad especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the POCO namespace all over the place. I argue that we should pick a library/framework/toolkit, and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is Freeswitch (that provides its own interface to APR.)
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, boost, qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use boost, but a different, incompatible, version, they would need to spend time to resolve the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
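For instance, a public header can stick to standard types even when the implementation uses POCO internally. This is only an illustrative sketch; mylib and its function are made-up names:

    // mylib.h -- the header customers see: only standard C++ types appear.
    #include <string>
    #include <vector>

    namespace mylib {
        // No Poco (or boost, or Qt) types leak through this signature;
        // the .cpp behind it is free to use Poco::Thread, Poco::Mutex, etc.
        std::vector<std::string> listActiveSessions();
    }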
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) will need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you.
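As a minimal sketch of what such a deliberately restricted abstraction might look like (the BasicMutex name is hypothetical, and std::mutex here is just one possible backend):

    #include <mutex>

    // Only the operations every candidate backend (pthreads, Win32,
    // std::mutex) can satisfy. A pthread or Win32 implementation could
    // be swapped in behind the same interface.
    class BasicMutex {
    public:
        void lock()   { m_.lock(); }
        void unlock() { m_.unlock(); }
        // Deliberately no timed or recursive locking: those were not in
        // the common subset established across the target platforms.
    private:
        std::mutex m_;
    };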
A classic example of this "least common interface" is rename in a filesystem abstraction: Windows cannot implement an atomic rename over an existing file, so if you might in future swap out your current *nix implementation for one that can't do that, your code must not rely on atomic rename-replacement. You have to restrict the interface from the start.
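A sketch of what that restricted contract could look like (the interface and method names are invented for illustration):

    #include <string>

    class FileSystem {
    public:
        virtual ~FileSystem() = default;
        // Deliberately weaker contract: callers may NOT rely on atomic
        // replacement of an existing destination, because a Windows
        // backend cannot provide it even though POSIX rename() can.
        virtual void rename(const std::string& from,
                            const std::string& to) = 0;
    };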
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever the next clever technique is, the mutex in tests, and decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or that are built-in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock, etc., that precisely mimic the pthread_* functions and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers; you usually also want to inject the dependencies into your code.
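For example, here is a sketch of the dependency-injection idea, with invented names, where tests can substitute a fake lock for the real one:

    #include <mutex>

    // The worker is parameterised on any lockable type, so production
    // code can use std::mutex while tests pass a recording fake.
    template <typename Lockable>
    class Counter {
    public:
        explicit Counter(Lockable& lock) : lock_(lock) {}
        void increment() {
            lock_.lock();
            ++value_;
            lock_.unlock();
        }
        int value() const { return value_; }
    private:
        Lockable& lock_;
        int value_ = 0;
    };

    // In tests: a fake that just records that locking happened.
    struct FakeLock {
        int locks = 0, unlocks = 0;
        void lock()   { ++locks; }
        void unlock() { ++unlocks; }
    };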
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than implementing only what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete POSIX implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.

Portable C++ library for IPC (processes and shared memory), Boost vs ACE vs Poco?

I need a portable C++ library for doing IPC. I used fork() and SysV shared memory until now, but this limits me to Linux/Unix. I found out that there are 3 major C++ libraries that offer a portable solution (including Windows and Mac OS X). I really like Boost and would like to use it, but I need processes, and it seems that this has only been an experimental branch until now!? I have never heard of ACE or POCO before, and thus I am stuck: I do not know which one to choose. I need fork(), sleep() (usleep() would be great) and shared memory, of course. Performance and documentation are also important criteria.
Thanks for your help!
Boost Interprocess has been around since Boost 1.35 (which should be something like 3 years ago if memory serves).
ACE has been around longer, but from the sound of things, it's probably overkill -- ACE is a big library, and you only seem to want a tiny bit of what it includes. That's not necessarily a major problem, but it is something to keep in mind. In particular, a library that's really designed for big projects can tend to seem (or be) a bit clumsy for smaller ones. ACE is also intended primarily for network development, with IPC included because (for example) you might want to build what appears to be a single server out of a number of cooperating processes, and if so you obviously need a way to build those cooperating processes.
POCO is a lot more like ACE -- it's basically a network library that happens to include some IPC capability. Again, you're looking at using a pretty small part of a much larger, more ambitious library.
Based on what you want, I'd probably use Boost -- it seems to be the closest fit for what you've said you want. POCO would probably be my second choice. Although it's separate from Boost, it seems to largely follow a similar design philosophy -- in particular, it's meant to integrate with the standard library, whereas ACE tends to be more all-encompassing.
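For a feel of the Boost option, here is a minimal sketch of creating and using a named shared-memory segment with Boost.Interprocess (the segment and object names are arbitrary):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/shared_memory_object.hpp>

    int main() {
        using namespace boost::interprocess;

        // Remove any stale segment from a previous run, then create one.
        shared_memory_object::remove("MySharedMemory");
        managed_shared_memory segment(create_only, "MySharedMemory", 65536);

        // Construct a named int inside the segment; another process can
        // open the same segment and look it up with find<int>("counter").
        int* counter = segment.construct<int>("counter")(0);
        ++(*counter);

        shared_memory_object::remove("MySharedMemory");
        return 0;
    }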
I'd like to add the Apache Portable Runtime (APR). It's not really C++, but of course you can use it; the headers even have extern "C" included.
Included are:
Shared Memory
Network connections
Signals
Mutexes
many other things.
The problem with Boost is that it places strong requirements on the C++ compiler. Cross-compilers in particular have problems with, e.g., the heavy template usage, so a plain C library is "more portable".
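To give a rough idea of what APR use looks like from C++ (error-code checking omitted for brevity; this is a sketch, not production code):

    #include <apr_general.h>
    #include <apr_pools.h>
    #include <apr_shm.h>

    int main() {
        apr_initialize();
        apr_pool_t* pool = nullptr;
        apr_pool_create(&pool, nullptr);

        // Anonymous shared memory of 4096 bytes; pass a filename instead
        // of nullptr for named shared memory between unrelated processes.
        apr_shm_t* shm = nullptr;
        apr_shm_create(&shm, 4096, nullptr, pool);

        void* base = apr_shm_baseaddr_get(shm);
        (void)base;  // the mapped region is usable here

        apr_shm_destroy(shm);
        apr_terminate();
        return 0;
    }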

Choosing between static libs and dynamic libs/plugins?

I have been throwing stuff together in a small test game over the past 6 months or so, and right now everything is in the one project. I would like to learn more about creating an "engine" for reusability and I am trying to figure out the best way to do this.
Static libs are obviously a tiny bit faster, as they are compiled in rather than loaded at runtime, but that really does not matter to me. The advantages of DLLs over static libs seem rather large, so I'm wondering what the best/most used approach is for a "game engine".
I am using Ogre3D (a rendering engine) which supports DLL plugins that it loads and whatnot, but I would like to write my DLLs so that I could basically use them anywhere. So would my best bet be to write individual DLLs for each portion of my sample engine, such as sound.dll, gui.dll, etc.? Or would I be better served by creating one large DLL, deemed engine.dll or something? I am assuming the best approach would be to write something like engine.dll for the general structure, then compose it of other DLLs such as sound/gui/input/etc.
Would it be silly of me to write my DLLs independently of Ogre's plugin system? My sound relies on the FMOD library, my GUI relies on the CEGUI library, and so on. I mainly just want to build my engine to a point where it is easily usable with the functions I need from the individual libraries.
If you are creating DLLs for reusability, it makes sense to split them up into self-contained units of functionality. Thus, if you have a general sound library that, for example, plays wav files, that could be separated out as sound.dll if you think lots of other programs might make use of that particular library. Likewise for anything else you think would be reusable elsewhere.
However, if it is mostly the case that the only people using your sound routines are those who will also load the rest of your game engine (or most of it), I see little point in having so many separate libraries. I can see the sense in having engine.dll, so that many games can be created from a single engine, but not in splitting up the rest unnecessarily.
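If you do go the separate-DLL route, loading a module at runtime on Windows looks roughly like this (sound.dll and its initSound entry point are hypothetical names):

    #include <windows.h>
    #include <iostream>

    typedef bool (*InitSoundFn)();

    int main() {
        // Load the module and resolve its exported init function.
        HMODULE sound = LoadLibraryA("sound.dll");
        if (!sound) {
            std::cerr << "sound.dll not found\n";
            return 1;
        }
        auto initSound = reinterpret_cast<InitSoundFn>(
            GetProcAddress(sound, "initSound"));
        if (initSound && initSound()) {
            std::cout << "sound subsystem ready\n";
        }
        FreeLibrary(sound);
        return 0;
    }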

Why is the Loki library not more widely used?

The Loki library implements some very widely used concepts (smart pointer, visitor, factory, etc.). The associated book "Modern C++ Design" is often mentioned, but the library itself is not widely used. Why is that?
Most developers seem to prefer Boost. In particular, why do people often decide to use Boost's smart pointers rather than Loki's?
Loki is a research/proof-of-concept sort of thing. Alexandrescu pushes new ideas; other people adopt them for the real world. Also, boost::shared_ptr is almost literally in TR1.
Loki suffers from being a good library touching on several functional areas (template metaprogramming support with a few specific applications: smart pointers, singletons, function objects, scope guards, etc.), whereas boost is a collection of many libraries, typically covering each functional area exhaustively and much more highly tuned for portability (first).
When 9 birds out of 10 can be killed with the same stone, many people just start with boost and fill in the gaps with third-party libraries. It's very hard to compete with boost if you overlap. Because you won't overlap with much of boost, people will download/install boost anyway to get the other functionality, so unless you nail an area that boost is weak in - and the difference is significant to the project - they'll "settle" for boost there too.
Further, Alexandrescu made repeated attempts to get Loki included in boost, and some of the key boost authors just weren't cooperative. My personal view is that they want the more complete but much less user-friendly MPL to have more "market share": as authors of the library and the hard-copy books that are the only decent documentation (in stark contrast with most other boost libraries which have excellent online documentation), they do quite well out of this.
If anyone is offended by and disagrees with this analysis, I'm all ears.
Another practical issue with extremely parameterised code is that in large projects where different developers/teams work independently, they'll often end up using subtly different instantiations of the same template pretty arbitrarily. This makes it harder to pass values between those subsystems; the receiver may need to:
be parameterised (i.e. templated, and hence inline, which introduces compilation dependencies and slower builds in enterprise-scale systems)
provide some minimal coverage for all possible instantiations (e.g. checking error codes and expecting/handling exceptions)
work through some compile-time to run-time hand-over (based on an abstract base accessor with implementations for each instantiation), which compromises some of the performance benefits of parameterisation
This is all possible, but it takes a great programmer to navigate the terrain.
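A toy illustration of the "subtly different instantiations" problem; Handle here stands in for a policy-based smart pointer such as Loki's:

    template <typename T, bool ThreadSafe>
    struct Handle { T* ptr = nullptr; };

    // Team A's subsystem trades in single-threaded handles...
    void consumeA(Handle<int, false> /*h*/) {}
    // ...Team B's in thread-safe ones. The two are unrelated types, so a
    // value cannot cross the subsystem boundary without conversion glue
    // or making the receiving code a template as well.
    void consumeB(Handle<int, true> /*h*/) {}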
You want to use a library that the next programmer is going to know and that is going to be well supported in the future - so you pick a major lib.
Because it's a major lib lots of people use it, so it becomes the default choice.
I actually prefer Loki's way of doing things, and I have myself contributed a Decorator pattern to Loki, which now sits in the tracker because, as far as I know, the project is no longer maintained.
I use boost::shared_ptr just because it will be the standard very soon. I may dislike the fact that I can't customize it to act exactly the way I want it to act, but I have to live with it.
Usage of the standard library is important, as it keeps the code maintainable by other programmers. If it's open source and you want to experiment, go ahead and use Loki. No one is stopping you.
Actually Windows Vista uses some of Loki's features.
I am guessing they are not using the redundant implementations of smart pointers and visitors.
Speaking as someone who's used quite a bit of the Boost library, and also looked at Loki more than once, the biggest problem was the sparsity of documentation. Also, Loki uses some of the hairiest bits of C++ templates. Exciting stuff, but also rather daunting.
I used Loki once for a little tool (basically an interpreter) and actually liked it. My coworkers were less enthusiastic about the library, so its use remained constrained to this small sub-project.

Reorganize Classes into Static Libraries

I am about to attempt reorganizing the way my group builds a set of large applications that share about 90% of their source files. Right now, these applications are built without any libraries whatsoever involved, except for externally linked ones that are not under our control. The applications use the same common source files (we are not maintaining 5 versions of the same .h/.cpp files), but these are not built into any common library. So, at the moment, we are paying the price of building the same code over and over per application, each time we intend to release a version. To me, this sounds like a prime candidate for using libraries to capture the shared code and reduce build times. I do not have the option of using DLLs, so the approach is to use static libraries.
I would like to know what tips you would have for how to approach this task. I have limited experience with creating/organizing static libraries, so even the basic suggestions towards organization/gotchas are welcome. Maybe even a good book recommendation?
I have done a brief exercise by finding the entire subset of files that each application share in common. As a proof of concept, I took these files and placed them in a single "Common Monster" static library. Building the full application using this single static library certainly improves the build time for all of the applications, but should I leave it at this? The purpose of the library in this form is not very focused and seems like a lazy attempt at modularity. There is ongoing development with these applications, and I'm afraid this setup will cause problems further down the line.
It's very hard to give general guidelines in this area - how you structure libraries depends very much on how you use them. Perhaps if I describe my own code libraries this may help:
One general-purpose library containing code that I expect all applications will have at least a 50/50 chance of needing to use. This includes string utilities, regexes, expression evaluation, XML parsing and ODBC support. Conceivably this should be split up a bit, but keeping it monolithic makes distributing my code in FOSS projects easier.
A library supporting multi-threading, providing wrappers around threads, mutexes, semaphores etc.
One supporting SQLite via its native interface, rather than via ODBC.
A C++ web server wrapper round the Mongoose C web server.
The general purpose library is used in all the stuff I write, the others in more specialised circumstances. Headers for each library are held in separate directories, as are the library binaries themselves (though they should probably be in a single lib directory).
Make sure that the dependencies of your libraries form a directed acyclic graph (ideally a tree). While this is not necessarily a problem for static libs (I'm not sure, in fact), it will be a problem if you ever decide to switch to DLLs. Depending on your situation, this may require some redesign of interfaces.
Another thing I noticed (for sure on MSVC), which you may want to consider if build speed is an important concern: DLLs link much faster than static libraries. I assume this is because they don't have to be copied into the new executable and there's no need to search for and eliminate unused code. Even if it's not an option for production, you can use this trick while developing.
I am also in the habit of creating my solution files with CMake, because it is easier to get an overview of the entire build process than by clicking through an endless list of options in a GUI. It's up to you to decide if you want to walk that path.