Cross-platform C++: platform-specific code sections in one file, or separate classes?

When writing fairly low-level C++, you quite often have to write OS-specific code. Let's take a MessageBox() function as an example (not a brilliant example, since frameworks already do this, but it's easy to follow).
Say you have a base App class, for which each project provides a custom sub-class like MyApp. You could have a single App::MessageBox() method with several #ifdef blocks, one per OS. Alternatively you could have a base class/interface, with per-OS sub-classes providing the implementation, e.g. AppWin32 : public App.
In one way the latter seems neater to me, but on the other hand it means your MyApp has to use some ugly code to make sure it subclasses the correct OS-specific base class.
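For concreteness, the two options might look roughly like this (just a sketch; App, MyApp, and AppWin32 are the names from above, everything else is made up):

    // Option 1: one App class, #ifdef blocks inside the method.
    class App {
    public:
        void MessageBox(const char* text) {
    #ifdef _WIN32
            // call the Win32 API here
    #elif defined(__APPLE__)
            // call the Cocoa equivalent here
    #else
            // call the X11/GTK equivalent here
    #endif
        }
    };

    // Option 2 (an alternative, not meant to coexist with the above):
    // abstract base class, one subclass per OS.
    class App {
    public:
        virtual ~App() {}
        virtual void MessageBox(const char* text) = 0;
    };

    class AppWin32 : public App {
    public:
        void MessageBox(const char* text) override; // Win32 code lives in AppWin32.cpp
    };

    // ...and MyApp then has to pick the right base somehow, e.g.:
    #ifdef _WIN32
    class MyApp : public AppWin32 { /* ... */ };
    #endif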
What's the better approach?

As file/socket/memory/database/... functions differ less from platform to platform than, say, GUI functions, most of that code is shared and compiled for all architectures. I just use #ifdef blocks around the platform-specific code inside these functions/classes.
For the completely different GUI (or any other complex subsystem) code, you should use separate implementation files (not headers, though perhaps internal headers) under platform directories (windows/window.cpp, xwin/window.cpp, macosx/window.cpp, ...).
Take a look at GUI toolkits such as wxWidgets or FLTK for this scheme; most of the others do the same.
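A minimal sketch of that arrangement (file and class names are only an illustration): one shared header, and the build system compiles exactly one platform file per target:

    // window.h -- shared by all platforms
    class Window {
    public:
        Window();
        ~Window();
        void show();
    private:
        struct Impl;   // defined differently by each platform's .cpp
        Impl* impl_;
    };

    // windows/window.cpp implements Window with the Win32 API,
    // xwin/window.cpp with Xlib, macosx/window.cpp with Cocoa.
    // The build system compiles exactly one of them per target,
    // so the shared code needs no #ifdef at all.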

IMO, some things, such as open/close, are too simple to abstract; #ifdef is not bad for these situations. I don't think providing an abstract platform layer for a simple function is a good idea; it is over-engineering.
But cross-platform work is complicated. If you want to provide a platform-specific feature that does not exist on a target platform, you should create an abstract platform layer and implement it for each platform. For example, the Clipboard on Windows is quite different from X Selections in X11, so you have to maintain different data structures and algorithms to present them in a unified way.
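Such a unified clipboard might sit behind an abstract interface along these lines (a rough sketch; all names are invented):

    #include <string>

    // One interface the rest of the program sees...
    class Clipboard {
    public:
        virtual ~Clipboard() {}
        virtual void setText(const std::string& text) = 0;
        virtual std::string getText() const = 0;
    };

    // ...with one implementation per platform, each with its own bookkeeping:
    // clipboard_win32.cpp talks to OpenClipboard()/SetClipboardData(),
    // clipboard_x11.cpp owns an X selection and answers conversion requests.
    class Win32Clipboard : public Clipboard { /* ... */ };
    class X11Clipboard   : public Clipboard { /* ... */ };

    Clipboard* createClipboard();   // factory picks the right one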

You can get away with only one #ifdef block if you move the implementation to several .cpp files. You could also combine this with the second approach if the need arises. Still, for simple cases I guess multiple implementation files will do the trick.

#ifdef is definitely the worst.
I would put platform-specific code into separate source trees (one tree per platform). From the project location for a particular platform, one can use two filesystem links: one to the "universal" tree (say, named "src") and one to the platform-specific tree (say, named "specific", actually pointing to the linux, windows, or other platform-specific source root).
This guarantees that you always pick the proper version of your classes. Moreover, you may even give the same names to similar classes designed for different platforms.
Of course, this approach is impossible on Windows with a FAT filesystem, but that is quite a rare case now (as there are usually no reasons not to use NTFS on Windows).
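A possible layout under this scheme (directory names are only an illustration):

    project/
        src/                universal code, shared by all platforms
        linux/              Linux-specific classes
        windows/            Windows-specific classes
        build-linux/
            src      -> ../src       (filesystem link)
            specific -> ../linux     (filesystem link)
        build-windows/
            src      -> ../src       (filesystem link)
            specific -> ../windows   (filesystem link)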

Related

Is it common practice to abstract library dependencies from implementation?

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild.) We are also using POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns: what if we want to use a different lib/framework like Qt or Boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP.) Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big, or go home. Now would be the time to refactor and it really shouldn't be that bad especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want POCO types littered all over the place. I argue that we should pick a library/framework/toolkit, stick with it, and take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is FreeSWITCH (which provides its own interface to APR).
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, Boost, Qt, or any other type that is not part of the standard C++ library. This is because these base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use Boost, but a different, incompatible version, they would need to spend time resolving the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In that case the release of your library is tied to the release of the entire toolkit.
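As a small illustration of that point (a sketch added for clarity, not from the original answer; the library name is made up): the public header sticks to standard types, and POCO appears only in the implementation file:

    // mylib.h -- the public API; only standard C++ types appear here
    #include <string>
    #include <vector>

    namespace mylib {
        std::vector<std::string> listActiveSessions();
    }

    // mylib.cpp -- free to use POCO internally, e.g.
    //   #include <Poco/Net/HTTPClientSession.h>
    // and implement listActiveSessions() on top of it; none of that
    // leaks into the header, so callers never see a POCO type.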
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) will need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you (a rough sketch of the idea appears at the end of this answer).
A classic example of this "least common interface" would be to restrict rename in a filesystem abstraction, on the basis that Windows cannot implement an atomic rename over an existing file. So your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever-the-next-clever-technique-is the mutex in tests, and you decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or ones that are built in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock etc, that precisely mimic pthread_* functions, and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers, you usually also want to inject the dependencies into your code.
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete Posix implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.
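To make point 1 a little more concrete, a "least common interface" for a mutex might be as small as this (purely an illustrative sketch, not the code referred to above; the backend file names are hypothetical):

    // mutex.h -- the only thing the rest of the code is allowed to see
    class Mutex {
    public:
        Mutex();
        ~Mutex();
        void lock();
        void unlock();
    private:
        Mutex(const Mutex&);              // non-copyable
        Mutex& operator=(const Mutex&);
        void* impl_;                      // pthread_mutex_t*, CRITICAL_SECTION*, ...
    };

    class ScopedLock {
    public:
        explicit ScopedLock(Mutex& m) : m_(m) { m_.lock(); }
        ~ScopedLock() { m_.unlock(); }
    private:
        Mutex& m_;
    };

    // mutex_pthread.cpp, mutex_win32.cpp, mutex_poco.cpp each implement
    // Mutex for one backend; the build picks exactly one of them.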

Advantage of wrapping classes in DLLs

I've just finished a phase in my project where I wrote a small infrastructure to carry out a certain task, made of a core class with several auxiliary classes.
The C++'ness is quite basic - single inheritance, some STL containers, that's it.
No threads - client runs the show.
What I would like to do now is wrap it all up in a DLL, version it, and use it as a standalone unit. I'd like that separation in order to track changes and development better, and perhaps for other projects as well.
As I don't have experience with classes in DLLs, I would like to hear yours:
What's your approach to this problem?
Specifically:
Is it worth the trouble?
Do you do that often or not at all?
What about compatibility issues (like clients compiled using a different compiler)?
I'm not really asking for a debate (though that's the probable outcome), but rather an advice from experience.
Thanks for your time.
I find it hard to see any benefit with this. I can see plenty of problems:
No type checking across a DLL boundary. Any version mismatches will result in runtime failures, harder to detect than compile time failures.
Extra deployment headaches. You may be tempted to update some but not all modules and so deal with complex dependencies.
All clients that want to use these DLLs must use the same compiler.
Only make this change if you can identify benefits that outweigh the negatives.
C++ code is not binary compatible between compilers, so it's generally no use creating DLLs that expose C++ classes unless they are built as part of the project that uses them.
If you want to create a Windows DLL with a well-defined object-oriented interface that the rest of the world can use, make it a COM inproc server.

Preparing a library for plugin support

I searched for this particular question but could not come up with any results, neither here nor on-line in general (maybe because it is a little harder to phrase for me). If it has already been asked, please point me in the right direction.
I am at a point where I would like my libraries/software to be pluggable. I see all these various libraries and systems where plugins are used extensively and the authors boastfully point out (in a good way!) that their software has plugin support.
So my question is, where do I start? Are there any books/online resources that break the ice and may guide one on the do's and don'ts of making your library pluggable, define best practices, etc.?
You have to understand some things before starting:
There is no support for modules (static or dynamic) in standard C++. Nope. Not yet. Maybe in 2015.
DLLs (or .so files on Unix-like systems) are dynamically loaded libraries that are compiler/OS dependent, so they're a pragmatic solution that fills the need.
So you'll have to use shared libraries (whatever the file extension, that's the keyword for searches on this subject) as plugin binaries. If your plugin needs to contain more than runtime code, like graphic resources, you can embed those resources in the binary, or define a file format or compressed archive that contains the binary file.
However you set up your plugin files, in C++ the problem is the interface.
Depending on which compiler you use, you'll have different ways to "tag" functions or classes as exported/imported (meaning your plugin source code exports the code and the user of the plugin imports it).
Set up a clean and clear interface in C++ for the modules, with no templates (because they are compiler- and compiler-configuration-dependent). Those interfaces should be function declarations and class declarations with no inline code, marked as exported/imported.
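The usual way to do that tagging is a macro that expands differently depending on the compiler and on whether the plugin itself or its user is being compiled (a common pattern sketched here; the macro names are arbitrary):

    // plugin_api.h
    #if defined(_WIN32)
    #  if defined(BUILDING_MY_PLUGIN)
    #    define MY_PLUGIN_API __declspec(dllexport)
    #  else
    #    define MY_PLUGIN_API __declspec(dllimport)
    #  endif
    #else
    #  define MY_PLUGIN_API __attribute__((visibility("default")))
    #endif

    // A function the plugin exports and the host imports;
    // extern "C" avoids C++ name mangling, which differs per compiler.
    extern "C" MY_PLUGIN_API int plugin_version();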
Now, once you've got this, you can use OS-specific APIs to load/unload dynamic library binaries while the application is running. Once a library is loaded, you can get pointers to its exported functions, again using OS-specific APIs. I'll let you search for those.
Now, there are libraries that provide ways to abstract this in a cross-platform way. I haven't used them yet, and they are known to be imperfect because of the lack of definitions in the C++ standard, but they could be useful if you're planning to make your application cross-platform:
boost::extension: it's not yet a Boost library (nor even formally proposed), and its development is on pause (until some new standard C++ implementations are done), so it might be a risky choice, but a lot of people say they use it with success.
The POCO libraries have a component for shared libraries that would be the equivalent of boost::extension. Again, a lot of people say it's useful, so I guess it's good enough to use.
The other alternative, which is easy to set up if you don't need to support tons of target platforms, is simply to write some wrapper code around the OS-specific APIs. That's what I did before knowing about boost::extension, for example.

What's the simplest way to write portable dynamically loadable libraries in C++?

I'm working on a project which has multiple similar code paths which I'd like to separate from the main project into plugins. The project must remain cross-platform compatible, and all of the dynamic library loading APIs I've looked into are platform specific.
What's the simplest way to create a dynamic library loading system which can be compiled and run on multiple operating systems without extra modification of the code? Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
Thanks.
You will have to use platform-dependent code for the loading system. Loading a DLL on Windows is different from loading a shared object on Unix. But with a couple of #ifdefs you will be able to keep mostly the same code base in the loader.
Having said that, I think you can make your plugins platform independent. Of course, you will have to compile it for every platform, but the code will be 99% the same.
Dynamic library loading on Windows and Unix/Linux works with three functions: a pair of functions to load/unload the libraries, and another function to get the address of a function in the library. You can easily write a wrapper around these three functions to provide cross-operating-system support.
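A minimal version of such a wrapper could look like this (a sketch only, with error handling left out; the function names are arbitrary):

    #ifdef _WIN32
    #include <windows.h>
    typedef HMODULE LibHandle;
    inline LibHandle openLib(const char* path)            { return LoadLibraryA(path); }
    inline void*     getSym(LibHandle lib, const char* n) { return (void*)GetProcAddress(lib, n); }
    inline void      closeLib(LibHandle lib)              { FreeLibrary(lib); }
    #else
    #include <dlfcn.h>
    typedef void* LibHandle;
    inline LibHandle openLib(const char* path)            { return dlopen(path, RTLD_NOW); }
    inline void*     getSym(LibHandle lib, const char* n) { return dlsym(lib, n); }
    inline void      closeLib(LibHandle lib)              { dlclose(lib); }
    #endif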
Ideally, I'd like to write one plugin, and have it work on all the operating systems the project supports.
A few things off the top of my head:
Avoid static objects in the dynamic libraries. Provide proper initialization methods/functions to allocate the objects. The issues that occur while a library is being loaded by the OS (which is when the constructors for static objects are called) are very hard to debug, second only to multi-threading issues.
Interface headers may not contain code: no inline methods, no preprocessor defines. That is to avoid tainting the application with code from a particular version of the library, which would make it impossible to replace the library at a later time.
Interface headers may not contain the implementation classes themselves, only abstract classes and factory functions. Similar to the previous point, this avoids making the application depend on a particular version of the classes. Factories are needed as a way for the user application to instantiate the concrete implementation classes.
When introducing a new version of an interface, to keep things somewhat backward compatible, do not modify the existing abstract class: create a new abstract class inherited from it and add the new methods there. Change the factory to return the new version. (Recall MS's IInterface, IInterface2, IInterface3 and so on.) In the implementation, use the newest version of the abstract class; polymorphism then keeps the implementation backward compatible with the older interface versions. (That obviously calls for periodic interface maintenance and clean-ups, to remove the old cruft.)
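Put together, an interface header following these rules might look roughly like this (an invented example, not taken from the answer above):

    // audio_codec.h -- shipped to plugin authors and users alike
    class AudioCodec {
    public:
        virtual ~AudioCodec() {}
        virtual const char* name() const = 0;
        virtual int decode(const unsigned char* in, int inSize,
                           short* out, int outCapacity) = 0;
    };

    // The only concrete entry points are C-linkage factory functions,
    // looked up by name (dlsym()/GetProcAddress()) after loading the plugin.
    extern "C" AudioCodec* createCodec();
    extern "C" void destroyCodec(AudioCodec* codec);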
Take a look at the boost.extension library; it is not really part of Boost, but you can find it in the sandbox. It is also kind of frozen, but overall the library is stable and easy to use.

How to use C++ source for two projects

I'm not sure if I am going about this the right way. I am making some C++ classes to use in two apps. I am going to have to compile them for use in a Cocoa app, and later compile them for use with FastCGI.
Should I create a dynamic library?
If the source files have to be compiled with different conditional compilation settings to work in the two applications, you'd better ship the source and compile it along with the apps themselves.
Otherwise, you can create a library and ship the compiled versions along with the headers for compilation of the apps. Whether the library should be dynamic or not depends on your situation. If you don't need to update the library separately without recompiling the executable, a simple static library is probably a better choice.
Don't forget that you have static library as an option too. On some platforms dynamic libs come with a bunch of annoying baggage. They're also slightly slower (though this is usually beside the point). They also can't be replaced without recompiling the program (you might be more concerned about this in a web environment). Finally, when you're only running one of the programs on a computer you gain nothing by making the lib dynamic...and little when there's only two.
If you want to share multiple C++ classes among projects, you should typically place them in a class library. I'm not familiar with Cocoa, though.
If you have many classes, then use a shared library. Make sure to use only abstract classes and no code in the public headers (templates are OK). Provide factories (or plain functions) to instantiate the objects.
If that sounds like too much coding for you then, well, modern version control makes it rather painless to simply re-use the files in several projects, living in the same repository.
P.S. Another factor to consider is how many people work on the project. A shared library is an extra responsibility. If you have a person to take care of it, then it might be worth doing simply from an organizational point of view.