I am migrating a project from Linux to Xcode and I've run into a "versioning" problem.
I need a unique identifier at compile time for my dynamic stuff; on Linux I was using the __COUNTER__ preprocessor macro, but it seems that the GCC 4.2 used in Xcode doesn't know about __COUNTER__ yet...
So, I was wondering what I could do to solve this?
I could upgrade GCC to 4.3 (which understands __COUNTER__) using macports.org or something like that... but I'm a real novice on OS X and not very good on Linux either =[
Or find another way to accomplish this - in that case, a method to give a function/variable a unique identifier. I tried __LINE__, but after a few days you end up declaring stuff on the same line in different files, and working around that is just not productive...
Any help is appreciated!
Thanks,
Jonathan
I need to catalog all classes used in a project, so these classes can be created on the fly from within a factory [...]
Short of using RTTI (which isn't a bad idea if you're allowed to use it; boost::any does this), how about just using strings for the class names? You can retrieve them through a macro.
#include <iostream>
#include <string>

using namespace std;

template <class T>
const char* my_type_id()
{
    return "Unknown";
}

#define REGISTER_TYPE(some_type) \
    template <> inline \
    const char* my_type_id<some_type>() \
    { \
        return #some_type; \
    }

REGISTER_TYPE(int)
REGISTER_TYPE(std::string)

int main()
{
    // displays "int"
    cout << my_type_id<int>() << endl;

    // displays "std::string"
    cout << my_type_id<string>() << endl;

    // displays "Unknown" - we haven't registered char
    cout << my_type_id<char>() << endl;
}
The nicest thing about this approach is that you don't have to worry about problems across translation units or modules. The only thing you have to watch out for is name conflicts, in which case you can include the namespace to help avoid them ("std::string" as opposed to simply "string", e.g.).
We use this solution as an alternative to boost::any in our SDK (we can't use boost there, as that would require our users to have boost installed, or require us to ship parts of boost, which could lead to conflicts for users who have a different version of boost installed). It's not as automatic as boost::any, since it requires manual registration of supported types (closer to boost::variant in this regard), but it doesn't require our SDK users to have RTTI enabled and it works portably across module boundaries (I'm not sure one can depend on RTTI to produce the same information across varying compilers, settings, and modules - I doubt it).
Now you can use these type-associated string IDs however you like. One example would be to map creation functions to these string IDs so that you can create an instance of std::string, for example, through factory::create("std::string"). Note that this is a hypothetical case for demo purposes only, as using a factory to create std::string would be rather odd.
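To make that concrete, here is a minimal sketch of such a factory, building on the my_type_id<>() registration above; the factory type and its members are illustrative assumptions, not an established API:
#include <iostream>
#include <map>
#include <string>

struct factory
{
    typedef void* (*creator_fn)();

    // one registry, keyed by the strings produced by my_type_id<T>()
    static std::map<std::string, creator_fn>& registry()
    {
        static std::map<std::string, creator_fn> r;
        return r;
    }

    template <class T>
    static void* create_impl() { return new T(); }

    template <class T>
    static void register_type()
    {
        registry()[my_type_id<T>()] = &factory::create_impl<T>;
    }

    static void* create(const std::string& name)
    {
        std::map<std::string, creator_fn>::iterator it = registry().find(name);
        return it == registry().end() ? 0 : it->second();
    }
};

int main()
{
    factory::register_type<std::string>();

    // the caller has to know what to cast the void* back to -
    // the usual price of a type-erased factory
    std::string* s = static_cast<std::string*>(factory::create("std::string"));
    *s = "made by the factory";
    std::cout << *s << std::endl;
    delete s;
}
A common base class would avoid the void* and the cast where one exists; for unrelated types like int and std::string, the type-erased pointer is about the best you can do.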
@stinky472: I use code close to what you wrote above...
My problem was that I was using a macro to declare the namespaces of the project, and thereby getting the full name of the class, e.g. class c inside a::b is a::b::c.
What I did was change my code to not rely on the namespaces themselves, and instead add a new argument to the class declaration macro telling it which namespace the class is in, like:
newclass(a::b, c): public d{
};
My problem with __COUNTER__ was that the namespace macros were being used around lots of classes, thus creating the same variable name within each namespace macro; with the new approach above, I don't need the counter anymore...
thanks for the help,
Jonathan
Related
I'm having a bit of a go at developing a platform abstraction library for an application I'm writing, and struggling to come up with a neat way of separating my platform independent code from the platform specific code.
As I see it there are two basic approaches possible: platform independent classes with platform specific delegates, or platform independent classes with platform specific derived classes. Are there any inherent advantages/disadvantages to either approach? And in either case, what's the best mechanism to set up the delegation/inheritance relationship such that the process is transparent to a user of the platform independent classes?
I'd be grateful for any suggestions as to a neat architecture to employ, or even just some examples of what people have done in the past and the pros/cons of the given approach.
EDIT: in response to those suggesting Qt and similar, yes I'm purposely looking to "reinvent the wheel" as I'm not just concerned with developing the app, I'm also interested in the intellectual challenge of rolling my own platform abstraction library. Thanks for the suggestion though!
I'm using platform neutral header files, keeping any platform specific code in the source files (using the PIMPL idiom where necessary). Each platform neutral header has one platform specific source file per platform, with extensions such as *.win32.cpp and *.posix.cpp. The platform specific ones are only compiled on the relevant platforms.
I also use boost libraries (filesystem, threads) to reduce the amount of platform specific code I have to maintain.
It's platform independent class declarations with platform specific definitions.
Pros: Works fairly well, doesn't rely on the preprocessor - no #ifdef MyPlatform, keeps platform specific code readily identifiable, allows compiler specific features to be used in platform specific source files, doesn't pollute the global namespace by #including platform headers.
Cons: It's difficult to use inheritance with pimpled classes, sometimes the PIMPL structs need their own headers so they can be referenced from other platform specific source files.
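A bare-bones sketch of what that layout might look like; the Mutex class and file names here are illustrative, not from any particular codebase:
// mutex.h - platform neutral header, safe to include anywhere
#ifndef MUTEX_H
#define MUTEX_H

class Mutex
{
public:
    Mutex();
    ~Mutex();
    void lock();
    void unlock();

private:
    struct Impl;  // defined differently in each platform's source file
    Impl* impl;

    Mutex(const Mutex&);            // non-copyable, since Impl is owned
    Mutex& operator=(const Mutex&);
};

#endif

// mutex.win32.cpp would define Mutex::Impl around a CRITICAL_SECTION and
// mutex.posix.cpp around a pthread_mutex_t; only one of the two files is
// added to the build on any given platform.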
Another way is to have platform independent conventions, but substitute platform specific source code at compile time.
That is to say that if you imagine a component, Foo, that has to be platform specific (like sockets or GUI elements), but has these public members:
class Foo {
public:
    void write(const char* str);
    void close();
};
Every module that has to use a Foo obviously has #include "Foo.h", but in a platform specific makefile you might have -IWin32, which means that the compiler looks in .\Win32 and finds a Windows specific Foo.h containing the class with the same interface, but maybe Windows specific private members, etc.
So there is never any file which contains Foo as written above, only sets of platform specific files which are used when selected by a platform specific makefile.
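For instance, the Windows-specific header might look something like this (the socket member is an illustrative guess at what such a Foo could hold):
// Win32/Foo.h - found first because of -IWin32; same public interface
#ifndef FOO_H
#define FOO_H

#include <winsock2.h>  // only this platform's header pulls in Windows types

class Foo {
public:
    void write(const char* str);
    void close();

private:
    SOCKET sock;  // Windows specific private member
};

#endif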
Have a look at ACE. It has a pretty good abstraction using templates and inheritance.
I might go for a policy-type thing:
#include <iostream>
#include <string>

template<typename Platform>
struct PlatDetails : private Platform {
    std::string getDetails() const {
        // "this->" is needed because getName() lives in the dependent base
        return std::string("MyAbstraction v1.0; ") + this->getName();
    }
};

// For any serious compatibility functions, these would
// of course have to be in different headers, and the implementations
// would call some platform-specific functions to get precise
// version numbers. Using PImpl would be a smart idea for these
// classes if they need any platform-specific members, since as
// Joe Gauterin says, you want to avoid your application code indirectly
// including POSIX or Windows system headers, containing useless definitions.
struct Windows {
    std::string getName() const { return "Windows"; }
};

struct Linux {
    std::string getName() const { return "Linux"; }
};

#ifdef WIN32
typedef PlatDetails<Windows> PlatformDetails;
#else
typedef PlatDetails<Linux> PlatformDetails;
#endif

int main() {
    // getName() itself is inaccessible here (private inheritance);
    // the application goes through the adaptor's interface instead
    std::cout << PlatformDetails().getDetails() << "\n";
}
There's not a whole lot to choose though between doing this, and doing regular simulated dynamic binding with CRTP, so that the generic thing is the base and the specific thing the derived class:
#include <iostream>
#include <string>

template<typename Platform>
struct PlatDetails {
    std::string getDetails() const {
        // the cast must preserve const, since getDetails() is const
        return std::string("MyAbstraction v1.0; ") +
            static_cast<const Platform*>(this)->getName();
    }
};

struct Windows : PlatDetails<Windows> {
    std::string getName() const { return "Windows"; }
};

struct Linux : PlatDetails<Linux> {
    std::string getName() const { return "Linux"; }
};

#ifdef WIN32
typedef Windows PlatformDetails;
#else
typedef Linux PlatformDetails;
#endif

int main() {
    std::cout << PlatformDetails().getDetails() << "\n";
}
Basically in the latter version, getName must be public (although I think you can use friend) and so must be the inheritance, whereas in the former, the inheritance can be private and/or the interface functions can be protected, if desired. So the adaptor can be a firewall between the interface the platform has to implement, and the interface your application code uses. Furthermore you can have multiple policies in the former (i.e. multiple platform-dependent facets used by the same platform-independent class), but not for the latter.
The advantage of either of them over versions with delegates or non-template-using inheritance, is that you don't need any virtual functions. Arguably this isn't a whole lot of advantage, considering how scary both policy-based design and CRTP are at first contact.
In practice, though, I agree with quamrana that normally you can just have different implementations of the same thing on different platforms:
// Or just set the include path with -I or whatever
#ifdef WIN32
#include "windows/platform.h"
#else
#include "linux/platform.h"
#endif

#include <string>

struct PlatformDetails {
    std::string getDetails() const {
        return std::string("MyAbstraction v1.0; ") +
            porting::getName();
    }
};

// windows/platform.h ("inline" so that defining the function in a
// header doesn't violate the one definition rule)
namespace porting {
    inline std::string getName() { return "Windows"; }
}

// linux/platform.h
namespace porting {
    inline std::string getName() { return "Linux"; }
}
If you'd like to use a full-blown C++ framework that's available for many platforms and liberally licensed, use Qt.
So... you don't want to simply use Qt? For real work using C++, I'd very highly recommend it. It's an absolutely excellent cross-platform toolkit. I just wrote a few plugins to get it working on the Kindle, and now the Palm Pre. Qt makes everything easy and fun. Downright rejuvenating, even. Well, until your first encounter with QModelIndex, but they've supposedly realized they over-engineered it and they're replacing it ;)
As an academic exercise though, this is an interesting problem. As a wheel re-inventor myself, I've even done it a few times now. :)
Short answer: I'd go with PIMPL. (Qt sources have examples a-plenty)
I've used base classes and platform specific derived classes in the past, but it usually ends up a bit messier than I had in mind. I've also done part of an implementation using some degree of function pointers for platform specific bits, and I was even less happy with that.
Both times I ended up with a very strong feeling that I was over-architecting and had lost my way.
I found using private implementation classes (PIMPL) with the different platform-specific bits in different files the easiest to write AND debug. However... don't be too afraid of an #ifdef or two, if it's just a few lines and very clear what's going on. I hate cluttered or nested #ifdef logic, but one or two here and there can really help avoid code duplication.
With PIMPL, you're not constantly refactoring your design as you discover new bits that require different implementations between platforms. That way be dragons.
At the implementation level, hidden from the application... there's nothing wrong with a few platform specific derived classes either. If two platform implementations are fairly well defined and share almost no data members, they'd be a good candidate for that. Just do it after realizing that, not before out of some idea that everything needs to fit your selected pattern.
If anything, the biggest gripe I have about coding today is how easily people seem to get lost in idealism. PIMPL is a pattern, having platform specific derived classes is another pattern. Using function pointers is a pattern. There's nothing that says they're mutually exclusive.
However, as a general guideline... start with PIMPL.
There are also the big boys, such as Qt4 (complete framework + GUI), GTK+ (GUI-only, AFAIK), and Boost (framework only, no GUI). All three support most platforms; GTK+ is C, while Qt4 and Boost are C++ and for the most part template based.
You might also want to take a look at poco:
The POCO C++ Libraries (POCO stands for POrtable COmponents) are open source C++ class libraries that simplify and accelerate the development of network-centric, portable applications in C++. The libraries integrate perfectly with the C++ Standard Library and fill many of the functional gaps left open by it. Their modular and efficient design and implementation makes the POCO C++ Libraries extremely well suited for embedded development, an area where the C++ programming language is becoming increasingly popular, due to its suitability for both low-level (device I/O, interrupt handlers, etc.) and high-level object-oriented development. Of course, the POCO C++ Libraries are also ready for enterprise-level challenges.
(source: pocoproject.org)
Say I have these namespaces:
namespace old
{
    std::array<std::string, 1> characters {"old"};
}

namespace young
{
    std::array<std::string, 1> characters {"young"};
}
Then I want the user to tell me at the beginning which version he is using, and then call the appropriate namespace throughout the program.
I have tried using namespace depending on the input, but it doesn't work because I need to refer to the correct namespace in functions in other source files. I was thinking: maybe I can pass the namespace as a function parameter? Or do something clever with templates?
EDIT:
When I refer to "user" I mean somebody that is using my executable, a person playing my game.
What I want to do is to ask him the version he is going to use e.g. US version (things have some names), or UK version (things have other names).
All that changes is the names I use. But I want him to be able to switch between versions at any time.
I hope it is clear, please let me know if you need further clarification.
There is no way to pass a namespace as a function parameter or template parameter. Users may use it as:
using namespace old;
characters[0] = 'O';
or code as:
old::characters[0] = 'O';
UPDATE: after the original question was updated.
Namespaces are a compile-time construct and have no effect on runtime behavior. What you need is more along the lines of:
enum Language
{
    ENGLISH_UK, ENGLISH_US
};

std::array<std::string, 2> label = {
    "colour", // for British English
    "color"   // for US English
};
And then in the code:
static Language lang = ENGLISH_UK;
std::cout << label[lang] << std::endl;
So, if the user switches versions, you do not need to recompile the whole app.
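Putting the pieces together, a minimal runnable sketch of that idea might look like this (the prompt text and choice handling are illustrative):
#include <array>
#include <iostream>
#include <string>

enum Language
{
    ENGLISH_UK, ENGLISH_US
};

int main()
{
    std::array<std::string, 2> label = {{
        "colour", // for British English
        "color"   // for US English
    }};

    int choice = 0;
    std::cout << "Version? 0 = UK, 1 = US: ";
    std::cin >> choice;                     // the player picks at runtime

    Language lang = (choice == 1) ? ENGLISH_US : ENGLISH_UK;
    std::cout << label[lang] << std::endl;  // prints "colour" or "color"
}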
The short answer is no, because which functions are called and which variables are accessed at a particular location in your code (e.g. where you write characters) is determined at compile time.
The slightly longer answer is that you can create wrapper functions and references in a separate namespace and let them forward to one or the other depending on the user (as long as the types are the same).
E.g.
namespace current {
    int namespace_to_use = 1; // can be set by some initialization function in your code

    std::array<std::string, 1>& get_characters() {
        return namespace_to_use == 0 ? old::characters : young::characters;
    }
}
I wouldn't call that good application design, though, and there are many more advanced/better versions of this (e.g. based on dynamic polymorphism and the factory pattern, or on pointers/references). What fits best depends on your needs and your level of experience.
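For illustration, here is a C++11 sketch of the dynamic-polymorphism variant mentioned above; the Wording interface and makeWording factory are assumed names, not from the question:
#include <array>
#include <cstddef>
#include <iostream>
#include <memory>
#include <string>

struct Wording
{
    virtual ~Wording() {}
    virtual const std::string& character(std::size_t i) const = 0;
};

struct OldWording : Wording
{
    const std::string& character(std::size_t i) const override
    {
        static const std::array<std::string, 1> c = {{"old"}};
        return c[i];
    }
};

struct YoungWording : Wording
{
    const std::string& character(std::size_t i) const override
    {
        static const std::array<std::string, 1> c = {{"young"}};
        return c[i];
    }
};

// a tiny factory: the user's choice picks the implementation at runtime
std::unique_ptr<Wording> makeWording(bool useOld)
{
    if (useOld)
        return std::unique_ptr<Wording>(new OldWording);
    return std::unique_ptr<Wording>(new YoungWording);
}

int main()
{
    std::unique_ptr<Wording> w = makeWording(true);
    std::cout << w->character(0) << std::endl; // prints "old"
}
Code in other source files only ever sees the Wording interface, so no namespace selection is needed anywhere else.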
I was wondering if there is some standardized way of getting type sizes in memory at the pre-processor stage - so in macro form, sizeof() does not cut it.
If there isn't a standardized method, are there conventional methods that most IDEs use anyway?
Are there any other methods that anyone can think of to get such data?
I suppose I could do a two stage build kind of thing, get the output of a test program and feed it back into the IDE, but that's not really any easier than #defining them in myself.
Thoughts?
EDIT:
I just want to be able to swap code around with
#ifdef / #endif
Was it naive of me to think that an IDE or the underlying compiler might define that information in some macro? Sure, the preprocessor doesn't get information about actual machine code generation, but the IDE and the compiler do, and they invoke the preprocessor and could declare such things to it in advance.
EDIT FURTHER
What I imagined as a conceivable concept was this:
The C++ committee writes a standard that says, for every type (perhaps only those native to C++), the compiler has to give the IDE a header file, included by default, that declares the size in memory that every native type uses, like so:
#define CHAR_SIZE 8
#define INT_SIZE 32
#define SHORT_INT_SIZE 16
#define FLOAT_SIZE 32
// etc
Is there a flaw in this process somewhere?
EDIT EVEN FURTHER
In order to get around the multi-platform build-stage problem, perhaps this standard could mandate that a simple program like the one shown by lacqui be compiled and run by default; this way, whatever gets the type sizes will be the same machine that compiles the code in the second or 'normal' build stage.
Apologies:
I've been using 'Variable' instead of 'Type'
Depending on your build environment, you may be able to write a utility program that generates a header that is included by other files:
#include <stdio.h>

FILE* make_header_file(void); // defined by you - opens the header for writing

int main(void) {
    FILE* out = make_header_file();
    fprintf(out, "#ifndef VARTYPES_H\n#define VARTYPES_H\n");
    size_t intsize = sizeof(int);
    if (intsize == 4)
        fprintf(out, "#define INTSIZE_32\n");
    else if (intsize == 8)
        fprintf(out, "#define INTSIZE_64\n");
    // .....
    else
        fprintf(out, "#define INTSIZE_UNKNOWN\n");
    fprintf(out, "#endif\n"); // close the include guard
    fclose(out);
    return 0;
}
Of course, edit it as appropriate. Then include "vartypes.h" everywhere you need these definitions.
EDIT: Alternatively:
fprintf(out, "#define INTSIZE_%d\n", (sizeof(int) / 8));
fprintf(out, "#define INTSIZE %d\n", (sizeof(int) / 8));
Note the lack of underscore in the second one - the first creates INTSIZE_32, which can be used in #ifdef checks; the second creates INTSIZE, which can be used in expressions, for example char bits[INTSIZE];
WARNING: This will only work with an 8-bit char. Most modern home and server computers will follow this pattern; however, some computers may use different sizes of char
Sorry, this information isn't available at the preprocessor stage. To compute the size of a variable you have to do just about all the work of parsing and abstract evaluation - not quite code generation, but you have to be able to evaluate constant-expressions and substitute template parameters, for instance. And you have to know considerably more about the code generation target than the preprocessor usually does.
The two-stage build thing is what most people do in practice, I think. Some IDEs have an entire compiler built into them as a library, which lets them do things more efficiently.
Why do you need this anyway?
The <cstdint> header provides typedefs and #defines that describe all of the standard integer types, including typedefs for exact-width integer types and #defines for their full value ranges.
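For instance (a minimal sketch; these names are standard in C++11's <cstdint>, or C's <stdint.h> before that):
#include <cstdint>
#include <iostream>

int main() {
    std::int32_t i = INT32_MAX; // exact-width type plus its range macro
    std::uint8_t b = UINT8_MAX;

    std::cout << sizeof(i) << "\n";           // prints 4
    std::cout << static_cast<int>(b) << "\n"; // prints 255
}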
No, it's not possible. Just for example, it's entirely possible to run the preprocessor on one machine, and do the compilation entirely separately on a completely different machine with (potentially) different sizes for (at least some) types.
For a concrete example, consider that the normal distribution of SQLite is what they call an "amalgamation" -- a single already-preprocessed source code file that you actually compile on your computer.
You want to generate different code based on the size of some type? Maybe you can do this with template specializations:
#include <iostream>

template <int Tsize>
struct dosomething {
    void doit() { std::cout << "generic version" << std::endl; }
};

template <>
void dosomething<sizeof(int)>::doit()
{ std::cout << "int version" << std::endl; }

template <>
void dosomething<sizeof(char)>::doit()
{ std::cout << "char version" << std::endl; }

int main(int argc, char** argv)
{
    typedef int foo;
    dosomething<sizeof(foo)> myfoo;
    myfoo.doit(); // prints "int version"
}
How would that work? The size isn't known at the preprocessing stage. At that point, you only have the source code. The only way to find the size of a type is to compile its definition.
You might as well ask for a way to get the result of running a program at the compilation stage. The answer is "you can't, you have to run the program to get its output". Just like you need to compile the program in order to get the output from the compiler.
What are you trying to do?
Regarding your edit, it still seems confused.
Such a header could conceivably exist for built-in types, but never for variables. A macro could perhaps be written to replace known type names with a hardcoded number, but it wouldn't know what to do if you gave it a variable name.
Once again, what are you trying to do? What is the problem you're trying to solve? There may be a sane solution to it if you give us a bit more context.
For common build environments, many frameworks have this set up manually. For instance,
http://www.aoc.nrao.edu/php/tjuerges/ALMA/ACE-5.5.2/html/ace/Basic__Types_8h-source.html
defines things like ACE_SIZEOF_CHAR. Another library described in a book I bought called POSH does this too, in a very includable way: http://www.hookatooka.com/wpc/
The term "standardized" is the problem. There's not standard way of doing it, but it's not very difficult to set some pre-processor symbols using a configuration utility of some sort. A real simple one would be compile and run a small program that checks sizes with sizeof and then outputs an include file with some symbols set.
Is there a quick way of outputting the names of enumerated values? I suppose you know what I mean, and perhaps this isn't possible at all, as of course all of this data becomes irrelevant during the compile process, but I'm using MSVC in debug mode, so is it possible?
Metamacros cause all sorts of havoc with IntelliSense and the like, but they can make this task easy...
#define MY_ENUMS(e_) \
    e_(Enum_A) \
    e_(Enum_B) \
    e_(Enum_C)

// note: the separator lives in each expander rather than in MY_ENUMS,
// so the switch-statement expander below stays valid too
#define ENUM_EXPANDER(e_) e_,

enum MyEnums
{
    MY_ENUMS(ENUM_EXPANDER)
    CountOfMyEnums
};

#define STRING_EXPANDER(e_) #e_,

const char* g_myEnumStrings[] =
{
    MY_ENUMS(STRING_EXPANDER)
};
Possibly even
#define CASE_EXPANDER(e_) case e_: return #e_;

const char* GetEnumName(MyEnums e)
{
    switch (e)
    {
    MY_ENUMS(CASE_EXPANDER)
    default:
        return "Invalid enum value";
    }
}
Different "expander macros" can be used to fill maps or other data structures of your choice. I've used this sort of horror to parse enums out of config files (so the person authoring the config file could use the enum rather than the index).
I just put the enum names in a lookup table (or you could use a map<>) with the enum value as a key and have a function perform the lookup.
It's low-tech, but usually not too much of a pain.
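For example, a sketch of that lookup with a made-up enum:
#include <map>
#include <string>

enum Color { Red, Green, Blue };

const char* GetColorName(Color c)
{
    static std::map<Color, const char*> names;
    if (names.empty()) // filled once, on first call
    {
        names[Red]   = "Red";
        names[Green] = "Green";
        names[Blue]  = "Blue";
    }
    std::map<Color, const char*>::const_iterator it = names.find(c);
    return it == names.end() ? "Unknown" : it->second;
}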
In some projects I'd have a weird header/macro arrangement that could build the enum definition using a single declaration-like item per enum name. My opinion of that technique wavers back and forth between "handy" and "kludgy", though.
This is a common C++ problem, solved with the "typesafe enum pattern". Usually this is done using some crazy preprocessor definitions or code generators. A quick search for "typesafe enum pattern C++" will show you those approaches. Personally, I have my own code generator for C++ enumerations, which runs as an MSVC custom build step for the header files containing the enumerations.
Unfortunately not. All of the enum names are lost by the compiler. The PDB file has them, so the debugger can work it out, but otherwise the only way to do it would be to write a function that does a switch and returns a string.