Why do so many libraries define their own fixed width integers? - c++

Since at least C++11 we have had lovely fixed-width integers out of the box, in C++'s <cstdint> and C's <stdint.h> (for example std::uint32_t and std::int8_t, with or without the std:: prefix), along with macros for integer constants of at-least-that-width types (INT16_C, UINT32_C and so on).
Yet every day we have to deal with libraries that define their own fixed-width integers: you might have seen sf::Int32, quint32, boost::uint32_t, Ogre::uint32, ImS32, ... I could go on and on if you wanted me to. You probably know a couple more yourself.
Sometimes these typedefs (often macro definitions, too) lead to conflicts, for example when you want to pass a std fixed-width integer to a function from a library that expects a fixed-width integer of exactly the same width, but defined differently.
The point of fixed-width integers is that they have a fixed size, which is what we need in many situations, as you know. So why would all these libraries go and typedef exactly the same integers we already have in the C++ standard? Those extra definitions are sometimes confusing, redundant, and can invade your codebase, which are very bad things. And if they don't have the width and signedness they promise, they at least sin against the principle of least astonishment, so what is their point, I hereby ask you?

Why do so many libraries define their own fixed width integers?
Probably for some of the reasons below:
they started before C++11 or C11 (examples: GTK, Qt, libraries internal to GCC, Boost, FLTK, GTKmm, Jsoncpp, Eigen, Dlib, OpenCV, Wt)
they want readable code within their own namespace or class (having their own namespace, as Qt does, may improve the readability of well-written code).
they are build-time configurable (e.g. with GNU autoconf).
they want to be compilable with old C++ compilers (e.g. C++03 ones)
they want to be cross-compilable to cheap embedded microcontrollers whose compiler is not a full C++11 compiler.
they may have generic code (or templates, e.g. in Eigen or Dlib) to support arbitrary-precision arithmetic (bignums), perhaps using GMPlib.
they want to be somehow provable with Frama-C, or certified under DO-178C (for embedded critical software systems)
they are processor specific (e.g. asmjit which generates machine code at runtime on a few architectures)
they might interface to specific hardware or programming languages (Tensorflow, OpenCL, Cuda).
they want to be usable from Python or GNU guile.
they could be operating-system specific.
they add some additional runtime checks, e.g. against division by 0 (or other undefined behavior) or overflow (that the C++ standard cannot require, for performance or historical reasons)
they are intended to be easily usable from machine-generated C++ code (e.g. RefPerSys, ANTLR, ...)
they are designed to be callable from C code (e.g. libgccjit).
etc. Finding other good reasons is left as an exercise to the reader. (A sketch of the first reason, the pre-C++11 case, follows below.)
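As an illustration of that first reason, here is a minimal sketch (the library name MyLib and the header are hypothetical, not taken from any of the libraries above) of how a pre-C++11 library might define its own fixed-width types, deferring to <stdint.h> where it exists:

// mylib_stdint.h - hypothetical portability header
#if defined(_MSC_VER) && _MSC_VER < 1600
// old Visual C++ has no <stdint.h>; assume the usual 32-bit-int ABI
namespace MyLib {
    typedef signed char    int8;
    typedef unsigned char  uint8;
    typedef short          int16;
    typedef unsigned short uint16;
    typedef int            int32;
    typedef unsigned int   uint32;
}
#else
#include <stdint.h>
namespace MyLib {
    typedef int8_t   int8;
    typedef uint8_t  uint8;
    typedef int16_t  int16;
    typedef uint16_t uint16;
    typedef int32_t  int32;
    typedef uint32_t uint32;
}
#endif

Once such a header exists it tends to outlive the compilers that motivated it, which is part of why the zoo of sf::Int32-style names persists.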

Related

How to compile C++ freestanding / baremetal with g++?

C++ has many aspects, included by default, that are not desirable or even functional on embedded bare-metal / freestanding platforms.
How can I configure g++ to compile C++ in a manner suitable for embedded bare-metal / freestanding?
The compiler option -ffreestanding is explicitly there for this purpose. It enables implementation-defined forms of main and it will disable various assumptions about standard library functions, optimizing code a bit differently in some situations.
Unfortunately, in terms of main(), C++ explicitly doesn't allow void main (void) as an implementation-defined form, unlike C. There is no sensible solution to this; it's a language flaw. Name it main_ or some such.
However, this doesn't disable or block any dangerous and unsuitable C++ features from being used - that burden lies on the programmer who picked C++. You have to manually ensure that things like heap allocation, RTTI, exceptions and so on aren't present. Removing the entire heap segment from your linker script is a sensible thing to do - it should block things like std::vector or std::string from linking.
Then, depending on the target, there might be suitable gcc compiler ports. For example, in the case of ARM, one toolchain is called "arm-none-eabi" (distributed as gcc-arm-none-eabi), which is the suitable one for bare-metal Cortex-M microcontrollers.
At a minimum, you'll need a "C run-time" (CRT) for the target - either use one provided by the tool chain (generally recommended) or create one yourself (compile with -nostdlib). It's not a beginner task to create one, particularly not for targets with advanced MMUs. Some general advice on how to do so can be found here. Creating one for C++ is slightly more intricate than for C, since it also has to include the constructor calls for all objects with static storage duration.
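As a rough sketch (the flags and file names are illustrative, not a definitive recipe; the exact set depends on your target and linker script), a freestanding build for a Cortex-M part might look like:

arm-none-eabi-g++ -ffreestanding -fno-exceptions -fno-rtti -nostdlib -mcpu=cortex-m4 -T mylinker.ld startup.cpp app.cpp

with an entry point deliberately not named main, as discussed above:

// app.cpp - hypothetical bare-metal entry, called from your startup code
extern "C" void main_(void)
{
    // set up peripherals here, then run the application loop forever
    for (;;) {
        // application code
    }
}

Here -fno-exceptions and -fno-rtti back up the manual discipline described above, and mylinker.ld is your own linker script, ideally one without a heap segment.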

Why isn't the C++ standard library already pre-included in any C++ source?

In C++ the standard library is wrapped in the std namespace and the programmer is not supposed to define anything inside that namespace. Of course the standard include files don't step on each other names inside the standard library (so it's never a problem to include a standard header).
Then why isn't the whole standard library included by default instead of forcing programmers to write for example #include <vector> each time? This would also speed up compilation as the compilers could start with a pre-built symbol table for all the standard headers.
Pre-including everything would also solve some portability problems: for example when you include <map> it's defined what symbols are taken into std namespace, but it's not guaranteed that other standard symbols are not loaded into it and for example you could end up (in theory) with std::vector also becoming available.
It happens sometimes that a programmer forgets to include a standard header but the program compiles anyway because of an include dependence of the specific implementation. When moving the program to another environment (or just another version of the same compiler) the same source code could however stop compiling.
From a technical point of view, I can imagine a compiler just preloading (with mmap) an optimal perfect-hash symbol table for the standard library.
This should be faster to do than loading and doing a C++ parse of even a single standard include file and should be able to provide faster lookup for std:: names. This data would also be read-only (thus probably allowing a more compact representation and also shareable between multiple instances of the compiler).
These are, however, just "should"s, as I never implemented this.
The only downside I see is that we C++ programmers would lose compilation coffee breaks and Stack Overflow visits :-)
EDIT
Just to clarify: the main advantage I see is for programmers who today, despite the C++ standard library being a single monolithic namespace, are required to know which sub-part (include file) contains which function/class. To add insult to injury, when they make a mistake and forget an include file, the code may still compile or not depending on the implementation (thus leading to non-portable programs).
The short answer is: because that is not the way the C++ language is supposed to be used.
There are good reasons for that:
namespace pollution - even if this could be mitigated, because the std namespace is supposed to be self-coherent and programmers are not forced to use using namespace std;. But including the whole library combined with using namespace std; would certainly lead to a big mess...
forcing programmers to declare the modules they want to use avoids inadvertently calling a wrong standard function, because the standard library is now huge and not all programmers know all its modules
history: C++ still has a strong inheritance from C, where namespaces do not exist and where the standard library is supposed to be used like any other library.
Along the lines you suggest, the Windows API is an example where you have only one big include (windows.h) that pulls in many smaller include files. And in fact, precompiled headers allow that to be fast enough.
So IMHO a new language deriving from C++ could decide to automatically declare the whole standard library. A new major release could also do it, but it could break code that makes intensive use of the using namespace directive and has custom implementations using the same names as some standard modules.
But all the common languages I know (C#, Python, Java, Ruby) require the programmer to declare the parts of the standard library that they want to use, so I suppose that systematically making every piece of the standard library available is still more awkward than really useful for the programmer, at least until someone finds a way to declare the parts that should not be loaded - that's why I spoke of a new derivative of C++.
Most of the C++ standard library is template-based, which means that the code it generates depends ultimately on how you use it. In other words, there is very little that could be compiled before you instantiate a template, as in std::vector<MyType> m_collection;.
Also, C++ is probably the slowest language to compile, and there is a lot of parsing work that compilers have to do when you #include a header file that itself includes other headers.
Well, first things first: C++ tries to adhere to "you only pay for what you use".
The standard-library is sometimes not part of what you use at all, or even of what you could use if you wanted.
Also, you can replace it if there's a reason to do so: See libstdc++ and libc++.
That means just including it all without question isn't actually such a bright idea.
Anyway, the committee is slowly plugging away at creating a module system (it takes lots of time; hopefully it will land in C++1z: C++ Modules - why were they removed from C++0x? Will they be back later on?), and when that's done most downsides to including more of the standard library than strictly necessary should disappear, and the individual modules should more cleanly exclude symbols they need not contain.
Also, as those modules are pre-parsed, they should give the compilation-speed improvement you want.
You offer two advantages of your scheme:
Compile-time performance. But nothing in the standard prevents an implementation from doing what you suggest[*] with a very slight modification: that the pre-compiled table is only mapped in when the translation unit includes at least one standard header. From the POV of the standard, it's unnecessary to impose potential implementation burden over a QoI issue.
Convenience to programmers: under your scheme we wouldn't have to specify which headers we need. We do this in order to support C++ implementations that have chosen not to implement your idea of making the standard headers monolithic (which currently is all of them), and so from the POV of the C++ standard it's a matter of "supporting existing practice and implementation freedom at a cost to programmers considered acceptable". Which is kind of the slogan of C++, isn't it?
Since no C++ implementation (that I know of) actually does this, my suspicion is that in point of fact it does not grant the performance improvement you think it does. Microsoft provides precompiled headers (via stdafx.h) for exactly this performance reason, and yet it still doesn't give you an option for "all the standard libraries"; instead it requires you to say what you want included. It would be dead easy for this or any other implementation to provide an implementation-specific header defined to have the same effect as including all standard headers. This suggests to me that, at least in Microsoft's opinion, there would be no great overall benefit to providing that.
If implementations were to start providing monolithic standard libraries with a demonstrable compile-time performance improvement, then we'd discuss whether or not it's a good idea for the C++ standard to continue permitting implementations that don't. As things stand, it has to.
[*] Except perhaps for the fact that <cassert> is defined to have different behaviour according to the definition of NDEBUG at the point it's included. But I think implementations could just preprocess the user's code as normal, and then map in one of two different tables according to whether it's defined.
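To make that footnote concrete, here is a minimal sketch (plain standard behaviour, nothing implementation-specific) of why <cassert> resists a single pre-built table - it may legally be included several times with different results:

#define NDEBUG
#include <cassert>            // assert(x) now expands to a no-op
void f() { assert(false); }   // compiled away

#undef NDEBUG
#include <cassert>            // assert(x) is active again
void g() { assert(false); }   // fails at runtime

So a hypothetical pre-mapped implementation would indeed need at least two variants of that header's table, as suggested above.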
I think the answer comes down to C++'s philosophy of not making you pay for what you don't use. It also gives you more flexibility: you aren't forced to use parts of the standard library if you don't need them. And then there's the fact that some platforms might not support things like throwing exceptions or dynamically allocating memory (like the processors used in the Arduino, for example). And there's one other thing you said that is incorrect: you are allowed to add specializations of standard library templates, such as std::swap, to the std namespace for your own classes.
First of all, I am afraid that introducing a prelude now is a bit late in the game. Or rather, seeing as preludes are not easily extensible, we have to content ourselves with a very thin one (built-in types...).
As an example, let's say that I have a C++03 program:
#include <boost/unordered_map.hpp>
using namespace std;
using boost::unordered_map;
static unordered_map<int, string> const Symbols = ...;
It all works fine, but suddenly when I migrate to C++11:
error: ambiguous symbol "unordered_map", do you mean:
- std::unordered_map
- boost::unordered_map
Congratulations, you have invented the least backward compatible scheme for growing the standard library (just kidding, whoever uses using namespace std; is to blame...).
Alright, let's not pre-include them, but still bundle the perfect hash table anyway. The performance gain would be worth it, right?
Well, I seriously doubt it. First of all because the Standard Library is tiny compared to most other header files that you include (hint: compare it to Boost). Therefore the performance gain would be... smallish.
Oh, not all programs are big; but the small ones compile fast already (by virtue of being small) and the big ones include much more code than the Standard Library headers so you won't get much mileage out of it.
Note: and yes, I did benchmark the file look-up in a project with "only" a hundred -I directives; the conclusion was that pre-computing the "include path" to "file location" map and feeding it to gcc resulted in a 30% speed-up (after using ccache already). Generating it and keeping it up-to-date was complicated, so we never used it...
But could we at least include a provision that the compiler could do it in the Standard?
As far as I know, it is already included. I cannot remember if there is a specific blurb about it, but the Standard Library is really part of the "implementation" so resolving #include <vector> to an internal hash-map would fall under the as-if rule anyway.
But they could do it, still!
And lose any flexibility. For example, Clang can use either libstdc++ or libc++ on Linux, and I believe it to be compatible with the Dinkumware derivative that ships with VC++ (or if not completely, at least largely).
This is another point of customization: if the Standard Library does not fit your needs or your platform, then by virtue of being treated mostly like any other library, you can replace parts or most of it with relative ease.
But! But!
#include <stdafx.h>
If you work on Windows, you will recognize it. This is called a pre-compiled header. It must be included first (or all benefits are lost), and in exchange, instead of parsing files, you pull in an efficient binary representation of those parsed files (i.e., a serialized AST version, possibly with some type resolution already performed), which saves maybe 30% to 50% of the work. Yep, this is close to your proposal; this is Computer Science for you, there's always someone else who thought of it first...
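For the record, a minimal sketch of that convention (the header contents are illustrative; /Yc and /Yu are the actual MSVC options for creating and using a precompiled header):

// stdafx.h - gather the heavy, rarely-changing headers in one place
#include <vector>
#include <string>
#include <map>

// every .cpp in the project then starts with:
#include "stdafx.h"

One translation unit is compiled with /Yc"stdafx.h" to produce the .pch file, and the remaining ones with /Yu"stdafx.h" to reuse it.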
Clang and gcc have a similar mechanism; though from what I've heard it can be so painful to use that people prefer the more transparent ccache in practice.
And all of these will come to naught with modules.
This is the true solution to this pre-processing/parsing/type-resolving madness. Because modules are truly isolated (i.e., unlike headers, not subject to inclusion order), an efficient binary representation (like pre-compiled headers) can be pre-computed for each and every module you depend on.
This not only means the Standard Library, but all libraries.
Your solution, more flexible, and dressed to the nines!
One could use an alternative implementation of the C++ Standard Library to the one shipped with the compiler. Or wrap headers with one's own definitions, to add, enable or disable features (see GNU wrapper headers; a tiny sketch follows). Plain-text headers and the C inclusion model are a more powerful and flexible mechanism than a binary black box.
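As a tiny sketch of that wrapper idea (using #include_next, the GCC extension that the GNU wrapper headers rely on; the fixup shown is hypothetical):

// stdlib.h - wrapper placed earlier on the include path than the real header
#include_next <stdlib.h>      // first pull in the system's own <stdlib.h>

// then add, patch or hide declarations as needed, e.g.:
#ifndef HAVE_WORKING_PUTENV   // hypothetical feature-test macro
int my_putenv(char *string);  // hypothetical replacement declaration
#define putenv my_putenv
#endif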

Should a C++ embedded application use a common header with typedefs for built-in C++ types?

It's common practice where I work to avoid directly using built-in types and instead include a standardtypes.h that has items like:
// \Common\standardtypes.h
typedef double Float64_T;
typedef int SInt32_T;
Almost all components and source files become dependent on this header, but some people argue that it's needed to abstract the size of the types (in practice this hasn't been needed).
Is this a good practice (especially in large-componentized systems)? Are there better alternatives? Or should the built-in types be used directly?
You can use the standardized versions available in modern C and C++ implementations in the header file: stdint.h
It has types of the like: uint8_t, int32_t, etc.
In general this is a good way to protect code against platform dependency. Even if you haven't experienced a need for it to date, it certainly makes the code easier to interpret, since one doesn't need to guess at a storage size as one would for int or long, which vary in size with the platform.
It would probably be better to use the standard C99 types defined in stdint.h et al., e.g. uint8_t, int32_t, etc. I'm not sure if they are part of C++ yet, but they are in C99.
Since it hasn't been said yet, and even though you've already accepted an answer:
Only use concretely-sized types when you need concretely-sized types. Mostly, this means when you're persisting data, directly interacting with hardware, or using some other code (e.g. a network stack) that expects concretely-sized types. Most of the time, you should just use the abstractly-sized types, so that your compiler can optimize more intelligently and so that future readers of your code aren't burdened with useless details (like the size and signedness of a loop counter); see the sketch below.
(As several other responses have said, use stdint.h, not something homebrew, when writing new code and not interfacing with the old.)
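A minimal sketch of that guideline (the record layout is hypothetical, purely for illustration):

#include <cstdint>
#include <cstddef>

// fixed widths where layout matters: a record persisted to disk or wire
struct Record {
    std::uint32_t id;      // exactly 32 bits on every platform
    std::int16_t  delta;   // exactly 16 bits
};

// abstract widths where they don't: a loop counter
std::uint32_t sumIds(const Record *records, std::size_t count)
{
    std::uint32_t total = 0;
    for (std::size_t i = 0; i < count; ++i)
        total += records[i].id;   // the counter's exact width is irrelevant here
    return total;
}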
The biggest problem with this approach is that so many developers do it that, if you use a third-party library, you are likely to end up with a symbol-name conflict or multiple names for the same types. Where necessary, it would be wise to stick to the standard implementation provided by C99's stdint.h.
If your compiler does not provide this header (as for example VC++), then create one that conforms to that standard. One for VC++ for example can be found at https://github.com/chemeris/msinttypes/blob/master/stdint.h
In your example I can see little point in defining size-specific floating-point types, since these are usually tightly coupled to the FP hardware of the target and the representation used. Also, the range and precision of a floating-point value are determined by the combination of exponent width and significand width, so the overall width alone does not tell you much, or guarantee compatibility across platforms. With respect to single and double precision there is far less variability across platforms, most of which use IEEE-754 representations. On some 8-bit compilers float and double are both 32 bits, while long double on x86 GCC is 80 bits, but only 64 bits in VC++. The x86 FPU supports 80 bits in hardware.
I think it's not good practice. Good practice is to use something like uint32_t where you really need a 32-bit unsigned integer, and if you don't need a particular range, to use just unsigned.
It might matter if you are making cross-platform code, where the size of native types can vary from system to system. For example, the wchar_t type can vary from 8 bits to 32 bits, depending on the system.
Personally, however, I don't think the approach you describe is as practical as its proponents may suggest. I would not use that approach, even for a cross-platform system. For example, I'd rather build my system to use wchar_t directly, and simply write the code with an awareness that the size of wchar_t will vary depending on platform. I believe that is FAR more valuable.
As others have said, use the standard types as defined in stdint.h. I disagree with those who say to only use them in some places. That works okay when you work with a single processor. But when you have a project which uses multiple processor types (e.g. ARM, PIC, 8051, DSP) (which is not uncommon in embedded projects) keeping track of what an int means or being able to copy code from one processor to the other almost requires you to use fixed size type definitions.
At least it is required for me, since in the last six months I worked on 8051, PIC18, PIC32, ARM, and x86 code for various projects and I can't keep track of all the differences without screwing up somewhere.

"Uint32", "int16" and the like; are they standard c++?

I'm quite new to C++, but I've got the hang of the fundamentals. I've come across the use of "Uint32" (in various capitalizations) and similar data types when reading other people's code, but I can't find any documentation mentioning them. I understand that "Uint32" is an unsigned int with 32 bits, but my compiler doesn't. I'm using Visual C++ Express, and it doesn't recognize any form of it from what I can tell.
Are there compilers that recognize those data types by default, or have these programmers declared them themselves, as classes or #define constants?
I can see a point in using them, namely knowing exactly how long your integer will be, since the normal declaration seems to vary depending on the system. Are there any other pros or cons to using them?
Unix platforms define these types in stdint.h, this is the preferred method of ensuring type sizing when writing portable code.
Microsoft's platforms do not define this header, which is a problem when going cross-platform. If you're not already using the Boost Integer Library, I recommend getting Paul Hsieh's portable stdint.h implementation of this header for use on Microsoft platforms.
Update: Visual Studio 2010 and later do define this header.
The C99 header file stdint.h defines typedefs of this nature, of the form uint32_t. As far as I know, standard C++ doesn't yet provide a cstdint version of this with the symbols in namespace std, but some compilers may, and you will typically be able to include the C99 header from C++ code anyway. The next version of C++ will provide the cstdint header.
You will often see code from other people who use non-standard variations on this theme, such as Uint32_t or Uint32 or uint32, etc. They typically just provide a single header that defines these types within the project. Probably this code was originally developed a long time ago, and they never bothered to sed-replace the definitions away when C99 compilers became common.
Visual C++ doesn't support the fixed-width integer types, because it doesn't include support for C99. Check out the answers to my question on this subject for various options you have for using them.
The main reason for using them is that you then don't have to worry about possible problems arising when switching between a 64-bit and a 32-bit OS.
Also, if you are interfacing with any legacy code that you knew was destined for 32-bit or even 16-bit platforms, it avoids potential problems there as well.
Try UINT32 for Microsoft.
The upper case makes it clear that this is defined as a macro. If you try to compile using a different compiler that doesn't already contain the macro, you can define it yourself and your code doesn't have to change, as in the sketch below.
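For instance, a minimal compatibility shim (hypothetical, and it assumes a platform where plain int is 32 bits):

#ifndef UINT32
#define UINT32 unsigned int   /* assumes 32-bit int on this platform */
#endif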
uint32 et al. are defined by macros. They solve a historic portability problem: there were few guarantees across platforms (back when there were more platform options than now) of how many bits you'd get when you asked for an int or a short. (One now-defunct C compiler for the Mac provided 8-bit shorts!)

What's your favorite way of dealing with cross-platform development? [closed]

I'm currently working on cross-platform applications and was just curious as to how other people tackle problems such as:
Endianess
Floating point support (some systems emulate in software, VERY slow)
I/O systems (i.e. display, sound, file access, networking, etc. )
And of course, the plethora of compiler differences
Obviously this is targeted at languages like C/C++ which don't abstract most of this stuff away (unlike Java or C#, which aren't supported on a lot of systems).
And if you were curious, the systems I'm developing on are the Nintendo DS, Wii, PS3, XBox360 and PC.
EDIT
There have been a lot of really good answers on here, ranging from how to handle the differences yourself, to library suggestions (even the suggestion of just giving in and using wine). I'm not actually looking for a solution (already have one), but was just curious as to how others tackle this situation as it is always good to see how others think/code so you can continue to evolve and grow.
Here's the way I've tackled the problem (and, if you haven't guessed from this list of systems above, I'm developing console/windows games). Please keep in mind that the systems I work on generally don't have cross-platform libraries already written for them (Sony actually recommends that you write your own rendering engine from scratch and just use their OpenGL implementation, which doesn't quite follow the standards anyway, as a reference).
Endianess
All of our assets can be custom-made for each system. All of our raw data (except for textures) is stored in XML, which we convert to a system-specific binary format when the project is built. Seeing as how we are developing for game consoles, we don't need to worry about data being transferred between platforms with different endian formats (only the PC allows users to do this, and thus it is insulated from the other systems as well).
Floating point support
Most modern systems handle floating-point values fine; the exception for us is the Nintendo DS (and the GBA, but that's pretty much a dead platform for us these days). We handle this through 2 different classes. The first is a "fixed point" class (templated, so you can specify which integer type to use and how many bits for the fractional part) which implements all arithmetic operators (taking care of the bit shifts) and automates type conversions. The second is a "floating point" class, which is basically just a wrapper around float for the most part; the only difference is that it also implements the shift operators. By implementing the shift operators, we can use bit shifts for fast multiplications/divisions on the DS and then seamlessly transition to platforms that work better with floats (like the XBox360). A sketch of the fixed-point idea follows.
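A minimal sketch of such a fixed-point class (not the actual engine class, just the core trick of storing scaled integers and shifting after multiplication):

#include <cstdint>

template <typename Int, int FracBits>
struct Fixed {
    Int raw;   // value scaled by 2^FracBits

    static Fixed fromFloat(float f)
    { return Fixed{ static_cast<Int>(f * (Int(1) << FracBits)) }; }

    float toFloat() const
    { return static_cast<float>(raw) / (Int(1) << FracBits); }

    Fixed operator+(Fixed o) const { return Fixed{ Int(raw + o.raw) }; }

    Fixed operator*(Fixed o) const
    {
        // widen before multiplying, then shift back down
        return Fixed{ static_cast<Int>(
            (static_cast<std::int64_t>(raw) * o.raw) >> FracBits) };
    }
};

typedef Fixed<std::int32_t, 16> Fixed16_16;   // a 16.16 format, as used on the DS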
I/O Systems
This is probably the trickiest problem for us, because every system has its own method for controller input, graphics (the XBox360 uses a variant of DirectX9, the PS3 has OpenGL or you can write your own from scratch, and the DS and Wii have their own proprietary systems), sound, and networking (really only the DS differs much in protocol, but then they each have their own server system that you have to use).
The way we ended up tackling this was by simply writing fairly high level wrappers for each of the systems (e.g. meshes for graphics, key mapping systems for controllers, etc.) and having all the systems use the same header files for access. It's then just a matter of writing specific cpp files for each platform (thus forming "the engine").
Compiler Differences
This is one thing that can't be tackled too easily. As we run into problems with compilers, we log the information on a local wiki (so others can see what to look out for, and the workarounds that go with it) and, if possible, write a macro that handles the situation for us. While it's not the most elegant solution, it works, and seeing how some compilers are simply broken in certain places, the more elegant solutions tend to break the compilers anyway. (I just wish all of the compilers implemented Microsoft's "#pragma once" directive; it's so much easier than wrapping everything in #ifdefs.)
A great deal of this complexity is generally solved by the third-party libraries (Boost being the most famous) you are using. One rarely writes everything from scratch...
For endian issues in data loaded from files,
embed a value such as 0x12345678 in the file header.
The object that loads the data looks at this value; if it matches its internal representation of the value, then the file contains native-endian values and the load is simple from there.
If the value does not match, then it is a foreign endianness, so the loader needs to flip the values before storing them.
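A minimal sketch of that scheme (the magic constant matches the text; the file format around it is hypothetical):

#include <cstdint>
#include <cstdio>

static const std::uint32_t kMagic = 0x12345678;   // written by the producer in its byte order

static std::uint32_t swap32(std::uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}

// returns true on success; *needSwap tells the caller to flip every value it reads
bool readHeader(std::FILE *f, bool *needSwap)
{
    std::uint32_t magic;
    if (std::fread(&magic, sizeof magic, 1, f) != 1) return false;
    if (magic == kMagic)         { *needSwap = false; return true; }
    if (swap32(magic) == kMagic) { *needSwap = true;  return true; }
    return false;   // not one of our files
}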
I usually encapsulate system-specific calls in a single class. If you decide to port your application to a new platform, you only have to port one file...
I normally use multi-platform libraries like Boost or Qt; they solve about 95% of my problems in dealing with platform-specific code (I admit the only platforms I'm dealing with are Win-XP and Linux). For the remaining 5%, I usually encapsulate the platform-specific code in one or more classes, using the factory pattern or generic programming to reduce the #ifdef/#endif sections.
I think the other answers have done a great job of addressing all your concerns except for endianness, so I'll add something about that... it should only be a problem at your interfaces to the outside world. All your internal data processing should be done in the native endianness. When communicating via TCP/IP (or any other socket protocol), there are functions you should always use to convert your values to and from network byte order: htons() and htonl() (host to network short, host to network long) and their inverses, ntohs() and ntohl().
The only other place you should be interacting with data that has the wrong byte order is reading files from your local hard drive, so make your file loaders and writers use similar functions (perhaps you can even get away with using the network functions).
By always using these library-provided functions for dealing with endianness (use them even in code you never intend to port, unless you have a compelling reason not to - it'll make life easier later when you decide to port), you can run the code on any platform and it will "just work", regardless of the native endianness. A short usage sketch follows.
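A minimal usage sketch (POSIX headers shown; on Windows the same functions come from winsock2.h):

#include <arpa/inet.h>   // htonl, ntohl
#include <cstdint>

std::uint32_t toWire(std::uint32_t hostValue)
{
    return htonl(hostValue);   // host byte order -> network (big-endian)
}

std::uint32_t fromWire(std::uint32_t wireValue)
{
    return ntohl(wireValue);   // network byte order -> host
}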
Usually, this kind of portability problem is left to the build system (autotools or CMake in my case), which detects the specifics of the system. Finally, I get a config.h from the build system, and then I just have to use the constants defined in that header (with #ifdef).
For example here is a config.h :
/* Define to 1 if you have the <math.h> header file. */
#define HAVE_MATH_H 1
/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <errno.h> header file. */
#define HAVE_ERRNO_H 1
/* Define to 1 if you have the <time.h> header file. */
#define HAVE_TIME_H 1
Then the code will look like this (for time.h, for example):
#ifdef HAVE_TIME_H
//you can use some function from time.h
#else
//find another solution :)
#endif
For data formats - use plain text for everything. For compiler differences, be aware of the C++ standard and make use of compiler switches such as g++ -pedantic, which will warn you of portability problems.
It depends on the kind of things you are doing. One thing which is almost always the right choice is to port the basic stuff to any target platform, and then deal with it with a common API.
For example, I do a lot of numerical computation coding, and some platforms have a lot of broken/non-standard code: the way to solve it is to reimplement those functions and then use the new functions everywhere in your code (for platforms which work, the new function just calls the old one).
But this only really works for low level stuff. For GUI, high level IO, using an already existing library is definitely a better option almost every time.
For platforms without native floating-point support, we have used a fixed-point type of our own plus some typedefs. Like this:
// native floating points
typedef float Real;
or for fixed points something like:
typedef FixedPoint_16_16 Real;
Then math functions may look like this:
Real OurMath::ourSin(const Real& value);
The actual implementation might of course be:
float OurMath::ourSin(const float& value)
{
    return std::sin(value); // from <cmath>
}
// for fixed point, something more or less tricky
For things like endianness, using different functions or classes is a bit more overhead. Try using the pre-processor, like:
#ifdef INTEL_LINUX
// code here
#endif
#ifdef SOLARIS_POWERPC
// code here
#endif