What I have is:
a hex file with the bytes of a C struct in it, ordered big-endian
the struct definition as a *.h file
the struct information as DWARF2 debug info
My application has to be written in C/C++. Intermediate scripts using, for example, Python would be OK.
What I have to do is read the bytes of the hex file and cast them into the struct type on a system that is little-endian.
And during this process, I will have to reverse the bytes of each struct member.
The obvious solution would be to write a conversion function that does byte-swapping for each struct member. But since the struct has multiple layers and ~1200 members that change faster than I can update my conversion function, writing it by hand is no solution.
So I could generate the conversion function automatically by:
finding and parsing the types inside multiple *.h files
iterating over the members of all struct types and generating swaps for them (without some sort of reflection API, not that easy)
loading the struct via the conversion function.
Since this solution seems like quite a bit of work, I was wondering if there is an easier way, like telling the compiler to swap the bytes or using the debug info somehow.
Does anybody know a trick that might help in this case?
Thanks and greetings!
Remark:
Changing any of the processes leading to this, changing the input conditions, or delegating responsibilities to the other developers involved is not possible.
Changing anything about the hex file as an input is not possible. This file comes out of some other system that will not change to fix this problem here.
Padding, data type sizes, etc. are identical; this is ensured by other measures, too. So endianness is definitely the only problem. This is also why I see no reason against using the DWARF2 info to identify the bytes of every struct member.
I agree that the layout of the struct is very bad. But there are reasons why it is that way, and in short, I cannot (and am not allowed to) change it anyway, for process reasons and backwards compatibility.
To give some more scope:
The software that all of this is used in is deployed to multiple different embedded devices (multiple types). The hex file contains the calibration information of the software and is thus stored in a specific system that can only output this hex file.
I am now porting the software to a little-endian device, and I have to use the hex file given from the "main" branch of the software, which is big-endian, as input.
There is no way to tell a C or C++ compiler to swap bytes from LE to BE or vice versa automatically. You really have to do it yourself. If your data structs are really huge, probably the best way is to implement automatic conversion code generation.
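For illustration, a minimal sketch of what one generated routine might look like; the struct and member names here are hypothetical placeholders, and the generator would emit one such function per struct type, recursing into nested structs and arrays:

#include <cstdint>

// Byte-swap helpers for the fixed-width types used by the struct members.
static uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}
static uint16_t swap16(uint16_t v) {
    return static_cast<uint16_t>((v >> 8) | (v << 8));
}

struct Calibration {   // placeholder for the real ~1200-member struct
    uint32_t gain;
    uint16_t offset;
};

// Generated once per struct type; swaps every member in place.
void swap_members(Calibration& c) {
    c.gain   = swap32(c.gain);
    c.offset = swap16(c.offset);
}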
The problem, as far as I understand it, is tricky but tractable. Data extraction won't be running on an embedded device, so it won't be resource-constrained. I say - embrace the runtime inefficiency that desktop hardware allows, and go for easy to debug instead.
Instead of thinking of the source file as "almost what I need modulo a couple of minor adjustments", think of it as "generic binary file with an open ended, evolving schema". The schema description is the DWARF data.
What I would do: start a Python project. Use the pyelftools PyPI module to parse the DWARF. Iterate over the compile units (CUs). In each CU, iterate over the top-level entries (DIEs). Look for a DW_TAG_structure_type DIE with a specific value of DW_AT_name (I hope the struct name is known in advance). Then go through the DW_TAG_member sub-DIEs. DW_AT_data_member_location will give you the offset, letting you work around the padding. Look at DW_AT_type to detect the member type (you'd have to resolve the DIE reference for that). Recurse into struct- and array-type members as necessary.
From that, generate a format string for the struct.unpack function - it can read big-endian ints seamlessly. Then use struct.pack to format the data into whatever layout the C++ consumer expects.
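For the last step, a hedged sketch of the C++ consumer side: once the Python pass has rewritten the file into native little-endian layout, loading becomes a single fread into the struct. "Calibration" is a placeholder; the real type comes from the *.h files.

#include <cstdio>

struct Calibration {   // placeholder for the real struct definition
    unsigned int gain;
    short offset;
};

bool load_native(const char* path, Calibration& out) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    const std::size_t n = std::fread(&out, 1, sizeof out, f);  // layout already native
    std::fclose(f);
    return n == sizeof out;
}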
This depends on you being able to track the data file to the DWARF info of the generating executable, exactly the same build. I hope the processes of the organization allow for that.
Recent versions of GCC allow the declaration of the desired endianness irrespective of the target platform for a source code section using the pragma scalar_storage_order or a specific type using an attribute with the same identifier. The main catch: g++ does not support this. Also, this won't work in all cases. For example, taking a pointer to a member with transparent endianness conversion leads to an error. Unless you're okay with sticking to C for struct access (it all depends on your current codebase), this is not an option.
The persistence layout is based on the original struct layout - so be it. However, a more explicit approach to serializing the structs should be preferred, for exactly the reason you bring up. Besides the endianness issue, struct packing also affects compatibility and should be explicitly specified. For persistence, a packing of 1 would be optimal. For in-memory data structures, that alignment is far from optimal in terms of performance and concurrency characteristics. Also, different platforms might have incompatible data types (e.g. sizeof(long) on 64-bit Linux/Windows - LP64 vs. LLP64). So, keeping the persistence layout separate from the in-memory data structures tends to have a long list of advantages and therefore usually outweighs the disadvantage of having to maintain the serialization code separately, particularly if portability is a major concern.
You could take advantage of C/C++ reflection libraries or implement one yourself. In the case of C, this will definitely require macros (e.g. Metaresc). In the case of C++, you might actually get away with your original struct definitions (e.g. Boost Precise and Flat Reflection).
If reflection is not an option, you could generate the serialization code by parsing either the headers or the debug symbols. Generally, parsing C/C++ is more complex. By moving the structs involved into dedicated headers, you might get away with a simple C/C++ parser. To make things easier, you could simplify parsing by processing the output of gdb's ptype command, which works off the debug symbols. Or, you could parse the debug symbols directly. With a scripting language like Python, both approaches should be feasible (pygccxml and pyelftools come to mind).
Rather than sticking to generating the serialization code as part of the build process, you could generate that code once and require updates whenever the structs change in the future. That's what I would do in a multi-platform scenario. Doing that would also spare you the pain of implementing a perfect parser that can deal with all kinds of C/C++ input, it would only have to be good enough for one-time generation.
Related
I am working on a pretty dynamic C++ program which allows the user to define their own data structures which are then serialized in an output HDF5 data file. Instead of requiring the user to define a new HDF5 data type, I am "splitting" their data structures into HDF5 subgroups in which I store the different member variable data sets. I am interested in labeling the HDF5 group that has the subgroup members with the type of the data structure that was written to it so that future users of the data file will have more knowledge about how to use the data contained within it.
All of this context gets me to my question in the title. How reliable are demangled names? The crux of the issue could be summarized with the following example (using boost to demangle as an example, not a necessity). If I use
std::string tn = boost::core::demangle(typeid(MyType).name());
to get the demangled name of MyType on one system with a given compiler, will I get the same result if I use the same code on a different system with a potentially different compiler? Could I safely do tn_sys_with_clang == tn_sys_with_gcc and trust that this equality holds as long as MyType is the same type?
The answer seems to me to be obviously yes, and I have checked a few examples across many different compilers on Compiler Explorer; however, I want to be confident that I am not missing any edge cases. Moreover, I'm not sure how the demangling process differs between compilers and how that might lead to the introduction of differing amounts of whitespace.
The emphasis here is that the only variable changing is the system and compiler. I know that changing the definition of MyType or moving it to a different namespace or including a pesky using directive or changing how I demangle could change the string output by the demangling. I want to focus on a more limited question where only the compiler and system change.
Unreliable.
If you compile with the same compiler on the same OS then you should have some stability — but that is absolutely not guaranteed. ABI changes in name mangling can happen at any time in a compiler’s release cycle.
Individual compiler teams may have some information about this in their documentation. I am not going to look it up. Sorry.
All bets are off if you compile with either different compilers or different operating systems.
For example, LLVM/Clang on Windows comes with a version that uses MSVC as the backend. Consequently, name mangling on the native Windows Clang port is not compatible with the native Linux Clang.
Finally, just running a few tests with your (current) compiler is always a good way to shoot yourself in the foot. As the adage goes, “just because it works on your compiler, today...”
The reliability of demangled names does not seem to be something that is well documented. For this reason, I am going to simply document the few tests that I've done on my x86_64 system, which let me compare gcc and clang. These tests, done through Compiler Explorer, verify that the returned strings for the same types are the same (including whitespace).
Maybe if I start using this in my application, one of the users will find an issue and I can update this question with another answer down the line, but for now, I think it is safe(ish) to trust de-mangling.
I'm focusing on MicroPython, specifically the dynamic-native-modules branch.
This feature will, in the future, allow you to compile a C/C++ function into a native .obj and package it together with a .py interface for a huge speed boost.
Awesome! But the issue is that if you're using an RTOS, which doesn't have virtual memory, then any executing native code can access any part of the address space, including peripherals, the RTOS's state, etc.
You don't want the user to be able to do something like this:
void user_func()
{
    /* point to arbitrary memory, potentially the reset registers,
       flash erase . . . you get the point */
    int *a = (int *)0x1234;
    *a = 0x10110000; // DESTROY!!!
}
Even the following should be disallowed:
void user_func()
{
    int a;
    *(int *)(&a - 1000) = 0x10010111; // out-of-bounds write via pointer arithmetic
}
SOLUTIONS?
Create our own version of gcc (for each binary format)
Decompile .obj files and detect use of pointers (for each binary format)
FEEDBACK TO COMMENTS
I get that it may be impossible to stop a malicious user, but that's not the #1 worry. We want to stop well-meaning but accidental code. If it's not possible to stop every single case, that's OK.
If we can prohibit/detect explicit pointer accesses and simply provide warnings regarding array use, that is still very valuable.
WARNING: YOU'RE USING AN ARRAY! MAKE SURE YOU DON'T GO OUT-OF-BOUNDS
Your best chance is a GCC plugin which looks at the frontend-generated GENERIC or GIMPLE IRs and implements the policies you want. Depending on the policies and the source code you want to accept, this could be a lot of work and very difficult.
If you want a purely syntax-based or type-based approach (simply rejecting all pointer arithmetic), Clang with its ASTs is easier to work with than GCC.
There could be a way of doing what you want - as long as you are protecting against honest mistakes, not deliberate attempts to break things.
First, you embrace C++ Core Guidelines and use support from the tools: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#S-tools
For your purposes, it will stop your code from using raw pointers and pointer arithmetic in non-library code. Period. Core guidelines do more than that, but this is what is related to your question.
Then, you do a simple grep on the code to make sure it is only using pointers in the ways blessed by the Guidelines - this would be a case of a more or less simple grep.
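As a sketch of what Guidelines-conformant code looks like, assuming the Microsoft GSL is available (the function itself is just an illustration):

#include <gsl/gsl>

// A span carries its own bounds, so there is no raw (pointer, length) pair
// and no pointer arithmetic for the grep to flag.
int sum(gsl::span<const int> values) {
    int total = 0;
    for (const int v : values)   // iteration cannot run past the end
        total += v;
    return total;
}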
In practice, it is not reasonably possible.
You might consider changing the compilation (e.g. with a GCC plugin, as mentioned in Florian Weimer's answer) to check every access to arrays, but that makes the generated code significantly slower. Your native hardware is slow enough; you don't want to make it even slower.
And Python is not exactly the right source language. Its dynamic typing would make it quite slow already.
Perhaps you might consider a statically typed language, like OCaml (combined with a JIT or AOT compilation library like GCCJIT, etc.). The type system (and its inference) increases the safety (and the speed) of the generated code, and with a lot of hard work (several years, probably worth a PhD, since you'll need to do new research) you might improve the type inference to even infer the cases where the array index is never out of bounds (and an index boundary check is not even needed).
In most cases, upgrading the hardware (to a Raspberry Pi style one, with an MMU, and probably the ability to run a real OS with virtual memory) is probably the most pragmatic approach.
PS. Be aware of Rice's theorem. Most static source analysis cannot work reliably and well (be sound and complete, in technical terms).
Rationale: In my day-to-day C++ code development, I frequently need to
answer basic questions such as who calls what in a very large C++ code
base that is frequently changing. But, I also need to have some
automated way to exactly identify what the code is doing around a
particular area of code. "grep" tools such as Cscope are useful (and
I use them heavily already), but are not C++-language-aware: They
don't give any way to identify the types and kinds of lexical
environment of a given use of a type or function in a way that is
conducive to automation (even if said automation is limited to
"read-only" operations such as code browsing and navigation, but I'm
asking for much more than that below).
Question: Does there exist already an open-source C/C++-based library
(native, not managed, not Microsoft- or Linux-specific) that can
statically scan or analyze a large tree of C++ code, and can produce
result sets that answer detailed questions such as:
What functions are called by some supplied function?
What functions make use of this supplied type?
Ditto the above questions if C++ classes or class templates are involved.
The result set should provide some sort of "handle". I should be able
to feed that handle back to the library to perform the following types
of introspection:
What is the byte offset into the file where the reference was made?
What is the reference into the abstract syntax tree (AST) of that
reference, so that I can inspect surrounding code constructs? And
each AST entity would also have file path, byte-offset, and
type-info data associated with it, so that I could recursively walk
up the graph of callers or referrers to do useful operations.
The answer should meet the following requirements:
API: The API exposed must be one of the following:
C or C++ and probably is "C handle" or C++-class-instance-based
(and if it is, must be generic C or C++ code and not Microsoft- or
Linux-specific code constructs unless it is to meet specifics of
the given platform), or
Command-line standard input and standard output based.
C++ aware: Is not limited to C code, but understands C++ language
constructs in minute detail including awareness of inter-class
inheritance relationships and C++ templates.
Fast: Should scan large code bases significantly faster than
compiling the entire code base from scratch. This probably needs to
be relaxed, but only if Incremental result retrieval and Resilient
to small code changes requirements are fully met below.
Provide Result counts: I should be able to ask "How many results
would you provide to some request (and no don't send me all of the
results)?" that responds on the order of less than 3 seconds versus
having to retrieve all results for any given question. If it takes
too long to get that answer, then it wastes development time. This is
coupled with the next requirement.
Incremental result retrieval: I should be able to then ask "Give me
just the next N results of this request", and then a handle to the
result set so that I can ask the question repeatedly, thus
incrementally pulling out the results in stages. This means I
should not have to wait for the entire result set before seeing
some subset of all of the results. And that I can cancel the
operation safely if I have seen enough results. Reason: I need to
answer the question: "What is the build or development impact of
changing some particular function signature?"
Resilient to small code changes: If I change a header or source
file, I should not have to wait for the entire code base to be
rescanned, but only that header or source file
rescanned. Rescanning should be quick. E.g., don't do what cscope
requires you to do, which is to rescan the entire code base for
small changes. It is understood that if you change a header, then
scanning can take longer since other files that include that header
would have to be rescanned.
IDE Agnostic: Is text editor agnostic (don't make me use a specific
text editor; I've made my choice already, thank you!)
Platform Agnostic: Is platform-agnostic (don't make me only use it
on Linux or only on Windows, as I have to use both of those
platforms in my daily grind, but I need the tool to be useful on
both as I have code sandboxes on both platforms).
Non-binary: Should not cost me anything other than time to download
and compile the library and all of its dependencies.
Not trial-ware.
Actively Supported: Sending help requests to mailing lists
or associated forums is likely to get a response in less than 2
days.
Network agnostic: Databases the library builds should be able to be used directly on
a network from 32-bit and 64-bit systems, both Linux and Windows
interchangeably, at the same time, and do not embed hardcoded paths
to filesystems that would otherwise "root" the database to a
particular network.
Build environment agnostic: Does not require intimate knowledge of my build environment, with
the notable exception of possibly requiring knowledge of compiler
supplied CPP macro definitions (e.g. -Dmacro=value).
I would say that Clang Index is a close fit. However, I don't think that it stores data in a database.
Anyway, the Clang framework offers what you actually need to build a tool tailored to your needs, if only because of its C, C++ and Objective-C parsing / indexing capabilities. And since it's provided as a set of reusable libraries... it was crafted for being developed on!
I have to admit that I haven't used either, because I work with a lot of Microsoft-specific code that uses Microsoft compiler extensions that I don't expect them to understand, but the two open-source analyzers I'm aware of are Mozilla Pork and the Clang Analyzer.
If you are looking for results of code analysis (metrics, graphs, ...), why not use a tool (instead of an API) to do that? If you can, I suggest you take a look at Understand.
It's not free (there's a trial version) but I found it very useful.
Maybe Doxygen with GraphViz could be the answer to some of your constraints, but not all; for example, the analysis of Doxygen is not incremental.
I have written a C++ library that saves my data (a collection of custom structs etc.) into a binary file. I currently use (i.e. create and consume) the files locally, on my Windows (XP) machine. For simplicity, let's think of the library in two parts: a writer (creates the files) and a reader or consumer (simply reads data from the files).
Recently though, I would like to also consume (i.e. read) the data files I have created on my XP machine on my Linux machine. I must point out at this stage that both machines are PCs (so have the same endianness etc.).
I can build a reader (and compile for Linux [Ubuntu 9.10 to be precise]), since I am the library creator. My question, before I embark down this road (of building the reader etc) is:
Assuming I have successfully built the reader for Linux,
can I simply copy across files that were created on the Windows (XP) machine to the Linux (Ubuntu 9.10) machine and use the Linux reader to successfully read the copied-over file?
For the files to be binary compatible:
endianness must match (as it does for you)
bitfield packing order must be the same
sizes and signedness of types must be the same
the compiler must make the same decisions about padding and alignment
It's certainly possible for all of these conditions to be fulfilled, or for you never to hit a case where they are not. At the very least, though, I'd add some sanity checks and/or sentinel members to detect problems.
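A hedged sketch of such sentinels (the field names are illustrative, not from your library):

#include <cstdint>

struct FileHeader {
    uint32_t magic;        // fixed value, e.g. 0x12345678; reads back
                           // byte-swapped if endianness ever differs
    uint32_t record_size;  // writer's sizeof(Record); a mismatch signals
                           // different padding or type sizes on the reader
};

template <typename Record>
bool header_ok(const FileHeader& h) {
    return h.magic == 0x12345678u && h.record_size == sizeof(Record);
}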
Binary files should be compatible across machines with the same endianness.
The issue you may have in your code is the size of ints; you can't necessarily assume that the compiler on a different OS has the same size of int. So either copy blocks of bytes and cast them, or use fixed-width types like int16_t, int32_t, etc.
Structs are not a file format, and you shouldn't try to use them as such.
When attempting to make structs work with fread and fwrite, there's a huge number of hacks to make it work. You byte-swap integers so that you can share files between little-endian and big-endian machines. You change your structs to use fixed-width integer types, so you can share between machines with different word sizes (such as between x86 and x64 machines). You add compiler-specific pragmas to control the padding of structs to share between compiler versions.
It works, but it's ugly. Not to mention, easy to get wrong.
Much like the recommendation in The byte order fallacy, a much better idea is to write code to read/write the fields individually. By writing your own code, you can ensure there's no padding, and you can choose integer sizes independently of the local size of integers, and you can support both endiannesses without byte-swapping (by reading/writing the bytes of an integer separately).
Unlike the hacky approach, this is hard to get wrong. Further, because you don't rely on any compiler or architecture specific behaviors, either your code will work on all compilers and architectures, or none. If you do it right, you shouldn't have any platform-specific bugs.
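A minimal sketch of that style for one little-endian 32-bit field (the names are mine, not from the linked article):

#include <cstdint>
#include <cstdio>

// Assemble the value from individual bytes; the same code is correct on
// big- and little-endian hosts, with no byte-swapping anywhere.
bool read_u32_le(std::FILE* f, uint32_t& out) {
    uint8_t b[4];
    if (std::fread(b, 1, 4, f) != 4) return false;
    out = static_cast<uint32_t>(b[0])
        | (static_cast<uint32_t>(b[1]) << 8)
        | (static_cast<uint32_t>(b[2]) << 16)
        | (static_cast<uint32_t>(b[3]) << 24);
    return true;
}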
There is one downside; individually reading/writing the fields will be slower than just using fread/fwrite directly. You can set up a buffer (uint8_t buffer[]), write the entirety of the data into it, and then write everything out at once, which might help, though it'll still be slower (you'd still have to move the fields into the buffer one at a time). For most purposes it'll still be fast enough (exceptions being embedded / real-time systems or extremely high-performance computing).
If:
the machines have the same endianness (as you stated they have) and
you do open the streams in binary mode, as text mode might do funny things e.g. with line-ends and
you have programmed cleanly so you don't stumble over implementation-defined stuff like alignments, data type sizes, and struct packing,
then yes, your files should be portable.
The third bullet point is what makes a file format a "portable" one. Depending on what kind of data you have in your structs, it can be very easy or a bit tricky. Bitfields, or data being reinterpreted from a different type are especially tricky.
You might consider taking a look at the Boost Serialization Library.
A lot of thought has been put into it, and it will handle many of the potential cross-platform incompatibilities for you.
Of course, it's possible that it's overkill for your particular use case, especially if you've already got your writers & readers implemented.
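For a flavor of the library, a minimal sketch with a hypothetical Point type (the intrusive serialize member is one of several styles Boost.Serialization supports):

#include <fstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

struct Point {
    int x = 0, y = 0;
    template <class Archive>
    void serialize(Archive& ar, const unsigned /*version*/) {
        ar & x & y;   // the same function handles both saving and loading
    }
};

int main() {
    {
        std::ofstream out("point.txt");
        boost::archive::text_oarchive oa(out);
        const Point p{1, 2};
        oa << p;      // portable text representation
    }
    std::ifstream in("point.txt");
    boost::archive::text_iarchive ia(in);
    Point q;
    ia >> q;          // q.x == 1, q.y == 2
}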
I'm currently working on cross-platform applications and was just curious as to how other people tackle problems such as:
Endianness
Floating point support (some systems emulate in software, VERY slow)
I/O systems (i.e. display, sound, file access, networking, etc. )
And of course, the plethora of compiler differences
Obviously this is targeted at languages like C/C++ which don't abstract most of this stuff (unlike Java or C#, which aren't supported on a lot of systems).
And if you were curious, the systems I'm developing on are the Nintendo DS, Wii, PS3, XBox360 and PC.
EDIT
There have been a lot of really good answers on here, ranging from how to handle the differences yourself, to library suggestions (even the suggestion of just giving in and using wine). I'm not actually looking for a solution (already have one), but was just curious as to how others tackle this situation as it is always good to see how others think/code so you can continue to evolve and grow.
Here's the way I've tackled the problem (and, if you haven't guessed from this list of systems above, I'm developing console/windows games). Please keep in mind that the systems I work on generally don't have cross-platform libraries already written for them (Sony actually recommends that you write your own rendering engine from scratch and just use their OpenGL implementation, which doesn't quite follow the standards anyway, as a reference).
Endianess
All of our assets can be custom made for each system. All of our raw data (except for textures) is stored in XML, which we convert to a system-specific binary format when the project is built. Seeing as how we are developing for game consoles, we don't need to worry about data being transferred between platforms with different endian formats (only the PC allows the users to do this; thus, it is insulated from the other systems as well).
Floating point support
Most modern systems handle floating point values fine; the exception to this is the Nintendo DS (and GBA, but that's pretty much a dead platform for us these days). We handle this through 2 different classes. The first is a "fixed point" class (templated; you can specify what integer type to use and how many bits for the decimal value) which implements all arithmetic operators (taking care of bit shifts) and automates type conversions. The second is a "floating point" class, which is basically just a wrapper around a float for the most part; the only difference is that it also implements the shift operators. By implementing the shift operators, we can use bit shifts for fast multiplications/divisions on the DS and then seamlessly transition to platforms that work better with floats (like the XBox360).
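A minimal sketch of the fixed-point idea (a 16.16 layout; not the actual class, which is templated as described):

#include <cstdint>

struct Fixed16_16 {
    int32_t raw;   // stored as value * 65536
    static Fixed16_16 fromFloat(float f) {
        return { static_cast<int32_t>(f * 65536.0f) };
    }
    float toFloat() const { return raw / 65536.0f; }
    Fixed16_16 operator+(Fixed16_16 o) const { return { raw + o.raw }; }
    Fixed16_16 operator*(Fixed16_16 o) const {
        // widen to 64 bits, then shift the extra 16 fraction bits back out
        return { static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16) };
    }
};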
I/O Systems
This is probably the trickiest problem for us, because every system has its own method for controller input, graphics (the XBox360 uses a variant of DirectX9, the PS3 has OpenGL or you can write your own from scratch, and the DS and Wii have their own proprietary systems), sound, and networking (really only the DS differs much in protocol, but then they each have their own server system that you have to use).
The way we ended up tackling this was by simply writing fairly high level wrappers for each of the systems (e.g. meshes for graphics, key mapping systems for controllers, etc.) and having all the systems use the same header files for access. It's then just a matter of writing specific cpp files for each platform (thus forming "the engine").
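A hedged sketch of that pattern (hypothetical names, not the poster's actual engine):

// renderer.h -- the one header every platform shares
struct Mesh;   // platform-neutral mesh description

class Renderer {
public:
    void drawMesh(const Mesh& mesh);  // body lives in renderer_x360.cpp,
    void present();                   // renderer_ps3.cpp, renderer_pc.cpp ...
};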
Compiler Differences
This is one thing that can't be tackled too easily. As we run into problems with compilers, we usually log the information on a local wiki (so others can see what to look out for and the workarounds to go with it) and, if possible, write a macro that will handle the situation for us. While it's not the most elegant solution, it works, and seeing how some compilers are simply broken in certain places, the more elegant solutions tend to break the compilers anyway. (I just wish all of the compilers implemented Microsoft's "#pragma once" command, so much easier than wrapping everything in #ifdefs.)
A great deal of this complexity is generally solved by the third party libraries (boost being the most famous) you are using. One rarely writes everything from scratch...
For endian issues in data loaded from files,
embed a value such as 0x12345678 in the file header.
The object that loads the data looks at this value, and if it matches its internal representation of the value, then the file contains native-endian values. The load is simple from there.
If the value does not match, then it is of foreign endianness, so the loader needs to flip the values before storing them.
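A hedged sketch of that check (the names are illustrative):

#include <cstdint>
#include <cstdio>

constexpr uint32_t kMagic = 0x12345678u;

static uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Returns false if the file is unrecognized; sets needSwap when the file
// was written on a machine with the opposite endianness.
bool check_header(std::FILE* f, bool& needSwap) {
    uint32_t magic = 0;
    if (std::fread(&magic, sizeof magic, 1, f) != 1) return false;
    if (magic == kMagic)         { needSwap = false; return true; }
    if (swap32(magic) == kMagic) { needSwap = true;  return true; }
    return false;   // not one of our files
}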
I usually encapsulate system-specific calls in a single class. If you decide to port your application to a new platform, you only have to port one file...
I normally use multi-platform libraries, like Boost or Qt; they solve about 95% of my problems dealing with platform-specific code (I admit the only platforms I'm dealing with are Windows XP and Linux). For the remaining 5%, I usually encapsulate the platform-specific code in one or more classes, using the factory pattern or generic programming to reduce the #ifdef/#endif sections.
I think the other answers have done a great job of addressing all your concerns except for endianness, so I'll add something about that... it should only be a problem at your interfaces to the outside world. All your internal data processing should be done in the native endianness. When communicating via TCP/IP (or any other socket protocol), there are functions you should always use to convert your values to and from network byte order: htons() and htonl() (host to network short, host to network long) and their inverses, ntohs() and ntohl().
The only other place you should be interacting with data that has the wrong byte order is reading files from your local hard drive, so make your file loaders and writers use similar functions (perhaps you can even get away with using the network functions).
By always using these library-provided functions for dealing with endianness (use them even in code you never intend to port, unless you have a compelling reason not to -- it'll make life easier later when you decide to port), you can run the code on any platform and it will "just work", regardless of the native endianness.
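A minimal sketch of that idea applied to file I/O (POSIX headers shown; on Windows the same functions live in <winsock2.h>):

#include <arpa/inet.h>   // htonl() / ntohl()
#include <cstdint>
#include <cstdio>

void write_u32(std::FILE* f, uint32_t value) {
    const uint32_t be = htonl(value);   // host -> network (big-endian) order
    std::fwrite(&be, sizeof be, 1, f);
}

uint32_t read_u32(std::FILE* f) {
    uint32_t be = 0;
    std::fread(&be, sizeof be, 1, f);
    return ntohl(be);                   // network order -> host
}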
Usually, this kind of portability problem is left to the build system (autotools or CMake in my case), which detects the specifics of the system. Finally, I get a config.h from this build system, and then I just have to use the constants defined in this header (using #ifdef).
For example here is a config.h :
/* Define to 1 if you have the <math.h> header file. */
#define HAVE_MATH_H 1
/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <errno.h> header file. */
#define HAVE_ERRNO_H 1
/* Define to 1 if you have the <time.h> header file. */
#define HAVE_TIME_H 1
Then the code will look like this (for time.h, for example):
#ifdef HAVE_TIME_H
  // you can use some functions from time.h
#else
  // find another solution :)
#endif
For data formats - use plain text for everything. For compiler differences, be aware of the C++ standard and make use of compiler switches such as g++ -pedantic, which will warn you of portability problems.
It depends on the kind of things you are doing. One thing which is almost always the right choice is to port the basic stuff to any target platform, and then deal with it with a common API.
For example, I do a lot of numerical computation coding, and some platforms have a lot of broken or non-standard library functions: the way to solve it is to reimplement those functions and then use those new functions everywhere in your code (for platforms which work, the new function just calls the old one).
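A small sketch of that wrap-and-forward idea (the config flag is hypothetical):

#include <cmath>

double our_log1p(double x) {
#if defined(HAVE_BROKEN_LOG1P)   // hypothetical flag set by the build system
    return std::log(1.0 + x);    // simplified fallback; loses accuracy near 0
#else
    return std::log1p(x);        // the platform's version works; just forward
#endif
}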
But this only really works for low level stuff. For GUI, high level IO, using an already existing library is definitely a better option almost every time.
For platforms without native floating-point support, we have used our own fixed-point type and some typedefs. Like this:
// native floating points
typedef float Real;
or for fixed points something like:
typedef FixedPoint_16_16 Real;
Then math functions may look like this:
Real OurMath::ourSin(const Real& value);
The actual implementation might of course be:
float OurMath::ourSin(const float& value)
{
    return sin(value);
}
// for fixed point, something a bit trickier
For things like endianness, using different functions or classes adds a bit more overhead. Try using the preprocessor, like:
#ifdef INTEL_LINUX
  // code here
#endif
#ifdef SOLARIS_POWERPC
  // code here
#endif