Recently, I was learning about networking in C++ and I found Boost.Asio, which is a cross-platform library. That got me wondering how such a library can be cross-platform, since Windows provides its own library for networking, and so does macOS.
So how do its functions work on different machines? Do cross-platform libraries implement their own functions for this purpose, or do they contain different private functions with each machine's logic and expose public functions which, at compile time, check which machine the code is being compiled on and substitute that machine's own library calls for the functions we wrote?
For example:

class Library {
private:
    // Operations for Windows
    void windowsFunc() { /* code */ }

    // Operations for Mac
    void macFunc() { /* code */ }

public:
    // The library's function: performs different
    // operations for different machines
    void doWork()
    {
        if (windows)
            windowsFunc();
        else if (mac)
            macFunc();
    }
};
It could be a solution, maybe 😗
There are several possible ways:
Use ifdefs
const char* doFoo(void)
{
#ifdef _WIN32
    return "win32";
#elif defined(__linux__)
    return "linux";
#else
    return "unknown";    /* ...and so on for other platforms */
#endif
}
The pro is that it doesn't require much setup.
The con is that it clutters the code if you do it too much.
Have different implementations in different directories.
As a folder structure:
root
|---->Other files
|---->windows implementation
|    |->foo.c
|---->linux implementation
|    |->foo.c
|---->macos implementation
|    |->foo.c
And then you can use some build system, or a custom shell script for selecting things.
(Pseudo code)
if(OS == windows)
    compileDirectory(windowsImpl);
else if(OS == linux)
    compileDirectory(linuxImpl);
else if(OS == macos)
    compileDirectory(macosImpl);
The pro is that it doesn't clutter the code, makes it easier to add new features, and gives you some sort of abstraction.
The con is that it can take quite a lot of work to set up.
This is often done using compiler-specific macros. For example, the gcc compiler defines the preprocessor symbol
__linux
on Linux. So C or C++ code can use
#if __linux
to compile Linux-specific code. Nothing in the actual code #defines this; it is defined by default by the Linux version of gcc. Similar preprocessor macros are predefined on the other platforms supported by gcc, and other compilers have similar predefined macros. A cross-platform library's sources assemble a collection of these macros, which they check in order to compile operating-system-specific code.
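For example, a source file can use such a predefined macro (here the __linux__ spelling, which gcc also predefines) to guard a section that only compiles on Linux; the function itself is just a trivial illustration:

#if defined(__linux__)
#include <unistd.h>   // POSIX header, not available on Win32

long memoryPageSize(void)
{
    return sysconf(_SC_PAGESIZE);   // POSIX call; this whole block vanishes on other platforms
}
#endif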
Another common approach is to explicitly define a library-specific symbol as part of the library's build instructions or script. The build instructions that come with a cross-platform library include directions for running the appropriate script from the library's package; that script runs the compiler and explicitly sets the appropriate preprocessor symbol, which is then used in the same manner.
Generally, preprocessor usage is discouraged in modern C++ code and should be used sparingly; this, however, is one use case where preprocessor macros remain quite common in practice.
Generally, cross-platform libraries are implemented in layers.
There is the public interface, the common code, and a minimal set of platform-dependent code.
The public interface usually uses #if defined(symbol) checks to determine which platform it is being compiled for. It may include system headers based on this, but more often it will simply forward-declare any symbols it needs to expose for platform-specific APIs (if any). In some cases, header-only libraries will go further than this.
The common code will have minimal platform-specific stuff in it. It will try to deal with platform-specific stuff using abstraction; it may #include platform-specific helper headers, but the code is common between platforms.
For example, it might use a Native::RWLock type and Native::RWLock::lock( mutex )-style functions that are typedefs and thin wrappers around platform-specific types.
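As a rough sketch of what such a thin wrapper might look like (all of the Native names here are invented for illustration, not a real library):

// native_rwlock.h -- illustrative thin wrapper over the platform lock type
#if defined(_WIN32)
#include <windows.h>
namespace Native {
    typedef SRWLOCK RWLock;
    inline void lock(RWLock& l)   { AcquireSRWLockExclusive(&l); }
    inline void unlock(RWLock& l) { ReleaseSRWLockExclusive(&l); }
}
#else
#include <pthread.h>
namespace Native {
    typedef pthread_rwlock_t RWLock;
    inline void lock(RWLock& l)   { pthread_rwlock_wrlock(&l); }
    inline void unlock(RWLock& l) { pthread_rwlock_unlock(&l); }
}
#endif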
The platform-specific code is different for each platform. It may be conditionally compiled in the build system (a NativeMutexImp_mac.cpp file, say), or wrapped in #ifdef blocks, or even both.
Now, header-only libraries are leakier than this. And some less organized cross-platform code will mix platform-specific and common code all over the place. Finally, performance requirements may force some platform-specific code to leak into public header files.
But the main idea is that you hide the OS APIs that you, in turn, use to implement your cross-platform functionality.
This can make the end user code faster than native OS API use if your API makes efficient use easier than the native APIs do. Optimization is fungible; code that is easier to make performant is actually faster in practice.
So I'm working on a modular C++ library for me and my team. Here is my situation:
I have a library A which contains a complex storage-control class a. I also have a library B that is something like an interface to a complex protocol and contains a special response. Now, I want to have a function in class a which CAN use B. This can be helpful for program X, which uses A and B. But there is also program Y, which will only use A and not the complex library B.
How can I get this behavior in C++? Do I need macros, symbols, or is there another way to implement this easily so that I don't need to include an extra file in a program? Which type of library is the better one?
This is pretty common in system libraries, where optional features can be chosen at (library) compile time. With this approach, you would have one or more #define preprocessor macros guarding the optional features. Thus, in library A:
#ifdef USE_LIBRARY_B
#include <b/foo.h>
int A_optional_feature_using_B(void) { ... }
#endif
// rest of library A follows
At compile time you would either define USE_LIBRARY_B or not (and add any necessary linker flags of course). For example, with gcc on a UNIX-like platform:
$ gcc ... -DUSE_LIBRARY_B ...
Most commonly, something like autoconf is used for UNIX environments to give the end-user an easy way to select the optional features. Thus, you might see:
me@pc:library_a$ ./configure --with-library-b
See the autoconf documentation for information on how to set something like that up.
On Windows, you could do something similar (e.g., in Visual Studio, use different Project/Solution configurations that define the appropriate macros).
The above gives you compile-time only control over the extra features. To allow the choice to be made at runtime so that the library can use B if present, you will need to use runtime dynamic linking. This is highly platform-specific. On Linux, use libdl. On Windows, use LoadLibrary() and friends. Here you will have to do a lot of extra coding to look for and load the optional library if present and to return appropriate error codes / exceptions if not present and functions requiring the optional library are called.
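A minimal sketch of the Linux side, assuming hypothetical names ("libb.so" and "b_special_response" are made up for illustration; link with -ldl):

#include <dlfcn.h>

typedef int (*optional_fn)(void);

int call_optional_feature(void)
{
    void* handle = dlopen("libb.so", RTLD_LAZY);    // is library B present?
    if (!handle)
        return -1;                                  // not installed: report failure
    optional_fn fn = (optional_fn)dlsym(handle, "b_special_response");
    int result = fn ? fn() : -1;
    dlclose(handle);
    return result;
}

The Windows equivalent follows the same shape with LoadLibrary() and GetProcAddress().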
If all this sounds like a pain in the rear, you're right, it is. If there is any reasonable way to adjust your strategy so that you don't have to do this, you will be better off. But for some use cases it is necessary and appropriate, and if yours is one of them, I hope the above gives you some starting points. Good luck.
I have C code written for an ATmega16 chip, and it is full of keywords like:
flash, eeprom, bit
and macros(?) like
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
that come before function signatures.
Now what I want to do is write and run unit tests that verify the correctness of the logic of the controller unit and I want to be able to run these tests on any computer and not need to have the "device" that the code represents.
I searched a lot and came across "abstracting the hardware" and "replacing them with stubs" kind of solutions, but I'm not sure how I can abstract something like "interrupt [TIM1_OVF]" in the code!
I was wondering if there any special tools that provide the environment for running these sorts of codes?
And also, if I am going about it wrong, can anybody point me in the right direction, given that changing or rewriting (!) the microcontroller's code might not be an option?
Thanks a bunch.
Your examples are not ISO C code; they are compiler-specific extensions that are not common across AVR compilers, let alone architectures. In many cases they can be worked around by defining macros that require little or no modification of the code. It is a good idea to do that in any case to make your code portable even across different vendors' AVR compilers, although a combination of techniques may be required.
Most compilers support an "always include" option that allows a header file to be included from the command line, without an explicit #include directive in the source. Creating a header with your compatibility macros, and including it either implicitly as described or explicitly in the code, is a useful technique. For example, for the issues you have mentioned, you might have:
// compatibility.h
#if !defined COMPATIBILITY_INCLUDE
#define COMPATIBILITY_INCLUDE
#if defined __IAR_SYSTEMS_ICC__
#define INTERRUPT( irq, handler ) __interrupt [irq] void handler(void)
#elif defined _WIN32
#define INTERRUPT( irq, handler ) void handler(void)
#define __flash const
#define __eeprom const
#define __bit char
#else
#error Unknown toolchain/environment
#endif
#endif
That will remove the memory-location qualifiers from the Win32 code and define __bit as a char. The interrupt-handler macro will turn a handler into a regular function on Win32. It does require your code to be modified, but since every toolchain does this differently, that is perhaps no bad thing.
For example in this case you would change:
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
{
...
}
to
INTERRUPT( TIM1_OVF, timer1_ovf_isr )
{
...
}
Note that you should use the appropriate target macros in the compatibility file - I have guessed at IAR, for example; you may be using a different compiler. Your compiler documentation should specify the available predefined macros; alternatively, the Pre-defined Compiler Macros project on SourceForge is a useful resource.
Some of the transformations may change the code semantically, such as swapping __bit for char: if the bit is assigned a value greater than one and then compared with 1, the embedded target is likely to yield true, while the PC build will not. It might be better transformed to _Bool, but your compiler may give warnings about implicit conversions. My suggestions may not necessarily be the best possible transformations either - consult your compiler's manual for the precise semantics and decide how best to transform them to standard C for test builds.
An alternative that preserves the proprietary semantics is to run your unit tests in an instruction-set simulator, using debugger scripting (if available) to implement stubs for hardware interaction. However, that method makes it impossible to use off-the-shelf unit-testing frameworks such as CUnit.
Depending on your toolchain, you may already have an AVR simulator available, which would allow you to run your unit tests on any PC. For example, IAR provides "C-SPY", an AVR simulator that supports a terminal window, can show register values, can support generation of interrupts, etc. Assuming you keep your unit sizes reasonable, you do not need significant infrastructure or stubbed interfaces to make this work.
One large benefit of running unit tests on your target platform (with your target compiler) is that you can account for any particular behaviors that will be caused by the platform (endianness, word size, compiler or library peculiarities, etc), compared to running in a PC environment.
Say you have a piece of code that must be different depending on the operating system your program is running on.
There's the old school way of doing it:
#ifdef WIN32
// code for Windows systems
#else
// code for other systems
#endif
But there must be cleaner solutions than this one, right?
The typical approach I've seen first hand at a half-dozen companies over my career is the use of a Hardware Abstraction Layer (HAL).
The idea is that you put the lowest level stuff into a dedicated header plus statically linked library, which includes things like:
Fixed width integers (int64_t on Linux, __int64 on Windows, etc).
Common library functions (strtok_r() vs strtok_s() on Linux vs Windows).
A common data-type setup (i.e., typedefs for all data types, such as xInt, xFloat, etc.), used throughout the code so that if the underlying type changes for a platform, or a new platform is suddenly supported, there is no need to rewrite and retest the code that depends on it, which can be extremely expensive in terms of labor (see the sketch below).
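A stripped-down illustration of such a HAL header; the x-prefixed names are invented:

// hal_types.h -- illustrative only
#include <string.h>
#if defined(_WIN32)
    typedef __int64 xInt64;
    #define x_strtok(str, delim, ctx) strtok_s((str), (delim), (ctx))
#else
    #include <stdint.h>
    typedef int64_t xInt64;
    #define x_strtok(str, delim, ctx) strtok_r((str), (delim), (ctx))
#endif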
The HAL itself is usually riddled with preprocessor directives like in your example, and that's just the reality of the matter. If you wrapped it in run-time if/else statements instead, your compilation would fail due to unresolved symbols. Or worse, you could have extra symbols included, which will increase the size of your output and likely slow down your program if that code is executed frequently.
So long as the HAL has been well-written, the header and library for the HAL give you a common interface and set of data types to use in the rest of your code with minimal hassle.
The most beautiful aspect of this, from a professional standpoint, is that all of your other code never has to concern itself with architecture or operating-system specifics. You'll have the same code flow on various systems, which will, by extension, allow you to test the same code in a variety of different manners and find bugs you wouldn't normally expect or test for. From a company's perspective, this saves a ton of money in terms of labor and in not losing clients angry about bugs in production software.
I've had to do a lot of this sort of stuff in my career, supporting code that builds and runs on an embedded device, plus on Windows, and then also having it run on different ASICs and/or revisions of ASICs.
I tend to do what you suggest, and then when things really diverge, move on to defining the interface I want fixed between platforms and having separate implementation files or even libraries. It can get really messy as the codebase gets older and more exceptions need to be added.
Sometimes you can hide this stuff in header files, so your code looks 'clean', but a lot of times that's just obfuscating what's going on behind a bunch of macro magic.
The only other thing I'd add is that I tend to make the #ifdef/#else/#endif chain fail if none of the options are defined, as shown below. This forces me to revisit the issue when a new revision comes along. Some folks prefer to have a default, but I find that just hides potential failures.
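For example (the revision macros here are placeholders):

#if defined(PLATFORM_REV_A)
    // revision A specific code
#elif defined(PLATFORM_REV_B)
    // revision B specific code
#else
    #error "No known platform revision defined - add support for the new target"
#endif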
Granted, I'm working in the embedded world where code space is paramount (since memory is small and fixed), and code cleanliness unfortunately has to take a back seat.
A widely adopted practice for non-trivial projects is to write platform-specific code in separate files (and in separate directories, where applicable), avoiding "localized" #ifdefs to the fullest possible extent.
Say you are developing a library called "Example" and example.hpp will be your library header:
example.hpp
#include "platform.hpp"
//
// here: platform-independent declarations, includes etc
//
// below: platform-specific includes
#if defined(WINDOWS)
#include "windows\win32_specific_code.hpp"
// other win32 headers
#elif defined(POSIX)
#include "posix/linux_specific_code.hpp"
// other linux headers
#endif
platform.hpp (simplified)
#if defined(WIN32) && !defined(UNIX)
#define WINDOWS
#elif defined(UNIX) && !defined(WIN32)
#define POSIX
#endif
win32_specific_code.hpp
void Function1();
win32_specific_code.cpp
#include "../platform.hpp"
#ifdef WINDOWS // We should not violate the One Definition Rule
#include "win32_specific_code.hpp"
#include <iostream>
void Function1()
{
std::cout << "You are on WINDOWS" << std::endl;
}
//...
#endif /* WINDOWS */
Of course, declare Function1() in your linux_specific_code.hpp file as well.
Then, when implementing it for Linux (in the linux_specific_code.cpp file), be sure to surround everything with conditional compilation as well, similar to what I did above (e.g., using #ifdef POSIX). Otherwise, the compiler will generate multiple definitions and you'll get a linker error.
Now, all a user of your library must do is #include <example.hpp> in their code and place either #define WINDOWS or #define POSIX in their compiler's preprocessor definitions. In fact, the second step might not be necessary at all, assuming their environment already defines either the WIN32 or the UNIX macro. This way, Function1() can already be used from the code in a cross-platform manner.
This approach is pretty much the one used by the Boost C++ Libraries. I personally find it clean and sensible. If, however, you don't like it, you can read Chromium's conventions for multi-platform development for a somewhat different strategy.
To follow on from my previous question about virtual and multiple inheritance (in a cross-platform scenario): after reading some answers, it has occurred to me that I could simplify my model by keeping the server and client classes and replacing the platform-specific classes with #ifdefs (which is what I was going to do originally).
Will using this code be simpler? It'd mean there'd be fewer files, at least! The downside is that it creates a somewhat "ugly" and slightly harder-to-read Foobar class, since there are #ifdefs all over the place. Note that our Unix Foobar source code will never be passed to the compiler, so this has the same effect as #ifdef (since we'd also use #ifdef to decide which platform-specific class to call).
class Foobar {
public:
int someData;
#if WINDOWS
void someWinFunc1();
void someWinFunc2();
#elif UNIX
void someUnixFunc1();
void someUnixFunc2();
#endif
void crossPlatformFunc();
};
class FoobarClient : public Foobar { /* ... */ };
class FoobarServer : public Foobar { /* ... */ };
Note: Some stuff (ctor, etc) left out for a simpler example.
Update:
For those who want to read more into the background of this issue, I really suggest skimming over the appropriate mailing-list thread. Things start to get interesting around the 3rd post. There is also a related code commit where you can see the real-life code in question.
Preferably, contain the platform-dependent nature of the operations within the methods, so the class declaration remains the same across platforms (i.e., use #ifdefs in the implementations).
If you can't do this, then your class ought to be two completely separate classes, one for each platform.
My personal preference is to push #ifdef magic into the makefiles, so the source code stays as clean as possible, and then have an implementation file per platform. This of course implies you can come up with an interface common to all your supported systems.
Edit:
One common way of getting around such a lowest-common-denominator design in cross-platform development is the opaque handle idiom. It's the same idea as the ioctl(2) escape route: have a method returning an opaque, forward-declared structure that is defined differently for each platform (preferably in the implementation file), and only use it when the common abstraction doesn't hold.
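A bare-bones sketch of that idiom (all names invented for illustration):

// widget.h -- common header; the handle type is only forward-declared here
struct NativeHandle;

class Widget {
public:
    NativeHandle* nativeHandle();   // escape hatch; use only when the abstraction fails
    // ... portable interface ...
};

// widget_win32.cpp -- only this file knows the real layout
#include <windows.h>
struct NativeHandle { HWND hwnd; };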
If you're fully sure that you won't use functions from the other OS on the one being compiled for, then using #ifdefs has a lot of advantages:
Code and variables that aren't used won't be compiled into the executable (however, smart linking helps here a bit)
It will be easy to see what code is live
You will be able to easily include platform-dependent files.
However, classing based on OS can still have its benefits:
You'll be able to be sure that the code compiles on all platforms when doing changes for one
The code and design will be cleaner
The latter is achieved either by #ifdefing the platform-specific code in the class bodies themselves, or by just #ifdefing out the unsupported OS's code at compile time.
My preference is to push platform-specific issues to the leaf-most modules and try to wrap them in a common interface. Put the specific methods, classes, and functions into separate translation units, and let the linker and build process determine which specific translation units to combine. This makes for much cleaner code and easier debugging.
From experience: I had a project that used #ifdef VERSION2. I spent a week debugging because one module used #ifdef VERSION_2. A subtlety like that would be easier to catch if all the version-2-specific code were in version-2 modules.
Having #ifdefs for platform-specific code is idiomatic, especially since code for one platform won't even compile if it's enabled on another. It sounds like a good approach to me.
I am developing a C++ mobile-phone application on the Symbian platform. One of the requirements is that it has to work on all Symbian phones, right from 2nd-edition phones to 5th-edition phones. Across editions there are differences in the Symbian SDKs, so I have to use preprocessor directives to conditionally compile the code relevant to the SDK for which the application is being built, like below:
#ifdef S60_2nd_ED
Code
#elif S60_3rd_ED
Code
#else
Code
#endif
Now, since the application I am developing is not trivial, it will soon grow to tens of thousands of lines of code, and preprocessor directives like the above will be spread all over. I want to know whether there is any alternative to this, or maybe a better way to use these preprocessor directives in this case.
Please help.
Well... that depends on the exact nature of the differences. If it's possible to abstract them out and isolate them into particular classes, then you can go that route. This would mean having version-specific implementations of some classes and switching entire implementations, rather than just a few lines here and there.
You'd have
MyClass.h
MyClass_S60_2nd.cpp
MyClass_S60_3rd.cpp
and so on. You can select which .cpp file to compile either by wrapping each file's entire contents in #ifdefs as above, or by controlling at the build level (through Makefiles or whatever) which files are included when you're building for various targets.
Depending on the nature of the changes, this might be far cleaner.
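For the wrapping option, each implementation file guards its entire contents, along these lines:

// MyClass_S60_2nd.cpp -- compiles to nothing on every other target
#ifdef S60_2nd_ED
#include "MyClass.h"
// ... 2nd edition implementation of MyClass ...
#endif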
I've been exactly where you are.
One trick is: even if you're going to have conditions in code, don't switch on Symbian versions. It makes it difficult to add support for new versions in the future, or to customise for handsets which are unusual in some way. Instead, identify the actual properties you're relying on, write the code around those, and then include a header file which does:
#if S60_3rd_ED
#define CAF_AGENT 1
#define HTTP_FILE_UPLOAD 1
#elif S60_2nd_ED
#define CAF_AGENT 0
#if S60_2nd_ED_FP2
#define HTTP_FILE_UPLOAD 1
#else
#define HTTP_FILE_UPLOAD 0
#endif
#endif
and so on. Obviously you can group the defines by feature rather than by version if you prefer, have completely different headers per configuration, or whatever scheme suits you.
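Application code then tests the feature rather than the version; a sketch (the two function names are invented):

#if HTTP_FILE_UPLOAD
    UploadFileOverHttp(file);       // hypothetical feature-specific call
#else
    ShowUploadNotSupportedNote();   // hypothetical fallback
#endif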
We had defines for the UI classes you inherit from, too, so that there was some UI code in common between S60 and UIQ. In fact because of what the product was, we didn't have much UI-related code, so a decent proportion of it was common.
As others say, though, it's even better to herd the variable behaviour into classes and functions where possible, and link different versions.
[Edit in response to comment:
We tried quite hard to avoid doing anything dependent on resolution - fortunately the particular app didn't make this too difficult, so our limited UI was pretty generic. The main thing where we switched on screen resolution was for splash/background images and the like. We had a script to preprocess the build files, which substituted the width and height into a file name, splash_240x320.bmp or whatever. We actually hand-generated the images, since there weren't all that many different sizes and the images didn't change often. The same script generated a .h file containing #defines of most of the values used in the build file generation.
This is for per-device builds: we also had more generic SIS files which just resized images on the fly, but we often had requirements on installed size (ROM was sometimes quite limited, which matters if your app is part of the base device image), and resizing images was one way to keep it down a bit. To support screen rotation on N92, Z8, etc, we still needed portrait and landscape versions of some images, since flipping aspect ratio doesn't give as good results as resizing to the same or similar ratio...]
In our company we write a lot of cross-platform code (game development for Win32/PS3/Xbox/etc.).
To avoid platform-related macros as much as possible, we generally use the following few tricks:
extract platform-related code into platform-abstraction libraries that have the same interface across different platforms, but not the same implementation;
split code into different .cpp files for different platforms (e.g. "pipe.h", "pipe_common.cpp", "pipe_linux.cpp", "pipe_win32.cpp", ...) - see the sketch after this list;
use macros and helper functions to unify platform-specific function calls (e.g. "#define usleep(X) Sleep((X)/1000u)");
use cross-platform third-party libraries.
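A skeletal sketch of the second trick, using the pipe files named above (the bodies are placeholders):

// pipe.h -- one interface for every platform
class Pipe {
public:
    bool open();   // implementation differs per platform
};

// pipe_win32.cpp -- listed only in the Win32 build
#include "pipe.h"
bool Pipe::open() { /* CreatePipe() etc. */ return true; }

// pipe_linux.cpp -- listed only in the Linux build
#include "pipe.h"
bool Pipe::open() { /* pipe(2) etc. */ return true; }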
You can try to define a common interface for all the platforms, if possible. Then, implement the interface for each platform.
Select the right implementation using preprocessor directives.
This way, you will have the platform selection directive in fewer places in your code (ideally, in one place, explicitly in the header file declaring the interface).
This means something like:
commoninterface.h /* declaring the common interface API. Platform identification preprocessor directives might be needed for things like common type definitions */
platform1.c /*specific implementation*/
platform2.c /*specific implementation*/
Look at SQLite. They have the same problem. They move the platform-dependent stuff into separate files and effectively compile only the needed stuff by having preprocessor directives that exclude an entire file's contents. It's a widely used approach.
No idea about an alternative, but what you can do is include different files for different versions of the OS. For example:
#ifdef S60_2nd_ED
#include "graphics2"
#elif S60_3rd_ED
#include "graphics3"
#else
#include "graphics"
#endif
You could do something like they do for the assembly definitions in the Linux kernel. Each architecture has its own directory (asm-x86, for instance). All these folders contain the same set of high-level header files presenting the same interface. When the kernel is configured, a link named asm is created targeting the appropriate asm-arch directory. This way, all the C files can simply include files like <asm/...>.
There are several differences between S60 2nd-edition and 3rd-edition applications that are not limited to code: application resource files differ, the graphic formats and the tools to pack them are different, and mmp files differ in many ways.
Based on my experience, don't try to automate it too much; have separate build scripts for 2nd edition and 3rd edition. At the code level, separate the differences into their own classes that share a common abstract API, and use flags only in rare cases.
You should try to avoid spreading #ifs through the code.
Rather, use the #if in the header files to define alternative macros, and then use the single macro in the code.
This method keeps the code slightly more readable.
Example:
Plop.h
======
#if V1
#define MAKE_CALL(X,Y) makeCallV1(X,Y)
#elif V2
#define MAKE_CALL(X,Y) makeCallV2("Plop",X,222,Y)
....
#endif
Plop.cpp
========
if (pushPlop)
{
MAKE_CALL(911,"Help");
}
To help facilitate this, split version-specific code into its own functions, then use macros to activate the functions as shown above. Also, you can wrap the changing parts of the SDK in your own class to try to provide a consistent interface; then all of the differences are managed within the wrapper class, leaving the code that does the real work tidier.
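A minimal sketch of such a wrapper, reusing the makeCall functions from the example above (the Dialer name is invented):

// Dialer.h
class Dialer {
public:
    void dial(int number, const char* message)
    {
#if V1
        makeCallV1(number, message);
#elif V2
        makeCallV2("Plop", number, 222, message);
#endif
    }
};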