Dos and Don'ts of Conditional Compile [closed] - c++

When is conditional compilation a good idea, and when is it a horribly bad idea?
By conditional compilation I mean using #ifdefs to compile certain bits of code only under certain conditions. The #defines themselves may live in a common header file or be introduced via the -D compiler flag.

The good ideas:
header guards (you can't do much better for portability)
conditional implementation (juggling with platform differences)
debug specific checks (asserts, etc...)
per suggestion: extern "C" { and } so that the same headers may be used by the C++ implementation and by the C clients of the API
The bad idea:
changing the API between compile flags, since it forces clients to change their usage with the same compile flags... urk!

Don't put #ifdefs in your code.
It makes it really hard to read and understand. Please make the code as easy to read as possible for the maintainer (he knows where you live and owns an axe).
Hide the conditional code in separate functions and use the #ifdefs to define which functions are used.
Don't use the #else part to provide a default definition. If you do that, you are saying that one platform is unique and all the others behave the same. That is unlikely; what is more likely is that you know what happens on a couple of platforms. Instead, use the #else section to put a #error, so that when the code is ported to a new platform a developer has to explicitly fix the condition for that platform.
x.h
#if defined(WINDOWS)
#define MyPlatformSleepSeconds(x) Sleep((x) * 1000)   // Win32 Sleep() takes milliseconds
#elif defined(UNIX)
#define MyPlatformSleepSeconds(x) sleep(x)            // POSIX sleep() takes seconds
#else
#error "Please define an appropriate MyPlatformSleepSeconds for your platform"
#endif
Don't be tempted to expand a macro into multiple lines of code. That leads to madness.
p.h
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) doPartA(x); \
                          doPartB(y); \
                          couple(x,y)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
If you develop the code on Windows and only test on Solaris 3.1.1 later, you may find unexpected bugs when people do things like:
int loop;
for (loop = 0; loop < 10; ++loop)
    DO_SOME_TASK(loop, loop);  // Windows: works fine.
                               // Solaris: only doPartA() is inside the loop;
                               // the other statements run once, after the loop finishes.
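If a multi-statement macro really cannot be avoided, the usual workaround (not part of the original answer, but standard practice) is to wrap the body in do { ... } while (0), so the macro expands to a single statement:
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) do { doPartA(x); doPartB(y); couple(x,y); } while (0)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
With this form the for-loop above behaves the same on every platform: the whole macro body is one statement, and the trailing semicolon still parses.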

Basically, you should try to keep the amount of conditionally compiled code to a minimum, because you should be trying to test all of it, and having lots of conditions makes that more difficult. It also reduces the readability of the code; conditionally compiling whole files is clearer, e.g., putting platform-specific code in a separate file for each platform and having them all present the same API to the rest of the program. Also try to avoid using it in function headers; again, that's a place where it is particularly confusing.
But that's not to say that you should never use conditional compilation. Just try to keep it short and minimal. (Where I can, I use conditional compilation to control the definitions of other macros which are then just used in the rest of the code; that seems clearer, to me at least.)
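A minimal sketch of that last pattern (the names ENABLE_TRACING and TRACE are illustrative, not from the answer above): the #if block only chooses the definition of a macro, and the rest of the code uses that macro without any further conditionals.
// trace.hpp
#if defined(ENABLE_TRACING)
#include <cstdio>
#define TRACE(msg) std::fprintf(stderr, "TRACE: %s\n", (msg))
#else
#define TRACE(msg) ((void)0)  // compiles away entirely
#endif
// elsewhere in the code - no #ifdefs needed here
void connect()
{
    TRACE("connecting");
    // ...
}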

It's a bad idea whenever you don't know what you're doing. It can be a good idea when you're effectively solving an issue that way :).
The way you describe conditional compilation, include guards are part of it. They are not only a good idea to use; they are how you avoid compilation errors.
For me, conditional compilation is also a way to target multiple compilers and operating systems. I'm involved in a lib that's supposed to be compilable on Windows XP and newer, 32 or 64 bit, using MinGW and Visual C++, on Linux 32 and 64 bit using gcc/g++, and on MacOS using I-don't-know-what (I'm not maintaining that, but I assume it's a gcc port). Without the preprocessor conditions, it would be pretty much impossible to create a single source that compiles anywhere.

Another pragmatic use of conditional compilation is to "comment out" sections of code which themselves contain standard C comments (i.e. /* */). These comments do not nest, for example:
/* comment out block of code
.... code ....
/* This is a standard
* comment.
*/ ... oops! The compiler will try to compile the code after this closing comment.
.... code ....
end of block of code*/
(As you can see in the syntax highlighting, StackOverflow does not nest comments.)
Instead you can use #ifdef to get the right effect, for example:
#ifdef _NOT_DEFINED_
.... code ....
/* This is a standard
* comment.
*/
.... code ....
#endif
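A closely related idiom, not mentioned above, is #if 0, which achieves the same effect without relying on a macro name that merely happens to be undefined:
#if 0
.... code ....
/* This is a standard
 * comment.
 */
.... code ....
#endif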

In the past, if you wanted to produce truly portable code, you had to resort to some form of conditional compilation. With the proliferation of portable libraries (such as APR, Boost, etc.), this reason carries little weight IMHO. If you are using conditional compilation simply to compile out blocks of code that are not needed for particular builds, you should really revisit your design - I imagine it would become a nightmare to maintain.
Having said all that, if you do need to use conditional compilation, I would hide as much of it as I can away from the main body of the code and limit it to very specific cases that are very well understood.

Good/justifiable uses are based on cost/benefit analysis. Obviously, people here are very conscious of the risks:
in linking objects that saw different versions of classes, functions etc.
in making code hard to understand, test and reason about
But, there are uses which often fall into the net-benefit category:
header guards
code customisations for distinct software "ecosystems", such as Linux versus Windows, Visual C++ versus GCC, CPU-specific optimisations, and sometimes word-size and endianness factors (though with C++ you can often determine these at compile time via template hackery, which may prove messier still) - this abstracts away lower-level differences to provide a consistent API across those environments
interacting with existing code that uses preprocessor defines to select versions of APIs, standards, behaviours, thread safety, protocols etc. (sad but true)
compilation that may use optional features when available (think of GNU configure scripts and all the tests they perform on OS interfaces etc)
request that extra code be generated in a translation unit, such as adding main() for a standalone app versus without for a library
controlling code inclusion for distinct logical build modes such as debug and release (a minimal sketch follows this list)
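For that last point, a sketch of the usual debug/release pattern (the DBG_LOG name and checked_divide function are illustrative; NDEBUG is the standard release-mode convention also used by assert):
#include <cassert>
#include <iostream>
#ifdef NDEBUG
#define DBG_LOG(msg) ((void)0)  // release build: compiles to nothing
#else
#define DBG_LOG(msg) (std::cerr << "[debug] " << (msg) << '\n')
#endif
int checked_divide(int a, int b)
{
    assert(b != 0);       // checked only in debug builds
    DBG_LOG("dividing");  // emitted only in debug builds
    return a / b;
}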

It is always a bad idea. What it does is effectively create multiple versions of your source code, all of which need to be tested, which is a pain, to say the least. Unfortunately, like many bad things it is sometimes unavoidable. I use it in very small amounts when writing code that needs to be ported between Windows and Linux, but if I found myself doing it a lot, I would consider alternatives, such as having two separate development sub-trees.

Related

How should I write my C++ to be prepared for C++ modules?

There are already two compilers that support C++ modules:
Clang: http://clang.llvm.org/docs/Modules.html
MS VS 2015: http://blogs.msdn.com/b/vcblog/archive/2015/12/03/c-modules-in-vs-2015-update-1.aspx
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
Is it possible to use modules and still maintain compatibility with older compilers that do not support it?
There are already two compilers that support C++ modules
clang: http://clang.llvm.org/docs/Modules.html
MS VS 2015: http://blogs.msdn.com/b/vcblog/archive/2015/12/03/c-modules-in-vs-2015-update-1.aspx
The Microsoft approach appears to be the one gaining the most traction, mainly because Microsoft is throwing a lot more resources at its implementation than any of the clang folk currently. See https://llvm.org/bugs/buglist.cgi?list_id=100798&query_format=advanced&component=Modules&product=clang for what I mean: there are some big showstopper bugs in Modules for C++, whereas Modules for C, and especially Objective-C, look much more usable in real-world code. Visual Studio's biggest and most important customer, Microsoft itself, is pushing hard for Modules because they solve a whole ton of internal build-scalability problems, and Microsoft's internal code is some of the hardest C++ to compile anywhere in existence, so you can't throw any compiler other than MSVC at it (e.g. good luck getting clang or GCC to compile 40k-line functions). Therefore the clang build tricks used by Google etc. aren't available to Microsoft, and they have a huge, pressing need to get this fixed sooner rather than later.
This isn't to say there aren't some serious design flaws with the Microsoft proposal when applied in practice to large real world code bases. However Gaby is of the view you should refactor your code for Modules, and whilst I disagree, I can see where he is coming from.
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
In so far as Microsoft's compiler is currently expected to implement Modules, you ought to make sure your library is usable in all of these forms:
Dynamic library
Static library
Header only library
Something very surprising to many people is that C++ Modules, as currently expected to be implemented, keep those distinctions, so you now get a C++ Module variant of all three of the above, with the first looking most like what people expect a C++ Module to be and the last looking most like a more useful precompiled header. The reason you ought to support those variants is that you can reuse most of the same preprocessor machinery to support C++ Modules with very little extra work.
A later Visual Studio will allow linking of the module definition file (the .ifc file) as a resource into DLLs. This will finally eliminate the need for the .lib and .dll distinction on MSVC, you just supply a single DLL to the compiler and it all "just works" on module import, no headers or anything else needed. This of course smells a bit like COM, but without most of the benefits of COM.
Is it possible to use modules in a single codebase and still maintain compatibility with older compilers that do not support it?
I'm going to assume you meant the question with the phrase "in a single codebase" inserted, as quoted above.
The answer is generally yes with even more preprocessor macro fun. #include <someheader> can turn into an import someheader within the header because the preprocessor still works as usual. You can therefore mark up individual library headers with C++ Modules support along something like these lines:
// someheader.hpp
#if MODULES_ENABLED
# ifndef EXPORTING_MODULE
import someheader; // Bring in the precompiled module from the database
// Do NOT set NEED_DEFINE so this include exits out doing nothing more
# else
// We are at the module-generation stage, so mark up the namespace for export
# define SOMEHEADER_DECL export
# define NEED_DEFINE
# endif
#else
// Modules are not turned on, so declare everything inline as per the old way
# define SOMEHEADER_DECL
# define NEED_DEFINE
#endif
#ifdef NEED_DEFINE
SOMEHEADER_DECL namespace someheader
{
// usual classes and decls here
}
#endif
Now in your main.cpp or whatever, you simply do:
#include "someheader.hpp"
... and if the compiler had /experimental:modules /DMODULES_ENABLED then your application automagically uses the C++ Modules edition of your library. If it doesn't, you get inline inclusion as we've always done.
I reckon this is the minimum possible set of changes to your source code to make it Modules-ready now. You will note I have said nothing about build systems; this is because I am still debugging the cmake tooling I've written to get all this stuff to "just work" seamlessly, and I expect to be debugging it for some months yet. Expect to see it maybe at a C++ conference next year or the year after :)
Is it possible to use modules and still maintain compatibility with older compilers that do not support it?
Not cleanly, no. It might be possible using some #ifdef magic like this:
#ifdef CXX17_MODULES
...
#else
#pragma once, #include "..." etc.
#endif
but this means you still need to provide .h support and thus lose all the benefits, plus your codebase looks quite ugly now.
If you do want to follow this approach, the easiest way to detect the CXX17_MODULES macro (which I just made up) is to have the build system of your choice compile a small test program that uses modules, and then define a global flag, visible to everyone, that says whether the compilation succeeded.
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
It depends. If your project is enterprise software that puts food on the table, I'd wait a few years after the feature lands in stable releases so that it becomes widely adopted. On the other hand, if your project can afford to be bleeding-edge, by all means use modules.
Basically, it's the same story as with Python 3 and Python 2, or, less relevantly, PHP 7 and PHP 5. You need to find a balance between being a good, up-to-date programmer and not annoying people on Debian ;-)

is there a way to run C code designed for an embedded micro-controller on a normal computer?

I have C code written for an ATmega16 chip, and it is full of keywords like:
flash, eeprom, bit
and macros(?) like
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
that come before function signatures.
Now what I want to do is write and run unit tests that verify the correctness of the logic of the controller unit, and I want to be able to run these tests on any computer, without needing the "device" that the code targets.
I searched a lot and came across "abstracting the hardware" and "replacing it with stubs" kinds of solutions, but I'm not sure how I can abstract something like "interrupt [TIM1_OVF]" in the code!
I was wondering if there are any special tools that provide an environment for running this sort of code?
And also, if I am going about it wrong, can anybody point me in the right direction, given that changing or rewriting (!) the micro-controller's code might not be an option?
Thanks a bunch.
Your examples are not ISO C; they are compiler-specific extensions, and they are not even common across AVR compilers, let alone architectures. In many cases they can be worked around by defining macros that require little or no modification of the code. That is a good idea in any case if you want your code to be portable even across different vendors' AVR compilers, although a combination of techniques may be required.
Most compilers support an "always include" option that allows a header file to be included from the command line without an explicit #include directive in the source (for example GCC's -include option or MSVC's /FI switch). Creating a header with your compatibility macros, and including it either implicitly as described or explicitly in the code, is a useful technique. For example, for the issues you have mentioned, you might have:
// compatibility.h
#if !defined COMPATIBILITY_INCLUDE
#define COMPATIBILITY_INCLUDE
#if defined __IAR_SYSTEMS_ICC__
#define INTERRUPT( irq, handler ) __interrupt [irq] void handler(void)
#elif defined _WIN32
#define INTERRUPT( irq, handler ) void handler(void)
#define __flash const
#define __eeprom const
#define __bit char
#else
#error Unknown toolchain/environment
#endif
#endif // COMPATIBILITY_INCLUDE
That will remove the memory-location qualifiers from the Win32 build and define __bit as a char. The interrupt-handler macro will turn a handler into a regular function on Win32. It does require your code to be modified, but since every toolchain declares interrupt handlers differently, that is perhaps no bad thing.
For example in this case you would change:
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
{
...
}
to
INTERRUPT( TIM1_OVF, timer1_ovf_isr )
{
...
}
Note that you should use appropriate target macros in the compatibility file - I have guessed at IAR, for example; you may be using a different compiler. Your compiler documentation should specify the available predefined macros; alternatively, the Pre-defined Compiler Macros "project" on SourceForge is a useful resource.
Some of the transformations may change the code semantically. Swapping __bit for char is one example: if the bit is assigned a value greater than one and then compared with 1, the embedded target is likely to yield true while the PC build will not. It might be better transformed to _Bool, but your compiler may then warn about implicit conversions. My suggestions may not be the best possible transformations either - consult your compiler's manual for the precise semantics and decide how best to transform them to standard C for test builds.
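A small illustration of that __bit pitfall, assuming the char mapping from compatibility.h above (the target-side behaviour follows the description in the paragraph above; check your compiler's manual):
#define __bit char   /* the PC test-build mapping shown above */
int main(void)
{
    __bit ready = 2; /* on the AVR target a bit typically ends up holding 1;  */
                     /* with the char mapping it keeps the value 2            */
    if (ready == 1) {
        /* taken on the embedded target, but NOT in the PC test build */
    }
    return 0;
}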
An alternative that preserves the proprietary semantics is to run your unit tests in an instruction-set simulator, using debugger scripting (if available) to implement stubs for hardware interaction; however, that method makes it impossible to use off-the-shelf unit-testing frameworks such as CUnit.
Depending on your toolchain, you may already have an AVR simulator available, which would allow you to run your unit tests on any PC. For example, IAR provides "C-SPY", an AVR simulator that supports a terminal window, can show register values, can generate interrupts, etc. Assuming you keep your unit sizes reasonable, you do not need significant infrastructure or stubbed interfaces to make this work.
One large benefit of running unit tests on your target platform (with your target compiler) is that you can account for any particular behaviors that will be caused by the platform (endianness, word size, compiler or library peculiarities, etc), compared to running in a PC environment.

Best (cleanest) way for writing platform specific code

Say you have a piece of code that must be different depending on the operating system your program is running on.
There's the old school way of doing it:
#ifdef WIN32
// code for Windows systems
#else
// code for other systems
#endif
But there must be cleaner solutions than this one, right?
The typical approach I've seen first hand at a half-dozen companies over my career is the use of a Hardware Abstraction Layer (HAL).
The idea is that you put the lowest level stuff into a dedicated header plus statically linked library, which includes things like:
Fixed width integers (int64_t on Linux, __int64 on Windows, etc).
Common library functions (strtok_r() vs strtok_s() on Linux vs Windows).
A common data-type setup (i.e. typedefs for all data types, such as xInt, xFloat, etc., used throughout the code so that if the underlying type changes for a platform, or a new platform is suddenly supported, there is no need to rewrite and re-test the code that depends on it, which can be extremely expensive in terms of labor).
The HAL itself is usually riddled with preprocessor directives like the ones in your example, and that's just the reality of the matter. If you replace them with run-time if/else statements, your compilation will fail due to unresolved symbols. Or worse, you could have extra symbols included, which will increase the size of your output and likely slow down your program if that code is executed frequently.
So long as the HAL has been well-written, the header and library for the HAL give you a common interface and set of data types to use in the rest of your code with minimal hassle.
The most beautiful aspect of this, from a professional standpoint, is that all of your other code doesn't have to ever concern itself with architecture or operating system specifics. You'll have the same code-flow on various systems, which will by extension, allow you to test the same code in a variety of different manners, and find bugs you wouldn't normally expect or test for. From a company's perspective, this saves a ton of money in terms of labor, and not losing clients due to them being angry with bugs in production software.
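A minimal sketch of what such a HAL header might look like (the names hal.h, xInt64, HAL_STRTOK, and hal_sleep_ms are illustrative, not from the answer above):
// hal.h - hypothetical platform abstraction header
#ifndef HAL_H
#define HAL_H
#include <string.h>
#if defined(_WIN32)
typedef __int64 xInt64;  // fixed-width integer on Windows
#define HAL_STRTOK(s, delim, ctx) strtok_s((s), (delim), (ctx))
#elif defined(__linux__)
#include <stdint.h>
typedef int64_t xInt64;  // fixed-width integer on Linux
#define HAL_STRTOK(s, delim, ctx) strtok_r((s), (delim), (ctx))
#else
#error "Unsupported platform - add a HAL port for it"
#endif
// The rest of the code base includes hal.h and never touches platform headers or #ifdefs directly.
void hal_sleep_ms(unsigned int ms);  // implemented per platform in the HAL library
#endif // HAL_H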
I've had to do a lot of this sort of stuff in my career, supporting code that builds and runs on an embedded device, plus on Windows, and then also having it run on different ASICs and/or revisions of ASICs.
I tend to do what you suggest and then when things really diverge, move on to defining the interface I desire to be fixed between platforms and then having separate implementation files or even libraries. It can get really messy as the codebase gets older and more exceptions need to be added.
Sometimes you can hide this stuff in header files, so your code looks 'clean', but a lot of times that's just obfuscating what's going on behind a bunch of macro magic.
The only other thing I'd add is I tend to make the #ifdef/#else/#endif chain fail if none of the options are defined. This forces me to revisit the issue when a new revision comes along. Some folks prefer it to have a default, but I find that just hides potential failures.
Granted, I'm working in the embedded world where code space is paramount (since memory is small and fixed), and code cleanliness unfortunately has to take a back seat.
A widely adopted practice for non-trivial projects is to write platform-specific code in separate files (and in separate directories, where applicable), avoiding "localized" #ifdefs to the fullest possible extent.
Say you are developing a library called "Example" and example.hpp will be your library header:
example.hpp
#include "platform.hpp"
//
// here: platform-independent declarations, includes etc
//
// below: platform-specific includes
#if defined(WINDOWS)
#include "windows\win32_specific_code.hpp"
// other win32 headers
#elif defined(POSIX)
#include "posix/linux_specific_code.hpp"
// other linux headers
#endif
platform.hpp (simplified)
#if defined(WIN32) && !defined(UNIX)
#define WINDOWS
#elif defined(UNIX) && !defined(WIN32)
#define POSIX
#endif
win32_specific_code.hpp
void Function1();
win32_specific_code.cpp
#include "../platform.hpp"
#ifdef WINDOWS // We should not violate the One Definition Rule
#include "win32_specific_code.hpp"
#include <iostream>
void Function1()
{
std::cout << "You are on WINDOWS" << std::endl;
}
//...
#endif /* WINDOWS */
Of course, declare Function1() in your linux_specific_code.hpp file as well.
Then, when implementing it for Linux (in the linux_specific_code.cpp file), be sure to surround everything with the conditional-compilation guards as well, similar to what I did above (e.g. using #ifdef POSIX). Otherwise, multiple definitions will be generated and you'll get a linker error.
Now, all a user of your library must do is #include <example.hpp> in his code, and place either #define WINDOWS or #define POSIX in his compiler's preprocessor definitions. In fact, the second step might not be necessary at all, assuming his environment already defines one of the WIN32 or UNIX macros. This way, Function1() can already be used from the code in a cross-platform manner.
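For illustration, the consuming code might look like this (a hypothetical user of the library, not part of the original answer):
// user_main.cpp - built with WINDOWS or POSIX defined, or relying on WIN32/UNIX being predefined
#include <example.hpp>
int main()
{
    Function1();  // prints the Windows or POSIX message, depending on the build
    return 0;
}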
This approach is pretty much the one used by the Boost C++ Libraries. I personally find it clean and sensible. If, however, you don't like it, you can have a read at Chromium's conventions for multi-platform development for a somewhat different strategy.

Indicate C++ standard in source in a standard way

Standard compliant C++ compilers define a __cplusplus macro which may
be inspected during preprocessing to determine under what standard a
file is being compiled, e.g:
#if __cplusplus < 201103L
#error "You need a C++11 compliant compiler."
#endif
#include <iostream>
#include <vector>
int main(){
    std::vector<int> v {1, 2, 3};
    for (auto i : v){
        std::cout << i << " ";
    }
    std::cout << std::endl;
    return 0;
}
My question is:
Is there a standard way to indicate what standard a source
file should be compiled with?
That would allow build tools to inspect sources prior to compilation
to determine the appropriate argument for -std= (cf. shebang's which
can indicate scripting language/version: #!/usr/bin/env python3).
A non standard and brittle way I can think of is looking for the
preprocessor checks of __cplusplus but in the example above I could
also have written:
#if __cplusplus <= 199711L
#error "You need a C++11 compliant compiler."
#endif
hence, writing e.g. a regex would become quite tricky to catch all variations.
EDIT:
While I sympathize with the answer by @Gary, which suggests relying on a build system,
it assumes that we actually will have a build step.
But you can already today:
use an interpreter to run a C++ program using e.g. CINT
or use a source to source translation using e.g. rosecompiler
My question is also about indicating that the source is C++ and what version
it was intended for (imagine someone digging out my code 70 years from now
when C++ might be as popular as say Cobol is today).
I guess the equivalent thing I would be looking for is the C++ equivalent of HTML's:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
C++ Standards in a way are somewhat like developing against a library. In that sense, libraries typically evolve in a way that slowly deprecates old functions while making access to new functions. The typical way is the introduction of new methods or signatures while still allowing access to the old ones.
As a simple example, for instance, you might make an app for the iPhone that is backwards compatible with IOS 4 and above. You don't get the option to cherry pick what specific versions you want to support. This is good because otherwise you open code evolution up to a matrix of possibilities, making your code harder to understand and maintain.
Alternatively, you may introduce preprocessor instructions to build certain pieces conditionally depending on a version or flag of some sort. These are temporary measures, however, and should be removed as the code evolves.
So I think, for answering this question as-is, the better question to ask oneself in this situation is: what will adding something like this actually solve, and will it add needless complexity (one of the code smells of bad design)?
In this situation, and from experience, I personally think you're better off sticking with one standard. I think you'll find that trying to differentiate standards by sprinkling various preprocessor #ifdefs and #ifndefs around is going to make your code base difficult to understand and manage. Even if you had one include file with the definition of what version is allowed, included by all other files, it becomes yet another file to manage... not to mention that when you change it you have to recompile everything that includes it.
If you're worried about someone building your code base with the wrong standard, use a build system that doesn't require developers to input that information - for instance Make, Ant, or CMake. It makes the building of your software simple and clearly defines how the project should be compiled in a repeatable fashion. If you go this route, you'll see that trying to protect the code from being compiled improperly becomes a non-issue.
Also, if they go out of their way and compile with the wrong standard, they'll be greeted with plenty of compiler errors =)

Optimizing code for various C/C++ compilers

For those who develop software for multiple platforms: how do you handle the possibility that one compiler might do certain things better than another?
Say you develop for OS X, Windows, and Linux, and you are using Clang/LLVM, VS, and GCC.
For example, if someone compiles your app on OS X using GCC and another person compiles it on OS X using the Intel compilers, you could optimize sections of the code for the Intel compilers when that person has them.
Would you just check a Preprocessor directive?
#ifdef __GCC_
// do it this way
#endif
#ifdef __INTEL__
// do it this way
#endif
#ifdef __GCC_WITH C++_V11_Support__
// do it this way
#endif
#ifdef __WINDOWS_VISUAL_STUDIO
// do it this way
#endif
Or is there a better way?
How does one find a list of which directives/macros a compiler offers for checking the compiler, its version, etc.?
Don't choose the implementation based on predefined macros. Let the build system control it.
This lets you build and compare multiple implementations against each other during unit testing.
Typically, optimization follows the traditional 80/20 or 90/10 rule of "20% of the code takes 80% of the time to run" (and "20% of the code takes 80% of the time to develop"). Substitute 80/20 for 90/10 if you like - it's nearly always somewhere between those two...
So, the first stage of "do we optimize for a particular compiler" is to figure out what parts of your code are slow, and if you can make it any better in a generic way that works on all compilers (e.g. passing const reference rather than a copy of a large object). Once you have exhausted all generic improvements to the code, you may want to look at compiler specific optimizations - but that really requires that you gain enough that it really is worth the extra maintenance of having code that is different between the different compilers.
In general, I would very much avoid the "things are different in different compilers".
Generally speaking, compilers are written to optimize common code, not something specialized written specifically for one compiler. So in general you should just focus on writing clean code and using the fastest algorithms. However, some compilers can be given hints; gcc, for instance, has attributes, and using these attributes lets the compiler do its job better.
For instance, using the noreturn attribute allows gcc to discard the function-return code, thereby minimizing code size. I guess a lot of compilers have similar hinting schemes.
One could then do:
#ifdef __GNUC__
#define NO_RETURN __attribute__((noreturn))
#else
#define NO_RETURN
#endif
And use NO_RETURN in your code.
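For example (a minimal sketch; fatal_error is a hypothetical function, and NO_RETURN is the macro defined above):
#include <cstdio>
#include <cstdlib>
NO_RETURN void fatal_error(const char *msg);  // tells gcc this function never returns
void fatal_error(const char *msg)
{
    std::fprintf(stderr, "fatal: %s\n", msg);
    std::exit(EXIT_FAILURE);
}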