Is it a bad practice to use #ifdef in code? - c++

I have to use a lot of #ifdef i386 and x86_64 for architecture-specific code, and sometimes #ifdef MAC or #ifdef WIN32 and so on for platform-specific code.
We have to keep the code base common and portable.
But we have to follow a guideline that the use of #ifdef is strictly forbidden, and I don't understand why.
As an extension to this question, I would also like to understand when it is appropriate to use #ifdef.
For example, dlopen() cannot open a 32-bit binary from a 64-bit process and vice versa, so that restriction is architecture-specific. Can we use #ifdef in such a situation?

With #ifdef instead of writing portable code, you're still writing multiple pieces of platform-specific code. Unfortunately, in many (most?) cases, you quickly end up with a nearly impenetrable mixture of portable and platform-specific code.
You also frequently get #ifdef being used for purposes other than portability (defining what "version" of the code to produce, such as what level of self-diagnostics will be included). Unfortunately, the two often interact and get intertwined. For example, somebody porting some code to MacOS decides that it needs better error reporting, which he adds -- but makes it specific to MacOS. Later, somebody else decides that the better error reporting would be awfully useful on Windows, so he enables that code by automatically #define-ing MACOS if WIN32 is defined -- but then adds "just a couple more" #ifdef WIN32 to exclude some code that really is MacOS-specific when Win32 is defined. Of course, we also add in the fact that MacOS is based on BSD Unix, so when MACOS is defined, it automatically defines BSD_44 as well -- but (again) turns around and excludes some BSD "stuff" when compiling for MacOS.
This quickly degenerates into code like the following example (taken from #ifdef Considered Harmful):
#ifdef SYSLOG
#ifdef BSD_42
    openlog("nntpxfer", LOG_PID);
#else
    openlog("nntpxfer", LOG_PID, SYSLOG);
#endif
#endif

#ifdef DBM
    if (dbminit(HISTORY_FILE) < 0)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "couldn't open history file: %m");
#else
        perror("nntpxfer: couldn't open history file");
#endif
        exit(1);
    }
#endif

#ifdef NDBM
    if ((db = dbm_open(HISTORY_FILE, O_RDONLY, 0)) == NULL)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "couldn't open history file: %m");
#else
        perror("nntpxfer: couldn't open history file");
#endif
        exit(1);
    }
#endif

    if ((server = get_tcp_conn(argv[1], "nntp")) < 0)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "could not open socket: %m");
#else
        perror("nntpxfer: could not open socket");
#endif
        exit(1);
    }

    if ((rd_fp = fdopen(server, "r")) == (FILE *) 0) {
#ifdef SYSLOG
        syslog(LOG_ERR, "could not fdopen socket: %m");
#else
        perror("nntpxfer: could not fdopen socket");
#endif
        exit(1);
    }

#ifdef SYSLOG
    syslog(LOG_DEBUG, "connected to nntp server at %s", argv[1]);
#endif
#ifdef DEBUG
    printf("connected to nntp server at %s\n", argv[1]);
#endif

    /*
     * ok, at this point we're connected to the nntp daemon
     * at the distant host.
     */
This is a fairly small example with only a few macros involved, yet reading the code is already painful. I've personally seen (and had to deal with) much worse in real code. Here the code is ugly and painful to read, but it's still fairly easy to figure out which code will be used under what circumstances. In many cases, you end up with much more complex structures.
To give a concrete example of how I'd prefer to see that written, I'd do something like this:
if (!open_history(HISTORY_FILE)) {
    logerr(LOG_ERR, "couldn't open history file");
    exit(1);
}

if ((server = get_nntp_connection(server)) == NULL) {
    logerr(LOG_ERR, "couldn't open socket");
    exit(1);
}

logerr(LOG_DEBUG, "connected to server %s", argv[1]);
In such a case, it's possible that our definition of logerr would be a macro instead of an actual function. It might be sufficiently trivial that it would make sense to have a header with something like:
#ifdef SYSLOG
#define logerr(level, msg, ...) /* ... */
#else
enum {LOG_DEBUG, LOG_ERR};
#define logerr(level, msg, ...) /* ... */
#endif
[for the moment, assuming a preprocessor that can/will handle variadic macros]
Given your supervisor's attitude, even that may not be acceptable. If so, that's fine: instead of a macro, implement that capability as a function. Isolate each implementation of the function(s) in its own source file and build the files appropriate to the target. If you have a lot of platform-specific code, you usually want to isolate it into a directory of its own, quite possibly with its own makefile¹, and have a top-level makefile that just picks which other makefiles to invoke based on the specified target.
¹ Some people prefer not to do this. I'm not really arguing one way or the other about how to structure makefiles, just noting that it's a possibility some people find/consider useful.
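As a rough sketch of that file-per-platform approach (the file names and the logerr signature here are invented for illustration, not taken from the code above): the interface lives in one header with no #ifdef, and the build system compiles exactly one of the implementation files.

// logerr.h : portable interface, no #ifdef anywhere
#ifndef LOGERR_H
#define LOGERR_H
enum LogLevel { LOG_LEVEL_DEBUG, LOG_LEVEL_ERR };
void logerr(LogLevel level, const char* fmt, ...);
#endif

// logerr_syslog.cpp : compiled only when the target has syslog (selected by the makefile)
#include "logerr.h"
#include <cstdarg>
#include <syslog.h>          // vsyslog() is a common, though non-standard, extension
void logerr(LogLevel level, const char* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vsyslog(level == LOG_LEVEL_ERR ? LOG_ERR : LOG_DEBUG, fmt, args);
    va_end(args);
}

// logerr_stderr.cpp : compiled for targets without syslog
#include "logerr.h"
#include <cstdarg>
#include <cstdio>
void logerr(LogLevel, const char* fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    std::vfprintf(stderr, fmt, args);
    std::fputc('\n', stderr);
    va_end(args);
}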

You should avoid #ifdef whenever possible. IIRC, it was Scott Meyers who wrote that with #ifdefs you do not get platform-independent code; instead you get code that depends on multiple platforms. Also, #define and #ifdef live outside the core language: #defines have no notion of scope, which can cause all sorts of problems. The best way is to keep the use of the preprocessor to a bare minimum, such as include guards. Otherwise you are likely to end up with a tangled mess which is very hard to understand, maintain, and debug.
Ideally, if you need to have platform-specific declarations, you should have separate platform-specific include directories, and handle them appropriately in your build environment.
If you have platform-specific implementations of certain functions, you should also put them into separate .cpp files and, again, sort them out in the build configuration.
Another possibility is to use templates. You can represent your platforms with empty dummy structs, and use those as template parameters. Then you can use template specialization for platform-specific code. This way you would be relying on the compiler to generate platform-specific code from templates.
Of course, the only way for any of this to work, is to very cleanly factor out platform-specific code into separate functions or classes.
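As a hedged sketch of that template approach (the tag types and the FileOps example are invented for illustration, not a fixed recipe):

// Empty tag types representing each platform.
struct LinuxPlatform {};
struct WindowsPlatform {};

// Primary template left undefined: using an unsupported platform fails at compile time.
template <typename Platform> struct FileOps;

template <> struct FileOps<LinuxPlatform> {
    static bool exists(const char* path);   // defined in a Linux-only .cpp using stat()
};

template <> struct FileOps<WindowsPlatform> {
    static bool exists(const char* path);   // defined in a Windows-only .cpp using GetFileAttributesA()
};

// A per-platform configuration header, chosen by the build system, contains one line such as:
typedef LinuxPlatform CurrentPlatform;

// Generic code is then written once against the template:
inline bool have_config_file() { return FileOps<CurrentPlatform>::exists("app.cfg"); }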

I have seen 3 broad usages of #ifdef:
isolate platform specific code
isolate feature specific code (not all compiler versions / language dialects are born equal)
isolate compilation mode code (NDEBUG, anyone?)
Each has the potential to create a huge mess of unmaintainable code, and should be treated accordingly, but not all of them can be dealt with in the same fashion.
1. Platform specific code
Each platform comes with its own set of specific includes, structures and functions to deal with things like IO (mainly).
In this situation, the simplest way to deal with this mess is to present a unified front, and have platform specific implementations.
Ideally:
project/
    include/namespace/
        generic.h
    src/
        unix/
            generic.cpp
        windows/
            generic.cpp
This way, the platform-specific stuff is all kept together in a single file (per header), so it is easy to locate. The generic.h file describes the interface, and the right generic.cpp is selected by the build system. No #ifdef.
If you want inline functions (for performance), then a platform-specific genericImpl.i file providing the inline definitions can be included at the end of the generic.h file with a single #ifdef.
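A minimal sketch of that layout (page_size() is just an invented example function, not from the answer):

// include/namespace/generic.h : the single, portable interface
#ifndef GENERIC_H
#define GENERIC_H
#include <cstddef>
namespace sys { std::size_t page_size(); }
#endif

// src/unix/generic.cpp : compiled only for Unix targets
#include "namespace/generic.h"
#include <unistd.h>
std::size_t sys::page_size()
{
    return static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
}

// src/windows/generic.cpp : compiled only for Windows targets
#include "namespace/generic.h"
#include <windows.h>
std::size_t sys::page_size()
{
    SYSTEM_INFO info;
    GetSystemInfo(&info);
    return static_cast<std::size_t>(info.dwPageSize);
}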
2. Feature specific code
This gets a bit more complicated, but is usually experienced only by libraries.
For example, Boost.MPL is much easier to implement with compilers having variadic templates.
Or, compilers supporting move constructors allow you to define more efficient versions of some operations.
There is no paradise here. If you find yourself in such a situation... you end up with a Boost-like file (aye).
3. Compilation Mode code
You can generally get away with a couple of #ifdefs. The traditional example is assert:
#ifdef NDEBUG
# define assert(X) (void)(0)
#else // NDEBUG
# define assert(X) do { if (!(X)) { assert_impl(__FILE__, __LINE__, #X); } } while(0)
#endif // NDEBUG
Then the use of the macro itself is not affected by the compilation mode, so at least the mess is contained within a single file.
Beware: there is a trap here. If the macro does not expand to something that counts as a single statement when "ifdefed away", you risk changing the control flow under some circumstances. Also, macros that do not evaluate their arguments may lead to strange behavior when there are function calls (with side effects) in the mix, but in this case this is desirable, as the computation involved may be expensive.
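To make the trap concrete, here is a small illustration with made-up names: a logging macro that expands to two statements instead of one single statement silently changes the control flow when used under an unbraced if.

#include <cstdio>

void log_prefix()                  { std::printf("[error] "); }
void log_message(const char* msg)  { std::printf("%s\n", msg); }

// Hypothetical macro that forgets the usual do { ... } while (0) wrapper.
#define LOG_ERROR(msg) log_prefix(); log_message(msg)

void handle(bool failed)
{
    if (failed)
        LOG_ERROR("operation failed");
    // Expands to: if (failed) log_prefix(); log_message("operation failed");
    // so log_message() runs even when failed is false: the flow has changed.
}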

Many programs use such a scheme to handle platform-specific code. A better way, and also a way to clean up the code, is to put all code specific to one platform in one file, naming the functions the same and giving them the same arguments. Then you just select which file to build depending on the platform.
There might still be some places left where you cannot extract the platform-specific code into separate functions or files, and you still might need the #ifdef parts, but hopefully they should be minimized.

I prefer splitting the platform dependent code & features into separate translation units and letting the build process decide which units to use.
I've lost a week of debugging time due to misspelled identifiers. The compiler does not check preprocessor identifiers across translation units. For example, one unit may use "WIN386" and another "WIN_386". Platform macros are a maintenance nightmare.
Also, when reading the code, you have to check the build instructions and header files to see which identifiers are defined. There is also a difference between an identifier existing and it having a value: some code may test for the existence of an identifier while other code tests its value, and the latter test silently evaluates the identifier as 0 when it is not defined at all, which hides misspellings.
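A small illustration of that difference (FEATURE_FOO and FEATURE_BAR are made-up names): #ifdef checks only whether the identifier exists, while #if evaluates its value, and an undefined identifier is silently treated as 0 by #if.

#define FEATURE_FOO 0     // defined, but with the value 0

#ifdef FEATURE_FOO        // true: the identifier exists
// this branch is compiled
#endif

#if FEATURE_FOO           // false: the value is 0
// this branch is not compiled
#endif

#if FEATURE_BAR           // never defined (perhaps misspelled): silently treated as 0, no error by default
// this branch is not compiled either
#endif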
Just believe they are evil and prefer not to use them.

Not sure what you mean by "#ifdef is strict no", but perhaps you are referring to a policy on a project you are working on.
You might consider not checking for things like Mac or WIN32 or i386, though. In general, you do not actually care if you are on a Mac. Instead, there is some feature of MacOS that you want, and what you care about is the presence (or absence) of that feature. For that reason, it is common to have a script in your build setup that checks for features and #defines things based on the features provided by the system, rather than making assumptions about the presence of features based on the platform. After all, you might assume certain features are absent on MacOS, but someone may have a version of MacOS on which they have ported that feature. The script that checks for such features is commonly called "configure", and it is often generated by autoconf.
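A hedged sketch of what that looks like in the source: the configure script generates a config.h that defines feature macros such as HAVE_STRLCPY (the names below follow the common autoconf convention but are purely illustrative), and the code tests the feature rather than the platform.

#include "config.h"   // generated by the configure script
#include <cstdio>
#include <cstring>
#include <cstddef>

void copy_name(char* dst, const char* src, std::size_t dstsize)
{
#ifdef HAVE_STRLCPY
    // configure found a native strlcpy(), so use it
    strlcpy(dst, src, dstsize);
#else
    // portable fallback
    std::snprintf(dst, dstsize, "%s", src);
#endif
}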

Personally, I prefer to abstract that noise away (where necessary). If it's scattered all over the body of a class's interface: yuck!
So, let's say there is a type which is platform-defined:
I will use a typedef at a high level for the inner bits and create an abstraction; that's often one line per #ifdef/#else/#endif.
Then for the implementation, I will also use a single #ifdef for that abstraction in most cases (but that does mean the platform-specific definitions appear once per platform). I also separate them into platform-specific files so I can rebuild a project by throwing all the sources into a project and building without a hiccup. In that case, #ifdef is also handier than trying to figure out all the dependencies per project, per platform, per build type.
So, just use it to build the platform-specific abstraction you need, and use abstractions so the client code is the same, just like reducing the scope of a variable ;)
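For example (a hedged sketch with invented names), one #ifdef block defines a platform-neutral typedef, and everything else only ever sees the abstraction:

// native_handle.h : the only place the platform leaks in
#if defined(_WIN32)
    #include <windows.h>
    typedef HANDLE native_file_handle;   // Windows file handles
#else
    typedef int native_file_handle;      // POSIX file descriptors
#endif

// The rest of the code base is written once against the abstraction:
bool read_all(native_file_handle file, void* buffer, unsigned long size);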

Others have indicated the preferred solution: put the dependent code in a separate file, which is included. The files corresponding to different implementations can then either live in separate directories (one of which is specified by means of a -I or a /I directive in the compiler invocation), or the file name can be built up dynamically (using e.g. macro concatenation) and used in something like:
#include XX_dependentInclude(config.hh)
(In this case, XX_dependentInclude might be defined as something like:
#define XX_string2( s ) # s
#define XX_stringize( s ) XX_string2(s)
#define XX_paste2( a, b ) a ## b
#define XX_paste( a, b ) XX_paste2( a, b )
#define XX_dependentInclude(name) XX_stringize(XX_paste(XX_SYST_ID,name))
and XX_SYST_ID is defined using -D or /D in the compiler invocation.)
In all of the above, replace XX_ with the prefix you usually use for macros.

Related

Why use these many macros when it is really not needed

When we look at STL header files, we see many macros used where we could instead write a single line, or sometimes a single word, directly. I don't understand why people use so many macros. e.g.
_STD_BEGIN
using ::type_info;
_STD_END
#if defined(__cplusplus)
#define _STD_BEGIN namespace std {
#define _STD_END }
#define _STD ::std::
Library providers have to cope with a wide range of implementations and use cases. I can see two reasons for the use of macros in this case (and there are probably others I'm not thinking of right now):
the need to support compilers which don't support namespaces. I'm not sure whether this is still a concern for recent implementations, but most of them have a long history, and removing such macros, even once compilers without namespace support are no longer supported (the unprotected using ::type_info; hints that this is the case), would have a low priority.
the desire to allow customers to use their own implementation of the standard library in addition to the one provided by the compiler vendor, without replacing it. Configuring the library would then allow substituting another name for std.
That
#if defined(__cplusplus)
in your sample is the key. Further down in your source I would expect to see alternative definitions for the macros. Depending on compilation environment, some constructs may require different syntax or not be supported at all; so we write code once, using macros for such constructs, and arrange for the macros to be defined appropriately depending on what is supported.
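For instance, further down in the headers (or in a separate configuration header) you would expect alternative definitions along these lines; the _NO_NAMESPACES switch below is a hypothetical configuration macro, not the one any particular vendor actually uses:

#if defined(__cplusplus) && !defined(_NO_NAMESPACES)
    #define _STD_BEGIN namespace std {
    #define _STD_END   }
    #define _STD       ::std::
#else
    // Compiler without (usable) namespace support: the macros expand to nothing,
    // and everything ends up in the global namespace instead.
    #define _STD_BEGIN
    #define _STD_END
    #define _STD
#endif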
Macros vs variables: macros can run faster in this case because they become constants after preprocessing (operations on constants are faster than operations on variables).
Macros vs functions: using macros avoids the overhead of a function call: pushing parameters onto the stack, pushing the return address, and then popping them off the stack again.
Macros: faster execution but more memory space.
Functions: slower execution but less memory space.

How to make C++ program work across compilers

I wanted to know how I would make my C++ program work across compilers. I wanted to make the program so if it's being compiled with borland it will use the clrscr() function otherwise it'd use system("CLS"). I've seen code that has done something similar but I couldn't find an explanation of what it does or how it works. Any help would be appreciated.
In general, to make a C or C++ program work across multiple compilers you want to confine yourself to standard C or C++ as much as possible. Sometimes you have to use compiler/platform specific functionality, though, and one way to handle that is via the preprocessor.
The predef project on SourceForge lists a bunch of preprocessor symbols that are defined automatically by various compilers, for various platforms, et cetera. You can use that information to implement what you need, for example:
void clearScreen() {
    // __BORLANDC__ is defined by the Borland C++ compiler.
#ifdef __BORLANDC__
    clrscr();
#else
    system("cls");
#endif
}
One easy answer off the top of my head is to define your own function calls and then translate them into the real calls depending on the compilation parameters (with #ifdef preprocessing definitions; look up which values correspond to which compiler).
example:
#if defined(__COMPILER_ONE__)
#define ClearScreen() clrscr()
#elif defined(__COMPILER_TWO__)
#define ClearScreen() system("CLS")
#else
#error "I do not know what to do!"
#endif
You would have to create a dedicated header file for this and to include it everywhere, of course.
(Of course you have to substitute __COMPILER_ONE__ and __COMPILER_TWO__ with the relevant definitions :) )
How to make something work across different compilers is a simple question which is very complex to answer! Regarding your specific query about clearing the screen, I would attempt it like this: first you have your own function, say
void clear_screen();
And define it like this:
void clear_screen()
{
#ifdef LINUX
    ...
#elif defined(MS_WIN)
    ...
#endif
}
Please note I have just guessed what the #defines are. This is known as conditional compilation, generally regarded as evil, but containing it in a function reduces the harm a little.
The way it's typically done is through the magic of the preprocessor or makefiles. Either way, you hide the implementation details behind a common interface in a header file, such as void clearscreen(). Then in a single source file you can hide the Borland implementation behind #ifdef BORLAND, and similarly for other implementations. Alternatively, you can put each implementation in a separate source file, and only compile the proper one based on a variable in a makefile.
You can do this by checking compiler-specific macros with the #ifdef preprocessor directive:
#ifdef BORLAND
borland();
#else
otherCompiler();
#endif

how to handle optimizations in code

I am currently writing various optimizations for some code. Each of these optimizations has a big impact on code efficiency (hopefully) but also on the source code. However, I want to keep the possibility to enable and disable any of them for benchmarking purposes.
I traditionally use the #ifdef OPTIM_X_ENABLE/#else/#endif method, but the code quickly becomes too hard to maintain.
One can also create SCM branches for each optimization. That's much better for code readability until you want to enable or disable more than a single optimization.
Is there any other, and hopefully better, way to work with optimizations?
EDIT :
Some optimizations cannot work simultaneously. I may need to disable an old optimization to bench a new one and see which one I should keep.
I would create a branch for an optimization, benchmark it until you know it has a significant improvement, and then simply merge it back to trunk. I wouldn't bother with the #ifdefs once it's back on trunk; why would you need to disable it once you know it's good? You always have the repository history if you want to be able to rollback a particular change.
There are so many ways of choosing which part of your code will execute. Conditional inclusion using the preprocessor is usually the hardest to maintain, in my experience. So try to minimize that, if you can. You can separate the functionality (optimized, unoptimized) into different functions. Then call the functions conditionally depending on a flag. Or you can create an inheritance hierarchy and use virtual dispatch. Of course it depends on your particular situation. Perhaps if you could describe it in more detail you would get better answers.
However, here's a simple method that might work for you: Create two sets of functions (or classes, whichever paradigm you are using). Separate the functions into different namespaces, one for optimized code and one for readable code. Then simply choose which set to use by conditionally using them. Something like this:
#include <iostream>
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
int main()
{
f();
}
Then in optimized.h:
namespace optimized
{
void f() { std::cout << "optimized selected" << std::endl; }
}
and in readable.h:
namespace readable
{
void f() { std::cout << "readable selected" << std::endl; }
}
This method does unfortunately need to use the preprocessor, but the usage is minimal. Of course you can improve this by introducing a wrapper header:
wrapper.h:
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
Now simply include this header and further minimize the potential preprocessor usage. Btw, the usual separation of header/cpp should still be done.
Good luck!
I would work at class level (or file level for C) and embed all the various versions in the same working software (no #ifdef) and choose one implementation or the other at runtime through some configuration file or command line options.
It should be quite easy as optimizations should not change anything at internal API level.
Another way, if you're using C++, can be to instantiate templates to avoid duplicating high-level code, or to select a branch at run-time (this is often an acceptable option; some switches here and there are usually not such a big issue).
In the end various optimized backend could eventually be turned to libraries.
Unit Tests should be able to work without modifying them with every variant of implementation.
My rationale is that embedding every variant mostly changes the software size, and that is very rarely a problem. This approach also has other benefits: you can easily cope with a changing environment. An optimization for some OS or some hardware may not be one on another. In many cases it will even be easy to choose the best version at runtime.
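A hedged sketch of that runtime-selection idea (the function names and the command-line flag are invented): both variants are always compiled in, and an option chooses one when the program starts.

#include <cstring>

int transform_reference(int x) { return x * 2; }    // readable baseline version
int transform_optimized(int x) { return x << 1; }   // "optimized" variant

// Function pointer selected once at startup; no #ifdef involved.
int (*transform)(int) = transform_reference;

int main(int argc, char** argv)
{
    for (int i = 1; i < argc; ++i)
        if (std::strcmp(argv[i], "--use-optimized") == 0)
            transform = transform_optimized;

    return transform(21);
}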
You may have two (or three, or more) versions of the function you optimise, with names like:
function
function_optimized
which have identical arguments and return the same results.
Then you may #define a selector in some header like:
#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized
#else
#define OPT(f) f
#endif
Then call the functions that have optimized variants as OPT(function)(argument, argument...). This method is not very aesthetic, but it does the job.
You may go further and re-#define the names of all your optimized functions:
#if OPTIM_X_ENABLE
#define foo foo_optimized
#define bar bar_optimized
...
#endif
And leave the caller code as is. The preprocessor does the function substitution for you. I like this one most, because it works transparently while being per-function (and also per-datatype and per-variable) grained, which is enough in most cases for me.
A more exotic method is to make separate .c files for the non-optimized and optimized code and compile only one of them. They may have the same names but different paths, so switching can be done by changing a single option on the command line.
I'm confused. Why don't you just find out where each performance problem is, fix it, and continue. Here's an example.

Writing cross-platform C++ Code (Windows, Linux and Mac OSX)

This is my first attempt at writing anything even slightly complicated in C++. I'm attempting to build a shared library that I can interface with from Objective-C and .NET apps (ok, that part comes later...).
The code I have is:
#ifdef TARGET_OS_MAC
// Mac Includes Here
#endif

#ifdef __linux__
// Linux Includes Here
#error Can't be compiled on Linux yet
#endif

#ifdef _WIN32 || _WIN64
// Windows Includes Here
#error Can't be compiled on Windows yet
#endif

#include <iostream>
using namespace std;

bool probe(){
#ifdef TARGET_OS_MAC
    return probe_macosx();
#endif
#ifdef __linux__
    return probe_linux();
#endif
#ifdef _WIN32 || _WIN64
    return probe_win();
#endif
}

bool probe_win(){
    // Windows Probe Code Here
    return true;
}

int main(){
    return 1;
}
I get a compiler warning: untitled: In function 'bool probe()': untitled:29: warning: control reaches end of non-void function. But I'd also really appreciate any information or resources people could suggest for how to write this kind of code better.
Instead of repeating yourself and writing the same #ifdef .... lines again, again, and again, you're maybe better off declaring the probe() method in a header and providing three different source files, one for each platform. This also has the benefit that if you add a platform you do not have to modify all of your existing sources, but just add new files. Use your build system to select the appropriate source file.
Example structure:
include/probe.h
src/arch/win32/probe.cpp
src/arch/linux/probe.cpp
src/arch/mac/probe.cpp
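A minimal sketch of what those files might contain (the bodies are illustrative; only the layout comes from the answer):

// include/probe.h
#ifndef PROBE_H
#define PROBE_H
bool probe();   // the one portable declaration
#endif

// src/arch/linux/probe.cpp : compiled only for Linux targets by the build system
#include "probe.h"
bool probe()
{
    // Linux-specific probing goes here.
    return true;
}

// src/arch/win32/probe.cpp and src/arch/mac/probe.cpp provide their own
// definitions of the same function for their respective targets.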
The warning is because probe() doesn't return a value. In other words, none of the three #ifdefs matches.
I'll address this specific function:
bool probe() {
#ifdef TARGET_OS_MAC
    return probe_macosx();
#elif defined __linux__
    return probe_linux();
#elif defined _WIN32 || defined _WIN64
    return probe_win();
#else
#error "unknown platform"
#endif
}
Writing it this way, as a chain of if-elif-else, eliminates the error because it's impossible to compile without either a valid return statement or hitting the #error.
(I believe _WIN32 is defined for both 32- and 64-bit Windows, but I couldn't tell you definitively without looking it up. That would simplify the code.)
Unfortunately, you can't use #ifdef _WIN32 || _WIN64: see http://codepad.org/3PArXCxo for a sample error message. You can use the special preprocessing-only defined operator, as I did above.
Regarding splitting up platforms according to functions or entire files (as suggested), you may or may not want to do that. It's going to depend on details of your code, such as how much is shared between platforms and what you (or your team) find best to keep functionality in sync, among other issues.
Furthermore, you should handle platform selection in your build system, but this doesn't mean you can't use the preprocessor: use macros conditionally defined (by the makefile or build system) for each platform. In fact, this is often the most practical solution with templates and inline functions, which makes it more flexible than trying to eliminate the preprocessor. It combines well with the whole-file approach, so you still use that where appropriate.
You might want to have a single config header which translates all the various compiler- and platform-specific macros into well-known and understood macros that you control. Or you could add -DBEAKS_PLAT_LINUX to your compiler command line—through your build system—to define that macro (remember to use a prefix for macro names).
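A sketch of such a config header (the BEAKS_PLAT_* names follow the example above; exactly which compiler macros you test is up to you and your toolchains):

// config.h : translate compiler/platform macros into well-known macros you control
#if defined(_WIN32) || defined(_WIN64)
    #define BEAKS_PLAT_WINDOWS 1
#elif defined(__APPLE__)
    #define BEAKS_PLAT_MACOSX 1
#elif defined(__linux__)
    #define BEAKS_PLAT_LINUX 1
#else
    #error "unsupported platform"
#endif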
It seems none of TARGET_OS_MAC, __linux__, _WIN32 or _WIN64 is defined at the time you compile your code.
So it's as if your code were:
bool probe(){
}
That's why the compiler complains about reaching the end of a non-void function. There is no return clause.
Also, for the more general question, here are my guidelines when developing multi-platform/multi-architecture software and libraries:
Avoid specific cases. Try to write code that is OS-agnostic.
When dealing with system specific stuff, try to wrap things into "opaque" classes. As an example, if you are dealing with files (different APIs on Linux and Windows), try to create a File class that will embed all the logic and provide a common interface, whatever the operating system. If some feature is not available on one of the OS, deal with it: if the feature makes no sense for a specific OS, it's often fine to do nothing at all.
In short: the less #ifdef the better. And no matter how portable your code is, test it on every platform before releasing it.
Good luck ;)
The warning is because, if none of the defines are actually defined, then you have no return in your probe function. The fix for that is to put in a default return.
To add something more to this, beyond the outstanding options above: the directives __linux__ and _WIN32 are known to the compiler, whereas the TARGET_OS_MAC directive is not; this can be resolved by using __APPLE__ instead. Source: http://www.winehq.org/pipermail/wine-patches/2003-July/006906.html

separating compilation to avoid recompilation when I add some debugging to a .h file

I have a .h file which is used almost throughout the source code (in my case, it is just one directory with .cc and .h files). Basically, I keep two versions of the .h file: one with some debugging info for code analysis, and the regular one. The debugging version has only one extra macro and an extern function declaration. I switch pretty regularly between the two versions. However, this causes a 20-minute recompilation.
How would you recommend avoiding this recompilation issue? Perhaps by setting some flags or creating a different tree? What are the common solutions and how do I put them in place?
The new .h file contains:
extern void (foo)(/*some params*/);
/***extra stuff****/
#define foo(...) ( /*call_some_function*/) , foo())
/* some_functions_for_debugging */
As you can see, that will cause a recompilation. I build with gcc on Linux AS 3.
Thanks
To avoid the issue with the external function, you could leave the prototype in both versions; it does no harm being there if it is not used. But with the macro there is no chance: you can forget it, it needs recompilation for the code replacements.
I would make intensive use of precompiled headers to speed up recompilation (as it cannot be avoided). See GCC and Precompiled Headers; for other compilers, use your favorite search engine. Any modern compiler should support this feature, and for large-scale projects it's inevitable that you use it, otherwise you'll be really unproductive.
Besides this, if you have enough disk space, I would check out two working copies, each of them compiled with different settings. You would have to commit and update each time to transfer changes to the other working copy, but it'll surely take less than 20 minutes ;-)
You need to minimize the amount of your code (specifically - the number of files) that depend on that header file. Other than that you can't do much - when you need to change the header you will face recompilation of everything that includes it.
So you need to reorganize your code in such a way that only a select few files include the header. For example, you could move the functions that need its contents into a separate source file (or several files) and only include the header in those files, not in the others.
If the debugging macros are actually used in most of the files that include the header, then they need to be recompiled anyway! In this case, you have two options:
Keep two sets of object files, one without debugging code and one with. Use different makefiles/build configurations to allow them to be kept in separate locations.
Use a global variable, along these lines:
In your common.h:
extern int debug;
In your debug.c:
int debug = 1;
Everywhere else (can use a macro for this):
if (debug) {
    /* do_debug_stuff */
}
A slight variation of the concept is to call an actual function in debug.c that might just do nothing if debugging is disabled.
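A hedged sketch of that approach (the names are invented): the macro in the common header never changes between debug and non-debug builds, so flipping debugging on or off only touches debug.c; everything else keeps its object files, and the flag could even be set at runtime instead.

// common.h : never changes, so the files that include it are never forced to recompile
extern int debug;
#define DEBUG_ONLY(stmt) do { if (debug) { stmt; } } while (0)

// debug.c (or debug.cpp) : the only file you edit and recompile
int debug = 1;

// usage anywhere else:
//   DEBUG_ONLY(dump_state(current_state));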
I don't exactly understand your problem. As I understand it, you are trying to create a test framework. I can suggest something: you may move the changing stuff to the .c file as follows.
In new.h
extern void (foo)(/*some params*/);
/***extra stuff****/
#define foo(...) ( /*call_some_function_dummy*/) , foo())
/* some_functions_for_debugging */
In new.c
void call_some_function_dummy()
{
#ifdef _DEBUG
    call_some_function();
#endif
}
Now if you switch to debug mode, only New.c needs to be recompiled and compilation will be much faster. Hope this helps you.
Solution 2:
In New.h
extern void (foo)(/*some params*/);
/***extra stuff****/
#define foo(...) ( /*call_some_function[0]*/) , foo())
/* some_functions_for_debugging */
In New.c
#ifdef _DEBUG
void (*call_some_function[])() =
{
    call_some_function0,
    call_some_function1
};
#else
void (*call_some_function[])() =
{
    dummy_nop,
    dummy_nop
};
#endif
Why not move the macro to its own header and only include it where needed? Just another thought.
I cannot see how you can avoid recompiling the dependent source files. However you may be able to speed up the other processing in the build.
For example, can you use a form of precompiled headers, and only include your header in the code files and not in other headers? Another way could be to parallelise the build, or perhaps use a fast piece of hardware such as a solid-state drive.
Remember that hardware is cheap and programmers are expensive, to quote whatsisname.