Why use so many macros when they're really not needed - C++

When we look at STL header files, we see many macros used where we could instead write a single line, or sometimes a single word, directly. I don't understand why people use so many macros. For example:
_STD_BEGIN
using ::type_info;
_STD_END
#if defined(__cplusplus)
#define _STD_BEGIN namespace std {
#define _STD_END }
#define _STD ::std::

Library providers have to cope with a wide range of implementations and use cases. I can see two reasons for the use of macros in this case (and there are probably others I'm not thinking of right now):
the need to support compilers which don't support namespaces. I'm not sure whether this is still a concern for a recent implementation, but most implementations have a long history, and removing such macros, even once compilers without namespace support are no longer supported (the unprotected using ::type_info; hints that this is the case), would be a low priority.
the desire to allow customers to use their implementation of the standard library alongside the one provided by the compiler vendor, without replacing it. Configuring the library would then allow substituting another name for std.

That
#if defined(__cplusplus)
in your sample is the key. Further down in your source I would expect to see alternative definitions for the macros. Depending on the compilation environment, some constructs may require different syntax or not be supported at all; so we write the code once, using macros for such constructs, and arrange for the macros to be defined appropriately depending on what is supported.
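As a sketch of what those alternative definitions might look like (the names here are hypothetical stand-ins, not the actual _STD_BEGIN/_STD_END from any vendor header): when the compiler supports namespaces the macros open and close one, otherwise they expand to nothing, so the same library source compiles either way.

```cpp
// Hypothetical shim, modeled on the _STD_BEGIN/_STD_END idiom.
#if defined(__cplusplus)
#  define SHIM_STD_BEGIN namespace shim_std {
#  define SHIM_STD_END   }
#  define SHIM_STD       ::shim_std::
#else   // pre-namespace compiler: everything lands at global scope
#  define SHIM_STD_BEGIN
#  define SHIM_STD_END
#  define SHIM_STD
#endif

SHIM_STD_BEGIN
inline int answer() { return 42; }
SHIM_STD_END
```

Client code then writes SHIM_STD answer() and compiles unchanged whichever branch was taken.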

Macros vs. variables: macros can run faster in this case because they become constants after preprocessing (operations on constants are faster than operations on variables).
Macros vs. functions: using macros avoids the overhead that function calls incur: pushing parameters onto the stack, pushing the return address, and then popping them off again.
Macros: faster execution, but more memory space required.
Functions: slower execution, but less memory space.
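For what it's worth, modern compilers inline small functions, so the speed argument above rarely holds today, while the macro keeps its classic pitfall: the argument is substituted textually and may be evaluated more than once. A small sketch (all names invented for the demo; C++17 for the inline variable):

```cpp
// A function-like macro vs. an inline function. The compiler will typically
// inline square_fn, so there is no call overhead in practice -- but the
// macro still evaluates its argument twice.
#define SQUARE_MACRO(x) ((x) * (x))

inline int square_fn(int x) { return x * x; }

inline int call_count = 0;                 // C++17 inline variable
inline int next_value() { return ++call_count; }

inline int macro_evaluations() {
    call_count = 0;
    (void)SQUARE_MACRO(next_value());      // argument evaluated twice
    return call_count;                     // -> 2
}

inline int function_evaluations() {
    call_count = 0;
    (void)square_fn(next_value());         // argument evaluated once
    return call_count;                     // -> 1
}
```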

Related

Avoid expansion of macros while using boost preprocessor sequences

I'm trying to get the OS and compiler name as a string in C++. Although there are many questions about this, I did not find a definitive answer. So I tried to use Boost.Predef 1.55, which defines macros of the form BOOST_OS_<OS> and BOOST_OS_<OS>_NAME.
Hence one could simply do if(BOOST_OS_<OS>) return BOOST_OS_<OS>_NAME; for every OS boost supports. Same for compilers with COMP instead of OS. To avoid the repetition I wanted to use Boost.Preprocessor and put them all in a loop.
What I came up with is this:
#define MAKE_STMT_I2(PREFIX) if(PREFIX) return PREFIX ## _NAME;
#define MAKE_STMT_I(type, curName) MAKE_STMT_I2(BOOST_ ## type ## _ ## curName)
#define MAKE_STMT(s, type, curName) MAKE_STMT_I(type, curName)
#define OS_LIST (AIX)(AMIGAOS)(ANDROID)(BEOS)(BSD)(CYGWIN)(HPUX)(IRIX)(LINUX)(MACOS)(OS400)(QNX)(SOLARIS)(UNIX)(SVR4)(VMS)(WINDOWS)(BSDI)(DRAGONFLY)(BSD_FREE)(BSD_NET)(BSD_OPEN)
BOOST_PP_SEQ_FOR_EACH(MAKE_STMT, OS, OS_LIST)
However, I run into problems where the values are expanded too soon. E.g. a macro named VMS is already defined, which then gets replaced in OS_LIST. Even doing something like #define OS_LIST (##AIX##)(##AMIGAOS##)(... does not help, as it seems to get expanded in Boost later.
How can I avoid the expansion in the sequence completely?
Since you rely on the token VMS being undefined, a quick solution is a simple #undef VMS. Obviously, to avoid breaking code which relies on that macro, you should put your Boost PP code in its own .cpp file.
How can I avoid the expansion in the sequence completely?
You can't. Passing high level data structures as an argument to a macro necessarily involves evaluating the data structure.
You could avoid this problem and still use the boost macros in basically three ways:
1. Undefine problem macros before the call
This is essentially what MSalters recommended.
The idea being that if VMS isn't defined, its evaluation won't expand it.
Here, you risk VMS being left undefined, which could have dire consequences, so you have to mitigate that (MSalters touched on this).
2. Build high level macros from different data
Option 2 might, for example, use:
#define OS_LIST (S_AIX)(S_BEOS)(S_VMS)
...and require you to change your MAKE_STMT macro complex; for example, this:
#define MAKE_STMT_I2(PREFIX) if(PREFIX) return PREFIX ## _NAME;
#define MAKE_STMT_I(curName) MAKE_STMT_I2(BOOST_O ## curName)
#define MAKE_STMT(s, type, curName) MAKE_STMT_I(curName)
#define OS_LIST (S_AIX)(S_AMIGAOS)(S_ANDROID)(S_BEOS)(S_BSD)(S_CYGWIN)(S_HPUX)(S_IRIX)(S_LINUX)(S_MACOS)(S_OS400)(S_QNX)(S_SOLARIS)(S_UNIX)(S_SVR4)(S_VMS)(S_WINDOWS)(S_BSDI)(S_DRAGONFLY)(S_BSD_FREE)(S_BSD_NET)(S_BSD_OPEN)
(Note: Here I'm ignoring the type; it's not necessary to pass OS in as data to the iteration sequence anyway).
The idea here is to find a different shared portion of BOOST_OS_FOO and BOOST_OS_FOO_NAME to put in your data, so that your data doesn't include the macros you're defining.
Here, you risk S_FOO being defined at some higher level messing you up. You could mitigate this by finding a different piece to use in your data.
3. Build wrapper identifiers
This is easiest to define by example:
#define OS_LIST (AIX)(BEOS)(8VMS)
#define BOOST_OS_8VMS BOOST_OS_VMS
#define BOOST_OS_8VMS_NAME BOOST_OS_VMS_NAME
The idea here is that you're building different BOOST_OS_xxx / BOOST_OS_xxx_NAME form macros, then remapping those back to the desired ones. Using a numeric prefix has the advantage of becoming immune to expansion (such entities are valid preprocessor tokens (pp-numbers), but they cannot be object-like macros).
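The immunity of pp-numbers to expansion can be demonstrated without Boost at all; the macro names below are invented just for the demo:

```cpp
#include <cstring>

// Two-step stringization: DEMO_STR's argument is macro-expanded first.
#define DEMO_STR_(x) #x
#define DEMO_STR(x)  DEMO_STR_(x)

#define VMS 1   // stands in for the system-provided macro that clobbers the token

inline const char* spell_vms()  { return DEMO_STR(VMS);  }  // VMS expands -> "1"
inline const char* spell_8vms() { return DEMO_STR(8VMS); }  // pp-number, immune -> "8VMS"
```

8VMS is a valid preprocessing token but not an identifier, so no macro replacement can touch it.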

Is it a bad practice to use #ifdef in code?

I have to use a lot of #ifdef i386 and x86_64 for architecture-specific code, and sometimes #ifdef MAC or #ifdef WIN32 and so on for platform-specific code.
We have to keep the code base common and portable.
But we have to follow a guideline that the use of #ifdef is a strict no. I don't understand why.
As an extension to this question, I would also like to understand when to use #ifdef.
For example, dlopen() cannot open a 32-bit binary from a 64-bit process, and vice versa. Thus it's architecture-specific. Can we use #ifdef in such a situation?
With #ifdef instead of writing portable code, you're still writing multiple pieces of platform-specific code. Unfortunately, in many (most?) cases, you quickly end up with a nearly impenetrable mixture of portable and platform-specific code.
You also frequently get #ifdef being used for purposes other than portability (defining what "version" of the code to produce, such as what level of self-diagnostics will be included). Unfortunately, the two often interact, and get intertwined. For example, somebody porting some code to MacOS decides that it needs better error reporting, which he adds -- but makes it specific to MacOS. Later, somebody else decides that the better error reporting would be awfully useful on Windows, so he enables that code by automatically #defineing MACOS if WIN32 is defined -- but then adds "just a couple more" #ifdef WIN32 to exclude some code that really is MacOS specific when Win32 is defined. Of course, we also add in the fact that MacOS is based on BSD Unix, so when MACOS is defined, it automatically defines BSD_44 as well -- but (again) turns around and excludes some BSD "stuff" when compiling for MacOS.
This quickly degenerates into code like the following example (taken from #ifdef Considered Harmful):
#ifdef SYSLOG
#ifdef BSD_42
    openlog("nntpxfer", LOG_PID);
#else
    openlog("nntpxfer", LOG_PID, SYSLOG);
#endif
#endif
#ifdef DBM
    if (dbminit(HISTORY_FILE) < 0)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "couldn't open history file: %m");
#else
        perror("nntpxfer: couldn't open history file");
#endif
        exit(1);
    }
#endif
#ifdef NDBM
    if ((db = dbm_open(HISTORY_FILE, O_RDONLY, 0)) == NULL)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "couldn't open history file: %m");
#else
        perror("nntpxfer: couldn't open history file");
#endif
        exit(1);
    }
#endif
    if ((server = get_tcp_conn(argv[1], "nntp")) < 0)
    {
#ifdef SYSLOG
        syslog(LOG_ERR, "could not open socket: %m");
#else
        perror("nntpxfer: could not open socket");
#endif
        exit(1);
    }
    if ((rd_fp = fdopen(server, "r")) == (FILE *) 0) {
#ifdef SYSLOG
        syslog(LOG_ERR, "could not fdopen socket: %m");
#else
        perror("nntpxfer: could not fdopen socket");
#endif
        exit(1);
    }
#ifdef SYSLOG
    syslog(LOG_DEBUG, "connected to nntp server at %s", argv[1]);
#endif
#ifdef DEBUG
    printf("connected to nntp server at %s\n", argv[1]);
#endif
    /*
     * ok, at this point we're connected to the nntp daemon
     * at the distant host.
     */
This is a fairly small example with only a few macros involved, yet reading the code is already painful. I've personally seen (and had to deal with) much worse in real code. Here the code is ugly and painful to read, but it's still fairly easy to figure out which code will be used under what circumstances. In many cases, you end up with much more complex structures.
To give a concrete example of how I'd prefer to see that written, I'd do something like this:
if (!open_history(HISTORY_FILE)) {
    logerr(LOG_ERR, "couldn't open history file");
    exit(1);
}
if ((server = get_nntp_connection(server)) == NULL) {
    logerr(LOG_ERR, "couldn't open socket");
    exit(1);
}
logerr(LOG_DEBUG, "connected to server %s", argv[1]);
In such a case, it's possible that our definition of logerr would be a macro instead of an actual function. It might be sufficiently trivial that it would make sense to have a header with something like:
#ifdef SYSLOG
#define logerr(level, msg, ...) /* ... */
#else
enum {LOG_DEBUG, LOG_ERR};
#define logerr(level, msg, ...) /* ... */
#endif
[for the moment, assuming a preprocessor that can/will handle variadic macros]
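A minimal sketch of what the non-SYSLOG branch of such a header might look like; format_log, the message prefixes, and the buffer size are inventions for this example, not anything from the original answer:

```cpp
#include <cstdarg>
#include <cstdio>
#include <string>

enum { LOG_DEBUG, LOG_ERR };

// Format the message once, so the macro below stays a thin wrapper.
inline std::string format_log(int level, const char* fmt, ...) {
    char buf[256];
    va_list ap;
    va_start(ap, fmt);
    std::vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);
    return std::string(level == LOG_ERR ? "ERR: " : "DBG: ") + buf;
}

// Fallback for builds without syslog: write the formatted line to stderr.
#define logerr(level, ...) \
    std::fputs((format_log((level), __VA_ARGS__) + "\n").c_str(), stderr)
```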
Given your supervisor's attitude, even that may not be acceptable. If so, that's fine: instead of a macro, implement that capability as a function. Isolate each implementation of the function(s) in its own source file and build the files appropriate to the target. If you have a lot of platform-specific code, you usually want to isolate it into a directory of its own, quite possibly with its own makefile [1], and have a top-level makefile that just picks which other makefiles to invoke based on the specified target.
[1] Some people prefer not to do this. I'm not really arguing one way or the other about how to structure makefiles, just noting that it's a possibility some people find useful.
You should avoid #ifdef whenever possible. IIRC, it was Scott Meyers who wrote that with #ifdefs you do not get platform-independent code. Instead you get code that depends on multiple platforms. Also #define and #ifdef are not part of the language itself. #defines have no notion of scope, which can cause all sorts of problems. The best way is to keep the use of the preprocessor to a bare minimum, such as the include guards. Otherwise you are likely to end up with a tangled mess, which is very hard to understand, maintain, and debug.
Ideally, if you need to have platform-specific declarations, you should have separate platform-specific include directories, and handle them appropriately in your build environment.
If you have platform specific implementation of certain functions, you should also put them into separate .cpp files and again hash them out in the build configuration.
Another possibility is to use templates. You can represent your platforms with empty dummy structs, and use those as template parameters. Then you can use template specialization for platform-specific code. This way you would be relying on the compiler to generate platform-specific code from templates.
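A sketch of that template idea (the tags and traits here are invented for illustration): empty structs stand for the platforms, specializations carry the platform-specific bits, and generic code is written once against the traits interface.

```cpp
// Platform tags: empty structs used purely as compile-time selectors.
struct LinuxTag {};
struct WindowsTag {};

template <typename Platform>
struct PathTraits;                      // primary template left undefined on purpose

template <>
struct PathTraits<LinuxTag> {           // Linux-specific code lives here
    static constexpr char separator = '/';
};

template <>
struct PathTraits<WindowsTag> {         // Windows-specific code lives here
    static constexpr char separator = '\\';
};

// Generic code, written once against the traits interface.
template <typename Platform>
constexpr bool is_separator(char c) { return c == PathTraits<Platform>::separator; }
```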
Of course, the only way for any of this to work, is to very cleanly factor out platform-specific code into separate functions or classes.
I have seen 3 broad usages of #ifdef:
isolate platform specific code
isolate feature specific code (not all versions of a compilers / dialect of a language are born equal)
isolate compilation-mode code (NDEBUG, anyone?)
Each has the potential to create a huge mess of unmaintainable code, and should be treated accordingly, but not all of them can be dealt with in the same fashion.
1. Platform specific code
Each platform comes with its own set of specific includes, structures and functions to deal with things like IO (mainly).
In this situation, the simplest way to deal with this mess is to present a unified front, and have platform specific implementations.
Ideally:
project/
include/namespace/
generic.h
src/
unix/
generic.cpp
windows/
generic.cpp
This way, the platform stuff is all kept together in one single file (per header), so it is easy to locate. The generic.h file describes the interface, and the right generic.cpp is selected by the build system. No #ifdef.
If you want inline functions (for performance), then a platform-specific genericImpl.i file providing the inline definitions can be included at the end of the generic.h file with a single #ifdef.
2. Feature specific code
This gets a bit more complicated, but is usually experienced only by libraries.
For example, Boost.MPL is much easier to implement with compilers having variadic templates.
Or, compilers supporting move constructors allow you to define more efficient versions of some operations.
There is no paradise here. If you find yourself in such a situation... you end up with a Boost-like file (aye).
3. Compilation Mode code
You can generally get away with a couple of #ifdefs. The traditional example is assert:
#ifdef NDEBUG
# define assert(X) (void)(0)
#else // NDEBUG
# define assert(X) do { if (!(X)) { assert_impl(__FILE__, __LINE__, #X); } } while(0)
#endif // NDEBUG
Then the use of the macro itself does not depend on the compilation mode, so at least the mess is contained within a single file.
Beware: there is a trap here. If the macro does not expand to something that counts as a statement when "ifdef'ed away", you risk changing the control flow under some circumstances. Also, macros not evaluating their arguments may lead to strange behavior when there are function calls (with side effects) in the mix, but in this case this is desirable, as the computation involved may be expensive.
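To make the pattern concrete, here is a self-contained variant (my_assert, assert_impl, and DEMO_NDEBUG are stand-ins for the real assert machinery); the do { ... } while (0) wrapper is what keeps the macro behaving as a single statement inside if/else chains, and (void)0 keeps the disabled branch a valid expression-statement:

```cpp
#include <string>

inline std::string last_failure;   // records the most recent failed assertion

inline void assert_impl(const char* file, int line, const char* expr) {
    (void)file; (void)line;
    last_failure = expr;
}

#ifdef DEMO_NDEBUG
#  define my_assert(X) (void)0     // still a valid expression-statement
#else
#  define my_assert(X) \
     do { if (!(X)) { assert_impl(__FILE__, __LINE__, #X); } } while (0)
#endif

inline std::string check(int v) {
    last_failure.clear();
    my_assert(v > 0);              // safe even as the body of an if/else
    return last_failure;           // empty when the assertion held
}
```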
Many programs use such a scheme for platform-specific code. A better way, which also cleans up the code, is to put all code specific to one platform in one file, naming the functions the same and giving them the same arguments. Then you just select which file to build depending on the platform.
There might still be some places left where you cannot extract platform-specific code into separate functions or files, and you might still need the #ifdef parts, but hopefully they are minimized.
I prefer splitting the platform dependent code & features into separate translation units and letting the build process decide which units to use.
I've lost a week of debugging time due to misspelled identifiers. The compiler does not check defined constants across translation units. For example, one unit may use "WIN386" and another "WIN_386". Platform macros are a maintenance nightmare.
Also, when reading the code, you have to check the build instructions and header files to see which identifiers are defined. There is also a difference between an identifier existing and its having a value. Some code may test for the existence of an identifier while other code tests the value of the same identifier. The latter test is undefined when the identifier is not specified.
Just believe they are evil and prefer not to use them.
Not sure what you mean by "#ifdef is strict no", but perhaps you are referring to a policy on a project you are working on.
You might consider not checking for things like Mac or WIN32 or i386, though. In general, you do not actually care if you are on a Mac. Instead, there is some feature of MacOS that you want, and what you care about is the presence (or absence) of that feature. For that reason, it is common to have a script in your build setup that checks for features and #defines things based on the features provided by the system, rather than making assumptions about the presence of features based on the platform. After all, you might assume certain features are absent on MacOS, but someone may have a version of MacOS on which they have ported that feature. The script that checks for such features is commonly called "configure", and it is often generated by autoconf.
Personally, I prefer to abstract that noise well (where necessary). If it's all over the body of a class's interface - yuck!
So, let's say there is a type which is platform-defined:
I will use a typedef at a high level for the inner bits and create an abstraction - that's often one line per #ifdef/#else/#endif.
Then for the implementation, I will also use a single #ifdef for that abstraction in most cases (but that does mean that the platform-specific definitions appear once per platform). I also separate them into platform-specific files, so I can rebuild a project by throwing all the sources into a project and building without a hiccup. In that case, #ifdef is also handier than trying to figure out all the dependencies per project, per platform, per build type.
So, just use it to focus on the platform specific abstraction you need, and use abstractions so the client code is the same -- just like reducing the scope of a variable ;)
Others have indicated the preferred solution: put the dependent code in a separate file, which is included. Thus the files corresponding to different implementations can either be in separate directories (one of which is specified by means of a -I or /I directive in the compiler invocation), or the name of the file can be built up dynamically (using e.g. macro concatenation), with something like:
#include XX_dependentInclude(config.hh)
(In this case, XX_dependentInclude might be defined as something like:
#define XX_string2( s ) # s
#define XX_stringize( s ) XX_string2(s)
#define XX_paste2( a, b ) a ## b
#define XX_paste( a, b ) XX_paste2( a, b )
#define XX_dependentInclude(name) XX_stringize(XX_paste(XX_SYST_ID,name))
and XX_SYST_ID is initialized using -D or /D in the compiler invocation.)
In all of the above, replace XX_ with the prefix you usually use for macros.
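The paste-then-stringize chain can be checked directly; here XX_SYST_ID is hard-coded (instead of coming from -D on the command line) and the value sys1_ is invented, just to make the expansion visible:

```cpp
#include <cstring>

#define XX_string2( s ) # s
#define XX_stringize( s ) XX_string2(s)
#define XX_paste2( a, b ) a ## b
#define XX_paste( a, b ) XX_paste2( a, b )
#define XX_dependentInclude(name) XX_stringize(XX_paste(XX_SYST_ID,name))

// Normally supplied as -DXX_SYST_ID=sys1_ on the compiler command line:
#define XX_SYST_ID sys1_

// XX_dependentInclude(config.hh) expands to the string "sys1_config.hh",
// usable as #include XX_dependentInclude(config.hh).
inline const char* dependent_name() { return XX_dependentInclude(config.hh); }
```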

how to handle optimizations in code

I am currently writing various optimizations for some code. Each of these optimizations has (hopefully) a big impact on the code's efficiency, but also on the source code itself. However, I want to keep the ability to enable and disable any of them for benchmarking purposes.
I traditionally use the #ifdef OPTIM_X_ENABLE/#else/#endif method, but the code quickly becomes too hard to maintain.
One can also create SCM branches for each optimization. That's much better for code readability, until you want to enable or disable more than a single optimization.
Is there any other, hopefully better, way to work with optimizations?
EDIT :
Some optimizations cannot work simultaneously. I may need to disable an old optimization to benchmark a new one and see which one I should keep.
I would create a branch for an optimization, benchmark it until you know it has a significant improvement, and then simply merge it back to trunk. I wouldn't bother with the #ifdefs once it's back on trunk; why would you need to disable it once you know it's good? You always have the repository history if you want to be able to rollback a particular change.
There are so many ways of choosing which part of your code that will execute. Conditional inclusion using the preprocessor is usually the hardest to maintain, in my experience. So try to minimize that, if you can. You can separate the functionality (optimized, unoptimized) in different functions. Then call the functions conditionally depending on a flag. Or you can create an inheritance hierarchy and use virtual dispatch. Of course it depends on your particular situation. Perhaps if you could describe it in more detail you would get better answers.
However, here's a simple method that might work for you: Create two sets of functions (or classes, whichever paradigm you are using). Separate the functions into different namespaces, one for optimized code and one for readable code. Then simply choose which set to use by conditionally using them. Something like this:
#include <iostream>
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
int main()
{
    f();
}
Then in optimized.h:
namespace optimized
{
    void f() { std::cout << "optimized selected" << std::endl; }
}
and in readable.h:
namespace readable
{
    void f() { std::cout << "readable selected" << std::endl; }
}
This method does unfortunately need to use the preprocessor, but the usage is minimal. Of course you can improve this by introducing a wrapper header:
wrapper.h:
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
Now simply include this header and further minimize the potential preprocessor usage. Btw, the usual separation of header/cpp should still be done.
Good luck!
I would work at the class level (or file level for C), embed all the various versions in the same working software (no #ifdef), and choose one implementation or the other at runtime through some configuration file or command-line options.
It should be quite easy, as optimizations should not change anything at the internal API level.
Another way, if you're using C++, can be to instantiate templates to avoid duplicating high-level code, or to select a branch at run-time (that is often an acceptable option; some switches here and there are usually not such a big issue).
In the end, the various optimized backends could eventually be turned into libraries.
Unit tests should be able to work, without modification, with every variant of the implementation.
My rationale is that embedding every variant mostly changes the software's size, and that's very rarely a problem. This approach also has other benefits: you can easily cope with a changing environment. An optimization for some OS or some hardware may not be one on another. In many cases it will even be easy to choose the best version at runtime.
You may have two (three/more) versions of the function you optimize, with names like:
function
function_optimized
which take identical arguments and return the same results.
Then you may #define a selector in some header like:
#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized
#else
#define OPT(f) f
#endif
Then call the functions that have optimized variants as OPT(function)(argument, argument...). This method is not so aesthetic, but it does the job.
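A sketch of the selector in action (compute/compute_optimized are placeholder functions made up for the example):

```cpp
inline int compute(int x)           { return x * 2; }   // readable version
inline int compute_optimized(int x) { return x << 1; }  // "optimized" version

#define OPTIM_X_ENABLE 1        // normally set from the build system

#if OPTIM_X_ENABLE
#  define OPT(f) f##_optimized
#else
#  define OPT(f) f
#endif

// OPT(compute)(21) expands to compute_optimized(21) here.
inline int run() { return OPT(compute)(21); }
```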
You may go further and use #define to rename all your optimized functions:
#if OPTIM_X_ENABLE
#define foo foo_optimized
#define bar bar_optimized
...
#endif
And leave the caller code as is. The preprocessor does the function substitution for you. I like this most because it works transparently at per-function (and also per-datatype and per-variable) granularity, which is enough in most cases for me.
A more exotic method is to make separate .c files for the non-optimized and optimized code and compile only one of them. They may have the same names but different paths, so switching can be done by changing a single option on the command line.
I'm confused. Why don't you just find out where each performance problem is, fix it, and continue? Here's an example.

What is the _REENTRANT flag?

When compiling a multithreaded program, we use gcc like below:
gcc -lpthread -D_REENTRANT -o someprogram someprogram.c
What exactly is the -D_REENTRANT flag doing here?
Defining _REENTRANT causes the compiler to use thread safe (i.e. re-entrant) versions of several functions in the C library.
You can search your header files to see what happens when it's defined.
Excerpt from the libc 8.2 manual:
Macro: _REENTRANT
Macro: _THREAD_SAFE
These macros are obsolete. They have the same effect as defining _POSIX_C_SOURCE with the value 199506L.
Some very old C libraries required one of these macros to be defined for basic functionality (e.g. getchar) to be thread-safe.
We recommend you use _GNU_SOURCE in new programs. If you don't specify the '-ansi' option to GCC, or other conformance options such as -std=c99, and don't define any of these macros explicitly, the effect is the same as defining _DEFAULT_SOURCE to 1.
When you define a feature test macro to request a larger class of features, it is harmless to define in addition a feature test macro for a subset of those features. For example, if you define _POSIX_C_SOURCE, then defining _POSIX_SOURCE as well has no effect. Likewise, if you define _GNU_SOURCE, then defining either _POSIX_SOURCE or _POSIX_C_SOURCE as well has no effect.
JayM replied:
Defining _REENTRANT causes the compiler to use thread safe (i.e. re-entrant) versions of several functions in the C library.
You can search your header files to see what happens when it's defined.
Since OP and I were both interested in the question, I decided to actually post the answer. :) The following things happen with _REENTRANT on Mac OS X 10.11.6:
<math.h> gains declarations for lgammaf_r, lgamma_r, and lgammal_r.
On Linux (Red Hat Enterprise Server 5.10), I see the following changes:
<unistd.h> gains a declaration for the POSIX 1995 function getlogin_r.
So it seems like _REENTRANT is mostly a no-op, these days. It might once have declared a lot of new functions, such as strtok_r; but these days those functions are mostly mandated by various decades-old standards (C99, POSIX 95, POSIX.1-2001, etc.) and so they're just always enabled.
I have no idea why the two systems I checked avoid declaring lgamma_r resp. getlogin_r when _REENTRANT is not #defined. My wild guess is that this is just historical cruft that nobody ever bothered to go through and clean up.
Of course my observations on these two systems might not generalize to all systems your code might ever encounter. You should definitely still pass -pthread to the compiler (or, less good but okay, -lpthread -D_REENTRANT) whenever your program requires pthreads.
In multithreaded programs, you tell the compiler that you need this feature by defining the _REENTRANT macro before any #include lines in your program. This does three things, and does them so elegantly that usually you don’t even need to know what was done:
Some functions get prototypes for a re-entrant safe equivalent. These are normally the same function name, but with _r appended, so that, for example, gethostbyname is changed to gethostbyname_r.
Some stdio.h functions that are normally implemented as macros become proper re-entrant safe functions.
The variable errno, from errno.h, is changed to call a function, which can determine the real errno value in a multithread-safe way.
Taken from Beginning Linux Programming
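The errno point can be observed directly: on modern C libraries errno is a macro expanding to a function call that yields a thread-local lvalue, but it still reads and writes like a plain int (the helper below is just for demonstration):

```cpp
#include <cerrno>

// errno behaves as a per-thread int lvalue; under the hood it is typically
// something like (*__errno_location()) once threads are in play.
inline int errno_roundtrip(int v) {
    errno = v;
    return errno;
}
```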
It simply defines _REENTRANT for the preprocessor. Somewhere in the associated code, you'll probably find #ifdef _REENTRANT or #if defined(_REENTRANT) in at least a few places.
Also note that the name "_REENTRANT" is in the implementer's name space (any name starting with an underscore followed by another underscore or a capital letter is), so defining it means you've stepped outside what the standard defines (at least the C or C++ standards).

Declaring namespace as macro - C++

In the standard library, I found that namespace std is declared using a macro.
#define _STD_BEGIN namespace std {
#define _STD_END }
Is this a best practice when using namespaces?
The macro is declared in Microsoft Visual Studio 9.0\VC\include\yvals.h. But I couldn't find the STL files including this. If it is not included, how can it be used?
Any thoughts..?
Probably not a best practice as it can be difficult to read compared to a vanilla namespace declaration. That said, remember rules don't always apply universally, and I'm sure there is some scenario where a macro might clean things up considerably.
"But I couldn't find the STL files including this. If it is not included, how it can be used?".
All files that use this macro include yvals.h somehow. For example <vector> includes <memory>, which includes <iterator>, which includes <xutility>, which includes <climits>, which includes <yvals.h>. The chain may be deep, but it does include it it some point.
And I want to clarify, this only applies to this particular implementation of the standard library; this is in no way standardized.
In general, no. The macros were probably used at a time when namespaces were not implemented by some compilers, or for compatibility with specific platforms.
No idea. The file would probably be included by some other file that was included into the STL file.
One approach that I saw in a library that I recently used was:
BEGIN_NAMESPACE_XXX()
where XXX is the number of namespace levels, for example:
BEGIN_NAMESPACE_3(ns1, ns2, ns3)
would take three arguments and expand to
namespace ns1 {
namespace ns2 {
namespace ns3 {
and a matching END_NAMESPACE_3 would expand to
}
}
}
(I have added the newlines and indentation for clarity's sake only)
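A sketch of how such macros might be defined (the exact definitions are my guess, modeled on the behavior described above):

```cpp
#define BEGIN_NAMESPACE_3(a, b, c) namespace a { namespace b { namespace c {
#define END_NAMESPACE_3            } } }

BEGIN_NAMESPACE_3(ns1, ns2, ns3)
inline int depth() { return 3; }   // lives in ns1::ns2::ns3
END_NAMESPACE_3
```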
I imagine the only reason to do this is to make it easy to change the namespace used by your application/library, or to disable namespaces altogether for compatibility reasons.
I could see doing this for the C libraries that are included in C++ by reference (e.g., the header that C calls string.h and that C++ calls cstring). In that case, the macro definition would depend on an #ifdef __cplusplus.
I wouldn't do it in general. I can't think of any compiler worth using that doesn't support namespaces, exceptions, templates or other "modern" C++ features (modern is in quotes because these features were added in the mid to late '90s). In fact, by my definition, compilers are only worth using if they offer good support for their respective language. This isn't a language issue; it's a simple case of "if I chose language X, I'd prefer to use it as it exists today, not as it existed a decade or two ago." I've never understood why some projects spend time trying to support pre-ANSI C compilers, for instance.