I'm trying to create a macro for debug logging purposes. Here is an extra simplified version:
#if defined _DEBUG
#define LOG std::cout
#else
#define LOG IGNORETHISLINEOFCODE
#endif
/* ... */
LOG << "Here's some debug code";
I've been thinking of the ways I can tell the compiler to ignore this line of code that starts with "LOG". I'm personally not looking for alternative ways, such as #define LOG( ... ) (void)0. Here's what I've tried:
Overloading the left-shift operator for void as an inline constexpr function that does nothing (which still leaves it visible in the disassembly; I don't want that)
Defining LOG as #define LOG //, but comments are stripped before macro substitution, so the // is never substituted in
Any ideas? Like I said earlier, I don't want any alternatives, such as surrounding all the log code with #if defined _DEBUG
If your version of C++ supports if constexpr, I've come to like something along these lines for what you're asking.
#include <iostream>

template <bool Log>
struct LOGGER {
    template <typename T>
    LOGGER& operator<<(T const &t) {
        if constexpr (Log)
            std::cout << t;
        return *this;
    }
};

LOGGER<false> LOG;

int main (int argc, char const* argv[])
{
    LOG << "A log statement." << '\n';
    return 0;
}
Your question and constraint ("I don't want any alternatives") are weirdly specific.
I've been thinking of the ways I can tell the compiler to ignore this line of code that starts with "LOG"
Don't do that, it'll be trivially broken by a multi-line logging statement. Any code that can suddenly break due to otherwise-legal reformatting is best avoided.
Next we have
... which still results in it being visible in the disassembly ...
which shouldn't be true if the code is genuinely dead, you have a decent compiler, and you turn on optimization. It's still some work, though.
The usual solution is something like
#ifdef NDEBUG
#define LOG(EXPR)
#else
#define LOG(EXPR) std::cerr << EXPR
#endif
This is an alternative, but it's not an alternative such as surrounding all the log code with #if defined, so I don't know if it's a problem for you or not.
It does have the advantage of genuinely compiling to nothing at any optimization level.
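For illustration (my example; x stands in for whatever you want to print), usage looks like this:
int x = 42;
LOG("x = " << x << '\n');   // with NDEBUG defined, this whole statement becomes just ";"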
Another possibility, relying on the compiler's ability to optimize:
#define LOG if (DEBUG) std::cout
Now you can write
#define DEBUG false
LOG << "hello " << " world 1" << std::endl;
You should be able to use const bool DEBUG = false as well.
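A side note of my own, not from the answer: because LOG expands to a bare if, a following else can bind to it by accident:
if (condition)
    LOG << "message";   // expands to: if (condition) if (DEBUG) std::cout << "message";
else
    somethingElse();    // this else now pairs with "if (DEBUG)"
A common dangling-else-safe variant of the same idea is:
#define LOG if (!(DEBUG)) {} else std::cout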
#if defined _DEBUG
#define LOG std::cout
#else
#define LOG /##/
#endif
This works (at least with MSVC's preprocessor; strictly speaking, pasting two / characters with ## does not form a valid preprocessing token, so it is not portable). It's the answer to the original question, so I'll mark it as such, but just know that this does not support multi-line statements.
I suppose you could do something like the following for multi-line use. I don't know how well it would work.
#if defined _DEBUG
#define LOG( in ) std::cout << in
#else
#define LOG( in ) /##/
#endif
A better approach would be to define a tracer policy: set the logging level at the start of the application and then use that tracing level to decide whether to log the debug information. The tracing level can be defined as an enum, for example:
enum TraceLevel { CRITICAL, ERROR, INFO, TEST, DEBUG };

static TraceLevel _traceLevel = INFO;

void setTraceLevel(TraceLevel trcLvl) {
    _traceLevel = trcLvl;
}
#if defined _DEBUG
#define LOG if (_traceLevel == DEBUG) std::cout
#else
#define LOG if (false) std::cout
#endif
A lightweight logger can be found at http://www.drdobbs.com/cpp/a-lightweight-logger-for-c/240147505?pgno=1
Below is an example of using a debug variable
class A {
public:
    A(bool debug) : m_debug(debug) {}
    ~A() {}

    void Test() {
        for (int i = 0; i < 1000000; i++) {
            // do something
            if (m_debug) print();
        }
    }

    void print() {
        std::cout << "something" << std::endl;
    }

private:
    bool m_debug;
};
Below is an example of using a preprocessor debug macro
#include "Preprocessor.h"
class A {
public:
    void Test() {
        for (int i = 0; i < 1000000; i++) {
            // do something
#ifdef DEBUG
            print();
#endif
        }
    }

    void print() {
        std::cout << "something" << std::endl;
    }
};
Preprocessor.h simply contains
#define DEBUG
The good thing about using a debug variable is that the class has one less dependency on a global preprocessor header. The good thing about the macro approach is that there are 1,000,000 fewer if statements executed at run time, which might be critical for, say, a graphics application where every single fps counts. Which would be considered the better approach?
The better approach is to use the preprocessor; however, it does not necessarily require a new header file to define the macro.
You can set the flag when compiling, using -DMACRO_NAME or, in your case, -DDEBUG.
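For example:
g++ -DDEBUG main.cpp     # GCC/Clang: defines DEBUG for this translation unit
cl /DDEBUG main.cpp      # MSVC equivalent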
As long as the job is printing debug info, the precedent is to use macros, as Visual Studio does (debug/release builds). In the Qt world, however, there is the class QLoggingCategory, which lets you enable/disable logging categories. It calls QLoggingCategory::isDebugEnabled() every time a message is logged, which suggests the check is not a major performance issue, at least for normal use.
That said, if we compare a Visual Studio MFC application with a Qt application, MFC applications are lightning fast. This could be attributed at least in part to the use of macros, and the difference can be noticed rather easily between debug and release builds as well, where macro/debug info is the main difference.
Given all this, my vote is for the macro approach in your case, for maximum performance.
First, the macro way is better both for memory and for the cost of the if test (though that is a really minor cost). Do you have any special scenario that requires the debug variable?
Why not define A::Test() in the .cpp file? Then the global preprocessor header could be moved to the .cpp file. Anyway, I don't think exposing such debug details in the header is a good idea.
Another alternative, if you do not like having a bunch of #ifdef DEBUG / #endif lines: you could create a macro that outputs its argument only if defined (much like the assert macro).
#ifdef DEBUG
#define PRINT(x) (x)
#else
#define PRINT(x)
#endif
PRINT( std::cout << "something" << std::endl );
The code path does not get much more minimal than "compiled away". However, if you are willing to perform a refactoring step, you can make the run-time debug version cheaper at the cost of a larger executable.
The idea is to make the debug state a template parameter of a refactored version of Test() so that it may or may not print a debug statement on each iteration. The compiler's dead code elimination pass will then optimize away the statement in the case false is passed to the template, since template parameters will be treated as compile time constants in the template expansion.
The fully optimized version can still use conditional compilation to always disable debug output, of course.
template <bool Debug>
void TestDebug(){
    for(int i=0;i<1000000;i++){
        // do something
        if (Debug) print();
    }
}

void Test(){
#ifdef DEBUG
    if(m_debug) TestDebug<true>();
    else TestDebug<false>();
#else
    TestDebug<false>();
#endif
}
The C preprocessor is justifiably feared and shunned by the C++ community. Inlined functions, consts and templates are usually a safer and superior alternative to a #define.
The following macro:
#define SUCCEEDED(hr) ((HRESULT)(hr) >= 0)
is in no way superior to the type safe:
inline bool succeeded(int hr) { return hr >= 0; }
But macros do have their place; please list the uses you find for macros that you can't do without the preprocessor.
Please put each use case in a separate answer so it can be voted up, and if you know how to achieve one of the answers without the preprocessor, point out how in that answer's comments.
As wrappers for debug functions, to automatically pass things like __FILE__, __LINE__, etc:
#ifdef DEBUG
#define M_DebugLog( msg ) std::cout << __FILE__ << ":" << __LINE__ << ": " << msg
#else
#define M_DebugLog( msg )
#endif
Since C++20, the magic type std::source_location can be used instead of __LINE__ and __FILE__ to implement an analogue as a normal function (template).
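A minimal sketch of that C++20 approach (my example; debugLog is an illustrative name, not a standard function):
#include <iostream>
#include <source_location>
#include <string_view>

void debugLog(std::string_view msg,
              std::source_location loc = std::source_location::current()) {
    std::cout << loc.file_name() << ":" << loc.line() << ": " << msg << '\n';
}

// debugLog("something happened");  // prints the caller's file and line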
Methods must always be complete, compilable code; macros may be code fragments. Thus you can define a foreach macro:
#define foreach(list, index) for(index = 0; index < list.size(); index++)
And use it as thus:
foreach(cookies, i)
printf("Cookie: %s", cookies[i]);
Since C++11, this is superseded by the range-based for loop.
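For reference, the range-based equivalent (assuming cookies holds C strings, as in the snippet above):
for (const char* cookie : cookies)
    printf("Cookie: %s", cookie);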
Header file guards necessitate macros.
Are there any other areas that necessitate macros? Not many (if any).
Are there any other situations that benefit from macros? YES!!!
One place I use macros is with very repetitive code. For example, when wrapping C++ code to be used with other interfaces (.NET, COM, Python, etc...), I need to catch different types of exceptions. Here's how I do that:
#define HANDLE_EXCEPTIONS \
    catch (::mylib::exception& e) { \
        throw gcnew MyDotNetLib::Exception(e); \
    } \
    catch (::std::exception& e) { \
        throw gcnew MyDotNetLib::Exception(e, __LINE__, __FILE__); \
    } \
    catch (...) { \
        throw gcnew MyDotNetLib::UnknownException(__LINE__, __FILE__); \
    }
I have to put these catches in every wrapper function. Rather than type out the full catch blocks each time, I just type:
void Foo()
{
    try {
        ::mylib::Foo();
    }
    HANDLE_EXCEPTIONS
}
This also makes maintenance easier. If I ever have to add a new exception type, there's only one place I need to add it.
There are other useful examples too: many of which include the __FILE__ and __LINE__ preprocessor macros.
Anyway, macros are very useful when used correctly. Macros are not evil -- their misuse is evil.
Mostly:
Include guards
Conditional compilation
Reporting (predefined macros like __LINE__ and __FILE__)
(rarely) Duplicating repetitive code patterns.
In your competitor's code.
Inside conditional compilation, to overcome issues of differences between compilers:
#ifdef WE_ARE_ON_WIN32
#define close(parm1) _close (parm1)
#define rmdir(parm1) _rmdir (parm1)
#define mkdir(parm1, parm2) _mkdir (parm1)
#define access(parm1, parm2) _access(parm1, parm2)
#define create(parm1, parm2) _creat (parm1, parm2)
#define unlink(parm1) _unlink(parm1)
#endif
When you want to make a string out of an expression. The best example of this is assert (#x turns the expression x into a string literal):
#define ASSERT_THROW(condition) \
    if (!(condition)) \
        throw std::runtime_error(#condition " is false");
// std::runtime_error rather than std::exception: only the former takes a message in standard C++
String constants are sometimes better defined as macros since you can do more with string literals than with a const char *.
e.g. String literals can be easily concatenated.
#define BASE_HKEY "Software\\Microsoft\\Internet Explorer\\"
// Now we can concat with other literals
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "Settings", &settings);
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "TypedURLs", &URLs);
If a const char * were used then some sort of string class would have to be used to perform the concatenation at runtime:
const char* BaseHkey = "Software\\Microsoft\\Internet Explorer\\";
RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "Settings").c_str(), &settings);
RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "TypedURLs").c_str(), &URLs);
Since C++20, however, it is possible to implement a string-like class type that can be used as the non-type template parameter of a user-defined string literal operator, which allows such concatenation at compile time without macros.
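A rough sketch of that C++20 technique (my example; FixedString and the _fs literal are illustrative names, not a standard API, and a compiler with class-type non-type template parameter support is assumed):
#include <cstddef>

template <std::size_t N>
struct FixedString {
    char value[N]{};
    constexpr FixedString(const char (&s)[N]) {
        for (std::size_t i = 0; i < N; ++i) value[i] = s[i];
    }
};

// Compile-time concatenation; both N and M count the trailing '\0', keep one.
template <std::size_t N, std::size_t M>
constexpr FixedString<N + M - 1> operator+(FixedString<N> a, FixedString<M> b) {
    char out[N + M - 1]{};
    for (std::size_t i = 0; i < N - 1; ++i) out[i] = a.value[i];
    for (std::size_t i = 0; i < M; ++i) out[N - 1 + i] = b.value[i];
    return FixedString<N + M - 1>(out);
}

// String literal operator template taking the class type as a non-type
// template parameter (the C++20 feature referred to above).
template <FixedString S>
constexpr auto operator""_fs() { return S; }

constexpr auto settingsKey =
    "Software\\Microsoft\\Internet Explorer\\"_fs + "Settings"_fs;
// settingsKey.value now holds the concatenated key, built entirely at compile time.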
When you want to change the program flow (return, break and continue) code in a function behaves differently than code that is actually inlined in the function.
#define ASSERT_RETURN(condition, ret_val) \
if (!(condition)) { \
assert(false && #condition); \
return ret_val; }
// should really be in a do { } while(false) but that's another discussion.
The obvious include guards
#ifndef MYHEADER_H
#define MYHEADER_H
...
#endif
Let's say we'll ignore obvious things like header guards.
Sometimes, you want to generate code that needs to be copy/pasted by the precompiler:
#define RAISE_ERROR_STL(p_strMessage) \
do \
{ \
    try \
    { \
        std::tstringstream strBuffer ; \
        strBuffer << p_strMessage ; \
        raiseSomeAlert(__FILE__, __FUNCSIG__, __LINE__, strBuffer.str().c_str()) ; \
    } \
    catch(...) {} \
} \
while(false)
which enables you to code this:
RAISE_ERROR_STL("Hello... The following values " << i << " and " << j << " are wrong") ;
And can generate messages like:
Error Raised:
====================================
File : MyFile.cpp, line 225
Function : MyFunction(int, double)
Message : "Hello... The following values 23 and 12 are wrong"
Note that mixing templates with macros can lead to even better results (i.e. automatically generating the values side-by-side with their variable names)
Other times, you need the __FILE__ and/or the __LINE__ of some code, to generate debug info, for example. The following is a classic for Visual C++:
#define WRNG_PRIVATE_STR2(z) #z
#define WRNG_PRIVATE_STR1(x) WRNG_PRIVATE_STR2(x)
#define WRNG __FILE__ "(" WRNG_PRIVATE_STR1(__LINE__) ") : ------------ : "
As with the following code:
#pragma message(WRNG "Hello World")
it generates messages like:
C:\my_project\my_cpp_file.cpp (225) : ------------ Hello World
Other times, you need to generate code using the # and ## concatenation operators, like generating getters and setters for a property (this is for quite limited cases, though).
Other times, you will generate code that won't compile if used through a function, like:
#define MY_TRY try{
#define MY_CATCH } catch(...) {
#define MY_END_TRY }
Which can be used as
MY_TRY
doSomethingDangerous() ;
MY_CATCH
tryToRecoverEvenWithoutMeaningfullInfo() ;
damnThoseMacros() ;
MY_END_TRY
(still, I only saw this kind of code rightly used once)
Last, but not least, the famous boost::foreach !!!
#include <string>
#include <iostream>
#include <boost/foreach.hpp>

int main()
{
    std::string hello( "Hello, world!" );

    BOOST_FOREACH( char ch, hello )
    {
        std::cout << ch;
    }

    return 0;
}
(Note: code copy/pasted from the boost homepage)
Which is (IMHO) way better than std::for_each.
So, macros are always useful because they are outside the normal compiler rules. But I find that most of the time I see one, they are effectively remnants of C code never translated into proper C++.
Unit test frameworks for C++ like UnitTest++ pretty much revolve around preprocessor macros. A few lines of unit test code expand into a hierarchy of classes that wouldn't be fun at all to type manually. Without something like UnitTest++ and its preprocessor magic, I don't know how you'd efficiently write unit tests for C++.
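To give a flavor of it, here is a minimal self-registering test macro of my own (illustrative only, not UnitTest++'s real implementation): TEST expands into a function definition plus a static object that registers it.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct TestRegistry {
    static std::vector<std::pair<std::string, std::function<void()>>>& tests() {
        static std::vector<std::pair<std::string, std::function<void()>>> all;
        return all;
    }
};

#define TEST(name)                                                        \
    static void test_##name();                                            \
    static const bool registered_##name =                                 \
        (TestRegistry::tests().emplace_back(#name, &test_##name), true);  \
    static void test_##name()

TEST(AdditionWorks) {
    if (1 + 1 != 2) std::cout << "AdditionWorks FAILED\n";
}

int main() {
    for (auto& [name, run] : TestRegistry::tests()) {
        std::cout << "Running " << name << '\n';
        run();
    }
}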
You can't perform short-circuiting of function call arguments using a regular function call. For example:
#define andm(a, b) (a) && (b)
bool andf(bool a, bool b) { return a && b; }
andm(x, y) // short circuits the operator so if x is false, y would not be evaluated
andf(x, y) // y will always be evaluated
To fear the C preprocessor is like fearing incandescent bulbs just because we now have fluorescent bulbs. Yes, the former can be {electricity | programmer time} inefficient. Yes, you can get (literally) burned by them. But they can get the job done if you handle them properly.
When you program embedded systems, C used to be the only option apart from assembler. After programming on the desktop with C++ and then switching to smaller, embedded targets, you learn to stop worrying about the “inelegancies” of so many bare C features (macros included) and just try to figure out the best and safest use you can get from those features.
Alexander Stepanov says:
When we program in C++ we should not be ashamed of its C heritage, but make full use of it. The only problems with C++, and even the only problems with C, arise when they themselves are not consistent with their own logic.
Some very advanced and useful stuff can still be built using the preprocessor (macros), which you would never be able to do using C++ language constructs, including templates.
Examples:
Making something both a C identifier and a string
Easy way to use variables of enum types as string in C
Boost Preprocessor Metaprogramming
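As a small taste of the first two links, here is the usual X-macro trick (my example) that keeps an enum and its printable names in sync from a single list:
#define COLOR_LIST(X) X(Red) X(Green) X(Blue)

#define AS_ENUM(name) name,
enum Color { COLOR_LIST(AS_ENUM) };
#undef AS_ENUM

#define AS_STRING(name) #name,
const char* const ColorNames[] = { COLOR_LIST(AS_STRING) };
#undef AS_STRING

// ColorNames[Green] == "Green"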
We use the __FILE__ and __LINE__ macros for diagnostic purposes in information rich exception throwing, catching and logging, together with automated log file scanners in our QA infrastructure.
For instance, a throwing macro OUR_OWN_THROW might be used with exception type and constructor parameters for that exception, including a textual description. Like this:
OUR_OWN_THROW(InvalidOperationException, (L"Uninitialized foo!"));
This macro will of course throw the InvalidOperationException exception with the description as constructor parameter, but it will also write a message to a log file consisting of the file name and line number where the throw occurred and its textual description. The thrown exception gets an id, which also gets logged. If the exception is ever caught somewhere else in the code, it is marked as such, and the log file will then indicate that that specific exception has been handled and that it is therefore not likely the cause of any crash that might be logged later on. Unhandled exceptions can easily be picked up by our automated QA infrastructure.
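A simplified sketch of what such a macro might look like (my reconstruction; logThrowSite is a hypothetical stand-in for the real logging call, and the actual macro described above also assigns and records an exception id):
void logThrowSite(const char* file, int line, const char* what);  // hypothetical logging helper

#define OUR_OWN_THROW(ExceptionType, ctorArgs)               \
    do {                                                     \
        logThrowSite(__FILE__, __LINE__, #ExceptionType);    \
        throw ExceptionType ctorArgs;                        \
    } while (false)

// OUR_OWN_THROW(InvalidOperationException, (L"Uninitialized foo!"));
// expands to: logThrowSite(...); throw InvalidOperationException(L"Uninitialized foo!");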
Code repetition.
Have a look at the Boost Preprocessor library; it's a kind of meta-meta-programming. In Topic -> Motivation you can find a good example.
One common use is for detecting the compile environment; for cross-platform development, you can write one set of code for Linux, say, and another for Windows when no cross-platform library already exists for your purposes.
So, in a rough example, a cross-platform mutex can have
void lock()
{
#ifdef WIN32
    EnterCriticalSection(...)
#endif
#ifdef POSIX
    pthread_mutex_lock(...)
#endif
}
For functions, they are useful when you want to explicitly ignore type safety. Such as the many examples above and below for doing ASSERT. Of course, like a lot of C/C++ features you can shoot yourself in the foot, but the language gives you the tools and lets you decide what to do.
I occasionally use macros so I can define information in one place, but use it in different ways in different parts of the code. It's only slightly evil :)
For example, in "field_list.h":
/*
* List of fields, names and values.
*/
FIELD(EXAMPLE1, "first example", 10)
FIELD(EXAMPLE2, "second example", 96)
FIELD(ANOTHER, "more stuff", 32)
...
#undef FIELD
Then for a public enum it can be defined to just use the name:
#define FIELD(name, desc, value) FIELD_ ## name,
typedef enum field_ {
#include "field_list.h"
    FIELD_MAX
} field_en;
And in a private init function, all the fields can be used to populate a table with the data:
// Note: the parameters are renamed so they don't collide with (and get
// substituted into) the struct member names .desc and .value.
#define FIELD(name, desc_, value_) \
    table[FIELD_ ## name].desc = desc_; \
    table[FIELD_ ## name].value = value_;
#include "field_list.h"
Something like
void debugAssert(bool val, const char* file, int lineNumber);
#define assert(x) debugAssert(x, __FILE__, __LINE__)
So that you can just for example have
assert(n == true);
and get the source file name and line number of the problem printed out to your log if n is false.
If you use a normal function call such as
void assert(bool val);
instead of the macro, all you can get is your assert function's line number printed to the log, which would be less useful.
#define ARRAY_SIZE(arr) (sizeof arr / sizeof arr[0])
Unlike the 'preferred' template solution discussed in a current thread, you can use it as a constant expression:
char src[23];
int dest[ARRAY_SIZE(src)];
You can use #defines to help with debugging and unit test scenarios. For example, create special logging variants of the memory functions and create a special memlog_preinclude.h:
#define malloc memlog_malloc
#define calloc memlog_calloc
#define free memlog_free
Compile your code using:
gcc -include memlog_preinclude.h ...
And link your memlog.o into the final image. You now control malloc, etc., perhaps for logging purposes, or to simulate allocation failures for unit tests.
When you are making a decision at compile time about compiler/OS/hardware specific behavior.
It allows you to make your interface to compiler/OS/hardware specific features.
#if defined(MY_OS1) && defined(MY_HARDWARE1)
#define MY_ACTION(a,b,c) doSomething_OS1HW1(a,b,c)
#elif defined(MY_OS1) && defined(MY_HARDWARE2)
#define MY_ACTION(a,b,c) doSomething_OS1HW2(a,b,c)
#elif defined(MY_SUPER_OS)
/* On this hardware it is a null operation */
#define MY_ACTION(a,b,c)
#else
#error "PLEASE DEFINE MY_ACTION() for this Compiler/OS/Hardware configuration"
#endif
Compilers can refuse your request to inline.
Macros will always have their place.
Something I find useful is #define DEBUG for debug tracing -- you can leave it set to 1 while debugging a problem (or even leave it on during the whole development cycle), then turn it off when it is time to ship.
You can #define constants on the compiler command line using the -D or /D option. This is often useful when cross-compiling the same software for multiple platforms because you can have your makefiles control what constants are defined for each platform.
In my last job, I was working on a virus scanner. To make things easier to debug, I had lots of logging stuck all over the place, but in a high-demand app like that, a function call is just too expensive. So I came up with this little macro, which still allowed me to enable debug logging in a release version at a customer's site, without the cost of a function call: it would check the debug flag and just return without logging anything, or, if enabled, would do the logging. The macro was defined as follows:
#define dbgmsg(_FORMAT, ...) if((debugmsg_flag & 0x00000001) || (debugmsg_flag & 0x80000000)) { log_dbgmsg(_FORMAT, __VA_ARGS__); }
Because of the __VA_ARGS__ in the log functions, this was a good case for a macro like this.
Before that, I used a macro in a high security application that needed to tell the user that they didn't have the correct access, and it would tell them what flag they needed.
The Macro(s) defined as:
#define SECURITY_CHECK(lRequiredSecRoles) if(!DoSecurityCheck(lRequiredSecRoles, #lRequiredSecRoles, true)) return
#define SECURITY_CHECK_QUIET(lRequiredSecRoles) (DoSecurityCheck(lRequiredSecRoles, #lRequiredSecRoles, false))
Then, we could just sprinkle the checks all over the UI, and it would tell you which roles were allowed to perform the action you tried to do, if you didn't already have that role. The reason for two of them was to return a value in some places, and return from a void function in others...
SECURITY_CHECK(ROLE_BUSINESS_INFORMATION_STEWARD | ROLE_WORKER_ADMINISTRATOR);
LRESULT CAddPerson1::OnWizardNext()
{
    if(m_Role.GetItemData(m_Role.GetCurSel()) == parent->ROLE_EMPLOYEE) {
        SECURITY_CHECK(ROLE_WORKER_ADMINISTRATOR | ROLE_BUSINESS_INFORMATION_STEWARD ) -1;
    } else if(m_Role.GetItemData(m_Role.GetCurSel()) == parent->ROLE_CONTINGENT) {
        SECURITY_CHECK(ROLE_CONTINGENT_WORKER_ADMINISTRATOR | ROLE_BUSINESS_INFORMATION_STEWARD | ROLE_WORKER_ADMINISTRATOR) -1;
    }
    ...
Anyways, that's how I've used them, and I'm not sure how this could have been helped with templates... Other than that, I try to avoid them, unless REALLY necessary.
I use macros to easily define exceptions:
DEF_EXCEPTION(RessourceNotFound, "Ressource not found");
where DEF_EXCEPTION is
#define DEF_EXCEPTION(A, B) class A : public std::exception \
{ \
public: \
    virtual const char* what() const throw() \
    { \
        return B; \
    } \
}
If you have a list of fields that get used for a bunch of things, e.g. defining a structure, serializing that structure to/from some binary format, doing database inserts, etc, then you can (recursively!) use the preprocessor to avoid ever repeating your field list.
This is admittedly hideous. But maybe sometimes better than updating a long list of fields in multiple places? I've used this technique exactly once, and it was quite helpful that one time.
Of course the same general idea is used extensively in languages with proper reflection -- just introspect the class and operate on each field in turn. Doing it in the C preprocessor is fragile, illegible, and not always portable. So I mention it with some trepidation. Nonetheless, here it is...
(EDIT: I see now that this is similar to what #Andrew Johnson said on 9/18; however the idea of recursively including the same file takes the idea a bit further.)
// file foo.h, defines class Foo and various members on it without ever repeating the
// list of fields.
#if defined( FIELD_LIST )
// here's the actual list of fields in the class. If FIELD_LIST is defined, we're at
// the 3rd level of inclusion and somebody wants to actually use the field list. In order
// to do so, they will have defined the macros STRING and INT before including us.
STRING( fooString )
INT( barInt )
#else // defined( FIELD_LIST )
#if !defined(FOO_H)
#define FOO_H
#define DEFINE_STRUCT
// recursively include this same file to define class Foo
#include "foo.h"
#undef DEFINE_STRUCT
#define DEFINE_CLEAR
// recursively include this same file to define method Foo::clear
#include "foo.h"
#undef DEFINE_CLEAR
// etc ... many more interesting examples like serialization
#else // defined(FOO_H)
// from here on, we know that FOO_H was defined, in other words we're at the second level of
// recursive inclusion, and the file is being used to make some particular
// use of the field list, for example defining the class or a single method of it
#if defined( DEFINE_STRUCT )
#define STRING(a) std::string a;
#define INT(a) long a;
class Foo
{
public:
#define FIELD_LIST
// recursively include the same file (for the third time!) to get fields
// This is going to translate into:
//     std::string fooString;
//     long barInt;
#include "foo.h"
#undef FIELD_LIST
    void clear();
};
#undef STRING
#undef INT
#endif // defined(DEFINE_STRUCT)
#if defined( DEFINE_CLEAR )
#define STRING(a) a = "";
#define INT(a) a = 0;
#define FIELD_LIST
void Foo::clear()
{
    // recursively include the same file (for the third time!) to get fields.
    // This is going to translate into:
    //     fooString = "";
    //     barInt = 0;
#include "foo.h"
}
#undef FIELD_LIST
#undef STRING
#undef INT
#endif // defined( DEFINE_CLEAR )
// etc...
#endif // end else clause for defined( FOO_H )
#endif // end else clause for defined( FIELD_LIST )
I've used the preprocessor to calculate fixed-point numbers from floating-point values used in embedded systems that cannot use floating point in the compiled code. It's handy to have all of your math in real-world units and not have to think about them in fixed point.
Example:
// TICKS_PER_UNIT is defined in floating point to allow the conversions to compute during compile-time.
#define TICKS_PER_UNIT 1024.0
// NOTE: The TICKS_PER_x_MS macros will produce constants in the preprocessor. The (long) cast will
// guarantee there are no floating point values in the embedded code and will produce a warning
// if the constant is larger than the data type being stored to.
// Adding 0.5 to the calculation forces rounding instead of truncation.
#define TICKS_PER_1_MS( ms ) (long)( ( ( (ms) * TICKS_PER_UNIT ) / 1000 ) + 0.5 )
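Usage then reads in real-world units, for example:
#define TIMEOUT_TICKS TICKS_PER_1_MS( 250 )   // 250 * 1024.0 / 1000 + 0.5 = 256.5, cast to 256L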
Yet another foreach macros. T: type, c: container, i: iterator
#define foreach(T, c, i) for(T::iterator i=(c).begin(); i!=(c).end(); ++i)
#define foreach_const(T, c, i) for(T::const_iterator i=(c).begin(); i!=(c).end(); ++i)
Usage (showing the concept, not real code):
void MultiplyEveryElementInList(std::list<int>& ints, int mul)
{
    foreach(std::list<int>, ints, i)
        (*i) *= mul;
}

int GetSumOfList(const std::list<int>& ints)
{
    int ret = 0;
    foreach_const(std::list<int>, ints, i)
        ret += *i;
    return ret;
}
Better implementations available: Google "BOOST_FOREACH"
Good articles available: Conditional Love: FOREACH Redux (Eric Niebler) http://www.artima.com/cppsource/foreach.html
Maybe the greatest use of macros is in platform-independent development.
Think about cases of type inconsistency: with macros, you can simply use different header files, like:
--WIN_TYPES.H
typedef ...some struct
--POSIX_TYPES.h
typedef ...some another struct
--program.h
#ifdef WIN32
#define TYPES_H "WINTYPES.H"
#else
#define TYPES_H "POSIX_TYPES.H"
#endif
#include TYPES_H
Much more readable than implementing it in other ways, in my opinion.
I have a very basic class, call it Basic, used in nearly all other files in a bigger project. In some cases, there needs to be debug output, but in release mode this should not be enabled and should be a NOOP.
Currently there is a define in the header which switches a macro on or off, depending on the setting. So this is definitely a NOOP when switched off. I'm wondering whether, with the following code, a compiler (MSVS / gcc) is able to optimize out the function call, so that it is again a NOOP. (By doing that, the switch could live in the .cpp, and switching would be much faster, compile/link-time wise.)
--Header--
void printDebug(const Basic* p);
class Basic {
    Basic() {
        simpleSetupCode;

        // this should be a NOOP in release,
        // but constructor could be inlined
        printDebug(this);
    }
};
--Source--
// PRINT_DEBUG defined somewhere else or here
#if PRINT_DEBUG
void printDebug(const Basic* p) {
// Lengthy debug print
}
#else
void printDebug(const Basic* p) {}
#endif
As with all questions like this, the answer is: if it really matters to you, try the approach and examine the emitted assembly language.
The compiler may be able to optimize this code if it knows the implementation of printDebug at compilation time. If printDebug is in another object module, this can possibly be optimized only by the linker, using whole-program optimization. But the only way to test this is to read the compiler-generated assembly code.
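For reference, whole-program / link-time optimization is typically enabled with flags along these lines:
g++ -O2 -flto a.cpp b.cpp            # GCC/Clang link-time optimization
cl /O2 /GL a.cpp b.cpp /link /LTCG   # MSVC whole-program optimization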
If you already have a PRINT_DEBUG macro, you can extend it in the same way TRACE is defined:
#define PRINT_DEBUG // optional
#ifdef PRINT_DEBUG
#define PRINT_DEBUG_CALL(p) printDebug(p)
#else
#define PRINT_DEBUG_CALL(p)
#endif
void printDebug(const Basic* p);
class Basic {
    Basic() {
        simpleSetupCode;

        // this should be a NOOP in release,
        // but constructor could be inlined
        PRINT_DEBUG_CALL(this);
    }
};
--Source--
// PRINT_DEBUG defined somewhere else or here
#if PRINT_DEBUG
void printDebug(const Basic* p) {
// Lengthy debug print
}
#endif
#if PRINT_DEBUG
#define printDebug _real_print_debug
#else
#define printDebug(...)
#endif
This way the preprocessor will strip all debug code before it even gets to the compiler.
Currently most optimizations are done at compile time. Some compilers, such as LLVM, are able to optimize at link time. This is a really interesting idea; I suggest you take a look at it.
While waiting for that kind of optimization, here is what you can do: define a macro that lets you include the following statement depending on whether DEBUG is defined or not.
#ifdef DEBUG
#define IF_DEBUG if (false) {} else
#else
#define IF_DEBUG if (true) {} else
#endif
You can then use it like this:
Basic() {
simpleSetupCode;
// this should be a NOOP in release,
// but constructor could be inlined
IF_DEBUG printDebug(this);
}
which is already much more readable than
Basic() {
simpleSetupCode;
// this should be a NOOP in release,
// but constructor could be inlined
#if DEBUG
printDebug(this);
#endif
}
Note that you can use it as if it were a keyword:
IF_DEBUG {
printDebug(this);
printDebug(thas);
}
Errm, why not use the preprocessor macro differently?
Just of the top of my head, something like:
#ifdef PRINT_DEBUG
#define DEBUG_TRACE(p) printDebug(p)
#else
#define DEBUG_TRACE(p) ((void)0)
#endif
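Usage in the constructor from the question would then be:
DEBUG_TRACE(this);   // expands to printDebug(this) only when PRINT_DEBUG is defined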