I am creating C++ library modules in my application. For logging, I use spdlog. But in a production environment, I don't want my lib modules to do any logging. One way to turn logging on and off would be to litter my code with #ifdef conditionals like...
#ifdef logging
// call the logger here.
#endif
I am looking for a way to avoid writing these conditionals. Maybe a wrapper function that does the #ifdef check and calls the logger. But the problem with this approach is that I would have to write a wrapper for every logging method (info, trace, warn, error, ...).
Is there a better way?
You can disable logging with set_level():
auto my_logger = spdlog::basic_logger_mt("basic_logger", "logs/basic.txt");
#if defined(PRODUCTION)
my_logger->set_level(spdlog::level::off);
#else
my_logger->set_level(spdlog::level::trace);
#endif
spdlog::register_logger(my_logger);
You can disable all logging before you compile the code by adding the following macro (before including spdlog.h):
#define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_OFF
#include <spdlog/spdlog.h>
It is explained as a comment in the file https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/spdlog.h :
//
// enable/disable log calls at compile time according to global level.
//
// define SPDLOG_ACTIVE_LEVEL to one of those (before including spdlog.h):
// SPDLOG_LEVEL_TRACE,
// SPDLOG_LEVEL_DEBUG,
// SPDLOG_LEVEL_INFO,
// SPDLOG_LEVEL_WARN,
// SPDLOG_LEVEL_ERROR,
// SPDLOG_LEVEL_CRITICAL,
// SPDLOG_LEVEL_OFF
//
Using this macro will also speed up your production code, because the logging calls are completely removed at compile time. Therefore this approach may be better than using my_logger->set_level(spdlog::level::off);
However, in order for the complete code removal to work you need to use either of the macros when logging:
SPDLOG_LOGGER_###(logger, ...)
SPDLOG_###(...)
where ### is one of TRACE, DEBUG, INFO, WARN, ERROR, CRITICAL.
The latter macro uses the default logger (spdlog::default_logger_raw()); the former can be used with your custom loggers. The variadic arguments ... stand for the usual arguments of a logging call: the format string, followed by the values to splice into the message.
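For example, a minimal sketch (assuming spdlog v1.x; the file path and messages are placeholders):

// Must appear before including spdlog; calls below the active level compile away.
#define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_INFO
#include <spdlog/spdlog.h>
#include <spdlog/sinks/basic_file_sink.h>   // for spdlog::basic_logger_mt

int main()
{
    auto my_logger = spdlog::basic_logger_mt("basic_logger", "logs/basic.txt");

    SPDLOG_TRACE("removed at compile time because TRACE < INFO");
    SPDLOG_INFO("uses the default logger, answer = {}", 42);
    SPDLOG_LOGGER_INFO(my_logger, "uses my custom logger");
}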
I don't know spdlog.
However, you could define a macro in one of your commonly used include files that replaces the log call either with nothing or with a call to an empty inline function, which the optimizer will eliminate.
in "app.h"
#ifndef LOG
#ifdef logging
#define LOG spdlog
#endif
#ifndef logging
#define LOG noop
#endif
#endif
Did you get the idea?
This leaves most of your code untouched.
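A rough sketch of what that could look like, keeping the names from above (the "logging" switch and the "noop" namespace are placeholders):

// app.h (sketch)
#ifdef logging
    #define LOG spdlog              // LOG::info(...) becomes spdlog::info(...), assuming spdlog/spdlog.h is included
#else
    namespace noop {
        template <typename... Args>
        inline void info(Args&&...) {}   // empty bodies the optimizer removes
        template <typename... Args>
        inline void warn(Args&&...) {}
        // ... one per level you actually use
    }
    #define LOG noop                // LOG::info(...) becomes a call that does nothing
#endif

// usage in the library code:
// LOG::info("value = {}", x);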
Normally, for classes I don't intend to include in production code, I wrap them in conditional compilation directives such as the usual:
#ifdef DEBUG_VERSION
This could also be around certain chunks of code that performs additional steps in development mode.
I've just thought (after many years of using the above): what happens if a typo is introduced into it? It could have serious consequences: pieces of code included (or not included) when the opposite was intended.
So I'm now wondering about alternatives, and thought about creating 2 macro's:
INCLUDE_IN_DEBUG_BUILD
END_INCLUDE_IN_DEBUG_BUILD
If a typo ever creeps into these, an error is raised at compile time, forcing the user to correct it. The first would expand to "if (1) {" in the debug build and "if (0) {" in the production build, so any compiler worth using should optimise those lines out, and even if it doesn't, at least the code inside will never be called.
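Concretely, something like this (sketch):

#ifdef DEBUG_VERSION
    #define INCLUDE_IN_DEBUG_BUILD      if (1) {
#else
    #define INCLUDE_IN_DEBUG_BUILD      if (0) {
#endif
#define END_INCLUDE_IN_DEBUG_BUILD      }

// usage:
INCLUDE_IN_DEBUG_BUILD
    printf("only in the debug build\n");
END_INCLUDE_IN_DEBUG_BUILD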
Now I'm wondering: Is there something I'm missing here? Why does no-one else use something like this?
Update: I replaced the header-based approach with a build-system based approach.
You want to be able to disable not just part of the code inside a function, but maybe also in other areas like inside a class or namespace:
struct my_struct {
#ifdef DEBUG_VERSION
std::string trace_prefix;
#endif
};
So the real question seems to be: How to prevent typos in your #ifdefs? Here's something which does not limit you and which should work well.
Modify your build system to either define DEBUG_VERSION or RELEASE_VERSION, and define it to nothing. It should be easy to ensure this. For GCC/Clang that means -DDEBUG_VERSION= or -DRELEASE_VERSION= (note the trailing =; a plain -DDEBUG_VERSION would define the macro as 1, which would break the trick below).
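For example, the compile commands could look like this (file name is just an example):

g++ -DDEBUG_VERSION= -c foo.cpp     # debug build
g++ -DRELEASE_VERSION= -c foo.cpp   # release build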
With this, you can protect your code like this:
#ifdef DEBUG_VERSION
DEBUG_VERSION
// ...
#endif
or
#ifndef DEBUG_VERSION
DEBUG_VERSION
// ...
#else
RELEASE_VERSION
// ...
#endif
And voila, in the second example above I already planted a small typo: #ifndef instead of #ifdef. But the compiler will complain now, because the bare DEBUG_VERSION and RELEASE_VERSION tokens are only defined away (to nothing) in the matching build; in the wrong branch they survive as unknown identifiers and cause a compile error.
To make it as safe as possible, you should always have both branches with the two defines, so the first example I gave should be improved to:
#ifdef DEBUG_VERSION
DEBUG_VERSION
// ...
#else
RELEASE_VERSION
#endif
even if the release branch contains no other code/statements. That way you catch most errors, and I think it is quite descriptive. Since DEBUG_VERSION expands to nothing only in a debug build, any typo will lead to a compile-time error; the same goes for RELEASE_VERSION in a release build.
Is there any way to remove a line of code from a Release build but leave it in the Debug build, without ugly #if statements?
For example, is there some way to achieve the equivalent of the below code without using all these if statements?
#if DEBUG
Log.Log("I am in debug mode");
#endif
If I have a conditional, run-time check in the Log.Log function, then the string "I am in debug mode" will be preserved within my compiled executable, which is exactly what I do not want.
Define another macro in a common, shared header.
#ifdef DEBUG
#define LOG(m) Log.Log(m)
#else
#define LOG(m) do {} while(false)
#endif
Then replace your calls to Log.Log with LOG. (The macro bodies deliberately have no trailing semicolon, so LOG(msg); behaves like a single ordinary statement, even inside an if/else.)
You'll ultimately need a preprocessor conditional somewhere, but you could apply it "upstream" in some shared header if you want to keep your application code clean. In that case, you'd have something like
#if DEBUG
#define DebugLog(m) Log.Log(m)
#else
#define DebugLog(m)
#endif
in the header associated with Log, and instead of calling Log.Log(m) inside a preprocessor conditional in your application code, you'd just call DebugLog(m). In a Debug build, the macro would expand to Log.Log(m), but otherwise it would just disappear entirely.
Have your Log.Log function #ifdef'd based on the build. So in DEBUG, it logs stuff, and in RELEASE, it's a no-op. For example:
namespace Log
{
void Log(const char* message)
{
    (void)message;   // silences "unused parameter" until real logging code is added
#if DEBUG
    // Insert logging code here, e.g. write message to the debug console.
#endif
}
}
Here's a little problem I've been thinking about for a while now that I have not found a solution for yet.
So, to start with, I have this function guard that I use for debugging purpose:
class FuncGuard
{
public:
FuncGuard(const TCHAR* funcsig, const TCHAR* funcname, const TCHAR* file, int line);
~FuncGuard();
// ...
};
#ifdef _DEBUG
#define func_guard() FuncGuard __func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__)
#else
#define func_guard() void(0)
#endif
The guard is intended to help trace the path the code takes at runtime by printing some information to the debug console. It is intended to be used like this:
void TestGuardFuncWithCommentOne()
{
func_guard();
}
void TestGuardFuncWithCommentTwo()
{
func_guard();
// ...
TestGuardFuncWithCommentOne();
}
And it gives this as a result:
..\tests\testDebug.cpp(121):
Entering[ void __cdecl TestGuardFuncWithCommentTwo(void) ]
..\tests\testDebug.cpp(114):
Entering[ void __cdecl TestGuardFuncWithCommentOne(void) ]
Leaving[ TestGuardFuncWithCommentOne ]
Leaving[ TestGuardFuncWithCommentTwo ]
Now, one thing that I quickly realized is that it's a pain to add and remove the guards from the function calls. It's also unthinkable to leave them there permanently as they are because it drains CPU cycles for no good reasons and it can quickly bring the app to a crawl. Also, even if there were no impacts on the performances of the app in debug, there would soon be a flood of information in the debug console that would render the use of this debug tool useless.
So, I thought it could be a good idea to enable and disable them on a per-file basis.
The idea would be to have all the function guards disabled by default, but they could be enabled automagically in a whole file simply by adding a line such as
EnableFuncGuards();
at the top of the file.
I've thought about many solutions for this. I won't go into details here since my question is already long enough, but let's just say that I've tried more than a few tricks involving macros, which all failed, and one involving explicit implementation of templates, but so far none of them gets me the result I'm looking for.
Another restricting factor to note: The header in which the function guard mechanism is currently implemented is included through a precompiled header. I know it complicates things, but if someone could come up with a solution that works in this situation, that would be awesome. If not, well, I can certainly extract that header from the precompiled header.
Thanks a bunch in advance!
Add a bool to FuncGuard that controls whether it should display anything.
#ifdef NDEBUG
#define SCOPE_TRACE(CAT)
#else
extern bool const func_guard_alloc;
extern bool const func_guard_other;
// Two-step concatenation so __LINE__ expands before pasting,
// which gives each guard variable a unique name.
#define SCOPE_TRACE_CAT2(a, b) a##b
#define SCOPE_TRACE_CAT(a, b) SCOPE_TRACE_CAT2(a, b)
#define SCOPE_TRACE(CAT) \
NppDebug::FuncGuard SCOPE_TRACE_CAT(npp_func_guard_, __LINE__)( \
TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), \
__LINE__, func_guard_##CAT)
#endif
Implementation file:
void example_alloc() {
SCOPE_TRACE(alloc);
}
void other_example() {
SCOPE_TRACE(other);
}
This:
uses specific categories (including one per file if you like)
allows multiple uses in one function, one per category or logical scope (by including the line number in the variable name)
compiles away to nothing in NDEBUG builds (NDEBUG is the standard I'm-not-debugging macro)
You will need a single project-wide file containing the definitions of your category bools. Changing this 'settings' file does not require recompiling the rest of your program (just relinking), so you can get back to work quickly. (Which means it also works just fine with precompiled headers.)
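The 'settings' file can be as small as this (file name is just an example):

// trace_categories.cpp -- project-wide trace switches; edit and relink to change tracing
bool const func_guard_alloc = true;    // trace allocation-related functions
bool const func_guard_other = false;   // keep everything else quiet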
Further improvement involves telling the FuncGuard about the category, so it can even log to multiple locations. Have fun!
You could do something similar to the assert() macro where having some macro defined or not changes the definition of assert() (NDEBUG in assert()'s case).
Something like the following (untested):
#undef func_guard
#ifdef USE_FUNC_GUARD
#define func_guard() NppDebug::FuncGuard __npp_func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__)
#else
#define func_guard() void(0)
#endif
One thing to remember is that the include file that does this can't have include guard macros (at least not around this part).
Then you can use it like so to get tracing controlled even within a compilation unit:
#define USE_FUNC_GUARD
#include "funcguard.h"
// stuff you want traced
#undef USE_FUNC_GUARD
#include "funcguard.h"
// and stuff you don't want traced
Of course this doesn't play 100% well with pre-compiled headers, but I think that subsequent includes of the header after the pre-compiled stuff will still work correctly. Even so, this is probably the kind of thing that shouldn't be in a pre-compiled header set.
This may be a matter of style, but there's a bit of a divide in our dev team and I wondered if anyone else had any ideas on the matter...
Basically, we have some debug print statements which we turn off during normal development. Personally I prefer to do the following:
//---- SomeSourceFile.cpp ----
#define DEBUG_ENABLED (0)
...
SomeFunction()
{
int someVariable = 5;
#if(DEBUG_ENABLED)
printf("Debugging: someVariable == %d", someVariable);
#endif
}
Some of the team prefer the following though:
// #define DEBUG_ENABLED
...
SomeFunction()
{
int someVariable = 5;
#ifdef DEBUG_ENABLED
printf("Debugging: someVariable == %d", someVariable);
#endif
}
...which of those methods sounds better to you and why? My feeling is that the first is safer because there is always something defined and there's no danger it could destroy other defines elsewhere.
My initial reaction was #ifdef, of course, but I think #if actually has some significant advantages for this - here's why:
First, you can use DEBUG_ENABLED in preprocessor and compiled tests. Example - Often, I want longer timeouts when debug is enabled, so using #if, I can write this
DoSomethingSlowWithTimeout(DEBUG_ENABLED? 5000 : 1000);
... instead of ...
#ifdef DEBUG_ENABLED
DoSomethingSlowWithTimeout(5000);
#else
DoSomethingSlowWithTimeout(1000);
#endif
Second, you're in a better position if you want to migrate from a #define to a global constant. #defines are usually frowned on by most C++ programmers.
And, third, you say you have a divide in your team. My guess is this means different members have already adopted different approaches, and you need to standardise. Ruling that #if is the preferred choice means that code still using #ifdef will compile (and run) even when DEBUG_ENABLED is 0. And it's much easier to track down and remove debug output that is produced when it shouldn't be than the other way round.
Oh, and a minor readability point. You should be able to use true/false rather than 0/1 in your #define, and because the value is a single lexical token, it's the one time you don't need parentheses around it.
#define DEBUG_ENABLED true
instead of
#define DEBUG_ENABLED (1)
They're both hideous. Instead, do this:
#ifdef DEBUG
#define D(x) do { x } while(0)
#else
#define D(x) do { } while(0)
#endif
Then whenever you need debug code, put it inside D();. And your program isn't polluted with hideous mazes of #ifdef.
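For example, assuming a variable x is in scope:
D(printf("debugging: x = %d\n", x););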
#ifdef just checks if a token is defined, given
#define FOO 0
then
#ifdef FOO // is true
#if FOO // is false, because it evaluates to "#if 0"
We have had this same problem across multiple files, and there is always the problem of people forgetting to include a "features flag" file (with a codebase of > 41,000 files it is easy to do).
If you had feature.h:
#ifndef FEATURE_H
#define FEATURE_H
// turn on cool new feature
#define COOL_FEATURE 1
#endif // FEATURE_H
But then you forgot to include the header file in file.cpp:
#if COOL_FEATURE
// definitely awesome stuff here...
#endif
Then you have a problem: the compiler treats the undefined COOL_FEATURE as "false" in this case and fails to include the code. Yes, gcc does support a flag (-Wundef, which can be promoted to an error) for undefined macros used in #if... but most 3rd party code either defines or does not define features, so this would not be that portable.
We have adopted a portable way of correcting for this case as well as testing for a feature's state: function macros.
if you changed the above feature.h to:
#ifndef FEATURE_H
#define FEATURE_H
// turn on cool new feature
#define COOL_FEATURE() 1
#endif // FEATURE_H
But then you again forgot to include the header file in file.cpp:
#if COOL_FEATURE()
// definitely awesome stuff here...
#endif
The preprocessor would have errored out because of the use of an undefined function macro.
For the purposes of performing conditional compilation, #if and #ifdef are almost the same, but not quite. If your conditional compilation depends on two symbols then #ifdef will not work as well. For example, suppose you have two conditional compilation symbols, PRO_VERSION and TRIAL_VERSION, you might have something like this:
#if defined(PRO_VERSION) && !defined(TRIAL_VERSION)
...
#else
...
#endif
Using #ifdef the above becomes much more complicated, especially getting the #else part to work.
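For illustration, the closest #ifdef-only equivalent needs nesting and duplicates the fallback branch:

#ifdef PRO_VERSION
#ifndef TRIAL_VERSION
... // the "pro" code
#else
... // the fallback
#endif
#else
... // the fallback again, duplicated
#endif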
I work on code that uses conditional compilation extensively and we have a mixture of #if & #ifdef. We tend to use #ifdef/#ifndef for the simple cases and #if whenever two or more symbols are being evaluated.
I think it's entirely a question of style. Neither really has an obvious advantage over the other.
Consistency is more important than either particular choice, so I'd recommend that you get together with your team and pick one style, and stick to it.
I myself prefer:
#if defined(DEBUG_ENABLED)
Since it makes code that checks for the opposite condition much easier to spot:
#if !defined(DEBUG_ENABLED)
vs.
#ifndef DEBUG_ENABLED
It's a matter of style. But I recommend a more concise way of doing this:
#ifdef USE_DEBUG
#define debug_print printf
#else
#define debug_print
#endif
debug_print("i=%d\n", i);
You do this once, then always use debug_print() to either print or do nothing. (Yes, this will compile in both cases.) This way, your code won't be cluttered with preprocessor directives.
If you get the warning "expression has no effect" and want to get rid of it, here's an alternative:
void dummy(const char*, ...)
{}
#ifdef USE_DEBUG
#define debug_print printf
#else
#define debug_print dummy
#endif
debug_print("i=%d\n", i);
#if gives you the option of setting it to 0 to turn off the functionality, while still detecting that the switch is there.
Personally I always #define DEBUG 1 so I can catch it with either an #if or #ifdef
#if and #define MY_MACRO (0)
Using #if means that you created a macro with a value, i.e., something the preprocessor will search for in the code and replace with "(0)". This is the "macro hell" I hate to see in C++, because it pollutes the code with potential hidden code modifications.
For example:
#define MY_MACRO (0)
int doSomething(int p_iValue)
{
return p_iValue + 1 ;
}
int main(int argc, char **argv)
{
int MY_MACRO = 25 ;
doSomething(MY_MACRO) ;
return 0;
}
gives the following error on g++:
main.cpp|408|error: lvalue required as left operand of assignment|
||=== Build finished: 1 errors, 0 warnings ===|
Only one error.
Which means that your macro successfully interacted with your C++ code: the call to the function still compiled fine. In this simple case, it is amusing. But my own experience with macros silently playing with my code is not full of joy and fulfilment, so...
#ifdef and #define MY_MACRO
Using #ifdef means you "define" something, without giving it a value. It is still polluting, but at least it will be "replaced by nothing", and not seen by the C++ code as a legitimate statement. The same code above, with a simple value-less define, is:
#define MY_MACRO
int doSomething(int p_iValue)
{
return p_iValue + 1 ;
}
int main(int argc, char **argv)
{
int MY_MACRO = 25 ;
doSomething(MY_MACRO) ;
return 0;
}
Gives the following errors:
main.cpp||In function ‘int main(int, char**)’:|
main.cpp|406|error: expected unqualified-id before ‘=’ token|
main.cpp|399|error: too few arguments to function ‘int doSomething(int)’|
main.cpp|407|error: at this point in file|
||=== Build finished: 3 errors, 0 warnings ===|
So...
Conclusion
I'd rather live without macros in my code, but for multiple reasons (defining header guards, or debug macros), I can't.
But at least, I like to make them interact as little as possible with my legitimate C++ code. Which means using #define without a value, using #ifdef and #ifndef (or even #if defined as suggested by Jim Buck), and most of all, giving them names so long and so alien that no one in their right mind will use them "by chance", so that they can in no way affect legitimate C++ code.
Post Scriptum
Now, as I'm re-reading my post, I wonder if I shouldn't try to find some value that won't ever be correct C++ to add to my define. Something like
#define MY_MACRO ##################
that could be used with #ifdef and #ifndef, but not let code compile if used inside a function... I tried this successfully on g++, and it gave the error:
main.cpp|410|error: stray ‘#’ in program|
Interesting.
:-)
This is not a matter of style at all, and unfortunately the question is a bit off: you cannot compare these preprocessor directives in terms of better or safer.
#ifdef macro
means "if macro is defined" or "if macro exists". The value of macro does not matter here. It can be whatever.
#if macro
#if always compares against a value. In the example above it is the standard implicit comparison:
#if macro != 0
An example of the usage of #if:
#if CFLAG_EDITION == 0
return EDITION_FREE;
#elif CFLAG_EDITION == 1
return EDITION_BASIC;
#else
return EDITION_PRO;
#endif
You can now either put the definition of CFLAG_EDITION in your code
#define CFLAG_EDITION 1
or you can set the macro as a compiler flag.
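For example, with GCC or Clang:

g++ -DCFLAG_EDITION=1 main.cpp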
The first seems clearer to me. It seems more natural to make it a flag, as compared to defined/not defined.
Both are exactly equivalent. In idiomatic use, #ifdef is used just to check for definedness (and what I'd use in your example), whereas #if is used in more complex expressions, such as #if defined(A) && !defined(B).
There is a difference in case of different way to specify a conditional define to the driver:
diff <( echo | g++ -DA= -dM -E - ) <( echo | g++ -DA -dM -E - )
output:
344c344
< #define A
---
> #define A 1
This means that -DA is a synonym for -DA=1, while -DA= defines A to nothing, which can lead to problems if the macro is later used in an #if A check.
A little OT, but turning on/off logging with the preprocessor is definitely sub-optimal in C++. There are nice logging tools like Apache's log4cxx which are open-source and don't restrict how you distribute your application. They also allow you to change logging levels without recompilation, have very low overhead if you turn logging off, and give you the chance to turn logging off completely in production.
I used to use #ifdef, but when I switched to Doxygen for documentation, I found that commented-out macros cannot be documented (or, at least, Doxygen produces a warning). This means I cannot document the feature-switch macros that are not currently enabled.
Although it is possible to define the macros only for Doxygen, this means that the macros in the non-active portions of the code will be documented, too. I personally want to show the feature switches and otherwise only document what is currently selected. Furthermore, it makes the code quite messy if there are many macros that have to be defined only when Doxygen processes the file.
Therefore, in this case, it is better to always define the macros and use #if.
I've always used #ifdef and compiler flags to define it...
Alternatively, you can declare a global constant, and use the C++ if, instead of the preprocessor #if. The compiler should optimize the unused branches away for you, and your code will be cleaner.
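A minimal sketch of that approach (the flag name and where it is set are assumptions):

#include <cstdio>

// defined in one central header or generated by the build system
constexpr bool kDebugEnabled = true;

void someFunction()
{
    int someVariable = 5;
    if (kDebugEnabled)   // ordinary C++ if; the optimizer can drop the branch when the constant is false
        std::printf("Debugging: someVariable == %d\n", someVariable);
}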
Here is what C++ Gotchas by Stephen C. Dewhurst says about using #if's.
I like #define DEBUG_ENABLED (0) when you might want multiple levels of debug. For example:
#define DEBUG_RELEASE (0)
#define DEBUG_ERROR (1)
#define DEBUG_WARN (2)
#define DEBUG_MEM (3)
#ifndef DEBUG_LEVEL
#define DEBUG_LEVEL (DEBUG_RELEASE)
#endif
//...
//now not only
#if (DEBUG_LEVEL)
//...
#endif
//but also
#if (DEBUG_LEVEL >= DEBUG_MEM)
LOG("malloc'd %d bytes at %s:%d\n", size, __FILE__, __LINE__);
#endif
Makes it easier to debug memory leaks, without having all those log lines in your way of debugging other things.
Also the #ifndef around the define makes it easier to pick a specific debug level at the commandline:
make -DDEBUG_LEVEL=2
cmake -DDEBUG_LEVEL=2
etc
If not for this #ifndef guard, I would lean toward #ifdef, because a plain #define in the file would override the compiler/make flag. With the guard, the flag wins, so you don't have to worry about changing the header back before doing the commit.
As with many things, the answer depends. #ifdef is great for things that are guaranteed to be defined or not defined in a particular unit. Include guards for example. If the include file is present at least once, the symbol is guaranteed to be defined, otherwise not.
However, some things don't have that guarantee. Think about the symbol HAS_FEATURE_X. How many states exist?
Undefined
Defined
Defined with a value (say 0 or 1).
So, if you're writing code, especially shared code, where some may #define HAS_FEATURE_X 0 to mean feature X isn't present and others may just not define it, you need to handle all those cases.
#if !defined(HAS_FEATURE_X) || HAS_FEATURE_X == 1
Using just an #ifdef could allow for a subtle error where something is switched in (or out) unexpectedly because someone or some team has a convention of defining unused things to 0. In some ways, I like this #if approach because it means the programmer actively made a decision. Leaving something undefined is passive and from an external point of view, it can sometimes be unclear whether that was intentional or an oversight.
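For example, this is the kind of surprise #ifdef invites:

#define HAS_FEATURE_X 0   // another team's convention for "feature X is off"

#ifdef HAS_FEATURE_X      // true anyway: the macro *is* defined
// feature X code is compiled in even though the author meant to disable it
#endif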