How to separate debug and release mode code - C++

In debug mode, or while I am testing, I need to print lots of information, so I use this method:
#ifdef TESTING
// code with lots of debugging info
#else
// clean code only
#endif // TESTING
But this way I am repeating the same code in two places, and if anything has to be changed later I have to change it in both places, which is time consuming and error prone.
Is this a good method, or is there a simpler and more elegant one?
Thanks.
I am using MS Visual Studio.

You could use a macro to print debug information and then in the release build, define that macro as empty.
e.g.,
#ifdef _DEBUG
#define DEBUG_PRINT(x) printf(x)
#else
#define DEBUG_PRINT(x)
#endif
Using this method, you can also add more information, like __LINE__ and __FILE__, to the debug information automatically.
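For example, a sketch of how the source location could be printed automatically (still assuming x is a plain string, as above):
#ifdef _DEBUG
#define DEBUG_PRINT(x) printf("%s:%d: %s", __FILE__, __LINE__, (x))
#else
#define DEBUG_PRINT(x) ((void)0)
#endif
// DEBUG_PRINT("entering parser\n"); would print something like "parser.cpp:42: entering parser"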

Write once
#ifdef _DEBUG
const bool is_debug = true;
#else
const bool is_debug = false;
#endif
and then
#include <cstdio>

template<bool debug>
struct TemplateDebugHelper {
    void PrintDebugInfo(const char* msg) { std::printf("%s\n", msg); } // real work in debug builds
    void CalcTime(...)         { /* ... real work ... */ }
    void OutputInfoToFile(...) { /* ... real work ... */ }
    /// .....
};

// Empty specialization: every call compiles to nothing
template<>
struct TemplateDebugHelper<false> {
    void PrintDebugInfo(const char*) {} // Empty body
    void CalcTime(...)               {} // Empty body
    void OutputInfoToFile(...)       {} // Empty body
    /// .....
};

typedef TemplateDebugHelper<is_debug> DebugHelper;
DebugHelper global_debug_helper;

int main()
{
    global_debug_helper.PrintDebugInfo("Info"); // Prints only when is_debug == true
}

Use a define like this in your include headers:
#ifdef TESTING
#define DEBUG_INFOS(_X_) CallYourDebugFunction(_X_ )
#else
#define DEBUG_INFOS(_X_) ((void)0)
#endif
and then use only this in your code:
...
DEBUG_INFOS("infos what ever");
RestOfWork();
...
You can also look at the ASSERT and TRACE macros: use DebugView from Sysinternals to read the TRACE output in real time, or track problems with ASSERT. They do similar work, and you can get ideas from them.
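For example, on Windows a trace message can be written so that DebugView picks it up via OutputDebugString; a minimal sketch (TRACE_MSG is just an illustrative name here, not MFC's TRACE):
#include <windows.h>
#ifdef TESTING
#define TRACE_MSG(_X_) OutputDebugStringA(_X_)
#else
#define TRACE_MSG(_X_) ((void)0)
#endif
// TRACE_MSG("something happened\n"); // visible in DebugView when TESTING is defined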
Note: I use the TESTING define because that is what appears in the question.

Use Log4cxx instead of rolling your own logging. The Log4cxx package is highly configurable, supports different levels of logging based on importance/severity, and supports multiple forms of output.
In addition, unless it is very critical code that must be heavily optimized, I would recommend leaving the logging statements in your code (assuming you use Log4cxx) and simply turning down the logging level. That way logging can be enabled dynamically, which can be incredibly helpful if one of your users experiences a hard-to-replicate bug: just direct them on how to configure a higher logging level. If you completely elide the logging from the executable, there is no way to obtain that valuable debugging output in the field.
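A minimal sketch of what that could look like with Log4cxx (the logger name and configuration here are only placeholders):
#include <log4cxx/logger.h>
#include <log4cxx/basicconfigurator.h>

int main()
{
    log4cxx::BasicConfigurator::configure();
    log4cxx::LoggerPtr logger = log4cxx::Logger::getLogger("MyApp");
    logger->setLevel(log4cxx::Level::getInfo());          // raise or lower the level at runtime

    LOG4CXX_DEBUG(logger, "detailed debugging output");   // filtered out at Info level
    LOG4CXX_INFO(logger, "normal informational output");  // emitted
    return 0;
}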

You could use something like Boost.Log, setting the severity filter to the level you need:
#include <boost/log/core.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/expressions.hpp>

namespace logging = boost::log;

void init()
{
    logging::core::get()->set_filter
    (
        logging::trivial::severity >= logging::trivial::info
    );
}

int main(int, char*[])
{
    init();
    BOOST_LOG_TRIVIAL(trace) << "A trace severity message";
    BOOST_LOG_TRIVIAL(debug) << "A debug severity message";
    BOOST_LOG_TRIVIAL(info) << "An informational severity message";
    BOOST_LOG_TRIVIAL(warning) << "A warning severity message";
    BOOST_LOG_TRIVIAL(error) << "An error severity message";
    BOOST_LOG_TRIVIAL(fatal) << "A fatal severity message";
    return 0;
}
I think Pantheios has something similar too.

I'm writing for embedded systems in C.
In my programs I'm using following macros:
#define _L log_stamp(__FILE__, __LINE__)
#define _LS(a) log_string(a)
#define _LI(a) log_long(a)
#define _LA(a,l) log_array(a,l)
#define _LH(a) log_hex(a)
#define _LC(a) log_char(a)
#define ASSERT(con) log_assert(__FILE__, __LINE__, con)
When I make the release version, I simply switch off the #define DEBUG directive and all the macros expand to nothing.
Note that this doesn't consume any CPU cycles or memory in the release version.
Macros are the only way to record in the log where the logging call was made (file and line number).
If I need this information I use:
_L; _LS("this is a log message number "); _LI(5);
otherwise I omit the _L macro.
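For illustration, the switch might look like this in a header, with the definitions shown above in the DEBUG branch and empty versions otherwise:
#ifdef DEBUG
/* ... the logging macro definitions shown above ... */
#else
#define _L
#define _LS(a)
#define _LI(a)
#define _LA(a,l)
#define _LH(a)
#define _LC(a)
#define ASSERT(con)
#endif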

There is a simple way, which works with most compilers:
#ifdef TESTING
#define DPRINTF( args ) printf args
#else
#define DPRINTF( args ) ((void)0)
#endif
Next, in the source code you should use it as:
DPRINTF(("Debug: value a = %d, value b = %d\n", a, b));
The drawback is that you have to use double parentheses, but in the old C and C++ standards variadic macros are not supported (only as a compiler extension).
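On a compiler with C99/C++11 variadic macro support, a variant without the double parentheses might look like this:
#ifdef TESTING
#define DPRINTF(...) printf(__VA_ARGS__)
#else
#define DPRINTF(...) ((void)0)
#endif
// DPRINTF("Debug: value a = %d, value b = %d\n", a, b);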

Related

How to use preprocessor IF on DEFINE that is an ENUM member?

I have been struggling with this for a while now and can't get it to work!
I have a preprocessor define for LOG_LEVEL which defines what logs my program should emit.
I have a lot of LOG points, so performance matters; therefore I don't want a runtime check on the log level.
I trimmed my code down to the minimal problematic construct, which can be played with here: https://onlinegdb.com/u39ueqNAI
#include <iostream>

typedef enum {
    LOG_SILENT = 0,
    LOG_ERROR,
    LOG_WARNING,
    LOG_INFO,
    LOG_DEBUG
} E_LOG_LEVELS;

// this define is set using -DLOG_LEVEL=LOG_WARNING to the compiler.
#define LOG_LEVEL LOG_WARNING

int main() {
    std::cout << "Logging Level Value is " << LOG_LEVEL << std::endl; // output 2 (correctly)
#if LOG_LEVEL==LOG_WARNING
    std::cout << "Log Level Warning!" << std::endl; // outputs (correctly)
#endif
#if LOG_LEVEL==LOG_ERROR
    std::cout << "Log Level Error!" << std::endl; // outputs (Why ??? )
#endif
    return 0;
}
The main issue is that #if LOG_LEVEL==LOG_* is always true.
I also tried #if LOG_LEVEL==2, but this evaluated to false (ugh).
What's going on?
How can I test whether a define equals an enum value?
You don't need the preprocessor for this. The preprocessor also knows nothing about your enum: in an #if expression, any identifier that is not a macro (such as LOG_WARNING or LOG_ERROR) is replaced by 0, so both of your tests reduce to 0==0 and are always true. A normal
if (LOG_LEVEL <= LOG_WARNING)
will not create a runtime test when the condition involves only constants and the build has any optimization at all.
Modern C++ allows you to force the compiler to implement the conditional at compile-time, using if constexpr (...). This will prune away dead branches even with optimization disabled.
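For example, a sketch using the enum from the question (assuming a larger enumerator means more verbose logging):
if constexpr (LOG_LEVEL >= LOG_DEBUG)              // requires C++17
    std::cout << "Log Level Debug!" << std::endl;  // discarded at compile time when the condition is false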
Finally, if you insist on using the preprocessor, and you can guarantee that the macro will use the symbolic name (you'll never build with g++ -DLOG_LEVEL=2), then you can
#define STRINGIFY(x) #x
#define STRINGY2(x) STRINGIFY(x)
#define PP_LOG_LEVEL STRINGY2(LOG_LEVEL)
#define LOG_LEVEL_IS(x) STRINGY2(LOG_LEVEL)==STRINGIFY(x)
then either
#if PP_LOG_LEVEL=="LOG_WARNING"
or
#if LOG_LEVEL_IS(LOG_WARNING)
But I would recommend avoiding preprocessor string comparison for this.
If you do use the preprocessor, I recommend checking the actual value against a whitelist and using #error to stop the build in case LOG_LEVEL isn't set to any of the approved names.

Unused variable warning from debugging code

I've built a logging system that is compiled for two platforms:
1) A debug platform where the logging calls are interleaved in the code.
2) An on-chip platform in which the logging calls should not appear in the code, due to hard limitations on code size and run time.
To achieve my goal I used C macros:
#ifdef DEBUG_PLATFORM
#define LOG(log) { std::stringstream s; s << log; log_func(s); }
#else
#define LOG(log) ;
#endif
Alas, the unused-variable compiler warning is giving me a hard time. For example, the following code will compile on the debug platform but not on the on-chip platform:
int a = 5;
int b = func(1,2,3);
LOG("a: "<<a<<" b: "<< b)
I'd like to free the user from thinking about these issues and doing tricks to avoid the warning (like adding (void)a). Most users don't compile for the on-chip platform, so these kinds of errors are found only in retrospect and cause a lot of inconvenience.
I am not allowed to change the compiler flags; keeping the unused-variable warning enabled is a must.
Does anybody have an idea how to overcome this difficulty?
Is there a way to instruct the compiler to ignore the warning for all variables inside some scope?
I would suggest logging one variable at a time:
#ifdef DEBUG_PLATFORM
#define LOG(log) { std::stringstream s; s<< #log << '=' << log << ' '; log_func(s); }
#else
#define LOG(log) (void) log;
#endif
#log will print the variable name.
(void) log will make the compiler ignore that it has not been used.
You could log more variables provided you define more macro versions, but it gets messy. Also note that with #log and (void) log you can no longer pass "a: " << a to LOG.
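For illustration, usage might look like this (func and log_func are the functions assumed in the question):
int a = 5;
int b = func(1, 2, 3);
LOG(a);  // debug platform: logs "a=5 "; on-chip platform: expands to (void) a;
LOG(b);  // so neither a nor b triggers the unused-variable warning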

Somehow tell compiler to "Do not process line of code"

I'm trying to create a macro for debug logging purposes. Here is an extra simplified version:
#if defined _DEBUG
#define LOG std::cout
#else
#define LOG IGNORETHISLINEOFCODE
#endif
/* ... */
LOG << "Here's some debug code";
I've been thinking of the ways I can tell the compiler to ignore this line of code that starts with "LOG". I'm personally not looking for alternative ways, such as #define LOG( ... ) (void)0. Here's what I've tried:
Overloading the left-shift operator for void as an inline constexpr function that does nothing (which still results in it being visible in the disassembly; I don't want that)
Defining LOG as: #define LOG //, but the comment identifier isn't substituted in
Any ideas? Like I said earlier, I don't want any alternatives, such as surrounding all the log code with #if defined _DEBUG
If your version of C++ supports if constexpr, I've come to like something along these lines for what you're asking:
#include <iostream>

template <bool Log>
struct LOGGER {
    template <typename T>
    LOGGER& operator<<(T const &t) {
        if constexpr (Log)
            std::cout << t;
        return *this;
    }
};

LOGGER<false> LOG;

int main (int argc, char const* argv[])
{
    LOG << "A log statement." << '\n';
    return 0;
}
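To tie this to the build configuration, the template argument could be selected from the same _DEBUG macro instead of being hard-coded, e.g.:
#ifdef _DEBUG
LOGGER<true>  LOG;   // debug build: statements are forwarded to std::cout
#else
LOGGER<false> LOG;   // release build: operator<< bodies compile to nothing
#endif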
Your question and constraint ("I don't want any alternatives") are weirdly specific.
I've been thinking of the ways I can tell the compiler to ignore this line of code that starts with "LOG"
Don't do that, it'll be trivially broken by a multi-line logging statement. Any code that can suddenly break due to otherwise-legal reformatting is best avoided.
Next we have
... which still results in it being visible in the disassembly ...
which shouldn't be true if the code is genuinely dead, you have a decent compiler, and you turn on optimization. It's still some work, though.
The usual solution is something like
#ifdef NDEBUG
#define LOG(EXPR)
#else
#define LOG(EXPR) std::cerr << EXPR
#endif
This is an alternative, but it's not an alternative such as surrounding all the log code with #if defined, so I don't know if it's a problem for you or not.
It does have the advantage of genuinely compiling to nothing at any optimization level.
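Usage would then look something like this (a and b standing in for whatever you want to log):
LOG("a = " << a << ", b = " << b << '\n');  // expands to nothing when NDEBUG is defined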
Another possibility, based on the compiler's optimization abilities:
#define LOG if (DEBUG) std::cout
Now you can use:
#define DEBUG false
LOG << "hello " << " world 1" << std::endl;
You should be able to use const bool DEBUG = false as well.
#if defined _DEBUG
#define LOG std::cout
#else
#define LOG /##/
#endif
This works as well. It's the answer to the original question, so I'll mark it as such, but just know that it does not support multiline statements, and that pasting two slashes with ## does not form a valid preprocessing token, so it relies on compiler-specific behaviour.
I suppose you could do something like the following for multiline operations. I don't know how well it'd work.
#if defined _DEBUG
#define LOG( in ) std::cout << in
#else
#define LOG( in ) /##/
#endif
A better approach would be to define a tracer policy: set the logging level at the start of the application and then use the trace level to decide whether to log the debug information. The trace level can be defined as an enum, for example:
enum TraceLevel { CRITICAL, ERROR, INFO, TEST, DEBUG };

TraceLevel _traceLevel = INFO;

void setTraceLevel(TraceLevel trcLvl) {
    _traceLevel = trcLvl;
}

#if defined _DEBUG
#define LOG if (_traceLevel == DEBUG) std::cout
#else
#define LOG if (false) std::cout
#endif
A lightweight logger can be found at http://www.drdobbs.com/cpp/a-lightweight-logger-for-c/240147505?pgno=1

C++: force compiler to opt out some piece of code

I have a piece of code:
// some code, which only do sanity check
expensive checks
// sanity check end
Now how do I tell the compiler to opt this piece out? Basically, when I compile with -O2 or -O3, I don't want it to be there...
Thanks!
You can accomplish this with a constant and a single #if/#else/#endif block. This allows the code to still be compiled and checked for errors, but omitted during optimization. That can prevent changes that would break the check code from going undetected.
#if defined(USE_EXPENSIVE_CHECKS) || defined(DEBUG)
#define USE_EXPENSIVE_CHECKS_VALUE true
#else
#define USE_EXPENSIVE_CHECKS_VALUE false
#endif

namespace {
    const bool useExpensiveChecks = USE_EXPENSIVE_CHECKS_VALUE;
}

void function()
{
    if (useExpensiveChecks == true)
    {
        // expensive checks
    }
}
Instead of relying on the compiler to optimize the code out, you could pass the compiler an additional symbol definition only when you want the code to run:
// some code, which only do sanity check
#ifdef my_symbol
expensive checks
#endif
// sanity check end
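For example, with GCC or Clang the symbol might be passed on the command line only for the checking build (my_symbol is the placeholder name from the snippet above):
g++ -O2 -Dmy_symbol main.cpp    # sanity checks compiled in
g++ -O2 main.cpp                # sanity checks compiled out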
Using macros and conditionals in the preprocessor is really the only way to avoid code being generated by the compiler.
So, here's how I would do it:
#ifdef NEED_EXPENSIVE_CHECKS
inline void expensive_checking(params...)
{
    ... do expensive checking here ...
}
#else
inline void expensive_checking(params...)
{
}
#endif
Then just call:
some code
expensive_checking(some_parameters...)
some other code
An empty inlined function will result in "no code" in any decent, modern compiler. Use -DNEED_EXPENSIVE_CHECKS in your debug build settings, and don't use it in the release build.
I have also been known to use a combination of macro and function, such as this:
#ifdef NEED_EXPENSIVE_CHECKS
#define EXPENSIVE_CHECKS(stuff...) expensive_checks(__FILE__, __LINE__, stuff)
inline void expensive_checks(const char *file, int line, stuff...)
{
    if (some_checking)
    {
        cerr << "Error, some_checking failed at " << file << ":" << line << endl;
    }
}
#else
#define EXPENSIVE_CHECKS(stuff...)
#endif
Now you get information on which file and what line when something fails, which can be very useful if the checks are made in many places (and you can use __func__ or __PRETTY_FUNCTION__ to get the function name as well, if you wish).
Obviously, the assert() macro does essentially what my macro solution does (it even reports the file name and line number), except that it aborts the program when the check fails.
Move your checks into a separate function, then include <cassert> and write assert(expensive_check()). When you want to disable the checks, define NDEBUG before including <cassert> (typically by passing -DNDEBUG in the release build).
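A minimal sketch of that approach (expensive_check is a hypothetical name for a function holding the checks):
#include <cassert>

bool expensive_check();   // assumed to be defined elsewhere; returns true when everything is consistent

void function()
{
    assert(expensive_check());   // the whole call disappears when NDEBUG is defined
    // ... rest of the work ...
}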

How to remove log debugging statements from a program

I am using boost::log as a logger for my C++ program.
During development I often use it this way, for example:
#define LOG(severity) BOOST_LOG_SEV(boost::logger::get(), (severity))
#define LOG_ERR LOG(Severity::error)
#define LOG_INFO LOG(Severity::info)
#define LOG_DEBUG LOG(Severity::debug)
where BOOST_LOG_SEV is the facility provided by boost::log, while LOG, LOG_ERR, LOG_INFO, LOG_DEBUG are shortcuts defined by me.
In short, BOOST_LOG_SEV dynamically compares the current debugging severity with the severity passed to the macro itself to decide whether to emit the output or not.
This is an example of a program which use the above macros for debugging purposes:
// set at compile time
#define MAX_LOG_SEVERITY Severity::debug
int main() {
    // Print all the messages with a
    // Severity <= MAX_LOG_SEVERITY defined before compiling
    boost::log::set_severity(boost::logger::get(), MAX_LOG_SEVERITY); // set_severity() is fictitious just to give you an idea

    // bool err = ...
    if (err)
        LOG_ERR << "An error occurred";
    else
        LOG_INFO << "Okay";

    LOG_DEBUG << "main() called";
}
Now, when releasing the program for a production environment, debugging messages with a Severity::debug level do not really make sense. I could hide them from the output by simply decreasing MAX_LOG_SEVERITY to Severity::info, but the problem is that the calls made by LOG_DEBUG will not be removed from the executable code. This has a bad impact on both efficiency and object size.
The code is full of logging statements and I'd really like to preserve the simple use of operator<<().
Without touching those statements themselves, is there any better macro definition/trick for LOG_DEBUG that would make the preprocessor or the compiler (during its optimizations) skip or remove the debugging statements unless MAX_LOG_SEVERITY is set to the Severity::debug constant?
While I can't make any guarantees, something like this might work. It depends on what your optimizer does and whether or not you have side effects in the parameters to operator<<.
#ifdef NO_LOG_DEBUG
static class DevNull
{
} dev_null;

template <typename T>
DevNull & operator<<(DevNull & dest, T)
{
    return dest;
}

#define LOG_DEBUG dev_null
#else
#define LOG_DEBUG LOG(Severity::debug)
#endif
@MartinShobe's accepted answer works on:
g++ (4.7.2) with -O1 and higher
clang++ (3.4) with -O2 and higher
Visual Studio (2008) with linker flag /OPT:REF
The accepted answer does not work for me (MSVC 2019, /std:c++17).
My solution is a bit hacky, but the optimizer should definitely take care of it:
#ifdef NDEBUG
#define LOG_DEBUG if (false) std::cout
#else
#define LOG_DEBUG if (true) std::cout
#endif
Usage:
LOG_DEBUG << ... << std::endl;