I've built a logging system that compiles for two platforms:
1) A debug platform where the logging calls are interleaved in the code.
2) An on-chip platform in which the logging calls should not appear in the code, due to hard limitations on code size and run time.
To achieve my goal I used C macros:
#ifdef DEBUG_PLATFORM
#define LOG(log) { std::stringstream s; s << log; log_func(s); }
#else
#define LOG(log) ;
#endif
Alas, the unused-variable compiler warning is giving me a hard time. For example, the following code will compile on the debug platform but not on the on-chip platform:
int a = 5;
int b = func(1,2,3);
LOG("a: "<<a<<" b: "<< b)
I'd like to free the user from thinking about these issues and doing tricks to avoid the warning (like adding (void)a). Most users don't compile for the on-chip platform, so these types of errors would only be found in retrospect and would cause a lot of inconvenience.
I am not allowed to change the compiler flags; keeping the unused-variable warnings enabled is a must.
Does anybody have an idea how to overcome this difficulty?
Is there a way to instruct the compiler to ignore the warning for all variables inside some scope?
I would suggest logging one variable at a time:
#ifdef DEBUG_PLATFORM
#define LOG(log) { std::stringstream s; s<< #log << '=' << log << ' '; log_func(s); }
#else
#define LOG(log) (void) log;
#endif
#log will print the variable name.
(void) log will make the compiler consider the variable used.
You could log more variables per call by adding more macro versions, but it gets messy. Note that with #log and with (void) log, you can no longer pass "a: " << a to LOG.
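For illustration, a hypothetical call site under this scheme (variable names taken from the question):
int a = 5;
int b = func(1, 2, 3);
LOG(a)  // debug platform: logs "a=5 "; on-chip platform: expands to (void) a;
LOG(b)  // debug platform: logs "b=..."; on-chip platform: expands to (void) b;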
Related
I have been struggling with this for a while now, and can't get it to work!
I have a preprocessor define for LOG_LEVEL which defines what logs my program should emit.
I have a lot of LOG points, so performance is needed;
therefore, no runtime check of log_level.
I trimmed my code down to the minimal problematic construct, which can be played with here: https://onlinegdb.com/u39ueqNAI
#include <iostream>

typedef enum {
    LOG_SILENT = 0,
    LOG_ERROR,
    LOG_WARNING,
    LOG_INFO,
    LOG_DEBUG
} E_LOG_LEVELS;

// this define is set using -DLOG_LEVEL=LOG_WARNING to the compiler.
#define LOG_LEVEL LOG_WARNING

int main() {
    std::cout << "Logging Level Value is " << LOG_LEVEL << std::endl; // output 2 (correctly)
#if LOG_LEVEL == LOG_WARNING
    std::cout << "Log Level Warning!" << std::endl; // outputs (correctly)
#endif
#if LOG_LEVEL == LOG_ERROR
    std::cout << "Log Level Error!" << std::endl; // outputs (Why ???)
#endif
    return 0;
}
The main issue is that #if LOG_LEVEL==LOG_* is always true.
I also tried #if LOG_LEVEL==2, but this evaluated to FALSE (ugh).
What's going on?
How can I test that a define equals an enum value?
What's going on is that the preprocessor knows nothing about your enum: inside a #if expression, any identifier that is not a macro is replaced by 0. LOG_LEVEL expands to LOG_WARNING, which is not a macro, so LOG_LEVEL==LOG_ERROR becomes 0==0 (true), while LOG_LEVEL==2 becomes 0==2 (false).
You don't need the preprocessor for this, though. A normal
if (LOG_LEVEL <= LOG_WARNING)
will not create a runtime test when the condition involves only constants and the build has any optimization at all.
Modern C++ (C++17) also lets you force the compiler to resolve the conditional at compile time, using if constexpr (...). This prunes away dead branches even with optimization disabled.
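A minimal sketch of that approach, assuming C++17 and the LOG_LEVEL macro and enum from the question in scope:
if constexpr (LOG_LEVEL <= LOG_WARNING) {
    // no code is generated for this branch when the condition is false
    std::cout << "Log Level Warning!" << std::endl;
}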
Finally, if you insist on using the preprocessor, and you can guarantee that the macro will use the symbolic name (you'll never build with g++ -DLOG_LEVEL=2), then you can
#define STRINGIFY(x) #x
#define STRINGY2(x) STRINGIFY(x)
#define PP_LOG_LEVEL STRINGY2(LOG_LEVEL)
#define LOG_LEVEL_IS(x) STRINGY2(LOG_LEVEL)==STRINGIFY(x)
then either
#if PP_LOG_LEVEL=="LOG_WARNING"
or
#if LOG_LEVEL_IS(LOG_WARNING)
But I would recommend avoiding preprocessor string comparison for this; string literals are not valid in #if expressions in standard C or C++.
If you do use the preprocessor, I recommend checking the actual value against a whitelist and using #error to stop the build in case LOG_LEVEL isn't set to any of the approved names.
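A narrower sketch of that guard, which only verifies that LOG_LEVEL was defined at all (my addition; extend the condition for a full whitelist):
#ifndef LOG_LEVEL
#error "LOG_LEVEL must be defined, e.g. -DLOG_LEVEL=LOG_WARNING"
#endif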
Below is an example of using a debug variable
#include <iostream>

class A {
public:
    A(bool debug) : m_debug(debug) {}
    ~A() {}
    void Test() {
        for (int i = 0; i < 1000000; i++) {
            // do something
            if (m_debug) print();
        }
    }
    void print() {
        std::cout << "something" << std::endl;
    }
private:
    bool m_debug;
};
Below is an example of using a preprocessor debug macro
#include "Preprocessor.h"
class A{
public:
void Test(){
for(int i=0;i<1000000;i++){
// do something
#ifdef DEBUG
print();
#endif
}
}
void print(){
std::cout << "something" << std::endl;
}
};
Preprocessor.h simply contains
#define DEBUG
The good thing about using a debug variable is that the class has one less dependency on a global preprocessor header. The good thing about the macro approach is that there are 1,000,000 fewer if statements executed at run time, which might be critical for, say, a graphics application where every single fps counts. What would be considered the better approach?
The better approach is to use the preprocessor; however, it does not necessarily require a new header file to define the macro.
You can set the flag when compiling, using -DMACRO_NAME or, in your case, -DDEBUG.
As long as the job is printing debug info, the precedent is to use macros, as Visual Studio does (debug/release builds). However, in the Qt world there is the class QLoggingCategory, with which you can enable/disable logging sections. It calls the function QLoggingCategory::isDebugEnabled() every time a message is logged, which makes me think this is not a major performance issue, at least for normal use.
That said, if we compare a Visual Studio MFC application with a Qt application, MFC applications are lightning fast. This could be attributed, at least in part, to the use of macros; the difference can be noticed rather easily between debug and release builds as well, where macro/debug info is the main difference.
Given all this evidence, my vote is for the macro approach in your case, for maximum performance.
First, the macro way is better for both memory and the cost of the if test (though the latter is a really minor cost). Do you have any special scenario that requires the debug variable?
Why not define A::Test() in the .cpp file? That way the global preprocessor header could be moved to the .cpp file as well. In any case, I don't think exposing such debug details in the header is a good idea.
Another alternative, if you do not like having a bunch of #ifdef DEBUG / #endif pairs: you could create a macro that evaluates its argument only when DEBUG is defined (much like the assert macro).
#ifdef DEBUG
#define PRINT(x) (x)
#else
#define PRINT(x)
#endif
PRINT( std::cout << "something" << std::endl );
The code path does not get much more minimal than "compiled away". However, if you are willing to perform a refactoring step, you can make the run-time debug version cheaper at the cost of a larger executable.
The idea is to make the debug state a template parameter of a refactored version of Test(), so that it may or may not print a debug statement on each iteration. The compiler's dead-code-elimination pass will then optimize away the statement when false is passed to the template, since template parameters are compile-time constants in the template expansion.
The fully optimized version can still use conditional compilation to always disable debug output, of course.
template <bool Debug>
void TestDebug() {
    for (int i = 0; i < 1000000; i++) {
        // do something
        if (Debug) print();
    }
}

void Test() {
#ifdef DEBUG
    if (m_debug) TestDebug<true>();
    else         TestDebug<false>();
#else
    TestDebug<false>();
#endif
}
I have a piece of code:
// some code, which only does sanity checks
expensive checks
// sanity check end
Now how do I tell the compiler to force it to opt this piece out? Basically, when I
compile with -O2 or -O3, I don't want it to be there...
Thanks!
You can accomplish this with a constant and a single #if/#else pair. This allows the code to still be compiled and checked for errors, but omitted during optimization. This can prevent changes that would break the check code from going undetected.
#if defined(USE_EXPENSIVE_CHECKS) || defined(DEBUG)
#define USE_EXPENSIVE_CHECKS_VALUE true
#else
#define USE_EXPENSIVE_CHECKS_VALUE false
#endif

namespace {
    const bool useExpensiveChecks = USE_EXPENSIVE_CHECKS_VALUE;
}

void function()
{
    if (useExpensiveChecks)
    {
        // expensive checks
    }
}
Instead of relying on the compiler to optimize the code out, you could pass the compiler an additional symbol definition (e.g. -Dmy_symbol) only when you want the code to run:
// some code, which only does sanity checks
#ifdef my_symbol
expensive checks
#endif
// sanity check end
Using macros and conditionals in the preprocessor is really the only way to avoid code being generated by the compiler.
So, here's how I would do it:
#ifdef NEED_EXPENSIVE_CHECKS
inline void expensive_checking(/* params... */)
{
    // ... do expensive checking here ...
}
#else
inline void expensive_checking(/* params... */)
{
}
#endif
Then just call:
some code
expensive_checking(some_parameters...)
some other code
An empty inlined function will result in "no code" in any decent, modern compiler. Use -DNEED_EXPENSIVE_CHECKS in your debug build settings, and don't use it in release builds.
I have also been known to use a combination of macro and function, such as this:
#ifdef NEED_EXPENSIVE_CHECKS
// "stuff..." is a GNU-style named variadic macro parameter
#define EXPENSIVE_CHECKS(stuff...) expensive_checks(__FILE__, __LINE__, stuff)
inline void expensive_checks(const char *file, int line, /* stuff... */)
{
    if (some_checking) // placeholder for the actual check
    {
        std::cerr << "Error, some_checking failed at " << file << ":" << line << std::endl;
    }
}
#else
#define EXPENSIVE_CHECKS(stuff...)
#endif
Now you get information on which file and what line when something fails, which can be very useful if the checks are made in many places (and you can use __func__ or __PRETTY_FUNCTION__ to get the function name as well, if you wish).
Obviously, the assert() macro will essentially do what my macro solution does, including reporting the filename and line number, except that it aborts the program when a check fails.
Move your checks into a different function, then include <cassert> and write assert(expensive_check()). When you want to disable the checks, define NDEBUG before the inclusion of <cassert> (typically with -DNDEBUG).
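A minimal sketch of that suggestion; expensive_check() is a hypothetical wrapper around the sanity checks:
#include <cassert>

bool expensive_check();   // hypothetical: returns true when the state is consistent

void function()
{
    assert(expensive_check());   // the whole call is compiled away when NDEBUG is defined
}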
I am using boost::log as a logger for my C++ program.
During development I often use it this way, for example:
#define LOG(severity) BOOST_LOG_SEV(boost::logger::get(), (severity))
#define LOG_ERR LOG(Severity::error)
#define LOG_INFO LOG(Severity::info)
#define LOG_DEBUG LOG(Severity::debug)
where BOOST_LOG_SEV is the facility provided by boost::log, while LOG, LOG_ERR, LOG_INFO, LOG_DEBUG are shortcuts defined by me.
In short, BOOST_LOG_SEV dynamically compares the current debugging severity with the severity passed to the macro itself to decide whether to emit the output or not.
This is an example of a program which uses the above macros for debugging purposes:
// set at compile time
#define MAX_LOG_SEVERITY Severity::debug

int main() {
    // Print all the messages with a
    // Severity <= MAX_LOG_SEVERITY defined before compiling
    boost::log::set_severity(boost::logger::get(), MAX_LOG_SEVERITY); // set_severity() is fictitious, just to give you an idea

    // bool err = ...
    if (err)
        LOG_ERR << "An error occurred";
    else
        LOG_INFO << "Okay";
    LOG_DEBUG << "main() called";
}
Now, when releasing the program for a production environment, debugging messages with a Severity::debug level do not really make sense. I could hide them from the output by simply decreasing MAX_LOG_SEVERITY to Severity::info, but the problem is that the calls made by LOG_DEBUG will not be removed from the executable code. This has a bad impact on both efficiency and object size.
The code is full of logging statements and I'd really like to preserve the simple use of operator<<().
Without touching those statements themselves, is there any better macro definition/trick for LOG_DEBUG that would make the preprocessor or the compiler (during its optimizations) "skip" or "remove" the debugging statements when MAX_LOG_SEVERITY is set below Severity::debug?
While I can't make any guarantees, something like this might work. It depends on what your optimizer does and whether or not you have side effects in the parameters to operator<<.
#ifdef NO_LOG_DEBUG
static class DevNull
{
} dev_null;

template <typename T>
DevNull & operator<<(DevNull & dest, T)
{
    return dest;
}

#define LOG_DEBUG dev_null
#else
#define LOG_DEBUG LOG(Severity::debug)
#endif
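With this in place, the call sites stay untouched; a hypothetical example:
LOG_DEBUG << "main() called";  // with NO_LOG_DEBUG defined, this chains through
                               // DevNull's operator<< and the optimizer can drop it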
@MartinShobe's accepted answer works on:
g++ (4.7.2) with -O1 and higher
clang++ (3.4) with -O2 and higher
Visual Studio (2008) with linker flag /OPT:REF
The accepted answer does not work for me (MSVC 2019, /std:c++17).
My solution is a bit wacky, though, but the optimizer should definitely take care of it:
#ifdef NDEBUG
#define LOG_DEBUG if (false) std::cout
#else
#define LOG_DEBUG if (true) std::cout
#endif
Usage:
LOG_DEBUG << ... << std::endl;
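One caveat worth noting (my addition, not part of the original answer): the macro expands to a braceless if, so inside a surrounding if/else the else can bind to the macro's if:
if (cond)
    LOG_DEBUG << "message\n";  // release build: if (false) std::cout << "message\n";
else
    handle_error();            // hypothetical; this else now binds to the macro's if!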
During debug mode or while I am doing testing, I need to print lots of various information, so I use this method:
#ifdef TESTING
// code with lots of debugging info
#else
// clean code only
#endif // TESTING
Is this a good method, or is there another simple and elegant method?
The problem is that this way I am repeating the same code in two places, and if anything in the code changes later on, I have to change it in both places, which is time consuming and error prone.
Thanks.
I am using MS Visual Studio.
You could use a macro to print debug information and then in the release build, define that macro as empty.
e.g.,
#ifdef _DEBUG
#define DEBUG_PRINT(x) printf(x)
#else
#define DEBUG_PRINT(x)
#endif
Using this method, you can also automatically add more information, such as __LINE__ and __FILE__, to the debug output; see the sketch below.
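For example (a sketch; the exact format string is my own choice, not from the answer):
#ifdef _DEBUG
#define DEBUG_PRINT(x) printf("%s:%d: %s", __FILE__, __LINE__, x)
#else
#define DEBUG_PRINT(x)
#endif

// usage: DEBUG_PRINT("entering main loop\n"); // prints e.g. "main.cpp:42: entering main loop"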
Write once
#ifdef _DEBUG
const bool is_debug = true;
#else
const bool is_debug = false;
#endif
and then
template<bool debug>
struct TemplateDebugHelper {
    void PrintDebugInfo(const char*);
    void CalcTime(...);
    void OutputInfoToFile(...);
    // .....
};

// Empty inline specialization
template<>
struct TemplateDebugHelper<false> {
    void PrintDebugInfo(const char*) {} // Empty body
    void CalcTime(...) {}               // Empty body
    void OutputInfoToFile(...) {}       // Empty body
    // .....
};

typedef TemplateDebugHelper<is_debug> DebugHelper;
DebugHelper global_debug_helper;

int main()
{
    global_debug_helper.PrintDebugInfo("Info"); // Works only for is_debug=true
}
Use a define like this in your include headers:
#ifdef TESTING
#define DEBUG_INFOS(_X_) CallYourDebugFunction(_X_ )
#else
#define DEBUG_INFOS(_X_) ((void)0)
#endif
and then use only this in your code:
...
DEBUG_INFOS("infos what ever");
RestOfWork();
...
You can also look at the ASSERT and TRACE macros, and use DebugView from Sysinternals to read the TRACE output in real time or to track problems with ASSERT. ASSERT and TRACE do similar work, and you can get ideas from them.
Note: I use the TESTING define because that is what the question uses.
Use Log4Cxx, instead of rolling your own logging. The Log4Cxx package is highly configurable, supports different levels of logging based on importance/severity, and supports multiple forms of output.
In addition, unless it is very critical code that must be super optimized, I would recommend leaving logging statements (assuming you use Log4Cxx) in your code, but simply turn down the logging level. That way, logging can be dynamically enabled, which can be incredibly helpful if one of your users experiences a hard-to-replicate bug... just direct them on how to configure a higher logging level. If you completely elide the logging from the executable, then there is no way to obtain that valuable debugging output in the field.
You could use something like boost::log, setting the severity level to the one you need.
#include <boost/log/core.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/expressions.hpp>

namespace logging = boost::log;

void init()
{
    logging::core::get()->set_filter
    (
        logging::trivial::severity >= logging::trivial::info
    );
}

int main(int, char*[])
{
    init();

    BOOST_LOG_TRIVIAL(trace) << "A trace severity message";
    BOOST_LOG_TRIVIAL(debug) << "A debug severity message";
    BOOST_LOG_TRIVIAL(info) << "An informational severity message";
    BOOST_LOG_TRIVIAL(warning) << "A warning severity message";
    BOOST_LOG_TRIVIAL(error) << "An error severity message";
    BOOST_LOG_TRIVIAL(fatal) << "A fatal severity message";
}
I think Pantheios has something similar too.
I'm writing for embedded systems in C.
In my programs I'm using the following macros:
#define _L log_stamp(__FILE__, __LINE__)
#define _LS(a) log_string(a)
#define _LI(a) log_long(a)
#define _LA(a,l) log_array(a,l)
#define _LH(a) log_hex(a)
#define _LC(a) log_char(a)
#define ASSERT(con) log_assert(__FILE__, __LINE__, con)
When I'm making a release version, I simply switch off the #define DEBUG directive and all the macros become empty (see the sketch below).
Note that this doesn't consume any CPU cycles or memory in the release version.
Macros are the only way to record in the log where the logging was done
(file and line number).
If I need this information I use:
_L; _LS("this is a log message number "); _LI(5);
otherwise I omit the _L directive.
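A sketch of the switch that the description implies (the release branch is my assumption based on the text above; only two macros shown):
#ifdef DEBUG
#define _L     log_stamp(__FILE__, __LINE__)
#define _LS(a) log_string(a)
#else
#define _L             /* expands to nothing in the release version */
#define _LS(a)         /* no CPU cycles, no memory, no code */
#endif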
There is a simple way, which works with most compilers:
#ifdef TESTING
#define DPRINTF( args ) printf args
#else
#define DPRINTF( args ) ((void)0)
#endif
Next, in the source code you should use it as:
DPRINTF(("Debug: value a = %d, value b = %d\n", a, b));
The drawback is that you have to use double parentheses, because in the old C and C++
standards variadic macros are not supported (only as a compiler extension).
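If your toolchain does support C99/C++11 variadic macros (an assumption about your compiler), the double parentheses can be dropped:
#include <stdio.h>

#ifdef TESTING
#define DPRINTF(...) printf(__VA_ARGS__)
#else
#define DPRINTF(...) ((void)0)
#endif

// usage: DPRINTF("Debug: value a = %d, value b = %d\n", a, b);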