I'm writing a program, and I would like to output different types of debugging information depending on the value of various macro variables (so that I can change the value of a flag that then causes different levels of information to be written to the screen).
For example, suppose I have the following code that prints information to the screen about my program (call this D1):
cout << "% Percentage complete: "
<< ceil((static_cast<double>(idx)/static_cast<double>(ITERATIONS))*(100.00))
<< "%" << endl;
cout << "x = [ x; ";
for(int i=0; i<space.getDimension(); i++)
cout << visited.vec[visited.bestIndex].x[i] << "\t";
cout << "];" << endl;
Now, also suppose I have the following code that prints different information to the screen about my program (call this D2):
cout << "best = [ best; "
<< visited.vec[visited.bestIndex].meanQALY() << "];\n" << endl;
space.displayConstraintsMATLAB(idx+1);
I would like to be able to insert statements such as #D1 and #D2 at certain places in my code and have the macro processor replace these statements with the above code blocks.
How can I do this?
(I'm happy to hear suggestions for different ways of doing this, if macros are not an ideal solution.)
Seems to me what you're looking for is a logging facility. Check out this thread for some suggestions.
Why using a whole framework instead of Macros?
The problem is that proper logging is more difficult than it seems. Threading issues, sessions, classes and object instances are all factors you have to consider. Log file buffering, rollover and compression are further things to think about. You might also want to log over the network, to syslog (or both), or into a database. Correctly implementing all of this yourself is a lot of work.
Are Macros any good?
Sure! In our project we have defined a single macro called LOG which wraps calls to our logging framework (we're using log4cpp). If one day we decide to move to another framework, we only have to redefine our LOG macro in a single place instead of combing the entire codebase. This works because most logging frameworks share a similar interface, usually consisting of a log level and the message string.
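A minimal sketch of such a wrapper might look like this (the LOG name is ours, and the call to Category::log is just one way to drive log4cpp; adapt it to however you use your framework):

#include <log4cpp/Category.hh>
#include <log4cpp/Priority.hh>

// Every call site uses LOG(); only this definition knows about log4cpp,
// so switching frameworks later means redefining this one macro.
#define LOG(priority, message) \
    do { log4cpp::Category::getRoot().log((priority), (message)); } while (0)

// Usage: LOG(log4cpp::Priority::INFO, "iteration finished");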
You can either make a macro which does debugging for you -- like:
#ifdef D1
# define DEBUG(var) //Debug 1 print implementation here
#elif defined D2
# define DEBUG(var) //Debug 2 print implementation here
#else
# define DEBUG(var) //No-op
#endif
Otherwise you could make a debug function, and inside that function do a similar check with #ifdef statements to see how you want to process the input. The function version gives you a little more early type checking, so it tends to report errors more clearly when you try to print something that isn't printable (some custom object). Additionally, if the function is a no-op (has an empty body in Release mode), the call will be discarded by your compiler when optimization flags are present, so you won't pay any additional cost in Release mode as a result of Debug calls.
For my own debug calls I usually define a stream operator, much like std::cout, that converts all the inputs to strings or char* and then punts them to a debug function if DEBUG is defined, or prints nothing otherwise. For exceptions and logging you can do something similar with varying levels of severity (Info, Warning, Error, etc.). This tends to make it as easy to throw debug code around in C++ as it is in more modern languages, imo.
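A rough sketch of that stream idea (the DebugStream name and the DEBUG macro are illustrative assumptions):

#include <iostream>

// Stream-style helper: in debug builds everything is forwarded to std::cerr,
// otherwise operator<< compiles down to nothing.
struct DebugStream
{
    template <typename T>
    DebugStream& operator<<(const T& value)
    {
#ifdef DEBUG
        std::cerr << value;
#else
        (void)value;  // no-op in release builds
#endif
        return *this;
    }
};

static DebugStream dbg;

// Usage: dbg << "x = " << 42 << "\n";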
Related
I am building somewhat larger C++ code bases than I'm used to. I need both good logging and debugging, at least to the console, and also speed.
Generally, I like to do something like this
// Some header file
bool DEBUG = true;
And then in some other file
if (DEBUG) cout << "Some debugging information" << endl;
The issue with this (among others) is that the branching lowers the speed of the final executable. To fix this, I'd have to go into the files at the end and remove all these statements, and then I couldn't use them again later without saving them to some other file and putting them back in.
What is the most efficient solution to this quandary? Python decorators provide a nice approach that I'm not certain exists in C++.
Thanks!
The classic way is to make that DEBUG not a variable, but a preprocessor macro. Then you can have two builds: one with the macro defined to 1, the other with it defined to 0 (or not defined at all, depending on how you plan to use it). Then you can either do #ifdef to completely remove the debug code from being seen by the compiler, or just put it into a regular if, the optimizer will take care of removing the branch with the constant conditional.
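For instance, something along these lines (a sketch; the DEBUG_BUILD name and the -D flag are up to you):

// Define DEBUG_BUILD from the build system, e.g. g++ -DDEBUG_BUILD=1 ...
#ifndef DEBUG_BUILD
#define DEBUG_BUILD 0
#endif

#include <iostream>

int main()
{
    // The condition is a compile-time constant, so the optimizer
    // removes the whole branch in non-debug builds.
    if (DEBUG_BUILD)
        std::cout << "Some debugging information" << std::endl;

#if DEBUG_BUILD
    std::cout << "This line is not even seen by the compiler otherwise\n";
#endif
    return 0;
}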
I am writing a scientific computation code in C++. There are outputs that I want to write in a console and outputs that I write into a file. However, when debugging after implementing a new feature, it is useful to print out much more information than usual. So far I was just sending more information to std::cout/clog and commented these lines out when not needed.
What I want is something like std::clog that would go into a file when needed, or do nothing at all when not needed. It is OK if I need to recompile the code to switch between the two regimes. It is important that nothing happens when it is not needed, because for a really large calculation the log file would be enormous (or the console full of rubbish) and all the writing would slow the calculation down.
I am looking for the smallest possible implementation, ideally using only standard libraries for portability.
The obvious solution is to have a global variable, redirect clog to a file and then use an if statement.
bool DEBUG = true;
std::ofstream out("logfile");
std::clog.rdbuf(out.rdbuf());
...
if (DEBUG) std::clog << "my message" << std::endl;
...
Is there a more elegant way of doing this?
Edit:
I would like to avoid non-standard libraries and preprocessor macros (the program is spread across many files, and macros are a bad programming habit in general). One way I could imagine this working, but don't know how to do, is to create a globally accessible object that accepts messages via << and saves them to a file. Then I could just comment out the line inside that class that writes to the file. However, I don't know how much performance impact passing messages to such a do-nothing object would have.
You may use any external logging library for C/C++.
Or create your own small implementation with only utilities what you need.
A traditional logging mechanism is built on macros (or inline functions) and looks like this (note that preprocessor directives cannot appear inside a #define, so the #ifdef has to go around the macro definitions):
#ifdef DEBUG
#define LOG_MESSAGE(msg) /* your debug logging */
#else
#define LOG_MESSAGE(msg) /* your release logging, may be left empty */
#endif // DEBUG
It's also useful to add different logging levels: Error, Warning, Info, etc.
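For example, the level can be folded into the macro itself (a sketch; LOG_LEVEL and the helper names are made up):

#include <cstdio>

// Compile-time log level: 0 = errors only, 1 = +warnings, 2 = +info
#ifndef LOG_LEVEL
#define LOG_LEVEL 1
#endif

#define LOG_AT(level, msg) \
    do { if ((level) <= LOG_LEVEL) std::fprintf(stderr, "%s\n", (msg)); } while (0)

#define LOG_ERROR(msg)   LOG_AT(0, msg)
#define LOG_WARNING(msg) LOG_AT(1, msg)
#define LOG_INFO(msg)    LOG_AT(2, msg)

// Usage: LOG_WARNING("disk almost full");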
Is there a "best practice" or similar for coding in a debug-mode in one's code?
For example,
#include <iostream>
int main()
{
#ifdef MY_DEBUG_DEF
std::cout << "This is only printed if MY_DEBUG_DEF is defined\n";
#endif
return 0;
}
Or is this considered bad practice because the code gets bit messier?
I have noticed some libraries (for example libcurl, which is a large and well-known library) have this feature; if you define VERBOSE with libcurl you get basically a debug mode
Thank you.
A more usual way is to follow the convention of assert(3): wrap code which is only useful for debugging, and which has no significant side effects, in #ifndef NDEBUG .... #endif.
You could even add some debug-printing macro like
extern bool wantdebug;
#ifndef NDEBUG
#define OUTDEBUG(Out) do { if (wantdebug) \
std::cerr << __FILE__ << ":" << __LINE__ \
<< " " << Out << std::endl; \
} while(0)
#else
#define OUTDEBUG(Out) do {}while(0)
#endif
and use something like OUTDEBUG("x=" << x) at appropriate places in your code. The wantdebug flag would then be set through the debugger or through some program argument. You probably want to emit a newline and flush cerr (or cout, or your own debug output stream), using std::endl, so the debug output is displayed immediately (so a future crash of your program would still give sensible debug output).
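Setting the flag from the command line could be as simple as this (a sketch; the --debug option name is arbitrary):

#include <cstring>

bool wantdebug = false;

int main(int argc, char *argv[])
{
    // Enable debug output when the program is started with --debug.
    for (int i = 1; i < argc; i++)
        if (std::strcmp(argv[i], "--debug") == 0)
            wantdebug = true;
    // ... rest of the program, with OUTDEBUG(...) sprinkled around ...
}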
That is an acceptable method. Yes, it gets messy, but it makes it a lot easier to debug as well. Plus you can normally collapse those kinds of things, thus removing the messiness.
As you've mentioned, libcurl uses the method. I also used to have a teacher who works for HP on their printer software, and they used the same method.
I personally prefer runtime enabled logging. That means that you don't have to recompile to get the "debug output". So I have a command-line argument -v=n, where n defaults to zero, and gets stored in the variable verbosity in the actual program. With -v=1 I get basic tracing (basic flow of the code), with -v=2 I get "more stuff" (such as dumps of internal state in selected functions). Two levels is often enough, but three levels may be good at times. An alternative is to make verbosity a bit-pattern, and enable/disable certain functionality based on which bits are set - so set bit 0 for basic trace, bit 1 gives extra info in some module, bit 2 gives extra trace in another module, etc. If you want to be REALLY fancy, you have names, such as -trace=all_basic, -trace=detailed, -trace=module_A, -trace=all_basic,module_A,module_B or some such.
Combine this with a macro along the lines of:
#define TRACE do { if (verbosity > 0) \
std::cout << __FILE__ << ":" << __LINE__ << ":" \
<< __PRETTY_FUNCTION__ << std::endl; } while(0)
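Hooked up to the command line, that could look roughly like this (a sketch; the -v= parsing is deliberately simplistic):

#include <cstdlib>
#include <cstring>
#include <iostream>

int verbosity = 0;

#define TRACE do { if (verbosity > 0) \
    std::cout << __FILE__ << ":" << __LINE__ << ":" \
              << __PRETTY_FUNCTION__ << std::endl; } while(0)

void compute()
{
    TRACE;  // printed only when -v=1 or higher
    // ... real work ...
}

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++)
        if (std::strncmp(argv[i], "-v=", 3) == 0)
            verbosity = std::atoi(argv[i] + 3);
    compute();
}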
For things that may take a substantial amount of extra time, such as verifying the correctness of a large and complex data structure (tree, linked list, etc), then using #ifndef NDEBUG around that code would be a good thing. Assuming of course you believe that you'll never mess that up in a release build.
Real living code here:
https://github.com/Leporacanthicus/lacsap/blob/master/trace.h
https://github.com/Leporacanthicus/lacsap/blob/master/trace.cpp
being use here for example:
https://github.com/Leporacanthicus/lacsap/blob/master/expr.cpp
(Note that some simple functions that get called a lot don't have "TRACE" - it just clutters up the trace and makes it far too long)
Using a logger may be better, e.g.:
log4cxx
log4cpp
ACE_Log_Msg (part of ACE)
Boost.Log
The above are very flexible, but heavyweight. You can also implement some simple macros instead:
#ifdef NDEBUG
#define DBG(FMT, ...)
#else // !NDEBUG
#define DBG(FMT, ...) fprintf (stderr, FMT, ## __VA_ARGS__)
#endif // NDEBUG
The above is GCC syntax from Macros with a Variable Number of Arguments.
For VC, please see also How to make a variadic macro (variable number of arguments)
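Usage then looks something like this (with the DBG macro from above, in a build where NDEBUG is not defined):

#include <cstdio>

int main()
{
    int x = 42;
    DBG("x=%d at %s:%d\n", x, __FILE__, __LINE__);  // expands to fprintf in debug builds
    return 0;
}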
Some investigation into existing C/C++ logging solutions suggested that Pantheios might be the best in my case, that is, the lowest overhead when logging is disabled.
All the loggers seem to support some kind of "print a log message" call. However, in my case I have a function call that should be avoided entirely if logging is disabled (since it's quite expensive).
At the moment I use a very simple logging setup like
#ifdef DEBUG_L1
cout << "msg 1" << endl;  // log level 1
#ifdef DEBUG_L2
printBuffer();            // log level 2
#endif
#endif
It serves my needs (for now) since I pay zero overhead when logging is disabled. However, the code quickly gets ugly and it is not very flexible.
This should be realized with a C++ logger. As said, the body of printBuffer() is quite expensive, so it would be good if calling it could be avoided when logging is turned off.
Is it possible to declare a whole function call to be carried out only above a certain log level? Or do I still need the preprocessor in this case?
Edit:
Thanks @BobTFish. I was actually thinking about using the kind of setup you are describing. I am wondering how flexible this kind of thing can be made. Typically I log a set of strings and values (int, float, and pointers), in the style
cout << "name1=" << someInt << " name2=" << (void*)(ptr) << endl;
Now, I really do not like switching to a printf-like syntax at this point. How would the macro approach deal with this (since it is templated with just one class parameter)?
What comes to mind are C++-specific, template-based logging frameworks like easylogging or spdlog. In spdlog, for example, you can create custom log targets by implementing a sink interface. Another (maybe better) option is to use its log level feature.
Here is an example (copied from the spdlog manual):
//
// Runtime log levels
//
spd::set_level(spd::level::info); //Set global log level to info
console->debug("This message shold not be displayed!");
console->set_level(spd::level::debug); // Set specific logger's log level
console->debug("Now it should..");
By implementing the << operator for your own custom class, you can control what data is dumped to the log. With logger->should_log() you can test whether a given log level is enabled.
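Applied to the expensive printBuffer() call from the question, that check might look like this (a sketch, assuming a spdlog logger named console):

if (console->should_log(spdlog::level::debug)) {
    printBuffer();  // the expensive call is skipped unless the debug level is active
}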
I think you can use the Google logging library (glog) from here.
Typical use of glog with a condition:
#include <glog/logging.h>

int main(int argc, char* argv[])
{
    // Initialize logging. There are multiple options, so read the documentation.
    google::InitGoogleLogging(argv[0]);

    void* p = nullptr;
    LOG_IF(INFO, p == nullptr) << "We have nullptr. Bomb detected!";

    // Don't forget to shut that down
    google::ShutdownGoogleLogging();
    return 0;
}
If you are concerned with performance and run-time overhead, take a look at the zf_log library. Things you could like about it:
It evaluates logging arguments and invokes the actual logging function only when necessary (when the log level allows it).
It has a run-time log level AND a compile-time log level. LOG() statements below the compile-time log level are compiled out and have no run-time overhead.
The run-time log level can be changed at run time, but when a message's log level is below the run-time log level, its arguments will not be evaluated and the actual log function will not be called. The only thing evaluated is the check if (msg_log_level >= runtime_log_level) (see the sketch below).
It has an extremely small call site (the amount of code generated per LOG() line), 3x-20x smaller than other libraries.
It doesn't slow down compilation of sources that include its headers (unlike some header-only libraries).
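The general pattern behind the compile-time and run-time level checks looks roughly like this (a generic sketch, not zf_log's actual API; all names are made up):

// Compile-time level: anything below it disappears from the binary entirely.
#ifndef COMPILE_LOG_LEVEL
#define COMPILE_LOG_LEVEL 2   /* e.g. 0=error, 1=warning, 2=info, 3=debug */
#endif

extern int runtime_log_level;                // can be changed while the program runs

void log_write(int level, const char* msg);  // the actual (relatively slow) output call

#define LOG(level, msg) \
    do { \
        if ((level) <= COMPILE_LOG_LEVEL &&  /* constant, folded at compile time */ \
            (level) <= runtime_log_level)    /* the only run-time check */ \
            log_write((level), (msg)); \
    } while (0)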
I have intermittent crashes occurring in my ActiveMQ libraries due to the way I'm using the activemq-cpp API. It would be much easier to debug the issue if I could observe every function being called leading up to the crash. Are there any quick ways to trace the entry and exit of functions in a Visual Studio 2005 C++ multithreaded program?
Thanks in advance!
Use a Tracer object. Something like this:
#include <iostream>
using std::cout;
using std::endl;

class Tracer
{
public:
    explicit Tracer(const char *functionName) : functionName_(functionName)
    {
        cout << "Entering function " << functionName_ << endl;
    }
    ~Tracer()
    {
        cout << "Exiting function " << functionName_ << endl;
    }
    const char *functionName_;
};
Now you can simply instantiate a Tracer object at the top of the function, and it will automatically print "Exiting function ..." when the function exits and the destructor is called:
void foo()
{
Tracer t("foo");
...
}
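To avoid typing the function name by hand, you could wrap the object in a small macro; Visual C++ provides __FUNCTION__ for exactly this (the TRACE_FUNC name is made up):

#define TRACE_FUNC() Tracer tracerObject(__FUNCTION__)

void bar()
{
    TRACE_FUNC();  // prints "Entering function bar" / "Exiting function bar"
    // ...
}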
While the debugger is attached to a process, you can right-click in the source code and select "Breakpoint -> Insert Tracepoint", with the text you want (a few macros are even supplied).
A tracepoint is in fact a breakpoint whose "When Hit" action prints a message instead of actually breaking the process. I found it mighty useful: it also has a $FUNCTION macro, which does exactly what you need (it prints the function it is in, provided the debug info is available), and a $THREADID.
All the options above are nice and can help you, but I can't see how setting tracepoints with the mouse helps when your code has thousands of functions.
This kind of thing should be part of your regular programming work. When you write a function, you should think about what trace message will help you debug it.
You need to write (or use an existing) logger that can be split into sections (reader thread, worker thread, etc.) and logging levels (error, warning, trace, verbose, etc.). A good logger should be designed so that it doesn't hurt performance; this usually limits verbosity, but complex synchronization problems often can't be reproduced unless the logging is very fast, e.g. assigning a string pointer into an array that can be dumped after the problem is reproduced. I usually start debugging with the full trace dumped to the screen; if I'm lucky and the bug reproduces that way, fixing it is trivial because I already have enough information. The fun starts when the problem goes away and you need to play with verbosity in order to reproduce it.
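That "very fast" logging can be as crude as stashing pointers to string literals in a preallocated ring buffer and dumping it once the problem has reproduced (a sketch; all names are made up, and a real version would use an atomic increment for the index):

#include <cstdio>

// Ring buffer of pointers to string literals: logging is just a pointer store,
// so it barely perturbs the timing of the threads being debugged.
static const int   kTraceSlots = 4096;
static const char* g_trace[kTraceSlots];
static long        g_traceIndex = 0;   // should be atomic / interlocked in real code

inline void FastTrace(const char* literal)  // pass string literals only
{
    g_trace[g_traceIndex++ % kTraceSlots] = literal;
}

void DumpTrace()  // call after the bug has reproduced
{
    for (int i = 0; i < kTraceSlots; ++i)
        if (g_trace[i])
            std::printf("%s\n", g_trace[i]);
}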
I actually find debugging more creative and satisfying than code writing, but this is just me :).