To print debug messages in my program, I have a macro that can be used like this:
DBG(5) << "Foobar" << std::endl;
The 5 is the level of the message; if the debug level is smaller than 5, the message won't be printed. Currently it is implemented like this:
#define DBG(level) !::Logger::IsToDebug((level)) ? : ::Logger::Debug
Basically, IsToDebug checks whether the message should be printed and returns true when it should. Logger::Debug is a std::ostream. This works with both gcc and clang, however clang generates "expression result unused" warnings. According to this email, that isn't likely to change either.
Prefixing it with (void) doesn't work: it only casts the expression before the ?, resulting in a compilation error (void can't be converted to bool, obviously). The other problem with this syntax is that it relies on a gcc extension.
Doing things like #define DBG(x) if (::Logger::IsToDebug((x))) ::Logger::Debug solves the problem, but it's a sure way to break your program (consider if (foo) DBG(1) << "foo"; else ...), and I can't wrap the whole thing in a do { ... } while(0) because of how the macro is used.
The only more or less viable solution I came up with is this (assuming IsToDebug returns either 0 or 1):
#define DBG(level) for(int dbgtmpvar = ::Logger::IsToDebug((level)); \
dbgtmpvar > 0; --dbgtmpvar) ::Logger::Debug
which seems like overkill (not counting its runtime overhead).
I think you should use the ternary operator as it is defined in the Standard rather than the compiler extension. To use the standard ternary operator, you have to provide the second expression as well. For that, you can define a stream class derived from std::ostream that doesn't print anything anywhere. An object of such a class can be used as the second expression.
class oemptystream : public std::ostream
{
//..
};
extern oemptystream nout; //declaration here, as definition should go to .cpp
then
#define DBG(level) (!::Logger::IsToDebug((level)) ? nout : ::Logger::Debug)
Now if you use this macro, then at runtime, the expression would reduce to either this:
nout << "message";
Or this,
::Logger::Debug << "message";
Either way, it is pretty much like this:
std::cout << "message";
So I hope it won't give a compiler warning.
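For reference, one common way to flesh out oemptystream is to give it a streambuf whose overflow() simply discards everything; a minimal sketch (only the class and variable names come from the answer, the rest is an assumption):

#include <ostream>
#include <streambuf>

// A streambuf that reports success but never stores or writes anything.
class onullbuf : public std::streambuf {
protected:
    virtual int overflow(int c) { return traits_type::not_eof(c); }
};

class oemptystream : public std::ostream {
public:
    oemptystream() : std::ostream(0) { rdbuf(&_buf); }  // attach the discarding buffer
private:
    onullbuf _buf;
};

oemptystream nout;  // the definition that goes into the .cpp file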
Related
I would like to print: table_name[variable_value]
by giving ONE input: table_name[variable_name]
Let me explain a simpler case with a toy solution based on a macro:
int i = 1771;
I can print the variable_name with
#define toy_solution(x) cout << #x ;
If I execute
toy_solution(i);
"i" will be printed.
Now, imagine there is a well-allocated table T.
I would like to write in the program:
solution(T[i]);
and to read on the screen "T[1771]".
An ideal solution would treat the two cases, that is:
ideal_solution(i) would print i.
ideal_solution(T[i]) would print T[1771].
It is not important to me to use a macro or a function.
Thanks for your help.
#define toy_solution(x, i) cout << #x << "[" << i << "]"
I would like to print: table_name[variable_value]
by giving ONE input: table_name[variable_name]
well, as you did not understand my comment, I'll say it out loud in an answer:
what you want to do is not possible
You have to choose either @Alin's solution or @lizusek's.
I think that @lizusek's solution is better: you're writing C++ code, so if you can get the same result without macros, you should use plain C++ code.
edit: let me try to explain why this is not possible
so what you want is:
f(T[i]) -> T, i
The only way you could write that so it would make sense to the preprocessor is:
#define f(T[i]) cout<<#T#<<#i#
but then the preprocessor will give an error, because you can't use an array subscript in a function (even a macro function) parameter list:
test.c:5:12: error: expected comma in macro parameter list
#define f(T[i]) cout<<#T#<<#i#
^
If you try to do the same thing using a C++ function, it makes even less sense, because a function call such as:
toy_solution(t[i]);
would actually receive the value of t[i] at runtime, so inside the function you'll never be able to know that the given value came from an array. So what you want is wrong, and you should stick to good coding practices: use a function, and if what you want is:
toy_solution(t[i]);
then use:
toy_solution("t", i);
Possible solutions that you should never use
well, when I say it's not possible, I mean that the only solutions I can think of are so twisted that you'd be insane to actually use them in your code… And if you do, I hope I'll never have to read your code, or I may become violent :-) That's why I won't show you how, or give you any code that could help you do what I'm about to describe.
use a template system
You could either write your own template system or use one commonly used for HTML processing, run your source code through it, and apply a transformation rule such as:
toy_solution(t[i]) -> toy_solution("t", t[i])
It's definitely possible, but it makes your build chain even more complicated and dependent on more tools. C/C++ build toolchains are complicated enough; please don't make them worse.
Or you could make your own fork of C and of a C compiler to change the syntax rules so that what you want becomes possible. Though I personally would never use your fork, and I'd go trolling and flaming about it on HN, deeply regretting having given you such a bad idea :-)
use a custom class to encapsulate your arrays in
if you do something like:
template <typename T> class List;  // forward declaration

template <typename T>
class Element {
    T value;
    List<T>* _owner;
    […]
};

template <typename T>
class List {
    std::vector< Element<T> > values;
    std::string _name;
    […]
};
so that when you call the function
toy_solution(T[i]);
the implementation would look like:
template <typename T>
void toy_solution(const Element<T>& e) {
    std::cout << e.get_list_name() << " " << e.get_value() << std::endl;
}
but that's sooo much boilerplate and overhead, just to avoid a simple function definition that doesn't look as nice as you dreamed of, that I find it really stupid to do.
You can write a function as simple as that:
void solution( std::string const& t, int i) {
std::cout << t << "[" << i << "]";
}
usage:
int i = 1771;
solution( "T", i);
You can also write a macro, but be aware that it is not type safe. A function should be preferred.
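If you do want the macro form anyway, a minimal sketch (the macro name mirrors the function above; the first argument is only stringized, so there is no type checking):

#include <iostream>

// Not type safe; prefer the function above.
#define solution(T, i) (std::cout << #T << "[" << (i) << "]")

int main() {
    int i = 1771;
    solution(T, i);   // prints: T[1771]
}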
Our project uses a macro to make logging easy and simple in one-line statements, like so:
DEBUG_LOG(TRACE_LOG_LEVEL, "The X value = " << x << ", pointer = " << *x);
The macro translates the 2nd parameter into stringstream arguments, and sends it off to a regular C++ logger. This works great in practice, as it makes multi-parameter logging statements very concise. However, Scott Meyers has said, in Effective C++ 3rd Edition, "You can get all the efficiency of a macro plus all the predictable behavior and type safety of a regular function by using a template for an inline function" (Item 2). I know there are many issues with macro usage in C++ related to predictable behavior, so I'm trying to eliminate as many macros as possible in our code base.
My logging macro is defined similar to:
#define DEBUG_LOG(aLogLevel, aWhat) { \
  if (isEnabled(aLogLevel)) { \
    std::stringstream outStr; \
    outStr << __FILE__ << "(" << __LINE__ << ") [" << getpid() << "] : " << aWhat; \
    logger::log(aLogLevel, outStr.str()); \
  } \
}
I've tried several times to rewrite this into something that doesn't use macros, including:
inline void DEBUG_LOG(LogLevel aLogLevel, const std::stringstream& aWhat) {
...
}
And...
template<typename WhatT> inline void DEBUG_LOG(LogLevel aLogLevel, WhatT aWhat) {
... }
To no avail (neither of the above 2 rewrites will compile against our logging code in the 1st example). Any other ideas? Can this be done? Or is it best to just leave it as a macro?
Logging remains one of the few places where you can't completely do away with macros, as you need call-site information (__LINE__, __FILE__, ...) that isn't available otherwise. See also this question.
You can, however, move the logging logic into a separate function (or object) and provide just the call-site information through a macro. You don't even need a template function for this.
#define DEBUG_LOG(Level, What) \
isEnabled(Level) && scoped_logger(Level, __FILE__, __LINE__).stream() << What
With this, the usage remains the same, which might be a good idea so you don't have to change a load of code. With the &&, you get the same short-circuit behaviour as you do with your if clause.
Now, the scoped_logger will be a RAII object that actually logs what it has collected when it's destroyed, i.e. in the destructor.
struct scoped_logger
{
scoped_logger(LogLevel level, char const* file, unsigned line)
: _level(level)
{ _ss << file << "(" << line << ") [" << getpid() << "] : "; }
std::stringstream& stream(){ return _ss; }
~scoped_logger(){ logger::log(_level, _ss.str()); }
private:
std::stringstream _ss;
LogLevel _level;
};
Exposing the underlying std::stringstream object saves us the trouble of having to write our own operator<< overloads (which would be silly). The need to actually expose it through a function is important; if the scoped_logger object is a temporary (an rvalue), so is the std::stringstream member and only member overloads of operator<< will be found if we don't somehow transform it to an lvalue (reference). You can read more about this problem here (note that this problem has been fixed in C++11 with rvalue stream inserters). This "transformation" is done by calling a member function that simply returns a normal reference to the stream.
Small live example on Ideone.
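For illustration, a usage sketch building on the macro and scoped_logger above (isEnabled and logger::log are assumed to come from the question's logging code):

void example() {
    int x = 42;
    DEBUG_LOG(TRACE_LOG_LEVEL, "The X value = " << x);
    // Expands roughly to:
    //   isEnabled(TRACE_LOG_LEVEL) &&
    //       scoped_logger(TRACE_LOG_LEVEL, __FILE__, __LINE__).stream()
    //           << "The X value = " << x;
    // The temporary scoped_logger calls logger::log from its destructor at
    // the end of the full expression.
}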
No, it is not possible to rewrite this exact macro as a template since you are using operators (<<) in the macro, which can't be passed as a template argument or function argument.
We had the same issue and solved it with a class based approach, using a syntax like
DEBUG_LOG(TRACE_LOG_LEVEL) << "The X value = " << x << ", pointer = " << *x << logger::flush;
This would indeed require rewriting the code (e.g. with a regular expression) and introduces some class magic, but gives the additional benefit of greater flexibility (delayed output, output options per log level (to file or stdout), and things like that).
The problem with converting that particular macro into a function is that things like "The X value = " << x are not valid expressions.
The << operator is left-associative, which means something in the form A << B << C is treated as (A << B) << C. The overloaded insertion operators for iostreams always return a reference to the same stream so you can do more insertions in the same statement. That is, if A is a std::stringstream, since A << B returns A, (A << B) << C; has the same effect as A << B; A << C;.
Now you can pass B << C into a macro just fine. The macro just treats it as a bunch of tokens, and doesn't worry about what they mean until all the substituting is done. At that point, the left-associative rule can kick in. But for any function argument, even if inlined and templated, the compiler needs to figure out what the type of the argument is and how to find its value. If B << C is invalid (because B is neither a stream nor an integer), compiler error. Even if B << C is valid, since function parameters are always evaluated before anything in the invoked function, you'll end up with the behavior A << (B << C), which is not what you want here.
If you're willing to change all the uses of the macro (say, use commas instead of << tokens, or something like @svenihoney's suggestion), there are ways to do something. If not, that macro just can't be treated like a function.
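For example, if changing the call sites is acceptable, the comma form can be done without a macro for the message part. A sketch assuming C++11 variadic templates and the question's LogLevel, isEnabled and logger::log; note that __FILE__/__LINE__ capture is omitted and would still need a thin macro as described above:

#include <sstream>
#include <string>

// Build the message from comma-separated arguments instead of a '<<' chain.
template <typename... Args>
std::string concatLogArgs(const Args&... args) {
    std::ostringstream out;
    // C++11-compatible pack expansion; in C++17 a fold expression would do.
    int dummy[] = { 0, ((out << args), 0)... };
    (void)dummy;
    return out.str();
}

template <typename... Args>
void debugLog(LogLevel aLogLevel, const Args&... args) {
    if (isEnabled(aLogLevel))
        logger::log(aLogLevel, concatLogArgs(args...));
}

// Usage: debugLog(TRACE_LOG_LEVEL, "The X value = ", x, ", pointer = ", *x);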
I'd say there's no harm in this macro though, as long as all the programmers who have to use it would understand why on a line starting with DEBUG_LOG, they might see compiler errors relating to std::stringstream and/or logger::log.
If you keep a macro, check out C++ FAQ answers 39.4 and 39.5 for tricks to avoid a few nasty ways macros like this can surprise you.
Why is #define bad and what is the proper substitute?
Someone told me that #define is bad. Well, I honestly don't understand why it's bad. If it is bad, then what other way can I do this?
#include <iostream>
#define stop() cin.ignore(numeric_limits<streamsize>::max(), '\n');
#define is not inherently bad. However, there are usually better ways of doing what you want. Consider an inline function:
inline void stop() {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
(Really, you don't even need inline for a function like that. Just a plain ordinary function would work just fine.)
It's bad because it's indiscriminate. Anywhere you have stop() in your code, it will get replaced.
The way you solve it is by putting that code into its own method.
In C++, using #define is not necessarily bad, although alternatives should be preferred. There are some contexts, such as include guards, in which there is no other portable/standard alternative.
It should be avoided because the C preprocessor operates (as the name suggests) before the compiler. It performs simple textual replacement, without regard to other definitions. This means the resulting input to the compiler sometimes doesn't make sense. Consider:
// in some header file.
#define FOO 5
// in some source file.
int main ()
{
// pre-compiles to: "int 5 = 2;"
// the compiler will vomit a weird compiler error.
int FOO = 2;
}
This example may seem trivial, but real examples exist. Some Windows SDK headers define:
#define min(a,b) ((a<b)?(a):(b))
And then code like:
#include <Windows.h>
#include <algorithm>
int main ()
{
// pre-compiles to: "int i = std::((1<2)?(1):(2));"
// the compiler will vomit a weird compiler error.
int i = std::min(1, 2);
}
When there are alternatives, use them. In the posted example, you can easily write:
void stop() {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
For constants, use real C++ constants:
// instead of
#define FOO 5
// prefer
static const int FOO = 5;
This guarantees that your compiler sees the same thing you do, and gives you the usual scoping rules (a local FOO variable will shadow the global FOO) as expected.
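A small illustration of that scoping behaviour (the names are just for illustration):

static const int FOO = 5;

int f() {
    int FOO = 2;   // fine: the local FOO shadows the global constant
    return FOO;    // returns 2
}
// With "#define FOO 5", the local declaration would expand to "int 5 = 2;"
// and fail to compile.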
It's not necessarily bad, it's just that most things people have used it for in the past can be done in a much better way.
For example, that snippet you provide (and other code macros) could be an inline function, something like (untested):
static inline void stop (void) {
cin.ignore(numeric_limits<streamsize>::max(), '\n');
}
In addition, there are all the other things that code macros force you to do "macro gymnastics" for, such as calling the very badly written:
#define f(x) x * x * x + x
with:
int y = f (a + 1); // a + 1 * a + 1 * a + 1 + a + 1 (4a+2, not a^3+a)
int z = f (a++); // a++ * a++ * a++ + a++
The first of those will totally surprise you with its results due to the precedence of operators, and the second will give you undefined behaviour. Inline functions do not suffer these problems.
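For comparison, a minimal sketch of the inline-function version, which evaluates its argument exactly once and is unaffected by operator precedence at the call site:

#include <iostream>

inline int f(int x) { return x * x * x + x; }

int main() {
    int a = 1;
    int y = f(a + 1);   // 2*2*2 + 2 = 10, i.e. the expected x^3 + x
    int z = f(a++);     // well defined: 'a' is incremented exactly once
    std::cout << y << " " << z << "\n";   // prints "10 2"
}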
The other major thing that macros are used for is for providing enumerated values such as:
#define ERR_OK 0
#define ERR_ARG 1
// ...
#define ERR_MEM 99
and these are better done with enumerations.
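For example, a sketch of the enumeration version (the enum name is illustrative):

// The names obey normal scoping rules and survive into debug information.
enum ErrorCode {
    ERR_OK  = 0,
    ERR_ARG = 1,
    // ...
    ERR_MEM = 99
};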
The main problem with macros is that the substitution is done early in the translation phase, and information is often lost because of this. For example, a debugger generally doesn't know about ERR_ARG since it would have been substituted long before the part of the translation process that creates debugging information.
But, having maligned them enough, they're still useful for defining simple variables which can be used for conditional compilation. That's pretty much all I use them for in C++ nowadays.
#define by itself is not bad, but it does have some bad properties to it. I'll list a few things that I know of:
"Functions" do not act as expected.
The following code seems reasonable:
#define getmax(a,b) (a > b ? a : b)
...but what happens if I call it as such?:
int a = 5;
int b = 2;
int c = getmax(++a,b); // c equals 7.
No, that is not a typo. c will be equal to 7. If you don't believe me, try it. That alone should be enough to scare you.
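For contrast, a minimal sketch of a function-template version, which evaluates each argument exactly once:

#include <iostream>

template <typename T>
T getmax(T a, T b) { return a > b ? a : b; }

int main() {
    int a = 5;
    int b = 2;
    int c = getmax(++a, b);              // 'a' is incremented once; c == 6
    std::cout << a << " " << c << "\n";  // prints "6 6"
}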
The preprocessor is inherently global
Whenever you use a #define to define a macro "function" (such as stop()), it applies to ALL code in every file included after it is defined.
What this means is that you can actually change libraries that you did not write. As long as they use the function stop() in a header file, you could change the behavior of code you didn't write and didn't modify.
Debugging is more difficult.
The preprocessor does symbolic replacement before the code ever makes it to the compiler. Thus if you have the following code:
#define NUM_CUSTOMERS 10
#define PRICE_PER_CUSTOMER 1.10
...
double something = NUM_CUSTOMERS * PRICE_PER_CUSTOMER;
if there is an error on that line, then you will NOT see the convenient variable names in the error message, but rather will see something like this:
double something = 10 * 1.10;
So that makes it more difficult to find things in code. In this example, it doesn't seem that bad, but if you really get into the habit of doing it, then you can run into some real headaches.
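A sketch of the constant-based alternative for the example above:

// Error messages and the debugger now refer to the names,
// not the substituted literals.
const int    NUM_CUSTOMERS      = 10;
const double PRICE_PER_CUSTOMER = 1.10;

double something = NUM_CUSTOMERS * PRICE_PER_CUSTOMER;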
I've implemented an ostream for debug output which ends up sending the debug info to OutputDebugString. A typical use of it looks like this (where debug is an ostream object):
debug << "some error\n";
For release builds, what's the least painful and most performant way to not output these debug statements?
The most common (and certainly most performant) way is to remove them using the preprocessor, using something like this (simplest possible implementation):
#ifdef RELEASE
#define DBOUT( x )
#else
#define DBOUT( x ) x
#endif
You can then say
DBOUT( debug << "some error\n" );
Edit: You can of course make DBOUT a bit more complex:
#define DBOUT( x ) \
debug << x << "\n"
which allows a somewhat nicer syntax:
DBOUT( "Value is " << 42 );
A second alternative is to define DBOUT to be the stream. This means that you must implement some sort of null stream class - see Implementing a no-op std::ostream. However, such a stream does have a runtime overhead in the release build.
A prettier method:
#ifdef _DEBUG
#define DBOUT cout // or any other ostream
#else
#define DBOUT 0 && cout
#endif
DBOUT << "This is a debug build." << endl;
DBOUT << "Some result: " << doSomething() << endl;
As long as you don't do anything weird, functions called and passed to DBOUT won't be called in release builds. This macro works because of operator precedence and the logical AND; because && has lower precedence than <<, release builds compile DBOUT << "a" as 0 && (cout << "a"). The logical AND doesn't evaluate the expression on the right if the expression on the left evaluates to zero or false; because the left-hand expression always evaluates to zero, the right-hand expression is always removed by any compiler worth using except when all optimization is disabled (and even then, obviously unreachable code may still be ignored.)
Here is an example of weird things that will break this macro:
DBOUT << "This is a debug build." << endl, doSomething();
Watch the commas. doSomething() will always be called, regardless of whether or not _DEBUG is defined. This is because the statement is evaluated in release builds as:
(0 && (cout << "This is a debug build." << endl)), doSomething();
// evaluates further to:
false, doSomething();
To use commas with this macro, the comma must be wrapped in parentheses, like so:
DBOUT << "Value of b: " << (a, b) << endl;
Another example:
(DBOUT << "Hello, ") << "World" << endl; // Compiler error on release build
In release builds, this is evaluated as:
(0 && (cout << "Hello, ")) << "World" << endl;
// evaluates further to:
false << "World" << endl;
which causes a compiler error because bool cannot be shifted left by a char pointer unless a custom operator is defined. This syntax also causes additional problems:
(DBOUT << "Result: ") << doSomething() << endl;
// evaluates to:
false << doSomething() << endl;
Just like when the comma was used poorly, doSomething() still gets called, because its result has to be passed to the left-shift operator. (This can only occur when a custom operator is defined that left-shifts a bool by a char pointer; otherwise, a compiler error occurs.)
Do not parenthesize DBOUT << .... If you want to parenthesize a literal integer shift, then parenthesize it, but I'm not aware of a single good reason to parenthesize a stream operator.
How about this? You'd have to check that it actually optimises to nothing in release:
#ifdef NDEBUG
class DebugStream {};
template <typename T>
DebugStream &operator<<(DebugStream &s, T) { return s; }
#else
typedef std::ostream DebugStream;
#endif
You will have to pass the debug stream object as a DebugStream&, not as an ostream&, since in release builds it isn't one. This is an advantage, since if your debug stream isn't an ostream, that means you don't incur the usual runtime penalty of a null stream that supports the ostream interface (virtual functions that actually get called but do nothing).
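For example, a usage sketch building on the definitions above; std::clog stands in here for the OutputDebugString-backed stream, which is an assumption for illustration:

#include <iostream>

#ifdef NDEBUG
DebugStream debug;                   // inert object: every insertion is discarded
#else
std::ostream& debug = std::clog;     // stand-in for the real debug ostream
#endif

void report(int value) {
    debug << "value = " << value << "\n";   // compiles in both configurations
}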
Warning: I just made this up, normally I would do something similar to Neil's answer - have a macro meaning "only do this in debug builds", so that it is explicit in the source what is debugging code, and what isn't. Some things I don't actually want to abstract.
Neil's macro also has the property that it absolutely, definitely, doesn't evaluate its arguments in release. In contrast, even with my template inlined, you will find that sometimes:
debug << someFunction() << "\n";
cannot be optimised to nothing, because the compiler doesn't necessarily know that someFunction() has no side-effects. Of course if someFunction() does have side effects then you might want it to be called in release builds, but that's a peculiar mixing of logging and functional code.
As others have said, the most performant way is to use the preprocessor. Normally I avoid the preprocessor, but this is about the only valid use I have found for it, bar protecting headers.
Normally I want the ability to turn on any level of tracing in release executables as well as debug executables. Debug executables get a higher default trace level, but the trace level can be set by configuration file or dynamically at runtime.
To this end my macros look like
#define TRACE_ERROR if (Debug::testLevel(Debug::Error)) DebugStream(Debug::Error)
#define TRACE_INFO if (Debug::testLevel(Debug::Info)) DebugStream(Debug::Info)
#define TRACE_LOOP if (Debug::testLevel(Debug::Loop)) DebugStream(Debug::Loop)
#define TRACE_FUNC if (Debug::testLevel(Debug::Func)) DebugStream(Debug::Func)
#define TRACE_DEBUG if (Debug::testLevel(Debug::Debug)) DebugStream(Debug::Debug)
The nice thing about using an if statement is that there is no cost for tracing that is not output; the tracing code only gets called if it will be printed.
If you don't want a certain level to appear in release builds, use a compile-time constant in the if statement.
#ifdef NDEBUG
const bool Debug::DebugBuild = false;
#else
const bool Debug::DebugBuild = true;
#endif
#define TRACE_DEBUG if (Debug::DebugBuild && Debug::testLevel(Debug::Debug)) DebugStream(Debug::Debug)
This keeps the iostream syntax, but now the compiler will optimise the if statement out of the code, in release builds.
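The answer doesn't show Debug::testLevel or DebugStream; a minimal sketch of what they might look like, assuming (like scoped_logger earlier) that DebugStream buffers the message and emits it from its destructor, with std::clog as a stand-in sink:

#include <iostream>
#include <sstream>

namespace Debug {
    // Lower values are more severe; messages at or below the threshold are shown.
    enum Level { Error, Info, Loop, Func, Debug };

    inline Level& threshold() { static Level t = Info; return t; }  // settable at runtime
    inline bool testLevel(Level l) { return l <= threshold(); }
}

class DebugStream {
public:
    explicit DebugStream(Debug::Level level) : _level(level) {}
    ~DebugStream() { std::clog << "[" << _level << "] " << _ss.str(); }

    template <typename T>
    DebugStream& operator<<(const T& value) { _ss << value; return *this; }

private:
    Debug::Level _level;
    std::ostringstream _ss;
};

// The TRACE_* macros above then expand to, e.g.:
//   if (Debug::testLevel(Debug::Info)) DebugStream(Debug::Info) << "answer = " << 42 << "\n";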
#ifdef RELEASE
#define DBOUT( x )
#else
#define DBOUT( x ) x
#endif
Just use this in the actual ostream operators themselves. You could even write a single operator for it.
template<typename T> Debugstream& Debugstream::operator<<(T&& t) {
    DBOUT(ostream << std::forward<T>(t);) // where ostream is the internal stream object or type
    return *this;
}
If your compiler can't optimize out empty functions in release mode, then it's time to get a new compiler.
I did of course use rvalue references and perfect forwarding, and there's no guarantee that you have such a compiler. But, you can surely just use a const ref if your compiler is only C++03 compliant.
@iain: Ran out of room in the comment box so posting it here for clarity.
The use of if statements isn't a bad idea! I know that if statements in macros may have some pitfalls so I'd have to be especially careful constructing them and using them. For example:
if (error) TRACE_DEBUG << "error";
else do_something_for_success();
...would end up executing do_something_for_success() if an error occurs and debug-level trace statements are disabled, because the else binds to the inner if statement. However, most coding styles mandate the use of curly braces, which would solve the problem.
if (error)
{
TRACE_DEBUG << "error";
}
else
{
do_something_for_success();
}
In this code fragment, do_something_for_success() is not erroneously executed if debug-level tracing is disabled.
I was just made aware of a bug I introduced. The thing that surprised me is that it compiled. Is it legal to switch on a constant?
Visual Studio 8 and Comeau both accept it (with no warnings).
switch(42) { // simplified version, this wasn't a literal in real life
case 1:
std::cout << "This is of course, imposible" << std::endl;
}
It's not impossible that switching on a constant makes sense. Consider:
void f( const int x ) {
switch( x ) {
...
}
}
Switching on a literal constant would rarely make sense, however. But it is legal.
Edit: Thinking about it, there is a case where switching on a literal makes
perfect sense:
int main() {
switch( CONFIG ) {
...
}
}
where the program was compiled with:
g++ -DCONFIG=42 foo.cpp
Not everything that makes sense to the compiler makes sense!
The following will also compile but makes no sense:
if (false)
{
std::cout << "This is of course, imposible" << std::endl;
}
It's up to us as developers to spot these.
One good reason for this being legal is that the compiler might well be able to resolve the value at compile time, depending on what stage of development you're at.
E.g. you might use something like this for debugging stuff:
int glyphIndex;
...
#if CHECK_INVALID_GLYPH
glyphIndex = -1;
#endif
switch (glyphIndex)
...
The compiler knows for certain that glyphIndex is -1 here, so it's as good as a constant. Alternatively, you might code it like this:
#if CHECK_INVALID_GLYPH
const int glyphIndex = -1;
#else
int glyphIndex = GetGlyph();
#endif
You wouldn't really want to have to change the body of your switch statement just so you could make little changes like this, and the compiler is perfectly capable of rationalising the code to eliminate the parts that will never be executed anyway.
Yes, it's perfectly legal to switch on any integer expression. It's the same as switching on an integer value returned by a function - a construct used quite often.
Yes, but why you'd want to (unless debugging) is another matter.
It's similar to if (0) or while (true).
Yes, it's legal.