Enable Boost.Log only on debug - c++

I need a logger for debugging purposes and I'm using Boost.Log (1.54.0, with a patch from the boost.org homepage).
It all works fine; I've created a macro like this:
#define LOG_MESSAGE( lvl ) BOOST_LOG_TRIVIAL( lvl )
Now, is there a way for LOG_MESSAGE( lvl ) to expand to BOOST_LOG_TRIVIAL( lvl ) only in debug mode and to be ignored in release?
For example:
LOG_MESSAGE( critical ) << "If I read this message we're in debug mode";
Edit:
My first attempt is to create a nullstream... I think that in release mode the compiler will optimize it away...
#if !defined( NDEBUG )
#include <boost/log/trivial.hpp>
#define LOG_MESSAGE( lvl ) BOOST_LOG_TRIVIAL( lvl )
#else
#if defined( __GNUC__ )
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-value"
#endif
#include <ostream> // <iosfwd> only forward-declares std::ostream; deriving from it needs the full definition
struct nullstream : public std::ostream {
nullstream() : std::ios(0), std::ostream(0) {}
};
static nullstream g_nullstream;
#define LOG_MESSAGE( lvl ) g_nullstream
#if defined( __GNUC__ )
#pragma GCC diagnostic pop
#endif
#endif

The severity level of the log entry merely acts as a filter for sinks. The sink decides what to do with the message (print it or not) based on the severity level, but the message is still sent.
If you want to not send the message at all, then you'll need to redefine LOG_MESSAGE to something which actually does nothing. There might be something in the Boost library for this; otherwise, you'll have to write your own. Perhaps this will be a start:
class NullLogger
{
public:
template <typename SeverityT> NullLogger (SeverityT) {}
template <typename Val> NullLogger& operator<< (const Val&) { return *this; }
};
...and then:
#define LOG_MESSAGE( lvl ) NullLogger( lvl )
Note however that even though nothing is being done with the log message or the expressions that make it up, the expressions are still evaluated. If some of these expressions are expensive, you will still take the performance hit. For example:
LOG_MESSAGE (debug) << SomeSuperExpensiveFunction();
Even if you are using the NullLogger above, SomeSuperExpensiveFunction() is still going to be called.
I would suggest as an alternative adding a flag that is evaluated at runtime, and decide at runtime whether or not to do the logging:
if (mLogStuff)
{
LOG_MESSAGE (debug) << SomeSuperExpensiveFunction();
}
boolean comparisons are super cheap, and you may find one day in the future that the ability to turn logging on and off could be super handy. Also, doing this means you don't need to add yet another #define, which is always a good thing.
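If you would rather keep the call sites unchanged, the flag check can also be folded into the macro itself. A minimal sketch, assuming mLogStuff is the runtime flag suggested above (the if/else form avoids dangling-else surprises, and the streamed expressions are not evaluated while logging is off):
#define LOG_MESSAGE( lvl ) if (!mLogStuff) {} else BOOST_LOG_TRIVIAL( lvl )
With this, LOG_MESSAGE(debug) << SomeSuperExpensiveFunction(); never calls SomeSuperExpensiveFunction() while mLogStuff is false.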

I like John's NullLogger class. The only change I would make is as follows
#define LOG_MESSAGE(lvl) while (0) NullLogger (lvl)
Unfortunately this may generate warnings, but I would hope a decent compiler would then be able to eliminate all the associated logging code.
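To see why, this is roughly what a call expands to under that definition. The while (0) condition is always false, so the body, including the expensive call, is never executed and the optimizer can discard it (note that debug must still name a severity value here, which is the Boost dependency a later answer removes):
// LOG_MESSAGE(debug) << SomeSuperExpensiveFunction(); becomes:
while (0) NullLogger (debug) << SomeSuperExpensiveFunction();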

It is possible to achieve this without defining a NullLogger or similar:
#define TEST_LOG(lvl) \
if constexpr(boost::log::trivial::lvl >= boost::log::trivial::MAX_LOG_LEVEL) \
BOOST_LOG_TRIVIAL(lvl)
Then compile with -DMAX_LOG_LEVEL=info to statically deactivate all log messages below info.
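If the macro should also compile when the flag is not passed at all, you can give it a default. A small sketch, assuming trace is the lowest severity, so that by default nothing is filtered out:
#ifndef MAX_LOG_LEVEL
#define MAX_LOG_LEVEL trace
#endif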
Also note that with a properly implemented macro (like TEST_LOG, but also like BOOST_LOG_TRIVIAL) expensive functions are not evaluated when the message is filtered out:
// We either log with trace or warning severity, so this filter
// does not let any message pass
logging::core::get()->set_filter(
logging::trivial::severity >= logging::trivial::error);
// Filtered at compile time
{
auto start = std::chrono::steady_clock::now();
for (size_t i = 0; i < 1000 * 1000; i++) {
TEST_LOG(trace) << "Hello world!";
}
auto end = std::chrono::steady_clock::now();
std::cerr << std::chrono::duration<double>(end-start).count() << "s" << std::endl;
// Prints: 1.64e-07s
}
// Filtered at compile time
{
auto start = std::chrono::steady_clock::now();
for (size_t i = 0; i < 1000 * 1000; i++) {
TEST_LOG(trace) << ComputeExpensiveMessage();
}
auto end = std::chrono::steady_clock::now();
std::cerr << std::chrono::duration<double>(end-start).count() << "s" << std::endl;
// Prints: 8.5e-08s
}
// Filtered at run time
{
auto start = std::chrono::steady_clock::now();
for (size_t i = 0; i < 1000 * 1000; i++) {
TEST_LOG(warning) << "Hello world!";
}
auto end = std::chrono::steady_clock::now();
std::cerr << std::chrono::duration<double>(end-start).count() << "s" << std::endl;
// Prints: 0.249306s
}
// Filtered at run time
{
auto start = std::chrono::steady_clock::now();
for (size_t i = 0; i < 1000 * 1000; i++) {
TEST_LOG(warning) << ComputeExpensiveMessage();
}
auto end = std::chrono::steady_clock::now();
std::cerr << std::chrono::duration<double>(end-start).count() << "s" << std::endl;
// Prints: 0.250101s
}

John's NullLogger class doesn't compile correctly on MSVC, and it still requires a Boost dependency for SeverityT, which is actually not needed.
I propose the following change to the class:
class NullLogger
{
public:
template <typename Val> NullLogger& operator<< (const Val&) { return *this; };
};
#define BOOST_LOG_TRIVIAL(lvl) NullLogger()
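For completeness, a minimal sketch of how this could be wired back into the asker's LOG_MESSAGE macro; the use of NDEBUG to detect release builds is an assumption, adapt it to whatever your build system defines:
#if defined( NDEBUG )
class NullLogger
{
public:
template <typename Val> NullLogger& operator<< (const Val&) { return *this; }
};
#define LOG_MESSAGE( lvl ) NullLogger()  // lvl is discarded entirely
#else
#include <boost/log/trivial.hpp>
#define LOG_MESSAGE( lvl ) BOOST_LOG_TRIVIAL( lvl )
#endif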


Clang issues -Wunused-value depending on whether the code is called from a macro

I use a special assertion macro called CHECK. It is implemented like this:
#define CHECK(condition) check(condition).ok ? std::cerr : std::cerr
The user can choose to provide additional information that is printed if the assertion fails:
CHECK(a.ok());
CHECK(a.ok()) << a.to_string();
Notice the ternary operator in the macro definition. It ensures that a.to_string() is executed only when the assertion fails. So far so good. I've been using this (and other similar) macros for a long time without any problems.
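To make the mechanism explicit, the second call above expands (conceptually) to the following; since << binds tighter than ?:, the insertion belongs to the false branch only, so a.to_string() runs just when the check fails:
check(a.ok()).ok ? std::cerr : std::cerr << a.to_string();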
But recently I found that clang issues an “expression result unused [-Wunused-value]” warning about the second std::cerr if CHECK is used inside another macro:
#define DO(code) do { code } while(0)
int main() {
do { CHECK(2 * 2 == 4); } while(0); // no warning
DO( CHECK(2 * 2 == 4); ); // warning
}
Full example: https://godbolt.org/z/5bfnEGqsn.
This makes no sense to me. Why would this diagnostic depend on whether the code was expanded from a macro or not? GCC issues no warnings in either case.
Two questions:
Is there any reason for such behavior or should I file this as clang bug?
How can I suppress this without disabling “-Wunused-value” altogether? I've tried [[maybe_unused]] and __attribute__((unused)) but they don't seem to work on statements.
I don't think what I suggest here is a good solution, but you could change your code so that both branches of the ternary actually use std::cerr, by changing check(condition).ok ? std::cerr : std::cerr to check(condition).ok ? std::cerr << "" : std::cerr << "":
#include <iostream>
struct CheckResult {
CheckResult(bool ok_arg) : ok(ok_arg) {}
~CheckResult() { if (!ok) abort(); }
bool ok;
};
inline CheckResult check(bool ok) {
if (!ok) std::cerr << "Assertion failed!\n";
return CheckResult(ok);
}
#define CHECK(condition) \
check(condition).ok ? std::cerr << "" : std::cerr << ""
#define DO(code) \
do { code } while(0)
int main() {
do { CHECK(2 * 2 == 4); } while(0);
DO( CHECK(2 * 2 == 4); );
}
Another thing you could do is use a function that returns std::cerr:
#include <iostream>
struct CheckResult {
CheckResult(bool ok_arg) : ok(ok_arg) {}
~CheckResult() { if (!ok) abort(); }
bool ok;
};
inline CheckResult check(bool ok) {
if (!ok) std::cerr << "Assertion failed!\n";
return CheckResult(ok);
}
[[maybe_unused]] inline std::ostream & get_ostream() {
return std::cerr;
}
#define CHECK(condition) \
check(condition).ok ? get_ostream() : get_ostream()
#define DO(code) \
do { code } while(0)
int main() {
do { CHECK(2 * 2 == 4); } while(0);
DO( CHECK(2 * 2 == 4); );
}
The [[maybe_unused]] here is not about the returned value but about the function itself, in case you later change your code so that the function is not used under certain conditions (it may not be needed here).
My major concern about your approach is this statement:
Notice the ternary operator in macro definition. It ensures that a.to_string() is executed only when the assertion fails.
Without reading the documentation, just looking at CHECK(a.ok()) << a.to_string();, no one would assume that a.to_string() will only be executed if the assertion fails.
From the standpoint of code review or collaboration, this can be really problematic.

Handle std::thread::hardware_concurrency()

In my question about std::thread, I was advised to use std::thread::hardware_concurrency(). I read somewhere (which I cannot find now; it seemed like a local code repository or something) that this feature is not implemented for versions of g++ prior to 4.8.
As a matter of fact, I was in the same position as this user: the function simply returns 0. I found a user implementation in this answer. Comments on whether this answer is good or not are welcome!
So I would like to do this in my code:
unsigned int cores_n;
#if g++ version < 4.8
cores_n = my_hardware_concurrency();
#else
cores_n = std::thread::hardware_concurrency();
#endif
However, I could not find a way to achieve this. What should I do?
There is another way than using the GCC Common Predefined Macros: check whether std::thread::hardware_concurrency() returns zero, which means the feature is not (yet) implemented.
unsigned int hardware_concurrency()
{
unsigned int cores = std::thread::hardware_concurrency();
return cores ? cores : my_hardware_concurrency();
}
You may be inspired by awgn's source code (GPL v2 licensed) to implement my_hardware_concurrency():
#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>
auto my_hardware_concurrency()
{
std::ifstream cpuinfo("/proc/cpuinfo");
// Count the whitespace-separated tokens in /proc/cpuinfo that equal "processor"
// (one such token appears per logical CPU on Linux).
return std::count(std::istream_iterator<std::string>(cpuinfo),
std::istream_iterator<std::string>(),
std::string("processor"));
}
Based on the Common Predefined Macros link, kindly provided by Joachim, I did:
int p;
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8) // GCC 4.8 or newer
const int P = std::thread::hardware_concurrency();
p = (trees_no < P) ? trees_no : P;
std::cout << P << " concurrent threads are supported.\n";
#else
const int P = my_hardware_concurrency();
p = (trees_no < P) ? trees_no : P;
std::cout << P << " concurrent threads are supported.\n";
#endif

Is it okay to do "#ifdef DEBUG( ... ) __VA_ARGS__"?

Global.h
#ifndef GLOBAL_H
# define GLOBAL_H
#define DEBUG
#ifdef DEBUG
# define IF_DEBUG( ... ) __VA_ARGS__
#else
# define IF_DEBUG( ... )
#endif /* DEBUG */
#endif /* GLOBAL_H */
Main.cpp
#include <string>
#include <iostream>
#include "Global.h"
int main() {
int A = 1;
int B = 2;
int C = 0;
IF_DEBUG(
std::cout << "\nStep 1> Calculating...\n";
)
C = A + B;
// DO WHATEVER
IF_DEBUG(
std::cout << "\nStep n> ...\n";
)
// ...
std::cout << C << std::endl;
// Note: I could also do some operations within the IF_DEBUG macro.
IF_DEBUG(
int X = 10;
int Y = 5;
int Z = X / Y;
std::cout << Z << std::endl;
)
IF_DEBUG(
std::cout << "\nDebugged! This program has been paused. Enter any key to continue!\n";
::getchar();
)
return 0;
}
Do you see how I defined IF_DEBUG in the Global header file (Global.h) and how I constantly used it in the Main source file (Main.cpp) for debugging purposes?
Is it okay and safe to do that?
I am asking because I am unsure. When I showed this to my friend, he said it's "bad" to do that, so now I'm not sure.
This is a very common and useful trick. But it's better not to have the #define DEBUG in the source code. You can define it in the compile command line instead. g++ -DDEBUG -c file.cpp will compile the code as if DEBUG was defined.
If you're using a Makefile you can add it to the CPPFLAGS (C Preprocessor Flags) variable: CPPFLAGS=-DDEBUG.
If you're using an IDE try to find the C Preprocessor Flags in the project settings.
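As an illustration, here is a sketch of the same Global.h with the hard-coded #define DEBUG removed, so the switch comes only from the command line (g++ -DDEBUG ...) or the IDE's preprocessor settings:
#ifndef GLOBAL_H
# define GLOBAL_H
#ifdef DEBUG
# define IF_DEBUG( ... ) __VA_ARGS__
#else
# define IF_DEBUG( ... )
#endif /* DEBUG */
#endif /* GLOBAL_H */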

Multi-threaded performance std::string

We are running some code on a project that uses OpenMP and I've run into something strange. I've included parts of some play code that demonstrates what I see.
The tests compare calling a function with a const char* argument with a std::string argument in a multi-threaded loop. The functions essentially do nothing and so have no overhead.
What I do see is a major difference in the time it takes to complete the loops. For the const char* version doing 100,000,000 iterations the code takes 0.075 seconds to complete compared with 5.08 seconds for the std::string version. These tests were done on Ubuntu-10.04-x64 with gcc-4.4.
My question is basically whether this is solely due to the dynamic allocation of std::string, and why in this case it can't be optimized away, since it is const and can't change?
Code below and many thanks for your responses.
Compiled with: g++ -Wall -Wextra -O3 -fopenmp string_args.cpp -o string_args
#include <iostream>
#include <map>
#include <string>
#include <stdint.h>
// For wall time
#ifdef _WIN32
#include <time.h>
#else
#include <sys/time.h>
#endif
namespace
{
const int64_t g_max_iter = 100000000;
std::map<const char*, int> g_charIndex = std::map<const char*,int>();
std::map<std::string, int> g_strIndex = std::map<std::string,int>();
class Timer
{
public:
Timer()
{
#ifdef _WIN32
m_start = clock();
#else /* linux & mac */
gettimeofday(&m_start,0);
#endif
}
float elapsed()
{
#ifdef _WIN32
clock_t now = clock();
const float retval = float(now - m_start)/CLOCKS_PER_SEC;
m_start = now;
#else /* linux & mac */
timeval now;
gettimeofday(&now,0);
const float retval = float(now.tv_sec - m_start.tv_sec) + float((now.tv_usec - m_start.tv_usec)/1E6);
m_start = now;
#endif
return retval;
}
private:
// The type of this variable is different depending on the platform
#ifdef _WIN32
clock_t
#else
timeval
#endif
m_start; ///< The starting time (implementation dependent format)
};
}
bool contains_char(const char * id)
{
if( g_charIndex.empty() ) return false;
return (g_charIndex.find(id) != g_charIndex.end());
}
bool contains_str(const std::string & name)
{
if( g_strIndex.empty() ) return false;
return (g_strIndex.find(name) != g_strIndex.end());
}
void do_serial_char()
{
int found(0);
Timer clock;
for( int64_t i = 0; i < g_max_iter; ++i )
{
if( contains_char("pos") )
{
++found;
}
}
std::cout << "Loop time: " << clock.elapsed() << "\n";
++found;
}
void do_parallel_char()
{
int found(0);
Timer clock;
#pragma omp parallel for
for( int64_t i = 0; i < g_max_iter; ++i )
{
if( contains_char("pos") )
{
++found;
}
}
std::cout << "Loop time: " << clock.elapsed() << "\n";
++found;
}
void do_serial_str()
{
int found(0);
Timer clock;
for( int64_t i = 0; i < g_max_iter; ++i )
{
if( contains_str("pos") )
{
++found;
}
}
std::cout << "Loop time: " << clock.elapsed() << "\n";
++found;
}
void do_parallel_str()
{
int found(0);
Timer clock;
#pragma omp parallel for
for( int64_t i = 0; i < g_max_iter ; ++i )
{
if( contains_str("pos") )
{
++found;
}
}
std::cout << "Loop time: " << clock.elapsed() << "\n";
++found;
}
int main()
{
std::cout << "Starting single-threaded loop using std::string\n";
do_serial_str();
std::cout << "\nStarting multi-threaded loop using std::string\n";
do_parallel_str();
std::cout << "\nStarting single-threaded loop using char *\n";
do_serial_char();
std::cout << "\nStarting multi-threaded loop using const char*\n";
do_parallel_char();
}
My question is basically whether this is solely due to the dynamic allocation of std::string and why in this case that can't be optimized away since it is const and can't change?
Yes, it is due to the allocation and copying of a std::string on every iteration.
A sufficiently smart compiler could potentially optimize this away, but it is unlikely to happen with current optimizers. Instead, you can hoist the string out of the loop yourself:
void do_parallel_str()
{
int found(0);
Timer clock;
std::string const str = "pos"; // you can even make it static, if desired
#pragma omp parallel for
for( int64_t i = 0; i < g_max_iter; ++i )
{
if( contains_str(str) )
{
++found;
}
}
//clock.stop(); // Or something to that effect, so you don't include
// any of the expressions below (such as outputting "Loop time: ") in the timing.
std::cout << "Loop time: " << clock.elapsed() << "\n";
++found;
}
Does changing:
if( contains_str("pos") )
to:
static const std::string str = "pos";
if( contains_str(str) )
change things much? My current best guess is that the implicit std::string constructor call on every iteration introduces a fair bit of overhead, and while optimising it away is possible, it is still a sufficiently hard problem, I suspect.
std::string (in your case a temporary) requires dynamic allocation, which is a very slow operation compared to everything else in your loop. There are also old standard library implementations that use copy-on-write (COW), which is also slow in a multi-threaded environment. Having said that, there is no reason why the compiler cannot optimize away the temporary string creation and the whole contains_str call, unless you have some side effects there. Since you didn't provide the implementation of that function, it's impossible to say whether it could be completely optimized away.
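To make the cost concrete, here is roughly what each iteration of the std::string loop does; whether the heap allocation really happens depends on the library (the COW std::string of that GCC era allocates, while a modern small-string-optimized implementation may not):
for( int64_t i = 0; i < g_max_iter; ++i )
{
    std::string tmp("pos");   // construct a temporary from the literal
    contains_str(tmp);        // map lookup against the temporary
}                             // temporary destroyed at the end of every iteration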

What is good practice for generating verbose output?

What is good practice for generating verbose output? Currently, I have a function
bool verbose;
int setVerbose(bool v)
{
errormsg = "";
verbose = v;
if (verbose == v)
return 0;
else
return -1;
}
and whenever I want to generate output, I do something like
if (verbose)
std::cout << "deleting interp" << std::endl;
However, I don't think that's very elegant, so I wonder what would be a good way to implement this verbosity switch?
The simplest way is to create a small class as follows (here is a wide-character version, but you can easily change it to a single-byte version):
#include <sstream>
#include <boost/format.hpp>
#include <iostream>
using namespace std;
enum log_level_t {
LOG_NOTHING,
LOG_CRITICAL,
LOG_ERROR,
LOG_WARNING,
LOG_INFO,
LOG_DEBUG
};
// Global threshold referenced by formatted_log_t below; it can be changed at runtime.
log_level_t GLOBAL_LEVEL = LOG_DEBUG;
namespace log_impl {
class formatted_log_t {
public:
formatted_log_t( log_level_t level, const wchar_t* msg ) : fmt(msg), level(level) {}
~formatted_log_t() {
// GLOBAL_LEVEL is a global variable and could be changed at runtime
// Any customization could be here
if ( level <= GLOBAL_LEVEL ) wcout << level << L" " << fmt << endl;
}
template <typename T>
formatted_log_t& operator %(T value) {
fmt % value;
return *this;
}
protected:
log_level_t level;
boost::wformat fmt;
};
}//namespace log_impl
// Helper function. Class formatted_log_t will not be used directly.
template <log_level_t level>
log_impl::formatted_log_t log(const wchar_t* msg) {
return log_impl::formatted_log_t( level, msg );
}
The helper function log was made a template to get a nice call syntax. It can then be used in the following way:
int main ()
{
// Log level is clearly separated from the log message
log<LOG_DEBUG>(L"TEST %3% %2% %1%") % 5 % 10 % L"privet";
return 0;
}
You can change the verbosity level at runtime by changing the global GLOBAL_LEVEL variable.
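For example (assuming the GLOBAL_LEVEL definition shown alongside the class above):
GLOBAL_LEVEL = LOG_WARNING;                   // only warnings and more severe messages are printed now
log<LOG_DEBUG>(L"not printed");
log<LOG_ERROR>(L"printed, value = %1%") % 42;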
#include <iostream>
int threshold = 3;
// A stream buffer that does nothing: its default overflow() reports failure,
// so anything written to a stream using it is simply discarded.
class mystreambuf: public std::streambuf
{
};
mystreambuf nostreambuf;
std::ostream nocout(&nostreambuf);
#define log(x) ((x >= threshold)? std::cout : nocout)
int main()
{
log(1) << "No hello?" << std::endl; // Not printed on console, too low log level.
log(5) << "Hello world!" << std::endl; // Will print.
return 0;
}
You could use log4cpp
You can wrap your functionality in a class that supports the << operator which allows you to do something like
class Trace {
public:
enum { Enable, Disable } state;
// ...
operator<<(...)
};
Then you can do something like
trace << Trace::Enable;
trace << "deleting interp"
1. If you are using g++ you could use the -D flag, which allows the compiler to define a macro of your choice on the command line (for instance g++ -DDEBUG_FLAG ...).
For instance:
#ifdef DEBUG_FLAG
printf("My error message");
#endif
2. I agree this isn't elegant either, so to make it a bit nicer:
#include <cstdarg>
#include <cstdio>
void verbose(const char * fmt, ... )
{
#ifdef DEBUG_FLAG
va_list args;          /* Used as a pointer to the next variable argument. */
va_start( args, fmt ); /* Initialize the pointer to the arguments. */
vprintf( fmt, args );  /* vprintf accepts the va_list directly. */
va_end( args );
#endif
}
That you could use like printf:
verbose("Error number %d\n", errno);
3. A third solution, easier and more C++/Unix-like, is to pass an argument to your program that is used - like the macro earlier - to initialize a particular variable (which could be a global const).
Example:
$ ./myprogram -v
if (option_given('v')) verbose = true; // pseudocode - see the sketch below
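A minimal sketch of that approach using POSIX getopt (the option letter and the global flag are illustrative):
#include <iostream>
#include <unistd.h>
bool verbose = false;
int main(int argc, char* argv[])
{
    int opt;
    while ((opt = getopt(argc, argv, "v")) != -1)
        if (opt == 'v') verbose = true;   // ./myprogram -v switches verbose output on
    if (verbose) std::cout << "verbose mode enabled\n";
    return 0;
}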