I would like to set a debug mode so that it prints the log statements only if the debug mode is on. For example, if I have code like this:
printf("something \n");
.
.
.
perror("something \n");
it should only print if the debug flag is on. I don't want to use "if" statements everywhere.
I think there is a clever way to do this using #define or something similar.
Thanks in advance.
#ifdef _DEBUG // or #ifndef NDEBUG
#define LOG_MSG(...) printf(__VA_ARGS__) // Or simply LOG_MSG(msg) printf(msg)
#else
#define LOG_MSG(...) // Or LOG_MSG(msg)
#endif
In a non-debug build, LOG_MSG yields nothing. Instead of defining it in terms of raw printf, you can have it call your own custom logging function or class method.
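If you also want the disabled form to stay safe inside unbraced if/else and to record where a message came from, a variant along these lines is common (just a sketch; the stderr target and the file/line prefix are my own choices, not part of the answer above):

#include <stdio.h>

#ifdef _DEBUG
// Print a file:line prefix, then the formatted message. The do/while(0)
// wrapper makes the macro behave like a single statement.
#define LOG_MSG(...) \
    do { \
        fprintf(stderr, "%s:%d: ", __FILE__, __LINE__); \
        fprintf(stderr, __VA_ARGS__); \
    } while (0)
#else
#define LOG_MSG(...) do { } while (0) /* expands to a harmless no-op */
#endif

int main(void)
{
    LOG_MSG("value = %d\n", 42); /* printed only when _DEBUG is defined */
    return 0;
}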
Without going into specific libraries or solutions: generally, people make a logger class or function and a single debug flag. The debug function checks this flag before calling printf or cout; then in the rest of your code you simply call your debug function or method.
Here's an example:
#include <iostream>

class MyDebugger
{
private:
bool m_debug;
public:
MyDebugger();
void setDebug(bool debug);
void debug(const char* message);
};
MyDebugger::MyDebugger()
{
m_debug = false;
}
void MyDebugger::setDebug(bool debug)
{
m_debug = debug;
}
void MyDebugger::debug(const char* message)
{
if(m_debug)
{
std::cout << message << std::endl;
}
}
int main(int argc, char** argv)
{
MyDebugger debugger;
debugger.debug("This won't be shown");
debugger.setDebug(true);
debugger.debug("But this will");
return 0;
}
Of course, this is an incredibly naive implementation. Real logger classes have many levels for finer-grained control over how much detail gets printed (levels like error, warning, info, and debug to differentiate the importance of a message). They might also let you log to files as well as stdout. Still, this should give you a general idea.
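For instance, a level-aware variant of the class above might look like this (only a sketch; the LogLevel names and the default level are illustrative):

#include <iostream>

enum class LogLevel { Error = 0, Warning, Info, Debug };

class LevelLogger
{
public:
    void setLevel(LogLevel level) { m_level = level; }

    // Print only messages at or above the configured severity.
    void log(LogLevel level, const char* message)
    {
        if (level <= m_level)
            std::cout << message << std::endl;
    }

private:
    LogLevel m_level = LogLevel::Warning;
};

int main()
{
    LevelLogger logger;
    logger.log(LogLevel::Debug, "Hidden: below the current level");
    logger.log(LogLevel::Error, "Shown: errors always pass");
    return 0;
}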
In GCC, something like
#define debugprint(...) printf(__VA_ARGS__)
You can do a simple C-style macro definition (especially if your compiler is modern enough to support variadic macros, i.e. GCC or VS2005+) that does the printf after checking a debug level, which can be a static global variable.
If you go with a C++-style class similar to what @Chris suggests, I would make the logging function inline to ensure that when logging is disabled you are not wasting time on function calls.
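Combining the two ideas, a sketch of the variadic macro gated on a static debug-level variable (the g_debug_level name is made up here) could be:

#include <stdio.h>

static int g_debug_level = 0; /* 0 = silent; raise it to enable output */

/* Check the level before printing; the branch is cheap, and the whole
   macro can still be compiled out entirely for release builds. */
#define debugprint(level, ...) \
    do { \
        if ((level) <= g_debug_level) \
            printf(__VA_ARGS__); \
    } while (0)

int main(void)
{
    debugprint(1, "hidden while the level is 0\n");
    g_debug_level = 2;
    debugprint(1, "now visible\n");
    return 0;
}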
I have been trying to figure out why this is happening; maybe it is just due to inexperience at this point, but I could really use some help.
When I run my code, which is compiled into a DLL using C++20, I get a failed debug assertion with the expression __acrt_first_block == header.
I narrowed down where the code is failing, but the weird part is that it runs just fine when I change the Init(std::string filePath) function signature to not contain the parameter. The code is below; I hope someone can help.
Logger.h
#pragma once
#include "../Core.h"
#include <memory>
#include <string>
#include "spdlog/spdlog.h"
namespace Ruby
{
class RUBY_API Logger
{
public:
static void Init(std::string filePath);
inline static std::shared_ptr<spdlog::logger>& GetCoreLogger() { return coreLogger; }
inline static std::shared_ptr<spdlog::logger>& GetClientLogger() { return clientLogger; }
private:
static std::shared_ptr<spdlog::logger> coreLogger;
static std::shared_ptr<spdlog::logger> clientLogger;
};
}
Logger.cpp
#include "Logger.h"
#include "spdlog/sinks/stdout_color_sinks.h"
#include "spdlog/sinks/basic_file_sink.h"

namespace Ruby
{
std::shared_ptr<spdlog::logger> Logger::coreLogger;
std::shared_ptr<spdlog::logger> Logger::clientLogger;
void Logger::Init(std::string filePath)
{
std::string pattern{ "%^[%r][%n][%l]: %v%$" };
auto fileSink = std::make_shared<spdlog::sinks::basic_file_sink_mt>(filePath, true);
// Setup the console and file sinks
std::vector<spdlog::sink_ptr> coreSinks;
coreSinks.push_back(std::make_shared<spdlog::sinks::stdout_color_sink_mt>());
coreSinks.push_back(fileSink);
// Bind the sinks to the core logger.
coreLogger = std::make_shared<spdlog::logger>("RUBY", begin(coreSinks), end(coreSinks));
// Set the Patterns for the sinks
coreLogger->sinks()[0]->set_pattern(pattern);
coreLogger->sinks()[1]->set_pattern(pattern);
// Tell spdlog to flush the file loggers on trace or worse message (can be changed if necessary).
coreLogger->flush_on(spdlog::level::trace);
// Set the default level of the logger
coreLogger->set_level(spdlog::level::trace);
// Do the same for the client logger
std::vector<spdlog::sink_ptr> clientSinks;
clientSinks.push_back(std::make_shared<spdlog::sinks::stdout_color_sink_mt>());
clientSinks.push_back(fileSink);
clientLogger = std::make_shared<spdlog::logger>("APP", begin(clientSinks), end(clientSinks));
clientLogger->sinks()[0]->set_pattern(pattern);
clientLogger->sinks()[1]->set_pattern(pattern);
clientLogger->flush_on(spdlog::level::trace);
clientLogger->set_level(spdlog::level::trace);
}
}
Entrypoint.h
#pragma once
#ifdef RB_PLATFORM_WINDOWS
extern Ruby::Application* Ruby::CreateApplication();
int main(int argc, char** argv)
{
Ruby::Logger::Init("../Logs/Recent_Run.txt");
RB_CORE_INFO("Initialized the logger.");
auto app = Ruby::CreateApplication();
app->Run();
delete app;
return 0;
}
#else
#error Ruby only supports windows
#endif // RB_PLATFORM_WINDOWS
For anyone else who runs into a similar problem, here is how I fixed it.
Essentially, the function signature of the Init() function was the problem. The std::string parameter was causing the debug assertion to fire; my best guess as of right now is that it is because of move semantics, but I am still not sure about that part. (A common cause of the __acrt_first_block == header assertion is that the EXE and the DLL each link their own debug CRT heap, so a std::string passed by value across the DLL boundary can be allocated in one module and freed in the other.) So there are a couple of ways that I found to fix this.
Method 1:
Make the parameter a const char*. I don't quite like this approach, as it then relies on C-style strings; if you are trying to write a program in modern C++, this is a huge step backwards.
Method 2:
Make the parameter a const std::string&. Making it a const reference to a string prevents the copy (again, as far as I know), and the assertion no longer fires. I prefer this fix, as it keeps the program in modern C++.
I hope this helps anyone who has similar issues, and be careful with statics and move semantics.
I have two functions in a C library that I am making.
One is a setup function; the other is a function that does some operations. I want the second (operations) function to print an error if the setup function has not been run before it.
What would be the best way to do this?
Here is what I have in mind, but I am not sure if that is how it is done.
The setup function:
void setup_function()
{
#ifndef FUNCTION_SETUP
#define FUNCTION_SETUP
a_init();
b_init();
c_init();
#endif
}
And the operations function:
bool operations()
{
#ifdef FUNCTION_SETUP
try
{
/* My code */
return true;
}
catch (...)
{
Serial.println("Error in operations");
return false;
}
#elif Serial.println("Function not setup. Please use setup_function() in void setup()");
#endif
}
#ifndef only checks whether the macro was defined somewhere for the compiler at compile time; it won't affect anything at runtime.
The best way to do this is through a global variable that changes value once the setup function has executed. If you are defining these functions in a class, you could use a static data member and a static setup function.
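A minimal sketch of that flag-based approach in plain C (the names are illustrative; on Arduino you would print via Serial instead of stderr):

#include <stdbool.h>
#include <stdio.h>

static bool g_is_setup = false; /* set once by setup_function() */

void setup_function(void)
{
    /* a_init(); b_init(); c_init(); */
    g_is_setup = true;
}

bool operations(void)
{
    if (!g_is_setup) {
        fprintf(stderr, "Function not setup. Please call setup_function() first.\n");
        return false;
    }
    /* real work goes here */
    return true;
}

int main(void)
{
    operations();      /* prints the error and returns false */
    setup_function();
    operations();      /* now succeeds */
    return 0;
}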
C has a preprocessing directive, #error, that can be used to stop compilation. However, the compilation unit is processed in order at compile time, not run; whether setup has happened is a runtime property, and some programs just have to run to find out (which is related to the halting problem).
The idiomatic way to do runtime checks is with assert, as in this C99 example. (You would #include <cassert> in C++.)
#include <stdbool.h>
#include <assert.h>
static bool is_setup; // Can be optimized away with -D NDEBUG.
static void setup_function(void) {
assert(!is_setup && (is_setup = true));
}
static bool operations(void) {
assert(is_setup);
return true;
}
int main(void) {
//setup_function(); // Triggers `assert` if omitted.
operations();
return 0;
}
However, C++ has techniques that encourage RAII; when possible, one should generally use this to set up an object on acquisition and manage the object throughout its lifetime.
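A small sketch of the RAII idea (the Subsystem name and the commented-out init calls are placeholders): construction performs the setup, so any code holding the object cannot run before setup has happened.

#include <iostream>

// Setup happens in the constructor, so any code holding a Subsystem
// object is guaranteed the initialization already ran.
class Subsystem
{
public:
    Subsystem()  { std::cout << "setup done\n"; /* a_init(); b_init(); c_init(); */ }
    ~Subsystem() { /* teardown */ }

    bool operations()
    {
        // No flag check needed: construction already proved setup ran.
        return true;
    }
};

int main()
{
    Subsystem subsystem; // cannot reach operations() without this
    return subsystem.operations() ? 0 : 1;
}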
It's been years since I've coded in C/C++, so sorry about the newbie-ish question. I have a codebase that compiles differently based upon configurations that are defined via #defines, which can be provided as args to the makefile. Is there a way to encode these #defines so I can look at an executable and see what the defines were? E.g.:
int main() {
#ifdef CONFIG_A
init_config_a();
#endif
#ifdef CONFIG_B
init_config_b();
#endif
}
#ifdef CONFIG_A
void init_config_a() {
// do something
}
#endif
#ifdef CONFIG_B
void init_config_b() {
// do something for B
}
#endif
How can I tell if a given executable was created with config A or config B? One hack is to look for symbols that are only compiled in under certain definitions (e.g. init_config_a), but that's pretty ugly.
EDIT: Sorry, I neglected an important piece of info: the program is actually compiled to run on an embedded system, so I can't easily just add a switch or some other mechanism to run the program locally.
Well, your question is not really precise about how you want to get the information once you have the binary. A solution that does not involve disassembly would be to have a struct with that information and initialize it when you want to print it. Perhaps something as trivial as this:
#include <stdio.h>
#include <string.h>
struct buildinfo {
int CONFIG_A;
int CONFIG_B;
};
void get_build_info(struct buildinfo *info)
{
if(info == NULL)
return;
memset(info, 0, sizeof *info);
#ifdef CONFIG_A
info->CONFIG_A = 1;
#endif
#ifdef CONFIG_B
info->CONFIG_B = 1;
#endif
}
int main(int argc, char **argv)
{
if(argc == 2 && strcmp(argv[1], "-v") == 0)
{
struct buildinfo info;
get_build_info(&info);
printf("build info: CONFIG_A: %s, CONFIG_B: %s\n",
info.CONFIG_A ? "yes" : "no",
info.CONFIG_B ? "yes" : "no");
return 0;
}
...
return 0;
}
If you don't want to analyse the binary, you can then execute ./yourprogram -v and see the information printed on screen.
The best way would be to name the binary based upon the define used.
If you want to tell whether the binary was built with CONFIG_A or CONFIG_B just by inspecting it, one possible approach is the following.
Put a signature that depends on the configuration at a specific address (it will work at any address too), e.g.:
int main() {
#ifdef CONFIG_A
// this sign can be put at specific address with #pragma
const char sign[]="CONFIG_A";
init_config_a();
#elif defined(CONFIG_B) // only one shall be defined at a time
// this sign can be put at specific address with #pragma
const char sign[]="CONFIG_B";
init_config_b();
#endif
}
When you open the binary in a text editor, you will be able to see the sign in the ASCII view.
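Note that an unused local array may be optimized away entirely, so the signature might never reach the binary. Here is a sketch of a more robust placement; the volatile qualifier and the BUILD_SIGN prefix are my own assumptions about what keeps the bytes findable, and behavior varies by toolchain:

#include <stdio.h>

/* File scope plus volatile makes it much less likely the optimizer drops
   the bytes, so `strings` (or a text editor) can find them in the image. */
#if defined(CONFIG_A)
static volatile const char build_sign[] = "BUILD_SIGN:CONFIG_A";
#elif defined(CONFIG_B)
static volatile const char build_sign[] = "BUILD_SIGN:CONFIG_B";
#else
static volatile const char build_sign[] = "BUILD_SIGN:UNKNOWN";
#endif

int main(void)
{
    (void)build_sign; /* reference it so it is not reported as unused */
    return 0;
}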
Suppose I have a debug function that is defined like this:
namespace debug {
void report(std::string message);
}
Can I pull some compiler trick that will, when compiled, safely replace every call with a no-op? I don't want to call an empty function; I want to not call the function at all.
If it is possible... can I make a namespace "disappear", too?
Debug executables will be compiled with the symbol DEBUGEXECUTABLE defined (I can imagine some tricks with macros).
You can do something like this:
namespace debug
{
void report(std::string message); // ToDo - define this somewhere
}
namespace release
{
template <class Y>
void report(Y&&){} // Intentionally do nothing
}
#if defined(DEBUGEXECUTABLE)
namespace foo = debug; // set foo to the debug namespace
#else
namespace foo = release; // set foo to the release namespace
#endif
Then use foo::report in your code. I like this since it minimises the use of preprocessor macros and keeps any compiler errors broadly similar across the debug and release configurations.
Passing an r-value reference in release mode will allow the compiler to optimise out any anonymous temporaries. For the debug family of functions, you ought to pass strings by constant reference to avoid any possibility of a value copy being taken: void report(const std::string& message);
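A sketch of how the alias looks at a call site (the debug::report definition here is inline only to keep the example self-contained; the answer above leaves it to be defined elsewhere):

#include <iostream>
#include <string>

namespace debug
{
    // Example definition for the sketch only.
    void report(const std::string& message) { std::cerr << message << '\n'; }
}

namespace release
{
    template <class Y>
    void report(Y&&) {} // intentionally do nothing
}

#if defined(DEBUGEXECUTABLE)
namespace foo = debug;
#else
namespace foo = release;
#endif

int main()
{
    foo::report("loading config"); // should optimize away in release builds
    return 0;
}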
This is as optimal as I can make it.
We define DEBUG to get a report that does something, and leave it out to have report do nothing (or use whatever symbol your build process uses to distinguish debug and opt builds from production code):
#define DEBUG
We create two namespaces. One is called debug, the other release. In each we create an anonymous namespace, which makes it easy for the compiler to detect and discard unused functions:
#include <iostream>
#include <string>

namespace debug {
namespace {
void report(std::string const& s) {
std::cerr << s << "\n"; // sample implementation
}
}
}
namespace release {
namespace {
template<class T>
void report(T&&) {} // Or `class...Ts` and `Ts&&...` to handle more than 1 argument optionally.
}
}
Here we create a namespace alias that differs in release and debug:
#ifdef DEBUG
namespace report=debug;
#else
namespace report=release;
#endif
And our main:
int main() {
report::report("hello");
}
We can see the results of this under gcc 4.9, with DEBUG defined and not, over at godbolt. As you can hopefully see, when DEBUG is not defined, the compiler produces nothing but an empty main.
If it is defined, it compiles to what you'd expect.
#include <string>

namespace debug {
#ifdef DEBUGEXECUTABLE
void report(std::string message);
#else
inline void report(std::string message)
{
//nop - compiler should optimize it
}
#endif
}
I am adding unit tests to a project in Qt and am looking to use QTestLib. I have set up the tests and they are running fine.
The issue is that in the project we have overridden qDebug() to output to our own log file. This works great when running the app; the problem is that when I am testing the classes, they will sometimes start logging, which is then sent to the output window. The result is a complete disaster that is next to impossible to read, as our logs get mixed in with the QTest output.
I am wondering if there is a way to suppress the qDebug() output, or at least move it somewhere else. I have tried adding #define QT_NO_DEBUG_OUTPUT and also using qInstallMsgHandler(messageOutput); to redirect or prevent the output, but neither had any effect.
The solution given by @Kuba works in some cases, but not when used in conjunction with QTest::qExec(&test, argc, argv) in the main method to run a number of tests. In that case the only way I found to disable the qDebug() output is for each of the test classes to register a new message handler in their void initTestCase() slot.
For example
void noMessageOutput(QtMsgType, const char *)
{}
int main(int argc,char* argv[])
{
qInstallMsgHandler(noMessageOutput);
tst_Class1 t1;
tst_Class2 t2;
QTest::qExec(&t1,argc,argv);
QTest::qExec(&t2,argc,argv);
}
This will still show the debug output from tst_Class1, Class1, tst_Class2, and Class2, because QTest::qExec() installs its own message handler. To prevent this, you must explicitly disable the output in each of the test classes:
class tst_Class1
{
//class stuff
private slots:
void initTestCase();
//test cases
};
void tst_Class1::initTestCase()
{
qInstallMsgHandler(noMessageOutput);
}
class tst_Class2
{
//class stuff
private slots:
void initTestCase();
//test cases
};
void tst_Class2::initTestCase()
{
qInstallMsgHandler(noMessageOutput);
}
If you wish to see the debug output from a subset of the classes, remove the qInstallMsgHandler() line from those classes and it will come through.
The QT_NO_DEBUG_OUTPUT define must go into your project files or makefiles and must be present for every file you compile. You must then recompile your application (not Qt itself, of course). This macro's presence on the compiler's command line guarantees that the first time the QDebug header is included by any code, qDebug will be redefined to a no-op. That's what this macro does: it disables qDebug if it is present when the <QtCore/qdebug.h> header gets included, whether directly by you or indirectly through other headers.
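With qmake, for example, the define can be added project-wide with a single line in the .pro file (then rerun qmake and do a full rebuild):

DEFINES += QT_NO_DEBUG_OUTPUT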
Using qInstallMsgHandler certainly works at suppressing debug output.
Below is a self-contained example.
#if 0
// Enabling this section disables all debug output from non-Qt code.
#define QT_NO_DEBUG_OUTPUT
#endif
#include <QtCore/QDebug>
void noMessageOutput(QtMsgType, const char *)
{}
int main(int argc, char *argv[])
{
qDebug() << "I'm shown";
qInstallMsgHandler(noMessageOutput);
qDebug() << "I'm hidden";
}