Suppressing mexPrintf in MATLAB mex code - C++

I have mex code that I call from my MATLAB scripts. To debug the code I added a lot of mexPrintf statements, but now, for timing purposes, I don't want any I/O taking place during the mex call (since I/O takes a lot of time). What's the easiest and best way to suppress the mexPrintf calls so that those statements are not executed at all, without having to delete or comment them out? (I may need these debug statements later on, and I don't want to keep going through the cycle of modifying and rebuilding my code again and again.)
Is there any compiler switch that can do the trick? or some preprocessor statement?
Thanks!

You cannot turn mexPrintf off from the outside; you need to modify your code once. Define e.g. a DEBUG flag to decide when you want to print things and when not. For example, with the normal printf function:
#include <stdio.h>
#include <stdlib.h>
//#define DEBUG   // uncomment to enable debug output
#ifdef DEBUG
#define MPRINT(...) printf(__VA_ARGS__)
#else
#define MPRINT(...)
#endif
int main()
{
MPRINT("%d\n", 5);
}
Nothing is printed if you run it now. But if you uncomment the #define DEBUG statement, you get 5 printed.
Alternatively, you can wrap every mexPrintf call in such a clause:
#ifdef DEBUG
mexPrintf(...);
#endif
Again, nothing will be printed if DEBUG is not defined. But that is much more work.
You can also do a similar thing without recompiling your mex file by using a normal if statement and passing a verbose parameter to the mex file. However, this will still have some impact on performance if you execute the if statement too often. So go for the DEBUG macro - that is the standard way to do it.

Related

Macro which will not compile the function if not defined

I am currently using this to show debug output when in debug mode:
#ifdef _DEBUG
#define printX(...) Serial.printf( __VA_ARGS__ )
#else
#define printX(...) NULL
#endif
yet I believe this still includes printX in the resulting code, and that the parameters which have been applied still consume memory, CPU power, and stack space, so my question is:
is there a way to have a macro which does not include the function and "ignores" all of its calls in the source when in "release mode", so that basically nothing is compiled for it?
A macro is not a function. It does not consume any memory, CPU power, or stack space. This is because macros operate entirely at compile time and just act as a text-replacing mechanism. When the program is run, there are no macros being "called".
The macro
#define printX(...) NULL
replaces a printX function call, with all its arguments, with plain NULL. This is a textual replacement that happens before the compiler even gets to look at the code, so any nested calls inside printX, e.g.
printX(someExpensiveCall())
will also be completely eliminated.
In my programs I include a line that says:
#define DEBUG_MODE
and I use it anywhere I want to compile with (or without) debug mode:
#ifdef DEBUG_MODE
// print here all the info I need for debug and certainly don't want in the released binary
#endif
Before releasing the final binary I comment out the definition line.

Using #define for printf, does it affect speed?

I am using
#define printInt(x) printf ("%d",x)
In main()
I can use it like this:
int var=10;
printInt (var);
Which is easier to use than typing
printf ("%d",var);
Will using my own #define for printing an int, float etc make my program slower?
No, this will not affect the speed. The macro is expanded during preprocessing, so in every instance where you use printInt(myInt), what is actually passed to the compiler is printf("%d", myInt). The binary output should be identical either way.
No, it doesn't affect the speed of your program.
The #define instructions are processed by the preprocessor before your program is compiled.
For example the call
printInt(var);
is replaced with
printf ("%d",var);
by the preprocessor.
Therefore the compiler can't tell whether a #define was used or not. In both cases it sees the same code (and produces the same program). That's why the two programs cannot differ in speed.
EDIT: If you use a lot of #defines in your program, the preprocessing step itself may slow down. But in most cases this should be no problem.

increase c++ code verbosity with macros

I'd like to have the option of increasing my program's verbosity for debugging purposes. Of course I can do that using a switch/flag at runtime, but that can be very inefficient because of all the 'if' statements I would have to add to my code.
So, I'd like to add a flag to be used during compilation in order to include optional, usually slow debug operations in my code, without affecting the performance/size of my program when not needed. here's an example:
/* code */
#ifdef _DEBUG_
/* do debug operations here */
#endif
So, compiling with -D_DEBUG_ should do the trick. Without it, that part won't be included in my program.
Another option (at least for I/O operations) would be to define an I/O macro, like
#ifdef _DEBUG_
#define LOG(x) std::clog << x << std::endl;
#else
#define LOG(x)
#endif
However, I strongly suspect this probably isn't the cleanest way to do that. So, what would you do instead?
I prefer to use #ifdef with real functions so that the function has an empty body if _DEBUG_ is not defined:
void log(std::string x)
{
#ifdef _DEBUG_
std::cout << x << std::endl;
#endif
}
There are three big reasons for this preference:
When _DEBUG_ is not defined, the function definition is empty and any modern compiler will completely optimize out any call to that function (the definition should be visible inside that translation unit, of course).
The #ifdef guard only has to be applied to a small localized area of code, rather than every time you call log.
You do not need to use lots of macros, avoiding pollution of your code.
You can use macros to change implementation of the function (Like in sftrabbit's solution). That way, no empty places will be left in your code, and the compiler will optimize the "empty" calls away.
You can also use two distinct files for the debug and release implementation, and let your IDE/build script choose the appropriate one; this involves no #defines at all. Just remember the DRY rule and make the clean code reusable in debug scenario.
I would say that this actually depends very much on the problem you are facing. Some problems benefit more from the second solution, while simple code might be better off with simple defines.
Both snippets that you describe are correct ways of using conditional compilation to enable or disable the debugging through a compile-time switch. However, your assertion that checking the debug flags at runtime "can be very inefficient, due to all the 'if' statements I should add to my code" is mostly incorrect: in most practical cases a runtime check does not influence the speed of your program in a detectable way, so if keeping the runtime flag offers you potential advantages (e.g. turning the debugging on to diagnose a problem in production without recompiling) you should go for a run-time flag instead.
For the additional checks, I would rely on assert (see assert.h), which does exactly what you need: the check runs when you compile for debug and disappears when you compile for release.
For the verbosity, a more C++ version of what you propose would use a simple Logger class with a boolean as template parameter. But the macro is fine as well if kept within the Logger class.
For commercial software, having SOME debug output that is available at runtime on customer sites is usually a valuable thing to have. I'm not saying everything has to be compiled into the final binary, but it's not at all unusual that customers do things to your code that you don't expect [or that causes the code to behave in ways that you don't expect]. Being able to tell the customer "Well, if you run myprog -v 2 -l logfile.txt and do you usual thing, then email me logfile.txt" is a very, very useful thing to have.
As long as the "if-statement to decide if we log or not" is not in the deepest, darkest jungle in Peru, eh, I mean in the deepest nesting levels of your tight, performance-critical loop, then it's rarely a problem to leave it in.
So, I personally tend to go for the "always there, not always enabled" approach. That's not to say that I don't find myself adding some extra logging in the middle of my tight loops sometimes - only to remove it later on when the bug is fixed.
You can avoid the function-like macro when doing conditional compilation. Just define a regular or template function to do the logging and call it inside the:
#ifdef _DEBUG_
/* ... */
#endif
part of the code.
At least in the *Nix universe, the default define for this kind of thing is NDEBUG (read no-debug). If it is defined, your code should skip the debug code. I.e. you would do something like this:
#ifdef NDEBUG
inline void log(...) {}
#else
inline void log(...) { .... }
#endif
An example piece of code I use in my projects. This way, you can use a variable argument list, and if the DEBUG flag is not set, the related code is compiled out entirely:
#ifdef DEBUG
#define PR_DEBUG(fmt, ...) \
    printf("[DBG] %s: " fmt, __func__, ##__VA_ARGS__)
#else
#define PR_DEBUG(fmt, ...)
#endif
Usage:
#define DEBUG
<..>
ret = do_smth();
PR_DEBUG("some kind of code returned %d\n", ret);
Output:
[DBG] some_func: some kind of code returned 0
Of course, printf() may be replaced by any output function you use. Furthermore, it can easily be modified so that additional information, for example a time stamp, is automatically added.
For me it depends from application to application.
I've had applications where I wanted to always log (for example, we had an application where in case of errors, clients would take all the logs of the application and send them to us for diagnostics). In such a case, the logging API should probably be based on functions (i.e. not macros) and always defined.
In cases when logging is not always necessary or you need to be able to completely disable it for performance/other reasons, you can define logging macros.
In that case I prefer a single-line macro like this:
#ifdef NDEBUG
#define LOGSTREAM /##/
#else
#define LOGSTREAM std::clog
// or
// #define LOGSTREAM std::ofstream("output.log", std::ios::out|std::ios::app)
#endif
client code:
LOGSTREAM << "Initializing chipmunk feeding module ...\n";
//...
LOGSTREAM << "Shutting down chipmunk feeding module ...\n";
It's just like any other feature.
My assumptions:
No global variables
System designed to interfaces
For whatever you want verbose output from, create two implementations: one quiet, one verbose.
At application initialisation, choose the implementation you want.
It could be a logger, or a widget, or a memory manager, for example.
Obviously you don't want to duplicate code, so extract the minimum variation you want. If you know what the strategy pattern is, or the decorator pattern, these are the right direction. Follow the open closed principle.

_CrtSetAllocHook never shows filename/line number

I am implementing a memory tracker in my application so that further down the line, should I get any memory leaks I can switch this little guy on to find it.
All is great except that I am never passed the filename or the line number. Is there some flag I have to set using _CrtSetDbgFlag, or a preprocessor command?
After I ran the thing (bare-bones) it showed 26 allocations that were not cleaned up, and I am pretty sure they are not mine, but I have no idea where they occurred.
Thanks in advance!
From the <crtdbg.h> header file:
#ifdef _CRTDBG_MAP_ALLOC
#define malloc(s) _malloc_dbg(s, _NORMAL_BLOCK, __FILE__, __LINE__)
// etc...
#endif
Note how the redefinition now calls another version of malloc that takes the file and line number you are looking for. Clearly, to make this work you will have to #define _CRTDBG_MAP_ALLOC and #include <crtdbg.h>. This is best done in your precompiled header file so that you can be reasonably sure that all of your code will be compiled with these macros in effect.
That still doesn't guarantee that you'll get this info. Your project might be using a .lib that was compiled without it. Another failure mode is DLLs that might be unloaded just before you generate the leak report. The file and line info for that DLL will be unloaded as well.
There's a fallback to diagnose those kinds of troublemakers. The leak report has a line for each leak that starts with the block number, shown inside curly braces. As long as that block number is stable between runs, you can force the debugger to break when the allocation is made. Put this code in your main method, or at whatever point in your code executes early:
_crtBreakAlloc = 42; // Change the number

C++ conditional compilation

I have the following code snippet:
#ifdef DO_LOG
#define log(p) record(p)
#else
#define log(p)
#endif
void record(char *data){
.....
.....
}
Now if I call log("hello world") in my code and DO_LOG isn't defined, will the line be compiled, in other words will it eat up the memory for the string "hello world"?
P.S. There are a lot of record calls in the program and it is memory sensitive, so is there any other way to conditionally compile so that it only depends on the #define DO_LOG?
This should be trivial to verify for yourself by inspecting the resulting binary.
I would say "no", since the expression totally goes away, the compiler will never see the string (it's removed by the preprocessor's macro expansion).
Since the preprocessor runs before the compiler, the line will not even exist when the compiler runs. So the answer is no, it does not use any memory at all.
No, it will not be in the binary. It will not even be compiled - the preprocessor will expand it into an empty string prior to the compilation, so the compiler will not even see it.
No. The preprocessor runs prior to compilation, so the code will never even be seen. I would like to add, though, that if you are interested in adding logging to your C++ application, you might want to use the Log4Cxx library. It uses similar macros which you can completely elide from your application, but when logging is enabled it supports several levels of logging (based on importance/severity) as well as multiple "appenders" to which logging output can be sent (e.g. syslog, console, files, network I/O, etc.).
The full API documentation may be found at Log4Cxx API docs. Also, if you have any Java developers on board who have used Log4J, they should feel right at home with Log4Cxx (and convince you to use it).