I wonder what the best trace technique is for you.
Currently, I define the trace level for each source file with a macro just before including the trace header, which uses that macro.
For example:
trace.h:
#if DEBUG == 0
#define debug(...)
#define trace(...)
#elif DEBUG==1
#define debug(...)
#define trace(...) printf(__VA_ARGS__)
#elif DEBUG==2
#define debug(...) printf(__VA_ARGS__)
#define trace(...) printf(__VA_ARGS__)
#else
#error Bad DEBUG value
#endif
In every source .c file (trace_value varies):
#define DEBUG trace_value
#include "trace.h"
void func(){
debug("func()");
trace("func() ok");
return;
}
However, the project is growing and I want to use a precompiled header. It would be great to include the trace header in my precompiled header. So I am wondering: what are your trace techniques? Thank you.
EDIT:
I forgot to mention an important thing: I am interested in a logging technique for a latency-critical application.
Logging is a complex issue and what it does/how you use it depends greatly on the needs of your application.
In small applications, I tend to use either injected std::ostream references, or custom code for logging specifically. I also stay away from the C formatted printing APIs (and define my own operator<< for things I need to trace).
In large applications, if you have complex tracing needs (rotate log files between executions, logs per category and configurable from outside the application, automatic log formatting, high-throughput/performance logging, and so on) use an external library (like log4cpp).
I also tend to use/define macros only after all my code can be written without them.
Example implementation with injected logging stream:
#include <iosfwd>
#include <string>
#include <cstdint>
class http_server {
public:
http_server(std::string server, std::uint16_t listening_port,
std::ostream& log = cnull); // cnull is as defined at
// http://stackoverflow.com/a/6240980/186997
private:
std::ostream& log_;
};
Main (console) code:
int main(...) {
http_server("localhost", 8080, std::clog);
}
Unit test code:
std::ostringstream server_log;
http_server("localhost", 8080, server_log);
// assert on the contents of server_log
Basically, I consider that if I need tracing, it is part of the API and not an afterthought, not something I would disable through macros, and not something to hardcode.
If I need formatted logging, I would consider specializing a std::ostream, or (most probably) wrapping it into a specialized formatting class and injecting that around.
Macros for tracing code are usually reserved for performance-critical code (where you cannot afford to call stuff if it doesn't do anything).
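For illustration, a minimal sketch of such a compile-away macro (the names and the usage line are invented, not a library API):
// In NDEBUG builds the whole statement, including argument evaluation, disappears.
#ifdef NDEBUG
#define TRACE(stream, expr) ((void)0)
#else
#define TRACE(stream, expr) do { (stream) << expr << '\n'; } while (0)
#endif

// e.g. inside http_server:
//   TRACE(log_, "accepted connection from " << peer_address);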
Related
I have a project that must be parsed before the actual compilation; this is needed for reflection purposes. In short, I want to scan edited .h files for certain attributes in the code and gather information about them in order to generate specific include files.
For example, reflectable classes/fields/methods will be marked as META(parameters...). This macro will be defined like this: #define META(...) __attribute__((annotate("reflectable"))).
The code should look like this (similar to Qt QObject or Unreal Engine 4):
// --- macros outside of header to parse ---
// Marks declaration as reflectable (with some metadata), empty for base compiler, useful for clang
#ifdef __clang__
#define META(...) __attribute__((annotate("reflectable")))
#else
#define META(...)
#endif
// Injects some generated code after parsing step, empty for clang, useful for base compiler
#ifdef __clang__
#define GENERATED_REFLECTION_INFO
#else
#define GENERATED_REFLECTION_INFO GENERATE_CODE(__FILE__, __LINE__)
#endif
// --- reflectable class ---
META(Serializable, Exposed, etc)
class MyReflectableClass : public BaseReflectableClass
{
GENERATED_REFLECTION_INFO
public:
// Fields examples
META()
int32 MyReflectableField;
META(SkipSerialize)
float MySecondaryReflectableField;
// Methods example
META(RemoteMethod)
void MyReflectableMethod(int32 Param1, uint8 Param2);
};
I used libclang for this goal. But when the project has includes like <string> or <memory>, this causes lengthy parsing of a long chain of dependencies in the standard library.
Also, if the project has a lot of headers, a similar situation occurs.
How can I avoid parsing the full standard library? Perhaps there is some kind of cache that I could use with libclang?
Or are there ways to skip analyzing these includes and convince the parser that unknown types are acceptable?
So how can I optimize the application to reduce the parsing time?
C++, as a language, is not very amenable to speculative parsing because parsing is type-dependent. For example, the < token has different meanings depending on whether the name before it refers to a template or not.
However, libclang supports precompiled headers, which let you cache the standard library headers.
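A rough sketch of that approach with libclang (file names, flags, and the prefix header are illustrative, not a drop-in solution): build a PCH once from a header that includes the heavy standard headers, then parse each project header against it.
#include <clang-c/Index.h>

int main() {
    CXIndex index = clang_createIndex(/*excludeDeclarationsFromPCH=*/1, /*displayDiagnostics=*/1);

    // 1. Parse a header that includes <string>, <memory>, ... and save it as a PCH.
    const char* pchArgs[] = { "-x", "c++-header", "-std=c++14" };
    CXTranslationUnit pchTU = clang_parseTranslationUnit(
        index, "std_prefix.h", pchArgs, 3, nullptr, 0,
        CXTranslationUnit_ForSerialization);
    clang_saveTranslationUnit(pchTU, "std_prefix.pch",
                              clang_defaultSaveOptions(pchTU));
    clang_disposeTranslationUnit(pchTU);

    // 2. Parse each project header against the PCH; the standard library is
    //    loaded from the cache instead of being re-parsed every time.
    const char* args[] = { "-x", "c++", "-std=c++14",
                           "-include-pch", "std_prefix.pch" };
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, "MyReflectableClass.h", args, 5, nullptr, 0,
        CXTranslationUnit_DetailedPreprocessingRecord);
    // ... walk the AST here and collect the META() annotations ...
    clang_disposeTranslationUnit(tu);
    clang_disposeIndex(index);
}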
I am creating C++ library modules in my application. To do logging, I use spdlog. But in a production environment, I don't want my lib modules to do any logging. One way to turn logging on/off would be to litter my code with #ifdef conditionals like...
#ifdef logging
// call the logger here.
#endif
I am looking for a way to avoid writing these conditionals. Maybe I could write a wrapper function that does the #ifdef check. But the problem with this approach is that I would have to write wrappers for every logging method (such as info, trace, warn, error, ...).
Is there a better way?
You can disable logging with set_level():
auto my_logger = spdlog::basic_logger_mt("basic_logger", "logs/basic.txt");
#if defined(PRODUCTION)
my_logger->set_level(spdlog::level::off);
#else
my_logger->set_level(spdlog::level::trace);
#endif
spdlog::register_logger(my_logger);
You can disable all logging before you compile the code by adding the following macro (before including spdlog.h):
#define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_OFF
#include <spdlog/spdlog.h>
It is explained in a comment in the file https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/spdlog.h:
//
// enable/disable log calls at compile time according to global level.
//
// define SPDLOG_ACTIVE_LEVEL to one of those (before including spdlog.h):
// SPDLOG_LEVEL_TRACE,
// SPDLOG_LEVEL_DEBUG,
// SPDLOG_LEVEL_INFO,
// SPDLOG_LEVEL_WARN,
// SPDLOG_LEVEL_ERROR,
// SPDLOG_LEVEL_CRITICAL,
// SPDLOG_LEVEL_OFF
//
Using this macro will also speed up your production code because the logging calls are completely removed from the compiled code. Therefore this approach may be better than using my_logger->set_level(spdlog::level::off);
However, in order for the complete code removal to work you need to use either of the macros when logging:
SPDLOG_LOGGER_###(logger, ...)
SPDLOG_###(...)
where ### is one of TRACE, DEBUG, INFO, WARN, ERROR, CRITICAL.
The latter macro uses the default logger spdlog::default_logger_raw(); the former can be used with your custom loggers. The variadic arguments ... stand for the regular arguments to your logging invocation: the fmt string, followed by the values to splice into the message.
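For illustration, here is a minimal sketch of the macro-based call style (the file name and messages are invented). Switch SPDLOG_ACTIVE_LEVEL to SPDLOG_LEVEL_OFF and rebuild, and every statement below compiles to nothing:
#define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_INFO
#include <spdlog/spdlog.h>
#include <spdlog/sinks/basic_file_sink.h>  // needed for basic_logger_mt in spdlog v1.x

int main() {
    auto my_logger = spdlog::basic_logger_mt("basic_logger", "logs/basic.txt");
    SPDLOG_LOGGER_INFO(my_logger, "answer = {}", 42);  // goes through the custom logger
    SPDLOG_INFO("using the default logger");           // goes through spdlog::default_logger_raw()
}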
I don't know spdlog.
However, you may define a macro in one of your commonly used include files to replace the log call with nothing, or with a call to an empty inline function that the compiler's optimizer will eliminate.
In "app.h":
#ifndef LOG
#ifdef logging
#define LOG spdlog
#else
#define LOG noop
#endif
#endif
Do you get the idea?
This leaves most of your code untouched.
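For completeness, here is one hypothetical shape for the noop replacement: empty inline function templates that the optimizer removes entirely (a sketch; only the methods you actually call need to exist).
namespace noop {
    // Each function mirrors an spdlog call but does nothing.
    template <typename... Args> inline void info(Args&&...) {}
    template <typename... Args> inline void warn(Args&&...) {}
    template <typename... Args> inline void error(Args&&...) {}
}

// With the macro from app.h, call sites stay identical:
//   LOG::info("loaded {} items", n);   // spdlog::info(...) or noop::info(...)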
I have to use a lot of #ifdef i386 and x86_64 for architecture-specific code, and sometimes #ifdef MAC or #ifdef WIN32 and so on for platform-specific code.
We have to keep the code base common and portable.
But we have to follow the guideline that the use of #ifdef is strictly forbidden. I don't understand why.
As an extension to this question, I would also like to understand when to use #ifdef.
For example, dlopen() cannot open a 32-bit binary from a 64-bit process and vice versa, so that code is architecture-specific. Can we use #ifdef in such a situation?
With #ifdef instead of writing portable code, you're still writing multiple pieces of platform-specific code. Unfortunately, in many (most?) cases, you quickly end up with a nearly impenetrable mixture of portable and platform-specific code.
You also frequently get #ifdef being used for purposes other than portability (defining what "version" of the code to produce, such as what level of self-diagnostics will be included). Unfortunately, the two often interact, and get intertwined. For example, somebody porting some code to MacOS decides that it needs better error reporting, which he adds -- but makes it specific to MacOS. Later, somebody else decides that the better error reporting would be awfully useful on Windows, so he enables that code by automatically #defining MACOS if WIN32 is defined -- but then adds "just a couple more" #ifdef WIN32 to exclude some code that really is MacOS specific when Win32 is defined. Of course, we also add in the fact that MacOS is based on BSD Unix, so when MACOS is defined, it automatically defines BSD_44 as well -- but (again) turns around and excludes some BSD "stuff" when compiling for MacOS.
This quickly degenerates into code like the following example (taken from #ifdef Considered Harmful):
#ifdef SYSLOG
#ifdef BSD_42
openlog("nntpxfer", LOG_PID);
#else
openlog("nntpxfer", LOG_PID, SYSLOG);
#endif
#endif
#ifdef DBM
if (dbminit(HISTORY_FILE) < 0)
{
#ifdef SYSLOG
syslog(LOG_ERR,"couldn’t open history file: %m");
#else
perror("nntpxfer: couldn’t open history file");
#endif
exit(1);
}
#endif
#ifdef NDBM
if ((db = dbm_open(HISTORY_FILE, O_RDONLY, 0)) == NULL)
{
#ifdef SYSLOG
syslog(LOG_ERR,"couldn’t open history file: %m");
#else
perror("nntpxfer: couldn’t open history file");
#endif
exit(1);
}
#endif
if ((server = get_tcp_conn(argv[1],"nntp")) < 0)
{
#ifdef SYSLOG
syslog(LOG_ERR,"could not open socket: %m");
#else
perror("nntpxfer: could not open socket");
#endif
exit(1);
}
if ((rd_fp = fdopen(server,"r")) == (FILE *) 0){
#ifdef SYSLOG
syslog(LOG_ERR,"could not fdopen socket: %m");
#else
perror("nntpxfer: could not fdopen socket");
#endif
exit(1);
}
#ifdef SYSLOG
syslog(LOG_DEBUG,"connected to nntp server at %s", argv[1]);
#endif
#ifdef DEBUG
printf("connected to nntp server at %s\n", argv[1]);
#endif
/*
* ok, at this point we're connected to the nntp daemon
* at the distant host.
*/
This is a fairly small example with only a few macros involved, yet reading the code is already painful. I've personally seen (and had to deal with) much worse in real code. Here the code is ugly and painful to read, but it's still fairly easy to figure out which code will be used under what circumstances. In many cases, you end up with much more complex structures.
To give a concrete example of how I'd prefer to see that written, I'd do something like this:
if (!open_history(HISTORY_FILE)) {
logerr(LOG_ERR, "couldn't open history file");
exit(1);
}
if ((server = get_nntp_connection(server)) == NULL) {
logerr(LOG_ERR, "couldn't open socket");
exit(1);
}
logerr(LOG_DEBUG, "connected to server %s", argv[1]);
In such a case, it's possible that our definition of logerr would be a macro instead of an actual function. It might be sufficiently trivial that it would make sense to have a header with something like:
#ifdef SYSLOG
#define logerr(level, msg, ...) /* ... */
#else
enum {LOG_DEBUG, LOG_ERR};
#define logerr(level, msg, ...) /* ... */
#endif
[for the moment, assuming a preprocessor that can/will handle variadic macros]
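Here is one hypothetical way the elided bodies could be filled in (a sketch only; the real definitions would depend on the project):
#ifdef SYSLOG
#include <syslog.h>
#define logerr(level, ...) syslog((level), __VA_ARGS__)
#else
#include <stdio.h>
enum {LOG_DEBUG, LOG_ERR};
#define logerr(level, ...) \
    ((void)(level), fprintf(stderr, __VA_ARGS__), fputc('\n', stderr))
#endif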
Given your supervisor's attitude, even that may not be acceptable. If so, that's fine. Instead of a macro, implement that capability in a function. Isolate each implementation of the function(s) in its own source file and build the files appropriate to the target. If you have a lot of platform-specific code, you usually want to isolate it into a directory of its own, quite possibly with its own makefile[1], and have a top-level makefile that just picks which other makefiles to invoke based on the specified target.
[1] Some people prefer not to do this. I'm not really arguing one way or the other about how to structure makefiles, just noting that it's a possibility some people find/consider useful.
You should avoid #ifdef whenever possible. IIRC, it was Scott Meyers who wrote that with #ifdefs you do not get platform-independent code. Instead you get code that depends on multiple platforms. Also #define and #ifdef are not part of the language itself. #defines have no notion of scope, which can cause all sorts of problems. The best way is to keep the use of the preprocessor to a bare minimum, such as the include guards. Otherwise you are likely to end up with a tangled mess, which is very hard to understand, maintain, and debug.
Ideally, if you need to have platform-specific declarations, you should have separate platform-specific include directories, and handle them appropriately in your build environment.
If you have platform-specific implementations of certain functions, you should also put them into separate .cpp files and, again, handle them in the build configuration.
Another possibility is to use templates. You can represent your platforms with empty dummy structs, and use those as template parameters. Then you can use template specialization for platform-specific code. This way you would be relying on the compiler to generate platform-specific code from templates.
Of course, the only way for any of this to work, is to very cleanly factor out platform-specific code into separate functions or classes.
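A minimal sketch of that template approach (the tag structs, the file_impl template, and the alias are all invented for illustration):
struct windows_tag {};
struct posix_tag {};

#if defined(_WIN32)
using current_platform = windows_tag;
#else
using current_platform = posix_tag;
#endif

// Primary template intentionally left undefined: using an unsupported
// platform is a compile-time error.
template <typename Platform>
struct file_impl;

template <>
struct file_impl<posix_tag> {
    static bool remove(const char* path);   // defined in posix/file_impl.cpp
};

template <>
struct file_impl<windows_tag> {
    static bool remove(const char* path);   // defined in windows/file_impl.cpp
};

// Client code is platform-agnostic:
using file = file_impl<current_platform>;
// file::remove("old.log");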
I have seen 3 broad usages of #ifdef:
isolate platform-specific code
isolate feature-specific code (not all compiler versions / language dialects are born equal)
isolate compilation-mode code (NDEBUG, anyone?)
Each has the potential to create a huge mess of unmaintainable code, and should be treated accordingly, but not all of them can be dealt with in the same fashion.
1. Platform specific code
Each platform comes with its own set of specific includes, structures and functions to deal with things like IO (mainly).
In this situation, the simplest way to deal with this mess is to present a unified front, and have platform specific implementations.
Ideally:
project/
include/namespace/
generic.h
src/
unix/
generic.cpp
windows/
generic.cpp
This way, the platform-specific stuff is all kept together in a single file (per header), so it is easy to locate. The generic.h file describes the interface; the appropriate generic.cpp is selected by the build system. No #ifdef.
If you want inline functions (for performance), then a platform-specific genericImpl.i file providing the inline definitions can be included at the end of the generic.h file with a single #ifdef.
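As a sketch of what the end of generic.h could look like (the namespace, class, and file names are illustrative):
#ifndef MYLIB_GENERIC_H
#define MYLIB_GENERIC_H

namespace io {
    class File {
    public:
        bool open(const char* path);
        // ...
    };
}

// The single #ifdef: pull in the platform-specific inline definitions.
#if defined(_WIN32)
#   include "windows/genericImpl.i"
#else
#   include "unix/genericImpl.i"
#endif

#endif // MYLIB_GENERIC_H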
2. Feature specific code
This gets a bit more complicated, but is usually experienced only by libraries.
For example, Boost.MPL is much easier to implement with compilers having variadic templates.
Or, compilers supporting move constructors allow you to define more efficient versions of some operations.
There is no paradise here. If you find yourself in such a situation... you end up with a Boost-like file (aye).
3. Compilation Mode code
You can generally get away with a couple of #ifdefs. The traditional example is assert:
#ifdef NDEBUG
# define assert(X) (void)(0)
#else // NDEBUG
# define assert(X) do { if (!(X)) { assert_impl(__FILE__, __LINE__, #X); } } while(0)
#endif // NDEBUG
Then the use of the macro itself does not depend on the compilation mode, so at least the mess is contained within a single file.
Beware: there is a trap here. If the macro does not expand to something that counts as a statement when it is "ifdefed away", you risk changing the control flow under some circumstances. Also, macros that do not evaluate their arguments may lead to strange behavior when there are function calls (with side effects) in the mix, but in this case this is desirable, as the computation involved may be expensive.
Many programs use such a scheme for platform-specific code. A better way, and also a way to clean up the code, is to put all code specific to one platform in one file, naming the functions the same and giving them the same arguments. Then you just select which file to build depending on the platform.
There might still be some places where you cannot extract platform-specific code into separate functions or files, and you might still need the #ifdef parts, but hopefully they will be minimized.
I prefer splitting the platform dependent code & features into separate translation units and letting the build process decide which units to use.
I've lost a week of debugging time due to misspelled identifiers. The compiler does not do checking of defined constants across translation units. For example, one unit may use "WIN386" and another "WIN_386". Platform macros are a maintenance nightmare.
Also, when reading the code, you have to check the build instructions and header files to see which identifiers are defined. There is also a difference between an identifier existing and having a value. Some code may test for the existence of an identifier while other code tests its value. When the identifier is not defined, the latter test silently evaluates it as 0 in #if expressions.
Just believe they are evil and prefer not to use them.
Not sure what you mean by "#ifdef is strict no", but perhaps you are referring to a policy on a project you are working on.
You might consider not checking for things like Mac or WIN32 or i386, though. In general, you do not actually care if you are on a Mac. Instead, there is some feature of MacOS that you want, and what you care about is the presence (or absence) of that feature. For that reason, it is common to have a script in your build setup that checks for features and #defines things based on the features provided by the system, rather than making assumptions about the presence of features based on the platform. After all, you might assume certain features are absent on MacOS, but someone may have a version of MacOS on which they have ported that feature. The script that checks for such features is commonly called "configure", and it is often generated by autoconf.
Personally, I prefer to abstract that noise away (where necessary). If it's all over the body of a class's interface - yuck!
So, let's say there is a type which is platform-defined:
I will use a typedef at a high level for the inner bits and create an abstraction - that's often one line per #ifdef/#else/#endif.
Then for the implementation, I will also use a single #ifdef for that abstraction in most cases (but that does mean that the platform-specific definitions appear once per platform). I also separate them into platform-specific files so I can rebuild a project by throwing all the sources into a project and building without a hiccup. In that case, #ifdef is also handier than trying to figure out all the dependencies per project, per platform, per build type.
So, just use it to focus on the platform specific abstraction you need, and use abstractions so the client code is the same -- just like reducing the scope of a variable ;)
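A tiny sketch of that typedef idea (the type name is invented):
#if defined(_WIN32)
typedef void* native_handle_t;   // e.g. a Win32 HANDLE
#else
typedef int native_handle_t;     // e.g. a POSIX file descriptor
#endif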
Others have indicated the preferred solution: put the dependent code in a separate file, which is included. Thus the files corresponding to different implementations can either be placed in separate directories (one of which is specified by means of a -I or /I directive in the compiler invocation), or selected by building up the name of the file dynamically (using e.g. macro concatenation) and using something like:
#include XX_dependentInclude(config.hh)
(In this case, XX_dependentInclude might be defined as something like:
#define XX_string2( s ) # s
#define XX_stringize( s ) XX_string2(s)
#define XX_paste2( a, b ) a ## b
#define XX_paste( a, b ) XX_paste2( a, b )
#define XX_dependentInclude(name) XX_stringize(XX_paste(XX_SYST_ID,name))
and XX_SYST_ID is defined using -D or /D in the compiler invocation.)
In all of the above, replace XX_ with the prefix you usually use for macros.
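For example, a hypothetical invocation (the system id linux_ and the file name are made up) might be c++ -DXX_SYST_ID=linux_ -c module.cpp, and then inside module.cpp:
#include XX_dependentInclude(config.hh)
// expands to XX_stringize(XX_paste(linux_, config.hh))
//         -> XX_stringize(linux_config.hh)
//         -> "linux_config.hh"
// i.e. the directive behaves exactly like: #include "linux_config.hh"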
This is my first attempt at writing anything even slightly complicated in C++. I'm attempting to build a shared library that I can interface with from Objective-C and .NET apps (OK, that part comes later...).
The code I have is:
#ifdef TARGET_OS_MAC
// Mac Includes Here
#endif
#ifdef __linux__
// Linux Includes Here
#error Can't be compiled on Linux yet
#endif
#ifdef _WIN32 || _WIN64
// Windows Includes Here
#error Can't be compiled on Windows yet
#endif
#include <iostream>
using namespace std;
bool probe(){
#ifdef TARGET_OS_MAC
return probe_macosx();
#endif
#ifdef __linux__
return probe_linux();
#endif
#ifdef _WIN32 || _WIN64
return probe_win();
#endif
}
bool probe_win(){
// Windows Probe Code Here
return true;
}
int main(){
return 1;
}
I get a compiler warning: untitled: In function ‘bool probe()’: untitled:29: warning: control reaches end of non-void function. But I'd also really appreciate any information or resources people could suggest on how to write this kind of code better.
Instead of repeating yourself and writing the same #ifdef lines again, again, and again, you're probably better off declaring the probe() function in a header and providing three different source files, one for each platform. This also has the benefit that if you add a platform, you do not have to modify all of your existing sources, but just add new files. Use your build system to select the appropriate source file.
Example structure:
include/probe.h
src/arch/win32/probe.cpp
src/arch/linux/probe.cpp
src/arch/mac/probe.cpp
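For illustration, the header and one of the per-platform sources could look like this (a sketch; the contents are invented):
// include/probe.h -- the shared declaration
#ifndef PROBE_H
#define PROBE_H
bool probe();
#endif

// src/arch/linux/probe.cpp -- only this file is compiled on Linux
#include "probe.h"

bool probe() {
    // Linux-specific probing goes here
    return true;
}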
The warning is because probe() doesn't return a value. In other words, none of the three #ifdefs matches.
I'll address this specific function:
bool probe() {
#ifdef TARGET_OS_MAC
return probe_macosx();
#elif defined __linux__
return probe_linux();
#elif defined _WIN32 || defined _WIN64
return probe_win();
#else
#error "unknown platform"
#endif
}
Writing it this way, as a chain of if-elif-else, eliminates the warning because it's impossible to compile without either a valid return statement or hitting the #error.
(I believe _WIN32 is defined for both 32- and 64-bit Windows, but I couldn't tell you definitively without looking it up. That would simplify the code.)
Unfortunately, you can't use #ifdef _WIN32 || _WIN64: see http://codepad.org/3PArXCxo for a sample error message. You can use the special preprocessing-only defined operator, as I did above.
Regarding splitting up platforms according to functions or entire files (as suggested), you may or may not want to do that. It's going to depend on details of your code, such as how much is shared between platforms and what you (or your team) find best to keep functionality in sync, among other issues.
Furthermore, you should handle platform selection in your build system, but this doesn't mean you can't use the preprocessor: use macros conditionally defined (by the makefile or build system) for each platform. In fact, this is often the most practical solution with templates and inline functions, which makes it more flexible than trying to eliminate the preprocessor. It combines well with the whole-file approach, so you still use that where appropriate.
You might want to have a single config header which translates all the various compiler- and platform-specific macros into well-known and understood macros that you control. Or you could add -DBEAKS_PLAT_LINUX to your compiler command line—through your build system—to define that macro (remember to use a prefix for macro names).
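A sketch of such a config header (the macro names other than BEAKS_PLAT_LINUX are illustrative):
// plat_config.h: translate compiler/platform macros into macros you control.
#if defined(_WIN32)
#   define BEAKS_PLAT_WINDOWS 1
#elif defined(__APPLE__)
#   define BEAKS_PLAT_MACOS 1
#elif defined(__linux__)
#   define BEAKS_PLAT_LINUX 1
#else
#   error "unsupported platform"
#endif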
It seems none of TARGET_OS_MAC, __linux__, _WIN32 or _WIN64 is defined at the time you compile your code.
So it's as if your code were:
bool probe(){
}
That's why the compiler complains about reaching the end of a non-void function: there is no return statement.
Also, for the more general question, here are my guidelines when developing multi-platform/multi-architecture software and libraries:
Avoid specific cases. Try to write code that is OS-agnostic.
When dealing with system-specific stuff, try to wrap things into "opaque" classes. As an example, if you are dealing with files (different APIs on Linux and Windows), try to create a File class that embeds all the logic and provides a common interface, whatever the operating system. If some feature is not available on one of the OSes, deal with it: if the feature makes no sense for a specific OS, it's often fine to do nothing at all.
In short: the less #ifdef the better. And no matter how portable your code is, test it on every platform before releasing it.
Good luck ;)
The warning is because, if none of the macros is actually defined, you have no return in your probe function. The fix for that is to put in a default return.
To add something more to this, beyond the outstanding options above: the macros __linux__ and _WIN32 are predefined by the compiler, whereas TARGET_OS_MAC is not; this can be resolved by using __APPLE__. Source: http://www.winehq.org/pipermail/wine-patches/2003-July/006906.html
Here's a little problem I've been thinking about for a while now that I have not found a solution for yet.
So, to start with, I have this function guard that I use for debugging purpose:
class FuncGuard
{
public:
FuncGuard(const TCHAR* funcsig, const TCHAR* funcname, const TCHAR* file, int line);
~FuncGuard();
// ...
};
#ifdef _DEBUG
#define func_guard() FuncGuard __func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__)
#else
#define func_guard() void(0)
#endif
The guard is intended to help trace the path the code takes at runtime by printing some information to the debug console. It is intended to be used such as:
void TestGuardFuncWithCommentOne()
{
func_guard();
}
void TestGuardFuncWithCommentTwo()
{
func_guard();
// ...
TestGuardFuncWithCommentOne();
}
And it gives this as a result:
..\tests\testDebug.cpp(121):
Entering[ void __cdecl TestGuardFuncWithCommentTwo(void) ]
..\tests\testDebug.cpp(114):
Entering[ void __cdecl TestGuardFuncWithCommentOne(void) ]
Leaving[ TestGuardFuncWithCommentOne ]
Leaving[ TestGuardFuncWithCommentTwo ]
Now, one thing that I quickly realized is that it's a pain to add and remove the guards from the functions. It's also unthinkable to leave them there permanently as they are, because they drain CPU cycles for no good reason and can quickly bring the app to a crawl. Also, even if there were no impact on the performance of the app in debug, there would soon be a flood of information in the debug console that would render this debug tool useless.
So, I thought it could be a good idea to enable and disable them on a per-file basis.
The idea would be to have all the function guards disabled by default, but they could be enabled automagically in a whole file simply by adding a line such as
EnableFuncGuards();
at the top of the file.
I've thought about many solutions for this. I won't go into details here since my question is already long enough, but let's just say that I've tried more than a few tricks involving macros, which all failed, and one involving explicit instantiation of templates, but so far none of them gets me the result I'm looking for.
Another restricting factor to note: the header in which the function guard mechanism is currently implemented is included through a precompiled header. I know it complicates things, but if someone could come up with a solution that works in this situation, that would be awesome. If not, well, I can certainly extract that header from the precompiled header.
Thanks a bunch in advance!
Add a bool to FuncGuard that controls whether it should display anything.
#ifdef NDEBUG
#define SCOPE_TRACE(CAT)
#else
extern bool const func_guard_alloc;
extern bool const func_guard_other;
// Two-level concatenation so that __LINE__ expands to the real line number.
#define NPP_CONCAT2(a, b) a##b
#define NPP_CONCAT(a, b) NPP_CONCAT2(a, b)
#define SCOPE_TRACE(CAT) \
NppDebug::FuncGuard NPP_CONCAT(npp_func_guard_, __LINE__)( \
TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), \
__LINE__, func_guard_##CAT)
#endif
Implementation file:
void example_alloc() {
SCOPE_TRACE(alloc);
}
void other_example() {
SCOPE_TRACE(other);
}
This:
uses specific categories (including one per file if you like)
allows multiple uses in one function, one per category or logical scope (by including the line number in the variable name)
compiles away to nothing in NDEBUG builds (NDEBUG is the standard I'm-not-debugging macro)
You will need a single project-wide file containing definitions of your category bools, changing this 'settings' file does not require recompiling any of the rest of your program (just linking), so you can get back to work. (Which means it will also work just fine with precompiled headers.)
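For illustration, that settings file could be as small as this (the file name is hypothetical):
// trace_categories.cpp: the single project-wide settings file.
// Flip a flag and relink; no other translation unit recompiles.
bool const func_guard_alloc = true;   // trace allocation-related scopes
bool const func_guard_other = false;  // keep everything else quiet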
Further improvement involves telling the FuncGuard about the category, so it can even log to multiple locations. Have fun!
You could do something similar to the assert() macro where having some macro defined or not changes the definition of assert() (NDEBUG in assert()'s case).
Something like the following (untested):
#undef func_guard
#ifdef USE_FUNC_GUARD
#define func_guard() NppDebug::FuncGuard __npp_func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__)
#else
#define func_guard() void(0)
#endif
One thing to remember is that the include file that does this can't have include guard macros (at least not around this part).
Then you can use it like so to get tracing controlled even within a compilation unit:
#define USE_FUNC_GUARD
#include "funcguard.h"
// stuff you want traced
#undef USE_FUNC_GUARD
#include "funcguard.h"
// and stuff you don't want traced
Of course this doesn't play 100% well with pre-compiled headers, but I think that subsequent includes of the header after the pre-compiled stuff will still work correctly. Even so, this is probably the kind of thing that shouldn't be in a pre-compiled header set.