Should I use if unlikely for hard crashing errors? - c++

I often find myself writing code that looks something like this:
if(a == nullptr) throw std::runtime_error("error at " __FILE__ ":" S__LINE__);
Should I prefer handling errors with if unlikely?
if unlikely(a == nullptr) throw std::runtime_error("error at " __FILE__ ":" S__LINE__);
Will the compiler automatically deduce which part of the code should be cached or is this an actually useful thing to do? Why do I not see many people handling errors like this?

Yes you can do that. But even better is to move the throw to a separate function, and mark it with __attribute__((cold, noreturn)). This will remove the need to say unlikely() at each call site, and may improve code generation by moving the exception throwing logic entirely outside the happy path, improving instruction cache efficiency and inlining possibilities.
If you prefer to use unlikely() for semantic notation (to make the code easier to read), that's fine too, but it isn't optimal by itself.
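For illustration, here is a minimal sketch of that approach (GCC/Clang attribute syntax; the function name throw_null_error is made up for the example):
#include <stdexcept>

__attribute__((cold, noreturn))
void throw_null_error(const char* where)
{
    throw std::runtime_error(where); // throwing satisfies noreturn
}

void test(const char* a)
{
    if (a == nullptr)                // no unlikely() needed here:
        throw_null_error("test()");  // `cold` already marks this path as rare
}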

Should I use "if unlikely" for hard crashing errors?
For cases like that I'd prefer moving the code that throws to a standalone extern function marked noreturn. This way your actual code isn't "polluted" with lots of exception-related code (or whatever your "hard crashing" code is). Contrary to the accepted answer, you don't need to mark it cold, but you really do need noreturn so the compiler doesn't try to generate code to preserve registers or other state, and can assume that once control goes there, there is no way back.
For example, if you write code this way:
#include <stdexcept>

#define _STR(x) #x
#define STR(x) _STR(x)

void test(const char* a)
{
    if (a == nullptr)
        throw std::runtime_error("error at " __FILE__ ":" STR(__LINE__));
}
the compiler will generate lots of instructions that deal with constructing and throwing this exception, and you also introduce a dependency on std::runtime_error. Check how the generated code looks once you have just three checks like that in your test function.
First improvement: move the throwing code to a standalone function:
void my_runtime_error(const char* message);

#define _STR(x) #x
#define STR(x) _STR(x)

void test(const char* a)
{
    if (a == nullptr)
        my_runtime_error("error at " __FILE__ ":" STR(__LINE__));
}
This way you avoid generating all that exception-related code inside your function. The generated instructions immediately become simpler and cleaner, with less effect on the instructions generated by the code that actually performs the checks.
There is still room for improvement. Since you know that my_runtime_error won't return, you should let the compiler know, so that it doesn't need to preserve registers across the call:
#if defined(_MSC_VER)
#define NORETURN __declspec(noreturn)
#else
#define NORETURN __attribute__((__noreturn__))
#endif
void NORETURN my_runtime_error(const char* message);
...
When you use it multiple times in your code, the generated code is much smaller and has less effect on the instructions generated by your actual code: the compiler no longer needs to preserve registers before calling my_runtime_error.
I would also suggest not concatenating __FILE__ and __LINE__ into monolithic error-message strings. Pass them as standalone parameters and simply make a macro that passes them along:
void NORETURN my_runtime_error(const char* message, const char* file, int line);
#define MY_ERROR(msg) my_runtime_error(msg, __FILE__, __LINE__)

void test(const char* a)
{
    if (a == nullptr)
        MY_ERROR("error");
    if (a[0] == 'a')
        MY_ERROR("first letter is 'a'");
    if (a[0] == 'b')
        MY_ERROR("first letter is 'b'");
}
It may seem like more code is generated per my_runtime_error call (2 more instructions in an x64 build), but the total size is actually smaller, as the space saved on the constant strings far outweighs the extra code size.
Also, note that these code examples mainly show the benefit of making your "hard crashing" function extern. The need for noreturn becomes more obvious in real code, for example:
#include <math.h>

#if defined(_MSC_VER)
#define NORETURN __declspec(noreturn)
#else
#define NORETURN __attribute__((noreturn))
#endif

void NORETURN my_runtime_error(const char* message, const char* file, int line);
#define MY_ERROR(msg) my_runtime_error(msg, __FILE__, __LINE__)

double test(double x)
{
    int i = floor(x);
    if (i < 10)
        MY_ERROR("error!");
    return 1.0*sqrt(i);
}
Look at the generated assembly, then try removing NORETURN, or changing __attribute__((noreturn)) to __attribute__((cold)), and you'll see completely different generated assembly!
As a last point (obvious, IMO, but omitted above): you need to define your my_runtime_error function in some .cpp file. Since there is going to be only one copy, you can put whatever code you want in this function.
#include <stdexcept>
#include <string>

void NORETURN my_runtime_error(const char* message, const char* file, int line)
{
    // you can log the message over the network,
    // save it to a file, and finally throw it as an error:
    std::string msg = message;
    msg += " at ";
    msg += file;
    msg += ":";
    msg += std::to_string(line);
    throw std::runtime_error(msg);
}
One more point: clang actually recognizes that this type of function would benefit from noreturn, and warns about it if the -Wmissing-noreturn warning is enabled:
warning: function 'my_runtime_error' could be declared with attribute 'noreturn' [-Wmissing-noreturn]

It depends.
First of all, you can definitely do this, and it will likely (pun intended) not harm the performance of your application. But please note that likely/unlikely hints are compiler-specific, and should be wrapped accordingly (see the sketch below).
Secondly, if you want a performance gain, the outcome will depend on the target platform (and the corresponding compiler backend). If we're talking about the 'default' x86 architecture, you will not get much of a benefit on modern chips - the only change these attributes produce is a change in code layout (unlike earlier times, when x86 supported software branch prediction hints). For small branches (like your example), it will have very little effect on cache utilization and/or front-end latency.
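If you do want the hint in a portable form, a minimal wrapper might look like this (a sketch; the UNLIKELY name is my own, and C++20 standardizes [[likely]]/[[unlikely]] attributes for the same purpose):
#if defined(__GNUC__) || defined(__clang__)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)
#else
#define UNLIKELY(x) (x)  // no-op fallback on other compilers
#endif

// usage: if (UNLIKELY(a == nullptr)) throw std::runtime_error("...");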
UPDATE:
Will the compiler automatically deduce which part of the code should be cached or is this an actually useful thing to do?
This is actually a very wide and complicated topic. What the compiler will do depends on the particular compiler, its backend (target architecture), and compilation options. Again, for x86, here's the relevant rule (taken from the Intel® 64 and IA-32 Architectures Optimization Reference Manual):
Assembly/Compiler Coding Rule 3. (M impact, H generality) Arrange code to be consistent with the static branch prediction algorithm: make the fall-through code following a conditional branch be the likely target for a branch with a forward target, and make the fall-through code following a conditional branch be the unlikely target for a branch with a backward target.
As far as I'm aware, that's the only thing left of static branch prediction in modern x86, and likely/unlikely attributes can only be used to "override" this default behaviour.

Since you're "crashing hard" anyways, I'd go with
#include <cassert>
...
assert(a != nullptr);
This is compiler-independent, should give you close to optimal performance, gives you a breakpoint when running under a debugger, generates a core dump when not under a debugger, and can be disabled by defining the NDEBUG preprocessor symbol, which many build systems do by default for release builds.
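A small sketch of how that plays out (the lookup function is made up for the example):
#include <cassert>

int lookup(const int* table)
{
    assert(table != nullptr); // active unless NDEBUG is defined
    return table[0];
}

// g++ -O2 test.cpp          -> assert checks, and aborts on failure
// g++ -O2 -DNDEBUG test.cpp -> assert expands to nothing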

Related

Is it possible to have a zero-cost assert() such that code should not have to be modified between debug and release builds?

I've noticed that some code often looks like this:
#ifdef DEBUG
assert(i == 1);
#endif //DEBUG
and that you may have several blocks of these sitting around in your raw code. Having to write out each block is tedious and messy.
Would it be plausible to have a function like this:
auto debug_assert = [](auto expr) {
#ifdef DEBUG
    assert(expr);
#endif //DEBUG
};
or something like this:
#ifdef DEBUG
auto debug_assert = [](bool expr) {
    assert(expr);
};
#else //DEBUG
void debug_assert(bool expr) {}
#endif //DEBUG
to get a zero-cost assert when the DEBUG flag is not specified? (i.e. it should have the same effect as if it was not put into the code without the lambda running, etc. and be optimized out by the g++/clang compilers).
As mentioned by @KerrekSB, you can already disable asserts by defining NDEBUG before including <cassert>. The best way to ensure that it's defined before including the header is to pass it as an argument to the compiler (with gcc that's -DNDEBUG).
Note: the assert is removed by replacing it with a no-op expression, and there the argument isn't evaluated at all (which is different from your suggested solution)! This is why it's of utmost importance not to call any functions that have side effects in assert.
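To illustrate that pitfall with a hedged example (pop_checked is made up for the illustration):
#include <cassert>
#include <vector>

void pop_checked(std::vector<int>& v)
{
    // BAD: with -DNDEBUG the whole argument disappears, so pop_back()
    // silently never runs in release builds:
    //   assert(!v.empty() && (v.pop_back(), true));
    // Better: keep the side effect outside the assert.
    assert(!v.empty());
    v.pop_back();
}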
For completeness: here is how assert can be implemented:
#include <cstdio>
#include <cstdlib>

#ifndef NDEBUG
#define assert(EXPRESSION) \
    ((EXPRESSION) ? (void)0 \
                  : (printf("assertion failed at line %d, file %s: %s\n", \
                            __LINE__, __FILE__, #EXPRESSION), exit(-1)))
#else
#define assert(EXPRESSION) (void)0
#endif
Introducing your own assert-style macro is very commonly done. There are quite a lot of reasons you may want to do this:
you want to include more information about the evaluated expression (see Catch's REQUIRE and how it uses expression templates to decompose the expression into individual elements and stringify them)
you want to take some action other than exiting the program, like throwing an exception (see the sketch below), mailing the developer, logging to a file, or breaking into the debugger
you want to evaluate the expression even in release builds, which is less error-prone than not evaluating it at all (after all, if it doesn't have side effects it can be eliminated by compiler optimizations, and if it does, you just avoided a heisenbug)
and so on, and so on (if you have an idea, you can post a comment, I'll add it to the answer)
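As a sketch of the throwing variation mentioned above (REQUIRE here is just an illustrative name, not Catch's actual macro):
#include <sstream>
#include <stdexcept>

#define REQUIRE(cond)                                          \
    do {                                                       \
        if (!(cond)) {                                         \
            std::ostringstream os;                             \
            os << "requirement failed: " #cond                 \
               << " at " << __FILE__ << ":" << __LINE__;       \
            throw std::logic_error(os.str());                  \
        }                                                      \
    } while (0)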

Asserts and unused variables in Visual Studio 2010 SP1

I've used the code below for assert in "release" for some time, with no issues ever.
Then along came Visual Studio 2010 Pro SP1, and things went south, as also happened to Mr. Krunthar.
Problem is, when I have a piece of code in which I do sanity checks like this:
#define ASSERT(condition, msg) do { (void)sizeof(condition); } while (0,0)
// Note: (0,0) is to avoid warning C4127: conditional expression is constant
{
    int result = CallMeOnce(); // its side effects are the important stuff

    // perform additional sanity checks in debug
    ASSERT(result >= 0, "too low");
    ASSERT(result <= 100, "too high");
    ASSERT(!isPrime(result), "too prime");
}
VS2010 spits out a warning C4189: 'result' : local variable is initialized but not referenced
I am at a loss on how to fix that:
Code like (void)(condition) will execute any expression passed as condition, which is a no-no
Putting CallMeOnce() inside the ASSERT expression is impossible
Refactoring all the different CallMeOnce()s is NOT an option
I'd rather not have to write scaffolding code like (void)result, if (result == result) {} or UNREFERENCED_PARAMETER(result) (or equivalent) outside the macro just to avoid the warning as it makes the code even harder to read (pollution), and is easy to forget while writing code in Debug. Also: in lots of places!
I'm considering creating another macro (ASSERTU?) just for variables, but it feels so... quirky!
Has anyone found a better way out?
Thanks a lot!
Edit: Clarified preference for the variable handling at caller's level
in your assert macro you have
(void)sizeof(condition);
presumably this code was written by someone else, so, explanation:
the rôle of the (void) cast is to tell the compiler that you really intended this do-nothing expression statement to do nothing.
now do the same for your result
that was easy, wasn't it? sometimes solution is just staring you in the face. ;-)
by the way, when this construct is used to suppress warnings about unused formal arguments, you might want to add a redefinition of the name, like
(void) unusedArg; struct unusedArg;
this prevents inadvertently using the argument with later maintenance of the code
however, the error generated by visual c++ is not exactly informative
there are umpteen levels of sophistication that can be added, but i think even the name redefinition is perhaps going too far – the cost greater than the advantage, perhaps
You can use the UNREFERENCED_PARAMETER macro.
It seems I got somewhere!
#define ASSERT(condition, msg) \
do { \
if (0,0) { \
(void)(condition); \
} \
} while (0,0)
Mandatory explanation:
(void)(condition); will suppress C4189, but will execute any expression or function call passed in.
However, if (false) {...} will make sure that whatever (valid expression) "..." may be, it will not be executed. Code optimization phase will see it as dead code and throw it away (no code generated at all for the block in my tests!).
Finally, the owl trick (0,0) will prevent C4127, which seems a quite useless warning in the first place but hey, less clutter in the compilation output!
The only weakness I could find to this solution is that condition needs to be compilable code, so if you #ifdef-ed out part of the expression, it will raise an error. It might be that it's also compiling (though not calling) the code for the called functions; more research would be useful.
This is much nicer. Also: an expression instead of a statement
#define ASSERT(condition, msg) ( false ? (void)(condition) : (void)0 )
though you might want both debug and release versions of your assert to have the same semantic, so a do {...} while (0,0) around it might be appropriate.
You can use pairs of __pragma(warning(push)) __pragma(warning(disable: 4127)) and __pragma(warning(pop)) to silence C4127 just for the ASSERT line.
Then (void)(true ? (void)0 : ((void)(expression))) silences C4189.
This is an excerpt from my own implementation of an assertion macro.
The PPK_ASSERT(expression) macro will ultimately expand to PPK_ASSERT_3(level, expression) or PPK_ASSERT_UNUSED(expression) depending on whether assertions are enabled or disabled.
#define PPK_ASSERT_3(level, expression, ...)\
    __pragma(warning(push))\
    __pragma(warning(disable: 4127))\
    do\
    {\
        static bool _ignore = false;\
        if (PPK_ASSERT_LIKELY(expression) || _ignore || pempek::assert::implementation::ignoreAllAsserts());\
        else\
        {\
            if (pempek::assert::implementation::handleAssert(PPK_ASSERT_FILE, PPK_ASSERT_LINE, PPK_ASSERT_FUNCTION, #expression, level, _ignore, __VA_ARGS__) == pempek::assert::implementation::AssertAction::Break)\
                PPK_ASSERT_DEBUG_BREAK();\
        }\
    }\
    while (false)\
    __pragma(warning(pop))
and
#define PPK_ASSERT_UNUSED(expression) (void)(true ? (void)0 : ((void)(expression)))

increase c++ code verbosity with macros

I'd like to have the possibility to increase the verbosity for debug purposes of my program. Of course I can do that using a switch/flag during runtime. But that can be very inefficient, due to all the 'if' statements I should add to my code.
So, I'd like to add a flag to be used during compilation in order to include optional, usually slow debug operations in my code, without affecting the performance/size of my program when not needed. Here's an example:
/* code */
#ifdef _DEBUG_
/* do debug operations here */
#endif
So, compiling with -D_DEBUG_ should do the trick; without it, that part won't be included in my program.
Another option (at least for i/o operations) would be to define at least an i/o function, like
#ifdef _DEBUG_
#define LOG(x) std::clog << x << std::endl;
#else
#define LOG(x)
#endif
However, I strongly suspect this probably isn't the cleanest way to do that. So, what would you do instead?
I prefer to use #ifdef with real functions so that the function has an empty body if _DEBUG_ is not defined:
void log(std::string x)
{
#ifdef _DEBUG_
std::cout << x << std::endl;
#endif
}
There are three big reasons for this preference:
When _DEBUG_ is not defined, the function definition is empty and any modern compiler will completely optimize out any call to that function (the definition should be visible inside that translation unit, of course).
The #ifdef guard only has to be applied to a small localized area of code, rather than every time you call log.
You do not need to use lots of macros, avoiding pollution of your code.
You can use macros to change the implementation of the function (like in sftrabbit's solution). That way, no empty places will be left in your code, and the compiler will optimize the "empty" calls away.
You can also use two distinct files for the debug and release implementation, and let your IDE/build script choose the appropriate one; this involves no #defines at all. Just remember the DRY rule and make the clean code reusable in debug scenario.
I would say that this is actually very dependent on the actual problem you are facing. Some problems will benefit more from the second solution, while simple code might be better off with simple defines.
Both snippets that you describe are correct ways of using conditional compilation to enable or disable the debugging through a compile-time switch. However, your assertion that checking the debug flags at runtime "can be very inefficient, due to all the 'if' statements I should add to my code" is mostly incorrect: in most practical cases a runtime check does not influence the speed of your program in a detectable way, so if keeping the runtime flag offers you potential advantages (e.g. turning the debugging on to diagnose a problem in production without recompiling) you should go for a run-time flag instead.
For the additional checks, I would rely on assert (see assert.h), which does exactly what you need: it checks when you compile in debug, and does not check when compiled for release.
For the verbosity, a more C++ version of what you propose would use a simple Logger class with a boolean as template parameter (see the sketch below). But the macro is fine as well if kept within the Logger class.
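A minimal sketch of that Logger-with-a-bool-template-parameter idea (names are illustrative):
#include <iostream>
#include <string>

template <bool Enabled>
struct Logger {
    static void log(const std::string& msg) {
        if (Enabled)                   // compile-time constant; folded away when false
            std::clog << msg << std::endl;
    }
};

#ifdef _DEBUG_
using Log = Logger<true>;
#else
using Log = Logger<false>;
#endif

// usage: Log::log("feeding the chipmunks");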
For commercial software, having SOME debug output that is available at runtime on customer sites is usually a valuable thing to have. I'm not saying everything has to be compiled into the final binary, but it's not at all unusual that customers do things to your code that you don't expect [or that causes the code to behave in ways that you don't expect]. Being able to tell the customer "Well, if you run myprog -v 2 -l logfile.txt and do you usual thing, then email me logfile.txt" is a very, very useful thing to have.
As long as the "if-statement to decide if we log or not" is not in the deepest, darkest jungle of Peru - er, I mean in the deepest nesting levels of your tight, performance-critical loop - then it's rarely a problem to leave it in.
So, I personally tend to go for the "always there, not always enabled" approach. That's not to say that I don't find myself adding some extra logging in the middle of my tight loops sometimes - only to remove it later once the bug is fixed.
You can avoid the function-like macro when doing conditional compilation. Just define a regular or template function to do the logging and call it inside the:
#ifdef _DEBUG_
/* ... */
#endif
part of the code.
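For instance, a hedged sketch of that arrangement (log_debug is a made-up name):
#include <iostream>
#include <string>

inline void log_debug(const std::string& msg)
{
    std::clog << msg << std::endl;
}

void work()
{
#ifdef _DEBUG_
    log_debug("work() started"); // only compiled in when _DEBUG_ is set
#endif
    // ... actual work ...
}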
At least in the *Nix universe, the default define for this kind of thing is NDEBUG (read no-debug). If it is defined, your code should skip the debug code. I.e. you would do something like this:
#ifdef NDEBUG
inline void log(...) {}
#else
inline void log(...) { .... }
#endif
An example piece of code I use in my projects. This way, you can use a variable argument list, and if the DEBUG flag is not set, the related code is compiled out entirely:
#ifdef DEBUG
#define PR_DEBUG(fmt, ...) \
    printf("[DBG] %s: " fmt, __func__, ## __VA_ARGS__)
#else
#define PR_DEBUG(fmt, ...)
#endif
Usage:
#define DEBUG
<..>
ret = do_smth();
PR_DEBUG("some kind of code returned %d", ret);
Output:
[DBG] some_func: some kind of code returned 0
Of course, printf() may be replaced by any output function you use. Furthermore, it can easily be modified to automatically append additional information, for example a time stamp.
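For instance, a sketch of that time-stamp variant (it keeps the GCC/Clang ## __VA_ARGS__ extension that the original macro already relies on):
#include <stdio.h>
#include <time.h>

#ifdef DEBUG
#define PR_DEBUG(fmt, ...)                                          \
    do {                                                            \
        time_t now = time(NULL);                                    \
        char ts[16];                                                \
        strftime(ts, sizeof ts, "%H:%M:%S", localtime(&now));       \
        printf("[DBG %s] %s: " fmt, ts, __func__, ## __VA_ARGS__);  \
    } while (0)
#else
#define PR_DEBUG(fmt, ...)
#endif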
For me it varies from application to application.
I've had applications where I wanted to always log (for example, we had an application where in case of errors, clients would take all the logs of the application and send them to us for diagnostics). In such a case, the logging API should probably be based on functions (i.e. not macros) and always defined.
In cases when logging is not always necessary or you need to be able to completely disable it for performance/other reasons, you can define logging macros.
In that case I prefer a single-line macro like this:
#ifdef NDEBUG
#define LOGSTREAM /##/
#else
#define LOGSTREAM std::clog
// or
// #define LOGSTREAM std::ofstream("output.log", std::ios::out|std::ios::app)
#endif
client code:
LOGSTREAM << "Initializing chipmunk feeding module ...\n";
//...
LOGSTREAM << "Shutting down chipmunk feeding module ...\n";
It's just like any other feature.
My assumptions:
No global variables
System designed to interfaces
For whatever you want verbose output from, create two implementations: one quiet, one verbose.
At application initialisation, choose the implementation you want.
It could be a logger, or a widget, or a memory manager, for example.
Obviously you don't want to duplicate code, so extract the minimum variation you want. If you know the strategy pattern, or the decorator pattern, these are the right direction (see the sketch below). Follow the open/closed principle.
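As a sketch of that strategy-pattern direction (interface and names are illustrative):
#include <iostream>
#include <memory>
#include <string>

struct Logger {
    virtual ~Logger() = default;
    virtual void log(const std::string&) = 0;
};

struct QuietLogger : Logger {
    void log(const std::string&) override {}  // do nothing
};

struct VerboseLogger : Logger {
    void log(const std::string& m) override { std::clog << m << '\n'; }
};

// chosen once, at application initialisation
std::unique_ptr<Logger> make_logger(bool verbose)
{
    if (verbose) return std::make_unique<VerboseLogger>();
    return std::make_unique<QuietLogger>();
}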

How to separate logging logic from business logic in a C program? And in a C++ one?

I am currently coding in C and I have lots of printfs so that I can track, at times, the flow of my application. The problem is that sometimes I want more detail than others, so I usually spend my time commenting/uncommenting my C code to get the appropriate output.
When using Java or C#, I can generally separate both my implementation code from the logging logic by using Aspects.
Is there any similar technique you use in C to get around this problem?
I know I could put a flag called DEBUG that could be either on or off, so I wouldn't have to go all around and comment/uncomment my whole code every time I want to either show or hide the printfs. The question is I'd like to also get rid of the logging logic in my code.
If instead of C I was coding in C++, would it be any better?
Edit
It seems there is an AspectC++, so for C++ there seems to be a solution. What about C?
Thanks
IME you cannot really separate logging from the algorithms that you want to log about. Placing logging statements strategically takes time and experience. Usually, the code keeps assembling logging statements over its entire lifetime (though it's asymptotic). Usually, the logging evolves with the code. If the algorithm changes often, so will usually the logging code.
What you can do is make logging as unobtrusive as possible. That is, make sure logging statements always are one-liners that do not disrupt reading the algorithm, make it so others can insert additional logging statements into an existing algorithm without having to fully understand your logging lib, etc.
In short, treat logging like you treat string handling: Wrap it in a nice little lib that will be included and used just about everywhere, make that lib fast, and make it easy to use.
Not really.
If you have variadic macros available, you can easily play games like this:
#ifdef NDEBUG
#define log(...) (void)0
#else
#define log(...) do {printf("%s:%d: ", __FILE__, __LINE__); printf(__VA_ARGS__);} while(0)
#endif
You can also have logging that's turn-off-and-onable at a finer granularity:
#define LOG_FLAGS <something>;
#define maybe_log(FLAG, ...) do { if (FLAG&LOG_FLAGS) printf(__VA_ARGS__);} while(0)
int some_function(int x, int y) {
maybe_log(FUNCTION_ENTRY, "x=%d;y=%d\n", x, y);
... do something ...
maybe_log(FUNCTION_EXIT, "result=%d\n", result);
return result;
}
Obviously this can be a bit tedious, as it only really works with a single return from each function, since you can't directly get at the value being returned otherwise.
Any of those macros and calls to printf could be replaced with something (other macros, or variadic function calls) that allows the actual logging format and target to be separated from the business logic, but the fact of some kind of logging being done can't be, really.
aspectc.org does claim to offer a C and C++ compiler with language extensions supporting AOP. I have no idea what state it's in, and if you use it then of course you're not really writing C (or C++) any more.
Remember that C++ has multiple inheritance, which is sometimes helpful with cross-cutting concerns. With enough templates you can do remarkable things, perhaps even implementing your own method dispatch system that allows some sort of join points, but it's a big thing to take on.
On GCC you could use variadic macros: http://gcc.gnu.org/onlinedocs/cpp/Variadic-Macros.html . They make it possible to define dprintf() with any number of parameters.
Using an additional hidden verbose_level parameter you can filter the messages.
In this case the logging logic will only contain
dprintf_cond(flags_or_verbose_level, msg, param1, param2);
and there will be no need to separate it from the rest of the code.
A flag and proper logic is probably the safer way to do it, but you could do the same at compile time, i.e. use #define and #ifdef to include/exclude the printfs.
Hmm, this sounds similar to a problem I encountered when working on a C++ project last summer. It was a distributed app which had to be absolutely bulletproof and this resulted in a load of annoying exception handling bloat. A 10 line function would double in size by the time you added an exception or two, because each one involved building a stringstream from a looong exception string plus any relevant parameters, and then actually throwing the exception maybe five lines later.
So I ended up building a mini exception handling framework which meant I could centralise all my exception messages inside one class. I would initialise that class with my (possibly parameterised) messages on startup, and this allowed me to write things like throw CommunicationException(28, param1, param2) (variadic arguments). I think I'll catch a bit of flak for that, but it made the code infinitely more readable. The only danger, for example, was that you could inadvertently throw that exception with message #27 rather than #28.
#ifndef DEBUG_OUT
# define DBG_MSG(level, format, ...)
# define DBG_SET_LEVEL(x) do{}while(0)
#else
extern int dbg_level;
# define DBG_MSG(level, format, ...) \
    do { \
        if ((level) >= dbg_level) { \
            fprintf(stderr, (format), ## __VA_ARGS__); \
        } \
    } while (0)
# define DBG_SET_LEVEL(X) do { dbg_level = (X); } while (0)
#endif
The ## before __VA_ARGS__ is a GCC specific thing that makes , __VA_ARGS__ actually turn into the right code when there are no actual extra arguments.
The do { ... } while (0) stuff is just to make you put ; after the statements when you use them, like you do when you call regular functions.
If you don't want to get as fancy you can do away with the debug level part; it just lets you adjust how much debugging/tracing data you want.
You could turn the entire print statement into a separate function (either inline or a regular one) that would be called regardless of the debug level, and would make the decision as to printing or not internally.
#include <stdarg.h>
#include <stdio.h>

int dbg_level = 0;

void DBG_MSG(int level, const char *format, ...) {
    va_list ap;
    va_start(ap, format);
    if (level >= dbg_level) {
        vfprintf(stderr, format, ap);
    }
    va_end(ap);
}
If you are using a *nix system then you should have a look at syslog.
You might also want to search for some tracing libraries. There are a few that do similar things to what I have outlined.

compile time assertions *across modules* / c,c++

Recently I discovered in a relatively large project that ugly runtime crashes occurred because various headers were included in different orders in different .cpp files.
These headers contained #pragma pack - and these pragmas were sometimes not 'closed' (I mean, set back to the compiler default, #pragma pack()) - resulting in different object layouts in different object files. No wonder the application crashed when it accessed struct members created in one module and passed to another module, or when derived classes accessed members from base classes.
Since I like the idea of deriving a more general debugging and assertion strategy from every bug I find, I would really like to assert that object layouts are always and everywhere the same.
So it would be easy to assert
ASSERT( offsetof(membervar) == 4 )
But this would not catch a different layout in another module - or it would require manual updates whenever the struct layout changes. So my favourite idea would be something like
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
Would this be possible with an assertion? Or is this a case for a unit test?
Thanks,
H
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
I can't see a way to make this technically possible. Further, even if it were physically possible, it wouldn't be practical. You'd need an assert for every pair of source files:
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(B.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
etc
You might be able to get away with asserting sizeof(class) in different files. If the packing is causing the size of the object to be smaller, then I would expect sizeof() to show that up.
You could also do this as a static assert using C++0x's static_assert, or Boost's (or a handrolled one, of course).
On the part of not wanting to do this in every file, I would recommend putting together a header file that includes all the headers you're worried about, plus the static_asserts.
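A hedged sketch of that header (C++11 static_assert; MyClass and the 16 are placeholders for your real types and their expected sizes):
// layout_checks.h - collects the headers under suspicion plus size checks
#include "MyClass.h" // hypothetical header being checked

static_assert(sizeof(MyClass) == 16,
              "MyClass size changed: check for unbalanced #pragma pack");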
Personally though, I'd just recommend searching through the code base over the list of pragmas and fix them.
Wendy,
In Win32, there are single functions that can populate different versions of a given struct. Over the years, the FOOBAR struct might have new features added to it, so they create a FOOBAR2 or FOOBAREX. In some cases there are more than two versions.
Anyway, the way they handle this is to have the caller pass in sizeof(theStruct) in addition to the pointer to the struct:
FOOBAREX foobarex = {0};
long lResult = SomeWin32Api(sizeof(foobarex), &foobarex);
Within the implementation of SomeWin32Api(), they check the first parameter and determine which version of the struct they're dealing with.
You could do something similar in a debug build to assure that the caller and callee agree on the size of the struct being referred to, and assert if the value doesn't match the expected size. With macros, you might even be able to automate/hide this so that it only happens in a debug build.
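A rough sketch of that debug-build handshake (Foobar, consume and CONSUME are made-up names; a real version would compile the check only in debug builds):
#include <cassert>
#include <cstddef>

struct Foobar { int version; int value; };

void consume(std::size_t caller_size, Foobar* p)
{
    // caller and callee must have compiled the same layout
    assert(caller_size == sizeof(Foobar));
    (void)p; // ... use p ...
}

#define CONSUME(p) consume(sizeof(*(p)), (p))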
Unfortunately, this is a run-time check and not a compile-time check...
What you want isn't directly possible as such. If you're using VC++, the following may be of interest:
http://blogs.msdn.com/vcblog/archive/2007/05/17/diagnosing-hidden-odr-violations-in-visual-c-and-fixing-lnk2022.aspx
There's probably scope to create some way of semi-automating the process it describes, collating the output and cross-referencing.
To detect this sort of problem somewhat more automatically, the following occurs to me. Create a file that defines a struct that will have a particular size with the designated default packing amount, but a different size with different pack values. Also include some kind of static assert that its size is correct. For example, if the default is 4-byte packing:
struct X {
char c;
int i;
double d;
};
extern const char g_check[sizeof(X)==16?1:-1];
Then #include this file at the start of every header (just write a program to put the extra includes in if there are too many to do by hand), and compile and see what happens. This won't directly detect changes in struct layout, just non-standard packing settings, which is what you're interested in anyway.
(When adding new headers one would put this #include at the top, along with the usual ifdef boilerplate and so on. This is unfortunate but I'm not sure there's any way around it. The best solution is probably to ask people to do it, but assume they'll forget, and run the extra-include-inserting program every now and again...)
Apologies for posting an answer - which this is not - but I don't know how to post code in comments. Sorry.
To wrap Brone's idea in a macro, here is what we currently use (feel free to edit it):
/** Our own assert macro, which will trace a FATAL error message if the assert
* fails. A FATAL trace will cause a system restart.
* Note: I would love to use CPPUNIT_ASSERT_MESSAGE here, for a nice clean
* test failure if testing with CppUnit, but since this header file is used
* by C code and the relevant CppUnit include file uses C++ specific
* features, I cannot.
*/
#ifdef TESTING
/* ToDo: might want to trace a FATAL if integration testing */
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", message, __LINE__, __FILE__); fflush(stdout); abort();}
/* we can also use this, which prints the failed condition as its message */
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", #condition, __LINE__, __FILE__); fflush(stdout); abort();}
#else
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", message);
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", #condition);
#endif
What you would be looking for is an assertion like ASSERT_CONSISTENT(A_x, offsetof(A,x)), placed in a header file. Let me explain why, and what the problem is.
Because the problem exists across translation units, you can only detect the error at link time. That means you need to force the linker to spit out an error. Unfortunately, most cross-translation-unit problems are formally of the "no diagnostic required" kind. The most familiar one is an ODR violation. We can trivially cause ODR violations with such assertions, but you just can't rely on the linker to warn you about them. If you could, the implementation could be as simple as
#define ASSERT_CONSISTENT(label, x) class ASSERT_ ## label { char test[x]; };
But if the linker doesn't notice these ODR violations, this will pass by silently. And here lies the problem: the linker really only needs to complain if it can't find something.
With two macros the problem is solved:
template <int i> class dummy; // needed to differentiate the functions
#define ASSERT_DEFINE(label, x) void ASSERT_ ## label(dummy<x>&) { }
#define ASSERT_CHECK(label, x) void (*check_ ## label)(dummy<x>&) = &ASSERT_ ## label;
You'd need to put the ASSERT_DEFINE macro in a .cpp, and ASSERT_CHECK in its header. If the x value checked isn't the x value defined for that label, you're taking the address of an undefined function. Now, a linker doesn't need to warn about multiple definitions, but it must warn about missing definitions.
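To make the mechanics concrete, an illustrative usage (file names hypothetical):
// A.cpp:
//   ASSERT_DEFINE(A_x, offsetof(A, x))  // defines ASSERT_A_x(dummy<N>&)
// A.h:
//   ASSERT_CHECK(A_x, offsetof(A, x))   // takes &ASSERT_A_x(dummy<M>&)
//
// Any translation unit whose offsetof(A, x) value M differs from the N
// compiled into A.cpp references a function that was never defined, so
// the link fails with an unresolved symbol.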
BTW, for this particular problem, see Diagnosing Hidden ODR Violations in Visual C++ (and fixing LNK2022)