Asserts and unused variables in Visual Studio 2010 SP1 - c++

I have been using the code below for asserts in "release" builds for some time, with no issues ever.
Then along came Visual Studio 2010 Pro SP1, and things went south, as also happened to Mr. Krunthar.
Problem is, when I have a piece of code in which I do sanity checks like this:
#define ASSERT(condition, msg) do { (void)sizeof(condition); } while (0,0)
// Note: (0,0) is to avoid warning C4127: conditional expression is constant
{
    int result = CallMeOnce(); // its side effects are the important stuff
    // perform additional sanity checks in debug
    ASSERT(result >= 0, "too low");
    ASSERT(result <= 100, "too high");
    ASSERT(!isPrime(result), "too prime");
}
VS2010 spits out a warning C4189: 'result' : local variable is initialized but not referenced
I am at a loss on how to fix that:
Code like (void)(condition) will execute any expression passed as condition, which is a no-no
Putting CallMeOnce() inside the ASSERT expression is impossible
Refactoring all the different CallMeOnce()s is NOT an option
I'd rather not write scaffolding code like (void)result, if (result == result) {}, or UNREFERENCED_PARAMETER(result) (or equivalent) outside the macro just to avoid the warning: it makes the code even harder to read (pollution), it is easy to forget while writing code in Debug, and it would be needed in lots of places!
I'm considering creating another macro (ASSERTU?) just for variables, but it feels so... quirky!
Has anyone found a better way out?
Thanks a lot!
Edit: Clarified preference for the variable handling at caller's level

In your assert macro you have
(void)sizeof(condition);
Presumably this code was written by someone else, so, explanation: the rôle of the (void) cast is to tell the compiler that you really intended this do-nothing expression statement to do nothing.
Now do the same for your result.
That was easy, wasn't it? Sometimes the solution is just staring you in the face. ;-)
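For instance, at the caller's level that would look like this (a minimal sketch reusing the question's CallMeOnce and ASSERT; the (void) line is the only change):
{
    int result = CallMeOnce(); // its side effects are the important stuff
    (void) result;             // intentional: references result, so C4189 stays quiet in release
    ASSERT(result >= 0, "too low");
    ASSERT(result <= 100, "too high");
    ASSERT(!isPrime(result), "too prime");
}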
By the way, when this construct is used to suppress warnings about unused formal arguments, you might want to add a redefinition of the name, like
(void) unusedArg; struct unusedArg;
This prevents inadvertently using the argument in later maintenance of the code (however, the error generated by Visual C++ is not exactly informative).
There are umpteen levels of sophistication that can be added, but I think even the name redefinition is perhaps going too far; the cost is greater than the advantage, perhaps.

You can use the UNREFERENCED_PARAMETER macro.
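For example (a sketch; UNREFERENCED_PARAMETER comes from the Windows headers, so this assumes <windows.h> or at least <winnt.h> is already included):
int result = CallMeOnce();
UNREFERENCED_PARAMETER(result); // marks result as deliberately referenced-but-unused for the compiler
ASSERT(result >= 0, "too low");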

It seems I got somewhere!
#define ASSERT(condition, msg) \
    do { \
        if (0,0) { \
            (void)(condition); \
        } \
    } while (0,0)
Mandatory explanation:
(void)(condition); will suppress C4189, but will execute any expression or function call passed in.
However, if (0,0) { ... } behaves like if (false) { ... }: it makes sure that whatever (valid expression) "..." may be, it will not be executed. The code optimization phase will see it as dead code and throw it away (no code was generated at all for the block in my tests!).
Finally, the owl trick (0,0) will prevent C4127, which seems a quite useless warning in the first place but hey, less clutter in the compilation output!
The only weakness I could find to this solution is that condition needs to be compilable code, so if you #ifdef-ed out part of the expression, it will raise an error. It might be that it's also compiling (though not calling) the code for the called functions; more research would be useful.

This is much nicer. Also, it's an expression instead of a statement:
#define ASSERT(condition, msg) ( false ? (void)(condition) : (void)0 )
though you might want both debug and release versions of your assert to have the same semantic, so a do {...} while (0,0) around it might be appropriate.
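Putting those pieces together, a matching debug/release pair might look something like this (a sketch; ASSERT_FAIL is a hypothetical debug-mode handler, not something from the question):
#ifdef NDEBUG
// release: the condition is type-checked but never evaluated
#define ASSERT(condition, msg) \
    do { if (0,0) { (void)(condition); } } while (0,0)
#else
// debug: evaluate the condition and report failures
#define ASSERT(condition, msg) \
    do { if (!(condition)) { ASSERT_FAIL(#condition, msg, __FILE__, __LINE__); } } while (0,0)
#endif
Both versions are single statements that demand a trailing semicolon, so switching configurations never changes how the macro parses at the call site.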

You can use pairs of __pragma(warning(push)) __pragma(warning(disable: 4127)) and __pragma(warning(pop)) to silence C4127 just for the ASSERT line.
Then (void)(true ? (void)0 : ((void)(expression))) silences C4189.
This is an excerpt from my own implementation of an assertion macro.
The PPK_ASSERT(expression) macro will ultimately expand to PPK_ASSERT_3(level, expression) or PPK_ASSERT_UNUSED(expression) depending on whether assertions are enabled or disabled.
#define PPK_ASSERT_3(level, expression, ...)\
    __pragma(warning(push))\
    __pragma(warning(disable: 4127))\
    do\
    {\
        static bool _ignore = false;\
        if (PPK_ASSERT_LIKELY(expression) || _ignore || pempek::assert::implementation::ignoreAllAsserts());\
        else\
        {\
            if (pempek::assert::implementation::handleAssert(PPK_ASSERT_FILE, PPK_ASSERT_LINE, PPK_ASSERT_FUNCTION, #expression, level, _ignore, __VA_ARGS__) == pempek::assert::implementation::AssertAction::Break)\
                PPK_ASSERT_DEBUG_BREAK();\
        }\
    }\
    while (false)\
    __pragma(warning(pop))
and
#define PPK_ASSERT_UNUSED(expression) (void)(true ? (void)0 : ((void)(expression)))

Related

Should I use if unlikely for hard crashing errors?

I often find myself writing a code that looks something like this:
if(a == nullptr) throw std::runtime_error("error at " __FILE__ ":" S__LINE__);
Should I prefer handling errors with if unlikely?
if unlikely(a == nullptr) throw std::runtime_error("error at " __FILE__ ":" S__LINE__);
Will the compiler automatically deduce which part of the code should be cached or is this an actually useful thing to do? Why do I not see many people handling errors like this?
Yes you can do that. But even better is to move the throw to a separate function, and mark it with __attribute__((cold, noreturn)). This will remove the need to say unlikely() at each call site, and may improve code generation by moving the exception throwing logic entirely outside the happy path, improving instruction cache efficiency and inlining possibilities.
If you prefer to use unlikely() for semantic notation (to make the code easier to read), that's fine too, but it isn't optimal by itself.
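A minimal sketch of that suggestion (GCC/Clang attribute syntax; the function name throw_null_error is made up for illustration):
#include <stdexcept>
#include <string>

// out of line, cold and non-returning: the throw machinery stays off the happy path
__attribute__((cold, noreturn))
void throw_null_error(const char* file, int line)
{
    throw std::runtime_error(std::string("error at ") + file + ":" + std::to_string(line));
}

void use(const int* a)
{
    if (a == nullptr)
        throw_null_error(__FILE__, __LINE__); // no unlikely() needed at the call site
    // ... happy path ...
}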
Should I use "if unlikely" for hard crashing errors?
For cases like that I'd prefer moving the code that throws to a standalone extern function that's marked noreturn. This way, your actual code isn't "polluted" with lots of exception-related code (or whatever your "hard crashing" code is). Contrary to the accepted answer, you don't need to mark it cold, but you really do need noreturn so that the compiler doesn't try to generate code to preserve registers or other state, and essentially assumes that after going there, there is no way back.
For example, if you write code this way:
#include <stdexcept>
#define _STR(x) #x
#define STR(x) _STR(x)
void test(const char* a)
{
    if (a == nullptr)
        throw std::runtime_error("error at " __FILE__ ":" STR(__LINE__));
}
the compiler will generate lots of instructions that deal with constructing and throwing this exception. You also introduce a dependency on std::runtime_error. Check out how the generated code looks if you have just three checks like that in your test function:
First improvement: move it to a standalone function:
void my_runtime_error(const char* message);
#define _STR(x) #x
#define STR(x) _STR(x)
void test(const char* a)
{
    if (a == nullptr)
        my_runtime_error("error at " __FILE__ ":" STR(__LINE__));
}
This way you avoid generating all that exception-related code inside your function. Right away the generated instructions become simpler and cleaner, and the effect on the instructions generated for your actual checks is reduced:
There is still room for improvement. Since you know that my_runtime_error won't return, you should let the compiler know about it, so that it doesn't need to preserve registers before calling my_runtime_error:
#if defined(_MSC_VER)
#define NORETURN __declspec(noreturn)
#else
#define NORETURN __attribute__((__noreturn__))
#endif
void NORETURN my_runtime_error(const char* message);
...
When you use it multiple times in your code you can see that the generated code is much smaller, and its effect on the instructions generated by your actual code is reduced:
As you can see, this way the compiler doesn't need to preserve registers before calling your my_runtime_error.
I would also suggest against concatenating error strings with __FILE__ and __LINE__ into monolithic error message strings. Pass them as standalone parameters and simply make a macro that passes them along!
void NORETURN my_runtime_error(const char* message, const char* file, int line);
#define MY_ERROR(msg) my_runtime_error(msg, __FILE__, __LINE__)
void test(const char* a)
{
    if (a == nullptr)
        MY_ERROR("error");
    if (a[0] == 'a')
        MY_ERROR("first letter is 'a'");
    if (a[0] == 'b')
        MY_ERROR("first letter is 'b'");
}
It may seem like more code is generated per my_runtime_error call (2 more instructions in an x64 build), but the total size is actually smaller, as the space saved on the constant strings far outweighs the extra code size.
Also, note that these code examples are good for showing the benefit of making your "hard crashing" function extern. The need for noreturn becomes more obvious in real code, for example:
#include <math.h>
#if defined(_MSC_VER)
#define NORETURN __declspec(noreturn)
#else
#define NORETURN __attribute__((noreturn))
#endif
void NORETURN my_runtime_error(const char* message, const char* file, int line);
#define MY_ERROR(msg) my_runtime_error(msg, __FILE__, __LINE__)
double test(double x)
{
    int i = floor(x);
    if (i < 10)
        MY_ERROR("error!");
    return 1.0*sqrt(i);
}
Generated assembly:
Try to remove NORETURN, or change __attribute__((noreturn)) to __attribute__((cold)) and you'll see completely different generated assembly!
As a last point (which is obvious IMO but was omitted): you need to define your my_runtime_error function in some .cpp file. Since there is going to be only one copy of it, you can put whatever code you want in this function.
#include <stdexcept>
#include <string>

void NORETURN my_runtime_error(const char* message, const char* file, int line)
{
    // you can log the message over the network,
    // save it to a file, and finally throw the error:
    std::string msg = message;
    msg += " at ";
    msg += file;
    msg += ":";
    msg += std::to_string(line);
    throw std::runtime_error(msg);
}
One more point: clang actually recognizes that this type of function would benefit from noreturn and warns about it if the -Wmissing-noreturn warning is enabled:
warning: function 'my_runtime_error' could be declared with attribute 'noreturn' [-Wmissing-noreturn]
It depends.
First of all, you can definitely do this, and this will likely (pun intended) not harm the performance of your application. But please note that likely/unlikely attributes are compiler-specific, and should be decorated accordingly.
Secondly, if you want a performance gain, the outcome will depend on the target platform (and the corresponding compiler backend). If we're talking about the 'default' x86 architecture, you will not get much profit on modern chips; the only change these attributes produce is a change in code layout (unlike earlier times, when x86 supported software branch-prediction hints). For small branches (like your example), it will have very little effect on cache utilization and/or front-end latencies.
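For reference, the usual portable spelling is something like this (a sketch; __builtin_expect is a GCC/Clang builtin, and MSVC has no direct equivalent before C++20's [[likely]]/[[unlikely]] attributes):
#if defined(__GNUC__) || defined(__clang__)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)
#else
#define UNLIKELY(x) (x) // no-op on compilers without the builtin
#endif

// usage:
// if (UNLIKELY(a == nullptr)) throw std::runtime_error("error");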
UPDATE:
Will the compiler automatically deduce which part of the code should be cached or is this an actually useful thing to do?
This is actually a very wide and complicated topic. What the compiler will do depends on the particular compiler, its backend (target architecture) and the compilation options. Again, for x86, there is the following rule (taken from the Intel® 64 and IA-32 Architectures Optimization Reference Manual):
Assembly/Compiler Coding Rule 3. (M impact, H generality) Arrange code to be consistent with the static branch prediction algorithm: make the fall-through code following a conditional branch be the likely target for a branch with a forward target, and make the fall-through code following a conditional branch be the unlikely target for a branch with a backward target.
As far as I'm aware, that's the only thing that's left from static branch prediction in modern x86, and likely/unlikely attributes might only be used to "overwrite" this default behaviour.
Since you're "crashing hard" anyways, I'd go with
#include <cassert>
...
assert(a != nullptr);
This is compiler-independent, should give you close to optimal performance, gives you a breakpoint when running in a debugger, generates a core dump when not in a debugger, and can be disabled by setting the NDEBUG preprocessor symbol, which many build systems do by default for release builds.
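In context that is simply (a sketch; the surrounding function is the one from the question):
#include <cassert>

void test(const char* a)
{
    assert(a != nullptr); // aborts with file/line info in debug; compiled out when NDEBUG is defined
    // ... use a ...
}

// typical release build: g++ -O2 -DNDEBUG test.cpp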

Is it possible to have a zero-cost assert() such that code should not have to be modified between debug and release builds?

I've noticed that some code often looks like this:
#ifdef DEBUG
assert(i == 1);
#endif //DEBUG
and that you may have several blocks of these sitting around in your raw code. Having to write out each block is tedious and messy.
Would it be plausible to have a function like this:
auto debug_assert = [](auto expr) {
#ifdef DEBUG
assert(expr);
#endif //DEBUG
};
or something like this:
#ifdef DEBUG
auto debug_assert = [](bool expr) {
assert(expr);
};
#else //DEBUG
void debug_assert(bool expr) {}
#endif //DEBUG
to get a zero-cost assert when the DEBUG flag is not specified? (i.e. it should have the same effect as if it were not in the code at all: the lambda should not run, and the whole thing should be optimized out by the g++/clang compilers.)
As mentioned by @KerrekSB, you can already disable asserts by defining NDEBUG before including <cassert>. The best way to ensure that it's defined before the header file is included is to pass it as an argument to the compiler (with gcc it's -DNDEBUG).
Note: the assert is removed by replacing it with a no-op expression, and in that case the argument isn't evaluated at all (which is different from your suggested solution)! This is why it is of utmost importance not to call any functions that have side effects in assert.
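A classic illustration of that pitfall (a sketch; the only point is where the side effect lives):
#include <cassert>
#include <cstdio>

void close_log(std::FILE* f)
{
    // Wrong: with NDEBUG defined the whole expression disappears, so the file is never closed.
    assert(std::fclose(f) == 0);
}

void close_log_correctly(std::FILE* f)
{
    int rc = std::fclose(f); // the side effect runs in every build
    assert(rc == 0);         // only the check disappears in release
    (void)rc;                // keeps release builds free of an unused-variable warning
}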
For completeness: here is how assert can be implemented:
#include <cstdio>
#include <cstdlib>
#ifndef NDEBUG
#define assert(EXPRESSION) ((EXPRESSION) ? (void)0 : (printf("assertion failed at line %d, file %s: %s\n", __LINE__, __FILE__, #EXPRESSION), exit(-1)))
#else
#define assert(EXPRESSION) (void)0
#endif
Introducing your own assert-style macro is very commonly done. There are quite a lot of reasons you may want to do this:
you want to include more information about the evaluated expression (see Catch's REQUIRE and how it uses expression templates to decompose the expression into individual elements and stringify them)
you want to take some action other than exit()ing the program, like throwing an exception, mailing the developer, logging to a file, or breaking into the debugger
you want to evaluate the expression even in release builds, which is less error-prone than not evaluating it at all (after all, if it doesn't have side effects it can be eliminated by compiler optimizations, and if it does, you just avoided a heisenbug); see the sketch after this list
and so on, and so on (if you have an idea, you can post a comment, I'll add it to the answer)
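As a sketch of that third point, an MFC-style VERIFY keeps evaluating its argument in release builds while only checking it in debug builds (the macro here is illustrative, not a standard API):
#include <cassert>

#ifdef NDEBUG
#define VERIFY(expr) ((void)(expr)) // the expression still executes in release
#else
#define VERIFY(expr) assert(expr)   // checked (and evaluated) in debug
#endif

// usage (LoadConfig is a placeholder): VERIFY(LoadConfig("app.cfg"));
// the call itself is never dropped by the macro in either configuration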

How to make robust assert?

I want to achieve this behavior:
When the program runs in debug mode, assert_robust(expression, commands) works strictly like the classical assert(expression)
When the program runs in release mode, assert_robust(expression, commands) performs some commands if expression is false
This can be done this way:
#ifdef NDEBUG
#define assert_robust(expression, command) if (!(expression)) command;
#else
#define assert_robust(expression, command) assert(expression);
#endif
And this can be used for example this way to make myfunction fault-tolerant:
char myfunction(const string& s, int i)
{
    assert_robust(i >= 0, return '\0');
    /* Normal code */
}
This works well, but how do I make the assert_robust macro support an arbitrary number of commands? (preferably in a standard C++ way)
And another question is:
Is it good thing to be strict in debug and benevolent in release?
EDIT: My reason for doing such a thing is that, when there is a bug in the program, it is practically much better for the program to occasionally behave a little weirdly than for it to crash and for users to lose their data.
The more interesting question is the second:
Is it good thing to be strict in debug and benevolent in release?
My experience is that it is a horrible idea to have different behavior in debug and release builds. You are signing up for issues in production that you will never be able to reproduce in a debug build, because the behavior of the two is different.
Other than that (which you may claim won't be an issue if you assert in debug mode in the first place), asserts should be used to flag programming issues: situations from which you cannot recover safely. If you can recover safely in release mode, why assert in DEBUG? If you cannot, are you willing to fiddle with production data in a way whose consequences you don't quite understand?
Without getting into whether this is a good idea or not, you can have your macro wrap multiple commands in a do { ... } while(0) loop.
#ifdef NDEBUG
#define assert_robust(expression, command) if (!(expression)) \
do{command;} while(0)
#else
#define assert_robust(expression, command) assert(expression)
#endif
Note also that I did not include semicolons at the ends of the macros. If you include them in the macros, then something like
assert_robust(cond1, command1) /* no semicolon here, no problem */
assert_robust(cond2, command2) /* no semicolon here, no problem */
would be allowed, which would be really weird.
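To address the "arbitrary number of commands" part of the question directly, a variadic variant is one option (a sketch; it needs variadic-macro support, i.e. C++11 or a compiler extension):
#include <cassert>

#ifdef NDEBUG
#define assert_robust(expression, ...) \
    do { if (!(expression)) { __VA_ARGS__; } } while (0)
#else
#define assert_robust(expression, ...) assert(expression)
#endif

// usage (log_error is a placeholder): several statements, separated by semicolons,
// travel as one block, and __VA_ARGS__ also tolerates top-level commas in the recovery code
// assert_robust(i >= 0, log_error("bad index"); return '\0');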
I don't think using assertions this way is a good idea. Usually you use an assert if you want the predicate to always be true, because it's part of critical code. If it's not true, then there is obviously a big problem and aborting is reasonable. But more and more people use assert like an ordinary error check for debugging. In that case it's adequate to disable it completely in release mode. I think you should decide on one of these two approaches.
But if you want to run some kind of emergency commands before aborting, you could use the new lambda functions from C++11:
#include <cstdlib>    // abort
#include <functional> // std::function

void ASSERT(bool expression, std::function<void()> func) {
    if (!expression) {
        if (func) func();
        abort();
    }
}
You could use it like this:
ASSERT(a >= 0, []() { std::cerr << "ERROR" << std::endl;});
Or:
ASSERT(a >= 0, [this]() { this->terminate(); });
Or:
ASSERT(a >= 0, nullptr);

Get return type of function in macro (C++)

I have an ASSERT(x) macro and I want it to return from the enclosing function if the assertion fails (in the release configuration).
To do this I need to know the return type of the function where I use this ASSERT. How do I get it (I'm on C++03, LLVM GCC 4.2 compiler)?
My ASSERT macro:
#define ASSERT(x) \
if(!(x)) { \
    LOG ("ASSERT in %s: %d", __FILE__, __LINE__); \
    return /*return_type()*/; \
}
PS: I tried return 0; but the compiler shows an error for void functions (and I didn't try it for complex return types); with a plain return; there is an error for non-void functions.
(Updated...)
I'll answer werewindle, nyarlathotep and jdv-Jan de Vaan here. I use the standard assert for the debug configuration. But after beta-testing I still get crash reports from final customers, and in most cases I need to change my crashing functions:
ASSERT (_some_condition_);
if (!_some_condition_) // add this return
return _default_value_;
I understand that my program may still crash later (otherwise it will definitely crash in the current function). Also, I can't quit the application because I'm developing for iPhone (apps may not quit programmatically there). So the easiest way would be to "auto return" when an assertion fails.
You can't determine the return type of the surrounding function in a macro; macros are expanded by the preprocessor, which doesn't have this kind of information about the surroundings where these macros occur; it basically just "searches and replaces" the macros. You would have to write separate macros for each return type.
But why not exit the program (i.e. call the exit function)? Just returning from a function doesn't seem like very robust error handling. Failed assertions should, after all, only occur when something is terribly wrong (meaning the program is in a state it was not designed to handle), so it is best to quit the program as soon as possible.
There is no proper way to determine the return type inside a function in C.
Also, if you somehow implement your variant of ASSERT, it will lead to erroneous program behavior. The main idea of ASSERT is: if it fails, then the program is in an undefined state and the only proper thing to do is to stop it right away, i.e. by calling exit().
I think you can do this with a template function that you call from within the macro (note that default is a keyword, so the function needs another name), something like
template<class T> T default_value(T) { return T(); }
That will work for everything with a default constructor. I think you need to write a special macro for void.
I hope I got the template syntax right; my C++ is getting rusty.
You can't do that; the C/C++ preprocessor is pretty basic and it can't do any code analysis. At most what you can do is pass the return type to the macro.
But here's my opinion: you're using assertions the wrong way. They should only be used for sanity checks of the code (for errors that can only happen because of the programmer); if all assertions pass, you don't need to care about them, and you don't need to log them.
And not only that, but (in general) you should apply the principle of least surprise. Do you expect ASSERT to log something and then forcefully make the function return? I know I wouldn't. I either expect it to close the application completely (what the standard assert does) or let me decide what happens next (maybe I have some pointers to free).
Macros do not return values, as they are not functions per se. They are substituted into the source code where they are used, so the return happens in the function where the macro is used.
There is no way for a macro to discover the enclosing function's return type.
You could just define another macro for your needs:
#define ASSERT(x) \
if(!(x)) { \
LOG ("ASSERT in %s: %d", __FILE__, __LINE__); \
ASSERT_DEFAULT_RETURN(); \
}
And then inside a function:
int foo() {
#ifdef ASSERT_DEFAULT_RETURN
#undef ASSERT_DEFAULT_RETURN
#endif
#define ASSERT_DEFAULT_RETURN() return 0
    // ...
    ASSERT(some_expression);
    // ...
    // cleanup
#undef ASSERT_DEFAULT_RETURN
}
Just do this:
#define ASSERT(x, ret, type) \
if(!(x)){ \
    LOG ("ASSERT in %s: %d", __FILE__, __LINE__); \
    return (type) ret; \
}
I believe you are trying to solve the wrong problem. If you don't want your program to crash on assertions, you had best improve your testing.
Having a 'return' in these assertions gives you a false sense of security. Instead it hides your problems and causes unexpected behavior in your program, so the bugs you do get are much more complex. A colleague of mine actually wrote a good blog post about it.
If you really want it, you could try writing return {}; so it default-constructs the value, or have an assert macro where you also provide the failure case. However, I really don't recommend it!
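If you do go down that road anyway, the C++11 idea alluded to above might look like this (a sketch; it reuses the question's LOG macro, and it only works in functions whose return type is default-constructible, not void):
#define ASSERT_OR_RETURN(x)                              \
    do {                                                 \
        if (!(x)) {                                      \
            LOG("ASSERT in %s: %d", __FILE__, __LINE__); \
            return {}; /* default-constructs the enclosing function's return type */ \
        }                                                \
    } while (0)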

How to properly rewrite ASSERT code to pass /analyze in msvc?

Visual Studio added code analysis (/analyze) for C/C++ in order to help identify bad code. This is quite a nice feature but when you deal with and old project you may be overwhelmed by the number of warnings.
Most of the warnings are generated because the old code does some ASSERT at the beginning of the method or function.
I think this is the ASSERT definition used in the code (from afx.h)
#define ASSERT(f) DEBUG_ONLY((void) ((f) || !::AfxAssertFailedLine(THIS_FILE, __LINE__) || (AfxDebugBreak(), 0)))
Example code:
ASSERT(pBytes != NULL);
*pBytes = 0; // <- warning C6011: Dereferencing NULL pointer 'pBytes'
I'm looking for an easy, clean and safe solution to solve these warnings that does not imply disabling these warnings. Did I mention that there are lots of occurrences in current codebase?
/analyze is not guaranteed to yield relevant and correct warnings.
It can and will miss a lot of issues, and it also gives a number of false positives (things it identifies as warnings, but which are perfectly safe and will never actually occur)
It is unrealistic to expect to have zero warnings with /analyze.
It has pointed out a situation where you dereference a pointer which it can not verify is always valid. As far as PREfast can tell, there's no guarantee that it will never be NULL.
But that doesn't mean it can be NULL. Just that the analysis required to prove that it's safe is too complex for PREfast.
You may be able to use the Microsoft-specific extension __assume to tell the compiler that it shouldn't produce this warning, but a better solution is to leave the warning. Every time you compile with /analyze (which need not be every time you compile), you should verify that the warnings it does come up with are still false positives.
If you use your asserts correctly (to catch logic errors during programming, guarding against situations that cannot happen), then I see no problem with your code, or with leaving the warning. Adding code to handle a problem that can never occur is just pointless. You're adding more code and more complexity for no reason (if it can never occur, then you have no way of recovering from it, because you have absolutely no clue what state the program will be in; all you know is that it has entered a code path you thought impossible).
However, if you use your assert as actual error handling (the value can be NULL in exceptional cases, you just expect that it won't happen), then it is a defect in your code. Then proper error handling (exceptions, typically) is needed.
Never ever use asserts for problems that are possible. Use them to verify that the impossible isn't happening. And when /analyze gives you warnings, look at them. If it is a false positive, ignore it (don't suppress it, because while it's a false positive today, the code you check in tomorrow may turn it into a real issue).
PREfast is telling you that you have a defect in your code; don't ignore it. You do in fact have one, but you have only skittered around acknowledging it. The problem is this: just because pBytes has never been NULL in development & testing doesn't mean it won't be in production. You don't handle that eventuality. PREfast knows this, and is trying to warn you that production environments are hostile, and will leave your code a smoking, mutilated mass of worthless bytes.
/rant
There are two ways to fix this: the Right Way, and a hack.
The right way is to handle NULL pointers at runtime:
void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    if( !pBytes )
        return;
    *pBytes = 0;
}
This will silence PREfast.
The hack is to use an annotation. For example:
void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    __analysis_assume( pBytes );
    *pBytes = 0;
}
EDIT: Here's a link describing PREfast annotations. A starting point, anyway.
Firstly, your assertion statement must guarantee to throw or terminate the application. After some experimentation I found that in this case /analyze ignores any implementation in template functions, inline functions or normal functions. You must instead use macros and the do {} while(0) trick, with inline suppression of warning C4127.
If you look at the definition of ATLENSURE(), Microsoft uses __analysis_assume() in their macro; they also have several paragraphs of very good documentation on why and how they are migrating ATL to use this macro.
As an example of this I have modified the CPPUNIT_ASSERT macros in the same way to clean up thousands of warnings in our unit tests.
#define CPPUNIT_ASSERT(condition) \
    do { ( CPPUNIT_NS::Asserter::failIf( !(condition), \
            CPPUNIT_NS::Message( "assertion failed" ), \
            CPPUNIT_SOURCELINE() ) ); \
        __analysis_assume(!!(condition)); \
        __pragma( warning( push)) \
        __pragma( warning( disable: 4127 )) \
    } while(0) \
    __pragma( warning( pop))
Remember, ASSERT() goes away in a retail build, so the C6011 warning is absolutely correct for your code above: you must check that pBytes is non-null as well as doing the ASSERT(). The ASSERT() simply throws your app into the debugger if the condition fails in a debug build.
I work a great deal on /analyze and PREfast, so if you have other questions, please feel free to let me know.
You seem to assume that ASSERT(ptr) somehow means that ptr is not NULL afterwards. That's not true, and the code analyzer doesn't make that assumption.
My co-author David LeBlanc would tell me this code is broken anyway: assuming you're using C++, you should use a reference rather than a pointer, and a ref can't be NULL :)