What are assertions, and why would you use them? - C++

How are assertions done in C++? Example code is appreciated.

Asserts are a way of explicitly checking the assumptions that your code makes, which helps you track down lots of bugs by narrowing down what the possible problems could be. They are typically only evaluated in a special "debug" build of your application, so they won't slow down the final release version.
Let's say you wrote a function that took a pointer as an argument. There's a good chance that your code will assume that the pointer is non-NULL, so why not explicitly check that with an assertion? Here's how:
#include <assert.h>

void function(int* pointer_arg)
{
    assert(pointer_arg != NULL);
    ...
}
An important thing to note is that the expressions you assert must never have side effects, since they won't be present in the release build. So never do something like this:
assert(a++ == 5);
Some people also like to add little messages into their assertions to help give them meaning. Since a string literal always evaluates to true, you could write this:
assert((a == 5) && "a has the wrong value!!");

Assertions are Boolean expressions which should typically always be true.
They are used to ensure that what you expected is also what actually happens.
void some_function(int age)
{
    assert(age > 0);
}
You wrote the function to deal with ages, and you also 'know' for sure that you're always passing sensible arguments, so you use an assert. It's like saying "I know this can never go wrong, but if it does, I want to know", because, well, everyone makes mistakes.
So it's not for checking sensible user input: if there are scenarios where something can go wrong, don't use an assert. Do real checks and deal with the errors.
Asserts are typically only for debug builds, so don't put code with side effects in asserts.

Assertions are used to verify design assumptions, usually in terms of input parameters and return results. For example
// Given customer and product details for a sale, generate an invoice
Invoice ProcessOrder(Customer Cust, Product Prod)
{
    assert(IsValid(Cust));
    assert(IsValid(Prod));

    // ...

    assert(IsValid(RetInvoice));
    return RetInvoice;
}
The assert statements aren't required for the code to run, but they check the validity of the input and output. If the input is invalid, there is a bug in the calling function. If the input is valid and output is invalid, there is a bug in this code. See design by contract for more details of this use of asserts.
Edit: As pointed out in other posts, the default implementation of assert is not included in the release run-time. A common practice that many use, including myself, is to replace it with a version that is included in the release build but is only called in a diagnostics mode. This enables proper regression testing on release builds with full assertion checking. My version is as follows:
extern void _my_assert(const char *, const char *, unsigned);

#define myassert(exp) \
do { \
    if (InDiagnostics && !(exp)) \
        _my_assert(#exp, __FILE__, __LINE__); \
} while (0)
There is a small runtime overhead in this technique, but it makes tracking any bugs that have made it into the field much easier.
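For reference, a minimal sketch of what the _my_assert handler and the InDiagnostics flag might look like (this is an illustration, not the original poster's actual implementation):

#include <cstdio>
#include <cstdlib>

bool InDiagnostics = false;  // assumed global switch, enabled when running diagnostics

void _my_assert(const char *expr, const char *file, unsigned line)
{
    // Report where the check failed, then stop, as the standard assert would.
    std::fprintf(stderr, "Assertion failed: %s (%s:%u)\n", expr, file, line);
    std::abort();
}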

Use assertions to check for "can't happen" situations.
Typical usage: check against invalid/impossible arguments at the top of a function.
Seldom seen, but still useful: loop invariants and postconditions.
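For example, a loop invariant and a postcondition in a toy binary search might look like this (a sketch; the function and its invariant are made up for illustration):

#include <cassert>
#include <cstddef>
#include <vector>

// Find the index of the first element >= target in a sorted vector.
std::size_t lower_bound_index(const std::vector<int>& v, int target)
{
    std::size_t lo = 0, hi = v.size();
    while (lo < hi)
    {
        // Loop invariant: the answer always lies within [lo, hi].
        assert(lo <= hi && hi <= v.size());
        std::size_t mid = lo + (hi - lo) / 2;
        if (v[mid] < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    // Postcondition: lo is past the end, or points at the first element >= target.
    assert(lo == v.size() || v[lo] >= target);
    return lo;
}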

Assertions are statements allowing you to test any assumptions you might have in your program. This is especially useful for documenting your program logic (preconditions and postconditions). Assertions that fail usually abort the program with a runtime error, and are signs that something is VERY wrong with your program: your assertion failed because something you assumed to be true was not. The usual reasons are: there is a flaw in your function's logic, or the caller of your function passed you bad data.

An assertion is something you add to your program that causes the program to stop immediately and display an error message if the asserted condition turns out to be false. You generally use them for things which you believe can never happen in your code.

This doesn't address the assert facility which has come down to us from early C days, but you should also be aware of Boost StaticAssert functionality, in the event that your projects can use Boost.
The standard C/C++ assert works during runtime. The Boost StaticAssert facility enables you to make some classes of assertions at compile time, catching logic errors and the like even earlier.
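For example (a small sketch; BOOST_STATIC_ASSERT lives in <boost/static_assert.hpp>):

#include <boost/static_assert.hpp>

// Fails at compile time, not at run time, if the platform assumption is wrong.
BOOST_STATIC_ASSERT(sizeof(int) >= 4);

template <typename T>
class SmallObjectPool
{
    // Reject instantiations that violate the design assumption.
    BOOST_STATIC_ASSERT(sizeof(T) <= 64);
};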

Here is a definition of what an assertion is and here is some sample code. In a nutshell an assertion is a way for a developer to test his (or her) assumptions about the state of the code at any given point. For example, if you were doing the following code:
mypointer->myfunct();
You probably want to assert that mypointer is not NULL because that's your assumption--that mypointer will never be NULL before the call.
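Spelled out, that check is simply:

assert(mypointer != NULL);  // make the assumption explicit
mypointer->myfunct();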

Related

Exceptions vs assert for a scientific computing guy (I am the sole user of my code)?

Exceptions vs assert has been asked here before: Design by contract using assertions or exceptions?, Assertion VS Runtime exception, C++ error-codes vs ASSERTS vs Exceptions choices choices :(, etc. (*) There are also books, like Herb Sutter's Coding Standards, that talk about this. The general consensus seems to be this:
Use assertions for internal errors, in the sense that the user of the module and the developer are one and the same person/team. Use exceptions for everything else. (**)
This rule makes a lot of sense to me, except for one thing. I am a scientist, using C++ for scientific simulations. In my particular context, this means that I am the sole user of most of my code. If I apply this rule, does it mean I never have to use exceptions? I guess not: for example, there are still I/O errors, or memory allocation issues, where exceptions are still necessary. But apart from those interactions of my program with the "outside world", are there other scenarios where I should be using exceptions?
In my experience, many good programming practices have been very useful to me, in spite of those practices being designed mostly for large complex systems or for large teams, while my programs are mostly small scientific simulations which are written mostly by me alone. Hence this question. What good practices of exception use apply in my context? Or should I use only asserts (and exceptions for I/O, memory allocation, and other interactions with the "outside world")?
(*) I hope that after reading the complete question, you agree that this is not a duplicate. The topic of exceptions vs assert has been dealt with before in general, but, as I try to explain here, I don't feel that any of those questions addresses my particular situation.
(**) I wrote this in my own words, trying to summarize what I've read. Feel free to criticize this statement if you feel it does not reflect the majority's consensus.
assert() is a safeguard against programmer mistakes, while exceptions are safeguards against the rest of existence.
Let's explain this with an example:
double divide(double a, double b) {
    return a / b;
}
The obvious problem of this function is that if b == 0, you'll get an error.
Now, let's assume this function is called with arguments whose values are decided by you and only you. You can detect the problem by changing the function into this:
double divide(double a, double b) {
    ASSERT(b != 0);
    return a / b;
}
If you have accidentally made a mistake in your code so that b can take a 0 value, you're covered, and can fix the calling code, either by testing explicitly for 0, or by making sure such a condition never occurs in the first place.
As long as this assertion is in place, you will get some level of protection as the developer of the code.
It is a contract that makes it easy to see what kind of problem can occur in the function, especially while you are testing your code.
Now, what happens if you have no control over the values that are passed to the function?
The assertion will just disrupt the flow of the program without any protection whatsoever.
The sensible thing to do is this:
double divide(double a, double b) {
    ASSERT(b != 0);
    if (b == 0)
        throw DivideByZeroException();
    return a / b;
}

try {
    result = divide(num, user_val);
} catch (DivideByZeroException & e) {
    display_informative_message_to_user(e);
}
Note that the assertion is still in place because it is the most readable indication of what can go wrong.
The addition of the exception, however, allows you to recover more easily from the problem.
It can be argued that such an approach is redundant, but in a release build, the assertions will usually be NOOPs without generated code, so the exception remains the sole protection.
Also, this function is very simple, so the assertion and the exception throw are immediately visible, but with a few dozen lines of code added, that would not be the case anymore.
Now, when you are developing and likely to make mistakes, the assertion failure will be visible at exactly the line where it occurred, while the exception might bubble up into an unrelated try/catch block that would make it harder to pinpoint the problem exactly, especially if the catch block does not log stack traces.
So, if you want to be safe and mitigate the risks of mistakes during development and during normal execution, you can never be too careful, and might want to provide both mechanisms in a complementary way.
I'm in a similar situation; engineering software, sole developer, very few users of my programs. My rule of thumb is to use exceptions when the program could feasibly recover from an error, or when a user should be expected to react to the error in some way. An example is checking for negative numbers where only positive numbers are allowed: the program doesn't need to terminate because the user typed in a negative value for mass, they just need to recheck their inputs and try again.
On the other hand, I use asserts to catch major bugs in the software. In the event that some error occurs from which the program has no hope of recovering (or that the user has no hope of fixing themselves), I just let the assert print out the file name and line number so that the user can report it to me, and I can fix it. An example of where I would use an assert is checking that the number of rows and columns of a matrix are equal when I'm expecting a square matrix. If num_rows != num_cols, then something is seriously broken with the code and some debugging is required. In my opinion, this is easier than trying to imagine all the possible ways that a matrix could become invalid, and test for them all.
As far as performance, I only disable or remove asserts and other error checks in critical sections of code, and then only when that section has been thoroughly tested and debugged.
My approach is probably not good for production software though. I can't imagine some program like Microsoft Excel bombing out with an "assertion failed" message. Ha ha. It's one thing if the three coworkers who use your software complain about your error-handling strategy, but quite another if you have thousands of unhappy customers who paid cash for it.
I'd use assertions where I expect the check to have a performance impact. I.e. when writing a vector or matrix class of simple types (e.g. double, complex<double>), and I wanted to make a bounds check I'd use assert(), because the check there has a potentially large performance impact, since it happens with every element access. I can then turn off this check in production builds with -DNDEBUG.
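For instance, the bounds check might look like this (a sketch, not any particular library's implementation):

#include <cassert>
#include <cstddef>

class Vector
{
public:
    double& operator[](std::size_t i)
    {
        // Runs on every element access: keep it an assert so that
        // compiling with -DNDEBUG removes the check in production builds.
        assert(i < size_ && "index out of bounds");
        return data_[i];
    }

private:
    double*     data_;
    std::size_t size_;
};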
If the cost of the check does not matter (e.g. a check that an initial solution does not contain NaN values before you pass it to an iterative scheme), I would use an exception or another mechanism that is also active in production builds. If your job aborts after waiting in the queue of a cluster for three days and running for 10 hours, you at least want to have a diagnostic better than "killed (SIGSEGV)", so you can avoid rebuilding in debug-mode, waiting another 3 days and spending another 10 hours of expensive computation time.
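The cheap, always-on check could be as simple as this (a sketch; the function name and exception type are illustrative):

#include <cmath>
#include <stdexcept>
#include <vector>

// Runs once per job, so the cost is irrelevant; keep it active in release builds.
void check_initial_solution(const std::vector<double>& x)
{
    for (double v : x)
        if (std::isnan(v))
            throw std::runtime_error("initial solution contains NaN");
}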
There are situations where neither exceptions nor asserts are appropriate. An example would be an error where the cost of checking does not matter, but that is nevertheless fatal enough that the program cannot continue under any circumstances. An assertion is not appropriate, because it only triggers in debug mode; an exception is not appropriate, because it can (accidentally) be caught, obscuring the problem. In such a case I'd use a custom assert macro that does not depend on NDEBUG, e.g.:
#include <cstdlib>   // for std::abort
#include <iostream>  // for std::cerr

// This assert macro does not depend on the value of NDEBUG
#define assert_always(expr) \
do \
{ \
    if(!(expr)) \
    { \
        std::cerr << __FILE__ << ":" << __LINE__ << ": assert_always(" \
                  << #expr << ") failed" << std::endl; \
        std::abort(); \
    } \
} while(false)
(This example was taken from here with a modified name to indicate the slightly broader purpose).

Code littered with asserts

Hi, I am programming on some device.
There is some sample with such code:
Verify(SomeFunc(argc, argv) == SDK_OK);
Verify(SomeOtherFunction(&st_initialData) == SDK_OK);
Verify(SomeOtherFunction2(x,y) == SDK_OK);
In the documentation, Verify is defined as 'similar' to assert.
My question is: if I build my project in Release mode, what will happen with above statements? Will they lose their power? Will the Verify have any effect still?
To avoid possible problems with the above, will I have to replace these calls with explicit checks of the return values, like this?
if(SomeFunc(argc, argv) == SDK_OK)
{
// we are fine
}
else
{
// handle error somehow, such that it is also available in Release mode
}
It is impossible to say, as it seems that it is your project which implements Verify, as a macro or as a function. Why don't you take a look at the implementation?
That being said, the MFC framework has a VERIFY macro which is similar to ASSERT, with the distinction that the expression is always evaluated, even in release builds, but nothing happens if the result of the expression is false. This might be a similar approach, as your examples seem to call functions which can affect the system state.
I assume you mean the MFC VERIFY macro or something very similar.
Using this macro is safe for release builds. The argument is evaluated in any case; the macro itself just does nothing with the result in release.
In contrast to this, the ASSERT macro is completely skipped in release builds, so the "side effects" of the argument do not happen. Therefore, VERIFY is used if the argument is required for the actual program flow, and ASSERT is used when the argument is for asserting only.
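The usual shape of the two macros is roughly this (a simplified sketch, not MFC's actual definitions; report_failure stands in for whatever failure handler is used):

#ifdef _DEBUG
#define MY_ASSERT(expr) \
    do { if (!(expr)) report_failure(#expr, __FILE__, __LINE__); } while (0)
#define MY_VERIFY(expr) MY_ASSERT(expr)
#else
#define MY_ASSERT(expr) ((void)0)        // argument not evaluated in release
#define MY_VERIFY(expr) ((void)(expr))   // argument still evaluated in release
#endif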
Almost certainly you will not have to replace them. If your project wanted just to remove the calls in a production compilation, it would probably have used plain assert directly. Try to read the source of the project (always a good idea) and understand what the macro does.

Assert() - what is it good for?

I don't understand the purpose of assert().
My lecturer says that the purpose of assert is to find bugs .
For example:
double divide(int a, int b)
{
    assert(0 != b);
    return a/b;
}
Is the above assert justified? I think the answer is yes, because if my program isn't supposed to work with 0 (the number zero), but somehow a zero does find its way into the b variable, then something is wrong with the code.
Am I correct?
Can you show me some examples of a justified assert()?
Regards
assert is used to validate things that should always be true if the program is correct. Whether assert is justified in your example depends on the specification of divide: if b != 0 is a precondition, then the assert is usually the preferred way of verifying it: if someone calls the function without fulfilling the precondition, it is a programming error, and you should terminate the program with extreme prejudice, doing as little additional work as possible. (Usually. There are applications where this is not the case, and where it is better to throw an exception and stumble along, hoping for the best.) If, however, the specification of divide defines some behavior when b == 0 (e.g. return +/-Inf), then you should implement this instead of using assert.
Also, it's possible to turn the assert off if it turns out that it takes too much runtime. Generally, however, this should only be done in critical sections of code, and only if the profiler shows that you really need it.
FWIW: not related to your question, but the code you've posted will return 0.0 for divide(1, 3), because a/b is evaluated in integer arithmetic before being converted to double. Somehow, I don't think that this is what you wanted.
Another aspect of assertions:
They are also a kind of documentation.
Instead of comments like
// ptr is never NULL
// vec now has n elements
better write
assert(ptr!=0);
assert(vec.size()==n);
Comments may become outdated over time and will cause confusion.
But assertions are verified all the time.
Comments can be ignored. Assertions cannot.
You're pretty much spot-on in your assessment of assert, except for the fact that you typically use assert during a debug phase ... This is because you don't want an assert to trigger in production code ... throwing exceptions (and properly handling them) is the proper method for run-time error management in production-level code.
In general though, assert is used for testing an assumption. If an assumed condition is not met in the code during the debugging phase, especially when you are getting values that are out of bounds for the desired input, you want your program to bail out at the point where the error is encountered so you can fix it. For instance, suppose you were calling a function that returned a pointer, and that function should never return a NULL pointer value. In other words, returning a NULL value is not just some indicator of an error condition; it means that your assumption of how your code works is wrong. That is a good place to use assert ... you assume your program will work one way, and if it doesn't, you don't want that error propagating to cause some crazy hard-to-find bug somewhere else ... you want to nix it right when it occurs.
Finally, you can combine the built-in macros __LINE__ and __FILE__ with assert; they will give you the file and line number in the code where the assert took place, to help you quickly identify the problem area.
The purpose of an assert is to signal unexpected behavior during debugging (as it's only available in a debug build). Your example is a justified case of assert. The next line would probably crash, but with the assert there you have the option to break execution right before the line is hit, and do some debugging.
This is usually done in parallel with exceptions - you assert to signal that something is wrong, and throw an exception to treat the case gracefully (even exiting the program):
double divide(int a, int b)
{
    assert(0 != b);
    if (b)
        return a/b;
    throw division_by_0_exception();
}
There are cases where you want to continue execution, but still want to signal that something went wrong.
Assert is used to test assumptions about your code in a debug environment. Asserts generally have no effect on your final build.
Whether or not it is a valid test is another matter entirely. We can't answer that without intimate knowledge of your application.
Asserts should never fail. If you see any possibility that the assertion could fail, then you need an if statement instead to handle those cases where the condition is not true. Assertions are only for conditions that you believe will never fail.
Asserts are used to check invariants during code execution: conditions that the programmer assumes always hold. If they differ from the assumptions, there is a bug in the code.
Asserts can also be used for checking preconditions and postconditions: the first is checked before some code block and verifies that the provided data/state is correct; the second checks whether the outcome of some calculation is correct. This helps to narrow down where problems/bugs might be located:
assert( /*preconditions*/ );
/*here some algorithm - and maybe more asserts checking invariants*/
assert( /*postconditions*/ );
Some examples of justified asserts:
Checking a function's return value; for example, if you call some external API function and you know that it returns an error value only in case of a programming error:
The WinAPI Thread32First function requires that the provided LPTHREADENTRY32 structure has a properly assigned dwSize field, and fails otherwise. This failure should be caught by an assert.
If a function accepts a pointer to some data, add an assert at the start of the function to verify that it is non-null. This makes sense if the function cannot work on a null pointer.
If you take a lock on a mutex with a set timeout, and the timeout expires, you can use an assert to indicate a possible race condition or deadlock (see the sketch after this list).
... and many many more
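A sketch of the mutex-timeout case (C++11; the five-second budget is arbitrary):

#include <cassert>
#include <chrono>
#include <mutex>

std::timed_mutex m;

void update_shared_state()
{
    // A correct program acquires this lock quickly, so a timeout signals
    // a deadlock or priority bug rather than a recoverable condition.
    if (!m.try_lock_for(std::chrono::seconds(5)))
    {
        assert(false && "timed out waiting for lock - possible deadlock");
        return;  // in release builds, bail out instead of corrupting state
    }
    // ... touch the shared state ...
    m.unlock();
}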
A nice trick with asserts is to add some info inside, e.g.:
assert(false && "Reason for this assert");
"Reason for this assert" will show up to you in a message box
You might also want to know that there are also static asserts, which indicate errors during compilation.
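Since C++11 this is built into the language as static_assert; a small sketch:

#include <type_traits>

// Checked by the compiler; the program does not build if the condition is false.
static_assert(sizeof(void*) == 8, "this code assumes a 64-bit target");

template <typename T>
T midpoint(T a, T b)
{
    static_assert(std::is_arithmetic<T>::value, "midpoint requires a numeric type");
    return a + (b - a) / 2;
}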

How to properly rewrite ASSERT code to pass /analyze in msvc?

Visual Studio added code analysis (/analyze) for C/C++ in order to help identify bad code. This is quite a nice feature, but when you deal with an old project you may be overwhelmed by the number of warnings.
Most of the warnings are generated because the old code does some ASSERT at the beginning of the method or function.
I think this is the ASSERT definition used in the code (from afx.h)
#define ASSERT(f) DEBUG_ONLY((void) ((f) || !::AfxAssertFailedLine(THIS_FILE, __LINE__) || (AfxDebugBreak(), 0)))
Example code:
ASSERT(pBytes != NULL);
*pBytes = 0; // <- warning C6011: Dereferencing NULL pointer 'pBytes'
I'm looking for an easy, clean and safe solution to resolve these warnings that does not involve disabling them. Did I mention that there are lots of occurrences in the current codebase?
/analyze is not guaranteed to yield relevant and correct warnings.
It can and will miss a lot of issues, and it also gives a number of false positives (things it identifies as warnings, but which are perfectly safe and will never actually occur)
It is unrealistic to expect to have zero warnings with /analyze.
It has pointed out a situation where you dereference a pointer which it can not verify is always valid. As far as PREfast can tell, there's no guarantee that it will never be NULL.
But that doesn't mean it can be NULL. Just that the analysis required to prove that it's safe is too complex for PREfast.
You may be able to use the Microsoft-specific extension __assume to tell the compiler that it shouldn't produce this warning, but a better solution is to leave the warning. Every time you compile with /analyze (which need not be every time you compile), you should verify that the warnings it does come up with are still false positives.
If you use your asserts correctly (to catch logic errors during programming, guarding against situations that cannot happen), then I see no problem with your code, or with leaving the warning. Adding code to handle a problem that can never occur is just pointless. You're adding more code and more complexity for no reason (if it can never occur, then you have no way of recovering from it, because you have absolutely no clue what state the program will be in; all you know is that it has entered a code path you thought impossible).
However, if you use your assert as actual error handling (the value can be NULL in exceptional cases, you just expect that it won't happen), then it is a defect in your code. Then proper error handling (exceptions, typically) is needed.
Never ever use asserts for problems that are possible. Use them to verify that the impossible isn't happening. And when /analyze gives you warnings, look at them. If it is a false positive, ignore it (don't suppress it, because while it's a false positive today, the code you check in tomorrow may turn it into a real issue).
PREfast is telling you that you have a defect in your code; don't ignore it. You do in fact have one, but you have only skittered around acknowledging it. The problem is this: just because pBytes has never been NULL in development & testing doesn't mean it won't be in production. You don't handle that eventuality. PREfast knows this, and is trying to warn you that production environments are hostile, and will leave your code a smoking, mutilated mass of worthless bytes.
/rant
There are two ways to fix this: the Right Way, and a hack.
The right way is to handle NULL pointers at runtime:
void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    if( !pBytes )
        return;
    *pBytes = 0;
}
This will silence PREfast.
The hack is to use an annotation. For example:
void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    __analysis_assume( pBytes );
    *pBytes = 0;
}
EDIT: Here's a link describing PREfast annotations. A starting point, anyway.
Firstly, your assertion statement must guarantee to throw or terminate the application. After some experimentation I found that in this case /analyze ignores all implementation in either template functions, inline functions or normal functions. You must instead use macros and the do{}while(0) trick, with inline suppression of warning C4127 (conditional expression is constant).
If you look at the definition of ATLENSURE(), Microsoft use __analysis_assume() in their macro; they also have several paragraphs of very good documentation on why and how they are migrating ATL to use this macro.
As an example of this I have modified the CPPUNIT_ASSERT macros in the same way to clean up thousands of warnings in our unit tests.
#define CPPUNIT_ASSERT(condition) \
do { ( CPPUNIT_NS::Asserter::failIf( !(condition), \
CPPUNIT_NS::Message( "assertion failed" ), \
CPPUNIT_SOURCELINE() ) ); __analysis_assume(!!(condition)); \
__pragma( warning( push)) \
__pragma( warning( disable: 4127 )) \
} while(0) \
__pragma( warning( pop))
remember, ASSERT() goes away in a retail build, so the C6011 warning is absolutely correct in your code above: you must check that pBytes is non-null as well as doing the ASSERT(). The ASSERT() simply throws your app into the debugger if the condition fails in a debug build.
I work a great deal on /analyze and PREfast, so if you have other questions, please feel free to let me know.
You seem to assume that ASSERT(ptr) somehow means that ptr is not NULL afterwards. That's not true, and the code analyzer doesn't make that assumption.
My co-author David LeBlanc would tell me this code is broken anyway, assuming you're using C++, you should use a reference rather than a pointer, and a ref can't be NULL :)

Testing for assert in the Boost Test framework

I use the Boost Test framework to unit test my C++ code and wondered if it is possible to test if a function will assert? Yes, sounds a bit strange but bear with me! Many of my functions check the input parameters upon entry, asserting if they are invalid, and it would be useful to test for this. For example:
void MyFunction(int param)
{
    assert(param > 0); // param cannot be less than 1
    ...
}
I would like to be able to do something like this:
BOOST_CHECK_ASSERT(MyFunction(0), true);
BOOST_CHECK_ASSERT(MyFunction(-1), true);
BOOST_CHECK_ASSERT(MyFunction(1), false);
...
You can check for exceptions being thrown using Boost Test so I wondered if there was some assert magic too...
Having the same problem, I dug through the documentation (and code) and found a "solution".
The Boost UTF uses boost::execution_monitor (in <boost/test/execution_monitor.hpp>). This is designed with the aim of catching everything that could happen during test execution. When a failing assert is hit, execution_monitor intercepts it and throws boost::execution_exception. Thus, by using BOOST_REQUIRE_THROW you may assert the failure of an assert.
so:
#include <boost/test/unit_test.hpp>
#include <boost/test/execution_monitor.hpp>  // for execution_exception

BOOST_AUTO_TEST_CASE(case_1)
{
    BOOST_REQUIRE_THROW(function_w_failing_assert(), boost::execution_exception);
}
Should do the trick. (It works for me.)
However (or disclaimers):
It works for me. That is, on Windows XP, MSVC 7.1, boost 1.41.0. It might be unsuitable or broken on your setup.
It might not be the intention of the author of Boost Test (although it does seem to be the purpose of execution_monitor).
It will treat every form of fatal error the same way. I.e., it could be that something other than your assert is failing. In this case you could miss, e.g., a memory corruption bug, and/or miss a failed assert.
It might break on future boost versions.
I expect it would fail if run in a Release config, since the assert will be disabled and the code that the assert was set to prevent will run, resulting in very undefined behavior.
If, in Release config for MSVC, some assert-like or other fatal error would occur anyway, it would not be caught (see the execution_monitor docs).
Whether you use assert or not is up to you. I like them.
See:
http://www.boost.org/doc/libs/1_41_0/libs/test/doc/html/execution-monitor/reference.html#boost.execution_exception
the execution-monitor user-guide.
Also, thanks to Gennadiy Rozental (Author of Boost Test), if you happen to
read this, Great Work!!
There are two kinds of errors I like to check for: invariants and run-time errors.
Invariants are things that should always be true, no matter what. For those, I use asserts. Things like you shouldn't be passing me a zero pointer for the output buffer you're giving me. That's a bug in the code, plain and simple. In a debug build, it will assert and give me a chance to correct it. In a retail build, it will cause an access violation and generate a minidump (Windows, at least in my code) or a coredump (Mac/unix). There's no catch that I can do that makes sense to deal with dereferencing a zero pointer. On Windows catch (...) can suppress access violations and give the user a false sense of confidence that things are OK when they've already gone horribly, horribly wrong.
This is one reason why I've come to believe that catch (...) is generally a code smell in C++ and the only reasonable place where I can think of that being present is in main (or WinMain) right before you generate a core dump and politely exit the app.
Run-time errors are things like "I can't write this file because of permissions" or "I can't write this file because the disk is full". For these sorts of errors throwing an exception makes sense because the user can do something about it like change the permission on a directory, delete some files or choose an alternate location to save the file. These run-time errors are correctable by the user. A violation of an invariant can't be corrected by the user, only by a programmer. (Sometimes the two are the same, but typically they aren't.)
Your unit tests should force code to throw the run-time error exceptions that your code could generate. You might also want to force exceptions from your collaborators to ensure that your system under test is exception safe.
However, I don't believe there is value in trying to force your code to assert against invariants with unit tests.
I don't think so. You could always write your own assert which throws an exception and then use BOOST_CHECK_THROW() for that exception.
I think this question, and some of the replies, confuse run-time error detection with bug detection. They also confuse intent and mechanism.
A run-time error is something that can happen in a 100% correct program. It needs detection, it needs proper reporting and handling, and it should be tested. Bugs also happen, and for the programmer's convenience it's better to catch them early using precondition checks, invariant checks, or the occasional assert. But these are the programmer's tools. The error message will make no sense to an ordinary user, and it does not seem reasonable to test a function's behaviour on data that a properly written program will never pass to it.
As for intent and mechanism, it should be noted that an exception is nothing magic. Some time ago, Peter Dimov said on the Boost mailing list (approximately) that "exceptions are just a non-local jump mechanism". And this is very true. If you have an application where it's possible to continue after some internal error, without the risk that something will be corrupted before repair, you can implement a custom assert that throws a C++ exception. But it would not change the intent, and won't make testing for asserts much more reasonable.
At work I ran into the same problem. My solution is to use a compile flag. When my flag GROKUS_TESTABLE is on my GROKUS_ASSERT is turned into an exception and with Boost you can test code paths that throw exceptions. When GROKUS_TESTABLE is off, GROKUS_ASSERT is translated to c++ assert().
#if GROKUS_TESTABLE
#define GROKUS_ASSERT ... // exception
#define GROKUS_CHECK_THROW BOOST_CHECK_THROW
#else
#define GROKUS_ASSERT ... // assert
#define GROKUS_CHECK_THROW(statement, exception) {} // no-op
#endif
My original motivation was to aid debugging, i.e. assert() can be debugged quickly and exceptions often are harder to debug in gdb. My compile flag seems to balance debuggability and testability pretty well.
Hope this helps
Sorry, but you're attacking your problem the wrong way.
"assert" is the spawn of the devil (a.k.a. "C") and is useless with any language that has proper exceptions. It's waaaaaay better to reimplement an assert-like functionality with exceptions. This way you actually get a chance of handling errors the right way (incl proper cleanup procedures) or triggering them at will (for unit testing).
Besides, if your code ever runs in Windows, when you fail an assertion you get a useless popup offering you to debug/abort/retry. Nice for automated unit tests.
So do yourself a favor and re-code an assert function that throws exceptions. There's one here:
How can I assert() without using abort()?
Wrap it in a macro so you get __FILE__ and __LINE__ (useful for debugging) and you're done.
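A minimal sketch of such a macro (the exception type and message format are just one way to do it):

#include <sstream>
#include <stdexcept>

#define THROWING_ASSERT(expr)                                   \
    do {                                                        \
        if (!(expr)) {                                          \
            std::ostringstream oss;                             \
            oss << "Assertion failed: " << #expr                \
                << " at " << __FILE__ << ":" << __LINE__;       \
            throw std::logic_error(oss.str());                  \
        }                                                       \
    } while (0)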