Is it evil to redefine assert? - c++

Is it evil to redefine the assert macro?
Some folks recommend defining your own macro ASSERT(cond) rather than redefining the existing, standard assert(cond) macro. But that does not help if you have a lot of legacy code that already uses assert(), that you don't want to change at the source level, and whose assertion reporting you want to intercept and regularize.
I have done
#undef assert
#define assert(cond) ... my own assert code ...
in situations such as the above - code already using assert, whose assertion-failure behavior I wanted to extend - when I wanted to do things like
1) printing extra error information to make the assertions more useful
2) automatically invoking a debugger or stack trace on an assert
... this, 2), can be done without redefining assert, by implementing a SIGABRT signal handler (a sketch follows this list).
3) converting assertion failures into throws.
... this, 3), cannot be done by a signal handler - since you can't throw a C++ exception from a signal handler. (At least not reliably.)
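For option 2), here is a minimal sketch of the signal-handler route, assuming a glibc-style backtrace() is available (the handler name and details are mine, not from the post):
#include <cassert>
#include <csignal>
#include <cstdlib>
#include <execinfo.h>   // glibc-specific backtrace; adjust or remove on other platforms

void abort_handler(int)
{
    // Strictly, only async-signal-safe calls belong here; backtrace_symbols_fd
    // writes straight to a file descriptor, which keeps us close to that rule.
    void* frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, 2 /* stderr */);
    std::_Exit(1);
}

int main()
{
    std::signal(SIGABRT, abort_handler);
    assert(false && "demonstration: dump a stack trace on assert without touching the macro");
}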
Why might I want to make assert throw? Stacked error handling.
I usually do the latter not because I want the program to continue running after the assertion (although see below), but because I like using exceptions to provide better context on errors. I often do:
#include <cstdlib>
#include <iostream>
#include <string>

std::string logfilename = "foo.log";   // assumed here; whatever names the log file in the real code

void some_code();
void some_other_code();
void some_other2_code();   // ... and so on down the call chain (see "etc." below)

int main() {
    try { some_code(); }
    catch(...) {
        std::string err = "exception caught in command foo";
        std::cerr << err;
        exit(1);
    }
}
void some_code() {
    try { some_other_code(); }
    catch(...) {
        std::string err = "exception caught when trying to set up directories";
        std::cerr << err;
        throw "unhandled exception, throwing to add more context";
    }
}
void some_other_code() {
    try { some_other2_code(); }
    catch(...) {
        std::string err = "exception caught when trying to open log file " + logfilename;
        std::cerr << err;
        throw "unhandled exception, throwing to add more context";
    }
}
etc.
I.e. the exception handlers add a bit more error context, and then rethrow.
Sometimes I have the exception handlers print, e.g. to stderr.
Sometimes I have the exception handlers push onto a stack of error messages.
(Obviously that won't work when the problem is running out of memory.)
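A minimal sketch of the "stack of error messages" variant (the function and variable names here are mine, not from the real code):
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<std::string> g_error_context;   // grows as the exception unwinds

void open_log_file()
{
    g_error_context.push_back("cannot open log file");   // stand-in for the real failure
    throw std::runtime_error("open_log_file failed");
}

void set_up_directories()
{
    try { open_log_file(); }
    catch (...) { g_error_context.push_back("while setting up directories"); throw; }
}

int main()
{
    try { set_up_directories(); }
    catch (...) {
        std::cerr << "command foo failed:\n";
        for (const auto& msg : g_error_context) std::cerr << "  " << msg << "\n";
        return 1;
    }
}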
** These assert exceptions still exit ... **
Somebody who commented on this post, @IanGoldby, said "The idea of an assert that doesn't exit doesn't make any sense to me."
In case I was not clear: I usually have such exceptions exit - but eventually, perhaps not immediately.
E.g. instead of
#include <iostream>
#include <string>
#include <assert.h>
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
    assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
    std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}
void bar(int n)
{
    baz(n);
}
void foo(int n)
{
    bar(n);
}
int main(int argc, char** argv)
{
    foo( argv[0] == std::string("1") );
}
producing only
% ./assert-exceptions
assertion "n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."" failed: file "assert-exceptions.cpp", line 9, function: void baz(int)
/bin/sh: line 1: 22180 Aborted (core dumped) ./assert-exceptions/
%
you might do
#include <iostream>
#include <string>
//#include <assert.h>
#define assert_error_report_helper(cond) "assertion failed: " #cond
#define assert(cond) { if(!(cond)) { std::cerr << assert_error_report_helper(cond) "\n"; throw assert_error_report_helper(cond); } }
//^ TBD: yes, I know assert needs more stuff to match the definition: void, etc.
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
    assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
    std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}
void bar(int n)
{
    try {
        baz(n);
    }
    catch(...) {
        std::cerr << "trying to accomplish bar by baz\n";
        throw "bar";
    }
}
void foo(int n)
{
    bar(n);
}
int secondary_main(int argc, char** argv)
{
    foo( argv[0] == std::string("1") );
    return 0;
}
int main(int argc, char** argv)
{
    try {
        return secondary_main(argc,argv);
    }
    catch(...) {
        std::cerr << "main exiting because of unknown exception ...\n";
        return 1;
    }
}
and get the slightly more meaningful error messages
assertion failed: n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."
trying to accomplish bar by baz
main exiting because of unknown exception ...
I should not have to explain why these context sensitive error messages can be more meaningful.
E.g. the user may not have the slightest idea why baz(1) is being called.
It may well be a program error - on Cygwin, you may have to call cygwin_alternative_to_baz(1).
But the user may understand what "bar" is.
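As an aside on the TBD comment in the listing above: the main "more stuff" is that the standard assert expands to a void expression and honours NDEBUG, so a closer-to-standard shape of the throwing macro might look like this (still only a sketch, not the standard definition):
#include <iostream>

#define assert_error_report_helper(cond) "assertion failed: " #cond

// (after an #undef assert, if <cassert> was ever included)
#ifdef NDEBUG
#define assert(cond) ((void)0)
#else
#define assert(cond) \
    ((cond) ? (void)0 \
            : (std::cerr << assert_error_report_helper(cond) << "\n", \
               throw assert_error_report_helper(cond)))
#endif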
Yes: this is not guaranteed to work. But, for that matter, asserts are not guaranteed to work either, if they do anything in the abort handler more complicated than calling
write(2,"error baz(1) has occurred",64);
and even that is not guaranteed to work (and there's a security bug in this invocation).
E.g. if malloc or sbrk has failed.
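(For reference, the security bug is presumably the length: 64 bytes are written but the string literal is much shorter, so memory past it leaks into the output. A safer spelling derives the length from the literal itself:)
static const char msg[] = "error baz(1) has occurred\n";
write(2, msg, sizeof msg - 1);   // length taken from the literal, no over-read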
Why might I want to make assert throw? Testing
The other big reason that I have occasionally redefined assert has been to write unit tests for legacy code, code that uses assert to signal errors, which I am not allowed to rewrite.
If this code is library code, then it is convenient to wrap calls in try/catch, see whether the error is detected, and go on.
Oh, heck, I might as well admit it: sometimes I wrote this legacy code. And I deliberately used assert() to signal errors, because I could not rely on the user doing try/catch/throw - in fact, oftentimes the same code must be used in a C/C++ environment. I did not want to use my own ASSERT macro - because, believe it or not, ASSERT often conflicts. I find code that is littered with FOOBAR_ASSERT() and A_SPECIAL_ASSERT() ugly. No... simply using assert() by itself is elegant and basically works. And it can be extended... if it is okay to override assert().
Anyway, whether the code that uses assert() is mine or from someone else: sometimes you want code to fail by raising SIGABRT or calling exit(1) - and sometimes you want it to throw.
I know how to test code that fails by exit(1) or SIGABRT - something like
for all tests do
fork
... run test in child
wait
check exit status
but this approach is slow, not always portable, and often runs several thousand times slower than
for all tests do
try {
... run test
} catch (... ) {
...
}
This is riskier than just stacking error-message context, since you may continue operating after a failure. But you can always choose which types of exceptions to catch.
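For concreteness, the fork-based harness sketched in pseudocode above might look roughly like this on a POSIX system (the helper names are mine):
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cassert>
#include <csignal>
#include <cstdio>

// Runs one test in a child process and reports whether it died via SIGABRT (i.e. a failed assert).
bool dies_by_abort(void (*test)())
{
    pid_t pid = fork();
    if (pid == 0) {      // child: run the test; an assert failure calls abort()
        test();
        _exit(0);        // reached only if no assertion fired
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT;
}

void should_assert() { assert(false && "expected failure"); }

int main()
{
    std::printf("assert fired: %s\n", dies_by_abort(should_assert) ? "yes" : "no");
}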
Meta-Observation
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
If this is right ... if there is a phase change in error reporting, from exit(1)/signals to exceptions ... one still has the question of how to live with the legacy code.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?

Redefining a Standard macro is an ugly idea, and you can be sure the behaviour's technically undefined, but in the end macros are just source code substitutions and it's hard to see how it could cause problems, as long as the assertion causes your program to exit.
That said, your intended substitution may not be reliably used if any code in the translation unit after your definition itself redefines assert, which suggests a need for a specific order of includes etc. - damned fragile.
If your assert substitutes code that doesn't exit, you open up new problems. There are pathological edge cases where your ideas about throwing instead could fail, such as:
int f(int n)
{
try
{
assert(n != 0);
call_some_library_that_might_throw(n);
}
catch (...)
{
// ignore errors...
}
return 12 / n;
}
Above, a value of 0 for n starts crashing the application instead of stopping it with a sane error message: any explanation in the thrown message won't be seen.
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
I don't recall Andrei saying quite that - do you have a quote? He's certainly thought very carefully about how to create objects that encourage reliable exception handling, but I've never heard/seen him suggest that a stop-the-program assert is inappropriate in certain cases. Assertions are a normal way of enforcing invariants - there's definitely a line to be drawn concerning which potential assertions can be continued from and which can't, but on one side of that line assertions continue to be useful.
The choice between returning an error value and using exceptions is the traditional ground for the kind of argument/preference you mention, as they're more legitimately alternatives.
If this is right ... if there is a phase change in error reporting, from exit(1)/signals to exceptions ... one still has the question of how to live with the legacy code.
As above, you shouldn't try to migrate all existing exit() / assert etc. to exceptions. In many cases, there will be no way to meaningfully continue processing, and throwing an exception just creates doubt about whether the issue will be recorded properly and lead to the intended termination.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?
Where that becomes a real issue, you'd generally select one approach and wrap the non-conforming libraries with a layer that provides the error handling you like.

I wrote an application that runs on an embedded system. In the early days I sprinkled asserts through the code liberally, ostensibly to document conditions in the code that should be impossible (but in a few places as lazy error-checking).
It turned out that the asserts were occasionally being hit, but no one ever got to see the message output to the console containing the file and line number, because the console serial port generally was not connected to anything. I later redefined the assert macro so that instead of outputting a message to the console it would send a message over the network to the error logger.
Whether or not you think redefining assert is 'evil', this works well for us.
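A hypothetical sketch of that kind of redefinition (net_log_assert_failure is an invented name standing in for whatever the real network logger exposes):
#undef assert
#ifdef NDEBUG
#define assert(cond) ((void)0)
#else
#define assert(cond) \
    ((cond) ? (void)0 \
            : (void)net_log_assert_failure(#cond, __FILE__, __LINE__))   /* hypothetical logger hook */
#endif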

If you include any headers/libraries that use assert, you may get unexpected behavior; otherwise, the compiler allows you to do it, so you can do it.
My suggestion, which is based on personal opinion, is that in any case you can define your own assert without needing to redefine the existing one. You never gain extra benefit from redefining the existing one over defining a new one with a new name.

Related

Properly handling an exception

My understanding is that in modern C++, it is preferable to use throw instead of returning an error code. That is to say, if I have some function:
void MyFunction(int foo, double bar) {
    // Do some stuff....
    if (exception_criteria_met) {
        throw std::runtime_error("Call to MyFunction with inputs " + std::to_string(foo)
                                 + " and " + std::to_string(bar) + " failed because ...");
    }
}
Does this mean that, in order to properly handle this, then in all functions that make a call to MyFunction, I must implement something like:
void MyOuterFunction(int foo) {
    // Do some stuff...
    try {
        MyFunction(foo, bar);
    } catch (const std::exception&) {
        throw std::runtime_error("Call to MyOuterFunction with inputs " + std::to_string(foo)
                                 + " failed because call to MyFunction with ....");
    }
}
And then wouldn't that imply that I must perform this same kind of error handling all the way down through all functions that call into this stack of function calls? Obviously at some point, maybe one of the functions can actually handle the exception and not need to throw an exception of its own, but perhaps just print a warning or something similar. But how do you keep this organized and intelligible? It seems like a remarkably complicated thing to think all the way through when you've got many functions interacting with one another.
(As a secondary question, if MyFunction(foo,bar) were to return a value that I wanted to use, wouldn't it mean that after leaving the try { } block, the returned value would fall out of scope? Does that imply I should avoid using function return values if I'm going to be using try-catch?)
I think most of your question is either too broad or opinion based or both. However, your confusion seems to start with
then in all functions that make a call to MyFunction, I must implement something like:
That's not right.
The function calling MyFunction that potentially throws can look like this:
void MyOuterFunction(int foo) {
    // Do some stuff...
    MyFunction(foo, bar);
}
If you always had to catch an exception immediately, exceptions would be rather impractical. That's not how they work.
In a nutshell: The exception continues to travel up the call stack until you either catch it or main is reached and it is still not caught, in that case the program terminates.
There is no fixed rule for where you should catch the exception. My personal rule of thumb is to catch it when it is possible to recover from it, but not sooner. Sometimes not catching an exception at all is the most viable option. Catching and rethrowing is rarely useful, and catching only to rethrow as you do is never useful to my knowledge.
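To make that concrete, here is a small sketch using the names from the question (the failure condition and the double argument are just filled in for illustration):
#include <iostream>
#include <stdexcept>
#include <string>

void MyFunction(int foo, double bar)
{
    if (bar == 0.0)                         // stand-in for the real failure condition
        throw std::runtime_error("MyFunction failed for foo=" + std::to_string(foo));
}

void MyOuterFunction(int foo)
{
    MyFunction(foo, 0.0);                   // no try/catch needed at this level
}

int main()
{
    try {
        MyOuterFunction(7);
    } catch (const std::exception& e) {
        std::cerr << "handled once, at the top: " << e.what() << "\n";   // the exception travelled up two frames
    }
}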

NIFs raise Segmentation Fault while loading function has try catch block to handle the exception [duplicate]

Here is a simple piece of code where a division by zero occurs. I'm trying to catch it:
#include <iostream>
int main(int argc, char *argv[]) {
int Dividend = 10;
int Divisor = 0;
try {
std::cout << Dividend / Divisor;
} catch(...) {
std::cout << "Error.";
}
return 0;
}
But the application crashes anyway (even though I put the option -fexceptions of MinGW).
Is it possible to catch such an exception (which I understand is not a C++ exception, but a FPU exception) ?
I'm aware that I could check for the divisor before dividing, but I made the assumption that, because a division by zero is rare (at least in my app), it would be more efficient to try dividing (and catching the error if it occurs) than testing each time the divisor before dividing.
I'm doing these tests on a WindowsXP computer, but would like to make it cross platform.
It's not an exception. It's an error which is determined at hardware level and is returned back to the operating system, which then notifies your program in some OS-specific way about it (like, by killing the process).
I believe that in such case what happens is not an exception but a signal. If it's the case: The operating system interrupts your program's main control flow and calls a signal handler, which - in turn - terminates the operation of your program.
It's the same type of error which appears when you dereference a null pointer (then your program crashes by SIGSEGV signal, segmentation fault).
You could try to use the functions from <csignal> header to try to provide a custom handler for the SIGFPE signal (it's for floating point exceptions, but it might be the case that it's also raised for integer division by zero - I'm really unsure here). You should however note that the signal handling is OS-dependent and MinGW somehow "emulates" the POSIX signals under Windows environment.
Here's the test on MinGW 4.5, Windows 7:
#include <csignal>
#include <iostream>
using namespace std;
void handler(int a) {
cout << "Signal " << a << " here!" << endl;
}
int main() {
signal(SIGFPE, handler);
int a = 1/0;
}
Output:
Signal 8 here!
And right after executing the signal handler, the system kills the process and displays an error message.
Using this, you can close any resources or log an error after a division by zero or a null pointer dereference... but unlike exceptions that's NOT a way to control your program's flow even in exceptional cases. A valid program shouldn't do that. Catching those signals is only useful for debugging/diagnosing purposes.
(There are some useful signals which are very useful in general in low-level programming and don't cause your program to be killed right after the handler, but that's a deep topic).
Dividing by zero is a logical error, a bug by the programmer. You shouldn't try to cope with it, you should debug and eliminate it. In addition, catching exceptions is extremely expensive - way more than divisor checking will be.
You can use Structured Exception Handling to catch the divide by zero error. How that's achieved depends on your compiler. MSVC offers a function to catch Structured Exceptions as catch(...) and also provides a function to translate Structured Exceptions into regular exceptions, as well as offering __try/__except/__finally. However I'm not familiar enough with MinGW to tell you how to do it in that compiler.
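For the MSVC route of translating Structured Exceptions into C++ exceptions, a hedged sketch using _set_se_translator might look like this; it is Windows/MSVC only, requires the /EHa compiler flag, and the translator function and exception type here are my own choices:
#include <windows.h>
#include <eh.h>
#include <iostream>
#include <stdexcept>

void se_translator(unsigned int code, EXCEPTION_POINTERS*)
{
    if (code == EXCEPTION_INT_DIVIDE_BY_ZERO)
        throw std::runtime_error("integer divide by zero");
    throw std::runtime_error("structured exception");
}

int main()
{
    _set_se_translator(se_translator);      // installs the translator for this thread
    int divisor = 0;
    try {
        std::cout << 10 / divisor << '\n';
    } catch (const std::exception& e) {
        std::cerr << "caught: " << e.what() << '\n';
    }
}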
There isn't a language-standard way of catching the divide-by-zero from the CPU.
Don't prematurely "optimize" away a branch. Is your application really CPU-bound in this context? I doubt it, and it isn't really an optimization if you break your code. Otherwise, I could make your code even faster:
int main(int argc, char *argv[]) { /* Fastest program ever! */ }
Divide by zero is not an exception in C++, see https://web.archive.org/web/20121227152410/http://www.jdl.co.uk/briefings/divByZeroInCpp.html
Somehow the real explanation is still missing.
Is it possible to catch such an exception (which I understand is not a C++ exception, but a FPU exception) ?
Yes, your catch block should work on some compilers. But the problem is that your exception is not an FPU exception. You are doing integer division. I don't know whether that is also a catchable error, but it is not an FPU exception; FPU exceptions rely on features of the IEEE representation of floating point numbers.
On Windows (with Visual C++), try this:
BOOL SafeDiv(INT32 dividend, INT32 divisor, INT32 *pResult)
{
__try
{
*pResult = dividend / divisor;
}
__except(GetExceptionCode() == EXCEPTION_INT_DIVIDE_BY_ZERO ?
EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH)
{
return FALSE;
}
return TRUE;
}
MSDN: http://msdn.microsoft.com/en-us/library/ms681409(v=vs.85).aspx
Well, if there were exception handling for this, some component would actually need to do the check. Therefore you don't lose anything by checking it yourself. And there's not much that's faster than a simple comparison (a single CPU instruction, "jump if equal to zero" or something like that; I don't remember the name).
To avoid infinite "Signal 8 here!" messages, just add 'exit' to Kos's nice code:
#include <csignal>
#include <iostream>
#include <cstdlib> // exit
using namespace std;
void handler(int a) {
cout << "Signal " << a << " here!" << endl;
exit(1);
}
int main() {
signal(SIGFPE, handler);
int a = 1/0;
}
As others said, it's not an exception, it just generates a NaN or Inf.
Zero-divide is only one way to do that. If you do much math, there are lots of ways, like
log(not_positive_number), exp(big_number), etc.
If you can check for valid arguments before doing the calculation, then do so, but sometimes that's hard to do, so you may need to generate and handle an exception.
In MSVC there is a header file #include <float.h> containing a function _finite(x) that tells if a number is finite.
I'm pretty sure MinGW has something similar.
You can test that after a calculation and throw/catch your own exception or whatever.
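A small sketch of that "check afterwards and throw your own exception" idea, using the standard std::isfinite rather than MSVC's _finite:
#include <cmath>
#include <iostream>
#include <stdexcept>

double checked_log(double x)
{
    double result = std::log(x);                 // yields -inf or NaN for x <= 0
    if (!std::isfinite(result))
        throw std::domain_error("log() produced a non-finite value");
    return result;
}

int main()
{
    try {
        std::cout << checked_log(-1.0) << "\n";
    } catch (const std::exception& e) {
        std::cerr << "caught: " << e.what() << "\n";
    }
}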

Properly terminating program. Using exceptions

Question:
Is using exceptions the proper way to terminate my program if all I want is to display an error message and close (accounting that I may be deep in the program)? Or can I just explicitly call something like exit(EXIT_FAILURE) instead?
What I'm Currently Doing:
I'm working on a game project and am trying to figure out the best way to terminate the program in the case of an error that calls for such an action. For example, in the case the textures can't be loaded I display an error message and terminate the program.
I'm currently doing this with exceptions like so:
int main()
{
Game game;
try
{
game.run();
}
catch (BadResolutionException & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Resolution");
return 1;
}
catch (BadAssetException & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Assets");
return 1;
}
catch (std::bad_alloc & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Memory");
return 1;
}
return 0;
}
All but bad_alloc are my own defined exceptions derived from runtime_error.
I don't need any manual resource cleanup and I'm using std::unique_ptr for any dynamic allocation. I just need to display the error message and close the program.
Research/Alternatives to Exceptions:
I've looked up a lot of posts on SO and other places and have seen others say anything from don't use exceptions, to use exceptions but you're using them wrong. I've also looked up explicitly calling something like exit().
Using exit() sounds nice but I read it won't go back through the call stack up to main cleaning everything up (if I can find this again I'll post the link). Additionally, according to http://www.cplusplus.com/reference/cstdlib/exit/ this should not be used if multiple threads are active. I do expect to be creating a second thread for a short time at least once, and an error could occur in that thread.
Not using exceptions was mentioned in some replies here in relation to games https://gamedev.stackexchange.com/questions/103285/how-industy-games-handle-their-code-errors-and-exceptions
Use exceptions was discussed here: http://www.quora.com/Why-do-some-people-recommend-not-using-exception-handling-in-C++
There are a number of other sources I've read but those were the most recent I looked at.
Personal Conclusion:
Due to my limited experience of working with error handling and using exceptions, I'm not sure if I'm on the right track. I've chosen the route of using exceptions based on the code I posted above. If you agree that I should tackle those cases with exceptions, am I using it correctly?
It's generally considered good practice to let all exceptions propagate through to main. This is primarily because you can be sure the stack is properly unwound and all destructors are called (see this answer). I also think it's more organised to do things this way; you always know where your program will terminate (unless the program crashes). It also facilitates more consistent error reporting (a point often neglected in exception handling; if you can't handle the exception, you should make sure your user knows exactly why). If you always start with this basic layout
int main(int argc, const char **argv)
{
try {
// do stuff
return EXIT_SUCCESS;
} catch (...) {
std::cerr << "Error: unknown exception" << std::endl;
return EXIT_FAILURE;
}
}
then you won't go far wrong. You can (and should) add specific catch statements for better error reporting.
Exceptions when multithreading
There are two basic ways of executing code asynchronously in C++11 using standard library features: std::async and std::thread.
First the simple one. std::async will return a std::future which will capture and store any uncaught exceptions thrown in the given function. Calling std::future::get on the future will cause any exceptions to propagate into the calling thread.
auto fut = std::async(std::launch::async, [] () { throw std::runtime_error {"oh dear"}; });
fut.get(); // fine, throws exception
On the other hand, if an exception in a std::thread object is uncaught then std::terminate will be called:
try {
std::thread t {[] () { throw std::runtime_error {"oh dear"};}};
t.join();
} catch(...) {
// only get here if std::thread constructor throws
}
One solution to this could be to pass a std::exception_ptr into the std::thread object which it can pass the exception to:
void foo(std::exception_ptr& eptr)
{
try {
throw std::runtime_error {"oh dear"};
} catch (...) {
eptr = std::current_exception();
}
}
void bar()
{
std::exception_ptr eptr {};
std::thread t {foo, std::ref(eptr)};
try {
// do stuff
} catch(...) {
t.join(); // t may also have thrown
throw;
}
t.join();
if (eptr) {
std::rethrow_exception(eptr);
}
}
Although a better way is to use std::packaged_task:
void foo()
{
throw std::runtime_error {"oh dear"};
}
void bar()
{
std::packaged_task<void()> task {foo};
auto fut = task.get_future();
std::thread t {std::move(task)};
t.join();
auto result = fut.get(); // throws here
}
But unless you have good reason to use std::thread, prefer std::async.
There's nothing wrong with catching unrecoverable errors and shutting down your program this way. In fact, it's how exceptions should be used. However, be careful not to cross the line of using exceptions to control the flow of your program in ordinary circumstances. They should always represent an error which cannot be gracefully handled at the level the error occurred.
Calling exit() would not unwind the stack from wherever you called it. If you want to exit cleanly, what you're already doing is ideal.
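A minimal illustration of that point (Guard here is just a demonstration type, not part of the question's code):
#include <cstdlib>
#include <iostream>

struct Guard {
    ~Guard() { std::cout << "cleanup\n"; }    // never printed on the exit() path
};

void deep_in_the_game()
{
    Guard g;
    std::exit(EXIT_FAILURE);                  // terminates without unwinding: ~Guard() does not run
    // throw SomeException{};                 // by contrast, this would unwind back to main
}

int main()
{
    deep_in_the_game();
}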
You have already accepted an answer, but I wanted to add something about this:
Can I just explicitly call something like exit() instead?
You can call exit, but (probably) shouldn't.
std::exit should be reserved for situations where you want to express "exit right now!", not simply "application has nothing left to do".
As an example, if you were to write a controller for a laser used in cancer treatments, your first priority in case something went wrong would be to shut down the laser and call std::exit - or possibly std::terminate (to ensure any side effects of a hanging, slow or crashing application do not kill a patient).
Similar to how exceptions should not be used for controlling application flow, exit should not be used to stop the application in normal conditions.
Is using exceptions the proper way to terminate my program if all I want is to display an error message and close (accounting that I may be deep in the program)?
Yes. This is the reason for using exceptions. An error occurred deep down in the code and something at a higher level will handle it. In your case, at the highest level.
There are arguments for/against exceptions vs. error codes, and this is a good read:
Exceptions or error codes
Can I just explicitly call something like exit() instead?
You can, but you may end up duplicating your logging code. Also, if in the future you decide that you want to handle an exception differently you will have to change all your exit calls. Imagine you wanted a different message, or to fall back on an alternative process.
Another similar question:
Correct usage of exit() in c++?
You also have a flaw in your approach as you don't handle all (C++) exceptions. You want something like this:
int main()
{
Game game;
try
{
game.run();
}
catch (BadResolutionException & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Resolution");
return 1;
}
catch (BadAssetException & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Assets");
return 1;
}
catch (std::bad_alloc & e)
{
Notification::showErrorMessage(e.what(), "ERROR: Memory");
return 1;
}
catch (...)
{
// overload?
Notification::showErrorMessage("ERROR: Unhandled");
return 1;
}
return 0;
}
If you don't handle all* exceptions you could have the game terminate without telling you anything useful.
You can't handle ALL exceptions. See this link:
C++ catching all exceptions
From the documentation:
[[noreturn]] void exit (int status);
Terminate calling process
Terminates the process normally, performing the regular cleanup for terminating programs.
Normal program termination performs the following (in the same order):
Objects associated with the current thread with thread storage duration are destroyed (C++11 only).
Objects with static storage duration are destroyed (C++) and functions registered with atexit are called.
All C streams (open with functions in <cstdio>) are closed (and flushed, if buffered), and all files created with tmpfile are removed.
Control is returned to the host environment.
Note that objects with automatic storage are not destroyed by calling exit (C++).
If status is zero or EXIT_SUCCESS, a successful termination status is returned to the host environment.
If status is EXIT_FAILURE, an unsuccessful termination status is returned to the host environment.
Otherwise, the status returned depends on the system and library implementation.
For a similar function that does not perform the cleanup described above, see quick_exit.

Can this use of C++ exceptions be justified?

I have a C++ API which throws exceptions in error conditions. Usually, the method I have seen in C++ APIs for notifying errors is special return codes, plus a function that returns the last error string, which can be checked if a method returns an error code. This method has limitations: for example, you may need to return an integer from a function where the whole integer value range is valid for return values, so you can't return an error code.
Due to this, I choose to throw an exception in my API when an error occurs in a function.
Is this an acceptable usage of exceptions in C++?
Also, In some functions in my API (eg. authenticate()), I have two options.
return bool to indicate success.
return void and throw an exception if failed.
If the first option is used, it is not consistent with other functions, because they all throw exceptions. Also, it is difficult to indicate what the error is.
So is it ok to use the second method in such functions too?
In the following answer, it is mentioned that it is bad to use C++ exceptions for controlling program flow. Also, I have heard the same elsewhere too.
https://stackoverflow.com/a/1736162/1015678
Does my usage violate this? I cannot clearly identify whether I am using exceptions for controlling program flow here.
the method I have seen in C++ APIs to notify errors is by special return codes and functions which returns last error string which can be checked if method returns an error code.
Sometimes that's done for good reasons, but more often when you see that the C++ library wraps an older C library, has been written by someone more comfortable with C, written for client coders more comfortable with C, or is written for interoperability with C.
return an integer from a function and the whole integer value range is valid for return values so you can't return an error code.
Options include:
exceptions
returning with a wider type (e.g. getc() returns an int with -1 indicating EOF).
returning a success flag alongside the value, wrapped in a boost::optional, pair, tuple or struct (a sketch follows this list)
having at least one of the success flag and/or value owned by the caller and specified to the function as a non-const by-reference or by-pointer parameter
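A small sketch of the success-flag-alongside-value option, using std::optional (C++17) in place of boost::optional; an empty optional signals failure without reserving any int value:
#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>

std::optional<int> to_int(const std::string& s)
{
    char* end = nullptr;
    long v = std::strtol(s.c_str(), &end, 10);
    if (end == s.c_str() || *end != '\0')
        return std::nullopt;                 // failure reported out-of-band
    return static_cast<int>(v);              // (overflow handling omitted in this sketch)
}

int main()
{
    if (auto n = to_int("123"))
        std::cout << "parsed " << *n << "\n";
    else
        std::cout << "not a number\n";
}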
Is this an acceptable usage of exceptions in C++?
Sounds ok, but the art is in balancing the pros and cons and we don't know whether it's optimally convenient and robust for client code calling your functions. Understanding their expectations is key; those will partly be formed by their overall C++ experience, but also by the rest of your API, any other APIs shipped alongside yours, and even other APIs for other libraries they're likely to be using in the same apps etc.
Consider too whether the caller of a function is likely to want to handle the success or failure of that function in the context of the call, separately from other possible failures. For example, sometimes it's easier for client code to work with functions returning boolean success values (first snippet below) than with exceptions (second snippet):
if (fn(1) && !fn(2))
fn(3);
try
{
fn(1);
try
{
fn2();
}
catch (const ExpectedExceptionFromFn2Type&)
{
fn3();
}
}
catch (const PossibleExceptionFromFn1Type&)
{
// that's ok - we don't care...
}
But other times it can be easier with exceptions:
try
{
My_X x { get_X(99) };
if (x.is_happy(42))
x += next_prime_after(x.to_int() * 3);
}
catch (std::exception& e)
{
std::cerr << "oops\n";
}
...compared to...
bool success;
My_X x;
if (get_X(&x, 99)) {
if (x.is_valid()) {
bool happy;
if (x.can_get_happy(&happy, 42) && happy) {
int my_int;
if (x.can_convert_to_int(&my_int)) {
if (!x.add(next_prime_after(x.to_int() * 3))) {
std::cerr << "blah blah\n";
return false;
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
} else { cerr / return false; }
(Exactly how bad it gets depends on whether functions support reporting an error, or can be trusted to always work. That's difficult too, because if something happens that makes it possible for a function to start failing (e.g. it starts using a data file that could potentially be missing), and client code didn't already accept and check an error code or catch exceptions, then that client code may need to be reworked once the potential for errors is recognised. That's less true for exceptions, which - when you're lucky - may propagate to some already-suitable client catch statement, but on the other hand it's risky to assume so without at least eyeballing the client code.)
Once you've considered whatever you know about client usage, there may still be some doubt about which approach is best: often you can just pick something and stick to it throughout your API, but occasionally you may want to offer multiple versions of a function, e.g.:
bool can_authenticate();
void authenticate_or_throw();
...or...
enum Errors_By { Return_Value, Exception };
bool authenticate(Errors_By e) { ... if (e == Exception) throw ...; return ...; }
...or...
template <class Error_Policy>
struct System
{
bool authenticate() { ... Error_Policy::return_or_throw(...); }
...
}
Also, In some functions in my API (eg. authenticate()), I have two options.
As above, you have more than 2 options. Anyway, consistency is very important. It sounds like exceptions are appropriate.
mentioned that it is bad to use C++ exceptions for controlling program flow
That is precisely what exceptions do and all they can be used for, but I do understand what you mean. Ultimately, striking the right balance is an art that comes with having used a lot of other software, considering other libraries your clients will be using alongside yours etc.. That said, if something is an "error" in some sense, it's at least reasonable to consider exceptions.
For something like authenticate(), I'd expect you to return a bool if you were able to compute a true/false value for the authentication, and throw an exception if something prevented you from doing that. The comment about using exceptions for flow control is suggesting NOT doing something like:
try {
...
authenticate();
// rely on the exception to not execute the rest of the code.
...
} catch (...) { ... }
For instance, I can imagine an authenticate() method that relies on contacting some service, and if you can't communicate with that service for some reason, you don't know if the credentials are good or bad.
Then again, the other major rule of thumb for APIs is "be consistent". If the rest of the API relies on exceptions to serve as the false value in similar cases, use that, but to me, it's a little on the ugly side. I'd lean toward reserving exceptions for the exceptional case - i.e. rare, shouldn't ever happen during normal operations, cases.

How should one log when an exception is triggered?

In a program I recently wrote, I wanted to log when my "business logic" code triggered an exception in third-party or project APIs. (To clarify, I want to log when use of an API causes an exception. This can be many frames above the actual throw, and may be many frames below the actual catch, where logging of the exception payload can occur.) I did the following:
void former_function()
{
/* some code here */
try
{
/* some specific code that I know may throw, and want to log about */
}
catch( ... )
{
log( "an exception occurred when doing something with some other data" );
throw;
}
/* some code here */
}
In short, if an exception occurs, create a catch-all clause, log the error, and re-throw. In my mind this is safe. I know in general catch-all is considered bad, since one doesn't have a reference to the exception at all to get any useful information. However, I'm just going to re-throw it, so nothing is lost.
Now, on its own it was fine, but some other programmers modified this program, and ended up violating the above. Specifically, they put a huge amount of code into the try-block in one case, and in another removed the 'throw' and placed a 'return'.
I see now my solution was brittle; it wasn't future-modification-proof.
I want a better solution that does not have these problems.
I have another potential solution that doesn't have the above issue, but I wonder what others think of it. It uses RAII, specifically a "Scoped Exit" object that implicitly triggers if std::uncaught_exception is not true on construction, yet is true on destruction:
#include <ciso646> // not, and
#include <exception> // uncaught_exception
#include <string>
class ExceptionTriggeredLog
{
private:
    std::string const m_log_message;
    bool const m_was_uncaught_exception;
public:
    ExceptionTriggeredLog( std::string const& r_log_message )
        : m_log_message( r_log_message ),
          m_was_uncaught_exception( std::uncaught_exception() )
    {
    }
    ~ExceptionTriggeredLog()
    {
        if( not m_was_uncaught_exception
            and std::uncaught_exception() )
        {
            try
            {
                log( m_log_message );
            }
            catch( ... )
            {
                // no exceptions can leave a destructor,
                // especially when std::uncaught_exception is true.
            }
        }
    }
};
void potential_function()
{
    /* some code here */
    {
        ExceptionTriggeredLog exception_triggered_log( "an exception occurred when doing something with some other data" );
        /* some specific code that I know may throw, and want to log about */
    }
    /* some code here */
}
I want to know:
technically, would this work robustly? Initially it seems to work, but I know there are some caveats about using std::uncaught_exception.
is there another way to accomplish what I want?
Note: I've updated this question. Specifically, I've:
added the try/catch that was initially missing, around the log function-call.
added tracking the std::uncaught_exception state on construction. This guards against the case where this object is created inside a 'try' block of another destructor which is triggered as part of exception stack-unwinding.
fixed the new 'potential_function' to create a named object, not a temporary object as before.
I have no comment on your method, but it seems interesting! I have another way that might also work for what you want, and might be a little more general-purpose. It requires lambdas from C++11 though, which might or might not be an issue in your case.
It's a simple function template that accepts a lambda, runs it and catches, logs and rethrows all exceptions:
template <typename F>
void try_and_log (char const * log_message, F code_block)
{
try {
code_block ();
} catch (...) {
log (log_message);
throw;
}
}
The way you use it (in the simplest case) is like this:
try_and_log ("An exception was thrown here...", [&] {
this_is_the_code ();
you_want_executed ();
and_its_exceptions_logged ();
});
As I said before, I don't know how it stacks against your own solution. Note that the lambda is capturing everything from its enclosing scope, which is quite convenient. Also note that I haven't actually tried this, so compile errors, logical problems and/or nuclear wars may result from this.
The problem I see here is that it is not easy to wrap this into a macro, and expecting your colleagues to write the [&] { and } parts correctly and all the time might be too much!
For wrapping and idiot-proofing purposes, you'll probably need two macros: a TRY_AND_LOG_BEGIN to emit the first line till the opening brace for the lambda and an TRY_AND_LOG_END to emit the closing brace and parenthesis. Like so:
#define TRY_AND_LOG_BEGIN(message) try_and_log (message, [&] {
#define TRY_AND_LOG_END() })
And you use them like this:
TRY_AND_LOG_BEGIN ("Exception happened!") // No semicolons here!
whatever_code_you_want ();
TRY_AND_LOG_END ();
Which is - depending on your perspective - either a net gain or a net loss! (I personally prefer the straightforward function call and lambda syntax, which gives me more control and transparency.)
Also, it is possible to write the log message at the end of the code block; just switch the two parameters of the try_and_log function.