I have a C++ application that calls SQLite's sqlite3_exec() (SQLite is written in C), which in turn can call my callback function implemented in C++. SQLite is compiled into a static library.
If an exception escapes my callback will it propagate safely through the C code of SQLite to the C++ code calling sqlite3_exec()?
My guess is that this is compiler dependent. Either way, throwing an exception in the callback would be a very bad idea: either it will flat-out not work, or the C code in the SQLite library will be unable to handle it. Consider some code like this inside SQLite:
{
    char * p = malloc( 1000 );
    ...
    call_the_callback(); // might throw an exception
    ...
    free( p );
}
If the exception "works", the C code has no possible way of catching it, and p will never be freed. The same goes for any other resources the library may have allocated, of course.
There is already a protocol for the callback to abort the API call. From the docs:
If an sqlite3_exec() callback returns
non-zero, the sqlite3_exec() routine
returns SQLITE_ABORT without invoking
the callback again and without running
any subsequent SQL statements.
I'd strongly recommend you use this instead of an exception.
SQLite expects your callback to return non-zero on error (which makes sqlite3_exec() return SQLITE_ABORT) and zero when all is well. So you ought to wrap the body of your C++ callback in a try/catch, returning a non-zero value from the catch and zero otherwise.
Problems will occur if you bypass returning through SQLite, because it will not free resources or complete whatever work it normally does after your callback returns. This can cause untold problems, some of which may be very obscure.
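A minimal sketch of that wrapping, assuming the standard sqlite3_exec() callback signature (everything else here is illustrative):
extern "C" int my_callback(void* user_data, int argc, char** argv, char** col_names)
{
    try {
        // ... real C++ work that may throw ...
        return 0;   // keep sqlite3_exec() going
    }
    catch (...) {
        // Never let the exception cross into SQLite's C frames; a non-zero
        // return makes sqlite3_exec() stop and return SQLITE_ABORT instead.
        return 1;
    }
}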
That was a really interesting question and I tested it out myself out of curiosity. On OS X with gcc 4.2.1 the answer was YES: it works perfectly. I think a real test would be using gcc for the C++ part and some other compiler (MSVC? LLVM?) for the C part and seeing if it still works.
My code:
callb.h:
#ifdef __cplusplus
extern "C" {
#endif
typedef void (*t_callb)();
void cfun(t_callb fn);
#ifdef __cplusplus
}
#endif
callb.c:
#include "callb.h"
void cfun(t_callb fn) {
    fn();
}
main.cpp:
#include <iostream>
#include <string>
#include "callb.h"
void myfn() {
    std::string s( "My Callb Except" );
    throw s;
}

int main() {
    try {
        cfun(myfn);
    }
    catch(const std::string& s) {
        std::cout << "Caught: " << s << std::endl;
    }
    return 0;
}
If the callback invoked by SQLite runs on the same thread from which you called sqlite3_exec(), a throw somewhere in the call stack should be caught by a higher-level catch.
Testing this yourself should be straightforward, no?
[edit]
After digging a little more myself I found out that the C++ standard is somewhat vague about what the behavior of a C++ function called from C should be when it throws an exception.
You should definitely use the error handling mechanism the API expects. Otherwise you'll most likely leave the API itself in an undefined state, and any further calls could potentially fail or crash.
Not a single answer mentioning building your C library with -fexceptions? Throw in -fno-omit-frame-pointer and you are good to go. This works even across a shared library (written in C), as long as you throw from your main program, and catch in your main program again.
Good Morning!
Edit: This is not a duplicate as it specifically pertains to SEH, not code-level thrown exceptions.
I'm using SEH to catch hardware errors thrown by some unreliable libraries. I'd like to get more information from the catchall exception. The following code simulates what I'm doing. As you can see, I'm using boost's current_exception_diagnostic_information, but it just spits out "No diagnostic information available." - not terribly helpful.
Is it possible to, at a minimum, get the termination code that would have been returned had the exception not have been caught? (In this case 0xC0000005, Access Violation)
#include "stdafx.h"
#include <future>
#include <iostream>
#include <boost/exception/diagnostic_information.hpp>
int slowTask()
{
    // simulates a dodgy bit of code
    int* a = new int[5]();
    a[9000009] = 3;
    std::cout << a[9000009];
    return 0;
}

int main()
{
    try
    {
        std::future<int> result(std::async(slowTask));
        std::cout << result.get();
    }
    catch (...)
    {
        std::cout << "Something has gone dreadfully wrong:"
                  << boost::current_exception_diagnostic_information()
                  << ":That's all I know."
                  << std::endl;
    }
    return 0;
}
Using catch (...) you get no information at all, whether about C++ exceptions or SEH. So global handlers are not the solution.
However, you can get SEH information through a C++ exception handler (catch), as long as you also use _set_se_translator(). The type of the catch should match the C++ exception built and thrown inside your translator.
The function that you write must be a native-compiled function (not compiled with /clr). It must take an unsigned integer and a pointer to a Win32 _EXCEPTION_POINTERS structure as arguments. The arguments are the return values of calls to the Win32 API GetExceptionCode and GetExceptionInformation functions, respectively.
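A minimal sketch of that, assuming MSVC compiled with /EHa; the seh_exception type is an illustrative name, not a library class, and since _set_se_translator() is per thread it must be installed in the thread that actually runs the dodgy code:
#include <windows.h>
#include <eh.h>
#include <iostream>
#include <stdexcept>

struct seh_exception : std::runtime_error {
    unsigned int code;
    explicit seh_exception(unsigned int c)
        : std::runtime_error("SEH exception"), code(c) {}
};

void se_translator(unsigned int code, _EXCEPTION_POINTERS*)
{
    throw seh_exception(code);   // turn the SEH event into a C++ exception
}

int main()
{
    _set_se_translator(se_translator);
    try {
        int* p = nullptr;
        *p = 42;                                      // access violation
    }
    catch (const seh_exception& e) {
        std::cout << std::hex << e.code << std::endl; // prints c0000005
    }
}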
Or you can use __try/__except, which are specific to SEH. They can be used inside C++ just fine, including in programs where C++ exceptions are used, as long as you don't have SEH __try and C++ try in the same function (to trap both types of exceptions in the same block, use a helper function).
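A minimal sketch of that helper-function approach (again MSVC-specific; call_with_seh is an illustrative name, and the task parameter stands in for something like slowTask from the question):
#include <windows.h>

// Note: a function that uses __try may not contain local C++ objects needing unwinding.
int call_with_seh(int (*task)(), DWORD* seh_code)
{
    __try {
        return task();
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        *seh_code = GetExceptionCode();   // e.g. 0xC0000005 for an access violation
        return -1;
    }
}
The caller can then put an ordinary C++ try/catch around call_with_seh(slowTask, &code) and inspect the returned SEH code separately.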
Is it evil to redefine the assert macro?
Some folks recommend using your own macro ASSERT(cond) rather than redefining the existing, standard assert(cond) macro. But that does not help if you have a lot of legacy code that already uses assert(), that you don't want to make source code changes to, and whose assertion reporting you want to intercept and regularize.
I have done
#undef assert
#define assert(cond) ... my own assert code ...
in situations such as the above - code already using assert, whose assert-failing behavior I wanted to extend - when I wanted to do things like
1) printing extra error information to make the assertions more useful
2) automatically invoking a debugger or stack trace on an assert
... this, 2), can be done without redefining assert, by implementing a SIGABRT signal handler (a sketch follows after this list).
3) converting assertion failures into throws.
... this, 3), cannot be done by a signal handler - since you can't throw a C++ exception from a signal handler. (At least not reliably.)
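For 2), a minimal sketch of such a SIGABRT handler; backtrace() and backtrace_symbols_fd() are glibc extensions rather than standard C++, and this is not strictly async-signal-safe, just the usual post-mortem trick:
#include <csignal>
#include <unistd.h>
#include <execinfo.h>

extern "C" void on_abort(int)
{
    void* frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);  // dump a raw stack trace
    _exit(1);
}

int main()
{
    std::signal(SIGABRT, on_abort);
    // ... code whose assert() failures will now dump a stack trace ...
}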
Why might I want to make assert throw? Stacked error handling.
I usually do the latter not because I want the program to continue running after the assertion (although see below), but because I like using exceptions to provide better context on errors. I often do:
int main() {
    try { some_code(); }
    catch(...) {
        std::string err = "exception caught in command foo";
        std::cerr << err;
        exit(1);
    }
}

void some_code() {
    try { some_other_code(); }
    catch(...) {
        std::string err = "exception caught when trying to set up directories";
        std::cerr << err;
        throw "unhandled exception, throwing to add more context";
    }
}

void some_other_code() {
    try { some_other2_code(); }
    catch(...) {
        std::string err = "exception caught when trying to open log file " + logfilename;
        std::cerr << err;
        throw "unhandled exception, throwing to add more context";
    }
}
etc.
I.e. the exception handlers add a bit more error context, and then rethrow.
Sometimes I have the exception handlers print, e.g. to stderr.
Sometimes I have the exception handlers push onto a stack of error messages.
(Obviously that won't work when the problem is running out of memory.)
** These assert exceptions still exit ... **
Somebody who commented on this post, @IanGoldby, said "The idea of an assert that doesn't exit doesn't make any sense to me."
Lest I was not clear: I usually have such exceptions exit. But eventually, perhaps not immediately.
E.g. instead of
#include <iostream>
#include <assert.h>
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
    assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
    std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}

void bar(int n)
{
    baz(n);
}

void foo(int n)
{
    bar(n);
}

int main(int argc, char** argv)
{
    foo( argv[0] == std::string("1") );
}
producing only
% ./assert-exceptions
assertion "n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."" failed: file "assert-exceptions.cpp", line 9, function: void baz(int)
/bin/sh: line 1: 22180 Aborted (core dumped) ./assert-exceptions/
%
you might do
#include <iostream>
//#include <assert.h>
#define assert_error_report_helper(cond) "assertion failed: " #cond
#define assert(cond) {if(!(cond)) { std::cerr << assert_error_report_helper(cond) "\n"; throw assert_error_report_helper(cond); } }
//^ TBD: yes, I know assert needs more stuff to match the definition: void, etc.
#define OS_CYGWIN 1
void baz(int n)
{
#if OS_CYGWIN
    assert( n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin." );
#else
    std::cout << "I know how to do baz(n) most places, and baz(n), n!=1 on Cygwin, but not baz(1) on Cygwin.\n";
#endif
}

void bar(int n)
{
    try {
        baz(n);
    }
    catch(...) {
        std::cerr << "trying to accomplish bar by baz\n";
        throw "bar";
    }
}

void foo(int n)
{
    bar(n);
}

int secondary_main(int argc, char** argv)
{
    foo( argv[0] == std::string("1") );
    return 0;
}

int main(int argc, char** argv)
{
    try {
        return secondary_main(argc,argv);
    }
    catch(...) {
        std::cerr << "main exiting because of unknown exception ...\n";
        return 1;
    }
}
and get the slightly more meaningful error messages
assertion failed: n == 1 && "I don't know how to do baz(1) on Cygwin). Should not call baz(1) on Cygwin."
trying to accomplish bar by baz
main exiting because of unknown exception ...
I should not have to explain why these context sensitive error messages can be more meaningful.
E.g. the user may not have the slightest idea why baz(1) is being called.
It may well be a program error - on Cygwin, you may have to call cygwin_alternative_to_baz(1).
But the user may understand what "bar" is.
Yes: this is not guaranteed to work. But, for that matter, asserts are not guaranteed to work if they do anything more complicated than something like calling write() in the abort handler:
write(2,"error baz(1) has occurred",64);
and even that is not guaranteed to work (there's a security bug in this invocation: the length passed is longer than the string, so it reads past the end of the literal).
E.g. if malloc or sbrk has failed.
Why might I want to make assert throw? Testing
The other big reason that I have occasionally redefined assert has been to write unit tests for legacy code, code that uses assert to signal errors, which I am not allowed to rewrite.
If this code is library code, then it is convenient to wrap calls via try/catch. See if the error is detected, and go on.
Oh, heck, I might as well admit it: sometimes I wrote this legacy code. And I deliberately used assert() to signal errors, because I could not rely on the user doing try/catch/throw - in fact, oftentimes the same code must be used in a C/C++ environment. I did not want to use my own ASSERT macro - because, believe it or not, ASSERT often conflicts. I find code that is littered with FOOBAR_ASSERT() and A_SPECIAL_ASSERT() ugly. No... simply using assert() by itself is elegant and basically works. And it can be extended... if it is okay to override assert().
Anyway, whether the code that uses assert() is mine or from someone else: sometimes you want code to fail, by calling SIGABRT or exit(1) - and sometimes you want it to throw.
I know how to test code that fails by calling exit() or raising SIGABRT - something like
for all tests do
    fork
    ... run test in child
    wait
    check exit status
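A more concrete sketch of that loop (POSIX only; run_test and test_count are illustrative names, not from any real harness):
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <cstdio>

void run_all_tests(void (*run_test)(int), int test_count)
{
    for (int i = 0; i < test_count; ++i) {
        pid_t pid = fork();
        if (pid == 0) {                 // child: run one test; assert may abort it
            run_test(i);
            _exit(0);
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT)
            std::printf("test %d died on an assert (SIGABRT)\n", i);
        else if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
            std::printf("test %d exited with status %d\n", i, WEXITSTATUS(status));
    }
}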
but this code is slow, not always portable, and often runs several thousand times slower than just running the tests in-process:
for all tests do
    try {
        ... run test
    } catch (...) {
        ...
    }
This is riskier than just stacking error message context, since you may continue operating after the failure. But you can always choose which types of exceptions to catch.
Meta-Observation
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
If this is right ... if there is a phase change in error reporting, from exit(1)/signals to exceptions ... one still has the question of how to live with the legacy code.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?
Redefining a Standard macro is an ugly idea, and you can be sure the behaviour's technically undefined, but in the end macros are just source code substitutions and it's hard to see how it could cause problems, as long as the assertion causes your program to exit.
That said, your intended substitution may not be reliably used if any code in the translation unit after your definition itself redefines assert, which suggests a need for a specific order of includes etc. - damned fragile.
If your assert substitutes code that doesn't exit, you open up new problems. There are pathological edge cases where your ideas about throwing instead could fail, such as:
int f(int n)
{
    try
    {
        assert(n != 0);
        call_some_library_that_might_throw(n);
    }
    catch (...)
    {
        // ignore errors...
    }
    return 12 / n;
}
Above, a value of 0 for n starts crashing the application instead of stopping it with a sane error message: any explanation in the thrown message won't be seen.
I am with Andrei Alexandrescu in thinking that exceptions are the best known method to report errors in code that wants to be secure. (Because the programmer cannot forget to check an error return code.)
I don't recall Andrei saying quite that - do you have a quote? He's certainly thought very carefully about how to create objects that encourage reliable exception handling, but I've never heard/seen him suggest that a stop-the-program assert is inappropriate in certain cases. Assertions are a normal way of enforcing invariants - there's definitely a line to be drawn concerning which potential assertions can be continued from and which can't, but on one side of that line assertions continue to be useful.
The choice between returning an error value and using exceptions is the traditional ground for the kind of argument/preference you mention, as they're more legitimately alternatives.
If this is right ... if there is a phase change in error reporting, from exit(1)/signals/ to exceptions ... one still has the question of how to live with the legacy code.
As above, you shouldn't try to migrate all existing exit() / assert etc. to exceptions. In many cases, there will be no way to meaningfully continue processing, and throwing an exception just creates doubt about whether the issue will be recorded properly and lead to the intended termination.
And, overall - there are several error reporting schemes. If different libraries use different schemes, how do you make them live together?
Where that becomes a real issue, you'd generally select one approach and wrap the non-conforming libraries with a layer that provides the error handling you like.
I wrote an application that runs on an embedded system. In the early days I sprinkled asserts through the code liberally, ostensibly to document conditions in the code that should be impossible (but in a few places as lazy error-checking).
It turned out that the asserts were occasionally being hit, but no one ever got to see the message output to the console containing the file and line number, because the console serial port generally was not connected to anything. I later redefined the assert macro so that instead of outputting a message to the console it would send a message over the network to the error logger.
Whether or not you think redefining assert is 'evil', this works well for us.
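A minimal sketch of that kind of redefinition; net_log_assert_failure() is a placeholder for whatever transport the target actually has, and whether the macro still aborts afterwards is a separate design decision:
#undef assert
#ifdef NDEBUG
#define assert(cond) ((void)0)
#else
void net_log_assert_failure(const char* expr, const char* file, int line);  // placeholder
#define assert(cond) \
    ((cond) ? (void)0 : net_log_assert_failure(#cond, __FILE__, __LINE__))
#endif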
If you include any headers/libraries that use assert after your redefinition, they will pick up your version and you may see unexpected behavior; otherwise the compiler allows you to do it, so you can do it.
My suggestion, based on personal opinion, is that in any case you can define your own assert without redefining the existing one. You never gain extra benefit from redefining the existing one over defining a new one with a new name.
So I have a library (not written by me) which unfortunately uses abort() to deal with certain errors. At the application level, these errors are recoverable so I would like to handle them instead of the user seeing a crash. So I end up writing code like this:
static jmp_buf abort_buffer;

static void abort_handler(int) {
    longjmp(abort_buffer, 1); // perhaps siglongjmp if available..
}

int function(int x, int y) {
    struct sigaction new_sa;
    struct sigaction old_sa;
    sigemptyset(&new_sa.sa_mask);
    new_sa.sa_handler = abort_handler;
    sigaction(SIGABRT, &new_sa, &old_sa);

    if(setjmp(abort_buffer)) {
        sigaction(SIGABRT, &old_sa, 0);
        return -1;
    }

    // attempt to do some work here
    int result = f(x, y); // may call abort!

    sigaction(SIGABRT, &old_sa, 0);
    return result;
}
Not very elegant code. Since this pattern ends up having to be repeated in a few spots of the code, I would like to simplify it a little and possibly wrap it in a reusable object. My first attempt involves using RAII to handle the setup/teardown of the signal handler (needs to be done because each function needs different error handling). So I came up with this:
template <int N>
struct signal_guard {
    signal_guard(void (*f)(int)) {
        sigemptyset(&new_sa.sa_mask);
        new_sa.sa_handler = f;
        sigaction(N, &new_sa, &old_sa);
    }
    ~signal_guard() {
        sigaction(N, &old_sa, 0);
    }
private:
    struct sigaction new_sa;
    struct sigaction old_sa;
};

static jmp_buf abort_buffer;

static void abort_handler(int) {
    longjmp(abort_buffer, 1);
}

int function(int x, int y) {
    signal_guard<SIGABRT> sig_guard(abort_handler);

    if(setjmp(abort_buffer)) {
        return -1;
    }

    return f(x, y);
}
Certainly the body of function is much simpler and more clear this way, but this morning a thought occurred to me. Is this guaranteed to work? Here's my thoughts:
No variables are volatile or change between calls to setjmp/longjmp.
I am longjmping to a location in the same stack frame as the setjmp and returning normally, so I am allowing the code to execute the cleanup code that the compiler emitted at the exit points of the function.
It appears to work as expected.
But I still get the feeling that this is likely undefined behavior. What do you guys think?
I assume that f is in a third party library/app, because otherwise you could just fix it to not call abort. Given that, and that RAII may or may not reliably produce the right results on all platforms/compilers, you have a few options.
Create a tiny shared object that defines abort and LD_PRELOAD it. Then you control what happens on abort, and NOT in a signal handler.
Run f within a subprocess. Then you just check the return code and if it failed try again with updated inputs.
Instead of using the RAII, just call your original function from multiple call points and let it manually do the setup/teardown explicitly. It still eliminates the copy-paste in that case.
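A minimal sketch of the first option, assuming a Linux/glibc target and that the library reaches abort() through the dynamic symbol table:
// Build: g++ -shared -fPIC -o libnoabort.so noabort.cpp
// Run:   LD_PRELOAD=./libnoabort.so ./your_app
#include <cstdio>
#include <cstdlib>

// Interposes the C library's abort(); what "handling" means here is up to you.
extern "C" void abort(void) noexcept
{
    std::fprintf(stderr, "abort() intercepted\n");
    std::_Exit(EXIT_FAILURE);   // skip atexit handlers, much as abort() would
}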
I actually like your solution, and have coded something similar in test harnesses to check that a target function assert()s as expected.
I can't see any reason for this code to invoke undefined behaviour. The C Standard seems to bless it: handlers resulting from an abort() are exempted from the restriction on calling library functions from a handler. (Caveat: this is 7.14.1.1(5) of C99 - sadly, I don't have a copy of C90, the version referenced by the C++ Standard).
C++03 adds a further restriction: if any automatic objects would be destroyed by a thrown exception transferring control to another (destination) point in the program, then a call to longjmp(jbuf, val) at the throw point that transfers control to the same (destination) point has undefined behavior. I'm supposing that your statement that 'No variables are volatile or change between calls to setjmp/longjmp' also means that no automatic C++ objects with destructors are created in between. (I guess this is some legacy C library?)
Nor is POSIX async signal safety (or lack thereof) an issue - abort() generates its SIGABRT synchronously with program execution.
The biggest concern would be corrupting the global state of the 3rd party code: it's unlikely that the author will take pains to get the state consistent before an abort(). But, if you're correct that no variables change, then this isn't a problem.
If someone with a better understanding of the standardese can prove me wrong, I'd appreciate the enlightenment.
I recently stumbled into this C++/Lua error:
int function_for_lua( lua_State* L )
{
    std::string s("Trouble coming!");
    /* ... */
    return luaL_error(L, "something went wrong");
}
The error is that luaL_error uses longjmp, so the stack is never unwound and s is never destructed, leaking memory. There are a few more Lua APIs that fail to unwind the stack.
One obvious solution is to compile Lua in C++ mode with exceptions. I, however, cannot as Luabind needs the standard C ABI.
My current thought is to write my own functions that mimic the troublesome parts of the Lua API:
// just a heads up this is valid c++. It's called a function try block.
int function_for_lua( lua_State* L )
try
{
    /* code that may throw Lua_error */
}
catch( Lua_error& e )
{
    return luaL_error(L, e.what());
}
So my question: is function_for_lua's stack properly unwound? Can something go wrong?
If I understand correctly, with Luabind functions that throw exceptions are properly caught and translated anyway. (See reference.)
So whenever you need to indicate an error, just throw a standard exception:
void function_for_lua( lua_State* L )
{
    std::string s("Trouble coming!");
    /* ... */
    // translated into a lua error
    throw std::runtime_error("something went wrong");
}
Disclaimer: I've never used Luabind.
Does C++ offer a way to 'show' something visual if an unhandled exception occurs?
What I want to do is to make something like assert(unhandled exception.msg()) if it actually happens (like in the following sample):
#include <stdexcept>
void foo() {
    throw std::runtime_error("Message!");
}

int main() {
    foo();
}
I expect this kind of code not to terminate immediately (because exception was unhandled), rather show custom assertion message (Message! actually).
Is that possible?
There's no way specified by the standard to actually display the message of the uncaught exception. However, on many platforms, it is possible anyway. On Windows, you can use SetUnhandledExceptionFilter and pull out the C++ exception information. With g++ (appropriate versions of anyway), the terminate handler can access the uncaught exception with code like:
void terminate_handler()
{
    try { throw; }
    catch (const std::exception& e) { log(e.what()); }
    catch (...) {}
}
and indeed g++'s default terminate handler does something similar to this. You can set the terminate handler with set_terminate.
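A minimal, self-contained sketch of wiring that up, with std::cerr standing in for whatever log() is in your code; as noted above, rethrowing inside the handler relies on g++/mainstream behaviour rather than a standard guarantee:
#include <cstdlib>
#include <exception>
#include <iostream>
#include <stdexcept>

void my_terminate_handler()
{
    try { throw; }   // rethrow the exception that triggered terminate
    catch (const std::exception& e) { std::cerr << "uncaught: " << e.what() << '\n'; }
    catch (...)                     { std::cerr << "uncaught non-std exception\n"; }
    std::abort();    // a terminate handler must not return
}

int main()
{
    std::set_terminate(my_terminate_handler);
    throw std::runtime_error("Message!");   // never caught -> handler runs
}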
In short, no: there's no generic C++ way, but there are ways depending on your platform.
Microsoft Visual C++ allows you to hook unhandled C++ exceptions like this; it is standard library behaviour, not an MSVC extension.
You set a handler via a call to set_terminate. It's recommended that your handler does very little work and then terminates the program, but I don't see why you could not signal something via an assert - though you don't have access to the exception that caused the problem.
I think you would benefit from a catch-all statement as follows:
int main() {
    try {
        foo();
    }
    catch (...) {
        // Do something with the unhandled exception.
    }
}
If you are using Windows, a good library for handling unhandled exceptions and crashes is CrashRpt. If you want to do it manually you can also use the following I wrote in this answer.
If I'm reading your question correctly, you're asking if you can overload throw (changing its default behavior) so it does something user-defined. No, you can't.
Edit: since you're insistent :), here's a bad idea™:
#include <iostream>
#include <stdlib.h>
#include <windows.h>

void monkey() {
    throw std::exception("poop!");  // std::exception(const char*) is an MSVC extension
}

LONG WINAPI MyUnhandledExceptionFilter(struct _EXCEPTION_POINTERS *lpTopLevelExceptionFilter) {
    std::cout << "poop was thrown!" << std::endl;
    return EXCEPTION_EXECUTE_HANDLER;
}

int main() {
    SetUnhandledExceptionFilter(&MyUnhandledExceptionFilter);
    monkey();
    return 1;
}
Again, this is a very bad idea, and it's obviously platform-dependent, but it works.
Yes, it's possible. Here you go:
#include <iostream>
#include <exception>
#include <stdexcept>

void foo()
{
    throw std::runtime_error("Message!");
}

int main()
{
    try
    {
        foo();
    }
    catch (std::exception& e)
    {
        std::cout << "Got exception: " << e.what() << std::endl;
    }
    return 0;
}
The standard C++ mechanism is the terminate handler, as others have said.
If you are after better traceability for throws, then this is what we do:
We have a macro Throw that logs the file name and line number and message and then throws. It takes a printf style varargs message.
Throw(proj::FooException, "Fingle %s unable to process bar %d", fingle.c_str(), barNo);
I get a nice log message
Throw FooException from nargle.cpp:42 Fingle barf is unable to process bar 99
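A minimal sketch of that kind of macro; the names Throw, FooException and format_msg here are illustrative, not the actual code behind the log line above:
#include <cstdarg>
#include <cstdio>
#include <stdexcept>
#include <string>

struct FooException : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// printf-style formatting into a std::string (truncates past 512 characters)
inline std::string format_msg(const char* fmt, ...)
{
    char buf[512];
    va_list args;
    va_start(args, fmt);
    std::vsnprintf(buf, sizeof buf, fmt, args);
    va_end(args);
    return buf;
}

#define Throw(ExType, ...)                                            \
    do {                                                              \
        std::string m = format_msg(__VA_ARGS__);                      \
        std::fprintf(stderr, "Throw %s from %s:%d %s\n",              \
                     #ExType, __FILE__, __LINE__, m.c_str());         \
        throw ExType(m);                                              \
    } while (0)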
If you're really interested in what happened to cause your program to fail, you might benefit from examining the process image in a post-mortem debugger. The precise technique varies a bit from OS to OS, but the basic approach is to enable core dumps and compile your program with debug symbols. Once the program crashes, the operating system will copy its memory to disk, and you can then examine the state of the program at the time it crashed.