Compiler/platform handling of buggy std::terminate() handlers - c++

While reviewing a bit of code, I came across a buggy std::terminate() handler
that wasn't terminating the program, but returning. Based on the documentation for std::set_terminate(), I think this falls within the realm of undefined - or at least implementation-defined - behaviour.
Under Linux, compiled with GCC, I found that cores were being dumped, which implies that some guardian angel was calling abort() or something similar on our behalf.
So I wrote the following test snippet, which confirmed my hunch. It looks like GCC or its standard library does wrap std::terminate() handlers so that they do terminate the program.
#include <iostream>
#include <exception>
#include <dlfcn.h>
// Compile using
// g++ main.cpp -ldl
// Wrap abort() in my own implementation using
// dlsym(), so I can see if GCC generates code to
// call it if my std::terminate handler doesn't.
namespace std
{
    void abort()
    {
        typedef void (*aborter)();
        static aborter real_abort = 0x0;
        if (0x0 == real_abort)
        {
            void * handle = 0x0;
            handle = dlsym(RTLD_NEXT, "abort");
            if (handle)
            {
                real_abort = (aborter)(handle);
            }
        }
        std::cout << "2. Proof that GCC calls abort() if my buggy\n"
                  << "   terminate handler returns instead of terminating."
                  << std::endl;
        if (real_abort)
        {
            real_abort();
        }
    }
}
// Buggy terminate handler that returns instead of terminating
// execution via abort (or exit)
void buggyTerminateHandler()
{
    std::cout << "1. In buggyTerminateHandler." << std::endl;
}
int main (int argc, char ** argv)
{
    // Set terminate handler
    std::set_terminate(buggyTerminateHandler);
    // Raise unhandled exception
    throw 1;
}
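For contrast, a conforming handler ends the program explicitly rather than returning (a minimal sketch, not the code under review):
#include <cstdlib>
#include <exception>
void conformingTerminateHandler()
{
    // ... log whatever diagnostic information is useful ...
    std::abort();   // a terminate handler must not return
}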
That a compiler (or library) would wrap std::terminate() handlers seems sensible to me, so at a guess I'd assume that most compilers do something along these lines.
Can anyone advise regarding the behaviour on Windows using Visual Studio or OS X using GCC?

Related

Destructor of stack allocated object does not get called when program is killed from the terminal [duplicate]

I have a class with a user-defined destructor. If an object of the class has been created, and SIGINT is then issued (using CTRL+C in Unix) while the program is running, will the destructor be called? What is the behaviour for SIGTSTP (CTRL+Z in Unix)?
No, by default, most signals cause an immediate, abnormal exit of your program.
However, you can easily change the default behavior for most signals.
This code shows how to make a signal exit your program normally, including calling all the usual destructors:
#include <iostream>
#include <signal.h>
#include <unistd.h>
#include <cstring>
#include <atomic>
std::atomic<bool> quit(false);    // signal flag
void got_signal(int)
{
    // Signal handler function.
    // Set the flag and return.
    // Never do real work inside this function.
    // See also: man 7 signal-safety
    quit.store(true);
}
class Foo
{
public:
    ~Foo() { std::cout << "destructor\n"; }
};
int main(void)
{
    struct sigaction sa;
    memset( &sa, 0, sizeof(sa) );
    sa.sa_handler = got_signal;
    sigfillset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);
    Foo foo;    // needs destruction before exit
    while (true)
    {
        // do real work here...
        sleep(1);
        if( quit.load() ) break;    // exit normally after SIGINT
    }
    return 0;
}
If you run this program and press control-C, you should see the word "destructor" printed.
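For example, compiling and running the snippet above (assuming it is saved as sig.cpp) might look like this:
$ g++ -o sig sig.cpp
$ ./sig
^C
destructor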
Be aware that your signal handler function (got_signal) should rarely do any work, other than setting a flag and returning quietly, unless you really know what you are doing. See also: https://man7.org/linux/man-pages/man7/signal-safety.7.html
Most signals are catchable as shown above, but not SIGKILL (you have no control over it, because SIGKILL is a last-ditch method for killing a runaway process) and not SIGSTOP (which allows a user to freeze a process cold). Note that you can catch SIGTSTP (control-Z) if desired, but you don't need to if your only interest in signals is destructor behaviour: after a control-Z the process will eventually be woken up, will continue running, and will exit normally with all the destructors in effect.
If you do not handle these signals yourself, then, no, the destructors are not called. However, the operating system will reclaim any resources your program used when it terminates.
If you wish to handle signals yourself, then consider checking out the POSIX sigaction function.
Let's try it:
#include <stdio.h>
#include <unistd.h>
class Foo {
public:
    Foo() {}
    ~Foo() { printf("Yay!\n"); }
} bar;
int main(int argc, char **argv) {
    sleep(5);
}
And then:
$ g++ -o test ./test.cc
$ ./test
^C
$ ./test
Yay!
So I'm afraid not, you'll have to catch it.
As for SIGSTOP, it cannot be caught, and pauses the process until a SIGCONT is sent.
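For completeness, those two look like this from a shell (hypothetical PID):
$ kill -STOP 12345
$ kill -CONT 12345
The first freezes the process and cannot be caught or ignored; the second lets it continue.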

How to pass C++ exceptions through functions compiled with LibJIT?

I'm using LibJIT to JIT-compile a programming language. My compiler is written in C++, so I have a C++ program calling the LibJIT library, which is written in C.
In my C++ code I have some functions that throw exceptions, and sometimes these get triggered on the far side of a LibJIT-compiled function. That is, my C++ program uses LibJIT to compile a function, calls it, and the JIT-compiled function in turn calls a C++ function that throws an exception. But instead of the exception being propagated back through the JIT-compiled function to my handler, the runtime calls std::terminate.
Apparently the exception-handling mechanism needs each function in the call stack to implement some kind of exception support. When compiling C code, the -fexceptions flag tells the compiler (at least gcc) to include this support.
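For plain C code that just means compiling with something like this (illustrative command line):
$ gcc -c -fexceptions helper.c
but the functions in question here are emitted by LibJIT rather than by a C compiler, so there is nothing to pass -fexceptions to.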
Is there a way to include this exception support in my LibJIT-generated functions?
I tried using the LibJIT instructions to set up a "catcher" block that rethrows all exceptions, but that didn't work; it looks like LibJIT's exception-handling model is separate from the C++ model.
Obviously I don't really want to throw C++ exceptions into my language's code, and I'll have to figure out some other error-handling strategy. But I'm wondering if it's even possible to get it to work.
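For reference, the fallback I have in mind is to catch everything at the extern "C" boundary and translate it into an error code, so that no C++ exception ever has to unwind through a JIT-compiled frame (a minimal sketch; the status codes and wrapper name are hypothetical):
extern "C" int bad(int arg);    // the throwing function from the sample below
enum BadStatus { kOk = 0, kFailed = 1 };
extern "C" int bad_noexcept(int arg, int* status) {
    try {
        *status = kOk;
        return bad(arg);
    } catch (...) {
        *status = kFailed;    // never let the exception cross the JIT frame
        return 0;
    }
}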
Here's a sample program that demonstrates the issue. I'm sorry it's this long but LibJIT is pretty low-level so you have to write a lot of code to get anything done. I'm compiling on Linux with clang 10.0.0.
#include <array>
#include <exception>
#include <iostream>
#include <string>
#include <jit/jit.h>
class ex : public std::exception {};
extern "C" int bad(int arg) {
    std::cout << "Called with arg=" << arg << std::endl;
    throw ex();
}
jit_function_t build_function(jit_context_t context) {
    // The signature of the compiled function and "bad" are the same.
    std::array<jit_type_t, 1> params{jit_type_int};
    jit_type_t signature = jit_type_create_signature(jit_abi_cdecl, jit_type_int, params.data(), params.size(), 1);
    jit_function_t function = jit_function_create(context, signature);
    jit_value_t arg_value = jit_value_get_param(function, 0);
    jit_value_t bad_value =
        jit_value_create_nint_constant(function, jit_type_void_ptr, (reinterpret_cast<jit_nint>(bad)));
    std::array<jit_value_t, 1> bad_args{arg_value};
    jit_value_t result = jit_insn_call_indirect(function, bad_value, signature, bad_args.data(), bad_args.size(), 0);
    jit_insn_return(function, result);
    jit_function_compile(function);
    return function;
}
int main(int argc, char* argv[]) {
    // This exception will be caught:
    try {
        bad(7);
    } catch (const ex&) {
        std::cout << "Caught exception from direct call" << std::endl;
    }
    // Initialize libjit.
    // We're single-threaded so not going to bother with jit_context_build_start/jit_context_build_end.
    jit_init();
    jit_context_t context = jit_context_create();
    jit_function_t function = build_function(context);
    // This one will not be caught, calls std::terminate instead:
    try {
        int (*closure)(int) = reinterpret_cast<int (*)(int)>(jit_function_to_closure(function));
        closure(7);
    } catch (const ex&) {
        std::cout << "Caught exception from libjit call" << std::endl;
    }
    return 0;
}
Output:
Called with arg=7
Caught exception from direct call
Called with arg=7
terminate called after throwing an instance of 'ex'
what(): std::exception
Aborted

boost, coroutine2 (1.63.0): throwing exception crashes visual studio on 32bit windows

In my application I'm using coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream, where std::ios_base::failure would theoretically be thrown, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function and called from C#. This all happens on a 32bit process on Windows 10 x64. Unfortunately it only reliably crashes when I start my test from C# in debugging mode WITHOUT the native debugger attached. As soon as I attach the native debugger everything works like expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
    using coro_t = boost::coroutines2::coroutine<bool>;
    coro_t::pull_type source([](coro_t::push_type& yield) {
        std::vector<char> buffer(200300, 0);
        std::stringstream stream;
        stream.write(buffer.data(), buffer.size());
        stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
        try {
            std::vector<char> dest(100100, 0);
            while (stream.good() && !stream.eof()) {
                stream.read(&dest[0], dest.size());
                std::cerr << "CORO: read: " << stream.gcount() << std::endl;
            }
        }
        catch (const std::exception& ex) {
            std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
        }
        catch (...) {
            std::cerr << "CORO: caught unknown exception." << std::endl;
        }
    });
    std::cout << "SUCCESS" << std::endl;
    return 0;
}
C#:
using System;
using System.Runtime.InteropServices;
namespace CoroutinesTest
{
    class Program
    {
        [DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        internal static extern Int32 test();
        static void Main(string[] args)
        {
            test();
            Console.WriteLine("SUCCESS");
        }
    }
}
Some details:
We are using Visual Studio 2015 (version 14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither test has been successful so far.
If you start the C# code with CTRL+F5 (i.e. without the .NET debugger), everything will also be fine. Only if you start it with F5 (i.e. with the .NET debugger attached) will the Visual Studio instance crash. Also be sure not to enable the native debugger!
Note: If we don't use the exceptions in the stream, everything seems to be fine as well. Unfortunately the code decoding my objects makes use of them, and therefore I cannot avoid this.
It would be amazing if you had some additional hints on what might go wrong here, or a solution. I'm not entirely sure whether this is a Boost bug; it could also be the C# debugger interfering with Boost.Context.
Thanks in advance! Best Regards, Michael
I realize this question is old but I just finished reading a line in the docs that seemed pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared as extern "C" never throw a C++ exception) to /EHs (tells the compiler to assume that functions declared as extern "C" may throw an exception).
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing it.
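Something along these lines inside the lambda is what I mean (a minimal sketch of the suggestion, not tested against the original code):
coro_t::pull_type source([](coro_t::push_type& yield) {
    // ... decode one object from the stream ...
    yield(true);    // push a value to the caller for each decoded object
});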

QueueUserAPC - throwing exception crashes, possible mingw bug

Solved: I upgraded from MinGW 4.6.2 to 4.7.0 and it works perfectly; I guess it was just a bug.
I started to do some research on how to terminate a multithreaded application properly, and I found those two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate.
I thought I should give it a try, and the application keeps crashing when I throw the exception from the APCProc.
Code:
#include <stdio.h>
#include <windows.h>
class ExitException
{
public:
    char *desc;
    DWORD exit_code;
    ExitException(char *desc, int exit_code): desc(desc), exit_code(exit_code)
    {}
};
//I use this class to check if objects are deconstructed upon termination
class Test
{
public:
    char *s;
    Test(char *s): s(s)
    {
        printf("%s ctor\n", s);
    }
    ~Test()
    {
        printf("%s dctor\n", s);
    }
};
DWORD CALLBACK ThreadProc(void *useless)
{
    try
    {
        Test t("thread_test");
        SleepEx(INFINITE, true);
        return 0;
    }
    catch (ExitException &e)
    {
        printf("Thread exits\n%s %lu", e.desc, e.exit_code);
        return e.exit_code;
    }
}
void CALLBACK exit_apc_proc(ULONG_PTR param)
{
    puts("In APCProc");
    ExitException e("Application exit signal!", 1);
    throw e;
    return;
}
int main()
{
    HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
    Sleep(1000);
    QueueUserAPC(exit_apc_proc, thread, 0);
    WaitForSingleObject(thread, INFINITE);
    puts("main: bye");
    return 0;
}
My question is: why does this happen?
I use MinGW for compilation and my OS is 64-bit.
Can this be the reason? I read that you shouldn't call QueueUserAPC from a 32-bit app for a thread which runs in a 64-bit process, or vice versa, but that shouldn't be the case here.
EDIT: I compiled this with Visual Studio's C++ compiler (2010) and it worked flawlessly; is it possible that this is a bug in gcc/mingw?
I can reproduce the same thing with VS2005. The problem is that the compiler optimizes the catch away. Why? Because according to the C++ standard it's undefined what happens if an extern "C" function exits with an exception. So the compiler assumes that SleepEx (which is extern "C") does not ever throw. After inlining of Test::Test and Test::~Test it sees that the printf doesn't throw either, and consequently if something in this block exits via an exception
Test t("thread_test");
SleepEx(INFINITE,true);
return 0;
the behavior is undefined!
In MSVC the code doesn't work with the /EHsc switch in a Release build, but works with /EHa or /EHs, which tell it to assume that C functions may throw. Perhaps GCC has a similar flag.
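For reference, the switches mentioned above would be used roughly like this (illustrative command lines, not taken from the original project):
cl /O2 /EHsc main.cpp   (Release-style build; the catch may be optimized away)
cl /O2 /EHs  main.cpp   (assume extern "C" functions may throw C++ exceptions)
cl /O2 /EHa  main.cpp   (additionally allow structured/SEH exceptions to reach catch(...))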

avoiding abort in libgmp

I have some code that uses libgmp. At some point the user may request a factorial of a very large number. Unfortunately, this results in libgmp raising an abort signal.
For example the following code:
#include <cmath>
#include <gmp.h>
#include <iostream>
int main() {
    mpz_t result;
    mpz_init(result);
    mpz_fac_ui(result, 20922789888000);
    std::cout << mpz_get_si(result) << std::endl;
}
Results in:
$ ./test
gmp: overflow in mpz type
Aborted
Apparently, the number produced is REALLY big. Is there any way to handle the error more gracefully than an abort? This is a GUI-based application, and aborting is pretty much the least desirable way to handle this sort of issue.
It would appear that you are out of luck, based on the code in mpz/realloc.c and mpz/realloc2.c. If too much memory was requested, it just does this:
if (UNLIKELY (new_alloc > INT_MAX))
  {
    fprintf (stderr, "gmp: overflow in mpz type\n");
    abort ();
  }
The best way to handle these errors gracefully in your application is probably to fork off a helper process to perform the GMP calculations. If the helper process is killed by SIGABRT, your parent process can detect that and report an error to the user.
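A minimal sketch of that helper-process idea (POSIX only; do_gmp_work() is a hypothetical function that runs the GMP calculation):
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <iostream>
void do_gmp_work();    // hypothetical: calls mpz_fac_ui etc. and stores the result somewhere
bool run_gmp_job()
{
    pid_t pid = fork();
    if (pid < 0)
        return false;     // fork failed
    if (pid == 0) {
        do_gmp_work();    // runs in the child; may abort without taking down the GUI
        _exit(0);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT) {
        std::cerr << "GMP calculation aborted (number too large?)\n";
        return false;     // report an error to the user instead of dying
    }
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}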
(The below is my original answer, which has "undefined results" according to the GMP documentation - it is left here for completeness).
You can catch the error if you install a signal handler for SIGABRT that uses longjmp():
#include <setjmp.h>
#include <signal.h>
#include <gmp.h>
#include <iostream>
jmp_buf abort_jb;
void abort_handler(int x)
{
    longjmp(abort_jb, 1);
}
int dofac(unsigned long n)
{
    signal(SIGABRT, abort_handler);
    if (setjmp(abort_jb))
        goto error;
    mpz_t result;
    mpz_init(result);
    mpz_fac_ui(result, n);
    std::cout << mpz_get_si(result) << std::endl;
    signal(SIGABRT, SIG_DFL);
    return 0;
error:
    signal(SIGABRT, SIG_DFL);
    std::cerr << "Caught SIGABRT from GMP.\n";
    return 1;
}
Override abort() with LD_PRELOAD.
What is the LD_PRELOAD trick?
Edit: To make the answer more self-contained, I copy the text of that answer here:
If you set LD_PRELOAD to the path of a shared object, that file will be loaded before any other library (including the C runtime, libc.so). So to run ls with your special malloc() implementation, do this:
$ LD_PRELOAD=/path/to/my/malloc.so /bin/ls
Credits to JesperE.
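Applied to this case, a minimal sketch of an abort() override to preload might look like this (illustrative only; note that the process still has to terminate, since abort() must not return, but you get to log or clean up first):
// my_abort.cpp (hypothetical file name)
#include <stdio.h>
#include <unistd.h>
extern "C" void abort(void)
{
    fprintf(stderr, "abort() intercepted\n");
    _exit(134);    // still terminates, but under our control
}
Build and use it like this:
$ g++ -shared -fPIC -o my_abort.so my_abort.cpp
$ LD_PRELOAD=./my_abort.so ./test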