How can MSVC6 handle exceptions from extern "C" functions?

I'm working on an application written in Visual Studio 6 (I know, FML) that is calling functions in a DLL using LoadLibrary and GetProcAddress. The newer code can't be compiled with VC6 and needs a newer compiler. The DLL has a few functions that construct a C++ object, and then the VC6 program uses the object through an abstract class.
This works just fine usually, but it runs into problems when the functions retrieved by GetProcAddress throw exceptions -- even when the exceptions are caught within the DLL. I've noticed that this doesn't happen when the abstract class's methods throw an exception. Things work normally in that case.
What am I doing wrong here? How can I make VC6 generate code to handle the exceptions properly?
Edit: Here's an example of a function that causes the program to crash:
extern "C" __declspec(dllexport) Box* getBox(const char* addr)
{
try {
return createBox(addr);
} catch (std::exception& ex) {
LOG_ERROR("Open failed: " << ex.what());
return 0;
} catch (...) {
LOG_ERROR("Error while opening.");
return 0;
}
}

You cannot use inheritance across compiler versions like that. It almost works, but exceptions and a few other things go haywire.
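If you are stuck mixing the two compilers, the safest pattern is to keep anything C++-specific (exceptions, std:: types, new/delete ownership) from ever crossing the exported surface and to report failures as plain C data instead. A rough sketch of that kind of boundary, reusing the question's Box and createBox with an illustrative error buffer (a pattern sketch only, not necessarily the fix for this particular crash):

// Sketch only: nothing C++-specific crosses the boundary; the caller sees a
// plain status code and a C string. Needs <cstring> for strncpy; assumes errBufLen > 0.
extern "C" __declspec(dllexport) int getBoxChecked(const char* addr, Box** out,
                                                   char* errBuf, int errBufLen)
{
    *out = 0;
    try {
        *out = createBox(addr);                    // may still throw inside the DLL
        return 0;                                  // success
    } catch (const std::exception& ex) {
        strncpy(errBuf, ex.what(), errBufLen - 1); // report the error as data
        errBuf[errBufLen - 1] = '\0';
        return 1;
    } catch (...) {
        strncpy(errBuf, "unknown error", errBufLen - 1);
        errBuf[errBufLen - 1] = '\0';
        return 1;
    }
}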

Related

How to catch SEH thrown from ntdll.dll's TppRaiseInvalidParameter?

I am using MSVC 2019 and COM, compiling with /EHa.
I'm getting an SEH exception from ntdll.dll's TppRaiseInvalidParameter that I am trying to catch but seem unable to. I know exactly why the exception is thrown, but that is not the issue here.
I tried using all the mechanisms described in the MSDN docs (__try/__except, _set_se_translator, SetUnhandledExceptionFilter), but none seem to trigger in this case.
I also tried raising exceptions using RaiseException and RtlRaiseException (used by TppRaiseInvalidParameter) and those seem to be caught no problem in the __except handler.
The only thing I've been able to spot in TppRaiseInvalidParameter is that it calls __SEH_prolog4_GS at the beginning, but from what I've read that is just normal code the compiler generates for SEH; I'm new to SEH in general.
My questions are: why can't I catch that exception? Is there any way to catch it?
Minimal code for reproduction
extern "C"
{
void (WINAPI* TppRaiseInvalidParameter)();
}
void func()
{
__try
{
HMODULE ntdll;
GetModuleHandleExA(GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT, "ntdll.dll", &ntdll);
TppRaiseInvalidParameter = reinterpret_cast<decltype(TppRaiseInvalidParameter)>((LONG)ntdll + 0x104EBDL); // it's not an exported function and your offset may be different
TppRaiseInvalidParameter();
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
puts("exception caught");
}
}

How to pass C++ exceptions through functions compiled with LibJIT?

I'm using LibJIT to JIT-compile a programming language. My compiler is written in C++, so I have a C++ program calling the LibJIT library, which is written in C.
In my C++ code I have some functions that throw exceptions, and sometimes these get triggered on the far side of a LibJIT-compiled function. That is, my C++ program uses LibJIT to compile a function, calls it, and the JIT-compiled function in turn calls a C++ function that throws an exception. But instead of the exception being propagated back through the JIT-compiled function to my handler, the runtime calls std::terminate.
Apparently the exception-handling mechanism needs each function in the call stack to implement some kind of exception support. When compiling C code, the -fexceptions flag tells the compiler (at least gcc) to include this support.
Is there a way to include this exception support in my LibJIT-generated functions?
I tried using the LibJIT instructions to set up a "catcher" block that rethrows all exceptions, but that didn't work; it looks like the LibJIT's exception-handling model is separate from the C++ model.
Obviously I don't really want to throw C++ exceptions into my language's code, and I'll have to figure out some other error-handling strategy. But I'm wondering if it's even possible to get it to work.
Here's a sample program that demonstrates the issue. I'm sorry it's this long but LibJIT is pretty low-level so you have to write a lot of code to get anything done. I'm compiling on Linux with clang 10.0.0.
#include <array>
#include <exception>
#include <iostream>
#include <string>
#include <jit/jit.h>

class ex : public std::exception {};

extern "C" int bad(int arg) {
    std::cout << "Called with arg=" << arg << std::endl;
    throw ex();
}

jit_function_t build_function(jit_context_t context) {
    // The signature of the compiled function and "bad" are the same.
    std::array<jit_type_t, 1> params{jit_type_int};
    jit_type_t signature = jit_type_create_signature(jit_abi_cdecl, jit_type_int, params.data(), params.size(), 1);
    jit_function_t function = jit_function_create(context, signature);
    jit_value_t arg_value = jit_value_get_param(function, 0);
    jit_value_t bad_value =
        jit_value_create_nint_constant(function, jit_type_void_ptr, reinterpret_cast<jit_nint>(bad));
    std::array<jit_value_t, 1> bad_args{arg_value};
    jit_value_t result = jit_insn_call_indirect(function, bad_value, signature, bad_args.data(), bad_args.size(), 0);
    jit_insn_return(function, result);
    jit_function_compile(function);
    return function;
}

int main(int argc, char* argv[]) {
    // This exception will be caught:
    try {
        bad(7);
    } catch (const ex&) {
        std::cout << "Caught exception from direct call" << std::endl;
    }
    // Initialize libjit.
    // We're single-threaded so not going to bother with jit_context_build_start/jit_context_build_end.
    jit_init();
    jit_context_t context = jit_context_create();
    jit_function_t function = build_function(context);
    // This one will not be caught, calls std::terminate instead:
    try {
        int (*closure)(int) = reinterpret_cast<int (*)(int)>(jit_function_to_closure(function));
        closure(7);
    } catch (const ex&) {
        std::cout << "Caught exception from libjit call" << std::endl;
    }
    return 0;
}
Output:
Called with arg=7
Caught exception from direct call
Called with arg=7
terminate called after throwing an instance of 'ex'
what(): std::exception
Aborted
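One way to sidestep the unwinding problem entirely is to trap the exception at the extern "C" boundary and rethrow it after the JIT-compiled closure has returned, so the unwinder never has to walk through a JIT frame. A rough sketch of that idea (bad_wrapped and g_pending are illustrative names, not LibJIT API):

#include <exception>

static std::exception_ptr g_pending; // assumes single-threaded use, as in the sample

// Wrap the throwing function; this is what the JIT-compiled code would call.
extern "C" int bad_wrapped(int arg) {
    try {
        return bad(arg);
    } catch (...) {
        g_pending = std::current_exception(); // capture instead of unwinding
        return -1;                            // sentinel: "error pending"
    }
}

// Caller side, after invoking the closure:
//     int r = closure(7);
//     if (g_pending) {
//         std::exception_ptr e = g_pending;
//         g_pending = nullptr;
//         std::rethrow_exception(e);         // now unwinds only native C++ frames
//     }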

boost, coroutine2 (1.63.0): throwing exception crashes visual studio on 32bit windows

In my application I'm using coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream, where std::ios_base::failure would theoretically be thrown, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function and called from C#. This all happens on a 32bit process on Windows 10 x64. Unfortunately it only reliably crashes when I start my test from C# in debugging mode WITHOUT the native debugger attached. As soon as I attach the native debugger everything works like expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
    using coro_t = boost::coroutines2::coroutine<bool>;
    coro_t::pull_type source([](coro_t::push_type& yield) {
        std::vector<char> buffer(200300, 0);
        std::stringstream stream;
        stream.write(buffer.data(), buffer.size());
        stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
        try {
            std::vector<char> dest(100100, 0);
            while (stream.good() && !stream.eof()) {
                stream.read(&dest[0], dest.size());
                std::cerr << "CORO: read: " << stream.gcount() << std::endl;
            }
        }
        catch (const std::exception& ex) {
            std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
        }
        catch (...) {
            std::cerr << "CORO: caught unknown exception." << std::endl;
        }
    });
    std::cout << "SUCCESS" << std::endl;
    return 0;
}
C#:
using System;
using System.Runtime.InteropServices;

namespace CoroutinesTest
{
    class Program
    {
        [DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
        internal static extern Int32 test();

        static void Main(string[] args)
        {
            test();
            Console.WriteLine("SUCCESS");
        }
    }
}
Some details:
We are using Visual Studio 2015 (VC14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither attempt has reproduced the crash so far.
If you start the C# code with Ctrl+F5 (i.e. without the .NET debugger) everything is also fine. Only if you start it with F5 (with the .NET debugger attached) does the Visual Studio instance crash. Also be sure not to enable the native debugger!
Note: If we don't use the exceptions in the stream, everything seems to be fine as well. Unfortunately the code decoding my objects makes use of them and therefore I cannot avoid this.
It would be amazing if you had some additional hints on what might go wrong here or a solution. I'm not entirely sure if this is a boost bug, could also be the c# debugger interfering with boost-context.
Thanks in advance! Best Regards, Michael
I realize this question is old but I just finished reading a line in the docs that seemed pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared as extern "C" never throw a C++ exception) to /EHs (which tells the compiler to assume that functions declared as extern "C" may throw an exception).
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing that.
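For what it's worth, pushing a value would look roughly like this inside the lambda (a sketch based on the original test(), not a verified fix for the crash):

coro_t::pull_type source([](coro_t::push_type& yield) {
    std::vector<char> buffer(200300, 0);
    std::stringstream stream;
    stream.write(buffer.data(), buffer.size());
    stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
    std::vector<char> dest(100100, 0);
    try {
        while (stream.good() && !stream.eof()) {
            stream.read(&dest[0], dest.size());
            yield(true); // push a value (and control) back to the caller on each read
        }
    } catch (const std::exception& ex) {
        std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
    }
});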

Why the c++ object destructor not called when luaL_error is called?

I have a piece of code like this
class Test
{
public:
    Test() { printf(">>> Test()\n"); }
    ~Test() { printf(">>> ~Test()\n"); }
};

int myFunc(lua_State *L)
{
    Test t;
    luaL_error(L, "error");
    return 0;
}
I know that when Lua is compiled with a C compiler it uses longjmp to raise errors. So I compiled it with a C++ compiler, so that it uses C++ exceptions to handle errors and the destructor should be called even if an error is thrown. But my problem is that the object's destructor is not called.
However, the following code is working (the destructor is called)
int myFunc(lua_State *L)
{
    Test t;
    throw 1; // just for testing
    return 0;
}
Why does this happen? I'm sure the LUAI_THROW macro is interpreted as the throw keyword.
The function luaL_error() will call exit(), which cancels the whole execution of your program! The destructor is not called because the scope that Test t is in never ends. You should use a different mechanism to be able to recover from an error. How do you trigger the error from Lua? I think you need to do a protected call using lua_cpcall to get around this exit-on-error behaviour!
The root cause is related to the exception handling mode of the Visual C++ compiler. I use the Lua functions (such as luaL_error) with the extern "C" modifier to prevent the compiler from name-mangling them. The default exception handling mode is /EHsc, which assumes that extern "C" functions don't throw exceptions, so the exception can't be caught. The solution is to change /EHsc to /EHs.
For more information please refer to http://msdn.microsoft.com/en-us/library/1deeycx5.aspx.
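A minimal standalone example of the difference (illustrative only, not taken from the original code):

// throws.cpp
// cl /EHs  throws.cpp  -> the handler below runs
// cl /EHsc throws.cpp  -> the compiler may assume the extern "C" call cannot
//                         throw, so the catch can be optimized away
#include <cstdio>
#include <stdexcept>

extern "C" void c_style_api() // extern "C" linkage, but implemented in C++
{
    throw std::runtime_error("boom");
}

int main()
{
    try {
        c_style_api();
    } catch (const std::exception& ex) {
        std::printf("caught: %s\n", ex.what());
    }
    return 0;
}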

QueueUserAPC - throwing exception crashes, possible mingw bug

Solved: I upgraded from mingw 4.6.2 to 4.7.0 and it works perfectly, guess it was just a bug
I started to do some research on how to terminate a multithreaded application properly and I found those two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate.
I thought I should give it a try, and the application keeps crashing when I throw the exception from the APCProc.
Code:
#include <stdio.h>
#include <windows.h>

class ExitException
{
public:
    char *desc;
    DWORD exit_code;
    ExitException(char *desc, int exit_code): desc(desc), exit_code(exit_code)
    {}
};

// I use this class to check if objects are deconstructed upon termination
class Test
{
public:
    char *s;
    Test(char *s): s(s)
    {
        printf("%s ctor\n", s);
    }
    ~Test()
    {
        printf("%s dctor\n", s);
    }
};

DWORD CALLBACK ThreadProc(void *useless)
{
    try
    {
        Test t("thread_test");
        SleepEx(INFINITE, true);
        return 0;
    }
    catch (ExitException &e)
    {
        printf("Thread exits\n%s %lu", e.desc, e.exit_code);
        return e.exit_code;
    }
}

void CALLBACK exit_apc_proc(ULONG_PTR param)
{
    puts("In APCProc");
    ExitException e("Application exit signal!", 1);
    throw e;
    return;
}

int main()
{
    HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
    Sleep(1000);
    QueueUserAPC(exit_apc_proc, thread, 0);
    WaitForSingleObject(thread, INFINITE);
    puts("main: bye");
    return 0;
}
My question is why does this happen?
I use mingw for compilation and my OS is 64bit.
Can this be the reason? I read that you shouldn't queue an APC from a 32-bit app for a thread that runs in a 64-bit process or vice versa, but that shouldn't be the case here.
EDIT: I compiled this with Visual Studio's C++ compiler (2010) and it worked flawlessly; is it possible that this is a bug in gcc/mingw?
I can reproduce the same thing with VS2005. The problem is that the compiler optimizes the catch away. Why? Because according to the C++ standard it's undefined what happens if an extern "C" function exits with an exception. So the compiler assumes that SleepEx (which is extern "C") does not ever throw. After inlining of Test::Test and Test::~Test it sees that the printf doesn't throw either, and consequently if something in this block exits via an exception
Test t("thread_test");
SleepEx(INFINITE,true);
return 0;
the behavior is undefined!
In MSVC the code doesn't work with the /EHsc switch in a Release build, but works with /EHa or /EHs, which tell it to assume that C functions may throw. Perhaps GCC has a similar flag.