I'm trying to use exceptions in C++ code that runs on a Raspberry Pi Zero.
When I link the code dynamically it works fine.
However, when I link it statically, I get:
terminate called after throwing an instance of 'std::exception'
terminate called recursively
Aborted
An example code to reproduce the problem:
#include <exception>
#include <stdio.h>

void foo(void)
{
    throw std::exception();
}

int main(void)
{
    try
    {
        foo();
    }
    catch(...)
    {
        printf("caught an exception\n");
    }
    return (0);
}
My "problematic" link command is:
g++ exceptionTest.cpp -o exceptionTest.out --static
Why does static linking disrupt the exception mechanism?
Is there something I could do in my link command (e.g. add flags) to fix this?
Note: my problem appears very similar to
GCC arm-none-eabi (Codesourcery) and C++ Exceptions
but the OP there worked on a different system and the suggested solution was related to a link script (which I don't use).
EDIT: I nearly got the answer, I just don't completely understand it; see the last paragraph.
I am trying to build a shared Lua library and use it within a larger project. When I call the script which loads the shared library from the shell, everything works. However, when I wrap the script in a C++ program, I get a runtime error when loading the library. Depending on the script, the failure happens at essentially any call to a Lua function from C (e.g. lua_pushnumber). Here is a minimal example.
totestlib.cpp:
extern "C" {
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
}
int init(lua_State *L) {
lua_toboolean(L, -1);
return 0;
}
static const struct luaL_Reg testlib[] = {
{"init", init},
{NULL, NULL}
};
extern "C"
int luaopen_libtotestlib(lua_State *L) {
luaL_newlib(L, testlib);
return 1;
}
Compiled with: g++ -shared -fPIC -I./lua-5.4.4/src -L./lua-5.4.4/src totestlib.cpp -o libtotestlib.so
testlib.lua (testing shared library):
print("start")
testlib = require("libtotestlib")
print("done")
testlib.init(true)
print("called")
Calling the lua script using ./lua-5.4.4/src/lua testlib.lua works; everything is printed. Wrapping the script in the following C++ code does not work:
call_testlib.cpp
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <unistd.h>
static lua_State *L;
int main(int argc, char *argv[]) {
L = luaL_newstate();
luaL_openlibs(L);
int tmp = luaL_loadfile(L, "testlib.lua");
if(tmp != 0) {
return 1;
}
tmp = lua_pcall(L, 0, 0, 0);
if(tmp != 0) {
printf("error pcall\n");
return 1;
}
}
Compiled with g++ call_testlib.cpp -o ./call_testlib -I./lua-5.4.4/src -L./lua-5.4.4/src -llua it prints "error pcall". If I print the error message on the lua stack, I get:
string error loading module 'libtotestlib' from file './libtotestlib.so':
./libtotestlib.so: undefined symbol: luaL_checkversion_
In this case the undefined symbol is luaL_checkversion_ (which I don't call myself), but with other scripts it is usually the first lua_... function that I call.
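For reference, that message can be read off the Lua stack right after the failing lua_pcall, e.g. (a minimal sketch; lua_tostring is the standard call here):
tmp = lua_pcall(L, 0, 0, 0);
if(tmp != 0) {
    // the error object pushed by lua_pcall is on top of the stack
    printf("error pcall: %s\n", lua_tostring(L, -1));
    lua_pop(L, 1);
    return 1;
}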
I have tried several things to fix this. For example, linking with -llua when compiling the shared library, but this does not work (and should not be the problem, since calling the script directly works). I also tried to preload the library from C++ (as done in this question, see the sketch below) instead of from Lua, but I guess it does not really make a difference, and I am getting the same error. I also uninstalled all Lua versions from my path to make sure I always use the same version.
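The preload variant mentioned above would look roughly like this on the C++ side, under the assumption that luaopen_libtotestlib is visible to the host program (e.g. linked in directly rather than found via require):
extern "C" int luaopen_libtotestlib(lua_State *L);

// Register the opener in the preload table so that
// require("libtotestlib") uses it instead of searching for a .so
luaL_getsubtable(L, LUA_REGISTRYINDEX, LUA_PRELOAD_TABLE);
lua_pushcfunction(L, luaopen_libtotestlib);
lua_setfield(L, -2, "libtotestlib");
lua_pop(L, 1);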
What is the difference between calling the script directly from the shell and calling it from the wrapping C++ program? Am I doing something wrong?
EDIT: Nearly got the answer. When compiling Lua with MYCFLAGS=-fPIC I can link Lua into the shared library. At least that works, but it does not seem like a good solution to me and does not really answer my question: why can the lua binary itself (run from the shell) somehow provide these symbols to the library, while the wrapping C++ version cannot? Additionally, my program now contains Lua twice: once linked into the shared library and once in the compiled C++ project (not optimal, in my opinion).
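For reference, the workaround described in this EDIT amounts to roughly the following build steps (a sketch; the exact make target depends on the platform):
cd lua-5.4.4 && make linux MYCFLAGS=-fPIC && cd ..
g++ -shared -fPIC -I./lua-5.4.4/src totestlib.cpp -L./lua-5.4.4/src -llua -o libtotestlib.so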
I'm currently working on a game with a plugin-based architecture. The executable consists mostly of a shared library loader and a couple of interface definitions. All the interesting stuff happens in dynamic shared libraries which are loaded at start-up.
One of the library classes throws an exception under certain circumstances. I would expect to be able to catch this exception and do useful stuff with it, but this is where it gets weird. See the following simplified example code:
main.cpp
#include <exception>
// "Application" comes from the application's own headers (omitted in this simplified example).

int main()
{
    try
    {
        Application app;
        app.loadPlugin();
        app.doStuffWithPlugin();
        return 0;
    }
    catch(const std::exception& ex)
    {
        // Log exception
        return 1;
    }
}
Application.cpp
...
void doStuffWithPlugin()
{
    plugin.doStuff();
}
...
Plugin.cpp
...
void doStuff()
{
    throw exception_derived_from_runtime_error("Something is wrong");
}
...
Plugin.cpp exists in a dynamic shared library which is loaded successfully and which afterwards creates an object of class Plugin. The exception_derived_from_runtime_error is defined in the application. There is no throw() or noexcept involved.
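For context, a type with that name would presumably be declared along these lines (my assumption; the actual definition is not shown here):
#include <stdexcept>
#include <string>

// Assumed shape of the exception type defined in the application.
class exception_derived_from_runtime_error : public std::runtime_error
{
public:
    explicit exception_derived_from_runtime_error(const std::string& what_arg)
        : std::runtime_error(what_arg) {}
};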
I would expect to catch the exception_derived_from_runtime_error in main, but that doesn't happen. Compiled with GCC 4.8 using C++11, the application crashes with "This application has requested the Runtime to terminate it in an unusual way."
I replaced catch(const std::exception& ex) with catch(...), but that didn't make any difference. The weird part is that if I catch the exception in doStuffWithPlugin() it works. If I rethrow it using throw; it fails again, but it can be caught if I use throw ex;:
Application.cpp
void doStuffWithPlugin()
{
    try
    {
        plugin.doStuff();
    }
    catch(const exception_derived_from_runtime_error& ex)
    {
        // throw;    <- Not caught in main().
        // throw ex; <- Caught in main().
    }
}
Hopefully somebody has an idea. Thanks for any help you can give.
As mentioned in the comments, this seems to be a problem with shared libraries on Windows. The behavior occurs if the library is unloaded while an object created in that library remains in memory. The application seems to crash immediately. The only references to this problem I found involve GCC used as a cross compiler or MinGW. See also https://www.sourceware.org/ml/crossgcc/2005-01/msg00022.html
Let's consider the following three files.
tclass.h:
#include <iostream>
#include <vector>

template<typename rt>
class tclass
{
public:
    void wrapper()
    {
        //Storage is empty
        for(auto it:storage)
        {
        }

        try
        {
            thrower();
        }
        catch(...)
        {
            std::cout << "Catch in wrapper\n";
        }
    }

private:
    void thrower(){}

    std::vector<int> storage;
};
spec.cpp:
#include "tclass.h"
//The exact type does not matter here, we just need to call the specialized method.
template<>
void tclass<long double>::thrower()
{
    //Again, the exception may have any type.
    throw (double)2;
}
main.cpp:
#include "tclass.h"
#include <iostream>
int main()
{
tclass<long double> foo;
try
{
foo.wrapper();
}
catch(...)
{
std::cerr << "Catch in main\n";
return 4;
}
return 0;
}
I use Linux x64, gcc 4.7.2, the files are compiled with this command:
g++ --std=c++11 *.cpp
First test: if we run the program above, it says:
terminate called after throwing an instance of 'double'
Aborted
Second test: if we comment out the for(auto it:storage) loop in tclass.h, the program catches the exception in the main function. Why? Is it stack corruption caused by an attempt to iterate over the empty vector?
Third test: let's uncomment the for(auto it:storage) line again and move the method specialization from spec.cpp to main.cpp. Then the exception is caught in wrapper. How is that possible, and why does the possible memory corruption not affect this case?
I also tried to compile it with different optimization levels and with -g, but the results were the same.
Then I tried it on Windows 7 x64, VS2012 Express, compiling with the x64 version of cl.exe with no extra command-line arguments. For the first test this program produced no output, so I think it just crashed silently; the result is similar to the Linux version. For the second test it produced no output again, so the result differs from Linux. For the third test the result was similar to the Linux result.
Are there any errors in this code that could lead to such behavior? Could the results of the first test be caused by a bug in the compilers?
With your code, with gcc 4.7.1 I get:
spec.cpp:6: multiple definition of 'tclass<long double>::thrower()'
You may correct your code by declaring the specialization in your .h as:
template<> void tclass<long double>::thrower();
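For illustration, that declaration would sit in tclass.h after the class definition, so every translation unit sees that thrower() is explicitly specialized for long double (a sketch of the suggested fix; the definition itself stays in spec.cpp):
// tclass.h, after the class definition:
template<>
void tclass<long double>::thrower();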
Solved: I upgraded from MinGW 4.6.2 to 4.7.0 and it works perfectly; I guess it was just a bug.
I started to do some research on how to terminate a multithreaded application properly and found these two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate.
I thought I should give it a try, and the application keeps crashing when I throw the exception from the APCProc.
Code:
#include <stdio.h>
#include <windows.h>

class ExitException
{
public:
    char *desc;
    DWORD exit_code;
    ExitException(char *desc, int exit_code): desc(desc), exit_code(exit_code)
    {}
};

// I use this class to check if objects are destroyed upon termination
class Test
{
public:
    char *s;
    Test(char *s): s(s)
    {
        printf("%s ctor\n", s);
    }
    ~Test()
    {
        printf("%s dctor\n", s);
    }
};

DWORD CALLBACK ThreadProc(void *useless)
{
    try
    {
        Test t("thread_test");
        SleepEx(INFINITE, true);
        return 0;
    }
    catch (ExitException &e)
    {
        printf("Thread exits\n%s %lu", e.desc, e.exit_code);
        return e.exit_code;
    }
}

void CALLBACK exit_apc_proc(ULONG_PTR param)
{
    puts("In APCProc");
    ExitException e("Application exit signal!", 1);
    throw e;
    return;
}

int main()
{
    HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
    Sleep(1000);
    QueueUserAPC(exit_apc_proc, thread, 0);
    WaitForSingleObject(thread, INFINITE);
    puts("main: bye");
    return 0;
}
My question is why does this happen?
I use MinGW for compilation and my OS is 64-bit.
Can this be the reason? I read that you shouldn't call QueueUserAPC from a 32-bit app for a thread which runs in a 64-bit process or vice versa, but that shouldn't be the case here.
EDIT: I compiled this with Visual Studio's C++ compiler (2010) and it worked flawlessly. Is it possible that this is a bug in gcc/MinGW?
I can reproduce the same thing with VS2005. The problem is that the compiler optimizes the catch away. Why? Because according to the C++ standard it's undefined what happens if an extern "C" function exits with an exception. So the compiler assumes that SleepEx (which is extern "C") does not ever throw. After inlining of Test::Test and Test::~Test it sees that the printf doesn't throw either, and consequently if something in this block exits via an exception
Test t("thread_test");
SleepEx(INFINITE,true);
return 0;
the behavior is undefined!
In MSVC the code doesn't work with the /EHsc switch in a Release build, but works with /EHa or /EHs, which tell it to assume that extern "C" functions may throw. Perhaps GCC has a similar flag.
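For concreteness, the comparison described above boils down to the compile switches (the file name here is a placeholder, not from the post):
cl /EHs apc_test.cpp   (standard C++ EH; extern "C" functions may throw, so the catch in ThreadProc is kept)
cl /EHsc apc_test.cpp  (like /EHs, but extern "C" functions are assumed not to throw, so the catch may be optimized away)
cl /EHa apc_test.cpp   (additionally maps structured/asynchronous exceptions to C++ handlers)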
Here is what I did; I want to gracefully handle this exception:
code_snippet: my.cpp
#include <iostream>

extern "C" void some_func()
{
    throw "(Exception Thrown by some_func!!)";
}
code_snippet: exception.c
#include <stdio.h>

extern void some_func();

int so_main()
{
    some_func();
    return 0;
}
From the above two snippets I have created a shared object, libexception.so, using the following commands:
g++ -c -fPIC src/my.cpp
gcc -c -ansi -fPIC src/exception.c
g++ -fPIC -shared -o libexception.so my.o exception.o
Then I called the function so_main from my main.cpp
code_snippet: main.cpp
#include <iostream>
#include <dlfcn.h>
using namespace std;

extern "C" void some_func();

int main()
{
    int (*fptr)() = 0;
    void *lib = dlopen("./libexception.so", RTLD_LAZY);
    if (lib) {
        *(void **)(&fptr) = dlsym(lib, "so_main");
        try {
            if(fptr) (*fptr)();  // <-- problem lies here
            //some_func();       // <-- calling this directly won't crash
        }
        catch (char const* exception) {
            cout<<"Caught exception :"<<exception<<endl;
        }
    }
    return 0;
}
final command: g++ -g -ldl -o main.exe src/main.cpp -L lib/ -lexception
Executing main.exe crashes. Can somebody help me find where the problem lies and how to avoid this crash? (We already have an architecture where a .so calls an extern C++ function, so that can't be changed; we want to handle it in main.cpp only.)
Try compiling src/exception.c with -fexceptions
-fexceptions
Enable exception handling. Generates extra code needed to propagate exceptions. For some targets, this implies GNU CC will generate frame unwind information for all functions ...
However, you may need to enable this option when compiling C code that needs to interoperate properly with exception handlers written in C++. You may also wish to disable this option if you are compiling older C++ programs that don't use exception handling.
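Applied to the build commands from the question, that suggestion would look roughly like this (a sketch; the object file names are inferred from the -c steps):
g++ -c -fPIC src/my.cpp
gcc -c -ansi -fPIC -fexceptions src/exception.c
g++ -fPIC -shared -o libexception.so my.o exception.o
With -fexceptions, gcc emits unwind information for exception.c, so the exception thrown by some_func can propagate through so_main's C frame back to the catch in main.cpp.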
That function probably throws an exception that is not a char*, so you're not catching it.
Try using a generalized catch:
*(void **)(&fptr) = dlsym(lib, "so_main");
try {
    if(fptr) (*fptr)();  // <-- problem lies here
    //some_func();       // <-- calling this directly won't crash
}
catch (char const* exception) {
    cout<<"Caught exception :"<<exception<<endl;
}
catch (...) {
    cout<<"Caught exception : Some other exception"<<endl;
}