I have a C++ program receiving calls from users. Some of these calls should be processed by python scripts. This is how I am doing it:
On start, it loads a Python interpreter using Py_Initialize() and loads some modules and functions. I keep references to them. This works.
On each call, the corresponding function is called
The first time it works fine, but the second time I always get a segmentation fault when calling PyObject_CallObject. I have tried to fix this with the tips from Calling python method from C++ (or C) callback, but it still doesn't work.
Moreover, if I try to run PyRun_SimpleString("import <module_name>"), I also get a segmentation fault! And this time I am not even using references.
Note: the initialization is done via a singleton pattern, so the first call happens immediately after the initialization.
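For reference, a minimal sketch of the call pattern described above; the module and function names are hypothetical stand-ins. One point worth keeping in mind: if the per-call invocations happen on a different thread than the one that ran Py_Initialize(), each call must acquire the GIL via PyGILState_Ensure()/PyGILState_Release().

#include <Python.h>

static PyObject* g_func = nullptr;                            // kept across calls, as described

void init_python() {
    Py_Initialize();
    PyObject* module = PyImport_ImportModule("my_module");    // hypothetical module name
    g_func = PyObject_GetAttrString(module, "handle_call");   // kept for later calls
    Py_DECREF(module);
    // If later calls come from other threads, release the GIL held by this thread:
    // PyEval_SaveThread();
}

void handle_call(const char* payload) {
    PyGILState_STATE gstate = PyGILState_Ensure();            // safe to call from any thread
    PyObject* args = Py_BuildValue("(s)", payload);           // argument tuple
    PyObject* result = PyObject_CallObject(g_func, args);
    if (!result) PyErr_Print();
    Py_XDECREF(result);
    Py_DECREF(args);
    PyGILState_Release(gstate);
}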
In a Java application, I use JNI to call several C++ methods. One of the methods creates an object that has to persist after the method finished and that is used in other method calls. To this end, I create a pointer of the object, which I return to Java as a reference for later access (note: the Java class implements Closable and in the close method, I call a method to delete the object).
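For context, a minimal sketch of that handle pattern; the class and JNI function names here are made up, not the actual code:

#include <jni.h>

struct MyObject { void doWork() { /* ... */ } };   // stand-in for the real class

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_Native_create(JNIEnv*, jobject) {
    MyObject* handle = new MyObject();
    return reinterpret_cast<jlong>(handle);        // the pointer travels to Java as a jlong
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_Native_use(JNIEnv*, jobject, jlong handleId) {
    reinterpret_cast<MyObject*>(handleId)->doWork();   // cast the handle back on later calls
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_Native_close(JNIEnv*, jobject, jlong handleId) {
    delete reinterpret_cast<MyObject*>(handleId);  // called from the Java close() method
}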
However, in rare cases, after approximately 50,000 calls, the C++ code crashes with a segmentation fault. Based on the content of the log file, only a few lines of code are suspected to be the source of the error (they lie between the last printed log message and the next one):
MyObject* handle = new MyObject(some_vector, shared_ptr1, shared_ptr2);
handles.insert(handle); // handles is a std::set
jlong handleId = (jlong) handle;
I'd like to know whether there is a possible issue here apart from the fact that I'm using old-style C pointers. Could multi-threading be a problem? Or could the pointer ID be truncated when converted to jlong?
I also want to note that from my previous experience, I'm aware that the log is only a rough indicator of where a segmentation fault occurred. It may well have occurred later in the code, and the next log message may simply not have been printed yet. However, reproducing this error may take 1-2 days, so I'd like to check whether these lines have a problem.
After removing the std::set from the code, the error did not occur anymore. Conclusion: a std::set accessed from multiple threads must be protected (e.g. by a mutex) to avoid unrecoverable crashes.
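A minimal sketch of what that protection could look like, assuming the set really is touched from more than one thread; the names are illustrative:

#include <set>
#include <mutex>

struct MyObject { /* ... */ };

std::set<MyObject*> handles;
std::mutex handlesMutex;         // guards every access to 'handles'

void addHandle(MyObject* handle) {
    std::lock_guard<std::mutex> lock(handlesMutex);   // serializes concurrent insert/erase
    handles.insert(handle);
}

void removeHandle(MyObject* handle) {
    std::lock_guard<std::mutex> lock(handlesMutex);
    handles.erase(handle);
}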
In Ubuntu 14.04, I have a C++ API as a shared library which I am opening using dlopen, and then creating pointers to functions using dlsym. One of these functions, CloseAPI, releases the API from memory. Here is the syntax:
void* APIhandle = dlopen("Kinova.API.USBCommandLayerUbuntu.so", RTLD_NOW|RTLD_GLOBAL);
int (*CloseAPI)() = (int (*)()) dlsym(APIhandle, "CloseAPI");
If I ensure that the CloseAPI function is always called before the main function returns, then everything seems fine, and I can run the program again the next time. However, if I Ctrl-C and interrupt the program before it has had time to call CloseAPI, then the next time I run the program I get a return error whenever I call any of the API functions. I have no documentation saying what this error is, but my intuition is that there is some sort of lock on the library from the previous run of the program. The only thing that allows me to run the program again is to restart my machine. Logging out and back in does not work.
So, my questions are:
1) If my library is a shared library, why am I getting this error when I would have thought a shared library can be loaded by more than one program simultaneously?
2) How can I resolve this issue if I expect Ctrl-C to happen often, without being able to call CloseAPI?
So, using this API correctly requires you to do proper cleanup after use (which is not really user friendly).
First of all, if you really need to use Ctrl-C, allow the program to end properly on this signal: Is destructor called if SIGINT or SIGSTP issued?
Then use an object on the stack that holds the resource pointer (the CloseAPI function in this case) and make sure this object calls CloseAPI in its destructor (you may want to check whether CloseAPI has already been called). See more in "Effective C++", Chapter 3: Resource Management.
That way, even if you don't call CloseAPI yourself, the wrapper object will do it for you.
P.S. You should consider doing this even if you're not going to use Ctrl-C. Imagine an exception occurs and your program has to stop: you want to be sure you don't leave the OS in an undefined state.
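A minimal sketch of what that could look like for this particular API. Only the library and function names come from the question; the wrapper class and the signal-flag approach are illustrative:

#include <dlfcn.h>
#include <csignal>
#include <stdexcept>

class ApiGuard {
public:
    ApiGuard() {
        handle_ = dlopen("Kinova.API.USBCommandLayerUbuntu.so", RTLD_NOW | RTLD_GLOBAL);
        if (!handle_) throw std::runtime_error("dlopen failed");
        closeApi_ = reinterpret_cast<int (*)()>(dlsym(handle_, "CloseAPI"));
    }
    ~ApiGuard() {                              // runs on normal return and during stack unwinding
        if (closeApi_) closeApi_();
        if (handle_) dlclose(handle_);
    }
private:
    void* handle_ = nullptr;
    int (*closeApi_)() = nullptr;
};

volatile std::sig_atomic_t g_stop = 0;

int main() {
    std::signal(SIGINT, [](int) { g_stop = 1; });  // don't abort on Ctrl-C; just request shutdown
    ApiGuard api;                                  // CloseAPI is guaranteed by the destructor
    while (!g_stop) {
        // ... call into the API ...
    }
    return 0;                                      // leaving main destroys ApiGuard -> CloseAPI
}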
We have a large Fortran application which calls many C++ modules. I am trying to use the C++ objects' destructors to free resources and close files, but it seems they are not being called when the Fortran program exits.
The Fortran program exits using the STOP command.
Do I need to use a different Fortran command to exit, or call the C++ exit(0) command from Fortran?
To get proper construction/destruction, you just about need the entry point to be on the C++ side.
At least offhand, the simplest approach I can think of that seems at all likely to work would be something like this:
set up main in C++, and have it as the entry point.
move your current Fortran entry point into a function.
call that function from main
write a small function in C++ named something like do_stop() that just throws an exception
in your Fortran, replace STOP with calls to do_stop().
You can either leave the exception uncaught, or have a try/catch in main, which can give a slightly more graceful exit (display an error message of your choice instead of something written by the library author saying your program has an error).
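A minimal sketch of that layout; the symbol names (fortran_main_, do_stop_) and the trailing-underscore linkage convention are assumptions that depend on your Fortran compiler:

#include <iostream>

struct StopRequested {};                       // thrown instead of Fortran STOP

extern "C" void fortran_main_();               // the old Fortran program, now a subroutine

extern "C" void do_stop_() {                   // called from Fortran in place of STOP
    throw StopRequested();
}

int main() {                                   // entry point is now on the C++ side
    try {
        fortran_main_();
    } catch (const StopRequested&) {
        // normal termination path: the stack unwinds and C++ destructors run
    }
    return 0;
}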
I have a C++ program which uses lua. C++ exposes a reference counted datatype as userdata with an assigned finalizer so that lua can take ownership of such values.
This works fine. However, one thing worries me: if an error occurs while executing a script in which Lua holds instances of that datatype, will the finalizer still be called?
Another formulation of the question would be: does Lua run a garbage collection cycle upon an error?
Yes: as long as the error occurs inside a protected call, everything continues to run fine, including garbage collection and therefore your finalizers. If Lua panics, then the Lua state is not in a usable condition.
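A minimal sketch of the scenario, assuming a hypothetical MyData type exposed to the script: the script fails inside a protected call (luaL_dostring wraps lua_pcall), and the state, including the __gc finalizer, remains usable afterwards:

#include <lua.hpp>
#include <cstdio>

struct MyData { /* reference-counted payload */ };

static int mydata_gc(lua_State* L) {                 // __gc finalizer for the userdata
    MyData** ud = static_cast<MyData**>(lua_touserdata(L, 1));
    delete *ud;                                       // stand-in for the real ref-count decrement
    return 0;
}

static int mydata_new(lua_State* L) {                 // exposed to the script as mydata_new()
    MyData** ud = static_cast<MyData**>(lua_newuserdata(L, sizeof(MyData*)));
    *ud = new MyData();
    luaL_getmetatable(L, "MyData");
    lua_setmetatable(L, -2);
    return 1;
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    luaL_newmetatable(L, "MyData");                   // metatable shared by all MyData userdata
    lua_pushcfunction(L, mydata_gc);
    lua_setfield(L, -2, "__gc");
    lua_pop(L, 1);
    lua_register(L, "mydata_new", mydata_new);

    // The error happens inside a protected call, so the state stays usable and
    // the userdata created by the script is still collected (and finalized).
    if (luaL_dostring(L, "local d = mydata_new(); error('boom')") != 0) {
        std::fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    lua_close(L);                                     // closing the state also runs pending finalizers
    return 0;
}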
I just started working with the Android NDK but I keep getting SIGSEGV when I have this call in my C code:
jobjectArray someStringArray;
someStringArray = (*env)->NewObjectArray(env, 10,
(*env)->FindClass(env,"java/lang/String"),(*env)->NewStringUTF(env, ""));
Based on all the examples I can find, the above code is correct, but I keep getting SIGSEGV, and everything is OK if the NewObjectArray line is commented out. Any idea what could cause such a problem?
That looks right, so I'm guessing you've done something else wrong. I assume you're running with CheckJNI on? You might want to break that up into multiple lines: do the FindClass and check the return value, do the NewStringUTF and check the return value, and then call NewObjectArray.
By the way, you might want to pass NULL as the final argument; the pattern of using the empty string as the default value for each element of the array is commonly used (I think it was copied from some Sun documentation and has spread from there), but it's rarely useful and slightly wasteful. (It also doesn't match the behavior of "new String[10]" in Java.)
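A minimal sketch of that suggestion, written in C++ JNI syntax (env->Foo(...)) rather than the C form used in the question; the function name is made up:

#include <jni.h>

jobjectArray makeStringArray(JNIEnv* env) {
    jclass stringClass = env->FindClass("java/lang/String");
    if (stringClass == nullptr) {
        return nullptr;                   // a pending ClassNotFoundException explains the failure
    }
    // NULL initial elements, matching the behavior of new String[10] in Java
    jobjectArray arr = env->NewObjectArray(10, stringClass, nullptr);
    env->DeleteLocalRef(stringClass);
    return arr;                           // may be nullptr if the allocation failed
}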
I guess one possible cause is that, in a long-running JNI method, the VM aborts when it runs out of per-method-invocation local reference slots (normally 512 slots on Android).
Since FindClass() and NewStringUTF() allocate local references, if you stay in a JNI method for a long time the VM does not know whether a specific local reference should be recycled or not. So you should explicitly call DeleteLocalRef() to release acquired local references once they are no longer needed. If you don't, the "zombie" local references will occupy slots in the VM, and the VM aborts once all the local reference slots are used up.
In a short-running JNI method this is usually not a problem, because all local references are recycled when the JNI method returns.
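A minimal sketch of that idea, with made-up names: in a long-running native method, delete each local reference as soon as you are done with it instead of waiting for the method to return.

#include <jni.h>

void processMany(JNIEnv* env, jobjectArray items) {
    jsize n = env->GetArrayLength(items);
    for (jsize i = 0; i < n; ++i) {
        jobject item = env->GetObjectArrayElement(items, i);  // creates a local reference
        // ... use item ...
        env->DeleteLocalRef(item);   // release the slot now; otherwise the table fills up
    }
}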