I have a C++ program which uses Lua. The C++ side exposes a reference-counted datatype as userdata with an assigned finalizer, so that Lua can take ownership of such values.
This works fine. However, one thing worries me: if an error occurs while executing a script in which Lua holds instances of that datatype, will the finalizer still be called?
Another way to formulate the question: does Lua run a garbage-collection cycle upon an error?
Yes, everything continues to run fine if the error occurs inside a protected call: the state stays consistent, garbage collection keeps working, and the finalizer is called when the userdata is eventually collected (or when the state is closed). If Lua panics, then the Lua state is not in a usable condition.
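For illustration, here is a minimal sketch of that behavior (assuming the Lua 5.2+ C API; the metatable name "MyRef" and the failing script are only examples). The finalizer still runs after a failed protected call, once the value is collected or the state is closed:

// A userdata type with a __gc finalizer survives a script error raised inside a
// protected call; the finalizer runs on the next collection or at lua_close().
#include <lua.hpp>
#include <cstdio>

static int myref_gc(lua_State* L) {
    // release the reference count of the wrapped C++ value here
    std::puts("finalizer called");
    return 0;
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    luaL_newmetatable(L, "MyRef");      // metatable for the userdata type
    lua_pushcfunction(L, myref_gc);
    lua_setfield(L, -2, "__gc");        // assign the finalizer
    lua_pop(L, 1);

    lua_newuserdata(L, sizeof(int));    // stand-in for the ref-counted value
    luaL_setmetatable(L, "MyRef");
    lua_setglobal(L, "value");          // Lua now owns an instance

    // The script drops the value and then raises an error, all inside a
    // protected call (luaL_dostring wraps lua_pcall).
    if (luaL_dostring(L, "value = nil; error('boom')") != LUA_OK) {
        std::printf("script failed: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    lua_gc(L, LUA_GCCOLLECT, 0);        // collection still works after the error
    lua_close(L);                       // closing the state runs any pending finalizers
    return 0;
}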
We have a large Fortran application which calls many C++ modules. I am trying to use the C++ objects' destructors to free resources and close files, but it seems they are not being called when the Fortran program exits.
The Fortran program exits using the STOP command.
Do I need to use a different Fortran command to exit, or call the C++ exit(0) command from Fortran?
To get proper construction/destruction, you just about need the entry point to be on the C++ side.
At least offhand, the simplest approach I can think of that seems at all likely to work would be something like this:
Set up main in C++, and have it as the entry point.
Move your current Fortran entry point into a function.
Call that function from main.
Write a small function in C++ named something like do_stop() that just throws an exception.
In your Fortran, replace STOP with calls to do_stop().
You can either leave the exception uncaught or catch it with a try/catch in main, which can give a slightly more graceful exit (an error message of your choice instead of whatever the runtime prints when a program terminates with an unhandled exception).
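A minimal sketch of that layout (the symbol name fortran_main_ is an assumption; the actual binding depends on your Fortran compiler and how you expose the routine):

// C++ owns the entry point, the old Fortran program runs as a callee, and STOP
// is replaced by a call to do_stop(), so the stack unwinds through C++ and
// destructors of C++ objects get to run.
#include <cstdio>
#include <exception>

struct StopRequested : std::exception {};   // thrown instead of Fortran STOP

extern "C" void fortran_main_();            // the former Fortran entry point

extern "C" void do_stop() {                 // called from Fortran in place of STOP
    throw StopRequested{};
}

int main() {
    try {
        fortran_main_();
    } catch (const StopRequested&) {
        std::puts("Fortran requested stop; shutting down cleanly.");
    } catch (const std::exception& e) {
        std::fprintf(stderr, "error: %s\n", e.what());
        return 1;
    }
    return 0;                               // normal return lets static destructors run too
}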
Say I have a script like
my_global = my_cpp_class()
my_global = nil
Now, while this properly calls the destructor of my_cpp_class, this code does not:
my_global = my_cpp_class()
call_script("a.lua") -- a.lua contains "my_global = nil"
-- "call_script" is actually a simplified notation
-- I create script as userdata and then I can
-- set its child scripts that should be reloaded once script is modified
The same happens when I simply replace call_script with dofile.
Basically, I want a particular script file to be able to assign "nil" to an existing global so the old value becomes unreachable.
What I need this for is real-time script reloading: whenever a script file is modified, it is immediately recompiled and run again with call_script (along with its child scripts), so globals get reassigned and the old values should become collectable.
Is there an easier, more preferred way to do such script reloads?
Should I use locals somehow, or manipulate environments?
I use luabind, if it's relevant.
If you execute the script within the same lua_State, the global actually is reassigned (you can check that the variable is visible before assigning nil). The C++ object's destructor is run when the object is collected, which may not happen immediately after it becomes unreachable. Calling lua_gc() from C++ or collectgarbage() from Lua will probably help.
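For instance, a small sketch of a reload step that forces a full collection afterwards (reload_script is a hypothetical helper name; Lua 5.2+ API assumed):

// Reload a modified script file in the same lua_State, then force a full GC
// cycle so userdata that just became unreachable (e.g. globals reassigned to
// nil) are finalized promptly.
#include <lua.hpp>
#include <cstdio>

bool reload_script(lua_State* L, const char* path) {
    if (luaL_dofile(L, path) != LUA_OK) {   // recompile and run the file
        std::fprintf(stderr, "reload failed: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
        return false;
    }
    lua_gc(L, LUA_GCCOLLECT, 0);            // collect the old, now-unreachable values
    return true;
}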
I have a C++ program receiving calls from users. Some of these calls should be processed by Python scripts. This is how I am doing it:
On start, it initializes the Python interpreter using Py_Initialize() and loads some modules and functions; I keep references to them. This works.
On each call, the corresponding function is called.
The first time it works fine, but the second time I always get a segmentation fault when calling PyObject_CallObject. I have tried fixing this with the tips from Calling python method from C++ (or C) callback, but it still doesn't work.
Moreover, if I try to run PyRun_SimpleString("import <module_name>"), I also get a segmentation fault! And this time I am not even using references.
Note: the initialization is done via a singleton pattern, so the first call happens immediately after the initialization.
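For reference, a minimal sketch of the setup described above (the module name "handlers" and function name "process_call" are placeholders; if any other threads touch Python, each call would also need PyGILState_Ensure/PyGILState_Release around it):

// Initialize Python once, keep a reference to a function, and call it per request.
#include <Python.h>

static PyObject* g_func = nullptr;   // kept across calls, as described above

bool init_python() {
    Py_Initialize();
    PyObject* module = PyImport_ImportModule("handlers");
    if (!module) { PyErr_Print(); return false; }
    g_func = PyObject_GetAttrString(module, "process_call");  // new reference, kept
    Py_DECREF(module);
    if (!g_func || !PyCallable_Check(g_func)) { PyErr_Print(); return false; }
    return true;
}

bool handle_call(const char* payload) {
    PyObject* args = Py_BuildValue("(s)", payload);
    if (!args) { PyErr_Print(); return false; }
    PyObject* result = PyObject_CallObject(g_func, args);     // the call that crashes
    Py_DECREF(args);
    if (!result) { PyErr_Print(); return false; }
    Py_DECREF(result);
    return true;
}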
I have a program that performs very fast-paced calls to a Lua script using lua_pcall. It seems that if the program calls the Lua script too fast, things foul up and cause access violations in the most random of places.
I've tried mutexes and even enabled SEH exceptions with try/catch, to no avail. Error functions are in place and I'm checking all of the appropriate return codes; the problem is an actual access violation deep within the pcall, not a safely handled Lua error.
A lot of the time the break occurs in luaV_execute, but sometimes it's in other random places. I've checked to make sure all parameters pushed to the stack are valid.
Is there a way to force Lua to complete a call before returning, or some way to ensure the call stack doesn't get corrupted?
Although the Lua system as a whole is fully re-entrant, individual lua_State instances are not in themselves thread safe.
If you're accessing a lua_State from multiple threads, you should use a mutex or other locking mechanism to ensure that only one thread at a time can manipulate that state. Simultaneous accesses could easily result in the sort of corruption you're seeing.
If you're working with multiple lua_State instances, each state can have its own access lock; you don't need a single global lock for the whole Lua runtime.
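A minimal sketch of that single-lock approach (Lua 5.2+ API assumed; the "update" function name is only an example). Note that the lock has to cover the whole sequence of stack pushes plus the lua_pcall, not just the pcall itself:

// Serialize all access to a shared lua_State so concurrent callers cannot
// corrupt the Lua stack in the middle of a call.
#include <lua.hpp>
#include <mutex>
#include <cstdio>

std::mutex g_lua_mutex;   // one lock per lua_State

bool call_update(lua_State* L, double dt) {
    std::lock_guard<std::mutex> lock(g_lua_mutex);   // held across push + pcall
    lua_getglobal(L, "update");
    lua_pushnumber(L, dt);
    if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
        std::fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
        return false;
    }
    return true;
}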
I just started working with the Android NDK but I keep getting SIGSEGV when I have this call in my C code:
jobjectArray someStringArray;
someStringArray = (*env)->NewObjectArray(env, 10,
    (*env)->FindClass(env, "java/lang/String"),
    (*env)->NewStringUTF(env, ""));
Based on all the examples I can find, the above code is correct, but I keep getting SIGSEGV; everything is OK if the NewObjectArray line is commented out. Any idea what could cause such a problem?
That looks right, so I'm guessing you've done something else wrong. I assume you're running with CheckJNI on? You might want to break that up into multiple lines: do the FindClass and check the return value, do the NewStringUTF and check the return value, and then call NewObjectArray.
By the way, you might want to pass NULL as the final argument; this pattern of using the empty string as the default value for each element of the array is commonly used (I think it was copy & pasted from some Sun documentation and has spread from there), but it's rarely useful and slightly wasteful. (It also doesn't match the behavior of "new String[10]" in Java.)
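A sketch of that broken-up version, checking each step and passing NULL as the initial element (written with the C++ JNI syntax; in C it would use the (*env)-> form shown above):

// Create the array step by step so a failure in FindClass or NewObjectArray is
// caught immediately instead of crashing later.
#include <jni.h>

jobjectArray makeStringArray(JNIEnv* env) {
    jclass stringClass = env->FindClass("java/lang/String");
    if (stringClass == nullptr) {
        return nullptr;                  // a ClassNotFoundException is pending
    }
    // NULL initial element matches "new String[10]" in Java and avoids an
    // unnecessary empty-string allocation.
    jobjectArray arr = env->NewObjectArray(10, stringClass, nullptr);
    env->DeleteLocalRef(stringClass);    // no longer needed
    return arr;                          // nullptr here means allocation failed
}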
I guess one possible cause is that in a long-running JNI method, the VM aborts when it runs out of per-method-invocation local reference slots (normally 512 slots on Android).
Since FindClass() and NewStringUTF() allocate local references, and the VM cannot know, while you stay inside a JNI method for a long time, whether a specific local reference should be recycled, you should explicitly call DeleteLocalRef() to release acquired local references once they are no longer needed. If you don't, the "zombie" local references keep occupying slots in the VM, and the VM aborts when all local reference slots are exhausted.
In a short-running JNI method this may not be a problem, because all local references are recycled when the method returns.
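For example, a sketch of a long-running JNI method that creates strings in a loop and releases each local reference explicitly (again in C++ JNI syntax; the names are illustrative):

// Without the DeleteLocalRef call, this loop would accumulate one local
// reference per iteration and eventually exhaust the local reference table,
// aborting the VM.
#include <jni.h>

void fillArray(JNIEnv* env, jobjectArray arr, jsize count) {
    for (jsize i = 0; i < count; ++i) {
        jstring s = env->NewStringUTF("element");   // allocates a local reference
        if (s == nullptr) {
            return;                                 // out of memory; exception pending
        }
        env->SetObjectArrayElement(arr, i, s);      // the array holds its own reference
        env->DeleteLocalRef(s);                     // free the slot right away
    }
}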