Best way for C++ and Lua I/O interaction on stack - c++

I was wondering about the best way to interact with Lua I/O from C++ code.
Lua script:
while true do
    local input = io.read("*line")
    if input == "hello" then
        print("world")
    else
        print("hmmm...")
    end
end
C++ code:
lua_State* L = luaL_newstate();
luaL_openlibs(L);
int r = luaL_dofile(L, "foo.lua");
When I ran the C++ code, the foo.lua script's I/O seemed to replace the original I/O in C++. Another interesting thing is that Lua's print function seems to insert the value into the stack (I can use lua_tostring(L, -1) to fetch the printed message).
What I want to know is whether there is any way to interact with the Lua script more elegantly, instead of taking over my standard I/O. For example, if I push "hello" onto the stack, can it give me "world" back?

This is how I tend to write a C program that executes a Lua script using Lua's C API:
#include <stdio.h>
#include <stdlib.h>

#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"

#define MIN_ARGC 2

int main(int argc, char** argv) {
    // Check to make sure a Lua file was passed to this program as a
    // command-line argument.
    if (argc < MIN_ARGC) {
        fprintf(stderr, "Lua filename expected from command line\n");
        exit(EXIT_FAILURE);
    }
    lua_State* L = luaL_newstate();
    // NOTE: it is a good idea to make sure the lua_State* was initialized.
    if (!L) {
        fprintf(stderr, "Error initializing Lua\n");
        exit(EXIT_FAILURE);
    }
    // Open the standard Lua libraries--math, string, etc.
    luaL_openlibs(L);
    // At this point, I register any C functions for use in Lua using the
    // macro lua_register.
    // I usually create an int called fail; if an error occurs during the
    // execution of the Lua script, fail is set to 1.
    int fail = 0;
    // Execute the Lua script passed to this program as argv[1]; and, in the
    // event of an error, print it to stderr.
    if ((fail = luaL_dofile(L, argv[1])))
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
    // Make sure to close your lua_State*.
    lua_close(L);
    return (fail) ? EXIT_FAILURE : EXIT_SUCCESS;
}
Here is a trivial example of a C function that can be called from Lua. It returns an int and has one parameter, a lua_State*:
int lua_foo(lua_State* L) {
    printf("foo\n");
    return 0;
}
This is a lua_CFunction, and its return value refers to the number of values it will push onto the Lua stack. In the case of my trivial function lua_foo, it pushes nothing onto the stack, so it returns 0.
It could be registered with the macro lua_register as follows:
lua_register(L, "foo", lua_foo);
It could then be called from Lua, in a script executed by this program via luaL_dofile, as foo().
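If the C function should exchange values with Lua, the same convention extends naturally: read arguments off the stack, push results, and return how many values you pushed. Here is a minimal sketch (lua_greet is a hypothetical name, not part of the question's code; luaL_checkstring comes from lauxlib.h, already included above):
int lua_greet(lua_State* L) {
    const char* name = luaL_checkstring(L, 1);  // argument #1 from Lua
    lua_pushfstring(L, "hello, %s", name);      // the single result
    return 1;                                   // number of results pushed
}
Registered with lua_register(L, "greet", lua_greet), it can be called from Lua as greet("world").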
As for the example script that you provided in your question, there is no condition to break out of the while-loop. Try this instead:
while true do
    local input = io.read("*l")
    if input == "hello" then
        print("world")
        -- Include a statement here to break out of the loop once the user
        -- has input 'hello'.
        break
    else
        print("hmm...")
    end
end
Upon inputting "hello", the loop will break and the script should exit successfully; the value of r in your C/C++ code will then be set to 0, indicating normal script execution by luaL_dofile.
EDIT:
As for the part of your question concerning Lua I/O and "elegant" interaction with C, you have to remember that Lua is implemented in ANSI C, which means that, at a lower level, Lua calls C functions. There is nothing wrong with calling Lua I/O functions, so long as you handle them correctly, as they are simply Lua wrappers for C functions.
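As a concrete illustration of the "push 'hello', get 'world' back" idea from your question: instead of having the script loop over io.read, let the script define a function and call it from C through the stack. A minimal sketch, continuing your C++ snippet, and assuming foo.lua is rewritten to define a global function respond (a hypothetical name):
// foo.lua now defines a function instead of looping over io.read:
//   function respond(input)
//     if input == "hello" then return "world" else return "hmmm..." end
//   end
luaL_dofile(L, "foo.lua");               // run the script once to define respond()
lua_getglobal(L, "respond");             // push the function onto the stack
lua_pushstring(L, "hello");              // push the argument
if (lua_pcall(L, 1, 1, 0) == LUA_OK) {   // call with 1 argument, 1 result
    printf("%s\n", lua_tostring(L, -1)); // -> "world"
    lua_pop(L, 1);                       // clean the result off the stack
}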

Related

Lua coroutines -- setjmp longjmp clobbering?

In a blog post from a while back, Scott Vokes describes a technical problem associated with Lua's implementation of coroutines using the C functions setjmp and longjmp:
The main limitation of Lua coroutines is that, since they are implemented with setjmp(3) and longjmp(3), you cannot use them to call from Lua into C code that calls back into Lua that calls back into C, because the nested longjmp will clobber the C function’s stack frames. (This is detected at runtime, rather than failing silently.)
I haven’t found this to be a problem in practice, and I’m not aware of any way to fix it without damaging Lua’s portability, one of my favorite things about Lua — it will run on literally anything with an ANSI C compiler and a modest amount of space. Using Lua means I can travel light. :)
I have used coroutines a fair amount, and I thought I understood broadly what was going on and what setjmp and longjmp do; however, I read this at some point and realized that I didn't really understand it. To try to figure it out, I tried to make a program that I thought should cause a problem based on the description, and instead it seems to work fine.
However, there are a few other places where I've seen people allege that there are problems:
http://coco.luajit.org/
http://lua-users.org/lists/lua-l/2005-03/msg00179.html
The question is:
Under what circumstances do Lua coroutines fail to work because of C function stack frames getting clobbered?
What exactly is the result? Does "detected at runtime" mean a Lua panic? Or something else?
Does this still affect the most recent version of Lua (5.3), or is this actually a 5.1 issue or something?
Here is the code I produced. In my test, it is linked with Lua 5.3.1, compiled as C code, and the test itself is compiled as C++ code at the C++11 standard.
extern "C" {
#include <lauxlib.h>
#include <lua.h>
}
#include <cassert>
#include <iostream>
#define CODE(C) \
case C: { \
std::cout << "When returning to " << where << " got code '" #C "'" << std::endl; \
break; \
}
void handle_resume_code(int code, const char * where) {
switch (code) {
CODE(LUA_OK)
CODE(LUA_YIELD)
CODE(LUA_ERRRUN)
CODE(LUA_ERRMEM)
CODE(LUA_ERRERR)
default:
std::cout << "An unknown error code in " << where << std::endl;
}
}
int trivial(lua_State *, int, lua_KContext) {
std::cout << "Called continuation function" << std::endl;
return 0;
}
int f(lua_State * L) {
std::cout << "Called function 'f'" << std::endl;
return 0;
}
int g(lua_State * L) {
std::cout << "Called function 'g'" << std::endl;
lua_State * T = lua_newthread(L);
lua_getglobal(T, "f");
handle_resume_code(lua_resume(T, L, 0), __func__);
return lua_yieldk(L, 0, 0, trivial);
}
int h(lua_State * L) {
std::cout << "Called function 'h'" << std::endl;
lua_State * T = lua_newthread(L);
lua_getglobal(T, "g");
handle_resume_code(lua_resume(T, L, 0), __func__);
return lua_yieldk(L, 0, 0, trivial);
}
int main () {
std::cout << "Starting:" << std::endl;
lua_State * L = luaL_newstate();
// init
{
lua_pushcfunction(L, f);
lua_setglobal(L, "f");
lua_pushcfunction(L, g);
lua_setglobal(L, "g");
lua_pushcfunction(L, h);
lua_setglobal(L, "h");
}
assert(lua_gettop(L) == 0);
// Some action
{
lua_State * T = lua_newthread(L);
lua_getglobal(T, "h");
handle_resume_code(lua_resume(T, nullptr, 0), __func__);
}
lua_close(L);
std::cout << "Bye! :-)" << std::endl;
}
The output I get is:
Starting:
Called function 'h'
Called function 'g'
Called function 'f'
When returning to g got code 'LUA_OK'
When returning to h got code 'LUA_YIELD'
When returning to main got code 'LUA_YIELD'
Bye! :-)
Many thanks to @Nicol Bolas for the very detailed answer!
After reading his answer, reading the official docs, reading some emails and playing around with it some more, I want to refine the question / ask a specific follow-up question, however you want to look at it.
I think the term 'clobbering' is not a good description of this issue, and that was part of what confused me -- nothing is being "clobbered" in the sense of being written twice with the first value lost. The issue is solely, as @Nicol Bolas points out, that longjmp tosses part of the C stack, and if you were hoping to restore the stack later, too bad.
The issue is actually described very nicely in section 4.7 of the Lua 5.2 manual, in a link provided by @Nicol Bolas.
Curiously, there is no equivalent section in the Lua 5.1 documentation. However, the Lua 5.2 manual has this to say about lua_yieldk:
Yields a coroutine.
This function should only be called as the return expression of a C function, as follows:
return lua_yieldk (L, n, i, k);
The Lua 5.1 manual says something similar about lua_yield instead:
Yields a coroutine.
This function should only be called as the return expression of a C function, as follows:
return lua_yield (L, nresults);
Some natural questions then:
Why does it matter whether I use return here or not? If lua_yieldk calls longjmp, then lua_yieldk never returns anyway, so the return shouldn't matter. So that cannot be what is happening, right?
Suppose instead that lua_yieldk just makes a note within the Lua state that the current C API call wants to yield, and then, when the call finally does return, Lua figures out what happens next. That would solve the problem of saving C stack frames, no? After we return to Lua normally, those stack frames have expired anyway -- so the complications described in @Nicol Bolas's picture are skirted around. And second, in 5.2 at least, the semantics are never that C stack frames should be restored, it seems -- lua_yieldk resumes in a continuation function, not at the lua_yieldk call site, and lua_yield apparently resumes at the caller of the current API call, not at the lua_yield call site.
And, the most important question:
If I consistently use lua_yieldk in the form return lua_yieldk(...) specified in the docs, returning from a lua_CFunction that was passed to Lua, is it still possible to trigger the "attempt to yield across a C-call boundary" error?
Finally (but this is less important), I would like to see a concrete example of what it looks like when a naive programmer "isn't careful" and triggers the "attempt to yield across a C-call boundary" error. I get the idea that there could be a problem associated with setjmp and longjmp tossing stack frames that we later need, but I want to see some real Lua / Lua C API code that I can point to and say "for instance, don't do that", and this is surprisingly elusive.
I found this email where someone reported this error with some Lua 5.1 code, and I attempted to reproduce it in Lua 5.3. What I found, however, is that this looks like just poor error reporting from the Lua implementation -- the actual bug is that the user is not setting up their coroutine properly. The proper way to start the coroutine is to create the thread, push a function onto the thread stack, and then call lua_resume on the thread state. Instead, the user was calling dofile on the thread stack, which executes the chunk after loading it rather than resuming it. So it is effectively a yield from outside a coroutine, if I understand correctly, and when I patch this, his code works fine, using both lua_yield and lua_yieldk in Lua 5.3.
Here is the listing I produced:
#include <cassert>
#include <cstdio>

extern "C" {
#include "lua.h"
#include "lauxlib.h"
}

//#define USE_YIELDK

bool running = true;

int lua_print(lua_State * L) {
    if (lua_gettop(L)) {
        printf("lua: %s\n", lua_tostring(L, -1));
    }
    return 0;
}

int lua_finish(lua_State *L) {
    running = false;
    printf("%s called\n", __func__);
    return 0;
}

int trivial(lua_State *, int, lua_KContext) {
    printf("%s called\n", __func__);
    return 0;
}

int lua_sleep(lua_State *L) {
    printf("%s called\n", __func__);
#ifdef USE_YIELDK
    printf("Calling lua_yieldk\n");
    return lua_yieldk(L, 0, 0, trivial);
#else
    printf("Calling lua_yield\n");
    return lua_yield(L, 0);
#endif
}

const char * loop_lua =
    "print(\"loop.lua\")\n"
    "\n"
    "local i = 0\n"
    "while true do\n"
    "  print(\"lua_loop iteration\")\n"
    "  sleep()\n"
    "\n"
    "  i = i + 1\n"
    "  if i == 4 then\n"
    "    break\n"
    "  end\n"
    "end\n"
    "\n"
    "finish()\n";

int main() {
    lua_State * L = luaL_newstate();

    lua_pushcfunction(L, lua_print);
    lua_setglobal(L, "print");
    lua_pushcfunction(L, lua_sleep);
    lua_setglobal(L, "sleep");
    lua_pushcfunction(L, lua_finish);
    lua_setglobal(L, "finish");

    lua_State* cL = lua_newthread(L);
    assert(LUA_OK == luaL_loadstring(cL, loop_lua));

    /*{
        int result = lua_pcall(cL, 0, 0, 0);
        if (result != LUA_OK) {
            printf("%s error: %s\n", result == LUA_ERRRUN ? "Runtime" : "Unknown", lua_tostring(cL, -1));
            return 1;
        }
    }*/
    // ^ This pcall (predictably) causes an error -- if we try to execute the
    // script, it is going to call things that attempt to yield, but we did
    // not start the script with lua_resume, we started it with pcall, so it
    // is not okay to yield.
    // The reported error is "attempt to yield across a C-call boundary", but
    // what is really happening is just "yield from outside a coroutine", I
    // suppose...

    while (running) {
        int status;
        printf("Waking up coroutine\n");
        status = lua_resume(cL, L, 0);
        if (status == LUA_YIELD) {
            printf("coroutine yielding\n");
        } else {
            running = false; // you can't try to resume if it didn't yield
            if (status == LUA_ERRRUN) {
                printf("Runtime error: %s\n", lua_isstring(cL, -1) ? lua_tostring(cL, -1) : "(unknown)");
                lua_pop(cL, 1); // pop one value: the error message
                break;
            } else if (status == LUA_OK) {
                printf("coroutine finished\n");
            } else {
                printf("Unknown error\n");
            }
        }
    }

    lua_close(L);
    printf("Bye! :-)\n");
    return 0;
}
Here is the output when USE_YIELDK is commented out:
Waking up coroutine
lua: loop.lua
lua: lua_loop iteration
lua_sleep called
Calling lua_yield
coroutine yielding
Waking up coroutine
lua: lua_loop iteration
lua_sleep called
Calling lua_yield
coroutine yielding
Waking up coroutine
lua: lua_loop iteration
lua_sleep called
Calling lua_yield
coroutine yielding
Waking up coroutine
lua: lua_loop iteration
lua_sleep called
Calling lua_yield
coroutine yielding
Waking up coroutine
lua_finish called
coroutine finished
Bye! :-)
Here is the output when USE_YIELDK is defined:
Waking up coroutine
lua: loop.lua
lua: lua_loop iteration
lua_sleep called
Calling lua_yieldk
coroutine yielding
Waking up coroutine
trivial called
lua: lua_loop iteration
lua_sleep called
Calling lua_yieldk
coroutine yielding
Waking up coroutine
trivial called
lua: lua_loop iteration
lua_sleep called
Calling lua_yieldk
coroutine yielding
Waking up coroutine
trivial called
lua: lua_loop iteration
lua_sleep called
Calling lua_yieldk
coroutine yielding
Waking up coroutine
trivial called
lua_finish called
coroutine finished
Bye! :-)
Think about what happens when a coroutine yields. It stops executing, and control returns to whoever called resume on that coroutine, correct?
Well, let's say you have this code:
function top()
    coroutine.yield()
end

function middle()
    top()
end

function bottom()
    middle()
end

local co = coroutine.create(bottom)
coroutine.resume(co)
At the moment of the call to yield, the Lua stack looks like this:
-- top
-- middle
-- bottom
-- yield point
When you call yield, the Lua call stack that is part of the coroutine is preserved. When you do resume, the preserved call stack is executed again, starting where it left off before.
OK, now let's say that middle was in fact not a Lua function. Instead, it was a C function, and that C function calls the Lua function top. So conceptually, your stack looks like this:
-- Lua - top
-- C - middle
-- Lua - bottom
-- Lua - yield point
Now, please note what I said before: this is what your stack looks like conceptually.
Because your actual call stack looks nothing like this.
In reality, there are really two stacks. There is Lua's internal stack, defined by a lua_State. And there's C's stack. Lua's internal stack, at the time when yield is about to be called, looks something like this:
-- top
-- Some C stuff
-- bottom
-- yield point
So what does the stack look like to C? Well, it looks like this:
-- arbitrary Lua interpreter stuff
-- middle
-- arbitrary Lua interpreter stuff
-- setjmp
And that right there is the problem. See, when Lua does a yield, it's going to call longjmp. That function is based on the behavior of the C stack. Namely, it's going to return to where setjmp was.
The Lua stack will be preserved, because the Lua stack is separate from the C stack. But the C stack? Everything between the longjmp and the setjmp? Gone. Kaput. Lost forever.
Now you may go, "wait, doesn't the Lua stack know that it went into C and back into Lua?" A bit. But the Lua stack is incapable of doing something that C itself cannot do. And C is simply not capable of preserving a stack (well, not without special libraries). So while the Lua stack is vaguely aware that some kind of C process happened in the middle of its stack, it has no way to reconstitute what was there.
So what happens if you resume this yielded coroutine?
Nasal demons. And nobody likes those. Fortunately, Lua 5.1 and above (at least) will error whenever you attempt to yield across C.
Note that Lua 5.2+ does have ways of fixing this. But it's not automatic; it requires explicit coding on your part.
When Lua code that is in a coroutine calls your C code, and your C code calls Lua code that may yield, you can use lua_callk or lua_pcallk to call the possibly-yielding Lua functions. These calling functions take an extra parameter: a "continuation" function.
If the Lua code you call does yield, then the lua_*callk function won't ever actually return (since your C stack will have been destroyed). Instead, it will call the continuation function you provided in your lua_*callk function. As you can guess by the name, the continuation function's job is to continue where your previous function left off.
Now, Lua does preserve the stack for your continuation function, so it gets the stack in the same state that your original C function was in. Well, except that the function+arguments that you called (with lua_*callk) are removed, and the return values from that function are pushed onto your stack. Outside of that, the stack is all the same.
There is also lua_yieldk. This allows your C function to yield back to Lua, such that when the coroutine is resumed, it calls the provided continuation function.
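For concreteness, here is a minimal sketch of that shape, using the Lua 5.3 lua_KFunction signature (the names are illustrative; the usual lua.h include is assumed):
// A C function that yields; when the coroutine is resumed, control
// reappears in sleep_cont, not after the lua_yieldk call.
static int sleep_cont(lua_State *L, int status, lua_KContext ctx) {
    // Pick up here on resume; push any results and return their count.
    return 0;
}

static int my_sleep(lua_State *L) {
    // Written as a return expression: this C frame will not survive the yield.
    return lua_yieldk(L, 0, 0, sleep_cont);
}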
Note that Coco gives Lua 5.1 the ability to resolve this problem. It is capable (through OS/assembly/etc. magic) of preserving the C stack during a yield operation. LuaJIT versions before 2.0 also provided this feature.
C++ note
You marked your question with the C++ tag, so I'll assume that's involved here.
Among the many differences between C and C++ is the fact that C++ is far more dependent on the nature of its call stack than C. In C, if you discard a stack, you might lose resources that weren't cleaned up. C++, however, is required to call destructors of objects declared on the stack at some point. The standard does not allow you to just throw them away.
So continuations only work in C++ if there is nothing on the stack which needs to have a destructor called. Or more specifically, only types that are trivially destructible can be sitting on the stack if you call any of the continuation-function Lua APIs.
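For illustration, here is a sketch of the kind of code that breaks that rule -- may_yield and my_continuation are hypothetical names; the point is only the non-trivially-destructible local:
extern "C" {
#include <lua.h>
}
#include <string>

// Hypothetical continuation; control lands here if may_yield() yields.
static int my_continuation(lua_State* L, int status, lua_KContext ctx) {
    return 0;
}

static int bad_caller(lua_State* L) {
    std::string message = "I own heap memory";  // NOT trivially destructible
    lua_getglobal(L, "may_yield");              // assumed Lua function
    // If may_yield() yields, longjmp discards this C++ frame, and
    // ~std::string() never runs for `message`: its buffer leaks.
    lua_callk(L, 0, 0, 0, my_continuation);
    return 0;  // reached only if may_yield() returned without yielding
}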
Of course, Coco handles C++ just fine, since it's actually preserving the C++ stack.
Posting this as an answer which complements @Nicol Bolas's answer, and so that
I can have space to write down what it took for me to understand the original
question, and the answers to the secondary questions / a code listing.
If you read Nicol Bolas' answer but still have questions like I did, here are
some additional hints:
The three layers on the call stack, Lua, C, Lua, are essential to the problem.
If you only have two layers, Lua and C, you don't get the problem.
In imagining how the coroutine call is supposed to work -- the Lua stack looks
a certain way, the C stack looks a certain way, the call yields (longjmp) and
is later resumed -- the problem does not happen immediately when it is
resumed.
The problem happens when the resumed function later tries to return to your
C function.
Because, for the coroutine semantics to work out, it is supposed to return
into a C function call, but the stack frames for that are gone, and cannot be
restored.
The workaround for this lack of ability to restore those stack frames is to
use lua_callk, lua_pcallk, which allow you to provide a substitute
function which can be called in place of that C function whose frames were
wiped out.
The issue about return lua_yieldk(...) appears to have nothing to do with
any of this. From skimming the implementation of lua_yieldk, it appears that
it does indeed always longjmp; it may only return in some obscure case
involving Lua debugging hooks (?).
Lua internally (at the current version) keeps track of when yielding should not
be allowed by keeping a counter variable nny (number of non-yieldable calls)
associated with the Lua state. When you call lua_call or lua_pcall from a C API
function (a lua_CFunction which you earlier pushed to Lua), nny is
incremented, and it is only decremented when that call or pcall returns. When
nny is nonzero, it is not safe to yield, and you get this "yield across
C-call boundary" error if you try to yield anyway.
Here is a simple listing that produces the problem and reports the errors,
if you are like me and like to have concrete code examples. It demonstrates
some of the differences between using lua_call, lua_pcall, and lua_pcallk
within a function called by a coroutine.
extern "C" {
#include <lauxlib.h>
#include <lua.h>
}
#include <cassert>
#include <iostream>
//#define USE_PCALL
//#define USE_PCALLK
#define CODE(C) \
case C: { \
std::cout << "When returning to " << where << " got code '" #C "'" << std::endl; \
break; \
}
#define ERRCODE(C) \
case C: { \
std::cout << "When returning to " << where << " got code '" #C "': " << lua_tostring(L, -1) << std::endl; \
break; \
}
int report_resume_code(int code, const char * where, lua_State * L) {
switch (code) {
CODE(LUA_OK)
CODE(LUA_YIELD)
ERRCODE(LUA_ERRRUN)
ERRCODE(LUA_ERRMEM)
ERRCODE(LUA_ERRERR)
default:
std::cout << "An unknown error code in " << where << ": " << lua_tostring(L, -1) << std::endl;
}
return code;
}
int report_pcall_code(int code, const char * where, lua_State * L) {
switch(code) {
CODE(LUA_OK)
ERRCODE(LUA_ERRRUN)
ERRCODE(LUA_ERRMEM)
ERRCODE(LUA_ERRERR)
default:
std::cout << "An unknown error code in " << where << ": " << lua_tostring(L, -1) << std::endl;
}
return code;
}
int trivial(lua_State *, int, lua_KContext) {
std::cout << "Called continuation function" << std::endl;
return 0;
}
int f(lua_State * L) {
std::cout << "Called function 'f', yielding" << std::endl;
return lua_yield(L, 0);
}
int g(lua_State * L) {
std::cout << "Called function 'g'" << std::endl;
lua_getglobal(L, "f");
#ifdef USE_PCALL
std::cout << "pcall..." << std::endl;
report_pcall_code(lua_pcall(L, 0, 0, 0), __func__, L);
// ^ yield across pcall!
// If we yield, there is no way ever to return normally from this pcall,
// so it is an error.
#elif defined(USE_PCALLK)
std::cout << "pcallk..." << std::endl;
report_pcall_code(lua_pcallk(L, 0, 0, 0, 0, trivial), __func__, L);
#else
std::cout << "call..." << std::endl;
lua_call(L, 0, 0);
// ^ yield across call!
// This results in an error being reported in lua_resume, rather than at
// the pcall
#endif
return 0;
}
int main () {
std::cout << "Starting:" << std::endl;
lua_State * L = luaL_newstate();
// init
{
lua_pushcfunction(L, f);
lua_setglobal(L, "f");
lua_pushcfunction(L, g);
lua_setglobal(L, "g");
}
assert(lua_gettop(L) == 0);
// Some action
{
lua_State * T = lua_newthread(L);
lua_getglobal(T, "g");
while (LUA_YIELD == report_resume_code(lua_resume(T, L, 0), __func__, T)) {}
}
lua_close(L);
std::cout << "Bye! :-)" << std::endl;
}
Example output:
call
Starting:
Called function 'g'
call...
Called function 'f', yielding
When returning to main got code 'LUA_ERRRUN': attempt to yield across a C-call boundary
Bye! :-)
pcall
Starting:
Called function 'g'
pcall...
Called function 'f', yielding
When returning to g got code 'LUA_ERRRUN': attempt to yield across a C-call boundary
When returning to main got code 'LUA_OK'
Bye! :-)
pcallk
Starting:
Called function 'g'
pcallk...
Called function 'f', yielding
When returning to main got code 'LUA_YIELD'
Called continuation function
When returning to main got code 'LUA_OK'
Bye! :-)

How to make the Tcl interpreter not continue after an exit command?

static int
MyReplacementExit(ClientData unused, Tcl_Interp *interp, int argc, const char *argv[])
{
    // Tcl_DeleteInterp(interp);
    // Tcl_Finalize();
    return TCL_OK;
}

int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateCommand(interp, "exit", MyReplacementExit, NULL, NULL);
    Tcl_Eval(interp, "exit ; puts 11111111");
    std::cout << "22222222222" << std::endl;
    return 0;
}
I need to handle evaluation of the exit command by the Tcl interpreter. By default it tries to delete itself and also calls std::exit, which closes the whole program. That is not what I want, so I am trying to replace it with a custom proc. I don't need to delete the interpreter in the exit handler proc (I can do it later); I only need it to stop evaluating commands after the exit command.
In this code I need to change the MyReplacementExit proc somehow, so that 11111111 is not printed but 22222222222 is.
This can be achieved by returning TCL_ERROR from the MyReplacementExit proc, but then I can't distinguish other error situations from this one.
Make your replacement for exit delete the interpreter (which stops further commands from being executed, but doesn't actually immediately delete the data structure as it is still in use) and, important, wrap the call to Tcl_Eval with calls to Tcl_Preserve and Tcl_Release. Don't call Tcl_Finalize if you can possibly avoid it; that is for when you're about to unload the Tcl library from memory and can be quite tricky (it's easier to just quit the process, frankly).
Here's how to do it with your code (adapted):
static int
MyReplacementExit(ClientData unused, Tcl_Interp *interp, int argc, const char *argv[])
{
    Tcl_DeleteInterp(interp);  // <------------------
    return TCL_OK;
}

int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateCommand(interp, "exit", MyReplacementExit, NULL, NULL);
    Tcl_Preserve(interp);  // <------------------
    Tcl_Eval(interp, "exit ; puts 11111111");
    Tcl_Release(interp);  // <------------------
    std::cout << "22222222222" << std::endl;
    return 0;
}
Be aware that you should not access the interpreter at all after the Tcl_Release as it might've been destroyed (as in, the memory released and scribbled over with random junk) at that point. If you need to retrieve results and use them (e.g., printing them out) do so beforehand.
Note that in this specific case, you don't need the preserve/release pair; your code isn't actually touching the interpreter after the Tcl_Eval (which does its own preserve/release internally).
If you don't want the interpreter to terminate, that's much trickier. The cleanest way in 8.4 is probably to throw a custom exception code (i.e., anything greater than TCL_CONTINUE) but there's no guarantee that it will work for arbitrary code as Tcl's catch command can still trap it. If you're really in that situation, it's actually easier to create an interpreter, run the arbitrary code in the sub-interp, and tear it down at the end of the script; you can then drop that interpreter without losing much context. Indeed, you could do:
Tcl_Preserve(interp);
if (Tcl_Eval(interp, theScriptToEval) != TCL_OK) {
    // Handle unexpected errors here.
}
if (!Tcl_InterpDeleted(interp))
    Tcl_DeleteInterp(interp);
Tcl_Release(interp);
Yes, this will mean you want to keep the amount of work you do to set up the interpreter fairly small; you probably won't want to try to call this every millisecond on an interrupt…
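Packaged as a helper, the throwaway-interpreter idea might look like this (a sketch: registering commands, such as a replacement exit, is left to the caller, and theScriptToEval is whatever arbitrary code you were handed):
#include <tcl.h>

static int RunScriptInThrowawayInterp(const char *theScriptToEval) {
    Tcl_Interp *interp = Tcl_CreateInterp();
    // Register replacement commands (e.g. your [exit]) here.
    Tcl_Preserve(interp);
    int code = Tcl_Eval(interp, theScriptToEval);
    if (!Tcl_InterpDeleted(interp))
        Tcl_DeleteInterp(interp);
    Tcl_Release(interp);
    return code;  // return only the code: the interp must not be touched now
}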

DeleteInterpProc called with active evals

I am writing a program which executes Tcl scripts. When the script has an exit command, the program crashes with this error:
DeleteInterpProc called with active evals
Aborted
I am calling Tcl_EvalFile(m_interpreter, script.c_str()), where script is the file name.
I have also tried Tcl_Eval with the arguments interpreter and "source filename". The result is the same. Other Tcl commands (e.g. puts) the interpreter executes normally. How can this be fixed?
#include <tcl.h>
#include <iostream>

int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    //Tcl_Preserve(interp);
    Tcl_Eval(interp, "exit");
    //Tcl_Release(interp);
    std::cout << "11111111111" << std::endl;
    return 0;
}
This is the simple case. "11111111111" is not printed. As I understand it, the whole program exits when calling Tcl_Eval(interp, "exit");. The result is the same after adding Tcl_Preserve and Tcl_Release.
The problem is that the interpreter, the execution context for Tcl code, is getting its feet deleted out from under itself; this makes it very confused! At least you're getting a clean panic/abort rather than a disgusting hard-to-reproduce crash.
The easiest fix is probably to do:
Tcl_Preserve(m_interpreter);
// Your code that calls Tcl_EvalFile(m_interpreter, script.c_str())
// and deals with the results.
Tcl_Release(m_interpreter);
Be aware that after the Tcl_Release, the Tcl_Interp handle may refer to deleted memory.
(Yes, wrapping the Tcl_Preserve/Tcl_Release in RAII goodness is reasonable.)
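For instance, a minimal RAII guard might look like this (a sketch, not a standard Tcl facility):
#include <tcl.h>

// Pairs Tcl_Preserve/Tcl_Release with a scope, exception-safely.
class TclInterpGuard {
    Tcl_Interp *interp;
public:
    explicit TclInterpGuard(Tcl_Interp *i) : interp(i) { Tcl_Preserve(interp); }
    ~TclInterpGuard() { Tcl_Release(interp); }
    TclInterpGuard(const TclInterpGuard &) = delete;
    TclInterpGuard &operator=(const TclInterpGuard &) = delete;
};
With this, you declare TclInterpGuard guard(m_interpreter); and do the Tcl_EvalFile work inside that scope.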
If you want instead to permit your code to run after the script does an exit, you have to take additional steps. In particular, the standard Tcl exit command is not designed to cause a return to the calling context: it will cause the process to call the _exit(2) system call. To change its behavior, replace it:
// A callback function that implements the replacement.
static int
MyReplacementExit(ClientData unused, Tcl_Interp *interp, int argc, const char *argv[])
{
    // We ought to check the argument count... but why bother?
    Tcl_DeleteInterp(interp);
    return TCL_OK;
}

int main() {
    Tcl_Interp *interp = Tcl_CreateInterp();
    // Install that function over the standard [exit].
    Tcl_CreateCommand(interp, "exit", MyReplacementExit, NULL, NULL);
    // Important: need to keep the *handle* live until we're finished.
    Tcl_Preserve(interp);
    // Or run whatever code you want here...
    Tcl_Eval(interp, "exit");
    // Important piece of cleanup code.
    if (!Tcl_InterpDeleted(interp))
        Tcl_DeleteInterp(interp);
    Tcl_Release(interp);
    // After this point, you *MUST NOT* use interp.
    std::cout << "11111111111" << std::endl;
    return 0;
}
The rules for doing memory management in these sorts of scenarios are laid out in the manual page for Tcl_CreateInterp. (That's the 8.6 manual page, but the relevant rules have been true since at least Tcl 7.0, which is over 2 decades ago.) Once an interpreter is deleted, you can no longer count on executing any commands or accessing any variables in it; the Tcl library handles the state unwinding for you.
It might be better to replace (hide) the exit command and create your own exit command that exits your program gracefully. I'm not that good with C and the Tcl C API, but I hope this can help you. Eggdrop, for example, uses the die command to exit gracefully; a sketch of the idea follows.
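The Tcl C API does provide a direct mechanism for this kind of stashing: Tcl_HideCommand. A sketch of the hide-and-replace idea (GracefulExit is a placeholder for whatever graceful shutdown your program needs):
#include <tcl.h>

// Placeholder replacement with the Tcl_CmdProc signature.
static int
GracefulExit(ClientData unused, Tcl_Interp *interp, int argc, const char *argv[])
{
    // Do application-specific cleanup here, then stop this interpreter.
    Tcl_DeleteInterp(interp);
    return TCL_OK;
}

static void
InstallGracefulExit(Tcl_Interp *interp)
{
    Tcl_HideCommand(interp, "exit", "exit");  // stash the real [exit] away
    Tcl_CreateCommand(interp, "exit", GracefulExit, NULL, NULL);
}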

Generate exception from a C library function that forces quit

I am trying to wrap a C library function using C++. The function attempts to initialize a device. On error, it forces the execution of the program to terminate (probably with an exit(1)). I would like to throw an exception on error instead. Is there any way to do this without editing the C source?
Can I somehow disallow the called function to terminate the program?
Install an atexit handler, and throw the exception from the handler. Ugh.
PS. So, a C++ exception, as people pointed out, does not work; then we use a C "exception" instead:
#include <cstdlib>
#include <iostream>
#include <csetjmp>

jmp_buf buf;

void foo ()
{
    longjmp (buf, 1);
}

void bar () { exit(-1); }

int main ()
{
    atexit (foo);
    if (setjmp (buf))
    {
        bar ();
    }
    else
    {
        std::cout << "graceful" << std::endl;
    }
    return 0;
}
If you are on Unix/Linux, you can check with strace exactly what your library calls; then you can override the called function using LD_PRELOAD.
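A sketch of the LD_PRELOAD idea, assuming a Linux/glibc environment (exit_shim.c is a hypothetical file name, and escape_buf/escape_armed would be shared with your wrapper via a header):
/* exit_shim.c -- build: gcc -shared -fPIC exit_shim.c -o exit_shim.so
 * run:   LD_PRELOAD=./exit_shim.so ./your_program
 */
#include <setjmp.h>
#include <unistd.h>

jmp_buf escape_buf;    /* the wrapper sets this up before calling the library */
int escape_armed = 0;

/* Our definition shadows libc's exit() for the whole process. */
void exit(int status) {
    if (escape_armed)
        longjmp(escape_buf, status ? status : 1);
    _exit(status);  /* not armed: really terminate (skips atexit handlers) */
}
The C++ wrapper sets escape_armed, calls the library function under setjmp, and on a nonzero setjmp return throws an ordinary C++ exception from its own frame.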
Not a super nice solution, but one which should work: fork a new process and call the C function in the child process. In the parent process, wait for the child to finish and check the exit code; if it is 1 (meaning exit(1) was called), throw an exception. A sketch is below.
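A minimal sketch of that approach on POSIX systems (init_device and checked_init_device are hypothetical names):
#include <stdexcept>
#include <sys/wait.h>
#include <unistd.h>

extern "C" void init_device(void);  // hypothetical library call that may exit(1)

// Probe the library call in a child process and translate a forced
// exit(1) into a C++ exception in the parent.
void checked_init_device() {
    pid_t pid = fork();
    if (pid < 0)
        throw std::runtime_error("fork failed");
    if (pid == 0) {
        init_device();  // may terminate the child with exit(1)
        _exit(0);       // child reached the end: init succeeded
    }
    int status = 0;
    waitpid(pid, &status, 0);
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        throw std::runtime_error("device initialization failed");
}
One caveat: any state the child set up dies with it, so this fits best when the probe is merely a pre-flight check, or when the device work itself can live in the child process.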

How to call an external program with parameters?

I would like to call a Windows program from within my code, with parameters determined within the code itself.
I'm not looking to call an outside function or method, but an actual .exe or batch/script file within the WinXP environment.
C or C++ would be the preferred language, but if this is more easily done in any other language, let me know (ASM, C#, Python, etc.).
When you call CreateProcess(), system(), etc., make sure you double-quote your file name strings (including the command program's filename) in case your file name(s) and/or the fully qualified path contain spaces; otherwise the parts of the file name path will be parsed by the command interpreter as separate arguments.
system("\"d:some path\\program.exe\" \"d:\\other path\\file name.ext\"");
For Windows it is recommended to use CreateProcess(). It has a messier setup, but you have more control over how the process is launched (as described by Greg Hewgill). For quick and dirty, you can also use WinExec().
(system() is portable to UNIX.)
When launching batch files you may need to launch with cmd.exe (or command.com).
WinExec("cmd \"d:some path\\program.bat\" \"d:\\other path\\file name.ext\"",SW_SHOW_MINIMIZED);
(or SW_SHOW_NORMAL if you want the command window displayed ).
Windows should find command.com or cmd.exe in the system PATH, so it shouldn't need to be fully qualified, but if you want to be certain you can compose the fully qualified filename using CSIDL_SYSTEM (don't simply hard-code C:\Windows\system32\cmd.exe).
C++ example:
char temp[512];
snprintf(temp, sizeof(temp), "command -%s -%s", parameter1, parameter2);
system(temp);
C# example:
private static void RunCommandExample()
{
    // Don't forget: using System.Diagnostics
    Process myProcess = new Process();
    try
    {
        myProcess.StartInfo.FileName = "executabletorun.exe";
        // Do not receive an event when the process exits.
        myProcess.EnableRaisingEvents = false;
        // Parameters
        myProcess.StartInfo.Arguments = "/user testuser /otherparam ok";
        // Modify the following to hide / show the window
        myProcess.StartInfo.CreateNoWindow = false;
        myProcess.StartInfo.UseShellExecute = true;
        myProcess.StartInfo.WindowStyle = ProcessWindowStyle.Maximized;
        myProcess.Start();
    }
    catch (Exception e)
    {
        // Handle error here
    }
}
I think you are looking for the CreateProcess function in the Windows API. There are actually a family of related calls but this will get you started. It is quite easy.
One of the simplest ways to do this is to use the system() runtime library function. It takes a single string as a parameter (many fewer parameters than CreateProcess!) and executes it as if it were typed on the command line. system() also automatically waits for the process to finish before it returns.
There are also limitations:
you have less control over the stdin and stdout of the launched process
you cannot do anything else while the other process is running (such as deciding to kill it)
you cannot get a handle to the other process in order to query it in any way
The runtime library also provides a family of exec* functions (execl, execlp, execle, execv, execvp, more or less) which are derived from Unix heritage and offer more control over the process.
At the lowest level, on Win32 all processes are launched by the CreateProcess function, which gives you the most flexibility.
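Since the answers recommend CreateProcess without showing it, here is a minimal sketch (the RunProgram helper and its parameters are illustrative, not part of the Windows API; note the quoted executable path and that the command-line buffer must be writable):
#include <windows.h>
#include <string>

bool RunProgram(const std::wstring& exe, const std::wstring& args) {
    std::wstring cmd = L"\"" + exe + L"\" " + args;  // quote the program path
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    // CreateProcess may modify the command line, so pass a writable buffer.
    if (!CreateProcessW(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                        0, nullptr, nullptr, &si, &pi))
        return false;
    WaitForSingleObject(pi.hProcess, INFINITE);  // like system(): wait for exit
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;
}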
Simple C++ example (found after searching a few websites):
#include <cassert>
#include <cstdlib>
#include <exception>
#include <iostream>
#include <string>

int main (const int argc, const char **argv) {
    try {
        assert (argc == 2);
        const std::string filename = argv[1];
        const std::string begin = "g++-7 " + filename;
        const std::string end = " -Wall -Werror -Wfatal-errors -O3 -std=c++14 -o a.elf -L/usr/lib/x86_64-linux-gnu";
        const std::string command = begin + end;
        std::cout << "Compiling file using " << command << '\n';
        assert (std::system (command.c_str ()) == 0);
        std::cout << "Running file a.elf" << '\n';
        assert (std::system ("./a.elf") == 0);
        return 0;
    }
    catch (std::exception const& e) { std::cerr << e.what () << '\n'; std::terminate (); }
    catch (...) { std::cerr << "Found an unknown exception." << '\n'; std::terminate (); }
}