Embedding Lua in C++

I've been trying to embed Lua in a C++ application, but to no avail: the compiler complains about "lua_open". I'm using Lua 5.2.
I found a lot of articles claiming that lua_open() was replaced in Lua 5.2, but none of them mentioned what it was replaced with.
Here's the code I am trying to compile:
extern "C" {
#include "../lua/lua.h"
#include "../lua/lualib.h"
#include "../lua/lauxlib.h"
}
int main()
{
int s=0;
lua_State *L = lua_open();
// load the libs
luaL_openlibs(L);
luaL_dofile(L,"example.lua");
printf("\nDone!\n");
lua_close(L);
return 0;
}

Indeed, the lua_open function is not mentioned in the Lua 5.2 reference manual.
A lua_State is constructed with lua_newstate; in practice you can use luaL_newstate from lauxlib.h, which creates a state with a default memory allocator for you.
A faster way to get answers to such questions is to look into the Lua 5.2 source code (which is what I just did).
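For reference, here is a minimal sketch of the question's program with that fix applied (same include paths and example.lua file name as above):
extern "C" {
#include "../lua/lua.h"
#include "../lua/lualib.h"
#include "../lua/lauxlib.h"
}
#include <stdio.h>

int main()
{
    lua_State *L = luaL_newstate();   // replaces the removed lua_open()
    luaL_openlibs(L);                 // load the standard libraries
    luaL_dofile(L, "example.lua");    // run the script
    printf("\nDone!\n");
    lua_close(L);
    return 0;
}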

Related

Lua: Can't open shared library when Lua is wrapped in C++

EDIT: Nearly got the answer, I just don't completely understand it; see the last paragraph.
I am trying to build a shared Lua library and use it within a larger project. When calling the script that loads the shared library from the shell, everything works. However, when I run the script from a C++ wrapper, I get a runtime error while loading the library. Depending on the script, the failure happens at the first call to a Lua C function (e.g. lua_pushnumber). Here is a minimal example.
totestlib.cpp:
extern "C" {
#include "lua.h"
#include "lualib.h"
#include "lauxlib.h"
}
int init(lua_State *L) {
lua_toboolean(L, -1);
return 0;
}
static const struct luaL_Reg testlib[] = {
{"init", init},
{NULL, NULL}
};
extern "C"
int luaopen_libtotestlib(lua_State *L) {
luaL_newlib(L, testlib);
return 1;
}
Compiled with: g++ -shared -fPIC -I./lua-5.4.4/src -L./lua-5.4.4/src totestlib.cpp -o libtotestlib.so
testlib.lua (testing shared library):
print("start")
testlib = require("libtotestlib")
print("done")
testlib.init(true)
print("called")
Calling the Lua script with ./lua-5.4.4/src/lua testlib.lua works and everything is printed. Wrapping the script in the following C++ code does not work:
call_testlib.cpp
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <unistd.h>
static lua_State *L;
int main(int argc, char *argv[]) {
L = luaL_newstate();
luaL_openlibs(L);
int tmp = luaL_loadfile(L, "testlib.lua");
if(tmp != 0) {
return 1;
}
tmp = lua_pcall(L, 0, 0, 0);
if(tmp != 0) {
printf("error pcall\n");
return 1;
}
}
Compiled with g++ call_testlib.cpp -o ./call_testlib -I./lua-5.4.4/src -L./lua-5.4.4/src -llua it prints "error pcall". If I print the error message on the lua stack, I get:
string error loading module 'libtotestlib' from file './libtotestlib.so':
./libtotestlib.so: undefined symbol: luaL_checkversion_
In this case the undefined symbol is luaL_checkversion_ (which I don't call myself), but with other scripts it is usually the first lua_... function that I call.
I have tried several things to fix this. For example, linking -llua when compiling the shared library, but this does not work (and should not be the problem, as calling the script itself works). I also tried to preload the library from C++ (as done in this question) instead of from Lua, but I guess it does not really make a difference, and I am getting the same error. I also uninstalled all Lua versions from my path to make sure I always use the same version.
What is the difference between calling the script directly from the shell and calling it from a C function? Am I doing something wrong?
EDIT: Nearly got the answer. When compiling Lua with MYCFLAGS=-fPIC I can link Lua into the shared library. That works, but it does not seem like a good solution to me and does not really answer my question: why can the lua interpreter itself (run from the shell) somehow provide these symbols to the library, while the wrapped C version cannot? Additionally, my program then contains Lua twice, once linked into the shared library and once in the compiled C++ project (not optimal imo).
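A plausible explanation (an assumption based on Lua's own Makefile, where the Linux targets link the standalone interpreter with -Wl,-E) is that the lua binary exports the Lua API symbols compiled into it, so a C module loaded with dlopen can resolve luaL_checkversion_ against the interpreter executable, while a plainly linked C++ host keeps those symbols private. Under that assumption, relinking the host with the same export flag should be enough, without building a second copy of Lua into the .so:
g++ call_testlib.cpp -o ./call_testlib -I./lua-5.4.4/src -L./lua-5.4.4/src -llua -Wl,-E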

What is the difference between lua 5.0.2 modules and 5.3.5?

I am trying to write some C++ code for Lua. At first I used the latest version, 5.3.5, and I was able to register some new functions. But the final program that I want to write the code for uses 5.0.2. After I compiled the old source and built the DLL with Lua 5.0.2, require cannot read the file:
Lua 5.0.2 Copyright (C) 1994-2004 Tecgraf, PUC-Rio
> require("remaster_IO.dll")
stdin:1: error loading package `remaster_IO.dll' (remaster_IO.dll:1: `=' expected near `É')
stack traceback:
[C]: in function `require'
stdin:1: in main chunk
[C]: ?
This is the DLL code:
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
__declspec(dllexport) int luaopen_remaster_IO(lua_State* L);
}
#define lua_register(L,n,f) \
(lua_pushstring(L, n), \
lua_pushcfunction(L, f), \
lua_settable(L, LUA_GLOBALSINDEX))
#include <iostream>
#include <Windows.h>
int rema_add1(lua_State* L) {
double d = luaL_checknumber(L, 1); // get item 1
lua_pushnumber(L, d + 1);
return 1; // number of items returned
}
int luaopen_remaster_IO(lua_State* L) {
Beep(200, 200);
std::cout << "Hello World!!!" << std::endl;
lua_register(L, "average", rema_add1);
return 1;
}
If you look at the documentation (scroll down to require, back then it didn't have per-function anchors yet…), you'll see that require in 5.0 indeed only loads Lua files. If you don't have access to loadlib, you're probably out of luck.
(There's even a good chance that they compiled Lua in such a way that dynamic linking of libraries is not supported at all – so even if you were aware of a bug that you could use to find, push & (re-)add loadlib's pointer as a Lua function value, that might not be enough…)

Error when using extern "C" to include a header in a C++ program

I am working on a school project which requires working with Sheepdog. Sheepdog provides a C API which enables you to connect to a Sheepdog server.
First I created a C source file (test.c) with the following content:
#include "sheepdog/sheepdog.h"
#include <stdio.h>
int main()
{
struct sd_cluster *c = sd_connect("192.168.1.104:7000");
if (!c) {
fprintf(stderr, "failed to connect %m\n");
return -1;
}else{
fprintf(stderr, "connected successfully %m\n");
}
return 0;
}
Then I compiled it with no errors using the following command:
gcc -o test test.c -lsheepdog -lpthread
But what I need is to use it in a C++ project, so I created a cpp file (test.cpp) with the following content:
extern "C"{
#include "sheepdog/sheepdog.h"
}
#include <stdio.h>
int main()
{
struct sd_cluster *c = sd_connect("192.168.1.104:7000");
if (!c) {
fprintf(stderr, "failed to connect %m\n");
return -1;
}else{
fprintf(stderr, "connected successfully %m\n");
}
return 0;
}
Now, when I compiled using the following command:
g++ -o test test.cpp -lsheepdog -lpthread
I got this error:
You can't just wrap extern "C" around a header and expect it to compile in a C++ program. For example, the header sheepdog_proto.h uses an argument named new; that's a keyword in C++, so there's no way that will compile as C++. The library was not designed to be called from C++.
I agree with @PeteBecker. From a quick look around Google, I am not sure there is an easy solution. Sheepdog is using C features and names that don't port well to C++. You might need to hack Sheepdog fairly extensively. For example:
move the inline functions out of sheepdog_proto.h into a new C file, leaving prototypes in their place. This should take care of the offsetof errors, e.g., discussed in this answer.
#define new not_a_keyword_new in sheepdog/sheepdog.h
and whatever other specific changes you have to make to get it to compile. More advice from the experts here.
As Sheepdog was not designed to be usable from C++, you should build a tiny wrapper in C to call the Sheepdog functions, and only call the wrapper from your C++ code. Some hints for writing such a wrapper:
void * is great for passing opaque pointers
extractor functions can help to access badly named members. If a struct has a member called new (of type T), you could write:
T getNew(void *otherstruct);   // declaration in the .h
and
T getNew(void *otherstruct) {  // implementation in a C file
    return ((ActualStruct *) otherstruct)->new;
}
Depending on the complexity of Sheepdog (I do not know it) and the part you want to use, this may or may not be an acceptable solution. But it is the way I would try when facing such a problem.
Anyway, the linker allows mixing modules compiled in C and in C++, either in static linking or dynamic linking.
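As a concrete illustration of that advice, here is a minimal sketch; the file names sd_wrapper.h/sd_wrapper.c and the sdw_connect name are invented for this example, and only sd_connect is taken from the question:
/* sd_wrapper.h - thin C wrapper, safe to include from C++ */
#ifdef __cplusplus
extern "C" {
#endif
void *sdw_connect(const char *addr);   /* wraps sd_connect, returns the cluster as an opaque pointer */
#ifdef __cplusplus
}
#endif

/* sd_wrapper.c - compiled with gcc, never with g++ */
#include "sheepdog/sheepdog.h"
#include "sd_wrapper.h"
void *sdw_connect(const char *addr) { return sd_connect(addr); }
The C++ code then includes only sd_wrapper.h, never sheepdog.h, and the two objects are linked together, e.g. g++ test.cpp sd_wrapper.o -o test -lsheepdog -lpthread.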

Neko DLLs in Haxe C++ target

I am trying to use Neko DLLs (written in C++) with the C++ target of Haxe. I am able to call the functions in Haxe but not able to pass values.
This is the C++ code:
value Hello(value h)
{
    cout << val_int(h);
    return val_int(1);
}
DEFINE_PRIM(Hello, 1);
This is the Haxe code -
class Main
{
    var load = cpp.Lib.loadLazy("ndll", "Hello", 1);
    static function main()
    {
        load(1);
    }
}
It executes only if the function does not take parameters. Also, the value that is returned from the C++ function to Haxe is null.
This code actually works perfectly when I compile for the neko target, but it doesn't seem to work with the cpp target.
Any help is appreciated.
Here's the fully corrected C++ code:
#define IMPLEMENT_API

/* Will be compatible with Neko on desktop targets. */
#if defined(HX_WINDOWS) || defined(HX_MACOS) || defined(HX_LINUX)
#define NEKO_COMPATIBLE
#endif

#include <hx/CFFI.h>
#include <stdio.h>

/* Your hello function. */
value hello(value h)
{
    printf("%i\n", val_int(h));
    return alloc_int(1);
}
DEFINE_PRIM(hello, 1);

/* Main entry point. */
extern "C" void mylib_main()
{
    // Initialization code goes here
}
DEFINE_ENTRY_POINT(mylib_main);
What's important is that every value given as an argument to a primitive or returned by a primitive must be of the type value. That's why your parameter and return didn't work.
val_int is used to convert a value into a native C type, so your printing code was correct. But your return was wrong: you can't return a C int type when the function expects you to return a value to Haxe. You need to create a new Haxe Int value and return it. This is done with the help of alloc_int.
Here's the Haxe part of the code as a reference:
class Main
{
    static var hello = cpp.Lib.load("myLib", "hello", 1);
    static function main()
    {
        var myReturnedInt:Int = hello(1);
    }
}
A few helpful links:
Neko C FFI
Neko FFI tutorial
CPP FFI notes
In order for this to work, you'll have to add the following at the top of your cpp file:
#define IMPLEMENT_API
#include <hx/CFFI.h>
(instead of neko's headers)
If you want the ndll to run on both neko and hxcpp, you should also add
#define NEKO_COMPATIBLE
before the hx/CFFI.h include.
You can compile using whatever is best for you, but I recommend using a Build.xml to generate your ndll, since it will automatically add the include and lib paths correctly for hxcpp's headers. You can see an example of a very simple Build.xml here:
http://pastebin.com/X9rFraYp
You can see more documentation about hxcpp's CFFI here: http://haxe.org/doc/cpp/ffi

Why does this dynamic library loading code work with gcc?

Background:
I've found myself with the unenviable task of porting a C++ GNU/Linux application over to Windows. One of the things this application does is search for shared libraries on specific paths and then load classes out of them dynamically using the POSIX dlopen() and dlsym() calls. We have a very good reason for doing the loading this way that I will not go into here.
The Problem:
To dynamically discover symbols generated by a C++ compiler with dlsym() or GetProcAddress(), they must be left unmangled by placing them in an extern "C" linkage block. For example:
#include <list>
#include <string>

using std::list;
using std::string;

extern "C" {
    list<string> get_list()
    {
        list<string> myList;
        myList.push_back("list object");
        return myList;
    }
}
This code is perfectly valid C++ and compiles and runs on numerous compilers on both Linux and Windows. It, however, does not compile with MSVC because "the return type is not valid C". The workaround we've come up with is to change the function to return a pointer to the list instead of the list object:
#include <list>
#include <string>

using std::list;
using std::string;

extern "C" {
    list<string>* get_list()
    {
        list<string>* myList = new list<string>();
        myList->push_back("ptr to list");
        return myList;
    }
}
I've been trying to find an optimal solution for the GNU/Linux loader that will either work with both the new functions and the old legacy function prototype, or at least detect when the deprecated function is encountered and issue a warning. It would be unseemly for our users if the code just segfaulted when they tried to use an old library. My original idea was to set a SIGSEGV signal handler during the call to get_list (I know this is icky - I'm open to better ideas). So, just to confirm that loading an old library would segfault where I thought it would, I ran a library using the old function prototype (returning a list object) through the new loading code (that expects a pointer to a list), and to my surprise it just worked. The question I have is why?
The loading code below works with both function prototypes listed above. I've confirmed that it works on Fedora 12, RedHat 5.5, and RedHawk 5.1 using gcc versions 4.1.2 and 4.4.4. Compile the libraries using g++ with -shared and -fPIC; the executable needs to be linked against dl (-ldl).
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <list>
#include <string>

using std::list;
using std::string;

int main(int argc, char **argv)
{
    void *handle;
    list<string>* (*getList)(void);
    char *error;

    handle = dlopen("library path", RTLD_LAZY);
    if (!handle)
    {
        fprintf(stderr, "%s\n", dlerror());
        exit(EXIT_FAILURE);
    }

    dlerror();
    *(void **) (&getList) = dlsym(handle, "get_list");
    if ((error = dlerror()) != NULL)
    {
        printf("%s\n", error);
        exit(EXIT_FAILURE);
    }

    list<string>* libList = (*getList)();
    for (list<string>::iterator iter = libList->begin();
         iter != libList->end(); iter++)
    {
        printf("\t%s\n", iter->c_str());
    }

    dlclose(handle);
    exit(EXIT_SUCCESS);
}
As aschepler says, it's because you got lucky.
As it turns out, the ABI used by gcc (and most other compilers) on both x86 and x64 returns 'large' structs (too big to fit in a register) by passing an extra 'hidden' pointer argument to the function; the function uses that pointer as the space to store the return value and then returns the pointer itself. So a function of the form
struct foo func(...)
is roughly equivalent to
struct foo *func(..., struct foo *)
where the caller is expected to allocate space for a 'foo' (probably on the stack) and pass in a pointer to it.
So it just happens that if you have a function that is expecting to be called this way (expecting to return a struct) and instead call it via a function pointer that returns a pointer, it MAY appear to work: if the garbage bits it gets for the extra argument (random register contents left there by the caller) happen to point to somewhere writable, the called function will happily write its return value there and then return that pointer, so the calling code gets back something that looks like a valid pointer to the struct it is expecting. The code may superficially appear to work, but it's actually probably clobbering a random bit of memory that may be important later.
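Given the original goal of warning on old-style libraries instead of crashing, one hedged approach, assuming the new (pointer-returning) plugins can be rebuilt, is to export an explicit marker symbol from them and have the loader refuse to call get_list when the marker is missing. The symbol name get_list_abi_version below is invented for this sketch, not part of the original code:
// In each rebuilt plugin, next to the pointer-returning get_list:
//     extern "C" int get_list_abi_version = 2;
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <list>
#include <string>

using std::list;
using std::string;

int main(int argc, char **argv)
{
    void *handle = dlopen("library path", RTLD_LAZY);
    if (!handle)
    {
        fprintf(stderr, "%s\n", dlerror());
        exit(EXIT_FAILURE);
    }

    // Old-style plugins (get_list returning the list by value) will not export
    // the marker, so warn and bail out instead of calling through the wrong type.
    if (dlsym(handle, "get_list_abi_version") == NULL)
    {
        fprintf(stderr, "warning: old-style library detected, get_list not called\n");
        dlclose(handle);
        exit(EXIT_FAILURE);
    }

    list<string>* (*getList)(void);
    *(void **) (&getList) = dlsym(handle, "get_list");
    if (getList == NULL)
    {
        fprintf(stderr, "%s\n", dlerror());
        dlclose(handle);
        exit(EXIT_FAILURE);
    }

    list<string>* libList = (*getList)();
    for (list<string>::iterator iter = libList->begin(); iter != libList->end(); ++iter)
        printf("\t%s\n", iter->c_str());

    delete libList;
    dlclose(handle);
    exit(EXIT_SUCCESS);
}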