I want to pass a callback from my Python code to C++.
I want my code to look something like this:
In C++:
typedef void (*MyCallback_t) (CallbackInfo);
class MyClass
{...
void setcallback(MyCallback_t cb);
...
}
And to use it in Python:
import mylib
def myCallback(mylib_CallbackInfo):
    ...
t = mylib.MyClass()
t.setcallback(myCallback)
I saw some topics close to my problem but couldn't solve it with them.
For example here:
Realtime processing and callbacks with Python and C++ there is advice to use boost::python and a warning about the GIL, but no examples.
And here:
How to call a python function from a foreign language thread (C++) there is no full description of the Python code part or of the "BOOST_PYTHON_MODULE" part.
I also found a link to py_boost_function.hpp, for example in Boost python howto, but it didn't compile and I actually couldn't understand how to use it.
OK, I'm still trying to figure this out too, but here's what's working for me so far:
//This is the variable that will hold a reference to the Python function
PyObject *py_callback;
//The following function will be invoked from Python to populate the callback reference
PyObject *set_py_callback(PyObject *callable)
{
    Py_XINCREF(callable);       /* Keep the new callback alive */
    Py_XDECREF(py_callback);    /* Drop any previously stored callback */
    py_callback = callable;     /* Remember new callback */
    Py_RETURN_NONE;
}
...
//Initialize and acquire the global interpreter lock
PyEval_InitThreads();
//Ensure that the current thread is ready to call the Python C API
PyGILState_STATE state = PyGILState_Ensure();
//Invoke the Python function
boost::python::call<void>(py_callback);
//Release the global interpreter lock so other threads can resume execution
PyGILState_Release(state);
The python function is invoked from C++, and executes as expected.
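Since the question specifically mentioned the missing "BOOST_PYTHON_MODULE" part, here is a minimal sketch of how the registration side could look. This is my own variation rather than part of the answer above: it stores the callable in a boost::python::object instead of a raw PyObject*, and the module name mylib follows the question's sketch.
#include <boost/python.hpp>

boost::python::object py_callback_obj;      //holds a reference to the Python function

void set_callback(boost::python::object callable)
{
    py_callback_obj = callable;             //the object keeps the callable alive
}

void fire_callback()                        //call this from C++ when the event occurs
{
    PyGILState_STATE state = PyGILState_Ensure();
    py_callback_obj();                      //invoke the Python function
    PyGILState_Release(state);
}

BOOST_PYTHON_MODULE(mylib)
{
    boost::python::def("setcallback", &set_callback);
}
On the Python side this matches the usage sketched in the question: import mylib, define myCallback, and call mylib.setcallback(myCallback). For a setcallback method on MyClass, the same idea applies with boost::python::class_<MyClass>("MyClass").def("setcallback", ...) taking a boost::python::object parameter.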
These test files from the boost::python source code repository contain great examples of how to pass callbacks from Python into C++:
https://github.com/boostorg/python/blob/develop/test/callbacks.cpp
https://github.com/boostorg/python/blob/develop/test/callbacks.py
So I'm working on a little project in which I'm using Python as an embedded scripting engine. So far I've not had much trouble with it using boost.python, but there's something I'd like to do with it if it's possible.
Basically, Python can be used to extend my C++ classes by adding functions and even data values to the class. I'd like to be able to have these persist in the C++ side, so one python function can add data members to a class, and then later the same instance passed to a different function will still have them. The goal here being to write a generic core engine in C++, and let users extend it in Python in any way they need without ever having to touch the C++.
So what I thought would work was to store a boost::python::object in the C++ class as a self value, and when calling into Python from C++, I'd send that Python object through boost::python::ptr(), so that modifications on the Python side would persist back to the C++ class. Unfortunately when I try this, I get the following error:
TypeError: No to_python (by-value) converter found for C++ type: boost::python::api::object
Is there any way of passing an object directly to a python function like that, or any other way I can go about this to achieve my desired result?
Thanks in advance for any help. :)
Got this fantastic solution from the C++-SIG mailing list.
Implement a std::map<std::string, boost::python::object> in the C++ class, then overload __getattr__() and __setattr__() to read from and write to that std::map. Then just send it to Python with boost::python::ptr() as usual; there's no need to keep an object around on the C++ side or send one to Python. It works perfectly.
Edit: I also found I had to override the __setattr__() function in a special way, as it was breaking things I added with add_property(). Those things worked fine when getting them, since Python checks a class's attributes before calling __getattr__(), but there's no such check with __setattr__(); it just calls it directly. So I had to make some changes to turn this into a full solution. Here's the full implementation of the solution:
First create a global variable:
boost::python::object PyMyModule_global;
Create a class as follows (with whatever other information you want to add to it):
class MyClass
{
public:
    //Python checks the class attributes before it calls __getattr__ so we don't have to do anything special here.
    boost::python::object Py_GetAttr(std::string str)
    {
        if(dict.find(str) == dict.end())
        {
            PyErr_SetString(PyExc_AttributeError, JFormat::format("MyClass instance has no attribute '{0}'", str).c_str());
            throw boost::python::error_already_set();
        }
        return dict[str];
    }

    //However, with __setattr__, python doesn't do anything with the class attributes first, it just calls __setattr__.
    //Which means anything that's been defined as a class attribute won't be modified here - including things set with
    //add_property(), def_readwrite(), etc.
    void Py_SetAttr(std::string str, boost::python::object val)
    {
        try
        {
            //First we check to see if the class has an attribute by this name.
            boost::python::object obj = PyMyModule_global["MyClass"].attr(str.c_str());
            //If so, we call the old cached __setattr__ function.
            PyMyModule_global["MyClass"].attr("__setattr_old__")(ptr(this), str, val);
        }
        catch(boost::python::error_already_set &e)
        {
            //If it threw an exception, that means that there is no such attribute.
            //Put it on the persistent dict.
            PyErr_Clear();
            dict[str] = val;
        }
    }

private:
    std::map<std::string, boost::python::object> dict;
};
Then define the python module as follows, adding whatever other defs and properties you want:
BOOST_PYTHON_MODULE(MyModule)
{
    boost::python::class_<MyClass>("MyClass", boost::python::no_init)
        .def("__getattr__", &MyClass::Py_GetAttr)
        .def("__setattr_new__", &MyClass::Py_SetAttr);
}
Then initialize python:
void PyInit()
{
    //Initialize module
    PyImport_AppendInittab( "MyModule", &initMyModule );
    //Initialize Python
    Py_Initialize();

    //Grab __main__ and its globals
    boost::python::object main = boost::python::import("__main__");
    boost::python::object global = main.attr("__dict__");

    //Import the module and grab its globals
    boost::python::object PyMyModule = boost::python::import("MyModule");
    global["MyModule"] = PyMyModule;
    PyMyModule_global = PyMyModule.attr("__dict__");

    //Overload MyClass's setattr, so that it will work with already defined attributes while persisting new ones
    PyMyModule_global["MyClass"].attr("__setattr_old__") = PyMyModule_global["MyClass"].attr("__setattr__");
    PyMyModule_global["MyClass"].attr("__setattr__") = PyMyModule_global["MyClass"].attr("__setattr_new__");
}
Once you've done all of this, you'll be able to persist changes to the instance made in python over to the C++. Anything that's defined in C++ as an attribute will be handled properly, and anything that's not will be appended to dict instead of the class's __dict__.
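To illustrate the effect, here is a small usage sketch of mine (not from the original answer): the script functions tag and read are hypothetical, and PyInit() is assumed to have already been called.
MyClass instance;
boost::python::object global = boost::python::import("__main__").attr("__dict__");

//The script adds an attribute on the first call; because __setattr__ routes unknown
//names into the C++ std::map, the second call still sees it on the same instance.
boost::python::exec("def tag(obj):\n"
                    "    obj.note = 'set from Python'\n"
                    "def read(obj):\n"
                    "    print obj.note\n", global, global);

global["tag"](boost::python::ptr(&instance));
global["read"](boost::python::ptr(&instance));   //prints: set from Python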
I've wrapped a C++ class using Py++ and everything is working great in Python. I can instantiate the C++ class, call methods, etc.
I'm now trying to embed some Python into a C++ application. This is also working fine for the most part. I can call functions on a Python module, get return values, etc.
The Python code I'm calling returns one of the classes that I wrapped:
import _myextension as myext
def run_script(arg):
    my_cpp_class = myext.MyClass()
    return my_cpp_class
I'm calling this function from C++ like this:
// ... excluding error checking, ref counting, etc. for brevity ...
namespace bp = boost::python;
PyObject *pModule, *pFunc, *pArgs, *pReturnValue;
Py_Initialize();
pModule = PyImport_Import(PyString_FromString("cpp_interface"));
pFunc = PyObject_GetAttrString(pModule, "run_script");
pArgs = PyTuple_New(1); PyTuple_SetItem(pArgs, 0, PyString_FromString("an arg"));
pReturnValue = PyObject_CallObject(pFunc, pArgs);
bp::extract< MyClass& > extractor(pReturnValue); // PROBLEM IS HERE
if (extractor.check()) { // This check is always false
    MyClass& cls = extractor();
}
The problem is the extractor never actually extracts/converts the PyObject* to MyClass (i.e. extractor.check() is always false).
According to the docs this is the correct way to extract a wrapped C++ class.
I've tried returning basic data types (ints/floats/dicts) from the Python function and all of them are extracted properly.
Is there something I'm missing? Is there another way to get the data and cast to MyClass?
I found the error. I wasn't linking my bindings into my main executable, because the bindings were compiled in a separate project that created only the Python extension.
I assumed that by loading the extension using pModule = PyImport_Import(PyString_FromString("cpp_interface")); the bindings would be loaded as well, but this is not the case.
To fix the problem, I simply added the files that contain my boost::python bindings (for me, just wrapper.cpp) to my main project and re-built.
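For what it's worth, one way to make sure the bindings compiled into the executable are also the ones Python imports is to register the generated init function as a built-in module before initializing the interpreter. This sketch reflects my assumptions about the setup (Python 2 naming, extension module called _myextension), not something stated above.
extern "C" void init_myextension();   //generated by BOOST_PYTHON_MODULE(_myextension)

int main()
{
    PyImport_AppendInittab("_myextension", &init_myextension);   //must come before Py_Initialize()
    Py_Initialize();
    //... import cpp_interface, call run_script, and extract MyClass& as before ...
    return 0;
}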
My app has some events, and each event can have some actions. These actions are implemented in C++. I want to expose those core functions to Python and use Python to write the actions. The advantage is that I can modify actions without recompiling. For example:
CppClass o;
// --- this is an action ---
o.f1();
o.f2();
// ------------------------
Use Python to script the action:
def action1(o):
    o.f1()
    o.f2()
In C++, I use the interpreter to run this script, find action1, and call it with a PyObject converted from the C++ object. Actually, I have not exposed f1() and f2() to Python; I just use Python to regroup the definition of the action, and all the functions still run as C++ binary code. Notice that I do not have to give a definition of f1() and f2() in Python.
The problem is: how do I expose global functions? Such as:
def action2():
    gf1()
    gf2()
boost::python can expose functions, but that is different: it needs to compile a DLL file, and main() belongs to the Python script. Of course I could make the global functions static members of a class, but I just want to know. Notice that here I HAVE TO give a definition of gf1() and gf2() in Python.
Jython can do this easily: just import Xxx in the Python code and call Xxx.gf1(). But in CPython, how do I define gf1() in Python? This is a kind of extension, but an extension requires Xxx to be compiled ahead of time. It seems the only way is to put gf() into a class?
Solved. boost::python's documentation is really poor...
For example, to expose the function
void ff(int x, int y)
{
    std::cout << x << " " << y << std::endl;
}
to Python:
import hello
def foo(x, y):
    hello.ff(x, y)
you need to expose it as a module:
BOOST_PYTHON_MODULE(hello)
{
    boost::python::def("ff", ff, (boost::python::arg("x"), boost::python::arg("y")));
}
But this still is not a 'global function', so expose it to Python's main scope:
BOOST_PYTHON_MODULE(__main__)
{
    boost::python::def("ff", ff, (boost::python::arg("x"), boost::python::arg("y")));
}
then you can write:
def foo(x, y):
    ff(x, y)
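An alternative sketch (my own, not from the answer above): when embedding, the wrapped function can also be injected straight into the interpreter's __main__ namespace after Py_Initialize(), which likewise makes ff callable without a module prefix.
boost::python::object main_ns = boost::python::import("__main__").attr("__dict__");
main_ns["ff"] = boost::python::make_function(&ff);   //scripts can now call ff(x, y) directly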
You might also want to have a look at Cython.
Cython's main function is to translate (a subset of) Python code to C
or C++ code to build native code Python extensions. As a consequence,
it allows interfacing C/C++ code with a very terse and Python-ish
syntax.
Cython's user guide provides a
good example of how to call a simple C++ class from Python:
http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#declaring-a-c-class-interface
In addition to creating extensions, Cython can also generate C/C++
code that embeds the Python interpreter, so you do not need to build
and ship an external DLL. See details at: http://wiki.cython.org/EmbeddingCython
ffpython is a C++ lib that wraps the Python 2.x API. See the code repo.
Call a Python function: ffpython.call("fftest", "test_stl", a1, a2, a3);
Register a C++ class:
ffpython.reg_class<foo_t, PYCTOR(int)>("foo_t")
    .reg(&foo_t::get_value, "get_value")
    .reg(&foo_t::set_value, "set_value")
    .reg(&foo_t::test_stl, "test_stl")
    .reg_property(&foo_t::m_value, "m_value");
I'm working on embedding Python in our test suite application. The purpose is to use Python to run several test scripts to collect data and make a report of the tests. Multiple test scripts for one test run can create global variables and functions that can be used in the next script.
The application also provides extension modules that are imported in the embedded interpreter, and are used to exchange some data with the application.
But the user can also make multiple test runs. I don't want to share those globals, imports and exchanged data between test runs. I have to be sure I restart from a clean state to control the test environment and get the same results.
How should I reinitialise the interpreter?
I used Py_Initialize() and Py_Finalize(), but I get an exception on the second run when initialising the extension modules I provide to the interpreter a second time.
And the documentation warns against using it more than once.
Using sub-interpreters seems to have the same caveats with extension modules initialization.
I suspect that I'm doing something wrong with the initialisation of my extension modules, but I fear that the same problem happens with 3rd party extension modules.
Maybe it's possible to get it to work by launching the interpreter in its own process, so as to be sure that all the memory is released.
By the way, I'm using boost::python for this, which also warns AGAINST using Py_Finalize!
Any suggestion?
Thanks
Here is another way I found to achieve what I want: starting with a clean slate in the interpreter.
I can control the global and local namespaces I use to execute the code:
// get the dictionary from the main module
// Get pointer to main module of python script
object main_module = import("__main__");
// Get dictionary of main module (contains all variables and stuff)
object main_namespace = main_module.attr("__dict__");
// define the dictionaries to use in the interpreter
dict global_namespace;
dict local_namespace;
// add the builtins
global_namespace["__builtins__"] = main_namespace["__builtins__"];
I can then use the namespaces for execution of the code contained in pyCode:
exec( pyCode, global_namespace, local_namespace );
I can clean the namespaces when I want to run a new instance of my test by clearing the dictionaries:
// empty the interpreters namespaces
global_namespace.clear();
local_namespace.clear();
// Copy builtins to new global namespace
global_namespace["__builtins__"] = main_namespace["__builtins__"];
Depending on the level at which I want the execution to happen, I can use the same dictionary for globals and locals (global = local), as sketched below.
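A minimal illustration of that, assuming pyCode is a boost::python::str as in the snippet above:
//Use one dictionary for both globals and locals, so top-level definitions in pyCode
//behave like module-level names.
exec( pyCode, global_namespace, global_namespace );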
How about using code.InteractiveConsole?
Something like this should do it:
#include <boost/python.hpp>
#include <string>
#include <stdexcept>
using namespace boost::python;
std::string GetPythonError()
{
    PyObject *ptype = NULL, *pvalue = NULL, *ptraceback = NULL;
    PyErr_Fetch(&ptype, &pvalue, &ptraceback);
    std::string message("");
    if(pvalue && PyString_Check(pvalue)) {
        message = PyString_AsString(pvalue);
    }
    return message;
}

// Must be called after Py_Initialize()
void RunInterpreter(std::string codeToRun)
{
    object pymodule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
    object pynamespace = pymodule.attr("__dict__");

    try {
        // Initialize the embedded interpreter
        object result = exec( "import code\n"
                              "__myInterpreter = code.InteractiveConsole() \n",
                              pynamespace);
        // Run the code
        str pyCode(codeToRun.c_str());
        pynamespace["__myCommand"] = pyCode;
        result = eval("__myInterpreter.push(__myCommand)", pynamespace);
    } catch(const error_already_set &) {
        throw std::runtime_error(GetPythonError().c_str());
    }
}
I'd write another shell script executing the sequence of test scripts with new instances of Python each time. Or write it in Python, like:
import subprocess

# run your own tests in this process first
# now run the user scripts, each in a new process to get a fresh environment
for script in userScripts:
    subprocess.call(['python', script])
I'm currently writing an application that embeds the Python interpreter. The idea is to have the program call user-specified scripts on certain events in the program. I managed this part, but now I want the scripts to be able to call functions in my program.
Here's my code so far:
#include "python.h"
static PyObject* myTest(PyObject* self,PyObject *args)
{
return Py_BuildValue("s","123456789");
}
static PyMethodDef myMethods[] = {{"myTest",myTest},{NULL,NULL}};
int main()
{
    Py_Initialize();
    Py_InitModule("PROGRAM", myMethods);
    PyRun_SimpleString("print PROGRAM.myTest()");
    Py_Finalize();
    return 0;
}
Thanks!
You need to bind that function to some module, see http://docs.python.org/extending/embedding.html#extending-embedded-python
Edit:
Basically your code should work. What's not working?
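As a sketch of the next step this is heading towards, here is how a module function can accept arguments from the script; the function name myAdd is mine, not from the question, and the Python 2 C API is assumed as in the code above.
//Hypothetical example: a module function that takes two integers from Python.
static PyObject* myAdd(PyObject* self, PyObject* args)
{
    int a, b;
    if (!PyArg_ParseTuple(args, "ii", &a, &b))
        return NULL;                          //ParseTuple already raised a TypeError
    return Py_BuildValue("i", a + b);
}

//Add {"myAdd", myAdd, METH_VARARGS, NULL} to myMethods, then from the script:
//PyRun_SimpleString("print PROGRAM.myAdd(2, 3)");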