How to add a property to a module in boost::python? - c++

You can add a property to a class using a getter and a setter (in a simplistic case):
class_<X>("X")
    .add_property("foo", &X::get_foo, &X::set_foo);
So then you can use it from python like this:
>>> x = mymodule.X()
>>> x.foo = 'aaa'
>>> x.foo
'aaa'
But how to add a property to a module itself (not a class)?
There is
scope().attr("globalAttr") = ??? something ???
and
def("globalAttr", ??? something ???);
I can add global functions and objects of my class using the above two ways, but can't seem to add properties the same way as in classes.

__getattr__ and __setattr__ aren't called on modules, so you can't do this in ordinary Python without hacks (like storing a class in the module dictionary). Given that, it's very unlikely there's an elegant way to do it in Boost Python either.

The boost.python/HowTo page on the Python Wiki has an example of exposing a C++ object as a module attribute inside BOOST_PYTHON_MODULE:
namespace bp = boost::python;
BOOST_PYTHON_MODULE(example)
{
    bp::scope().attr("my_attr") = bp::object(bp::ptr(&my_cpp_object));
}
To set the attribute outside of BOOST_PYTHON_MODULE use
bp::import("example").attr("my_attr") = bp::object(bp::ptr(&my_cpp_object));
Now, in Python, you can do something like:
from example import my_attr
Of course, you need to register the class of my_cpp_object in advance (e.g. you can do this inside the same BOOST_PYTHON_MODULE call) and ensure the C++ object's lifetime exceeds that of the Python module. You can use any bp::object instead of wrapping a C++ one.
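For instance, a minimal sketch combining both steps, assuming a hypothetical class MyCppClass and a static instance so its lifetime spans the whole program:
struct MyCppClass { int value; };          // hypothetical class to expose
static MyCppClass my_cpp_object = { 42 };  // static, so it outlives the Python module
BOOST_PYTHON_MODULE(example)
{
    // register the class first, then expose the existing instance as a module attribute
    bp::class_<MyCppClass>("MyCppClass", bp::no_init)
        .def_readwrite("value", &MyCppClass::value);
    bp::scope().attr("my_attr") = bp::object(bp::ptr(&my_cpp_object));
}
From Python, example.my_attr.value then reads the C++ object's field.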
Note that BOOST_PYTHON_MODULE swallows exceptions, so if you make a mistake you won't get any error indication; the BOOST_PYTHON_MODULE-generated function will just return immediately. To ease debugging of this case you can catch exceptions inside BOOST_PYTHON_MODULE, or temporarily add a logging statement as the last line of BOOST_PYTHON_MODULE to see whether it is reached:
BOOST_PYTHON_MODULE(example)
{
    bp::scope().attr("my_attr2") = new int(1); // this will compile
    std::cout << "init finished\n"; // OOPS, this line will not be reached
}
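The other option mentioned above is to catch the exception inside the module body yourself; a rough sketch that at least prints the Python error instead of losing it:
BOOST_PYTHON_MODULE(example)
{
    try
    {
        bp::scope().attr("my_attr2") = new int(1); // still wrong, but now we can see why
    }
    catch(const bp::error_already_set&)
    {
        PyErr_Print(); // dump the pending Python exception to stderr
    }
}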

Related

boost::python hybrid embedding/exposing: how do I get globals() and see my own module?

I'm using boost::python to do a hybrid C++/python application: the C++ app calls a collection of python scripts, which in turn use the C++ program's functions, classes, etc., exposed as python objects. (Python 2.x.)
BOOST_PYTHON_MODULE(MyModule) exposes the C++ to python as expected.
My initialization code:
Py_Initialize();
initMyModule(); // import MyModule
namespace bpl = boost::python;
Now I want my C++ code to get at MyModule, too. In python, you just write globals()['MyModule']. But this (and things like it) don't work in C++:
bpl::object globals = bpl::eval("globals()");
This fails at run-time with
File "<string>", line 1, in <module>; NameError: name 'globals' is not defined
As an aside, I see many examples of setting up __main__ like this:
bpl::object m = bpl::import("__main__");
bpl::dict g = m.attr("__dict__"); // like locals(), but not globals()
This doesn't fail, and gives locals, but according to the Py_Initialize docs, __main__ is already set up. And it doesn't let you see globals, where you'd find your imported module.
You don't need the explicit bpl::import("__main__");.
Here are the globals:
bpl::dict globals()
{
    bpl::handle<> mainH(bpl::borrowed(PyImport_GetModuleDict()));
    return bpl::extract<bpl::dict>(bpl::object(mainH));
}
Since everything is managed by smart pointers, returning and manipulating bpl::dict directly works fine.
bpl::object myMod = globals()["MyModule"];
globals()["myNewGlobal"] = 88;

Expose C++ dynamically in boost.python

I'm wondering if boost.python allows C++ functionality to be exposed to python after the module has loaded. For example it would be nice if something like this might work:
#include <boost/python.hpp>
int a;
void expose_var() {
    boost::python::scope().attr( "a" ) = a;
}
BOOST_PYTHON_MODULE( mod )
{
    boost::python::def( "expose_var", expose_var, "Expose an attribute." );
}
Then in python:
import mod
mod.expose_var()
mod.a = 2
With similar code I'm getting an error when I call the equivalent of expose_var():
AttributeError: 'NoneType' object has no attribute '__dict__'
I want to do this as I'm exposing a C++ library that is heavily templated and I don't wish to expose every possible combination of template parameters by default. I'd like to let the python client ask for specific combinations to be exposed at runtime.
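The 'NoneType' in that error is what boost::python::scope() typically yields once module initialisation has finished, so setting attributes through it later fails. A rough sketch of one workaround, assuming the module name mod from above, is to look the module up at call time instead:
void expose_var() {
    // fetch the already-imported module and set the attribute on it directly
    boost::python::import( "mod" ).attr( "a" ) = a;
}
After import mod and mod.expose_var() in Python, mod.a should then be visible.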

Boost.Python - Passing boost::python::object as argument to python function?

So I'm working on a little project in which I'm using Python as an embedded scripting engine. So far I've not had much trouble with it using boost.python, but there's something I'd like to do with it if it's possible.
Basically, Python can be used to extend my C++ classes by adding functions and even data values to the class. I'd like to be able to have these persist in the C++ side, so one python function can add data members to a class, and then later the same instance passed to a different function will still have them. The goal here being to write a generic core engine in C++, and let users extend it in Python in any way they need without ever having to touch the C++.
So what I thought would work was to store a boost::python::object in the C++ class as a self value, and when calling into Python from C++, pass that Python object through boost::python::ptr(), so that modifications on the Python side would persist back to the C++ class. Unfortunately, when I try this, I get the following error:
TypeError: No to_python (by-value) converter found for C++ type: boost::python::api::object
Is there any way of passing an object directly to a python function like that, or any other way I can go about this to achieve my desired result?
Thanks in advance for any help. :)
Got this fantastic solution from the c++sig mailing list.
Implement a std::map<std::string, boost::python::object> in the C++ class, then expose __getattr__() and __setattr__() so they read from and write to that std::map. Then just send the object to Python with boost::python::ptr() as usual; there's no need to keep a boost::python::object around on the C++ side or send one to Python. It works perfectly.
Edit: I also found I had to override the __setattr__() function in a special way, as it was breaking things I added with add_property(). Those things worked fine when getting them, since Python checks a class's attributes before calling __getattr__(), but there's no such check with __setattr__(); it just gets called directly. So I had to make some changes to turn this into a full solution. Here's the full implementation of the solution:
First create a global variable:
boost::python::object PyMyModule_global;
Create a class as follows (with whatever other information you want to add to it):
class MyClass
{
public:
    // Python checks the class attributes before it calls __getattr__, so we don't have to do anything special here.
    boost::python::object Py_GetAttr(std::string str)
    {
        if(dict.find(str) == dict.end())
        {
            std::string msg = "MyClass instance has no attribute '" + str + "'";
            PyErr_SetString(PyExc_AttributeError, msg.c_str());
            throw boost::python::error_already_set();
        }
        return dict[str];
    }
    // However, with __setattr__, Python doesn't check the class attributes first, it just calls __setattr__.
    // Which means anything that's been defined as a class attribute won't be modified here - including things set with
    // add_property(), def_readwrite(), etc.
    void Py_SetAttr(std::string str, boost::python::object val)
    {
        try
        {
            // First we check to see if the class has an attribute by this name.
            boost::python::object obj = PyMyModule_global["MyClass"].attr(str.c_str());
            // If so, we call the old cached __setattr__ function.
            PyMyModule_global["MyClass"].attr("__setattr_old__")(boost::python::ptr(this), str, val);
        }
        catch(boost::python::error_already_set &e)
        {
            // If it threw an exception, that means that there is no such attribute.
            // Put it on the persistent dict.
            PyErr_Clear();
            dict[str] = val;
        }
    }
private:
    std::map<std::string, boost::python::object> dict;
};
Then define the python module as follows, adding whatever other defs and properties you want:
BOOST_PYTHON_MODULE(MyModule)
{
    boost::python::class_<MyClass>("MyClass", boost::python::no_init)
        .def("__getattr__", &MyClass::Py_GetAttr)
        .def("__setattr_new__", &MyClass::Py_SetAttr);
}
Then initialize python:
void PyInit()
{
    // Initialize module
    PyImport_AppendInittab( "MyModule", &initMyModule );
    // Initialize Python
    Py_Initialize();
    // Grab __main__ and its globals
    boost::python::object main = boost::python::import("__main__");
    boost::python::object global = main.attr("__dict__");
    // Import the module and grab its globals
    boost::python::object PyMyModule = boost::python::import("MyModule");
    global["MyModule"] = PyMyModule;
    PyMyModule_global = PyMyModule.attr("__dict__");
    // Overload MyClass's setattr, so that it will work with already defined attributes while persisting new ones
    PyMyModule_global["MyClass"].attr("__setattr_old__") = PyMyModule_global["MyClass"].attr("__setattr__");
    PyMyModule_global["MyClass"].attr("__setattr__") = PyMyModule_global["MyClass"].attr("__setattr_new__");
}
Once you've done all of this, changes made to the instance in Python will persist on the C++ side. Anything that's defined in C++ as an attribute will be handled properly, and anything that's not will be stored in the C++-side dict map instead of the class's __dict__.
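For completeness, a small sketch of the C++ side of the round trip after PyInit() has run; the Python function configure and the attribute name speed are just hypothetical examples:
// Assume a script run earlier defined, in __main__:
//     def configure(obj):
//         obj.speed = 5
boost::python::object main_ns = boost::python::import("__main__").attr("__dict__");
MyClass instance;
// Hand the instance over by pointer so Python-side attribute writes go through Py_SetAttr
main_ns["configure"](boost::python::ptr(&instance));
// The value Python stored now lives in the C++-side map and can be read back
int speed = boost::python::extract<int>(instance.Py_GetAttr("speed"));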

How can I extract a wrapped C++ type from a Python type using boost::python?

I've wrapped a C++ class using Py++ and everything is working great in Python. I can instantiate the c++ class, call methods, etc.
I'm now trying to embed some Python into a C++ application. This is also working fine for the most-part. I can call functions on a Python module, get return values, etc.
The python code I'm calling returns one of the classes that I wrapped:
import _myextension as myext
def run_script(arg):
    my_cpp_class = myext.MyClass()
    return my_cpp_class
I'm calling this function from C++ like this:
// ... excluding error checking, ref counting, etc. for brevity ...
PyObject *pModule, *pFunc, *pArgs, *pReturnValue;
Py_Initialize();
pModule = PyImport_Import(PyString_FromString("cpp_interface"));
pFunc = PyObject_GetAttrString(pModule, "run_script");
pArgs = PyTuple_New(1); PyTuple_SetItem(pArgs, 0, PyString_FromString("an arg"));
pReturnValue = PyObject_CallObject(pFunc, pArgs);
bp::extract< MyClass& > extractor(pReturnValue); // PROBLEM IS HERE
if (extractor.check()) { // This check is always false
    MyClass& cls = extractor();
}
The problem is the extractor never actually extracts/converts the PyObject* to MyClass (i.e. extractor.check() is always false).
According to the docs this is the correct way to extract a wrapped C++ class.
I've tried returning basic data types (ints/floats/dicts) from the Python function and all of them are extracted properly.
Is there something I'm missing? Is there another way to get the data and cast to MyClass?
I found the error. I wasn't linking my bindings into my main executable, because the bindings were compiled in a separate project that created only the Python extension.
I assumed that by loading the extension using pModule = PyImport_Import(PyString_FromString("cpp_interface")); the bindings would be loaded as well, but this is not the case.
To fix the problem, I simply added the files that contain my boost::python bindings (for me, just wrapper.cpp) to my main project and re-built.

How to reinitialise an embedded Python interpreter?

I'm working on embedding Python in our test suite application. The purpose is to use Python to run several test scripts to collect data and produce a test report. Multiple test scripts for one test run can create global variables and functions that can be used in the next script.
The application also provides extension modules that are imported in the embedded interpreter, and are used to exchange some data with the application.
But the user can also make multiple test runs. I don't want to share those globals, imports and the exchanged data between multiple test runs. I have to be sure I restart in a genuine state to control the test environment and get the same results.
How should I reinitialise the interpreter?
I used Py_Initialize() and Py_Finalize(), but I get an exception on the second run when initialising the extension modules I provide to the interpreter a second time.
And the documentation warns against using it more than once.
Using sub-interpreters seems to have the same caveats with extension modules initialization.
I suspect that I'm doing something wrong with the initialisation of my extension modules, but I fear that the same problem happens with 3rd party extension modules.
Maybe it's possible to get it to work by launching the interpreter in its own process, so as to be sure that all the memory is released.
By the way, I'm using boost::python for this, which also warns against using Py_Finalize!
Any suggestion?
Thanks
Here is another way I found to achieve what I want: starting with a clean slate in the interpreter.
I can control the global and local namespaces I use to execute the code:
// get the dictionary from the main module
// Get pointer to main module of python script
object main_module = import("__main__");
// Get dictionary of main module (contains all variables and stuff)
object main_namespace = main_module.attr("__dict__");
// define the dictionaries to use in the interpreter
dict global_namespace;
dict local_namespace;
// add the builtins
global_namespace["__builtins__"] = main_namespace["__builtins__"];
I can then use the namespaces when executing the code contained in pyCode:
exec( pyCode, global_namespace, local_namespace );
I can reset the namespaces when I want to run a new instance of my test by clearing the dictionaries:
// empty the interpreters namespaces
global_namespace.clear();
local_namespace.clear();
// Copy builtins to new global namespace
global_namespace["__builtins__"] = main_namespace["__builtins__"];
Depending on the level at which I want the execution to happen, I can pass the same dictionary as both the global and the local namespace.
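For example, a minimal sketch of that single-namespace variant, reusing pyCode and global_namespace from above:
// One dictionary serves as both the global and the local namespace
exec( pyCode, global_namespace, global_namespace );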
How about using code.InteractiveInterpreter?
Something like this should do it:
#include <boost/python.hpp>
#include <string>
#include <stdexcept>
using namespace boost::python;
std::string GetPythonError()
{
    PyObject *ptype = NULL, *pvalue = NULL, *ptraceback = NULL;
    PyErr_Fetch(&ptype, &pvalue, &ptraceback);
    std::string message("");
    if(pvalue && PyString_Check(pvalue)) {
        message = PyString_AsString(pvalue);
    }
    return message;
}
// Must be called after Py_Initialize()
void RunInterpreter(std::string codeToRun)
{
    object pymodule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
    object pynamespace = pymodule.attr("__dict__");
    try {
        // Initialize the embedded interpreter
        object result = exec( "import code\n"
                              "__myInterpreter = code.InteractiveConsole() \n",
                              pynamespace);
        // Run the code
        str pyCode(codeToRun.c_str());
        pynamespace["__myCommand"] = pyCode;
        result = eval("__myInterpreter.push(__myCommand)", pynamespace);
    } catch(const error_already_set&) {
        throw std::runtime_error(GetPythonError().c_str());
    }
}
I'd write another shell script executing the sequence of test scripts, starting a new instance of Python each time. Or write it in Python, something like:
import subprocess
# run your tests in this process first
# now run the user scripts, each in a new process so it gets a virgin environment
for script in userScript:
    subprocess.call(['python', script])