How to expose a C++ class to Python without building a module

I want to know if there is any way to expose a C++ class to Python without building an intermediate shared library.
Here is my desired scenario. For example, I have the following C++ class:
class toto
{
public:
    toto(int iValue1_, int iValue2_): iValue1(iValue1_), iValue2(iValue2_) {}
    int Addition(void) const {if (!this) return 0; return iValue1 + iValue2;}
private:
    int iValue1;
    int iValue2;
};
I would like to somehow convert this class (or an instance of it) to a PyObject* in order to send it as a parameter (args) to, for example, PyObject_CallObject:
PyObject* PyObject_CallObject(PyObject* wrapperFunction, PyObject* args)
On the other hand, on the Python side, I'll have a wrapper_function which gets the pointer to my C++ class (or its instance) as a parameter and calls its methods or uses its properties:
def wrapper_function(cPlusPlusClass):
    instance = cPlusPlusClass(4, 5)
    result = instance.Addition()
As you can see, I don't really need/want to have a separate shared library or to build a module with Boost.Python. All I need is a way to convert the C++ code to a PyObject and send it to Python. I cannot find a way to do that with the Python C API, Boost, or SWIG.

As far as I know, there is no easy way to accomplish this.
To extend Python with C++ with neither a module nor an intermediate library, it would require dynamically loading a library, then importing the functions. This approach is used by the ctypes module. To accomplish the same with C++, one would need to write a ctypes-like library that understood the C++ ABI for the target compiler(s).
To extend Python without introducing a module, an intermediate library could be created that provides a C API wrapping the C++ library. This intermediate library could then be used from Python through ctypes. While it does not provide the exact calling syntax and does introduce an intermediate library, it would likely be less effort than building a ctypes-like library that could interface directly with C++.
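For illustration, here is a minimal sketch of that intermediate-library approach applied to the toto class from the question (the wrapper function names, the header name toto.h, and the library name libtoto.so are made up for the example): the C API hides the C++ object behind an opaque pointer, and Python drives it through ctypes.
// toto_c_api.cpp -- build as the intermediate shared library, e.g.
//   g++ -shared -fPIC toto_c_api.cpp -o libtoto.so
#include "toto.h"   // hypothetical header containing the toto class above

extern "C" {
    void* toto_create(int v1, int v2) { return new toto(v1, v2); }
    int   toto_addition(void* handle) { return static_cast<toto*>(handle)->Addition(); }
    void  toto_destroy(void* handle)  { delete static_cast<toto*>(handle); }
}

// Python side (ctypes), shown as a comment to keep this in one block:
//   import ctypes
//   lib = ctypes.CDLL("./libtoto.so")
//   lib.toto_create.restype = ctypes.c_void_p
//   h = lib.toto_create(4, 5)
//   print(lib.toto_addition(ctypes.c_void_p(h)))   # prints 9
//   lib.toto_destroy(ctypes.c_void_p(h))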
However, if an intermediate library is going to be introduced, it may be worthwhile to use Boost.Python, SWIG, or some other C++/Python language binding tool. While many of these tools will introduce the extension via a module, they often provide cleaner calling conventions, better error checking in the binding process, and may be easier to maintain.

I found my answer. Actually, what I was searching for is pretty similar to this answer (thanks moooeeeep for his comment):
  Exposing a C++ class instance to a python embedded interpreter
The following C++ class (attention: a default constructor is mandatory):
class TwoValues
{
public:
    TwoValues(void): iValue1(0), iValue2(0) {}
    TwoValues(int iValue1_, int iValue2_): iValue1(iValue1_), iValue2(iValue2_) {}
    int Addition(void) const {if (!this) return 0; return iValue1 + iValue2;}
public:
    int iValue1;
    int iValue2;
};
can be exposed with Boost.Python using the following macro:
BOOST_PYTHON_MODULE(ModuleTestBoost)
{
    class_<TwoValues>("TwoValues")
        .def("Addition", &TwoValues::Addition)
        .add_property("Value1", &TwoValues::iValue1)
        .add_property("Value2", &TwoValues::iValue2);
}
On the other hand, I have a Python function defined in python_script.py which takes an instance of this class and does something with it. For example:
def wrapper_function(instance):
    result = instance.Addition()
    myfile = open(r"C:\...\testboostexample.txt", "w")
    output = 'First variable is {0}, second variable is {1} and finally the addition is {2}'.format(instance.Value1, instance.Value2, result)
    myfile.write(output)
    myfile.close()
Then, on the C++ side, I can call this function, passing the instance of my class along with it, like this:
Py_Initialize();
try
{
    TwoValues instance(5, 10);
    initModuleTestBoost();
    object python_script = import("python_script");
    object wrapper_function = python_script.attr("wrapper_function");
    wrapper_function(&instance);
}
catch (error_already_set)
{
    PyErr_Print();
}
Py_Finalize();
Advantages:
- I don't need to build any shared library or binary.
- As I'm using Boost, I don't need to worry about memory management and reference counting.
- I don't use a Boost shared pointer (boost::shared_ptr) to point to the instance of my class.

Program crashing with embedded Python/C++ code across DLL boundary in Windows

Sorry for the long post. I've searched around quite a bit and couldn't find an answer for this so here it goes:
I am developing a Python extension library using C++ (Boost.Python). For testing, we have a Python-based test harness, but I also want to add a separate C++ executable (e.g. using BoostUnitTest or similar) to further test the library, including functionality that is not directly exposed to Python.
I am currently running this on Linux without problems. I build the library, which is then dynamically linked to an executable that uses BoostUnitTest. Everything compiles and runs as expected.
On Windows, though, I'm having problems. I think it might be a problem with the registration of the C++->Python type converters across DLL boundaries.
To show the problem I have the following example:
In my library I have defined:
namespace bp = boost::python;
namespace bn = boost::numpy;

class DLL_API DummyClass
{
public:
    static std::shared_ptr<DummyClass> Create()
    {
        return std::make_shared<DummyClass>();
    }
    static void RegisterPythonBindings();
};

void DummyClass::RegisterPythonBindings()
{
    bp::class_<DummyClass>("DummyClass", bp::init<>());
    bp::register_ptr_to_python< std::shared_ptr<DummyClass> >();
}
where DLL_API is the usual __declspec(…) for Windows. The idea is that this dummy class would be exported as part of a bigger Python module with
BOOST_PYTHON_MODULE(module)
{
    DummyClass::RegisterPythonBindings();
}
From within the executable linking to the library I have (omitting includes, etc):
int main()
{
    Py_Initialize();
    DummyClass::RegisterPythonBindings();
    auto myDummy = DummyClass::Create();
    auto dummyObj = bp::object( myDummy );
}
The last line where I wrap myDummy within a boost::python::object crashes with an unhandled exception in Windows. The exception is being thrown from Python (throw_error_already_set). I believe (but could be wrong) that it is not finding an appropriate converter of the C++ type to Python, even though I made the call to register the bindings.
KernelBase.dll!000007fefd91a06d()
msvcr110.dll!000007fef7bde92c()
TestFromMain.exe!boost::python::throw_error_already_set(void)
TestFromMain.exe!boost::python::converter::registration::to_python(void const volatile *)
TestFromMain.exe!boost::python::converter::detail::arg_to_python_base::arg_to_python_base(void const volatile *,struct boost::python::converter::registration const &)
TestFromMain.exe!main() Line 66
TestFromMain.exe!__tmainCRTStartup()
kernel32.dll!0000000077a259cd()
ntdll.dll!0000000077b5a561()
As a test, I copied the exact same code defining the DummyClass all inside the executable just before the main function, instead of linking to the dll, and this works as expected.
Is my model of compiling as a DLL using embedded Python on both sides of the boundary even possible on Windows? (This is only used for a testing harness, so I'd always use the exact same toolchain throughout.)
Thanks very much.
In case anyone ever reads this again: the solution on Windows was to compile Boost as dynamic libraries and link everything dynamically. We had to change the structure of our code a bit, but it now works.
There is a (small) reference in the Boost documentation stating that on Windows the dynamic-library version of Boost has one common registry of types used for conversion between Python and C++. The doc doesn't mention that there is no common registry for the static-library version (but I now know it doesn't work).
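For anyone debugging a similar setup, here is a small diagnostic sketch. It assumes Boost.Python's converter registry internals (converter::registry::query and the registration::m_to_python member) and the DummyClass from above; call it on the executable's side after RegisterPythonBindings() to see whether the to-python converter is actually visible there:
#include <boost/python.hpp>
#include <iostream>

void CheckDummyRegistration()
{
    namespace bp = boost::python;
    // If m_to_python is still null here, the registration went into a different
    // converter registry -- the symptom of statically linking Boost.Python on
    // each side of the DLL boundary.
    const bp::converter::registration* reg =
        bp::converter::registry::query(bp::type_id<DummyClass>());
    if (!reg || !reg->m_to_python)
        std::cerr << "No to-python converter for DummyClass in this module\n";
}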

Is the python C API entirely compatible with C++?

As I understand the relationship between C and C++, the latter is essentially an extension of the former and retains a certain degree of backwards compatibility. Is it safe to assume that the Python C API can be called from C++ code?
More to the point, I notice that the official Python documentation bundles C and C++ extensions together on the same page. Nowhere am I able to find a separate C++ API. This leads me to believe that the same API is safe to use in both languages.
Can someone confirm or deny this?
EDIT:
I think I made my question much more complicated than it needs to be. The question is this: what must I do in order to write a Python module in C++? Do I just follow the same directions as listed here, substituting C++ code for C? Is there a separate API?
I can confirm that the same Python C API is safe to use in both languages, C and C++.
However, it is difficult to give you a more detailed answer unless you ask a more specific question. There are numerous caveats and issues you should be aware of. For example, your Python extension types are defined as C structs, not as C++ classes, so don't expect their constructors/destructors to be implicitly defined and called.
For example, taking the sample code from Defining New Types in the Python manual, it can be written in a C++ way and you can even blend in C++ types:
// noddy.cpp
#include <Python.h>
#include <memory>

namespace {

struct noddy_NoddyObject
{
    PyObject_HEAD
    // Type-specific fields go here.
    std::shared_ptr<int> value; // WARNING
};

PyObject* Noddy_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    try {
        noddy_NoddyObject *self = (noddy_NoddyObject *)type->tp_alloc(type, 0);
        if (self) {
            self->value = std::make_shared<int>(7);
            // or more complex operations that may throw
            // or extract complex initialisation as a Noddy_init function
            return (PyObject *)self;
        }
    }
    catch (...) {
        // do something, log, etc.
    }
    return 0;
}

PyTypeObject noddy_NoddyType =
{
    PyObject_HEAD_INIT(NULL)
    // ...
};

} // unnamed namespace
But neither the constructor nor the destructor of the std::shared_ptr will be called.
So remember to define a dealloc function for your noddy_NoddyType in which you reset value to nullptr. Why even bother defining value as a shared_ptr, you may ask? It is useful if you use your Python extension from C++, with exceptions, to avoid type conversions and casts, to have more seamless integration inside the definitions of your implementation; exception-based error handling may be easier then, etc.
And in spite of the fact that your objects of noddy_NoddyType are managed by machinery implemented in pure C, thanks to the dealloc function the value will be released according to the well-known RAII rules.
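A minimal sketch of such a dealloc function, following the Noddy example above (the function name is only illustrative), could look like this; it would be installed in the tp_dealloc slot of noddy_NoddyType:
void Noddy_dealloc(noddy_NoddyObject* self)
{
    // Release the managed object explicitly; the shared_ptr's own destructor
    // will never run, because the memory is freed by tp_free rather than by delete.
    self->value.reset();
    Py_TYPE(self)->tp_free(reinterpret_cast<PyObject*>(self));
}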
Here you can find an interesting example of nearly seamless integration of the Python C API with the C++ language: How To catch Python stdout in c++ code
The Python C API can be called from C++ code.
Python C++ extensions are written using the same C API that C extensions use, or using some third-party API such as boost::python.
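To make that concrete, here is a minimal sketch of a module written in C++ against the same C API (Python 2-era calls, to match the Noddy example above; the module and function names are invented). It is built exactly like a C extension, just compiled with a C++ compiler:
// hello_cpp.cpp
#include <Python.h>
#include <string>

static PyObject* greet(PyObject* /*self*/, PyObject* args)
{
    const char* name = 0;
    if (!PyArg_ParseTuple(args, "s", &name))
        return 0;
    std::string message = std::string("Hello, ") + name;  // ordinary C++ inside
    return PyString_FromString(message.c_str());
}

static PyMethodDef hello_methods[] = {
    {"greet", greet, METH_VARARGS, "Return a greeting."},
    {0, 0, 0, 0}
};

// PyMODINIT_FUNC already carries extern "C" when compiled as C++,
// so the interpreter can locate the init symbol by its unmangled name.
PyMODINIT_FUNC inithello_cpp(void)
{
    Py_InitModule("hello_cpp", hello_methods);
}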

Convert C++ Syntax to Objective C

My background is in C/C++/C#.
I am using a C++ library in an Xcode project (to be specific, the library is PJSIP). To use the library I have to wire a couple of callbacks to my code like this:
SipTest.m
#include <pjsua-lib/pjsua.h>

static void on_reg_state(pjsua_acc_id acc_id)
{
    // Do work
}

static void Init()
{
    // pjsua_config and pjsua_config_default are defined in the header file from pjsip
    pjsua_config cfg;
    pjsua_config_default(&cfg);
    cfg.cb.on_regstate = &on_reg_state;
}
I want to switch this C++ syntax to Objective-C, so I did:
+(void) on_reg_state:(pjsua_acc_id) acc_id
{
    // Do work
}

+(void) Init
{
    pjsua_config cfg;
    pjsua_config_default(&cfg);
    cfg.cb.on_regstate = &on_reg_state; // ***** this is causing a compile error
    // I tried [CLASS NAME on_reg_state] and I get a runtime error
}
I tried to search for delegates in Objective-C, but I could not find a similar case where the callback is already implemented in C++ and you want to use it with Objective-C syntax.
Thanks
First of all, there's absolutely no need to convert anything at all. It is perfectly fine to call C++ libraries from Objective-C.
Secondly, what's causing the compiler error is that you're trying to put a method where there should be a function pointer. You can't make a function pointer out of an Objective-C method using the & operator. Simply keep your on_reg_state() function and use it as you did before; that's how you do callbacks in Apple's C-based frameworks, too (which you'll need as soon as you move beyond what the high-level Objective-C APIs provide).
And thirdly, your + (void)Init method seems a bit strange. I would strongly discourage you from writing a method called Init (capitalized). If you intend to write an initializer, it should be - (id)init, i.e. lowercase and returning id. And don't forget to call the designated initializer of its superclass, check its return value, assign it to self, and return it at the end of the init method (see Implementing an Initializer in Apple's documentation if you're not familiar with that). And if your method is not an initializer, use a different name, e.g. - (void)createConfig.
In this case you'd want to use selectors.
+(void) on_reg_state:(pjsua_acc_id) acc_id
{
    // Do work
}

+(void) Init
{
    pjsua_config cfg;
    pjsua_config_default(&cfg);
    cfg.cb.on_regstate_selector = @selector(on_reg_state:);
    cfg.cb.target = self; // Self here is the class object in your 'Init' method, which is poorly named.
    // Use this like [cfg.cb.target performSelector:cfg.cb.on_regstate_selector withObject:...etc]
}

How to wrap an init/cleanup function in Boost python

I recently discovered the existence of Boost.Python and was astonished by its apparent simplicity. I wanted to give it a try and started to wrap an existing C++ library.
While wrapping the basic library API calls is quite simple (nothing special, just regular function calls and very common parameters), I don't know how to properly wrap the initialization/cleanup functions:
As it stands, my C++ library requires the caller to first call mylib::initialize() when the program starts, and to call mylib::cleanup() before it ends (actually there is also an initializer object that takes care of that, but it is probably irrelevant).
How should I wrap this using Boost.Python?
Forcing a Python user to call mymodule.initialize() and mymodule.cleanup() does not seem very Pythonic. Is there any way to do that automatically? Ideally, the call to initialize() would be made transparently when the module is imported, and the call to cleanup() would be made when the Python script ends.
Is there any way to do that? If not, what is the most elegant solution?
Thank you.
You could create a guard object and assign it to a hidden attribute of your module.
struct MyLibGuard
{
    MyLibGuard()  { mylib::initialize(); }
    ~MyLibGuard() { mylib::cleanup(); }
};

using namespace boost::python;

BOOST_PYTHON_MODULE(arch_lib)
{
    boost::shared_ptr<MyLibGuard> libGuard(new MyLibGuard());
    class_<MyLibGuard, boost::shared_ptr<MyLibGuard>, boost::noncopyable>("MyLibGuard", no_init);
    scope().attr("__libguard") = libGuard;
}
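An alternative sketch, assuming the mylib::initialize()/mylib::cleanup() functions from the question: call initialize() directly in the module's init function and register cleanup() with Python's atexit module, so the interpreter invokes it at shutdown.
#include <boost/python.hpp>

namespace bp = boost::python;

BOOST_PYTHON_MODULE(mymodule)
{
    mylib::initialize();  // runs once, when the module is first imported

    // Ask Python to call mylib::cleanup() when the interpreter exits.
    bp::object atexit = bp::import("atexit");
    atexit.attr("register")(bp::make_function(&mylib::cleanup));
}
Compared to the guard object, this hands the cleanup timing to Python's normal shutdown machinery (atexit handlers run before interpreter teardown), at the cost of requiring atexit to be importable while the module is loading.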

Dynamic cast returns null when a library with C++ Python extensions is used as a plugin on RHEL5

I have a library with C++ Python extensions (C++ calls Python, which in turn calls C++) using boost::python and the Python libraries (this is messy, but a lot of it is legacy), which works correctly when tested standalone. In particular, a certain dynamic_cast works correctly.
But when the library is packaged for use as a plugin with an external application on RHEL5 using gcc 4.1.2, the dynamic_cast returns NULL, resulting in the application not working as expected. On Windows (tested on Vista 64-bit using Visual Studio 2005 and 2008) it works fine. When I debugged using ddd, for instance, I was able to see that the pointer before casting has the right type_name (slightly mangled by the compiler, as is usual, I suppose?). Any specific debugging tips here would also help.
A reinterpret_cast solved the problem. While this will certainly be baulked at, I am at a loss about how to proceed, especially since this could be due to issues with the external app. It is a convoluted mess and almost seems futile, but if it can help, here is some sample code. The following C++ snippet creates a smart_handle to queue certain Python commands stored in the string input. The string IMPORT imports locations and definitions of some functions that are called by boost::python::exec(..) in the function py_api::execute_py_command:
boost::shared_ptr<my_base_class> processor(new my_derived_class());

std::map<std::string, smart_handle> context;
context.insert(std::make_pair<std::string, smart_handle>("default_queue",
    make_smart_handle(processor)));

const std::string py_command =
    IMPORT +
    "namesp.dialects.cpython.set_command_queue('default', default_queue)\n" +
    input;

if( !py_api::execute_py_command(py_command, context) ) {
    return false;
}
The make_smart_handle is defined as:
template <typename type_t>
const smart_handle make_smart_handle(const boost::shared_ptr<type_t>& ptr) {
    if( !ptr ) {
        return smart_handle();
    }
    return smart_handle(new detail::smart_handle_weak_impl<type_t>(ptr));
}
The function set_command_queue is defined in a Python __init__.py as:
import func1
import func2
import func3
import func4

COMMAND_QUEUE_MAP = {}

def set_command_queue(queue_name, object):
    COMMAND_QUEUE_MAP[queue_name] = object

def get_command_queue(queue_name = 'default'):
    return COMMAND_QUEUE_MAP[queue_name]
Now, the actual Python functions func1, func2, etc., defined in separate Python files, call C++ functions defined under the namespace namesp. The very first line of these C++ functions recovers the smart_handle to the queue with:
boost::shared_ptr<my_base_class> queue = smart_handle_cast<my_base_class>(handle).lock();
It is in the above function smart_handle_cast that the dynamic_cast is used, and it returns NULL when the library is used as a plugin in an external app. Using reinterpret_cast allows it to work correctly. The smart_handle_cast returns a const boost::weak_ptr. For those interested, here is the definition of the smart_handle_cast<..>() function:
template <typename type_t>
const boost::weak_ptr<type_t> smart_handle_cast(const smart_handle& handle, bool throw_if_failure) {
    if( !handle.is_valid() ) {
        if( throw_if_failure ) {
            throw smart_handle::bad_handle("Bad handle, attempting to access an invalid handle");
        }
        //-No throw version returns a non-initialized weak pointer
        return boost::weak_ptr<type_t>();
    }
    //-This line fails at run time and returns null.
    const detail::smart_handle_weak_impl<type_t>* casted =
        dynamic_cast<const detail::smart_handle_weak_impl<type_t>* >(handle.impl());
    if( !casted ) {
        if( throw_if_failure ) {
            throw smart_handle::bad_handle_cast("Bad handle cast, attempting to convert to incorrect pointee type");
        }
        //-No throw version returns a non-initialized weak pointer
        return boost::weak_ptr<type_t>();
    }
    return casted->pointee;
}
Take a look at this similar question and the GCC FAQ:
If you use dlopen to explicitly load code from a shared library, you must do several things. First, export global symbols from the executable by linking it with the "-E" flag (you will have to specify this as "-Wl,-E" if you are invoking the linker in the usual manner from the compiler driver, g++). You must also make the external symbols in the loaded library available for subsequent libraries by providing the RTLD_GLOBAL flag to dlopen. The symbol resolution can be immediate or lazy.
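As a concrete illustration of that advice (the plugin library name is hypothetical), the host executable would be linked with -Wl,-E and would load the plugin roughly like this:
// host.cpp -- build with:  g++ -Wl,-E -o host host.cpp -ldl
#include <dlfcn.h>
#include <cstdio>

int main()
{
    // RTLD_GLOBAL makes the plugin's symbols (including its typeinfo/vtables)
    // visible to libraries loaded later, which is what lets dynamic_cast
    // match the same type on both sides of the boundary.
    void* plugin = dlopen("./libmyplugin.so", RTLD_NOW | RTLD_GLOBAL);
    if (!plugin) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // ... resolve entry points with dlsym() and run the plugin ...
    dlclose(plugin);
    return 0;
}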