C++ middleware in a Python environment
I want to realize this structure:
App (Python) <-> Middleware (C++) <-> Driver (Python)
The app (Python) will access some middleware classes in C++, which then use a driver (Python).
App.py
import pybind_11_example as middleware_cpp
n = 40
print('C++:')
print('Answer:', middleware_cpp.testFunctionA_cpp(n))
Middleware.h
#include <pybind11/pybind11.h>

namespace py = pybind11;

class middleware
{
public:
    unsigned int testFunctionA(const unsigned int n);
};

PYBIND11_MODULE(pybind_11_example, mod) {
    mod.def("testFunctionA_cpp", &middleware::testFunctionA, "Middleware class.");
}
Middleware.cpp
#include "Middleware.h"
#include <stdio.h>
#include <pybind11/pybind11.h>
#include <pybind11/embed.h> // python interpreter
#include <pybind11/stl.h>   // type conversion

unsigned int middleware::testFunctionA(const unsigned int n)
{
    printf("Now Sent back to python driver interface\n");
    //py::scoped_interpreter guard{}; // start interpreter, dies when out of scope
    py::module driver = py::module::import("Driver");
    py::object result = driver.attr("setValue")("setting value from cpp to python driver");
    return 42;
}
Driver.py
class Driver:
    def __init__(self):
        print("Init Driver Class")
        self.name = "This is the Driver Class"

    def printMyClass(self):
        print(self.name)

    def setValue(self, value):
        print(value)
I have read the docs, but I did not get this example working, so I want to place it here as an example template in case others also want to try this approach.
Question 1:
How do I call the Python Driver class method "setValue" from C++?
The current implementation in Middleware.cpp that calls setValue leads to this error:
AttributeError: module 'Driver' has no attribute 'setValue'
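Presumably setValue has to be called on an instance of the Driver class rather than on the Driver module itself; a minimal, untested sketch of what that call could look like inside Middleware.cpp (same names as above):
// Import Driver.py, instantiate the Driver class, then call the method on the instance
py::module driver_module = py::module::import("Driver");
py::object driver = driver_module.attr("Driver")();   // calls Driver.__init__
py::object result = driver.attr("setValue")("setting value from cpp to python driver");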
Question 2:
Starting the interpreter a second time should not be needed, because it is assumed that the middleware is accessed from the running app process, which is already in Python.
py::scoped_interpreter guard{}; // start interpreter, dies when out of scope
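If I read the pybind11 embedding docs correctly, py::scoped_interpreter is only required when Python is embedded into a standalone C++ executable with its own main(); when the module is imported from App.py the interpreter is already running, so the guard stays commented out. For the standalone case it would look roughly like this (a sketch reusing the Driver names from above):
// embed_example.cpp - only a standalone C++ program needs to start the interpreter
#include <pybind11/embed.h>
namespace py = pybind11;

int main() {
    py::scoped_interpreter guard{};   // start interpreter, dies when out of scope
    py::module driver_module = py::module::import("Driver");
    py::object driver = driver_module.attr("Driver")();
    driver.attr("setValue")("called from an embedded interpreter");
    return 0;
}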
Best regards Holger
Related
I am using the PyTorch C++ extension to write some logic in C++, but I require the PyTorch loss function to be passed in from the Python code. My dummy C++ code is as follows:
#include <torch/extension.h>
#include <iostream>
namespace py = pybind11;
torch::Tensor calculateLoss(
torch::Tensor pred,
torch::Tensor target,
<pytorch_loss_fn> fn
){
return fn(pred,target);
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("calculateLoss", &calculateLoss, "Calculate Loss In C++");
}
And the python code triggering this,
import torch
import stack_cpp
from torch.nn import BCELoss
y_p = torch.Tensor([0.25,0.75])
y = torch.Tensor([0.0,1.0])
print(stack_cpp.calculateLoss(y_p,y,BCELoss()))
What is the pytorch way to achieve the same? What datatype should I use in place of <pytorch_loss_fn>?
If I were to pass an activation function too (say torch.Sigmoid()), what data type should I use?
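A sketch of one way this could be handled (not necessarily the canonical PyTorch way): since the extension is bound through pybind11, the callable can be accepted as a plain py::object and invoked from C++, with the result cast back to a tensor. The same signature would also accept an activation module such as torch.nn.Sigmoid(), since both are just Python callables:
#include <torch/extension.h>
namespace py = pybind11;

// Accept any Python callable (loss module, functional loss, activation, ...)
torch::Tensor calculateLoss(
    torch::Tensor pred,
    torch::Tensor target,
    py::object fn                 // e.g. BCELoss() passed from the Python side
){
    // Call the Python object with the tensors and convert the result back to a tensor
    return fn(pred, target).cast<torch::Tensor>();
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("calculateLoss", &calculateLoss, "Calculate Loss In C++ via a Python callable");
}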
I am trying to call a Python function from C++ code that contains a main() function, using pybind11. But I found very few references available; most existing documents talk about the reverse direction, i.e. calling C++ from Python.
Is there any complete example showing how to do that? The only reference I found is: https://github.com/pybind/pybind11/issues/30
But it has very little information.
The answer to your question really has two parts: one about calling a Python function from C++, the other about embedding the interpreter.
Calling a function in pybind11 is simply a matter of getting that function into a pybind11::object variable, on which you can invoke operator() to attempt to call the object. (It doesn't have to be a function, but just something callable: for example, it could also be an object with a __call__ method). For example, to call math.sqrt(2) from C++ code you'd use:
auto math = py::module::import("math");
auto resultobj = math.attr("sqrt")(2);
double result = resultobj.cast<double>();
or you could condense it all to just:
double result = py::module::import("math").attr("sqrt")(2).cast<double>();
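As a side note, the same call syntax also supports keyword arguments through the _a literal from pybind11::literals; a minimal sketch, assuming the interpreter is already running and math.isclose (Python 3.5+) is available:
using namespace pybind11::literals;   // provides the "_a" user-defined literal

// Equivalent to math.isclose(1.0, 1.0000001, rel_tol=1e-3) in Python
auto isclose = py::module::import("math").attr("isclose");
bool close = isclose(1.0, 1.0000001, "rel_tol"_a = 1e-3).cast<bool>();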
The second part of the question involves how to do this from a C++ executable. When building an executable (i.e. when your C++ code contains main()) you have to embed the Python interpreter in your binary before you can do anything with Python (like calling a Python function).
Embedded support is a new feature added in the current pybind11 master branch (which will become the 2.2 release). Here's a basic example that starts an embedded Python interpreter and calls a Python function (math.sqrt):
#include <pybind11/embed.h>
#include <iostream>
namespace py = pybind11;
int main() {
py::scoped_interpreter python;
auto math = py::module::import("math");
double root_two = math.attr("sqrt")(2.0).cast<double>();
std::cout << "The square root of 2 is: " << root_two << "\n";
}
Outputs:
The square root of 2 is: 1.41421
More examples and documentation of calling functions and embedding are available at http://pybind11.readthedocs.io/en/master/advanced/pycpp/object.html and http://pybind11.readthedocs.io/en/master/advanced/embedding.html, respectively.
Jason's answer is pretty much on point, but I want to add a slightly more complex (and cleaner) example calling a Python method with a NumPy input.
I want to showcase two points:
We can cast a py::object to a py::function using py::reinterpret_borrow<py::function>
We can input a std::vector that automatically gets converted to a numpy.array
Note that the user is responsible for making sure that the PyModule.attr is actually a Python function. Also note that the type conversion works for a wide variety of C++ types (see the pybind11 documentation for details).
In this example I want to use the method scipy.optimize.minimize with a starting point x0 that is provided from the C++ interface.
#include <iostream>
#include <vector>
#include <pybind11/pybind11.h>
#include <pybind11/embed.h> // python interpreter
#include <pybind11/stl.h> // type conversion
namespace py = pybind11;
int main() {
std::cout << "Starting pybind" << std::endl;
py::scoped_interpreter guard{}; // start interpreter, dies when out of scope
py::function min_rosen =
py::reinterpret_borrow<py::function>( // cast from 'object' to 'function' - use `borrow` (copy) or `steal` (move)
py::module::import("py_src.exec_numpy").attr("min_rosen") // import method "min_rosen" from python "module"
);
py::object result = min_rosen(std::vector<double>{1,2,3,4,5}); // automatic conversion from `std::vector` to `numpy.array`, imported in `pybind11/stl.h`
bool success = result.attr("success").cast<bool>();
int num_iters = result.attr("nit").cast<int>();
double obj_value = result.attr("fun").cast<double>();
}
with the python script py_src/exec_numpy.py
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der
def min_rosen(x0):
    res = minimize(rosen, x0)
    return res
Hope this helps someone!
project structure
CMakeLists.txt
calc.py
main.cpp
main.cpp
#include <pybind11/embed.h>
#include <iostream>
namespace py = pybind11;
using namespace py::literals;
int main() {
py::scoped_interpreter guard{};
// append source dir to sys.path, and python interpreter would find your custom python file
py::module_ sys = py::module_::import("sys");
py::list path = sys.attr("path");
path.attr("append")("..");
// import custom python class and call it
py::module_ tokenize = py::module_::import("calc");
py::type customTokenizerClass = tokenize.attr("CustomTokenizer");
py::object customTokenizer = customTokenizerClass("/Users/Caleb/Desktop/codes/ptms/bert-base");
py::object res = customTokenizer.attr("custom_tokenize")("good luck");
// show the result
py::list input_ids = res.attr("input_ids");
py::list token_type_ids = res.attr("token_type_ids");
py::list attention_mask = res.attr("attention_mask");
py::list offsets = res.attr("offset_mapping");
std::string message = "input ids is {},\noffsets is {}"_s.format(input_ids, offsets);
std::cout << message << std::endl;
}
calc.py
from transformers import BertTokenizerFast
class CustomTokenizer(object):
    def __init__(self, vocab_dir):
        self._tokenizer = BertTokenizerFast.from_pretrained(vocab_dir)

    def custom_tokenize(self, text):
        return self._tokenizer(text, return_offsets_mapping=True)


def build_tokenizer(vocab_dir: str) -> BertTokenizerFast:
    tokenizer = BertTokenizerFast.from_pretrained(vocab_dir)
    return tokenizer


def tokenize_text(tokenizer: BertTokenizerFast, text: str) -> dict:
    res = tokenizer(text, return_offsets_mapping=True)
    return dict(res)
CMakeLists.txt
cmake_minimum_required(VERSION 3.4)
project(example)
set(CMAKE_CXX_STANDARD 11)
# set pybind11 dir
set(pybind11_DIR /Users/Caleb/Softwares/pybind11)
find_package(pybind11 REQUIRED)
# set custom python interpreter(under macos)
link_libraries(/Users/Caleb/miniforge3/envs/py38/lib/libpython3.8.dylib)
add_executable(example main.cpp)
target_link_libraries(example PRIVATE pybind11::embed)
I've written part of a class in C++ and I want to be able to use it in conjunction with a Python GUI, so I'm using Boost.Python to try and make it easy. The issue I'm running into is that in following their guide (http://www.boost.org/doc/libs/1_55_0/libs/python/doc/tutorial/doc/html/python/exposing.html), I keep getting the following exception whenever I run bjam:
PacketWarrior/pcap_ext.cc:21:5: error: too few template arguments for class template 'class_'
Obviously it's complaining at me for omitting what they claim are optional arguments to the class_ class template, but I can't figure out why. I'm assuming it's a compiler issue but I don't know how to fix it. I'm running OS X 10.9 and using darwin as the default toolset, but GCC throws the same error. My Boost version is 1_55_0, if that helps at all.
Class header file (header guards omitted):
#include <queue>
#include "pcap.h"
#include "Packet.h"
class PacketEngine {
public:
PacketEngine();
~PacketEngine();
const char** getAvailableDevices(char *error_buf);
bool selectDevice(const char* dev);
Packet getNextPacket();
private:
char *selected_device;
char **devices;
int num_devices;
std::queue<Packet> packet_queue;
};
The cc file containing the references to Boost.Python and my class:
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>
#include "PacketEngine.h"
BOOST_PYTHON_MODULE(pcap_ext) {
using namespace boost::python;
class_<PacketEngine>("PacketEngine")
.def("getAvailableDevices", &PacketEngine::getAvailableDevices);
}
And my bjam file (irrelevant parts and comments omitted):
use-project boost : ../../../Downloads/boost_1_55_0 ;
project
: requirements <library>/boost/python//boost_python
<implicit-dependency>/boost//headers
: usage-requirements <implicit-dependency>/boost//headers
;
python-extension pcap_ext : PacketWarrior/pcap_ext.cc ;
install convenient_copy
: pcap_ext
: <install-dependencies>on <install-type>SHARED_LIB <install-type>PYTHON_EXTENSION
<location>.
;
local rule run-test ( test-name : sources + )
{
import testing ;
testing.make-test run-pyd : $(sources) : : $(test-name) ;
}
run-test pcap : pcap_ext pcap.py ;
Any ideas as to how to circumvent this exception are greatly appreciated! I looked into the obvious route of just adding the optional parameters but I don't think they're relevant to my project. The class_ definition can be found here:
http://www.boost.org/doc/libs/1_37_0/libs/python/doc/v2/class.html
In short, include either:
boost/python.hpp: The Boost.Python convenience header file.
boost/python/class.hpp: The header that defines boost::python::class_.
The headers currently being included declare class_ (via def_visitor.hpp) without its default template arguments.
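For the original pcap_ext.cc, the fix is therefore limited to the include lines; a minimal sketch:
// pcap_ext.cc -- pull in the full definition of boost::python::class_
#include <boost/python.hpp>            // convenience header; or keep module.hpp/def.hpp and add:
// #include <boost/python/class.hpp>   // defines class_ with its default template arguments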
Also, trying to directly expose PacketEngine::getAvailableDevices() will likely present a problem:
It accepts a char* argument, but strings are immutable in Python.
There are no types that automatically convert to/from a const char** in Boost.Python.
It may be reasonable for a Python user to expect PacketEngine.getAvailableDevices() to return an iterable type containing Python strs, or throw an exception on error. This can be accomplished in a non-intrusive manner by writing a helper or auxiliary function that delegates to original function, but is exposed to Python as PacketEngine.getAvailableDevices().
Here is a complete example based on the original code:
#include <cstring>   // strcpy, strlen
#include <exception> // std::runtime_error
#include <boost/python.hpp>
namespace {
const char* devices_str[] = {
"device A",
"device B",
"device C",
NULL
};
} // namespace
class PacketEngine
{
public:
PacketEngine() : devices(devices_str) {}
const char** getAvailableDevices(char *error_buf)
{
// Mockup example to force an error on second call.
static bool do_error = false;
if (do_error)
{
strcpy(error_buf, "engine not responding");
}
do_error = true;
return devices;
}
private:
const char **devices;
};
/// @brief Auxiliary function for PacketEngine::getAvailableDevices that
/// provides a more Pythonic API. The original function accepts a
/// char* and returns a const char**. Both of these types are
/// difficult to use within Boost.Python, as strings are immutable
/// in Python, and Boost.Python is focused to providing
/// interoperability to C++, so the const char** type has no direct
/// support.
boost::python::list PacketEngine_getAvailableDevices(PacketEngine& self)
{
// Get device list and error from PacketEngine.
char error_buffer[256] = { 0 };
const char** devices = self.getAvailableDevices(error_buffer);
// On error, throw an exception. Boost.Python will catch it and
// convert it to a Python's exceptions.RuntimeError.
if (error_buffer[0])
{
throw std::runtime_error(error_buffer);
}
// Convert the c-string array to a list of Python strings.
namespace python = boost::python;
python::list device_list;
for (unsigned int i = 0; devices[i]; ++i)
{
const char* device = devices[i];
device_list.append(python::str(device, strlen(device)));
}
return device_list;
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<PacketEngine>("PacketEngine")
.def("getAvailableDevices", &PacketEngine_getAvailableDevices);
}
Interactive usage:
>>> import example
>>> engine = example.PacketEngine()
>>> for device in engine.getAvailableDevices():
... print device
...
device A
device B
device C
>>> devices = engine.getAvailableDevices()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: engine not responding
I'm currently developing a simulator that runs on a server and should display data in the browser.
For serving files, communication and things like that, I'd like to use Node.js. But, I'm not sure if it will perform as well as I'd want it to in the computation department, so I would like to develop the simulation part in C++.
The simulation is divided into separate "worlds", which all start with some initial parameters.
What is the best way to do this?
Well, V8 allows for C++ code to be called from JavaScript.
So you can have 3 parts of your code:
Normal C++, unaware of node.js and V8. This would be where World is.
Glue node.js/V8-C++ code, allowing JS to "see" parts of your World class.
Normal JavaScript code, which communicates with the C++ side via the "glue" layer
First, understand how V8 and C++ communicate. Google provides a guide for this: https://developers.google.com/v8/embed
Then, you need node.js specific glue. See http://www.slideshare.net/nsm.nikhil/writing-native-bindings-to-nodejs-in-c and http://syskall.com/how-to-write-your-own-native-nodejs-extension
From the slideshare link above:
#include <v8.h>
#include <node.h>
using namespace v8;
extern "C" {
static void init(Handle<Object> target) {}
NODE_MODULE(module_name, init)
}
We can expand that into something closer to what you want:
src/world.h
#ifndef WORLD_H_
#define WORLD_H_
class World {
public:
void update();
};
extern World MyWorld;
#endif
src/world.cpp
#include "world.h"
#include <iostream>
using std::cout;
using std::endl;
World MyWorld;
void World::update() {
cout << "Updating World" << endl;
}
src/bind.cpp
#include <v8.h>
#include <node.h>
#include "world.h"
using namespace v8;
static Handle<Value> UpdateBinding(const Arguments& args) {
HandleScope scope;
MyWorld.update();
return Undefined();
}
static Persistent<FunctionTemplate> updateFunction;
extern "C" {
static void init(Handle<Object> obj) {
v8::HandleScope scope;
Local<FunctionTemplate> updateTemplate = FunctionTemplate::New(UpdateBinding);
updateFunction = v8::Persistent<FunctionTemplate>::New(updateTemplate);
obj->Set(String::NewSymbol("update"), updateFunction->GetFunction());
}
NODE_MODULE(world, init)
}
demo/demo.js
var world = require('../build/Release/world.node');
world.update();
wscript
def set_options(opt):
    opt.tool_options("compiler_cxx")

def configure(conf):
    conf.check_tool("compiler_cxx")
    conf.check_tool("node_addon")

def build(bld):
    obj = bld.new_task_gen("cxx", "shlib", "node_addon")
    obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall"]
    # This is the name of our extension.
    obj.target = "world"
    obj.source = "src/world.cpp src/bind.cpp"
    obj.uselib = []
On Linux shell, some setup:
node-waf configure
To build, run:
node-waf
To test:
node demo/demo.js
Output:
Updating World
For example, I have a function in Python that I want to convert to C++ (or call from C++, but I don't want to depend on the Python interpreter).
A simple Python function:
//test.py
def my_sum(x,y):
    print "Hello World!"
    return x*x+y
I run shedskin and get:
//test.cpp
#include "builtin.hpp"
#include "test.hpp"
namespace __test__ {
str *__name__;
void __init() {
__name__ = new str("__main__");
}
} // module namespace
int main(int, char **) {
__shedskin__::__init();
__shedskin__::__start(__test__::__init);
}
//test.hpp
#ifndef __TEST_HPP
#define __TEST_HPP
using namespace __shedskin__;
namespace __test__ {
extern str *__name__;
} // module namespace
#endif
This is ugly code, my function my_sum is nowhere in it, and the code depends on "builtin.hpp". Is it possible to convert only the function?
or
I want to call the function from my C++ code, something like
int sum= py.my_sum(3,5);
how can I do this?
or
Maybe I can build a DLL or lib from the Python code that I can use in my C++ code?
notice the warning that shedskin gives for this program:
*WARNING* test.py:1: function my_sum not called!
It is also mentioned in the documentation that for compilation to work, a function should be called (directly or indirectly), as it's not possible to do type inference otherwise. How could the types of the arguments of my_sum be determined if there's not even a single call to it? :-)
adding this, for example:
if __name__ == '__main__':
    my_sum(1,1)
makes my_sum appear in the generated C++ code, which can potentially be called from another C++ program.