I am currently embedding Python in C++ using boost-python and boost-numpy.
I have the following Python test script:
import numpy as np
import time
def test_qr(m,n):
    print("create numpy array")
    A = np.random.rand(m, n)
    print("Matrix A is {}".format(A))
    print("Lets QR factorize this thing! Mathematics is great !!")
    ts = time.time()
    Q, R = np.linalg.qr(A)
    te = time.time()
    print("It took {} seconds to factorize A".format(te - ts))
    print("The Q matrix is {}".format(Q))
    print("The R matrix is {}".format(R))
    return Q,R

def sum(m,n):
    return m+n
I am able to execute a part of the code in C++ like this:
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
#include <iostream>

namespace p = boost::python;
namespace np = boost::python::numpy;

int main() {
    Py_Initialize();   // initialize Python environment
    np::initialize();  // initialize numpy environment
    p::object main_module = p::import("__main__");
    p::object main_namespace = main_module.attr("__dict__");
    // execute code in the main_namespace
    p::exec_file("/Users/Michael/CLionProjects/CythonTest/test_file.py", main_namespace); // loads the Python script
    p::exec("m = 100\n"
            "n = 100\n"
            "Q,R = test_qr(m,n)", main_namespace);
    np::ndarray Q_matrix = p::extract<np::ndarray>(main_namespace["Q"]); // extract results as numpy array types
    np::ndarray R_matrix = p::extract<np::ndarray>(main_namespace["R"]);
    std::cout << "C++ Q Matrix: \n" << p::extract<char const *>(p::str(Q_matrix)) << std::endl; // print the matrices as strings
    std::cout << "C++ R Matrix: \n" << p::extract<char const *>(p::str(R_matrix)) << std::endl;
    std::cout << "code also works with numpy, ask for a raise" << std::endl;
    p::object sum = main_namespace.attr("sum")(10, 10);
    int result = p::extract<int>(main_namespace.attr("sum")(10, 10));
    std::cout << "sum result works " << result << std::endl;
    return 0;
}
Now I am trying to use the sum function from the Python script, but I do not always want to write a string like:
p::exec("m = 100\n"
"n = 100\n"
"Q,R = test_qr(m,n)", main_namespace);}
How can this be done without using the exec function?
I have tried things like:
p::object sum = main_namespace.attr("sum")(10,10);
int result = p::extract<int>(main_namespace.attr("sum")(10,10));
std::cout<<"sum result works " << result << std::endl;
As mentioned in the Boost documentation.
I also tried using the call_method function, but it didn't work.
I get either a boost::python::error_already_set exception, which means something went wrong on the Python side (though I do not know what), or an exit code 11.
The issue is rather trivial. Let's look at the tutorial you mention:
object main_module = import("__main__");
object main_namespace = main_module.attr("__dict__");
object ignored = exec("result = 5 ** 2", main_namespace);
int five_squared = extract<int>(main_namespace["result"]);
Notice how they extract the result object in the last line: main_namespace["result"]
The main_namespace object is a Python dictionary, and rather than extracting its attribute, you are simply looking up the value stored under a particular key. Hence, indexing with [] is the way to go.
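In other words, a minimal sketch of the difference (my own illustration, using the bp alias from the answer code below and the main_namespace dictionary from the question):

// A dict has no attribute named "sum", so attr() raises AttributeError,
// which surfaces in C++ as error_already_set; [] does a key lookup instead.
bp::object sum_fn = main_namespace["sum"];        // works: dictionary key lookup
// bp::object bad = main_namespace.attr("sum");   // fails: attribute lookup on the dict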
C++ code:
#define BOOST_ALL_NO_LIB
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
#include <iostream>

namespace bp = boost::python;

int main()
{
    try {
        Py_Initialize();
        bp::object module = bp::import("__main__");
        bp::object globals = module.attr("__dict__");
        bp::exec_file("bpcall.py", globals);

        bp::object sum_fn = globals["sum"];
        int result = bp::extract<int>(sum_fn(1, 2));
        std::cout << "Result (C++) = " << result << "\n";
    } catch (bp::error_already_set const &) {
        PyErr_Print();
    }
    Py_Finalize();
}
Python script:
def sum(m,n):
    return m+n
Output:
Result (C++) = 3
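The same key-lookup approach also covers the test_qr function from the original question without any bp::exec call. Here is a minimal sketch of my own (not part of the original answer), assuming the question's test_file.py is the script loaded with exec_file and that the numpy component is initialized (np::initialize() and the np alias from the question's code):

bp::object qr_fn = globals["test_qr"];
bp::object qr = qr_fn(100, 100);                      // returns the (Q, R) tuple
np::ndarray Q = bp::extract<np::ndarray>(qr[0]);      // unpack the tuple elements
np::ndarray R = bp::extract<np::ndarray>(qr[1]);
std::cout << bp::extract<char const *>(bp::str(Q)) << std::endl;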
How can I make a C++ application work with a browser? I mean a program that retrieves data from a given page (let's assume the page displays a string) and then performs some reaction on that page. For example, the page displays a random string, and the program enters the length of the string into a form.
I am a novice programmer, so I would appreciate information and advice on where to start. Thanks in advance for any help.
As I already promised the OP in the comments, I am posting a partial answer. It does not address every part of the question, but it provides a handy tool for wrapping (calling) arbitrary Python code inside a C++ program.
My code snippet does not do anything with browsers; instead it only shows an example of computing the greatest common divisor using Python's standard function math.gcd().
I decided to introduce this Python-in-C++ bridge because there are many excellent Python modules for working with browsers and for parsing/composing HTML, so it is much easier to write such tools in Python than in C++.
But without expert knowledge of the Python C API, it is not that easy to implement even a simple use case: compile a piece of Python code, pass arguments to it from C++, and get the resulting arguments back into C++. Even these simple actions require a dozen different Python C API functions. That is why I decided to show how to do it, as far as I know how.
I implemented from scratch (specifically for the OP's question) a handy class, PyRunner, which does all the magic. Usage of this class is simple:
PyRunner pyrun;
std::string code = R"(
    def gcd(a, b):
        import math
        return math.gcd(a, b)
    res = gcd(*arg)
    print('GCD of', arg[0], 'and', arg[1], 'is', res, flush = True)
)";
std::cout << pyrun.Run(code, "(2 * 3 * 5, 2 * 3 * 7)") << std::endl;
std::cout << pyrun.Run(code, "(5 * 7 * 11, 5 * 7 * 13)") << std::endl;
Basically, you just pass any Python code snippet to the PyRunner::Run() method, together with an argument (represented as a Python object converted to a string). The result of the call is likewise a Python object converted to a string. You can also use JSON to pass a large argument as a string and to parse the returned value, since any JSON string is also a valid stringified Python object.
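For instance, here is a small sketch of my own of that JSON round trip, reusing the pyrun object from the snippet above (the import sits inside a function because of the global-scope import limitation mentioned below):

// The argument string is a Python/JSON literal; the snippet stores a JSON
// string in 'res', which Run() hands back to C++ as a std::string.
std::string json_code = R"(
    def process(d):
        import json
        return json.dumps({'sum': d['a'] + d['b']})
    res = process(arg)
)";
std::cout << pyrun.Run(json_code, R"({'a': 2, 'b': 3})") << std::endl;  // prints {"sum": 5}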
Of course, you need some knowledge of Python to write complex code snippets inside C++.
One drawback of my PyRunner class is that, for some reason I have not yet understood, you cannot import a Python module at global scope; as you can see, I did import math inside the function scope. This is not a big deal, I think, and maybe some expert will clarify the reason.
To compile and run the code, you need to have Python pre-installed and pass Python's include folder and library file as compiler arguments. For example, with Clang on Windows you do the following:
clang.exe -std=c++20 -O3 -Id:/bin/Python39/include/ d:/bin/Python39/libs/python39.lib prog.cpp
and in Linux:
clang -std=c++20 -O3 -I/usr/include/ -lpython3.9 prog.cpp
To run the program, you should either set the PYTHONHOME or PYTHONPATH environment variable, run the program from the Python folder (like d:/bin/Python39/), or do sys.path.append("d:/bin/Python39/") on the first lines of the Python code snippet embedded in C++. Without these paths Python cannot find the location of its standard library.
The PyRunner class is thread-safe, but always single-threaded: two calls to .Run() from two threads will be serialized by a mutex. I use std::mutex instead of Python's GIL to protect against multi-threading, because that is perfectly fine (and faster) as long as you do not use the Python C API from any other thread at the same time. Also, it is currently not allowed to have two instances of PyRunner, because it calls Py_Initialize() and Py_FinalizeEx() in its constructor and destructor, which should be done globally only once. Hence PyRunner should be a singleton.
Below is the full C++ code with the implementation of the PyRunner class and its usage (the usage is inside main()). See the console output after the code. Click the Try it online! link to compile and run this code on the free GodBolt online Linux servers.
Try it online!
#include <iostream>
#include <functional>
#include <string>
#include <string_view>
#include <stdexcept>
#include <memory>
#include <mutex>
#include <vector>
#include <unordered_map>
#include <algorithm>

#include <Python.h>

#define ASSERT_MSG(cond, msg) { if (!(cond)) throw std::runtime_error("Assertion (" #cond ") failed at line " + std::to_string(__LINE__) + "! Msg: '" + std::string(msg) + "'."); }
#define ASSERT(cond) ASSERT_MSG(cond, "")
#define PY_ASSERT_MSG(cond, msg) { if (!(cond) || PyErr_Occurred()) { PyErr_Print(); ASSERT_MSG(false && #cond, msg); } }
#define PY_ASSERT(cond) PY_ASSERT_MSG(cond, "")
#define LN { std::cout << "LN " << __LINE__ << std::endl << std::flush; }

class PyRunner {
private:
    // RAII wrapper around PyObject* that releases the reference on destruction.
    class PyObj {
    public:
        PyObj(PyObject * pobj, bool inc_ref = false) : p_(pobj) {
            if (inc_ref)
                Py_XINCREF(p_);
            PY_ASSERT_MSG(p_, "NULL PyObject* passed!");
        }
        PyObject * Get() { return p_; }
        ~PyObj() {
            Py_XDECREF(p_);
            p_ = nullptr;
        }
    private:
        PyObject * p_ = nullptr;
    };

public:
    PyRunner() {
        Py_SetProgramName(L"prog.py");
        Py_Initialize();
    }
    ~PyRunner() {
        codes_.clear();
        Py_FinalizeEx();
    }

    std::string Run(std::string code, std::string const & arg = "None") {
        std::unique_lock<std::mutex> lock(mutex_);
        code = StrUnIndent(code);
        // Compile each distinct code snippet only once and cache the code object.
        if (!codes_.count(code))
            codes_.insert(std::pair{code, std::make_shared<PyObj>(Py_CompileString(code.c_str(), "script.py", Py_file_input))});
        PyObj & compiled = *codes_.at(code);
        PyObj globals_arg_mod = PyModule_New("arg"), globals_arg = PyModule_GetDict(globals_arg_mod.Get()), locals_arg = PyDict_New(),
              globals_mod = PyModule_New("__main__"), globals = PyModule_GetDict(globals_mod.Get()), locals = PyDict_New();
        // py_arg = PyUnicode_FromString(arg.c_str()),
        PyObj py_arg = PyRun_String(arg.c_str(), Py_eval_input, globals_arg.Get(), locals_arg.Get());
        PY_ASSERT(PyDict_SetItemString(locals.Get(), "arg", py_arg.Get()) == 0);
        #if 0
        PyObj result = PyEval_EvalCode(compiled.Get(), globals.Get(), locals.Get());
        #else
        PyObj builtins(PyEval_GetBuiltins(), true), exec(PyDict_GetItemString(builtins.Get(), "exec"), true);
        PyObj exec_args = PyTuple_Pack(3, compiled.Get(), globals.Get(), locals.Get());
        PyObj result = PyObject_CallObject(exec.Get(), exec_args.Get());
        #endif
        PyObj res(PyDict_GetItemString(locals.Get(), "res"), true), res_str = PyObject_Str(res.Get());
        char const * cres = nullptr;
        PY_ASSERT(cres = PyUnicode_AsUTF8(res_str.Get()));
        return cres;
    }

private:
    static std::string StrUnIndent(std::string_view const & s) {
        auto lines = StrSplit(s, "\n");
        size_t min_off = size_t(-1);
        for (auto const & line: lines) {
            if (StrTrim(line).empty())
                continue;
            min_off = std::min<size_t>(min_off, line.find_first_not_of("\t\n\v\f\r "));
        }
        ASSERT(min_off < 10000ULL);
        std::string res;
        for (auto const & line: lines)
            res += line.substr(std::min<size_t>(min_off, line.size())) + "\n";
        return res;
    }
    static std::string StrTrim(std::string s) {
        s.erase(0, s.find_first_not_of("\t\n\v\f\r ")); // left trim
        s.erase(s.find_last_not_of("\t\n\v\f\r ") + 1); // right trim
        return s;
    }
    static std::vector<std::string> StrSplit(std::string_view const & s, std::string_view const & delim) {
        std::vector<std::string> res;
        size_t start = 0;
        while (true) {
            size_t pos = s.find(delim, start);
            if (pos == std::string::npos)
                pos = s.size();
            res.emplace_back(s.substr(start, pos - start));
            if (pos >= s.size())
                break;
            start = pos + delim.size();
        }
        return res;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<PyObj>> codes_;
    std::mutex mutex_;
};

int main() {
    try {
        PyRunner pyrun;
        std::string code = R"(
            def gcd(a, b):
                import math
                return math.gcd(a, b)
            res = gcd(*arg)
            print('GCD of', arg[0], 'and', arg[1], 'is', res, flush = True)
        )";
        std::cout << pyrun.Run(code, "(2 * 3 * 5, 2 * 3 * 7)") << std::endl;
        std::cout << pyrun.Run(code, "(5 * 7 * 11, 5 * 7 * 13)") << std::endl;
        return 0;
    } catch (std::exception const & ex) {
        std::cout << "Exception: " << ex.what() << std::endl;
        return -1;
    }
}
Console output:
GCD of 30 and 42 is 6
6
GCD of 385 and 455 is 35
35
I'm using the same traced model in pytorch and libtorch but I'm getting different outputs.
Python Code:
import cv2
import numpy as np
import torch
import torchvision
from torchvision import transforms as trans
# device for pytorch
device = torch.device('cuda:0')
torch.set_default_tensor_type('torch.cuda.FloatTensor')
model = torch.jit.load("traced_facelearner_model_new.pt")
model.eval()
# read the example image used for tracing
image=cv2.imread("videos/example.jpg")
test_transform = trans.Compose([
    trans.ToTensor(),
    trans.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
resized_image = cv2.resize(image, (112, 112))
tens = test_transform(resized_image).to(device).unsqueeze(0)
output = model(tens)
print(output)
C++ Code:
#include <iostream>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

int main()
{
    try
    {
        torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt");
        model.to(torch::kCUDA);
        model.eval();

        cv::Mat visibleFrame = cv::imread("example.jpg");
        cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112));
        at::Tensor tensor_image = torch::from_blob(visibleFrame.data, { 1, visibleFrame.rows,
            visibleFrame.cols, 3 }, at::kByte);
        tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
        tensor_image = tensor_image.to(at::kFloat);
        tensor_image[0][0] = tensor_image[0][0].sub(0.5).div(0.5);
        tensor_image[0][1] = tensor_image[0][1].sub(0.5).div(0.5);
        tensor_image[0][2] = tensor_image[0][2].sub(0.5).div(0.5);
        tensor_image = tensor_image.to(torch::kCUDA);
        std::vector<torch::jit::IValue> input;
        input.emplace_back(tensor_image);
        // Execute the model and turn its output into a tensor.
        auto output = model.forward(input).toTensor();
        output = output.to(torch::kCPU);
        std::cout << "Embds: " << output << std::endl;
        std::cout << "Done!\n";
    }
    catch (const std::exception & e)
    {
        std::cout << "exception" << e.what() << std::endl;
    }
}
The model gives (1x512) size output tensor as shown below.
Python output
tensor([[-1.6270e+00, -7.8417e-02, -3.4403e-01, -1.5171e+00, -1.3259e+00,
-1.1877e+00, -2.0234e-01, -1.0677e+00, 8.8365e-01, 7.2514e-01,
2.3642e+00, -1.4473e+00, -1.6696e+00, -1.2191e+00, 6.7770e-01,
...
-7.1650e-01, 1.7661e-01]], device='cuda:0',
grad_fn=<...>)
C++ output
Embds: Columns 1 to 8 -84.6285 -14.7203 17.7419 47.0915 31.8170 57.6813 3.6089 -38.0543
Columns 9 to 16 3.3444 -95.5730 90.3788 -10.8355 2.8831 -14.3861 0.8706 -60.7844
...
Columns 505 to 512 36.8830 -31.1061 51.6818 8.2866 1.7214 -2.9263 -37.4330 48.5854
[ CPUFloatType{1,512} ]
Using
Pytorch 1.6.0
Libtorch 1.6.0
Visual studio 2019
Windows 10
Cuda 10.1
Before the final normalization, you need to scale your input to the range 0-1 and then carry out the normalization you are doing. Converting to float and then dividing by 255 should get you there. Here is the snippet I wrote; there might be some syntax errors, but they should be easy to spot.
Try this:
#include <iostream>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

int main()
{
    try
    {
        torch::jit::script::Module model = torch::jit::load("traced_facelearner_model_new.pt");
        model.to(torch::kCUDA);

        cv::Mat visibleFrame = cv::imread("example.jpg");
        cv::resize(visibleFrame, visibleFrame, cv::Size(112, 112));
        at::Tensor tensor_image = torch::from_blob(visibleFrame.data, { visibleFrame.rows,
            visibleFrame.cols, 3 }, at::kByte);
        // Scale to [0, 1] first, then normalize to [-1, 1].
        tensor_image = tensor_image.to(at::kFloat).div(255).unsqueeze(0);
        tensor_image = tensor_image.permute({ 0, 3, 1, 2 });
        tensor_image.sub_(0.5).div_(0.5);
        tensor_image = tensor_image.to(torch::kCUDA);
        // Execute the model and turn its output into a tensor.
        auto output = model.forward({tensor_image}).toTensor();
        output = output.cpu();
        std::cout << "Embds: " << output << std::endl;
        std::cout << "Done!\n";
    }
    catch (const std::exception & e)
    {
        std::cout << "exception" << e.what() << std::endl;
    }
}
I don't have access to a system to run this, so if you face any issues, comment below.
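As a side note of my own (not part of the original answer), a quick way to confirm that the C++ preprocessing matches torchvision's ToTensor + Normalize(0.5, 0.5) pipeline is to check the value range of the prepared tensor, which should lie roughly in [-1, 1]:

#include <torch/torch.h>
#include <iostream>

// Prints the min/max of a preprocessed image tensor; values far outside
// [-1, 1] usually mean the divide-by-255 step was skipped.
void print_range(const at::Tensor & t) {
    std::cout << "min: " << t.min().item<float>()
              << " max: " << t.max().item<float>() << std::endl;
}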
I'm having fun coding simple OpenGL demos, and I recently decided to use Lua with my C++ engine in order to change the rendering dynamically without having to recompile my project over and over. That way I can tweak the rendering algorithm more easily. But I know that my current rendering update functions are probably far from efficient.
For the moment, I'm transferring a matrix from C++ to Lua, modifying it in a Lua script, and sending it back to my C++ rendering engine. But I'm reloading the Lua script each time I get an update call from the C++ engine, so I lose all of the variable context. That means I'm always starting from scratch and my rendering is far from smooth. I include a code sample below to explain what I'm doing. I am currently learning Lua embedding in C++, so I know I don't have the best practices yet.
update.lua
function transform(m)
    amplitude = 1.5
    frequency = 500
    phase = 0.0
    r = {}
    for i = 1, #m do
        r[i] = {}
        for j = 1, #m[i] do
            if (i % 2) then
                r[i][j] = amplitude * math.sin(m[i][j] + phase)
            else
                r[i][j] = -amplitude * math.sin(m[i][j] + phase)
            end
            phase = phase + 0.001
        end
    end
    return r
end

-- called by c++
function update()
    m = pull()
    r = transform(m)
    push(r)
end
matrix.cpp
// pull matrix from lua point of view
static int pull(lua_State * _L)
{
    _push(_L, &_m);
    return 1;
}

// push matrix from lua point of view
static int push(lua_State * _L)
{
    // get number of arguments
    int n = lua_gettop(_L);
    if (1 == n) {
        _pull(_L, 1, &_m);
    }
    return 1;
}

void matrix::load_file(char * file, char * function)
{
    int status;
    // load the file containing the script we are going to run
    status = luaL_loadfile(_L, file);
    switch (status) {
        case LUA_OK:
            break;
        case LUA_ERRFILE:
            std::cout << "LUA_ERRFILE: " << lua_error(_L) << std::endl;
            break;
        case LUA_ERRSYNTAX:
            std::cout << "LUA_ERRSYNTAX: " << lua_error(_L) << std::endl;
            break;
        default:
            std::cout << lua_error(_L) << std::endl;
    }
    lua_getglobal(_L, function);
    status = lua_pcall(_L, 1, 1, 0);
    if (status != LUA_OK) {
        std::cout << "error running file" << lua_error(_L) << std::endl;
    }
}

void matrix::update()
{
    load_file("lua/update.lua", "update");
}
I'm thinking of passing some arguments when calling the update() function, but I'm wondering whether the C++ to Lua and back to C++ approach is correct and efficient, especially considering that I might transfer and modify huge matrices in Lua. I probably lack some embedded-Lua knowledge on how to keep context while loading a script. Do you have any general advice on how I could improve my code? I know that my current approach is overly complicated.
A quick fix would be to only load the file if it has been modified since the last frame:
// (requires <sys/stat.h>)
static time_t last_modified = 0;
struct stat sbuf;
stat(file, &sbuf);
if (sbuf.st_mtime > last_modified) {
    last_modified = sbuf.st_mtime;
    status = luaL_loadfile(_L, file);
    // etc
}
// Now call the function
lua_getglobal(_L, function);
status = lua_pcall(_L, 1, 1, 0);
OK, loading the chunk of the update() function into a reference and having a global parameter table in the Lua script is the way to go. I achieved this using the following guidelines, and I post the detailed steps below. Basically, loading the script entirely first ensures that all of its global variables live in the C++-held Lua state. Then storing the wanted function as a reference allows us to run it again while letting the script's global variables evolve on their own.
Step 1
First call luaL_loadfile once at init
Step 2
Run the script once using lua_pcall(_L, 0, 0, 0);
This ensures that the global variables, which are used as parameters in the Lua script, are in memory.
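For illustration, here is a minimal sketch of my own combining Steps 1 and 2 (the matrix::init name is hypothetical; _L is the lua_State member used by the snippets below):

void matrix::init(const char * file)
{
    // Step 1: compile the script file into a chunk on the stack
    if (luaL_loadfile(_L, file) != LUA_OK) {
        std::cout << "load error: " << lua_tostring(_L, -1) << std::endl;
        lua_pop(_L, 1);
        return;
    }
    // Step 2: run the chunk once so its globals (parameter table, functions) exist
    if (lua_pcall(_L, 0, 0, 0) != LUA_OK) {
        std::cout << "run error: " << lua_tostring(_L, -1) << std::endl;
        lua_pop(_L, 1);
    }
}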
Step 3
Store the Lua function. I managed to do it with the following C++ code:
void matrix::store(char * function)
{
    lua_newtable(_L);                          // create table for functions
    _idx = luaL_ref(_L, LUA_REGISTRYINDEX);    // store said table in pseudo-registry
    lua_rawgeti(_L, LUA_REGISTRYINDEX, _idx);  // retrieve table for functions
    lua_getglobal(_L, function);               // retrieve function to store
    if (lua_isfunction(_L, -1)) {
        _f = luaL_ref(_L, -2);                 // store a function in the function table
    }
    else {
        lua_pop(_L, 1);
        std::cout << "can't find " << function << std::endl;
    }
    // table is two places up the current stack counter
    lua_pop(_L, 1); // we are done with the function table, so pop it
    std::cout << "idx: " << _idx << ", function: " << _f << std::endl;
}
Step 4
Call the stored function again when rendering using the following C++ function:
void matrix::run()
{
    int status;
    if (_f == -1) {
        std::cout << "invalid function index " << _f << std::endl;
    }
    else {
        lua_rawgeti(_L, LUA_REGISTRYINDEX, _idx); // retrieve function table
        lua_rawgeti(_L, -1, _f);                  // retrieve function
        // use function
        status = lua_pcall(_L, 0, 0, 0);          // 0 arguments, 0 results
        if (status != LUA_OK) {
            std::cout << "error running function" << lua_error(_L) << std::endl;
        }
        // don't forget to pop the function table from the stack
        lua_pop(_L, 1);
    }
}
Step 5 (optional)
If we set all the Lua parameters in a global table, we can retrieve them dynamically in C++ using the following piece of code:
void matrix::get_params(char * p)
{
    lua_getglobal(_L, p);
    lua_pushnil(_L);
    int i = 0;
    while (lua_next(_L, -2))
    {
        const char * key = lua_tostring(_L, -2);
        double value = lua_tonumber(_L, -1);
        lua_pop(_L, 1);
        std::cout << key << " = " << value << std::endl;
        _h[i].key.assign(key);
        _h[i].value = value;
        i++;
    }
    lua_pop(_L, 1);
}
Where _h is a simple dynamic structure defined as such:
typedef struct {
    std::string key;
    float value;
} hash;
I only use float, so this simple structure is convenient enough for my needs and allows me to add lots of variables in my Lua script without bothering with a structure definition in C++. This way I can add as many parameters as I want to my Lua table and do the maths when updating.
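For completeness, the post never shows how _h itself is declared; a plausible member declaration (my guess, with an arbitrary size) would be:

// Hypothetical member of the matrix class; any container indexable up to the
// number of Lua parameters would do, e.g. a std::vector resized beforehand.
static const int MAX_PARAMS = 64;  // arbitrary upper bound, not from the post
hash _h[MAX_PARAMS];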
Step 6
Tweak the Lua script forever! Et voilà:
p = {
    amplitude = 1.5,
    frequency = 500,
    phase = 0.0
}

function transform(m)
    r = {}
    for i = 1, #m do
        r[i] = {}
        for j = 1, #m[i] do
            if (i % 2) then
                r[i][j] = p.amplitude * math.sin(m[i][j] + p.phase)
            else
                r[i][j] = -p.amplitude * math.sin(m[i][j] + p.phase)
            end
            p.phase = p.phase + 0.001
        end
    end
    return r
end

-- called by c++
function update()
    m = pull()
    r = transform(m)
    push(r)
end
This solution fits my needs, but seems very complicated and inefficient. But it was a fine hacking session anyway.
I want to write a Boost.Python program that takes a symbolic Python function from the user and evaluates its derivative in my program.
For example, the user provides a Python file (Function.py) which defines a function like
F = sin(x)*cos(x).
Then I want to have access to F'(x) (the derivative of F(x)) using the symbolic differentiation ability of SymPy. I don't want to use numerical differentiation.
Is there a way to make such a function F'(x) accessible in C++ using Boost.Python?
Here is some code that should help you get started.
main.cpp:
#include <boost/python.hpp>
#include <iostream>
using namespace boost::python;
int main(void) {
    Py_Initialize();
    object main_module = import("__main__");
    object main_namespace = main_module.attr("__dict__");
    exec("from __future__ import division\n"
         "from sympy import *\n"
         "x = symbols('x')\n"
         "f = symbols('f', cls=Function)\n"
         "f = cos(x) * sin(x)\n"
         "f1 = lambda u: diff(f).subs(x, u);\n",
         main_namespace);
    exec("result = f1(1.0)", main_namespace);
    double res = extract<double>(main_namespace["result"]);
    std::cout << "Out: " << res << std::endl;
    return 0;
}
Compile command, replace with your path and compiler:
$ clang++ -I"/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/Headers/" -L"/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/" -lpython2.7 main.cpp
It compiles but does not work for me right now. Hope it helps.
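One guess at why it fails (my own speculation, untested): f1(1.0) returns a SymPy Float rather than a built-in Python float, which extract<double> may refuse to convert. Forcing the conversion on the Python side would sidestep that, e.g. replacing the last exec/extract pair above with:

exec("result = float(f1(1.0))", main_namespace);  // hypothetical fix, untested
double res = extract<double>(main_namespace["result"]);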
I am not a SymPy expert but maybe this can help you:
You can define a Python method like:
def f(x):
    return sin(x)*cos(x)
You can create an evaluable function f1 as the derivative of f using:
from sympy import *
x = symbols('x')
f1 = lambdify(x, diff(f(x)))
This function f1 can be called from C++ using boost::python. You can get an object referring to the function f1, call it using the () operator, and convert the result to double using extract<>.
Here is an example:
namespace py = boost::python;

Py_Initialize();
py::object main_module = py::import("__main__");
py::object main_dict = main_module.attr("__dict__");

py::exec(
    "def f(x):\n"
    "    return sin(x)*cos(x)\n",
    main_dict
);

py::exec(
    "from sympy import *\n"
    "x = symbols('x')\n"
    "f1 = lambdify(x, diff(f(x)))\n",
    main_dict
);

py::object f1 = main_dict["f1"];

std::cout << py::extract<double>(f1(0.0)) << std::endl;
std::cout << py::extract<double>(f1(1.0)) << std::endl;

return 0;
I want to pass data from C++ to Python in Linux and I do this through a pipe.
My C++ program looks like this:
#include <iostream>
#include <stdio.h>
#include <sstream>
#include <math.h>

int main() {
    FILE *cmd;
    std::ostringstream ss;
    double val;
    cmd = popen("python plot.py", "w");
    for (int i = 0; i < 100; i++) {
        val = cos(3.14159 * i / 40);
        ss.str("");
        ss << val << std::endl;
        fputs(ss.str().c_str(), cmd);
        fflush(cmd);
    }
    fputs("\n", cmd);
    fflush(cmd);
    pclose(cmd);
    return 0;
}
The corresponding Python script plot.py looks like:
import sys
import numpy
import matplotlib.pyplot as plt

fig, = plt.plot([], [])
index = 0
plt.ion()
while True:
    data = sys.stdin.readline().strip()
    if not data:
        plt.ioff()
        break
    index = index + 1
    val = [float(n) for n in data.split()]
    fig.set_xdata(numpy.append(fig.get_xdata(), index))
    fig.set_ydata(numpy.append(fig.get_ydata(), val))
    axis = list(plt.axis())
    min_x = 0
    max_x = max(fig.get_xdata())
    min_y = min(fig.get_ydata())
    max_y = max(fig.get_ydata())
    axis[0] = min_x
    axis[1] = max_x
    axis[2] = min_y
    axis[3] = max_y
    plt.axis(axis)
    plt.draw()
plt.show()
Now my problem is that the data from my C++ program seems to be buffered before it is passed to the Python script, and I am not able to get rid of this. I have searched for a solution but have not been able to find one. The best suggestion I could find was to use setvbuf, so I also tried adding the following line to the C++ program:
setvbuf(cmd, NULL, _IONBF, 0);
But that did not have any effect.
Does anyone know how I can pass data from my C++ program to Python in an unbuffered manner?
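(For what it's worth, one thing I would try, purely my own guess rather than something from this thread: launch the interpreter in unbuffered mode, since the C++ side already flushes after every write.)

// Only the popen line changes; "-u" asks the child Python process to run
// with unbuffered standard streams (a guess at the cause, not a confirmed fix).
cmd = popen("python -u plot.py", "w");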