I have C++ code that has grown considerably. I have a number of variables (mostly Boolean) that need to be changed each time I run the code (different running conditions). In the past I have done this using command-line arguments to main(int argc, char* argv[]).
Since this method has become cumbersome (I have 18 different running conditions, hence 18 different arguments :-( ), I would like to move to interfacing with Python (or, if need be, Bash). Ideally I would like to write a Python script where I set the values of the data members and then run the code.
Does anyone have any pointers/information that could help me out? Better still, a simple coded example or a URL I could look up.
Edit From Original Question:
Sorry, I don't think I was clear with my question. I don't want to use the main(int argc, char* argv[]) feature in C++ to set the variables on the command line. Instead, can I use Python to declare and initialize the data members in my C++ code?
Thanks again mike
Use the subprocess module to execute your program from Python:
import subprocess as sp
import shlex

def run(cmdline):
    # split the command line into argv tokens, run it, and capture its output
    process = sp.Popen(shlex.split(cmdline), stdout=sp.PIPE, stderr=sp.PIPE)
    output, err = process.communicate()
    retcode = process.poll()
    return retcode, output, err

run('./a.out ' + arg1 + ' ' + arg2 + ' ' + ...)
Interfacing between C/C++ and Python is heavily documented and there are several different approaches. However, if you're just setting values, Python may be overkill: it is more geared toward customising large operations within your process by farming them out to the interpreter.
I would personally recommend looking into an "ini"-style configuration file, either a traditional one or XML, or even a lighter scripting language like Lua.
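For example, here is a minimal sketch of the driver side of that idea in Python 3 (the file name, section, and option names are made up, and the C++ program would have to parse the file itself):
import configparser
import subprocess

# write the run conditions to an INI file (hypothetical names)
config = configparser.ConfigParser()
config['run'] = {
    'use_feature_x': 'true',
    'verbose': 'false',
}
with open('run_conditions.ini', 'w') as f:
    config.write(f)

# launch the C++ executable, telling it which config file to read
subprocess.call(['./a.out', 'run_conditions.ini'])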
You can use the subprocess module to launch an executable with defined command-line arguments:
import subprocess

option1 = True
option2 = False
# ...
optionN = True

lstopt = ['path_to_cpp_executable',
          option1,
          option2,
          # ...
          optionN
          ]
lstopt = [str(item) for item in lstopt]  # the arguments must be strings

proc = subprocess.Popen(lstopt, close_fds=True)
stdoutdata, stderrdata = proc.communicate()
If you're using Python 2.7 or 3.2+, an OrderedDict will make the code more readable:
from collections import OrderedDict

opts = OrderedDict([('option1', True),
                    ('option2', False),
                    ])
lstopt = (['path_to_cpp_executable'] +
          [str(item) for item in opts.values()])

proc = subprocess.Popen(lstopt, close_fds=True)
stdoutdata, stderrdata = proc.communicate()
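On Python 3.7+, a plain dict preserves insertion order and subprocess.run makes this shorter; a minimal sketch, assuming the same hypothetical executable path:
import subprocess

opts = {'option1': True, 'option2': False}  # plain dicts keep insertion order on 3.7+
result = subprocess.run(['path_to_cpp_executable'] + [str(v) for v in opts.values()],
                        capture_output=True, text=True)
print(result.returncode, result.stdout)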
With the ctypes module, you can call arbitrary C libraries.
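For example, a minimal ctypes sketch (the library and function names here are hypothetical, and the C++ side would need extern "C" linkage):
import ctypes

lib = ctypes.CDLL('./libmycode.so')           # hypothetical shared library
lib.compute.restype = ctypes.c_double         # declare the return type...
lib.compute.argtypes = [ctypes.c_float] * 5   # ...and the argument types

print(lib.compute(1.0, 2.0, 3.0, 4.0, 5.0))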
There are several ways for interfacing C and C++ code with Python:
SWIG
Boost.Python
Cython
I can only advise having a look at SWIG: using its director feature, it allows you to fully integrate C++ and Python, including cross-language derivation (deriving classes in one language from classes defined in the other).
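For illustration, with directors enabled the Python side can subclass a wrapped C++ class (all names here are hypothetical):
import MyModule

class PyWorker(MyModule.Worker):      # derive a Python class from a wrapped C++ class
    def step(self):                   # override a C++ virtual method
        print("Python override running")

w = PyWorker()
MyModule.run_worker(w)  # C++ code calling step() dispatches back into Python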
I'm new to Python, and while it's a pretty simple language, I'm having a hard time finding a solid and easy-to-read language reference that lists all the supported built-in methods and libraries that come with the installation. The main documentation site is confusing: there's more info about what's deprecated than about what's recommended. I tried using pydoc to find method usage. For example, I want to see a simple list of all the methods that are part of the string class (e.g. replace(), toupper(), etc.), but I'm not sure how to use it to list the methods, or to show a method and its usage. What do people use for a quick reference that works?
When I do something like 'pydoc string', I see a message that says "Warning: most of the code you see here isn't normally used nowadays.
Beginning with Python 1.6, many of these functions are implemented as
methods on the standard string object. They used to be implemented by
a built-in module called strop, but strop is now obsolete itself."
So while there's info about the replace() method there, I'm worried that it's not the right info, based on that warning. How can I see the methods of the standard string object?
Documentation about standard functions:
https://docs.python.org/2/library/functions.html
Documentation about standard libraries:
https://docs.python.org/2/library/
You could use dir() and help(), e.g. from the Python shell:
>>> import math
>>> dir(math)
['__doc__', '__name__', '__package__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']
>>> help(math.tan)
will print:
Help on built-in function tan in module math:
tan(...)
tan(x)
Return the tangent of x (measured in radians).
(press "q" to exit the help page)
Hope it helps.
EDIT
Another solution, from the system shell:
$ python -m pydoc sys
Then press "q" to exit.
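To answer the string question directly, the same trick works on the built-in str type:
>>> dir(str)            # lists 'capitalize', 'replace', 'split', 'upper', ...
>>> help(str.replace)   # shows the signature and docstring of one method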
I'm using boost::python to do a hybrid C++/python application: the C++ app calls a collection of python scripts, which in turn use the C++ program's functions, classes, etc., exposed as python objects. (Python 2.x.)
BOOST_PYTHON_MODULE(MyModule) exposes the C++ to python as expected.
My initialization code:
Py_Initialize();
initMyModule(); // import MyModule
namespace bpl = boost::python;
Now I want my C++ code to get at MyModule, too. In Python, you would just write globals()['MyModule']. But this (and things like it) doesn't work in C++:
bpl::object globals = bpl::eval("globals()");
This fails at run-time with
File "<string>", line 1, in <module>; NameError: name 'globals' is not defined
As an aside, I see many examples of setting up __main__ like this:
bpl::object m = bpl::import("__main__");
bpl::dict g = m.attr("__dict__"); // like locals(), but not globals()
This doesn't fail, and gives locals, but according to the Py_Initialize docs, __main__ is already set up. And it doesn't let you see globals, where you'd find your imported module.
You don't need the explicit bpl::import("__main__");.
Here are the globals:
bpl::dict globals()
{
    // borrow the interpreter's module dictionary
    bpl::handle<> mainH(bpl::borrowed(PyImport_GetModuleDict()));
    return bpl::extract<bpl::dict>(bpl::object(mainH));
}
Since everything is managed by smart pointers, returning and manipulating bpl::dict directly works fine.
bpl::object myMod = globals()["MyModule"];
globals()["myNewGlobal"] = 88;
I have a big C++ program in a single .cpp file which defines a lot of classes (interdependent on each other) and finally runs a main function. Now I am interested in using only one of these classes in Python, specifically one method of this class which accepts 5 floats as inputs and outputs one float. I am trying to find the simplest way to achieve this. After not having success with Boost.Python (mainly because of installation issues), I have come to Cython, which in its current version supports C++. I could successfully run the Rectangle example given in the Cython tutorial, but I can't work out how to proceed and adapt it to my case, where I don't need such a complicated .pyx file and where I don't have a .h file. Can somebody explain in simple words what I should write in setup.py and in the .pyx file if my .cpp file has, for example, this structure:
...
class Nuclei {
public:
    ...
    double potential(float, float, float, float, float);
    ...
private:
    ...
};
...
If all you are looking to do is call a single function, Extending Python With C/C++ is probably the simplest approach. This page provides a good example.
The relevant setup.py code in that example is
from distutils.core import setup, Extension

module1 = Extension('demo',
                    define_macros = [('MAJOR_VERSION', '1'),
                                     ('MINOR_VERSION', '0')],
                    include_dirs = ['/usr/local/include'],
                    libraries = ['tcl83'],
                    library_dirs = ['/usr/local/lib'],
                    sources = ['demo.c'])

setup (name = 'PackageName',
       version = '1.0',
       description = 'This is a demo package',
       author = 'Martin v. Loewis',
       author_email = 'martin@v.loewis.de',
       url = 'http://docs.python.org/extending/building',
       long_description = '''
This is really just a demo package.
''',
       ext_modules = [module1])
If the C++ code you want to call is in demo.c, it could be used with import demo in Python.
Note that it is not nearly that simple: you'll be creating a function that takes a PyObject * holding the arguments and returns a PyObject *, and a lot can be said about how those are constructed. Take a look at the pages linked above; they are filled with examples.
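Once the extension builds, using it from Python would look something like this (the function name below is hypothetical, standing in for a wrapper around Nuclei::potential):
import demo  # the name given to Extension(...)

result = demo.potential(1.0, 2.0, 3.0, 4.0, 5.0)  # hypothetical wrapper function
print(result)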
I'm working on embedding Python in our test suite application. The purpose is to use Python to run several tests scripts to collect data and make a report of tests. Multiple test scripts for one test run can create global variables and functions that can be used in the next script.
The application also provides extension modules that are imported in the embedded interpreter, and are used to exchange some data with the application.
But the user can also make multiple test runs. I don't want to share those globals, imports, and exchanged data between test runs. I have to be sure I restart in a clean state, so that I control the test environment and get the same results.
How should I reinitialise the interpreter?
I used Py_Initialize() and Py_Finalize(), but I get an exception the second time I initialise the extension modules I provide to the interpreter.
And the documentation warns against using it more than once.
Using sub-interpreters seems to have the same caveats with extension modules initialization.
I suspect that I'm doing something wrong with the initialisation of my extension modules, but I fear that the same problem happens with 3rd party extension modules.
Maybe it's possible to get it to work by launching the interpreter in its own process, so as to be sure that all the memory is released.
By the way, I'm using boost-python for it, that also warns AGAINST using Py_Finalize!
Any suggestion?
Thanks
Here is another way I found to achieve what I want: start with a clean slate in the interpreter.
I can control the global and local namespaces I use to execute the code:
// Get a pointer to the main module of the Python script
object main_module = import("__main__");
// Get dictionary of main module (contains all variables and stuff)
object main_namespace = main_module.attr("__dict__");
// define the dictionaries to use in the interpreter
dict global_namespace;
dict local_namespace;
// add the builtins
global_namespace["__builtins__"] = main_namespace["__builtins__"];
I can then use these namespaces for executing the code contained in pyCode:
exec( pyCode, global_namespace, local_namespace );
I can clean the namespaces whenever I want to run a new instance of my test by clearing the dictionaries:
// empty the interpreters namespaces
global_namespace.clear();
local_namespace.clear();
// Copy builtins to new global namespace
global_namespace["__builtins__"] = main_namespace["__builtins__"];
Depending on the level at which I want the execution to happen, I can use the same dictionary for both the global and local namespaces.
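The pure-Python equivalent of this scheme, for illustration: run each chunk of code in fresh dictionaries and throw them away afterwards:
import builtins

def run_clean(code_text):
    global_namespace = {'__builtins__': builtins}  # fresh globals with builtins only
    local_namespace = {}                           # fresh locals
    exec(code_text, global_namespace, local_namespace)

run_clean("x = 1\nprint(x)")  # 'x' does not leak into the next call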
How about using code.InteractiveInterpreter?
Something like this should do it:
#include <boost/python.hpp>
#include <string>
#include <stdexcept>
using namespace boost::python;
std::string GetPythonError()
{
PyObject *ptype = NULL, *pvalue = NULL, *ptraceback = NULL;
PyErr_Fetch(&ptype, &pvalue, &ptraceback);
std::string message("");
if(pvalue && PyString_Check(pvalue)) {
message = PyString_AsString(pvalue);
}
return message;
}
// Must be called after Py_Initialize()
void RunInterpreter(std::string codeToRun)
{
object pymodule = object(handle<>(borrowed(PyImport_AddModule("__main__"))));
object pynamespace = pymodule.attr("__dict__");
try {
// Initialize the embedded interpreter
object result = exec( "import code\n"
"__myInterpreter = code.InteractiveConsole() \n",
pynamespace);
// Run the code
str pyCode(codeToRun.c_str());
pynamespace["__myCommand"] = pyCode;
result = eval("__myInterpreter.push(__myCommand)", pynamespace);
} catch(error_already_set) {
throw std::runtime_error(GetPythonError().c_str());
}
}
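For reference, the interpreter object this snippet drives behaves like this on the Python side:
import code

console = code.InteractiveConsole()
console.push("x = 40")        # push() returns True when more input is needed
console.push("print(x + 2)")  # prints 42; state persists between pushes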
I'd write another shell script executing the sequence of test scripts, with a new instance of Python each time. Or do it in Python, like:
import subprocess

# run your own tests in this process first
# then run the user scripts, each in a new process so it gets a virgin environment
for script in userScripts:
    subprocess.call(['python', script])
I thought I'd try using D for some system-administration scripts which require high performance (to compare its performance with Python/Perl etc.).
I can't find an example in the tutorials I've looked through so far (dsource.org etc.) of how to make a system call (i.e. call another program) and receive its output from stdout.
If I missed it, could someone point me to the right docs/tutorial, or provide the answer right away?
Well, then I of course found it: http://www.digitalmars.com/d/2.0/phobos/std_process.html#shell (a version using the Tango library is here: http://www.dsource.org/projects/tango/wiki/TutExec).
The former is the one that works with D 2.0 (and thereby the current dmd compiler that comes with Ubuntu).
I got this tiny example to work now, compiled with dmd:
import std.stdio;
import std.process;

void main() {
    string output = shell("ls -l");
    write(output);
}
std.process has been updated since then: shell() is gone, and the closest new function is spawnShell, which runs the command through the shell and returns a Pid to wait on:
import std.stdio;
import std.process;

void main() {
    auto pid = spawnShell("ls -l");  // the command's output goes straight to our stdout
    wait(pid);                       // reap the child and get its exit status
}
(To capture the output as a string, as the old shell() did, use executeShell("ls -l").output instead.)