So I have set up a C++ class with, say, class.cpp and class.h. class.cpp uses some functions from GSL (it has #include <gsl/gsl_blas.h>).
I have no problem linking this to another C++ file main.cpp, and I can compile it with
g++ -o main main.o class.o -I/home/gsl/include -lm -L/home/gsl/lib -lgsl -lgslcblas
Also, without including the GSL library in class.cpp, I have managed to create a Cython file that uses my class in class.cpp, and it works.
However, when I try to combine these two (i.e. using a C++ class in Cython, where the C++ class uses GSL functions), I do not know what to do. I guess I have to include the flags
-I/home/gsl/include -lm -L/home/gsl/lib -lgsl -lgslcblas
somewhere in the setup file, but I don't know where or how. My setup.py looks like
from distutils.core import setup
from Cython.Build import cythonize
import os
os.environ["CC"] = "g++"
os.environ["CXX"] = "g++"
setup(
    name = "cy",
    ext_modules = cythonize('cy.pyx'),
)
and I have
# distutils: language = c++
# distutils: sources = class.cpp
at the beginning of my .pyx file.
Thanks for any help!
I suggest you use the extra_compile_args option in your extension.
I already wrote an answer that happens to use the GSL dependency here.
Customize it to your needs, but it should work this way.
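For example, a minimal setup.py sketch along those lines could be the following (the module name and the distutils directives in cy.pyx are the ones from your question, and the paths come from your g++ command; adjust everything to your installation):

# setup.py sketch: pass the GSL include and link flags via extra_compile_args / extra_link_args
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

ext = Extension(
    "cy",
    sources=["cy.pyx"],            # class.cpp is pulled in by the '# distutils: sources' directive
    language="c++",
    extra_compile_args=["-I/home/gsl/include"],
    extra_link_args=["-L/home/gsl/lib", "-lgsl", "-lgslcblas", "-lm"],
)

setup(name="cy", ext_modules=cythonize([ext]))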
Hope this helps...
You can specify the required external libraries using the libraries, include_dirs and library_dirs parameters of the Extension class. For example:
from distutils.extension import Extension
from distutils.core import setup
from Cython.Build import cythonize
import numpy
myext = Extension("myext",
                  sources=['myext.pyx', 'myext.cpp'],
                  include_dirs=[numpy.get_include(), '/path/to/gsl/include'],
                  library_dirs=['/path/to/gsl/lib'],
                  libraries=['m', 'gsl', 'gslcblas'],
                  language='c++',
                  extra_compile_args=["-std=c++11"],
                  extra_link_args=["-std=c++11"])

setup(name='myproject',
      ext_modules=cythonize([myext]))
Here, the C++ class is defined in myext.cpp and the Cython interface in myext.pyx. See also: Cython Documentation
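Once built (for example with python setup.py build_ext --inplace), the module is used like any other Python module; a trivial usage sketch:

# the module name follows the Extension name above
import myext
# ... call whatever you exposed in myext.pyx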
Related
I'm trying to compile both C and C++ sources at the same time in Cython. This is my current setup:
-setup.py
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
import os
language = "c++"
extra_compile_flags = ["-std=c++17"]
os.environ["CC"] = "clang++"
ext_modules = [
    Extension(
        name="Dummy",
        sources=["mydummy.pyx", "source1.cpp", "source2.c"],
        language=language,
        extra_compile_args=extra_compile_flags,
    )
]
ext_modules = cythonize(ext_modules)
setup(
    name="myapp",
    ext_modules=ext_modules,
)
-compile command:
python3 setup.py build_ext --inplace --verbose
In the log I get the following message:
clang++ -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I. -I/usr/include/python3.6m -I/usr/include/python3.6m -c /path/source2.c -o build/temp.linux-x86_64-3.6/./path/source2.o -std=c++17 -O3
clang: warning: treating 'c' input as 'c++' when in C++ mode, this behavior is deprecated [-Wdeprecated]
The compilation goes through, but the warning looks pretty nasty. How can I get rid of it?
The naive solution
os.environ["CC"] = "clang"
os.environ["CXX"] = "clang++"
fails with the error: invalid argument '-std=c++17' not allowed with 'C'
gcc (or clang) is just a front end: it calls the appropriate compiler (C or C++ compiler) based on the file extension of the file being compiled (.c means it is compiled with the C compiler, .cpp with the C++ compiler); see for example this similar SO issue.
Thus there is no need to set the compiler to "clang++", as it will happen automatically.
However, Cython needs to know that it has to produce a cpp file (and accept C++ syntax) rather than a c file. The linker also has to know that it has to link against the C++ runtime libraries (this is done automatically if g++/clang++ is used for linking). Thus we need to add language = "c++" to the Extension's definition, as suggested in DavidW's answer - this takes care of the last two problems.
The remaining problem is that we would like to use different compiler options for different source files (-std=c++17 only for cpp files). This can be achieved using a lesser-known command, build_clib:
#setup.py:
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
ext_modules = [
    Extension(
        name="mydummy",
        sources=["mydummy.pyx", "source1.cpp"],
        language="c++",
        extra_compile_args=["-std=c++17"],
    )
]
ext_modules = cythonize(ext_modules)

myclib = ('myclib', {'sources': ["source2.c"]})

setup(
    name="mydummy",
    libraries=[myclib],
    ext_modules=ext_modules,
)
And now building either with
python setup.py build --verbose
or
python setup.py build_clib --verbose build_ext -i --verbose
will first build a simple static library consisting of C-code and link it to the resulting extension consisting of c++-code.
The above code uses the default compilation flags when building the static library, which should be enough in your case.
distutils doesn't offer the possibility to specify additional flags here, so if that is necessary we either have to switch to setuptools, which offers this functionality, or patch the build_clib command.
For the second alternative, we have to add the following to the above setup.py file:
#setup.py
...
# adding cflags to build_clib
from distutils import log
from distutils.command.build_clib import build_clib
from distutils.errors import DistutilsSetupError  # needed for the error raised below

# use original implementation but with tweaked build_libraries!
class build_clib_with_cflags(build_clib):
    def build_libraries(self, libraries):
        for (lib_name, build_info) in libraries:
            sources = build_info.get('sources')
            if sources is None or not isinstance(sources, (list, tuple)):
                raise DistutilsSetupError(
                    "in 'libraries' option (library '%s'), "
                    "'sources' must be present and must be "
                    "a list of source filenames" % lib_name)
            sources = list(sources)

            log.info("building '%s' library", lib_name)

            macros = build_info.get('macros')
            include_dirs = build_info.get('include_dirs')
            cflags = build_info.get('cflags')  # HERE we add cflags
            objects = self.compiler.compile(sources,
                                            output_dir=self.build_temp,
                                            macros=macros,
                                            include_dirs=include_dirs,
                                            extra_postargs=cflags,  # HERE we use cflags
                                            debug=self.debug)

            self.compiler.create_static_lib(objects, lib_name,
                                            output_dir=self.build_clib,
                                            debug=self.debug)
...
setup(
...
cmdclass={'build_clib': build_clib_with_cflags}, # use our class instead of built-in!
)
and now we can add additional compile flags to the library definitions (the previous step can be skipped if setuptools is used):
...
myclib = ('myclib', {'sources': ["source2.c"], 'cflags' : ["-O3"]})
...
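For completeness, the setuptools alternative mentioned above could look roughly like this (an untested sketch that assumes setuptools' build_clib honors the 'cflags' entry, as implied above):

# setup.py sketch using setuptools instead of patching build_clib
from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

ext_modules = cythonize([
    Extension(
        name="mydummy",
        sources=["mydummy.pyx", "source1.cpp"],
        language="c++",
        extra_compile_args=["-std=c++17"],
    )
])

# the C-only flags go directly into the library definition
myclib = ('myclib', {'sources': ["source2.c"], 'cflags': ["-O3"]})

setup(name="mydummy", libraries=[myclib], ext_modules=ext_modules)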
I am trying to build a custom NS3 module which depends on a static library. That static library in turn depends on an NS3 module.
Platform: Ubuntu 16.04 x64
Toolchain: GCC 5.4.0
I will refer to my custom NS3 module as mymodule
I will refer to the library which mymodule depends on as mylib
I will refer to the program which links with mymodule and mylib as myprog
wscript for mymodule:
def build(bld):
    module = bld.create_ns3_module('mymodule', ['network'])
    module.features = 'c cxx cxxstlib ns3module'
    module.source = [
        'model/mymodule.cc' ]

    # Make a dependency to some other static lib:
    bld.env.INCLUDES_MYLIB = [ "some/include/path" ]
    bld.env.LIB_MYLIB = ['mylib']
    bld.env.LIBPATH_MYLIB = [ "some/path" ]
    module.use.append('MYLIB')

    # Create a program which uses mymodule
    p = bld.create_ns3_program('myprog', ['core', 'mymodule'])
    p.source = 'prog.cpp'

    headers = bld(features='ns3header')
    headers.module = 'mymodule'
    headers.source = ['model/mymodule.h']
When I do ./waf build it fails: LD cannot link myprog because mylib has unresolved symbols. This failure is actually expected, because mylib and mymodule are codependent and should be linked in a non-standard way.
Workarounds:
If I build myprog by hand and use -Wl,--start-group
-lns3.26-mymodule-debug -lmylib -Wl,--end-group it links perfectly fine and works as expected.
If I combine the two static libs by hand (using an ar -M script) and then run ./waf build, it also works fine.
The question: how can I integrate one of the workarounds above into the wscript?
It looks like a known problem with the order of static library inclusion. The behavior changed in waf 1.9 due to this problem.
One workaround might be to use the linkflags attribute of the program. You should prefer STLIB_MYLIB and STLIBPATH_MYLIB, as mylib is static. In waf 1.9, with the correct order of libs, that might suffice.
Anyway, use -v to see the command lines generated by waf; it might help!
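A rough, untested sketch of how both suggestions could be combined in the wscript (STLIB/STLIBPATH replace LIB/LIBPATH for the static library, and the --start-group/--end-group trick from the question's first workaround goes into the program's linkflags; the library names are taken from the question):

def build(bld):
    module = bld.create_ns3_module('mymodule', ['network'])
    module.features = 'c cxx cxxstlib ns3module'
    module.source = ['model/mymodule.cc']

    # static dependency: prefer STLIB/STLIBPATH over LIB/LIBPATH
    bld.env.INCLUDES_MYLIB = ["some/include/path"]
    bld.env.STLIB_MYLIB = ['mylib']
    bld.env.STLIBPATH_MYLIB = ["some/path"]
    module.use.append('MYLIB')

    p = bld.create_ns3_program('myprog', ['core', 'mymodule'])
    p.source = 'prog.cpp'
    # group the codependent static libs so the linker resolves them in one pass
    p.linkflags = ['-Wl,--start-group', '-lns3.26-mymodule-debug', '-lmylib', '-Wl,--end-group']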
I have code written in C++:
#include <boost/python.hpp>
char const* greet()
{
return "Yay!";
}
BOOST_PYTHON_MODULE(libtest)
{
using namespace boost::python;
def("greet", greet);
}
Now I want to import this dynamic library into Python with:
import libtest
But I get:
ImportError: /usr/lib/libboost_python.so.1.54.0: undefined symbol: PyClass_Type
What should I do? My OS is Arch Linux.
OK, I have found a solution for this problem. The simplest option is to compile with:
g++ testing.cpp -I/usr/include/python3.3m -I/usr/include/boost -lboost_python3 -lpython3.3m -o testing.so -shared -fPIC
Previously I used -lboost_python instead of -lboost_python3. But this solution is not cross-platform, so we can achieve the same with CMake:
cmake_minimum_required(VERSION 2.6)
find_package(Boost 1.54 EXACT REQUIRED COMPONENTS python3)
INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS} "/usr/include/python3.3m/" )
find_package(PythonLibs)
ADD_LIBRARY(testing SHARED testing.cpp)
TARGET_LINK_LIBRARIES(testing ${Boost_LIBRARIES} ${PYTHON_LIBRARIES}) # find_package(PythonLibs) sets PYTHON_LIBRARIES
Of course "/usr/include/python3.3m" won't be the path to Python's include directory on all Linux distros.
Use the same version of Python when building both Boost.Python and the libtest module, as well as when importing libtest.
PyClass_Type is part of the Python 2 C API and not part of the Python 3 C API. Hence, the Boost.Python library was likely built against Python 2, but it is being loaded by a Python 3 interpreter, where PyClass_Type is not available.
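A quick way to check which Python a given interpreter really is (and therefore which headers and libraries a build should use) is to ask it directly, for example:

# print the version and build paths of the interpreter that will import the module,
# so that Boost.Python and libtest can be built against the same Python
import sys
import sysconfig

print(sys.version)                          # the interpreter doing the importing
print(sysconfig.get_paths()["include"])     # its C headers
print(sysconfig.get_config_var("LIBDIR"))   # where its libpython lives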
Sorry for such a general title, but I'm not quite sure what exactly I'm missing or doing wrong. My aim is to build a Python extension using Boost.Python under Cygwin while avoiding the Boost.Build tools, that is, using make instead of bjam. The latter way was working quite well for me, but now I want to do it this way. I solved many issues by googling and looking at similar topics, which helped me figure out some tricks and move forward. Yet at the last step there seems to be some problem. I'll try to describe all my steps in some detail, in the hope that this post may be useful to others in the future and also to better describe the setup.
Because I was not quite sure about the original installations of both Python and Boost (from the various Cygwin repositories), I decided to install them from scratch in my home directory, so here is what I do:
First, install Python. I'll skip the details for this; it is more or less straightforward. Important for the later description is just the path:
/home/Alexey_2/Soft/python2.6 - this is PYTHONPATH, also included in PATH
Working on Boost:
a) Unpack the Boost source into
/home/Alexey_2/Soft/boost_1_50_0 - this is BOOST_ROOT
b) Making bjam. First go into the directory:
/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2
Next, invoke bootstrap.sh; this eventually creates the b2 and bjam executables in this directory. In .bash_profile, add this directory to PATH so we can call bjam. Here, and after each future edit of .bash_profile, I restart Cygwin so the changes take effect.
c) Still in the
/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2
directory, edit user-config.jam to let bjam know which Python to use. In my case I only add one line:
using python : 2.6 : /home/Alexey_2/Soft/python2.6/bin/python2.6 : /home/Alexey_2/Soft/python2.6/include/python2.6 : /home/Alexey_2/Soft/python2.6/bin ;
In the lib path (last entry) I put /home/Alexey_2/Soft/python2.6/bin because it contains libpython2.6.dll.
d) OK, now we can build the Boost.Python libraries. Go to the BOOST_ROOT directory and execute the command
bjam --with-python toolset=gcc link=shared
This creates the necessary libraries (cygboost_python.dll and libboost_python.dll.a) and places them in
/home/Alexey_2/Soft/boost_1_50_0/stage/lib
Building the Python extension.
Here is my simple test program (actually part of the example code):
// file xyz.cpp
#include <boost/python.hpp>
#include <iostream>
#include <iomanip>
using namespace std;
using namespace boost::python;
class Hello
{
    std::string _msg;
public:
    Hello(std::string msg) { _msg = msg; }
    void set(std::string msg) { this->_msg = msg; }
    std::string greet() { return _msg; }
};

BOOST_PYTHON_MODULE(xyz)
{
    class_<Hello>("Hello", init<std::string>())
        .def("greet", &Hello::greet)
        .def("set", &Hello::set)
    ;
}
And here is the Makefile:
FLAGS= -fno-for-scope -O2 -fPIC
CPP=c++
# BOOST v 1.50.0
p1=/home/Alexey_2/Soft/boost_1_50_0
pl1=/home/Alexey_2/Soft/boost_1_50_0/stage/lib
# PYTHON v 2.6
p2=/home/Alexey_2/Soft/python2.6/include/python2.6
pl2=/home/Alexey_2/Soft/python2.6/bin
I=-I${p1} -I${p2}
L=-L${pl1} -lboost_python -L${pl2} -lpython2.6
all: xyz.so

xyz.o: xyz.cpp
	${CPP} ${FLAGS} ${I} -c xyz.cpp

xyz.so: xyz.o
	${CPP} ${FLAGS} -shared -o xyz.so xyz.o ${L}

clean:
	rm *.o
	rm xyz.so
Some comments:
Library paths are set and I compile against the proper libraries (see more: compile some code with boost.python by mingw in win7-64bit).
The link above explains why it is important to configure user-config.jam; I did that in step 1c.
To avoid possible problems with statically linked Boost.Python libraries (as mentioned in the link above and also in Cannot link boost.python with mingw, although that one is about MinGW), I use
link=shared
as the argument to bjam (see 1d).
As explained in MinGW + Boost: undefined reference to `WSAStartup#8', the libraries one wants to link against should be listed after the object files; that is why we have:
${CPP} ${FLAGS} -shared -o xyz.so xyz.o ${L}
and not
${CPP} ${FLAGS} -shared ${L} -o xyz.so xyz.o
And here is the relevant piece of my .bash_profile, where I define the environment variables:
# Python
export PATH=/home/Alexey_2/Soft/python2.6/bin:$PATH
export PYTHONPATH=/home/Alexey_2/Soft/python2.6
export LD_LIBRARY_PATH=/home/Alexey_2/Soft/python2.6/lib:/home/Alexey_2/Soft/python2.6/bin:$LD_LIBRARY_PATH
# Boost
export BOOST_ROOT=/home/Alexey_2/Soft/boost_1_50_0
export LD_LIBRARY_PATH=/home/Alexey_2/Soft/boost_1_50_0/stage/lib:$LD_LIBRARY_PATH
export PATH=/home/Alexey_2/Soft/boost_1_50_0/stage/lib:$PATH
# bjam
export PATH=/home/Alexey_2/Soft/boost_1_50_0/tools/build/v2:$PATH
Finally, to the problem. With the above setup I was able to successfully build the Python extension:
xyz.so
However, when I test it with a simple script:
# this is a test.py script
import xyz
I get an ImportError:
$ python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
import xyz
ImportError: No module named xyz
It has been noted that the reason for such a problem may be that the wrong python executable is being used, but this is not the case:
$ which python
/home/Alexey_2/Soft/python2.6/bin/python
which is expected (note that python is a symbolic link to python2.6 in that directory).
Here is one more useful piece of information I have:
$ ldd xyz.so
ntdll.dll => /cygdrive/c/Windows/SysWOW64/ntdll.dll (0x76fa0000)
kernel32.dll => /cygdrive/c/Windows/syswow64/kernel32.dll (0x76430000)
KERNELBASE.dll => /cygdrive/c/Windows/syswow64/KERNELBASE.dll (0x748e0000)
cygboost_python.dll => /home/Alexey_2/Soft/boost_1_50_0/stage/lib/cygboost_python.dll (0x70cc0000)
cygwin1.dll => /usr/bin/cygwin1.dll (0x61000000)
cyggcc_s-1.dll => /usr/bin/cyggcc_s-1.dll (0x6ff90000)
cygstdc++-6.dll => /usr/bin/cygstdc++-6.dll (0x6fa90000)
libpython2.6.dll => /home/Alexey_2/Soft/python2.6/bin/libpython2.6.dll (0x67ec0000)
??? => ??? (0x410000)
I'm wondering what
??? => ??? (0x410000)
could possibly mean. Maybe this is what I'm missing. But what is it?
Any comments and suggestions (not only about the last question) are very much appreciated.
EDIT:
Following the suggestion of twsansbury to examine the Python module search path with the -vv option:
python -vv test.py
gives
# trying /home/Alexey_2/Programming/test/xyz.dll
# trying /home/Alexey_2/Programming/test/xyzmodule.dll
# trying /home/Alexey_2/Programming/test/xyz.py
# trying /home/Alexey_2/Programming/test/xyz.pyc
...
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.dll
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyzmodule.dll
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.py
# trying /home/Alexey_2/Soft/python2.6/lib/python2.6/site-packages/xyz.pyc
Traceback (most recent call last):
File "test.py", line 1, in <module>
import xyz
ImportError: No module named xyz
The first directory is the one from which I call python to run the script. The main conclusion is that Cygwin Python looks for modules (libraries) with the standard Windows extension, .dll (among three other suffixes), not the .so I originally expected given Cygwin's Linux-emulation style. So changing the following lines in the previous Makefile to:
all: xyz.dll

xyz.o: xyz.cpp
	${CPP} ${FLAGS} ${I} -c xyz.cpp

xyz.dll: xyz.o
	${CPP} ${FLAGS} -shared -o xyz.dll xyz.o ${L}

clean:
	rm *.o
	rm xyz.dll
produces xyz.dll, which can successfully be loaded:
python -vv test.py
now gives:
Python 2.6.8 (unknown, Mar 21 2013, 17:13:04)
[GCC 4.5.3] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
# trying /home/Alexey_2/Programming/test/xyz.dll
dlopen("/home/Alexey_2/Programming/test/xyz.dll", 2);
import xyz # dynamically loaded from /home/Alexey_2/Programming/test/xyz.dll
That ImportError is generally not Boost.Python related. Rather, it normally indicates that xyz is not on the Python module search path.
To debug this, consider running python with the -vv argument. This will cause Python to print a message for each file that is checked when trying to import xyz. Regardless, the build process looks correct, so the problem is likely either the file extension or the module not being on the search path.
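You can also print the search path directly from the interpreter you are invoking, for example:

# show exactly where "import xyz" will look for the module
import sys
for entry in sys.path:
    print(entry)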
I am unsure how Cygwin interacts with Python's run-time loading behavior (a quick check is sketched right after this list). However:
On Windows, Python extensions have a .pyd extension.
On Linux, Python extensions have a .so extension.
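A quick way to see which extension-module suffixes a particular interpreter will actually load is sketched below (it uses the imp module, which exists on the Python 2.6 from the question but is deprecated in Python 3):

# print the suffixes this interpreter accepts for C extension modules;
# on the Cygwin build from the question this should include ".dll",
# matching the -vv trace shown in the question's edit
import imp
for suffix, mode, module_type in imp.get_suffixes():
    if module_type == imp.C_EXTENSION:
        print(suffix)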
Additionally, verify that the xyz library is located in one of the following:
The directory containing the test.py script (or the current directory).
A directory listed in the PYTHONPATH environment variable.
The installation-dependent default directory.
If the unresolved library shown in ldd causes errors, it will generally manifest as an ImportError with a message indicating undefined references.
So, I am searching for a good tool to integrate my C++ code with Python, and at first I looked at Boost.Python.
I took the hello example from the Boost documentation and tried to build and run it. The source code is (src/hello.cpp):
#include <Python.h>
#include <boost/python.hpp>
char const* greet()
{
return "hello, world";
}
BOOST_PYTHON_MODULE(hello_ext)
{
using namespace boost::python;
def("greet", greet);
}
Problem 1 - Windows and MinGW
I try to build, and my result is:
g++ -o build\hello.o -c -IE:\Programming\libs\boost_1_48_0 -IE:\Programming\Python\include src\hello.cpp
g++ -shared -o pyhello.dll build\hello.o -LE:\Programming\libs\boost_1_48_0\stage\lib -LE:\Programming\Python\libs -lboost_python-mgw45-mt-1_48 -lpython27 -Wl,--out-implib,libpyhello.a
Creating library file: libpyhello.a
build\hello.o:hello.cpp:(.text+0x20): undefined reference to `_imp___ZN5boost6python6detail11init_moduleEPKcPFvvE'
Also 4 similar undefined-reference errors involving boost::python.
My Boost build command line: bjam toolset=gcc variant=release
I found similar problems on Google (and on Stack Overflow too), but didn't find an answer for my case.
Problem 2 - Using the module (Linux)
On Linux I have no problem building the module; the same source compiles well:
g++ -o build/hello.os -c -fPIC -I/usr/include/python2.7 src/hello.cpp
g++ -o libpyhello.so -shared build/hello.os -lboost_python -lpython2.7
Now, how can I use that? The documentation says nothing about module naming; quote:
can be exposed to Python by writing a Boost.Python wrapper:
#include <boost/python.hpp>
BOOST_PYTHON_MODULE(hello_ext)
{
using namespace boost::python;
def("greet", greet);
}
That's it. We're done. We can now build this as a shared library. The
resulting DLL is now visible to Python. Here's a sample Python
session:
>>> import hello_ext
>>> print hello_ext.greet()
hello, world
So, my module is named libpyhello.so, but how can I use it in the Python interpreter? I tried import pyhello, hello_ext, and libpyhello, and only with libpyhello does the interpreter print:
ImportError: dynamic module does not define init function (initlibpyhello)
All other import variants failed with: ImportError: No module named pyhello
UPDATE (2nd question): Solved. The *.so module must be named after the ID used in BOOST_PYTHON_MODULE. After I changed BOOST_PYTHON_MODULE(hello_ext) to BOOST_PYTHON_MODULE(libpyhello), the module imported fine as libpyhello.
It's important that the library file is named after the module you declare here:
BOOST_PYTHON_MODULE(hello_ext)
that is, hello_ext.pyd (on Windows) or hello_ext.so (on Linux).
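With the file named to match the module, the sample session from the documentation works as expected:

# assumes the shared library built above is named hello_ext.pyd (Windows) or hello_ext.so (Linux)
import hello_ext
print(hello_ext.greet())   # prints: hello, world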
Hi, I had the same problem as yours under Win7 32-bit with MinGW; however, I finally fixed it.
The possible solution is:
When building the Boost.Python library, use link=shared instead,
like:
bjam stage toolset=gcc --with-python link=shared threading=multi runtime-link=shared variant=release,debug --user-config=user-config.jam cxxflags="-include cmath "
When linking, use the BOOST_PYTHON_STATIC_LIB macro explicitly.
The following is a sample command line:
g++ hello_ext.cpp -shared -O3 -DBOOST_PYTHON_STATIC_LIB -lboost_python -lpython25 -o hello_ext.pyd
To save your time, just add some lines to the file boost\python.hpp:
#include <cmath> //fix cmath:1096:11: error: '::hypot' has not been declared
#if defined(__MINGW32__) && defined(_WIN32)
#if !defined(BOOST_PYTHON_SOURCE)
#define BOOST_PYTHON_STATIC_LIB
#endif
#endif
... here,other includes files ...
Then, you can simply use a command like this:
g++ hello_ext.cpp -shared -lboost_python -lpython25 -o hello_ext.pyd
This command should work; give it a try.