Problems while creating a c++ extension with cython - c++

I'm working on osx 10.8.4 64 bit with python 2.7, cython 0.19.1 and numpy 1.6.1.
I'm trying to create a C++ extension to be used with Python. The C++ code is given, and I wrote a wrapper C++ class to make using the needed functions from Python easier. Compiling works, but importing the extension module causes the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: dlopen(./mserP.so, 2): Symbol not found: __ZN4mser12MSERDetectorC1Ejj
Referenced from: ./mserP.so
Expected in: flat namespace
in ./mserP.so
I tried a smaller example with a simple C++ class whose method takes a NumPy array as its argument. Importing and using that extension module works great!
Here is the wrapper class (mser_wrapper.cpp):
#include "mser_wrapper.h"
#include "mser.h"
#include <iostream>

namespace mser {

CallMser::CallMser(unsigned int imageSizeX, unsigned int imageSizeY)
{
    // Create MSERDetector and store it in the member pointer
    // (a local variable here would leak and leave the member uninitialized)
    detector = new mser::MSERDetector(imageSizeX, imageSizeY);
}

CallMser::~CallMser()
{
    delete detector;
}

}
And here is the Cython file (mserP.pyx):
# distutils: language = c++
# distutils: sources = mser_wrapper.cpp

cdef extern from "mser_wrapper.h" namespace "mser":
    cdef cppclass CallMser:
        CallMser(unsigned int, unsigned int) except +

cdef class PyCallMser:
    cdef CallMser *thisptr

    def __cinit__(self, unsigned int imageSizeX, unsigned int imageSizeY):
        self.thisptr = new CallMser(imageSizeX, imageSizeY)

    def __dealloc__(self):
        del self.thisptr
Last but not least, the setup.py:
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules = cythonize(
    "mserP.pyx",                  # our Cython source
    sources=["mser_wrapper.cpp"], # additional source file(s)
    language="c++",               # generate C++ code
))
The class "MSERDetector" exists in namespace "mser" but cannot be found. It's defined in the header file "mser.h", which is included by my wrapper class.
Does anybody have an idea what the problem could be? Thanks!

You are missing the object code from mser.cpp. Tell Cython to include it by adding it to the sources in setup.py and to the # distutils: sources comment in the Cython file.
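For concreteness, a sketch of what that change could look like. The file names are taken from the question; that the MSERDetector implementation actually lives in a file called mser.cpp is an assumption:

```python
# setup.py can stay minimal; the fix is in the mserP.pyx header,
# which must list every C++ source file whose symbols the extension needs:
#
#   # distutils: language = c++
#   # distutils: sources = mser_wrapper.cpp mser.cpp
#
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("mserP.pyx"))
```

With mser.cpp compiled into the extension, the mangled constructor symbol (__ZN4mser12MSERDetectorC1Ejj) is present in mserP.so and dlopen no longer fails.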

Related

Running Robot Framework Script From Python Library which Loads cdll throwing error

from ctypes import cdll
from time import sleep

class SDKLibrary(object):
    def __init__(self):
        self.lib = cdll.LoadLibrary("sharedlibrary.so")
        self.sdk = self.lib.SDK_new()

    def Function(self):
        self.lib.Function1(self.sdk, 1, 2)

x = SDKLibrary()
x.Function()  # This call from Python works fine
But when I try to call Function from Robot Framework, it throws an error: no arguments failed: OSError: sharedlibrary.so: cannot open shared object file: No such file or directory
Not sure what your file structure looks like, but I had this same issue when I put my Robot test scripts in a different folder than my Python libraries. I solved it by prepending an absolute file path to the C library.
def __init__(self):
    self.lib = cdll.LoadLibrary("/absolute/file/path/" + "sharedlibrary.so")
    self.sdk = self.lib.SDK_new()

Using c++ complex functions in Cython

I am trying to do a complex exponential in Cython.
I have been able to cobble together the following code for my pyx:
from libc.math cimport sin, cos, acos, exp, sqrt, fabs, M_PI, floor, ceil

cdef extern from "complex.h":
    double complex cexp(double complex z)

import numpy as np
cimport numpy as np
import cython
from cython.parallel cimport prange, parallel

def try_cexp():
    cdef:
        double complex rr1
        double complex rr2

    rr1 = 1j
    rr2 = 2j
    print(rr1*rr2)
    #print(cexp(rr1))
Note that print(cexp(rr1)) is commented out. When that line is active, I get the following error when running setup.py:
error: command 'C:\\WinPYthon\\Winpython-64bit-3.4.3.6\\python-3.4.3.amd64\\scripts\\gcc.exe' failed with exit status 1
Note that when cexp is commented out, everything runs as expected: I can run setup.py, and when I test the function it prints the product of the two complex numbers.
Here is my setup.py file. Note that it includes code to enable OpenMP in Cython using g++:
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np
import os

os.environ["CC"] = "g++-4.7"
os.environ["CXX"] = "g++-4.7"

# These were added based on some examples I had seen of cexp in Cython. No effect.
#import pyximport
#pyximport.install(reload_support=True)

ext_modules = [
    Extension('complex_test',
              ['complex_test.pyx'],
              language="c++",
              extra_compile_args=['-fopenmp'],
              extra_link_args=['-fopenmp', '-lm'])
              # Note that '-lm' was added due to an example where someone
              # mentioned g++ required it. Same results with and without it.
]

setup(
    name='complex_test',
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
    include_dirs=[np.get_include()]
)
Ultimately my goal is to speed up a calculation that looks like k*exp(z), where k and z are complex. Currently I am using numexpr, but it has a large memory overhead, and I believe it is possible to optimize further than it can.
Thank you for your help.
You're using C's cexp instead of exp as it is in C++. Change your cdef extern to:
cdef extern from "<complex.h>" namespace "std":
    double complex exp(double complex z)
    float complex exp(float complex z)  # overload
and your print call to:
print(exp(rr1))
and it should work as a charm.
I know the compilation messages are lengthy, but in there you can find the error that points to the culprit:
complex_test.cpp: In function ‘PyObject* __pyx_pf_12complex_test_try_cexp(PyObject*)’:
complex_test.cpp:1270:31: error: cannot convert ‘__pyx_t_double_complex {aka std::complex<double>}’ to ‘__complex__ double’ for argument ‘1’ to ‘__complex__ double cexp(__complex__ double)’
__pyx_t_3 = cexp(__pyx_v_rr1);
It's messy, but you can see the cause: you're supplying a C++-defined type (__pyx_t_double_complex in Cython jargon) to a C function which expects a different type (__complex__ double).
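As a quick sanity check on the math itself (separate from the Cython build), the k*exp(z) product the asker wants to speed up can be evaluated with plain Python's cmath; a fixed Cython extension should agree with this. The values of k and z here are made up for illustration:

```python
import cmath
import math

k = 2 + 1j  # hypothetical complex coefficient
z = 1j      # hypothetical complex exponent

result = k * cmath.exp(z)

# Euler's formula: exp(i*y) = cos(y) + i*sin(y), so exp(1j) = cos(1) + i*sin(1)
expected = k * complex(math.cos(1.0), math.sin(1.0))

assert abs(result - expected) < 1e-12
```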

ImportError: No module named stanford_segmenter

The StanfordSegmenter does not have an interface in NLTK, unlike StanfordPOSTagger or StanfordNER. So to use it, I basically have to create an interface for StanfordSegmenter manually, namely stanford_segmenter.py under ../nltk/tokenize/. I followed the instructions here: http://textminingonline.com/tag/chinese-word-segmenter
However, when I tried to run from nltk.tokenize.stanford_segmenter import stanford_segmenter, I got an error:
Traceback (most recent call last):
File "C:\Users\qubo\Desktop\stanfordparserexp.py", line 48, in <module>
from nltk.tokenize.stanford_segmenter import stanford_segmenter
ImportError: No module named stanford_segmenter
[Finished in 0.6s]
The instructions mention reinstalling NLTK after creating stanford_segmenter.py. I don't quite get the point, but I did so anyway. However, the process can hardly be called a 'reinstall'; it is more like detaching and reconnecting NLTK to the Python libs.
I'm using 64-bit Windows and Python 2.7.11. NLTK and all relevant packages are updated to the latest versions. I wonder if you can shed some light on this. Thank you all so much.
I was able to import the module by running the following code:
import imp
yourmodule = imp.load_source("module_name.py", "/path/to/module_name.py")
yourclass = yourmodule.TheClass()
yourclass is an instance of the class, and TheClass is the name of the class you want to create the object from. This is similar to the use of:
from pkg_name.module_name import TheClass
So in the case of StanfordSegmenter, the complete lines of code are as follows:
# -*- coding: utf-8 -*-
import imp
import os
ini_path = 'D:/jars/stanford-segmenter-2015-04-20/'
os.environ['STANFORD_SEGMENTER'] = ini_path + 'stanford-segmenter-3.5.2.jar'
stanford_segmenter = imp.load_source("stanford_segmenter", "C:/Users/qubo/Miniconda2/pkgs/nltk-3.1-py27_0/Lib/site-packages/nltk/tokenize/stanford_segmenter.py")
seg = stanford_segmenter.StanfordSegmenter(path_to_model='D:/jars/stanford-segmenter-2015-04-20/data/pku.gz', path_to_jar='D:/jars/stanford-segmenter-2015-04-20/stanford-segmenter-3.5.2.jar', path_to_dict='D:/jars/stanford-segmenter-2015-04-20/data/dict-chris6.ser.gz', path_to_sihan_corpora_dict='D:/jars/stanford-segmenter-2015-04-20/data')
sent = '我有一只小毛驴我从来也不骑。'
text = seg.segment(sent.decode('utf-8'))
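For what it's worth, imp is deprecated on Python 3; the same dynamic load can be done with importlib. A sketch, where the helper name and the usage path are placeholders rather than anything from NLTK:

```python
import importlib.util

def load_module_from_path(module_name, path):
    """importlib equivalent of imp.load_source: load a module from a file path."""
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Usage (hypothetical path):
# stanford_segmenter = load_module_from_path(
#     "stanford_segmenter",
#     "C:/path/to/nltk/tokenize/stanford_segmenter.py")
```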

How to configure pyximport to always make a cpp file? [duplicate]

pyximport is super handy but I can't figure out how to get it to engage the C++ language options for Cython. From the command line you'd run cython --cplus foo.pyx. How do you achieve the equivalent with pyximport? Thanks!
One way to make Cython create C++ files is to use a pyxbld file. For example, create foo.pyxbld containing the following:
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(name=modname,
                     sources=[pyxfilename],
                     language='c++')
Here's a hack.
The following code monkey-patches the get_distutils_extension function in pyximport so that the Extension objects it creates all have their language attribute set to c++.
import pyximport
from pyximport import install

old_get_distutils_extension = pyximport.pyximport.get_distutils_extension

def new_get_distutils_extension(modname, pyxfilename, language_level=None):
    extension_mod, setup_args = old_get_distutils_extension(modname, pyxfilename, language_level)
    extension_mod.language = 'c++'
    return extension_mod, setup_args

pyximport.pyximport.get_distutils_extension = new_get_distutils_extension
Put the above code in pyximportcpp.py. Then, instead of using import pyximport; pyximport.install(), use import pyximportcpp; pyximportcpp.install().
A more lightweight, less intrusive solution is to use setup_args/script_args, which pyximport passes to the distutils machinery it uses under the hood:
import pyximport

script_args = ["--cython-cplus"]
setup_args = {
    "script_args": script_args,
}
pyximport.install(setup_args=setup_args, language_level=3)
Other options for python setup.py build_ext can be passed in a similar manner, e.g. script_args = ["--cython-cplus", "--force"].
The corresponding part of the documentation mentions the usage of setup_args, but the exact meaning is probably clearest from the code itself (here is a good starting point).
You can have pyximport recognize the header comment # distutils: language = c++ by having pyximport build extensions with the cythonize command. To do so, create a new file filename.pyxbld next to your filename.pyx:
# filename.pyxbld
from Cython.Build import cythonize

def make_ext(modname, pyxfilename):
    return cythonize(pyxfilename, language_level=3, annotate=True)[0]
and now you can use the distutils header comments:
# filename.pyx
# distutils: language = c++
Pyximport will use the make_ext function from your .pyxbld file to build the extension. And cythonize will recognize the distutils header comments.

cython c++ undefined reference to std::ios_base::failure

I just wrote Cython code as simple as this:
# distutils: language = c++
from libcpp.map cimport map, pair
from ios import *

cdef map[int,int] * u = new map[int,int]()

cdef add_item(int n, int x):
    cdef pair[int,int] p = pair[int,int](n,x)
    u.insert(p)

def add(int n, int x):
    add_item(n,x)
and added a build file like
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(name=modname,
                     sources=[pyxfilename],
                     language='C++')
and ran a simple script like
import hello
with the lines
import pyximport
pyximport.install()
in my sitecustomize.py
At script execution I get ImportError: Building module hello failed: ['ImportError: /home/odomontois/.pyxbld/lib.linux-x86_64-2.7/hello.so: undefined symbol: _ZTINSt8ios_base7failureE\n']
c++filt _ZTINSt8ios_base7failureE prints typeinfo for std::ios_base::failure
Is there any possibility to find out which object file or library I should link against, and how to do this in my pyxbld file, for example?
Resolved by adding
libraries=["stdc++"]
to the pyxbld.
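Putting the pieces together, the pyxbld from the question with the extra libraries argument might look like this (a sketch; distutils is used as in the original build file):

```python
# hello.pyxbld
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    # Link libstdc++ explicitly so symbols such as
    # typeinfo for std::ios_base::failure resolve at import time
    return Extension(name=modname,
                     sources=[pyxfilename],
                     language='c++',
                     libraries=['stdc++'])
```

With this, pyximport passes -lstdc++ to the linker when building hello.so, and the undefined-symbol ImportError goes away.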