python-for-android, Cython, C++, CythonRecipe: Operation only allowed in c++

I have this setup.py for my Cython project:
from setuptools import setup
from Cython.Build import cythonize

setup(
    name = 'phase-engine',
    version = '0.1',
    ext_modules = cythonize(["phase_engine.pyx"] + ['music-synthesizer-for-android/src/' + p for p in [
        'fm_core.cc', 'dx7note.cc', 'env.cc', 'exp2.cc', 'fm_core.cc', 'fm_op_kernel.cc', 'freqlut.cc', 'lfo.cc', 'log2.cc', 'patch.cc', 'pitchenv.cc', 'resofilter.cc', 'ringbuffer.cc', 'sawtooth.cc', 'sin.cc', 'synth_unit.cc'
    ]],
        include_path = ['music-synthesizer-for-android/src/'],
        language = 'c++',
    )
)
When I run buildozer, it gets angry about some Cython features only being available in C++ mode:
    def __dealloc__(self):
        del self.p_synth_unit
        ^
------------------------------------------------------------
phase_engine.pyx:74:8: Operation only allowed in c++
From which I understand it's ignoring my setup.py and doing its own thing somehow. How do I give it all these parameters?

CythonRecipe doesn't work well for Cython code that imports C/C++ code. Try CompiledComponentsPythonRecipe, or if you're having issues with #include <ios> or some other thing from the C++ STL, CppCompiledComponentsPythonRecipe:
from pythonforandroid.recipe import IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe
import os
import sys

class MyRecipe(IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe):
    version = 'stable'
    src_filename = "../../../phase-engine"
    name = 'phase-engine'
    depends = ['setuptools']
    call_hostpython_via_targetpython = False
    install_in_hostpython = True

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        env['LDFLAGS'] += ' -lc++_shared'
        return env

recipe = MyRecipe()
The dependency on setuptools is essential; without it you get the error "no module named setuptools". The two other flags (call_hostpython_via_targetpython and install_in_hostpython) were also related to that error; the internet said they're relevant, so I tried combinations of values until one worked.
The LDFLAGS thing fixes an issue I had later (see buildozer + Cython + C++ library: dlopen failed: cannot locate symbol symbol-name referenced by module.so).
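For reference, buildozer has to be pointed at the recipe before it will use it. The usual way (this layout is an assumption for illustration, not something from the original post) is a local recipes directory referenced from buildozer.spec via p4a.local_recipes, with one subdirectory per recipe holding the MyRecipe code above in its __init__.py:

recipes/
    phase-engine/
        __init__.py      # the MyRecipe code above

# buildozer.spec
p4a.local_recipes = ./recipes
# and add phase-engine to the requirements list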

Related

cx_Freeze fails to freeze script due to mpl_toolkits

I am trying to freeze a program using the following setup script:
import cx_Freeze
import sys
import os

base = None
if sys.platform == 'win32':
    base = "Win32GUI"

executables = [cx_Freeze.Executable("Electric Field API.py", base=base, icon=os.getcwd()+"\\bin\\EFAPIicon.ico")]

cx_Freeze.setup(
    name = "Electric Field API",
    options = {"build_exe": {'includes': ['numpy.core._methods','numpy.lib.format','tkFileDialog','FileDialog'], 'packages': ["matplotlib",'Tkinter','FileDialog','tkFileDialog'], "include_files":[os.getcwd()+"\\bin\\EFAPIicon.ico"]}},
    version = "1.3",
    description = "Electric Field Visualization",
    executables = executables
)
Unfortunately, when these imports are listed in the setup.py file and I run it, I receive an error from PowerShell.
If anyone has a way to solve this issue, it would be greatly appreciated.
Apparently mpl_toolkits is a namespace package (it has no __init__.py), hence it has to be treated differently (I read a little about this on Bitbucket, thanks D. Reaver).
Try adding the following to your build_exe in the options:
'namespace_packages': ['mpl_toolkits']
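Applied to the setup script in the question, that would look roughly like this (a sketch; only the options dictionary changes, the rest of the script stays as posted):

options = {"build_exe": {
    'includes': ['numpy.core._methods', 'numpy.lib.format', 'tkFileDialog', 'FileDialog'],
    'packages': ["matplotlib", 'Tkinter', 'FileDialog', 'tkFileDialog'],
    'namespace_packages': ['mpl_toolkits'],
    "include_files": [os.getcwd() + "\\bin\\EFAPIicon.ico"],
}}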

Imported library 'owaspapi' contains no keywords. (if it's installed using pip)

I have made a library for Robot Framework (myapi.py). If I place it in the same directory with my robot test I can import the library like this:
Library    myapi.py
It works just fine.
However, I made the library pip installable so that others may take it into use in other projects easily. The library installs just fine with pip. I also changed the robot test to import the library like this:
Library    myapi
When I run the robot test I get warning:
[ WARN ] Imported library 'myapi' contains no keywords.
Here's the (pip installable) library file structure:
setup.py
myapi/
    __init__.py
    myapi.py
    version.py
The setup.py content is:
from setuptools import setup, find_packages

exec(open('myapi/version.py').read())

setup(
    name='myapi',
    version=__version__,
    packages=['myapi'],
    install_requires=['requests']
)
The __init__.py content is:
from .version import __version__
The version.py content is:
__version__ = '1.1.0'
The myapi.py content is (I've included only the first function I have):
import requests
import time
from time import strftime
import urllib2

__all__ = ['create_new_MY_session']

def create_new_MY_session():
    session_name = strftime('my_session_%S_%H_%M_%d_%m_%Y')
    r = requests.get("http://localhost:8080/JSON/core/action/newSession/?zapapiformat=JSON&name=" + session_name + "/'")
    print ("Creating new session: " + session_name + ". Status code...")
    print (r.status_code)
    assert (r.status_code) == 200
And finally the beginning of the robot test (login.robot):
*** Settings ***
Suite Setup        Open Firefox With Proxy
Suite Teardown     Close Browser
Library            mypapi
Library            OperatingSystem
Library            Selenium2Library
Resource           ws_keywords/product/webui.robot

*** Test Cases ***
MY Start New MY Session
    Create New MY Session
I wonder: if the library works just fine when located right next to the robot test, what am I missing when I make it pip installable? Why does it complain that there are no keywords?
In your myapi.py file you are missing the class reference. When the file is placed inside your Robot Framework project this wasn't an issue, but when creating a pip installable module, it is required. A basic Python library code example is this:
myapi.py
class myapi(object):
    ROBOT_LIBRARY_VERSION = 1.0

    def __init__(self):
        pass

    def keyword(self):
        pass
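Applied to the library from the question, a minimal sketch would wrap the existing function in a class named after the module (the keyword body and URL are copied from the question; exposing the class from __init__.py with from .myapi import myapi is my assumption about how the pip-installed package gets imported, not something stated in the original):

# myapi/myapi.py (sketch)
import requests
from time import strftime

class myapi(object):
    ROBOT_LIBRARY_VERSION = 1.0

    def create_new_MY_session(self):
        session_name = strftime('my_session_%S_%H_%M_%d_%m_%Y')
        r = requests.get("http://localhost:8080/JSON/core/action/newSession/?zapapiformat=JSON&name=" + session_name + "/'")
        print("Creating new session: " + session_name + ". Status code...")
        print(r.status_code)
        assert r.status_code == 200

With this in place, the robot test can keep calling Create New MY Session as before.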

Using cython to speed up thousands of set operations

I have been trying to get over my fear of Cython (fear because I literally know NOTHING about c, or c++)
I have a function which takes 2 arguments, a set (we'll call it testSet), and a list of sets (we'll call that targetSets). The function then iterates through targetSets, and computes the length of the intersection with testSet, adding that value to a list, which is then returned.
Now, this isn't by itself that slow, but the problem is I need to do simulations of the testSet (and a large number at that, ~ 10,000), and the targetSet is about 10,000 sets long.
So for a small number of simulations to test, the pure python implementation was taking ~50 secs.
I tried making a cython function, and it worked and it's now running at ~16 secs.
If there is anything else anyone can think of that I could do to the Cython function, that would be great (Python 2.7, btw).
Here is my Cython implementation in overlapFunc.pyx
def computeOverlap(set testSet, list targetSets):
    cdef list obsOverlaps = []
    cdef int i, N
    cdef set overlap
    N = len(targetSets)
    for i in range(N):
        overlap = testSet & targetSets[i]
        if len(overlap) <= 1:
            obsOverlaps.append(0)
        else:
            obsOverlaps.append(len(overlap))
    return obsOverlaps
and the setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("overlapFunc",
                         ["overlapFunc.pyx"])]

setup(
    name = 'computeOverlap function',
    cmdclass = {'build_ext': build_ext},
    ext_modules = ext_modules
)
and some code to build some random sets for testing and to time the function. test.py
import numpy as np
from overlapFunc import computeOverlap
import time

def simRandomSet(n):
    for i in range(n):
        simSet = set(np.random.randint(low=1, high=100, size=50))
        yield simSet

if __name__ == '__main__':
    np.random.seed(23032014)
    targetSet = [set(np.random.randint(low=1, high=100, size=50)) for i in range(10000)]
    simulatedTestSets = simRandomSet(200)
    start = time.time()
    for i in simulatedTestSets:
        obsOverlaps = computeOverlap(i, targetSet)
    print time.time()-start
I tried changing the def at the start of the computeOverlap function, as in:
cdef list computeOverlap(set testSet, list targetSets):
but I get the following warning message when I run the setup.py script:
'__pyx_f_11overlapFunc_computeOverlap' defined but not used [-Wunused-function]
and then when I run something that tries to use the function I get an ImportError:
from overlapFunc import computeOverlap
ImportError: cannot import name computeOverlap
Thanks in advance for your help,
Cheers,
Davy
In the following lines, the extension module name and the filename do not match the actual filename.
ext_modules = [Extension("computeOverlapWithGeneList",
["computeOverlapWithGeneList.pyx"])]
Replace it with:
ext_modules = [Extension("overlapFunc",
["overlapFunc.pyx"])]

How to configure pyximport to always make a cpp file? [duplicate]

pyximport is super handy but I can't figure out how to get it to engage the C++ language options for Cython. From the command line you'd run cython --cplus foo.pyx. How do you achieve the equivalent with pyximport? Thanks!
One way to make Cython create C++ files is to use a pyxbld file. For example, create foo.pyxbld containing the following:
def make_ext(modname, pyxfilename):
    from distutils.extension import Extension
    return Extension(name=modname,
                     sources=[pyxfilename],
                     language='c++')
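With foo.pyxbld sitting next to foo.pyx, pyximport picks the build file up automatically on import; a minimal usage sketch (the module name foo is just a placeholder):

import pyximport
pyximport.install()

import foo  # built via foo.pyxbld, so Cython compiles it in C++ mode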
Here's a hack.
The following code monkey-patches the get_distutils_extension function in pyximport so that the Extension objects it creates all have their language attribute set to c++.
import pyximport
from pyximport import install

old_get_distutils_extension = pyximport.pyximport.get_distutils_extension

def new_get_distutils_extension(modname, pyxfilename, language_level=None):
    extension_mod, setup_args = old_get_distutils_extension(modname, pyxfilename, language_level)
    extension_mod.language = 'c++'
    return extension_mod, setup_args

pyximport.pyximport.get_distutils_extension = new_get_distutils_extension
Put the above code in pyximportcpp.py. Then, instead of using import pyximport; pyximport.install(), use import pyximportcpp; pyximportcpp.install().
A more lightweight/less intrusive solution would be to use setup_args/script_args, which pyximport would pass to distutils used under the hood:
script_args = ["--cython-cplus"]
setup_args = {
    "script_args": script_args,
}
pyximport.install(setup_args=setup_args, language_level=3)
Other options for python setup.py build_ext can be passed in a similar manner, e.g. script_args = ["--cython-cplus", "--force"].
The corresponding part of the documentation mentions the usage of setup_args, but the exact meaning is probably clearest from the code itself (here is a good starting point).
You can have pyximport recognize the header comment # distutils: language = c++ by having pyximport build extensions using cythonize. To do so, you can create a new file filename.pyxbld next to your filename.pyx:
# filename.pyxbld
from Cython.Build import cythonize

def make_ext(modname, pyxfilename):
    return cythonize(pyxfilename, language_level = 3, annotate = True)[0]
and now you can use the distutils header comments:
# filename.pyx
# distutils: language = c++
Pyximport will use the make_ext function from your .pyxbld file to build the extension. And cythonize will recognize the distutils header comments.

Speeding up build process with distutils

I am programming a C++ extension for Python and I am using distutils to compile the project. As the project grows, rebuilding it takes longer and longer. Is there a way to speed up the build process?
I read that parallel builds (as with make -j) are not possible with distutils. Are there any good alternatives to distutils which might be faster?
I also noticed that it's recompiling all object files every time I call python setup.py build, even when I only changed one source file. Should this be the case or might I be doing something wrong here?
In case it helps, here are some of the files which I try to compile: https://gist.github.com/2923577
Thanks!
Try building with the environment variable CC="ccache gcc"; that will speed up the build significantly when the source has not changed. (Strangely, distutils uses CC also for C++ source files.) Install the ccache package, of course.
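If you would rather not export the variable in your shell every time, one way (a sketch, assuming gcc and ccache are installed) is to set it from inside setup.py before setup() runs, which is also what the Windows/clcache answer further down does for its Linux branch:

# setup.py (top of file) - route compilations through ccache
import os
os.environ.setdefault('CC', 'ccache gcc')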
Since you have a single extension which is assembled from multiple compiled object files, you can monkey-patch distutils to compile those in parallel (they are independent) - put this into your setup.py (adjust the N=2 as you wish):
# monkey-patch for parallel compilation
def parallelCCompile(self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, extra_postargs=None, depends=None):
    # those lines are copied from distutils.ccompiler.CCompiler directly
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(output_dir, macros, include_dirs, sources, depends, extra_postargs)
    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
    # parallel code
    N = 2  # number of parallel compilations
    import multiprocessing.pool
    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N).imap(_single_compile, objects))
    return objects

import distutils.ccompiler
distutils.ccompiler.CCompiler.compile = parallelCCompile
For the sake of completeness, if you have multiple extensions, you can use the following solution:
import os
import multiprocessing

try:
    from concurrent.futures import ThreadPoolExecutor as Pool
except ImportError:
    from multiprocessing.pool import ThreadPool as LegacyPool

    # To ensure the with statement works. Required for some older 2.7.x releases
    class Pool(LegacyPool):
        def __enter__(self):
            return self

        def __exit__(self, *args):
            self.close()
            self.join()


def build_extensions(self):
    """Function to monkey-patch
    distutils.command.build_ext.build_ext.build_extensions
    """
    self.check_extensions_list(self.extensions)

    try:
        num_jobs = os.cpu_count()
    except AttributeError:
        num_jobs = multiprocessing.cpu_count()

    with Pool(num_jobs) as pool:
        pool.map(self.build_extension, self.extensions)


def compile(
    self, sources, output_dir=None, macros=None, include_dirs=None,
    debug=0, extra_preargs=None, extra_postargs=None, depends=None,
):
    """Function to monkey-patch distutils.ccompiler.CCompiler"""
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs
    )
    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

    for obj in objects:
        try:
            src, ext = build[obj]
        except KeyError:
            continue
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

    # Return *all* object filenames, not just the ones we just built.
    return objects


from distutils.ccompiler import CCompiler
from distutils.command.build_ext import build_ext

build_ext.build_extensions = build_extensions
CCompiler.compile = compile
I've got this working on Windows with clcache, derived from eudoxos's answer:
# Python modules
import datetime
import distutils
import distutils.ccompiler
import distutils.sysconfig
import multiprocessing
import multiprocessing.pool
import os
import sys

from distutils.core import setup
from distutils.core import Extension
from distutils.errors import CompileError
from distutils.errors import DistutilsExecError

now = datetime.datetime.now
ON_LINUX = "linux" in sys.platform
N_JOBS = 4

#------------------------------------------------------------------------------
# Enable ccache to speed up builds
if ON_LINUX:
    os.environ['CC'] = 'ccache gcc'

# Windows
else:
    # Using clcache.exe, see: https://github.com/frerich/clcache
    # Insert path to clcache.exe into the path.
    prefix = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(prefix, "bin")
    print "Adding %s to the system path." % path
    os.environ['PATH'] = '%s;%s' % (path, os.environ['PATH'])
    clcache_exe = os.path.join(path, "clcache.exe")

#------------------------------------------------------------------------------
# Parallel Compile
#
# Reference:
#
# http://stackoverflow.com/questions/11013851/speeding-up-build-process-with-distutils
#
def linux_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):
    # Copied from distutils.ccompiler.CCompiler
    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)
    cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)

    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)

    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))
    return objects


def windows_parallel_cpp_compile(
        self,
        sources,
        output_dir=None,
        macros=None,
        include_dirs=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        depends=None):
    # Copied from distutils.msvc9compiler.MSVCCompiler
    if not self.initialized:
        self.initialize()

    macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
        output_dir, macros, include_dirs, sources, depends, extra_postargs)
    compile_opts = extra_preargs or []
    compile_opts.append('/c')
    if debug:
        compile_opts.extend(self.compile_options_debug)
    else:
        compile_opts.extend(self.compile_options)

    def _single_compile(obj):
        try:
            src, ext = build[obj]
        except KeyError:
            return
        input_opt = "/Tp" + src
        output_opt = "/Fo" + obj
        try:
            self.spawn(
                [clcache_exe]
                + compile_opts
                + pp_opts
                + [input_opt, output_opt]
                + extra_postargs)
        except DistutilsExecError, msg:
            raise CompileError(msg)

    # convert to list, imap is evaluated on-demand
    list(multiprocessing.pool.ThreadPool(N_JOBS).imap(
        _single_compile, objects))
    return objects

#------------------------------------------------------------------------------
# Only enable parallel compile on 2.7 Python
if sys.version_info[1] == 7:
    if ON_LINUX:
        distutils.ccompiler.CCompiler.compile = linux_parallel_cpp_compile
    else:
        import distutils.msvccompiler
        import distutils.msvc9compiler
        distutils.msvccompiler.MSVCCompiler.compile = windows_parallel_cpp_compile
        distutils.msvc9compiler.MSVCCompiler.compile = windows_parallel_cpp_compile

# ... call setup() as usual
You can do this easily if you have Numpy 1.10 available. Just add:
try:
    from numpy.distutils.ccompiler import CCompiler_compile
    import distutils.ccompiler
    distutils.ccompiler.CCompiler.compile = CCompiler_compile
except ImportError:
    print("Numpy not found, parallel compile not available")
Use -j N or set NPY_NUM_BUILD_JOBS.
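A minimal sketch of wiring that up (assuming the try/except above is already in setup.py): the job count can also be pinned from within the script by setting the environment variable before the build starts.

# setup.py (sketch) - limit numpy's parallel compile to 4 jobs
import os
os.environ.setdefault('NPY_NUM_BUILD_JOBS', '4')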
In the limited examples you provided in the link, it seems fairly obvious that you have some misunderstandings about what some of the features of the language are. For example, gsminterface.h has a whole lot of namespace-level statics, which is probably unintended. Every translation unit that includes that header will compile its own version of every one of the symbols declared in that header. Side effects of this are not only longer compile times but also code bloat (larger binaries) and longer link times, as the linker needs to process all those symbols.
There are still many questions that affect the build process that you have not answered, for example whether you clean every time before you recompile. If you are doing that, then you might want to consider ccache, a tool that caches the result of the build process, so that if you run make clean; make target, only the preprocessor will be run for any translation unit that has not changed. Note that as long as you keep maintaining most code in headers, this will not offer much of an advantage, as a change in a header modifies all translation units that include it. (I don't know your build system, so I cannot tell you whether python setup.py build will clean or not.)
The project does not seem large otherwise, so I would be surprised if it took more than a few seconds to compile.