Python distutils does not include the SWIG-generated module - C++

I am using distutils to create an rpm from my project. I have this directory tree:
    project/
        my_module/
            data/file.dat
            my_module1.py
            my_module2.py
        src/
            header1.h
            header2.h
            ext_module1.cpp
            ext_module2.cpp
            swig_module.i
        setup.py
        MANIFEST.in
        MANIFEST
my setup.py:
    from distutils.core import setup, Extension

    module1 = Extension('my_module._module',
                        sources=['src/ext_module1.cpp',
                                 'src/ext_module2.cpp',
                                 'src/swig_module.i'],
                        swig_opts=['-c++', '-py3'],
                        include_dirs=[...],
                        runtime_library_dirs=[...],
                        libraries=[...],
                        extra_compile_args=['-Wno-write-strings'])

    setup(name='my_module',
          version='0.6',
          author='microo8',
          author_email='magyarvladimir#gmail.com',
          description='',
          license='GPLv3',
          url='',
          platforms=['x86_64'],
          ext_modules=[module1],
          packages=['my_module'],
          package_dir={'my_module': 'my_module'},
          package_data={'my_module': ['data/*.dat']})
my MANIFEST.in file:
    include src/header1.h
    include src/header2.h
The MANIFEST file is automatically generated by python3 setup.py sdist. And when I run python3 setup.py bdist_rpm, it compiles and creates correct rpm packages. But the problem is that when I'm running SWIG on a C++ source, it creates a module.py file that wraps the binary _module.cpython32-mu.so file; it is created together with the module_wrap.cpp file, and it isn't copied to the my_module directory.
What must I write in the setup.py file to automatically copy the SWIG-generated Python modules?
And I also have another question: when I install the rpm package, I want an executable to be created in /usr/bin or so, to run the application (for example, if my_module/my_module1.py is the start script of the application, then I can run in bash: $ my_module1).

The problem is that build_py (which copies python sources to the build directory) comes before build_ext, which runs SWIG.
You can easily subclass the build command and swap around the order, so build_ext produces module1.py before build_py tries to copy it.
    from distutils.command.build import build

    class CustomBuild(build):
        sub_commands = [
            ('build_ext', build.has_ext_modules),
            ('build_py', build.has_pure_modules),
            ('build_clib', build.has_c_libraries),
            ('build_scripts', build.has_scripts),
        ]

    module1 = Extension('_module1', etc...)

    setup(
        cmdclass={'build': CustomBuild},
        py_modules=['module1'],
        ext_modules=[module1]
    )
However, there is one problem with this: If you are using setuptools, rather than just plain distutils, running python setup.py install won't run the custom build command. This is because the setuptools install command doesn't actually run the build command first, it runs egg_info, then install_lib, which runs build_py then build_ext directly.
So possibly a better solution is to subclass both the build and install command, and ensure build_ext gets run at the start of both.
    from distutils.command.build import build
    from setuptools.command.install import install

    class CustomBuild(build):
        def run(self):
            self.run_command('build_ext')
            build.run(self)

    class CustomInstall(install):
        def run(self):
            self.run_command('build_ext')
            self.do_egg_install()

    setup(
        cmdclass={'build': CustomBuild, 'install': CustomInstall},
        py_modules=['module1'],
        ext_modules=[module1]
    )
It doesn't look like you need to worry about build_ext getting run twice.

It's not a complete answer, because I don't have the complete solution.
The module is not copied to the install directory because it wasn't present when the setup process tried to copy it. The sequence of events is:
    running install
    running build
    running build_py
    file my_module.py (for module my_module) not found
    file vcanmapper.py (for module vcanmapper) not found
    running build_ext
If you run python setup.py install a second time, it will do what you wanted in the first place. The official SWIG documentation for Python proposes you run swig first to generate the wrap file, and then run setup.py install to do the actual installation.
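For example, a minimal sketch of that two-step workflow, assuming the directory layout from the question (the -outdir flag is my addition; it tells SWIG to drop the generated swig_module.py inside the package so build_py can pick it up):

    # sketch: pre-generate the SWIG wrapper before running setup.py
    import subprocess

    subprocess.check_call([
        'swig', '-c++', '-python', '-py3',
        '-outdir', 'my_module',   # write the generated .py next to the package sources
        'src/swig_module.i',
    ])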

It looks like you have to add a py_modules option, e.g.:
    setup(...,
          ext_modules=[Extension('_foo', ['foo.i'],
                                 swig_opts=['-modern', '-I../include'])],
          py_modules=['foo'],
    )
To use rpm to install system scripts on Linux, you'll have to modify your spec file. The %files section tells rpm where to put the files, which you can move or link to in %post; those hooks can be defined in setup.py using:
    options = {'bdist_rpm': {'post_install': 'post_install', 'post_uninstall': 'post_uninstall'}},
Running Python scripts in bash can be done with the usual first line #!/usr/bin/python and the executable bit set on the file using chmod +x filename.
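For the executable-in-/usr/bin part, a hedged sketch of the simpler route distutils itself offers: the scripts option installs launcher files into the interpreter's bin directory (/usr/bin for a typical system install). The scripts/my_module1 file here is a hypothetical wrapper, not part of the original project:

    from distutils.core import setup

    setup(
        name='my_module',
        version='0.6',
        packages=['my_module'],
        # hypothetical launcher script: a file whose first line is
        # #!/usr/bin/python and which imports and runs my_module.my_module1;
        # distutils installs it into the bin directory and marks it executable
        scripts=['scripts/my_module1'],
    )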

Related

How to provide sys.path in python script to docker file

I have added a sys.path entry
    sys.path.append("C:\\Program Files\\FME\\fmeobjects\\python27")
in my Python script, which works well when I run the script. I am now trying to dockerize the script. My Dockerfile is:
    FROM python:2.7-alpine
    ADD test1.py /
    CMD [ "python", "./test1.py" ]
It builds the image, but running the image gives this error:
    Traceback (most recent call last):
      File "./test1.py", line 17, in <module>
        import fmeobjects
    ImportError: No module named fmeobjects
It seems like your script cannot import fmeobjects because it is outside the container. Try adding the fmeobjects module to the directory you ADD.
What does test1.py do?
If fmeobjects is a package/module, you need to add it, as mentioned above, to the environment of the image.
You can also set up a distutils package for it and pip install it in the image.
Effectively, as currently constructed, you're trying to import a package in your script that does not exist because it has not been installed.
Even for small standalone applications, using the standard distribution tools streamlines this process significantly. This is doubly true if you have colleagues that might have different usernames, directory layouts, or even operating systems. Don't manually edit sys.path in your script.
You should write a setup.py file that uses the setuptools library. Complete documentation is here but a minimal example might look like:
    #!/usr/bin/env python
    from setuptools import setup, find_packages

    setup(
        name="fmeobjects",
        version="0.1",
        packages=find_packages(),
        entry_points={
            'console_scripts': [
                'fmeobjects = fmeobjects.main:main'
            ]
        }
    )
For development use, create a virtual environment and install your package in it.
    virtualenv vpy
    . vpy/bin/activate
    pip install -e .
The . activate line sets some additional environment variables for you, including adding the virtual environment to your $PATH. (source is an equivalent vendor extension that works in some shells; . is part of the standard and works even in minimal shells like what you get in Alpine or Busybox installations.) You can now run fmeobjects at the shell prompt, which will call the main() function in fmeobjects/main.py (see the entry_points declaration).
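For concreteness, here is a minimal sketch of the fmeobjects/main.py module that the console_scripts line above refers to; the file name and main() function are assumptions implied by the entry_points declaration, not code from the question:

    # hypothetical fmeobjects/main.py; the installed 'fmeobjects' command calls main()
    def main():
        print("fmeobjects CLI stub")

    if __name__ == '__main__':
        main()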
You have a couple of options of how to install this in Docker. Probably the most straightforward is to simply import your source tree and install it. Since Docker containers provide isolated filesystems and generally do only one thing, there's not much point in supporting an isolated Python installation within that; just install your package into the global Python.
    FROM python:2.7
    WORKDIR /usr/src/app
    COPY . .
    RUN pip install .
    CMD ["fmeobjects"]
(If your virtual environment is in your source tree, you can add vpy to a .dockerignore file to cause it to not be copied, saving time and space.)

Linux platform tag for python module built with pybind11

I am using pybind11 and build the python module with setuptools and cmake as described in pybind/cmake_example:
    setup(
        name='libraryname',
        ...
        ext_modules=[CMakeExtension('libraryname')],
        cmdclass=dict(build_ext=CMakeBuild),
    )
Locally, using python setup.py sdist build everything is fine and I can use and/or install the package from the generated files.
I now want to upload the package to PyPI.
From a different python package I know how to generate a general linux library (see also here) by manipulating the platform tag of a wheel:
    from distutils.util import get_platform
    from wheel.bdist_wheel import bdist_wheel as bdist_wheel_

    class bdist_wheel(bdist_wheel_):
        def finalize_options(self):
            from sys import platform as _platform
            platform_name = get_platform()
            if _platform == "linux" or _platform == "linux2":
                # Linux
                platform_name = 'manylinux1_x86_64'
            bdist_wheel_.finalize_options(self)
            self.universal = True
            self.plat_name_supplied = True
            self.plat_name = platform_name

    setup(
        ...
        cmdclass={'bdist_wheel': bdist_wheel},
    )
The Question:
How to generate the appropriate platform tag when no bdist_wheel is built?
Should this be somehow built as wheel instead of as an extension (possibly related to this issue on GH)?
Also, how does pybind11 decide the suffix of the generated libraries (on my linux it is not just .so but .cpython-35m-x86_64-linux-gnu.so)?
Follow-up:
The main problem is that I cannot upload the current Ubuntu-built package to PyPI:
    ValueError: Unknown distribution format: 'libraryname-0.8.0.cpython-35m-x86_64-linux-gnu.so'
If the platform tag cannot or should not be changed: what is best practice for uploading a pybind11 module to PyPI across platforms?
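On the suffix sub-question: the .cpython-35m-x86_64-linux-gnu.so suffix is not invented by pybind11; CPython publishes it through sysconfig, and build tools generally query it from there. A quick check, as a sketch:

    import sysconfig

    # CPython reports the platform-specific extension-module suffix here
    print(sysconfig.get_config_var('EXT_SUFFIX'))
    # e.g. '.cpython-35m-x86_64-linux-gnu.so'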
My bad!
It turns out the confusion was due to a build error I had when I initially tried running python setup.py sdist bdist_wheel.
Manually building with python setup.py build was not the right approach for publishing the package.
Note: the name of the .so file needed to be set without the -0.8.0 version identifier in order for Python to be able to import it from the wheel.
To summarize:
Building and publishing binary wheels works exactly the same with pybind11 as with e.g. Cython, and it should work just fine to follow the pybind/cmake_example.

How to use an already built Caffe when running py-faster-rcnn?

I'm trying to build and run py-faster-rcnn model on my Ubuntu 16.04.
However, when I run ./tools/demo.py (as stated in the installation guide), I get the following error:
    ➜  py-faster-rcnn git:(master) ✗ ./tools/demo.py
    Traceback (most recent call last):
      File "./tools/demo.py", line 18, in <module>
        from fast_rcnn.test import im_detect
      File "/home/denis/WEB/DeepLearning/py-faster-rcnn/tools/../lib/fast_rcnn/test.py", line 16, in <module>
        import caffe
      File "/home/denis/WEB/DeepLearning/py-faster-rcnn/tools/../caffe-fast-rcnn/python/caffe/__init__.py", line 1, in <module>
        from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver
      File "/home/denis/WEB/DeepLearning/py-faster-rcnn/tools/../caffe-fast-rcnn/python/caffe/pycaffe.py", line 13, in <module>
        from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
    ImportError: No module named _caffe
Before attempting to install py-faster-rcnn, I've installed Caffe in my ~/code/caffe folder and it seems to work fine:
    ➜  ~ python
    Python 2.7.12 (default, Nov 19 2016, 06:48:10)
    [GCC 5.4.0 20160609] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import caffe
    >>> print caffe.__version__
    1.0.0-rc3
So, if I can import caffe module in python environment, I assume I've installed it successfully.
Here're the commands I've used (from the official guide):
    sudo make all
    sudo make test
    sudo make runtest
    sudo make pycaffe
    sudo make distribute
Then I've cloned the py-faster-rcnn repository in my ~/WEB/DeepLearning folder.
After that I've followed the installation instructions from the repo:
1. Clone the repo
2. cd $FRCN_ROOT/lib && make
3. cd $FRCN_ROOT/caffe-fast-rcnn
4. make -j8 && make pycaffe (I didn't run this)
5. cd $FRCN_ROOT && ./data/scripts/fetch_faster_rcnn_models.sh
6. cd $FRCN_ROOT && ./tools/demo.py
So, step 4 in the installation guide says I have to build caffe and pycaffe in the $FRCN_ROOT/caffe-fast-rcnn folder. The contents of the caffe-fast-rcnn folder seem to be identical to the original caffe repository from which I've built Caffe.
So, it seems that I don't need to build caffe again, right? When trying to run the demo, I've skipped the step of building caffe and got the error stated above.
After googling for a while, I've found out that my issue is connected with path environment variables, so below are my path variables in .bashrc:
    export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:~/code/caffe/distribute/lib:$LD_LIBRARY_PATH
    export CPLUS_INCLUDE_PATH=/usr/include/python2.7
    export PYTHONPATH=~/code/caffe/python:$PYTHONPATH
Am I doing something wrong, and do I have to change my path variables somehow?
Or do I really need to build caffe again, but in the caffe-fast-rcnn folder?
And what about the distribute folder I've generated in ~/code/caffe/distribute by running sudo make distribute? Is it of any use? If so, how should I use it? The official documentation is not very clear about it.
A simple, clear and detailed explanation on how to use an already built Caffe framework with other projects like Faster-RCNN would be really helpful.
I struggled with this for a while and then got it working as below.
First, check the PYTHONPATH env variable. It should contain a Python path, for example (based on your Python version and installation):
    PYTHONPATH=/usr/lib/python2.7
If it's empty, you can set it with the Python path captured in your Python shell. To check the Python path information, open a Python shell and type:
    >>> import sys
    >>> for p in sys.path:
    ...     print(p)
It will list all the Python path info, for example:
    ...
    /usr/lib/python2.7
    /usr/lib/python2.7/plat-x86_64-linux-gnu
    /usr/lib/python2.7/lib-tk
    /usr/lib/python2.7/lib-old
    /usr/lib/python2.7/lib-dynload
    ...
If you have installed caffe already and want to configure it to be used by Python, you just need to update your PYTHONPATH env variable by adding the path to your /caffe-installation-path/python folder, like:
    export PYTHONPATH=/home/mypc/caffe-master/python:$PYTHONPATH
Note: you don't need to rebuild caffe, but you do need to configure caffe and Python in the PYTHONPATH env variable correctly.
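As a quick sanity check (a sketch assuming the export above), you can verify from Python which Caffe installation is actually picked up:

    # fails with ImportError if PYTHONPATH does not include the caffe python folder
    import caffe
    print(caffe.__file__)   # shows which installation was imported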

I can't get Cython to work even though I have a C++ compiler.

I am using python 2.7.10 (64bit) with anaconda 2.4.0 (64bit) and cython 0.23.4, with the latest updates for setuptools, pip, and wheel. I have also downloaded and installed a C compiler from this link http://www.microsoft.com/en-us/download/details.aspx?id=44266.
I then wrote the following hi.pyx file:
print "Hello"
And the following setup.py file
    from distutils.core import setup
    from Cython.Build import cythonize

    setup(
        name='Hello world app',
        ext_modules=cythonize("hi.pyx"),
    )
The vcvarsall.bat file is located here:
    C:\Users\c3126_000\AppData\Local\Programs\Common\Microsoft\Visual_C++_for_Python\9.0
so I have added this to the path system variable.
I ran the following command in the Anaconda prompt:
    cython -2 hi.pyx
and this produced the file hi.c.
I then ran the command:
    python setup.py build_ext --inplace
which gave the following error:
    Unable to find vcvarsall.bat
so I ran the following commands:
    SET DISTUTILS_USE_SDK=1
    SET MSSdk=1
and then ran this command again:
    python setup.py build_ext --inplace
which gave the error: command 'cl.exe' failed: No such file or directory.
Now I don't know what else to do. Can anyone help with this?
I am not using anaconda.
To compile the pyx file I open the CMD shell from the SDK and there I enter (compiling for x64):
    set DISTUTILS_USE_SDK=1
    setenv /x64 /release
    python setup.py build_ext --inplace
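As a small diagnostic sketch (my addition, not from the original answers), you can ask distutils which compiler class it will use before attempting the build:

    # prints the compiler class distutils selects on this platform,
    # e.g. MSVCCompiler on Windows or UnixCCompiler on Linux
    from distutils import ccompiler
    print(ccompiler.new_compiler().__class__.__name__)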

ImportError: No module named twisted.internet

I installed python 2.7.5, which is working fine.
I then installed scrapy (which, I think, uses twisted internally). My scrapy spider is also working fine.
I installed twisted:
    sudo apt-get install python-twisted
Then, I created a sample program using Echo Server code shown here
Here is the code
    from twisted.internet import protocol, reactor

    class Echo(protocol.Protocol):
        def dataReceived(self, data):
            self.transport.write(data)

    class EchoFactory(protocol.Factory):
        def buildProtocol(self, addr):
            return Echo()

    reactor.listenTCP(1234, EchoFactory())
    reactor.run()
I try to run this code using this command:
    $ python twistedTester.py
    Traceback (most recent call last):
      File "twistedTester.py", line 1, in <module>
        from twisted.internet import protocol, reactor
    ImportError: No module named twisted.internet
Can anyone help me debug why my twisted package is not being picked up by the Python installation?
If you use pip, just try:
    pip install twisted
The same works with w3lib and lxml.
On some *nix systems this might give you a permission error. If that happens, try:
    sudo -H pip install twisted
I figured out why this error was happening. For some reason, using apt-get to install a python package was not installing it right.
So, I had to download a tarball and install the package from it.
I downloaded the Twisted tar from here.
I did tar xjf Twisted-13.1.0.tar.bz2 - this created a directory called Twisted-13.1.0
Next, cd Twisted-13.1.0
Finally, python setup.py install
This gave me an error. Twisted required another package called zope.interface. So, I downloaded the tarball for zope.interface from here. Then I ran tar xzf zope.interface-3.6.1.tar.gz - that created a folder called zope.interface-3.6.1. So, cd into zope.interface-3.6.1 and run python setup.py install
Note: depending on your user's rights, you may want to do these commands in sudo mode. Just add the keyword sudo before every command.
Please rename the file twisted.py to something else. Whenever you import a function from a file, the interpreter searches for the file in the current location first and then in the library, so if you have any file named "twisted.py", you should rename it.
After renaming it, don't forget to remove the twisted.pyc file before running it again.
It happened to me too. Finally I figured out that there was a file named twisted.py in my present working directory. I removed twisted.py and twisted.pyc. Problem resolved.
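A quick way to spot this kind of shadowing (a sketch, not part of the original answers) is to ask Python where the imported module actually lives:

    # if this prints ./twisted.py instead of a site-packages path,
    # a local file is shadowing the real Twisted package
    import twisted
    print(twisted.__file__)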
Looks like Twisted may have removed the twisted.internet module from the current release. Pinning to the version required by scrapy (17.9.0) worked for me:
    $ pip install twisted==17.9.0
Checking if it's installed:
    $ python -c "import twisted.internet; print(twisted.internet)"
    <module 'twisted.internet' from '/Users/username/dev/env/redacted-ewmlD2h2/lib/python3.7/site-packages/twisted/internet/__init__.py'>
I figured out why apt-get install python-twisted was not enough or "installing it right", as you said, user1700184.
I use Debian Wheezy and Python 2.7.
I just had to move the folder named "twisted" from /usr/lib/python2.7/dist-packages/ to /usr/lib/python2.7/
The same has to be done with the package "zope" and any other one that you do install but is not retrieved when you try run your code.
However, why this is even necessary in my case is still a mystery since my sys.path does include both /usr/lib/python2.7/ and /usr/lib/python2.7/dist-packages, so whatever was under dist-packages should have been retrieved by the interpreter.
I think it is worth noting that if you use sudo to launch python, you are using your original default system python. This is NOT the python that your PATH points to. For example, if you are using Anaconda and you have updated your path such that which python points to path/to/anaconda/bin/python, sudo which python will still point to /usr/bin/python.
So obviously sudo python twistedTester.py will not find the twisted module. To get around this, you should explicitly pass the path to the anaconda python, like so:
    sudo path/to/anaconda/bin/python twistedTester.py
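To see which interpreter a given invocation actually uses (a small sketch along the lines of this answer), print sys.executable under both python and sudo python:

    # compare the output under 'python' and 'sudo python';
    # e.g. path/to/anaconda/bin/python vs /usr/bin/python
    import sys
    print(sys.executable)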