The question
I'm trying to write a package to install libmonome, a toolkit for using the monome hardware (OSC controllers with LEDs, mostly for music).
My efforts are based on these instructions for building libmonome from source. Note that those instructions use waf, not make.
My hello world packages that use make build successfully. But my libmonome package, which analogously tries to use waf (run via Python), is not building:
[jeff@jbb-dell:~/nix/jbb-config/custom-packages/libmonome]$ nix-build
these derivations will be built:
/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv
building '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv'...
Error: Cannot unpack waf lib into /nix/store/dk3pnwg7z9q14f4yj35y2kaqdmahnhhh-libmonome/.waf-2.0.17-6b308e91b5eb321c61bd5469cd6d43aa
Move waf in a writable directory
builder for '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv' failed with exit code 1
error: build of '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv' failed
[jeff@jbb-dell:~/nix/jbb-config/custom-packages/libmonome]$
Some maybe-helpful, maybe-redundant information
If you're cloning the repo, note that you'll need to fetch submodules to get all the libmonome code. Here's one way to do that:
git clone --recurse-submodules https://github.com/JeffreyBenjaminBrown/nixos-experiments
git checkout c20581f839f8e0fb39b2762baeea7d0a7ab10783
I already put absolute links to my code above, but in case you would rather see that code on this page, here is my default.nix file:
{...}:
with (import <nixpkgs> {});
derivation {
  name = "libmonome";
  builder = "${bash}/bin/bash";
  args = [ ./builder.sh ];
  buildInputs = [ git
                  coreutils
                  liblo
                  python2
                ];
  # I would like to use fetchGit but haven't gotten it to work.
  # src = fetchGit {
  #   url = "https://github.com/monome/libmonome.git";
  # };
  repo = ./libmonome;
  system = builtins.currentSystem;
}
and here is the builder.sh script it calls:
set -e       # Exit the build on any error
unset PATH   # because it's initially a non-existent path
for p in $buildInputs; do
  export PATH=$p/bin${PATH:+:}$PATH
done
cd $repo
# I've tried with python2 and python3
python2 ./waf configure
python2 ./waf
sudo python2 ./waf install
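For what it's worth, the "writable directory" complaint is consistent with the store path bound to $repo being read-only inside the sandbox. Below is a minimal sketch, with made-up /tmp paths, of the general idea of copying the source somewhere writable before invoking waf; it only simulates the read-only store, it is not the actual Nix fix:

```shell
# Clean up any earlier run (the fake store dir may be read-only).
chmod -R u+w /tmp/store-src /tmp/build 2>/dev/null; rm -rf /tmp/store-src /tmp/build
mkdir -p /tmp/store-src && touch /tmp/store-src/waf
chmod -R a-w /tmp/store-src          # simulate the read-only Nix store path
cp -r /tmp/store-src /tmp/build      # copy the source into a writable location
chmod -R u+w /tmp/build              # now waf could unpack its lib here
cd /tmp/build && touch .waf-unpacked && echo writable
```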
Related
When I try to install the Google API Core package, I get errors with every version of it that I try. The app runs in Python 3, and I got the following logs:
The user requested google-api-core==1.21.0
google-cloud-core 1.4.3 depends on google-api-core<2.0.0dev and >=1.19.0
google-api-core[grpc,grpcgcp] 1.29.0 depends on google-api-core 1.29.0
ERROR: Cannot install -r requirements.txt (line 52) and google-api-core[grpc,grpcgcp]==1.14.0 because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
We want to install this package on the Google Cloud Platform.
Could somebody please help me with the conflicting dependency issue?
We tried the following packages but none of them work.
#google-api-core==1.29.0
#google-api-core[grpc,grpcgcp]==1.14.0
#google-api-core==1.23.0
# google-api-core==1.19.0
# google-api-python-client==1.9.3
# google-auth==1.30.0
# google-auth-httplib2==0.0.4
# google-auth-oauthlib==0.4.1
# google-cloud==0.34.0
# google-cloud-bigquery==1.25.0
# google-cloud-bigquery-storage==2.0.1
# google-cloud-bigtable==1.2.1
# google-cloud-core #==1.4.3
# google-cloud-datastore==1.12.0
# google-cloud-language==2.0.0
# google-cloud-logging==1.15.0
# google-cloud-pubsub==2.1.0
# google-cloud-resource-manager==0.30.2
# google-cloud-scheduler==2.2.0
# google-cloud-secret-manager==2.0.0
# google-cloud-spanner==1.19.1
# google-cloud-storage==1.29.0
# google-cloud==0.34.0
# google-auth==1.22.1
#grpc-google-iam-v1==0.12.3
#grpcio==1.29.0
# google-resumable-media
All the other related package versions are as follows, and they are working:
google-api-core==1.21.0
google-api-python-client==1.6.7
google-auth==1.30.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.1
google-cloud
google-cloud-bigquery
google-cloud-bigquery-storage==2.0.1
google-cloud-bigtable==1.2.1
google-cloud-core==1.4.3
google-cloud-datastore==1.12.0
google-cloud-language==2.0.0
google-cloud-logging==1.15.0
google-cloud-pubsub==1.7.0
google-cloud-resource-manager==0.30.2
google-cloud-scheduler==2.0.0
google-cloud-secret-manager==2.0.0
google-cloud-spanner==1.19.1
google-cloud-storage==1.29.0
google-cloud-translate==3.0.1
google-cloud-videointelligence==1.16.0
google-cloud-vision==2.0.0
google-crc32c==1.0.0
google-pasta==0.2.0
googleapis-common-protos==1.52.0
Thanks for your time and support!
Put these lines in your requirements.txt file:
google-api-core==1.29.0
google-api-core[grpc,grpcgcp]==1.14.0
google-api-python-client==1.9.3
google-auth==1.30.0
google-auth-httplib2==0.0.4
google-auth-oauthlib==0.4.1
google-cloud==0.34.0
google-cloud-bigquery==1.25.0
google-cloud-bigquery-storage==2.0.1
google-cloud-bigtable==1.2.1
google-cloud-core ==1.4.3
google-cloud-datastore==1.12.0
google-cloud-language==2.0.0
google-cloud-logging==1.15.0
google-cloud-pubsub==2.1.0
google-cloud-resource-manager==0.30.2
google-cloud-scheduler==2.2.0
google-cloud-secret-manager==2.0.0
google-cloud-spanner==1.19.1
google-cloud-storage==1.29.0
google-cloud==0.34.0
grpc-google-iam-v1==0.12.3
grpcio==1.29.0
google-resumable-media
Create a virtual environment and then install the dependencies. The steps for doing this are:
python3 -m venv env
source env/bin/activate
pip list
pip install -r requirements.txt
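The venv steps above can be sanity-checked end to end; the environment path here is just an example, and the final install step is left commented out since it needs your requirements.txt:

```shell
python3 -m venv /tmp/env             # create the virtual environment
. /tmp/env/bin/activate              # activate it
python -c 'import sys; print(sys.prefix)'   # prints /tmp/env while active
# pip install -r requirements.txt    # then install the pinned dependencies
```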
Try using the requirements.txt file above. I have used the same one, and it works fine for me.
I have installed conda using Miniforge. Since my Mac has an M1 chip, I had to install conda using Miniforge3-MacOSX-arm64.sh in order to get TensorFlow working. Unfortunately, this version (miniforge/miniconda arm64) doesn't have Python 2 for some reason. As I require Python 2 for another project (which does not need TensorFlow), I have decided to install Anaconda3.
But now I don't know how to switch between the two conda installations (Anaconda3 and Miniconda/Miniforge3).
For example, when I enter conda activate in the terminal, it activates the base environment of the Miniforge version.
How do I activate the base environment of the Anaconda version, so that I can create a Python 2 environment there (Anaconda3)?
According to this post, one solution is to change the content of your .zshrc file, save your changes, and close and reopen your terminal. I tested this on a MacBook Pro M1 where Miniforge3 and Anaconda3 are currently installed, and it works. In the following, just replace --PATH-- with the path of the requested environment management system. For example, I replace --PATH-- with opt/anaconda3 for Anaconda3 and with miniforge3 for Miniforge3.
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/username/--PATH--/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/username/--PATH--/etc/profile.d/conda.sh" ]; then
        . "/Users/username/--PATH--/etc/profile.d/conda.sh"
    else
        export PATH="/Users/username/--PATH--/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
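The mechanism at work is simply PATH ordering: whichever install's bin directory comes first on PATH (which is exactly what the block above arranges) provides the conda that runs. A toy demonstration with fake shims at made-up /tmp paths:

```shell
# Two fake `conda` shims standing in for the two installs (paths are made up).
mkdir -p /tmp/demo/anaconda3/bin /tmp/demo/miniforge3/bin
printf '#!/bin/sh\necho anaconda3\n' > /tmp/demo/anaconda3/bin/conda
printf '#!/bin/sh\necho miniforge3\n' > /tmp/demo/miniforge3/bin/conda
chmod +x /tmp/demo/anaconda3/bin/conda /tmp/demo/miniforge3/bin/conda
# Put the "anaconda3" install first on PATH, as the zshrc block would.
export PATH="/tmp/demo/anaconda3/bin:$PATH"
conda    # prints: anaconda3
```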
A quick fix for switching between environments is to pick out the path you get from the output of conda env list. Here is what I get from both miniforge and miniconda:
(base) user@machine script % conda env list
# conda environments:
#
base * /Users/user/miniforge3
nmgp /Users/user/miniforge3/envs/nmgp
scphere /Users/user/miniforge3/envs/scphere
/opt/miniconda3
/opt/miniconda3/envs/gpcounts
/opt/miniconda3/envs/gpy
/opt/miniconda3/envs/test
/opt/miniconda3/envs/nmgp
/opt/miniconda3/envs/scphere
/opt/miniconda3/envs/ssdgp
To activate the miniforge environments you can use the name directly:
conda activate nmgp
To activate a miniconda environment you can use the absolute path:
conda activate /opt/miniconda3/envs/nmgp
I am trying to build a Freeplane derivation based on Freemind, see: https://github.com/razvan-panda/nixpkgs/blob/freeplane/pkgs/applications/misc/freeplane/default.nix
{ stdenv, fetchurl, jdk, jre, gradle }:

stdenv.mkDerivation rec {
  name = "freeplane-${version}";
  version = "1.6.13";

  src = fetchurl {
    url = "mirror://sourceforge/project/freeplane/freeplane%20stable/freeplane_src-${version}.tar.gz";
    sha256 = "0aabn6lqh2fdgdnfjg3j1rjq0bn4d1947l6ar2fycpj3jy9g3ccp";
  };

  buildInputs = [ jdk gradle ];

  buildPhase = "gradle dist";

  installPhase = ''
    mkdir -p $out/{bin,nix-support}
    cp -r ../bin/dist $out/nix-support
    sed -i 's/which/type -p/' $out/nix-support/dist/freeplane.sh
    cat >$out/bin/freeplane <<EOF
    #! /bin/sh
    JAVA_HOME=${jre} $out/nix-support/dist/freeplane.sh
    EOF
    chmod +x $out/{bin/freeplane,nix-support/dist/freeplane.sh}
  '';

  meta = with stdenv.lib; {
    description = "Mind-mapping software";
    homepage = https://www.freeplane.org/wiki/index.php/Home;
    license = licenses.gpl2Plus;
    platforms = platforms.linux;
  };
}
During the gradle build step it is throwing the following error:
building path(s) ‘/nix/store/9dc1x2aya5p8xj4lq9jl0xjnf08n7g6l-freeplane-1.6.13’
unpacking sources
unpacking source archive /nix/store/c0j5hgpfs0agh3xdnpx4qjy82aqkiidv-freeplane_src-1.6.13.tar.gz
source root is freeplane-1.6.13
setting SOURCE_DATE_EPOCH to timestamp 1517769626 of file freeplane-1.6.13/gitinfo.txt
patching sources
configuring
no configure script, doing nothing
building

FAILURE: Build failed with an exception.

What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

builder for ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed with exit code 1
error: build of ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed
Running gradle dist from a terminal works fine. I'm guessing that maybe one of the globally installed Nix packages fixes the issue, but it is not visible during the build.
I searched a lot but couldn't find any working solution. For example, removing the ~/.gradle folders didn't help.
Update
To reproduce the issue just git clone https://github.com/razvan-panda/nixpkgs, checkout the freeplane branch and run nix-build -A freeplane in the root of the repository.
Link to GitHub issue
Maybe you just don't have permission for the folder/file:
sudo chmod 777 yourFolderPath
You can also run sudo chmod 777 yourFolderPath/* (the whole folder's contents).
The folder will no longer be locked, and you can use it normally.
(At least, that's how I succeeded.)
Example:
sudo chmod 777 Ruby/
Now everything is OK.
To fix this error (What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64), do the following:
Check whether your Gradle cache (the ~/.gradle/native folder) exists at all.
Check whether the file in question, libnative-platform.so, exists in that directory.
Check that ~/.gradle, ~/.gradle/native, and ~/.gradle/native/libnative-platform.so have valid permissions (they should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see the native folder at all, or if it seems corrupted, run your Gradle task (e.g. gradle clean build) with the -g or --gradle-user-home option and pass it a value.
For example, if I run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, Gradle will populate all the required folders/files (those it needs even before running any task or option) in this new Gradle home folder (i.e. the /tmp/newG_H_Folder/.gradle directory).
From this folder you can copy just the native folder into your existing ~/.gradle folder (back up the existing native folder first if you want to), or copy the whole .gradle folder to your home directory.
Then rerun your Gradle task and it won't error out anymore.
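The permissions check above can be illustrated with a throwaway directory standing in for ~/.gradle (the /tmp path is just an example, and the read-only state is simulated):

```shell
mkdir -p /tmp/gradle-home/native
chmod -R 555 /tmp/gradle-home        # simulate a cache that became read-only
chmod -R 755 /tmp/gradle-home        # the fix: restore sane permissions
test -w /tmp/gradle-home/native && echo writable
```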
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.
I am using pybind11 and build the Python module with setuptools and cmake as described in pybind/cmake_example:
setup(
    name='libraryname',
    ...
    ext_modules=[CMakeExtension('libraryname')],
    cmdclass=dict(build_ext=CMakeBuild),
)
Locally, using python setup.py sdist build everything is fine and I can use and/or install the package from the generated files.
I now want to upload the package to PyPI.
From a different python package I know how to generate a general linux library (see also here) by manipulating the platform tag of a wheel:
class bdist_wheel(bdist_wheel_):
    def finalize_options(self):
        from sys import platform as _platform
        platform_name = get_platform()
        if _platform == "linux" or _platform == "linux2":
            # Linux
            platform_name = 'manylinux1_x86_64'
        bdist_wheel_.finalize_options(self)
        self.universal = True
        self.plat_name_supplied = True
        self.plat_name = platform_name

setup(
    ...
    cmdclass={'bdist_wheel': bdist_wheel},
)
The Question:
How to generate the appropriate platform tag when no bdist_wheel is built?
Should this be somehow built as wheel instead of as an extension (possibly related to this issue on GH)?
Also, how does pybind11 decide the suffix of the generated libraries (on my linux it is not just .so but .cpython-35m-x86_64-linux-gnu.so)?
Follow-up:
The main problem is that I cannot upload the current Ubuntu-built package to PyPI: ValueError: Unknown distribution format: 'libraryname-0.8.0.cpython-35m-x86_64-linux-gnu.so'
If the platform tag cannot or should not be changed: what is best practice for uploading a pybind11 module to PyPI across platforms?
My bad!
It turns out the confusion was due to a build error I had when I initially tried running python setup.py sdist bdist_wheel.
Manually building with python setup.py build was not the right approach for publishing the package.
Note: the name of the .so file needed to be set without the -0.8.0 version identifier in order for Python to be able to do the import from the wheel.
To Summarize:
Building and publishing binary wheels works exactly the same with pybind11 as with, e.g., plain CPython extensions, and it should work just fine to follow pybind/cmake_example.
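As for how the suffix of the generated library is decided: it comes from the interpreter's own build configuration, not from pybind11, and you can query it from the shell:

```shell
# CPython reports its extension-module suffix via sysconfig; the output
# looks like .cpython-35m-x86_64-linux-gnu.so and varies per Python build.
python3 -c "import sysconfig; print(sysconfig.get_config_var('EXT_SUFFIX'))"
```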
I am using distutils to create an rpm from my project. I have this directory tree:
project/
    my_module/
        data/file.dat
        my_module1.py
        my_module2.py
    src/
        header1.h
        header2.h
        ext_module1.cpp
        ext_module2.cpp
        swig_module.i
    setup.py
    MANIFEST.in
    MANIFEST
my setup.py:
from distutils.core import setup, Extension

module1 = Extension('my_module._module',
                    sources=['src/ext_module1.cpp',
                             'src/ext_module2.cpp',
                             'src/swig_module.i'],
                    swig_opts=['-c++', '-py3'],
                    include_dirs=[...],
                    runtime_library_dirs=[...],
                    libraries=[...],
                    extra_compile_args=['-Wno-write-strings'])

setup( name = 'my_module',
       version = '0.6',
       author = 'microo8',
       author_email = 'magyarvladimir@gmail.com',
       description = '',
       license = 'GPLv3',
       url = '',
       platforms = ['x86_64'],
       ext_modules = [module1],
       packages = ['my_module'],
       package_dir = {'my_module': 'my_module'},
       package_data = {'my_module': ['data/*.dat']} )
my MANIFEST.in file:
include src/header1.h
include src/header2.h
The MANIFEST file is automatically generated by python3 setup.py sdist, and when I run python3 setup.py bdist_rpm it compiles and creates correct rpm packages. But the problem is that when SWIG runs on a C++ source, it creates a module.py file that wraps the binary _module.cpython32-mu.so file (generated along with the module_wrap.cpp file), and that module.py isn't copied to the my_module directory.
What must I write in the setup.py file to automatically copy the SWIG-generated Python modules?
I also have another question: when I install the rpm package, I want an executable to be created in /usr/bin or similar, to run the application (for example, if my_module/my_module1.py is the start script of the application, then I can run $ my_module1 in bash).
The problem is that build_py (which copies python sources to the build directory) comes before build_ext, which runs SWIG.
You can easily subclass the build command and swap around the order, so build_ext produces module1.py before build_py tries to copy it.
from distutils.core import setup, Extension
from distutils.command.build import build

class CustomBuild(build):
    sub_commands = [
        ('build_ext', build.has_ext_modules),
        ('build_py', build.has_pure_modules),
        ('build_clib', build.has_c_libraries),
        ('build_scripts', build.has_scripts),
    ]

module1 = Extension('_module1', etc...)

setup(
    cmdclass={'build': CustomBuild},
    py_modules=['module1'],
    ext_modules=[module1]
)
However, there is one problem with this: If you are using setuptools, rather than just plain distutils, running python setup.py install won't run the custom build command. This is because the setuptools install command doesn't actually run the build command first, it runs egg_info, then install_lib, which runs build_py then build_ext directly.
So possibly a better solution is to subclass both the build and install command, and ensure build_ext gets run at the start of both.
from distutils.command.build import build
from setuptools.command.install import install

class CustomBuild(build):
    def run(self):
        self.run_command('build_ext')
        build.run(self)

class CustomInstall(install):
    def run(self):
        self.run_command('build_ext')
        self.do_egg_install()

setup(
    cmdclass={'build': CustomBuild, 'install': CustomInstall},
    py_modules=['module1'],
    ext_modules=[module1]
)
It doesn't look like you need to worry about build_ext getting run twice.
It's not a complete answer, because I don't have the complete solution.
The reason why the module is not copied to the install directory is because it wasn't present when the setup process tried to copy it. The sequence of events is:
running install
running build
running build_py
file my_module.py (for module my_module) not found
file vcanmapper.py (for module vcanmapper) not found
running build_ext
If you run python setup.py install a second time, it will do what you wanted in the first place. The official SWIG documentation for Python proposes that you first run swig to generate the wrap file, and then run setup.py install to do the actual installation.
It looks like you have to add a py_modules option, e.g.:
setup(...,
      ext_modules=[Extension('_foo', ['foo.i'],
                             swig_opts=['-modern', '-I../include'])],
      py_modules=['foo'],
)
To use rpm to install system scripts on Linux, you'll have to modify your spec file. The %files section tells rpm where to put the files, which you can then move or link to in %post; this can be configured from setup.py using:
options = {'bdist_rpm': {'post_install': 'post_install', 'post_uninstall': 'post_uninstall'}},
Making a Python script runnable from bash is done with the usual first line #!/usr/bin/python and the executable bit on the file (chmod +x filename).
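That last point can be sketched with a throwaway script (the path and contents are hypothetical, and env is used here to locate the interpreter on PATH rather than hard-coding /usr/bin/python):

```shell
# Write a tiny start script, mark it executable, and run it directly.
cat > /tmp/my_module1 <<'EOF'
#!/usr/bin/env python3
print("started")
EOF
chmod +x /tmp/my_module1     # the executable bit
/tmp/my_module1              # prints: started
```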